Test Report: KVM_Linux_crio 19349

0359be70ee85a493d9f37ccc73e8278336c81275:2024-07-31:35584

Failed tests (30/320)

Order  Failed test  Duration (s)
43 TestAddons/parallel/Ingress 151.82
45 TestAddons/parallel/MetricsServer 349.22
54 TestAddons/StoppedEnableDisable 154.38
173 TestMultiControlPlane/serial/StopSecondaryNode 141.75
175 TestMultiControlPlane/serial/RestartSecondaryNode 60.39
177 TestMultiControlPlane/serial/RestartClusterKeepsNodes 295.1
180 TestMultiControlPlane/serial/StopCluster 172.82
240 TestMultiNode/serial/RestartKeepsNodes 335.37
242 TestMultiNode/serial/StopMultiNode 141.34
249 TestPreload 217.74
257 TestKubernetesUpgrade 408.99
291 TestPause/serial/SecondStartNoReconfiguration 36.76
328 TestStartStop/group/old-k8s-version/serial/FirstStart 272.91
348 TestStartStop/group/no-preload/serial/Stop 138.92
352 TestStartStop/group/default-k8s-diff-port/serial/Stop 139.19
354 TestStartStop/group/embed-certs/serial/Stop 139.1
355 TestStartStop/group/old-k8s-version/serial/DeployApp 0.47
356 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 100.31
357 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 12.38
359 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 12.38
360 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 12.38
365 TestStartStop/group/old-k8s-version/serial/SecondStart 744.17
366 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 544.13
367 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 544.09
368 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 544.09
369 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 543.38
370 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 430.42
371 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 378.22
372 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 335.19
373 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 139.62
TestAddons/parallel/Ingress (151.82s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress
=== CONT  TestAddons/parallel/Ingress
addons_test.go:209: (dbg) Run:  kubectl --context addons-190022 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:234: (dbg) Run:  kubectl --context addons-190022 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:247: (dbg) Run:  kubectl --context addons-190022 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [3744fa0a-cce8-4eb8-9ae6-27e8475392e6] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [3744fa0a-cce8-4eb8-9ae6-27e8475392e6] Running
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 10.004268882s
addons_test.go:264: (dbg) Run:  out/minikube-linux-amd64 -p addons-190022 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:264: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-190022 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": exit status 1 (2m10.084302717s)

** stderr **
	ssh: Process exited with status 28

** /stderr **
addons_test.go:280: failed to get expected response from http://127.0.0.1/ within minikube: exit status 1
addons_test.go:288: (dbg) Run:  kubectl --context addons-190022 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:293: (dbg) Run:  out/minikube-linux-amd64 -p addons-190022 ip
addons_test.go:299: (dbg) Run:  nslookup hello-john.test 192.168.39.140
addons_test.go:308: (dbg) Run:  out/minikube-linux-amd64 -p addons-190022 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:308: (dbg) Done: out/minikube-linux-amd64 -p addons-190022 addons disable ingress-dns --alsologtostderr -v=1: (1.219038672s)
addons_test.go:313: (dbg) Run:  out/minikube-linux-amd64 -p addons-190022 addons disable ingress --alsologtostderr -v=1
addons_test.go:313: (dbg) Done: out/minikube-linux-amd64 -p addons-190022 addons disable ingress --alsologtostderr -v=1: (7.662848364s)
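The decisive step above is the ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'" check: the in-VM curl exited with status 28, curl's operation-timeout code, so the ingress controller never answered on port 80 within the allotted time. Below is a minimal reproduction sketch (not part of the test suite) that issues an equivalent request from the host in Go; the node IP 192.168.39.140 and the virtual host name are taken from this log, and the 10-second timeout is an arbitrary assumption.

package main

import (
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	// Values taken from this report (minikube ip / the nginx Ingress rule);
	// adjust them for your own cluster.
	const nodeIP = "192.168.39.140"
	const vhost = "nginx.example.com"

	client := &http.Client{Timeout: 10 * time.Second}

	req, err := http.NewRequest(http.MethodGet, "http://"+nodeIP+"/", nil)
	if err != nil {
		panic(err)
	}
	// The Ingress routes on the virtual host name, so the Host header must match the rule.
	req.Host = vhost

	resp, err := client.Do(req)
	if err != nil {
		// The test saw the in-VM equivalent of this: curl exit status 28 (timeout).
		fmt.Println("request failed:", err)
		return
	}
	defer resp.Body.Close()

	body, _ := io.ReadAll(resp.Body)
	fmt.Println(resp.Status)
	fmt.Println(string(body))
}
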
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-190022 -n addons-190022
helpers_test.go:244: <<< TestAddons/parallel/Ingress FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/Ingress]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p addons-190022 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p addons-190022 logs -n 25: (1.180241983s)
helpers_test.go:252: TestAddons/parallel/Ingress logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                                            Args                                             |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| delete  | -p download-only-893685                                                                     | download-only-893685 | jenkins | v1.33.1 | 31 Jul 24 16:41 UTC | 31 Jul 24 16:41 UTC |
	| delete  | -p download-only-133798                                                                     | download-only-133798 | jenkins | v1.33.1 | 31 Jul 24 16:41 UTC | 31 Jul 24 16:41 UTC |
	| start   | --download-only -p                                                                          | binary-mirror-720978 | jenkins | v1.33.1 | 31 Jul 24 16:41 UTC |                     |
	|         | binary-mirror-720978                                                                        |                      |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                      |         |         |                     |                     |
	|         | --binary-mirror                                                                             |                      |         |         |                     |                     |
	|         | http://127.0.0.1:43559                                                                      |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	| delete  | -p binary-mirror-720978                                                                     | binary-mirror-720978 | jenkins | v1.33.1 | 31 Jul 24 16:41 UTC | 31 Jul 24 16:41 UTC |
	| addons  | disable dashboard -p                                                                        | addons-190022        | jenkins | v1.33.1 | 31 Jul 24 16:41 UTC |                     |
	|         | addons-190022                                                                               |                      |         |         |                     |                     |
	| addons  | enable dashboard -p                                                                         | addons-190022        | jenkins | v1.33.1 | 31 Jul 24 16:41 UTC |                     |
	|         | addons-190022                                                                               |                      |         |         |                     |                     |
	| start   | -p addons-190022 --wait=true                                                                | addons-190022        | jenkins | v1.33.1 | 31 Jul 24 16:41 UTC | 31 Jul 24 16:44 UTC |
	|         | --memory=4000 --alsologtostderr                                                             |                      |         |         |                     |                     |
	|         | --addons=registry                                                                           |                      |         |         |                     |                     |
	|         | --addons=metrics-server                                                                     |                      |         |         |                     |                     |
	|         | --addons=volumesnapshots                                                                    |                      |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver                                                                |                      |         |         |                     |                     |
	|         | --addons=gcp-auth                                                                           |                      |         |         |                     |                     |
	|         | --addons=cloud-spanner                                                                      |                      |         |         |                     |                     |
	|         | --addons=inspektor-gadget                                                                   |                      |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher                                                        |                      |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin                                                               |                      |         |         |                     |                     |
	|         | --addons=yakd --addons=volcano                                                              |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	|         | --addons=ingress                                                                            |                      |         |         |                     |                     |
	|         | --addons=ingress-dns                                                                        |                      |         |         |                     |                     |
	|         | --addons=helm-tiller                                                                        |                      |         |         |                     |                     |
	| addons  | addons-190022 addons disable                                                                | addons-190022        | jenkins | v1.33.1 | 31 Jul 24 16:44 UTC | 31 Jul 24 16:44 UTC |
	|         | gcp-auth --alsologtostderr                                                                  |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | disable cloud-spanner -p                                                                    | addons-190022        | jenkins | v1.33.1 | 31 Jul 24 16:44 UTC | 31 Jul 24 16:44 UTC |
	|         | addons-190022                                                                               |                      |         |         |                     |                     |
	| addons  | disable inspektor-gadget -p                                                                 | addons-190022        | jenkins | v1.33.1 | 31 Jul 24 16:44 UTC | 31 Jul 24 16:44 UTC |
	|         | addons-190022                                                                               |                      |         |         |                     |                     |
	| addons  | addons-190022 addons disable                                                                | addons-190022        | jenkins | v1.33.1 | 31 Jul 24 16:44 UTC | 31 Jul 24 16:44 UTC |
	|         | helm-tiller --alsologtostderr                                                               |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | addons-190022 addons disable                                                                | addons-190022        | jenkins | v1.33.1 | 31 Jul 24 16:44 UTC | 31 Jul 24 16:44 UTC |
	|         | yakd --alsologtostderr -v=1                                                                 |                      |         |         |                     |                     |
	| ip      | addons-190022 ip                                                                            | addons-190022        | jenkins | v1.33.1 | 31 Jul 24 16:44 UTC | 31 Jul 24 16:44 UTC |
	| addons  | addons-190022 addons disable                                                                | addons-190022        | jenkins | v1.33.1 | 31 Jul 24 16:44 UTC | 31 Jul 24 16:44 UTC |
	|         | registry --alsologtostderr                                                                  |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | disable nvidia-device-plugin                                                                | addons-190022        | jenkins | v1.33.1 | 31 Jul 24 16:44 UTC | 31 Jul 24 16:44 UTC |
	|         | -p addons-190022                                                                            |                      |         |         |                     |                     |
	| addons  | enable headlamp                                                                             | addons-190022        | jenkins | v1.33.1 | 31 Jul 24 16:44 UTC | 31 Jul 24 16:44 UTC |
	|         | -p addons-190022                                                                            |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| ssh     | addons-190022 ssh cat                                                                       | addons-190022        | jenkins | v1.33.1 | 31 Jul 24 16:44 UTC | 31 Jul 24 16:44 UTC |
	|         | /opt/local-path-provisioner/pvc-f9db1751-d5be-4a5c-a915-8af812dc20b1_default_test-pvc/file1 |                      |         |         |                     |                     |
	| addons  | addons-190022 addons disable                                                                | addons-190022        | jenkins | v1.33.1 | 31 Jul 24 16:44 UTC | 31 Jul 24 16:44 UTC |
	|         | storage-provisioner-rancher                                                                 |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-190022 addons disable                                                                | addons-190022        | jenkins | v1.33.1 | 31 Jul 24 16:44 UTC | 31 Jul 24 16:45 UTC |
	|         | headlamp --alsologtostderr                                                                  |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| ssh     | addons-190022 ssh curl -s                                                                   | addons-190022        | jenkins | v1.33.1 | 31 Jul 24 16:45 UTC |                     |
	|         | http://127.0.0.1/ -H 'Host:                                                                 |                      |         |         |                     |                     |
	|         | nginx.example.com'                                                                          |                      |         |         |                     |                     |
	| addons  | addons-190022 addons                                                                        | addons-190022        | jenkins | v1.33.1 | 31 Jul 24 16:45 UTC | 31 Jul 24 16:45 UTC |
	|         | disable csi-hostpath-driver                                                                 |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-190022 addons                                                                        | addons-190022        | jenkins | v1.33.1 | 31 Jul 24 16:45 UTC | 31 Jul 24 16:45 UTC |
	|         | disable volumesnapshots                                                                     |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| ip      | addons-190022 ip                                                                            | addons-190022        | jenkins | v1.33.1 | 31 Jul 24 16:47 UTC | 31 Jul 24 16:47 UTC |
	| addons  | addons-190022 addons disable                                                                | addons-190022        | jenkins | v1.33.1 | 31 Jul 24 16:47 UTC | 31 Jul 24 16:47 UTC |
	|         | ingress-dns --alsologtostderr                                                               |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | addons-190022 addons disable                                                                | addons-190022        | jenkins | v1.33.1 | 31 Jul 24 16:47 UTC | 31 Jul 24 16:47 UTC |
	|         | ingress --alsologtostderr -v=1                                                              |                      |         |         |                     |                     |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/31 16:41:38
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0731 16:41:38.474649   16404 out.go:291] Setting OutFile to fd 1 ...
	I0731 16:41:38.474750   16404 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 16:41:38.474755   16404 out.go:304] Setting ErrFile to fd 2...
	I0731 16:41:38.474759   16404 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 16:41:38.474953   16404 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19349-8084/.minikube/bin
	I0731 16:41:38.475549   16404 out.go:298] Setting JSON to false
	I0731 16:41:38.476305   16404 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":1442,"bootTime":1722442656,"procs":172,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1065-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0731 16:41:38.476359   16404 start.go:139] virtualization: kvm guest
	I0731 16:41:38.478496   16404 out.go:177] * [addons-190022] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0731 16:41:38.480002   16404 notify.go:220] Checking for updates...
	I0731 16:41:38.480029   16404 out.go:177]   - MINIKUBE_LOCATION=19349
	I0731 16:41:38.481475   16404 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0731 16:41:38.482914   16404 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19349-8084/kubeconfig
	I0731 16:41:38.484446   16404 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19349-8084/.minikube
	I0731 16:41:38.485804   16404 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0731 16:41:38.487022   16404 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0731 16:41:38.488479   16404 driver.go:392] Setting default libvirt URI to qemu:///system
	I0731 16:41:38.519713   16404 out.go:177] * Using the kvm2 driver based on user configuration
	I0731 16:41:38.521035   16404 start.go:297] selected driver: kvm2
	I0731 16:41:38.521051   16404 start.go:901] validating driver "kvm2" against <nil>
	I0731 16:41:38.521061   16404 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0731 16:41:38.521715   16404 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0731 16:41:38.521777   16404 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19349-8084/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0731 16:41:38.535923   16404 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0731 16:41:38.535968   16404 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0731 16:41:38.536216   16404 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0731 16:41:38.536245   16404 cni.go:84] Creating CNI manager for ""
	I0731 16:41:38.536252   16404 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0731 16:41:38.536259   16404 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0731 16:41:38.536314   16404 start.go:340] cluster config:
	{Name:addons-190022 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:addons-190022 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0731 16:41:38.536466   16404 iso.go:125] acquiring lock: {Name:mk4494bb5a63902bf12a909c3109db85229bc516 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0731 16:41:38.538332   16404 out.go:177] * Starting "addons-190022" primary control-plane node in "addons-190022" cluster
	I0731 16:41:38.539504   16404 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0731 16:41:38.539535   16404 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19349-8084/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4
	I0731 16:41:38.539544   16404 cache.go:56] Caching tarball of preloaded images
	I0731 16:41:38.539621   16404 preload.go:172] Found /home/jenkins/minikube-integration/19349-8084/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0731 16:41:38.539634   16404 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on crio
	I0731 16:41:38.539946   16404 profile.go:143] Saving config to /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/addons-190022/config.json ...
	I0731 16:41:38.539970   16404 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/addons-190022/config.json: {Name:mkd8f9cca2cc4c776d5bec228678fc5030cb0e7d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 16:41:38.540111   16404 start.go:360] acquireMachinesLock for addons-190022: {Name:mkd78eacfab7d6c3058c6674434b3d889ec957e0 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0731 16:41:38.540166   16404 start.go:364] duration metric: took 40.488µs to acquireMachinesLock for "addons-190022"
	I0731 16:41:38.540186   16404 start.go:93] Provisioning new machine with config: &{Name:addons-190022 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:addons-190022 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0731 16:41:38.540246   16404 start.go:125] createHost starting for "" (driver="kvm2")
	I0731 16:41:38.541685   16404 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	I0731 16:41:38.541853   16404 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 16:41:38.541903   16404 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 16:41:38.555736   16404 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36163
	I0731 16:41:38.556207   16404 main.go:141] libmachine: () Calling .GetVersion
	I0731 16:41:38.556759   16404 main.go:141] libmachine: Using API Version  1
	I0731 16:41:38.556787   16404 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 16:41:38.557120   16404 main.go:141] libmachine: () Calling .GetMachineName
	I0731 16:41:38.557264   16404 main.go:141] libmachine: (addons-190022) Calling .GetMachineName
	I0731 16:41:38.557384   16404 main.go:141] libmachine: (addons-190022) Calling .DriverName
	I0731 16:41:38.557537   16404 start.go:159] libmachine.API.Create for "addons-190022" (driver="kvm2")
	I0731 16:41:38.557562   16404 client.go:168] LocalClient.Create starting
	I0731 16:41:38.557607   16404 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/19349-8084/.minikube/certs/ca.pem
	I0731 16:41:38.665969   16404 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/19349-8084/.minikube/certs/cert.pem
	I0731 16:41:38.721645   16404 main.go:141] libmachine: Running pre-create checks...
	I0731 16:41:38.721667   16404 main.go:141] libmachine: (addons-190022) Calling .PreCreateCheck
	I0731 16:41:38.722161   16404 main.go:141] libmachine: (addons-190022) Calling .GetConfigRaw
	I0731 16:41:38.722588   16404 main.go:141] libmachine: Creating machine...
	I0731 16:41:38.722601   16404 main.go:141] libmachine: (addons-190022) Calling .Create
	I0731 16:41:38.722746   16404 main.go:141] libmachine: (addons-190022) Creating KVM machine...
	I0731 16:41:38.724081   16404 main.go:141] libmachine: (addons-190022) DBG | found existing default KVM network
	I0731 16:41:38.724921   16404 main.go:141] libmachine: (addons-190022) DBG | I0731 16:41:38.724770   16426 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00012f990}
	I0731 16:41:38.724956   16404 main.go:141] libmachine: (addons-190022) DBG | created network xml: 
	I0731 16:41:38.724977   16404 main.go:141] libmachine: (addons-190022) DBG | <network>
	I0731 16:41:38.724985   16404 main.go:141] libmachine: (addons-190022) DBG |   <name>mk-addons-190022</name>
	I0731 16:41:38.724990   16404 main.go:141] libmachine: (addons-190022) DBG |   <dns enable='no'/>
	I0731 16:41:38.724998   16404 main.go:141] libmachine: (addons-190022) DBG |   
	I0731 16:41:38.725007   16404 main.go:141] libmachine: (addons-190022) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0731 16:41:38.725015   16404 main.go:141] libmachine: (addons-190022) DBG |     <dhcp>
	I0731 16:41:38.725022   16404 main.go:141] libmachine: (addons-190022) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0731 16:41:38.725030   16404 main.go:141] libmachine: (addons-190022) DBG |     </dhcp>
	I0731 16:41:38.725035   16404 main.go:141] libmachine: (addons-190022) DBG |   </ip>
	I0731 16:41:38.725042   16404 main.go:141] libmachine: (addons-190022) DBG |   
	I0731 16:41:38.725047   16404 main.go:141] libmachine: (addons-190022) DBG | </network>
	I0731 16:41:38.725054   16404 main.go:141] libmachine: (addons-190022) DBG | 
	I0731 16:41:38.730472   16404 main.go:141] libmachine: (addons-190022) DBG | trying to create private KVM network mk-addons-190022 192.168.39.0/24...
	I0731 16:41:38.792592   16404 main.go:141] libmachine: (addons-190022) DBG | private KVM network mk-addons-190022 192.168.39.0/24 created
	I0731 16:41:38.792623   16404 main.go:141] libmachine: (addons-190022) DBG | I0731 16:41:38.792515   16426 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19349-8084/.minikube
	I0731 16:41:38.792642   16404 main.go:141] libmachine: (addons-190022) Setting up store path in /home/jenkins/minikube-integration/19349-8084/.minikube/machines/addons-190022 ...
	I0731 16:41:38.792687   16404 main.go:141] libmachine: (addons-190022) Building disk image from file:///home/jenkins/minikube-integration/19349-8084/.minikube/cache/iso/amd64/minikube-v1.33.1-1722248113-19339-amd64.iso
	I0731 16:41:38.792721   16404 main.go:141] libmachine: (addons-190022) Downloading /home/jenkins/minikube-integration/19349-8084/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19349-8084/.minikube/cache/iso/amd64/minikube-v1.33.1-1722248113-19339-amd64.iso...
	I0731 16:41:39.062859   16404 main.go:141] libmachine: (addons-190022) DBG | I0731 16:41:39.062711   16426 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19349-8084/.minikube/machines/addons-190022/id_rsa...
	I0731 16:41:39.187616   16404 main.go:141] libmachine: (addons-190022) DBG | I0731 16:41:39.187440   16426 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19349-8084/.minikube/machines/addons-190022/addons-190022.rawdisk...
	I0731 16:41:39.187648   16404 main.go:141] libmachine: (addons-190022) DBG | Writing magic tar header
	I0731 16:41:39.187663   16404 main.go:141] libmachine: (addons-190022) DBG | Writing SSH key tar header
	I0731 16:41:39.188170   16404 main.go:141] libmachine: (addons-190022) DBG | I0731 16:41:39.188061   16426 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19349-8084/.minikube/machines/addons-190022 ...
	I0731 16:41:39.188199   16404 main.go:141] libmachine: (addons-190022) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19349-8084/.minikube/machines/addons-190022
	I0731 16:41:39.188213   16404 main.go:141] libmachine: (addons-190022) Setting executable bit set on /home/jenkins/minikube-integration/19349-8084/.minikube/machines/addons-190022 (perms=drwx------)
	I0731 16:41:39.188225   16404 main.go:141] libmachine: (addons-190022) Setting executable bit set on /home/jenkins/minikube-integration/19349-8084/.minikube/machines (perms=drwxr-xr-x)
	I0731 16:41:39.188232   16404 main.go:141] libmachine: (addons-190022) Setting executable bit set on /home/jenkins/minikube-integration/19349-8084/.minikube (perms=drwxr-xr-x)
	I0731 16:41:39.188260   16404 main.go:141] libmachine: (addons-190022) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19349-8084/.minikube/machines
	I0731 16:41:39.188282   16404 main.go:141] libmachine: (addons-190022) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19349-8084/.minikube
	I0731 16:41:39.188294   16404 main.go:141] libmachine: (addons-190022) Setting executable bit set on /home/jenkins/minikube-integration/19349-8084 (perms=drwxrwxr-x)
	I0731 16:41:39.188308   16404 main.go:141] libmachine: (addons-190022) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19349-8084
	I0731 16:41:39.188326   16404 main.go:141] libmachine: (addons-190022) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0731 16:41:39.188338   16404 main.go:141] libmachine: (addons-190022) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0731 16:41:39.188345   16404 main.go:141] libmachine: (addons-190022) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0731 16:41:39.188358   16404 main.go:141] libmachine: (addons-190022) Creating domain...
	I0731 16:41:39.188366   16404 main.go:141] libmachine: (addons-190022) DBG | Checking permissions on dir: /home/jenkins
	I0731 16:41:39.188381   16404 main.go:141] libmachine: (addons-190022) DBG | Checking permissions on dir: /home
	I0731 16:41:39.188394   16404 main.go:141] libmachine: (addons-190022) DBG | Skipping /home - not owner
	I0731 16:41:39.189573   16404 main.go:141] libmachine: (addons-190022) define libvirt domain using xml: 
	I0731 16:41:39.189607   16404 main.go:141] libmachine: (addons-190022) <domain type='kvm'>
	I0731 16:41:39.189619   16404 main.go:141] libmachine: (addons-190022)   <name>addons-190022</name>
	I0731 16:41:39.189624   16404 main.go:141] libmachine: (addons-190022)   <memory unit='MiB'>4000</memory>
	I0731 16:41:39.189630   16404 main.go:141] libmachine: (addons-190022)   <vcpu>2</vcpu>
	I0731 16:41:39.189642   16404 main.go:141] libmachine: (addons-190022)   <features>
	I0731 16:41:39.189647   16404 main.go:141] libmachine: (addons-190022)     <acpi/>
	I0731 16:41:39.189651   16404 main.go:141] libmachine: (addons-190022)     <apic/>
	I0731 16:41:39.189656   16404 main.go:141] libmachine: (addons-190022)     <pae/>
	I0731 16:41:39.189660   16404 main.go:141] libmachine: (addons-190022)     
	I0731 16:41:39.189665   16404 main.go:141] libmachine: (addons-190022)   </features>
	I0731 16:41:39.189670   16404 main.go:141] libmachine: (addons-190022)   <cpu mode='host-passthrough'>
	I0731 16:41:39.189675   16404 main.go:141] libmachine: (addons-190022)   
	I0731 16:41:39.189684   16404 main.go:141] libmachine: (addons-190022)   </cpu>
	I0731 16:41:39.189689   16404 main.go:141] libmachine: (addons-190022)   <os>
	I0731 16:41:39.189698   16404 main.go:141] libmachine: (addons-190022)     <type>hvm</type>
	I0731 16:41:39.189729   16404 main.go:141] libmachine: (addons-190022)     <boot dev='cdrom'/>
	I0731 16:41:39.189750   16404 main.go:141] libmachine: (addons-190022)     <boot dev='hd'/>
	I0731 16:41:39.189760   16404 main.go:141] libmachine: (addons-190022)     <bootmenu enable='no'/>
	I0731 16:41:39.189768   16404 main.go:141] libmachine: (addons-190022)   </os>
	I0731 16:41:39.189777   16404 main.go:141] libmachine: (addons-190022)   <devices>
	I0731 16:41:39.189787   16404 main.go:141] libmachine: (addons-190022)     <disk type='file' device='cdrom'>
	I0731 16:41:39.189804   16404 main.go:141] libmachine: (addons-190022)       <source file='/home/jenkins/minikube-integration/19349-8084/.minikube/machines/addons-190022/boot2docker.iso'/>
	I0731 16:41:39.189829   16404 main.go:141] libmachine: (addons-190022)       <target dev='hdc' bus='scsi'/>
	I0731 16:41:39.189842   16404 main.go:141] libmachine: (addons-190022)       <readonly/>
	I0731 16:41:39.189853   16404 main.go:141] libmachine: (addons-190022)     </disk>
	I0731 16:41:39.189869   16404 main.go:141] libmachine: (addons-190022)     <disk type='file' device='disk'>
	I0731 16:41:39.189881   16404 main.go:141] libmachine: (addons-190022)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0731 16:41:39.189895   16404 main.go:141] libmachine: (addons-190022)       <source file='/home/jenkins/minikube-integration/19349-8084/.minikube/machines/addons-190022/addons-190022.rawdisk'/>
	I0731 16:41:39.189906   16404 main.go:141] libmachine: (addons-190022)       <target dev='hda' bus='virtio'/>
	I0731 16:41:39.189937   16404 main.go:141] libmachine: (addons-190022)     </disk>
	I0731 16:41:39.189961   16404 main.go:141] libmachine: (addons-190022)     <interface type='network'>
	I0731 16:41:39.189972   16404 main.go:141] libmachine: (addons-190022)       <source network='mk-addons-190022'/>
	I0731 16:41:39.189988   16404 main.go:141] libmachine: (addons-190022)       <model type='virtio'/>
	I0731 16:41:39.190000   16404 main.go:141] libmachine: (addons-190022)     </interface>
	I0731 16:41:39.190010   16404 main.go:141] libmachine: (addons-190022)     <interface type='network'>
	I0731 16:41:39.190018   16404 main.go:141] libmachine: (addons-190022)       <source network='default'/>
	I0731 16:41:39.190025   16404 main.go:141] libmachine: (addons-190022)       <model type='virtio'/>
	I0731 16:41:39.190031   16404 main.go:141] libmachine: (addons-190022)     </interface>
	I0731 16:41:39.190038   16404 main.go:141] libmachine: (addons-190022)     <serial type='pty'>
	I0731 16:41:39.190044   16404 main.go:141] libmachine: (addons-190022)       <target port='0'/>
	I0731 16:41:39.190058   16404 main.go:141] libmachine: (addons-190022)     </serial>
	I0731 16:41:39.190069   16404 main.go:141] libmachine: (addons-190022)     <console type='pty'>
	I0731 16:41:39.190086   16404 main.go:141] libmachine: (addons-190022)       <target type='serial' port='0'/>
	I0731 16:41:39.190095   16404 main.go:141] libmachine: (addons-190022)     </console>
	I0731 16:41:39.190101   16404 main.go:141] libmachine: (addons-190022)     <rng model='virtio'>
	I0731 16:41:39.190110   16404 main.go:141] libmachine: (addons-190022)       <backend model='random'>/dev/random</backend>
	I0731 16:41:39.190115   16404 main.go:141] libmachine: (addons-190022)     </rng>
	I0731 16:41:39.190124   16404 main.go:141] libmachine: (addons-190022)     
	I0731 16:41:39.190132   16404 main.go:141] libmachine: (addons-190022)     
	I0731 16:41:39.190145   16404 main.go:141] libmachine: (addons-190022)   </devices>
	I0731 16:41:39.190157   16404 main.go:141] libmachine: (addons-190022) </domain>
	I0731 16:41:39.190166   16404 main.go:141] libmachine: (addons-190022) 
	I0731 16:41:39.196001   16404 main.go:141] libmachine: (addons-190022) DBG | domain addons-190022 has defined MAC address 52:54:00:02:9a:1f in network default
	I0731 16:41:39.196507   16404 main.go:141] libmachine: (addons-190022) Ensuring networks are active...
	I0731 16:41:39.196524   16404 main.go:141] libmachine: (addons-190022) DBG | domain addons-190022 has defined MAC address 52:54:00:b8:3c:34 in network mk-addons-190022
	I0731 16:41:39.197278   16404 main.go:141] libmachine: (addons-190022) Ensuring network default is active
	I0731 16:41:39.197493   16404 main.go:141] libmachine: (addons-190022) Ensuring network mk-addons-190022 is active
	I0731 16:41:39.197925   16404 main.go:141] libmachine: (addons-190022) Getting domain xml...
	I0731 16:41:39.198553   16404 main.go:141] libmachine: (addons-190022) Creating domain...
	I0731 16:41:40.602781   16404 main.go:141] libmachine: (addons-190022) Waiting to get IP...
	I0731 16:41:40.603516   16404 main.go:141] libmachine: (addons-190022) DBG | domain addons-190022 has defined MAC address 52:54:00:b8:3c:34 in network mk-addons-190022
	I0731 16:41:40.603931   16404 main.go:141] libmachine: (addons-190022) DBG | unable to find current IP address of domain addons-190022 in network mk-addons-190022
	I0731 16:41:40.603958   16404 main.go:141] libmachine: (addons-190022) DBG | I0731 16:41:40.603923   16426 retry.go:31] will retry after 299.522845ms: waiting for machine to come up
	I0731 16:41:40.905435   16404 main.go:141] libmachine: (addons-190022) DBG | domain addons-190022 has defined MAC address 52:54:00:b8:3c:34 in network mk-addons-190022
	I0731 16:41:40.905923   16404 main.go:141] libmachine: (addons-190022) DBG | unable to find current IP address of domain addons-190022 in network mk-addons-190022
	I0731 16:41:40.905950   16404 main.go:141] libmachine: (addons-190022) DBG | I0731 16:41:40.905870   16426 retry.go:31] will retry after 318.334424ms: waiting for machine to come up
	I0731 16:41:41.225407   16404 main.go:141] libmachine: (addons-190022) DBG | domain addons-190022 has defined MAC address 52:54:00:b8:3c:34 in network mk-addons-190022
	I0731 16:41:41.225879   16404 main.go:141] libmachine: (addons-190022) DBG | unable to find current IP address of domain addons-190022 in network mk-addons-190022
	I0731 16:41:41.225907   16404 main.go:141] libmachine: (addons-190022) DBG | I0731 16:41:41.225819   16426 retry.go:31] will retry after 298.274864ms: waiting for machine to come up
	I0731 16:41:41.525265   16404 main.go:141] libmachine: (addons-190022) DBG | domain addons-190022 has defined MAC address 52:54:00:b8:3c:34 in network mk-addons-190022
	I0731 16:41:41.525725   16404 main.go:141] libmachine: (addons-190022) DBG | unable to find current IP address of domain addons-190022 in network mk-addons-190022
	I0731 16:41:41.525753   16404 main.go:141] libmachine: (addons-190022) DBG | I0731 16:41:41.525670   16426 retry.go:31] will retry after 393.737403ms: waiting for machine to come up
	I0731 16:41:41.921291   16404 main.go:141] libmachine: (addons-190022) DBG | domain addons-190022 has defined MAC address 52:54:00:b8:3c:34 in network mk-addons-190022
	I0731 16:41:41.921741   16404 main.go:141] libmachine: (addons-190022) DBG | unable to find current IP address of domain addons-190022 in network mk-addons-190022
	I0731 16:41:41.921768   16404 main.go:141] libmachine: (addons-190022) DBG | I0731 16:41:41.921690   16426 retry.go:31] will retry after 651.921555ms: waiting for machine to come up
	I0731 16:41:42.576857   16404 main.go:141] libmachine: (addons-190022) DBG | domain addons-190022 has defined MAC address 52:54:00:b8:3c:34 in network mk-addons-190022
	I0731 16:41:42.577388   16404 main.go:141] libmachine: (addons-190022) DBG | unable to find current IP address of domain addons-190022 in network mk-addons-190022
	I0731 16:41:42.577413   16404 main.go:141] libmachine: (addons-190022) DBG | I0731 16:41:42.577309   16426 retry.go:31] will retry after 625.355859ms: waiting for machine to come up
	I0731 16:41:43.204131   16404 main.go:141] libmachine: (addons-190022) DBG | domain addons-190022 has defined MAC address 52:54:00:b8:3c:34 in network mk-addons-190022
	I0731 16:41:43.204527   16404 main.go:141] libmachine: (addons-190022) DBG | unable to find current IP address of domain addons-190022 in network mk-addons-190022
	I0731 16:41:43.204586   16404 main.go:141] libmachine: (addons-190022) DBG | I0731 16:41:43.204501   16426 retry.go:31] will retry after 857.401115ms: waiting for machine to come up
	I0731 16:41:44.063071   16404 main.go:141] libmachine: (addons-190022) DBG | domain addons-190022 has defined MAC address 52:54:00:b8:3c:34 in network mk-addons-190022
	I0731 16:41:44.063478   16404 main.go:141] libmachine: (addons-190022) DBG | unable to find current IP address of domain addons-190022 in network mk-addons-190022
	I0731 16:41:44.063509   16404 main.go:141] libmachine: (addons-190022) DBG | I0731 16:41:44.063414   16426 retry.go:31] will retry after 1.331583997s: waiting for machine to come up
	I0731 16:41:45.396247   16404 main.go:141] libmachine: (addons-190022) DBG | domain addons-190022 has defined MAC address 52:54:00:b8:3c:34 in network mk-addons-190022
	I0731 16:41:45.396731   16404 main.go:141] libmachine: (addons-190022) DBG | unable to find current IP address of domain addons-190022 in network mk-addons-190022
	I0731 16:41:45.396757   16404 main.go:141] libmachine: (addons-190022) DBG | I0731 16:41:45.396687   16426 retry.go:31] will retry after 1.121424428s: waiting for machine to come up
	I0731 16:41:46.520037   16404 main.go:141] libmachine: (addons-190022) DBG | domain addons-190022 has defined MAC address 52:54:00:b8:3c:34 in network mk-addons-190022
	I0731 16:41:46.520369   16404 main.go:141] libmachine: (addons-190022) DBG | unable to find current IP address of domain addons-190022 in network mk-addons-190022
	I0731 16:41:46.520409   16404 main.go:141] libmachine: (addons-190022) DBG | I0731 16:41:46.520312   16426 retry.go:31] will retry after 1.846743517s: waiting for machine to come up
	I0731 16:41:48.369541   16404 main.go:141] libmachine: (addons-190022) DBG | domain addons-190022 has defined MAC address 52:54:00:b8:3c:34 in network mk-addons-190022
	I0731 16:41:48.369963   16404 main.go:141] libmachine: (addons-190022) DBG | unable to find current IP address of domain addons-190022 in network mk-addons-190022
	I0731 16:41:48.369992   16404 main.go:141] libmachine: (addons-190022) DBG | I0731 16:41:48.369903   16426 retry.go:31] will retry after 2.862497152s: waiting for machine to come up
	I0731 16:41:51.235923   16404 main.go:141] libmachine: (addons-190022) DBG | domain addons-190022 has defined MAC address 52:54:00:b8:3c:34 in network mk-addons-190022
	I0731 16:41:51.236392   16404 main.go:141] libmachine: (addons-190022) DBG | unable to find current IP address of domain addons-190022 in network mk-addons-190022
	I0731 16:41:51.236419   16404 main.go:141] libmachine: (addons-190022) DBG | I0731 16:41:51.236351   16426 retry.go:31] will retry after 3.250256872s: waiting for machine to come up
	I0731 16:41:54.488065   16404 main.go:141] libmachine: (addons-190022) DBG | domain addons-190022 has defined MAC address 52:54:00:b8:3c:34 in network mk-addons-190022
	I0731 16:41:54.488396   16404 main.go:141] libmachine: (addons-190022) DBG | unable to find current IP address of domain addons-190022 in network mk-addons-190022
	I0731 16:41:54.488419   16404 main.go:141] libmachine: (addons-190022) DBG | I0731 16:41:54.488357   16426 retry.go:31] will retry after 3.524085571s: waiting for machine to come up
	I0731 16:41:58.016962   16404 main.go:141] libmachine: (addons-190022) DBG | domain addons-190022 has defined MAC address 52:54:00:b8:3c:34 in network mk-addons-190022
	I0731 16:41:58.017391   16404 main.go:141] libmachine: (addons-190022) DBG | unable to find current IP address of domain addons-190022 in network mk-addons-190022
	I0731 16:41:58.017419   16404 main.go:141] libmachine: (addons-190022) DBG | I0731 16:41:58.017343   16426 retry.go:31] will retry after 3.777226244s: waiting for machine to come up
	I0731 16:42:01.798205   16404 main.go:141] libmachine: (addons-190022) DBG | domain addons-190022 has defined MAC address 52:54:00:b8:3c:34 in network mk-addons-190022
	I0731 16:42:01.798683   16404 main.go:141] libmachine: (addons-190022) Found IP for machine: 192.168.39.140
	I0731 16:42:01.798700   16404 main.go:141] libmachine: (addons-190022) Reserving static IP address...
	I0731 16:42:01.798708   16404 main.go:141] libmachine: (addons-190022) DBG | domain addons-190022 has current primary IP address 192.168.39.140 and MAC address 52:54:00:b8:3c:34 in network mk-addons-190022
	I0731 16:42:01.799040   16404 main.go:141] libmachine: (addons-190022) DBG | unable to find host DHCP lease matching {name: "addons-190022", mac: "52:54:00:b8:3c:34", ip: "192.168.39.140"} in network mk-addons-190022
	I0731 16:42:01.868230   16404 main.go:141] libmachine: (addons-190022) DBG | Getting to WaitForSSH function...
	I0731 16:42:01.868261   16404 main.go:141] libmachine: (addons-190022) Reserved static IP address: 192.168.39.140
	I0731 16:42:01.868276   16404 main.go:141] libmachine: (addons-190022) Waiting for SSH to be available...
	I0731 16:42:01.870777   16404 main.go:141] libmachine: (addons-190022) DBG | domain addons-190022 has defined MAC address 52:54:00:b8:3c:34 in network mk-addons-190022
	I0731 16:42:01.871313   16404 main.go:141] libmachine: (addons-190022) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b8:3c:34", ip: ""} in network mk-addons-190022: {Iface:virbr1 ExpiryTime:2024-07-31 17:41:52 +0000 UTC Type:0 Mac:52:54:00:b8:3c:34 Iaid: IPaddr:192.168.39.140 Prefix:24 Hostname:minikube Clientid:01:52:54:00:b8:3c:34}
	I0731 16:42:01.871342   16404 main.go:141] libmachine: (addons-190022) DBG | domain addons-190022 has defined IP address 192.168.39.140 and MAC address 52:54:00:b8:3c:34 in network mk-addons-190022
	I0731 16:42:01.871511   16404 main.go:141] libmachine: (addons-190022) DBG | Using SSH client type: external
	I0731 16:42:01.871554   16404 main.go:141] libmachine: (addons-190022) DBG | Using SSH private key: /home/jenkins/minikube-integration/19349-8084/.minikube/machines/addons-190022/id_rsa (-rw-------)
	I0731 16:42:01.871585   16404 main.go:141] libmachine: (addons-190022) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.140 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19349-8084/.minikube/machines/addons-190022/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0731 16:42:01.871596   16404 main.go:141] libmachine: (addons-190022) DBG | About to run SSH command:
	I0731 16:42:01.871605   16404 main.go:141] libmachine: (addons-190022) DBG | exit 0
	I0731 16:42:02.007321   16404 main.go:141] libmachine: (addons-190022) DBG | SSH cmd err, output: <nil>: 
	I0731 16:42:02.007710   16404 main.go:141] libmachine: (addons-190022) KVM machine creation complete!
	I0731 16:42:02.007981   16404 main.go:141] libmachine: (addons-190022) Calling .GetConfigRaw
	I0731 16:42:02.008586   16404 main.go:141] libmachine: (addons-190022) Calling .DriverName
	I0731 16:42:02.008825   16404 main.go:141] libmachine: (addons-190022) Calling .DriverName
	I0731 16:42:02.009048   16404 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0731 16:42:02.009068   16404 main.go:141] libmachine: (addons-190022) Calling .GetState
	I0731 16:42:02.010319   16404 main.go:141] libmachine: Detecting operating system of created instance...
	I0731 16:42:02.010336   16404 main.go:141] libmachine: Waiting for SSH to be available...
	I0731 16:42:02.010356   16404 main.go:141] libmachine: Getting to WaitForSSH function...
	I0731 16:42:02.010368   16404 main.go:141] libmachine: (addons-190022) Calling .GetSSHHostname
	I0731 16:42:02.012728   16404 main.go:141] libmachine: (addons-190022) DBG | domain addons-190022 has defined MAC address 52:54:00:b8:3c:34 in network mk-addons-190022
	I0731 16:42:02.013036   16404 main.go:141] libmachine: (addons-190022) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b8:3c:34", ip: ""} in network mk-addons-190022: {Iface:virbr1 ExpiryTime:2024-07-31 17:41:52 +0000 UTC Type:0 Mac:52:54:00:b8:3c:34 Iaid: IPaddr:192.168.39.140 Prefix:24 Hostname:addons-190022 Clientid:01:52:54:00:b8:3c:34}
	I0731 16:42:02.013073   16404 main.go:141] libmachine: (addons-190022) DBG | domain addons-190022 has defined IP address 192.168.39.140 and MAC address 52:54:00:b8:3c:34 in network mk-addons-190022
	I0731 16:42:02.013185   16404 main.go:141] libmachine: (addons-190022) Calling .GetSSHPort
	I0731 16:42:02.013341   16404 main.go:141] libmachine: (addons-190022) Calling .GetSSHKeyPath
	I0731 16:42:02.013484   16404 main.go:141] libmachine: (addons-190022) Calling .GetSSHKeyPath
	I0731 16:42:02.013655   16404 main.go:141] libmachine: (addons-190022) Calling .GetSSHUsername
	I0731 16:42:02.013792   16404 main.go:141] libmachine: Using SSH client type: native
	I0731 16:42:02.014003   16404 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.140 22 <nil> <nil>}
	I0731 16:42:02.014016   16404 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0731 16:42:02.118535   16404 main.go:141] libmachine: SSH cmd err, output: <nil>: 
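
Editor's note: the provisioner above treats the VM as reachable once a bare "exit 0" succeeds over SSH, retrying until the daemon answers. The following is a minimal illustrative sketch of that wait-for-SSH probe using golang.org/x/crypto/ssh, not minikube's actual libmachine code; the key path "id_rsa", the retry count, and the 2-second backoff are assumptions, while the user "docker" and address 192.168.39.140:22 come from the log.

    package main

    import (
        "fmt"
        "log"
        "os"
        "time"

        "golang.org/x/crypto/ssh"
    )

    func main() {
        keyBytes, err := os.ReadFile("id_rsa") // assumed local copy of the machine key
        if err != nil {
            log.Fatal(err)
        }
        signer, err := ssh.ParsePrivateKey(keyBytes)
        if err != nil {
            log.Fatal(err)
        }
        cfg := &ssh.ClientConfig{
            User:            "docker",
            Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
            HostKeyCallback: ssh.InsecureIgnoreHostKey(), // mirrors StrictHostKeyChecking=no above
            Timeout:         10 * time.Second,
        }

        // Retry until a trivial "exit 0" succeeds, as the provisioner does.
        for attempt := 1; attempt <= 30; attempt++ {
            client, err := ssh.Dial("tcp", "192.168.39.140:22", cfg)
            if err != nil {
                time.Sleep(2 * time.Second)
                continue
            }
            session, err := client.NewSession()
            if err != nil {
                client.Close()
                time.Sleep(2 * time.Second)
                continue
            }
            runErr := session.Run("exit 0")
            session.Close()
            client.Close()
            if runErr == nil {
                fmt.Println("SSH is available")
                return
            }
            time.Sleep(2 * time.Second)
        }
        log.Fatal("SSH never became available")
    }
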
	I0731 16:42:02.118564   16404 main.go:141] libmachine: Detecting the provisioner...
	I0731 16:42:02.118574   16404 main.go:141] libmachine: (addons-190022) Calling .GetSSHHostname
	I0731 16:42:02.121222   16404 main.go:141] libmachine: (addons-190022) DBG | domain addons-190022 has defined MAC address 52:54:00:b8:3c:34 in network mk-addons-190022
	I0731 16:42:02.121625   16404 main.go:141] libmachine: (addons-190022) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b8:3c:34", ip: ""} in network mk-addons-190022: {Iface:virbr1 ExpiryTime:2024-07-31 17:41:52 +0000 UTC Type:0 Mac:52:54:00:b8:3c:34 Iaid: IPaddr:192.168.39.140 Prefix:24 Hostname:addons-190022 Clientid:01:52:54:00:b8:3c:34}
	I0731 16:42:02.121658   16404 main.go:141] libmachine: (addons-190022) DBG | domain addons-190022 has defined IP address 192.168.39.140 and MAC address 52:54:00:b8:3c:34 in network mk-addons-190022
	I0731 16:42:02.121801   16404 main.go:141] libmachine: (addons-190022) Calling .GetSSHPort
	I0731 16:42:02.122032   16404 main.go:141] libmachine: (addons-190022) Calling .GetSSHKeyPath
	I0731 16:42:02.122255   16404 main.go:141] libmachine: (addons-190022) Calling .GetSSHKeyPath
	I0731 16:42:02.122416   16404 main.go:141] libmachine: (addons-190022) Calling .GetSSHUsername
	I0731 16:42:02.122588   16404 main.go:141] libmachine: Using SSH client type: native
	I0731 16:42:02.122766   16404 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.140 22 <nil> <nil>}
	I0731 16:42:02.122779   16404 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0731 16:42:02.227339   16404 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0731 16:42:02.227443   16404 main.go:141] libmachine: found compatible host: buildroot
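
Editor's note: provisioner detection here amounts to reading the ID= field from /etc/os-release on the guest. A minimal sketch of that parsing step, hard-coding the os-release text captured above; the real detector handles several distributions and reads the file over SSH.

    package main

    import (
        "bufio"
        "fmt"
        "strings"
    )

    // The exact /etc/os-release contents returned in the log above.
    const osRelease = `NAME=Buildroot
    VERSION=2023.02.9-dirty
    ID=buildroot
    VERSION_ID=2023.02.9
    PRETTY_NAME="Buildroot 2023.02.9"`

    func main() {
        s := bufio.NewScanner(strings.NewReader(osRelease))
        for s.Scan() {
            line := strings.TrimSpace(s.Text())
            if strings.HasPrefix(line, "ID=") {
                // Corresponds to the "found compatible host: buildroot" line.
                fmt.Println("detected provisioner:", strings.Trim(strings.TrimPrefix(line, "ID="), `"`))
            }
        }
    }
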
	I0731 16:42:02.227456   16404 main.go:141] libmachine: Provisioning with buildroot...
	I0731 16:42:02.227465   16404 main.go:141] libmachine: (addons-190022) Calling .GetMachineName
	I0731 16:42:02.227740   16404 buildroot.go:166] provisioning hostname "addons-190022"
	I0731 16:42:02.227764   16404 main.go:141] libmachine: (addons-190022) Calling .GetMachineName
	I0731 16:42:02.227979   16404 main.go:141] libmachine: (addons-190022) Calling .GetSSHHostname
	I0731 16:42:02.230544   16404 main.go:141] libmachine: (addons-190022) DBG | domain addons-190022 has defined MAC address 52:54:00:b8:3c:34 in network mk-addons-190022
	I0731 16:42:02.230923   16404 main.go:141] libmachine: (addons-190022) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b8:3c:34", ip: ""} in network mk-addons-190022: {Iface:virbr1 ExpiryTime:2024-07-31 17:41:52 +0000 UTC Type:0 Mac:52:54:00:b8:3c:34 Iaid: IPaddr:192.168.39.140 Prefix:24 Hostname:addons-190022 Clientid:01:52:54:00:b8:3c:34}
	I0731 16:42:02.230960   16404 main.go:141] libmachine: (addons-190022) DBG | domain addons-190022 has defined IP address 192.168.39.140 and MAC address 52:54:00:b8:3c:34 in network mk-addons-190022
	I0731 16:42:02.231130   16404 main.go:141] libmachine: (addons-190022) Calling .GetSSHPort
	I0731 16:42:02.231318   16404 main.go:141] libmachine: (addons-190022) Calling .GetSSHKeyPath
	I0731 16:42:02.231475   16404 main.go:141] libmachine: (addons-190022) Calling .GetSSHKeyPath
	I0731 16:42:02.231667   16404 main.go:141] libmachine: (addons-190022) Calling .GetSSHUsername
	I0731 16:42:02.231824   16404 main.go:141] libmachine: Using SSH client type: native
	I0731 16:42:02.231970   16404 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.140 22 <nil> <nil>}
	I0731 16:42:02.231981   16404 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-190022 && echo "addons-190022" | sudo tee /etc/hostname
	I0731 16:42:02.349582   16404 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-190022
	
	I0731 16:42:02.349646   16404 main.go:141] libmachine: (addons-190022) Calling .GetSSHHostname
	I0731 16:42:02.352401   16404 main.go:141] libmachine: (addons-190022) DBG | domain addons-190022 has defined MAC address 52:54:00:b8:3c:34 in network mk-addons-190022
	I0731 16:42:02.352696   16404 main.go:141] libmachine: (addons-190022) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b8:3c:34", ip: ""} in network mk-addons-190022: {Iface:virbr1 ExpiryTime:2024-07-31 17:41:52 +0000 UTC Type:0 Mac:52:54:00:b8:3c:34 Iaid: IPaddr:192.168.39.140 Prefix:24 Hostname:addons-190022 Clientid:01:52:54:00:b8:3c:34}
	I0731 16:42:02.352715   16404 main.go:141] libmachine: (addons-190022) DBG | domain addons-190022 has defined IP address 192.168.39.140 and MAC address 52:54:00:b8:3c:34 in network mk-addons-190022
	I0731 16:42:02.352898   16404 main.go:141] libmachine: (addons-190022) Calling .GetSSHPort
	I0731 16:42:02.353108   16404 main.go:141] libmachine: (addons-190022) Calling .GetSSHKeyPath
	I0731 16:42:02.353266   16404 main.go:141] libmachine: (addons-190022) Calling .GetSSHKeyPath
	I0731 16:42:02.353409   16404 main.go:141] libmachine: (addons-190022) Calling .GetSSHUsername
	I0731 16:42:02.353560   16404 main.go:141] libmachine: Using SSH client type: native
	I0731 16:42:02.353723   16404 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.140 22 <nil> <nil>}
	I0731 16:42:02.353739   16404 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-190022' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-190022/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-190022' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0731 16:42:02.467252   16404 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0731 16:42:02.467285   16404 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19349-8084/.minikube CaCertPath:/home/jenkins/minikube-integration/19349-8084/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19349-8084/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19349-8084/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19349-8084/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19349-8084/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19349-8084/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19349-8084/.minikube}
	I0731 16:42:02.467317   16404 buildroot.go:174] setting up certificates
	I0731 16:42:02.467327   16404 provision.go:84] configureAuth start
	I0731 16:42:02.467337   16404 main.go:141] libmachine: (addons-190022) Calling .GetMachineName
	I0731 16:42:02.467621   16404 main.go:141] libmachine: (addons-190022) Calling .GetIP
	I0731 16:42:02.470250   16404 main.go:141] libmachine: (addons-190022) DBG | domain addons-190022 has defined MAC address 52:54:00:b8:3c:34 in network mk-addons-190022
	I0731 16:42:02.470553   16404 main.go:141] libmachine: (addons-190022) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b8:3c:34", ip: ""} in network mk-addons-190022: {Iface:virbr1 ExpiryTime:2024-07-31 17:41:52 +0000 UTC Type:0 Mac:52:54:00:b8:3c:34 Iaid: IPaddr:192.168.39.140 Prefix:24 Hostname:addons-190022 Clientid:01:52:54:00:b8:3c:34}
	I0731 16:42:02.470578   16404 main.go:141] libmachine: (addons-190022) DBG | domain addons-190022 has defined IP address 192.168.39.140 and MAC address 52:54:00:b8:3c:34 in network mk-addons-190022
	I0731 16:42:02.470741   16404 main.go:141] libmachine: (addons-190022) Calling .GetSSHHostname
	I0731 16:42:02.472797   16404 main.go:141] libmachine: (addons-190022) DBG | domain addons-190022 has defined MAC address 52:54:00:b8:3c:34 in network mk-addons-190022
	I0731 16:42:02.473147   16404 main.go:141] libmachine: (addons-190022) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b8:3c:34", ip: ""} in network mk-addons-190022: {Iface:virbr1 ExpiryTime:2024-07-31 17:41:52 +0000 UTC Type:0 Mac:52:54:00:b8:3c:34 Iaid: IPaddr:192.168.39.140 Prefix:24 Hostname:addons-190022 Clientid:01:52:54:00:b8:3c:34}
	I0731 16:42:02.473185   16404 main.go:141] libmachine: (addons-190022) DBG | domain addons-190022 has defined IP address 192.168.39.140 and MAC address 52:54:00:b8:3c:34 in network mk-addons-190022
	I0731 16:42:02.473301   16404 provision.go:143] copyHostCerts
	I0731 16:42:02.473371   16404 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19349-8084/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19349-8084/.minikube/ca.pem (1082 bytes)
	I0731 16:42:02.473521   16404 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19349-8084/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19349-8084/.minikube/cert.pem (1123 bytes)
	I0731 16:42:02.473628   16404 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19349-8084/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19349-8084/.minikube/key.pem (1675 bytes)
	I0731 16:42:02.473717   16404 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19349-8084/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19349-8084/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19349-8084/.minikube/certs/ca-key.pem org=jenkins.addons-190022 san=[127.0.0.1 192.168.39.140 addons-190022 localhost minikube]
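
Editor's note: the server certificate generated here is signed by the minikube CA and carries the SANs listed in the log (127.0.0.1, 192.168.39.140, addons-190022, localhost, minikube). Below is a self-contained sketch of that kind of issuance with Go's crypto/x509, illustrative only: the real run reuses the existing ca.pem/ca-key.pem under .minikube/certs instead of generating a CA inline, and key sizes and lifetimes here are assumptions.

    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "log"
        "math/big"
        "net"
        "os"
        "time"
    )

    func check(err error) {
        if err != nil {
            log.Fatal(err)
        }
    }

    func main() {
        // CA material (already on disk in the test run).
        caKey, err := rsa.GenerateKey(rand.Reader, 2048)
        check(err)
        caTmpl := &x509.Certificate{
            SerialNumber:          big.NewInt(1),
            Subject:               pkix.Name{CommonName: "minikubeCA"},
            NotBefore:             time.Now(),
            NotAfter:              time.Now().AddDate(3, 0, 0),
            IsCA:                  true,
            KeyUsage:              x509.KeyUsageCertSign | x509.KeyUsageDigitalSignature,
            BasicConstraintsValid: true,
        }
        caDER, err := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
        check(err)
        caCert, err := x509.ParseCertificate(caDER)
        check(err)

        // Server cert with the SANs shown in the log line above.
        srvKey, err := rsa.GenerateKey(rand.Reader, 2048)
        check(err)
        srvTmpl := &x509.Certificate{
            SerialNumber: big.NewInt(2),
            Subject:      pkix.Name{Organization: []string{"jenkins.addons-190022"}},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().AddDate(3, 0, 0),
            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
            DNSNames:     []string{"addons-190022", "localhost", "minikube"},
            IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.140")},
        }
        srvDER, err := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
        check(err)

        // Write the signed server certificate, analogous to server.pem above.
        out, err := os.Create("server.pem")
        check(err)
        defer out.Close()
        check(pem.Encode(out, &pem.Block{Type: "CERTIFICATE", Bytes: srvDER}))
    }
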
	I0731 16:42:02.602461   16404 provision.go:177] copyRemoteCerts
	I0731 16:42:02.602518   16404 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0731 16:42:02.602540   16404 main.go:141] libmachine: (addons-190022) Calling .GetSSHHostname
	I0731 16:42:02.605284   16404 main.go:141] libmachine: (addons-190022) DBG | domain addons-190022 has defined MAC address 52:54:00:b8:3c:34 in network mk-addons-190022
	I0731 16:42:02.605652   16404 main.go:141] libmachine: (addons-190022) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b8:3c:34", ip: ""} in network mk-addons-190022: {Iface:virbr1 ExpiryTime:2024-07-31 17:41:52 +0000 UTC Type:0 Mac:52:54:00:b8:3c:34 Iaid: IPaddr:192.168.39.140 Prefix:24 Hostname:addons-190022 Clientid:01:52:54:00:b8:3c:34}
	I0731 16:42:02.605681   16404 main.go:141] libmachine: (addons-190022) DBG | domain addons-190022 has defined IP address 192.168.39.140 and MAC address 52:54:00:b8:3c:34 in network mk-addons-190022
	I0731 16:42:02.605865   16404 main.go:141] libmachine: (addons-190022) Calling .GetSSHPort
	I0731 16:42:02.606020   16404 main.go:141] libmachine: (addons-190022) Calling .GetSSHKeyPath
	I0731 16:42:02.606162   16404 main.go:141] libmachine: (addons-190022) Calling .GetSSHUsername
	I0731 16:42:02.606293   16404 sshutil.go:53] new ssh client: &{IP:192.168.39.140 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19349-8084/.minikube/machines/addons-190022/id_rsa Username:docker}
	I0731 16:42:02.688592   16404 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19349-8084/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0731 16:42:02.711250   16404 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19349-8084/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0731 16:42:02.733389   16404 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19349-8084/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0731 16:42:02.754554   16404 provision.go:87] duration metric: took 287.214664ms to configureAuth
	I0731 16:42:02.754589   16404 buildroot.go:189] setting minikube options for container-runtime
	I0731 16:42:02.754791   16404 config.go:182] Loaded profile config "addons-190022": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0731 16:42:02.754897   16404 main.go:141] libmachine: (addons-190022) Calling .GetSSHHostname
	I0731 16:42:02.757654   16404 main.go:141] libmachine: (addons-190022) DBG | domain addons-190022 has defined MAC address 52:54:00:b8:3c:34 in network mk-addons-190022
	I0731 16:42:02.758008   16404 main.go:141] libmachine: (addons-190022) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b8:3c:34", ip: ""} in network mk-addons-190022: {Iface:virbr1 ExpiryTime:2024-07-31 17:41:52 +0000 UTC Type:0 Mac:52:54:00:b8:3c:34 Iaid: IPaddr:192.168.39.140 Prefix:24 Hostname:addons-190022 Clientid:01:52:54:00:b8:3c:34}
	I0731 16:42:02.758037   16404 main.go:141] libmachine: (addons-190022) DBG | domain addons-190022 has defined IP address 192.168.39.140 and MAC address 52:54:00:b8:3c:34 in network mk-addons-190022
	I0731 16:42:02.758247   16404 main.go:141] libmachine: (addons-190022) Calling .GetSSHPort
	I0731 16:42:02.758418   16404 main.go:141] libmachine: (addons-190022) Calling .GetSSHKeyPath
	I0731 16:42:02.758628   16404 main.go:141] libmachine: (addons-190022) Calling .GetSSHKeyPath
	I0731 16:42:02.758744   16404 main.go:141] libmachine: (addons-190022) Calling .GetSSHUsername
	I0731 16:42:02.758885   16404 main.go:141] libmachine: Using SSH client type: native
	I0731 16:42:02.759076   16404 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.140 22 <nil> <nil>}
	I0731 16:42:02.759093   16404 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0731 16:42:03.008242   16404 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0731 16:42:03.008266   16404 main.go:141] libmachine: Checking connection to Docker...
	I0731 16:42:03.008273   16404 main.go:141] libmachine: (addons-190022) Calling .GetURL
	I0731 16:42:03.009604   16404 main.go:141] libmachine: (addons-190022) DBG | Using libvirt version 6000000
	I0731 16:42:03.011774   16404 main.go:141] libmachine: (addons-190022) DBG | domain addons-190022 has defined MAC address 52:54:00:b8:3c:34 in network mk-addons-190022
	I0731 16:42:03.012098   16404 main.go:141] libmachine: (addons-190022) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b8:3c:34", ip: ""} in network mk-addons-190022: {Iface:virbr1 ExpiryTime:2024-07-31 17:41:52 +0000 UTC Type:0 Mac:52:54:00:b8:3c:34 Iaid: IPaddr:192.168.39.140 Prefix:24 Hostname:addons-190022 Clientid:01:52:54:00:b8:3c:34}
	I0731 16:42:03.012121   16404 main.go:141] libmachine: (addons-190022) DBG | domain addons-190022 has defined IP address 192.168.39.140 and MAC address 52:54:00:b8:3c:34 in network mk-addons-190022
	I0731 16:42:03.012264   16404 main.go:141] libmachine: Docker is up and running!
	I0731 16:42:03.012279   16404 main.go:141] libmachine: Reticulating splines...
	I0731 16:42:03.012286   16404 client.go:171] duration metric: took 24.454717478s to LocalClient.Create
	I0731 16:42:03.012310   16404 start.go:167] duration metric: took 24.454773022s to libmachine.API.Create "addons-190022"
	I0731 16:42:03.012326   16404 start.go:293] postStartSetup for "addons-190022" (driver="kvm2")
	I0731 16:42:03.012337   16404 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0731 16:42:03.012356   16404 main.go:141] libmachine: (addons-190022) Calling .DriverName
	I0731 16:42:03.012555   16404 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0731 16:42:03.012579   16404 main.go:141] libmachine: (addons-190022) Calling .GetSSHHostname
	I0731 16:42:03.014708   16404 main.go:141] libmachine: (addons-190022) DBG | domain addons-190022 has defined MAC address 52:54:00:b8:3c:34 in network mk-addons-190022
	I0731 16:42:03.015031   16404 main.go:141] libmachine: (addons-190022) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b8:3c:34", ip: ""} in network mk-addons-190022: {Iface:virbr1 ExpiryTime:2024-07-31 17:41:52 +0000 UTC Type:0 Mac:52:54:00:b8:3c:34 Iaid: IPaddr:192.168.39.140 Prefix:24 Hostname:addons-190022 Clientid:01:52:54:00:b8:3c:34}
	I0731 16:42:03.015067   16404 main.go:141] libmachine: (addons-190022) DBG | domain addons-190022 has defined IP address 192.168.39.140 and MAC address 52:54:00:b8:3c:34 in network mk-addons-190022
	I0731 16:42:03.015251   16404 main.go:141] libmachine: (addons-190022) Calling .GetSSHPort
	I0731 16:42:03.015424   16404 main.go:141] libmachine: (addons-190022) Calling .GetSSHKeyPath
	I0731 16:42:03.015575   16404 main.go:141] libmachine: (addons-190022) Calling .GetSSHUsername
	I0731 16:42:03.015678   16404 sshutil.go:53] new ssh client: &{IP:192.168.39.140 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19349-8084/.minikube/machines/addons-190022/id_rsa Username:docker}
	I0731 16:42:03.096693   16404 ssh_runner.go:195] Run: cat /etc/os-release
	I0731 16:42:03.100483   16404 info.go:137] Remote host: Buildroot 2023.02.9
	I0731 16:42:03.100507   16404 filesync.go:126] Scanning /home/jenkins/minikube-integration/19349-8084/.minikube/addons for local assets ...
	I0731 16:42:03.100581   16404 filesync.go:126] Scanning /home/jenkins/minikube-integration/19349-8084/.minikube/files for local assets ...
	I0731 16:42:03.100620   16404 start.go:296] duration metric: took 88.285991ms for postStartSetup
	I0731 16:42:03.100655   16404 main.go:141] libmachine: (addons-190022) Calling .GetConfigRaw
	I0731 16:42:03.101203   16404 main.go:141] libmachine: (addons-190022) Calling .GetIP
	I0731 16:42:03.104137   16404 main.go:141] libmachine: (addons-190022) DBG | domain addons-190022 has defined MAC address 52:54:00:b8:3c:34 in network mk-addons-190022
	I0731 16:42:03.104629   16404 main.go:141] libmachine: (addons-190022) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b8:3c:34", ip: ""} in network mk-addons-190022: {Iface:virbr1 ExpiryTime:2024-07-31 17:41:52 +0000 UTC Type:0 Mac:52:54:00:b8:3c:34 Iaid: IPaddr:192.168.39.140 Prefix:24 Hostname:addons-190022 Clientid:01:52:54:00:b8:3c:34}
	I0731 16:42:03.104660   16404 main.go:141] libmachine: (addons-190022) DBG | domain addons-190022 has defined IP address 192.168.39.140 and MAC address 52:54:00:b8:3c:34 in network mk-addons-190022
	I0731 16:42:03.104901   16404 profile.go:143] Saving config to /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/addons-190022/config.json ...
	I0731 16:42:03.105094   16404 start.go:128] duration metric: took 24.564838378s to createHost
	I0731 16:42:03.105116   16404 main.go:141] libmachine: (addons-190022) Calling .GetSSHHostname
	I0731 16:42:03.107425   16404 main.go:141] libmachine: (addons-190022) DBG | domain addons-190022 has defined MAC address 52:54:00:b8:3c:34 in network mk-addons-190022
	I0731 16:42:03.107791   16404 main.go:141] libmachine: (addons-190022) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b8:3c:34", ip: ""} in network mk-addons-190022: {Iface:virbr1 ExpiryTime:2024-07-31 17:41:52 +0000 UTC Type:0 Mac:52:54:00:b8:3c:34 Iaid: IPaddr:192.168.39.140 Prefix:24 Hostname:addons-190022 Clientid:01:52:54:00:b8:3c:34}
	I0731 16:42:03.107815   16404 main.go:141] libmachine: (addons-190022) DBG | domain addons-190022 has defined IP address 192.168.39.140 and MAC address 52:54:00:b8:3c:34 in network mk-addons-190022
	I0731 16:42:03.107980   16404 main.go:141] libmachine: (addons-190022) Calling .GetSSHPort
	I0731 16:42:03.108198   16404 main.go:141] libmachine: (addons-190022) Calling .GetSSHKeyPath
	I0731 16:42:03.108372   16404 main.go:141] libmachine: (addons-190022) Calling .GetSSHKeyPath
	I0731 16:42:03.108530   16404 main.go:141] libmachine: (addons-190022) Calling .GetSSHUsername
	I0731 16:42:03.108672   16404 main.go:141] libmachine: Using SSH client type: native
	I0731 16:42:03.108816   16404 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.140 22 <nil> <nil>}
	I0731 16:42:03.108825   16404 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0731 16:42:03.215407   16404 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722444123.186608497
	
	I0731 16:42:03.215427   16404 fix.go:216] guest clock: 1722444123.186608497
	I0731 16:42:03.215434   16404 fix.go:229] Guest: 2024-07-31 16:42:03.186608497 +0000 UTC Remote: 2024-07-31 16:42:03.105105177 +0000 UTC m=+24.661188828 (delta=81.50332ms)
	I0731 16:42:03.215475   16404 fix.go:200] guest clock delta is within tolerance: 81.50332ms
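
Editor's note: the guest-clock check above just compares the timestamp reported by the VM against the host's view of the same moment and accepts the drift when it is small. The arithmetic can be reproduced directly from the two timestamps in the log; the 2-second tolerance below is an assumption for illustration, not a value taken from the log.

    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        // Values copied from the fix.go lines above.
        guest := time.Date(2024, 7, 31, 16, 42, 3, 186608497, time.UTC)  // guest clock
        remote := time.Date(2024, 7, 31, 16, 42, 3, 105105177, time.UTC) // host-side "Remote" timestamp

        delta := guest.Sub(remote)
        const tolerance = 2 * time.Second // assumed threshold, for illustration only
        within := delta < tolerance && delta > -tolerance
        fmt.Printf("delta=%v, within tolerance: %v\n", delta, within) // prints delta=81.50332ms, within tolerance: true
    }
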
	I0731 16:42:03.215483   16404 start.go:83] releasing machines lock for "addons-190022", held for 24.67530545s
	I0731 16:42:03.215508   16404 main.go:141] libmachine: (addons-190022) Calling .DriverName
	I0731 16:42:03.215770   16404 main.go:141] libmachine: (addons-190022) Calling .GetIP
	I0731 16:42:03.218433   16404 main.go:141] libmachine: (addons-190022) DBG | domain addons-190022 has defined MAC address 52:54:00:b8:3c:34 in network mk-addons-190022
	I0731 16:42:03.218762   16404 main.go:141] libmachine: (addons-190022) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b8:3c:34", ip: ""} in network mk-addons-190022: {Iface:virbr1 ExpiryTime:2024-07-31 17:41:52 +0000 UTC Type:0 Mac:52:54:00:b8:3c:34 Iaid: IPaddr:192.168.39.140 Prefix:24 Hostname:addons-190022 Clientid:01:52:54:00:b8:3c:34}
	I0731 16:42:03.218789   16404 main.go:141] libmachine: (addons-190022) DBG | domain addons-190022 has defined IP address 192.168.39.140 and MAC address 52:54:00:b8:3c:34 in network mk-addons-190022
	I0731 16:42:03.219017   16404 main.go:141] libmachine: (addons-190022) Calling .DriverName
	I0731 16:42:03.219566   16404 main.go:141] libmachine: (addons-190022) Calling .DriverName
	I0731 16:42:03.219779   16404 main.go:141] libmachine: (addons-190022) Calling .DriverName
	I0731 16:42:03.219882   16404 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0731 16:42:03.219928   16404 main.go:141] libmachine: (addons-190022) Calling .GetSSHHostname
	I0731 16:42:03.219993   16404 ssh_runner.go:195] Run: cat /version.json
	I0731 16:42:03.220020   16404 main.go:141] libmachine: (addons-190022) Calling .GetSSHHostname
	I0731 16:42:03.222583   16404 main.go:141] libmachine: (addons-190022) DBG | domain addons-190022 has defined MAC address 52:54:00:b8:3c:34 in network mk-addons-190022
	I0731 16:42:03.222894   16404 main.go:141] libmachine: (addons-190022) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b8:3c:34", ip: ""} in network mk-addons-190022: {Iface:virbr1 ExpiryTime:2024-07-31 17:41:52 +0000 UTC Type:0 Mac:52:54:00:b8:3c:34 Iaid: IPaddr:192.168.39.140 Prefix:24 Hostname:addons-190022 Clientid:01:52:54:00:b8:3c:34}
	I0731 16:42:03.222911   16404 main.go:141] libmachine: (addons-190022) DBG | domain addons-190022 has defined MAC address 52:54:00:b8:3c:34 in network mk-addons-190022
	I0731 16:42:03.222955   16404 main.go:141] libmachine: (addons-190022) DBG | domain addons-190022 has defined IP address 192.168.39.140 and MAC address 52:54:00:b8:3c:34 in network mk-addons-190022
	I0731 16:42:03.223160   16404 main.go:141] libmachine: (addons-190022) Calling .GetSSHPort
	I0731 16:42:03.223292   16404 main.go:141] libmachine: (addons-190022) Calling .GetSSHKeyPath
	I0731 16:42:03.223403   16404 main.go:141] libmachine: (addons-190022) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b8:3c:34", ip: ""} in network mk-addons-190022: {Iface:virbr1 ExpiryTime:2024-07-31 17:41:52 +0000 UTC Type:0 Mac:52:54:00:b8:3c:34 Iaid: IPaddr:192.168.39.140 Prefix:24 Hostname:addons-190022 Clientid:01:52:54:00:b8:3c:34}
	I0731 16:42:03.223407   16404 main.go:141] libmachine: (addons-190022) Calling .GetSSHUsername
	I0731 16:42:03.223428   16404 main.go:141] libmachine: (addons-190022) DBG | domain addons-190022 has defined IP address 192.168.39.140 and MAC address 52:54:00:b8:3c:34 in network mk-addons-190022
	I0731 16:42:03.223596   16404 sshutil.go:53] new ssh client: &{IP:192.168.39.140 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19349-8084/.minikube/machines/addons-190022/id_rsa Username:docker}
	I0731 16:42:03.223611   16404 main.go:141] libmachine: (addons-190022) Calling .GetSSHPort
	I0731 16:42:03.223771   16404 main.go:141] libmachine: (addons-190022) Calling .GetSSHKeyPath
	I0731 16:42:03.223903   16404 main.go:141] libmachine: (addons-190022) Calling .GetSSHUsername
	I0731 16:42:03.224037   16404 sshutil.go:53] new ssh client: &{IP:192.168.39.140 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19349-8084/.minikube/machines/addons-190022/id_rsa Username:docker}
	I0731 16:42:03.348886   16404 ssh_runner.go:195] Run: systemctl --version
	I0731 16:42:03.354547   16404 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0731 16:42:03.506769   16404 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0731 16:42:03.512894   16404 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0731 16:42:03.512959   16404 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0731 16:42:03.530633   16404 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0731 16:42:03.530656   16404 start.go:495] detecting cgroup driver to use...
	I0731 16:42:03.530720   16404 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0731 16:42:03.550022   16404 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0731 16:42:03.565106   16404 docker.go:217] disabling cri-docker service (if available) ...
	I0731 16:42:03.565162   16404 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0731 16:42:03.580074   16404 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0731 16:42:03.592634   16404 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0731 16:42:03.706677   16404 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0731 16:42:03.844950   16404 docker.go:233] disabling docker service ...
	I0731 16:42:03.845019   16404 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0731 16:42:03.858774   16404 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0731 16:42:03.871923   16404 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0731 16:42:04.005544   16404 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0731 16:42:04.137780   16404 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0731 16:42:04.150925   16404 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0731 16:42:04.167713   16404 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0731 16:42:04.167777   16404 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 16:42:04.177583   16404 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0731 16:42:04.177649   16404 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 16:42:04.187498   16404 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 16:42:04.197571   16404 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 16:42:04.207555   16404 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0731 16:42:04.217155   16404 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 16:42:04.226559   16404 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 16:42:04.241976   16404 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 16:42:04.251421   16404 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0731 16:42:04.259923   16404 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0731 16:42:04.259984   16404 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0731 16:42:04.271575   16404 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0731 16:42:04.280100   16404 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0731 16:42:04.391524   16404 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0731 16:42:04.521477   16404 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0731 16:42:04.521570   16404 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0731 16:42:04.525844   16404 start.go:563] Will wait 60s for crictl version
	I0731 16:42:04.525898   16404 ssh_runner.go:195] Run: which crictl
	I0731 16:42:04.529120   16404 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0731 16:42:04.565684   16404 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0731 16:42:04.565815   16404 ssh_runner.go:195] Run: crio --version
	I0731 16:42:04.597647   16404 ssh_runner.go:195] Run: crio --version
	I0731 16:42:04.624234   16404 out.go:177] * Preparing Kubernetes v1.30.3 on CRI-O 1.29.1 ...
	I0731 16:42:04.625519   16404 main.go:141] libmachine: (addons-190022) Calling .GetIP
	I0731 16:42:04.627911   16404 main.go:141] libmachine: (addons-190022) DBG | domain addons-190022 has defined MAC address 52:54:00:b8:3c:34 in network mk-addons-190022
	I0731 16:42:04.628266   16404 main.go:141] libmachine: (addons-190022) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b8:3c:34", ip: ""} in network mk-addons-190022: {Iface:virbr1 ExpiryTime:2024-07-31 17:41:52 +0000 UTC Type:0 Mac:52:54:00:b8:3c:34 Iaid: IPaddr:192.168.39.140 Prefix:24 Hostname:addons-190022 Clientid:01:52:54:00:b8:3c:34}
	I0731 16:42:04.628293   16404 main.go:141] libmachine: (addons-190022) DBG | domain addons-190022 has defined IP address 192.168.39.140 and MAC address 52:54:00:b8:3c:34 in network mk-addons-190022
	I0731 16:42:04.628472   16404 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0731 16:42:04.632180   16404 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0731 16:42:04.643206   16404 kubeadm.go:883] updating cluster {Name:addons-190022 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.
3 ClusterName:addons-190022 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.140 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountT
ype:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0731 16:42:04.643313   16404 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0731 16:42:04.643354   16404 ssh_runner.go:195] Run: sudo crictl images --output json
	I0731 16:42:04.673696   16404 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.3". assuming images are not preloaded.
	I0731 16:42:04.673762   16404 ssh_runner.go:195] Run: which lz4
	I0731 16:42:04.677405   16404 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0731 16:42:04.681392   16404 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0731 16:42:04.681421   16404 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19349-8084/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (406200976 bytes)
	I0731 16:42:05.858907   16404 crio.go:462] duration metric: took 1.181525886s to copy over tarball
	I0731 16:42:05.858980   16404 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0731 16:42:08.068359   16404 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.209356148s)
	I0731 16:42:08.068392   16404 crio.go:469] duration metric: took 2.209450869s to extract the tarball
	I0731 16:42:08.068408   16404 ssh_runner.go:146] rm: /preloaded.tar.lz4
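
Editor's note: the preload path above is: scp the lz4-compressed image tarball into the VM, untar it into /var with extended attributes preserved, then delete it. A rough local sketch of the extraction step with os/exec, assuming tar and lz4 are installed; the test itself runs this command over SSH inside the guest.

    package main

    import (
        "log"
        "os/exec"
        "time"
    )

    func main() {
        start := time.Now()
        // Mirrors the command in the log: preserve security.capability xattrs,
        // decompress with lz4, extract into /var.
        cmd := exec.Command("sudo", "tar",
            "--xattrs", "--xattrs-include", "security.capability",
            "-I", "lz4", "-C", "/var", "-xf", "/preloaded.tar.lz4")
        if out, err := cmd.CombinedOutput(); err != nil {
            log.Fatalf("extract failed: %v\n%s", err, out)
        }
        log.Printf("extracted preload in %s", time.Since(start))
    }
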
	I0731 16:42:08.105425   16404 ssh_runner.go:195] Run: sudo crictl images --output json
	I0731 16:42:08.149060   16404 crio.go:514] all images are preloaded for cri-o runtime.
	I0731 16:42:08.149080   16404 cache_images.go:84] Images are preloaded, skipping loading
	I0731 16:42:08.149087   16404 kubeadm.go:934] updating node { 192.168.39.140 8443 v1.30.3 crio true true} ...
	I0731 16:42:08.149182   16404 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-190022 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.140
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:addons-190022 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0731 16:42:08.149244   16404 ssh_runner.go:195] Run: crio config
	I0731 16:42:08.194277   16404 cni.go:84] Creating CNI manager for ""
	I0731 16:42:08.194300   16404 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0731 16:42:08.194311   16404 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0731 16:42:08.194364   16404 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.140 APIServerPort:8443 KubernetesVersion:v1.30.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-190022 NodeName:addons-190022 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.140"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.140 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/k
ubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0731 16:42:08.194512   16404 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.140
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-190022"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.140
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.140"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0731 16:42:08.194574   16404 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0731 16:42:08.204098   16404 binaries.go:44] Found k8s binaries, skipping transfer
	I0731 16:42:08.204171   16404 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0731 16:42:08.213251   16404 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I0731 16:42:08.228899   16404 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0731 16:42:08.243601   16404 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2157 bytes)
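
Editor's note: the kubeadm.yaml just copied to /var/tmp/minikube/kubeadm.yaml.new is rendered from the option set logged at kubeadm.go:181: node IP, cluster name, pod and service CIDRs, Kubernetes version, and the CRI socket are substituted into a template. A minimal stand-alone sketch of that substitution with text/template; the struct and template here are simplified stand-ins, not minikube's actual ones.

    package main

    import (
        "log"
        "os"
        "text/template"
    )

    // Simplified stand-in for the options logged above; field names are illustrative.
    type kubeadmOpts struct {
        ClusterName      string
        NodeName         string
        AdvertiseAddress string
        APIServerPort    int
        PodSubnet        string
        ServiceCIDR      string
        K8sVersion       string
    }

    const tmpl = `apiVersion: kubeadm.k8s.io/v1beta3
    kind: InitConfiguration
    localAPIEndpoint:
      advertiseAddress: {{.AdvertiseAddress}}
      bindPort: {{.APIServerPort}}
    nodeRegistration:
      criSocket: unix:///var/run/crio/crio.sock
      name: "{{.NodeName}}"
      kubeletExtraArgs:
        node-ip: {{.AdvertiseAddress}}
    ---
    apiVersion: kubeadm.k8s.io/v1beta3
    kind: ClusterConfiguration
    clusterName: {{.ClusterName}}
    kubernetesVersion: {{.K8sVersion}}
    controlPlaneEndpoint: control-plane.minikube.internal:8443
    networking:
      dnsDomain: cluster.local
      podSubnet: "{{.PodSubnet}}"
      serviceSubnet: {{.ServiceCIDR}}
    `

    func main() {
        opts := kubeadmOpts{
            ClusterName:      "mk",
            NodeName:         "addons-190022",
            AdvertiseAddress: "192.168.39.140",
            APIServerPort:    8443,
            PodSubnet:        "10.244.0.0/16",
            ServiceCIDR:      "10.96.0.0/12",
            K8sVersion:       "v1.30.3",
        }
        // The real flow writes the rendered result to /var/tmp/minikube/kubeadm.yaml.new before kubeadm init.
        if err := template.Must(template.New("kubeadm").Parse(tmpl)).Execute(os.Stdout, opts); err != nil {
            log.Fatal(err)
        }
    }
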
	I0731 16:42:08.258573   16404 ssh_runner.go:195] Run: grep 192.168.39.140	control-plane.minikube.internal$ /etc/hosts
	I0731 16:42:08.261931   16404 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.140	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0731 16:42:08.273138   16404 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0731 16:42:08.400122   16404 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0731 16:42:08.416751   16404 certs.go:68] Setting up /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/addons-190022 for IP: 192.168.39.140
	I0731 16:42:08.416779   16404 certs.go:194] generating shared ca certs ...
	I0731 16:42:08.416802   16404 certs.go:226] acquiring lock for ca certs: {Name:mk49eed439085f9c30746706460ce213570d1997 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 16:42:08.416960   16404 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/19349-8084/.minikube/ca.key
	I0731 16:42:08.546599   16404 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19349-8084/.minikube/ca.crt ...
	I0731 16:42:08.546627   16404 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19349-8084/.minikube/ca.crt: {Name:mk98fd99858826e16dd06829f67d17f3bbd5dba5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 16:42:08.546788   16404 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19349-8084/.minikube/ca.key ...
	I0731 16:42:08.546799   16404 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19349-8084/.minikube/ca.key: {Name:mk758a5c4c2c0948d134585d8091c39b75aa53cd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 16:42:08.546868   16404 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19349-8084/.minikube/proxy-client-ca.key
	I0731 16:42:08.698767   16404 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19349-8084/.minikube/proxy-client-ca.crt ...
	I0731 16:42:08.698796   16404 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19349-8084/.minikube/proxy-client-ca.crt: {Name:mk0402e4fdafd0dadcc3a759a146f0382cb7f698 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 16:42:08.698949   16404 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19349-8084/.minikube/proxy-client-ca.key ...
	I0731 16:42:08.698960   16404 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19349-8084/.minikube/proxy-client-ca.key: {Name:mk3396450d02245824d44252cd53d5e243110597 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 16:42:08.699020   16404 certs.go:256] generating profile certs ...
	I0731 16:42:08.699069   16404 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/addons-190022/client.key
	I0731 16:42:08.699102   16404 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/addons-190022/client.crt with IP's: []
	I0731 16:42:08.863211   16404 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/addons-190022/client.crt ...
	I0731 16:42:08.863241   16404 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/addons-190022/client.crt: {Name:mkc08d8f316dbe9990d761b84b24b31787979fd3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 16:42:08.863398   16404 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/addons-190022/client.key ...
	I0731 16:42:08.863425   16404 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/addons-190022/client.key: {Name:mk236cf66608a04fccc0dc3a978aa705916000df Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 16:42:08.863507   16404 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/addons-190022/apiserver.key.202be5a6
	I0731 16:42:08.863525   16404 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/addons-190022/apiserver.crt.202be5a6 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.140]
	I0731 16:42:09.004841   16404 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/addons-190022/apiserver.crt.202be5a6 ...
	I0731 16:42:09.004871   16404 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/addons-190022/apiserver.crt.202be5a6: {Name:mk22785ec0c5e4d2c0950a76981c34684f85c969 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 16:42:09.005019   16404 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/addons-190022/apiserver.key.202be5a6 ...
	I0731 16:42:09.005031   16404 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/addons-190022/apiserver.key.202be5a6: {Name:mkaa5e980909d47dc81bfccb1a97b6a303c56f69 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 16:42:09.005093   16404 certs.go:381] copying /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/addons-190022/apiserver.crt.202be5a6 -> /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/addons-190022/apiserver.crt
	I0731 16:42:09.005182   16404 certs.go:385] copying /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/addons-190022/apiserver.key.202be5a6 -> /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/addons-190022/apiserver.key
	I0731 16:42:09.005240   16404 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/addons-190022/proxy-client.key
	I0731 16:42:09.005258   16404 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/addons-190022/proxy-client.crt with IP's: []
	I0731 16:42:09.175359   16404 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/addons-190022/proxy-client.crt ...
	I0731 16:42:09.175397   16404 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/addons-190022/proxy-client.crt: {Name:mk9fcfd895242a38b7cc4fdd83ba69a9c96b7633 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 16:42:09.175543   16404 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/addons-190022/proxy-client.key ...
	I0731 16:42:09.175553   16404 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/addons-190022/proxy-client.key: {Name:mkedfa22e9ce237ebf95ee7a2debe0f3b934e2ae Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 16:42:09.175716   16404 certs.go:484] found cert: /home/jenkins/minikube-integration/19349-8084/.minikube/certs/ca-key.pem (1675 bytes)
	I0731 16:42:09.175751   16404 certs.go:484] found cert: /home/jenkins/minikube-integration/19349-8084/.minikube/certs/ca.pem (1082 bytes)
	I0731 16:42:09.175773   16404 certs.go:484] found cert: /home/jenkins/minikube-integration/19349-8084/.minikube/certs/cert.pem (1123 bytes)
	I0731 16:42:09.175795   16404 certs.go:484] found cert: /home/jenkins/minikube-integration/19349-8084/.minikube/certs/key.pem (1675 bytes)
	I0731 16:42:09.177089   16404 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19349-8084/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0731 16:42:09.204217   16404 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19349-8084/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0731 16:42:09.226695   16404 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19349-8084/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0731 16:42:09.247974   16404 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19349-8084/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0731 16:42:09.270172   16404 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/addons-190022/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0731 16:42:09.291628   16404 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/addons-190022/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0731 16:42:09.313189   16404 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/addons-190022/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0731 16:42:09.335478   16404 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/addons-190022/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0731 16:42:09.357762   16404 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19349-8084/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0731 16:42:09.379615   16404 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0731 16:42:09.394999   16404 ssh_runner.go:195] Run: openssl version
	I0731 16:42:09.400758   16404 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0731 16:42:09.410874   16404 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0731 16:42:09.415180   16404 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 31 16:42 /usr/share/ca-certificates/minikubeCA.pem
	I0731 16:42:09.415231   16404 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0731 16:42:09.420540   16404 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0731 16:42:09.430259   16404 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0731 16:42:09.434185   16404 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0731 16:42:09.434230   16404 kubeadm.go:392] StartCluster: {Name:addons-190022 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 C
lusterName:addons-190022 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.140 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType
:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0731 16:42:09.434301   16404 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0731 16:42:09.434372   16404 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0731 16:42:09.470928   16404 cri.go:89] found id: ""
	I0731 16:42:09.470991   16404 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0731 16:42:09.480086   16404 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0731 16:42:09.491206   16404 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0731 16:42:09.503823   16404 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0731 16:42:09.503841   16404 kubeadm.go:157] found existing configuration files:
	
	I0731 16:42:09.503883   16404 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0731 16:42:09.512469   16404 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0731 16:42:09.512530   16404 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0731 16:42:09.521693   16404 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0731 16:42:09.530442   16404 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0731 16:42:09.530494   16404 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0731 16:42:09.538797   16404 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0731 16:42:09.546978   16404 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0731 16:42:09.547032   16404 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0731 16:42:09.555753   16404 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0731 16:42:09.563797   16404 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0731 16:42:09.563846   16404 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0731 16:42:09.572635   16404 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0731 16:42:09.625936   16404 kubeadm.go:310] [init] Using Kubernetes version: v1.30.3
	I0731 16:42:09.626002   16404 kubeadm.go:310] [preflight] Running pre-flight checks
	I0731 16:42:09.744873   16404 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0731 16:42:09.745011   16404 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0731 16:42:09.745110   16404 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0731 16:42:09.953459   16404 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0731 16:42:10.101223   16404 out.go:204]   - Generating certificates and keys ...
	I0731 16:42:10.101356   16404 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0731 16:42:10.101440   16404 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0731 16:42:10.277611   16404 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0731 16:42:10.337591   16404 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0731 16:42:10.420205   16404 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0731 16:42:10.516040   16404 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0731 16:42:10.596247   16404 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0731 16:42:10.596554   16404 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [addons-190022 localhost] and IPs [192.168.39.140 127.0.0.1 ::1]
	I0731 16:42:10.730169   16404 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0731 16:42:10.730349   16404 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [addons-190022 localhost] and IPs [192.168.39.140 127.0.0.1 ::1]
	I0731 16:42:10.786611   16404 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0731 16:42:10.916279   16404 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0731 16:42:11.123724   16404 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0731 16:42:11.123997   16404 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0731 16:42:11.189266   16404 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0731 16:42:11.243585   16404 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0731 16:42:11.341228   16404 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0731 16:42:11.731499   16404 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0731 16:42:12.047912   16404 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0731 16:42:12.048793   16404 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0731 16:42:12.052466   16404 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0731 16:42:12.054583   16404 out.go:204]   - Booting up control plane ...
	I0731 16:42:12.054695   16404 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0731 16:42:12.054794   16404 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0731 16:42:12.054885   16404 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0731 16:42:12.070417   16404 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0731 16:42:12.071262   16404 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0731 16:42:12.071331   16404 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0731 16:42:12.202217   16404 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0731 16:42:12.202303   16404 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0731 16:42:12.704028   16404 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 502.224044ms
	I0731 16:42:12.704103   16404 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0731 16:42:17.702681   16404 kubeadm.go:310] [api-check] The API server is healthy after 5.001312549s
	I0731 16:42:17.715966   16404 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0731 16:42:17.727539   16404 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0731 16:42:17.763374   16404 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0731 16:42:17.763634   16404 kubeadm.go:310] [mark-control-plane] Marking the node addons-190022 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0731 16:42:17.774706   16404 kubeadm.go:310] [bootstrap-token] Using token: 09eco0.a7s4hcv0o7zkmmb3
	I0731 16:42:17.776220   16404 out.go:204]   - Configuring RBAC rules ...
	I0731 16:42:17.776357   16404 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0731 16:42:17.783231   16404 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0731 16:42:17.790236   16404 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0731 16:42:17.793454   16404 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0731 16:42:17.796736   16404 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0731 16:42:17.801710   16404 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0731 16:42:18.111165   16404 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0731 16:42:18.548044   16404 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0731 16:42:19.108771   16404 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0731 16:42:19.109627   16404 kubeadm.go:310] 
	I0731 16:42:19.109737   16404 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0731 16:42:19.109770   16404 kubeadm.go:310] 
	I0731 16:42:19.109890   16404 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0731 16:42:19.109900   16404 kubeadm.go:310] 
	I0731 16:42:19.109935   16404 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0731 16:42:19.110027   16404 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0731 16:42:19.110076   16404 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0731 16:42:19.110082   16404 kubeadm.go:310] 
	I0731 16:42:19.110160   16404 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0731 16:42:19.110172   16404 kubeadm.go:310] 
	I0731 16:42:19.110224   16404 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0731 16:42:19.110233   16404 kubeadm.go:310] 
	I0731 16:42:19.110297   16404 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0731 16:42:19.110394   16404 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0731 16:42:19.110498   16404 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0731 16:42:19.110514   16404 kubeadm.go:310] 
	I0731 16:42:19.110617   16404 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0731 16:42:19.110713   16404 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0731 16:42:19.110722   16404 kubeadm.go:310] 
	I0731 16:42:19.110922   16404 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token 09eco0.a7s4hcv0o7zkmmb3 \
	I0731 16:42:19.111065   16404 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:460b24b51d10a3760f2391542a8a89e401359854f437f922cbee3f2d8ed6c856 \
	I0731 16:42:19.111104   16404 kubeadm.go:310] 	--control-plane 
	I0731 16:42:19.111131   16404 kubeadm.go:310] 
	I0731 16:42:19.111234   16404 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0731 16:42:19.111244   16404 kubeadm.go:310] 
	I0731 16:42:19.111356   16404 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 09eco0.a7s4hcv0o7zkmmb3 \
	I0731 16:42:19.111479   16404 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:460b24b51d10a3760f2391542a8a89e401359854f437f922cbee3f2d8ed6c856 
	I0731 16:42:19.111612   16404 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0731 16:42:19.111637   16404 cni.go:84] Creating CNI manager for ""
	I0731 16:42:19.111650   16404 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0731 16:42:19.113330   16404 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0731 16:42:19.114623   16404 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0731 16:42:19.125158   16404 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0731 16:42:19.142456   16404 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0731 16:42:19.142520   16404 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 16:42:19.142578   16404 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-190022 minikube.k8s.io/updated_at=2024_07_31T16_42_19_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=1d737dad7efa60c56d30434fcd857dd3b14c91d9 minikube.k8s.io/name=addons-190022 minikube.k8s.io/primary=true
	I0731 16:42:19.182656   16404 ops.go:34] apiserver oom_adj: -16
	I0731 16:42:19.237915   16404 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 16:42:19.738139   16404 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 16:42:20.238042   16404 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 16:42:20.738206   16404 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 16:42:21.238157   16404 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 16:42:21.738796   16404 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 16:42:22.238549   16404 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 16:42:22.738820   16404 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 16:42:23.238839   16404 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 16:42:23.738377   16404 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 16:42:24.238819   16404 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 16:42:24.738887   16404 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 16:42:25.238667   16404 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 16:42:25.738526   16404 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 16:42:26.238076   16404 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 16:42:26.738628   16404 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 16:42:27.237982   16404 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 16:42:27.738797   16404 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 16:42:28.238742   16404 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 16:42:28.739019   16404 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 16:42:29.238828   16404 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 16:42:29.737963   16404 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 16:42:30.238275   16404 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 16:42:30.737944   16404 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 16:42:31.238670   16404 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 16:42:31.738820   16404 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 16:42:32.238718   16404 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 16:42:32.738481   16404 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 16:42:32.828588   16404 kubeadm.go:1113] duration metric: took 13.686125798s to wait for elevateKubeSystemPrivileges
	I0731 16:42:32.828624   16404 kubeadm.go:394] duration metric: took 23.394397952s to StartCluster
	I0731 16:42:32.828641   16404 settings.go:142] acquiring lock: {Name:mk8ac8269296bf0b887a00ff74caaad180005f89 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 16:42:32.828795   16404 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19349-8084/kubeconfig
	I0731 16:42:32.829209   16404 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19349-8084/kubeconfig: {Name:mkf2340bc1ada3c683f132aca37228d7588e1fdb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 16:42:32.829421   16404 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0731 16:42:32.829450   16404 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.140 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0731 16:42:32.829492   16404 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false helm-tiller:true inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I0731 16:42:32.829568   16404 addons.go:69] Setting yakd=true in profile "addons-190022"
	I0731 16:42:32.829599   16404 addons.go:234] Setting addon yakd=true in "addons-190022"
	I0731 16:42:32.829609   16404 addons.go:69] Setting inspektor-gadget=true in profile "addons-190022"
	I0731 16:42:32.829635   16404 addons.go:234] Setting addon inspektor-gadget=true in "addons-190022"
	I0731 16:42:32.829635   16404 addons.go:69] Setting gcp-auth=true in profile "addons-190022"
	I0731 16:42:32.829653   16404 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-190022"
	I0731 16:42:32.829654   16404 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-190022"
	I0731 16:42:32.829674   16404 addons.go:69] Setting default-storageclass=true in profile "addons-190022"
	I0731 16:42:32.829681   16404 addons.go:69] Setting ingress=true in profile "addons-190022"
	I0731 16:42:32.829686   16404 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-190022"
	I0731 16:42:32.829686   16404 config.go:182] Loaded profile config "addons-190022": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0731 16:42:32.829691   16404 addons.go:69] Setting volumesnapshots=true in profile "addons-190022"
	I0731 16:42:32.829700   16404 addons.go:234] Setting addon ingress=true in "addons-190022"
	I0731 16:42:32.829701   16404 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-190022"
	I0731 16:42:32.829709   16404 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-190022"
	I0731 16:42:32.829718   16404 addons.go:69] Setting registry=true in profile "addons-190022"
	I0731 16:42:32.829667   16404 mustload.go:65] Loading cluster: addons-190022
	I0731 16:42:32.829748   16404 addons.go:69] Setting storage-provisioner=true in profile "addons-190022"
	I0731 16:42:32.829682   16404 addons.go:69] Setting volcano=true in profile "addons-190022"
	I0731 16:42:32.829776   16404 addons.go:234] Setting addon storage-provisioner=true in "addons-190022"
	I0731 16:42:32.829780   16404 addons.go:234] Setting addon volcano=true in "addons-190022"
	I0731 16:42:32.829644   16404 addons.go:69] Setting metrics-server=true in profile "addons-190022"
	I0731 16:42:32.829812   16404 host.go:66] Checking if "addons-190022" exists ...
	I0731 16:42:32.829826   16404 addons.go:234] Setting addon metrics-server=true in "addons-190022"
	I0731 16:42:32.829846   16404 host.go:66] Checking if "addons-190022" exists ...
	I0731 16:42:32.829887   16404 config.go:182] Loaded profile config "addons-190022": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0731 16:42:32.829659   16404 addons.go:69] Setting cloud-spanner=true in profile "addons-190022"
	I0731 16:42:32.829959   16404 addons.go:234] Setting addon cloud-spanner=true in "addons-190022"
	I0731 16:42:32.829988   16404 host.go:66] Checking if "addons-190022" exists ...
	I0731 16:42:32.830141   16404 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 16:42:32.830173   16404 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 16:42:32.830196   16404 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 16:42:32.830209   16404 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 16:42:32.830215   16404 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 16:42:32.829674   16404 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-190022"
	I0731 16:42:32.830231   16404 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 16:42:32.829711   16404 addons.go:234] Setting addon volumesnapshots=true in "addons-190022"
	I0731 16:42:32.830239   16404 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 16:42:32.830258   16404 host.go:66] Checking if "addons-190022" exists ...
	I0731 16:42:32.829676   16404 addons.go:69] Setting helm-tiller=true in profile "addons-190022"
	I0731 16:42:32.830350   16404 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 16:42:32.830150   16404 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 16:42:32.830356   16404 addons.go:234] Setting addon helm-tiller=true in "addons-190022"
	I0731 16:42:32.830388   16404 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 16:42:32.829727   16404 host.go:66] Checking if "addons-190022" exists ...
	I0731 16:42:32.830424   16404 host.go:66] Checking if "addons-190022" exists ...
	I0731 16:42:32.829732   16404 addons.go:69] Setting ingress-dns=true in profile "addons-190022"
	I0731 16:42:32.830467   16404 addons.go:234] Setting addon ingress-dns=true in "addons-190022"
	I0731 16:42:32.830496   16404 host.go:66] Checking if "addons-190022" exists ...
	I0731 16:42:32.830584   16404 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 16:42:32.830619   16404 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 16:42:32.830662   16404 host.go:66] Checking if "addons-190022" exists ...
	I0731 16:42:32.829735   16404 addons.go:234] Setting addon registry=true in "addons-190022"
	I0731 16:42:32.830736   16404 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 16:42:32.830751   16404 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 16:42:32.830747   16404 host.go:66] Checking if "addons-190022" exists ...
	I0731 16:42:32.830809   16404 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 16:42:32.830836   16404 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 16:42:32.830853   16404 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 16:42:32.830881   16404 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 16:42:32.831072   16404 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 16:42:32.829806   16404 host.go:66] Checking if "addons-190022" exists ...
	I0731 16:42:32.830231   16404 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 16:42:32.831169   16404 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 16:42:32.829670   16404 host.go:66] Checking if "addons-190022" exists ...
	I0731 16:42:32.831482   16404 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 16:42:32.831484   16404 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 16:42:32.830398   16404 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 16:42:32.831513   16404 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 16:42:32.831555   16404 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 16:42:32.829636   16404 host.go:66] Checking if "addons-190022" exists ...
	I0731 16:42:32.831690   16404 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 16:42:32.831728   16404 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 16:42:32.829725   16404 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-190022"
	I0731 16:42:32.831766   16404 host.go:66] Checking if "addons-190022" exists ...
	I0731 16:42:32.847203   16404 out.go:177] * Verifying Kubernetes components...
	I0731 16:42:32.849043   16404 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0731 16:42:32.851049   16404 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35457
	I0731 16:42:32.851552   16404 main.go:141] libmachine: () Calling .GetVersion
	I0731 16:42:32.851775   16404 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44283
	I0731 16:42:32.851953   16404 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44311
	I0731 16:42:32.851955   16404 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46317
	I0731 16:42:32.852092   16404 main.go:141] libmachine: Using API Version  1
	I0731 16:42:32.852107   16404 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 16:42:32.852137   16404 main.go:141] libmachine: () Calling .GetVersion
	I0731 16:42:32.852382   16404 main.go:141] libmachine: () Calling .GetVersion
	I0731 16:42:32.852516   16404 main.go:141] libmachine: () Calling .GetMachineName
	I0731 16:42:32.852611   16404 main.go:141] libmachine: () Calling .GetVersion
	I0731 16:42:32.852774   16404 main.go:141] libmachine: Using API Version  1
	I0731 16:42:32.852796   16404 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 16:42:32.853059   16404 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 16:42:32.853100   16404 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 16:42:32.853185   16404 main.go:141] libmachine: () Calling .GetMachineName
	I0731 16:42:32.853411   16404 main.go:141] libmachine: Using API Version  1
	I0731 16:42:32.853503   16404 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 16:42:32.853413   16404 main.go:141] libmachine: Using API Version  1
	I0731 16:42:32.853549   16404 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 16:42:32.853821   16404 main.go:141] libmachine: () Calling .GetMachineName
	I0731 16:42:32.853857   16404 main.go:141] libmachine: () Calling .GetMachineName
	I0731 16:42:32.856473   16404 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44825
	I0731 16:42:32.857592   16404 main.go:141] libmachine: () Calling .GetVersion
	I0731 16:42:32.858157   16404 main.go:141] libmachine: Using API Version  1
	I0731 16:42:32.858173   16404 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 16:42:32.858520   16404 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34441
	I0731 16:42:32.858525   16404 main.go:141] libmachine: () Calling .GetMachineName
	I0731 16:42:32.858850   16404 main.go:141] libmachine: () Calling .GetVersion
	I0731 16:42:32.859078   16404 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 16:42:32.859138   16404 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 16:42:32.859242   16404 main.go:141] libmachine: Using API Version  1
	I0731 16:42:32.859255   16404 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 16:42:32.859571   16404 main.go:141] libmachine: () Calling .GetMachineName
	I0731 16:42:32.860185   16404 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 16:42:32.860224   16404 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 16:42:32.864109   16404 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45095
	I0731 16:42:32.864472   16404 main.go:141] libmachine: () Calling .GetVersion
	I0731 16:42:32.864925   16404 main.go:141] libmachine: Using API Version  1
	I0731 16:42:32.864937   16404 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 16:42:32.865251   16404 main.go:141] libmachine: () Calling .GetMachineName
	I0731 16:42:32.865426   16404 main.go:141] libmachine: (addons-190022) Calling .GetState
	I0731 16:42:32.869811   16404 addons.go:234] Setting addon default-storageclass=true in "addons-190022"
	I0731 16:42:32.869845   16404 host.go:66] Checking if "addons-190022" exists ...
	I0731 16:42:32.870182   16404 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 16:42:32.870214   16404 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 16:42:32.870351   16404 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42213
	I0731 16:42:32.871551   16404 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 16:42:32.871594   16404 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 16:42:32.871853   16404 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 16:42:32.871878   16404 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 16:42:32.872186   16404 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 16:42:32.872220   16404 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 16:42:32.872612   16404 main.go:141] libmachine: () Calling .GetVersion
	I0731 16:42:32.872698   16404 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36047
	I0731 16:42:32.871551   16404 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 16:42:32.872826   16404 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 16:42:32.872828   16404 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 16:42:32.872856   16404 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 16:42:32.875764   16404 main.go:141] libmachine: () Calling .GetVersion
	I0731 16:42:32.875961   16404 main.go:141] libmachine: Using API Version  1
	I0731 16:42:32.875975   16404 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 16:42:32.876701   16404 main.go:141] libmachine: Using API Version  1
	I0731 16:42:32.876715   16404 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 16:42:32.876771   16404 main.go:141] libmachine: () Calling .GetMachineName
	I0731 16:42:32.876812   16404 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42427
	I0731 16:42:32.877310   16404 main.go:141] libmachine: () Calling .GetMachineName
	I0731 16:42:32.877372   16404 main.go:141] libmachine: (addons-190022) Calling .GetState
	I0731 16:42:32.877418   16404 main.go:141] libmachine: () Calling .GetVersion
	I0731 16:42:32.878807   16404 main.go:141] libmachine: Using API Version  1
	I0731 16:42:32.878823   16404 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 16:42:32.879080   16404 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 16:42:32.879126   16404 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 16:42:32.879217   16404 main.go:141] libmachine: () Calling .GetMachineName
	I0731 16:42:32.879803   16404 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 16:42:32.879875   16404 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 16:42:32.880171   16404 host.go:66] Checking if "addons-190022" exists ...
	I0731 16:42:32.880655   16404 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 16:42:32.880723   16404 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 16:42:32.889744   16404 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41189
	I0731 16:42:32.890696   16404 main.go:141] libmachine: () Calling .GetVersion
	I0731 16:42:32.891354   16404 main.go:141] libmachine: Using API Version  1
	I0731 16:42:32.891380   16404 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 16:42:32.891750   16404 main.go:141] libmachine: () Calling .GetMachineName
	I0731 16:42:32.891916   16404 main.go:141] libmachine: (addons-190022) Calling .GetState
	I0731 16:42:32.893574   16404 main.go:141] libmachine: (addons-190022) Calling .DriverName
	I0731 16:42:32.895893   16404 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0731 16:42:32.897138   16404 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0731 16:42:32.897157   16404 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0731 16:42:32.897178   16404 main.go:141] libmachine: (addons-190022) Calling .GetSSHHostname
	I0731 16:42:32.901176   16404 main.go:141] libmachine: (addons-190022) DBG | domain addons-190022 has defined MAC address 52:54:00:b8:3c:34 in network mk-addons-190022
	I0731 16:42:32.901605   16404 main.go:141] libmachine: (addons-190022) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b8:3c:34", ip: ""} in network mk-addons-190022: {Iface:virbr1 ExpiryTime:2024-07-31 17:41:52 +0000 UTC Type:0 Mac:52:54:00:b8:3c:34 Iaid: IPaddr:192.168.39.140 Prefix:24 Hostname:addons-190022 Clientid:01:52:54:00:b8:3c:34}
	I0731 16:42:32.901628   16404 main.go:141] libmachine: (addons-190022) DBG | domain addons-190022 has defined IP address 192.168.39.140 and MAC address 52:54:00:b8:3c:34 in network mk-addons-190022
	I0731 16:42:32.901927   16404 main.go:141] libmachine: (addons-190022) Calling .GetSSHPort
	I0731 16:42:32.902109   16404 main.go:141] libmachine: (addons-190022) Calling .GetSSHKeyPath
	I0731 16:42:32.902257   16404 main.go:141] libmachine: (addons-190022) Calling .GetSSHUsername
	I0731 16:42:32.902401   16404 sshutil.go:53] new ssh client: &{IP:192.168.39.140 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19349-8084/.minikube/machines/addons-190022/id_rsa Username:docker}
	I0731 16:42:32.905170   16404 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36297
	I0731 16:42:32.906178   16404 main.go:141] libmachine: () Calling .GetVersion
	I0731 16:42:32.906798   16404 main.go:141] libmachine: Using API Version  1
	I0731 16:42:32.906816   16404 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 16:42:32.906886   16404 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38935
	I0731 16:42:32.907230   16404 main.go:141] libmachine: () Calling .GetMachineName
	I0731 16:42:32.907411   16404 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43047
	I0731 16:42:32.907421   16404 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41191
	I0731 16:42:32.907495   16404 main.go:141] libmachine: () Calling .GetVersion
	I0731 16:42:32.907827   16404 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 16:42:32.907875   16404 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 16:42:32.908007   16404 main.go:141] libmachine: Using API Version  1
	I0731 16:42:32.908022   16404 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 16:42:32.908997   16404 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46259
	I0731 16:42:32.909010   16404 main.go:141] libmachine: () Calling .GetVersion
	I0731 16:42:32.909081   16404 main.go:141] libmachine: () Calling .GetMachineName
	I0731 16:42:32.909184   16404 main.go:141] libmachine: () Calling .GetVersion
	I0731 16:42:32.909380   16404 main.go:141] libmachine: () Calling .GetVersion
	I0731 16:42:32.909445   16404 main.go:141] libmachine: (addons-190022) Calling .GetState
	I0731 16:42:32.909548   16404 main.go:141] libmachine: Using API Version  1
	I0731 16:42:32.909561   16404 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 16:42:32.909993   16404 main.go:141] libmachine: Using API Version  1
	I0731 16:42:32.910027   16404 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 16:42:32.910233   16404 main.go:141] libmachine: Using API Version  1
	I0731 16:42:32.910260   16404 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 16:42:32.910472   16404 main.go:141] libmachine: () Calling .GetMachineName
	I0731 16:42:32.910625   16404 main.go:141] libmachine: () Calling .GetMachineName
	I0731 16:42:32.911090   16404 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 16:42:32.911096   16404 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 16:42:32.911143   16404 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 16:42:32.911252   16404 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 16:42:32.911612   16404 main.go:141] libmachine: (addons-190022) Calling .DriverName
	I0731 16:42:32.911835   16404 main.go:141] libmachine: () Calling .GetMachineName
	I0731 16:42:32.912416   16404 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 16:42:32.912445   16404 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 16:42:32.913662   16404 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.16.1
	I0731 16:42:32.915267   16404 addons.go:431] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0731 16:42:32.915332   16404 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0731 16:42:32.915350   16404 main.go:141] libmachine: (addons-190022) Calling .GetSSHHostname
	I0731 16:42:32.917483   16404 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45351
	I0731 16:42:32.917961   16404 main.go:141] libmachine: () Calling .GetVersion
	I0731 16:42:32.919041   16404 main.go:141] libmachine: Using API Version  1
	I0731 16:42:32.919061   16404 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 16:42:32.919804   16404 main.go:141] libmachine: () Calling .GetMachineName
	I0731 16:42:32.920231   16404 main.go:141] libmachine: (addons-190022) Calling .GetState
	I0731 16:42:32.921390   16404 main.go:141] libmachine: (addons-190022) DBG | domain addons-190022 has defined MAC address 52:54:00:b8:3c:34 in network mk-addons-190022
	I0731 16:42:32.921823   16404 main.go:141] libmachine: (addons-190022) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b8:3c:34", ip: ""} in network mk-addons-190022: {Iface:virbr1 ExpiryTime:2024-07-31 17:41:52 +0000 UTC Type:0 Mac:52:54:00:b8:3c:34 Iaid: IPaddr:192.168.39.140 Prefix:24 Hostname:addons-190022 Clientid:01:52:54:00:b8:3c:34}
	I0731 16:42:32.921854   16404 main.go:141] libmachine: (addons-190022) DBG | domain addons-190022 has defined IP address 192.168.39.140 and MAC address 52:54:00:b8:3c:34 in network mk-addons-190022
	I0731 16:42:32.922058   16404 main.go:141] libmachine: (addons-190022) Calling .GetSSHPort
	I0731 16:42:32.922313   16404 main.go:141] libmachine: (addons-190022) Calling .DriverName
	I0731 16:42:32.922372   16404 main.go:141] libmachine: (addons-190022) Calling .GetSSHKeyPath
	I0731 16:42:32.922650   16404 main.go:141] libmachine: (addons-190022) Calling .GetSSHUsername
	I0731 16:42:32.922802   16404 sshutil.go:53] new ssh client: &{IP:192.168.39.140 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19349-8084/.minikube/machines/addons-190022/id_rsa Username:docker}
	I0731 16:42:32.923762   16404 out.go:177]   - Using image docker.io/registry:2.8.3
	I0731 16:42:32.925352   16404 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.6
	I0731 16:42:32.927167   16404 addons.go:431] installing /etc/kubernetes/addons/registry-rc.yaml
	I0731 16:42:32.927185   16404 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I0731 16:42:32.927204   16404 main.go:141] libmachine: (addons-190022) Calling .GetSSHHostname
	I0731 16:42:32.929242   16404 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35889
	I0731 16:42:32.929720   16404 main.go:141] libmachine: () Calling .GetVersion
	I0731 16:42:32.930344   16404 main.go:141] libmachine: Using API Version  1
	I0731 16:42:32.930363   16404 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 16:42:32.930775   16404 main.go:141] libmachine: () Calling .GetMachineName
	I0731 16:42:32.931003   16404 main.go:141] libmachine: (addons-190022) Calling .GetState
	I0731 16:42:32.931623   16404 main.go:141] libmachine: (addons-190022) DBG | domain addons-190022 has defined MAC address 52:54:00:b8:3c:34 in network mk-addons-190022
	I0731 16:42:32.931708   16404 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38647
	I0731 16:42:32.932250   16404 main.go:141] libmachine: () Calling .GetVersion
	I0731 16:42:32.932798   16404 main.go:141] libmachine: (addons-190022) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b8:3c:34", ip: ""} in network mk-addons-190022: {Iface:virbr1 ExpiryTime:2024-07-31 17:41:52 +0000 UTC Type:0 Mac:52:54:00:b8:3c:34 Iaid: IPaddr:192.168.39.140 Prefix:24 Hostname:addons-190022 Clientid:01:52:54:00:b8:3c:34}
	I0731 16:42:32.932827   16404 main.go:141] libmachine: (addons-190022) DBG | domain addons-190022 has defined IP address 192.168.39.140 and MAC address 52:54:00:b8:3c:34 in network mk-addons-190022
	I0731 16:42:32.932983   16404 main.go:141] libmachine: Using API Version  1
	I0731 16:42:32.933004   16404 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 16:42:32.933010   16404 main.go:141] libmachine: (addons-190022) Calling .DriverName
	I0731 16:42:32.933064   16404 main.go:141] libmachine: (addons-190022) Calling .GetSSHPort
	I0731 16:42:32.933339   16404 main.go:141] libmachine: (addons-190022) Calling .GetSSHKeyPath
	I0731 16:42:32.933382   16404 main.go:141] libmachine: () Calling .GetMachineName
	I0731 16:42:32.933476   16404 main.go:141] libmachine: (addons-190022) Calling .GetSSHUsername
	I0731 16:42:32.933580   16404 main.go:141] libmachine: (addons-190022) Calling .GetState
	I0731 16:42:32.933638   16404 sshutil.go:53] new ssh client: &{IP:192.168.39.140 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19349-8084/.minikube/machines/addons-190022/id_rsa Username:docker}
	I0731 16:42:32.934633   16404 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.1
	I0731 16:42:32.935567   16404 main.go:141] libmachine: (addons-190022) Calling .DriverName
	I0731 16:42:32.935926   16404 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0731 16:42:32.935954   16404 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0731 16:42:32.935971   16404 main.go:141] libmachine: (addons-190022) Calling .GetSSHHostname
	I0731 16:42:32.937018   16404 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.1
	I0731 16:42:32.938570   16404 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.11.1
	I0731 16:42:32.938937   16404 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35187
	I0731 16:42:32.939305   16404 main.go:141] libmachine: (addons-190022) DBG | domain addons-190022 has defined MAC address 52:54:00:b8:3c:34 in network mk-addons-190022
	I0731 16:42:32.939367   16404 main.go:141] libmachine: () Calling .GetVersion
	I0731 16:42:32.939858   16404 main.go:141] libmachine: Using API Version  1
	I0731 16:42:32.939870   16404 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 16:42:32.940239   16404 main.go:141] libmachine: () Calling .GetMachineName
	I0731 16:42:32.940387   16404 main.go:141] libmachine: (addons-190022) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b8:3c:34", ip: ""} in network mk-addons-190022: {Iface:virbr1 ExpiryTime:2024-07-31 17:41:52 +0000 UTC Type:0 Mac:52:54:00:b8:3c:34 Iaid: IPaddr:192.168.39.140 Prefix:24 Hostname:addons-190022 Clientid:01:52:54:00:b8:3c:34}
	I0731 16:42:32.940412   16404 main.go:141] libmachine: (addons-190022) DBG | domain addons-190022 has defined IP address 192.168.39.140 and MAC address 52:54:00:b8:3c:34 in network mk-addons-190022
	I0731 16:42:32.940548   16404 main.go:141] libmachine: (addons-190022) Calling .DriverName
	I0731 16:42:32.940614   16404 main.go:141] libmachine: (addons-190022) Calling .GetSSHPort
	I0731 16:42:32.940864   16404 main.go:141] libmachine: (addons-190022) Calling .GetSSHKeyPath
	I0731 16:42:32.941076   16404 main.go:141] libmachine: (addons-190022) Calling .GetSSHUsername
	I0731 16:42:32.941177   16404 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.1
	I0731 16:42:32.941197   16404 sshutil.go:53] new ssh client: &{IP:192.168.39.140 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19349-8084/.minikube/machines/addons-190022/id_rsa Username:docker}
	I0731 16:42:32.942601   16404 addons.go:431] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0731 16:42:32.942623   16404 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I0731 16:42:32.942640   16404 main.go:141] libmachine: (addons-190022) Calling .GetSSHHostname
	I0731 16:42:32.944447   16404 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37341
	I0731 16:42:32.944727   16404 main.go:141] libmachine: () Calling .GetVersion
	I0731 16:42:32.945122   16404 main.go:141] libmachine: Using API Version  1
	I0731 16:42:32.945138   16404 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 16:42:32.945454   16404 main.go:141] libmachine: () Calling .GetMachineName
	I0731 16:42:32.945635   16404 main.go:141] libmachine: (addons-190022) Calling .GetState
	I0731 16:42:32.946962   16404 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35137
	I0731 16:42:32.947298   16404 main.go:141] libmachine: () Calling .GetVersion
	I0731 16:42:32.947630   16404 main.go:141] libmachine: Using API Version  1
	I0731 16:42:32.947641   16404 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 16:42:32.948054   16404 main.go:141] libmachine: () Calling .GetMachineName
	I0731 16:42:32.948192   16404 main.go:141] libmachine: (addons-190022) Calling .GetState
	I0731 16:42:32.950329   16404 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-190022"
	I0731 16:42:32.950359   16404 host.go:66] Checking if "addons-190022" exists ...
	I0731 16:42:32.950585   16404 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 16:42:32.950607   16404 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 16:42:32.950749   16404 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40065
	I0731 16:42:32.951560   16404 main.go:141] libmachine: (addons-190022) DBG | domain addons-190022 has defined MAC address 52:54:00:b8:3c:34 in network mk-addons-190022
	I0731 16:42:32.951586   16404 main.go:141] libmachine: () Calling .GetVersion
	I0731 16:42:32.951650   16404 main.go:141] libmachine: (addons-190022) Calling .DriverName
	I0731 16:42:32.951945   16404 main.go:141] libmachine: (addons-190022) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b8:3c:34", ip: ""} in network mk-addons-190022: {Iface:virbr1 ExpiryTime:2024-07-31 17:41:52 +0000 UTC Type:0 Mac:52:54:00:b8:3c:34 Iaid: IPaddr:192.168.39.140 Prefix:24 Hostname:addons-190022 Clientid:01:52:54:00:b8:3c:34}
	I0731 16:42:32.951961   16404 main.go:141] libmachine: (addons-190022) DBG | domain addons-190022 has defined IP address 192.168.39.140 and MAC address 52:54:00:b8:3c:34 in network mk-addons-190022
	I0731 16:42:32.952152   16404 main.go:141] libmachine: Using API Version  1
	I0731 16:42:32.952161   16404 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 16:42:32.952309   16404 main.go:141] libmachine: (addons-190022) Calling .GetSSHPort
	I0731 16:42:32.952453   16404 main.go:141] libmachine: () Calling .GetMachineName
	I0731 16:42:32.952501   16404 main.go:141] libmachine: (addons-190022) Calling .GetSSHKeyPath
	I0731 16:42:32.952622   16404 main.go:141] libmachine: (addons-190022) Calling .GetSSHUsername
	I0731 16:42:32.952660   16404 main.go:141] libmachine: (addons-190022) Calling .GetState
	I0731 16:42:32.952711   16404 sshutil.go:53] new ssh client: &{IP:192.168.39.140 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19349-8084/.minikube/machines/addons-190022/id_rsa Username:docker}
	I0731 16:42:32.953354   16404 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.5
	I0731 16:42:32.953942   16404 main.go:141] libmachine: (addons-190022) Calling .DriverName
	I0731 16:42:32.954933   16404 addons.go:431] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0731 16:42:32.954949   16404 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0731 16:42:32.954966   16404 main.go:141] libmachine: (addons-190022) Calling .GetSSHHostname
	I0731 16:42:32.955790   16404 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0731 16:42:32.956942   16404 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0731 16:42:32.956956   16404 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0731 16:42:32.956973   16404 main.go:141] libmachine: (addons-190022) Calling .GetSSHHostname
	I0731 16:42:32.958772   16404 main.go:141] libmachine: (addons-190022) DBG | domain addons-190022 has defined MAC address 52:54:00:b8:3c:34 in network mk-addons-190022
	I0731 16:42:32.959224   16404 main.go:141] libmachine: (addons-190022) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b8:3c:34", ip: ""} in network mk-addons-190022: {Iface:virbr1 ExpiryTime:2024-07-31 17:41:52 +0000 UTC Type:0 Mac:52:54:00:b8:3c:34 Iaid: IPaddr:192.168.39.140 Prefix:24 Hostname:addons-190022 Clientid:01:52:54:00:b8:3c:34}
	I0731 16:42:32.959244   16404 main.go:141] libmachine: (addons-190022) DBG | domain addons-190022 has defined IP address 192.168.39.140 and MAC address 52:54:00:b8:3c:34 in network mk-addons-190022
	I0731 16:42:32.959392   16404 main.go:141] libmachine: (addons-190022) Calling .GetSSHPort
	I0731 16:42:32.959555   16404 main.go:141] libmachine: (addons-190022) Calling .GetSSHKeyPath
	I0731 16:42:32.959701   16404 main.go:141] libmachine: (addons-190022) Calling .GetSSHUsername
	I0731 16:42:32.959808   16404 sshutil.go:53] new ssh client: &{IP:192.168.39.140 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19349-8084/.minikube/machines/addons-190022/id_rsa Username:docker}
	I0731 16:42:32.960936   16404 main.go:141] libmachine: (addons-190022) DBG | domain addons-190022 has defined MAC address 52:54:00:b8:3c:34 in network mk-addons-190022
	I0731 16:42:32.961295   16404 main.go:141] libmachine: (addons-190022) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b8:3c:34", ip: ""} in network mk-addons-190022: {Iface:virbr1 ExpiryTime:2024-07-31 17:41:52 +0000 UTC Type:0 Mac:52:54:00:b8:3c:34 Iaid: IPaddr:192.168.39.140 Prefix:24 Hostname:addons-190022 Clientid:01:52:54:00:b8:3c:34}
	I0731 16:42:32.961314   16404 main.go:141] libmachine: (addons-190022) DBG | domain addons-190022 has defined IP address 192.168.39.140 and MAC address 52:54:00:b8:3c:34 in network mk-addons-190022
	I0731 16:42:32.961559   16404 main.go:141] libmachine: (addons-190022) Calling .GetSSHPort
	I0731 16:42:32.961625   16404 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39231
	I0731 16:42:32.961763   16404 main.go:141] libmachine: (addons-190022) Calling .GetSSHKeyPath
	I0731 16:42:32.961871   16404 main.go:141] libmachine: (addons-190022) Calling .GetSSHUsername
	I0731 16:42:32.961933   16404 main.go:141] libmachine: () Calling .GetVersion
	I0731 16:42:32.962028   16404 sshutil.go:53] new ssh client: &{IP:192.168.39.140 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19349-8084/.minikube/machines/addons-190022/id_rsa Username:docker}
	I0731 16:42:32.962392   16404 main.go:141] libmachine: Using API Version  1
	I0731 16:42:32.962416   16404 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 16:42:32.962906   16404 main.go:141] libmachine: () Calling .GetMachineName
	I0731 16:42:32.963090   16404 main.go:141] libmachine: (addons-190022) Calling .GetState
	I0731 16:42:32.966429   16404 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37857
	I0731 16:42:32.966559   16404 main.go:141] libmachine: (addons-190022) Calling .DriverName
	I0731 16:42:32.967075   16404 main.go:141] libmachine: () Calling .GetVersion
	I0731 16:42:32.967581   16404 main.go:141] libmachine: Using API Version  1
	I0731 16:42:32.967604   16404 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 16:42:32.967992   16404 main.go:141] libmachine: () Calling .GetMachineName
	I0731 16:42:32.968211   16404 main.go:141] libmachine: (addons-190022) Calling .GetState
	I0731 16:42:32.968852   16404 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0731 16:42:32.969698   16404 main.go:141] libmachine: (addons-190022) Calling .DriverName
	I0731 16:42:32.971274   16404 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.30.0
	I0731 16:42:32.971283   16404 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0731 16:42:32.972580   16404 addons.go:431] installing /etc/kubernetes/addons/ig-namespace.yaml
	I0731 16:42:32.972597   16404 ssh_runner.go:362] scp inspektor-gadget/ig-namespace.yaml --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I0731 16:42:32.972621   16404 main.go:141] libmachine: (addons-190022) Calling .GetSSHHostname
	I0731 16:42:32.974187   16404 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0731 16:42:32.974890   16404 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46221
	I0731 16:42:32.976123   16404 main.go:141] libmachine: (addons-190022) DBG | domain addons-190022 has defined MAC address 52:54:00:b8:3c:34 in network mk-addons-190022
	I0731 16:42:32.976508   16404 main.go:141] libmachine: (addons-190022) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b8:3c:34", ip: ""} in network mk-addons-190022: {Iface:virbr1 ExpiryTime:2024-07-31 17:41:52 +0000 UTC Type:0 Mac:52:54:00:b8:3c:34 Iaid: IPaddr:192.168.39.140 Prefix:24 Hostname:addons-190022 Clientid:01:52:54:00:b8:3c:34}
	I0731 16:42:32.976531   16404 main.go:141] libmachine: (addons-190022) DBG | domain addons-190022 has defined IP address 192.168.39.140 and MAC address 52:54:00:b8:3c:34 in network mk-addons-190022
	I0731 16:42:32.976802   16404 main.go:141] libmachine: (addons-190022) Calling .GetSSHPort
	I0731 16:42:32.976940   16404 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0731 16:42:32.976982   16404 main.go:141] libmachine: (addons-190022) Calling .GetSSHKeyPath
	I0731 16:42:32.977133   16404 main.go:141] libmachine: (addons-190022) Calling .GetSSHUsername
	I0731 16:42:32.977268   16404 sshutil.go:53] new ssh client: &{IP:192.168.39.140 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19349-8084/.minikube/machines/addons-190022/id_rsa Username:docker}
	I0731 16:42:32.979211   16404 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0731 16:42:32.979654   16404 main.go:141] libmachine: () Calling .GetVersion
	I0731 16:42:32.979701   16404 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35989
	I0731 16:42:32.980046   16404 main.go:141] libmachine: () Calling .GetVersion
	I0731 16:42:32.980249   16404 main.go:141] libmachine: Using API Version  1
	I0731 16:42:32.980269   16404 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 16:42:32.980638   16404 main.go:141] libmachine: () Calling .GetMachineName
	I0731 16:42:32.980790   16404 main.go:141] libmachine: Using API Version  1
	I0731 16:42:32.980809   16404 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 16:42:32.981255   16404 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 16:42:32.981281   16404 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 16:42:32.981440   16404 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40659
	I0731 16:42:32.981577   16404 main.go:141] libmachine: () Calling .GetMachineName
	I0731 16:42:32.981648   16404 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0731 16:42:32.981874   16404 main.go:141] libmachine: (addons-190022) Calling .GetState
	I0731 16:42:32.981933   16404 main.go:141] libmachine: () Calling .GetVersion
	I0731 16:42:32.982190   16404 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45857
	I0731 16:42:32.982388   16404 main.go:141] libmachine: Using API Version  1
	I0731 16:42:32.982403   16404 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 16:42:32.982442   16404 main.go:141] libmachine: () Calling .GetVersion
	I0731 16:42:32.982697   16404 main.go:141] libmachine: () Calling .GetMachineName
	I0731 16:42:32.983134   16404 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 16:42:32.983169   16404 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 16:42:32.983282   16404 main.go:141] libmachine: Using API Version  1
	I0731 16:42:32.983301   16404 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 16:42:32.983706   16404 main.go:141] libmachine: () Calling .GetMachineName
	I0731 16:42:32.983986   16404 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0731 16:42:32.984109   16404 main.go:141] libmachine: (addons-190022) Calling .GetState
	I0731 16:42:32.984136   16404 main.go:141] libmachine: (addons-190022) Calling .DriverName
	I0731 16:42:32.985892   16404 main.go:141] libmachine: (addons-190022) Calling .DriverName
	I0731 16:42:32.986094   16404 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0731 16:42:32.986146   16404 main.go:141] libmachine: Making call to close driver server
	I0731 16:42:32.986159   16404 main.go:141] libmachine: (addons-190022) Calling .Close
	I0731 16:42:32.986182   16404 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.22
	I0731 16:42:32.986370   16404 main.go:141] libmachine: Successfully made call to close driver server
	I0731 16:42:32.986386   16404 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 16:42:32.986395   16404 main.go:141] libmachine: Making call to close driver server
	I0731 16:42:32.986406   16404 main.go:141] libmachine: (addons-190022) Calling .Close
	I0731 16:42:32.986607   16404 main.go:141] libmachine: Successfully made call to close driver server
	I0731 16:42:32.986625   16404 main.go:141] libmachine: Making call to close connection to plugin binary
	W0731 16:42:32.986691   16404 out.go:239] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I0731 16:42:32.987376   16404 addons.go:431] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0731 16:42:32.987394   16404 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0731 16:42:32.987414   16404 main.go:141] libmachine: (addons-190022) Calling .GetSSHHostname
	I0731 16:42:32.987596   16404 addons.go:431] installing /etc/kubernetes/addons/deployment.yaml
	I0731 16:42:32.987608   16404 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0731 16:42:32.987623   16404 main.go:141] libmachine: (addons-190022) Calling .GetSSHHostname
	I0731 16:42:32.991533   16404 main.go:141] libmachine: (addons-190022) DBG | domain addons-190022 has defined MAC address 52:54:00:b8:3c:34 in network mk-addons-190022
	I0731 16:42:32.991976   16404 main.go:141] libmachine: (addons-190022) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b8:3c:34", ip: ""} in network mk-addons-190022: {Iface:virbr1 ExpiryTime:2024-07-31 17:41:52 +0000 UTC Type:0 Mac:52:54:00:b8:3c:34 Iaid: IPaddr:192.168.39.140 Prefix:24 Hostname:addons-190022 Clientid:01:52:54:00:b8:3c:34}
	I0731 16:42:32.991995   16404 main.go:141] libmachine: (addons-190022) DBG | domain addons-190022 has defined IP address 192.168.39.140 and MAC address 52:54:00:b8:3c:34 in network mk-addons-190022
	I0731 16:42:32.992200   16404 main.go:141] libmachine: (addons-190022) Calling .GetSSHPort
	I0731 16:42:32.992261   16404 main.go:141] libmachine: (addons-190022) DBG | domain addons-190022 has defined MAC address 52:54:00:b8:3c:34 in network mk-addons-190022
	I0731 16:42:32.992418   16404 main.go:141] libmachine: (addons-190022) Calling .GetSSHKeyPath
	I0731 16:42:32.992658   16404 main.go:141] libmachine: (addons-190022) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b8:3c:34", ip: ""} in network mk-addons-190022: {Iface:virbr1 ExpiryTime:2024-07-31 17:41:52 +0000 UTC Type:0 Mac:52:54:00:b8:3c:34 Iaid: IPaddr:192.168.39.140 Prefix:24 Hostname:addons-190022 Clientid:01:52:54:00:b8:3c:34}
	I0731 16:42:32.992683   16404 main.go:141] libmachine: (addons-190022) DBG | domain addons-190022 has defined IP address 192.168.39.140 and MAC address 52:54:00:b8:3c:34 in network mk-addons-190022
	I0731 16:42:32.992702   16404 main.go:141] libmachine: (addons-190022) Calling .GetSSHUsername
	I0731 16:42:32.992735   16404 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33793
	I0731 16:42:32.992868   16404 main.go:141] libmachine: (addons-190022) Calling .GetSSHPort
	I0731 16:42:32.993133   16404 sshutil.go:53] new ssh client: &{IP:192.168.39.140 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19349-8084/.minikube/machines/addons-190022/id_rsa Username:docker}
	I0731 16:42:32.993337   16404 main.go:141] libmachine: () Calling .GetVersion
	I0731 16:42:32.993388   16404 main.go:141] libmachine: (addons-190022) Calling .GetSSHKeyPath
	I0731 16:42:32.993573   16404 main.go:141] libmachine: (addons-190022) Calling .GetSSHUsername
	I0731 16:42:32.993730   16404 sshutil.go:53] new ssh client: &{IP:192.168.39.140 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19349-8084/.minikube/machines/addons-190022/id_rsa Username:docker}
	I0731 16:42:32.993975   16404 main.go:141] libmachine: Using API Version  1
	I0731 16:42:32.993984   16404 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 16:42:32.994296   16404 main.go:141] libmachine: () Calling .GetMachineName
	I0731 16:42:32.994514   16404 main.go:141] libmachine: (addons-190022) Calling .GetState
	I0731 16:42:32.996304   16404 main.go:141] libmachine: (addons-190022) Calling .DriverName
	I0731 16:42:32.996883   16404 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40521
	I0731 16:42:32.997223   16404 main.go:141] libmachine: () Calling .GetVersion
	I0731 16:42:32.997626   16404 main.go:141] libmachine: Using API Version  1
	I0731 16:42:32.997637   16404 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 16:42:32.998130   16404 main.go:141] libmachine: () Calling .GetMachineName
	I0731 16:42:32.998520   16404 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 16:42:32.998558   16404 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 16:42:32.998685   16404 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.3
	I0731 16:42:32.999986   16404 addons.go:431] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0731 16:42:33.000003   16404 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0731 16:42:33.000020   16404 main.go:141] libmachine: (addons-190022) Calling .GetSSHHostname
	I0731 16:42:33.000592   16404 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37877
	I0731 16:42:33.000955   16404 main.go:141] libmachine: () Calling .GetVersion
	I0731 16:42:33.001354   16404 main.go:141] libmachine: Using API Version  1
	I0731 16:42:33.001371   16404 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 16:42:33.001715   16404 main.go:141] libmachine: () Calling .GetMachineName
	I0731 16:42:33.001882   16404 main.go:141] libmachine: (addons-190022) Calling .GetState
	I0731 16:42:33.003837   16404 main.go:141] libmachine: (addons-190022) Calling .DriverName
	I0731 16:42:33.003977   16404 main.go:141] libmachine: (addons-190022) DBG | domain addons-190022 has defined MAC address 52:54:00:b8:3c:34 in network mk-addons-190022
	I0731 16:42:33.004436   16404 main.go:141] libmachine: (addons-190022) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b8:3c:34", ip: ""} in network mk-addons-190022: {Iface:virbr1 ExpiryTime:2024-07-31 17:41:52 +0000 UTC Type:0 Mac:52:54:00:b8:3c:34 Iaid: IPaddr:192.168.39.140 Prefix:24 Hostname:addons-190022 Clientid:01:52:54:00:b8:3c:34}
	I0731 16:42:33.004456   16404 main.go:141] libmachine: (addons-190022) DBG | domain addons-190022 has defined IP address 192.168.39.140 and MAC address 52:54:00:b8:3c:34 in network mk-addons-190022
	I0731 16:42:33.004660   16404 main.go:141] libmachine: (addons-190022) Calling .GetSSHPort
	I0731 16:42:33.004882   16404 main.go:141] libmachine: (addons-190022) Calling .GetSSHKeyPath
	I0731 16:42:33.005062   16404 main.go:141] libmachine: (addons-190022) Calling .GetSSHUsername
	I0731 16:42:33.005161   16404 sshutil.go:53] new ssh client: &{IP:192.168.39.140 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19349-8084/.minikube/machines/addons-190022/id_rsa Username:docker}
	I0731 16:42:33.005845   16404 out.go:177]   - Using image ghcr.io/helm/tiller:v2.17.0
	I0731 16:42:33.007017   16404 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33591
	I0731 16:42:33.007234   16404 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-dp.yaml
	I0731 16:42:33.007249   16404 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-dp.yaml (2422 bytes)
	I0731 16:42:33.007266   16404 main.go:141] libmachine: (addons-190022) Calling .GetSSHHostname
	I0731 16:42:33.007336   16404 main.go:141] libmachine: () Calling .GetVersion
	I0731 16:42:33.007904   16404 main.go:141] libmachine: Using API Version  1
	I0731 16:42:33.007921   16404 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 16:42:33.008218   16404 main.go:141] libmachine: () Calling .GetMachineName
	I0731 16:42:33.008436   16404 main.go:141] libmachine: (addons-190022) Calling .GetState
	I0731 16:42:33.010028   16404 main.go:141] libmachine: (addons-190022) Calling .DriverName
	I0731 16:42:33.010245   16404 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0731 16:42:33.010258   16404 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0731 16:42:33.010272   16404 main.go:141] libmachine: (addons-190022) Calling .GetSSHHostname
	I0731 16:42:33.010369   16404 main.go:141] libmachine: (addons-190022) DBG | domain addons-190022 has defined MAC address 52:54:00:b8:3c:34 in network mk-addons-190022
	I0731 16:42:33.010918   16404 main.go:141] libmachine: (addons-190022) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b8:3c:34", ip: ""} in network mk-addons-190022: {Iface:virbr1 ExpiryTime:2024-07-31 17:41:52 +0000 UTC Type:0 Mac:52:54:00:b8:3c:34 Iaid: IPaddr:192.168.39.140 Prefix:24 Hostname:addons-190022 Clientid:01:52:54:00:b8:3c:34}
	I0731 16:42:33.010962   16404 main.go:141] libmachine: (addons-190022) DBG | domain addons-190022 has defined IP address 192.168.39.140 and MAC address 52:54:00:b8:3c:34 in network mk-addons-190022
	I0731 16:42:33.011070   16404 main.go:141] libmachine: (addons-190022) Calling .GetSSHPort
	I0731 16:42:33.011357   16404 main.go:141] libmachine: (addons-190022) Calling .GetSSHKeyPath
	I0731 16:42:33.011567   16404 main.go:141] libmachine: (addons-190022) Calling .GetSSHUsername
	I0731 16:42:33.011676   16404 sshutil.go:53] new ssh client: &{IP:192.168.39.140 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19349-8084/.minikube/machines/addons-190022/id_rsa Username:docker}
	I0731 16:42:33.012961   16404 main.go:141] libmachine: (addons-190022) DBG | domain addons-190022 has defined MAC address 52:54:00:b8:3c:34 in network mk-addons-190022
	I0731 16:42:33.013243   16404 main.go:141] libmachine: (addons-190022) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b8:3c:34", ip: ""} in network mk-addons-190022: {Iface:virbr1 ExpiryTime:2024-07-31 17:41:52 +0000 UTC Type:0 Mac:52:54:00:b8:3c:34 Iaid: IPaddr:192.168.39.140 Prefix:24 Hostname:addons-190022 Clientid:01:52:54:00:b8:3c:34}
	I0731 16:42:33.013263   16404 main.go:141] libmachine: (addons-190022) DBG | domain addons-190022 has defined IP address 192.168.39.140 and MAC address 52:54:00:b8:3c:34 in network mk-addons-190022
	I0731 16:42:33.013390   16404 main.go:141] libmachine: (addons-190022) Calling .GetSSHPort
	I0731 16:42:33.013567   16404 main.go:141] libmachine: (addons-190022) Calling .GetSSHKeyPath
	I0731 16:42:33.013673   16404 main.go:141] libmachine: (addons-190022) Calling .GetSSHUsername
	I0731 16:42:33.013846   16404 sshutil.go:53] new ssh client: &{IP:192.168.39.140 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19349-8084/.minikube/machines/addons-190022/id_rsa Username:docker}
	W0731 16:42:33.015841   16404 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:44040->192.168.39.140:22: read: connection reset by peer
	I0731 16:42:33.015864   16404 retry.go:31] will retry after 247.440863ms: ssh: handshake failed: read tcp 192.168.39.1:44040->192.168.39.140:22: read: connection reset by peer
	I0731 16:42:33.020324   16404 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36003
	I0731 16:42:33.020732   16404 main.go:141] libmachine: () Calling .GetVersion
	I0731 16:42:33.021427   16404 main.go:141] libmachine: Using API Version  1
	I0731 16:42:33.021449   16404 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 16:42:33.021743   16404 main.go:141] libmachine: () Calling .GetMachineName
	I0731 16:42:33.021914   16404 main.go:141] libmachine: (addons-190022) Calling .GetState
	I0731 16:42:33.023331   16404 main.go:141] libmachine: (addons-190022) Calling .DriverName
	I0731 16:42:33.025027   16404 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0731 16:42:33.026340   16404 out.go:177]   - Using image docker.io/busybox:stable
	I0731 16:42:33.027567   16404 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0731 16:42:33.027587   16404 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0731 16:42:33.027607   16404 main.go:141] libmachine: (addons-190022) Calling .GetSSHHostname
	I0731 16:42:33.030807   16404 main.go:141] libmachine: (addons-190022) DBG | domain addons-190022 has defined MAC address 52:54:00:b8:3c:34 in network mk-addons-190022
	I0731 16:42:33.030893   16404 main.go:141] libmachine: (addons-190022) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b8:3c:34", ip: ""} in network mk-addons-190022: {Iface:virbr1 ExpiryTime:2024-07-31 17:41:52 +0000 UTC Type:0 Mac:52:54:00:b8:3c:34 Iaid: IPaddr:192.168.39.140 Prefix:24 Hostname:addons-190022 Clientid:01:52:54:00:b8:3c:34}
	I0731 16:42:33.030922   16404 main.go:141] libmachine: (addons-190022) DBG | domain addons-190022 has defined IP address 192.168.39.140 and MAC address 52:54:00:b8:3c:34 in network mk-addons-190022
	I0731 16:42:33.031091   16404 main.go:141] libmachine: (addons-190022) Calling .GetSSHPort
	I0731 16:42:33.031278   16404 main.go:141] libmachine: (addons-190022) Calling .GetSSHKeyPath
	I0731 16:42:33.031446   16404 main.go:141] libmachine: (addons-190022) Calling .GetSSHUsername
	I0731 16:42:33.031583   16404 sshutil.go:53] new ssh client: &{IP:192.168.39.140 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19349-8084/.minikube/machines/addons-190022/id_rsa Username:docker}
	I0731 16:42:33.265615   16404 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0731 16:42:33.332382   16404 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0731 16:42:33.332405   16404 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0731 16:42:33.339833   16404 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0731 16:42:33.339854   16404 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0731 16:42:33.349998   16404 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0731 16:42:33.353012   16404 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0731 16:42:33.367606   16404 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0731 16:42:33.367622   16404 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0731 16:42:33.378529   16404 addons.go:431] installing /etc/kubernetes/addons/registry-svc.yaml
	I0731 16:42:33.378546   16404 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0731 16:42:33.396639   16404 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-rbac.yaml
	I0731 16:42:33.396656   16404 ssh_runner.go:362] scp helm-tiller/helm-tiller-rbac.yaml --> /etc/kubernetes/addons/helm-tiller-rbac.yaml (1188 bytes)
	I0731 16:42:33.426308   16404 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0731 16:42:33.438016   16404 addons.go:431] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0731 16:42:33.438038   16404 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0731 16:42:33.441270   16404 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0731 16:42:33.465712   16404 addons.go:431] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0731 16:42:33.465735   16404 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0731 16:42:33.471037   16404 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0731 16:42:33.471056   16404 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0731 16:42:33.493278   16404 addons.go:431] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0731 16:42:33.493298   16404 ssh_runner.go:362] scp inspektor-gadget/ig-serviceaccount.yaml --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I0731 16:42:33.553308   16404 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0731 16:42:33.594530   16404 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0731 16:42:33.594562   16404 ssh_runner.go:362] scp helm-tiller/helm-tiller-svc.yaml --> /etc/kubernetes/addons/helm-tiller-svc.yaml (951 bytes)
	I0731 16:42:33.606409   16404 addons.go:431] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0731 16:42:33.606428   16404 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0731 16:42:33.628927   16404 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0731 16:42:33.628949   16404 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0731 16:42:33.632034   16404 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0731 16:42:33.632055   16404 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0731 16:42:33.648116   16404 addons.go:431] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0731 16:42:33.648139   16404 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0731 16:42:33.669676   16404 addons.go:431] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0731 16:42:33.669699   16404 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0731 16:42:33.696925   16404 addons.go:431] installing /etc/kubernetes/addons/ig-role.yaml
	I0731 16:42:33.696946   16404 ssh_runner.go:362] scp inspektor-gadget/ig-role.yaml --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I0731 16:42:33.736844   16404 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0731 16:42:33.770027   16404 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0731 16:42:33.787306   16404 addons.go:431] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0731 16:42:33.787334   16404 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0731 16:42:33.818265   16404 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0731 16:42:33.826483   16404 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0731 16:42:33.826508   16404 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0731 16:42:33.927896   16404 addons.go:431] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I0731 16:42:33.927923   16404 ssh_runner.go:362] scp inspektor-gadget/ig-rolebinding.yaml --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I0731 16:42:33.932677   16404 addons.go:431] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0731 16:42:33.932697   16404 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0731 16:42:33.944186   16404 addons.go:431] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0731 16:42:33.944208   16404 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0731 16:42:33.999770   16404 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0731 16:42:34.032781   16404 addons.go:431] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0731 16:42:34.032812   16404 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0731 16:42:34.104115   16404 addons.go:431] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0731 16:42:34.104147   16404 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0731 16:42:34.135637   16404 addons.go:431] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0731 16:42:34.135665   16404 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0731 16:42:34.157568   16404 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I0731 16:42:34.157594   16404 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrole.yaml --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I0731 16:42:34.271258   16404 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0731 16:42:34.274541   16404 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0731 16:42:34.392662   16404 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0731 16:42:34.392694   16404 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrolebinding.yaml --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I0731 16:42:34.419055   16404 addons.go:431] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0731 16:42:34.419080   16404 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0731 16:42:34.669547   16404 addons.go:431] installing /etc/kubernetes/addons/ig-crd.yaml
	I0731 16:42:34.669573   16404 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I0731 16:42:34.759785   16404 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0731 16:42:34.759807   16404 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0731 16:42:34.983707   16404 addons.go:431] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I0731 16:42:34.983726   16404 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7735 bytes)
	I0731 16:42:34.996570   16404 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0731 16:42:34.996594   16404 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0731 16:42:35.046379   16404 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (1.780727917s)
	I0731 16:42:35.046437   16404 main.go:141] libmachine: Making call to close driver server
	I0731 16:42:35.046436   16404 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (1.714001981s)
	I0731 16:42:35.046451   16404 main.go:141] libmachine: (addons-190022) Calling .Close
	I0731 16:42:35.046769   16404 main.go:141] libmachine: (addons-190022) DBG | Closing plugin on server side
	I0731 16:42:35.046830   16404 main.go:141] libmachine: Successfully made call to close driver server
	I0731 16:42:35.046843   16404 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 16:42:35.046857   16404 main.go:141] libmachine: Making call to close driver server
	I0731 16:42:35.046868   16404 main.go:141] libmachine: (addons-190022) Calling .Close
	I0731 16:42:35.047119   16404 main.go:141] libmachine: Successfully made call to close driver server
	I0731 16:42:35.047136   16404 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 16:42:35.047424   16404 node_ready.go:35] waiting up to 6m0s for node "addons-190022" to be "Ready" ...
	I0731 16:42:35.054837   16404 node_ready.go:49] node "addons-190022" has status "Ready":"True"
	I0731 16:42:35.054861   16404 node_ready.go:38] duration metric: took 7.413255ms for node "addons-190022" to be "Ready" ...
	I0731 16:42:35.054871   16404 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0731 16:42:35.116799   16404 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I0731 16:42:35.124411   16404 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-5xsd7" in "kube-system" namespace to be "Ready" ...
	I0731 16:42:35.332733   16404 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0731 16:42:35.332763   16404 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0731 16:42:35.540351   16404 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0731 16:42:35.540379   16404 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0731 16:42:35.858012   16404 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (2.525576296s)
	I0731 16:42:35.858053   16404 start.go:971] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
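For context, the ConfigMap edit that just completed rewrites the CoreDNS Corefile in place: the sed pipeline shown above inserts a hosts block ahead of the existing forward directive (and a log directive ahead of errors), then feeds the result back through kubectl replace. A minimal illustration of the resulting Corefile fragment, reconstructed from the sed expression rather than copied from the cluster, would look like:

	        hosts {
	           192.168.39.1 host.minikube.internal
	           fallthrough
	        }
	        forward . /etc/resolv.conf

That inserted hosts entry is what start.go reports as the host.minikube.internal record being injected into CoreDNS's ConfigMap.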
	I0731 16:42:35.999386   16404 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0731 16:42:35.999414   16404 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0731 16:42:36.332290   16404 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0731 16:42:36.369383   16404 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-190022" context rescaled to 1 replicas
	I0731 16:42:37.294609   16404 pod_ready.go:102] pod "coredns-7db6d8ff4d-5xsd7" in "kube-system" namespace has status "Ready":"False"
	I0731 16:42:39.132206   16404 pod_ready.go:92] pod "coredns-7db6d8ff4d-5xsd7" in "kube-system" namespace has status "Ready":"True"
	I0731 16:42:39.132229   16404 pod_ready.go:81] duration metric: took 4.007792447s for pod "coredns-7db6d8ff4d-5xsd7" in "kube-system" namespace to be "Ready" ...
	I0731 16:42:39.132238   16404 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-dvscb" in "kube-system" namespace to be "Ready" ...
	I0731 16:42:39.143992   16404 pod_ready.go:92] pod "coredns-7db6d8ff4d-dvscb" in "kube-system" namespace has status "Ready":"True"
	I0731 16:42:39.144010   16404 pod_ready.go:81] duration metric: took 11.767028ms for pod "coredns-7db6d8ff4d-dvscb" in "kube-system" namespace to be "Ready" ...
	I0731 16:42:39.144018   16404 pod_ready.go:78] waiting up to 6m0s for pod "etcd-addons-190022" in "kube-system" namespace to be "Ready" ...
	I0731 16:42:39.151975   16404 pod_ready.go:92] pod "etcd-addons-190022" in "kube-system" namespace has status "Ready":"True"
	I0731 16:42:39.152004   16404 pod_ready.go:81] duration metric: took 7.978626ms for pod "etcd-addons-190022" in "kube-system" namespace to be "Ready" ...
	I0731 16:42:39.152016   16404 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-addons-190022" in "kube-system" namespace to be "Ready" ...
	I0731 16:42:39.162470   16404 pod_ready.go:92] pod "kube-apiserver-addons-190022" in "kube-system" namespace has status "Ready":"True"
	I0731 16:42:39.162495   16404 pod_ready.go:81] duration metric: took 10.471623ms for pod "kube-apiserver-addons-190022" in "kube-system" namespace to be "Ready" ...
	I0731 16:42:39.162507   16404 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-addons-190022" in "kube-system" namespace to be "Ready" ...
	I0731 16:42:39.172680   16404 pod_ready.go:92] pod "kube-controller-manager-addons-190022" in "kube-system" namespace has status "Ready":"True"
	I0731 16:42:39.172708   16404 pod_ready.go:81] duration metric: took 10.192055ms for pod "kube-controller-manager-addons-190022" in "kube-system" namespace to be "Ready" ...
	I0731 16:42:39.172721   16404 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-p46dc" in "kube-system" namespace to be "Ready" ...
	I0731 16:42:39.532905   16404 pod_ready.go:92] pod "kube-proxy-p46dc" in "kube-system" namespace has status "Ready":"True"
	I0731 16:42:39.532940   16404 pod_ready.go:81] duration metric: took 360.210191ms for pod "kube-proxy-p46dc" in "kube-system" namespace to be "Ready" ...
	I0731 16:42:39.532954   16404 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-addons-190022" in "kube-system" namespace to be "Ready" ...
	I0731 16:42:39.949713   16404 pod_ready.go:92] pod "kube-scheduler-addons-190022" in "kube-system" namespace has status "Ready":"True"
	I0731 16:42:39.949738   16404 pod_ready.go:81] duration metric: took 416.77646ms for pod "kube-scheduler-addons-190022" in "kube-system" namespace to be "Ready" ...
	I0731 16:42:39.949748   16404 pod_ready.go:78] waiting up to 6m0s for pod "nvidia-device-plugin-daemonset-zcd67" in "kube-system" namespace to be "Ready" ...
	I0731 16:42:39.961358   16404 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0731 16:42:39.961402   16404 main.go:141] libmachine: (addons-190022) Calling .GetSSHHostname
	I0731 16:42:39.964531   16404 main.go:141] libmachine: (addons-190022) DBG | domain addons-190022 has defined MAC address 52:54:00:b8:3c:34 in network mk-addons-190022
	I0731 16:42:39.964965   16404 main.go:141] libmachine: (addons-190022) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b8:3c:34", ip: ""} in network mk-addons-190022: {Iface:virbr1 ExpiryTime:2024-07-31 17:41:52 +0000 UTC Type:0 Mac:52:54:00:b8:3c:34 Iaid: IPaddr:192.168.39.140 Prefix:24 Hostname:addons-190022 Clientid:01:52:54:00:b8:3c:34}
	I0731 16:42:39.965027   16404 main.go:141] libmachine: (addons-190022) DBG | domain addons-190022 has defined IP address 192.168.39.140 and MAC address 52:54:00:b8:3c:34 in network mk-addons-190022
	I0731 16:42:39.965180   16404 main.go:141] libmachine: (addons-190022) Calling .GetSSHPort
	I0731 16:42:39.965376   16404 main.go:141] libmachine: (addons-190022) Calling .GetSSHKeyPath
	I0731 16:42:39.965578   16404 main.go:141] libmachine: (addons-190022) Calling .GetSSHUsername
	I0731 16:42:39.965711   16404 sshutil.go:53] new ssh client: &{IP:192.168.39.140 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19349-8084/.minikube/machines/addons-190022/id_rsa Username:docker}
	I0731 16:42:40.180353   16404 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0731 16:42:40.235783   16404 addons.go:234] Setting addon gcp-auth=true in "addons-190022"
	I0731 16:42:40.235827   16404 host.go:66] Checking if "addons-190022" exists ...
	I0731 16:42:40.236154   16404 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 16:42:40.236183   16404 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 16:42:40.252038   16404 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37795
	I0731 16:42:40.252490   16404 main.go:141] libmachine: () Calling .GetVersion
	I0731 16:42:40.252967   16404 main.go:141] libmachine: Using API Version  1
	I0731 16:42:40.252988   16404 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 16:42:40.253281   16404 main.go:141] libmachine: () Calling .GetMachineName
	I0731 16:42:40.253725   16404 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 16:42:40.253753   16404 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 16:42:40.268706   16404 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37685
	I0731 16:42:40.269067   16404 main.go:141] libmachine: () Calling .GetVersion
	I0731 16:42:40.269508   16404 main.go:141] libmachine: Using API Version  1
	I0731 16:42:40.269527   16404 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 16:42:40.269862   16404 main.go:141] libmachine: () Calling .GetMachineName
	I0731 16:42:40.270109   16404 main.go:141] libmachine: (addons-190022) Calling .GetState
	I0731 16:42:40.271777   16404 main.go:141] libmachine: (addons-190022) Calling .DriverName
	I0731 16:42:40.271995   16404 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0731 16:42:40.272021   16404 main.go:141] libmachine: (addons-190022) Calling .GetSSHHostname
	I0731 16:42:40.274395   16404 main.go:141] libmachine: (addons-190022) DBG | domain addons-190022 has defined MAC address 52:54:00:b8:3c:34 in network mk-addons-190022
	I0731 16:42:40.274736   16404 main.go:141] libmachine: (addons-190022) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b8:3c:34", ip: ""} in network mk-addons-190022: {Iface:virbr1 ExpiryTime:2024-07-31 17:41:52 +0000 UTC Type:0 Mac:52:54:00:b8:3c:34 Iaid: IPaddr:192.168.39.140 Prefix:24 Hostname:addons-190022 Clientid:01:52:54:00:b8:3c:34}
	I0731 16:42:40.274758   16404 main.go:141] libmachine: (addons-190022) DBG | domain addons-190022 has defined IP address 192.168.39.140 and MAC address 52:54:00:b8:3c:34 in network mk-addons-190022
	I0731 16:42:40.274918   16404 main.go:141] libmachine: (addons-190022) Calling .GetSSHPort
	I0731 16:42:40.275082   16404 main.go:141] libmachine: (addons-190022) Calling .GetSSHKeyPath
	I0731 16:42:40.275298   16404 main.go:141] libmachine: (addons-190022) Calling .GetSSHUsername
	I0731 16:42:40.275461   16404 sshutil.go:53] new ssh client: &{IP:192.168.39.140 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19349-8084/.minikube/machines/addons-190022/id_rsa Username:docker}
	I0731 16:42:41.236374   16404 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (7.88634157s)
	I0731 16:42:41.236419   16404 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (7.8833839s)
	I0731 16:42:41.236432   16404 main.go:141] libmachine: Making call to close driver server
	I0731 16:42:41.236448   16404 main.go:141] libmachine: (addons-190022) Calling .Close
	I0731 16:42:41.236451   16404 main.go:141] libmachine: Making call to close driver server
	I0731 16:42:41.236462   16404 main.go:141] libmachine: (addons-190022) Calling .Close
	I0731 16:42:41.236561   16404 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (7.810214787s)
	I0731 16:42:41.236619   16404 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (7.795327908s)
	I0731 16:42:41.236641   16404 main.go:141] libmachine: Making call to close driver server
	I0731 16:42:41.236655   16404 main.go:141] libmachine: (addons-190022) Calling .Close
	I0731 16:42:41.236684   16404 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (7.683348457s)
	I0731 16:42:41.236709   16404 main.go:141] libmachine: Making call to close driver server
	I0731 16:42:41.236722   16404 main.go:141] libmachine: (addons-190022) Calling .Close
	I0731 16:42:41.236728   16404 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (7.499860616s)
	I0731 16:42:41.236741   16404 main.go:141] libmachine: Making call to close driver server
	I0731 16:42:41.236748   16404 main.go:141] libmachine: (addons-190022) Calling .Close
	I0731 16:42:41.236797   16404 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml: (7.466732589s)
	I0731 16:42:41.236804   16404 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (7.418514128s)
	I0731 16:42:41.236820   16404 main.go:141] libmachine: Making call to close driver server
	I0731 16:42:41.236823   16404 main.go:141] libmachine: Making call to close driver server
	I0731 16:42:41.236830   16404 main.go:141] libmachine: (addons-190022) Calling .Close
	I0731 16:42:41.236834   16404 main.go:141] libmachine: (addons-190022) Calling .Close
	I0731 16:42:41.236917   16404 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (7.237119422s)
	I0731 16:42:41.236933   16404 main.go:141] libmachine: Making call to close driver server
	I0731 16:42:41.236949   16404 main.go:141] libmachine: (addons-190022) Calling .Close
	I0731 16:42:41.237025   16404 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (6.965738174s)
	I0731 16:42:41.237052   16404 main.go:141] libmachine: Making call to close driver server
	I0731 16:42:41.237062   16404 main.go:141] libmachine: (addons-190022) Calling .Close
	I0731 16:42:41.237164   16404 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (6.962592174s)
	W0731 16:42:41.237208   16404 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0731 16:42:41.237242   16404 retry.go:31] will retry after 213.271537ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
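For context, the failure being retried above is an ordering race rather than a broken manifest: the VolumeSnapshotClass CRD is created by the same kubectl apply batch, and the csi-hostpath-snapclass object that depends on it is rejected until the API server has established the new CRD, hence "ensure CRDs are installed first". minikube handles this by retrying the whole apply (retry.go above). As a hedged manual equivalent only, assuming kubectl is on PATH and the CRD name matches the volumesnapshotclasses manifest listed above, one could wait for establishment before re-applying the dependent object:

	kubectl wait --for=condition=established --timeout=60s crd/volumesnapshotclasses.snapshot.storage.k8s.io
	kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml

This is only an illustration of the dependency; the retry recorded in the log is what the test run actually does.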
	I0731 16:42:41.237306   16404 main.go:141] libmachine: (addons-190022) DBG | Closing plugin on server side
	I0731 16:42:41.237331   16404 main.go:141] libmachine: Successfully made call to close driver server
	I0731 16:42:41.237344   16404 main.go:141] libmachine: (addons-190022) DBG | Closing plugin on server side
	I0731 16:42:41.237348   16404 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 16:42:41.237360   16404 main.go:141] libmachine: (addons-190022) DBG | Closing plugin on server side
	I0731 16:42:41.237363   16404 main.go:141] libmachine: Making call to close driver server
	I0731 16:42:41.237369   16404 main.go:141] libmachine: Successfully made call to close driver server
	I0731 16:42:41.237335   16404 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (6.120506774s)
	I0731 16:42:41.237379   16404 main.go:141] libmachine: Successfully made call to close driver server
	I0731 16:42:41.237383   16404 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 16:42:41.237387   16404 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 16:42:41.237373   16404 main.go:141] libmachine: (addons-190022) Calling .Close
	I0731 16:42:41.237393   16404 main.go:141] libmachine: Making call to close driver server
	I0731 16:42:41.237396   16404 main.go:141] libmachine: Making call to close driver server
	I0731 16:42:41.237402   16404 main.go:141] libmachine: (addons-190022) Calling .Close
	I0731 16:42:41.237405   16404 main.go:141] libmachine: (addons-190022) Calling .Close
	I0731 16:42:41.237445   16404 main.go:141] libmachine: (addons-190022) DBG | Closing plugin on server side
	I0731 16:42:41.237453   16404 main.go:141] libmachine: Making call to close driver server
	I0731 16:42:41.237460   16404 main.go:141] libmachine: (addons-190022) Calling .Close
	I0731 16:42:41.237476   16404 main.go:141] libmachine: (addons-190022) DBG | Closing plugin on server side
	I0731 16:42:41.237495   16404 main.go:141] libmachine: Successfully made call to close driver server
	I0731 16:42:41.237500   16404 main.go:141] libmachine: (addons-190022) DBG | Closing plugin on server side
	I0731 16:42:41.237506   16404 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 16:42:41.237514   16404 main.go:141] libmachine: Making call to close driver server
	I0731 16:42:41.237523   16404 main.go:141] libmachine: Successfully made call to close driver server
	I0731 16:42:41.237524   16404 main.go:141] libmachine: Successfully made call to close driver server
	I0731 16:42:41.237530   16404 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 16:42:41.237533   16404 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 16:42:41.237538   16404 main.go:141] libmachine: Making call to close driver server
	I0731 16:42:41.237541   16404 main.go:141] libmachine: Making call to close driver server
	I0731 16:42:41.237545   16404 main.go:141] libmachine: (addons-190022) Calling .Close
	I0731 16:42:41.237549   16404 main.go:141] libmachine: (addons-190022) Calling .Close
	I0731 16:42:41.237582   16404 main.go:141] libmachine: (addons-190022) Calling .Close
	I0731 16:42:41.237896   16404 main.go:141] libmachine: (addons-190022) DBG | Closing plugin on server side
	I0731 16:42:41.237922   16404 main.go:141] libmachine: Successfully made call to close driver server
	I0731 16:42:41.237928   16404 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 16:42:41.237935   16404 main.go:141] libmachine: Making call to close driver server
	I0731 16:42:41.237941   16404 main.go:141] libmachine: (addons-190022) Calling .Close
	I0731 16:42:41.237986   16404 main.go:141] libmachine: (addons-190022) DBG | Closing plugin on server side
	I0731 16:42:41.238003   16404 main.go:141] libmachine: Successfully made call to close driver server
	I0731 16:42:41.238009   16404 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 16:42:41.238594   16404 main.go:141] libmachine: (addons-190022) DBG | Closing plugin on server side
	I0731 16:42:41.238616   16404 main.go:141] libmachine: Successfully made call to close driver server
	I0731 16:42:41.238628   16404 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 16:42:41.238633   16404 main.go:141] libmachine: (addons-190022) DBG | Closing plugin on server side
	I0731 16:42:41.238638   16404 addons.go:475] Verifying addon registry=true in "addons-190022"
	I0731 16:42:41.238654   16404 main.go:141] libmachine: Successfully made call to close driver server
	I0731 16:42:41.238661   16404 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 16:42:41.238862   16404 main.go:141] libmachine: (addons-190022) DBG | Closing plugin on server side
	I0731 16:42:41.238891   16404 main.go:141] libmachine: Successfully made call to close driver server
	I0731 16:42:41.238897   16404 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 16:42:41.238902   16404 main.go:141] libmachine: Making call to close driver server
	I0731 16:42:41.238907   16404 main.go:141] libmachine: (addons-190022) Calling .Close
	I0731 16:42:41.238952   16404 main.go:141] libmachine: (addons-190022) DBG | Closing plugin on server side
	I0731 16:42:41.238971   16404 main.go:141] libmachine: Successfully made call to close driver server
	I0731 16:42:41.238978   16404 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 16:42:41.238986   16404 main.go:141] libmachine: Making call to close driver server
	I0731 16:42:41.238993   16404 main.go:141] libmachine: (addons-190022) Calling .Close
	I0731 16:42:41.239312   16404 main.go:141] libmachine: (addons-190022) DBG | Closing plugin on server side
	I0731 16:42:41.239351   16404 main.go:141] libmachine: Successfully made call to close driver server
	I0731 16:42:41.239358   16404 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 16:42:41.239366   16404 addons.go:475] Verifying addon metrics-server=true in "addons-190022"
	I0731 16:42:41.239399   16404 main.go:141] libmachine: (addons-190022) DBG | Closing plugin on server side
	I0731 16:42:41.239421   16404 main.go:141] libmachine: Successfully made call to close driver server
	I0731 16:42:41.239428   16404 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 16:42:41.239435   16404 main.go:141] libmachine: Making call to close driver server
	I0731 16:42:41.239441   16404 main.go:141] libmachine: (addons-190022) Calling .Close
	I0731 16:42:41.239510   16404 main.go:141] libmachine: Making call to close driver server
	I0731 16:42:41.239515   16404 main.go:141] libmachine: (addons-190022) DBG | Closing plugin on server side
	I0731 16:42:41.239521   16404 main.go:141] libmachine: (addons-190022) Calling .Close
	I0731 16:42:41.239539   16404 main.go:141] libmachine: (addons-190022) DBG | Closing plugin on server side
	I0731 16:42:41.238619   16404 main.go:141] libmachine: Successfully made call to close driver server
	I0731 16:42:41.239562   16404 main.go:141] libmachine: Successfully made call to close driver server
	I0731 16:42:41.239566   16404 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 16:42:41.239570   16404 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 16:42:41.239554   16404 main.go:141] libmachine: Successfully made call to close driver server
	I0731 16:42:41.239603   16404 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 16:42:41.239821   16404 main.go:141] libmachine: (addons-190022) DBG | Closing plugin on server side
	I0731 16:42:41.239851   16404 main.go:141] libmachine: Successfully made call to close driver server
	I0731 16:42:41.239858   16404 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 16:42:41.239866   16404 main.go:141] libmachine: Making call to close driver server
	I0731 16:42:41.239873   16404 main.go:141] libmachine: (addons-190022) Calling .Close
	I0731 16:42:41.240266   16404 main.go:141] libmachine: (addons-190022) DBG | Closing plugin on server side
	I0731 16:42:41.240270   16404 main.go:141] libmachine: Successfully made call to close driver server
	I0731 16:42:41.240282   16404 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 16:42:41.240999   16404 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-190022 service yakd-dashboard -n yakd-dashboard
	
	I0731 16:42:41.241957   16404 main.go:141] libmachine: Successfully made call to close driver server
	I0731 16:42:41.242002   16404 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 16:42:41.242256   16404 main.go:141] libmachine: Successfully made call to close driver server
	I0731 16:42:41.242269   16404 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 16:42:41.242285   16404 main.go:141] libmachine: (addons-190022) DBG | Closing plugin on server side
	I0731 16:42:41.242353   16404 out.go:177] * Verifying registry addon...
	I0731 16:42:41.242450   16404 main.go:141] libmachine: Successfully made call to close driver server
	I0731 16:42:41.242460   16404 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 16:42:41.242467   16404 main.go:141] libmachine: (addons-190022) DBG | Closing plugin on server side
	I0731 16:42:41.242469   16404 addons.go:475] Verifying addon ingress=true in "addons-190022"
	I0731 16:42:41.244407   16404 out.go:177] * Verifying ingress addon...
	I0731 16:42:41.245376   16404 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0731 16:42:41.246244   16404 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0731 16:42:41.255780   16404 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0731 16:42:41.255800   16404 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 16:42:41.275238   16404 main.go:141] libmachine: Making call to close driver server
	I0731 16:42:41.275262   16404 main.go:141] libmachine: (addons-190022) Calling .Close
	I0731 16:42:41.275575   16404 main.go:141] libmachine: Successfully made call to close driver server
	I0731 16:42:41.275594   16404 main.go:141] libmachine: Making call to close connection to plugin binary
	W0731 16:42:41.275746   16404 out.go:239] ! Enabling 'storage-provisioner-rancher' returned an error: running callbacks: [Error making local-path the default storage class: Error while marking storage class local-path as default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "local-path": the object has been modified; please apply your changes to the latest version and try again]
	I0731 16:42:41.281790   16404 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0731 16:42:41.281809   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 16:42:41.287842   16404 main.go:141] libmachine: Making call to close driver server
	I0731 16:42:41.287862   16404 main.go:141] libmachine: (addons-190022) Calling .Close
	I0731 16:42:41.288146   16404 main.go:141] libmachine: Successfully made call to close driver server
	I0731 16:42:41.288160   16404 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 16:42:41.451168   16404 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0731 16:42:41.768840   16404 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 16:42:41.775533   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 16:42:41.926163   16404 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (5.593809768s)
	I0731 16:42:41.926221   16404 main.go:141] libmachine: Making call to close driver server
	I0731 16:42:41.926236   16404 main.go:141] libmachine: (addons-190022) Calling .Close
	I0731 16:42:41.926267   16404 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (1.65424921s)
	I0731 16:42:41.926699   16404 main.go:141] libmachine: Successfully made call to close driver server
	I0731 16:42:41.926714   16404 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 16:42:41.926711   16404 main.go:141] libmachine: (addons-190022) DBG | Closing plugin on server side
	I0731 16:42:41.926730   16404 main.go:141] libmachine: Making call to close driver server
	I0731 16:42:41.926737   16404 main.go:141] libmachine: (addons-190022) Calling .Close
	I0731 16:42:41.926991   16404 main.go:141] libmachine: Successfully made call to close driver server
	I0731 16:42:41.927004   16404 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 16:42:41.927014   16404 addons.go:475] Verifying addon csi-hostpath-driver=true in "addons-190022"
	I0731 16:42:41.927694   16404 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.1
	I0731 16:42:41.928693   16404 out.go:177] * Verifying csi-hostpath-driver addon...
	I0731 16:42:41.930135   16404 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.2
	I0731 16:42:41.930785   16404 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0731 16:42:41.931288   16404 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0731 16:42:41.931303   16404 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0731 16:42:41.955719   16404 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0731 16:42:41.955742   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 16:42:41.962614   16404 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0731 16:42:41.962650   16404 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0731 16:42:42.005865   16404 pod_ready.go:102] pod "nvidia-device-plugin-daemonset-zcd67" in "kube-system" namespace has status "Ready":"False"
	I0731 16:42:42.062379   16404 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0731 16:42:42.062405   16404 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I0731 16:42:42.147012   16404 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0731 16:42:42.251044   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 16:42:42.252677   16404 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 16:42:42.436089   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 16:42:42.750251   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 16:42:42.752506   16404 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 16:42:42.936009   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 16:42:43.250857   16404 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 16:42:43.251607   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 16:42:43.317518   16404 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (1.866290674s)
	I0731 16:42:43.317576   16404 main.go:141] libmachine: Making call to close driver server
	I0731 16:42:43.317592   16404 main.go:141] libmachine: (addons-190022) Calling .Close
	I0731 16:42:43.317894   16404 main.go:141] libmachine: (addons-190022) DBG | Closing plugin on server side
	I0731 16:42:43.317969   16404 main.go:141] libmachine: Successfully made call to close driver server
	I0731 16:42:43.317987   16404 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 16:42:43.317998   16404 main.go:141] libmachine: Making call to close driver server
	I0731 16:42:43.318006   16404 main.go:141] libmachine: (addons-190022) Calling .Close
	I0731 16:42:43.318217   16404 main.go:141] libmachine: Successfully made call to close driver server
	I0731 16:42:43.318231   16404 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 16:42:43.441567   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 16:42:43.600612   16404 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (1.453555314s)
	I0731 16:42:43.600674   16404 main.go:141] libmachine: Making call to close driver server
	I0731 16:42:43.600690   16404 main.go:141] libmachine: (addons-190022) Calling .Close
	I0731 16:42:43.600983   16404 main.go:141] libmachine: Successfully made call to close driver server
	I0731 16:42:43.601001   16404 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 16:42:43.601009   16404 main.go:141] libmachine: Making call to close driver server
	I0731 16:42:43.601016   16404 main.go:141] libmachine: (addons-190022) Calling .Close
	I0731 16:42:43.601016   16404 main.go:141] libmachine: (addons-190022) DBG | Closing plugin on server side
	I0731 16:42:43.601247   16404 main.go:141] libmachine: Successfully made call to close driver server
	I0731 16:42:43.601261   16404 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 16:42:43.603842   16404 addons.go:475] Verifying addon gcp-auth=true in "addons-190022"
	I0731 16:42:43.605166   16404 out.go:177] * Verifying gcp-auth addon...
	I0731 16:42:43.607250   16404 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0731 16:42:43.658930   16404 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0731 16:42:43.658955   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 16:42:43.788743   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 16:42:43.788894   16404 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 16:42:43.936530   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 16:42:44.110910   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 16:42:44.257258   16404 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 16:42:44.258273   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 16:42:44.435851   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 16:42:44.455641   16404 pod_ready.go:102] pod "nvidia-device-plugin-daemonset-zcd67" in "kube-system" namespace has status "Ready":"False"
	I0731 16:42:44.611038   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 16:42:44.750709   16404 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 16:42:44.753059   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 16:42:44.938327   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 16:42:45.110774   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 16:42:45.250401   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 16:42:45.251866   16404 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 16:42:45.438053   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 16:42:45.611223   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 16:42:45.750735   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 16:42:45.752087   16404 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 16:42:45.936659   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 16:42:46.111276   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 16:42:46.250921   16404 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 16:42:46.251030   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 16:42:46.437066   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 16:42:46.456045   16404 pod_ready.go:102] pod "nvidia-device-plugin-daemonset-zcd67" in "kube-system" namespace has status "Ready":"False"
	I0731 16:42:46.611562   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 16:42:46.750204   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 16:42:46.752563   16404 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 16:42:46.936858   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 16:42:47.111048   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 16:42:47.250545   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 16:42:47.252154   16404 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 16:42:47.436410   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 16:42:47.612912   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 16:42:47.978972   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 16:42:47.979035   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 16:42:47.979238   16404 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 16:42:48.111080   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 16:42:48.250638   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 16:42:48.254161   16404 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 16:42:48.436377   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 16:42:48.611514   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 16:42:48.751205   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 16:42:48.751247   16404 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 16:42:48.936909   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 16:42:48.955105   16404 pod_ready.go:102] pod "nvidia-device-plugin-daemonset-zcd67" in "kube-system" namespace has status "Ready":"False"
	I0731 16:42:49.111215   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 16:42:49.250450   16404 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 16:42:49.250705   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 16:42:49.436728   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 16:42:49.611286   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 16:42:49.750085   16404 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 16:42:49.751048   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 16:42:49.937594   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 16:42:50.111079   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 16:42:50.251353   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 16:42:50.251978   16404 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 16:42:50.436400   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 16:42:50.610488   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 16:42:50.751856   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 16:42:50.752387   16404 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 16:42:50.938235   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 16:42:50.956784   16404 pod_ready.go:102] pod "nvidia-device-plugin-daemonset-zcd67" in "kube-system" namespace has status "Ready":"False"
	I0731 16:42:51.110898   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 16:42:51.250794   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 16:42:51.252567   16404 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 16:42:51.436416   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 16:42:51.610917   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 16:42:51.750291   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 16:42:51.751142   16404 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 16:42:51.936999   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 16:42:52.111094   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 16:42:52.249854   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 16:42:52.251984   16404 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 16:42:52.437550   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 16:42:52.611798   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 16:42:52.751359   16404 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 16:42:52.751622   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 16:42:52.936625   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 16:42:53.241084   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 16:42:53.250884   16404 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 16:42:53.251400   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 16:42:53.436491   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 16:42:53.456064   16404 pod_ready.go:102] pod "nvidia-device-plugin-daemonset-zcd67" in "kube-system" namespace has status "Ready":"False"
	I0731 16:42:53.610668   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 16:42:53.750564   16404 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 16:42:53.750594   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 16:42:53.936690   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 16:42:54.111173   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 16:42:54.250884   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 16:42:54.252305   16404 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 16:42:54.435688   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 16:42:54.611062   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 16:42:54.751590   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 16:42:54.751812   16404 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 16:42:54.936610   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 16:42:55.110895   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 16:42:55.250098   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 16:42:55.252753   16404 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 16:42:55.435695   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 16:42:55.611476   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 16:42:55.752843   16404 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 16:42:55.754945   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 16:42:55.940633   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 16:42:55.958710   16404 pod_ready.go:102] pod "nvidia-device-plugin-daemonset-zcd67" in "kube-system" namespace has status "Ready":"False"
	I0731 16:42:56.110954   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 16:42:56.249850   16404 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 16:42:56.250907   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 16:42:56.436266   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 16:42:56.611715   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 16:42:56.750128   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 16:42:56.752470   16404 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 16:42:56.936369   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 16:42:57.115747   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 16:42:57.249925   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 16:42:57.250382   16404 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 16:42:57.439642   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 16:42:57.610702   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 16:42:58.078686   16404 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 16:42:58.079864   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 16:42:58.081640   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 16:42:58.082797   16404 pod_ready.go:102] pod "nvidia-device-plugin-daemonset-zcd67" in "kube-system" namespace has status "Ready":"False"
	I0731 16:42:58.110499   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 16:42:58.250839   16404 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 16:42:58.251292   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 16:42:58.436440   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 16:42:58.610937   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 16:42:58.751998   16404 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 16:42:58.753791   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 16:42:58.941879   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 16:42:59.111353   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 16:42:59.250942   16404 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 16:42:59.251351   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 16:42:59.436048   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 16:42:59.459846   16404 pod_ready.go:92] pod "nvidia-device-plugin-daemonset-zcd67" in "kube-system" namespace has status "Ready":"True"
	I0731 16:42:59.459867   16404 pod_ready.go:81] duration metric: took 19.51011325s for pod "nvidia-device-plugin-daemonset-zcd67" in "kube-system" namespace to be "Ready" ...
	I0731 16:42:59.459875   16404 pod_ready.go:38] duration metric: took 24.404992572s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0731 16:42:59.459889   16404 api_server.go:52] waiting for apiserver process to appear ...
	I0731 16:42:59.459935   16404 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 16:42:59.477699   16404 api_server.go:72] duration metric: took 26.64821314s to wait for apiserver process to appear ...
	I0731 16:42:59.477727   16404 api_server.go:88] waiting for apiserver healthz status ...
	I0731 16:42:59.477751   16404 api_server.go:253] Checking apiserver healthz at https://192.168.39.140:8443/healthz ...
	I0731 16:42:59.481655   16404 api_server.go:279] https://192.168.39.140:8443/healthz returned 200:
	ok
	I0731 16:42:59.482635   16404 api_server.go:141] control plane version: v1.30.3
	I0731 16:42:59.482660   16404 api_server.go:131] duration metric: took 4.926485ms to wait for apiserver health ...
	I0731 16:42:59.482667   16404 system_pods.go:43] waiting for kube-system pods to appear ...
	I0731 16:42:59.491372   16404 system_pods.go:59] 18 kube-system pods found
	I0731 16:42:59.491402   16404 system_pods.go:61] "coredns-7db6d8ff4d-5xsd7" [e335215e-3b35-4219-aebf-5cb36b99f501] Running
	I0731 16:42:59.491414   16404 system_pods.go:61] "csi-hostpath-attacher-0" [04d0345d-a817-4a36-bc46-fbe0548e5155] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0731 16:42:59.491423   16404 system_pods.go:61] "csi-hostpath-resizer-0" [366c8800-8ec9-4594-8d4e-6f9dd2ec2dfa] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0731 16:42:59.491434   16404 system_pods.go:61] "csi-hostpathplugin-t8pzb" [1d031c16-6f7c-45e2-9123-4c71d43ebf7e] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0731 16:42:59.491446   16404 system_pods.go:61] "etcd-addons-190022" [461d2a65-1ad4-4075-9d88-f95cf652e869] Running
	I0731 16:42:59.491453   16404 system_pods.go:61] "kube-apiserver-addons-190022" [858de9d8-e554-4ae0-9fb8-fd08b3e02f0d] Running
	I0731 16:42:59.491461   16404 system_pods.go:61] "kube-controller-manager-addons-190022" [528fd2e2-99a3-41f9-be37-0da13e3b7f85] Running
	I0731 16:42:59.491467   16404 system_pods.go:61] "kube-ingress-dns-minikube" [d54a8e85-afc8-4ad8-84be-3d5e643783f0] Running
	I0731 16:42:59.491473   16404 system_pods.go:61] "kube-proxy-p46dc" [3f47ba8b-8470-4e58-aabc-6cc47f18d726] Running
	I0731 16:42:59.491479   16404 system_pods.go:61] "kube-scheduler-addons-190022" [c3b38992-d228-460e-b578-fa2f0f914052] Running
	I0731 16:42:59.491489   16404 system_pods.go:61] "metrics-server-c59844bb4-j57l6" [4638cda1-728a-48d6-9736-4f6234e9f6c1] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0731 16:42:59.491498   16404 system_pods.go:61] "nvidia-device-plugin-daemonset-zcd67" [f8e78301-23c4-432b-bd96-644d7c9b034e] Running
	I0731 16:42:59.491508   16404 system_pods.go:61] "registry-698f998955-xbtsh" [0beecbd0-f912-410d-b71c-b5c7bb05b1a6] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0731 16:42:59.491517   16404 system_pods.go:61] "registry-proxy-f7tqb" [896d8e3c-67c0-4b9c-bab5-43c46ee24394] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0731 16:42:59.491529   16404 system_pods.go:61] "snapshot-controller-745499f584-rd2bq" [b32b5cc5-ece8-4227-a9fd-6c3f89791c42] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0731 16:42:59.491539   16404 system_pods.go:61] "snapshot-controller-745499f584-s2f9h" [3d3f15af-4f66-4845-9bbb-874f2d6254fd] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0731 16:42:59.491549   16404 system_pods.go:61] "storage-provisioner" [2e3f9681-80f4-4d36-9897-9103dcd23543] Running
	I0731 16:42:59.491557   16404 system_pods.go:61] "tiller-deploy-6677d64bcd-jbrvp" [acce776b-f280-4d5c-85be-c197f74e1f0d] Pending / Ready:ContainersNotReady (containers with unready status: [tiller]) / ContainersReady:ContainersNotReady (containers with unready status: [tiller])
	I0731 16:42:59.491568   16404 system_pods.go:74] duration metric: took 8.894803ms to wait for pod list to return data ...
	I0731 16:42:59.491581   16404 default_sa.go:34] waiting for default service account to be created ...
	I0731 16:42:59.493837   16404 default_sa.go:45] found service account: "default"
	I0731 16:42:59.493854   16404 default_sa.go:55] duration metric: took 2.267272ms for default service account to be created ...
	I0731 16:42:59.493861   16404 system_pods.go:116] waiting for k8s-apps to be running ...
	I0731 16:42:59.501324   16404 system_pods.go:86] 18 kube-system pods found
	I0731 16:42:59.501352   16404 system_pods.go:89] "coredns-7db6d8ff4d-5xsd7" [e335215e-3b35-4219-aebf-5cb36b99f501] Running
	I0731 16:42:59.501359   16404 system_pods.go:89] "csi-hostpath-attacher-0" [04d0345d-a817-4a36-bc46-fbe0548e5155] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0731 16:42:59.501365   16404 system_pods.go:89] "csi-hostpath-resizer-0" [366c8800-8ec9-4594-8d4e-6f9dd2ec2dfa] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0731 16:42:59.501376   16404 system_pods.go:89] "csi-hostpathplugin-t8pzb" [1d031c16-6f7c-45e2-9123-4c71d43ebf7e] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0731 16:42:59.501381   16404 system_pods.go:89] "etcd-addons-190022" [461d2a65-1ad4-4075-9d88-f95cf652e869] Running
	I0731 16:42:59.501386   16404 system_pods.go:89] "kube-apiserver-addons-190022" [858de9d8-e554-4ae0-9fb8-fd08b3e02f0d] Running
	I0731 16:42:59.501392   16404 system_pods.go:89] "kube-controller-manager-addons-190022" [528fd2e2-99a3-41f9-be37-0da13e3b7f85] Running
	I0731 16:42:59.501396   16404 system_pods.go:89] "kube-ingress-dns-minikube" [d54a8e85-afc8-4ad8-84be-3d5e643783f0] Running
	I0731 16:42:59.501400   16404 system_pods.go:89] "kube-proxy-p46dc" [3f47ba8b-8470-4e58-aabc-6cc47f18d726] Running
	I0731 16:42:59.501405   16404 system_pods.go:89] "kube-scheduler-addons-190022" [c3b38992-d228-460e-b578-fa2f0f914052] Running
	I0731 16:42:59.501410   16404 system_pods.go:89] "metrics-server-c59844bb4-j57l6" [4638cda1-728a-48d6-9736-4f6234e9f6c1] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0731 16:42:59.501417   16404 system_pods.go:89] "nvidia-device-plugin-daemonset-zcd67" [f8e78301-23c4-432b-bd96-644d7c9b034e] Running
	I0731 16:42:59.501423   16404 system_pods.go:89] "registry-698f998955-xbtsh" [0beecbd0-f912-410d-b71c-b5c7bb05b1a6] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0731 16:42:59.501431   16404 system_pods.go:89] "registry-proxy-f7tqb" [896d8e3c-67c0-4b9c-bab5-43c46ee24394] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0731 16:42:59.501438   16404 system_pods.go:89] "snapshot-controller-745499f584-rd2bq" [b32b5cc5-ece8-4227-a9fd-6c3f89791c42] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0731 16:42:59.501447   16404 system_pods.go:89] "snapshot-controller-745499f584-s2f9h" [3d3f15af-4f66-4845-9bbb-874f2d6254fd] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0731 16:42:59.501452   16404 system_pods.go:89] "storage-provisioner" [2e3f9681-80f4-4d36-9897-9103dcd23543] Running
	I0731 16:42:59.501459   16404 system_pods.go:89] "tiller-deploy-6677d64bcd-jbrvp" [acce776b-f280-4d5c-85be-c197f74e1f0d] Pending / Ready:ContainersNotReady (containers with unready status: [tiller]) / ContainersReady:ContainersNotReady (containers with unready status: [tiller])
	I0731 16:42:59.501465   16404 system_pods.go:126] duration metric: took 7.599553ms to wait for k8s-apps to be running ...
	I0731 16:42:59.501474   16404 system_svc.go:44] waiting for kubelet service to be running ....
	I0731 16:42:59.501522   16404 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0731 16:42:59.515490   16404 system_svc.go:56] duration metric: took 14.008876ms WaitForService to wait for kubelet
	I0731 16:42:59.515517   16404 kubeadm.go:582] duration metric: took 26.686036262s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0731 16:42:59.515542   16404 node_conditions.go:102] verifying NodePressure condition ...
	I0731 16:42:59.518692   16404 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0731 16:42:59.518711   16404 node_conditions.go:123] node cpu capacity is 2
	I0731 16:42:59.518723   16404 node_conditions.go:105] duration metric: took 3.172728ms to run NodePressure ...
	I0731 16:42:59.518733   16404 start.go:241] waiting for startup goroutines ...
	I0731 16:42:59.611381   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 16:42:59.750451   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 16:42:59.750567   16404 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 16:42:59.937628   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 16:43:00.111365   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 16:43:00.252008   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 16:43:00.252140   16404 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 16:43:00.437554   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 16:43:00.610571   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 16:43:00.750730   16404 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 16:43:00.751243   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 16:43:00.935867   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 16:43:01.111474   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 16:43:01.251191   16404 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 16:43:01.252400   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 16:43:01.436781   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 16:43:01.610899   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 16:43:01.752710   16404 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 16:43:01.753246   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 16:43:01.938067   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 16:43:02.120781   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 16:43:02.251611   16404 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 16:43:02.251916   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 16:43:02.436725   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 16:43:02.623245   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 16:43:02.751047   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 16:43:02.751473   16404 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 16:43:02.936107   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 16:43:03.110927   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 16:43:03.249777   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 16:43:03.250715   16404 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 16:43:03.436299   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 16:43:03.611419   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 16:43:03.752143   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 16:43:03.752333   16404 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 16:43:03.939415   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 16:43:04.111033   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 16:43:04.251896   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 16:43:04.252160   16404 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 16:43:04.437011   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 16:43:04.610412   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 16:43:04.751778   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 16:43:04.752077   16404 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 16:43:04.937139   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 16:43:05.110763   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 16:43:05.250258   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 16:43:05.251870   16404 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 16:43:05.446331   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 16:43:05.611178   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 16:43:05.751942   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 16:43:05.752021   16404 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 16:43:05.937083   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 16:43:06.110296   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 16:43:06.255343   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 16:43:06.255465   16404 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 16:43:06.436095   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 16:43:06.610486   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 16:43:06.751796   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 16:43:06.753975   16404 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 16:43:06.936126   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 16:43:07.110704   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 16:43:07.252853   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 16:43:07.253567   16404 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 16:43:07.436514   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 16:43:07.610479   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 16:43:07.750593   16404 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 16:43:07.750709   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 16:43:07.936396   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 16:43:08.111303   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 16:43:08.250407   16404 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 16:43:08.250820   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 16:43:08.436522   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 16:43:08.611432   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 16:43:08.749760   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 16:43:08.751179   16404 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 16:43:08.936959   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 16:43:09.111386   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 16:43:09.249512   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 16:43:09.251984   16404 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 16:43:09.436582   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 16:43:09.611026   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 16:43:09.750691   16404 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 16:43:09.752121   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 16:43:09.936229   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 16:43:10.110995   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 16:43:10.250282   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 16:43:10.251538   16404 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 16:43:10.436567   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 16:43:10.611012   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 16:43:10.750403   16404 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 16:43:10.751440   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 16:43:10.936254   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 16:43:11.112213   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 16:43:11.250041   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 16:43:11.251371   16404 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 16:43:11.436769   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 16:43:11.611125   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 16:43:11.749970   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 16:43:11.751941   16404 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 16:43:11.936620   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 16:43:12.110946   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 16:43:12.250003   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 16:43:12.251932   16404 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 16:43:12.436347   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 16:43:12.611450   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 16:43:12.752154   16404 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 16:43:12.753050   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 16:43:12.951712   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 16:43:13.110750   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 16:43:13.250068   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 16:43:13.251616   16404 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 16:43:13.435730   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 16:43:13.610933   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 16:43:13.750002   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 16:43:13.751376   16404 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 16:43:13.936661   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 16:43:14.111377   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 16:43:14.252391   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 16:43:14.252715   16404 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 16:43:14.436416   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 16:43:14.610774   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 16:43:14.749669   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 16:43:14.751412   16404 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 16:43:14.935818   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 16:43:15.111037   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 16:43:15.249719   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 16:43:15.251868   16404 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 16:43:15.438042   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 16:43:15.936438   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 16:43:15.937963   16404 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 16:43:15.939699   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 16:43:15.939826   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 16:43:16.110619   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 16:43:16.251514   16404 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 16:43:16.254940   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 16:43:16.436602   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 16:43:16.611088   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 16:43:16.750822   16404 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 16:43:16.751600   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 16:43:16.936020   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 16:43:17.111472   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 16:43:17.257677   16404 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 16:43:17.262742   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 16:43:17.435729   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 16:43:17.610493   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 16:43:17.751055   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 16:43:17.751445   16404 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 16:43:17.945897   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 16:43:18.111460   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 16:43:18.251528   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 16:43:18.251664   16404 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 16:43:18.436434   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 16:43:18.611450   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 16:43:18.751037   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 16:43:18.751282   16404 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 16:43:18.937462   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 16:43:19.111579   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 16:43:19.249679   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 16:43:19.251514   16404 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 16:43:19.436255   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 16:43:19.611442   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 16:43:19.751031   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 16:43:19.751050   16404 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 16:43:19.942081   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 16:43:20.111503   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 16:43:20.252208   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 16:43:20.252940   16404 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 16:43:20.436028   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 16:43:20.612136   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 16:43:20.753174   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 16:43:20.754725   16404 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 16:43:20.936045   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 16:43:21.110865   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 16:43:21.249836   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 16:43:21.251578   16404 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 16:43:21.436696   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 16:43:21.610895   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 16:43:21.749939   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 16:43:21.753532   16404 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 16:43:21.937501   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 16:43:22.110679   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 16:43:22.249664   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 16:43:22.252056   16404 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 16:43:22.437909   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 16:43:22.611624   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 16:43:22.752141   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 16:43:22.752589   16404 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 16:43:22.937685   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 16:43:23.111469   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 16:43:23.250562   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 16:43:23.251743   16404 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 16:43:23.435576   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 16:43:23.618054   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 16:43:23.751268   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 16:43:23.751301   16404 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 16:43:23.936040   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 16:43:24.111311   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 16:43:24.250314   16404 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 16:43:24.250682   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 16:43:24.436695   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 16:43:24.610896   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 16:43:24.749989   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 16:43:24.752189   16404 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 16:43:24.937047   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 16:43:25.110430   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 16:43:25.251318   16404 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 16:43:25.251636   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 16:43:25.437074   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 16:43:25.615305   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 16:43:25.753706   16404 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 16:43:25.753916   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 16:43:25.938328   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 16:43:26.115266   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 16:43:26.254167   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 16:43:26.254342   16404 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 16:43:26.438508   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 16:43:26.615968   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 16:43:26.750170   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 16:43:26.750287   16404 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 16:43:26.935826   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 16:43:27.111083   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 16:43:27.251049   16404 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 16:43:27.251319   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 16:43:27.436540   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 16:43:27.611003   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 16:43:27.750358   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 16:43:27.751479   16404 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 16:43:27.939617   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 16:43:28.111351   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 16:43:28.251260   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 16:43:28.251415   16404 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 16:43:28.436431   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 16:43:28.610717   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 16:43:28.749951   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 16:43:28.751723   16404 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 16:43:28.935777   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 16:43:29.111143   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 16:43:29.250659   16404 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 16:43:29.251498   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 16:43:29.437846   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 16:43:29.611422   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 16:43:29.751565   16404 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 16:43:29.751717   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 16:43:30.075776   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 16:43:30.111398   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 16:43:30.250637   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 16:43:30.253554   16404 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 16:43:30.437507   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 16:43:30.628569   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 16:43:30.752716   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 16:43:30.752939   16404 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 16:43:31.161615   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 16:43:31.163778   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 16:43:31.252393   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 16:43:31.252410   16404 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 16:43:31.436334   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 16:43:31.611452   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 16:43:31.750891   16404 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 16:43:31.752898   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 16:43:31.936812   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 16:43:32.112850   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 16:43:32.250317   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 16:43:32.252971   16404 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 16:43:32.437269   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 16:43:32.612513   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 16:43:32.750203   16404 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 16:43:32.750349   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 16:43:32.936275   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 16:43:33.110892   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 16:43:33.250266   16404 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 16:43:33.250361   16404 kapi.go:107] duration metric: took 52.004987116s to wait for kubernetes.io/minikube-addons=registry ...
	I0731 16:43:33.435896   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 16:43:33.611279   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 16:43:33.752713   16404 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 16:43:33.936571   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 16:43:34.111324   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 16:43:34.250970   16404 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 16:43:34.436286   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 16:43:34.610486   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 16:43:34.750596   16404 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 16:43:34.936585   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 16:43:35.111399   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 16:43:35.250910   16404 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 16:43:35.435880   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 16:43:35.611469   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 16:43:35.749976   16404 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 16:43:35.936910   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 16:43:36.111984   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 16:43:36.250512   16404 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 16:43:36.444408   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 16:43:36.612107   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 16:43:36.750822   16404 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 16:43:36.940064   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 16:43:37.111562   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 16:43:37.250682   16404 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 16:43:37.436343   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 16:43:37.610849   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 16:43:37.750427   16404 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 16:43:37.940907   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 16:43:38.110935   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 16:43:38.250748   16404 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 16:43:38.436456   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 16:43:38.612128   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 16:43:38.819556   16404 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 16:43:38.941190   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 16:43:39.110554   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 16:43:39.250006   16404 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 16:43:39.436475   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 16:43:39.611798   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 16:43:39.750941   16404 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 16:43:39.935333   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 16:43:40.110969   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 16:43:40.250830   16404 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 16:43:40.436900   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 16:43:40.612129   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 16:43:40.750959   16404 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 16:43:40.940977   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 16:43:41.111814   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 16:43:41.250487   16404 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 16:43:41.436095   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 16:43:41.610395   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 16:43:41.750206   16404 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 16:43:41.936742   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 16:43:42.110962   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 16:43:42.251692   16404 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 16:43:42.435622   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 16:43:42.611181   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 16:43:42.751221   16404 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 16:43:42.935997   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 16:43:43.113945   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 16:43:43.250296   16404 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 16:43:43.437101   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 16:43:43.610775   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 16:43:43.750307   16404 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 16:43:43.936638   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 16:43:44.111472   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 16:43:44.249781   16404 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 16:43:44.438175   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 16:43:44.616001   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 16:43:44.751497   16404 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 16:43:44.936191   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 16:43:45.122822   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 16:43:45.261268   16404 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 16:43:45.435851   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 16:43:45.611605   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 16:43:45.751461   16404 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 16:43:45.936293   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 16:43:46.111265   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 16:43:46.253226   16404 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 16:43:46.435755   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 16:43:46.610818   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 16:43:46.750716   16404 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 16:43:46.936443   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 16:43:47.111198   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 16:43:47.250720   16404 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 16:43:47.440688   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 16:43:47.610875   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 16:43:47.752820   16404 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 16:43:47.937230   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 16:43:48.111382   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 16:43:48.251266   16404 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 16:43:48.436672   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 16:43:48.611101   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 16:43:48.750874   16404 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 16:43:48.936091   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 16:43:49.114995   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 16:43:49.251135   16404 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 16:43:49.436376   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 16:43:49.610591   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 16:43:49.755900   16404 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 16:43:49.935780   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 16:43:50.111300   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 16:43:50.251377   16404 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 16:43:50.718151   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 16:43:50.719737   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 16:43:50.750966   16404 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 16:43:50.936152   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 16:43:51.110478   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 16:43:51.250179   16404 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 16:43:51.437056   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 16:43:51.611900   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 16:43:51.752995   16404 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 16:43:51.936374   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 16:43:52.111596   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 16:43:52.251515   16404 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 16:43:52.437238   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 16:43:52.611299   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 16:43:52.751100   16404 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 16:43:52.937154   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 16:43:53.111130   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 16:43:53.325258   16404 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 16:43:53.437413   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 16:43:53.611176   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 16:43:53.751359   16404 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 16:43:53.937251   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 16:43:54.110858   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 16:43:54.252240   16404 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 16:43:54.437420   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 16:43:54.611353   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 16:43:54.751286   16404 kapi.go:107] duration metric: took 1m13.505038604s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0731 16:43:54.944950   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 16:43:55.111622   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 16:43:55.440408   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 16:43:55.611294   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 16:43:55.936704   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 16:43:56.111251   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 16:43:56.437283   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 16:43:56.610546   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 16:43:56.936769   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 16:43:57.111627   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 16:43:57.436727   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 16:43:57.611499   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 16:43:57.936163   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 16:43:58.111060   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 16:43:58.436407   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 16:43:58.615387   16404 kapi.go:107] duration metric: took 1m15.008131623s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0731 16:43:58.616696   16404 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-190022 cluster.
	I0731 16:43:58.617808   16404 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0731 16:43:58.618941   16404 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
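
The `gcp-auth-skip-secret` opt-out described in the messages above is just a pod label. As a minimal sketch (assuming the standard Kubernetes Go client types; the pod name, namespace, image, and label value below are illustrative, not taken from this run), a pod that the gcp-auth addon should leave alone could be declared like this:

    package main

    import (
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    // skippedPod builds a pod spec carrying the gcp-auth-skip-secret label,
    // which the messages above describe as the opt-out from credential mounting.
    func skippedPod() *corev1.Pod {
        return &corev1.Pod{
            ObjectMeta: metav1.ObjectMeta{
                Name:      "no-gcp-creds", // hypothetical name
                Namespace: "default",
                Labels:    map[string]string{"gcp-auth-skip-secret": "true"}, // value is an assumption; the key is what matters
            },
            Spec: corev1.PodSpec{
                Containers: []corev1.Container{
                    {Name: "app", Image: "busybox"}, // hypothetical container
                },
            },
        }
    }

    func main() {
        fmt.Println(skippedPod().ObjectMeta.Labels)
    }
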
	I0731 16:43:58.937070   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 16:43:59.436751   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 16:43:59.936122   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 16:44:00.436726   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 16:44:00.936519   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 16:44:01.482374   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 16:44:01.937603   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 16:44:02.436096   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 16:44:02.936411   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 16:44:03.436128   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 16:44:03.936386   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 16:44:04.438393   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 16:44:04.936389   16404 kapi.go:107] duration metric: took 1m23.005600008s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0731 16:44:04.938087   16404 out.go:177] * Enabled addons: nvidia-device-plugin, helm-tiller, metrics-server, inspektor-gadget, storage-provisioner, ingress-dns, cloud-spanner, yakd, default-storageclass, volumesnapshots, registry, ingress, gcp-auth, csi-hostpath-driver
	I0731 16:44:04.939218   16404 addons.go:510] duration metric: took 1m32.109723075s for enable addons: enabled=[nvidia-device-plugin helm-tiller metrics-server inspektor-gadget storage-provisioner ingress-dns cloud-spanner yakd default-storageclass volumesnapshots registry ingress gcp-auth csi-hostpath-driver]
	I0731 16:44:04.939260   16404 start.go:246] waiting for cluster config update ...
	I0731 16:44:04.939283   16404 start.go:255] writing updated cluster config ...
	I0731 16:44:04.939529   16404 ssh_runner.go:195] Run: rm -f paused
	I0731 16:44:04.990518   16404 start.go:600] kubectl: 1.30.3, cluster: 1.30.3 (minor skew: 0)
	I0731 16:44:04.992260   16404 out.go:177] * Done! kubectl is now configured to use "addons-190022" cluster and "default" namespace by default
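
The repeated kapi.go:96 entries in the log above are a label-selector wait: the addon enable step polls the pods matching each addon's selector until they report Running, then records the kapi.go:107 duration line. A minimal sketch of that kind of wait using client-go (an illustration under that assumption, not minikube's actual implementation; the namespace, selector, timeout, and poll interval are example values):

    package main

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    // waitForLabel polls the API server until every pod matching selector in ns
    // is Running, or the timeout expires, mirroring the "waiting for pod ...,
    // current state: Pending" lines captured above.
    func waitForLabel(cs kubernetes.Interface, ns, selector string, timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for {
            pods, err := cs.CoreV1().Pods(ns).List(context.TODO(), metav1.ListOptions{LabelSelector: selector})
            if err != nil {
                return err
            }
            ready := len(pods.Items) > 0
            for _, p := range pods.Items {
                if p.Status.Phase != corev1.PodRunning {
                    ready = false
                    fmt.Printf("waiting for pod %q, current state: %s\n", selector, p.Status.Phase)
                }
            }
            if ready {
                return nil
            }
            if time.Now().After(deadline) {
                return fmt.Errorf("timed out waiting for %s", selector)
            }
            time.Sleep(500 * time.Millisecond) // poll interval is an arbitrary choice here
        }
    }

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        // Example: the csi-hostpath-driver wait seen above took ~1m23s.
        if err := waitForLabel(cs, "kube-system", "kubernetes.io/minikube-addons=csi-hostpath-driver", 6*time.Minute); err != nil {
            panic(err)
        }
    }
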
	
	
	==> CRI-O <==
	Jul 31 16:47:29 addons-190022 crio[684]: time="2024-07-31 16:47:29.756661171Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722444449756620102,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:589584,},InodesUsed:&UInt64Value{Value:210,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=26ce86a5-b39a-4ea2-aa8e-4b73b6b08df1 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 31 16:47:29 addons-190022 crio[684]: time="2024-07-31 16:47:29.757441813Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=fd58357b-4468-4b39-b1a8-36cc86ac2ac8 name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 16:47:29 addons-190022 crio[684]: time="2024-07-31 16:47:29.757546007Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=fd58357b-4468-4b39-b1a8-36cc86ac2ac8 name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 16:47:29 addons-190022 crio[684]: time="2024-07-31 16:47:29.758074051Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:b72204b0c0ed63cb5ba3e6090ed222bdade12c033b2e3f53691e28f6f7aef6b9,PodSandboxId:beedf3034d96440fe3491657f0f664abc0e4488a45d7098d1ec47505f1c3297b,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,State:CONTAINER_RUNNING,CreatedAt:1722444442631510028,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-6778b5fc9f-xlw6l,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 01fab767-1af8-4759-a725-caff0c1428fc,},Annotations:map[string]string{io.kubernetes.container.hash: 132eb6ad,io.kubernetes.container.
ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0765324d0b239089b8e31e17c569e8ca6eca916c178da715b3356863d6e5b08e,PodSandboxId:cfe2fa74adbbad1df1d72ade4e161087a46619bff52b02ab070983322b461d99,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:208b70eefac13ee9be00e486f79c695b15cef861c680527171a27d253d834be9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1ae23480369fa4139f6dec668d7a5a941b56ea174e9cf75e09771988fe621c95,State:CONTAINER_RUNNING,CreatedAt:1722444303518929417,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 3744fa0a-cce8-4eb8-9ae6-27e8475392e6,},Annotations:map[string]string{io.kubernet
es.container.hash: 55f2e5a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:92bd13a40acd241cc986696357382f44b2b86374d42804b30215eebc5d8ea971,PodSandboxId:882ed477f9de5ffe5374fc4225dec00f9c5fb51c8a58edddefd3708ef1302852,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1722444248528960505,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 5e4e17e2-79e1-4a39-86
01-44265c1314a6,},Annotations:map[string]string{io.kubernetes.container.hash: 91997ced,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eb0e5c820bdb410ad6ab2ee4c5e760ba94055502cdcbab390bf0250334bb10f5,PodSandboxId:48b263a01b67d981a196a95cef2448b9ebc2663761d31c0e4410304f41b37ea1,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:35379defc3e7025b1c00d37092f560ce87d06ea5ab35d04ff8a0cf22d316bcf2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:684c5ea3b61b299cd4e713c10bfd8989341da91f6175e2e6e502869c0781fb66,State:CONTAINER_EXITED,CreatedAt:1722444219030773868,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-v82lj,io.kubernetes.pod.namespace: ingress-nginx,io.kubern
etes.pod.uid: 18a8a780-bf8b-478f-b6eb-80d97ddb27fc,},Annotations:map[string]string{io.kubernetes.container.hash: f25dcf07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b4041dd1f1c11f032d0d76bfe7b25341c0780ec9baf9f0c16dccd7331ab44680,PodSandboxId:628e7da66f8815386415117c4a24c1373e34e2420d25c7ac02c3825af2375117,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:35379defc3e7025b1c00d37092f560ce87d06ea5ab35d04ff8a0cf22d316bcf2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:684c5ea3b61b299cd4e713c10bfd8989341da91f6175e2e6e502869c0781fb66,State:CONTAINER_EXITED,CreatedAt:1722444218890080358,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-gxbnx,io.kubernetes.
pod.namespace: ingress-nginx,io.kubernetes.pod.uid: c49c7125-f6a8-4a0e-9131-465c371eaff9,},Annotations:map[string]string{io.kubernetes.container.hash: 29625ee5,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9b4ec7714ed78c410ef58fb9d24df3a4cb05ade3bd4cf91657da0b63033da347,PodSandboxId:49140dfffc40eb129afc9fd3533c09a10bced535dd56d13034359be5f956dbcd,Metadata:&ContainerMetadata{Name:local-path-provisioner,Attempt:0,},Image:&ImageSpec{Image:docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e16d1e3a1066751ebbb1d00bd843b566c69cddc5bf5f6d00edbc3fcf26a4a6bf,State:CONTAINER_RUNNING,CreatedAt:1722444214462966358,Labels:map[string]string{io.kubernetes.container.name: local-path-provisioner,io.kubernetes.
pod.name: local-path-provisioner-8d985888d-9hlt7,io.kubernetes.pod.namespace: local-path-storage,io.kubernetes.pod.uid: b506afd6-0c0a-470e-b1b3-a48ad3f6977d,},Annotations:map[string]string{io.kubernetes.container.hash: 58aac654,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:583dd18f14d371d009203df28c79f7b43401f51f624f86e8785026321eca760b,PodSandboxId:952962996be74f0b20c4732212097ab1cb35e33f5daebe47ba8ed7fd8f5a4ef2,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:31f034feb3f16062e93be7c40efc596553c89de172e2e412e588f02382388872,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a24c7c057ec8730aaa152f77366454835a46dc699fcf243698a622788fd48d62,State:CONTAINER_RUNNING,CreatedAt:1722444205420694074,Labels:map[string]string{io
.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metrics-server-c59844bb4-j57l6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4638cda1-728a-48d6-9736-4f6234e9f6c1,},Annotations:map[string]string{io.kubernetes.container.hash: bc7a47c0,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:689504ce84d77edb6d4fa85ce92f64fe541eb2f4c1bb3dd5402f35a76268d00a,PodSandboxId:0fa0ab46a357457b73d44fabedfc912952f7846d3581c3c7007f54da86d27006,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbd
a1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722444159409456506,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2e3f9681-80f4-4d36-9897-9103dcd23543,},Annotations:map[string]string{io.kubernetes.container.hash: a0cfa204,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bf02a171405c16c5b33884703776484ed4ef25b7a98c921e000897ade7c3f7f5,PodSandboxId:e3c08cf6b87977be5815118cf527ed7673856398de7961f292fb2ca7a27091d5,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAIN
ER_RUNNING,CreatedAt:1722444156494522009,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-5xsd7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e335215e-3b35-4219-aebf-5cb36b99f501,},Annotations:map[string]string{io.kubernetes.container.hash: 95dbeee8,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:71c2430355fdd45d105e4f0a661352d41ef566f3830a43f4c7a29120639b9b24,PodSandboxId:6df6605b88ca96d64f40013b818e348dc291fc090148fdb57b8f8413584cd189,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e
122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722444153834639592,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-p46dc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3f47ba8b-8470-4e58-aabc-6cc47f18d726,},Annotations:map[string]string{io.kubernetes.container.hash: 7061031b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:37fa9519210b38e6c0b8654e3767fa4299a2f1e74f6e8b921b965aa18a752f7d,PodSandboxId:c1c9c7146766f915a3072287e8b12ad11b89b3cae446cdd9d2b5dbdd204faad9,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a89
9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722444133635200994,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-190022,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: abdcd5276b59307086e39e5a0733e075,},Annotations:map[string]string{io.kubernetes.container.hash: f6eb76e5,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a9196ff47535f2ce157de81e4cff038af22e270b2559341c143d31fee8ef1914,PodSandboxId:953e8051b6a6ca722e0a4a2272ffc53736ad34b7fb4b064740f736bdcfbe1b85,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},Use
rSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722444133630129123,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-190022,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cae04f723b44db246bbd25dc52dcd209,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b6e02749c1e2f10d2e9e617185cc64a68404a6d54853b78cd0a71d5920fe8984,PodSandboxId:4bc06c234387f8877ab21f3854b4d8cc6998c9c371976fbbca4afbbe8d353f86,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifi
edImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722444133640130087,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-190022,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 35e5d3a0b14653fa12e710c67ba3ee39,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4259df48d13e2dfd05141f8bbfbb4340cdeeaf42b3182a0b13fe16cf804ec32b,PodSandboxId:d4a9685db8a6d1e75221757fd611aed144e97c8c55a8e2c3f29940e27c5e166f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecif
iedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722444133418400683,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-190022,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 241e9b0c677781fd11f1e06d10601799,},Annotations:map[string]string{io.kubernetes.container.hash: 65e98503,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=fd58357b-4468-4b39-b1a8-36cc86ac2ac8 name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 16:47:29 addons-190022 crio[684]: time="2024-07-31 16:47:29.808670216Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=dae0b4a0-6f20-4f25-8e62-0433e2ee7ce0 name=/runtime.v1.RuntimeService/Version
	Jul 31 16:47:29 addons-190022 crio[684]: time="2024-07-31 16:47:29.808759412Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=dae0b4a0-6f20-4f25-8e62-0433e2ee7ce0 name=/runtime.v1.RuntimeService/Version
	Jul 31 16:47:29 addons-190022 crio[684]: time="2024-07-31 16:47:29.809925917Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=2666489b-97fe-47ad-bc9f-a6fda05f32b9 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 31 16:47:29 addons-190022 crio[684]: time="2024-07-31 16:47:29.811173783Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722444449811146984,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:589584,},InodesUsed:&UInt64Value{Value:210,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=2666489b-97fe-47ad-bc9f-a6fda05f32b9 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 31 16:47:29 addons-190022 crio[684]: time="2024-07-31 16:47:29.811752387Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=3b13bb59-3a0c-430a-a27d-e2ba39e37fae name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 16:47:29 addons-190022 crio[684]: time="2024-07-31 16:47:29.811826562Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=3b13bb59-3a0c-430a-a27d-e2ba39e37fae name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 16:47:29 addons-190022 crio[684]: time="2024-07-31 16:47:29.812158150Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:b72204b0c0ed63cb5ba3e6090ed222bdade12c033b2e3f53691e28f6f7aef6b9,PodSandboxId:beedf3034d96440fe3491657f0f664abc0e4488a45d7098d1ec47505f1c3297b,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,State:CONTAINER_RUNNING,CreatedAt:1722444442631510028,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-6778b5fc9f-xlw6l,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 01fab767-1af8-4759-a725-caff0c1428fc,},Annotations:map[string]string{io.kubernetes.container.hash: 132eb6ad,io.kubernetes.container.
ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0765324d0b239089b8e31e17c569e8ca6eca916c178da715b3356863d6e5b08e,PodSandboxId:cfe2fa74adbbad1df1d72ade4e161087a46619bff52b02ab070983322b461d99,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:208b70eefac13ee9be00e486f79c695b15cef861c680527171a27d253d834be9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1ae23480369fa4139f6dec668d7a5a941b56ea174e9cf75e09771988fe621c95,State:CONTAINER_RUNNING,CreatedAt:1722444303518929417,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 3744fa0a-cce8-4eb8-9ae6-27e8475392e6,},Annotations:map[string]string{io.kubernet
es.container.hash: 55f2e5a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:92bd13a40acd241cc986696357382f44b2b86374d42804b30215eebc5d8ea971,PodSandboxId:882ed477f9de5ffe5374fc4225dec00f9c5fb51c8a58edddefd3708ef1302852,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1722444248528960505,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 5e4e17e2-79e1-4a39-86
01-44265c1314a6,},Annotations:map[string]string{io.kubernetes.container.hash: 91997ced,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eb0e5c820bdb410ad6ab2ee4c5e760ba94055502cdcbab390bf0250334bb10f5,PodSandboxId:48b263a01b67d981a196a95cef2448b9ebc2663761d31c0e4410304f41b37ea1,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:35379defc3e7025b1c00d37092f560ce87d06ea5ab35d04ff8a0cf22d316bcf2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:684c5ea3b61b299cd4e713c10bfd8989341da91f6175e2e6e502869c0781fb66,State:CONTAINER_EXITED,CreatedAt:1722444219030773868,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-v82lj,io.kubernetes.pod.namespace: ingress-nginx,io.kubern
etes.pod.uid: 18a8a780-bf8b-478f-b6eb-80d97ddb27fc,},Annotations:map[string]string{io.kubernetes.container.hash: f25dcf07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b4041dd1f1c11f032d0d76bfe7b25341c0780ec9baf9f0c16dccd7331ab44680,PodSandboxId:628e7da66f8815386415117c4a24c1373e34e2420d25c7ac02c3825af2375117,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:35379defc3e7025b1c00d37092f560ce87d06ea5ab35d04ff8a0cf22d316bcf2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:684c5ea3b61b299cd4e713c10bfd8989341da91f6175e2e6e502869c0781fb66,State:CONTAINER_EXITED,CreatedAt:1722444218890080358,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-gxbnx,io.kubernetes.
pod.namespace: ingress-nginx,io.kubernetes.pod.uid: c49c7125-f6a8-4a0e-9131-465c371eaff9,},Annotations:map[string]string{io.kubernetes.container.hash: 29625ee5,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9b4ec7714ed78c410ef58fb9d24df3a4cb05ade3bd4cf91657da0b63033da347,PodSandboxId:49140dfffc40eb129afc9fd3533c09a10bced535dd56d13034359be5f956dbcd,Metadata:&ContainerMetadata{Name:local-path-provisioner,Attempt:0,},Image:&ImageSpec{Image:docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e16d1e3a1066751ebbb1d00bd843b566c69cddc5bf5f6d00edbc3fcf26a4a6bf,State:CONTAINER_RUNNING,CreatedAt:1722444214462966358,Labels:map[string]string{io.kubernetes.container.name: local-path-provisioner,io.kubernetes.
pod.name: local-path-provisioner-8d985888d-9hlt7,io.kubernetes.pod.namespace: local-path-storage,io.kubernetes.pod.uid: b506afd6-0c0a-470e-b1b3-a48ad3f6977d,},Annotations:map[string]string{io.kubernetes.container.hash: 58aac654,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:583dd18f14d371d009203df28c79f7b43401f51f624f86e8785026321eca760b,PodSandboxId:952962996be74f0b20c4732212097ab1cb35e33f5daebe47ba8ed7fd8f5a4ef2,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:31f034feb3f16062e93be7c40efc596553c89de172e2e412e588f02382388872,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a24c7c057ec8730aaa152f77366454835a46dc699fcf243698a622788fd48d62,State:CONTAINER_RUNNING,CreatedAt:1722444205420694074,Labels:map[string]string{io
.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metrics-server-c59844bb4-j57l6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4638cda1-728a-48d6-9736-4f6234e9f6c1,},Annotations:map[string]string{io.kubernetes.container.hash: bc7a47c0,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:689504ce84d77edb6d4fa85ce92f64fe541eb2f4c1bb3dd5402f35a76268d00a,PodSandboxId:0fa0ab46a357457b73d44fabedfc912952f7846d3581c3c7007f54da86d27006,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbd
a1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722444159409456506,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2e3f9681-80f4-4d36-9897-9103dcd23543,},Annotations:map[string]string{io.kubernetes.container.hash: a0cfa204,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bf02a171405c16c5b33884703776484ed4ef25b7a98c921e000897ade7c3f7f5,PodSandboxId:e3c08cf6b87977be5815118cf527ed7673856398de7961f292fb2ca7a27091d5,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAIN
ER_RUNNING,CreatedAt:1722444156494522009,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-5xsd7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e335215e-3b35-4219-aebf-5cb36b99f501,},Annotations:map[string]string{io.kubernetes.container.hash: 95dbeee8,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:71c2430355fdd45d105e4f0a661352d41ef566f3830a43f4c7a29120639b9b24,PodSandboxId:6df6605b88ca96d64f40013b818e348dc291fc090148fdb57b8f8413584cd189,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e
122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722444153834639592,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-p46dc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3f47ba8b-8470-4e58-aabc-6cc47f18d726,},Annotations:map[string]string{io.kubernetes.container.hash: 7061031b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:37fa9519210b38e6c0b8654e3767fa4299a2f1e74f6e8b921b965aa18a752f7d,PodSandboxId:c1c9c7146766f915a3072287e8b12ad11b89b3cae446cdd9d2b5dbdd204faad9,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a89
9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722444133635200994,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-190022,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: abdcd5276b59307086e39e5a0733e075,},Annotations:map[string]string{io.kubernetes.container.hash: f6eb76e5,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a9196ff47535f2ce157de81e4cff038af22e270b2559341c143d31fee8ef1914,PodSandboxId:953e8051b6a6ca722e0a4a2272ffc53736ad34b7fb4b064740f736bdcfbe1b85,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},Use
rSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722444133630129123,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-190022,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cae04f723b44db246bbd25dc52dcd209,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b6e02749c1e2f10d2e9e617185cc64a68404a6d54853b78cd0a71d5920fe8984,PodSandboxId:4bc06c234387f8877ab21f3854b4d8cc6998c9c371976fbbca4afbbe8d353f86,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifi
edImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722444133640130087,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-190022,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 35e5d3a0b14653fa12e710c67ba3ee39,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4259df48d13e2dfd05141f8bbfbb4340cdeeaf42b3182a0b13fe16cf804ec32b,PodSandboxId:d4a9685db8a6d1e75221757fd611aed144e97c8c55a8e2c3f29940e27c5e166f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecif
iedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722444133418400683,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-190022,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 241e9b0c677781fd11f1e06d10601799,},Annotations:map[string]string{io.kubernetes.container.hash: 65e98503,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=3b13bb59-3a0c-430a-a27d-e2ba39e37fae name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 16:47:29 addons-190022 crio[684]: time="2024-07-31 16:47:29.849487681Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=a36f26f8-0d22-4d39-80a5-57e796c7456e name=/runtime.v1.RuntimeService/Version
	Jul 31 16:47:29 addons-190022 crio[684]: time="2024-07-31 16:47:29.849584696Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=a36f26f8-0d22-4d39-80a5-57e796c7456e name=/runtime.v1.RuntimeService/Version
	Jul 31 16:47:29 addons-190022 crio[684]: time="2024-07-31 16:47:29.850900168Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=47b8773f-1af2-43fe-8065-5ef807569142 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 31 16:47:29 addons-190022 crio[684]: time="2024-07-31 16:47:29.852197139Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722444449852170141,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:589584,},InodesUsed:&UInt64Value{Value:210,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=47b8773f-1af2-43fe-8065-5ef807569142 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 31 16:47:29 addons-190022 crio[684]: time="2024-07-31 16:47:29.852746625Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=c44406d9-0dbe-4206-a7e4-1b30a2b39cd2 name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 16:47:29 addons-190022 crio[684]: time="2024-07-31 16:47:29.852806135Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=c44406d9-0dbe-4206-a7e4-1b30a2b39cd2 name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 16:47:29 addons-190022 crio[684]: time="2024-07-31 16:47:29.853411896Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:b72204b0c0ed63cb5ba3e6090ed222bdade12c033b2e3f53691e28f6f7aef6b9,PodSandboxId:beedf3034d96440fe3491657f0f664abc0e4488a45d7098d1ec47505f1c3297b,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,State:CONTAINER_RUNNING,CreatedAt:1722444442631510028,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-6778b5fc9f-xlw6l,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 01fab767-1af8-4759-a725-caff0c1428fc,},Annotations:map[string]string{io.kubernetes.container.hash: 132eb6ad,io.kubernetes.container.
ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0765324d0b239089b8e31e17c569e8ca6eca916c178da715b3356863d6e5b08e,PodSandboxId:cfe2fa74adbbad1df1d72ade4e161087a46619bff52b02ab070983322b461d99,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:208b70eefac13ee9be00e486f79c695b15cef861c680527171a27d253d834be9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1ae23480369fa4139f6dec668d7a5a941b56ea174e9cf75e09771988fe621c95,State:CONTAINER_RUNNING,CreatedAt:1722444303518929417,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 3744fa0a-cce8-4eb8-9ae6-27e8475392e6,},Annotations:map[string]string{io.kubernet
es.container.hash: 55f2e5a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:92bd13a40acd241cc986696357382f44b2b86374d42804b30215eebc5d8ea971,PodSandboxId:882ed477f9de5ffe5374fc4225dec00f9c5fb51c8a58edddefd3708ef1302852,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1722444248528960505,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 5e4e17e2-79e1-4a39-86
01-44265c1314a6,},Annotations:map[string]string{io.kubernetes.container.hash: 91997ced,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eb0e5c820bdb410ad6ab2ee4c5e760ba94055502cdcbab390bf0250334bb10f5,PodSandboxId:48b263a01b67d981a196a95cef2448b9ebc2663761d31c0e4410304f41b37ea1,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:35379defc3e7025b1c00d37092f560ce87d06ea5ab35d04ff8a0cf22d316bcf2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:684c5ea3b61b299cd4e713c10bfd8989341da91f6175e2e6e502869c0781fb66,State:CONTAINER_EXITED,CreatedAt:1722444219030773868,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-v82lj,io.kubernetes.pod.namespace: ingress-nginx,io.kubern
etes.pod.uid: 18a8a780-bf8b-478f-b6eb-80d97ddb27fc,},Annotations:map[string]string{io.kubernetes.container.hash: f25dcf07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b4041dd1f1c11f032d0d76bfe7b25341c0780ec9baf9f0c16dccd7331ab44680,PodSandboxId:628e7da66f8815386415117c4a24c1373e34e2420d25c7ac02c3825af2375117,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:35379defc3e7025b1c00d37092f560ce87d06ea5ab35d04ff8a0cf22d316bcf2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:684c5ea3b61b299cd4e713c10bfd8989341da91f6175e2e6e502869c0781fb66,State:CONTAINER_EXITED,CreatedAt:1722444218890080358,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-gxbnx,io.kubernetes.
pod.namespace: ingress-nginx,io.kubernetes.pod.uid: c49c7125-f6a8-4a0e-9131-465c371eaff9,},Annotations:map[string]string{io.kubernetes.container.hash: 29625ee5,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9b4ec7714ed78c410ef58fb9d24df3a4cb05ade3bd4cf91657da0b63033da347,PodSandboxId:49140dfffc40eb129afc9fd3533c09a10bced535dd56d13034359be5f956dbcd,Metadata:&ContainerMetadata{Name:local-path-provisioner,Attempt:0,},Image:&ImageSpec{Image:docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e16d1e3a1066751ebbb1d00bd843b566c69cddc5bf5f6d00edbc3fcf26a4a6bf,State:CONTAINER_RUNNING,CreatedAt:1722444214462966358,Labels:map[string]string{io.kubernetes.container.name: local-path-provisioner,io.kubernetes.
pod.name: local-path-provisioner-8d985888d-9hlt7,io.kubernetes.pod.namespace: local-path-storage,io.kubernetes.pod.uid: b506afd6-0c0a-470e-b1b3-a48ad3f6977d,},Annotations:map[string]string{io.kubernetes.container.hash: 58aac654,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:583dd18f14d371d009203df28c79f7b43401f51f624f86e8785026321eca760b,PodSandboxId:952962996be74f0b20c4732212097ab1cb35e33f5daebe47ba8ed7fd8f5a4ef2,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:31f034feb3f16062e93be7c40efc596553c89de172e2e412e588f02382388872,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a24c7c057ec8730aaa152f77366454835a46dc699fcf243698a622788fd48d62,State:CONTAINER_RUNNING,CreatedAt:1722444205420694074,Labels:map[string]string{io
.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metrics-server-c59844bb4-j57l6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4638cda1-728a-48d6-9736-4f6234e9f6c1,},Annotations:map[string]string{io.kubernetes.container.hash: bc7a47c0,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:689504ce84d77edb6d4fa85ce92f64fe541eb2f4c1bb3dd5402f35a76268d00a,PodSandboxId:0fa0ab46a357457b73d44fabedfc912952f7846d3581c3c7007f54da86d27006,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbd
a1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722444159409456506,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2e3f9681-80f4-4d36-9897-9103dcd23543,},Annotations:map[string]string{io.kubernetes.container.hash: a0cfa204,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bf02a171405c16c5b33884703776484ed4ef25b7a98c921e000897ade7c3f7f5,PodSandboxId:e3c08cf6b87977be5815118cf527ed7673856398de7961f292fb2ca7a27091d5,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAIN
ER_RUNNING,CreatedAt:1722444156494522009,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-5xsd7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e335215e-3b35-4219-aebf-5cb36b99f501,},Annotations:map[string]string{io.kubernetes.container.hash: 95dbeee8,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:71c2430355fdd45d105e4f0a661352d41ef566f3830a43f4c7a29120639b9b24,PodSandboxId:6df6605b88ca96d64f40013b818e348dc291fc090148fdb57b8f8413584cd189,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e
122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722444153834639592,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-p46dc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3f47ba8b-8470-4e58-aabc-6cc47f18d726,},Annotations:map[string]string{io.kubernetes.container.hash: 7061031b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:37fa9519210b38e6c0b8654e3767fa4299a2f1e74f6e8b921b965aa18a752f7d,PodSandboxId:c1c9c7146766f915a3072287e8b12ad11b89b3cae446cdd9d2b5dbdd204faad9,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a89
9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722444133635200994,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-190022,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: abdcd5276b59307086e39e5a0733e075,},Annotations:map[string]string{io.kubernetes.container.hash: f6eb76e5,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a9196ff47535f2ce157de81e4cff038af22e270b2559341c143d31fee8ef1914,PodSandboxId:953e8051b6a6ca722e0a4a2272ffc53736ad34b7fb4b064740f736bdcfbe1b85,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},Use
rSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722444133630129123,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-190022,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cae04f723b44db246bbd25dc52dcd209,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b6e02749c1e2f10d2e9e617185cc64a68404a6d54853b78cd0a71d5920fe8984,PodSandboxId:4bc06c234387f8877ab21f3854b4d8cc6998c9c371976fbbca4afbbe8d353f86,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifi
edImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722444133640130087,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-190022,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 35e5d3a0b14653fa12e710c67ba3ee39,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4259df48d13e2dfd05141f8bbfbb4340cdeeaf42b3182a0b13fe16cf804ec32b,PodSandboxId:d4a9685db8a6d1e75221757fd611aed144e97c8c55a8e2c3f29940e27c5e166f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecif
iedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722444133418400683,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-190022,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 241e9b0c677781fd11f1e06d10601799,},Annotations:map[string]string{io.kubernetes.container.hash: 65e98503,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=c44406d9-0dbe-4206-a7e4-1b30a2b39cd2 name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 16:47:29 addons-190022 crio[684]: time="2024-07-31 16:47:29.890664190Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=fc461a63-2b22-46eb-9d3f-ab636f44819d name=/runtime.v1.RuntimeService/Version
	Jul 31 16:47:29 addons-190022 crio[684]: time="2024-07-31 16:47:29.890785786Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=fc461a63-2b22-46eb-9d3f-ab636f44819d name=/runtime.v1.RuntimeService/Version
	Jul 31 16:47:29 addons-190022 crio[684]: time="2024-07-31 16:47:29.892723142Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=c9f17263-0914-40b5-953e-c69b8fd0a3c6 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 31 16:47:29 addons-190022 crio[684]: time="2024-07-31 16:47:29.894518035Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722444449894471657,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:589584,},InodesUsed:&UInt64Value{Value:210,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=c9f17263-0914-40b5-953e-c69b8fd0a3c6 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 31 16:47:29 addons-190022 crio[684]: time="2024-07-31 16:47:29.895174936Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=fe4d2b79-2dcf-458a-9908-23ab51d5b0b2 name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 16:47:29 addons-190022 crio[684]: time="2024-07-31 16:47:29.895329569Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=fe4d2b79-2dcf-458a-9908-23ab51d5b0b2 name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 16:47:29 addons-190022 crio[684]: time="2024-07-31 16:47:29.895825129Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:b72204b0c0ed63cb5ba3e6090ed222bdade12c033b2e3f53691e28f6f7aef6b9,PodSandboxId:beedf3034d96440fe3491657f0f664abc0e4488a45d7098d1ec47505f1c3297b,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,State:CONTAINER_RUNNING,CreatedAt:1722444442631510028,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-6778b5fc9f-xlw6l,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 01fab767-1af8-4759-a725-caff0c1428fc,},Annotations:map[string]string{io.kubernetes.container.hash: 132eb6ad,io.kubernetes.container.
ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0765324d0b239089b8e31e17c569e8ca6eca916c178da715b3356863d6e5b08e,PodSandboxId:cfe2fa74adbbad1df1d72ade4e161087a46619bff52b02ab070983322b461d99,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:208b70eefac13ee9be00e486f79c695b15cef861c680527171a27d253d834be9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1ae23480369fa4139f6dec668d7a5a941b56ea174e9cf75e09771988fe621c95,State:CONTAINER_RUNNING,CreatedAt:1722444303518929417,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 3744fa0a-cce8-4eb8-9ae6-27e8475392e6,},Annotations:map[string]string{io.kubernet
es.container.hash: 55f2e5a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:92bd13a40acd241cc986696357382f44b2b86374d42804b30215eebc5d8ea971,PodSandboxId:882ed477f9de5ffe5374fc4225dec00f9c5fb51c8a58edddefd3708ef1302852,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1722444248528960505,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 5e4e17e2-79e1-4a39-86
01-44265c1314a6,},Annotations:map[string]string{io.kubernetes.container.hash: 91997ced,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eb0e5c820bdb410ad6ab2ee4c5e760ba94055502cdcbab390bf0250334bb10f5,PodSandboxId:48b263a01b67d981a196a95cef2448b9ebc2663761d31c0e4410304f41b37ea1,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:35379defc3e7025b1c00d37092f560ce87d06ea5ab35d04ff8a0cf22d316bcf2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:684c5ea3b61b299cd4e713c10bfd8989341da91f6175e2e6e502869c0781fb66,State:CONTAINER_EXITED,CreatedAt:1722444219030773868,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-v82lj,io.kubernetes.pod.namespace: ingress-nginx,io.kubern
etes.pod.uid: 18a8a780-bf8b-478f-b6eb-80d97ddb27fc,},Annotations:map[string]string{io.kubernetes.container.hash: f25dcf07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b4041dd1f1c11f032d0d76bfe7b25341c0780ec9baf9f0c16dccd7331ab44680,PodSandboxId:628e7da66f8815386415117c4a24c1373e34e2420d25c7ac02c3825af2375117,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:35379defc3e7025b1c00d37092f560ce87d06ea5ab35d04ff8a0cf22d316bcf2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:684c5ea3b61b299cd4e713c10bfd8989341da91f6175e2e6e502869c0781fb66,State:CONTAINER_EXITED,CreatedAt:1722444218890080358,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-gxbnx,io.kubernetes.
pod.namespace: ingress-nginx,io.kubernetes.pod.uid: c49c7125-f6a8-4a0e-9131-465c371eaff9,},Annotations:map[string]string{io.kubernetes.container.hash: 29625ee5,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9b4ec7714ed78c410ef58fb9d24df3a4cb05ade3bd4cf91657da0b63033da347,PodSandboxId:49140dfffc40eb129afc9fd3533c09a10bced535dd56d13034359be5f956dbcd,Metadata:&ContainerMetadata{Name:local-path-provisioner,Attempt:0,},Image:&ImageSpec{Image:docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e16d1e3a1066751ebbb1d00bd843b566c69cddc5bf5f6d00edbc3fcf26a4a6bf,State:CONTAINER_RUNNING,CreatedAt:1722444214462966358,Labels:map[string]string{io.kubernetes.container.name: local-path-provisioner,io.kubernetes.
pod.name: local-path-provisioner-8d985888d-9hlt7,io.kubernetes.pod.namespace: local-path-storage,io.kubernetes.pod.uid: b506afd6-0c0a-470e-b1b3-a48ad3f6977d,},Annotations:map[string]string{io.kubernetes.container.hash: 58aac654,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:583dd18f14d371d009203df28c79f7b43401f51f624f86e8785026321eca760b,PodSandboxId:952962996be74f0b20c4732212097ab1cb35e33f5daebe47ba8ed7fd8f5a4ef2,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:31f034feb3f16062e93be7c40efc596553c89de172e2e412e588f02382388872,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a24c7c057ec8730aaa152f77366454835a46dc699fcf243698a622788fd48d62,State:CONTAINER_RUNNING,CreatedAt:1722444205420694074,Labels:map[string]string{io
.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metrics-server-c59844bb4-j57l6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4638cda1-728a-48d6-9736-4f6234e9f6c1,},Annotations:map[string]string{io.kubernetes.container.hash: bc7a47c0,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:689504ce84d77edb6d4fa85ce92f64fe541eb2f4c1bb3dd5402f35a76268d00a,PodSandboxId:0fa0ab46a357457b73d44fabedfc912952f7846d3581c3c7007f54da86d27006,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbd
a1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722444159409456506,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2e3f9681-80f4-4d36-9897-9103dcd23543,},Annotations:map[string]string{io.kubernetes.container.hash: a0cfa204,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bf02a171405c16c5b33884703776484ed4ef25b7a98c921e000897ade7c3f7f5,PodSandboxId:e3c08cf6b87977be5815118cf527ed7673856398de7961f292fb2ca7a27091d5,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAIN
ER_RUNNING,CreatedAt:1722444156494522009,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-5xsd7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e335215e-3b35-4219-aebf-5cb36b99f501,},Annotations:map[string]string{io.kubernetes.container.hash: 95dbeee8,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:71c2430355fdd45d105e4f0a661352d41ef566f3830a43f4c7a29120639b9b24,PodSandboxId:6df6605b88ca96d64f40013b818e348dc291fc090148fdb57b8f8413584cd189,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e
122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722444153834639592,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-p46dc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3f47ba8b-8470-4e58-aabc-6cc47f18d726,},Annotations:map[string]string{io.kubernetes.container.hash: 7061031b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:37fa9519210b38e6c0b8654e3767fa4299a2f1e74f6e8b921b965aa18a752f7d,PodSandboxId:c1c9c7146766f915a3072287e8b12ad11b89b3cae446cdd9d2b5dbdd204faad9,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a89
9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722444133635200994,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-190022,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: abdcd5276b59307086e39e5a0733e075,},Annotations:map[string]string{io.kubernetes.container.hash: f6eb76e5,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a9196ff47535f2ce157de81e4cff038af22e270b2559341c143d31fee8ef1914,PodSandboxId:953e8051b6a6ca722e0a4a2272ffc53736ad34b7fb4b064740f736bdcfbe1b85,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},Use
rSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722444133630129123,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-190022,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cae04f723b44db246bbd25dc52dcd209,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b6e02749c1e2f10d2e9e617185cc64a68404a6d54853b78cd0a71d5920fe8984,PodSandboxId:4bc06c234387f8877ab21f3854b4d8cc6998c9c371976fbbca4afbbe8d353f86,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifi
edImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722444133640130087,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-190022,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 35e5d3a0b14653fa12e710c67ba3ee39,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4259df48d13e2dfd05141f8bbfbb4340cdeeaf42b3182a0b13fe16cf804ec32b,PodSandboxId:d4a9685db8a6d1e75221757fd611aed144e97c8c55a8e2c3f29940e27c5e166f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecif
iedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722444133418400683,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-190022,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 241e9b0c677781fd11f1e06d10601799,},Annotations:map[string]string{io.kubernetes.container.hash: 65e98503,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=fe4d2b79-2dcf-458a-9908-23ab51d5b0b2 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                        CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	b72204b0c0ed6       docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6                        7 seconds ago       Running             hello-world-app           0                   beedf3034d964       hello-world-app-6778b5fc9f-xlw6l
	0765324d0b239       docker.io/library/nginx@sha256:208b70eefac13ee9be00e486f79c695b15cef861c680527171a27d253d834be9                              2 minutes ago       Running             nginx                     0                   cfe2fa74adbba       nginx
	92bd13a40acd2       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e                          3 minutes ago       Running             busybox                   0                   882ed477f9de5       busybox
	eb0e5c820bdb4       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:35379defc3e7025b1c00d37092f560ce87d06ea5ab35d04ff8a0cf22d316bcf2   3 minutes ago       Exited              patch                     0                   48b263a01b67d       ingress-nginx-admission-patch-v82lj
	b4041dd1f1c11       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:35379defc3e7025b1c00d37092f560ce87d06ea5ab35d04ff8a0cf22d316bcf2   3 minutes ago       Exited              create                    0                   628e7da66f881       ingress-nginx-admission-create-gxbnx
	9b4ec7714ed78       docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef             3 minutes ago       Running             local-path-provisioner    0                   49140dfffc40e       local-path-provisioner-8d985888d-9hlt7
	583dd18f14d37       registry.k8s.io/metrics-server/metrics-server@sha256:31f034feb3f16062e93be7c40efc596553c89de172e2e412e588f02382388872        4 minutes ago       Running             metrics-server            0                   952962996be74       metrics-server-c59844bb4-j57l6
	689504ce84d77       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                             4 minutes ago       Running             storage-provisioner       0                   0fa0ab46a3574       storage-provisioner
	bf02a171405c1       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                                             4 minutes ago       Running             coredns                   0                   e3c08cf6b8797       coredns-7db6d8ff4d-5xsd7
	71c2430355fdd       55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1                                                             4 minutes ago       Running             kube-proxy                0                   6df6605b88ca9       kube-proxy-p46dc
	b6e02749c1e2f       76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e                                                             5 minutes ago       Running             kube-controller-manager   0                   4bc06c234387f       kube-controller-manager-addons-190022
	37fa9519210b3       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                                             5 minutes ago       Running             etcd                      0                   c1c9c7146766f       etcd-addons-190022
	a9196ff47535f       3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2                                                             5 minutes ago       Running             kube-scheduler            0                   953e8051b6a6c       kube-scheduler-addons-190022
	4259df48d13e2       1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d                                                             5 minutes ago       Running             kube-apiserver            0                   d4a9685db8a6d       kube-apiserver-addons-190022
	
	
	==> coredns [bf02a171405c16c5b33884703776484ed4ef25b7a98c921e000897ade7c3f7f5] <==
	[INFO] 10.244.0.7:35275 - 2356 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.001253212s
	[INFO] 10.244.0.7:47135 - 24081 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000074895s
	[INFO] 10.244.0.7:47135 - 19723 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.00007575s
	[INFO] 10.244.0.7:45814 - 12542 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.00012956s
	[INFO] 10.244.0.7:45814 - 36093 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000078848s
	[INFO] 10.244.0.7:39930 - 18624 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000100773s
	[INFO] 10.244.0.7:39930 - 22466 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000108035s
	[INFO] 10.244.0.7:58512 - 28535 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000052653s
	[INFO] 10.244.0.7:58512 - 45128 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000063382s
	[INFO] 10.244.0.7:47265 - 25829 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000028845s
	[INFO] 10.244.0.7:47265 - 46824 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000061601s
	[INFO] 10.244.0.7:34028 - 29937 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000026686s
	[INFO] 10.244.0.7:34028 - 22007 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000028446s
	[INFO] 10.244.0.7:38783 - 18194 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000027676s
	[INFO] 10.244.0.7:38783 - 35093 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000063226s
	[INFO] 10.244.0.22:54467 - 29428 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.001089104s
	[INFO] 10.244.0.22:49179 - 8534 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000411106s
	[INFO] 10.244.0.22:42935 - 10254 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000127186s
	[INFO] 10.244.0.22:39706 - 28067 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000104068s
	[INFO] 10.244.0.22:38280 - 13891 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000114332s
	[INFO] 10.244.0.22:49870 - 12013 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000112283s
	[INFO] 10.244.0.22:40432 - 31354 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 610 0.000832301s
	[INFO] 10.244.0.22:38095 - 44901 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.003515395s
	[INFO] 10.244.0.25:60167 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000369871s
	[INFO] 10.244.0.25:48255 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000111281s
	
	
	==> describe nodes <==
	Name:               addons-190022
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-190022
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=1d737dad7efa60c56d30434fcd857dd3b14c91d9
	                    minikube.k8s.io/name=addons-190022
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_07_31T16_42_19_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-190022
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 31 Jul 2024 16:42:16 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-190022
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 31 Jul 2024 16:47:24 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 31 Jul 2024 16:45:22 +0000   Wed, 31 Jul 2024 16:42:14 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 31 Jul 2024 16:45:22 +0000   Wed, 31 Jul 2024 16:42:14 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 31 Jul 2024 16:45:22 +0000   Wed, 31 Jul 2024 16:42:14 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 31 Jul 2024 16:45:22 +0000   Wed, 31 Jul 2024 16:42:19 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.140
	  Hostname:    addons-190022
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912780Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912780Ki
	  pods:               110
	System Info:
	  Machine ID:                 f12e0c90c0b74bb2ab73fd663fb74722
	  System UUID:                f12e0c90-c0b7-4bb2-ab73-fd663fb74722
	  Boot ID:                    3779d878-0e6a-41ae-98eb-58e93b91f1b1
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (12 in total)
	  Namespace                   Name                                      CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                      ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m25s
	  default                     hello-world-app-6778b5fc9f-xlw6l          0 (0%)        0 (0%)      0 (0%)           0 (0%)         11s
	  default                     nginx                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m31s
	  kube-system                 coredns-7db6d8ff4d-5xsd7                  100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     4m57s
	  kube-system                 etcd-addons-190022                        100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         5m12s
	  kube-system                 kube-apiserver-addons-190022              250m (12%)    0 (0%)      0 (0%)           0 (0%)         5m12s
	  kube-system                 kube-controller-manager-addons-190022     200m (10%)    0 (0%)      0 (0%)           0 (0%)         5m12s
	  kube-system                 kube-proxy-p46dc                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m58s
	  kube-system                 kube-scheduler-addons-190022              100m (5%)     0 (0%)      0 (0%)           0 (0%)         5m12s
	  kube-system                 metrics-server-c59844bb4-j57l6            100m (5%)     0 (0%)      200Mi (5%)       0 (0%)         4m52s
	  kube-system                 storage-provisioner                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m53s
	  local-path-storage          local-path-provisioner-8d985888d-9hlt7    0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m52s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  0 (0%)
	  memory             370Mi (9%)  170Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 4m55s  kube-proxy       
	  Normal  Starting                 5m12s  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  5m12s  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  5m12s  kubelet          Node addons-190022 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m12s  kubelet          Node addons-190022 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m12s  kubelet          Node addons-190022 status is now: NodeHasSufficientPID
	  Normal  NodeReady                5m11s  kubelet          Node addons-190022 status is now: NodeReady
	  Normal  RegisteredNode           4m58s  node-controller  Node addons-190022 event: Registered Node addons-190022 in Controller
	
	
	==> dmesg <==
	[ +14.749703] systemd-fstab-generator[1475]: Ignoring "noauto" option for root device
	[  +0.139032] kauditd_printk_skb: 21 callbacks suppressed
	[  +5.041311] kauditd_printk_skb: 119 callbacks suppressed
	[  +5.057407] kauditd_printk_skb: 150 callbacks suppressed
	[  +5.432897] kauditd_printk_skb: 55 callbacks suppressed
	[Jul31 16:43] kauditd_printk_skb: 2 callbacks suppressed
	[  +6.952538] kauditd_printk_skb: 1 callbacks suppressed
	[  +5.496129] kauditd_printk_skb: 23 callbacks suppressed
	[  +9.020916] kauditd_printk_skb: 4 callbacks suppressed
	[  +6.815173] kauditd_printk_skb: 10 callbacks suppressed
	[  +5.652736] kauditd_printk_skb: 28 callbacks suppressed
	[  +9.292292] kauditd_printk_skb: 85 callbacks suppressed
	[  +5.099346] kauditd_printk_skb: 11 callbacks suppressed
	[Jul31 16:44] kauditd_printk_skb: 2 callbacks suppressed
	[ +14.999978] kauditd_printk_skb: 50 callbacks suppressed
	[ +10.306622] kauditd_printk_skb: 2 callbacks suppressed
	[ +11.639899] kauditd_printk_skb: 29 callbacks suppressed
	[  +5.397971] kauditd_printk_skb: 43 callbacks suppressed
	[  +6.540824] kauditd_printk_skb: 56 callbacks suppressed
	[  +5.806532] kauditd_printk_skb: 22 callbacks suppressed
	[Jul31 16:45] kauditd_printk_skb: 55 callbacks suppressed
	[ +29.802053] kauditd_printk_skb: 7 callbacks suppressed
	[  +8.231383] kauditd_printk_skb: 33 callbacks suppressed
	[Jul31 16:47] kauditd_printk_skb: 6 callbacks suppressed
	[  +5.102617] kauditd_printk_skb: 21 callbacks suppressed
	
	
	==> etcd [37fa9519210b38e6c0b8654e3767fa4299a2f1e74f6e8b921b965aa18a752f7d] <==
	{"level":"warn","ts":"2024-07-31T16:43:50.693816Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-07-31T16:43:50.294997Z","time spent":"396.679149ms","remote":"127.0.0.1:38316","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":541,"response count":0,"response size":40,"request content":"compare:<target:MOD key:\"/registry/leases/kube-node-lease/addons-190022\" mod_revision:1021 > success:<request_put:<key:\"/registry/leases/kube-node-lease/addons-190022\" value_size:487 >> failure:<request_range:<key:\"/registry/leases/kube-node-lease/addons-190022\" > >"}
	{"level":"warn","ts":"2024-07-31T16:43:50.695523Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"221.016475ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/roles/\" range_end:\"/registry/roles0\" count_only:true ","response":"range_response_count:0 size:7"}
	{"level":"info","ts":"2024-07-31T16:43:50.695578Z","caller":"traceutil/trace.go:171","msg":"trace[203305282] range","detail":"{range_begin:/registry/roles/; range_end:/registry/roles0; response_count:0; response_revision:1097; }","duration":"221.101194ms","start":"2024-07-31T16:43:50.474468Z","end":"2024-07-31T16:43:50.69557Z","steps":["trace[203305282] 'agreement among raft nodes before linearized reading'  (duration: 220.977966ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-31T16:43:50.69576Z","caller":"traceutil/trace.go:171","msg":"trace[903938694] transaction","detail":"{read_only:false; response_revision:1097; number_of_response:1; }","duration":"279.79359ms","start":"2024-07-31T16:43:50.415959Z","end":"2024-07-31T16:43:50.695753Z","steps":["trace[903938694] 'process raft request'  (duration: 279.414648ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-31T16:43:50.69592Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"104.928449ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/gcp-auth/\" range_end:\"/registry/pods/gcp-auth0\" ","response":"range_response_count:3 size:11453"}
	{"level":"warn","ts":"2024-07-31T16:43:50.696376Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"221.160402ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/events/gadget/gadget-f5ddz.17e759e646c4e513\" ","response":"range_response_count:1 size:779"}
	{"level":"info","ts":"2024-07-31T16:43:50.696414Z","caller":"traceutil/trace.go:171","msg":"trace[1097566836] range","detail":"{range_begin:/registry/events/gadget/gadget-f5ddz.17e759e646c4e513; range_end:; response_count:1; response_revision:1097; }","duration":"221.220289ms","start":"2024-07-31T16:43:50.475187Z","end":"2024-07-31T16:43:50.696407Z","steps":["trace[1097566836] 'agreement among raft nodes before linearized reading'  (duration: 221.133272ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-31T16:43:50.69666Z","caller":"traceutil/trace.go:171","msg":"trace[1272129351] range","detail":"{range_begin:/registry/pods/gcp-auth/; range_end:/registry/pods/gcp-auth0; response_count:3; response_revision:1097; }","duration":"105.019388ms","start":"2024-07-31T16:43:50.590961Z","end":"2024-07-31T16:43:50.695981Z","steps":["trace[1272129351] 'agreement among raft nodes before linearized reading'  (duration: 104.892665ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-31T16:44:01.459193Z","caller":"traceutil/trace.go:171","msg":"trace[1792784800] transaction","detail":"{read_only:false; response_revision:1155; number_of_response:1; }","duration":"111.773545ms","start":"2024-07-31T16:44:01.347397Z","end":"2024-07-31T16:44:01.45917Z","steps":["trace[1792784800] 'process raft request'  (duration: 111.411046ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-31T16:44:01.82981Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"211.860222ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/leases/kube-system/snapshot-controller-leader\" ","response":"range_response_count:1 size:499"}
	{"level":"info","ts":"2024-07-31T16:44:01.830007Z","caller":"traceutil/trace.go:171","msg":"trace[148798408] range","detail":"{range_begin:/registry/leases/kube-system/snapshot-controller-leader; range_end:; response_count:1; response_revision:1155; }","duration":"212.118848ms","start":"2024-07-31T16:44:01.617875Z","end":"2024-07-31T16:44:01.829994Z","steps":["trace[148798408] 'range keys from in-memory index tree'  (duration: 211.706506ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-31T16:44:04.790297Z","caller":"traceutil/trace.go:171","msg":"trace[231479675] transaction","detail":"{read_only:false; response_revision:1170; number_of_response:1; }","duration":"155.69714ms","start":"2024-07-31T16:44:04.634582Z","end":"2024-07-31T16:44:04.790279Z","steps":["trace[231479675] 'process raft request'  (duration: 155.518894ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-31T16:45:01.717109Z","caller":"traceutil/trace.go:171","msg":"trace[1846756209] linearizableReadLoop","detail":"{readStateIndex:1677; appliedIndex:1676; }","duration":"274.751668ms","start":"2024-07-31T16:45:01.442324Z","end":"2024-07-31T16:45:01.717076Z","steps":["trace[1846756209] 'read index received'  (duration: 274.573584ms)","trace[1846756209] 'applied index is now lower than readState.Index'  (duration: 177.471µs)"],"step_count":2}
	{"level":"info","ts":"2024-07-31T16:45:01.717284Z","caller":"traceutil/trace.go:171","msg":"trace[1667804110] transaction","detail":"{read_only:false; response_revision:1617; number_of_response:1; }","duration":"318.989675ms","start":"2024-07-31T16:45:01.398288Z","end":"2024-07-31T16:45:01.717278Z","steps":["trace[1667804110] 'process raft request'  (duration: 318.670611ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-31T16:45:01.717455Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-07-31T16:45:01.398271Z","time spent":"319.033235ms","remote":"127.0.0.1:38206","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":3802,"response count":0,"response size":40,"request content":"compare:<target:MOD key:\"/registry/pods/headlamp/headlamp-9d868696f-kxkw6\" mod_revision:1601 > success:<request_put:<key:\"/registry/pods/headlamp/headlamp-9d868696f-kxkw6\" value_size:3746 >> failure:<request_range:<key:\"/registry/pods/headlamp/headlamp-9d868696f-kxkw6\" > >"}
	{"level":"warn","ts":"2024-07-31T16:45:01.717574Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"275.264411ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/resourcequotas/\" range_end:\"/registry/resourcequotas0\" count_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-07-31T16:45:01.71761Z","caller":"traceutil/trace.go:171","msg":"trace[505274928] range","detail":"{range_begin:/registry/resourcequotas/; range_end:/registry/resourcequotas0; response_count:0; response_revision:1617; }","duration":"275.333998ms","start":"2024-07-31T16:45:01.44227Z","end":"2024-07-31T16:45:01.717604Z","steps":["trace[505274928] 'agreement among raft nodes before linearized reading'  (duration: 275.275441ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-31T16:45:01.717741Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"258.925085ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/horizontalpodautoscalers/\" range_end:\"/registry/horizontalpodautoscalers0\" count_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-07-31T16:45:01.717774Z","caller":"traceutil/trace.go:171","msg":"trace[398093156] range","detail":"{range_begin:/registry/horizontalpodautoscalers/; range_end:/registry/horizontalpodautoscalers0; response_count:0; response_revision:1617; }","duration":"258.982988ms","start":"2024-07-31T16:45:01.458785Z","end":"2024-07-31T16:45:01.717768Z","steps":["trace[398093156] 'agreement among raft nodes before linearized reading'  (duration: 258.937037ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-31T16:45:01.718106Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"101.571059ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/\" range_end:\"/registry/pods/kube-system0\" ","response":"range_response_count:14 size:70538"}
	{"level":"info","ts":"2024-07-31T16:45:01.718148Z","caller":"traceutil/trace.go:171","msg":"trace[1312937675] range","detail":"{range_begin:/registry/pods/kube-system/; range_end:/registry/pods/kube-system0; response_count:14; response_revision:1617; }","duration":"101.643381ms","start":"2024-07-31T16:45:01.616497Z","end":"2024-07-31T16:45:01.718141Z","steps":["trace[1312937675] 'agreement among raft nodes before linearized reading'  (duration: 101.48067ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-31T16:45:01.718304Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"171.608555ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/default/\" range_end:\"/registry/pods/default0\" ","response":"range_response_count:2 size:5242"}
	{"level":"info","ts":"2024-07-31T16:45:01.718334Z","caller":"traceutil/trace.go:171","msg":"trace[74531476] range","detail":"{range_begin:/registry/pods/default/; range_end:/registry/pods/default0; response_count:2; response_revision:1617; }","duration":"171.641584ms","start":"2024-07-31T16:45:01.546686Z","end":"2024-07-31T16:45:01.718328Z","steps":["trace[74531476] 'agreement among raft nodes before linearized reading'  (duration: 171.529137ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-31T16:45:40.093464Z","caller":"traceutil/trace.go:171","msg":"trace[1985704140] transaction","detail":"{read_only:false; response_revision:1830; number_of_response:1; }","duration":"133.990425ms","start":"2024-07-31T16:45:39.959449Z","end":"2024-07-31T16:45:40.093439Z","steps":["trace[1985704140] 'process raft request'  (duration: 133.838386ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-31T16:45:46.22431Z","caller":"traceutil/trace.go:171","msg":"trace[133151561] transaction","detail":"{read_only:false; response_revision:1867; number_of_response:1; }","duration":"108.37674ms","start":"2024-07-31T16:45:46.11591Z","end":"2024-07-31T16:45:46.224287Z","steps":["trace[133151561] 'process raft request'  (duration: 108.174866ms)"],"step_count":1}
	
	
	==> kernel <==
	 16:47:30 up 5 min,  0 users,  load average: 0.50, 1.28, 0.70
	Linux addons-190022 5.10.207 #1 SMP Mon Jul 29 15:19:02 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [4259df48d13e2dfd05141f8bbfbb4340cdeeaf42b3182a0b13fe16cf804ec32b] <==
	E0731 16:44:28.988176       1 controller.go:146] Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	E0731 16:44:28.989040       1 available_controller.go:460] v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.108.82.240:443/apis/metrics.k8s.io/v1beta1: Get "https://10.108.82.240:443/apis/metrics.k8s.io/v1beta1": dial tcp 10.108.82.240:443: connect: connection refused
	I0731 16:44:29.035962       1 handler.go:286] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I0731 16:44:36.631090       1 handler.go:286] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	W0731 16:44:37.658368       1 cacher.go:168] Terminating all watchers from cacher traces.gadget.kinvolk.io
	I0731 16:44:48.587906       1 alloc.go:330] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.104.33.233"}
	I0731 16:44:59.382701       1 controller.go:615] quota admission added evaluator for: ingresses.networking.k8s.io
	I0731 16:44:59.540123       1 alloc.go:330] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.107.144.237"}
	I0731 16:45:17.906762       1 controller.go:615] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	I0731 16:45:41.241947       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0731 16:45:41.242033       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0731 16:45:41.273135       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0731 16:45:41.273191       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0731 16:45:41.282114       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0731 16:45:41.282158       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0731 16:45:41.295069       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0731 16:45:41.295119       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0731 16:45:41.320587       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0731 16:45:41.320717       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	W0731 16:45:42.282839       1 cacher.go:168] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W0731 16:45:42.320654       1 cacher.go:168] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	W0731 16:45:42.375879       1 cacher.go:168] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	I0731 16:47:19.940191       1 alloc.go:330] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.105.116.6"}
	E0731 16:47:22.068809       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"ingress-nginx\" not found]"
	
	
	==> kube-controller-manager [b6e02749c1e2f10d2e9e617185cc64a68404a6d54853b78cd0a71d5920fe8984] <==
	W0731 16:46:14.274358       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0731 16:46:14.274432       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0731 16:46:17.026005       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0731 16:46:17.026147       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0731 16:46:18.585612       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0731 16:46:18.585715       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0731 16:46:19.993778       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0731 16:46:19.993886       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0731 16:46:52.960181       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0731 16:46:52.960277       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0731 16:46:55.634788       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0731 16:46:55.634919       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0731 16:46:58.549460       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0731 16:46:58.549501       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0731 16:47:00.738793       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0731 16:47:00.738833       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	I0731 16:47:19.783601       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-6778b5fc9f" duration="30.835044ms"
	I0731 16:47:19.794747       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-6778b5fc9f" duration="10.771884ms"
	I0731 16:47:19.795948       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-6778b5fc9f" duration="99.396µs"
	I0731 16:47:19.808898       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-6778b5fc9f" duration="40.284µs"
	I0731 16:47:21.973588       1 job_controller.go:566] "enqueueing job" logger="job-controller" key="ingress-nginx/ingress-nginx-admission-create"
	I0731 16:47:21.975450       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="ingress-nginx/ingress-nginx-controller-6d9bd977d4" duration="4.882µs"
	I0731 16:47:21.982545       1 job_controller.go:566] "enqueueing job" logger="job-controller" key="ingress-nginx/ingress-nginx-admission-patch"
	I0731 16:47:23.364893       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-6778b5fc9f" duration="8.402002ms"
	I0731 16:47:23.366908       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-6778b5fc9f" duration="36.971µs"
	
	
	==> kube-proxy [71c2430355fdd45d105e4f0a661352d41ef566f3830a43f4c7a29120639b9b24] <==
	I0731 16:42:34.695147       1 server_linux.go:69] "Using iptables proxy"
	I0731 16:42:34.706680       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.39.140"]
	I0731 16:42:34.816880       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0731 16:42:34.816941       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0731 16:42:34.816962       1 server_linux.go:165] "Using iptables Proxier"
	I0731 16:42:34.830525       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0731 16:42:34.830764       1 server.go:872] "Version info" version="v1.30.3"
	I0731 16:42:34.830787       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0731 16:42:34.831885       1 config.go:192] "Starting service config controller"
	I0731 16:42:34.831914       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0731 16:42:34.831934       1 config.go:101] "Starting endpoint slice config controller"
	I0731 16:42:34.831938       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0731 16:42:34.832480       1 config.go:319] "Starting node config controller"
	I0731 16:42:34.832505       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0731 16:42:34.932669       1 shared_informer.go:320] Caches are synced for node config
	I0731 16:42:34.932700       1 shared_informer.go:320] Caches are synced for service config
	I0731 16:42:34.932748       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [a9196ff47535f2ce157de81e4cff038af22e270b2559341c143d31fee8ef1914] <==
	W0731 16:42:16.094276       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0731 16:42:16.094298       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0731 16:42:16.094337       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0731 16:42:16.094359       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0731 16:42:16.094420       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0731 16:42:16.094471       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0731 16:42:16.915664       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0731 16:42:16.915741       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0731 16:42:16.975998       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0731 16:42:16.976060       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0731 16:42:16.999604       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0731 16:42:17.000321       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0731 16:42:17.166569       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0731 16:42:17.166706       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0731 16:42:17.180096       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0731 16:42:17.180180       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0731 16:42:17.186396       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0731 16:42:17.186476       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0731 16:42:17.218736       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0731 16:42:17.218815       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0731 16:42:17.218748       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0731 16:42:17.218896       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0731 16:42:17.448355       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0731 16:42:17.448491       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0731 16:42:19.887509       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Jul 31 16:47:19 addons-190022 kubelet[1270]: I0731 16:47:19.793193    1270 memory_manager.go:354] "RemoveStaleState removing state" podUID="1d031c16-6f7c-45e2-9123-4c71d43ebf7e" containerName="hostpath"
	Jul 31 16:47:19 addons-190022 kubelet[1270]: I0731 16:47:19.793202    1270 memory_manager.go:354] "RemoveStaleState removing state" podUID="3d3f15af-4f66-4845-9bbb-874f2d6254fd" containerName="volume-snapshot-controller"
	Jul 31 16:47:19 addons-190022 kubelet[1270]: I0731 16:47:19.793206    1270 memory_manager.go:354] "RemoveStaleState removing state" podUID="b71bdb51-1e7b-4e0d-953d-85dd2af82edf" containerName="task-pv-container"
	Jul 31 16:47:19 addons-190022 kubelet[1270]: I0731 16:47:19.882558    1270 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fcf2c\" (UniqueName: \"kubernetes.io/projected/01fab767-1af8-4759-a725-caff0c1428fc-kube-api-access-fcf2c\") pod \"hello-world-app-6778b5fc9f-xlw6l\" (UID: \"01fab767-1af8-4759-a725-caff0c1428fc\") " pod="default/hello-world-app-6778b5fc9f-xlw6l"
	Jul 31 16:47:20 addons-190022 kubelet[1270]: I0731 16:47:20.889171    1270 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6xzws\" (UniqueName: \"kubernetes.io/projected/d54a8e85-afc8-4ad8-84be-3d5e643783f0-kube-api-access-6xzws\") pod \"d54a8e85-afc8-4ad8-84be-3d5e643783f0\" (UID: \"d54a8e85-afc8-4ad8-84be-3d5e643783f0\") "
	Jul 31 16:47:20 addons-190022 kubelet[1270]: I0731 16:47:20.891059    1270 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d54a8e85-afc8-4ad8-84be-3d5e643783f0-kube-api-access-6xzws" (OuterVolumeSpecName: "kube-api-access-6xzws") pod "d54a8e85-afc8-4ad8-84be-3d5e643783f0" (UID: "d54a8e85-afc8-4ad8-84be-3d5e643783f0"). InnerVolumeSpecName "kube-api-access-6xzws". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Jul 31 16:47:20 addons-190022 kubelet[1270]: I0731 16:47:20.989745    1270 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-6xzws\" (UniqueName: \"kubernetes.io/projected/d54a8e85-afc8-4ad8-84be-3d5e643783f0-kube-api-access-6xzws\") on node \"addons-190022\" DevicePath \"\""
	Jul 31 16:47:21 addons-190022 kubelet[1270]: I0731 16:47:21.333211    1270 scope.go:117] "RemoveContainer" containerID="225cd7f2a77e96b3fd6d3527455baf4acd702f449071d40cc7855e50a0724220"
	Jul 31 16:47:21 addons-190022 kubelet[1270]: I0731 16:47:21.363362    1270 scope.go:117] "RemoveContainer" containerID="225cd7f2a77e96b3fd6d3527455baf4acd702f449071d40cc7855e50a0724220"
	Jul 31 16:47:21 addons-190022 kubelet[1270]: E0731 16:47:21.363859    1270 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"225cd7f2a77e96b3fd6d3527455baf4acd702f449071d40cc7855e50a0724220\": container with ID starting with 225cd7f2a77e96b3fd6d3527455baf4acd702f449071d40cc7855e50a0724220 not found: ID does not exist" containerID="225cd7f2a77e96b3fd6d3527455baf4acd702f449071d40cc7855e50a0724220"
	Jul 31 16:47:21 addons-190022 kubelet[1270]: I0731 16:47:21.363901    1270 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"225cd7f2a77e96b3fd6d3527455baf4acd702f449071d40cc7855e50a0724220"} err="failed to get container status \"225cd7f2a77e96b3fd6d3527455baf4acd702f449071d40cc7855e50a0724220\": rpc error: code = NotFound desc = could not find container \"225cd7f2a77e96b3fd6d3527455baf4acd702f449071d40cc7855e50a0724220\": container with ID starting with 225cd7f2a77e96b3fd6d3527455baf4acd702f449071d40cc7855e50a0724220 not found: ID does not exist"
	Jul 31 16:47:22 addons-190022 kubelet[1270]: I0731 16:47:22.397380    1270 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="18a8a780-bf8b-478f-b6eb-80d97ddb27fc" path="/var/lib/kubelet/pods/18a8a780-bf8b-478f-b6eb-80d97ddb27fc/volumes"
	Jul 31 16:47:22 addons-190022 kubelet[1270]: I0731 16:47:22.397818    1270 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c49c7125-f6a8-4a0e-9131-465c371eaff9" path="/var/lib/kubelet/pods/c49c7125-f6a8-4a0e-9131-465c371eaff9/volumes"
	Jul 31 16:47:22 addons-190022 kubelet[1270]: I0731 16:47:22.398188    1270 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d54a8e85-afc8-4ad8-84be-3d5e643783f0" path="/var/lib/kubelet/pods/d54a8e85-afc8-4ad8-84be-3d5e643783f0/volumes"
	Jul 31 16:47:25 addons-190022 kubelet[1270]: I0731 16:47:25.226024    1270 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/13c1675d-a7ab-4a65-87b5-c4c4d8b32f88-webhook-cert\") pod \"13c1675d-a7ab-4a65-87b5-c4c4d8b32f88\" (UID: \"13c1675d-a7ab-4a65-87b5-c4c4d8b32f88\") "
	Jul 31 16:47:25 addons-190022 kubelet[1270]: I0731 16:47:25.226070    1270 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q8fzv\" (UniqueName: \"kubernetes.io/projected/13c1675d-a7ab-4a65-87b5-c4c4d8b32f88-kube-api-access-q8fzv\") pod \"13c1675d-a7ab-4a65-87b5-c4c4d8b32f88\" (UID: \"13c1675d-a7ab-4a65-87b5-c4c4d8b32f88\") "
	Jul 31 16:47:25 addons-190022 kubelet[1270]: I0731 16:47:25.229678    1270 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/13c1675d-a7ab-4a65-87b5-c4c4d8b32f88-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "13c1675d-a7ab-4a65-87b5-c4c4d8b32f88" (UID: "13c1675d-a7ab-4a65-87b5-c4c4d8b32f88"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Jul 31 16:47:25 addons-190022 kubelet[1270]: I0731 16:47:25.231636    1270 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/13c1675d-a7ab-4a65-87b5-c4c4d8b32f88-kube-api-access-q8fzv" (OuterVolumeSpecName: "kube-api-access-q8fzv") pod "13c1675d-a7ab-4a65-87b5-c4c4d8b32f88" (UID: "13c1675d-a7ab-4a65-87b5-c4c4d8b32f88"). InnerVolumeSpecName "kube-api-access-q8fzv". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Jul 31 16:47:25 addons-190022 kubelet[1270]: I0731 16:47:25.326712    1270 reconciler_common.go:289] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/13c1675d-a7ab-4a65-87b5-c4c4d8b32f88-webhook-cert\") on node \"addons-190022\" DevicePath \"\""
	Jul 31 16:47:25 addons-190022 kubelet[1270]: I0731 16:47:25.326749    1270 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-q8fzv\" (UniqueName: \"kubernetes.io/projected/13c1675d-a7ab-4a65-87b5-c4c4d8b32f88-kube-api-access-q8fzv\") on node \"addons-190022\" DevicePath \"\""
	Jul 31 16:47:25 addons-190022 kubelet[1270]: I0731 16:47:25.357199    1270 scope.go:117] "RemoveContainer" containerID="05a561464054948547fef4a3c71f0891b7d39d1b40102fd3309065bb97153733"
	Jul 31 16:47:25 addons-190022 kubelet[1270]: I0731 16:47:25.376470    1270 scope.go:117] "RemoveContainer" containerID="05a561464054948547fef4a3c71f0891b7d39d1b40102fd3309065bb97153733"
	Jul 31 16:47:25 addons-190022 kubelet[1270]: E0731 16:47:25.376822    1270 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"05a561464054948547fef4a3c71f0891b7d39d1b40102fd3309065bb97153733\": container with ID starting with 05a561464054948547fef4a3c71f0891b7d39d1b40102fd3309065bb97153733 not found: ID does not exist" containerID="05a561464054948547fef4a3c71f0891b7d39d1b40102fd3309065bb97153733"
	Jul 31 16:47:25 addons-190022 kubelet[1270]: I0731 16:47:25.376857    1270 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"05a561464054948547fef4a3c71f0891b7d39d1b40102fd3309065bb97153733"} err="failed to get container status \"05a561464054948547fef4a3c71f0891b7d39d1b40102fd3309065bb97153733\": rpc error: code = NotFound desc = could not find container \"05a561464054948547fef4a3c71f0891b7d39d1b40102fd3309065bb97153733\": container with ID starting with 05a561464054948547fef4a3c71f0891b7d39d1b40102fd3309065bb97153733 not found: ID does not exist"
	Jul 31 16:47:26 addons-190022 kubelet[1270]: I0731 16:47:26.396487    1270 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="13c1675d-a7ab-4a65-87b5-c4c4d8b32f88" path="/var/lib/kubelet/pods/13c1675d-a7ab-4a65-87b5-c4c4d8b32f88/volumes"
	
	
	==> storage-provisioner [689504ce84d77edb6d4fa85ce92f64fe541eb2f4c1bb3dd5402f35a76268d00a] <==
	I0731 16:42:39.952704       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0731 16:42:40.170723       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0731 16:42:40.170804       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0731 16:42:40.330646       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0731 16:42:40.330826       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-190022_6b9d968e-7497-4863-82bc-7a85b6c38769!
	I0731 16:42:40.391591       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"408bdfd9-3c16-4ebc-97ef-d44c7164d237", APIVersion:"v1", ResourceVersion:"649", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-190022_6b9d968e-7497-4863-82bc-7a85b6c38769 became leader
	I0731 16:42:40.533695       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-190022_6b9d968e-7497-4863-82bc-7a85b6c38769!
	E0731 16:45:33.889779       1 controller.go:1050] claim "178a4605-3a05-4309-ab54-4c88c6342c99" in work queue no longer exists
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-190022 -n addons-190022
helpers_test.go:261: (dbg) Run:  kubectl --context addons-190022 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestAddons/parallel/Ingress FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestAddons/parallel/Ingress (151.82s)

                                                
                                    
x
+
TestAddons/parallel/MetricsServer (349.22s)

                                                
                                                
=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:409: metrics-server stabilized in 2.692014ms
addons_test.go:411: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-c59844bb4-j57l6" [4638cda1-728a-48d6-9736-4f6234e9f6c1] Running
addons_test.go:411: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.005125389s
addons_test.go:417: (dbg) Run:  kubectl --context addons-190022 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-190022 top pods -n kube-system: exit status 1 (67.951534ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7db6d8ff4d-5xsd7, age: 2m19.051954921s

                                                
                                                
** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-190022 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-190022 top pods -n kube-system: exit status 1 (60.379024ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7db6d8ff4d-5xsd7, age: 2m22.503041267s

                                                
                                                
** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-190022 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-190022 top pods -n kube-system: exit status 1 (176.473022ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7db6d8ff4d-5xsd7, age: 2m28.753176561s

                                                
                                                
** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-190022 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-190022 top pods -n kube-system: exit status 1 (68.478817ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7db6d8ff4d-5xsd7, age: 2m33.356923342s

                                                
                                                
** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-190022 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-190022 top pods -n kube-system: exit status 1 (64.728355ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7db6d8ff4d-5xsd7, age: 2m48.161778717s

                                                
                                                
** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-190022 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-190022 top pods -n kube-system: exit status 1 (74.156471ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7db6d8ff4d-5xsd7, age: 3m1.477090789s

                                                
                                                
** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-190022 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-190022 top pods -n kube-system: exit status 1 (60.765753ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7db6d8ff4d-5xsd7, age: 3m22.876785911s

                                                
                                                
** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-190022 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-190022 top pods -n kube-system: exit status 1 (63.061859ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7db6d8ff4d-5xsd7, age: 4m11.612833705s

                                                
                                                
** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-190022 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-190022 top pods -n kube-system: exit status 1 (61.620752ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7db6d8ff4d-5xsd7, age: 4m53.675473444s

                                                
                                                
** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-190022 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-190022 top pods -n kube-system: exit status 1 (62.05045ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7db6d8ff4d-5xsd7, age: 5m59.70276121s

                                                
                                                
** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-190022 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-190022 top pods -n kube-system: exit status 1 (62.079816ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7db6d8ff4d-5xsd7, age: 6m31.001988394s

                                                
                                                
** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-190022 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-190022 top pods -n kube-system: exit status 1 (62.390947ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7db6d8ff4d-5xsd7, age: 7m20.433608128s

                                                
                                                
** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-190022 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-190022 top pods -n kube-system: exit status 1 (60.319674ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7db6d8ff4d-5xsd7, age: 8m0.56338173s

                                                
                                                
** /stderr **
addons_test.go:431: failed checking metric server: exit status 1
addons_test.go:434: (dbg) Run:  out/minikube-linux-amd64 -p addons-190022 addons disable metrics-server --alsologtostderr -v=1
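The repeated "Metrics not available" errors above mean the metrics API never became queryable within the 8-minute window. Not part of the test flow, but a quick manual way to see whether the aggregated metrics API ever registered and started answering; a sketch, assuming the same kubeconfig context and that the addon registers the standard v1beta1.metrics.k8s.io APIService:

	# Is the metrics-server APIService registered and marked Available?
	kubectl --context addons-190022 get apiservices v1beta1.metrics.k8s.io
	# Does the aggregated metrics API answer at all?
	kubectl --context addons-190022 get --raw /apis/metrics.k8s.io/v1beta1/nodes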
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-190022 -n addons-190022
helpers_test.go:244: <<< TestAddons/parallel/MetricsServer FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/MetricsServer]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p addons-190022 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p addons-190022 logs -n 25: (1.210379397s)
helpers_test.go:252: TestAddons/parallel/MetricsServer logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                                            Args                                             |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| delete  | -p download-only-133798                                                                     | download-only-133798 | jenkins | v1.33.1 | 31 Jul 24 16:41 UTC | 31 Jul 24 16:41 UTC |
	| start   | --download-only -p                                                                          | binary-mirror-720978 | jenkins | v1.33.1 | 31 Jul 24 16:41 UTC |                     |
	|         | binary-mirror-720978                                                                        |                      |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                      |         |         |                     |                     |
	|         | --binary-mirror                                                                             |                      |         |         |                     |                     |
	|         | http://127.0.0.1:43559                                                                      |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	| delete  | -p binary-mirror-720978                                                                     | binary-mirror-720978 | jenkins | v1.33.1 | 31 Jul 24 16:41 UTC | 31 Jul 24 16:41 UTC |
	| addons  | disable dashboard -p                                                                        | addons-190022        | jenkins | v1.33.1 | 31 Jul 24 16:41 UTC |                     |
	|         | addons-190022                                                                               |                      |         |         |                     |                     |
	| addons  | enable dashboard -p                                                                         | addons-190022        | jenkins | v1.33.1 | 31 Jul 24 16:41 UTC |                     |
	|         | addons-190022                                                                               |                      |         |         |                     |                     |
	| start   | -p addons-190022 --wait=true                                                                | addons-190022        | jenkins | v1.33.1 | 31 Jul 24 16:41 UTC | 31 Jul 24 16:44 UTC |
	|         | --memory=4000 --alsologtostderr                                                             |                      |         |         |                     |                     |
	|         | --addons=registry                                                                           |                      |         |         |                     |                     |
	|         | --addons=metrics-server                                                                     |                      |         |         |                     |                     |
	|         | --addons=volumesnapshots                                                                    |                      |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver                                                                |                      |         |         |                     |                     |
	|         | --addons=gcp-auth                                                                           |                      |         |         |                     |                     |
	|         | --addons=cloud-spanner                                                                      |                      |         |         |                     |                     |
	|         | --addons=inspektor-gadget                                                                   |                      |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher                                                        |                      |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin                                                               |                      |         |         |                     |                     |
	|         | --addons=yakd --addons=volcano                                                              |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	|         | --addons=ingress                                                                            |                      |         |         |                     |                     |
	|         | --addons=ingress-dns                                                                        |                      |         |         |                     |                     |
	|         | --addons=helm-tiller                                                                        |                      |         |         |                     |                     |
	| addons  | addons-190022 addons disable                                                                | addons-190022        | jenkins | v1.33.1 | 31 Jul 24 16:44 UTC | 31 Jul 24 16:44 UTC |
	|         | gcp-auth --alsologtostderr                                                                  |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | disable cloud-spanner -p                                                                    | addons-190022        | jenkins | v1.33.1 | 31 Jul 24 16:44 UTC | 31 Jul 24 16:44 UTC |
	|         | addons-190022                                                                               |                      |         |         |                     |                     |
	| addons  | disable inspektor-gadget -p                                                                 | addons-190022        | jenkins | v1.33.1 | 31 Jul 24 16:44 UTC | 31 Jul 24 16:44 UTC |
	|         | addons-190022                                                                               |                      |         |         |                     |                     |
	| addons  | addons-190022 addons disable                                                                | addons-190022        | jenkins | v1.33.1 | 31 Jul 24 16:44 UTC | 31 Jul 24 16:44 UTC |
	|         | helm-tiller --alsologtostderr                                                               |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | addons-190022 addons disable                                                                | addons-190022        | jenkins | v1.33.1 | 31 Jul 24 16:44 UTC | 31 Jul 24 16:44 UTC |
	|         | yakd --alsologtostderr -v=1                                                                 |                      |         |         |                     |                     |
	| ip      | addons-190022 ip                                                                            | addons-190022        | jenkins | v1.33.1 | 31 Jul 24 16:44 UTC | 31 Jul 24 16:44 UTC |
	| addons  | addons-190022 addons disable                                                                | addons-190022        | jenkins | v1.33.1 | 31 Jul 24 16:44 UTC | 31 Jul 24 16:44 UTC |
	|         | registry --alsologtostderr                                                                  |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | disable nvidia-device-plugin                                                                | addons-190022        | jenkins | v1.33.1 | 31 Jul 24 16:44 UTC | 31 Jul 24 16:44 UTC |
	|         | -p addons-190022                                                                            |                      |         |         |                     |                     |
	| addons  | enable headlamp                                                                             | addons-190022        | jenkins | v1.33.1 | 31 Jul 24 16:44 UTC | 31 Jul 24 16:44 UTC |
	|         | -p addons-190022                                                                            |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| ssh     | addons-190022 ssh cat                                                                       | addons-190022        | jenkins | v1.33.1 | 31 Jul 24 16:44 UTC | 31 Jul 24 16:44 UTC |
	|         | /opt/local-path-provisioner/pvc-f9db1751-d5be-4a5c-a915-8af812dc20b1_default_test-pvc/file1 |                      |         |         |                     |                     |
	| addons  | addons-190022 addons disable                                                                | addons-190022        | jenkins | v1.33.1 | 31 Jul 24 16:44 UTC | 31 Jul 24 16:44 UTC |
	|         | storage-provisioner-rancher                                                                 |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-190022 addons disable                                                                | addons-190022        | jenkins | v1.33.1 | 31 Jul 24 16:44 UTC | 31 Jul 24 16:45 UTC |
	|         | headlamp --alsologtostderr                                                                  |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| ssh     | addons-190022 ssh curl -s                                                                   | addons-190022        | jenkins | v1.33.1 | 31 Jul 24 16:45 UTC |                     |
	|         | http://127.0.0.1/ -H 'Host:                                                                 |                      |         |         |                     |                     |
	|         | nginx.example.com'                                                                          |                      |         |         |                     |                     |
	| addons  | addons-190022 addons                                                                        | addons-190022        | jenkins | v1.33.1 | 31 Jul 24 16:45 UTC | 31 Jul 24 16:45 UTC |
	|         | disable csi-hostpath-driver                                                                 |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-190022 addons                                                                        | addons-190022        | jenkins | v1.33.1 | 31 Jul 24 16:45 UTC | 31 Jul 24 16:45 UTC |
	|         | disable volumesnapshots                                                                     |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| ip      | addons-190022 ip                                                                            | addons-190022        | jenkins | v1.33.1 | 31 Jul 24 16:47 UTC | 31 Jul 24 16:47 UTC |
	| addons  | addons-190022 addons disable                                                                | addons-190022        | jenkins | v1.33.1 | 31 Jul 24 16:47 UTC | 31 Jul 24 16:47 UTC |
	|         | ingress-dns --alsologtostderr                                                               |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | addons-190022 addons disable                                                                | addons-190022        | jenkins | v1.33.1 | 31 Jul 24 16:47 UTC | 31 Jul 24 16:47 UTC |
	|         | ingress --alsologtostderr -v=1                                                              |                      |         |         |                     |                     |
	| addons  | addons-190022 addons                                                                        | addons-190022        | jenkins | v1.33.1 | 31 Jul 24 16:50 UTC | 31 Jul 24 16:50 UTC |
	|         | disable metrics-server                                                                      |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/31 16:41:38
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0731 16:41:38.474649   16404 out.go:291] Setting OutFile to fd 1 ...
	I0731 16:41:38.474750   16404 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 16:41:38.474755   16404 out.go:304] Setting ErrFile to fd 2...
	I0731 16:41:38.474759   16404 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 16:41:38.474953   16404 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19349-8084/.minikube/bin
	I0731 16:41:38.475549   16404 out.go:298] Setting JSON to false
	I0731 16:41:38.476305   16404 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":1442,"bootTime":1722442656,"procs":172,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1065-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0731 16:41:38.476359   16404 start.go:139] virtualization: kvm guest
	I0731 16:41:38.478496   16404 out.go:177] * [addons-190022] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0731 16:41:38.480002   16404 notify.go:220] Checking for updates...
	I0731 16:41:38.480029   16404 out.go:177]   - MINIKUBE_LOCATION=19349
	I0731 16:41:38.481475   16404 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0731 16:41:38.482914   16404 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19349-8084/kubeconfig
	I0731 16:41:38.484446   16404 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19349-8084/.minikube
	I0731 16:41:38.485804   16404 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0731 16:41:38.487022   16404 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0731 16:41:38.488479   16404 driver.go:392] Setting default libvirt URI to qemu:///system
	I0731 16:41:38.519713   16404 out.go:177] * Using the kvm2 driver based on user configuration
	I0731 16:41:38.521035   16404 start.go:297] selected driver: kvm2
	I0731 16:41:38.521051   16404 start.go:901] validating driver "kvm2" against <nil>
	I0731 16:41:38.521061   16404 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0731 16:41:38.521715   16404 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0731 16:41:38.521777   16404 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19349-8084/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0731 16:41:38.535923   16404 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0731 16:41:38.535968   16404 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0731 16:41:38.536216   16404 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0731 16:41:38.536245   16404 cni.go:84] Creating CNI manager for ""
	I0731 16:41:38.536252   16404 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0731 16:41:38.536259   16404 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0731 16:41:38.536314   16404 start.go:340] cluster config:
	{Name:addons-190022 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:addons-190022 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:c
rio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAg
entPID:0 GPUs: AutoPauseInterval:1m0s}
	I0731 16:41:38.536466   16404 iso.go:125] acquiring lock: {Name:mk4494bb5a63902bf12a909c3109db85229bc516 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0731 16:41:38.538332   16404 out.go:177] * Starting "addons-190022" primary control-plane node in "addons-190022" cluster
	I0731 16:41:38.539504   16404 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0731 16:41:38.539535   16404 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19349-8084/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4
	I0731 16:41:38.539544   16404 cache.go:56] Caching tarball of preloaded images
	I0731 16:41:38.539621   16404 preload.go:172] Found /home/jenkins/minikube-integration/19349-8084/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0731 16:41:38.539634   16404 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on crio
	I0731 16:41:38.539946   16404 profile.go:143] Saving config to /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/addons-190022/config.json ...
	I0731 16:41:38.539970   16404 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/addons-190022/config.json: {Name:mkd8f9cca2cc4c776d5bec228678fc5030cb0e7d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 16:41:38.540111   16404 start.go:360] acquireMachinesLock for addons-190022: {Name:mkd78eacfab7d6c3058c6674434b3d889ec957e0 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0731 16:41:38.540166   16404 start.go:364] duration metric: took 40.488µs to acquireMachinesLock for "addons-190022"
	I0731 16:41:38.540186   16404 start.go:93] Provisioning new machine with config: &{Name:addons-190022 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.30.3 ClusterName:addons-190022 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 M
ountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0731 16:41:38.540246   16404 start.go:125] createHost starting for "" (driver="kvm2")
	I0731 16:41:38.541685   16404 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	I0731 16:41:38.541853   16404 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 16:41:38.541903   16404 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 16:41:38.555736   16404 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36163
	I0731 16:41:38.556207   16404 main.go:141] libmachine: () Calling .GetVersion
	I0731 16:41:38.556759   16404 main.go:141] libmachine: Using API Version  1
	I0731 16:41:38.556787   16404 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 16:41:38.557120   16404 main.go:141] libmachine: () Calling .GetMachineName
	I0731 16:41:38.557264   16404 main.go:141] libmachine: (addons-190022) Calling .GetMachineName
	I0731 16:41:38.557384   16404 main.go:141] libmachine: (addons-190022) Calling .DriverName
	I0731 16:41:38.557537   16404 start.go:159] libmachine.API.Create for "addons-190022" (driver="kvm2")
	I0731 16:41:38.557562   16404 client.go:168] LocalClient.Create starting
	I0731 16:41:38.557607   16404 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/19349-8084/.minikube/certs/ca.pem
	I0731 16:41:38.665969   16404 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/19349-8084/.minikube/certs/cert.pem
	I0731 16:41:38.721645   16404 main.go:141] libmachine: Running pre-create checks...
	I0731 16:41:38.721667   16404 main.go:141] libmachine: (addons-190022) Calling .PreCreateCheck
	I0731 16:41:38.722161   16404 main.go:141] libmachine: (addons-190022) Calling .GetConfigRaw
	I0731 16:41:38.722588   16404 main.go:141] libmachine: Creating machine...
	I0731 16:41:38.722601   16404 main.go:141] libmachine: (addons-190022) Calling .Create
	I0731 16:41:38.722746   16404 main.go:141] libmachine: (addons-190022) Creating KVM machine...
	I0731 16:41:38.724081   16404 main.go:141] libmachine: (addons-190022) DBG | found existing default KVM network
	I0731 16:41:38.724921   16404 main.go:141] libmachine: (addons-190022) DBG | I0731 16:41:38.724770   16426 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00012f990}
	I0731 16:41:38.724956   16404 main.go:141] libmachine: (addons-190022) DBG | created network xml: 
	I0731 16:41:38.724977   16404 main.go:141] libmachine: (addons-190022) DBG | <network>
	I0731 16:41:38.724985   16404 main.go:141] libmachine: (addons-190022) DBG |   <name>mk-addons-190022</name>
	I0731 16:41:38.724990   16404 main.go:141] libmachine: (addons-190022) DBG |   <dns enable='no'/>
	I0731 16:41:38.724998   16404 main.go:141] libmachine: (addons-190022) DBG |   
	I0731 16:41:38.725007   16404 main.go:141] libmachine: (addons-190022) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0731 16:41:38.725015   16404 main.go:141] libmachine: (addons-190022) DBG |     <dhcp>
	I0731 16:41:38.725022   16404 main.go:141] libmachine: (addons-190022) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0731 16:41:38.725030   16404 main.go:141] libmachine: (addons-190022) DBG |     </dhcp>
	I0731 16:41:38.725035   16404 main.go:141] libmachine: (addons-190022) DBG |   </ip>
	I0731 16:41:38.725042   16404 main.go:141] libmachine: (addons-190022) DBG |   
	I0731 16:41:38.725047   16404 main.go:141] libmachine: (addons-190022) DBG | </network>
	I0731 16:41:38.725054   16404 main.go:141] libmachine: (addons-190022) DBG | 
	I0731 16:41:38.730472   16404 main.go:141] libmachine: (addons-190022) DBG | trying to create private KVM network mk-addons-190022 192.168.39.0/24...
	I0731 16:41:38.792592   16404 main.go:141] libmachine: (addons-190022) DBG | private KVM network mk-addons-190022 192.168.39.0/24 created
	I0731 16:41:38.792623   16404 main.go:141] libmachine: (addons-190022) DBG | I0731 16:41:38.792515   16426 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19349-8084/.minikube
	I0731 16:41:38.792642   16404 main.go:141] libmachine: (addons-190022) Setting up store path in /home/jenkins/minikube-integration/19349-8084/.minikube/machines/addons-190022 ...
	I0731 16:41:38.792687   16404 main.go:141] libmachine: (addons-190022) Building disk image from file:///home/jenkins/minikube-integration/19349-8084/.minikube/cache/iso/amd64/minikube-v1.33.1-1722248113-19339-amd64.iso
	I0731 16:41:38.792721   16404 main.go:141] libmachine: (addons-190022) Downloading /home/jenkins/minikube-integration/19349-8084/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19349-8084/.minikube/cache/iso/amd64/minikube-v1.33.1-1722248113-19339-amd64.iso...
	I0731 16:41:39.062859   16404 main.go:141] libmachine: (addons-190022) DBG | I0731 16:41:39.062711   16426 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19349-8084/.minikube/machines/addons-190022/id_rsa...
	I0731 16:41:39.187616   16404 main.go:141] libmachine: (addons-190022) DBG | I0731 16:41:39.187440   16426 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19349-8084/.minikube/machines/addons-190022/addons-190022.rawdisk...
	I0731 16:41:39.187648   16404 main.go:141] libmachine: (addons-190022) DBG | Writing magic tar header
	I0731 16:41:39.187663   16404 main.go:141] libmachine: (addons-190022) DBG | Writing SSH key tar header
	I0731 16:41:39.188170   16404 main.go:141] libmachine: (addons-190022) DBG | I0731 16:41:39.188061   16426 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19349-8084/.minikube/machines/addons-190022 ...
	I0731 16:41:39.188199   16404 main.go:141] libmachine: (addons-190022) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19349-8084/.minikube/machines/addons-190022
	I0731 16:41:39.188213   16404 main.go:141] libmachine: (addons-190022) Setting executable bit set on /home/jenkins/minikube-integration/19349-8084/.minikube/machines/addons-190022 (perms=drwx------)
	I0731 16:41:39.188225   16404 main.go:141] libmachine: (addons-190022) Setting executable bit set on /home/jenkins/minikube-integration/19349-8084/.minikube/machines (perms=drwxr-xr-x)
	I0731 16:41:39.188232   16404 main.go:141] libmachine: (addons-190022) Setting executable bit set on /home/jenkins/minikube-integration/19349-8084/.minikube (perms=drwxr-xr-x)
	I0731 16:41:39.188260   16404 main.go:141] libmachine: (addons-190022) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19349-8084/.minikube/machines
	I0731 16:41:39.188282   16404 main.go:141] libmachine: (addons-190022) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19349-8084/.minikube
	I0731 16:41:39.188294   16404 main.go:141] libmachine: (addons-190022) Setting executable bit set on /home/jenkins/minikube-integration/19349-8084 (perms=drwxrwxr-x)
	I0731 16:41:39.188308   16404 main.go:141] libmachine: (addons-190022) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19349-8084
	I0731 16:41:39.188326   16404 main.go:141] libmachine: (addons-190022) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0731 16:41:39.188338   16404 main.go:141] libmachine: (addons-190022) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0731 16:41:39.188345   16404 main.go:141] libmachine: (addons-190022) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0731 16:41:39.188358   16404 main.go:141] libmachine: (addons-190022) Creating domain...
	I0731 16:41:39.188366   16404 main.go:141] libmachine: (addons-190022) DBG | Checking permissions on dir: /home/jenkins
	I0731 16:41:39.188381   16404 main.go:141] libmachine: (addons-190022) DBG | Checking permissions on dir: /home
	I0731 16:41:39.188394   16404 main.go:141] libmachine: (addons-190022) DBG | Skipping /home - not owner
	I0731 16:41:39.189573   16404 main.go:141] libmachine: (addons-190022) define libvirt domain using xml: 
	I0731 16:41:39.189607   16404 main.go:141] libmachine: (addons-190022) <domain type='kvm'>
	I0731 16:41:39.189619   16404 main.go:141] libmachine: (addons-190022)   <name>addons-190022</name>
	I0731 16:41:39.189624   16404 main.go:141] libmachine: (addons-190022)   <memory unit='MiB'>4000</memory>
	I0731 16:41:39.189630   16404 main.go:141] libmachine: (addons-190022)   <vcpu>2</vcpu>
	I0731 16:41:39.189642   16404 main.go:141] libmachine: (addons-190022)   <features>
	I0731 16:41:39.189647   16404 main.go:141] libmachine: (addons-190022)     <acpi/>
	I0731 16:41:39.189651   16404 main.go:141] libmachine: (addons-190022)     <apic/>
	I0731 16:41:39.189656   16404 main.go:141] libmachine: (addons-190022)     <pae/>
	I0731 16:41:39.189660   16404 main.go:141] libmachine: (addons-190022)     
	I0731 16:41:39.189665   16404 main.go:141] libmachine: (addons-190022)   </features>
	I0731 16:41:39.189670   16404 main.go:141] libmachine: (addons-190022)   <cpu mode='host-passthrough'>
	I0731 16:41:39.189675   16404 main.go:141] libmachine: (addons-190022)   
	I0731 16:41:39.189684   16404 main.go:141] libmachine: (addons-190022)   </cpu>
	I0731 16:41:39.189689   16404 main.go:141] libmachine: (addons-190022)   <os>
	I0731 16:41:39.189698   16404 main.go:141] libmachine: (addons-190022)     <type>hvm</type>
	I0731 16:41:39.189729   16404 main.go:141] libmachine: (addons-190022)     <boot dev='cdrom'/>
	I0731 16:41:39.189750   16404 main.go:141] libmachine: (addons-190022)     <boot dev='hd'/>
	I0731 16:41:39.189760   16404 main.go:141] libmachine: (addons-190022)     <bootmenu enable='no'/>
	I0731 16:41:39.189768   16404 main.go:141] libmachine: (addons-190022)   </os>
	I0731 16:41:39.189777   16404 main.go:141] libmachine: (addons-190022)   <devices>
	I0731 16:41:39.189787   16404 main.go:141] libmachine: (addons-190022)     <disk type='file' device='cdrom'>
	I0731 16:41:39.189804   16404 main.go:141] libmachine: (addons-190022)       <source file='/home/jenkins/minikube-integration/19349-8084/.minikube/machines/addons-190022/boot2docker.iso'/>
	I0731 16:41:39.189829   16404 main.go:141] libmachine: (addons-190022)       <target dev='hdc' bus='scsi'/>
	I0731 16:41:39.189842   16404 main.go:141] libmachine: (addons-190022)       <readonly/>
	I0731 16:41:39.189853   16404 main.go:141] libmachine: (addons-190022)     </disk>
	I0731 16:41:39.189869   16404 main.go:141] libmachine: (addons-190022)     <disk type='file' device='disk'>
	I0731 16:41:39.189881   16404 main.go:141] libmachine: (addons-190022)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0731 16:41:39.189895   16404 main.go:141] libmachine: (addons-190022)       <source file='/home/jenkins/minikube-integration/19349-8084/.minikube/machines/addons-190022/addons-190022.rawdisk'/>
	I0731 16:41:39.189906   16404 main.go:141] libmachine: (addons-190022)       <target dev='hda' bus='virtio'/>
	I0731 16:41:39.189937   16404 main.go:141] libmachine: (addons-190022)     </disk>
	I0731 16:41:39.189961   16404 main.go:141] libmachine: (addons-190022)     <interface type='network'>
	I0731 16:41:39.189972   16404 main.go:141] libmachine: (addons-190022)       <source network='mk-addons-190022'/>
	I0731 16:41:39.189988   16404 main.go:141] libmachine: (addons-190022)       <model type='virtio'/>
	I0731 16:41:39.190000   16404 main.go:141] libmachine: (addons-190022)     </interface>
	I0731 16:41:39.190010   16404 main.go:141] libmachine: (addons-190022)     <interface type='network'>
	I0731 16:41:39.190018   16404 main.go:141] libmachine: (addons-190022)       <source network='default'/>
	I0731 16:41:39.190025   16404 main.go:141] libmachine: (addons-190022)       <model type='virtio'/>
	I0731 16:41:39.190031   16404 main.go:141] libmachine: (addons-190022)     </interface>
	I0731 16:41:39.190038   16404 main.go:141] libmachine: (addons-190022)     <serial type='pty'>
	I0731 16:41:39.190044   16404 main.go:141] libmachine: (addons-190022)       <target port='0'/>
	I0731 16:41:39.190058   16404 main.go:141] libmachine: (addons-190022)     </serial>
	I0731 16:41:39.190069   16404 main.go:141] libmachine: (addons-190022)     <console type='pty'>
	I0731 16:41:39.190086   16404 main.go:141] libmachine: (addons-190022)       <target type='serial' port='0'/>
	I0731 16:41:39.190095   16404 main.go:141] libmachine: (addons-190022)     </console>
	I0731 16:41:39.190101   16404 main.go:141] libmachine: (addons-190022)     <rng model='virtio'>
	I0731 16:41:39.190110   16404 main.go:141] libmachine: (addons-190022)       <backend model='random'>/dev/random</backend>
	I0731 16:41:39.190115   16404 main.go:141] libmachine: (addons-190022)     </rng>
	I0731 16:41:39.190124   16404 main.go:141] libmachine: (addons-190022)     
	I0731 16:41:39.190132   16404 main.go:141] libmachine: (addons-190022)     
	I0731 16:41:39.190145   16404 main.go:141] libmachine: (addons-190022)   </devices>
	I0731 16:41:39.190157   16404 main.go:141] libmachine: (addons-190022) </domain>
	I0731 16:41:39.190166   16404 main.go:141] libmachine: (addons-190022) 
	I0731 16:41:39.196001   16404 main.go:141] libmachine: (addons-190022) DBG | domain addons-190022 has defined MAC address 52:54:00:02:9a:1f in network default
	I0731 16:41:39.196507   16404 main.go:141] libmachine: (addons-190022) Ensuring networks are active...
	I0731 16:41:39.196524   16404 main.go:141] libmachine: (addons-190022) DBG | domain addons-190022 has defined MAC address 52:54:00:b8:3c:34 in network mk-addons-190022
	I0731 16:41:39.197278   16404 main.go:141] libmachine: (addons-190022) Ensuring network default is active
	I0731 16:41:39.197493   16404 main.go:141] libmachine: (addons-190022) Ensuring network mk-addons-190022 is active
	I0731 16:41:39.197925   16404 main.go:141] libmachine: (addons-190022) Getting domain xml...
	I0731 16:41:39.198553   16404 main.go:141] libmachine: (addons-190022) Creating domain...
	I0731 16:41:40.602781   16404 main.go:141] libmachine: (addons-190022) Waiting to get IP...
	I0731 16:41:40.603516   16404 main.go:141] libmachine: (addons-190022) DBG | domain addons-190022 has defined MAC address 52:54:00:b8:3c:34 in network mk-addons-190022
	I0731 16:41:40.603931   16404 main.go:141] libmachine: (addons-190022) DBG | unable to find current IP address of domain addons-190022 in network mk-addons-190022
	I0731 16:41:40.603958   16404 main.go:141] libmachine: (addons-190022) DBG | I0731 16:41:40.603923   16426 retry.go:31] will retry after 299.522845ms: waiting for machine to come up
	I0731 16:41:40.905435   16404 main.go:141] libmachine: (addons-190022) DBG | domain addons-190022 has defined MAC address 52:54:00:b8:3c:34 in network mk-addons-190022
	I0731 16:41:40.905923   16404 main.go:141] libmachine: (addons-190022) DBG | unable to find current IP address of domain addons-190022 in network mk-addons-190022
	I0731 16:41:40.905950   16404 main.go:141] libmachine: (addons-190022) DBG | I0731 16:41:40.905870   16426 retry.go:31] will retry after 318.334424ms: waiting for machine to come up
	I0731 16:41:41.225407   16404 main.go:141] libmachine: (addons-190022) DBG | domain addons-190022 has defined MAC address 52:54:00:b8:3c:34 in network mk-addons-190022
	I0731 16:41:41.225879   16404 main.go:141] libmachine: (addons-190022) DBG | unable to find current IP address of domain addons-190022 in network mk-addons-190022
	I0731 16:41:41.225907   16404 main.go:141] libmachine: (addons-190022) DBG | I0731 16:41:41.225819   16426 retry.go:31] will retry after 298.274864ms: waiting for machine to come up
	I0731 16:41:41.525265   16404 main.go:141] libmachine: (addons-190022) DBG | domain addons-190022 has defined MAC address 52:54:00:b8:3c:34 in network mk-addons-190022
	I0731 16:41:41.525725   16404 main.go:141] libmachine: (addons-190022) DBG | unable to find current IP address of domain addons-190022 in network mk-addons-190022
	I0731 16:41:41.525753   16404 main.go:141] libmachine: (addons-190022) DBG | I0731 16:41:41.525670   16426 retry.go:31] will retry after 393.737403ms: waiting for machine to come up
	I0731 16:41:41.921291   16404 main.go:141] libmachine: (addons-190022) DBG | domain addons-190022 has defined MAC address 52:54:00:b8:3c:34 in network mk-addons-190022
	I0731 16:41:41.921741   16404 main.go:141] libmachine: (addons-190022) DBG | unable to find current IP address of domain addons-190022 in network mk-addons-190022
	I0731 16:41:41.921768   16404 main.go:141] libmachine: (addons-190022) DBG | I0731 16:41:41.921690   16426 retry.go:31] will retry after 651.921555ms: waiting for machine to come up
	I0731 16:41:42.576857   16404 main.go:141] libmachine: (addons-190022) DBG | domain addons-190022 has defined MAC address 52:54:00:b8:3c:34 in network mk-addons-190022
	I0731 16:41:42.577388   16404 main.go:141] libmachine: (addons-190022) DBG | unable to find current IP address of domain addons-190022 in network mk-addons-190022
	I0731 16:41:42.577413   16404 main.go:141] libmachine: (addons-190022) DBG | I0731 16:41:42.577309   16426 retry.go:31] will retry after 625.355859ms: waiting for machine to come up
	I0731 16:41:43.204131   16404 main.go:141] libmachine: (addons-190022) DBG | domain addons-190022 has defined MAC address 52:54:00:b8:3c:34 in network mk-addons-190022
	I0731 16:41:43.204527   16404 main.go:141] libmachine: (addons-190022) DBG | unable to find current IP address of domain addons-190022 in network mk-addons-190022
	I0731 16:41:43.204586   16404 main.go:141] libmachine: (addons-190022) DBG | I0731 16:41:43.204501   16426 retry.go:31] will retry after 857.401115ms: waiting for machine to come up
	I0731 16:41:44.063071   16404 main.go:141] libmachine: (addons-190022) DBG | domain addons-190022 has defined MAC address 52:54:00:b8:3c:34 in network mk-addons-190022
	I0731 16:41:44.063478   16404 main.go:141] libmachine: (addons-190022) DBG | unable to find current IP address of domain addons-190022 in network mk-addons-190022
	I0731 16:41:44.063509   16404 main.go:141] libmachine: (addons-190022) DBG | I0731 16:41:44.063414   16426 retry.go:31] will retry after 1.331583997s: waiting for machine to come up
	I0731 16:41:45.396247   16404 main.go:141] libmachine: (addons-190022) DBG | domain addons-190022 has defined MAC address 52:54:00:b8:3c:34 in network mk-addons-190022
	I0731 16:41:45.396731   16404 main.go:141] libmachine: (addons-190022) DBG | unable to find current IP address of domain addons-190022 in network mk-addons-190022
	I0731 16:41:45.396757   16404 main.go:141] libmachine: (addons-190022) DBG | I0731 16:41:45.396687   16426 retry.go:31] will retry after 1.121424428s: waiting for machine to come up
	I0731 16:41:46.520037   16404 main.go:141] libmachine: (addons-190022) DBG | domain addons-190022 has defined MAC address 52:54:00:b8:3c:34 in network mk-addons-190022
	I0731 16:41:46.520369   16404 main.go:141] libmachine: (addons-190022) DBG | unable to find current IP address of domain addons-190022 in network mk-addons-190022
	I0731 16:41:46.520409   16404 main.go:141] libmachine: (addons-190022) DBG | I0731 16:41:46.520312   16426 retry.go:31] will retry after 1.846743517s: waiting for machine to come up
	I0731 16:41:48.369541   16404 main.go:141] libmachine: (addons-190022) DBG | domain addons-190022 has defined MAC address 52:54:00:b8:3c:34 in network mk-addons-190022
	I0731 16:41:48.369963   16404 main.go:141] libmachine: (addons-190022) DBG | unable to find current IP address of domain addons-190022 in network mk-addons-190022
	I0731 16:41:48.369992   16404 main.go:141] libmachine: (addons-190022) DBG | I0731 16:41:48.369903   16426 retry.go:31] will retry after 2.862497152s: waiting for machine to come up
	I0731 16:41:51.235923   16404 main.go:141] libmachine: (addons-190022) DBG | domain addons-190022 has defined MAC address 52:54:00:b8:3c:34 in network mk-addons-190022
	I0731 16:41:51.236392   16404 main.go:141] libmachine: (addons-190022) DBG | unable to find current IP address of domain addons-190022 in network mk-addons-190022
	I0731 16:41:51.236419   16404 main.go:141] libmachine: (addons-190022) DBG | I0731 16:41:51.236351   16426 retry.go:31] will retry after 3.250256872s: waiting for machine to come up
	I0731 16:41:54.488065   16404 main.go:141] libmachine: (addons-190022) DBG | domain addons-190022 has defined MAC address 52:54:00:b8:3c:34 in network mk-addons-190022
	I0731 16:41:54.488396   16404 main.go:141] libmachine: (addons-190022) DBG | unable to find current IP address of domain addons-190022 in network mk-addons-190022
	I0731 16:41:54.488419   16404 main.go:141] libmachine: (addons-190022) DBG | I0731 16:41:54.488357   16426 retry.go:31] will retry after 3.524085571s: waiting for machine to come up
	I0731 16:41:58.016962   16404 main.go:141] libmachine: (addons-190022) DBG | domain addons-190022 has defined MAC address 52:54:00:b8:3c:34 in network mk-addons-190022
	I0731 16:41:58.017391   16404 main.go:141] libmachine: (addons-190022) DBG | unable to find current IP address of domain addons-190022 in network mk-addons-190022
	I0731 16:41:58.017419   16404 main.go:141] libmachine: (addons-190022) DBG | I0731 16:41:58.017343   16426 retry.go:31] will retry after 3.777226244s: waiting for machine to come up
	I0731 16:42:01.798205   16404 main.go:141] libmachine: (addons-190022) DBG | domain addons-190022 has defined MAC address 52:54:00:b8:3c:34 in network mk-addons-190022
	I0731 16:42:01.798683   16404 main.go:141] libmachine: (addons-190022) Found IP for machine: 192.168.39.140
	I0731 16:42:01.798700   16404 main.go:141] libmachine: (addons-190022) Reserving static IP address...
	I0731 16:42:01.798708   16404 main.go:141] libmachine: (addons-190022) DBG | domain addons-190022 has current primary IP address 192.168.39.140 and MAC address 52:54:00:b8:3c:34 in network mk-addons-190022
	I0731 16:42:01.799040   16404 main.go:141] libmachine: (addons-190022) DBG | unable to find host DHCP lease matching {name: "addons-190022", mac: "52:54:00:b8:3c:34", ip: "192.168.39.140"} in network mk-addons-190022
	I0731 16:42:01.868230   16404 main.go:141] libmachine: (addons-190022) DBG | Getting to WaitForSSH function...
	I0731 16:42:01.868261   16404 main.go:141] libmachine: (addons-190022) Reserved static IP address: 192.168.39.140
	I0731 16:42:01.868276   16404 main.go:141] libmachine: (addons-190022) Waiting for SSH to be available...
	I0731 16:42:01.870777   16404 main.go:141] libmachine: (addons-190022) DBG | domain addons-190022 has defined MAC address 52:54:00:b8:3c:34 in network mk-addons-190022
	I0731 16:42:01.871313   16404 main.go:141] libmachine: (addons-190022) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b8:3c:34", ip: ""} in network mk-addons-190022: {Iface:virbr1 ExpiryTime:2024-07-31 17:41:52 +0000 UTC Type:0 Mac:52:54:00:b8:3c:34 Iaid: IPaddr:192.168.39.140 Prefix:24 Hostname:minikube Clientid:01:52:54:00:b8:3c:34}
	I0731 16:42:01.871342   16404 main.go:141] libmachine: (addons-190022) DBG | domain addons-190022 has defined IP address 192.168.39.140 and MAC address 52:54:00:b8:3c:34 in network mk-addons-190022
	I0731 16:42:01.871511   16404 main.go:141] libmachine: (addons-190022) DBG | Using SSH client type: external
	I0731 16:42:01.871554   16404 main.go:141] libmachine: (addons-190022) DBG | Using SSH private key: /home/jenkins/minikube-integration/19349-8084/.minikube/machines/addons-190022/id_rsa (-rw-------)
	I0731 16:42:01.871585   16404 main.go:141] libmachine: (addons-190022) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.140 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19349-8084/.minikube/machines/addons-190022/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0731 16:42:01.871596   16404 main.go:141] libmachine: (addons-190022) DBG | About to run SSH command:
	I0731 16:42:01.871605   16404 main.go:141] libmachine: (addons-190022) DBG | exit 0
	I0731 16:42:02.007321   16404 main.go:141] libmachine: (addons-190022) DBG | SSH cmd err, output: <nil>: 
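	
	The empty output above is the "exit 0" reachability probe succeeding: the driver shells out to ssh with host-key checking disabled and the freshly generated machine key. A rough manual equivalent, assembled from the option list logged a few lines earlier (a sketch, not the exact code path):
	
	    # Sketch of the SSH reachability probe, using the key path and options from the log above
	    ssh -F /dev/null -o ConnectTimeout=10 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null \
	        -o IdentitiesOnly=yes -o PasswordAuthentication=no \
	        -i /home/jenkins/minikube-integration/19349-8084/.minikube/machines/addons-190022/id_rsa \
	        -p 22 docker@192.168.39.140 'exit 0' && echo "guest SSH is up"
	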
	I0731 16:42:02.007710   16404 main.go:141] libmachine: (addons-190022) KVM machine creation complete!
	I0731 16:42:02.007981   16404 main.go:141] libmachine: (addons-190022) Calling .GetConfigRaw
	I0731 16:42:02.008586   16404 main.go:141] libmachine: (addons-190022) Calling .DriverName
	I0731 16:42:02.008825   16404 main.go:141] libmachine: (addons-190022) Calling .DriverName
	I0731 16:42:02.009048   16404 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0731 16:42:02.009068   16404 main.go:141] libmachine: (addons-190022) Calling .GetState
	I0731 16:42:02.010319   16404 main.go:141] libmachine: Detecting operating system of created instance...
	I0731 16:42:02.010336   16404 main.go:141] libmachine: Waiting for SSH to be available...
	I0731 16:42:02.010356   16404 main.go:141] libmachine: Getting to WaitForSSH function...
	I0731 16:42:02.010368   16404 main.go:141] libmachine: (addons-190022) Calling .GetSSHHostname
	I0731 16:42:02.012728   16404 main.go:141] libmachine: (addons-190022) DBG | domain addons-190022 has defined MAC address 52:54:00:b8:3c:34 in network mk-addons-190022
	I0731 16:42:02.013036   16404 main.go:141] libmachine: (addons-190022) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b8:3c:34", ip: ""} in network mk-addons-190022: {Iface:virbr1 ExpiryTime:2024-07-31 17:41:52 +0000 UTC Type:0 Mac:52:54:00:b8:3c:34 Iaid: IPaddr:192.168.39.140 Prefix:24 Hostname:addons-190022 Clientid:01:52:54:00:b8:3c:34}
	I0731 16:42:02.013073   16404 main.go:141] libmachine: (addons-190022) DBG | domain addons-190022 has defined IP address 192.168.39.140 and MAC address 52:54:00:b8:3c:34 in network mk-addons-190022
	I0731 16:42:02.013185   16404 main.go:141] libmachine: (addons-190022) Calling .GetSSHPort
	I0731 16:42:02.013341   16404 main.go:141] libmachine: (addons-190022) Calling .GetSSHKeyPath
	I0731 16:42:02.013484   16404 main.go:141] libmachine: (addons-190022) Calling .GetSSHKeyPath
	I0731 16:42:02.013655   16404 main.go:141] libmachine: (addons-190022) Calling .GetSSHUsername
	I0731 16:42:02.013792   16404 main.go:141] libmachine: Using SSH client type: native
	I0731 16:42:02.014003   16404 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.140 22 <nil> <nil>}
	I0731 16:42:02.014016   16404 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0731 16:42:02.118535   16404 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0731 16:42:02.118564   16404 main.go:141] libmachine: Detecting the provisioner...
	I0731 16:42:02.118574   16404 main.go:141] libmachine: (addons-190022) Calling .GetSSHHostname
	I0731 16:42:02.121222   16404 main.go:141] libmachine: (addons-190022) DBG | domain addons-190022 has defined MAC address 52:54:00:b8:3c:34 in network mk-addons-190022
	I0731 16:42:02.121625   16404 main.go:141] libmachine: (addons-190022) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b8:3c:34", ip: ""} in network mk-addons-190022: {Iface:virbr1 ExpiryTime:2024-07-31 17:41:52 +0000 UTC Type:0 Mac:52:54:00:b8:3c:34 Iaid: IPaddr:192.168.39.140 Prefix:24 Hostname:addons-190022 Clientid:01:52:54:00:b8:3c:34}
	I0731 16:42:02.121658   16404 main.go:141] libmachine: (addons-190022) DBG | domain addons-190022 has defined IP address 192.168.39.140 and MAC address 52:54:00:b8:3c:34 in network mk-addons-190022
	I0731 16:42:02.121801   16404 main.go:141] libmachine: (addons-190022) Calling .GetSSHPort
	I0731 16:42:02.122032   16404 main.go:141] libmachine: (addons-190022) Calling .GetSSHKeyPath
	I0731 16:42:02.122255   16404 main.go:141] libmachine: (addons-190022) Calling .GetSSHKeyPath
	I0731 16:42:02.122416   16404 main.go:141] libmachine: (addons-190022) Calling .GetSSHUsername
	I0731 16:42:02.122588   16404 main.go:141] libmachine: Using SSH client type: native
	I0731 16:42:02.122766   16404 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.140 22 <nil> <nil>}
	I0731 16:42:02.122779   16404 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0731 16:42:02.227339   16404 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0731 16:42:02.227443   16404 main.go:141] libmachine: found compatible host: buildroot
	I0731 16:42:02.227456   16404 main.go:141] libmachine: Provisioning with buildroot...
	I0731 16:42:02.227465   16404 main.go:141] libmachine: (addons-190022) Calling .GetMachineName
	I0731 16:42:02.227740   16404 buildroot.go:166] provisioning hostname "addons-190022"
	I0731 16:42:02.227764   16404 main.go:141] libmachine: (addons-190022) Calling .GetMachineName
	I0731 16:42:02.227979   16404 main.go:141] libmachine: (addons-190022) Calling .GetSSHHostname
	I0731 16:42:02.230544   16404 main.go:141] libmachine: (addons-190022) DBG | domain addons-190022 has defined MAC address 52:54:00:b8:3c:34 in network mk-addons-190022
	I0731 16:42:02.230923   16404 main.go:141] libmachine: (addons-190022) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b8:3c:34", ip: ""} in network mk-addons-190022: {Iface:virbr1 ExpiryTime:2024-07-31 17:41:52 +0000 UTC Type:0 Mac:52:54:00:b8:3c:34 Iaid: IPaddr:192.168.39.140 Prefix:24 Hostname:addons-190022 Clientid:01:52:54:00:b8:3c:34}
	I0731 16:42:02.230960   16404 main.go:141] libmachine: (addons-190022) DBG | domain addons-190022 has defined IP address 192.168.39.140 and MAC address 52:54:00:b8:3c:34 in network mk-addons-190022
	I0731 16:42:02.231130   16404 main.go:141] libmachine: (addons-190022) Calling .GetSSHPort
	I0731 16:42:02.231318   16404 main.go:141] libmachine: (addons-190022) Calling .GetSSHKeyPath
	I0731 16:42:02.231475   16404 main.go:141] libmachine: (addons-190022) Calling .GetSSHKeyPath
	I0731 16:42:02.231667   16404 main.go:141] libmachine: (addons-190022) Calling .GetSSHUsername
	I0731 16:42:02.231824   16404 main.go:141] libmachine: Using SSH client type: native
	I0731 16:42:02.231970   16404 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.140 22 <nil> <nil>}
	I0731 16:42:02.231981   16404 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-190022 && echo "addons-190022" | sudo tee /etc/hostname
	I0731 16:42:02.349582   16404 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-190022
	
	I0731 16:42:02.349646   16404 main.go:141] libmachine: (addons-190022) Calling .GetSSHHostname
	I0731 16:42:02.352401   16404 main.go:141] libmachine: (addons-190022) DBG | domain addons-190022 has defined MAC address 52:54:00:b8:3c:34 in network mk-addons-190022
	I0731 16:42:02.352696   16404 main.go:141] libmachine: (addons-190022) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b8:3c:34", ip: ""} in network mk-addons-190022: {Iface:virbr1 ExpiryTime:2024-07-31 17:41:52 +0000 UTC Type:0 Mac:52:54:00:b8:3c:34 Iaid: IPaddr:192.168.39.140 Prefix:24 Hostname:addons-190022 Clientid:01:52:54:00:b8:3c:34}
	I0731 16:42:02.352715   16404 main.go:141] libmachine: (addons-190022) DBG | domain addons-190022 has defined IP address 192.168.39.140 and MAC address 52:54:00:b8:3c:34 in network mk-addons-190022
	I0731 16:42:02.352898   16404 main.go:141] libmachine: (addons-190022) Calling .GetSSHPort
	I0731 16:42:02.353108   16404 main.go:141] libmachine: (addons-190022) Calling .GetSSHKeyPath
	I0731 16:42:02.353266   16404 main.go:141] libmachine: (addons-190022) Calling .GetSSHKeyPath
	I0731 16:42:02.353409   16404 main.go:141] libmachine: (addons-190022) Calling .GetSSHUsername
	I0731 16:42:02.353560   16404 main.go:141] libmachine: Using SSH client type: native
	I0731 16:42:02.353723   16404 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.140 22 <nil> <nil>}
	I0731 16:42:02.353739   16404 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-190022' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-190022/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-190022' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0731 16:42:02.467252   16404 main.go:141] libmachine: SSH cmd err, output: <nil>: 
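	
	The script above makes the new hostname resolve locally: if no /etc/hosts line already maps to addons-190022, it either rewrites an existing 127.0.1.1 entry or appends one. Its expected effect (a sketch; assumes no prior 127.0.1.1 entry existed on the fresh guest):
	
	    # Expected /etc/hosts entry after the hostname script above
	    grep addons-190022 /etc/hosts
	    # 127.0.1.1 addons-190022
	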
	I0731 16:42:02.467285   16404 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19349-8084/.minikube CaCertPath:/home/jenkins/minikube-integration/19349-8084/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19349-8084/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19349-8084/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19349-8084/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19349-8084/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19349-8084/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19349-8084/.minikube}
	I0731 16:42:02.467317   16404 buildroot.go:174] setting up certificates
	I0731 16:42:02.467327   16404 provision.go:84] configureAuth start
	I0731 16:42:02.467337   16404 main.go:141] libmachine: (addons-190022) Calling .GetMachineName
	I0731 16:42:02.467621   16404 main.go:141] libmachine: (addons-190022) Calling .GetIP
	I0731 16:42:02.470250   16404 main.go:141] libmachine: (addons-190022) DBG | domain addons-190022 has defined MAC address 52:54:00:b8:3c:34 in network mk-addons-190022
	I0731 16:42:02.470553   16404 main.go:141] libmachine: (addons-190022) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b8:3c:34", ip: ""} in network mk-addons-190022: {Iface:virbr1 ExpiryTime:2024-07-31 17:41:52 +0000 UTC Type:0 Mac:52:54:00:b8:3c:34 Iaid: IPaddr:192.168.39.140 Prefix:24 Hostname:addons-190022 Clientid:01:52:54:00:b8:3c:34}
	I0731 16:42:02.470578   16404 main.go:141] libmachine: (addons-190022) DBG | domain addons-190022 has defined IP address 192.168.39.140 and MAC address 52:54:00:b8:3c:34 in network mk-addons-190022
	I0731 16:42:02.470741   16404 main.go:141] libmachine: (addons-190022) Calling .GetSSHHostname
	I0731 16:42:02.472797   16404 main.go:141] libmachine: (addons-190022) DBG | domain addons-190022 has defined MAC address 52:54:00:b8:3c:34 in network mk-addons-190022
	I0731 16:42:02.473147   16404 main.go:141] libmachine: (addons-190022) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b8:3c:34", ip: ""} in network mk-addons-190022: {Iface:virbr1 ExpiryTime:2024-07-31 17:41:52 +0000 UTC Type:0 Mac:52:54:00:b8:3c:34 Iaid: IPaddr:192.168.39.140 Prefix:24 Hostname:addons-190022 Clientid:01:52:54:00:b8:3c:34}
	I0731 16:42:02.473185   16404 main.go:141] libmachine: (addons-190022) DBG | domain addons-190022 has defined IP address 192.168.39.140 and MAC address 52:54:00:b8:3c:34 in network mk-addons-190022
	I0731 16:42:02.473301   16404 provision.go:143] copyHostCerts
	I0731 16:42:02.473371   16404 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19349-8084/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19349-8084/.minikube/ca.pem (1082 bytes)
	I0731 16:42:02.473521   16404 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19349-8084/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19349-8084/.minikube/cert.pem (1123 bytes)
	I0731 16:42:02.473628   16404 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19349-8084/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19349-8084/.minikube/key.pem (1675 bytes)
	I0731 16:42:02.473717   16404 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19349-8084/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19349-8084/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19349-8084/.minikube/certs/ca-key.pem org=jenkins.addons-190022 san=[127.0.0.1 192.168.39.140 addons-190022 localhost minikube]
	I0731 16:42:02.602461   16404 provision.go:177] copyRemoteCerts
	I0731 16:42:02.602518   16404 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0731 16:42:02.602540   16404 main.go:141] libmachine: (addons-190022) Calling .GetSSHHostname
	I0731 16:42:02.605284   16404 main.go:141] libmachine: (addons-190022) DBG | domain addons-190022 has defined MAC address 52:54:00:b8:3c:34 in network mk-addons-190022
	I0731 16:42:02.605652   16404 main.go:141] libmachine: (addons-190022) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b8:3c:34", ip: ""} in network mk-addons-190022: {Iface:virbr1 ExpiryTime:2024-07-31 17:41:52 +0000 UTC Type:0 Mac:52:54:00:b8:3c:34 Iaid: IPaddr:192.168.39.140 Prefix:24 Hostname:addons-190022 Clientid:01:52:54:00:b8:3c:34}
	I0731 16:42:02.605681   16404 main.go:141] libmachine: (addons-190022) DBG | domain addons-190022 has defined IP address 192.168.39.140 and MAC address 52:54:00:b8:3c:34 in network mk-addons-190022
	I0731 16:42:02.605865   16404 main.go:141] libmachine: (addons-190022) Calling .GetSSHPort
	I0731 16:42:02.606020   16404 main.go:141] libmachine: (addons-190022) Calling .GetSSHKeyPath
	I0731 16:42:02.606162   16404 main.go:141] libmachine: (addons-190022) Calling .GetSSHUsername
	I0731 16:42:02.606293   16404 sshutil.go:53] new ssh client: &{IP:192.168.39.140 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19349-8084/.minikube/machines/addons-190022/id_rsa Username:docker}
	I0731 16:42:02.688592   16404 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19349-8084/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0731 16:42:02.711250   16404 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19349-8084/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0731 16:42:02.733389   16404 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19349-8084/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0731 16:42:02.754554   16404 provision.go:87] duration metric: took 287.214664ms to configureAuth
	I0731 16:42:02.754589   16404 buildroot.go:189] setting minikube options for container-runtime
	I0731 16:42:02.754791   16404 config.go:182] Loaded profile config "addons-190022": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0731 16:42:02.754897   16404 main.go:141] libmachine: (addons-190022) Calling .GetSSHHostname
	I0731 16:42:02.757654   16404 main.go:141] libmachine: (addons-190022) DBG | domain addons-190022 has defined MAC address 52:54:00:b8:3c:34 in network mk-addons-190022
	I0731 16:42:02.758008   16404 main.go:141] libmachine: (addons-190022) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b8:3c:34", ip: ""} in network mk-addons-190022: {Iface:virbr1 ExpiryTime:2024-07-31 17:41:52 +0000 UTC Type:0 Mac:52:54:00:b8:3c:34 Iaid: IPaddr:192.168.39.140 Prefix:24 Hostname:addons-190022 Clientid:01:52:54:00:b8:3c:34}
	I0731 16:42:02.758037   16404 main.go:141] libmachine: (addons-190022) DBG | domain addons-190022 has defined IP address 192.168.39.140 and MAC address 52:54:00:b8:3c:34 in network mk-addons-190022
	I0731 16:42:02.758247   16404 main.go:141] libmachine: (addons-190022) Calling .GetSSHPort
	I0731 16:42:02.758418   16404 main.go:141] libmachine: (addons-190022) Calling .GetSSHKeyPath
	I0731 16:42:02.758628   16404 main.go:141] libmachine: (addons-190022) Calling .GetSSHKeyPath
	I0731 16:42:02.758744   16404 main.go:141] libmachine: (addons-190022) Calling .GetSSHUsername
	I0731 16:42:02.758885   16404 main.go:141] libmachine: Using SSH client type: native
	I0731 16:42:02.759076   16404 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.140 22 <nil> <nil>}
	I0731 16:42:02.759093   16404 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0731 16:42:03.008242   16404 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
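	
	The %!s(MISSING) token in the command a few lines above (and %!N(MISSING) and %!p(MISSING) later in the log) is Go's fmt package flagging a format verb with no matching argument in the log statement; the command executed on the guest does not contain it. Assuming the dropped verb is a plain %s, the provisioning step reconstructs to:
	
	    # Reconstructed command (assumption: the missing verb is %s); writes the CRI-O override and restarts the runtime
	    sudo mkdir -p /etc/sysconfig && printf '%s' "
	    CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	    " | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	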
	
	I0731 16:42:03.008266   16404 main.go:141] libmachine: Checking connection to Docker...
	I0731 16:42:03.008273   16404 main.go:141] libmachine: (addons-190022) Calling .GetURL
	I0731 16:42:03.009604   16404 main.go:141] libmachine: (addons-190022) DBG | Using libvirt version 6000000
	I0731 16:42:03.011774   16404 main.go:141] libmachine: (addons-190022) DBG | domain addons-190022 has defined MAC address 52:54:00:b8:3c:34 in network mk-addons-190022
	I0731 16:42:03.012098   16404 main.go:141] libmachine: (addons-190022) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b8:3c:34", ip: ""} in network mk-addons-190022: {Iface:virbr1 ExpiryTime:2024-07-31 17:41:52 +0000 UTC Type:0 Mac:52:54:00:b8:3c:34 Iaid: IPaddr:192.168.39.140 Prefix:24 Hostname:addons-190022 Clientid:01:52:54:00:b8:3c:34}
	I0731 16:42:03.012121   16404 main.go:141] libmachine: (addons-190022) DBG | domain addons-190022 has defined IP address 192.168.39.140 and MAC address 52:54:00:b8:3c:34 in network mk-addons-190022
	I0731 16:42:03.012264   16404 main.go:141] libmachine: Docker is up and running!
	I0731 16:42:03.012279   16404 main.go:141] libmachine: Reticulating splines...
	I0731 16:42:03.012286   16404 client.go:171] duration metric: took 24.454717478s to LocalClient.Create
	I0731 16:42:03.012310   16404 start.go:167] duration metric: took 24.454773022s to libmachine.API.Create "addons-190022"
	I0731 16:42:03.012326   16404 start.go:293] postStartSetup for "addons-190022" (driver="kvm2")
	I0731 16:42:03.012337   16404 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0731 16:42:03.012356   16404 main.go:141] libmachine: (addons-190022) Calling .DriverName
	I0731 16:42:03.012555   16404 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0731 16:42:03.012579   16404 main.go:141] libmachine: (addons-190022) Calling .GetSSHHostname
	I0731 16:42:03.014708   16404 main.go:141] libmachine: (addons-190022) DBG | domain addons-190022 has defined MAC address 52:54:00:b8:3c:34 in network mk-addons-190022
	I0731 16:42:03.015031   16404 main.go:141] libmachine: (addons-190022) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b8:3c:34", ip: ""} in network mk-addons-190022: {Iface:virbr1 ExpiryTime:2024-07-31 17:41:52 +0000 UTC Type:0 Mac:52:54:00:b8:3c:34 Iaid: IPaddr:192.168.39.140 Prefix:24 Hostname:addons-190022 Clientid:01:52:54:00:b8:3c:34}
	I0731 16:42:03.015067   16404 main.go:141] libmachine: (addons-190022) DBG | domain addons-190022 has defined IP address 192.168.39.140 and MAC address 52:54:00:b8:3c:34 in network mk-addons-190022
	I0731 16:42:03.015251   16404 main.go:141] libmachine: (addons-190022) Calling .GetSSHPort
	I0731 16:42:03.015424   16404 main.go:141] libmachine: (addons-190022) Calling .GetSSHKeyPath
	I0731 16:42:03.015575   16404 main.go:141] libmachine: (addons-190022) Calling .GetSSHUsername
	I0731 16:42:03.015678   16404 sshutil.go:53] new ssh client: &{IP:192.168.39.140 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19349-8084/.minikube/machines/addons-190022/id_rsa Username:docker}
	I0731 16:42:03.096693   16404 ssh_runner.go:195] Run: cat /etc/os-release
	I0731 16:42:03.100483   16404 info.go:137] Remote host: Buildroot 2023.02.9
	I0731 16:42:03.100507   16404 filesync.go:126] Scanning /home/jenkins/minikube-integration/19349-8084/.minikube/addons for local assets ...
	I0731 16:42:03.100581   16404 filesync.go:126] Scanning /home/jenkins/minikube-integration/19349-8084/.minikube/files for local assets ...
	I0731 16:42:03.100620   16404 start.go:296] duration metric: took 88.285991ms for postStartSetup
	I0731 16:42:03.100655   16404 main.go:141] libmachine: (addons-190022) Calling .GetConfigRaw
	I0731 16:42:03.101203   16404 main.go:141] libmachine: (addons-190022) Calling .GetIP
	I0731 16:42:03.104137   16404 main.go:141] libmachine: (addons-190022) DBG | domain addons-190022 has defined MAC address 52:54:00:b8:3c:34 in network mk-addons-190022
	I0731 16:42:03.104629   16404 main.go:141] libmachine: (addons-190022) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b8:3c:34", ip: ""} in network mk-addons-190022: {Iface:virbr1 ExpiryTime:2024-07-31 17:41:52 +0000 UTC Type:0 Mac:52:54:00:b8:3c:34 Iaid: IPaddr:192.168.39.140 Prefix:24 Hostname:addons-190022 Clientid:01:52:54:00:b8:3c:34}
	I0731 16:42:03.104660   16404 main.go:141] libmachine: (addons-190022) DBG | domain addons-190022 has defined IP address 192.168.39.140 and MAC address 52:54:00:b8:3c:34 in network mk-addons-190022
	I0731 16:42:03.104901   16404 profile.go:143] Saving config to /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/addons-190022/config.json ...
	I0731 16:42:03.105094   16404 start.go:128] duration metric: took 24.564838378s to createHost
	I0731 16:42:03.105116   16404 main.go:141] libmachine: (addons-190022) Calling .GetSSHHostname
	I0731 16:42:03.107425   16404 main.go:141] libmachine: (addons-190022) DBG | domain addons-190022 has defined MAC address 52:54:00:b8:3c:34 in network mk-addons-190022
	I0731 16:42:03.107791   16404 main.go:141] libmachine: (addons-190022) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b8:3c:34", ip: ""} in network mk-addons-190022: {Iface:virbr1 ExpiryTime:2024-07-31 17:41:52 +0000 UTC Type:0 Mac:52:54:00:b8:3c:34 Iaid: IPaddr:192.168.39.140 Prefix:24 Hostname:addons-190022 Clientid:01:52:54:00:b8:3c:34}
	I0731 16:42:03.107815   16404 main.go:141] libmachine: (addons-190022) DBG | domain addons-190022 has defined IP address 192.168.39.140 and MAC address 52:54:00:b8:3c:34 in network mk-addons-190022
	I0731 16:42:03.107980   16404 main.go:141] libmachine: (addons-190022) Calling .GetSSHPort
	I0731 16:42:03.108198   16404 main.go:141] libmachine: (addons-190022) Calling .GetSSHKeyPath
	I0731 16:42:03.108372   16404 main.go:141] libmachine: (addons-190022) Calling .GetSSHKeyPath
	I0731 16:42:03.108530   16404 main.go:141] libmachine: (addons-190022) Calling .GetSSHUsername
	I0731 16:42:03.108672   16404 main.go:141] libmachine: Using SSH client type: native
	I0731 16:42:03.108816   16404 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.140 22 <nil> <nil>}
	I0731 16:42:03.108825   16404 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0731 16:42:03.215407   16404 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722444123.186608497
	
	I0731 16:42:03.215427   16404 fix.go:216] guest clock: 1722444123.186608497
	I0731 16:42:03.215434   16404 fix.go:229] Guest: 2024-07-31 16:42:03.186608497 +0000 UTC Remote: 2024-07-31 16:42:03.105105177 +0000 UTC m=+24.661188828 (delta=81.50332ms)
	I0731 16:42:03.215475   16404 fix.go:200] guest clock delta is within tolerance: 81.50332ms
	I0731 16:42:03.215483   16404 start.go:83] releasing machines lock for "addons-190022", held for 24.67530545s
	I0731 16:42:03.215508   16404 main.go:141] libmachine: (addons-190022) Calling .DriverName
	I0731 16:42:03.215770   16404 main.go:141] libmachine: (addons-190022) Calling .GetIP
	I0731 16:42:03.218433   16404 main.go:141] libmachine: (addons-190022) DBG | domain addons-190022 has defined MAC address 52:54:00:b8:3c:34 in network mk-addons-190022
	I0731 16:42:03.218762   16404 main.go:141] libmachine: (addons-190022) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b8:3c:34", ip: ""} in network mk-addons-190022: {Iface:virbr1 ExpiryTime:2024-07-31 17:41:52 +0000 UTC Type:0 Mac:52:54:00:b8:3c:34 Iaid: IPaddr:192.168.39.140 Prefix:24 Hostname:addons-190022 Clientid:01:52:54:00:b8:3c:34}
	I0731 16:42:03.218789   16404 main.go:141] libmachine: (addons-190022) DBG | domain addons-190022 has defined IP address 192.168.39.140 and MAC address 52:54:00:b8:3c:34 in network mk-addons-190022
	I0731 16:42:03.219017   16404 main.go:141] libmachine: (addons-190022) Calling .DriverName
	I0731 16:42:03.219566   16404 main.go:141] libmachine: (addons-190022) Calling .DriverName
	I0731 16:42:03.219779   16404 main.go:141] libmachine: (addons-190022) Calling .DriverName
	I0731 16:42:03.219882   16404 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0731 16:42:03.219928   16404 main.go:141] libmachine: (addons-190022) Calling .GetSSHHostname
	I0731 16:42:03.219993   16404 ssh_runner.go:195] Run: cat /version.json
	I0731 16:42:03.220020   16404 main.go:141] libmachine: (addons-190022) Calling .GetSSHHostname
	I0731 16:42:03.222583   16404 main.go:141] libmachine: (addons-190022) DBG | domain addons-190022 has defined MAC address 52:54:00:b8:3c:34 in network mk-addons-190022
	I0731 16:42:03.222894   16404 main.go:141] libmachine: (addons-190022) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b8:3c:34", ip: ""} in network mk-addons-190022: {Iface:virbr1 ExpiryTime:2024-07-31 17:41:52 +0000 UTC Type:0 Mac:52:54:00:b8:3c:34 Iaid: IPaddr:192.168.39.140 Prefix:24 Hostname:addons-190022 Clientid:01:52:54:00:b8:3c:34}
	I0731 16:42:03.222911   16404 main.go:141] libmachine: (addons-190022) DBG | domain addons-190022 has defined MAC address 52:54:00:b8:3c:34 in network mk-addons-190022
	I0731 16:42:03.222955   16404 main.go:141] libmachine: (addons-190022) DBG | domain addons-190022 has defined IP address 192.168.39.140 and MAC address 52:54:00:b8:3c:34 in network mk-addons-190022
	I0731 16:42:03.223160   16404 main.go:141] libmachine: (addons-190022) Calling .GetSSHPort
	I0731 16:42:03.223292   16404 main.go:141] libmachine: (addons-190022) Calling .GetSSHKeyPath
	I0731 16:42:03.223403   16404 main.go:141] libmachine: (addons-190022) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b8:3c:34", ip: ""} in network mk-addons-190022: {Iface:virbr1 ExpiryTime:2024-07-31 17:41:52 +0000 UTC Type:0 Mac:52:54:00:b8:3c:34 Iaid: IPaddr:192.168.39.140 Prefix:24 Hostname:addons-190022 Clientid:01:52:54:00:b8:3c:34}
	I0731 16:42:03.223407   16404 main.go:141] libmachine: (addons-190022) Calling .GetSSHUsername
	I0731 16:42:03.223428   16404 main.go:141] libmachine: (addons-190022) DBG | domain addons-190022 has defined IP address 192.168.39.140 and MAC address 52:54:00:b8:3c:34 in network mk-addons-190022
	I0731 16:42:03.223596   16404 sshutil.go:53] new ssh client: &{IP:192.168.39.140 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19349-8084/.minikube/machines/addons-190022/id_rsa Username:docker}
	I0731 16:42:03.223611   16404 main.go:141] libmachine: (addons-190022) Calling .GetSSHPort
	I0731 16:42:03.223771   16404 main.go:141] libmachine: (addons-190022) Calling .GetSSHKeyPath
	I0731 16:42:03.223903   16404 main.go:141] libmachine: (addons-190022) Calling .GetSSHUsername
	I0731 16:42:03.224037   16404 sshutil.go:53] new ssh client: &{IP:192.168.39.140 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19349-8084/.minikube/machines/addons-190022/id_rsa Username:docker}
	I0731 16:42:03.348886   16404 ssh_runner.go:195] Run: systemctl --version
	I0731 16:42:03.354547   16404 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0731 16:42:03.506769   16404 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0731 16:42:03.512894   16404 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0731 16:42:03.512959   16404 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0731 16:42:03.530633   16404 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
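	
	The %!p(MISSING) in the find invocation above is the same logging artifact (the verb is -printf "%p, "). The step renames any bridge or podman CNI configs so they cannot conflict with the config minikube writes later; in this run it disabled 87-podman-bridge.conflist. Reconstructed form (sketch, assuming the dropped verb is %p):
	
	    # Reconstructed find command; moves conflicting CNI configs out of the way
	    sudo find /etc/cni/net.d -maxdepth 1 -type f \
	      \( \( -name '*bridge*' -or -name '*podman*' \) -and -not -name '*.mk_disabled' \) \
	      -printf '%p, ' -exec sh -c 'sudo mv {} {}.mk_disabled' \;
	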
	I0731 16:42:03.530656   16404 start.go:495] detecting cgroup driver to use...
	I0731 16:42:03.530720   16404 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0731 16:42:03.550022   16404 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0731 16:42:03.565106   16404 docker.go:217] disabling cri-docker service (if available) ...
	I0731 16:42:03.565162   16404 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0731 16:42:03.580074   16404 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0731 16:42:03.592634   16404 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0731 16:42:03.706677   16404 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0731 16:42:03.844950   16404 docker.go:233] disabling docker service ...
	I0731 16:42:03.845019   16404 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0731 16:42:03.858774   16404 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0731 16:42:03.871923   16404 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0731 16:42:04.005544   16404 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0731 16:42:04.137780   16404 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0731 16:42:04.150925   16404 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0731 16:42:04.167713   16404 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0731 16:42:04.167777   16404 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 16:42:04.177583   16404 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0731 16:42:04.177649   16404 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 16:42:04.187498   16404 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 16:42:04.197571   16404 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 16:42:04.207555   16404 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0731 16:42:04.217155   16404 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 16:42:04.226559   16404 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 16:42:04.241976   16404 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
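	
	The sed pipeline above edits /etc/crio/crio.conf.d/02-crio.conf in place: the pause image is pinned to registry.k8s.io/pause:3.9, cgroupfs is set as the cgroup manager, conmon is moved into the pod cgroup, and unprivileged low ports are enabled through default_sysctls. The expected end state, inferred from the sed expressions (the file itself is not dumped in this run):
	
	    # Inferred end state of /etc/crio/crio.conf.d/02-crio.conf after the edits above
	    grep -E 'pause_image|cgroup_manager|conmon_cgroup|default_sysctls|ip_unprivileged_port_start' \
	      /etc/crio/crio.conf.d/02-crio.conf
	    # pause_image = "registry.k8s.io/pause:3.9"
	    # cgroup_manager = "cgroupfs"
	    # conmon_cgroup = "pod"
	    # default_sysctls = [
	    #   "net.ipv4.ip_unprivileged_port_start=0",
	    # ]
	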
	I0731 16:42:04.251421   16404 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0731 16:42:04.259923   16404 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0731 16:42:04.259984   16404 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0731 16:42:04.271575   16404 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
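	
	The sysctl probe above fails with status 255 simply because br_netfilter is not loaded yet on the freshly booted guest; modprobe loads it and the echo enables IPv4 forwarding for the current boot. For context only, the usual way to make both settings persistent looks like this (this run instead relies on the modprobe and echo above for the current boot):
	
	    # General persistence pattern, shown only for context; not something this run does
	    echo br_netfilter | sudo tee /etc/modules-load.d/br_netfilter.conf
	    printf 'net.bridge.bridge-nf-call-iptables = 1\nnet.ipv4.ip_forward = 1\n' | sudo tee /etc/sysctl.d/99-kubernetes.conf
	    sudo sysctl --system
	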
	I0731 16:42:04.280100   16404 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0731 16:42:04.391524   16404 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0731 16:42:04.521477   16404 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0731 16:42:04.521570   16404 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0731 16:42:04.525844   16404 start.go:563] Will wait 60s for crictl version
	I0731 16:42:04.525898   16404 ssh_runner.go:195] Run: which crictl
	I0731 16:42:04.529120   16404 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0731 16:42:04.565684   16404 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0731 16:42:04.565815   16404 ssh_runner.go:195] Run: crio --version
	I0731 16:42:04.597647   16404 ssh_runner.go:195] Run: crio --version
	I0731 16:42:04.624234   16404 out.go:177] * Preparing Kubernetes v1.30.3 on CRI-O 1.29.1 ...
	I0731 16:42:04.625519   16404 main.go:141] libmachine: (addons-190022) Calling .GetIP
	I0731 16:42:04.627911   16404 main.go:141] libmachine: (addons-190022) DBG | domain addons-190022 has defined MAC address 52:54:00:b8:3c:34 in network mk-addons-190022
	I0731 16:42:04.628266   16404 main.go:141] libmachine: (addons-190022) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b8:3c:34", ip: ""} in network mk-addons-190022: {Iface:virbr1 ExpiryTime:2024-07-31 17:41:52 +0000 UTC Type:0 Mac:52:54:00:b8:3c:34 Iaid: IPaddr:192.168.39.140 Prefix:24 Hostname:addons-190022 Clientid:01:52:54:00:b8:3c:34}
	I0731 16:42:04.628293   16404 main.go:141] libmachine: (addons-190022) DBG | domain addons-190022 has defined IP address 192.168.39.140 and MAC address 52:54:00:b8:3c:34 in network mk-addons-190022
	I0731 16:42:04.628472   16404 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0731 16:42:04.632180   16404 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0731 16:42:04.643206   16404 kubeadm.go:883] updating cluster {Name:addons-190022 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:addons-190022 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.140 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0731 16:42:04.643313   16404 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0731 16:42:04.643354   16404 ssh_runner.go:195] Run: sudo crictl images --output json
	I0731 16:42:04.673696   16404 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.3". assuming images are not preloaded.
	I0731 16:42:04.673762   16404 ssh_runner.go:195] Run: which lz4
	I0731 16:42:04.677405   16404 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0731 16:42:04.681392   16404 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0731 16:42:04.681421   16404 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19349-8084/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (406200976 bytes)
	I0731 16:42:05.858907   16404 crio.go:462] duration metric: took 1.181525886s to copy over tarball
	I0731 16:42:05.858980   16404 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0731 16:42:08.068359   16404 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.209356148s)
	I0731 16:42:08.068392   16404 crio.go:469] duration metric: took 2.209450869s to extract the tarball
	I0731 16:42:08.068408   16404 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0731 16:42:08.105425   16404 ssh_runner.go:195] Run: sudo crictl images --output json
	I0731 16:42:08.149060   16404 crio.go:514] all images are preloaded for cri-o runtime.
	I0731 16:42:08.149080   16404 cache_images.go:84] Images are preloaded, skipping loading
	I0731 16:42:08.149087   16404 kubeadm.go:934] updating node { 192.168.39.140 8443 v1.30.3 crio true true} ...
	I0731 16:42:08.149182   16404 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-190022 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.140
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:addons-190022 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
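	
	The fragments above are assembled into a systemd drop-in that is later copied to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes, per the scp further down). Pieced together, the drop-in looks roughly like this (a sketch reconstructed from the logged fragments):
	
	    # Reconstructed kubelet drop-in (sketch)
	    cat /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
	    # [Unit]
	    # Wants=crio.service
	    #
	    # [Service]
	    # ExecStart=
	    # ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-190022 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.140
	    #
	    # [Install]
	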
	I0731 16:42:08.149244   16404 ssh_runner.go:195] Run: crio config
	I0731 16:42:08.194277   16404 cni.go:84] Creating CNI manager for ""
	I0731 16:42:08.194300   16404 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0731 16:42:08.194311   16404 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0731 16:42:08.194364   16404 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.140 APIServerPort:8443 KubernetesVersion:v1.30.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-190022 NodeName:addons-190022 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.140"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.140 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0731 16:42:08.194512   16404 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.140
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-190022"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.140
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.140"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
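	
	The generated kubeadm config above is copied to /var/tmp/minikube/kubeadm.yaml.new further down (2157 bytes) and promoted to kubeadm.yaml before the control plane is bootstrapped. A hypothetical bootstrap invocation using that file; the exact kubeadm flags minikube passes are not shown in this excerpt:
	
	    # Hypothetical invocation (assumption; the exact flags are not shown in this excerpt)
	    sudo /var/lib/minikube/binaries/v1.30.3/kubeadm init --config /var/tmp/minikube/kubeadm.yaml
	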
	
	I0731 16:42:08.194574   16404 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0731 16:42:08.204098   16404 binaries.go:44] Found k8s binaries, skipping transfer
	I0731 16:42:08.204171   16404 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0731 16:42:08.213251   16404 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I0731 16:42:08.228899   16404 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0731 16:42:08.243601   16404 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2157 bytes)
	I0731 16:42:08.258573   16404 ssh_runner.go:195] Run: grep 192.168.39.140	control-plane.minikube.internal$ /etc/hosts
	I0731 16:42:08.261931   16404 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.140	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0731 16:42:08.273138   16404 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0731 16:42:08.400122   16404 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0731 16:42:08.416751   16404 certs.go:68] Setting up /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/addons-190022 for IP: 192.168.39.140
	I0731 16:42:08.416779   16404 certs.go:194] generating shared ca certs ...
	I0731 16:42:08.416802   16404 certs.go:226] acquiring lock for ca certs: {Name:mk49eed439085f9c30746706460ce213570d1997 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 16:42:08.416960   16404 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/19349-8084/.minikube/ca.key
	I0731 16:42:08.546599   16404 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19349-8084/.minikube/ca.crt ...
	I0731 16:42:08.546627   16404 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19349-8084/.minikube/ca.crt: {Name:mk98fd99858826e16dd06829f67d17f3bbd5dba5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 16:42:08.546788   16404 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19349-8084/.minikube/ca.key ...
	I0731 16:42:08.546799   16404 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19349-8084/.minikube/ca.key: {Name:mk758a5c4c2c0948d134585d8091c39b75aa53cd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 16:42:08.546868   16404 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19349-8084/.minikube/proxy-client-ca.key
	I0731 16:42:08.698767   16404 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19349-8084/.minikube/proxy-client-ca.crt ...
	I0731 16:42:08.698796   16404 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19349-8084/.minikube/proxy-client-ca.crt: {Name:mk0402e4fdafd0dadcc3a759a146f0382cb7f698 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 16:42:08.698949   16404 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19349-8084/.minikube/proxy-client-ca.key ...
	I0731 16:42:08.698960   16404 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19349-8084/.minikube/proxy-client-ca.key: {Name:mk3396450d02245824d44252cd53d5e243110597 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 16:42:08.699020   16404 certs.go:256] generating profile certs ...
	I0731 16:42:08.699069   16404 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/addons-190022/client.key
	I0731 16:42:08.699102   16404 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/addons-190022/client.crt with IP's: []
	I0731 16:42:08.863211   16404 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/addons-190022/client.crt ...
	I0731 16:42:08.863241   16404 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/addons-190022/client.crt: {Name:mkc08d8f316dbe9990d761b84b24b31787979fd3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 16:42:08.863398   16404 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/addons-190022/client.key ...
	I0731 16:42:08.863425   16404 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/addons-190022/client.key: {Name:mk236cf66608a04fccc0dc3a978aa705916000df Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 16:42:08.863507   16404 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/addons-190022/apiserver.key.202be5a6
	I0731 16:42:08.863525   16404 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/addons-190022/apiserver.crt.202be5a6 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.140]
	I0731 16:42:09.004841   16404 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/addons-190022/apiserver.crt.202be5a6 ...
	I0731 16:42:09.004871   16404 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/addons-190022/apiserver.crt.202be5a6: {Name:mk22785ec0c5e4d2c0950a76981c34684f85c969 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 16:42:09.005019   16404 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/addons-190022/apiserver.key.202be5a6 ...
	I0731 16:42:09.005031   16404 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/addons-190022/apiserver.key.202be5a6: {Name:mkaa5e980909d47dc81bfccb1a97b6a303c56f69 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 16:42:09.005093   16404 certs.go:381] copying /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/addons-190022/apiserver.crt.202be5a6 -> /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/addons-190022/apiserver.crt
	I0731 16:42:09.005182   16404 certs.go:385] copying /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/addons-190022/apiserver.key.202be5a6 -> /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/addons-190022/apiserver.key
	I0731 16:42:09.005240   16404 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/addons-190022/proxy-client.key
	I0731 16:42:09.005258   16404 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/addons-190022/proxy-client.crt with IP's: []
	I0731 16:42:09.175359   16404 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/addons-190022/proxy-client.crt ...
	I0731 16:42:09.175397   16404 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/addons-190022/proxy-client.crt: {Name:mk9fcfd895242a38b7cc4fdd83ba69a9c96b7633 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 16:42:09.175543   16404 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/addons-190022/proxy-client.key ...
	I0731 16:42:09.175553   16404 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/addons-190022/proxy-client.key: {Name:mkedfa22e9ce237ebf95ee7a2debe0f3b934e2ae Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 16:42:09.175716   16404 certs.go:484] found cert: /home/jenkins/minikube-integration/19349-8084/.minikube/certs/ca-key.pem (1675 bytes)
	I0731 16:42:09.175751   16404 certs.go:484] found cert: /home/jenkins/minikube-integration/19349-8084/.minikube/certs/ca.pem (1082 bytes)
	I0731 16:42:09.175773   16404 certs.go:484] found cert: /home/jenkins/minikube-integration/19349-8084/.minikube/certs/cert.pem (1123 bytes)
	I0731 16:42:09.175795   16404 certs.go:484] found cert: /home/jenkins/minikube-integration/19349-8084/.minikube/certs/key.pem (1675 bytes)
	I0731 16:42:09.177089   16404 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19349-8084/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0731 16:42:09.204217   16404 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19349-8084/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0731 16:42:09.226695   16404 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19349-8084/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0731 16:42:09.247974   16404 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19349-8084/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0731 16:42:09.270172   16404 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/addons-190022/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0731 16:42:09.291628   16404 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/addons-190022/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0731 16:42:09.313189   16404 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/addons-190022/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0731 16:42:09.335478   16404 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/addons-190022/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0731 16:42:09.357762   16404 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19349-8084/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0731 16:42:09.379615   16404 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
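	
	With the CA, API server, and aggregator proxy certificates copied into /var/lib/minikube/certs, one quick sanity check is that the API server certificate carries the SANs requested during generation (10.96.0.1, 127.0.0.1, 10.0.0.1 and the node IP 192.168.39.140). A sketch, run on the guest:
	
	    # Sketch: verify the SANs on the copied API server certificate
	    openssl x509 -noout -text -in /var/lib/minikube/certs/apiserver.crt | grep -A1 'Subject Alternative Name'
	    # expected to list 10.96.0.1, 127.0.0.1, 10.0.0.1 and 192.168.39.140, per the generation step above
	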
	I0731 16:42:09.394999   16404 ssh_runner.go:195] Run: openssl version
	I0731 16:42:09.400758   16404 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0731 16:42:09.410874   16404 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0731 16:42:09.415180   16404 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 31 16:42 /usr/share/ca-certificates/minikubeCA.pem
	I0731 16:42:09.415231   16404 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0731 16:42:09.420540   16404 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
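	
	The two commands above wire the minikube CA into the system trust store the way OpenSSL expects: the certificate is linked under /etc/ssl/certs, and a second symlink named after its subject hash (b5213941, as printed by openssl x509 -hash) with a .0 suffix allows hash-based lookup. A sketch of the resulting layout:
	
	    # Sketch: the hash-named symlink OpenSSL uses for CA lookup (hash value taken from the step above)
	    openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem   # prints b5213941
	    ls -l /etc/ssl/certs/b5213941.0                                           # symlink to /etc/ssl/certs/minikubeCA.pem
	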
	I0731 16:42:09.430259   16404 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0731 16:42:09.434185   16404 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0731 16:42:09.434230   16404 kubeadm.go:392] StartCluster: {Name:addons-190022 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:addons-190022 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.140 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
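The StartCluster dump above is the fully resolved profile configuration; most fields map onto minikube start flags. As a hedged approximation (flag names are standard minikube flags inferred from the profile, not copied from this log), the profile corresponds roughly to:

  $ minikube start -p addons-190022 --driver=kvm2 --container-runtime=crio \
      --kubernetes-version=v1.30.3 --memory=4000 --cpus=2 --disk-size=20000mb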
	I0731 16:42:09.434301   16404 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0731 16:42:09.434372   16404 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0731 16:42:09.470928   16404 cri.go:89] found id: ""
	I0731 16:42:09.470991   16404 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0731 16:42:09.480086   16404 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0731 16:42:09.491206   16404 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0731 16:42:09.503823   16404 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0731 16:42:09.503841   16404 kubeadm.go:157] found existing configuration files:
	
	I0731 16:42:09.503883   16404 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0731 16:42:09.512469   16404 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0731 16:42:09.512530   16404 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0731 16:42:09.521693   16404 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0731 16:42:09.530442   16404 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0731 16:42:09.530494   16404 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0731 16:42:09.538797   16404 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0731 16:42:09.546978   16404 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0731 16:42:09.547032   16404 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0731 16:42:09.555753   16404 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0731 16:42:09.563797   16404 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0731 16:42:09.563846   16404 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0731 16:42:09.572635   16404 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0731 16:42:09.625936   16404 kubeadm.go:310] [init] Using Kubernetes version: v1.30.3
	I0731 16:42:09.626002   16404 kubeadm.go:310] [preflight] Running pre-flight checks
	I0731 16:42:09.744873   16404 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0731 16:42:09.745011   16404 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0731 16:42:09.745110   16404 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0731 16:42:09.953459   16404 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0731 16:42:10.101223   16404 out.go:204]   - Generating certificates and keys ...
	I0731 16:42:10.101356   16404 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0731 16:42:10.101440   16404 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0731 16:42:10.277611   16404 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0731 16:42:10.337591   16404 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0731 16:42:10.420205   16404 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0731 16:42:10.516040   16404 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0731 16:42:10.596247   16404 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0731 16:42:10.596554   16404 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [addons-190022 localhost] and IPs [192.168.39.140 127.0.0.1 ::1]
	I0731 16:42:10.730169   16404 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0731 16:42:10.730349   16404 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [addons-190022 localhost] and IPs [192.168.39.140 127.0.0.1 ::1]
	I0731 16:42:10.786611   16404 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0731 16:42:10.916279   16404 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0731 16:42:11.123724   16404 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0731 16:42:11.123997   16404 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0731 16:42:11.189266   16404 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0731 16:42:11.243585   16404 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0731 16:42:11.341228   16404 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0731 16:42:11.731499   16404 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0731 16:42:12.047912   16404 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0731 16:42:12.048793   16404 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0731 16:42:12.052466   16404 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0731 16:42:12.054583   16404 out.go:204]   - Booting up control plane ...
	I0731 16:42:12.054695   16404 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0731 16:42:12.054794   16404 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0731 16:42:12.054885   16404 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0731 16:42:12.070417   16404 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0731 16:42:12.071262   16404 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0731 16:42:12.071331   16404 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0731 16:42:12.202217   16404 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0731 16:42:12.202303   16404 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0731 16:42:12.704028   16404 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 502.224044ms
	I0731 16:42:12.704103   16404 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0731 16:42:17.702681   16404 kubeadm.go:310] [api-check] The API server is healthy after 5.001312549s
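The [kubelet-check] and [api-check] phases poll until the kubelet and API server answer their health endpoints. Once /etc/kubernetes/admin.conf exists, the same readiness can be confirmed by hand, for example:

  $ kubectl --kubeconfig /etc/kubernetes/admin.conf get --raw='/readyz?verbose'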
	I0731 16:42:17.715966   16404 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0731 16:42:17.727539   16404 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0731 16:42:17.763374   16404 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0731 16:42:17.763634   16404 kubeadm.go:310] [mark-control-plane] Marking the node addons-190022 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0731 16:42:17.774706   16404 kubeadm.go:310] [bootstrap-token] Using token: 09eco0.a7s4hcv0o7zkmmb3
	I0731 16:42:17.776220   16404 out.go:204]   - Configuring RBAC rules ...
	I0731 16:42:17.776357   16404 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0731 16:42:17.783231   16404 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0731 16:42:17.790236   16404 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0731 16:42:17.793454   16404 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0731 16:42:17.796736   16404 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0731 16:42:17.801710   16404 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0731 16:42:18.111165   16404 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0731 16:42:18.548044   16404 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0731 16:42:19.108771   16404 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0731 16:42:19.109627   16404 kubeadm.go:310] 
	I0731 16:42:19.109737   16404 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0731 16:42:19.109770   16404 kubeadm.go:310] 
	I0731 16:42:19.109890   16404 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0731 16:42:19.109900   16404 kubeadm.go:310] 
	I0731 16:42:19.109935   16404 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0731 16:42:19.110027   16404 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0731 16:42:19.110076   16404 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0731 16:42:19.110082   16404 kubeadm.go:310] 
	I0731 16:42:19.110160   16404 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0731 16:42:19.110172   16404 kubeadm.go:310] 
	I0731 16:42:19.110224   16404 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0731 16:42:19.110233   16404 kubeadm.go:310] 
	I0731 16:42:19.110297   16404 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0731 16:42:19.110394   16404 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0731 16:42:19.110498   16404 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0731 16:42:19.110514   16404 kubeadm.go:310] 
	I0731 16:42:19.110617   16404 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0731 16:42:19.110713   16404 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0731 16:42:19.110722   16404 kubeadm.go:310] 
	I0731 16:42:19.110922   16404 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token 09eco0.a7s4hcv0o7zkmmb3 \
	I0731 16:42:19.111065   16404 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:460b24b51d10a3760f2391542a8a89e401359854f437f922cbee3f2d8ed6c856 \
	I0731 16:42:19.111104   16404 kubeadm.go:310] 	--control-plane 
	I0731 16:42:19.111131   16404 kubeadm.go:310] 
	I0731 16:42:19.111234   16404 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0731 16:42:19.111244   16404 kubeadm.go:310] 
	I0731 16:42:19.111356   16404 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 09eco0.a7s4hcv0o7zkmmb3 \
	I0731 16:42:19.111479   16404 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:460b24b51d10a3760f2391542a8a89e401359854f437f922cbee3f2d8ed6c856 
	I0731 16:42:19.111612   16404 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
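The join commands printed above embed a short-lived bootstrap token plus the CA certificate hash. If the token expires before another node joins, a fresh command can be generated on the control plane with standard kubeadm (shown here as a reminder; this test does not run it):

  $ sudo kubeadm token create --print-join-command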
	I0731 16:42:19.111637   16404 cni.go:84] Creating CNI manager for ""
	I0731 16:42:19.111650   16404 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0731 16:42:19.113330   16404 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0731 16:42:19.114623   16404 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0731 16:42:19.125158   16404 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
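Because the kvm2 driver is paired with crio, minikube configures the plain bridge CNI by writing a single conflist to /etc/cni/net.d/1-k8s.conflist. The 496-byte payload is not shown in the log; an illustrative bridge conflist (representative CNI fields, not minikube's exact file) looks roughly like:

  {
    "cniVersion": "0.3.1",
    "name": "bridge",
    "plugins": [
      {"type": "bridge", "bridge": "bridge", "isDefaultGateway": true, "ipMasq": true,
       "ipam": {"type": "host-local", "subnet": "10.244.0.0/16"}},
      {"type": "portmap", "capabilities": {"portMappings": true}}
    ]
  }

  $ minikube ssh -p addons-190022 -- sudo cat /etc/cni/net.d/1-k8s.conflist   # inspect the file actually written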
	I0731 16:42:19.142456   16404 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0731 16:42:19.142520   16404 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 16:42:19.142578   16404 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-190022 minikube.k8s.io/updated_at=2024_07_31T16_42_19_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=1d737dad7efa60c56d30434fcd857dd3b14c91d9 minikube.k8s.io/name=addons-190022 minikube.k8s.io/primary=true
	I0731 16:42:19.182656   16404 ops.go:34] apiserver oom_adj: -16
	I0731 16:42:19.237915   16404 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 16:42:19.738139   16404 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 16:42:20.238042   16404 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 16:42:20.738206   16404 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 16:42:21.238157   16404 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 16:42:21.738796   16404 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 16:42:22.238549   16404 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 16:42:22.738820   16404 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 16:42:23.238839   16404 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 16:42:23.738377   16404 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 16:42:24.238819   16404 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 16:42:24.738887   16404 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 16:42:25.238667   16404 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 16:42:25.738526   16404 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 16:42:26.238076   16404 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 16:42:26.738628   16404 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 16:42:27.237982   16404 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 16:42:27.738797   16404 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 16:42:28.238742   16404 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 16:42:28.739019   16404 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 16:42:29.238828   16404 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 16:42:29.737963   16404 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 16:42:30.238275   16404 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 16:42:30.737944   16404 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 16:42:31.238670   16404 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 16:42:31.738820   16404 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 16:42:32.238718   16404 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 16:42:32.738481   16404 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 16:42:32.828588   16404 kubeadm.go:1113] duration metric: took 13.686125798s to wait for elevateKubeSystemPrivileges
	I0731 16:42:32.828624   16404 kubeadm.go:394] duration metric: took 23.394397952s to StartCluster
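The repeated "kubectl get sa default" calls are minikube waiting for the default ServiceAccount to exist before treating the minikube-rbac cluster-admin binding created at 16:42:19 as effective; that wait is the 13.68s elevateKubeSystemPrivileges metric above. Both can be checked from the host once the profile kubeconfig is merged:

  $ kubectl --context addons-190022 -n default get serviceaccount default
  $ kubectl --context addons-190022 get clusterrolebinding minikube-rbac -o wide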
	I0731 16:42:32.828641   16404 settings.go:142] acquiring lock: {Name:mk8ac8269296bf0b887a00ff74caaad180005f89 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 16:42:32.828795   16404 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19349-8084/kubeconfig
	I0731 16:42:32.829209   16404 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19349-8084/kubeconfig: {Name:mkf2340bc1ada3c683f132aca37228d7588e1fdb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 16:42:32.829421   16404 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0731 16:42:32.829450   16404 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.140 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0731 16:42:32.829492   16404 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false helm-tiller:true inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
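The toEnable map above is the resolved addon set for this profile: ingress, metrics-server, csi-hostpath-driver, registry, volcano, yakd and several others are true, so each gets a "Setting addon ..." line below. The same state can be inspected or changed with the standard minikube addon commands (not run by this test):

  $ minikube -p addons-190022 addons list
  $ minikube -p addons-190022 addons enable metrics-server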
	I0731 16:42:32.829568   16404 addons.go:69] Setting yakd=true in profile "addons-190022"
	I0731 16:42:32.829599   16404 addons.go:234] Setting addon yakd=true in "addons-190022"
	I0731 16:42:32.829609   16404 addons.go:69] Setting inspektor-gadget=true in profile "addons-190022"
	I0731 16:42:32.829635   16404 addons.go:234] Setting addon inspektor-gadget=true in "addons-190022"
	I0731 16:42:32.829635   16404 addons.go:69] Setting gcp-auth=true in profile "addons-190022"
	I0731 16:42:32.829653   16404 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-190022"
	I0731 16:42:32.829654   16404 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-190022"
	I0731 16:42:32.829674   16404 addons.go:69] Setting default-storageclass=true in profile "addons-190022"
	I0731 16:42:32.829681   16404 addons.go:69] Setting ingress=true in profile "addons-190022"
	I0731 16:42:32.829686   16404 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-190022"
	I0731 16:42:32.829686   16404 config.go:182] Loaded profile config "addons-190022": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0731 16:42:32.829691   16404 addons.go:69] Setting volumesnapshots=true in profile "addons-190022"
	I0731 16:42:32.829700   16404 addons.go:234] Setting addon ingress=true in "addons-190022"
	I0731 16:42:32.829701   16404 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-190022"
	I0731 16:42:32.829709   16404 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-190022"
	I0731 16:42:32.829718   16404 addons.go:69] Setting registry=true in profile "addons-190022"
	I0731 16:42:32.829667   16404 mustload.go:65] Loading cluster: addons-190022
	I0731 16:42:32.829748   16404 addons.go:69] Setting storage-provisioner=true in profile "addons-190022"
	I0731 16:42:32.829682   16404 addons.go:69] Setting volcano=true in profile "addons-190022"
	I0731 16:42:32.829776   16404 addons.go:234] Setting addon storage-provisioner=true in "addons-190022"
	I0731 16:42:32.829780   16404 addons.go:234] Setting addon volcano=true in "addons-190022"
	I0731 16:42:32.829644   16404 addons.go:69] Setting metrics-server=true in profile "addons-190022"
	I0731 16:42:32.829812   16404 host.go:66] Checking if "addons-190022" exists ...
	I0731 16:42:32.829826   16404 addons.go:234] Setting addon metrics-server=true in "addons-190022"
	I0731 16:42:32.829846   16404 host.go:66] Checking if "addons-190022" exists ...
	I0731 16:42:32.829887   16404 config.go:182] Loaded profile config "addons-190022": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0731 16:42:32.829659   16404 addons.go:69] Setting cloud-spanner=true in profile "addons-190022"
	I0731 16:42:32.829959   16404 addons.go:234] Setting addon cloud-spanner=true in "addons-190022"
	I0731 16:42:32.829988   16404 host.go:66] Checking if "addons-190022" exists ...
	I0731 16:42:32.830141   16404 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 16:42:32.830173   16404 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 16:42:32.830196   16404 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 16:42:32.830209   16404 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 16:42:32.830215   16404 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 16:42:32.829674   16404 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-190022"
	I0731 16:42:32.830231   16404 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 16:42:32.829711   16404 addons.go:234] Setting addon volumesnapshots=true in "addons-190022"
	I0731 16:42:32.830239   16404 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 16:42:32.830258   16404 host.go:66] Checking if "addons-190022" exists ...
	I0731 16:42:32.829676   16404 addons.go:69] Setting helm-tiller=true in profile "addons-190022"
	I0731 16:42:32.830350   16404 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 16:42:32.830150   16404 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 16:42:32.830356   16404 addons.go:234] Setting addon helm-tiller=true in "addons-190022"
	I0731 16:42:32.830388   16404 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 16:42:32.829727   16404 host.go:66] Checking if "addons-190022" exists ...
	I0731 16:42:32.830424   16404 host.go:66] Checking if "addons-190022" exists ...
	I0731 16:42:32.829732   16404 addons.go:69] Setting ingress-dns=true in profile "addons-190022"
	I0731 16:42:32.830467   16404 addons.go:234] Setting addon ingress-dns=true in "addons-190022"
	I0731 16:42:32.830496   16404 host.go:66] Checking if "addons-190022" exists ...
	I0731 16:42:32.830584   16404 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 16:42:32.830619   16404 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 16:42:32.830662   16404 host.go:66] Checking if "addons-190022" exists ...
	I0731 16:42:32.829735   16404 addons.go:234] Setting addon registry=true in "addons-190022"
	I0731 16:42:32.830736   16404 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 16:42:32.830751   16404 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 16:42:32.830747   16404 host.go:66] Checking if "addons-190022" exists ...
	I0731 16:42:32.830809   16404 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 16:42:32.830836   16404 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 16:42:32.830853   16404 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 16:42:32.830881   16404 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 16:42:32.831072   16404 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 16:42:32.829806   16404 host.go:66] Checking if "addons-190022" exists ...
	I0731 16:42:32.830231   16404 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 16:42:32.831169   16404 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 16:42:32.829670   16404 host.go:66] Checking if "addons-190022" exists ...
	I0731 16:42:32.831482   16404 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 16:42:32.831484   16404 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 16:42:32.830398   16404 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 16:42:32.831513   16404 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 16:42:32.831555   16404 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 16:42:32.829636   16404 host.go:66] Checking if "addons-190022" exists ...
	I0731 16:42:32.831690   16404 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 16:42:32.831728   16404 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 16:42:32.829725   16404 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-190022"
	I0731 16:42:32.831766   16404 host.go:66] Checking if "addons-190022" exists ...
	I0731 16:42:32.847203   16404 out.go:177] * Verifying Kubernetes components...
	I0731 16:42:32.849043   16404 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0731 16:42:32.851049   16404 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35457
	I0731 16:42:32.851552   16404 main.go:141] libmachine: () Calling .GetVersion
	I0731 16:42:32.851775   16404 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44283
	I0731 16:42:32.851953   16404 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44311
	I0731 16:42:32.851955   16404 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46317
	I0731 16:42:32.852092   16404 main.go:141] libmachine: Using API Version  1
	I0731 16:42:32.852107   16404 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 16:42:32.852137   16404 main.go:141] libmachine: () Calling .GetVersion
	I0731 16:42:32.852382   16404 main.go:141] libmachine: () Calling .GetVersion
	I0731 16:42:32.852516   16404 main.go:141] libmachine: () Calling .GetMachineName
	I0731 16:42:32.852611   16404 main.go:141] libmachine: () Calling .GetVersion
	I0731 16:42:32.852774   16404 main.go:141] libmachine: Using API Version  1
	I0731 16:42:32.852796   16404 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 16:42:32.853059   16404 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 16:42:32.853100   16404 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 16:42:32.853185   16404 main.go:141] libmachine: () Calling .GetMachineName
	I0731 16:42:32.853411   16404 main.go:141] libmachine: Using API Version  1
	I0731 16:42:32.853503   16404 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 16:42:32.853413   16404 main.go:141] libmachine: Using API Version  1
	I0731 16:42:32.853549   16404 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 16:42:32.853821   16404 main.go:141] libmachine: () Calling .GetMachineName
	I0731 16:42:32.853857   16404 main.go:141] libmachine: () Calling .GetMachineName
	I0731 16:42:32.856473   16404 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44825
	I0731 16:42:32.857592   16404 main.go:141] libmachine: () Calling .GetVersion
	I0731 16:42:32.858157   16404 main.go:141] libmachine: Using API Version  1
	I0731 16:42:32.858173   16404 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 16:42:32.858520   16404 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34441
	I0731 16:42:32.858525   16404 main.go:141] libmachine: () Calling .GetMachineName
	I0731 16:42:32.858850   16404 main.go:141] libmachine: () Calling .GetVersion
	I0731 16:42:32.859078   16404 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 16:42:32.859138   16404 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 16:42:32.859242   16404 main.go:141] libmachine: Using API Version  1
	I0731 16:42:32.859255   16404 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 16:42:32.859571   16404 main.go:141] libmachine: () Calling .GetMachineName
	I0731 16:42:32.860185   16404 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 16:42:32.860224   16404 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 16:42:32.864109   16404 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45095
	I0731 16:42:32.864472   16404 main.go:141] libmachine: () Calling .GetVersion
	I0731 16:42:32.864925   16404 main.go:141] libmachine: Using API Version  1
	I0731 16:42:32.864937   16404 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 16:42:32.865251   16404 main.go:141] libmachine: () Calling .GetMachineName
	I0731 16:42:32.865426   16404 main.go:141] libmachine: (addons-190022) Calling .GetState
	I0731 16:42:32.869811   16404 addons.go:234] Setting addon default-storageclass=true in "addons-190022"
	I0731 16:42:32.869845   16404 host.go:66] Checking if "addons-190022" exists ...
	I0731 16:42:32.870182   16404 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 16:42:32.870214   16404 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 16:42:32.870351   16404 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42213
	I0731 16:42:32.871551   16404 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 16:42:32.871594   16404 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 16:42:32.871853   16404 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 16:42:32.871878   16404 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 16:42:32.872186   16404 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 16:42:32.872220   16404 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 16:42:32.872612   16404 main.go:141] libmachine: () Calling .GetVersion
	I0731 16:42:32.872698   16404 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36047
	I0731 16:42:32.871551   16404 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 16:42:32.872826   16404 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 16:42:32.872828   16404 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 16:42:32.872856   16404 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 16:42:32.875764   16404 main.go:141] libmachine: () Calling .GetVersion
	I0731 16:42:32.875961   16404 main.go:141] libmachine: Using API Version  1
	I0731 16:42:32.875975   16404 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 16:42:32.876701   16404 main.go:141] libmachine: Using API Version  1
	I0731 16:42:32.876715   16404 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 16:42:32.876771   16404 main.go:141] libmachine: () Calling .GetMachineName
	I0731 16:42:32.876812   16404 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42427
	I0731 16:42:32.877310   16404 main.go:141] libmachine: () Calling .GetMachineName
	I0731 16:42:32.877372   16404 main.go:141] libmachine: (addons-190022) Calling .GetState
	I0731 16:42:32.877418   16404 main.go:141] libmachine: () Calling .GetVersion
	I0731 16:42:32.878807   16404 main.go:141] libmachine: Using API Version  1
	I0731 16:42:32.878823   16404 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 16:42:32.879080   16404 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 16:42:32.879126   16404 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 16:42:32.879217   16404 main.go:141] libmachine: () Calling .GetMachineName
	I0731 16:42:32.879803   16404 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 16:42:32.879875   16404 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 16:42:32.880171   16404 host.go:66] Checking if "addons-190022" exists ...
	I0731 16:42:32.880655   16404 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 16:42:32.880723   16404 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 16:42:32.889744   16404 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41189
	I0731 16:42:32.890696   16404 main.go:141] libmachine: () Calling .GetVersion
	I0731 16:42:32.891354   16404 main.go:141] libmachine: Using API Version  1
	I0731 16:42:32.891380   16404 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 16:42:32.891750   16404 main.go:141] libmachine: () Calling .GetMachineName
	I0731 16:42:32.891916   16404 main.go:141] libmachine: (addons-190022) Calling .GetState
	I0731 16:42:32.893574   16404 main.go:141] libmachine: (addons-190022) Calling .DriverName
	I0731 16:42:32.895893   16404 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0731 16:42:32.897138   16404 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0731 16:42:32.897157   16404 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0731 16:42:32.897178   16404 main.go:141] libmachine: (addons-190022) Calling .GetSSHHostname
	I0731 16:42:32.901176   16404 main.go:141] libmachine: (addons-190022) DBG | domain addons-190022 has defined MAC address 52:54:00:b8:3c:34 in network mk-addons-190022
	I0731 16:42:32.901605   16404 main.go:141] libmachine: (addons-190022) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b8:3c:34", ip: ""} in network mk-addons-190022: {Iface:virbr1 ExpiryTime:2024-07-31 17:41:52 +0000 UTC Type:0 Mac:52:54:00:b8:3c:34 Iaid: IPaddr:192.168.39.140 Prefix:24 Hostname:addons-190022 Clientid:01:52:54:00:b8:3c:34}
	I0731 16:42:32.901628   16404 main.go:141] libmachine: (addons-190022) DBG | domain addons-190022 has defined IP address 192.168.39.140 and MAC address 52:54:00:b8:3c:34 in network mk-addons-190022
	I0731 16:42:32.901927   16404 main.go:141] libmachine: (addons-190022) Calling .GetSSHPort
	I0731 16:42:32.902109   16404 main.go:141] libmachine: (addons-190022) Calling .GetSSHKeyPath
	I0731 16:42:32.902257   16404 main.go:141] libmachine: (addons-190022) Calling .GetSSHUsername
	I0731 16:42:32.902401   16404 sshutil.go:53] new ssh client: &{IP:192.168.39.140 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19349-8084/.minikube/machines/addons-190022/id_rsa Username:docker}
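Each addon install ships its manifest over a fresh SSH session using the key and address shown in the sshutil line above. An equivalent manual session, with the path and user taken from this log (or via the supported minikube shortcut), would be:

  $ ssh -i /home/jenkins/minikube-integration/19349-8084/.minikube/machines/addons-190022/id_rsa docker@192.168.39.140
  $ minikube ssh -p addons-190022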
	I0731 16:42:32.905170   16404 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36297
	I0731 16:42:32.906178   16404 main.go:141] libmachine: () Calling .GetVersion
	I0731 16:42:32.906798   16404 main.go:141] libmachine: Using API Version  1
	I0731 16:42:32.906816   16404 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 16:42:32.906886   16404 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38935
	I0731 16:42:32.907230   16404 main.go:141] libmachine: () Calling .GetMachineName
	I0731 16:42:32.907411   16404 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43047
	I0731 16:42:32.907421   16404 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41191
	I0731 16:42:32.907495   16404 main.go:141] libmachine: () Calling .GetVersion
	I0731 16:42:32.907827   16404 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 16:42:32.907875   16404 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 16:42:32.908007   16404 main.go:141] libmachine: Using API Version  1
	I0731 16:42:32.908022   16404 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 16:42:32.908997   16404 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46259
	I0731 16:42:32.909010   16404 main.go:141] libmachine: () Calling .GetVersion
	I0731 16:42:32.909081   16404 main.go:141] libmachine: () Calling .GetMachineName
	I0731 16:42:32.909184   16404 main.go:141] libmachine: () Calling .GetVersion
	I0731 16:42:32.909380   16404 main.go:141] libmachine: () Calling .GetVersion
	I0731 16:42:32.909445   16404 main.go:141] libmachine: (addons-190022) Calling .GetState
	I0731 16:42:32.909548   16404 main.go:141] libmachine: Using API Version  1
	I0731 16:42:32.909561   16404 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 16:42:32.909993   16404 main.go:141] libmachine: Using API Version  1
	I0731 16:42:32.910027   16404 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 16:42:32.910233   16404 main.go:141] libmachine: Using API Version  1
	I0731 16:42:32.910260   16404 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 16:42:32.910472   16404 main.go:141] libmachine: () Calling .GetMachineName
	I0731 16:42:32.910625   16404 main.go:141] libmachine: () Calling .GetMachineName
	I0731 16:42:32.911090   16404 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 16:42:32.911096   16404 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 16:42:32.911143   16404 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 16:42:32.911252   16404 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 16:42:32.911612   16404 main.go:141] libmachine: (addons-190022) Calling .DriverName
	I0731 16:42:32.911835   16404 main.go:141] libmachine: () Calling .GetMachineName
	I0731 16:42:32.912416   16404 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 16:42:32.912445   16404 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 16:42:32.913662   16404 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.16.1
	I0731 16:42:32.915267   16404 addons.go:431] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0731 16:42:32.915332   16404 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0731 16:42:32.915350   16404 main.go:141] libmachine: (addons-190022) Calling .GetSSHHostname
	I0731 16:42:32.917483   16404 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45351
	I0731 16:42:32.917961   16404 main.go:141] libmachine: () Calling .GetVersion
	I0731 16:42:32.919041   16404 main.go:141] libmachine: Using API Version  1
	I0731 16:42:32.919061   16404 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 16:42:32.919804   16404 main.go:141] libmachine: () Calling .GetMachineName
	I0731 16:42:32.920231   16404 main.go:141] libmachine: (addons-190022) Calling .GetState
	I0731 16:42:32.921390   16404 main.go:141] libmachine: (addons-190022) DBG | domain addons-190022 has defined MAC address 52:54:00:b8:3c:34 in network mk-addons-190022
	I0731 16:42:32.921823   16404 main.go:141] libmachine: (addons-190022) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b8:3c:34", ip: ""} in network mk-addons-190022: {Iface:virbr1 ExpiryTime:2024-07-31 17:41:52 +0000 UTC Type:0 Mac:52:54:00:b8:3c:34 Iaid: IPaddr:192.168.39.140 Prefix:24 Hostname:addons-190022 Clientid:01:52:54:00:b8:3c:34}
	I0731 16:42:32.921854   16404 main.go:141] libmachine: (addons-190022) DBG | domain addons-190022 has defined IP address 192.168.39.140 and MAC address 52:54:00:b8:3c:34 in network mk-addons-190022
	I0731 16:42:32.922058   16404 main.go:141] libmachine: (addons-190022) Calling .GetSSHPort
	I0731 16:42:32.922313   16404 main.go:141] libmachine: (addons-190022) Calling .DriverName
	I0731 16:42:32.922372   16404 main.go:141] libmachine: (addons-190022) Calling .GetSSHKeyPath
	I0731 16:42:32.922650   16404 main.go:141] libmachine: (addons-190022) Calling .GetSSHUsername
	I0731 16:42:32.922802   16404 sshutil.go:53] new ssh client: &{IP:192.168.39.140 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19349-8084/.minikube/machines/addons-190022/id_rsa Username:docker}
	I0731 16:42:32.923762   16404 out.go:177]   - Using image docker.io/registry:2.8.3
	I0731 16:42:32.925352   16404 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.6
	I0731 16:42:32.927167   16404 addons.go:431] installing /etc/kubernetes/addons/registry-rc.yaml
	I0731 16:42:32.927185   16404 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I0731 16:42:32.927204   16404 main.go:141] libmachine: (addons-190022) Calling .GetSSHHostname
	I0731 16:42:32.929242   16404 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35889
	I0731 16:42:32.929720   16404 main.go:141] libmachine: () Calling .GetVersion
	I0731 16:42:32.930344   16404 main.go:141] libmachine: Using API Version  1
	I0731 16:42:32.930363   16404 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 16:42:32.930775   16404 main.go:141] libmachine: () Calling .GetMachineName
	I0731 16:42:32.931003   16404 main.go:141] libmachine: (addons-190022) Calling .GetState
	I0731 16:42:32.931623   16404 main.go:141] libmachine: (addons-190022) DBG | domain addons-190022 has defined MAC address 52:54:00:b8:3c:34 in network mk-addons-190022
	I0731 16:42:32.931708   16404 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38647
	I0731 16:42:32.932250   16404 main.go:141] libmachine: () Calling .GetVersion
	I0731 16:42:32.932798   16404 main.go:141] libmachine: (addons-190022) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b8:3c:34", ip: ""} in network mk-addons-190022: {Iface:virbr1 ExpiryTime:2024-07-31 17:41:52 +0000 UTC Type:0 Mac:52:54:00:b8:3c:34 Iaid: IPaddr:192.168.39.140 Prefix:24 Hostname:addons-190022 Clientid:01:52:54:00:b8:3c:34}
	I0731 16:42:32.932827   16404 main.go:141] libmachine: (addons-190022) DBG | domain addons-190022 has defined IP address 192.168.39.140 and MAC address 52:54:00:b8:3c:34 in network mk-addons-190022
	I0731 16:42:32.932983   16404 main.go:141] libmachine: Using API Version  1
	I0731 16:42:32.933004   16404 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 16:42:32.933010   16404 main.go:141] libmachine: (addons-190022) Calling .DriverName
	I0731 16:42:32.933064   16404 main.go:141] libmachine: (addons-190022) Calling .GetSSHPort
	I0731 16:42:32.933339   16404 main.go:141] libmachine: (addons-190022) Calling .GetSSHKeyPath
	I0731 16:42:32.933382   16404 main.go:141] libmachine: () Calling .GetMachineName
	I0731 16:42:32.933476   16404 main.go:141] libmachine: (addons-190022) Calling .GetSSHUsername
	I0731 16:42:32.933580   16404 main.go:141] libmachine: (addons-190022) Calling .GetState
	I0731 16:42:32.933638   16404 sshutil.go:53] new ssh client: &{IP:192.168.39.140 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19349-8084/.minikube/machines/addons-190022/id_rsa Username:docker}
	I0731 16:42:32.934633   16404 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.1
	I0731 16:42:32.935567   16404 main.go:141] libmachine: (addons-190022) Calling .DriverName
	I0731 16:42:32.935926   16404 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0731 16:42:32.935954   16404 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0731 16:42:32.935971   16404 main.go:141] libmachine: (addons-190022) Calling .GetSSHHostname
	I0731 16:42:32.937018   16404 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.1
	I0731 16:42:32.938570   16404 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.11.1
	I0731 16:42:32.938937   16404 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35187
	I0731 16:42:32.939305   16404 main.go:141] libmachine: (addons-190022) DBG | domain addons-190022 has defined MAC address 52:54:00:b8:3c:34 in network mk-addons-190022
	I0731 16:42:32.939367   16404 main.go:141] libmachine: () Calling .GetVersion
	I0731 16:42:32.939858   16404 main.go:141] libmachine: Using API Version  1
	I0731 16:42:32.939870   16404 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 16:42:32.940239   16404 main.go:141] libmachine: () Calling .GetMachineName
	I0731 16:42:32.940387   16404 main.go:141] libmachine: (addons-190022) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b8:3c:34", ip: ""} in network mk-addons-190022: {Iface:virbr1 ExpiryTime:2024-07-31 17:41:52 +0000 UTC Type:0 Mac:52:54:00:b8:3c:34 Iaid: IPaddr:192.168.39.140 Prefix:24 Hostname:addons-190022 Clientid:01:52:54:00:b8:3c:34}
	I0731 16:42:32.940412   16404 main.go:141] libmachine: (addons-190022) DBG | domain addons-190022 has defined IP address 192.168.39.140 and MAC address 52:54:00:b8:3c:34 in network mk-addons-190022
	I0731 16:42:32.940548   16404 main.go:141] libmachine: (addons-190022) Calling .DriverName
	I0731 16:42:32.940614   16404 main.go:141] libmachine: (addons-190022) Calling .GetSSHPort
	I0731 16:42:32.940864   16404 main.go:141] libmachine: (addons-190022) Calling .GetSSHKeyPath
	I0731 16:42:32.941076   16404 main.go:141] libmachine: (addons-190022) Calling .GetSSHUsername
	I0731 16:42:32.941177   16404 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.1
	I0731 16:42:32.941197   16404 sshutil.go:53] new ssh client: &{IP:192.168.39.140 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19349-8084/.minikube/machines/addons-190022/id_rsa Username:docker}
	I0731 16:42:32.942601   16404 addons.go:431] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0731 16:42:32.942623   16404 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I0731 16:42:32.942640   16404 main.go:141] libmachine: (addons-190022) Calling .GetSSHHostname
	I0731 16:42:32.944447   16404 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37341
	I0731 16:42:32.944727   16404 main.go:141] libmachine: () Calling .GetVersion
	I0731 16:42:32.945122   16404 main.go:141] libmachine: Using API Version  1
	I0731 16:42:32.945138   16404 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 16:42:32.945454   16404 main.go:141] libmachine: () Calling .GetMachineName
	I0731 16:42:32.945635   16404 main.go:141] libmachine: (addons-190022) Calling .GetState
	I0731 16:42:32.946962   16404 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35137
	I0731 16:42:32.947298   16404 main.go:141] libmachine: () Calling .GetVersion
	I0731 16:42:32.947630   16404 main.go:141] libmachine: Using API Version  1
	I0731 16:42:32.947641   16404 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 16:42:32.948054   16404 main.go:141] libmachine: () Calling .GetMachineName
	I0731 16:42:32.948192   16404 main.go:141] libmachine: (addons-190022) Calling .GetState
	I0731 16:42:32.950329   16404 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-190022"
	I0731 16:42:32.950359   16404 host.go:66] Checking if "addons-190022" exists ...
	I0731 16:42:32.950585   16404 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 16:42:32.950607   16404 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 16:42:32.950749   16404 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40065
	I0731 16:42:32.951560   16404 main.go:141] libmachine: (addons-190022) DBG | domain addons-190022 has defined MAC address 52:54:00:b8:3c:34 in network mk-addons-190022
	I0731 16:42:32.951586   16404 main.go:141] libmachine: () Calling .GetVersion
	I0731 16:42:32.951650   16404 main.go:141] libmachine: (addons-190022) Calling .DriverName
	I0731 16:42:32.951945   16404 main.go:141] libmachine: (addons-190022) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b8:3c:34", ip: ""} in network mk-addons-190022: {Iface:virbr1 ExpiryTime:2024-07-31 17:41:52 +0000 UTC Type:0 Mac:52:54:00:b8:3c:34 Iaid: IPaddr:192.168.39.140 Prefix:24 Hostname:addons-190022 Clientid:01:52:54:00:b8:3c:34}
	I0731 16:42:32.951961   16404 main.go:141] libmachine: (addons-190022) DBG | domain addons-190022 has defined IP address 192.168.39.140 and MAC address 52:54:00:b8:3c:34 in network mk-addons-190022
	I0731 16:42:32.952152   16404 main.go:141] libmachine: Using API Version  1
	I0731 16:42:32.952161   16404 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 16:42:32.952309   16404 main.go:141] libmachine: (addons-190022) Calling .GetSSHPort
	I0731 16:42:32.952453   16404 main.go:141] libmachine: () Calling .GetMachineName
	I0731 16:42:32.952501   16404 main.go:141] libmachine: (addons-190022) Calling .GetSSHKeyPath
	I0731 16:42:32.952622   16404 main.go:141] libmachine: (addons-190022) Calling .GetSSHUsername
	I0731 16:42:32.952660   16404 main.go:141] libmachine: (addons-190022) Calling .GetState
	I0731 16:42:32.952711   16404 sshutil.go:53] new ssh client: &{IP:192.168.39.140 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19349-8084/.minikube/machines/addons-190022/id_rsa Username:docker}
	I0731 16:42:32.953354   16404 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.5
	I0731 16:42:32.953942   16404 main.go:141] libmachine: (addons-190022) Calling .DriverName
	I0731 16:42:32.954933   16404 addons.go:431] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0731 16:42:32.954949   16404 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0731 16:42:32.954966   16404 main.go:141] libmachine: (addons-190022) Calling .GetSSHHostname
	I0731 16:42:32.955790   16404 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0731 16:42:32.956942   16404 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0731 16:42:32.956956   16404 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0731 16:42:32.956973   16404 main.go:141] libmachine: (addons-190022) Calling .GetSSHHostname
	I0731 16:42:32.958772   16404 main.go:141] libmachine: (addons-190022) DBG | domain addons-190022 has defined MAC address 52:54:00:b8:3c:34 in network mk-addons-190022
	I0731 16:42:32.959224   16404 main.go:141] libmachine: (addons-190022) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b8:3c:34", ip: ""} in network mk-addons-190022: {Iface:virbr1 ExpiryTime:2024-07-31 17:41:52 +0000 UTC Type:0 Mac:52:54:00:b8:3c:34 Iaid: IPaddr:192.168.39.140 Prefix:24 Hostname:addons-190022 Clientid:01:52:54:00:b8:3c:34}
	I0731 16:42:32.959244   16404 main.go:141] libmachine: (addons-190022) DBG | domain addons-190022 has defined IP address 192.168.39.140 and MAC address 52:54:00:b8:3c:34 in network mk-addons-190022
	I0731 16:42:32.959392   16404 main.go:141] libmachine: (addons-190022) Calling .GetSSHPort
	I0731 16:42:32.959555   16404 main.go:141] libmachine: (addons-190022) Calling .GetSSHKeyPath
	I0731 16:42:32.959701   16404 main.go:141] libmachine: (addons-190022) Calling .GetSSHUsername
	I0731 16:42:32.959808   16404 sshutil.go:53] new ssh client: &{IP:192.168.39.140 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19349-8084/.minikube/machines/addons-190022/id_rsa Username:docker}
	I0731 16:42:32.960936   16404 main.go:141] libmachine: (addons-190022) DBG | domain addons-190022 has defined MAC address 52:54:00:b8:3c:34 in network mk-addons-190022
	I0731 16:42:32.961295   16404 main.go:141] libmachine: (addons-190022) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b8:3c:34", ip: ""} in network mk-addons-190022: {Iface:virbr1 ExpiryTime:2024-07-31 17:41:52 +0000 UTC Type:0 Mac:52:54:00:b8:3c:34 Iaid: IPaddr:192.168.39.140 Prefix:24 Hostname:addons-190022 Clientid:01:52:54:00:b8:3c:34}
	I0731 16:42:32.961314   16404 main.go:141] libmachine: (addons-190022) DBG | domain addons-190022 has defined IP address 192.168.39.140 and MAC address 52:54:00:b8:3c:34 in network mk-addons-190022
	I0731 16:42:32.961559   16404 main.go:141] libmachine: (addons-190022) Calling .GetSSHPort
	I0731 16:42:32.961625   16404 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39231
	I0731 16:42:32.961763   16404 main.go:141] libmachine: (addons-190022) Calling .GetSSHKeyPath
	I0731 16:42:32.961871   16404 main.go:141] libmachine: (addons-190022) Calling .GetSSHUsername
	I0731 16:42:32.961933   16404 main.go:141] libmachine: () Calling .GetVersion
	I0731 16:42:32.962028   16404 sshutil.go:53] new ssh client: &{IP:192.168.39.140 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19349-8084/.minikube/machines/addons-190022/id_rsa Username:docker}
	I0731 16:42:32.962392   16404 main.go:141] libmachine: Using API Version  1
	I0731 16:42:32.962416   16404 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 16:42:32.962906   16404 main.go:141] libmachine: () Calling .GetMachineName
	I0731 16:42:32.963090   16404 main.go:141] libmachine: (addons-190022) Calling .GetState
	I0731 16:42:32.966429   16404 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37857
	I0731 16:42:32.966559   16404 main.go:141] libmachine: (addons-190022) Calling .DriverName
	I0731 16:42:32.967075   16404 main.go:141] libmachine: () Calling .GetVersion
	I0731 16:42:32.967581   16404 main.go:141] libmachine: Using API Version  1
	I0731 16:42:32.967604   16404 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 16:42:32.967992   16404 main.go:141] libmachine: () Calling .GetMachineName
	I0731 16:42:32.968211   16404 main.go:141] libmachine: (addons-190022) Calling .GetState
	I0731 16:42:32.968852   16404 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0731 16:42:32.969698   16404 main.go:141] libmachine: (addons-190022) Calling .DriverName
	I0731 16:42:32.971274   16404 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.30.0
	I0731 16:42:32.971283   16404 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0731 16:42:32.972580   16404 addons.go:431] installing /etc/kubernetes/addons/ig-namespace.yaml
	I0731 16:42:32.972597   16404 ssh_runner.go:362] scp inspektor-gadget/ig-namespace.yaml --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I0731 16:42:32.972621   16404 main.go:141] libmachine: (addons-190022) Calling .GetSSHHostname
	I0731 16:42:32.974187   16404 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0731 16:42:32.974890   16404 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46221
	I0731 16:42:32.976123   16404 main.go:141] libmachine: (addons-190022) DBG | domain addons-190022 has defined MAC address 52:54:00:b8:3c:34 in network mk-addons-190022
	I0731 16:42:32.976508   16404 main.go:141] libmachine: (addons-190022) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b8:3c:34", ip: ""} in network mk-addons-190022: {Iface:virbr1 ExpiryTime:2024-07-31 17:41:52 +0000 UTC Type:0 Mac:52:54:00:b8:3c:34 Iaid: IPaddr:192.168.39.140 Prefix:24 Hostname:addons-190022 Clientid:01:52:54:00:b8:3c:34}
	I0731 16:42:32.976531   16404 main.go:141] libmachine: (addons-190022) DBG | domain addons-190022 has defined IP address 192.168.39.140 and MAC address 52:54:00:b8:3c:34 in network mk-addons-190022
	I0731 16:42:32.976802   16404 main.go:141] libmachine: (addons-190022) Calling .GetSSHPort
	I0731 16:42:32.976940   16404 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0731 16:42:32.976982   16404 main.go:141] libmachine: (addons-190022) Calling .GetSSHKeyPath
	I0731 16:42:32.977133   16404 main.go:141] libmachine: (addons-190022) Calling .GetSSHUsername
	I0731 16:42:32.977268   16404 sshutil.go:53] new ssh client: &{IP:192.168.39.140 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19349-8084/.minikube/machines/addons-190022/id_rsa Username:docker}
	I0731 16:42:32.979211   16404 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0731 16:42:32.979654   16404 main.go:141] libmachine: () Calling .GetVersion
	I0731 16:42:32.979701   16404 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35989
	I0731 16:42:32.980046   16404 main.go:141] libmachine: () Calling .GetVersion
	I0731 16:42:32.980249   16404 main.go:141] libmachine: Using API Version  1
	I0731 16:42:32.980269   16404 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 16:42:32.980638   16404 main.go:141] libmachine: () Calling .GetMachineName
	I0731 16:42:32.980790   16404 main.go:141] libmachine: Using API Version  1
	I0731 16:42:32.980809   16404 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 16:42:32.981255   16404 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 16:42:32.981281   16404 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 16:42:32.981440   16404 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40659
	I0731 16:42:32.981577   16404 main.go:141] libmachine: () Calling .GetMachineName
	I0731 16:42:32.981648   16404 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0731 16:42:32.981874   16404 main.go:141] libmachine: (addons-190022) Calling .GetState
	I0731 16:42:32.981933   16404 main.go:141] libmachine: () Calling .GetVersion
	I0731 16:42:32.982190   16404 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45857
	I0731 16:42:32.982388   16404 main.go:141] libmachine: Using API Version  1
	I0731 16:42:32.982403   16404 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 16:42:32.982442   16404 main.go:141] libmachine: () Calling .GetVersion
	I0731 16:42:32.982697   16404 main.go:141] libmachine: () Calling .GetMachineName
	I0731 16:42:32.983134   16404 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 16:42:32.983169   16404 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 16:42:32.983282   16404 main.go:141] libmachine: Using API Version  1
	I0731 16:42:32.983301   16404 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 16:42:32.983706   16404 main.go:141] libmachine: () Calling .GetMachineName
	I0731 16:42:32.983986   16404 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0731 16:42:32.984109   16404 main.go:141] libmachine: (addons-190022) Calling .GetState
	I0731 16:42:32.984136   16404 main.go:141] libmachine: (addons-190022) Calling .DriverName
	I0731 16:42:32.985892   16404 main.go:141] libmachine: (addons-190022) Calling .DriverName
	I0731 16:42:32.986094   16404 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0731 16:42:32.986146   16404 main.go:141] libmachine: Making call to close driver server
	I0731 16:42:32.986159   16404 main.go:141] libmachine: (addons-190022) Calling .Close
	I0731 16:42:32.986182   16404 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.22
	I0731 16:42:32.986370   16404 main.go:141] libmachine: Successfully made call to close driver server
	I0731 16:42:32.986386   16404 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 16:42:32.986395   16404 main.go:141] libmachine: Making call to close driver server
	I0731 16:42:32.986406   16404 main.go:141] libmachine: (addons-190022) Calling .Close
	I0731 16:42:32.986607   16404 main.go:141] libmachine: Successfully made call to close driver server
	I0731 16:42:32.986625   16404 main.go:141] libmachine: Making call to close connection to plugin binary
	W0731 16:42:32.986691   16404 out.go:239] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I0731 16:42:32.987376   16404 addons.go:431] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0731 16:42:32.987394   16404 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0731 16:42:32.987414   16404 main.go:141] libmachine: (addons-190022) Calling .GetSSHHostname
	I0731 16:42:32.987596   16404 addons.go:431] installing /etc/kubernetes/addons/deployment.yaml
	I0731 16:42:32.987608   16404 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0731 16:42:32.987623   16404 main.go:141] libmachine: (addons-190022) Calling .GetSSHHostname
	I0731 16:42:32.991533   16404 main.go:141] libmachine: (addons-190022) DBG | domain addons-190022 has defined MAC address 52:54:00:b8:3c:34 in network mk-addons-190022
	I0731 16:42:32.991976   16404 main.go:141] libmachine: (addons-190022) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b8:3c:34", ip: ""} in network mk-addons-190022: {Iface:virbr1 ExpiryTime:2024-07-31 17:41:52 +0000 UTC Type:0 Mac:52:54:00:b8:3c:34 Iaid: IPaddr:192.168.39.140 Prefix:24 Hostname:addons-190022 Clientid:01:52:54:00:b8:3c:34}
	I0731 16:42:32.991995   16404 main.go:141] libmachine: (addons-190022) DBG | domain addons-190022 has defined IP address 192.168.39.140 and MAC address 52:54:00:b8:3c:34 in network mk-addons-190022
	I0731 16:42:32.992200   16404 main.go:141] libmachine: (addons-190022) Calling .GetSSHPort
	I0731 16:42:32.992261   16404 main.go:141] libmachine: (addons-190022) DBG | domain addons-190022 has defined MAC address 52:54:00:b8:3c:34 in network mk-addons-190022
	I0731 16:42:32.992418   16404 main.go:141] libmachine: (addons-190022) Calling .GetSSHKeyPath
	I0731 16:42:32.992658   16404 main.go:141] libmachine: (addons-190022) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b8:3c:34", ip: ""} in network mk-addons-190022: {Iface:virbr1 ExpiryTime:2024-07-31 17:41:52 +0000 UTC Type:0 Mac:52:54:00:b8:3c:34 Iaid: IPaddr:192.168.39.140 Prefix:24 Hostname:addons-190022 Clientid:01:52:54:00:b8:3c:34}
	I0731 16:42:32.992683   16404 main.go:141] libmachine: (addons-190022) DBG | domain addons-190022 has defined IP address 192.168.39.140 and MAC address 52:54:00:b8:3c:34 in network mk-addons-190022
	I0731 16:42:32.992702   16404 main.go:141] libmachine: (addons-190022) Calling .GetSSHUsername
	I0731 16:42:32.992735   16404 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33793
	I0731 16:42:32.992868   16404 main.go:141] libmachine: (addons-190022) Calling .GetSSHPort
	I0731 16:42:32.993133   16404 sshutil.go:53] new ssh client: &{IP:192.168.39.140 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19349-8084/.minikube/machines/addons-190022/id_rsa Username:docker}
	I0731 16:42:32.993337   16404 main.go:141] libmachine: () Calling .GetVersion
	I0731 16:42:32.993388   16404 main.go:141] libmachine: (addons-190022) Calling .GetSSHKeyPath
	I0731 16:42:32.993573   16404 main.go:141] libmachine: (addons-190022) Calling .GetSSHUsername
	I0731 16:42:32.993730   16404 sshutil.go:53] new ssh client: &{IP:192.168.39.140 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19349-8084/.minikube/machines/addons-190022/id_rsa Username:docker}
	I0731 16:42:32.993975   16404 main.go:141] libmachine: Using API Version  1
	I0731 16:42:32.993984   16404 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 16:42:32.994296   16404 main.go:141] libmachine: () Calling .GetMachineName
	I0731 16:42:32.994514   16404 main.go:141] libmachine: (addons-190022) Calling .GetState
	I0731 16:42:32.996304   16404 main.go:141] libmachine: (addons-190022) Calling .DriverName
	I0731 16:42:32.996883   16404 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40521
	I0731 16:42:32.997223   16404 main.go:141] libmachine: () Calling .GetVersion
	I0731 16:42:32.997626   16404 main.go:141] libmachine: Using API Version  1
	I0731 16:42:32.997637   16404 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 16:42:32.998130   16404 main.go:141] libmachine: () Calling .GetMachineName
	I0731 16:42:32.998520   16404 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 16:42:32.998558   16404 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 16:42:32.998685   16404 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.3
	I0731 16:42:32.999986   16404 addons.go:431] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0731 16:42:33.000003   16404 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0731 16:42:33.000020   16404 main.go:141] libmachine: (addons-190022) Calling .GetSSHHostname
	I0731 16:42:33.000592   16404 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37877
	I0731 16:42:33.000955   16404 main.go:141] libmachine: () Calling .GetVersion
	I0731 16:42:33.001354   16404 main.go:141] libmachine: Using API Version  1
	I0731 16:42:33.001371   16404 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 16:42:33.001715   16404 main.go:141] libmachine: () Calling .GetMachineName
	I0731 16:42:33.001882   16404 main.go:141] libmachine: (addons-190022) Calling .GetState
	I0731 16:42:33.003837   16404 main.go:141] libmachine: (addons-190022) Calling .DriverName
	I0731 16:42:33.003977   16404 main.go:141] libmachine: (addons-190022) DBG | domain addons-190022 has defined MAC address 52:54:00:b8:3c:34 in network mk-addons-190022
	I0731 16:42:33.004436   16404 main.go:141] libmachine: (addons-190022) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b8:3c:34", ip: ""} in network mk-addons-190022: {Iface:virbr1 ExpiryTime:2024-07-31 17:41:52 +0000 UTC Type:0 Mac:52:54:00:b8:3c:34 Iaid: IPaddr:192.168.39.140 Prefix:24 Hostname:addons-190022 Clientid:01:52:54:00:b8:3c:34}
	I0731 16:42:33.004456   16404 main.go:141] libmachine: (addons-190022) DBG | domain addons-190022 has defined IP address 192.168.39.140 and MAC address 52:54:00:b8:3c:34 in network mk-addons-190022
	I0731 16:42:33.004660   16404 main.go:141] libmachine: (addons-190022) Calling .GetSSHPort
	I0731 16:42:33.004882   16404 main.go:141] libmachine: (addons-190022) Calling .GetSSHKeyPath
	I0731 16:42:33.005062   16404 main.go:141] libmachine: (addons-190022) Calling .GetSSHUsername
	I0731 16:42:33.005161   16404 sshutil.go:53] new ssh client: &{IP:192.168.39.140 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19349-8084/.minikube/machines/addons-190022/id_rsa Username:docker}
	I0731 16:42:33.005845   16404 out.go:177]   - Using image ghcr.io/helm/tiller:v2.17.0
	I0731 16:42:33.007017   16404 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33591
	I0731 16:42:33.007234   16404 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-dp.yaml
	I0731 16:42:33.007249   16404 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-dp.yaml (2422 bytes)
	I0731 16:42:33.007266   16404 main.go:141] libmachine: (addons-190022) Calling .GetSSHHostname
	I0731 16:42:33.007336   16404 main.go:141] libmachine: () Calling .GetVersion
	I0731 16:42:33.007904   16404 main.go:141] libmachine: Using API Version  1
	I0731 16:42:33.007921   16404 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 16:42:33.008218   16404 main.go:141] libmachine: () Calling .GetMachineName
	I0731 16:42:33.008436   16404 main.go:141] libmachine: (addons-190022) Calling .GetState
	I0731 16:42:33.010028   16404 main.go:141] libmachine: (addons-190022) Calling .DriverName
	I0731 16:42:33.010245   16404 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0731 16:42:33.010258   16404 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0731 16:42:33.010272   16404 main.go:141] libmachine: (addons-190022) Calling .GetSSHHostname
	I0731 16:42:33.010369   16404 main.go:141] libmachine: (addons-190022) DBG | domain addons-190022 has defined MAC address 52:54:00:b8:3c:34 in network mk-addons-190022
	I0731 16:42:33.010918   16404 main.go:141] libmachine: (addons-190022) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b8:3c:34", ip: ""} in network mk-addons-190022: {Iface:virbr1 ExpiryTime:2024-07-31 17:41:52 +0000 UTC Type:0 Mac:52:54:00:b8:3c:34 Iaid: IPaddr:192.168.39.140 Prefix:24 Hostname:addons-190022 Clientid:01:52:54:00:b8:3c:34}
	I0731 16:42:33.010962   16404 main.go:141] libmachine: (addons-190022) DBG | domain addons-190022 has defined IP address 192.168.39.140 and MAC address 52:54:00:b8:3c:34 in network mk-addons-190022
	I0731 16:42:33.011070   16404 main.go:141] libmachine: (addons-190022) Calling .GetSSHPort
	I0731 16:42:33.011357   16404 main.go:141] libmachine: (addons-190022) Calling .GetSSHKeyPath
	I0731 16:42:33.011567   16404 main.go:141] libmachine: (addons-190022) Calling .GetSSHUsername
	I0731 16:42:33.011676   16404 sshutil.go:53] new ssh client: &{IP:192.168.39.140 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19349-8084/.minikube/machines/addons-190022/id_rsa Username:docker}
	I0731 16:42:33.012961   16404 main.go:141] libmachine: (addons-190022) DBG | domain addons-190022 has defined MAC address 52:54:00:b8:3c:34 in network mk-addons-190022
	I0731 16:42:33.013243   16404 main.go:141] libmachine: (addons-190022) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b8:3c:34", ip: ""} in network mk-addons-190022: {Iface:virbr1 ExpiryTime:2024-07-31 17:41:52 +0000 UTC Type:0 Mac:52:54:00:b8:3c:34 Iaid: IPaddr:192.168.39.140 Prefix:24 Hostname:addons-190022 Clientid:01:52:54:00:b8:3c:34}
	I0731 16:42:33.013263   16404 main.go:141] libmachine: (addons-190022) DBG | domain addons-190022 has defined IP address 192.168.39.140 and MAC address 52:54:00:b8:3c:34 in network mk-addons-190022
	I0731 16:42:33.013390   16404 main.go:141] libmachine: (addons-190022) Calling .GetSSHPort
	I0731 16:42:33.013567   16404 main.go:141] libmachine: (addons-190022) Calling .GetSSHKeyPath
	I0731 16:42:33.013673   16404 main.go:141] libmachine: (addons-190022) Calling .GetSSHUsername
	I0731 16:42:33.013846   16404 sshutil.go:53] new ssh client: &{IP:192.168.39.140 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19349-8084/.minikube/machines/addons-190022/id_rsa Username:docker}
	W0731 16:42:33.015841   16404 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:44040->192.168.39.140:22: read: connection reset by peer
	I0731 16:42:33.015864   16404 retry.go:31] will retry after 247.440863ms: ssh: handshake failed: read tcp 192.168.39.1:44040->192.168.39.140:22: read: connection reset by peer
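The two lines above show the sshutil/retry pattern in this log: a transient SSH handshake failure is recorded and the dial is simply retried after a short delay. As a rough illustration of that pattern only (a minimal standalone sketch, not minikube's actual retry.go or sshutil code; the address and initial delay are taken from the log, the helper name is hypothetical):

    package main

    import (
        "fmt"
        "net"
        "time"
    )

    // dialWithRetry retries a TCP dial to addr, doubling the delay after each
    // failed attempt, in the spirit of the "will retry after ..." log entries.
    func dialWithRetry(addr string, attempts int, delay time.Duration) (net.Conn, error) {
        var lastErr error
        for i := 0; i < attempts; i++ {
            conn, err := net.DialTimeout("tcp", addr, 5*time.Second)
            if err == nil {
                return conn, nil
            }
            lastErr = err
            time.Sleep(delay)
            delay *= 2
        }
        return nil, fmt.Errorf("dial %s failed after %d attempts: %w", addr, attempts, lastErr)
    }

    func main() {
        conn, err := dialWithRetry("192.168.39.140:22", 5, 250*time.Millisecond)
        if err != nil {
            fmt.Println(err)
            return
        }
        conn.Close()
    }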
	I0731 16:42:33.020324   16404 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36003
	I0731 16:42:33.020732   16404 main.go:141] libmachine: () Calling .GetVersion
	I0731 16:42:33.021427   16404 main.go:141] libmachine: Using API Version  1
	I0731 16:42:33.021449   16404 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 16:42:33.021743   16404 main.go:141] libmachine: () Calling .GetMachineName
	I0731 16:42:33.021914   16404 main.go:141] libmachine: (addons-190022) Calling .GetState
	I0731 16:42:33.023331   16404 main.go:141] libmachine: (addons-190022) Calling .DriverName
	I0731 16:42:33.025027   16404 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0731 16:42:33.026340   16404 out.go:177]   - Using image docker.io/busybox:stable
	I0731 16:42:33.027567   16404 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0731 16:42:33.027587   16404 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0731 16:42:33.027607   16404 main.go:141] libmachine: (addons-190022) Calling .GetSSHHostname
	I0731 16:42:33.030807   16404 main.go:141] libmachine: (addons-190022) DBG | domain addons-190022 has defined MAC address 52:54:00:b8:3c:34 in network mk-addons-190022
	I0731 16:42:33.030893   16404 main.go:141] libmachine: (addons-190022) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b8:3c:34", ip: ""} in network mk-addons-190022: {Iface:virbr1 ExpiryTime:2024-07-31 17:41:52 +0000 UTC Type:0 Mac:52:54:00:b8:3c:34 Iaid: IPaddr:192.168.39.140 Prefix:24 Hostname:addons-190022 Clientid:01:52:54:00:b8:3c:34}
	I0731 16:42:33.030922   16404 main.go:141] libmachine: (addons-190022) DBG | domain addons-190022 has defined IP address 192.168.39.140 and MAC address 52:54:00:b8:3c:34 in network mk-addons-190022
	I0731 16:42:33.031091   16404 main.go:141] libmachine: (addons-190022) Calling .GetSSHPort
	I0731 16:42:33.031278   16404 main.go:141] libmachine: (addons-190022) Calling .GetSSHKeyPath
	I0731 16:42:33.031446   16404 main.go:141] libmachine: (addons-190022) Calling .GetSSHUsername
	I0731 16:42:33.031583   16404 sshutil.go:53] new ssh client: &{IP:192.168.39.140 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19349-8084/.minikube/machines/addons-190022/id_rsa Username:docker}
	I0731 16:42:33.265615   16404 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0731 16:42:33.332382   16404 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0731 16:42:33.332405   16404 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0731 16:42:33.339833   16404 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0731 16:42:33.339854   16404 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0731 16:42:33.349998   16404 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0731 16:42:33.353012   16404 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0731 16:42:33.367606   16404 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0731 16:42:33.367622   16404 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0731 16:42:33.378529   16404 addons.go:431] installing /etc/kubernetes/addons/registry-svc.yaml
	I0731 16:42:33.378546   16404 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0731 16:42:33.396639   16404 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-rbac.yaml
	I0731 16:42:33.396656   16404 ssh_runner.go:362] scp helm-tiller/helm-tiller-rbac.yaml --> /etc/kubernetes/addons/helm-tiller-rbac.yaml (1188 bytes)
	I0731 16:42:33.426308   16404 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0731 16:42:33.438016   16404 addons.go:431] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0731 16:42:33.438038   16404 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0731 16:42:33.441270   16404 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0731 16:42:33.465712   16404 addons.go:431] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0731 16:42:33.465735   16404 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0731 16:42:33.471037   16404 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0731 16:42:33.471056   16404 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0731 16:42:33.493278   16404 addons.go:431] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0731 16:42:33.493298   16404 ssh_runner.go:362] scp inspektor-gadget/ig-serviceaccount.yaml --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I0731 16:42:33.553308   16404 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0731 16:42:33.594530   16404 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0731 16:42:33.594562   16404 ssh_runner.go:362] scp helm-tiller/helm-tiller-svc.yaml --> /etc/kubernetes/addons/helm-tiller-svc.yaml (951 bytes)
	I0731 16:42:33.606409   16404 addons.go:431] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0731 16:42:33.606428   16404 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0731 16:42:33.628927   16404 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0731 16:42:33.628949   16404 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0731 16:42:33.632034   16404 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0731 16:42:33.632055   16404 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0731 16:42:33.648116   16404 addons.go:431] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0731 16:42:33.648139   16404 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0731 16:42:33.669676   16404 addons.go:431] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0731 16:42:33.669699   16404 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0731 16:42:33.696925   16404 addons.go:431] installing /etc/kubernetes/addons/ig-role.yaml
	I0731 16:42:33.696946   16404 ssh_runner.go:362] scp inspektor-gadget/ig-role.yaml --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I0731 16:42:33.736844   16404 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0731 16:42:33.770027   16404 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0731 16:42:33.787306   16404 addons.go:431] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0731 16:42:33.787334   16404 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0731 16:42:33.818265   16404 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0731 16:42:33.826483   16404 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0731 16:42:33.826508   16404 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0731 16:42:33.927896   16404 addons.go:431] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I0731 16:42:33.927923   16404 ssh_runner.go:362] scp inspektor-gadget/ig-rolebinding.yaml --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I0731 16:42:33.932677   16404 addons.go:431] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0731 16:42:33.932697   16404 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0731 16:42:33.944186   16404 addons.go:431] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0731 16:42:33.944208   16404 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0731 16:42:33.999770   16404 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0731 16:42:34.032781   16404 addons.go:431] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0731 16:42:34.032812   16404 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0731 16:42:34.104115   16404 addons.go:431] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0731 16:42:34.104147   16404 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0731 16:42:34.135637   16404 addons.go:431] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0731 16:42:34.135665   16404 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0731 16:42:34.157568   16404 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I0731 16:42:34.157594   16404 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrole.yaml --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I0731 16:42:34.271258   16404 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0731 16:42:34.274541   16404 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0731 16:42:34.392662   16404 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0731 16:42:34.392694   16404 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrolebinding.yaml --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I0731 16:42:34.419055   16404 addons.go:431] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0731 16:42:34.419080   16404 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0731 16:42:34.669547   16404 addons.go:431] installing /etc/kubernetes/addons/ig-crd.yaml
	I0731 16:42:34.669573   16404 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I0731 16:42:34.759785   16404 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0731 16:42:34.759807   16404 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0731 16:42:34.983707   16404 addons.go:431] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I0731 16:42:34.983726   16404 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7735 bytes)
	I0731 16:42:34.996570   16404 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0731 16:42:34.996594   16404 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0731 16:42:35.046379   16404 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (1.780727917s)
	I0731 16:42:35.046437   16404 main.go:141] libmachine: Making call to close driver server
	I0731 16:42:35.046436   16404 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (1.714001981s)
	I0731 16:42:35.046451   16404 main.go:141] libmachine: (addons-190022) Calling .Close
	I0731 16:42:35.046769   16404 main.go:141] libmachine: (addons-190022) DBG | Closing plugin on server side
	I0731 16:42:35.046830   16404 main.go:141] libmachine: Successfully made call to close driver server
	I0731 16:42:35.046843   16404 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 16:42:35.046857   16404 main.go:141] libmachine: Making call to close driver server
	I0731 16:42:35.046868   16404 main.go:141] libmachine: (addons-190022) Calling .Close
	I0731 16:42:35.047119   16404 main.go:141] libmachine: Successfully made call to close driver server
	I0731 16:42:35.047136   16404 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 16:42:35.047424   16404 node_ready.go:35] waiting up to 6m0s for node "addons-190022" to be "Ready" ...
	I0731 16:42:35.054837   16404 node_ready.go:49] node "addons-190022" has status "Ready":"True"
	I0731 16:42:35.054861   16404 node_ready.go:38] duration metric: took 7.413255ms for node "addons-190022" to be "Ready" ...
	I0731 16:42:35.054871   16404 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0731 16:42:35.116799   16404 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I0731 16:42:35.124411   16404 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-5xsd7" in "kube-system" namespace to be "Ready" ...
	I0731 16:42:35.332733   16404 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0731 16:42:35.332763   16404 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0731 16:42:35.540351   16404 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0731 16:42:35.540379   16404 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0731 16:42:35.858012   16404 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (2.525576296s)
	I0731 16:42:35.858053   16404 start.go:971] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
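The "host record injected" line is the result of the sed pipeline started at 16:42:33.332405: it edits the coredns ConfigMap so that host.minikube.internal resolves to the host-side bridge address (192.168.39.1), inserting a hosts block ahead of the forward plugin and a log directive ahead of errors. Reconstructed from the sed expression in that command, the injected stanza in the Corefile looks like:

        hosts {
           192.168.39.1 host.minikube.internal
           fallthrough
        }

with a bare "log" line added just before the existing "errors" directive.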
	I0731 16:42:35.999386   16404 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0731 16:42:35.999414   16404 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0731 16:42:36.332290   16404 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0731 16:42:36.369383   16404 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-190022" context rescaled to 1 replicas
	I0731 16:42:37.294609   16404 pod_ready.go:102] pod "coredns-7db6d8ff4d-5xsd7" in "kube-system" namespace has status "Ready":"False"
	I0731 16:42:39.132206   16404 pod_ready.go:92] pod "coredns-7db6d8ff4d-5xsd7" in "kube-system" namespace has status "Ready":"True"
	I0731 16:42:39.132229   16404 pod_ready.go:81] duration metric: took 4.007792447s for pod "coredns-7db6d8ff4d-5xsd7" in "kube-system" namespace to be "Ready" ...
	I0731 16:42:39.132238   16404 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-dvscb" in "kube-system" namespace to be "Ready" ...
	I0731 16:42:39.143992   16404 pod_ready.go:92] pod "coredns-7db6d8ff4d-dvscb" in "kube-system" namespace has status "Ready":"True"
	I0731 16:42:39.144010   16404 pod_ready.go:81] duration metric: took 11.767028ms for pod "coredns-7db6d8ff4d-dvscb" in "kube-system" namespace to be "Ready" ...
	I0731 16:42:39.144018   16404 pod_ready.go:78] waiting up to 6m0s for pod "etcd-addons-190022" in "kube-system" namespace to be "Ready" ...
	I0731 16:42:39.151975   16404 pod_ready.go:92] pod "etcd-addons-190022" in "kube-system" namespace has status "Ready":"True"
	I0731 16:42:39.152004   16404 pod_ready.go:81] duration metric: took 7.978626ms for pod "etcd-addons-190022" in "kube-system" namespace to be "Ready" ...
	I0731 16:42:39.152016   16404 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-addons-190022" in "kube-system" namespace to be "Ready" ...
	I0731 16:42:39.162470   16404 pod_ready.go:92] pod "kube-apiserver-addons-190022" in "kube-system" namespace has status "Ready":"True"
	I0731 16:42:39.162495   16404 pod_ready.go:81] duration metric: took 10.471623ms for pod "kube-apiserver-addons-190022" in "kube-system" namespace to be "Ready" ...
	I0731 16:42:39.162507   16404 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-addons-190022" in "kube-system" namespace to be "Ready" ...
	I0731 16:42:39.172680   16404 pod_ready.go:92] pod "kube-controller-manager-addons-190022" in "kube-system" namespace has status "Ready":"True"
	I0731 16:42:39.172708   16404 pod_ready.go:81] duration metric: took 10.192055ms for pod "kube-controller-manager-addons-190022" in "kube-system" namespace to be "Ready" ...
	I0731 16:42:39.172721   16404 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-p46dc" in "kube-system" namespace to be "Ready" ...
	I0731 16:42:39.532905   16404 pod_ready.go:92] pod "kube-proxy-p46dc" in "kube-system" namespace has status "Ready":"True"
	I0731 16:42:39.532940   16404 pod_ready.go:81] duration metric: took 360.210191ms for pod "kube-proxy-p46dc" in "kube-system" namespace to be "Ready" ...
	I0731 16:42:39.532954   16404 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-addons-190022" in "kube-system" namespace to be "Ready" ...
	I0731 16:42:39.949713   16404 pod_ready.go:92] pod "kube-scheduler-addons-190022" in "kube-system" namespace has status "Ready":"True"
	I0731 16:42:39.949738   16404 pod_ready.go:81] duration metric: took 416.77646ms for pod "kube-scheduler-addons-190022" in "kube-system" namespace to be "Ready" ...
	I0731 16:42:39.949748   16404 pod_ready.go:78] waiting up to 6m0s for pod "nvidia-device-plugin-daemonset-zcd67" in "kube-system" namespace to be "Ready" ...
	I0731 16:42:39.961358   16404 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0731 16:42:39.961402   16404 main.go:141] libmachine: (addons-190022) Calling .GetSSHHostname
	I0731 16:42:39.964531   16404 main.go:141] libmachine: (addons-190022) DBG | domain addons-190022 has defined MAC address 52:54:00:b8:3c:34 in network mk-addons-190022
	I0731 16:42:39.964965   16404 main.go:141] libmachine: (addons-190022) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b8:3c:34", ip: ""} in network mk-addons-190022: {Iface:virbr1 ExpiryTime:2024-07-31 17:41:52 +0000 UTC Type:0 Mac:52:54:00:b8:3c:34 Iaid: IPaddr:192.168.39.140 Prefix:24 Hostname:addons-190022 Clientid:01:52:54:00:b8:3c:34}
	I0731 16:42:39.965027   16404 main.go:141] libmachine: (addons-190022) DBG | domain addons-190022 has defined IP address 192.168.39.140 and MAC address 52:54:00:b8:3c:34 in network mk-addons-190022
	I0731 16:42:39.965180   16404 main.go:141] libmachine: (addons-190022) Calling .GetSSHPort
	I0731 16:42:39.965376   16404 main.go:141] libmachine: (addons-190022) Calling .GetSSHKeyPath
	I0731 16:42:39.965578   16404 main.go:141] libmachine: (addons-190022) Calling .GetSSHUsername
	I0731 16:42:39.965711   16404 sshutil.go:53] new ssh client: &{IP:192.168.39.140 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19349-8084/.minikube/machines/addons-190022/id_rsa Username:docker}
	I0731 16:42:40.180353   16404 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0731 16:42:40.235783   16404 addons.go:234] Setting addon gcp-auth=true in "addons-190022"
	I0731 16:42:40.235827   16404 host.go:66] Checking if "addons-190022" exists ...
	I0731 16:42:40.236154   16404 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 16:42:40.236183   16404 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 16:42:40.252038   16404 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37795
	I0731 16:42:40.252490   16404 main.go:141] libmachine: () Calling .GetVersion
	I0731 16:42:40.252967   16404 main.go:141] libmachine: Using API Version  1
	I0731 16:42:40.252988   16404 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 16:42:40.253281   16404 main.go:141] libmachine: () Calling .GetMachineName
	I0731 16:42:40.253725   16404 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 16:42:40.253753   16404 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 16:42:40.268706   16404 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37685
	I0731 16:42:40.269067   16404 main.go:141] libmachine: () Calling .GetVersion
	I0731 16:42:40.269508   16404 main.go:141] libmachine: Using API Version  1
	I0731 16:42:40.269527   16404 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 16:42:40.269862   16404 main.go:141] libmachine: () Calling .GetMachineName
	I0731 16:42:40.270109   16404 main.go:141] libmachine: (addons-190022) Calling .GetState
	I0731 16:42:40.271777   16404 main.go:141] libmachine: (addons-190022) Calling .DriverName
	I0731 16:42:40.271995   16404 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0731 16:42:40.272021   16404 main.go:141] libmachine: (addons-190022) Calling .GetSSHHostname
	I0731 16:42:40.274395   16404 main.go:141] libmachine: (addons-190022) DBG | domain addons-190022 has defined MAC address 52:54:00:b8:3c:34 in network mk-addons-190022
	I0731 16:42:40.274736   16404 main.go:141] libmachine: (addons-190022) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b8:3c:34", ip: ""} in network mk-addons-190022: {Iface:virbr1 ExpiryTime:2024-07-31 17:41:52 +0000 UTC Type:0 Mac:52:54:00:b8:3c:34 Iaid: IPaddr:192.168.39.140 Prefix:24 Hostname:addons-190022 Clientid:01:52:54:00:b8:3c:34}
	I0731 16:42:40.274758   16404 main.go:141] libmachine: (addons-190022) DBG | domain addons-190022 has defined IP address 192.168.39.140 and MAC address 52:54:00:b8:3c:34 in network mk-addons-190022
	I0731 16:42:40.274918   16404 main.go:141] libmachine: (addons-190022) Calling .GetSSHPort
	I0731 16:42:40.275082   16404 main.go:141] libmachine: (addons-190022) Calling .GetSSHKeyPath
	I0731 16:42:40.275298   16404 main.go:141] libmachine: (addons-190022) Calling .GetSSHUsername
	I0731 16:42:40.275461   16404 sshutil.go:53] new ssh client: &{IP:192.168.39.140 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19349-8084/.minikube/machines/addons-190022/id_rsa Username:docker}
	I0731 16:42:41.236374   16404 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (7.88634157s)
	I0731 16:42:41.236419   16404 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (7.8833839s)
	I0731 16:42:41.236432   16404 main.go:141] libmachine: Making call to close driver server
	I0731 16:42:41.236448   16404 main.go:141] libmachine: (addons-190022) Calling .Close
	I0731 16:42:41.236451   16404 main.go:141] libmachine: Making call to close driver server
	I0731 16:42:41.236462   16404 main.go:141] libmachine: (addons-190022) Calling .Close
	I0731 16:42:41.236561   16404 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (7.810214787s)
	I0731 16:42:41.236619   16404 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (7.795327908s)
	I0731 16:42:41.236641   16404 main.go:141] libmachine: Making call to close driver server
	I0731 16:42:41.236655   16404 main.go:141] libmachine: (addons-190022) Calling .Close
	I0731 16:42:41.236684   16404 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (7.683348457s)
	I0731 16:42:41.236709   16404 main.go:141] libmachine: Making call to close driver server
	I0731 16:42:41.236722   16404 main.go:141] libmachine: (addons-190022) Calling .Close
	I0731 16:42:41.236728   16404 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (7.499860616s)
	I0731 16:42:41.236741   16404 main.go:141] libmachine: Making call to close driver server
	I0731 16:42:41.236748   16404 main.go:141] libmachine: (addons-190022) Calling .Close
	I0731 16:42:41.236797   16404 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml: (7.466732589s)
	I0731 16:42:41.236804   16404 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (7.418514128s)
	I0731 16:42:41.236820   16404 main.go:141] libmachine: Making call to close driver server
	I0731 16:42:41.236823   16404 main.go:141] libmachine: Making call to close driver server
	I0731 16:42:41.236830   16404 main.go:141] libmachine: (addons-190022) Calling .Close
	I0731 16:42:41.236834   16404 main.go:141] libmachine: (addons-190022) Calling .Close
	I0731 16:42:41.236917   16404 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (7.237119422s)
	I0731 16:42:41.236933   16404 main.go:141] libmachine: Making call to close driver server
	I0731 16:42:41.236949   16404 main.go:141] libmachine: (addons-190022) Calling .Close
	I0731 16:42:41.237025   16404 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (6.965738174s)
	I0731 16:42:41.237052   16404 main.go:141] libmachine: Making call to close driver server
	I0731 16:42:41.237062   16404 main.go:141] libmachine: (addons-190022) Calling .Close
	I0731 16:42:41.237164   16404 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (6.962592174s)
	W0731 16:42:41.237208   16404 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0731 16:42:41.237242   16404 retry.go:31] will retry after 213.271537ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
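The failure above is a CRD establishment race: the VolumeSnapshot CRDs and a VolumeSnapshotClass are applied in the same kubectl invocation, so the class fails with "no matches for kind VolumeSnapshotClass" until the API server begins serving the new types, and minikube's answer is simply to retry the whole apply after ~213ms. A hedged sketch of the alternative ordering (apply the CRDs, wait for them to be Established, then apply the class; the file paths mirror the log, the helper is illustrative and not minikube code):

    package main

    import (
        "fmt"
        "os/exec"
    )

    // run shells out to kubectl and surfaces its combined output on error.
    func run(args ...string) error {
        out, err := exec.Command("kubectl", args...).CombinedOutput()
        if err != nil {
            return fmt.Errorf("kubectl %v: %v\n%s", args, err, out)
        }
        return nil
    }

    func main() {
        // Apply the CRD definitions first.
        if err := run("apply", "-f", "/etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml"); err != nil {
            fmt.Println(err)
            return
        }
        // Wait for the CRD to be served before creating instances of it.
        if err := run("wait", "--for=condition=Established", "--timeout=60s",
            "crd/volumesnapshotclasses.snapshot.storage.k8s.io"); err != nil {
            fmt.Println(err)
            return
        }
        // Now the VolumeSnapshotClass applies without the resource-mapping error.
        if err := run("apply", "-f", "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml"); err != nil {
            fmt.Println(err)
        }
    }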
	I0731 16:42:41.237306   16404 main.go:141] libmachine: (addons-190022) DBG | Closing plugin on server side
	I0731 16:42:41.237331   16404 main.go:141] libmachine: Successfully made call to close driver server
	I0731 16:42:41.237344   16404 main.go:141] libmachine: (addons-190022) DBG | Closing plugin on server side
	I0731 16:42:41.237348   16404 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 16:42:41.237360   16404 main.go:141] libmachine: (addons-190022) DBG | Closing plugin on server side
	I0731 16:42:41.237363   16404 main.go:141] libmachine: Making call to close driver server
	I0731 16:42:41.237369   16404 main.go:141] libmachine: Successfully made call to close driver server
	I0731 16:42:41.237335   16404 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (6.120506774s)
	I0731 16:42:41.237379   16404 main.go:141] libmachine: Successfully made call to close driver server
	I0731 16:42:41.237383   16404 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 16:42:41.237387   16404 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 16:42:41.237373   16404 main.go:141] libmachine: (addons-190022) Calling .Close
	I0731 16:42:41.237393   16404 main.go:141] libmachine: Making call to close driver server
	I0731 16:42:41.237396   16404 main.go:141] libmachine: Making call to close driver server
	I0731 16:42:41.237402   16404 main.go:141] libmachine: (addons-190022) Calling .Close
	I0731 16:42:41.237405   16404 main.go:141] libmachine: (addons-190022) Calling .Close
	I0731 16:42:41.237445   16404 main.go:141] libmachine: (addons-190022) DBG | Closing plugin on server side
	I0731 16:42:41.237453   16404 main.go:141] libmachine: Making call to close driver server
	I0731 16:42:41.237460   16404 main.go:141] libmachine: (addons-190022) Calling .Close
	I0731 16:42:41.237476   16404 main.go:141] libmachine: (addons-190022) DBG | Closing plugin on server side
	I0731 16:42:41.237495   16404 main.go:141] libmachine: Successfully made call to close driver server
	I0731 16:42:41.237500   16404 main.go:141] libmachine: (addons-190022) DBG | Closing plugin on server side
	I0731 16:42:41.237506   16404 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 16:42:41.237514   16404 main.go:141] libmachine: Making call to close driver server
	I0731 16:42:41.237523   16404 main.go:141] libmachine: Successfully made call to close driver server
	I0731 16:42:41.237524   16404 main.go:141] libmachine: Successfully made call to close driver server
	I0731 16:42:41.237530   16404 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 16:42:41.237533   16404 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 16:42:41.237538   16404 main.go:141] libmachine: Making call to close driver server
	I0731 16:42:41.237541   16404 main.go:141] libmachine: Making call to close driver server
	I0731 16:42:41.237545   16404 main.go:141] libmachine: (addons-190022) Calling .Close
	I0731 16:42:41.237549   16404 main.go:141] libmachine: (addons-190022) Calling .Close
	I0731 16:42:41.237582   16404 main.go:141] libmachine: (addons-190022) Calling .Close
	I0731 16:42:41.237896   16404 main.go:141] libmachine: (addons-190022) DBG | Closing plugin on server side
	I0731 16:42:41.237922   16404 main.go:141] libmachine: Successfully made call to close driver server
	I0731 16:42:41.237928   16404 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 16:42:41.237935   16404 main.go:141] libmachine: Making call to close driver server
	I0731 16:42:41.237941   16404 main.go:141] libmachine: (addons-190022) Calling .Close
	I0731 16:42:41.237986   16404 main.go:141] libmachine: (addons-190022) DBG | Closing plugin on server side
	I0731 16:42:41.238003   16404 main.go:141] libmachine: Successfully made call to close driver server
	I0731 16:42:41.238009   16404 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 16:42:41.238594   16404 main.go:141] libmachine: (addons-190022) DBG | Closing plugin on server side
	I0731 16:42:41.238616   16404 main.go:141] libmachine: Successfully made call to close driver server
	I0731 16:42:41.238628   16404 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 16:42:41.238633   16404 main.go:141] libmachine: (addons-190022) DBG | Closing plugin on server side
	I0731 16:42:41.238638   16404 addons.go:475] Verifying addon registry=true in "addons-190022"
	I0731 16:42:41.238654   16404 main.go:141] libmachine: Successfully made call to close driver server
	I0731 16:42:41.238661   16404 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 16:42:41.238862   16404 main.go:141] libmachine: (addons-190022) DBG | Closing plugin on server side
	I0731 16:42:41.238891   16404 main.go:141] libmachine: Successfully made call to close driver server
	I0731 16:42:41.238897   16404 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 16:42:41.238902   16404 main.go:141] libmachine: Making call to close driver server
	I0731 16:42:41.238907   16404 main.go:141] libmachine: (addons-190022) Calling .Close
	I0731 16:42:41.238952   16404 main.go:141] libmachine: (addons-190022) DBG | Closing plugin on server side
	I0731 16:42:41.238971   16404 main.go:141] libmachine: Successfully made call to close driver server
	I0731 16:42:41.238978   16404 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 16:42:41.238986   16404 main.go:141] libmachine: Making call to close driver server
	I0731 16:42:41.238993   16404 main.go:141] libmachine: (addons-190022) Calling .Close
	I0731 16:42:41.239312   16404 main.go:141] libmachine: (addons-190022) DBG | Closing plugin on server side
	I0731 16:42:41.239351   16404 main.go:141] libmachine: Successfully made call to close driver server
	I0731 16:42:41.239358   16404 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 16:42:41.239366   16404 addons.go:475] Verifying addon metrics-server=true in "addons-190022"
	I0731 16:42:41.239399   16404 main.go:141] libmachine: (addons-190022) DBG | Closing plugin on server side
	I0731 16:42:41.239421   16404 main.go:141] libmachine: Successfully made call to close driver server
	I0731 16:42:41.239428   16404 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 16:42:41.239435   16404 main.go:141] libmachine: Making call to close driver server
	I0731 16:42:41.239441   16404 main.go:141] libmachine: (addons-190022) Calling .Close
	I0731 16:42:41.239510   16404 main.go:141] libmachine: Making call to close driver server
	I0731 16:42:41.239515   16404 main.go:141] libmachine: (addons-190022) DBG | Closing plugin on server side
	I0731 16:42:41.239521   16404 main.go:141] libmachine: (addons-190022) Calling .Close
	I0731 16:42:41.239539   16404 main.go:141] libmachine: (addons-190022) DBG | Closing plugin on server side
	I0731 16:42:41.238619   16404 main.go:141] libmachine: Successfully made call to close driver server
	I0731 16:42:41.239562   16404 main.go:141] libmachine: Successfully made call to close driver server
	I0731 16:42:41.239566   16404 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 16:42:41.239570   16404 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 16:42:41.239554   16404 main.go:141] libmachine: Successfully made call to close driver server
	I0731 16:42:41.239603   16404 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 16:42:41.239821   16404 main.go:141] libmachine: (addons-190022) DBG | Closing plugin on server side
	I0731 16:42:41.239851   16404 main.go:141] libmachine: Successfully made call to close driver server
	I0731 16:42:41.239858   16404 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 16:42:41.239866   16404 main.go:141] libmachine: Making call to close driver server
	I0731 16:42:41.239873   16404 main.go:141] libmachine: (addons-190022) Calling .Close
	I0731 16:42:41.240266   16404 main.go:141] libmachine: (addons-190022) DBG | Closing plugin on server side
	I0731 16:42:41.240270   16404 main.go:141] libmachine: Successfully made call to close driver server
	I0731 16:42:41.240282   16404 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 16:42:41.240999   16404 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-190022 service yakd-dashboard -n yakd-dashboard
	
	I0731 16:42:41.241957   16404 main.go:141] libmachine: Successfully made call to close driver server
	I0731 16:42:41.242002   16404 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 16:42:41.242256   16404 main.go:141] libmachine: Successfully made call to close driver server
	I0731 16:42:41.242269   16404 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 16:42:41.242285   16404 main.go:141] libmachine: (addons-190022) DBG | Closing plugin on server side
	I0731 16:42:41.242353   16404 out.go:177] * Verifying registry addon...
	I0731 16:42:41.242450   16404 main.go:141] libmachine: Successfully made call to close driver server
	I0731 16:42:41.242460   16404 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 16:42:41.242467   16404 main.go:141] libmachine: (addons-190022) DBG | Closing plugin on server side
	I0731 16:42:41.242469   16404 addons.go:475] Verifying addon ingress=true in "addons-190022"
	I0731 16:42:41.244407   16404 out.go:177] * Verifying ingress addon...
	I0731 16:42:41.245376   16404 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0731 16:42:41.246244   16404 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0731 16:42:41.255780   16404 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0731 16:42:41.255800   16404 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 16:42:41.275238   16404 main.go:141] libmachine: Making call to close driver server
	I0731 16:42:41.275262   16404 main.go:141] libmachine: (addons-190022) Calling .Close
	I0731 16:42:41.275575   16404 main.go:141] libmachine: Successfully made call to close driver server
	I0731 16:42:41.275594   16404 main.go:141] libmachine: Making call to close connection to plugin binary
	W0731 16:42:41.275746   16404 out.go:239] ! Enabling 'storage-provisioner-rancher' returned an error: running callbacks: [Error making local-path the default storage class: Error while marking storage class local-path as default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "local-path": the object has been modified; please apply your changes to the latest version and try again]
	I0731 16:42:41.281790   16404 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0731 16:42:41.281809   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 16:42:41.287842   16404 main.go:141] libmachine: Making call to close driver server
	I0731 16:42:41.287862   16404 main.go:141] libmachine: (addons-190022) Calling .Close
	I0731 16:42:41.288146   16404 main.go:141] libmachine: Successfully made call to close driver server
	I0731 16:42:41.288160   16404 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 16:42:41.451168   16404 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0731 16:42:41.768840   16404 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 16:42:41.775533   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 16:42:41.926163   16404 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (5.593809768s)
	I0731 16:42:41.926221   16404 main.go:141] libmachine: Making call to close driver server
	I0731 16:42:41.926236   16404 main.go:141] libmachine: (addons-190022) Calling .Close
	I0731 16:42:41.926267   16404 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (1.65424921s)
	I0731 16:42:41.926699   16404 main.go:141] libmachine: Successfully made call to close driver server
	I0731 16:42:41.926714   16404 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 16:42:41.926711   16404 main.go:141] libmachine: (addons-190022) DBG | Closing plugin on server side
	I0731 16:42:41.926730   16404 main.go:141] libmachine: Making call to close driver server
	I0731 16:42:41.926737   16404 main.go:141] libmachine: (addons-190022) Calling .Close
	I0731 16:42:41.926991   16404 main.go:141] libmachine: Successfully made call to close driver server
	I0731 16:42:41.927004   16404 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 16:42:41.927014   16404 addons.go:475] Verifying addon csi-hostpath-driver=true in "addons-190022"
	I0731 16:42:41.927694   16404 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.1
	I0731 16:42:41.928693   16404 out.go:177] * Verifying csi-hostpath-driver addon...
	I0731 16:42:41.930135   16404 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.2
	I0731 16:42:41.930785   16404 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0731 16:42:41.931288   16404 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0731 16:42:41.931303   16404 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0731 16:42:41.955719   16404 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0731 16:42:41.955742   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 16:42:41.962614   16404 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0731 16:42:41.962650   16404 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0731 16:42:42.005865   16404 pod_ready.go:102] pod "nvidia-device-plugin-daemonset-zcd67" in "kube-system" namespace has status "Ready":"False"
	I0731 16:42:42.062379   16404 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0731 16:42:42.062405   16404 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I0731 16:42:42.147012   16404 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0731 16:42:42.251044   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 16:42:42.252677   16404 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 16:42:42.436089   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 16:42:42.750251   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 16:42:42.752506   16404 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 16:42:42.936009   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 16:42:43.250857   16404 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 16:42:43.251607   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 16:42:43.317518   16404 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (1.866290674s)
	I0731 16:42:43.317576   16404 main.go:141] libmachine: Making call to close driver server
	I0731 16:42:43.317592   16404 main.go:141] libmachine: (addons-190022) Calling .Close
	I0731 16:42:43.317894   16404 main.go:141] libmachine: (addons-190022) DBG | Closing plugin on server side
	I0731 16:42:43.317969   16404 main.go:141] libmachine: Successfully made call to close driver server
	I0731 16:42:43.317987   16404 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 16:42:43.317998   16404 main.go:141] libmachine: Making call to close driver server
	I0731 16:42:43.318006   16404 main.go:141] libmachine: (addons-190022) Calling .Close
	I0731 16:42:43.318217   16404 main.go:141] libmachine: Successfully made call to close driver server
	I0731 16:42:43.318231   16404 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 16:42:43.441567   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 16:42:43.600612   16404 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (1.453555314s)
	I0731 16:42:43.600674   16404 main.go:141] libmachine: Making call to close driver server
	I0731 16:42:43.600690   16404 main.go:141] libmachine: (addons-190022) Calling .Close
	I0731 16:42:43.600983   16404 main.go:141] libmachine: Successfully made call to close driver server
	I0731 16:42:43.601001   16404 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 16:42:43.601009   16404 main.go:141] libmachine: Making call to close driver server
	I0731 16:42:43.601016   16404 main.go:141] libmachine: (addons-190022) Calling .Close
	I0731 16:42:43.601016   16404 main.go:141] libmachine: (addons-190022) DBG | Closing plugin on server side
	I0731 16:42:43.601247   16404 main.go:141] libmachine: Successfully made call to close driver server
	I0731 16:42:43.601261   16404 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 16:42:43.603842   16404 addons.go:475] Verifying addon gcp-auth=true in "addons-190022"
	I0731 16:42:43.605166   16404 out.go:177] * Verifying gcp-auth addon...
	I0731 16:42:43.607250   16404 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0731 16:42:43.658930   16404 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0731 16:42:43.658955   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 16:42:43.788743   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 16:42:43.788894   16404 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 16:42:43.936530   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 16:42:44.110910   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 16:42:44.257258   16404 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 16:42:44.258273   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 16:42:44.435851   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 16:42:44.455641   16404 pod_ready.go:102] pod "nvidia-device-plugin-daemonset-zcd67" in "kube-system" namespace has status "Ready":"False"
	I0731 16:42:44.611038   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 16:42:44.750709   16404 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 16:42:44.753059   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 16:42:44.938327   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 16:42:45.110774   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 16:42:45.250401   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 16:42:45.251866   16404 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 16:42:45.438053   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 16:42:45.611223   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 16:42:45.750735   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 16:42:45.752087   16404 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 16:42:45.936659   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 16:42:46.111276   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 16:42:46.250921   16404 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 16:42:46.251030   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 16:42:46.437066   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 16:42:46.456045   16404 pod_ready.go:102] pod "nvidia-device-plugin-daemonset-zcd67" in "kube-system" namespace has status "Ready":"False"
	I0731 16:42:46.611562   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 16:42:46.750204   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 16:42:46.752563   16404 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 16:42:46.936858   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 16:42:47.111048   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 16:42:47.250545   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 16:42:47.252154   16404 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 16:42:47.436410   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 16:42:47.612912   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 16:42:47.978972   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 16:42:47.979035   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 16:42:47.979238   16404 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 16:42:48.111080   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 16:42:48.250638   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 16:42:48.254161   16404 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 16:42:48.436377   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 16:42:48.611514   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 16:42:48.751205   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 16:42:48.751247   16404 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 16:42:48.936909   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 16:42:48.955105   16404 pod_ready.go:102] pod "nvidia-device-plugin-daemonset-zcd67" in "kube-system" namespace has status "Ready":"False"
	I0731 16:42:49.111215   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 16:42:49.250450   16404 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 16:42:49.250705   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 16:42:49.436728   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 16:42:49.611286   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 16:42:49.750085   16404 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 16:42:49.751048   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 16:42:49.937594   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 16:42:50.111079   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 16:42:50.251353   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 16:42:50.251978   16404 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 16:42:50.436400   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 16:42:50.610488   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 16:42:50.751856   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 16:42:50.752387   16404 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 16:42:50.938235   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 16:42:50.956784   16404 pod_ready.go:102] pod "nvidia-device-plugin-daemonset-zcd67" in "kube-system" namespace has status "Ready":"False"
	I0731 16:42:51.110898   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 16:42:51.250794   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 16:42:51.252567   16404 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 16:42:51.436416   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 16:42:51.610917   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 16:42:51.750291   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 16:42:51.751142   16404 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 16:42:51.936999   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 16:42:52.111094   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 16:42:52.249854   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 16:42:52.251984   16404 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 16:42:52.437550   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 16:42:52.611798   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 16:42:52.751359   16404 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 16:42:52.751622   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 16:42:52.936625   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 16:42:53.241084   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 16:42:53.250884   16404 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 16:42:53.251400   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 16:42:53.436491   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 16:42:53.456064   16404 pod_ready.go:102] pod "nvidia-device-plugin-daemonset-zcd67" in "kube-system" namespace has status "Ready":"False"
	I0731 16:42:53.610668   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 16:42:53.750564   16404 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 16:42:53.750594   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 16:42:53.936690   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 16:42:54.111173   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 16:42:54.250884   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 16:42:54.252305   16404 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 16:42:54.435688   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 16:42:54.611062   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 16:42:54.751590   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 16:42:54.751812   16404 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 16:42:54.936610   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 16:42:55.110895   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 16:42:55.250098   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 16:42:55.252753   16404 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 16:42:55.435695   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 16:42:55.611476   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 16:42:55.752843   16404 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 16:42:55.754945   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 16:42:55.940633   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 16:42:55.958710   16404 pod_ready.go:102] pod "nvidia-device-plugin-daemonset-zcd67" in "kube-system" namespace has status "Ready":"False"
	I0731 16:42:56.110954   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 16:42:56.249850   16404 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 16:42:56.250907   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 16:42:56.436266   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 16:42:56.611715   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 16:42:56.750128   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 16:42:56.752470   16404 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 16:42:56.936369   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 16:42:57.115747   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 16:42:57.249925   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 16:42:57.250382   16404 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 16:42:57.439642   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 16:42:57.610702   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 16:42:58.078686   16404 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 16:42:58.079864   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 16:42:58.081640   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 16:42:58.082797   16404 pod_ready.go:102] pod "nvidia-device-plugin-daemonset-zcd67" in "kube-system" namespace has status "Ready":"False"
	I0731 16:42:58.110499   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 16:42:58.250839   16404 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 16:42:58.251292   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 16:42:58.436440   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 16:42:58.610937   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 16:42:58.751998   16404 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 16:42:58.753791   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 16:42:58.941879   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 16:42:59.111353   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 16:42:59.250942   16404 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 16:42:59.251351   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 16:42:59.436048   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 16:42:59.459846   16404 pod_ready.go:92] pod "nvidia-device-plugin-daemonset-zcd67" in "kube-system" namespace has status "Ready":"True"
	I0731 16:42:59.459867   16404 pod_ready.go:81] duration metric: took 19.51011325s for pod "nvidia-device-plugin-daemonset-zcd67" in "kube-system" namespace to be "Ready" ...
	I0731 16:42:59.459875   16404 pod_ready.go:38] duration metric: took 24.404992572s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0731 16:42:59.459889   16404 api_server.go:52] waiting for apiserver process to appear ...
	I0731 16:42:59.459935   16404 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 16:42:59.477699   16404 api_server.go:72] duration metric: took 26.64821314s to wait for apiserver process to appear ...
	I0731 16:42:59.477727   16404 api_server.go:88] waiting for apiserver healthz status ...
	I0731 16:42:59.477751   16404 api_server.go:253] Checking apiserver healthz at https://192.168.39.140:8443/healthz ...
	I0731 16:42:59.481655   16404 api_server.go:279] https://192.168.39.140:8443/healthz returned 200:
	ok
	I0731 16:42:59.482635   16404 api_server.go:141] control plane version: v1.30.3
	I0731 16:42:59.482660   16404 api_server.go:131] duration metric: took 4.926485ms to wait for apiserver health ...
	I0731 16:42:59.482667   16404 system_pods.go:43] waiting for kube-system pods to appear ...
	I0731 16:42:59.491372   16404 system_pods.go:59] 18 kube-system pods found
	I0731 16:42:59.491402   16404 system_pods.go:61] "coredns-7db6d8ff4d-5xsd7" [e335215e-3b35-4219-aebf-5cb36b99f501] Running
	I0731 16:42:59.491414   16404 system_pods.go:61] "csi-hostpath-attacher-0" [04d0345d-a817-4a36-bc46-fbe0548e5155] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0731 16:42:59.491423   16404 system_pods.go:61] "csi-hostpath-resizer-0" [366c8800-8ec9-4594-8d4e-6f9dd2ec2dfa] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0731 16:42:59.491434   16404 system_pods.go:61] "csi-hostpathplugin-t8pzb" [1d031c16-6f7c-45e2-9123-4c71d43ebf7e] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0731 16:42:59.491446   16404 system_pods.go:61] "etcd-addons-190022" [461d2a65-1ad4-4075-9d88-f95cf652e869] Running
	I0731 16:42:59.491453   16404 system_pods.go:61] "kube-apiserver-addons-190022" [858de9d8-e554-4ae0-9fb8-fd08b3e02f0d] Running
	I0731 16:42:59.491461   16404 system_pods.go:61] "kube-controller-manager-addons-190022" [528fd2e2-99a3-41f9-be37-0da13e3b7f85] Running
	I0731 16:42:59.491467   16404 system_pods.go:61] "kube-ingress-dns-minikube" [d54a8e85-afc8-4ad8-84be-3d5e643783f0] Running
	I0731 16:42:59.491473   16404 system_pods.go:61] "kube-proxy-p46dc" [3f47ba8b-8470-4e58-aabc-6cc47f18d726] Running
	I0731 16:42:59.491479   16404 system_pods.go:61] "kube-scheduler-addons-190022" [c3b38992-d228-460e-b578-fa2f0f914052] Running
	I0731 16:42:59.491489   16404 system_pods.go:61] "metrics-server-c59844bb4-j57l6" [4638cda1-728a-48d6-9736-4f6234e9f6c1] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0731 16:42:59.491498   16404 system_pods.go:61] "nvidia-device-plugin-daemonset-zcd67" [f8e78301-23c4-432b-bd96-644d7c9b034e] Running
	I0731 16:42:59.491508   16404 system_pods.go:61] "registry-698f998955-xbtsh" [0beecbd0-f912-410d-b71c-b5c7bb05b1a6] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0731 16:42:59.491517   16404 system_pods.go:61] "registry-proxy-f7tqb" [896d8e3c-67c0-4b9c-bab5-43c46ee24394] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0731 16:42:59.491529   16404 system_pods.go:61] "snapshot-controller-745499f584-rd2bq" [b32b5cc5-ece8-4227-a9fd-6c3f89791c42] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0731 16:42:59.491539   16404 system_pods.go:61] "snapshot-controller-745499f584-s2f9h" [3d3f15af-4f66-4845-9bbb-874f2d6254fd] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0731 16:42:59.491549   16404 system_pods.go:61] "storage-provisioner" [2e3f9681-80f4-4d36-9897-9103dcd23543] Running
	I0731 16:42:59.491557   16404 system_pods.go:61] "tiller-deploy-6677d64bcd-jbrvp" [acce776b-f280-4d5c-85be-c197f74e1f0d] Pending / Ready:ContainersNotReady (containers with unready status: [tiller]) / ContainersReady:ContainersNotReady (containers with unready status: [tiller])
	I0731 16:42:59.491568   16404 system_pods.go:74] duration metric: took 8.894803ms to wait for pod list to return data ...
	I0731 16:42:59.491581   16404 default_sa.go:34] waiting for default service account to be created ...
	I0731 16:42:59.493837   16404 default_sa.go:45] found service account: "default"
	I0731 16:42:59.493854   16404 default_sa.go:55] duration metric: took 2.267272ms for default service account to be created ...
	I0731 16:42:59.493861   16404 system_pods.go:116] waiting for k8s-apps to be running ...
	I0731 16:42:59.501324   16404 system_pods.go:86] 18 kube-system pods found
	I0731 16:42:59.501352   16404 system_pods.go:89] "coredns-7db6d8ff4d-5xsd7" [e335215e-3b35-4219-aebf-5cb36b99f501] Running
	I0731 16:42:59.501359   16404 system_pods.go:89] "csi-hostpath-attacher-0" [04d0345d-a817-4a36-bc46-fbe0548e5155] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0731 16:42:59.501365   16404 system_pods.go:89] "csi-hostpath-resizer-0" [366c8800-8ec9-4594-8d4e-6f9dd2ec2dfa] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0731 16:42:59.501376   16404 system_pods.go:89] "csi-hostpathplugin-t8pzb" [1d031c16-6f7c-45e2-9123-4c71d43ebf7e] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0731 16:42:59.501381   16404 system_pods.go:89] "etcd-addons-190022" [461d2a65-1ad4-4075-9d88-f95cf652e869] Running
	I0731 16:42:59.501386   16404 system_pods.go:89] "kube-apiserver-addons-190022" [858de9d8-e554-4ae0-9fb8-fd08b3e02f0d] Running
	I0731 16:42:59.501392   16404 system_pods.go:89] "kube-controller-manager-addons-190022" [528fd2e2-99a3-41f9-be37-0da13e3b7f85] Running
	I0731 16:42:59.501396   16404 system_pods.go:89] "kube-ingress-dns-minikube" [d54a8e85-afc8-4ad8-84be-3d5e643783f0] Running
	I0731 16:42:59.501400   16404 system_pods.go:89] "kube-proxy-p46dc" [3f47ba8b-8470-4e58-aabc-6cc47f18d726] Running
	I0731 16:42:59.501405   16404 system_pods.go:89] "kube-scheduler-addons-190022" [c3b38992-d228-460e-b578-fa2f0f914052] Running
	I0731 16:42:59.501410   16404 system_pods.go:89] "metrics-server-c59844bb4-j57l6" [4638cda1-728a-48d6-9736-4f6234e9f6c1] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0731 16:42:59.501417   16404 system_pods.go:89] "nvidia-device-plugin-daemonset-zcd67" [f8e78301-23c4-432b-bd96-644d7c9b034e] Running
	I0731 16:42:59.501423   16404 system_pods.go:89] "registry-698f998955-xbtsh" [0beecbd0-f912-410d-b71c-b5c7bb05b1a6] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0731 16:42:59.501431   16404 system_pods.go:89] "registry-proxy-f7tqb" [896d8e3c-67c0-4b9c-bab5-43c46ee24394] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0731 16:42:59.501438   16404 system_pods.go:89] "snapshot-controller-745499f584-rd2bq" [b32b5cc5-ece8-4227-a9fd-6c3f89791c42] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0731 16:42:59.501447   16404 system_pods.go:89] "snapshot-controller-745499f584-s2f9h" [3d3f15af-4f66-4845-9bbb-874f2d6254fd] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0731 16:42:59.501452   16404 system_pods.go:89] "storage-provisioner" [2e3f9681-80f4-4d36-9897-9103dcd23543] Running
	I0731 16:42:59.501459   16404 system_pods.go:89] "tiller-deploy-6677d64bcd-jbrvp" [acce776b-f280-4d5c-85be-c197f74e1f0d] Pending / Ready:ContainersNotReady (containers with unready status: [tiller]) / ContainersReady:ContainersNotReady (containers with unready status: [tiller])
	I0731 16:42:59.501465   16404 system_pods.go:126] duration metric: took 7.599553ms to wait for k8s-apps to be running ...
	I0731 16:42:59.501474   16404 system_svc.go:44] waiting for kubelet service to be running ....
	I0731 16:42:59.501522   16404 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0731 16:42:59.515490   16404 system_svc.go:56] duration metric: took 14.008876ms WaitForService to wait for kubelet
	I0731 16:42:59.515517   16404 kubeadm.go:582] duration metric: took 26.686036262s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0731 16:42:59.515542   16404 node_conditions.go:102] verifying NodePressure condition ...
	I0731 16:42:59.518692   16404 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0731 16:42:59.518711   16404 node_conditions.go:123] node cpu capacity is 2
	I0731 16:42:59.518723   16404 node_conditions.go:105] duration metric: took 3.172728ms to run NodePressure ...
	I0731 16:42:59.518733   16404 start.go:241] waiting for startup goroutines ...
	I0731 16:42:59.611381   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 16:42:59.750451   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 16:42:59.750567   16404 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 16:42:59.937628   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 16:43:00.111365   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 16:43:00.252008   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 16:43:00.252140   16404 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 16:43:00.437554   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 16:43:00.610571   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 16:43:00.750730   16404 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 16:43:00.751243   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 16:43:00.935867   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 16:43:01.111474   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 16:43:01.251191   16404 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 16:43:01.252400   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 16:43:01.436781   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 16:43:01.610899   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 16:43:01.752710   16404 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 16:43:01.753246   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 16:43:01.938067   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 16:43:02.120781   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 16:43:02.251611   16404 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 16:43:02.251916   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 16:43:02.436725   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 16:43:02.623245   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 16:43:02.751047   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 16:43:02.751473   16404 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 16:43:02.936107   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 16:43:03.110927   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 16:43:03.249777   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 16:43:03.250715   16404 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 16:43:03.436299   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 16:43:03.611419   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 16:43:03.752143   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 16:43:03.752333   16404 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 16:43:03.939415   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 16:43:04.111033   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 16:43:04.251896   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 16:43:04.252160   16404 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 16:43:04.437011   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 16:43:04.610412   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 16:43:04.751778   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 16:43:04.752077   16404 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 16:43:04.937139   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 16:43:05.110763   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 16:43:05.250258   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 16:43:05.251870   16404 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 16:43:05.446331   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 16:43:05.611178   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 16:43:05.751942   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 16:43:05.752021   16404 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 16:43:05.937083   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 16:43:06.110296   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 16:43:06.255343   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 16:43:06.255465   16404 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 16:43:06.436095   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 16:43:06.610486   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 16:43:06.751796   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 16:43:06.753975   16404 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 16:43:06.936126   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 16:43:07.110704   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 16:43:07.252853   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 16:43:07.253567   16404 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 16:43:07.436514   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 16:43:07.610479   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 16:43:07.750593   16404 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 16:43:07.750709   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 16:43:07.936396   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 16:43:08.111303   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 16:43:08.250407   16404 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 16:43:08.250820   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 16:43:08.436522   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 16:43:08.611432   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 16:43:08.749760   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 16:43:08.751179   16404 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 16:43:08.936959   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 16:43:09.111386   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 16:43:09.249512   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 16:43:09.251984   16404 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 16:43:09.436582   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 16:43:09.611026   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 16:43:09.750691   16404 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 16:43:09.752121   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 16:43:09.936229   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 16:43:10.110995   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 16:43:10.250282   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 16:43:10.251538   16404 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 16:43:10.436567   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 16:43:10.611012   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 16:43:10.750403   16404 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 16:43:10.751440   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 16:43:10.936254   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 16:43:11.112213   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 16:43:11.250041   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 16:43:11.251371   16404 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 16:43:11.436769   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 16:43:11.611125   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 16:43:11.749970   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 16:43:11.751941   16404 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 16:43:11.936620   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 16:43:12.110946   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 16:43:12.250003   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 16:43:12.251932   16404 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 16:43:12.436347   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 16:43:12.611450   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 16:43:12.752154   16404 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 16:43:12.753050   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 16:43:12.951712   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 16:43:13.110750   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 16:43:13.250068   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 16:43:13.251616   16404 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 16:43:13.435730   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 16:43:13.610933   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 16:43:13.750002   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 16:43:13.751376   16404 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 16:43:13.936661   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 16:43:14.111377   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 16:43:14.252391   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 16:43:14.252715   16404 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 16:43:14.436416   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 16:43:14.610774   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 16:43:14.749669   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 16:43:14.751412   16404 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 16:43:14.935818   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 16:43:15.111037   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 16:43:15.249719   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 16:43:15.251868   16404 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 16:43:15.438042   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 16:43:15.936438   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 16:43:15.937963   16404 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 16:43:15.939699   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 16:43:15.939826   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 16:43:16.110619   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 16:43:16.251514   16404 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 16:43:16.254940   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 16:43:16.436602   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 16:43:16.611088   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 16:43:16.750822   16404 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 16:43:16.751600   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 16:43:16.936020   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 16:43:17.111472   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 16:43:17.257677   16404 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 16:43:17.262742   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 16:43:17.435729   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 16:43:17.610493   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 16:43:17.751055   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 16:43:17.751445   16404 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 16:43:17.945897   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 16:43:18.111460   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 16:43:18.251528   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 16:43:18.251664   16404 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 16:43:18.436434   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 16:43:18.611450   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 16:43:18.751037   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 16:43:18.751282   16404 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 16:43:18.937462   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 16:43:19.111579   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 16:43:19.249679   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 16:43:19.251514   16404 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 16:43:19.436255   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 16:43:19.611442   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 16:43:19.751031   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 16:43:19.751050   16404 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 16:43:19.942081   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 16:43:20.111503   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 16:43:20.252208   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 16:43:20.252940   16404 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 16:43:20.436028   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 16:43:20.612136   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 16:43:20.753174   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 16:43:20.754725   16404 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 16:43:20.936045   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 16:43:21.110865   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 16:43:21.249836   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 16:43:21.251578   16404 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 16:43:21.436696   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 16:43:21.610895   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 16:43:21.749939   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 16:43:21.753532   16404 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 16:43:21.937501   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 16:43:22.110679   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 16:43:22.249664   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 16:43:22.252056   16404 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 16:43:22.437909   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 16:43:22.611624   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 16:43:22.752141   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 16:43:22.752589   16404 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 16:43:22.937685   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 16:43:23.111469   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 16:43:23.250562   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 16:43:23.251743   16404 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 16:43:23.435576   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 16:43:23.618054   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 16:43:23.751268   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 16:43:23.751301   16404 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 16:43:23.936040   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 16:43:24.111311   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 16:43:24.250314   16404 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 16:43:24.250682   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 16:43:24.436695   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 16:43:24.610896   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 16:43:24.749989   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 16:43:24.752189   16404 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 16:43:24.937047   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 16:43:25.110430   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 16:43:25.251318   16404 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 16:43:25.251636   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 16:43:25.437074   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 16:43:25.615305   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 16:43:25.753706   16404 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 16:43:25.753916   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 16:43:25.938328   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 16:43:26.115266   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 16:43:26.254167   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 16:43:26.254342   16404 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 16:43:26.438508   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 16:43:26.615968   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 16:43:26.750170   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 16:43:26.750287   16404 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 16:43:26.935826   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 16:43:27.111083   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 16:43:27.251049   16404 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 16:43:27.251319   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 16:43:27.436540   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 16:43:27.611003   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 16:43:27.750358   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 16:43:27.751479   16404 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 16:43:27.939617   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 16:43:28.111351   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 16:43:28.251260   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 16:43:28.251415   16404 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 16:43:28.436431   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 16:43:28.610717   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 16:43:28.749951   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 16:43:28.751723   16404 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 16:43:28.935777   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 16:43:29.111143   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 16:43:29.250659   16404 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 16:43:29.251498   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 16:43:29.437846   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 16:43:29.611422   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 16:43:29.751565   16404 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 16:43:29.751717   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 16:43:30.075776   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 16:43:30.111398   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 16:43:30.250637   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 16:43:30.253554   16404 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 16:43:30.437507   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 16:43:30.628569   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 16:43:30.752716   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 16:43:30.752939   16404 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 16:43:31.161615   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 16:43:31.163778   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 16:43:31.252393   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 16:43:31.252410   16404 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 16:43:31.436334   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 16:43:31.611452   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 16:43:31.750891   16404 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 16:43:31.752898   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 16:43:31.936812   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 16:43:32.112850   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 16:43:32.250317   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 16:43:32.252971   16404 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 16:43:32.437269   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 16:43:32.612513   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 16:43:32.750203   16404 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 16:43:32.750349   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 16:43:32.936275   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 16:43:33.110892   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 16:43:33.250266   16404 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 16:43:33.250361   16404 kapi.go:107] duration metric: took 52.004987116s to wait for kubernetes.io/minikube-addons=registry ...
	I0731 16:43:33.435896   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 16:43:33.611279   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 16:43:33.752713   16404 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 16:43:33.936571   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 16:43:34.111324   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 16:43:34.250970   16404 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 16:43:34.436286   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 16:43:34.610486   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 16:43:34.750596   16404 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 16:43:34.936585   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 16:43:35.111399   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 16:43:35.250910   16404 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 16:43:35.435880   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 16:43:35.611469   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 16:43:35.749976   16404 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 16:43:35.936910   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 16:43:36.111984   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 16:43:36.250512   16404 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 16:43:36.444408   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 16:43:36.612107   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 16:43:36.750822   16404 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 16:43:36.940064   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 16:43:37.111562   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 16:43:37.250682   16404 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 16:43:37.436343   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 16:43:37.610849   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 16:43:37.750427   16404 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 16:43:37.940907   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 16:43:38.110935   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 16:43:38.250748   16404 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 16:43:38.436456   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 16:43:38.612128   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 16:43:38.819556   16404 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 16:43:38.941190   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 16:43:39.110554   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 16:43:39.250006   16404 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 16:43:39.436475   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 16:43:39.611798   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 16:43:39.750941   16404 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 16:43:39.935333   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 16:43:40.110969   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 16:43:40.250830   16404 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 16:43:40.436900   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 16:43:40.612129   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 16:43:40.750959   16404 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 16:43:40.940977   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 16:43:41.111814   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 16:43:41.250487   16404 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 16:43:41.436095   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 16:43:41.610395   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 16:43:41.750206   16404 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 16:43:41.936742   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 16:43:42.110962   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 16:43:42.251692   16404 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 16:43:42.435622   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 16:43:42.611181   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 16:43:42.751221   16404 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 16:43:42.935997   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 16:43:43.113945   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 16:43:43.250296   16404 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 16:43:43.437101   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 16:43:43.610775   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 16:43:43.750307   16404 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 16:43:43.936638   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 16:43:44.111472   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 16:43:44.249781   16404 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 16:43:44.438175   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 16:43:44.616001   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 16:43:44.751497   16404 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 16:43:44.936191   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 16:43:45.122822   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 16:43:45.261268   16404 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 16:43:45.435851   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 16:43:45.611605   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 16:43:45.751461   16404 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 16:43:45.936293   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 16:43:46.111265   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 16:43:46.253226   16404 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 16:43:46.435755   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 16:43:46.610818   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 16:43:46.750716   16404 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 16:43:46.936443   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 16:43:47.111198   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 16:43:47.250720   16404 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 16:43:47.440688   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 16:43:47.610875   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 16:43:47.752820   16404 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 16:43:47.937230   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 16:43:48.111382   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 16:43:48.251266   16404 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 16:43:48.436672   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 16:43:48.611101   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 16:43:48.750874   16404 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 16:43:48.936091   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 16:43:49.114995   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 16:43:49.251135   16404 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 16:43:49.436376   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 16:43:49.610591   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 16:43:49.755900   16404 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 16:43:49.935780   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 16:43:50.111300   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 16:43:50.251377   16404 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 16:43:50.718151   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 16:43:50.719737   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 16:43:50.750966   16404 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 16:43:50.936152   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 16:43:51.110478   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 16:43:51.250179   16404 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 16:43:51.437056   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 16:43:51.611900   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 16:43:51.752995   16404 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 16:43:51.936374   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 16:43:52.111596   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 16:43:52.251515   16404 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 16:43:52.437238   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 16:43:52.611299   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 16:43:52.751100   16404 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 16:43:52.937154   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 16:43:53.111130   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 16:43:53.325258   16404 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 16:43:53.437413   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 16:43:53.611176   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 16:43:53.751359   16404 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 16:43:53.937251   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 16:43:54.110858   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 16:43:54.252240   16404 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 16:43:54.437420   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 16:43:54.611353   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 16:43:54.751286   16404 kapi.go:107] duration metric: took 1m13.505038604s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0731 16:43:54.944950   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 16:43:55.111622   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 16:43:55.440408   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 16:43:55.611294   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 16:43:55.936704   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 16:43:56.111251   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 16:43:56.437283   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 16:43:56.610546   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 16:43:56.936769   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 16:43:57.111627   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 16:43:57.436727   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 16:43:57.611499   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 16:43:57.936163   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 16:43:58.111060   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 16:43:58.436407   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 16:43:58.615387   16404 kapi.go:107] duration metric: took 1m15.008131623s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0731 16:43:58.616696   16404 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-190022 cluster.
	I0731 16:43:58.617808   16404 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0731 16:43:58.618941   16404 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I0731 16:43:58.937070   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 16:43:59.436751   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 16:43:59.936122   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 16:44:00.436726   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 16:44:00.936519   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 16:44:01.482374   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 16:44:01.937603   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 16:44:02.436096   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 16:44:02.936411   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 16:44:03.436128   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 16:44:03.936386   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 16:44:04.438393   16404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 16:44:04.936389   16404 kapi.go:107] duration metric: took 1m23.005600008s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0731 16:44:04.938087   16404 out.go:177] * Enabled addons: nvidia-device-plugin, helm-tiller, metrics-server, inspektor-gadget, storage-provisioner, ingress-dns, cloud-spanner, yakd, default-storageclass, volumesnapshots, registry, ingress, gcp-auth, csi-hostpath-driver
	I0731 16:44:04.939218   16404 addons.go:510] duration metric: took 1m32.109723075s for enable addons: enabled=[nvidia-device-plugin helm-tiller metrics-server inspektor-gadget storage-provisioner ingress-dns cloud-spanner yakd default-storageclass volumesnapshots registry ingress gcp-auth csi-hostpath-driver]
	I0731 16:44:04.939260   16404 start.go:246] waiting for cluster config update ...
	I0731 16:44:04.939283   16404 start.go:255] writing updated cluster config ...
	I0731 16:44:04.939529   16404 ssh_runner.go:195] Run: rm -f paused
	I0731 16:44:04.990518   16404 start.go:600] kubectl: 1.30.3, cluster: 1.30.3 (minor skew: 0)
	I0731 16:44:04.992260   16404 out.go:177] * Done! kubectl is now configured to use "addons-190022" cluster and "default" namespace by default
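	The gcp-auth messages above describe two options: label a pod so credentials are not mounted into it, or rerun the addon with --refresh so existing pods pick up the mounted credentials. A minimal sketch of both, assuming a throwaway pod name and a "true" label value (the log only names the `gcp-auth-skip-secret` key):
	
	    kubectl --context addons-190022 run skip-gcp-auth-demo --image=busybox --labels=gcp-auth-skip-secret=true -- sleep 3600
	    minikube -p addons-190022 addons enable gcp-auth --refresh
	
	The first command illustrates opting a single (hypothetical) pod out of credential injection; the second re-runs the addon so previously created pods are mounted with credentials, as the log suggests.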
	
	
	==> CRI-O <==
	Jul 31 16:50:34 addons-190022 crio[684]: time="2024-07-31 16:50:34.838582074Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722444634838557360,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:589584,},InodesUsed:&UInt64Value{Value:210,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=634fe1db-41c8-4d33-9d71-fe8eb53e3150 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 31 16:50:34 addons-190022 crio[684]: time="2024-07-31 16:50:34.839044792Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=f925e613-4005-45fa-9e6c-24b8bff46425 name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 16:50:34 addons-190022 crio[684]: time="2024-07-31 16:50:34.839099436Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=f925e613-4005-45fa-9e6c-24b8bff46425 name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 16:50:34 addons-190022 crio[684]: time="2024-07-31 16:50:34.839575458Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:b72204b0c0ed63cb5ba3e6090ed222bdade12c033b2e3f53691e28f6f7aef6b9,PodSandboxId:beedf3034d96440fe3491657f0f664abc0e4488a45d7098d1ec47505f1c3297b,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,State:CONTAINER_RUNNING,CreatedAt:1722444442631510028,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-6778b5fc9f-xlw6l,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 01fab767-1af8-4759-a725-caff0c1428fc,},Annotations:map[string]string{io.kubernetes.container.hash: 132eb6ad,io.kubernetes.container.
ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0765324d0b239089b8e31e17c569e8ca6eca916c178da715b3356863d6e5b08e,PodSandboxId:cfe2fa74adbbad1df1d72ade4e161087a46619bff52b02ab070983322b461d99,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:208b70eefac13ee9be00e486f79c695b15cef861c680527171a27d253d834be9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1ae23480369fa4139f6dec668d7a5a941b56ea174e9cf75e09771988fe621c95,State:CONTAINER_RUNNING,CreatedAt:1722444303518929417,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 3744fa0a-cce8-4eb8-9ae6-27e8475392e6,},Annotations:map[string]string{io.kubernet
es.container.hash: 55f2e5a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:92bd13a40acd241cc986696357382f44b2b86374d42804b30215eebc5d8ea971,PodSandboxId:882ed477f9de5ffe5374fc4225dec00f9c5fb51c8a58edddefd3708ef1302852,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1722444248528960505,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 5e4e17e2-79e1-4a39-86
01-44265c1314a6,},Annotations:map[string]string{io.kubernetes.container.hash: 91997ced,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9b4ec7714ed78c410ef58fb9d24df3a4cb05ade3bd4cf91657da0b63033da347,PodSandboxId:49140dfffc40eb129afc9fd3533c09a10bced535dd56d13034359be5f956dbcd,Metadata:&ContainerMetadata{Name:local-path-provisioner,Attempt:0,},Image:&ImageSpec{Image:docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e16d1e3a1066751ebbb1d00bd843b566c69cddc5bf5f6d00edbc3fcf26a4a6bf,State:CONTAINER_RUNNING,CreatedAt:1722444214462966358,Labels:map[string]string{io.kubernetes.container.name: local-path-provisioner,io.kubernetes.pod.name: local-path-provisioner-8d985888d-9hlt7,io.kubernetes.pod.namesp
ace: local-path-storage,io.kubernetes.pod.uid: b506afd6-0c0a-470e-b1b3-a48ad3f6977d,},Annotations:map[string]string{io.kubernetes.container.hash: 58aac654,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:583dd18f14d371d009203df28c79f7b43401f51f624f86e8785026321eca760b,PodSandboxId:952962996be74f0b20c4732212097ab1cb35e33f5daebe47ba8ed7fd8f5a4ef2,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:31f034feb3f16062e93be7c40efc596553c89de172e2e412e588f02382388872,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a24c7c057ec8730aaa152f77366454835a46dc699fcf243698a622788fd48d62,State:CONTAINER_RUNNING,CreatedAt:1722444205420694074,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metric
s-server-c59844bb4-j57l6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4638cda1-728a-48d6-9736-4f6234e9f6c1,},Annotations:map[string]string{io.kubernetes.container.hash: bc7a47c0,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:689504ce84d77edb6d4fa85ce92f64fe541eb2f4c1bb3dd5402f35a76268d00a,PodSandboxId:0fa0ab46a357457b73d44fabedfc912952f7846d3581c3c7007f54da86d27006,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722444159409456506,Labels:m
ap[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2e3f9681-80f4-4d36-9897-9103dcd23543,},Annotations:map[string]string{io.kubernetes.container.hash: a0cfa204,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bf02a171405c16c5b33884703776484ed4ef25b7a98c921e000897ade7c3f7f5,PodSandboxId:e3c08cf6b87977be5815118cf527ed7673856398de7961f292fb2ca7a27091d5,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722444156494522009,Labels:map[string]string{io.kube
rnetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-5xsd7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e335215e-3b35-4219-aebf-5cb36b99f501,},Annotations:map[string]string{io.kubernetes.container.hash: 95dbeee8,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:71c2430355fdd45d105e4f0a661352d41ef566f3830a43f4c7a29120639b9b24,PodSandboxId:6df6605b88ca96d64f40013b818e348dc291fc090148fdb57b8f8413584cd189,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},
UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722444153834639592,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-p46dc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3f47ba8b-8470-4e58-aabc-6cc47f18d726,},Annotations:map[string]string{io.kubernetes.container.hash: 7061031b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:37fa9519210b38e6c0b8654e3767fa4299a2f1e74f6e8b921b965aa18a752f7d,PodSandboxId:c1c9c7146766f915a3072287e8b12ad11b89b3cae446cdd9d2b5dbdd204faad9,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},I
mageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722444133635200994,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-190022,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: abdcd5276b59307086e39e5a0733e075,},Annotations:map[string]string{io.kubernetes.container.hash: f6eb76e5,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a9196ff47535f2ce157de81e4cff038af22e270b2559341c143d31fee8ef1914,PodSandboxId:953e8051b6a6ca722e0a4a2272ffc53736ad34b7fb4b064740f736bdcfbe1b85,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09
caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722444133630129123,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-190022,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cae04f723b44db246bbd25dc52dcd209,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b6e02749c1e2f10d2e9e617185cc64a68404a6d54853b78cd0a71d5920fe8984,PodSandboxId:4bc06c234387f8877ab21f3854b4d8cc6998c9c371976fbbca4afbbe8d353f86,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466d
d273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722444133640130087,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-190022,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 35e5d3a0b14653fa12e710c67ba3ee39,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4259df48d13e2dfd05141f8bbfbb4340cdeeaf42b3182a0b13fe16cf804ec32b,PodSandboxId:d4a9685db8a6d1e75221757fd611aed144e97c8c55a8e2c3f29940e27c5e166f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4
c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722444133418400683,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-190022,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 241e9b0c677781fd11f1e06d10601799,},Annotations:map[string]string{io.kubernetes.container.hash: 65e98503,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=f925e613-4005-45fa-9e6c-24b8bff46425 name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 16:50:34 addons-190022 crio[684]: time="2024-07-31 16:50:34.875504169Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=5193ef32-9b3e-49b4-a38b-2259cb9b783b name=/runtime.v1.RuntimeService/Version
	Jul 31 16:50:34 addons-190022 crio[684]: time="2024-07-31 16:50:34.875573397Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=5193ef32-9b3e-49b4-a38b-2259cb9b783b name=/runtime.v1.RuntimeService/Version
	Jul 31 16:50:34 addons-190022 crio[684]: time="2024-07-31 16:50:34.876652256Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=bf799c58-4447-4b68-9e59-c3950d1c3963 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 31 16:50:34 addons-190022 crio[684]: time="2024-07-31 16:50:34.877980739Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722444634877953298,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:589584,},InodesUsed:&UInt64Value{Value:210,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=bf799c58-4447-4b68-9e59-c3950d1c3963 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 31 16:50:34 addons-190022 crio[684]: time="2024-07-31 16:50:34.878682894Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=67405249-d37b-4d7e-a818-517f184866c7 name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 16:50:34 addons-190022 crio[684]: time="2024-07-31 16:50:34.878740230Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=67405249-d37b-4d7e-a818-517f184866c7 name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 16:50:34 addons-190022 crio[684]: time="2024-07-31 16:50:34.879173826Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:b72204b0c0ed63cb5ba3e6090ed222bdade12c033b2e3f53691e28f6f7aef6b9,PodSandboxId:beedf3034d96440fe3491657f0f664abc0e4488a45d7098d1ec47505f1c3297b,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,State:CONTAINER_RUNNING,CreatedAt:1722444442631510028,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-6778b5fc9f-xlw6l,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 01fab767-1af8-4759-a725-caff0c1428fc,},Annotations:map[string]string{io.kubernetes.container.hash: 132eb6ad,io.kubernetes.container.
ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0765324d0b239089b8e31e17c569e8ca6eca916c178da715b3356863d6e5b08e,PodSandboxId:cfe2fa74adbbad1df1d72ade4e161087a46619bff52b02ab070983322b461d99,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:208b70eefac13ee9be00e486f79c695b15cef861c680527171a27d253d834be9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1ae23480369fa4139f6dec668d7a5a941b56ea174e9cf75e09771988fe621c95,State:CONTAINER_RUNNING,CreatedAt:1722444303518929417,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 3744fa0a-cce8-4eb8-9ae6-27e8475392e6,},Annotations:map[string]string{io.kubernet
es.container.hash: 55f2e5a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:92bd13a40acd241cc986696357382f44b2b86374d42804b30215eebc5d8ea971,PodSandboxId:882ed477f9de5ffe5374fc4225dec00f9c5fb51c8a58edddefd3708ef1302852,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1722444248528960505,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 5e4e17e2-79e1-4a39-86
01-44265c1314a6,},Annotations:map[string]string{io.kubernetes.container.hash: 91997ced,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9b4ec7714ed78c410ef58fb9d24df3a4cb05ade3bd4cf91657da0b63033da347,PodSandboxId:49140dfffc40eb129afc9fd3533c09a10bced535dd56d13034359be5f956dbcd,Metadata:&ContainerMetadata{Name:local-path-provisioner,Attempt:0,},Image:&ImageSpec{Image:docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e16d1e3a1066751ebbb1d00bd843b566c69cddc5bf5f6d00edbc3fcf26a4a6bf,State:CONTAINER_RUNNING,CreatedAt:1722444214462966358,Labels:map[string]string{io.kubernetes.container.name: local-path-provisioner,io.kubernetes.pod.name: local-path-provisioner-8d985888d-9hlt7,io.kubernetes.pod.namesp
ace: local-path-storage,io.kubernetes.pod.uid: b506afd6-0c0a-470e-b1b3-a48ad3f6977d,},Annotations:map[string]string{io.kubernetes.container.hash: 58aac654,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:583dd18f14d371d009203df28c79f7b43401f51f624f86e8785026321eca760b,PodSandboxId:952962996be74f0b20c4732212097ab1cb35e33f5daebe47ba8ed7fd8f5a4ef2,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:31f034feb3f16062e93be7c40efc596553c89de172e2e412e588f02382388872,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a24c7c057ec8730aaa152f77366454835a46dc699fcf243698a622788fd48d62,State:CONTAINER_RUNNING,CreatedAt:1722444205420694074,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metric
s-server-c59844bb4-j57l6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4638cda1-728a-48d6-9736-4f6234e9f6c1,},Annotations:map[string]string{io.kubernetes.container.hash: bc7a47c0,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:689504ce84d77edb6d4fa85ce92f64fe541eb2f4c1bb3dd5402f35a76268d00a,PodSandboxId:0fa0ab46a357457b73d44fabedfc912952f7846d3581c3c7007f54da86d27006,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722444159409456506,Labels:m
ap[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2e3f9681-80f4-4d36-9897-9103dcd23543,},Annotations:map[string]string{io.kubernetes.container.hash: a0cfa204,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bf02a171405c16c5b33884703776484ed4ef25b7a98c921e000897ade7c3f7f5,PodSandboxId:e3c08cf6b87977be5815118cf527ed7673856398de7961f292fb2ca7a27091d5,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722444156494522009,Labels:map[string]string{io.kube
rnetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-5xsd7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e335215e-3b35-4219-aebf-5cb36b99f501,},Annotations:map[string]string{io.kubernetes.container.hash: 95dbeee8,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:71c2430355fdd45d105e4f0a661352d41ef566f3830a43f4c7a29120639b9b24,PodSandboxId:6df6605b88ca96d64f40013b818e348dc291fc090148fdb57b8f8413584cd189,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},
UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722444153834639592,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-p46dc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3f47ba8b-8470-4e58-aabc-6cc47f18d726,},Annotations:map[string]string{io.kubernetes.container.hash: 7061031b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:37fa9519210b38e6c0b8654e3767fa4299a2f1e74f6e8b921b965aa18a752f7d,PodSandboxId:c1c9c7146766f915a3072287e8b12ad11b89b3cae446cdd9d2b5dbdd204faad9,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},I
mageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722444133635200994,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-190022,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: abdcd5276b59307086e39e5a0733e075,},Annotations:map[string]string{io.kubernetes.container.hash: f6eb76e5,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a9196ff47535f2ce157de81e4cff038af22e270b2559341c143d31fee8ef1914,PodSandboxId:953e8051b6a6ca722e0a4a2272ffc53736ad34b7fb4b064740f736bdcfbe1b85,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09
caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722444133630129123,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-190022,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cae04f723b44db246bbd25dc52dcd209,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b6e02749c1e2f10d2e9e617185cc64a68404a6d54853b78cd0a71d5920fe8984,PodSandboxId:4bc06c234387f8877ab21f3854b4d8cc6998c9c371976fbbca4afbbe8d353f86,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466d
d273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722444133640130087,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-190022,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 35e5d3a0b14653fa12e710c67ba3ee39,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4259df48d13e2dfd05141f8bbfbb4340cdeeaf42b3182a0b13fe16cf804ec32b,PodSandboxId:d4a9685db8a6d1e75221757fd611aed144e97c8c55a8e2c3f29940e27c5e166f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4
c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722444133418400683,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-190022,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 241e9b0c677781fd11f1e06d10601799,},Annotations:map[string]string{io.kubernetes.container.hash: 65e98503,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=67405249-d37b-4d7e-a818-517f184866c7 name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 16:50:34 addons-190022 crio[684]: time="2024-07-31 16:50:34.915206669Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=5b1ae517-67e7-4d0a-b898-d0b4de93992a name=/runtime.v1.RuntimeService/Version
	Jul 31 16:50:34 addons-190022 crio[684]: time="2024-07-31 16:50:34.915338812Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=5b1ae517-67e7-4d0a-b898-d0b4de93992a name=/runtime.v1.RuntimeService/Version
	Jul 31 16:50:34 addons-190022 crio[684]: time="2024-07-31 16:50:34.916653692Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=2fc82b12-ffd8-43b4-893e-ebed3988f942 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 31 16:50:34 addons-190022 crio[684]: time="2024-07-31 16:50:34.918010028Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722444634917983805,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:589584,},InodesUsed:&UInt64Value{Value:210,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=2fc82b12-ffd8-43b4-893e-ebed3988f942 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 31 16:50:34 addons-190022 crio[684]: time="2024-07-31 16:50:34.918670452Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=d7a6b45b-7872-42b0-a32a-c17e5691f758 name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 16:50:34 addons-190022 crio[684]: time="2024-07-31 16:50:34.918733065Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=d7a6b45b-7872-42b0-a32a-c17e5691f758 name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 16:50:34 addons-190022 crio[684]: time="2024-07-31 16:50:34.918985501Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:b72204b0c0ed63cb5ba3e6090ed222bdade12c033b2e3f53691e28f6f7aef6b9,PodSandboxId:beedf3034d96440fe3491657f0f664abc0e4488a45d7098d1ec47505f1c3297b,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,State:CONTAINER_RUNNING,CreatedAt:1722444442631510028,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-6778b5fc9f-xlw6l,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 01fab767-1af8-4759-a725-caff0c1428fc,},Annotations:map[string]string{io.kubernetes.container.hash: 132eb6ad,io.kubernetes.container.
ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0765324d0b239089b8e31e17c569e8ca6eca916c178da715b3356863d6e5b08e,PodSandboxId:cfe2fa74adbbad1df1d72ade4e161087a46619bff52b02ab070983322b461d99,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:208b70eefac13ee9be00e486f79c695b15cef861c680527171a27d253d834be9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1ae23480369fa4139f6dec668d7a5a941b56ea174e9cf75e09771988fe621c95,State:CONTAINER_RUNNING,CreatedAt:1722444303518929417,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 3744fa0a-cce8-4eb8-9ae6-27e8475392e6,},Annotations:map[string]string{io.kubernet
es.container.hash: 55f2e5a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:92bd13a40acd241cc986696357382f44b2b86374d42804b30215eebc5d8ea971,PodSandboxId:882ed477f9de5ffe5374fc4225dec00f9c5fb51c8a58edddefd3708ef1302852,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1722444248528960505,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 5e4e17e2-79e1-4a39-86
01-44265c1314a6,},Annotations:map[string]string{io.kubernetes.container.hash: 91997ced,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9b4ec7714ed78c410ef58fb9d24df3a4cb05ade3bd4cf91657da0b63033da347,PodSandboxId:49140dfffc40eb129afc9fd3533c09a10bced535dd56d13034359be5f956dbcd,Metadata:&ContainerMetadata{Name:local-path-provisioner,Attempt:0,},Image:&ImageSpec{Image:docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e16d1e3a1066751ebbb1d00bd843b566c69cddc5bf5f6d00edbc3fcf26a4a6bf,State:CONTAINER_RUNNING,CreatedAt:1722444214462966358,Labels:map[string]string{io.kubernetes.container.name: local-path-provisioner,io.kubernetes.pod.name: local-path-provisioner-8d985888d-9hlt7,io.kubernetes.pod.namesp
ace: local-path-storage,io.kubernetes.pod.uid: b506afd6-0c0a-470e-b1b3-a48ad3f6977d,},Annotations:map[string]string{io.kubernetes.container.hash: 58aac654,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:583dd18f14d371d009203df28c79f7b43401f51f624f86e8785026321eca760b,PodSandboxId:952962996be74f0b20c4732212097ab1cb35e33f5daebe47ba8ed7fd8f5a4ef2,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:31f034feb3f16062e93be7c40efc596553c89de172e2e412e588f02382388872,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a24c7c057ec8730aaa152f77366454835a46dc699fcf243698a622788fd48d62,State:CONTAINER_RUNNING,CreatedAt:1722444205420694074,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metric
s-server-c59844bb4-j57l6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4638cda1-728a-48d6-9736-4f6234e9f6c1,},Annotations:map[string]string{io.kubernetes.container.hash: bc7a47c0,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:689504ce84d77edb6d4fa85ce92f64fe541eb2f4c1bb3dd5402f35a76268d00a,PodSandboxId:0fa0ab46a357457b73d44fabedfc912952f7846d3581c3c7007f54da86d27006,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722444159409456506,Labels:m
ap[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2e3f9681-80f4-4d36-9897-9103dcd23543,},Annotations:map[string]string{io.kubernetes.container.hash: a0cfa204,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bf02a171405c16c5b33884703776484ed4ef25b7a98c921e000897ade7c3f7f5,PodSandboxId:e3c08cf6b87977be5815118cf527ed7673856398de7961f292fb2ca7a27091d5,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722444156494522009,Labels:map[string]string{io.kube
rnetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-5xsd7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e335215e-3b35-4219-aebf-5cb36b99f501,},Annotations:map[string]string{io.kubernetes.container.hash: 95dbeee8,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:71c2430355fdd45d105e4f0a661352d41ef566f3830a43f4c7a29120639b9b24,PodSandboxId:6df6605b88ca96d64f40013b818e348dc291fc090148fdb57b8f8413584cd189,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},
UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722444153834639592,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-p46dc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3f47ba8b-8470-4e58-aabc-6cc47f18d726,},Annotations:map[string]string{io.kubernetes.container.hash: 7061031b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:37fa9519210b38e6c0b8654e3767fa4299a2f1e74f6e8b921b965aa18a752f7d,PodSandboxId:c1c9c7146766f915a3072287e8b12ad11b89b3cae446cdd9d2b5dbdd204faad9,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},I
mageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722444133635200994,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-190022,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: abdcd5276b59307086e39e5a0733e075,},Annotations:map[string]string{io.kubernetes.container.hash: f6eb76e5,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a9196ff47535f2ce157de81e4cff038af22e270b2559341c143d31fee8ef1914,PodSandboxId:953e8051b6a6ca722e0a4a2272ffc53736ad34b7fb4b064740f736bdcfbe1b85,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09
caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722444133630129123,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-190022,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cae04f723b44db246bbd25dc52dcd209,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b6e02749c1e2f10d2e9e617185cc64a68404a6d54853b78cd0a71d5920fe8984,PodSandboxId:4bc06c234387f8877ab21f3854b4d8cc6998c9c371976fbbca4afbbe8d353f86,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466d
d273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722444133640130087,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-190022,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 35e5d3a0b14653fa12e710c67ba3ee39,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4259df48d13e2dfd05141f8bbfbb4340cdeeaf42b3182a0b13fe16cf804ec32b,PodSandboxId:d4a9685db8a6d1e75221757fd611aed144e97c8c55a8e2c3f29940e27c5e166f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4
c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722444133418400683,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-190022,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 241e9b0c677781fd11f1e06d10601799,},Annotations:map[string]string{io.kubernetes.container.hash: 65e98503,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=d7a6b45b-7872-42b0-a32a-c17e5691f758 name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 16:50:34 addons-190022 crio[684]: time="2024-07-31 16:50:34.952406743Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=b7064e04-5f39-42b0-81c5-c518622f68e9 name=/runtime.v1.RuntimeService/Version
	Jul 31 16:50:34 addons-190022 crio[684]: time="2024-07-31 16:50:34.952480202Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=b7064e04-5f39-42b0-81c5-c518622f68e9 name=/runtime.v1.RuntimeService/Version
	Jul 31 16:50:34 addons-190022 crio[684]: time="2024-07-31 16:50:34.953468136Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=361435b3-3ada-43a9-8571-82011bf8a39c name=/runtime.v1.ImageService/ImageFsInfo
	Jul 31 16:50:34 addons-190022 crio[684]: time="2024-07-31 16:50:34.954811012Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722444634954786581,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:589584,},InodesUsed:&UInt64Value{Value:210,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=361435b3-3ada-43a9-8571-82011bf8a39c name=/runtime.v1.ImageService/ImageFsInfo
	Jul 31 16:50:34 addons-190022 crio[684]: time="2024-07-31 16:50:34.955403158Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=96473eec-6e69-480b-994a-d5a62b73acc5 name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 16:50:34 addons-190022 crio[684]: time="2024-07-31 16:50:34.955470821Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=96473eec-6e69-480b-994a-d5a62b73acc5 name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 16:50:34 addons-190022 crio[684]: time="2024-07-31 16:50:34.956187429Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:b72204b0c0ed63cb5ba3e6090ed222bdade12c033b2e3f53691e28f6f7aef6b9,PodSandboxId:beedf3034d96440fe3491657f0f664abc0e4488a45d7098d1ec47505f1c3297b,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,State:CONTAINER_RUNNING,CreatedAt:1722444442631510028,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-6778b5fc9f-xlw6l,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 01fab767-1af8-4759-a725-caff0c1428fc,},Annotations:map[string]string{io.kubernetes.container.hash: 132eb6ad,io.kubernetes.container.
ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0765324d0b239089b8e31e17c569e8ca6eca916c178da715b3356863d6e5b08e,PodSandboxId:cfe2fa74adbbad1df1d72ade4e161087a46619bff52b02ab070983322b461d99,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:208b70eefac13ee9be00e486f79c695b15cef861c680527171a27d253d834be9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1ae23480369fa4139f6dec668d7a5a941b56ea174e9cf75e09771988fe621c95,State:CONTAINER_RUNNING,CreatedAt:1722444303518929417,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 3744fa0a-cce8-4eb8-9ae6-27e8475392e6,},Annotations:map[string]string{io.kubernet
es.container.hash: 55f2e5a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:92bd13a40acd241cc986696357382f44b2b86374d42804b30215eebc5d8ea971,PodSandboxId:882ed477f9de5ffe5374fc4225dec00f9c5fb51c8a58edddefd3708ef1302852,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1722444248528960505,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 5e4e17e2-79e1-4a39-86
01-44265c1314a6,},Annotations:map[string]string{io.kubernetes.container.hash: 91997ced,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9b4ec7714ed78c410ef58fb9d24df3a4cb05ade3bd4cf91657da0b63033da347,PodSandboxId:49140dfffc40eb129afc9fd3533c09a10bced535dd56d13034359be5f956dbcd,Metadata:&ContainerMetadata{Name:local-path-provisioner,Attempt:0,},Image:&ImageSpec{Image:docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e16d1e3a1066751ebbb1d00bd843b566c69cddc5bf5f6d00edbc3fcf26a4a6bf,State:CONTAINER_RUNNING,CreatedAt:1722444214462966358,Labels:map[string]string{io.kubernetes.container.name: local-path-provisioner,io.kubernetes.pod.name: local-path-provisioner-8d985888d-9hlt7,io.kubernetes.pod.namesp
ace: local-path-storage,io.kubernetes.pod.uid: b506afd6-0c0a-470e-b1b3-a48ad3f6977d,},Annotations:map[string]string{io.kubernetes.container.hash: 58aac654,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:583dd18f14d371d009203df28c79f7b43401f51f624f86e8785026321eca760b,PodSandboxId:952962996be74f0b20c4732212097ab1cb35e33f5daebe47ba8ed7fd8f5a4ef2,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:31f034feb3f16062e93be7c40efc596553c89de172e2e412e588f02382388872,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a24c7c057ec8730aaa152f77366454835a46dc699fcf243698a622788fd48d62,State:CONTAINER_RUNNING,CreatedAt:1722444205420694074,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metric
s-server-c59844bb4-j57l6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4638cda1-728a-48d6-9736-4f6234e9f6c1,},Annotations:map[string]string{io.kubernetes.container.hash: bc7a47c0,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:689504ce84d77edb6d4fa85ce92f64fe541eb2f4c1bb3dd5402f35a76268d00a,PodSandboxId:0fa0ab46a357457b73d44fabedfc912952f7846d3581c3c7007f54da86d27006,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722444159409456506,Labels:m
ap[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2e3f9681-80f4-4d36-9897-9103dcd23543,},Annotations:map[string]string{io.kubernetes.container.hash: a0cfa204,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bf02a171405c16c5b33884703776484ed4ef25b7a98c921e000897ade7c3f7f5,PodSandboxId:e3c08cf6b87977be5815118cf527ed7673856398de7961f292fb2ca7a27091d5,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722444156494522009,Labels:map[string]string{io.kube
rnetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-5xsd7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e335215e-3b35-4219-aebf-5cb36b99f501,},Annotations:map[string]string{io.kubernetes.container.hash: 95dbeee8,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:71c2430355fdd45d105e4f0a661352d41ef566f3830a43f4c7a29120639b9b24,PodSandboxId:6df6605b88ca96d64f40013b818e348dc291fc090148fdb57b8f8413584cd189,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},
UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722444153834639592,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-p46dc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3f47ba8b-8470-4e58-aabc-6cc47f18d726,},Annotations:map[string]string{io.kubernetes.container.hash: 7061031b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:37fa9519210b38e6c0b8654e3767fa4299a2f1e74f6e8b921b965aa18a752f7d,PodSandboxId:c1c9c7146766f915a3072287e8b12ad11b89b3cae446cdd9d2b5dbdd204faad9,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},I
mageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722444133635200994,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-190022,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: abdcd5276b59307086e39e5a0733e075,},Annotations:map[string]string{io.kubernetes.container.hash: f6eb76e5,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a9196ff47535f2ce157de81e4cff038af22e270b2559341c143d31fee8ef1914,PodSandboxId:953e8051b6a6ca722e0a4a2272ffc53736ad34b7fb4b064740f736bdcfbe1b85,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09
caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722444133630129123,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-190022,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cae04f723b44db246bbd25dc52dcd209,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b6e02749c1e2f10d2e9e617185cc64a68404a6d54853b78cd0a71d5920fe8984,PodSandboxId:4bc06c234387f8877ab21f3854b4d8cc6998c9c371976fbbca4afbbe8d353f86,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466d
d273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722444133640130087,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-190022,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 35e5d3a0b14653fa12e710c67ba3ee39,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4259df48d13e2dfd05141f8bbfbb4340cdeeaf42b3182a0b13fe16cf804ec32b,PodSandboxId:d4a9685db8a6d1e75221757fd611aed144e97c8c55a8e2c3f29940e27c5e166f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4
c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722444133418400683,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-190022,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 241e9b0c677781fd11f1e06d10601799,},Annotations:map[string]string{io.kubernetes.container.hash: 65e98503,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=96473eec-6e69-480b-994a-d5a62b73acc5 name=/runtime.v1.RuntimeService/ListContainers
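	The Version, ImageFsInfo, and ListContainers debug entries above are routine CRI polling requests; they show up in the journal only because the runtime is logging at debug level. On a CRI-O node that verbosity is normally controlled by the runtime log level, roughly as follows (a sketch only; file path and TOML section assumed from stock CRI-O packaging, not read from this cluster):

	  # /etc/crio/crio.conf (or a drop-in under /etc/crio/crio.conf.d/)
	  [crio.runtime]
	  log_level = "debug"   # default is "info"; "debug" emits the per-RPC lines seen above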
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                   CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	b72204b0c0ed6       docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6                   3 minutes ago       Running             hello-world-app           0                   beedf3034d964       hello-world-app-6778b5fc9f-xlw6l
	0765324d0b239       docker.io/library/nginx@sha256:208b70eefac13ee9be00e486f79c695b15cef861c680527171a27d253d834be9                         5 minutes ago       Running             nginx                     0                   cfe2fa74adbba       nginx
	92bd13a40acd2       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e                     6 minutes ago       Running             busybox                   0                   882ed477f9de5       busybox
	9b4ec7714ed78       docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef        7 minutes ago       Running             local-path-provisioner    0                   49140dfffc40e       local-path-provisioner-8d985888d-9hlt7
	583dd18f14d37       registry.k8s.io/metrics-server/metrics-server@sha256:31f034feb3f16062e93be7c40efc596553c89de172e2e412e588f02382388872   7 minutes ago       Running             metrics-server            0                   952962996be74       metrics-server-c59844bb4-j57l6
	689504ce84d77       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                        7 minutes ago       Running             storage-provisioner       0                   0fa0ab46a3574       storage-provisioner
	bf02a171405c1       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                                        7 minutes ago       Running             coredns                   0                   e3c08cf6b8797       coredns-7db6d8ff4d-5xsd7
	71c2430355fdd       55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1                                                        8 minutes ago       Running             kube-proxy                0                   6df6605b88ca9       kube-proxy-p46dc
	b6e02749c1e2f       76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e                                                        8 minutes ago       Running             kube-controller-manager   0                   4bc06c234387f       kube-controller-manager-addons-190022
	37fa9519210b3       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                                        8 minutes ago       Running             etcd                      0                   c1c9c7146766f       etcd-addons-190022
	a9196ff47535f       3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2                                                        8 minutes ago       Running             kube-scheduler            0                   953e8051b6a6c       kube-scheduler-addons-190022
	4259df48d13e2       1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d                                                        8 minutes ago       Running             kube-apiserver            0                   d4a9685db8a6d       kube-apiserver-addons-190022
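	The container status table above is the CRI-level view of the node; an equivalent listing can usually be reproduced by hand with crictl from inside the minikube VM (a sketch, assuming the addons-190022 profile is still running and crictl is invoked as root):

	  minikube -p addons-190022 ssh -- sudo crictl ps -a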
	
	
	==> coredns [bf02a171405c16c5b33884703776484ed4ef25b7a98c921e000897ade7c3f7f5] <==
	[INFO] 10.244.0.7:35275 - 2356 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.001253212s
	[INFO] 10.244.0.7:47135 - 24081 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000074895s
	[INFO] 10.244.0.7:47135 - 19723 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.00007575s
	[INFO] 10.244.0.7:45814 - 12542 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.00012956s
	[INFO] 10.244.0.7:45814 - 36093 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000078848s
	[INFO] 10.244.0.7:39930 - 18624 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000100773s
	[INFO] 10.244.0.7:39930 - 22466 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000108035s
	[INFO] 10.244.0.7:58512 - 28535 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000052653s
	[INFO] 10.244.0.7:58512 - 45128 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000063382s
	[INFO] 10.244.0.7:47265 - 25829 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000028845s
	[INFO] 10.244.0.7:47265 - 46824 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000061601s
	[INFO] 10.244.0.7:34028 - 29937 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000026686s
	[INFO] 10.244.0.7:34028 - 22007 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000028446s
	[INFO] 10.244.0.7:38783 - 18194 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000027676s
	[INFO] 10.244.0.7:38783 - 35093 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000063226s
	[INFO] 10.244.0.22:54467 - 29428 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.001089104s
	[INFO] 10.244.0.22:49179 - 8534 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000411106s
	[INFO] 10.244.0.22:42935 - 10254 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000127186s
	[INFO] 10.244.0.22:39706 - 28067 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000104068s
	[INFO] 10.244.0.22:38280 - 13891 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000114332s
	[INFO] 10.244.0.22:49870 - 12013 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000112283s
	[INFO] 10.244.0.22:40432 - 31354 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 610 0.000832301s
	[INFO] 10.244.0.22:38095 - 44901 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.003515395s
	[INFO] 10.244.0.25:60167 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000369871s
	[INFO] 10.244.0.25:48255 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000111281s
	
	
	==> describe nodes <==
	Name:               addons-190022
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-190022
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=1d737dad7efa60c56d30434fcd857dd3b14c91d9
	                    minikube.k8s.io/name=addons-190022
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_07_31T16_42_19_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-190022
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 31 Jul 2024 16:42:16 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-190022
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 31 Jul 2024 16:50:28 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 31 Jul 2024 16:47:55 +0000   Wed, 31 Jul 2024 16:42:14 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 31 Jul 2024 16:47:55 +0000   Wed, 31 Jul 2024 16:42:14 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 31 Jul 2024 16:47:55 +0000   Wed, 31 Jul 2024 16:42:14 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 31 Jul 2024 16:47:55 +0000   Wed, 31 Jul 2024 16:42:19 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.140
	  Hostname:    addons-190022
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912780Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912780Ki
	  pods:               110
	System Info:
	  Machine ID:                 f12e0c90c0b74bb2ab73fd663fb74722
	  System UUID:                f12e0c90-c0b7-4bb2-ab73-fd663fb74722
	  Boot ID:                    3779d878-0e6a-41ae-98eb-58e93b91f1b1
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (12 in total)
	  Namespace                   Name                                      CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                      ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m30s
	  default                     hello-world-app-6778b5fc9f-xlw6l          0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m16s
	  default                     nginx                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m36s
	  kube-system                 coredns-7db6d8ff4d-5xsd7                  100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     8m2s
	  kube-system                 etcd-addons-190022                        100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         8m17s
	  kube-system                 kube-apiserver-addons-190022              250m (12%)    0 (0%)      0 (0%)           0 (0%)         8m17s
	  kube-system                 kube-controller-manager-addons-190022     200m (10%)    0 (0%)      0 (0%)           0 (0%)         8m17s
	  kube-system                 kube-proxy-p46dc                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m3s
	  kube-system                 kube-scheduler-addons-190022              100m (5%)     0 (0%)      0 (0%)           0 (0%)         8m17s
	  kube-system                 metrics-server-c59844bb4-j57l6            100m (5%)     0 (0%)      200Mi (5%)       0 (0%)         7m57s
	  kube-system                 storage-provisioner                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m58s
	  local-path-storage          local-path-provisioner-8d985888d-9hlt7    0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m57s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  0 (0%)
	  memory             370Mi (9%)  170Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 8m     kube-proxy       
	  Normal  Starting                 8m17s  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  8m17s  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  8m17s  kubelet          Node addons-190022 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    8m17s  kubelet          Node addons-190022 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     8m17s  kubelet          Node addons-190022 status is now: NodeHasSufficientPID
	  Normal  NodeReady                8m16s  kubelet          Node addons-190022 status is now: NodeReady
	  Normal  RegisteredNode           8m3s   node-controller  Node addons-190022 event: Registered Node addons-190022 in Controller
	
	
	==> dmesg <==
	[ +14.749703] systemd-fstab-generator[1475]: Ignoring "noauto" option for root device
	[  +0.139032] kauditd_printk_skb: 21 callbacks suppressed
	[  +5.041311] kauditd_printk_skb: 119 callbacks suppressed
	[  +5.057407] kauditd_printk_skb: 150 callbacks suppressed
	[  +5.432897] kauditd_printk_skb: 55 callbacks suppressed
	[Jul31 16:43] kauditd_printk_skb: 2 callbacks suppressed
	[  +6.952538] kauditd_printk_skb: 1 callbacks suppressed
	[  +5.496129] kauditd_printk_skb: 23 callbacks suppressed
	[  +9.020916] kauditd_printk_skb: 4 callbacks suppressed
	[  +6.815173] kauditd_printk_skb: 10 callbacks suppressed
	[  +5.652736] kauditd_printk_skb: 28 callbacks suppressed
	[  +9.292292] kauditd_printk_skb: 85 callbacks suppressed
	[  +5.099346] kauditd_printk_skb: 11 callbacks suppressed
	[Jul31 16:44] kauditd_printk_skb: 2 callbacks suppressed
	[ +14.999978] kauditd_printk_skb: 50 callbacks suppressed
	[ +10.306622] kauditd_printk_skb: 2 callbacks suppressed
	[ +11.639899] kauditd_printk_skb: 29 callbacks suppressed
	[  +5.397971] kauditd_printk_skb: 43 callbacks suppressed
	[  +6.540824] kauditd_printk_skb: 56 callbacks suppressed
	[  +5.806532] kauditd_printk_skb: 22 callbacks suppressed
	[Jul31 16:45] kauditd_printk_skb: 55 callbacks suppressed
	[ +29.802053] kauditd_printk_skb: 7 callbacks suppressed
	[  +8.231383] kauditd_printk_skb: 33 callbacks suppressed
	[Jul31 16:47] kauditd_printk_skb: 6 callbacks suppressed
	[  +5.102617] kauditd_printk_skb: 21 callbacks suppressed
	
	
	==> etcd [37fa9519210b38e6c0b8654e3767fa4299a2f1e74f6e8b921b965aa18a752f7d] <==
	{"level":"warn","ts":"2024-07-31T16:43:50.693816Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-07-31T16:43:50.294997Z","time spent":"396.679149ms","remote":"127.0.0.1:38316","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":541,"response count":0,"response size":40,"request content":"compare:<target:MOD key:\"/registry/leases/kube-node-lease/addons-190022\" mod_revision:1021 > success:<request_put:<key:\"/registry/leases/kube-node-lease/addons-190022\" value_size:487 >> failure:<request_range:<key:\"/registry/leases/kube-node-lease/addons-190022\" > >"}
	{"level":"warn","ts":"2024-07-31T16:43:50.695523Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"221.016475ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/roles/\" range_end:\"/registry/roles0\" count_only:true ","response":"range_response_count:0 size:7"}
	{"level":"info","ts":"2024-07-31T16:43:50.695578Z","caller":"traceutil/trace.go:171","msg":"trace[203305282] range","detail":"{range_begin:/registry/roles/; range_end:/registry/roles0; response_count:0; response_revision:1097; }","duration":"221.101194ms","start":"2024-07-31T16:43:50.474468Z","end":"2024-07-31T16:43:50.69557Z","steps":["trace[203305282] 'agreement among raft nodes before linearized reading'  (duration: 220.977966ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-31T16:43:50.69576Z","caller":"traceutil/trace.go:171","msg":"trace[903938694] transaction","detail":"{read_only:false; response_revision:1097; number_of_response:1; }","duration":"279.79359ms","start":"2024-07-31T16:43:50.415959Z","end":"2024-07-31T16:43:50.695753Z","steps":["trace[903938694] 'process raft request'  (duration: 279.414648ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-31T16:43:50.69592Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"104.928449ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/gcp-auth/\" range_end:\"/registry/pods/gcp-auth0\" ","response":"range_response_count:3 size:11453"}
	{"level":"warn","ts":"2024-07-31T16:43:50.696376Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"221.160402ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/events/gadget/gadget-f5ddz.17e759e646c4e513\" ","response":"range_response_count:1 size:779"}
	{"level":"info","ts":"2024-07-31T16:43:50.696414Z","caller":"traceutil/trace.go:171","msg":"trace[1097566836] range","detail":"{range_begin:/registry/events/gadget/gadget-f5ddz.17e759e646c4e513; range_end:; response_count:1; response_revision:1097; }","duration":"221.220289ms","start":"2024-07-31T16:43:50.475187Z","end":"2024-07-31T16:43:50.696407Z","steps":["trace[1097566836] 'agreement among raft nodes before linearized reading'  (duration: 221.133272ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-31T16:43:50.69666Z","caller":"traceutil/trace.go:171","msg":"trace[1272129351] range","detail":"{range_begin:/registry/pods/gcp-auth/; range_end:/registry/pods/gcp-auth0; response_count:3; response_revision:1097; }","duration":"105.019388ms","start":"2024-07-31T16:43:50.590961Z","end":"2024-07-31T16:43:50.695981Z","steps":["trace[1272129351] 'agreement among raft nodes before linearized reading'  (duration: 104.892665ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-31T16:44:01.459193Z","caller":"traceutil/trace.go:171","msg":"trace[1792784800] transaction","detail":"{read_only:false; response_revision:1155; number_of_response:1; }","duration":"111.773545ms","start":"2024-07-31T16:44:01.347397Z","end":"2024-07-31T16:44:01.45917Z","steps":["trace[1792784800] 'process raft request'  (duration: 111.411046ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-31T16:44:01.82981Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"211.860222ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/leases/kube-system/snapshot-controller-leader\" ","response":"range_response_count:1 size:499"}
	{"level":"info","ts":"2024-07-31T16:44:01.830007Z","caller":"traceutil/trace.go:171","msg":"trace[148798408] range","detail":"{range_begin:/registry/leases/kube-system/snapshot-controller-leader; range_end:; response_count:1; response_revision:1155; }","duration":"212.118848ms","start":"2024-07-31T16:44:01.617875Z","end":"2024-07-31T16:44:01.829994Z","steps":["trace[148798408] 'range keys from in-memory index tree'  (duration: 211.706506ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-31T16:44:04.790297Z","caller":"traceutil/trace.go:171","msg":"trace[231479675] transaction","detail":"{read_only:false; response_revision:1170; number_of_response:1; }","duration":"155.69714ms","start":"2024-07-31T16:44:04.634582Z","end":"2024-07-31T16:44:04.790279Z","steps":["trace[231479675] 'process raft request'  (duration: 155.518894ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-31T16:45:01.717109Z","caller":"traceutil/trace.go:171","msg":"trace[1846756209] linearizableReadLoop","detail":"{readStateIndex:1677; appliedIndex:1676; }","duration":"274.751668ms","start":"2024-07-31T16:45:01.442324Z","end":"2024-07-31T16:45:01.717076Z","steps":["trace[1846756209] 'read index received'  (duration: 274.573584ms)","trace[1846756209] 'applied index is now lower than readState.Index'  (duration: 177.471µs)"],"step_count":2}
	{"level":"info","ts":"2024-07-31T16:45:01.717284Z","caller":"traceutil/trace.go:171","msg":"trace[1667804110] transaction","detail":"{read_only:false; response_revision:1617; number_of_response:1; }","duration":"318.989675ms","start":"2024-07-31T16:45:01.398288Z","end":"2024-07-31T16:45:01.717278Z","steps":["trace[1667804110] 'process raft request'  (duration: 318.670611ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-31T16:45:01.717455Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-07-31T16:45:01.398271Z","time spent":"319.033235ms","remote":"127.0.0.1:38206","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":3802,"response count":0,"response size":40,"request content":"compare:<target:MOD key:\"/registry/pods/headlamp/headlamp-9d868696f-kxkw6\" mod_revision:1601 > success:<request_put:<key:\"/registry/pods/headlamp/headlamp-9d868696f-kxkw6\" value_size:3746 >> failure:<request_range:<key:\"/registry/pods/headlamp/headlamp-9d868696f-kxkw6\" > >"}
	{"level":"warn","ts":"2024-07-31T16:45:01.717574Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"275.264411ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/resourcequotas/\" range_end:\"/registry/resourcequotas0\" count_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-07-31T16:45:01.71761Z","caller":"traceutil/trace.go:171","msg":"trace[505274928] range","detail":"{range_begin:/registry/resourcequotas/; range_end:/registry/resourcequotas0; response_count:0; response_revision:1617; }","duration":"275.333998ms","start":"2024-07-31T16:45:01.44227Z","end":"2024-07-31T16:45:01.717604Z","steps":["trace[505274928] 'agreement among raft nodes before linearized reading'  (duration: 275.275441ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-31T16:45:01.717741Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"258.925085ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/horizontalpodautoscalers/\" range_end:\"/registry/horizontalpodautoscalers0\" count_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-07-31T16:45:01.717774Z","caller":"traceutil/trace.go:171","msg":"trace[398093156] range","detail":"{range_begin:/registry/horizontalpodautoscalers/; range_end:/registry/horizontalpodautoscalers0; response_count:0; response_revision:1617; }","duration":"258.982988ms","start":"2024-07-31T16:45:01.458785Z","end":"2024-07-31T16:45:01.717768Z","steps":["trace[398093156] 'agreement among raft nodes before linearized reading'  (duration: 258.937037ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-31T16:45:01.718106Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"101.571059ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/\" range_end:\"/registry/pods/kube-system0\" ","response":"range_response_count:14 size:70538"}
	{"level":"info","ts":"2024-07-31T16:45:01.718148Z","caller":"traceutil/trace.go:171","msg":"trace[1312937675] range","detail":"{range_begin:/registry/pods/kube-system/; range_end:/registry/pods/kube-system0; response_count:14; response_revision:1617; }","duration":"101.643381ms","start":"2024-07-31T16:45:01.616497Z","end":"2024-07-31T16:45:01.718141Z","steps":["trace[1312937675] 'agreement among raft nodes before linearized reading'  (duration: 101.48067ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-31T16:45:01.718304Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"171.608555ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/default/\" range_end:\"/registry/pods/default0\" ","response":"range_response_count:2 size:5242"}
	{"level":"info","ts":"2024-07-31T16:45:01.718334Z","caller":"traceutil/trace.go:171","msg":"trace[74531476] range","detail":"{range_begin:/registry/pods/default/; range_end:/registry/pods/default0; response_count:2; response_revision:1617; }","duration":"171.641584ms","start":"2024-07-31T16:45:01.546686Z","end":"2024-07-31T16:45:01.718328Z","steps":["trace[74531476] 'agreement among raft nodes before linearized reading'  (duration: 171.529137ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-31T16:45:40.093464Z","caller":"traceutil/trace.go:171","msg":"trace[1985704140] transaction","detail":"{read_only:false; response_revision:1830; number_of_response:1; }","duration":"133.990425ms","start":"2024-07-31T16:45:39.959449Z","end":"2024-07-31T16:45:40.093439Z","steps":["trace[1985704140] 'process raft request'  (duration: 133.838386ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-31T16:45:46.22431Z","caller":"traceutil/trace.go:171","msg":"trace[133151561] transaction","detail":"{read_only:false; response_revision:1867; number_of_response:1; }","duration":"108.37674ms","start":"2024-07-31T16:45:46.11591Z","end":"2024-07-31T16:45:46.224287Z","steps":["trace[133151561] 'process raft request'  (duration: 108.174866ms)"],"step_count":1}
	
	
	==> kernel <==
	 16:50:35 up 8 min,  0 users,  load average: 0.15, 0.73, 0.58
	Linux addons-190022 5.10.207 #1 SMP Mon Jul 29 15:19:02 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [4259df48d13e2dfd05141f8bbfbb4340cdeeaf42b3182a0b13fe16cf804ec32b] <==
	E0731 16:44:28.988176       1 controller.go:146] Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	E0731 16:44:28.989040       1 available_controller.go:460] v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.108.82.240:443/apis/metrics.k8s.io/v1beta1: Get "https://10.108.82.240:443/apis/metrics.k8s.io/v1beta1": dial tcp 10.108.82.240:443: connect: connection refused
	I0731 16:44:29.035962       1 handler.go:286] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I0731 16:44:36.631090       1 handler.go:286] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	W0731 16:44:37.658368       1 cacher.go:168] Terminating all watchers from cacher traces.gadget.kinvolk.io
	I0731 16:44:48.587906       1 alloc.go:330] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.104.33.233"}
	I0731 16:44:59.382701       1 controller.go:615] quota admission added evaluator for: ingresses.networking.k8s.io
	I0731 16:44:59.540123       1 alloc.go:330] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.107.144.237"}
	I0731 16:45:17.906762       1 controller.go:615] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	I0731 16:45:41.241947       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0731 16:45:41.242033       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0731 16:45:41.273135       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0731 16:45:41.273191       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0731 16:45:41.282114       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0731 16:45:41.282158       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0731 16:45:41.295069       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0731 16:45:41.295119       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0731 16:45:41.320587       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0731 16:45:41.320717       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	W0731 16:45:42.282839       1 cacher.go:168] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W0731 16:45:42.320654       1 cacher.go:168] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	W0731 16:45:42.375879       1 cacher.go:168] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	I0731 16:47:19.940191       1 alloc.go:330] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.105.116.6"}
	E0731 16:47:22.068809       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"ingress-nginx\" not found]"
	
	
	==> kube-controller-manager [b6e02749c1e2f10d2e9e617185cc64a68404a6d54853b78cd0a71d5920fe8984] <==
	W0731 16:48:35.190881       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0731 16:48:35.191011       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0731 16:48:43.438852       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0731 16:48:43.438896       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0731 16:48:44.435209       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0731 16:48:44.435293       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0731 16:49:12.247391       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0731 16:49:12.247597       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0731 16:49:18.390143       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0731 16:49:18.390187       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0731 16:49:20.250534       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0731 16:49:20.250639       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0731 16:49:28.744195       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0731 16:49:28.744328       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0731 16:49:51.988597       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0731 16:49:51.988758       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0731 16:49:54.653967       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0731 16:49:54.654087       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0731 16:50:00.161025       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0731 16:50:00.161158       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0731 16:50:27.655130       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0731 16:50:27.655376       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0731 16:50:28.973329       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0731 16:50:28.973370       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	I0731 16:50:33.980258       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-c59844bb4" duration="17.616µs"
	
	
	==> kube-proxy [71c2430355fdd45d105e4f0a661352d41ef566f3830a43f4c7a29120639b9b24] <==
	I0731 16:42:34.695147       1 server_linux.go:69] "Using iptables proxy"
	I0731 16:42:34.706680       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.39.140"]
	I0731 16:42:34.816880       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0731 16:42:34.816941       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0731 16:42:34.816962       1 server_linux.go:165] "Using iptables Proxier"
	I0731 16:42:34.830525       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0731 16:42:34.830764       1 server.go:872] "Version info" version="v1.30.3"
	I0731 16:42:34.830787       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0731 16:42:34.831885       1 config.go:192] "Starting service config controller"
	I0731 16:42:34.831914       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0731 16:42:34.831934       1 config.go:101] "Starting endpoint slice config controller"
	I0731 16:42:34.831938       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0731 16:42:34.832480       1 config.go:319] "Starting node config controller"
	I0731 16:42:34.832505       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0731 16:42:34.932669       1 shared_informer.go:320] Caches are synced for node config
	I0731 16:42:34.932700       1 shared_informer.go:320] Caches are synced for service config
	I0731 16:42:34.932748       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [a9196ff47535f2ce157de81e4cff038af22e270b2559341c143d31fee8ef1914] <==
	W0731 16:42:16.094276       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0731 16:42:16.094298       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0731 16:42:16.094337       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0731 16:42:16.094359       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0731 16:42:16.094420       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0731 16:42:16.094471       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0731 16:42:16.915664       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0731 16:42:16.915741       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0731 16:42:16.975998       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0731 16:42:16.976060       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0731 16:42:16.999604       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0731 16:42:17.000321       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0731 16:42:17.166569       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0731 16:42:17.166706       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0731 16:42:17.180096       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0731 16:42:17.180180       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0731 16:42:17.186396       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0731 16:42:17.186476       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0731 16:42:17.218736       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0731 16:42:17.218815       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0731 16:42:17.218748       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0731 16:42:17.218896       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0731 16:42:17.448355       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0731 16:42:17.448491       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0731 16:42:19.887509       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Jul 31 16:48:18 addons-190022 kubelet[1270]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 31 16:48:18 addons-190022 kubelet[1270]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 31 16:48:18 addons-190022 kubelet[1270]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 31 16:48:18 addons-190022 kubelet[1270]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 31 16:48:18 addons-190022 kubelet[1270]: I0731 16:48:18.902993    1270 scope.go:117] "RemoveContainer" containerID="eb0e5c820bdb410ad6ab2ee4c5e760ba94055502cdcbab390bf0250334bb10f5"
	Jul 31 16:48:18 addons-190022 kubelet[1270]: I0731 16:48:18.933674    1270 scope.go:117] "RemoveContainer" containerID="b4041dd1f1c11f032d0d76bfe7b25341c0780ec9baf9f0c16dccd7331ab44680"
	Jul 31 16:48:58 addons-190022 kubelet[1270]: I0731 16:48:58.392950    1270 kubelet_pods.go:988] "Unable to retrieve pull secret, the image pull may not succeed." pod="default/busybox" secret="" err="secret \"gcp-auth\" not found"
	Jul 31 16:49:18 addons-190022 kubelet[1270]: E0731 16:49:18.411217    1270 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 31 16:49:18 addons-190022 kubelet[1270]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 31 16:49:18 addons-190022 kubelet[1270]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 31 16:49:18 addons-190022 kubelet[1270]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 31 16:49:18 addons-190022 kubelet[1270]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 31 16:50:03 addons-190022 kubelet[1270]: I0731 16:50:03.393331    1270 kubelet_pods.go:988] "Unable to retrieve pull secret, the image pull may not succeed." pod="default/busybox" secret="" err="secret \"gcp-auth\" not found"
	Jul 31 16:50:18 addons-190022 kubelet[1270]: E0731 16:50:18.411378    1270 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 31 16:50:18 addons-190022 kubelet[1270]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 31 16:50:18 addons-190022 kubelet[1270]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 31 16:50:18 addons-190022 kubelet[1270]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 31 16:50:18 addons-190022 kubelet[1270]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 31 16:50:34 addons-190022 kubelet[1270]: I0731 16:50:34.006310    1270 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/hello-world-app-6778b5fc9f-xlw6l" podStartSLOduration=192.703835361 podStartE2EDuration="3m15.006281145s" podCreationTimestamp="2024-07-31 16:47:19 +0000 UTC" firstStartedPulling="2024-07-31 16:47:20.317311157 +0000 UTC m=+302.041801835" lastFinishedPulling="2024-07-31 16:47:22.619756942 +0000 UTC m=+304.344247619" observedRunningTime="2024-07-31 16:47:23.356337292 +0000 UTC m=+305.080827989" watchObservedRunningTime="2024-07-31 16:50:34.006281145 +0000 UTC m=+495.730771842"
	Jul 31 16:50:35 addons-190022 kubelet[1270]: I0731 16:50:35.341858    1270 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/4638cda1-728a-48d6-9736-4f6234e9f6c1-tmp-dir\") pod \"4638cda1-728a-48d6-9736-4f6234e9f6c1\" (UID: \"4638cda1-728a-48d6-9736-4f6234e9f6c1\") "
	Jul 31 16:50:35 addons-190022 kubelet[1270]: I0731 16:50:35.341915    1270 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c7lh6\" (UniqueName: \"kubernetes.io/projected/4638cda1-728a-48d6-9736-4f6234e9f6c1-kube-api-access-c7lh6\") pod \"4638cda1-728a-48d6-9736-4f6234e9f6c1\" (UID: \"4638cda1-728a-48d6-9736-4f6234e9f6c1\") "
	Jul 31 16:50:35 addons-190022 kubelet[1270]: I0731 16:50:35.342589    1270 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4638cda1-728a-48d6-9736-4f6234e9f6c1-tmp-dir" (OuterVolumeSpecName: "tmp-dir") pod "4638cda1-728a-48d6-9736-4f6234e9f6c1" (UID: "4638cda1-728a-48d6-9736-4f6234e9f6c1"). InnerVolumeSpecName "tmp-dir". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
	Jul 31 16:50:35 addons-190022 kubelet[1270]: I0731 16:50:35.345990    1270 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4638cda1-728a-48d6-9736-4f6234e9f6c1-kube-api-access-c7lh6" (OuterVolumeSpecName: "kube-api-access-c7lh6") pod "4638cda1-728a-48d6-9736-4f6234e9f6c1" (UID: "4638cda1-728a-48d6-9736-4f6234e9f6c1"). InnerVolumeSpecName "kube-api-access-c7lh6". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Jul 31 16:50:35 addons-190022 kubelet[1270]: I0731 16:50:35.442583    1270 reconciler_common.go:289] "Volume detached for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/4638cda1-728a-48d6-9736-4f6234e9f6c1-tmp-dir\") on node \"addons-190022\" DevicePath \"\""
	Jul 31 16:50:35 addons-190022 kubelet[1270]: I0731 16:50:35.442615    1270 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-c7lh6\" (UniqueName: \"kubernetes.io/projected/4638cda1-728a-48d6-9736-4f6234e9f6c1-kube-api-access-c7lh6\") on node \"addons-190022\" DevicePath \"\""
	
	
	==> storage-provisioner [689504ce84d77edb6d4fa85ce92f64fe541eb2f4c1bb3dd5402f35a76268d00a] <==
	I0731 16:42:39.952704       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0731 16:42:40.170723       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0731 16:42:40.170804       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0731 16:42:40.330646       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0731 16:42:40.330826       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-190022_6b9d968e-7497-4863-82bc-7a85b6c38769!
	I0731 16:42:40.391591       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"408bdfd9-3c16-4ebc-97ef-d44c7164d237", APIVersion:"v1", ResourceVersion:"649", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-190022_6b9d968e-7497-4863-82bc-7a85b6c38769 became leader
	I0731 16:42:40.533695       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-190022_6b9d968e-7497-4863-82bc-7a85b6c38769!
	E0731 16:45:33.889779       1 controller.go:1050] claim "178a4605-3a05-4309-ab54-4c88c6342c99" in work queue no longer exists
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-190022 -n addons-190022
helpers_test.go:261: (dbg) Run:  kubectl --context addons-190022 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-c59844bb4-j57l6
helpers_test.go:274: ======> post-mortem[TestAddons/parallel/MetricsServer]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context addons-190022 describe pod metrics-server-c59844bb4-j57l6
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context addons-190022 describe pod metrics-server-c59844bb4-j57l6: exit status 1 (79.034198ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-c59844bb4-j57l6" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context addons-190022 describe pod metrics-server-c59844bb4-j57l6: exit status 1
--- FAIL: TestAddons/parallel/MetricsServer (349.22s)
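
The post-mortem steps above can be repeated by hand when the failure needs closer inspection. A minimal sketch of the equivalent manual check, assuming the same kubectl context name used in this report (addons-190022); the commands simply mirror the helpers_test.go invocations shown above:

	# List every pod that is not in the Running phase, across all namespaces.
	kubectl --context addons-190022 get po -A --field-selector=status.phase!=Running -o=jsonpath='{.items[*].metadata.name}'
	# Describe any pod the previous command prints; a NotFound error (as seen above)
	# means the pod was already deleted between the two calls.
	kubectl --context addons-190022 describe pod metrics-server-c59844bb4-j57l6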

                                                
                                    
x
+
TestAddons/StoppedEnableDisable (154.38s)

                                                
                                                
=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:174: (dbg) Run:  out/minikube-linux-amd64 stop -p addons-190022
addons_test.go:174: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p addons-190022: exit status 82 (2m0.479059971s)

                                                
                                                
-- stdout --
	* Stopping node "addons-190022"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
addons_test.go:176: failed to stop minikube. args "out/minikube-linux-amd64 stop -p addons-190022" : exit status 82
addons_test.go:178: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-190022
addons_test.go:178: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-190022: exit status 11 (21.608676355s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.39.140:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_a2d68fa011bbbda55500e636dff79fec124b29e3_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
addons_test.go:180: failed to enable dashboard addon: args "out/minikube-linux-amd64 addons enable dashboard -p addons-190022" : exit status 11
addons_test.go:182: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-190022
addons_test.go:182: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-190022: exit status 11 (6.143381008s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.39.140:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_7b2045b3edf32de99b3c34afdc43bfaabe8aa3c2_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
addons_test.go:184: failed to disable dashboard addon: args "out/minikube-linux-amd64 addons disable dashboard -p addons-190022" : exit status 11
addons_test.go:187: (dbg) Run:  out/minikube-linux-amd64 addons disable gvisor -p addons-190022
addons_test.go:187: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable gvisor -p addons-190022: exit status 11 (6.143434498s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.39.140:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_8dd43b2cee45a94e37dbac1dd983966d1c97e7d4_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
addons_test.go:189: failed to disable non-enabled addon: args "out/minikube-linux-amd64 addons disable gvisor -p addons-190022" : exit status 11
--- FAIL: TestAddons/StoppedEnableDisable (154.38s)
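
For a stop timeout like the one above, the diagnostics requested in the error box can be gathered with the profile name from this report; a minimal sketch, assuming the same minikube binary path used by the test run:

	# Collect the log bundle the GUEST_STOP_TIMEOUT message asks to attach.
	out/minikube-linux-amd64 logs --file=logs.txt -p addons-190022
	# Retry the stop that returned exit status 82 (VM still reported as "Running").
	out/minikube-linux-amd64 stop -p addons-190022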

                                                
                                    
x
+
TestMultiControlPlane/serial/StopSecondaryNode (141.75s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:363: (dbg) Run:  out/minikube-linux-amd64 -p ha-234651 node stop m02 -v=7 --alsologtostderr
E0731 17:04:05.346693   15259 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/addons-190022/client.crt: no such file or directory
E0731 17:04:18.929430   15259 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/functional-496242/client.crt: no such file or directory
E0731 17:05:40.849868   15259 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/functional-496242/client.crt: no such file or directory
ha_test.go:363: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-234651 node stop m02 -v=7 --alsologtostderr: exit status 30 (2m0.465770494s)

                                                
                                                
-- stdout --
	* Stopping node "ha-234651-m02"  ...

                                                
                                                
-- /stdout --
** stderr ** 
	I0731 17:03:40.894431   30423 out.go:291] Setting OutFile to fd 1 ...
	I0731 17:03:40.894697   30423 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 17:03:40.894706   30423 out.go:304] Setting ErrFile to fd 2...
	I0731 17:03:40.894711   30423 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 17:03:40.894921   30423 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19349-8084/.minikube/bin
	I0731 17:03:40.895217   30423 mustload.go:65] Loading cluster: ha-234651
	I0731 17:03:40.895550   30423 config.go:182] Loaded profile config "ha-234651": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0731 17:03:40.895568   30423 stop.go:39] StopHost: ha-234651-m02
	I0731 17:03:40.895935   30423 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 17:03:40.895985   30423 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 17:03:40.911072   30423 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39829
	I0731 17:03:40.911607   30423 main.go:141] libmachine: () Calling .GetVersion
	I0731 17:03:40.912135   30423 main.go:141] libmachine: Using API Version  1
	I0731 17:03:40.912158   30423 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 17:03:40.912532   30423 main.go:141] libmachine: () Calling .GetMachineName
	I0731 17:03:40.914984   30423 out.go:177] * Stopping node "ha-234651-m02"  ...
	I0731 17:03:40.916298   30423 machine.go:157] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0731 17:03:40.916336   30423 main.go:141] libmachine: (ha-234651-m02) Calling .DriverName
	I0731 17:03:40.916570   30423 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0731 17:03:40.916598   30423 main.go:141] libmachine: (ha-234651-m02) Calling .GetSSHHostname
	I0731 17:03:40.919705   30423 main.go:141] libmachine: (ha-234651-m02) DBG | domain ha-234651-m02 has defined MAC address 52:54:00:4c:97:0e in network mk-ha-234651
	I0731 17:03:40.920115   30423 main.go:141] libmachine: (ha-234651-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:97:0e", ip: ""} in network mk-ha-234651: {Iface:virbr1 ExpiryTime:2024-07-31 18:00:10 +0000 UTC Type:0 Mac:52:54:00:4c:97:0e Iaid: IPaddr:192.168.39.235 Prefix:24 Hostname:ha-234651-m02 Clientid:01:52:54:00:4c:97:0e}
	I0731 17:03:40.920140   30423 main.go:141] libmachine: (ha-234651-m02) DBG | domain ha-234651-m02 has defined IP address 192.168.39.235 and MAC address 52:54:00:4c:97:0e in network mk-ha-234651
	I0731 17:03:40.920277   30423 main.go:141] libmachine: (ha-234651-m02) Calling .GetSSHPort
	I0731 17:03:40.920439   30423 main.go:141] libmachine: (ha-234651-m02) Calling .GetSSHKeyPath
	I0731 17:03:40.920595   30423 main.go:141] libmachine: (ha-234651-m02) Calling .GetSSHUsername
	I0731 17:03:40.920749   30423 sshutil.go:53] new ssh client: &{IP:192.168.39.235 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19349-8084/.minikube/machines/ha-234651-m02/id_rsa Username:docker}
	I0731 17:03:41.003246   30423 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0731 17:03:41.055461   30423 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I0731 17:03:41.111232   30423 main.go:141] libmachine: Stopping "ha-234651-m02"...
	I0731 17:03:41.111330   30423 main.go:141] libmachine: (ha-234651-m02) Calling .GetState
	I0731 17:03:41.112821   30423 main.go:141] libmachine: (ha-234651-m02) Calling .Stop
	I0731 17:03:41.116268   30423 main.go:141] libmachine: (ha-234651-m02) Waiting for machine to stop 0/120
	I0731 17:03:42.117582   30423 main.go:141] libmachine: (ha-234651-m02) Waiting for machine to stop 1/120
	I0731 17:03:43.119766   30423 main.go:141] libmachine: (ha-234651-m02) Waiting for machine to stop 2/120
	I0731 17:03:44.122153   30423 main.go:141] libmachine: (ha-234651-m02) Waiting for machine to stop 3/120
	I0731 17:03:45.123641   30423 main.go:141] libmachine: (ha-234651-m02) Waiting for machine to stop 4/120
	I0731 17:03:46.125536   30423 main.go:141] libmachine: (ha-234651-m02) Waiting for machine to stop 5/120
	I0731 17:03:47.126649   30423 main.go:141] libmachine: (ha-234651-m02) Waiting for machine to stop 6/120
	I0731 17:03:48.128131   30423 main.go:141] libmachine: (ha-234651-m02) Waiting for machine to stop 7/120
	I0731 17:03:49.130068   30423 main.go:141] libmachine: (ha-234651-m02) Waiting for machine to stop 8/120
	I0731 17:03:50.131486   30423 main.go:141] libmachine: (ha-234651-m02) Waiting for machine to stop 9/120
	I0731 17:03:51.133869   30423 main.go:141] libmachine: (ha-234651-m02) Waiting for machine to stop 10/120
	I0731 17:03:52.135145   30423 main.go:141] libmachine: (ha-234651-m02) Waiting for machine to stop 11/120
	I0731 17:03:53.136873   30423 main.go:141] libmachine: (ha-234651-m02) Waiting for machine to stop 12/120
	I0731 17:03:54.138612   30423 main.go:141] libmachine: (ha-234651-m02) Waiting for machine to stop 13/120
	I0731 17:03:55.140270   30423 main.go:141] libmachine: (ha-234651-m02) Waiting for machine to stop 14/120
	I0731 17:03:56.142075   30423 main.go:141] libmachine: (ha-234651-m02) Waiting for machine to stop 15/120
	I0731 17:03:57.143765   30423 main.go:141] libmachine: (ha-234651-m02) Waiting for machine to stop 16/120
	I0731 17:03:58.145498   30423 main.go:141] libmachine: (ha-234651-m02) Waiting for machine to stop 17/120
	I0731 17:03:59.147007   30423 main.go:141] libmachine: (ha-234651-m02) Waiting for machine to stop 18/120
	I0731 17:04:00.148321   30423 main.go:141] libmachine: (ha-234651-m02) Waiting for machine to stop 19/120
	I0731 17:04:01.150479   30423 main.go:141] libmachine: (ha-234651-m02) Waiting for machine to stop 20/120
	I0731 17:04:02.151865   30423 main.go:141] libmachine: (ha-234651-m02) Waiting for machine to stop 21/120
	I0731 17:04:03.153520   30423 main.go:141] libmachine: (ha-234651-m02) Waiting for machine to stop 22/120
	I0731 17:04:04.154904   30423 main.go:141] libmachine: (ha-234651-m02) Waiting for machine to stop 23/120
	I0731 17:04:05.156198   30423 main.go:141] libmachine: (ha-234651-m02) Waiting for machine to stop 24/120
	I0731 17:04:06.158092   30423 main.go:141] libmachine: (ha-234651-m02) Waiting for machine to stop 25/120
	I0731 17:04:07.159662   30423 main.go:141] libmachine: (ha-234651-m02) Waiting for machine to stop 26/120
	I0731 17:04:08.161614   30423 main.go:141] libmachine: (ha-234651-m02) Waiting for machine to stop 27/120
	I0731 17:04:09.162908   30423 main.go:141] libmachine: (ha-234651-m02) Waiting for machine to stop 28/120
	I0731 17:04:10.164128   30423 main.go:141] libmachine: (ha-234651-m02) Waiting for machine to stop 29/120
	I0731 17:04:11.166427   30423 main.go:141] libmachine: (ha-234651-m02) Waiting for machine to stop 30/120
	I0731 17:04:12.167838   30423 main.go:141] libmachine: (ha-234651-m02) Waiting for machine to stop 31/120
	I0731 17:04:13.169775   30423 main.go:141] libmachine: (ha-234651-m02) Waiting for machine to stop 32/120
	I0731 17:04:14.171863   30423 main.go:141] libmachine: (ha-234651-m02) Waiting for machine to stop 33/120
	I0731 17:04:15.173486   30423 main.go:141] libmachine: (ha-234651-m02) Waiting for machine to stop 34/120
	I0731 17:04:16.175576   30423 main.go:141] libmachine: (ha-234651-m02) Waiting for machine to stop 35/120
	I0731 17:04:17.176785   30423 main.go:141] libmachine: (ha-234651-m02) Waiting for machine to stop 36/120
	I0731 17:04:18.178313   30423 main.go:141] libmachine: (ha-234651-m02) Waiting for machine to stop 37/120
	I0731 17:04:19.179523   30423 main.go:141] libmachine: (ha-234651-m02) Waiting for machine to stop 38/120
	I0731 17:04:20.180740   30423 main.go:141] libmachine: (ha-234651-m02) Waiting for machine to stop 39/120
	I0731 17:04:21.182889   30423 main.go:141] libmachine: (ha-234651-m02) Waiting for machine to stop 40/120
	I0731 17:04:22.184580   30423 main.go:141] libmachine: (ha-234651-m02) Waiting for machine to stop 41/120
	I0731 17:04:23.185867   30423 main.go:141] libmachine: (ha-234651-m02) Waiting for machine to stop 42/120
	I0731 17:04:24.187209   30423 main.go:141] libmachine: (ha-234651-m02) Waiting for machine to stop 43/120
	I0731 17:04:25.188544   30423 main.go:141] libmachine: (ha-234651-m02) Waiting for machine to stop 44/120
	I0731 17:04:26.189888   30423 main.go:141] libmachine: (ha-234651-m02) Waiting for machine to stop 45/120
	I0731 17:04:27.191125   30423 main.go:141] libmachine: (ha-234651-m02) Waiting for machine to stop 46/120
	I0731 17:04:28.192294   30423 main.go:141] libmachine: (ha-234651-m02) Waiting for machine to stop 47/120
	I0731 17:04:29.194270   30423 main.go:141] libmachine: (ha-234651-m02) Waiting for machine to stop 48/120
	I0731 17:04:30.195678   30423 main.go:141] libmachine: (ha-234651-m02) Waiting for machine to stop 49/120
	I0731 17:04:31.197304   30423 main.go:141] libmachine: (ha-234651-m02) Waiting for machine to stop 50/120
	I0731 17:04:32.198797   30423 main.go:141] libmachine: (ha-234651-m02) Waiting for machine to stop 51/120
	I0731 17:04:33.200156   30423 main.go:141] libmachine: (ha-234651-m02) Waiting for machine to stop 52/120
	I0731 17:04:34.201750   30423 main.go:141] libmachine: (ha-234651-m02) Waiting for machine to stop 53/120
	I0731 17:04:35.203599   30423 main.go:141] libmachine: (ha-234651-m02) Waiting for machine to stop 54/120
	I0731 17:04:36.205718   30423 main.go:141] libmachine: (ha-234651-m02) Waiting for machine to stop 55/120
	I0731 17:04:37.207145   30423 main.go:141] libmachine: (ha-234651-m02) Waiting for machine to stop 56/120
	I0731 17:04:38.208580   30423 main.go:141] libmachine: (ha-234651-m02) Waiting for machine to stop 57/120
	I0731 17:04:39.210045   30423 main.go:141] libmachine: (ha-234651-m02) Waiting for machine to stop 58/120
	I0731 17:04:40.211454   30423 main.go:141] libmachine: (ha-234651-m02) Waiting for machine to stop 59/120
	I0731 17:04:41.213688   30423 main.go:141] libmachine: (ha-234651-m02) Waiting for machine to stop 60/120
	I0731 17:04:42.214988   30423 main.go:141] libmachine: (ha-234651-m02) Waiting for machine to stop 61/120
	I0731 17:04:43.217226   30423 main.go:141] libmachine: (ha-234651-m02) Waiting for machine to stop 62/120
	I0731 17:04:44.218613   30423 main.go:141] libmachine: (ha-234651-m02) Waiting for machine to stop 63/120
	I0731 17:04:45.220332   30423 main.go:141] libmachine: (ha-234651-m02) Waiting for machine to stop 64/120
	I0731 17:04:46.222306   30423 main.go:141] libmachine: (ha-234651-m02) Waiting for machine to stop 65/120
	I0731 17:04:47.223683   30423 main.go:141] libmachine: (ha-234651-m02) Waiting for machine to stop 66/120
	I0731 17:04:48.225756   30423 main.go:141] libmachine: (ha-234651-m02) Waiting for machine to stop 67/120
	I0731 17:04:49.227027   30423 main.go:141] libmachine: (ha-234651-m02) Waiting for machine to stop 68/120
	I0731 17:04:50.228419   30423 main.go:141] libmachine: (ha-234651-m02) Waiting for machine to stop 69/120
	I0731 17:04:51.230722   30423 main.go:141] libmachine: (ha-234651-m02) Waiting for machine to stop 70/120
	I0731 17:04:52.232089   30423 main.go:141] libmachine: (ha-234651-m02) Waiting for machine to stop 71/120
	I0731 17:04:53.233753   30423 main.go:141] libmachine: (ha-234651-m02) Waiting for machine to stop 72/120
	I0731 17:04:54.235715   30423 main.go:141] libmachine: (ha-234651-m02) Waiting for machine to stop 73/120
	I0731 17:04:55.236922   30423 main.go:141] libmachine: (ha-234651-m02) Waiting for machine to stop 74/120
	I0731 17:04:56.238845   30423 main.go:141] libmachine: (ha-234651-m02) Waiting for machine to stop 75/120
	I0731 17:04:57.240075   30423 main.go:141] libmachine: (ha-234651-m02) Waiting for machine to stop 76/120
	I0731 17:04:58.241871   30423 main.go:141] libmachine: (ha-234651-m02) Waiting for machine to stop 77/120
	I0731 17:04:59.243431   30423 main.go:141] libmachine: (ha-234651-m02) Waiting for machine to stop 78/120
	I0731 17:05:00.244797   30423 main.go:141] libmachine: (ha-234651-m02) Waiting for machine to stop 79/120
	I0731 17:05:01.246965   30423 main.go:141] libmachine: (ha-234651-m02) Waiting for machine to stop 80/120
	I0731 17:05:02.248312   30423 main.go:141] libmachine: (ha-234651-m02) Waiting for machine to stop 81/120
	I0731 17:05:03.249771   30423 main.go:141] libmachine: (ha-234651-m02) Waiting for machine to stop 82/120
	I0731 17:05:04.251304   30423 main.go:141] libmachine: (ha-234651-m02) Waiting for machine to stop 83/120
	I0731 17:05:05.253748   30423 main.go:141] libmachine: (ha-234651-m02) Waiting for machine to stop 84/120
	I0731 17:05:06.255526   30423 main.go:141] libmachine: (ha-234651-m02) Waiting for machine to stop 85/120
	I0731 17:05:07.257494   30423 main.go:141] libmachine: (ha-234651-m02) Waiting for machine to stop 86/120
	I0731 17:05:08.259274   30423 main.go:141] libmachine: (ha-234651-m02) Waiting for machine to stop 87/120
	I0731 17:05:09.261991   30423 main.go:141] libmachine: (ha-234651-m02) Waiting for machine to stop 88/120
	I0731 17:05:10.263311   30423 main.go:141] libmachine: (ha-234651-m02) Waiting for machine to stop 89/120
	I0731 17:05:11.265703   30423 main.go:141] libmachine: (ha-234651-m02) Waiting for machine to stop 90/120
	I0731 17:05:12.267936   30423 main.go:141] libmachine: (ha-234651-m02) Waiting for machine to stop 91/120
	I0731 17:05:13.269516   30423 main.go:141] libmachine: (ha-234651-m02) Waiting for machine to stop 92/120
	I0731 17:05:14.271190   30423 main.go:141] libmachine: (ha-234651-m02) Waiting for machine to stop 93/120
	I0731 17:05:15.272483   30423 main.go:141] libmachine: (ha-234651-m02) Waiting for machine to stop 94/120
	I0731 17:05:16.274311   30423 main.go:141] libmachine: (ha-234651-m02) Waiting for machine to stop 95/120
	I0731 17:05:17.276107   30423 main.go:141] libmachine: (ha-234651-m02) Waiting for machine to stop 96/120
	I0731 17:05:18.277654   30423 main.go:141] libmachine: (ha-234651-m02) Waiting for machine to stop 97/120
	I0731 17:05:19.279121   30423 main.go:141] libmachine: (ha-234651-m02) Waiting for machine to stop 98/120
	I0731 17:05:20.280807   30423 main.go:141] libmachine: (ha-234651-m02) Waiting for machine to stop 99/120
	I0731 17:05:21.282675   30423 main.go:141] libmachine: (ha-234651-m02) Waiting for machine to stop 100/120
	I0731 17:05:22.284079   30423 main.go:141] libmachine: (ha-234651-m02) Waiting for machine to stop 101/120
	I0731 17:05:23.286315   30423 main.go:141] libmachine: (ha-234651-m02) Waiting for machine to stop 102/120
	I0731 17:05:24.287720   30423 main.go:141] libmachine: (ha-234651-m02) Waiting for machine to stop 103/120
	I0731 17:05:25.289630   30423 main.go:141] libmachine: (ha-234651-m02) Waiting for machine to stop 104/120
	I0731 17:05:26.291444   30423 main.go:141] libmachine: (ha-234651-m02) Waiting for machine to stop 105/120
	I0731 17:05:27.293494   30423 main.go:141] libmachine: (ha-234651-m02) Waiting for machine to stop 106/120
	I0731 17:05:28.294746   30423 main.go:141] libmachine: (ha-234651-m02) Waiting for machine to stop 107/120
	I0731 17:05:29.296099   30423 main.go:141] libmachine: (ha-234651-m02) Waiting for machine to stop 108/120
	I0731 17:05:30.297386   30423 main.go:141] libmachine: (ha-234651-m02) Waiting for machine to stop 109/120
	I0731 17:05:31.299612   30423 main.go:141] libmachine: (ha-234651-m02) Waiting for machine to stop 110/120
	I0731 17:05:32.301505   30423 main.go:141] libmachine: (ha-234651-m02) Waiting for machine to stop 111/120
	I0731 17:05:33.303283   30423 main.go:141] libmachine: (ha-234651-m02) Waiting for machine to stop 112/120
	I0731 17:05:34.305489   30423 main.go:141] libmachine: (ha-234651-m02) Waiting for machine to stop 113/120
	I0731 17:05:35.306923   30423 main.go:141] libmachine: (ha-234651-m02) Waiting for machine to stop 114/120
	I0731 17:05:36.308871   30423 main.go:141] libmachine: (ha-234651-m02) Waiting for machine to stop 115/120
	I0731 17:05:37.310134   30423 main.go:141] libmachine: (ha-234651-m02) Waiting for machine to stop 116/120
	I0731 17:05:38.311411   30423 main.go:141] libmachine: (ha-234651-m02) Waiting for machine to stop 117/120
	I0731 17:05:39.312704   30423 main.go:141] libmachine: (ha-234651-m02) Waiting for machine to stop 118/120
	I0731 17:05:40.315051   30423 main.go:141] libmachine: (ha-234651-m02) Waiting for machine to stop 119/120
	I0731 17:05:41.316414   30423 stop.go:66] stop err: unable to stop vm, current state "Running"
	W0731 17:05:41.316573   30423 out.go:239] X Failed to stop node m02: Temporary Error: stop: unable to stop vm, current state "Running"
	X Failed to stop node m02: Temporary Error: stop: unable to stop vm, current state "Running"

                                                
                                                
** /stderr **
ha_test.go:365: secondary control-plane node stop returned an error. args "out/minikube-linux-amd64 -p ha-234651 node stop m02 -v=7 --alsologtostderr": exit status 30
ha_test.go:369: (dbg) Run:  out/minikube-linux-amd64 -p ha-234651 status -v=7 --alsologtostderr
ha_test.go:369: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-234651 status -v=7 --alsologtostderr: exit status 3 (19.13806581s)

                                                
                                                
-- stdout --
	ha-234651
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-234651-m02
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-234651-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-234651-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0731 17:05:41.360546   30865 out.go:291] Setting OutFile to fd 1 ...
	I0731 17:05:41.360666   30865 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 17:05:41.360676   30865 out.go:304] Setting ErrFile to fd 2...
	I0731 17:05:41.360682   30865 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 17:05:41.360858   30865 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19349-8084/.minikube/bin
	I0731 17:05:41.361040   30865 out.go:298] Setting JSON to false
	I0731 17:05:41.361070   30865 mustload.go:65] Loading cluster: ha-234651
	I0731 17:05:41.361158   30865 notify.go:220] Checking for updates...
	I0731 17:05:41.361480   30865 config.go:182] Loaded profile config "ha-234651": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0731 17:05:41.361498   30865 status.go:255] checking status of ha-234651 ...
	I0731 17:05:41.361878   30865 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 17:05:41.361953   30865 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 17:05:41.380310   30865 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33087
	I0731 17:05:41.380728   30865 main.go:141] libmachine: () Calling .GetVersion
	I0731 17:05:41.381371   30865 main.go:141] libmachine: Using API Version  1
	I0731 17:05:41.381406   30865 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 17:05:41.381748   30865 main.go:141] libmachine: () Calling .GetMachineName
	I0731 17:05:41.381956   30865 main.go:141] libmachine: (ha-234651) Calling .GetState
	I0731 17:05:41.383504   30865 status.go:330] ha-234651 host status = "Running" (err=<nil>)
	I0731 17:05:41.383537   30865 host.go:66] Checking if "ha-234651" exists ...
	I0731 17:05:41.383932   30865 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 17:05:41.383980   30865 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 17:05:41.399154   30865 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46635
	I0731 17:05:41.399599   30865 main.go:141] libmachine: () Calling .GetVersion
	I0731 17:05:41.400183   30865 main.go:141] libmachine: Using API Version  1
	I0731 17:05:41.400218   30865 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 17:05:41.400584   30865 main.go:141] libmachine: () Calling .GetMachineName
	I0731 17:05:41.400784   30865 main.go:141] libmachine: (ha-234651) Calling .GetIP
	I0731 17:05:41.403464   30865 main.go:141] libmachine: (ha-234651) DBG | domain ha-234651 has defined MAC address 52:54:00:20:60:53 in network mk-ha-234651
	I0731 17:05:41.403857   30865 main.go:141] libmachine: (ha-234651) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:60:53", ip: ""} in network mk-ha-234651: {Iface:virbr1 ExpiryTime:2024-07-31 17:59:15 +0000 UTC Type:0 Mac:52:54:00:20:60:53 Iaid: IPaddr:192.168.39.243 Prefix:24 Hostname:ha-234651 Clientid:01:52:54:00:20:60:53}
	I0731 17:05:41.403880   30865 main.go:141] libmachine: (ha-234651) DBG | domain ha-234651 has defined IP address 192.168.39.243 and MAC address 52:54:00:20:60:53 in network mk-ha-234651
	I0731 17:05:41.404084   30865 host.go:66] Checking if "ha-234651" exists ...
	I0731 17:05:41.404362   30865 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 17:05:41.404406   30865 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 17:05:41.419273   30865 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43835
	I0731 17:05:41.419627   30865 main.go:141] libmachine: () Calling .GetVersion
	I0731 17:05:41.420041   30865 main.go:141] libmachine: Using API Version  1
	I0731 17:05:41.420064   30865 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 17:05:41.420419   30865 main.go:141] libmachine: () Calling .GetMachineName
	I0731 17:05:41.420604   30865 main.go:141] libmachine: (ha-234651) Calling .DriverName
	I0731 17:05:41.420777   30865 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0731 17:05:41.420801   30865 main.go:141] libmachine: (ha-234651) Calling .GetSSHHostname
	I0731 17:05:41.423475   30865 main.go:141] libmachine: (ha-234651) DBG | domain ha-234651 has defined MAC address 52:54:00:20:60:53 in network mk-ha-234651
	I0731 17:05:41.423859   30865 main.go:141] libmachine: (ha-234651) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:60:53", ip: ""} in network mk-ha-234651: {Iface:virbr1 ExpiryTime:2024-07-31 17:59:15 +0000 UTC Type:0 Mac:52:54:00:20:60:53 Iaid: IPaddr:192.168.39.243 Prefix:24 Hostname:ha-234651 Clientid:01:52:54:00:20:60:53}
	I0731 17:05:41.423882   30865 main.go:141] libmachine: (ha-234651) DBG | domain ha-234651 has defined IP address 192.168.39.243 and MAC address 52:54:00:20:60:53 in network mk-ha-234651
	I0731 17:05:41.424004   30865 main.go:141] libmachine: (ha-234651) Calling .GetSSHPort
	I0731 17:05:41.424169   30865 main.go:141] libmachine: (ha-234651) Calling .GetSSHKeyPath
	I0731 17:05:41.424330   30865 main.go:141] libmachine: (ha-234651) Calling .GetSSHUsername
	I0731 17:05:41.424832   30865 sshutil.go:53] new ssh client: &{IP:192.168.39.243 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19349-8084/.minikube/machines/ha-234651/id_rsa Username:docker}
	I0731 17:05:41.507467   30865 ssh_runner.go:195] Run: systemctl --version
	I0731 17:05:41.513816   30865 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0731 17:05:41.529120   30865 kubeconfig.go:125] found "ha-234651" server: "https://192.168.39.254:8443"
	I0731 17:05:41.529147   30865 api_server.go:166] Checking apiserver status ...
	I0731 17:05:41.529184   30865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 17:05:41.544247   30865 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1220/cgroup
	W0731 17:05:41.553326   30865 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1220/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0731 17:05:41.553374   30865 ssh_runner.go:195] Run: ls
	I0731 17:05:41.557764   30865 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0731 17:05:41.563840   30865 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0731 17:05:41.563866   30865 status.go:422] ha-234651 apiserver status = Running (err=<nil>)
	I0731 17:05:41.563877   30865 status.go:257] ha-234651 status: &{Name:ha-234651 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0731 17:05:41.563896   30865 status.go:255] checking status of ha-234651-m02 ...
	I0731 17:05:41.564269   30865 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 17:05:41.564308   30865 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 17:05:41.581780   30865 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45343
	I0731 17:05:41.582225   30865 main.go:141] libmachine: () Calling .GetVersion
	I0731 17:05:41.582723   30865 main.go:141] libmachine: Using API Version  1
	I0731 17:05:41.582745   30865 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 17:05:41.583141   30865 main.go:141] libmachine: () Calling .GetMachineName
	I0731 17:05:41.583341   30865 main.go:141] libmachine: (ha-234651-m02) Calling .GetState
	I0731 17:05:41.585009   30865 status.go:330] ha-234651-m02 host status = "Running" (err=<nil>)
	I0731 17:05:41.585025   30865 host.go:66] Checking if "ha-234651-m02" exists ...
	I0731 17:05:41.585321   30865 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 17:05:41.585372   30865 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 17:05:41.599579   30865 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40439
	I0731 17:05:41.600005   30865 main.go:141] libmachine: () Calling .GetVersion
	I0731 17:05:41.600469   30865 main.go:141] libmachine: Using API Version  1
	I0731 17:05:41.600491   30865 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 17:05:41.600799   30865 main.go:141] libmachine: () Calling .GetMachineName
	I0731 17:05:41.600967   30865 main.go:141] libmachine: (ha-234651-m02) Calling .GetIP
	I0731 17:05:41.603668   30865 main.go:141] libmachine: (ha-234651-m02) DBG | domain ha-234651-m02 has defined MAC address 52:54:00:4c:97:0e in network mk-ha-234651
	I0731 17:05:41.603990   30865 main.go:141] libmachine: (ha-234651-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:97:0e", ip: ""} in network mk-ha-234651: {Iface:virbr1 ExpiryTime:2024-07-31 18:00:10 +0000 UTC Type:0 Mac:52:54:00:4c:97:0e Iaid: IPaddr:192.168.39.235 Prefix:24 Hostname:ha-234651-m02 Clientid:01:52:54:00:4c:97:0e}
	I0731 17:05:41.604013   30865 main.go:141] libmachine: (ha-234651-m02) DBG | domain ha-234651-m02 has defined IP address 192.168.39.235 and MAC address 52:54:00:4c:97:0e in network mk-ha-234651
	I0731 17:05:41.604163   30865 host.go:66] Checking if "ha-234651-m02" exists ...
	I0731 17:05:41.604475   30865 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 17:05:41.604507   30865 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 17:05:41.618729   30865 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46005
	I0731 17:05:41.619066   30865 main.go:141] libmachine: () Calling .GetVersion
	I0731 17:05:41.619497   30865 main.go:141] libmachine: Using API Version  1
	I0731 17:05:41.619518   30865 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 17:05:41.619823   30865 main.go:141] libmachine: () Calling .GetMachineName
	I0731 17:05:41.619995   30865 main.go:141] libmachine: (ha-234651-m02) Calling .DriverName
	I0731 17:05:41.620171   30865 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0731 17:05:41.620189   30865 main.go:141] libmachine: (ha-234651-m02) Calling .GetSSHHostname
	I0731 17:05:41.622578   30865 main.go:141] libmachine: (ha-234651-m02) DBG | domain ha-234651-m02 has defined MAC address 52:54:00:4c:97:0e in network mk-ha-234651
	I0731 17:05:41.622927   30865 main.go:141] libmachine: (ha-234651-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:97:0e", ip: ""} in network mk-ha-234651: {Iface:virbr1 ExpiryTime:2024-07-31 18:00:10 +0000 UTC Type:0 Mac:52:54:00:4c:97:0e Iaid: IPaddr:192.168.39.235 Prefix:24 Hostname:ha-234651-m02 Clientid:01:52:54:00:4c:97:0e}
	I0731 17:05:41.622946   30865 main.go:141] libmachine: (ha-234651-m02) DBG | domain ha-234651-m02 has defined IP address 192.168.39.235 and MAC address 52:54:00:4c:97:0e in network mk-ha-234651
	I0731 17:05:41.623099   30865 main.go:141] libmachine: (ha-234651-m02) Calling .GetSSHPort
	I0731 17:05:41.623313   30865 main.go:141] libmachine: (ha-234651-m02) Calling .GetSSHKeyPath
	I0731 17:05:41.623435   30865 main.go:141] libmachine: (ha-234651-m02) Calling .GetSSHUsername
	I0731 17:05:41.623560   30865 sshutil.go:53] new ssh client: &{IP:192.168.39.235 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19349-8084/.minikube/machines/ha-234651-m02/id_rsa Username:docker}
	W0731 17:06:00.099316   30865 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.235:22: connect: no route to host
	W0731 17:06:00.099425   30865 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.235:22: connect: no route to host
	E0731 17:06:00.099448   30865 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.235:22: connect: no route to host
	I0731 17:06:00.099463   30865 status.go:257] ha-234651-m02 status: &{Name:ha-234651-m02 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0731 17:06:00.099486   30865 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.235:22: connect: no route to host
	I0731 17:06:00.099496   30865 status.go:255] checking status of ha-234651-m03 ...
	I0731 17:06:00.099935   30865 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 17:06:00.099981   30865 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 17:06:00.114604   30865 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45597
	I0731 17:06:00.115000   30865 main.go:141] libmachine: () Calling .GetVersion
	I0731 17:06:00.115464   30865 main.go:141] libmachine: Using API Version  1
	I0731 17:06:00.115483   30865 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 17:06:00.115844   30865 main.go:141] libmachine: () Calling .GetMachineName
	I0731 17:06:00.116097   30865 main.go:141] libmachine: (ha-234651-m03) Calling .GetState
	I0731 17:06:00.117776   30865 status.go:330] ha-234651-m03 host status = "Running" (err=<nil>)
	I0731 17:06:00.117788   30865 host.go:66] Checking if "ha-234651-m03" exists ...
	I0731 17:06:00.118097   30865 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 17:06:00.118127   30865 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 17:06:00.132620   30865 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38489
	I0731 17:06:00.132963   30865 main.go:141] libmachine: () Calling .GetVersion
	I0731 17:06:00.133400   30865 main.go:141] libmachine: Using API Version  1
	I0731 17:06:00.133437   30865 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 17:06:00.133753   30865 main.go:141] libmachine: () Calling .GetMachineName
	I0731 17:06:00.133935   30865 main.go:141] libmachine: (ha-234651-m03) Calling .GetIP
	I0731 17:06:00.136253   30865 main.go:141] libmachine: (ha-234651-m03) DBG | domain ha-234651-m03 has defined MAC address 52:54:00:ac:c0:cf in network mk-ha-234651
	I0731 17:06:00.136672   30865 main.go:141] libmachine: (ha-234651-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ac:c0:cf", ip: ""} in network mk-ha-234651: {Iface:virbr1 ExpiryTime:2024-07-31 18:01:23 +0000 UTC Type:0 Mac:52:54:00:ac:c0:cf Iaid: IPaddr:192.168.39.139 Prefix:24 Hostname:ha-234651-m03 Clientid:01:52:54:00:ac:c0:cf}
	I0731 17:06:00.136703   30865 main.go:141] libmachine: (ha-234651-m03) DBG | domain ha-234651-m03 has defined IP address 192.168.39.139 and MAC address 52:54:00:ac:c0:cf in network mk-ha-234651
	I0731 17:06:00.136870   30865 host.go:66] Checking if "ha-234651-m03" exists ...
	I0731 17:06:00.137147   30865 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 17:06:00.137183   30865 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 17:06:00.152855   30865 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43171
	I0731 17:06:00.153236   30865 main.go:141] libmachine: () Calling .GetVersion
	I0731 17:06:00.153667   30865 main.go:141] libmachine: Using API Version  1
	I0731 17:06:00.153686   30865 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 17:06:00.153969   30865 main.go:141] libmachine: () Calling .GetMachineName
	I0731 17:06:00.154143   30865 main.go:141] libmachine: (ha-234651-m03) Calling .DriverName
	I0731 17:06:00.154309   30865 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0731 17:06:00.154329   30865 main.go:141] libmachine: (ha-234651-m03) Calling .GetSSHHostname
	I0731 17:06:00.156829   30865 main.go:141] libmachine: (ha-234651-m03) DBG | domain ha-234651-m03 has defined MAC address 52:54:00:ac:c0:cf in network mk-ha-234651
	I0731 17:06:00.157224   30865 main.go:141] libmachine: (ha-234651-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ac:c0:cf", ip: ""} in network mk-ha-234651: {Iface:virbr1 ExpiryTime:2024-07-31 18:01:23 +0000 UTC Type:0 Mac:52:54:00:ac:c0:cf Iaid: IPaddr:192.168.39.139 Prefix:24 Hostname:ha-234651-m03 Clientid:01:52:54:00:ac:c0:cf}
	I0731 17:06:00.157258   30865 main.go:141] libmachine: (ha-234651-m03) DBG | domain ha-234651-m03 has defined IP address 192.168.39.139 and MAC address 52:54:00:ac:c0:cf in network mk-ha-234651
	I0731 17:06:00.157409   30865 main.go:141] libmachine: (ha-234651-m03) Calling .GetSSHPort
	I0731 17:06:00.157568   30865 main.go:141] libmachine: (ha-234651-m03) Calling .GetSSHKeyPath
	I0731 17:06:00.157733   30865 main.go:141] libmachine: (ha-234651-m03) Calling .GetSSHUsername
	I0731 17:06:00.157873   30865 sshutil.go:53] new ssh client: &{IP:192.168.39.139 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19349-8084/.minikube/machines/ha-234651-m03/id_rsa Username:docker}
	I0731 17:06:00.235588   30865 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0731 17:06:00.252662   30865 kubeconfig.go:125] found "ha-234651" server: "https://192.168.39.254:8443"
	I0731 17:06:00.252684   30865 api_server.go:166] Checking apiserver status ...
	I0731 17:06:00.252712   30865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 17:06:00.272435   30865 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1530/cgroup
	W0731 17:06:00.282535   30865 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1530/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0731 17:06:00.282589   30865 ssh_runner.go:195] Run: ls
	I0731 17:06:00.287370   30865 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0731 17:06:00.294459   30865 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0731 17:06:00.294497   30865 status.go:422] ha-234651-m03 apiserver status = Running (err=<nil>)
	I0731 17:06:00.294507   30865 status.go:257] ha-234651-m03 status: &{Name:ha-234651-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0731 17:06:00.294523   30865 status.go:255] checking status of ha-234651-m04 ...
	I0731 17:06:00.294814   30865 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 17:06:00.294844   30865 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 17:06:00.309789   30865 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33097
	I0731 17:06:00.310183   30865 main.go:141] libmachine: () Calling .GetVersion
	I0731 17:06:00.310656   30865 main.go:141] libmachine: Using API Version  1
	I0731 17:06:00.310677   30865 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 17:06:00.310980   30865 main.go:141] libmachine: () Calling .GetMachineName
	I0731 17:06:00.311197   30865 main.go:141] libmachine: (ha-234651-m04) Calling .GetState
	I0731 17:06:00.312883   30865 status.go:330] ha-234651-m04 host status = "Running" (err=<nil>)
	I0731 17:06:00.312898   30865 host.go:66] Checking if "ha-234651-m04" exists ...
	I0731 17:06:00.313261   30865 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 17:06:00.313303   30865 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 17:06:00.328455   30865 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36537
	I0731 17:06:00.328941   30865 main.go:141] libmachine: () Calling .GetVersion
	I0731 17:06:00.329412   30865 main.go:141] libmachine: Using API Version  1
	I0731 17:06:00.329442   30865 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 17:06:00.329743   30865 main.go:141] libmachine: () Calling .GetMachineName
	I0731 17:06:00.329893   30865 main.go:141] libmachine: (ha-234651-m04) Calling .GetIP
	I0731 17:06:00.332440   30865 main.go:141] libmachine: (ha-234651-m04) DBG | domain ha-234651-m04 has defined MAC address 52:54:00:25:f7:22 in network mk-ha-234651
	I0731 17:06:00.332834   30865 main.go:141] libmachine: (ha-234651-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:f7:22", ip: ""} in network mk-ha-234651: {Iface:virbr1 ExpiryTime:2024-07-31 18:02:48 +0000 UTC Type:0 Mac:52:54:00:25:f7:22 Iaid: IPaddr:192.168.39.216 Prefix:24 Hostname:ha-234651-m04 Clientid:01:52:54:00:25:f7:22}
	I0731 17:06:00.332856   30865 main.go:141] libmachine: (ha-234651-m04) DBG | domain ha-234651-m04 has defined IP address 192.168.39.216 and MAC address 52:54:00:25:f7:22 in network mk-ha-234651
	I0731 17:06:00.333009   30865 host.go:66] Checking if "ha-234651-m04" exists ...
	I0731 17:06:00.333302   30865 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 17:06:00.333350   30865 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 17:06:00.349106   30865 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46689
	I0731 17:06:00.349471   30865 main.go:141] libmachine: () Calling .GetVersion
	I0731 17:06:00.349955   30865 main.go:141] libmachine: Using API Version  1
	I0731 17:06:00.349985   30865 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 17:06:00.350270   30865 main.go:141] libmachine: () Calling .GetMachineName
	I0731 17:06:00.350445   30865 main.go:141] libmachine: (ha-234651-m04) Calling .DriverName
	I0731 17:06:00.350609   30865 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0731 17:06:00.350627   30865 main.go:141] libmachine: (ha-234651-m04) Calling .GetSSHHostname
	I0731 17:06:00.353159   30865 main.go:141] libmachine: (ha-234651-m04) DBG | domain ha-234651-m04 has defined MAC address 52:54:00:25:f7:22 in network mk-ha-234651
	I0731 17:06:00.353516   30865 main.go:141] libmachine: (ha-234651-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:f7:22", ip: ""} in network mk-ha-234651: {Iface:virbr1 ExpiryTime:2024-07-31 18:02:48 +0000 UTC Type:0 Mac:52:54:00:25:f7:22 Iaid: IPaddr:192.168.39.216 Prefix:24 Hostname:ha-234651-m04 Clientid:01:52:54:00:25:f7:22}
	I0731 17:06:00.353544   30865 main.go:141] libmachine: (ha-234651-m04) DBG | domain ha-234651-m04 has defined IP address 192.168.39.216 and MAC address 52:54:00:25:f7:22 in network mk-ha-234651
	I0731 17:06:00.353715   30865 main.go:141] libmachine: (ha-234651-m04) Calling .GetSSHPort
	I0731 17:06:00.353882   30865 main.go:141] libmachine: (ha-234651-m04) Calling .GetSSHKeyPath
	I0731 17:06:00.354022   30865 main.go:141] libmachine: (ha-234651-m04) Calling .GetSSHUsername
	I0731 17:06:00.354123   30865 sshutil.go:53] new ssh client: &{IP:192.168.39.216 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19349-8084/.minikube/machines/ha-234651-m04/id_rsa Username:docker}
	I0731 17:06:00.439026   30865 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0731 17:06:00.454724   30865 status.go:257] ha-234651-m04 status: &{Name:ha-234651-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
ha_test.go:372: failed to run minikube status. args "out/minikube-linux-amd64 -p ha-234651 status -v=7 --alsologtostderr" : exit status 3
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-234651 -n ha-234651
helpers_test.go:244: <<< TestMultiControlPlane/serial/StopSecondaryNode FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/StopSecondaryNode]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-234651 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ha-234651 logs -n 25: (1.30666697s)
helpers_test.go:252: TestMultiControlPlane/serial/StopSecondaryNode logs: 
-- stdout --
	
	==> Audit <==
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                       Args                                       |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| cp      | ha-234651 cp ha-234651-m03:/home/docker/cp-test.txt                              | ha-234651 | jenkins | v1.33.1 | 31 Jul 24 17:03 UTC | 31 Jul 24 17:03 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile2373933749/001/cp-test_ha-234651-m03.txt |           |         |         |                     |                     |
	| ssh     | ha-234651 ssh -n                                                                 | ha-234651 | jenkins | v1.33.1 | 31 Jul 24 17:03 UTC | 31 Jul 24 17:03 UTC |
	|         | ha-234651-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-234651 cp ha-234651-m03:/home/docker/cp-test.txt                              | ha-234651 | jenkins | v1.33.1 | 31 Jul 24 17:03 UTC | 31 Jul 24 17:03 UTC |
	|         | ha-234651:/home/docker/cp-test_ha-234651-m03_ha-234651.txt                       |           |         |         |                     |                     |
	| ssh     | ha-234651 ssh -n                                                                 | ha-234651 | jenkins | v1.33.1 | 31 Jul 24 17:03 UTC | 31 Jul 24 17:03 UTC |
	|         | ha-234651-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-234651 ssh -n ha-234651 sudo cat                                              | ha-234651 | jenkins | v1.33.1 | 31 Jul 24 17:03 UTC | 31 Jul 24 17:03 UTC |
	|         | /home/docker/cp-test_ha-234651-m03_ha-234651.txt                                 |           |         |         |                     |                     |
	| cp      | ha-234651 cp ha-234651-m03:/home/docker/cp-test.txt                              | ha-234651 | jenkins | v1.33.1 | 31 Jul 24 17:03 UTC | 31 Jul 24 17:03 UTC |
	|         | ha-234651-m02:/home/docker/cp-test_ha-234651-m03_ha-234651-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-234651 ssh -n                                                                 | ha-234651 | jenkins | v1.33.1 | 31 Jul 24 17:03 UTC | 31 Jul 24 17:03 UTC |
	|         | ha-234651-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-234651 ssh -n ha-234651-m02 sudo cat                                          | ha-234651 | jenkins | v1.33.1 | 31 Jul 24 17:03 UTC | 31 Jul 24 17:03 UTC |
	|         | /home/docker/cp-test_ha-234651-m03_ha-234651-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-234651 cp ha-234651-m03:/home/docker/cp-test.txt                              | ha-234651 | jenkins | v1.33.1 | 31 Jul 24 17:03 UTC | 31 Jul 24 17:03 UTC |
	|         | ha-234651-m04:/home/docker/cp-test_ha-234651-m03_ha-234651-m04.txt               |           |         |         |                     |                     |
	| ssh     | ha-234651 ssh -n                                                                 | ha-234651 | jenkins | v1.33.1 | 31 Jul 24 17:03 UTC | 31 Jul 24 17:03 UTC |
	|         | ha-234651-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-234651 ssh -n ha-234651-m04 sudo cat                                          | ha-234651 | jenkins | v1.33.1 | 31 Jul 24 17:03 UTC | 31 Jul 24 17:03 UTC |
	|         | /home/docker/cp-test_ha-234651-m03_ha-234651-m04.txt                             |           |         |         |                     |                     |
	| cp      | ha-234651 cp testdata/cp-test.txt                                                | ha-234651 | jenkins | v1.33.1 | 31 Jul 24 17:03 UTC | 31 Jul 24 17:03 UTC |
	|         | ha-234651-m04:/home/docker/cp-test.txt                                           |           |         |         |                     |                     |
	| ssh     | ha-234651 ssh -n                                                                 | ha-234651 | jenkins | v1.33.1 | 31 Jul 24 17:03 UTC | 31 Jul 24 17:03 UTC |
	|         | ha-234651-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-234651 cp ha-234651-m04:/home/docker/cp-test.txt                              | ha-234651 | jenkins | v1.33.1 | 31 Jul 24 17:03 UTC | 31 Jul 24 17:03 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile2373933749/001/cp-test_ha-234651-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-234651 ssh -n                                                                 | ha-234651 | jenkins | v1.33.1 | 31 Jul 24 17:03 UTC | 31 Jul 24 17:03 UTC |
	|         | ha-234651-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-234651 cp ha-234651-m04:/home/docker/cp-test.txt                              | ha-234651 | jenkins | v1.33.1 | 31 Jul 24 17:03 UTC | 31 Jul 24 17:03 UTC |
	|         | ha-234651:/home/docker/cp-test_ha-234651-m04_ha-234651.txt                       |           |         |         |                     |                     |
	| ssh     | ha-234651 ssh -n                                                                 | ha-234651 | jenkins | v1.33.1 | 31 Jul 24 17:03 UTC | 31 Jul 24 17:03 UTC |
	|         | ha-234651-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-234651 ssh -n ha-234651 sudo cat                                              | ha-234651 | jenkins | v1.33.1 | 31 Jul 24 17:03 UTC | 31 Jul 24 17:03 UTC |
	|         | /home/docker/cp-test_ha-234651-m04_ha-234651.txt                                 |           |         |         |                     |                     |
	| cp      | ha-234651 cp ha-234651-m04:/home/docker/cp-test.txt                              | ha-234651 | jenkins | v1.33.1 | 31 Jul 24 17:03 UTC | 31 Jul 24 17:03 UTC |
	|         | ha-234651-m02:/home/docker/cp-test_ha-234651-m04_ha-234651-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-234651 ssh -n                                                                 | ha-234651 | jenkins | v1.33.1 | 31 Jul 24 17:03 UTC | 31 Jul 24 17:03 UTC |
	|         | ha-234651-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-234651 ssh -n ha-234651-m02 sudo cat                                          | ha-234651 | jenkins | v1.33.1 | 31 Jul 24 17:03 UTC | 31 Jul 24 17:03 UTC |
	|         | /home/docker/cp-test_ha-234651-m04_ha-234651-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-234651 cp ha-234651-m04:/home/docker/cp-test.txt                              | ha-234651 | jenkins | v1.33.1 | 31 Jul 24 17:03 UTC | 31 Jul 24 17:03 UTC |
	|         | ha-234651-m03:/home/docker/cp-test_ha-234651-m04_ha-234651-m03.txt               |           |         |         |                     |                     |
	| ssh     | ha-234651 ssh -n                                                                 | ha-234651 | jenkins | v1.33.1 | 31 Jul 24 17:03 UTC | 31 Jul 24 17:03 UTC |
	|         | ha-234651-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-234651 ssh -n ha-234651-m03 sudo cat                                          | ha-234651 | jenkins | v1.33.1 | 31 Jul 24 17:03 UTC | 31 Jul 24 17:03 UTC |
	|         | /home/docker/cp-test_ha-234651-m04_ha-234651-m03.txt                             |           |         |         |                     |                     |
	| node    | ha-234651 node stop m02 -v=7                                                     | ha-234651 | jenkins | v1.33.1 | 31 Jul 24 17:03 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/31 16:59:02
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0731 16:59:02.086616   26392 out.go:291] Setting OutFile to fd 1 ...
	I0731 16:59:02.086847   26392 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 16:59:02.086856   26392 out.go:304] Setting ErrFile to fd 2...
	I0731 16:59:02.086860   26392 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 16:59:02.087017   26392 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19349-8084/.minikube/bin
	I0731 16:59:02.087598   26392 out.go:298] Setting JSON to false
	I0731 16:59:02.088397   26392 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":2486,"bootTime":1722442656,"procs":174,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1065-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0731 16:59:02.088452   26392 start.go:139] virtualization: kvm guest
	I0731 16:59:02.090518   26392 out.go:177] * [ha-234651] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0731 16:59:02.091938   26392 out.go:177]   - MINIKUBE_LOCATION=19349
	I0731 16:59:02.091946   26392 notify.go:220] Checking for updates...
	I0731 16:59:02.094020   26392 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0731 16:59:02.095139   26392 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19349-8084/kubeconfig
	I0731 16:59:02.096213   26392 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19349-8084/.minikube
	I0731 16:59:02.097279   26392 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0731 16:59:02.098361   26392 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0731 16:59:02.099733   26392 driver.go:392] Setting default libvirt URI to qemu:///system
	I0731 16:59:02.134045   26392 out.go:177] * Using the kvm2 driver based on user configuration
	I0731 16:59:02.135190   26392 start.go:297] selected driver: kvm2
	I0731 16:59:02.135203   26392 start.go:901] validating driver "kvm2" against <nil>
	I0731 16:59:02.135212   26392 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0731 16:59:02.135908   26392 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0731 16:59:02.135972   26392 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19349-8084/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0731 16:59:02.150423   26392 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0731 16:59:02.150475   26392 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0731 16:59:02.150683   26392 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0731 16:59:02.150736   26392 cni.go:84] Creating CNI manager for ""
	I0731 16:59:02.150748   26392 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0731 16:59:02.150753   26392 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0731 16:59:02.150810   26392 start.go:340] cluster config:
	{Name:ha-234651 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:ha-234651 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0731 16:59:02.150893   26392 iso.go:125] acquiring lock: {Name:mk4494bb5a63902bf12a909c3109db85229bc516 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0731 16:59:02.152634   26392 out.go:177] * Starting "ha-234651" primary control-plane node in "ha-234651" cluster
	I0731 16:59:02.153827   26392 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0731 16:59:02.153858   26392 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19349-8084/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4
	I0731 16:59:02.153866   26392 cache.go:56] Caching tarball of preloaded images
	I0731 16:59:02.153961   26392 preload.go:172] Found /home/jenkins/minikube-integration/19349-8084/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0731 16:59:02.153975   26392 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on crio
	I0731 16:59:02.154325   26392 profile.go:143] Saving config to /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/ha-234651/config.json ...
	I0731 16:59:02.154361   26392 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/ha-234651/config.json: {Name:mk345cf47c371bb2b8d9e899fabd4f55ea2e688d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 16:59:02.154511   26392 start.go:360] acquireMachinesLock for ha-234651: {Name:mkd78eacfab7d6c3058c6674434b3d889ec957e0 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0731 16:59:02.154550   26392 start.go:364] duration metric: took 23.284µs to acquireMachinesLock for "ha-234651"
	I0731 16:59:02.154573   26392 start.go:93] Provisioning new machine with config: &{Name:ha-234651 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:ha-234651 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0731 16:59:02.154632   26392 start.go:125] createHost starting for "" (driver="kvm2")
	I0731 16:59:02.157048   26392 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0731 16:59:02.157187   26392 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 16:59:02.157242   26392 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 16:59:02.171049   26392 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37109
	I0731 16:59:02.171465   26392 main.go:141] libmachine: () Calling .GetVersion
	I0731 16:59:02.172023   26392 main.go:141] libmachine: Using API Version  1
	I0731 16:59:02.172049   26392 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 16:59:02.172390   26392 main.go:141] libmachine: () Calling .GetMachineName
	I0731 16:59:02.172559   26392 main.go:141] libmachine: (ha-234651) Calling .GetMachineName
	I0731 16:59:02.172680   26392 main.go:141] libmachine: (ha-234651) Calling .DriverName
	I0731 16:59:02.172817   26392 start.go:159] libmachine.API.Create for "ha-234651" (driver="kvm2")
	I0731 16:59:02.172846   26392 client.go:168] LocalClient.Create starting
	I0731 16:59:02.172879   26392 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19349-8084/.minikube/certs/ca.pem
	I0731 16:59:02.172910   26392 main.go:141] libmachine: Decoding PEM data...
	I0731 16:59:02.172925   26392 main.go:141] libmachine: Parsing certificate...
	I0731 16:59:02.173003   26392 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19349-8084/.minikube/certs/cert.pem
	I0731 16:59:02.173022   26392 main.go:141] libmachine: Decoding PEM data...
	I0731 16:59:02.173034   26392 main.go:141] libmachine: Parsing certificate...
	I0731 16:59:02.173050   26392 main.go:141] libmachine: Running pre-create checks...
	I0731 16:59:02.173062   26392 main.go:141] libmachine: (ha-234651) Calling .PreCreateCheck
	I0731 16:59:02.173410   26392 main.go:141] libmachine: (ha-234651) Calling .GetConfigRaw
	I0731 16:59:02.173764   26392 main.go:141] libmachine: Creating machine...
	I0731 16:59:02.173777   26392 main.go:141] libmachine: (ha-234651) Calling .Create
	I0731 16:59:02.173883   26392 main.go:141] libmachine: (ha-234651) Creating KVM machine...
	I0731 16:59:02.175020   26392 main.go:141] libmachine: (ha-234651) DBG | found existing default KVM network
	I0731 16:59:02.175675   26392 main.go:141] libmachine: (ha-234651) DBG | I0731 16:59:02.175519   26415 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000015ac0}
	I0731 16:59:02.175700   26392 main.go:141] libmachine: (ha-234651) DBG | created network xml: 
	I0731 16:59:02.175716   26392 main.go:141] libmachine: (ha-234651) DBG | <network>
	I0731 16:59:02.175727   26392 main.go:141] libmachine: (ha-234651) DBG |   <name>mk-ha-234651</name>
	I0731 16:59:02.175736   26392 main.go:141] libmachine: (ha-234651) DBG |   <dns enable='no'/>
	I0731 16:59:02.175743   26392 main.go:141] libmachine: (ha-234651) DBG |   
	I0731 16:59:02.175749   26392 main.go:141] libmachine: (ha-234651) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0731 16:59:02.175755   26392 main.go:141] libmachine: (ha-234651) DBG |     <dhcp>
	I0731 16:59:02.175762   26392 main.go:141] libmachine: (ha-234651) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0731 16:59:02.175768   26392 main.go:141] libmachine: (ha-234651) DBG |     </dhcp>
	I0731 16:59:02.175775   26392 main.go:141] libmachine: (ha-234651) DBG |   </ip>
	I0731 16:59:02.175785   26392 main.go:141] libmachine: (ha-234651) DBG |   
	I0731 16:59:02.175792   26392 main.go:141] libmachine: (ha-234651) DBG | </network>
	I0731 16:59:02.175807   26392 main.go:141] libmachine: (ha-234651) DBG | 
	I0731 16:59:02.181065   26392 main.go:141] libmachine: (ha-234651) DBG | trying to create private KVM network mk-ha-234651 192.168.39.0/24...
	I0731 16:59:02.245390   26392 main.go:141] libmachine: (ha-234651) DBG | private KVM network mk-ha-234651 192.168.39.0/24 created
	I0731 16:59:02.245418   26392 main.go:141] libmachine: (ha-234651) DBG | I0731 16:59:02.245363   26415 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19349-8084/.minikube
	I0731 16:59:02.245427   26392 main.go:141] libmachine: (ha-234651) Setting up store path in /home/jenkins/minikube-integration/19349-8084/.minikube/machines/ha-234651 ...
	I0731 16:59:02.245440   26392 main.go:141] libmachine: (ha-234651) Building disk image from file:///home/jenkins/minikube-integration/19349-8084/.minikube/cache/iso/amd64/minikube-v1.33.1-1722248113-19339-amd64.iso
	I0731 16:59:02.245494   26392 main.go:141] libmachine: (ha-234651) Downloading /home/jenkins/minikube-integration/19349-8084/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19349-8084/.minikube/cache/iso/amd64/minikube-v1.33.1-1722248113-19339-amd64.iso...
	I0731 16:59:02.479460   26392 main.go:141] libmachine: (ha-234651) DBG | I0731 16:59:02.479343   26415 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19349-8084/.minikube/machines/ha-234651/id_rsa...
	I0731 16:59:02.575082   26392 main.go:141] libmachine: (ha-234651) DBG | I0731 16:59:02.574936   26415 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19349-8084/.minikube/machines/ha-234651/ha-234651.rawdisk...
	I0731 16:59:02.575138   26392 main.go:141] libmachine: (ha-234651) DBG | Writing magic tar header
	I0731 16:59:02.575154   26392 main.go:141] libmachine: (ha-234651) DBG | Writing SSH key tar header
	I0731 16:59:02.575181   26392 main.go:141] libmachine: (ha-234651) DBG | I0731 16:59:02.575046   26415 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19349-8084/.minikube/machines/ha-234651 ...
	I0731 16:59:02.575197   26392 main.go:141] libmachine: (ha-234651) Setting executable bit set on /home/jenkins/minikube-integration/19349-8084/.minikube/machines/ha-234651 (perms=drwx------)
	I0731 16:59:02.575207   26392 main.go:141] libmachine: (ha-234651) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19349-8084/.minikube/machines/ha-234651
	I0731 16:59:02.575218   26392 main.go:141] libmachine: (ha-234651) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19349-8084/.minikube/machines
	I0731 16:59:02.575224   26392 main.go:141] libmachine: (ha-234651) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19349-8084/.minikube
	I0731 16:59:02.575233   26392 main.go:141] libmachine: (ha-234651) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19349-8084
	I0731 16:59:02.575238   26392 main.go:141] libmachine: (ha-234651) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0731 16:59:02.575248   26392 main.go:141] libmachine: (ha-234651) DBG | Checking permissions on dir: /home/jenkins
	I0731 16:59:02.575256   26392 main.go:141] libmachine: (ha-234651) DBG | Checking permissions on dir: /home
	I0731 16:59:02.575281   26392 main.go:141] libmachine: (ha-234651) DBG | Skipping /home - not owner
	I0731 16:59:02.575295   26392 main.go:141] libmachine: (ha-234651) Setting executable bit set on /home/jenkins/minikube-integration/19349-8084/.minikube/machines (perms=drwxr-xr-x)
	I0731 16:59:02.575307   26392 main.go:141] libmachine: (ha-234651) Setting executable bit set on /home/jenkins/minikube-integration/19349-8084/.minikube (perms=drwxr-xr-x)
	I0731 16:59:02.575317   26392 main.go:141] libmachine: (ha-234651) Setting executable bit set on /home/jenkins/minikube-integration/19349-8084 (perms=drwxrwxr-x)
	I0731 16:59:02.575331   26392 main.go:141] libmachine: (ha-234651) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0731 16:59:02.575338   26392 main.go:141] libmachine: (ha-234651) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0731 16:59:02.575433   26392 main.go:141] libmachine: (ha-234651) Creating domain...
	I0731 16:59:02.576354   26392 main.go:141] libmachine: (ha-234651) define libvirt domain using xml: 
	I0731 16:59:02.576381   26392 main.go:141] libmachine: (ha-234651) <domain type='kvm'>
	I0731 16:59:02.576391   26392 main.go:141] libmachine: (ha-234651)   <name>ha-234651</name>
	I0731 16:59:02.576399   26392 main.go:141] libmachine: (ha-234651)   <memory unit='MiB'>2200</memory>
	I0731 16:59:02.576407   26392 main.go:141] libmachine: (ha-234651)   <vcpu>2</vcpu>
	I0731 16:59:02.576415   26392 main.go:141] libmachine: (ha-234651)   <features>
	I0731 16:59:02.576423   26392 main.go:141] libmachine: (ha-234651)     <acpi/>
	I0731 16:59:02.576430   26392 main.go:141] libmachine: (ha-234651)     <apic/>
	I0731 16:59:02.576453   26392 main.go:141] libmachine: (ha-234651)     <pae/>
	I0731 16:59:02.576464   26392 main.go:141] libmachine: (ha-234651)     
	I0731 16:59:02.576490   26392 main.go:141] libmachine: (ha-234651)   </features>
	I0731 16:59:02.576510   26392 main.go:141] libmachine: (ha-234651)   <cpu mode='host-passthrough'>
	I0731 16:59:02.576517   26392 main.go:141] libmachine: (ha-234651)   
	I0731 16:59:02.576522   26392 main.go:141] libmachine: (ha-234651)   </cpu>
	I0731 16:59:02.576529   26392 main.go:141] libmachine: (ha-234651)   <os>
	I0731 16:59:02.576533   26392 main.go:141] libmachine: (ha-234651)     <type>hvm</type>
	I0731 16:59:02.576539   26392 main.go:141] libmachine: (ha-234651)     <boot dev='cdrom'/>
	I0731 16:59:02.576544   26392 main.go:141] libmachine: (ha-234651)     <boot dev='hd'/>
	I0731 16:59:02.576554   26392 main.go:141] libmachine: (ha-234651)     <bootmenu enable='no'/>
	I0731 16:59:02.576559   26392 main.go:141] libmachine: (ha-234651)   </os>
	I0731 16:59:02.576566   26392 main.go:141] libmachine: (ha-234651)   <devices>
	I0731 16:59:02.576571   26392 main.go:141] libmachine: (ha-234651)     <disk type='file' device='cdrom'>
	I0731 16:59:02.576609   26392 main.go:141] libmachine: (ha-234651)       <source file='/home/jenkins/minikube-integration/19349-8084/.minikube/machines/ha-234651/boot2docker.iso'/>
	I0731 16:59:02.576638   26392 main.go:141] libmachine: (ha-234651)       <target dev='hdc' bus='scsi'/>
	I0731 16:59:02.576667   26392 main.go:141] libmachine: (ha-234651)       <readonly/>
	I0731 16:59:02.576685   26392 main.go:141] libmachine: (ha-234651)     </disk>
	I0731 16:59:02.576700   26392 main.go:141] libmachine: (ha-234651)     <disk type='file' device='disk'>
	I0731 16:59:02.576712   26392 main.go:141] libmachine: (ha-234651)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0731 16:59:02.576730   26392 main.go:141] libmachine: (ha-234651)       <source file='/home/jenkins/minikube-integration/19349-8084/.minikube/machines/ha-234651/ha-234651.rawdisk'/>
	I0731 16:59:02.576743   26392 main.go:141] libmachine: (ha-234651)       <target dev='hda' bus='virtio'/>
	I0731 16:59:02.576761   26392 main.go:141] libmachine: (ha-234651)     </disk>
	I0731 16:59:02.576779   26392 main.go:141] libmachine: (ha-234651)     <interface type='network'>
	I0731 16:59:02.576805   26392 main.go:141] libmachine: (ha-234651)       <source network='mk-ha-234651'/>
	I0731 16:59:02.576825   26392 main.go:141] libmachine: (ha-234651)       <model type='virtio'/>
	I0731 16:59:02.576836   26392 main.go:141] libmachine: (ha-234651)     </interface>
	I0731 16:59:02.576848   26392 main.go:141] libmachine: (ha-234651)     <interface type='network'>
	I0731 16:59:02.576867   26392 main.go:141] libmachine: (ha-234651)       <source network='default'/>
	I0731 16:59:02.576875   26392 main.go:141] libmachine: (ha-234651)       <model type='virtio'/>
	I0731 16:59:02.576880   26392 main.go:141] libmachine: (ha-234651)     </interface>
	I0731 16:59:02.576887   26392 main.go:141] libmachine: (ha-234651)     <serial type='pty'>
	I0731 16:59:02.576892   26392 main.go:141] libmachine: (ha-234651)       <target port='0'/>
	I0731 16:59:02.576898   26392 main.go:141] libmachine: (ha-234651)     </serial>
	I0731 16:59:02.576904   26392 main.go:141] libmachine: (ha-234651)     <console type='pty'>
	I0731 16:59:02.576910   26392 main.go:141] libmachine: (ha-234651)       <target type='serial' port='0'/>
	I0731 16:59:02.576915   26392 main.go:141] libmachine: (ha-234651)     </console>
	I0731 16:59:02.576922   26392 main.go:141] libmachine: (ha-234651)     <rng model='virtio'>
	I0731 16:59:02.576928   26392 main.go:141] libmachine: (ha-234651)       <backend model='random'>/dev/random</backend>
	I0731 16:59:02.576932   26392 main.go:141] libmachine: (ha-234651)     </rng>
	I0731 16:59:02.576937   26392 main.go:141] libmachine: (ha-234651)     
	I0731 16:59:02.576941   26392 main.go:141] libmachine: (ha-234651)     
	I0731 16:59:02.576946   26392 main.go:141] libmachine: (ha-234651)   </devices>
	I0731 16:59:02.576951   26392 main.go:141] libmachine: (ha-234651) </domain>
	I0731 16:59:02.576958   26392 main.go:141] libmachine: (ha-234651) 
	I0731 16:59:02.581016   26392 main.go:141] libmachine: (ha-234651) DBG | domain ha-234651 has defined MAC address 52:54:00:a4:a7:99 in network default
	I0731 16:59:02.581631   26392 main.go:141] libmachine: (ha-234651) Ensuring networks are active...
	I0731 16:59:02.581649   26392 main.go:141] libmachine: (ha-234651) DBG | domain ha-234651 has defined MAC address 52:54:00:20:60:53 in network mk-ha-234651
	I0731 16:59:02.582218   26392 main.go:141] libmachine: (ha-234651) Ensuring network default is active
	I0731 16:59:02.582490   26392 main.go:141] libmachine: (ha-234651) Ensuring network mk-ha-234651 is active
	I0731 16:59:02.582926   26392 main.go:141] libmachine: (ha-234651) Getting domain xml...
	I0731 16:59:02.583572   26392 main.go:141] libmachine: (ha-234651) Creating domain...
	I0731 16:59:03.758566   26392 main.go:141] libmachine: (ha-234651) Waiting to get IP...
	I0731 16:59:03.759252   26392 main.go:141] libmachine: (ha-234651) DBG | domain ha-234651 has defined MAC address 52:54:00:20:60:53 in network mk-ha-234651
	I0731 16:59:03.759681   26392 main.go:141] libmachine: (ha-234651) DBG | unable to find current IP address of domain ha-234651 in network mk-ha-234651
	I0731 16:59:03.759703   26392 main.go:141] libmachine: (ha-234651) DBG | I0731 16:59:03.759661   26415 retry.go:31] will retry after 261.150283ms: waiting for machine to come up
	I0731 16:59:04.022061   26392 main.go:141] libmachine: (ha-234651) DBG | domain ha-234651 has defined MAC address 52:54:00:20:60:53 in network mk-ha-234651
	I0731 16:59:04.022478   26392 main.go:141] libmachine: (ha-234651) DBG | unable to find current IP address of domain ha-234651 in network mk-ha-234651
	I0731 16:59:04.022501   26392 main.go:141] libmachine: (ha-234651) DBG | I0731 16:59:04.022433   26415 retry.go:31] will retry after 324.011133ms: waiting for machine to come up
	I0731 16:59:04.347982   26392 main.go:141] libmachine: (ha-234651) DBG | domain ha-234651 has defined MAC address 52:54:00:20:60:53 in network mk-ha-234651
	I0731 16:59:04.348423   26392 main.go:141] libmachine: (ha-234651) DBG | unable to find current IP address of domain ha-234651 in network mk-ha-234651
	I0731 16:59:04.348442   26392 main.go:141] libmachine: (ha-234651) DBG | I0731 16:59:04.348383   26415 retry.go:31] will retry after 378.78361ms: waiting for machine to come up
	I0731 16:59:04.728908   26392 main.go:141] libmachine: (ha-234651) DBG | domain ha-234651 has defined MAC address 52:54:00:20:60:53 in network mk-ha-234651
	I0731 16:59:04.729471   26392 main.go:141] libmachine: (ha-234651) DBG | unable to find current IP address of domain ha-234651 in network mk-ha-234651
	I0731 16:59:04.729500   26392 main.go:141] libmachine: (ha-234651) DBG | I0731 16:59:04.729404   26415 retry.go:31] will retry after 582.839678ms: waiting for machine to come up
	I0731 16:59:05.314006   26392 main.go:141] libmachine: (ha-234651) DBG | domain ha-234651 has defined MAC address 52:54:00:20:60:53 in network mk-ha-234651
	I0731 16:59:05.314617   26392 main.go:141] libmachine: (ha-234651) DBG | unable to find current IP address of domain ha-234651 in network mk-ha-234651
	I0731 16:59:05.314640   26392 main.go:141] libmachine: (ha-234651) DBG | I0731 16:59:05.314578   26415 retry.go:31] will retry after 543.640775ms: waiting for machine to come up
	I0731 16:59:05.860403   26392 main.go:141] libmachine: (ha-234651) DBG | domain ha-234651 has defined MAC address 52:54:00:20:60:53 in network mk-ha-234651
	I0731 16:59:05.860843   26392 main.go:141] libmachine: (ha-234651) DBG | unable to find current IP address of domain ha-234651 in network mk-ha-234651
	I0731 16:59:05.860867   26392 main.go:141] libmachine: (ha-234651) DBG | I0731 16:59:05.860796   26415 retry.go:31] will retry after 885.211489ms: waiting for machine to come up
	I0731 16:59:06.747859   26392 main.go:141] libmachine: (ha-234651) DBG | domain ha-234651 has defined MAC address 52:54:00:20:60:53 in network mk-ha-234651
	I0731 16:59:06.748290   26392 main.go:141] libmachine: (ha-234651) DBG | unable to find current IP address of domain ha-234651 in network mk-ha-234651
	I0731 16:59:06.748326   26392 main.go:141] libmachine: (ha-234651) DBG | I0731 16:59:06.748244   26415 retry.go:31] will retry after 872.987133ms: waiting for machine to come up
	I0731 16:59:07.622973   26392 main.go:141] libmachine: (ha-234651) DBG | domain ha-234651 has defined MAC address 52:54:00:20:60:53 in network mk-ha-234651
	I0731 16:59:07.623513   26392 main.go:141] libmachine: (ha-234651) DBG | unable to find current IP address of domain ha-234651 in network mk-ha-234651
	I0731 16:59:07.623541   26392 main.go:141] libmachine: (ha-234651) DBG | I0731 16:59:07.623457   26415 retry.go:31] will retry after 1.063595754s: waiting for machine to come up
	I0731 16:59:08.688832   26392 main.go:141] libmachine: (ha-234651) DBG | domain ha-234651 has defined MAC address 52:54:00:20:60:53 in network mk-ha-234651
	I0731 16:59:08.689277   26392 main.go:141] libmachine: (ha-234651) DBG | unable to find current IP address of domain ha-234651 in network mk-ha-234651
	I0731 16:59:08.689309   26392 main.go:141] libmachine: (ha-234651) DBG | I0731 16:59:08.689226   26415 retry.go:31] will retry after 1.211748796s: waiting for machine to come up
	I0731 16:59:09.902688   26392 main.go:141] libmachine: (ha-234651) DBG | domain ha-234651 has defined MAC address 52:54:00:20:60:53 in network mk-ha-234651
	I0731 16:59:09.903250   26392 main.go:141] libmachine: (ha-234651) DBG | unable to find current IP address of domain ha-234651 in network mk-ha-234651
	I0731 16:59:09.903282   26392 main.go:141] libmachine: (ha-234651) DBG | I0731 16:59:09.903203   26415 retry.go:31] will retry after 1.480030878s: waiting for machine to come up
	I0731 16:59:11.385039   26392 main.go:141] libmachine: (ha-234651) DBG | domain ha-234651 has defined MAC address 52:54:00:20:60:53 in network mk-ha-234651
	I0731 16:59:11.385459   26392 main.go:141] libmachine: (ha-234651) DBG | unable to find current IP address of domain ha-234651 in network mk-ha-234651
	I0731 16:59:11.385483   26392 main.go:141] libmachine: (ha-234651) DBG | I0731 16:59:11.385395   26415 retry.go:31] will retry after 1.914673374s: waiting for machine to come up
	I0731 16:59:13.301279   26392 main.go:141] libmachine: (ha-234651) DBG | domain ha-234651 has defined MAC address 52:54:00:20:60:53 in network mk-ha-234651
	I0731 16:59:13.301612   26392 main.go:141] libmachine: (ha-234651) DBG | unable to find current IP address of domain ha-234651 in network mk-ha-234651
	I0731 16:59:13.301648   26392 main.go:141] libmachine: (ha-234651) DBG | I0731 16:59:13.301555   26415 retry.go:31] will retry after 2.413581052s: waiting for machine to come up
	I0731 16:59:15.718131   26392 main.go:141] libmachine: (ha-234651) DBG | domain ha-234651 has defined MAC address 52:54:00:20:60:53 in network mk-ha-234651
	I0731 16:59:15.718454   26392 main.go:141] libmachine: (ha-234651) DBG | unable to find current IP address of domain ha-234651 in network mk-ha-234651
	I0731 16:59:15.718482   26392 main.go:141] libmachine: (ha-234651) DBG | I0731 16:59:15.718405   26415 retry.go:31] will retry after 4.359438277s: waiting for machine to come up
	I0731 16:59:20.081334   26392 main.go:141] libmachine: (ha-234651) DBG | domain ha-234651 has defined MAC address 52:54:00:20:60:53 in network mk-ha-234651
	I0731 16:59:20.081705   26392 main.go:141] libmachine: (ha-234651) DBG | unable to find current IP address of domain ha-234651 in network mk-ha-234651
	I0731 16:59:20.081730   26392 main.go:141] libmachine: (ha-234651) DBG | I0731 16:59:20.081673   26415 retry.go:31] will retry after 3.951412981s: waiting for machine to come up
	I0731 16:59:24.035653   26392 main.go:141] libmachine: (ha-234651) DBG | domain ha-234651 has defined MAC address 52:54:00:20:60:53 in network mk-ha-234651
	I0731 16:59:24.036108   26392 main.go:141] libmachine: (ha-234651) DBG | domain ha-234651 has current primary IP address 192.168.39.243 and MAC address 52:54:00:20:60:53 in network mk-ha-234651
	I0731 16:59:24.036193   26392 main.go:141] libmachine: (ha-234651) Found IP for machine: 192.168.39.243
	I0731 16:59:24.036235   26392 main.go:141] libmachine: (ha-234651) Reserving static IP address...
	I0731 16:59:24.036545   26392 main.go:141] libmachine: (ha-234651) DBG | unable to find host DHCP lease matching {name: "ha-234651", mac: "52:54:00:20:60:53", ip: "192.168.39.243"} in network mk-ha-234651
	I0731 16:59:24.106091   26392 main.go:141] libmachine: (ha-234651) DBG | Getting to WaitForSSH function...
	I0731 16:59:24.106174   26392 main.go:141] libmachine: (ha-234651) Reserved static IP address: 192.168.39.243
	I0731 16:59:24.106192   26392 main.go:141] libmachine: (ha-234651) Waiting for SSH to be available...
	I0731 16:59:24.108490   26392 main.go:141] libmachine: (ha-234651) DBG | domain ha-234651 has defined MAC address 52:54:00:20:60:53 in network mk-ha-234651
	I0731 16:59:24.108857   26392 main.go:141] libmachine: (ha-234651) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:60:53", ip: ""} in network mk-ha-234651: {Iface:virbr1 ExpiryTime:2024-07-31 17:59:15 +0000 UTC Type:0 Mac:52:54:00:20:60:53 Iaid: IPaddr:192.168.39.243 Prefix:24 Hostname:minikube Clientid:01:52:54:00:20:60:53}
	I0731 16:59:24.108896   26392 main.go:141] libmachine: (ha-234651) DBG | domain ha-234651 has defined IP address 192.168.39.243 and MAC address 52:54:00:20:60:53 in network mk-ha-234651
	I0731 16:59:24.109012   26392 main.go:141] libmachine: (ha-234651) DBG | Using SSH client type: external
	I0731 16:59:24.109035   26392 main.go:141] libmachine: (ha-234651) DBG | Using SSH private key: /home/jenkins/minikube-integration/19349-8084/.minikube/machines/ha-234651/id_rsa (-rw-------)
	I0731 16:59:24.109063   26392 main.go:141] libmachine: (ha-234651) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.243 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19349-8084/.minikube/machines/ha-234651/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0731 16:59:24.109077   26392 main.go:141] libmachine: (ha-234651) DBG | About to run SSH command:
	I0731 16:59:24.109103   26392 main.go:141] libmachine: (ha-234651) DBG | exit 0
	I0731 16:59:24.234933   26392 main.go:141] libmachine: (ha-234651) DBG | SSH cmd err, output: <nil>: 
	I0731 16:59:24.235227   26392 main.go:141] libmachine: (ha-234651) KVM machine creation complete!
	I0731 16:59:24.235572   26392 main.go:141] libmachine: (ha-234651) Calling .GetConfigRaw
	I0731 16:59:24.236082   26392 main.go:141] libmachine: (ha-234651) Calling .DriverName
	I0731 16:59:24.236267   26392 main.go:141] libmachine: (ha-234651) Calling .DriverName
	I0731 16:59:24.236428   26392 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0731 16:59:24.236441   26392 main.go:141] libmachine: (ha-234651) Calling .GetState
	I0731 16:59:24.237676   26392 main.go:141] libmachine: Detecting operating system of created instance...
	I0731 16:59:24.237689   26392 main.go:141] libmachine: Waiting for SSH to be available...
	I0731 16:59:24.237694   26392 main.go:141] libmachine: Getting to WaitForSSH function...
	I0731 16:59:24.237700   26392 main.go:141] libmachine: (ha-234651) Calling .GetSSHHostname
	I0731 16:59:24.239709   26392 main.go:141] libmachine: (ha-234651) DBG | domain ha-234651 has defined MAC address 52:54:00:20:60:53 in network mk-ha-234651
	I0731 16:59:24.240031   26392 main.go:141] libmachine: (ha-234651) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:60:53", ip: ""} in network mk-ha-234651: {Iface:virbr1 ExpiryTime:2024-07-31 17:59:15 +0000 UTC Type:0 Mac:52:54:00:20:60:53 Iaid: IPaddr:192.168.39.243 Prefix:24 Hostname:ha-234651 Clientid:01:52:54:00:20:60:53}
	I0731 16:59:24.240057   26392 main.go:141] libmachine: (ha-234651) DBG | domain ha-234651 has defined IP address 192.168.39.243 and MAC address 52:54:00:20:60:53 in network mk-ha-234651
	I0731 16:59:24.240225   26392 main.go:141] libmachine: (ha-234651) Calling .GetSSHPort
	I0731 16:59:24.240399   26392 main.go:141] libmachine: (ha-234651) Calling .GetSSHKeyPath
	I0731 16:59:24.240522   26392 main.go:141] libmachine: (ha-234651) Calling .GetSSHKeyPath
	I0731 16:59:24.240650   26392 main.go:141] libmachine: (ha-234651) Calling .GetSSHUsername
	I0731 16:59:24.240778   26392 main.go:141] libmachine: Using SSH client type: native
	I0731 16:59:24.240957   26392 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.243 22 <nil> <nil>}
	I0731 16:59:24.240967   26392 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0731 16:59:24.342185   26392 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0731 16:59:24.342206   26392 main.go:141] libmachine: Detecting the provisioner...
	I0731 16:59:24.342216   26392 main.go:141] libmachine: (ha-234651) Calling .GetSSHHostname
	I0731 16:59:24.344783   26392 main.go:141] libmachine: (ha-234651) DBG | domain ha-234651 has defined MAC address 52:54:00:20:60:53 in network mk-ha-234651
	I0731 16:59:24.345095   26392 main.go:141] libmachine: (ha-234651) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:60:53", ip: ""} in network mk-ha-234651: {Iface:virbr1 ExpiryTime:2024-07-31 17:59:15 +0000 UTC Type:0 Mac:52:54:00:20:60:53 Iaid: IPaddr:192.168.39.243 Prefix:24 Hostname:ha-234651 Clientid:01:52:54:00:20:60:53}
	I0731 16:59:24.345121   26392 main.go:141] libmachine: (ha-234651) DBG | domain ha-234651 has defined IP address 192.168.39.243 and MAC address 52:54:00:20:60:53 in network mk-ha-234651
	I0731 16:59:24.345282   26392 main.go:141] libmachine: (ha-234651) Calling .GetSSHPort
	I0731 16:59:24.345454   26392 main.go:141] libmachine: (ha-234651) Calling .GetSSHKeyPath
	I0731 16:59:24.345623   26392 main.go:141] libmachine: (ha-234651) Calling .GetSSHKeyPath
	I0731 16:59:24.345721   26392 main.go:141] libmachine: (ha-234651) Calling .GetSSHUsername
	I0731 16:59:24.345887   26392 main.go:141] libmachine: Using SSH client type: native
	I0731 16:59:24.346058   26392 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.243 22 <nil> <nil>}
	I0731 16:59:24.346068   26392 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0731 16:59:24.451456   26392 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0731 16:59:24.451514   26392 main.go:141] libmachine: found compatible host: buildroot
	I0731 16:59:24.451520   26392 main.go:141] libmachine: Provisioning with buildroot...
	I0731 16:59:24.451527   26392 main.go:141] libmachine: (ha-234651) Calling .GetMachineName
	I0731 16:59:24.451760   26392 buildroot.go:166] provisioning hostname "ha-234651"
	I0731 16:59:24.451784   26392 main.go:141] libmachine: (ha-234651) Calling .GetMachineName
	I0731 16:59:24.451944   26392 main.go:141] libmachine: (ha-234651) Calling .GetSSHHostname
	I0731 16:59:24.454316   26392 main.go:141] libmachine: (ha-234651) DBG | domain ha-234651 has defined MAC address 52:54:00:20:60:53 in network mk-ha-234651
	I0731 16:59:24.454697   26392 main.go:141] libmachine: (ha-234651) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:60:53", ip: ""} in network mk-ha-234651: {Iface:virbr1 ExpiryTime:2024-07-31 17:59:15 +0000 UTC Type:0 Mac:52:54:00:20:60:53 Iaid: IPaddr:192.168.39.243 Prefix:24 Hostname:ha-234651 Clientid:01:52:54:00:20:60:53}
	I0731 16:59:24.454719   26392 main.go:141] libmachine: (ha-234651) DBG | domain ha-234651 has defined IP address 192.168.39.243 and MAC address 52:54:00:20:60:53 in network mk-ha-234651
	I0731 16:59:24.454943   26392 main.go:141] libmachine: (ha-234651) Calling .GetSSHPort
	I0731 16:59:24.455133   26392 main.go:141] libmachine: (ha-234651) Calling .GetSSHKeyPath
	I0731 16:59:24.455260   26392 main.go:141] libmachine: (ha-234651) Calling .GetSSHKeyPath
	I0731 16:59:24.455450   26392 main.go:141] libmachine: (ha-234651) Calling .GetSSHUsername
	I0731 16:59:24.455628   26392 main.go:141] libmachine: Using SSH client type: native
	I0731 16:59:24.455820   26392 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.243 22 <nil> <nil>}
	I0731 16:59:24.455838   26392 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-234651 && echo "ha-234651" | sudo tee /etc/hostname
	I0731 16:59:24.572098   26392 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-234651
	
	I0731 16:59:24.572125   26392 main.go:141] libmachine: (ha-234651) Calling .GetSSHHostname
	I0731 16:59:24.574462   26392 main.go:141] libmachine: (ha-234651) DBG | domain ha-234651 has defined MAC address 52:54:00:20:60:53 in network mk-ha-234651
	I0731 16:59:24.574833   26392 main.go:141] libmachine: (ha-234651) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:60:53", ip: ""} in network mk-ha-234651: {Iface:virbr1 ExpiryTime:2024-07-31 17:59:15 +0000 UTC Type:0 Mac:52:54:00:20:60:53 Iaid: IPaddr:192.168.39.243 Prefix:24 Hostname:ha-234651 Clientid:01:52:54:00:20:60:53}
	I0731 16:59:24.574855   26392 main.go:141] libmachine: (ha-234651) DBG | domain ha-234651 has defined IP address 192.168.39.243 and MAC address 52:54:00:20:60:53 in network mk-ha-234651
	I0731 16:59:24.575006   26392 main.go:141] libmachine: (ha-234651) Calling .GetSSHPort
	I0731 16:59:24.575203   26392 main.go:141] libmachine: (ha-234651) Calling .GetSSHKeyPath
	I0731 16:59:24.575334   26392 main.go:141] libmachine: (ha-234651) Calling .GetSSHKeyPath
	I0731 16:59:24.575488   26392 main.go:141] libmachine: (ha-234651) Calling .GetSSHUsername
	I0731 16:59:24.575626   26392 main.go:141] libmachine: Using SSH client type: native
	I0731 16:59:24.575805   26392 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.243 22 <nil> <nil>}
	I0731 16:59:24.575823   26392 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-234651' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-234651/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-234651' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0731 16:59:24.687017   26392 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0731 16:59:24.687044   26392 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19349-8084/.minikube CaCertPath:/home/jenkins/minikube-integration/19349-8084/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19349-8084/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19349-8084/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19349-8084/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19349-8084/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19349-8084/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19349-8084/.minikube}
	I0731 16:59:24.687092   26392 buildroot.go:174] setting up certificates
	I0731 16:59:24.687144   26392 provision.go:84] configureAuth start
	I0731 16:59:24.687262   26392 main.go:141] libmachine: (ha-234651) Calling .GetMachineName
	I0731 16:59:24.687587   26392 main.go:141] libmachine: (ha-234651) Calling .GetIP
	I0731 16:59:24.690308   26392 main.go:141] libmachine: (ha-234651) DBG | domain ha-234651 has defined MAC address 52:54:00:20:60:53 in network mk-ha-234651
	I0731 16:59:24.690636   26392 main.go:141] libmachine: (ha-234651) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:60:53", ip: ""} in network mk-ha-234651: {Iface:virbr1 ExpiryTime:2024-07-31 17:59:15 +0000 UTC Type:0 Mac:52:54:00:20:60:53 Iaid: IPaddr:192.168.39.243 Prefix:24 Hostname:ha-234651 Clientid:01:52:54:00:20:60:53}
	I0731 16:59:24.690668   26392 main.go:141] libmachine: (ha-234651) DBG | domain ha-234651 has defined IP address 192.168.39.243 and MAC address 52:54:00:20:60:53 in network mk-ha-234651
	I0731 16:59:24.690787   26392 main.go:141] libmachine: (ha-234651) Calling .GetSSHHostname
	I0731 16:59:24.692980   26392 main.go:141] libmachine: (ha-234651) DBG | domain ha-234651 has defined MAC address 52:54:00:20:60:53 in network mk-ha-234651
	I0731 16:59:24.693269   26392 main.go:141] libmachine: (ha-234651) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:60:53", ip: ""} in network mk-ha-234651: {Iface:virbr1 ExpiryTime:2024-07-31 17:59:15 +0000 UTC Type:0 Mac:52:54:00:20:60:53 Iaid: IPaddr:192.168.39.243 Prefix:24 Hostname:ha-234651 Clientid:01:52:54:00:20:60:53}
	I0731 16:59:24.693290   26392 main.go:141] libmachine: (ha-234651) DBG | domain ha-234651 has defined IP address 192.168.39.243 and MAC address 52:54:00:20:60:53 in network mk-ha-234651
	I0731 16:59:24.693408   26392 provision.go:143] copyHostCerts
	I0731 16:59:24.693436   26392 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19349-8084/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19349-8084/.minikube/ca.pem
	I0731 16:59:24.693474   26392 exec_runner.go:144] found /home/jenkins/minikube-integration/19349-8084/.minikube/ca.pem, removing ...
	I0731 16:59:24.693486   26392 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19349-8084/.minikube/ca.pem
	I0731 16:59:24.693564   26392 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19349-8084/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19349-8084/.minikube/ca.pem (1082 bytes)
	I0731 16:59:24.693693   26392 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19349-8084/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19349-8084/.minikube/cert.pem
	I0731 16:59:24.693720   26392 exec_runner.go:144] found /home/jenkins/minikube-integration/19349-8084/.minikube/cert.pem, removing ...
	I0731 16:59:24.693730   26392 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19349-8084/.minikube/cert.pem
	I0731 16:59:24.693769   26392 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19349-8084/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19349-8084/.minikube/cert.pem (1123 bytes)
	I0731 16:59:24.693843   26392 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19349-8084/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19349-8084/.minikube/key.pem
	I0731 16:59:24.693865   26392 exec_runner.go:144] found /home/jenkins/minikube-integration/19349-8084/.minikube/key.pem, removing ...
	I0731 16:59:24.693872   26392 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19349-8084/.minikube/key.pem
	I0731 16:59:24.693904   26392 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19349-8084/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19349-8084/.minikube/key.pem (1675 bytes)
	I0731 16:59:24.693971   26392 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19349-8084/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19349-8084/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19349-8084/.minikube/certs/ca-key.pem org=jenkins.ha-234651 san=[127.0.0.1 192.168.39.243 ha-234651 localhost minikube]
	I0731 16:59:24.825150   26392 provision.go:177] copyRemoteCerts
	I0731 16:59:24.825214   26392 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0731 16:59:24.825237   26392 main.go:141] libmachine: (ha-234651) Calling .GetSSHHostname
	I0731 16:59:24.828022   26392 main.go:141] libmachine: (ha-234651) DBG | domain ha-234651 has defined MAC address 52:54:00:20:60:53 in network mk-ha-234651
	I0731 16:59:24.828285   26392 main.go:141] libmachine: (ha-234651) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:60:53", ip: ""} in network mk-ha-234651: {Iface:virbr1 ExpiryTime:2024-07-31 17:59:15 +0000 UTC Type:0 Mac:52:54:00:20:60:53 Iaid: IPaddr:192.168.39.243 Prefix:24 Hostname:ha-234651 Clientid:01:52:54:00:20:60:53}
	I0731 16:59:24.828307   26392 main.go:141] libmachine: (ha-234651) DBG | domain ha-234651 has defined IP address 192.168.39.243 and MAC address 52:54:00:20:60:53 in network mk-ha-234651
	I0731 16:59:24.828513   26392 main.go:141] libmachine: (ha-234651) Calling .GetSSHPort
	I0731 16:59:24.828654   26392 main.go:141] libmachine: (ha-234651) Calling .GetSSHKeyPath
	I0731 16:59:24.828804   26392 main.go:141] libmachine: (ha-234651) Calling .GetSSHUsername
	I0731 16:59:24.828951   26392 sshutil.go:53] new ssh client: &{IP:192.168.39.243 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19349-8084/.minikube/machines/ha-234651/id_rsa Username:docker}
	I0731 16:59:24.908983   26392 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19349-8084/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0731 16:59:24.909061   26392 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19349-8084/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0731 16:59:24.930932   26392 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19349-8084/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0731 16:59:24.931007   26392 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19349-8084/.minikube/machines/server.pem --> /etc/docker/server.pem (1196 bytes)
	I0731 16:59:24.952360   26392 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19349-8084/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0731 16:59:24.952415   26392 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19349-8084/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0731 16:59:24.973200   26392 provision.go:87] duration metric: took 285.951239ms to configureAuth
	I0731 16:59:24.973235   26392 buildroot.go:189] setting minikube options for container-runtime
	I0731 16:59:24.973426   26392 config.go:182] Loaded profile config "ha-234651": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0731 16:59:24.973500   26392 main.go:141] libmachine: (ha-234651) Calling .GetSSHHostname
	I0731 16:59:24.975877   26392 main.go:141] libmachine: (ha-234651) DBG | domain ha-234651 has defined MAC address 52:54:00:20:60:53 in network mk-ha-234651
	I0731 16:59:24.976220   26392 main.go:141] libmachine: (ha-234651) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:60:53", ip: ""} in network mk-ha-234651: {Iface:virbr1 ExpiryTime:2024-07-31 17:59:15 +0000 UTC Type:0 Mac:52:54:00:20:60:53 Iaid: IPaddr:192.168.39.243 Prefix:24 Hostname:ha-234651 Clientid:01:52:54:00:20:60:53}
	I0731 16:59:24.976239   26392 main.go:141] libmachine: (ha-234651) DBG | domain ha-234651 has defined IP address 192.168.39.243 and MAC address 52:54:00:20:60:53 in network mk-ha-234651
	I0731 16:59:24.976400   26392 main.go:141] libmachine: (ha-234651) Calling .GetSSHPort
	I0731 16:59:24.976556   26392 main.go:141] libmachine: (ha-234651) Calling .GetSSHKeyPath
	I0731 16:59:24.976698   26392 main.go:141] libmachine: (ha-234651) Calling .GetSSHKeyPath
	I0731 16:59:24.976814   26392 main.go:141] libmachine: (ha-234651) Calling .GetSSHUsername
	I0731 16:59:24.976947   26392 main.go:141] libmachine: Using SSH client type: native
	I0731 16:59:24.977123   26392 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.243 22 <nil> <nil>}
	I0731 16:59:24.977145   26392 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0731 16:59:25.232552   26392 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0731 16:59:25.232574   26392 main.go:141] libmachine: Checking connection to Docker...
	I0731 16:59:25.232581   26392 main.go:141] libmachine: (ha-234651) Calling .GetURL
	I0731 16:59:25.233847   26392 main.go:141] libmachine: (ha-234651) DBG | Using libvirt version 6000000
	I0731 16:59:25.235805   26392 main.go:141] libmachine: (ha-234651) DBG | domain ha-234651 has defined MAC address 52:54:00:20:60:53 in network mk-ha-234651
	I0731 16:59:25.236125   26392 main.go:141] libmachine: (ha-234651) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:60:53", ip: ""} in network mk-ha-234651: {Iface:virbr1 ExpiryTime:2024-07-31 17:59:15 +0000 UTC Type:0 Mac:52:54:00:20:60:53 Iaid: IPaddr:192.168.39.243 Prefix:24 Hostname:ha-234651 Clientid:01:52:54:00:20:60:53}
	I0731 16:59:25.236147   26392 main.go:141] libmachine: (ha-234651) DBG | domain ha-234651 has defined IP address 192.168.39.243 and MAC address 52:54:00:20:60:53 in network mk-ha-234651
	I0731 16:59:25.236299   26392 main.go:141] libmachine: Docker is up and running!
	I0731 16:59:25.236312   26392 main.go:141] libmachine: Reticulating splines...
	I0731 16:59:25.236317   26392 client.go:171] duration metric: took 23.06346261s to LocalClient.Create
	I0731 16:59:25.236342   26392 start.go:167] duration metric: took 23.063527006s to libmachine.API.Create "ha-234651"
	I0731 16:59:25.236351   26392 start.go:293] postStartSetup for "ha-234651" (driver="kvm2")
	I0731 16:59:25.236360   26392 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0731 16:59:25.236372   26392 main.go:141] libmachine: (ha-234651) Calling .DriverName
	I0731 16:59:25.236626   26392 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0731 16:59:25.236651   26392 main.go:141] libmachine: (ha-234651) Calling .GetSSHHostname
	I0731 16:59:25.238593   26392 main.go:141] libmachine: (ha-234651) DBG | domain ha-234651 has defined MAC address 52:54:00:20:60:53 in network mk-ha-234651
	I0731 16:59:25.238936   26392 main.go:141] libmachine: (ha-234651) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:60:53", ip: ""} in network mk-ha-234651: {Iface:virbr1 ExpiryTime:2024-07-31 17:59:15 +0000 UTC Type:0 Mac:52:54:00:20:60:53 Iaid: IPaddr:192.168.39.243 Prefix:24 Hostname:ha-234651 Clientid:01:52:54:00:20:60:53}
	I0731 16:59:25.238974   26392 main.go:141] libmachine: (ha-234651) DBG | domain ha-234651 has defined IP address 192.168.39.243 and MAC address 52:54:00:20:60:53 in network mk-ha-234651
	I0731 16:59:25.239086   26392 main.go:141] libmachine: (ha-234651) Calling .GetSSHPort
	I0731 16:59:25.239260   26392 main.go:141] libmachine: (ha-234651) Calling .GetSSHKeyPath
	I0731 16:59:25.239404   26392 main.go:141] libmachine: (ha-234651) Calling .GetSSHUsername
	I0731 16:59:25.239540   26392 sshutil.go:53] new ssh client: &{IP:192.168.39.243 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19349-8084/.minikube/machines/ha-234651/id_rsa Username:docker}
	I0731 16:59:25.320880   26392 ssh_runner.go:195] Run: cat /etc/os-release
	I0731 16:59:25.324680   26392 info.go:137] Remote host: Buildroot 2023.02.9
	I0731 16:59:25.324703   26392 filesync.go:126] Scanning /home/jenkins/minikube-integration/19349-8084/.minikube/addons for local assets ...
	I0731 16:59:25.324770   26392 filesync.go:126] Scanning /home/jenkins/minikube-integration/19349-8084/.minikube/files for local assets ...
	I0731 16:59:25.324875   26392 filesync.go:149] local asset: /home/jenkins/minikube-integration/19349-8084/.minikube/files/etc/ssl/certs/152592.pem -> 152592.pem in /etc/ssl/certs
	I0731 16:59:25.324887   26392 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19349-8084/.minikube/files/etc/ssl/certs/152592.pem -> /etc/ssl/certs/152592.pem
	I0731 16:59:25.325012   26392 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0731 16:59:25.333559   26392 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19349-8084/.minikube/files/etc/ssl/certs/152592.pem --> /etc/ssl/certs/152592.pem (1708 bytes)
	I0731 16:59:25.355130   26392 start.go:296] duration metric: took 118.766984ms for postStartSetup
	I0731 16:59:25.355184   26392 main.go:141] libmachine: (ha-234651) Calling .GetConfigRaw
	I0731 16:59:25.355771   26392 main.go:141] libmachine: (ha-234651) Calling .GetIP
	I0731 16:59:25.358270   26392 main.go:141] libmachine: (ha-234651) DBG | domain ha-234651 has defined MAC address 52:54:00:20:60:53 in network mk-ha-234651
	I0731 16:59:25.358576   26392 main.go:141] libmachine: (ha-234651) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:60:53", ip: ""} in network mk-ha-234651: {Iface:virbr1 ExpiryTime:2024-07-31 17:59:15 +0000 UTC Type:0 Mac:52:54:00:20:60:53 Iaid: IPaddr:192.168.39.243 Prefix:24 Hostname:ha-234651 Clientid:01:52:54:00:20:60:53}
	I0731 16:59:25.358600   26392 main.go:141] libmachine: (ha-234651) DBG | domain ha-234651 has defined IP address 192.168.39.243 and MAC address 52:54:00:20:60:53 in network mk-ha-234651
	I0731 16:59:25.358857   26392 profile.go:143] Saving config to /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/ha-234651/config.json ...
	I0731 16:59:25.359033   26392 start.go:128] duration metric: took 23.204391608s to createHost
	I0731 16:59:25.359054   26392 main.go:141] libmachine: (ha-234651) Calling .GetSSHHostname
	I0731 16:59:25.361175   26392 main.go:141] libmachine: (ha-234651) DBG | domain ha-234651 has defined MAC address 52:54:00:20:60:53 in network mk-ha-234651
	I0731 16:59:25.361424   26392 main.go:141] libmachine: (ha-234651) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:60:53", ip: ""} in network mk-ha-234651: {Iface:virbr1 ExpiryTime:2024-07-31 17:59:15 +0000 UTC Type:0 Mac:52:54:00:20:60:53 Iaid: IPaddr:192.168.39.243 Prefix:24 Hostname:ha-234651 Clientid:01:52:54:00:20:60:53}
	I0731 16:59:25.361449   26392 main.go:141] libmachine: (ha-234651) DBG | domain ha-234651 has defined IP address 192.168.39.243 and MAC address 52:54:00:20:60:53 in network mk-ha-234651
	I0731 16:59:25.361646   26392 main.go:141] libmachine: (ha-234651) Calling .GetSSHPort
	I0731 16:59:25.361799   26392 main.go:141] libmachine: (ha-234651) Calling .GetSSHKeyPath
	I0731 16:59:25.361951   26392 main.go:141] libmachine: (ha-234651) Calling .GetSSHKeyPath
	I0731 16:59:25.362075   26392 main.go:141] libmachine: (ha-234651) Calling .GetSSHUsername
	I0731 16:59:25.362211   26392 main.go:141] libmachine: Using SSH client type: native
	I0731 16:59:25.362444   26392 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.243 22 <nil> <nil>}
	I0731 16:59:25.362466   26392 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0731 16:59:25.467488   26392 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722445165.445964437
	
	I0731 16:59:25.467508   26392 fix.go:216] guest clock: 1722445165.445964437
	I0731 16:59:25.467525   26392 fix.go:229] Guest: 2024-07-31 16:59:25.445964437 +0000 UTC Remote: 2024-07-31 16:59:25.359045152 +0000 UTC m=+23.305280078 (delta=86.919285ms)
	I0731 16:59:25.467549   26392 fix.go:200] guest clock delta is within tolerance: 86.919285ms
	I0731 16:59:25.467559   26392 start.go:83] releasing machines lock for "ha-234651", held for 23.312997688s
	I0731 16:59:25.467581   26392 main.go:141] libmachine: (ha-234651) Calling .DriverName
	I0731 16:59:25.467827   26392 main.go:141] libmachine: (ha-234651) Calling .GetIP
	I0731 16:59:25.470269   26392 main.go:141] libmachine: (ha-234651) DBG | domain ha-234651 has defined MAC address 52:54:00:20:60:53 in network mk-ha-234651
	I0731 16:59:25.470547   26392 main.go:141] libmachine: (ha-234651) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:60:53", ip: ""} in network mk-ha-234651: {Iface:virbr1 ExpiryTime:2024-07-31 17:59:15 +0000 UTC Type:0 Mac:52:54:00:20:60:53 Iaid: IPaddr:192.168.39.243 Prefix:24 Hostname:ha-234651 Clientid:01:52:54:00:20:60:53}
	I0731 16:59:25.470588   26392 main.go:141] libmachine: (ha-234651) DBG | domain ha-234651 has defined IP address 192.168.39.243 and MAC address 52:54:00:20:60:53 in network mk-ha-234651
	I0731 16:59:25.470708   26392 main.go:141] libmachine: (ha-234651) Calling .DriverName
	I0731 16:59:25.471160   26392 main.go:141] libmachine: (ha-234651) Calling .DriverName
	I0731 16:59:25.471315   26392 main.go:141] libmachine: (ha-234651) Calling .DriverName
	I0731 16:59:25.471412   26392 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0731 16:59:25.471447   26392 main.go:141] libmachine: (ha-234651) Calling .GetSSHHostname
	I0731 16:59:25.471500   26392 ssh_runner.go:195] Run: cat /version.json
	I0731 16:59:25.471523   26392 main.go:141] libmachine: (ha-234651) Calling .GetSSHHostname
	I0731 16:59:25.473900   26392 main.go:141] libmachine: (ha-234651) DBG | domain ha-234651 has defined MAC address 52:54:00:20:60:53 in network mk-ha-234651
	I0731 16:59:25.473923   26392 main.go:141] libmachine: (ha-234651) DBG | domain ha-234651 has defined MAC address 52:54:00:20:60:53 in network mk-ha-234651
	I0731 16:59:25.474275   26392 main.go:141] libmachine: (ha-234651) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:60:53", ip: ""} in network mk-ha-234651: {Iface:virbr1 ExpiryTime:2024-07-31 17:59:15 +0000 UTC Type:0 Mac:52:54:00:20:60:53 Iaid: IPaddr:192.168.39.243 Prefix:24 Hostname:ha-234651 Clientid:01:52:54:00:20:60:53}
	I0731 16:59:25.474312   26392 main.go:141] libmachine: (ha-234651) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:60:53", ip: ""} in network mk-ha-234651: {Iface:virbr1 ExpiryTime:2024-07-31 17:59:15 +0000 UTC Type:0 Mac:52:54:00:20:60:53 Iaid: IPaddr:192.168.39.243 Prefix:24 Hostname:ha-234651 Clientid:01:52:54:00:20:60:53}
	I0731 16:59:25.474337   26392 main.go:141] libmachine: (ha-234651) DBG | domain ha-234651 has defined IP address 192.168.39.243 and MAC address 52:54:00:20:60:53 in network mk-ha-234651
	I0731 16:59:25.474398   26392 main.go:141] libmachine: (ha-234651) DBG | domain ha-234651 has defined IP address 192.168.39.243 and MAC address 52:54:00:20:60:53 in network mk-ha-234651
	I0731 16:59:25.474471   26392 main.go:141] libmachine: (ha-234651) Calling .GetSSHPort
	I0731 16:59:25.474670   26392 main.go:141] libmachine: (ha-234651) Calling .GetSSHPort
	I0731 16:59:25.474679   26392 main.go:141] libmachine: (ha-234651) Calling .GetSSHKeyPath
	I0731 16:59:25.474819   26392 main.go:141] libmachine: (ha-234651) Calling .GetSSHUsername
	I0731 16:59:25.474835   26392 main.go:141] libmachine: (ha-234651) Calling .GetSSHKeyPath
	I0731 16:59:25.474970   26392 sshutil.go:53] new ssh client: &{IP:192.168.39.243 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19349-8084/.minikube/machines/ha-234651/id_rsa Username:docker}
	I0731 16:59:25.474978   26392 main.go:141] libmachine: (ha-234651) Calling .GetSSHUsername
	I0731 16:59:25.475151   26392 sshutil.go:53] new ssh client: &{IP:192.168.39.243 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19349-8084/.minikube/machines/ha-234651/id_rsa Username:docker}
	I0731 16:59:25.577098   26392 ssh_runner.go:195] Run: systemctl --version
	I0731 16:59:25.582747   26392 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0731 16:59:25.737662   26392 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0731 16:59:25.743019   26392 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0731 16:59:25.743081   26392 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0731 16:59:25.758785   26392 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0731 16:59:25.758804   26392 start.go:495] detecting cgroup driver to use...
	I0731 16:59:25.758859   26392 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0731 16:59:25.773597   26392 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0731 16:59:25.786568   26392 docker.go:217] disabling cri-docker service (if available) ...
	I0731 16:59:25.786640   26392 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0731 16:59:25.799385   26392 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0731 16:59:25.813818   26392 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0731 16:59:25.921385   26392 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0731 16:59:26.059608   26392 docker.go:233] disabling docker service ...
	I0731 16:59:26.059699   26392 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0731 16:59:26.073467   26392 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0731 16:59:26.085380   26392 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0731 16:59:26.225351   26392 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0731 16:59:26.343719   26392 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0731 16:59:26.356563   26392 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0731 16:59:26.372956   26392 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0731 16:59:26.373021   26392 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 16:59:26.382181   26392 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0731 16:59:26.382235   26392 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 16:59:26.391652   26392 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 16:59:26.401109   26392 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 16:59:26.410465   26392 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0731 16:59:26.420129   26392 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 16:59:26.429507   26392 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 16:59:26.445726   26392 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 16:59:26.455319   26392 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0731 16:59:26.464186   26392 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0731 16:59:26.464236   26392 ssh_runner.go:195] Run: sudo modprobe br_netfilter
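
The pattern in the lines above, probing net.bridge.bridge-nf-call-iptables and falling back to loading br_netfilter when the sysctl key is absent, can be sketched as a small exec wrapper (command names come straight from the log; the function itself is illustrative, not minikube's code):

package main

import (
	"fmt"
	"os/exec"
)

// ensureBridgeNetfilter checks the sysctl key and, if it is missing,
// loads the br_netfilter kernel module so the key appears.
func ensureBridgeNetfilter() error {
	if err := exec.Command("sudo", "sysctl", "net.bridge.bridge-nf-call-iptables").Run(); err == nil {
		return nil // key already present, nothing to do
	}
	// sysctl failed (status 255 in the log above): load the module instead.
	if err := exec.Command("sudo", "modprobe", "br_netfilter").Run(); err != nil {
		return fmt.Errorf("modprobe br_netfilter: %w", err)
	}
	return nil
}

func main() {
	if err := ensureBridgeNetfilter(); err != nil {
		fmt.Println("netfilter setup failed:", err)
	}
}
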
	I0731 16:59:26.476232   26392 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0731 16:59:26.488515   26392 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0731 16:59:26.599047   26392 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0731 16:59:26.727563   26392 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0731 16:59:26.727633   26392 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0731 16:59:26.731803   26392 start.go:563] Will wait 60s for crictl version
	I0731 16:59:26.731863   26392 ssh_runner.go:195] Run: which crictl
	I0731 16:59:26.735353   26392 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0731 16:59:26.770458   26392 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0731 16:59:26.770538   26392 ssh_runner.go:195] Run: crio --version
	I0731 16:59:26.796805   26392 ssh_runner.go:195] Run: crio --version
	I0731 16:59:26.826014   26392 out.go:177] * Preparing Kubernetes v1.30.3 on CRI-O 1.29.1 ...
	I0731 16:59:26.827253   26392 main.go:141] libmachine: (ha-234651) Calling .GetIP
	I0731 16:59:26.829618   26392 main.go:141] libmachine: (ha-234651) DBG | domain ha-234651 has defined MAC address 52:54:00:20:60:53 in network mk-ha-234651
	I0731 16:59:26.829958   26392 main.go:141] libmachine: (ha-234651) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:60:53", ip: ""} in network mk-ha-234651: {Iface:virbr1 ExpiryTime:2024-07-31 17:59:15 +0000 UTC Type:0 Mac:52:54:00:20:60:53 Iaid: IPaddr:192.168.39.243 Prefix:24 Hostname:ha-234651 Clientid:01:52:54:00:20:60:53}
	I0731 16:59:26.829999   26392 main.go:141] libmachine: (ha-234651) DBG | domain ha-234651 has defined IP address 192.168.39.243 and MAC address 52:54:00:20:60:53 in network mk-ha-234651
	I0731 16:59:26.830170   26392 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0731 16:59:26.833815   26392 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0731 16:59:26.845446   26392 kubeadm.go:883] updating cluster {Name:ha-234651 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 Cl
usterName:ha-234651 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.243 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 M
ountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0731 16:59:26.845537   26392 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0731 16:59:26.845578   26392 ssh_runner.go:195] Run: sudo crictl images --output json
	I0731 16:59:26.877140   26392 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.3". assuming images are not preloaded.
	I0731 16:59:26.877195   26392 ssh_runner.go:195] Run: which lz4
	I0731 16:59:26.880717   26392 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19349-8084/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 -> /preloaded.tar.lz4
	I0731 16:59:26.880811   26392 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0731 16:59:26.884569   26392 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0731 16:59:26.884604   26392 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19349-8084/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (406200976 bytes)
	I0731 16:59:28.081998   26392 crio.go:462] duration metric: took 1.201214851s to copy over tarball
	I0731 16:59:28.082059   26392 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0731 16:59:30.220227   26392 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.138143947s)
	I0731 16:59:30.220261   26392 crio.go:469] duration metric: took 2.138235975s to extract the tarball
	I0731 16:59:30.220270   26392 ssh_runner.go:146] rm: /preloaded.tar.lz4
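
The preload handling above is a check-then-copy-then-extract sequence: stat the tarball on the guest, transfer it only when missing, untar it into /var with lz4 decompression, then delete it. A hedged sketch over local paths (the helper and paths are hypothetical; minikube's real version runs these commands through its ssh_runner and scps the tarball over SSH):

package main

import (
	"fmt"
	"os"
	"os/exec"
)

// deployPreload copies a preloaded image tarball into place and extracts it,
// skipping the copy when the target already exists. Paths are illustrative.
func deployPreload(src, dst, extractDir string) error {
	if _, err := os.Stat(dst); err == nil {
		fmt.Println("preload already present, skipping copy")
	} else {
		// In minikube this is an scp over the machine's SSH connection.
		if err := exec.Command("cp", src, dst).Run(); err != nil {
			return fmt.Errorf("copy preload: %w", err)
		}
	}
	// Same flags as the log: preserve security xattrs and decompress with lz4.
	if err := exec.Command("sudo", "tar", "--xattrs", "--xattrs-include", "security.capability",
		"-I", "lz4", "-C", extractDir, "-xf", dst).Run(); err != nil {
		return fmt.Errorf("extract preload: %w", err)
	}
	return os.Remove(dst)
}

func main() {
	_ = deployPreload("preloaded-images.tar.lz4", "/preloaded.tar.lz4", "/var")
}
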
	I0731 16:59:30.257971   26392 ssh_runner.go:195] Run: sudo crictl images --output json
	I0731 16:59:30.299500   26392 crio.go:514] all images are preloaded for cri-o runtime.
	I0731 16:59:30.299525   26392 cache_images.go:84] Images are preloaded, skipping loading
	I0731 16:59:30.299533   26392 kubeadm.go:934] updating node { 192.168.39.243 8443 v1.30.3 crio true true} ...
	I0731 16:59:30.299640   26392 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-234651 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.243
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:ha-234651 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0731 16:59:30.299706   26392 ssh_runner.go:195] Run: crio config
	I0731 16:59:30.341032   26392 cni.go:84] Creating CNI manager for ""
	I0731 16:59:30.341053   26392 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0731 16:59:30.341066   26392 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0731 16:59:30.341085   26392 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.243 APIServerPort:8443 KubernetesVersion:v1.30.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-234651 NodeName:ha-234651 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.243"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.243 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernete
s/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0731 16:59:30.341212   26392 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.243
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-234651"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.243
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.243"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0731 16:59:30.341238   26392 kube-vip.go:115] generating kube-vip config ...
	I0731 16:59:30.341273   26392 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0731 16:59:30.359265   26392 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0731 16:59:30.359355   26392 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/super-admin.conf"
	    name: kubeconfig
	status: {}
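
The kube-vip manifest printed above is generated from a template with the VIP address, port and image filled in before being written to /etc/kubernetes/manifests. A trimmed text/template sketch of that idea (the template and field names here are assumptions and heavily cut down, not minikube's actual kube-vip.go):

package main

import (
	"os"
	"text/template"
)

// vipConfig carries the few values that vary per cluster in the manifest above.
type vipConfig struct {
	Address string
	Port    string
	Image   string
}

// A trimmed-down stand-in for the full static-pod template.
const vipTmpl = `apiVersion: v1
kind: Pod
metadata:
  name: kube-vip
  namespace: kube-system
spec:
  containers:
  - args: ["manager"]
    env:
    - name: port
      value: "{{ .Port }}"
    - name: address
      value: {{ .Address }}
    image: {{ .Image }}
    name: kube-vip
  hostNetwork: true
`

func main() {
	t := template.Must(template.New("kube-vip").Parse(vipTmpl))
	// Values copied from the generated config in the log above.
	cfg := vipConfig{Address: "192.168.39.254", Port: "8443", Image: "ghcr.io/kube-vip/kube-vip:v0.8.0"}
	_ = t.Execute(os.Stdout, cfg)
}
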
	I0731 16:59:30.359418   26392 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0731 16:59:30.368615   26392 binaries.go:44] Found k8s binaries, skipping transfer
	I0731 16:59:30.368681   26392 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0731 16:59:30.377525   26392 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (309 bytes)
	I0731 16:59:30.392491   26392 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0731 16:59:30.407319   26392 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2153 bytes)
	I0731 16:59:30.422134   26392 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1447 bytes)
	I0731 16:59:30.436820   26392 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0731 16:59:30.440442   26392 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0731 16:59:30.451341   26392 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0731 16:59:30.581682   26392 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0731 16:59:30.597960   26392 certs.go:68] Setting up /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/ha-234651 for IP: 192.168.39.243
	I0731 16:59:30.597987   26392 certs.go:194] generating shared ca certs ...
	I0731 16:59:30.598009   26392 certs.go:226] acquiring lock for ca certs: {Name:mk49eed439085f9c30746706460ce213570d1997 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 16:59:30.598199   26392 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19349-8084/.minikube/ca.key
	I0731 16:59:30.598259   26392 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19349-8084/.minikube/proxy-client-ca.key
	I0731 16:59:30.598275   26392 certs.go:256] generating profile certs ...
	I0731 16:59:30.598341   26392 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/ha-234651/client.key
	I0731 16:59:30.598384   26392 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/ha-234651/client.crt with IP's: []
	I0731 16:59:30.700029   26392 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/ha-234651/client.crt ...
	I0731 16:59:30.700055   26392 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/ha-234651/client.crt: {Name:mk7ee64628046b1d2da8c67709ceb5f483c647c8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 16:59:30.700250   26392 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/ha-234651/client.key ...
	I0731 16:59:30.700268   26392 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/ha-234651/client.key: {Name:mk9a3b2bee7d0d6eb498143fed75ea79c6d5cd05 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 16:59:30.700383   26392 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/ha-234651/apiserver.key.414d6831
	I0731 16:59:30.700408   26392 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/ha-234651/apiserver.crt.414d6831 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.243 192.168.39.254]
	I0731 16:59:30.953973   26392 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/ha-234651/apiserver.crt.414d6831 ...
	I0731 16:59:30.954003   26392 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/ha-234651/apiserver.crt.414d6831: {Name:mk17313042b397a79965fb7698fed9783403c484 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 16:59:30.954153   26392 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/ha-234651/apiserver.key.414d6831 ...
	I0731 16:59:30.954165   26392 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/ha-234651/apiserver.key.414d6831: {Name:mk7b1e0e449b763530a552eb308f6593ad6d0ed2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 16:59:30.954236   26392 certs.go:381] copying /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/ha-234651/apiserver.crt.414d6831 -> /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/ha-234651/apiserver.crt
	I0731 16:59:30.954316   26392 certs.go:385] copying /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/ha-234651/apiserver.key.414d6831 -> /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/ha-234651/apiserver.key
	I0731 16:59:30.954381   26392 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/ha-234651/proxy-client.key
	I0731 16:59:30.954402   26392 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/ha-234651/proxy-client.crt with IP's: []
	I0731 16:59:31.190411   26392 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/ha-234651/proxy-client.crt ...
	I0731 16:59:31.190442   26392 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/ha-234651/proxy-client.crt: {Name:mkbe422e4a5b3ad16cdbcc06c237d001864e7f29 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 16:59:31.190605   26392 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/ha-234651/proxy-client.key ...
	I0731 16:59:31.190616   26392 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/ha-234651/proxy-client.key: {Name:mkd04cb40fa82623f4bd1825fcdb903f6f94bfe6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
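
The profile certificate steps above each generate a key pair, build a certificate with the listed IP SANs, sign it with the shared minikube CA, and write the .crt/.key pair under lock. A compact crypto/x509 sketch of that flow (self-contained, so it creates a throwaway CA; the real code loads the existing ca.crt/ca.key, and error handling is elided here for brevity):

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// Throwaway CA; minikube would reuse the existing ca.crt / ca.key here.
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().AddDate(10, 0, 0),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	caCert, _ := x509.ParseCertificate(caDER)

	// Leaf cert with the IP SANs seen in the apiserver cert generation above.
	key, _ := rsa.GenerateKey(rand.Reader, 2048)
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{CommonName: "minikube", Organization: []string{"system:masters"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().AddDate(3, 0, 0),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth, x509.ExtKeyUsageClientAuth},
		IPAddresses: []net.IP{
			net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"),
			net.ParseIP("10.0.0.1"), net.ParseIP("192.168.39.243"), net.ParseIP("192.168.39.254"),
		},
	}
	der, _ := x509.CreateCertificate(rand.Reader, tmpl, caCert, &key.PublicKey, caKey)

	// Write the .crt / .key pair, as the lock-protected WriteFile calls do above.
	crt, _ := os.Create("apiserver.crt")
	pem.Encode(crt, &pem.Block{Type: "CERTIFICATE", Bytes: der})
	crt.Close()
	keyOut, _ := os.Create("apiserver.key")
	pem.Encode(keyOut, &pem.Block{Type: "RSA PRIVATE KEY", Bytes: x509.MarshalPKCS1PrivateKey(key)})
	keyOut.Close()
}
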
	I0731 16:59:31.190677   26392 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19349-8084/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0731 16:59:31.190694   26392 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19349-8084/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0731 16:59:31.190705   26392 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19349-8084/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0731 16:59:31.190718   26392 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19349-8084/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0731 16:59:31.190731   26392 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/ha-234651/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0731 16:59:31.190744   26392 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/ha-234651/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0731 16:59:31.190757   26392 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/ha-234651/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0731 16:59:31.190769   26392 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/ha-234651/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0731 16:59:31.190821   26392 certs.go:484] found cert: /home/jenkins/minikube-integration/19349-8084/.minikube/certs/15259.pem (1338 bytes)
	W0731 16:59:31.190853   26392 certs.go:480] ignoring /home/jenkins/minikube-integration/19349-8084/.minikube/certs/15259_empty.pem, impossibly tiny 0 bytes
	I0731 16:59:31.190862   26392 certs.go:484] found cert: /home/jenkins/minikube-integration/19349-8084/.minikube/certs/ca-key.pem (1675 bytes)
	I0731 16:59:31.190884   26392 certs.go:484] found cert: /home/jenkins/minikube-integration/19349-8084/.minikube/certs/ca.pem (1082 bytes)
	I0731 16:59:31.190906   26392 certs.go:484] found cert: /home/jenkins/minikube-integration/19349-8084/.minikube/certs/cert.pem (1123 bytes)
	I0731 16:59:31.190930   26392 certs.go:484] found cert: /home/jenkins/minikube-integration/19349-8084/.minikube/certs/key.pem (1675 bytes)
	I0731 16:59:31.190972   26392 certs.go:484] found cert: /home/jenkins/minikube-integration/19349-8084/.minikube/files/etc/ssl/certs/152592.pem (1708 bytes)
	I0731 16:59:31.191001   26392 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19349-8084/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0731 16:59:31.191014   26392 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19349-8084/.minikube/certs/15259.pem -> /usr/share/ca-certificates/15259.pem
	I0731 16:59:31.191026   26392 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19349-8084/.minikube/files/etc/ssl/certs/152592.pem -> /usr/share/ca-certificates/152592.pem
	I0731 16:59:31.191561   26392 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19349-8084/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0731 16:59:31.219008   26392 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19349-8084/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0731 16:59:31.243811   26392 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19349-8084/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0731 16:59:31.268574   26392 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19349-8084/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0731 16:59:31.293150   26392 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/ha-234651/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0731 16:59:31.317997   26392 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/ha-234651/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0731 16:59:31.342785   26392 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/ha-234651/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0731 16:59:31.373077   26392 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/ha-234651/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0731 16:59:31.412217   26392 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19349-8084/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0731 16:59:31.439225   26392 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19349-8084/.minikube/certs/15259.pem --> /usr/share/ca-certificates/15259.pem (1338 bytes)
	I0731 16:59:31.462362   26392 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19349-8084/.minikube/files/etc/ssl/certs/152592.pem --> /usr/share/ca-certificates/152592.pem (1708 bytes)
	I0731 16:59:31.483715   26392 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0731 16:59:31.498726   26392 ssh_runner.go:195] Run: openssl version
	I0731 16:59:31.503960   26392 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0731 16:59:31.514138   26392 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0731 16:59:31.518194   26392 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 31 16:42 /usr/share/ca-certificates/minikubeCA.pem
	I0731 16:59:31.518274   26392 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0731 16:59:31.523607   26392 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0731 16:59:31.533617   26392 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/15259.pem && ln -fs /usr/share/ca-certificates/15259.pem /etc/ssl/certs/15259.pem"
	I0731 16:59:31.543851   26392 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/15259.pem
	I0731 16:59:31.547852   26392 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 31 16:54 /usr/share/ca-certificates/15259.pem
	I0731 16:59:31.547912   26392 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/15259.pem
	I0731 16:59:31.553469   26392 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/15259.pem /etc/ssl/certs/51391683.0"
	I0731 16:59:31.563615   26392 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/152592.pem && ln -fs /usr/share/ca-certificates/152592.pem /etc/ssl/certs/152592.pem"
	I0731 16:59:31.573678   26392 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/152592.pem
	I0731 16:59:31.577713   26392 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 31 16:54 /usr/share/ca-certificates/152592.pem
	I0731 16:59:31.577753   26392 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/152592.pem
	I0731 16:59:31.582846   26392 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/152592.pem /etc/ssl/certs/3ec20f2e.0"
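
The install pattern above for each CA bundle is: place it under /usr/share/ca-certificates, compute its OpenSSL subject hash, and symlink <hash>.0 in /etc/ssl/certs back to it. A small sketch that shells out to openssl the same way (illustrative only, and it needs the usual root permissions to touch /etc/ssl/certs):

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// linkCACert symlinks /etc/ssl/certs/<subject-hash>.0 to the given CA bundle,
// mirroring the `openssl x509 -hash` / `ln -fs` pair in the log above.
func linkCACert(certPath string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return fmt.Errorf("hash %s: %w", certPath, err)
	}
	hash := strings.TrimSpace(string(out))
	link := filepath.Join("/etc/ssl/certs", hash+".0")
	// Replace any stale link, like the `ln -fs` above (permissions permitting).
	_ = os.Remove(link)
	return os.Symlink(certPath, link)
}

func main() {
	if err := linkCACert("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
		fmt.Println(err)
	}
}
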
	I0731 16:59:31.593077   26392 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0731 16:59:31.596896   26392 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0731 16:59:31.596943   26392 kubeadm.go:392] StartCluster: {Name:ha-234651 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 Clust
erName:ha-234651 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.243 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 Moun
tType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0731 16:59:31.597021   26392 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0731 16:59:31.597103   26392 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0731 16:59:31.630641   26392 cri.go:89] found id: ""
	I0731 16:59:31.630714   26392 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0731 16:59:31.640096   26392 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0731 16:59:31.649935   26392 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0731 16:59:31.660459   26392 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0731 16:59:31.660477   26392 kubeadm.go:157] found existing configuration files:
	
	I0731 16:59:31.660528   26392 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0731 16:59:31.669310   26392 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0731 16:59:31.669357   26392 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0731 16:59:31.678192   26392 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0731 16:59:31.686853   26392 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0731 16:59:31.686922   26392 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0731 16:59:31.695910   26392 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0731 16:59:31.704263   26392 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0731 16:59:31.704311   26392 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0731 16:59:31.713139   26392 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0731 16:59:31.722155   26392 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0731 16:59:31.722221   26392 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
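
The four grep/rm pairs above implement one rule: a kubeconfig under /etc/kubernetes is kept only if it already references https://control-plane.minikube.internal:8443; otherwise it is removed before kubeadm init runs. Sketched as a loop (file list and endpoint are copied from the log; the helper itself is hypothetical):

package main

import (
	"bytes"
	"fmt"
	"os"
)

// cleanStaleConfigs removes kubeconfigs that do not reference the expected
// control-plane endpoint, so kubeadm init starts from a clean slate.
func cleanStaleConfigs(endpoint string, files []string) {
	for _, f := range files {
		data, err := os.ReadFile(f)
		if err != nil {
			continue // missing file: nothing to clean, same as the grep status-2 case above
		}
		if !bytes.Contains(data, []byte(endpoint)) {
			fmt.Printf("%s does not reference %s, removing\n", f, endpoint)
			_ = os.Remove(f)
		}
	}
}

func main() {
	cleanStaleConfigs("https://control-plane.minikube.internal:8443", []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	})
}
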
	I0731 16:59:31.731387   26392 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0731 16:59:31.832450   26392 kubeadm.go:310] [init] Using Kubernetes version: v1.30.3
	I0731 16:59:31.832542   26392 kubeadm.go:310] [preflight] Running pre-flight checks
	I0731 16:59:31.953896   26392 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0731 16:59:31.954043   26392 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0731 16:59:31.954158   26392 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0731 16:59:32.151343   26392 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0731 16:59:32.359125   26392 out.go:204]   - Generating certificates and keys ...
	I0731 16:59:32.359246   26392 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0731 16:59:32.359314   26392 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0731 16:59:32.359435   26392 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0731 16:59:32.377248   26392 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0731 16:59:32.677549   26392 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0731 16:59:32.867146   26392 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0731 16:59:33.360775   26392 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0731 16:59:33.360993   26392 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [ha-234651 localhost] and IPs [192.168.39.243 127.0.0.1 ::1]
	I0731 16:59:33.466112   26392 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0731 16:59:33.466409   26392 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [ha-234651 localhost] and IPs [192.168.39.243 127.0.0.1 ::1]
	I0731 16:59:33.625099   26392 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0731 16:59:33.962462   26392 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0731 16:59:34.432296   26392 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0731 16:59:34.432396   26392 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0731 16:59:34.697433   26392 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0731 16:59:34.764397   26392 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0731 16:59:34.908374   26392 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0731 16:59:34.990770   26392 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0731 16:59:35.091185   26392 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0731 16:59:35.092604   26392 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0731 16:59:35.096555   26392 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0731 16:59:35.145654   26392 out.go:204]   - Booting up control plane ...
	I0731 16:59:35.145821   26392 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0731 16:59:35.145942   26392 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0731 16:59:35.146050   26392 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0731 16:59:35.146209   26392 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0731 16:59:35.146339   26392 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0731 16:59:35.146418   26392 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0731 16:59:35.270078   26392 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0731 16:59:35.270238   26392 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0731 16:59:35.772279   26392 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 502.37404ms
	I0731 16:59:35.772411   26392 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0731 16:59:41.880564   26392 kubeadm.go:310] [api-check] The API server is healthy after 6.111372285s
	I0731 16:59:41.892891   26392 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0731 16:59:41.913605   26392 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0731 16:59:42.438762   26392 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0731 16:59:42.439017   26392 kubeadm.go:310] [mark-control-plane] Marking the node ha-234651 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0731 16:59:42.449883   26392 kubeadm.go:310] [bootstrap-token] Using token: nfptp5.vhfyienhf110vt3u
	I0731 16:59:42.451360   26392 out.go:204]   - Configuring RBAC rules ...
	I0731 16:59:42.451490   26392 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0731 16:59:42.458097   26392 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0731 16:59:42.468508   26392 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0731 16:59:42.471813   26392 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0731 16:59:42.474759   26392 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0731 16:59:42.478202   26392 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0731 16:59:42.498715   26392 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0731 16:59:42.764249   26392 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0731 16:59:43.287784   26392 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0731 16:59:43.287808   26392 kubeadm.go:310] 
	I0731 16:59:43.287870   26392 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0731 16:59:43.287903   26392 kubeadm.go:310] 
	I0731 16:59:43.288019   26392 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0731 16:59:43.288030   26392 kubeadm.go:310] 
	I0731 16:59:43.288092   26392 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0731 16:59:43.288172   26392 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0731 16:59:43.288259   26392 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0731 16:59:43.288280   26392 kubeadm.go:310] 
	I0731 16:59:43.288355   26392 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0731 16:59:43.288362   26392 kubeadm.go:310] 
	I0731 16:59:43.288431   26392 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0731 16:59:43.288440   26392 kubeadm.go:310] 
	I0731 16:59:43.288514   26392 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0731 16:59:43.288636   26392 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0731 16:59:43.288740   26392 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0731 16:59:43.288750   26392 kubeadm.go:310] 
	I0731 16:59:43.288863   26392 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0731 16:59:43.288980   26392 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0731 16:59:43.288989   26392 kubeadm.go:310] 
	I0731 16:59:43.289087   26392 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token nfptp5.vhfyienhf110vt3u \
	I0731 16:59:43.289260   26392 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:460b24b51d10a3760f2391542a8a89e401359854f437f922cbee3f2d8ed6c856 \
	I0731 16:59:43.289295   26392 kubeadm.go:310] 	--control-plane 
	I0731 16:59:43.289306   26392 kubeadm.go:310] 
	I0731 16:59:43.289443   26392 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0731 16:59:43.289461   26392 kubeadm.go:310] 
	I0731 16:59:43.289574   26392 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token nfptp5.vhfyienhf110vt3u \
	I0731 16:59:43.289714   26392 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:460b24b51d10a3760f2391542a8a89e401359854f437f922cbee3f2d8ed6c856 
	I0731 16:59:43.289863   26392 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0731 16:59:43.289898   26392 cni.go:84] Creating CNI manager for ""
	I0731 16:59:43.289910   26392 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0731 16:59:43.291714   26392 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0731 16:59:43.293067   26392 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0731 16:59:43.299040   26392 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.30.3/kubectl ...
	I0731 16:59:43.299057   26392 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0731 16:59:43.318479   26392 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0731 16:59:43.641272   26392 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0731 16:59:43.641365   26392 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-234651 minikube.k8s.io/updated_at=2024_07_31T16_59_43_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=1d737dad7efa60c56d30434fcd857dd3b14c91d9 minikube.k8s.io/name=ha-234651 minikube.k8s.io/primary=true
	I0731 16:59:43.641384   26392 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 16:59:43.763146   26392 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 16:59:43.763278   26392 ops.go:34] apiserver oom_adj: -16
	I0731 16:59:44.263961   26392 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 16:59:44.763593   26392 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 16:59:45.263430   26392 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 16:59:45.763237   26392 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 16:59:46.262918   26392 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 16:59:46.763733   26392 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 16:59:47.263588   26392 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 16:59:47.763260   26392 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 16:59:48.263680   26392 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 16:59:48.763380   26392 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 16:59:49.263682   26392 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 16:59:49.762871   26392 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 16:59:50.263309   26392 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 16:59:50.763587   26392 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 16:59:51.263226   26392 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 16:59:51.763668   26392 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 16:59:52.263227   26392 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 16:59:52.763318   26392 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 16:59:53.263205   26392 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 16:59:53.763797   26392 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 16:59:54.262996   26392 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 16:59:54.763149   26392 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 16:59:55.262996   26392 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 16:59:55.762890   26392 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 16:59:56.263910   26392 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 16:59:56.344269   26392 kubeadm.go:1113] duration metric: took 12.702990197s to wait for elevateKubeSystemPrivileges
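
The repeated `kubectl get sa default` calls above are a poll: the default service account only exists once the controller manager has created it, so the command is retried on a roughly 500ms cadence until it succeeds, and only then is the elapsed time reported. A simple version of that retry loop (the interval and timeout values are assumptions):

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// waitForDefaultServiceAccount polls `kubectl get sa default` until it
// succeeds or the timeout expires, returning how long the wait took.
func waitForDefaultServiceAccount(kubeconfig string, timeout time.Duration) (time.Duration, error) {
	start := time.Now()
	for {
		cmd := exec.Command("kubectl", "get", "sa", "default", "--kubeconfig", kubeconfig)
		if err := cmd.Run(); err == nil {
			return time.Since(start), nil
		}
		if time.Since(start) > timeout {
			return time.Since(start), fmt.Errorf("default service account not ready after %v", timeout)
		}
		time.Sleep(500 * time.Millisecond) // assumed interval, matching the cadence in the log
	}
}

func main() {
	elapsed, err := waitForDefaultServiceAccount("/var/lib/minikube/kubeconfig", 2*time.Minute)
	fmt.Println(elapsed, err)
}
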
	I0731 16:59:56.344312   26392 kubeadm.go:394] duration metric: took 24.747371577s to StartCluster
	I0731 16:59:56.344330   26392 settings.go:142] acquiring lock: {Name:mk8ac8269296bf0b887a00ff74caaad180005f89 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 16:59:56.344404   26392 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19349-8084/kubeconfig
	I0731 16:59:56.345043   26392 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19349-8084/kubeconfig: {Name:mkf2340bc1ada3c683f132aca37228d7588e1fdb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 16:59:56.345263   26392 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0731 16:59:56.345267   26392 start.go:233] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.168.39.243 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0731 16:59:56.345284   26392 start.go:241] waiting for startup goroutines ...
	I0731 16:59:56.345292   26392 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0731 16:59:56.345336   26392 addons.go:69] Setting storage-provisioner=true in profile "ha-234651"
	I0731 16:59:56.345353   26392 addons.go:69] Setting default-storageclass=true in profile "ha-234651"
	I0731 16:59:56.345366   26392 addons.go:234] Setting addon storage-provisioner=true in "ha-234651"
	I0731 16:59:56.345391   26392 host.go:66] Checking if "ha-234651" exists ...
	I0731 16:59:56.345398   26392 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ha-234651"
	I0731 16:59:56.345506   26392 config.go:182] Loaded profile config "ha-234651": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0731 16:59:56.345772   26392 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 16:59:56.345790   26392 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 16:59:56.345815   26392 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 16:59:56.345910   26392 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 16:59:56.361291   26392 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42931
	I0731 16:59:56.361349   26392 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45401
	I0731 16:59:56.361793   26392 main.go:141] libmachine: () Calling .GetVersion
	I0731 16:59:56.361835   26392 main.go:141] libmachine: () Calling .GetVersion
	I0731 16:59:56.362300   26392 main.go:141] libmachine: Using API Version  1
	I0731 16:59:56.362319   26392 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 16:59:56.362450   26392 main.go:141] libmachine: Using API Version  1
	I0731 16:59:56.362472   26392 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 16:59:56.362631   26392 main.go:141] libmachine: () Calling .GetMachineName
	I0731 16:59:56.362787   26392 main.go:141] libmachine: () Calling .GetMachineName
	I0731 16:59:56.362937   26392 main.go:141] libmachine: (ha-234651) Calling .GetState
	I0731 16:59:56.363241   26392 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 16:59:56.363286   26392 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 16:59:56.364992   26392 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19349-8084/kubeconfig
	I0731 16:59:56.365190   26392 kapi.go:59] client config for ha-234651: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19349-8084/.minikube/profiles/ha-234651/client.crt", KeyFile:"/home/jenkins/minikube-integration/19349-8084/.minikube/profiles/ha-234651/client.key", CAFile:"/home/jenkins/minikube-integration/19349-8084/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)},
UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1d02f40), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0731 16:59:56.365640   26392 cert_rotation.go:137] Starting client certificate rotation controller
	I0731 16:59:56.365778   26392 addons.go:234] Setting addon default-storageclass=true in "ha-234651"
	I0731 16:59:56.365818   26392 host.go:66] Checking if "ha-234651" exists ...
	I0731 16:59:56.366094   26392 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 16:59:56.366126   26392 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 16:59:56.377968   26392 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39165
	I0731 16:59:56.378432   26392 main.go:141] libmachine: () Calling .GetVersion
	I0731 16:59:56.378938   26392 main.go:141] libmachine: Using API Version  1
	I0731 16:59:56.378966   26392 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 16:59:56.379329   26392 main.go:141] libmachine: () Calling .GetMachineName
	I0731 16:59:56.379497   26392 main.go:141] libmachine: (ha-234651) Calling .GetState
	I0731 16:59:56.380724   26392 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35845
	I0731 16:59:56.381083   26392 main.go:141] libmachine: () Calling .GetVersion
	I0731 16:59:56.381396   26392 main.go:141] libmachine: (ha-234651) Calling .DriverName
	I0731 16:59:56.381537   26392 main.go:141] libmachine: Using API Version  1
	I0731 16:59:56.381558   26392 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 16:59:56.381875   26392 main.go:141] libmachine: () Calling .GetMachineName
	I0731 16:59:56.382348   26392 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 16:59:56.382384   26392 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 16:59:56.383775   26392 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0731 16:59:56.385342   26392 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0731 16:59:56.385362   26392 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0731 16:59:56.385381   26392 main.go:141] libmachine: (ha-234651) Calling .GetSSHHostname
	I0731 16:59:56.388154   26392 main.go:141] libmachine: (ha-234651) DBG | domain ha-234651 has defined MAC address 52:54:00:20:60:53 in network mk-ha-234651
	I0731 16:59:56.388571   26392 main.go:141] libmachine: (ha-234651) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:60:53", ip: ""} in network mk-ha-234651: {Iface:virbr1 ExpiryTime:2024-07-31 17:59:15 +0000 UTC Type:0 Mac:52:54:00:20:60:53 Iaid: IPaddr:192.168.39.243 Prefix:24 Hostname:ha-234651 Clientid:01:52:54:00:20:60:53}
	I0731 16:59:56.388592   26392 main.go:141] libmachine: (ha-234651) DBG | domain ha-234651 has defined IP address 192.168.39.243 and MAC address 52:54:00:20:60:53 in network mk-ha-234651
	I0731 16:59:56.388729   26392 main.go:141] libmachine: (ha-234651) Calling .GetSSHPort
	I0731 16:59:56.388910   26392 main.go:141] libmachine: (ha-234651) Calling .GetSSHKeyPath
	I0731 16:59:56.389059   26392 main.go:141] libmachine: (ha-234651) Calling .GetSSHUsername
	I0731 16:59:56.389201   26392 sshutil.go:53] new ssh client: &{IP:192.168.39.243 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19349-8084/.minikube/machines/ha-234651/id_rsa Username:docker}
	I0731 16:59:56.398746   26392 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41463
	I0731 16:59:56.399161   26392 main.go:141] libmachine: () Calling .GetVersion
	I0731 16:59:56.399652   26392 main.go:141] libmachine: Using API Version  1
	I0731 16:59:56.399674   26392 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 16:59:56.400071   26392 main.go:141] libmachine: () Calling .GetMachineName
	I0731 16:59:56.400250   26392 main.go:141] libmachine: (ha-234651) Calling .GetState
	I0731 16:59:56.401966   26392 main.go:141] libmachine: (ha-234651) Calling .DriverName
	I0731 16:59:56.402156   26392 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0731 16:59:56.402172   26392 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0731 16:59:56.402189   26392 main.go:141] libmachine: (ha-234651) Calling .GetSSHHostname
	I0731 16:59:56.404481   26392 main.go:141] libmachine: (ha-234651) DBG | domain ha-234651 has defined MAC address 52:54:00:20:60:53 in network mk-ha-234651
	I0731 16:59:56.404847   26392 main.go:141] libmachine: (ha-234651) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:60:53", ip: ""} in network mk-ha-234651: {Iface:virbr1 ExpiryTime:2024-07-31 17:59:15 +0000 UTC Type:0 Mac:52:54:00:20:60:53 Iaid: IPaddr:192.168.39.243 Prefix:24 Hostname:ha-234651 Clientid:01:52:54:00:20:60:53}
	I0731 16:59:56.404880   26392 main.go:141] libmachine: (ha-234651) DBG | domain ha-234651 has defined IP address 192.168.39.243 and MAC address 52:54:00:20:60:53 in network mk-ha-234651
	I0731 16:59:56.404995   26392 main.go:141] libmachine: (ha-234651) Calling .GetSSHPort
	I0731 16:59:56.405161   26392 main.go:141] libmachine: (ha-234651) Calling .GetSSHKeyPath
	I0731 16:59:56.405297   26392 main.go:141] libmachine: (ha-234651) Calling .GetSSHUsername
	I0731 16:59:56.405423   26392 sshutil.go:53] new ssh client: &{IP:192.168.39.243 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19349-8084/.minikube/machines/ha-234651/id_rsa Username:docker}
	I0731 16:59:56.487386   26392 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0731 16:59:56.554849   26392 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0731 16:59:56.574262   26392 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0731 16:59:56.780210   26392 start.go:971] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
	I0731 16:59:57.037064   26392 main.go:141] libmachine: Making call to close driver server
	I0731 16:59:57.037091   26392 main.go:141] libmachine: (ha-234651) Calling .Close
	I0731 16:59:57.037161   26392 main.go:141] libmachine: Making call to close driver server
	I0731 16:59:57.037184   26392 main.go:141] libmachine: (ha-234651) Calling .Close
	I0731 16:59:57.037387   26392 main.go:141] libmachine: Successfully made call to close driver server
	I0731 16:59:57.037400   26392 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 16:59:57.037409   26392 main.go:141] libmachine: Making call to close driver server
	I0731 16:59:57.037416   26392 main.go:141] libmachine: (ha-234651) Calling .Close
	I0731 16:59:57.037506   26392 main.go:141] libmachine: (ha-234651) DBG | Closing plugin on server side
	I0731 16:59:57.037530   26392 main.go:141] libmachine: Successfully made call to close driver server
	I0731 16:59:57.037543   26392 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 16:59:57.037552   26392 main.go:141] libmachine: Making call to close driver server
	I0731 16:59:57.037559   26392 main.go:141] libmachine: (ha-234651) Calling .Close
	I0731 16:59:57.037593   26392 main.go:141] libmachine: Successfully made call to close driver server
	I0731 16:59:57.037614   26392 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 16:59:57.037752   26392 main.go:141] libmachine: Successfully made call to close driver server
	I0731 16:59:57.037765   26392 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 16:59:57.037834   26392 main.go:141] libmachine: (ha-234651) DBG | Closing plugin on server side
	I0731 16:59:57.037878   26392 round_trippers.go:463] GET https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses
	I0731 16:59:57.037889   26392 round_trippers.go:469] Request Headers:
	I0731 16:59:57.037899   26392 round_trippers.go:473]     Accept: application/json, */*
	I0731 16:59:57.037906   26392 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 16:59:57.047242   26392 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0731 16:59:57.047913   26392 round_trippers.go:463] PUT https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I0731 16:59:57.047928   26392 round_trippers.go:469] Request Headers:
	I0731 16:59:57.047936   26392 round_trippers.go:473]     Content-Type: application/json
	I0731 16:59:57.047944   26392 round_trippers.go:473]     Accept: application/json, */*
	I0731 16:59:57.047949   26392 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 16:59:57.051404   26392 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 16:59:57.051567   26392 main.go:141] libmachine: Making call to close driver server
	I0731 16:59:57.051579   26392 main.go:141] libmachine: (ha-234651) Calling .Close
	I0731 16:59:57.051797   26392 main.go:141] libmachine: Successfully made call to close driver server
	I0731 16:59:57.051821   26392 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 16:59:57.053464   26392 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0731 16:59:57.054697   26392 addons.go:510] duration metric: took 709.400523ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I0731 16:59:57.054733   26392 start.go:246] waiting for cluster config update ...
	I0731 16:59:57.054747   26392 start.go:255] writing updated cluster config ...
	I0731 16:59:57.056334   26392 out.go:177] 
	I0731 16:59:57.057638   26392 config.go:182] Loaded profile config "ha-234651": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0731 16:59:57.057709   26392 profile.go:143] Saving config to /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/ha-234651/config.json ...
	I0731 16:59:57.059150   26392 out.go:177] * Starting "ha-234651-m02" control-plane node in "ha-234651" cluster
	I0731 16:59:57.060105   26392 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0731 16:59:57.060125   26392 cache.go:56] Caching tarball of preloaded images
	I0731 16:59:57.060204   26392 preload.go:172] Found /home/jenkins/minikube-integration/19349-8084/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0731 16:59:57.060214   26392 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on crio
	I0731 16:59:57.060282   26392 profile.go:143] Saving config to /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/ha-234651/config.json ...
	I0731 16:59:57.060470   26392 start.go:360] acquireMachinesLock for ha-234651-m02: {Name:mkd78eacfab7d6c3058c6674434b3d889ec957e0 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0731 16:59:57.060513   26392 start.go:364] duration metric: took 24.628µs to acquireMachinesLock for "ha-234651-m02"
	I0731 16:59:57.060537   26392 start.go:93] Provisioning new machine with config: &{Name:ha-234651 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kubernete
sVersion:v1.30.3 ClusterName:ha-234651 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.243 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 Cer
tExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0731 16:59:57.060617   26392 start.go:125] createHost starting for "m02" (driver="kvm2")
	I0731 16:59:57.062693   26392 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0731 16:59:57.062769   26392 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 16:59:57.062791   26392 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 16:59:57.076864   26392 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43663
	I0731 16:59:57.077265   26392 main.go:141] libmachine: () Calling .GetVersion
	I0731 16:59:57.077768   26392 main.go:141] libmachine: Using API Version  1
	I0731 16:59:57.077790   26392 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 16:59:57.078055   26392 main.go:141] libmachine: () Calling .GetMachineName
	I0731 16:59:57.078270   26392 main.go:141] libmachine: (ha-234651-m02) Calling .GetMachineName
	I0731 16:59:57.078418   26392 main.go:141] libmachine: (ha-234651-m02) Calling .DriverName
	I0731 16:59:57.078532   26392 start.go:159] libmachine.API.Create for "ha-234651" (driver="kvm2")
	I0731 16:59:57.078552   26392 client.go:168] LocalClient.Create starting
	I0731 16:59:57.078582   26392 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19349-8084/.minikube/certs/ca.pem
	I0731 16:59:57.078620   26392 main.go:141] libmachine: Decoding PEM data...
	I0731 16:59:57.078637   26392 main.go:141] libmachine: Parsing certificate...
	I0731 16:59:57.078683   26392 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19349-8084/.minikube/certs/cert.pem
	I0731 16:59:57.078702   26392 main.go:141] libmachine: Decoding PEM data...
	I0731 16:59:57.078713   26392 main.go:141] libmachine: Parsing certificate...
	I0731 16:59:57.078728   26392 main.go:141] libmachine: Running pre-create checks...
	I0731 16:59:57.078736   26392 main.go:141] libmachine: (ha-234651-m02) Calling .PreCreateCheck
	I0731 16:59:57.078891   26392 main.go:141] libmachine: (ha-234651-m02) Calling .GetConfigRaw
	I0731 16:59:57.079226   26392 main.go:141] libmachine: Creating machine...
	I0731 16:59:57.079238   26392 main.go:141] libmachine: (ha-234651-m02) Calling .Create
	I0731 16:59:57.079339   26392 main.go:141] libmachine: (ha-234651-m02) Creating KVM machine...
	I0731 16:59:57.080488   26392 main.go:141] libmachine: (ha-234651-m02) DBG | found existing default KVM network
	I0731 16:59:57.080588   26392 main.go:141] libmachine: (ha-234651-m02) DBG | found existing private KVM network mk-ha-234651
	I0731 16:59:57.080678   26392 main.go:141] libmachine: (ha-234651-m02) Setting up store path in /home/jenkins/minikube-integration/19349-8084/.minikube/machines/ha-234651-m02 ...
	I0731 16:59:57.080697   26392 main.go:141] libmachine: (ha-234651-m02) Building disk image from file:///home/jenkins/minikube-integration/19349-8084/.minikube/cache/iso/amd64/minikube-v1.33.1-1722248113-19339-amd64.iso
	I0731 16:59:57.080738   26392 main.go:141] libmachine: (ha-234651-m02) DBG | I0731 16:59:57.080659   26772 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19349-8084/.minikube
	I0731 16:59:57.080790   26392 main.go:141] libmachine: (ha-234651-m02) Downloading /home/jenkins/minikube-integration/19349-8084/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19349-8084/.minikube/cache/iso/amd64/minikube-v1.33.1-1722248113-19339-amd64.iso...
	I0731 16:59:57.334000   26392 main.go:141] libmachine: (ha-234651-m02) DBG | I0731 16:59:57.333829   26772 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19349-8084/.minikube/machines/ha-234651-m02/id_rsa...
	I0731 16:59:57.482649   26392 main.go:141] libmachine: (ha-234651-m02) DBG | I0731 16:59:57.482546   26772 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19349-8084/.minikube/machines/ha-234651-m02/ha-234651-m02.rawdisk...
	I0731 16:59:57.482673   26392 main.go:141] libmachine: (ha-234651-m02) DBG | Writing magic tar header
	I0731 16:59:57.482684   26392 main.go:141] libmachine: (ha-234651-m02) DBG | Writing SSH key tar header
	I0731 16:59:57.482735   26392 main.go:141] libmachine: (ha-234651-m02) DBG | I0731 16:59:57.482680   26772 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19349-8084/.minikube/machines/ha-234651-m02 ...
	I0731 16:59:57.482817   26392 main.go:141] libmachine: (ha-234651-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19349-8084/.minikube/machines/ha-234651-m02
	I0731 16:59:57.482845   26392 main.go:141] libmachine: (ha-234651-m02) Setting executable bit set on /home/jenkins/minikube-integration/19349-8084/.minikube/machines/ha-234651-m02 (perms=drwx------)
	I0731 16:59:57.482861   26392 main.go:141] libmachine: (ha-234651-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19349-8084/.minikube/machines
	I0731 16:59:57.482876   26392 main.go:141] libmachine: (ha-234651-m02) Setting executable bit set on /home/jenkins/minikube-integration/19349-8084/.minikube/machines (perms=drwxr-xr-x)
	I0731 16:59:57.482889   26392 main.go:141] libmachine: (ha-234651-m02) Setting executable bit set on /home/jenkins/minikube-integration/19349-8084/.minikube (perms=drwxr-xr-x)
	I0731 16:59:57.482900   26392 main.go:141] libmachine: (ha-234651-m02) Setting executable bit set on /home/jenkins/minikube-integration/19349-8084 (perms=drwxrwxr-x)
	I0731 16:59:57.482912   26392 main.go:141] libmachine: (ha-234651-m02) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0731 16:59:57.482925   26392 main.go:141] libmachine: (ha-234651-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19349-8084/.minikube
	I0731 16:59:57.482940   26392 main.go:141] libmachine: (ha-234651-m02) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0731 16:59:57.482954   26392 main.go:141] libmachine: (ha-234651-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19349-8084
	I0731 16:59:57.482966   26392 main.go:141] libmachine: (ha-234651-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0731 16:59:57.482977   26392 main.go:141] libmachine: (ha-234651-m02) DBG | Checking permissions on dir: /home/jenkins
	I0731 16:59:57.482988   26392 main.go:141] libmachine: (ha-234651-m02) DBG | Checking permissions on dir: /home
	I0731 16:59:57.483006   26392 main.go:141] libmachine: (ha-234651-m02) DBG | Skipping /home - not owner
	I0731 16:59:57.483018   26392 main.go:141] libmachine: (ha-234651-m02) Creating domain...
	I0731 16:59:57.484125   26392 main.go:141] libmachine: (ha-234651-m02) define libvirt domain using xml: 
	I0731 16:59:57.484158   26392 main.go:141] libmachine: (ha-234651-m02) <domain type='kvm'>
	I0731 16:59:57.484171   26392 main.go:141] libmachine: (ha-234651-m02)   <name>ha-234651-m02</name>
	I0731 16:59:57.484183   26392 main.go:141] libmachine: (ha-234651-m02)   <memory unit='MiB'>2200</memory>
	I0731 16:59:57.484192   26392 main.go:141] libmachine: (ha-234651-m02)   <vcpu>2</vcpu>
	I0731 16:59:57.484202   26392 main.go:141] libmachine: (ha-234651-m02)   <features>
	I0731 16:59:57.484210   26392 main.go:141] libmachine: (ha-234651-m02)     <acpi/>
	I0731 16:59:57.484220   26392 main.go:141] libmachine: (ha-234651-m02)     <apic/>
	I0731 16:59:57.484231   26392 main.go:141] libmachine: (ha-234651-m02)     <pae/>
	I0731 16:59:57.484241   26392 main.go:141] libmachine: (ha-234651-m02)     
	I0731 16:59:57.484250   26392 main.go:141] libmachine: (ha-234651-m02)   </features>
	I0731 16:59:57.484261   26392 main.go:141] libmachine: (ha-234651-m02)   <cpu mode='host-passthrough'>
	I0731 16:59:57.484272   26392 main.go:141] libmachine: (ha-234651-m02)   
	I0731 16:59:57.484281   26392 main.go:141] libmachine: (ha-234651-m02)   </cpu>
	I0731 16:59:57.484289   26392 main.go:141] libmachine: (ha-234651-m02)   <os>
	I0731 16:59:57.484298   26392 main.go:141] libmachine: (ha-234651-m02)     <type>hvm</type>
	I0731 16:59:57.484307   26392 main.go:141] libmachine: (ha-234651-m02)     <boot dev='cdrom'/>
	I0731 16:59:57.484320   26392 main.go:141] libmachine: (ha-234651-m02)     <boot dev='hd'/>
	I0731 16:59:57.484332   26392 main.go:141] libmachine: (ha-234651-m02)     <bootmenu enable='no'/>
	I0731 16:59:57.484341   26392 main.go:141] libmachine: (ha-234651-m02)   </os>
	I0731 16:59:57.484348   26392 main.go:141] libmachine: (ha-234651-m02)   <devices>
	I0731 16:59:57.484359   26392 main.go:141] libmachine: (ha-234651-m02)     <disk type='file' device='cdrom'>
	I0731 16:59:57.484375   26392 main.go:141] libmachine: (ha-234651-m02)       <source file='/home/jenkins/minikube-integration/19349-8084/.minikube/machines/ha-234651-m02/boot2docker.iso'/>
	I0731 16:59:57.484386   26392 main.go:141] libmachine: (ha-234651-m02)       <target dev='hdc' bus='scsi'/>
	I0731 16:59:57.484406   26392 main.go:141] libmachine: (ha-234651-m02)       <readonly/>
	I0731 16:59:57.484426   26392 main.go:141] libmachine: (ha-234651-m02)     </disk>
	I0731 16:59:57.484438   26392 main.go:141] libmachine: (ha-234651-m02)     <disk type='file' device='disk'>
	I0731 16:59:57.484451   26392 main.go:141] libmachine: (ha-234651-m02)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0731 16:59:57.484468   26392 main.go:141] libmachine: (ha-234651-m02)       <source file='/home/jenkins/minikube-integration/19349-8084/.minikube/machines/ha-234651-m02/ha-234651-m02.rawdisk'/>
	I0731 16:59:57.484480   26392 main.go:141] libmachine: (ha-234651-m02)       <target dev='hda' bus='virtio'/>
	I0731 16:59:57.484491   26392 main.go:141] libmachine: (ha-234651-m02)     </disk>
	I0731 16:59:57.484500   26392 main.go:141] libmachine: (ha-234651-m02)     <interface type='network'>
	I0731 16:59:57.484513   26392 main.go:141] libmachine: (ha-234651-m02)       <source network='mk-ha-234651'/>
	I0731 16:59:57.484525   26392 main.go:141] libmachine: (ha-234651-m02)       <model type='virtio'/>
	I0731 16:59:57.484537   26392 main.go:141] libmachine: (ha-234651-m02)     </interface>
	I0731 16:59:57.484548   26392 main.go:141] libmachine: (ha-234651-m02)     <interface type='network'>
	I0731 16:59:57.484559   26392 main.go:141] libmachine: (ha-234651-m02)       <source network='default'/>
	I0731 16:59:57.484574   26392 main.go:141] libmachine: (ha-234651-m02)       <model type='virtio'/>
	I0731 16:59:57.484586   26392 main.go:141] libmachine: (ha-234651-m02)     </interface>
	I0731 16:59:57.484593   26392 main.go:141] libmachine: (ha-234651-m02)     <serial type='pty'>
	I0731 16:59:57.484604   26392 main.go:141] libmachine: (ha-234651-m02)       <target port='0'/>
	I0731 16:59:57.484612   26392 main.go:141] libmachine: (ha-234651-m02)     </serial>
	I0731 16:59:57.484624   26392 main.go:141] libmachine: (ha-234651-m02)     <console type='pty'>
	I0731 16:59:57.484636   26392 main.go:141] libmachine: (ha-234651-m02)       <target type='serial' port='0'/>
	I0731 16:59:57.484669   26392 main.go:141] libmachine: (ha-234651-m02)     </console>
	I0731 16:59:57.484699   26392 main.go:141] libmachine: (ha-234651-m02)     <rng model='virtio'>
	I0731 16:59:57.484717   26392 main.go:141] libmachine: (ha-234651-m02)       <backend model='random'>/dev/random</backend>
	I0731 16:59:57.484732   26392 main.go:141] libmachine: (ha-234651-m02)     </rng>
	I0731 16:59:57.484745   26392 main.go:141] libmachine: (ha-234651-m02)     
	I0731 16:59:57.484756   26392 main.go:141] libmachine: (ha-234651-m02)     
	I0731 16:59:57.484770   26392 main.go:141] libmachine: (ha-234651-m02)   </devices>
	I0731 16:59:57.484783   26392 main.go:141] libmachine: (ha-234651-m02) </domain>
	I0731 16:59:57.484799   26392 main.go:141] libmachine: (ha-234651-m02) 
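The domain definition logged line by line above is a complete libvirt description for the new VM: 2 vCPUs, 2200 MiB of RAM, the boot2docker ISO attached as a read-only SCSI CD-ROM, the raw disk on a virtio bus, and one NIC on the private mk-ha-234651 network plus one on libvirt's default network. A minimal Go sketch of rendering such a definition from a template follows; the DomainConfig struct, its field names, and the file paths are illustrative assumptions, not the kvm2 driver's actual types or values.

package main

import (
	"os"
	"text/template"
)

// DomainConfig is a hypothetical stand-in for the parameters visible in the
// log above (name, memory, CPUs, ISO, raw disk, network); it is not the
// driver's real struct.
type DomainConfig struct {
	Name      string
	MemoryMiB int
	CPUs      int
	ISOPath   string
	DiskPath  string
	Network   string
}

const domainXML = `<domain type='kvm'>
  <name>{{.Name}}</name>
  <memory unit='MiB'>{{.MemoryMiB}}</memory>
  <vcpu>{{.CPUs}}</vcpu>
  <os>
    <type>hvm</type>
    <boot dev='cdrom'/>
    <boot dev='hd'/>
  </os>
  <devices>
    <disk type='file' device='cdrom'>
      <source file='{{.ISOPath}}'/>
      <target dev='hdc' bus='scsi'/>
      <readonly/>
    </disk>
    <disk type='file' device='disk'>
      <driver name='qemu' type='raw' cache='default' io='threads'/>
      <source file='{{.DiskPath}}'/>
      <target dev='hda' bus='virtio'/>
    </disk>
    <interface type='network'>
      <source network='{{.Network}}'/>
      <model type='virtio'/>
    </interface>
  </devices>
</domain>
`

func main() {
	cfg := DomainConfig{
		Name:      "ha-234651-m02",
		MemoryMiB: 2200,
		CPUs:      2,
		ISOPath:   "/path/to/boot2docker.iso",
		DiskPath:  "/path/to/ha-234651-m02.rawdisk",
		Network:   "mk-ha-234651",
	}
	// Render the XML the same way the "define libvirt domain using xml" step
	// above does; the result could then be passed to virsh define or libvirt.
	tmpl := template.Must(template.New("domain").Parse(domainXML))
	if err := tmpl.Execute(os.Stdout, cfg); err != nil {
		panic(err)
	}
}
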
	I0731 16:59:57.492379   26392 main.go:141] libmachine: (ha-234651-m02) DBG | domain ha-234651-m02 has defined MAC address 52:54:00:87:d0:bf in network default
	I0731 16:59:57.493034   26392 main.go:141] libmachine: (ha-234651-m02) Ensuring networks are active...
	I0731 16:59:57.493054   26392 main.go:141] libmachine: (ha-234651-m02) DBG | domain ha-234651-m02 has defined MAC address 52:54:00:4c:97:0e in network mk-ha-234651
	I0731 16:59:57.493802   26392 main.go:141] libmachine: (ha-234651-m02) Ensuring network default is active
	I0731 16:59:57.494113   26392 main.go:141] libmachine: (ha-234651-m02) Ensuring network mk-ha-234651 is active
	I0731 16:59:57.494448   26392 main.go:141] libmachine: (ha-234651-m02) Getting domain xml...
	I0731 16:59:57.495283   26392 main.go:141] libmachine: (ha-234651-m02) Creating domain...
	I0731 16:59:58.698086   26392 main.go:141] libmachine: (ha-234651-m02) Waiting to get IP...
	I0731 16:59:58.698849   26392 main.go:141] libmachine: (ha-234651-m02) DBG | domain ha-234651-m02 has defined MAC address 52:54:00:4c:97:0e in network mk-ha-234651
	I0731 16:59:58.699286   26392 main.go:141] libmachine: (ha-234651-m02) DBG | unable to find current IP address of domain ha-234651-m02 in network mk-ha-234651
	I0731 16:59:58.699314   26392 main.go:141] libmachine: (ha-234651-m02) DBG | I0731 16:59:58.699260   26772 retry.go:31] will retry after 237.684145ms: waiting for machine to come up
	I0731 16:59:58.938824   26392 main.go:141] libmachine: (ha-234651-m02) DBG | domain ha-234651-m02 has defined MAC address 52:54:00:4c:97:0e in network mk-ha-234651
	I0731 16:59:58.939376   26392 main.go:141] libmachine: (ha-234651-m02) DBG | unable to find current IP address of domain ha-234651-m02 in network mk-ha-234651
	I0731 16:59:58.939406   26392 main.go:141] libmachine: (ha-234651-m02) DBG | I0731 16:59:58.939313   26772 retry.go:31] will retry after 380.331665ms: waiting for machine to come up
	I0731 16:59:59.320818   26392 main.go:141] libmachine: (ha-234651-m02) DBG | domain ha-234651-m02 has defined MAC address 52:54:00:4c:97:0e in network mk-ha-234651
	I0731 16:59:59.321283   26392 main.go:141] libmachine: (ha-234651-m02) DBG | unable to find current IP address of domain ha-234651-m02 in network mk-ha-234651
	I0731 16:59:59.321314   26392 main.go:141] libmachine: (ha-234651-m02) DBG | I0731 16:59:59.321229   26772 retry.go:31] will retry after 409.470005ms: waiting for machine to come up
	I0731 16:59:59.732928   26392 main.go:141] libmachine: (ha-234651-m02) DBG | domain ha-234651-m02 has defined MAC address 52:54:00:4c:97:0e in network mk-ha-234651
	I0731 16:59:59.733349   26392 main.go:141] libmachine: (ha-234651-m02) DBG | unable to find current IP address of domain ha-234651-m02 in network mk-ha-234651
	I0731 16:59:59.733377   26392 main.go:141] libmachine: (ha-234651-m02) DBG | I0731 16:59:59.733301   26772 retry.go:31] will retry after 539.092112ms: waiting for machine to come up
	I0731 17:00:00.274038   26392 main.go:141] libmachine: (ha-234651-m02) DBG | domain ha-234651-m02 has defined MAC address 52:54:00:4c:97:0e in network mk-ha-234651
	I0731 17:00:00.274440   26392 main.go:141] libmachine: (ha-234651-m02) DBG | unable to find current IP address of domain ha-234651-m02 in network mk-ha-234651
	I0731 17:00:00.274494   26392 main.go:141] libmachine: (ha-234651-m02) DBG | I0731 17:00:00.274418   26772 retry.go:31] will retry after 704.175056ms: waiting for machine to come up
	I0731 17:00:00.980162   26392 main.go:141] libmachine: (ha-234651-m02) DBG | domain ha-234651-m02 has defined MAC address 52:54:00:4c:97:0e in network mk-ha-234651
	I0731 17:00:00.980593   26392 main.go:141] libmachine: (ha-234651-m02) DBG | unable to find current IP address of domain ha-234651-m02 in network mk-ha-234651
	I0731 17:00:00.980631   26392 main.go:141] libmachine: (ha-234651-m02) DBG | I0731 17:00:00.980535   26772 retry.go:31] will retry after 904.538693ms: waiting for machine to come up
	I0731 17:00:01.886662   26392 main.go:141] libmachine: (ha-234651-m02) DBG | domain ha-234651-m02 has defined MAC address 52:54:00:4c:97:0e in network mk-ha-234651
	I0731 17:00:01.887100   26392 main.go:141] libmachine: (ha-234651-m02) DBG | unable to find current IP address of domain ha-234651-m02 in network mk-ha-234651
	I0731 17:00:01.887139   26392 main.go:141] libmachine: (ha-234651-m02) DBG | I0731 17:00:01.887051   26772 retry.go:31] will retry after 930.755767ms: waiting for machine to come up
	I0731 17:00:02.819648   26392 main.go:141] libmachine: (ha-234651-m02) DBG | domain ha-234651-m02 has defined MAC address 52:54:00:4c:97:0e in network mk-ha-234651
	I0731 17:00:02.820080   26392 main.go:141] libmachine: (ha-234651-m02) DBG | unable to find current IP address of domain ha-234651-m02 in network mk-ha-234651
	I0731 17:00:02.820107   26392 main.go:141] libmachine: (ha-234651-m02) DBG | I0731 17:00:02.820029   26772 retry.go:31] will retry after 1.34592852s: waiting for machine to come up
	I0731 17:00:04.168273   26392 main.go:141] libmachine: (ha-234651-m02) DBG | domain ha-234651-m02 has defined MAC address 52:54:00:4c:97:0e in network mk-ha-234651
	I0731 17:00:04.168755   26392 main.go:141] libmachine: (ha-234651-m02) DBG | unable to find current IP address of domain ha-234651-m02 in network mk-ha-234651
	I0731 17:00:04.168785   26392 main.go:141] libmachine: (ha-234651-m02) DBG | I0731 17:00:04.168700   26772 retry.go:31] will retry after 1.692001302s: waiting for machine to come up
	I0731 17:00:05.862244   26392 main.go:141] libmachine: (ha-234651-m02) DBG | domain ha-234651-m02 has defined MAC address 52:54:00:4c:97:0e in network mk-ha-234651
	I0731 17:00:05.862748   26392 main.go:141] libmachine: (ha-234651-m02) DBG | unable to find current IP address of domain ha-234651-m02 in network mk-ha-234651
	I0731 17:00:05.862779   26392 main.go:141] libmachine: (ha-234651-m02) DBG | I0731 17:00:05.862711   26772 retry.go:31] will retry after 2.150428945s: waiting for machine to come up
	I0731 17:00:08.014515   26392 main.go:141] libmachine: (ha-234651-m02) DBG | domain ha-234651-m02 has defined MAC address 52:54:00:4c:97:0e in network mk-ha-234651
	I0731 17:00:08.014935   26392 main.go:141] libmachine: (ha-234651-m02) DBG | unable to find current IP address of domain ha-234651-m02 in network mk-ha-234651
	I0731 17:00:08.014970   26392 main.go:141] libmachine: (ha-234651-m02) DBG | I0731 17:00:08.014893   26772 retry.go:31] will retry after 2.239362339s: waiting for machine to come up
	I0731 17:00:10.256555   26392 main.go:141] libmachine: (ha-234651-m02) DBG | domain ha-234651-m02 has defined MAC address 52:54:00:4c:97:0e in network mk-ha-234651
	I0731 17:00:10.256967   26392 main.go:141] libmachine: (ha-234651-m02) DBG | unable to find current IP address of domain ha-234651-m02 in network mk-ha-234651
	I0731 17:00:10.257008   26392 main.go:141] libmachine: (ha-234651-m02) DBG | I0731 17:00:10.256949   26772 retry.go:31] will retry after 2.400335015s: waiting for machine to come up
	I0731 17:00:12.658945   26392 main.go:141] libmachine: (ha-234651-m02) DBG | domain ha-234651-m02 has defined MAC address 52:54:00:4c:97:0e in network mk-ha-234651
	I0731 17:00:12.659349   26392 main.go:141] libmachine: (ha-234651-m02) DBG | unable to find current IP address of domain ha-234651-m02 in network mk-ha-234651
	I0731 17:00:12.659377   26392 main.go:141] libmachine: (ha-234651-m02) DBG | I0731 17:00:12.659299   26772 retry.go:31] will retry after 4.392574536s: waiting for machine to come up
	I0731 17:00:17.056090   26392 main.go:141] libmachine: (ha-234651-m02) DBG | domain ha-234651-m02 has defined MAC address 52:54:00:4c:97:0e in network mk-ha-234651
	I0731 17:00:17.056590   26392 main.go:141] libmachine: (ha-234651-m02) Found IP for machine: 192.168.39.235
	I0731 17:00:17.056616   26392 main.go:141] libmachine: (ha-234651-m02) Reserving static IP address...
	I0731 17:00:17.056625   26392 main.go:141] libmachine: (ha-234651-m02) DBG | domain ha-234651-m02 has current primary IP address 192.168.39.235 and MAC address 52:54:00:4c:97:0e in network mk-ha-234651
	I0731 17:00:17.057033   26392 main.go:141] libmachine: (ha-234651-m02) DBG | unable to find host DHCP lease matching {name: "ha-234651-m02", mac: "52:54:00:4c:97:0e", ip: "192.168.39.235"} in network mk-ha-234651
	I0731 17:00:17.129001   26392 main.go:141] libmachine: (ha-234651-m02) Reserved static IP address: 192.168.39.235
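The retry.go lines above record the wait-for-DHCP-lease loop: each failed lookup of the 52:54:00:4c:97:0e lease schedules another attempt after a longer delay (roughly 238 ms, 380 ms, ... up to several seconds) until the IP appears. A minimal Go sketch of that grow-the-delay retry pattern, with lookupIP as a hypothetical stand-in for the libvirt lease lookup:

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// lookupIP is a hypothetical placeholder for the DHCP-lease lookup the log
// performs; here it fails a few times and then succeeds.
func lookupIP(attempt int) (string, error) {
	if attempt < 5 {
		return "", errors.New("unable to find current IP address")
	}
	return "192.168.39.235", nil
}

func main() {
	delay := 200 * time.Millisecond
	for attempt := 1; ; attempt++ {
		ip, err := lookupIP(attempt)
		if err == nil {
			fmt.Println("Found IP for machine:", ip)
			return
		}
		// Grow the wait with a little jitter, similar to the increasing
		// "will retry after ..." intervals in the retry.go lines above.
		wait := delay + time.Duration(rand.Int63n(int64(delay)))
		fmt.Printf("retry %d: %v: waiting %v for machine to come up\n", attempt, err, wait)
		time.Sleep(wait)
		delay *= 2
	}
}
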
	I0731 17:00:17.129026   26392 main.go:141] libmachine: (ha-234651-m02) Waiting for SSH to be available...
	I0731 17:00:17.129036   26392 main.go:141] libmachine: (ha-234651-m02) DBG | Getting to WaitForSSH function...
	I0731 17:00:17.132214   26392 main.go:141] libmachine: (ha-234651-m02) DBG | domain ha-234651-m02 has defined MAC address 52:54:00:4c:97:0e in network mk-ha-234651
	I0731 17:00:17.132647   26392 main.go:141] libmachine: (ha-234651-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:97:0e", ip: ""} in network mk-ha-234651: {Iface:virbr1 ExpiryTime:2024-07-31 18:00:10 +0000 UTC Type:0 Mac:52:54:00:4c:97:0e Iaid: IPaddr:192.168.39.235 Prefix:24 Hostname:minikube Clientid:01:52:54:00:4c:97:0e}
	I0731 17:00:17.132673   26392 main.go:141] libmachine: (ha-234651-m02) DBG | domain ha-234651-m02 has defined IP address 192.168.39.235 and MAC address 52:54:00:4c:97:0e in network mk-ha-234651
	I0731 17:00:17.132805   26392 main.go:141] libmachine: (ha-234651-m02) DBG | Using SSH client type: external
	I0731 17:00:17.132831   26392 main.go:141] libmachine: (ha-234651-m02) DBG | Using SSH private key: /home/jenkins/minikube-integration/19349-8084/.minikube/machines/ha-234651-m02/id_rsa (-rw-------)
	I0731 17:00:17.132863   26392 main.go:141] libmachine: (ha-234651-m02) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.235 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19349-8084/.minikube/machines/ha-234651-m02/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0731 17:00:17.132876   26392 main.go:141] libmachine: (ha-234651-m02) DBG | About to run SSH command:
	I0731 17:00:17.132888   26392 main.go:141] libmachine: (ha-234651-m02) DBG | exit 0
	I0731 17:00:17.255197   26392 main.go:141] libmachine: (ha-234651-m02) DBG | SSH cmd err, output: <nil>: 
	I0731 17:00:17.255468   26392 main.go:141] libmachine: (ha-234651-m02) KVM machine creation complete!
	I0731 17:00:17.255825   26392 main.go:141] libmachine: (ha-234651-m02) Calling .GetConfigRaw
	I0731 17:00:17.256384   26392 main.go:141] libmachine: (ha-234651-m02) Calling .DriverName
	I0731 17:00:17.256582   26392 main.go:141] libmachine: (ha-234651-m02) Calling .DriverName
	I0731 17:00:17.256740   26392 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0731 17:00:17.256753   26392 main.go:141] libmachine: (ha-234651-m02) Calling .GetState
	I0731 17:00:17.258006   26392 main.go:141] libmachine: Detecting operating system of created instance...
	I0731 17:00:17.258026   26392 main.go:141] libmachine: Waiting for SSH to be available...
	I0731 17:00:17.258042   26392 main.go:141] libmachine: Getting to WaitForSSH function...
	I0731 17:00:17.258056   26392 main.go:141] libmachine: (ha-234651-m02) Calling .GetSSHHostname
	I0731 17:00:17.260254   26392 main.go:141] libmachine: (ha-234651-m02) DBG | domain ha-234651-m02 has defined MAC address 52:54:00:4c:97:0e in network mk-ha-234651
	I0731 17:00:17.260688   26392 main.go:141] libmachine: (ha-234651-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:97:0e", ip: ""} in network mk-ha-234651: {Iface:virbr1 ExpiryTime:2024-07-31 18:00:10 +0000 UTC Type:0 Mac:52:54:00:4c:97:0e Iaid: IPaddr:192.168.39.235 Prefix:24 Hostname:ha-234651-m02 Clientid:01:52:54:00:4c:97:0e}
	I0731 17:00:17.260716   26392 main.go:141] libmachine: (ha-234651-m02) DBG | domain ha-234651-m02 has defined IP address 192.168.39.235 and MAC address 52:54:00:4c:97:0e in network mk-ha-234651
	I0731 17:00:17.260821   26392 main.go:141] libmachine: (ha-234651-m02) Calling .GetSSHPort
	I0731 17:00:17.261006   26392 main.go:141] libmachine: (ha-234651-m02) Calling .GetSSHKeyPath
	I0731 17:00:17.261159   26392 main.go:141] libmachine: (ha-234651-m02) Calling .GetSSHKeyPath
	I0731 17:00:17.261312   26392 main.go:141] libmachine: (ha-234651-m02) Calling .GetSSHUsername
	I0731 17:00:17.261500   26392 main.go:141] libmachine: Using SSH client type: native
	I0731 17:00:17.261716   26392 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.235 22 <nil> <nil>}
	I0731 17:00:17.261731   26392 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0731 17:00:17.358219   26392 main.go:141] libmachine: SSH cmd err, output: <nil>: 
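At this point the harness probes the new VM by running a bare "exit 0" over SSH with the freshly generated key, as the two preceding lines show. A minimal sketch of such a probe using golang.org/x/crypto/ssh, assuming the key path and address taken from the log; this only illustrates the idea and is not minikube's own SSH client code:

package main

import (
	"fmt"
	"os"

	"golang.org/x/crypto/ssh"
)

func main() {
	// Key path and address mirror the log above; adjust for a real environment.
	keyBytes, err := os.ReadFile("/home/jenkins/minikube-integration/19349-8084/.minikube/machines/ha-234651-m02/id_rsa")
	if err != nil {
		panic(err)
	}
	signer, err := ssh.ParsePrivateKey(keyBytes)
	if err != nil {
		panic(err)
	}
	cfg := &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // the WaitForSSH step above also skips host-key checks
	}
	client, err := ssh.Dial("tcp", "192.168.39.235:22", cfg)
	if err != nil {
		panic(err)
	}
	defer client.Close()
	session, err := client.NewSession()
	if err != nil {
		panic(err)
	}
	defer session.Close()
	// The same liveness probe the log runs: a bare "exit 0".
	if err := session.Run("exit 0"); err != nil {
		panic(err)
	}
	fmt.Println("SSH is available")
}
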
	I0731 17:00:17.358239   26392 main.go:141] libmachine: Detecting the provisioner...
	I0731 17:00:17.358246   26392 main.go:141] libmachine: (ha-234651-m02) Calling .GetSSHHostname
	I0731 17:00:17.361134   26392 main.go:141] libmachine: (ha-234651-m02) DBG | domain ha-234651-m02 has defined MAC address 52:54:00:4c:97:0e in network mk-ha-234651
	I0731 17:00:17.361437   26392 main.go:141] libmachine: (ha-234651-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:97:0e", ip: ""} in network mk-ha-234651: {Iface:virbr1 ExpiryTime:2024-07-31 18:00:10 +0000 UTC Type:0 Mac:52:54:00:4c:97:0e Iaid: IPaddr:192.168.39.235 Prefix:24 Hostname:ha-234651-m02 Clientid:01:52:54:00:4c:97:0e}
	I0731 17:00:17.361455   26392 main.go:141] libmachine: (ha-234651-m02) DBG | domain ha-234651-m02 has defined IP address 192.168.39.235 and MAC address 52:54:00:4c:97:0e in network mk-ha-234651
	I0731 17:00:17.361603   26392 main.go:141] libmachine: (ha-234651-m02) Calling .GetSSHPort
	I0731 17:00:17.361821   26392 main.go:141] libmachine: (ha-234651-m02) Calling .GetSSHKeyPath
	I0731 17:00:17.362006   26392 main.go:141] libmachine: (ha-234651-m02) Calling .GetSSHKeyPath
	I0731 17:00:17.362168   26392 main.go:141] libmachine: (ha-234651-m02) Calling .GetSSHUsername
	I0731 17:00:17.362348   26392 main.go:141] libmachine: Using SSH client type: native
	I0731 17:00:17.362511   26392 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.235 22 <nil> <nil>}
	I0731 17:00:17.362522   26392 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0731 17:00:17.463637   26392 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0731 17:00:17.463701   26392 main.go:141] libmachine: found compatible host: buildroot
	I0731 17:00:17.463709   26392 main.go:141] libmachine: Provisioning with buildroot...
	I0731 17:00:17.463717   26392 main.go:141] libmachine: (ha-234651-m02) Calling .GetMachineName
	I0731 17:00:17.463967   26392 buildroot.go:166] provisioning hostname "ha-234651-m02"
	I0731 17:00:17.463989   26392 main.go:141] libmachine: (ha-234651-m02) Calling .GetMachineName
	I0731 17:00:17.464189   26392 main.go:141] libmachine: (ha-234651-m02) Calling .GetSSHHostname
	I0731 17:00:17.466804   26392 main.go:141] libmachine: (ha-234651-m02) DBG | domain ha-234651-m02 has defined MAC address 52:54:00:4c:97:0e in network mk-ha-234651
	I0731 17:00:17.467152   26392 main.go:141] libmachine: (ha-234651-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:97:0e", ip: ""} in network mk-ha-234651: {Iface:virbr1 ExpiryTime:2024-07-31 18:00:10 +0000 UTC Type:0 Mac:52:54:00:4c:97:0e Iaid: IPaddr:192.168.39.235 Prefix:24 Hostname:ha-234651-m02 Clientid:01:52:54:00:4c:97:0e}
	I0731 17:00:17.467182   26392 main.go:141] libmachine: (ha-234651-m02) DBG | domain ha-234651-m02 has defined IP address 192.168.39.235 and MAC address 52:54:00:4c:97:0e in network mk-ha-234651
	I0731 17:00:17.467328   26392 main.go:141] libmachine: (ha-234651-m02) Calling .GetSSHPort
	I0731 17:00:17.467486   26392 main.go:141] libmachine: (ha-234651-m02) Calling .GetSSHKeyPath
	I0731 17:00:17.467713   26392 main.go:141] libmachine: (ha-234651-m02) Calling .GetSSHKeyPath
	I0731 17:00:17.467853   26392 main.go:141] libmachine: (ha-234651-m02) Calling .GetSSHUsername
	I0731 17:00:17.468031   26392 main.go:141] libmachine: Using SSH client type: native
	I0731 17:00:17.468201   26392 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.235 22 <nil> <nil>}
	I0731 17:00:17.468214   26392 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-234651-m02 && echo "ha-234651-m02" | sudo tee /etc/hostname
	I0731 17:00:17.580654   26392 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-234651-m02
	
	I0731 17:00:17.580682   26392 main.go:141] libmachine: (ha-234651-m02) Calling .GetSSHHostname
	I0731 17:00:17.583497   26392 main.go:141] libmachine: (ha-234651-m02) DBG | domain ha-234651-m02 has defined MAC address 52:54:00:4c:97:0e in network mk-ha-234651
	I0731 17:00:17.583988   26392 main.go:141] libmachine: (ha-234651-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:97:0e", ip: ""} in network mk-ha-234651: {Iface:virbr1 ExpiryTime:2024-07-31 18:00:10 +0000 UTC Type:0 Mac:52:54:00:4c:97:0e Iaid: IPaddr:192.168.39.235 Prefix:24 Hostname:ha-234651-m02 Clientid:01:52:54:00:4c:97:0e}
	I0731 17:00:17.584018   26392 main.go:141] libmachine: (ha-234651-m02) DBG | domain ha-234651-m02 has defined IP address 192.168.39.235 and MAC address 52:54:00:4c:97:0e in network mk-ha-234651
	I0731 17:00:17.584218   26392 main.go:141] libmachine: (ha-234651-m02) Calling .GetSSHPort
	I0731 17:00:17.584432   26392 main.go:141] libmachine: (ha-234651-m02) Calling .GetSSHKeyPath
	I0731 17:00:17.584605   26392 main.go:141] libmachine: (ha-234651-m02) Calling .GetSSHKeyPath
	I0731 17:00:17.584748   26392 main.go:141] libmachine: (ha-234651-m02) Calling .GetSSHUsername
	I0731 17:00:17.584976   26392 main.go:141] libmachine: Using SSH client type: native
	I0731 17:00:17.585148   26392 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.235 22 <nil> <nil>}
	I0731 17:00:17.585170   26392 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-234651-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-234651-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-234651-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0731 17:00:17.690947   26392 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0731 17:00:17.690971   26392 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19349-8084/.minikube CaCertPath:/home/jenkins/minikube-integration/19349-8084/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19349-8084/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19349-8084/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19349-8084/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19349-8084/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19349-8084/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19349-8084/.minikube}
	I0731 17:00:17.690984   26392 buildroot.go:174] setting up certificates
	I0731 17:00:17.690993   26392 provision.go:84] configureAuth start
	I0731 17:00:17.691001   26392 main.go:141] libmachine: (ha-234651-m02) Calling .GetMachineName
	I0731 17:00:17.691317   26392 main.go:141] libmachine: (ha-234651-m02) Calling .GetIP
	I0731 17:00:17.694019   26392 main.go:141] libmachine: (ha-234651-m02) DBG | domain ha-234651-m02 has defined MAC address 52:54:00:4c:97:0e in network mk-ha-234651
	I0731 17:00:17.694376   26392 main.go:141] libmachine: (ha-234651-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:97:0e", ip: ""} in network mk-ha-234651: {Iface:virbr1 ExpiryTime:2024-07-31 18:00:10 +0000 UTC Type:0 Mac:52:54:00:4c:97:0e Iaid: IPaddr:192.168.39.235 Prefix:24 Hostname:ha-234651-m02 Clientid:01:52:54:00:4c:97:0e}
	I0731 17:00:17.694397   26392 main.go:141] libmachine: (ha-234651-m02) DBG | domain ha-234651-m02 has defined IP address 192.168.39.235 and MAC address 52:54:00:4c:97:0e in network mk-ha-234651
	I0731 17:00:17.694595   26392 main.go:141] libmachine: (ha-234651-m02) Calling .GetSSHHostname
	I0731 17:00:17.696590   26392 main.go:141] libmachine: (ha-234651-m02) DBG | domain ha-234651-m02 has defined MAC address 52:54:00:4c:97:0e in network mk-ha-234651
	I0731 17:00:17.696852   26392 main.go:141] libmachine: (ha-234651-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:97:0e", ip: ""} in network mk-ha-234651: {Iface:virbr1 ExpiryTime:2024-07-31 18:00:10 +0000 UTC Type:0 Mac:52:54:00:4c:97:0e Iaid: IPaddr:192.168.39.235 Prefix:24 Hostname:ha-234651-m02 Clientid:01:52:54:00:4c:97:0e}
	I0731 17:00:17.696871   26392 main.go:141] libmachine: (ha-234651-m02) DBG | domain ha-234651-m02 has defined IP address 192.168.39.235 and MAC address 52:54:00:4c:97:0e in network mk-ha-234651
	I0731 17:00:17.697034   26392 provision.go:143] copyHostCerts
	I0731 17:00:17.697067   26392 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19349-8084/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19349-8084/.minikube/ca.pem
	I0731 17:00:17.697098   26392 exec_runner.go:144] found /home/jenkins/minikube-integration/19349-8084/.minikube/ca.pem, removing ...
	I0731 17:00:17.697107   26392 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19349-8084/.minikube/ca.pem
	I0731 17:00:17.697161   26392 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19349-8084/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19349-8084/.minikube/ca.pem (1082 bytes)
	I0731 17:00:17.697230   26392 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19349-8084/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19349-8084/.minikube/cert.pem
	I0731 17:00:17.697254   26392 exec_runner.go:144] found /home/jenkins/minikube-integration/19349-8084/.minikube/cert.pem, removing ...
	I0731 17:00:17.697265   26392 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19349-8084/.minikube/cert.pem
	I0731 17:00:17.697305   26392 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19349-8084/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19349-8084/.minikube/cert.pem (1123 bytes)
	I0731 17:00:17.697375   26392 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19349-8084/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19349-8084/.minikube/key.pem
	I0731 17:00:17.697398   26392 exec_runner.go:144] found /home/jenkins/minikube-integration/19349-8084/.minikube/key.pem, removing ...
	I0731 17:00:17.697407   26392 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19349-8084/.minikube/key.pem
	I0731 17:00:17.697441   26392 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19349-8084/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19349-8084/.minikube/key.pem (1675 bytes)
	I0731 17:00:17.697516   26392 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19349-8084/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19349-8084/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19349-8084/.minikube/certs/ca-key.pem org=jenkins.ha-234651-m02 san=[127.0.0.1 192.168.39.235 ha-234651-m02 localhost minikube]
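The provision.go line above records generation of the machine's server certificate, signed against the CA material under .minikube/certs and carrying SANs for 127.0.0.1, 192.168.39.235, ha-234651-m02, localhost and minikube. A minimal self-signed sketch with the same SAN list, using crypto/x509; a real provisioner would sign with the CA key rather than self-sign, and the validity period here simply mirrors the 26280h0m0s CertExpiration from the cluster config earlier in the log:

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.ha-234651-m02"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(26280 * time.Hour),
		KeyUsage:     x509.KeyUsageKeyEncipherment | x509.KeyUsageDigitalSignature,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		// SANs mirroring the san=[...] list in the provision.go line above.
		DNSNames:    []string{"ha-234651-m02", "localhost", "minikube"},
		IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.235")},
	}
	// Self-signed for brevity: template doubles as parent.
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}
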
	I0731 17:00:17.908626   26392 provision.go:177] copyRemoteCerts
	I0731 17:00:17.908682   26392 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0731 17:00:17.908703   26392 main.go:141] libmachine: (ha-234651-m02) Calling .GetSSHHostname
	I0731 17:00:17.911371   26392 main.go:141] libmachine: (ha-234651-m02) DBG | domain ha-234651-m02 has defined MAC address 52:54:00:4c:97:0e in network mk-ha-234651
	I0731 17:00:17.911692   26392 main.go:141] libmachine: (ha-234651-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:97:0e", ip: ""} in network mk-ha-234651: {Iface:virbr1 ExpiryTime:2024-07-31 18:00:10 +0000 UTC Type:0 Mac:52:54:00:4c:97:0e Iaid: IPaddr:192.168.39.235 Prefix:24 Hostname:ha-234651-m02 Clientid:01:52:54:00:4c:97:0e}
	I0731 17:00:17.911722   26392 main.go:141] libmachine: (ha-234651-m02) DBG | domain ha-234651-m02 has defined IP address 192.168.39.235 and MAC address 52:54:00:4c:97:0e in network mk-ha-234651
	I0731 17:00:17.911904   26392 main.go:141] libmachine: (ha-234651-m02) Calling .GetSSHPort
	I0731 17:00:17.912099   26392 main.go:141] libmachine: (ha-234651-m02) Calling .GetSSHKeyPath
	I0731 17:00:17.912274   26392 main.go:141] libmachine: (ha-234651-m02) Calling .GetSSHUsername
	I0731 17:00:17.912401   26392 sshutil.go:53] new ssh client: &{IP:192.168.39.235 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19349-8084/.minikube/machines/ha-234651-m02/id_rsa Username:docker}
	I0731 17:00:17.992622   26392 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19349-8084/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0731 17:00:17.992722   26392 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19349-8084/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0731 17:00:18.015551   26392 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19349-8084/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0731 17:00:18.015631   26392 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19349-8084/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0731 17:00:18.041263   26392 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19349-8084/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0731 17:00:18.041332   26392 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19349-8084/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0731 17:00:18.066724   26392 provision.go:87] duration metric: took 375.720136ms to configureAuth
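The configureAuth step logged above generates a server certificate for the new node, signed by the shared minikube CA and carrying the SANs from the provision.go line (127.0.0.1, 192.168.39.235, ha-234651-m02, localhost, minikube), then copies ca.pem, server.pem and server-key.pem into /etc/docker on the guest. A minimal Go sketch of that SAN-cert generation, assuming a PKCS#1 RSA CA key and an arbitrary validity period (an illustration, not minikube's provision.go):

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"log"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// Assumed file names; the test keeps these under .minikube/certs.
	caPEM, _ := os.ReadFile("ca.pem")
	caKeyPEM, _ := os.ReadFile("ca-key.pem")
	caBlock, _ := pem.Decode(caPEM)
	keyBlock, _ := pem.Decode(caKeyPEM)
	if caBlock == nil || keyBlock == nil {
		log.Fatal("could not decode CA PEM data")
	}
	caCert, err := x509.ParseCertificate(caBlock.Bytes)
	if err != nil {
		log.Fatal(err)
	}
	caKey, err := x509.ParsePKCS1PrivateKey(keyBlock.Bytes) // assumes a PKCS#1 RSA CA key
	if err != nil {
		log.Fatal(err)
	}

	serverKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(time.Now().UnixNano()),
		Subject:      pkix.Name{Organization: []string{"jenkins.ha-234651-m02"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().AddDate(3, 0, 0), // validity period is an assumption
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		// SANs taken from the provision.go log line above.
		IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.235")},
		DNSNames:    []string{"ha-234651-m02", "localhost", "minikube"},
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, caCert, &serverKey.PublicKey, caKey)
	if err != nil {
		log.Fatal(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}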
	I0731 17:00:18.066748   26392 buildroot.go:189] setting minikube options for container-runtime
	I0731 17:00:18.066908   26392 config.go:182] Loaded profile config "ha-234651": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0731 17:00:18.066973   26392 main.go:141] libmachine: (ha-234651-m02) Calling .GetSSHHostname
	I0731 17:00:18.069440   26392 main.go:141] libmachine: (ha-234651-m02) DBG | domain ha-234651-m02 has defined MAC address 52:54:00:4c:97:0e in network mk-ha-234651
	I0731 17:00:18.069773   26392 main.go:141] libmachine: (ha-234651-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:97:0e", ip: ""} in network mk-ha-234651: {Iface:virbr1 ExpiryTime:2024-07-31 18:00:10 +0000 UTC Type:0 Mac:52:54:00:4c:97:0e Iaid: IPaddr:192.168.39.235 Prefix:24 Hostname:ha-234651-m02 Clientid:01:52:54:00:4c:97:0e}
	I0731 17:00:18.069802   26392 main.go:141] libmachine: (ha-234651-m02) DBG | domain ha-234651-m02 has defined IP address 192.168.39.235 and MAC address 52:54:00:4c:97:0e in network mk-ha-234651
	I0731 17:00:18.069932   26392 main.go:141] libmachine: (ha-234651-m02) Calling .GetSSHPort
	I0731 17:00:18.070099   26392 main.go:141] libmachine: (ha-234651-m02) Calling .GetSSHKeyPath
	I0731 17:00:18.070230   26392 main.go:141] libmachine: (ha-234651-m02) Calling .GetSSHKeyPath
	I0731 17:00:18.070368   26392 main.go:141] libmachine: (ha-234651-m02) Calling .GetSSHUsername
	I0731 17:00:18.070543   26392 main.go:141] libmachine: Using SSH client type: native
	I0731 17:00:18.070697   26392 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.235 22 <nil> <nil>}
	I0731 17:00:18.070712   26392 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0731 17:00:18.347128   26392 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0731 17:00:18.347153   26392 main.go:141] libmachine: Checking connection to Docker...
	I0731 17:00:18.347164   26392 main.go:141] libmachine: (ha-234651-m02) Calling .GetURL
	I0731 17:00:18.348381   26392 main.go:141] libmachine: (ha-234651-m02) DBG | Using libvirt version 6000000
	I0731 17:00:18.350311   26392 main.go:141] libmachine: (ha-234651-m02) DBG | domain ha-234651-m02 has defined MAC address 52:54:00:4c:97:0e in network mk-ha-234651
	I0731 17:00:18.350648   26392 main.go:141] libmachine: (ha-234651-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:97:0e", ip: ""} in network mk-ha-234651: {Iface:virbr1 ExpiryTime:2024-07-31 18:00:10 +0000 UTC Type:0 Mac:52:54:00:4c:97:0e Iaid: IPaddr:192.168.39.235 Prefix:24 Hostname:ha-234651-m02 Clientid:01:52:54:00:4c:97:0e}
	I0731 17:00:18.350668   26392 main.go:141] libmachine: (ha-234651-m02) DBG | domain ha-234651-m02 has defined IP address 192.168.39.235 and MAC address 52:54:00:4c:97:0e in network mk-ha-234651
	I0731 17:00:18.350842   26392 main.go:141] libmachine: Docker is up and running!
	I0731 17:00:18.350851   26392 main.go:141] libmachine: Reticulating splines...
	I0731 17:00:18.350859   26392 client.go:171] duration metric: took 21.272299468s to LocalClient.Create
	I0731 17:00:18.350883   26392 start.go:167] duration metric: took 21.272351031s to libmachine.API.Create "ha-234651"
	I0731 17:00:18.350895   26392 start.go:293] postStartSetup for "ha-234651-m02" (driver="kvm2")
	I0731 17:00:18.350910   26392 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0731 17:00:18.350931   26392 main.go:141] libmachine: (ha-234651-m02) Calling .DriverName
	I0731 17:00:18.351157   26392 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0731 17:00:18.351183   26392 main.go:141] libmachine: (ha-234651-m02) Calling .GetSSHHostname
	I0731 17:00:18.353164   26392 main.go:141] libmachine: (ha-234651-m02) DBG | domain ha-234651-m02 has defined MAC address 52:54:00:4c:97:0e in network mk-ha-234651
	I0731 17:00:18.353519   26392 main.go:141] libmachine: (ha-234651-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:97:0e", ip: ""} in network mk-ha-234651: {Iface:virbr1 ExpiryTime:2024-07-31 18:00:10 +0000 UTC Type:0 Mac:52:54:00:4c:97:0e Iaid: IPaddr:192.168.39.235 Prefix:24 Hostname:ha-234651-m02 Clientid:01:52:54:00:4c:97:0e}
	I0731 17:00:18.353538   26392 main.go:141] libmachine: (ha-234651-m02) DBG | domain ha-234651-m02 has defined IP address 192.168.39.235 and MAC address 52:54:00:4c:97:0e in network mk-ha-234651
	I0731 17:00:18.353702   26392 main.go:141] libmachine: (ha-234651-m02) Calling .GetSSHPort
	I0731 17:00:18.353871   26392 main.go:141] libmachine: (ha-234651-m02) Calling .GetSSHKeyPath
	I0731 17:00:18.354025   26392 main.go:141] libmachine: (ha-234651-m02) Calling .GetSSHUsername
	I0731 17:00:18.354128   26392 sshutil.go:53] new ssh client: &{IP:192.168.39.235 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19349-8084/.minikube/machines/ha-234651-m02/id_rsa Username:docker}
	I0731 17:00:18.434127   26392 ssh_runner.go:195] Run: cat /etc/os-release
	I0731 17:00:18.438634   26392 info.go:137] Remote host: Buildroot 2023.02.9
	I0731 17:00:18.438660   26392 filesync.go:126] Scanning /home/jenkins/minikube-integration/19349-8084/.minikube/addons for local assets ...
	I0731 17:00:18.438733   26392 filesync.go:126] Scanning /home/jenkins/minikube-integration/19349-8084/.minikube/files for local assets ...
	I0731 17:00:18.438812   26392 filesync.go:149] local asset: /home/jenkins/minikube-integration/19349-8084/.minikube/files/etc/ssl/certs/152592.pem -> 152592.pem in /etc/ssl/certs
	I0731 17:00:18.438822   26392 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19349-8084/.minikube/files/etc/ssl/certs/152592.pem -> /etc/ssl/certs/152592.pem
	I0731 17:00:18.438899   26392 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0731 17:00:18.448874   26392 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19349-8084/.minikube/files/etc/ssl/certs/152592.pem --> /etc/ssl/certs/152592.pem (1708 bytes)
	I0731 17:00:18.471819   26392 start.go:296] duration metric: took 120.909521ms for postStartSetup
	I0731 17:00:18.471862   26392 main.go:141] libmachine: (ha-234651-m02) Calling .GetConfigRaw
	I0731 17:00:18.472501   26392 main.go:141] libmachine: (ha-234651-m02) Calling .GetIP
	I0731 17:00:18.474939   26392 main.go:141] libmachine: (ha-234651-m02) DBG | domain ha-234651-m02 has defined MAC address 52:54:00:4c:97:0e in network mk-ha-234651
	I0731 17:00:18.475288   26392 main.go:141] libmachine: (ha-234651-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:97:0e", ip: ""} in network mk-ha-234651: {Iface:virbr1 ExpiryTime:2024-07-31 18:00:10 +0000 UTC Type:0 Mac:52:54:00:4c:97:0e Iaid: IPaddr:192.168.39.235 Prefix:24 Hostname:ha-234651-m02 Clientid:01:52:54:00:4c:97:0e}
	I0731 17:00:18.475336   26392 main.go:141] libmachine: (ha-234651-m02) DBG | domain ha-234651-m02 has defined IP address 192.168.39.235 and MAC address 52:54:00:4c:97:0e in network mk-ha-234651
	I0731 17:00:18.475514   26392 profile.go:143] Saving config to /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/ha-234651/config.json ...
	I0731 17:00:18.475705   26392 start.go:128] duration metric: took 21.415075174s to createHost
	I0731 17:00:18.475728   26392 main.go:141] libmachine: (ha-234651-m02) Calling .GetSSHHostname
	I0731 17:00:18.477838   26392 main.go:141] libmachine: (ha-234651-m02) DBG | domain ha-234651-m02 has defined MAC address 52:54:00:4c:97:0e in network mk-ha-234651
	I0731 17:00:18.478170   26392 main.go:141] libmachine: (ha-234651-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:97:0e", ip: ""} in network mk-ha-234651: {Iface:virbr1 ExpiryTime:2024-07-31 18:00:10 +0000 UTC Type:0 Mac:52:54:00:4c:97:0e Iaid: IPaddr:192.168.39.235 Prefix:24 Hostname:ha-234651-m02 Clientid:01:52:54:00:4c:97:0e}
	I0731 17:00:18.478198   26392 main.go:141] libmachine: (ha-234651-m02) DBG | domain ha-234651-m02 has defined IP address 192.168.39.235 and MAC address 52:54:00:4c:97:0e in network mk-ha-234651
	I0731 17:00:18.478304   26392 main.go:141] libmachine: (ha-234651-m02) Calling .GetSSHPort
	I0731 17:00:18.478481   26392 main.go:141] libmachine: (ha-234651-m02) Calling .GetSSHKeyPath
	I0731 17:00:18.478664   26392 main.go:141] libmachine: (ha-234651-m02) Calling .GetSSHKeyPath
	I0731 17:00:18.478817   26392 main.go:141] libmachine: (ha-234651-m02) Calling .GetSSHUsername
	I0731 17:00:18.478972   26392 main.go:141] libmachine: Using SSH client type: native
	I0731 17:00:18.479176   26392 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.235 22 <nil> <nil>}
	I0731 17:00:18.479192   26392 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0731 17:00:18.579698   26392 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722445218.541361703
	
	I0731 17:00:18.579716   26392 fix.go:216] guest clock: 1722445218.541361703
	I0731 17:00:18.579725   26392 fix.go:229] Guest: 2024-07-31 17:00:18.541361703 +0000 UTC Remote: 2024-07-31 17:00:18.475717804 +0000 UTC m=+76.421952730 (delta=65.643899ms)
	I0731 17:00:18.579748   26392 fix.go:200] guest clock delta is within tolerance: 65.643899ms
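The date +%s.%N exchange above is how the fix.go step measures guest/host clock skew; here the delta (~65.6ms) is inside the tolerance, so no time correction is forced. A small Go sketch of that comparison, with the tolerance value assumed purely for illustration:

package main

import (
	"fmt"
	"math"
	"strconv"
	"time"
)

func main() {
	guestRaw := "1722445218.541361703" // output of `date +%s.%N` captured over SSH
	secs, _ := strconv.ParseFloat(guestRaw, 64)
	guest := time.Unix(0, int64(secs*float64(time.Second)))

	delta := time.Since(guest)
	const tolerance = 2 * time.Second // assumed threshold, for illustration only
	if math.Abs(float64(delta)) <= float64(tolerance) {
		fmt.Printf("guest clock delta %v is within tolerance\n", delta)
	} else {
		fmt.Printf("guest clock delta %v exceeds tolerance; a resync would be needed\n", delta)
	}
}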
	I0731 17:00:18.579754   26392 start.go:83] releasing machines lock for "ha-234651-m02", held for 21.519229906s
	I0731 17:00:18.579782   26392 main.go:141] libmachine: (ha-234651-m02) Calling .DriverName
	I0731 17:00:18.580031   26392 main.go:141] libmachine: (ha-234651-m02) Calling .GetIP
	I0731 17:00:18.582506   26392 main.go:141] libmachine: (ha-234651-m02) DBG | domain ha-234651-m02 has defined MAC address 52:54:00:4c:97:0e in network mk-ha-234651
	I0731 17:00:18.582885   26392 main.go:141] libmachine: (ha-234651-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:97:0e", ip: ""} in network mk-ha-234651: {Iface:virbr1 ExpiryTime:2024-07-31 18:00:10 +0000 UTC Type:0 Mac:52:54:00:4c:97:0e Iaid: IPaddr:192.168.39.235 Prefix:24 Hostname:ha-234651-m02 Clientid:01:52:54:00:4c:97:0e}
	I0731 17:00:18.582906   26392 main.go:141] libmachine: (ha-234651-m02) DBG | domain ha-234651-m02 has defined IP address 192.168.39.235 and MAC address 52:54:00:4c:97:0e in network mk-ha-234651
	I0731 17:00:18.585329   26392 out.go:177] * Found network options:
	I0731 17:00:18.586805   26392 out.go:177]   - NO_PROXY=192.168.39.243
	W0731 17:00:18.588117   26392 proxy.go:119] fail to check proxy env: Error ip not in block
	I0731 17:00:18.588147   26392 main.go:141] libmachine: (ha-234651-m02) Calling .DriverName
	I0731 17:00:18.588782   26392 main.go:141] libmachine: (ha-234651-m02) Calling .DriverName
	I0731 17:00:18.588975   26392 main.go:141] libmachine: (ha-234651-m02) Calling .DriverName
	I0731 17:00:18.589060   26392 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0731 17:00:18.589098   26392 main.go:141] libmachine: (ha-234651-m02) Calling .GetSSHHostname
	W0731 17:00:18.589138   26392 proxy.go:119] fail to check proxy env: Error ip not in block
	I0731 17:00:18.589203   26392 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0731 17:00:18.589224   26392 main.go:141] libmachine: (ha-234651-m02) Calling .GetSSHHostname
	I0731 17:00:18.591692   26392 main.go:141] libmachine: (ha-234651-m02) DBG | domain ha-234651-m02 has defined MAC address 52:54:00:4c:97:0e in network mk-ha-234651
	I0731 17:00:18.592051   26392 main.go:141] libmachine: (ha-234651-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:97:0e", ip: ""} in network mk-ha-234651: {Iface:virbr1 ExpiryTime:2024-07-31 18:00:10 +0000 UTC Type:0 Mac:52:54:00:4c:97:0e Iaid: IPaddr:192.168.39.235 Prefix:24 Hostname:ha-234651-m02 Clientid:01:52:54:00:4c:97:0e}
	I0731 17:00:18.592077   26392 main.go:141] libmachine: (ha-234651-m02) DBG | domain ha-234651-m02 has defined MAC address 52:54:00:4c:97:0e in network mk-ha-234651
	I0731 17:00:18.592095   26392 main.go:141] libmachine: (ha-234651-m02) DBG | domain ha-234651-m02 has defined IP address 192.168.39.235 and MAC address 52:54:00:4c:97:0e in network mk-ha-234651
	I0731 17:00:18.592208   26392 main.go:141] libmachine: (ha-234651-m02) Calling .GetSSHPort
	I0731 17:00:18.592411   26392 main.go:141] libmachine: (ha-234651-m02) Calling .GetSSHKeyPath
	I0731 17:00:18.592495   26392 main.go:141] libmachine: (ha-234651-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:97:0e", ip: ""} in network mk-ha-234651: {Iface:virbr1 ExpiryTime:2024-07-31 18:00:10 +0000 UTC Type:0 Mac:52:54:00:4c:97:0e Iaid: IPaddr:192.168.39.235 Prefix:24 Hostname:ha-234651-m02 Clientid:01:52:54:00:4c:97:0e}
	I0731 17:00:18.592518   26392 main.go:141] libmachine: (ha-234651-m02) DBG | domain ha-234651-m02 has defined IP address 192.168.39.235 and MAC address 52:54:00:4c:97:0e in network mk-ha-234651
	I0731 17:00:18.592561   26392 main.go:141] libmachine: (ha-234651-m02) Calling .GetSSHUsername
	I0731 17:00:18.592711   26392 main.go:141] libmachine: (ha-234651-m02) Calling .GetSSHPort
	I0731 17:00:18.592715   26392 sshutil.go:53] new ssh client: &{IP:192.168.39.235 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19349-8084/.minikube/machines/ha-234651-m02/id_rsa Username:docker}
	I0731 17:00:18.592883   26392 main.go:141] libmachine: (ha-234651-m02) Calling .GetSSHKeyPath
	I0731 17:00:18.593025   26392 main.go:141] libmachine: (ha-234651-m02) Calling .GetSSHUsername
	I0731 17:00:18.593173   26392 sshutil.go:53] new ssh client: &{IP:192.168.39.235 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19349-8084/.minikube/machines/ha-234651-m02/id_rsa Username:docker}
	I0731 17:00:18.825075   26392 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0731 17:00:18.831022   26392 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0731 17:00:18.831083   26392 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0731 17:00:18.846216   26392 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0731 17:00:18.846241   26392 start.go:495] detecting cgroup driver to use...
	I0731 17:00:18.846293   26392 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0731 17:00:18.863094   26392 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0731 17:00:18.876496   26392 docker.go:217] disabling cri-docker service (if available) ...
	I0731 17:00:18.876553   26392 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0731 17:00:18.889713   26392 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0731 17:00:18.903349   26392 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0731 17:00:19.025223   26392 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0731 17:00:19.158638   26392 docker.go:233] disabling docker service ...
	I0731 17:00:19.158697   26392 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0731 17:00:19.172351   26392 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0731 17:00:19.184900   26392 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0731 17:00:19.315583   26392 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0731 17:00:19.429374   26392 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0731 17:00:19.442911   26392 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0731 17:00:19.461034   26392 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0731 17:00:19.461092   26392 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 17:00:19.470896   26392 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0731 17:00:19.470949   26392 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 17:00:19.481866   26392 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 17:00:19.491687   26392 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 17:00:19.501624   26392 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0731 17:00:19.511588   26392 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 17:00:19.521261   26392 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 17:00:19.537335   26392 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 17:00:19.547447   26392 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0731 17:00:19.556407   26392 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0731 17:00:19.556455   26392 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0731 17:00:19.569736   26392 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0731 17:00:19.579076   26392 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0731 17:00:19.698202   26392 ssh_runner.go:195] Run: sudo systemctl restart crio
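The sed commands above point CRI-O at the registry.k8s.io/pause:3.9 pause image and the cgroupfs cgroup manager (plus the conmon_cgroup and default_sysctls tweaks) before crio is restarted. The two main rewrites, expressed as a rough Go helper over the same drop-in file (the path comes from the log; the rest is illustrative, not minikube's crio.go):

package main

import (
	"fmt"
	"os"
	"regexp"
)

func main() {
	path := "/etc/crio/crio.conf.d/02-crio.conf" // drop-in file named in the log
	data, err := os.ReadFile(path)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}
	// Mirror the sed 's|^.*pause_image = .*$|...|' and cgroup_manager edits.
	out := regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAll(data, []byte(`pause_image = "registry.k8s.io/pause:3.9"`))
	out = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAll(out, []byte(`cgroup_manager = "cgroupfs"`))
	if err := os.WriteFile(path, out, 0o644); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}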
	I0731 17:00:19.843319   26392 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0731 17:00:19.843394   26392 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0731 17:00:19.848277   26392 start.go:563] Will wait 60s for crictl version
	I0731 17:00:19.848331   26392 ssh_runner.go:195] Run: which crictl
	I0731 17:00:19.851961   26392 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0731 17:00:19.894272   26392 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0731 17:00:19.894339   26392 ssh_runner.go:195] Run: crio --version
	I0731 17:00:19.920497   26392 ssh_runner.go:195] Run: crio --version
	I0731 17:00:19.949030   26392 out.go:177] * Preparing Kubernetes v1.30.3 on CRI-O 1.29.1 ...
	I0731 17:00:19.950456   26392 out.go:177]   - env NO_PROXY=192.168.39.243
	I0731 17:00:19.951537   26392 main.go:141] libmachine: (ha-234651-m02) Calling .GetIP
	I0731 17:00:19.953921   26392 main.go:141] libmachine: (ha-234651-m02) DBG | domain ha-234651-m02 has defined MAC address 52:54:00:4c:97:0e in network mk-ha-234651
	I0731 17:00:19.954230   26392 main.go:141] libmachine: (ha-234651-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:97:0e", ip: ""} in network mk-ha-234651: {Iface:virbr1 ExpiryTime:2024-07-31 18:00:10 +0000 UTC Type:0 Mac:52:54:00:4c:97:0e Iaid: IPaddr:192.168.39.235 Prefix:24 Hostname:ha-234651-m02 Clientid:01:52:54:00:4c:97:0e}
	I0731 17:00:19.954255   26392 main.go:141] libmachine: (ha-234651-m02) DBG | domain ha-234651-m02 has defined IP address 192.168.39.235 and MAC address 52:54:00:4c:97:0e in network mk-ha-234651
	I0731 17:00:19.954452   26392 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0731 17:00:19.958187   26392 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
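The bash one-liner above makes the host.minikube.internal mapping idempotent: any existing entry is filtered out before the 192.168.39.1 line is appended. A rough Go equivalent (illustrative only; it rewrites the file in place instead of going through a temp file and sudo cp as the log does):

package main

import (
	"os"
	"strings"
)

func main() {
	const hostsPath = "/etc/hosts"
	data, err := os.ReadFile(hostsPath)
	if err != nil {
		panic(err)
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		// Drop any stale host.minikube.internal entry, matching the grep -v above.
		if strings.HasSuffix(line, "\thost.minikube.internal") {
			continue
		}
		kept = append(kept, line)
	}
	kept = append(kept, "192.168.39.1\thost.minikube.internal")
	if err := os.WriteFile(hostsPath, []byte(strings.Join(kept, "\n")+"\n"), 0o644); err != nil {
		panic(err)
	}
}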
	I0731 17:00:19.969356   26392 mustload.go:65] Loading cluster: ha-234651
	I0731 17:00:19.969576   26392 config.go:182] Loaded profile config "ha-234651": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0731 17:00:19.969827   26392 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 17:00:19.969853   26392 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 17:00:19.984671   26392 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34657
	I0731 17:00:19.985040   26392 main.go:141] libmachine: () Calling .GetVersion
	I0731 17:00:19.985467   26392 main.go:141] libmachine: Using API Version  1
	I0731 17:00:19.985491   26392 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 17:00:19.985830   26392 main.go:141] libmachine: () Calling .GetMachineName
	I0731 17:00:19.986020   26392 main.go:141] libmachine: (ha-234651) Calling .GetState
	I0731 17:00:19.987572   26392 host.go:66] Checking if "ha-234651" exists ...
	I0731 17:00:19.987863   26392 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 17:00:19.987885   26392 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 17:00:20.001991   26392 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44525
	I0731 17:00:20.002330   26392 main.go:141] libmachine: () Calling .GetVersion
	I0731 17:00:20.002823   26392 main.go:141] libmachine: Using API Version  1
	I0731 17:00:20.002845   26392 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 17:00:20.003177   26392 main.go:141] libmachine: () Calling .GetMachineName
	I0731 17:00:20.003379   26392 main.go:141] libmachine: (ha-234651) Calling .DriverName
	I0731 17:00:20.003557   26392 certs.go:68] Setting up /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/ha-234651 for IP: 192.168.39.235
	I0731 17:00:20.003566   26392 certs.go:194] generating shared ca certs ...
	I0731 17:00:20.003584   26392 certs.go:226] acquiring lock for ca certs: {Name:mk49eed439085f9c30746706460ce213570d1997 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 17:00:20.003728   26392 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19349-8084/.minikube/ca.key
	I0731 17:00:20.003781   26392 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19349-8084/.minikube/proxy-client-ca.key
	I0731 17:00:20.003793   26392 certs.go:256] generating profile certs ...
	I0731 17:00:20.003884   26392 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/ha-234651/client.key
	I0731 17:00:20.003915   26392 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/ha-234651/apiserver.key.541fa027
	I0731 17:00:20.003935   26392 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/ha-234651/apiserver.crt.541fa027 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.243 192.168.39.235 192.168.39.254]
	I0731 17:00:20.231073   26392 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/ha-234651/apiserver.crt.541fa027 ...
	I0731 17:00:20.231099   26392 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/ha-234651/apiserver.crt.541fa027: {Name:mkd03ff98bd704ad38226e3ee0bb5356dbd65d25 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 17:00:20.231310   26392 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/ha-234651/apiserver.key.541fa027 ...
	I0731 17:00:20.231328   26392 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/ha-234651/apiserver.key.541fa027: {Name:mk7c4021e5a655d2f0b8e6095debb8ef91e562e8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 17:00:20.231428   26392 certs.go:381] copying /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/ha-234651/apiserver.crt.541fa027 -> /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/ha-234651/apiserver.crt
	I0731 17:00:20.231581   26392 certs.go:385] copying /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/ha-234651/apiserver.key.541fa027 -> /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/ha-234651/apiserver.key
	I0731 17:00:20.231748   26392 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/ha-234651/proxy-client.key
	I0731 17:00:20.231768   26392 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19349-8084/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0731 17:00:20.231786   26392 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19349-8084/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0731 17:00:20.231803   26392 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19349-8084/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0731 17:00:20.231820   26392 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19349-8084/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0731 17:00:20.231837   26392 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/ha-234651/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0731 17:00:20.231852   26392 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/ha-234651/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0731 17:00:20.231869   26392 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/ha-234651/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0731 17:00:20.231897   26392 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/ha-234651/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0731 17:00:20.231965   26392 certs.go:484] found cert: /home/jenkins/minikube-integration/19349-8084/.minikube/certs/15259.pem (1338 bytes)
	W0731 17:00:20.232010   26392 certs.go:480] ignoring /home/jenkins/minikube-integration/19349-8084/.minikube/certs/15259_empty.pem, impossibly tiny 0 bytes
	I0731 17:00:20.232023   26392 certs.go:484] found cert: /home/jenkins/minikube-integration/19349-8084/.minikube/certs/ca-key.pem (1675 bytes)
	I0731 17:00:20.232053   26392 certs.go:484] found cert: /home/jenkins/minikube-integration/19349-8084/.minikube/certs/ca.pem (1082 bytes)
	I0731 17:00:20.232084   26392 certs.go:484] found cert: /home/jenkins/minikube-integration/19349-8084/.minikube/certs/cert.pem (1123 bytes)
	I0731 17:00:20.232115   26392 certs.go:484] found cert: /home/jenkins/minikube-integration/19349-8084/.minikube/certs/key.pem (1675 bytes)
	I0731 17:00:20.232169   26392 certs.go:484] found cert: /home/jenkins/minikube-integration/19349-8084/.minikube/files/etc/ssl/certs/152592.pem (1708 bytes)
	I0731 17:00:20.232209   26392 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19349-8084/.minikube/certs/15259.pem -> /usr/share/ca-certificates/15259.pem
	I0731 17:00:20.232229   26392 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19349-8084/.minikube/files/etc/ssl/certs/152592.pem -> /usr/share/ca-certificates/152592.pem
	I0731 17:00:20.232248   26392 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19349-8084/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0731 17:00:20.232285   26392 main.go:141] libmachine: (ha-234651) Calling .GetSSHHostname
	I0731 17:00:20.235380   26392 main.go:141] libmachine: (ha-234651) DBG | domain ha-234651 has defined MAC address 52:54:00:20:60:53 in network mk-ha-234651
	I0731 17:00:20.235767   26392 main.go:141] libmachine: (ha-234651) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:60:53", ip: ""} in network mk-ha-234651: {Iface:virbr1 ExpiryTime:2024-07-31 17:59:15 +0000 UTC Type:0 Mac:52:54:00:20:60:53 Iaid: IPaddr:192.168.39.243 Prefix:24 Hostname:ha-234651 Clientid:01:52:54:00:20:60:53}
	I0731 17:00:20.235794   26392 main.go:141] libmachine: (ha-234651) DBG | domain ha-234651 has defined IP address 192.168.39.243 and MAC address 52:54:00:20:60:53 in network mk-ha-234651
	I0731 17:00:20.235980   26392 main.go:141] libmachine: (ha-234651) Calling .GetSSHPort
	I0731 17:00:20.236191   26392 main.go:141] libmachine: (ha-234651) Calling .GetSSHKeyPath
	I0731 17:00:20.236362   26392 main.go:141] libmachine: (ha-234651) Calling .GetSSHUsername
	I0731 17:00:20.236521   26392 sshutil.go:53] new ssh client: &{IP:192.168.39.243 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19349-8084/.minikube/machines/ha-234651/id_rsa Username:docker}
	I0731 17:00:20.311575   26392 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/sa.pub
	I0731 17:00:20.316492   26392 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0731 17:00:20.327568   26392 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/sa.key
	I0731 17:00:20.331459   26392 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I0731 17:00:20.341242   26392 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/front-proxy-ca.crt
	I0731 17:00:20.344904   26392 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0731 17:00:20.354887   26392 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/front-proxy-ca.key
	I0731 17:00:20.358864   26392 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I0731 17:00:20.368198   26392 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/etcd/ca.crt
	I0731 17:00:20.371805   26392 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0731 17:00:20.381366   26392 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/etcd/ca.key
	I0731 17:00:20.385007   26392 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1679 bytes)
	I0731 17:00:20.395679   26392 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19349-8084/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0731 17:00:20.418874   26392 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19349-8084/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0731 17:00:20.441432   26392 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19349-8084/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0731 17:00:20.466283   26392 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19349-8084/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0731 17:00:20.490395   26392 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/ha-234651/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0731 17:00:20.516051   26392 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/ha-234651/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0731 17:00:20.541408   26392 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/ha-234651/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0731 17:00:20.564348   26392 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/ha-234651/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0731 17:00:20.586742   26392 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19349-8084/.minikube/certs/15259.pem --> /usr/share/ca-certificates/15259.pem (1338 bytes)
	I0731 17:00:20.612026   26392 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19349-8084/.minikube/files/etc/ssl/certs/152592.pem --> /usr/share/ca-certificates/152592.pem (1708 bytes)
	I0731 17:00:20.634527   26392 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19349-8084/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0731 17:00:20.657770   26392 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0731 17:00:20.673725   26392 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I0731 17:00:20.689738   26392 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0731 17:00:20.705362   26392 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I0731 17:00:20.720834   26392 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0731 17:00:20.737416   26392 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1679 bytes)
	I0731 17:00:20.753122   26392 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0731 17:00:20.768107   26392 ssh_runner.go:195] Run: openssl version
	I0731 17:00:20.773384   26392 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/152592.pem && ln -fs /usr/share/ca-certificates/152592.pem /etc/ssl/certs/152592.pem"
	I0731 17:00:20.783132   26392 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/152592.pem
	I0731 17:00:20.787095   26392 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 31 16:54 /usr/share/ca-certificates/152592.pem
	I0731 17:00:20.787173   26392 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/152592.pem
	I0731 17:00:20.792533   26392 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/152592.pem /etc/ssl/certs/3ec20f2e.0"
	I0731 17:00:20.802491   26392 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0731 17:00:20.812456   26392 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0731 17:00:20.816588   26392 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 31 16:42 /usr/share/ca-certificates/minikubeCA.pem
	I0731 17:00:20.816643   26392 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0731 17:00:20.822067   26392 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0731 17:00:20.831770   26392 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/15259.pem && ln -fs /usr/share/ca-certificates/15259.pem /etc/ssl/certs/15259.pem"
	I0731 17:00:20.842065   26392 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/15259.pem
	I0731 17:00:20.846099   26392 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 31 16:54 /usr/share/ca-certificates/15259.pem
	I0731 17:00:20.846146   26392 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/15259.pem
	I0731 17:00:20.851266   26392 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/15259.pem /etc/ssl/certs/51391683.0"
	I0731 17:00:20.860658   26392 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0731 17:00:20.864271   26392 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0731 17:00:20.864317   26392 kubeadm.go:934] updating node {m02 192.168.39.235 8443 v1.30.3 crio true true} ...
	I0731 17:00:20.864398   26392 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-234651-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.235
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:ha-234651 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0731 17:00:20.864421   26392 kube-vip.go:115] generating kube-vip config ...
	I0731 17:00:20.864448   26392 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0731 17:00:20.881697   26392 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0731 17:00:20.881758   26392 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
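The YAML above is a kube-vip static pod: once the rendered manifest lands in /etc/kubernetes/manifests (the log later shows "scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)"), the kubelet runs it directly, and kube-vip advertises the 192.168.39.254 control-plane VIP, with lb_enable providing the control-plane load balancing noted at kube-vip.go:167. A trivial sketch of that placement step, with a placeholder manifest body standing in for the YAML shown above:

package main

import "os"

func main() {
	manifest := []byte("# generated kube-vip pod spec (see the YAML in the log)\n")
	// /etc/kubernetes/manifests is the kubelet static-pod directory used by kubeadm;
	// the kubelet picks the pod up without involving the API server.
	if err := os.WriteFile("/etc/kubernetes/manifests/kube-vip.yaml", manifest, 0o600); err != nil {
		panic(err)
	}
}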
	I0731 17:00:20.881805   26392 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0731 17:00:20.891145   26392 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.30.3: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.30.3': No such file or directory
	
	Initiating transfer...
	I0731 17:00:20.891210   26392 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.30.3
	I0731 17:00:20.899812   26392 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubectl.sha256
	I0731 17:00:20.899836   26392 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19349-8084/.minikube/cache/linux/amd64/v1.30.3/kubectl -> /var/lib/minikube/binaries/v1.30.3/kubectl
	I0731 17:00:20.899904   26392 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.3/kubectl
	I0731 17:00:20.899912   26392 download.go:107] Downloading: https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubelet.sha256 -> /home/jenkins/minikube-integration/19349-8084/.minikube/cache/linux/amd64/v1.30.3/kubelet
	I0731 17:00:20.899940   26392 download.go:107] Downloading: https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubeadm.sha256 -> /home/jenkins/minikube-integration/19349-8084/.minikube/cache/linux/amd64/v1.30.3/kubeadm
	I0731 17:00:20.904201   26392 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.3/kubectl: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.3/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.3/kubectl': No such file or directory
	I0731 17:00:20.904229   26392 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19349-8084/.minikube/cache/linux/amd64/v1.30.3/kubectl --> /var/lib/minikube/binaries/v1.30.3/kubectl (51454104 bytes)
	I0731 17:00:21.663380   26392 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19349-8084/.minikube/cache/linux/amd64/v1.30.3/kubeadm -> /var/lib/minikube/binaries/v1.30.3/kubeadm
	I0731 17:00:21.663452   26392 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.3/kubeadm
	I0731 17:00:21.668442   26392 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.3/kubeadm: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.3/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.3/kubeadm': No such file or directory
	I0731 17:00:21.668472   26392 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19349-8084/.minikube/cache/linux/amd64/v1.30.3/kubeadm --> /var/lib/minikube/binaries/v1.30.3/kubeadm (50249880 bytes)
	I0731 17:00:21.986946   26392 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0731 17:00:22.003689   26392 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19349-8084/.minikube/cache/linux/amd64/v1.30.3/kubelet -> /var/lib/minikube/binaries/v1.30.3/kubelet
	I0731 17:00:22.003795   26392 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.3/kubelet
	I0731 17:00:22.007955   26392 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.3/kubelet: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.3/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.3/kubelet': No such file or directory
	I0731 17:00:22.007994   26392 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19349-8084/.minikube/cache/linux/amd64/v1.30.3/kubelet --> /var/lib/minikube/binaries/v1.30.3/kubelet (100125080 bytes)
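Each kubectl/kubeadm/kubelet binary above is fetched from dl.k8s.io with a checksum reference pointing at the matching .sha256 file (the "Not caching binary" line), cached under .minikube/cache, then scp'd into /var/lib/minikube/binaries/v1.30.3. A self-contained sketch of a checksum-verified download of one of those binaries (an illustration; minikube's download.go handles this, including the cache):

package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
	"io"
	"net/http"
	"os"
	"strings"
)

func fetch(url string) ([]byte, error) {
	resp, err := http.Get(url)
	if err != nil {
		return nil, err
	}
	defer resp.Body.Close()
	if resp.StatusCode != http.StatusOK {
		return nil, fmt.Errorf("GET %s: %s", url, resp.Status)
	}
	return io.ReadAll(resp.Body)
}

func main() {
	url := "https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubectl"
	bin, err := fetch(url)
	if err != nil {
		panic(err)
	}
	sumFile, err := fetch(url + ".sha256")
	if err != nil {
		panic(err)
	}
	want := strings.Fields(string(sumFile))[0]
	sum := sha256.Sum256(bin)
	if got := hex.EncodeToString(sum[:]); got != want {
		panic("checksum mismatch for " + url)
	}
	if err := os.WriteFile("kubectl", bin, 0o755); err != nil {
		panic(err)
	}
	fmt.Println("kubectl downloaded and verified")
}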
	I0731 17:00:22.382201   26392 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0731 17:00:22.390777   26392 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I0731 17:00:22.406224   26392 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0731 17:00:22.421240   26392 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I0731 17:00:22.436792   26392 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0731 17:00:22.441210   26392 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0731 17:00:22.452562   26392 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0731 17:00:22.576024   26392 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0731 17:00:22.594607   26392 host.go:66] Checking if "ha-234651" exists ...
	I0731 17:00:22.594940   26392 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 17:00:22.594978   26392 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 17:00:22.610985   26392 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46341
	I0731 17:00:22.611531   26392 main.go:141] libmachine: () Calling .GetVersion
	I0731 17:00:22.612007   26392 main.go:141] libmachine: Using API Version  1
	I0731 17:00:22.612027   26392 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 17:00:22.612352   26392 main.go:141] libmachine: () Calling .GetMachineName
	I0731 17:00:22.612527   26392 main.go:141] libmachine: (ha-234651) Calling .DriverName
	I0731 17:00:22.612670   26392 start.go:317] joinCluster: &{Name:ha-234651 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 Cluster
Name:ha-234651 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.243 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.235 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiratio
n:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0731 17:00:22.612762   26392 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0731 17:00:22.612785   26392 main.go:141] libmachine: (ha-234651) Calling .GetSSHHostname
	I0731 17:00:22.615818   26392 main.go:141] libmachine: (ha-234651) DBG | domain ha-234651 has defined MAC address 52:54:00:20:60:53 in network mk-ha-234651
	I0731 17:00:22.616280   26392 main.go:141] libmachine: (ha-234651) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:60:53", ip: ""} in network mk-ha-234651: {Iface:virbr1 ExpiryTime:2024-07-31 17:59:15 +0000 UTC Type:0 Mac:52:54:00:20:60:53 Iaid: IPaddr:192.168.39.243 Prefix:24 Hostname:ha-234651 Clientid:01:52:54:00:20:60:53}
	I0731 17:00:22.616305   26392 main.go:141] libmachine: (ha-234651) DBG | domain ha-234651 has defined IP address 192.168.39.243 and MAC address 52:54:00:20:60:53 in network mk-ha-234651
	I0731 17:00:22.616439   26392 main.go:141] libmachine: (ha-234651) Calling .GetSSHPort
	I0731 17:00:22.616586   26392 main.go:141] libmachine: (ha-234651) Calling .GetSSHKeyPath
	I0731 17:00:22.616726   26392 main.go:141] libmachine: (ha-234651) Calling .GetSSHUsername
	I0731 17:00:22.616844   26392 sshutil.go:53] new ssh client: &{IP:192.168.39.243 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19349-8084/.minikube/machines/ha-234651/id_rsa Username:docker}
	I0731 17:00:22.761249   26392 start.go:343] trying to join control-plane node "m02" to cluster: &{Name:m02 IP:192.168.39.235 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0731 17:00:22.761303   26392 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token hpnvqq.fpzfbqdbq5p8g3rv --discovery-token-ca-cert-hash sha256:460b24b51d10a3760f2391542a8a89e401359854f437f922cbee3f2d8ed6c856 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-234651-m02 --control-plane --apiserver-advertise-address=192.168.39.235 --apiserver-bind-port=8443"
	I0731 17:00:46.267965   26392 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token hpnvqq.fpzfbqdbq5p8g3rv --discovery-token-ca-cert-hash sha256:460b24b51d10a3760f2391542a8a89e401359854f437f922cbee3f2d8ed6c856 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-234651-m02 --control-plane --apiserver-advertise-address=192.168.39.235 --apiserver-bind-port=8443": (23.506634693s)
	I0731 17:00:46.268000   26392 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0731 17:00:46.768462   26392 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-234651-m02 minikube.k8s.io/updated_at=2024_07_31T17_00_46_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=1d737dad7efa60c56d30434fcd857dd3b14c91d9 minikube.k8s.io/name=ha-234651 minikube.k8s.io/primary=false
	I0731 17:00:46.921381   26392 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-234651-m02 node-role.kubernetes.io/control-plane:NoSchedule-
	I0731 17:00:47.029139   26392 start.go:319] duration metric: took 24.416465506s to joinCluster
	I0731 17:00:47.029219   26392 start.go:235] Will wait 6m0s for node &{Name:m02 IP:192.168.39.235 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0731 17:00:47.029494   26392 config.go:182] Loaded profile config "ha-234651": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0731 17:00:47.030590   26392 out.go:177] * Verifying Kubernetes components...
	I0731 17:00:47.031802   26392 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0731 17:00:47.280650   26392 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0731 17:00:47.319645   26392 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19349-8084/kubeconfig
	I0731 17:00:47.319853   26392 kapi.go:59] client config for ha-234651: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19349-8084/.minikube/profiles/ha-234651/client.crt", KeyFile:"/home/jenkins/minikube-integration/19349-8084/.minikube/profiles/ha-234651/client.key", CAFile:"/home/jenkins/minikube-integration/19349-8084/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)},
UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1d02f40), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0731 17:00:47.319906   26392 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.243:8443
	I0731 17:00:47.320069   26392 node_ready.go:35] waiting up to 6m0s for node "ha-234651-m02" to be "Ready" ...
	I0731 17:00:47.320139   26392 round_trippers.go:463] GET https://192.168.39.243:8443/api/v1/nodes/ha-234651-m02
	I0731 17:00:47.320144   26392 round_trippers.go:469] Request Headers:
	I0731 17:00:47.320151   26392 round_trippers.go:473]     Accept: application/json, */*
	I0731 17:00:47.320158   26392 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 17:00:47.330603   26392 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I0731 17:00:47.820885   26392 round_trippers.go:463] GET https://192.168.39.243:8443/api/v1/nodes/ha-234651-m02
	I0731 17:00:47.820904   26392 round_trippers.go:469] Request Headers:
	I0731 17:00:47.820919   26392 round_trippers.go:473]     Accept: application/json, */*
	I0731 17:00:47.820924   26392 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 17:00:47.824577   26392 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 17:00:48.320541   26392 round_trippers.go:463] GET https://192.168.39.243:8443/api/v1/nodes/ha-234651-m02
	I0731 17:00:48.320562   26392 round_trippers.go:469] Request Headers:
	I0731 17:00:48.320570   26392 round_trippers.go:473]     Accept: application/json, */*
	I0731 17:00:48.320575   26392 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 17:00:48.324027   26392 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 17:00:48.821053   26392 round_trippers.go:463] GET https://192.168.39.243:8443/api/v1/nodes/ha-234651-m02
	I0731 17:00:48.821086   26392 round_trippers.go:469] Request Headers:
	I0731 17:00:48.821109   26392 round_trippers.go:473]     Accept: application/json, */*
	I0731 17:00:48.821113   26392 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 17:00:48.824308   26392 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 17:00:49.320290   26392 round_trippers.go:463] GET https://192.168.39.243:8443/api/v1/nodes/ha-234651-m02
	I0731 17:00:49.320316   26392 round_trippers.go:469] Request Headers:
	I0731 17:00:49.320327   26392 round_trippers.go:473]     Accept: application/json, */*
	I0731 17:00:49.320333   26392 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 17:00:49.323974   26392 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 17:00:49.324681   26392 node_ready.go:53] node "ha-234651-m02" has status "Ready":"False"
	I0731 17:00:49.821158   26392 round_trippers.go:463] GET https://192.168.39.243:8443/api/v1/nodes/ha-234651-m02
	I0731 17:00:49.821179   26392 round_trippers.go:469] Request Headers:
	I0731 17:00:49.821187   26392 round_trippers.go:473]     Accept: application/json, */*
	I0731 17:00:49.821192   26392 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 17:00:49.824530   26392 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 17:00:50.320318   26392 round_trippers.go:463] GET https://192.168.39.243:8443/api/v1/nodes/ha-234651-m02
	I0731 17:00:50.320350   26392 round_trippers.go:469] Request Headers:
	I0731 17:00:50.320357   26392 round_trippers.go:473]     Accept: application/json, */*
	I0731 17:00:50.320365   26392 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 17:00:50.323791   26392 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 17:00:50.820653   26392 round_trippers.go:463] GET https://192.168.39.243:8443/api/v1/nodes/ha-234651-m02
	I0731 17:00:50.820677   26392 round_trippers.go:469] Request Headers:
	I0731 17:00:50.820689   26392 round_trippers.go:473]     Accept: application/json, */*
	I0731 17:00:50.820696   26392 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 17:00:50.824289   26392 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 17:00:51.320251   26392 round_trippers.go:463] GET https://192.168.39.243:8443/api/v1/nodes/ha-234651-m02
	I0731 17:00:51.320271   26392 round_trippers.go:469] Request Headers:
	I0731 17:00:51.320282   26392 round_trippers.go:473]     Accept: application/json, */*
	I0731 17:00:51.320290   26392 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 17:00:51.323139   26392 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0731 17:00:51.820380   26392 round_trippers.go:463] GET https://192.168.39.243:8443/api/v1/nodes/ha-234651-m02
	I0731 17:00:51.820400   26392 round_trippers.go:469] Request Headers:
	I0731 17:00:51.820408   26392 round_trippers.go:473]     Accept: application/json, */*
	I0731 17:00:51.820412   26392 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 17:00:51.823712   26392 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 17:00:51.824612   26392 node_ready.go:53] node "ha-234651-m02" has status "Ready":"False"
	I0731 17:00:52.321232   26392 round_trippers.go:463] GET https://192.168.39.243:8443/api/v1/nodes/ha-234651-m02
	I0731 17:00:52.321253   26392 round_trippers.go:469] Request Headers:
	I0731 17:00:52.321261   26392 round_trippers.go:473]     Accept: application/json, */*
	I0731 17:00:52.321264   26392 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 17:00:52.324350   26392 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 17:00:52.820794   26392 round_trippers.go:463] GET https://192.168.39.243:8443/api/v1/nodes/ha-234651-m02
	I0731 17:00:52.820819   26392 round_trippers.go:469] Request Headers:
	I0731 17:00:52.820831   26392 round_trippers.go:473]     Accept: application/json, */*
	I0731 17:00:52.820838   26392 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 17:00:52.850837   26392 round_trippers.go:574] Response Status: 200 OK in 29 milliseconds
	I0731 17:00:53.320599   26392 round_trippers.go:463] GET https://192.168.39.243:8443/api/v1/nodes/ha-234651-m02
	I0731 17:00:53.320621   26392 round_trippers.go:469] Request Headers:
	I0731 17:00:53.320633   26392 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 17:00:53.320641   26392 round_trippers.go:473]     Accept: application/json, */*
	I0731 17:00:53.324453   26392 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 17:00:53.821278   26392 round_trippers.go:463] GET https://192.168.39.243:8443/api/v1/nodes/ha-234651-m02
	I0731 17:00:53.821300   26392 round_trippers.go:469] Request Headers:
	I0731 17:00:53.821310   26392 round_trippers.go:473]     Accept: application/json, */*
	I0731 17:00:53.821314   26392 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 17:00:53.824402   26392 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 17:00:53.825080   26392 node_ready.go:53] node "ha-234651-m02" has status "Ready":"False"
	I0731 17:00:54.320491   26392 round_trippers.go:463] GET https://192.168.39.243:8443/api/v1/nodes/ha-234651-m02
	I0731 17:00:54.320511   26392 round_trippers.go:469] Request Headers:
	I0731 17:00:54.320519   26392 round_trippers.go:473]     Accept: application/json, */*
	I0731 17:00:54.320522   26392 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 17:00:54.323502   26392 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0731 17:00:54.820491   26392 round_trippers.go:463] GET https://192.168.39.243:8443/api/v1/nodes/ha-234651-m02
	I0731 17:00:54.820509   26392 round_trippers.go:469] Request Headers:
	I0731 17:00:54.820516   26392 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 17:00:54.820521   26392 round_trippers.go:473]     Accept: application/json, */*
	I0731 17:00:54.823582   26392 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 17:00:55.320329   26392 round_trippers.go:463] GET https://192.168.39.243:8443/api/v1/nodes/ha-234651-m02
	I0731 17:00:55.320354   26392 round_trippers.go:469] Request Headers:
	I0731 17:00:55.320362   26392 round_trippers.go:473]     Accept: application/json, */*
	I0731 17:00:55.320366   26392 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 17:00:55.324511   26392 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0731 17:00:55.820447   26392 round_trippers.go:463] GET https://192.168.39.243:8443/api/v1/nodes/ha-234651-m02
	I0731 17:00:55.820469   26392 round_trippers.go:469] Request Headers:
	I0731 17:00:55.820478   26392 round_trippers.go:473]     Accept: application/json, */*
	I0731 17:00:55.820484   26392 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 17:00:55.823836   26392 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 17:00:56.320328   26392 round_trippers.go:463] GET https://192.168.39.243:8443/api/v1/nodes/ha-234651-m02
	I0731 17:00:56.320350   26392 round_trippers.go:469] Request Headers:
	I0731 17:00:56.320358   26392 round_trippers.go:473]     Accept: application/json, */*
	I0731 17:00:56.320362   26392 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 17:00:56.323672   26392 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 17:00:56.324081   26392 node_ready.go:53] node "ha-234651-m02" has status "Ready":"False"
	I0731 17:00:56.820959   26392 round_trippers.go:463] GET https://192.168.39.243:8443/api/v1/nodes/ha-234651-m02
	I0731 17:00:56.820978   26392 round_trippers.go:469] Request Headers:
	I0731 17:00:56.820985   26392 round_trippers.go:473]     Accept: application/json, */*
	I0731 17:00:56.820989   26392 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 17:00:56.823706   26392 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0731 17:00:57.321046   26392 round_trippers.go:463] GET https://192.168.39.243:8443/api/v1/nodes/ha-234651-m02
	I0731 17:00:57.321065   26392 round_trippers.go:469] Request Headers:
	I0731 17:00:57.321073   26392 round_trippers.go:473]     Accept: application/json, */*
	I0731 17:00:57.321077   26392 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 17:00:57.326002   26392 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0731 17:00:57.821159   26392 round_trippers.go:463] GET https://192.168.39.243:8443/api/v1/nodes/ha-234651-m02
	I0731 17:00:57.821191   26392 round_trippers.go:469] Request Headers:
	I0731 17:00:57.821200   26392 round_trippers.go:473]     Accept: application/json, */*
	I0731 17:00:57.821205   26392 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 17:00:57.824251   26392 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 17:00:58.320326   26392 round_trippers.go:463] GET https://192.168.39.243:8443/api/v1/nodes/ha-234651-m02
	I0731 17:00:58.320347   26392 round_trippers.go:469] Request Headers:
	I0731 17:00:58.320356   26392 round_trippers.go:473]     Accept: application/json, */*
	I0731 17:00:58.320361   26392 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 17:00:58.323702   26392 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 17:00:58.324189   26392 node_ready.go:53] node "ha-234651-m02" has status "Ready":"False"
	I0731 17:00:58.820578   26392 round_trippers.go:463] GET https://192.168.39.243:8443/api/v1/nodes/ha-234651-m02
	I0731 17:00:58.820599   26392 round_trippers.go:469] Request Headers:
	I0731 17:00:58.820606   26392 round_trippers.go:473]     Accept: application/json, */*
	I0731 17:00:58.820611   26392 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 17:00:58.823560   26392 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0731 17:00:59.320493   26392 round_trippers.go:463] GET https://192.168.39.243:8443/api/v1/nodes/ha-234651-m02
	I0731 17:00:59.320515   26392 round_trippers.go:469] Request Headers:
	I0731 17:00:59.320523   26392 round_trippers.go:473]     Accept: application/json, */*
	I0731 17:00:59.320526   26392 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 17:00:59.323793   26392 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 17:00:59.820247   26392 round_trippers.go:463] GET https://192.168.39.243:8443/api/v1/nodes/ha-234651-m02
	I0731 17:00:59.820271   26392 round_trippers.go:469] Request Headers:
	I0731 17:00:59.820282   26392 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 17:00:59.820290   26392 round_trippers.go:473]     Accept: application/json, */*
	I0731 17:00:59.823330   26392 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 17:01:00.321048   26392 round_trippers.go:463] GET https://192.168.39.243:8443/api/v1/nodes/ha-234651-m02
	I0731 17:01:00.321074   26392 round_trippers.go:469] Request Headers:
	I0731 17:01:00.321084   26392 round_trippers.go:473]     Accept: application/json, */*
	I0731 17:01:00.321088   26392 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 17:01:00.324269   26392 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 17:01:00.324877   26392 node_ready.go:53] node "ha-234651-m02" has status "Ready":"False"
	I0731 17:01:00.821273   26392 round_trippers.go:463] GET https://192.168.39.243:8443/api/v1/nodes/ha-234651-m02
	I0731 17:01:00.821297   26392 round_trippers.go:469] Request Headers:
	I0731 17:01:00.821306   26392 round_trippers.go:473]     Accept: application/json, */*
	I0731 17:01:00.821312   26392 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 17:01:00.823658   26392 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0731 17:01:01.320514   26392 round_trippers.go:463] GET https://192.168.39.243:8443/api/v1/nodes/ha-234651-m02
	I0731 17:01:01.320539   26392 round_trippers.go:469] Request Headers:
	I0731 17:01:01.320547   26392 round_trippers.go:473]     Accept: application/json, */*
	I0731 17:01:01.320553   26392 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 17:01:01.323351   26392 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0731 17:01:01.820382   26392 round_trippers.go:463] GET https://192.168.39.243:8443/api/v1/nodes/ha-234651-m02
	I0731 17:01:01.820404   26392 round_trippers.go:469] Request Headers:
	I0731 17:01:01.820420   26392 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 17:01:01.820426   26392 round_trippers.go:473]     Accept: application/json, */*
	I0731 17:01:01.823233   26392 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0731 17:01:02.320692   26392 round_trippers.go:463] GET https://192.168.39.243:8443/api/v1/nodes/ha-234651-m02
	I0731 17:01:02.320718   26392 round_trippers.go:469] Request Headers:
	I0731 17:01:02.320728   26392 round_trippers.go:473]     Accept: application/json, */*
	I0731 17:01:02.320733   26392 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 17:01:02.323624   26392 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0731 17:01:02.820891   26392 round_trippers.go:463] GET https://192.168.39.243:8443/api/v1/nodes/ha-234651-m02
	I0731 17:01:02.820925   26392 round_trippers.go:469] Request Headers:
	I0731 17:01:02.820932   26392 round_trippers.go:473]     Accept: application/json, */*
	I0731 17:01:02.820936   26392 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 17:01:02.824075   26392 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 17:01:02.824690   26392 node_ready.go:53] node "ha-234651-m02" has status "Ready":"False"
	I0731 17:01:03.321136   26392 round_trippers.go:463] GET https://192.168.39.243:8443/api/v1/nodes/ha-234651-m02
	I0731 17:01:03.321157   26392 round_trippers.go:469] Request Headers:
	I0731 17:01:03.321166   26392 round_trippers.go:473]     Accept: application/json, */*
	I0731 17:01:03.321170   26392 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 17:01:03.324114   26392 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0731 17:01:03.820980   26392 round_trippers.go:463] GET https://192.168.39.243:8443/api/v1/nodes/ha-234651-m02
	I0731 17:01:03.821004   26392 round_trippers.go:469] Request Headers:
	I0731 17:01:03.821011   26392 round_trippers.go:473]     Accept: application/json, */*
	I0731 17:01:03.821014   26392 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 17:01:03.825028   26392 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0731 17:01:04.320947   26392 round_trippers.go:463] GET https://192.168.39.243:8443/api/v1/nodes/ha-234651-m02
	I0731 17:01:04.320967   26392 round_trippers.go:469] Request Headers:
	I0731 17:01:04.320975   26392 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 17:01:04.320981   26392 round_trippers.go:473]     Accept: application/json, */*
	I0731 17:01:04.324399   26392 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 17:01:04.820225   26392 round_trippers.go:463] GET https://192.168.39.243:8443/api/v1/nodes/ha-234651-m02
	I0731 17:01:04.820247   26392 round_trippers.go:469] Request Headers:
	I0731 17:01:04.820256   26392 round_trippers.go:473]     Accept: application/json, */*
	I0731 17:01:04.820262   26392 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 17:01:04.823891   26392 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 17:01:04.824381   26392 node_ready.go:49] node "ha-234651-m02" has status "Ready":"True"
	I0731 17:01:04.824402   26392 node_ready.go:38] duration metric: took 17.504316493s for node "ha-234651-m02" to be "Ready" ...
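Each poll above issues a GET for the node object and checks whether its Ready condition has turned True, roughly every 500ms. A hedged client-go equivalent of that loop (the package name, interval, and function name are assumptions, not minikube's node_ready helper):

package nodewait // illustrative package name

import (
	"context"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// waitNodeReady polls the node object until its NodeReady condition is True,
// mirroring the GET-per-500ms loop in the log above.
func waitNodeReady(ctx context.Context, cs kubernetes.Interface, name string, timeout time.Duration) error {
	return wait.PollUntilContextTimeout(ctx, 500*time.Millisecond, timeout, true,
		func(ctx context.Context) (bool, error) {
			node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
			if err != nil {
				return false, nil // treat transient errors as "not ready yet"
			}
			for _, c := range node.Status.Conditions {
				if c.Type == corev1.NodeReady {
					return c.Status == corev1.ConditionTrue, nil
				}
			}
			return false, nil
		})
}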
	I0731 17:01:04.824413   26392 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0731 17:01:04.824479   26392 round_trippers.go:463] GET https://192.168.39.243:8443/api/v1/namespaces/kube-system/pods
	I0731 17:01:04.824492   26392 round_trippers.go:469] Request Headers:
	I0731 17:01:04.824502   26392 round_trippers.go:473]     Accept: application/json, */*
	I0731 17:01:04.824509   26392 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 17:01:04.829454   26392 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0731 17:01:04.834900   26392 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-nsx9j" in "kube-system" namespace to be "Ready" ...
	I0731 17:01:04.834973   26392 round_trippers.go:463] GET https://192.168.39.243:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-nsx9j
	I0731 17:01:04.834984   26392 round_trippers.go:469] Request Headers:
	I0731 17:01:04.834993   26392 round_trippers.go:473]     Accept: application/json, */*
	I0731 17:01:04.835003   26392 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 17:01:04.837627   26392 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0731 17:01:04.838113   26392 round_trippers.go:463] GET https://192.168.39.243:8443/api/v1/nodes/ha-234651
	I0731 17:01:04.838127   26392 round_trippers.go:469] Request Headers:
	I0731 17:01:04.838133   26392 round_trippers.go:473]     Accept: application/json, */*
	I0731 17:01:04.838138   26392 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 17:01:04.840091   26392 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0731 17:01:04.840502   26392 pod_ready.go:92] pod "coredns-7db6d8ff4d-nsx9j" in "kube-system" namespace has status "Ready":"True"
	I0731 17:01:04.840517   26392 pod_ready.go:81] duration metric: took 5.593343ms for pod "coredns-7db6d8ff4d-nsx9j" in "kube-system" namespace to be "Ready" ...
	I0731 17:01:04.840524   26392 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-qbqb9" in "kube-system" namespace to be "Ready" ...
	I0731 17:01:04.840565   26392 round_trippers.go:463] GET https://192.168.39.243:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-qbqb9
	I0731 17:01:04.840572   26392 round_trippers.go:469] Request Headers:
	I0731 17:01:04.840578   26392 round_trippers.go:473]     Accept: application/json, */*
	I0731 17:01:04.840581   26392 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 17:01:04.842700   26392 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0731 17:01:04.843202   26392 round_trippers.go:463] GET https://192.168.39.243:8443/api/v1/nodes/ha-234651
	I0731 17:01:04.843216   26392 round_trippers.go:469] Request Headers:
	I0731 17:01:04.843222   26392 round_trippers.go:473]     Accept: application/json, */*
	I0731 17:01:04.843226   26392 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 17:01:04.845356   26392 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0731 17:01:04.845902   26392 pod_ready.go:92] pod "coredns-7db6d8ff4d-qbqb9" in "kube-system" namespace has status "Ready":"True"
	I0731 17:01:04.845921   26392 pod_ready.go:81] duration metric: took 5.388928ms for pod "coredns-7db6d8ff4d-qbqb9" in "kube-system" namespace to be "Ready" ...
	I0731 17:01:04.845932   26392 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-234651" in "kube-system" namespace to be "Ready" ...
	I0731 17:01:04.845986   26392 round_trippers.go:463] GET https://192.168.39.243:8443/api/v1/namespaces/kube-system/pods/etcd-ha-234651
	I0731 17:01:04.845997   26392 round_trippers.go:469] Request Headers:
	I0731 17:01:04.846006   26392 round_trippers.go:473]     Accept: application/json, */*
	I0731 17:01:04.846015   26392 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 17:01:04.848157   26392 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0731 17:01:04.848691   26392 round_trippers.go:463] GET https://192.168.39.243:8443/api/v1/nodes/ha-234651
	I0731 17:01:04.848703   26392 round_trippers.go:469] Request Headers:
	I0731 17:01:04.848708   26392 round_trippers.go:473]     Accept: application/json, */*
	I0731 17:01:04.848712   26392 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 17:01:04.850608   26392 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0731 17:01:04.851183   26392 pod_ready.go:92] pod "etcd-ha-234651" in "kube-system" namespace has status "Ready":"True"
	I0731 17:01:04.851198   26392 pod_ready.go:81] duration metric: took 5.258896ms for pod "etcd-ha-234651" in "kube-system" namespace to be "Ready" ...
	I0731 17:01:04.851205   26392 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-234651-m02" in "kube-system" namespace to be "Ready" ...
	I0731 17:01:04.851243   26392 round_trippers.go:463] GET https://192.168.39.243:8443/api/v1/namespaces/kube-system/pods/etcd-ha-234651-m02
	I0731 17:01:04.851250   26392 round_trippers.go:469] Request Headers:
	I0731 17:01:04.851257   26392 round_trippers.go:473]     Accept: application/json, */*
	I0731 17:01:04.851262   26392 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 17:01:04.853625   26392 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0731 17:01:04.854050   26392 round_trippers.go:463] GET https://192.168.39.243:8443/api/v1/nodes/ha-234651-m02
	I0731 17:01:04.854063   26392 round_trippers.go:469] Request Headers:
	I0731 17:01:04.854068   26392 round_trippers.go:473]     Accept: application/json, */*
	I0731 17:01:04.854072   26392 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 17:01:04.856215   26392 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0731 17:01:04.856867   26392 pod_ready.go:92] pod "etcd-ha-234651-m02" in "kube-system" namespace has status "Ready":"True"
	I0731 17:01:04.856882   26392 pod_ready.go:81] duration metric: took 5.67156ms for pod "etcd-ha-234651-m02" in "kube-system" namespace to be "Ready" ...
	I0731 17:01:04.856893   26392 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-234651" in "kube-system" namespace to be "Ready" ...
	I0731 17:01:05.020903   26392 request.go:629] Waited for 163.95132ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.243:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-234651
	I0731 17:01:05.020981   26392 round_trippers.go:463] GET https://192.168.39.243:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-234651
	I0731 17:01:05.020990   26392 round_trippers.go:469] Request Headers:
	I0731 17:01:05.021004   26392 round_trippers.go:473]     Accept: application/json, */*
	I0731 17:01:05.021014   26392 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 17:01:05.024257   26392 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 17:01:05.220373   26392 request.go:629] Waited for 195.279659ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.243:8443/api/v1/nodes/ha-234651
	I0731 17:01:05.220443   26392 round_trippers.go:463] GET https://192.168.39.243:8443/api/v1/nodes/ha-234651
	I0731 17:01:05.220448   26392 round_trippers.go:469] Request Headers:
	I0731 17:01:05.220455   26392 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 17:01:05.220460   26392 round_trippers.go:473]     Accept: application/json, */*
	I0731 17:01:05.223551   26392 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 17:01:05.224027   26392 pod_ready.go:92] pod "kube-apiserver-ha-234651" in "kube-system" namespace has status "Ready":"True"
	I0731 17:01:05.224044   26392 pod_ready.go:81] duration metric: took 367.145061ms for pod "kube-apiserver-ha-234651" in "kube-system" namespace to be "Ready" ...
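The "Waited ... due to client-side throttling" entries come from client-go's client-side rate limiter; the rest.Config dumped earlier has QPS:0 and Burst:0, which means the defaults of 5 requests/second with a burst of 10. Raising those limits on the config is the usual way to avoid such waits; a small sketch under that assumption (the test itself does not change them):

package fastclient // illustrative package name

import (
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
)

// newClientWithHigherLimits raises the client-side rate limits that produce the
// "Waited ... due to client-side throttling" messages (defaults are 5 QPS / 10 burst).
func newClientWithHigherLimits(cfg *rest.Config) (*kubernetes.Clientset, error) {
	cfg.QPS = 50
	cfg.Burst = 100
	return kubernetes.NewForConfig(cfg)
}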
	I0731 17:01:05.224053   26392 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-234651-m02" in "kube-system" namespace to be "Ready" ...
	I0731 17:01:05.421120   26392 request.go:629] Waited for 197.005416ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.243:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-234651-m02
	I0731 17:01:05.421193   26392 round_trippers.go:463] GET https://192.168.39.243:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-234651-m02
	I0731 17:01:05.421198   26392 round_trippers.go:469] Request Headers:
	I0731 17:01:05.421206   26392 round_trippers.go:473]     Accept: application/json, */*
	I0731 17:01:05.421209   26392 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 17:01:05.424140   26392 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0731 17:01:05.621076   26392 request.go:629] Waited for 196.378614ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.243:8443/api/v1/nodes/ha-234651-m02
	I0731 17:01:05.621153   26392 round_trippers.go:463] GET https://192.168.39.243:8443/api/v1/nodes/ha-234651-m02
	I0731 17:01:05.621161   26392 round_trippers.go:469] Request Headers:
	I0731 17:01:05.621168   26392 round_trippers.go:473]     Accept: application/json, */*
	I0731 17:01:05.621172   26392 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 17:01:05.625121   26392 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 17:01:05.625607   26392 pod_ready.go:92] pod "kube-apiserver-ha-234651-m02" in "kube-system" namespace has status "Ready":"True"
	I0731 17:01:05.625623   26392 pod_ready.go:81] duration metric: took 401.564067ms for pod "kube-apiserver-ha-234651-m02" in "kube-system" namespace to be "Ready" ...
	I0731 17:01:05.625632   26392 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-234651" in "kube-system" namespace to be "Ready" ...
	I0731 17:01:05.820743   26392 request.go:629] Waited for 195.048096ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.243:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-234651
	I0731 17:01:05.820824   26392 round_trippers.go:463] GET https://192.168.39.243:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-234651
	I0731 17:01:05.820829   26392 round_trippers.go:469] Request Headers:
	I0731 17:01:05.820836   26392 round_trippers.go:473]     Accept: application/json, */*
	I0731 17:01:05.820844   26392 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 17:01:05.824330   26392 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 17:01:06.020622   26392 request.go:629] Waited for 195.343372ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.243:8443/api/v1/nodes/ha-234651
	I0731 17:01:06.020676   26392 round_trippers.go:463] GET https://192.168.39.243:8443/api/v1/nodes/ha-234651
	I0731 17:01:06.020682   26392 round_trippers.go:469] Request Headers:
	I0731 17:01:06.020689   26392 round_trippers.go:473]     Accept: application/json, */*
	I0731 17:01:06.020693   26392 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 17:01:06.023838   26392 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 17:01:06.024292   26392 pod_ready.go:92] pod "kube-controller-manager-ha-234651" in "kube-system" namespace has status "Ready":"True"
	I0731 17:01:06.024309   26392 pod_ready.go:81] duration metric: took 398.671702ms for pod "kube-controller-manager-ha-234651" in "kube-system" namespace to be "Ready" ...
	I0731 17:01:06.024318   26392 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-234651-m02" in "kube-system" namespace to be "Ready" ...
	I0731 17:01:06.220348   26392 request.go:629] Waited for 195.964017ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.243:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-234651-m02
	I0731 17:01:06.220428   26392 round_trippers.go:463] GET https://192.168.39.243:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-234651-m02
	I0731 17:01:06.220435   26392 round_trippers.go:469] Request Headers:
	I0731 17:01:06.220446   26392 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 17:01:06.220456   26392 round_trippers.go:473]     Accept: application/json, */*
	I0731 17:01:06.223974   26392 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 17:01:06.420844   26392 request.go:629] Waited for 196.07221ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.243:8443/api/v1/nodes/ha-234651-m02
	I0731 17:01:06.420898   26392 round_trippers.go:463] GET https://192.168.39.243:8443/api/v1/nodes/ha-234651-m02
	I0731 17:01:06.420904   26392 round_trippers.go:469] Request Headers:
	I0731 17:01:06.420911   26392 round_trippers.go:473]     Accept: application/json, */*
	I0731 17:01:06.420916   26392 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 17:01:06.423911   26392 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0731 17:01:06.424360   26392 pod_ready.go:92] pod "kube-controller-manager-ha-234651-m02" in "kube-system" namespace has status "Ready":"True"
	I0731 17:01:06.424376   26392 pod_ready.go:81] duration metric: took 400.052749ms for pod "kube-controller-manager-ha-234651-m02" in "kube-system" namespace to be "Ready" ...
	I0731 17:01:06.424385   26392 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-b8dcw" in "kube-system" namespace to be "Ready" ...
	I0731 17:01:06.620606   26392 request.go:629] Waited for 196.143149ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.243:8443/api/v1/namespaces/kube-system/pods/kube-proxy-b8dcw
	I0731 17:01:06.620664   26392 round_trippers.go:463] GET https://192.168.39.243:8443/api/v1/namespaces/kube-system/pods/kube-proxy-b8dcw
	I0731 17:01:06.620668   26392 round_trippers.go:469] Request Headers:
	I0731 17:01:06.620675   26392 round_trippers.go:473]     Accept: application/json, */*
	I0731 17:01:06.620680   26392 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 17:01:06.623829   26392 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 17:01:06.820834   26392 request.go:629] Waited for 196.348156ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.243:8443/api/v1/nodes/ha-234651-m02
	I0731 17:01:06.820908   26392 round_trippers.go:463] GET https://192.168.39.243:8443/api/v1/nodes/ha-234651-m02
	I0731 17:01:06.820915   26392 round_trippers.go:469] Request Headers:
	I0731 17:01:06.820924   26392 round_trippers.go:473]     Accept: application/json, */*
	I0731 17:01:06.820929   26392 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 17:01:06.824063   26392 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 17:01:06.824684   26392 pod_ready.go:92] pod "kube-proxy-b8dcw" in "kube-system" namespace has status "Ready":"True"
	I0731 17:01:06.824706   26392 pod_ready.go:81] duration metric: took 400.313857ms for pod "kube-proxy-b8dcw" in "kube-system" namespace to be "Ready" ...
	I0731 17:01:06.824719   26392 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-jfgs8" in "kube-system" namespace to be "Ready" ...
	I0731 17:01:07.020808   26392 request.go:629] Waited for 196.0095ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.243:8443/api/v1/namespaces/kube-system/pods/kube-proxy-jfgs8
	I0731 17:01:07.020868   26392 round_trippers.go:463] GET https://192.168.39.243:8443/api/v1/namespaces/kube-system/pods/kube-proxy-jfgs8
	I0731 17:01:07.020873   26392 round_trippers.go:469] Request Headers:
	I0731 17:01:07.020880   26392 round_trippers.go:473]     Accept: application/json, */*
	I0731 17:01:07.020883   26392 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 17:01:07.024277   26392 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 17:01:07.220900   26392 request.go:629] Waited for 195.972338ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.243:8443/api/v1/nodes/ha-234651
	I0731 17:01:07.220951   26392 round_trippers.go:463] GET https://192.168.39.243:8443/api/v1/nodes/ha-234651
	I0731 17:01:07.220956   26392 round_trippers.go:469] Request Headers:
	I0731 17:01:07.220964   26392 round_trippers.go:473]     Accept: application/json, */*
	I0731 17:01:07.220969   26392 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 17:01:07.227233   26392 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0731 17:01:07.227930   26392 pod_ready.go:92] pod "kube-proxy-jfgs8" in "kube-system" namespace has status "Ready":"True"
	I0731 17:01:07.227949   26392 pod_ready.go:81] duration metric: took 403.222708ms for pod "kube-proxy-jfgs8" in "kube-system" namespace to be "Ready" ...
	I0731 17:01:07.227957   26392 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-234651" in "kube-system" namespace to be "Ready" ...
	I0731 17:01:07.421053   26392 request.go:629] Waited for 193.035592ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.243:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-234651
	I0731 17:01:07.421104   26392 round_trippers.go:463] GET https://192.168.39.243:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-234651
	I0731 17:01:07.421109   26392 round_trippers.go:469] Request Headers:
	I0731 17:01:07.421116   26392 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 17:01:07.421128   26392 round_trippers.go:473]     Accept: application/json, */*
	I0731 17:01:07.424728   26392 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 17:01:07.620741   26392 request.go:629] Waited for 195.45029ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.243:8443/api/v1/nodes/ha-234651
	I0731 17:01:07.620791   26392 round_trippers.go:463] GET https://192.168.39.243:8443/api/v1/nodes/ha-234651
	I0731 17:01:07.620796   26392 round_trippers.go:469] Request Headers:
	I0731 17:01:07.620804   26392 round_trippers.go:473]     Accept: application/json, */*
	I0731 17:01:07.620812   26392 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 17:01:07.624127   26392 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 17:01:07.624586   26392 pod_ready.go:92] pod "kube-scheduler-ha-234651" in "kube-system" namespace has status "Ready":"True"
	I0731 17:01:07.624605   26392 pod_ready.go:81] duration metric: took 396.642385ms for pod "kube-scheduler-ha-234651" in "kube-system" namespace to be "Ready" ...
	I0731 17:01:07.624615   26392 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-234651-m02" in "kube-system" namespace to be "Ready" ...
	I0731 17:01:07.820747   26392 request.go:629] Waited for 196.068342ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.243:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-234651-m02
	I0731 17:01:07.820826   26392 round_trippers.go:463] GET https://192.168.39.243:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-234651-m02
	I0731 17:01:07.820833   26392 round_trippers.go:469] Request Headers:
	I0731 17:01:07.820842   26392 round_trippers.go:473]     Accept: application/json, */*
	I0731 17:01:07.820849   26392 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 17:01:07.824093   26392 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 17:01:08.021084   26392 request.go:629] Waited for 196.338927ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.243:8443/api/v1/nodes/ha-234651-m02
	I0731 17:01:08.021148   26392 round_trippers.go:463] GET https://192.168.39.243:8443/api/v1/nodes/ha-234651-m02
	I0731 17:01:08.021155   26392 round_trippers.go:469] Request Headers:
	I0731 17:01:08.021163   26392 round_trippers.go:473]     Accept: application/json, */*
	I0731 17:01:08.021167   26392 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 17:01:08.027770   26392 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0731 17:01:08.028250   26392 pod_ready.go:92] pod "kube-scheduler-ha-234651-m02" in "kube-system" namespace has status "Ready":"True"
	I0731 17:01:08.028267   26392 pod_ready.go:81] duration metric: took 403.643549ms for pod "kube-scheduler-ha-234651-m02" in "kube-system" namespace to be "Ready" ...
	I0731 17:01:08.028277   26392 pod_ready.go:38] duration metric: took 3.20385274s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0731 17:01:08.028292   26392 api_server.go:52] waiting for apiserver process to appear ...
	I0731 17:01:08.028339   26392 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 17:01:08.043888   26392 api_server.go:72] duration metric: took 21.014637451s to wait for apiserver process to appear ...
	I0731 17:01:08.043908   26392 api_server.go:88] waiting for apiserver healthz status ...
	I0731 17:01:08.043922   26392 api_server.go:253] Checking apiserver healthz at https://192.168.39.243:8443/healthz ...
	I0731 17:01:08.048088   26392 api_server.go:279] https://192.168.39.243:8443/healthz returned 200:
	ok
	I0731 17:01:08.048141   26392 round_trippers.go:463] GET https://192.168.39.243:8443/version
	I0731 17:01:08.048148   26392 round_trippers.go:469] Request Headers:
	I0731 17:01:08.048156   26392 round_trippers.go:473]     Accept: application/json, */*
	I0731 17:01:08.048159   26392 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 17:01:08.049141   26392 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0731 17:01:08.049212   26392 api_server.go:141] control plane version: v1.30.3
	I0731 17:01:08.049227   26392 api_server.go:131] duration metric: took 5.313114ms to wait for apiserver health ...
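Once the system pods are Ready, the check probes /healthz and then reads the server version, which is how the "control plane version: v1.30.3" line above is produced. One way to do both through a clientset's discovery client (the function and package names here are illustrative):

package healthprobe // illustrative package name

import (
	"context"
	"fmt"

	"k8s.io/client-go/kubernetes"
)

// probeAPIServer hits /healthz via the discovery REST client and prints the server version.
func probeAPIServer(ctx context.Context, cs kubernetes.Interface) error {
	body, err := cs.Discovery().RESTClient().Get().AbsPath("/healthz").DoRaw(ctx)
	if err != nil {
		return err
	}
	fmt.Println("healthz:", string(body)) // expected "ok"

	info, err := cs.Discovery().ServerVersion()
	if err != nil {
		return err
	}
	fmt.Println("control plane version:", info.GitVersion) // e.g. v1.30.3
	return nil
}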
	I0731 17:01:08.049233   26392 system_pods.go:43] waiting for kube-system pods to appear ...
	I0731 17:01:08.220721   26392 request.go:629] Waited for 171.43341ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.243:8443/api/v1/namespaces/kube-system/pods
	I0731 17:01:08.220793   26392 round_trippers.go:463] GET https://192.168.39.243:8443/api/v1/namespaces/kube-system/pods
	I0731 17:01:08.220799   26392 round_trippers.go:469] Request Headers:
	I0731 17:01:08.220806   26392 round_trippers.go:473]     Accept: application/json, */*
	I0731 17:01:08.220810   26392 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 17:01:08.227018   26392 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0731 17:01:08.231092   26392 system_pods.go:59] 17 kube-system pods found
	I0731 17:01:08.231131   26392 system_pods.go:61] "coredns-7db6d8ff4d-nsx9j" [b2cde006-dbb7-4e6f-a5f1-cf7760740104] Running
	I0731 17:01:08.231139   26392 system_pods.go:61] "coredns-7db6d8ff4d-qbqb9" [4f76f862-d39e-4976-90e6-fb9a25cc485a] Running
	I0731 17:01:08.231144   26392 system_pods.go:61] "etcd-ha-234651" [c6fb7163-2f43-4b81-a3d9-e1550ddabc4c] Running
	I0731 17:01:08.231156   26392 system_pods.go:61] "etcd-ha-234651-m02" [ba88e585-8e9b-4177-82e0-4222210cfd2d] Running
	I0731 17:01:08.231163   26392 system_pods.go:61] "kindnet-phmdp" [f0e0736a-4005-4b0b-81fc-e4cea26a7d42] Running
	I0731 17:01:08.231166   26392 system_pods.go:61] "kindnet-wfbt4" [9eda8095-ce75-4043-8ddf-6e5663de8212] Running
	I0731 17:01:08.231169   26392 system_pods.go:61] "kube-apiserver-ha-234651" [ce3cbf02-100c-4437-8d1c-2ac4a2822866] Running
	I0731 17:01:08.231173   26392 system_pods.go:61] "kube-apiserver-ha-234651-m02" [e5eea595-36d9-497e-83be-aac5845b566a] Running
	I0731 17:01:08.231176   26392 system_pods.go:61] "kube-controller-manager-ha-234651" [3d111090-8fee-40ab-8cab-df7dd984bdb3] Running
	I0731 17:01:08.231182   26392 system_pods.go:61] "kube-controller-manager-ha-234651-m02" [28a10b93-7e55-4e2d-a5be-3605fa94fdb0] Running
	I0731 17:01:08.231185   26392 system_pods.go:61] "kube-proxy-b8dcw" [cbd2a846-0490-4465-b299-76b6fb0d5710] Running
	I0731 17:01:08.231188   26392 system_pods.go:61] "kube-proxy-jfgs8" [5ead85d8-0fd0-4900-8c02-2f23217ca208] Running
	I0731 17:01:08.231191   26392 system_pods.go:61] "kube-scheduler-ha-234651" [e1e2064c-17d7-4ea9-b6d3-b13c9ca34789] Running
	I0731 17:01:08.231194   26392 system_pods.go:61] "kube-scheduler-ha-234651-m02" [2ed86332-42ba-46c9-901c-38880e8ec54a] Running
	I0731 17:01:08.231197   26392 system_pods.go:61] "kube-vip-ha-234651" [205b811c-93e9-4f66-9d0e-67abbc8ff1ef] Running
	I0731 17:01:08.231201   26392 system_pods.go:61] "kube-vip-ha-234651-m02" [8fc0f86c-f043-44c2-aa8b-544fd6b7de22] Running
	I0731 17:01:08.231203   26392 system_pods.go:61] "storage-provisioner" [87455537-bdb8-438b-8122-db85bed01d09] Running
	I0731 17:01:08.231210   26392 system_pods.go:74] duration metric: took 181.97158ms to wait for pod list to return data ...
	I0731 17:01:08.231218   26392 default_sa.go:34] waiting for default service account to be created ...
	I0731 17:01:08.420603   26392 request.go:629] Waited for 189.314669ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.243:8443/api/v1/namespaces/default/serviceaccounts
	I0731 17:01:08.420709   26392 round_trippers.go:463] GET https://192.168.39.243:8443/api/v1/namespaces/default/serviceaccounts
	I0731 17:01:08.420718   26392 round_trippers.go:469] Request Headers:
	I0731 17:01:08.420726   26392 round_trippers.go:473]     Accept: application/json, */*
	I0731 17:01:08.420731   26392 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 17:01:08.424009   26392 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 17:01:08.424203   26392 default_sa.go:45] found service account: "default"
	I0731 17:01:08.424218   26392 default_sa.go:55] duration metric: took 192.994736ms for default service account to be created ...
	I0731 17:01:08.424226   26392 system_pods.go:116] waiting for k8s-apps to be running ...
	I0731 17:01:08.620624   26392 request.go:629] Waited for 196.342137ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.243:8443/api/v1/namespaces/kube-system/pods
	I0731 17:01:08.620703   26392 round_trippers.go:463] GET https://192.168.39.243:8443/api/v1/namespaces/kube-system/pods
	I0731 17:01:08.620711   26392 round_trippers.go:469] Request Headers:
	I0731 17:01:08.620722   26392 round_trippers.go:473]     Accept: application/json, */*
	I0731 17:01:08.620728   26392 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 17:01:08.625553   26392 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0731 17:01:08.629849   26392 system_pods.go:86] 17 kube-system pods found
	I0731 17:01:08.629873   26392 system_pods.go:89] "coredns-7db6d8ff4d-nsx9j" [b2cde006-dbb7-4e6f-a5f1-cf7760740104] Running
	I0731 17:01:08.629880   26392 system_pods.go:89] "coredns-7db6d8ff4d-qbqb9" [4f76f862-d39e-4976-90e6-fb9a25cc485a] Running
	I0731 17:01:08.629884   26392 system_pods.go:89] "etcd-ha-234651" [c6fb7163-2f43-4b81-a3d9-e1550ddabc4c] Running
	I0731 17:01:08.629888   26392 system_pods.go:89] "etcd-ha-234651-m02" [ba88e585-8e9b-4177-82e0-4222210cfd2d] Running
	I0731 17:01:08.629893   26392 system_pods.go:89] "kindnet-phmdp" [f0e0736a-4005-4b0b-81fc-e4cea26a7d42] Running
	I0731 17:01:08.629896   26392 system_pods.go:89] "kindnet-wfbt4" [9eda8095-ce75-4043-8ddf-6e5663de8212] Running
	I0731 17:01:08.629900   26392 system_pods.go:89] "kube-apiserver-ha-234651" [ce3cbf02-100c-4437-8d1c-2ac4a2822866] Running
	I0731 17:01:08.629904   26392 system_pods.go:89] "kube-apiserver-ha-234651-m02" [e5eea595-36d9-497e-83be-aac5845b566a] Running
	I0731 17:01:08.629909   26392 system_pods.go:89] "kube-controller-manager-ha-234651" [3d111090-8fee-40ab-8cab-df7dd984bdb3] Running
	I0731 17:01:08.629913   26392 system_pods.go:89] "kube-controller-manager-ha-234651-m02" [28a10b93-7e55-4e2d-a5be-3605fa94fdb0] Running
	I0731 17:01:08.629917   26392 system_pods.go:89] "kube-proxy-b8dcw" [cbd2a846-0490-4465-b299-76b6fb0d5710] Running
	I0731 17:01:08.629921   26392 system_pods.go:89] "kube-proxy-jfgs8" [5ead85d8-0fd0-4900-8c02-2f23217ca208] Running
	I0731 17:01:08.629924   26392 system_pods.go:89] "kube-scheduler-ha-234651" [e1e2064c-17d7-4ea9-b6d3-b13c9ca34789] Running
	I0731 17:01:08.629928   26392 system_pods.go:89] "kube-scheduler-ha-234651-m02" [2ed86332-42ba-46c9-901c-38880e8ec54a] Running
	I0731 17:01:08.629931   26392 system_pods.go:89] "kube-vip-ha-234651" [205b811c-93e9-4f66-9d0e-67abbc8ff1ef] Running
	I0731 17:01:08.629935   26392 system_pods.go:89] "kube-vip-ha-234651-m02" [8fc0f86c-f043-44c2-aa8b-544fd6b7de22] Running
	I0731 17:01:08.629938   26392 system_pods.go:89] "storage-provisioner" [87455537-bdb8-438b-8122-db85bed01d09] Running
	I0731 17:01:08.629945   26392 system_pods.go:126] duration metric: took 205.71471ms to wait for k8s-apps to be running ...
	I0731 17:01:08.629953   26392 system_svc.go:44] waiting for kubelet service to be running ....
	I0731 17:01:08.629999   26392 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0731 17:01:08.644784   26392 system_svc.go:56] duration metric: took 14.820249ms WaitForService to wait for kubelet
	I0731 17:01:08.644812   26392 kubeadm.go:582] duration metric: took 21.615565367s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0731 17:01:08.644831   26392 node_conditions.go:102] verifying NodePressure condition ...
	I0731 17:01:08.821216   26392 request.go:629] Waited for 176.313806ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.243:8443/api/v1/nodes
	I0731 17:01:08.821273   26392 round_trippers.go:463] GET https://192.168.39.243:8443/api/v1/nodes
	I0731 17:01:08.821281   26392 round_trippers.go:469] Request Headers:
	I0731 17:01:08.821289   26392 round_trippers.go:473]     Accept: application/json, */*
	I0731 17:01:08.821295   26392 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 17:01:08.825439   26392 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0731 17:01:08.827911   26392 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0731 17:01:08.827947   26392 node_conditions.go:123] node cpu capacity is 2
	I0731 17:01:08.827957   26392 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0731 17:01:08.827961   26392 node_conditions.go:123] node cpu capacity is 2
	I0731 17:01:08.827966   26392 node_conditions.go:105] duration metric: took 183.129647ms to run NodePressure ...
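The NodePressure verification lists the nodes and reads the ephemeral-storage and CPU capacity reported in their status, giving the two capacity lines per node above. A rough client-go sketch of the same read (illustrative names, not the node_conditions helper itself):

package nodecaps // illustrative package name

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// printNodeCapacity lists all nodes and prints the capacity fields checked above.
func printNodeCapacity(ctx context.Context, cs kubernetes.Interface) error {
	nodes, err := cs.CoreV1().Nodes().List(ctx, metav1.ListOptions{})
	if err != nil {
		return err
	}
	for _, n := range nodes.Items {
		storage := n.Status.Capacity[corev1.ResourceEphemeralStorage]
		cpu := n.Status.Capacity[corev1.ResourceCPU]
		fmt.Printf("%s: ephemeral-storage=%s cpu=%s\n", n.Name, storage.String(), cpu.String())
	}
	return nil
}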
	I0731 17:01:08.827976   26392 start.go:241] waiting for startup goroutines ...
	I0731 17:01:08.827996   26392 start.go:255] writing updated cluster config ...
	I0731 17:01:08.830200   26392 out.go:177] 
	I0731 17:01:08.831801   26392 config.go:182] Loaded profile config "ha-234651": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0731 17:01:08.831914   26392 profile.go:143] Saving config to /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/ha-234651/config.json ...
	I0731 17:01:08.833647   26392 out.go:177] * Starting "ha-234651-m03" control-plane node in "ha-234651" cluster
	I0731 17:01:08.834984   26392 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0731 17:01:08.835003   26392 cache.go:56] Caching tarball of preloaded images
	I0731 17:01:08.835090   26392 preload.go:172] Found /home/jenkins/minikube-integration/19349-8084/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0731 17:01:08.835100   26392 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on crio
	I0731 17:01:08.835245   26392 profile.go:143] Saving config to /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/ha-234651/config.json ...
	I0731 17:01:08.835428   26392 start.go:360] acquireMachinesLock for ha-234651-m03: {Name:mkd78eacfab7d6c3058c6674434b3d889ec957e0 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0731 17:01:08.835471   26392 start.go:364] duration metric: took 24.057µs to acquireMachinesLock for "ha-234651-m03"
	I0731 17:01:08.835487   26392 start.go:93] Provisioning new machine with config: &{Name:ha-234651 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:ha-234651 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.243 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.235 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m03 IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0731 17:01:08.835621   26392 start.go:125] createHost starting for "m03" (driver="kvm2")
	I0731 17:01:08.837089   26392 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0731 17:01:08.837187   26392 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 17:01:08.837222   26392 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 17:01:08.851620   26392 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33979
	I0731 17:01:08.851990   26392 main.go:141] libmachine: () Calling .GetVersion
	I0731 17:01:08.852406   26392 main.go:141] libmachine: Using API Version  1
	I0731 17:01:08.852426   26392 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 17:01:08.852766   26392 main.go:141] libmachine: () Calling .GetMachineName
	I0731 17:01:08.852945   26392 main.go:141] libmachine: (ha-234651-m03) Calling .GetMachineName
	I0731 17:01:08.853098   26392 main.go:141] libmachine: (ha-234651-m03) Calling .DriverName
	I0731 17:01:08.853223   26392 start.go:159] libmachine.API.Create for "ha-234651" (driver="kvm2")
	I0731 17:01:08.853244   26392 client.go:168] LocalClient.Create starting
	I0731 17:01:08.853269   26392 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19349-8084/.minikube/certs/ca.pem
	I0731 17:01:08.853298   26392 main.go:141] libmachine: Decoding PEM data...
	I0731 17:01:08.853311   26392 main.go:141] libmachine: Parsing certificate...
	I0731 17:01:08.853356   26392 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19349-8084/.minikube/certs/cert.pem
	I0731 17:01:08.853374   26392 main.go:141] libmachine: Decoding PEM data...
	I0731 17:01:08.853384   26392 main.go:141] libmachine: Parsing certificate...
	I0731 17:01:08.853399   26392 main.go:141] libmachine: Running pre-create checks...
	I0731 17:01:08.853406   26392 main.go:141] libmachine: (ha-234651-m03) Calling .PreCreateCheck
	I0731 17:01:08.853580   26392 main.go:141] libmachine: (ha-234651-m03) Calling .GetConfigRaw
	I0731 17:01:08.853893   26392 main.go:141] libmachine: Creating machine...
	I0731 17:01:08.853904   26392 main.go:141] libmachine: (ha-234651-m03) Calling .Create
	I0731 17:01:08.854005   26392 main.go:141] libmachine: (ha-234651-m03) Creating KVM machine...
	I0731 17:01:08.855060   26392 main.go:141] libmachine: (ha-234651-m03) DBG | found existing default KVM network
	I0731 17:01:08.855187   26392 main.go:141] libmachine: (ha-234651-m03) DBG | found existing private KVM network mk-ha-234651
	I0731 17:01:08.855288   26392 main.go:141] libmachine: (ha-234651-m03) Setting up store path in /home/jenkins/minikube-integration/19349-8084/.minikube/machines/ha-234651-m03 ...
	I0731 17:01:08.855326   26392 main.go:141] libmachine: (ha-234651-m03) Building disk image from file:///home/jenkins/minikube-integration/19349-8084/.minikube/cache/iso/amd64/minikube-v1.33.1-1722248113-19339-amd64.iso
	I0731 17:01:08.855425   26392 main.go:141] libmachine: (ha-234651-m03) DBG | I0731 17:01:08.855283   27175 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19349-8084/.minikube
	I0731 17:01:08.855467   26392 main.go:141] libmachine: (ha-234651-m03) Downloading /home/jenkins/minikube-integration/19349-8084/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19349-8084/.minikube/cache/iso/amd64/minikube-v1.33.1-1722248113-19339-amd64.iso...
	I0731 17:01:09.084535   26392 main.go:141] libmachine: (ha-234651-m03) DBG | I0731 17:01:09.084390   27175 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19349-8084/.minikube/machines/ha-234651-m03/id_rsa...
	I0731 17:01:09.496483   26392 main.go:141] libmachine: (ha-234651-m03) DBG | I0731 17:01:09.496377   27175 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19349-8084/.minikube/machines/ha-234651-m03/ha-234651-m03.rawdisk...
	I0731 17:01:09.496510   26392 main.go:141] libmachine: (ha-234651-m03) DBG | Writing magic tar header
	I0731 17:01:09.496524   26392 main.go:141] libmachine: (ha-234651-m03) DBG | Writing SSH key tar header
	I0731 17:01:09.496538   26392 main.go:141] libmachine: (ha-234651-m03) DBG | I0731 17:01:09.496486   27175 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19349-8084/.minikube/machines/ha-234651-m03 ...
	I0731 17:01:09.496626   26392 main.go:141] libmachine: (ha-234651-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19349-8084/.minikube/machines/ha-234651-m03
	I0731 17:01:09.496670   26392 main.go:141] libmachine: (ha-234651-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19349-8084/.minikube/machines
	I0731 17:01:09.496685   26392 main.go:141] libmachine: (ha-234651-m03) Setting executable bit set on /home/jenkins/minikube-integration/19349-8084/.minikube/machines/ha-234651-m03 (perms=drwx------)
	I0731 17:01:09.496704   26392 main.go:141] libmachine: (ha-234651-m03) Setting executable bit set on /home/jenkins/minikube-integration/19349-8084/.minikube/machines (perms=drwxr-xr-x)
	I0731 17:01:09.496716   26392 main.go:141] libmachine: (ha-234651-m03) Setting executable bit set on /home/jenkins/minikube-integration/19349-8084/.minikube (perms=drwxr-xr-x)
	I0731 17:01:09.496729   26392 main.go:141] libmachine: (ha-234651-m03) Setting executable bit set on /home/jenkins/minikube-integration/19349-8084 (perms=drwxrwxr-x)
	I0731 17:01:09.496747   26392 main.go:141] libmachine: (ha-234651-m03) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0731 17:01:09.496760   26392 main.go:141] libmachine: (ha-234651-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19349-8084/.minikube
	I0731 17:01:09.496772   26392 main.go:141] libmachine: (ha-234651-m03) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0731 17:01:09.496789   26392 main.go:141] libmachine: (ha-234651-m03) Creating domain...
	I0731 17:01:09.496802   26392 main.go:141] libmachine: (ha-234651-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19349-8084
	I0731 17:01:09.496810   26392 main.go:141] libmachine: (ha-234651-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0731 17:01:09.496816   26392 main.go:141] libmachine: (ha-234651-m03) DBG | Checking permissions on dir: /home/jenkins
	I0731 17:01:09.496821   26392 main.go:141] libmachine: (ha-234651-m03) DBG | Checking permissions on dir: /home
	I0731 17:01:09.496832   26392 main.go:141] libmachine: (ha-234651-m03) DBG | Skipping /home - not owner
	I0731 17:01:09.497785   26392 main.go:141] libmachine: (ha-234651-m03) define libvirt domain using xml: 
	I0731 17:01:09.497804   26392 main.go:141] libmachine: (ha-234651-m03) <domain type='kvm'>
	I0731 17:01:09.497814   26392 main.go:141] libmachine: (ha-234651-m03)   <name>ha-234651-m03</name>
	I0731 17:01:09.497822   26392 main.go:141] libmachine: (ha-234651-m03)   <memory unit='MiB'>2200</memory>
	I0731 17:01:09.497830   26392 main.go:141] libmachine: (ha-234651-m03)   <vcpu>2</vcpu>
	I0731 17:01:09.497839   26392 main.go:141] libmachine: (ha-234651-m03)   <features>
	I0731 17:01:09.497848   26392 main.go:141] libmachine: (ha-234651-m03)     <acpi/>
	I0731 17:01:09.497857   26392 main.go:141] libmachine: (ha-234651-m03)     <apic/>
	I0731 17:01:09.497862   26392 main.go:141] libmachine: (ha-234651-m03)     <pae/>
	I0731 17:01:09.497866   26392 main.go:141] libmachine: (ha-234651-m03)     
	I0731 17:01:09.497872   26392 main.go:141] libmachine: (ha-234651-m03)   </features>
	I0731 17:01:09.497880   26392 main.go:141] libmachine: (ha-234651-m03)   <cpu mode='host-passthrough'>
	I0731 17:01:09.497885   26392 main.go:141] libmachine: (ha-234651-m03)   
	I0731 17:01:09.497891   26392 main.go:141] libmachine: (ha-234651-m03)   </cpu>
	I0731 17:01:09.497901   26392 main.go:141] libmachine: (ha-234651-m03)   <os>
	I0731 17:01:09.497908   26392 main.go:141] libmachine: (ha-234651-m03)     <type>hvm</type>
	I0731 17:01:09.497933   26392 main.go:141] libmachine: (ha-234651-m03)     <boot dev='cdrom'/>
	I0731 17:01:09.497954   26392 main.go:141] libmachine: (ha-234651-m03)     <boot dev='hd'/>
	I0731 17:01:09.497960   26392 main.go:141] libmachine: (ha-234651-m03)     <bootmenu enable='no'/>
	I0731 17:01:09.497967   26392 main.go:141] libmachine: (ha-234651-m03)   </os>
	I0731 17:01:09.497972   26392 main.go:141] libmachine: (ha-234651-m03)   <devices>
	I0731 17:01:09.497980   26392 main.go:141] libmachine: (ha-234651-m03)     <disk type='file' device='cdrom'>
	I0731 17:01:09.497990   26392 main.go:141] libmachine: (ha-234651-m03)       <source file='/home/jenkins/minikube-integration/19349-8084/.minikube/machines/ha-234651-m03/boot2docker.iso'/>
	I0731 17:01:09.497998   26392 main.go:141] libmachine: (ha-234651-m03)       <target dev='hdc' bus='scsi'/>
	I0731 17:01:09.498004   26392 main.go:141] libmachine: (ha-234651-m03)       <readonly/>
	I0731 17:01:09.498010   26392 main.go:141] libmachine: (ha-234651-m03)     </disk>
	I0731 17:01:09.498017   26392 main.go:141] libmachine: (ha-234651-m03)     <disk type='file' device='disk'>
	I0731 17:01:09.498026   26392 main.go:141] libmachine: (ha-234651-m03)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0731 17:01:09.498041   26392 main.go:141] libmachine: (ha-234651-m03)       <source file='/home/jenkins/minikube-integration/19349-8084/.minikube/machines/ha-234651-m03/ha-234651-m03.rawdisk'/>
	I0731 17:01:09.498054   26392 main.go:141] libmachine: (ha-234651-m03)       <target dev='hda' bus='virtio'/>
	I0731 17:01:09.498064   26392 main.go:141] libmachine: (ha-234651-m03)     </disk>
	I0731 17:01:09.498076   26392 main.go:141] libmachine: (ha-234651-m03)     <interface type='network'>
	I0731 17:01:09.498086   26392 main.go:141] libmachine: (ha-234651-m03)       <source network='mk-ha-234651'/>
	I0731 17:01:09.498098   26392 main.go:141] libmachine: (ha-234651-m03)       <model type='virtio'/>
	I0731 17:01:09.498108   26392 main.go:141] libmachine: (ha-234651-m03)     </interface>
	I0731 17:01:09.498123   26392 main.go:141] libmachine: (ha-234651-m03)     <interface type='network'>
	I0731 17:01:09.498139   26392 main.go:141] libmachine: (ha-234651-m03)       <source network='default'/>
	I0731 17:01:09.498148   26392 main.go:141] libmachine: (ha-234651-m03)       <model type='virtio'/>
	I0731 17:01:09.498152   26392 main.go:141] libmachine: (ha-234651-m03)     </interface>
	I0731 17:01:09.498158   26392 main.go:141] libmachine: (ha-234651-m03)     <serial type='pty'>
	I0731 17:01:09.498165   26392 main.go:141] libmachine: (ha-234651-m03)       <target port='0'/>
	I0731 17:01:09.498170   26392 main.go:141] libmachine: (ha-234651-m03)     </serial>
	I0731 17:01:09.498176   26392 main.go:141] libmachine: (ha-234651-m03)     <console type='pty'>
	I0731 17:01:09.498182   26392 main.go:141] libmachine: (ha-234651-m03)       <target type='serial' port='0'/>
	I0731 17:01:09.498189   26392 main.go:141] libmachine: (ha-234651-m03)     </console>
	I0731 17:01:09.498194   26392 main.go:141] libmachine: (ha-234651-m03)     <rng model='virtio'>
	I0731 17:01:09.498202   26392 main.go:141] libmachine: (ha-234651-m03)       <backend model='random'>/dev/random</backend>
	I0731 17:01:09.498212   26392 main.go:141] libmachine: (ha-234651-m03)     </rng>
	I0731 17:01:09.498223   26392 main.go:141] libmachine: (ha-234651-m03)     
	I0731 17:01:09.498235   26392 main.go:141] libmachine: (ha-234651-m03)     
	I0731 17:01:09.498251   26392 main.go:141] libmachine: (ha-234651-m03)   </devices>
	I0731 17:01:09.498272   26392 main.go:141] libmachine: (ha-234651-m03) </domain>
	I0731 17:01:09.498285   26392 main.go:141] libmachine: (ha-234651-m03) 
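The XML logged above defines the m03 guest as a 2-vCPU, 2200 MiB KVM domain that boots the boot2docker ISO as a read-only SCSI cdrom, attaches the raw disk image as a virtio block device, and wires up two virtio NICs: one on the cluster network mk-ha-234651 and one on the libvirt default network. As a rough sketch (not taken from the test output), the same definition can be inspected on the host with:

	virsh dumpxml ha-234651-m03
	virsh domiflist ha-234651-m03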
	I0731 17:01:09.505009   26392 main.go:141] libmachine: (ha-234651-m03) DBG | domain ha-234651-m03 has defined MAC address 52:54:00:98:32:f5 in network default
	I0731 17:01:09.505505   26392 main.go:141] libmachine: (ha-234651-m03) Ensuring networks are active...
	I0731 17:01:09.505525   26392 main.go:141] libmachine: (ha-234651-m03) DBG | domain ha-234651-m03 has defined MAC address 52:54:00:ac:c0:cf in network mk-ha-234651
	I0731 17:01:09.506245   26392 main.go:141] libmachine: (ha-234651-m03) Ensuring network default is active
	I0731 17:01:09.506588   26392 main.go:141] libmachine: (ha-234651-m03) Ensuring network mk-ha-234651 is active
	I0731 17:01:09.506926   26392 main.go:141] libmachine: (ha-234651-m03) Getting domain xml...
	I0731 17:01:09.507698   26392 main.go:141] libmachine: (ha-234651-m03) Creating domain...
	I0731 17:01:10.710510   26392 main.go:141] libmachine: (ha-234651-m03) Waiting to get IP...
	I0731 17:01:10.711248   26392 main.go:141] libmachine: (ha-234651-m03) DBG | domain ha-234651-m03 has defined MAC address 52:54:00:ac:c0:cf in network mk-ha-234651
	I0731 17:01:10.711639   26392 main.go:141] libmachine: (ha-234651-m03) DBG | unable to find current IP address of domain ha-234651-m03 in network mk-ha-234651
	I0731 17:01:10.711664   26392 main.go:141] libmachine: (ha-234651-m03) DBG | I0731 17:01:10.711622   27175 retry.go:31] will retry after 214.209915ms: waiting for machine to come up
	I0731 17:01:10.926907   26392 main.go:141] libmachine: (ha-234651-m03) DBG | domain ha-234651-m03 has defined MAC address 52:54:00:ac:c0:cf in network mk-ha-234651
	I0731 17:01:10.927356   26392 main.go:141] libmachine: (ha-234651-m03) DBG | unable to find current IP address of domain ha-234651-m03 in network mk-ha-234651
	I0731 17:01:10.927378   26392 main.go:141] libmachine: (ha-234651-m03) DBG | I0731 17:01:10.927303   27175 retry.go:31] will retry after 376.598663ms: waiting for machine to come up
	I0731 17:01:11.305743   26392 main.go:141] libmachine: (ha-234651-m03) DBG | domain ha-234651-m03 has defined MAC address 52:54:00:ac:c0:cf in network mk-ha-234651
	I0731 17:01:11.306195   26392 main.go:141] libmachine: (ha-234651-m03) DBG | unable to find current IP address of domain ha-234651-m03 in network mk-ha-234651
	I0731 17:01:11.306219   26392 main.go:141] libmachine: (ha-234651-m03) DBG | I0731 17:01:11.306160   27175 retry.go:31] will retry after 328.55691ms: waiting for machine to come up
	I0731 17:01:11.636615   26392 main.go:141] libmachine: (ha-234651-m03) DBG | domain ha-234651-m03 has defined MAC address 52:54:00:ac:c0:cf in network mk-ha-234651
	I0731 17:01:11.637088   26392 main.go:141] libmachine: (ha-234651-m03) DBG | unable to find current IP address of domain ha-234651-m03 in network mk-ha-234651
	I0731 17:01:11.637117   26392 main.go:141] libmachine: (ha-234651-m03) DBG | I0731 17:01:11.637023   27175 retry.go:31] will retry after 509.868926ms: waiting for machine to come up
	I0731 17:01:12.148495   26392 main.go:141] libmachine: (ha-234651-m03) DBG | domain ha-234651-m03 has defined MAC address 52:54:00:ac:c0:cf in network mk-ha-234651
	I0731 17:01:12.148932   26392 main.go:141] libmachine: (ha-234651-m03) DBG | unable to find current IP address of domain ha-234651-m03 in network mk-ha-234651
	I0731 17:01:12.148953   26392 main.go:141] libmachine: (ha-234651-m03) DBG | I0731 17:01:12.148904   27175 retry.go:31] will retry after 489.995297ms: waiting for machine to come up
	I0731 17:01:12.640709   26392 main.go:141] libmachine: (ha-234651-m03) DBG | domain ha-234651-m03 has defined MAC address 52:54:00:ac:c0:cf in network mk-ha-234651
	I0731 17:01:12.641266   26392 main.go:141] libmachine: (ha-234651-m03) DBG | unable to find current IP address of domain ha-234651-m03 in network mk-ha-234651
	I0731 17:01:12.641299   26392 main.go:141] libmachine: (ha-234651-m03) DBG | I0731 17:01:12.641207   27175 retry.go:31] will retry after 891.834852ms: waiting for machine to come up
	I0731 17:01:13.534824   26392 main.go:141] libmachine: (ha-234651-m03) DBG | domain ha-234651-m03 has defined MAC address 52:54:00:ac:c0:cf in network mk-ha-234651
	I0731 17:01:13.535341   26392 main.go:141] libmachine: (ha-234651-m03) DBG | unable to find current IP address of domain ha-234651-m03 in network mk-ha-234651
	I0731 17:01:13.535368   26392 main.go:141] libmachine: (ha-234651-m03) DBG | I0731 17:01:13.535293   27175 retry.go:31] will retry after 740.342338ms: waiting for machine to come up
	I0731 17:01:14.277390   26392 main.go:141] libmachine: (ha-234651-m03) DBG | domain ha-234651-m03 has defined MAC address 52:54:00:ac:c0:cf in network mk-ha-234651
	I0731 17:01:14.277830   26392 main.go:141] libmachine: (ha-234651-m03) DBG | unable to find current IP address of domain ha-234651-m03 in network mk-ha-234651
	I0731 17:01:14.277852   26392 main.go:141] libmachine: (ha-234651-m03) DBG | I0731 17:01:14.277792   27175 retry.go:31] will retry after 1.412219536s: waiting for machine to come up
	I0731 17:01:15.692325   26392 main.go:141] libmachine: (ha-234651-m03) DBG | domain ha-234651-m03 has defined MAC address 52:54:00:ac:c0:cf in network mk-ha-234651
	I0731 17:01:15.692790   26392 main.go:141] libmachine: (ha-234651-m03) DBG | unable to find current IP address of domain ha-234651-m03 in network mk-ha-234651
	I0731 17:01:15.692832   26392 main.go:141] libmachine: (ha-234651-m03) DBG | I0731 17:01:15.692742   27175 retry.go:31] will retry after 1.272314742s: waiting for machine to come up
	I0731 17:01:16.966944   26392 main.go:141] libmachine: (ha-234651-m03) DBG | domain ha-234651-m03 has defined MAC address 52:54:00:ac:c0:cf in network mk-ha-234651
	I0731 17:01:16.967394   26392 main.go:141] libmachine: (ha-234651-m03) DBG | unable to find current IP address of domain ha-234651-m03 in network mk-ha-234651
	I0731 17:01:16.967424   26392 main.go:141] libmachine: (ha-234651-m03) DBG | I0731 17:01:16.967344   27175 retry.go:31] will retry after 1.443011677s: waiting for machine to come up
	I0731 17:01:18.411974   26392 main.go:141] libmachine: (ha-234651-m03) DBG | domain ha-234651-m03 has defined MAC address 52:54:00:ac:c0:cf in network mk-ha-234651
	I0731 17:01:18.412499   26392 main.go:141] libmachine: (ha-234651-m03) DBG | unable to find current IP address of domain ha-234651-m03 in network mk-ha-234651
	I0731 17:01:18.412529   26392 main.go:141] libmachine: (ha-234651-m03) DBG | I0731 17:01:18.412449   27175 retry.go:31] will retry after 2.743615987s: waiting for machine to come up
	I0731 17:01:21.157559   26392 main.go:141] libmachine: (ha-234651-m03) DBG | domain ha-234651-m03 has defined MAC address 52:54:00:ac:c0:cf in network mk-ha-234651
	I0731 17:01:21.157996   26392 main.go:141] libmachine: (ha-234651-m03) DBG | unable to find current IP address of domain ha-234651-m03 in network mk-ha-234651
	I0731 17:01:21.158043   26392 main.go:141] libmachine: (ha-234651-m03) DBG | I0731 17:01:21.157950   27175 retry.go:31] will retry after 2.604564384s: waiting for machine to come up
	I0731 17:01:23.763967   26392 main.go:141] libmachine: (ha-234651-m03) DBG | domain ha-234651-m03 has defined MAC address 52:54:00:ac:c0:cf in network mk-ha-234651
	I0731 17:01:23.764422   26392 main.go:141] libmachine: (ha-234651-m03) DBG | unable to find current IP address of domain ha-234651-m03 in network mk-ha-234651
	I0731 17:01:23.764443   26392 main.go:141] libmachine: (ha-234651-m03) DBG | I0731 17:01:23.764380   27175 retry.go:31] will retry after 3.508285757s: waiting for machine to come up
	I0731 17:01:27.276084   26392 main.go:141] libmachine: (ha-234651-m03) DBG | domain ha-234651-m03 has defined MAC address 52:54:00:ac:c0:cf in network mk-ha-234651
	I0731 17:01:27.276506   26392 main.go:141] libmachine: (ha-234651-m03) DBG | unable to find current IP address of domain ha-234651-m03 in network mk-ha-234651
	I0731 17:01:27.276536   26392 main.go:141] libmachine: (ha-234651-m03) DBG | I0731 17:01:27.276462   27175 retry.go:31] will retry after 4.892278928s: waiting for machine to come up
	I0731 17:01:32.172161   26392 main.go:141] libmachine: (ha-234651-m03) DBG | domain ha-234651-m03 has defined MAC address 52:54:00:ac:c0:cf in network mk-ha-234651
	I0731 17:01:32.172728   26392 main.go:141] libmachine: (ha-234651-m03) Found IP for machine: 192.168.39.139
	I0731 17:01:32.172753   26392 main.go:141] libmachine: (ha-234651-m03) DBG | domain ha-234651-m03 has current primary IP address 192.168.39.139 and MAC address 52:54:00:ac:c0:cf in network mk-ha-234651
	I0731 17:01:32.172762   26392 main.go:141] libmachine: (ha-234651-m03) Reserving static IP address...
	I0731 17:01:32.173205   26392 main.go:141] libmachine: (ha-234651-m03) DBG | unable to find host DHCP lease matching {name: "ha-234651-m03", mac: "52:54:00:ac:c0:cf", ip: "192.168.39.139"} in network mk-ha-234651
	I0731 17:01:32.245008   26392 main.go:141] libmachine: (ha-234651-m03) DBG | Getting to WaitForSSH function...
	I0731 17:01:32.245035   26392 main.go:141] libmachine: (ha-234651-m03) Reserved static IP address: 192.168.39.139
	I0731 17:01:32.245049   26392 main.go:141] libmachine: (ha-234651-m03) Waiting for SSH to be available...
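The repeated "unable to find current IP address" messages above are the kvm2 driver polling libvirt's DHCP leases for the new domain's MAC address, with the retry interval growing from roughly 200ms to several seconds until a lease appears. A sketch of how the same lease table could be checked by hand, assuming standard libvirt tooling on the host:

	virsh net-dhcp-leases mk-ha-234651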
	I0731 17:01:32.247523   26392 main.go:141] libmachine: (ha-234651-m03) DBG | domain ha-234651-m03 has defined MAC address 52:54:00:ac:c0:cf in network mk-ha-234651
	I0731 17:01:32.247958   26392 main.go:141] libmachine: (ha-234651-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ac:c0:cf", ip: ""} in network mk-ha-234651: {Iface:virbr1 ExpiryTime:2024-07-31 18:01:23 +0000 UTC Type:0 Mac:52:54:00:ac:c0:cf Iaid: IPaddr:192.168.39.139 Prefix:24 Hostname:minikube Clientid:01:52:54:00:ac:c0:cf}
	I0731 17:01:32.248074   26392 main.go:141] libmachine: (ha-234651-m03) DBG | domain ha-234651-m03 has defined IP address 192.168.39.139 and MAC address 52:54:00:ac:c0:cf in network mk-ha-234651
	I0731 17:01:32.248199   26392 main.go:141] libmachine: (ha-234651-m03) DBG | Using SSH client type: external
	I0731 17:01:32.248227   26392 main.go:141] libmachine: (ha-234651-m03) DBG | Using SSH private key: /home/jenkins/minikube-integration/19349-8084/.minikube/machines/ha-234651-m03/id_rsa (-rw-------)
	I0731 17:01:32.248270   26392 main.go:141] libmachine: (ha-234651-m03) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.139 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19349-8084/.minikube/machines/ha-234651-m03/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0731 17:01:32.248304   26392 main.go:141] libmachine: (ha-234651-m03) DBG | About to run SSH command:
	I0731 17:01:32.248322   26392 main.go:141] libmachine: (ha-234651-m03) DBG | exit 0
	I0731 17:01:32.370824   26392 main.go:141] libmachine: (ha-234651-m03) DBG | SSH cmd err, output: <nil>: 
	I0731 17:01:32.371066   26392 main.go:141] libmachine: (ha-234651-m03) KVM machine creation complete!
	I0731 17:01:32.371306   26392 main.go:141] libmachine: (ha-234651-m03) Calling .GetConfigRaw
	I0731 17:01:32.371797   26392 main.go:141] libmachine: (ha-234651-m03) Calling .DriverName
	I0731 17:01:32.371998   26392 main.go:141] libmachine: (ha-234651-m03) Calling .DriverName
	I0731 17:01:32.372156   26392 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0731 17:01:32.372169   26392 main.go:141] libmachine: (ha-234651-m03) Calling .GetState
	I0731 17:01:32.373922   26392 main.go:141] libmachine: Detecting operating system of created instance...
	I0731 17:01:32.373935   26392 main.go:141] libmachine: Waiting for SSH to be available...
	I0731 17:01:32.373940   26392 main.go:141] libmachine: Getting to WaitForSSH function...
	I0731 17:01:32.373946   26392 main.go:141] libmachine: (ha-234651-m03) Calling .GetSSHHostname
	I0731 17:01:32.376008   26392 main.go:141] libmachine: (ha-234651-m03) DBG | domain ha-234651-m03 has defined MAC address 52:54:00:ac:c0:cf in network mk-ha-234651
	I0731 17:01:32.376384   26392 main.go:141] libmachine: (ha-234651-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ac:c0:cf", ip: ""} in network mk-ha-234651: {Iface:virbr1 ExpiryTime:2024-07-31 18:01:23 +0000 UTC Type:0 Mac:52:54:00:ac:c0:cf Iaid: IPaddr:192.168.39.139 Prefix:24 Hostname:ha-234651-m03 Clientid:01:52:54:00:ac:c0:cf}
	I0731 17:01:32.376408   26392 main.go:141] libmachine: (ha-234651-m03) DBG | domain ha-234651-m03 has defined IP address 192.168.39.139 and MAC address 52:54:00:ac:c0:cf in network mk-ha-234651
	I0731 17:01:32.376620   26392 main.go:141] libmachine: (ha-234651-m03) Calling .GetSSHPort
	I0731 17:01:32.376804   26392 main.go:141] libmachine: (ha-234651-m03) Calling .GetSSHKeyPath
	I0731 17:01:32.376989   26392 main.go:141] libmachine: (ha-234651-m03) Calling .GetSSHKeyPath
	I0731 17:01:32.377143   26392 main.go:141] libmachine: (ha-234651-m03) Calling .GetSSHUsername
	I0731 17:01:32.377429   26392 main.go:141] libmachine: Using SSH client type: native
	I0731 17:01:32.377649   26392 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.139 22 <nil> <nil>}
	I0731 17:01:32.377660   26392 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0731 17:01:32.474217   26392 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0731 17:01:32.474239   26392 main.go:141] libmachine: Detecting the provisioner...
	I0731 17:01:32.474250   26392 main.go:141] libmachine: (ha-234651-m03) Calling .GetSSHHostname
	I0731 17:01:32.477335   26392 main.go:141] libmachine: (ha-234651-m03) DBG | domain ha-234651-m03 has defined MAC address 52:54:00:ac:c0:cf in network mk-ha-234651
	I0731 17:01:32.477812   26392 main.go:141] libmachine: (ha-234651-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ac:c0:cf", ip: ""} in network mk-ha-234651: {Iface:virbr1 ExpiryTime:2024-07-31 18:01:23 +0000 UTC Type:0 Mac:52:54:00:ac:c0:cf Iaid: IPaddr:192.168.39.139 Prefix:24 Hostname:ha-234651-m03 Clientid:01:52:54:00:ac:c0:cf}
	I0731 17:01:32.477835   26392 main.go:141] libmachine: (ha-234651-m03) DBG | domain ha-234651-m03 has defined IP address 192.168.39.139 and MAC address 52:54:00:ac:c0:cf in network mk-ha-234651
	I0731 17:01:32.477980   26392 main.go:141] libmachine: (ha-234651-m03) Calling .GetSSHPort
	I0731 17:01:32.478177   26392 main.go:141] libmachine: (ha-234651-m03) Calling .GetSSHKeyPath
	I0731 17:01:32.478379   26392 main.go:141] libmachine: (ha-234651-m03) Calling .GetSSHKeyPath
	I0731 17:01:32.478559   26392 main.go:141] libmachine: (ha-234651-m03) Calling .GetSSHUsername
	I0731 17:01:32.478713   26392 main.go:141] libmachine: Using SSH client type: native
	I0731 17:01:32.478909   26392 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.139 22 <nil> <nil>}
	I0731 17:01:32.478921   26392 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0731 17:01:32.579743   26392 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0731 17:01:32.579798   26392 main.go:141] libmachine: found compatible host: buildroot
	I0731 17:01:32.579805   26392 main.go:141] libmachine: Provisioning with buildroot...
	I0731 17:01:32.579811   26392 main.go:141] libmachine: (ha-234651-m03) Calling .GetMachineName
	I0731 17:01:32.580068   26392 buildroot.go:166] provisioning hostname "ha-234651-m03"
	I0731 17:01:32.580096   26392 main.go:141] libmachine: (ha-234651-m03) Calling .GetMachineName
	I0731 17:01:32.580266   26392 main.go:141] libmachine: (ha-234651-m03) Calling .GetSSHHostname
	I0731 17:01:32.582796   26392 main.go:141] libmachine: (ha-234651-m03) DBG | domain ha-234651-m03 has defined MAC address 52:54:00:ac:c0:cf in network mk-ha-234651
	I0731 17:01:32.583164   26392 main.go:141] libmachine: (ha-234651-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ac:c0:cf", ip: ""} in network mk-ha-234651: {Iface:virbr1 ExpiryTime:2024-07-31 18:01:23 +0000 UTC Type:0 Mac:52:54:00:ac:c0:cf Iaid: IPaddr:192.168.39.139 Prefix:24 Hostname:ha-234651-m03 Clientid:01:52:54:00:ac:c0:cf}
	I0731 17:01:32.583194   26392 main.go:141] libmachine: (ha-234651-m03) DBG | domain ha-234651-m03 has defined IP address 192.168.39.139 and MAC address 52:54:00:ac:c0:cf in network mk-ha-234651
	I0731 17:01:32.583353   26392 main.go:141] libmachine: (ha-234651-m03) Calling .GetSSHPort
	I0731 17:01:32.583512   26392 main.go:141] libmachine: (ha-234651-m03) Calling .GetSSHKeyPath
	I0731 17:01:32.583651   26392 main.go:141] libmachine: (ha-234651-m03) Calling .GetSSHKeyPath
	I0731 17:01:32.583770   26392 main.go:141] libmachine: (ha-234651-m03) Calling .GetSSHUsername
	I0731 17:01:32.583947   26392 main.go:141] libmachine: Using SSH client type: native
	I0731 17:01:32.584123   26392 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.139 22 <nil> <nil>}
	I0731 17:01:32.584143   26392 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-234651-m03 && echo "ha-234651-m03" | sudo tee /etc/hostname
	I0731 17:01:32.697431   26392 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-234651-m03
	
	I0731 17:01:32.697455   26392 main.go:141] libmachine: (ha-234651-m03) Calling .GetSSHHostname
	I0731 17:01:32.700299   26392 main.go:141] libmachine: (ha-234651-m03) DBG | domain ha-234651-m03 has defined MAC address 52:54:00:ac:c0:cf in network mk-ha-234651
	I0731 17:01:32.700674   26392 main.go:141] libmachine: (ha-234651-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ac:c0:cf", ip: ""} in network mk-ha-234651: {Iface:virbr1 ExpiryTime:2024-07-31 18:01:23 +0000 UTC Type:0 Mac:52:54:00:ac:c0:cf Iaid: IPaddr:192.168.39.139 Prefix:24 Hostname:ha-234651-m03 Clientid:01:52:54:00:ac:c0:cf}
	I0731 17:01:32.700705   26392 main.go:141] libmachine: (ha-234651-m03) DBG | domain ha-234651-m03 has defined IP address 192.168.39.139 and MAC address 52:54:00:ac:c0:cf in network mk-ha-234651
	I0731 17:01:32.700940   26392 main.go:141] libmachine: (ha-234651-m03) Calling .GetSSHPort
	I0731 17:01:32.701126   26392 main.go:141] libmachine: (ha-234651-m03) Calling .GetSSHKeyPath
	I0731 17:01:32.701281   26392 main.go:141] libmachine: (ha-234651-m03) Calling .GetSSHKeyPath
	I0731 17:01:32.701433   26392 main.go:141] libmachine: (ha-234651-m03) Calling .GetSSHUsername
	I0731 17:01:32.701591   26392 main.go:141] libmachine: Using SSH client type: native
	I0731 17:01:32.701797   26392 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.139 22 <nil> <nil>}
	I0731 17:01:32.701822   26392 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-234651-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-234651-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-234651-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
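The shell fragment above makes the guest resolve its own hostname locally: if no existing /etc/hosts entry ends in ha-234651-m03, it either rewrites the 127.0.1.1 line or appends one, so afterwards /etc/hosts should contain a line like the following (illustrative, not copied from the log):

	127.0.1.1 ha-234651-m03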
	I0731 17:01:32.807070   26392 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0731 17:01:32.807099   26392 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19349-8084/.minikube CaCertPath:/home/jenkins/minikube-integration/19349-8084/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19349-8084/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19349-8084/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19349-8084/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19349-8084/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19349-8084/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19349-8084/.minikube}
	I0731 17:01:32.807141   26392 buildroot.go:174] setting up certificates
	I0731 17:01:32.807151   26392 provision.go:84] configureAuth start
	I0731 17:01:32.807160   26392 main.go:141] libmachine: (ha-234651-m03) Calling .GetMachineName
	I0731 17:01:32.807456   26392 main.go:141] libmachine: (ha-234651-m03) Calling .GetIP
	I0731 17:01:32.810372   26392 main.go:141] libmachine: (ha-234651-m03) DBG | domain ha-234651-m03 has defined MAC address 52:54:00:ac:c0:cf in network mk-ha-234651
	I0731 17:01:32.810735   26392 main.go:141] libmachine: (ha-234651-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ac:c0:cf", ip: ""} in network mk-ha-234651: {Iface:virbr1 ExpiryTime:2024-07-31 18:01:23 +0000 UTC Type:0 Mac:52:54:00:ac:c0:cf Iaid: IPaddr:192.168.39.139 Prefix:24 Hostname:ha-234651-m03 Clientid:01:52:54:00:ac:c0:cf}
	I0731 17:01:32.810781   26392 main.go:141] libmachine: (ha-234651-m03) DBG | domain ha-234651-m03 has defined IP address 192.168.39.139 and MAC address 52:54:00:ac:c0:cf in network mk-ha-234651
	I0731 17:01:32.810910   26392 main.go:141] libmachine: (ha-234651-m03) Calling .GetSSHHostname
	I0731 17:01:32.812999   26392 main.go:141] libmachine: (ha-234651-m03) DBG | domain ha-234651-m03 has defined MAC address 52:54:00:ac:c0:cf in network mk-ha-234651
	I0731 17:01:32.813395   26392 main.go:141] libmachine: (ha-234651-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ac:c0:cf", ip: ""} in network mk-ha-234651: {Iface:virbr1 ExpiryTime:2024-07-31 18:01:23 +0000 UTC Type:0 Mac:52:54:00:ac:c0:cf Iaid: IPaddr:192.168.39.139 Prefix:24 Hostname:ha-234651-m03 Clientid:01:52:54:00:ac:c0:cf}
	I0731 17:01:32.813420   26392 main.go:141] libmachine: (ha-234651-m03) DBG | domain ha-234651-m03 has defined IP address 192.168.39.139 and MAC address 52:54:00:ac:c0:cf in network mk-ha-234651
	I0731 17:01:32.813557   26392 provision.go:143] copyHostCerts
	I0731 17:01:32.813586   26392 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19349-8084/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19349-8084/.minikube/key.pem
	I0731 17:01:32.813627   26392 exec_runner.go:144] found /home/jenkins/minikube-integration/19349-8084/.minikube/key.pem, removing ...
	I0731 17:01:32.813638   26392 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19349-8084/.minikube/key.pem
	I0731 17:01:32.813733   26392 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19349-8084/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19349-8084/.minikube/key.pem (1675 bytes)
	I0731 17:01:32.813818   26392 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19349-8084/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19349-8084/.minikube/ca.pem
	I0731 17:01:32.813837   26392 exec_runner.go:144] found /home/jenkins/minikube-integration/19349-8084/.minikube/ca.pem, removing ...
	I0731 17:01:32.813845   26392 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19349-8084/.minikube/ca.pem
	I0731 17:01:32.813881   26392 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19349-8084/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19349-8084/.minikube/ca.pem (1082 bytes)
	I0731 17:01:32.813948   26392 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19349-8084/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19349-8084/.minikube/cert.pem
	I0731 17:01:32.813973   26392 exec_runner.go:144] found /home/jenkins/minikube-integration/19349-8084/.minikube/cert.pem, removing ...
	I0731 17:01:32.813981   26392 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19349-8084/.minikube/cert.pem
	I0731 17:01:32.814008   26392 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19349-8084/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19349-8084/.minikube/cert.pem (1123 bytes)
	I0731 17:01:32.814064   26392 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19349-8084/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19349-8084/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19349-8084/.minikube/certs/ca-key.pem org=jenkins.ha-234651-m03 san=[127.0.0.1 192.168.39.139 ha-234651-m03 localhost minikube]
	I0731 17:01:33.066175   26392 provision.go:177] copyRemoteCerts
	I0731 17:01:33.066233   26392 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0731 17:01:33.066255   26392 main.go:141] libmachine: (ha-234651-m03) Calling .GetSSHHostname
	I0731 17:01:33.068872   26392 main.go:141] libmachine: (ha-234651-m03) DBG | domain ha-234651-m03 has defined MAC address 52:54:00:ac:c0:cf in network mk-ha-234651
	I0731 17:01:33.069232   26392 main.go:141] libmachine: (ha-234651-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ac:c0:cf", ip: ""} in network mk-ha-234651: {Iface:virbr1 ExpiryTime:2024-07-31 18:01:23 +0000 UTC Type:0 Mac:52:54:00:ac:c0:cf Iaid: IPaddr:192.168.39.139 Prefix:24 Hostname:ha-234651-m03 Clientid:01:52:54:00:ac:c0:cf}
	I0731 17:01:33.069260   26392 main.go:141] libmachine: (ha-234651-m03) DBG | domain ha-234651-m03 has defined IP address 192.168.39.139 and MAC address 52:54:00:ac:c0:cf in network mk-ha-234651
	I0731 17:01:33.069459   26392 main.go:141] libmachine: (ha-234651-m03) Calling .GetSSHPort
	I0731 17:01:33.069645   26392 main.go:141] libmachine: (ha-234651-m03) Calling .GetSSHKeyPath
	I0731 17:01:33.069791   26392 main.go:141] libmachine: (ha-234651-m03) Calling .GetSSHUsername
	I0731 17:01:33.069906   26392 sshutil.go:53] new ssh client: &{IP:192.168.39.139 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19349-8084/.minikube/machines/ha-234651-m03/id_rsa Username:docker}
	I0731 17:01:33.153992   26392 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19349-8084/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0731 17:01:33.154068   26392 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19349-8084/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0731 17:01:33.177299   26392 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19349-8084/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0731 17:01:33.177378   26392 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19349-8084/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0731 17:01:33.199207   26392 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19349-8084/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0731 17:01:33.199275   26392 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19349-8084/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0731 17:01:33.221591   26392 provision.go:87] duration metric: took 414.430702ms to configureAuth
	I0731 17:01:33.221621   26392 buildroot.go:189] setting minikube options for container-runtime
	I0731 17:01:33.221823   26392 config.go:182] Loaded profile config "ha-234651": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0731 17:01:33.221901   26392 main.go:141] libmachine: (ha-234651-m03) Calling .GetSSHHostname
	I0731 17:01:33.224934   26392 main.go:141] libmachine: (ha-234651-m03) DBG | domain ha-234651-m03 has defined MAC address 52:54:00:ac:c0:cf in network mk-ha-234651
	I0731 17:01:33.225386   26392 main.go:141] libmachine: (ha-234651-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ac:c0:cf", ip: ""} in network mk-ha-234651: {Iface:virbr1 ExpiryTime:2024-07-31 18:01:23 +0000 UTC Type:0 Mac:52:54:00:ac:c0:cf Iaid: IPaddr:192.168.39.139 Prefix:24 Hostname:ha-234651-m03 Clientid:01:52:54:00:ac:c0:cf}
	I0731 17:01:33.225431   26392 main.go:141] libmachine: (ha-234651-m03) DBG | domain ha-234651-m03 has defined IP address 192.168.39.139 and MAC address 52:54:00:ac:c0:cf in network mk-ha-234651
	I0731 17:01:33.225586   26392 main.go:141] libmachine: (ha-234651-m03) Calling .GetSSHPort
	I0731 17:01:33.225786   26392 main.go:141] libmachine: (ha-234651-m03) Calling .GetSSHKeyPath
	I0731 17:01:33.225945   26392 main.go:141] libmachine: (ha-234651-m03) Calling .GetSSHKeyPath
	I0731 17:01:33.226081   26392 main.go:141] libmachine: (ha-234651-m03) Calling .GetSSHUsername
	I0731 17:01:33.226239   26392 main.go:141] libmachine: Using SSH client type: native
	I0731 17:01:33.226402   26392 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.139 22 <nil> <nil>}
	I0731 17:01:33.226415   26392 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0731 17:01:33.488220   26392 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0731 17:01:33.488252   26392 main.go:141] libmachine: Checking connection to Docker...
	I0731 17:01:33.488265   26392 main.go:141] libmachine: (ha-234651-m03) Calling .GetURL
	I0731 17:01:33.489561   26392 main.go:141] libmachine: (ha-234651-m03) DBG | Using libvirt version 6000000
	I0731 17:01:33.491742   26392 main.go:141] libmachine: (ha-234651-m03) DBG | domain ha-234651-m03 has defined MAC address 52:54:00:ac:c0:cf in network mk-ha-234651
	I0731 17:01:33.492239   26392 main.go:141] libmachine: (ha-234651-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ac:c0:cf", ip: ""} in network mk-ha-234651: {Iface:virbr1 ExpiryTime:2024-07-31 18:01:23 +0000 UTC Type:0 Mac:52:54:00:ac:c0:cf Iaid: IPaddr:192.168.39.139 Prefix:24 Hostname:ha-234651-m03 Clientid:01:52:54:00:ac:c0:cf}
	I0731 17:01:33.492272   26392 main.go:141] libmachine: (ha-234651-m03) DBG | domain ha-234651-m03 has defined IP address 192.168.39.139 and MAC address 52:54:00:ac:c0:cf in network mk-ha-234651
	I0731 17:01:33.492505   26392 main.go:141] libmachine: Docker is up and running!
	I0731 17:01:33.492523   26392 main.go:141] libmachine: Reticulating splines...
	I0731 17:01:33.492531   26392 client.go:171] duration metric: took 24.63927913s to LocalClient.Create
	I0731 17:01:33.492555   26392 start.go:167] duration metric: took 24.639330663s to libmachine.API.Create "ha-234651"
	I0731 17:01:33.492578   26392 start.go:293] postStartSetup for "ha-234651-m03" (driver="kvm2")
	I0731 17:01:33.492591   26392 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0731 17:01:33.492663   26392 main.go:141] libmachine: (ha-234651-m03) Calling .DriverName
	I0731 17:01:33.492918   26392 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0731 17:01:33.492944   26392 main.go:141] libmachine: (ha-234651-m03) Calling .GetSSHHostname
	I0731 17:01:33.495518   26392 main.go:141] libmachine: (ha-234651-m03) DBG | domain ha-234651-m03 has defined MAC address 52:54:00:ac:c0:cf in network mk-ha-234651
	I0731 17:01:33.495920   26392 main.go:141] libmachine: (ha-234651-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ac:c0:cf", ip: ""} in network mk-ha-234651: {Iface:virbr1 ExpiryTime:2024-07-31 18:01:23 +0000 UTC Type:0 Mac:52:54:00:ac:c0:cf Iaid: IPaddr:192.168.39.139 Prefix:24 Hostname:ha-234651-m03 Clientid:01:52:54:00:ac:c0:cf}
	I0731 17:01:33.495946   26392 main.go:141] libmachine: (ha-234651-m03) DBG | domain ha-234651-m03 has defined IP address 192.168.39.139 and MAC address 52:54:00:ac:c0:cf in network mk-ha-234651
	I0731 17:01:33.496103   26392 main.go:141] libmachine: (ha-234651-m03) Calling .GetSSHPort
	I0731 17:01:33.496285   26392 main.go:141] libmachine: (ha-234651-m03) Calling .GetSSHKeyPath
	I0731 17:01:33.496459   26392 main.go:141] libmachine: (ha-234651-m03) Calling .GetSSHUsername
	I0731 17:01:33.496601   26392 sshutil.go:53] new ssh client: &{IP:192.168.39.139 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19349-8084/.minikube/machines/ha-234651-m03/id_rsa Username:docker}
	I0731 17:01:33.573930   26392 ssh_runner.go:195] Run: cat /etc/os-release
	I0731 17:01:33.578232   26392 info.go:137] Remote host: Buildroot 2023.02.9
	I0731 17:01:33.578251   26392 filesync.go:126] Scanning /home/jenkins/minikube-integration/19349-8084/.minikube/addons for local assets ...
	I0731 17:01:33.578307   26392 filesync.go:126] Scanning /home/jenkins/minikube-integration/19349-8084/.minikube/files for local assets ...
	I0731 17:01:33.578378   26392 filesync.go:149] local asset: /home/jenkins/minikube-integration/19349-8084/.minikube/files/etc/ssl/certs/152592.pem -> 152592.pem in /etc/ssl/certs
	I0731 17:01:33.578388   26392 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19349-8084/.minikube/files/etc/ssl/certs/152592.pem -> /etc/ssl/certs/152592.pem
	I0731 17:01:33.578465   26392 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0731 17:01:33.588147   26392 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19349-8084/.minikube/files/etc/ssl/certs/152592.pem --> /etc/ssl/certs/152592.pem (1708 bytes)
	I0731 17:01:33.612020   26392 start.go:296] duration metric: took 119.431633ms for postStartSetup
	I0731 17:01:33.612060   26392 main.go:141] libmachine: (ha-234651-m03) Calling .GetConfigRaw
	I0731 17:01:33.612663   26392 main.go:141] libmachine: (ha-234651-m03) Calling .GetIP
	I0731 17:01:33.615187   26392 main.go:141] libmachine: (ha-234651-m03) DBG | domain ha-234651-m03 has defined MAC address 52:54:00:ac:c0:cf in network mk-ha-234651
	I0731 17:01:33.615618   26392 main.go:141] libmachine: (ha-234651-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ac:c0:cf", ip: ""} in network mk-ha-234651: {Iface:virbr1 ExpiryTime:2024-07-31 18:01:23 +0000 UTC Type:0 Mac:52:54:00:ac:c0:cf Iaid: IPaddr:192.168.39.139 Prefix:24 Hostname:ha-234651-m03 Clientid:01:52:54:00:ac:c0:cf}
	I0731 17:01:33.615645   26392 main.go:141] libmachine: (ha-234651-m03) DBG | domain ha-234651-m03 has defined IP address 192.168.39.139 and MAC address 52:54:00:ac:c0:cf in network mk-ha-234651
	I0731 17:01:33.615888   26392 profile.go:143] Saving config to /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/ha-234651/config.json ...
	I0731 17:01:33.616086   26392 start.go:128] duration metric: took 24.780455282s to createHost
	I0731 17:01:33.616111   26392 main.go:141] libmachine: (ha-234651-m03) Calling .GetSSHHostname
	I0731 17:01:33.618128   26392 main.go:141] libmachine: (ha-234651-m03) DBG | domain ha-234651-m03 has defined MAC address 52:54:00:ac:c0:cf in network mk-ha-234651
	I0731 17:01:33.618421   26392 main.go:141] libmachine: (ha-234651-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ac:c0:cf", ip: ""} in network mk-ha-234651: {Iface:virbr1 ExpiryTime:2024-07-31 18:01:23 +0000 UTC Type:0 Mac:52:54:00:ac:c0:cf Iaid: IPaddr:192.168.39.139 Prefix:24 Hostname:ha-234651-m03 Clientid:01:52:54:00:ac:c0:cf}
	I0731 17:01:33.618448   26392 main.go:141] libmachine: (ha-234651-m03) DBG | domain ha-234651-m03 has defined IP address 192.168.39.139 and MAC address 52:54:00:ac:c0:cf in network mk-ha-234651
	I0731 17:01:33.618599   26392 main.go:141] libmachine: (ha-234651-m03) Calling .GetSSHPort
	I0731 17:01:33.618789   26392 main.go:141] libmachine: (ha-234651-m03) Calling .GetSSHKeyPath
	I0731 17:01:33.618937   26392 main.go:141] libmachine: (ha-234651-m03) Calling .GetSSHKeyPath
	I0731 17:01:33.619090   26392 main.go:141] libmachine: (ha-234651-m03) Calling .GetSSHUsername
	I0731 17:01:33.619293   26392 main.go:141] libmachine: Using SSH client type: native
	I0731 17:01:33.619448   26392 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.139 22 <nil> <nil>}
	I0731 17:01:33.619457   26392 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0731 17:01:33.720061   26392 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722445293.697216051
	
	I0731 17:01:33.720083   26392 fix.go:216] guest clock: 1722445293.697216051
	I0731 17:01:33.720090   26392 fix.go:229] Guest: 2024-07-31 17:01:33.697216051 +0000 UTC Remote: 2024-07-31 17:01:33.616097561 +0000 UTC m=+151.562332489 (delta=81.11849ms)
	I0731 17:01:33.720109   26392 fix.go:200] guest clock delta is within tolerance: 81.11849ms
	I0731 17:01:33.720116   26392 start.go:83] releasing machines lock for "ha-234651-m03", held for 24.884635596s
	I0731 17:01:33.720138   26392 main.go:141] libmachine: (ha-234651-m03) Calling .DriverName
	I0731 17:01:33.720399   26392 main.go:141] libmachine: (ha-234651-m03) Calling .GetIP
	I0731 17:01:33.723510   26392 main.go:141] libmachine: (ha-234651-m03) DBG | domain ha-234651-m03 has defined MAC address 52:54:00:ac:c0:cf in network mk-ha-234651
	I0731 17:01:33.723912   26392 main.go:141] libmachine: (ha-234651-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ac:c0:cf", ip: ""} in network mk-ha-234651: {Iface:virbr1 ExpiryTime:2024-07-31 18:01:23 +0000 UTC Type:0 Mac:52:54:00:ac:c0:cf Iaid: IPaddr:192.168.39.139 Prefix:24 Hostname:ha-234651-m03 Clientid:01:52:54:00:ac:c0:cf}
	I0731 17:01:33.723968   26392 main.go:141] libmachine: (ha-234651-m03) DBG | domain ha-234651-m03 has defined IP address 192.168.39.139 and MAC address 52:54:00:ac:c0:cf in network mk-ha-234651
	I0731 17:01:33.726040   26392 out.go:177] * Found network options:
	I0731 17:01:33.727394   26392 out.go:177]   - NO_PROXY=192.168.39.243,192.168.39.235
	W0731 17:01:33.728619   26392 proxy.go:119] fail to check proxy env: Error ip not in block
	W0731 17:01:33.728640   26392 proxy.go:119] fail to check proxy env: Error ip not in block
	I0731 17:01:33.728653   26392 main.go:141] libmachine: (ha-234651-m03) Calling .DriverName
	I0731 17:01:33.729098   26392 main.go:141] libmachine: (ha-234651-m03) Calling .DriverName
	I0731 17:01:33.729272   26392 main.go:141] libmachine: (ha-234651-m03) Calling .DriverName
	I0731 17:01:33.729400   26392 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0731 17:01:33.729441   26392 main.go:141] libmachine: (ha-234651-m03) Calling .GetSSHHostname
	W0731 17:01:33.729485   26392 proxy.go:119] fail to check proxy env: Error ip not in block
	W0731 17:01:33.729506   26392 proxy.go:119] fail to check proxy env: Error ip not in block
	I0731 17:01:33.729572   26392 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0731 17:01:33.729595   26392 main.go:141] libmachine: (ha-234651-m03) Calling .GetSSHHostname
	I0731 17:01:33.732354   26392 main.go:141] libmachine: (ha-234651-m03) DBG | domain ha-234651-m03 has defined MAC address 52:54:00:ac:c0:cf in network mk-ha-234651
	I0731 17:01:33.732559   26392 main.go:141] libmachine: (ha-234651-m03) DBG | domain ha-234651-m03 has defined MAC address 52:54:00:ac:c0:cf in network mk-ha-234651
	I0731 17:01:33.732817   26392 main.go:141] libmachine: (ha-234651-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ac:c0:cf", ip: ""} in network mk-ha-234651: {Iface:virbr1 ExpiryTime:2024-07-31 18:01:23 +0000 UTC Type:0 Mac:52:54:00:ac:c0:cf Iaid: IPaddr:192.168.39.139 Prefix:24 Hostname:ha-234651-m03 Clientid:01:52:54:00:ac:c0:cf}
	I0731 17:01:33.732844   26392 main.go:141] libmachine: (ha-234651-m03) DBG | domain ha-234651-m03 has defined IP address 192.168.39.139 and MAC address 52:54:00:ac:c0:cf in network mk-ha-234651
	I0731 17:01:33.732918   26392 main.go:141] libmachine: (ha-234651-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ac:c0:cf", ip: ""} in network mk-ha-234651: {Iface:virbr1 ExpiryTime:2024-07-31 18:01:23 +0000 UTC Type:0 Mac:52:54:00:ac:c0:cf Iaid: IPaddr:192.168.39.139 Prefix:24 Hostname:ha-234651-m03 Clientid:01:52:54:00:ac:c0:cf}
	I0731 17:01:33.732938   26392 main.go:141] libmachine: (ha-234651-m03) DBG | domain ha-234651-m03 has defined IP address 192.168.39.139 and MAC address 52:54:00:ac:c0:cf in network mk-ha-234651
	I0731 17:01:33.732952   26392 main.go:141] libmachine: (ha-234651-m03) Calling .GetSSHPort
	I0731 17:01:33.733108   26392 main.go:141] libmachine: (ha-234651-m03) Calling .GetSSHKeyPath
	I0731 17:01:33.733174   26392 main.go:141] libmachine: (ha-234651-m03) Calling .GetSSHPort
	I0731 17:01:33.733282   26392 main.go:141] libmachine: (ha-234651-m03) Calling .GetSSHUsername
	I0731 17:01:33.733329   26392 main.go:141] libmachine: (ha-234651-m03) Calling .GetSSHKeyPath
	I0731 17:01:33.733414   26392 sshutil.go:53] new ssh client: &{IP:192.168.39.139 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19349-8084/.minikube/machines/ha-234651-m03/id_rsa Username:docker}
	I0731 17:01:33.733471   26392 main.go:141] libmachine: (ha-234651-m03) Calling .GetSSHUsername
	I0731 17:01:33.733612   26392 sshutil.go:53] new ssh client: &{IP:192.168.39.139 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19349-8084/.minikube/machines/ha-234651-m03/id_rsa Username:docker}
	I0731 17:01:33.971250   26392 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0731 17:01:33.977009   26392 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0731 17:01:33.977080   26392 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0731 17:01:33.995986   26392 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0731 17:01:33.996010   26392 start.go:495] detecting cgroup driver to use...
	I0731 17:01:33.996073   26392 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0731 17:01:34.012560   26392 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0731 17:01:34.026037   26392 docker.go:217] disabling cri-docker service (if available) ...
	I0731 17:01:34.026090   26392 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0731 17:01:34.039868   26392 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0731 17:01:34.052763   26392 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0731 17:01:34.162187   26392 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0731 17:01:34.303171   26392 docker.go:233] disabling docker service ...
	I0731 17:01:34.303247   26392 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0731 17:01:34.319419   26392 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0731 17:01:34.332145   26392 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0731 17:01:34.467404   26392 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0731 17:01:34.584244   26392 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0731 17:01:34.598198   26392 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0731 17:01:34.615593   26392 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0731 17:01:34.615655   26392 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 17:01:34.625100   26392 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0731 17:01:34.625150   26392 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 17:01:34.634613   26392 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 17:01:34.644216   26392 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 17:01:34.653866   26392 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0731 17:01:34.664457   26392 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 17:01:34.673728   26392 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 17:01:34.689886   26392 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 17:01:34.699411   26392 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0731 17:01:34.707870   26392 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0731 17:01:34.707921   26392 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0731 17:01:34.720271   26392 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0731 17:01:34.729278   26392 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0731 17:01:34.845550   26392 ssh_runner.go:195] Run: sudo systemctl restart crio
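Taken together, the sed edits above leave the CRI-O drop-in /etc/crio/crio.conf.d/02-crio.conf with roughly the following settings before the restart (a sketch assuming the stock file shipped in the minikube ISO; only the keys touched by the commands are shown):

	[crio.image]
	pause_image = "registry.k8s.io/pause:3.9"

	[crio.runtime]
	cgroup_manager = "cgroupfs"
	conmon_cgroup = "pod"
	default_sysctls = [
	  "net.ipv4.ip_unprivileged_port_start=0",
	]

crictl is pointed at the same runtime through the /etc/crictl.yaml written earlier (runtime-endpoint: unix:///var/run/crio/crio.sock), and crio is then restarted via systemd.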
	I0731 17:01:34.974748   26392 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0731 17:01:34.974824   26392 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0731 17:01:34.979616   26392 start.go:563] Will wait 60s for crictl version
	I0731 17:01:34.979670   26392 ssh_runner.go:195] Run: which crictl
	I0731 17:01:34.983733   26392 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0731 17:01:35.022775   26392 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0731 17:01:35.022854   26392 ssh_runner.go:195] Run: crio --version
	I0731 17:01:35.050515   26392 ssh_runner.go:195] Run: crio --version
	I0731 17:01:35.079964   26392 out.go:177] * Preparing Kubernetes v1.30.3 on CRI-O 1.29.1 ...
	I0731 17:01:35.081319   26392 out.go:177]   - env NO_PROXY=192.168.39.243
	I0731 17:01:35.082627   26392 out.go:177]   - env NO_PROXY=192.168.39.243,192.168.39.235
	I0731 17:01:35.084057   26392 main.go:141] libmachine: (ha-234651-m03) Calling .GetIP
	I0731 17:01:35.087070   26392 main.go:141] libmachine: (ha-234651-m03) DBG | domain ha-234651-m03 has defined MAC address 52:54:00:ac:c0:cf in network mk-ha-234651
	I0731 17:01:35.087418   26392 main.go:141] libmachine: (ha-234651-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ac:c0:cf", ip: ""} in network mk-ha-234651: {Iface:virbr1 ExpiryTime:2024-07-31 18:01:23 +0000 UTC Type:0 Mac:52:54:00:ac:c0:cf Iaid: IPaddr:192.168.39.139 Prefix:24 Hostname:ha-234651-m03 Clientid:01:52:54:00:ac:c0:cf}
	I0731 17:01:35.087443   26392 main.go:141] libmachine: (ha-234651-m03) DBG | domain ha-234651-m03 has defined IP address 192.168.39.139 and MAC address 52:54:00:ac:c0:cf in network mk-ha-234651
	I0731 17:01:35.087647   26392 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0731 17:01:35.091590   26392 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0731 17:01:35.102805   26392 mustload.go:65] Loading cluster: ha-234651
	I0731 17:01:35.103045   26392 config.go:182] Loaded profile config "ha-234651": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0731 17:01:35.103387   26392 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 17:01:35.103423   26392 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 17:01:35.117581   26392 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45271
	I0731 17:01:35.117916   26392 main.go:141] libmachine: () Calling .GetVersion
	I0731 17:01:35.118379   26392 main.go:141] libmachine: Using API Version  1
	I0731 17:01:35.118397   26392 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 17:01:35.118716   26392 main.go:141] libmachine: () Calling .GetMachineName
	I0731 17:01:35.118915   26392 main.go:141] libmachine: (ha-234651) Calling .GetState
	I0731 17:01:35.120442   26392 host.go:66] Checking if "ha-234651" exists ...
	I0731 17:01:35.120719   26392 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 17:01:35.120749   26392 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 17:01:35.134870   26392 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36341
	I0731 17:01:35.135283   26392 main.go:141] libmachine: () Calling .GetVersion
	I0731 17:01:35.135793   26392 main.go:141] libmachine: Using API Version  1
	I0731 17:01:35.135815   26392 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 17:01:35.136144   26392 main.go:141] libmachine: () Calling .GetMachineName
	I0731 17:01:35.136361   26392 main.go:141] libmachine: (ha-234651) Calling .DriverName
	I0731 17:01:35.136522   26392 certs.go:68] Setting up /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/ha-234651 for IP: 192.168.39.139
	I0731 17:01:35.136534   26392 certs.go:194] generating shared ca certs ...
	I0731 17:01:35.136550   26392 certs.go:226] acquiring lock for ca certs: {Name:mk49eed439085f9c30746706460ce213570d1997 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 17:01:35.136686   26392 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19349-8084/.minikube/ca.key
	I0731 17:01:35.136738   26392 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19349-8084/.minikube/proxy-client-ca.key
	I0731 17:01:35.136750   26392 certs.go:256] generating profile certs ...
	I0731 17:01:35.136841   26392 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/ha-234651/client.key
	I0731 17:01:35.136874   26392 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/ha-234651/apiserver.key.e4b58186
	I0731 17:01:35.136892   26392 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/ha-234651/apiserver.crt.e4b58186 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.243 192.168.39.235 192.168.39.139 192.168.39.254]
	I0731 17:01:35.434332   26392 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/ha-234651/apiserver.crt.e4b58186 ...
	I0731 17:01:35.434363   26392 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/ha-234651/apiserver.crt.e4b58186: {Name:mk63bbc1c92e932d3d9f00338e4ca98819c6b1ce Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 17:01:35.434529   26392 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/ha-234651/apiserver.key.e4b58186 ...
	I0731 17:01:35.434541   26392 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/ha-234651/apiserver.key.e4b58186: {Name:mk628b5feee434241ec59f12d267a78b3ae29d7f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 17:01:35.434604   26392 certs.go:381] copying /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/ha-234651/apiserver.crt.e4b58186 -> /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/ha-234651/apiserver.crt
	I0731 17:01:35.434727   26392 certs.go:385] copying /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/ha-234651/apiserver.key.e4b58186 -> /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/ha-234651/apiserver.key
	I0731 17:01:35.434844   26392 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/ha-234651/proxy-client.key
	I0731 17:01:35.434858   26392 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19349-8084/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0731 17:01:35.434870   26392 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19349-8084/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0731 17:01:35.434883   26392 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19349-8084/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0731 17:01:35.434896   26392 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19349-8084/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0731 17:01:35.434908   26392 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/ha-234651/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0731 17:01:35.434920   26392 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/ha-234651/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0731 17:01:35.434932   26392 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/ha-234651/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0731 17:01:35.434944   26392 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/ha-234651/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0731 17:01:35.434994   26392 certs.go:484] found cert: /home/jenkins/minikube-integration/19349-8084/.minikube/certs/15259.pem (1338 bytes)
	W0731 17:01:35.435020   26392 certs.go:480] ignoring /home/jenkins/minikube-integration/19349-8084/.minikube/certs/15259_empty.pem, impossibly tiny 0 bytes
	I0731 17:01:35.435029   26392 certs.go:484] found cert: /home/jenkins/minikube-integration/19349-8084/.minikube/certs/ca-key.pem (1675 bytes)
	I0731 17:01:35.435050   26392 certs.go:484] found cert: /home/jenkins/minikube-integration/19349-8084/.minikube/certs/ca.pem (1082 bytes)
	I0731 17:01:35.435071   26392 certs.go:484] found cert: /home/jenkins/minikube-integration/19349-8084/.minikube/certs/cert.pem (1123 bytes)
	I0731 17:01:35.435092   26392 certs.go:484] found cert: /home/jenkins/minikube-integration/19349-8084/.minikube/certs/key.pem (1675 bytes)
	I0731 17:01:35.435181   26392 certs.go:484] found cert: /home/jenkins/minikube-integration/19349-8084/.minikube/files/etc/ssl/certs/152592.pem (1708 bytes)
	I0731 17:01:35.435212   26392 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19349-8084/.minikube/certs/15259.pem -> /usr/share/ca-certificates/15259.pem
	I0731 17:01:35.435228   26392 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19349-8084/.minikube/files/etc/ssl/certs/152592.pem -> /usr/share/ca-certificates/152592.pem
	I0731 17:01:35.435259   26392 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19349-8084/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0731 17:01:35.435298   26392 main.go:141] libmachine: (ha-234651) Calling .GetSSHHostname
	I0731 17:01:35.438461   26392 main.go:141] libmachine: (ha-234651) DBG | domain ha-234651 has defined MAC address 52:54:00:20:60:53 in network mk-ha-234651
	I0731 17:01:35.438911   26392 main.go:141] libmachine: (ha-234651) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:60:53", ip: ""} in network mk-ha-234651: {Iface:virbr1 ExpiryTime:2024-07-31 17:59:15 +0000 UTC Type:0 Mac:52:54:00:20:60:53 Iaid: IPaddr:192.168.39.243 Prefix:24 Hostname:ha-234651 Clientid:01:52:54:00:20:60:53}
	I0731 17:01:35.438937   26392 main.go:141] libmachine: (ha-234651) DBG | domain ha-234651 has defined IP address 192.168.39.243 and MAC address 52:54:00:20:60:53 in network mk-ha-234651
	I0731 17:01:35.439097   26392 main.go:141] libmachine: (ha-234651) Calling .GetSSHPort
	I0731 17:01:35.439299   26392 main.go:141] libmachine: (ha-234651) Calling .GetSSHKeyPath
	I0731 17:01:35.439476   26392 main.go:141] libmachine: (ha-234651) Calling .GetSSHUsername
	I0731 17:01:35.439606   26392 sshutil.go:53] new ssh client: &{IP:192.168.39.243 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19349-8084/.minikube/machines/ha-234651/id_rsa Username:docker}
	I0731 17:01:35.515490   26392 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/sa.pub
	I0731 17:01:35.520246   26392 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0731 17:01:35.536155   26392 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/sa.key
	I0731 17:01:35.540352   26392 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I0731 17:01:35.552013   26392 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/front-proxy-ca.crt
	I0731 17:01:35.555859   26392 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0731 17:01:35.565975   26392 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/front-proxy-ca.key
	I0731 17:01:35.573634   26392 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I0731 17:01:35.586805   26392 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/etcd/ca.crt
	I0731 17:01:35.591216   26392 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0731 17:01:35.602718   26392 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/etcd/ca.key
	I0731 17:01:35.606501   26392 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1679 bytes)
	I0731 17:01:35.616282   26392 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19349-8084/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0731 17:01:35.642723   26392 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19349-8084/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0731 17:01:35.666443   26392 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19349-8084/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0731 17:01:35.688466   26392 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19349-8084/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0731 17:01:35.710019   26392 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/ha-234651/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1444 bytes)
	I0731 17:01:35.733173   26392 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/ha-234651/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0731 17:01:35.755946   26392 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/ha-234651/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0731 17:01:35.776967   26392 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/ha-234651/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0731 17:01:35.799541   26392 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19349-8084/.minikube/certs/15259.pem --> /usr/share/ca-certificates/15259.pem (1338 bytes)
	I0731 17:01:35.821594   26392 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19349-8084/.minikube/files/etc/ssl/certs/152592.pem --> /usr/share/ca-certificates/152592.pem (1708 bytes)
	I0731 17:01:35.844776   26392 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19349-8084/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0731 17:01:35.867076   26392 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0731 17:01:35.881780   26392 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I0731 17:01:35.897313   26392 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0731 17:01:35.912868   26392 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I0731 17:01:35.927869   26392 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0731 17:01:35.943095   26392 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1679 bytes)
	I0731 17:01:35.959604   26392 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0731 17:01:35.976150   26392 ssh_runner.go:195] Run: openssl version
	I0731 17:01:35.981558   26392 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0731 17:01:35.991616   26392 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0731 17:01:35.996103   26392 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 31 16:42 /usr/share/ca-certificates/minikubeCA.pem
	I0731 17:01:35.996165   26392 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0731 17:01:36.001855   26392 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0731 17:01:36.012434   26392 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/15259.pem && ln -fs /usr/share/ca-certificates/15259.pem /etc/ssl/certs/15259.pem"
	I0731 17:01:36.023297   26392 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/15259.pem
	I0731 17:01:36.027551   26392 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 31 16:54 /usr/share/ca-certificates/15259.pem
	I0731 17:01:36.027600   26392 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/15259.pem
	I0731 17:01:36.033476   26392 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/15259.pem /etc/ssl/certs/51391683.0"
	I0731 17:01:36.043668   26392 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/152592.pem && ln -fs /usr/share/ca-certificates/152592.pem /etc/ssl/certs/152592.pem"
	I0731 17:01:36.053694   26392 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/152592.pem
	I0731 17:01:36.057705   26392 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 31 16:54 /usr/share/ca-certificates/152592.pem
	I0731 17:01:36.057745   26392 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/152592.pem
	I0731 17:01:36.063064   26392 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/152592.pem /etc/ssl/certs/3ec20f2e.0"
	I0731 17:01:36.073104   26392 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0731 17:01:36.076675   26392 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0731 17:01:36.076729   26392 kubeadm.go:934] updating node {m03 192.168.39.139 8443 v1.30.3 crio true true} ...
	I0731 17:01:36.076821   26392 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-234651-m03 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.139
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:ha-234651 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0731 17:01:36.076849   26392 kube-vip.go:115] generating kube-vip config ...
	I0731 17:01:36.076883   26392 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0731 17:01:36.090759   26392 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0731 17:01:36.090814   26392 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I0731 17:01:36.090879   26392 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0731 17:01:36.099903   26392 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.30.3: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.30.3': No such file or directory
	
	Initiating transfer...
	I0731 17:01:36.099959   26392 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.30.3
	I0731 17:01:36.108632   26392 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubectl.sha256
	I0731 17:01:36.108656   26392 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19349-8084/.minikube/cache/linux/amd64/v1.30.3/kubectl -> /var/lib/minikube/binaries/v1.30.3/kubectl
	I0731 17:01:36.108696   26392 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubelet.sha256
	I0731 17:01:36.108704   26392 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubeadm.sha256
	I0731 17:01:36.108716   26392 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19349-8084/.minikube/cache/linux/amd64/v1.30.3/kubeadm -> /var/lib/minikube/binaries/v1.30.3/kubeadm
	I0731 17:01:36.108725   26392 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.3/kubectl
	I0731 17:01:36.108738   26392 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0731 17:01:36.108763   26392 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.3/kubeadm
	I0731 17:01:36.121976   26392 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19349-8084/.minikube/cache/linux/amd64/v1.30.3/kubelet -> /var/lib/minikube/binaries/v1.30.3/kubelet
	I0731 17:01:36.122041   26392 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.3/kubeadm: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.3/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.3/kubeadm': No such file or directory
	I0731 17:01:36.122065   26392 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.3/kubelet
	I0731 17:01:36.122070   26392 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.3/kubectl: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.3/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.3/kubectl': No such file or directory
	I0731 17:01:36.122077   26392 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19349-8084/.minikube/cache/linux/amd64/v1.30.3/kubeadm --> /var/lib/minikube/binaries/v1.30.3/kubeadm (50249880 bytes)
	I0731 17:01:36.122094   26392 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19349-8084/.minikube/cache/linux/amd64/v1.30.3/kubectl --> /var/lib/minikube/binaries/v1.30.3/kubectl (51454104 bytes)
	I0731 17:01:36.135737   26392 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.3/kubelet: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.3/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.3/kubelet': No such file or directory
	I0731 17:01:36.135796   26392 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19349-8084/.minikube/cache/linux/amd64/v1.30.3/kubelet --> /var/lib/minikube/binaries/v1.30.3/kubelet (100125080 bytes)
	I0731 17:01:37.017391   26392 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0731 17:01:37.028340   26392 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I0731 17:01:37.045738   26392 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0731 17:01:37.063014   26392 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I0731 17:01:37.079613   26392 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0731 17:01:37.083276   26392 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0731 17:01:37.096309   26392 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0731 17:01:37.223821   26392 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0731 17:01:37.243570   26392 host.go:66] Checking if "ha-234651" exists ...
	I0731 17:01:37.244096   26392 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 17:01:37.244146   26392 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 17:01:37.259795   26392 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41825
	I0731 17:01:37.260175   26392 main.go:141] libmachine: () Calling .GetVersion
	I0731 17:01:37.260691   26392 main.go:141] libmachine: Using API Version  1
	I0731 17:01:37.260709   26392 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 17:01:37.261048   26392 main.go:141] libmachine: () Calling .GetMachineName
	I0731 17:01:37.261245   26392 main.go:141] libmachine: (ha-234651) Calling .DriverName
	I0731 17:01:37.261440   26392 start.go:317] joinCluster: &{Name:ha-234651 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 Cluster
Name:ha-234651 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.243 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.235 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.139 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false
inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableO
ptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0731 17:01:37.261606   26392 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0731 17:01:37.261632   26392 main.go:141] libmachine: (ha-234651) Calling .GetSSHHostname
	I0731 17:01:37.264389   26392 main.go:141] libmachine: (ha-234651) DBG | domain ha-234651 has defined MAC address 52:54:00:20:60:53 in network mk-ha-234651
	I0731 17:01:37.264786   26392 main.go:141] libmachine: (ha-234651) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:60:53", ip: ""} in network mk-ha-234651: {Iface:virbr1 ExpiryTime:2024-07-31 17:59:15 +0000 UTC Type:0 Mac:52:54:00:20:60:53 Iaid: IPaddr:192.168.39.243 Prefix:24 Hostname:ha-234651 Clientid:01:52:54:00:20:60:53}
	I0731 17:01:37.264826   26392 main.go:141] libmachine: (ha-234651) DBG | domain ha-234651 has defined IP address 192.168.39.243 and MAC address 52:54:00:20:60:53 in network mk-ha-234651
	I0731 17:01:37.265006   26392 main.go:141] libmachine: (ha-234651) Calling .GetSSHPort
	I0731 17:01:37.265290   26392 main.go:141] libmachine: (ha-234651) Calling .GetSSHKeyPath
	I0731 17:01:37.265423   26392 main.go:141] libmachine: (ha-234651) Calling .GetSSHUsername
	I0731 17:01:37.265584   26392 sshutil.go:53] new ssh client: &{IP:192.168.39.243 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19349-8084/.minikube/machines/ha-234651/id_rsa Username:docker}
	I0731 17:01:37.432973   26392 start.go:343] trying to join control-plane node "m03" to cluster: &{Name:m03 IP:192.168.39.139 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0731 17:01:37.433036   26392 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token akz3y0.iylspc3e44qrqwx7 --discovery-token-ca-cert-hash sha256:460b24b51d10a3760f2391542a8a89e401359854f437f922cbee3f2d8ed6c856 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-234651-m03 --control-plane --apiserver-advertise-address=192.168.39.139 --apiserver-bind-port=8443"
	I0731 17:02:00.740154   26392 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token akz3y0.iylspc3e44qrqwx7 --discovery-token-ca-cert-hash sha256:460b24b51d10a3760f2391542a8a89e401359854f437f922cbee3f2d8ed6c856 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-234651-m03 --control-plane --apiserver-advertise-address=192.168.39.139 --apiserver-bind-port=8443": (23.307093126s)
	I0731 17:02:00.740192   26392 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0731 17:02:01.237200   26392 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-234651-m03 minikube.k8s.io/updated_at=2024_07_31T17_02_01_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=1d737dad7efa60c56d30434fcd857dd3b14c91d9 minikube.k8s.io/name=ha-234651 minikube.k8s.io/primary=false
	I0731 17:02:01.355371   26392 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-234651-m03 node-role.kubernetes.io/control-plane:NoSchedule-
	I0731 17:02:01.468704   26392 start.go:319] duration metric: took 24.207260736s to joinCluster
	I0731 17:02:01.468786   26392 start.go:235] Will wait 6m0s for node &{Name:m03 IP:192.168.39.139 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0731 17:02:01.469119   26392 config.go:182] Loaded profile config "ha-234651": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0731 17:02:01.470089   26392 out.go:177] * Verifying Kubernetes components...
	I0731 17:02:01.471212   26392 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0731 17:02:01.743573   26392 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0731 17:02:01.768397   26392 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19349-8084/kubeconfig
	I0731 17:02:01.770559   26392 kapi.go:59] client config for ha-234651: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19349-8084/.minikube/profiles/ha-234651/client.crt", KeyFile:"/home/jenkins/minikube-integration/19349-8084/.minikube/profiles/ha-234651/client.key", CAFile:"/home/jenkins/minikube-integration/19349-8084/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)},
UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1d02f40), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0731 17:02:01.770697   26392 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.243:8443
	I0731 17:02:01.771427   26392 node_ready.go:35] waiting up to 6m0s for node "ha-234651-m03" to be "Ready" ...
	I0731 17:02:01.771524   26392 round_trippers.go:463] GET https://192.168.39.243:8443/api/v1/nodes/ha-234651-m03
	I0731 17:02:01.771538   26392 round_trippers.go:469] Request Headers:
	I0731 17:02:01.771549   26392 round_trippers.go:473]     Accept: application/json, */*
	I0731 17:02:01.771556   26392 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 17:02:01.774783   26392 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 17:02:02.272487   26392 round_trippers.go:463] GET https://192.168.39.243:8443/api/v1/nodes/ha-234651-m03
	I0731 17:02:02.272511   26392 round_trippers.go:469] Request Headers:
	I0731 17:02:02.272522   26392 round_trippers.go:473]     Accept: application/json, */*
	I0731 17:02:02.272527   26392 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 17:02:02.298164   26392 round_trippers.go:574] Response Status: 200 OK in 25 milliseconds
	I0731 17:02:02.772644   26392 round_trippers.go:463] GET https://192.168.39.243:8443/api/v1/nodes/ha-234651-m03
	I0731 17:02:02.772668   26392 round_trippers.go:469] Request Headers:
	I0731 17:02:02.772676   26392 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 17:02:02.772681   26392 round_trippers.go:473]     Accept: application/json, */*
	I0731 17:02:02.776157   26392 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 17:02:03.272522   26392 round_trippers.go:463] GET https://192.168.39.243:8443/api/v1/nodes/ha-234651-m03
	I0731 17:02:03.272545   26392 round_trippers.go:469] Request Headers:
	I0731 17:02:03.272556   26392 round_trippers.go:473]     Accept: application/json, */*
	I0731 17:02:03.272563   26392 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 17:02:03.276432   26392 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 17:02:03.772332   26392 round_trippers.go:463] GET https://192.168.39.243:8443/api/v1/nodes/ha-234651-m03
	I0731 17:02:03.772353   26392 round_trippers.go:469] Request Headers:
	I0731 17:02:03.772363   26392 round_trippers.go:473]     Accept: application/json, */*
	I0731 17:02:03.772369   26392 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 17:02:03.775129   26392 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0731 17:02:03.776042   26392 node_ready.go:53] node "ha-234651-m03" has status "Ready":"False"
	I0731 17:02:04.271745   26392 round_trippers.go:463] GET https://192.168.39.243:8443/api/v1/nodes/ha-234651-m03
	I0731 17:02:04.271763   26392 round_trippers.go:469] Request Headers:
	I0731 17:02:04.271771   26392 round_trippers.go:473]     Accept: application/json, */*
	I0731 17:02:04.271775   26392 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 17:02:04.275853   26392 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0731 17:02:04.772023   26392 round_trippers.go:463] GET https://192.168.39.243:8443/api/v1/nodes/ha-234651-m03
	I0731 17:02:04.772042   26392 round_trippers.go:469] Request Headers:
	I0731 17:02:04.772050   26392 round_trippers.go:473]     Accept: application/json, */*
	I0731 17:02:04.772053   26392 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 17:02:04.775235   26392 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 17:02:05.271694   26392 round_trippers.go:463] GET https://192.168.39.243:8443/api/v1/nodes/ha-234651-m03
	I0731 17:02:05.271713   26392 round_trippers.go:469] Request Headers:
	I0731 17:02:05.271721   26392 round_trippers.go:473]     Accept: application/json, */*
	I0731 17:02:05.271725   26392 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 17:02:05.276039   26392 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0731 17:02:05.771697   26392 round_trippers.go:463] GET https://192.168.39.243:8443/api/v1/nodes/ha-234651-m03
	I0731 17:02:05.771765   26392 round_trippers.go:469] Request Headers:
	I0731 17:02:05.771789   26392 round_trippers.go:473]     Accept: application/json, */*
	I0731 17:02:05.771801   26392 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 17:02:05.776249   26392 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0731 17:02:05.777069   26392 node_ready.go:53] node "ha-234651-m03" has status "Ready":"False"
	I0731 17:02:06.272459   26392 round_trippers.go:463] GET https://192.168.39.243:8443/api/v1/nodes/ha-234651-m03
	I0731 17:02:06.272478   26392 round_trippers.go:469] Request Headers:
	I0731 17:02:06.272486   26392 round_trippers.go:473]     Accept: application/json, */*
	I0731 17:02:06.272491   26392 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 17:02:06.275595   26392 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 17:02:06.772431   26392 round_trippers.go:463] GET https://192.168.39.243:8443/api/v1/nodes/ha-234651-m03
	I0731 17:02:06.772460   26392 round_trippers.go:469] Request Headers:
	I0731 17:02:06.772468   26392 round_trippers.go:473]     Accept: application/json, */*
	I0731 17:02:06.772472   26392 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 17:02:06.776868   26392 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0731 17:02:07.271856   26392 round_trippers.go:463] GET https://192.168.39.243:8443/api/v1/nodes/ha-234651-m03
	I0731 17:02:07.271874   26392 round_trippers.go:469] Request Headers:
	I0731 17:02:07.271882   26392 round_trippers.go:473]     Accept: application/json, */*
	I0731 17:02:07.271886   26392 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 17:02:07.276287   26392 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0731 17:02:07.771871   26392 round_trippers.go:463] GET https://192.168.39.243:8443/api/v1/nodes/ha-234651-m03
	I0731 17:02:07.771891   26392 round_trippers.go:469] Request Headers:
	I0731 17:02:07.771898   26392 round_trippers.go:473]     Accept: application/json, */*
	I0731 17:02:07.771902   26392 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 17:02:07.775696   26392 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 17:02:08.272429   26392 round_trippers.go:463] GET https://192.168.39.243:8443/api/v1/nodes/ha-234651-m03
	I0731 17:02:08.272455   26392 round_trippers.go:469] Request Headers:
	I0731 17:02:08.272464   26392 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 17:02:08.272472   26392 round_trippers.go:473]     Accept: application/json, */*
	I0731 17:02:08.276105   26392 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 17:02:08.276622   26392 node_ready.go:53] node "ha-234651-m03" has status "Ready":"False"
	I0731 17:02:08.771986   26392 round_trippers.go:463] GET https://192.168.39.243:8443/api/v1/nodes/ha-234651-m03
	I0731 17:02:08.772012   26392 round_trippers.go:469] Request Headers:
	I0731 17:02:08.772019   26392 round_trippers.go:473]     Accept: application/json, */*
	I0731 17:02:08.772023   26392 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 17:02:08.775477   26392 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 17:02:09.271760   26392 round_trippers.go:463] GET https://192.168.39.243:8443/api/v1/nodes/ha-234651-m03
	I0731 17:02:09.271779   26392 round_trippers.go:469] Request Headers:
	I0731 17:02:09.271788   26392 round_trippers.go:473]     Accept: application/json, */*
	I0731 17:02:09.271794   26392 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 17:02:09.275193   26392 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 17:02:09.772349   26392 round_trippers.go:463] GET https://192.168.39.243:8443/api/v1/nodes/ha-234651-m03
	I0731 17:02:09.772367   26392 round_trippers.go:469] Request Headers:
	I0731 17:02:09.772375   26392 round_trippers.go:473]     Accept: application/json, */*
	I0731 17:02:09.772380   26392 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 17:02:09.775458   26392 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 17:02:10.271949   26392 round_trippers.go:463] GET https://192.168.39.243:8443/api/v1/nodes/ha-234651-m03
	I0731 17:02:10.271969   26392 round_trippers.go:469] Request Headers:
	I0731 17:02:10.271976   26392 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 17:02:10.271980   26392 round_trippers.go:473]     Accept: application/json, */*
	I0731 17:02:10.274905   26392 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0731 17:02:10.771953   26392 round_trippers.go:463] GET https://192.168.39.243:8443/api/v1/nodes/ha-234651-m03
	I0731 17:02:10.771973   26392 round_trippers.go:469] Request Headers:
	I0731 17:02:10.771981   26392 round_trippers.go:473]     Accept: application/json, */*
	I0731 17:02:10.771995   26392 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 17:02:10.775196   26392 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 17:02:10.775736   26392 node_ready.go:53] node "ha-234651-m03" has status "Ready":"False"
	I0731 17:02:11.272041   26392 round_trippers.go:463] GET https://192.168.39.243:8443/api/v1/nodes/ha-234651-m03
	I0731 17:02:11.272066   26392 round_trippers.go:469] Request Headers:
	I0731 17:02:11.272077   26392 round_trippers.go:473]     Accept: application/json, */*
	I0731 17:02:11.272081   26392 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 17:02:11.275503   26392 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 17:02:11.772324   26392 round_trippers.go:463] GET https://192.168.39.243:8443/api/v1/nodes/ha-234651-m03
	I0731 17:02:11.772342   26392 round_trippers.go:469] Request Headers:
	I0731 17:02:11.772349   26392 round_trippers.go:473]     Accept: application/json, */*
	I0731 17:02:11.772353   26392 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 17:02:11.775273   26392 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0731 17:02:12.272398   26392 round_trippers.go:463] GET https://192.168.39.243:8443/api/v1/nodes/ha-234651-m03
	I0731 17:02:12.272418   26392 round_trippers.go:469] Request Headers:
	I0731 17:02:12.272429   26392 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 17:02:12.272436   26392 round_trippers.go:473]     Accept: application/json, */*
	I0731 17:02:12.275897   26392 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 17:02:12.772652   26392 round_trippers.go:463] GET https://192.168.39.243:8443/api/v1/nodes/ha-234651-m03
	I0731 17:02:12.772672   26392 round_trippers.go:469] Request Headers:
	I0731 17:02:12.772680   26392 round_trippers.go:473]     Accept: application/json, */*
	I0731 17:02:12.772683   26392 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 17:02:12.776201   26392 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 17:02:12.776820   26392 node_ready.go:53] node "ha-234651-m03" has status "Ready":"False"
	I0731 17:02:13.272113   26392 round_trippers.go:463] GET https://192.168.39.243:8443/api/v1/nodes/ha-234651-m03
	I0731 17:02:13.272137   26392 round_trippers.go:469] Request Headers:
	I0731 17:02:13.272148   26392 round_trippers.go:473]     Accept: application/json, */*
	I0731 17:02:13.272153   26392 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 17:02:13.275213   26392 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 17:02:13.772656   26392 round_trippers.go:463] GET https://192.168.39.243:8443/api/v1/nodes/ha-234651-m03
	I0731 17:02:13.772677   26392 round_trippers.go:469] Request Headers:
	I0731 17:02:13.772686   26392 round_trippers.go:473]     Accept: application/json, */*
	I0731 17:02:13.772689   26392 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 17:02:13.776164   26392 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 17:02:14.271954   26392 round_trippers.go:463] GET https://192.168.39.243:8443/api/v1/nodes/ha-234651-m03
	I0731 17:02:14.271974   26392 round_trippers.go:469] Request Headers:
	I0731 17:02:14.271982   26392 round_trippers.go:473]     Accept: application/json, */*
	I0731 17:02:14.271986   26392 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 17:02:14.276462   26392 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0731 17:02:14.772649   26392 round_trippers.go:463] GET https://192.168.39.243:8443/api/v1/nodes/ha-234651-m03
	I0731 17:02:14.772671   26392 round_trippers.go:469] Request Headers:
	I0731 17:02:14.772679   26392 round_trippers.go:473]     Accept: application/json, */*
	I0731 17:02:14.772682   26392 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 17:02:14.775952   26392 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 17:02:15.272585   26392 round_trippers.go:463] GET https://192.168.39.243:8443/api/v1/nodes/ha-234651-m03
	I0731 17:02:15.272605   26392 round_trippers.go:469] Request Headers:
	I0731 17:02:15.272615   26392 round_trippers.go:473]     Accept: application/json, */*
	I0731 17:02:15.272623   26392 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 17:02:15.275473   26392 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0731 17:02:15.276097   26392 node_ready.go:53] node "ha-234651-m03" has status "Ready":"False"
	I0731 17:02:15.772352   26392 round_trippers.go:463] GET https://192.168.39.243:8443/api/v1/nodes/ha-234651-m03
	I0731 17:02:15.772372   26392 round_trippers.go:469] Request Headers:
	I0731 17:02:15.772380   26392 round_trippers.go:473]     Accept: application/json, */*
	I0731 17:02:15.772384   26392 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 17:02:15.776568   26392 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0731 17:02:16.272347   26392 round_trippers.go:463] GET https://192.168.39.243:8443/api/v1/nodes/ha-234651-m03
	I0731 17:02:16.272368   26392 round_trippers.go:469] Request Headers:
	I0731 17:02:16.272376   26392 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 17:02:16.272382   26392 round_trippers.go:473]     Accept: application/json, */*
	I0731 17:02:16.275673   26392 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 17:02:16.772371   26392 round_trippers.go:463] GET https://192.168.39.243:8443/api/v1/nodes/ha-234651-m03
	I0731 17:02:16.772396   26392 round_trippers.go:469] Request Headers:
	I0731 17:02:16.772406   26392 round_trippers.go:473]     Accept: application/json, */*
	I0731 17:02:16.772412   26392 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 17:02:16.778215   26392 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0731 17:02:17.271608   26392 round_trippers.go:463] GET https://192.168.39.243:8443/api/v1/nodes/ha-234651-m03
	I0731 17:02:17.271632   26392 round_trippers.go:469] Request Headers:
	I0731 17:02:17.271642   26392 round_trippers.go:473]     Accept: application/json, */*
	I0731 17:02:17.271646   26392 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 17:02:17.275386   26392 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 17:02:17.771869   26392 round_trippers.go:463] GET https://192.168.39.243:8443/api/v1/nodes/ha-234651-m03
	I0731 17:02:17.771889   26392 round_trippers.go:469] Request Headers:
	I0731 17:02:17.771897   26392 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 17:02:17.771903   26392 round_trippers.go:473]     Accept: application/json, */*
	I0731 17:02:17.775526   26392 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 17:02:17.776021   26392 node_ready.go:53] node "ha-234651-m03" has status "Ready":"False"
	I0731 17:02:18.272359   26392 round_trippers.go:463] GET https://192.168.39.243:8443/api/v1/nodes/ha-234651-m03
	I0731 17:02:18.272379   26392 round_trippers.go:469] Request Headers:
	I0731 17:02:18.272390   26392 round_trippers.go:473]     Accept: application/json, */*
	I0731 17:02:18.272396   26392 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 17:02:18.276177   26392 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 17:02:18.772291   26392 round_trippers.go:463] GET https://192.168.39.243:8443/api/v1/nodes/ha-234651-m03
	I0731 17:02:18.772313   26392 round_trippers.go:469] Request Headers:
	I0731 17:02:18.772324   26392 round_trippers.go:473]     Accept: application/json, */*
	I0731 17:02:18.772330   26392 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 17:02:18.775648   26392 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 17:02:19.272391   26392 round_trippers.go:463] GET https://192.168.39.243:8443/api/v1/nodes/ha-234651-m03
	I0731 17:02:19.272412   26392 round_trippers.go:469] Request Headers:
	I0731 17:02:19.272419   26392 round_trippers.go:473]     Accept: application/json, */*
	I0731 17:02:19.272424   26392 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 17:02:19.276147   26392 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 17:02:19.771808   26392 round_trippers.go:463] GET https://192.168.39.243:8443/api/v1/nodes/ha-234651-m03
	I0731 17:02:19.771831   26392 round_trippers.go:469] Request Headers:
	I0731 17:02:19.771841   26392 round_trippers.go:473]     Accept: application/json, */*
	I0731 17:02:19.771846   26392 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 17:02:19.776156   26392 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0731 17:02:19.777038   26392 node_ready.go:49] node "ha-234651-m03" has status "Ready":"True"
	I0731 17:02:19.777054   26392 node_ready.go:38] duration metric: took 18.005601556s for node "ha-234651-m03" to be "Ready" ...
	I0731 17:02:19.777062   26392 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0731 17:02:19.777111   26392 round_trippers.go:463] GET https://192.168.39.243:8443/api/v1/namespaces/kube-system/pods
	I0731 17:02:19.777120   26392 round_trippers.go:469] Request Headers:
	I0731 17:02:19.777127   26392 round_trippers.go:473]     Accept: application/json, */*
	I0731 17:02:19.777132   26392 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 17:02:19.785605   26392 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0731 17:02:19.791637   26392 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-nsx9j" in "kube-system" namespace to be "Ready" ...
	I0731 17:02:19.791706   26392 round_trippers.go:463] GET https://192.168.39.243:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-nsx9j
	I0731 17:02:19.791713   26392 round_trippers.go:469] Request Headers:
	I0731 17:02:19.791721   26392 round_trippers.go:473]     Accept: application/json, */*
	I0731 17:02:19.791725   26392 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 17:02:19.799668   26392 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0731 17:02:19.800231   26392 round_trippers.go:463] GET https://192.168.39.243:8443/api/v1/nodes/ha-234651
	I0731 17:02:19.800258   26392 round_trippers.go:469] Request Headers:
	I0731 17:02:19.800267   26392 round_trippers.go:473]     Accept: application/json, */*
	I0731 17:02:19.800272   26392 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 17:02:19.804395   26392 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0731 17:02:19.805012   26392 pod_ready.go:92] pod "coredns-7db6d8ff4d-nsx9j" in "kube-system" namespace has status "Ready":"True"
	I0731 17:02:19.805027   26392 pod_ready.go:81] duration metric: took 13.369272ms for pod "coredns-7db6d8ff4d-nsx9j" in "kube-system" namespace to be "Ready" ...
	I0731 17:02:19.805035   26392 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-qbqb9" in "kube-system" namespace to be "Ready" ...
	I0731 17:02:19.805079   26392 round_trippers.go:463] GET https://192.168.39.243:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-qbqb9
	I0731 17:02:19.805083   26392 round_trippers.go:469] Request Headers:
	I0731 17:02:19.805090   26392 round_trippers.go:473]     Accept: application/json, */*
	I0731 17:02:19.805094   26392 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 17:02:19.807408   26392 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0731 17:02:19.807984   26392 round_trippers.go:463] GET https://192.168.39.243:8443/api/v1/nodes/ha-234651
	I0731 17:02:19.808002   26392 round_trippers.go:469] Request Headers:
	I0731 17:02:19.808011   26392 round_trippers.go:473]     Accept: application/json, */*
	I0731 17:02:19.808017   26392 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 17:02:19.810203   26392 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0731 17:02:19.810909   26392 pod_ready.go:92] pod "coredns-7db6d8ff4d-qbqb9" in "kube-system" namespace has status "Ready":"True"
	I0731 17:02:19.810945   26392 pod_ready.go:81] duration metric: took 5.894773ms for pod "coredns-7db6d8ff4d-qbqb9" in "kube-system" namespace to be "Ready" ...
	I0731 17:02:19.810956   26392 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-234651" in "kube-system" namespace to be "Ready" ...
	I0731 17:02:19.811015   26392 round_trippers.go:463] GET https://192.168.39.243:8443/api/v1/namespaces/kube-system/pods/etcd-ha-234651
	I0731 17:02:19.811024   26392 round_trippers.go:469] Request Headers:
	I0731 17:02:19.811034   26392 round_trippers.go:473]     Accept: application/json, */*
	I0731 17:02:19.811041   26392 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 17:02:19.813120   26392 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0731 17:02:19.813674   26392 round_trippers.go:463] GET https://192.168.39.243:8443/api/v1/nodes/ha-234651
	I0731 17:02:19.813689   26392 round_trippers.go:469] Request Headers:
	I0731 17:02:19.813699   26392 round_trippers.go:473]     Accept: application/json, */*
	I0731 17:02:19.813703   26392 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 17:02:19.816056   26392 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0731 17:02:19.816820   26392 pod_ready.go:92] pod "etcd-ha-234651" in "kube-system" namespace has status "Ready":"True"
	I0731 17:02:19.816836   26392 pod_ready.go:81] duration metric: took 5.87247ms for pod "etcd-ha-234651" in "kube-system" namespace to be "Ready" ...
	I0731 17:02:19.816848   26392 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-234651-m02" in "kube-system" namespace to be "Ready" ...
	I0731 17:02:19.816899   26392 round_trippers.go:463] GET https://192.168.39.243:8443/api/v1/namespaces/kube-system/pods/etcd-ha-234651-m02
	I0731 17:02:19.816909   26392 round_trippers.go:469] Request Headers:
	I0731 17:02:19.816918   26392 round_trippers.go:473]     Accept: application/json, */*
	I0731 17:02:19.816923   26392 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 17:02:19.819104   26392 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0731 17:02:19.819735   26392 round_trippers.go:463] GET https://192.168.39.243:8443/api/v1/nodes/ha-234651-m02
	I0731 17:02:19.819752   26392 round_trippers.go:469] Request Headers:
	I0731 17:02:19.819761   26392 round_trippers.go:473]     Accept: application/json, */*
	I0731 17:02:19.819769   26392 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 17:02:19.822242   26392 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0731 17:02:19.822848   26392 pod_ready.go:92] pod "etcd-ha-234651-m02" in "kube-system" namespace has status "Ready":"True"
	I0731 17:02:19.822865   26392 pod_ready.go:81] duration metric: took 6.010187ms for pod "etcd-ha-234651-m02" in "kube-system" namespace to be "Ready" ...
	I0731 17:02:19.822876   26392 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-234651-m03" in "kube-system" namespace to be "Ready" ...
	I0731 17:02:19.972293   26392 request.go:629] Waited for 149.323624ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.243:8443/api/v1/namespaces/kube-system/pods/etcd-ha-234651-m03
	I0731 17:02:19.972363   26392 round_trippers.go:463] GET https://192.168.39.243:8443/api/v1/namespaces/kube-system/pods/etcd-ha-234651-m03
	I0731 17:02:19.972376   26392 round_trippers.go:469] Request Headers:
	I0731 17:02:19.972394   26392 round_trippers.go:473]     Accept: application/json, */*
	I0731 17:02:19.972404   26392 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 17:02:19.975294   26392 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0731 17:02:20.172524   26392 request.go:629] Waited for 196.621388ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.243:8443/api/v1/nodes/ha-234651-m03
	I0731 17:02:20.172605   26392 round_trippers.go:463] GET https://192.168.39.243:8443/api/v1/nodes/ha-234651-m03
	I0731 17:02:20.172612   26392 round_trippers.go:469] Request Headers:
	I0731 17:02:20.172624   26392 round_trippers.go:473]     Accept: application/json, */*
	I0731 17:02:20.172634   26392 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 17:02:20.175922   26392 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 17:02:20.176609   26392 pod_ready.go:92] pod "etcd-ha-234651-m03" in "kube-system" namespace has status "Ready":"True"
	I0731 17:02:20.176627   26392 pod_ready.go:81] duration metric: took 353.744367ms for pod "etcd-ha-234651-m03" in "kube-system" namespace to be "Ready" ...
	I0731 17:02:20.176643   26392 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-234651" in "kube-system" namespace to be "Ready" ...
	I0731 17:02:20.372771   26392 request.go:629] Waited for 196.067367ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.243:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-234651
	I0731 17:02:20.372850   26392 round_trippers.go:463] GET https://192.168.39.243:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-234651
	I0731 17:02:20.372861   26392 round_trippers.go:469] Request Headers:
	I0731 17:02:20.372871   26392 round_trippers.go:473]     Accept: application/json, */*
	I0731 17:02:20.372882   26392 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 17:02:20.375908   26392 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 17:02:20.571830   26392 request.go:629] Waited for 195.285012ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.243:8443/api/v1/nodes/ha-234651
	I0731 17:02:20.571882   26392 round_trippers.go:463] GET https://192.168.39.243:8443/api/v1/nodes/ha-234651
	I0731 17:02:20.571887   26392 round_trippers.go:469] Request Headers:
	I0731 17:02:20.571892   26392 round_trippers.go:473]     Accept: application/json, */*
	I0731 17:02:20.571896   26392 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 17:02:20.574825   26392 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0731 17:02:20.575459   26392 pod_ready.go:92] pod "kube-apiserver-ha-234651" in "kube-system" namespace has status "Ready":"True"
	I0731 17:02:20.575480   26392 pod_ready.go:81] duration metric: took 398.829513ms for pod "kube-apiserver-ha-234651" in "kube-system" namespace to be "Ready" ...
	I0731 17:02:20.575494   26392 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-234651-m02" in "kube-system" namespace to be "Ready" ...
	I0731 17:02:20.772409   26392 request.go:629] Waited for 196.847445ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.243:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-234651-m02
	I0731 17:02:20.772470   26392 round_trippers.go:463] GET https://192.168.39.243:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-234651-m02
	I0731 17:02:20.772478   26392 round_trippers.go:469] Request Headers:
	I0731 17:02:20.772489   26392 round_trippers.go:473]     Accept: application/json, */*
	I0731 17:02:20.772532   26392 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 17:02:20.777048   26392 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0731 17:02:20.972000   26392 request.go:629] Waited for 194.290806ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.243:8443/api/v1/nodes/ha-234651-m02
	I0731 17:02:20.972050   26392 round_trippers.go:463] GET https://192.168.39.243:8443/api/v1/nodes/ha-234651-m02
	I0731 17:02:20.972055   26392 round_trippers.go:469] Request Headers:
	I0731 17:02:20.972070   26392 round_trippers.go:473]     Accept: application/json, */*
	I0731 17:02:20.972085   26392 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 17:02:20.974976   26392 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0731 17:02:20.975498   26392 pod_ready.go:92] pod "kube-apiserver-ha-234651-m02" in "kube-system" namespace has status "Ready":"True"
	I0731 17:02:20.975519   26392 pod_ready.go:81] duration metric: took 400.017342ms for pod "kube-apiserver-ha-234651-m02" in "kube-system" namespace to be "Ready" ...
	I0731 17:02:20.975531   26392 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-234651-m03" in "kube-system" namespace to be "Ready" ...
	I0731 17:02:21.172594   26392 request.go:629] Waited for 196.98829ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.243:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-234651-m03
	I0731 17:02:21.172646   26392 round_trippers.go:463] GET https://192.168.39.243:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-234651-m03
	I0731 17:02:21.172651   26392 round_trippers.go:469] Request Headers:
	I0731 17:02:21.172657   26392 round_trippers.go:473]     Accept: application/json, */*
	I0731 17:02:21.172661   26392 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 17:02:21.175522   26392 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0731 17:02:21.372553   26392 request.go:629] Waited for 196.351874ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.243:8443/api/v1/nodes/ha-234651-m03
	I0731 17:02:21.372614   26392 round_trippers.go:463] GET https://192.168.39.243:8443/api/v1/nodes/ha-234651-m03
	I0731 17:02:21.372621   26392 round_trippers.go:469] Request Headers:
	I0731 17:02:21.372632   26392 round_trippers.go:473]     Accept: application/json, */*
	I0731 17:02:21.372638   26392 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 17:02:21.375964   26392 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 17:02:21.376668   26392 pod_ready.go:92] pod "kube-apiserver-ha-234651-m03" in "kube-system" namespace has status "Ready":"True"
	I0731 17:02:21.376688   26392 pod_ready.go:81] duration metric: took 401.149162ms for pod "kube-apiserver-ha-234651-m03" in "kube-system" namespace to be "Ready" ...
	I0731 17:02:21.376700   26392 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-234651" in "kube-system" namespace to be "Ready" ...
	I0731 17:02:21.572668   26392 request.go:629] Waited for 195.906455ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.243:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-234651
	I0731 17:02:21.572720   26392 round_trippers.go:463] GET https://192.168.39.243:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-234651
	I0731 17:02:21.572726   26392 round_trippers.go:469] Request Headers:
	I0731 17:02:21.572736   26392 round_trippers.go:473]     Accept: application/json, */*
	I0731 17:02:21.572746   26392 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 17:02:21.576257   26392 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 17:02:21.772283   26392 request.go:629] Waited for 195.256579ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.243:8443/api/v1/nodes/ha-234651
	I0731 17:02:21.772334   26392 round_trippers.go:463] GET https://192.168.39.243:8443/api/v1/nodes/ha-234651
	I0731 17:02:21.772339   26392 round_trippers.go:469] Request Headers:
	I0731 17:02:21.772346   26392 round_trippers.go:473]     Accept: application/json, */*
	I0731 17:02:21.772353   26392 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 17:02:21.775136   26392 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0731 17:02:21.775626   26392 pod_ready.go:92] pod "kube-controller-manager-ha-234651" in "kube-system" namespace has status "Ready":"True"
	I0731 17:02:21.775647   26392 pod_ready.go:81] duration metric: took 398.937458ms for pod "kube-controller-manager-ha-234651" in "kube-system" namespace to be "Ready" ...
	I0731 17:02:21.775659   26392 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-234651-m02" in "kube-system" namespace to be "Ready" ...
	I0731 17:02:21.972798   26392 request.go:629] Waited for 197.069148ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.243:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-234651-m02
	I0731 17:02:21.972882   26392 round_trippers.go:463] GET https://192.168.39.243:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-234651-m02
	I0731 17:02:21.972894   26392 round_trippers.go:469] Request Headers:
	I0731 17:02:21.972904   26392 round_trippers.go:473]     Accept: application/json, */*
	I0731 17:02:21.972915   26392 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 17:02:21.976110   26392 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 17:02:22.172448   26392 request.go:629] Waited for 195.762899ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.243:8443/api/v1/nodes/ha-234651-m02
	I0731 17:02:22.172520   26392 round_trippers.go:463] GET https://192.168.39.243:8443/api/v1/nodes/ha-234651-m02
	I0731 17:02:22.172532   26392 round_trippers.go:469] Request Headers:
	I0731 17:02:22.172543   26392 round_trippers.go:473]     Accept: application/json, */*
	I0731 17:02:22.172553   26392 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 17:02:22.175713   26392 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 17:02:22.176373   26392 pod_ready.go:92] pod "kube-controller-manager-ha-234651-m02" in "kube-system" namespace has status "Ready":"True"
	I0731 17:02:22.176395   26392 pod_ready.go:81] duration metric: took 400.728457ms for pod "kube-controller-manager-ha-234651-m02" in "kube-system" namespace to be "Ready" ...
	I0731 17:02:22.176407   26392 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-234651-m03" in "kube-system" namespace to be "Ready" ...
	I0731 17:02:22.372209   26392 request.go:629] Waited for 195.723268ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.243:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-234651-m03
	I0731 17:02:22.372284   26392 round_trippers.go:463] GET https://192.168.39.243:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-234651-m03
	I0731 17:02:22.372294   26392 round_trippers.go:469] Request Headers:
	I0731 17:02:22.372310   26392 round_trippers.go:473]     Accept: application/json, */*
	I0731 17:02:22.372315   26392 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 17:02:22.375732   26392 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 17:02:22.572690   26392 request.go:629] Waited for 196.364269ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.243:8443/api/v1/nodes/ha-234651-m03
	I0731 17:02:22.572766   26392 round_trippers.go:463] GET https://192.168.39.243:8443/api/v1/nodes/ha-234651-m03
	I0731 17:02:22.572772   26392 round_trippers.go:469] Request Headers:
	I0731 17:02:22.572780   26392 round_trippers.go:473]     Accept: application/json, */*
	I0731 17:02:22.572786   26392 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 17:02:22.576137   26392 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 17:02:22.576644   26392 pod_ready.go:92] pod "kube-controller-manager-ha-234651-m03" in "kube-system" namespace has status "Ready":"True"
	I0731 17:02:22.576660   26392 pod_ready.go:81] duration metric: took 400.245471ms for pod "kube-controller-manager-ha-234651-m03" in "kube-system" namespace to be "Ready" ...
	I0731 17:02:22.576671   26392 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-b8dcw" in "kube-system" namespace to be "Ready" ...
	I0731 17:02:22.772103   26392 request.go:629] Waited for 195.368675ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.243:8443/api/v1/namespaces/kube-system/pods/kube-proxy-b8dcw
	I0731 17:02:22.772183   26392 round_trippers.go:463] GET https://192.168.39.243:8443/api/v1/namespaces/kube-system/pods/kube-proxy-b8dcw
	I0731 17:02:22.772195   26392 round_trippers.go:469] Request Headers:
	I0731 17:02:22.772256   26392 round_trippers.go:473]     Accept: application/json, */*
	I0731 17:02:22.772269   26392 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 17:02:22.775401   26392 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 17:02:22.972548   26392 request.go:629] Waited for 196.34366ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.243:8443/api/v1/nodes/ha-234651-m02
	I0731 17:02:22.972602   26392 round_trippers.go:463] GET https://192.168.39.243:8443/api/v1/nodes/ha-234651-m02
	I0731 17:02:22.972608   26392 round_trippers.go:469] Request Headers:
	I0731 17:02:22.972615   26392 round_trippers.go:473]     Accept: application/json, */*
	I0731 17:02:22.972619   26392 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 17:02:22.977276   26392 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0731 17:02:22.977775   26392 pod_ready.go:92] pod "kube-proxy-b8dcw" in "kube-system" namespace has status "Ready":"True"
	I0731 17:02:22.977796   26392 pod_ready.go:81] duration metric: took 401.118741ms for pod "kube-proxy-b8dcw" in "kube-system" namespace to be "Ready" ...
	I0731 17:02:22.977808   26392 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-gfgjd" in "kube-system" namespace to be "Ready" ...
	I0731 17:02:23.172827   26392 request.go:629] Waited for 194.930334ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.243:8443/api/v1/namespaces/kube-system/pods/kube-proxy-gfgjd
	I0731 17:02:23.172883   26392 round_trippers.go:463] GET https://192.168.39.243:8443/api/v1/namespaces/kube-system/pods/kube-proxy-gfgjd
	I0731 17:02:23.172887   26392 round_trippers.go:469] Request Headers:
	I0731 17:02:23.172895   26392 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 17:02:23.172899   26392 round_trippers.go:473]     Accept: application/json, */*
	I0731 17:02:23.175893   26392 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0731 17:02:23.371902   26392 request.go:629] Waited for 195.281871ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.243:8443/api/v1/nodes/ha-234651-m03
	I0731 17:02:23.371952   26392 round_trippers.go:463] GET https://192.168.39.243:8443/api/v1/nodes/ha-234651-m03
	I0731 17:02:23.371957   26392 round_trippers.go:469] Request Headers:
	I0731 17:02:23.371964   26392 round_trippers.go:473]     Accept: application/json, */*
	I0731 17:02:23.371968   26392 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 17:02:23.374603   26392 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0731 17:02:23.375326   26392 pod_ready.go:92] pod "kube-proxy-gfgjd" in "kube-system" namespace has status "Ready":"True"
	I0731 17:02:23.375353   26392 pod_ready.go:81] duration metric: took 397.538025ms for pod "kube-proxy-gfgjd" in "kube-system" namespace to be "Ready" ...
	I0731 17:02:23.375362   26392 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-jfgs8" in "kube-system" namespace to be "Ready" ...
	I0731 17:02:23.572401   26392 request.go:629] Waited for 196.975032ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.243:8443/api/v1/namespaces/kube-system/pods/kube-proxy-jfgs8
	I0731 17:02:23.572469   26392 round_trippers.go:463] GET https://192.168.39.243:8443/api/v1/namespaces/kube-system/pods/kube-proxy-jfgs8
	I0731 17:02:23.572475   26392 round_trippers.go:469] Request Headers:
	I0731 17:02:23.572482   26392 round_trippers.go:473]     Accept: application/json, */*
	I0731 17:02:23.572488   26392 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 17:02:23.575615   26392 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 17:02:23.772754   26392 request.go:629] Waited for 196.370061ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.243:8443/api/v1/nodes/ha-234651
	I0731 17:02:23.772821   26392 round_trippers.go:463] GET https://192.168.39.243:8443/api/v1/nodes/ha-234651
	I0731 17:02:23.772831   26392 round_trippers.go:469] Request Headers:
	I0731 17:02:23.772840   26392 round_trippers.go:473]     Accept: application/json, */*
	I0731 17:02:23.772849   26392 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 17:02:23.775877   26392 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 17:02:23.776390   26392 pod_ready.go:92] pod "kube-proxy-jfgs8" in "kube-system" namespace has status "Ready":"True"
	I0731 17:02:23.776408   26392 pod_ready.go:81] duration metric: took 401.039832ms for pod "kube-proxy-jfgs8" in "kube-system" namespace to be "Ready" ...
	I0731 17:02:23.776417   26392 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-234651" in "kube-system" namespace to be "Ready" ...
	I0731 17:02:23.971894   26392 request.go:629] Waited for 195.395031ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.243:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-234651
	I0731 17:02:23.971956   26392 round_trippers.go:463] GET https://192.168.39.243:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-234651
	I0731 17:02:23.971961   26392 round_trippers.go:469] Request Headers:
	I0731 17:02:23.971968   26392 round_trippers.go:473]     Accept: application/json, */*
	I0731 17:02:23.971972   26392 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 17:02:23.976837   26392 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0731 17:02:24.172761   26392 request.go:629] Waited for 195.33399ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.243:8443/api/v1/nodes/ha-234651
	I0731 17:02:24.172816   26392 round_trippers.go:463] GET https://192.168.39.243:8443/api/v1/nodes/ha-234651
	I0731 17:02:24.172821   26392 round_trippers.go:469] Request Headers:
	I0731 17:02:24.172828   26392 round_trippers.go:473]     Accept: application/json, */*
	I0731 17:02:24.172836   26392 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 17:02:24.175689   26392 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0731 17:02:24.176304   26392 pod_ready.go:92] pod "kube-scheduler-ha-234651" in "kube-system" namespace has status "Ready":"True"
	I0731 17:02:24.176321   26392 pod_ready.go:81] duration metric: took 399.898252ms for pod "kube-scheduler-ha-234651" in "kube-system" namespace to be "Ready" ...
	I0731 17:02:24.176329   26392 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-234651-m02" in "kube-system" namespace to be "Ready" ...
	I0731 17:02:24.372431   26392 request.go:629] Waited for 196.044675ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.243:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-234651-m02
	I0731 17:02:24.372507   26392 round_trippers.go:463] GET https://192.168.39.243:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-234651-m02
	I0731 17:02:24.372514   26392 round_trippers.go:469] Request Headers:
	I0731 17:02:24.372525   26392 round_trippers.go:473]     Accept: application/json, */*
	I0731 17:02:24.372531   26392 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 17:02:24.375686   26392 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 17:02:24.572614   26392 request.go:629] Waited for 196.336033ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.243:8443/api/v1/nodes/ha-234651-m02
	I0731 17:02:24.572663   26392 round_trippers.go:463] GET https://192.168.39.243:8443/api/v1/nodes/ha-234651-m02
	I0731 17:02:24.572668   26392 round_trippers.go:469] Request Headers:
	I0731 17:02:24.572675   26392 round_trippers.go:473]     Accept: application/json, */*
	I0731 17:02:24.572680   26392 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 17:02:24.576071   26392 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 17:02:24.576616   26392 pod_ready.go:92] pod "kube-scheduler-ha-234651-m02" in "kube-system" namespace has status "Ready":"True"
	I0731 17:02:24.576634   26392 pod_ready.go:81] duration metric: took 400.298948ms for pod "kube-scheduler-ha-234651-m02" in "kube-system" namespace to be "Ready" ...
	I0731 17:02:24.576643   26392 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-234651-m03" in "kube-system" namespace to be "Ready" ...
	I0731 17:02:24.772668   26392 request.go:629] Waited for 195.957361ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.243:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-234651-m03
	I0731 17:02:24.772720   26392 round_trippers.go:463] GET https://192.168.39.243:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-234651-m03
	I0731 17:02:24.772725   26392 round_trippers.go:469] Request Headers:
	I0731 17:02:24.772732   26392 round_trippers.go:473]     Accept: application/json, */*
	I0731 17:02:24.772735   26392 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 17:02:24.776266   26392 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 17:02:24.972184   26392 request.go:629] Waited for 195.351472ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.243:8443/api/v1/nodes/ha-234651-m03
	I0731 17:02:24.972268   26392 round_trippers.go:463] GET https://192.168.39.243:8443/api/v1/nodes/ha-234651-m03
	I0731 17:02:24.972277   26392 round_trippers.go:469] Request Headers:
	I0731 17:02:24.972288   26392 round_trippers.go:473]     Accept: application/json, */*
	I0731 17:02:24.972298   26392 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 17:02:24.975807   26392 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 17:02:24.976515   26392 pod_ready.go:92] pod "kube-scheduler-ha-234651-m03" in "kube-system" namespace has status "Ready":"True"
	I0731 17:02:24.976534   26392 pod_ready.go:81] duration metric: took 399.885145ms for pod "kube-scheduler-ha-234651-m03" in "kube-system" namespace to be "Ready" ...
	I0731 17:02:24.976544   26392 pod_ready.go:38] duration metric: took 5.199474413s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0731 17:02:24.976559   26392 api_server.go:52] waiting for apiserver process to appear ...
	I0731 17:02:24.976619   26392 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 17:02:24.991603   26392 api_server.go:72] duration metric: took 23.522784014s to wait for apiserver process to appear ...
	I0731 17:02:24.991627   26392 api_server.go:88] waiting for apiserver healthz status ...
	I0731 17:02:24.991648   26392 api_server.go:253] Checking apiserver healthz at https://192.168.39.243:8443/healthz ...
	I0731 17:02:24.996060   26392 api_server.go:279] https://192.168.39.243:8443/healthz returned 200:
	ok
	I0731 17:02:24.996124   26392 round_trippers.go:463] GET https://192.168.39.243:8443/version
	I0731 17:02:24.996134   26392 round_trippers.go:469] Request Headers:
	I0731 17:02:24.996146   26392 round_trippers.go:473]     Accept: application/json, */*
	I0731 17:02:24.996153   26392 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 17:02:24.997003   26392 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0731 17:02:24.997065   26392 api_server.go:141] control plane version: v1.30.3
	I0731 17:02:24.997080   26392 api_server.go:131] duration metric: took 5.446251ms to wait for apiserver health ...
	I0731 17:02:24.997090   26392 system_pods.go:43] waiting for kube-system pods to appear ...
	I0731 17:02:25.172062   26392 request.go:629] Waited for 174.909019ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.243:8443/api/v1/namespaces/kube-system/pods
	I0731 17:02:25.172129   26392 round_trippers.go:463] GET https://192.168.39.243:8443/api/v1/namespaces/kube-system/pods
	I0731 17:02:25.172136   26392 round_trippers.go:469] Request Headers:
	I0731 17:02:25.172144   26392 round_trippers.go:473]     Accept: application/json, */*
	I0731 17:02:25.172150   26392 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 17:02:25.180567   26392 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0731 17:02:25.187031   26392 system_pods.go:59] 24 kube-system pods found
	I0731 17:02:25.187058   26392 system_pods.go:61] "coredns-7db6d8ff4d-nsx9j" [b2cde006-dbb7-4e6f-a5f1-cf7760740104] Running
	I0731 17:02:25.187063   26392 system_pods.go:61] "coredns-7db6d8ff4d-qbqb9" [4f76f862-d39e-4976-90e6-fb9a25cc485a] Running
	I0731 17:02:25.187067   26392 system_pods.go:61] "etcd-ha-234651" [c6fb7163-2f43-4b81-a3d9-e1550ddabc4c] Running
	I0731 17:02:25.187070   26392 system_pods.go:61] "etcd-ha-234651-m02" [ba88e585-8e9b-4177-82e0-4222210cfd2d] Running
	I0731 17:02:25.187074   26392 system_pods.go:61] "etcd-ha-234651-m03" [6d8ddabd-e7d2-48c7-93ce-ab3f68540789] Running
	I0731 17:02:25.187078   26392 system_pods.go:61] "kindnet-2xqxq" [a9eb3817-aec9-414b-80ab-665236250ab0] Running
	I0731 17:02:25.187081   26392 system_pods.go:61] "kindnet-phmdp" [f0e0736a-4005-4b0b-81fc-e4cea26a7d42] Running
	I0731 17:02:25.187084   26392 system_pods.go:61] "kindnet-wfbt4" [9eda8095-ce75-4043-8ddf-6e5663de8212] Running
	I0731 17:02:25.187088   26392 system_pods.go:61] "kube-apiserver-ha-234651" [ce3cbf02-100c-4437-8d1c-2ac4a2822866] Running
	I0731 17:02:25.187091   26392 system_pods.go:61] "kube-apiserver-ha-234651-m02" [e5eea595-36d9-497e-83be-aac5845b566a] Running
	I0731 17:02:25.187094   26392 system_pods.go:61] "kube-apiserver-ha-234651-m03" [42a6e972-6278-433a-93ea-1661c9827678] Running
	I0731 17:02:25.187098   26392 system_pods.go:61] "kube-controller-manager-ha-234651" [3d111090-8fee-40ab-8cab-df7dd984bdb3] Running
	I0731 17:02:25.187101   26392 system_pods.go:61] "kube-controller-manager-ha-234651-m02" [28a10b93-7e55-4e2d-a5be-3605fa94fdb0] Running
	I0731 17:02:25.187104   26392 system_pods.go:61] "kube-controller-manager-ha-234651-m03" [a5d2498c-f9be-4425-9a3d-570903f9f62e] Running
	I0731 17:02:25.187106   26392 system_pods.go:61] "kube-proxy-b8dcw" [cbd2a846-0490-4465-b299-76b6fb0d5710] Running
	I0731 17:02:25.187134   26392 system_pods.go:61] "kube-proxy-gfgjd" [b20d9a5c-0521-49f5-9002-74ae98e683d0] Running
	I0731 17:02:25.187138   26392 system_pods.go:61] "kube-proxy-jfgs8" [5ead85d8-0fd0-4900-8c02-2f23217ca208] Running
	I0731 17:02:25.187145   26392 system_pods.go:61] "kube-scheduler-ha-234651" [e1e2064c-17d7-4ea9-b6d3-b13c9ca34789] Running
	I0731 17:02:25.187148   26392 system_pods.go:61] "kube-scheduler-ha-234651-m02" [2ed86332-42ba-46c9-901c-38880e8ec54a] Running
	I0731 17:02:25.187152   26392 system_pods.go:61] "kube-scheduler-ha-234651-m03" [274102a7-621b-496d-95e0-6588195be8b0] Running
	I0731 17:02:25.187154   26392 system_pods.go:61] "kube-vip-ha-234651" [205b811c-93e9-4f66-9d0e-67abbc8ff1ef] Running
	I0731 17:02:25.187158   26392 system_pods.go:61] "kube-vip-ha-234651-m02" [8fc0f86c-f043-44c2-aa8b-544fd6b7de22] Running
	I0731 17:02:25.187163   26392 system_pods.go:61] "kube-vip-ha-234651-m03" [d1ca6f6b-095f-457d-a4d3-2bac916bb8ba] Running
	I0731 17:02:25.187166   26392 system_pods.go:61] "storage-provisioner" [87455537-bdb8-438b-8122-db85bed01d09] Running
	I0731 17:02:25.187171   26392 system_pods.go:74] duration metric: took 190.073866ms to wait for pod list to return data ...
	I0731 17:02:25.187180   26392 default_sa.go:34] waiting for default service account to be created ...
	I0731 17:02:25.372614   26392 request.go:629] Waited for 185.36405ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.243:8443/api/v1/namespaces/default/serviceaccounts
	I0731 17:02:25.372669   26392 round_trippers.go:463] GET https://192.168.39.243:8443/api/v1/namespaces/default/serviceaccounts
	I0731 17:02:25.372675   26392 round_trippers.go:469] Request Headers:
	I0731 17:02:25.372682   26392 round_trippers.go:473]     Accept: application/json, */*
	I0731 17:02:25.372685   26392 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 17:02:25.376402   26392 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 17:02:25.376545   26392 default_sa.go:45] found service account: "default"
	I0731 17:02:25.376568   26392 default_sa.go:55] duration metric: took 189.37928ms for default service account to be created ...
	I0731 17:02:25.376578   26392 system_pods.go:116] waiting for k8s-apps to be running ...
	I0731 17:02:25.572785   26392 request.go:629] Waited for 196.133383ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.243:8443/api/v1/namespaces/kube-system/pods
	I0731 17:02:25.572863   26392 round_trippers.go:463] GET https://192.168.39.243:8443/api/v1/namespaces/kube-system/pods
	I0731 17:02:25.572874   26392 round_trippers.go:469] Request Headers:
	I0731 17:02:25.572884   26392 round_trippers.go:473]     Accept: application/json, */*
	I0731 17:02:25.572890   26392 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 17:02:25.581431   26392 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0731 17:02:25.587354   26392 system_pods.go:86] 24 kube-system pods found
	I0731 17:02:25.587378   26392 system_pods.go:89] "coredns-7db6d8ff4d-nsx9j" [b2cde006-dbb7-4e6f-a5f1-cf7760740104] Running
	I0731 17:02:25.587384   26392 system_pods.go:89] "coredns-7db6d8ff4d-qbqb9" [4f76f862-d39e-4976-90e6-fb9a25cc485a] Running
	I0731 17:02:25.587388   26392 system_pods.go:89] "etcd-ha-234651" [c6fb7163-2f43-4b81-a3d9-e1550ddabc4c] Running
	I0731 17:02:25.587393   26392 system_pods.go:89] "etcd-ha-234651-m02" [ba88e585-8e9b-4177-82e0-4222210cfd2d] Running
	I0731 17:02:25.587397   26392 system_pods.go:89] "etcd-ha-234651-m03" [6d8ddabd-e7d2-48c7-93ce-ab3f68540789] Running
	I0731 17:02:25.587401   26392 system_pods.go:89] "kindnet-2xqxq" [a9eb3817-aec9-414b-80ab-665236250ab0] Running
	I0731 17:02:25.587405   26392 system_pods.go:89] "kindnet-phmdp" [f0e0736a-4005-4b0b-81fc-e4cea26a7d42] Running
	I0731 17:02:25.587409   26392 system_pods.go:89] "kindnet-wfbt4" [9eda8095-ce75-4043-8ddf-6e5663de8212] Running
	I0731 17:02:25.587413   26392 system_pods.go:89] "kube-apiserver-ha-234651" [ce3cbf02-100c-4437-8d1c-2ac4a2822866] Running
	I0731 17:02:25.587417   26392 system_pods.go:89] "kube-apiserver-ha-234651-m02" [e5eea595-36d9-497e-83be-aac5845b566a] Running
	I0731 17:02:25.587421   26392 system_pods.go:89] "kube-apiserver-ha-234651-m03" [42a6e972-6278-433a-93ea-1661c9827678] Running
	I0731 17:02:25.587426   26392 system_pods.go:89] "kube-controller-manager-ha-234651" [3d111090-8fee-40ab-8cab-df7dd984bdb3] Running
	I0731 17:02:25.587431   26392 system_pods.go:89] "kube-controller-manager-ha-234651-m02" [28a10b93-7e55-4e2d-a5be-3605fa94fdb0] Running
	I0731 17:02:25.587435   26392 system_pods.go:89] "kube-controller-manager-ha-234651-m03" [a5d2498c-f9be-4425-9a3d-570903f9f62e] Running
	I0731 17:02:25.587441   26392 system_pods.go:89] "kube-proxy-b8dcw" [cbd2a846-0490-4465-b299-76b6fb0d5710] Running
	I0731 17:02:25.587447   26392 system_pods.go:89] "kube-proxy-gfgjd" [b20d9a5c-0521-49f5-9002-74ae98e683d0] Running
	I0731 17:02:25.587451   26392 system_pods.go:89] "kube-proxy-jfgs8" [5ead85d8-0fd0-4900-8c02-2f23217ca208] Running
	I0731 17:02:25.587455   26392 system_pods.go:89] "kube-scheduler-ha-234651" [e1e2064c-17d7-4ea9-b6d3-b13c9ca34789] Running
	I0731 17:02:25.587459   26392 system_pods.go:89] "kube-scheduler-ha-234651-m02" [2ed86332-42ba-46c9-901c-38880e8ec54a] Running
	I0731 17:02:25.587466   26392 system_pods.go:89] "kube-scheduler-ha-234651-m03" [274102a7-621b-496d-95e0-6588195be8b0] Running
	I0731 17:02:25.587470   26392 system_pods.go:89] "kube-vip-ha-234651" [205b811c-93e9-4f66-9d0e-67abbc8ff1ef] Running
	I0731 17:02:25.587474   26392 system_pods.go:89] "kube-vip-ha-234651-m02" [8fc0f86c-f043-44c2-aa8b-544fd6b7de22] Running
	I0731 17:02:25.587477   26392 system_pods.go:89] "kube-vip-ha-234651-m03" [d1ca6f6b-095f-457d-a4d3-2bac916bb8ba] Running
	I0731 17:02:25.587480   26392 system_pods.go:89] "storage-provisioner" [87455537-bdb8-438b-8122-db85bed01d09] Running
	I0731 17:02:25.587486   26392 system_pods.go:126] duration metric: took 210.90289ms to wait for k8s-apps to be running ...
	I0731 17:02:25.587496   26392 system_svc.go:44] waiting for kubelet service to be running ....
	I0731 17:02:25.587536   26392 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0731 17:02:25.603526   26392 system_svc.go:56] duration metric: took 16.020558ms WaitForService to wait for kubelet
	I0731 17:02:25.603559   26392 kubeadm.go:582] duration metric: took 24.134740341s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0731 17:02:25.603585   26392 node_conditions.go:102] verifying NodePressure condition ...
	I0731 17:02:25.772812   26392 request.go:629] Waited for 169.160336ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.243:8443/api/v1/nodes
	I0731 17:02:25.772896   26392 round_trippers.go:463] GET https://192.168.39.243:8443/api/v1/nodes
	I0731 17:02:25.772907   26392 round_trippers.go:469] Request Headers:
	I0731 17:02:25.772918   26392 round_trippers.go:473]     Accept: application/json, */*
	I0731 17:02:25.772927   26392 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 17:02:25.776466   26392 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 17:02:25.777443   26392 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0731 17:02:25.777467   26392 node_conditions.go:123] node cpu capacity is 2
	I0731 17:02:25.777478   26392 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0731 17:02:25.777482   26392 node_conditions.go:123] node cpu capacity is 2
	I0731 17:02:25.777486   26392 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0731 17:02:25.777489   26392 node_conditions.go:123] node cpu capacity is 2
	I0731 17:02:25.777493   26392 node_conditions.go:105] duration metric: took 173.903547ms to run NodePressure ...
	I0731 17:02:25.777504   26392 start.go:241] waiting for startup goroutines ...
	I0731 17:02:25.777522   26392 start.go:255] writing updated cluster config ...
	I0731 17:02:25.777809   26392 ssh_runner.go:195] Run: rm -f paused
	I0731 17:02:25.831285   26392 start.go:600] kubectl: 1.30.3, cluster: 1.30.3 (minor skew: 0)
	I0731 17:02:25.833187   26392 out.go:177] * Done! kubectl is now configured to use "ha-234651" cluster and "default" namespace by default
	
	
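	The minikube run above finishes successfully; everything below is the CRI-O daemon log collected from the ha-234651 node after the failure. For reference, the readiness and health probes the run performs against the API server can be approximated from the command line. This is a minimal sketch, assuming the "ha-234651" context configured at the end of the run is present in the local kubeconfig; the label selector and endpoints are taken from the log itself:

	  # wait for the CoreDNS pods the log polls (k8s-app=kube-dns) to become Ready
	  kubectl --context ha-234651 -n kube-system wait pod -l k8s-app=kube-dns --for=condition=Ready --timeout=6m

	  # same /healthz probe the log issues against https://192.168.39.243:8443, routed through kubectl
	  kubectl --context ha-234651 get --raw /healthz

	  # the NodePressure step reads node capacity ("node cpu capacity is 2"); list it for all nodes
	  kubectl --context ha-234651 get nodes -o custom-columns=NAME:.metadata.name,CPU:.status.capacity.cpu
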
	==> CRI-O <==
	Jul 31 17:06:01 ha-234651 crio[683]: time="2024-07-31 17:06:01.197458402Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=e94237b7-b94d-4689-8e44-dc0ec1942b29 name=/runtime.v1.RuntimeService/Version
	Jul 31 17:06:01 ha-234651 crio[683]: time="2024-07-31 17:06:01.198969851Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=b994799d-6d69-42fa-9e1f-011d9a2e971e name=/runtime.v1.ImageService/ImageFsInfo
	Jul 31 17:06:01 ha-234651 crio[683]: time="2024-07-31 17:06:01.199555905Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722445561199531922,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:145867,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=b994799d-6d69-42fa-9e1f-011d9a2e971e name=/runtime.v1.ImageService/ImageFsInfo
	Jul 31 17:06:01 ha-234651 crio[683]: time="2024-07-31 17:06:01.200013453Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=d81eac29-a835-4657-855e-0ca528cd5d18 name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 17:06:01 ha-234651 crio[683]: time="2024-07-31 17:06:01.200074062Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=d81eac29-a835-4657-855e-0ca528cd5d18 name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 17:06:01 ha-234651 crio[683]: time="2024-07-31 17:06:01.200534835Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:5e4d66f773ff46f7a130d9359ac1316e1117516ebef8b32f9b20515706b3a2f8,PodSandboxId:754996ae28b01f43599725b23a8ef7ea035322c1da4ccbda6c379abd7e897fe0,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722445212877304638,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-qbqb9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4f76f862-d39e-4976-90e6-fb9a25cc485a,},Annotations:map[string]string{io.kubernetes.container.hash: 9bcf3d4b,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\
"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e8ef655791fe4b0b98612bab1bc9fea7e42c73e5327bdd0654676979cb2e8542,PodSandboxId:1616415c6b8f6463e22fcd4bc693a293f8fdee9b60f2f3d789bc645ace8300a6,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722445212817639863,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-nsx9j,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid
: b2cde006-dbb7-4e6f-a5f1-cf7760740104,},Annotations:map[string]string{io.kubernetes.container.hash: 9d1dd63f,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0cc5e3465a8643593cfbf5871749b29c1edcf22920dda340f9e61a512894333d,PodSandboxId:1e9afabdcc733b5f6d64e03aa33527331e08c27fcc253d8f5a936ac47ba41635,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNI
NG,CreatedAt:1722445212754307746,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 87455537-bdb8-438b-8122-db85bed01d09,},Annotations:map[string]string{io.kubernetes.container.hash: f1005df0,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dd9f6c4536535bb3eb07d6fa75fe49579d9cdee5ff6e4c440e0a58359d4f2f08,PodSandboxId:99ba869aafb121f3032d38c0a926a34605210fa5e62304b252efb3ce0f139b7c,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:C
ONTAINER_RUNNING,CreatedAt:1722445201111476345,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-wfbt4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9eda8095-ce75-4043-8ddf-6e5663de8212,},Annotations:map[string]string{io.kubernetes.container.hash: f1243e66,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:631c8cee6152ae1808419f27d4b63fa0a6bcebb15c757dd96090c6267d63e6c2,PodSandboxId:88fd41ca8aad113a54b218f3a36159bdb37464ee5f76ec8ae31f27cd31f7daa1,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:17224451
97367288188,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-jfgs8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5ead85d8-0fd0-4900-8c02-2f23217ca208,},Annotations:map[string]string{io.kubernetes.container.hash: 85f1c355,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:639ed1a246cfddb5bcce86bc66133eebc7339f13348608e46548a872b8df456e,PodSandboxId:e715c97e96fabf0f1c9f4e6025318eda97b8f431f80df4d8ac4517b03917faad,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1722445179
126430451,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-234651,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 377009a00f211ea8abb80d365c74a9fd,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e5b3417940cd8bf8065dfefafb918da0e5bfda9fd5f7a2f9ccea0f1615151657,PodSandboxId:8ccf20bcd63f37163f71bce3d8a364eb92d86b0694bc75949c8979e5633c5cc2,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722445176479082471,Labels:map[string]string{io.kuber
netes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-234651,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7f8584f0ef1731ec6b6fb11b7fa84aeb,},Annotations:map[string]string{io.kubernetes.container.hash: a3feaab1,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b48ac56e48fe04efb8cbe0a1f24302f4ddd366a42a7ba2d4395113bf2157a3ea,PodSandboxId:5e1eebee97f6b9d241e17235a27f64767bdc5937bd5ca3537ce4297a25812986,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722445176435123047,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.k
ubernetes.pod.name: kube-apiserver-ha-234651,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 399c842a2f5d3312c2f955f494ccfe00,},Annotations:map[string]string{io.kubernetes.container.hash: c9b9fb5b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e3ae09638d5d5724f3e27352156c272f710b70f2c3319adc46eb518d357dc8e2,PodSandboxId:5c961f0d694a144ae953e67f65f86080fd9a64718dfc16e8697475eadb817895,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722445176401816670,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kube
rnetes.pod.name: kube-controller-manager-ha-234651,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d9982d04d181fbc6333c44627c777728,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ded6421f2f11d7f64ce80d1681e3d4c9d1453ff8aecc860e184ad0865768adde,PodSandboxId:90ca479a21b17fc36c4328bfd82fe7023d144a893b3a1fbc315c66af59696ea2,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722445176364748602,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.n
ame: kube-scheduler-ha-234651,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3defc3711c46905c8aec12eb318ecd3b,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=d81eac29-a835-4657-855e-0ca528cd5d18 name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 17:06:01 ha-234651 crio[683]: time="2024-07-31 17:06:01.236514970Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=0ca8f11e-e258-4954-84a1-3a11c47ebe53 name=/runtime.v1.RuntimeService/Version
	Jul 31 17:06:01 ha-234651 crio[683]: time="2024-07-31 17:06:01.236636286Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=0ca8f11e-e258-4954-84a1-3a11c47ebe53 name=/runtime.v1.RuntimeService/Version
	Jul 31 17:06:01 ha-234651 crio[683]: time="2024-07-31 17:06:01.238655050Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=986d0a26-a26c-43cc-bc66-cb2d6e00cb31 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 31 17:06:01 ha-234651 crio[683]: time="2024-07-31 17:06:01.239369337Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722445561239304089,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:145867,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=986d0a26-a26c-43cc-bc66-cb2d6e00cb31 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 31 17:06:01 ha-234651 crio[683]: time="2024-07-31 17:06:01.240793050Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=d5917009-ed9b-465b-8c33-50e218bb6fd4 name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 17:06:01 ha-234651 crio[683]: time="2024-07-31 17:06:01.240931351Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=d5917009-ed9b-465b-8c33-50e218bb6fd4 name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 17:06:01 ha-234651 crio[683]: time="2024-07-31 17:06:01.241161132Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:5e4d66f773ff46f7a130d9359ac1316e1117516ebef8b32f9b20515706b3a2f8,PodSandboxId:754996ae28b01f43599725b23a8ef7ea035322c1da4ccbda6c379abd7e897fe0,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722445212877304638,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-qbqb9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4f76f862-d39e-4976-90e6-fb9a25cc485a,},Annotations:map[string]string{io.kubernetes.container.hash: 9bcf3d4b,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\
"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e8ef655791fe4b0b98612bab1bc9fea7e42c73e5327bdd0654676979cb2e8542,PodSandboxId:1616415c6b8f6463e22fcd4bc693a293f8fdee9b60f2f3d789bc645ace8300a6,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722445212817639863,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-nsx9j,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid
: b2cde006-dbb7-4e6f-a5f1-cf7760740104,},Annotations:map[string]string{io.kubernetes.container.hash: 9d1dd63f,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0cc5e3465a8643593cfbf5871749b29c1edcf22920dda340f9e61a512894333d,PodSandboxId:1e9afabdcc733b5f6d64e03aa33527331e08c27fcc253d8f5a936ac47ba41635,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNI
NG,CreatedAt:1722445212754307746,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 87455537-bdb8-438b-8122-db85bed01d09,},Annotations:map[string]string{io.kubernetes.container.hash: f1005df0,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dd9f6c4536535bb3eb07d6fa75fe49579d9cdee5ff6e4c440e0a58359d4f2f08,PodSandboxId:99ba869aafb121f3032d38c0a926a34605210fa5e62304b252efb3ce0f139b7c,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:C
ONTAINER_RUNNING,CreatedAt:1722445201111476345,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-wfbt4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9eda8095-ce75-4043-8ddf-6e5663de8212,},Annotations:map[string]string{io.kubernetes.container.hash: f1243e66,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:631c8cee6152ae1808419f27d4b63fa0a6bcebb15c757dd96090c6267d63e6c2,PodSandboxId:88fd41ca8aad113a54b218f3a36159bdb37464ee5f76ec8ae31f27cd31f7daa1,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:17224451
97367288188,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-jfgs8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5ead85d8-0fd0-4900-8c02-2f23217ca208,},Annotations:map[string]string{io.kubernetes.container.hash: 85f1c355,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:639ed1a246cfddb5bcce86bc66133eebc7339f13348608e46548a872b8df456e,PodSandboxId:e715c97e96fabf0f1c9f4e6025318eda97b8f431f80df4d8ac4517b03917faad,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1722445179
126430451,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-234651,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 377009a00f211ea8abb80d365c74a9fd,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e5b3417940cd8bf8065dfefafb918da0e5bfda9fd5f7a2f9ccea0f1615151657,PodSandboxId:8ccf20bcd63f37163f71bce3d8a364eb92d86b0694bc75949c8979e5633c5cc2,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722445176479082471,Labels:map[string]string{io.kuber
netes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-234651,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7f8584f0ef1731ec6b6fb11b7fa84aeb,},Annotations:map[string]string{io.kubernetes.container.hash: a3feaab1,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b48ac56e48fe04efb8cbe0a1f24302f4ddd366a42a7ba2d4395113bf2157a3ea,PodSandboxId:5e1eebee97f6b9d241e17235a27f64767bdc5937bd5ca3537ce4297a25812986,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722445176435123047,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.k
ubernetes.pod.name: kube-apiserver-ha-234651,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 399c842a2f5d3312c2f955f494ccfe00,},Annotations:map[string]string{io.kubernetes.container.hash: c9b9fb5b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e3ae09638d5d5724f3e27352156c272f710b70f2c3319adc46eb518d357dc8e2,PodSandboxId:5c961f0d694a144ae953e67f65f86080fd9a64718dfc16e8697475eadb817895,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722445176401816670,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kube
rnetes.pod.name: kube-controller-manager-ha-234651,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d9982d04d181fbc6333c44627c777728,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ded6421f2f11d7f64ce80d1681e3d4c9d1453ff8aecc860e184ad0865768adde,PodSandboxId:90ca479a21b17fc36c4328bfd82fe7023d144a893b3a1fbc315c66af59696ea2,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722445176364748602,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.n
ame: kube-scheduler-ha-234651,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3defc3711c46905c8aec12eb318ecd3b,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=d5917009-ed9b-465b-8c33-50e218bb6fd4 name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 17:06:01 ha-234651 crio[683]: time="2024-07-31 17:06:01.264245470Z" level=debug msg="Request: &ListPodSandboxRequest{Filter:nil,}" file="otel-collector/interceptors.go:62" id=2a2a217d-3718-43a2-b0ee-4c0baa6bb6d9 name=/runtime.v1.RuntimeService/ListPodSandbox
	Jul 31 17:06:01 ha-234651 crio[683]: time="2024-07-31 17:06:01.264563525Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:754996ae28b01f43599725b23a8ef7ea035322c1da4ccbda6c379abd7e897fe0,Metadata:&PodSandboxMetadata{Name:coredns-7db6d8ff4d-qbqb9,Uid:4f76f862-d39e-4976-90e6-fb9a25cc485a,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1722445212609755554,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-7db6d8ff4d-qbqb9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4f76f862-d39e-4976-90e6-fb9a25cc485a,k8s-app: kube-dns,pod-template-hash: 7db6d8ff4d,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-07-31T17:00:12.291105521Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:1e9afabdcc733b5f6d64e03aa33527331e08c27fcc253d8f5a936ac47ba41635,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:87455537-bdb8-438b-8122-db85bed01d09,Namespace:kube-system,Attempt:
0,},State:SANDBOX_READY,CreatedAt:1722445212608547465,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 87455537-bdb8-438b-8122-db85bed01d09,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path
\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io/config.seen: 2024-07-31T17:00:12.293249307Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:1616415c6b8f6463e22fcd4bc693a293f8fdee9b60f2f3d789bc645ace8300a6,Metadata:&PodSandboxMetadata{Name:coredns-7db6d8ff4d-nsx9j,Uid:b2cde006-dbb7-4e6f-a5f1-cf7760740104,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1722445212589247328,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-7db6d8ff4d-nsx9j,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b2cde006-dbb7-4e6f-a5f1-cf7760740104,k8s-app: kube-dns,pod-template-hash: 7db6d8ff4d,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-07-31T17:00:12.281658362Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:88fd41ca8aad113a54b218f3a36159bdb37464ee5f76ec8ae31f27cd31f7daa1,Metadata:&PodSandboxMetadata{Name:kube-proxy-jfgs8,Uid:5ead85d8-0fd0-4900-8c02-2f23217ca208,Namespace:kube-sys
tem,Attempt:0,},State:SANDBOX_READY,CreatedAt:1722445197269221011,Labels:map[string]string{controller-revision-hash: 5bbc78d4f8,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-jfgs8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5ead85d8-0fd0-4900-8c02-2f23217ca208,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-07-31T16:59:56.058945111Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:99ba869aafb121f3032d38c0a926a34605210fa5e62304b252efb3ce0f139b7c,Metadata:&PodSandboxMetadata{Name:kindnet-wfbt4,Uid:9eda8095-ce75-4043-8ddf-6e5663de8212,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1722445196365090312,Labels:map[string]string{app: kindnet,controller-revision-hash: 549967b474,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kindnet-wfbt4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9eda8095-ce75-4043-8ddf-6e5663de8212,k8s-app: kindnet,pod-template
-generation: 1,tier: node,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-07-31T16:59:56.052583148Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:8ccf20bcd63f37163f71bce3d8a364eb92d86b0694bc75949c8979e5633c5cc2,Metadata:&PodSandboxMetadata{Name:etcd-ha-234651,Uid:7f8584f0ef1731ec6b6fb11b7fa84aeb,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1722445176211738887,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-ha-234651,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7f8584f0ef1731ec6b6fb11b7fa84aeb,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.39.243:2379,kubernetes.io/config.hash: 7f8584f0ef1731ec6b6fb11b7fa84aeb,kubernetes.io/config.seen: 2024-07-31T16:59:35.725398431Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:5c961f0d694a144ae953e67f65f86080fd9a64718dfc16e8697475eadb817895,Metadata:&Po
dSandboxMetadata{Name:kube-controller-manager-ha-234651,Uid:d9982d04d181fbc6333c44627c777728,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1722445176207994144,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-ha-234651,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d9982d04d181fbc6333c44627c777728,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: d9982d04d181fbc6333c44627c777728,kubernetes.io/config.seen: 2024-07-31T16:59:35.725390371Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:5e1eebee97f6b9d241e17235a27f64767bdc5937bd5ca3537ce4297a25812986,Metadata:&PodSandboxMetadata{Name:kube-apiserver-ha-234651,Uid:399c842a2f5d3312c2f955f494ccfe00,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1722445176206430839,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-
apiserver-ha-234651,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 399c842a2f5d3312c2f955f494ccfe00,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.39.243:8443,kubernetes.io/config.hash: 399c842a2f5d3312c2f955f494ccfe00,kubernetes.io/config.seen: 2024-07-31T16:59:35.725399530Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:90ca479a21b17fc36c4328bfd82fe7023d144a893b3a1fbc315c66af59696ea2,Metadata:&PodSandboxMetadata{Name:kube-scheduler-ha-234651,Uid:3defc3711c46905c8aec12eb318ecd3b,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1722445176167625024,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-ha-234651,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3defc3711c46905c8aec12eb318ecd3b,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 3defc3711c46905c8aec12eb318e
cd3b,kubernetes.io/config.seen: 2024-07-31T16:59:35.725396175Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:e715c97e96fabf0f1c9f4e6025318eda97b8f431f80df4d8ac4517b03917faad,Metadata:&PodSandboxMetadata{Name:kube-vip-ha-234651,Uid:377009a00f211ea8abb80d365c74a9fd,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1722445176166969927,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-vip-ha-234651,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 377009a00f211ea8abb80d365c74a9fd,},Annotations:map[string]string{kubernetes.io/config.hash: 377009a00f211ea8abb80d365c74a9fd,kubernetes.io/config.seen: 2024-07-31T16:59:35.725397355Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="otel-collector/interceptors.go:74" id=2a2a217d-3718-43a2-b0ee-4c0baa6bb6d9 name=/runtime.v1.RuntimeService/ListPodSandbox
	Jul 31 17:06:01 ha-234651 crio[683]: time="2024-07-31 17:06:01.265541185Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=d8623618-2fb6-43fb-a947-cb60374e59e9 name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 17:06:01 ha-234651 crio[683]: time="2024-07-31 17:06:01.265623039Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=d8623618-2fb6-43fb-a947-cb60374e59e9 name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 17:06:01 ha-234651 crio[683]: time="2024-07-31 17:06:01.265834755Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:5e4d66f773ff46f7a130d9359ac1316e1117516ebef8b32f9b20515706b3a2f8,PodSandboxId:754996ae28b01f43599725b23a8ef7ea035322c1da4ccbda6c379abd7e897fe0,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722445212877304638,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-qbqb9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4f76f862-d39e-4976-90e6-fb9a25cc485a,},Annotations:map[string]string{io.kubernetes.container.hash: 9bcf3d4b,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\
"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e8ef655791fe4b0b98612bab1bc9fea7e42c73e5327bdd0654676979cb2e8542,PodSandboxId:1616415c6b8f6463e22fcd4bc693a293f8fdee9b60f2f3d789bc645ace8300a6,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722445212817639863,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-nsx9j,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid
: b2cde006-dbb7-4e6f-a5f1-cf7760740104,},Annotations:map[string]string{io.kubernetes.container.hash: 9d1dd63f,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0cc5e3465a8643593cfbf5871749b29c1edcf22920dda340f9e61a512894333d,PodSandboxId:1e9afabdcc733b5f6d64e03aa33527331e08c27fcc253d8f5a936ac47ba41635,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNI
NG,CreatedAt:1722445212754307746,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 87455537-bdb8-438b-8122-db85bed01d09,},Annotations:map[string]string{io.kubernetes.container.hash: f1005df0,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dd9f6c4536535bb3eb07d6fa75fe49579d9cdee5ff6e4c440e0a58359d4f2f08,PodSandboxId:99ba869aafb121f3032d38c0a926a34605210fa5e62304b252efb3ce0f139b7c,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:C
ONTAINER_RUNNING,CreatedAt:1722445201111476345,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-wfbt4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9eda8095-ce75-4043-8ddf-6e5663de8212,},Annotations:map[string]string{io.kubernetes.container.hash: f1243e66,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:631c8cee6152ae1808419f27d4b63fa0a6bcebb15c757dd96090c6267d63e6c2,PodSandboxId:88fd41ca8aad113a54b218f3a36159bdb37464ee5f76ec8ae31f27cd31f7daa1,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:17224451
97367288188,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-jfgs8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5ead85d8-0fd0-4900-8c02-2f23217ca208,},Annotations:map[string]string{io.kubernetes.container.hash: 85f1c355,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:639ed1a246cfddb5bcce86bc66133eebc7339f13348608e46548a872b8df456e,PodSandboxId:e715c97e96fabf0f1c9f4e6025318eda97b8f431f80df4d8ac4517b03917faad,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1722445179
126430451,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-234651,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 377009a00f211ea8abb80d365c74a9fd,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e5b3417940cd8bf8065dfefafb918da0e5bfda9fd5f7a2f9ccea0f1615151657,PodSandboxId:8ccf20bcd63f37163f71bce3d8a364eb92d86b0694bc75949c8979e5633c5cc2,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722445176479082471,Labels:map[string]string{io.kuber
netes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-234651,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7f8584f0ef1731ec6b6fb11b7fa84aeb,},Annotations:map[string]string{io.kubernetes.container.hash: a3feaab1,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b48ac56e48fe04efb8cbe0a1f24302f4ddd366a42a7ba2d4395113bf2157a3ea,PodSandboxId:5e1eebee97f6b9d241e17235a27f64767bdc5937bd5ca3537ce4297a25812986,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722445176435123047,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.k
ubernetes.pod.name: kube-apiserver-ha-234651,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 399c842a2f5d3312c2f955f494ccfe00,},Annotations:map[string]string{io.kubernetes.container.hash: c9b9fb5b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e3ae09638d5d5724f3e27352156c272f710b70f2c3319adc46eb518d357dc8e2,PodSandboxId:5c961f0d694a144ae953e67f65f86080fd9a64718dfc16e8697475eadb817895,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722445176401816670,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kube
rnetes.pod.name: kube-controller-manager-ha-234651,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d9982d04d181fbc6333c44627c777728,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ded6421f2f11d7f64ce80d1681e3d4c9d1453ff8aecc860e184ad0865768adde,PodSandboxId:90ca479a21b17fc36c4328bfd82fe7023d144a893b3a1fbc315c66af59696ea2,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722445176364748602,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.n
ame: kube-scheduler-ha-234651,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3defc3711c46905c8aec12eb318ecd3b,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=d8623618-2fb6-43fb-a947-cb60374e59e9 name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 17:06:01 ha-234651 crio[683]: time="2024-07-31 17:06:01.281599129Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=a2955e96-581e-4fb6-899d-05a3120109e2 name=/runtime.v1.RuntimeService/Version
	Jul 31 17:06:01 ha-234651 crio[683]: time="2024-07-31 17:06:01.281664552Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=a2955e96-581e-4fb6-899d-05a3120109e2 name=/runtime.v1.RuntimeService/Version
	Jul 31 17:06:01 ha-234651 crio[683]: time="2024-07-31 17:06:01.282878690Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=b23ef035-3ff2-4615-81ee-0099e3c2bcb4 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 31 17:06:01 ha-234651 crio[683]: time="2024-07-31 17:06:01.283286577Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722445561283266784,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:145867,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=b23ef035-3ff2-4615-81ee-0099e3c2bcb4 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 31 17:06:01 ha-234651 crio[683]: time="2024-07-31 17:06:01.284044308Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=3211b2ae-e3ac-4b7f-bb60-ce67a7f8bcfc name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 17:06:01 ha-234651 crio[683]: time="2024-07-31 17:06:01.284099907Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=3211b2ae-e3ac-4b7f-bb60-ce67a7f8bcfc name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 17:06:01 ha-234651 crio[683]: time="2024-07-31 17:06:01.284301018Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:5e4d66f773ff46f7a130d9359ac1316e1117516ebef8b32f9b20515706b3a2f8,PodSandboxId:754996ae28b01f43599725b23a8ef7ea035322c1da4ccbda6c379abd7e897fe0,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722445212877304638,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-qbqb9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4f76f862-d39e-4976-90e6-fb9a25cc485a,},Annotations:map[string]string{io.kubernetes.container.hash: 9bcf3d4b,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\
"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e8ef655791fe4b0b98612bab1bc9fea7e42c73e5327bdd0654676979cb2e8542,PodSandboxId:1616415c6b8f6463e22fcd4bc693a293f8fdee9b60f2f3d789bc645ace8300a6,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722445212817639863,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-nsx9j,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid
: b2cde006-dbb7-4e6f-a5f1-cf7760740104,},Annotations:map[string]string{io.kubernetes.container.hash: 9d1dd63f,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0cc5e3465a8643593cfbf5871749b29c1edcf22920dda340f9e61a512894333d,PodSandboxId:1e9afabdcc733b5f6d64e03aa33527331e08c27fcc253d8f5a936ac47ba41635,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNI
NG,CreatedAt:1722445212754307746,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 87455537-bdb8-438b-8122-db85bed01d09,},Annotations:map[string]string{io.kubernetes.container.hash: f1005df0,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dd9f6c4536535bb3eb07d6fa75fe49579d9cdee5ff6e4c440e0a58359d4f2f08,PodSandboxId:99ba869aafb121f3032d38c0a926a34605210fa5e62304b252efb3ce0f139b7c,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:C
ONTAINER_RUNNING,CreatedAt:1722445201111476345,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-wfbt4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9eda8095-ce75-4043-8ddf-6e5663de8212,},Annotations:map[string]string{io.kubernetes.container.hash: f1243e66,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:631c8cee6152ae1808419f27d4b63fa0a6bcebb15c757dd96090c6267d63e6c2,PodSandboxId:88fd41ca8aad113a54b218f3a36159bdb37464ee5f76ec8ae31f27cd31f7daa1,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:17224451
97367288188,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-jfgs8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5ead85d8-0fd0-4900-8c02-2f23217ca208,},Annotations:map[string]string{io.kubernetes.container.hash: 85f1c355,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:639ed1a246cfddb5bcce86bc66133eebc7339f13348608e46548a872b8df456e,PodSandboxId:e715c97e96fabf0f1c9f4e6025318eda97b8f431f80df4d8ac4517b03917faad,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1722445179
126430451,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-234651,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 377009a00f211ea8abb80d365c74a9fd,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e5b3417940cd8bf8065dfefafb918da0e5bfda9fd5f7a2f9ccea0f1615151657,PodSandboxId:8ccf20bcd63f37163f71bce3d8a364eb92d86b0694bc75949c8979e5633c5cc2,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722445176479082471,Labels:map[string]string{io.kuber
netes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-234651,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7f8584f0ef1731ec6b6fb11b7fa84aeb,},Annotations:map[string]string{io.kubernetes.container.hash: a3feaab1,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b48ac56e48fe04efb8cbe0a1f24302f4ddd366a42a7ba2d4395113bf2157a3ea,PodSandboxId:5e1eebee97f6b9d241e17235a27f64767bdc5937bd5ca3537ce4297a25812986,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722445176435123047,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.k
ubernetes.pod.name: kube-apiserver-ha-234651,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 399c842a2f5d3312c2f955f494ccfe00,},Annotations:map[string]string{io.kubernetes.container.hash: c9b9fb5b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e3ae09638d5d5724f3e27352156c272f710b70f2c3319adc46eb518d357dc8e2,PodSandboxId:5c961f0d694a144ae953e67f65f86080fd9a64718dfc16e8697475eadb817895,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722445176401816670,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kube
rnetes.pod.name: kube-controller-manager-ha-234651,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d9982d04d181fbc6333c44627c777728,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ded6421f2f11d7f64ce80d1681e3d4c9d1453ff8aecc860e184ad0865768adde,PodSandboxId:90ca479a21b17fc36c4328bfd82fe7023d144a893b3a1fbc315c66af59696ea2,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722445176364748602,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.n
ame: kube-scheduler-ha-234651,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3defc3711c46905c8aec12eb318ecd3b,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=3211b2ae-e3ac-4b7f-bb60-ce67a7f8bcfc name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	5e4d66f773ff4       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                     5 minutes ago       Running             coredns                   0                   754996ae28b01       coredns-7db6d8ff4d-qbqb9
	e8ef655791fe4       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                     5 minutes ago       Running             coredns                   0                   1616415c6b8f6       coredns-7db6d8ff4d-nsx9j
	0cc5e3465a864       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                     5 minutes ago       Running             storage-provisioner       0                   1e9afabdcc733       storage-provisioner
	dd9f6c4536535       docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9   6 minutes ago       Running             kindnet-cni               0                   99ba869aafb12       kindnet-wfbt4
	631c8cee6152a       55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1                                     6 minutes ago       Running             kube-proxy                0                   88fd41ca8aad1       kube-proxy-jfgs8
	639ed1a246cfd       ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f    6 minutes ago       Running             kube-vip                  0                   e715c97e96fab       kube-vip-ha-234651
	e5b3417940cd8       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                     6 minutes ago       Running             etcd                      0                   8ccf20bcd63f3       etcd-ha-234651
	b48ac56e48fe0       1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d                                     6 minutes ago       Running             kube-apiserver            0                   5e1eebee97f6b       kube-apiserver-ha-234651
	e3ae09638d5d5       76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e                                     6 minutes ago       Running             kube-controller-manager   0                   5c961f0d694a1       kube-controller-manager-ha-234651
	ded6421f2f11d       3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2                                     6 minutes ago       Running             kube-scheduler            0                   90ca479a21b17       kube-scheduler-ha-234651
	
	
	==> coredns [5e4d66f773ff46f7a130d9359ac1316e1117516ebef8b32f9b20515706b3a2f8] <==
	[INFO] 10.244.2.2:36295 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000163438s
	[INFO] 10.244.2.2:39212 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd,ra 60 0.000147577s
	[INFO] 10.244.1.3:41084 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.003668415s
	[INFO] 10.244.1.3:58419 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000240845s
	[INFO] 10.244.1.3:46572 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000149759s
	[INFO] 10.244.1.3:46716 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000125267s
	[INFO] 10.244.2.2:44128 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00011516s
	[INFO] 10.244.2.2:51451 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000094315s
	[INFO] 10.244.2.2:36147 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.0001399s
	[INFO] 10.244.2.2:36545 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001276628s
	[INFO] 10.244.1.2:43917 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000113879s
	[INFO] 10.244.1.2:52270 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.00173961s
	[INFO] 10.244.1.2:43272 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000090127s
	[INFO] 10.244.1.2:40969 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001253454s
	[INFO] 10.244.1.2:36005 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000101429s
	[INFO] 10.244.1.3:57882 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000155324s
	[INFO] 10.244.1.3:52921 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000104436s
	[INFO] 10.244.1.3:53848 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000118293s
	[INFO] 10.244.1.2:59324 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000114877s
	[INFO] 10.244.1.2:35559 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000080871s
	[INFO] 10.244.1.3:36523 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000149158s
	[INFO] 10.244.1.3:43713 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000113949s
	[INFO] 10.244.2.2:57100 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000104476s
	[INFO] 10.244.2.2:36343 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000075949s
	[INFO] 10.244.1.2:36593 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000110887s
	
	
	==> coredns [e8ef655791fe4b0b98612bab1bc9fea7e42c73e5327bdd0654676979cb2e8542] <==
	[INFO] 10.244.1.3:39272 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000221307s
	[INFO] 10.244.1.3:45451 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.0001416s
	[INFO] 10.244.1.3:35968 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.003019868s
	[INFO] 10.244.1.3:50760 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000096087s
	[INFO] 10.244.2.2:47184 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001873446s
	[INFO] 10.244.2.2:52684 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000141574s
	[INFO] 10.244.2.2:55915 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000097985s
	[INFO] 10.244.2.2:37641 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000064285s
	[INFO] 10.244.1.2:44538 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000098479s
	[INFO] 10.244.1.2:51050 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000063987s
	[INFO] 10.244.1.2:53102 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000117625s
	[INFO] 10.244.1.3:34472 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000093028s
	[INFO] 10.244.2.2:50493 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000198464s
	[INFO] 10.244.2.2:59387 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000091819s
	[INFO] 10.244.2.2:46587 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000140652s
	[INFO] 10.244.2.2:44332 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000062045s
	[INFO] 10.244.1.2:56100 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000129501s
	[INFO] 10.244.1.2:52904 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000075504s
	[INFO] 10.244.1.3:45513 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000201365s
	[INFO] 10.244.1.3:56964 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000220702s
	[INFO] 10.244.2.2:52612 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000221354s
	[INFO] 10.244.2.2:34847 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000096723s
	[INFO] 10.244.1.2:54098 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00017441s
	[INFO] 10.244.1.2:35429 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000097269s
	[INFO] 10.244.1.2:35606 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000150264s
	
	
	==> describe nodes <==
	Name:               ha-234651
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-234651
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=1d737dad7efa60c56d30434fcd857dd3b14c91d9
	                    minikube.k8s.io/name=ha-234651
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_07_31T16_59_43_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 31 Jul 2024 16:59:42 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-234651
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 31 Jul 2024 17:06:00 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 31 Jul 2024 17:05:19 +0000   Wed, 31 Jul 2024 16:59:42 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 31 Jul 2024 17:05:19 +0000   Wed, 31 Jul 2024 16:59:42 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 31 Jul 2024 17:05:19 +0000   Wed, 31 Jul 2024 16:59:42 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 31 Jul 2024 17:05:19 +0000   Wed, 31 Jul 2024 17:00:12 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.243
	  Hostname:    ha-234651
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 78c611c203cf48ab9dc710fc8d4b3901
	  System UUID:                78c611c2-03cf-48ab-9dc7-10fc8d4b3901
	  Boot ID:                    7f43c774-6026-42b9-978d-915af2f564da
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (10 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-7db6d8ff4d-nsx9j             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     6m5s
	  kube-system                 coredns-7db6d8ff4d-qbqb9             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     6m5s
	  kube-system                 etcd-ha-234651                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         6m19s
	  kube-system                 kindnet-wfbt4                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      6m5s
	  kube-system                 kube-apiserver-ha-234651             250m (12%)    0 (0%)      0 (0%)           0 (0%)         6m19s
	  kube-system                 kube-controller-manager-ha-234651    200m (10%)    0 (0%)      0 (0%)           0 (0%)         6m19s
	  kube-system                 kube-proxy-jfgs8                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m5s
	  kube-system                 kube-scheduler-ha-234651             100m (5%)     0 (0%)      0 (0%)           0 (0%)         6m19s
	  kube-system                 kube-vip-ha-234651                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m19s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m4s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 6m3s   kube-proxy       
	  Normal  Starting                 6m19s  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  6m19s  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  6m19s  kubelet          Node ha-234651 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m19s  kubelet          Node ha-234651 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m19s  kubelet          Node ha-234651 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           6m5s   node-controller  Node ha-234651 event: Registered Node ha-234651 in Controller
	  Normal  NodeReady                5m49s  kubelet          Node ha-234651 status is now: NodeReady
	  Normal  RegisteredNode           5m1s   node-controller  Node ha-234651 event: Registered Node ha-234651 in Controller
	  Normal  RegisteredNode           3m45s  node-controller  Node ha-234651 event: Registered Node ha-234651 in Controller
	
	
	Name:               ha-234651-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-234651-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=1d737dad7efa60c56d30434fcd857dd3b14c91d9
	                    minikube.k8s.io/name=ha-234651
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_07_31T17_00_46_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 31 Jul 2024 17:00:43 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-234651-m02
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 31 Jul 2024 17:03:38 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Wed, 31 Jul 2024 17:02:47 +0000   Wed, 31 Jul 2024 17:04:21 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Wed, 31 Jul 2024 17:02:47 +0000   Wed, 31 Jul 2024 17:04:21 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Wed, 31 Jul 2024 17:02:47 +0000   Wed, 31 Jul 2024 17:04:21 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Wed, 31 Jul 2024 17:02:47 +0000   Wed, 31 Jul 2024 17:04:21 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.39.235
	  Hostname:    ha-234651-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 f48a6b3aa33049d58a0ceaa57200b934
	  System UUID:                f48a6b3a-a330-49d5-8a0c-eaa57200b934
	  Boot ID:                    4f0c8ea9-325a-45d8-974f-4ccdaaffa5ca
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-2w6fp                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m35s
	  default                     busybox-fc5497c4f-qw457                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m35s
	  kube-system                 etcd-ha-234651-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         5m16s
	  kube-system                 kindnet-phmdp                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      5m18s
	  kube-system                 kube-apiserver-ha-234651-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         5m17s
	  kube-system                 kube-controller-manager-ha-234651-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         5m17s
	  kube-system                 kube-proxy-b8dcw                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m18s
	  kube-system                 kube-scheduler-ha-234651-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         5m17s
	  kube-system                 kube-vip-ha-234651-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m14s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 5m14s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  5m18s (x8 over 5m18s)  kubelet          Node ha-234651-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m18s (x8 over 5m18s)  kubelet          Node ha-234651-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m18s (x7 over 5m18s)  kubelet          Node ha-234651-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  5m18s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           5m15s                  node-controller  Node ha-234651-m02 event: Registered Node ha-234651-m02 in Controller
	  Normal  RegisteredNode           5m1s                   node-controller  Node ha-234651-m02 event: Registered Node ha-234651-m02 in Controller
	  Normal  RegisteredNode           3m45s                  node-controller  Node ha-234651-m02 event: Registered Node ha-234651-m02 in Controller
	  Normal  NodeNotReady             100s                   node-controller  Node ha-234651-m02 status is now: NodeNotReady
	
	
	Name:               ha-234651-m03
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-234651-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=1d737dad7efa60c56d30434fcd857dd3b14c91d9
	                    minikube.k8s.io/name=ha-234651
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_07_31T17_02_01_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 31 Jul 2024 17:01:58 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-234651-m03
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 31 Jul 2024 17:05:53 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 31 Jul 2024 17:03:00 +0000   Wed, 31 Jul 2024 17:01:58 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 31 Jul 2024 17:03:00 +0000   Wed, 31 Jul 2024 17:01:58 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 31 Jul 2024 17:03:00 +0000   Wed, 31 Jul 2024 17:01:58 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 31 Jul 2024 17:03:00 +0000   Wed, 31 Jul 2024 17:02:19 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.139
	  Hostname:    ha-234651-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 bedcbc00eaa142208b6f46ab90ace771
	  System UUID:                bedcbc00-eaa1-4220-8b6f-46ab90ace771
	  Boot ID:                    8963cccd-a585-41ce-80e2-aa6c1268ee1b
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-fdmbt                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m35s
	  kube-system                 etcd-ha-234651-m03                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         3m55s
	  kube-system                 kindnet-2xqxq                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      4m3s
	  kube-system                 kube-apiserver-ha-234651-m03             250m (12%)    0 (0%)      0 (0%)           0 (0%)         4m1s
	  kube-system                 kube-controller-manager-ha-234651-m03    200m (10%)    0 (0%)      0 (0%)           0 (0%)         3m54s
	  kube-system                 kube-proxy-gfgjd                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m3s
	  kube-system                 kube-scheduler-ha-234651-m03             100m (5%)     0 (0%)      0 (0%)           0 (0%)         3m56s
	  kube-system                 kube-vip-ha-234651-m03                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m59s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 3m59s                kube-proxy       
	  Normal  NodeHasSufficientMemory  4m3s (x8 over 4m3s)  kubelet          Node ha-234651-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m3s (x8 over 4m3s)  kubelet          Node ha-234651-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m3s (x7 over 4m3s)  kubelet          Node ha-234651-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m3s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           4m1s                 node-controller  Node ha-234651-m03 event: Registered Node ha-234651-m03 in Controller
	  Normal  RegisteredNode           4m                   node-controller  Node ha-234651-m03 event: Registered Node ha-234651-m03 in Controller
	  Normal  RegisteredNode           3m45s                node-controller  Node ha-234651-m03 event: Registered Node ha-234651-m03 in Controller
	
	
	Name:               ha-234651-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-234651-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=1d737dad7efa60c56d30434fcd857dd3b14c91d9
	                    minikube.k8s.io/name=ha-234651
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_07_31T17_03_01_0700
	                    minikube.k8s.io/version=v1.33.1
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 31 Jul 2024 17:03:00 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-234651-m04
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 31 Jul 2024 17:05:54 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 31 Jul 2024 17:03:31 +0000   Wed, 31 Jul 2024 17:03:00 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 31 Jul 2024 17:03:31 +0000   Wed, 31 Jul 2024 17:03:00 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 31 Jul 2024 17:03:31 +0000   Wed, 31 Jul 2024 17:03:00 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 31 Jul 2024 17:03:31 +0000   Wed, 31 Jul 2024 17:03:21 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.216
	  Hostname:    ha-234651-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 6cc10919121c4c939afe8d5b5f293c45
	  System UUID:                6cc10919-121c-4c93-9afe-8d5b5f293c45
	  Boot ID:                    ad8f4b1b-56d6-4379-babe-ef3d0a8d6eef
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-qnml8       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      3m1s
	  kube-system                 kube-proxy-4b8gn    0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m1s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 2m56s                kube-proxy       
	  Normal  RegisteredNode           3m1s                 node-controller  Node ha-234651-m04 event: Registered Node ha-234651-m04 in Controller
	  Normal  NodeHasSufficientMemory  3m1s (x3 over 3m1s)  kubelet          Node ha-234651-m04 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m1s (x3 over 3m1s)  kubelet          Node ha-234651-m04 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m1s (x3 over 3m1s)  kubelet          Node ha-234651-m04 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  3m1s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           3m                   node-controller  Node ha-234651-m04 event: Registered Node ha-234651-m04 in Controller
	  Normal  RegisteredNode           3m                   node-controller  Node ha-234651-m04 event: Registered Node ha-234651-m04 in Controller
	  Normal  NodeReady                2m40s                kubelet          Node ha-234651-m04 status is now: NodeReady
	
	
	==> dmesg <==
	[Jul31 16:59] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.050822] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.036777] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.665799] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +1.749974] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +4.543243] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000008] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +9.022347] systemd-fstab-generator[596]: Ignoring "noauto" option for root device
	[  +0.054568] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.050413] systemd-fstab-generator[608]: Ignoring "noauto" option for root device
	[  +0.168871] systemd-fstab-generator[622]: Ignoring "noauto" option for root device
	[  +0.140385] systemd-fstab-generator[634]: Ignoring "noauto" option for root device
	[  +0.264161] systemd-fstab-generator[667]: Ignoring "noauto" option for root device
	[  +3.967546] systemd-fstab-generator[766]: Ignoring "noauto" option for root device
	[  +4.664986] systemd-fstab-generator[951]: Ignoring "noauto" option for root device
	[  +0.073312] kauditd_printk_skb: 158 callbacks suppressed
	[  +7.322437] systemd-fstab-generator[1369]: Ignoring "noauto" option for root device
	[  +0.079901] kauditd_printk_skb: 79 callbacks suppressed
	[ +13.802679] kauditd_printk_skb: 21 callbacks suppressed
	[Jul31 17:00] kauditd_printk_skb: 34 callbacks suppressed
	[ +47.907295] kauditd_printk_skb: 26 callbacks suppressed
	
	
	==> etcd [e5b3417940cd8bf8065dfefafb918da0e5bfda9fd5f7a2f9ccea0f1615151657] <==
	{"level":"warn","ts":"2024-07-31T17:06:01.339056Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"4d6f7e7e767b3ff3","from":"4d6f7e7e767b3ff3","remote-peer-id":"6d43d0f719899a7e","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-31T17:06:01.339723Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"4d6f7e7e767b3ff3","from":"4d6f7e7e767b3ff3","remote-peer-id":"6d43d0f719899a7e","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-31T17:06:01.439657Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"4d6f7e7e767b3ff3","from":"4d6f7e7e767b3ff3","remote-peer-id":"6d43d0f719899a7e","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-31T17:06:01.488468Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"4d6f7e7e767b3ff3","from":"4d6f7e7e767b3ff3","remote-peer-id":"6d43d0f719899a7e","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-31T17:06:01.508445Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"4d6f7e7e767b3ff3","from":"4d6f7e7e767b3ff3","remote-peer-id":"6d43d0f719899a7e","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-31T17:06:01.566616Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"4d6f7e7e767b3ff3","from":"4d6f7e7e767b3ff3","remote-peer-id":"6d43d0f719899a7e","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-31T17:06:01.575556Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"4d6f7e7e767b3ff3","from":"4d6f7e7e767b3ff3","remote-peer-id":"6d43d0f719899a7e","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-31T17:06:01.583153Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"4d6f7e7e767b3ff3","from":"4d6f7e7e767b3ff3","remote-peer-id":"6d43d0f719899a7e","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-31T17:06:01.587326Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"4d6f7e7e767b3ff3","from":"4d6f7e7e767b3ff3","remote-peer-id":"6d43d0f719899a7e","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-31T17:06:01.590764Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"4d6f7e7e767b3ff3","from":"4d6f7e7e767b3ff3","remote-peer-id":"6d43d0f719899a7e","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-31T17:06:01.601532Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"4d6f7e7e767b3ff3","from":"4d6f7e7e767b3ff3","remote-peer-id":"6d43d0f719899a7e","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-31T17:06:01.608585Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"4d6f7e7e767b3ff3","from":"4d6f7e7e767b3ff3","remote-peer-id":"6d43d0f719899a7e","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-31T17:06:01.614013Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"4d6f7e7e767b3ff3","from":"4d6f7e7e767b3ff3","remote-peer-id":"6d43d0f719899a7e","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-31T17:06:01.618225Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"4d6f7e7e767b3ff3","from":"4d6f7e7e767b3ff3","remote-peer-id":"6d43d0f719899a7e","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-31T17:06:01.621164Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"4d6f7e7e767b3ff3","from":"4d6f7e7e767b3ff3","remote-peer-id":"6d43d0f719899a7e","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-31T17:06:01.629566Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"4d6f7e7e767b3ff3","from":"4d6f7e7e767b3ff3","remote-peer-id":"6d43d0f719899a7e","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-31T17:06:01.635299Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"4d6f7e7e767b3ff3","from":"4d6f7e7e767b3ff3","remote-peer-id":"6d43d0f719899a7e","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-31T17:06:01.639707Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"4d6f7e7e767b3ff3","from":"4d6f7e7e767b3ff3","remote-peer-id":"6d43d0f719899a7e","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-31T17:06:01.640224Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"4d6f7e7e767b3ff3","from":"4d6f7e7e767b3ff3","remote-peer-id":"6d43d0f719899a7e","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-31T17:06:01.646678Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"4d6f7e7e767b3ff3","from":"4d6f7e7e767b3ff3","remote-peer-id":"6d43d0f719899a7e","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-31T17:06:01.650085Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"4d6f7e7e767b3ff3","from":"4d6f7e7e767b3ff3","remote-peer-id":"6d43d0f719899a7e","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-31T17:06:01.65659Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"4d6f7e7e767b3ff3","from":"4d6f7e7e767b3ff3","remote-peer-id":"6d43d0f719899a7e","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-31T17:06:01.659206Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"4d6f7e7e767b3ff3","from":"4d6f7e7e767b3ff3","remote-peer-id":"6d43d0f719899a7e","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-31T17:06:01.664196Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"4d6f7e7e767b3ff3","from":"4d6f7e7e767b3ff3","remote-peer-id":"6d43d0f719899a7e","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-31T17:06:01.671285Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"4d6f7e7e767b3ff3","from":"4d6f7e7e767b3ff3","remote-peer-id":"6d43d0f719899a7e","remote-peer-name":"pipeline","remote-peer-active":false}
	
	
	==> kernel <==
	 17:06:01 up 6 min,  0 users,  load average: 0.50, 0.55, 0.28
	Linux ha-234651 5.10.207 #1 SMP Mon Jul 29 15:19:02 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [dd9f6c4536535bb3eb07d6fa75fe49579d9cdee5ff6e4c440e0a58359d4f2f08] <==
	I0731 17:05:22.128877       1 main.go:322] Node ha-234651-m04 has CIDR [10.244.3.0/24] 
	I0731 17:05:32.131902       1 main.go:295] Handling node with IPs: map[192.168.39.243:{}]
	I0731 17:05:32.131964       1 main.go:299] handling current node
	I0731 17:05:32.131979       1 main.go:295] Handling node with IPs: map[192.168.39.235:{}]
	I0731 17:05:32.131984       1 main.go:322] Node ha-234651-m02 has CIDR [10.244.1.0/24] 
	I0731 17:05:32.132145       1 main.go:295] Handling node with IPs: map[192.168.39.139:{}]
	I0731 17:05:32.132167       1 main.go:322] Node ha-234651-m03 has CIDR [10.244.2.0/24] 
	I0731 17:05:32.132225       1 main.go:295] Handling node with IPs: map[192.168.39.216:{}]
	I0731 17:05:32.132274       1 main.go:322] Node ha-234651-m04 has CIDR [10.244.3.0/24] 
	I0731 17:05:42.126525       1 main.go:295] Handling node with IPs: map[192.168.39.216:{}]
	I0731 17:05:42.126694       1 main.go:322] Node ha-234651-m04 has CIDR [10.244.3.0/24] 
	I0731 17:05:42.126882       1 main.go:295] Handling node with IPs: map[192.168.39.243:{}]
	I0731 17:05:42.127025       1 main.go:299] handling current node
	I0731 17:05:42.127074       1 main.go:295] Handling node with IPs: map[192.168.39.235:{}]
	I0731 17:05:42.127096       1 main.go:322] Node ha-234651-m02 has CIDR [10.244.1.0/24] 
	I0731 17:05:42.127188       1 main.go:295] Handling node with IPs: map[192.168.39.139:{}]
	I0731 17:05:42.127208       1 main.go:322] Node ha-234651-m03 has CIDR [10.244.2.0/24] 
	I0731 17:05:52.128110       1 main.go:295] Handling node with IPs: map[192.168.39.243:{}]
	I0731 17:05:52.128192       1 main.go:299] handling current node
	I0731 17:05:52.128225       1 main.go:295] Handling node with IPs: map[192.168.39.235:{}]
	I0731 17:05:52.128231       1 main.go:322] Node ha-234651-m02 has CIDR [10.244.1.0/24] 
	I0731 17:05:52.128461       1 main.go:295] Handling node with IPs: map[192.168.39.139:{}]
	I0731 17:05:52.128479       1 main.go:322] Node ha-234651-m03 has CIDR [10.244.2.0/24] 
	I0731 17:05:52.128534       1 main.go:295] Handling node with IPs: map[192.168.39.216:{}]
	I0731 17:05:52.128551       1 main.go:322] Node ha-234651-m04 has CIDR [10.244.3.0/24] 
	
	
	==> kube-apiserver [b48ac56e48fe04efb8cbe0a1f24302f4ddd366a42a7ba2d4395113bf2157a3ea] <==
	I0731 16:59:41.278961       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W0731 16:59:41.285624       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.243]
	I0731 16:59:41.286548       1 controller.go:615] quota admission added evaluator for: endpoints
	I0731 16:59:41.291521       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0731 16:59:41.491319       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0731 16:59:42.740597       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0731 16:59:42.753582       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I0731 16:59:42.898693       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0731 16:59:56.022208       1 controller.go:615] quota admission added evaluator for: controllerrevisions.apps
	I0731 16:59:56.106082       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	E0731 17:02:31.881906       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:60088: use of closed network connection
	E0731 17:02:32.047811       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:60106: use of closed network connection
	E0731 17:02:32.230182       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:60124: use of closed network connection
	E0731 17:02:32.409711       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:60154: use of closed network connection
	E0731 17:02:32.586020       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:60168: use of closed network connection
	E0731 17:02:32.760544       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:60198: use of closed network connection
	E0731 17:02:32.934938       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:60210: use of closed network connection
	E0731 17:02:33.105728       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:60236: use of closed network connection
	E0731 17:02:33.377977       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:60254: use of closed network connection
	E0731 17:02:33.539826       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:60266: use of closed network connection
	E0731 17:02:33.706202       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:60292: use of closed network connection
	E0731 17:02:33.871884       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:60304: use of closed network connection
	E0731 17:02:34.042800       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:60330: use of closed network connection
	E0731 17:02:34.212508       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:60342: use of closed network connection
	W0731 17:04:01.298435       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.139 192.168.39.243]
	
	
	==> kube-controller-manager [e3ae09638d5d5724f3e27352156c272f710b70f2c3319adc46eb518d357dc8e2] <==
	I0731 17:02:27.105508       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="58.125µs"
	I0731 17:02:27.543579       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="44.434µs"
	I0731 17:02:28.548910       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="73.962µs"
	I0731 17:02:28.559909       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="56.406µs"
	I0731 17:02:28.564489       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="42.591µs"
	I0731 17:02:28.686722       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="83.291µs"
	I0731 17:02:30.263248       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="10.859513ms"
	I0731 17:02:30.263688       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="60.971µs"
	I0731 17:02:30.957804       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="10.563351ms"
	I0731 17:02:30.958323       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="81.378µs"
	I0731 17:02:31.272030       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="11.121275ms"
	I0731 17:02:31.272231       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="76.813µs"
	E0731 17:03:00.315712       1 certificate_controller.go:146] Sync csr-z4x7k failed with : error updating approval for csr: Operation cannot be fulfilled on certificatesigningrequests.certificates.k8s.io "csr-z4x7k": the object has been modified; please apply your changes to the latest version and try again
	E0731 17:03:00.573677       1 certificate_controller.go:146] Sync csr-z4x7k failed with : error updating signature for csr: Operation cannot be fulfilled on certificatesigningrequests.certificates.k8s.io "csr-z4x7k": the object has been modified; please apply your changes to the latest version and try again
	I0731 17:03:00.609984       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"ha-234651-m04\" does not exist"
	I0731 17:03:00.657440       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="ha-234651-m04" podCIDRs=["10.244.3.0/24"]
	I0731 17:03:01.076771       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-234651-m04"
	I0731 17:03:21.239060       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-234651-m04"
	I0731 17:04:21.105420       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-234651-m04"
	I0731 17:04:21.185436       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="13.413544ms"
	I0731 17:04:21.185518       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="48.667µs"
	I0731 17:04:21.208326       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="16.195491ms"
	I0731 17:04:21.208513       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="48.429µs"
	I0731 17:04:21.245645       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="14.319898ms"
	I0731 17:04:21.246538       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="40.344µs"
	
	
	==> kube-proxy [631c8cee6152ae1808419f27d4b63fa0a6bcebb15c757dd96090c6267d63e6c2] <==
	I0731 16:59:57.547806       1 server_linux.go:69] "Using iptables proxy"
	I0731 16:59:57.565295       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.39.243"]
	I0731 16:59:57.614393       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0731 16:59:57.614454       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0731 16:59:57.614473       1 server_linux.go:165] "Using iptables Proxier"
	I0731 16:59:57.617425       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0731 16:59:57.617906       1 server.go:872] "Version info" version="v1.30.3"
	I0731 16:59:57.617937       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0731 16:59:57.619968       1 config.go:192] "Starting service config controller"
	I0731 16:59:57.620464       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0731 16:59:57.620514       1 config.go:101] "Starting endpoint slice config controller"
	I0731 16:59:57.620520       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0731 16:59:57.621534       1 config.go:319] "Starting node config controller"
	I0731 16:59:57.621562       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0731 16:59:57.720963       1 shared_informer.go:320] Caches are synced for service config
	I0731 16:59:57.720972       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0731 16:59:57.721723       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [ded6421f2f11d7f64ce80d1681e3d4c9d1453ff8aecc860e184ad0865768adde] <==
	I0731 17:02:26.714143       1 cache.go:503] "Pod was added to a different node than it was assumed" podKey="b1c66d4c-91d3-498a-b3ae-c705ae28c8fa" pod="default/busybox-fc5497c4f-fdmbt" assumedNode="ha-234651-m03" currentNode="ha-234651"
	E0731 17:02:26.725726       1 framework.go:1286] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-fc5497c4f-qw457\": pod busybox-fc5497c4f-qw457 is already assigned to node \"ha-234651-m02\"" plugin="DefaultBinder" pod="default/busybox-fc5497c4f-qw457" node="ha-234651-m03"
	E0731 17:02:26.725803       1 schedule_one.go:338] "scheduler cache ForgetPod failed" err="pod c4caccfd-f3bd-402f-8d90-0ed9d02f5c2d(default/busybox-fc5497c4f-qw457) was assumed on ha-234651-m03 but assigned to ha-234651-m02" pod="default/busybox-fc5497c4f-qw457"
	E0731 17:02:26.725828       1 schedule_one.go:1046] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-fc5497c4f-qw457\": pod busybox-fc5497c4f-qw457 is already assigned to node \"ha-234651-m02\"" pod="default/busybox-fc5497c4f-qw457"
	I0731 17:02:26.725861       1 schedule_one.go:1059] "Pod has been assigned to node. Abort adding it back to queue." pod="default/busybox-fc5497c4f-qw457" node="ha-234651-m02"
	E0731 17:02:26.727014       1 framework.go:1286] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-fc5497c4f-fdmbt\": pod busybox-fc5497c4f-fdmbt is already assigned to node \"ha-234651-m03\"" plugin="DefaultBinder" pod="default/busybox-fc5497c4f-fdmbt" node="ha-234651"
	E0731 17:02:26.727069       1 schedule_one.go:338] "scheduler cache ForgetPod failed" err="pod b1c66d4c-91d3-498a-b3ae-c705ae28c8fa(default/busybox-fc5497c4f-fdmbt) was assumed on ha-234651 but assigned to ha-234651-m03" pod="default/busybox-fc5497c4f-fdmbt"
	E0731 17:02:26.727084       1 schedule_one.go:1046] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-fc5497c4f-fdmbt\": pod busybox-fc5497c4f-fdmbt is already assigned to node \"ha-234651-m03\"" pod="default/busybox-fc5497c4f-fdmbt"
	I0731 17:02:26.727098       1 schedule_one.go:1059] "Pod has been assigned to node. Abort adding it back to queue." pod="default/busybox-fc5497c4f-fdmbt" node="ha-234651-m03"
	E0731 17:03:00.685760       1 framework.go:1286] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-z8qd9\": pod kube-proxy-z8qd9 is already assigned to node \"ha-234651-m04\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-z8qd9" node="ha-234651-m04"
	E0731 17:03:00.685947       1 schedule_one.go:338] "scheduler cache ForgetPod failed" err="pod bb0abc33-d04d-41ad-adc9-39420f19a821(kube-system/kube-proxy-z8qd9) wasn't assumed so cannot be forgotten" pod="kube-system/kube-proxy-z8qd9"
	E0731 17:03:00.686961       1 schedule_one.go:1046] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-z8qd9\": pod kube-proxy-z8qd9 is already assigned to node \"ha-234651-m04\"" pod="kube-system/kube-proxy-z8qd9"
	I0731 17:03:00.687075       1 schedule_one.go:1059] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kube-proxy-z8qd9" node="ha-234651-m04"
	E0731 17:03:00.685858       1 framework.go:1286] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-xlhp4\": pod kindnet-xlhp4 is already assigned to node \"ha-234651-m04\"" plugin="DefaultBinder" pod="kube-system/kindnet-xlhp4" node="ha-234651-m04"
	E0731 17:03:00.689443       1 schedule_one.go:338] "scheduler cache ForgetPod failed" err="pod 1685cdf9-de2d-4ad0-bb7e-2b2cd5fc6cba(kube-system/kindnet-xlhp4) wasn't assumed so cannot be forgotten" pod="kube-system/kindnet-xlhp4"
	E0731 17:03:00.689565       1 schedule_one.go:1046] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-xlhp4\": pod kindnet-xlhp4 is already assigned to node \"ha-234651-m04\"" pod="kube-system/kindnet-xlhp4"
	I0731 17:03:00.689662       1 schedule_one.go:1059] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-xlhp4" node="ha-234651-m04"
	E0731 17:03:00.721461       1 framework.go:1286] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-qnml8\": pod kindnet-qnml8 is already assigned to node \"ha-234651-m04\"" plugin="DefaultBinder" pod="kube-system/kindnet-qnml8" node="ha-234651-m04"
	E0731 17:03:00.721533       1 schedule_one.go:338] "scheduler cache ForgetPod failed" err="pod 254727bb-4578-4f88-8838-553d196d806d(kube-system/kindnet-qnml8) wasn't assumed so cannot be forgotten" pod="kube-system/kindnet-qnml8"
	E0731 17:03:00.721554       1 schedule_one.go:1046] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-qnml8\": pod kindnet-qnml8 is already assigned to node \"ha-234651-m04\"" pod="kube-system/kindnet-qnml8"
	I0731 17:03:00.721578       1 schedule_one.go:1059] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-qnml8" node="ha-234651-m04"
	E0731 17:03:00.721806       1 framework.go:1286] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-4b8gn\": pod kube-proxy-4b8gn is already assigned to node \"ha-234651-m04\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-4b8gn" node="ha-234651-m04"
	E0731 17:03:00.722071       1 schedule_one.go:338] "scheduler cache ForgetPod failed" err="pod 402fc7f7-8d84-4dc0-936c-e39d5411430a(kube-system/kube-proxy-4b8gn) wasn't assumed so cannot be forgotten" pod="kube-system/kube-proxy-4b8gn"
	E0731 17:03:00.722583       1 schedule_one.go:1046] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-4b8gn\": pod kube-proxy-4b8gn is already assigned to node \"ha-234651-m04\"" pod="kube-system/kube-proxy-4b8gn"
	I0731 17:03:00.722794       1 schedule_one.go:1059] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kube-proxy-4b8gn" node="ha-234651-m04"
	
	
	==> kubelet <==
	Jul 31 17:02:26 ha-234651 kubelet[1377]: I0731 17:02:26.929195    1377 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vvtp7\" (UniqueName: \"kubernetes.io/projected/0bd4b5ea-d73f-4cca-95fe-c1dd8abaf904-kube-api-access-vvtp7\") pod \"busybox-fc5497c4f-xr5vn\" (UID: \"0bd4b5ea-d73f-4cca-95fe-c1dd8abaf904\") " pod="default/busybox-fc5497c4f-xr5vn"
	Jul 31 17:02:27 ha-234651 kubelet[1377]: I0731 17:02:27.634027    1377 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vvtp7\" (UniqueName: \"kubernetes.io/projected/0bd4b5ea-d73f-4cca-95fe-c1dd8abaf904-kube-api-access-vvtp7\") pod \"0bd4b5ea-d73f-4cca-95fe-c1dd8abaf904\" (UID: \"0bd4b5ea-d73f-4cca-95fe-c1dd8abaf904\") "
	Jul 31 17:02:27 ha-234651 kubelet[1377]: I0731 17:02:27.639038    1377 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0bd4b5ea-d73f-4cca-95fe-c1dd8abaf904-kube-api-access-vvtp7" (OuterVolumeSpecName: "kube-api-access-vvtp7") pod "0bd4b5ea-d73f-4cca-95fe-c1dd8abaf904" (UID: "0bd4b5ea-d73f-4cca-95fe-c1dd8abaf904"). InnerVolumeSpecName "kube-api-access-vvtp7". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Jul 31 17:02:27 ha-234651 kubelet[1377]: I0731 17:02:27.735183    1377 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-vvtp7\" (UniqueName: \"kubernetes.io/projected/0bd4b5ea-d73f-4cca-95fe-c1dd8abaf904-kube-api-access-vvtp7\") on node \"ha-234651\" DevicePath \"\""
	Jul 31 17:02:28 ha-234651 kubelet[1377]: I0731 17:02:28.869866    1377 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0bd4b5ea-d73f-4cca-95fe-c1dd8abaf904" path="/var/lib/kubelet/pods/0bd4b5ea-d73f-4cca-95fe-c1dd8abaf904/volumes"
	Jul 31 17:02:42 ha-234651 kubelet[1377]: E0731 17:02:42.881806    1377 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 31 17:02:42 ha-234651 kubelet[1377]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 31 17:02:42 ha-234651 kubelet[1377]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 31 17:02:42 ha-234651 kubelet[1377]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 31 17:02:42 ha-234651 kubelet[1377]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 31 17:03:42 ha-234651 kubelet[1377]: E0731 17:03:42.887541    1377 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 31 17:03:42 ha-234651 kubelet[1377]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 31 17:03:42 ha-234651 kubelet[1377]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 31 17:03:42 ha-234651 kubelet[1377]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 31 17:03:42 ha-234651 kubelet[1377]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 31 17:04:42 ha-234651 kubelet[1377]: E0731 17:04:42.892127    1377 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 31 17:04:42 ha-234651 kubelet[1377]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 31 17:04:42 ha-234651 kubelet[1377]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 31 17:04:42 ha-234651 kubelet[1377]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 31 17:04:42 ha-234651 kubelet[1377]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 31 17:05:42 ha-234651 kubelet[1377]: E0731 17:05:42.882489    1377 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 31 17:05:42 ha-234651 kubelet[1377]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 31 17:05:42 ha-234651 kubelet[1377]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 31 17:05:42 ha-234651 kubelet[1377]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 31 17:05:42 ha-234651 kubelet[1377]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-234651 -n ha-234651
helpers_test.go:261: (dbg) Run:  kubectl --context ha-234651 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/StopSecondaryNode FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/StopSecondaryNode (141.75s)

                                                
                                    
x
+
TestMultiControlPlane/serial/RestartSecondaryNode (60.39s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:420: (dbg) Run:  out/minikube-linux-amd64 -p ha-234651 node start m02 -v=7 --alsologtostderr
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-234651 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-234651 status -v=7 --alsologtostderr: exit status 3 (3.205288513s)

                                                
                                                
-- stdout --
	ha-234651
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-234651-m02
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-234651-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-234651-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0731 17:06:06.173410   31201 out.go:291] Setting OutFile to fd 1 ...
	I0731 17:06:06.173528   31201 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 17:06:06.173537   31201 out.go:304] Setting ErrFile to fd 2...
	I0731 17:06:06.173543   31201 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 17:06:06.173732   31201 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19349-8084/.minikube/bin
	I0731 17:06:06.173886   31201 out.go:298] Setting JSON to false
	I0731 17:06:06.173907   31201 mustload.go:65] Loading cluster: ha-234651
	I0731 17:06:06.173945   31201 notify.go:220] Checking for updates...
	I0731 17:06:06.174433   31201 config.go:182] Loaded profile config "ha-234651": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0731 17:06:06.174455   31201 status.go:255] checking status of ha-234651 ...
	I0731 17:06:06.174865   31201 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 17:06:06.174922   31201 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 17:06:06.193484   31201 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41657
	I0731 17:06:06.194011   31201 main.go:141] libmachine: () Calling .GetVersion
	I0731 17:06:06.194577   31201 main.go:141] libmachine: Using API Version  1
	I0731 17:06:06.194598   31201 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 17:06:06.194956   31201 main.go:141] libmachine: () Calling .GetMachineName
	I0731 17:06:06.195164   31201 main.go:141] libmachine: (ha-234651) Calling .GetState
	I0731 17:06:06.196875   31201 status.go:330] ha-234651 host status = "Running" (err=<nil>)
	I0731 17:06:06.196904   31201 host.go:66] Checking if "ha-234651" exists ...
	I0731 17:06:06.197167   31201 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 17:06:06.197195   31201 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 17:06:06.211391   31201 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43025
	I0731 17:06:06.211733   31201 main.go:141] libmachine: () Calling .GetVersion
	I0731 17:06:06.212185   31201 main.go:141] libmachine: Using API Version  1
	I0731 17:06:06.212200   31201 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 17:06:06.212490   31201 main.go:141] libmachine: () Calling .GetMachineName
	I0731 17:06:06.212697   31201 main.go:141] libmachine: (ha-234651) Calling .GetIP
	I0731 17:06:06.215258   31201 main.go:141] libmachine: (ha-234651) DBG | domain ha-234651 has defined MAC address 52:54:00:20:60:53 in network mk-ha-234651
	I0731 17:06:06.215657   31201 main.go:141] libmachine: (ha-234651) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:60:53", ip: ""} in network mk-ha-234651: {Iface:virbr1 ExpiryTime:2024-07-31 17:59:15 +0000 UTC Type:0 Mac:52:54:00:20:60:53 Iaid: IPaddr:192.168.39.243 Prefix:24 Hostname:ha-234651 Clientid:01:52:54:00:20:60:53}
	I0731 17:06:06.215685   31201 main.go:141] libmachine: (ha-234651) DBG | domain ha-234651 has defined IP address 192.168.39.243 and MAC address 52:54:00:20:60:53 in network mk-ha-234651
	I0731 17:06:06.215803   31201 host.go:66] Checking if "ha-234651" exists ...
	I0731 17:06:06.216089   31201 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 17:06:06.216130   31201 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 17:06:06.231563   31201 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42337
	I0731 17:06:06.231931   31201 main.go:141] libmachine: () Calling .GetVersion
	I0731 17:06:06.232392   31201 main.go:141] libmachine: Using API Version  1
	I0731 17:06:06.232412   31201 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 17:06:06.232695   31201 main.go:141] libmachine: () Calling .GetMachineName
	I0731 17:06:06.232875   31201 main.go:141] libmachine: (ha-234651) Calling .DriverName
	I0731 17:06:06.233048   31201 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0731 17:06:06.233071   31201 main.go:141] libmachine: (ha-234651) Calling .GetSSHHostname
	I0731 17:06:06.235520   31201 main.go:141] libmachine: (ha-234651) DBG | domain ha-234651 has defined MAC address 52:54:00:20:60:53 in network mk-ha-234651
	I0731 17:06:06.235913   31201 main.go:141] libmachine: (ha-234651) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:60:53", ip: ""} in network mk-ha-234651: {Iface:virbr1 ExpiryTime:2024-07-31 17:59:15 +0000 UTC Type:0 Mac:52:54:00:20:60:53 Iaid: IPaddr:192.168.39.243 Prefix:24 Hostname:ha-234651 Clientid:01:52:54:00:20:60:53}
	I0731 17:06:06.235931   31201 main.go:141] libmachine: (ha-234651) DBG | domain ha-234651 has defined IP address 192.168.39.243 and MAC address 52:54:00:20:60:53 in network mk-ha-234651
	I0731 17:06:06.236065   31201 main.go:141] libmachine: (ha-234651) Calling .GetSSHPort
	I0731 17:06:06.236205   31201 main.go:141] libmachine: (ha-234651) Calling .GetSSHKeyPath
	I0731 17:06:06.236354   31201 main.go:141] libmachine: (ha-234651) Calling .GetSSHUsername
	I0731 17:06:06.236480   31201 sshutil.go:53] new ssh client: &{IP:192.168.39.243 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19349-8084/.minikube/machines/ha-234651/id_rsa Username:docker}
	I0731 17:06:06.317933   31201 ssh_runner.go:195] Run: systemctl --version
	I0731 17:06:06.323849   31201 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0731 17:06:06.338264   31201 kubeconfig.go:125] found "ha-234651" server: "https://192.168.39.254:8443"
	I0731 17:06:06.338294   31201 api_server.go:166] Checking apiserver status ...
	I0731 17:06:06.338335   31201 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 17:06:06.352645   31201 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1220/cgroup
	W0731 17:06:06.363727   31201 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1220/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0731 17:06:06.363801   31201 ssh_runner.go:195] Run: ls
	I0731 17:06:06.369181   31201 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0731 17:06:06.375215   31201 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0731 17:06:06.375240   31201 status.go:422] ha-234651 apiserver status = Running (err=<nil>)
	I0731 17:06:06.375250   31201 status.go:257] ha-234651 status: &{Name:ha-234651 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0731 17:06:06.375264   31201 status.go:255] checking status of ha-234651-m02 ...
	I0731 17:06:06.375711   31201 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 17:06:06.375761   31201 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 17:06:06.390393   31201 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33343
	I0731 17:06:06.390898   31201 main.go:141] libmachine: () Calling .GetVersion
	I0731 17:06:06.391387   31201 main.go:141] libmachine: Using API Version  1
	I0731 17:06:06.391409   31201 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 17:06:06.391686   31201 main.go:141] libmachine: () Calling .GetMachineName
	I0731 17:06:06.391899   31201 main.go:141] libmachine: (ha-234651-m02) Calling .GetState
	I0731 17:06:06.393414   31201 status.go:330] ha-234651-m02 host status = "Running" (err=<nil>)
	I0731 17:06:06.393434   31201 host.go:66] Checking if "ha-234651-m02" exists ...
	I0731 17:06:06.393862   31201 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 17:06:06.393904   31201 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 17:06:06.409349   31201 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38187
	I0731 17:06:06.409801   31201 main.go:141] libmachine: () Calling .GetVersion
	I0731 17:06:06.410258   31201 main.go:141] libmachine: Using API Version  1
	I0731 17:06:06.410282   31201 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 17:06:06.410602   31201 main.go:141] libmachine: () Calling .GetMachineName
	I0731 17:06:06.410798   31201 main.go:141] libmachine: (ha-234651-m02) Calling .GetIP
	I0731 17:06:06.413567   31201 main.go:141] libmachine: (ha-234651-m02) DBG | domain ha-234651-m02 has defined MAC address 52:54:00:4c:97:0e in network mk-ha-234651
	I0731 17:06:06.414008   31201 main.go:141] libmachine: (ha-234651-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:97:0e", ip: ""} in network mk-ha-234651: {Iface:virbr1 ExpiryTime:2024-07-31 18:00:10 +0000 UTC Type:0 Mac:52:54:00:4c:97:0e Iaid: IPaddr:192.168.39.235 Prefix:24 Hostname:ha-234651-m02 Clientid:01:52:54:00:4c:97:0e}
	I0731 17:06:06.414037   31201 main.go:141] libmachine: (ha-234651-m02) DBG | domain ha-234651-m02 has defined IP address 192.168.39.235 and MAC address 52:54:00:4c:97:0e in network mk-ha-234651
	I0731 17:06:06.414140   31201 host.go:66] Checking if "ha-234651-m02" exists ...
	I0731 17:06:06.414458   31201 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 17:06:06.414497   31201 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 17:06:06.429266   31201 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39271
	I0731 17:06:06.429717   31201 main.go:141] libmachine: () Calling .GetVersion
	I0731 17:06:06.430212   31201 main.go:141] libmachine: Using API Version  1
	I0731 17:06:06.430236   31201 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 17:06:06.430531   31201 main.go:141] libmachine: () Calling .GetMachineName
	I0731 17:06:06.430730   31201 main.go:141] libmachine: (ha-234651-m02) Calling .DriverName
	I0731 17:06:06.430915   31201 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0731 17:06:06.430939   31201 main.go:141] libmachine: (ha-234651-m02) Calling .GetSSHHostname
	I0731 17:06:06.433774   31201 main.go:141] libmachine: (ha-234651-m02) DBG | domain ha-234651-m02 has defined MAC address 52:54:00:4c:97:0e in network mk-ha-234651
	I0731 17:06:06.434279   31201 main.go:141] libmachine: (ha-234651-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:97:0e", ip: ""} in network mk-ha-234651: {Iface:virbr1 ExpiryTime:2024-07-31 18:00:10 +0000 UTC Type:0 Mac:52:54:00:4c:97:0e Iaid: IPaddr:192.168.39.235 Prefix:24 Hostname:ha-234651-m02 Clientid:01:52:54:00:4c:97:0e}
	I0731 17:06:06.434313   31201 main.go:141] libmachine: (ha-234651-m02) DBG | domain ha-234651-m02 has defined IP address 192.168.39.235 and MAC address 52:54:00:4c:97:0e in network mk-ha-234651
	I0731 17:06:06.434431   31201 main.go:141] libmachine: (ha-234651-m02) Calling .GetSSHPort
	I0731 17:06:06.434634   31201 main.go:141] libmachine: (ha-234651-m02) Calling .GetSSHKeyPath
	I0731 17:06:06.434783   31201 main.go:141] libmachine: (ha-234651-m02) Calling .GetSSHUsername
	I0731 17:06:06.434945   31201 sshutil.go:53] new ssh client: &{IP:192.168.39.235 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19349-8084/.minikube/machines/ha-234651-m02/id_rsa Username:docker}
	W0731 17:06:08.999412   31201 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.235:22: connect: no route to host
	W0731 17:06:08.999515   31201 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.235:22: connect: no route to host
	E0731 17:06:08.999532   31201 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.235:22: connect: no route to host
	I0731 17:06:08.999540   31201 status.go:257] ha-234651-m02 status: &{Name:ha-234651-m02 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0731 17:06:08.999557   31201 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.235:22: connect: no route to host
	I0731 17:06:08.999564   31201 status.go:255] checking status of ha-234651-m03 ...
	I0731 17:06:08.999852   31201 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 17:06:08.999893   31201 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 17:06:09.014758   31201 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32857
	I0731 17:06:09.015202   31201 main.go:141] libmachine: () Calling .GetVersion
	I0731 17:06:09.015632   31201 main.go:141] libmachine: Using API Version  1
	I0731 17:06:09.015653   31201 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 17:06:09.015943   31201 main.go:141] libmachine: () Calling .GetMachineName
	I0731 17:06:09.016145   31201 main.go:141] libmachine: (ha-234651-m03) Calling .GetState
	I0731 17:06:09.017565   31201 status.go:330] ha-234651-m03 host status = "Running" (err=<nil>)
	I0731 17:06:09.017579   31201 host.go:66] Checking if "ha-234651-m03" exists ...
	I0731 17:06:09.017931   31201 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 17:06:09.017976   31201 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 17:06:09.032267   31201 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38491
	I0731 17:06:09.032677   31201 main.go:141] libmachine: () Calling .GetVersion
	I0731 17:06:09.033161   31201 main.go:141] libmachine: Using API Version  1
	I0731 17:06:09.033178   31201 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 17:06:09.033477   31201 main.go:141] libmachine: () Calling .GetMachineName
	I0731 17:06:09.033646   31201 main.go:141] libmachine: (ha-234651-m03) Calling .GetIP
	I0731 17:06:09.036219   31201 main.go:141] libmachine: (ha-234651-m03) DBG | domain ha-234651-m03 has defined MAC address 52:54:00:ac:c0:cf in network mk-ha-234651
	I0731 17:06:09.036626   31201 main.go:141] libmachine: (ha-234651-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ac:c0:cf", ip: ""} in network mk-ha-234651: {Iface:virbr1 ExpiryTime:2024-07-31 18:01:23 +0000 UTC Type:0 Mac:52:54:00:ac:c0:cf Iaid: IPaddr:192.168.39.139 Prefix:24 Hostname:ha-234651-m03 Clientid:01:52:54:00:ac:c0:cf}
	I0731 17:06:09.036645   31201 main.go:141] libmachine: (ha-234651-m03) DBG | domain ha-234651-m03 has defined IP address 192.168.39.139 and MAC address 52:54:00:ac:c0:cf in network mk-ha-234651
	I0731 17:06:09.036787   31201 host.go:66] Checking if "ha-234651-m03" exists ...
	I0731 17:06:09.037065   31201 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 17:06:09.037100   31201 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 17:06:09.051320   31201 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38565
	I0731 17:06:09.051677   31201 main.go:141] libmachine: () Calling .GetVersion
	I0731 17:06:09.052131   31201 main.go:141] libmachine: Using API Version  1
	I0731 17:06:09.052145   31201 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 17:06:09.052461   31201 main.go:141] libmachine: () Calling .GetMachineName
	I0731 17:06:09.052634   31201 main.go:141] libmachine: (ha-234651-m03) Calling .DriverName
	I0731 17:06:09.052796   31201 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0731 17:06:09.052818   31201 main.go:141] libmachine: (ha-234651-m03) Calling .GetSSHHostname
	I0731 17:06:09.055386   31201 main.go:141] libmachine: (ha-234651-m03) DBG | domain ha-234651-m03 has defined MAC address 52:54:00:ac:c0:cf in network mk-ha-234651
	I0731 17:06:09.055836   31201 main.go:141] libmachine: (ha-234651-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ac:c0:cf", ip: ""} in network mk-ha-234651: {Iface:virbr1 ExpiryTime:2024-07-31 18:01:23 +0000 UTC Type:0 Mac:52:54:00:ac:c0:cf Iaid: IPaddr:192.168.39.139 Prefix:24 Hostname:ha-234651-m03 Clientid:01:52:54:00:ac:c0:cf}
	I0731 17:06:09.055873   31201 main.go:141] libmachine: (ha-234651-m03) DBG | domain ha-234651-m03 has defined IP address 192.168.39.139 and MAC address 52:54:00:ac:c0:cf in network mk-ha-234651
	I0731 17:06:09.056019   31201 main.go:141] libmachine: (ha-234651-m03) Calling .GetSSHPort
	I0731 17:06:09.056173   31201 main.go:141] libmachine: (ha-234651-m03) Calling .GetSSHKeyPath
	I0731 17:06:09.056282   31201 main.go:141] libmachine: (ha-234651-m03) Calling .GetSSHUsername
	I0731 17:06:09.056464   31201 sshutil.go:53] new ssh client: &{IP:192.168.39.139 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19349-8084/.minikube/machines/ha-234651-m03/id_rsa Username:docker}
	I0731 17:06:09.135290   31201 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0731 17:06:09.152886   31201 kubeconfig.go:125] found "ha-234651" server: "https://192.168.39.254:8443"
	I0731 17:06:09.152913   31201 api_server.go:166] Checking apiserver status ...
	I0731 17:06:09.152944   31201 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 17:06:09.166702   31201 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1530/cgroup
	W0731 17:06:09.176135   31201 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1530/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0731 17:06:09.176210   31201 ssh_runner.go:195] Run: ls
	I0731 17:06:09.180973   31201 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0731 17:06:09.187143   31201 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0731 17:06:09.187175   31201 status.go:422] ha-234651-m03 apiserver status = Running (err=<nil>)
	I0731 17:06:09.187188   31201 status.go:257] ha-234651-m03 status: &{Name:ha-234651-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0731 17:06:09.187208   31201 status.go:255] checking status of ha-234651-m04 ...
	I0731 17:06:09.187552   31201 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 17:06:09.187591   31201 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 17:06:09.202138   31201 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35687
	I0731 17:06:09.202642   31201 main.go:141] libmachine: () Calling .GetVersion
	I0731 17:06:09.203144   31201 main.go:141] libmachine: Using API Version  1
	I0731 17:06:09.203167   31201 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 17:06:09.203520   31201 main.go:141] libmachine: () Calling .GetMachineName
	I0731 17:06:09.203708   31201 main.go:141] libmachine: (ha-234651-m04) Calling .GetState
	I0731 17:06:09.205283   31201 status.go:330] ha-234651-m04 host status = "Running" (err=<nil>)
	I0731 17:06:09.205296   31201 host.go:66] Checking if "ha-234651-m04" exists ...
	I0731 17:06:09.205551   31201 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 17:06:09.205593   31201 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 17:06:09.219969   31201 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42249
	I0731 17:06:09.220400   31201 main.go:141] libmachine: () Calling .GetVersion
	I0731 17:06:09.220818   31201 main.go:141] libmachine: Using API Version  1
	I0731 17:06:09.220836   31201 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 17:06:09.221112   31201 main.go:141] libmachine: () Calling .GetMachineName
	I0731 17:06:09.221329   31201 main.go:141] libmachine: (ha-234651-m04) Calling .GetIP
	I0731 17:06:09.224025   31201 main.go:141] libmachine: (ha-234651-m04) DBG | domain ha-234651-m04 has defined MAC address 52:54:00:25:f7:22 in network mk-ha-234651
	I0731 17:06:09.224547   31201 main.go:141] libmachine: (ha-234651-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:f7:22", ip: ""} in network mk-ha-234651: {Iface:virbr1 ExpiryTime:2024-07-31 18:02:48 +0000 UTC Type:0 Mac:52:54:00:25:f7:22 Iaid: IPaddr:192.168.39.216 Prefix:24 Hostname:ha-234651-m04 Clientid:01:52:54:00:25:f7:22}
	I0731 17:06:09.224578   31201 main.go:141] libmachine: (ha-234651-m04) DBG | domain ha-234651-m04 has defined IP address 192.168.39.216 and MAC address 52:54:00:25:f7:22 in network mk-ha-234651
	I0731 17:06:09.224744   31201 host.go:66] Checking if "ha-234651-m04" exists ...
	I0731 17:06:09.225059   31201 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 17:06:09.225095   31201 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 17:06:09.239456   31201 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40697
	I0731 17:06:09.239805   31201 main.go:141] libmachine: () Calling .GetVersion
	I0731 17:06:09.240309   31201 main.go:141] libmachine: Using API Version  1
	I0731 17:06:09.240328   31201 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 17:06:09.240596   31201 main.go:141] libmachine: () Calling .GetMachineName
	I0731 17:06:09.240774   31201 main.go:141] libmachine: (ha-234651-m04) Calling .DriverName
	I0731 17:06:09.240938   31201 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0731 17:06:09.240957   31201 main.go:141] libmachine: (ha-234651-m04) Calling .GetSSHHostname
	I0731 17:06:09.243703   31201 main.go:141] libmachine: (ha-234651-m04) DBG | domain ha-234651-m04 has defined MAC address 52:54:00:25:f7:22 in network mk-ha-234651
	I0731 17:06:09.244085   31201 main.go:141] libmachine: (ha-234651-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:f7:22", ip: ""} in network mk-ha-234651: {Iface:virbr1 ExpiryTime:2024-07-31 18:02:48 +0000 UTC Type:0 Mac:52:54:00:25:f7:22 Iaid: IPaddr:192.168.39.216 Prefix:24 Hostname:ha-234651-m04 Clientid:01:52:54:00:25:f7:22}
	I0731 17:06:09.244111   31201 main.go:141] libmachine: (ha-234651-m04) DBG | domain ha-234651-m04 has defined IP address 192.168.39.216 and MAC address 52:54:00:25:f7:22 in network mk-ha-234651
	I0731 17:06:09.244234   31201 main.go:141] libmachine: (ha-234651-m04) Calling .GetSSHPort
	I0731 17:06:09.244375   31201 main.go:141] libmachine: (ha-234651-m04) Calling .GetSSHKeyPath
	I0731 17:06:09.244520   31201 main.go:141] libmachine: (ha-234651-m04) Calling .GetSSHUsername
	I0731 17:06:09.244634   31201 sshutil.go:53] new ssh client: &{IP:192.168.39.216 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19349-8084/.minikube/machines/ha-234651-m04/id_rsa Username:docker}
	I0731 17:06:09.326995   31201 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0731 17:06:09.340848   31201 status.go:257] ha-234651-m04 status: &{Name:ha-234651-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
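(Editorial note, not part of the captured output: the status run above ends with "Checking apiserver healthz at https://192.168.39.254:8443/healthz ... returned 200: ok" for the reachable control planes. The following is a minimal standalone sketch of that same kind of healthz probe, useful when reproducing the check by hand. The endpoint address is taken from the log; skipping TLS verification is an assumption made purely for illustration, since the real status command uses the cluster's kubeconfig credentials.)

	// healthz_probe.go - hedged sketch of an apiserver /healthz check.
	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	func main() {
		client := &http.Client{
			Timeout: 5 * time.Second,
			// Assumption for this sketch only: skip certificate verification because a
			// plain client does not trust the cluster CA; a real check would pin that CA.
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		resp, err := client.Get("https://192.168.39.254:8443/healthz")
		if err != nil {
			fmt.Println("healthz probe failed:", err)
			return
		}
		defer resp.Body.Close()
		body, _ := io.ReadAll(resp.Body)
		// The log above treats HTTP 200 with body "ok" as a healthy apiserver.
		fmt.Printf("status=%d body=%q\n", resp.StatusCode, string(body))
	}
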
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-234651 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-234651 status -v=7 --alsologtostderr: exit status 3 (4.83045388s)

                                                
                                                
-- stdout --
	ha-234651
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-234651-m02
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-234651-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-234651-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0731 17:06:10.697528   31302 out.go:291] Setting OutFile to fd 1 ...
	I0731 17:06:10.697656   31302 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 17:06:10.697669   31302 out.go:304] Setting ErrFile to fd 2...
	I0731 17:06:10.697676   31302 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 17:06:10.697873   31302 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19349-8084/.minikube/bin
	I0731 17:06:10.698043   31302 out.go:298] Setting JSON to false
	I0731 17:06:10.698069   31302 mustload.go:65] Loading cluster: ha-234651
	I0731 17:06:10.698093   31302 notify.go:220] Checking for updates...
	I0731 17:06:10.698509   31302 config.go:182] Loaded profile config "ha-234651": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0731 17:06:10.698527   31302 status.go:255] checking status of ha-234651 ...
	I0731 17:06:10.699029   31302 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 17:06:10.699078   31302 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 17:06:10.720463   31302 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42033
	I0731 17:06:10.720868   31302 main.go:141] libmachine: () Calling .GetVersion
	I0731 17:06:10.721436   31302 main.go:141] libmachine: Using API Version  1
	I0731 17:06:10.721448   31302 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 17:06:10.721880   31302 main.go:141] libmachine: () Calling .GetMachineName
	I0731 17:06:10.722094   31302 main.go:141] libmachine: (ha-234651) Calling .GetState
	I0731 17:06:10.723707   31302 status.go:330] ha-234651 host status = "Running" (err=<nil>)
	I0731 17:06:10.723736   31302 host.go:66] Checking if "ha-234651" exists ...
	I0731 17:06:10.724009   31302 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 17:06:10.724041   31302 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 17:06:10.739347   31302 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39487
	I0731 17:06:10.739745   31302 main.go:141] libmachine: () Calling .GetVersion
	I0731 17:06:10.740187   31302 main.go:141] libmachine: Using API Version  1
	I0731 17:06:10.740220   31302 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 17:06:10.740532   31302 main.go:141] libmachine: () Calling .GetMachineName
	I0731 17:06:10.740685   31302 main.go:141] libmachine: (ha-234651) Calling .GetIP
	I0731 17:06:10.743472   31302 main.go:141] libmachine: (ha-234651) DBG | domain ha-234651 has defined MAC address 52:54:00:20:60:53 in network mk-ha-234651
	I0731 17:06:10.743887   31302 main.go:141] libmachine: (ha-234651) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:60:53", ip: ""} in network mk-ha-234651: {Iface:virbr1 ExpiryTime:2024-07-31 17:59:15 +0000 UTC Type:0 Mac:52:54:00:20:60:53 Iaid: IPaddr:192.168.39.243 Prefix:24 Hostname:ha-234651 Clientid:01:52:54:00:20:60:53}
	I0731 17:06:10.743927   31302 main.go:141] libmachine: (ha-234651) DBG | domain ha-234651 has defined IP address 192.168.39.243 and MAC address 52:54:00:20:60:53 in network mk-ha-234651
	I0731 17:06:10.744055   31302 host.go:66] Checking if "ha-234651" exists ...
	I0731 17:06:10.744334   31302 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 17:06:10.744373   31302 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 17:06:10.758261   31302 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44533
	I0731 17:06:10.758665   31302 main.go:141] libmachine: () Calling .GetVersion
	I0731 17:06:10.759117   31302 main.go:141] libmachine: Using API Version  1
	I0731 17:06:10.759146   31302 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 17:06:10.759427   31302 main.go:141] libmachine: () Calling .GetMachineName
	I0731 17:06:10.759588   31302 main.go:141] libmachine: (ha-234651) Calling .DriverName
	I0731 17:06:10.759730   31302 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0731 17:06:10.759752   31302 main.go:141] libmachine: (ha-234651) Calling .GetSSHHostname
	I0731 17:06:10.762451   31302 main.go:141] libmachine: (ha-234651) DBG | domain ha-234651 has defined MAC address 52:54:00:20:60:53 in network mk-ha-234651
	I0731 17:06:10.762787   31302 main.go:141] libmachine: (ha-234651) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:60:53", ip: ""} in network mk-ha-234651: {Iface:virbr1 ExpiryTime:2024-07-31 17:59:15 +0000 UTC Type:0 Mac:52:54:00:20:60:53 Iaid: IPaddr:192.168.39.243 Prefix:24 Hostname:ha-234651 Clientid:01:52:54:00:20:60:53}
	I0731 17:06:10.762813   31302 main.go:141] libmachine: (ha-234651) DBG | domain ha-234651 has defined IP address 192.168.39.243 and MAC address 52:54:00:20:60:53 in network mk-ha-234651
	I0731 17:06:10.762920   31302 main.go:141] libmachine: (ha-234651) Calling .GetSSHPort
	I0731 17:06:10.763078   31302 main.go:141] libmachine: (ha-234651) Calling .GetSSHKeyPath
	I0731 17:06:10.763237   31302 main.go:141] libmachine: (ha-234651) Calling .GetSSHUsername
	I0731 17:06:10.763369   31302 sshutil.go:53] new ssh client: &{IP:192.168.39.243 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19349-8084/.minikube/machines/ha-234651/id_rsa Username:docker}
	I0731 17:06:10.846733   31302 ssh_runner.go:195] Run: systemctl --version
	I0731 17:06:10.852148   31302 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0731 17:06:10.866223   31302 kubeconfig.go:125] found "ha-234651" server: "https://192.168.39.254:8443"
	I0731 17:06:10.866253   31302 api_server.go:166] Checking apiserver status ...
	I0731 17:06:10.866304   31302 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 17:06:10.881668   31302 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1220/cgroup
	W0731 17:06:10.900536   31302 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1220/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0731 17:06:10.900601   31302 ssh_runner.go:195] Run: ls
	I0731 17:06:10.905596   31302 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0731 17:06:10.909428   31302 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0731 17:06:10.909453   31302 status.go:422] ha-234651 apiserver status = Running (err=<nil>)
	I0731 17:06:10.909468   31302 status.go:257] ha-234651 status: &{Name:ha-234651 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0731 17:06:10.909486   31302 status.go:255] checking status of ha-234651-m02 ...
	I0731 17:06:10.909753   31302 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 17:06:10.909782   31302 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 17:06:10.926137   31302 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43095
	I0731 17:06:10.926624   31302 main.go:141] libmachine: () Calling .GetVersion
	I0731 17:06:10.927126   31302 main.go:141] libmachine: Using API Version  1
	I0731 17:06:10.927147   31302 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 17:06:10.927433   31302 main.go:141] libmachine: () Calling .GetMachineName
	I0731 17:06:10.927626   31302 main.go:141] libmachine: (ha-234651-m02) Calling .GetState
	I0731 17:06:10.929245   31302 status.go:330] ha-234651-m02 host status = "Running" (err=<nil>)
	I0731 17:06:10.929261   31302 host.go:66] Checking if "ha-234651-m02" exists ...
	I0731 17:06:10.929558   31302 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 17:06:10.929589   31302 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 17:06:10.944043   31302 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40405
	I0731 17:06:10.944434   31302 main.go:141] libmachine: () Calling .GetVersion
	I0731 17:06:10.944863   31302 main.go:141] libmachine: Using API Version  1
	I0731 17:06:10.944885   31302 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 17:06:10.945149   31302 main.go:141] libmachine: () Calling .GetMachineName
	I0731 17:06:10.945368   31302 main.go:141] libmachine: (ha-234651-m02) Calling .GetIP
	I0731 17:06:10.948154   31302 main.go:141] libmachine: (ha-234651-m02) DBG | domain ha-234651-m02 has defined MAC address 52:54:00:4c:97:0e in network mk-ha-234651
	I0731 17:06:10.948613   31302 main.go:141] libmachine: (ha-234651-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:97:0e", ip: ""} in network mk-ha-234651: {Iface:virbr1 ExpiryTime:2024-07-31 18:00:10 +0000 UTC Type:0 Mac:52:54:00:4c:97:0e Iaid: IPaddr:192.168.39.235 Prefix:24 Hostname:ha-234651-m02 Clientid:01:52:54:00:4c:97:0e}
	I0731 17:06:10.948644   31302 main.go:141] libmachine: (ha-234651-m02) DBG | domain ha-234651-m02 has defined IP address 192.168.39.235 and MAC address 52:54:00:4c:97:0e in network mk-ha-234651
	I0731 17:06:10.948779   31302 host.go:66] Checking if "ha-234651-m02" exists ...
	I0731 17:06:10.949052   31302 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 17:06:10.949082   31302 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 17:06:10.963137   31302 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46111
	I0731 17:06:10.963484   31302 main.go:141] libmachine: () Calling .GetVersion
	I0731 17:06:10.963877   31302 main.go:141] libmachine: Using API Version  1
	I0731 17:06:10.963895   31302 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 17:06:10.964174   31302 main.go:141] libmachine: () Calling .GetMachineName
	I0731 17:06:10.964354   31302 main.go:141] libmachine: (ha-234651-m02) Calling .DriverName
	I0731 17:06:10.964523   31302 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0731 17:06:10.964543   31302 main.go:141] libmachine: (ha-234651-m02) Calling .GetSSHHostname
	I0731 17:06:10.967131   31302 main.go:141] libmachine: (ha-234651-m02) DBG | domain ha-234651-m02 has defined MAC address 52:54:00:4c:97:0e in network mk-ha-234651
	I0731 17:06:10.967522   31302 main.go:141] libmachine: (ha-234651-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:97:0e", ip: ""} in network mk-ha-234651: {Iface:virbr1 ExpiryTime:2024-07-31 18:00:10 +0000 UTC Type:0 Mac:52:54:00:4c:97:0e Iaid: IPaddr:192.168.39.235 Prefix:24 Hostname:ha-234651-m02 Clientid:01:52:54:00:4c:97:0e}
	I0731 17:06:10.967549   31302 main.go:141] libmachine: (ha-234651-m02) DBG | domain ha-234651-m02 has defined IP address 192.168.39.235 and MAC address 52:54:00:4c:97:0e in network mk-ha-234651
	I0731 17:06:10.967729   31302 main.go:141] libmachine: (ha-234651-m02) Calling .GetSSHPort
	I0731 17:06:10.967901   31302 main.go:141] libmachine: (ha-234651-m02) Calling .GetSSHKeyPath
	I0731 17:06:10.968417   31302 main.go:141] libmachine: (ha-234651-m02) Calling .GetSSHUsername
	I0731 17:06:10.968570   31302 sshutil.go:53] new ssh client: &{IP:192.168.39.235 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19349-8084/.minikube/machines/ha-234651-m02/id_rsa Username:docker}
	W0731 17:06:12.067442   31302 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.235:22: connect: no route to host
	I0731 17:06:12.067483   31302 retry.go:31] will retry after 259.026395ms: dial tcp 192.168.39.235:22: connect: no route to host
	W0731 17:06:15.139417   31302 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.235:22: connect: no route to host
	W0731 17:06:15.139495   31302 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.235:22: connect: no route to host
	E0731 17:06:15.139508   31302 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.235:22: connect: no route to host
	I0731 17:06:15.139515   31302 status.go:257] ha-234651-m02 status: &{Name:ha-234651-m02 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0731 17:06:15.139546   31302 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.235:22: connect: no route to host
	I0731 17:06:15.139556   31302 status.go:255] checking status of ha-234651-m03 ...
	I0731 17:06:15.139864   31302 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 17:06:15.139902   31302 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 17:06:15.155720   31302 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38137
	I0731 17:06:15.156190   31302 main.go:141] libmachine: () Calling .GetVersion
	I0731 17:06:15.156725   31302 main.go:141] libmachine: Using API Version  1
	I0731 17:06:15.156746   31302 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 17:06:15.157050   31302 main.go:141] libmachine: () Calling .GetMachineName
	I0731 17:06:15.157213   31302 main.go:141] libmachine: (ha-234651-m03) Calling .GetState
	I0731 17:06:15.158728   31302 status.go:330] ha-234651-m03 host status = "Running" (err=<nil>)
	I0731 17:06:15.158740   31302 host.go:66] Checking if "ha-234651-m03" exists ...
	I0731 17:06:15.159024   31302 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 17:06:15.159055   31302 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 17:06:15.173559   31302 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46217
	I0731 17:06:15.173977   31302 main.go:141] libmachine: () Calling .GetVersion
	I0731 17:06:15.174471   31302 main.go:141] libmachine: Using API Version  1
	I0731 17:06:15.174495   31302 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 17:06:15.174794   31302 main.go:141] libmachine: () Calling .GetMachineName
	I0731 17:06:15.174971   31302 main.go:141] libmachine: (ha-234651-m03) Calling .GetIP
	I0731 17:06:15.177871   31302 main.go:141] libmachine: (ha-234651-m03) DBG | domain ha-234651-m03 has defined MAC address 52:54:00:ac:c0:cf in network mk-ha-234651
	I0731 17:06:15.178320   31302 main.go:141] libmachine: (ha-234651-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ac:c0:cf", ip: ""} in network mk-ha-234651: {Iface:virbr1 ExpiryTime:2024-07-31 18:01:23 +0000 UTC Type:0 Mac:52:54:00:ac:c0:cf Iaid: IPaddr:192.168.39.139 Prefix:24 Hostname:ha-234651-m03 Clientid:01:52:54:00:ac:c0:cf}
	I0731 17:06:15.178350   31302 main.go:141] libmachine: (ha-234651-m03) DBG | domain ha-234651-m03 has defined IP address 192.168.39.139 and MAC address 52:54:00:ac:c0:cf in network mk-ha-234651
	I0731 17:06:15.178496   31302 host.go:66] Checking if "ha-234651-m03" exists ...
	I0731 17:06:15.178822   31302 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 17:06:15.178855   31302 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 17:06:15.192913   31302 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35969
	I0731 17:06:15.193277   31302 main.go:141] libmachine: () Calling .GetVersion
	I0731 17:06:15.193721   31302 main.go:141] libmachine: Using API Version  1
	I0731 17:06:15.193740   31302 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 17:06:15.194027   31302 main.go:141] libmachine: () Calling .GetMachineName
	I0731 17:06:15.194202   31302 main.go:141] libmachine: (ha-234651-m03) Calling .DriverName
	I0731 17:06:15.194437   31302 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0731 17:06:15.194462   31302 main.go:141] libmachine: (ha-234651-m03) Calling .GetSSHHostname
	I0731 17:06:15.197101   31302 main.go:141] libmachine: (ha-234651-m03) DBG | domain ha-234651-m03 has defined MAC address 52:54:00:ac:c0:cf in network mk-ha-234651
	I0731 17:06:15.197676   31302 main.go:141] libmachine: (ha-234651-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ac:c0:cf", ip: ""} in network mk-ha-234651: {Iface:virbr1 ExpiryTime:2024-07-31 18:01:23 +0000 UTC Type:0 Mac:52:54:00:ac:c0:cf Iaid: IPaddr:192.168.39.139 Prefix:24 Hostname:ha-234651-m03 Clientid:01:52:54:00:ac:c0:cf}
	I0731 17:06:15.197708   31302 main.go:141] libmachine: (ha-234651-m03) DBG | domain ha-234651-m03 has defined IP address 192.168.39.139 and MAC address 52:54:00:ac:c0:cf in network mk-ha-234651
	I0731 17:06:15.197841   31302 main.go:141] libmachine: (ha-234651-m03) Calling .GetSSHPort
	I0731 17:06:15.198029   31302 main.go:141] libmachine: (ha-234651-m03) Calling .GetSSHKeyPath
	I0731 17:06:15.198166   31302 main.go:141] libmachine: (ha-234651-m03) Calling .GetSSHUsername
	I0731 17:06:15.198301   31302 sshutil.go:53] new ssh client: &{IP:192.168.39.139 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19349-8084/.minikube/machines/ha-234651-m03/id_rsa Username:docker}
	I0731 17:06:15.277391   31302 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0731 17:06:15.293661   31302 kubeconfig.go:125] found "ha-234651" server: "https://192.168.39.254:8443"
	I0731 17:06:15.293693   31302 api_server.go:166] Checking apiserver status ...
	I0731 17:06:15.293728   31302 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 17:06:15.307261   31302 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1530/cgroup
	W0731 17:06:15.316367   31302 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1530/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0731 17:06:15.316414   31302 ssh_runner.go:195] Run: ls
	I0731 17:06:15.320565   31302 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0731 17:06:15.325250   31302 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0731 17:06:15.325269   31302 status.go:422] ha-234651-m03 apiserver status = Running (err=<nil>)
	I0731 17:06:15.325277   31302 status.go:257] ha-234651-m03 status: &{Name:ha-234651-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0731 17:06:15.325291   31302 status.go:255] checking status of ha-234651-m04 ...
	I0731 17:06:15.325590   31302 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 17:06:15.325632   31302 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 17:06:15.341321   31302 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36085
	I0731 17:06:15.341720   31302 main.go:141] libmachine: () Calling .GetVersion
	I0731 17:06:15.342208   31302 main.go:141] libmachine: Using API Version  1
	I0731 17:06:15.342227   31302 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 17:06:15.342631   31302 main.go:141] libmachine: () Calling .GetMachineName
	I0731 17:06:15.342855   31302 main.go:141] libmachine: (ha-234651-m04) Calling .GetState
	I0731 17:06:15.344806   31302 status.go:330] ha-234651-m04 host status = "Running" (err=<nil>)
	I0731 17:06:15.344823   31302 host.go:66] Checking if "ha-234651-m04" exists ...
	I0731 17:06:15.345241   31302 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 17:06:15.345306   31302 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 17:06:15.360458   31302 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39453
	I0731 17:06:15.360846   31302 main.go:141] libmachine: () Calling .GetVersion
	I0731 17:06:15.361297   31302 main.go:141] libmachine: Using API Version  1
	I0731 17:06:15.361318   31302 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 17:06:15.361638   31302 main.go:141] libmachine: () Calling .GetMachineName
	I0731 17:06:15.361828   31302 main.go:141] libmachine: (ha-234651-m04) Calling .GetIP
	I0731 17:06:15.364746   31302 main.go:141] libmachine: (ha-234651-m04) DBG | domain ha-234651-m04 has defined MAC address 52:54:00:25:f7:22 in network mk-ha-234651
	I0731 17:06:15.365163   31302 main.go:141] libmachine: (ha-234651-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:f7:22", ip: ""} in network mk-ha-234651: {Iface:virbr1 ExpiryTime:2024-07-31 18:02:48 +0000 UTC Type:0 Mac:52:54:00:25:f7:22 Iaid: IPaddr:192.168.39.216 Prefix:24 Hostname:ha-234651-m04 Clientid:01:52:54:00:25:f7:22}
	I0731 17:06:15.365192   31302 main.go:141] libmachine: (ha-234651-m04) DBG | domain ha-234651-m04 has defined IP address 192.168.39.216 and MAC address 52:54:00:25:f7:22 in network mk-ha-234651
	I0731 17:06:15.365457   31302 host.go:66] Checking if "ha-234651-m04" exists ...
	I0731 17:06:15.365753   31302 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 17:06:15.365794   31302 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 17:06:15.380223   31302 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40955
	I0731 17:06:15.380622   31302 main.go:141] libmachine: () Calling .GetVersion
	I0731 17:06:15.381098   31302 main.go:141] libmachine: Using API Version  1
	I0731 17:06:15.381116   31302 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 17:06:15.381409   31302 main.go:141] libmachine: () Calling .GetMachineName
	I0731 17:06:15.381565   31302 main.go:141] libmachine: (ha-234651-m04) Calling .DriverName
	I0731 17:06:15.381733   31302 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0731 17:06:15.381750   31302 main.go:141] libmachine: (ha-234651-m04) Calling .GetSSHHostname
	I0731 17:06:15.384349   31302 main.go:141] libmachine: (ha-234651-m04) DBG | domain ha-234651-m04 has defined MAC address 52:54:00:25:f7:22 in network mk-ha-234651
	I0731 17:06:15.384736   31302 main.go:141] libmachine: (ha-234651-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:f7:22", ip: ""} in network mk-ha-234651: {Iface:virbr1 ExpiryTime:2024-07-31 18:02:48 +0000 UTC Type:0 Mac:52:54:00:25:f7:22 Iaid: IPaddr:192.168.39.216 Prefix:24 Hostname:ha-234651-m04 Clientid:01:52:54:00:25:f7:22}
	I0731 17:06:15.384756   31302 main.go:141] libmachine: (ha-234651-m04) DBG | domain ha-234651-m04 has defined IP address 192.168.39.216 and MAC address 52:54:00:25:f7:22 in network mk-ha-234651
	I0731 17:06:15.384861   31302 main.go:141] libmachine: (ha-234651-m04) Calling .GetSSHPort
	I0731 17:06:15.385017   31302 main.go:141] libmachine: (ha-234651-m04) Calling .GetSSHKeyPath
	I0731 17:06:15.385150   31302 main.go:141] libmachine: (ha-234651-m04) Calling .GetSSHUsername
	I0731 17:06:15.385278   31302 sshutil.go:53] new ssh client: &{IP:192.168.39.216 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19349-8084/.minikube/machines/ha-234651-m04/id_rsa Username:docker}
	I0731 17:06:15.465486   31302 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0731 17:06:15.479299   31302 status.go:257] ha-234651-m04 status: &{Name:ha-234651-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
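(Editorial note, not part of the captured output: each retry above fails the same way, with "dial tcp 192.168.39.235:22: connect: no route to host" while contacting ha-234651-m02, which is what the status command reports as Host:Error / kubelet:Nonexistent / apiserver:Nonexistent. The sketch below reproduces just that reachability check in isolation; the IP address comes from the log, and the timeout value is an illustrative assumption.)

	// ssh_reachability.go - hedged sketch of the TCP dial that fails in the log above.
	package main

	import (
		"fmt"
		"net"
		"time"
	)

	func main() {
		// ha-234651-m02's SSH endpoint as reported in the DHCP lease entries above.
		addr := "192.168.39.235:22"
		conn, err := net.DialTimeout("tcp", addr, 5*time.Second)
		if err != nil {
			// Against a stopped or unreachable VM this prints the same
			// "no route to host" class of error seen in the status output.
			fmt.Println("ssh port unreachable:", err)
			return
		}
		conn.Close()
		fmt.Println("ssh port reachable:", addr)
	}
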
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-234651 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-234651 status -v=7 --alsologtostderr: exit status 3 (4.125555543s)

                                                
                                                
-- stdout --
	ha-234651
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-234651-m02
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-234651-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-234651-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0731 17:06:17.739216   31419 out.go:291] Setting OutFile to fd 1 ...
	I0731 17:06:17.739570   31419 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 17:06:17.739584   31419 out.go:304] Setting ErrFile to fd 2...
	I0731 17:06:17.739592   31419 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 17:06:17.740122   31419 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19349-8084/.minikube/bin
	I0731 17:06:17.740438   31419 out.go:298] Setting JSON to false
	I0731 17:06:17.740557   31419 mustload.go:65] Loading cluster: ha-234651
	I0731 17:06:17.740642   31419 notify.go:220] Checking for updates...
	I0731 17:06:17.740978   31419 config.go:182] Loaded profile config "ha-234651": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0731 17:06:17.741010   31419 status.go:255] checking status of ha-234651 ...
	I0731 17:06:17.741404   31419 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 17:06:17.741467   31419 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 17:06:17.761621   31419 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32905
	I0731 17:06:17.762042   31419 main.go:141] libmachine: () Calling .GetVersion
	I0731 17:06:17.762599   31419 main.go:141] libmachine: Using API Version  1
	I0731 17:06:17.762621   31419 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 17:06:17.763015   31419 main.go:141] libmachine: () Calling .GetMachineName
	I0731 17:06:17.763252   31419 main.go:141] libmachine: (ha-234651) Calling .GetState
	I0731 17:06:17.764920   31419 status.go:330] ha-234651 host status = "Running" (err=<nil>)
	I0731 17:06:17.764939   31419 host.go:66] Checking if "ha-234651" exists ...
	I0731 17:06:17.765322   31419 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 17:06:17.765368   31419 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 17:06:17.779856   31419 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39665
	I0731 17:06:17.780217   31419 main.go:141] libmachine: () Calling .GetVersion
	I0731 17:06:17.780695   31419 main.go:141] libmachine: Using API Version  1
	I0731 17:06:17.780719   31419 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 17:06:17.781027   31419 main.go:141] libmachine: () Calling .GetMachineName
	I0731 17:06:17.781224   31419 main.go:141] libmachine: (ha-234651) Calling .GetIP
	I0731 17:06:17.783721   31419 main.go:141] libmachine: (ha-234651) DBG | domain ha-234651 has defined MAC address 52:54:00:20:60:53 in network mk-ha-234651
	I0731 17:06:17.784203   31419 main.go:141] libmachine: (ha-234651) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:60:53", ip: ""} in network mk-ha-234651: {Iface:virbr1 ExpiryTime:2024-07-31 17:59:15 +0000 UTC Type:0 Mac:52:54:00:20:60:53 Iaid: IPaddr:192.168.39.243 Prefix:24 Hostname:ha-234651 Clientid:01:52:54:00:20:60:53}
	I0731 17:06:17.784237   31419 main.go:141] libmachine: (ha-234651) DBG | domain ha-234651 has defined IP address 192.168.39.243 and MAC address 52:54:00:20:60:53 in network mk-ha-234651
	I0731 17:06:17.784434   31419 host.go:66] Checking if "ha-234651" exists ...
	I0731 17:06:17.784721   31419 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 17:06:17.784765   31419 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 17:06:17.802732   31419 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46253
	I0731 17:06:17.803127   31419 main.go:141] libmachine: () Calling .GetVersion
	I0731 17:06:17.803597   31419 main.go:141] libmachine: Using API Version  1
	I0731 17:06:17.803617   31419 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 17:06:17.803925   31419 main.go:141] libmachine: () Calling .GetMachineName
	I0731 17:06:17.804120   31419 main.go:141] libmachine: (ha-234651) Calling .DriverName
	I0731 17:06:17.804313   31419 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0731 17:06:17.804350   31419 main.go:141] libmachine: (ha-234651) Calling .GetSSHHostname
	I0731 17:06:17.806731   31419 main.go:141] libmachine: (ha-234651) DBG | domain ha-234651 has defined MAC address 52:54:00:20:60:53 in network mk-ha-234651
	I0731 17:06:17.807081   31419 main.go:141] libmachine: (ha-234651) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:60:53", ip: ""} in network mk-ha-234651: {Iface:virbr1 ExpiryTime:2024-07-31 17:59:15 +0000 UTC Type:0 Mac:52:54:00:20:60:53 Iaid: IPaddr:192.168.39.243 Prefix:24 Hostname:ha-234651 Clientid:01:52:54:00:20:60:53}
	I0731 17:06:17.807152   31419 main.go:141] libmachine: (ha-234651) DBG | domain ha-234651 has defined IP address 192.168.39.243 and MAC address 52:54:00:20:60:53 in network mk-ha-234651
	I0731 17:06:17.807244   31419 main.go:141] libmachine: (ha-234651) Calling .GetSSHPort
	I0731 17:06:17.807399   31419 main.go:141] libmachine: (ha-234651) Calling .GetSSHKeyPath
	I0731 17:06:17.807527   31419 main.go:141] libmachine: (ha-234651) Calling .GetSSHUsername
	I0731 17:06:17.807668   31419 sshutil.go:53] new ssh client: &{IP:192.168.39.243 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19349-8084/.minikube/machines/ha-234651/id_rsa Username:docker}
	I0731 17:06:17.886148   31419 ssh_runner.go:195] Run: systemctl --version
	I0731 17:06:17.892100   31419 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0731 17:06:17.906798   31419 kubeconfig.go:125] found "ha-234651" server: "https://192.168.39.254:8443"
	I0731 17:06:17.906831   31419 api_server.go:166] Checking apiserver status ...
	I0731 17:06:17.906873   31419 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 17:06:17.921003   31419 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1220/cgroup
	W0731 17:06:17.930586   31419 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1220/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0731 17:06:17.930643   31419 ssh_runner.go:195] Run: ls
	I0731 17:06:17.934618   31419 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0731 17:06:17.938741   31419 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0731 17:06:17.938761   31419 status.go:422] ha-234651 apiserver status = Running (err=<nil>)
	I0731 17:06:17.938770   31419 status.go:257] ha-234651 status: &{Name:ha-234651 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0731 17:06:17.938790   31419 status.go:255] checking status of ha-234651-m02 ...
	I0731 17:06:17.939149   31419 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 17:06:17.939196   31419 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 17:06:17.953622   31419 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37021
	I0731 17:06:17.954046   31419 main.go:141] libmachine: () Calling .GetVersion
	I0731 17:06:17.954501   31419 main.go:141] libmachine: Using API Version  1
	I0731 17:06:17.954523   31419 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 17:06:17.954876   31419 main.go:141] libmachine: () Calling .GetMachineName
	I0731 17:06:17.955068   31419 main.go:141] libmachine: (ha-234651-m02) Calling .GetState
	I0731 17:06:17.956628   31419 status.go:330] ha-234651-m02 host status = "Running" (err=<nil>)
	I0731 17:06:17.956644   31419 host.go:66] Checking if "ha-234651-m02" exists ...
	I0731 17:06:17.956982   31419 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 17:06:17.957020   31419 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 17:06:17.971280   31419 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45775
	I0731 17:06:17.971763   31419 main.go:141] libmachine: () Calling .GetVersion
	I0731 17:06:17.972235   31419 main.go:141] libmachine: Using API Version  1
	I0731 17:06:17.972256   31419 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 17:06:17.972566   31419 main.go:141] libmachine: () Calling .GetMachineName
	I0731 17:06:17.972750   31419 main.go:141] libmachine: (ha-234651-m02) Calling .GetIP
	I0731 17:06:17.975676   31419 main.go:141] libmachine: (ha-234651-m02) DBG | domain ha-234651-m02 has defined MAC address 52:54:00:4c:97:0e in network mk-ha-234651
	I0731 17:06:17.976026   31419 main.go:141] libmachine: (ha-234651-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:97:0e", ip: ""} in network mk-ha-234651: {Iface:virbr1 ExpiryTime:2024-07-31 18:00:10 +0000 UTC Type:0 Mac:52:54:00:4c:97:0e Iaid: IPaddr:192.168.39.235 Prefix:24 Hostname:ha-234651-m02 Clientid:01:52:54:00:4c:97:0e}
	I0731 17:06:17.976058   31419 main.go:141] libmachine: (ha-234651-m02) DBG | domain ha-234651-m02 has defined IP address 192.168.39.235 and MAC address 52:54:00:4c:97:0e in network mk-ha-234651
	I0731 17:06:17.976207   31419 host.go:66] Checking if "ha-234651-m02" exists ...
	I0731 17:06:17.976587   31419 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 17:06:17.976657   31419 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 17:06:17.990983   31419 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45641
	I0731 17:06:17.991411   31419 main.go:141] libmachine: () Calling .GetVersion
	I0731 17:06:17.991839   31419 main.go:141] libmachine: Using API Version  1
	I0731 17:06:17.991863   31419 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 17:06:17.992160   31419 main.go:141] libmachine: () Calling .GetMachineName
	I0731 17:06:17.992413   31419 main.go:141] libmachine: (ha-234651-m02) Calling .DriverName
	I0731 17:06:17.992621   31419 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0731 17:06:17.992638   31419 main.go:141] libmachine: (ha-234651-m02) Calling .GetSSHHostname
	I0731 17:06:17.995465   31419 main.go:141] libmachine: (ha-234651-m02) DBG | domain ha-234651-m02 has defined MAC address 52:54:00:4c:97:0e in network mk-ha-234651
	I0731 17:06:17.995973   31419 main.go:141] libmachine: (ha-234651-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:97:0e", ip: ""} in network mk-ha-234651: {Iface:virbr1 ExpiryTime:2024-07-31 18:00:10 +0000 UTC Type:0 Mac:52:54:00:4c:97:0e Iaid: IPaddr:192.168.39.235 Prefix:24 Hostname:ha-234651-m02 Clientid:01:52:54:00:4c:97:0e}
	I0731 17:06:17.996014   31419 main.go:141] libmachine: (ha-234651-m02) DBG | domain ha-234651-m02 has defined IP address 192.168.39.235 and MAC address 52:54:00:4c:97:0e in network mk-ha-234651
	I0731 17:06:17.996042   31419 main.go:141] libmachine: (ha-234651-m02) Calling .GetSSHPort
	I0731 17:06:17.996224   31419 main.go:141] libmachine: (ha-234651-m02) Calling .GetSSHKeyPath
	I0731 17:06:17.996376   31419 main.go:141] libmachine: (ha-234651-m02) Calling .GetSSHUsername
	I0731 17:06:17.996535   31419 sshutil.go:53] new ssh client: &{IP:192.168.39.235 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19349-8084/.minikube/machines/ha-234651-m02/id_rsa Username:docker}
	W0731 17:06:18.215374   31419 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.235:22: connect: no route to host
	I0731 17:06:18.215429   31419 retry.go:31] will retry after 206.800731ms: dial tcp 192.168.39.235:22: connect: no route to host
	W0731 17:06:21.479328   31419 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.235:22: connect: no route to host
	W0731 17:06:21.479408   31419 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.235:22: connect: no route to host
	E0731 17:06:21.479424   31419 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.235:22: connect: no route to host
	I0731 17:06:21.479430   31419 status.go:257] ha-234651-m02 status: &{Name:ha-234651-m02 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0731 17:06:21.479449   31419 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.235:22: connect: no route to host
	I0731 17:06:21.479466   31419 status.go:255] checking status of ha-234651-m03 ...
	I0731 17:06:21.479760   31419 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 17:06:21.479799   31419 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 17:06:21.494910   31419 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46113
	I0731 17:06:21.495347   31419 main.go:141] libmachine: () Calling .GetVersion
	I0731 17:06:21.495866   31419 main.go:141] libmachine: Using API Version  1
	I0731 17:06:21.495888   31419 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 17:06:21.496181   31419 main.go:141] libmachine: () Calling .GetMachineName
	I0731 17:06:21.496349   31419 main.go:141] libmachine: (ha-234651-m03) Calling .GetState
	I0731 17:06:21.497921   31419 status.go:330] ha-234651-m03 host status = "Running" (err=<nil>)
	I0731 17:06:21.497938   31419 host.go:66] Checking if "ha-234651-m03" exists ...
	I0731 17:06:21.498244   31419 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 17:06:21.498294   31419 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 17:06:21.512627   31419 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40813
	I0731 17:06:21.513047   31419 main.go:141] libmachine: () Calling .GetVersion
	I0731 17:06:21.513501   31419 main.go:141] libmachine: Using API Version  1
	I0731 17:06:21.513523   31419 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 17:06:21.513820   31419 main.go:141] libmachine: () Calling .GetMachineName
	I0731 17:06:21.513978   31419 main.go:141] libmachine: (ha-234651-m03) Calling .GetIP
	I0731 17:06:21.516806   31419 main.go:141] libmachine: (ha-234651-m03) DBG | domain ha-234651-m03 has defined MAC address 52:54:00:ac:c0:cf in network mk-ha-234651
	I0731 17:06:21.517192   31419 main.go:141] libmachine: (ha-234651-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ac:c0:cf", ip: ""} in network mk-ha-234651: {Iface:virbr1 ExpiryTime:2024-07-31 18:01:23 +0000 UTC Type:0 Mac:52:54:00:ac:c0:cf Iaid: IPaddr:192.168.39.139 Prefix:24 Hostname:ha-234651-m03 Clientid:01:52:54:00:ac:c0:cf}
	I0731 17:06:21.517232   31419 main.go:141] libmachine: (ha-234651-m03) DBG | domain ha-234651-m03 has defined IP address 192.168.39.139 and MAC address 52:54:00:ac:c0:cf in network mk-ha-234651
	I0731 17:06:21.517368   31419 host.go:66] Checking if "ha-234651-m03" exists ...
	I0731 17:06:21.517689   31419 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 17:06:21.517738   31419 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 17:06:21.532009   31419 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42513
	I0731 17:06:21.532363   31419 main.go:141] libmachine: () Calling .GetVersion
	I0731 17:06:21.532837   31419 main.go:141] libmachine: Using API Version  1
	I0731 17:06:21.532859   31419 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 17:06:21.533167   31419 main.go:141] libmachine: () Calling .GetMachineName
	I0731 17:06:21.533355   31419 main.go:141] libmachine: (ha-234651-m03) Calling .DriverName
	I0731 17:06:21.533546   31419 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0731 17:06:21.533566   31419 main.go:141] libmachine: (ha-234651-m03) Calling .GetSSHHostname
	I0731 17:06:21.536110   31419 main.go:141] libmachine: (ha-234651-m03) DBG | domain ha-234651-m03 has defined MAC address 52:54:00:ac:c0:cf in network mk-ha-234651
	I0731 17:06:21.536460   31419 main.go:141] libmachine: (ha-234651-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ac:c0:cf", ip: ""} in network mk-ha-234651: {Iface:virbr1 ExpiryTime:2024-07-31 18:01:23 +0000 UTC Type:0 Mac:52:54:00:ac:c0:cf Iaid: IPaddr:192.168.39.139 Prefix:24 Hostname:ha-234651-m03 Clientid:01:52:54:00:ac:c0:cf}
	I0731 17:06:21.536505   31419 main.go:141] libmachine: (ha-234651-m03) DBG | domain ha-234651-m03 has defined IP address 192.168.39.139 and MAC address 52:54:00:ac:c0:cf in network mk-ha-234651
	I0731 17:06:21.536642   31419 main.go:141] libmachine: (ha-234651-m03) Calling .GetSSHPort
	I0731 17:06:21.536794   31419 main.go:141] libmachine: (ha-234651-m03) Calling .GetSSHKeyPath
	I0731 17:06:21.536938   31419 main.go:141] libmachine: (ha-234651-m03) Calling .GetSSHUsername
	I0731 17:06:21.537077   31419 sshutil.go:53] new ssh client: &{IP:192.168.39.139 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19349-8084/.minikube/machines/ha-234651-m03/id_rsa Username:docker}
	I0731 17:06:21.614193   31419 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0731 17:06:21.628992   31419 kubeconfig.go:125] found "ha-234651" server: "https://192.168.39.254:8443"
	I0731 17:06:21.629018   31419 api_server.go:166] Checking apiserver status ...
	I0731 17:06:21.629052   31419 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 17:06:21.642514   31419 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1530/cgroup
	W0731 17:06:21.652009   31419 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1530/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0731 17:06:21.652067   31419 ssh_runner.go:195] Run: ls
	I0731 17:06:21.655925   31419 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0731 17:06:21.660282   31419 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0731 17:06:21.660303   31419 status.go:422] ha-234651-m03 apiserver status = Running (err=<nil>)
	I0731 17:06:21.660314   31419 status.go:257] ha-234651-m03 status: &{Name:ha-234651-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0731 17:06:21.660339   31419 status.go:255] checking status of ha-234651-m04 ...
	I0731 17:06:21.660671   31419 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 17:06:21.660714   31419 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 17:06:21.675861   31419 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38345
	I0731 17:06:21.676269   31419 main.go:141] libmachine: () Calling .GetVersion
	I0731 17:06:21.676674   31419 main.go:141] libmachine: Using API Version  1
	I0731 17:06:21.676690   31419 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 17:06:21.676981   31419 main.go:141] libmachine: () Calling .GetMachineName
	I0731 17:06:21.677180   31419 main.go:141] libmachine: (ha-234651-m04) Calling .GetState
	I0731 17:06:21.678705   31419 status.go:330] ha-234651-m04 host status = "Running" (err=<nil>)
	I0731 17:06:21.678720   31419 host.go:66] Checking if "ha-234651-m04" exists ...
	I0731 17:06:21.679089   31419 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 17:06:21.679155   31419 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 17:06:21.693712   31419 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44147
	I0731 17:06:21.694107   31419 main.go:141] libmachine: () Calling .GetVersion
	I0731 17:06:21.694564   31419 main.go:141] libmachine: Using API Version  1
	I0731 17:06:21.694587   31419 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 17:06:21.694914   31419 main.go:141] libmachine: () Calling .GetMachineName
	I0731 17:06:21.695123   31419 main.go:141] libmachine: (ha-234651-m04) Calling .GetIP
	I0731 17:06:21.697533   31419 main.go:141] libmachine: (ha-234651-m04) DBG | domain ha-234651-m04 has defined MAC address 52:54:00:25:f7:22 in network mk-ha-234651
	I0731 17:06:21.698148   31419 main.go:141] libmachine: (ha-234651-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:f7:22", ip: ""} in network mk-ha-234651: {Iface:virbr1 ExpiryTime:2024-07-31 18:02:48 +0000 UTC Type:0 Mac:52:54:00:25:f7:22 Iaid: IPaddr:192.168.39.216 Prefix:24 Hostname:ha-234651-m04 Clientid:01:52:54:00:25:f7:22}
	I0731 17:06:21.698177   31419 main.go:141] libmachine: (ha-234651-m04) DBG | domain ha-234651-m04 has defined IP address 192.168.39.216 and MAC address 52:54:00:25:f7:22 in network mk-ha-234651
	I0731 17:06:21.698319   31419 host.go:66] Checking if "ha-234651-m04" exists ...
	I0731 17:06:21.698591   31419 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 17:06:21.698621   31419 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 17:06:21.713988   31419 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43265
	I0731 17:06:21.714421   31419 main.go:141] libmachine: () Calling .GetVersion
	I0731 17:06:21.714882   31419 main.go:141] libmachine: Using API Version  1
	I0731 17:06:21.714901   31419 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 17:06:21.715205   31419 main.go:141] libmachine: () Calling .GetMachineName
	I0731 17:06:21.715503   31419 main.go:141] libmachine: (ha-234651-m04) Calling .DriverName
	I0731 17:06:21.715677   31419 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0731 17:06:21.715698   31419 main.go:141] libmachine: (ha-234651-m04) Calling .GetSSHHostname
	I0731 17:06:21.718445   31419 main.go:141] libmachine: (ha-234651-m04) DBG | domain ha-234651-m04 has defined MAC address 52:54:00:25:f7:22 in network mk-ha-234651
	I0731 17:06:21.718880   31419 main.go:141] libmachine: (ha-234651-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:f7:22", ip: ""} in network mk-ha-234651: {Iface:virbr1 ExpiryTime:2024-07-31 18:02:48 +0000 UTC Type:0 Mac:52:54:00:25:f7:22 Iaid: IPaddr:192.168.39.216 Prefix:24 Hostname:ha-234651-m04 Clientid:01:52:54:00:25:f7:22}
	I0731 17:06:21.718908   31419 main.go:141] libmachine: (ha-234651-m04) DBG | domain ha-234651-m04 has defined IP address 192.168.39.216 and MAC address 52:54:00:25:f7:22 in network mk-ha-234651
	I0731 17:06:21.719054   31419 main.go:141] libmachine: (ha-234651-m04) Calling .GetSSHPort
	I0731 17:06:21.719237   31419 main.go:141] libmachine: (ha-234651-m04) Calling .GetSSHKeyPath
	I0731 17:06:21.719377   31419 main.go:141] libmachine: (ha-234651-m04) Calling .GetSSHUsername
	I0731 17:06:21.719544   31419 sshutil.go:53] new ssh client: &{IP:192.168.39.216 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19349-8084/.minikube/machines/ha-234651-m04/id_rsa Username:docker}
	I0731 17:06:21.806517   31419 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0731 17:06:21.824709   31419 status.go:257] ha-234651-m04 status: &{Name:ha-234651-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
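The stderr trace above shows the per-node probe sequence behind the status command: an SSH dial to the node, a disk-usage check on /var, a check that the kubelet systemd unit is active, and, for control-plane nodes, a healthz request against the shared endpoint https://192.168.39.254:8443. For ha-234651-m02 the very first step fails with "no route to host", so the node is reported as host: Error with kubelet and apiserver Nonexistent. The sketch below is illustrative only, not the minikube implementation; it strings together the same commands that appear verbatim in the log, and the SSH key path is a placeholder.

// Illustrative sketch of the probes visible in the trace above.
// Not minikube source; command strings and the 192.168.39.254:8443
// endpoint are copied from this run's log, the key path is a placeholder.
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"os/exec"
)

func probeNode(ip, keyPath string) {
	run := func(cmd string) error {
		// Real runs use the per-machine id_rsa key under .minikube/machines/<name>/.
		return exec.Command("ssh", "-i", keyPath, "docker@"+ip, cmd).Run()
	}
	// Storage check; an SSH failure here ("no route to host") marks the host as Error.
	if err := run(`df -h /var | awk 'NR==2{print $5}'`); err != nil {
		fmt.Println("host: Error:", err)
		return
	}
	// Kubelet check: exit status 0 means the systemd unit is active.
	if err := run("sudo systemctl is-active --quiet service kubelet"); err != nil {
		fmt.Println("kubelet: Stopped")
	} else {
		fmt.Println("kubelet: Running")
	}
	// Control-plane nodes only: apiserver health through the HA virtual IP.
	client := &http.Client{Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}}}
	if resp, err := client.Get("https://192.168.39.254:8443/healthz"); err == nil && resp.StatusCode == 200 {
		fmt.Println("apiserver: Running")
	} else {
		fmt.Println("apiserver: Stopped")
	}
}

func main() {
	probeNode("192.168.39.139", "/path/to/id_rsa") // placeholder path
}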
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-234651 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-234651 status -v=7 --alsologtostderr: exit status 3 (4.259188712s)

                                                
                                                
-- stdout --
	ha-234651
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-234651-m02
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-234651-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-234651-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0731 17:06:23.943901   31519 out.go:291] Setting OutFile to fd 1 ...
	I0731 17:06:23.944014   31519 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 17:06:23.944022   31519 out.go:304] Setting ErrFile to fd 2...
	I0731 17:06:23.944027   31519 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 17:06:23.944198   31519 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19349-8084/.minikube/bin
	I0731 17:06:23.944362   31519 out.go:298] Setting JSON to false
	I0731 17:06:23.944388   31519 mustload.go:65] Loading cluster: ha-234651
	I0731 17:06:23.944500   31519 notify.go:220] Checking for updates...
	I0731 17:06:23.944758   31519 config.go:182] Loaded profile config "ha-234651": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0731 17:06:23.944775   31519 status.go:255] checking status of ha-234651 ...
	I0731 17:06:23.945142   31519 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 17:06:23.945203   31519 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 17:06:23.964107   31519 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42555
	I0731 17:06:23.964496   31519 main.go:141] libmachine: () Calling .GetVersion
	I0731 17:06:23.965039   31519 main.go:141] libmachine: Using API Version  1
	I0731 17:06:23.965063   31519 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 17:06:23.965405   31519 main.go:141] libmachine: () Calling .GetMachineName
	I0731 17:06:23.965624   31519 main.go:141] libmachine: (ha-234651) Calling .GetState
	I0731 17:06:23.967207   31519 status.go:330] ha-234651 host status = "Running" (err=<nil>)
	I0731 17:06:23.967231   31519 host.go:66] Checking if "ha-234651" exists ...
	I0731 17:06:23.967625   31519 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 17:06:23.967658   31519 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 17:06:23.982032   31519 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45941
	I0731 17:06:23.982406   31519 main.go:141] libmachine: () Calling .GetVersion
	I0731 17:06:23.982801   31519 main.go:141] libmachine: Using API Version  1
	I0731 17:06:23.982822   31519 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 17:06:23.983174   31519 main.go:141] libmachine: () Calling .GetMachineName
	I0731 17:06:23.983371   31519 main.go:141] libmachine: (ha-234651) Calling .GetIP
	I0731 17:06:23.986094   31519 main.go:141] libmachine: (ha-234651) DBG | domain ha-234651 has defined MAC address 52:54:00:20:60:53 in network mk-ha-234651
	I0731 17:06:23.986503   31519 main.go:141] libmachine: (ha-234651) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:60:53", ip: ""} in network mk-ha-234651: {Iface:virbr1 ExpiryTime:2024-07-31 17:59:15 +0000 UTC Type:0 Mac:52:54:00:20:60:53 Iaid: IPaddr:192.168.39.243 Prefix:24 Hostname:ha-234651 Clientid:01:52:54:00:20:60:53}
	I0731 17:06:23.986525   31519 main.go:141] libmachine: (ha-234651) DBG | domain ha-234651 has defined IP address 192.168.39.243 and MAC address 52:54:00:20:60:53 in network mk-ha-234651
	I0731 17:06:23.986626   31519 host.go:66] Checking if "ha-234651" exists ...
	I0731 17:06:23.986962   31519 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 17:06:23.986998   31519 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 17:06:24.002080   31519 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33597
	I0731 17:06:24.002446   31519 main.go:141] libmachine: () Calling .GetVersion
	I0731 17:06:24.002894   31519 main.go:141] libmachine: Using API Version  1
	I0731 17:06:24.002914   31519 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 17:06:24.003242   31519 main.go:141] libmachine: () Calling .GetMachineName
	I0731 17:06:24.003430   31519 main.go:141] libmachine: (ha-234651) Calling .DriverName
	I0731 17:06:24.003597   31519 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0731 17:06:24.003632   31519 main.go:141] libmachine: (ha-234651) Calling .GetSSHHostname
	I0731 17:06:24.006180   31519 main.go:141] libmachine: (ha-234651) DBG | domain ha-234651 has defined MAC address 52:54:00:20:60:53 in network mk-ha-234651
	I0731 17:06:24.006568   31519 main.go:141] libmachine: (ha-234651) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:60:53", ip: ""} in network mk-ha-234651: {Iface:virbr1 ExpiryTime:2024-07-31 17:59:15 +0000 UTC Type:0 Mac:52:54:00:20:60:53 Iaid: IPaddr:192.168.39.243 Prefix:24 Hostname:ha-234651 Clientid:01:52:54:00:20:60:53}
	I0731 17:06:24.006590   31519 main.go:141] libmachine: (ha-234651) DBG | domain ha-234651 has defined IP address 192.168.39.243 and MAC address 52:54:00:20:60:53 in network mk-ha-234651
	I0731 17:06:24.006724   31519 main.go:141] libmachine: (ha-234651) Calling .GetSSHPort
	I0731 17:06:24.006887   31519 main.go:141] libmachine: (ha-234651) Calling .GetSSHKeyPath
	I0731 17:06:24.007044   31519 main.go:141] libmachine: (ha-234651) Calling .GetSSHUsername
	I0731 17:06:24.007167   31519 sshutil.go:53] new ssh client: &{IP:192.168.39.243 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19349-8084/.minikube/machines/ha-234651/id_rsa Username:docker}
	I0731 17:06:24.086522   31519 ssh_runner.go:195] Run: systemctl --version
	I0731 17:06:24.092264   31519 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0731 17:06:24.106093   31519 kubeconfig.go:125] found "ha-234651" server: "https://192.168.39.254:8443"
	I0731 17:06:24.106120   31519 api_server.go:166] Checking apiserver status ...
	I0731 17:06:24.106152   31519 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 17:06:24.119518   31519 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1220/cgroup
	W0731 17:06:24.128224   31519 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1220/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0731 17:06:24.128267   31519 ssh_runner.go:195] Run: ls
	I0731 17:06:24.132177   31519 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0731 17:06:24.137847   31519 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0731 17:06:24.137866   31519 status.go:422] ha-234651 apiserver status = Running (err=<nil>)
	I0731 17:06:24.137877   31519 status.go:257] ha-234651 status: &{Name:ha-234651 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0731 17:06:24.137907   31519 status.go:255] checking status of ha-234651-m02 ...
	I0731 17:06:24.138268   31519 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 17:06:24.138304   31519 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 17:06:24.154368   31519 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44163
	I0731 17:06:24.154740   31519 main.go:141] libmachine: () Calling .GetVersion
	I0731 17:06:24.155163   31519 main.go:141] libmachine: Using API Version  1
	I0731 17:06:24.155183   31519 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 17:06:24.155534   31519 main.go:141] libmachine: () Calling .GetMachineName
	I0731 17:06:24.155712   31519 main.go:141] libmachine: (ha-234651-m02) Calling .GetState
	I0731 17:06:24.157245   31519 status.go:330] ha-234651-m02 host status = "Running" (err=<nil>)
	I0731 17:06:24.157258   31519 host.go:66] Checking if "ha-234651-m02" exists ...
	I0731 17:06:24.157551   31519 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 17:06:24.157594   31519 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 17:06:24.172796   31519 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38209
	I0731 17:06:24.173171   31519 main.go:141] libmachine: () Calling .GetVersion
	I0731 17:06:24.173607   31519 main.go:141] libmachine: Using API Version  1
	I0731 17:06:24.173627   31519 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 17:06:24.173956   31519 main.go:141] libmachine: () Calling .GetMachineName
	I0731 17:06:24.174144   31519 main.go:141] libmachine: (ha-234651-m02) Calling .GetIP
	I0731 17:06:24.177214   31519 main.go:141] libmachine: (ha-234651-m02) DBG | domain ha-234651-m02 has defined MAC address 52:54:00:4c:97:0e in network mk-ha-234651
	I0731 17:06:24.177787   31519 main.go:141] libmachine: (ha-234651-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:97:0e", ip: ""} in network mk-ha-234651: {Iface:virbr1 ExpiryTime:2024-07-31 18:00:10 +0000 UTC Type:0 Mac:52:54:00:4c:97:0e Iaid: IPaddr:192.168.39.235 Prefix:24 Hostname:ha-234651-m02 Clientid:01:52:54:00:4c:97:0e}
	I0731 17:06:24.177815   31519 main.go:141] libmachine: (ha-234651-m02) DBG | domain ha-234651-m02 has defined IP address 192.168.39.235 and MAC address 52:54:00:4c:97:0e in network mk-ha-234651
	I0731 17:06:24.177914   31519 host.go:66] Checking if "ha-234651-m02" exists ...
	I0731 17:06:24.178210   31519 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 17:06:24.178272   31519 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 17:06:24.192661   31519 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38835
	I0731 17:06:24.193099   31519 main.go:141] libmachine: () Calling .GetVersion
	I0731 17:06:24.193689   31519 main.go:141] libmachine: Using API Version  1
	I0731 17:06:24.193709   31519 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 17:06:24.194093   31519 main.go:141] libmachine: () Calling .GetMachineName
	I0731 17:06:24.194285   31519 main.go:141] libmachine: (ha-234651-m02) Calling .DriverName
	I0731 17:06:24.194474   31519 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0731 17:06:24.194496   31519 main.go:141] libmachine: (ha-234651-m02) Calling .GetSSHHostname
	I0731 17:06:24.197197   31519 main.go:141] libmachine: (ha-234651-m02) DBG | domain ha-234651-m02 has defined MAC address 52:54:00:4c:97:0e in network mk-ha-234651
	I0731 17:06:24.197593   31519 main.go:141] libmachine: (ha-234651-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:97:0e", ip: ""} in network mk-ha-234651: {Iface:virbr1 ExpiryTime:2024-07-31 18:00:10 +0000 UTC Type:0 Mac:52:54:00:4c:97:0e Iaid: IPaddr:192.168.39.235 Prefix:24 Hostname:ha-234651-m02 Clientid:01:52:54:00:4c:97:0e}
	I0731 17:06:24.197618   31519 main.go:141] libmachine: (ha-234651-m02) DBG | domain ha-234651-m02 has defined IP address 192.168.39.235 and MAC address 52:54:00:4c:97:0e in network mk-ha-234651
	I0731 17:06:24.197738   31519 main.go:141] libmachine: (ha-234651-m02) Calling .GetSSHPort
	I0731 17:06:24.197933   31519 main.go:141] libmachine: (ha-234651-m02) Calling .GetSSHKeyPath
	I0731 17:06:24.198089   31519 main.go:141] libmachine: (ha-234651-m02) Calling .GetSSHUsername
	I0731 17:06:24.198296   31519 sshutil.go:53] new ssh client: &{IP:192.168.39.235 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19349-8084/.minikube/machines/ha-234651-m02/id_rsa Username:docker}
	W0731 17:06:24.547396   31519 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.235:22: connect: no route to host
	I0731 17:06:24.547449   31519 retry.go:31] will retry after 214.01172ms: dial tcp 192.168.39.235:22: connect: no route to host
	W0731 17:06:27.811316   31519 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.235:22: connect: no route to host
	W0731 17:06:27.811405   31519 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.235:22: connect: no route to host
	E0731 17:06:27.811430   31519 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.235:22: connect: no route to host
	I0731 17:06:27.811439   31519 status.go:257] ha-234651-m02 status: &{Name:ha-234651-m02 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0731 17:06:27.811462   31519 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.235:22: connect: no route to host
	I0731 17:06:27.811478   31519 status.go:255] checking status of ha-234651-m03 ...
	I0731 17:06:27.811786   31519 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 17:06:27.811827   31519 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 17:06:27.827146   31519 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36107
	I0731 17:06:27.827607   31519 main.go:141] libmachine: () Calling .GetVersion
	I0731 17:06:27.828092   31519 main.go:141] libmachine: Using API Version  1
	I0731 17:06:27.828121   31519 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 17:06:27.828458   31519 main.go:141] libmachine: () Calling .GetMachineName
	I0731 17:06:27.828707   31519 main.go:141] libmachine: (ha-234651-m03) Calling .GetState
	I0731 17:06:27.830237   31519 status.go:330] ha-234651-m03 host status = "Running" (err=<nil>)
	I0731 17:06:27.830255   31519 host.go:66] Checking if "ha-234651-m03" exists ...
	I0731 17:06:27.830564   31519 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 17:06:27.830601   31519 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 17:06:27.846387   31519 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36609
	I0731 17:06:27.846814   31519 main.go:141] libmachine: () Calling .GetVersion
	I0731 17:06:27.847295   31519 main.go:141] libmachine: Using API Version  1
	I0731 17:06:27.847315   31519 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 17:06:27.847663   31519 main.go:141] libmachine: () Calling .GetMachineName
	I0731 17:06:27.847817   31519 main.go:141] libmachine: (ha-234651-m03) Calling .GetIP
	I0731 17:06:27.850608   31519 main.go:141] libmachine: (ha-234651-m03) DBG | domain ha-234651-m03 has defined MAC address 52:54:00:ac:c0:cf in network mk-ha-234651
	I0731 17:06:27.851029   31519 main.go:141] libmachine: (ha-234651-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ac:c0:cf", ip: ""} in network mk-ha-234651: {Iface:virbr1 ExpiryTime:2024-07-31 18:01:23 +0000 UTC Type:0 Mac:52:54:00:ac:c0:cf Iaid: IPaddr:192.168.39.139 Prefix:24 Hostname:ha-234651-m03 Clientid:01:52:54:00:ac:c0:cf}
	I0731 17:06:27.851062   31519 main.go:141] libmachine: (ha-234651-m03) DBG | domain ha-234651-m03 has defined IP address 192.168.39.139 and MAC address 52:54:00:ac:c0:cf in network mk-ha-234651
	I0731 17:06:27.851207   31519 host.go:66] Checking if "ha-234651-m03" exists ...
	I0731 17:06:27.851540   31519 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 17:06:27.851579   31519 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 17:06:27.866243   31519 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36885
	I0731 17:06:27.866634   31519 main.go:141] libmachine: () Calling .GetVersion
	I0731 17:06:27.867103   31519 main.go:141] libmachine: Using API Version  1
	I0731 17:06:27.867146   31519 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 17:06:27.867478   31519 main.go:141] libmachine: () Calling .GetMachineName
	I0731 17:06:27.867679   31519 main.go:141] libmachine: (ha-234651-m03) Calling .DriverName
	I0731 17:06:27.867862   31519 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0731 17:06:27.867885   31519 main.go:141] libmachine: (ha-234651-m03) Calling .GetSSHHostname
	I0731 17:06:27.872999   31519 main.go:141] libmachine: (ha-234651-m03) DBG | domain ha-234651-m03 has defined MAC address 52:54:00:ac:c0:cf in network mk-ha-234651
	I0731 17:06:27.873435   31519 main.go:141] libmachine: (ha-234651-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ac:c0:cf", ip: ""} in network mk-ha-234651: {Iface:virbr1 ExpiryTime:2024-07-31 18:01:23 +0000 UTC Type:0 Mac:52:54:00:ac:c0:cf Iaid: IPaddr:192.168.39.139 Prefix:24 Hostname:ha-234651-m03 Clientid:01:52:54:00:ac:c0:cf}
	I0731 17:06:27.873471   31519 main.go:141] libmachine: (ha-234651-m03) DBG | domain ha-234651-m03 has defined IP address 192.168.39.139 and MAC address 52:54:00:ac:c0:cf in network mk-ha-234651
	I0731 17:06:27.873603   31519 main.go:141] libmachine: (ha-234651-m03) Calling .GetSSHPort
	I0731 17:06:27.873774   31519 main.go:141] libmachine: (ha-234651-m03) Calling .GetSSHKeyPath
	I0731 17:06:27.873936   31519 main.go:141] libmachine: (ha-234651-m03) Calling .GetSSHUsername
	I0731 17:06:27.874079   31519 sshutil.go:53] new ssh client: &{IP:192.168.39.139 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19349-8084/.minikube/machines/ha-234651-m03/id_rsa Username:docker}
	I0731 17:06:27.950636   31519 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0731 17:06:27.967942   31519 kubeconfig.go:125] found "ha-234651" server: "https://192.168.39.254:8443"
	I0731 17:06:27.967965   31519 api_server.go:166] Checking apiserver status ...
	I0731 17:06:27.967999   31519 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 17:06:27.983973   31519 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1530/cgroup
	W0731 17:06:27.994645   31519 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1530/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0731 17:06:27.994688   31519 ssh_runner.go:195] Run: ls
	I0731 17:06:27.999241   31519 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0731 17:06:28.003285   31519 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0731 17:06:28.003304   31519 status.go:422] ha-234651-m03 apiserver status = Running (err=<nil>)
	I0731 17:06:28.003312   31519 status.go:257] ha-234651-m03 status: &{Name:ha-234651-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0731 17:06:28.003326   31519 status.go:255] checking status of ha-234651-m04 ...
	I0731 17:06:28.003593   31519 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 17:06:28.003623   31519 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 17:06:28.019357   31519 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36121
	I0731 17:06:28.019751   31519 main.go:141] libmachine: () Calling .GetVersion
	I0731 17:06:28.020171   31519 main.go:141] libmachine: Using API Version  1
	I0731 17:06:28.020193   31519 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 17:06:28.020618   31519 main.go:141] libmachine: () Calling .GetMachineName
	I0731 17:06:28.020861   31519 main.go:141] libmachine: (ha-234651-m04) Calling .GetState
	I0731 17:06:28.022380   31519 status.go:330] ha-234651-m04 host status = "Running" (err=<nil>)
	I0731 17:06:28.022398   31519 host.go:66] Checking if "ha-234651-m04" exists ...
	I0731 17:06:28.022774   31519 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 17:06:28.022813   31519 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 17:06:28.036902   31519 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43329
	I0731 17:06:28.037296   31519 main.go:141] libmachine: () Calling .GetVersion
	I0731 17:06:28.037697   31519 main.go:141] libmachine: Using API Version  1
	I0731 17:06:28.037715   31519 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 17:06:28.037972   31519 main.go:141] libmachine: () Calling .GetMachineName
	I0731 17:06:28.038147   31519 main.go:141] libmachine: (ha-234651-m04) Calling .GetIP
	I0731 17:06:28.040656   31519 main.go:141] libmachine: (ha-234651-m04) DBG | domain ha-234651-m04 has defined MAC address 52:54:00:25:f7:22 in network mk-ha-234651
	I0731 17:06:28.041084   31519 main.go:141] libmachine: (ha-234651-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:f7:22", ip: ""} in network mk-ha-234651: {Iface:virbr1 ExpiryTime:2024-07-31 18:02:48 +0000 UTC Type:0 Mac:52:54:00:25:f7:22 Iaid: IPaddr:192.168.39.216 Prefix:24 Hostname:ha-234651-m04 Clientid:01:52:54:00:25:f7:22}
	I0731 17:06:28.041108   31519 main.go:141] libmachine: (ha-234651-m04) DBG | domain ha-234651-m04 has defined IP address 192.168.39.216 and MAC address 52:54:00:25:f7:22 in network mk-ha-234651
	I0731 17:06:28.041295   31519 host.go:66] Checking if "ha-234651-m04" exists ...
	I0731 17:06:28.041614   31519 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 17:06:28.041645   31519 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 17:06:28.057253   31519 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39875
	I0731 17:06:28.057624   31519 main.go:141] libmachine: () Calling .GetVersion
	I0731 17:06:28.058088   31519 main.go:141] libmachine: Using API Version  1
	I0731 17:06:28.058109   31519 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 17:06:28.058385   31519 main.go:141] libmachine: () Calling .GetMachineName
	I0731 17:06:28.058588   31519 main.go:141] libmachine: (ha-234651-m04) Calling .DriverName
	I0731 17:06:28.058819   31519 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0731 17:06:28.058841   31519 main.go:141] libmachine: (ha-234651-m04) Calling .GetSSHHostname
	I0731 17:06:28.061399   31519 main.go:141] libmachine: (ha-234651-m04) DBG | domain ha-234651-m04 has defined MAC address 52:54:00:25:f7:22 in network mk-ha-234651
	I0731 17:06:28.061858   31519 main.go:141] libmachine: (ha-234651-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:f7:22", ip: ""} in network mk-ha-234651: {Iface:virbr1 ExpiryTime:2024-07-31 18:02:48 +0000 UTC Type:0 Mac:52:54:00:25:f7:22 Iaid: IPaddr:192.168.39.216 Prefix:24 Hostname:ha-234651-m04 Clientid:01:52:54:00:25:f7:22}
	I0731 17:06:28.061888   31519 main.go:141] libmachine: (ha-234651-m04) DBG | domain ha-234651-m04 has defined IP address 192.168.39.216 and MAC address 52:54:00:25:f7:22 in network mk-ha-234651
	I0731 17:06:28.062040   31519 main.go:141] libmachine: (ha-234651-m04) Calling .GetSSHPort
	I0731 17:06:28.062183   31519 main.go:141] libmachine: (ha-234651-m04) Calling .GetSSHKeyPath
	I0731 17:06:28.062312   31519 main.go:141] libmachine: (ha-234651-m04) Calling .GetSSHUsername
	I0731 17:06:28.062484   31519 sshutil.go:53] new ssh client: &{IP:192.168.39.216 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19349-8084/.minikube/machines/ha-234651-m04/id_rsa Username:docker}
	I0731 17:06:28.146637   31519 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0731 17:06:28.160759   31519 status.go:257] ha-234651-m04 status: &{Name:ha-234651-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
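The second run fails the same way: the dial to 192.168.39.235:22 gets "no route to host", is retried once after roughly 200ms, and when the retry also fails the storage probe is abandoned and ha-234651-m02 is again reported as host: Error. A minimal dial-with-retry sketch in that spirit follows; the attempt count and delay are assumptions for illustration, not minikube's actual retry policy.

// Illustrative only: TCP dial with one short retry, echoing the
// "will retry after ~200ms" lines in the log. Values are assumptions.
package main

import (
	"fmt"
	"net"
	"time"
)

func dialSSH(addr string) (net.Conn, error) {
	var lastErr error
	for attempt := 0; attempt < 2; attempt++ {
		conn, err := net.DialTimeout("tcp", addr, 3*time.Second)
		if err == nil {
			return conn, nil
		}
		lastErr = err // e.g. "connect: no route to host" for a stopped node
		time.Sleep(200 * time.Millisecond)
	}
	return nil, lastErr
}

func main() {
	if _, err := dialSSH("192.168.39.235:22"); err != nil {
		// The status reporter maps this into Host:Error, Kubelet/APIServer:Nonexistent.
		fmt.Println("status error:", err)
	}
}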
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-234651 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-234651 status -v=7 --alsologtostderr: exit status 3 (3.714000209s)

                                                
                                                
-- stdout --
	ha-234651
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-234651-m02
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-234651-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-234651-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0731 17:06:32.985368   31635 out.go:291] Setting OutFile to fd 1 ...
	I0731 17:06:32.985478   31635 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 17:06:32.985486   31635 out.go:304] Setting ErrFile to fd 2...
	I0731 17:06:32.985490   31635 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 17:06:32.985667   31635 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19349-8084/.minikube/bin
	I0731 17:06:32.985824   31635 out.go:298] Setting JSON to false
	I0731 17:06:32.985849   31635 mustload.go:65] Loading cluster: ha-234651
	I0731 17:06:32.985946   31635 notify.go:220] Checking for updates...
	I0731 17:06:32.986232   31635 config.go:182] Loaded profile config "ha-234651": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0731 17:06:32.986248   31635 status.go:255] checking status of ha-234651 ...
	I0731 17:06:32.986618   31635 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 17:06:32.986678   31635 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 17:06:33.005686   31635 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42499
	I0731 17:06:33.006111   31635 main.go:141] libmachine: () Calling .GetVersion
	I0731 17:06:33.006665   31635 main.go:141] libmachine: Using API Version  1
	I0731 17:06:33.006687   31635 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 17:06:33.007086   31635 main.go:141] libmachine: () Calling .GetMachineName
	I0731 17:06:33.007311   31635 main.go:141] libmachine: (ha-234651) Calling .GetState
	I0731 17:06:33.008996   31635 status.go:330] ha-234651 host status = "Running" (err=<nil>)
	I0731 17:06:33.009019   31635 host.go:66] Checking if "ha-234651" exists ...
	I0731 17:06:33.009267   31635 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 17:06:33.009298   31635 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 17:06:33.023663   31635 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36247
	I0731 17:06:33.024047   31635 main.go:141] libmachine: () Calling .GetVersion
	I0731 17:06:33.024492   31635 main.go:141] libmachine: Using API Version  1
	I0731 17:06:33.024527   31635 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 17:06:33.024830   31635 main.go:141] libmachine: () Calling .GetMachineName
	I0731 17:06:33.025013   31635 main.go:141] libmachine: (ha-234651) Calling .GetIP
	I0731 17:06:33.028093   31635 main.go:141] libmachine: (ha-234651) DBG | domain ha-234651 has defined MAC address 52:54:00:20:60:53 in network mk-ha-234651
	I0731 17:06:33.028561   31635 main.go:141] libmachine: (ha-234651) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:60:53", ip: ""} in network mk-ha-234651: {Iface:virbr1 ExpiryTime:2024-07-31 17:59:15 +0000 UTC Type:0 Mac:52:54:00:20:60:53 Iaid: IPaddr:192.168.39.243 Prefix:24 Hostname:ha-234651 Clientid:01:52:54:00:20:60:53}
	I0731 17:06:33.028594   31635 main.go:141] libmachine: (ha-234651) DBG | domain ha-234651 has defined IP address 192.168.39.243 and MAC address 52:54:00:20:60:53 in network mk-ha-234651
	I0731 17:06:33.028706   31635 host.go:66] Checking if "ha-234651" exists ...
	I0731 17:06:33.029003   31635 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 17:06:33.029041   31635 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 17:06:33.043702   31635 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39443
	I0731 17:06:33.044126   31635 main.go:141] libmachine: () Calling .GetVersion
	I0731 17:06:33.044603   31635 main.go:141] libmachine: Using API Version  1
	I0731 17:06:33.044621   31635 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 17:06:33.044944   31635 main.go:141] libmachine: () Calling .GetMachineName
	I0731 17:06:33.045141   31635 main.go:141] libmachine: (ha-234651) Calling .DriverName
	I0731 17:06:33.045328   31635 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0731 17:06:33.045362   31635 main.go:141] libmachine: (ha-234651) Calling .GetSSHHostname
	I0731 17:06:33.047897   31635 main.go:141] libmachine: (ha-234651) DBG | domain ha-234651 has defined MAC address 52:54:00:20:60:53 in network mk-ha-234651
	I0731 17:06:33.048289   31635 main.go:141] libmachine: (ha-234651) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:60:53", ip: ""} in network mk-ha-234651: {Iface:virbr1 ExpiryTime:2024-07-31 17:59:15 +0000 UTC Type:0 Mac:52:54:00:20:60:53 Iaid: IPaddr:192.168.39.243 Prefix:24 Hostname:ha-234651 Clientid:01:52:54:00:20:60:53}
	I0731 17:06:33.048315   31635 main.go:141] libmachine: (ha-234651) DBG | domain ha-234651 has defined IP address 192.168.39.243 and MAC address 52:54:00:20:60:53 in network mk-ha-234651
	I0731 17:06:33.048453   31635 main.go:141] libmachine: (ha-234651) Calling .GetSSHPort
	I0731 17:06:33.048632   31635 main.go:141] libmachine: (ha-234651) Calling .GetSSHKeyPath
	I0731 17:06:33.048752   31635 main.go:141] libmachine: (ha-234651) Calling .GetSSHUsername
	I0731 17:06:33.048921   31635 sshutil.go:53] new ssh client: &{IP:192.168.39.243 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19349-8084/.minikube/machines/ha-234651/id_rsa Username:docker}
	I0731 17:06:33.131036   31635 ssh_runner.go:195] Run: systemctl --version
	I0731 17:06:33.136831   31635 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0731 17:06:33.155230   31635 kubeconfig.go:125] found "ha-234651" server: "https://192.168.39.254:8443"
	I0731 17:06:33.155259   31635 api_server.go:166] Checking apiserver status ...
	I0731 17:06:33.155311   31635 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 17:06:33.169171   31635 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1220/cgroup
	W0731 17:06:33.181357   31635 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1220/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0731 17:06:33.181417   31635 ssh_runner.go:195] Run: ls
	I0731 17:06:33.185316   31635 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0731 17:06:33.190909   31635 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0731 17:06:33.190929   31635 status.go:422] ha-234651 apiserver status = Running (err=<nil>)
	I0731 17:06:33.190938   31635 status.go:257] ha-234651 status: &{Name:ha-234651 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0731 17:06:33.190956   31635 status.go:255] checking status of ha-234651-m02 ...
	I0731 17:06:33.191303   31635 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 17:06:33.191343   31635 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 17:06:33.205756   31635 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40697
	I0731 17:06:33.206199   31635 main.go:141] libmachine: () Calling .GetVersion
	I0731 17:06:33.206697   31635 main.go:141] libmachine: Using API Version  1
	I0731 17:06:33.206718   31635 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 17:06:33.207038   31635 main.go:141] libmachine: () Calling .GetMachineName
	I0731 17:06:33.207288   31635 main.go:141] libmachine: (ha-234651-m02) Calling .GetState
	I0731 17:06:33.208881   31635 status.go:330] ha-234651-m02 host status = "Running" (err=<nil>)
	I0731 17:06:33.208897   31635 host.go:66] Checking if "ha-234651-m02" exists ...
	I0731 17:06:33.209195   31635 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 17:06:33.209229   31635 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 17:06:33.224674   31635 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38235
	I0731 17:06:33.225034   31635 main.go:141] libmachine: () Calling .GetVersion
	I0731 17:06:33.225499   31635 main.go:141] libmachine: Using API Version  1
	I0731 17:06:33.225523   31635 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 17:06:33.225830   31635 main.go:141] libmachine: () Calling .GetMachineName
	I0731 17:06:33.226020   31635 main.go:141] libmachine: (ha-234651-m02) Calling .GetIP
	I0731 17:06:33.228595   31635 main.go:141] libmachine: (ha-234651-m02) DBG | domain ha-234651-m02 has defined MAC address 52:54:00:4c:97:0e in network mk-ha-234651
	I0731 17:06:33.228954   31635 main.go:141] libmachine: (ha-234651-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:97:0e", ip: ""} in network mk-ha-234651: {Iface:virbr1 ExpiryTime:2024-07-31 18:00:10 +0000 UTC Type:0 Mac:52:54:00:4c:97:0e Iaid: IPaddr:192.168.39.235 Prefix:24 Hostname:ha-234651-m02 Clientid:01:52:54:00:4c:97:0e}
	I0731 17:06:33.228976   31635 main.go:141] libmachine: (ha-234651-m02) DBG | domain ha-234651-m02 has defined IP address 192.168.39.235 and MAC address 52:54:00:4c:97:0e in network mk-ha-234651
	I0731 17:06:33.229173   31635 host.go:66] Checking if "ha-234651-m02" exists ...
	I0731 17:06:33.229447   31635 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 17:06:33.229487   31635 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 17:06:33.244443   31635 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41401
	I0731 17:06:33.244807   31635 main.go:141] libmachine: () Calling .GetVersion
	I0731 17:06:33.245267   31635 main.go:141] libmachine: Using API Version  1
	I0731 17:06:33.245284   31635 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 17:06:33.245578   31635 main.go:141] libmachine: () Calling .GetMachineName
	I0731 17:06:33.245747   31635 main.go:141] libmachine: (ha-234651-m02) Calling .DriverName
	I0731 17:06:33.245933   31635 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0731 17:06:33.245954   31635 main.go:141] libmachine: (ha-234651-m02) Calling .GetSSHHostname
	I0731 17:06:33.248621   31635 main.go:141] libmachine: (ha-234651-m02) DBG | domain ha-234651-m02 has defined MAC address 52:54:00:4c:97:0e in network mk-ha-234651
	I0731 17:06:33.249023   31635 main.go:141] libmachine: (ha-234651-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:97:0e", ip: ""} in network mk-ha-234651: {Iface:virbr1 ExpiryTime:2024-07-31 18:00:10 +0000 UTC Type:0 Mac:52:54:00:4c:97:0e Iaid: IPaddr:192.168.39.235 Prefix:24 Hostname:ha-234651-m02 Clientid:01:52:54:00:4c:97:0e}
	I0731 17:06:33.249059   31635 main.go:141] libmachine: (ha-234651-m02) DBG | domain ha-234651-m02 has defined IP address 192.168.39.235 and MAC address 52:54:00:4c:97:0e in network mk-ha-234651
	I0731 17:06:33.249179   31635 main.go:141] libmachine: (ha-234651-m02) Calling .GetSSHPort
	I0731 17:06:33.249319   31635 main.go:141] libmachine: (ha-234651-m02) Calling .GetSSHKeyPath
	I0731 17:06:33.249428   31635 main.go:141] libmachine: (ha-234651-m02) Calling .GetSSHUsername
	I0731 17:06:33.249532   31635 sshutil.go:53] new ssh client: &{IP:192.168.39.235 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19349-8084/.minikube/machines/ha-234651-m02/id_rsa Username:docker}
	W0731 17:06:36.327337   31635 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.235:22: connect: no route to host
	W0731 17:06:36.327462   31635 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.235:22: connect: no route to host
	E0731 17:06:36.327487   31635 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.235:22: connect: no route to host
	I0731 17:06:36.327496   31635 status.go:257] ha-234651-m02 status: &{Name:ha-234651-m02 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0731 17:06:36.327517   31635 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.235:22: connect: no route to host
	I0731 17:06:36.327528   31635 status.go:255] checking status of ha-234651-m03 ...
	I0731 17:06:36.327830   31635 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 17:06:36.327867   31635 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 17:06:36.342546   31635 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33099
	I0731 17:06:36.343067   31635 main.go:141] libmachine: () Calling .GetVersion
	I0731 17:06:36.343701   31635 main.go:141] libmachine: Using API Version  1
	I0731 17:06:36.343727   31635 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 17:06:36.344026   31635 main.go:141] libmachine: () Calling .GetMachineName
	I0731 17:06:36.344222   31635 main.go:141] libmachine: (ha-234651-m03) Calling .GetState
	I0731 17:06:36.345656   31635 status.go:330] ha-234651-m03 host status = "Running" (err=<nil>)
	I0731 17:06:36.345669   31635 host.go:66] Checking if "ha-234651-m03" exists ...
	I0731 17:06:36.345948   31635 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 17:06:36.345978   31635 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 17:06:36.360489   31635 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34653
	I0731 17:06:36.360872   31635 main.go:141] libmachine: () Calling .GetVersion
	I0731 17:06:36.361298   31635 main.go:141] libmachine: Using API Version  1
	I0731 17:06:36.361316   31635 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 17:06:36.361660   31635 main.go:141] libmachine: () Calling .GetMachineName
	I0731 17:06:36.361836   31635 main.go:141] libmachine: (ha-234651-m03) Calling .GetIP
	I0731 17:06:36.364827   31635 main.go:141] libmachine: (ha-234651-m03) DBG | domain ha-234651-m03 has defined MAC address 52:54:00:ac:c0:cf in network mk-ha-234651
	I0731 17:06:36.365243   31635 main.go:141] libmachine: (ha-234651-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ac:c0:cf", ip: ""} in network mk-ha-234651: {Iface:virbr1 ExpiryTime:2024-07-31 18:01:23 +0000 UTC Type:0 Mac:52:54:00:ac:c0:cf Iaid: IPaddr:192.168.39.139 Prefix:24 Hostname:ha-234651-m03 Clientid:01:52:54:00:ac:c0:cf}
	I0731 17:06:36.365268   31635 main.go:141] libmachine: (ha-234651-m03) DBG | domain ha-234651-m03 has defined IP address 192.168.39.139 and MAC address 52:54:00:ac:c0:cf in network mk-ha-234651
	I0731 17:06:36.365435   31635 host.go:66] Checking if "ha-234651-m03" exists ...
	I0731 17:06:36.365714   31635 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 17:06:36.365745   31635 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 17:06:36.381004   31635 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33317
	I0731 17:06:36.381406   31635 main.go:141] libmachine: () Calling .GetVersion
	I0731 17:06:36.381915   31635 main.go:141] libmachine: Using API Version  1
	I0731 17:06:36.381939   31635 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 17:06:36.382244   31635 main.go:141] libmachine: () Calling .GetMachineName
	I0731 17:06:36.382417   31635 main.go:141] libmachine: (ha-234651-m03) Calling .DriverName
	I0731 17:06:36.382593   31635 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0731 17:06:36.382614   31635 main.go:141] libmachine: (ha-234651-m03) Calling .GetSSHHostname
	I0731 17:06:36.385263   31635 main.go:141] libmachine: (ha-234651-m03) DBG | domain ha-234651-m03 has defined MAC address 52:54:00:ac:c0:cf in network mk-ha-234651
	I0731 17:06:36.385656   31635 main.go:141] libmachine: (ha-234651-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ac:c0:cf", ip: ""} in network mk-ha-234651: {Iface:virbr1 ExpiryTime:2024-07-31 18:01:23 +0000 UTC Type:0 Mac:52:54:00:ac:c0:cf Iaid: IPaddr:192.168.39.139 Prefix:24 Hostname:ha-234651-m03 Clientid:01:52:54:00:ac:c0:cf}
	I0731 17:06:36.385695   31635 main.go:141] libmachine: (ha-234651-m03) DBG | domain ha-234651-m03 has defined IP address 192.168.39.139 and MAC address 52:54:00:ac:c0:cf in network mk-ha-234651
	I0731 17:06:36.385803   31635 main.go:141] libmachine: (ha-234651-m03) Calling .GetSSHPort
	I0731 17:06:36.385946   31635 main.go:141] libmachine: (ha-234651-m03) Calling .GetSSHKeyPath
	I0731 17:06:36.386096   31635 main.go:141] libmachine: (ha-234651-m03) Calling .GetSSHUsername
	I0731 17:06:36.386213   31635 sshutil.go:53] new ssh client: &{IP:192.168.39.139 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19349-8084/.minikube/machines/ha-234651-m03/id_rsa Username:docker}
	I0731 17:06:36.462168   31635 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0731 17:06:36.476688   31635 kubeconfig.go:125] found "ha-234651" server: "https://192.168.39.254:8443"
	I0731 17:06:36.476718   31635 api_server.go:166] Checking apiserver status ...
	I0731 17:06:36.476755   31635 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 17:06:36.489913   31635 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1530/cgroup
	W0731 17:06:36.498474   31635 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1530/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0731 17:06:36.498522   31635 ssh_runner.go:195] Run: ls
	I0731 17:06:36.502305   31635 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0731 17:06:36.508122   31635 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0731 17:06:36.508145   31635 status.go:422] ha-234651-m03 apiserver status = Running (err=<nil>)
	I0731 17:06:36.508155   31635 status.go:257] ha-234651-m03 status: &{Name:ha-234651-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0731 17:06:36.508172   31635 status.go:255] checking status of ha-234651-m04 ...
	I0731 17:06:36.508469   31635 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 17:06:36.508501   31635 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 17:06:36.522792   31635 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34531
	I0731 17:06:36.523173   31635 main.go:141] libmachine: () Calling .GetVersion
	I0731 17:06:36.523577   31635 main.go:141] libmachine: Using API Version  1
	I0731 17:06:36.523600   31635 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 17:06:36.523840   31635 main.go:141] libmachine: () Calling .GetMachineName
	I0731 17:06:36.524045   31635 main.go:141] libmachine: (ha-234651-m04) Calling .GetState
	I0731 17:06:36.525567   31635 status.go:330] ha-234651-m04 host status = "Running" (err=<nil>)
	I0731 17:06:36.525582   31635 host.go:66] Checking if "ha-234651-m04" exists ...
	I0731 17:06:36.525858   31635 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 17:06:36.525892   31635 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 17:06:36.539948   31635 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39987
	I0731 17:06:36.540320   31635 main.go:141] libmachine: () Calling .GetVersion
	I0731 17:06:36.540745   31635 main.go:141] libmachine: Using API Version  1
	I0731 17:06:36.540766   31635 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 17:06:36.541092   31635 main.go:141] libmachine: () Calling .GetMachineName
	I0731 17:06:36.541262   31635 main.go:141] libmachine: (ha-234651-m04) Calling .GetIP
	I0731 17:06:36.544234   31635 main.go:141] libmachine: (ha-234651-m04) DBG | domain ha-234651-m04 has defined MAC address 52:54:00:25:f7:22 in network mk-ha-234651
	I0731 17:06:36.544662   31635 main.go:141] libmachine: (ha-234651-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:f7:22", ip: ""} in network mk-ha-234651: {Iface:virbr1 ExpiryTime:2024-07-31 18:02:48 +0000 UTC Type:0 Mac:52:54:00:25:f7:22 Iaid: IPaddr:192.168.39.216 Prefix:24 Hostname:ha-234651-m04 Clientid:01:52:54:00:25:f7:22}
	I0731 17:06:36.544690   31635 main.go:141] libmachine: (ha-234651-m04) DBG | domain ha-234651-m04 has defined IP address 192.168.39.216 and MAC address 52:54:00:25:f7:22 in network mk-ha-234651
	I0731 17:06:36.544827   31635 host.go:66] Checking if "ha-234651-m04" exists ...
	I0731 17:06:36.545094   31635 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 17:06:36.545124   31635 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 17:06:36.559561   31635 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41527
	I0731 17:06:36.559920   31635 main.go:141] libmachine: () Calling .GetVersion
	I0731 17:06:36.560308   31635 main.go:141] libmachine: Using API Version  1
	I0731 17:06:36.560329   31635 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 17:06:36.560610   31635 main.go:141] libmachine: () Calling .GetMachineName
	I0731 17:06:36.560791   31635 main.go:141] libmachine: (ha-234651-m04) Calling .DriverName
	I0731 17:06:36.560979   31635 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0731 17:06:36.560997   31635 main.go:141] libmachine: (ha-234651-m04) Calling .GetSSHHostname
	I0731 17:06:36.563824   31635 main.go:141] libmachine: (ha-234651-m04) DBG | domain ha-234651-m04 has defined MAC address 52:54:00:25:f7:22 in network mk-ha-234651
	I0731 17:06:36.564294   31635 main.go:141] libmachine: (ha-234651-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:f7:22", ip: ""} in network mk-ha-234651: {Iface:virbr1 ExpiryTime:2024-07-31 18:02:48 +0000 UTC Type:0 Mac:52:54:00:25:f7:22 Iaid: IPaddr:192.168.39.216 Prefix:24 Hostname:ha-234651-m04 Clientid:01:52:54:00:25:f7:22}
	I0731 17:06:36.564313   31635 main.go:141] libmachine: (ha-234651-m04) DBG | domain ha-234651-m04 has defined IP address 192.168.39.216 and MAC address 52:54:00:25:f7:22 in network mk-ha-234651
	I0731 17:06:36.564453   31635 main.go:141] libmachine: (ha-234651-m04) Calling .GetSSHPort
	I0731 17:06:36.564660   31635 main.go:141] libmachine: (ha-234651-m04) Calling .GetSSHKeyPath
	I0731 17:06:36.564803   31635 main.go:141] libmachine: (ha-234651-m04) Calling .GetSSHUsername
	I0731 17:06:36.564921   31635 sshutil.go:53] new ssh client: &{IP:192.168.39.216 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19349-8084/.minikube/machines/ha-234651-m04/id_rsa Username:docker}
	I0731 17:06:36.646135   31635 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0731 17:06:36.658793   31635 status.go:257] ha-234651-m04 status: &{Name:ha-234651-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-234651 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-234651 status -v=7 --alsologtostderr: exit status 7 (592.027996ms)

                                                
                                                
-- stdout --
	ha-234651
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-234651-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-234651-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-234651-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0731 17:06:43.048269   31772 out.go:291] Setting OutFile to fd 1 ...
	I0731 17:06:43.048383   31772 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 17:06:43.048391   31772 out.go:304] Setting ErrFile to fd 2...
	I0731 17:06:43.048396   31772 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 17:06:43.048598   31772 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19349-8084/.minikube/bin
	I0731 17:06:43.048767   31772 out.go:298] Setting JSON to false
	I0731 17:06:43.048791   31772 mustload.go:65] Loading cluster: ha-234651
	I0731 17:06:43.048883   31772 notify.go:220] Checking for updates...
	I0731 17:06:43.049741   31772 config.go:182] Loaded profile config "ha-234651": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0731 17:06:43.049779   31772 status.go:255] checking status of ha-234651 ...
	I0731 17:06:43.050993   31772 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 17:06:43.051033   31772 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 17:06:43.066010   31772 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40161
	I0731 17:06:43.066454   31772 main.go:141] libmachine: () Calling .GetVersion
	I0731 17:06:43.067032   31772 main.go:141] libmachine: Using API Version  1
	I0731 17:06:43.067055   31772 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 17:06:43.067463   31772 main.go:141] libmachine: () Calling .GetMachineName
	I0731 17:06:43.067661   31772 main.go:141] libmachine: (ha-234651) Calling .GetState
	I0731 17:06:43.069610   31772 status.go:330] ha-234651 host status = "Running" (err=<nil>)
	I0731 17:06:43.069628   31772 host.go:66] Checking if "ha-234651" exists ...
	I0731 17:06:43.069931   31772 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 17:06:43.069975   31772 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 17:06:43.084069   31772 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45391
	I0731 17:06:43.084518   31772 main.go:141] libmachine: () Calling .GetVersion
	I0731 17:06:43.084973   31772 main.go:141] libmachine: Using API Version  1
	I0731 17:06:43.084991   31772 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 17:06:43.085261   31772 main.go:141] libmachine: () Calling .GetMachineName
	I0731 17:06:43.085438   31772 main.go:141] libmachine: (ha-234651) Calling .GetIP
	I0731 17:06:43.088200   31772 main.go:141] libmachine: (ha-234651) DBG | domain ha-234651 has defined MAC address 52:54:00:20:60:53 in network mk-ha-234651
	I0731 17:06:43.088702   31772 main.go:141] libmachine: (ha-234651) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:60:53", ip: ""} in network mk-ha-234651: {Iface:virbr1 ExpiryTime:2024-07-31 17:59:15 +0000 UTC Type:0 Mac:52:54:00:20:60:53 Iaid: IPaddr:192.168.39.243 Prefix:24 Hostname:ha-234651 Clientid:01:52:54:00:20:60:53}
	I0731 17:06:43.088728   31772 main.go:141] libmachine: (ha-234651) DBG | domain ha-234651 has defined IP address 192.168.39.243 and MAC address 52:54:00:20:60:53 in network mk-ha-234651
	I0731 17:06:43.088870   31772 host.go:66] Checking if "ha-234651" exists ...
	I0731 17:06:43.089258   31772 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 17:06:43.089301   31772 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 17:06:43.103339   31772 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44837
	I0731 17:06:43.103696   31772 main.go:141] libmachine: () Calling .GetVersion
	I0731 17:06:43.104113   31772 main.go:141] libmachine: Using API Version  1
	I0731 17:06:43.104129   31772 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 17:06:43.104441   31772 main.go:141] libmachine: () Calling .GetMachineName
	I0731 17:06:43.104633   31772 main.go:141] libmachine: (ha-234651) Calling .DriverName
	I0731 17:06:43.104804   31772 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0731 17:06:43.104833   31772 main.go:141] libmachine: (ha-234651) Calling .GetSSHHostname
	I0731 17:06:43.107643   31772 main.go:141] libmachine: (ha-234651) DBG | domain ha-234651 has defined MAC address 52:54:00:20:60:53 in network mk-ha-234651
	I0731 17:06:43.108079   31772 main.go:141] libmachine: (ha-234651) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:60:53", ip: ""} in network mk-ha-234651: {Iface:virbr1 ExpiryTime:2024-07-31 17:59:15 +0000 UTC Type:0 Mac:52:54:00:20:60:53 Iaid: IPaddr:192.168.39.243 Prefix:24 Hostname:ha-234651 Clientid:01:52:54:00:20:60:53}
	I0731 17:06:43.108111   31772 main.go:141] libmachine: (ha-234651) DBG | domain ha-234651 has defined IP address 192.168.39.243 and MAC address 52:54:00:20:60:53 in network mk-ha-234651
	I0731 17:06:43.108242   31772 main.go:141] libmachine: (ha-234651) Calling .GetSSHPort
	I0731 17:06:43.108435   31772 main.go:141] libmachine: (ha-234651) Calling .GetSSHKeyPath
	I0731 17:06:43.108589   31772 main.go:141] libmachine: (ha-234651) Calling .GetSSHUsername
	I0731 17:06:43.108735   31772 sshutil.go:53] new ssh client: &{IP:192.168.39.243 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19349-8084/.minikube/machines/ha-234651/id_rsa Username:docker}
	I0731 17:06:43.190083   31772 ssh_runner.go:195] Run: systemctl --version
	I0731 17:06:43.196127   31772 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0731 17:06:43.209895   31772 kubeconfig.go:125] found "ha-234651" server: "https://192.168.39.254:8443"
	I0731 17:06:43.209921   31772 api_server.go:166] Checking apiserver status ...
	I0731 17:06:43.209952   31772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 17:06:43.224068   31772 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1220/cgroup
	W0731 17:06:43.232818   31772 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1220/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0731 17:06:43.232879   31772 ssh_runner.go:195] Run: ls
	I0731 17:06:43.236992   31772 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0731 17:06:43.242563   31772 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0731 17:06:43.242582   31772 status.go:422] ha-234651 apiserver status = Running (err=<nil>)
	I0731 17:06:43.242606   31772 status.go:257] ha-234651 status: &{Name:ha-234651 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0731 17:06:43.242620   31772 status.go:255] checking status of ha-234651-m02 ...
	I0731 17:06:43.242896   31772 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 17:06:43.242930   31772 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 17:06:43.258082   31772 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42937
	I0731 17:06:43.258469   31772 main.go:141] libmachine: () Calling .GetVersion
	I0731 17:06:43.258922   31772 main.go:141] libmachine: Using API Version  1
	I0731 17:06:43.258939   31772 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 17:06:43.259243   31772 main.go:141] libmachine: () Calling .GetMachineName
	I0731 17:06:43.259412   31772 main.go:141] libmachine: (ha-234651-m02) Calling .GetState
	I0731 17:06:43.260871   31772 status.go:330] ha-234651-m02 host status = "Stopped" (err=<nil>)
	I0731 17:06:43.260884   31772 status.go:343] host is not running, skipping remaining checks
	I0731 17:06:43.260892   31772 status.go:257] ha-234651-m02 status: &{Name:ha-234651-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0731 17:06:43.260911   31772 status.go:255] checking status of ha-234651-m03 ...
	I0731 17:06:43.261308   31772 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 17:06:43.261355   31772 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 17:06:43.276140   31772 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43719
	I0731 17:06:43.276551   31772 main.go:141] libmachine: () Calling .GetVersion
	I0731 17:06:43.277018   31772 main.go:141] libmachine: Using API Version  1
	I0731 17:06:43.277042   31772 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 17:06:43.277435   31772 main.go:141] libmachine: () Calling .GetMachineName
	I0731 17:06:43.277651   31772 main.go:141] libmachine: (ha-234651-m03) Calling .GetState
	I0731 17:06:43.279272   31772 status.go:330] ha-234651-m03 host status = "Running" (err=<nil>)
	I0731 17:06:43.279285   31772 host.go:66] Checking if "ha-234651-m03" exists ...
	I0731 17:06:43.279557   31772 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 17:06:43.279592   31772 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 17:06:43.293467   31772 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44571
	I0731 17:06:43.293839   31772 main.go:141] libmachine: () Calling .GetVersion
	I0731 17:06:43.294224   31772 main.go:141] libmachine: Using API Version  1
	I0731 17:06:43.294246   31772 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 17:06:43.294543   31772 main.go:141] libmachine: () Calling .GetMachineName
	I0731 17:06:43.294807   31772 main.go:141] libmachine: (ha-234651-m03) Calling .GetIP
	I0731 17:06:43.297231   31772 main.go:141] libmachine: (ha-234651-m03) DBG | domain ha-234651-m03 has defined MAC address 52:54:00:ac:c0:cf in network mk-ha-234651
	I0731 17:06:43.297609   31772 main.go:141] libmachine: (ha-234651-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ac:c0:cf", ip: ""} in network mk-ha-234651: {Iface:virbr1 ExpiryTime:2024-07-31 18:01:23 +0000 UTC Type:0 Mac:52:54:00:ac:c0:cf Iaid: IPaddr:192.168.39.139 Prefix:24 Hostname:ha-234651-m03 Clientid:01:52:54:00:ac:c0:cf}
	I0731 17:06:43.297644   31772 main.go:141] libmachine: (ha-234651-m03) DBG | domain ha-234651-m03 has defined IP address 192.168.39.139 and MAC address 52:54:00:ac:c0:cf in network mk-ha-234651
	I0731 17:06:43.297777   31772 host.go:66] Checking if "ha-234651-m03" exists ...
	I0731 17:06:43.298071   31772 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 17:06:43.298111   31772 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 17:06:43.312401   31772 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41473
	I0731 17:06:43.312737   31772 main.go:141] libmachine: () Calling .GetVersion
	I0731 17:06:43.313209   31772 main.go:141] libmachine: Using API Version  1
	I0731 17:06:43.313230   31772 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 17:06:43.313533   31772 main.go:141] libmachine: () Calling .GetMachineName
	I0731 17:06:43.313734   31772 main.go:141] libmachine: (ha-234651-m03) Calling .DriverName
	I0731 17:06:43.313944   31772 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0731 17:06:43.313970   31772 main.go:141] libmachine: (ha-234651-m03) Calling .GetSSHHostname
	I0731 17:06:43.317013   31772 main.go:141] libmachine: (ha-234651-m03) DBG | domain ha-234651-m03 has defined MAC address 52:54:00:ac:c0:cf in network mk-ha-234651
	I0731 17:06:43.317479   31772 main.go:141] libmachine: (ha-234651-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ac:c0:cf", ip: ""} in network mk-ha-234651: {Iface:virbr1 ExpiryTime:2024-07-31 18:01:23 +0000 UTC Type:0 Mac:52:54:00:ac:c0:cf Iaid: IPaddr:192.168.39.139 Prefix:24 Hostname:ha-234651-m03 Clientid:01:52:54:00:ac:c0:cf}
	I0731 17:06:43.317502   31772 main.go:141] libmachine: (ha-234651-m03) DBG | domain ha-234651-m03 has defined IP address 192.168.39.139 and MAC address 52:54:00:ac:c0:cf in network mk-ha-234651
	I0731 17:06:43.317685   31772 main.go:141] libmachine: (ha-234651-m03) Calling .GetSSHPort
	I0731 17:06:43.317844   31772 main.go:141] libmachine: (ha-234651-m03) Calling .GetSSHKeyPath
	I0731 17:06:43.317987   31772 main.go:141] libmachine: (ha-234651-m03) Calling .GetSSHUsername
	I0731 17:06:43.318091   31772 sshutil.go:53] new ssh client: &{IP:192.168.39.139 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19349-8084/.minikube/machines/ha-234651-m03/id_rsa Username:docker}
	I0731 17:06:43.395099   31772 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0731 17:06:43.408578   31772 kubeconfig.go:125] found "ha-234651" server: "https://192.168.39.254:8443"
	I0731 17:06:43.408602   31772 api_server.go:166] Checking apiserver status ...
	I0731 17:06:43.408629   31772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 17:06:43.421961   31772 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1530/cgroup
	W0731 17:06:43.432076   31772 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1530/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0731 17:06:43.432138   31772 ssh_runner.go:195] Run: ls
	I0731 17:06:43.436429   31772 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0731 17:06:43.440595   31772 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0731 17:06:43.440616   31772 status.go:422] ha-234651-m03 apiserver status = Running (err=<nil>)
	I0731 17:06:43.440624   31772 status.go:257] ha-234651-m03 status: &{Name:ha-234651-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0731 17:06:43.440637   31772 status.go:255] checking status of ha-234651-m04 ...
	I0731 17:06:43.440947   31772 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 17:06:43.440979   31772 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 17:06:43.456144   31772 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43889
	I0731 17:06:43.456541   31772 main.go:141] libmachine: () Calling .GetVersion
	I0731 17:06:43.457010   31772 main.go:141] libmachine: Using API Version  1
	I0731 17:06:43.457037   31772 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 17:06:43.457398   31772 main.go:141] libmachine: () Calling .GetMachineName
	I0731 17:06:43.457612   31772 main.go:141] libmachine: (ha-234651-m04) Calling .GetState
	I0731 17:06:43.459303   31772 status.go:330] ha-234651-m04 host status = "Running" (err=<nil>)
	I0731 17:06:43.459326   31772 host.go:66] Checking if "ha-234651-m04" exists ...
	I0731 17:06:43.459737   31772 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 17:06:43.459781   31772 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 17:06:43.474377   31772 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43661
	I0731 17:06:43.474839   31772 main.go:141] libmachine: () Calling .GetVersion
	I0731 17:06:43.475338   31772 main.go:141] libmachine: Using API Version  1
	I0731 17:06:43.475360   31772 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 17:06:43.475727   31772 main.go:141] libmachine: () Calling .GetMachineName
	I0731 17:06:43.475905   31772 main.go:141] libmachine: (ha-234651-m04) Calling .GetIP
	I0731 17:06:43.478862   31772 main.go:141] libmachine: (ha-234651-m04) DBG | domain ha-234651-m04 has defined MAC address 52:54:00:25:f7:22 in network mk-ha-234651
	I0731 17:06:43.479326   31772 main.go:141] libmachine: (ha-234651-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:f7:22", ip: ""} in network mk-ha-234651: {Iface:virbr1 ExpiryTime:2024-07-31 18:02:48 +0000 UTC Type:0 Mac:52:54:00:25:f7:22 Iaid: IPaddr:192.168.39.216 Prefix:24 Hostname:ha-234651-m04 Clientid:01:52:54:00:25:f7:22}
	I0731 17:06:43.479356   31772 main.go:141] libmachine: (ha-234651-m04) DBG | domain ha-234651-m04 has defined IP address 192.168.39.216 and MAC address 52:54:00:25:f7:22 in network mk-ha-234651
	I0731 17:06:43.479494   31772 host.go:66] Checking if "ha-234651-m04" exists ...
	I0731 17:06:43.479879   31772 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 17:06:43.479947   31772 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 17:06:43.494050   31772 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44839
	I0731 17:06:43.494480   31772 main.go:141] libmachine: () Calling .GetVersion
	I0731 17:06:43.494911   31772 main.go:141] libmachine: Using API Version  1
	I0731 17:06:43.494971   31772 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 17:06:43.495308   31772 main.go:141] libmachine: () Calling .GetMachineName
	I0731 17:06:43.495698   31772 main.go:141] libmachine: (ha-234651-m04) Calling .DriverName
	I0731 17:06:43.495943   31772 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0731 17:06:43.495966   31772 main.go:141] libmachine: (ha-234651-m04) Calling .GetSSHHostname
	I0731 17:06:43.498237   31772 main.go:141] libmachine: (ha-234651-m04) DBG | domain ha-234651-m04 has defined MAC address 52:54:00:25:f7:22 in network mk-ha-234651
	I0731 17:06:43.498675   31772 main.go:141] libmachine: (ha-234651-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:f7:22", ip: ""} in network mk-ha-234651: {Iface:virbr1 ExpiryTime:2024-07-31 18:02:48 +0000 UTC Type:0 Mac:52:54:00:25:f7:22 Iaid: IPaddr:192.168.39.216 Prefix:24 Hostname:ha-234651-m04 Clientid:01:52:54:00:25:f7:22}
	I0731 17:06:43.498692   31772 main.go:141] libmachine: (ha-234651-m04) DBG | domain ha-234651-m04 has defined IP address 192.168.39.216 and MAC address 52:54:00:25:f7:22 in network mk-ha-234651
	I0731 17:06:43.498830   31772 main.go:141] libmachine: (ha-234651-m04) Calling .GetSSHPort
	I0731 17:06:43.499028   31772 main.go:141] libmachine: (ha-234651-m04) Calling .GetSSHKeyPath
	I0731 17:06:43.499193   31772 main.go:141] libmachine: (ha-234651-m04) Calling .GetSSHUsername
	I0731 17:06:43.499330   31772 sshutil.go:53] new ssh client: &{IP:192.168.39.216 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19349-8084/.minikube/machines/ha-234651-m04/id_rsa Username:docker}
	I0731 17:06:43.585737   31772 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0731 17:06:43.599430   31772 status.go:257] ha-234651-m04 status: &{Name:ha-234651-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-234651 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-234651 status -v=7 --alsologtostderr: exit status 7 (606.279341ms)

                                                
                                                
-- stdout --
	ha-234651
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-234651-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-234651-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-234651-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0731 17:06:53.622076   31876 out.go:291] Setting OutFile to fd 1 ...
	I0731 17:06:53.622348   31876 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 17:06:53.622358   31876 out.go:304] Setting ErrFile to fd 2...
	I0731 17:06:53.622362   31876 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 17:06:53.622576   31876 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19349-8084/.minikube/bin
	I0731 17:06:53.622805   31876 out.go:298] Setting JSON to false
	I0731 17:06:53.622836   31876 mustload.go:65] Loading cluster: ha-234651
	I0731 17:06:53.622863   31876 notify.go:220] Checking for updates...
	I0731 17:06:53.623293   31876 config.go:182] Loaded profile config "ha-234651": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0731 17:06:53.623311   31876 status.go:255] checking status of ha-234651 ...
	I0731 17:06:53.623698   31876 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 17:06:53.623772   31876 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 17:06:53.642540   31876 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36553
	I0731 17:06:53.642995   31876 main.go:141] libmachine: () Calling .GetVersion
	I0731 17:06:53.643644   31876 main.go:141] libmachine: Using API Version  1
	I0731 17:06:53.643677   31876 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 17:06:53.644032   31876 main.go:141] libmachine: () Calling .GetMachineName
	I0731 17:06:53.644243   31876 main.go:141] libmachine: (ha-234651) Calling .GetState
	I0731 17:06:53.645915   31876 status.go:330] ha-234651 host status = "Running" (err=<nil>)
	I0731 17:06:53.645936   31876 host.go:66] Checking if "ha-234651" exists ...
	I0731 17:06:53.646204   31876 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 17:06:53.646233   31876 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 17:06:53.660284   31876 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35137
	I0731 17:06:53.660646   31876 main.go:141] libmachine: () Calling .GetVersion
	I0731 17:06:53.661093   31876 main.go:141] libmachine: Using API Version  1
	I0731 17:06:53.661112   31876 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 17:06:53.661477   31876 main.go:141] libmachine: () Calling .GetMachineName
	I0731 17:06:53.661665   31876 main.go:141] libmachine: (ha-234651) Calling .GetIP
	I0731 17:06:53.664372   31876 main.go:141] libmachine: (ha-234651) DBG | domain ha-234651 has defined MAC address 52:54:00:20:60:53 in network mk-ha-234651
	I0731 17:06:53.664788   31876 main.go:141] libmachine: (ha-234651) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:60:53", ip: ""} in network mk-ha-234651: {Iface:virbr1 ExpiryTime:2024-07-31 17:59:15 +0000 UTC Type:0 Mac:52:54:00:20:60:53 Iaid: IPaddr:192.168.39.243 Prefix:24 Hostname:ha-234651 Clientid:01:52:54:00:20:60:53}
	I0731 17:06:53.664810   31876 main.go:141] libmachine: (ha-234651) DBG | domain ha-234651 has defined IP address 192.168.39.243 and MAC address 52:54:00:20:60:53 in network mk-ha-234651
	I0731 17:06:53.664942   31876 host.go:66] Checking if "ha-234651" exists ...
	I0731 17:06:53.665215   31876 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 17:06:53.665252   31876 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 17:06:53.679367   31876 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41059
	I0731 17:06:53.679755   31876 main.go:141] libmachine: () Calling .GetVersion
	I0731 17:06:53.680269   31876 main.go:141] libmachine: Using API Version  1
	I0731 17:06:53.680291   31876 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 17:06:53.680692   31876 main.go:141] libmachine: () Calling .GetMachineName
	I0731 17:06:53.680885   31876 main.go:141] libmachine: (ha-234651) Calling .DriverName
	I0731 17:06:53.681067   31876 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0731 17:06:53.681089   31876 main.go:141] libmachine: (ha-234651) Calling .GetSSHHostname
	I0731 17:06:53.683791   31876 main.go:141] libmachine: (ha-234651) DBG | domain ha-234651 has defined MAC address 52:54:00:20:60:53 in network mk-ha-234651
	I0731 17:06:53.684190   31876 main.go:141] libmachine: (ha-234651) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:60:53", ip: ""} in network mk-ha-234651: {Iface:virbr1 ExpiryTime:2024-07-31 17:59:15 +0000 UTC Type:0 Mac:52:54:00:20:60:53 Iaid: IPaddr:192.168.39.243 Prefix:24 Hostname:ha-234651 Clientid:01:52:54:00:20:60:53}
	I0731 17:06:53.684223   31876 main.go:141] libmachine: (ha-234651) DBG | domain ha-234651 has defined IP address 192.168.39.243 and MAC address 52:54:00:20:60:53 in network mk-ha-234651
	I0731 17:06:53.684329   31876 main.go:141] libmachine: (ha-234651) Calling .GetSSHPort
	I0731 17:06:53.684505   31876 main.go:141] libmachine: (ha-234651) Calling .GetSSHKeyPath
	I0731 17:06:53.684662   31876 main.go:141] libmachine: (ha-234651) Calling .GetSSHUsername
	I0731 17:06:53.684807   31876 sshutil.go:53] new ssh client: &{IP:192.168.39.243 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19349-8084/.minikube/machines/ha-234651/id_rsa Username:docker}
	I0731 17:06:53.767496   31876 ssh_runner.go:195] Run: systemctl --version
	I0731 17:06:53.774795   31876 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0731 17:06:53.789153   31876 kubeconfig.go:125] found "ha-234651" server: "https://192.168.39.254:8443"
	I0731 17:06:53.789183   31876 api_server.go:166] Checking apiserver status ...
	I0731 17:06:53.789227   31876 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 17:06:53.804938   31876 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1220/cgroup
	W0731 17:06:53.820605   31876 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1220/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0731 17:06:53.820662   31876 ssh_runner.go:195] Run: ls
	I0731 17:06:53.826348   31876 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0731 17:06:53.830361   31876 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0731 17:06:53.830383   31876 status.go:422] ha-234651 apiserver status = Running (err=<nil>)
	I0731 17:06:53.830395   31876 status.go:257] ha-234651 status: &{Name:ha-234651 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0731 17:06:53.830414   31876 status.go:255] checking status of ha-234651-m02 ...
	I0731 17:06:53.830807   31876 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 17:06:53.830858   31876 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 17:06:53.845303   31876 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40453
	I0731 17:06:53.845751   31876 main.go:141] libmachine: () Calling .GetVersion
	I0731 17:06:53.846263   31876 main.go:141] libmachine: Using API Version  1
	I0731 17:06:53.846289   31876 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 17:06:53.846627   31876 main.go:141] libmachine: () Calling .GetMachineName
	I0731 17:06:53.846847   31876 main.go:141] libmachine: (ha-234651-m02) Calling .GetState
	I0731 17:06:53.848627   31876 status.go:330] ha-234651-m02 host status = "Stopped" (err=<nil>)
	I0731 17:06:53.848638   31876 status.go:343] host is not running, skipping remaining checks
	I0731 17:06:53.848643   31876 status.go:257] ha-234651-m02 status: &{Name:ha-234651-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0731 17:06:53.848657   31876 status.go:255] checking status of ha-234651-m03 ...
	I0731 17:06:53.848967   31876 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 17:06:53.849003   31876 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 17:06:53.863352   31876 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42109
	I0731 17:06:53.863731   31876 main.go:141] libmachine: () Calling .GetVersion
	I0731 17:06:53.864165   31876 main.go:141] libmachine: Using API Version  1
	I0731 17:06:53.864186   31876 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 17:06:53.864488   31876 main.go:141] libmachine: () Calling .GetMachineName
	I0731 17:06:53.864666   31876 main.go:141] libmachine: (ha-234651-m03) Calling .GetState
	I0731 17:06:53.866216   31876 status.go:330] ha-234651-m03 host status = "Running" (err=<nil>)
	I0731 17:06:53.866228   31876 host.go:66] Checking if "ha-234651-m03" exists ...
	I0731 17:06:53.866554   31876 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 17:06:53.866587   31876 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 17:06:53.881332   31876 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44049
	I0731 17:06:53.881699   31876 main.go:141] libmachine: () Calling .GetVersion
	I0731 17:06:53.882155   31876 main.go:141] libmachine: Using API Version  1
	I0731 17:06:53.882180   31876 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 17:06:53.882498   31876 main.go:141] libmachine: () Calling .GetMachineName
	I0731 17:06:53.882673   31876 main.go:141] libmachine: (ha-234651-m03) Calling .GetIP
	I0731 17:06:53.885452   31876 main.go:141] libmachine: (ha-234651-m03) DBG | domain ha-234651-m03 has defined MAC address 52:54:00:ac:c0:cf in network mk-ha-234651
	I0731 17:06:53.885860   31876 main.go:141] libmachine: (ha-234651-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ac:c0:cf", ip: ""} in network mk-ha-234651: {Iface:virbr1 ExpiryTime:2024-07-31 18:01:23 +0000 UTC Type:0 Mac:52:54:00:ac:c0:cf Iaid: IPaddr:192.168.39.139 Prefix:24 Hostname:ha-234651-m03 Clientid:01:52:54:00:ac:c0:cf}
	I0731 17:06:53.885885   31876 main.go:141] libmachine: (ha-234651-m03) DBG | domain ha-234651-m03 has defined IP address 192.168.39.139 and MAC address 52:54:00:ac:c0:cf in network mk-ha-234651
	I0731 17:06:53.886014   31876 host.go:66] Checking if "ha-234651-m03" exists ...
	I0731 17:06:53.886308   31876 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 17:06:53.886353   31876 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 17:06:53.900435   31876 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42531
	I0731 17:06:53.900778   31876 main.go:141] libmachine: () Calling .GetVersion
	I0731 17:06:53.901225   31876 main.go:141] libmachine: Using API Version  1
	I0731 17:06:53.901251   31876 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 17:06:53.901591   31876 main.go:141] libmachine: () Calling .GetMachineName
	I0731 17:06:53.901772   31876 main.go:141] libmachine: (ha-234651-m03) Calling .DriverName
	I0731 17:06:53.901937   31876 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0731 17:06:53.901957   31876 main.go:141] libmachine: (ha-234651-m03) Calling .GetSSHHostname
	I0731 17:06:53.904431   31876 main.go:141] libmachine: (ha-234651-m03) DBG | domain ha-234651-m03 has defined MAC address 52:54:00:ac:c0:cf in network mk-ha-234651
	I0731 17:06:53.904885   31876 main.go:141] libmachine: (ha-234651-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ac:c0:cf", ip: ""} in network mk-ha-234651: {Iface:virbr1 ExpiryTime:2024-07-31 18:01:23 +0000 UTC Type:0 Mac:52:54:00:ac:c0:cf Iaid: IPaddr:192.168.39.139 Prefix:24 Hostname:ha-234651-m03 Clientid:01:52:54:00:ac:c0:cf}
	I0731 17:06:53.904901   31876 main.go:141] libmachine: (ha-234651-m03) DBG | domain ha-234651-m03 has defined IP address 192.168.39.139 and MAC address 52:54:00:ac:c0:cf in network mk-ha-234651
	I0731 17:06:53.905039   31876 main.go:141] libmachine: (ha-234651-m03) Calling .GetSSHPort
	I0731 17:06:53.905214   31876 main.go:141] libmachine: (ha-234651-m03) Calling .GetSSHKeyPath
	I0731 17:06:53.905356   31876 main.go:141] libmachine: (ha-234651-m03) Calling .GetSSHUsername
	I0731 17:06:53.905494   31876 sshutil.go:53] new ssh client: &{IP:192.168.39.139 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19349-8084/.minikube/machines/ha-234651-m03/id_rsa Username:docker}
	I0731 17:06:53.982939   31876 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0731 17:06:53.996922   31876 kubeconfig.go:125] found "ha-234651" server: "https://192.168.39.254:8443"
	I0731 17:06:53.996947   31876 api_server.go:166] Checking apiserver status ...
	I0731 17:06:53.996976   31876 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 17:06:54.009707   31876 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1530/cgroup
	W0731 17:06:54.022002   31876 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1530/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0731 17:06:54.022056   31876 ssh_runner.go:195] Run: ls
	I0731 17:06:54.026750   31876 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0731 17:06:54.031126   31876 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0731 17:06:54.031148   31876 status.go:422] ha-234651-m03 apiserver status = Running (err=<nil>)
	I0731 17:06:54.031158   31876 status.go:257] ha-234651-m03 status: &{Name:ha-234651-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0731 17:06:54.031177   31876 status.go:255] checking status of ha-234651-m04 ...
	I0731 17:06:54.031486   31876 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 17:06:54.031520   31876 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 17:06:54.045775   31876 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38781
	I0731 17:06:54.046228   31876 main.go:141] libmachine: () Calling .GetVersion
	I0731 17:06:54.046685   31876 main.go:141] libmachine: Using API Version  1
	I0731 17:06:54.046706   31876 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 17:06:54.046990   31876 main.go:141] libmachine: () Calling .GetMachineName
	I0731 17:06:54.047152   31876 main.go:141] libmachine: (ha-234651-m04) Calling .GetState
	I0731 17:06:54.048821   31876 status.go:330] ha-234651-m04 host status = "Running" (err=<nil>)
	I0731 17:06:54.048835   31876 host.go:66] Checking if "ha-234651-m04" exists ...
	I0731 17:06:54.049092   31876 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 17:06:54.049126   31876 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 17:06:54.064143   31876 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44687
	I0731 17:06:54.064509   31876 main.go:141] libmachine: () Calling .GetVersion
	I0731 17:06:54.065010   31876 main.go:141] libmachine: Using API Version  1
	I0731 17:06:54.065031   31876 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 17:06:54.065340   31876 main.go:141] libmachine: () Calling .GetMachineName
	I0731 17:06:54.065515   31876 main.go:141] libmachine: (ha-234651-m04) Calling .GetIP
	I0731 17:06:54.068379   31876 main.go:141] libmachine: (ha-234651-m04) DBG | domain ha-234651-m04 has defined MAC address 52:54:00:25:f7:22 in network mk-ha-234651
	I0731 17:06:54.068832   31876 main.go:141] libmachine: (ha-234651-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:f7:22", ip: ""} in network mk-ha-234651: {Iface:virbr1 ExpiryTime:2024-07-31 18:02:48 +0000 UTC Type:0 Mac:52:54:00:25:f7:22 Iaid: IPaddr:192.168.39.216 Prefix:24 Hostname:ha-234651-m04 Clientid:01:52:54:00:25:f7:22}
	I0731 17:06:54.068852   31876 main.go:141] libmachine: (ha-234651-m04) DBG | domain ha-234651-m04 has defined IP address 192.168.39.216 and MAC address 52:54:00:25:f7:22 in network mk-ha-234651
	I0731 17:06:54.069000   31876 host.go:66] Checking if "ha-234651-m04" exists ...
	I0731 17:06:54.069328   31876 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 17:06:54.069368   31876 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 17:06:54.086887   31876 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46173
	I0731 17:06:54.087289   31876 main.go:141] libmachine: () Calling .GetVersion
	I0731 17:06:54.087785   31876 main.go:141] libmachine: Using API Version  1
	I0731 17:06:54.087799   31876 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 17:06:54.088098   31876 main.go:141] libmachine: () Calling .GetMachineName
	I0731 17:06:54.088280   31876 main.go:141] libmachine: (ha-234651-m04) Calling .DriverName
	I0731 17:06:54.088457   31876 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0731 17:06:54.088475   31876 main.go:141] libmachine: (ha-234651-m04) Calling .GetSSHHostname
	I0731 17:06:54.091559   31876 main.go:141] libmachine: (ha-234651-m04) DBG | domain ha-234651-m04 has defined MAC address 52:54:00:25:f7:22 in network mk-ha-234651
	I0731 17:06:54.091980   31876 main.go:141] libmachine: (ha-234651-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:f7:22", ip: ""} in network mk-ha-234651: {Iface:virbr1 ExpiryTime:2024-07-31 18:02:48 +0000 UTC Type:0 Mac:52:54:00:25:f7:22 Iaid: IPaddr:192.168.39.216 Prefix:24 Hostname:ha-234651-m04 Clientid:01:52:54:00:25:f7:22}
	I0731 17:06:54.092001   31876 main.go:141] libmachine: (ha-234651-m04) DBG | domain ha-234651-m04 has defined IP address 192.168.39.216 and MAC address 52:54:00:25:f7:22 in network mk-ha-234651
	I0731 17:06:54.092149   31876 main.go:141] libmachine: (ha-234651-m04) Calling .GetSSHPort
	I0731 17:06:54.092313   31876 main.go:141] libmachine: (ha-234651-m04) Calling .GetSSHKeyPath
	I0731 17:06:54.092438   31876 main.go:141] libmachine: (ha-234651-m04) Calling .GetSSHUsername
	I0731 17:06:54.092560   31876 sshutil.go:53] new ssh client: &{IP:192.168.39.216 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19349-8084/.minikube/machines/ha-234651-m04/id_rsa Username:docker}
	I0731 17:06:54.174040   31876 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0731 17:06:54.186882   31876 status.go:257] ha-234651-m04 status: &{Name:ha-234651-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-234651 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-234651 status -v=7 --alsologtostderr: exit status 7 (604.259139ms)

                                                
                                                
-- stdout --
	ha-234651
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-234651-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-234651-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-234651-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0731 17:07:03.813762   31984 out.go:291] Setting OutFile to fd 1 ...
	I0731 17:07:03.814043   31984 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 17:07:03.814062   31984 out.go:304] Setting ErrFile to fd 2...
	I0731 17:07:03.814070   31984 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 17:07:03.814562   31984 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19349-8084/.minikube/bin
	I0731 17:07:03.814769   31984 out.go:298] Setting JSON to false
	I0731 17:07:03.814800   31984 mustload.go:65] Loading cluster: ha-234651
	I0731 17:07:03.814892   31984 notify.go:220] Checking for updates...
	I0731 17:07:03.815220   31984 config.go:182] Loaded profile config "ha-234651": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0731 17:07:03.815237   31984 status.go:255] checking status of ha-234651 ...
	I0731 17:07:03.815637   31984 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 17:07:03.815690   31984 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 17:07:03.830428   31984 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45033
	I0731 17:07:03.830798   31984 main.go:141] libmachine: () Calling .GetVersion
	I0731 17:07:03.831393   31984 main.go:141] libmachine: Using API Version  1
	I0731 17:07:03.831417   31984 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 17:07:03.831736   31984 main.go:141] libmachine: () Calling .GetMachineName
	I0731 17:07:03.831932   31984 main.go:141] libmachine: (ha-234651) Calling .GetState
	I0731 17:07:03.833602   31984 status.go:330] ha-234651 host status = "Running" (err=<nil>)
	I0731 17:07:03.833626   31984 host.go:66] Checking if "ha-234651" exists ...
	I0731 17:07:03.833945   31984 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 17:07:03.833978   31984 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 17:07:03.849146   31984 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39393
	I0731 17:07:03.849554   31984 main.go:141] libmachine: () Calling .GetVersion
	I0731 17:07:03.849988   31984 main.go:141] libmachine: Using API Version  1
	I0731 17:07:03.850010   31984 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 17:07:03.850284   31984 main.go:141] libmachine: () Calling .GetMachineName
	I0731 17:07:03.850445   31984 main.go:141] libmachine: (ha-234651) Calling .GetIP
	I0731 17:07:03.853353   31984 main.go:141] libmachine: (ha-234651) DBG | domain ha-234651 has defined MAC address 52:54:00:20:60:53 in network mk-ha-234651
	I0731 17:07:03.853836   31984 main.go:141] libmachine: (ha-234651) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:60:53", ip: ""} in network mk-ha-234651: {Iface:virbr1 ExpiryTime:2024-07-31 17:59:15 +0000 UTC Type:0 Mac:52:54:00:20:60:53 Iaid: IPaddr:192.168.39.243 Prefix:24 Hostname:ha-234651 Clientid:01:52:54:00:20:60:53}
	I0731 17:07:03.853863   31984 main.go:141] libmachine: (ha-234651) DBG | domain ha-234651 has defined IP address 192.168.39.243 and MAC address 52:54:00:20:60:53 in network mk-ha-234651
	I0731 17:07:03.853973   31984 host.go:66] Checking if "ha-234651" exists ...
	I0731 17:07:03.854329   31984 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 17:07:03.854368   31984 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 17:07:03.868902   31984 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43655
	I0731 17:07:03.869332   31984 main.go:141] libmachine: () Calling .GetVersion
	I0731 17:07:03.869783   31984 main.go:141] libmachine: Using API Version  1
	I0731 17:07:03.869816   31984 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 17:07:03.870115   31984 main.go:141] libmachine: () Calling .GetMachineName
	I0731 17:07:03.870269   31984 main.go:141] libmachine: (ha-234651) Calling .DriverName
	I0731 17:07:03.870500   31984 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0731 17:07:03.870524   31984 main.go:141] libmachine: (ha-234651) Calling .GetSSHHostname
	I0731 17:07:03.873483   31984 main.go:141] libmachine: (ha-234651) DBG | domain ha-234651 has defined MAC address 52:54:00:20:60:53 in network mk-ha-234651
	I0731 17:07:03.873893   31984 main.go:141] libmachine: (ha-234651) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:60:53", ip: ""} in network mk-ha-234651: {Iface:virbr1 ExpiryTime:2024-07-31 17:59:15 +0000 UTC Type:0 Mac:52:54:00:20:60:53 Iaid: IPaddr:192.168.39.243 Prefix:24 Hostname:ha-234651 Clientid:01:52:54:00:20:60:53}
	I0731 17:07:03.873920   31984 main.go:141] libmachine: (ha-234651) DBG | domain ha-234651 has defined IP address 192.168.39.243 and MAC address 52:54:00:20:60:53 in network mk-ha-234651
	I0731 17:07:03.874041   31984 main.go:141] libmachine: (ha-234651) Calling .GetSSHPort
	I0731 17:07:03.874296   31984 main.go:141] libmachine: (ha-234651) Calling .GetSSHKeyPath
	I0731 17:07:03.874458   31984 main.go:141] libmachine: (ha-234651) Calling .GetSSHUsername
	I0731 17:07:03.874580   31984 sshutil.go:53] new ssh client: &{IP:192.168.39.243 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19349-8084/.minikube/machines/ha-234651/id_rsa Username:docker}
	I0731 17:07:03.954381   31984 ssh_runner.go:195] Run: systemctl --version
	I0731 17:07:03.960375   31984 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0731 17:07:03.975624   31984 kubeconfig.go:125] found "ha-234651" server: "https://192.168.39.254:8443"
	I0731 17:07:03.975652   31984 api_server.go:166] Checking apiserver status ...
	I0731 17:07:03.975691   31984 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 17:07:03.989437   31984 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1220/cgroup
	W0731 17:07:03.998622   31984 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1220/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0731 17:07:03.998682   31984 ssh_runner.go:195] Run: ls
	I0731 17:07:04.002848   31984 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0731 17:07:04.008628   31984 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0731 17:07:04.008649   31984 status.go:422] ha-234651 apiserver status = Running (err=<nil>)
	I0731 17:07:04.008659   31984 status.go:257] ha-234651 status: &{Name:ha-234651 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0731 17:07:04.008673   31984 status.go:255] checking status of ha-234651-m02 ...
	I0731 17:07:04.008965   31984 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 17:07:04.009000   31984 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 17:07:04.024628   31984 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41451
	I0731 17:07:04.025037   31984 main.go:141] libmachine: () Calling .GetVersion
	I0731 17:07:04.025475   31984 main.go:141] libmachine: Using API Version  1
	I0731 17:07:04.025493   31984 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 17:07:04.025845   31984 main.go:141] libmachine: () Calling .GetMachineName
	I0731 17:07:04.026031   31984 main.go:141] libmachine: (ha-234651-m02) Calling .GetState
	I0731 17:07:04.027776   31984 status.go:330] ha-234651-m02 host status = "Stopped" (err=<nil>)
	I0731 17:07:04.027789   31984 status.go:343] host is not running, skipping remaining checks
	I0731 17:07:04.027805   31984 status.go:257] ha-234651-m02 status: &{Name:ha-234651-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0731 17:07:04.027821   31984 status.go:255] checking status of ha-234651-m03 ...
	I0731 17:07:04.028088   31984 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 17:07:04.028124   31984 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 17:07:04.044218   31984 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44349
	I0731 17:07:04.044743   31984 main.go:141] libmachine: () Calling .GetVersion
	I0731 17:07:04.045311   31984 main.go:141] libmachine: Using API Version  1
	I0731 17:07:04.045341   31984 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 17:07:04.045666   31984 main.go:141] libmachine: () Calling .GetMachineName
	I0731 17:07:04.045838   31984 main.go:141] libmachine: (ha-234651-m03) Calling .GetState
	I0731 17:07:04.047532   31984 status.go:330] ha-234651-m03 host status = "Running" (err=<nil>)
	I0731 17:07:04.047555   31984 host.go:66] Checking if "ha-234651-m03" exists ...
	I0731 17:07:04.047987   31984 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 17:07:04.048026   31984 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 17:07:04.064365   31984 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35793
	I0731 17:07:04.064746   31984 main.go:141] libmachine: () Calling .GetVersion
	I0731 17:07:04.065141   31984 main.go:141] libmachine: Using API Version  1
	I0731 17:07:04.065160   31984 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 17:07:04.065455   31984 main.go:141] libmachine: () Calling .GetMachineName
	I0731 17:07:04.065637   31984 main.go:141] libmachine: (ha-234651-m03) Calling .GetIP
	I0731 17:07:04.068474   31984 main.go:141] libmachine: (ha-234651-m03) DBG | domain ha-234651-m03 has defined MAC address 52:54:00:ac:c0:cf in network mk-ha-234651
	I0731 17:07:04.068914   31984 main.go:141] libmachine: (ha-234651-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ac:c0:cf", ip: ""} in network mk-ha-234651: {Iface:virbr1 ExpiryTime:2024-07-31 18:01:23 +0000 UTC Type:0 Mac:52:54:00:ac:c0:cf Iaid: IPaddr:192.168.39.139 Prefix:24 Hostname:ha-234651-m03 Clientid:01:52:54:00:ac:c0:cf}
	I0731 17:07:04.068944   31984 main.go:141] libmachine: (ha-234651-m03) DBG | domain ha-234651-m03 has defined IP address 192.168.39.139 and MAC address 52:54:00:ac:c0:cf in network mk-ha-234651
	I0731 17:07:04.069107   31984 host.go:66] Checking if "ha-234651-m03" exists ...
	I0731 17:07:04.069432   31984 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 17:07:04.069467   31984 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 17:07:04.085427   31984 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44551
	I0731 17:07:04.085867   31984 main.go:141] libmachine: () Calling .GetVersion
	I0731 17:07:04.086685   31984 main.go:141] libmachine: Using API Version  1
	I0731 17:07:04.086712   31984 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 17:07:04.087044   31984 main.go:141] libmachine: () Calling .GetMachineName
	I0731 17:07:04.087307   31984 main.go:141] libmachine: (ha-234651-m03) Calling .DriverName
	I0731 17:07:04.087507   31984 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0731 17:07:04.087531   31984 main.go:141] libmachine: (ha-234651-m03) Calling .GetSSHHostname
	I0731 17:07:04.090324   31984 main.go:141] libmachine: (ha-234651-m03) DBG | domain ha-234651-m03 has defined MAC address 52:54:00:ac:c0:cf in network mk-ha-234651
	I0731 17:07:04.090764   31984 main.go:141] libmachine: (ha-234651-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ac:c0:cf", ip: ""} in network mk-ha-234651: {Iface:virbr1 ExpiryTime:2024-07-31 18:01:23 +0000 UTC Type:0 Mac:52:54:00:ac:c0:cf Iaid: IPaddr:192.168.39.139 Prefix:24 Hostname:ha-234651-m03 Clientid:01:52:54:00:ac:c0:cf}
	I0731 17:07:04.090801   31984 main.go:141] libmachine: (ha-234651-m03) DBG | domain ha-234651-m03 has defined IP address 192.168.39.139 and MAC address 52:54:00:ac:c0:cf in network mk-ha-234651
	I0731 17:07:04.090901   31984 main.go:141] libmachine: (ha-234651-m03) Calling .GetSSHPort
	I0731 17:07:04.091062   31984 main.go:141] libmachine: (ha-234651-m03) Calling .GetSSHKeyPath
	I0731 17:07:04.091237   31984 main.go:141] libmachine: (ha-234651-m03) Calling .GetSSHUsername
	I0731 17:07:04.091402   31984 sshutil.go:53] new ssh client: &{IP:192.168.39.139 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19349-8084/.minikube/machines/ha-234651-m03/id_rsa Username:docker}
	I0731 17:07:04.166334   31984 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0731 17:07:04.185257   31984 kubeconfig.go:125] found "ha-234651" server: "https://192.168.39.254:8443"
	I0731 17:07:04.185296   31984 api_server.go:166] Checking apiserver status ...
	I0731 17:07:04.185340   31984 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 17:07:04.198847   31984 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1530/cgroup
	W0731 17:07:04.208042   31984 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1530/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0731 17:07:04.208093   31984 ssh_runner.go:195] Run: ls
	I0731 17:07:04.212321   31984 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0731 17:07:04.216593   31984 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0731 17:07:04.216620   31984 status.go:422] ha-234651-m03 apiserver status = Running (err=<nil>)
	I0731 17:07:04.216628   31984 status.go:257] ha-234651-m03 status: &{Name:ha-234651-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0731 17:07:04.216643   31984 status.go:255] checking status of ha-234651-m04 ...
	I0731 17:07:04.216954   31984 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 17:07:04.216986   31984 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 17:07:04.231692   31984 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34831
	I0731 17:07:04.232128   31984 main.go:141] libmachine: () Calling .GetVersion
	I0731 17:07:04.232643   31984 main.go:141] libmachine: Using API Version  1
	I0731 17:07:04.232668   31984 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 17:07:04.232975   31984 main.go:141] libmachine: () Calling .GetMachineName
	I0731 17:07:04.233161   31984 main.go:141] libmachine: (ha-234651-m04) Calling .GetState
	I0731 17:07:04.234533   31984 status.go:330] ha-234651-m04 host status = "Running" (err=<nil>)
	I0731 17:07:04.234547   31984 host.go:66] Checking if "ha-234651-m04" exists ...
	I0731 17:07:04.234924   31984 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 17:07:04.234963   31984 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 17:07:04.249143   31984 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41121
	I0731 17:07:04.249554   31984 main.go:141] libmachine: () Calling .GetVersion
	I0731 17:07:04.249980   31984 main.go:141] libmachine: Using API Version  1
	I0731 17:07:04.250001   31984 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 17:07:04.250290   31984 main.go:141] libmachine: () Calling .GetMachineName
	I0731 17:07:04.250451   31984 main.go:141] libmachine: (ha-234651-m04) Calling .GetIP
	I0731 17:07:04.252719   31984 main.go:141] libmachine: (ha-234651-m04) DBG | domain ha-234651-m04 has defined MAC address 52:54:00:25:f7:22 in network mk-ha-234651
	I0731 17:07:04.253263   31984 main.go:141] libmachine: (ha-234651-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:f7:22", ip: ""} in network mk-ha-234651: {Iface:virbr1 ExpiryTime:2024-07-31 18:02:48 +0000 UTC Type:0 Mac:52:54:00:25:f7:22 Iaid: IPaddr:192.168.39.216 Prefix:24 Hostname:ha-234651-m04 Clientid:01:52:54:00:25:f7:22}
	I0731 17:07:04.253298   31984 main.go:141] libmachine: (ha-234651-m04) DBG | domain ha-234651-m04 has defined IP address 192.168.39.216 and MAC address 52:54:00:25:f7:22 in network mk-ha-234651
	I0731 17:07:04.253453   31984 host.go:66] Checking if "ha-234651-m04" exists ...
	I0731 17:07:04.253852   31984 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 17:07:04.253898   31984 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 17:07:04.268352   31984 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33507
	I0731 17:07:04.268757   31984 main.go:141] libmachine: () Calling .GetVersion
	I0731 17:07:04.269194   31984 main.go:141] libmachine: Using API Version  1
	I0731 17:07:04.269216   31984 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 17:07:04.269558   31984 main.go:141] libmachine: () Calling .GetMachineName
	I0731 17:07:04.269749   31984 main.go:141] libmachine: (ha-234651-m04) Calling .DriverName
	I0731 17:07:04.269928   31984 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0731 17:07:04.269951   31984 main.go:141] libmachine: (ha-234651-m04) Calling .GetSSHHostname
	I0731 17:07:04.272953   31984 main.go:141] libmachine: (ha-234651-m04) DBG | domain ha-234651-m04 has defined MAC address 52:54:00:25:f7:22 in network mk-ha-234651
	I0731 17:07:04.273399   31984 main.go:141] libmachine: (ha-234651-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:f7:22", ip: ""} in network mk-ha-234651: {Iface:virbr1 ExpiryTime:2024-07-31 18:02:48 +0000 UTC Type:0 Mac:52:54:00:25:f7:22 Iaid: IPaddr:192.168.39.216 Prefix:24 Hostname:ha-234651-m04 Clientid:01:52:54:00:25:f7:22}
	I0731 17:07:04.273424   31984 main.go:141] libmachine: (ha-234651-m04) DBG | domain ha-234651-m04 has defined IP address 192.168.39.216 and MAC address 52:54:00:25:f7:22 in network mk-ha-234651
	I0731 17:07:04.273596   31984 main.go:141] libmachine: (ha-234651-m04) Calling .GetSSHPort
	I0731 17:07:04.273785   31984 main.go:141] libmachine: (ha-234651-m04) Calling .GetSSHKeyPath
	I0731 17:07:04.273950   31984 main.go:141] libmachine: (ha-234651-m04) Calling .GetSSHUsername
	I0731 17:07:04.274083   31984 sshutil.go:53] new ssh client: &{IP:192.168.39.216 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19349-8084/.minikube/machines/ha-234651-m04/id_rsa Username:docker}
	I0731 17:07:04.361580   31984 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0731 17:07:04.376227   31984 status.go:257] ha-234651-m04 status: &{Name:ha-234651-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
ha_test.go:432: failed to run minikube status. args "out/minikube-linux-amd64 -p ha-234651 status -v=7 --alsologtostderr" : exit status 7
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-234651 -n ha-234651
helpers_test.go:244: <<< TestMultiControlPlane/serial/RestartSecondaryNode FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/RestartSecondaryNode]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-234651 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ha-234651 logs -n 25: (1.265966414s)
helpers_test.go:252: TestMultiControlPlane/serial/RestartSecondaryNode logs: 
-- stdout --
	
	==> Audit <==
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                       Args                                       |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| ssh     | ha-234651 ssh -n                                                                 | ha-234651 | jenkins | v1.33.1 | 31 Jul 24 17:03 UTC | 31 Jul 24 17:03 UTC |
	|         | ha-234651-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-234651 cp ha-234651-m03:/home/docker/cp-test.txt                              | ha-234651 | jenkins | v1.33.1 | 31 Jul 24 17:03 UTC | 31 Jul 24 17:03 UTC |
	|         | ha-234651:/home/docker/cp-test_ha-234651-m03_ha-234651.txt                       |           |         |         |                     |                     |
	| ssh     | ha-234651 ssh -n                                                                 | ha-234651 | jenkins | v1.33.1 | 31 Jul 24 17:03 UTC | 31 Jul 24 17:03 UTC |
	|         | ha-234651-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-234651 ssh -n ha-234651 sudo cat                                              | ha-234651 | jenkins | v1.33.1 | 31 Jul 24 17:03 UTC | 31 Jul 24 17:03 UTC |
	|         | /home/docker/cp-test_ha-234651-m03_ha-234651.txt                                 |           |         |         |                     |                     |
	| cp      | ha-234651 cp ha-234651-m03:/home/docker/cp-test.txt                              | ha-234651 | jenkins | v1.33.1 | 31 Jul 24 17:03 UTC | 31 Jul 24 17:03 UTC |
	|         | ha-234651-m02:/home/docker/cp-test_ha-234651-m03_ha-234651-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-234651 ssh -n                                                                 | ha-234651 | jenkins | v1.33.1 | 31 Jul 24 17:03 UTC | 31 Jul 24 17:03 UTC |
	|         | ha-234651-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-234651 ssh -n ha-234651-m02 sudo cat                                          | ha-234651 | jenkins | v1.33.1 | 31 Jul 24 17:03 UTC | 31 Jul 24 17:03 UTC |
	|         | /home/docker/cp-test_ha-234651-m03_ha-234651-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-234651 cp ha-234651-m03:/home/docker/cp-test.txt                              | ha-234651 | jenkins | v1.33.1 | 31 Jul 24 17:03 UTC | 31 Jul 24 17:03 UTC |
	|         | ha-234651-m04:/home/docker/cp-test_ha-234651-m03_ha-234651-m04.txt               |           |         |         |                     |                     |
	| ssh     | ha-234651 ssh -n                                                                 | ha-234651 | jenkins | v1.33.1 | 31 Jul 24 17:03 UTC | 31 Jul 24 17:03 UTC |
	|         | ha-234651-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-234651 ssh -n ha-234651-m04 sudo cat                                          | ha-234651 | jenkins | v1.33.1 | 31 Jul 24 17:03 UTC | 31 Jul 24 17:03 UTC |
	|         | /home/docker/cp-test_ha-234651-m03_ha-234651-m04.txt                             |           |         |         |                     |                     |
	| cp      | ha-234651 cp testdata/cp-test.txt                                                | ha-234651 | jenkins | v1.33.1 | 31 Jul 24 17:03 UTC | 31 Jul 24 17:03 UTC |
	|         | ha-234651-m04:/home/docker/cp-test.txt                                           |           |         |         |                     |                     |
	| ssh     | ha-234651 ssh -n                                                                 | ha-234651 | jenkins | v1.33.1 | 31 Jul 24 17:03 UTC | 31 Jul 24 17:03 UTC |
	|         | ha-234651-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-234651 cp ha-234651-m04:/home/docker/cp-test.txt                              | ha-234651 | jenkins | v1.33.1 | 31 Jul 24 17:03 UTC | 31 Jul 24 17:03 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile2373933749/001/cp-test_ha-234651-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-234651 ssh -n                                                                 | ha-234651 | jenkins | v1.33.1 | 31 Jul 24 17:03 UTC | 31 Jul 24 17:03 UTC |
	|         | ha-234651-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-234651 cp ha-234651-m04:/home/docker/cp-test.txt                              | ha-234651 | jenkins | v1.33.1 | 31 Jul 24 17:03 UTC | 31 Jul 24 17:03 UTC |
	|         | ha-234651:/home/docker/cp-test_ha-234651-m04_ha-234651.txt                       |           |         |         |                     |                     |
	| ssh     | ha-234651 ssh -n                                                                 | ha-234651 | jenkins | v1.33.1 | 31 Jul 24 17:03 UTC | 31 Jul 24 17:03 UTC |
	|         | ha-234651-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-234651 ssh -n ha-234651 sudo cat                                              | ha-234651 | jenkins | v1.33.1 | 31 Jul 24 17:03 UTC | 31 Jul 24 17:03 UTC |
	|         | /home/docker/cp-test_ha-234651-m04_ha-234651.txt                                 |           |         |         |                     |                     |
	| cp      | ha-234651 cp ha-234651-m04:/home/docker/cp-test.txt                              | ha-234651 | jenkins | v1.33.1 | 31 Jul 24 17:03 UTC | 31 Jul 24 17:03 UTC |
	|         | ha-234651-m02:/home/docker/cp-test_ha-234651-m04_ha-234651-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-234651 ssh -n                                                                 | ha-234651 | jenkins | v1.33.1 | 31 Jul 24 17:03 UTC | 31 Jul 24 17:03 UTC |
	|         | ha-234651-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-234651 ssh -n ha-234651-m02 sudo cat                                          | ha-234651 | jenkins | v1.33.1 | 31 Jul 24 17:03 UTC | 31 Jul 24 17:03 UTC |
	|         | /home/docker/cp-test_ha-234651-m04_ha-234651-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-234651 cp ha-234651-m04:/home/docker/cp-test.txt                              | ha-234651 | jenkins | v1.33.1 | 31 Jul 24 17:03 UTC | 31 Jul 24 17:03 UTC |
	|         | ha-234651-m03:/home/docker/cp-test_ha-234651-m04_ha-234651-m03.txt               |           |         |         |                     |                     |
	| ssh     | ha-234651 ssh -n                                                                 | ha-234651 | jenkins | v1.33.1 | 31 Jul 24 17:03 UTC | 31 Jul 24 17:03 UTC |
	|         | ha-234651-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-234651 ssh -n ha-234651-m03 sudo cat                                          | ha-234651 | jenkins | v1.33.1 | 31 Jul 24 17:03 UTC | 31 Jul 24 17:03 UTC |
	|         | /home/docker/cp-test_ha-234651-m04_ha-234651-m03.txt                             |           |         |         |                     |                     |
	| node    | ha-234651 node stop m02 -v=7                                                     | ha-234651 | jenkins | v1.33.1 | 31 Jul 24 17:03 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | ha-234651 node start m02 -v=7                                                    | ha-234651 | jenkins | v1.33.1 | 31 Jul 24 17:06 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/31 16:59:02
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0731 16:59:02.086616   26392 out.go:291] Setting OutFile to fd 1 ...
	I0731 16:59:02.086847   26392 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 16:59:02.086856   26392 out.go:304] Setting ErrFile to fd 2...
	I0731 16:59:02.086860   26392 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 16:59:02.087017   26392 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19349-8084/.minikube/bin
	I0731 16:59:02.087598   26392 out.go:298] Setting JSON to false
	I0731 16:59:02.088397   26392 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":2486,"bootTime":1722442656,"procs":174,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1065-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0731 16:59:02.088452   26392 start.go:139] virtualization: kvm guest
	I0731 16:59:02.090518   26392 out.go:177] * [ha-234651] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0731 16:59:02.091938   26392 out.go:177]   - MINIKUBE_LOCATION=19349
	I0731 16:59:02.091946   26392 notify.go:220] Checking for updates...
	I0731 16:59:02.094020   26392 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0731 16:59:02.095139   26392 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19349-8084/kubeconfig
	I0731 16:59:02.096213   26392 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19349-8084/.minikube
	I0731 16:59:02.097279   26392 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0731 16:59:02.098361   26392 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0731 16:59:02.099733   26392 driver.go:392] Setting default libvirt URI to qemu:///system
	I0731 16:59:02.134045   26392 out.go:177] * Using the kvm2 driver based on user configuration
	I0731 16:59:02.135190   26392 start.go:297] selected driver: kvm2
	I0731 16:59:02.135203   26392 start.go:901] validating driver "kvm2" against <nil>
	I0731 16:59:02.135212   26392 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0731 16:59:02.135908   26392 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0731 16:59:02.135972   26392 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19349-8084/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0731 16:59:02.150423   26392 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0731 16:59:02.150475   26392 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0731 16:59:02.150683   26392 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0731 16:59:02.150736   26392 cni.go:84] Creating CNI manager for ""
	I0731 16:59:02.150748   26392 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0731 16:59:02.150753   26392 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0731 16:59:02.150810   26392 start.go:340] cluster config:
	{Name:ha-234651 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:ha-234651 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0731 16:59:02.150893   26392 iso.go:125] acquiring lock: {Name:mk4494bb5a63902bf12a909c3109db85229bc516 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0731 16:59:02.152634   26392 out.go:177] * Starting "ha-234651" primary control-plane node in "ha-234651" cluster
	I0731 16:59:02.153827   26392 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0731 16:59:02.153858   26392 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19349-8084/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4
	I0731 16:59:02.153866   26392 cache.go:56] Caching tarball of preloaded images
	I0731 16:59:02.153961   26392 preload.go:172] Found /home/jenkins/minikube-integration/19349-8084/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0731 16:59:02.153975   26392 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on crio
	I0731 16:59:02.154325   26392 profile.go:143] Saving config to /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/ha-234651/config.json ...
	I0731 16:59:02.154361   26392 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/ha-234651/config.json: {Name:mk345cf47c371bb2b8d9e899fabd4f55ea2e688d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 16:59:02.154511   26392 start.go:360] acquireMachinesLock for ha-234651: {Name:mkd78eacfab7d6c3058c6674434b3d889ec957e0 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0731 16:59:02.154550   26392 start.go:364] duration metric: took 23.284µs to acquireMachinesLock for "ha-234651"
	I0731 16:59:02.154573   26392 start.go:93] Provisioning new machine with config: &{Name:ha-234651 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:ha-234651 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0731 16:59:02.154632   26392 start.go:125] createHost starting for "" (driver="kvm2")
	I0731 16:59:02.157048   26392 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0731 16:59:02.157187   26392 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 16:59:02.157242   26392 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 16:59:02.171049   26392 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37109
	I0731 16:59:02.171465   26392 main.go:141] libmachine: () Calling .GetVersion
	I0731 16:59:02.172023   26392 main.go:141] libmachine: Using API Version  1
	I0731 16:59:02.172049   26392 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 16:59:02.172390   26392 main.go:141] libmachine: () Calling .GetMachineName
	I0731 16:59:02.172559   26392 main.go:141] libmachine: (ha-234651) Calling .GetMachineName
	I0731 16:59:02.172680   26392 main.go:141] libmachine: (ha-234651) Calling .DriverName
	I0731 16:59:02.172817   26392 start.go:159] libmachine.API.Create for "ha-234651" (driver="kvm2")
	I0731 16:59:02.172846   26392 client.go:168] LocalClient.Create starting
	I0731 16:59:02.172879   26392 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19349-8084/.minikube/certs/ca.pem
	I0731 16:59:02.172910   26392 main.go:141] libmachine: Decoding PEM data...
	I0731 16:59:02.172925   26392 main.go:141] libmachine: Parsing certificate...
	I0731 16:59:02.173003   26392 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19349-8084/.minikube/certs/cert.pem
	I0731 16:59:02.173022   26392 main.go:141] libmachine: Decoding PEM data...
	I0731 16:59:02.173034   26392 main.go:141] libmachine: Parsing certificate...
	I0731 16:59:02.173050   26392 main.go:141] libmachine: Running pre-create checks...
	I0731 16:59:02.173062   26392 main.go:141] libmachine: (ha-234651) Calling .PreCreateCheck
	I0731 16:59:02.173410   26392 main.go:141] libmachine: (ha-234651) Calling .GetConfigRaw
	I0731 16:59:02.173764   26392 main.go:141] libmachine: Creating machine...
	I0731 16:59:02.173777   26392 main.go:141] libmachine: (ha-234651) Calling .Create
	I0731 16:59:02.173883   26392 main.go:141] libmachine: (ha-234651) Creating KVM machine...
	I0731 16:59:02.175020   26392 main.go:141] libmachine: (ha-234651) DBG | found existing default KVM network
	I0731 16:59:02.175675   26392 main.go:141] libmachine: (ha-234651) DBG | I0731 16:59:02.175519   26415 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000015ac0}
	I0731 16:59:02.175700   26392 main.go:141] libmachine: (ha-234651) DBG | created network xml: 
	I0731 16:59:02.175716   26392 main.go:141] libmachine: (ha-234651) DBG | <network>
	I0731 16:59:02.175727   26392 main.go:141] libmachine: (ha-234651) DBG |   <name>mk-ha-234651</name>
	I0731 16:59:02.175736   26392 main.go:141] libmachine: (ha-234651) DBG |   <dns enable='no'/>
	I0731 16:59:02.175743   26392 main.go:141] libmachine: (ha-234651) DBG |   
	I0731 16:59:02.175749   26392 main.go:141] libmachine: (ha-234651) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0731 16:59:02.175755   26392 main.go:141] libmachine: (ha-234651) DBG |     <dhcp>
	I0731 16:59:02.175762   26392 main.go:141] libmachine: (ha-234651) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0731 16:59:02.175768   26392 main.go:141] libmachine: (ha-234651) DBG |     </dhcp>
	I0731 16:59:02.175775   26392 main.go:141] libmachine: (ha-234651) DBG |   </ip>
	I0731 16:59:02.175785   26392 main.go:141] libmachine: (ha-234651) DBG |   
	I0731 16:59:02.175792   26392 main.go:141] libmachine: (ha-234651) DBG | </network>
	I0731 16:59:02.175807   26392 main.go:141] libmachine: (ha-234651) DBG | 
	I0731 16:59:02.181065   26392 main.go:141] libmachine: (ha-234651) DBG | trying to create private KVM network mk-ha-234651 192.168.39.0/24...
	I0731 16:59:02.245390   26392 main.go:141] libmachine: (ha-234651) DBG | private KVM network mk-ha-234651 192.168.39.0/24 created
	I0731 16:59:02.245418   26392 main.go:141] libmachine: (ha-234651) DBG | I0731 16:59:02.245363   26415 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19349-8084/.minikube
	I0731 16:59:02.245427   26392 main.go:141] libmachine: (ha-234651) Setting up store path in /home/jenkins/minikube-integration/19349-8084/.minikube/machines/ha-234651 ...
	I0731 16:59:02.245440   26392 main.go:141] libmachine: (ha-234651) Building disk image from file:///home/jenkins/minikube-integration/19349-8084/.minikube/cache/iso/amd64/minikube-v1.33.1-1722248113-19339-amd64.iso
	I0731 16:59:02.245494   26392 main.go:141] libmachine: (ha-234651) Downloading /home/jenkins/minikube-integration/19349-8084/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19349-8084/.minikube/cache/iso/amd64/minikube-v1.33.1-1722248113-19339-amd64.iso...
	I0731 16:59:02.479460   26392 main.go:141] libmachine: (ha-234651) DBG | I0731 16:59:02.479343   26415 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19349-8084/.minikube/machines/ha-234651/id_rsa...
	I0731 16:59:02.575082   26392 main.go:141] libmachine: (ha-234651) DBG | I0731 16:59:02.574936   26415 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19349-8084/.minikube/machines/ha-234651/ha-234651.rawdisk...
	I0731 16:59:02.575138   26392 main.go:141] libmachine: (ha-234651) DBG | Writing magic tar header
	I0731 16:59:02.575154   26392 main.go:141] libmachine: (ha-234651) DBG | Writing SSH key tar header
	I0731 16:59:02.575181   26392 main.go:141] libmachine: (ha-234651) DBG | I0731 16:59:02.575046   26415 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19349-8084/.minikube/machines/ha-234651 ...
	I0731 16:59:02.575197   26392 main.go:141] libmachine: (ha-234651) Setting executable bit set on /home/jenkins/minikube-integration/19349-8084/.minikube/machines/ha-234651 (perms=drwx------)
	I0731 16:59:02.575207   26392 main.go:141] libmachine: (ha-234651) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19349-8084/.minikube/machines/ha-234651
	I0731 16:59:02.575218   26392 main.go:141] libmachine: (ha-234651) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19349-8084/.minikube/machines
	I0731 16:59:02.575224   26392 main.go:141] libmachine: (ha-234651) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19349-8084/.minikube
	I0731 16:59:02.575233   26392 main.go:141] libmachine: (ha-234651) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19349-8084
	I0731 16:59:02.575238   26392 main.go:141] libmachine: (ha-234651) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0731 16:59:02.575248   26392 main.go:141] libmachine: (ha-234651) DBG | Checking permissions on dir: /home/jenkins
	I0731 16:59:02.575256   26392 main.go:141] libmachine: (ha-234651) DBG | Checking permissions on dir: /home
	I0731 16:59:02.575281   26392 main.go:141] libmachine: (ha-234651) DBG | Skipping /home - not owner
	I0731 16:59:02.575295   26392 main.go:141] libmachine: (ha-234651) Setting executable bit set on /home/jenkins/minikube-integration/19349-8084/.minikube/machines (perms=drwxr-xr-x)
	I0731 16:59:02.575307   26392 main.go:141] libmachine: (ha-234651) Setting executable bit set on /home/jenkins/minikube-integration/19349-8084/.minikube (perms=drwxr-xr-x)
	I0731 16:59:02.575317   26392 main.go:141] libmachine: (ha-234651) Setting executable bit set on /home/jenkins/minikube-integration/19349-8084 (perms=drwxrwxr-x)
	I0731 16:59:02.575331   26392 main.go:141] libmachine: (ha-234651) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0731 16:59:02.575338   26392 main.go:141] libmachine: (ha-234651) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0731 16:59:02.575433   26392 main.go:141] libmachine: (ha-234651) Creating domain...
	I0731 16:59:02.576354   26392 main.go:141] libmachine: (ha-234651) define libvirt domain using xml: 
	I0731 16:59:02.576381   26392 main.go:141] libmachine: (ha-234651) <domain type='kvm'>
	I0731 16:59:02.576391   26392 main.go:141] libmachine: (ha-234651)   <name>ha-234651</name>
	I0731 16:59:02.576399   26392 main.go:141] libmachine: (ha-234651)   <memory unit='MiB'>2200</memory>
	I0731 16:59:02.576407   26392 main.go:141] libmachine: (ha-234651)   <vcpu>2</vcpu>
	I0731 16:59:02.576415   26392 main.go:141] libmachine: (ha-234651)   <features>
	I0731 16:59:02.576423   26392 main.go:141] libmachine: (ha-234651)     <acpi/>
	I0731 16:59:02.576430   26392 main.go:141] libmachine: (ha-234651)     <apic/>
	I0731 16:59:02.576453   26392 main.go:141] libmachine: (ha-234651)     <pae/>
	I0731 16:59:02.576464   26392 main.go:141] libmachine: (ha-234651)     
	I0731 16:59:02.576490   26392 main.go:141] libmachine: (ha-234651)   </features>
	I0731 16:59:02.576510   26392 main.go:141] libmachine: (ha-234651)   <cpu mode='host-passthrough'>
	I0731 16:59:02.576517   26392 main.go:141] libmachine: (ha-234651)   
	I0731 16:59:02.576522   26392 main.go:141] libmachine: (ha-234651)   </cpu>
	I0731 16:59:02.576529   26392 main.go:141] libmachine: (ha-234651)   <os>
	I0731 16:59:02.576533   26392 main.go:141] libmachine: (ha-234651)     <type>hvm</type>
	I0731 16:59:02.576539   26392 main.go:141] libmachine: (ha-234651)     <boot dev='cdrom'/>
	I0731 16:59:02.576544   26392 main.go:141] libmachine: (ha-234651)     <boot dev='hd'/>
	I0731 16:59:02.576554   26392 main.go:141] libmachine: (ha-234651)     <bootmenu enable='no'/>
	I0731 16:59:02.576559   26392 main.go:141] libmachine: (ha-234651)   </os>
	I0731 16:59:02.576566   26392 main.go:141] libmachine: (ha-234651)   <devices>
	I0731 16:59:02.576571   26392 main.go:141] libmachine: (ha-234651)     <disk type='file' device='cdrom'>
	I0731 16:59:02.576609   26392 main.go:141] libmachine: (ha-234651)       <source file='/home/jenkins/minikube-integration/19349-8084/.minikube/machines/ha-234651/boot2docker.iso'/>
	I0731 16:59:02.576638   26392 main.go:141] libmachine: (ha-234651)       <target dev='hdc' bus='scsi'/>
	I0731 16:59:02.576667   26392 main.go:141] libmachine: (ha-234651)       <readonly/>
	I0731 16:59:02.576685   26392 main.go:141] libmachine: (ha-234651)     </disk>
	I0731 16:59:02.576700   26392 main.go:141] libmachine: (ha-234651)     <disk type='file' device='disk'>
	I0731 16:59:02.576712   26392 main.go:141] libmachine: (ha-234651)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0731 16:59:02.576730   26392 main.go:141] libmachine: (ha-234651)       <source file='/home/jenkins/minikube-integration/19349-8084/.minikube/machines/ha-234651/ha-234651.rawdisk'/>
	I0731 16:59:02.576743   26392 main.go:141] libmachine: (ha-234651)       <target dev='hda' bus='virtio'/>
	I0731 16:59:02.576761   26392 main.go:141] libmachine: (ha-234651)     </disk>
	I0731 16:59:02.576779   26392 main.go:141] libmachine: (ha-234651)     <interface type='network'>
	I0731 16:59:02.576805   26392 main.go:141] libmachine: (ha-234651)       <source network='mk-ha-234651'/>
	I0731 16:59:02.576825   26392 main.go:141] libmachine: (ha-234651)       <model type='virtio'/>
	I0731 16:59:02.576836   26392 main.go:141] libmachine: (ha-234651)     </interface>
	I0731 16:59:02.576848   26392 main.go:141] libmachine: (ha-234651)     <interface type='network'>
	I0731 16:59:02.576867   26392 main.go:141] libmachine: (ha-234651)       <source network='default'/>
	I0731 16:59:02.576875   26392 main.go:141] libmachine: (ha-234651)       <model type='virtio'/>
	I0731 16:59:02.576880   26392 main.go:141] libmachine: (ha-234651)     </interface>
	I0731 16:59:02.576887   26392 main.go:141] libmachine: (ha-234651)     <serial type='pty'>
	I0731 16:59:02.576892   26392 main.go:141] libmachine: (ha-234651)       <target port='0'/>
	I0731 16:59:02.576898   26392 main.go:141] libmachine: (ha-234651)     </serial>
	I0731 16:59:02.576904   26392 main.go:141] libmachine: (ha-234651)     <console type='pty'>
	I0731 16:59:02.576910   26392 main.go:141] libmachine: (ha-234651)       <target type='serial' port='0'/>
	I0731 16:59:02.576915   26392 main.go:141] libmachine: (ha-234651)     </console>
	I0731 16:59:02.576922   26392 main.go:141] libmachine: (ha-234651)     <rng model='virtio'>
	I0731 16:59:02.576928   26392 main.go:141] libmachine: (ha-234651)       <backend model='random'>/dev/random</backend>
	I0731 16:59:02.576932   26392 main.go:141] libmachine: (ha-234651)     </rng>
	I0731 16:59:02.576937   26392 main.go:141] libmachine: (ha-234651)     
	I0731 16:59:02.576941   26392 main.go:141] libmachine: (ha-234651)     
	I0731 16:59:02.576946   26392 main.go:141] libmachine: (ha-234651)   </devices>
	I0731 16:59:02.576951   26392 main.go:141] libmachine: (ha-234651) </domain>
	I0731 16:59:02.576958   26392 main.go:141] libmachine: (ha-234651) 
	I0731 16:59:02.581016   26392 main.go:141] libmachine: (ha-234651) DBG | domain ha-234651 has defined MAC address 52:54:00:a4:a7:99 in network default
	I0731 16:59:02.581631   26392 main.go:141] libmachine: (ha-234651) Ensuring networks are active...
	I0731 16:59:02.581649   26392 main.go:141] libmachine: (ha-234651) DBG | domain ha-234651 has defined MAC address 52:54:00:20:60:53 in network mk-ha-234651
	I0731 16:59:02.582218   26392 main.go:141] libmachine: (ha-234651) Ensuring network default is active
	I0731 16:59:02.582490   26392 main.go:141] libmachine: (ha-234651) Ensuring network mk-ha-234651 is active
	I0731 16:59:02.582926   26392 main.go:141] libmachine: (ha-234651) Getting domain xml...
	I0731 16:59:02.583572   26392 main.go:141] libmachine: (ha-234651) Creating domain...
	I0731 16:59:03.758566   26392 main.go:141] libmachine: (ha-234651) Waiting to get IP...
	I0731 16:59:03.759252   26392 main.go:141] libmachine: (ha-234651) DBG | domain ha-234651 has defined MAC address 52:54:00:20:60:53 in network mk-ha-234651
	I0731 16:59:03.759681   26392 main.go:141] libmachine: (ha-234651) DBG | unable to find current IP address of domain ha-234651 in network mk-ha-234651
	I0731 16:59:03.759703   26392 main.go:141] libmachine: (ha-234651) DBG | I0731 16:59:03.759661   26415 retry.go:31] will retry after 261.150283ms: waiting for machine to come up
	I0731 16:59:04.022061   26392 main.go:141] libmachine: (ha-234651) DBG | domain ha-234651 has defined MAC address 52:54:00:20:60:53 in network mk-ha-234651
	I0731 16:59:04.022478   26392 main.go:141] libmachine: (ha-234651) DBG | unable to find current IP address of domain ha-234651 in network mk-ha-234651
	I0731 16:59:04.022501   26392 main.go:141] libmachine: (ha-234651) DBG | I0731 16:59:04.022433   26415 retry.go:31] will retry after 324.011133ms: waiting for machine to come up
	I0731 16:59:04.347982   26392 main.go:141] libmachine: (ha-234651) DBG | domain ha-234651 has defined MAC address 52:54:00:20:60:53 in network mk-ha-234651
	I0731 16:59:04.348423   26392 main.go:141] libmachine: (ha-234651) DBG | unable to find current IP address of domain ha-234651 in network mk-ha-234651
	I0731 16:59:04.348442   26392 main.go:141] libmachine: (ha-234651) DBG | I0731 16:59:04.348383   26415 retry.go:31] will retry after 378.78361ms: waiting for machine to come up
	I0731 16:59:04.728908   26392 main.go:141] libmachine: (ha-234651) DBG | domain ha-234651 has defined MAC address 52:54:00:20:60:53 in network mk-ha-234651
	I0731 16:59:04.729471   26392 main.go:141] libmachine: (ha-234651) DBG | unable to find current IP address of domain ha-234651 in network mk-ha-234651
	I0731 16:59:04.729500   26392 main.go:141] libmachine: (ha-234651) DBG | I0731 16:59:04.729404   26415 retry.go:31] will retry after 582.839678ms: waiting for machine to come up
	I0731 16:59:05.314006   26392 main.go:141] libmachine: (ha-234651) DBG | domain ha-234651 has defined MAC address 52:54:00:20:60:53 in network mk-ha-234651
	I0731 16:59:05.314617   26392 main.go:141] libmachine: (ha-234651) DBG | unable to find current IP address of domain ha-234651 in network mk-ha-234651
	I0731 16:59:05.314640   26392 main.go:141] libmachine: (ha-234651) DBG | I0731 16:59:05.314578   26415 retry.go:31] will retry after 543.640775ms: waiting for machine to come up
	I0731 16:59:05.860403   26392 main.go:141] libmachine: (ha-234651) DBG | domain ha-234651 has defined MAC address 52:54:00:20:60:53 in network mk-ha-234651
	I0731 16:59:05.860843   26392 main.go:141] libmachine: (ha-234651) DBG | unable to find current IP address of domain ha-234651 in network mk-ha-234651
	I0731 16:59:05.860867   26392 main.go:141] libmachine: (ha-234651) DBG | I0731 16:59:05.860796   26415 retry.go:31] will retry after 885.211489ms: waiting for machine to come up
	I0731 16:59:06.747859   26392 main.go:141] libmachine: (ha-234651) DBG | domain ha-234651 has defined MAC address 52:54:00:20:60:53 in network mk-ha-234651
	I0731 16:59:06.748290   26392 main.go:141] libmachine: (ha-234651) DBG | unable to find current IP address of domain ha-234651 in network mk-ha-234651
	I0731 16:59:06.748326   26392 main.go:141] libmachine: (ha-234651) DBG | I0731 16:59:06.748244   26415 retry.go:31] will retry after 872.987133ms: waiting for machine to come up
	I0731 16:59:07.622973   26392 main.go:141] libmachine: (ha-234651) DBG | domain ha-234651 has defined MAC address 52:54:00:20:60:53 in network mk-ha-234651
	I0731 16:59:07.623513   26392 main.go:141] libmachine: (ha-234651) DBG | unable to find current IP address of domain ha-234651 in network mk-ha-234651
	I0731 16:59:07.623541   26392 main.go:141] libmachine: (ha-234651) DBG | I0731 16:59:07.623457   26415 retry.go:31] will retry after 1.063595754s: waiting for machine to come up
	I0731 16:59:08.688832   26392 main.go:141] libmachine: (ha-234651) DBG | domain ha-234651 has defined MAC address 52:54:00:20:60:53 in network mk-ha-234651
	I0731 16:59:08.689277   26392 main.go:141] libmachine: (ha-234651) DBG | unable to find current IP address of domain ha-234651 in network mk-ha-234651
	I0731 16:59:08.689309   26392 main.go:141] libmachine: (ha-234651) DBG | I0731 16:59:08.689226   26415 retry.go:31] will retry after 1.211748796s: waiting for machine to come up
	I0731 16:59:09.902688   26392 main.go:141] libmachine: (ha-234651) DBG | domain ha-234651 has defined MAC address 52:54:00:20:60:53 in network mk-ha-234651
	I0731 16:59:09.903250   26392 main.go:141] libmachine: (ha-234651) DBG | unable to find current IP address of domain ha-234651 in network mk-ha-234651
	I0731 16:59:09.903282   26392 main.go:141] libmachine: (ha-234651) DBG | I0731 16:59:09.903203   26415 retry.go:31] will retry after 1.480030878s: waiting for machine to come up
	I0731 16:59:11.385039   26392 main.go:141] libmachine: (ha-234651) DBG | domain ha-234651 has defined MAC address 52:54:00:20:60:53 in network mk-ha-234651
	I0731 16:59:11.385459   26392 main.go:141] libmachine: (ha-234651) DBG | unable to find current IP address of domain ha-234651 in network mk-ha-234651
	I0731 16:59:11.385483   26392 main.go:141] libmachine: (ha-234651) DBG | I0731 16:59:11.385395   26415 retry.go:31] will retry after 1.914673374s: waiting for machine to come up
	I0731 16:59:13.301279   26392 main.go:141] libmachine: (ha-234651) DBG | domain ha-234651 has defined MAC address 52:54:00:20:60:53 in network mk-ha-234651
	I0731 16:59:13.301612   26392 main.go:141] libmachine: (ha-234651) DBG | unable to find current IP address of domain ha-234651 in network mk-ha-234651
	I0731 16:59:13.301648   26392 main.go:141] libmachine: (ha-234651) DBG | I0731 16:59:13.301555   26415 retry.go:31] will retry after 2.413581052s: waiting for machine to come up
	I0731 16:59:15.718131   26392 main.go:141] libmachine: (ha-234651) DBG | domain ha-234651 has defined MAC address 52:54:00:20:60:53 in network mk-ha-234651
	I0731 16:59:15.718454   26392 main.go:141] libmachine: (ha-234651) DBG | unable to find current IP address of domain ha-234651 in network mk-ha-234651
	I0731 16:59:15.718482   26392 main.go:141] libmachine: (ha-234651) DBG | I0731 16:59:15.718405   26415 retry.go:31] will retry after 4.359438277s: waiting for machine to come up
	I0731 16:59:20.081334   26392 main.go:141] libmachine: (ha-234651) DBG | domain ha-234651 has defined MAC address 52:54:00:20:60:53 in network mk-ha-234651
	I0731 16:59:20.081705   26392 main.go:141] libmachine: (ha-234651) DBG | unable to find current IP address of domain ha-234651 in network mk-ha-234651
	I0731 16:59:20.081730   26392 main.go:141] libmachine: (ha-234651) DBG | I0731 16:59:20.081673   26415 retry.go:31] will retry after 3.951412981s: waiting for machine to come up
	I0731 16:59:24.035653   26392 main.go:141] libmachine: (ha-234651) DBG | domain ha-234651 has defined MAC address 52:54:00:20:60:53 in network mk-ha-234651
	I0731 16:59:24.036108   26392 main.go:141] libmachine: (ha-234651) DBG | domain ha-234651 has current primary IP address 192.168.39.243 and MAC address 52:54:00:20:60:53 in network mk-ha-234651
	I0731 16:59:24.036193   26392 main.go:141] libmachine: (ha-234651) Found IP for machine: 192.168.39.243
	I0731 16:59:24.036235   26392 main.go:141] libmachine: (ha-234651) Reserving static IP address...
	I0731 16:59:24.036545   26392 main.go:141] libmachine: (ha-234651) DBG | unable to find host DHCP lease matching {name: "ha-234651", mac: "52:54:00:20:60:53", ip: "192.168.39.243"} in network mk-ha-234651
	I0731 16:59:24.106091   26392 main.go:141] libmachine: (ha-234651) DBG | Getting to WaitForSSH function...
	I0731 16:59:24.106174   26392 main.go:141] libmachine: (ha-234651) Reserved static IP address: 192.168.39.243
	I0731 16:59:24.106192   26392 main.go:141] libmachine: (ha-234651) Waiting for SSH to be available...
	I0731 16:59:24.108490   26392 main.go:141] libmachine: (ha-234651) DBG | domain ha-234651 has defined MAC address 52:54:00:20:60:53 in network mk-ha-234651
	I0731 16:59:24.108857   26392 main.go:141] libmachine: (ha-234651) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:60:53", ip: ""} in network mk-ha-234651: {Iface:virbr1 ExpiryTime:2024-07-31 17:59:15 +0000 UTC Type:0 Mac:52:54:00:20:60:53 Iaid: IPaddr:192.168.39.243 Prefix:24 Hostname:minikube Clientid:01:52:54:00:20:60:53}
	I0731 16:59:24.108896   26392 main.go:141] libmachine: (ha-234651) DBG | domain ha-234651 has defined IP address 192.168.39.243 and MAC address 52:54:00:20:60:53 in network mk-ha-234651
	I0731 16:59:24.109012   26392 main.go:141] libmachine: (ha-234651) DBG | Using SSH client type: external
	I0731 16:59:24.109035   26392 main.go:141] libmachine: (ha-234651) DBG | Using SSH private key: /home/jenkins/minikube-integration/19349-8084/.minikube/machines/ha-234651/id_rsa (-rw-------)
	I0731 16:59:24.109063   26392 main.go:141] libmachine: (ha-234651) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.243 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19349-8084/.minikube/machines/ha-234651/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0731 16:59:24.109077   26392 main.go:141] libmachine: (ha-234651) DBG | About to run SSH command:
	I0731 16:59:24.109103   26392 main.go:141] libmachine: (ha-234651) DBG | exit 0
	I0731 16:59:24.234933   26392 main.go:141] libmachine: (ha-234651) DBG | SSH cmd err, output: <nil>: 
	I0731 16:59:24.235227   26392 main.go:141] libmachine: (ha-234651) KVM machine creation complete!
	I0731 16:59:24.235572   26392 main.go:141] libmachine: (ha-234651) Calling .GetConfigRaw
	I0731 16:59:24.236082   26392 main.go:141] libmachine: (ha-234651) Calling .DriverName
	I0731 16:59:24.236267   26392 main.go:141] libmachine: (ha-234651) Calling .DriverName
	I0731 16:59:24.236428   26392 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0731 16:59:24.236441   26392 main.go:141] libmachine: (ha-234651) Calling .GetState
	I0731 16:59:24.237676   26392 main.go:141] libmachine: Detecting operating system of created instance...
	I0731 16:59:24.237689   26392 main.go:141] libmachine: Waiting for SSH to be available...
	I0731 16:59:24.237694   26392 main.go:141] libmachine: Getting to WaitForSSH function...
	I0731 16:59:24.237700   26392 main.go:141] libmachine: (ha-234651) Calling .GetSSHHostname
	I0731 16:59:24.239709   26392 main.go:141] libmachine: (ha-234651) DBG | domain ha-234651 has defined MAC address 52:54:00:20:60:53 in network mk-ha-234651
	I0731 16:59:24.240031   26392 main.go:141] libmachine: (ha-234651) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:60:53", ip: ""} in network mk-ha-234651: {Iface:virbr1 ExpiryTime:2024-07-31 17:59:15 +0000 UTC Type:0 Mac:52:54:00:20:60:53 Iaid: IPaddr:192.168.39.243 Prefix:24 Hostname:ha-234651 Clientid:01:52:54:00:20:60:53}
	I0731 16:59:24.240057   26392 main.go:141] libmachine: (ha-234651) DBG | domain ha-234651 has defined IP address 192.168.39.243 and MAC address 52:54:00:20:60:53 in network mk-ha-234651
	I0731 16:59:24.240225   26392 main.go:141] libmachine: (ha-234651) Calling .GetSSHPort
	I0731 16:59:24.240399   26392 main.go:141] libmachine: (ha-234651) Calling .GetSSHKeyPath
	I0731 16:59:24.240522   26392 main.go:141] libmachine: (ha-234651) Calling .GetSSHKeyPath
	I0731 16:59:24.240650   26392 main.go:141] libmachine: (ha-234651) Calling .GetSSHUsername
	I0731 16:59:24.240778   26392 main.go:141] libmachine: Using SSH client type: native
	I0731 16:59:24.240957   26392 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.243 22 <nil> <nil>}
	I0731 16:59:24.240967   26392 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0731 16:59:24.342185   26392 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0731 16:59:24.342206   26392 main.go:141] libmachine: Detecting the provisioner...
	I0731 16:59:24.342216   26392 main.go:141] libmachine: (ha-234651) Calling .GetSSHHostname
	I0731 16:59:24.344783   26392 main.go:141] libmachine: (ha-234651) DBG | domain ha-234651 has defined MAC address 52:54:00:20:60:53 in network mk-ha-234651
	I0731 16:59:24.345095   26392 main.go:141] libmachine: (ha-234651) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:60:53", ip: ""} in network mk-ha-234651: {Iface:virbr1 ExpiryTime:2024-07-31 17:59:15 +0000 UTC Type:0 Mac:52:54:00:20:60:53 Iaid: IPaddr:192.168.39.243 Prefix:24 Hostname:ha-234651 Clientid:01:52:54:00:20:60:53}
	I0731 16:59:24.345121   26392 main.go:141] libmachine: (ha-234651) DBG | domain ha-234651 has defined IP address 192.168.39.243 and MAC address 52:54:00:20:60:53 in network mk-ha-234651
	I0731 16:59:24.345282   26392 main.go:141] libmachine: (ha-234651) Calling .GetSSHPort
	I0731 16:59:24.345454   26392 main.go:141] libmachine: (ha-234651) Calling .GetSSHKeyPath
	I0731 16:59:24.345623   26392 main.go:141] libmachine: (ha-234651) Calling .GetSSHKeyPath
	I0731 16:59:24.345721   26392 main.go:141] libmachine: (ha-234651) Calling .GetSSHUsername
	I0731 16:59:24.345887   26392 main.go:141] libmachine: Using SSH client type: native
	I0731 16:59:24.346058   26392 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.243 22 <nil> <nil>}
	I0731 16:59:24.346068   26392 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0731 16:59:24.451456   26392 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0731 16:59:24.451514   26392 main.go:141] libmachine: found compatible host: buildroot
	I0731 16:59:24.451520   26392 main.go:141] libmachine: Provisioning with buildroot...
	I0731 16:59:24.451527   26392 main.go:141] libmachine: (ha-234651) Calling .GetMachineName
	I0731 16:59:24.451760   26392 buildroot.go:166] provisioning hostname "ha-234651"
	I0731 16:59:24.451784   26392 main.go:141] libmachine: (ha-234651) Calling .GetMachineName
	I0731 16:59:24.451944   26392 main.go:141] libmachine: (ha-234651) Calling .GetSSHHostname
	I0731 16:59:24.454316   26392 main.go:141] libmachine: (ha-234651) DBG | domain ha-234651 has defined MAC address 52:54:00:20:60:53 in network mk-ha-234651
	I0731 16:59:24.454697   26392 main.go:141] libmachine: (ha-234651) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:60:53", ip: ""} in network mk-ha-234651: {Iface:virbr1 ExpiryTime:2024-07-31 17:59:15 +0000 UTC Type:0 Mac:52:54:00:20:60:53 Iaid: IPaddr:192.168.39.243 Prefix:24 Hostname:ha-234651 Clientid:01:52:54:00:20:60:53}
	I0731 16:59:24.454719   26392 main.go:141] libmachine: (ha-234651) DBG | domain ha-234651 has defined IP address 192.168.39.243 and MAC address 52:54:00:20:60:53 in network mk-ha-234651
	I0731 16:59:24.454943   26392 main.go:141] libmachine: (ha-234651) Calling .GetSSHPort
	I0731 16:59:24.455133   26392 main.go:141] libmachine: (ha-234651) Calling .GetSSHKeyPath
	I0731 16:59:24.455260   26392 main.go:141] libmachine: (ha-234651) Calling .GetSSHKeyPath
	I0731 16:59:24.455450   26392 main.go:141] libmachine: (ha-234651) Calling .GetSSHUsername
	I0731 16:59:24.455628   26392 main.go:141] libmachine: Using SSH client type: native
	I0731 16:59:24.455820   26392 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.243 22 <nil> <nil>}
	I0731 16:59:24.455838   26392 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-234651 && echo "ha-234651" | sudo tee /etc/hostname
	I0731 16:59:24.572098   26392 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-234651
	
	I0731 16:59:24.572125   26392 main.go:141] libmachine: (ha-234651) Calling .GetSSHHostname
	I0731 16:59:24.574462   26392 main.go:141] libmachine: (ha-234651) DBG | domain ha-234651 has defined MAC address 52:54:00:20:60:53 in network mk-ha-234651
	I0731 16:59:24.574833   26392 main.go:141] libmachine: (ha-234651) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:60:53", ip: ""} in network mk-ha-234651: {Iface:virbr1 ExpiryTime:2024-07-31 17:59:15 +0000 UTC Type:0 Mac:52:54:00:20:60:53 Iaid: IPaddr:192.168.39.243 Prefix:24 Hostname:ha-234651 Clientid:01:52:54:00:20:60:53}
	I0731 16:59:24.574855   26392 main.go:141] libmachine: (ha-234651) DBG | domain ha-234651 has defined IP address 192.168.39.243 and MAC address 52:54:00:20:60:53 in network mk-ha-234651
	I0731 16:59:24.575006   26392 main.go:141] libmachine: (ha-234651) Calling .GetSSHPort
	I0731 16:59:24.575203   26392 main.go:141] libmachine: (ha-234651) Calling .GetSSHKeyPath
	I0731 16:59:24.575334   26392 main.go:141] libmachine: (ha-234651) Calling .GetSSHKeyPath
	I0731 16:59:24.575488   26392 main.go:141] libmachine: (ha-234651) Calling .GetSSHUsername
	I0731 16:59:24.575626   26392 main.go:141] libmachine: Using SSH client type: native
	I0731 16:59:24.575805   26392 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.243 22 <nil> <nil>}
	I0731 16:59:24.575823   26392 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-234651' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-234651/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-234651' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0731 16:59:24.687017   26392 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0731 16:59:24.687044   26392 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19349-8084/.minikube CaCertPath:/home/jenkins/minikube-integration/19349-8084/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19349-8084/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19349-8084/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19349-8084/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19349-8084/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19349-8084/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19349-8084/.minikube}
	I0731 16:59:24.687092   26392 buildroot.go:174] setting up certificates
	I0731 16:59:24.687144   26392 provision.go:84] configureAuth start
	I0731 16:59:24.687262   26392 main.go:141] libmachine: (ha-234651) Calling .GetMachineName
	I0731 16:59:24.687587   26392 main.go:141] libmachine: (ha-234651) Calling .GetIP
	I0731 16:59:24.690308   26392 main.go:141] libmachine: (ha-234651) DBG | domain ha-234651 has defined MAC address 52:54:00:20:60:53 in network mk-ha-234651
	I0731 16:59:24.690636   26392 main.go:141] libmachine: (ha-234651) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:60:53", ip: ""} in network mk-ha-234651: {Iface:virbr1 ExpiryTime:2024-07-31 17:59:15 +0000 UTC Type:0 Mac:52:54:00:20:60:53 Iaid: IPaddr:192.168.39.243 Prefix:24 Hostname:ha-234651 Clientid:01:52:54:00:20:60:53}
	I0731 16:59:24.690668   26392 main.go:141] libmachine: (ha-234651) DBG | domain ha-234651 has defined IP address 192.168.39.243 and MAC address 52:54:00:20:60:53 in network mk-ha-234651
	I0731 16:59:24.690787   26392 main.go:141] libmachine: (ha-234651) Calling .GetSSHHostname
	I0731 16:59:24.692980   26392 main.go:141] libmachine: (ha-234651) DBG | domain ha-234651 has defined MAC address 52:54:00:20:60:53 in network mk-ha-234651
	I0731 16:59:24.693269   26392 main.go:141] libmachine: (ha-234651) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:60:53", ip: ""} in network mk-ha-234651: {Iface:virbr1 ExpiryTime:2024-07-31 17:59:15 +0000 UTC Type:0 Mac:52:54:00:20:60:53 Iaid: IPaddr:192.168.39.243 Prefix:24 Hostname:ha-234651 Clientid:01:52:54:00:20:60:53}
	I0731 16:59:24.693290   26392 main.go:141] libmachine: (ha-234651) DBG | domain ha-234651 has defined IP address 192.168.39.243 and MAC address 52:54:00:20:60:53 in network mk-ha-234651
	I0731 16:59:24.693408   26392 provision.go:143] copyHostCerts
	I0731 16:59:24.693436   26392 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19349-8084/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19349-8084/.minikube/ca.pem
	I0731 16:59:24.693474   26392 exec_runner.go:144] found /home/jenkins/minikube-integration/19349-8084/.minikube/ca.pem, removing ...
	I0731 16:59:24.693486   26392 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19349-8084/.minikube/ca.pem
	I0731 16:59:24.693564   26392 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19349-8084/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19349-8084/.minikube/ca.pem (1082 bytes)
	I0731 16:59:24.693693   26392 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19349-8084/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19349-8084/.minikube/cert.pem
	I0731 16:59:24.693720   26392 exec_runner.go:144] found /home/jenkins/minikube-integration/19349-8084/.minikube/cert.pem, removing ...
	I0731 16:59:24.693730   26392 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19349-8084/.minikube/cert.pem
	I0731 16:59:24.693769   26392 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19349-8084/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19349-8084/.minikube/cert.pem (1123 bytes)
	I0731 16:59:24.693843   26392 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19349-8084/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19349-8084/.minikube/key.pem
	I0731 16:59:24.693865   26392 exec_runner.go:144] found /home/jenkins/minikube-integration/19349-8084/.minikube/key.pem, removing ...
	I0731 16:59:24.693872   26392 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19349-8084/.minikube/key.pem
	I0731 16:59:24.693904   26392 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19349-8084/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19349-8084/.minikube/key.pem (1675 bytes)
	I0731 16:59:24.693971   26392 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19349-8084/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19349-8084/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19349-8084/.minikube/certs/ca-key.pem org=jenkins.ha-234651 san=[127.0.0.1 192.168.39.243 ha-234651 localhost minikube]
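The server certificate generated here carries the SANs listed above (127.0.0.1, 192.168.39.243, ha-234651, localhost, minikube). They can be double-checked against the ServerCertPath from the auth options with openssl (a quick verification sketch, assuming openssl is installed on the build host):

	openssl x509 -noout -text \
	  -in /home/jenkins/minikube-integration/19349-8084/.minikube/machines/server.pem \
	  | grep -A1 'Subject Alternative Name'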
	I0731 16:59:24.825150   26392 provision.go:177] copyRemoteCerts
	I0731 16:59:24.825214   26392 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0731 16:59:24.825237   26392 main.go:141] libmachine: (ha-234651) Calling .GetSSHHostname
	I0731 16:59:24.828022   26392 main.go:141] libmachine: (ha-234651) DBG | domain ha-234651 has defined MAC address 52:54:00:20:60:53 in network mk-ha-234651
	I0731 16:59:24.828285   26392 main.go:141] libmachine: (ha-234651) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:60:53", ip: ""} in network mk-ha-234651: {Iface:virbr1 ExpiryTime:2024-07-31 17:59:15 +0000 UTC Type:0 Mac:52:54:00:20:60:53 Iaid: IPaddr:192.168.39.243 Prefix:24 Hostname:ha-234651 Clientid:01:52:54:00:20:60:53}
	I0731 16:59:24.828307   26392 main.go:141] libmachine: (ha-234651) DBG | domain ha-234651 has defined IP address 192.168.39.243 and MAC address 52:54:00:20:60:53 in network mk-ha-234651
	I0731 16:59:24.828513   26392 main.go:141] libmachine: (ha-234651) Calling .GetSSHPort
	I0731 16:59:24.828654   26392 main.go:141] libmachine: (ha-234651) Calling .GetSSHKeyPath
	I0731 16:59:24.828804   26392 main.go:141] libmachine: (ha-234651) Calling .GetSSHUsername
	I0731 16:59:24.828951   26392 sshutil.go:53] new ssh client: &{IP:192.168.39.243 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19349-8084/.minikube/machines/ha-234651/id_rsa Username:docker}
	I0731 16:59:24.908983   26392 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19349-8084/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0731 16:59:24.909061   26392 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19349-8084/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0731 16:59:24.930932   26392 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19349-8084/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0731 16:59:24.931007   26392 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19349-8084/.minikube/machines/server.pem --> /etc/docker/server.pem (1196 bytes)
	I0731 16:59:24.952360   26392 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19349-8084/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0731 16:59:24.952415   26392 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19349-8084/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0731 16:59:24.973200   26392 provision.go:87] duration metric: took 285.951239ms to configureAuth
	I0731 16:59:24.973235   26392 buildroot.go:189] setting minikube options for container-runtime
	I0731 16:59:24.973426   26392 config.go:182] Loaded profile config "ha-234651": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0731 16:59:24.973500   26392 main.go:141] libmachine: (ha-234651) Calling .GetSSHHostname
	I0731 16:59:24.975877   26392 main.go:141] libmachine: (ha-234651) DBG | domain ha-234651 has defined MAC address 52:54:00:20:60:53 in network mk-ha-234651
	I0731 16:59:24.976220   26392 main.go:141] libmachine: (ha-234651) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:60:53", ip: ""} in network mk-ha-234651: {Iface:virbr1 ExpiryTime:2024-07-31 17:59:15 +0000 UTC Type:0 Mac:52:54:00:20:60:53 Iaid: IPaddr:192.168.39.243 Prefix:24 Hostname:ha-234651 Clientid:01:52:54:00:20:60:53}
	I0731 16:59:24.976239   26392 main.go:141] libmachine: (ha-234651) DBG | domain ha-234651 has defined IP address 192.168.39.243 and MAC address 52:54:00:20:60:53 in network mk-ha-234651
	I0731 16:59:24.976400   26392 main.go:141] libmachine: (ha-234651) Calling .GetSSHPort
	I0731 16:59:24.976556   26392 main.go:141] libmachine: (ha-234651) Calling .GetSSHKeyPath
	I0731 16:59:24.976698   26392 main.go:141] libmachine: (ha-234651) Calling .GetSSHKeyPath
	I0731 16:59:24.976814   26392 main.go:141] libmachine: (ha-234651) Calling .GetSSHUsername
	I0731 16:59:24.976947   26392 main.go:141] libmachine: Using SSH client type: native
	I0731 16:59:24.977123   26392 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.243 22 <nil> <nil>}
	I0731 16:59:24.977145   26392 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0731 16:59:25.232552   26392 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
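That SSH command writes CRIO_MINIKUBE_OPTIONS into /etc/sysconfig/crio.minikube and restarts CRI-O. A quick way to confirm the change took effect on the guest (sketch, run over the same SSH session) is:

	cat /etc/sysconfig/crio.minikube   # expect: CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	sudo systemctl is-active crio      # expect: active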
	I0731 16:59:25.232574   26392 main.go:141] libmachine: Checking connection to Docker...
	I0731 16:59:25.232581   26392 main.go:141] libmachine: (ha-234651) Calling .GetURL
	I0731 16:59:25.233847   26392 main.go:141] libmachine: (ha-234651) DBG | Using libvirt version 6000000
	I0731 16:59:25.235805   26392 main.go:141] libmachine: (ha-234651) DBG | domain ha-234651 has defined MAC address 52:54:00:20:60:53 in network mk-ha-234651
	I0731 16:59:25.236125   26392 main.go:141] libmachine: (ha-234651) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:60:53", ip: ""} in network mk-ha-234651: {Iface:virbr1 ExpiryTime:2024-07-31 17:59:15 +0000 UTC Type:0 Mac:52:54:00:20:60:53 Iaid: IPaddr:192.168.39.243 Prefix:24 Hostname:ha-234651 Clientid:01:52:54:00:20:60:53}
	I0731 16:59:25.236147   26392 main.go:141] libmachine: (ha-234651) DBG | domain ha-234651 has defined IP address 192.168.39.243 and MAC address 52:54:00:20:60:53 in network mk-ha-234651
	I0731 16:59:25.236299   26392 main.go:141] libmachine: Docker is up and running!
	I0731 16:59:25.236312   26392 main.go:141] libmachine: Reticulating splines...
	I0731 16:59:25.236317   26392 client.go:171] duration metric: took 23.06346261s to LocalClient.Create
	I0731 16:59:25.236342   26392 start.go:167] duration metric: took 23.063527006s to libmachine.API.Create "ha-234651"
	I0731 16:59:25.236351   26392 start.go:293] postStartSetup for "ha-234651" (driver="kvm2")
	I0731 16:59:25.236360   26392 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0731 16:59:25.236372   26392 main.go:141] libmachine: (ha-234651) Calling .DriverName
	I0731 16:59:25.236626   26392 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0731 16:59:25.236651   26392 main.go:141] libmachine: (ha-234651) Calling .GetSSHHostname
	I0731 16:59:25.238593   26392 main.go:141] libmachine: (ha-234651) DBG | domain ha-234651 has defined MAC address 52:54:00:20:60:53 in network mk-ha-234651
	I0731 16:59:25.238936   26392 main.go:141] libmachine: (ha-234651) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:60:53", ip: ""} in network mk-ha-234651: {Iface:virbr1 ExpiryTime:2024-07-31 17:59:15 +0000 UTC Type:0 Mac:52:54:00:20:60:53 Iaid: IPaddr:192.168.39.243 Prefix:24 Hostname:ha-234651 Clientid:01:52:54:00:20:60:53}
	I0731 16:59:25.238974   26392 main.go:141] libmachine: (ha-234651) DBG | domain ha-234651 has defined IP address 192.168.39.243 and MAC address 52:54:00:20:60:53 in network mk-ha-234651
	I0731 16:59:25.239086   26392 main.go:141] libmachine: (ha-234651) Calling .GetSSHPort
	I0731 16:59:25.239260   26392 main.go:141] libmachine: (ha-234651) Calling .GetSSHKeyPath
	I0731 16:59:25.239404   26392 main.go:141] libmachine: (ha-234651) Calling .GetSSHUsername
	I0731 16:59:25.239540   26392 sshutil.go:53] new ssh client: &{IP:192.168.39.243 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19349-8084/.minikube/machines/ha-234651/id_rsa Username:docker}
	I0731 16:59:25.320880   26392 ssh_runner.go:195] Run: cat /etc/os-release
	I0731 16:59:25.324680   26392 info.go:137] Remote host: Buildroot 2023.02.9
	I0731 16:59:25.324703   26392 filesync.go:126] Scanning /home/jenkins/minikube-integration/19349-8084/.minikube/addons for local assets ...
	I0731 16:59:25.324770   26392 filesync.go:126] Scanning /home/jenkins/minikube-integration/19349-8084/.minikube/files for local assets ...
	I0731 16:59:25.324875   26392 filesync.go:149] local asset: /home/jenkins/minikube-integration/19349-8084/.minikube/files/etc/ssl/certs/152592.pem -> 152592.pem in /etc/ssl/certs
	I0731 16:59:25.324887   26392 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19349-8084/.minikube/files/etc/ssl/certs/152592.pem -> /etc/ssl/certs/152592.pem
	I0731 16:59:25.325012   26392 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0731 16:59:25.333559   26392 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19349-8084/.minikube/files/etc/ssl/certs/152592.pem --> /etc/ssl/certs/152592.pem (1708 bytes)
	I0731 16:59:25.355130   26392 start.go:296] duration metric: took 118.766984ms for postStartSetup
	I0731 16:59:25.355184   26392 main.go:141] libmachine: (ha-234651) Calling .GetConfigRaw
	I0731 16:59:25.355771   26392 main.go:141] libmachine: (ha-234651) Calling .GetIP
	I0731 16:59:25.358270   26392 main.go:141] libmachine: (ha-234651) DBG | domain ha-234651 has defined MAC address 52:54:00:20:60:53 in network mk-ha-234651
	I0731 16:59:25.358576   26392 main.go:141] libmachine: (ha-234651) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:60:53", ip: ""} in network mk-ha-234651: {Iface:virbr1 ExpiryTime:2024-07-31 17:59:15 +0000 UTC Type:0 Mac:52:54:00:20:60:53 Iaid: IPaddr:192.168.39.243 Prefix:24 Hostname:ha-234651 Clientid:01:52:54:00:20:60:53}
	I0731 16:59:25.358600   26392 main.go:141] libmachine: (ha-234651) DBG | domain ha-234651 has defined IP address 192.168.39.243 and MAC address 52:54:00:20:60:53 in network mk-ha-234651
	I0731 16:59:25.358857   26392 profile.go:143] Saving config to /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/ha-234651/config.json ...
	I0731 16:59:25.359033   26392 start.go:128] duration metric: took 23.204391608s to createHost
	I0731 16:59:25.359054   26392 main.go:141] libmachine: (ha-234651) Calling .GetSSHHostname
	I0731 16:59:25.361175   26392 main.go:141] libmachine: (ha-234651) DBG | domain ha-234651 has defined MAC address 52:54:00:20:60:53 in network mk-ha-234651
	I0731 16:59:25.361424   26392 main.go:141] libmachine: (ha-234651) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:60:53", ip: ""} in network mk-ha-234651: {Iface:virbr1 ExpiryTime:2024-07-31 17:59:15 +0000 UTC Type:0 Mac:52:54:00:20:60:53 Iaid: IPaddr:192.168.39.243 Prefix:24 Hostname:ha-234651 Clientid:01:52:54:00:20:60:53}
	I0731 16:59:25.361449   26392 main.go:141] libmachine: (ha-234651) DBG | domain ha-234651 has defined IP address 192.168.39.243 and MAC address 52:54:00:20:60:53 in network mk-ha-234651
	I0731 16:59:25.361646   26392 main.go:141] libmachine: (ha-234651) Calling .GetSSHPort
	I0731 16:59:25.361799   26392 main.go:141] libmachine: (ha-234651) Calling .GetSSHKeyPath
	I0731 16:59:25.361951   26392 main.go:141] libmachine: (ha-234651) Calling .GetSSHKeyPath
	I0731 16:59:25.362075   26392 main.go:141] libmachine: (ha-234651) Calling .GetSSHUsername
	I0731 16:59:25.362211   26392 main.go:141] libmachine: Using SSH client type: native
	I0731 16:59:25.362444   26392 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.243 22 <nil> <nil>}
	I0731 16:59:25.362466   26392 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0731 16:59:25.467488   26392 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722445165.445964437
	
	I0731 16:59:25.467508   26392 fix.go:216] guest clock: 1722445165.445964437
	I0731 16:59:25.467525   26392 fix.go:229] Guest: 2024-07-31 16:59:25.445964437 +0000 UTC Remote: 2024-07-31 16:59:25.359045152 +0000 UTC m=+23.305280078 (delta=86.919285ms)
	I0731 16:59:25.467549   26392 fix.go:200] guest clock delta is within tolerance: 86.919285ms
	I0731 16:59:25.467559   26392 start.go:83] releasing machines lock for "ha-234651", held for 23.312997688s
	I0731 16:59:25.467581   26392 main.go:141] libmachine: (ha-234651) Calling .DriverName
	I0731 16:59:25.467827   26392 main.go:141] libmachine: (ha-234651) Calling .GetIP
	I0731 16:59:25.470269   26392 main.go:141] libmachine: (ha-234651) DBG | domain ha-234651 has defined MAC address 52:54:00:20:60:53 in network mk-ha-234651
	I0731 16:59:25.470547   26392 main.go:141] libmachine: (ha-234651) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:60:53", ip: ""} in network mk-ha-234651: {Iface:virbr1 ExpiryTime:2024-07-31 17:59:15 +0000 UTC Type:0 Mac:52:54:00:20:60:53 Iaid: IPaddr:192.168.39.243 Prefix:24 Hostname:ha-234651 Clientid:01:52:54:00:20:60:53}
	I0731 16:59:25.470588   26392 main.go:141] libmachine: (ha-234651) DBG | domain ha-234651 has defined IP address 192.168.39.243 and MAC address 52:54:00:20:60:53 in network mk-ha-234651
	I0731 16:59:25.470708   26392 main.go:141] libmachine: (ha-234651) Calling .DriverName
	I0731 16:59:25.471160   26392 main.go:141] libmachine: (ha-234651) Calling .DriverName
	I0731 16:59:25.471315   26392 main.go:141] libmachine: (ha-234651) Calling .DriverName
	I0731 16:59:25.471412   26392 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0731 16:59:25.471447   26392 main.go:141] libmachine: (ha-234651) Calling .GetSSHHostname
	I0731 16:59:25.471500   26392 ssh_runner.go:195] Run: cat /version.json
	I0731 16:59:25.471523   26392 main.go:141] libmachine: (ha-234651) Calling .GetSSHHostname
	I0731 16:59:25.473900   26392 main.go:141] libmachine: (ha-234651) DBG | domain ha-234651 has defined MAC address 52:54:00:20:60:53 in network mk-ha-234651
	I0731 16:59:25.473923   26392 main.go:141] libmachine: (ha-234651) DBG | domain ha-234651 has defined MAC address 52:54:00:20:60:53 in network mk-ha-234651
	I0731 16:59:25.474275   26392 main.go:141] libmachine: (ha-234651) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:60:53", ip: ""} in network mk-ha-234651: {Iface:virbr1 ExpiryTime:2024-07-31 17:59:15 +0000 UTC Type:0 Mac:52:54:00:20:60:53 Iaid: IPaddr:192.168.39.243 Prefix:24 Hostname:ha-234651 Clientid:01:52:54:00:20:60:53}
	I0731 16:59:25.474312   26392 main.go:141] libmachine: (ha-234651) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:60:53", ip: ""} in network mk-ha-234651: {Iface:virbr1 ExpiryTime:2024-07-31 17:59:15 +0000 UTC Type:0 Mac:52:54:00:20:60:53 Iaid: IPaddr:192.168.39.243 Prefix:24 Hostname:ha-234651 Clientid:01:52:54:00:20:60:53}
	I0731 16:59:25.474337   26392 main.go:141] libmachine: (ha-234651) DBG | domain ha-234651 has defined IP address 192.168.39.243 and MAC address 52:54:00:20:60:53 in network mk-ha-234651
	I0731 16:59:25.474398   26392 main.go:141] libmachine: (ha-234651) DBG | domain ha-234651 has defined IP address 192.168.39.243 and MAC address 52:54:00:20:60:53 in network mk-ha-234651
	I0731 16:59:25.474471   26392 main.go:141] libmachine: (ha-234651) Calling .GetSSHPort
	I0731 16:59:25.474670   26392 main.go:141] libmachine: (ha-234651) Calling .GetSSHPort
	I0731 16:59:25.474679   26392 main.go:141] libmachine: (ha-234651) Calling .GetSSHKeyPath
	I0731 16:59:25.474819   26392 main.go:141] libmachine: (ha-234651) Calling .GetSSHUsername
	I0731 16:59:25.474835   26392 main.go:141] libmachine: (ha-234651) Calling .GetSSHKeyPath
	I0731 16:59:25.474970   26392 sshutil.go:53] new ssh client: &{IP:192.168.39.243 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19349-8084/.minikube/machines/ha-234651/id_rsa Username:docker}
	I0731 16:59:25.474978   26392 main.go:141] libmachine: (ha-234651) Calling .GetSSHUsername
	I0731 16:59:25.475151   26392 sshutil.go:53] new ssh client: &{IP:192.168.39.243 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19349-8084/.minikube/machines/ha-234651/id_rsa Username:docker}
	I0731 16:59:25.577098   26392 ssh_runner.go:195] Run: systemctl --version
	I0731 16:59:25.582747   26392 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0731 16:59:25.737662   26392 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0731 16:59:25.743019   26392 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0731 16:59:25.743081   26392 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0731 16:59:25.758785   26392 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0731 16:59:25.758804   26392 start.go:495] detecting cgroup driver to use...
	I0731 16:59:25.758859   26392 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0731 16:59:25.773597   26392 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0731 16:59:25.786568   26392 docker.go:217] disabling cri-docker service (if available) ...
	I0731 16:59:25.786640   26392 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0731 16:59:25.799385   26392 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0731 16:59:25.813818   26392 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0731 16:59:25.921385   26392 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0731 16:59:26.059608   26392 docker.go:233] disabling docker service ...
	I0731 16:59:26.059699   26392 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0731 16:59:26.073467   26392 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0731 16:59:26.085380   26392 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0731 16:59:26.225351   26392 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0731 16:59:26.343719   26392 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0731 16:59:26.356563   26392 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0731 16:59:26.372956   26392 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0731 16:59:26.373021   26392 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 16:59:26.382181   26392 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0731 16:59:26.382235   26392 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 16:59:26.391652   26392 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 16:59:26.401109   26392 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 16:59:26.410465   26392 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0731 16:59:26.420129   26392 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 16:59:26.429507   26392 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 16:59:26.445726   26392 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 16:59:26.455319   26392 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0731 16:59:26.464186   26392 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0731 16:59:26.464236   26392 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0731 16:59:26.476232   26392 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0731 16:59:26.488515   26392 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0731 16:59:26.599047   26392 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0731 16:59:26.727563   26392 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0731 16:59:26.727633   26392 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0731 16:59:26.731803   26392 start.go:563] Will wait 60s for crictl version
	I0731 16:59:26.731863   26392 ssh_runner.go:195] Run: which crictl
	I0731 16:59:26.735353   26392 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0731 16:59:26.770458   26392 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
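The same runtime check can be repeated by hand with crictl against the socket that was written to /etc/crictl.yaml above (illustrative only):

	sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock version
	sudo crictl info | head   # sanity-check that the CRI endpoint is answering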
	I0731 16:59:26.770538   26392 ssh_runner.go:195] Run: crio --version
	I0731 16:59:26.796805   26392 ssh_runner.go:195] Run: crio --version
	I0731 16:59:26.826014   26392 out.go:177] * Preparing Kubernetes v1.30.3 on CRI-O 1.29.1 ...
	I0731 16:59:26.827253   26392 main.go:141] libmachine: (ha-234651) Calling .GetIP
	I0731 16:59:26.829618   26392 main.go:141] libmachine: (ha-234651) DBG | domain ha-234651 has defined MAC address 52:54:00:20:60:53 in network mk-ha-234651
	I0731 16:59:26.829958   26392 main.go:141] libmachine: (ha-234651) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:60:53", ip: ""} in network mk-ha-234651: {Iface:virbr1 ExpiryTime:2024-07-31 17:59:15 +0000 UTC Type:0 Mac:52:54:00:20:60:53 Iaid: IPaddr:192.168.39.243 Prefix:24 Hostname:ha-234651 Clientid:01:52:54:00:20:60:53}
	I0731 16:59:26.829999   26392 main.go:141] libmachine: (ha-234651) DBG | domain ha-234651 has defined IP address 192.168.39.243 and MAC address 52:54:00:20:60:53 in network mk-ha-234651
	I0731 16:59:26.830170   26392 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0731 16:59:26.833815   26392 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0731 16:59:26.845446   26392 kubeadm.go:883] updating cluster {Name:ha-234651 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 Cl
usterName:ha-234651 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.243 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 M
ountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0731 16:59:26.845537   26392 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0731 16:59:26.845578   26392 ssh_runner.go:195] Run: sudo crictl images --output json
	I0731 16:59:26.877140   26392 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.3". assuming images are not preloaded.
	I0731 16:59:26.877195   26392 ssh_runner.go:195] Run: which lz4
	I0731 16:59:26.880717   26392 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19349-8084/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 -> /preloaded.tar.lz4
	I0731 16:59:26.880811   26392 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0731 16:59:26.884569   26392 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0731 16:59:26.884604   26392 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19349-8084/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (406200976 bytes)
	I0731 16:59:28.081998   26392 crio.go:462] duration metric: took 1.201214851s to copy over tarball
	I0731 16:59:28.082059   26392 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0731 16:59:30.220227   26392 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.138143947s)
	I0731 16:59:30.220261   26392 crio.go:469] duration metric: took 2.138235975s to extract the tarball
	I0731 16:59:30.220270   26392 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0731 16:59:30.257971   26392 ssh_runner.go:195] Run: sudo crictl images --output json
	I0731 16:59:30.299500   26392 crio.go:514] all images are preloaded for cri-o runtime.
	I0731 16:59:30.299525   26392 cache_images.go:84] Images are preloaded, skipping loading
	I0731 16:59:30.299533   26392 kubeadm.go:934] updating node { 192.168.39.243 8443 v1.30.3 crio true true} ...
	I0731 16:59:30.299640   26392 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-234651 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.243
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:ha-234651 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0731 16:59:30.299706   26392 ssh_runner.go:195] Run: crio config
	I0731 16:59:30.341032   26392 cni.go:84] Creating CNI manager for ""
	I0731 16:59:30.341053   26392 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0731 16:59:30.341066   26392 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0731 16:59:30.341085   26392 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.243 APIServerPort:8443 KubernetesVersion:v1.30.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-234651 NodeName:ha-234651 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.243"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.243 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernete
s/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0731 16:59:30.341212   26392 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.243
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-234651"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.243
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.243"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
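The rendered kubeadm config is copied to the node below as /var/tmp/minikube/kubeadm.yaml.new. If one wanted to exercise it without touching the node, kubeadm's dry-run mode could be used; a hedged example (assuming kubeadm sits next to the kubelet under /var/lib/minikube/binaries/v1.30.3, which is not something this test does itself):

	sudo /var/lib/minikube/binaries/v1.30.3/kubeadm init \
	  --config /var/tmp/minikube/kubeadm.yaml.new --dry-run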
	I0731 16:59:30.341238   26392 kube-vip.go:115] generating kube-vip config ...
	I0731 16:59:30.341273   26392 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0731 16:59:30.359265   26392 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0731 16:59:30.359355   26392 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/super-admin.conf"
	    name: kubeconfig
	status: {}
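kube-vip runs here as a static pod: it takes a leader-election Lease named plndr-cp-lock (vip_leasename above) and binds the HA VIP 192.168.39.254 on eth0 of whichever control-plane node currently holds the lease. Once the cluster is up, both facts can be observed directly (sketch, assuming kubectl access to the cluster):

	kubectl -n kube-system get lease plndr-cp-lock   # shows the current kube-vip leader
	ip addr show eth0 | grep 192.168.39.254          # VIP is bound only on the leader node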
	I0731 16:59:30.359418   26392 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0731 16:59:30.368615   26392 binaries.go:44] Found k8s binaries, skipping transfer
	I0731 16:59:30.368681   26392 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0731 16:59:30.377525   26392 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (309 bytes)
	I0731 16:59:30.392491   26392 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0731 16:59:30.407319   26392 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2153 bytes)
	I0731 16:59:30.422134   26392 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1447 bytes)
	I0731 16:59:30.436820   26392 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0731 16:59:30.440442   26392 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0731 16:59:30.451341   26392 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0731 16:59:30.581682   26392 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0731 16:59:30.597960   26392 certs.go:68] Setting up /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/ha-234651 for IP: 192.168.39.243
	I0731 16:59:30.597987   26392 certs.go:194] generating shared ca certs ...
	I0731 16:59:30.598009   26392 certs.go:226] acquiring lock for ca certs: {Name:mk49eed439085f9c30746706460ce213570d1997 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 16:59:30.598199   26392 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19349-8084/.minikube/ca.key
	I0731 16:59:30.598259   26392 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19349-8084/.minikube/proxy-client-ca.key
	I0731 16:59:30.598275   26392 certs.go:256] generating profile certs ...
	I0731 16:59:30.598341   26392 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/ha-234651/client.key
	I0731 16:59:30.598384   26392 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/ha-234651/client.crt with IP's: []
	I0731 16:59:30.700029   26392 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/ha-234651/client.crt ...
	I0731 16:59:30.700055   26392 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/ha-234651/client.crt: {Name:mk7ee64628046b1d2da8c67709ceb5f483c647c8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 16:59:30.700250   26392 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/ha-234651/client.key ...
	I0731 16:59:30.700268   26392 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/ha-234651/client.key: {Name:mk9a3b2bee7d0d6eb498143fed75ea79c6d5cd05 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 16:59:30.700383   26392 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/ha-234651/apiserver.key.414d6831
	I0731 16:59:30.700408   26392 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/ha-234651/apiserver.crt.414d6831 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.243 192.168.39.254]
	I0731 16:59:30.953973   26392 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/ha-234651/apiserver.crt.414d6831 ...
	I0731 16:59:30.954003   26392 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/ha-234651/apiserver.crt.414d6831: {Name:mk17313042b397a79965fb7698fed9783403c484 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 16:59:30.954153   26392 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/ha-234651/apiserver.key.414d6831 ...
	I0731 16:59:30.954165   26392 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/ha-234651/apiserver.key.414d6831: {Name:mk7b1e0e449b763530a552eb308f6593ad6d0ed2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 16:59:30.954236   26392 certs.go:381] copying /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/ha-234651/apiserver.crt.414d6831 -> /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/ha-234651/apiserver.crt
	I0731 16:59:30.954316   26392 certs.go:385] copying /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/ha-234651/apiserver.key.414d6831 -> /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/ha-234651/apiserver.key
	I0731 16:59:30.954381   26392 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/ha-234651/proxy-client.key
	I0731 16:59:30.954402   26392 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/ha-234651/proxy-client.crt with IP's: []
	I0731 16:59:31.190411   26392 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/ha-234651/proxy-client.crt ...
	I0731 16:59:31.190442   26392 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/ha-234651/proxy-client.crt: {Name:mkbe422e4a5b3ad16cdbcc06c237d001864e7f29 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 16:59:31.190605   26392 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/ha-234651/proxy-client.key ...
	I0731 16:59:31.190616   26392 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/ha-234651/proxy-client.key: {Name:mkd04cb40fa82623f4bd1825fcdb903f6f94bfe6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 16:59:31.190677   26392 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19349-8084/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0731 16:59:31.190694   26392 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19349-8084/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0731 16:59:31.190705   26392 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19349-8084/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0731 16:59:31.190718   26392 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19349-8084/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0731 16:59:31.190731   26392 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/ha-234651/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0731 16:59:31.190744   26392 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/ha-234651/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0731 16:59:31.190757   26392 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/ha-234651/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0731 16:59:31.190769   26392 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/ha-234651/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0731 16:59:31.190821   26392 certs.go:484] found cert: /home/jenkins/minikube-integration/19349-8084/.minikube/certs/15259.pem (1338 bytes)
	W0731 16:59:31.190853   26392 certs.go:480] ignoring /home/jenkins/minikube-integration/19349-8084/.minikube/certs/15259_empty.pem, impossibly tiny 0 bytes
	I0731 16:59:31.190862   26392 certs.go:484] found cert: /home/jenkins/minikube-integration/19349-8084/.minikube/certs/ca-key.pem (1675 bytes)
	I0731 16:59:31.190884   26392 certs.go:484] found cert: /home/jenkins/minikube-integration/19349-8084/.minikube/certs/ca.pem (1082 bytes)
	I0731 16:59:31.190906   26392 certs.go:484] found cert: /home/jenkins/minikube-integration/19349-8084/.minikube/certs/cert.pem (1123 bytes)
	I0731 16:59:31.190930   26392 certs.go:484] found cert: /home/jenkins/minikube-integration/19349-8084/.minikube/certs/key.pem (1675 bytes)
	I0731 16:59:31.190972   26392 certs.go:484] found cert: /home/jenkins/minikube-integration/19349-8084/.minikube/files/etc/ssl/certs/152592.pem (1708 bytes)
	I0731 16:59:31.191001   26392 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19349-8084/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0731 16:59:31.191014   26392 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19349-8084/.minikube/certs/15259.pem -> /usr/share/ca-certificates/15259.pem
	I0731 16:59:31.191026   26392 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19349-8084/.minikube/files/etc/ssl/certs/152592.pem -> /usr/share/ca-certificates/152592.pem
	I0731 16:59:31.191561   26392 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19349-8084/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0731 16:59:31.219008   26392 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19349-8084/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0731 16:59:31.243811   26392 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19349-8084/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0731 16:59:31.268574   26392 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19349-8084/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0731 16:59:31.293150   26392 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/ha-234651/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0731 16:59:31.317997   26392 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/ha-234651/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0731 16:59:31.342785   26392 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/ha-234651/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0731 16:59:31.373077   26392 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/ha-234651/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0731 16:59:31.412217   26392 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19349-8084/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0731 16:59:31.439225   26392 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19349-8084/.minikube/certs/15259.pem --> /usr/share/ca-certificates/15259.pem (1338 bytes)
	I0731 16:59:31.462362   26392 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19349-8084/.minikube/files/etc/ssl/certs/152592.pem --> /usr/share/ca-certificates/152592.pem (1708 bytes)
	I0731 16:59:31.483715   26392 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0731 16:59:31.498726   26392 ssh_runner.go:195] Run: openssl version
	I0731 16:59:31.503960   26392 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0731 16:59:31.514138   26392 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0731 16:59:31.518194   26392 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 31 16:42 /usr/share/ca-certificates/minikubeCA.pem
	I0731 16:59:31.518274   26392 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0731 16:59:31.523607   26392 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0731 16:59:31.533617   26392 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/15259.pem && ln -fs /usr/share/ca-certificates/15259.pem /etc/ssl/certs/15259.pem"
	I0731 16:59:31.543851   26392 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/15259.pem
	I0731 16:59:31.547852   26392 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 31 16:54 /usr/share/ca-certificates/15259.pem
	I0731 16:59:31.547912   26392 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/15259.pem
	I0731 16:59:31.553469   26392 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/15259.pem /etc/ssl/certs/51391683.0"
	I0731 16:59:31.563615   26392 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/152592.pem && ln -fs /usr/share/ca-certificates/152592.pem /etc/ssl/certs/152592.pem"
	I0731 16:59:31.573678   26392 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/152592.pem
	I0731 16:59:31.577713   26392 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 31 16:54 /usr/share/ca-certificates/152592.pem
	I0731 16:59:31.577753   26392 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/152592.pem
	I0731 16:59:31.582846   26392 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/152592.pem /etc/ssl/certs/3ec20f2e.0"
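The three certificates above (minikubeCA.pem, 15259.pem, 152592.pem) are each installed with the same pattern: copy the PEM into /usr/share/ca-certificates, compute its OpenSSL subject hash, and symlink /etc/ssl/certs/<hash>.0 back to the file. A minimal local Go sketch of that pattern (illustrative only; minikube runs the equivalent commands on the guest over SSH, and installCACert is a hypothetical helper):

package main

import (
	"fmt"
	"os/exec"
	"path/filepath"
	"strings"
)

// installCACert copies a CA certificate into the shared trust directory and
// links it under its OpenSSL subject hash, mirroring the commands in the log.
func installCACert(pemPath, name string) error {
	dst := filepath.Join("/usr/share/ca-certificates", name)
	if err := exec.Command("sudo", "cp", pemPath, dst).Run(); err != nil {
		return err
	}
	// `openssl x509 -hash -noout` prints the subject hash that names the
	// /etc/ssl/certs/<hash>.0 symlink (e.g. b5213941.0 above).
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", dst).Output()
	if err != nil {
		return err
	}
	link := filepath.Join("/etc/ssl/certs", strings.TrimSpace(string(out))+".0")
	return exec.Command("sudo", "ln", "-fs", dst, link).Run()
}

func main() {
	if err := installCACert("ca.crt", "minikubeCA.pem"); err != nil {
		fmt.Println("install failed:", err)
	}
}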
	I0731 16:59:31.593077   26392 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0731 16:59:31.596896   26392 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0731 16:59:31.596943   26392 kubeadm.go:392] StartCluster: {Name:ha-234651 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:ha-234651 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.243 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0731 16:59:31.597021   26392 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0731 16:59:31.597103   26392 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0731 16:59:31.630641   26392 cri.go:89] found id: ""
	I0731 16:59:31.630714   26392 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0731 16:59:31.640096   26392 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0731 16:59:31.649935   26392 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0731 16:59:31.660459   26392 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0731 16:59:31.660477   26392 kubeadm.go:157] found existing configuration files:
	
	I0731 16:59:31.660528   26392 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0731 16:59:31.669310   26392 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0731 16:59:31.669357   26392 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0731 16:59:31.678192   26392 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0731 16:59:31.686853   26392 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0731 16:59:31.686922   26392 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0731 16:59:31.695910   26392 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0731 16:59:31.704263   26392 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0731 16:59:31.704311   26392 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0731 16:59:31.713139   26392 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0731 16:59:31.722155   26392 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0731 16:59:31.722221   26392 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
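Each stale-config check above follows the same shape: grep the kubeconfig for the expected control-plane endpoint and, when grep exits non-zero (here because the files do not exist yet), remove the file before kubeadm regenerates it. A rough Go sketch of that check (assumed simplification; minikube runs these commands on the guest via SSH):

package main

import "os/exec"

// cleanStaleKubeconfigs removes any kubeconfig that does not reference the
// expected control-plane endpoint, matching the grep/rm pairs in the log.
func cleanStaleKubeconfigs(endpoint string) {
	files := []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	}
	for _, f := range files {
		// grep exits non-zero when the endpoint is absent or the file is
		// missing; either way the (possibly stale) file is removed.
		if err := exec.Command("sudo", "grep", endpoint, f).Run(); err != nil {
			_ = exec.Command("sudo", "rm", "-f", f).Run()
		}
	}
}

func main() {
	cleanStaleKubeconfigs("https://control-plane.minikube.internal:8443")
}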
	I0731 16:59:31.731387   26392 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0731 16:59:31.832450   26392 kubeadm.go:310] [init] Using Kubernetes version: v1.30.3
	I0731 16:59:31.832542   26392 kubeadm.go:310] [preflight] Running pre-flight checks
	I0731 16:59:31.953896   26392 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0731 16:59:31.954043   26392 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0731 16:59:31.954158   26392 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0731 16:59:32.151343   26392 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0731 16:59:32.359125   26392 out.go:204]   - Generating certificates and keys ...
	I0731 16:59:32.359246   26392 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0731 16:59:32.359314   26392 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0731 16:59:32.359435   26392 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0731 16:59:32.377248   26392 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0731 16:59:32.677549   26392 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0731 16:59:32.867146   26392 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0731 16:59:33.360775   26392 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0731 16:59:33.360993   26392 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [ha-234651 localhost] and IPs [192.168.39.243 127.0.0.1 ::1]
	I0731 16:59:33.466112   26392 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0731 16:59:33.466409   26392 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [ha-234651 localhost] and IPs [192.168.39.243 127.0.0.1 ::1]
	I0731 16:59:33.625099   26392 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0731 16:59:33.962462   26392 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0731 16:59:34.432296   26392 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0731 16:59:34.432396   26392 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0731 16:59:34.697433   26392 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0731 16:59:34.764397   26392 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0731 16:59:34.908374   26392 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0731 16:59:34.990770   26392 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0731 16:59:35.091185   26392 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0731 16:59:35.092604   26392 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0731 16:59:35.096555   26392 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0731 16:59:35.145654   26392 out.go:204]   - Booting up control plane ...
	I0731 16:59:35.145821   26392 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0731 16:59:35.145942   26392 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0731 16:59:35.146050   26392 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0731 16:59:35.146209   26392 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0731 16:59:35.146339   26392 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0731 16:59:35.146418   26392 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0731 16:59:35.270078   26392 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0731 16:59:35.270238   26392 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0731 16:59:35.772279   26392 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 502.37404ms
	I0731 16:59:35.772411   26392 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0731 16:59:41.880564   26392 kubeadm.go:310] [api-check] The API server is healthy after 6.111372285s
	I0731 16:59:41.892891   26392 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0731 16:59:41.913605   26392 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0731 16:59:42.438762   26392 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0731 16:59:42.439017   26392 kubeadm.go:310] [mark-control-plane] Marking the node ha-234651 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0731 16:59:42.449883   26392 kubeadm.go:310] [bootstrap-token] Using token: nfptp5.vhfyienhf110vt3u
	I0731 16:59:42.451360   26392 out.go:204]   - Configuring RBAC rules ...
	I0731 16:59:42.451490   26392 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0731 16:59:42.458097   26392 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0731 16:59:42.468508   26392 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0731 16:59:42.471813   26392 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0731 16:59:42.474759   26392 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0731 16:59:42.478202   26392 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0731 16:59:42.498715   26392 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0731 16:59:42.764249   26392 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0731 16:59:43.287784   26392 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0731 16:59:43.287808   26392 kubeadm.go:310] 
	I0731 16:59:43.287870   26392 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0731 16:59:43.287903   26392 kubeadm.go:310] 
	I0731 16:59:43.288019   26392 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0731 16:59:43.288030   26392 kubeadm.go:310] 
	I0731 16:59:43.288092   26392 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0731 16:59:43.288172   26392 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0731 16:59:43.288259   26392 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0731 16:59:43.288280   26392 kubeadm.go:310] 
	I0731 16:59:43.288355   26392 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0731 16:59:43.288362   26392 kubeadm.go:310] 
	I0731 16:59:43.288431   26392 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0731 16:59:43.288440   26392 kubeadm.go:310] 
	I0731 16:59:43.288514   26392 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0731 16:59:43.288636   26392 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0731 16:59:43.288740   26392 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0731 16:59:43.288750   26392 kubeadm.go:310] 
	I0731 16:59:43.288863   26392 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0731 16:59:43.288980   26392 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0731 16:59:43.288989   26392 kubeadm.go:310] 
	I0731 16:59:43.289087   26392 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token nfptp5.vhfyienhf110vt3u \
	I0731 16:59:43.289260   26392 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:460b24b51d10a3760f2391542a8a89e401359854f437f922cbee3f2d8ed6c856 \
	I0731 16:59:43.289295   26392 kubeadm.go:310] 	--control-plane 
	I0731 16:59:43.289306   26392 kubeadm.go:310] 
	I0731 16:59:43.289443   26392 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0731 16:59:43.289461   26392 kubeadm.go:310] 
	I0731 16:59:43.289574   26392 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token nfptp5.vhfyienhf110vt3u \
	I0731 16:59:43.289714   26392 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:460b24b51d10a3760f2391542a8a89e401359854f437f922cbee3f2d8ed6c856 
	I0731 16:59:43.289863   26392 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
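The --discovery-token-ca-cert-hash value in both join commands is the SHA-256 of the cluster CA's Subject Public Key Info, so it can be recomputed from the CA certificate. A short Go sketch (illustrative, with the cert path taken from the log above):

package main

import (
	"crypto/sha256"
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
)

func main() {
	// Hash the CA's Subject Public Key Info, the value kubeadm encodes as
	// --discovery-token-ca-cert-hash sha256:<hex>.
	data, err := os.ReadFile("/var/lib/minikube/certs/ca.crt")
	if err != nil {
		panic(err)
	}
	block, _ := pem.Decode(data)
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		panic(err)
	}
	sum := sha256.Sum256(cert.RawSubjectPublicKeyInfo)
	fmt.Printf("sha256:%x\n", sum)
}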
	I0731 16:59:43.289898   26392 cni.go:84] Creating CNI manager for ""
	I0731 16:59:43.289910   26392 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0731 16:59:43.291714   26392 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0731 16:59:43.293067   26392 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0731 16:59:43.299040   26392 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.30.3/kubectl ...
	I0731 16:59:43.299057   26392 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0731 16:59:43.318479   26392 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0731 16:59:43.641272   26392 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0731 16:59:43.641365   26392 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-234651 minikube.k8s.io/updated_at=2024_07_31T16_59_43_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=1d737dad7efa60c56d30434fcd857dd3b14c91d9 minikube.k8s.io/name=ha-234651 minikube.k8s.io/primary=true
	I0731 16:59:43.641384   26392 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 16:59:43.763146   26392 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 16:59:43.763278   26392 ops.go:34] apiserver oom_adj: -16
	I0731 16:59:44.263961   26392 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 16:59:44.763593   26392 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 16:59:45.263430   26392 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 16:59:45.763237   26392 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 16:59:46.262918   26392 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 16:59:46.763733   26392 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 16:59:47.263588   26392 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 16:59:47.763260   26392 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 16:59:48.263680   26392 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 16:59:48.763380   26392 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 16:59:49.263682   26392 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 16:59:49.762871   26392 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 16:59:50.263309   26392 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 16:59:50.763587   26392 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 16:59:51.263226   26392 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 16:59:51.763668   26392 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 16:59:52.263227   26392 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 16:59:52.763318   26392 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 16:59:53.263205   26392 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 16:59:53.763797   26392 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 16:59:54.262996   26392 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 16:59:54.763149   26392 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 16:59:55.262996   26392 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 16:59:55.762890   26392 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 16:59:56.263910   26392 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 16:59:56.344269   26392 kubeadm.go:1113] duration metric: took 12.702990197s to wait for elevateKubeSystemPrivileges
	I0731 16:59:56.344312   26392 kubeadm.go:394] duration metric: took 24.747371577s to StartCluster
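The block of repeated `kubectl get sa default` runs above is a simple poll: the command is retried about every half second until the default service account exists (a sign the service-account controller is up), which is what the elevateKubeSystemPrivileges duration metric measures. A sketch of that wait loop (an assumed simplification, not minikube's actual code):

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// waitForDefaultSA retries `kubectl get sa default` until it succeeds or the
// deadline passes, mirroring the ~500ms polling visible in the log.
func waitForDefaultSA(kubectl, kubeconfig string, timeout time.Duration) bool {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		cmd := exec.Command("sudo", kubectl, "get", "sa", "default", "--kubeconfig="+kubeconfig)
		if cmd.Run() == nil {
			return true
		}
		time.Sleep(500 * time.Millisecond)
	}
	return false
}

func main() {
	ok := waitForDefaultSA("/var/lib/minikube/binaries/v1.30.3/kubectl", "/var/lib/minikube/kubeconfig", time.Minute)
	fmt.Println("default service account ready:", ok)
}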
	I0731 16:59:56.344330   26392 settings.go:142] acquiring lock: {Name:mk8ac8269296bf0b887a00ff74caaad180005f89 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 16:59:56.344404   26392 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19349-8084/kubeconfig
	I0731 16:59:56.345043   26392 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19349-8084/kubeconfig: {Name:mkf2340bc1ada3c683f132aca37228d7588e1fdb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 16:59:56.345263   26392 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0731 16:59:56.345267   26392 start.go:233] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.168.39.243 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0731 16:59:56.345284   26392 start.go:241] waiting for startup goroutines ...
	I0731 16:59:56.345292   26392 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0731 16:59:56.345336   26392 addons.go:69] Setting storage-provisioner=true in profile "ha-234651"
	I0731 16:59:56.345353   26392 addons.go:69] Setting default-storageclass=true in profile "ha-234651"
	I0731 16:59:56.345366   26392 addons.go:234] Setting addon storage-provisioner=true in "ha-234651"
	I0731 16:59:56.345391   26392 host.go:66] Checking if "ha-234651" exists ...
	I0731 16:59:56.345398   26392 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ha-234651"
	I0731 16:59:56.345506   26392 config.go:182] Loaded profile config "ha-234651": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0731 16:59:56.345772   26392 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 16:59:56.345790   26392 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 16:59:56.345815   26392 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 16:59:56.345910   26392 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 16:59:56.361291   26392 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42931
	I0731 16:59:56.361349   26392 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45401
	I0731 16:59:56.361793   26392 main.go:141] libmachine: () Calling .GetVersion
	I0731 16:59:56.361835   26392 main.go:141] libmachine: () Calling .GetVersion
	I0731 16:59:56.362300   26392 main.go:141] libmachine: Using API Version  1
	I0731 16:59:56.362319   26392 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 16:59:56.362450   26392 main.go:141] libmachine: Using API Version  1
	I0731 16:59:56.362472   26392 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 16:59:56.362631   26392 main.go:141] libmachine: () Calling .GetMachineName
	I0731 16:59:56.362787   26392 main.go:141] libmachine: () Calling .GetMachineName
	I0731 16:59:56.362937   26392 main.go:141] libmachine: (ha-234651) Calling .GetState
	I0731 16:59:56.363241   26392 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 16:59:56.363286   26392 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 16:59:56.364992   26392 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19349-8084/kubeconfig
	I0731 16:59:56.365190   26392 kapi.go:59] client config for ha-234651: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19349-8084/.minikube/profiles/ha-234651/client.crt", KeyFile:"/home/jenkins/minikube-integration/19349-8084/.minikube/profiles/ha-234651/client.key", CAFile:"/home/jenkins/minikube-integration/19349-8084/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1d02f40), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
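The rest.Config dump above is derived from the kubeconfig minikube just wrote (client certificate and key under the ha-234651 profile, CA from .minikube/ca.crt, host set to the HA virtual IP). With client-go, an equivalent client can be built roughly like this (a sketch only; minikube constructs the config internally rather than via clientcmd):

package main

import (
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Load the freshly written kubeconfig and build a clientset against the
	// HA virtual IP (https://192.168.39.254:8443 in the dump above).
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/19349-8084/kubeconfig")
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	_ = clientset // e.g. clientset.StorageV1().StorageClasses().List(...)
}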
	I0731 16:59:56.365640   26392 cert_rotation.go:137] Starting client certificate rotation controller
	I0731 16:59:56.365778   26392 addons.go:234] Setting addon default-storageclass=true in "ha-234651"
	I0731 16:59:56.365818   26392 host.go:66] Checking if "ha-234651" exists ...
	I0731 16:59:56.366094   26392 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 16:59:56.366126   26392 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 16:59:56.377968   26392 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39165
	I0731 16:59:56.378432   26392 main.go:141] libmachine: () Calling .GetVersion
	I0731 16:59:56.378938   26392 main.go:141] libmachine: Using API Version  1
	I0731 16:59:56.378966   26392 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 16:59:56.379329   26392 main.go:141] libmachine: () Calling .GetMachineName
	I0731 16:59:56.379497   26392 main.go:141] libmachine: (ha-234651) Calling .GetState
	I0731 16:59:56.380724   26392 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35845
	I0731 16:59:56.381083   26392 main.go:141] libmachine: () Calling .GetVersion
	I0731 16:59:56.381396   26392 main.go:141] libmachine: (ha-234651) Calling .DriverName
	I0731 16:59:56.381537   26392 main.go:141] libmachine: Using API Version  1
	I0731 16:59:56.381558   26392 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 16:59:56.381875   26392 main.go:141] libmachine: () Calling .GetMachineName
	I0731 16:59:56.382348   26392 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 16:59:56.382384   26392 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 16:59:56.383775   26392 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0731 16:59:56.385342   26392 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0731 16:59:56.385362   26392 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0731 16:59:56.385381   26392 main.go:141] libmachine: (ha-234651) Calling .GetSSHHostname
	I0731 16:59:56.388154   26392 main.go:141] libmachine: (ha-234651) DBG | domain ha-234651 has defined MAC address 52:54:00:20:60:53 in network mk-ha-234651
	I0731 16:59:56.388571   26392 main.go:141] libmachine: (ha-234651) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:60:53", ip: ""} in network mk-ha-234651: {Iface:virbr1 ExpiryTime:2024-07-31 17:59:15 +0000 UTC Type:0 Mac:52:54:00:20:60:53 Iaid: IPaddr:192.168.39.243 Prefix:24 Hostname:ha-234651 Clientid:01:52:54:00:20:60:53}
	I0731 16:59:56.388592   26392 main.go:141] libmachine: (ha-234651) DBG | domain ha-234651 has defined IP address 192.168.39.243 and MAC address 52:54:00:20:60:53 in network mk-ha-234651
	I0731 16:59:56.388729   26392 main.go:141] libmachine: (ha-234651) Calling .GetSSHPort
	I0731 16:59:56.388910   26392 main.go:141] libmachine: (ha-234651) Calling .GetSSHKeyPath
	I0731 16:59:56.389059   26392 main.go:141] libmachine: (ha-234651) Calling .GetSSHUsername
	I0731 16:59:56.389201   26392 sshutil.go:53] new ssh client: &{IP:192.168.39.243 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19349-8084/.minikube/machines/ha-234651/id_rsa Username:docker}
	I0731 16:59:56.398746   26392 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41463
	I0731 16:59:56.399161   26392 main.go:141] libmachine: () Calling .GetVersion
	I0731 16:59:56.399652   26392 main.go:141] libmachine: Using API Version  1
	I0731 16:59:56.399674   26392 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 16:59:56.400071   26392 main.go:141] libmachine: () Calling .GetMachineName
	I0731 16:59:56.400250   26392 main.go:141] libmachine: (ha-234651) Calling .GetState
	I0731 16:59:56.401966   26392 main.go:141] libmachine: (ha-234651) Calling .DriverName
	I0731 16:59:56.402156   26392 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0731 16:59:56.402172   26392 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0731 16:59:56.402189   26392 main.go:141] libmachine: (ha-234651) Calling .GetSSHHostname
	I0731 16:59:56.404481   26392 main.go:141] libmachine: (ha-234651) DBG | domain ha-234651 has defined MAC address 52:54:00:20:60:53 in network mk-ha-234651
	I0731 16:59:56.404847   26392 main.go:141] libmachine: (ha-234651) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:60:53", ip: ""} in network mk-ha-234651: {Iface:virbr1 ExpiryTime:2024-07-31 17:59:15 +0000 UTC Type:0 Mac:52:54:00:20:60:53 Iaid: IPaddr:192.168.39.243 Prefix:24 Hostname:ha-234651 Clientid:01:52:54:00:20:60:53}
	I0731 16:59:56.404880   26392 main.go:141] libmachine: (ha-234651) DBG | domain ha-234651 has defined IP address 192.168.39.243 and MAC address 52:54:00:20:60:53 in network mk-ha-234651
	I0731 16:59:56.404995   26392 main.go:141] libmachine: (ha-234651) Calling .GetSSHPort
	I0731 16:59:56.405161   26392 main.go:141] libmachine: (ha-234651) Calling .GetSSHKeyPath
	I0731 16:59:56.405297   26392 main.go:141] libmachine: (ha-234651) Calling .GetSSHUsername
	I0731 16:59:56.405423   26392 sshutil.go:53] new ssh client: &{IP:192.168.39.243 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19349-8084/.minikube/machines/ha-234651/id_rsa Username:docker}
	I0731 16:59:56.487386   26392 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0731 16:59:56.554849   26392 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0731 16:59:56.574262   26392 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0731 16:59:56.780210   26392 start.go:971] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
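The sed pipeline a few lines up rewrites the coredns ConfigMap in place: it adds a log directive ahead of the errors plugin and a hosts block ahead of the forward plugin, so that host.minikube.internal resolves to the gateway address reported here. The injected Corefile fragment, as implied by that sed expression, is:

        hosts {
           192.168.39.1 host.minikube.internal
           fallthrough
        }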
	I0731 16:59:57.037064   26392 main.go:141] libmachine: Making call to close driver server
	I0731 16:59:57.037091   26392 main.go:141] libmachine: (ha-234651) Calling .Close
	I0731 16:59:57.037161   26392 main.go:141] libmachine: Making call to close driver server
	I0731 16:59:57.037184   26392 main.go:141] libmachine: (ha-234651) Calling .Close
	I0731 16:59:57.037387   26392 main.go:141] libmachine: Successfully made call to close driver server
	I0731 16:59:57.037400   26392 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 16:59:57.037409   26392 main.go:141] libmachine: Making call to close driver server
	I0731 16:59:57.037416   26392 main.go:141] libmachine: (ha-234651) Calling .Close
	I0731 16:59:57.037506   26392 main.go:141] libmachine: (ha-234651) DBG | Closing plugin on server side
	I0731 16:59:57.037530   26392 main.go:141] libmachine: Successfully made call to close driver server
	I0731 16:59:57.037543   26392 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 16:59:57.037552   26392 main.go:141] libmachine: Making call to close driver server
	I0731 16:59:57.037559   26392 main.go:141] libmachine: (ha-234651) Calling .Close
	I0731 16:59:57.037593   26392 main.go:141] libmachine: Successfully made call to close driver server
	I0731 16:59:57.037614   26392 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 16:59:57.037752   26392 main.go:141] libmachine: Successfully made call to close driver server
	I0731 16:59:57.037765   26392 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 16:59:57.037834   26392 main.go:141] libmachine: (ha-234651) DBG | Closing plugin on server side
	I0731 16:59:57.037878   26392 round_trippers.go:463] GET https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses
	I0731 16:59:57.037889   26392 round_trippers.go:469] Request Headers:
	I0731 16:59:57.037899   26392 round_trippers.go:473]     Accept: application/json, */*
	I0731 16:59:57.037906   26392 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 16:59:57.047242   26392 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0731 16:59:57.047913   26392 round_trippers.go:463] PUT https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I0731 16:59:57.047928   26392 round_trippers.go:469] Request Headers:
	I0731 16:59:57.047936   26392 round_trippers.go:473]     Content-Type: application/json
	I0731 16:59:57.047944   26392 round_trippers.go:473]     Accept: application/json, */*
	I0731 16:59:57.047949   26392 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 16:59:57.051404   26392 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 16:59:57.051567   26392 main.go:141] libmachine: Making call to close driver server
	I0731 16:59:57.051579   26392 main.go:141] libmachine: (ha-234651) Calling .Close
	I0731 16:59:57.051797   26392 main.go:141] libmachine: Successfully made call to close driver server
	I0731 16:59:57.051821   26392 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 16:59:57.053464   26392 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0731 16:59:57.054697   26392 addons.go:510] duration metric: took 709.400523ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I0731 16:59:57.054733   26392 start.go:246] waiting for cluster config update ...
	I0731 16:59:57.054747   26392 start.go:255] writing updated cluster config ...
	I0731 16:59:57.056334   26392 out.go:177] 
	I0731 16:59:57.057638   26392 config.go:182] Loaded profile config "ha-234651": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0731 16:59:57.057709   26392 profile.go:143] Saving config to /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/ha-234651/config.json ...
	I0731 16:59:57.059150   26392 out.go:177] * Starting "ha-234651-m02" control-plane node in "ha-234651" cluster
	I0731 16:59:57.060105   26392 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0731 16:59:57.060125   26392 cache.go:56] Caching tarball of preloaded images
	I0731 16:59:57.060204   26392 preload.go:172] Found /home/jenkins/minikube-integration/19349-8084/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0731 16:59:57.060214   26392 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on crio
	I0731 16:59:57.060282   26392 profile.go:143] Saving config to /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/ha-234651/config.json ...
	I0731 16:59:57.060470   26392 start.go:360] acquireMachinesLock for ha-234651-m02: {Name:mkd78eacfab7d6c3058c6674434b3d889ec957e0 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0731 16:59:57.060513   26392 start.go:364] duration metric: took 24.628µs to acquireMachinesLock for "ha-234651-m02"
	I0731 16:59:57.060537   26392 start.go:93] Provisioning new machine with config: &{Name:ha-234651 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:ha-234651 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.243 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0731 16:59:57.060617   26392 start.go:125] createHost starting for "m02" (driver="kvm2")
	I0731 16:59:57.062693   26392 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0731 16:59:57.062769   26392 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 16:59:57.062791   26392 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 16:59:57.076864   26392 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43663
	I0731 16:59:57.077265   26392 main.go:141] libmachine: () Calling .GetVersion
	I0731 16:59:57.077768   26392 main.go:141] libmachine: Using API Version  1
	I0731 16:59:57.077790   26392 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 16:59:57.078055   26392 main.go:141] libmachine: () Calling .GetMachineName
	I0731 16:59:57.078270   26392 main.go:141] libmachine: (ha-234651-m02) Calling .GetMachineName
	I0731 16:59:57.078418   26392 main.go:141] libmachine: (ha-234651-m02) Calling .DriverName
	I0731 16:59:57.078532   26392 start.go:159] libmachine.API.Create for "ha-234651" (driver="kvm2")
	I0731 16:59:57.078552   26392 client.go:168] LocalClient.Create starting
	I0731 16:59:57.078582   26392 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19349-8084/.minikube/certs/ca.pem
	I0731 16:59:57.078620   26392 main.go:141] libmachine: Decoding PEM data...
	I0731 16:59:57.078637   26392 main.go:141] libmachine: Parsing certificate...
	I0731 16:59:57.078683   26392 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19349-8084/.minikube/certs/cert.pem
	I0731 16:59:57.078702   26392 main.go:141] libmachine: Decoding PEM data...
	I0731 16:59:57.078713   26392 main.go:141] libmachine: Parsing certificate...
	I0731 16:59:57.078728   26392 main.go:141] libmachine: Running pre-create checks...
	I0731 16:59:57.078736   26392 main.go:141] libmachine: (ha-234651-m02) Calling .PreCreateCheck
	I0731 16:59:57.078891   26392 main.go:141] libmachine: (ha-234651-m02) Calling .GetConfigRaw
	I0731 16:59:57.079226   26392 main.go:141] libmachine: Creating machine...
	I0731 16:59:57.079238   26392 main.go:141] libmachine: (ha-234651-m02) Calling .Create
	I0731 16:59:57.079339   26392 main.go:141] libmachine: (ha-234651-m02) Creating KVM machine...
	I0731 16:59:57.080488   26392 main.go:141] libmachine: (ha-234651-m02) DBG | found existing default KVM network
	I0731 16:59:57.080588   26392 main.go:141] libmachine: (ha-234651-m02) DBG | found existing private KVM network mk-ha-234651
	I0731 16:59:57.080678   26392 main.go:141] libmachine: (ha-234651-m02) Setting up store path in /home/jenkins/minikube-integration/19349-8084/.minikube/machines/ha-234651-m02 ...
	I0731 16:59:57.080697   26392 main.go:141] libmachine: (ha-234651-m02) Building disk image from file:///home/jenkins/minikube-integration/19349-8084/.minikube/cache/iso/amd64/minikube-v1.33.1-1722248113-19339-amd64.iso
	I0731 16:59:57.080738   26392 main.go:141] libmachine: (ha-234651-m02) DBG | I0731 16:59:57.080659   26772 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19349-8084/.minikube
	I0731 16:59:57.080790   26392 main.go:141] libmachine: (ha-234651-m02) Downloading /home/jenkins/minikube-integration/19349-8084/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19349-8084/.minikube/cache/iso/amd64/minikube-v1.33.1-1722248113-19339-amd64.iso...
	I0731 16:59:57.334000   26392 main.go:141] libmachine: (ha-234651-m02) DBG | I0731 16:59:57.333829   26772 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19349-8084/.minikube/machines/ha-234651-m02/id_rsa...
	I0731 16:59:57.482649   26392 main.go:141] libmachine: (ha-234651-m02) DBG | I0731 16:59:57.482546   26772 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19349-8084/.minikube/machines/ha-234651-m02/ha-234651-m02.rawdisk...
	I0731 16:59:57.482673   26392 main.go:141] libmachine: (ha-234651-m02) DBG | Writing magic tar header
	I0731 16:59:57.482684   26392 main.go:141] libmachine: (ha-234651-m02) DBG | Writing SSH key tar header
	I0731 16:59:57.482735   26392 main.go:141] libmachine: (ha-234651-m02) DBG | I0731 16:59:57.482680   26772 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19349-8084/.minikube/machines/ha-234651-m02 ...
	I0731 16:59:57.482817   26392 main.go:141] libmachine: (ha-234651-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19349-8084/.minikube/machines/ha-234651-m02
	I0731 16:59:57.482845   26392 main.go:141] libmachine: (ha-234651-m02) Setting executable bit set on /home/jenkins/minikube-integration/19349-8084/.minikube/machines/ha-234651-m02 (perms=drwx------)
	I0731 16:59:57.482861   26392 main.go:141] libmachine: (ha-234651-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19349-8084/.minikube/machines
	I0731 16:59:57.482876   26392 main.go:141] libmachine: (ha-234651-m02) Setting executable bit set on /home/jenkins/minikube-integration/19349-8084/.minikube/machines (perms=drwxr-xr-x)
	I0731 16:59:57.482889   26392 main.go:141] libmachine: (ha-234651-m02) Setting executable bit set on /home/jenkins/minikube-integration/19349-8084/.minikube (perms=drwxr-xr-x)
	I0731 16:59:57.482900   26392 main.go:141] libmachine: (ha-234651-m02) Setting executable bit set on /home/jenkins/minikube-integration/19349-8084 (perms=drwxrwxr-x)
	I0731 16:59:57.482912   26392 main.go:141] libmachine: (ha-234651-m02) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0731 16:59:57.482925   26392 main.go:141] libmachine: (ha-234651-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19349-8084/.minikube
	I0731 16:59:57.482940   26392 main.go:141] libmachine: (ha-234651-m02) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0731 16:59:57.482954   26392 main.go:141] libmachine: (ha-234651-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19349-8084
	I0731 16:59:57.482966   26392 main.go:141] libmachine: (ha-234651-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0731 16:59:57.482977   26392 main.go:141] libmachine: (ha-234651-m02) DBG | Checking permissions on dir: /home/jenkins
	I0731 16:59:57.482988   26392 main.go:141] libmachine: (ha-234651-m02) DBG | Checking permissions on dir: /home
	I0731 16:59:57.483006   26392 main.go:141] libmachine: (ha-234651-m02) DBG | Skipping /home - not owner
	I0731 16:59:57.483018   26392 main.go:141] libmachine: (ha-234651-m02) Creating domain...
	I0731 16:59:57.484125   26392 main.go:141] libmachine: (ha-234651-m02) define libvirt domain using xml: 
	I0731 16:59:57.484158   26392 main.go:141] libmachine: (ha-234651-m02) <domain type='kvm'>
	I0731 16:59:57.484171   26392 main.go:141] libmachine: (ha-234651-m02)   <name>ha-234651-m02</name>
	I0731 16:59:57.484183   26392 main.go:141] libmachine: (ha-234651-m02)   <memory unit='MiB'>2200</memory>
	I0731 16:59:57.484192   26392 main.go:141] libmachine: (ha-234651-m02)   <vcpu>2</vcpu>
	I0731 16:59:57.484202   26392 main.go:141] libmachine: (ha-234651-m02)   <features>
	I0731 16:59:57.484210   26392 main.go:141] libmachine: (ha-234651-m02)     <acpi/>
	I0731 16:59:57.484220   26392 main.go:141] libmachine: (ha-234651-m02)     <apic/>
	I0731 16:59:57.484231   26392 main.go:141] libmachine: (ha-234651-m02)     <pae/>
	I0731 16:59:57.484241   26392 main.go:141] libmachine: (ha-234651-m02)     
	I0731 16:59:57.484250   26392 main.go:141] libmachine: (ha-234651-m02)   </features>
	I0731 16:59:57.484261   26392 main.go:141] libmachine: (ha-234651-m02)   <cpu mode='host-passthrough'>
	I0731 16:59:57.484272   26392 main.go:141] libmachine: (ha-234651-m02)   
	I0731 16:59:57.484281   26392 main.go:141] libmachine: (ha-234651-m02)   </cpu>
	I0731 16:59:57.484289   26392 main.go:141] libmachine: (ha-234651-m02)   <os>
	I0731 16:59:57.484298   26392 main.go:141] libmachine: (ha-234651-m02)     <type>hvm</type>
	I0731 16:59:57.484307   26392 main.go:141] libmachine: (ha-234651-m02)     <boot dev='cdrom'/>
	I0731 16:59:57.484320   26392 main.go:141] libmachine: (ha-234651-m02)     <boot dev='hd'/>
	I0731 16:59:57.484332   26392 main.go:141] libmachine: (ha-234651-m02)     <bootmenu enable='no'/>
	I0731 16:59:57.484341   26392 main.go:141] libmachine: (ha-234651-m02)   </os>
	I0731 16:59:57.484348   26392 main.go:141] libmachine: (ha-234651-m02)   <devices>
	I0731 16:59:57.484359   26392 main.go:141] libmachine: (ha-234651-m02)     <disk type='file' device='cdrom'>
	I0731 16:59:57.484375   26392 main.go:141] libmachine: (ha-234651-m02)       <source file='/home/jenkins/minikube-integration/19349-8084/.minikube/machines/ha-234651-m02/boot2docker.iso'/>
	I0731 16:59:57.484386   26392 main.go:141] libmachine: (ha-234651-m02)       <target dev='hdc' bus='scsi'/>
	I0731 16:59:57.484406   26392 main.go:141] libmachine: (ha-234651-m02)       <readonly/>
	I0731 16:59:57.484426   26392 main.go:141] libmachine: (ha-234651-m02)     </disk>
	I0731 16:59:57.484438   26392 main.go:141] libmachine: (ha-234651-m02)     <disk type='file' device='disk'>
	I0731 16:59:57.484451   26392 main.go:141] libmachine: (ha-234651-m02)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0731 16:59:57.484468   26392 main.go:141] libmachine: (ha-234651-m02)       <source file='/home/jenkins/minikube-integration/19349-8084/.minikube/machines/ha-234651-m02/ha-234651-m02.rawdisk'/>
	I0731 16:59:57.484480   26392 main.go:141] libmachine: (ha-234651-m02)       <target dev='hda' bus='virtio'/>
	I0731 16:59:57.484491   26392 main.go:141] libmachine: (ha-234651-m02)     </disk>
	I0731 16:59:57.484500   26392 main.go:141] libmachine: (ha-234651-m02)     <interface type='network'>
	I0731 16:59:57.484513   26392 main.go:141] libmachine: (ha-234651-m02)       <source network='mk-ha-234651'/>
	I0731 16:59:57.484525   26392 main.go:141] libmachine: (ha-234651-m02)       <model type='virtio'/>
	I0731 16:59:57.484537   26392 main.go:141] libmachine: (ha-234651-m02)     </interface>
	I0731 16:59:57.484548   26392 main.go:141] libmachine: (ha-234651-m02)     <interface type='network'>
	I0731 16:59:57.484559   26392 main.go:141] libmachine: (ha-234651-m02)       <source network='default'/>
	I0731 16:59:57.484574   26392 main.go:141] libmachine: (ha-234651-m02)       <model type='virtio'/>
	I0731 16:59:57.484586   26392 main.go:141] libmachine: (ha-234651-m02)     </interface>
	I0731 16:59:57.484593   26392 main.go:141] libmachine: (ha-234651-m02)     <serial type='pty'>
	I0731 16:59:57.484604   26392 main.go:141] libmachine: (ha-234651-m02)       <target port='0'/>
	I0731 16:59:57.484612   26392 main.go:141] libmachine: (ha-234651-m02)     </serial>
	I0731 16:59:57.484624   26392 main.go:141] libmachine: (ha-234651-m02)     <console type='pty'>
	I0731 16:59:57.484636   26392 main.go:141] libmachine: (ha-234651-m02)       <target type='serial' port='0'/>
	I0731 16:59:57.484669   26392 main.go:141] libmachine: (ha-234651-m02)     </console>
	I0731 16:59:57.484699   26392 main.go:141] libmachine: (ha-234651-m02)     <rng model='virtio'>
	I0731 16:59:57.484717   26392 main.go:141] libmachine: (ha-234651-m02)       <backend model='random'>/dev/random</backend>
	I0731 16:59:57.484732   26392 main.go:141] libmachine: (ha-234651-m02)     </rng>
	I0731 16:59:57.484745   26392 main.go:141] libmachine: (ha-234651-m02)     
	I0731 16:59:57.484756   26392 main.go:141] libmachine: (ha-234651-m02)     
	I0731 16:59:57.484770   26392 main.go:141] libmachine: (ha-234651-m02)   </devices>
	I0731 16:59:57.484783   26392 main.go:141] libmachine: (ha-234651-m02) </domain>
	I0731 16:59:57.484799   26392 main.go:141] libmachine: (ha-234651-m02) 
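The generated domain XML above is handed to libvirt to define and boot the ha-234651-m02 VM. With the Go libvirt bindings, the core calls look roughly like the following (a sketch assuming libvirt.org/go/libvirt; the kvm2 driver's actual code has more steps):

package main

import (
	"os"

	libvirt "libvirt.org/go/libvirt"
)

func main() {
	// Read the generated domain XML (this file path is a placeholder).
	xml, err := os.ReadFile("ha-234651-m02.xml")
	if err != nil {
		panic(err)
	}

	// Connect to the same system libvirt daemon the kvm2 driver targets
	// (KVMQemuURI:qemu:///system in the cluster config above).
	conn, err := libvirt.NewConnect("qemu:///system")
	if err != nil {
		panic(err)
	}
	defer conn.Close()

	// Define the domain from the XML, then start ("create") it.
	dom, err := conn.DomainDefineXML(string(xml))
	if err != nil {
		panic(err)
	}
	defer dom.Free()
	if err := dom.Create(); err != nil {
		panic(err)
	}
}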
	I0731 16:59:57.492379   26392 main.go:141] libmachine: (ha-234651-m02) DBG | domain ha-234651-m02 has defined MAC address 52:54:00:87:d0:bf in network default
	I0731 16:59:57.493034   26392 main.go:141] libmachine: (ha-234651-m02) Ensuring networks are active...
	I0731 16:59:57.493054   26392 main.go:141] libmachine: (ha-234651-m02) DBG | domain ha-234651-m02 has defined MAC address 52:54:00:4c:97:0e in network mk-ha-234651
	I0731 16:59:57.493802   26392 main.go:141] libmachine: (ha-234651-m02) Ensuring network default is active
	I0731 16:59:57.494113   26392 main.go:141] libmachine: (ha-234651-m02) Ensuring network mk-ha-234651 is active
	I0731 16:59:57.494448   26392 main.go:141] libmachine: (ha-234651-m02) Getting domain xml...
	I0731 16:59:57.495283   26392 main.go:141] libmachine: (ha-234651-m02) Creating domain...
	I0731 16:59:58.698086   26392 main.go:141] libmachine: (ha-234651-m02) Waiting to get IP...
	I0731 16:59:58.698849   26392 main.go:141] libmachine: (ha-234651-m02) DBG | domain ha-234651-m02 has defined MAC address 52:54:00:4c:97:0e in network mk-ha-234651
	I0731 16:59:58.699286   26392 main.go:141] libmachine: (ha-234651-m02) DBG | unable to find current IP address of domain ha-234651-m02 in network mk-ha-234651
	I0731 16:59:58.699314   26392 main.go:141] libmachine: (ha-234651-m02) DBG | I0731 16:59:58.699260   26772 retry.go:31] will retry after 237.684145ms: waiting for machine to come up
	I0731 16:59:58.938824   26392 main.go:141] libmachine: (ha-234651-m02) DBG | domain ha-234651-m02 has defined MAC address 52:54:00:4c:97:0e in network mk-ha-234651
	I0731 16:59:58.939376   26392 main.go:141] libmachine: (ha-234651-m02) DBG | unable to find current IP address of domain ha-234651-m02 in network mk-ha-234651
	I0731 16:59:58.939406   26392 main.go:141] libmachine: (ha-234651-m02) DBG | I0731 16:59:58.939313   26772 retry.go:31] will retry after 380.331665ms: waiting for machine to come up
	I0731 16:59:59.320818   26392 main.go:141] libmachine: (ha-234651-m02) DBG | domain ha-234651-m02 has defined MAC address 52:54:00:4c:97:0e in network mk-ha-234651
	I0731 16:59:59.321283   26392 main.go:141] libmachine: (ha-234651-m02) DBG | unable to find current IP address of domain ha-234651-m02 in network mk-ha-234651
	I0731 16:59:59.321314   26392 main.go:141] libmachine: (ha-234651-m02) DBG | I0731 16:59:59.321229   26772 retry.go:31] will retry after 409.470005ms: waiting for machine to come up
	I0731 16:59:59.732928   26392 main.go:141] libmachine: (ha-234651-m02) DBG | domain ha-234651-m02 has defined MAC address 52:54:00:4c:97:0e in network mk-ha-234651
	I0731 16:59:59.733349   26392 main.go:141] libmachine: (ha-234651-m02) DBG | unable to find current IP address of domain ha-234651-m02 in network mk-ha-234651
	I0731 16:59:59.733377   26392 main.go:141] libmachine: (ha-234651-m02) DBG | I0731 16:59:59.733301   26772 retry.go:31] will retry after 539.092112ms: waiting for machine to come up
	I0731 17:00:00.274038   26392 main.go:141] libmachine: (ha-234651-m02) DBG | domain ha-234651-m02 has defined MAC address 52:54:00:4c:97:0e in network mk-ha-234651
	I0731 17:00:00.274440   26392 main.go:141] libmachine: (ha-234651-m02) DBG | unable to find current IP address of domain ha-234651-m02 in network mk-ha-234651
	I0731 17:00:00.274494   26392 main.go:141] libmachine: (ha-234651-m02) DBG | I0731 17:00:00.274418   26772 retry.go:31] will retry after 704.175056ms: waiting for machine to come up
	I0731 17:00:00.980162   26392 main.go:141] libmachine: (ha-234651-m02) DBG | domain ha-234651-m02 has defined MAC address 52:54:00:4c:97:0e in network mk-ha-234651
	I0731 17:00:00.980593   26392 main.go:141] libmachine: (ha-234651-m02) DBG | unable to find current IP address of domain ha-234651-m02 in network mk-ha-234651
	I0731 17:00:00.980631   26392 main.go:141] libmachine: (ha-234651-m02) DBG | I0731 17:00:00.980535   26772 retry.go:31] will retry after 904.538693ms: waiting for machine to come up
	I0731 17:00:01.886662   26392 main.go:141] libmachine: (ha-234651-m02) DBG | domain ha-234651-m02 has defined MAC address 52:54:00:4c:97:0e in network mk-ha-234651
	I0731 17:00:01.887100   26392 main.go:141] libmachine: (ha-234651-m02) DBG | unable to find current IP address of domain ha-234651-m02 in network mk-ha-234651
	I0731 17:00:01.887139   26392 main.go:141] libmachine: (ha-234651-m02) DBG | I0731 17:00:01.887051   26772 retry.go:31] will retry after 930.755767ms: waiting for machine to come up
	I0731 17:00:02.819648   26392 main.go:141] libmachine: (ha-234651-m02) DBG | domain ha-234651-m02 has defined MAC address 52:54:00:4c:97:0e in network mk-ha-234651
	I0731 17:00:02.820080   26392 main.go:141] libmachine: (ha-234651-m02) DBG | unable to find current IP address of domain ha-234651-m02 in network mk-ha-234651
	I0731 17:00:02.820107   26392 main.go:141] libmachine: (ha-234651-m02) DBG | I0731 17:00:02.820029   26772 retry.go:31] will retry after 1.34592852s: waiting for machine to come up
	I0731 17:00:04.168273   26392 main.go:141] libmachine: (ha-234651-m02) DBG | domain ha-234651-m02 has defined MAC address 52:54:00:4c:97:0e in network mk-ha-234651
	I0731 17:00:04.168755   26392 main.go:141] libmachine: (ha-234651-m02) DBG | unable to find current IP address of domain ha-234651-m02 in network mk-ha-234651
	I0731 17:00:04.168785   26392 main.go:141] libmachine: (ha-234651-m02) DBG | I0731 17:00:04.168700   26772 retry.go:31] will retry after 1.692001302s: waiting for machine to come up
	I0731 17:00:05.862244   26392 main.go:141] libmachine: (ha-234651-m02) DBG | domain ha-234651-m02 has defined MAC address 52:54:00:4c:97:0e in network mk-ha-234651
	I0731 17:00:05.862748   26392 main.go:141] libmachine: (ha-234651-m02) DBG | unable to find current IP address of domain ha-234651-m02 in network mk-ha-234651
	I0731 17:00:05.862779   26392 main.go:141] libmachine: (ha-234651-m02) DBG | I0731 17:00:05.862711   26772 retry.go:31] will retry after 2.150428945s: waiting for machine to come up
	I0731 17:00:08.014515   26392 main.go:141] libmachine: (ha-234651-m02) DBG | domain ha-234651-m02 has defined MAC address 52:54:00:4c:97:0e in network mk-ha-234651
	I0731 17:00:08.014935   26392 main.go:141] libmachine: (ha-234651-m02) DBG | unable to find current IP address of domain ha-234651-m02 in network mk-ha-234651
	I0731 17:00:08.014970   26392 main.go:141] libmachine: (ha-234651-m02) DBG | I0731 17:00:08.014893   26772 retry.go:31] will retry after 2.239362339s: waiting for machine to come up
	I0731 17:00:10.256555   26392 main.go:141] libmachine: (ha-234651-m02) DBG | domain ha-234651-m02 has defined MAC address 52:54:00:4c:97:0e in network mk-ha-234651
	I0731 17:00:10.256967   26392 main.go:141] libmachine: (ha-234651-m02) DBG | unable to find current IP address of domain ha-234651-m02 in network mk-ha-234651
	I0731 17:00:10.257008   26392 main.go:141] libmachine: (ha-234651-m02) DBG | I0731 17:00:10.256949   26772 retry.go:31] will retry after 2.400335015s: waiting for machine to come up
	I0731 17:00:12.658945   26392 main.go:141] libmachine: (ha-234651-m02) DBG | domain ha-234651-m02 has defined MAC address 52:54:00:4c:97:0e in network mk-ha-234651
	I0731 17:00:12.659349   26392 main.go:141] libmachine: (ha-234651-m02) DBG | unable to find current IP address of domain ha-234651-m02 in network mk-ha-234651
	I0731 17:00:12.659377   26392 main.go:141] libmachine: (ha-234651-m02) DBG | I0731 17:00:12.659299   26772 retry.go:31] will retry after 4.392574536s: waiting for machine to come up
	I0731 17:00:17.056090   26392 main.go:141] libmachine: (ha-234651-m02) DBG | domain ha-234651-m02 has defined MAC address 52:54:00:4c:97:0e in network mk-ha-234651
	I0731 17:00:17.056590   26392 main.go:141] libmachine: (ha-234651-m02) Found IP for machine: 192.168.39.235
	I0731 17:00:17.056616   26392 main.go:141] libmachine: (ha-234651-m02) Reserving static IP address...
	I0731 17:00:17.056625   26392 main.go:141] libmachine: (ha-234651-m02) DBG | domain ha-234651-m02 has current primary IP address 192.168.39.235 and MAC address 52:54:00:4c:97:0e in network mk-ha-234651
	I0731 17:00:17.057033   26392 main.go:141] libmachine: (ha-234651-m02) DBG | unable to find host DHCP lease matching {name: "ha-234651-m02", mac: "52:54:00:4c:97:0e", ip: "192.168.39.235"} in network mk-ha-234651
	I0731 17:00:17.129001   26392 main.go:141] libmachine: (ha-234651-m02) Reserved static IP address: 192.168.39.235
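Annotation: the repeated "will retry after ..." lines above are a wait loop with growing delays until the guest obtains a DHCP lease. A minimal shell equivalent (the delays and the virsh-based lookup are assumptions, not what the driver literally runs):

    for delay in 1 2 4 8 16; do
      ip=$(virsh --connect qemu:///system domifaddr ha-234651-m02 | awk '/ipv4/ {print $4}')
      [ -n "$ip" ] && break        # lease found, stop waiting
      sleep "$delay"               # back off before the next attempt
    done
    echo "machine reachable at ${ip%/*}"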
	I0731 17:00:17.129026   26392 main.go:141] libmachine: (ha-234651-m02) Waiting for SSH to be available...
	I0731 17:00:17.129036   26392 main.go:141] libmachine: (ha-234651-m02) DBG | Getting to WaitForSSH function...
	I0731 17:00:17.132214   26392 main.go:141] libmachine: (ha-234651-m02) DBG | domain ha-234651-m02 has defined MAC address 52:54:00:4c:97:0e in network mk-ha-234651
	I0731 17:00:17.132647   26392 main.go:141] libmachine: (ha-234651-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:97:0e", ip: ""} in network mk-ha-234651: {Iface:virbr1 ExpiryTime:2024-07-31 18:00:10 +0000 UTC Type:0 Mac:52:54:00:4c:97:0e Iaid: IPaddr:192.168.39.235 Prefix:24 Hostname:minikube Clientid:01:52:54:00:4c:97:0e}
	I0731 17:00:17.132673   26392 main.go:141] libmachine: (ha-234651-m02) DBG | domain ha-234651-m02 has defined IP address 192.168.39.235 and MAC address 52:54:00:4c:97:0e in network mk-ha-234651
	I0731 17:00:17.132805   26392 main.go:141] libmachine: (ha-234651-m02) DBG | Using SSH client type: external
	I0731 17:00:17.132831   26392 main.go:141] libmachine: (ha-234651-m02) DBG | Using SSH private key: /home/jenkins/minikube-integration/19349-8084/.minikube/machines/ha-234651-m02/id_rsa (-rw-------)
	I0731 17:00:17.132863   26392 main.go:141] libmachine: (ha-234651-m02) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.235 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19349-8084/.minikube/machines/ha-234651-m02/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0731 17:00:17.132876   26392 main.go:141] libmachine: (ha-234651-m02) DBG | About to run SSH command:
	I0731 17:00:17.132888   26392 main.go:141] libmachine: (ha-234651-m02) DBG | exit 0
	I0731 17:00:17.255197   26392 main.go:141] libmachine: (ha-234651-m02) DBG | SSH cmd err, output: <nil>: 
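Annotation: WaitForSSH is a plain reachability probe; it runs "exit 0" over SSH with host-key checking disabled, using the flags shown in the log line above. Reproduced by hand it would be roughly:

    ssh -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null \
        -o ConnectTimeout=10 -o ConnectionAttempts=3 -o PasswordAuthentication=no \
        -i /home/jenkins/minikube-integration/19349-8084/.minikube/machines/ha-234651-m02/id_rsa \
        -p 22 docker@192.168.39.235 'exit 0' && echo "SSH is up"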
	I0731 17:00:17.255468   26392 main.go:141] libmachine: (ha-234651-m02) KVM machine creation complete!
	I0731 17:00:17.255825   26392 main.go:141] libmachine: (ha-234651-m02) Calling .GetConfigRaw
	I0731 17:00:17.256384   26392 main.go:141] libmachine: (ha-234651-m02) Calling .DriverName
	I0731 17:00:17.256582   26392 main.go:141] libmachine: (ha-234651-m02) Calling .DriverName
	I0731 17:00:17.256740   26392 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0731 17:00:17.256753   26392 main.go:141] libmachine: (ha-234651-m02) Calling .GetState
	I0731 17:00:17.258006   26392 main.go:141] libmachine: Detecting operating system of created instance...
	I0731 17:00:17.258026   26392 main.go:141] libmachine: Waiting for SSH to be available...
	I0731 17:00:17.258042   26392 main.go:141] libmachine: Getting to WaitForSSH function...
	I0731 17:00:17.258056   26392 main.go:141] libmachine: (ha-234651-m02) Calling .GetSSHHostname
	I0731 17:00:17.260254   26392 main.go:141] libmachine: (ha-234651-m02) DBG | domain ha-234651-m02 has defined MAC address 52:54:00:4c:97:0e in network mk-ha-234651
	I0731 17:00:17.260688   26392 main.go:141] libmachine: (ha-234651-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:97:0e", ip: ""} in network mk-ha-234651: {Iface:virbr1 ExpiryTime:2024-07-31 18:00:10 +0000 UTC Type:0 Mac:52:54:00:4c:97:0e Iaid: IPaddr:192.168.39.235 Prefix:24 Hostname:ha-234651-m02 Clientid:01:52:54:00:4c:97:0e}
	I0731 17:00:17.260716   26392 main.go:141] libmachine: (ha-234651-m02) DBG | domain ha-234651-m02 has defined IP address 192.168.39.235 and MAC address 52:54:00:4c:97:0e in network mk-ha-234651
	I0731 17:00:17.260821   26392 main.go:141] libmachine: (ha-234651-m02) Calling .GetSSHPort
	I0731 17:00:17.261006   26392 main.go:141] libmachine: (ha-234651-m02) Calling .GetSSHKeyPath
	I0731 17:00:17.261159   26392 main.go:141] libmachine: (ha-234651-m02) Calling .GetSSHKeyPath
	I0731 17:00:17.261312   26392 main.go:141] libmachine: (ha-234651-m02) Calling .GetSSHUsername
	I0731 17:00:17.261500   26392 main.go:141] libmachine: Using SSH client type: native
	I0731 17:00:17.261716   26392 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.235 22 <nil> <nil>}
	I0731 17:00:17.261731   26392 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0731 17:00:17.358219   26392 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0731 17:00:17.358239   26392 main.go:141] libmachine: Detecting the provisioner...
	I0731 17:00:17.358246   26392 main.go:141] libmachine: (ha-234651-m02) Calling .GetSSHHostname
	I0731 17:00:17.361134   26392 main.go:141] libmachine: (ha-234651-m02) DBG | domain ha-234651-m02 has defined MAC address 52:54:00:4c:97:0e in network mk-ha-234651
	I0731 17:00:17.361437   26392 main.go:141] libmachine: (ha-234651-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:97:0e", ip: ""} in network mk-ha-234651: {Iface:virbr1 ExpiryTime:2024-07-31 18:00:10 +0000 UTC Type:0 Mac:52:54:00:4c:97:0e Iaid: IPaddr:192.168.39.235 Prefix:24 Hostname:ha-234651-m02 Clientid:01:52:54:00:4c:97:0e}
	I0731 17:00:17.361455   26392 main.go:141] libmachine: (ha-234651-m02) DBG | domain ha-234651-m02 has defined IP address 192.168.39.235 and MAC address 52:54:00:4c:97:0e in network mk-ha-234651
	I0731 17:00:17.361603   26392 main.go:141] libmachine: (ha-234651-m02) Calling .GetSSHPort
	I0731 17:00:17.361821   26392 main.go:141] libmachine: (ha-234651-m02) Calling .GetSSHKeyPath
	I0731 17:00:17.362006   26392 main.go:141] libmachine: (ha-234651-m02) Calling .GetSSHKeyPath
	I0731 17:00:17.362168   26392 main.go:141] libmachine: (ha-234651-m02) Calling .GetSSHUsername
	I0731 17:00:17.362348   26392 main.go:141] libmachine: Using SSH client type: native
	I0731 17:00:17.362511   26392 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.235 22 <nil> <nil>}
	I0731 17:00:17.362522   26392 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0731 17:00:17.463637   26392 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0731 17:00:17.463701   26392 main.go:141] libmachine: found compatible host: buildroot
	I0731 17:00:17.463709   26392 main.go:141] libmachine: Provisioning with buildroot...
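Annotation: provisioner detection amounts to reading /etc/os-release on the guest and matching the distribution name, which is why the Buildroot output above selects the buildroot provisioner. A minimal sketch of that check:

    . /etc/os-release                  # defines NAME, VERSION_ID, PRETTY_NAME, ...
    case "$NAME" in
      Buildroot) echo "buildroot provisioner selected" ;;
      *)         echo "no provisioner match for $NAME" ;;
    esac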
	I0731 17:00:17.463717   26392 main.go:141] libmachine: (ha-234651-m02) Calling .GetMachineName
	I0731 17:00:17.463967   26392 buildroot.go:166] provisioning hostname "ha-234651-m02"
	I0731 17:00:17.463989   26392 main.go:141] libmachine: (ha-234651-m02) Calling .GetMachineName
	I0731 17:00:17.464189   26392 main.go:141] libmachine: (ha-234651-m02) Calling .GetSSHHostname
	I0731 17:00:17.466804   26392 main.go:141] libmachine: (ha-234651-m02) DBG | domain ha-234651-m02 has defined MAC address 52:54:00:4c:97:0e in network mk-ha-234651
	I0731 17:00:17.467152   26392 main.go:141] libmachine: (ha-234651-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:97:0e", ip: ""} in network mk-ha-234651: {Iface:virbr1 ExpiryTime:2024-07-31 18:00:10 +0000 UTC Type:0 Mac:52:54:00:4c:97:0e Iaid: IPaddr:192.168.39.235 Prefix:24 Hostname:ha-234651-m02 Clientid:01:52:54:00:4c:97:0e}
	I0731 17:00:17.467182   26392 main.go:141] libmachine: (ha-234651-m02) DBG | domain ha-234651-m02 has defined IP address 192.168.39.235 and MAC address 52:54:00:4c:97:0e in network mk-ha-234651
	I0731 17:00:17.467328   26392 main.go:141] libmachine: (ha-234651-m02) Calling .GetSSHPort
	I0731 17:00:17.467486   26392 main.go:141] libmachine: (ha-234651-m02) Calling .GetSSHKeyPath
	I0731 17:00:17.467713   26392 main.go:141] libmachine: (ha-234651-m02) Calling .GetSSHKeyPath
	I0731 17:00:17.467853   26392 main.go:141] libmachine: (ha-234651-m02) Calling .GetSSHUsername
	I0731 17:00:17.468031   26392 main.go:141] libmachine: Using SSH client type: native
	I0731 17:00:17.468201   26392 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.235 22 <nil> <nil>}
	I0731 17:00:17.468214   26392 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-234651-m02 && echo "ha-234651-m02" | sudo tee /etc/hostname
	I0731 17:00:17.580654   26392 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-234651-m02
	
	I0731 17:00:17.580682   26392 main.go:141] libmachine: (ha-234651-m02) Calling .GetSSHHostname
	I0731 17:00:17.583497   26392 main.go:141] libmachine: (ha-234651-m02) DBG | domain ha-234651-m02 has defined MAC address 52:54:00:4c:97:0e in network mk-ha-234651
	I0731 17:00:17.583988   26392 main.go:141] libmachine: (ha-234651-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:97:0e", ip: ""} in network mk-ha-234651: {Iface:virbr1 ExpiryTime:2024-07-31 18:00:10 +0000 UTC Type:0 Mac:52:54:00:4c:97:0e Iaid: IPaddr:192.168.39.235 Prefix:24 Hostname:ha-234651-m02 Clientid:01:52:54:00:4c:97:0e}
	I0731 17:00:17.584018   26392 main.go:141] libmachine: (ha-234651-m02) DBG | domain ha-234651-m02 has defined IP address 192.168.39.235 and MAC address 52:54:00:4c:97:0e in network mk-ha-234651
	I0731 17:00:17.584218   26392 main.go:141] libmachine: (ha-234651-m02) Calling .GetSSHPort
	I0731 17:00:17.584432   26392 main.go:141] libmachine: (ha-234651-m02) Calling .GetSSHKeyPath
	I0731 17:00:17.584605   26392 main.go:141] libmachine: (ha-234651-m02) Calling .GetSSHKeyPath
	I0731 17:00:17.584748   26392 main.go:141] libmachine: (ha-234651-m02) Calling .GetSSHUsername
	I0731 17:00:17.584976   26392 main.go:141] libmachine: Using SSH client type: native
	I0731 17:00:17.585148   26392 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.235 22 <nil> <nil>}
	I0731 17:00:17.585170   26392 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-234651-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-234651-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-234651-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0731 17:00:17.690947   26392 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0731 17:00:17.690971   26392 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19349-8084/.minikube CaCertPath:/home/jenkins/minikube-integration/19349-8084/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19349-8084/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19349-8084/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19349-8084/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19349-8084/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19349-8084/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19349-8084/.minikube}
	I0731 17:00:17.690984   26392 buildroot.go:174] setting up certificates
	I0731 17:00:17.690993   26392 provision.go:84] configureAuth start
	I0731 17:00:17.691001   26392 main.go:141] libmachine: (ha-234651-m02) Calling .GetMachineName
	I0731 17:00:17.691317   26392 main.go:141] libmachine: (ha-234651-m02) Calling .GetIP
	I0731 17:00:17.694019   26392 main.go:141] libmachine: (ha-234651-m02) DBG | domain ha-234651-m02 has defined MAC address 52:54:00:4c:97:0e in network mk-ha-234651
	I0731 17:00:17.694376   26392 main.go:141] libmachine: (ha-234651-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:97:0e", ip: ""} in network mk-ha-234651: {Iface:virbr1 ExpiryTime:2024-07-31 18:00:10 +0000 UTC Type:0 Mac:52:54:00:4c:97:0e Iaid: IPaddr:192.168.39.235 Prefix:24 Hostname:ha-234651-m02 Clientid:01:52:54:00:4c:97:0e}
	I0731 17:00:17.694397   26392 main.go:141] libmachine: (ha-234651-m02) DBG | domain ha-234651-m02 has defined IP address 192.168.39.235 and MAC address 52:54:00:4c:97:0e in network mk-ha-234651
	I0731 17:00:17.694595   26392 main.go:141] libmachine: (ha-234651-m02) Calling .GetSSHHostname
	I0731 17:00:17.696590   26392 main.go:141] libmachine: (ha-234651-m02) DBG | domain ha-234651-m02 has defined MAC address 52:54:00:4c:97:0e in network mk-ha-234651
	I0731 17:00:17.696852   26392 main.go:141] libmachine: (ha-234651-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:97:0e", ip: ""} in network mk-ha-234651: {Iface:virbr1 ExpiryTime:2024-07-31 18:00:10 +0000 UTC Type:0 Mac:52:54:00:4c:97:0e Iaid: IPaddr:192.168.39.235 Prefix:24 Hostname:ha-234651-m02 Clientid:01:52:54:00:4c:97:0e}
	I0731 17:00:17.696871   26392 main.go:141] libmachine: (ha-234651-m02) DBG | domain ha-234651-m02 has defined IP address 192.168.39.235 and MAC address 52:54:00:4c:97:0e in network mk-ha-234651
	I0731 17:00:17.697034   26392 provision.go:143] copyHostCerts
	I0731 17:00:17.697067   26392 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19349-8084/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19349-8084/.minikube/ca.pem
	I0731 17:00:17.697098   26392 exec_runner.go:144] found /home/jenkins/minikube-integration/19349-8084/.minikube/ca.pem, removing ...
	I0731 17:00:17.697107   26392 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19349-8084/.minikube/ca.pem
	I0731 17:00:17.697161   26392 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19349-8084/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19349-8084/.minikube/ca.pem (1082 bytes)
	I0731 17:00:17.697230   26392 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19349-8084/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19349-8084/.minikube/cert.pem
	I0731 17:00:17.697254   26392 exec_runner.go:144] found /home/jenkins/minikube-integration/19349-8084/.minikube/cert.pem, removing ...
	I0731 17:00:17.697265   26392 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19349-8084/.minikube/cert.pem
	I0731 17:00:17.697305   26392 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19349-8084/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19349-8084/.minikube/cert.pem (1123 bytes)
	I0731 17:00:17.697375   26392 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19349-8084/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19349-8084/.minikube/key.pem
	I0731 17:00:17.697398   26392 exec_runner.go:144] found /home/jenkins/minikube-integration/19349-8084/.minikube/key.pem, removing ...
	I0731 17:00:17.697407   26392 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19349-8084/.minikube/key.pem
	I0731 17:00:17.697441   26392 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19349-8084/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19349-8084/.minikube/key.pem (1675 bytes)
	I0731 17:00:17.697516   26392 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19349-8084/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19349-8084/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19349-8084/.minikube/certs/ca-key.pem org=jenkins.ha-234651-m02 san=[127.0.0.1 192.168.39.235 ha-234651-m02 localhost minikube]
	I0731 17:00:17.908626   26392 provision.go:177] copyRemoteCerts
	I0731 17:00:17.908682   26392 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0731 17:00:17.908703   26392 main.go:141] libmachine: (ha-234651-m02) Calling .GetSSHHostname
	I0731 17:00:17.911371   26392 main.go:141] libmachine: (ha-234651-m02) DBG | domain ha-234651-m02 has defined MAC address 52:54:00:4c:97:0e in network mk-ha-234651
	I0731 17:00:17.911692   26392 main.go:141] libmachine: (ha-234651-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:97:0e", ip: ""} in network mk-ha-234651: {Iface:virbr1 ExpiryTime:2024-07-31 18:00:10 +0000 UTC Type:0 Mac:52:54:00:4c:97:0e Iaid: IPaddr:192.168.39.235 Prefix:24 Hostname:ha-234651-m02 Clientid:01:52:54:00:4c:97:0e}
	I0731 17:00:17.911722   26392 main.go:141] libmachine: (ha-234651-m02) DBG | domain ha-234651-m02 has defined IP address 192.168.39.235 and MAC address 52:54:00:4c:97:0e in network mk-ha-234651
	I0731 17:00:17.911904   26392 main.go:141] libmachine: (ha-234651-m02) Calling .GetSSHPort
	I0731 17:00:17.912099   26392 main.go:141] libmachine: (ha-234651-m02) Calling .GetSSHKeyPath
	I0731 17:00:17.912274   26392 main.go:141] libmachine: (ha-234651-m02) Calling .GetSSHUsername
	I0731 17:00:17.912401   26392 sshutil.go:53] new ssh client: &{IP:192.168.39.235 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19349-8084/.minikube/machines/ha-234651-m02/id_rsa Username:docker}
	I0731 17:00:17.992622   26392 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19349-8084/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0731 17:00:17.992722   26392 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19349-8084/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0731 17:00:18.015551   26392 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19349-8084/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0731 17:00:18.015631   26392 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19349-8084/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0731 17:00:18.041263   26392 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19349-8084/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0731 17:00:18.041332   26392 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19349-8084/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0731 17:00:18.066724   26392 provision.go:87] duration metric: took 375.720136ms to configureAuth
	I0731 17:00:18.066748   26392 buildroot.go:189] setting minikube options for container-runtime
	I0731 17:00:18.066908   26392 config.go:182] Loaded profile config "ha-234651": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0731 17:00:18.066973   26392 main.go:141] libmachine: (ha-234651-m02) Calling .GetSSHHostname
	I0731 17:00:18.069440   26392 main.go:141] libmachine: (ha-234651-m02) DBG | domain ha-234651-m02 has defined MAC address 52:54:00:4c:97:0e in network mk-ha-234651
	I0731 17:00:18.069773   26392 main.go:141] libmachine: (ha-234651-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:97:0e", ip: ""} in network mk-ha-234651: {Iface:virbr1 ExpiryTime:2024-07-31 18:00:10 +0000 UTC Type:0 Mac:52:54:00:4c:97:0e Iaid: IPaddr:192.168.39.235 Prefix:24 Hostname:ha-234651-m02 Clientid:01:52:54:00:4c:97:0e}
	I0731 17:00:18.069802   26392 main.go:141] libmachine: (ha-234651-m02) DBG | domain ha-234651-m02 has defined IP address 192.168.39.235 and MAC address 52:54:00:4c:97:0e in network mk-ha-234651
	I0731 17:00:18.069932   26392 main.go:141] libmachine: (ha-234651-m02) Calling .GetSSHPort
	I0731 17:00:18.070099   26392 main.go:141] libmachine: (ha-234651-m02) Calling .GetSSHKeyPath
	I0731 17:00:18.070230   26392 main.go:141] libmachine: (ha-234651-m02) Calling .GetSSHKeyPath
	I0731 17:00:18.070368   26392 main.go:141] libmachine: (ha-234651-m02) Calling .GetSSHUsername
	I0731 17:00:18.070543   26392 main.go:141] libmachine: Using SSH client type: native
	I0731 17:00:18.070697   26392 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.235 22 <nil> <nil>}
	I0731 17:00:18.070712   26392 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0731 17:00:18.347128   26392 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0731 17:00:18.347153   26392 main.go:141] libmachine: Checking connection to Docker...
	I0731 17:00:18.347164   26392 main.go:141] libmachine: (ha-234651-m02) Calling .GetURL
	I0731 17:00:18.348381   26392 main.go:141] libmachine: (ha-234651-m02) DBG | Using libvirt version 6000000
	I0731 17:00:18.350311   26392 main.go:141] libmachine: (ha-234651-m02) DBG | domain ha-234651-m02 has defined MAC address 52:54:00:4c:97:0e in network mk-ha-234651
	I0731 17:00:18.350648   26392 main.go:141] libmachine: (ha-234651-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:97:0e", ip: ""} in network mk-ha-234651: {Iface:virbr1 ExpiryTime:2024-07-31 18:00:10 +0000 UTC Type:0 Mac:52:54:00:4c:97:0e Iaid: IPaddr:192.168.39.235 Prefix:24 Hostname:ha-234651-m02 Clientid:01:52:54:00:4c:97:0e}
	I0731 17:00:18.350668   26392 main.go:141] libmachine: (ha-234651-m02) DBG | domain ha-234651-m02 has defined IP address 192.168.39.235 and MAC address 52:54:00:4c:97:0e in network mk-ha-234651
	I0731 17:00:18.350842   26392 main.go:141] libmachine: Docker is up and running!
	I0731 17:00:18.350851   26392 main.go:141] libmachine: Reticulating splines...
	I0731 17:00:18.350859   26392 client.go:171] duration metric: took 21.272299468s to LocalClient.Create
	I0731 17:00:18.350883   26392 start.go:167] duration metric: took 21.272351031s to libmachine.API.Create "ha-234651"
	I0731 17:00:18.350895   26392 start.go:293] postStartSetup for "ha-234651-m02" (driver="kvm2")
	I0731 17:00:18.350910   26392 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0731 17:00:18.350931   26392 main.go:141] libmachine: (ha-234651-m02) Calling .DriverName
	I0731 17:00:18.351157   26392 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0731 17:00:18.351183   26392 main.go:141] libmachine: (ha-234651-m02) Calling .GetSSHHostname
	I0731 17:00:18.353164   26392 main.go:141] libmachine: (ha-234651-m02) DBG | domain ha-234651-m02 has defined MAC address 52:54:00:4c:97:0e in network mk-ha-234651
	I0731 17:00:18.353519   26392 main.go:141] libmachine: (ha-234651-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:97:0e", ip: ""} in network mk-ha-234651: {Iface:virbr1 ExpiryTime:2024-07-31 18:00:10 +0000 UTC Type:0 Mac:52:54:00:4c:97:0e Iaid: IPaddr:192.168.39.235 Prefix:24 Hostname:ha-234651-m02 Clientid:01:52:54:00:4c:97:0e}
	I0731 17:00:18.353538   26392 main.go:141] libmachine: (ha-234651-m02) DBG | domain ha-234651-m02 has defined IP address 192.168.39.235 and MAC address 52:54:00:4c:97:0e in network mk-ha-234651
	I0731 17:00:18.353702   26392 main.go:141] libmachine: (ha-234651-m02) Calling .GetSSHPort
	I0731 17:00:18.353871   26392 main.go:141] libmachine: (ha-234651-m02) Calling .GetSSHKeyPath
	I0731 17:00:18.354025   26392 main.go:141] libmachine: (ha-234651-m02) Calling .GetSSHUsername
	I0731 17:00:18.354128   26392 sshutil.go:53] new ssh client: &{IP:192.168.39.235 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19349-8084/.minikube/machines/ha-234651-m02/id_rsa Username:docker}
	I0731 17:00:18.434127   26392 ssh_runner.go:195] Run: cat /etc/os-release
	I0731 17:00:18.438634   26392 info.go:137] Remote host: Buildroot 2023.02.9
	I0731 17:00:18.438660   26392 filesync.go:126] Scanning /home/jenkins/minikube-integration/19349-8084/.minikube/addons for local assets ...
	I0731 17:00:18.438733   26392 filesync.go:126] Scanning /home/jenkins/minikube-integration/19349-8084/.minikube/files for local assets ...
	I0731 17:00:18.438812   26392 filesync.go:149] local asset: /home/jenkins/minikube-integration/19349-8084/.minikube/files/etc/ssl/certs/152592.pem -> 152592.pem in /etc/ssl/certs
	I0731 17:00:18.438822   26392 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19349-8084/.minikube/files/etc/ssl/certs/152592.pem -> /etc/ssl/certs/152592.pem
	I0731 17:00:18.438899   26392 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0731 17:00:18.448874   26392 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19349-8084/.minikube/files/etc/ssl/certs/152592.pem --> /etc/ssl/certs/152592.pem (1708 bytes)
	I0731 17:00:18.471819   26392 start.go:296] duration metric: took 120.909521ms for postStartSetup
	I0731 17:00:18.471862   26392 main.go:141] libmachine: (ha-234651-m02) Calling .GetConfigRaw
	I0731 17:00:18.472501   26392 main.go:141] libmachine: (ha-234651-m02) Calling .GetIP
	I0731 17:00:18.474939   26392 main.go:141] libmachine: (ha-234651-m02) DBG | domain ha-234651-m02 has defined MAC address 52:54:00:4c:97:0e in network mk-ha-234651
	I0731 17:00:18.475288   26392 main.go:141] libmachine: (ha-234651-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:97:0e", ip: ""} in network mk-ha-234651: {Iface:virbr1 ExpiryTime:2024-07-31 18:00:10 +0000 UTC Type:0 Mac:52:54:00:4c:97:0e Iaid: IPaddr:192.168.39.235 Prefix:24 Hostname:ha-234651-m02 Clientid:01:52:54:00:4c:97:0e}
	I0731 17:00:18.475336   26392 main.go:141] libmachine: (ha-234651-m02) DBG | domain ha-234651-m02 has defined IP address 192.168.39.235 and MAC address 52:54:00:4c:97:0e in network mk-ha-234651
	I0731 17:00:18.475514   26392 profile.go:143] Saving config to /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/ha-234651/config.json ...
	I0731 17:00:18.475705   26392 start.go:128] duration metric: took 21.415075174s to createHost
	I0731 17:00:18.475728   26392 main.go:141] libmachine: (ha-234651-m02) Calling .GetSSHHostname
	I0731 17:00:18.477838   26392 main.go:141] libmachine: (ha-234651-m02) DBG | domain ha-234651-m02 has defined MAC address 52:54:00:4c:97:0e in network mk-ha-234651
	I0731 17:00:18.478170   26392 main.go:141] libmachine: (ha-234651-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:97:0e", ip: ""} in network mk-ha-234651: {Iface:virbr1 ExpiryTime:2024-07-31 18:00:10 +0000 UTC Type:0 Mac:52:54:00:4c:97:0e Iaid: IPaddr:192.168.39.235 Prefix:24 Hostname:ha-234651-m02 Clientid:01:52:54:00:4c:97:0e}
	I0731 17:00:18.478198   26392 main.go:141] libmachine: (ha-234651-m02) DBG | domain ha-234651-m02 has defined IP address 192.168.39.235 and MAC address 52:54:00:4c:97:0e in network mk-ha-234651
	I0731 17:00:18.478304   26392 main.go:141] libmachine: (ha-234651-m02) Calling .GetSSHPort
	I0731 17:00:18.478481   26392 main.go:141] libmachine: (ha-234651-m02) Calling .GetSSHKeyPath
	I0731 17:00:18.478664   26392 main.go:141] libmachine: (ha-234651-m02) Calling .GetSSHKeyPath
	I0731 17:00:18.478817   26392 main.go:141] libmachine: (ha-234651-m02) Calling .GetSSHUsername
	I0731 17:00:18.478972   26392 main.go:141] libmachine: Using SSH client type: native
	I0731 17:00:18.479176   26392 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.235 22 <nil> <nil>}
	I0731 17:00:18.479192   26392 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0731 17:00:18.579698   26392 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722445218.541361703
	
	I0731 17:00:18.579716   26392 fix.go:216] guest clock: 1722445218.541361703
	I0731 17:00:18.579725   26392 fix.go:229] Guest: 2024-07-31 17:00:18.541361703 +0000 UTC Remote: 2024-07-31 17:00:18.475717804 +0000 UTC m=+76.421952730 (delta=65.643899ms)
	I0731 17:00:18.579748   26392 fix.go:200] guest clock delta is within tolerance: 65.643899ms
	I0731 17:00:18.579754   26392 start.go:83] releasing machines lock for "ha-234651-m02", held for 21.519229906s
	I0731 17:00:18.579782   26392 main.go:141] libmachine: (ha-234651-m02) Calling .DriverName
	I0731 17:00:18.580031   26392 main.go:141] libmachine: (ha-234651-m02) Calling .GetIP
	I0731 17:00:18.582506   26392 main.go:141] libmachine: (ha-234651-m02) DBG | domain ha-234651-m02 has defined MAC address 52:54:00:4c:97:0e in network mk-ha-234651
	I0731 17:00:18.582885   26392 main.go:141] libmachine: (ha-234651-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:97:0e", ip: ""} in network mk-ha-234651: {Iface:virbr1 ExpiryTime:2024-07-31 18:00:10 +0000 UTC Type:0 Mac:52:54:00:4c:97:0e Iaid: IPaddr:192.168.39.235 Prefix:24 Hostname:ha-234651-m02 Clientid:01:52:54:00:4c:97:0e}
	I0731 17:00:18.582906   26392 main.go:141] libmachine: (ha-234651-m02) DBG | domain ha-234651-m02 has defined IP address 192.168.39.235 and MAC address 52:54:00:4c:97:0e in network mk-ha-234651
	I0731 17:00:18.585329   26392 out.go:177] * Found network options:
	I0731 17:00:18.586805   26392 out.go:177]   - NO_PROXY=192.168.39.243
	W0731 17:00:18.588117   26392 proxy.go:119] fail to check proxy env: Error ip not in block
	I0731 17:00:18.588147   26392 main.go:141] libmachine: (ha-234651-m02) Calling .DriverName
	I0731 17:00:18.588782   26392 main.go:141] libmachine: (ha-234651-m02) Calling .DriverName
	I0731 17:00:18.588975   26392 main.go:141] libmachine: (ha-234651-m02) Calling .DriverName
	I0731 17:00:18.589060   26392 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0731 17:00:18.589098   26392 main.go:141] libmachine: (ha-234651-m02) Calling .GetSSHHostname
	W0731 17:00:18.589138   26392 proxy.go:119] fail to check proxy env: Error ip not in block
	I0731 17:00:18.589203   26392 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0731 17:00:18.589224   26392 main.go:141] libmachine: (ha-234651-m02) Calling .GetSSHHostname
	I0731 17:00:18.591692   26392 main.go:141] libmachine: (ha-234651-m02) DBG | domain ha-234651-m02 has defined MAC address 52:54:00:4c:97:0e in network mk-ha-234651
	I0731 17:00:18.592051   26392 main.go:141] libmachine: (ha-234651-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:97:0e", ip: ""} in network mk-ha-234651: {Iface:virbr1 ExpiryTime:2024-07-31 18:00:10 +0000 UTC Type:0 Mac:52:54:00:4c:97:0e Iaid: IPaddr:192.168.39.235 Prefix:24 Hostname:ha-234651-m02 Clientid:01:52:54:00:4c:97:0e}
	I0731 17:00:18.592077   26392 main.go:141] libmachine: (ha-234651-m02) DBG | domain ha-234651-m02 has defined MAC address 52:54:00:4c:97:0e in network mk-ha-234651
	I0731 17:00:18.592095   26392 main.go:141] libmachine: (ha-234651-m02) DBG | domain ha-234651-m02 has defined IP address 192.168.39.235 and MAC address 52:54:00:4c:97:0e in network mk-ha-234651
	I0731 17:00:18.592208   26392 main.go:141] libmachine: (ha-234651-m02) Calling .GetSSHPort
	I0731 17:00:18.592411   26392 main.go:141] libmachine: (ha-234651-m02) Calling .GetSSHKeyPath
	I0731 17:00:18.592495   26392 main.go:141] libmachine: (ha-234651-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:97:0e", ip: ""} in network mk-ha-234651: {Iface:virbr1 ExpiryTime:2024-07-31 18:00:10 +0000 UTC Type:0 Mac:52:54:00:4c:97:0e Iaid: IPaddr:192.168.39.235 Prefix:24 Hostname:ha-234651-m02 Clientid:01:52:54:00:4c:97:0e}
	I0731 17:00:18.592518   26392 main.go:141] libmachine: (ha-234651-m02) DBG | domain ha-234651-m02 has defined IP address 192.168.39.235 and MAC address 52:54:00:4c:97:0e in network mk-ha-234651
	I0731 17:00:18.592561   26392 main.go:141] libmachine: (ha-234651-m02) Calling .GetSSHUsername
	I0731 17:00:18.592711   26392 main.go:141] libmachine: (ha-234651-m02) Calling .GetSSHPort
	I0731 17:00:18.592715   26392 sshutil.go:53] new ssh client: &{IP:192.168.39.235 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19349-8084/.minikube/machines/ha-234651-m02/id_rsa Username:docker}
	I0731 17:00:18.592883   26392 main.go:141] libmachine: (ha-234651-m02) Calling .GetSSHKeyPath
	I0731 17:00:18.593025   26392 main.go:141] libmachine: (ha-234651-m02) Calling .GetSSHUsername
	I0731 17:00:18.593173   26392 sshutil.go:53] new ssh client: &{IP:192.168.39.235 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19349-8084/.minikube/machines/ha-234651-m02/id_rsa Username:docker}
	I0731 17:00:18.825075   26392 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0731 17:00:18.831022   26392 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0731 17:00:18.831083   26392 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0731 17:00:18.846216   26392 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
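Annotation: the find invocation above has its format verb mangled by the logger (%!p(MISSING)); its intent is to rename any bridge/podman CNI configs so they are ignored in favor of the cluster's own CNI. A hand-written equivalent (a sketch, not the literal command that was run):

    sudo find /etc/cni/net.d -maxdepth 1 -type f \
      \( \( -name '*bridge*' -o -name '*podman*' \) -a ! -name '*.mk_disabled' \) \
      -exec sh -c 'mv "$1" "$1.mk_disabled"' _ {} \;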
	I0731 17:00:18.846241   26392 start.go:495] detecting cgroup driver to use...
	I0731 17:00:18.846293   26392 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0731 17:00:18.863094   26392 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0731 17:00:18.876496   26392 docker.go:217] disabling cri-docker service (if available) ...
	I0731 17:00:18.876553   26392 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0731 17:00:18.889713   26392 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0731 17:00:18.903349   26392 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0731 17:00:19.025223   26392 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0731 17:00:19.158638   26392 docker.go:233] disabling docker service ...
	I0731 17:00:19.158697   26392 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0731 17:00:19.172351   26392 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0731 17:00:19.184900   26392 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0731 17:00:19.315583   26392 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0731 17:00:19.429374   26392 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0731 17:00:19.442911   26392 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0731 17:00:19.461034   26392 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0731 17:00:19.461092   26392 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 17:00:19.470896   26392 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0731 17:00:19.470949   26392 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 17:00:19.481866   26392 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 17:00:19.491687   26392 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 17:00:19.501624   26392 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0731 17:00:19.511588   26392 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 17:00:19.521261   26392 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 17:00:19.537335   26392 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
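Annotation: after the sed edits above, the CRI-O drop-in ends up with roughly the following keys (section placement assumed from the standard crio.conf layout; only the edited settings are shown):

    [crio.image]
    pause_image = "registry.k8s.io/pause:3.9"

    [crio.runtime]
    cgroup_manager = "cgroupfs"
    conmon_cgroup = "pod"
    default_sysctls = [
      "net.ipv4.ip_unprivileged_port_start=0",
    ]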
	I0731 17:00:19.547447   26392 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0731 17:00:19.556407   26392 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0731 17:00:19.556455   26392 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0731 17:00:19.569736   26392 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
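Annotation: the two steps above (modprobe br_netfilter after the failed sysctl read, then enabling IP forwarding) only change the running guest. A persistent equivalent under systemd conventions would be, for example:

    echo br_netfilter | sudo tee /etc/modules-load.d/br_netfilter.conf
    printf 'net.bridge.bridge-nf-call-iptables = 1\nnet.ipv4.ip_forward = 1\n' \
      | sudo tee /etc/sysctl.d/99-kubernetes.conf
    sudo sysctl --system    # re-apply all sysctl drop-ins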
	I0731 17:00:19.579076   26392 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0731 17:00:19.698202   26392 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0731 17:00:19.843319   26392 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0731 17:00:19.843394   26392 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0731 17:00:19.848277   26392 start.go:563] Will wait 60s for crictl version
	I0731 17:00:19.848331   26392 ssh_runner.go:195] Run: which crictl
	I0731 17:00:19.851961   26392 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0731 17:00:19.894272   26392 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0731 17:00:19.894339   26392 ssh_runner.go:195] Run: crio --version
	I0731 17:00:19.920497   26392 ssh_runner.go:195] Run: crio --version
	I0731 17:00:19.949030   26392 out.go:177] * Preparing Kubernetes v1.30.3 on CRI-O 1.29.1 ...
	I0731 17:00:19.950456   26392 out.go:177]   - env NO_PROXY=192.168.39.243
	I0731 17:00:19.951537   26392 main.go:141] libmachine: (ha-234651-m02) Calling .GetIP
	I0731 17:00:19.953921   26392 main.go:141] libmachine: (ha-234651-m02) DBG | domain ha-234651-m02 has defined MAC address 52:54:00:4c:97:0e in network mk-ha-234651
	I0731 17:00:19.954230   26392 main.go:141] libmachine: (ha-234651-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:97:0e", ip: ""} in network mk-ha-234651: {Iface:virbr1 ExpiryTime:2024-07-31 18:00:10 +0000 UTC Type:0 Mac:52:54:00:4c:97:0e Iaid: IPaddr:192.168.39.235 Prefix:24 Hostname:ha-234651-m02 Clientid:01:52:54:00:4c:97:0e}
	I0731 17:00:19.954255   26392 main.go:141] libmachine: (ha-234651-m02) DBG | domain ha-234651-m02 has defined IP address 192.168.39.235 and MAC address 52:54:00:4c:97:0e in network mk-ha-234651
	I0731 17:00:19.954452   26392 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0731 17:00:19.958187   26392 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0731 17:00:19.969356   26392 mustload.go:65] Loading cluster: ha-234651
	I0731 17:00:19.969576   26392 config.go:182] Loaded profile config "ha-234651": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0731 17:00:19.969827   26392 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 17:00:19.969853   26392 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 17:00:19.984671   26392 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34657
	I0731 17:00:19.985040   26392 main.go:141] libmachine: () Calling .GetVersion
	I0731 17:00:19.985467   26392 main.go:141] libmachine: Using API Version  1
	I0731 17:00:19.985491   26392 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 17:00:19.985830   26392 main.go:141] libmachine: () Calling .GetMachineName
	I0731 17:00:19.986020   26392 main.go:141] libmachine: (ha-234651) Calling .GetState
	I0731 17:00:19.987572   26392 host.go:66] Checking if "ha-234651" exists ...
	I0731 17:00:19.987863   26392 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 17:00:19.987885   26392 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 17:00:20.001991   26392 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44525
	I0731 17:00:20.002330   26392 main.go:141] libmachine: () Calling .GetVersion
	I0731 17:00:20.002823   26392 main.go:141] libmachine: Using API Version  1
	I0731 17:00:20.002845   26392 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 17:00:20.003177   26392 main.go:141] libmachine: () Calling .GetMachineName
	I0731 17:00:20.003379   26392 main.go:141] libmachine: (ha-234651) Calling .DriverName
	I0731 17:00:20.003557   26392 certs.go:68] Setting up /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/ha-234651 for IP: 192.168.39.235
	I0731 17:00:20.003566   26392 certs.go:194] generating shared ca certs ...
	I0731 17:00:20.003584   26392 certs.go:226] acquiring lock for ca certs: {Name:mk49eed439085f9c30746706460ce213570d1997 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 17:00:20.003728   26392 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19349-8084/.minikube/ca.key
	I0731 17:00:20.003781   26392 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19349-8084/.minikube/proxy-client-ca.key
	I0731 17:00:20.003793   26392 certs.go:256] generating profile certs ...
	I0731 17:00:20.003884   26392 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/ha-234651/client.key
	I0731 17:00:20.003915   26392 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/ha-234651/apiserver.key.541fa027
	I0731 17:00:20.003935   26392 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/ha-234651/apiserver.crt.541fa027 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.243 192.168.39.235 192.168.39.254]
	I0731 17:00:20.231073   26392 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/ha-234651/apiserver.crt.541fa027 ...
	I0731 17:00:20.231099   26392 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/ha-234651/apiserver.crt.541fa027: {Name:mkd03ff98bd704ad38226e3ee0bb5356dbd65d25 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 17:00:20.231310   26392 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/ha-234651/apiserver.key.541fa027 ...
	I0731 17:00:20.231328   26392 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/ha-234651/apiserver.key.541fa027: {Name:mk7c4021e5a655d2f0b8e6095debb8ef91e562e8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 17:00:20.231428   26392 certs.go:381] copying /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/ha-234651/apiserver.crt.541fa027 -> /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/ha-234651/apiserver.crt
	I0731 17:00:20.231581   26392 certs.go:385] copying /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/ha-234651/apiserver.key.541fa027 -> /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/ha-234651/apiserver.key
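Annotation: the generated apiserver certificate carries the SAN list printed above (service VIP, loopback, both node IPs, and 192.168.39.254). Minikube does this in Go, but an equivalent certificate could be produced with openssl along these lines (file names are illustrative):

    openssl req -new -newkey rsa:2048 -nodes -subj "/CN=minikube" \
      -keyout apiserver.key -out apiserver.csr
    openssl x509 -req -in apiserver.csr -CA ca.crt -CAkey ca.key -CAcreateserial \
      -days 365 -out apiserver.crt \
      -extfile <(printf "subjectAltName=IP:10.96.0.1,IP:127.0.0.1,IP:10.0.0.1,IP:192.168.39.243,IP:192.168.39.235,IP:192.168.39.254")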
	I0731 17:00:20.231748   26392 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/ha-234651/proxy-client.key
	I0731 17:00:20.231768   26392 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19349-8084/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0731 17:00:20.231786   26392 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19349-8084/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0731 17:00:20.231803   26392 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19349-8084/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0731 17:00:20.231820   26392 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19349-8084/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0731 17:00:20.231837   26392 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/ha-234651/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0731 17:00:20.231852   26392 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/ha-234651/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0731 17:00:20.231869   26392 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/ha-234651/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0731 17:00:20.231897   26392 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/ha-234651/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0731 17:00:20.231965   26392 certs.go:484] found cert: /home/jenkins/minikube-integration/19349-8084/.minikube/certs/15259.pem (1338 bytes)
	W0731 17:00:20.232010   26392 certs.go:480] ignoring /home/jenkins/minikube-integration/19349-8084/.minikube/certs/15259_empty.pem, impossibly tiny 0 bytes
	I0731 17:00:20.232023   26392 certs.go:484] found cert: /home/jenkins/minikube-integration/19349-8084/.minikube/certs/ca-key.pem (1675 bytes)
	I0731 17:00:20.232053   26392 certs.go:484] found cert: /home/jenkins/minikube-integration/19349-8084/.minikube/certs/ca.pem (1082 bytes)
	I0731 17:00:20.232084   26392 certs.go:484] found cert: /home/jenkins/minikube-integration/19349-8084/.minikube/certs/cert.pem (1123 bytes)
	I0731 17:00:20.232115   26392 certs.go:484] found cert: /home/jenkins/minikube-integration/19349-8084/.minikube/certs/key.pem (1675 bytes)
	I0731 17:00:20.232169   26392 certs.go:484] found cert: /home/jenkins/minikube-integration/19349-8084/.minikube/files/etc/ssl/certs/152592.pem (1708 bytes)
	I0731 17:00:20.232209   26392 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19349-8084/.minikube/certs/15259.pem -> /usr/share/ca-certificates/15259.pem
	I0731 17:00:20.232229   26392 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19349-8084/.minikube/files/etc/ssl/certs/152592.pem -> /usr/share/ca-certificates/152592.pem
	I0731 17:00:20.232248   26392 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19349-8084/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0731 17:00:20.232285   26392 main.go:141] libmachine: (ha-234651) Calling .GetSSHHostname
	I0731 17:00:20.235380   26392 main.go:141] libmachine: (ha-234651) DBG | domain ha-234651 has defined MAC address 52:54:00:20:60:53 in network mk-ha-234651
	I0731 17:00:20.235767   26392 main.go:141] libmachine: (ha-234651) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:60:53", ip: ""} in network mk-ha-234651: {Iface:virbr1 ExpiryTime:2024-07-31 17:59:15 +0000 UTC Type:0 Mac:52:54:00:20:60:53 Iaid: IPaddr:192.168.39.243 Prefix:24 Hostname:ha-234651 Clientid:01:52:54:00:20:60:53}
	I0731 17:00:20.235794   26392 main.go:141] libmachine: (ha-234651) DBG | domain ha-234651 has defined IP address 192.168.39.243 and MAC address 52:54:00:20:60:53 in network mk-ha-234651
	I0731 17:00:20.235980   26392 main.go:141] libmachine: (ha-234651) Calling .GetSSHPort
	I0731 17:00:20.236191   26392 main.go:141] libmachine: (ha-234651) Calling .GetSSHKeyPath
	I0731 17:00:20.236362   26392 main.go:141] libmachine: (ha-234651) Calling .GetSSHUsername
	I0731 17:00:20.236521   26392 sshutil.go:53] new ssh client: &{IP:192.168.39.243 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19349-8084/.minikube/machines/ha-234651/id_rsa Username:docker}
	I0731 17:00:20.311575   26392 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/sa.pub
	I0731 17:00:20.316492   26392 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0731 17:00:20.327568   26392 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/sa.key
	I0731 17:00:20.331459   26392 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I0731 17:00:20.341242   26392 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/front-proxy-ca.crt
	I0731 17:00:20.344904   26392 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0731 17:00:20.354887   26392 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/front-proxy-ca.key
	I0731 17:00:20.358864   26392 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I0731 17:00:20.368198   26392 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/etcd/ca.crt
	I0731 17:00:20.371805   26392 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0731 17:00:20.381366   26392 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/etcd/ca.key
	I0731 17:00:20.385007   26392 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1679 bytes)
	I0731 17:00:20.395679   26392 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19349-8084/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0731 17:00:20.418874   26392 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19349-8084/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0731 17:00:20.441432   26392 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19349-8084/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0731 17:00:20.466283   26392 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19349-8084/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0731 17:00:20.490395   26392 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/ha-234651/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0731 17:00:20.516051   26392 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/ha-234651/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0731 17:00:20.541408   26392 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/ha-234651/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0731 17:00:20.564348   26392 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/ha-234651/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0731 17:00:20.586742   26392 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19349-8084/.minikube/certs/15259.pem --> /usr/share/ca-certificates/15259.pem (1338 bytes)
	I0731 17:00:20.612026   26392 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19349-8084/.minikube/files/etc/ssl/certs/152592.pem --> /usr/share/ca-certificates/152592.pem (1708 bytes)
	I0731 17:00:20.634527   26392 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19349-8084/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0731 17:00:20.657770   26392 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0731 17:00:20.673725   26392 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I0731 17:00:20.689738   26392 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0731 17:00:20.705362   26392 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I0731 17:00:20.720834   26392 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0731 17:00:20.737416   26392 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1679 bytes)
	I0731 17:00:20.753122   26392 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0731 17:00:20.768107   26392 ssh_runner.go:195] Run: openssl version
	I0731 17:00:20.773384   26392 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/152592.pem && ln -fs /usr/share/ca-certificates/152592.pem /etc/ssl/certs/152592.pem"
	I0731 17:00:20.783132   26392 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/152592.pem
	I0731 17:00:20.787095   26392 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 31 16:54 /usr/share/ca-certificates/152592.pem
	I0731 17:00:20.787173   26392 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/152592.pem
	I0731 17:00:20.792533   26392 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/152592.pem /etc/ssl/certs/3ec20f2e.0"
	I0731 17:00:20.802491   26392 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0731 17:00:20.812456   26392 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0731 17:00:20.816588   26392 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 31 16:42 /usr/share/ca-certificates/minikubeCA.pem
	I0731 17:00:20.816643   26392 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0731 17:00:20.822067   26392 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0731 17:00:20.831770   26392 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/15259.pem && ln -fs /usr/share/ca-certificates/15259.pem /etc/ssl/certs/15259.pem"
	I0731 17:00:20.842065   26392 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/15259.pem
	I0731 17:00:20.846099   26392 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 31 16:54 /usr/share/ca-certificates/15259.pem
	I0731 17:00:20.846146   26392 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/15259.pem
	I0731 17:00:20.851266   26392 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/15259.pem /etc/ssl/certs/51391683.0"
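The three ln -fs commands above install each CA into the system trust directory under its OpenSSL subject-hash name (51391683.0, b5213941.0, 3ec20f2e.0). A minimal sketch of that convention, assuming an arbitrary CA path (the variable names here are illustrative, not taken from this run):

    # The symlink name is the subject hash reported by openssl plus a ".0" suffix,
    # which is how OpenSSL-based clients locate trusted CAs in /etc/ssl/certs.
    CA=/usr/share/ca-certificates/minikubeCA.pem
    HASH=$(openssl x509 -hash -noout -in "$CA")
    sudo ln -fs "$CA" "/etc/ssl/certs/${HASH}.0"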
	I0731 17:00:20.860658   26392 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0731 17:00:20.864271   26392 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0731 17:00:20.864317   26392 kubeadm.go:934] updating node {m02 192.168.39.235 8443 v1.30.3 crio true true} ...
	I0731 17:00:20.864398   26392 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-234651-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.235
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:ha-234651 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0731 17:00:20.864421   26392 kube-vip.go:115] generating kube-vip config ...
	I0731 17:00:20.864448   26392 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0731 17:00:20.881697   26392 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0731 17:00:20.881758   26392 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name: lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
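The generated static-pod manifest runs kube-vip with ARP announcement of the control-plane VIP 192.168.39.254 on eth0 and load-balancing of port 8443 enabled. One way to confirm the VIP is actually being served, assuming shell access to the control-plane nodes of this profile (the names and flags mirror this run, but the check itself is only illustrative):

    # Exactly one control-plane node (the current kube-vip leader) should list the
    # virtual IP 192.168.39.254 as a secondary address on eth0.
    minikube -p ha-234651 ssh -- ip addr show eth0
    minikube -p ha-234651 ssh -n m02 -- ip addr show eth0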
	I0731 17:00:20.881805   26392 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0731 17:00:20.891145   26392 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.30.3: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.30.3': No such file or directory
	
	Initiating transfer...
	I0731 17:00:20.891210   26392 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.30.3
	I0731 17:00:20.899812   26392 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubectl.sha256
	I0731 17:00:20.899836   26392 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19349-8084/.minikube/cache/linux/amd64/v1.30.3/kubectl -> /var/lib/minikube/binaries/v1.30.3/kubectl
	I0731 17:00:20.899904   26392 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.3/kubectl
	I0731 17:00:20.899912   26392 download.go:107] Downloading: https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubelet.sha256 -> /home/jenkins/minikube-integration/19349-8084/.minikube/cache/linux/amd64/v1.30.3/kubelet
	I0731 17:00:20.899940   26392 download.go:107] Downloading: https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubeadm.sha256 -> /home/jenkins/minikube-integration/19349-8084/.minikube/cache/linux/amd64/v1.30.3/kubeadm
	I0731 17:00:20.904201   26392 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.3/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.3/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.3/kubectl': No such file or directory
	I0731 17:00:20.904229   26392 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19349-8084/.minikube/cache/linux/amd64/v1.30.3/kubectl --> /var/lib/minikube/binaries/v1.30.3/kubectl (51454104 bytes)
	I0731 17:00:21.663380   26392 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19349-8084/.minikube/cache/linux/amd64/v1.30.3/kubeadm -> /var/lib/minikube/binaries/v1.30.3/kubeadm
	I0731 17:00:21.663452   26392 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.3/kubeadm
	I0731 17:00:21.668442   26392 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.3/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.3/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.3/kubeadm': No such file or directory
	I0731 17:00:21.668472   26392 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19349-8084/.minikube/cache/linux/amd64/v1.30.3/kubeadm --> /var/lib/minikube/binaries/v1.30.3/kubeadm (50249880 bytes)
	I0731 17:00:21.986946   26392 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0731 17:00:22.003689   26392 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19349-8084/.minikube/cache/linux/amd64/v1.30.3/kubelet -> /var/lib/minikube/binaries/v1.30.3/kubelet
	I0731 17:00:22.003795   26392 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.3/kubelet
	I0731 17:00:22.007955   26392 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.3/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.3/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.3/kubelet': No such file or directory
	I0731 17:00:22.007994   26392 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19349-8084/.minikube/cache/linux/amd64/v1.30.3/kubelet --> /var/lib/minikube/binaries/v1.30.3/kubelet (100125080 bytes)
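Because the node has no cached binaries, kubectl, kubeadm and kubelet are downloaded from dl.k8s.io with their published checksum files and then copied over SSH. The equivalent manual download-and-verify step for one of the binaries looks like this (a sketch of the standard upstream procedure, not the exact code path used here):

    # Fetch kubeadm v1.30.3 and verify it against the published SHA-256 checksum.
    curl -LO "https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubeadm"
    curl -LO "https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubeadm.sha256"
    echo "$(cat kubeadm.sha256)  kubeadm" | sha256sum --check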
	I0731 17:00:22.382201   26392 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0731 17:00:22.390777   26392 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I0731 17:00:22.406224   26392 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0731 17:00:22.421240   26392 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I0731 17:00:22.436792   26392 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0731 17:00:22.441210   26392 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0731 17:00:22.452562   26392 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0731 17:00:22.576024   26392 ssh_runner.go:195] Run: sudo systemctl start kubelet
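With the kubelet drop-in, unit file and kube-vip manifest in place, systemd is reloaded and the kubelet started on the new node. When this step misbehaves in a local reproduction, a reasonable first check (illustrative commands, assuming shell access on the node) is:

    # Confirm the kubelet unit is running and inspect its most recent log lines.
    sudo systemctl status kubelet --no-pager
    sudo journalctl -u kubelet -n 50 --no-pager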
	I0731 17:00:22.594607   26392 host.go:66] Checking if "ha-234651" exists ...
	I0731 17:00:22.594940   26392 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 17:00:22.594978   26392 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 17:00:22.610985   26392 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46341
	I0731 17:00:22.611531   26392 main.go:141] libmachine: () Calling .GetVersion
	I0731 17:00:22.612007   26392 main.go:141] libmachine: Using API Version  1
	I0731 17:00:22.612027   26392 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 17:00:22.612352   26392 main.go:141] libmachine: () Calling .GetMachineName
	I0731 17:00:22.612527   26392 main.go:141] libmachine: (ha-234651) Calling .DriverName
	I0731 17:00:22.612670   26392 start.go:317] joinCluster: &{Name:ha-234651 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 Cluster
Name:ha-234651 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.243 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.235 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiratio
n:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0731 17:00:22.612762   26392 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0731 17:00:22.612785   26392 main.go:141] libmachine: (ha-234651) Calling .GetSSHHostname
	I0731 17:00:22.615818   26392 main.go:141] libmachine: (ha-234651) DBG | domain ha-234651 has defined MAC address 52:54:00:20:60:53 in network mk-ha-234651
	I0731 17:00:22.616280   26392 main.go:141] libmachine: (ha-234651) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:60:53", ip: ""} in network mk-ha-234651: {Iface:virbr1 ExpiryTime:2024-07-31 17:59:15 +0000 UTC Type:0 Mac:52:54:00:20:60:53 Iaid: IPaddr:192.168.39.243 Prefix:24 Hostname:ha-234651 Clientid:01:52:54:00:20:60:53}
	I0731 17:00:22.616305   26392 main.go:141] libmachine: (ha-234651) DBG | domain ha-234651 has defined IP address 192.168.39.243 and MAC address 52:54:00:20:60:53 in network mk-ha-234651
	I0731 17:00:22.616439   26392 main.go:141] libmachine: (ha-234651) Calling .GetSSHPort
	I0731 17:00:22.616586   26392 main.go:141] libmachine: (ha-234651) Calling .GetSSHKeyPath
	I0731 17:00:22.616726   26392 main.go:141] libmachine: (ha-234651) Calling .GetSSHUsername
	I0731 17:00:22.616844   26392 sshutil.go:53] new ssh client: &{IP:192.168.39.243 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19349-8084/.minikube/machines/ha-234651/id_rsa Username:docker}
	I0731 17:00:22.761249   26392 start.go:343] trying to join control-plane node "m02" to cluster: &{Name:m02 IP:192.168.39.235 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0731 17:00:22.761303   26392 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token hpnvqq.fpzfbqdbq5p8g3rv --discovery-token-ca-cert-hash sha256:460b24b51d10a3760f2391542a8a89e401359854f437f922cbee3f2d8ed6c856 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-234651-m02 --control-plane --apiserver-advertise-address=192.168.39.235 --apiserver-bind-port=8443"
	I0731 17:00:46.267965   26392 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token hpnvqq.fpzfbqdbq5p8g3rv --discovery-token-ca-cert-hash sha256:460b24b51d10a3760f2391542a8a89e401359854f437f922cbee3f2d8ed6c856 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-234651-m02 --control-plane --apiserver-advertise-address=192.168.39.235 --apiserver-bind-port=8443": (23.506634693s)
	I0731 17:00:46.268000   26392 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0731 17:00:46.768462   26392 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-234651-m02 minikube.k8s.io/updated_at=2024_07_31T17_00_46_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=1d737dad7efa60c56d30434fcd857dd3b14c91d9 minikube.k8s.io/name=ha-234651 minikube.k8s.io/primary=false
	I0731 17:00:46.921381   26392 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-234651-m02 node-role.kubernetes.io/control-plane:NoSchedule-
	I0731 17:00:47.029139   26392 start.go:319] duration metric: took 24.416465506s to joinCluster
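The kubeadm join itself took about 23.5s; labeling the node and removing its control-plane NoSchedule taint (so it can also run workloads) bring the total to roughly 24.4s. A quick way to confirm that the second control plane registered correctly, assuming the kubeconfig context minikube creates for this profile:

    # Both ha-234651 and ha-234651-m02 should be listed with the control-plane role.
    kubectl --context ha-234651 get nodes -o wide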
	I0731 17:00:47.029219   26392 start.go:235] Will wait 6m0s for node &{Name:m02 IP:192.168.39.235 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0731 17:00:47.029494   26392 config.go:182] Loaded profile config "ha-234651": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0731 17:00:47.030590   26392 out.go:177] * Verifying Kubernetes components...
	I0731 17:00:47.031802   26392 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0731 17:00:47.280650   26392 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0731 17:00:47.319645   26392 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19349-8084/kubeconfig
	I0731 17:00:47.319853   26392 kapi.go:59] client config for ha-234651: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19349-8084/.minikube/profiles/ha-234651/client.crt", KeyFile:"/home/jenkins/minikube-integration/19349-8084/.minikube/profiles/ha-234651/client.key", CAFile:"/home/jenkins/minikube-integration/19349-8084/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)},
UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1d02f40), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0731 17:00:47.319906   26392 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.243:8443
	I0731 17:00:47.320069   26392 node_ready.go:35] waiting up to 6m0s for node "ha-234651-m02" to be "Ready" ...
	I0731 17:00:47.320139   26392 round_trippers.go:463] GET https://192.168.39.243:8443/api/v1/nodes/ha-234651-m02
	I0731 17:00:47.320144   26392 round_trippers.go:469] Request Headers:
	I0731 17:00:47.320151   26392 round_trippers.go:473]     Accept: application/json, */*
	I0731 17:00:47.320158   26392 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 17:00:47.330603   26392 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I0731 17:00:47.820885   26392 round_trippers.go:463] GET https://192.168.39.243:8443/api/v1/nodes/ha-234651-m02
	I0731 17:00:47.820904   26392 round_trippers.go:469] Request Headers:
	I0731 17:00:47.820919   26392 round_trippers.go:473]     Accept: application/json, */*
	I0731 17:00:47.820924   26392 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 17:00:47.824577   26392 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 17:00:48.320541   26392 round_trippers.go:463] GET https://192.168.39.243:8443/api/v1/nodes/ha-234651-m02
	I0731 17:00:48.320562   26392 round_trippers.go:469] Request Headers:
	I0731 17:00:48.320570   26392 round_trippers.go:473]     Accept: application/json, */*
	I0731 17:00:48.320575   26392 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 17:00:48.324027   26392 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 17:00:48.821053   26392 round_trippers.go:463] GET https://192.168.39.243:8443/api/v1/nodes/ha-234651-m02
	I0731 17:00:48.821086   26392 round_trippers.go:469] Request Headers:
	I0731 17:00:48.821109   26392 round_trippers.go:473]     Accept: application/json, */*
	I0731 17:00:48.821113   26392 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 17:00:48.824308   26392 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 17:00:49.320290   26392 round_trippers.go:463] GET https://192.168.39.243:8443/api/v1/nodes/ha-234651-m02
	I0731 17:00:49.320316   26392 round_trippers.go:469] Request Headers:
	I0731 17:00:49.320327   26392 round_trippers.go:473]     Accept: application/json, */*
	I0731 17:00:49.320333   26392 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 17:00:49.323974   26392 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 17:00:49.324681   26392 node_ready.go:53] node "ha-234651-m02" has status "Ready":"False"
	I0731 17:00:49.821158   26392 round_trippers.go:463] GET https://192.168.39.243:8443/api/v1/nodes/ha-234651-m02
	I0731 17:00:49.821179   26392 round_trippers.go:469] Request Headers:
	I0731 17:00:49.821187   26392 round_trippers.go:473]     Accept: application/json, */*
	I0731 17:00:49.821192   26392 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 17:00:49.824530   26392 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 17:00:50.320318   26392 round_trippers.go:463] GET https://192.168.39.243:8443/api/v1/nodes/ha-234651-m02
	I0731 17:00:50.320350   26392 round_trippers.go:469] Request Headers:
	I0731 17:00:50.320357   26392 round_trippers.go:473]     Accept: application/json, */*
	I0731 17:00:50.320365   26392 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 17:00:50.323791   26392 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 17:00:50.820653   26392 round_trippers.go:463] GET https://192.168.39.243:8443/api/v1/nodes/ha-234651-m02
	I0731 17:00:50.820677   26392 round_trippers.go:469] Request Headers:
	I0731 17:00:50.820689   26392 round_trippers.go:473]     Accept: application/json, */*
	I0731 17:00:50.820696   26392 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 17:00:50.824289   26392 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 17:00:51.320251   26392 round_trippers.go:463] GET https://192.168.39.243:8443/api/v1/nodes/ha-234651-m02
	I0731 17:00:51.320271   26392 round_trippers.go:469] Request Headers:
	I0731 17:00:51.320282   26392 round_trippers.go:473]     Accept: application/json, */*
	I0731 17:00:51.320290   26392 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 17:00:51.323139   26392 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0731 17:00:51.820380   26392 round_trippers.go:463] GET https://192.168.39.243:8443/api/v1/nodes/ha-234651-m02
	I0731 17:00:51.820400   26392 round_trippers.go:469] Request Headers:
	I0731 17:00:51.820408   26392 round_trippers.go:473]     Accept: application/json, */*
	I0731 17:00:51.820412   26392 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 17:00:51.823712   26392 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 17:00:51.824612   26392 node_ready.go:53] node "ha-234651-m02" has status "Ready":"False"
	I0731 17:00:52.321232   26392 round_trippers.go:463] GET https://192.168.39.243:8443/api/v1/nodes/ha-234651-m02
	I0731 17:00:52.321253   26392 round_trippers.go:469] Request Headers:
	I0731 17:00:52.321261   26392 round_trippers.go:473]     Accept: application/json, */*
	I0731 17:00:52.321264   26392 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 17:00:52.324350   26392 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 17:00:52.820794   26392 round_trippers.go:463] GET https://192.168.39.243:8443/api/v1/nodes/ha-234651-m02
	I0731 17:00:52.820819   26392 round_trippers.go:469] Request Headers:
	I0731 17:00:52.820831   26392 round_trippers.go:473]     Accept: application/json, */*
	I0731 17:00:52.820838   26392 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 17:00:52.850837   26392 round_trippers.go:574] Response Status: 200 OK in 29 milliseconds
	I0731 17:00:53.320599   26392 round_trippers.go:463] GET https://192.168.39.243:8443/api/v1/nodes/ha-234651-m02
	I0731 17:00:53.320621   26392 round_trippers.go:469] Request Headers:
	I0731 17:00:53.320633   26392 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 17:00:53.320641   26392 round_trippers.go:473]     Accept: application/json, */*
	I0731 17:00:53.324453   26392 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 17:00:53.821278   26392 round_trippers.go:463] GET https://192.168.39.243:8443/api/v1/nodes/ha-234651-m02
	I0731 17:00:53.821300   26392 round_trippers.go:469] Request Headers:
	I0731 17:00:53.821310   26392 round_trippers.go:473]     Accept: application/json, */*
	I0731 17:00:53.821314   26392 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 17:00:53.824402   26392 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 17:00:53.825080   26392 node_ready.go:53] node "ha-234651-m02" has status "Ready":"False"
	I0731 17:00:54.320491   26392 round_trippers.go:463] GET https://192.168.39.243:8443/api/v1/nodes/ha-234651-m02
	I0731 17:00:54.320511   26392 round_trippers.go:469] Request Headers:
	I0731 17:00:54.320519   26392 round_trippers.go:473]     Accept: application/json, */*
	I0731 17:00:54.320522   26392 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 17:00:54.323502   26392 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0731 17:00:54.820491   26392 round_trippers.go:463] GET https://192.168.39.243:8443/api/v1/nodes/ha-234651-m02
	I0731 17:00:54.820509   26392 round_trippers.go:469] Request Headers:
	I0731 17:00:54.820516   26392 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 17:00:54.820521   26392 round_trippers.go:473]     Accept: application/json, */*
	I0731 17:00:54.823582   26392 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 17:00:55.320329   26392 round_trippers.go:463] GET https://192.168.39.243:8443/api/v1/nodes/ha-234651-m02
	I0731 17:00:55.320354   26392 round_trippers.go:469] Request Headers:
	I0731 17:00:55.320362   26392 round_trippers.go:473]     Accept: application/json, */*
	I0731 17:00:55.320366   26392 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 17:00:55.324511   26392 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0731 17:00:55.820447   26392 round_trippers.go:463] GET https://192.168.39.243:8443/api/v1/nodes/ha-234651-m02
	I0731 17:00:55.820469   26392 round_trippers.go:469] Request Headers:
	I0731 17:00:55.820478   26392 round_trippers.go:473]     Accept: application/json, */*
	I0731 17:00:55.820484   26392 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 17:00:55.823836   26392 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 17:00:56.320328   26392 round_trippers.go:463] GET https://192.168.39.243:8443/api/v1/nodes/ha-234651-m02
	I0731 17:00:56.320350   26392 round_trippers.go:469] Request Headers:
	I0731 17:00:56.320358   26392 round_trippers.go:473]     Accept: application/json, */*
	I0731 17:00:56.320362   26392 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 17:00:56.323672   26392 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 17:00:56.324081   26392 node_ready.go:53] node "ha-234651-m02" has status "Ready":"False"
	I0731 17:00:56.820959   26392 round_trippers.go:463] GET https://192.168.39.243:8443/api/v1/nodes/ha-234651-m02
	I0731 17:00:56.820978   26392 round_trippers.go:469] Request Headers:
	I0731 17:00:56.820985   26392 round_trippers.go:473]     Accept: application/json, */*
	I0731 17:00:56.820989   26392 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 17:00:56.823706   26392 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0731 17:00:57.321046   26392 round_trippers.go:463] GET https://192.168.39.243:8443/api/v1/nodes/ha-234651-m02
	I0731 17:00:57.321065   26392 round_trippers.go:469] Request Headers:
	I0731 17:00:57.321073   26392 round_trippers.go:473]     Accept: application/json, */*
	I0731 17:00:57.321077   26392 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 17:00:57.326002   26392 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0731 17:00:57.821159   26392 round_trippers.go:463] GET https://192.168.39.243:8443/api/v1/nodes/ha-234651-m02
	I0731 17:00:57.821191   26392 round_trippers.go:469] Request Headers:
	I0731 17:00:57.821200   26392 round_trippers.go:473]     Accept: application/json, */*
	I0731 17:00:57.821205   26392 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 17:00:57.824251   26392 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 17:00:58.320326   26392 round_trippers.go:463] GET https://192.168.39.243:8443/api/v1/nodes/ha-234651-m02
	I0731 17:00:58.320347   26392 round_trippers.go:469] Request Headers:
	I0731 17:00:58.320356   26392 round_trippers.go:473]     Accept: application/json, */*
	I0731 17:00:58.320361   26392 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 17:00:58.323702   26392 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 17:00:58.324189   26392 node_ready.go:53] node "ha-234651-m02" has status "Ready":"False"
	I0731 17:00:58.820578   26392 round_trippers.go:463] GET https://192.168.39.243:8443/api/v1/nodes/ha-234651-m02
	I0731 17:00:58.820599   26392 round_trippers.go:469] Request Headers:
	I0731 17:00:58.820606   26392 round_trippers.go:473]     Accept: application/json, */*
	I0731 17:00:58.820611   26392 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 17:00:58.823560   26392 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0731 17:00:59.320493   26392 round_trippers.go:463] GET https://192.168.39.243:8443/api/v1/nodes/ha-234651-m02
	I0731 17:00:59.320515   26392 round_trippers.go:469] Request Headers:
	I0731 17:00:59.320523   26392 round_trippers.go:473]     Accept: application/json, */*
	I0731 17:00:59.320526   26392 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 17:00:59.323793   26392 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 17:00:59.820247   26392 round_trippers.go:463] GET https://192.168.39.243:8443/api/v1/nodes/ha-234651-m02
	I0731 17:00:59.820271   26392 round_trippers.go:469] Request Headers:
	I0731 17:00:59.820282   26392 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 17:00:59.820290   26392 round_trippers.go:473]     Accept: application/json, */*
	I0731 17:00:59.823330   26392 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 17:01:00.321048   26392 round_trippers.go:463] GET https://192.168.39.243:8443/api/v1/nodes/ha-234651-m02
	I0731 17:01:00.321074   26392 round_trippers.go:469] Request Headers:
	I0731 17:01:00.321084   26392 round_trippers.go:473]     Accept: application/json, */*
	I0731 17:01:00.321088   26392 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 17:01:00.324269   26392 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 17:01:00.324877   26392 node_ready.go:53] node "ha-234651-m02" has status "Ready":"False"
	I0731 17:01:00.821273   26392 round_trippers.go:463] GET https://192.168.39.243:8443/api/v1/nodes/ha-234651-m02
	I0731 17:01:00.821297   26392 round_trippers.go:469] Request Headers:
	I0731 17:01:00.821306   26392 round_trippers.go:473]     Accept: application/json, */*
	I0731 17:01:00.821312   26392 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 17:01:00.823658   26392 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0731 17:01:01.320514   26392 round_trippers.go:463] GET https://192.168.39.243:8443/api/v1/nodes/ha-234651-m02
	I0731 17:01:01.320539   26392 round_trippers.go:469] Request Headers:
	I0731 17:01:01.320547   26392 round_trippers.go:473]     Accept: application/json, */*
	I0731 17:01:01.320553   26392 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 17:01:01.323351   26392 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0731 17:01:01.820382   26392 round_trippers.go:463] GET https://192.168.39.243:8443/api/v1/nodes/ha-234651-m02
	I0731 17:01:01.820404   26392 round_trippers.go:469] Request Headers:
	I0731 17:01:01.820420   26392 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 17:01:01.820426   26392 round_trippers.go:473]     Accept: application/json, */*
	I0731 17:01:01.823233   26392 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0731 17:01:02.320692   26392 round_trippers.go:463] GET https://192.168.39.243:8443/api/v1/nodes/ha-234651-m02
	I0731 17:01:02.320718   26392 round_trippers.go:469] Request Headers:
	I0731 17:01:02.320728   26392 round_trippers.go:473]     Accept: application/json, */*
	I0731 17:01:02.320733   26392 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 17:01:02.323624   26392 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0731 17:01:02.820891   26392 round_trippers.go:463] GET https://192.168.39.243:8443/api/v1/nodes/ha-234651-m02
	I0731 17:01:02.820925   26392 round_trippers.go:469] Request Headers:
	I0731 17:01:02.820932   26392 round_trippers.go:473]     Accept: application/json, */*
	I0731 17:01:02.820936   26392 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 17:01:02.824075   26392 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 17:01:02.824690   26392 node_ready.go:53] node "ha-234651-m02" has status "Ready":"False"
	I0731 17:01:03.321136   26392 round_trippers.go:463] GET https://192.168.39.243:8443/api/v1/nodes/ha-234651-m02
	I0731 17:01:03.321157   26392 round_trippers.go:469] Request Headers:
	I0731 17:01:03.321166   26392 round_trippers.go:473]     Accept: application/json, */*
	I0731 17:01:03.321170   26392 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 17:01:03.324114   26392 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0731 17:01:03.820980   26392 round_trippers.go:463] GET https://192.168.39.243:8443/api/v1/nodes/ha-234651-m02
	I0731 17:01:03.821004   26392 round_trippers.go:469] Request Headers:
	I0731 17:01:03.821011   26392 round_trippers.go:473]     Accept: application/json, */*
	I0731 17:01:03.821014   26392 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 17:01:03.825028   26392 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0731 17:01:04.320947   26392 round_trippers.go:463] GET https://192.168.39.243:8443/api/v1/nodes/ha-234651-m02
	I0731 17:01:04.320967   26392 round_trippers.go:469] Request Headers:
	I0731 17:01:04.320975   26392 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 17:01:04.320981   26392 round_trippers.go:473]     Accept: application/json, */*
	I0731 17:01:04.324399   26392 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 17:01:04.820225   26392 round_trippers.go:463] GET https://192.168.39.243:8443/api/v1/nodes/ha-234651-m02
	I0731 17:01:04.820247   26392 round_trippers.go:469] Request Headers:
	I0731 17:01:04.820256   26392 round_trippers.go:473]     Accept: application/json, */*
	I0731 17:01:04.820262   26392 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 17:01:04.823891   26392 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 17:01:04.824381   26392 node_ready.go:49] node "ha-234651-m02" has status "Ready":"True"
	I0731 17:01:04.824402   26392 node_ready.go:38] duration metric: took 17.504316493s for node "ha-234651-m02" to be "Ready" ...
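The loop above polls GET /api/v1/nodes/ha-234651-m02 roughly every 500ms until the Ready condition reports True, which took about 17.5s here. An equivalent standalone wait, assuming the profile's kubeconfig context:

    # Block until the node reports Ready, or fail after the same 6-minute budget.
    kubectl --context ha-234651 wait --for=condition=Ready node/ha-234651-m02 --timeout=6m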
	I0731 17:01:04.824413   26392 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0731 17:01:04.824479   26392 round_trippers.go:463] GET https://192.168.39.243:8443/api/v1/namespaces/kube-system/pods
	I0731 17:01:04.824492   26392 round_trippers.go:469] Request Headers:
	I0731 17:01:04.824502   26392 round_trippers.go:473]     Accept: application/json, */*
	I0731 17:01:04.824509   26392 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 17:01:04.829454   26392 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0731 17:01:04.834900   26392 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-nsx9j" in "kube-system" namespace to be "Ready" ...
	I0731 17:01:04.834973   26392 round_trippers.go:463] GET https://192.168.39.243:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-nsx9j
	I0731 17:01:04.834984   26392 round_trippers.go:469] Request Headers:
	I0731 17:01:04.834993   26392 round_trippers.go:473]     Accept: application/json, */*
	I0731 17:01:04.835003   26392 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 17:01:04.837627   26392 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0731 17:01:04.838113   26392 round_trippers.go:463] GET https://192.168.39.243:8443/api/v1/nodes/ha-234651
	I0731 17:01:04.838127   26392 round_trippers.go:469] Request Headers:
	I0731 17:01:04.838133   26392 round_trippers.go:473]     Accept: application/json, */*
	I0731 17:01:04.838138   26392 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 17:01:04.840091   26392 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0731 17:01:04.840502   26392 pod_ready.go:92] pod "coredns-7db6d8ff4d-nsx9j" in "kube-system" namespace has status "Ready":"True"
	I0731 17:01:04.840517   26392 pod_ready.go:81] duration metric: took 5.593343ms for pod "coredns-7db6d8ff4d-nsx9j" in "kube-system" namespace to be "Ready" ...
	I0731 17:01:04.840524   26392 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-qbqb9" in "kube-system" namespace to be "Ready" ...
	I0731 17:01:04.840565   26392 round_trippers.go:463] GET https://192.168.39.243:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-qbqb9
	I0731 17:01:04.840572   26392 round_trippers.go:469] Request Headers:
	I0731 17:01:04.840578   26392 round_trippers.go:473]     Accept: application/json, */*
	I0731 17:01:04.840581   26392 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 17:01:04.842700   26392 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0731 17:01:04.843202   26392 round_trippers.go:463] GET https://192.168.39.243:8443/api/v1/nodes/ha-234651
	I0731 17:01:04.843216   26392 round_trippers.go:469] Request Headers:
	I0731 17:01:04.843222   26392 round_trippers.go:473]     Accept: application/json, */*
	I0731 17:01:04.843226   26392 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 17:01:04.845356   26392 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0731 17:01:04.845902   26392 pod_ready.go:92] pod "coredns-7db6d8ff4d-qbqb9" in "kube-system" namespace has status "Ready":"True"
	I0731 17:01:04.845921   26392 pod_ready.go:81] duration metric: took 5.388928ms for pod "coredns-7db6d8ff4d-qbqb9" in "kube-system" namespace to be "Ready" ...
	I0731 17:01:04.845932   26392 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-234651" in "kube-system" namespace to be "Ready" ...
	I0731 17:01:04.845986   26392 round_trippers.go:463] GET https://192.168.39.243:8443/api/v1/namespaces/kube-system/pods/etcd-ha-234651
	I0731 17:01:04.845997   26392 round_trippers.go:469] Request Headers:
	I0731 17:01:04.846006   26392 round_trippers.go:473]     Accept: application/json, */*
	I0731 17:01:04.846015   26392 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 17:01:04.848157   26392 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0731 17:01:04.848691   26392 round_trippers.go:463] GET https://192.168.39.243:8443/api/v1/nodes/ha-234651
	I0731 17:01:04.848703   26392 round_trippers.go:469] Request Headers:
	I0731 17:01:04.848708   26392 round_trippers.go:473]     Accept: application/json, */*
	I0731 17:01:04.848712   26392 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 17:01:04.850608   26392 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0731 17:01:04.851183   26392 pod_ready.go:92] pod "etcd-ha-234651" in "kube-system" namespace has status "Ready":"True"
	I0731 17:01:04.851198   26392 pod_ready.go:81] duration metric: took 5.258896ms for pod "etcd-ha-234651" in "kube-system" namespace to be "Ready" ...
	I0731 17:01:04.851205   26392 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-234651-m02" in "kube-system" namespace to be "Ready" ...
	I0731 17:01:04.851243   26392 round_trippers.go:463] GET https://192.168.39.243:8443/api/v1/namespaces/kube-system/pods/etcd-ha-234651-m02
	I0731 17:01:04.851250   26392 round_trippers.go:469] Request Headers:
	I0731 17:01:04.851257   26392 round_trippers.go:473]     Accept: application/json, */*
	I0731 17:01:04.851262   26392 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 17:01:04.853625   26392 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0731 17:01:04.854050   26392 round_trippers.go:463] GET https://192.168.39.243:8443/api/v1/nodes/ha-234651-m02
	I0731 17:01:04.854063   26392 round_trippers.go:469] Request Headers:
	I0731 17:01:04.854068   26392 round_trippers.go:473]     Accept: application/json, */*
	I0731 17:01:04.854072   26392 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 17:01:04.856215   26392 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0731 17:01:04.856867   26392 pod_ready.go:92] pod "etcd-ha-234651-m02" in "kube-system" namespace has status "Ready":"True"
	I0731 17:01:04.856882   26392 pod_ready.go:81] duration metric: took 5.67156ms for pod "etcd-ha-234651-m02" in "kube-system" namespace to be "Ready" ...
	I0731 17:01:04.856893   26392 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-234651" in "kube-system" namespace to be "Ready" ...
	I0731 17:01:05.020903   26392 request.go:629] Waited for 163.95132ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.243:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-234651
	I0731 17:01:05.020981   26392 round_trippers.go:463] GET https://192.168.39.243:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-234651
	I0731 17:01:05.020990   26392 round_trippers.go:469] Request Headers:
	I0731 17:01:05.021004   26392 round_trippers.go:473]     Accept: application/json, */*
	I0731 17:01:05.021014   26392 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 17:01:05.024257   26392 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 17:01:05.220373   26392 request.go:629] Waited for 195.279659ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.243:8443/api/v1/nodes/ha-234651
	I0731 17:01:05.220443   26392 round_trippers.go:463] GET https://192.168.39.243:8443/api/v1/nodes/ha-234651
	I0731 17:01:05.220448   26392 round_trippers.go:469] Request Headers:
	I0731 17:01:05.220455   26392 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 17:01:05.220460   26392 round_trippers.go:473]     Accept: application/json, */*
	I0731 17:01:05.223551   26392 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 17:01:05.224027   26392 pod_ready.go:92] pod "kube-apiserver-ha-234651" in "kube-system" namespace has status "Ready":"True"
	I0731 17:01:05.224044   26392 pod_ready.go:81] duration metric: took 367.145061ms for pod "kube-apiserver-ha-234651" in "kube-system" namespace to be "Ready" ...
	I0731 17:01:05.224053   26392 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-234651-m02" in "kube-system" namespace to be "Ready" ...
	I0731 17:01:05.421120   26392 request.go:629] Waited for 197.005416ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.243:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-234651-m02
	I0731 17:01:05.421193   26392 round_trippers.go:463] GET https://192.168.39.243:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-234651-m02
	I0731 17:01:05.421198   26392 round_trippers.go:469] Request Headers:
	I0731 17:01:05.421206   26392 round_trippers.go:473]     Accept: application/json, */*
	I0731 17:01:05.421209   26392 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 17:01:05.424140   26392 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0731 17:01:05.621076   26392 request.go:629] Waited for 196.378614ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.243:8443/api/v1/nodes/ha-234651-m02
	I0731 17:01:05.621153   26392 round_trippers.go:463] GET https://192.168.39.243:8443/api/v1/nodes/ha-234651-m02
	I0731 17:01:05.621161   26392 round_trippers.go:469] Request Headers:
	I0731 17:01:05.621168   26392 round_trippers.go:473]     Accept: application/json, */*
	I0731 17:01:05.621172   26392 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 17:01:05.625121   26392 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 17:01:05.625607   26392 pod_ready.go:92] pod "kube-apiserver-ha-234651-m02" in "kube-system" namespace has status "Ready":"True"
	I0731 17:01:05.625623   26392 pod_ready.go:81] duration metric: took 401.564067ms for pod "kube-apiserver-ha-234651-m02" in "kube-system" namespace to be "Ready" ...
	I0731 17:01:05.625632   26392 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-234651" in "kube-system" namespace to be "Ready" ...
	I0731 17:01:05.820743   26392 request.go:629] Waited for 195.048096ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.243:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-234651
	I0731 17:01:05.820824   26392 round_trippers.go:463] GET https://192.168.39.243:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-234651
	I0731 17:01:05.820829   26392 round_trippers.go:469] Request Headers:
	I0731 17:01:05.820836   26392 round_trippers.go:473]     Accept: application/json, */*
	I0731 17:01:05.820844   26392 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 17:01:05.824330   26392 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 17:01:06.020622   26392 request.go:629] Waited for 195.343372ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.243:8443/api/v1/nodes/ha-234651
	I0731 17:01:06.020676   26392 round_trippers.go:463] GET https://192.168.39.243:8443/api/v1/nodes/ha-234651
	I0731 17:01:06.020682   26392 round_trippers.go:469] Request Headers:
	I0731 17:01:06.020689   26392 round_trippers.go:473]     Accept: application/json, */*
	I0731 17:01:06.020693   26392 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 17:01:06.023838   26392 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 17:01:06.024292   26392 pod_ready.go:92] pod "kube-controller-manager-ha-234651" in "kube-system" namespace has status "Ready":"True"
	I0731 17:01:06.024309   26392 pod_ready.go:81] duration metric: took 398.671702ms for pod "kube-controller-manager-ha-234651" in "kube-system" namespace to be "Ready" ...
	I0731 17:01:06.024318   26392 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-234651-m02" in "kube-system" namespace to be "Ready" ...
	I0731 17:01:06.220348   26392 request.go:629] Waited for 195.964017ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.243:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-234651-m02
	I0731 17:01:06.220428   26392 round_trippers.go:463] GET https://192.168.39.243:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-234651-m02
	I0731 17:01:06.220435   26392 round_trippers.go:469] Request Headers:
	I0731 17:01:06.220446   26392 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 17:01:06.220456   26392 round_trippers.go:473]     Accept: application/json, */*
	I0731 17:01:06.223974   26392 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 17:01:06.420844   26392 request.go:629] Waited for 196.07221ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.243:8443/api/v1/nodes/ha-234651-m02
	I0731 17:01:06.420898   26392 round_trippers.go:463] GET https://192.168.39.243:8443/api/v1/nodes/ha-234651-m02
	I0731 17:01:06.420904   26392 round_trippers.go:469] Request Headers:
	I0731 17:01:06.420911   26392 round_trippers.go:473]     Accept: application/json, */*
	I0731 17:01:06.420916   26392 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 17:01:06.423911   26392 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0731 17:01:06.424360   26392 pod_ready.go:92] pod "kube-controller-manager-ha-234651-m02" in "kube-system" namespace has status "Ready":"True"
	I0731 17:01:06.424376   26392 pod_ready.go:81] duration metric: took 400.052749ms for pod "kube-controller-manager-ha-234651-m02" in "kube-system" namespace to be "Ready" ...
	I0731 17:01:06.424385   26392 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-b8dcw" in "kube-system" namespace to be "Ready" ...
	I0731 17:01:06.620606   26392 request.go:629] Waited for 196.143149ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.243:8443/api/v1/namespaces/kube-system/pods/kube-proxy-b8dcw
	I0731 17:01:06.620664   26392 round_trippers.go:463] GET https://192.168.39.243:8443/api/v1/namespaces/kube-system/pods/kube-proxy-b8dcw
	I0731 17:01:06.620668   26392 round_trippers.go:469] Request Headers:
	I0731 17:01:06.620675   26392 round_trippers.go:473]     Accept: application/json, */*
	I0731 17:01:06.620680   26392 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 17:01:06.623829   26392 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 17:01:06.820834   26392 request.go:629] Waited for 196.348156ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.243:8443/api/v1/nodes/ha-234651-m02
	I0731 17:01:06.820908   26392 round_trippers.go:463] GET https://192.168.39.243:8443/api/v1/nodes/ha-234651-m02
	I0731 17:01:06.820915   26392 round_trippers.go:469] Request Headers:
	I0731 17:01:06.820924   26392 round_trippers.go:473]     Accept: application/json, */*
	I0731 17:01:06.820929   26392 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 17:01:06.824063   26392 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 17:01:06.824684   26392 pod_ready.go:92] pod "kube-proxy-b8dcw" in "kube-system" namespace has status "Ready":"True"
	I0731 17:01:06.824706   26392 pod_ready.go:81] duration metric: took 400.313857ms for pod "kube-proxy-b8dcw" in "kube-system" namespace to be "Ready" ...
	I0731 17:01:06.824719   26392 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-jfgs8" in "kube-system" namespace to be "Ready" ...
	I0731 17:01:07.020808   26392 request.go:629] Waited for 196.0095ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.243:8443/api/v1/namespaces/kube-system/pods/kube-proxy-jfgs8
	I0731 17:01:07.020868   26392 round_trippers.go:463] GET https://192.168.39.243:8443/api/v1/namespaces/kube-system/pods/kube-proxy-jfgs8
	I0731 17:01:07.020873   26392 round_trippers.go:469] Request Headers:
	I0731 17:01:07.020880   26392 round_trippers.go:473]     Accept: application/json, */*
	I0731 17:01:07.020883   26392 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 17:01:07.024277   26392 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 17:01:07.220900   26392 request.go:629] Waited for 195.972338ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.243:8443/api/v1/nodes/ha-234651
	I0731 17:01:07.220951   26392 round_trippers.go:463] GET https://192.168.39.243:8443/api/v1/nodes/ha-234651
	I0731 17:01:07.220956   26392 round_trippers.go:469] Request Headers:
	I0731 17:01:07.220964   26392 round_trippers.go:473]     Accept: application/json, */*
	I0731 17:01:07.220969   26392 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 17:01:07.227233   26392 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0731 17:01:07.227930   26392 pod_ready.go:92] pod "kube-proxy-jfgs8" in "kube-system" namespace has status "Ready":"True"
	I0731 17:01:07.227949   26392 pod_ready.go:81] duration metric: took 403.222708ms for pod "kube-proxy-jfgs8" in "kube-system" namespace to be "Ready" ...
	I0731 17:01:07.227957   26392 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-234651" in "kube-system" namespace to be "Ready" ...
	I0731 17:01:07.421053   26392 request.go:629] Waited for 193.035592ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.243:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-234651
	I0731 17:01:07.421104   26392 round_trippers.go:463] GET https://192.168.39.243:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-234651
	I0731 17:01:07.421109   26392 round_trippers.go:469] Request Headers:
	I0731 17:01:07.421116   26392 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 17:01:07.421128   26392 round_trippers.go:473]     Accept: application/json, */*
	I0731 17:01:07.424728   26392 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 17:01:07.620741   26392 request.go:629] Waited for 195.45029ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.243:8443/api/v1/nodes/ha-234651
	I0731 17:01:07.620791   26392 round_trippers.go:463] GET https://192.168.39.243:8443/api/v1/nodes/ha-234651
	I0731 17:01:07.620796   26392 round_trippers.go:469] Request Headers:
	I0731 17:01:07.620804   26392 round_trippers.go:473]     Accept: application/json, */*
	I0731 17:01:07.620812   26392 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 17:01:07.624127   26392 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 17:01:07.624586   26392 pod_ready.go:92] pod "kube-scheduler-ha-234651" in "kube-system" namespace has status "Ready":"True"
	I0731 17:01:07.624605   26392 pod_ready.go:81] duration metric: took 396.642385ms for pod "kube-scheduler-ha-234651" in "kube-system" namespace to be "Ready" ...
	I0731 17:01:07.624615   26392 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-234651-m02" in "kube-system" namespace to be "Ready" ...
	I0731 17:01:07.820747   26392 request.go:629] Waited for 196.068342ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.243:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-234651-m02
	I0731 17:01:07.820826   26392 round_trippers.go:463] GET https://192.168.39.243:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-234651-m02
	I0731 17:01:07.820833   26392 round_trippers.go:469] Request Headers:
	I0731 17:01:07.820842   26392 round_trippers.go:473]     Accept: application/json, */*
	I0731 17:01:07.820849   26392 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 17:01:07.824093   26392 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 17:01:08.021084   26392 request.go:629] Waited for 196.338927ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.243:8443/api/v1/nodes/ha-234651-m02
	I0731 17:01:08.021148   26392 round_trippers.go:463] GET https://192.168.39.243:8443/api/v1/nodes/ha-234651-m02
	I0731 17:01:08.021155   26392 round_trippers.go:469] Request Headers:
	I0731 17:01:08.021163   26392 round_trippers.go:473]     Accept: application/json, */*
	I0731 17:01:08.021167   26392 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 17:01:08.027770   26392 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0731 17:01:08.028250   26392 pod_ready.go:92] pod "kube-scheduler-ha-234651-m02" in "kube-system" namespace has status "Ready":"True"
	I0731 17:01:08.028267   26392 pod_ready.go:81] duration metric: took 403.643549ms for pod "kube-scheduler-ha-234651-m02" in "kube-system" namespace to be "Ready" ...
	I0731 17:01:08.028277   26392 pod_ready.go:38] duration metric: took 3.20385274s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
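
The pod_ready waits above poll each pod's status conditions for type Ready until it reports True. A minimal client-go sketch of one such check follows; the kubeconfig path is an illustrative assumption, and the "client-side throttling" messages in the log come from the default client rate limiter, which the QPS/Burst fields control.

    package main

    import (
        "context"
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        // Assumption: kubeconfig path is illustrative, not taken from this run.
        cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
        if err != nil {
            panic(err)
        }
        cfg.QPS = 50   // raising these relaxes the client-side throttling seen above
        cfg.Burst = 100
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        pod, err := cs.CoreV1().Pods("kube-system").Get(context.TODO(), "kube-scheduler-ha-234651", metav1.GetOptions{})
        if err != nil {
            panic(err)
        }
        for _, c := range pod.Status.Conditions {
            if c.Type == corev1.PodReady {
                fmt.Println("Ready:", c.Status == corev1.ConditionTrue)
            }
        }
    }
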
	I0731 17:01:08.028292   26392 api_server.go:52] waiting for apiserver process to appear ...
	I0731 17:01:08.028339   26392 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 17:01:08.043888   26392 api_server.go:72] duration metric: took 21.014637451s to wait for apiserver process to appear ...
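
The process check above shells out to pgrep over the machine's SSH runner. A hedged local stand-in for the same check (run directly, not through ssh_runner):

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        // Mirrors the logged command; pgrep exits non-zero when nothing matches.
        out, err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Output()
        if err != nil {
            fmt.Println("kube-apiserver process not found:", err)
            return
        }
        fmt.Printf("kube-apiserver running with PID %s", out)
    }
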
	I0731 17:01:08.043908   26392 api_server.go:88] waiting for apiserver healthz status ...
	I0731 17:01:08.043922   26392 api_server.go:253] Checking apiserver healthz at https://192.168.39.243:8443/healthz ...
	I0731 17:01:08.048088   26392 api_server.go:279] https://192.168.39.243:8443/healthz returned 200:
	ok
	I0731 17:01:08.048141   26392 round_trippers.go:463] GET https://192.168.39.243:8443/version
	I0731 17:01:08.048148   26392 round_trippers.go:469] Request Headers:
	I0731 17:01:08.048156   26392 round_trippers.go:473]     Accept: application/json, */*
	I0731 17:01:08.048159   26392 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 17:01:08.049141   26392 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0731 17:01:08.049212   26392 api_server.go:141] control plane version: v1.30.3
	I0731 17:01:08.049227   26392 api_server.go:131] duration metric: took 5.313114ms to wait for apiserver health ...
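
The healthz and version checks above are plain HTTPS GETs against the apiserver. A minimal sketch, using the endpoint from the log and skipping certificate verification purely for illustration:

    package main

    import (
        "crypto/tls"
        "fmt"
        "io"
        "net/http"
        "time"
    )

    func main() {
        client := &http.Client{
            Timeout:   5 * time.Second,
            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
        }
        resp, err := client.Get("https://192.168.39.243:8443/healthz")
        if err != nil {
            fmt.Println("healthz probe failed:", err)
            return
        }
        defer resp.Body.Close()
        body, _ := io.ReadAll(resp.Body)
        fmt.Printf("status=%d body=%q\n", resp.StatusCode, body) // expect 200 and "ok"
    }
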
	I0731 17:01:08.049233   26392 system_pods.go:43] waiting for kube-system pods to appear ...
	I0731 17:01:08.220721   26392 request.go:629] Waited for 171.43341ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.243:8443/api/v1/namespaces/kube-system/pods
	I0731 17:01:08.220793   26392 round_trippers.go:463] GET https://192.168.39.243:8443/api/v1/namespaces/kube-system/pods
	I0731 17:01:08.220799   26392 round_trippers.go:469] Request Headers:
	I0731 17:01:08.220806   26392 round_trippers.go:473]     Accept: application/json, */*
	I0731 17:01:08.220810   26392 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 17:01:08.227018   26392 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0731 17:01:08.231092   26392 system_pods.go:59] 17 kube-system pods found
	I0731 17:01:08.231131   26392 system_pods.go:61] "coredns-7db6d8ff4d-nsx9j" [b2cde006-dbb7-4e6f-a5f1-cf7760740104] Running
	I0731 17:01:08.231139   26392 system_pods.go:61] "coredns-7db6d8ff4d-qbqb9" [4f76f862-d39e-4976-90e6-fb9a25cc485a] Running
	I0731 17:01:08.231144   26392 system_pods.go:61] "etcd-ha-234651" [c6fb7163-2f43-4b81-a3d9-e1550ddabc4c] Running
	I0731 17:01:08.231156   26392 system_pods.go:61] "etcd-ha-234651-m02" [ba88e585-8e9b-4177-82e0-4222210cfd2d] Running
	I0731 17:01:08.231163   26392 system_pods.go:61] "kindnet-phmdp" [f0e0736a-4005-4b0b-81fc-e4cea26a7d42] Running
	I0731 17:01:08.231166   26392 system_pods.go:61] "kindnet-wfbt4" [9eda8095-ce75-4043-8ddf-6e5663de8212] Running
	I0731 17:01:08.231169   26392 system_pods.go:61] "kube-apiserver-ha-234651" [ce3cbf02-100c-4437-8d1c-2ac4a2822866] Running
	I0731 17:01:08.231173   26392 system_pods.go:61] "kube-apiserver-ha-234651-m02" [e5eea595-36d9-497e-83be-aac5845b566a] Running
	I0731 17:01:08.231176   26392 system_pods.go:61] "kube-controller-manager-ha-234651" [3d111090-8fee-40ab-8cab-df7dd984bdb3] Running
	I0731 17:01:08.231182   26392 system_pods.go:61] "kube-controller-manager-ha-234651-m02" [28a10b93-7e55-4e2d-a5be-3605fa94fdb0] Running
	I0731 17:01:08.231185   26392 system_pods.go:61] "kube-proxy-b8dcw" [cbd2a846-0490-4465-b299-76b6fb0d5710] Running
	I0731 17:01:08.231188   26392 system_pods.go:61] "kube-proxy-jfgs8" [5ead85d8-0fd0-4900-8c02-2f23217ca208] Running
	I0731 17:01:08.231191   26392 system_pods.go:61] "kube-scheduler-ha-234651" [e1e2064c-17d7-4ea9-b6d3-b13c9ca34789] Running
	I0731 17:01:08.231194   26392 system_pods.go:61] "kube-scheduler-ha-234651-m02" [2ed86332-42ba-46c9-901c-38880e8ec54a] Running
	I0731 17:01:08.231197   26392 system_pods.go:61] "kube-vip-ha-234651" [205b811c-93e9-4f66-9d0e-67abbc8ff1ef] Running
	I0731 17:01:08.231201   26392 system_pods.go:61] "kube-vip-ha-234651-m02" [8fc0f86c-f043-44c2-aa8b-544fd6b7de22] Running
	I0731 17:01:08.231203   26392 system_pods.go:61] "storage-provisioner" [87455537-bdb8-438b-8122-db85bed01d09] Running
	I0731 17:01:08.231210   26392 system_pods.go:74] duration metric: took 181.97158ms to wait for pod list to return data ...
	I0731 17:01:08.231218   26392 default_sa.go:34] waiting for default service account to be created ...
	I0731 17:01:08.420603   26392 request.go:629] Waited for 189.314669ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.243:8443/api/v1/namespaces/default/serviceaccounts
	I0731 17:01:08.420709   26392 round_trippers.go:463] GET https://192.168.39.243:8443/api/v1/namespaces/default/serviceaccounts
	I0731 17:01:08.420718   26392 round_trippers.go:469] Request Headers:
	I0731 17:01:08.420726   26392 round_trippers.go:473]     Accept: application/json, */*
	I0731 17:01:08.420731   26392 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 17:01:08.424009   26392 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 17:01:08.424203   26392 default_sa.go:45] found service account: "default"
	I0731 17:01:08.424218   26392 default_sa.go:55] duration metric: took 192.994736ms for default service account to be created ...
	I0731 17:01:08.424226   26392 system_pods.go:116] waiting for k8s-apps to be running ...
	I0731 17:01:08.620624   26392 request.go:629] Waited for 196.342137ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.243:8443/api/v1/namespaces/kube-system/pods
	I0731 17:01:08.620703   26392 round_trippers.go:463] GET https://192.168.39.243:8443/api/v1/namespaces/kube-system/pods
	I0731 17:01:08.620711   26392 round_trippers.go:469] Request Headers:
	I0731 17:01:08.620722   26392 round_trippers.go:473]     Accept: application/json, */*
	I0731 17:01:08.620728   26392 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 17:01:08.625553   26392 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0731 17:01:08.629849   26392 system_pods.go:86] 17 kube-system pods found
	I0731 17:01:08.629873   26392 system_pods.go:89] "coredns-7db6d8ff4d-nsx9j" [b2cde006-dbb7-4e6f-a5f1-cf7760740104] Running
	I0731 17:01:08.629880   26392 system_pods.go:89] "coredns-7db6d8ff4d-qbqb9" [4f76f862-d39e-4976-90e6-fb9a25cc485a] Running
	I0731 17:01:08.629884   26392 system_pods.go:89] "etcd-ha-234651" [c6fb7163-2f43-4b81-a3d9-e1550ddabc4c] Running
	I0731 17:01:08.629888   26392 system_pods.go:89] "etcd-ha-234651-m02" [ba88e585-8e9b-4177-82e0-4222210cfd2d] Running
	I0731 17:01:08.629893   26392 system_pods.go:89] "kindnet-phmdp" [f0e0736a-4005-4b0b-81fc-e4cea26a7d42] Running
	I0731 17:01:08.629896   26392 system_pods.go:89] "kindnet-wfbt4" [9eda8095-ce75-4043-8ddf-6e5663de8212] Running
	I0731 17:01:08.629900   26392 system_pods.go:89] "kube-apiserver-ha-234651" [ce3cbf02-100c-4437-8d1c-2ac4a2822866] Running
	I0731 17:01:08.629904   26392 system_pods.go:89] "kube-apiserver-ha-234651-m02" [e5eea595-36d9-497e-83be-aac5845b566a] Running
	I0731 17:01:08.629909   26392 system_pods.go:89] "kube-controller-manager-ha-234651" [3d111090-8fee-40ab-8cab-df7dd984bdb3] Running
	I0731 17:01:08.629913   26392 system_pods.go:89] "kube-controller-manager-ha-234651-m02" [28a10b93-7e55-4e2d-a5be-3605fa94fdb0] Running
	I0731 17:01:08.629917   26392 system_pods.go:89] "kube-proxy-b8dcw" [cbd2a846-0490-4465-b299-76b6fb0d5710] Running
	I0731 17:01:08.629921   26392 system_pods.go:89] "kube-proxy-jfgs8" [5ead85d8-0fd0-4900-8c02-2f23217ca208] Running
	I0731 17:01:08.629924   26392 system_pods.go:89] "kube-scheduler-ha-234651" [e1e2064c-17d7-4ea9-b6d3-b13c9ca34789] Running
	I0731 17:01:08.629928   26392 system_pods.go:89] "kube-scheduler-ha-234651-m02" [2ed86332-42ba-46c9-901c-38880e8ec54a] Running
	I0731 17:01:08.629931   26392 system_pods.go:89] "kube-vip-ha-234651" [205b811c-93e9-4f66-9d0e-67abbc8ff1ef] Running
	I0731 17:01:08.629935   26392 system_pods.go:89] "kube-vip-ha-234651-m02" [8fc0f86c-f043-44c2-aa8b-544fd6b7de22] Running
	I0731 17:01:08.629938   26392 system_pods.go:89] "storage-provisioner" [87455537-bdb8-438b-8122-db85bed01d09] Running
	I0731 17:01:08.629945   26392 system_pods.go:126] duration metric: took 205.71471ms to wait for k8s-apps to be running ...
	I0731 17:01:08.629953   26392 system_svc.go:44] waiting for kubelet service to be running ....
	I0731 17:01:08.629999   26392 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0731 17:01:08.644784   26392 system_svc.go:56] duration metric: took 14.820249ms WaitForService to wait for kubelet
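
The kubelet service wait is a "systemctl is-active --quiet" invocation whose exit code carries the answer. A hedged sketch of interpreting it, mirroring the logged arguments:

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        // --quiet suppresses output; exit code 0 means the unit is active.
        err := exec.Command("sudo", "systemctl", "is-active", "--quiet", "service", "kubelet").Run()
        if err != nil {
            fmt.Println("kubelet is not active:", err)
            return
        }
        fmt.Println("kubelet is active")
    }
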
	I0731 17:01:08.644812   26392 kubeadm.go:582] duration metric: took 21.615565367s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0731 17:01:08.644831   26392 node_conditions.go:102] verifying NodePressure condition ...
	I0731 17:01:08.821216   26392 request.go:629] Waited for 176.313806ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.243:8443/api/v1/nodes
	I0731 17:01:08.821273   26392 round_trippers.go:463] GET https://192.168.39.243:8443/api/v1/nodes
	I0731 17:01:08.821281   26392 round_trippers.go:469] Request Headers:
	I0731 17:01:08.821289   26392 round_trippers.go:473]     Accept: application/json, */*
	I0731 17:01:08.821295   26392 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 17:01:08.825439   26392 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0731 17:01:08.827911   26392 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0731 17:01:08.827947   26392 node_conditions.go:123] node cpu capacity is 2
	I0731 17:01:08.827957   26392 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0731 17:01:08.827961   26392 node_conditions.go:123] node cpu capacity is 2
	I0731 17:01:08.827966   26392 node_conditions.go:105] duration metric: took 183.129647ms to run NodePressure ...
	I0731 17:01:08.827976   26392 start.go:241] waiting for startup goroutines ...
	I0731 17:01:08.827996   26392 start.go:255] writing updated cluster config ...
	I0731 17:01:08.830200   26392 out.go:177] 
	I0731 17:01:08.831801   26392 config.go:182] Loaded profile config "ha-234651": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0731 17:01:08.831914   26392 profile.go:143] Saving config to /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/ha-234651/config.json ...
	I0731 17:01:08.833647   26392 out.go:177] * Starting "ha-234651-m03" control-plane node in "ha-234651" cluster
	I0731 17:01:08.834984   26392 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0731 17:01:08.835003   26392 cache.go:56] Caching tarball of preloaded images
	I0731 17:01:08.835090   26392 preload.go:172] Found /home/jenkins/minikube-integration/19349-8084/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0731 17:01:08.835100   26392 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on crio
	I0731 17:01:08.835245   26392 profile.go:143] Saving config to /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/ha-234651/config.json ...
	I0731 17:01:08.835428   26392 start.go:360] acquireMachinesLock for ha-234651-m03: {Name:mkd78eacfab7d6c3058c6674434b3d889ec957e0 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0731 17:01:08.835471   26392 start.go:364] duration metric: took 24.057µs to acquireMachinesLock for "ha-234651-m03"
	I0731 17:01:08.835487   26392 start.go:93] Provisioning new machine with config: &{Name:ha-234651 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kubernete
sVersion:v1.30.3 ClusterName:ha-234651 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.243 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.235 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-
dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror
: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m03 IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0731 17:01:08.835621   26392 start.go:125] createHost starting for "m03" (driver="kvm2")
	I0731 17:01:08.837089   26392 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0731 17:01:08.837187   26392 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 17:01:08.837222   26392 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 17:01:08.851620   26392 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33979
	I0731 17:01:08.851990   26392 main.go:141] libmachine: () Calling .GetVersion
	I0731 17:01:08.852406   26392 main.go:141] libmachine: Using API Version  1
	I0731 17:01:08.852426   26392 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 17:01:08.852766   26392 main.go:141] libmachine: () Calling .GetMachineName
	I0731 17:01:08.852945   26392 main.go:141] libmachine: (ha-234651-m03) Calling .GetMachineName
	I0731 17:01:08.853098   26392 main.go:141] libmachine: (ha-234651-m03) Calling .DriverName
	I0731 17:01:08.853223   26392 start.go:159] libmachine.API.Create for "ha-234651" (driver="kvm2")
	I0731 17:01:08.853244   26392 client.go:168] LocalClient.Create starting
	I0731 17:01:08.853269   26392 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19349-8084/.minikube/certs/ca.pem
	I0731 17:01:08.853298   26392 main.go:141] libmachine: Decoding PEM data...
	I0731 17:01:08.853311   26392 main.go:141] libmachine: Parsing certificate...
	I0731 17:01:08.853356   26392 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19349-8084/.minikube/certs/cert.pem
	I0731 17:01:08.853374   26392 main.go:141] libmachine: Decoding PEM data...
	I0731 17:01:08.853384   26392 main.go:141] libmachine: Parsing certificate...
	I0731 17:01:08.853399   26392 main.go:141] libmachine: Running pre-create checks...
	I0731 17:01:08.853406   26392 main.go:141] libmachine: (ha-234651-m03) Calling .PreCreateCheck
	I0731 17:01:08.853580   26392 main.go:141] libmachine: (ha-234651-m03) Calling .GetConfigRaw
	I0731 17:01:08.853893   26392 main.go:141] libmachine: Creating machine...
	I0731 17:01:08.853904   26392 main.go:141] libmachine: (ha-234651-m03) Calling .Create
	I0731 17:01:08.854005   26392 main.go:141] libmachine: (ha-234651-m03) Creating KVM machine...
	I0731 17:01:08.855060   26392 main.go:141] libmachine: (ha-234651-m03) DBG | found existing default KVM network
	I0731 17:01:08.855187   26392 main.go:141] libmachine: (ha-234651-m03) DBG | found existing private KVM network mk-ha-234651
	I0731 17:01:08.855288   26392 main.go:141] libmachine: (ha-234651-m03) Setting up store path in /home/jenkins/minikube-integration/19349-8084/.minikube/machines/ha-234651-m03 ...
	I0731 17:01:08.855326   26392 main.go:141] libmachine: (ha-234651-m03) Building disk image from file:///home/jenkins/minikube-integration/19349-8084/.minikube/cache/iso/amd64/minikube-v1.33.1-1722248113-19339-amd64.iso
	I0731 17:01:08.855425   26392 main.go:141] libmachine: (ha-234651-m03) DBG | I0731 17:01:08.855283   27175 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19349-8084/.minikube
	I0731 17:01:08.855467   26392 main.go:141] libmachine: (ha-234651-m03) Downloading /home/jenkins/minikube-integration/19349-8084/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19349-8084/.minikube/cache/iso/amd64/minikube-v1.33.1-1722248113-19339-amd64.iso...
	I0731 17:01:09.084535   26392 main.go:141] libmachine: (ha-234651-m03) DBG | I0731 17:01:09.084390   27175 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19349-8084/.minikube/machines/ha-234651-m03/id_rsa...
	I0731 17:01:09.496483   26392 main.go:141] libmachine: (ha-234651-m03) DBG | I0731 17:01:09.496377   27175 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19349-8084/.minikube/machines/ha-234651-m03/ha-234651-m03.rawdisk...
	I0731 17:01:09.496510   26392 main.go:141] libmachine: (ha-234651-m03) DBG | Writing magic tar header
	I0731 17:01:09.496524   26392 main.go:141] libmachine: (ha-234651-m03) DBG | Writing SSH key tar header
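
Creating the machine's id_rsa above amounts to generating an RSA keypair and writing it in OpenSSH-compatible form. A sketch under those assumptions (output paths shortened to the working directory, not the store path from the log):

    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "encoding/pem"
        "log"
        "os"

        "golang.org/x/crypto/ssh"
    )

    func main() {
        key, err := rsa.GenerateKey(rand.Reader, 2048)
        if err != nil {
            log.Fatal(err)
        }
        // PEM-encode the private key (what ends up in .../machines/<name>/id_rsa).
        privPEM := pem.EncodeToMemory(&pem.Block{Type: "RSA PRIVATE KEY", Bytes: x509.MarshalPKCS1PrivateKey(key)})
        if err := os.WriteFile("id_rsa", privPEM, 0600); err != nil {
            log.Fatal(err)
        }
        // Derive the authorized_keys form of the public half.
        pub, err := ssh.NewPublicKey(&key.PublicKey)
        if err != nil {
            log.Fatal(err)
        }
        if err := os.WriteFile("id_rsa.pub", ssh.MarshalAuthorizedKey(pub), 0644); err != nil {
            log.Fatal(err)
        }
        log.Println("wrote id_rsa and id_rsa.pub")
    }
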
	I0731 17:01:09.496538   26392 main.go:141] libmachine: (ha-234651-m03) DBG | I0731 17:01:09.496486   27175 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19349-8084/.minikube/machines/ha-234651-m03 ...
	I0731 17:01:09.496626   26392 main.go:141] libmachine: (ha-234651-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19349-8084/.minikube/machines/ha-234651-m03
	I0731 17:01:09.496670   26392 main.go:141] libmachine: (ha-234651-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19349-8084/.minikube/machines
	I0731 17:01:09.496685   26392 main.go:141] libmachine: (ha-234651-m03) Setting executable bit set on /home/jenkins/minikube-integration/19349-8084/.minikube/machines/ha-234651-m03 (perms=drwx------)
	I0731 17:01:09.496704   26392 main.go:141] libmachine: (ha-234651-m03) Setting executable bit set on /home/jenkins/minikube-integration/19349-8084/.minikube/machines (perms=drwxr-xr-x)
	I0731 17:01:09.496716   26392 main.go:141] libmachine: (ha-234651-m03) Setting executable bit set on /home/jenkins/minikube-integration/19349-8084/.minikube (perms=drwxr-xr-x)
	I0731 17:01:09.496729   26392 main.go:141] libmachine: (ha-234651-m03) Setting executable bit set on /home/jenkins/minikube-integration/19349-8084 (perms=drwxrwxr-x)
	I0731 17:01:09.496747   26392 main.go:141] libmachine: (ha-234651-m03) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0731 17:01:09.496760   26392 main.go:141] libmachine: (ha-234651-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19349-8084/.minikube
	I0731 17:01:09.496772   26392 main.go:141] libmachine: (ha-234651-m03) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0731 17:01:09.496789   26392 main.go:141] libmachine: (ha-234651-m03) Creating domain...
	I0731 17:01:09.496802   26392 main.go:141] libmachine: (ha-234651-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19349-8084
	I0731 17:01:09.496810   26392 main.go:141] libmachine: (ha-234651-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0731 17:01:09.496816   26392 main.go:141] libmachine: (ha-234651-m03) DBG | Checking permissions on dir: /home/jenkins
	I0731 17:01:09.496821   26392 main.go:141] libmachine: (ha-234651-m03) DBG | Checking permissions on dir: /home
	I0731 17:01:09.496832   26392 main.go:141] libmachine: (ha-234651-m03) DBG | Skipping /home - not owner
	I0731 17:01:09.497785   26392 main.go:141] libmachine: (ha-234651-m03) define libvirt domain using xml: 
	I0731 17:01:09.497804   26392 main.go:141] libmachine: (ha-234651-m03) <domain type='kvm'>
	I0731 17:01:09.497814   26392 main.go:141] libmachine: (ha-234651-m03)   <name>ha-234651-m03</name>
	I0731 17:01:09.497822   26392 main.go:141] libmachine: (ha-234651-m03)   <memory unit='MiB'>2200</memory>
	I0731 17:01:09.497830   26392 main.go:141] libmachine: (ha-234651-m03)   <vcpu>2</vcpu>
	I0731 17:01:09.497839   26392 main.go:141] libmachine: (ha-234651-m03)   <features>
	I0731 17:01:09.497848   26392 main.go:141] libmachine: (ha-234651-m03)     <acpi/>
	I0731 17:01:09.497857   26392 main.go:141] libmachine: (ha-234651-m03)     <apic/>
	I0731 17:01:09.497862   26392 main.go:141] libmachine: (ha-234651-m03)     <pae/>
	I0731 17:01:09.497866   26392 main.go:141] libmachine: (ha-234651-m03)     
	I0731 17:01:09.497872   26392 main.go:141] libmachine: (ha-234651-m03)   </features>
	I0731 17:01:09.497880   26392 main.go:141] libmachine: (ha-234651-m03)   <cpu mode='host-passthrough'>
	I0731 17:01:09.497885   26392 main.go:141] libmachine: (ha-234651-m03)   
	I0731 17:01:09.497891   26392 main.go:141] libmachine: (ha-234651-m03)   </cpu>
	I0731 17:01:09.497901   26392 main.go:141] libmachine: (ha-234651-m03)   <os>
	I0731 17:01:09.497908   26392 main.go:141] libmachine: (ha-234651-m03)     <type>hvm</type>
	I0731 17:01:09.497933   26392 main.go:141] libmachine: (ha-234651-m03)     <boot dev='cdrom'/>
	I0731 17:01:09.497954   26392 main.go:141] libmachine: (ha-234651-m03)     <boot dev='hd'/>
	I0731 17:01:09.497960   26392 main.go:141] libmachine: (ha-234651-m03)     <bootmenu enable='no'/>
	I0731 17:01:09.497967   26392 main.go:141] libmachine: (ha-234651-m03)   </os>
	I0731 17:01:09.497972   26392 main.go:141] libmachine: (ha-234651-m03)   <devices>
	I0731 17:01:09.497980   26392 main.go:141] libmachine: (ha-234651-m03)     <disk type='file' device='cdrom'>
	I0731 17:01:09.497990   26392 main.go:141] libmachine: (ha-234651-m03)       <source file='/home/jenkins/minikube-integration/19349-8084/.minikube/machines/ha-234651-m03/boot2docker.iso'/>
	I0731 17:01:09.497998   26392 main.go:141] libmachine: (ha-234651-m03)       <target dev='hdc' bus='scsi'/>
	I0731 17:01:09.498004   26392 main.go:141] libmachine: (ha-234651-m03)       <readonly/>
	I0731 17:01:09.498010   26392 main.go:141] libmachine: (ha-234651-m03)     </disk>
	I0731 17:01:09.498017   26392 main.go:141] libmachine: (ha-234651-m03)     <disk type='file' device='disk'>
	I0731 17:01:09.498026   26392 main.go:141] libmachine: (ha-234651-m03)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0731 17:01:09.498041   26392 main.go:141] libmachine: (ha-234651-m03)       <source file='/home/jenkins/minikube-integration/19349-8084/.minikube/machines/ha-234651-m03/ha-234651-m03.rawdisk'/>
	I0731 17:01:09.498054   26392 main.go:141] libmachine: (ha-234651-m03)       <target dev='hda' bus='virtio'/>
	I0731 17:01:09.498064   26392 main.go:141] libmachine: (ha-234651-m03)     </disk>
	I0731 17:01:09.498076   26392 main.go:141] libmachine: (ha-234651-m03)     <interface type='network'>
	I0731 17:01:09.498086   26392 main.go:141] libmachine: (ha-234651-m03)       <source network='mk-ha-234651'/>
	I0731 17:01:09.498098   26392 main.go:141] libmachine: (ha-234651-m03)       <model type='virtio'/>
	I0731 17:01:09.498108   26392 main.go:141] libmachine: (ha-234651-m03)     </interface>
	I0731 17:01:09.498123   26392 main.go:141] libmachine: (ha-234651-m03)     <interface type='network'>
	I0731 17:01:09.498139   26392 main.go:141] libmachine: (ha-234651-m03)       <source network='default'/>
	I0731 17:01:09.498148   26392 main.go:141] libmachine: (ha-234651-m03)       <model type='virtio'/>
	I0731 17:01:09.498152   26392 main.go:141] libmachine: (ha-234651-m03)     </interface>
	I0731 17:01:09.498158   26392 main.go:141] libmachine: (ha-234651-m03)     <serial type='pty'>
	I0731 17:01:09.498165   26392 main.go:141] libmachine: (ha-234651-m03)       <target port='0'/>
	I0731 17:01:09.498170   26392 main.go:141] libmachine: (ha-234651-m03)     </serial>
	I0731 17:01:09.498176   26392 main.go:141] libmachine: (ha-234651-m03)     <console type='pty'>
	I0731 17:01:09.498182   26392 main.go:141] libmachine: (ha-234651-m03)       <target type='serial' port='0'/>
	I0731 17:01:09.498189   26392 main.go:141] libmachine: (ha-234651-m03)     </console>
	I0731 17:01:09.498194   26392 main.go:141] libmachine: (ha-234651-m03)     <rng model='virtio'>
	I0731 17:01:09.498202   26392 main.go:141] libmachine: (ha-234651-m03)       <backend model='random'>/dev/random</backend>
	I0731 17:01:09.498212   26392 main.go:141] libmachine: (ha-234651-m03)     </rng>
	I0731 17:01:09.498223   26392 main.go:141] libmachine: (ha-234651-m03)     
	I0731 17:01:09.498235   26392 main.go:141] libmachine: (ha-234651-m03)     
	I0731 17:01:09.498251   26392 main.go:141] libmachine: (ha-234651-m03)   </devices>
	I0731 17:01:09.498272   26392 main.go:141] libmachine: (ha-234651-m03) </domain>
	I0731 17:01:09.498285   26392 main.go:141] libmachine: (ha-234651-m03) 
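
The XML printed above is handed to libvirt to define and then boot the domain. A hedged sketch using the libvirt Go bindings; the import path and input file name are assumptions, and this is not the kvm2 driver's actual code:

    package main

    import (
        "log"
        "os"

        libvirt "libvirt.org/go/libvirt"
    )

    func main() {
        conn, err := libvirt.NewConnect("qemu:///system") // same URI as KVMQemuURI in the config above
        if err != nil {
            log.Fatal(err)
        }
        defer conn.Close()

        xml, err := os.ReadFile("ha-234651-m03.xml") // a domain definition like the one logged above
        if err != nil {
            log.Fatal(err)
        }
        dom, err := conn.DomainDefineXML(string(xml)) // persistently define the domain
        if err != nil {
            log.Fatal(err)
        }
        defer dom.Free()
        if err := dom.Create(); err != nil { // boot it
            log.Fatal(err)
        }
        log.Println("domain defined and started")
    }
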
	I0731 17:01:09.505009   26392 main.go:141] libmachine: (ha-234651-m03) DBG | domain ha-234651-m03 has defined MAC address 52:54:00:98:32:f5 in network default
	I0731 17:01:09.505505   26392 main.go:141] libmachine: (ha-234651-m03) Ensuring networks are active...
	I0731 17:01:09.505525   26392 main.go:141] libmachine: (ha-234651-m03) DBG | domain ha-234651-m03 has defined MAC address 52:54:00:ac:c0:cf in network mk-ha-234651
	I0731 17:01:09.506245   26392 main.go:141] libmachine: (ha-234651-m03) Ensuring network default is active
	I0731 17:01:09.506588   26392 main.go:141] libmachine: (ha-234651-m03) Ensuring network mk-ha-234651 is active
	I0731 17:01:09.506926   26392 main.go:141] libmachine: (ha-234651-m03) Getting domain xml...
	I0731 17:01:09.507698   26392 main.go:141] libmachine: (ha-234651-m03) Creating domain...
	I0731 17:01:10.710510   26392 main.go:141] libmachine: (ha-234651-m03) Waiting to get IP...
	I0731 17:01:10.711248   26392 main.go:141] libmachine: (ha-234651-m03) DBG | domain ha-234651-m03 has defined MAC address 52:54:00:ac:c0:cf in network mk-ha-234651
	I0731 17:01:10.711639   26392 main.go:141] libmachine: (ha-234651-m03) DBG | unable to find current IP address of domain ha-234651-m03 in network mk-ha-234651
	I0731 17:01:10.711664   26392 main.go:141] libmachine: (ha-234651-m03) DBG | I0731 17:01:10.711622   27175 retry.go:31] will retry after 214.209915ms: waiting for machine to come up
	I0731 17:01:10.926907   26392 main.go:141] libmachine: (ha-234651-m03) DBG | domain ha-234651-m03 has defined MAC address 52:54:00:ac:c0:cf in network mk-ha-234651
	I0731 17:01:10.927356   26392 main.go:141] libmachine: (ha-234651-m03) DBG | unable to find current IP address of domain ha-234651-m03 in network mk-ha-234651
	I0731 17:01:10.927378   26392 main.go:141] libmachine: (ha-234651-m03) DBG | I0731 17:01:10.927303   27175 retry.go:31] will retry after 376.598663ms: waiting for machine to come up
	I0731 17:01:11.305743   26392 main.go:141] libmachine: (ha-234651-m03) DBG | domain ha-234651-m03 has defined MAC address 52:54:00:ac:c0:cf in network mk-ha-234651
	I0731 17:01:11.306195   26392 main.go:141] libmachine: (ha-234651-m03) DBG | unable to find current IP address of domain ha-234651-m03 in network mk-ha-234651
	I0731 17:01:11.306219   26392 main.go:141] libmachine: (ha-234651-m03) DBG | I0731 17:01:11.306160   27175 retry.go:31] will retry after 328.55691ms: waiting for machine to come up
	I0731 17:01:11.636615   26392 main.go:141] libmachine: (ha-234651-m03) DBG | domain ha-234651-m03 has defined MAC address 52:54:00:ac:c0:cf in network mk-ha-234651
	I0731 17:01:11.637088   26392 main.go:141] libmachine: (ha-234651-m03) DBG | unable to find current IP address of domain ha-234651-m03 in network mk-ha-234651
	I0731 17:01:11.637117   26392 main.go:141] libmachine: (ha-234651-m03) DBG | I0731 17:01:11.637023   27175 retry.go:31] will retry after 509.868926ms: waiting for machine to come up
	I0731 17:01:12.148495   26392 main.go:141] libmachine: (ha-234651-m03) DBG | domain ha-234651-m03 has defined MAC address 52:54:00:ac:c0:cf in network mk-ha-234651
	I0731 17:01:12.148932   26392 main.go:141] libmachine: (ha-234651-m03) DBG | unable to find current IP address of domain ha-234651-m03 in network mk-ha-234651
	I0731 17:01:12.148953   26392 main.go:141] libmachine: (ha-234651-m03) DBG | I0731 17:01:12.148904   27175 retry.go:31] will retry after 489.995297ms: waiting for machine to come up
	I0731 17:01:12.640709   26392 main.go:141] libmachine: (ha-234651-m03) DBG | domain ha-234651-m03 has defined MAC address 52:54:00:ac:c0:cf in network mk-ha-234651
	I0731 17:01:12.641266   26392 main.go:141] libmachine: (ha-234651-m03) DBG | unable to find current IP address of domain ha-234651-m03 in network mk-ha-234651
	I0731 17:01:12.641299   26392 main.go:141] libmachine: (ha-234651-m03) DBG | I0731 17:01:12.641207   27175 retry.go:31] will retry after 891.834852ms: waiting for machine to come up
	I0731 17:01:13.534824   26392 main.go:141] libmachine: (ha-234651-m03) DBG | domain ha-234651-m03 has defined MAC address 52:54:00:ac:c0:cf in network mk-ha-234651
	I0731 17:01:13.535341   26392 main.go:141] libmachine: (ha-234651-m03) DBG | unable to find current IP address of domain ha-234651-m03 in network mk-ha-234651
	I0731 17:01:13.535368   26392 main.go:141] libmachine: (ha-234651-m03) DBG | I0731 17:01:13.535293   27175 retry.go:31] will retry after 740.342338ms: waiting for machine to come up
	I0731 17:01:14.277390   26392 main.go:141] libmachine: (ha-234651-m03) DBG | domain ha-234651-m03 has defined MAC address 52:54:00:ac:c0:cf in network mk-ha-234651
	I0731 17:01:14.277830   26392 main.go:141] libmachine: (ha-234651-m03) DBG | unable to find current IP address of domain ha-234651-m03 in network mk-ha-234651
	I0731 17:01:14.277852   26392 main.go:141] libmachine: (ha-234651-m03) DBG | I0731 17:01:14.277792   27175 retry.go:31] will retry after 1.412219536s: waiting for machine to come up
	I0731 17:01:15.692325   26392 main.go:141] libmachine: (ha-234651-m03) DBG | domain ha-234651-m03 has defined MAC address 52:54:00:ac:c0:cf in network mk-ha-234651
	I0731 17:01:15.692790   26392 main.go:141] libmachine: (ha-234651-m03) DBG | unable to find current IP address of domain ha-234651-m03 in network mk-ha-234651
	I0731 17:01:15.692832   26392 main.go:141] libmachine: (ha-234651-m03) DBG | I0731 17:01:15.692742   27175 retry.go:31] will retry after 1.272314742s: waiting for machine to come up
	I0731 17:01:16.966944   26392 main.go:141] libmachine: (ha-234651-m03) DBG | domain ha-234651-m03 has defined MAC address 52:54:00:ac:c0:cf in network mk-ha-234651
	I0731 17:01:16.967394   26392 main.go:141] libmachine: (ha-234651-m03) DBG | unable to find current IP address of domain ha-234651-m03 in network mk-ha-234651
	I0731 17:01:16.967424   26392 main.go:141] libmachine: (ha-234651-m03) DBG | I0731 17:01:16.967344   27175 retry.go:31] will retry after 1.443011677s: waiting for machine to come up
	I0731 17:01:18.411974   26392 main.go:141] libmachine: (ha-234651-m03) DBG | domain ha-234651-m03 has defined MAC address 52:54:00:ac:c0:cf in network mk-ha-234651
	I0731 17:01:18.412499   26392 main.go:141] libmachine: (ha-234651-m03) DBG | unable to find current IP address of domain ha-234651-m03 in network mk-ha-234651
	I0731 17:01:18.412529   26392 main.go:141] libmachine: (ha-234651-m03) DBG | I0731 17:01:18.412449   27175 retry.go:31] will retry after 2.743615987s: waiting for machine to come up
	I0731 17:01:21.157559   26392 main.go:141] libmachine: (ha-234651-m03) DBG | domain ha-234651-m03 has defined MAC address 52:54:00:ac:c0:cf in network mk-ha-234651
	I0731 17:01:21.157996   26392 main.go:141] libmachine: (ha-234651-m03) DBG | unable to find current IP address of domain ha-234651-m03 in network mk-ha-234651
	I0731 17:01:21.158043   26392 main.go:141] libmachine: (ha-234651-m03) DBG | I0731 17:01:21.157950   27175 retry.go:31] will retry after 2.604564384s: waiting for machine to come up
	I0731 17:01:23.763967   26392 main.go:141] libmachine: (ha-234651-m03) DBG | domain ha-234651-m03 has defined MAC address 52:54:00:ac:c0:cf in network mk-ha-234651
	I0731 17:01:23.764422   26392 main.go:141] libmachine: (ha-234651-m03) DBG | unable to find current IP address of domain ha-234651-m03 in network mk-ha-234651
	I0731 17:01:23.764443   26392 main.go:141] libmachine: (ha-234651-m03) DBG | I0731 17:01:23.764380   27175 retry.go:31] will retry after 3.508285757s: waiting for machine to come up
	I0731 17:01:27.276084   26392 main.go:141] libmachine: (ha-234651-m03) DBG | domain ha-234651-m03 has defined MAC address 52:54:00:ac:c0:cf in network mk-ha-234651
	I0731 17:01:27.276506   26392 main.go:141] libmachine: (ha-234651-m03) DBG | unable to find current IP address of domain ha-234651-m03 in network mk-ha-234651
	I0731 17:01:27.276536   26392 main.go:141] libmachine: (ha-234651-m03) DBG | I0731 17:01:27.276462   27175 retry.go:31] will retry after 4.892278928s: waiting for machine to come up
	I0731 17:01:32.172161   26392 main.go:141] libmachine: (ha-234651-m03) DBG | domain ha-234651-m03 has defined MAC address 52:54:00:ac:c0:cf in network mk-ha-234651
	I0731 17:01:32.172728   26392 main.go:141] libmachine: (ha-234651-m03) Found IP for machine: 192.168.39.139
	I0731 17:01:32.172753   26392 main.go:141] libmachine: (ha-234651-m03) DBG | domain ha-234651-m03 has current primary IP address 192.168.39.139 and MAC address 52:54:00:ac:c0:cf in network mk-ha-234651
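
The "waiting for machine to come up" loop above retries the address lookup with growing, jittered delays until the DHCP lease appears. A generic sketch of that pattern; lookupIP is a stand-in, whereas the real code reads the lease table for the domain's MAC address:

    package main

    import (
        "errors"
        "fmt"
        "math/rand"
        "time"
    )

    func lookupIP() (string, error) {
        // Placeholder: always fails, to show the retry behaviour.
        return "", errors.New("unable to find current IP address")
    }

    func main() {
        delay := 200 * time.Millisecond
        for attempt := 1; attempt <= 10; attempt++ {
            ip, err := lookupIP()
            if err == nil {
                fmt.Println("found IP:", ip)
                return
            }
            // Jittered, roughly growing delay, similar to the intervals in the log.
            sleep := delay + time.Duration(rand.Int63n(int64(delay)))
            fmt.Printf("retry %d: %v, will retry after %v\n", attempt, err, sleep)
            time.Sleep(sleep)
            delay *= 2
        }
    }
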
	I0731 17:01:32.172762   26392 main.go:141] libmachine: (ha-234651-m03) Reserving static IP address...
	I0731 17:01:32.173205   26392 main.go:141] libmachine: (ha-234651-m03) DBG | unable to find host DHCP lease matching {name: "ha-234651-m03", mac: "52:54:00:ac:c0:cf", ip: "192.168.39.139"} in network mk-ha-234651
	I0731 17:01:32.245008   26392 main.go:141] libmachine: (ha-234651-m03) DBG | Getting to WaitForSSH function...
	I0731 17:01:32.245035   26392 main.go:141] libmachine: (ha-234651-m03) Reserved static IP address: 192.168.39.139
	I0731 17:01:32.245049   26392 main.go:141] libmachine: (ha-234651-m03) Waiting for SSH to be available...
	I0731 17:01:32.247523   26392 main.go:141] libmachine: (ha-234651-m03) DBG | domain ha-234651-m03 has defined MAC address 52:54:00:ac:c0:cf in network mk-ha-234651
	I0731 17:01:32.247958   26392 main.go:141] libmachine: (ha-234651-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ac:c0:cf", ip: ""} in network mk-ha-234651: {Iface:virbr1 ExpiryTime:2024-07-31 18:01:23 +0000 UTC Type:0 Mac:52:54:00:ac:c0:cf Iaid: IPaddr:192.168.39.139 Prefix:24 Hostname:minikube Clientid:01:52:54:00:ac:c0:cf}
	I0731 17:01:32.248074   26392 main.go:141] libmachine: (ha-234651-m03) DBG | domain ha-234651-m03 has defined IP address 192.168.39.139 and MAC address 52:54:00:ac:c0:cf in network mk-ha-234651
	I0731 17:01:32.248199   26392 main.go:141] libmachine: (ha-234651-m03) DBG | Using SSH client type: external
	I0731 17:01:32.248227   26392 main.go:141] libmachine: (ha-234651-m03) DBG | Using SSH private key: /home/jenkins/minikube-integration/19349-8084/.minikube/machines/ha-234651-m03/id_rsa (-rw-------)
	I0731 17:01:32.248270   26392 main.go:141] libmachine: (ha-234651-m03) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.139 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19349-8084/.minikube/machines/ha-234651-m03/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0731 17:01:32.248304   26392 main.go:141] libmachine: (ha-234651-m03) DBG | About to run SSH command:
	I0731 17:01:32.248322   26392 main.go:141] libmachine: (ha-234651-m03) DBG | exit 0
	I0731 17:01:32.370824   26392 main.go:141] libmachine: (ha-234651-m03) DBG | SSH cmd err, output: <nil>: 
	I0731 17:01:32.371066   26392 main.go:141] libmachine: (ha-234651-m03) KVM machine creation complete!
	I0731 17:01:32.371306   26392 main.go:141] libmachine: (ha-234651-m03) Calling .GetConfigRaw
	I0731 17:01:32.371797   26392 main.go:141] libmachine: (ha-234651-m03) Calling .DriverName
	I0731 17:01:32.371998   26392 main.go:141] libmachine: (ha-234651-m03) Calling .DriverName
	I0731 17:01:32.372156   26392 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0731 17:01:32.372169   26392 main.go:141] libmachine: (ha-234651-m03) Calling .GetState
	I0731 17:01:32.373922   26392 main.go:141] libmachine: Detecting operating system of created instance...
	I0731 17:01:32.373935   26392 main.go:141] libmachine: Waiting for SSH to be available...
	I0731 17:01:32.373940   26392 main.go:141] libmachine: Getting to WaitForSSH function...
	I0731 17:01:32.373946   26392 main.go:141] libmachine: (ha-234651-m03) Calling .GetSSHHostname
	I0731 17:01:32.376008   26392 main.go:141] libmachine: (ha-234651-m03) DBG | domain ha-234651-m03 has defined MAC address 52:54:00:ac:c0:cf in network mk-ha-234651
	I0731 17:01:32.376384   26392 main.go:141] libmachine: (ha-234651-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ac:c0:cf", ip: ""} in network mk-ha-234651: {Iface:virbr1 ExpiryTime:2024-07-31 18:01:23 +0000 UTC Type:0 Mac:52:54:00:ac:c0:cf Iaid: IPaddr:192.168.39.139 Prefix:24 Hostname:ha-234651-m03 Clientid:01:52:54:00:ac:c0:cf}
	I0731 17:01:32.376408   26392 main.go:141] libmachine: (ha-234651-m03) DBG | domain ha-234651-m03 has defined IP address 192.168.39.139 and MAC address 52:54:00:ac:c0:cf in network mk-ha-234651
	I0731 17:01:32.376620   26392 main.go:141] libmachine: (ha-234651-m03) Calling .GetSSHPort
	I0731 17:01:32.376804   26392 main.go:141] libmachine: (ha-234651-m03) Calling .GetSSHKeyPath
	I0731 17:01:32.376989   26392 main.go:141] libmachine: (ha-234651-m03) Calling .GetSSHKeyPath
	I0731 17:01:32.377143   26392 main.go:141] libmachine: (ha-234651-m03) Calling .GetSSHUsername
	I0731 17:01:32.377429   26392 main.go:141] libmachine: Using SSH client type: native
	I0731 17:01:32.377649   26392 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.139 22 <nil> <nil>}
	I0731 17:01:32.377660   26392 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0731 17:01:32.474217   26392 main.go:141] libmachine: SSH cmd err, output: <nil>: 
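
The WaitForSSH step boils down to running "exit 0" over SSH with the machine's key and treating a clean exit as readiness. A sketch with the host, user, port and key path taken from the log and everything else illustrative:

    package main

    import (
        "log"
        "os"
        "time"

        "golang.org/x/crypto/ssh"
    )

    func main() {
        keyBytes, err := os.ReadFile("/home/jenkins/minikube-integration/19349-8084/.minikube/machines/ha-234651-m03/id_rsa")
        if err != nil {
            log.Fatal(err)
        }
        signer, err := ssh.ParsePrivateKey(keyBytes)
        if err != nil {
            log.Fatal(err)
        }
        cfg := &ssh.ClientConfig{
            User:            "docker",
            Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
            HostKeyCallback: ssh.InsecureIgnoreHostKey(), // matches StrictHostKeyChecking=no in the log
            Timeout:         10 * time.Second,
        }
        client, err := ssh.Dial("tcp", "192.168.39.139:22", cfg)
        if err != nil {
            log.Fatal("ssh not ready: ", err)
        }
        defer client.Close()
        sess, err := client.NewSession()
        if err != nil {
            log.Fatal(err)
        }
        defer sess.Close()
        if err := sess.Run("exit 0"); err != nil {
            log.Fatal(err)
        }
        log.Println("SSH is available")
    }
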
	I0731 17:01:32.474239   26392 main.go:141] libmachine: Detecting the provisioner...
	I0731 17:01:32.474250   26392 main.go:141] libmachine: (ha-234651-m03) Calling .GetSSHHostname
	I0731 17:01:32.477335   26392 main.go:141] libmachine: (ha-234651-m03) DBG | domain ha-234651-m03 has defined MAC address 52:54:00:ac:c0:cf in network mk-ha-234651
	I0731 17:01:32.477812   26392 main.go:141] libmachine: (ha-234651-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ac:c0:cf", ip: ""} in network mk-ha-234651: {Iface:virbr1 ExpiryTime:2024-07-31 18:01:23 +0000 UTC Type:0 Mac:52:54:00:ac:c0:cf Iaid: IPaddr:192.168.39.139 Prefix:24 Hostname:ha-234651-m03 Clientid:01:52:54:00:ac:c0:cf}
	I0731 17:01:32.477835   26392 main.go:141] libmachine: (ha-234651-m03) DBG | domain ha-234651-m03 has defined IP address 192.168.39.139 and MAC address 52:54:00:ac:c0:cf in network mk-ha-234651
	I0731 17:01:32.477980   26392 main.go:141] libmachine: (ha-234651-m03) Calling .GetSSHPort
	I0731 17:01:32.478177   26392 main.go:141] libmachine: (ha-234651-m03) Calling .GetSSHKeyPath
	I0731 17:01:32.478379   26392 main.go:141] libmachine: (ha-234651-m03) Calling .GetSSHKeyPath
	I0731 17:01:32.478559   26392 main.go:141] libmachine: (ha-234651-m03) Calling .GetSSHUsername
	I0731 17:01:32.478713   26392 main.go:141] libmachine: Using SSH client type: native
	I0731 17:01:32.478909   26392 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.139 22 <nil> <nil>}
	I0731 17:01:32.478921   26392 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0731 17:01:32.579743   26392 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0731 17:01:32.579798   26392 main.go:141] libmachine: found compatible host: buildroot
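
Provisioner detection reads /etc/os-release and matches on its ID field (ID=buildroot above). A small sketch of parsing that KEY=VALUE format:

    package main

    import (
        "bufio"
        "fmt"
        "os"
        "strings"
    )

    func main() {
        f, err := os.Open("/etc/os-release")
        if err != nil {
            fmt.Println(err)
            return
        }
        defer f.Close()
        info := map[string]string{}
        sc := bufio.NewScanner(f)
        for sc.Scan() {
            line := strings.TrimSpace(sc.Text())
            if line == "" || strings.HasPrefix(line, "#") {
                continue
            }
            k, v, ok := strings.Cut(line, "=")
            if !ok {
                continue
            }
            info[k] = strings.Trim(v, `"`) // values may be quoted, e.g. PRETTY_NAME
        }
        fmt.Println("ID:", info["ID"], "VERSION_ID:", info["VERSION_ID"])
    }
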
	I0731 17:01:32.579805   26392 main.go:141] libmachine: Provisioning with buildroot...
	I0731 17:01:32.579811   26392 main.go:141] libmachine: (ha-234651-m03) Calling .GetMachineName
	I0731 17:01:32.580068   26392 buildroot.go:166] provisioning hostname "ha-234651-m03"
	I0731 17:01:32.580096   26392 main.go:141] libmachine: (ha-234651-m03) Calling .GetMachineName
	I0731 17:01:32.580266   26392 main.go:141] libmachine: (ha-234651-m03) Calling .GetSSHHostname
	I0731 17:01:32.582796   26392 main.go:141] libmachine: (ha-234651-m03) DBG | domain ha-234651-m03 has defined MAC address 52:54:00:ac:c0:cf in network mk-ha-234651
	I0731 17:01:32.583164   26392 main.go:141] libmachine: (ha-234651-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ac:c0:cf", ip: ""} in network mk-ha-234651: {Iface:virbr1 ExpiryTime:2024-07-31 18:01:23 +0000 UTC Type:0 Mac:52:54:00:ac:c0:cf Iaid: IPaddr:192.168.39.139 Prefix:24 Hostname:ha-234651-m03 Clientid:01:52:54:00:ac:c0:cf}
	I0731 17:01:32.583194   26392 main.go:141] libmachine: (ha-234651-m03) DBG | domain ha-234651-m03 has defined IP address 192.168.39.139 and MAC address 52:54:00:ac:c0:cf in network mk-ha-234651
	I0731 17:01:32.583353   26392 main.go:141] libmachine: (ha-234651-m03) Calling .GetSSHPort
	I0731 17:01:32.583512   26392 main.go:141] libmachine: (ha-234651-m03) Calling .GetSSHKeyPath
	I0731 17:01:32.583651   26392 main.go:141] libmachine: (ha-234651-m03) Calling .GetSSHKeyPath
	I0731 17:01:32.583770   26392 main.go:141] libmachine: (ha-234651-m03) Calling .GetSSHUsername
	I0731 17:01:32.583947   26392 main.go:141] libmachine: Using SSH client type: native
	I0731 17:01:32.584123   26392 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.139 22 <nil> <nil>}
	I0731 17:01:32.584143   26392 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-234651-m03 && echo "ha-234651-m03" | sudo tee /etc/hostname
	I0731 17:01:32.697431   26392 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-234651-m03
	
	I0731 17:01:32.697455   26392 main.go:141] libmachine: (ha-234651-m03) Calling .GetSSHHostname
	I0731 17:01:32.700299   26392 main.go:141] libmachine: (ha-234651-m03) DBG | domain ha-234651-m03 has defined MAC address 52:54:00:ac:c0:cf in network mk-ha-234651
	I0731 17:01:32.700674   26392 main.go:141] libmachine: (ha-234651-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ac:c0:cf", ip: ""} in network mk-ha-234651: {Iface:virbr1 ExpiryTime:2024-07-31 18:01:23 +0000 UTC Type:0 Mac:52:54:00:ac:c0:cf Iaid: IPaddr:192.168.39.139 Prefix:24 Hostname:ha-234651-m03 Clientid:01:52:54:00:ac:c0:cf}
	I0731 17:01:32.700705   26392 main.go:141] libmachine: (ha-234651-m03) DBG | domain ha-234651-m03 has defined IP address 192.168.39.139 and MAC address 52:54:00:ac:c0:cf in network mk-ha-234651
	I0731 17:01:32.700940   26392 main.go:141] libmachine: (ha-234651-m03) Calling .GetSSHPort
	I0731 17:01:32.701126   26392 main.go:141] libmachine: (ha-234651-m03) Calling .GetSSHKeyPath
	I0731 17:01:32.701281   26392 main.go:141] libmachine: (ha-234651-m03) Calling .GetSSHKeyPath
	I0731 17:01:32.701433   26392 main.go:141] libmachine: (ha-234651-m03) Calling .GetSSHUsername
	I0731 17:01:32.701591   26392 main.go:141] libmachine: Using SSH client type: native
	I0731 17:01:32.701797   26392 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.139 22 <nil> <nil>}
	I0731 17:01:32.701822   26392 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-234651-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-234651-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-234651-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0731 17:01:32.807070   26392 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0731 17:01:32.807099   26392 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19349-8084/.minikube CaCertPath:/home/jenkins/minikube-integration/19349-8084/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19349-8084/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19349-8084/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19349-8084/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19349-8084/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19349-8084/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19349-8084/.minikube}
	I0731 17:01:32.807141   26392 buildroot.go:174] setting up certificates
	I0731 17:01:32.807151   26392 provision.go:84] configureAuth start
	I0731 17:01:32.807160   26392 main.go:141] libmachine: (ha-234651-m03) Calling .GetMachineName
	I0731 17:01:32.807456   26392 main.go:141] libmachine: (ha-234651-m03) Calling .GetIP
	I0731 17:01:32.810372   26392 main.go:141] libmachine: (ha-234651-m03) DBG | domain ha-234651-m03 has defined MAC address 52:54:00:ac:c0:cf in network mk-ha-234651
	I0731 17:01:32.810735   26392 main.go:141] libmachine: (ha-234651-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ac:c0:cf", ip: ""} in network mk-ha-234651: {Iface:virbr1 ExpiryTime:2024-07-31 18:01:23 +0000 UTC Type:0 Mac:52:54:00:ac:c0:cf Iaid: IPaddr:192.168.39.139 Prefix:24 Hostname:ha-234651-m03 Clientid:01:52:54:00:ac:c0:cf}
	I0731 17:01:32.810781   26392 main.go:141] libmachine: (ha-234651-m03) DBG | domain ha-234651-m03 has defined IP address 192.168.39.139 and MAC address 52:54:00:ac:c0:cf in network mk-ha-234651
	I0731 17:01:32.810910   26392 main.go:141] libmachine: (ha-234651-m03) Calling .GetSSHHostname
	I0731 17:01:32.812999   26392 main.go:141] libmachine: (ha-234651-m03) DBG | domain ha-234651-m03 has defined MAC address 52:54:00:ac:c0:cf in network mk-ha-234651
	I0731 17:01:32.813395   26392 main.go:141] libmachine: (ha-234651-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ac:c0:cf", ip: ""} in network mk-ha-234651: {Iface:virbr1 ExpiryTime:2024-07-31 18:01:23 +0000 UTC Type:0 Mac:52:54:00:ac:c0:cf Iaid: IPaddr:192.168.39.139 Prefix:24 Hostname:ha-234651-m03 Clientid:01:52:54:00:ac:c0:cf}
	I0731 17:01:32.813420   26392 main.go:141] libmachine: (ha-234651-m03) DBG | domain ha-234651-m03 has defined IP address 192.168.39.139 and MAC address 52:54:00:ac:c0:cf in network mk-ha-234651
	I0731 17:01:32.813557   26392 provision.go:143] copyHostCerts
	I0731 17:01:32.813586   26392 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19349-8084/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19349-8084/.minikube/key.pem
	I0731 17:01:32.813627   26392 exec_runner.go:144] found /home/jenkins/minikube-integration/19349-8084/.minikube/key.pem, removing ...
	I0731 17:01:32.813638   26392 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19349-8084/.minikube/key.pem
	I0731 17:01:32.813733   26392 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19349-8084/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19349-8084/.minikube/key.pem (1675 bytes)
	I0731 17:01:32.813818   26392 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19349-8084/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19349-8084/.minikube/ca.pem
	I0731 17:01:32.813837   26392 exec_runner.go:144] found /home/jenkins/minikube-integration/19349-8084/.minikube/ca.pem, removing ...
	I0731 17:01:32.813845   26392 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19349-8084/.minikube/ca.pem
	I0731 17:01:32.813881   26392 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19349-8084/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19349-8084/.minikube/ca.pem (1082 bytes)
	I0731 17:01:32.813948   26392 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19349-8084/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19349-8084/.minikube/cert.pem
	I0731 17:01:32.813973   26392 exec_runner.go:144] found /home/jenkins/minikube-integration/19349-8084/.minikube/cert.pem, removing ...
	I0731 17:01:32.813981   26392 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19349-8084/.minikube/cert.pem
	I0731 17:01:32.814008   26392 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19349-8084/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19349-8084/.minikube/cert.pem (1123 bytes)
	I0731 17:01:32.814064   26392 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19349-8084/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19349-8084/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19349-8084/.minikube/certs/ca-key.pem org=jenkins.ha-234651-m03 san=[127.0.0.1 192.168.39.139 ha-234651-m03 localhost minikube]
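
The server certificate generated above carries the listed SANs (loopback, the machine IP, the hostname, localhost and minikube) and is signed by the profile's CA. A condensed crypto/x509 sketch of issuing such a certificate; it is self-contained, so a throwaway CA stands in for ca.pem/ca-key.pem:

    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "log"
        "math/big"
        "net"
        "os"
        "time"
    )

    func check(err error) {
        if err != nil {
            log.Fatal(err)
        }
    }

    func main() {
        // Throwaway CA standing in for the profile's ca.pem / ca-key.pem.
        caKey, err := rsa.GenerateKey(rand.Reader, 2048)
        check(err)
        caTmpl := &x509.Certificate{
            SerialNumber:          big.NewInt(1),
            Subject:               pkix.Name{CommonName: "minikubeCA"},
            NotBefore:             time.Now(),
            NotAfter:              time.Now().Add(24 * time.Hour),
            IsCA:                  true,
            KeyUsage:              x509.KeyUsageCertSign,
            BasicConstraintsValid: true,
        }
        caDER, err := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
        check(err)
        caCert, err := x509.ParseCertificate(caDER)
        check(err)

        // Server certificate with the same SAN set and org as the log line above.
        srvKey, err := rsa.GenerateKey(rand.Reader, 2048)
        check(err)
        srvTmpl := &x509.Certificate{
            SerialNumber: big.NewInt(2),
            Subject:      pkix.Name{Organization: []string{"jenkins.ha-234651-m03"}},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().Add(24 * time.Hour),
            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
            DNSNames:     []string{"ha-234651-m03", "localhost", "minikube"},
            IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.139")},
        }
        der, err := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
        check(err)
        check(pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der}))
    }
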
	I0731 17:01:33.066175   26392 provision.go:177] copyRemoteCerts
	I0731 17:01:33.066233   26392 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0731 17:01:33.066255   26392 main.go:141] libmachine: (ha-234651-m03) Calling .GetSSHHostname
	I0731 17:01:33.068872   26392 main.go:141] libmachine: (ha-234651-m03) DBG | domain ha-234651-m03 has defined MAC address 52:54:00:ac:c0:cf in network mk-ha-234651
	I0731 17:01:33.069232   26392 main.go:141] libmachine: (ha-234651-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ac:c0:cf", ip: ""} in network mk-ha-234651: {Iface:virbr1 ExpiryTime:2024-07-31 18:01:23 +0000 UTC Type:0 Mac:52:54:00:ac:c0:cf Iaid: IPaddr:192.168.39.139 Prefix:24 Hostname:ha-234651-m03 Clientid:01:52:54:00:ac:c0:cf}
	I0731 17:01:33.069260   26392 main.go:141] libmachine: (ha-234651-m03) DBG | domain ha-234651-m03 has defined IP address 192.168.39.139 and MAC address 52:54:00:ac:c0:cf in network mk-ha-234651
	I0731 17:01:33.069459   26392 main.go:141] libmachine: (ha-234651-m03) Calling .GetSSHPort
	I0731 17:01:33.069645   26392 main.go:141] libmachine: (ha-234651-m03) Calling .GetSSHKeyPath
	I0731 17:01:33.069791   26392 main.go:141] libmachine: (ha-234651-m03) Calling .GetSSHUsername
	I0731 17:01:33.069906   26392 sshutil.go:53] new ssh client: &{IP:192.168.39.139 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19349-8084/.minikube/machines/ha-234651-m03/id_rsa Username:docker}
	I0731 17:01:33.153992   26392 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19349-8084/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0731 17:01:33.154068   26392 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19349-8084/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0731 17:01:33.177299   26392 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19349-8084/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0731 17:01:33.177378   26392 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19349-8084/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0731 17:01:33.199207   26392 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19349-8084/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0731 17:01:33.199275   26392 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19349-8084/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0731 17:01:33.221591   26392 provision.go:87] duration metric: took 414.430702ms to configureAuth
	I0731 17:01:33.221621   26392 buildroot.go:189] setting minikube options for container-runtime
	I0731 17:01:33.221823   26392 config.go:182] Loaded profile config "ha-234651": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0731 17:01:33.221901   26392 main.go:141] libmachine: (ha-234651-m03) Calling .GetSSHHostname
	I0731 17:01:33.224934   26392 main.go:141] libmachine: (ha-234651-m03) DBG | domain ha-234651-m03 has defined MAC address 52:54:00:ac:c0:cf in network mk-ha-234651
	I0731 17:01:33.225386   26392 main.go:141] libmachine: (ha-234651-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ac:c0:cf", ip: ""} in network mk-ha-234651: {Iface:virbr1 ExpiryTime:2024-07-31 18:01:23 +0000 UTC Type:0 Mac:52:54:00:ac:c0:cf Iaid: IPaddr:192.168.39.139 Prefix:24 Hostname:ha-234651-m03 Clientid:01:52:54:00:ac:c0:cf}
	I0731 17:01:33.225431   26392 main.go:141] libmachine: (ha-234651-m03) DBG | domain ha-234651-m03 has defined IP address 192.168.39.139 and MAC address 52:54:00:ac:c0:cf in network mk-ha-234651
	I0731 17:01:33.225586   26392 main.go:141] libmachine: (ha-234651-m03) Calling .GetSSHPort
	I0731 17:01:33.225786   26392 main.go:141] libmachine: (ha-234651-m03) Calling .GetSSHKeyPath
	I0731 17:01:33.225945   26392 main.go:141] libmachine: (ha-234651-m03) Calling .GetSSHKeyPath
	I0731 17:01:33.226081   26392 main.go:141] libmachine: (ha-234651-m03) Calling .GetSSHUsername
	I0731 17:01:33.226239   26392 main.go:141] libmachine: Using SSH client type: native
	I0731 17:01:33.226402   26392 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.139 22 <nil> <nil>}
	I0731 17:01:33.226415   26392 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0731 17:01:33.488220   26392 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0731 17:01:33.488252   26392 main.go:141] libmachine: Checking connection to Docker...
	I0731 17:01:33.488265   26392 main.go:141] libmachine: (ha-234651-m03) Calling .GetURL
	I0731 17:01:33.489561   26392 main.go:141] libmachine: (ha-234651-m03) DBG | Using libvirt version 6000000
	I0731 17:01:33.491742   26392 main.go:141] libmachine: (ha-234651-m03) DBG | domain ha-234651-m03 has defined MAC address 52:54:00:ac:c0:cf in network mk-ha-234651
	I0731 17:01:33.492239   26392 main.go:141] libmachine: (ha-234651-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ac:c0:cf", ip: ""} in network mk-ha-234651: {Iface:virbr1 ExpiryTime:2024-07-31 18:01:23 +0000 UTC Type:0 Mac:52:54:00:ac:c0:cf Iaid: IPaddr:192.168.39.139 Prefix:24 Hostname:ha-234651-m03 Clientid:01:52:54:00:ac:c0:cf}
	I0731 17:01:33.492272   26392 main.go:141] libmachine: (ha-234651-m03) DBG | domain ha-234651-m03 has defined IP address 192.168.39.139 and MAC address 52:54:00:ac:c0:cf in network mk-ha-234651
	I0731 17:01:33.492505   26392 main.go:141] libmachine: Docker is up and running!
	I0731 17:01:33.492523   26392 main.go:141] libmachine: Reticulating splines...
	I0731 17:01:33.492531   26392 client.go:171] duration metric: took 24.63927913s to LocalClient.Create
	I0731 17:01:33.492555   26392 start.go:167] duration metric: took 24.639330663s to libmachine.API.Create "ha-234651"
	I0731 17:01:33.492578   26392 start.go:293] postStartSetup for "ha-234651-m03" (driver="kvm2")
	I0731 17:01:33.492591   26392 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0731 17:01:33.492663   26392 main.go:141] libmachine: (ha-234651-m03) Calling .DriverName
	I0731 17:01:33.492918   26392 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0731 17:01:33.492944   26392 main.go:141] libmachine: (ha-234651-m03) Calling .GetSSHHostname
	I0731 17:01:33.495518   26392 main.go:141] libmachine: (ha-234651-m03) DBG | domain ha-234651-m03 has defined MAC address 52:54:00:ac:c0:cf in network mk-ha-234651
	I0731 17:01:33.495920   26392 main.go:141] libmachine: (ha-234651-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ac:c0:cf", ip: ""} in network mk-ha-234651: {Iface:virbr1 ExpiryTime:2024-07-31 18:01:23 +0000 UTC Type:0 Mac:52:54:00:ac:c0:cf Iaid: IPaddr:192.168.39.139 Prefix:24 Hostname:ha-234651-m03 Clientid:01:52:54:00:ac:c0:cf}
	I0731 17:01:33.495946   26392 main.go:141] libmachine: (ha-234651-m03) DBG | domain ha-234651-m03 has defined IP address 192.168.39.139 and MAC address 52:54:00:ac:c0:cf in network mk-ha-234651
	I0731 17:01:33.496103   26392 main.go:141] libmachine: (ha-234651-m03) Calling .GetSSHPort
	I0731 17:01:33.496285   26392 main.go:141] libmachine: (ha-234651-m03) Calling .GetSSHKeyPath
	I0731 17:01:33.496459   26392 main.go:141] libmachine: (ha-234651-m03) Calling .GetSSHUsername
	I0731 17:01:33.496601   26392 sshutil.go:53] new ssh client: &{IP:192.168.39.139 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19349-8084/.minikube/machines/ha-234651-m03/id_rsa Username:docker}
	I0731 17:01:33.573930   26392 ssh_runner.go:195] Run: cat /etc/os-release
	I0731 17:01:33.578232   26392 info.go:137] Remote host: Buildroot 2023.02.9
	I0731 17:01:33.578251   26392 filesync.go:126] Scanning /home/jenkins/minikube-integration/19349-8084/.minikube/addons for local assets ...
	I0731 17:01:33.578307   26392 filesync.go:126] Scanning /home/jenkins/minikube-integration/19349-8084/.minikube/files for local assets ...
	I0731 17:01:33.578378   26392 filesync.go:149] local asset: /home/jenkins/minikube-integration/19349-8084/.minikube/files/etc/ssl/certs/152592.pem -> 152592.pem in /etc/ssl/certs
	I0731 17:01:33.578388   26392 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19349-8084/.minikube/files/etc/ssl/certs/152592.pem -> /etc/ssl/certs/152592.pem
	I0731 17:01:33.578465   26392 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0731 17:01:33.588147   26392 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19349-8084/.minikube/files/etc/ssl/certs/152592.pem --> /etc/ssl/certs/152592.pem (1708 bytes)
	I0731 17:01:33.612020   26392 start.go:296] duration metric: took 119.431633ms for postStartSetup
	I0731 17:01:33.612060   26392 main.go:141] libmachine: (ha-234651-m03) Calling .GetConfigRaw
	I0731 17:01:33.612663   26392 main.go:141] libmachine: (ha-234651-m03) Calling .GetIP
	I0731 17:01:33.615187   26392 main.go:141] libmachine: (ha-234651-m03) DBG | domain ha-234651-m03 has defined MAC address 52:54:00:ac:c0:cf in network mk-ha-234651
	I0731 17:01:33.615618   26392 main.go:141] libmachine: (ha-234651-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ac:c0:cf", ip: ""} in network mk-ha-234651: {Iface:virbr1 ExpiryTime:2024-07-31 18:01:23 +0000 UTC Type:0 Mac:52:54:00:ac:c0:cf Iaid: IPaddr:192.168.39.139 Prefix:24 Hostname:ha-234651-m03 Clientid:01:52:54:00:ac:c0:cf}
	I0731 17:01:33.615645   26392 main.go:141] libmachine: (ha-234651-m03) DBG | domain ha-234651-m03 has defined IP address 192.168.39.139 and MAC address 52:54:00:ac:c0:cf in network mk-ha-234651
	I0731 17:01:33.615888   26392 profile.go:143] Saving config to /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/ha-234651/config.json ...
	I0731 17:01:33.616086   26392 start.go:128] duration metric: took 24.780455282s to createHost
	I0731 17:01:33.616111   26392 main.go:141] libmachine: (ha-234651-m03) Calling .GetSSHHostname
	I0731 17:01:33.618128   26392 main.go:141] libmachine: (ha-234651-m03) DBG | domain ha-234651-m03 has defined MAC address 52:54:00:ac:c0:cf in network mk-ha-234651
	I0731 17:01:33.618421   26392 main.go:141] libmachine: (ha-234651-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ac:c0:cf", ip: ""} in network mk-ha-234651: {Iface:virbr1 ExpiryTime:2024-07-31 18:01:23 +0000 UTC Type:0 Mac:52:54:00:ac:c0:cf Iaid: IPaddr:192.168.39.139 Prefix:24 Hostname:ha-234651-m03 Clientid:01:52:54:00:ac:c0:cf}
	I0731 17:01:33.618448   26392 main.go:141] libmachine: (ha-234651-m03) DBG | domain ha-234651-m03 has defined IP address 192.168.39.139 and MAC address 52:54:00:ac:c0:cf in network mk-ha-234651
	I0731 17:01:33.618599   26392 main.go:141] libmachine: (ha-234651-m03) Calling .GetSSHPort
	I0731 17:01:33.618789   26392 main.go:141] libmachine: (ha-234651-m03) Calling .GetSSHKeyPath
	I0731 17:01:33.618937   26392 main.go:141] libmachine: (ha-234651-m03) Calling .GetSSHKeyPath
	I0731 17:01:33.619090   26392 main.go:141] libmachine: (ha-234651-m03) Calling .GetSSHUsername
	I0731 17:01:33.619293   26392 main.go:141] libmachine: Using SSH client type: native
	I0731 17:01:33.619448   26392 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.139 22 <nil> <nil>}
	I0731 17:01:33.619457   26392 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0731 17:01:33.720061   26392 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722445293.697216051
	
	I0731 17:01:33.720083   26392 fix.go:216] guest clock: 1722445293.697216051
	I0731 17:01:33.720090   26392 fix.go:229] Guest: 2024-07-31 17:01:33.697216051 +0000 UTC Remote: 2024-07-31 17:01:33.616097561 +0000 UTC m=+151.562332489 (delta=81.11849ms)
	I0731 17:01:33.720109   26392 fix.go:200] guest clock delta is within tolerance: 81.11849ms
	I0731 17:01:33.720116   26392 start.go:83] releasing machines lock for "ha-234651-m03", held for 24.884635596s
	I0731 17:01:33.720138   26392 main.go:141] libmachine: (ha-234651-m03) Calling .DriverName
	I0731 17:01:33.720399   26392 main.go:141] libmachine: (ha-234651-m03) Calling .GetIP
	I0731 17:01:33.723510   26392 main.go:141] libmachine: (ha-234651-m03) DBG | domain ha-234651-m03 has defined MAC address 52:54:00:ac:c0:cf in network mk-ha-234651
	I0731 17:01:33.723912   26392 main.go:141] libmachine: (ha-234651-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ac:c0:cf", ip: ""} in network mk-ha-234651: {Iface:virbr1 ExpiryTime:2024-07-31 18:01:23 +0000 UTC Type:0 Mac:52:54:00:ac:c0:cf Iaid: IPaddr:192.168.39.139 Prefix:24 Hostname:ha-234651-m03 Clientid:01:52:54:00:ac:c0:cf}
	I0731 17:01:33.723968   26392 main.go:141] libmachine: (ha-234651-m03) DBG | domain ha-234651-m03 has defined IP address 192.168.39.139 and MAC address 52:54:00:ac:c0:cf in network mk-ha-234651
	I0731 17:01:33.726040   26392 out.go:177] * Found network options:
	I0731 17:01:33.727394   26392 out.go:177]   - NO_PROXY=192.168.39.243,192.168.39.235
	W0731 17:01:33.728619   26392 proxy.go:119] fail to check proxy env: Error ip not in block
	W0731 17:01:33.728640   26392 proxy.go:119] fail to check proxy env: Error ip not in block
	I0731 17:01:33.728653   26392 main.go:141] libmachine: (ha-234651-m03) Calling .DriverName
	I0731 17:01:33.729098   26392 main.go:141] libmachine: (ha-234651-m03) Calling .DriverName
	I0731 17:01:33.729272   26392 main.go:141] libmachine: (ha-234651-m03) Calling .DriverName
	I0731 17:01:33.729400   26392 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0731 17:01:33.729441   26392 main.go:141] libmachine: (ha-234651-m03) Calling .GetSSHHostname
	W0731 17:01:33.729485   26392 proxy.go:119] fail to check proxy env: Error ip not in block
	W0731 17:01:33.729506   26392 proxy.go:119] fail to check proxy env: Error ip not in block
	I0731 17:01:33.729572   26392 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0731 17:01:33.729595   26392 main.go:141] libmachine: (ha-234651-m03) Calling .GetSSHHostname
	I0731 17:01:33.732354   26392 main.go:141] libmachine: (ha-234651-m03) DBG | domain ha-234651-m03 has defined MAC address 52:54:00:ac:c0:cf in network mk-ha-234651
	I0731 17:01:33.732559   26392 main.go:141] libmachine: (ha-234651-m03) DBG | domain ha-234651-m03 has defined MAC address 52:54:00:ac:c0:cf in network mk-ha-234651
	I0731 17:01:33.732817   26392 main.go:141] libmachine: (ha-234651-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ac:c0:cf", ip: ""} in network mk-ha-234651: {Iface:virbr1 ExpiryTime:2024-07-31 18:01:23 +0000 UTC Type:0 Mac:52:54:00:ac:c0:cf Iaid: IPaddr:192.168.39.139 Prefix:24 Hostname:ha-234651-m03 Clientid:01:52:54:00:ac:c0:cf}
	I0731 17:01:33.732844   26392 main.go:141] libmachine: (ha-234651-m03) DBG | domain ha-234651-m03 has defined IP address 192.168.39.139 and MAC address 52:54:00:ac:c0:cf in network mk-ha-234651
	I0731 17:01:33.732918   26392 main.go:141] libmachine: (ha-234651-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ac:c0:cf", ip: ""} in network mk-ha-234651: {Iface:virbr1 ExpiryTime:2024-07-31 18:01:23 +0000 UTC Type:0 Mac:52:54:00:ac:c0:cf Iaid: IPaddr:192.168.39.139 Prefix:24 Hostname:ha-234651-m03 Clientid:01:52:54:00:ac:c0:cf}
	I0731 17:01:33.732938   26392 main.go:141] libmachine: (ha-234651-m03) DBG | domain ha-234651-m03 has defined IP address 192.168.39.139 and MAC address 52:54:00:ac:c0:cf in network mk-ha-234651
	I0731 17:01:33.732952   26392 main.go:141] libmachine: (ha-234651-m03) Calling .GetSSHPort
	I0731 17:01:33.733108   26392 main.go:141] libmachine: (ha-234651-m03) Calling .GetSSHKeyPath
	I0731 17:01:33.733174   26392 main.go:141] libmachine: (ha-234651-m03) Calling .GetSSHPort
	I0731 17:01:33.733282   26392 main.go:141] libmachine: (ha-234651-m03) Calling .GetSSHUsername
	I0731 17:01:33.733329   26392 main.go:141] libmachine: (ha-234651-m03) Calling .GetSSHKeyPath
	I0731 17:01:33.733414   26392 sshutil.go:53] new ssh client: &{IP:192.168.39.139 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19349-8084/.minikube/machines/ha-234651-m03/id_rsa Username:docker}
	I0731 17:01:33.733471   26392 main.go:141] libmachine: (ha-234651-m03) Calling .GetSSHUsername
	I0731 17:01:33.733612   26392 sshutil.go:53] new ssh client: &{IP:192.168.39.139 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19349-8084/.minikube/machines/ha-234651-m03/id_rsa Username:docker}
	I0731 17:01:33.971250   26392 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0731 17:01:33.977009   26392 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0731 17:01:33.977080   26392 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0731 17:01:33.995986   26392 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0731 17:01:33.996010   26392 start.go:495] detecting cgroup driver to use...
	I0731 17:01:33.996073   26392 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0731 17:01:34.012560   26392 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0731 17:01:34.026037   26392 docker.go:217] disabling cri-docker service (if available) ...
	I0731 17:01:34.026090   26392 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0731 17:01:34.039868   26392 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0731 17:01:34.052763   26392 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0731 17:01:34.162187   26392 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0731 17:01:34.303171   26392 docker.go:233] disabling docker service ...
	I0731 17:01:34.303247   26392 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0731 17:01:34.319419   26392 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0731 17:01:34.332145   26392 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0731 17:01:34.467404   26392 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0731 17:01:34.584244   26392 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0731 17:01:34.598198   26392 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0731 17:01:34.615593   26392 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0731 17:01:34.615655   26392 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 17:01:34.625100   26392 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0731 17:01:34.625150   26392 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 17:01:34.634613   26392 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 17:01:34.644216   26392 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 17:01:34.653866   26392 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0731 17:01:34.664457   26392 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 17:01:34.673728   26392 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 17:01:34.689886   26392 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 17:01:34.699411   26392 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0731 17:01:34.707870   26392 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0731 17:01:34.707921   26392 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0731 17:01:34.720271   26392 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0731 17:01:34.729278   26392 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0731 17:01:34.845550   26392 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0731 17:01:34.974748   26392 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0731 17:01:34.974824   26392 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0731 17:01:34.979616   26392 start.go:563] Will wait 60s for crictl version
	I0731 17:01:34.979670   26392 ssh_runner.go:195] Run: which crictl
	I0731 17:01:34.983733   26392 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0731 17:01:35.022775   26392 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0731 17:01:35.022854   26392 ssh_runner.go:195] Run: crio --version
	I0731 17:01:35.050515   26392 ssh_runner.go:195] Run: crio --version
	I0731 17:01:35.079964   26392 out.go:177] * Preparing Kubernetes v1.30.3 on CRI-O 1.29.1 ...
	I0731 17:01:35.081319   26392 out.go:177]   - env NO_PROXY=192.168.39.243
	I0731 17:01:35.082627   26392 out.go:177]   - env NO_PROXY=192.168.39.243,192.168.39.235
	I0731 17:01:35.084057   26392 main.go:141] libmachine: (ha-234651-m03) Calling .GetIP
	I0731 17:01:35.087070   26392 main.go:141] libmachine: (ha-234651-m03) DBG | domain ha-234651-m03 has defined MAC address 52:54:00:ac:c0:cf in network mk-ha-234651
	I0731 17:01:35.087418   26392 main.go:141] libmachine: (ha-234651-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ac:c0:cf", ip: ""} in network mk-ha-234651: {Iface:virbr1 ExpiryTime:2024-07-31 18:01:23 +0000 UTC Type:0 Mac:52:54:00:ac:c0:cf Iaid: IPaddr:192.168.39.139 Prefix:24 Hostname:ha-234651-m03 Clientid:01:52:54:00:ac:c0:cf}
	I0731 17:01:35.087443   26392 main.go:141] libmachine: (ha-234651-m03) DBG | domain ha-234651-m03 has defined IP address 192.168.39.139 and MAC address 52:54:00:ac:c0:cf in network mk-ha-234651
	I0731 17:01:35.087647   26392 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0731 17:01:35.091590   26392 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0731 17:01:35.102805   26392 mustload.go:65] Loading cluster: ha-234651
	I0731 17:01:35.103045   26392 config.go:182] Loaded profile config "ha-234651": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0731 17:01:35.103387   26392 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 17:01:35.103423   26392 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 17:01:35.117581   26392 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45271
	I0731 17:01:35.117916   26392 main.go:141] libmachine: () Calling .GetVersion
	I0731 17:01:35.118379   26392 main.go:141] libmachine: Using API Version  1
	I0731 17:01:35.118397   26392 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 17:01:35.118716   26392 main.go:141] libmachine: () Calling .GetMachineName
	I0731 17:01:35.118915   26392 main.go:141] libmachine: (ha-234651) Calling .GetState
	I0731 17:01:35.120442   26392 host.go:66] Checking if "ha-234651" exists ...
	I0731 17:01:35.120719   26392 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 17:01:35.120749   26392 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 17:01:35.134870   26392 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36341
	I0731 17:01:35.135283   26392 main.go:141] libmachine: () Calling .GetVersion
	I0731 17:01:35.135793   26392 main.go:141] libmachine: Using API Version  1
	I0731 17:01:35.135815   26392 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 17:01:35.136144   26392 main.go:141] libmachine: () Calling .GetMachineName
	I0731 17:01:35.136361   26392 main.go:141] libmachine: (ha-234651) Calling .DriverName
	I0731 17:01:35.136522   26392 certs.go:68] Setting up /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/ha-234651 for IP: 192.168.39.139
	I0731 17:01:35.136534   26392 certs.go:194] generating shared ca certs ...
	I0731 17:01:35.136550   26392 certs.go:226] acquiring lock for ca certs: {Name:mk49eed439085f9c30746706460ce213570d1997 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 17:01:35.136686   26392 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19349-8084/.minikube/ca.key
	I0731 17:01:35.136738   26392 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19349-8084/.minikube/proxy-client-ca.key
	I0731 17:01:35.136750   26392 certs.go:256] generating profile certs ...
	I0731 17:01:35.136841   26392 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/ha-234651/client.key
	I0731 17:01:35.136874   26392 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/ha-234651/apiserver.key.e4b58186
	I0731 17:01:35.136892   26392 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/ha-234651/apiserver.crt.e4b58186 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.243 192.168.39.235 192.168.39.139 192.168.39.254]
	I0731 17:01:35.434332   26392 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/ha-234651/apiserver.crt.e4b58186 ...
	I0731 17:01:35.434363   26392 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/ha-234651/apiserver.crt.e4b58186: {Name:mk63bbc1c92e932d3d9f00338e4ca98819c6b1ce Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 17:01:35.434529   26392 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/ha-234651/apiserver.key.e4b58186 ...
	I0731 17:01:35.434541   26392 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/ha-234651/apiserver.key.e4b58186: {Name:mk628b5feee434241ec59f12d267a78b3ae29d7f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 17:01:35.434604   26392 certs.go:381] copying /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/ha-234651/apiserver.crt.e4b58186 -> /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/ha-234651/apiserver.crt
	I0731 17:01:35.434727   26392 certs.go:385] copying /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/ha-234651/apiserver.key.e4b58186 -> /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/ha-234651/apiserver.key
	I0731 17:01:35.434844   26392 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/ha-234651/proxy-client.key
	I0731 17:01:35.434858   26392 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19349-8084/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0731 17:01:35.434870   26392 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19349-8084/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0731 17:01:35.434883   26392 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19349-8084/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0731 17:01:35.434896   26392 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19349-8084/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0731 17:01:35.434908   26392 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/ha-234651/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0731 17:01:35.434920   26392 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/ha-234651/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0731 17:01:35.434932   26392 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/ha-234651/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0731 17:01:35.434944   26392 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/ha-234651/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0731 17:01:35.434994   26392 certs.go:484] found cert: /home/jenkins/minikube-integration/19349-8084/.minikube/certs/15259.pem (1338 bytes)
	W0731 17:01:35.435020   26392 certs.go:480] ignoring /home/jenkins/minikube-integration/19349-8084/.minikube/certs/15259_empty.pem, impossibly tiny 0 bytes
	I0731 17:01:35.435029   26392 certs.go:484] found cert: /home/jenkins/minikube-integration/19349-8084/.minikube/certs/ca-key.pem (1675 bytes)
	I0731 17:01:35.435050   26392 certs.go:484] found cert: /home/jenkins/minikube-integration/19349-8084/.minikube/certs/ca.pem (1082 bytes)
	I0731 17:01:35.435071   26392 certs.go:484] found cert: /home/jenkins/minikube-integration/19349-8084/.minikube/certs/cert.pem (1123 bytes)
	I0731 17:01:35.435092   26392 certs.go:484] found cert: /home/jenkins/minikube-integration/19349-8084/.minikube/certs/key.pem (1675 bytes)
	I0731 17:01:35.435181   26392 certs.go:484] found cert: /home/jenkins/minikube-integration/19349-8084/.minikube/files/etc/ssl/certs/152592.pem (1708 bytes)
	I0731 17:01:35.435212   26392 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19349-8084/.minikube/certs/15259.pem -> /usr/share/ca-certificates/15259.pem
	I0731 17:01:35.435228   26392 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19349-8084/.minikube/files/etc/ssl/certs/152592.pem -> /usr/share/ca-certificates/152592.pem
	I0731 17:01:35.435259   26392 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19349-8084/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0731 17:01:35.435298   26392 main.go:141] libmachine: (ha-234651) Calling .GetSSHHostname
	I0731 17:01:35.438461   26392 main.go:141] libmachine: (ha-234651) DBG | domain ha-234651 has defined MAC address 52:54:00:20:60:53 in network mk-ha-234651
	I0731 17:01:35.438911   26392 main.go:141] libmachine: (ha-234651) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:60:53", ip: ""} in network mk-ha-234651: {Iface:virbr1 ExpiryTime:2024-07-31 17:59:15 +0000 UTC Type:0 Mac:52:54:00:20:60:53 Iaid: IPaddr:192.168.39.243 Prefix:24 Hostname:ha-234651 Clientid:01:52:54:00:20:60:53}
	I0731 17:01:35.438937   26392 main.go:141] libmachine: (ha-234651) DBG | domain ha-234651 has defined IP address 192.168.39.243 and MAC address 52:54:00:20:60:53 in network mk-ha-234651
	I0731 17:01:35.439097   26392 main.go:141] libmachine: (ha-234651) Calling .GetSSHPort
	I0731 17:01:35.439299   26392 main.go:141] libmachine: (ha-234651) Calling .GetSSHKeyPath
	I0731 17:01:35.439476   26392 main.go:141] libmachine: (ha-234651) Calling .GetSSHUsername
	I0731 17:01:35.439606   26392 sshutil.go:53] new ssh client: &{IP:192.168.39.243 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19349-8084/.minikube/machines/ha-234651/id_rsa Username:docker}
	I0731 17:01:35.515490   26392 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/sa.pub
	I0731 17:01:35.520246   26392 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0731 17:01:35.536155   26392 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/sa.key
	I0731 17:01:35.540352   26392 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I0731 17:01:35.552013   26392 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/front-proxy-ca.crt
	I0731 17:01:35.555859   26392 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0731 17:01:35.565975   26392 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/front-proxy-ca.key
	I0731 17:01:35.573634   26392 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I0731 17:01:35.586805   26392 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/etcd/ca.crt
	I0731 17:01:35.591216   26392 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0731 17:01:35.602718   26392 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/etcd/ca.key
	I0731 17:01:35.606501   26392 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1679 bytes)
	I0731 17:01:35.616282   26392 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19349-8084/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0731 17:01:35.642723   26392 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19349-8084/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0731 17:01:35.666443   26392 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19349-8084/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0731 17:01:35.688466   26392 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19349-8084/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0731 17:01:35.710019   26392 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/ha-234651/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1444 bytes)
	I0731 17:01:35.733173   26392 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/ha-234651/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0731 17:01:35.755946   26392 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/ha-234651/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0731 17:01:35.776967   26392 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/ha-234651/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0731 17:01:35.799541   26392 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19349-8084/.minikube/certs/15259.pem --> /usr/share/ca-certificates/15259.pem (1338 bytes)
	I0731 17:01:35.821594   26392 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19349-8084/.minikube/files/etc/ssl/certs/152592.pem --> /usr/share/ca-certificates/152592.pem (1708 bytes)
	I0731 17:01:35.844776   26392 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19349-8084/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0731 17:01:35.867076   26392 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0731 17:01:35.881780   26392 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I0731 17:01:35.897313   26392 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0731 17:01:35.912868   26392 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I0731 17:01:35.927869   26392 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0731 17:01:35.943095   26392 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1679 bytes)
	I0731 17:01:35.959604   26392 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0731 17:01:35.976150   26392 ssh_runner.go:195] Run: openssl version
	I0731 17:01:35.981558   26392 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0731 17:01:35.991616   26392 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0731 17:01:35.996103   26392 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 31 16:42 /usr/share/ca-certificates/minikubeCA.pem
	I0731 17:01:35.996165   26392 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0731 17:01:36.001855   26392 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0731 17:01:36.012434   26392 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/15259.pem && ln -fs /usr/share/ca-certificates/15259.pem /etc/ssl/certs/15259.pem"
	I0731 17:01:36.023297   26392 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/15259.pem
	I0731 17:01:36.027551   26392 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 31 16:54 /usr/share/ca-certificates/15259.pem
	I0731 17:01:36.027600   26392 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/15259.pem
	I0731 17:01:36.033476   26392 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/15259.pem /etc/ssl/certs/51391683.0"
	I0731 17:01:36.043668   26392 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/152592.pem && ln -fs /usr/share/ca-certificates/152592.pem /etc/ssl/certs/152592.pem"
	I0731 17:01:36.053694   26392 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/152592.pem
	I0731 17:01:36.057705   26392 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 31 16:54 /usr/share/ca-certificates/152592.pem
	I0731 17:01:36.057745   26392 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/152592.pem
	I0731 17:01:36.063064   26392 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/152592.pem /etc/ssl/certs/3ec20f2e.0"
	I0731 17:01:36.073104   26392 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0731 17:01:36.076675   26392 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0731 17:01:36.076729   26392 kubeadm.go:934] updating node {m03 192.168.39.139 8443 v1.30.3 crio true true} ...
	I0731 17:01:36.076821   26392 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-234651-m03 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.139
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:ha-234651 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0731 17:01:36.076849   26392 kube-vip.go:115] generating kube-vip config ...
	I0731 17:01:36.076883   26392 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0731 17:01:36.090759   26392 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0731 17:01:36.090814   26392 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I0731 17:01:36.090879   26392 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0731 17:01:36.099903   26392 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.30.3: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.30.3': No such file or directory
	
	Initiating transfer...
	I0731 17:01:36.099959   26392 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.30.3
	I0731 17:01:36.108632   26392 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubectl.sha256
	I0731 17:01:36.108656   26392 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19349-8084/.minikube/cache/linux/amd64/v1.30.3/kubectl -> /var/lib/minikube/binaries/v1.30.3/kubectl
	I0731 17:01:36.108696   26392 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubelet.sha256
	I0731 17:01:36.108704   26392 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubeadm.sha256
	I0731 17:01:36.108716   26392 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19349-8084/.minikube/cache/linux/amd64/v1.30.3/kubeadm -> /var/lib/minikube/binaries/v1.30.3/kubeadm
	I0731 17:01:36.108725   26392 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.3/kubectl
	I0731 17:01:36.108738   26392 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0731 17:01:36.108763   26392 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.3/kubeadm
	I0731 17:01:36.121976   26392 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19349-8084/.minikube/cache/linux/amd64/v1.30.3/kubelet -> /var/lib/minikube/binaries/v1.30.3/kubelet
	I0731 17:01:36.122041   26392 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.3/kubeadm: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.3/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.3/kubeadm': No such file or directory
	I0731 17:01:36.122065   26392 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.3/kubelet
	I0731 17:01:36.122070   26392 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.3/kubectl: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.3/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.3/kubectl': No such file or directory
	I0731 17:01:36.122077   26392 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19349-8084/.minikube/cache/linux/amd64/v1.30.3/kubeadm --> /var/lib/minikube/binaries/v1.30.3/kubeadm (50249880 bytes)
	I0731 17:01:36.122094   26392 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19349-8084/.minikube/cache/linux/amd64/v1.30.3/kubectl --> /var/lib/minikube/binaries/v1.30.3/kubectl (51454104 bytes)
	I0731 17:01:36.135737   26392 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.3/kubelet: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.3/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.3/kubelet': No such file or directory
	I0731 17:01:36.135796   26392 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19349-8084/.minikube/cache/linux/amd64/v1.30.3/kubelet --> /var/lib/minikube/binaries/v1.30.3/kubelet (100125080 bytes)
	I0731 17:01:37.017391   26392 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0731 17:01:37.028340   26392 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I0731 17:01:37.045738   26392 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0731 17:01:37.063014   26392 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I0731 17:01:37.079613   26392 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0731 17:01:37.083276   26392 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0731 17:01:37.096309   26392 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0731 17:01:37.223821   26392 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0731 17:01:37.243570   26392 host.go:66] Checking if "ha-234651" exists ...
	I0731 17:01:37.244096   26392 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 17:01:37.244146   26392 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 17:01:37.259795   26392 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41825
	I0731 17:01:37.260175   26392 main.go:141] libmachine: () Calling .GetVersion
	I0731 17:01:37.260691   26392 main.go:141] libmachine: Using API Version  1
	I0731 17:01:37.260709   26392 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 17:01:37.261048   26392 main.go:141] libmachine: () Calling .GetMachineName
	I0731 17:01:37.261245   26392 main.go:141] libmachine: (ha-234651) Calling .DriverName
	I0731 17:01:37.261440   26392 start.go:317] joinCluster: &{Name:ha-234651 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:ha-234651 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.243 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.235 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.139 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0731 17:01:37.261606   26392 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0731 17:01:37.261632   26392 main.go:141] libmachine: (ha-234651) Calling .GetSSHHostname
	I0731 17:01:37.264389   26392 main.go:141] libmachine: (ha-234651) DBG | domain ha-234651 has defined MAC address 52:54:00:20:60:53 in network mk-ha-234651
	I0731 17:01:37.264786   26392 main.go:141] libmachine: (ha-234651) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:60:53", ip: ""} in network mk-ha-234651: {Iface:virbr1 ExpiryTime:2024-07-31 17:59:15 +0000 UTC Type:0 Mac:52:54:00:20:60:53 Iaid: IPaddr:192.168.39.243 Prefix:24 Hostname:ha-234651 Clientid:01:52:54:00:20:60:53}
	I0731 17:01:37.264826   26392 main.go:141] libmachine: (ha-234651) DBG | domain ha-234651 has defined IP address 192.168.39.243 and MAC address 52:54:00:20:60:53 in network mk-ha-234651
	I0731 17:01:37.265006   26392 main.go:141] libmachine: (ha-234651) Calling .GetSSHPort
	I0731 17:01:37.265290   26392 main.go:141] libmachine: (ha-234651) Calling .GetSSHKeyPath
	I0731 17:01:37.265423   26392 main.go:141] libmachine: (ha-234651) Calling .GetSSHUsername
	I0731 17:01:37.265584   26392 sshutil.go:53] new ssh client: &{IP:192.168.39.243 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19349-8084/.minikube/machines/ha-234651/id_rsa Username:docker}
	I0731 17:01:37.432973   26392 start.go:343] trying to join control-plane node "m03" to cluster: &{Name:m03 IP:192.168.39.139 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0731 17:01:37.433036   26392 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token akz3y0.iylspc3e44qrqwx7 --discovery-token-ca-cert-hash sha256:460b24b51d10a3760f2391542a8a89e401359854f437f922cbee3f2d8ed6c856 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-234651-m03 --control-plane --apiserver-advertise-address=192.168.39.139 --apiserver-bind-port=8443"
	I0731 17:02:00.740154   26392 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token akz3y0.iylspc3e44qrqwx7 --discovery-token-ca-cert-hash sha256:460b24b51d10a3760f2391542a8a89e401359854f437f922cbee3f2d8ed6c856 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-234651-m03 --control-plane --apiserver-advertise-address=192.168.39.139 --apiserver-bind-port=8443": (23.307093126s)
	I0731 17:02:00.740192   26392 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0731 17:02:01.237200   26392 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-234651-m03 minikube.k8s.io/updated_at=2024_07_31T17_02_01_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=1d737dad7efa60c56d30434fcd857dd3b14c91d9 minikube.k8s.io/name=ha-234651 minikube.k8s.io/primary=false
	I0731 17:02:01.355371   26392 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-234651-m03 node-role.kubernetes.io/control-plane:NoSchedule-
	I0731 17:02:01.468704   26392 start.go:319] duration metric: took 24.207260736s to joinCluster
	I0731 17:02:01.468786   26392 start.go:235] Will wait 6m0s for node &{Name:m03 IP:192.168.39.139 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0731 17:02:01.469119   26392 config.go:182] Loaded profile config "ha-234651": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0731 17:02:01.470089   26392 out.go:177] * Verifying Kubernetes components...
	I0731 17:02:01.471212   26392 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0731 17:02:01.743573   26392 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0731 17:02:01.768397   26392 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19349-8084/kubeconfig
	I0731 17:02:01.770559   26392 kapi.go:59] client config for ha-234651: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19349-8084/.minikube/profiles/ha-234651/client.crt", KeyFile:"/home/jenkins/minikube-integration/19349-8084/.minikube/profiles/ha-234651/client.key", CAFile:"/home/jenkins/minikube-integration/19349-8084/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1d02f40), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0731 17:02:01.770697   26392 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.243:8443
	I0731 17:02:01.771427   26392 node_ready.go:35] waiting up to 6m0s for node "ha-234651-m03" to be "Ready" ...
	I0731 17:02:01.771524   26392 round_trippers.go:463] GET https://192.168.39.243:8443/api/v1/nodes/ha-234651-m03
	I0731 17:02:01.771538   26392 round_trippers.go:469] Request Headers:
	I0731 17:02:01.771549   26392 round_trippers.go:473]     Accept: application/json, */*
	I0731 17:02:01.771556   26392 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 17:02:01.774783   26392 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 17:02:02.272487   26392 round_trippers.go:463] GET https://192.168.39.243:8443/api/v1/nodes/ha-234651-m03
	I0731 17:02:02.272511   26392 round_trippers.go:469] Request Headers:
	I0731 17:02:02.272522   26392 round_trippers.go:473]     Accept: application/json, */*
	I0731 17:02:02.272527   26392 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 17:02:02.298164   26392 round_trippers.go:574] Response Status: 200 OK in 25 milliseconds
	I0731 17:02:02.772644   26392 round_trippers.go:463] GET https://192.168.39.243:8443/api/v1/nodes/ha-234651-m03
	I0731 17:02:02.772668   26392 round_trippers.go:469] Request Headers:
	I0731 17:02:02.772676   26392 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 17:02:02.772681   26392 round_trippers.go:473]     Accept: application/json, */*
	I0731 17:02:02.776157   26392 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 17:02:03.272522   26392 round_trippers.go:463] GET https://192.168.39.243:8443/api/v1/nodes/ha-234651-m03
	I0731 17:02:03.272545   26392 round_trippers.go:469] Request Headers:
	I0731 17:02:03.272556   26392 round_trippers.go:473]     Accept: application/json, */*
	I0731 17:02:03.272563   26392 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 17:02:03.276432   26392 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 17:02:03.772332   26392 round_trippers.go:463] GET https://192.168.39.243:8443/api/v1/nodes/ha-234651-m03
	I0731 17:02:03.772353   26392 round_trippers.go:469] Request Headers:
	I0731 17:02:03.772363   26392 round_trippers.go:473]     Accept: application/json, */*
	I0731 17:02:03.772369   26392 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 17:02:03.775129   26392 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0731 17:02:03.776042   26392 node_ready.go:53] node "ha-234651-m03" has status "Ready":"False"
	I0731 17:02:04.271745   26392 round_trippers.go:463] GET https://192.168.39.243:8443/api/v1/nodes/ha-234651-m03
	I0731 17:02:04.271763   26392 round_trippers.go:469] Request Headers:
	I0731 17:02:04.271771   26392 round_trippers.go:473]     Accept: application/json, */*
	I0731 17:02:04.271775   26392 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 17:02:04.275853   26392 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0731 17:02:04.772023   26392 round_trippers.go:463] GET https://192.168.39.243:8443/api/v1/nodes/ha-234651-m03
	I0731 17:02:04.772042   26392 round_trippers.go:469] Request Headers:
	I0731 17:02:04.772050   26392 round_trippers.go:473]     Accept: application/json, */*
	I0731 17:02:04.772053   26392 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 17:02:04.775235   26392 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 17:02:05.271694   26392 round_trippers.go:463] GET https://192.168.39.243:8443/api/v1/nodes/ha-234651-m03
	I0731 17:02:05.271713   26392 round_trippers.go:469] Request Headers:
	I0731 17:02:05.271721   26392 round_trippers.go:473]     Accept: application/json, */*
	I0731 17:02:05.271725   26392 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 17:02:05.276039   26392 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0731 17:02:05.771697   26392 round_trippers.go:463] GET https://192.168.39.243:8443/api/v1/nodes/ha-234651-m03
	I0731 17:02:05.771765   26392 round_trippers.go:469] Request Headers:
	I0731 17:02:05.771789   26392 round_trippers.go:473]     Accept: application/json, */*
	I0731 17:02:05.771801   26392 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 17:02:05.776249   26392 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0731 17:02:05.777069   26392 node_ready.go:53] node "ha-234651-m03" has status "Ready":"False"
	I0731 17:02:06.272459   26392 round_trippers.go:463] GET https://192.168.39.243:8443/api/v1/nodes/ha-234651-m03
	I0731 17:02:06.272478   26392 round_trippers.go:469] Request Headers:
	I0731 17:02:06.272486   26392 round_trippers.go:473]     Accept: application/json, */*
	I0731 17:02:06.272491   26392 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 17:02:06.275595   26392 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 17:02:06.772431   26392 round_trippers.go:463] GET https://192.168.39.243:8443/api/v1/nodes/ha-234651-m03
	I0731 17:02:06.772460   26392 round_trippers.go:469] Request Headers:
	I0731 17:02:06.772468   26392 round_trippers.go:473]     Accept: application/json, */*
	I0731 17:02:06.772472   26392 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 17:02:06.776868   26392 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0731 17:02:07.271856   26392 round_trippers.go:463] GET https://192.168.39.243:8443/api/v1/nodes/ha-234651-m03
	I0731 17:02:07.271874   26392 round_trippers.go:469] Request Headers:
	I0731 17:02:07.271882   26392 round_trippers.go:473]     Accept: application/json, */*
	I0731 17:02:07.271886   26392 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 17:02:07.276287   26392 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0731 17:02:07.771871   26392 round_trippers.go:463] GET https://192.168.39.243:8443/api/v1/nodes/ha-234651-m03
	I0731 17:02:07.771891   26392 round_trippers.go:469] Request Headers:
	I0731 17:02:07.771898   26392 round_trippers.go:473]     Accept: application/json, */*
	I0731 17:02:07.771902   26392 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 17:02:07.775696   26392 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 17:02:08.272429   26392 round_trippers.go:463] GET https://192.168.39.243:8443/api/v1/nodes/ha-234651-m03
	I0731 17:02:08.272455   26392 round_trippers.go:469] Request Headers:
	I0731 17:02:08.272464   26392 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 17:02:08.272472   26392 round_trippers.go:473]     Accept: application/json, */*
	I0731 17:02:08.276105   26392 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 17:02:08.276622   26392 node_ready.go:53] node "ha-234651-m03" has status "Ready":"False"
	I0731 17:02:08.771986   26392 round_trippers.go:463] GET https://192.168.39.243:8443/api/v1/nodes/ha-234651-m03
	I0731 17:02:08.772012   26392 round_trippers.go:469] Request Headers:
	I0731 17:02:08.772019   26392 round_trippers.go:473]     Accept: application/json, */*
	I0731 17:02:08.772023   26392 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 17:02:08.775477   26392 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 17:02:09.271760   26392 round_trippers.go:463] GET https://192.168.39.243:8443/api/v1/nodes/ha-234651-m03
	I0731 17:02:09.271779   26392 round_trippers.go:469] Request Headers:
	I0731 17:02:09.271788   26392 round_trippers.go:473]     Accept: application/json, */*
	I0731 17:02:09.271794   26392 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 17:02:09.275193   26392 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 17:02:09.772349   26392 round_trippers.go:463] GET https://192.168.39.243:8443/api/v1/nodes/ha-234651-m03
	I0731 17:02:09.772367   26392 round_trippers.go:469] Request Headers:
	I0731 17:02:09.772375   26392 round_trippers.go:473]     Accept: application/json, */*
	I0731 17:02:09.772380   26392 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 17:02:09.775458   26392 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 17:02:10.271949   26392 round_trippers.go:463] GET https://192.168.39.243:8443/api/v1/nodes/ha-234651-m03
	I0731 17:02:10.271969   26392 round_trippers.go:469] Request Headers:
	I0731 17:02:10.271976   26392 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 17:02:10.271980   26392 round_trippers.go:473]     Accept: application/json, */*
	I0731 17:02:10.274905   26392 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0731 17:02:10.771953   26392 round_trippers.go:463] GET https://192.168.39.243:8443/api/v1/nodes/ha-234651-m03
	I0731 17:02:10.771973   26392 round_trippers.go:469] Request Headers:
	I0731 17:02:10.771981   26392 round_trippers.go:473]     Accept: application/json, */*
	I0731 17:02:10.771995   26392 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 17:02:10.775196   26392 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 17:02:10.775736   26392 node_ready.go:53] node "ha-234651-m03" has status "Ready":"False"
	I0731 17:02:11.272041   26392 round_trippers.go:463] GET https://192.168.39.243:8443/api/v1/nodes/ha-234651-m03
	I0731 17:02:11.272066   26392 round_trippers.go:469] Request Headers:
	I0731 17:02:11.272077   26392 round_trippers.go:473]     Accept: application/json, */*
	I0731 17:02:11.272081   26392 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 17:02:11.275503   26392 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 17:02:11.772324   26392 round_trippers.go:463] GET https://192.168.39.243:8443/api/v1/nodes/ha-234651-m03
	I0731 17:02:11.772342   26392 round_trippers.go:469] Request Headers:
	I0731 17:02:11.772349   26392 round_trippers.go:473]     Accept: application/json, */*
	I0731 17:02:11.772353   26392 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 17:02:11.775273   26392 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0731 17:02:12.272398   26392 round_trippers.go:463] GET https://192.168.39.243:8443/api/v1/nodes/ha-234651-m03
	I0731 17:02:12.272418   26392 round_trippers.go:469] Request Headers:
	I0731 17:02:12.272429   26392 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 17:02:12.272436   26392 round_trippers.go:473]     Accept: application/json, */*
	I0731 17:02:12.275897   26392 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 17:02:12.772652   26392 round_trippers.go:463] GET https://192.168.39.243:8443/api/v1/nodes/ha-234651-m03
	I0731 17:02:12.772672   26392 round_trippers.go:469] Request Headers:
	I0731 17:02:12.772680   26392 round_trippers.go:473]     Accept: application/json, */*
	I0731 17:02:12.772683   26392 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 17:02:12.776201   26392 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 17:02:12.776820   26392 node_ready.go:53] node "ha-234651-m03" has status "Ready":"False"
	I0731 17:02:13.272113   26392 round_trippers.go:463] GET https://192.168.39.243:8443/api/v1/nodes/ha-234651-m03
	I0731 17:02:13.272137   26392 round_trippers.go:469] Request Headers:
	I0731 17:02:13.272148   26392 round_trippers.go:473]     Accept: application/json, */*
	I0731 17:02:13.272153   26392 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 17:02:13.275213   26392 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 17:02:13.772656   26392 round_trippers.go:463] GET https://192.168.39.243:8443/api/v1/nodes/ha-234651-m03
	I0731 17:02:13.772677   26392 round_trippers.go:469] Request Headers:
	I0731 17:02:13.772686   26392 round_trippers.go:473]     Accept: application/json, */*
	I0731 17:02:13.772689   26392 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 17:02:13.776164   26392 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 17:02:14.271954   26392 round_trippers.go:463] GET https://192.168.39.243:8443/api/v1/nodes/ha-234651-m03
	I0731 17:02:14.271974   26392 round_trippers.go:469] Request Headers:
	I0731 17:02:14.271982   26392 round_trippers.go:473]     Accept: application/json, */*
	I0731 17:02:14.271986   26392 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 17:02:14.276462   26392 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0731 17:02:14.772649   26392 round_trippers.go:463] GET https://192.168.39.243:8443/api/v1/nodes/ha-234651-m03
	I0731 17:02:14.772671   26392 round_trippers.go:469] Request Headers:
	I0731 17:02:14.772679   26392 round_trippers.go:473]     Accept: application/json, */*
	I0731 17:02:14.772682   26392 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 17:02:14.775952   26392 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 17:02:15.272585   26392 round_trippers.go:463] GET https://192.168.39.243:8443/api/v1/nodes/ha-234651-m03
	I0731 17:02:15.272605   26392 round_trippers.go:469] Request Headers:
	I0731 17:02:15.272615   26392 round_trippers.go:473]     Accept: application/json, */*
	I0731 17:02:15.272623   26392 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 17:02:15.275473   26392 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0731 17:02:15.276097   26392 node_ready.go:53] node "ha-234651-m03" has status "Ready":"False"
	I0731 17:02:15.772352   26392 round_trippers.go:463] GET https://192.168.39.243:8443/api/v1/nodes/ha-234651-m03
	I0731 17:02:15.772372   26392 round_trippers.go:469] Request Headers:
	I0731 17:02:15.772380   26392 round_trippers.go:473]     Accept: application/json, */*
	I0731 17:02:15.772384   26392 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 17:02:15.776568   26392 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0731 17:02:16.272347   26392 round_trippers.go:463] GET https://192.168.39.243:8443/api/v1/nodes/ha-234651-m03
	I0731 17:02:16.272368   26392 round_trippers.go:469] Request Headers:
	I0731 17:02:16.272376   26392 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 17:02:16.272382   26392 round_trippers.go:473]     Accept: application/json, */*
	I0731 17:02:16.275673   26392 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 17:02:16.772371   26392 round_trippers.go:463] GET https://192.168.39.243:8443/api/v1/nodes/ha-234651-m03
	I0731 17:02:16.772396   26392 round_trippers.go:469] Request Headers:
	I0731 17:02:16.772406   26392 round_trippers.go:473]     Accept: application/json, */*
	I0731 17:02:16.772412   26392 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 17:02:16.778215   26392 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0731 17:02:17.271608   26392 round_trippers.go:463] GET https://192.168.39.243:8443/api/v1/nodes/ha-234651-m03
	I0731 17:02:17.271632   26392 round_trippers.go:469] Request Headers:
	I0731 17:02:17.271642   26392 round_trippers.go:473]     Accept: application/json, */*
	I0731 17:02:17.271646   26392 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 17:02:17.275386   26392 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 17:02:17.771869   26392 round_trippers.go:463] GET https://192.168.39.243:8443/api/v1/nodes/ha-234651-m03
	I0731 17:02:17.771889   26392 round_trippers.go:469] Request Headers:
	I0731 17:02:17.771897   26392 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 17:02:17.771903   26392 round_trippers.go:473]     Accept: application/json, */*
	I0731 17:02:17.775526   26392 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 17:02:17.776021   26392 node_ready.go:53] node "ha-234651-m03" has status "Ready":"False"
	I0731 17:02:18.272359   26392 round_trippers.go:463] GET https://192.168.39.243:8443/api/v1/nodes/ha-234651-m03
	I0731 17:02:18.272379   26392 round_trippers.go:469] Request Headers:
	I0731 17:02:18.272390   26392 round_trippers.go:473]     Accept: application/json, */*
	I0731 17:02:18.272396   26392 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 17:02:18.276177   26392 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 17:02:18.772291   26392 round_trippers.go:463] GET https://192.168.39.243:8443/api/v1/nodes/ha-234651-m03
	I0731 17:02:18.772313   26392 round_trippers.go:469] Request Headers:
	I0731 17:02:18.772324   26392 round_trippers.go:473]     Accept: application/json, */*
	I0731 17:02:18.772330   26392 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 17:02:18.775648   26392 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 17:02:19.272391   26392 round_trippers.go:463] GET https://192.168.39.243:8443/api/v1/nodes/ha-234651-m03
	I0731 17:02:19.272412   26392 round_trippers.go:469] Request Headers:
	I0731 17:02:19.272419   26392 round_trippers.go:473]     Accept: application/json, */*
	I0731 17:02:19.272424   26392 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 17:02:19.276147   26392 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 17:02:19.771808   26392 round_trippers.go:463] GET https://192.168.39.243:8443/api/v1/nodes/ha-234651-m03
	I0731 17:02:19.771831   26392 round_trippers.go:469] Request Headers:
	I0731 17:02:19.771841   26392 round_trippers.go:473]     Accept: application/json, */*
	I0731 17:02:19.771846   26392 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 17:02:19.776156   26392 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0731 17:02:19.777038   26392 node_ready.go:49] node "ha-234651-m03" has status "Ready":"True"
	I0731 17:02:19.777054   26392 node_ready.go:38] duration metric: took 18.005601556s for node "ha-234651-m03" to be "Ready" ...
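The loop above polls GET /api/v1/nodes/ha-234651-m03 roughly every 500ms until the node reports Ready, which here took about 18s after the join. As a rough illustration only (this is not minikube's actual implementation, and the kubeconfig path is an assumption), the same wait could be expressed with client-go by checking the node's NodeReady condition:

```go
// Minimal sketch: poll a node until its NodeReady condition is True,
// mirroring the GET loop in the log above. Not minikube's code.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Assumed kubeconfig location; the test run uses its own profile kubeconfig.
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	ctx, cancel := context.WithTimeout(context.Background(), 6*time.Minute)
	defer cancel()

	for {
		node, err := client.CoreV1().Nodes().Get(ctx, "ha-234651-m03", metav1.GetOptions{})
		if err == nil {
			for _, c := range node.Status.Conditions {
				if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
					fmt.Println("node is Ready")
					return
				}
			}
		}
		select {
		case <-ctx.Done():
			panic("timed out waiting for node to become Ready")
		case <-time.After(500 * time.Millisecond):
		}
	}
}
```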
	I0731 17:02:19.777062   26392 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0731 17:02:19.777111   26392 round_trippers.go:463] GET https://192.168.39.243:8443/api/v1/namespaces/kube-system/pods
	I0731 17:02:19.777120   26392 round_trippers.go:469] Request Headers:
	I0731 17:02:19.777127   26392 round_trippers.go:473]     Accept: application/json, */*
	I0731 17:02:19.777132   26392 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 17:02:19.785605   26392 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0731 17:02:19.791637   26392 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-nsx9j" in "kube-system" namespace to be "Ready" ...
	I0731 17:02:19.791706   26392 round_trippers.go:463] GET https://192.168.39.243:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-nsx9j
	I0731 17:02:19.791713   26392 round_trippers.go:469] Request Headers:
	I0731 17:02:19.791721   26392 round_trippers.go:473]     Accept: application/json, */*
	I0731 17:02:19.791725   26392 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 17:02:19.799668   26392 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0731 17:02:19.800231   26392 round_trippers.go:463] GET https://192.168.39.243:8443/api/v1/nodes/ha-234651
	I0731 17:02:19.800258   26392 round_trippers.go:469] Request Headers:
	I0731 17:02:19.800267   26392 round_trippers.go:473]     Accept: application/json, */*
	I0731 17:02:19.800272   26392 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 17:02:19.804395   26392 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0731 17:02:19.805012   26392 pod_ready.go:92] pod "coredns-7db6d8ff4d-nsx9j" in "kube-system" namespace has status "Ready":"True"
	I0731 17:02:19.805027   26392 pod_ready.go:81] duration metric: took 13.369272ms for pod "coredns-7db6d8ff4d-nsx9j" in "kube-system" namespace to be "Ready" ...
	I0731 17:02:19.805035   26392 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-qbqb9" in "kube-system" namespace to be "Ready" ...
	I0731 17:02:19.805079   26392 round_trippers.go:463] GET https://192.168.39.243:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-qbqb9
	I0731 17:02:19.805083   26392 round_trippers.go:469] Request Headers:
	I0731 17:02:19.805090   26392 round_trippers.go:473]     Accept: application/json, */*
	I0731 17:02:19.805094   26392 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 17:02:19.807408   26392 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0731 17:02:19.807984   26392 round_trippers.go:463] GET https://192.168.39.243:8443/api/v1/nodes/ha-234651
	I0731 17:02:19.808002   26392 round_trippers.go:469] Request Headers:
	I0731 17:02:19.808011   26392 round_trippers.go:473]     Accept: application/json, */*
	I0731 17:02:19.808017   26392 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 17:02:19.810203   26392 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0731 17:02:19.810909   26392 pod_ready.go:92] pod "coredns-7db6d8ff4d-qbqb9" in "kube-system" namespace has status "Ready":"True"
	I0731 17:02:19.810945   26392 pod_ready.go:81] duration metric: took 5.894773ms for pod "coredns-7db6d8ff4d-qbqb9" in "kube-system" namespace to be "Ready" ...
	I0731 17:02:19.810956   26392 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-234651" in "kube-system" namespace to be "Ready" ...
	I0731 17:02:19.811015   26392 round_trippers.go:463] GET https://192.168.39.243:8443/api/v1/namespaces/kube-system/pods/etcd-ha-234651
	I0731 17:02:19.811024   26392 round_trippers.go:469] Request Headers:
	I0731 17:02:19.811034   26392 round_trippers.go:473]     Accept: application/json, */*
	I0731 17:02:19.811041   26392 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 17:02:19.813120   26392 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0731 17:02:19.813674   26392 round_trippers.go:463] GET https://192.168.39.243:8443/api/v1/nodes/ha-234651
	I0731 17:02:19.813689   26392 round_trippers.go:469] Request Headers:
	I0731 17:02:19.813699   26392 round_trippers.go:473]     Accept: application/json, */*
	I0731 17:02:19.813703   26392 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 17:02:19.816056   26392 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0731 17:02:19.816820   26392 pod_ready.go:92] pod "etcd-ha-234651" in "kube-system" namespace has status "Ready":"True"
	I0731 17:02:19.816836   26392 pod_ready.go:81] duration metric: took 5.87247ms for pod "etcd-ha-234651" in "kube-system" namespace to be "Ready" ...
	I0731 17:02:19.816848   26392 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-234651-m02" in "kube-system" namespace to be "Ready" ...
	I0731 17:02:19.816899   26392 round_trippers.go:463] GET https://192.168.39.243:8443/api/v1/namespaces/kube-system/pods/etcd-ha-234651-m02
	I0731 17:02:19.816909   26392 round_trippers.go:469] Request Headers:
	I0731 17:02:19.816918   26392 round_trippers.go:473]     Accept: application/json, */*
	I0731 17:02:19.816923   26392 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 17:02:19.819104   26392 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0731 17:02:19.819735   26392 round_trippers.go:463] GET https://192.168.39.243:8443/api/v1/nodes/ha-234651-m02
	I0731 17:02:19.819752   26392 round_trippers.go:469] Request Headers:
	I0731 17:02:19.819761   26392 round_trippers.go:473]     Accept: application/json, */*
	I0731 17:02:19.819769   26392 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 17:02:19.822242   26392 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0731 17:02:19.822848   26392 pod_ready.go:92] pod "etcd-ha-234651-m02" in "kube-system" namespace has status "Ready":"True"
	I0731 17:02:19.822865   26392 pod_ready.go:81] duration metric: took 6.010187ms for pod "etcd-ha-234651-m02" in "kube-system" namespace to be "Ready" ...
	I0731 17:02:19.822876   26392 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-234651-m03" in "kube-system" namespace to be "Ready" ...
	I0731 17:02:19.972293   26392 request.go:629] Waited for 149.323624ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.243:8443/api/v1/namespaces/kube-system/pods/etcd-ha-234651-m03
	I0731 17:02:19.972363   26392 round_trippers.go:463] GET https://192.168.39.243:8443/api/v1/namespaces/kube-system/pods/etcd-ha-234651-m03
	I0731 17:02:19.972376   26392 round_trippers.go:469] Request Headers:
	I0731 17:02:19.972394   26392 round_trippers.go:473]     Accept: application/json, */*
	I0731 17:02:19.972404   26392 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 17:02:19.975294   26392 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0731 17:02:20.172524   26392 request.go:629] Waited for 196.621388ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.243:8443/api/v1/nodes/ha-234651-m03
	I0731 17:02:20.172605   26392 round_trippers.go:463] GET https://192.168.39.243:8443/api/v1/nodes/ha-234651-m03
	I0731 17:02:20.172612   26392 round_trippers.go:469] Request Headers:
	I0731 17:02:20.172624   26392 round_trippers.go:473]     Accept: application/json, */*
	I0731 17:02:20.172634   26392 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 17:02:20.175922   26392 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 17:02:20.176609   26392 pod_ready.go:92] pod "etcd-ha-234651-m03" in "kube-system" namespace has status "Ready":"True"
	I0731 17:02:20.176627   26392 pod_ready.go:81] duration metric: took 353.744367ms for pod "etcd-ha-234651-m03" in "kube-system" namespace to be "Ready" ...
	I0731 17:02:20.176643   26392 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-234651" in "kube-system" namespace to be "Ready" ...
	I0731 17:02:20.372771   26392 request.go:629] Waited for 196.067367ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.243:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-234651
	I0731 17:02:20.372850   26392 round_trippers.go:463] GET https://192.168.39.243:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-234651
	I0731 17:02:20.372861   26392 round_trippers.go:469] Request Headers:
	I0731 17:02:20.372871   26392 round_trippers.go:473]     Accept: application/json, */*
	I0731 17:02:20.372882   26392 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 17:02:20.375908   26392 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 17:02:20.571830   26392 request.go:629] Waited for 195.285012ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.243:8443/api/v1/nodes/ha-234651
	I0731 17:02:20.571882   26392 round_trippers.go:463] GET https://192.168.39.243:8443/api/v1/nodes/ha-234651
	I0731 17:02:20.571887   26392 round_trippers.go:469] Request Headers:
	I0731 17:02:20.571892   26392 round_trippers.go:473]     Accept: application/json, */*
	I0731 17:02:20.571896   26392 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 17:02:20.574825   26392 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0731 17:02:20.575459   26392 pod_ready.go:92] pod "kube-apiserver-ha-234651" in "kube-system" namespace has status "Ready":"True"
	I0731 17:02:20.575480   26392 pod_ready.go:81] duration metric: took 398.829513ms for pod "kube-apiserver-ha-234651" in "kube-system" namespace to be "Ready" ...
	I0731 17:02:20.575494   26392 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-234651-m02" in "kube-system" namespace to be "Ready" ...
	I0731 17:02:20.772409   26392 request.go:629] Waited for 196.847445ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.243:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-234651-m02
	I0731 17:02:20.772470   26392 round_trippers.go:463] GET https://192.168.39.243:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-234651-m02
	I0731 17:02:20.772478   26392 round_trippers.go:469] Request Headers:
	I0731 17:02:20.772489   26392 round_trippers.go:473]     Accept: application/json, */*
	I0731 17:02:20.772532   26392 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 17:02:20.777048   26392 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0731 17:02:20.972000   26392 request.go:629] Waited for 194.290806ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.243:8443/api/v1/nodes/ha-234651-m02
	I0731 17:02:20.972050   26392 round_trippers.go:463] GET https://192.168.39.243:8443/api/v1/nodes/ha-234651-m02
	I0731 17:02:20.972055   26392 round_trippers.go:469] Request Headers:
	I0731 17:02:20.972070   26392 round_trippers.go:473]     Accept: application/json, */*
	I0731 17:02:20.972085   26392 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 17:02:20.974976   26392 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0731 17:02:20.975498   26392 pod_ready.go:92] pod "kube-apiserver-ha-234651-m02" in "kube-system" namespace has status "Ready":"True"
	I0731 17:02:20.975519   26392 pod_ready.go:81] duration metric: took 400.017342ms for pod "kube-apiserver-ha-234651-m02" in "kube-system" namespace to be "Ready" ...
	I0731 17:02:20.975531   26392 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-234651-m03" in "kube-system" namespace to be "Ready" ...
	I0731 17:02:21.172594   26392 request.go:629] Waited for 196.98829ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.243:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-234651-m03
	I0731 17:02:21.172646   26392 round_trippers.go:463] GET https://192.168.39.243:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-234651-m03
	I0731 17:02:21.172651   26392 round_trippers.go:469] Request Headers:
	I0731 17:02:21.172657   26392 round_trippers.go:473]     Accept: application/json, */*
	I0731 17:02:21.172661   26392 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 17:02:21.175522   26392 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0731 17:02:21.372553   26392 request.go:629] Waited for 196.351874ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.243:8443/api/v1/nodes/ha-234651-m03
	I0731 17:02:21.372614   26392 round_trippers.go:463] GET https://192.168.39.243:8443/api/v1/nodes/ha-234651-m03
	I0731 17:02:21.372621   26392 round_trippers.go:469] Request Headers:
	I0731 17:02:21.372632   26392 round_trippers.go:473]     Accept: application/json, */*
	I0731 17:02:21.372638   26392 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 17:02:21.375964   26392 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 17:02:21.376668   26392 pod_ready.go:92] pod "kube-apiserver-ha-234651-m03" in "kube-system" namespace has status "Ready":"True"
	I0731 17:02:21.376688   26392 pod_ready.go:81] duration metric: took 401.149162ms for pod "kube-apiserver-ha-234651-m03" in "kube-system" namespace to be "Ready" ...
	I0731 17:02:21.376700   26392 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-234651" in "kube-system" namespace to be "Ready" ...
	I0731 17:02:21.572668   26392 request.go:629] Waited for 195.906455ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.243:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-234651
	I0731 17:02:21.572720   26392 round_trippers.go:463] GET https://192.168.39.243:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-234651
	I0731 17:02:21.572726   26392 round_trippers.go:469] Request Headers:
	I0731 17:02:21.572736   26392 round_trippers.go:473]     Accept: application/json, */*
	I0731 17:02:21.572746   26392 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 17:02:21.576257   26392 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 17:02:21.772283   26392 request.go:629] Waited for 195.256579ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.243:8443/api/v1/nodes/ha-234651
	I0731 17:02:21.772334   26392 round_trippers.go:463] GET https://192.168.39.243:8443/api/v1/nodes/ha-234651
	I0731 17:02:21.772339   26392 round_trippers.go:469] Request Headers:
	I0731 17:02:21.772346   26392 round_trippers.go:473]     Accept: application/json, */*
	I0731 17:02:21.772353   26392 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 17:02:21.775136   26392 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0731 17:02:21.775626   26392 pod_ready.go:92] pod "kube-controller-manager-ha-234651" in "kube-system" namespace has status "Ready":"True"
	I0731 17:02:21.775647   26392 pod_ready.go:81] duration metric: took 398.937458ms for pod "kube-controller-manager-ha-234651" in "kube-system" namespace to be "Ready" ...
	I0731 17:02:21.775659   26392 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-234651-m02" in "kube-system" namespace to be "Ready" ...
	I0731 17:02:21.972798   26392 request.go:629] Waited for 197.069148ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.243:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-234651-m02
	I0731 17:02:21.972882   26392 round_trippers.go:463] GET https://192.168.39.243:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-234651-m02
	I0731 17:02:21.972894   26392 round_trippers.go:469] Request Headers:
	I0731 17:02:21.972904   26392 round_trippers.go:473]     Accept: application/json, */*
	I0731 17:02:21.972915   26392 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 17:02:21.976110   26392 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 17:02:22.172448   26392 request.go:629] Waited for 195.762899ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.243:8443/api/v1/nodes/ha-234651-m02
	I0731 17:02:22.172520   26392 round_trippers.go:463] GET https://192.168.39.243:8443/api/v1/nodes/ha-234651-m02
	I0731 17:02:22.172532   26392 round_trippers.go:469] Request Headers:
	I0731 17:02:22.172543   26392 round_trippers.go:473]     Accept: application/json, */*
	I0731 17:02:22.172553   26392 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 17:02:22.175713   26392 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 17:02:22.176373   26392 pod_ready.go:92] pod "kube-controller-manager-ha-234651-m02" in "kube-system" namespace has status "Ready":"True"
	I0731 17:02:22.176395   26392 pod_ready.go:81] duration metric: took 400.728457ms for pod "kube-controller-manager-ha-234651-m02" in "kube-system" namespace to be "Ready" ...
	I0731 17:02:22.176407   26392 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-234651-m03" in "kube-system" namespace to be "Ready" ...
	I0731 17:02:22.372209   26392 request.go:629] Waited for 195.723268ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.243:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-234651-m03
	I0731 17:02:22.372284   26392 round_trippers.go:463] GET https://192.168.39.243:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-234651-m03
	I0731 17:02:22.372294   26392 round_trippers.go:469] Request Headers:
	I0731 17:02:22.372310   26392 round_trippers.go:473]     Accept: application/json, */*
	I0731 17:02:22.372315   26392 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 17:02:22.375732   26392 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 17:02:22.572690   26392 request.go:629] Waited for 196.364269ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.243:8443/api/v1/nodes/ha-234651-m03
	I0731 17:02:22.572766   26392 round_trippers.go:463] GET https://192.168.39.243:8443/api/v1/nodes/ha-234651-m03
	I0731 17:02:22.572772   26392 round_trippers.go:469] Request Headers:
	I0731 17:02:22.572780   26392 round_trippers.go:473]     Accept: application/json, */*
	I0731 17:02:22.572786   26392 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 17:02:22.576137   26392 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 17:02:22.576644   26392 pod_ready.go:92] pod "kube-controller-manager-ha-234651-m03" in "kube-system" namespace has status "Ready":"True"
	I0731 17:02:22.576660   26392 pod_ready.go:81] duration metric: took 400.245471ms for pod "kube-controller-manager-ha-234651-m03" in "kube-system" namespace to be "Ready" ...
	I0731 17:02:22.576671   26392 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-b8dcw" in "kube-system" namespace to be "Ready" ...
	I0731 17:02:22.772103   26392 request.go:629] Waited for 195.368675ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.243:8443/api/v1/namespaces/kube-system/pods/kube-proxy-b8dcw
	I0731 17:02:22.772183   26392 round_trippers.go:463] GET https://192.168.39.243:8443/api/v1/namespaces/kube-system/pods/kube-proxy-b8dcw
	I0731 17:02:22.772195   26392 round_trippers.go:469] Request Headers:
	I0731 17:02:22.772256   26392 round_trippers.go:473]     Accept: application/json, */*
	I0731 17:02:22.772269   26392 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 17:02:22.775401   26392 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 17:02:22.972548   26392 request.go:629] Waited for 196.34366ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.243:8443/api/v1/nodes/ha-234651-m02
	I0731 17:02:22.972602   26392 round_trippers.go:463] GET https://192.168.39.243:8443/api/v1/nodes/ha-234651-m02
	I0731 17:02:22.972608   26392 round_trippers.go:469] Request Headers:
	I0731 17:02:22.972615   26392 round_trippers.go:473]     Accept: application/json, */*
	I0731 17:02:22.972619   26392 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 17:02:22.977276   26392 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0731 17:02:22.977775   26392 pod_ready.go:92] pod "kube-proxy-b8dcw" in "kube-system" namespace has status "Ready":"True"
	I0731 17:02:22.977796   26392 pod_ready.go:81] duration metric: took 401.118741ms for pod "kube-proxy-b8dcw" in "kube-system" namespace to be "Ready" ...
	I0731 17:02:22.977808   26392 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-gfgjd" in "kube-system" namespace to be "Ready" ...
	I0731 17:02:23.172827   26392 request.go:629] Waited for 194.930334ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.243:8443/api/v1/namespaces/kube-system/pods/kube-proxy-gfgjd
	I0731 17:02:23.172883   26392 round_trippers.go:463] GET https://192.168.39.243:8443/api/v1/namespaces/kube-system/pods/kube-proxy-gfgjd
	I0731 17:02:23.172887   26392 round_trippers.go:469] Request Headers:
	I0731 17:02:23.172895   26392 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 17:02:23.172899   26392 round_trippers.go:473]     Accept: application/json, */*
	I0731 17:02:23.175893   26392 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0731 17:02:23.371902   26392 request.go:629] Waited for 195.281871ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.243:8443/api/v1/nodes/ha-234651-m03
	I0731 17:02:23.371952   26392 round_trippers.go:463] GET https://192.168.39.243:8443/api/v1/nodes/ha-234651-m03
	I0731 17:02:23.371957   26392 round_trippers.go:469] Request Headers:
	I0731 17:02:23.371964   26392 round_trippers.go:473]     Accept: application/json, */*
	I0731 17:02:23.371968   26392 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 17:02:23.374603   26392 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0731 17:02:23.375326   26392 pod_ready.go:92] pod "kube-proxy-gfgjd" in "kube-system" namespace has status "Ready":"True"
	I0731 17:02:23.375353   26392 pod_ready.go:81] duration metric: took 397.538025ms for pod "kube-proxy-gfgjd" in "kube-system" namespace to be "Ready" ...
	I0731 17:02:23.375362   26392 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-jfgs8" in "kube-system" namespace to be "Ready" ...
	I0731 17:02:23.572401   26392 request.go:629] Waited for 196.975032ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.243:8443/api/v1/namespaces/kube-system/pods/kube-proxy-jfgs8
	I0731 17:02:23.572469   26392 round_trippers.go:463] GET https://192.168.39.243:8443/api/v1/namespaces/kube-system/pods/kube-proxy-jfgs8
	I0731 17:02:23.572475   26392 round_trippers.go:469] Request Headers:
	I0731 17:02:23.572482   26392 round_trippers.go:473]     Accept: application/json, */*
	I0731 17:02:23.572488   26392 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 17:02:23.575615   26392 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 17:02:23.772754   26392 request.go:629] Waited for 196.370061ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.243:8443/api/v1/nodes/ha-234651
	I0731 17:02:23.772821   26392 round_trippers.go:463] GET https://192.168.39.243:8443/api/v1/nodes/ha-234651
	I0731 17:02:23.772831   26392 round_trippers.go:469] Request Headers:
	I0731 17:02:23.772840   26392 round_trippers.go:473]     Accept: application/json, */*
	I0731 17:02:23.772849   26392 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 17:02:23.775877   26392 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 17:02:23.776390   26392 pod_ready.go:92] pod "kube-proxy-jfgs8" in "kube-system" namespace has status "Ready":"True"
	I0731 17:02:23.776408   26392 pod_ready.go:81] duration metric: took 401.039832ms for pod "kube-proxy-jfgs8" in "kube-system" namespace to be "Ready" ...
	I0731 17:02:23.776417   26392 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-234651" in "kube-system" namespace to be "Ready" ...
	I0731 17:02:23.971894   26392 request.go:629] Waited for 195.395031ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.243:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-234651
	I0731 17:02:23.971956   26392 round_trippers.go:463] GET https://192.168.39.243:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-234651
	I0731 17:02:23.971961   26392 round_trippers.go:469] Request Headers:
	I0731 17:02:23.971968   26392 round_trippers.go:473]     Accept: application/json, */*
	I0731 17:02:23.971972   26392 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 17:02:23.976837   26392 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0731 17:02:24.172761   26392 request.go:629] Waited for 195.33399ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.243:8443/api/v1/nodes/ha-234651
	I0731 17:02:24.172816   26392 round_trippers.go:463] GET https://192.168.39.243:8443/api/v1/nodes/ha-234651
	I0731 17:02:24.172821   26392 round_trippers.go:469] Request Headers:
	I0731 17:02:24.172828   26392 round_trippers.go:473]     Accept: application/json, */*
	I0731 17:02:24.172836   26392 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 17:02:24.175689   26392 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0731 17:02:24.176304   26392 pod_ready.go:92] pod "kube-scheduler-ha-234651" in "kube-system" namespace has status "Ready":"True"
	I0731 17:02:24.176321   26392 pod_ready.go:81] duration metric: took 399.898252ms for pod "kube-scheduler-ha-234651" in "kube-system" namespace to be "Ready" ...
	I0731 17:02:24.176329   26392 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-234651-m02" in "kube-system" namespace to be "Ready" ...
	I0731 17:02:24.372431   26392 request.go:629] Waited for 196.044675ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.243:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-234651-m02
	I0731 17:02:24.372507   26392 round_trippers.go:463] GET https://192.168.39.243:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-234651-m02
	I0731 17:02:24.372514   26392 round_trippers.go:469] Request Headers:
	I0731 17:02:24.372525   26392 round_trippers.go:473]     Accept: application/json, */*
	I0731 17:02:24.372531   26392 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 17:02:24.375686   26392 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 17:02:24.572614   26392 request.go:629] Waited for 196.336033ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.243:8443/api/v1/nodes/ha-234651-m02
	I0731 17:02:24.572663   26392 round_trippers.go:463] GET https://192.168.39.243:8443/api/v1/nodes/ha-234651-m02
	I0731 17:02:24.572668   26392 round_trippers.go:469] Request Headers:
	I0731 17:02:24.572675   26392 round_trippers.go:473]     Accept: application/json, */*
	I0731 17:02:24.572680   26392 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 17:02:24.576071   26392 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 17:02:24.576616   26392 pod_ready.go:92] pod "kube-scheduler-ha-234651-m02" in "kube-system" namespace has status "Ready":"True"
	I0731 17:02:24.576634   26392 pod_ready.go:81] duration metric: took 400.298948ms for pod "kube-scheduler-ha-234651-m02" in "kube-system" namespace to be "Ready" ...
	I0731 17:02:24.576643   26392 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-234651-m03" in "kube-system" namespace to be "Ready" ...
	I0731 17:02:24.772668   26392 request.go:629] Waited for 195.957361ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.243:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-234651-m03
	I0731 17:02:24.772720   26392 round_trippers.go:463] GET https://192.168.39.243:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-234651-m03
	I0731 17:02:24.772725   26392 round_trippers.go:469] Request Headers:
	I0731 17:02:24.772732   26392 round_trippers.go:473]     Accept: application/json, */*
	I0731 17:02:24.772735   26392 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 17:02:24.776266   26392 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 17:02:24.972184   26392 request.go:629] Waited for 195.351472ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.243:8443/api/v1/nodes/ha-234651-m03
	I0731 17:02:24.972268   26392 round_trippers.go:463] GET https://192.168.39.243:8443/api/v1/nodes/ha-234651-m03
	I0731 17:02:24.972277   26392 round_trippers.go:469] Request Headers:
	I0731 17:02:24.972288   26392 round_trippers.go:473]     Accept: application/json, */*
	I0731 17:02:24.972298   26392 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 17:02:24.975807   26392 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 17:02:24.976515   26392 pod_ready.go:92] pod "kube-scheduler-ha-234651-m03" in "kube-system" namespace has status "Ready":"True"
	I0731 17:02:24.976534   26392 pod_ready.go:81] duration metric: took 399.885145ms for pod "kube-scheduler-ha-234651-m03" in "kube-system" namespace to be "Ready" ...
	I0731 17:02:24.976544   26392 pod_ready.go:38] duration metric: took 5.199474413s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
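Each pod_ready wait above pairs a GET on the system-critical pod with a GET on its node, gated on the pod's Ready condition and throttled client-side between requests. A minimal sketch (not minikube's code; kubeconfig path and the choice of selector are assumptions, the selector itself is one of those quoted in the log) of checking PodReady for pods matching such a label:

```go
// Minimal sketch: list kube-system pods matching a component label and
// report whether each has the PodReady condition set to True.
package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func isPodReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	// Assumed kubeconfig location.
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	// One of the selectors quoted in the pod_ready log lines above.
	pods, err := client.CoreV1().Pods("kube-system").List(context.Background(),
		metav1.ListOptions{LabelSelector: "component=etcd"})
	if err != nil {
		panic(err)
	}
	for i := range pods.Items {
		fmt.Printf("%s ready=%v\n", pods.Items[i].Name, isPodReady(&pods.Items[i]))
	}
}
```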
	I0731 17:02:24.976559   26392 api_server.go:52] waiting for apiserver process to appear ...
	I0731 17:02:24.976619   26392 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 17:02:24.991603   26392 api_server.go:72] duration metric: took 23.522784014s to wait for apiserver process to appear ...
	I0731 17:02:24.991627   26392 api_server.go:88] waiting for apiserver healthz status ...
	I0731 17:02:24.991648   26392 api_server.go:253] Checking apiserver healthz at https://192.168.39.243:8443/healthz ...
	I0731 17:02:24.996060   26392 api_server.go:279] https://192.168.39.243:8443/healthz returned 200:
	ok
	I0731 17:02:24.996124   26392 round_trippers.go:463] GET https://192.168.39.243:8443/version
	I0731 17:02:24.996134   26392 round_trippers.go:469] Request Headers:
	I0731 17:02:24.996146   26392 round_trippers.go:473]     Accept: application/json, */*
	I0731 17:02:24.996153   26392 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 17:02:24.997003   26392 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0731 17:02:24.997065   26392 api_server.go:141] control plane version: v1.30.3
	I0731 17:02:24.997080   26392 api_server.go:131] duration metric: took 5.446251ms to wait for apiserver health ...
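Once the pods are Ready, the test probes the apiserver's /healthz endpoint, expects a 200 with the literal body "ok", and then reads /version to record the control-plane version. A hedged sketch of the same raw probe using client-go's REST client (kubeconfig path assumed):

```go
// Minimal sketch: perform a raw GET on /healthz against the apiserver and
// print the body, which should be "ok" when the control plane is healthy.
package main

import (
	"context"
	"fmt"

	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile) // assumed kubeconfig path
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	// AbsPath bypasses group/version routing and hits /healthz directly.
	body, err := client.Discovery().RESTClient().Get().AbsPath("/healthz").DoRaw(context.Background())
	if err != nil {
		panic(err)
	}
	fmt.Printf("healthz: %s\n", body)
}
```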
	I0731 17:02:24.997090   26392 system_pods.go:43] waiting for kube-system pods to appear ...
	I0731 17:02:25.172062   26392 request.go:629] Waited for 174.909019ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.243:8443/api/v1/namespaces/kube-system/pods
	I0731 17:02:25.172129   26392 round_trippers.go:463] GET https://192.168.39.243:8443/api/v1/namespaces/kube-system/pods
	I0731 17:02:25.172136   26392 round_trippers.go:469] Request Headers:
	I0731 17:02:25.172144   26392 round_trippers.go:473]     Accept: application/json, */*
	I0731 17:02:25.172150   26392 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 17:02:25.180567   26392 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0731 17:02:25.187031   26392 system_pods.go:59] 24 kube-system pods found
	I0731 17:02:25.187058   26392 system_pods.go:61] "coredns-7db6d8ff4d-nsx9j" [b2cde006-dbb7-4e6f-a5f1-cf7760740104] Running
	I0731 17:02:25.187063   26392 system_pods.go:61] "coredns-7db6d8ff4d-qbqb9" [4f76f862-d39e-4976-90e6-fb9a25cc485a] Running
	I0731 17:02:25.187067   26392 system_pods.go:61] "etcd-ha-234651" [c6fb7163-2f43-4b81-a3d9-e1550ddabc4c] Running
	I0731 17:02:25.187070   26392 system_pods.go:61] "etcd-ha-234651-m02" [ba88e585-8e9b-4177-82e0-4222210cfd2d] Running
	I0731 17:02:25.187074   26392 system_pods.go:61] "etcd-ha-234651-m03" [6d8ddabd-e7d2-48c7-93ce-ab3f68540789] Running
	I0731 17:02:25.187078   26392 system_pods.go:61] "kindnet-2xqxq" [a9eb3817-aec9-414b-80ab-665236250ab0] Running
	I0731 17:02:25.187081   26392 system_pods.go:61] "kindnet-phmdp" [f0e0736a-4005-4b0b-81fc-e4cea26a7d42] Running
	I0731 17:02:25.187084   26392 system_pods.go:61] "kindnet-wfbt4" [9eda8095-ce75-4043-8ddf-6e5663de8212] Running
	I0731 17:02:25.187088   26392 system_pods.go:61] "kube-apiserver-ha-234651" [ce3cbf02-100c-4437-8d1c-2ac4a2822866] Running
	I0731 17:02:25.187091   26392 system_pods.go:61] "kube-apiserver-ha-234651-m02" [e5eea595-36d9-497e-83be-aac5845b566a] Running
	I0731 17:02:25.187094   26392 system_pods.go:61] "kube-apiserver-ha-234651-m03" [42a6e972-6278-433a-93ea-1661c9827678] Running
	I0731 17:02:25.187098   26392 system_pods.go:61] "kube-controller-manager-ha-234651" [3d111090-8fee-40ab-8cab-df7dd984bdb3] Running
	I0731 17:02:25.187101   26392 system_pods.go:61] "kube-controller-manager-ha-234651-m02" [28a10b93-7e55-4e2d-a5be-3605fa94fdb0] Running
	I0731 17:02:25.187104   26392 system_pods.go:61] "kube-controller-manager-ha-234651-m03" [a5d2498c-f9be-4425-9a3d-570903f9f62e] Running
	I0731 17:02:25.187106   26392 system_pods.go:61] "kube-proxy-b8dcw" [cbd2a846-0490-4465-b299-76b6fb0d5710] Running
	I0731 17:02:25.187134   26392 system_pods.go:61] "kube-proxy-gfgjd" [b20d9a5c-0521-49f5-9002-74ae98e683d0] Running
	I0731 17:02:25.187138   26392 system_pods.go:61] "kube-proxy-jfgs8" [5ead85d8-0fd0-4900-8c02-2f23217ca208] Running
	I0731 17:02:25.187145   26392 system_pods.go:61] "kube-scheduler-ha-234651" [e1e2064c-17d7-4ea9-b6d3-b13c9ca34789] Running
	I0731 17:02:25.187148   26392 system_pods.go:61] "kube-scheduler-ha-234651-m02" [2ed86332-42ba-46c9-901c-38880e8ec54a] Running
	I0731 17:02:25.187152   26392 system_pods.go:61] "kube-scheduler-ha-234651-m03" [274102a7-621b-496d-95e0-6588195be8b0] Running
	I0731 17:02:25.187154   26392 system_pods.go:61] "kube-vip-ha-234651" [205b811c-93e9-4f66-9d0e-67abbc8ff1ef] Running
	I0731 17:02:25.187158   26392 system_pods.go:61] "kube-vip-ha-234651-m02" [8fc0f86c-f043-44c2-aa8b-544fd6b7de22] Running
	I0731 17:02:25.187163   26392 system_pods.go:61] "kube-vip-ha-234651-m03" [d1ca6f6b-095f-457d-a4d3-2bac916bb8ba] Running
	I0731 17:02:25.187166   26392 system_pods.go:61] "storage-provisioner" [87455537-bdb8-438b-8122-db85bed01d09] Running
	I0731 17:02:25.187171   26392 system_pods.go:74] duration metric: took 190.073866ms to wait for pod list to return data ...
	I0731 17:02:25.187180   26392 default_sa.go:34] waiting for default service account to be created ...
	I0731 17:02:25.372614   26392 request.go:629] Waited for 185.36405ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.243:8443/api/v1/namespaces/default/serviceaccounts
	I0731 17:02:25.372669   26392 round_trippers.go:463] GET https://192.168.39.243:8443/api/v1/namespaces/default/serviceaccounts
	I0731 17:02:25.372675   26392 round_trippers.go:469] Request Headers:
	I0731 17:02:25.372682   26392 round_trippers.go:473]     Accept: application/json, */*
	I0731 17:02:25.372685   26392 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 17:02:25.376402   26392 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 17:02:25.376545   26392 default_sa.go:45] found service account: "default"
	I0731 17:02:25.376568   26392 default_sa.go:55] duration metric: took 189.37928ms for default service account to be created ...
	I0731 17:02:25.376578   26392 system_pods.go:116] waiting for k8s-apps to be running ...
	I0731 17:02:25.572785   26392 request.go:629] Waited for 196.133383ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.243:8443/api/v1/namespaces/kube-system/pods
	I0731 17:02:25.572863   26392 round_trippers.go:463] GET https://192.168.39.243:8443/api/v1/namespaces/kube-system/pods
	I0731 17:02:25.572874   26392 round_trippers.go:469] Request Headers:
	I0731 17:02:25.572884   26392 round_trippers.go:473]     Accept: application/json, */*
	I0731 17:02:25.572890   26392 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 17:02:25.581431   26392 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0731 17:02:25.587354   26392 system_pods.go:86] 24 kube-system pods found
	I0731 17:02:25.587378   26392 system_pods.go:89] "coredns-7db6d8ff4d-nsx9j" [b2cde006-dbb7-4e6f-a5f1-cf7760740104] Running
	I0731 17:02:25.587384   26392 system_pods.go:89] "coredns-7db6d8ff4d-qbqb9" [4f76f862-d39e-4976-90e6-fb9a25cc485a] Running
	I0731 17:02:25.587388   26392 system_pods.go:89] "etcd-ha-234651" [c6fb7163-2f43-4b81-a3d9-e1550ddabc4c] Running
	I0731 17:02:25.587393   26392 system_pods.go:89] "etcd-ha-234651-m02" [ba88e585-8e9b-4177-82e0-4222210cfd2d] Running
	I0731 17:02:25.587397   26392 system_pods.go:89] "etcd-ha-234651-m03" [6d8ddabd-e7d2-48c7-93ce-ab3f68540789] Running
	I0731 17:02:25.587401   26392 system_pods.go:89] "kindnet-2xqxq" [a9eb3817-aec9-414b-80ab-665236250ab0] Running
	I0731 17:02:25.587405   26392 system_pods.go:89] "kindnet-phmdp" [f0e0736a-4005-4b0b-81fc-e4cea26a7d42] Running
	I0731 17:02:25.587409   26392 system_pods.go:89] "kindnet-wfbt4" [9eda8095-ce75-4043-8ddf-6e5663de8212] Running
	I0731 17:02:25.587413   26392 system_pods.go:89] "kube-apiserver-ha-234651" [ce3cbf02-100c-4437-8d1c-2ac4a2822866] Running
	I0731 17:02:25.587417   26392 system_pods.go:89] "kube-apiserver-ha-234651-m02" [e5eea595-36d9-497e-83be-aac5845b566a] Running
	I0731 17:02:25.587421   26392 system_pods.go:89] "kube-apiserver-ha-234651-m03" [42a6e972-6278-433a-93ea-1661c9827678] Running
	I0731 17:02:25.587426   26392 system_pods.go:89] "kube-controller-manager-ha-234651" [3d111090-8fee-40ab-8cab-df7dd984bdb3] Running
	I0731 17:02:25.587431   26392 system_pods.go:89] "kube-controller-manager-ha-234651-m02" [28a10b93-7e55-4e2d-a5be-3605fa94fdb0] Running
	I0731 17:02:25.587435   26392 system_pods.go:89] "kube-controller-manager-ha-234651-m03" [a5d2498c-f9be-4425-9a3d-570903f9f62e] Running
	I0731 17:02:25.587441   26392 system_pods.go:89] "kube-proxy-b8dcw" [cbd2a846-0490-4465-b299-76b6fb0d5710] Running
	I0731 17:02:25.587447   26392 system_pods.go:89] "kube-proxy-gfgjd" [b20d9a5c-0521-49f5-9002-74ae98e683d0] Running
	I0731 17:02:25.587451   26392 system_pods.go:89] "kube-proxy-jfgs8" [5ead85d8-0fd0-4900-8c02-2f23217ca208] Running
	I0731 17:02:25.587455   26392 system_pods.go:89] "kube-scheduler-ha-234651" [e1e2064c-17d7-4ea9-b6d3-b13c9ca34789] Running
	I0731 17:02:25.587459   26392 system_pods.go:89] "kube-scheduler-ha-234651-m02" [2ed86332-42ba-46c9-901c-38880e8ec54a] Running
	I0731 17:02:25.587466   26392 system_pods.go:89] "kube-scheduler-ha-234651-m03" [274102a7-621b-496d-95e0-6588195be8b0] Running
	I0731 17:02:25.587470   26392 system_pods.go:89] "kube-vip-ha-234651" [205b811c-93e9-4f66-9d0e-67abbc8ff1ef] Running
	I0731 17:02:25.587474   26392 system_pods.go:89] "kube-vip-ha-234651-m02" [8fc0f86c-f043-44c2-aa8b-544fd6b7de22] Running
	I0731 17:02:25.587477   26392 system_pods.go:89] "kube-vip-ha-234651-m03" [d1ca6f6b-095f-457d-a4d3-2bac916bb8ba] Running
	I0731 17:02:25.587480   26392 system_pods.go:89] "storage-provisioner" [87455537-bdb8-438b-8122-db85bed01d09] Running
	I0731 17:02:25.587486   26392 system_pods.go:126] duration metric: took 210.90289ms to wait for k8s-apps to be running ...
	I0731 17:02:25.587496   26392 system_svc.go:44] waiting for kubelet service to be running ....
	I0731 17:02:25.587536   26392 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0731 17:02:25.603526   26392 system_svc.go:56] duration metric: took 16.020558ms WaitForService to wait for kubelet
	I0731 17:02:25.603559   26392 kubeadm.go:582] duration metric: took 24.134740341s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0731 17:02:25.603585   26392 node_conditions.go:102] verifying NodePressure condition ...
	I0731 17:02:25.772812   26392 request.go:629] Waited for 169.160336ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.243:8443/api/v1/nodes
	I0731 17:02:25.772896   26392 round_trippers.go:463] GET https://192.168.39.243:8443/api/v1/nodes
	I0731 17:02:25.772907   26392 round_trippers.go:469] Request Headers:
	I0731 17:02:25.772918   26392 round_trippers.go:473]     Accept: application/json, */*
	I0731 17:02:25.772927   26392 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 17:02:25.776466   26392 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 17:02:25.777443   26392 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0731 17:02:25.777467   26392 node_conditions.go:123] node cpu capacity is 2
	I0731 17:02:25.777478   26392 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0731 17:02:25.777482   26392 node_conditions.go:123] node cpu capacity is 2
	I0731 17:02:25.777486   26392 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0731 17:02:25.777489   26392 node_conditions.go:123] node cpu capacity is 2
	I0731 17:02:25.777493   26392 node_conditions.go:105] duration metric: took 173.903547ms to run NodePressure ...
	I0731 17:02:25.777504   26392 start.go:241] waiting for startup goroutines ...
	I0731 17:02:25.777522   26392 start.go:255] writing updated cluster config ...
	I0731 17:02:25.777809   26392 ssh_runner.go:195] Run: rm -f paused
	I0731 17:02:25.831285   26392 start.go:600] kubectl: 1.30.3, cluster: 1.30.3 (minor skew: 0)
	I0731 17:02:25.833187   26392 out.go:177] * Done! kubectl is now configured to use "ha-234651" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Jul 31 17:07:05 ha-234651 crio[683]: time="2024-07-31 17:07:05.050133458Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722445625050111831,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:145867,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=15c730fa-07bc-4a2c-b0ff-92c2e7969791 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 31 17:07:05 ha-234651 crio[683]: time="2024-07-31 17:07:05.050712671Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=bcb216fb-813f-4148-b68f-73c2a297ee7c name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 17:07:05 ha-234651 crio[683]: time="2024-07-31 17:07:05.050765747Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=bcb216fb-813f-4148-b68f-73c2a297ee7c name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 17:07:05 ha-234651 crio[683]: time="2024-07-31 17:07:05.050983276Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:5e4d66f773ff46f7a130d9359ac1316e1117516ebef8b32f9b20515706b3a2f8,PodSandboxId:754996ae28b01f43599725b23a8ef7ea035322c1da4ccbda6c379abd7e897fe0,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722445212877304638,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-qbqb9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4f76f862-d39e-4976-90e6-fb9a25cc485a,},Annotations:map[string]string{io.kubernetes.container.hash: 9bcf3d4b,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\
"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e8ef655791fe4b0b98612bab1bc9fea7e42c73e5327bdd0654676979cb2e8542,PodSandboxId:1616415c6b8f6463e22fcd4bc693a293f8fdee9b60f2f3d789bc645ace8300a6,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722445212817639863,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-nsx9j,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid
: b2cde006-dbb7-4e6f-a5f1-cf7760740104,},Annotations:map[string]string{io.kubernetes.container.hash: 9d1dd63f,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0cc5e3465a8643593cfbf5871749b29c1edcf22920dda340f9e61a512894333d,PodSandboxId:1e9afabdcc733b5f6d64e03aa33527331e08c27fcc253d8f5a936ac47ba41635,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNI
NG,CreatedAt:1722445212754307746,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 87455537-bdb8-438b-8122-db85bed01d09,},Annotations:map[string]string{io.kubernetes.container.hash: f1005df0,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dd9f6c4536535bb3eb07d6fa75fe49579d9cdee5ff6e4c440e0a58359d4f2f08,PodSandboxId:99ba869aafb121f3032d38c0a926a34605210fa5e62304b252efb3ce0f139b7c,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:C
ONTAINER_RUNNING,CreatedAt:1722445201111476345,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-wfbt4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9eda8095-ce75-4043-8ddf-6e5663de8212,},Annotations:map[string]string{io.kubernetes.container.hash: f1243e66,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:631c8cee6152ae1808419f27d4b63fa0a6bcebb15c757dd96090c6267d63e6c2,PodSandboxId:88fd41ca8aad113a54b218f3a36159bdb37464ee5f76ec8ae31f27cd31f7daa1,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:17224451
97367288188,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-jfgs8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5ead85d8-0fd0-4900-8c02-2f23217ca208,},Annotations:map[string]string{io.kubernetes.container.hash: 85f1c355,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:639ed1a246cfddb5bcce86bc66133eebc7339f13348608e46548a872b8df456e,PodSandboxId:e715c97e96fabf0f1c9f4e6025318eda97b8f431f80df4d8ac4517b03917faad,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1722445179
126430451,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-234651,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 377009a00f211ea8abb80d365c74a9fd,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e5b3417940cd8bf8065dfefafb918da0e5bfda9fd5f7a2f9ccea0f1615151657,PodSandboxId:8ccf20bcd63f37163f71bce3d8a364eb92d86b0694bc75949c8979e5633c5cc2,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722445176479082471,Labels:map[string]string{io.kuber
netes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-234651,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7f8584f0ef1731ec6b6fb11b7fa84aeb,},Annotations:map[string]string{io.kubernetes.container.hash: a3feaab1,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b48ac56e48fe04efb8cbe0a1f24302f4ddd366a42a7ba2d4395113bf2157a3ea,PodSandboxId:5e1eebee97f6b9d241e17235a27f64767bdc5937bd5ca3537ce4297a25812986,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722445176435123047,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.k
ubernetes.pod.name: kube-apiserver-ha-234651,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 399c842a2f5d3312c2f955f494ccfe00,},Annotations:map[string]string{io.kubernetes.container.hash: c9b9fb5b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e3ae09638d5d5724f3e27352156c272f710b70f2c3319adc46eb518d357dc8e2,PodSandboxId:5c961f0d694a144ae953e67f65f86080fd9a64718dfc16e8697475eadb817895,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722445176401816670,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kube
rnetes.pod.name: kube-controller-manager-ha-234651,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d9982d04d181fbc6333c44627c777728,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ded6421f2f11d7f64ce80d1681e3d4c9d1453ff8aecc860e184ad0865768adde,PodSandboxId:90ca479a21b17fc36c4328bfd82fe7023d144a893b3a1fbc315c66af59696ea2,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722445176364748602,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.n
ame: kube-scheduler-ha-234651,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3defc3711c46905c8aec12eb318ecd3b,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=bcb216fb-813f-4148-b68f-73c2a297ee7c name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 17:07:05 ha-234651 crio[683]: time="2024-07-31 17:07:05.085548507Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=0e0b320a-95b4-4963-be2e-a221512be5a6 name=/runtime.v1.RuntimeService/Version
	Jul 31 17:07:05 ha-234651 crio[683]: time="2024-07-31 17:07:05.085624901Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=0e0b320a-95b4-4963-be2e-a221512be5a6 name=/runtime.v1.RuntimeService/Version
	Jul 31 17:07:05 ha-234651 crio[683]: time="2024-07-31 17:07:05.086542485Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=919c72cc-4392-470b-a40e-7cd91ce1cf56 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 31 17:07:05 ha-234651 crio[683]: time="2024-07-31 17:07:05.087551966Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722445625087517104,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:145867,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=919c72cc-4392-470b-a40e-7cd91ce1cf56 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 31 17:07:05 ha-234651 crio[683]: time="2024-07-31 17:07:05.091216294Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=2d236382-67b5-439e-a0a7-3f9736c4fa77 name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 17:07:05 ha-234651 crio[683]: time="2024-07-31 17:07:05.091320009Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=2d236382-67b5-439e-a0a7-3f9736c4fa77 name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 17:07:05 ha-234651 crio[683]: time="2024-07-31 17:07:05.091618313Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:5e4d66f773ff46f7a130d9359ac1316e1117516ebef8b32f9b20515706b3a2f8,PodSandboxId:754996ae28b01f43599725b23a8ef7ea035322c1da4ccbda6c379abd7e897fe0,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722445212877304638,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-qbqb9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4f76f862-d39e-4976-90e6-fb9a25cc485a,},Annotations:map[string]string{io.kubernetes.container.hash: 9bcf3d4b,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\
"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e8ef655791fe4b0b98612bab1bc9fea7e42c73e5327bdd0654676979cb2e8542,PodSandboxId:1616415c6b8f6463e22fcd4bc693a293f8fdee9b60f2f3d789bc645ace8300a6,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722445212817639863,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-nsx9j,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid
: b2cde006-dbb7-4e6f-a5f1-cf7760740104,},Annotations:map[string]string{io.kubernetes.container.hash: 9d1dd63f,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0cc5e3465a8643593cfbf5871749b29c1edcf22920dda340f9e61a512894333d,PodSandboxId:1e9afabdcc733b5f6d64e03aa33527331e08c27fcc253d8f5a936ac47ba41635,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNI
NG,CreatedAt:1722445212754307746,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 87455537-bdb8-438b-8122-db85bed01d09,},Annotations:map[string]string{io.kubernetes.container.hash: f1005df0,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dd9f6c4536535bb3eb07d6fa75fe49579d9cdee5ff6e4c440e0a58359d4f2f08,PodSandboxId:99ba869aafb121f3032d38c0a926a34605210fa5e62304b252efb3ce0f139b7c,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:C
ONTAINER_RUNNING,CreatedAt:1722445201111476345,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-wfbt4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9eda8095-ce75-4043-8ddf-6e5663de8212,},Annotations:map[string]string{io.kubernetes.container.hash: f1243e66,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:631c8cee6152ae1808419f27d4b63fa0a6bcebb15c757dd96090c6267d63e6c2,PodSandboxId:88fd41ca8aad113a54b218f3a36159bdb37464ee5f76ec8ae31f27cd31f7daa1,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:17224451
97367288188,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-jfgs8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5ead85d8-0fd0-4900-8c02-2f23217ca208,},Annotations:map[string]string{io.kubernetes.container.hash: 85f1c355,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:639ed1a246cfddb5bcce86bc66133eebc7339f13348608e46548a872b8df456e,PodSandboxId:e715c97e96fabf0f1c9f4e6025318eda97b8f431f80df4d8ac4517b03917faad,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1722445179
126430451,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-234651,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 377009a00f211ea8abb80d365c74a9fd,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e5b3417940cd8bf8065dfefafb918da0e5bfda9fd5f7a2f9ccea0f1615151657,PodSandboxId:8ccf20bcd63f37163f71bce3d8a364eb92d86b0694bc75949c8979e5633c5cc2,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722445176479082471,Labels:map[string]string{io.kuber
netes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-234651,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7f8584f0ef1731ec6b6fb11b7fa84aeb,},Annotations:map[string]string{io.kubernetes.container.hash: a3feaab1,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b48ac56e48fe04efb8cbe0a1f24302f4ddd366a42a7ba2d4395113bf2157a3ea,PodSandboxId:5e1eebee97f6b9d241e17235a27f64767bdc5937bd5ca3537ce4297a25812986,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722445176435123047,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.k
ubernetes.pod.name: kube-apiserver-ha-234651,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 399c842a2f5d3312c2f955f494ccfe00,},Annotations:map[string]string{io.kubernetes.container.hash: c9b9fb5b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e3ae09638d5d5724f3e27352156c272f710b70f2c3319adc46eb518d357dc8e2,PodSandboxId:5c961f0d694a144ae953e67f65f86080fd9a64718dfc16e8697475eadb817895,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722445176401816670,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kube
rnetes.pod.name: kube-controller-manager-ha-234651,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d9982d04d181fbc6333c44627c777728,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ded6421f2f11d7f64ce80d1681e3d4c9d1453ff8aecc860e184ad0865768adde,PodSandboxId:90ca479a21b17fc36c4328bfd82fe7023d144a893b3a1fbc315c66af59696ea2,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722445176364748602,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.n
ame: kube-scheduler-ha-234651,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3defc3711c46905c8aec12eb318ecd3b,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=2d236382-67b5-439e-a0a7-3f9736c4fa77 name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 17:07:05 ha-234651 crio[683]: time="2024-07-31 17:07:05.125572344Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=887a8e1d-206e-4de1-8259-80feee9a7144 name=/runtime.v1.RuntimeService/Version
	Jul 31 17:07:05 ha-234651 crio[683]: time="2024-07-31 17:07:05.125657190Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=887a8e1d-206e-4de1-8259-80feee9a7144 name=/runtime.v1.RuntimeService/Version
	Jul 31 17:07:05 ha-234651 crio[683]: time="2024-07-31 17:07:05.127499373Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=426a2b75-651f-4bdf-bfc8-4bd527d3be7a name=/runtime.v1.ImageService/ImageFsInfo
	Jul 31 17:07:05 ha-234651 crio[683]: time="2024-07-31 17:07:05.127904468Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722445625127884500,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:145867,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=426a2b75-651f-4bdf-bfc8-4bd527d3be7a name=/runtime.v1.ImageService/ImageFsInfo
	Jul 31 17:07:05 ha-234651 crio[683]: time="2024-07-31 17:07:05.128475896Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=57330326-ca45-4e43-9c40-2531686e2ac8 name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 17:07:05 ha-234651 crio[683]: time="2024-07-31 17:07:05.128536799Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=57330326-ca45-4e43-9c40-2531686e2ac8 name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 17:07:05 ha-234651 crio[683]: time="2024-07-31 17:07:05.128761136Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:5e4d66f773ff46f7a130d9359ac1316e1117516ebef8b32f9b20515706b3a2f8,PodSandboxId:754996ae28b01f43599725b23a8ef7ea035322c1da4ccbda6c379abd7e897fe0,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722445212877304638,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-qbqb9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4f76f862-d39e-4976-90e6-fb9a25cc485a,},Annotations:map[string]string{io.kubernetes.container.hash: 9bcf3d4b,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\
"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e8ef655791fe4b0b98612bab1bc9fea7e42c73e5327bdd0654676979cb2e8542,PodSandboxId:1616415c6b8f6463e22fcd4bc693a293f8fdee9b60f2f3d789bc645ace8300a6,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722445212817639863,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-nsx9j,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid
: b2cde006-dbb7-4e6f-a5f1-cf7760740104,},Annotations:map[string]string{io.kubernetes.container.hash: 9d1dd63f,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0cc5e3465a8643593cfbf5871749b29c1edcf22920dda340f9e61a512894333d,PodSandboxId:1e9afabdcc733b5f6d64e03aa33527331e08c27fcc253d8f5a936ac47ba41635,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNI
NG,CreatedAt:1722445212754307746,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 87455537-bdb8-438b-8122-db85bed01d09,},Annotations:map[string]string{io.kubernetes.container.hash: f1005df0,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dd9f6c4536535bb3eb07d6fa75fe49579d9cdee5ff6e4c440e0a58359d4f2f08,PodSandboxId:99ba869aafb121f3032d38c0a926a34605210fa5e62304b252efb3ce0f139b7c,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:C
ONTAINER_RUNNING,CreatedAt:1722445201111476345,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-wfbt4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9eda8095-ce75-4043-8ddf-6e5663de8212,},Annotations:map[string]string{io.kubernetes.container.hash: f1243e66,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:631c8cee6152ae1808419f27d4b63fa0a6bcebb15c757dd96090c6267d63e6c2,PodSandboxId:88fd41ca8aad113a54b218f3a36159bdb37464ee5f76ec8ae31f27cd31f7daa1,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:17224451
97367288188,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-jfgs8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5ead85d8-0fd0-4900-8c02-2f23217ca208,},Annotations:map[string]string{io.kubernetes.container.hash: 85f1c355,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:639ed1a246cfddb5bcce86bc66133eebc7339f13348608e46548a872b8df456e,PodSandboxId:e715c97e96fabf0f1c9f4e6025318eda97b8f431f80df4d8ac4517b03917faad,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1722445179
126430451,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-234651,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 377009a00f211ea8abb80d365c74a9fd,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e5b3417940cd8bf8065dfefafb918da0e5bfda9fd5f7a2f9ccea0f1615151657,PodSandboxId:8ccf20bcd63f37163f71bce3d8a364eb92d86b0694bc75949c8979e5633c5cc2,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722445176479082471,Labels:map[string]string{io.kuber
netes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-234651,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7f8584f0ef1731ec6b6fb11b7fa84aeb,},Annotations:map[string]string{io.kubernetes.container.hash: a3feaab1,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b48ac56e48fe04efb8cbe0a1f24302f4ddd366a42a7ba2d4395113bf2157a3ea,PodSandboxId:5e1eebee97f6b9d241e17235a27f64767bdc5937bd5ca3537ce4297a25812986,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722445176435123047,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.k
ubernetes.pod.name: kube-apiserver-ha-234651,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 399c842a2f5d3312c2f955f494ccfe00,},Annotations:map[string]string{io.kubernetes.container.hash: c9b9fb5b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e3ae09638d5d5724f3e27352156c272f710b70f2c3319adc46eb518d357dc8e2,PodSandboxId:5c961f0d694a144ae953e67f65f86080fd9a64718dfc16e8697475eadb817895,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722445176401816670,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kube
rnetes.pod.name: kube-controller-manager-ha-234651,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d9982d04d181fbc6333c44627c777728,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ded6421f2f11d7f64ce80d1681e3d4c9d1453ff8aecc860e184ad0865768adde,PodSandboxId:90ca479a21b17fc36c4328bfd82fe7023d144a893b3a1fbc315c66af59696ea2,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722445176364748602,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.n
ame: kube-scheduler-ha-234651,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3defc3711c46905c8aec12eb318ecd3b,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=57330326-ca45-4e43-9c40-2531686e2ac8 name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 17:07:05 ha-234651 crio[683]: time="2024-07-31 17:07:05.176014442Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=63980917-2ac5-4a27-9f15-eb8a33ce0f6c name=/runtime.v1.RuntimeService/Version
	Jul 31 17:07:05 ha-234651 crio[683]: time="2024-07-31 17:07:05.176093763Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=63980917-2ac5-4a27-9f15-eb8a33ce0f6c name=/runtime.v1.RuntimeService/Version
	Jul 31 17:07:05 ha-234651 crio[683]: time="2024-07-31 17:07:05.177250571Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=8e692cfb-9505-4c15-822d-b1e3f4895411 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 31 17:07:05 ha-234651 crio[683]: time="2024-07-31 17:07:05.177794785Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722445625177771908,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:145867,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=8e692cfb-9505-4c15-822d-b1e3f4895411 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 31 17:07:05 ha-234651 crio[683]: time="2024-07-31 17:07:05.178377551Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=174e2fc8-a126-41ca-af86-35bc0b2a3c2f name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 17:07:05 ha-234651 crio[683]: time="2024-07-31 17:07:05.178425465Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=174e2fc8-a126-41ca-af86-35bc0b2a3c2f name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 17:07:05 ha-234651 crio[683]: time="2024-07-31 17:07:05.178657464Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:5e4d66f773ff46f7a130d9359ac1316e1117516ebef8b32f9b20515706b3a2f8,PodSandboxId:754996ae28b01f43599725b23a8ef7ea035322c1da4ccbda6c379abd7e897fe0,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722445212877304638,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-qbqb9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4f76f862-d39e-4976-90e6-fb9a25cc485a,},Annotations:map[string]string{io.kubernetes.container.hash: 9bcf3d4b,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\
"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e8ef655791fe4b0b98612bab1bc9fea7e42c73e5327bdd0654676979cb2e8542,PodSandboxId:1616415c6b8f6463e22fcd4bc693a293f8fdee9b60f2f3d789bc645ace8300a6,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722445212817639863,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-nsx9j,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid
: b2cde006-dbb7-4e6f-a5f1-cf7760740104,},Annotations:map[string]string{io.kubernetes.container.hash: 9d1dd63f,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0cc5e3465a8643593cfbf5871749b29c1edcf22920dda340f9e61a512894333d,PodSandboxId:1e9afabdcc733b5f6d64e03aa33527331e08c27fcc253d8f5a936ac47ba41635,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNI
NG,CreatedAt:1722445212754307746,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 87455537-bdb8-438b-8122-db85bed01d09,},Annotations:map[string]string{io.kubernetes.container.hash: f1005df0,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dd9f6c4536535bb3eb07d6fa75fe49579d9cdee5ff6e4c440e0a58359d4f2f08,PodSandboxId:99ba869aafb121f3032d38c0a926a34605210fa5e62304b252efb3ce0f139b7c,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:C
ONTAINER_RUNNING,CreatedAt:1722445201111476345,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-wfbt4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9eda8095-ce75-4043-8ddf-6e5663de8212,},Annotations:map[string]string{io.kubernetes.container.hash: f1243e66,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:631c8cee6152ae1808419f27d4b63fa0a6bcebb15c757dd96090c6267d63e6c2,PodSandboxId:88fd41ca8aad113a54b218f3a36159bdb37464ee5f76ec8ae31f27cd31f7daa1,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:17224451
97367288188,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-jfgs8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5ead85d8-0fd0-4900-8c02-2f23217ca208,},Annotations:map[string]string{io.kubernetes.container.hash: 85f1c355,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:639ed1a246cfddb5bcce86bc66133eebc7339f13348608e46548a872b8df456e,PodSandboxId:e715c97e96fabf0f1c9f4e6025318eda97b8f431f80df4d8ac4517b03917faad,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1722445179
126430451,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-234651,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 377009a00f211ea8abb80d365c74a9fd,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e5b3417940cd8bf8065dfefafb918da0e5bfda9fd5f7a2f9ccea0f1615151657,PodSandboxId:8ccf20bcd63f37163f71bce3d8a364eb92d86b0694bc75949c8979e5633c5cc2,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722445176479082471,Labels:map[string]string{io.kuber
netes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-234651,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7f8584f0ef1731ec6b6fb11b7fa84aeb,},Annotations:map[string]string{io.kubernetes.container.hash: a3feaab1,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b48ac56e48fe04efb8cbe0a1f24302f4ddd366a42a7ba2d4395113bf2157a3ea,PodSandboxId:5e1eebee97f6b9d241e17235a27f64767bdc5937bd5ca3537ce4297a25812986,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722445176435123047,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.k
ubernetes.pod.name: kube-apiserver-ha-234651,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 399c842a2f5d3312c2f955f494ccfe00,},Annotations:map[string]string{io.kubernetes.container.hash: c9b9fb5b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e3ae09638d5d5724f3e27352156c272f710b70f2c3319adc46eb518d357dc8e2,PodSandboxId:5c961f0d694a144ae953e67f65f86080fd9a64718dfc16e8697475eadb817895,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722445176401816670,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kube
rnetes.pod.name: kube-controller-manager-ha-234651,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d9982d04d181fbc6333c44627c777728,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ded6421f2f11d7f64ce80d1681e3d4c9d1453ff8aecc860e184ad0865768adde,PodSandboxId:90ca479a21b17fc36c4328bfd82fe7023d144a893b3a1fbc315c66af59696ea2,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722445176364748602,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.n
ame: kube-scheduler-ha-234651,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3defc3711c46905c8aec12eb318ecd3b,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=174e2fc8-a126-41ca-af86-35bc0b2a3c2f name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	5e4d66f773ff4       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                     6 minutes ago       Running             coredns                   0                   754996ae28b01       coredns-7db6d8ff4d-qbqb9
	e8ef655791fe4       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                     6 minutes ago       Running             coredns                   0                   1616415c6b8f6       coredns-7db6d8ff4d-nsx9j
	0cc5e3465a864       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                     6 minutes ago       Running             storage-provisioner       0                   1e9afabdcc733       storage-provisioner
	dd9f6c4536535       docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9   7 minutes ago       Running             kindnet-cni               0                   99ba869aafb12       kindnet-wfbt4
	631c8cee6152a       55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1                                     7 minutes ago       Running             kube-proxy                0                   88fd41ca8aad1       kube-proxy-jfgs8
	639ed1a246cfd       ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f    7 minutes ago       Running             kube-vip                  0                   e715c97e96fab       kube-vip-ha-234651
	e5b3417940cd8       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                     7 minutes ago       Running             etcd                      0                   8ccf20bcd63f3       etcd-ha-234651
	b48ac56e48fe0       1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d                                     7 minutes ago       Running             kube-apiserver            0                   5e1eebee97f6b       kube-apiserver-ha-234651
	e3ae09638d5d5       76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e                                     7 minutes ago       Running             kube-controller-manager   0                   5c961f0d694a1       kube-controller-manager-ha-234651
	ded6421f2f11d       3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2                                     7 minutes ago       Running             kube-scheduler            0                   90ca479a21b17       kube-scheduler-ha-234651
	
	
	==> coredns [5e4d66f773ff46f7a130d9359ac1316e1117516ebef8b32f9b20515706b3a2f8] <==
	[INFO] 10.244.2.2:36295 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000163438s
	[INFO] 10.244.2.2:39212 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd,ra 60 0.000147577s
	[INFO] 10.244.1.3:41084 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.003668415s
	[INFO] 10.244.1.3:58419 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000240845s
	[INFO] 10.244.1.3:46572 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000149759s
	[INFO] 10.244.1.3:46716 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000125267s
	[INFO] 10.244.2.2:44128 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00011516s
	[INFO] 10.244.2.2:51451 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000094315s
	[INFO] 10.244.2.2:36147 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.0001399s
	[INFO] 10.244.2.2:36545 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001276628s
	[INFO] 10.244.1.2:43917 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000113879s
	[INFO] 10.244.1.2:52270 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.00173961s
	[INFO] 10.244.1.2:43272 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000090127s
	[INFO] 10.244.1.2:40969 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001253454s
	[INFO] 10.244.1.2:36005 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000101429s
	[INFO] 10.244.1.3:57882 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000155324s
	[INFO] 10.244.1.3:52921 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000104436s
	[INFO] 10.244.1.3:53848 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000118293s
	[INFO] 10.244.1.2:59324 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000114877s
	[INFO] 10.244.1.2:35559 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000080871s
	[INFO] 10.244.1.3:36523 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000149158s
	[INFO] 10.244.1.3:43713 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000113949s
	[INFO] 10.244.2.2:57100 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000104476s
	[INFO] 10.244.2.2:36343 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000075949s
	[INFO] 10.244.1.2:36593 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000110887s
	
	
	==> coredns [e8ef655791fe4b0b98612bab1bc9fea7e42c73e5327bdd0654676979cb2e8542] <==
	[INFO] 10.244.1.3:39272 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000221307s
	[INFO] 10.244.1.3:45451 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.0001416s
	[INFO] 10.244.1.3:35968 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.003019868s
	[INFO] 10.244.1.3:50760 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000096087s
	[INFO] 10.244.2.2:47184 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001873446s
	[INFO] 10.244.2.2:52684 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000141574s
	[INFO] 10.244.2.2:55915 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000097985s
	[INFO] 10.244.2.2:37641 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000064285s
	[INFO] 10.244.1.2:44538 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000098479s
	[INFO] 10.244.1.2:51050 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000063987s
	[INFO] 10.244.1.2:53102 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000117625s
	[INFO] 10.244.1.3:34472 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000093028s
	[INFO] 10.244.2.2:50493 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000198464s
	[INFO] 10.244.2.2:59387 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000091819s
	[INFO] 10.244.2.2:46587 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000140652s
	[INFO] 10.244.2.2:44332 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000062045s
	[INFO] 10.244.1.2:56100 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000129501s
	[INFO] 10.244.1.2:52904 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000075504s
	[INFO] 10.244.1.3:45513 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000201365s
	[INFO] 10.244.1.3:56964 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000220702s
	[INFO] 10.244.2.2:52612 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000221354s
	[INFO] 10.244.2.2:34847 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000096723s
	[INFO] 10.244.1.2:54098 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00017441s
	[INFO] 10.244.1.2:35429 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000097269s
	[INFO] 10.244.1.2:35606 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000150264s
	
	
	==> describe nodes <==
	Name:               ha-234651
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-234651
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=1d737dad7efa60c56d30434fcd857dd3b14c91d9
	                    minikube.k8s.io/name=ha-234651
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_07_31T16_59_43_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 31 Jul 2024 16:59:42 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-234651
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 31 Jul 2024 17:07:01 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 31 Jul 2024 17:05:19 +0000   Wed, 31 Jul 2024 16:59:42 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 31 Jul 2024 17:05:19 +0000   Wed, 31 Jul 2024 16:59:42 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 31 Jul 2024 17:05:19 +0000   Wed, 31 Jul 2024 16:59:42 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 31 Jul 2024 17:05:19 +0000   Wed, 31 Jul 2024 17:00:12 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.243
	  Hostname:    ha-234651
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 78c611c203cf48ab9dc710fc8d4b3901
	  System UUID:                78c611c2-03cf-48ab-9dc7-10fc8d4b3901
	  Boot ID:                    7f43c774-6026-42b9-978d-915af2f564da
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (10 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-7db6d8ff4d-nsx9j             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     7m9s
	  kube-system                 coredns-7db6d8ff4d-qbqb9             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     7m9s
	  kube-system                 etcd-ha-234651                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         7m23s
	  kube-system                 kindnet-wfbt4                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      7m9s
	  kube-system                 kube-apiserver-ha-234651             250m (12%)    0 (0%)      0 (0%)           0 (0%)         7m23s
	  kube-system                 kube-controller-manager-ha-234651    200m (10%)    0 (0%)      0 (0%)           0 (0%)         7m23s
	  kube-system                 kube-proxy-jfgs8                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m9s
	  kube-system                 kube-scheduler-ha-234651             100m (5%)     0 (0%)      0 (0%)           0 (0%)         7m23s
	  kube-system                 kube-vip-ha-234651                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m23s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m8s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 7m7s   kube-proxy       
	  Normal  Starting                 7m23s  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  7m23s  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  7m23s  kubelet          Node ha-234651 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    7m23s  kubelet          Node ha-234651 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     7m23s  kubelet          Node ha-234651 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           7m9s   node-controller  Node ha-234651 event: Registered Node ha-234651 in Controller
	  Normal  NodeReady                6m53s  kubelet          Node ha-234651 status is now: NodeReady
	  Normal  RegisteredNode           6m5s   node-controller  Node ha-234651 event: Registered Node ha-234651 in Controller
	  Normal  RegisteredNode           4m49s  node-controller  Node ha-234651 event: Registered Node ha-234651 in Controller
	
	
	Name:               ha-234651-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-234651-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=1d737dad7efa60c56d30434fcd857dd3b14c91d9
	                    minikube.k8s.io/name=ha-234651
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_07_31T17_00_46_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 31 Jul 2024 17:00:43 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-234651-m02
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 31 Jul 2024 17:03:38 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Wed, 31 Jul 2024 17:02:47 +0000   Wed, 31 Jul 2024 17:04:21 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Wed, 31 Jul 2024 17:02:47 +0000   Wed, 31 Jul 2024 17:04:21 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Wed, 31 Jul 2024 17:02:47 +0000   Wed, 31 Jul 2024 17:04:21 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Wed, 31 Jul 2024 17:02:47 +0000   Wed, 31 Jul 2024 17:04:21 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.39.235
	  Hostname:    ha-234651-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 f48a6b3aa33049d58a0ceaa57200b934
	  System UUID:                f48a6b3a-a330-49d5-8a0c-eaa57200b934
	  Boot ID:                    4f0c8ea9-325a-45d8-974f-4ccdaaffa5ca
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-2w6fp                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m39s
	  default                     busybox-fc5497c4f-qw457                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m39s
	  kube-system                 etcd-ha-234651-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         6m20s
	  kube-system                 kindnet-phmdp                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      6m22s
	  kube-system                 kube-apiserver-ha-234651-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         6m21s
	  kube-system                 kube-controller-manager-ha-234651-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         6m21s
	  kube-system                 kube-proxy-b8dcw                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m22s
	  kube-system                 kube-scheduler-ha-234651-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         6m21s
	  kube-system                 kube-vip-ha-234651-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m18s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 6m17s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  6m22s (x8 over 6m22s)  kubelet          Node ha-234651-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m22s (x8 over 6m22s)  kubelet          Node ha-234651-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m22s (x7 over 6m22s)  kubelet          Node ha-234651-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  6m22s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           6m19s                  node-controller  Node ha-234651-m02 event: Registered Node ha-234651-m02 in Controller
	  Normal  RegisteredNode           6m5s                   node-controller  Node ha-234651-m02 event: Registered Node ha-234651-m02 in Controller
	  Normal  RegisteredNode           4m49s                  node-controller  Node ha-234651-m02 event: Registered Node ha-234651-m02 in Controller
	  Normal  NodeNotReady             2m44s                  node-controller  Node ha-234651-m02 status is now: NodeNotReady
	
	
	Name:               ha-234651-m03
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-234651-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=1d737dad7efa60c56d30434fcd857dd3b14c91d9
	                    minikube.k8s.io/name=ha-234651
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_07_31T17_02_01_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 31 Jul 2024 17:01:58 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-234651-m03
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 31 Jul 2024 17:07:05 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 31 Jul 2024 17:03:00 +0000   Wed, 31 Jul 2024 17:01:58 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 31 Jul 2024 17:03:00 +0000   Wed, 31 Jul 2024 17:01:58 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 31 Jul 2024 17:03:00 +0000   Wed, 31 Jul 2024 17:01:58 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 31 Jul 2024 17:03:00 +0000   Wed, 31 Jul 2024 17:02:19 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.139
	  Hostname:    ha-234651-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 bedcbc00eaa142208b6f46ab90ace771
	  System UUID:                bedcbc00-eaa1-4220-8b6f-46ab90ace771
	  Boot ID:                    8963cccd-a585-41ce-80e2-aa6c1268ee1b
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-fdmbt                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m39s
	  kube-system                 etcd-ha-234651-m03                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         4m59s
	  kube-system                 kindnet-2xqxq                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      5m7s
	  kube-system                 kube-apiserver-ha-234651-m03             250m (12%)    0 (0%)      0 (0%)           0 (0%)         5m5s
	  kube-system                 kube-controller-manager-ha-234651-m03    200m (10%)    0 (0%)      0 (0%)           0 (0%)         4m58s
	  kube-system                 kube-proxy-gfgjd                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m7s
	  kube-system                 kube-scheduler-ha-234651-m03             100m (5%)     0 (0%)      0 (0%)           0 (0%)         5m
	  kube-system                 kube-vip-ha-234651-m03                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m3s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 5m3s                 kube-proxy       
	  Normal  NodeHasSufficientMemory  5m7s (x8 over 5m7s)  kubelet          Node ha-234651-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m7s (x8 over 5m7s)  kubelet          Node ha-234651-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m7s (x7 over 5m7s)  kubelet          Node ha-234651-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  5m7s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           5m5s                 node-controller  Node ha-234651-m03 event: Registered Node ha-234651-m03 in Controller
	  Normal  RegisteredNode           5m4s                 node-controller  Node ha-234651-m03 event: Registered Node ha-234651-m03 in Controller
	  Normal  RegisteredNode           4m49s                node-controller  Node ha-234651-m03 event: Registered Node ha-234651-m03 in Controller
	
	
	Name:               ha-234651-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-234651-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=1d737dad7efa60c56d30434fcd857dd3b14c91d9
	                    minikube.k8s.io/name=ha-234651
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_07_31T17_03_01_0700
	                    minikube.k8s.io/version=v1.33.1
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 31 Jul 2024 17:03:00 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-234651-m04
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 31 Jul 2024 17:06:56 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 31 Jul 2024 17:03:31 +0000   Wed, 31 Jul 2024 17:03:00 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 31 Jul 2024 17:03:31 +0000   Wed, 31 Jul 2024 17:03:00 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 31 Jul 2024 17:03:31 +0000   Wed, 31 Jul 2024 17:03:00 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 31 Jul 2024 17:03:31 +0000   Wed, 31 Jul 2024 17:03:21 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.216
	  Hostname:    ha-234651-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 6cc10919121c4c939afe8d5b5f293c45
	  System UUID:                6cc10919-121c-4c93-9afe-8d5b5f293c45
	  Boot ID:                    ad8f4b1b-56d6-4379-babe-ef3d0a8d6eef
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-qnml8       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      4m5s
	  kube-system                 kube-proxy-4b8gn    0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m5s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 3m59s                kube-proxy       
	  Normal  RegisteredNode           4m5s                 node-controller  Node ha-234651-m04 event: Registered Node ha-234651-m04 in Controller
	  Normal  NodeHasSufficientMemory  4m5s (x3 over 4m5s)  kubelet          Node ha-234651-m04 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m5s (x3 over 4m5s)  kubelet          Node ha-234651-m04 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m5s (x3 over 4m5s)  kubelet          Node ha-234651-m04 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m5s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           4m4s                 node-controller  Node ha-234651-m04 event: Registered Node ha-234651-m04 in Controller
	  Normal  RegisteredNode           4m4s                 node-controller  Node ha-234651-m04 event: Registered Node ha-234651-m04 in Controller
	  Normal  NodeReady                3m44s                kubelet          Node ha-234651-m04 status is now: NodeReady
	
	
	==> dmesg <==
	[Jul31 16:59] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.050822] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.036777] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.665799] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +1.749974] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +4.543243] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000008] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +9.022347] systemd-fstab-generator[596]: Ignoring "noauto" option for root device
	[  +0.054568] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.050413] systemd-fstab-generator[608]: Ignoring "noauto" option for root device
	[  +0.168871] systemd-fstab-generator[622]: Ignoring "noauto" option for root device
	[  +0.140385] systemd-fstab-generator[634]: Ignoring "noauto" option for root device
	[  +0.264161] systemd-fstab-generator[667]: Ignoring "noauto" option for root device
	[  +3.967546] systemd-fstab-generator[766]: Ignoring "noauto" option for root device
	[  +4.664986] systemd-fstab-generator[951]: Ignoring "noauto" option for root device
	[  +0.073312] kauditd_printk_skb: 158 callbacks suppressed
	[  +7.322437] systemd-fstab-generator[1369]: Ignoring "noauto" option for root device
	[  +0.079901] kauditd_printk_skb: 79 callbacks suppressed
	[ +13.802679] kauditd_printk_skb: 21 callbacks suppressed
	[Jul31 17:00] kauditd_printk_skb: 34 callbacks suppressed
	[ +47.907295] kauditd_printk_skb: 26 callbacks suppressed
	
	
	==> etcd [e5b3417940cd8bf8065dfefafb918da0e5bfda9fd5f7a2f9ccea0f1615151657] <==
	{"level":"warn","ts":"2024-07-31T17:07:05.339828Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"4d6f7e7e767b3ff3","from":"4d6f7e7e767b3ff3","remote-peer-id":"6d43d0f719899a7e","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-31T17:07:05.41273Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"4d6f7e7e767b3ff3","from":"4d6f7e7e767b3ff3","remote-peer-id":"6d43d0f719899a7e","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-31T17:07:05.421045Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"4d6f7e7e767b3ff3","from":"4d6f7e7e767b3ff3","remote-peer-id":"6d43d0f719899a7e","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-31T17:07:05.4245Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"4d6f7e7e767b3ff3","from":"4d6f7e7e767b3ff3","remote-peer-id":"6d43d0f719899a7e","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-31T17:07:05.436203Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"4d6f7e7e767b3ff3","from":"4d6f7e7e767b3ff3","remote-peer-id":"6d43d0f719899a7e","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-31T17:07:05.438952Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"4d6f7e7e767b3ff3","from":"4d6f7e7e767b3ff3","remote-peer-id":"6d43d0f719899a7e","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-31T17:07:05.439599Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"4d6f7e7e767b3ff3","from":"4d6f7e7e767b3ff3","remote-peer-id":"6d43d0f719899a7e","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-31T17:07:05.44395Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"4d6f7e7e767b3ff3","from":"4d6f7e7e767b3ff3","remote-peer-id":"6d43d0f719899a7e","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-31T17:07:05.449376Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"4d6f7e7e767b3ff3","from":"4d6f7e7e767b3ff3","remote-peer-id":"6d43d0f719899a7e","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-31T17:07:05.452425Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"4d6f7e7e767b3ff3","from":"4d6f7e7e767b3ff3","remote-peer-id":"6d43d0f719899a7e","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-31T17:07:05.455924Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"4d6f7e7e767b3ff3","from":"4d6f7e7e767b3ff3","remote-peer-id":"6d43d0f719899a7e","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-31T17:07:05.467767Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"4d6f7e7e767b3ff3","from":"4d6f7e7e767b3ff3","remote-peer-id":"6d43d0f719899a7e","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-31T17:07:05.475534Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"4d6f7e7e767b3ff3","from":"4d6f7e7e767b3ff3","remote-peer-id":"6d43d0f719899a7e","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-31T17:07:05.481096Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"4d6f7e7e767b3ff3","from":"4d6f7e7e767b3ff3","remote-peer-id":"6d43d0f719899a7e","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-31T17:07:05.484011Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"4d6f7e7e767b3ff3","from":"4d6f7e7e767b3ff3","remote-peer-id":"6d43d0f719899a7e","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-31T17:07:05.486739Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"4d6f7e7e767b3ff3","from":"4d6f7e7e767b3ff3","remote-peer-id":"6d43d0f719899a7e","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-31T17:07:05.49803Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"4d6f7e7e767b3ff3","from":"4d6f7e7e767b3ff3","remote-peer-id":"6d43d0f719899a7e","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-31T17:07:05.50615Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"4d6f7e7e767b3ff3","from":"4d6f7e7e767b3ff3","remote-peer-id":"6d43d0f719899a7e","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-31T17:07:05.514315Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"4d6f7e7e767b3ff3","from":"4d6f7e7e767b3ff3","remote-peer-id":"6d43d0f719899a7e","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-31T17:07:05.517978Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"4d6f7e7e767b3ff3","from":"4d6f7e7e767b3ff3","remote-peer-id":"6d43d0f719899a7e","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-31T17:07:05.521203Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"4d6f7e7e767b3ff3","from":"4d6f7e7e767b3ff3","remote-peer-id":"6d43d0f719899a7e","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-31T17:07:05.528506Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"4d6f7e7e767b3ff3","from":"4d6f7e7e767b3ff3","remote-peer-id":"6d43d0f719899a7e","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-31T17:07:05.533512Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"4d6f7e7e767b3ff3","from":"4d6f7e7e767b3ff3","remote-peer-id":"6d43d0f719899a7e","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-31T17:07:05.539805Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"4d6f7e7e767b3ff3","from":"4d6f7e7e767b3ff3","remote-peer-id":"6d43d0f719899a7e","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-31T17:07:05.540125Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"4d6f7e7e767b3ff3","from":"4d6f7e7e767b3ff3","remote-peer-id":"6d43d0f719899a7e","remote-peer-name":"pipeline","remote-peer-active":false}
	
	
	==> kernel <==
	 17:07:05 up 7 min,  0 users,  load average: 0.31, 0.49, 0.27
	Linux ha-234651 5.10.207 #1 SMP Mon Jul 29 15:19:02 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [dd9f6c4536535bb3eb07d6fa75fe49579d9cdee5ff6e4c440e0a58359d4f2f08] <==
	I0731 17:06:32.128046       1 main.go:322] Node ha-234651-m04 has CIDR [10.244.3.0/24] 
	I0731 17:06:42.128723       1 main.go:295] Handling node with IPs: map[192.168.39.243:{}]
	I0731 17:06:42.128906       1 main.go:299] handling current node
	I0731 17:06:42.128943       1 main.go:295] Handling node with IPs: map[192.168.39.235:{}]
	I0731 17:06:42.128971       1 main.go:322] Node ha-234651-m02 has CIDR [10.244.1.0/24] 
	I0731 17:06:42.129174       1 main.go:295] Handling node with IPs: map[192.168.39.139:{}]
	I0731 17:06:42.129245       1 main.go:322] Node ha-234651-m03 has CIDR [10.244.2.0/24] 
	I0731 17:06:42.129390       1 main.go:295] Handling node with IPs: map[192.168.39.216:{}]
	I0731 17:06:42.129426       1 main.go:322] Node ha-234651-m04 has CIDR [10.244.3.0/24] 
	I0731 17:06:52.127785       1 main.go:295] Handling node with IPs: map[192.168.39.139:{}]
	I0731 17:06:52.127829       1 main.go:322] Node ha-234651-m03 has CIDR [10.244.2.0/24] 
	I0731 17:06:52.127994       1 main.go:295] Handling node with IPs: map[192.168.39.216:{}]
	I0731 17:06:52.128011       1 main.go:322] Node ha-234651-m04 has CIDR [10.244.3.0/24] 
	I0731 17:06:52.128091       1 main.go:295] Handling node with IPs: map[192.168.39.243:{}]
	I0731 17:06:52.128098       1 main.go:299] handling current node
	I0731 17:06:52.128124       1 main.go:295] Handling node with IPs: map[192.168.39.235:{}]
	I0731 17:06:52.128129       1 main.go:322] Node ha-234651-m02 has CIDR [10.244.1.0/24] 
	I0731 17:07:02.126765       1 main.go:295] Handling node with IPs: map[192.168.39.139:{}]
	I0731 17:07:02.126830       1 main.go:322] Node ha-234651-m03 has CIDR [10.244.2.0/24] 
	I0731 17:07:02.127007       1 main.go:295] Handling node with IPs: map[192.168.39.216:{}]
	I0731 17:07:02.127032       1 main.go:322] Node ha-234651-m04 has CIDR [10.244.3.0/24] 
	I0731 17:07:02.127139       1 main.go:295] Handling node with IPs: map[192.168.39.243:{}]
	I0731 17:07:02.127158       1 main.go:299] handling current node
	I0731 17:07:02.127171       1 main.go:295] Handling node with IPs: map[192.168.39.235:{}]
	I0731 17:07:02.127176       1 main.go:322] Node ha-234651-m02 has CIDR [10.244.1.0/24] 
	
	
	==> kube-apiserver [b48ac56e48fe04efb8cbe0a1f24302f4ddd366a42a7ba2d4395113bf2157a3ea] <==
	I0731 16:59:41.278961       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W0731 16:59:41.285624       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.243]
	I0731 16:59:41.286548       1 controller.go:615] quota admission added evaluator for: endpoints
	I0731 16:59:41.291521       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0731 16:59:41.491319       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0731 16:59:42.740597       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0731 16:59:42.753582       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I0731 16:59:42.898693       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0731 16:59:56.022208       1 controller.go:615] quota admission added evaluator for: controllerrevisions.apps
	I0731 16:59:56.106082       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	E0731 17:02:31.881906       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:60088: use of closed network connection
	E0731 17:02:32.047811       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:60106: use of closed network connection
	E0731 17:02:32.230182       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:60124: use of closed network connection
	E0731 17:02:32.409711       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:60154: use of closed network connection
	E0731 17:02:32.586020       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:60168: use of closed network connection
	E0731 17:02:32.760544       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:60198: use of closed network connection
	E0731 17:02:32.934938       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:60210: use of closed network connection
	E0731 17:02:33.105728       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:60236: use of closed network connection
	E0731 17:02:33.377977       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:60254: use of closed network connection
	E0731 17:02:33.539826       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:60266: use of closed network connection
	E0731 17:02:33.706202       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:60292: use of closed network connection
	E0731 17:02:33.871884       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:60304: use of closed network connection
	E0731 17:02:34.042800       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:60330: use of closed network connection
	E0731 17:02:34.212508       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:60342: use of closed network connection
	W0731 17:04:01.298435       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.139 192.168.39.243]
	
	
	==> kube-controller-manager [e3ae09638d5d5724f3e27352156c272f710b70f2c3319adc46eb518d357dc8e2] <==
	I0731 17:02:27.105508       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="58.125µs"
	I0731 17:02:27.543579       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="44.434µs"
	I0731 17:02:28.548910       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="73.962µs"
	I0731 17:02:28.559909       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="56.406µs"
	I0731 17:02:28.564489       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="42.591µs"
	I0731 17:02:28.686722       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="83.291µs"
	I0731 17:02:30.263248       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="10.859513ms"
	I0731 17:02:30.263688       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="60.971µs"
	I0731 17:02:30.957804       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="10.563351ms"
	I0731 17:02:30.958323       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="81.378µs"
	I0731 17:02:31.272030       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="11.121275ms"
	I0731 17:02:31.272231       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="76.813µs"
	E0731 17:03:00.315712       1 certificate_controller.go:146] Sync csr-z4x7k failed with : error updating approval for csr: Operation cannot be fulfilled on certificatesigningrequests.certificates.k8s.io "csr-z4x7k": the object has been modified; please apply your changes to the latest version and try again
	E0731 17:03:00.573677       1 certificate_controller.go:146] Sync csr-z4x7k failed with : error updating signature for csr: Operation cannot be fulfilled on certificatesigningrequests.certificates.k8s.io "csr-z4x7k": the object has been modified; please apply your changes to the latest version and try again
	I0731 17:03:00.609984       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"ha-234651-m04\" does not exist"
	I0731 17:03:00.657440       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="ha-234651-m04" podCIDRs=["10.244.3.0/24"]
	I0731 17:03:01.076771       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-234651-m04"
	I0731 17:03:21.239060       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-234651-m04"
	I0731 17:04:21.105420       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-234651-m04"
	I0731 17:04:21.185436       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="13.413544ms"
	I0731 17:04:21.185518       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="48.667µs"
	I0731 17:04:21.208326       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="16.195491ms"
	I0731 17:04:21.208513       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="48.429µs"
	I0731 17:04:21.245645       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="14.319898ms"
	I0731 17:04:21.246538       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="40.344µs"
	
	
	==> kube-proxy [631c8cee6152ae1808419f27d4b63fa0a6bcebb15c757dd96090c6267d63e6c2] <==
	I0731 16:59:57.547806       1 server_linux.go:69] "Using iptables proxy"
	I0731 16:59:57.565295       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.39.243"]
	I0731 16:59:57.614393       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0731 16:59:57.614454       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0731 16:59:57.614473       1 server_linux.go:165] "Using iptables Proxier"
	I0731 16:59:57.617425       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0731 16:59:57.617906       1 server.go:872] "Version info" version="v1.30.3"
	I0731 16:59:57.617937       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0731 16:59:57.619968       1 config.go:192] "Starting service config controller"
	I0731 16:59:57.620464       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0731 16:59:57.620514       1 config.go:101] "Starting endpoint slice config controller"
	I0731 16:59:57.620520       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0731 16:59:57.621534       1 config.go:319] "Starting node config controller"
	I0731 16:59:57.621562       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0731 16:59:57.720963       1 shared_informer.go:320] Caches are synced for service config
	I0731 16:59:57.720972       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0731 16:59:57.721723       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [ded6421f2f11d7f64ce80d1681e3d4c9d1453ff8aecc860e184ad0865768adde] <==
	I0731 17:02:26.714143       1 cache.go:503] "Pod was added to a different node than it was assumed" podKey="b1c66d4c-91d3-498a-b3ae-c705ae28c8fa" pod="default/busybox-fc5497c4f-fdmbt" assumedNode="ha-234651-m03" currentNode="ha-234651"
	E0731 17:02:26.725726       1 framework.go:1286] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-fc5497c4f-qw457\": pod busybox-fc5497c4f-qw457 is already assigned to node \"ha-234651-m02\"" plugin="DefaultBinder" pod="default/busybox-fc5497c4f-qw457" node="ha-234651-m03"
	E0731 17:02:26.725803       1 schedule_one.go:338] "scheduler cache ForgetPod failed" err="pod c4caccfd-f3bd-402f-8d90-0ed9d02f5c2d(default/busybox-fc5497c4f-qw457) was assumed on ha-234651-m03 but assigned to ha-234651-m02" pod="default/busybox-fc5497c4f-qw457"
	E0731 17:02:26.725828       1 schedule_one.go:1046] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-fc5497c4f-qw457\": pod busybox-fc5497c4f-qw457 is already assigned to node \"ha-234651-m02\"" pod="default/busybox-fc5497c4f-qw457"
	I0731 17:02:26.725861       1 schedule_one.go:1059] "Pod has been assigned to node. Abort adding it back to queue." pod="default/busybox-fc5497c4f-qw457" node="ha-234651-m02"
	E0731 17:02:26.727014       1 framework.go:1286] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-fc5497c4f-fdmbt\": pod busybox-fc5497c4f-fdmbt is already assigned to node \"ha-234651-m03\"" plugin="DefaultBinder" pod="default/busybox-fc5497c4f-fdmbt" node="ha-234651"
	E0731 17:02:26.727069       1 schedule_one.go:338] "scheduler cache ForgetPod failed" err="pod b1c66d4c-91d3-498a-b3ae-c705ae28c8fa(default/busybox-fc5497c4f-fdmbt) was assumed on ha-234651 but assigned to ha-234651-m03" pod="default/busybox-fc5497c4f-fdmbt"
	E0731 17:02:26.727084       1 schedule_one.go:1046] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-fc5497c4f-fdmbt\": pod busybox-fc5497c4f-fdmbt is already assigned to node \"ha-234651-m03\"" pod="default/busybox-fc5497c4f-fdmbt"
	I0731 17:02:26.727098       1 schedule_one.go:1059] "Pod has been assigned to node. Abort adding it back to queue." pod="default/busybox-fc5497c4f-fdmbt" node="ha-234651-m03"
	E0731 17:03:00.685760       1 framework.go:1286] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-z8qd9\": pod kube-proxy-z8qd9 is already assigned to node \"ha-234651-m04\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-z8qd9" node="ha-234651-m04"
	E0731 17:03:00.685947       1 schedule_one.go:338] "scheduler cache ForgetPod failed" err="pod bb0abc33-d04d-41ad-adc9-39420f19a821(kube-system/kube-proxy-z8qd9) wasn't assumed so cannot be forgotten" pod="kube-system/kube-proxy-z8qd9"
	E0731 17:03:00.686961       1 schedule_one.go:1046] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-z8qd9\": pod kube-proxy-z8qd9 is already assigned to node \"ha-234651-m04\"" pod="kube-system/kube-proxy-z8qd9"
	I0731 17:03:00.687075       1 schedule_one.go:1059] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kube-proxy-z8qd9" node="ha-234651-m04"
	E0731 17:03:00.685858       1 framework.go:1286] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-xlhp4\": pod kindnet-xlhp4 is already assigned to node \"ha-234651-m04\"" plugin="DefaultBinder" pod="kube-system/kindnet-xlhp4" node="ha-234651-m04"
	E0731 17:03:00.689443       1 schedule_one.go:338] "scheduler cache ForgetPod failed" err="pod 1685cdf9-de2d-4ad0-bb7e-2b2cd5fc6cba(kube-system/kindnet-xlhp4) wasn't assumed so cannot be forgotten" pod="kube-system/kindnet-xlhp4"
	E0731 17:03:00.689565       1 schedule_one.go:1046] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-xlhp4\": pod kindnet-xlhp4 is already assigned to node \"ha-234651-m04\"" pod="kube-system/kindnet-xlhp4"
	I0731 17:03:00.689662       1 schedule_one.go:1059] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-xlhp4" node="ha-234651-m04"
	E0731 17:03:00.721461       1 framework.go:1286] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-qnml8\": pod kindnet-qnml8 is already assigned to node \"ha-234651-m04\"" plugin="DefaultBinder" pod="kube-system/kindnet-qnml8" node="ha-234651-m04"
	E0731 17:03:00.721533       1 schedule_one.go:338] "scheduler cache ForgetPod failed" err="pod 254727bb-4578-4f88-8838-553d196d806d(kube-system/kindnet-qnml8) wasn't assumed so cannot be forgotten" pod="kube-system/kindnet-qnml8"
	E0731 17:03:00.721554       1 schedule_one.go:1046] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-qnml8\": pod kindnet-qnml8 is already assigned to node \"ha-234651-m04\"" pod="kube-system/kindnet-qnml8"
	I0731 17:03:00.721578       1 schedule_one.go:1059] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-qnml8" node="ha-234651-m04"
	E0731 17:03:00.721806       1 framework.go:1286] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-4b8gn\": pod kube-proxy-4b8gn is already assigned to node \"ha-234651-m04\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-4b8gn" node="ha-234651-m04"
	E0731 17:03:00.722071       1 schedule_one.go:338] "scheduler cache ForgetPod failed" err="pod 402fc7f7-8d84-4dc0-936c-e39d5411430a(kube-system/kube-proxy-4b8gn) wasn't assumed so cannot be forgotten" pod="kube-system/kube-proxy-4b8gn"
	E0731 17:03:00.722583       1 schedule_one.go:1046] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-4b8gn\": pod kube-proxy-4b8gn is already assigned to node \"ha-234651-m04\"" pod="kube-system/kube-proxy-4b8gn"
	I0731 17:03:00.722794       1 schedule_one.go:1059] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kube-proxy-4b8gn" node="ha-234651-m04"
	
	
	==> kubelet <==
	Jul 31 17:02:42 ha-234651 kubelet[1377]: E0731 17:02:42.881806    1377 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 31 17:02:42 ha-234651 kubelet[1377]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 31 17:02:42 ha-234651 kubelet[1377]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 31 17:02:42 ha-234651 kubelet[1377]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 31 17:02:42 ha-234651 kubelet[1377]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 31 17:03:42 ha-234651 kubelet[1377]: E0731 17:03:42.887541    1377 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 31 17:03:42 ha-234651 kubelet[1377]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 31 17:03:42 ha-234651 kubelet[1377]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 31 17:03:42 ha-234651 kubelet[1377]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 31 17:03:42 ha-234651 kubelet[1377]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 31 17:04:42 ha-234651 kubelet[1377]: E0731 17:04:42.892127    1377 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 31 17:04:42 ha-234651 kubelet[1377]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 31 17:04:42 ha-234651 kubelet[1377]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 31 17:04:42 ha-234651 kubelet[1377]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 31 17:04:42 ha-234651 kubelet[1377]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 31 17:05:42 ha-234651 kubelet[1377]: E0731 17:05:42.882489    1377 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 31 17:05:42 ha-234651 kubelet[1377]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 31 17:05:42 ha-234651 kubelet[1377]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 31 17:05:42 ha-234651 kubelet[1377]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 31 17:05:42 ha-234651 kubelet[1377]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 31 17:06:42 ha-234651 kubelet[1377]: E0731 17:06:42.881534    1377 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 31 17:06:42 ha-234651 kubelet[1377]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 31 17:06:42 ha-234651 kubelet[1377]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 31 17:06:42 ha-234651 kubelet[1377]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 31 17:06:42 ha-234651 kubelet[1377]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-234651 -n ha-234651
helpers_test.go:261: (dbg) Run:  kubectl --context ha-234651 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/RestartSecondaryNode FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/RestartSecondaryNode (60.39s)

                                                
                                    
x
+
TestMultiControlPlane/serial/RestartClusterKeepsNodes (295.1s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:456: (dbg) Run:  out/minikube-linux-amd64 node list -p ha-234651 -v=7 --alsologtostderr
ha_test.go:462: (dbg) Run:  out/minikube-linux-amd64 stop -p ha-234651 -v=7 --alsologtostderr
E0731 17:07:57.004850   15259 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/functional-496242/client.crt: no such file or directory
E0731 17:08:24.690428   15259 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/functional-496242/client.crt: no such file or directory
E0731 17:09:05.346603   15259 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/addons-190022/client.crt: no such file or directory
ha_test.go:462: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p ha-234651 -v=7 --alsologtostderr: exit status 82 (2m1.788632561s)

                                                
                                                
-- stdout --
	* Stopping node "ha-234651-m04"  ...
	* Stopping node "ha-234651-m03"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0731 17:07:06.924492   32232 out.go:291] Setting OutFile to fd 1 ...
	I0731 17:07:06.924616   32232 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 17:07:06.924628   32232 out.go:304] Setting ErrFile to fd 2...
	I0731 17:07:06.924635   32232 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 17:07:06.924823   32232 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19349-8084/.minikube/bin
	I0731 17:07:06.925037   32232 out.go:298] Setting JSON to false
	I0731 17:07:06.925124   32232 mustload.go:65] Loading cluster: ha-234651
	I0731 17:07:06.925501   32232 config.go:182] Loaded profile config "ha-234651": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0731 17:07:06.925581   32232 profile.go:143] Saving config to /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/ha-234651/config.json ...
	I0731 17:07:06.925758   32232 mustload.go:65] Loading cluster: ha-234651
	I0731 17:07:06.925880   32232 config.go:182] Loaded profile config "ha-234651": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0731 17:07:06.925913   32232 stop.go:39] StopHost: ha-234651-m04
	I0731 17:07:06.926252   32232 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 17:07:06.926296   32232 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 17:07:06.943221   32232 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39535
	I0731 17:07:06.943718   32232 main.go:141] libmachine: () Calling .GetVersion
	I0731 17:07:06.944300   32232 main.go:141] libmachine: Using API Version  1
	I0731 17:07:06.944325   32232 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 17:07:06.944623   32232 main.go:141] libmachine: () Calling .GetMachineName
	I0731 17:07:06.947021   32232 out.go:177] * Stopping node "ha-234651-m04"  ...
	I0731 17:07:06.948311   32232 machine.go:157] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0731 17:07:06.948350   32232 main.go:141] libmachine: (ha-234651-m04) Calling .DriverName
	I0731 17:07:06.948597   32232 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0731 17:07:06.948621   32232 main.go:141] libmachine: (ha-234651-m04) Calling .GetSSHHostname
	I0731 17:07:06.951471   32232 main.go:141] libmachine: (ha-234651-m04) DBG | domain ha-234651-m04 has defined MAC address 52:54:00:25:f7:22 in network mk-ha-234651
	I0731 17:07:06.951836   32232 main.go:141] libmachine: (ha-234651-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:f7:22", ip: ""} in network mk-ha-234651: {Iface:virbr1 ExpiryTime:2024-07-31 18:02:48 +0000 UTC Type:0 Mac:52:54:00:25:f7:22 Iaid: IPaddr:192.168.39.216 Prefix:24 Hostname:ha-234651-m04 Clientid:01:52:54:00:25:f7:22}
	I0731 17:07:06.951862   32232 main.go:141] libmachine: (ha-234651-m04) DBG | domain ha-234651-m04 has defined IP address 192.168.39.216 and MAC address 52:54:00:25:f7:22 in network mk-ha-234651
	I0731 17:07:06.951994   32232 main.go:141] libmachine: (ha-234651-m04) Calling .GetSSHPort
	I0731 17:07:06.952183   32232 main.go:141] libmachine: (ha-234651-m04) Calling .GetSSHKeyPath
	I0731 17:07:06.952309   32232 main.go:141] libmachine: (ha-234651-m04) Calling .GetSSHUsername
	I0731 17:07:06.952463   32232 sshutil.go:53] new ssh client: &{IP:192.168.39.216 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19349-8084/.minikube/machines/ha-234651-m04/id_rsa Username:docker}
	I0731 17:07:07.037476   32232 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0731 17:07:07.089789   32232 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I0731 17:07:07.142164   32232 main.go:141] libmachine: Stopping "ha-234651-m04"...
	I0731 17:07:07.142192   32232 main.go:141] libmachine: (ha-234651-m04) Calling .GetState
	I0731 17:07:07.143652   32232 main.go:141] libmachine: (ha-234651-m04) Calling .Stop
	I0731 17:07:07.146939   32232 main.go:141] libmachine: (ha-234651-m04) Waiting for machine to stop 0/120
	I0731 17:07:08.262879   32232 main.go:141] libmachine: (ha-234651-m04) Calling .GetState
	I0731 17:07:08.264170   32232 main.go:141] libmachine: Machine "ha-234651-m04" was stopped.
	I0731 17:07:08.264185   32232 stop.go:75] duration metric: took 1.315877408s to stop
	I0731 17:07:08.264201   32232 stop.go:39] StopHost: ha-234651-m03
	I0731 17:07:08.264472   32232 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 17:07:08.264511   32232 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 17:07:08.279304   32232 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39297
	I0731 17:07:08.279706   32232 main.go:141] libmachine: () Calling .GetVersion
	I0731 17:07:08.280161   32232 main.go:141] libmachine: Using API Version  1
	I0731 17:07:08.280180   32232 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 17:07:08.280475   32232 main.go:141] libmachine: () Calling .GetMachineName
	I0731 17:07:08.282519   32232 out.go:177] * Stopping node "ha-234651-m03"  ...
	I0731 17:07:08.283691   32232 machine.go:157] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0731 17:07:08.283715   32232 main.go:141] libmachine: (ha-234651-m03) Calling .DriverName
	I0731 17:07:08.283932   32232 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0731 17:07:08.283952   32232 main.go:141] libmachine: (ha-234651-m03) Calling .GetSSHHostname
	I0731 17:07:08.286710   32232 main.go:141] libmachine: (ha-234651-m03) DBG | domain ha-234651-m03 has defined MAC address 52:54:00:ac:c0:cf in network mk-ha-234651
	I0731 17:07:08.287204   32232 main.go:141] libmachine: (ha-234651-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ac:c0:cf", ip: ""} in network mk-ha-234651: {Iface:virbr1 ExpiryTime:2024-07-31 18:01:23 +0000 UTC Type:0 Mac:52:54:00:ac:c0:cf Iaid: IPaddr:192.168.39.139 Prefix:24 Hostname:ha-234651-m03 Clientid:01:52:54:00:ac:c0:cf}
	I0731 17:07:08.287258   32232 main.go:141] libmachine: (ha-234651-m03) DBG | domain ha-234651-m03 has defined IP address 192.168.39.139 and MAC address 52:54:00:ac:c0:cf in network mk-ha-234651
	I0731 17:07:08.287392   32232 main.go:141] libmachine: (ha-234651-m03) Calling .GetSSHPort
	I0731 17:07:08.287555   32232 main.go:141] libmachine: (ha-234651-m03) Calling .GetSSHKeyPath
	I0731 17:07:08.287692   32232 main.go:141] libmachine: (ha-234651-m03) Calling .GetSSHUsername
	I0731 17:07:08.287821   32232 sshutil.go:53] new ssh client: &{IP:192.168.39.139 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19349-8084/.minikube/machines/ha-234651-m03/id_rsa Username:docker}
	I0731 17:07:08.370118   32232 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0731 17:07:08.423194   32232 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I0731 17:07:08.476452   32232 main.go:141] libmachine: Stopping "ha-234651-m03"...
	I0731 17:07:08.476477   32232 main.go:141] libmachine: (ha-234651-m03) Calling .GetState
	I0731 17:07:08.477922   32232 main.go:141] libmachine: (ha-234651-m03) Calling .Stop
	I0731 17:07:08.481241   32232 main.go:141] libmachine: (ha-234651-m03) Waiting for machine to stop 0/120
	I0731 17:07:09.482465   32232 main.go:141] libmachine: (ha-234651-m03) Waiting for machine to stop 1/120
	I0731 17:07:10.483932   32232 main.go:141] libmachine: (ha-234651-m03) Waiting for machine to stop 2/120
	I0731 17:07:11.485400   32232 main.go:141] libmachine: (ha-234651-m03) Waiting for machine to stop 3/120
	I0731 17:07:12.486715   32232 main.go:141] libmachine: (ha-234651-m03) Waiting for machine to stop 4/120
	I0731 17:07:13.488704   32232 main.go:141] libmachine: (ha-234651-m03) Waiting for machine to stop 5/120
	I0731 17:07:14.490060   32232 main.go:141] libmachine: (ha-234651-m03) Waiting for machine to stop 6/120
	I0731 17:07:15.491721   32232 main.go:141] libmachine: (ha-234651-m03) Waiting for machine to stop 7/120
	I0731 17:07:16.493527   32232 main.go:141] libmachine: (ha-234651-m03) Waiting for machine to stop 8/120
	I0731 17:07:17.495223   32232 main.go:141] libmachine: (ha-234651-m03) Waiting for machine to stop 9/120
	I0731 17:07:18.497109   32232 main.go:141] libmachine: (ha-234651-m03) Waiting for machine to stop 10/120
	I0731 17:07:19.498755   32232 main.go:141] libmachine: (ha-234651-m03) Waiting for machine to stop 11/120
	I0731 17:07:20.500182   32232 main.go:141] libmachine: (ha-234651-m03) Waiting for machine to stop 12/120
	I0731 17:07:21.501462   32232 main.go:141] libmachine: (ha-234651-m03) Waiting for machine to stop 13/120
	I0731 17:07:22.502740   32232 main.go:141] libmachine: (ha-234651-m03) Waiting for machine to stop 14/120
	I0731 17:07:23.504109   32232 main.go:141] libmachine: (ha-234651-m03) Waiting for machine to stop 15/120
	I0731 17:07:24.505286   32232 main.go:141] libmachine: (ha-234651-m03) Waiting for machine to stop 16/120
	I0731 17:07:25.506601   32232 main.go:141] libmachine: (ha-234651-m03) Waiting for machine to stop 17/120
	I0731 17:07:26.508019   32232 main.go:141] libmachine: (ha-234651-m03) Waiting for machine to stop 18/120
	I0731 17:07:27.509395   32232 main.go:141] libmachine: (ha-234651-m03) Waiting for machine to stop 19/120
	I0731 17:07:28.511376   32232 main.go:141] libmachine: (ha-234651-m03) Waiting for machine to stop 20/120
	I0731 17:07:29.512586   32232 main.go:141] libmachine: (ha-234651-m03) Waiting for machine to stop 21/120
	I0731 17:07:30.514350   32232 main.go:141] libmachine: (ha-234651-m03) Waiting for machine to stop 22/120
	I0731 17:07:31.515857   32232 main.go:141] libmachine: (ha-234651-m03) Waiting for machine to stop 23/120
	I0731 17:07:32.518229   32232 main.go:141] libmachine: (ha-234651-m03) Waiting for machine to stop 24/120
	I0731 17:07:33.520232   32232 main.go:141] libmachine: (ha-234651-m03) Waiting for machine to stop 25/120
	I0731 17:07:34.522008   32232 main.go:141] libmachine: (ha-234651-m03) Waiting for machine to stop 26/120
	I0731 17:07:35.523631   32232 main.go:141] libmachine: (ha-234651-m03) Waiting for machine to stop 27/120
	I0731 17:07:36.524977   32232 main.go:141] libmachine: (ha-234651-m03) Waiting for machine to stop 28/120
	I0731 17:07:37.526482   32232 main.go:141] libmachine: (ha-234651-m03) Waiting for machine to stop 29/120
	I0731 17:07:38.528651   32232 main.go:141] libmachine: (ha-234651-m03) Waiting for machine to stop 30/120
	I0731 17:07:39.530184   32232 main.go:141] libmachine: (ha-234651-m03) Waiting for machine to stop 31/120
	I0731 17:07:40.531817   32232 main.go:141] libmachine: (ha-234651-m03) Waiting for machine to stop 32/120
	I0731 17:07:41.533184   32232 main.go:141] libmachine: (ha-234651-m03) Waiting for machine to stop 33/120
	I0731 17:07:42.534856   32232 main.go:141] libmachine: (ha-234651-m03) Waiting for machine to stop 34/120
	I0731 17:07:43.536550   32232 main.go:141] libmachine: (ha-234651-m03) Waiting for machine to stop 35/120
	I0731 17:07:44.537731   32232 main.go:141] libmachine: (ha-234651-m03) Waiting for machine to stop 36/120
	I0731 17:07:45.539308   32232 main.go:141] libmachine: (ha-234651-m03) Waiting for machine to stop 37/120
	I0731 17:07:46.540712   32232 main.go:141] libmachine: (ha-234651-m03) Waiting for machine to stop 38/120
	I0731 17:07:47.542262   32232 main.go:141] libmachine: (ha-234651-m03) Waiting for machine to stop 39/120
	I0731 17:07:48.544412   32232 main.go:141] libmachine: (ha-234651-m03) Waiting for machine to stop 40/120
	I0731 17:07:49.545825   32232 main.go:141] libmachine: (ha-234651-m03) Waiting for machine to stop 41/120
	I0731 17:07:50.547235   32232 main.go:141] libmachine: (ha-234651-m03) Waiting for machine to stop 42/120
	I0731 17:07:51.548564   32232 main.go:141] libmachine: (ha-234651-m03) Waiting for machine to stop 43/120
	I0731 17:07:52.549969   32232 main.go:141] libmachine: (ha-234651-m03) Waiting for machine to stop 44/120
	I0731 17:07:53.552358   32232 main.go:141] libmachine: (ha-234651-m03) Waiting for machine to stop 45/120
	I0731 17:07:54.553664   32232 main.go:141] libmachine: (ha-234651-m03) Waiting for machine to stop 46/120
	I0731 17:07:55.554854   32232 main.go:141] libmachine: (ha-234651-m03) Waiting for machine to stop 47/120
	I0731 17:07:56.556183   32232 main.go:141] libmachine: (ha-234651-m03) Waiting for machine to stop 48/120
	I0731 17:07:57.557723   32232 main.go:141] libmachine: (ha-234651-m03) Waiting for machine to stop 49/120
	I0731 17:07:58.560020   32232 main.go:141] libmachine: (ha-234651-m03) Waiting for machine to stop 50/120
	I0731 17:07:59.561550   32232 main.go:141] libmachine: (ha-234651-m03) Waiting for machine to stop 51/120
	I0731 17:08:00.563469   32232 main.go:141] libmachine: (ha-234651-m03) Waiting for machine to stop 52/120
	I0731 17:08:01.565543   32232 main.go:141] libmachine: (ha-234651-m03) Waiting for machine to stop 53/120
	I0731 17:08:02.566930   32232 main.go:141] libmachine: (ha-234651-m03) Waiting for machine to stop 54/120
	I0731 17:08:03.568930   32232 main.go:141] libmachine: (ha-234651-m03) Waiting for machine to stop 55/120
	I0731 17:08:04.570213   32232 main.go:141] libmachine: (ha-234651-m03) Waiting for machine to stop 56/120
	I0731 17:08:05.571784   32232 main.go:141] libmachine: (ha-234651-m03) Waiting for machine to stop 57/120
	I0731 17:08:06.573364   32232 main.go:141] libmachine: (ha-234651-m03) Waiting for machine to stop 58/120
	I0731 17:08:07.574906   32232 main.go:141] libmachine: (ha-234651-m03) Waiting for machine to stop 59/120
	I0731 17:08:08.577088   32232 main.go:141] libmachine: (ha-234651-m03) Waiting for machine to stop 60/120
	I0731 17:08:09.578371   32232 main.go:141] libmachine: (ha-234651-m03) Waiting for machine to stop 61/120
	I0731 17:08:10.579874   32232 main.go:141] libmachine: (ha-234651-m03) Waiting for machine to stop 62/120
	I0731 17:08:11.581119   32232 main.go:141] libmachine: (ha-234651-m03) Waiting for machine to stop 63/120
	I0731 17:08:12.582747   32232 main.go:141] libmachine: (ha-234651-m03) Waiting for machine to stop 64/120
	I0731 17:08:13.583968   32232 main.go:141] libmachine: (ha-234651-m03) Waiting for machine to stop 65/120
	I0731 17:08:14.585544   32232 main.go:141] libmachine: (ha-234651-m03) Waiting for machine to stop 66/120
	I0731 17:08:15.587021   32232 main.go:141] libmachine: (ha-234651-m03) Waiting for machine to stop 67/120
	I0731 17:08:16.588639   32232 main.go:141] libmachine: (ha-234651-m03) Waiting for machine to stop 68/120
	I0731 17:08:17.589906   32232 main.go:141] libmachine: (ha-234651-m03) Waiting for machine to stop 69/120
	I0731 17:08:18.591304   32232 main.go:141] libmachine: (ha-234651-m03) Waiting for machine to stop 70/120
	I0731 17:08:19.593699   32232 main.go:141] libmachine: (ha-234651-m03) Waiting for machine to stop 71/120
	I0731 17:08:20.594920   32232 main.go:141] libmachine: (ha-234651-m03) Waiting for machine to stop 72/120
	I0731 17:08:21.596632   32232 main.go:141] libmachine: (ha-234651-m03) Waiting for machine to stop 73/120
	I0731 17:08:22.597900   32232 main.go:141] libmachine: (ha-234651-m03) Waiting for machine to stop 74/120
	I0731 17:08:23.599613   32232 main.go:141] libmachine: (ha-234651-m03) Waiting for machine to stop 75/120
	I0731 17:08:24.601061   32232 main.go:141] libmachine: (ha-234651-m03) Waiting for machine to stop 76/120
	I0731 17:08:25.602355   32232 main.go:141] libmachine: (ha-234651-m03) Waiting for machine to stop 77/120
	I0731 17:08:26.603810   32232 main.go:141] libmachine: (ha-234651-m03) Waiting for machine to stop 78/120
	I0731 17:08:27.605466   32232 main.go:141] libmachine: (ha-234651-m03) Waiting for machine to stop 79/120
	I0731 17:08:28.606944   32232 main.go:141] libmachine: (ha-234651-m03) Waiting for machine to stop 80/120
	I0731 17:08:29.608249   32232 main.go:141] libmachine: (ha-234651-m03) Waiting for machine to stop 81/120
	I0731 17:08:30.609562   32232 main.go:141] libmachine: (ha-234651-m03) Waiting for machine to stop 82/120
	I0731 17:08:31.610877   32232 main.go:141] libmachine: (ha-234651-m03) Waiting for machine to stop 83/120
	I0731 17:08:32.612168   32232 main.go:141] libmachine: (ha-234651-m03) Waiting for machine to stop 84/120
	I0731 17:08:33.613557   32232 main.go:141] libmachine: (ha-234651-m03) Waiting for machine to stop 85/120
	I0731 17:08:34.615299   32232 main.go:141] libmachine: (ha-234651-m03) Waiting for machine to stop 86/120
	I0731 17:08:35.616637   32232 main.go:141] libmachine: (ha-234651-m03) Waiting for machine to stop 87/120
	I0731 17:08:36.617946   32232 main.go:141] libmachine: (ha-234651-m03) Waiting for machine to stop 88/120
	I0731 17:08:37.619090   32232 main.go:141] libmachine: (ha-234651-m03) Waiting for machine to stop 89/120
	I0731 17:08:38.621006   32232 main.go:141] libmachine: (ha-234651-m03) Waiting for machine to stop 90/120
	I0731 17:08:39.622342   32232 main.go:141] libmachine: (ha-234651-m03) Waiting for machine to stop 91/120
	I0731 17:08:40.623828   32232 main.go:141] libmachine: (ha-234651-m03) Waiting for machine to stop 92/120
	I0731 17:08:41.625431   32232 main.go:141] libmachine: (ha-234651-m03) Waiting for machine to stop 93/120
	I0731 17:08:42.626801   32232 main.go:141] libmachine: (ha-234651-m03) Waiting for machine to stop 94/120
	I0731 17:08:43.628833   32232 main.go:141] libmachine: (ha-234651-m03) Waiting for machine to stop 95/120
	I0731 17:08:44.630223   32232 main.go:141] libmachine: (ha-234651-m03) Waiting for machine to stop 96/120
	I0731 17:08:45.631668   32232 main.go:141] libmachine: (ha-234651-m03) Waiting for machine to stop 97/120
	I0731 17:08:46.632967   32232 main.go:141] libmachine: (ha-234651-m03) Waiting for machine to stop 98/120
	I0731 17:08:47.634349   32232 main.go:141] libmachine: (ha-234651-m03) Waiting for machine to stop 99/120
	I0731 17:08:48.635709   32232 main.go:141] libmachine: (ha-234651-m03) Waiting for machine to stop 100/120
	I0731 17:08:49.637119   32232 main.go:141] libmachine: (ha-234651-m03) Waiting for machine to stop 101/120
	I0731 17:08:50.638378   32232 main.go:141] libmachine: (ha-234651-m03) Waiting for machine to stop 102/120
	I0731 17:08:51.640143   32232 main.go:141] libmachine: (ha-234651-m03) Waiting for machine to stop 103/120
	I0731 17:08:52.641641   32232 main.go:141] libmachine: (ha-234651-m03) Waiting for machine to stop 104/120
	I0731 17:08:53.643831   32232 main.go:141] libmachine: (ha-234651-m03) Waiting for machine to stop 105/120
	I0731 17:08:54.645128   32232 main.go:141] libmachine: (ha-234651-m03) Waiting for machine to stop 106/120
	I0731 17:08:55.647079   32232 main.go:141] libmachine: (ha-234651-m03) Waiting for machine to stop 107/120
	I0731 17:08:56.648384   32232 main.go:141] libmachine: (ha-234651-m03) Waiting for machine to stop 108/120
	I0731 17:08:57.649728   32232 main.go:141] libmachine: (ha-234651-m03) Waiting for machine to stop 109/120
	I0731 17:08:58.651292   32232 main.go:141] libmachine: (ha-234651-m03) Waiting for machine to stop 110/120
	I0731 17:08:59.653564   32232 main.go:141] libmachine: (ha-234651-m03) Waiting for machine to stop 111/120
	I0731 17:09:00.654837   32232 main.go:141] libmachine: (ha-234651-m03) Waiting for machine to stop 112/120
	I0731 17:09:01.657346   32232 main.go:141] libmachine: (ha-234651-m03) Waiting for machine to stop 113/120
	I0731 17:09:02.658533   32232 main.go:141] libmachine: (ha-234651-m03) Waiting for machine to stop 114/120
	I0731 17:09:03.660379   32232 main.go:141] libmachine: (ha-234651-m03) Waiting for machine to stop 115/120
	I0731 17:09:04.661594   32232 main.go:141] libmachine: (ha-234651-m03) Waiting for machine to stop 116/120
	I0731 17:09:05.663215   32232 main.go:141] libmachine: (ha-234651-m03) Waiting for machine to stop 117/120
	I0731 17:09:06.664843   32232 main.go:141] libmachine: (ha-234651-m03) Waiting for machine to stop 118/120
	I0731 17:09:07.666189   32232 main.go:141] libmachine: (ha-234651-m03) Waiting for machine to stop 119/120
	I0731 17:09:08.667079   32232 stop.go:66] stop err: unable to stop vm, current state "Running"
	W0731 17:09:08.667156   32232 stop.go:165] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I0731 17:09:08.669034   32232 out.go:177] 
	W0731 17:09:08.670388   32232 out.go:239] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W0731 17:09:08.670399   32232 out.go:239] * 
	* 
	W0731 17:09:08.672676   32232 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0731 17:09:08.674095   32232 out.go:177] 

                                                
                                                
** /stderr **
ha_test.go:464: failed to run minikube stop. args "out/minikube-linux-amd64 node list -p ha-234651 -v=7 --alsologtostderr" : exit status 82
ha_test.go:467: (dbg) Run:  out/minikube-linux-amd64 start -p ha-234651 --wait=true -v=7 --alsologtostderr
E0731 17:10:28.391924   15259 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/addons-190022/client.crt: no such file or directory
ha_test.go:467: (dbg) Done: out/minikube-linux-amd64 start -p ha-234651 --wait=true -v=7 --alsologtostderr: (2m50.927093696s)
ha_test.go:472: (dbg) Run:  out/minikube-linux-amd64 node list -p ha-234651
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-234651 -n ha-234651
helpers_test.go:244: <<< TestMultiControlPlane/serial/RestartClusterKeepsNodes FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/RestartClusterKeepsNodes]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-234651 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ha-234651 logs -n 25: (1.654798284s)
helpers_test.go:252: TestMultiControlPlane/serial/RestartClusterKeepsNodes logs: 
-- stdout --
	
	==> Audit <==
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                       Args                                       |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| cp      | ha-234651 cp ha-234651-m03:/home/docker/cp-test.txt                              | ha-234651 | jenkins | v1.33.1 | 31 Jul 24 17:03 UTC | 31 Jul 24 17:03 UTC |
	|         | ha-234651-m02:/home/docker/cp-test_ha-234651-m03_ha-234651-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-234651 ssh -n                                                                 | ha-234651 | jenkins | v1.33.1 | 31 Jul 24 17:03 UTC | 31 Jul 24 17:03 UTC |
	|         | ha-234651-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-234651 ssh -n ha-234651-m02 sudo cat                                          | ha-234651 | jenkins | v1.33.1 | 31 Jul 24 17:03 UTC | 31 Jul 24 17:03 UTC |
	|         | /home/docker/cp-test_ha-234651-m03_ha-234651-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-234651 cp ha-234651-m03:/home/docker/cp-test.txt                              | ha-234651 | jenkins | v1.33.1 | 31 Jul 24 17:03 UTC | 31 Jul 24 17:03 UTC |
	|         | ha-234651-m04:/home/docker/cp-test_ha-234651-m03_ha-234651-m04.txt               |           |         |         |                     |                     |
	| ssh     | ha-234651 ssh -n                                                                 | ha-234651 | jenkins | v1.33.1 | 31 Jul 24 17:03 UTC | 31 Jul 24 17:03 UTC |
	|         | ha-234651-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-234651 ssh -n ha-234651-m04 sudo cat                                          | ha-234651 | jenkins | v1.33.1 | 31 Jul 24 17:03 UTC | 31 Jul 24 17:03 UTC |
	|         | /home/docker/cp-test_ha-234651-m03_ha-234651-m04.txt                             |           |         |         |                     |                     |
	| cp      | ha-234651 cp testdata/cp-test.txt                                                | ha-234651 | jenkins | v1.33.1 | 31 Jul 24 17:03 UTC | 31 Jul 24 17:03 UTC |
	|         | ha-234651-m04:/home/docker/cp-test.txt                                           |           |         |         |                     |                     |
	| ssh     | ha-234651 ssh -n                                                                 | ha-234651 | jenkins | v1.33.1 | 31 Jul 24 17:03 UTC | 31 Jul 24 17:03 UTC |
	|         | ha-234651-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-234651 cp ha-234651-m04:/home/docker/cp-test.txt                              | ha-234651 | jenkins | v1.33.1 | 31 Jul 24 17:03 UTC | 31 Jul 24 17:03 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile2373933749/001/cp-test_ha-234651-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-234651 ssh -n                                                                 | ha-234651 | jenkins | v1.33.1 | 31 Jul 24 17:03 UTC | 31 Jul 24 17:03 UTC |
	|         | ha-234651-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-234651 cp ha-234651-m04:/home/docker/cp-test.txt                              | ha-234651 | jenkins | v1.33.1 | 31 Jul 24 17:03 UTC | 31 Jul 24 17:03 UTC |
	|         | ha-234651:/home/docker/cp-test_ha-234651-m04_ha-234651.txt                       |           |         |         |                     |                     |
	| ssh     | ha-234651 ssh -n                                                                 | ha-234651 | jenkins | v1.33.1 | 31 Jul 24 17:03 UTC | 31 Jul 24 17:03 UTC |
	|         | ha-234651-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-234651 ssh -n ha-234651 sudo cat                                              | ha-234651 | jenkins | v1.33.1 | 31 Jul 24 17:03 UTC | 31 Jul 24 17:03 UTC |
	|         | /home/docker/cp-test_ha-234651-m04_ha-234651.txt                                 |           |         |         |                     |                     |
	| cp      | ha-234651 cp ha-234651-m04:/home/docker/cp-test.txt                              | ha-234651 | jenkins | v1.33.1 | 31 Jul 24 17:03 UTC | 31 Jul 24 17:03 UTC |
	|         | ha-234651-m02:/home/docker/cp-test_ha-234651-m04_ha-234651-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-234651 ssh -n                                                                 | ha-234651 | jenkins | v1.33.1 | 31 Jul 24 17:03 UTC | 31 Jul 24 17:03 UTC |
	|         | ha-234651-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-234651 ssh -n ha-234651-m02 sudo cat                                          | ha-234651 | jenkins | v1.33.1 | 31 Jul 24 17:03 UTC | 31 Jul 24 17:03 UTC |
	|         | /home/docker/cp-test_ha-234651-m04_ha-234651-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-234651 cp ha-234651-m04:/home/docker/cp-test.txt                              | ha-234651 | jenkins | v1.33.1 | 31 Jul 24 17:03 UTC | 31 Jul 24 17:03 UTC |
	|         | ha-234651-m03:/home/docker/cp-test_ha-234651-m04_ha-234651-m03.txt               |           |         |         |                     |                     |
	| ssh     | ha-234651 ssh -n                                                                 | ha-234651 | jenkins | v1.33.1 | 31 Jul 24 17:03 UTC | 31 Jul 24 17:03 UTC |
	|         | ha-234651-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-234651 ssh -n ha-234651-m03 sudo cat                                          | ha-234651 | jenkins | v1.33.1 | 31 Jul 24 17:03 UTC | 31 Jul 24 17:03 UTC |
	|         | /home/docker/cp-test_ha-234651-m04_ha-234651-m03.txt                             |           |         |         |                     |                     |
	| node    | ha-234651 node stop m02 -v=7                                                     | ha-234651 | jenkins | v1.33.1 | 31 Jul 24 17:03 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | ha-234651 node start m02 -v=7                                                    | ha-234651 | jenkins | v1.33.1 | 31 Jul 24 17:06 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | list -p ha-234651 -v=7                                                           | ha-234651 | jenkins | v1.33.1 | 31 Jul 24 17:07 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| stop    | -p ha-234651 -v=7                                                                | ha-234651 | jenkins | v1.33.1 | 31 Jul 24 17:07 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| start   | -p ha-234651 --wait=true -v=7                                                    | ha-234651 | jenkins | v1.33.1 | 31 Jul 24 17:09 UTC | 31 Jul 24 17:11 UTC |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | list -p ha-234651                                                                | ha-234651 | jenkins | v1.33.1 | 31 Jul 24 17:11 UTC |                     |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/31 17:09:08
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0731 17:09:08.717790   32734 out.go:291] Setting OutFile to fd 1 ...
	I0731 17:09:08.718017   32734 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 17:09:08.718026   32734 out.go:304] Setting ErrFile to fd 2...
	I0731 17:09:08.718029   32734 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 17:09:08.718188   32734 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19349-8084/.minikube/bin
	I0731 17:09:08.718712   32734 out.go:298] Setting JSON to false
	I0731 17:09:08.719579   32734 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":3093,"bootTime":1722442656,"procs":184,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1065-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0731 17:09:08.719641   32734 start.go:139] virtualization: kvm guest
	I0731 17:09:08.721842   32734 out.go:177] * [ha-234651] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0731 17:09:08.723171   32734 notify.go:220] Checking for updates...
	I0731 17:09:08.723182   32734 out.go:177]   - MINIKUBE_LOCATION=19349
	I0731 17:09:08.724464   32734 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0731 17:09:08.725704   32734 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19349-8084/kubeconfig
	I0731 17:09:08.727306   32734 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19349-8084/.minikube
	I0731 17:09:08.728843   32734 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0731 17:09:08.730226   32734 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0731 17:09:08.732147   32734 config.go:182] Loaded profile config "ha-234651": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0731 17:09:08.732326   32734 driver.go:392] Setting default libvirt URI to qemu:///system
	I0731 17:09:08.732956   32734 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 17:09:08.733018   32734 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 17:09:08.748449   32734 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41611
	I0731 17:09:08.748836   32734 main.go:141] libmachine: () Calling .GetVersion
	I0731 17:09:08.749561   32734 main.go:141] libmachine: Using API Version  1
	I0731 17:09:08.749589   32734 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 17:09:08.749965   32734 main.go:141] libmachine: () Calling .GetMachineName
	I0731 17:09:08.750164   32734 main.go:141] libmachine: (ha-234651) Calling .DriverName
	I0731 17:09:08.784306   32734 out.go:177] * Using the kvm2 driver based on existing profile
	I0731 17:09:08.785661   32734 start.go:297] selected driver: kvm2
	I0731 17:09:08.785673   32734 start.go:901] validating driver "kvm2" against &{Name:ha-234651 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:ha-234651 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.243 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.235 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.139 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.216 Port:0 KubernetesVersion:v1.30.3 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0731 17:09:08.785806   32734 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0731 17:09:08.786152   32734 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0731 17:09:08.786236   32734 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19349-8084/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0731 17:09:08.800568   32734 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0731 17:09:08.801221   32734 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0731 17:09:08.801261   32734 cni.go:84] Creating CNI manager for ""
	I0731 17:09:08.801273   32734 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0731 17:09:08.801360   32734 start.go:340] cluster config:
	{Name:ha-234651 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:ha-234651 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.243 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.235 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.139 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.216 Port:0 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0731 17:09:08.801491   32734 iso.go:125] acquiring lock: {Name:mk4494bb5a63902bf12a909c3109db85229bc516 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0731 17:09:08.803772   32734 out.go:177] * Starting "ha-234651" primary control-plane node in "ha-234651" cluster
	I0731 17:09:08.804901   32734 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0731 17:09:08.804932   32734 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19349-8084/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4
	I0731 17:09:08.804944   32734 cache.go:56] Caching tarball of preloaded images
	I0731 17:09:08.805041   32734 preload.go:172] Found /home/jenkins/minikube-integration/19349-8084/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0731 17:09:08.805052   32734 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on crio
	I0731 17:09:08.805167   32734 profile.go:143] Saving config to /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/ha-234651/config.json ...
	I0731 17:09:08.805373   32734 start.go:360] acquireMachinesLock for ha-234651: {Name:mkd78eacfab7d6c3058c6674434b3d889ec957e0 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0731 17:09:08.805419   32734 start.go:364] duration metric: took 27.67µs to acquireMachinesLock for "ha-234651"
	I0731 17:09:08.805438   32734 start.go:96] Skipping create...Using existing machine configuration
	I0731 17:09:08.805447   32734 fix.go:54] fixHost starting: 
	I0731 17:09:08.805714   32734 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 17:09:08.805750   32734 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 17:09:08.819617   32734 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39891
	I0731 17:09:08.819979   32734 main.go:141] libmachine: () Calling .GetVersion
	I0731 17:09:08.820450   32734 main.go:141] libmachine: Using API Version  1
	I0731 17:09:08.820473   32734 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 17:09:08.820815   32734 main.go:141] libmachine: () Calling .GetMachineName
	I0731 17:09:08.821006   32734 main.go:141] libmachine: (ha-234651) Calling .DriverName
	I0731 17:09:08.821156   32734 main.go:141] libmachine: (ha-234651) Calling .GetState
	I0731 17:09:08.822967   32734 fix.go:112] recreateIfNeeded on ha-234651: state=Running err=<nil>
	W0731 17:09:08.822985   32734 fix.go:138] unexpected machine state, will restart: <nil>
	I0731 17:09:08.824970   32734 out.go:177] * Updating the running kvm2 "ha-234651" VM ...
	I0731 17:09:08.826235   32734 machine.go:94] provisionDockerMachine start ...
	I0731 17:09:08.826253   32734 main.go:141] libmachine: (ha-234651) Calling .DriverName
	I0731 17:09:08.826448   32734 main.go:141] libmachine: (ha-234651) Calling .GetSSHHostname
	I0731 17:09:08.829388   32734 main.go:141] libmachine: (ha-234651) DBG | domain ha-234651 has defined MAC address 52:54:00:20:60:53 in network mk-ha-234651
	I0731 17:09:08.829875   32734 main.go:141] libmachine: (ha-234651) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:60:53", ip: ""} in network mk-ha-234651: {Iface:virbr1 ExpiryTime:2024-07-31 17:59:15 +0000 UTC Type:0 Mac:52:54:00:20:60:53 Iaid: IPaddr:192.168.39.243 Prefix:24 Hostname:ha-234651 Clientid:01:52:54:00:20:60:53}
	I0731 17:09:08.829893   32734 main.go:141] libmachine: (ha-234651) DBG | domain ha-234651 has defined IP address 192.168.39.243 and MAC address 52:54:00:20:60:53 in network mk-ha-234651
	I0731 17:09:08.830126   32734 main.go:141] libmachine: (ha-234651) Calling .GetSSHPort
	I0731 17:09:08.830272   32734 main.go:141] libmachine: (ha-234651) Calling .GetSSHKeyPath
	I0731 17:09:08.830409   32734 main.go:141] libmachine: (ha-234651) Calling .GetSSHKeyPath
	I0731 17:09:08.830509   32734 main.go:141] libmachine: (ha-234651) Calling .GetSSHUsername
	I0731 17:09:08.830683   32734 main.go:141] libmachine: Using SSH client type: native
	I0731 17:09:08.830853   32734 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.243 22 <nil> <nil>}
	I0731 17:09:08.830863   32734 main.go:141] libmachine: About to run SSH command:
	hostname
	I0731 17:09:08.940858   32734 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-234651
	
	I0731 17:09:08.940884   32734 main.go:141] libmachine: (ha-234651) Calling .GetMachineName
	I0731 17:09:08.941089   32734 buildroot.go:166] provisioning hostname "ha-234651"
	I0731 17:09:08.941110   32734 main.go:141] libmachine: (ha-234651) Calling .GetMachineName
	I0731 17:09:08.941288   32734 main.go:141] libmachine: (ha-234651) Calling .GetSSHHostname
	I0731 17:09:08.943973   32734 main.go:141] libmachine: (ha-234651) DBG | domain ha-234651 has defined MAC address 52:54:00:20:60:53 in network mk-ha-234651
	I0731 17:09:08.944392   32734 main.go:141] libmachine: (ha-234651) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:60:53", ip: ""} in network mk-ha-234651: {Iface:virbr1 ExpiryTime:2024-07-31 17:59:15 +0000 UTC Type:0 Mac:52:54:00:20:60:53 Iaid: IPaddr:192.168.39.243 Prefix:24 Hostname:ha-234651 Clientid:01:52:54:00:20:60:53}
	I0731 17:09:08.944431   32734 main.go:141] libmachine: (ha-234651) DBG | domain ha-234651 has defined IP address 192.168.39.243 and MAC address 52:54:00:20:60:53 in network mk-ha-234651
	I0731 17:09:08.944549   32734 main.go:141] libmachine: (ha-234651) Calling .GetSSHPort
	I0731 17:09:08.944732   32734 main.go:141] libmachine: (ha-234651) Calling .GetSSHKeyPath
	I0731 17:09:08.944880   32734 main.go:141] libmachine: (ha-234651) Calling .GetSSHKeyPath
	I0731 17:09:08.945046   32734 main.go:141] libmachine: (ha-234651) Calling .GetSSHUsername
	I0731 17:09:08.945208   32734 main.go:141] libmachine: Using SSH client type: native
	I0731 17:09:08.945396   32734 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.243 22 <nil> <nil>}
	I0731 17:09:08.945410   32734 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-234651 && echo "ha-234651" | sudo tee /etc/hostname
	I0731 17:09:09.069576   32734 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-234651
	
	I0731 17:09:09.069616   32734 main.go:141] libmachine: (ha-234651) Calling .GetSSHHostname
	I0731 17:09:09.072262   32734 main.go:141] libmachine: (ha-234651) DBG | domain ha-234651 has defined MAC address 52:54:00:20:60:53 in network mk-ha-234651
	I0731 17:09:09.072635   32734 main.go:141] libmachine: (ha-234651) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:60:53", ip: ""} in network mk-ha-234651: {Iface:virbr1 ExpiryTime:2024-07-31 17:59:15 +0000 UTC Type:0 Mac:52:54:00:20:60:53 Iaid: IPaddr:192.168.39.243 Prefix:24 Hostname:ha-234651 Clientid:01:52:54:00:20:60:53}
	I0731 17:09:09.072657   32734 main.go:141] libmachine: (ha-234651) DBG | domain ha-234651 has defined IP address 192.168.39.243 and MAC address 52:54:00:20:60:53 in network mk-ha-234651
	I0731 17:09:09.072862   32734 main.go:141] libmachine: (ha-234651) Calling .GetSSHPort
	I0731 17:09:09.073044   32734 main.go:141] libmachine: (ha-234651) Calling .GetSSHKeyPath
	I0731 17:09:09.073210   32734 main.go:141] libmachine: (ha-234651) Calling .GetSSHKeyPath
	I0731 17:09:09.073324   32734 main.go:141] libmachine: (ha-234651) Calling .GetSSHUsername
	I0731 17:09:09.073479   32734 main.go:141] libmachine: Using SSH client type: native
	I0731 17:09:09.073646   32734 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.243 22 <nil> <nil>}
	I0731 17:09:09.073668   32734 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-234651' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-234651/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-234651' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0731 17:09:09.188806   32734 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0731 17:09:09.188830   32734 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19349-8084/.minikube CaCertPath:/home/jenkins/minikube-integration/19349-8084/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19349-8084/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19349-8084/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19349-8084/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19349-8084/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19349-8084/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19349-8084/.minikube}
	I0731 17:09:09.188861   32734 buildroot.go:174] setting up certificates
	I0731 17:09:09.188871   32734 provision.go:84] configureAuth start
	I0731 17:09:09.188885   32734 main.go:141] libmachine: (ha-234651) Calling .GetMachineName
	I0731 17:09:09.189124   32734 main.go:141] libmachine: (ha-234651) Calling .GetIP
	I0731 17:09:09.191631   32734 main.go:141] libmachine: (ha-234651) DBG | domain ha-234651 has defined MAC address 52:54:00:20:60:53 in network mk-ha-234651
	I0731 17:09:09.192134   32734 main.go:141] libmachine: (ha-234651) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:60:53", ip: ""} in network mk-ha-234651: {Iface:virbr1 ExpiryTime:2024-07-31 17:59:15 +0000 UTC Type:0 Mac:52:54:00:20:60:53 Iaid: IPaddr:192.168.39.243 Prefix:24 Hostname:ha-234651 Clientid:01:52:54:00:20:60:53}
	I0731 17:09:09.192161   32734 main.go:141] libmachine: (ha-234651) DBG | domain ha-234651 has defined IP address 192.168.39.243 and MAC address 52:54:00:20:60:53 in network mk-ha-234651
	I0731 17:09:09.192303   32734 main.go:141] libmachine: (ha-234651) Calling .GetSSHHostname
	I0731 17:09:09.194709   32734 main.go:141] libmachine: (ha-234651) DBG | domain ha-234651 has defined MAC address 52:54:00:20:60:53 in network mk-ha-234651
	I0731 17:09:09.195079   32734 main.go:141] libmachine: (ha-234651) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:60:53", ip: ""} in network mk-ha-234651: {Iface:virbr1 ExpiryTime:2024-07-31 17:59:15 +0000 UTC Type:0 Mac:52:54:00:20:60:53 Iaid: IPaddr:192.168.39.243 Prefix:24 Hostname:ha-234651 Clientid:01:52:54:00:20:60:53}
	I0731 17:09:09.195100   32734 main.go:141] libmachine: (ha-234651) DBG | domain ha-234651 has defined IP address 192.168.39.243 and MAC address 52:54:00:20:60:53 in network mk-ha-234651
	I0731 17:09:09.195267   32734 provision.go:143] copyHostCerts
	I0731 17:09:09.195290   32734 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19349-8084/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19349-8084/.minikube/ca.pem
	I0731 17:09:09.195318   32734 exec_runner.go:144] found /home/jenkins/minikube-integration/19349-8084/.minikube/ca.pem, removing ...
	I0731 17:09:09.195327   32734 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19349-8084/.minikube/ca.pem
	I0731 17:09:09.195391   32734 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19349-8084/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19349-8084/.minikube/ca.pem (1082 bytes)
	I0731 17:09:09.195464   32734 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19349-8084/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19349-8084/.minikube/cert.pem
	I0731 17:09:09.195485   32734 exec_runner.go:144] found /home/jenkins/minikube-integration/19349-8084/.minikube/cert.pem, removing ...
	I0731 17:09:09.195491   32734 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19349-8084/.minikube/cert.pem
	I0731 17:09:09.195515   32734 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19349-8084/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19349-8084/.minikube/cert.pem (1123 bytes)
	I0731 17:09:09.195554   32734 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19349-8084/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19349-8084/.minikube/key.pem
	I0731 17:09:09.195570   32734 exec_runner.go:144] found /home/jenkins/minikube-integration/19349-8084/.minikube/key.pem, removing ...
	I0731 17:09:09.195576   32734 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19349-8084/.minikube/key.pem
	I0731 17:09:09.195601   32734 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19349-8084/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19349-8084/.minikube/key.pem (1675 bytes)
	I0731 17:09:09.195644   32734 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19349-8084/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19349-8084/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19349-8084/.minikube/certs/ca-key.pem org=jenkins.ha-234651 san=[127.0.0.1 192.168.39.243 ha-234651 localhost minikube]
	I0731 17:09:09.303470   32734 provision.go:177] copyRemoteCerts
	I0731 17:09:09.303528   32734 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0731 17:09:09.303554   32734 main.go:141] libmachine: (ha-234651) Calling .GetSSHHostname
	I0731 17:09:09.306262   32734 main.go:141] libmachine: (ha-234651) DBG | domain ha-234651 has defined MAC address 52:54:00:20:60:53 in network mk-ha-234651
	I0731 17:09:09.306656   32734 main.go:141] libmachine: (ha-234651) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:60:53", ip: ""} in network mk-ha-234651: {Iface:virbr1 ExpiryTime:2024-07-31 17:59:15 +0000 UTC Type:0 Mac:52:54:00:20:60:53 Iaid: IPaddr:192.168.39.243 Prefix:24 Hostname:ha-234651 Clientid:01:52:54:00:20:60:53}
	I0731 17:09:09.306676   32734 main.go:141] libmachine: (ha-234651) DBG | domain ha-234651 has defined IP address 192.168.39.243 and MAC address 52:54:00:20:60:53 in network mk-ha-234651
	I0731 17:09:09.306933   32734 main.go:141] libmachine: (ha-234651) Calling .GetSSHPort
	I0731 17:09:09.307131   32734 main.go:141] libmachine: (ha-234651) Calling .GetSSHKeyPath
	I0731 17:09:09.307290   32734 main.go:141] libmachine: (ha-234651) Calling .GetSSHUsername
	I0731 17:09:09.307445   32734 sshutil.go:53] new ssh client: &{IP:192.168.39.243 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19349-8084/.minikube/machines/ha-234651/id_rsa Username:docker}
	I0731 17:09:09.389166   32734 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19349-8084/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0731 17:09:09.389229   32734 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19349-8084/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0731 17:09:09.412205   32734 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19349-8084/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0731 17:09:09.412281   32734 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19349-8084/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0731 17:09:09.434220   32734 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19349-8084/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0731 17:09:09.434296   32734 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19349-8084/.minikube/machines/server.pem --> /etc/docker/server.pem (1196 bytes)
	I0731 17:09:09.457865   32734 provision.go:87] duration metric: took 268.960736ms to configureAuth
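The server certificate minted above carries the IPs and hostnames from the logged SAN list; a rough openssl-CLI equivalent of that step, assuming the ca.pem/ca-key.pem pair from the auth options is used for signing (minikube performs this in Go, and the subject and validity period below are illustrative):

	# Create a key + CSR, then sign it with the minikube CA, embedding the SAN set from the log line above.
	openssl req -new -newkey rsa:2048 -nodes -keyout server-key.pem -out server.csr \
	  -subj "/O=jenkins.ha-234651/CN=minikube"
	openssl x509 -req -in server.csr -CA ca.pem -CAkey ca-key.pem -CAcreateserial -days 825 -out server.pem \
	  -extfile <(printf 'subjectAltName=IP:127.0.0.1,IP:192.168.39.243,DNS:ha-234651,DNS:localhost,DNS:minikube')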
	I0731 17:09:09.457895   32734 buildroot.go:189] setting minikube options for container-runtime
	I0731 17:09:09.458146   32734 config.go:182] Loaded profile config "ha-234651": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0731 17:09:09.458238   32734 main.go:141] libmachine: (ha-234651) Calling .GetSSHHostname
	I0731 17:09:09.460867   32734 main.go:141] libmachine: (ha-234651) DBG | domain ha-234651 has defined MAC address 52:54:00:20:60:53 in network mk-ha-234651
	I0731 17:09:09.461212   32734 main.go:141] libmachine: (ha-234651) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:60:53", ip: ""} in network mk-ha-234651: {Iface:virbr1 ExpiryTime:2024-07-31 17:59:15 +0000 UTC Type:0 Mac:52:54:00:20:60:53 Iaid: IPaddr:192.168.39.243 Prefix:24 Hostname:ha-234651 Clientid:01:52:54:00:20:60:53}
	I0731 17:09:09.461238   32734 main.go:141] libmachine: (ha-234651) DBG | domain ha-234651 has defined IP address 192.168.39.243 and MAC address 52:54:00:20:60:53 in network mk-ha-234651
	I0731 17:09:09.461454   32734 main.go:141] libmachine: (ha-234651) Calling .GetSSHPort
	I0731 17:09:09.461607   32734 main.go:141] libmachine: (ha-234651) Calling .GetSSHKeyPath
	I0731 17:09:09.461847   32734 main.go:141] libmachine: (ha-234651) Calling .GetSSHKeyPath
	I0731 17:09:09.461983   32734 main.go:141] libmachine: (ha-234651) Calling .GetSSHUsername
	I0731 17:09:09.462159   32734 main.go:141] libmachine: Using SSH client type: native
	I0731 17:09:09.462325   32734 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.243 22 <nil> <nil>}
	I0731 17:09:09.462338   32734 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0731 17:09:15.084797   32734 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0731 17:09:15.084823   32734 machine.go:97] duration metric: took 6.258576547s to provisionDockerMachine
	I0731 17:09:15.084837   32734 start.go:293] postStartSetup for "ha-234651" (driver="kvm2")
	I0731 17:09:15.084852   32734 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0731 17:09:15.084886   32734 main.go:141] libmachine: (ha-234651) Calling .DriverName
	I0731 17:09:15.085227   32734 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0731 17:09:15.085262   32734 main.go:141] libmachine: (ha-234651) Calling .GetSSHHostname
	I0731 17:09:15.088116   32734 main.go:141] libmachine: (ha-234651) DBG | domain ha-234651 has defined MAC address 52:54:00:20:60:53 in network mk-ha-234651
	I0731 17:09:15.088527   32734 main.go:141] libmachine: (ha-234651) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:60:53", ip: ""} in network mk-ha-234651: {Iface:virbr1 ExpiryTime:2024-07-31 17:59:15 +0000 UTC Type:0 Mac:52:54:00:20:60:53 Iaid: IPaddr:192.168.39.243 Prefix:24 Hostname:ha-234651 Clientid:01:52:54:00:20:60:53}
	I0731 17:09:15.088555   32734 main.go:141] libmachine: (ha-234651) DBG | domain ha-234651 has defined IP address 192.168.39.243 and MAC address 52:54:00:20:60:53 in network mk-ha-234651
	I0731 17:09:15.088675   32734 main.go:141] libmachine: (ha-234651) Calling .GetSSHPort
	I0731 17:09:15.088853   32734 main.go:141] libmachine: (ha-234651) Calling .GetSSHKeyPath
	I0731 17:09:15.089022   32734 main.go:141] libmachine: (ha-234651) Calling .GetSSHUsername
	I0731 17:09:15.089139   32734 sshutil.go:53] new ssh client: &{IP:192.168.39.243 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19349-8084/.minikube/machines/ha-234651/id_rsa Username:docker}
	I0731 17:09:15.173391   32734 ssh_runner.go:195] Run: cat /etc/os-release
	I0731 17:09:15.177247   32734 info.go:137] Remote host: Buildroot 2023.02.9
	I0731 17:09:15.177277   32734 filesync.go:126] Scanning /home/jenkins/minikube-integration/19349-8084/.minikube/addons for local assets ...
	I0731 17:09:15.177333   32734 filesync.go:126] Scanning /home/jenkins/minikube-integration/19349-8084/.minikube/files for local assets ...
	I0731 17:09:15.177443   32734 filesync.go:149] local asset: /home/jenkins/minikube-integration/19349-8084/.minikube/files/etc/ssl/certs/152592.pem -> 152592.pem in /etc/ssl/certs
	I0731 17:09:15.177455   32734 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19349-8084/.minikube/files/etc/ssl/certs/152592.pem -> /etc/ssl/certs/152592.pem
	I0731 17:09:15.177544   32734 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0731 17:09:15.186304   32734 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19349-8084/.minikube/files/etc/ssl/certs/152592.pem --> /etc/ssl/certs/152592.pem (1708 bytes)
	I0731 17:09:15.208295   32734 start.go:296] duration metric: took 123.442529ms for postStartSetup
	I0731 17:09:15.208363   32734 main.go:141] libmachine: (ha-234651) Calling .DriverName
	I0731 17:09:15.208642   32734 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I0731 17:09:15.208667   32734 main.go:141] libmachine: (ha-234651) Calling .GetSSHHostname
	I0731 17:09:15.211473   32734 main.go:141] libmachine: (ha-234651) DBG | domain ha-234651 has defined MAC address 52:54:00:20:60:53 in network mk-ha-234651
	I0731 17:09:15.211840   32734 main.go:141] libmachine: (ha-234651) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:60:53", ip: ""} in network mk-ha-234651: {Iface:virbr1 ExpiryTime:2024-07-31 17:59:15 +0000 UTC Type:0 Mac:52:54:00:20:60:53 Iaid: IPaddr:192.168.39.243 Prefix:24 Hostname:ha-234651 Clientid:01:52:54:00:20:60:53}
	I0731 17:09:15.211864   32734 main.go:141] libmachine: (ha-234651) DBG | domain ha-234651 has defined IP address 192.168.39.243 and MAC address 52:54:00:20:60:53 in network mk-ha-234651
	I0731 17:09:15.212011   32734 main.go:141] libmachine: (ha-234651) Calling .GetSSHPort
	I0731 17:09:15.212185   32734 main.go:141] libmachine: (ha-234651) Calling .GetSSHKeyPath
	I0731 17:09:15.212358   32734 main.go:141] libmachine: (ha-234651) Calling .GetSSHUsername
	I0731 17:09:15.212492   32734 sshutil.go:53] new ssh client: &{IP:192.168.39.243 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19349-8084/.minikube/machines/ha-234651/id_rsa Username:docker}
	W0731 17:09:15.293108   32734 fix.go:99] cannot read backup folder, skipping restore: read dir: sudo ls --almost-all -1 /var/lib/minikube/backup: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/backup': No such file or directory
	I0731 17:09:15.293131   32734 fix.go:56] duration metric: took 6.487685825s for fixHost
	I0731 17:09:15.293151   32734 main.go:141] libmachine: (ha-234651) Calling .GetSSHHostname
	I0731 17:09:15.295915   32734 main.go:141] libmachine: (ha-234651) DBG | domain ha-234651 has defined MAC address 52:54:00:20:60:53 in network mk-ha-234651
	I0731 17:09:15.296261   32734 main.go:141] libmachine: (ha-234651) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:60:53", ip: ""} in network mk-ha-234651: {Iface:virbr1 ExpiryTime:2024-07-31 17:59:15 +0000 UTC Type:0 Mac:52:54:00:20:60:53 Iaid: IPaddr:192.168.39.243 Prefix:24 Hostname:ha-234651 Clientid:01:52:54:00:20:60:53}
	I0731 17:09:15.296285   32734 main.go:141] libmachine: (ha-234651) DBG | domain ha-234651 has defined IP address 192.168.39.243 and MAC address 52:54:00:20:60:53 in network mk-ha-234651
	I0731 17:09:15.296476   32734 main.go:141] libmachine: (ha-234651) Calling .GetSSHPort
	I0731 17:09:15.296674   32734 main.go:141] libmachine: (ha-234651) Calling .GetSSHKeyPath
	I0731 17:09:15.296838   32734 main.go:141] libmachine: (ha-234651) Calling .GetSSHKeyPath
	I0731 17:09:15.296980   32734 main.go:141] libmachine: (ha-234651) Calling .GetSSHUsername
	I0731 17:09:15.297126   32734 main.go:141] libmachine: Using SSH client type: native
	I0731 17:09:15.297385   32734 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.243 22 <nil> <nil>}
	I0731 17:09:15.297399   32734 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0731 17:09:15.407492   32734 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722445755.370029814
	
	I0731 17:09:15.407513   32734 fix.go:216] guest clock: 1722445755.370029814
	I0731 17:09:15.407520   32734 fix.go:229] Guest: 2024-07-31 17:09:15.370029814 +0000 UTC Remote: 2024-07-31 17:09:15.293137867 +0000 UTC m=+6.609119522 (delta=76.891947ms)
	I0731 17:09:15.407537   32734 fix.go:200] guest clock delta is within tolerance: 76.891947ms
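The guest/host clock comparison above can be reproduced by hand; a minimal sketch using the SSH key path and guest IP from this log (the awk arithmetic and the notion of a fixed tolerance are illustrative, not minikube's internal logic):

	# Read the guest clock over SSH and compare it with the local clock.
	guest=$(ssh -i /home/jenkins/minikube-integration/19349-8084/.minikube/machines/ha-234651/id_rsa docker@192.168.39.243 'date +%s.%N')
	host=$(date +%s.%N)
	# Print the absolute drift in seconds; the log above measured roughly 0.077s.
	awk -v g="$guest" -v h="$host" 'BEGIN { d = g - h; if (d < 0) d = -d; printf "drift: %.3fs\n", d }'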
	I0731 17:09:15.407541   32734 start.go:83] releasing machines lock for "ha-234651", held for 6.602112249s
	I0731 17:09:15.407558   32734 main.go:141] libmachine: (ha-234651) Calling .DriverName
	I0731 17:09:15.407825   32734 main.go:141] libmachine: (ha-234651) Calling .GetIP
	I0731 17:09:15.410601   32734 main.go:141] libmachine: (ha-234651) DBG | domain ha-234651 has defined MAC address 52:54:00:20:60:53 in network mk-ha-234651
	I0731 17:09:15.410985   32734 main.go:141] libmachine: (ha-234651) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:60:53", ip: ""} in network mk-ha-234651: {Iface:virbr1 ExpiryTime:2024-07-31 17:59:15 +0000 UTC Type:0 Mac:52:54:00:20:60:53 Iaid: IPaddr:192.168.39.243 Prefix:24 Hostname:ha-234651 Clientid:01:52:54:00:20:60:53}
	I0731 17:09:15.411012   32734 main.go:141] libmachine: (ha-234651) DBG | domain ha-234651 has defined IP address 192.168.39.243 and MAC address 52:54:00:20:60:53 in network mk-ha-234651
	I0731 17:09:15.411197   32734 main.go:141] libmachine: (ha-234651) Calling .DriverName
	I0731 17:09:15.411652   32734 main.go:141] libmachine: (ha-234651) Calling .DriverName
	I0731 17:09:15.411800   32734 main.go:141] libmachine: (ha-234651) Calling .DriverName
	I0731 17:09:15.411895   32734 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0731 17:09:15.411931   32734 main.go:141] libmachine: (ha-234651) Calling .GetSSHHostname
	I0731 17:09:15.412019   32734 ssh_runner.go:195] Run: cat /version.json
	I0731 17:09:15.412044   32734 main.go:141] libmachine: (ha-234651) Calling .GetSSHHostname
	I0731 17:09:15.414601   32734 main.go:141] libmachine: (ha-234651) DBG | domain ha-234651 has defined MAC address 52:54:00:20:60:53 in network mk-ha-234651
	I0731 17:09:15.414703   32734 main.go:141] libmachine: (ha-234651) DBG | domain ha-234651 has defined MAC address 52:54:00:20:60:53 in network mk-ha-234651
	I0731 17:09:15.414930   32734 main.go:141] libmachine: (ha-234651) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:60:53", ip: ""} in network mk-ha-234651: {Iface:virbr1 ExpiryTime:2024-07-31 17:59:15 +0000 UTC Type:0 Mac:52:54:00:20:60:53 Iaid: IPaddr:192.168.39.243 Prefix:24 Hostname:ha-234651 Clientid:01:52:54:00:20:60:53}
	I0731 17:09:15.414959   32734 main.go:141] libmachine: (ha-234651) DBG | domain ha-234651 has defined IP address 192.168.39.243 and MAC address 52:54:00:20:60:53 in network mk-ha-234651
	I0731 17:09:15.414987   32734 main.go:141] libmachine: (ha-234651) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:60:53", ip: ""} in network mk-ha-234651: {Iface:virbr1 ExpiryTime:2024-07-31 17:59:15 +0000 UTC Type:0 Mac:52:54:00:20:60:53 Iaid: IPaddr:192.168.39.243 Prefix:24 Hostname:ha-234651 Clientid:01:52:54:00:20:60:53}
	I0731 17:09:15.415001   32734 main.go:141] libmachine: (ha-234651) DBG | domain ha-234651 has defined IP address 192.168.39.243 and MAC address 52:54:00:20:60:53 in network mk-ha-234651
	I0731 17:09:15.415033   32734 main.go:141] libmachine: (ha-234651) Calling .GetSSHPort
	I0731 17:09:15.415223   32734 main.go:141] libmachine: (ha-234651) Calling .GetSSHKeyPath
	I0731 17:09:15.415300   32734 main.go:141] libmachine: (ha-234651) Calling .GetSSHPort
	I0731 17:09:15.415393   32734 main.go:141] libmachine: (ha-234651) Calling .GetSSHUsername
	I0731 17:09:15.415453   32734 main.go:141] libmachine: (ha-234651) Calling .GetSSHKeyPath
	I0731 17:09:15.415526   32734 sshutil.go:53] new ssh client: &{IP:192.168.39.243 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19349-8084/.minikube/machines/ha-234651/id_rsa Username:docker}
	I0731 17:09:15.415590   32734 main.go:141] libmachine: (ha-234651) Calling .GetSSHUsername
	I0731 17:09:15.415728   32734 sshutil.go:53] new ssh client: &{IP:192.168.39.243 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19349-8084/.minikube/machines/ha-234651/id_rsa Username:docker}
	I0731 17:09:15.532253   32734 ssh_runner.go:195] Run: systemctl --version
	I0731 17:09:15.538160   32734 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0731 17:09:15.695517   32734 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0731 17:09:15.701921   32734 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0731 17:09:15.701976   32734 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0731 17:09:15.710737   32734 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0731 17:09:15.710759   32734 start.go:495] detecting cgroup driver to use...
	I0731 17:09:15.710817   32734 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0731 17:09:15.727388   32734 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0731 17:09:15.740721   32734 docker.go:217] disabling cri-docker service (if available) ...
	I0731 17:09:15.740788   32734 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0731 17:09:15.753435   32734 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0731 17:09:15.766082   32734 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0731 17:09:15.903783   32734 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0731 17:09:16.046196   32734 docker.go:233] disabling docker service ...
	I0731 17:09:16.046272   32734 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0731 17:09:16.063469   32734 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0731 17:09:16.076872   32734 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0731 17:09:16.213795   32734 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0731 17:09:16.351321   32734 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0731 17:09:16.364865   32734 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0731 17:09:16.382292   32734 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0731 17:09:16.382382   32734 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 17:09:16.391983   32734 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0731 17:09:16.392039   32734 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 17:09:16.401399   32734 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 17:09:16.410650   32734 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 17:09:16.420090   32734 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0731 17:09:16.429604   32734 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 17:09:16.439009   32734 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 17:09:16.448918   32734 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 17:09:16.458264   32734 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0731 17:09:16.466590   32734 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0731 17:09:16.474941   32734 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0731 17:09:16.609915   32734 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0731 17:09:23.853312   32734 ssh_runner.go:235] Completed: sudo systemctl restart crio: (7.243356887s)
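After a restart like the one just completed, the effect of the preceding sed edits can be spot-checked inside the VM (e.g. via `minikube ssh -p ha-234651`); a small sketch limited to the keys touched above:

	# Confirm the rewritten CRI-O settings and that the service is back up.
	grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' /etc/crio/crio.conf.d/02-crio.conf
	sudo systemctl is-active crio   # should print "active"
	sudo crictl version             # the same probe this log runs a few lines below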
	I0731 17:09:23.853342   32734 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0731 17:09:23.853395   32734 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0731 17:09:23.858115   32734 start.go:563] Will wait 60s for crictl version
	I0731 17:09:23.858167   32734 ssh_runner.go:195] Run: which crictl
	I0731 17:09:23.861681   32734 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0731 17:09:23.896810   32734 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0731 17:09:23.896892   32734 ssh_runner.go:195] Run: crio --version
	I0731 17:09:23.930309   32734 ssh_runner.go:195] Run: crio --version
	I0731 17:09:23.958897   32734 out.go:177] * Preparing Kubernetes v1.30.3 on CRI-O 1.29.1 ...
	I0731 17:09:23.960154   32734 main.go:141] libmachine: (ha-234651) Calling .GetIP
	I0731 17:09:23.963105   32734 main.go:141] libmachine: (ha-234651) DBG | domain ha-234651 has defined MAC address 52:54:00:20:60:53 in network mk-ha-234651
	I0731 17:09:23.963461   32734 main.go:141] libmachine: (ha-234651) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:60:53", ip: ""} in network mk-ha-234651: {Iface:virbr1 ExpiryTime:2024-07-31 17:59:15 +0000 UTC Type:0 Mac:52:54:00:20:60:53 Iaid: IPaddr:192.168.39.243 Prefix:24 Hostname:ha-234651 Clientid:01:52:54:00:20:60:53}
	I0731 17:09:23.963485   32734 main.go:141] libmachine: (ha-234651) DBG | domain ha-234651 has defined IP address 192.168.39.243 and MAC address 52:54:00:20:60:53 in network mk-ha-234651
	I0731 17:09:23.963699   32734 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0731 17:09:23.968369   32734 kubeadm.go:883] updating cluster {Name:ha-234651 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 Cl
usterName:ha-234651 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.243 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.235 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.139 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.216 Port:0 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false fr
eshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L Mo
untGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0731 17:09:23.968489   32734 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0731 17:09:23.968534   32734 ssh_runner.go:195] Run: sudo crictl images --output json
	I0731 17:09:24.023060   32734 crio.go:514] all images are preloaded for cri-o runtime.
	I0731 17:09:24.023079   32734 crio.go:433] Images already preloaded, skipping extraction
	I0731 17:09:24.023150   32734 ssh_runner.go:195] Run: sudo crictl images --output json
	I0731 17:09:24.054846   32734 crio.go:514] all images are preloaded for cri-o runtime.
	I0731 17:09:24.054870   32734 cache_images.go:84] Images are preloaded, skipping loading
	I0731 17:09:24.054881   32734 kubeadm.go:934] updating node { 192.168.39.243 8443 v1.30.3 crio true true} ...
	I0731 17:09:24.055008   32734 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-234651 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.243
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:ha-234651 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
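A quick way to confirm that a kubelet override like this took effect on the node (a sketch; both commands are standard systemd/procps tools, run inside the VM):

	# Show the base unit plus all drop-ins, including the 10-kubeadm.conf written further below.
	systemctl cat kubelet
	# The running kubelet should carry --hostname-override=ha-234651 and --node-ip=192.168.39.243.
	pgrep -a kubelet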
	I0731 17:09:24.055094   32734 ssh_runner.go:195] Run: crio config
	I0731 17:09:24.108571   32734 cni.go:84] Creating CNI manager for ""
	I0731 17:09:24.108594   32734 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0731 17:09:24.108608   32734 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0731 17:09:24.108648   32734 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.243 APIServerPort:8443 KubernetesVersion:v1.30.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-234651 NodeName:ha-234651 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.243"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.243 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernete
s/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0731 17:09:24.108819   32734 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.243
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-234651"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.243
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.243"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
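A generated config like the one above can be sanity-checked before kubeadm consumes it; a minimal sketch assuming the file has been written out as /var/tmp/minikube/kubeadm.yaml.new (as the scp step below does) and that `kubeadm config validate` is available in this kubeadm release:

	# Validate the multi-document config without applying anything.
	sudo /var/lib/minikube/binaries/v1.30.3/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new
	# Or simply list the documents it contains.
	grep '^kind:' /var/tmp/minikube/kubeadm.yaml.new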
	
	I0731 17:09:24.108844   32734 kube-vip.go:115] generating kube-vip config ...
	I0731 17:09:24.108915   32734 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0731 17:09:24.119465   32734 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0731 17:09:24.119563   32734 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
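Once this manifest is copied into /etc/kubernetes/manifests (the kube-vip.yaml scp below), the static pod and the VIP can be checked; a sketch in which the pod name suffix is assumed to follow the usual <name>-<node> static-pod convention:

	# Static pods are suffixed with the node name, so this should be kube-vip-ha-234651.
	kubectl --context ha-234651 -n kube-system get pod kube-vip-ha-234651 -o wide
	# On whichever control-plane node holds the lease, the VIP is bound to eth0.
	ip addr show eth0 | grep 192.168.39.254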
	I0731 17:09:24.119619   32734 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0731 17:09:24.128151   32734 binaries.go:44] Found k8s binaries, skipping transfer
	I0731 17:09:24.128199   32734 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0731 17:09:24.136567   32734 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (309 bytes)
	I0731 17:09:24.151554   32734 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0731 17:09:24.166604   32734 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2153 bytes)
	I0731 17:09:24.181883   32734 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I0731 17:09:24.197927   32734 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0731 17:09:24.202365   32734 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0731 17:09:24.335043   32734 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0731 17:09:24.349103   32734 certs.go:68] Setting up /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/ha-234651 for IP: 192.168.39.243
	I0731 17:09:24.349125   32734 certs.go:194] generating shared ca certs ...
	I0731 17:09:24.349145   32734 certs.go:226] acquiring lock for ca certs: {Name:mk49eed439085f9c30746706460ce213570d1997 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 17:09:24.349308   32734 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19349-8084/.minikube/ca.key
	I0731 17:09:24.349364   32734 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19349-8084/.minikube/proxy-client-ca.key
	I0731 17:09:24.349375   32734 certs.go:256] generating profile certs ...
	I0731 17:09:24.349477   32734 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/ha-234651/client.key
	I0731 17:09:24.349512   32734 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/ha-234651/apiserver.key.ebf2528d
	I0731 17:09:24.349537   32734 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/ha-234651/apiserver.crt.ebf2528d with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.243 192.168.39.235 192.168.39.139 192.168.39.254]
	I0731 17:09:24.405668   32734 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/ha-234651/apiserver.crt.ebf2528d ...
	I0731 17:09:24.405699   32734 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/ha-234651/apiserver.crt.ebf2528d: {Name:mk62af08812ccea9aa161fffb4d843357d3b7fba Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 17:09:24.405875   32734 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/ha-234651/apiserver.key.ebf2528d ...
	I0731 17:09:24.405888   32734 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/ha-234651/apiserver.key.ebf2528d: {Name:mkeb68c589be161c2a7ec2258557e2505fc47d80 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 17:09:24.405962   32734 certs.go:381] copying /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/ha-234651/apiserver.crt.ebf2528d -> /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/ha-234651/apiserver.crt
	I0731 17:09:24.406151   32734 certs.go:385] copying /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/ha-234651/apiserver.key.ebf2528d -> /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/ha-234651/apiserver.key
	I0731 17:09:24.406295   32734 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/ha-234651/proxy-client.key
	I0731 17:09:24.406310   32734 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19349-8084/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0731 17:09:24.406324   32734 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19349-8084/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0731 17:09:24.406342   32734 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19349-8084/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0731 17:09:24.406359   32734 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19349-8084/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0731 17:09:24.406374   32734 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/ha-234651/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0731 17:09:24.406389   32734 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/ha-234651/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0731 17:09:24.406403   32734 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/ha-234651/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0731 17:09:24.406417   32734 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/ha-234651/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0731 17:09:24.406472   32734 certs.go:484] found cert: /home/jenkins/minikube-integration/19349-8084/.minikube/certs/15259.pem (1338 bytes)
	W0731 17:09:24.406505   32734 certs.go:480] ignoring /home/jenkins/minikube-integration/19349-8084/.minikube/certs/15259_empty.pem, impossibly tiny 0 bytes
	I0731 17:09:24.406516   32734 certs.go:484] found cert: /home/jenkins/minikube-integration/19349-8084/.minikube/certs/ca-key.pem (1675 bytes)
	I0731 17:09:24.406551   32734 certs.go:484] found cert: /home/jenkins/minikube-integration/19349-8084/.minikube/certs/ca.pem (1082 bytes)
	I0731 17:09:24.406576   32734 certs.go:484] found cert: /home/jenkins/minikube-integration/19349-8084/.minikube/certs/cert.pem (1123 bytes)
	I0731 17:09:24.406600   32734 certs.go:484] found cert: /home/jenkins/minikube-integration/19349-8084/.minikube/certs/key.pem (1675 bytes)
	I0731 17:09:24.406640   32734 certs.go:484] found cert: /home/jenkins/minikube-integration/19349-8084/.minikube/files/etc/ssl/certs/152592.pem (1708 bytes)
	I0731 17:09:24.406672   32734 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19349-8084/.minikube/certs/15259.pem -> /usr/share/ca-certificates/15259.pem
	I0731 17:09:24.406696   32734 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19349-8084/.minikube/files/etc/ssl/certs/152592.pem -> /usr/share/ca-certificates/152592.pem
	I0731 17:09:24.406711   32734 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19349-8084/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0731 17:09:24.407332   32734 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19349-8084/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0731 17:09:24.430792   32734 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19349-8084/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0731 17:09:24.452679   32734 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19349-8084/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0731 17:09:24.474405   32734 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19349-8084/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0731 17:09:24.495683   32734 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/ha-234651/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0731 17:09:24.517072   32734 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/ha-234651/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0731 17:09:24.538126   32734 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/ha-234651/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0731 17:09:24.560173   32734 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/ha-234651/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0731 17:09:24.581503   32734 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19349-8084/.minikube/certs/15259.pem --> /usr/share/ca-certificates/15259.pem (1338 bytes)
	I0731 17:09:24.603405   32734 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19349-8084/.minikube/files/etc/ssl/certs/152592.pem --> /usr/share/ca-certificates/152592.pem (1708 bytes)
	I0731 17:09:24.625241   32734 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19349-8084/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0731 17:09:24.646748   32734 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0731 17:09:24.661823   32734 ssh_runner.go:195] Run: openssl version
	I0731 17:09:24.667159   32734 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/15259.pem && ln -fs /usr/share/ca-certificates/15259.pem /etc/ssl/certs/15259.pem"
	I0731 17:09:24.676800   32734 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/15259.pem
	I0731 17:09:24.680598   32734 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 31 16:54 /usr/share/ca-certificates/15259.pem
	I0731 17:09:24.680635   32734 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/15259.pem
	I0731 17:09:24.686287   32734 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/15259.pem /etc/ssl/certs/51391683.0"
	I0731 17:09:24.694849   32734 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/152592.pem && ln -fs /usr/share/ca-certificates/152592.pem /etc/ssl/certs/152592.pem"
	I0731 17:09:24.704231   32734 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/152592.pem
	I0731 17:09:24.708308   32734 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 31 16:54 /usr/share/ca-certificates/152592.pem
	I0731 17:09:24.708361   32734 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/152592.pem
	I0731 17:09:24.713520   32734 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/152592.pem /etc/ssl/certs/3ec20f2e.0"
	I0731 17:09:24.721797   32734 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0731 17:09:24.731260   32734 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0731 17:09:24.735179   32734 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 31 16:42 /usr/share/ca-certificates/minikubeCA.pem
	I0731 17:09:24.735222   32734 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0731 17:09:24.740327   32734 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
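	(Editor's note: the `openssl x509 -hash -noout` / `ln -fs` pairs above are the c_rehash-style step that publishes each CA under its subject-name hash in /etc/ssl/certs, e.g. b5213941.0 for minikubeCA.pem. A minimal standalone sketch of that step, assuming a hypothetical helper rather than minikube's own certs.go code:)

	```go
	// Hypothetical sketch of the c_rehash-style step logged above: a CA file is
	// linked into the trust directory under its openssl subject hash so that
	// openssl-based clients on the node can locate it during verification.
	package main

	import (
		"fmt"
		"os"
		"os/exec"
		"path/filepath"
		"strings"
	)

	// subjectHash shells out to `openssl x509 -hash -noout`, exactly as in the
	// log, and returns the 8-hex-digit subject-name hash of the certificate.
	func subjectHash(certPath string) (string, error) {
		out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
		if err != nil {
			return "", err
		}
		return strings.TrimSpace(string(out)), nil
	}

	func linkIntoTrustStore(certPath, trustDir string) error {
		hash, err := subjectHash(certPath)
		if err != nil {
			return err
		}
		link := filepath.Join(trustDir, hash+".0")
		_ = os.Remove(link) // mirror `ln -fs`: replace an existing link if present
		return os.Symlink(certPath, link)
	}

	func main() {
		if err := linkIntoTrustStore("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
			fmt.Fprintln(os.Stderr, err)
		}
	}
	```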
	I0731 17:09:24.748945   32734 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0731 17:09:24.753014   32734 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0731 17:09:24.758189   32734 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0731 17:09:24.764247   32734 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0731 17:09:24.769339   32734 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0731 17:09:24.774540   32734 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0731 17:09:24.779705   32734 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
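	(Editor's note: the `-checkend 86400` probes above ask openssl whether each control-plane certificate is still valid 86400 seconds, i.e. 24 hours, from now; only once they all pass does the log move on to StartCluster below. A minimal standalone sketch of that check, using a hypothetical helper name rather than minikube's own code:)

	```go
	// Hypothetical sketch of the expiry probe logged above. openssl exits 0 when
	// the certificate is still valid 86400 seconds (24h) from now, and non-zero
	// when it will expire within that window.
	package main

	import (
		"fmt"
		"os"
		"os/exec"
	)

	func expiresWithinADay(certPath string) (bool, error) {
		cmd := exec.Command("openssl", "x509", "-noout", "-in", certPath, "-checkend", "86400")
		if err := cmd.Run(); err != nil {
			if _, ok := err.(*exec.ExitError); ok {
				return true, nil // non-zero exit: certificate expires within 24h
			}
			return false, err // openssl missing, unreadable file, etc.
		}
		return false, nil // exit 0: certificate stays valid for at least 24h
	}

	func main() {
		for _, p := range os.Args[1:] {
			soon, err := expiresWithinADay(p)
			if err != nil {
				fmt.Fprintf(os.Stderr, "%s: %v\n", p, err)
				continue
			}
			fmt.Printf("%s: expires within 24h: %v\n", p, soon)
		}
	}
	```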
	I0731 17:09:24.784914   32734 kubeadm.go:392] StartCluster: {Name:ha-234651 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 Clust
erName:ha-234651 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.243 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.235 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.139 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.216 Port:0 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false fresh
pod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L Mount
GID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0731 17:09:24.785020   32734 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0731 17:09:24.785077   32734 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0731 17:09:24.820286   32734 cri.go:89] found id: "09c1fa932954d5765a611e206a5253d821dc0f2181c24e463a1ba6e7ed54a1a0"
	I0731 17:09:24.820304   32734 cri.go:89] found id: "0509b6238779790d1e64bfb61c9fc8ae3dc4fae67353192749c1b11636b82330"
	I0731 17:09:24.820308   32734 cri.go:89] found id: "8de99430711a8f55b548d2738f7eced3012b6d52c2c9faaae56b3e46319ac1e3"
	I0731 17:09:24.820311   32734 cri.go:89] found id: "d4dd0e0cc1ff97ea83c6b7a5f1f719d210801aec9cb468e32188c6c4096b4483"
	I0731 17:09:24.820314   32734 cri.go:89] found id: "5e4d66f773ff46f7a130d9359ac1316e1117516ebef8b32f9b20515706b3a2f8"
	I0731 17:09:24.820316   32734 cri.go:89] found id: "e8ef655791fe4b0b98612bab1bc9fea7e42c73e5327bdd0654676979cb2e8542"
	I0731 17:09:24.820319   32734 cri.go:89] found id: "dd9f6c4536535bb3eb07d6fa75fe49579d9cdee5ff6e4c440e0a58359d4f2f08"
	I0731 17:09:24.820321   32734 cri.go:89] found id: "631c8cee6152ae1808419f27d4b63fa0a6bcebb15c757dd96090c6267d63e6c2"
	I0731 17:09:24.820324   32734 cri.go:89] found id: "639ed1a246cfddb5bcce86bc66133eebc7339f13348608e46548a872b8df456e"
	I0731 17:09:24.820329   32734 cri.go:89] found id: "e5b3417940cd8bf8065dfefafb918da0e5bfda9fd5f7a2f9ccea0f1615151657"
	I0731 17:09:24.820333   32734 cri.go:89] found id: "b48ac56e48fe04efb8cbe0a1f24302f4ddd366a42a7ba2d4395113bf2157a3ea"
	I0731 17:09:24.820337   32734 cri.go:89] found id: "e3ae09638d5d5724f3e27352156c272f710b70f2c3319adc46eb518d357dc8e2"
	I0731 17:09:24.820343   32734 cri.go:89] found id: "ded6421f2f11d7f64ce80d1681e3d4c9d1453ff8aecc860e184ad0865768adde"
	I0731 17:09:24.820346   32734 cri.go:89] found id: ""
	I0731 17:09:24.820394   32734 ssh_runner.go:195] Run: sudo runc list -f json
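	(Editor's note: each "found id:" line above is one container ID parsed from the `crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system` run logged at 17:09:24.785. A rough local equivalent of that scan, shown as a hypothetical sketch that runs crictl directly instead of through minikube's ssh_runner:)

	```go
	// Hypothetical local sketch of the kube-system container scan logged above.
	// "--quiet" makes crictl print one container ID per line, "-a" includes
	// exited containers, and the label filter restricts the scan to kube-system.
	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func kubeSystemContainerIDs() ([]string, error) {
		out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet",
			"--label", "io.kubernetes.pod.namespace=kube-system").Output()
		if err != nil {
			return nil, err
		}
		var ids []string
		for _, line := range strings.Split(string(out), "\n") {
			if id := strings.TrimSpace(line); id != "" {
				ids = append(ids, id)
			}
		}
		return ids, nil
	}

	func main() {
		ids, err := kubeSystemContainerIDs()
		if err != nil {
			fmt.Println("crictl failed:", err)
			return
		}
		for _, id := range ids {
			fmt.Println("found id:", id)
		}
	}
	```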
	
	
	==> CRI-O <==
	Jul 31 17:12:00 ha-234651 crio[3611]: time="2024-07-31 17:12:00.246539005Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722445920246513168,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:145867,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=d63a5377-5dae-4d7e-bbe0-3416640cd178 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 31 17:12:00 ha-234651 crio[3611]: time="2024-07-31 17:12:00.247024291Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=fe6f51da-30fa-4825-8ab0-51ff072aa1c1 name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 17:12:00 ha-234651 crio[3611]: time="2024-07-31 17:12:00.247078556Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=fe6f51da-30fa-4825-8ab0-51ff072aa1c1 name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 17:12:00 ha-234651 crio[3611]: time="2024-07-31 17:12:00.247486425Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:e254329cb95e187e2c887d98abcd0e7789ef5b45371d528f1b02e554af8fa7d8,PodSandboxId:636290b7ff37db3c7b09bfcf1187777bdc384f8d48b91a63f5b87fb894604abb,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:5,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722445878885734851,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 87455537-bdb8-438b-8122-db85bed01d09,},Annotations:map[string]string{io.kubernetes.container.hash: f1005df0,io.kubernetes.container.restartCount: 5,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f674e9fa7967bc4f5ac1136de9ccd505e7fe0f1638629fda624b6de0e2587f08,PodSandboxId:dd5c3cc8dfa5b5df9f8ef10a50841655cb93beea4fbf4be59c4e3709a91a67e8,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722445811872570760,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-234651,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 399c842a2f5d3312c2f955f494ccfe00,},Annotations:map[string]string{io.kubernetes.container.hash: c9b9fb5b,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath
: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d3c496d013e95de87039db5d01f03c4419be22469ae9a37ce56d28e927b11599,PodSandboxId:fe52abfdd68d1b413668bc51f2e6f0a739aa55ede4d310878d640c81581d43fd,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722445808880587985,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-234651,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d9982d04d181fbc6333c44627c777728,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessa
gePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b33c5656b949f0abefff07127c4c158dae9bc422f9bfc155428533b75399e7b9,PodSandboxId:636290b7ff37db3c7b09bfcf1187777bdc384f8d48b91a63f5b87fb894604abb,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1722445791877819714,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 87455537-bdb8-438b-8122-db85bed01d09,},Annotations:map[string]string{io.kubernetes.container.hash: f1005df0,io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath: /dev/
termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5fb8c0770f47453fe0f2c6f7122df80e417aaafa2536b243fbfa363d7bb7f8f9,PodSandboxId:7b03581dae9766a752753718256518416c0d25615764fbf9757a9900acc1a591,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1722445778774855214,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-234651,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1ec63e59dba1cb738446a43a6fa45bd3,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container
.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aad4a823ee5ed456a27996750e3b981bb7867983b27a76041fde18d92f1aff17,PodSandboxId:2db80018f77dc188d1f4ac6d3682ebf3d2b34cb3f84d5555586e48a727d443cc,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722445768062018813,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-nsx9j,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b2cde006-dbb7-4e6f-a5f1-cf7760740104,},Annotations:map[string]string{io.kubernetes.container.hash: 9d1dd63f,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"na
me\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:11197df475b3001b3fd4ae5f027e3f3d3524805332a4f3556041ef5a33fba07d,PodSandboxId:6fc26849386a99727929b3f7e5f13ae61d2191960f680dc2e43f447c05053383,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722445767604809778,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-jfgs8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5ead85d8-0fd0-4900-8c02-2f23217ca208,},Annotations:map[string]string{io.kubern
etes.container.hash: 85f1c355,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:97b26172311050320f93611435daf7f831aac7a433159e06423c34df3f98184f,PodSandboxId:cf494ddbf7d6d9905f5a17873a3ca1a29aa2bd0465775fc735a7c40618618e96,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_RUNNING,CreatedAt:1722445767627236576,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-wfbt4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9eda8095-ce75-4043-8ddf-6e5663de8212,},Annotations:map[string]string{io.kubernetes.container.hash: f1243e66,io.k
ubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a243770babc3658aea247fc713c71fc93fa1ffeb129584a57f6edb33a122803e,PodSandboxId:6041bfee3d69efce4e7651c2704eda209c766de18ba9cbc9d375b709f37a8765,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722445767742379835,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-qbqb9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4f76f862-d39e-4976-90e6-fb9a25cc485a,},Annotations:map[string]string{io.kubernetes.container.hash: 9bcf3d4b,io.kubernetes.container.ports: [{\"
name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b3146e46f5c51a3c541231e2c641bbf4c5545f0a0fd2b57835029e0dc9ab29d5,PodSandboxId:dd5c3cc8dfa5b5df9f8ef10a50841655cb93beea4fbf4be59c4e3709a91a67e8,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_EXITED,CreatedAt:1722445767650964693,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-234651
,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 399c842a2f5d3312c2f955f494ccfe00,},Annotations:map[string]string{io.kubernetes.container.hash: c9b9fb5b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e3d0e33db247e8b10836bed4eff51af3c70dbab815033d0f99c72fab296601fe,PodSandboxId:fe52abfdd68d1b413668bc51f2e6f0a739aa55ede4d310878d640c81581d43fd,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_EXITED,CreatedAt:1722445767509992622,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-2
34651,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d9982d04d181fbc6333c44627c777728,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d677e670bb4261a20839c274a89255a77d159fe3e55246ea9089551953087862,PodSandboxId:26976089b091358d1c26e7a6dbd31734c34e960ba5f378698d657955d6735c3f,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722445767451944101,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-234651,io.kubernetes.pod.namespace: kube-system,io.k
ubernetes.pod.uid: 7f8584f0ef1731ec6b6fb11b7fa84aeb,},Annotations:map[string]string{io.kubernetes.container.hash: a3feaab1,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:99fe8054c8b333dba8f752ca3b7d5a59656d3429dcd52f299ba255df7a8f6e23,PodSandboxId:83eb828e8f77a4ab85e195c75183e35d71fb7f704a6e5392079d8fcda89ed43a,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722445767438737352,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-234651,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3d
efc3711c46905c8aec12eb318ecd3b,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5e4d66f773ff46f7a130d9359ac1316e1117516ebef8b32f9b20515706b3a2f8,PodSandboxId:754996ae28b01f43599725b23a8ef7ea035322c1da4ccbda6c379abd7e897fe0,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722445212877412151,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-qbqb9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4f76f862-d39e-4976-90e6-fb9a25cc485a,}
,Annotations:map[string]string{io.kubernetes.container.hash: 9bcf3d4b,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e8ef655791fe4b0b98612bab1bc9fea7e42c73e5327bdd0654676979cb2e8542,PodSandboxId:1616415c6b8f6463e22fcd4bc693a293f8fdee9b60f2f3d789bc645ace8300a6,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722445212817731273,Labels:map[string]st
ring{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-nsx9j,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b2cde006-dbb7-4e6f-a5f1-cf7760740104,},Annotations:map[string]string{io.kubernetes.container.hash: 9d1dd63f,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dd9f6c4536535bb3eb07d6fa75fe49579d9cdee5ff6e4c440e0a58359d4f2f08,PodSandboxId:99ba869aafb121f3032d38c0a926a34605210fa5e62304b252efb3ce0f139b7c,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa223193655711
3757665f6195b9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_EXITED,CreatedAt:1722445201111519856,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-wfbt4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9eda8095-ce75-4043-8ddf-6e5663de8212,},Annotations:map[string]string{io.kubernetes.container.hash: f1243e66,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:631c8cee6152ae1808419f27d4b63fa0a6bcebb15c757dd96090c6267d63e6c2,PodSandboxId:88fd41ca8aad113a54b218f3a36159bdb37464ee5f76ec8ae31f27cd31f7daa1,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[strin
g]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_EXITED,CreatedAt:1722445197367295822,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-jfgs8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5ead85d8-0fd0-4900-8c02-2f23217ca208,},Annotations:map[string]string{io.kubernetes.container.hash: 85f1c355,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e5b3417940cd8bf8065dfefafb918da0e5bfda9fd5f7a2f9ccea0f1615151657,PodSandboxId:8ccf20bcd63f37163f71bce3d8a364eb92d86b0694bc75949c8979e5633c5cc2,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHa
ndler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1722445176479177868,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-234651,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7f8584f0ef1731ec6b6fb11b7fa84aeb,},Annotations:map[string]string{io.kubernetes.container.hash: a3feaab1,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ded6421f2f11d7f64ce80d1681e3d4c9d1453ff8aecc860e184ad0865768adde,PodSandboxId:90ca479a21b17fc36c4328bfd82fe7023d144a893b3a1fbc315c66af59696ea2,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a08
58c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_EXITED,CreatedAt:1722445176364856785,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-234651,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3defc3711c46905c8aec12eb318ecd3b,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=fe6f51da-30fa-4825-8ab0-51ff072aa1c1 name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 17:12:00 ha-234651 crio[3611]: time="2024-07-31 17:12:00.288265263Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=28e58c79-43bc-47a1-b717-1b498fda5f5e name=/runtime.v1.RuntimeService/Version
	Jul 31 17:12:00 ha-234651 crio[3611]: time="2024-07-31 17:12:00.288374611Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=28e58c79-43bc-47a1-b717-1b498fda5f5e name=/runtime.v1.RuntimeService/Version
	Jul 31 17:12:00 ha-234651 crio[3611]: time="2024-07-31 17:12:00.289301787Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=8a3984ac-5d21-44bd-9a60-bc76114ba09c name=/runtime.v1.ImageService/ImageFsInfo
	Jul 31 17:12:00 ha-234651 crio[3611]: time="2024-07-31 17:12:00.289873531Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722445920289845629,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:145867,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=8a3984ac-5d21-44bd-9a60-bc76114ba09c name=/runtime.v1.ImageService/ImageFsInfo
	Jul 31 17:12:00 ha-234651 crio[3611]: time="2024-07-31 17:12:00.290549178Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=27b4e479-59d6-4099-8bba-51ab11398b2c name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 17:12:00 ha-234651 crio[3611]: time="2024-07-31 17:12:00.290616581Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=27b4e479-59d6-4099-8bba-51ab11398b2c name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 17:12:00 ha-234651 crio[3611]: time="2024-07-31 17:12:00.291135412Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:e254329cb95e187e2c887d98abcd0e7789ef5b45371d528f1b02e554af8fa7d8,PodSandboxId:636290b7ff37db3c7b09bfcf1187777bdc384f8d48b91a63f5b87fb894604abb,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:5,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722445878885734851,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 87455537-bdb8-438b-8122-db85bed01d09,},Annotations:map[string]string{io.kubernetes.container.hash: f1005df0,io.kubernetes.container.restartCount: 5,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f674e9fa7967bc4f5ac1136de9ccd505e7fe0f1638629fda624b6de0e2587f08,PodSandboxId:dd5c3cc8dfa5b5df9f8ef10a50841655cb93beea4fbf4be59c4e3709a91a67e8,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722445811872570760,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-234651,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 399c842a2f5d3312c2f955f494ccfe00,},Annotations:map[string]string{io.kubernetes.container.hash: c9b9fb5b,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath
: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d3c496d013e95de87039db5d01f03c4419be22469ae9a37ce56d28e927b11599,PodSandboxId:fe52abfdd68d1b413668bc51f2e6f0a739aa55ede4d310878d640c81581d43fd,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722445808880587985,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-234651,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d9982d04d181fbc6333c44627c777728,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessa
gePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b33c5656b949f0abefff07127c4c158dae9bc422f9bfc155428533b75399e7b9,PodSandboxId:636290b7ff37db3c7b09bfcf1187777bdc384f8d48b91a63f5b87fb894604abb,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1722445791877819714,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 87455537-bdb8-438b-8122-db85bed01d09,},Annotations:map[string]string{io.kubernetes.container.hash: f1005df0,io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath: /dev/
termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5fb8c0770f47453fe0f2c6f7122df80e417aaafa2536b243fbfa363d7bb7f8f9,PodSandboxId:7b03581dae9766a752753718256518416c0d25615764fbf9757a9900acc1a591,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1722445778774855214,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-234651,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1ec63e59dba1cb738446a43a6fa45bd3,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container
.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aad4a823ee5ed456a27996750e3b981bb7867983b27a76041fde18d92f1aff17,PodSandboxId:2db80018f77dc188d1f4ac6d3682ebf3d2b34cb3f84d5555586e48a727d443cc,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722445768062018813,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-nsx9j,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b2cde006-dbb7-4e6f-a5f1-cf7760740104,},Annotations:map[string]string{io.kubernetes.container.hash: 9d1dd63f,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"na
me\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:11197df475b3001b3fd4ae5f027e3f3d3524805332a4f3556041ef5a33fba07d,PodSandboxId:6fc26849386a99727929b3f7e5f13ae61d2191960f680dc2e43f447c05053383,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722445767604809778,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-jfgs8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5ead85d8-0fd0-4900-8c02-2f23217ca208,},Annotations:map[string]string{io.kubern
etes.container.hash: 85f1c355,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:97b26172311050320f93611435daf7f831aac7a433159e06423c34df3f98184f,PodSandboxId:cf494ddbf7d6d9905f5a17873a3ca1a29aa2bd0465775fc735a7c40618618e96,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_RUNNING,CreatedAt:1722445767627236576,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-wfbt4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9eda8095-ce75-4043-8ddf-6e5663de8212,},Annotations:map[string]string{io.kubernetes.container.hash: f1243e66,io.k
ubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a243770babc3658aea247fc713c71fc93fa1ffeb129584a57f6edb33a122803e,PodSandboxId:6041bfee3d69efce4e7651c2704eda209c766de18ba9cbc9d375b709f37a8765,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722445767742379835,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-qbqb9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4f76f862-d39e-4976-90e6-fb9a25cc485a,},Annotations:map[string]string{io.kubernetes.container.hash: 9bcf3d4b,io.kubernetes.container.ports: [{\"
name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b3146e46f5c51a3c541231e2c641bbf4c5545f0a0fd2b57835029e0dc9ab29d5,PodSandboxId:dd5c3cc8dfa5b5df9f8ef10a50841655cb93beea4fbf4be59c4e3709a91a67e8,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_EXITED,CreatedAt:1722445767650964693,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-234651
,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 399c842a2f5d3312c2f955f494ccfe00,},Annotations:map[string]string{io.kubernetes.container.hash: c9b9fb5b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e3d0e33db247e8b10836bed4eff51af3c70dbab815033d0f99c72fab296601fe,PodSandboxId:fe52abfdd68d1b413668bc51f2e6f0a739aa55ede4d310878d640c81581d43fd,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_EXITED,CreatedAt:1722445767509992622,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-2
34651,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d9982d04d181fbc6333c44627c777728,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d677e670bb4261a20839c274a89255a77d159fe3e55246ea9089551953087862,PodSandboxId:26976089b091358d1c26e7a6dbd31734c34e960ba5f378698d657955d6735c3f,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722445767451944101,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-234651,io.kubernetes.pod.namespace: kube-system,io.k
ubernetes.pod.uid: 7f8584f0ef1731ec6b6fb11b7fa84aeb,},Annotations:map[string]string{io.kubernetes.container.hash: a3feaab1,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:99fe8054c8b333dba8f752ca3b7d5a59656d3429dcd52f299ba255df7a8f6e23,PodSandboxId:83eb828e8f77a4ab85e195c75183e35d71fb7f704a6e5392079d8fcda89ed43a,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722445767438737352,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-234651,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3d
efc3711c46905c8aec12eb318ecd3b,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5e4d66f773ff46f7a130d9359ac1316e1117516ebef8b32f9b20515706b3a2f8,PodSandboxId:754996ae28b01f43599725b23a8ef7ea035322c1da4ccbda6c379abd7e897fe0,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722445212877412151,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-qbqb9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4f76f862-d39e-4976-90e6-fb9a25cc485a,}
,Annotations:map[string]string{io.kubernetes.container.hash: 9bcf3d4b,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e8ef655791fe4b0b98612bab1bc9fea7e42c73e5327bdd0654676979cb2e8542,PodSandboxId:1616415c6b8f6463e22fcd4bc693a293f8fdee9b60f2f3d789bc645ace8300a6,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722445212817731273,Labels:map[string]st
ring{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-nsx9j,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b2cde006-dbb7-4e6f-a5f1-cf7760740104,},Annotations:map[string]string{io.kubernetes.container.hash: 9d1dd63f,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dd9f6c4536535bb3eb07d6fa75fe49579d9cdee5ff6e4c440e0a58359d4f2f08,PodSandboxId:99ba869aafb121f3032d38c0a926a34605210fa5e62304b252efb3ce0f139b7c,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa223193655711
3757665f6195b9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_EXITED,CreatedAt:1722445201111519856,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-wfbt4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9eda8095-ce75-4043-8ddf-6e5663de8212,},Annotations:map[string]string{io.kubernetes.container.hash: f1243e66,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:631c8cee6152ae1808419f27d4b63fa0a6bcebb15c757dd96090c6267d63e6c2,PodSandboxId:88fd41ca8aad113a54b218f3a36159bdb37464ee5f76ec8ae31f27cd31f7daa1,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[strin
g]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_EXITED,CreatedAt:1722445197367295822,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-jfgs8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5ead85d8-0fd0-4900-8c02-2f23217ca208,},Annotations:map[string]string{io.kubernetes.container.hash: 85f1c355,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e5b3417940cd8bf8065dfefafb918da0e5bfda9fd5f7a2f9ccea0f1615151657,PodSandboxId:8ccf20bcd63f37163f71bce3d8a364eb92d86b0694bc75949c8979e5633c5cc2,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHa
ndler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1722445176479177868,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-234651,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7f8584f0ef1731ec6b6fb11b7fa84aeb,},Annotations:map[string]string{io.kubernetes.container.hash: a3feaab1,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ded6421f2f11d7f64ce80d1681e3d4c9d1453ff8aecc860e184ad0865768adde,PodSandboxId:90ca479a21b17fc36c4328bfd82fe7023d144a893b3a1fbc315c66af59696ea2,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a08
58c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_EXITED,CreatedAt:1722445176364856785,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-234651,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3defc3711c46905c8aec12eb318ecd3b,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=27b4e479-59d6-4099-8bba-51ab11398b2c name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 17:12:00 ha-234651 crio[3611]: time="2024-07-31 17:12:00.332245112Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=6d363fcb-c48e-4f45-83c7-98d96e387fc5 name=/runtime.v1.RuntimeService/Version
	Jul 31 17:12:00 ha-234651 crio[3611]: time="2024-07-31 17:12:00.332320593Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=6d363fcb-c48e-4f45-83c7-98d96e387fc5 name=/runtime.v1.RuntimeService/Version
	Jul 31 17:12:00 ha-234651 crio[3611]: time="2024-07-31 17:12:00.333444941Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=74db179e-6a94-49f5-969f-50dc24a12608 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 31 17:12:00 ha-234651 crio[3611]: time="2024-07-31 17:12:00.333848560Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722445920333827684,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:145867,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=74db179e-6a94-49f5-969f-50dc24a12608 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 31 17:12:00 ha-234651 crio[3611]: time="2024-07-31 17:12:00.334486176Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=227dbf37-8f53-4c75-b12a-9ce7769b77a4 name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 17:12:00 ha-234651 crio[3611]: time="2024-07-31 17:12:00.334538537Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=227dbf37-8f53-4c75-b12a-9ce7769b77a4 name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 17:12:00 ha-234651 crio[3611]: time="2024-07-31 17:12:00.338320271Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:e254329cb95e187e2c887d98abcd0e7789ef5b45371d528f1b02e554af8fa7d8,PodSandboxId:636290b7ff37db3c7b09bfcf1187777bdc384f8d48b91a63f5b87fb894604abb,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:5,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722445878885734851,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 87455537-bdb8-438b-8122-db85bed01d09,},Annotations:map[string]string{io.kubernetes.container.hash: f1005df0,io.kubernetes.container.restartCount: 5,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f674e9fa7967bc4f5ac1136de9ccd505e7fe0f1638629fda624b6de0e2587f08,PodSandboxId:dd5c3cc8dfa5b5df9f8ef10a50841655cb93beea4fbf4be59c4e3709a91a67e8,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722445811872570760,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-234651,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 399c842a2f5d3312c2f955f494ccfe00,},Annotations:map[string]string{io.kubernetes.container.hash: c9b9fb5b,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath
: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d3c496d013e95de87039db5d01f03c4419be22469ae9a37ce56d28e927b11599,PodSandboxId:fe52abfdd68d1b413668bc51f2e6f0a739aa55ede4d310878d640c81581d43fd,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722445808880587985,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-234651,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d9982d04d181fbc6333c44627c777728,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessa
gePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b33c5656b949f0abefff07127c4c158dae9bc422f9bfc155428533b75399e7b9,PodSandboxId:636290b7ff37db3c7b09bfcf1187777bdc384f8d48b91a63f5b87fb894604abb,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1722445791877819714,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 87455537-bdb8-438b-8122-db85bed01d09,},Annotations:map[string]string{io.kubernetes.container.hash: f1005df0,io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath: /dev/
termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5fb8c0770f47453fe0f2c6f7122df80e417aaafa2536b243fbfa363d7bb7f8f9,PodSandboxId:7b03581dae9766a752753718256518416c0d25615764fbf9757a9900acc1a591,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1722445778774855214,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-234651,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1ec63e59dba1cb738446a43a6fa45bd3,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container
.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aad4a823ee5ed456a27996750e3b981bb7867983b27a76041fde18d92f1aff17,PodSandboxId:2db80018f77dc188d1f4ac6d3682ebf3d2b34cb3f84d5555586e48a727d443cc,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722445768062018813,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-nsx9j,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b2cde006-dbb7-4e6f-a5f1-cf7760740104,},Annotations:map[string]string{io.kubernetes.container.hash: 9d1dd63f,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"na
me\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:11197df475b3001b3fd4ae5f027e3f3d3524805332a4f3556041ef5a33fba07d,PodSandboxId:6fc26849386a99727929b3f7e5f13ae61d2191960f680dc2e43f447c05053383,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722445767604809778,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-jfgs8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5ead85d8-0fd0-4900-8c02-2f23217ca208,},Annotations:map[string]string{io.kubern
etes.container.hash: 85f1c355,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:97b26172311050320f93611435daf7f831aac7a433159e06423c34df3f98184f,PodSandboxId:cf494ddbf7d6d9905f5a17873a3ca1a29aa2bd0465775fc735a7c40618618e96,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_RUNNING,CreatedAt:1722445767627236576,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-wfbt4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9eda8095-ce75-4043-8ddf-6e5663de8212,},Annotations:map[string]string{io.kubernetes.container.hash: f1243e66,io.k
ubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a243770babc3658aea247fc713c71fc93fa1ffeb129584a57f6edb33a122803e,PodSandboxId:6041bfee3d69efce4e7651c2704eda209c766de18ba9cbc9d375b709f37a8765,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722445767742379835,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-qbqb9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4f76f862-d39e-4976-90e6-fb9a25cc485a,},Annotations:map[string]string{io.kubernetes.container.hash: 9bcf3d4b,io.kubernetes.container.ports: [{\"
name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b3146e46f5c51a3c541231e2c641bbf4c5545f0a0fd2b57835029e0dc9ab29d5,PodSandboxId:dd5c3cc8dfa5b5df9f8ef10a50841655cb93beea4fbf4be59c4e3709a91a67e8,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_EXITED,CreatedAt:1722445767650964693,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-234651
,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 399c842a2f5d3312c2f955f494ccfe00,},Annotations:map[string]string{io.kubernetes.container.hash: c9b9fb5b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e3d0e33db247e8b10836bed4eff51af3c70dbab815033d0f99c72fab296601fe,PodSandboxId:fe52abfdd68d1b413668bc51f2e6f0a739aa55ede4d310878d640c81581d43fd,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_EXITED,CreatedAt:1722445767509992622,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-2
34651,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d9982d04d181fbc6333c44627c777728,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d677e670bb4261a20839c274a89255a77d159fe3e55246ea9089551953087862,PodSandboxId:26976089b091358d1c26e7a6dbd31734c34e960ba5f378698d657955d6735c3f,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722445767451944101,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-234651,io.kubernetes.pod.namespace: kube-system,io.k
ubernetes.pod.uid: 7f8584f0ef1731ec6b6fb11b7fa84aeb,},Annotations:map[string]string{io.kubernetes.container.hash: a3feaab1,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:99fe8054c8b333dba8f752ca3b7d5a59656d3429dcd52f299ba255df7a8f6e23,PodSandboxId:83eb828e8f77a4ab85e195c75183e35d71fb7f704a6e5392079d8fcda89ed43a,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722445767438737352,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-234651,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3d
efc3711c46905c8aec12eb318ecd3b,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5e4d66f773ff46f7a130d9359ac1316e1117516ebef8b32f9b20515706b3a2f8,PodSandboxId:754996ae28b01f43599725b23a8ef7ea035322c1da4ccbda6c379abd7e897fe0,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722445212877412151,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-qbqb9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4f76f862-d39e-4976-90e6-fb9a25cc485a,}
,Annotations:map[string]string{io.kubernetes.container.hash: 9bcf3d4b,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e8ef655791fe4b0b98612bab1bc9fea7e42c73e5327bdd0654676979cb2e8542,PodSandboxId:1616415c6b8f6463e22fcd4bc693a293f8fdee9b60f2f3d789bc645ace8300a6,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722445212817731273,Labels:map[string]st
ring{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-nsx9j,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b2cde006-dbb7-4e6f-a5f1-cf7760740104,},Annotations:map[string]string{io.kubernetes.container.hash: 9d1dd63f,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dd9f6c4536535bb3eb07d6fa75fe49579d9cdee5ff6e4c440e0a58359d4f2f08,PodSandboxId:99ba869aafb121f3032d38c0a926a34605210fa5e62304b252efb3ce0f139b7c,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa223193655711
3757665f6195b9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_EXITED,CreatedAt:1722445201111519856,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-wfbt4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9eda8095-ce75-4043-8ddf-6e5663de8212,},Annotations:map[string]string{io.kubernetes.container.hash: f1243e66,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:631c8cee6152ae1808419f27d4b63fa0a6bcebb15c757dd96090c6267d63e6c2,PodSandboxId:88fd41ca8aad113a54b218f3a36159bdb37464ee5f76ec8ae31f27cd31f7daa1,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[strin
g]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_EXITED,CreatedAt:1722445197367295822,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-jfgs8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5ead85d8-0fd0-4900-8c02-2f23217ca208,},Annotations:map[string]string{io.kubernetes.container.hash: 85f1c355,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e5b3417940cd8bf8065dfefafb918da0e5bfda9fd5f7a2f9ccea0f1615151657,PodSandboxId:8ccf20bcd63f37163f71bce3d8a364eb92d86b0694bc75949c8979e5633c5cc2,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHa
ndler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1722445176479177868,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-234651,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7f8584f0ef1731ec6b6fb11b7fa84aeb,},Annotations:map[string]string{io.kubernetes.container.hash: a3feaab1,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ded6421f2f11d7f64ce80d1681e3d4c9d1453ff8aecc860e184ad0865768adde,PodSandboxId:90ca479a21b17fc36c4328bfd82fe7023d144a893b3a1fbc315c66af59696ea2,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a08
58c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_EXITED,CreatedAt:1722445176364856785,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-234651,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3defc3711c46905c8aec12eb318ecd3b,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=227dbf37-8f53-4c75-b12a-9ce7769b77a4 name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 17:12:00 ha-234651 crio[3611]: time="2024-07-31 17:12:00.385637284Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=6c8dae4c-f429-4992-9e31-5f78d8558d7c name=/runtime.v1.RuntimeService/Version
	Jul 31 17:12:00 ha-234651 crio[3611]: time="2024-07-31 17:12:00.385716098Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=6c8dae4c-f429-4992-9e31-5f78d8558d7c name=/runtime.v1.RuntimeService/Version
	Jul 31 17:12:00 ha-234651 crio[3611]: time="2024-07-31 17:12:00.387410008Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=463ea159-cced-411b-9e8f-d29fdb513361 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 31 17:12:00 ha-234651 crio[3611]: time="2024-07-31 17:12:00.387854913Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722445920387822102,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:145867,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=463ea159-cced-411b-9e8f-d29fdb513361 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 31 17:12:00 ha-234651 crio[3611]: time="2024-07-31 17:12:00.388447621Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=5e2bfbde-fd25-460f-9190-eadd55f68ccf name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 17:12:00 ha-234651 crio[3611]: time="2024-07-31 17:12:00.388503168Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=5e2bfbde-fd25-460f-9190-eadd55f68ccf name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 17:12:00 ha-234651 crio[3611]: time="2024-07-31 17:12:00.389059200Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:e254329cb95e187e2c887d98abcd0e7789ef5b45371d528f1b02e554af8fa7d8,PodSandboxId:636290b7ff37db3c7b09bfcf1187777bdc384f8d48b91a63f5b87fb894604abb,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:5,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722445878885734851,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 87455537-bdb8-438b-8122-db85bed01d09,},Annotations:map[string]string{io.kubernetes.container.hash: f1005df0,io.kubernetes.container.restartCount: 5,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f674e9fa7967bc4f5ac1136de9ccd505e7fe0f1638629fda624b6de0e2587f08,PodSandboxId:dd5c3cc8dfa5b5df9f8ef10a50841655cb93beea4fbf4be59c4e3709a91a67e8,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722445811872570760,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-234651,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 399c842a2f5d3312c2f955f494ccfe00,},Annotations:map[string]string{io.kubernetes.container.hash: c9b9fb5b,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath
: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d3c496d013e95de87039db5d01f03c4419be22469ae9a37ce56d28e927b11599,PodSandboxId:fe52abfdd68d1b413668bc51f2e6f0a739aa55ede4d310878d640c81581d43fd,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722445808880587985,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-234651,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d9982d04d181fbc6333c44627c777728,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessa
gePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b33c5656b949f0abefff07127c4c158dae9bc422f9bfc155428533b75399e7b9,PodSandboxId:636290b7ff37db3c7b09bfcf1187777bdc384f8d48b91a63f5b87fb894604abb,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1722445791877819714,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 87455537-bdb8-438b-8122-db85bed01d09,},Annotations:map[string]string{io.kubernetes.container.hash: f1005df0,io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath: /dev/
termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5fb8c0770f47453fe0f2c6f7122df80e417aaafa2536b243fbfa363d7bb7f8f9,PodSandboxId:7b03581dae9766a752753718256518416c0d25615764fbf9757a9900acc1a591,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1722445778774855214,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-234651,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1ec63e59dba1cb738446a43a6fa45bd3,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container
.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aad4a823ee5ed456a27996750e3b981bb7867983b27a76041fde18d92f1aff17,PodSandboxId:2db80018f77dc188d1f4ac6d3682ebf3d2b34cb3f84d5555586e48a727d443cc,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722445768062018813,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-nsx9j,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b2cde006-dbb7-4e6f-a5f1-cf7760740104,},Annotations:map[string]string{io.kubernetes.container.hash: 9d1dd63f,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"na
me\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:11197df475b3001b3fd4ae5f027e3f3d3524805332a4f3556041ef5a33fba07d,PodSandboxId:6fc26849386a99727929b3f7e5f13ae61d2191960f680dc2e43f447c05053383,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722445767604809778,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-jfgs8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5ead85d8-0fd0-4900-8c02-2f23217ca208,},Annotations:map[string]string{io.kubern
etes.container.hash: 85f1c355,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:97b26172311050320f93611435daf7f831aac7a433159e06423c34df3f98184f,PodSandboxId:cf494ddbf7d6d9905f5a17873a3ca1a29aa2bd0465775fc735a7c40618618e96,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_RUNNING,CreatedAt:1722445767627236576,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-wfbt4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9eda8095-ce75-4043-8ddf-6e5663de8212,},Annotations:map[string]string{io.kubernetes.container.hash: f1243e66,io.k
ubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a243770babc3658aea247fc713c71fc93fa1ffeb129584a57f6edb33a122803e,PodSandboxId:6041bfee3d69efce4e7651c2704eda209c766de18ba9cbc9d375b709f37a8765,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722445767742379835,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-qbqb9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4f76f862-d39e-4976-90e6-fb9a25cc485a,},Annotations:map[string]string{io.kubernetes.container.hash: 9bcf3d4b,io.kubernetes.container.ports: [{\"
name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b3146e46f5c51a3c541231e2c641bbf4c5545f0a0fd2b57835029e0dc9ab29d5,PodSandboxId:dd5c3cc8dfa5b5df9f8ef10a50841655cb93beea4fbf4be59c4e3709a91a67e8,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_EXITED,CreatedAt:1722445767650964693,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-234651
,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 399c842a2f5d3312c2f955f494ccfe00,},Annotations:map[string]string{io.kubernetes.container.hash: c9b9fb5b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e3d0e33db247e8b10836bed4eff51af3c70dbab815033d0f99c72fab296601fe,PodSandboxId:fe52abfdd68d1b413668bc51f2e6f0a739aa55ede4d310878d640c81581d43fd,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_EXITED,CreatedAt:1722445767509992622,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-2
34651,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d9982d04d181fbc6333c44627c777728,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d677e670bb4261a20839c274a89255a77d159fe3e55246ea9089551953087862,PodSandboxId:26976089b091358d1c26e7a6dbd31734c34e960ba5f378698d657955d6735c3f,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722445767451944101,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-234651,io.kubernetes.pod.namespace: kube-system,io.k
ubernetes.pod.uid: 7f8584f0ef1731ec6b6fb11b7fa84aeb,},Annotations:map[string]string{io.kubernetes.container.hash: a3feaab1,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:99fe8054c8b333dba8f752ca3b7d5a59656d3429dcd52f299ba255df7a8f6e23,PodSandboxId:83eb828e8f77a4ab85e195c75183e35d71fb7f704a6e5392079d8fcda89ed43a,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722445767438737352,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-234651,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3d
efc3711c46905c8aec12eb318ecd3b,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5e4d66f773ff46f7a130d9359ac1316e1117516ebef8b32f9b20515706b3a2f8,PodSandboxId:754996ae28b01f43599725b23a8ef7ea035322c1da4ccbda6c379abd7e897fe0,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722445212877412151,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-qbqb9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4f76f862-d39e-4976-90e6-fb9a25cc485a,}
,Annotations:map[string]string{io.kubernetes.container.hash: 9bcf3d4b,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e8ef655791fe4b0b98612bab1bc9fea7e42c73e5327bdd0654676979cb2e8542,PodSandboxId:1616415c6b8f6463e22fcd4bc693a293f8fdee9b60f2f3d789bc645ace8300a6,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722445212817731273,Labels:map[string]st
ring{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-nsx9j,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b2cde006-dbb7-4e6f-a5f1-cf7760740104,},Annotations:map[string]string{io.kubernetes.container.hash: 9d1dd63f,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dd9f6c4536535bb3eb07d6fa75fe49579d9cdee5ff6e4c440e0a58359d4f2f08,PodSandboxId:99ba869aafb121f3032d38c0a926a34605210fa5e62304b252efb3ce0f139b7c,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa223193655711
3757665f6195b9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_EXITED,CreatedAt:1722445201111519856,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-wfbt4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9eda8095-ce75-4043-8ddf-6e5663de8212,},Annotations:map[string]string{io.kubernetes.container.hash: f1243e66,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:631c8cee6152ae1808419f27d4b63fa0a6bcebb15c757dd96090c6267d63e6c2,PodSandboxId:88fd41ca8aad113a54b218f3a36159bdb37464ee5f76ec8ae31f27cd31f7daa1,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[strin
g]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_EXITED,CreatedAt:1722445197367295822,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-jfgs8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5ead85d8-0fd0-4900-8c02-2f23217ca208,},Annotations:map[string]string{io.kubernetes.container.hash: 85f1c355,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e5b3417940cd8bf8065dfefafb918da0e5bfda9fd5f7a2f9ccea0f1615151657,PodSandboxId:8ccf20bcd63f37163f71bce3d8a364eb92d86b0694bc75949c8979e5633c5cc2,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHa
ndler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1722445176479177868,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-234651,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7f8584f0ef1731ec6b6fb11b7fa84aeb,},Annotations:map[string]string{io.kubernetes.container.hash: a3feaab1,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ded6421f2f11d7f64ce80d1681e3d4c9d1453ff8aecc860e184ad0865768adde,PodSandboxId:90ca479a21b17fc36c4328bfd82fe7023d144a893b3a1fbc315c66af59696ea2,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a08
58c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_EXITED,CreatedAt:1722445176364856785,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-234651,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3defc3711c46905c8aec12eb318ecd3b,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=5e2bfbde-fd25-460f-9190-eadd55f68ccf name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	e254329cb95e1       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                     41 seconds ago       Running             storage-provisioner       5                   636290b7ff37d       storage-provisioner
	f674e9fa7967b       1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d                                     About a minute ago   Running             kube-apiserver            3                   dd5c3cc8dfa5b       kube-apiserver-ha-234651
	d3c496d013e95       76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e                                     About a minute ago   Running             kube-controller-manager   2                   fe52abfdd68d1       kube-controller-manager-ha-234651
	b33c5656b949f       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                     2 minutes ago        Exited              storage-provisioner       4                   636290b7ff37d       storage-provisioner
	5fb8c0770f474       38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12                                     2 minutes ago        Running             kube-vip                  0                   7b03581dae976       kube-vip-ha-234651
	aad4a823ee5ed       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                     2 minutes ago        Running             coredns                   1                   2db80018f77dc       coredns-7db6d8ff4d-nsx9j
	a243770babc36       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                     2 minutes ago        Running             coredns                   1                   6041bfee3d69e       coredns-7db6d8ff4d-qbqb9
	b3146e46f5c51       1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d                                     2 minutes ago        Exited              kube-apiserver            2                   dd5c3cc8dfa5b       kube-apiserver-ha-234651
	97b2617231105       6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46                                     2 minutes ago        Running             kindnet-cni               1                   cf494ddbf7d6d       kindnet-wfbt4
	11197df475b30       55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1                                     2 minutes ago        Running             kube-proxy                1                   6fc26849386a9       kube-proxy-jfgs8
	e3d0e33db247e       76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e                                     2 minutes ago        Exited              kube-controller-manager   1                   fe52abfdd68d1       kube-controller-manager-ha-234651
	d677e670bb426       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                     2 minutes ago        Running             etcd                      1                   26976089b0913       etcd-ha-234651
	99fe8054c8b33       3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2                                     2 minutes ago        Running             kube-scheduler            1                   83eb828e8f77a       kube-scheduler-ha-234651
	5e4d66f773ff4       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                     11 minutes ago       Exited              coredns                   0                   754996ae28b01       coredns-7db6d8ff4d-qbqb9
	e8ef655791fe4       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                     11 minutes ago       Exited              coredns                   0                   1616415c6b8f6       coredns-7db6d8ff4d-nsx9j
	dd9f6c4536535       docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9   11 minutes ago       Exited              kindnet-cni               0                   99ba869aafb12       kindnet-wfbt4
	631c8cee6152a       55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1                                     12 minutes ago       Exited              kube-proxy                0                   88fd41ca8aad1       kube-proxy-jfgs8
	e5b3417940cd8       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                     12 minutes ago       Exited              etcd                      0                   8ccf20bcd63f3       etcd-ha-234651
	ded6421f2f11d       3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2                                     12 minutes ago       Exited              kube-scheduler            0                   90ca479a21b17       kube-scheduler-ha-234651
	
	
	==> coredns [5e4d66f773ff46f7a130d9359ac1316e1117516ebef8b32f9b20515706b3a2f8] <==
	[INFO] 10.244.1.3:46716 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000125267s
	[INFO] 10.244.2.2:44128 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00011516s
	[INFO] 10.244.2.2:51451 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000094315s
	[INFO] 10.244.2.2:36147 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.0001399s
	[INFO] 10.244.2.2:36545 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001276628s
	[INFO] 10.244.1.2:43917 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000113879s
	[INFO] 10.244.1.2:52270 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.00173961s
	[INFO] 10.244.1.2:43272 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000090127s
	[INFO] 10.244.1.2:40969 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001253454s
	[INFO] 10.244.1.2:36005 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000101429s
	[INFO] 10.244.1.3:57882 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000155324s
	[INFO] 10.244.1.3:52921 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000104436s
	[INFO] 10.244.1.3:53848 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000118293s
	[INFO] 10.244.1.2:59324 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000114877s
	[INFO] 10.244.1.2:35559 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000080871s
	[INFO] 10.244.1.3:36523 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000149158s
	[INFO] 10.244.1.3:43713 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000113949s
	[INFO] 10.244.2.2:57100 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000104476s
	[INFO] 10.244.2.2:36343 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000075949s
	[INFO] 10.244.1.2:36593 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000110887s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: the server has asked for the client to provide credentials (get services) - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=19, ErrCode=NO_ERROR, debug=""
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: the server has asked for the client to provide credentials (get endpointslices.discovery.k8s.io) - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=19, ErrCode=NO_ERROR, debug=""
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: the server has asked for the client to provide credentials (get namespaces) - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=19, ErrCode=NO_ERROR, debug=""
	
	
	==> coredns [a243770babc3658aea247fc713c71fc93fa1ffeb129584a57f6edb33a122803e] <==
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.4:59116->10.96.0.1:443: read: connection reset by peer
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.4:59116->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> coredns [aad4a823ee5ed456a27996750e3b981bb7867983b27a76041fde18d92f1aff17] <==
	[INFO] plugin/kubernetes: Trace[985434580]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (31-Jul-2024 17:09:41.343) (total time: 10566ms):
	Trace[985434580]: ---"Objects listed" error:Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.5:33240->10.96.0.1:443: read: connection reset by peer 10566ms (17:09:51.910)
	Trace[985434580]: [10.566644285s] [10.566644285s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.5:33240->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.5:33242->10.96.0.1:443: read: connection reset by peer
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.5:33242->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
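
	The reflector failures above are CoreDNS failing to reach the in-cluster apiserver Service at 10.96.0.1:443 while the control plane restarts; the "Still waiting on: kubernetes" readiness messages stop once that path works again. A minimal sketch for checking the same path by hand (the kubectl context name "ha-234651" is an assumption, taken from the node names below rather than from this report's config):

	  # assumed context name; standard kubectl commands
	  kubectl --context ha-234651 -n default get svc kubernetes -o wide
	  kubectl --context ha-234651 -n default get endpointslices -l kubernetes.io/service-name=kubernetes
	  kubectl --context ha-234651 -n kube-system logs -l k8s-app=kube-dns --tail=20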
	
	
	==> coredns [e8ef655791fe4b0b98612bab1bc9fea7e42c73e5327bdd0654676979cb2e8542] <==
	[INFO] 10.244.1.3:35968 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.003019868s
	[INFO] 10.244.1.3:50760 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000096087s
	[INFO] 10.244.2.2:47184 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001873446s
	[INFO] 10.244.2.2:52684 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000141574s
	[INFO] 10.244.2.2:55915 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000097985s
	[INFO] 10.244.2.2:37641 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000064285s
	[INFO] 10.244.1.2:44538 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000098479s
	[INFO] 10.244.1.2:51050 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000063987s
	[INFO] 10.244.1.2:53102 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000117625s
	[INFO] 10.244.1.3:34472 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000093028s
	[INFO] 10.244.2.2:50493 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000198464s
	[INFO] 10.244.2.2:59387 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000091819s
	[INFO] 10.244.2.2:46587 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000140652s
	[INFO] 10.244.2.2:44332 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000062045s
	[INFO] 10.244.1.2:56100 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000129501s
	[INFO] 10.244.1.2:52904 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000075504s
	[INFO] 10.244.1.3:45513 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000201365s
	[INFO] 10.244.1.3:56964 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000220702s
	[INFO] 10.244.2.2:52612 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000221354s
	[INFO] 10.244.2.2:34847 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000096723s
	[INFO] 10.244.1.2:54098 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00017441s
	[INFO] 10.244.1.2:35429 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000097269s
	[INFO] 10.244.1.2:35606 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000150264s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> describe nodes <==
	Name:               ha-234651
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-234651
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=1d737dad7efa60c56d30434fcd857dd3b14c91d9
	                    minikube.k8s.io/name=ha-234651
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_07_31T16_59_43_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 31 Jul 2024 16:59:42 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-234651
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 31 Jul 2024 17:11:51 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 31 Jul 2024 17:10:26 +0000   Wed, 31 Jul 2024 16:59:42 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 31 Jul 2024 17:10:26 +0000   Wed, 31 Jul 2024 16:59:42 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 31 Jul 2024 17:10:26 +0000   Wed, 31 Jul 2024 16:59:42 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 31 Jul 2024 17:10:26 +0000   Wed, 31 Jul 2024 17:00:12 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.243
	  Hostname:    ha-234651
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 78c611c203cf48ab9dc710fc8d4b3901
	  System UUID:                78c611c2-03cf-48ab-9dc7-10fc8d4b3901
	  Boot ID:                    7f43c774-6026-42b9-978d-915af2f564da
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (10 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-7db6d8ff4d-nsx9j             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     12m
	  kube-system                 coredns-7db6d8ff4d-qbqb9             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     12m
	  kube-system                 etcd-ha-234651                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         12m
	  kube-system                 kindnet-wfbt4                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      12m
	  kube-system                 kube-apiserver-ha-234651             250m (12%)    0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-controller-manager-ha-234651    200m (10%)    0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-proxy-jfgs8                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-scheduler-ha-234651             100m (5%)     0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-vip-ha-234651                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         69s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 12m    kube-proxy       
	  Normal  Starting                 111s   kube-proxy       
	  Normal  NodeHasNoDiskPressure    12m    kubelet          Node ha-234651 status is now: NodeHasNoDiskPressure
	  Normal  Starting                 12m    kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  12m    kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  12m    kubelet          Node ha-234651 status is now: NodeHasSufficientMemory
	  Normal  NodeHasSufficientPID     12m    kubelet          Node ha-234651 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           12m    node-controller  Node ha-234651 event: Registered Node ha-234651 in Controller
	  Normal  NodeReady                11m    kubelet          Node ha-234651 status is now: NodeReady
	  Normal  RegisteredNode           11m    node-controller  Node ha-234651 event: Registered Node ha-234651 in Controller
	  Normal  RegisteredNode           9m44s  node-controller  Node ha-234651 event: Registered Node ha-234651 in Controller
	  Normal  RegisteredNode           99s    node-controller  Node ha-234651 event: Registered Node ha-234651 in Controller
	  Normal  RegisteredNode           93s    node-controller  Node ha-234651 event: Registered Node ha-234651 in Controller
	  Normal  RegisteredNode           27s    node-controller  Node ha-234651 event: Registered Node ha-234651 in Controller
	
	
	Name:               ha-234651-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-234651-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=1d737dad7efa60c56d30434fcd857dd3b14c91d9
	                    minikube.k8s.io/name=ha-234651
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_07_31T17_00_46_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 31 Jul 2024 17:00:43 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-234651-m02
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 31 Jul 2024 17:11:58 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 31 Jul 2024 17:10:57 +0000   Wed, 31 Jul 2024 17:10:16 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 31 Jul 2024 17:10:57 +0000   Wed, 31 Jul 2024 17:10:16 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 31 Jul 2024 17:10:57 +0000   Wed, 31 Jul 2024 17:10:16 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 31 Jul 2024 17:10:57 +0000   Wed, 31 Jul 2024 17:10:16 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.235
	  Hostname:    ha-234651-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 f48a6b3aa33049d58a0ceaa57200b934
	  System UUID:                f48a6b3a-a330-49d5-8a0c-eaa57200b934
	  Boot ID:                    d27f7a8f-d22a-4c7b-89b0-2bce0e57cd25
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-2w6fp                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m34s
	  default                     busybox-fc5497c4f-qw457                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m34s
	  kube-system                 etcd-ha-234651-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         11m
	  kube-system                 kindnet-phmdp                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      11m
	  kube-system                 kube-apiserver-ha-234651-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-controller-manager-ha-234651-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-proxy-b8dcw                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-scheduler-ha-234651-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-vip-ha-234651-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 89s                    kube-proxy       
	  Normal  Starting                 11m                    kube-proxy       
	  Normal  NodeAllocatableEnforced  11m                    kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  11m (x8 over 11m)      kubelet          Node ha-234651-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    11m (x8 over 11m)      kubelet          Node ha-234651-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     11m (x7 over 11m)      kubelet          Node ha-234651-m02 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           11m                    node-controller  Node ha-234651-m02 event: Registered Node ha-234651-m02 in Controller
	  Normal  RegisteredNode           11m                    node-controller  Node ha-234651-m02 event: Registered Node ha-234651-m02 in Controller
	  Normal  RegisteredNode           9m44s                  node-controller  Node ha-234651-m02 event: Registered Node ha-234651-m02 in Controller
	  Normal  NodeNotReady             7m39s                  node-controller  Node ha-234651-m02 status is now: NodeNotReady
	  Normal  Starting                 2m12s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  2m12s (x8 over 2m12s)  kubelet          Node ha-234651-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m12s (x8 over 2m12s)  kubelet          Node ha-234651-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m12s (x7 over 2m12s)  kubelet          Node ha-234651-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  2m12s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           99s                    node-controller  Node ha-234651-m02 event: Registered Node ha-234651-m02 in Controller
	  Normal  RegisteredNode           93s                    node-controller  Node ha-234651-m02 event: Registered Node ha-234651-m02 in Controller
	  Normal  RegisteredNode           27s                    node-controller  Node ha-234651-m02 event: Registered Node ha-234651-m02 in Controller
	
	
	Name:               ha-234651-m03
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-234651-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=1d737dad7efa60c56d30434fcd857dd3b14c91d9
	                    minikube.k8s.io/name=ha-234651
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_07_31T17_02_01_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 31 Jul 2024 17:01:58 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-234651-m03
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 31 Jul 2024 17:11:55 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 31 Jul 2024 17:11:35 +0000   Wed, 31 Jul 2024 17:11:04 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 31 Jul 2024 17:11:35 +0000   Wed, 31 Jul 2024 17:11:04 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 31 Jul 2024 17:11:35 +0000   Wed, 31 Jul 2024 17:11:04 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 31 Jul 2024 17:11:35 +0000   Wed, 31 Jul 2024 17:11:04 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.139
	  Hostname:    ha-234651-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 bedcbc00eaa142208b6f46ab90ace771
	  System UUID:                bedcbc00-eaa1-4220-8b6f-46ab90ace771
	  Boot ID:                    0176ef6e-07f3-44fd-b66e-f8ed51afb470
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-fdmbt                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m34s
	  kube-system                 etcd-ha-234651-m03                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         9m54s
	  kube-system                 kindnet-2xqxq                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      10m
	  kube-system                 kube-apiserver-ha-234651-m03             250m (12%)    0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-controller-manager-ha-234651-m03    200m (10%)    0 (0%)      0 (0%)           0 (0%)         9m53s
	  kube-system                 kube-proxy-gfgjd                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-scheduler-ha-234651-m03             100m (5%)     0 (0%)      0 (0%)           0 (0%)         9m55s
	  kube-system                 kube-vip-ha-234651-m03                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m58s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 9m58s              kube-proxy       
	  Normal   Starting                 39s                kube-proxy       
	  Normal   NodeHasSufficientMemory  10m (x8 over 10m)  kubelet          Node ha-234651-m03 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    10m (x8 over 10m)  kubelet          Node ha-234651-m03 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     10m (x7 over 10m)  kubelet          Node ha-234651-m03 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  10m                kubelet          Updated Node Allocatable limit across pods
	  Normal   RegisteredNode           10m                node-controller  Node ha-234651-m03 event: Registered Node ha-234651-m03 in Controller
	  Normal   RegisteredNode           9m59s              node-controller  Node ha-234651-m03 event: Registered Node ha-234651-m03 in Controller
	  Normal   RegisteredNode           9m44s              node-controller  Node ha-234651-m03 event: Registered Node ha-234651-m03 in Controller
	  Normal   RegisteredNode           99s                node-controller  Node ha-234651-m03 event: Registered Node ha-234651-m03 in Controller
	  Normal   RegisteredNode           93s                node-controller  Node ha-234651-m03 event: Registered Node ha-234651-m03 in Controller
	  Normal   NodeNotReady             59s                node-controller  Node ha-234651-m03 status is now: NodeNotReady
	  Normal   Starting                 56s                kubelet          Starting kubelet.
	  Normal   NodeAllocatableEnforced  56s                kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  56s (x2 over 56s)  kubelet          Node ha-234651-m03 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    56s (x2 over 56s)  kubelet          Node ha-234651-m03 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     56s (x2 over 56s)  kubelet          Node ha-234651-m03 status is now: NodeHasSufficientPID
	  Warning  Rebooted                 56s                kubelet          Node ha-234651-m03 has been rebooted, boot id: 0176ef6e-07f3-44fd-b66e-f8ed51afb470
	  Normal   NodeReady                56s                kubelet          Node ha-234651-m03 status is now: NodeReady
	  Normal   RegisteredNode           27s                node-controller  Node ha-234651-m03 event: Registered Node ha-234651-m03 in Controller
	
	
	Name:               ha-234651-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-234651-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=1d737dad7efa60c56d30434fcd857dd3b14c91d9
	                    minikube.k8s.io/name=ha-234651
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_07_31T17_03_01_0700
	                    minikube.k8s.io/version=v1.33.1
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 31 Jul 2024 17:03:00 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-234651-m04
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 31 Jul 2024 17:11:52 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 31 Jul 2024 17:11:52 +0000   Wed, 31 Jul 2024 17:11:52 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 31 Jul 2024 17:11:52 +0000   Wed, 31 Jul 2024 17:11:52 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 31 Jul 2024 17:11:52 +0000   Wed, 31 Jul 2024 17:11:52 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 31 Jul 2024 17:11:52 +0000   Wed, 31 Jul 2024 17:11:52 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.216
	  Hostname:    ha-234651-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 6cc10919121c4c939afe8d5b5f293c45
	  System UUID:                6cc10919-121c-4c93-9afe-8d5b5f293c45
	  Boot ID:                    e3f3e4ee-b5fd-45c5-bb72-394e03c66cac
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-qnml8       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      9m
	  kube-system                 kube-proxy-4b8gn    0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type     Reason                   Age              From             Message
	  ----     ------                   ----             ----             -------
	  Normal   Starting                 5s               kube-proxy       
	  Normal   Starting                 8m55s            kube-proxy       
	  Normal   RegisteredNode           9m               node-controller  Node ha-234651-m04 event: Registered Node ha-234651-m04 in Controller
	  Normal   NodeHasSufficientMemory  9m (x3 over 9m)  kubelet          Node ha-234651-m04 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    9m (x3 over 9m)  kubelet          Node ha-234651-m04 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     9m (x3 over 9m)  kubelet          Node ha-234651-m04 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  9m               kubelet          Updated Node Allocatable limit across pods
	  Normal   RegisteredNode           8m59s            node-controller  Node ha-234651-m04 event: Registered Node ha-234651-m04 in Controller
	  Normal   RegisteredNode           8m59s            node-controller  Node ha-234651-m04 event: Registered Node ha-234651-m04 in Controller
	  Normal   NodeReady                8m39s            kubelet          Node ha-234651-m04 status is now: NodeReady
	  Normal   RegisteredNode           99s              node-controller  Node ha-234651-m04 event: Registered Node ha-234651-m04 in Controller
	  Normal   RegisteredNode           93s              node-controller  Node ha-234651-m04 event: Registered Node ha-234651-m04 in Controller
	  Normal   NodeNotReady             59s              node-controller  Node ha-234651-m04 status is now: NodeNotReady
	  Normal   RegisteredNode           27s              node-controller  Node ha-234651-m04 event: Registered Node ha-234651-m04 in Controller
	  Normal   Starting                 8s               kubelet          Starting kubelet.
	  Normal   NodeAllocatableEnforced  8s               kubelet          Updated Node Allocatable limit across pods
	  Warning  Rebooted                 8s               kubelet          Node ha-234651-m04 has been rebooted, boot id: e3f3e4ee-b5fd-45c5-bb72-394e03c66cac
	  Normal   NodeHasSufficientMemory  8s (x2 over 8s)  kubelet          Node ha-234651-m04 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    8s (x2 over 8s)  kubelet          Node ha-234651-m04 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     8s (x2 over 8s)  kubelet          Node ha-234651-m04 status is now: NodeHasSufficientPID
	  Normal   NodeReady                8s               kubelet          Node ha-234651-m04 status is now: NodeReady
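
	The four node dumps above are kubectl describe output collected into this log bundle. A sketch of how the same snapshot could be re-collected, assuming the profile/context name "ha-234651" and the test binary path used elsewhere in this report:

	  # assumed profile/context name
	  kubectl --context ha-234651 describe nodes
	  out/minikube-linux-amd64 -p ha-234651 logs --file=ha-234651-logs.txt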
	
	
	==> dmesg <==
	[  +0.000008] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +9.022347] systemd-fstab-generator[596]: Ignoring "noauto" option for root device
	[  +0.054568] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.050413] systemd-fstab-generator[608]: Ignoring "noauto" option for root device
	[  +0.168871] systemd-fstab-generator[622]: Ignoring "noauto" option for root device
	[  +0.140385] systemd-fstab-generator[634]: Ignoring "noauto" option for root device
	[  +0.264161] systemd-fstab-generator[667]: Ignoring "noauto" option for root device
	[  +3.967546] systemd-fstab-generator[766]: Ignoring "noauto" option for root device
	[  +4.664986] systemd-fstab-generator[951]: Ignoring "noauto" option for root device
	[  +0.073312] kauditd_printk_skb: 158 callbacks suppressed
	[  +7.322437] systemd-fstab-generator[1369]: Ignoring "noauto" option for root device
	[  +0.079901] kauditd_printk_skb: 79 callbacks suppressed
	[ +13.802679] kauditd_printk_skb: 21 callbacks suppressed
	[Jul31 17:00] kauditd_printk_skb: 34 callbacks suppressed
	[ +47.907295] kauditd_printk_skb: 26 callbacks suppressed
	[Jul31 17:07] kauditd_printk_skb: 1 callbacks suppressed
	[Jul31 17:09] systemd-fstab-generator[3531]: Ignoring "noauto" option for root device
	[  +0.140478] systemd-fstab-generator[3543]: Ignoring "noauto" option for root device
	[  +0.172285] systemd-fstab-generator[3557]: Ignoring "noauto" option for root device
	[  +0.137076] systemd-fstab-generator[3570]: Ignoring "noauto" option for root device
	[  +0.256187] systemd-fstab-generator[3598]: Ignoring "noauto" option for root device
	[  +7.725144] systemd-fstab-generator[3693]: Ignoring "noauto" option for root device
	[  +0.082305] kauditd_printk_skb: 100 callbacks suppressed
	[ +14.434425] kauditd_printk_skb: 102 callbacks suppressed
	
	
	==> etcd [d677e670bb4261a20839c274a89255a77d159fe3e55246ea9089551953087862] <==
	{"level":"warn","ts":"2024-07-31T17:10:59.22849Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"4d6f7e7e767b3ff3","from":"4d6f7e7e767b3ff3","remote-peer-id":"3a31efdaa5cc8d31","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-31T17:10:59.328408Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"4d6f7e7e767b3ff3","from":"4d6f7e7e767b3ff3","remote-peer-id":"3a31efdaa5cc8d31","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-31T17:10:59.351699Z","caller":"etcdserver/cluster_util.go:294","msg":"failed to reach the peer URL","address":"https://192.168.39.139:2380/version","remote-member-id":"3a31efdaa5cc8d31","error":"Get \"https://192.168.39.139:2380/version\": dial tcp 192.168.39.139:2380: i/o timeout"}
	{"level":"warn","ts":"2024-07-31T17:10:59.351783Z","caller":"etcdserver/cluster_util.go:158","msg":"failed to get version","remote-member-id":"3a31efdaa5cc8d31","error":"Get \"https://192.168.39.139:2380/version\": dial tcp 192.168.39.139:2380: i/o timeout"}
	{"level":"warn","ts":"2024-07-31T17:10:59.428198Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"4d6f7e7e767b3ff3","from":"4d6f7e7e767b3ff3","remote-peer-id":"3a31efdaa5cc8d31","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-31T17:11:03.353951Z","caller":"etcdserver/cluster_util.go:294","msg":"failed to reach the peer URL","address":"https://192.168.39.139:2380/version","remote-member-id":"3a31efdaa5cc8d31","error":"Get \"https://192.168.39.139:2380/version\": dial tcp 192.168.39.139:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-07-31T17:11:03.35403Z","caller":"etcdserver/cluster_util.go:158","msg":"failed to get version","remote-member-id":"3a31efdaa5cc8d31","error":"Get \"https://192.168.39.139:2380/version\": dial tcp 192.168.39.139:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-07-31T17:11:03.70949Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"3a31efdaa5cc8d31","rtt":"0s","error":"dial tcp 192.168.39.139:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-07-31T17:11:03.709557Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"3a31efdaa5cc8d31","rtt":"0s","error":"dial tcp 192.168.39.139:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-07-31T17:11:07.356024Z","caller":"etcdserver/cluster_util.go:294","msg":"failed to reach the peer URL","address":"https://192.168.39.139:2380/version","remote-member-id":"3a31efdaa5cc8d31","error":"Get \"https://192.168.39.139:2380/version\": dial tcp 192.168.39.139:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-07-31T17:11:07.356084Z","caller":"etcdserver/cluster_util.go:158","msg":"failed to get version","remote-member-id":"3a31efdaa5cc8d31","error":"Get \"https://192.168.39.139:2380/version\": dial tcp 192.168.39.139:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-07-31T17:11:08.710553Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"3a31efdaa5cc8d31","rtt":"0s","error":"dial tcp 192.168.39.139:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-07-31T17:11:08.710628Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"3a31efdaa5cc8d31","rtt":"0s","error":"dial tcp 192.168.39.139:2380: connect: connection refused"}
	{"level":"info","ts":"2024-07-31T17:11:10.771744Z","caller":"traceutil/trace.go:171","msg":"trace[611490852] transaction","detail":"{read_only:false; response_revision:2324; number_of_response:1; }","duration":"150.902472ms","start":"2024-07-31T17:11:10.620793Z","end":"2024-07-31T17:11:10.771695Z","steps":["trace[611490852] 'process raft request'  (duration: 150.80711ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-31T17:11:11.358629Z","caller":"etcdserver/cluster_util.go:294","msg":"failed to reach the peer URL","address":"https://192.168.39.139:2380/version","remote-member-id":"3a31efdaa5cc8d31","error":"Get \"https://192.168.39.139:2380/version\": dial tcp 192.168.39.139:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-07-31T17:11:11.358803Z","caller":"etcdserver/cluster_util.go:158","msg":"failed to get version","remote-member-id":"3a31efdaa5cc8d31","error":"Get \"https://192.168.39.139:2380/version\": dial tcp 192.168.39.139:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-07-31T17:11:13.711489Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"3a31efdaa5cc8d31","rtt":"0s","error":"dial tcp 192.168.39.139:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-07-31T17:11:13.711509Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"3a31efdaa5cc8d31","rtt":"0s","error":"dial tcp 192.168.39.139:2380: connect: connection refused"}
	{"level":"info","ts":"2024-07-31T17:11:15.059411Z","caller":"rafthttp/peer_status.go:53","msg":"peer became active","peer-id":"3a31efdaa5cc8d31"}
	{"level":"info","ts":"2024-07-31T17:11:15.066415Z","caller":"rafthttp/stream.go:412","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"4d6f7e7e767b3ff3","remote-peer-id":"3a31efdaa5cc8d31"}
	{"level":"info","ts":"2024-07-31T17:11:15.066731Z","caller":"rafthttp/stream.go:412","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream Message","local-member-id":"4d6f7e7e767b3ff3","remote-peer-id":"3a31efdaa5cc8d31"}
	{"level":"info","ts":"2024-07-31T17:11:15.074051Z","caller":"rafthttp/stream.go:249","msg":"set message encoder","from":"4d6f7e7e767b3ff3","to":"3a31efdaa5cc8d31","stream-type":"stream MsgApp v2"}
	{"level":"info","ts":"2024-07-31T17:11:15.074108Z","caller":"rafthttp/stream.go:274","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","local-member-id":"4d6f7e7e767b3ff3","remote-peer-id":"3a31efdaa5cc8d31"}
	{"level":"info","ts":"2024-07-31T17:11:15.08678Z","caller":"rafthttp/stream.go:249","msg":"set message encoder","from":"4d6f7e7e767b3ff3","to":"3a31efdaa5cc8d31","stream-type":"stream Message"}
	{"level":"info","ts":"2024-07-31T17:11:15.086825Z","caller":"rafthttp/stream.go:274","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream Message","local-member-id":"4d6f7e7e767b3ff3","remote-peer-id":"3a31efdaa5cc8d31"}
	
	
	==> etcd [e5b3417940cd8bf8065dfefafb918da0e5bfda9fd5f7a2f9ccea0f1615151657] <==
	2024/07/31 17:09:09 WARNING: [core] [Server #8] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	2024/07/31 17:09:09 WARNING: [core] [Server #8] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	2024/07/31 17:09:09 WARNING: [core] [Server #8] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	2024/07/31 17:09:09 WARNING: [core] [Server #8] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	2024/07/31 17:09:09 WARNING: [core] [Server #8] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	{"level":"warn","ts":"2024-07-31T17:09:09.636276Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.39.243:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-07-31T17:09:09.636393Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.39.243:2379: use of closed network connection"}
	{"level":"info","ts":"2024-07-31T17:09:09.636593Z","caller":"etcdserver/server.go:1462","msg":"skipped leadership transfer; local server is not leader","local-member-id":"4d6f7e7e767b3ff3","current-leader-member-id":"0"}
	{"level":"info","ts":"2024-07-31T17:09:09.636801Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"6d43d0f719899a7e"}
	{"level":"info","ts":"2024-07-31T17:09:09.636948Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"6d43d0f719899a7e"}
	{"level":"info","ts":"2024-07-31T17:09:09.637039Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"6d43d0f719899a7e"}
	{"level":"info","ts":"2024-07-31T17:09:09.637134Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"4d6f7e7e767b3ff3","remote-peer-id":"6d43d0f719899a7e"}
	{"level":"info","ts":"2024-07-31T17:09:09.637191Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"4d6f7e7e767b3ff3","remote-peer-id":"6d43d0f719899a7e"}
	{"level":"info","ts":"2024-07-31T17:09:09.637261Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"4d6f7e7e767b3ff3","remote-peer-id":"6d43d0f719899a7e"}
	{"level":"info","ts":"2024-07-31T17:09:09.637288Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"6d43d0f719899a7e"}
	{"level":"info","ts":"2024-07-31T17:09:09.637296Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"3a31efdaa5cc8d31"}
	{"level":"info","ts":"2024-07-31T17:09:09.637307Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"3a31efdaa5cc8d31"}
	{"level":"info","ts":"2024-07-31T17:09:09.637367Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"3a31efdaa5cc8d31"}
	{"level":"info","ts":"2024-07-31T17:09:09.637492Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"4d6f7e7e767b3ff3","remote-peer-id":"3a31efdaa5cc8d31"}
	{"level":"info","ts":"2024-07-31T17:09:09.63759Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"4d6f7e7e767b3ff3","remote-peer-id":"3a31efdaa5cc8d31"}
	{"level":"info","ts":"2024-07-31T17:09:09.637726Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"4d6f7e7e767b3ff3","remote-peer-id":"3a31efdaa5cc8d31"}
	{"level":"info","ts":"2024-07-31T17:09:09.6378Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"3a31efdaa5cc8d31"}
	{"level":"info","ts":"2024-07-31T17:09:09.640565Z","caller":"embed/etcd.go:579","msg":"stopping serving peer traffic","address":"192.168.39.243:2380"}
	{"level":"info","ts":"2024-07-31T17:09:09.640778Z","caller":"embed/etcd.go:584","msg":"stopped serving peer traffic","address":"192.168.39.243:2380"}
	{"level":"info","ts":"2024-07-31T17:09:09.640812Z","caller":"embed/etcd.go:377","msg":"closed etcd server","name":"ha-234651","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.243:2380"],"advertise-client-urls":["https://192.168.39.243:2379"]}
	
	
	==> kernel <==
	 17:12:01 up 12 min,  0 users,  load average: 0.55, 0.73, 0.46
	Linux ha-234651 5.10.207 #1 SMP Mon Jul 29 15:19:02 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [97b26172311050320f93611435daf7f831aac7a433159e06423c34df3f98184f] <==
	I0731 17:11:28.746392       1 main.go:322] Node ha-234651-m04 has CIDR [10.244.3.0/24] 
	I0731 17:11:38.746928       1 main.go:295] Handling node with IPs: map[192.168.39.216:{}]
	I0731 17:11:38.747071       1 main.go:322] Node ha-234651-m04 has CIDR [10.244.3.0/24] 
	I0731 17:11:38.747262       1 main.go:295] Handling node with IPs: map[192.168.39.243:{}]
	I0731 17:11:38.747288       1 main.go:299] handling current node
	I0731 17:11:38.747319       1 main.go:295] Handling node with IPs: map[192.168.39.235:{}]
	I0731 17:11:38.747422       1 main.go:322] Node ha-234651-m02 has CIDR [10.244.1.0/24] 
	I0731 17:11:38.747504       1 main.go:295] Handling node with IPs: map[192.168.39.139:{}]
	I0731 17:11:38.747524       1 main.go:322] Node ha-234651-m03 has CIDR [10.244.2.0/24] 
	I0731 17:11:48.754264       1 main.go:295] Handling node with IPs: map[192.168.39.243:{}]
	I0731 17:11:48.754441       1 main.go:299] handling current node
	I0731 17:11:48.754488       1 main.go:295] Handling node with IPs: map[192.168.39.235:{}]
	I0731 17:11:48.754494       1 main.go:322] Node ha-234651-m02 has CIDR [10.244.1.0/24] 
	I0731 17:11:48.754775       1 main.go:295] Handling node with IPs: map[192.168.39.139:{}]
	I0731 17:11:48.754808       1 main.go:322] Node ha-234651-m03 has CIDR [10.244.2.0/24] 
	I0731 17:11:48.754873       1 main.go:295] Handling node with IPs: map[192.168.39.216:{}]
	I0731 17:11:48.754889       1 main.go:322] Node ha-234651-m04 has CIDR [10.244.3.0/24] 
	I0731 17:11:58.745195       1 main.go:295] Handling node with IPs: map[192.168.39.243:{}]
	I0731 17:11:58.745233       1 main.go:299] handling current node
	I0731 17:11:58.745251       1 main.go:295] Handling node with IPs: map[192.168.39.235:{}]
	I0731 17:11:58.745256       1 main.go:322] Node ha-234651-m02 has CIDR [10.244.1.0/24] 
	I0731 17:11:58.745451       1 main.go:295] Handling node with IPs: map[192.168.39.139:{}]
	I0731 17:11:58.745472       1 main.go:322] Node ha-234651-m03 has CIDR [10.244.2.0/24] 
	I0731 17:11:58.745558       1 main.go:295] Handling node with IPs: map[192.168.39.216:{}]
	I0731 17:11:58.745577       1 main.go:322] Node ha-234651-m04 has CIDR [10.244.3.0/24] 
	
	
	==> kindnet [dd9f6c4536535bb3eb07d6fa75fe49579d9cdee5ff6e4c440e0a58359d4f2f08] <==
	I0731 17:08:32.129403       1 main.go:299] handling current node
	I0731 17:08:42.134528       1 main.go:295] Handling node with IPs: map[192.168.39.243:{}]
	I0731 17:08:42.134597       1 main.go:299] handling current node
	I0731 17:08:42.134613       1 main.go:295] Handling node with IPs: map[192.168.39.235:{}]
	I0731 17:08:42.134618       1 main.go:322] Node ha-234651-m02 has CIDR [10.244.1.0/24] 
	I0731 17:08:42.134757       1 main.go:295] Handling node with IPs: map[192.168.39.139:{}]
	I0731 17:08:42.134778       1 main.go:322] Node ha-234651-m03 has CIDR [10.244.2.0/24] 
	I0731 17:08:42.134840       1 main.go:295] Handling node with IPs: map[192.168.39.216:{}]
	I0731 17:08:42.134844       1 main.go:322] Node ha-234651-m04 has CIDR [10.244.3.0/24] 
	I0731 17:08:52.129737       1 main.go:295] Handling node with IPs: map[192.168.39.235:{}]
	I0731 17:08:52.129782       1 main.go:322] Node ha-234651-m02 has CIDR [10.244.1.0/24] 
	I0731 17:08:52.129928       1 main.go:295] Handling node with IPs: map[192.168.39.139:{}]
	I0731 17:08:52.129948       1 main.go:322] Node ha-234651-m03 has CIDR [10.244.2.0/24] 
	I0731 17:08:52.130018       1 main.go:295] Handling node with IPs: map[192.168.39.216:{}]
	I0731 17:08:52.130036       1 main.go:322] Node ha-234651-m04 has CIDR [10.244.3.0/24] 
	I0731 17:08:52.130085       1 main.go:295] Handling node with IPs: map[192.168.39.243:{}]
	I0731 17:08:52.130100       1 main.go:299] handling current node
	I0731 17:09:02.126927       1 main.go:295] Handling node with IPs: map[192.168.39.243:{}]
	I0731 17:09:02.127018       1 main.go:299] handling current node
	I0731 17:09:02.127047       1 main.go:295] Handling node with IPs: map[192.168.39.235:{}]
	I0731 17:09:02.127066       1 main.go:322] Node ha-234651-m02 has CIDR [10.244.1.0/24] 
	I0731 17:09:02.127239       1 main.go:295] Handling node with IPs: map[192.168.39.139:{}]
	I0731 17:09:02.127262       1 main.go:322] Node ha-234651-m03 has CIDR [10.244.2.0/24] 
	I0731 17:09:02.127395       1 main.go:295] Handling node with IPs: map[192.168.39.216:{}]
	I0731 17:09:02.127429       1 main.go:322] Node ha-234651-m04 has CIDR [10.244.3.0/24] 
	
	
	==> kube-apiserver [b3146e46f5c51a3c541231e2c641bbf4c5545f0a0fd2b57835029e0dc9ab29d5] <==
	I0731 17:09:28.326988       1 options.go:221] external host was not specified, using 192.168.39.243
	I0731 17:09:28.330636       1 server.go:148] Version: v1.30.3
	I0731 17:09:28.330754       1 server.go:150] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0731 17:09:28.929181       1 shared_informer.go:313] Waiting for caches to sync for node_authorizer
	I0731 17:09:28.935469       1 shared_informer.go:313] Waiting for caches to sync for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0731 17:09:28.937547       1 plugins.go:157] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
	I0731 17:09:28.937610       1 plugins.go:160] Loaded 13 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,ClusterTrustBundleAttest,CertificateSubjectRestriction,ValidatingAdmissionPolicy,ValidatingAdmissionWebhook,ResourceQuota.
	I0731 17:09:28.937799       1 instance.go:299] Using reconciler: lease
	W0731 17:09:48.928900       1 logging.go:59] [core] [Channel #1 SubChannel #3] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	W0731 17:09:48.929240       1 logging.go:59] [core] [Channel #2 SubChannel #4] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: context deadline exceeded"
	F0731 17:09:48.939208       1 instance.go:292] Error creating leases: error creating storage factory: context deadline exceeded
	
	
	==> kube-apiserver [f674e9fa7967bc4f5ac1136de9ccd505e7fe0f1638629fda624b6de0e2587f08] <==
	I0731 17:10:13.560442       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0731 17:10:13.560527       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0731 17:10:13.652247       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0731 17:10:13.657461       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0731 17:10:13.657511       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0731 17:10:13.658314       1 apf_controller.go:379] Running API Priority and Fairness config worker
	I0731 17:10:13.658387       1 apf_controller.go:382] Running API Priority and Fairness periodic rebalancing process
	I0731 17:10:13.658508       1 shared_informer.go:320] Caches are synced for configmaps
	I0731 17:10:13.659959       1 handler_discovery.go:447] Starting ResourceDiscoveryManager
	I0731 17:10:13.661015       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0731 17:10:13.661239       1 aggregator.go:165] initial CRD sync complete...
	I0731 17:10:13.661271       1 autoregister_controller.go:141] Starting autoregister controller
	I0731 17:10:13.661276       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0731 17:10:13.661281       1 cache.go:39] Caches are synced for autoregister controller
	W0731 17:10:13.670119       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.139 192.168.39.235]
	I0731 17:10:13.686183       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0731 17:10:13.694871       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0731 17:10:13.694902       1 policy_source.go:224] refreshing policies
	I0731 17:10:13.727739       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0731 17:10:13.771989       1 controller.go:615] quota admission added evaluator for: endpoints
	I0731 17:10:13.783003       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	E0731 17:10:13.786265       1 controller.go:95] Found stale data, removed previous endpoints on kubernetes service, apiserver didn't exit successfully previously
	I0731 17:10:14.560181       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	W0731 17:10:14.901240       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.139 192.168.39.235 192.168.39.243]
	W0731 17:10:24.902516       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.235 192.168.39.243]
	
	
	==> kube-controller-manager [d3c496d013e95de87039db5d01f03c4419be22469ae9a37ce56d28e927b11599] <==
	I0731 17:10:27.160294       1 shared_informer.go:320] Caches are synced for resource quota
	I0731 17:10:27.160909       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kubelet-client
	I0731 17:10:27.161168       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kube-apiserver-client
	I0731 17:10:27.163409       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-legacy-unknown
	I0731 17:10:27.205009       1 shared_informer.go:320] Caches are synced for certificate-csrapproving
	I0731 17:10:27.620166       1 shared_informer.go:320] Caches are synced for garbage collector
	I0731 17:10:27.637083       1 shared_informer.go:320] Caches are synced for garbage collector
	I0731 17:10:27.637131       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I0731 17:10:36.301326       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="36.61166ms"
	I0731 17:10:36.301462       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="51.381µs"
	I0731 17:10:38.296013       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="25.137613ms"
	I0731 17:10:38.296143       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="86.73µs"
	I0731 17:10:40.930786       1 endpointslice_controller.go:311] "Error syncing endpoint slices for service, retrying" logger="endpointslice-controller" key="kube-system/kube-dns" err="failed to update kube-dns-fp7gs EndpointSlice for Service kube-system/kube-dns: Operation cannot be fulfilled on endpointslices.discovery.k8s.io \"kube-dns-fp7gs\": the object has been modified; please apply your changes to the latest version and try again"
	I0731 17:10:40.931028       1 event.go:377] Event(v1.ObjectReference{Kind:"Service", Namespace:"kube-system", Name:"kube-dns", UID:"4a55a57c-e736-4e80-b6b2-15221f5c19dd", APIVersion:"v1", ResourceVersion:"242", FieldPath:""}): type: 'Warning' reason: 'FailedToUpdateEndpointSlices' Error updating Endpoint Slices for Service kube-system/kube-dns: failed to update kube-dns-fp7gs EndpointSlice for Service kube-system/kube-dns: Operation cannot be fulfilled on endpointslices.discovery.k8s.io "kube-dns-fp7gs": the object has been modified; please apply your changes to the latest version and try again
	I0731 17:10:40.935718       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="44.95421ms"
	I0731 17:10:40.935974       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="97.934µs"
	I0731 17:10:42.630911       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="16.576742ms"
	I0731 17:10:42.632281       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="49.22µs"
	I0731 17:11:01.857257       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-234651-m04"
	I0731 17:11:01.918552       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="10.332197ms"
	I0731 17:11:01.918656       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="57.659µs"
	I0731 17:11:05.797968       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="66.574µs"
	I0731 17:11:24.057855       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="25.000053ms"
	I0731 17:11:24.058066       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="73.697µs"
	I0731 17:11:52.102265       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-234651-m04"
	
	
	==> kube-controller-manager [e3d0e33db247e8b10836bed4eff51af3c70dbab815033d0f99c72fab296601fe] <==
	I0731 17:09:28.924323       1 serving.go:380] Generated self-signed cert in-memory
	I0731 17:09:29.519803       1 controllermanager.go:189] "Starting" version="v1.30.3"
	I0731 17:09:29.519839       1 controllermanager.go:191] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0731 17:09:29.521449       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0731 17:09:29.521633       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0731 17:09:29.521671       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0731 17:09:29.521641       1 secure_serving.go:213] Serving securely on 127.0.0.1:10257
	E0731 17:09:49.946085       1 controllermanager.go:234] "Error building controller context" err="failed to wait for apiserver being healthy: timed out waiting for the condition: failed to get apiserver /healthz status: Get \"https://192.168.39.243:8443/healthz\": dial tcp 192.168.39.243:8443: connect: connection refused"
	
	
	==> kube-proxy [11197df475b3001b3fd4ae5f027e3f3d3524805332a4f3556041ef5a33fba07d] <==
	I0731 17:10:09.439533       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0731 17:10:09.439886       1 server.go:872] "Version info" version="v1.30.3"
	I0731 17:10:09.439929       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0731 17:10:09.442119       1 config.go:192] "Starting service config controller"
	I0731 17:10:09.442151       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0731 17:10:09.442172       1 config.go:101] "Starting endpoint slice config controller"
	I0731 17:10:09.442187       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0731 17:10:09.442800       1 config.go:319] "Starting node config controller"
	I0731 17:10:09.442826       1 shared_informer.go:313] Waiting for caches to sync for node config
	E0731 17:10:12.455709       1 event_broadcaster.go:279] "Unable to write event (may retry after sleeping)" err="Post \"https://control-plane.minikube.internal:8443/apis/events.k8s.io/v1/namespaces/default/events\": dial tcp 192.168.39.254:8443: connect: no route to host"
	W0731 17:10:12.455838       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.39.254:8443: connect: no route to host
	E0731 17:10:12.455958       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.39.254:8443: connect: no route to host
	W0731 17:10:12.456023       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.39.254:8443: connect: no route to host
	E0731 17:10:12.456068       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.39.254:8443: connect: no route to host
	W0731 17:10:12.456144       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-234651&limit=500&resourceVersion=0": dial tcp 192.168.39.254:8443: connect: no route to host
	E0731 17:10:12.456184       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-234651&limit=500&resourceVersion=0": dial tcp 192.168.39.254:8443: connect: no route to host
	W0731 17:10:15.526122       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-234651&limit=500&resourceVersion=0": dial tcp 192.168.39.254:8443: connect: no route to host
	E0731 17:10:15.526319       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-234651&limit=500&resourceVersion=0": dial tcp 192.168.39.254:8443: connect: no route to host
	W0731 17:10:15.526618       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.39.254:8443: connect: no route to host
	W0731 17:10:15.527094       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.39.254:8443: connect: no route to host
	E0731 17:10:15.527155       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.39.254:8443: connect: no route to host
	E0731 17:10:15.527232       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.39.254:8443: connect: no route to host
	I0731 17:10:17.543805       1 shared_informer.go:320] Caches are synced for service config
	I0731 17:10:17.643116       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0731 17:10:18.143178       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-proxy [631c8cee6152ae1808419f27d4b63fa0a6bcebb15c757dd96090c6267d63e6c2] <==
	E0731 17:07:51.205891       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1856": dial tcp 192.168.39.254:8443: connect: no route to host
	W0731 17:07:51.205970       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-234651&resourceVersion=1793": dial tcp 192.168.39.254:8443: connect: no route to host
	E0731 17:07:51.206000       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-234651&resourceVersion=1793": dial tcp 192.168.39.254:8443: connect: no route to host
	W0731 17:07:51.206164       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1835": dial tcp 192.168.39.254:8443: connect: no route to host
	E0731 17:07:51.206227       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1835": dial tcp 192.168.39.254:8443: connect: no route to host
	W0731 17:07:57.605884       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-234651&resourceVersion=1793": dial tcp 192.168.39.254:8443: connect: no route to host
	E0731 17:07:57.605958       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-234651&resourceVersion=1793": dial tcp 192.168.39.254:8443: connect: no route to host
	W0731 17:07:57.606037       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1835": dial tcp 192.168.39.254:8443: connect: no route to host
	E0731 17:07:57.606083       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1835": dial tcp 192.168.39.254:8443: connect: no route to host
	W0731 17:07:57.606055       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1856": dial tcp 192.168.39.254:8443: connect: no route to host
	E0731 17:07:57.606143       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1856": dial tcp 192.168.39.254:8443: connect: no route to host
	W0731 17:08:09.575524       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-234651&resourceVersion=1793": dial tcp 192.168.39.254:8443: connect: no route to host
	E0731 17:08:09.576213       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-234651&resourceVersion=1793": dial tcp 192.168.39.254:8443: connect: no route to host
	W0731 17:08:09.576233       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1856": dial tcp 192.168.39.254:8443: connect: no route to host
	W0731 17:08:09.575493       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1835": dial tcp 192.168.39.254:8443: connect: no route to host
	E0731 17:08:09.576392       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1835": dial tcp 192.168.39.254:8443: connect: no route to host
	E0731 17:08:09.576440       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1856": dial tcp 192.168.39.254:8443: connect: no route to host
	W0731 17:08:28.007089       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-234651&resourceVersion=1793": dial tcp 192.168.39.254:8443: connect: no route to host
	E0731 17:08:28.007254       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-234651&resourceVersion=1793": dial tcp 192.168.39.254:8443: connect: no route to host
	W0731 17:08:28.007079       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1856": dial tcp 192.168.39.254:8443: connect: no route to host
	E0731 17:08:28.007397       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1856": dial tcp 192.168.39.254:8443: connect: no route to host
	W0731 17:08:31.078543       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1835": dial tcp 192.168.39.254:8443: connect: no route to host
	E0731 17:08:31.078697       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1835": dial tcp 192.168.39.254:8443: connect: no route to host
	W0731 17:09:07.943276       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-234651&resourceVersion=1793": dial tcp 192.168.39.254:8443: connect: no route to host
	E0731 17:09:07.943401       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-234651&resourceVersion=1793": dial tcp 192.168.39.254:8443: connect: no route to host
	
	
	==> kube-scheduler [99fe8054c8b333dba8f752ca3b7d5a59656d3429dcd52f299ba255df7a8f6e23] <==
	W0731 17:10:06.463652       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: Get "https://192.168.39.243:8443/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0": dial tcp 192.168.39.243:8443: connect: connection refused
	E0731 17:10:06.463776       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: Get "https://192.168.39.243:8443/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0": dial tcp 192.168.39.243:8443: connect: connection refused
	W0731 17:10:06.877759       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: Get "https://192.168.39.243:8443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%!D(MISSING)extension-apiserver-authentication&limit=500&resourceVersion=0": dial tcp 192.168.39.243:8443: connect: connection refused
	E0731 17:10:06.877864       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get "https://192.168.39.243:8443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%!D(MISSING)extension-apiserver-authentication&limit=500&resourceVersion=0": dial tcp 192.168.39.243:8443: connect: connection refused
	W0731 17:10:07.473453       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://192.168.39.243:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.168.39.243:8443: connect: connection refused
	E0731 17:10:07.473527       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://192.168.39.243:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.168.39.243:8443: connect: connection refused
	W0731 17:10:07.772607       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: Get "https://192.168.39.243:8443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0": dial tcp 192.168.39.243:8443: connect: connection refused
	E0731 17:10:07.772743       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: Get "https://192.168.39.243:8443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0": dial tcp 192.168.39.243:8443: connect: connection refused
	W0731 17:10:08.320132       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://192.168.39.243:8443/api/v1/nodes?limit=500&resourceVersion=0": dial tcp 192.168.39.243:8443: connect: connection refused
	E0731 17:10:08.320281       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://192.168.39.243:8443/api/v1/nodes?limit=500&resourceVersion=0": dial tcp 192.168.39.243:8443: connect: connection refused
	W0731 17:10:08.515544       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: Get "https://192.168.39.243:8443/api/v1/persistentvolumes?limit=500&resourceVersion=0": dial tcp 192.168.39.243:8443: connect: connection refused
	E0731 17:10:08.515611       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: Get "https://192.168.39.243:8443/api/v1/persistentvolumes?limit=500&resourceVersion=0": dial tcp 192.168.39.243:8443: connect: connection refused
	W0731 17:10:08.532154       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: Get "https://192.168.39.243:8443/api/v1/replicationcontrollers?limit=500&resourceVersion=0": dial tcp 192.168.39.243:8443: connect: connection refused
	E0731 17:10:08.532289       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: Get "https://192.168.39.243:8443/api/v1/replicationcontrollers?limit=500&resourceVersion=0": dial tcp 192.168.39.243:8443: connect: connection refused
	W0731 17:10:09.252038       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: Get "https://192.168.39.243:8443/apis/apps/v1/replicasets?limit=500&resourceVersion=0": dial tcp 192.168.39.243:8443: connect: connection refused
	E0731 17:10:09.252179       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: Get "https://192.168.39.243:8443/apis/apps/v1/replicasets?limit=500&resourceVersion=0": dial tcp 192.168.39.243:8443: connect: connection refused
	W0731 17:10:09.477829       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://192.168.39.243:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.39.243:8443: connect: connection refused
	E0731 17:10:09.477937       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://192.168.39.243:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.39.243:8443: connect: connection refused
	W0731 17:10:10.080894       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: Get "https://192.168.39.243:8443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 192.168.39.243:8443: connect: connection refused
	E0731 17:10:10.080963       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://192.168.39.243:8443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 192.168.39.243:8443: connect: connection refused
	W0731 17:10:10.156109       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: Get "https://192.168.39.243:8443/api/v1/pods?fieldSelector=status.phase%3DSucceeded%!C(MISSING)status.phase%3DFailed&limit=500&resourceVersion=0": dial tcp 192.168.39.243:8443: connect: connection refused
	E0731 17:10:10.156221       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: Get "https://192.168.39.243:8443/api/v1/pods?fieldSelector=status.phase%3DSucceeded%!C(MISSING)status.phase%3DFailed&limit=500&resourceVersion=0": dial tcp 192.168.39.243:8443: connect: connection refused
	W0731 17:10:10.780663       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: Get "https://192.168.39.243:8443/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0": dial tcp 192.168.39.243:8443: connect: connection refused
	E0731 17:10:10.780970       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: Get "https://192.168.39.243:8443/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0": dial tcp 192.168.39.243:8443: connect: connection refused
	I0731 17:10:29.651462       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kube-scheduler [ded6421f2f11d7f64ce80d1681e3d4c9d1453ff8aecc860e184ad0865768adde] <==
	W0731 17:09:04.792897       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0731 17:09:04.792975       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0731 17:09:04.914103       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0731 17:09:04.914192       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0731 17:09:07.307475       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0731 17:09:07.307516       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0731 17:09:07.586862       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0731 17:09:07.586994       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0731 17:09:07.868625       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0731 17:09:07.868673       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0731 17:09:07.954793       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0731 17:09:07.954843       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0731 17:09:08.086478       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0731 17:09:08.086556       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0731 17:09:08.135035       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0731 17:09:08.135109       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0731 17:09:08.524801       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0731 17:09:08.524893       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0731 17:09:08.546473       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0731 17:09:08.546565       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0731 17:09:08.658250       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0731 17:09:08.658437       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0731 17:09:09.119633       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0731 17:09:09.119662       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0731 17:09:09.564811       1 run.go:74] "command failed" err="finished without leader elect"
	
	
	==> kubelet <==
	Jul 31 17:10:14 ha-234651 kubelet[1377]: I0731 17:10:14.862866    1377 scope.go:117] "RemoveContainer" containerID="b33c5656b949f0abefff07127c4c158dae9bc422f9bfc155428533b75399e7b9"
	Jul 31 17:10:14 ha-234651 kubelet[1377]: E0731 17:10:14.863061    1377 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(87455537-bdb8-438b-8122-db85bed01d09)\"" pod="kube-system/storage-provisioner" podUID="87455537-bdb8-438b-8122-db85bed01d09"
	Jul 31 17:10:15 ha-234651 kubelet[1377]: I0731 17:10:15.525847    1377 status_manager.go:853] "Failed to get status for pod" podUID="d9982d04d181fbc6333c44627c777728" pod="kube-system/kube-controller-manager-ha-234651" err="Get \"https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-234651\": dial tcp 192.168.39.254:8443: connect: no route to host"
	Jul 31 17:10:27 ha-234651 kubelet[1377]: I0731 17:10:27.863168    1377 scope.go:117] "RemoveContainer" containerID="b33c5656b949f0abefff07127c4c158dae9bc422f9bfc155428533b75399e7b9"
	Jul 31 17:10:27 ha-234651 kubelet[1377]: E0731 17:10:27.864112    1377 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(87455537-bdb8-438b-8122-db85bed01d09)\"" pod="kube-system/storage-provisioner" podUID="87455537-bdb8-438b-8122-db85bed01d09"
	Jul 31 17:10:40 ha-234651 kubelet[1377]: I0731 17:10:40.866087    1377 scope.go:117] "RemoveContainer" containerID="b33c5656b949f0abefff07127c4c158dae9bc422f9bfc155428533b75399e7b9"
	Jul 31 17:10:40 ha-234651 kubelet[1377]: E0731 17:10:40.866878    1377 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(87455537-bdb8-438b-8122-db85bed01d09)\"" pod="kube-system/storage-provisioner" podUID="87455537-bdb8-438b-8122-db85bed01d09"
	Jul 31 17:10:42 ha-234651 kubelet[1377]: E0731 17:10:42.887671    1377 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 31 17:10:42 ha-234651 kubelet[1377]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 31 17:10:42 ha-234651 kubelet[1377]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 31 17:10:42 ha-234651 kubelet[1377]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 31 17:10:42 ha-234651 kubelet[1377]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 31 17:10:51 ha-234651 kubelet[1377]: I0731 17:10:51.863151    1377 kubelet.go:1917] "Trying to delete pod" pod="kube-system/kube-vip-ha-234651" podUID="205b811c-93e9-4f66-9d0e-67abbc8ff1ef"
	Jul 31 17:10:51 ha-234651 kubelet[1377]: I0731 17:10:51.883592    1377 kubelet.go:1922] "Deleted mirror pod because it is outdated" pod="kube-system/kube-vip-ha-234651"
	Jul 31 17:10:52 ha-234651 kubelet[1377]: I0731 17:10:52.872404    1377 scope.go:117] "RemoveContainer" containerID="b33c5656b949f0abefff07127c4c158dae9bc422f9bfc155428533b75399e7b9"
	Jul 31 17:10:52 ha-234651 kubelet[1377]: E0731 17:10:52.872603    1377 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(87455537-bdb8-438b-8122-db85bed01d09)\"" pod="kube-system/storage-provisioner" podUID="87455537-bdb8-438b-8122-db85bed01d09"
	Jul 31 17:10:56 ha-234651 kubelet[1377]: I0731 17:10:56.838925    1377 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-vip-ha-234651" podStartSLOduration=5.8388203050000005 podStartE2EDuration="5.838820305s" podCreationTimestamp="2024-07-31 17:10:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-07-31 17:10:56.837089803 +0000 UTC m=+674.138877324" watchObservedRunningTime="2024-07-31 17:10:56.838820305 +0000 UTC m=+674.140607833"
	Jul 31 17:11:04 ha-234651 kubelet[1377]: I0731 17:11:04.862493    1377 scope.go:117] "RemoveContainer" containerID="b33c5656b949f0abefff07127c4c158dae9bc422f9bfc155428533b75399e7b9"
	Jul 31 17:11:04 ha-234651 kubelet[1377]: E0731 17:11:04.862692    1377 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(87455537-bdb8-438b-8122-db85bed01d09)\"" pod="kube-system/storage-provisioner" podUID="87455537-bdb8-438b-8122-db85bed01d09"
	Jul 31 17:11:18 ha-234651 kubelet[1377]: I0731 17:11:18.862819    1377 scope.go:117] "RemoveContainer" containerID="b33c5656b949f0abefff07127c4c158dae9bc422f9bfc155428533b75399e7b9"
	Jul 31 17:11:42 ha-234651 kubelet[1377]: E0731 17:11:42.882301    1377 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 31 17:11:42 ha-234651 kubelet[1377]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 31 17:11:42 ha-234651 kubelet[1377]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 31 17:11:42 ha-234651 kubelet[1377]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 31 17:11:42 ha-234651 kubelet[1377]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0731 17:11:59.975638   33811 logs.go:258] failed to output last start logs: failed to read file /home/jenkins/minikube-integration/19349-8084/.minikube/logs/lastStart.txt: bufio.Scanner: token too long

                                                
                                                
** /stderr **
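Note: the "bufio.Scanner: token too long" error in the stderr block above typically means a single line in lastStart.txt exceeded bufio.Scanner's default 64 KiB token limit. A minimal Go sketch of reading such a file with an enlarged scanner buffer (illustrative only, not the minikube implementation; the file name is taken from the error message):

    package main

    import (
        "bufio"
        "fmt"
        "os"
    )

    func main() {
        f, err := os.Open("lastStart.txt") // hypothetical local copy of the log file
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
        defer f.Close()

        sc := bufio.NewScanner(f)
        // Raise the maximum token size above the 64 KiB default so very long
        // log lines no longer trigger "bufio.Scanner: token too long".
        sc.Buffer(make([]byte, 0, 1024*1024), 10*1024*1024)
        for sc.Scan() {
            fmt.Println(sc.Text())
        }
        if err := sc.Err(); err != nil {
            fmt.Fprintln(os.Stderr, "scan error:", err)
        }
    }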
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-234651 -n ha-234651
helpers_test.go:261: (dbg) Run:  kubectl --context ha-234651 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/RestartClusterKeepsNodes FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/RestartClusterKeepsNodes (295.10s)
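For context, the controller-manager's "failed to wait for apiserver being healthy: timed out waiting for the condition" message in the logs above reflects a time-bounded poll of the apiserver's /healthz endpoint that kept getting "connection refused". A minimal sketch of such a probe loop (hypothetical helper, not the Kubernetes or minikube source; the URL mirrors the one in the log):

    package main

    import (
        "context"
        "crypto/tls"
        "fmt"
        "net/http"
        "time"
    )

    // waitForHealthz polls the apiserver's /healthz endpoint until it returns
    // 200 OK or the context deadline expires.
    func waitForHealthz(ctx context.Context, url string) error {
        client := &http.Client{
            Timeout: 5 * time.Second,
            // The apiserver uses a cluster-internal CA in this sketch, so skip verification.
            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
        }
        for {
            resp, err := client.Get(url)
            if err == nil {
                resp.Body.Close()
                if resp.StatusCode == http.StatusOK {
                    return nil
                }
            }
            select {
            case <-ctx.Done():
                return fmt.Errorf("failed to wait for apiserver being healthy: %w", ctx.Err())
            case <-time.After(time.Second):
            }
        }
    }

    func main() {
        ctx, cancel := context.WithTimeout(context.Background(), 30*time.Second)
        defer cancel()
        if err := waitForHealthz(ctx, "https://192.168.39.243:8443/healthz"); err != nil {
            fmt.Println(err)
        }
    }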

                                                
                                    
x
+
TestMultiControlPlane/serial/StopCluster (172.82s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:531: (dbg) Run:  out/minikube-linux-amd64 -p ha-234651 stop -v=7 --alsologtostderr
E0731 17:12:57.004717   15259 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/functional-496242/client.crt: no such file or directory
E0731 17:14:05.346573   15259 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/addons-190022/client.crt: no such file or directory
ha_test.go:531: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-234651 stop -v=7 --alsologtostderr: exit status 82 (2m1.762539909s)

                                                
                                                
-- stdout --
	* Stopping node "ha-234651-m04"  ...
	* Stopping node "ha-234651-m02"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0731 17:12:19.296747   34220 out.go:291] Setting OutFile to fd 1 ...
	I0731 17:12:19.296844   34220 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 17:12:19.296851   34220 out.go:304] Setting ErrFile to fd 2...
	I0731 17:12:19.296856   34220 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 17:12:19.297021   34220 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19349-8084/.minikube/bin
	I0731 17:12:19.297217   34220 out.go:298] Setting JSON to false
	I0731 17:12:19.297317   34220 mustload.go:65] Loading cluster: ha-234651
	I0731 17:12:19.297681   34220 config.go:182] Loaded profile config "ha-234651": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0731 17:12:19.297779   34220 profile.go:143] Saving config to /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/ha-234651/config.json ...
	I0731 17:12:19.297966   34220 mustload.go:65] Loading cluster: ha-234651
	I0731 17:12:19.298156   34220 config.go:182] Loaded profile config "ha-234651": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0731 17:12:19.298192   34220 stop.go:39] StopHost: ha-234651-m04
	I0731 17:12:19.298593   34220 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 17:12:19.298647   34220 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 17:12:19.313994   34220 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39395
	I0731 17:12:19.314474   34220 main.go:141] libmachine: () Calling .GetVersion
	I0731 17:12:19.314991   34220 main.go:141] libmachine: Using API Version  1
	I0731 17:12:19.315015   34220 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 17:12:19.315337   34220 main.go:141] libmachine: () Calling .GetMachineName
	I0731 17:12:19.317691   34220 out.go:177] * Stopping node "ha-234651-m04"  ...
	I0731 17:12:19.319006   34220 machine.go:157] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0731 17:12:19.319031   34220 main.go:141] libmachine: (ha-234651-m04) Calling .DriverName
	I0731 17:12:19.319260   34220 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0731 17:12:19.319281   34220 main.go:141] libmachine: (ha-234651-m04) Calling .GetSSHHostname
	I0731 17:12:19.321988   34220 main.go:141] libmachine: (ha-234651-m04) DBG | domain ha-234651-m04 has defined MAC address 52:54:00:25:f7:22 in network mk-ha-234651
	I0731 17:12:19.322390   34220 main.go:141] libmachine: (ha-234651-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:f7:22", ip: ""} in network mk-ha-234651: {Iface:virbr1 ExpiryTime:2024-07-31 18:11:46 +0000 UTC Type:0 Mac:52:54:00:25:f7:22 Iaid: IPaddr:192.168.39.216 Prefix:24 Hostname:ha-234651-m04 Clientid:01:52:54:00:25:f7:22}
	I0731 17:12:19.322413   34220 main.go:141] libmachine: (ha-234651-m04) DBG | domain ha-234651-m04 has defined IP address 192.168.39.216 and MAC address 52:54:00:25:f7:22 in network mk-ha-234651
	I0731 17:12:19.322566   34220 main.go:141] libmachine: (ha-234651-m04) Calling .GetSSHPort
	I0731 17:12:19.322721   34220 main.go:141] libmachine: (ha-234651-m04) Calling .GetSSHKeyPath
	I0731 17:12:19.322840   34220 main.go:141] libmachine: (ha-234651-m04) Calling .GetSSHUsername
	I0731 17:12:19.322954   34220 sshutil.go:53] new ssh client: &{IP:192.168.39.216 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19349-8084/.minikube/machines/ha-234651-m04/id_rsa Username:docker}
	I0731 17:12:19.408861   34220 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0731 17:12:19.461303   34220 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I0731 17:12:19.512675   34220 main.go:141] libmachine: Stopping "ha-234651-m04"...
	I0731 17:12:19.512718   34220 main.go:141] libmachine: (ha-234651-m04) Calling .GetState
	I0731 17:12:19.514182   34220 main.go:141] libmachine: (ha-234651-m04) Calling .Stop
	I0731 17:12:19.517453   34220 main.go:141] libmachine: (ha-234651-m04) Waiting for machine to stop 0/120
	I0731 17:12:20.604208   34220 main.go:141] libmachine: (ha-234651-m04) Calling .GetState
	I0731 17:12:20.605510   34220 main.go:141] libmachine: Machine "ha-234651-m04" was stopped.
	I0731 17:12:20.605539   34220 stop.go:75] duration metric: took 1.286525221s to stop
	I0731 17:12:20.605579   34220 stop.go:39] StopHost: ha-234651-m02
	I0731 17:12:20.605847   34220 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 17:12:20.605890   34220 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 17:12:20.621376   34220 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41267
	I0731 17:12:20.621819   34220 main.go:141] libmachine: () Calling .GetVersion
	I0731 17:12:20.622355   34220 main.go:141] libmachine: Using API Version  1
	I0731 17:12:20.622377   34220 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 17:12:20.622698   34220 main.go:141] libmachine: () Calling .GetMachineName
	I0731 17:12:20.624476   34220 out.go:177] * Stopping node "ha-234651-m02"  ...
	I0731 17:12:20.625648   34220 machine.go:157] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0731 17:12:20.625673   34220 main.go:141] libmachine: (ha-234651-m02) Calling .DriverName
	I0731 17:12:20.625886   34220 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0731 17:12:20.625909   34220 main.go:141] libmachine: (ha-234651-m02) Calling .GetSSHHostname
	I0731 17:12:20.628819   34220 main.go:141] libmachine: (ha-234651-m02) DBG | domain ha-234651-m02 has defined MAC address 52:54:00:4c:97:0e in network mk-ha-234651
	I0731 17:12:20.629198   34220 main.go:141] libmachine: (ha-234651-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:97:0e", ip: ""} in network mk-ha-234651: {Iface:virbr1 ExpiryTime:2024-07-31 18:09:35 +0000 UTC Type:0 Mac:52:54:00:4c:97:0e Iaid: IPaddr:192.168.39.235 Prefix:24 Hostname:ha-234651-m02 Clientid:01:52:54:00:4c:97:0e}
	I0731 17:12:20.629226   34220 main.go:141] libmachine: (ha-234651-m02) DBG | domain ha-234651-m02 has defined IP address 192.168.39.235 and MAC address 52:54:00:4c:97:0e in network mk-ha-234651
	I0731 17:12:20.629385   34220 main.go:141] libmachine: (ha-234651-m02) Calling .GetSSHPort
	I0731 17:12:20.629567   34220 main.go:141] libmachine: (ha-234651-m02) Calling .GetSSHKeyPath
	I0731 17:12:20.629734   34220 main.go:141] libmachine: (ha-234651-m02) Calling .GetSSHUsername
	I0731 17:12:20.629871   34220 sshutil.go:53] new ssh client: &{IP:192.168.39.235 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19349-8084/.minikube/machines/ha-234651-m02/id_rsa Username:docker}
	I0731 17:12:20.709094   34220 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0731 17:12:20.760946   34220 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I0731 17:12:20.815841   34220 main.go:141] libmachine: Stopping "ha-234651-m02"...
	I0731 17:12:20.815874   34220 main.go:141] libmachine: (ha-234651-m02) Calling .GetState
	I0731 17:12:20.817602   34220 main.go:141] libmachine: (ha-234651-m02) Calling .Stop
	I0731 17:12:20.821515   34220 main.go:141] libmachine: (ha-234651-m02) Waiting for machine to stop 0/120
	I0731 17:12:21.822695   34220 main.go:141] libmachine: (ha-234651-m02) Waiting for machine to stop 1/120
	I0731 17:12:22.824038   34220 main.go:141] libmachine: (ha-234651-m02) Waiting for machine to stop 2/120
	I0731 17:12:23.825404   34220 main.go:141] libmachine: (ha-234651-m02) Waiting for machine to stop 3/120
	I0731 17:12:24.826625   34220 main.go:141] libmachine: (ha-234651-m02) Waiting for machine to stop 4/120
	I0731 17:12:25.828685   34220 main.go:141] libmachine: (ha-234651-m02) Waiting for machine to stop 5/120
	I0731 17:12:26.830396   34220 main.go:141] libmachine: (ha-234651-m02) Waiting for machine to stop 6/120
	I0731 17:12:27.831908   34220 main.go:141] libmachine: (ha-234651-m02) Waiting for machine to stop 7/120
	I0731 17:12:28.833611   34220 main.go:141] libmachine: (ha-234651-m02) Waiting for machine to stop 8/120
	I0731 17:12:29.835029   34220 main.go:141] libmachine: (ha-234651-m02) Waiting for machine to stop 9/120
	I0731 17:12:30.837282   34220 main.go:141] libmachine: (ha-234651-m02) Waiting for machine to stop 10/120
	I0731 17:12:31.838807   34220 main.go:141] libmachine: (ha-234651-m02) Waiting for machine to stop 11/120
	I0731 17:12:32.840249   34220 main.go:141] libmachine: (ha-234651-m02) Waiting for machine to stop 12/120
	I0731 17:12:33.842151   34220 main.go:141] libmachine: (ha-234651-m02) Waiting for machine to stop 13/120
	I0731 17:12:34.843603   34220 main.go:141] libmachine: (ha-234651-m02) Waiting for machine to stop 14/120
	I0731 17:12:35.845946   34220 main.go:141] libmachine: (ha-234651-m02) Waiting for machine to stop 15/120
	I0731 17:12:36.847927   34220 main.go:141] libmachine: (ha-234651-m02) Waiting for machine to stop 16/120
	I0731 17:12:37.849591   34220 main.go:141] libmachine: (ha-234651-m02) Waiting for machine to stop 17/120
	I0731 17:12:38.851385   34220 main.go:141] libmachine: (ha-234651-m02) Waiting for machine to stop 18/120
	I0731 17:12:39.852853   34220 main.go:141] libmachine: (ha-234651-m02) Waiting for machine to stop 19/120
	I0731 17:12:40.854489   34220 main.go:141] libmachine: (ha-234651-m02) Waiting for machine to stop 20/120
	I0731 17:12:41.856902   34220 main.go:141] libmachine: (ha-234651-m02) Waiting for machine to stop 21/120
	I0731 17:12:42.858787   34220 main.go:141] libmachine: (ha-234651-m02) Waiting for machine to stop 22/120
	I0731 17:12:43.860253   34220 main.go:141] libmachine: (ha-234651-m02) Waiting for machine to stop 23/120
	I0731 17:12:44.861938   34220 main.go:141] libmachine: (ha-234651-m02) Waiting for machine to stop 24/120
	I0731 17:12:45.863721   34220 main.go:141] libmachine: (ha-234651-m02) Waiting for machine to stop 25/120
	I0731 17:12:46.865886   34220 main.go:141] libmachine: (ha-234651-m02) Waiting for machine to stop 26/120
	I0731 17:12:47.867664   34220 main.go:141] libmachine: (ha-234651-m02) Waiting for machine to stop 27/120
	I0731 17:12:48.869085   34220 main.go:141] libmachine: (ha-234651-m02) Waiting for machine to stop 28/120
	I0731 17:12:49.870808   34220 main.go:141] libmachine: (ha-234651-m02) Waiting for machine to stop 29/120
	I0731 17:12:50.872725   34220 main.go:141] libmachine: (ha-234651-m02) Waiting for machine to stop 30/120
	I0731 17:12:51.874338   34220 main.go:141] libmachine: (ha-234651-m02) Waiting for machine to stop 31/120
	I0731 17:12:52.875605   34220 main.go:141] libmachine: (ha-234651-m02) Waiting for machine to stop 32/120
	I0731 17:12:53.877697   34220 main.go:141] libmachine: (ha-234651-m02) Waiting for machine to stop 33/120
	I0731 17:12:54.879439   34220 main.go:141] libmachine: (ha-234651-m02) Waiting for machine to stop 34/120
	I0731 17:12:55.881165   34220 main.go:141] libmachine: (ha-234651-m02) Waiting for machine to stop 35/120
	I0731 17:12:56.882284   34220 main.go:141] libmachine: (ha-234651-m02) Waiting for machine to stop 36/120
	I0731 17:12:57.883765   34220 main.go:141] libmachine: (ha-234651-m02) Waiting for machine to stop 37/120
	I0731 17:12:58.885559   34220 main.go:141] libmachine: (ha-234651-m02) Waiting for machine to stop 38/120
	I0731 17:12:59.886791   34220 main.go:141] libmachine: (ha-234651-m02) Waiting for machine to stop 39/120
	I0731 17:13:00.888618   34220 main.go:141] libmachine: (ha-234651-m02) Waiting for machine to stop 40/120
	I0731 17:13:01.889988   34220 main.go:141] libmachine: (ha-234651-m02) Waiting for machine to stop 41/120
	I0731 17:13:02.891412   34220 main.go:141] libmachine: (ha-234651-m02) Waiting for machine to stop 42/120
	I0731 17:13:03.892893   34220 main.go:141] libmachine: (ha-234651-m02) Waiting for machine to stop 43/120
	I0731 17:13:04.894284   34220 main.go:141] libmachine: (ha-234651-m02) Waiting for machine to stop 44/120
	I0731 17:13:05.896353   34220 main.go:141] libmachine: (ha-234651-m02) Waiting for machine to stop 45/120
	I0731 17:13:06.897811   34220 main.go:141] libmachine: (ha-234651-m02) Waiting for machine to stop 46/120
	I0731 17:13:07.899648   34220 main.go:141] libmachine: (ha-234651-m02) Waiting for machine to stop 47/120
	I0731 17:13:08.901106   34220 main.go:141] libmachine: (ha-234651-m02) Waiting for machine to stop 48/120
	I0731 17:13:09.902629   34220 main.go:141] libmachine: (ha-234651-m02) Waiting for machine to stop 49/120
	I0731 17:13:10.904378   34220 main.go:141] libmachine: (ha-234651-m02) Waiting for machine to stop 50/120
	I0731 17:13:11.905596   34220 main.go:141] libmachine: (ha-234651-m02) Waiting for machine to stop 51/120
	I0731 17:13:12.906930   34220 main.go:141] libmachine: (ha-234651-m02) Waiting for machine to stop 52/120
	I0731 17:13:13.908235   34220 main.go:141] libmachine: (ha-234651-m02) Waiting for machine to stop 53/120
	I0731 17:13:14.909489   34220 main.go:141] libmachine: (ha-234651-m02) Waiting for machine to stop 54/120
	I0731 17:13:15.911238   34220 main.go:141] libmachine: (ha-234651-m02) Waiting for machine to stop 55/120
	I0731 17:13:16.912728   34220 main.go:141] libmachine: (ha-234651-m02) Waiting for machine to stop 56/120
	I0731 17:13:17.914022   34220 main.go:141] libmachine: (ha-234651-m02) Waiting for machine to stop 57/120
	I0731 17:13:18.915500   34220 main.go:141] libmachine: (ha-234651-m02) Waiting for machine to stop 58/120
	I0731 17:13:19.917063   34220 main.go:141] libmachine: (ha-234651-m02) Waiting for machine to stop 59/120
	I0731 17:13:20.918904   34220 main.go:141] libmachine: (ha-234651-m02) Waiting for machine to stop 60/120
	I0731 17:13:21.920419   34220 main.go:141] libmachine: (ha-234651-m02) Waiting for machine to stop 61/120
	I0731 17:13:22.921772   34220 main.go:141] libmachine: (ha-234651-m02) Waiting for machine to stop 62/120
	I0731 17:13:23.923191   34220 main.go:141] libmachine: (ha-234651-m02) Waiting for machine to stop 63/120
	I0731 17:13:24.924875   34220 main.go:141] libmachine: (ha-234651-m02) Waiting for machine to stop 64/120
	I0731 17:13:25.926946   34220 main.go:141] libmachine: (ha-234651-m02) Waiting for machine to stop 65/120
	I0731 17:13:26.928247   34220 main.go:141] libmachine: (ha-234651-m02) Waiting for machine to stop 66/120
	I0731 17:13:27.929882   34220 main.go:141] libmachine: (ha-234651-m02) Waiting for machine to stop 67/120
	I0731 17:13:28.931413   34220 main.go:141] libmachine: (ha-234651-m02) Waiting for machine to stop 68/120
	I0731 17:13:29.932788   34220 main.go:141] libmachine: (ha-234651-m02) Waiting for machine to stop 69/120
	I0731 17:13:30.934175   34220 main.go:141] libmachine: (ha-234651-m02) Waiting for machine to stop 70/120
	I0731 17:13:31.935685   34220 main.go:141] libmachine: (ha-234651-m02) Waiting for machine to stop 71/120
	I0731 17:13:32.937538   34220 main.go:141] libmachine: (ha-234651-m02) Waiting for machine to stop 72/120
	I0731 17:13:33.939018   34220 main.go:141] libmachine: (ha-234651-m02) Waiting for machine to stop 73/120
	I0731 17:13:34.940357   34220 main.go:141] libmachine: (ha-234651-m02) Waiting for machine to stop 74/120
	I0731 17:13:35.942287   34220 main.go:141] libmachine: (ha-234651-m02) Waiting for machine to stop 75/120
	I0731 17:13:36.943831   34220 main.go:141] libmachine: (ha-234651-m02) Waiting for machine to stop 76/120
	I0731 17:13:37.945098   34220 main.go:141] libmachine: (ha-234651-m02) Waiting for machine to stop 77/120
	I0731 17:13:38.946567   34220 main.go:141] libmachine: (ha-234651-m02) Waiting for machine to stop 78/120
	I0731 17:13:39.948015   34220 main.go:141] libmachine: (ha-234651-m02) Waiting for machine to stop 79/120
	I0731 17:13:40.949831   34220 main.go:141] libmachine: (ha-234651-m02) Waiting for machine to stop 80/120
	I0731 17:13:41.951169   34220 main.go:141] libmachine: (ha-234651-m02) Waiting for machine to stop 81/120
	I0731 17:13:42.952444   34220 main.go:141] libmachine: (ha-234651-m02) Waiting for machine to stop 82/120
	I0731 17:13:43.953869   34220 main.go:141] libmachine: (ha-234651-m02) Waiting for machine to stop 83/120
	I0731 17:13:44.955327   34220 main.go:141] libmachine: (ha-234651-m02) Waiting for machine to stop 84/120
	I0731 17:13:45.957197   34220 main.go:141] libmachine: (ha-234651-m02) Waiting for machine to stop 85/120
	I0731 17:13:46.958696   34220 main.go:141] libmachine: (ha-234651-m02) Waiting for machine to stop 86/120
	I0731 17:13:47.960128   34220 main.go:141] libmachine: (ha-234651-m02) Waiting for machine to stop 87/120
	I0731 17:13:48.961854   34220 main.go:141] libmachine: (ha-234651-m02) Waiting for machine to stop 88/120
	I0731 17:13:49.963192   34220 main.go:141] libmachine: (ha-234651-m02) Waiting for machine to stop 89/120
	I0731 17:13:50.964914   34220 main.go:141] libmachine: (ha-234651-m02) Waiting for machine to stop 90/120
	I0731 17:13:51.966164   34220 main.go:141] libmachine: (ha-234651-m02) Waiting for machine to stop 91/120
	I0731 17:13:52.967502   34220 main.go:141] libmachine: (ha-234651-m02) Waiting for machine to stop 92/120
	I0731 17:13:53.969140   34220 main.go:141] libmachine: (ha-234651-m02) Waiting for machine to stop 93/120
	I0731 17:13:54.970555   34220 main.go:141] libmachine: (ha-234651-m02) Waiting for machine to stop 94/120
	I0731 17:13:55.972276   34220 main.go:141] libmachine: (ha-234651-m02) Waiting for machine to stop 95/120
	I0731 17:13:56.973832   34220 main.go:141] libmachine: (ha-234651-m02) Waiting for machine to stop 96/120
	I0731 17:13:57.975102   34220 main.go:141] libmachine: (ha-234651-m02) Waiting for machine to stop 97/120
	I0731 17:13:58.976593   34220 main.go:141] libmachine: (ha-234651-m02) Waiting for machine to stop 98/120
	I0731 17:13:59.978075   34220 main.go:141] libmachine: (ha-234651-m02) Waiting for machine to stop 99/120
	I0731 17:14:00.979882   34220 main.go:141] libmachine: (ha-234651-m02) Waiting for machine to stop 100/120
	I0731 17:14:01.981366   34220 main.go:141] libmachine: (ha-234651-m02) Waiting for machine to stop 101/120
	I0731 17:14:02.983054   34220 main.go:141] libmachine: (ha-234651-m02) Waiting for machine to stop 102/120
	I0731 17:14:03.984819   34220 main.go:141] libmachine: (ha-234651-m02) Waiting for machine to stop 103/120
	I0731 17:14:04.986098   34220 main.go:141] libmachine: (ha-234651-m02) Waiting for machine to stop 104/120
	I0731 17:14:05.987973   34220 main.go:141] libmachine: (ha-234651-m02) Waiting for machine to stop 105/120
	I0731 17:14:06.989529   34220 main.go:141] libmachine: (ha-234651-m02) Waiting for machine to stop 106/120
	I0731 17:14:07.990951   34220 main.go:141] libmachine: (ha-234651-m02) Waiting for machine to stop 107/120
	I0731 17:14:08.992557   34220 main.go:141] libmachine: (ha-234651-m02) Waiting for machine to stop 108/120
	I0731 17:14:09.994020   34220 main.go:141] libmachine: (ha-234651-m02) Waiting for machine to stop 109/120
	I0731 17:14:10.995811   34220 main.go:141] libmachine: (ha-234651-m02) Waiting for machine to stop 110/120
	I0731 17:14:11.997142   34220 main.go:141] libmachine: (ha-234651-m02) Waiting for machine to stop 111/120
	I0731 17:14:12.998672   34220 main.go:141] libmachine: (ha-234651-m02) Waiting for machine to stop 112/120
	I0731 17:14:14.000090   34220 main.go:141] libmachine: (ha-234651-m02) Waiting for machine to stop 113/120
	I0731 17:14:15.001682   34220 main.go:141] libmachine: (ha-234651-m02) Waiting for machine to stop 114/120
	I0731 17:14:16.003632   34220 main.go:141] libmachine: (ha-234651-m02) Waiting for machine to stop 115/120
	I0731 17:14:17.005041   34220 main.go:141] libmachine: (ha-234651-m02) Waiting for machine to stop 116/120
	I0731 17:14:18.007007   34220 main.go:141] libmachine: (ha-234651-m02) Waiting for machine to stop 117/120
	I0731 17:14:19.008312   34220 main.go:141] libmachine: (ha-234651-m02) Waiting for machine to stop 118/120
	I0731 17:14:20.010036   34220 main.go:141] libmachine: (ha-234651-m02) Waiting for machine to stop 119/120
	I0731 17:14:21.011006   34220 stop.go:66] stop err: unable to stop vm, current state "Running"
	W0731 17:14:21.011070   34220 stop.go:165] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I0731 17:14:21.013117   34220 out.go:177] 
	W0731 17:14:21.014894   34220 out.go:239] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W0731 17:14:21.014912   34220 out.go:239] * 
	* 
	W0731 17:14:21.017054   34220 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0731 17:14:21.018490   34220 out.go:177] 

                                                
                                                
** /stderr **
ha_test.go:533: failed to stop cluster. args "out/minikube-linux-amd64 -p ha-234651 stop -v=7 --alsologtostderr": exit status 82
ha_test.go:537: (dbg) Run:  out/minikube-linux-amd64 -p ha-234651 status -v=7 --alsologtostderr
ha_test.go:537: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-234651 status -v=7 --alsologtostderr: exit status 7 (33.880937443s)

                                                
                                                
-- stdout --
	ha-234651
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Stopped
	kubeconfig: Configured
	
	ha-234651-m02
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-234651-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0731 17:14:21.061367   34706 out.go:291] Setting OutFile to fd 1 ...
	I0731 17:14:21.061610   34706 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 17:14:21.061618   34706 out.go:304] Setting ErrFile to fd 2...
	I0731 17:14:21.061622   34706 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 17:14:21.061796   34706 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19349-8084/.minikube/bin
	I0731 17:14:21.061947   34706 out.go:298] Setting JSON to false
	I0731 17:14:21.061969   34706 mustload.go:65] Loading cluster: ha-234651
	I0731 17:14:21.062085   34706 notify.go:220] Checking for updates...
	I0731 17:14:21.062317   34706 config.go:182] Loaded profile config "ha-234651": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0731 17:14:21.062329   34706 status.go:255] checking status of ha-234651 ...
	I0731 17:14:21.062675   34706 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 17:14:21.062733   34706 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 17:14:21.083140   34706 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32773
	I0731 17:14:21.083626   34706 main.go:141] libmachine: () Calling .GetVersion
	I0731 17:14:21.084256   34706 main.go:141] libmachine: Using API Version  1
	I0731 17:14:21.084285   34706 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 17:14:21.084656   34706 main.go:141] libmachine: () Calling .GetMachineName
	I0731 17:14:21.084894   34706 main.go:141] libmachine: (ha-234651) Calling .GetState
	I0731 17:14:21.086499   34706 status.go:330] ha-234651 host status = "Running" (err=<nil>)
	I0731 17:14:21.086524   34706 host.go:66] Checking if "ha-234651" exists ...
	I0731 17:14:21.086786   34706 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 17:14:21.086831   34706 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 17:14:21.102375   34706 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44197
	I0731 17:14:21.102705   34706 main.go:141] libmachine: () Calling .GetVersion
	I0731 17:14:21.103277   34706 main.go:141] libmachine: Using API Version  1
	I0731 17:14:21.103310   34706 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 17:14:21.103633   34706 main.go:141] libmachine: () Calling .GetMachineName
	I0731 17:14:21.103804   34706 main.go:141] libmachine: (ha-234651) Calling .GetIP
	I0731 17:14:21.106282   34706 main.go:141] libmachine: (ha-234651) DBG | domain ha-234651 has defined MAC address 52:54:00:20:60:53 in network mk-ha-234651
	I0731 17:14:21.106708   34706 main.go:141] libmachine: (ha-234651) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:60:53", ip: ""} in network mk-ha-234651: {Iface:virbr1 ExpiryTime:2024-07-31 17:59:15 +0000 UTC Type:0 Mac:52:54:00:20:60:53 Iaid: IPaddr:192.168.39.243 Prefix:24 Hostname:ha-234651 Clientid:01:52:54:00:20:60:53}
	I0731 17:14:21.106738   34706 main.go:141] libmachine: (ha-234651) DBG | domain ha-234651 has defined IP address 192.168.39.243 and MAC address 52:54:00:20:60:53 in network mk-ha-234651
	I0731 17:14:21.106898   34706 host.go:66] Checking if "ha-234651" exists ...
	I0731 17:14:21.107199   34706 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 17:14:21.107233   34706 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 17:14:21.121940   34706 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38369
	I0731 17:14:21.122333   34706 main.go:141] libmachine: () Calling .GetVersion
	I0731 17:14:21.122757   34706 main.go:141] libmachine: Using API Version  1
	I0731 17:14:21.122786   34706 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 17:14:21.123055   34706 main.go:141] libmachine: () Calling .GetMachineName
	I0731 17:14:21.123266   34706 main.go:141] libmachine: (ha-234651) Calling .DriverName
	I0731 17:14:21.123448   34706 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0731 17:14:21.123467   34706 main.go:141] libmachine: (ha-234651) Calling .GetSSHHostname
	I0731 17:14:21.125951   34706 main.go:141] libmachine: (ha-234651) DBG | domain ha-234651 has defined MAC address 52:54:00:20:60:53 in network mk-ha-234651
	I0731 17:14:21.126334   34706 main.go:141] libmachine: (ha-234651) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:60:53", ip: ""} in network mk-ha-234651: {Iface:virbr1 ExpiryTime:2024-07-31 17:59:15 +0000 UTC Type:0 Mac:52:54:00:20:60:53 Iaid: IPaddr:192.168.39.243 Prefix:24 Hostname:ha-234651 Clientid:01:52:54:00:20:60:53}
	I0731 17:14:21.126358   34706 main.go:141] libmachine: (ha-234651) DBG | domain ha-234651 has defined IP address 192.168.39.243 and MAC address 52:54:00:20:60:53 in network mk-ha-234651
	I0731 17:14:21.126511   34706 main.go:141] libmachine: (ha-234651) Calling .GetSSHPort
	I0731 17:14:21.126743   34706 main.go:141] libmachine: (ha-234651) Calling .GetSSHKeyPath
	I0731 17:14:21.126891   34706 main.go:141] libmachine: (ha-234651) Calling .GetSSHUsername
	I0731 17:14:21.127030   34706 sshutil.go:53] new ssh client: &{IP:192.168.39.243 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19349-8084/.minikube/machines/ha-234651/id_rsa Username:docker}
	I0731 17:14:21.211538   34706 ssh_runner.go:195] Run: systemctl --version
	I0731 17:14:21.217952   34706 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0731 17:14:21.233916   34706 kubeconfig.go:125] found "ha-234651" server: "https://192.168.39.254:8443"
	I0731 17:14:21.233950   34706 api_server.go:166] Checking apiserver status ...
	I0731 17:14:21.234022   34706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 17:14:21.250784   34706 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/5477/cgroup
	W0731 17:14:21.260201   34706 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/5477/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0731 17:14:21.260276   34706 ssh_runner.go:195] Run: ls
	I0731 17:14:21.265634   34706 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0731 17:14:24.008341   34706 api_server.go:269] stopped: https://192.168.39.254:8443/healthz: Get "https://192.168.39.254:8443/healthz": dial tcp 192.168.39.254:8443: connect: no route to host
	I0731 17:14:24.008409   34706 retry.go:31] will retry after 278.583405ms: state is "Stopped"
	I0731 17:14:24.287933   34706 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0731 17:14:27.080377   34706 api_server.go:269] stopped: https://192.168.39.254:8443/healthz: Get "https://192.168.39.254:8443/healthz": dial tcp 192.168.39.254:8443: connect: no route to host
	I0731 17:14:27.080431   34706 retry.go:31] will retry after 350.343475ms: state is "Stopped"
	I0731 17:14:27.430933   34706 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0731 17:14:30.152397   34706 api_server.go:269] stopped: https://192.168.39.254:8443/healthz: Get "https://192.168.39.254:8443/healthz": dial tcp 192.168.39.254:8443: connect: no route to host
	I0731 17:14:30.152449   34706 retry.go:31] will retry after 316.202412ms: state is "Stopped"
	I0731 17:14:30.468912   34706 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0731 17:14:33.219471   34706 api_server.go:269] stopped: https://192.168.39.254:8443/healthz: Get "https://192.168.39.254:8443/healthz": dial tcp 192.168.39.254:8443: connect: no route to host
	I0731 17:14:33.219521   34706 retry.go:31] will retry after 489.362914ms: state is "Stopped"
	I0731 17:14:33.709157   34706 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0731 17:14:36.291436   34706 api_server.go:269] stopped: https://192.168.39.254:8443/healthz: Get "https://192.168.39.254:8443/healthz": dial tcp 192.168.39.254:8443: connect: no route to host
	I0731 17:14:36.291480   34706 status.go:422] ha-234651 apiserver status = Running (err=<nil>)
	I0731 17:14:36.291487   34706 status.go:257] ha-234651 status: &{Name:ha-234651 Host:Running Kubelet:Running APIServer:Stopped Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0731 17:14:36.291519   34706 status.go:255] checking status of ha-234651-m02 ...
	I0731 17:14:36.291809   34706 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 17:14:36.291846   34706 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 17:14:36.306908   34706 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40739
	I0731 17:14:36.307367   34706 main.go:141] libmachine: () Calling .GetVersion
	I0731 17:14:36.307837   34706 main.go:141] libmachine: Using API Version  1
	I0731 17:14:36.307860   34706 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 17:14:36.308163   34706 main.go:141] libmachine: () Calling .GetMachineName
	I0731 17:14:36.308385   34706 main.go:141] libmachine: (ha-234651-m02) Calling .GetState
	I0731 17:14:36.310027   34706 status.go:330] ha-234651-m02 host status = "Running" (err=<nil>)
	I0731 17:14:36.310046   34706 host.go:66] Checking if "ha-234651-m02" exists ...
	I0731 17:14:36.310324   34706 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 17:14:36.310362   34706 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 17:14:36.325104   34706 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34347
	I0731 17:14:36.325496   34706 main.go:141] libmachine: () Calling .GetVersion
	I0731 17:14:36.325928   34706 main.go:141] libmachine: Using API Version  1
	I0731 17:14:36.325949   34706 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 17:14:36.326245   34706 main.go:141] libmachine: () Calling .GetMachineName
	I0731 17:14:36.326427   34706 main.go:141] libmachine: (ha-234651-m02) Calling .GetIP
	I0731 17:14:36.329173   34706 main.go:141] libmachine: (ha-234651-m02) DBG | domain ha-234651-m02 has defined MAC address 52:54:00:4c:97:0e in network mk-ha-234651
	I0731 17:14:36.329599   34706 main.go:141] libmachine: (ha-234651-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:97:0e", ip: ""} in network mk-ha-234651: {Iface:virbr1 ExpiryTime:2024-07-31 18:09:35 +0000 UTC Type:0 Mac:52:54:00:4c:97:0e Iaid: IPaddr:192.168.39.235 Prefix:24 Hostname:ha-234651-m02 Clientid:01:52:54:00:4c:97:0e}
	I0731 17:14:36.329623   34706 main.go:141] libmachine: (ha-234651-m02) DBG | domain ha-234651-m02 has defined IP address 192.168.39.235 and MAC address 52:54:00:4c:97:0e in network mk-ha-234651
	I0731 17:14:36.329760   34706 host.go:66] Checking if "ha-234651-m02" exists ...
	I0731 17:14:36.330084   34706 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 17:14:36.330125   34706 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 17:14:36.345392   34706 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43141
	I0731 17:14:36.345894   34706 main.go:141] libmachine: () Calling .GetVersion
	I0731 17:14:36.346388   34706 main.go:141] libmachine: Using API Version  1
	I0731 17:14:36.346407   34706 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 17:14:36.346701   34706 main.go:141] libmachine: () Calling .GetMachineName
	I0731 17:14:36.346862   34706 main.go:141] libmachine: (ha-234651-m02) Calling .DriverName
	I0731 17:14:36.347043   34706 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0731 17:14:36.347062   34706 main.go:141] libmachine: (ha-234651-m02) Calling .GetSSHHostname
	I0731 17:14:36.349947   34706 main.go:141] libmachine: (ha-234651-m02) DBG | domain ha-234651-m02 has defined MAC address 52:54:00:4c:97:0e in network mk-ha-234651
	I0731 17:14:36.350415   34706 main.go:141] libmachine: (ha-234651-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:97:0e", ip: ""} in network mk-ha-234651: {Iface:virbr1 ExpiryTime:2024-07-31 18:09:35 +0000 UTC Type:0 Mac:52:54:00:4c:97:0e Iaid: IPaddr:192.168.39.235 Prefix:24 Hostname:ha-234651-m02 Clientid:01:52:54:00:4c:97:0e}
	I0731 17:14:36.350445   34706 main.go:141] libmachine: (ha-234651-m02) DBG | domain ha-234651-m02 has defined IP address 192.168.39.235 and MAC address 52:54:00:4c:97:0e in network mk-ha-234651
	I0731 17:14:36.350612   34706 main.go:141] libmachine: (ha-234651-m02) Calling .GetSSHPort
	I0731 17:14:36.350784   34706 main.go:141] libmachine: (ha-234651-m02) Calling .GetSSHKeyPath
	I0731 17:14:36.350973   34706 main.go:141] libmachine: (ha-234651-m02) Calling .GetSSHUsername
	I0731 17:14:36.351211   34706 sshutil.go:53] new ssh client: &{IP:192.168.39.235 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19349-8084/.minikube/machines/ha-234651-m02/id_rsa Username:docker}
	W0731 17:14:54.883400   34706 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.235:22: connect: no route to host
	W0731 17:14:54.883494   34706 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.235:22: connect: no route to host
	E0731 17:14:54.883513   34706 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.235:22: connect: no route to host
	I0731 17:14:54.883549   34706 status.go:257] ha-234651-m02 status: &{Name:ha-234651-m02 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0731 17:14:54.883569   34706 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.235:22: connect: no route to host
	I0731 17:14:54.883581   34706 status.go:255] checking status of ha-234651-m04 ...
	I0731 17:14:54.883881   34706 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 17:14:54.883938   34706 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 17:14:54.898799   34706 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33943
	I0731 17:14:54.899350   34706 main.go:141] libmachine: () Calling .GetVersion
	I0731 17:14:54.899915   34706 main.go:141] libmachine: Using API Version  1
	I0731 17:14:54.899940   34706 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 17:14:54.900257   34706 main.go:141] libmachine: () Calling .GetMachineName
	I0731 17:14:54.900556   34706 main.go:141] libmachine: (ha-234651-m04) Calling .GetState
	I0731 17:14:54.902096   34706 status.go:330] ha-234651-m04 host status = "Stopped" (err=<nil>)
	I0731 17:14:54.902108   34706 status.go:343] host is not running, skipping remaining checks
	I0731 17:14:54.902114   34706 status.go:257] ha-234651-m04 status: &{Name:ha-234651-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
ha_test.go:546: status says there are running hosts: args "out/minikube-linux-amd64 -p ha-234651 status -v=7 --alsologtostderr": ha-234651
type: Control Plane
host: Running
kubelet: Running
apiserver: Stopped
kubeconfig: Configured

                                                
                                                
ha-234651-m02
type: Control Plane
host: Error
kubelet: Nonexistent
apiserver: Nonexistent
kubeconfig: Configured

                                                
                                                
ha-234651-m04
type: Worker
host: Stopped
kubelet: Stopped

                                                
                                                
ha_test.go:549: status says not three kubelets are stopped: args "out/minikube-linux-amd64 -p ha-234651 status -v=7 --alsologtostderr": ha-234651
type: Control Plane
host: Running
kubelet: Running
apiserver: Stopped
kubeconfig: Configured

                                                
                                                
ha-234651-m02
type: Control Plane
host: Error
kubelet: Nonexistent
apiserver: Nonexistent
kubeconfig: Configured

                                                
                                                
ha-234651-m04
type: Worker
host: Stopped
kubelet: Stopped

                                                
                                                
ha_test.go:552: status says not two apiservers are stopped: args "out/minikube-linux-amd64 -p ha-234651 status -v=7 --alsologtostderr": ha-234651
type: Control Plane
host: Running
kubelet: Running
apiserver: Stopped
kubeconfig: Configured

                                                
                                                
ha-234651-m02
type: Control Plane
host: Error
kubelet: Nonexistent
apiserver: Nonexistent
kubeconfig: Configured

                                                
                                                
ha-234651-m04
type: Worker
host: Stopped
kubelet: Stopped

                                                
                                                
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-234651 -n ha-234651
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p ha-234651 -n ha-234651: exit status 2 (15.596257886s)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestMultiControlPlane/serial/StopCluster FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/StopCluster]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-234651 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ha-234651 logs -n 25: (1.319221272s)
helpers_test.go:252: TestMultiControlPlane/serial/StopCluster logs: 
-- stdout --
	
	==> Audit <==
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                       Args                                       |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| ssh     | ha-234651 ssh -n ha-234651-m02 sudo cat                                          | ha-234651 | jenkins | v1.33.1 | 31 Jul 24 17:03 UTC | 31 Jul 24 17:03 UTC |
	|         | /home/docker/cp-test_ha-234651-m03_ha-234651-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-234651 cp ha-234651-m03:/home/docker/cp-test.txt                              | ha-234651 | jenkins | v1.33.1 | 31 Jul 24 17:03 UTC | 31 Jul 24 17:03 UTC |
	|         | ha-234651-m04:/home/docker/cp-test_ha-234651-m03_ha-234651-m04.txt               |           |         |         |                     |                     |
	| ssh     | ha-234651 ssh -n                                                                 | ha-234651 | jenkins | v1.33.1 | 31 Jul 24 17:03 UTC | 31 Jul 24 17:03 UTC |
	|         | ha-234651-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-234651 ssh -n ha-234651-m04 sudo cat                                          | ha-234651 | jenkins | v1.33.1 | 31 Jul 24 17:03 UTC | 31 Jul 24 17:03 UTC |
	|         | /home/docker/cp-test_ha-234651-m03_ha-234651-m04.txt                             |           |         |         |                     |                     |
	| cp      | ha-234651 cp testdata/cp-test.txt                                                | ha-234651 | jenkins | v1.33.1 | 31 Jul 24 17:03 UTC | 31 Jul 24 17:03 UTC |
	|         | ha-234651-m04:/home/docker/cp-test.txt                                           |           |         |         |                     |                     |
	| ssh     | ha-234651 ssh -n                                                                 | ha-234651 | jenkins | v1.33.1 | 31 Jul 24 17:03 UTC | 31 Jul 24 17:03 UTC |
	|         | ha-234651-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-234651 cp ha-234651-m04:/home/docker/cp-test.txt                              | ha-234651 | jenkins | v1.33.1 | 31 Jul 24 17:03 UTC | 31 Jul 24 17:03 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile2373933749/001/cp-test_ha-234651-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-234651 ssh -n                                                                 | ha-234651 | jenkins | v1.33.1 | 31 Jul 24 17:03 UTC | 31 Jul 24 17:03 UTC |
	|         | ha-234651-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-234651 cp ha-234651-m04:/home/docker/cp-test.txt                              | ha-234651 | jenkins | v1.33.1 | 31 Jul 24 17:03 UTC | 31 Jul 24 17:03 UTC |
	|         | ha-234651:/home/docker/cp-test_ha-234651-m04_ha-234651.txt                       |           |         |         |                     |                     |
	| ssh     | ha-234651 ssh -n                                                                 | ha-234651 | jenkins | v1.33.1 | 31 Jul 24 17:03 UTC | 31 Jul 24 17:03 UTC |
	|         | ha-234651-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-234651 ssh -n ha-234651 sudo cat                                              | ha-234651 | jenkins | v1.33.1 | 31 Jul 24 17:03 UTC | 31 Jul 24 17:03 UTC |
	|         | /home/docker/cp-test_ha-234651-m04_ha-234651.txt                                 |           |         |         |                     |                     |
	| cp      | ha-234651 cp ha-234651-m04:/home/docker/cp-test.txt                              | ha-234651 | jenkins | v1.33.1 | 31 Jul 24 17:03 UTC | 31 Jul 24 17:03 UTC |
	|         | ha-234651-m02:/home/docker/cp-test_ha-234651-m04_ha-234651-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-234651 ssh -n                                                                 | ha-234651 | jenkins | v1.33.1 | 31 Jul 24 17:03 UTC | 31 Jul 24 17:03 UTC |
	|         | ha-234651-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-234651 ssh -n ha-234651-m02 sudo cat                                          | ha-234651 | jenkins | v1.33.1 | 31 Jul 24 17:03 UTC | 31 Jul 24 17:03 UTC |
	|         | /home/docker/cp-test_ha-234651-m04_ha-234651-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-234651 cp ha-234651-m04:/home/docker/cp-test.txt                              | ha-234651 | jenkins | v1.33.1 | 31 Jul 24 17:03 UTC | 31 Jul 24 17:03 UTC |
	|         | ha-234651-m03:/home/docker/cp-test_ha-234651-m04_ha-234651-m03.txt               |           |         |         |                     |                     |
	| ssh     | ha-234651 ssh -n                                                                 | ha-234651 | jenkins | v1.33.1 | 31 Jul 24 17:03 UTC | 31 Jul 24 17:03 UTC |
	|         | ha-234651-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-234651 ssh -n ha-234651-m03 sudo cat                                          | ha-234651 | jenkins | v1.33.1 | 31 Jul 24 17:03 UTC | 31 Jul 24 17:03 UTC |
	|         | /home/docker/cp-test_ha-234651-m04_ha-234651-m03.txt                             |           |         |         |                     |                     |
	| node    | ha-234651 node stop m02 -v=7                                                     | ha-234651 | jenkins | v1.33.1 | 31 Jul 24 17:03 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | ha-234651 node start m02 -v=7                                                    | ha-234651 | jenkins | v1.33.1 | 31 Jul 24 17:06 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | list -p ha-234651 -v=7                                                           | ha-234651 | jenkins | v1.33.1 | 31 Jul 24 17:07 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| stop    | -p ha-234651 -v=7                                                                | ha-234651 | jenkins | v1.33.1 | 31 Jul 24 17:07 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| start   | -p ha-234651 --wait=true -v=7                                                    | ha-234651 | jenkins | v1.33.1 | 31 Jul 24 17:09 UTC | 31 Jul 24 17:11 UTC |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | list -p ha-234651                                                                | ha-234651 | jenkins | v1.33.1 | 31 Jul 24 17:11 UTC |                     |
	| node    | ha-234651 node delete m03 -v=7                                                   | ha-234651 | jenkins | v1.33.1 | 31 Jul 24 17:12 UTC | 31 Jul 24 17:12 UTC |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| stop    | ha-234651 stop -v=7                                                              | ha-234651 | jenkins | v1.33.1 | 31 Jul 24 17:12 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/31 17:09:08
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0731 17:09:08.717790   32734 out.go:291] Setting OutFile to fd 1 ...
	I0731 17:09:08.718017   32734 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 17:09:08.718026   32734 out.go:304] Setting ErrFile to fd 2...
	I0731 17:09:08.718029   32734 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 17:09:08.718188   32734 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19349-8084/.minikube/bin
	I0731 17:09:08.718712   32734 out.go:298] Setting JSON to false
	I0731 17:09:08.719579   32734 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":3093,"bootTime":1722442656,"procs":184,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1065-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0731 17:09:08.719641   32734 start.go:139] virtualization: kvm guest
	I0731 17:09:08.721842   32734 out.go:177] * [ha-234651] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0731 17:09:08.723171   32734 notify.go:220] Checking for updates...
	I0731 17:09:08.723182   32734 out.go:177]   - MINIKUBE_LOCATION=19349
	I0731 17:09:08.724464   32734 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0731 17:09:08.725704   32734 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19349-8084/kubeconfig
	I0731 17:09:08.727306   32734 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19349-8084/.minikube
	I0731 17:09:08.728843   32734 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0731 17:09:08.730226   32734 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0731 17:09:08.732147   32734 config.go:182] Loaded profile config "ha-234651": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0731 17:09:08.732326   32734 driver.go:392] Setting default libvirt URI to qemu:///system
	I0731 17:09:08.732956   32734 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 17:09:08.733018   32734 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 17:09:08.748449   32734 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41611
	I0731 17:09:08.748836   32734 main.go:141] libmachine: () Calling .GetVersion
	I0731 17:09:08.749561   32734 main.go:141] libmachine: Using API Version  1
	I0731 17:09:08.749589   32734 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 17:09:08.749965   32734 main.go:141] libmachine: () Calling .GetMachineName
	I0731 17:09:08.750164   32734 main.go:141] libmachine: (ha-234651) Calling .DriverName
	I0731 17:09:08.784306   32734 out.go:177] * Using the kvm2 driver based on existing profile
	I0731 17:09:08.785661   32734 start.go:297] selected driver: kvm2
	I0731 17:09:08.785673   32734 start.go:901] validating driver "kvm2" against &{Name:ha-234651 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVer
sion:v1.30.3 ClusterName:ha-234651 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.243 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.235 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.139 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.216 Port:0 KubernetesVersion:v1.30.3 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false e
fk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:
9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0731 17:09:08.785806   32734 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0731 17:09:08.786152   32734 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0731 17:09:08.786236   32734 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19349-8084/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0731 17:09:08.800568   32734 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0731 17:09:08.801221   32734 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0731 17:09:08.801261   32734 cni.go:84] Creating CNI manager for ""
	I0731 17:09:08.801273   32734 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0731 17:09:08.801360   32734 start.go:340] cluster config:
	{Name:ha-234651 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:ha-234651 Namespace:default APIServerHAVIP:192.168.39
.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.243 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.235 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.139 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.216 Port:0 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-ti
ller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountP
ort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0731 17:09:08.801491   32734 iso.go:125] acquiring lock: {Name:mk4494bb5a63902bf12a909c3109db85229bc516 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0731 17:09:08.803772   32734 out.go:177] * Starting "ha-234651" primary control-plane node in "ha-234651" cluster
	I0731 17:09:08.804901   32734 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0731 17:09:08.804932   32734 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19349-8084/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4
	I0731 17:09:08.804944   32734 cache.go:56] Caching tarball of preloaded images
	I0731 17:09:08.805041   32734 preload.go:172] Found /home/jenkins/minikube-integration/19349-8084/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0731 17:09:08.805052   32734 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on crio
	I0731 17:09:08.805167   32734 profile.go:143] Saving config to /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/ha-234651/config.json ...
	I0731 17:09:08.805373   32734 start.go:360] acquireMachinesLock for ha-234651: {Name:mkd78eacfab7d6c3058c6674434b3d889ec957e0 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0731 17:09:08.805419   32734 start.go:364] duration metric: took 27.67µs to acquireMachinesLock for "ha-234651"
	I0731 17:09:08.805438   32734 start.go:96] Skipping create...Using existing machine configuration
	I0731 17:09:08.805447   32734 fix.go:54] fixHost starting: 
	I0731 17:09:08.805714   32734 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 17:09:08.805750   32734 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 17:09:08.819617   32734 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39891
	I0731 17:09:08.819979   32734 main.go:141] libmachine: () Calling .GetVersion
	I0731 17:09:08.820450   32734 main.go:141] libmachine: Using API Version  1
	I0731 17:09:08.820473   32734 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 17:09:08.820815   32734 main.go:141] libmachine: () Calling .GetMachineName
	I0731 17:09:08.821006   32734 main.go:141] libmachine: (ha-234651) Calling .DriverName
	I0731 17:09:08.821156   32734 main.go:141] libmachine: (ha-234651) Calling .GetState
	I0731 17:09:08.822967   32734 fix.go:112] recreateIfNeeded on ha-234651: state=Running err=<nil>
	W0731 17:09:08.822985   32734 fix.go:138] unexpected machine state, will restart: <nil>
	I0731 17:09:08.824970   32734 out.go:177] * Updating the running kvm2 "ha-234651" VM ...
	I0731 17:09:08.826235   32734 machine.go:94] provisionDockerMachine start ...
	I0731 17:09:08.826253   32734 main.go:141] libmachine: (ha-234651) Calling .DriverName
	I0731 17:09:08.826448   32734 main.go:141] libmachine: (ha-234651) Calling .GetSSHHostname
	I0731 17:09:08.829388   32734 main.go:141] libmachine: (ha-234651) DBG | domain ha-234651 has defined MAC address 52:54:00:20:60:53 in network mk-ha-234651
	I0731 17:09:08.829875   32734 main.go:141] libmachine: (ha-234651) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:60:53", ip: ""} in network mk-ha-234651: {Iface:virbr1 ExpiryTime:2024-07-31 17:59:15 +0000 UTC Type:0 Mac:52:54:00:20:60:53 Iaid: IPaddr:192.168.39.243 Prefix:24 Hostname:ha-234651 Clientid:01:52:54:00:20:60:53}
	I0731 17:09:08.829893   32734 main.go:141] libmachine: (ha-234651) DBG | domain ha-234651 has defined IP address 192.168.39.243 and MAC address 52:54:00:20:60:53 in network mk-ha-234651
	I0731 17:09:08.830126   32734 main.go:141] libmachine: (ha-234651) Calling .GetSSHPort
	I0731 17:09:08.830272   32734 main.go:141] libmachine: (ha-234651) Calling .GetSSHKeyPath
	I0731 17:09:08.830409   32734 main.go:141] libmachine: (ha-234651) Calling .GetSSHKeyPath
	I0731 17:09:08.830509   32734 main.go:141] libmachine: (ha-234651) Calling .GetSSHUsername
	I0731 17:09:08.830683   32734 main.go:141] libmachine: Using SSH client type: native
	I0731 17:09:08.830853   32734 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.243 22 <nil> <nil>}
	I0731 17:09:08.830863   32734 main.go:141] libmachine: About to run SSH command:
	hostname
	I0731 17:09:08.940858   32734 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-234651
	
	I0731 17:09:08.940884   32734 main.go:141] libmachine: (ha-234651) Calling .GetMachineName
	I0731 17:09:08.941089   32734 buildroot.go:166] provisioning hostname "ha-234651"
	I0731 17:09:08.941110   32734 main.go:141] libmachine: (ha-234651) Calling .GetMachineName
	I0731 17:09:08.941288   32734 main.go:141] libmachine: (ha-234651) Calling .GetSSHHostname
	I0731 17:09:08.943973   32734 main.go:141] libmachine: (ha-234651) DBG | domain ha-234651 has defined MAC address 52:54:00:20:60:53 in network mk-ha-234651
	I0731 17:09:08.944392   32734 main.go:141] libmachine: (ha-234651) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:60:53", ip: ""} in network mk-ha-234651: {Iface:virbr1 ExpiryTime:2024-07-31 17:59:15 +0000 UTC Type:0 Mac:52:54:00:20:60:53 Iaid: IPaddr:192.168.39.243 Prefix:24 Hostname:ha-234651 Clientid:01:52:54:00:20:60:53}
	I0731 17:09:08.944431   32734 main.go:141] libmachine: (ha-234651) DBG | domain ha-234651 has defined IP address 192.168.39.243 and MAC address 52:54:00:20:60:53 in network mk-ha-234651
	I0731 17:09:08.944549   32734 main.go:141] libmachine: (ha-234651) Calling .GetSSHPort
	I0731 17:09:08.944732   32734 main.go:141] libmachine: (ha-234651) Calling .GetSSHKeyPath
	I0731 17:09:08.944880   32734 main.go:141] libmachine: (ha-234651) Calling .GetSSHKeyPath
	I0731 17:09:08.945046   32734 main.go:141] libmachine: (ha-234651) Calling .GetSSHUsername
	I0731 17:09:08.945208   32734 main.go:141] libmachine: Using SSH client type: native
	I0731 17:09:08.945396   32734 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.243 22 <nil> <nil>}
	I0731 17:09:08.945410   32734 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-234651 && echo "ha-234651" | sudo tee /etc/hostname
	I0731 17:09:09.069576   32734 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-234651
	
	I0731 17:09:09.069616   32734 main.go:141] libmachine: (ha-234651) Calling .GetSSHHostname
	I0731 17:09:09.072262   32734 main.go:141] libmachine: (ha-234651) DBG | domain ha-234651 has defined MAC address 52:54:00:20:60:53 in network mk-ha-234651
	I0731 17:09:09.072635   32734 main.go:141] libmachine: (ha-234651) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:60:53", ip: ""} in network mk-ha-234651: {Iface:virbr1 ExpiryTime:2024-07-31 17:59:15 +0000 UTC Type:0 Mac:52:54:00:20:60:53 Iaid: IPaddr:192.168.39.243 Prefix:24 Hostname:ha-234651 Clientid:01:52:54:00:20:60:53}
	I0731 17:09:09.072657   32734 main.go:141] libmachine: (ha-234651) DBG | domain ha-234651 has defined IP address 192.168.39.243 and MAC address 52:54:00:20:60:53 in network mk-ha-234651
	I0731 17:09:09.072862   32734 main.go:141] libmachine: (ha-234651) Calling .GetSSHPort
	I0731 17:09:09.073044   32734 main.go:141] libmachine: (ha-234651) Calling .GetSSHKeyPath
	I0731 17:09:09.073210   32734 main.go:141] libmachine: (ha-234651) Calling .GetSSHKeyPath
	I0731 17:09:09.073324   32734 main.go:141] libmachine: (ha-234651) Calling .GetSSHUsername
	I0731 17:09:09.073479   32734 main.go:141] libmachine: Using SSH client type: native
	I0731 17:09:09.073646   32734 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.243 22 <nil> <nil>}
	I0731 17:09:09.073668   32734 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-234651' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-234651/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-234651' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0731 17:09:09.188806   32734 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0731 17:09:09.188830   32734 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19349-8084/.minikube CaCertPath:/home/jenkins/minikube-integration/19349-8084/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19349-8084/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19349-8084/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19349-8084/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19349-8084/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19349-8084/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19349-8084/.minikube}
	I0731 17:09:09.188861   32734 buildroot.go:174] setting up certificates
	I0731 17:09:09.188871   32734 provision.go:84] configureAuth start
	I0731 17:09:09.188885   32734 main.go:141] libmachine: (ha-234651) Calling .GetMachineName
	I0731 17:09:09.189124   32734 main.go:141] libmachine: (ha-234651) Calling .GetIP
	I0731 17:09:09.191631   32734 main.go:141] libmachine: (ha-234651) DBG | domain ha-234651 has defined MAC address 52:54:00:20:60:53 in network mk-ha-234651
	I0731 17:09:09.192134   32734 main.go:141] libmachine: (ha-234651) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:60:53", ip: ""} in network mk-ha-234651: {Iface:virbr1 ExpiryTime:2024-07-31 17:59:15 +0000 UTC Type:0 Mac:52:54:00:20:60:53 Iaid: IPaddr:192.168.39.243 Prefix:24 Hostname:ha-234651 Clientid:01:52:54:00:20:60:53}
	I0731 17:09:09.192161   32734 main.go:141] libmachine: (ha-234651) DBG | domain ha-234651 has defined IP address 192.168.39.243 and MAC address 52:54:00:20:60:53 in network mk-ha-234651
	I0731 17:09:09.192303   32734 main.go:141] libmachine: (ha-234651) Calling .GetSSHHostname
	I0731 17:09:09.194709   32734 main.go:141] libmachine: (ha-234651) DBG | domain ha-234651 has defined MAC address 52:54:00:20:60:53 in network mk-ha-234651
	I0731 17:09:09.195079   32734 main.go:141] libmachine: (ha-234651) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:60:53", ip: ""} in network mk-ha-234651: {Iface:virbr1 ExpiryTime:2024-07-31 17:59:15 +0000 UTC Type:0 Mac:52:54:00:20:60:53 Iaid: IPaddr:192.168.39.243 Prefix:24 Hostname:ha-234651 Clientid:01:52:54:00:20:60:53}
	I0731 17:09:09.195100   32734 main.go:141] libmachine: (ha-234651) DBG | domain ha-234651 has defined IP address 192.168.39.243 and MAC address 52:54:00:20:60:53 in network mk-ha-234651
	I0731 17:09:09.195267   32734 provision.go:143] copyHostCerts
	I0731 17:09:09.195290   32734 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19349-8084/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19349-8084/.minikube/ca.pem
	I0731 17:09:09.195318   32734 exec_runner.go:144] found /home/jenkins/minikube-integration/19349-8084/.minikube/ca.pem, removing ...
	I0731 17:09:09.195327   32734 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19349-8084/.minikube/ca.pem
	I0731 17:09:09.195391   32734 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19349-8084/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19349-8084/.minikube/ca.pem (1082 bytes)
	I0731 17:09:09.195464   32734 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19349-8084/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19349-8084/.minikube/cert.pem
	I0731 17:09:09.195485   32734 exec_runner.go:144] found /home/jenkins/minikube-integration/19349-8084/.minikube/cert.pem, removing ...
	I0731 17:09:09.195491   32734 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19349-8084/.minikube/cert.pem
	I0731 17:09:09.195515   32734 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19349-8084/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19349-8084/.minikube/cert.pem (1123 bytes)
	I0731 17:09:09.195554   32734 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19349-8084/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19349-8084/.minikube/key.pem
	I0731 17:09:09.195570   32734 exec_runner.go:144] found /home/jenkins/minikube-integration/19349-8084/.minikube/key.pem, removing ...
	I0731 17:09:09.195576   32734 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19349-8084/.minikube/key.pem
	I0731 17:09:09.195601   32734 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19349-8084/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19349-8084/.minikube/key.pem (1675 bytes)
	I0731 17:09:09.195644   32734 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19349-8084/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19349-8084/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19349-8084/.minikube/certs/ca-key.pem org=jenkins.ha-234651 san=[127.0.0.1 192.168.39.243 ha-234651 localhost minikube]
	I0731 17:09:09.303470   32734 provision.go:177] copyRemoteCerts
	I0731 17:09:09.303528   32734 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0731 17:09:09.303554   32734 main.go:141] libmachine: (ha-234651) Calling .GetSSHHostname
	I0731 17:09:09.306262   32734 main.go:141] libmachine: (ha-234651) DBG | domain ha-234651 has defined MAC address 52:54:00:20:60:53 in network mk-ha-234651
	I0731 17:09:09.306656   32734 main.go:141] libmachine: (ha-234651) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:60:53", ip: ""} in network mk-ha-234651: {Iface:virbr1 ExpiryTime:2024-07-31 17:59:15 +0000 UTC Type:0 Mac:52:54:00:20:60:53 Iaid: IPaddr:192.168.39.243 Prefix:24 Hostname:ha-234651 Clientid:01:52:54:00:20:60:53}
	I0731 17:09:09.306676   32734 main.go:141] libmachine: (ha-234651) DBG | domain ha-234651 has defined IP address 192.168.39.243 and MAC address 52:54:00:20:60:53 in network mk-ha-234651
	I0731 17:09:09.306933   32734 main.go:141] libmachine: (ha-234651) Calling .GetSSHPort
	I0731 17:09:09.307131   32734 main.go:141] libmachine: (ha-234651) Calling .GetSSHKeyPath
	I0731 17:09:09.307290   32734 main.go:141] libmachine: (ha-234651) Calling .GetSSHUsername
	I0731 17:09:09.307445   32734 sshutil.go:53] new ssh client: &{IP:192.168.39.243 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19349-8084/.minikube/machines/ha-234651/id_rsa Username:docker}
	I0731 17:09:09.389166   32734 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19349-8084/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0731 17:09:09.389229   32734 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19349-8084/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0731 17:09:09.412205   32734 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19349-8084/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0731 17:09:09.412281   32734 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19349-8084/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0731 17:09:09.434220   32734 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19349-8084/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0731 17:09:09.434296   32734 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19349-8084/.minikube/machines/server.pem --> /etc/docker/server.pem (1196 bytes)
	I0731 17:09:09.457865   32734 provision.go:87] duration metric: took 268.960736ms to configureAuth
	I0731 17:09:09.457895   32734 buildroot.go:189] setting minikube options for container-runtime
	I0731 17:09:09.458146   32734 config.go:182] Loaded profile config "ha-234651": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0731 17:09:09.458238   32734 main.go:141] libmachine: (ha-234651) Calling .GetSSHHostname
	I0731 17:09:09.460867   32734 main.go:141] libmachine: (ha-234651) DBG | domain ha-234651 has defined MAC address 52:54:00:20:60:53 in network mk-ha-234651
	I0731 17:09:09.461212   32734 main.go:141] libmachine: (ha-234651) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:60:53", ip: ""} in network mk-ha-234651: {Iface:virbr1 ExpiryTime:2024-07-31 17:59:15 +0000 UTC Type:0 Mac:52:54:00:20:60:53 Iaid: IPaddr:192.168.39.243 Prefix:24 Hostname:ha-234651 Clientid:01:52:54:00:20:60:53}
	I0731 17:09:09.461238   32734 main.go:141] libmachine: (ha-234651) DBG | domain ha-234651 has defined IP address 192.168.39.243 and MAC address 52:54:00:20:60:53 in network mk-ha-234651
	I0731 17:09:09.461454   32734 main.go:141] libmachine: (ha-234651) Calling .GetSSHPort
	I0731 17:09:09.461607   32734 main.go:141] libmachine: (ha-234651) Calling .GetSSHKeyPath
	I0731 17:09:09.461847   32734 main.go:141] libmachine: (ha-234651) Calling .GetSSHKeyPath
	I0731 17:09:09.461983   32734 main.go:141] libmachine: (ha-234651) Calling .GetSSHUsername
	I0731 17:09:09.462159   32734 main.go:141] libmachine: Using SSH client type: native
	I0731 17:09:09.462325   32734 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.243 22 <nil> <nil>}
	I0731 17:09:09.462338   32734 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0731 17:09:15.084797   32734 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0731 17:09:15.084823   32734 machine.go:97] duration metric: took 6.258576547s to provisionDockerMachine
	I0731 17:09:15.084837   32734 start.go:293] postStartSetup for "ha-234651" (driver="kvm2")
	I0731 17:09:15.084852   32734 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0731 17:09:15.084886   32734 main.go:141] libmachine: (ha-234651) Calling .DriverName
	I0731 17:09:15.085227   32734 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0731 17:09:15.085262   32734 main.go:141] libmachine: (ha-234651) Calling .GetSSHHostname
	I0731 17:09:15.088116   32734 main.go:141] libmachine: (ha-234651) DBG | domain ha-234651 has defined MAC address 52:54:00:20:60:53 in network mk-ha-234651
	I0731 17:09:15.088527   32734 main.go:141] libmachine: (ha-234651) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:60:53", ip: ""} in network mk-ha-234651: {Iface:virbr1 ExpiryTime:2024-07-31 17:59:15 +0000 UTC Type:0 Mac:52:54:00:20:60:53 Iaid: IPaddr:192.168.39.243 Prefix:24 Hostname:ha-234651 Clientid:01:52:54:00:20:60:53}
	I0731 17:09:15.088555   32734 main.go:141] libmachine: (ha-234651) DBG | domain ha-234651 has defined IP address 192.168.39.243 and MAC address 52:54:00:20:60:53 in network mk-ha-234651
	I0731 17:09:15.088675   32734 main.go:141] libmachine: (ha-234651) Calling .GetSSHPort
	I0731 17:09:15.088853   32734 main.go:141] libmachine: (ha-234651) Calling .GetSSHKeyPath
	I0731 17:09:15.089022   32734 main.go:141] libmachine: (ha-234651) Calling .GetSSHUsername
	I0731 17:09:15.089139   32734 sshutil.go:53] new ssh client: &{IP:192.168.39.243 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19349-8084/.minikube/machines/ha-234651/id_rsa Username:docker}
	I0731 17:09:15.173391   32734 ssh_runner.go:195] Run: cat /etc/os-release
	I0731 17:09:15.177247   32734 info.go:137] Remote host: Buildroot 2023.02.9
	I0731 17:09:15.177277   32734 filesync.go:126] Scanning /home/jenkins/minikube-integration/19349-8084/.minikube/addons for local assets ...
	I0731 17:09:15.177333   32734 filesync.go:126] Scanning /home/jenkins/minikube-integration/19349-8084/.minikube/files for local assets ...
	I0731 17:09:15.177443   32734 filesync.go:149] local asset: /home/jenkins/minikube-integration/19349-8084/.minikube/files/etc/ssl/certs/152592.pem -> 152592.pem in /etc/ssl/certs
	I0731 17:09:15.177455   32734 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19349-8084/.minikube/files/etc/ssl/certs/152592.pem -> /etc/ssl/certs/152592.pem
	I0731 17:09:15.177544   32734 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0731 17:09:15.186304   32734 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19349-8084/.minikube/files/etc/ssl/certs/152592.pem --> /etc/ssl/certs/152592.pem (1708 bytes)
	I0731 17:09:15.208295   32734 start.go:296] duration metric: took 123.442529ms for postStartSetup
	I0731 17:09:15.208363   32734 main.go:141] libmachine: (ha-234651) Calling .DriverName
	I0731 17:09:15.208642   32734 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I0731 17:09:15.208667   32734 main.go:141] libmachine: (ha-234651) Calling .GetSSHHostname
	I0731 17:09:15.211473   32734 main.go:141] libmachine: (ha-234651) DBG | domain ha-234651 has defined MAC address 52:54:00:20:60:53 in network mk-ha-234651
	I0731 17:09:15.211840   32734 main.go:141] libmachine: (ha-234651) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:60:53", ip: ""} in network mk-ha-234651: {Iface:virbr1 ExpiryTime:2024-07-31 17:59:15 +0000 UTC Type:0 Mac:52:54:00:20:60:53 Iaid: IPaddr:192.168.39.243 Prefix:24 Hostname:ha-234651 Clientid:01:52:54:00:20:60:53}
	I0731 17:09:15.211864   32734 main.go:141] libmachine: (ha-234651) DBG | domain ha-234651 has defined IP address 192.168.39.243 and MAC address 52:54:00:20:60:53 in network mk-ha-234651
	I0731 17:09:15.212011   32734 main.go:141] libmachine: (ha-234651) Calling .GetSSHPort
	I0731 17:09:15.212185   32734 main.go:141] libmachine: (ha-234651) Calling .GetSSHKeyPath
	I0731 17:09:15.212358   32734 main.go:141] libmachine: (ha-234651) Calling .GetSSHUsername
	I0731 17:09:15.212492   32734 sshutil.go:53] new ssh client: &{IP:192.168.39.243 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19349-8084/.minikube/machines/ha-234651/id_rsa Username:docker}
	W0731 17:09:15.293108   32734 fix.go:99] cannot read backup folder, skipping restore: read dir: sudo ls --almost-all -1 /var/lib/minikube/backup: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/backup': No such file or directory
	I0731 17:09:15.293131   32734 fix.go:56] duration metric: took 6.487685825s for fixHost
	I0731 17:09:15.293151   32734 main.go:141] libmachine: (ha-234651) Calling .GetSSHHostname
	I0731 17:09:15.295915   32734 main.go:141] libmachine: (ha-234651) DBG | domain ha-234651 has defined MAC address 52:54:00:20:60:53 in network mk-ha-234651
	I0731 17:09:15.296261   32734 main.go:141] libmachine: (ha-234651) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:60:53", ip: ""} in network mk-ha-234651: {Iface:virbr1 ExpiryTime:2024-07-31 17:59:15 +0000 UTC Type:0 Mac:52:54:00:20:60:53 Iaid: IPaddr:192.168.39.243 Prefix:24 Hostname:ha-234651 Clientid:01:52:54:00:20:60:53}
	I0731 17:09:15.296285   32734 main.go:141] libmachine: (ha-234651) DBG | domain ha-234651 has defined IP address 192.168.39.243 and MAC address 52:54:00:20:60:53 in network mk-ha-234651
	I0731 17:09:15.296476   32734 main.go:141] libmachine: (ha-234651) Calling .GetSSHPort
	I0731 17:09:15.296674   32734 main.go:141] libmachine: (ha-234651) Calling .GetSSHKeyPath
	I0731 17:09:15.296838   32734 main.go:141] libmachine: (ha-234651) Calling .GetSSHKeyPath
	I0731 17:09:15.296980   32734 main.go:141] libmachine: (ha-234651) Calling .GetSSHUsername
	I0731 17:09:15.297126   32734 main.go:141] libmachine: Using SSH client type: native
	I0731 17:09:15.297385   32734 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.243 22 <nil> <nil>}
	I0731 17:09:15.297399   32734 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0731 17:09:15.407492   32734 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722445755.370029814
	
	I0731 17:09:15.407513   32734 fix.go:216] guest clock: 1722445755.370029814
	I0731 17:09:15.407520   32734 fix.go:229] Guest: 2024-07-31 17:09:15.370029814 +0000 UTC Remote: 2024-07-31 17:09:15.293137867 +0000 UTC m=+6.609119522 (delta=76.891947ms)
	I0731 17:09:15.407537   32734 fix.go:200] guest clock delta is within tolerance: 76.891947ms
	I0731 17:09:15.407541   32734 start.go:83] releasing machines lock for "ha-234651", held for 6.602112249s
	I0731 17:09:15.407558   32734 main.go:141] libmachine: (ha-234651) Calling .DriverName
	I0731 17:09:15.407825   32734 main.go:141] libmachine: (ha-234651) Calling .GetIP
	I0731 17:09:15.410601   32734 main.go:141] libmachine: (ha-234651) DBG | domain ha-234651 has defined MAC address 52:54:00:20:60:53 in network mk-ha-234651
	I0731 17:09:15.410985   32734 main.go:141] libmachine: (ha-234651) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:60:53", ip: ""} in network mk-ha-234651: {Iface:virbr1 ExpiryTime:2024-07-31 17:59:15 +0000 UTC Type:0 Mac:52:54:00:20:60:53 Iaid: IPaddr:192.168.39.243 Prefix:24 Hostname:ha-234651 Clientid:01:52:54:00:20:60:53}
	I0731 17:09:15.411012   32734 main.go:141] libmachine: (ha-234651) DBG | domain ha-234651 has defined IP address 192.168.39.243 and MAC address 52:54:00:20:60:53 in network mk-ha-234651
	I0731 17:09:15.411197   32734 main.go:141] libmachine: (ha-234651) Calling .DriverName
	I0731 17:09:15.411652   32734 main.go:141] libmachine: (ha-234651) Calling .DriverName
	I0731 17:09:15.411800   32734 main.go:141] libmachine: (ha-234651) Calling .DriverName
	I0731 17:09:15.411895   32734 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0731 17:09:15.411931   32734 main.go:141] libmachine: (ha-234651) Calling .GetSSHHostname
	I0731 17:09:15.412019   32734 ssh_runner.go:195] Run: cat /version.json
	I0731 17:09:15.412044   32734 main.go:141] libmachine: (ha-234651) Calling .GetSSHHostname
	I0731 17:09:15.414601   32734 main.go:141] libmachine: (ha-234651) DBG | domain ha-234651 has defined MAC address 52:54:00:20:60:53 in network mk-ha-234651
	I0731 17:09:15.414703   32734 main.go:141] libmachine: (ha-234651) DBG | domain ha-234651 has defined MAC address 52:54:00:20:60:53 in network mk-ha-234651
	I0731 17:09:15.414930   32734 main.go:141] libmachine: (ha-234651) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:60:53", ip: ""} in network mk-ha-234651: {Iface:virbr1 ExpiryTime:2024-07-31 17:59:15 +0000 UTC Type:0 Mac:52:54:00:20:60:53 Iaid: IPaddr:192.168.39.243 Prefix:24 Hostname:ha-234651 Clientid:01:52:54:00:20:60:53}
	I0731 17:09:15.414959   32734 main.go:141] libmachine: (ha-234651) DBG | domain ha-234651 has defined IP address 192.168.39.243 and MAC address 52:54:00:20:60:53 in network mk-ha-234651
	I0731 17:09:15.414987   32734 main.go:141] libmachine: (ha-234651) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:60:53", ip: ""} in network mk-ha-234651: {Iface:virbr1 ExpiryTime:2024-07-31 17:59:15 +0000 UTC Type:0 Mac:52:54:00:20:60:53 Iaid: IPaddr:192.168.39.243 Prefix:24 Hostname:ha-234651 Clientid:01:52:54:00:20:60:53}
	I0731 17:09:15.415001   32734 main.go:141] libmachine: (ha-234651) DBG | domain ha-234651 has defined IP address 192.168.39.243 and MAC address 52:54:00:20:60:53 in network mk-ha-234651
	I0731 17:09:15.415033   32734 main.go:141] libmachine: (ha-234651) Calling .GetSSHPort
	I0731 17:09:15.415223   32734 main.go:141] libmachine: (ha-234651) Calling .GetSSHKeyPath
	I0731 17:09:15.415300   32734 main.go:141] libmachine: (ha-234651) Calling .GetSSHPort
	I0731 17:09:15.415393   32734 main.go:141] libmachine: (ha-234651) Calling .GetSSHUsername
	I0731 17:09:15.415453   32734 main.go:141] libmachine: (ha-234651) Calling .GetSSHKeyPath
	I0731 17:09:15.415526   32734 sshutil.go:53] new ssh client: &{IP:192.168.39.243 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19349-8084/.minikube/machines/ha-234651/id_rsa Username:docker}
	I0731 17:09:15.415590   32734 main.go:141] libmachine: (ha-234651) Calling .GetSSHUsername
	I0731 17:09:15.415728   32734 sshutil.go:53] new ssh client: &{IP:192.168.39.243 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19349-8084/.minikube/machines/ha-234651/id_rsa Username:docker}
	I0731 17:09:15.532253   32734 ssh_runner.go:195] Run: systemctl --version
	I0731 17:09:15.538160   32734 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0731 17:09:15.695517   32734 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0731 17:09:15.701921   32734 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0731 17:09:15.701976   32734 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0731 17:09:15.710737   32734 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0731 17:09:15.710759   32734 start.go:495] detecting cgroup driver to use...
	I0731 17:09:15.710817   32734 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0731 17:09:15.727388   32734 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0731 17:09:15.740721   32734 docker.go:217] disabling cri-docker service (if available) ...
	I0731 17:09:15.740788   32734 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0731 17:09:15.753435   32734 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0731 17:09:15.766082   32734 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0731 17:09:15.903783   32734 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0731 17:09:16.046196   32734 docker.go:233] disabling docker service ...
	I0731 17:09:16.046272   32734 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0731 17:09:16.063469   32734 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0731 17:09:16.076872   32734 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0731 17:09:16.213795   32734 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0731 17:09:16.351321   32734 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0731 17:09:16.364865   32734 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0731 17:09:16.382292   32734 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0731 17:09:16.382382   32734 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 17:09:16.391983   32734 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0731 17:09:16.392039   32734 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 17:09:16.401399   32734 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 17:09:16.410650   32734 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 17:09:16.420090   32734 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0731 17:09:16.429604   32734 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 17:09:16.439009   32734 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 17:09:16.448918   32734 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 17:09:16.458264   32734 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0731 17:09:16.466590   32734 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0731 17:09:16.474941   32734 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0731 17:09:16.609915   32734 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0731 17:09:23.853312   32734 ssh_runner.go:235] Completed: sudo systemctl restart crio: (7.243356887s)
	I0731 17:09:23.853342   32734 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0731 17:09:23.853395   32734 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0731 17:09:23.858115   32734 start.go:563] Will wait 60s for crictl version
	I0731 17:09:23.858167   32734 ssh_runner.go:195] Run: which crictl
	I0731 17:09:23.861681   32734 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0731 17:09:23.896810   32734 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0731 17:09:23.896892   32734 ssh_runner.go:195] Run: crio --version
	I0731 17:09:23.930309   32734 ssh_runner.go:195] Run: crio --version
	I0731 17:09:23.958897   32734 out.go:177] * Preparing Kubernetes v1.30.3 on CRI-O 1.29.1 ...
	I0731 17:09:23.960154   32734 main.go:141] libmachine: (ha-234651) Calling .GetIP
	I0731 17:09:23.963105   32734 main.go:141] libmachine: (ha-234651) DBG | domain ha-234651 has defined MAC address 52:54:00:20:60:53 in network mk-ha-234651
	I0731 17:09:23.963461   32734 main.go:141] libmachine: (ha-234651) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:60:53", ip: ""} in network mk-ha-234651: {Iface:virbr1 ExpiryTime:2024-07-31 17:59:15 +0000 UTC Type:0 Mac:52:54:00:20:60:53 Iaid: IPaddr:192.168.39.243 Prefix:24 Hostname:ha-234651 Clientid:01:52:54:00:20:60:53}
	I0731 17:09:23.963485   32734 main.go:141] libmachine: (ha-234651) DBG | domain ha-234651 has defined IP address 192.168.39.243 and MAC address 52:54:00:20:60:53 in network mk-ha-234651
	I0731 17:09:23.963699   32734 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0731 17:09:23.968369   32734 kubeadm.go:883] updating cluster {Name:ha-234651 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 Cl
usterName:ha-234651 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.243 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.235 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.139 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.216 Port:0 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false fr
eshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L Mo
untGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0731 17:09:23.968489   32734 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0731 17:09:23.968534   32734 ssh_runner.go:195] Run: sudo crictl images --output json
	I0731 17:09:24.023060   32734 crio.go:514] all images are preloaded for cri-o runtime.
	I0731 17:09:24.023079   32734 crio.go:433] Images already preloaded, skipping extraction
	I0731 17:09:24.023150   32734 ssh_runner.go:195] Run: sudo crictl images --output json
	I0731 17:09:24.054846   32734 crio.go:514] all images are preloaded for cri-o runtime.
	I0731 17:09:24.054870   32734 cache_images.go:84] Images are preloaded, skipping loading
	I0731 17:09:24.054881   32734 kubeadm.go:934] updating node { 192.168.39.243 8443 v1.30.3 crio true true} ...
	I0731 17:09:24.055008   32734 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-234651 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.243
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:ha-234651 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0731 17:09:24.055094   32734 ssh_runner.go:195] Run: crio config
	I0731 17:09:24.108571   32734 cni.go:84] Creating CNI manager for ""
	I0731 17:09:24.108594   32734 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0731 17:09:24.108608   32734 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0731 17:09:24.108648   32734 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.243 APIServerPort:8443 KubernetesVersion:v1.30.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-234651 NodeName:ha-234651 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.243"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.243 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernete
s/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0731 17:09:24.108819   32734 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.243
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-234651"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.243
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.243"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0731 17:09:24.108844   32734 kube-vip.go:115] generating kube-vip config ...
	I0731 17:09:24.108915   32734 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0731 17:09:24.119465   32734 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0731 17:09:24.119563   32734 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I0731 17:09:24.119619   32734 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0731 17:09:24.128151   32734 binaries.go:44] Found k8s binaries, skipping transfer
	I0731 17:09:24.128199   32734 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0731 17:09:24.136567   32734 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (309 bytes)
	I0731 17:09:24.151554   32734 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0731 17:09:24.166604   32734 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2153 bytes)
	I0731 17:09:24.181883   32734 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I0731 17:09:24.197927   32734 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0731 17:09:24.202365   32734 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0731 17:09:24.335043   32734 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0731 17:09:24.349103   32734 certs.go:68] Setting up /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/ha-234651 for IP: 192.168.39.243
	I0731 17:09:24.349125   32734 certs.go:194] generating shared ca certs ...
	I0731 17:09:24.349145   32734 certs.go:226] acquiring lock for ca certs: {Name:mk49eed439085f9c30746706460ce213570d1997 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 17:09:24.349308   32734 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19349-8084/.minikube/ca.key
	I0731 17:09:24.349364   32734 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19349-8084/.minikube/proxy-client-ca.key
	I0731 17:09:24.349375   32734 certs.go:256] generating profile certs ...
	I0731 17:09:24.349477   32734 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/ha-234651/client.key
	I0731 17:09:24.349512   32734 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/ha-234651/apiserver.key.ebf2528d
	I0731 17:09:24.349537   32734 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/ha-234651/apiserver.crt.ebf2528d with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.243 192.168.39.235 192.168.39.139 192.168.39.254]
	I0731 17:09:24.405668   32734 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/ha-234651/apiserver.crt.ebf2528d ...
	I0731 17:09:24.405699   32734 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/ha-234651/apiserver.crt.ebf2528d: {Name:mk62af08812ccea9aa161fffb4d843357d3b7fba Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 17:09:24.405875   32734 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/ha-234651/apiserver.key.ebf2528d ...
	I0731 17:09:24.405888   32734 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/ha-234651/apiserver.key.ebf2528d: {Name:mkeb68c589be161c2a7ec2258557e2505fc47d80 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 17:09:24.405962   32734 certs.go:381] copying /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/ha-234651/apiserver.crt.ebf2528d -> /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/ha-234651/apiserver.crt
	I0731 17:09:24.406151   32734 certs.go:385] copying /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/ha-234651/apiserver.key.ebf2528d -> /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/ha-234651/apiserver.key
	I0731 17:09:24.406295   32734 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/ha-234651/proxy-client.key
	I0731 17:09:24.406310   32734 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19349-8084/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0731 17:09:24.406324   32734 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19349-8084/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0731 17:09:24.406342   32734 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19349-8084/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0731 17:09:24.406359   32734 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19349-8084/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0731 17:09:24.406374   32734 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/ha-234651/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0731 17:09:24.406389   32734 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/ha-234651/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0731 17:09:24.406403   32734 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/ha-234651/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0731 17:09:24.406417   32734 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/ha-234651/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0731 17:09:24.406472   32734 certs.go:484] found cert: /home/jenkins/minikube-integration/19349-8084/.minikube/certs/15259.pem (1338 bytes)
	W0731 17:09:24.406505   32734 certs.go:480] ignoring /home/jenkins/minikube-integration/19349-8084/.minikube/certs/15259_empty.pem, impossibly tiny 0 bytes
	I0731 17:09:24.406516   32734 certs.go:484] found cert: /home/jenkins/minikube-integration/19349-8084/.minikube/certs/ca-key.pem (1675 bytes)
	I0731 17:09:24.406551   32734 certs.go:484] found cert: /home/jenkins/minikube-integration/19349-8084/.minikube/certs/ca.pem (1082 bytes)
	I0731 17:09:24.406576   32734 certs.go:484] found cert: /home/jenkins/minikube-integration/19349-8084/.minikube/certs/cert.pem (1123 bytes)
	I0731 17:09:24.406600   32734 certs.go:484] found cert: /home/jenkins/minikube-integration/19349-8084/.minikube/certs/key.pem (1675 bytes)
	I0731 17:09:24.406640   32734 certs.go:484] found cert: /home/jenkins/minikube-integration/19349-8084/.minikube/files/etc/ssl/certs/152592.pem (1708 bytes)
	I0731 17:09:24.406672   32734 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19349-8084/.minikube/certs/15259.pem -> /usr/share/ca-certificates/15259.pem
	I0731 17:09:24.406696   32734 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19349-8084/.minikube/files/etc/ssl/certs/152592.pem -> /usr/share/ca-certificates/152592.pem
	I0731 17:09:24.406711   32734 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19349-8084/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0731 17:09:24.407332   32734 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19349-8084/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0731 17:09:24.430792   32734 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19349-8084/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0731 17:09:24.452679   32734 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19349-8084/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0731 17:09:24.474405   32734 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19349-8084/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0731 17:09:24.495683   32734 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/ha-234651/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0731 17:09:24.517072   32734 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/ha-234651/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0731 17:09:24.538126   32734 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/ha-234651/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0731 17:09:24.560173   32734 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/ha-234651/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0731 17:09:24.581503   32734 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19349-8084/.minikube/certs/15259.pem --> /usr/share/ca-certificates/15259.pem (1338 bytes)
	I0731 17:09:24.603405   32734 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19349-8084/.minikube/files/etc/ssl/certs/152592.pem --> /usr/share/ca-certificates/152592.pem (1708 bytes)
	I0731 17:09:24.625241   32734 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19349-8084/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0731 17:09:24.646748   32734 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0731 17:09:24.661823   32734 ssh_runner.go:195] Run: openssl version
	I0731 17:09:24.667159   32734 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/15259.pem && ln -fs /usr/share/ca-certificates/15259.pem /etc/ssl/certs/15259.pem"
	I0731 17:09:24.676800   32734 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/15259.pem
	I0731 17:09:24.680598   32734 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 31 16:54 /usr/share/ca-certificates/15259.pem
	I0731 17:09:24.680635   32734 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/15259.pem
	I0731 17:09:24.686287   32734 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/15259.pem /etc/ssl/certs/51391683.0"
	I0731 17:09:24.694849   32734 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/152592.pem && ln -fs /usr/share/ca-certificates/152592.pem /etc/ssl/certs/152592.pem"
	I0731 17:09:24.704231   32734 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/152592.pem
	I0731 17:09:24.708308   32734 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 31 16:54 /usr/share/ca-certificates/152592.pem
	I0731 17:09:24.708361   32734 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/152592.pem
	I0731 17:09:24.713520   32734 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/152592.pem /etc/ssl/certs/3ec20f2e.0"
	I0731 17:09:24.721797   32734 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0731 17:09:24.731260   32734 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0731 17:09:24.735179   32734 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 31 16:42 /usr/share/ca-certificates/minikubeCA.pem
	I0731 17:09:24.735222   32734 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0731 17:09:24.740327   32734 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0731 17:09:24.748945   32734 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0731 17:09:24.753014   32734 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0731 17:09:24.758189   32734 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0731 17:09:24.764247   32734 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0731 17:09:24.769339   32734 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0731 17:09:24.774540   32734 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0731 17:09:24.779705   32734 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0731 17:09:24.784914   32734 kubeadm.go:392] StartCluster: {Name:ha-234651 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 Clust
erName:ha-234651 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.243 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.235 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.139 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.216 Port:0 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false fresh
pod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L Mount
GID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0731 17:09:24.785020   32734 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0731 17:09:24.785077   32734 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0731 17:09:24.820286   32734 cri.go:89] found id: "09c1fa932954d5765a611e206a5253d821dc0f2181c24e463a1ba6e7ed54a1a0"
	I0731 17:09:24.820304   32734 cri.go:89] found id: "0509b6238779790d1e64bfb61c9fc8ae3dc4fae67353192749c1b11636b82330"
	I0731 17:09:24.820308   32734 cri.go:89] found id: "8de99430711a8f55b548d2738f7eced3012b6d52c2c9faaae56b3e46319ac1e3"
	I0731 17:09:24.820311   32734 cri.go:89] found id: "d4dd0e0cc1ff97ea83c6b7a5f1f719d210801aec9cb468e32188c6c4096b4483"
	I0731 17:09:24.820314   32734 cri.go:89] found id: "5e4d66f773ff46f7a130d9359ac1316e1117516ebef8b32f9b20515706b3a2f8"
	I0731 17:09:24.820316   32734 cri.go:89] found id: "e8ef655791fe4b0b98612bab1bc9fea7e42c73e5327bdd0654676979cb2e8542"
	I0731 17:09:24.820319   32734 cri.go:89] found id: "dd9f6c4536535bb3eb07d6fa75fe49579d9cdee5ff6e4c440e0a58359d4f2f08"
	I0731 17:09:24.820321   32734 cri.go:89] found id: "631c8cee6152ae1808419f27d4b63fa0a6bcebb15c757dd96090c6267d63e6c2"
	I0731 17:09:24.820324   32734 cri.go:89] found id: "639ed1a246cfddb5bcce86bc66133eebc7339f13348608e46548a872b8df456e"
	I0731 17:09:24.820329   32734 cri.go:89] found id: "e5b3417940cd8bf8065dfefafb918da0e5bfda9fd5f7a2f9ccea0f1615151657"
	I0731 17:09:24.820333   32734 cri.go:89] found id: "b48ac56e48fe04efb8cbe0a1f24302f4ddd366a42a7ba2d4395113bf2157a3ea"
	I0731 17:09:24.820337   32734 cri.go:89] found id: "e3ae09638d5d5724f3e27352156c272f710b70f2c3319adc46eb518d357dc8e2"
	I0731 17:09:24.820343   32734 cri.go:89] found id: "ded6421f2f11d7f64ce80d1681e3d4c9d1453ff8aecc860e184ad0865768adde"
	I0731 17:09:24.820346   32734 cri.go:89] found id: ""
	I0731 17:09:24.820394   32734 ssh_runner.go:195] Run: sudo runc list -f json
	
	
	==> CRI-O <==
	Jul 31 17:15:10 ha-234651 crio[3611]: time="2024-07-31 17:15:10.851483867Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:85be3f20ebc7900e3256473a0facbee8f719800984555243bf73c81abad28de1,PodSandboxId:dd5c3cc8dfa5b5df9f8ef10a50841655cb93beea4fbf4be59c4e3709a91a67e8,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:4,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_EXITED,CreatedAt:1722446049249215457,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-234651,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 399c842a2f5d3312c2f955f494ccfe00,},Annotations:map[string]string{io.kubernetes.container.hash: c9b9fb5b,io.kubernetes.container.restartCount: 4,io.kubernetes.container.termination
MessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:03bd4d260ce130b59638a210602cbec4825abdc1ff858b726a83f84760b887bd,PodSandboxId:7b03581dae9766a752753718256518416c0d25615764fbf9757a9900acc1a591,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:1,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1722445952147247491,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-234651,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1ec63e59dba1cb738446a43a6fa45bd3,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.ku
bernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8dd73ca3aef16028d31ef6414de5ed3f197f944dc44724cf18deb1d4de51450d,PodSandboxId:a35af548ab44d4f950c1407f245051ec4ddf2065461d5b25630b7cdd2dace264,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1722445925454373387,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-j8sz6,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 33623743-6f0a-4ac1-b412-b44cc7efa4be,},Annotations:map[string]string{io.kubernetes.container.hash: 653b61ed,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kub
ernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e254329cb95e187e2c887d98abcd0e7789ef5b45371d528f1b02e554af8fa7d8,PodSandboxId:636290b7ff37db3c7b09bfcf1187777bdc384f8d48b91a63f5b87fb894604abb,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:5,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1722445878885734851,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 87455537-bdb8-438b-8122-db85bed01d09,},Annotations:map[string]string{io.kubernetes.container.hash: f1005df0,io.kubernetes.container.restartCount: 5,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.conta
iner.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d3c496d013e95de87039db5d01f03c4419be22469ae9a37ce56d28e927b11599,PodSandboxId:fe52abfdd68d1b413668bc51f2e6f0a739aa55ede4d310878d640c81581d43fd,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722445808880587985,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-234651,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d9982d04d181fbc6333c44627c777728,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes
.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5fb8c0770f47453fe0f2c6f7122df80e417aaafa2536b243fbfa363d7bb7f8f9,PodSandboxId:7b03581dae9766a752753718256518416c0d25615764fbf9757a9900acc1a591,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_EXITED,CreatedAt:1722445778774855214,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-234651,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1ec63e59dba1cb738446a43a6fa45bd3,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: Fil
e,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aad4a823ee5ed456a27996750e3b981bb7867983b27a76041fde18d92f1aff17,PodSandboxId:2db80018f77dc188d1f4ac6d3682ebf3d2b34cb3f84d5555586e48a727d443cc,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722445768062018813,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-nsx9j,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b2cde006-dbb7-4e6f-a5f1-cf7760740104,},Annotations:map[string]string{io.kubernetes.container.hash: 9d1dd63f,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPo
rt\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:11197df475b3001b3fd4ae5f027e3f3d3524805332a4f3556041ef5a33fba07d,PodSandboxId:6fc26849386a99727929b3f7e5f13ae61d2191960f680dc2e43f447c05053383,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722445767604809778,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-jfgs8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5ead85d8-0fd0-4900-8c02-2f23217ca208,},Annotations:map[string]string{io.kubernetes.container.hash: 85f1c355,
io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:97b26172311050320f93611435daf7f831aac7a433159e06423c34df3f98184f,PodSandboxId:cf494ddbf7d6d9905f5a17873a3ca1a29aa2bd0465775fc735a7c40618618e96,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_RUNNING,CreatedAt:1722445767627236576,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-wfbt4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9eda8095-ce75-4043-8ddf-6e5663de8212,},Annotations:map[string]string{io.kubernetes.container.hash: f1243e66,io.kubernetes.container.restartCou
nt: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a243770babc3658aea247fc713c71fc93fa1ffeb129584a57f6edb33a122803e,PodSandboxId:6041bfee3d69efce4e7651c2704eda209c766de18ba9cbc9d375b709f37a8765,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722445767742379835,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-qbqb9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4f76f862-d39e-4976-90e6-fb9a25cc485a,},Annotations:map[string]string{io.kubernetes.container.hash: 9bcf3d4b,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort
\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e3d0e33db247e8b10836bed4eff51af3c70dbab815033d0f99c72fab296601fe,PodSandboxId:fe52abfdd68d1b413668bc51f2e6f0a739aa55ede4d310878d640c81581d43fd,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_EXITED,CreatedAt:1722445767509992622,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-234651,io
.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d9982d04d181fbc6333c44627c777728,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d677e670bb4261a20839c274a89255a77d159fe3e55246ea9089551953087862,PodSandboxId:26976089b091358d1c26e7a6dbd31734c34e960ba5f378698d657955d6735c3f,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722445767451944101,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-234651,io.kubernetes.pod.namespace: kube-system,io.kubernete
s.pod.uid: 7f8584f0ef1731ec6b6fb11b7fa84aeb,},Annotations:map[string]string{io.kubernetes.container.hash: a3feaab1,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:99fe8054c8b333dba8f752ca3b7d5a59656d3429dcd52f299ba255df7a8f6e23,PodSandboxId:83eb828e8f77a4ab85e195c75183e35d71fb7f704a6e5392079d8fcda89ed43a,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722445767438737352,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-234651,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3defc3711c
46905c8aec12eb318ecd3b,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5e4d66f773ff46f7a130d9359ac1316e1117516ebef8b32f9b20515706b3a2f8,PodSandboxId:754996ae28b01f43599725b23a8ef7ea035322c1da4ccbda6c379abd7e897fe0,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722445212877412151,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-qbqb9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4f76f862-d39e-4976-90e6-fb9a25cc485a,},Annotat
ions:map[string]string{io.kubernetes.container.hash: 9bcf3d4b,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e8ef655791fe4b0b98612bab1bc9fea7e42c73e5327bdd0654676979cb2e8542,PodSandboxId:1616415c6b8f6463e22fcd4bc693a293f8fdee9b60f2f3d789bc645ace8300a6,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722445212817731273,Labels:map[string]string{io.
kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-nsx9j,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b2cde006-dbb7-4e6f-a5f1-cf7760740104,},Annotations:map[string]string{io.kubernetes.container.hash: 9d1dd63f,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dd9f6c4536535bb3eb07d6fa75fe49579d9cdee5ff6e4c440e0a58359d4f2f08,PodSandboxId:99ba869aafb121f3032d38c0a926a34605210fa5e62304b252efb3ce0f139b7c,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f
6195b9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_EXITED,CreatedAt:1722445201111519856,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-wfbt4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9eda8095-ce75-4043-8ddf-6e5663de8212,},Annotations:map[string]string{io.kubernetes.container.hash: f1243e66,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:631c8cee6152ae1808419f27d4b63fa0a6bcebb15c757dd96090c6267d63e6c2,PodSandboxId:88fd41ca8aad113a54b218f3a36159bdb37464ee5f76ec8ae31f27cd31f7daa1,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string
{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_EXITED,CreatedAt:1722445197367295822,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-jfgs8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5ead85d8-0fd0-4900-8c02-2f23217ca208,},Annotations:map[string]string{io.kubernetes.container.hash: 85f1c355,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e5b3417940cd8bf8065dfefafb918da0e5bfda9fd5f7a2f9ccea0f1615151657,PodSandboxId:8ccf20bcd63f37163f71bce3d8a364eb92d86b0694bc75949c8979e5633c5cc2,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,}
,ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1722445176479177868,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-234651,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7f8584f0ef1731ec6b6fb11b7fa84aeb,},Annotations:map[string]string{io.kubernetes.container.hash: a3feaab1,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ded6421f2f11d7f64ce80d1681e3d4c9d1453ff8aecc860e184ad0865768adde,PodSandboxId:90ca479a21b17fc36c4328bfd82fe7023d144a893b3a1fbc315c66af59696ea2,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caa
cbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_EXITED,CreatedAt:1722445176364856785,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-234651,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3defc3711c46905c8aec12eb318ecd3b,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=03739f2c-100a-4db6-8e6e-c98e993508e8 name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 17:15:10 ha-234651 crio[3611]: time="2024-07-31 17:15:10.864251288Z" level=debug msg="Request: &ListPodSandboxRequest{Filter:&PodSandboxFilter{Id:,State:&PodSandboxStateValue{State:SANDBOX_READY,},LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=be33b7e3-37e6-4a40-98e3-a47cc46235ce name=/runtime.v1.RuntimeService/ListPodSandbox
	Jul 31 17:15:10 ha-234651 crio[3611]: time="2024-07-31 17:15:10.864680541Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:a35af548ab44d4f950c1407f245051ec4ddf2065461d5b25630b7cdd2dace264,Metadata:&PodSandboxMetadata{Name:busybox-fc5497c4f-j8sz6,Uid:33623743-6f0a-4ac1-b412-b44cc7efa4be,Namespace:default,Attempt:0,},State:SANDBOX_READY,CreatedAt:1722445922932745899,Labels:map[string]string{app: busybox,io.kubernetes.container.name: POD,io.kubernetes.pod.name: busybox-fc5497c4f-j8sz6,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 33623743-6f0a-4ac1-b412-b44cc7efa4be,pod-template-hash: fc5497c4f,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-07-31T17:12:02.619812906Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:7b03581dae9766a752753718256518416c0d25615764fbf9757a9900acc1a591,Metadata:&PodSandboxMetadata{Name:kube-vip-ha-234651,Uid:1ec63e59dba1cb738446a43a6fa45bd3,Namespace:kube-system,Attempt:0,},State:SANDBOX_RE
ADY,CreatedAt:1722445778677214548,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-vip-ha-234651,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1ec63e59dba1cb738446a43a6fa45bd3,},Annotations:map[string]string{kubernetes.io/config.hash: 1ec63e59dba1cb738446a43a6fa45bd3,kubernetes.io/config.seen: 2024-07-31T17:09:24.161193970Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:2db80018f77dc188d1f4ac6d3682ebf3d2b34cb3f84d5555586e48a727d443cc,Metadata:&PodSandboxMetadata{Name:coredns-7db6d8ff4d-nsx9j,Uid:b2cde006-dbb7-4e6f-a5f1-cf7760740104,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1722445767242665541,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-7db6d8ff4d-nsx9j,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b2cde006-dbb7-4e6f-a5f1-cf7760740104,k8s-app: kube-dns,pod-template-hash: 7db6d8ff4d,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-07
-31T17:00:12.281658362Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:6041bfee3d69efce4e7651c2704eda209c766de18ba9cbc9d375b709f37a8765,Metadata:&PodSandboxMetadata{Name:coredns-7db6d8ff4d-qbqb9,Uid:4f76f862-d39e-4976-90e6-fb9a25cc485a,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1722445767221221810,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-7db6d8ff4d-qbqb9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4f76f862-d39e-4976-90e6-fb9a25cc485a,k8s-app: kube-dns,pod-template-hash: 7db6d8ff4d,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-07-31T17:00:12.291105521Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:dd5c3cc8dfa5b5df9f8ef10a50841655cb93beea4fbf4be59c4e3709a91a67e8,Metadata:&PodSandboxMetadata{Name:kube-apiserver-ha-234651,Uid:399c842a2f5d3312c2f955f494ccfe00,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1722445767147672907,Labels:map[string]strin
g{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-ha-234651,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 399c842a2f5d3312c2f955f494ccfe00,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.39.243:8443,kubernetes.io/config.hash: 399c842a2f5d3312c2f955f494ccfe00,kubernetes.io/config.seen: 2024-07-31T16:59:42.813503875Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:6fc26849386a99727929b3f7e5f13ae61d2191960f680dc2e43f447c05053383,Metadata:&PodSandboxMetadata{Name:kube-proxy-jfgs8,Uid:5ead85d8-0fd0-4900-8c02-2f23217ca208,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1722445767118238645,Labels:map[string]string{controller-revision-hash: 5bbc78d4f8,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-jfgs8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5ead85d8-0fd0-4900-8c02-2f23217ca208,k8s-app: kube
-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-07-31T16:59:56.058945111Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:636290b7ff37db3c7b09bfcf1187777bdc384f8d48b91a63f5b87fb894604abb,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:87455537-bdb8-438b-8122-db85bed01d09,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1722445767108614787,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 87455537-bdb8-438b-8122-db85bed01d09,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"names
pace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io/config.seen: 2024-07-31T17:00:12.293249307Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:83eb828e8f77a4ab85e195c75183e35d71fb7f704a6e5392079d8fcda89ed43a,Metadata:&PodSandboxMetadata{Name:kube-scheduler-ha-234651,Uid:3defc3711c46905c8aec12eb318ecd3b,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1722445767096997201,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-ha-234651,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3defc3711c4690
5c8aec12eb318ecd3b,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 3defc3711c46905c8aec12eb318ecd3b,kubernetes.io/config.seen: 2024-07-31T16:59:42.813505726Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:26976089b091358d1c26e7a6dbd31734c34e960ba5f378698d657955d6735c3f,Metadata:&PodSandboxMetadata{Name:etcd-ha-234651,Uid:7f8584f0ef1731ec6b6fb11b7fa84aeb,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1722445767052050906,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-ha-234651,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7f8584f0ef1731ec6b6fb11b7fa84aeb,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.39.243:2379,kubernetes.io/config.hash: 7f8584f0ef1731ec6b6fb11b7fa84aeb,kubernetes.io/config.seen: 2024-07-31T16:59:42.813500094Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:cf4
94ddbf7d6d9905f5a17873a3ca1a29aa2bd0465775fc735a7c40618618e96,Metadata:&PodSandboxMetadata{Name:kindnet-wfbt4,Uid:9eda8095-ce75-4043-8ddf-6e5663de8212,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1722445767037304151,Labels:map[string]string{app: kindnet,controller-revision-hash: 549967b474,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kindnet-wfbt4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9eda8095-ce75-4043-8ddf-6e5663de8212,k8s-app: kindnet,pod-template-generation: 1,tier: node,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-07-31T16:59:56.052583148Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:fe52abfdd68d1b413668bc51f2e6f0a739aa55ede4d310878d640c81581d43fd,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-ha-234651,Uid:d9982d04d181fbc6333c44627c777728,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1722445767030567087,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.
container.name: POD,io.kubernetes.pod.name: kube-controller-manager-ha-234651,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d9982d04d181fbc6333c44627c777728,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: d9982d04d181fbc6333c44627c777728,kubernetes.io/config.seen: 2024-07-31T16:59:42.813504922Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="otel-collector/interceptors.go:74" id=be33b7e3-37e6-4a40-98e3-a47cc46235ce name=/runtime.v1.RuntimeService/ListPodSandbox
	Jul 31 17:15:10 ha-234651 crio[3611]: time="2024-07-31 17:15:10.865608847Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:&ContainerStateValue{State:CONTAINER_RUNNING,},PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=37585d61-2bb0-46bc-8f20-4daf9f1c66b5 name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 17:15:10 ha-234651 crio[3611]: time="2024-07-31 17:15:10.865689511Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=37585d61-2bb0-46bc-8f20-4daf9f1c66b5 name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 17:15:10 ha-234651 crio[3611]: time="2024-07-31 17:15:10.868653226Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:03bd4d260ce130b59638a210602cbec4825abdc1ff858b726a83f84760b887bd,PodSandboxId:7b03581dae9766a752753718256518416c0d25615764fbf9757a9900acc1a591,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:1,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1722445952147247491,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-234651,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1ec63e59dba1cb738446a43a6fa45bd3,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev
/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8dd73ca3aef16028d31ef6414de5ed3f197f944dc44724cf18deb1d4de51450d,PodSandboxId:a35af548ab44d4f950c1407f245051ec4ddf2065461d5b25630b7cdd2dace264,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1722445925454373387,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-j8sz6,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 33623743-6f0a-4ac1-b412-b44cc7efa4be,},Annotations:map[string]string{io.kubernetes.container.hash: 653b61ed,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/
termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d3c496d013e95de87039db5d01f03c4419be22469ae9a37ce56d28e927b11599,PodSandboxId:fe52abfdd68d1b413668bc51f2e6f0a739aa55ede4d310878d640c81581d43fd,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722445808880587985,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-234651,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d9982d04d181fbc6333c44627c777728,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath:
/dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aad4a823ee5ed456a27996750e3b981bb7867983b27a76041fde18d92f1aff17,PodSandboxId:2db80018f77dc188d1f4ac6d3682ebf3d2b34cb3f84d5555586e48a727d443cc,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722445768062018813,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-nsx9j,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b2cde006-dbb7-4e6f-a5f1-cf7760740104,},Annotations:map[string]string{io.kubernetes.container.hash: 9d1dd63f,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"c
ontainerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:11197df475b3001b3fd4ae5f027e3f3d3524805332a4f3556041ef5a33fba07d,PodSandboxId:6fc26849386a99727929b3f7e5f13ae61d2191960f680dc2e43f447c05053383,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722445767604809778,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-jfgs8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5ead85d8-0fd0-4900-8c02-2f23217ca
208,},Annotations:map[string]string{io.kubernetes.container.hash: 85f1c355,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:97b26172311050320f93611435daf7f831aac7a433159e06423c34df3f98184f,PodSandboxId:cf494ddbf7d6d9905f5a17873a3ca1a29aa2bd0465775fc735a7c40618618e96,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_RUNNING,CreatedAt:1722445767627236576,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-wfbt4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9eda8095-ce75-4043-8ddf-6e5663de8212,},Annotations:map[string]strin
g{io.kubernetes.container.hash: f1243e66,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a243770babc3658aea247fc713c71fc93fa1ffeb129584a57f6edb33a122803e,PodSandboxId:6041bfee3d69efce4e7651c2704eda209c766de18ba9cbc9d375b709f37a8765,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722445767742379835,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-qbqb9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4f76f862-d39e-4976-90e6-fb9a25cc485a,},Annotations:map[string]string{io.kubernetes.container.hash:
9bcf3d4b,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d677e670bb4261a20839c274a89255a77d159fe3e55246ea9089551953087862,PodSandboxId:26976089b091358d1c26e7a6dbd31734c34e960ba5f378698d657955d6735c3f,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722445767451944101,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name
: etcd-ha-234651,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7f8584f0ef1731ec6b6fb11b7fa84aeb,},Annotations:map[string]string{io.kubernetes.container.hash: a3feaab1,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:99fe8054c8b333dba8f752ca3b7d5a59656d3429dcd52f299ba255df7a8f6e23,PodSandboxId:83eb828e8f77a4ab85e195c75183e35d71fb7f704a6e5392079d8fcda89ed43a,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722445767438737352,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-234651,io.k
ubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3defc3711c46905c8aec12eb318ecd3b,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=37585d61-2bb0-46bc-8f20-4daf9f1c66b5 name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 17:15:10 ha-234651 crio[3611]: time="2024-07-31 17:15:10.875674548Z" level=debug msg="Request: &ListPodSandboxRequest{Filter:nil,}" file="otel-collector/interceptors.go:62" id=60f432dc-b37f-426e-902d-73022c7367cb name=/runtime.v1.RuntimeService/ListPodSandbox
	Jul 31 17:15:10 ha-234651 crio[3611]: time="2024-07-31 17:15:10.879620739Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:a35af548ab44d4f950c1407f245051ec4ddf2065461d5b25630b7cdd2dace264,Metadata:&PodSandboxMetadata{Name:busybox-fc5497c4f-j8sz6,Uid:33623743-6f0a-4ac1-b412-b44cc7efa4be,Namespace:default,Attempt:0,},State:SANDBOX_READY,CreatedAt:1722445922932745899,Labels:map[string]string{app: busybox,io.kubernetes.container.name: POD,io.kubernetes.pod.name: busybox-fc5497c4f-j8sz6,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 33623743-6f0a-4ac1-b412-b44cc7efa4be,pod-template-hash: fc5497c4f,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-07-31T17:12:02.619812906Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:7b03581dae9766a752753718256518416c0d25615764fbf9757a9900acc1a591,Metadata:&PodSandboxMetadata{Name:kube-vip-ha-234651,Uid:1ec63e59dba1cb738446a43a6fa45bd3,Namespace:kube-system,Attempt:0,},State:SANDBOX_RE
ADY,CreatedAt:1722445778677214548,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-vip-ha-234651,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1ec63e59dba1cb738446a43a6fa45bd3,},Annotations:map[string]string{kubernetes.io/config.hash: 1ec63e59dba1cb738446a43a6fa45bd3,kubernetes.io/config.seen: 2024-07-31T17:09:24.161193970Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:2db80018f77dc188d1f4ac6d3682ebf3d2b34cb3f84d5555586e48a727d443cc,Metadata:&PodSandboxMetadata{Name:coredns-7db6d8ff4d-nsx9j,Uid:b2cde006-dbb7-4e6f-a5f1-cf7760740104,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1722445767242665541,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-7db6d8ff4d-nsx9j,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b2cde006-dbb7-4e6f-a5f1-cf7760740104,k8s-app: kube-dns,pod-template-hash: 7db6d8ff4d,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-07
-31T17:00:12.281658362Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:6041bfee3d69efce4e7651c2704eda209c766de18ba9cbc9d375b709f37a8765,Metadata:&PodSandboxMetadata{Name:coredns-7db6d8ff4d-qbqb9,Uid:4f76f862-d39e-4976-90e6-fb9a25cc485a,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1722445767221221810,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-7db6d8ff4d-qbqb9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4f76f862-d39e-4976-90e6-fb9a25cc485a,k8s-app: kube-dns,pod-template-hash: 7db6d8ff4d,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-07-31T17:00:12.291105521Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:dd5c3cc8dfa5b5df9f8ef10a50841655cb93beea4fbf4be59c4e3709a91a67e8,Metadata:&PodSandboxMetadata{Name:kube-apiserver-ha-234651,Uid:399c842a2f5d3312c2f955f494ccfe00,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1722445767147672907,Labels:map[string]strin
g{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-ha-234651,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 399c842a2f5d3312c2f955f494ccfe00,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.39.243:8443,kubernetes.io/config.hash: 399c842a2f5d3312c2f955f494ccfe00,kubernetes.io/config.seen: 2024-07-31T16:59:42.813503875Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:6fc26849386a99727929b3f7e5f13ae61d2191960f680dc2e43f447c05053383,Metadata:&PodSandboxMetadata{Name:kube-proxy-jfgs8,Uid:5ead85d8-0fd0-4900-8c02-2f23217ca208,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1722445767118238645,Labels:map[string]string{controller-revision-hash: 5bbc78d4f8,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-jfgs8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5ead85d8-0fd0-4900-8c02-2f23217ca208,k8s-app: kube
-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-07-31T16:59:56.058945111Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:636290b7ff37db3c7b09bfcf1187777bdc384f8d48b91a63f5b87fb894604abb,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:87455537-bdb8-438b-8122-db85bed01d09,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1722445767108614787,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 87455537-bdb8-438b-8122-db85bed01d09,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"names
pace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io/config.seen: 2024-07-31T17:00:12.293249307Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:83eb828e8f77a4ab85e195c75183e35d71fb7f704a6e5392079d8fcda89ed43a,Metadata:&PodSandboxMetadata{Name:kube-scheduler-ha-234651,Uid:3defc3711c46905c8aec12eb318ecd3b,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1722445767096997201,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-ha-234651,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3defc3711c4690
5c8aec12eb318ecd3b,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 3defc3711c46905c8aec12eb318ecd3b,kubernetes.io/config.seen: 2024-07-31T16:59:42.813505726Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:26976089b091358d1c26e7a6dbd31734c34e960ba5f378698d657955d6735c3f,Metadata:&PodSandboxMetadata{Name:etcd-ha-234651,Uid:7f8584f0ef1731ec6b6fb11b7fa84aeb,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1722445767052050906,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-ha-234651,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7f8584f0ef1731ec6b6fb11b7fa84aeb,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.39.243:2379,kubernetes.io/config.hash: 7f8584f0ef1731ec6b6fb11b7fa84aeb,kubernetes.io/config.seen: 2024-07-31T16:59:42.813500094Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:cf4
94ddbf7d6d9905f5a17873a3ca1a29aa2bd0465775fc735a7c40618618e96,Metadata:&PodSandboxMetadata{Name:kindnet-wfbt4,Uid:9eda8095-ce75-4043-8ddf-6e5663de8212,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1722445767037304151,Labels:map[string]string{app: kindnet,controller-revision-hash: 549967b474,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kindnet-wfbt4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9eda8095-ce75-4043-8ddf-6e5663de8212,k8s-app: kindnet,pod-template-generation: 1,tier: node,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-07-31T16:59:56.052583148Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:fe52abfdd68d1b413668bc51f2e6f0a739aa55ede4d310878d640c81581d43fd,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-ha-234651,Uid:d9982d04d181fbc6333c44627c777728,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1722445767030567087,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.
container.name: POD,io.kubernetes.pod.name: kube-controller-manager-ha-234651,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d9982d04d181fbc6333c44627c777728,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: d9982d04d181fbc6333c44627c777728,kubernetes.io/config.seen: 2024-07-31T16:59:42.813504922Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:754996ae28b01f43599725b23a8ef7ea035322c1da4ccbda6c379abd7e897fe0,Metadata:&PodSandboxMetadata{Name:coredns-7db6d8ff4d-qbqb9,Uid:4f76f862-d39e-4976-90e6-fb9a25cc485a,Namespace:kube-system,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1722445212609755554,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-7db6d8ff4d-qbqb9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4f76f862-d39e-4976-90e6-fb9a25cc485a,k8s-app: kube-dns,pod-template-hash: 7db6d8ff4d,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-07-31T17:00:12.291105521Z,kubernetes.
io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:1616415c6b8f6463e22fcd4bc693a293f8fdee9b60f2f3d789bc645ace8300a6,Metadata:&PodSandboxMetadata{Name:coredns-7db6d8ff4d-nsx9j,Uid:b2cde006-dbb7-4e6f-a5f1-cf7760740104,Namespace:kube-system,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1722445212589247328,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-7db6d8ff4d-nsx9j,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b2cde006-dbb7-4e6f-a5f1-cf7760740104,k8s-app: kube-dns,pod-template-hash: 7db6d8ff4d,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-07-31T17:00:12.281658362Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:88fd41ca8aad113a54b218f3a36159bdb37464ee5f76ec8ae31f27cd31f7daa1,Metadata:&PodSandboxMetadata{Name:kube-proxy-jfgs8,Uid:5ead85d8-0fd0-4900-8c02-2f23217ca208,Namespace:kube-system,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1722445197269221011,Labels:map[string]string{controller-revision-hash: 5bbc7
8d4f8,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-jfgs8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5ead85d8-0fd0-4900-8c02-2f23217ca208,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-07-31T16:59:56.058945111Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:99ba869aafb121f3032d38c0a926a34605210fa5e62304b252efb3ce0f139b7c,Metadata:&PodSandboxMetadata{Name:kindnet-wfbt4,Uid:9eda8095-ce75-4043-8ddf-6e5663de8212,Namespace:kube-system,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1722445196365090312,Labels:map[string]string{app: kindnet,controller-revision-hash: 549967b474,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kindnet-wfbt4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9eda8095-ce75-4043-8ddf-6e5663de8212,k8s-app: kindnet,pod-template-generation: 1,tier: node,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-07-31T16:59:56.052583148Z,kub
ernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:8ccf20bcd63f37163f71bce3d8a364eb92d86b0694bc75949c8979e5633c5cc2,Metadata:&PodSandboxMetadata{Name:etcd-ha-234651,Uid:7f8584f0ef1731ec6b6fb11b7fa84aeb,Namespace:kube-system,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1722445176211738887,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-ha-234651,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7f8584f0ef1731ec6b6fb11b7fa84aeb,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.39.243:2379,kubernetes.io/config.hash: 7f8584f0ef1731ec6b6fb11b7fa84aeb,kubernetes.io/config.seen: 2024-07-31T16:59:35.725398431Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:90ca479a21b17fc36c4328bfd82fe7023d144a893b3a1fbc315c66af59696ea2,Metadata:&PodSandboxMetadata{Name:kube-scheduler-ha-234651,Uid:3defc3711c46905c8aec12eb318ecd3b,Namespace:kube-system,Attempt:0,
},State:SANDBOX_NOTREADY,CreatedAt:1722445176167625024,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-ha-234651,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3defc3711c46905c8aec12eb318ecd3b,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 3defc3711c46905c8aec12eb318ecd3b,kubernetes.io/config.seen: 2024-07-31T16:59:35.725396175Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="otel-collector/interceptors.go:74" id=60f432dc-b37f-426e-902d-73022c7367cb name=/runtime.v1.RuntimeService/ListPodSandbox
	Jul 31 17:15:10 ha-234651 crio[3611]: time="2024-07-31 17:15:10.881972042Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=27aa39ab-8750-4170-987b-eccaa63954af name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 17:15:10 ha-234651 crio[3611]: time="2024-07-31 17:15:10.882058664Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=27aa39ab-8750-4170-987b-eccaa63954af name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 17:15:10 ha-234651 crio[3611]: time="2024-07-31 17:15:10.885961432Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:85be3f20ebc7900e3256473a0facbee8f719800984555243bf73c81abad28de1,PodSandboxId:dd5c3cc8dfa5b5df9f8ef10a50841655cb93beea4fbf4be59c4e3709a91a67e8,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:4,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_EXITED,CreatedAt:1722446049249215457,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-234651,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 399c842a2f5d3312c2f955f494ccfe00,},Annotations:map[string]string{io.kubernetes.container.hash: c9b9fb5b,io.kubernetes.container.restartCount: 4,io.kubernetes.container.termination
MessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:03bd4d260ce130b59638a210602cbec4825abdc1ff858b726a83f84760b887bd,PodSandboxId:7b03581dae9766a752753718256518416c0d25615764fbf9757a9900acc1a591,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:1,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1722445952147247491,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-234651,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1ec63e59dba1cb738446a43a6fa45bd3,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.ku
bernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8dd73ca3aef16028d31ef6414de5ed3f197f944dc44724cf18deb1d4de51450d,PodSandboxId:a35af548ab44d4f950c1407f245051ec4ddf2065461d5b25630b7cdd2dace264,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1722445925454373387,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-j8sz6,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 33623743-6f0a-4ac1-b412-b44cc7efa4be,},Annotations:map[string]string{io.kubernetes.container.hash: 653b61ed,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kub
ernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e254329cb95e187e2c887d98abcd0e7789ef5b45371d528f1b02e554af8fa7d8,PodSandboxId:636290b7ff37db3c7b09bfcf1187777bdc384f8d48b91a63f5b87fb894604abb,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:5,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1722445878885734851,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 87455537-bdb8-438b-8122-db85bed01d09,},Annotations:map[string]string{io.kubernetes.container.hash: f1005df0,io.kubernetes.container.restartCount: 5,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.conta
iner.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d3c496d013e95de87039db5d01f03c4419be22469ae9a37ce56d28e927b11599,PodSandboxId:fe52abfdd68d1b413668bc51f2e6f0a739aa55ede4d310878d640c81581d43fd,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722445808880587985,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-234651,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d9982d04d181fbc6333c44627c777728,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes
.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5fb8c0770f47453fe0f2c6f7122df80e417aaafa2536b243fbfa363d7bb7f8f9,PodSandboxId:7b03581dae9766a752753718256518416c0d25615764fbf9757a9900acc1a591,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_EXITED,CreatedAt:1722445778774855214,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-234651,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1ec63e59dba1cb738446a43a6fa45bd3,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: Fil
e,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aad4a823ee5ed456a27996750e3b981bb7867983b27a76041fde18d92f1aff17,PodSandboxId:2db80018f77dc188d1f4ac6d3682ebf3d2b34cb3f84d5555586e48a727d443cc,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722445768062018813,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-nsx9j,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b2cde006-dbb7-4e6f-a5f1-cf7760740104,},Annotations:map[string]string{io.kubernetes.container.hash: 9d1dd63f,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPo
rt\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:11197df475b3001b3fd4ae5f027e3f3d3524805332a4f3556041ef5a33fba07d,PodSandboxId:6fc26849386a99727929b3f7e5f13ae61d2191960f680dc2e43f447c05053383,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722445767604809778,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-jfgs8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5ead85d8-0fd0-4900-8c02-2f23217ca208,},Annotations:map[string]string{io.kubernetes.container.hash: 85f1c355,
io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:97b26172311050320f93611435daf7f831aac7a433159e06423c34df3f98184f,PodSandboxId:cf494ddbf7d6d9905f5a17873a3ca1a29aa2bd0465775fc735a7c40618618e96,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_RUNNING,CreatedAt:1722445767627236576,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-wfbt4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9eda8095-ce75-4043-8ddf-6e5663de8212,},Annotations:map[string]string{io.kubernetes.container.hash: f1243e66,io.kubernetes.container.restartCou
nt: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a243770babc3658aea247fc713c71fc93fa1ffeb129584a57f6edb33a122803e,PodSandboxId:6041bfee3d69efce4e7651c2704eda209c766de18ba9cbc9d375b709f37a8765,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722445767742379835,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-qbqb9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4f76f862-d39e-4976-90e6-fb9a25cc485a,},Annotations:map[string]string{io.kubernetes.container.hash: 9bcf3d4b,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort
\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e3d0e33db247e8b10836bed4eff51af3c70dbab815033d0f99c72fab296601fe,PodSandboxId:fe52abfdd68d1b413668bc51f2e6f0a739aa55ede4d310878d640c81581d43fd,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_EXITED,CreatedAt:1722445767509992622,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-234651,io
.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d9982d04d181fbc6333c44627c777728,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d677e670bb4261a20839c274a89255a77d159fe3e55246ea9089551953087862,PodSandboxId:26976089b091358d1c26e7a6dbd31734c34e960ba5f378698d657955d6735c3f,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722445767451944101,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-234651,io.kubernetes.pod.namespace: kube-system,io.kubernete
s.pod.uid: 7f8584f0ef1731ec6b6fb11b7fa84aeb,},Annotations:map[string]string{io.kubernetes.container.hash: a3feaab1,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:99fe8054c8b333dba8f752ca3b7d5a59656d3429dcd52f299ba255df7a8f6e23,PodSandboxId:83eb828e8f77a4ab85e195c75183e35d71fb7f704a6e5392079d8fcda89ed43a,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722445767438737352,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-234651,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3defc3711c
46905c8aec12eb318ecd3b,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5e4d66f773ff46f7a130d9359ac1316e1117516ebef8b32f9b20515706b3a2f8,PodSandboxId:754996ae28b01f43599725b23a8ef7ea035322c1da4ccbda6c379abd7e897fe0,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722445212877412151,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-qbqb9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4f76f862-d39e-4976-90e6-fb9a25cc485a,},Annotat
ions:map[string]string{io.kubernetes.container.hash: 9bcf3d4b,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e8ef655791fe4b0b98612bab1bc9fea7e42c73e5327bdd0654676979cb2e8542,PodSandboxId:1616415c6b8f6463e22fcd4bc693a293f8fdee9b60f2f3d789bc645ace8300a6,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722445212817731273,Labels:map[string]string{io.
kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-nsx9j,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b2cde006-dbb7-4e6f-a5f1-cf7760740104,},Annotations:map[string]string{io.kubernetes.container.hash: 9d1dd63f,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dd9f6c4536535bb3eb07d6fa75fe49579d9cdee5ff6e4c440e0a58359d4f2f08,PodSandboxId:99ba869aafb121f3032d38c0a926a34605210fa5e62304b252efb3ce0f139b7c,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f
6195b9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_EXITED,CreatedAt:1722445201111519856,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-wfbt4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9eda8095-ce75-4043-8ddf-6e5663de8212,},Annotations:map[string]string{io.kubernetes.container.hash: f1243e66,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:631c8cee6152ae1808419f27d4b63fa0a6bcebb15c757dd96090c6267d63e6c2,PodSandboxId:88fd41ca8aad113a54b218f3a36159bdb37464ee5f76ec8ae31f27cd31f7daa1,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string
{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_EXITED,CreatedAt:1722445197367295822,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-jfgs8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5ead85d8-0fd0-4900-8c02-2f23217ca208,},Annotations:map[string]string{io.kubernetes.container.hash: 85f1c355,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e5b3417940cd8bf8065dfefafb918da0e5bfda9fd5f7a2f9ccea0f1615151657,PodSandboxId:8ccf20bcd63f37163f71bce3d8a364eb92d86b0694bc75949c8979e5633c5cc2,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,}
,ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1722445176479177868,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-234651,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7f8584f0ef1731ec6b6fb11b7fa84aeb,},Annotations:map[string]string{io.kubernetes.container.hash: a3feaab1,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ded6421f2f11d7f64ce80d1681e3d4c9d1453ff8aecc860e184ad0865768adde,PodSandboxId:90ca479a21b17fc36c4328bfd82fe7023d144a893b3a1fbc315c66af59696ea2,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caa
cbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_EXITED,CreatedAt:1722445176364856785,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-234651,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3defc3711c46905c8aec12eb318ecd3b,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=27aa39ab-8750-4170-987b-eccaa63954af name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 17:15:10 ha-234651 crio[3611]: time="2024-07-31 17:15:10.906518553Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=d7f4ad25-1a6e-45b2-becc-f812e1cc82ac name=/runtime.v1.RuntimeService/Version
	Jul 31 17:15:10 ha-234651 crio[3611]: time="2024-07-31 17:15:10.906608134Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=d7f4ad25-1a6e-45b2-becc-f812e1cc82ac name=/runtime.v1.RuntimeService/Version
	Jul 31 17:15:10 ha-234651 crio[3611]: time="2024-07-31 17:15:10.907740164Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=659eb2d1-408b-49de-85da-3883f5fd832a name=/runtime.v1.ImageService/ImageFsInfo
	Jul 31 17:15:10 ha-234651 crio[3611]: time="2024-07-31 17:15:10.908260990Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722446110908237354,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:154769,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=659eb2d1-408b-49de-85da-3883f5fd832a name=/runtime.v1.ImageService/ImageFsInfo
	Jul 31 17:15:10 ha-234651 crio[3611]: time="2024-07-31 17:15:10.908913480Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=df9ec804-7ef9-4afb-93a4-1c03fb27990b name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 17:15:10 ha-234651 crio[3611]: time="2024-07-31 17:15:10.908968564Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=df9ec804-7ef9-4afb-93a4-1c03fb27990b name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 17:15:10 ha-234651 crio[3611]: time="2024-07-31 17:15:10.909307994Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:85be3f20ebc7900e3256473a0facbee8f719800984555243bf73c81abad28de1,PodSandboxId:dd5c3cc8dfa5b5df9f8ef10a50841655cb93beea4fbf4be59c4e3709a91a67e8,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:4,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_EXITED,CreatedAt:1722446049249215457,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-234651,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 399c842a2f5d3312c2f955f494ccfe00,},Annotations:map[string]string{io.kubernetes.container.hash: c9b9fb5b,io.kubernetes.container.restartCount: 4,io.kubernetes.container.termination
MessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:03bd4d260ce130b59638a210602cbec4825abdc1ff858b726a83f84760b887bd,PodSandboxId:7b03581dae9766a752753718256518416c0d25615764fbf9757a9900acc1a591,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:1,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1722445952147247491,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-234651,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1ec63e59dba1cb738446a43a6fa45bd3,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.ku
bernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8dd73ca3aef16028d31ef6414de5ed3f197f944dc44724cf18deb1d4de51450d,PodSandboxId:a35af548ab44d4f950c1407f245051ec4ddf2065461d5b25630b7cdd2dace264,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1722445925454373387,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-j8sz6,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 33623743-6f0a-4ac1-b412-b44cc7efa4be,},Annotations:map[string]string{io.kubernetes.container.hash: 653b61ed,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kub
ernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e254329cb95e187e2c887d98abcd0e7789ef5b45371d528f1b02e554af8fa7d8,PodSandboxId:636290b7ff37db3c7b09bfcf1187777bdc384f8d48b91a63f5b87fb894604abb,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:5,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1722445878885734851,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 87455537-bdb8-438b-8122-db85bed01d09,},Annotations:map[string]string{io.kubernetes.container.hash: f1005df0,io.kubernetes.container.restartCount: 5,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.conta
iner.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d3c496d013e95de87039db5d01f03c4419be22469ae9a37ce56d28e927b11599,PodSandboxId:fe52abfdd68d1b413668bc51f2e6f0a739aa55ede4d310878d640c81581d43fd,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722445808880587985,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-234651,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d9982d04d181fbc6333c44627c777728,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes
.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5fb8c0770f47453fe0f2c6f7122df80e417aaafa2536b243fbfa363d7bb7f8f9,PodSandboxId:7b03581dae9766a752753718256518416c0d25615764fbf9757a9900acc1a591,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_EXITED,CreatedAt:1722445778774855214,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-234651,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1ec63e59dba1cb738446a43a6fa45bd3,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: Fil
e,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aad4a823ee5ed456a27996750e3b981bb7867983b27a76041fde18d92f1aff17,PodSandboxId:2db80018f77dc188d1f4ac6d3682ebf3d2b34cb3f84d5555586e48a727d443cc,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722445768062018813,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-nsx9j,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b2cde006-dbb7-4e6f-a5f1-cf7760740104,},Annotations:map[string]string{io.kubernetes.container.hash: 9d1dd63f,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPo
rt\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:11197df475b3001b3fd4ae5f027e3f3d3524805332a4f3556041ef5a33fba07d,PodSandboxId:6fc26849386a99727929b3f7e5f13ae61d2191960f680dc2e43f447c05053383,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722445767604809778,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-jfgs8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5ead85d8-0fd0-4900-8c02-2f23217ca208,},Annotations:map[string]string{io.kubernetes.container.hash: 85f1c355,
io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:97b26172311050320f93611435daf7f831aac7a433159e06423c34df3f98184f,PodSandboxId:cf494ddbf7d6d9905f5a17873a3ca1a29aa2bd0465775fc735a7c40618618e96,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_RUNNING,CreatedAt:1722445767627236576,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-wfbt4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9eda8095-ce75-4043-8ddf-6e5663de8212,},Annotations:map[string]string{io.kubernetes.container.hash: f1243e66,io.kubernetes.container.restartCou
nt: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a243770babc3658aea247fc713c71fc93fa1ffeb129584a57f6edb33a122803e,PodSandboxId:6041bfee3d69efce4e7651c2704eda209c766de18ba9cbc9d375b709f37a8765,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722445767742379835,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-qbqb9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4f76f862-d39e-4976-90e6-fb9a25cc485a,},Annotations:map[string]string{io.kubernetes.container.hash: 9bcf3d4b,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort
\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e3d0e33db247e8b10836bed4eff51af3c70dbab815033d0f99c72fab296601fe,PodSandboxId:fe52abfdd68d1b413668bc51f2e6f0a739aa55ede4d310878d640c81581d43fd,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_EXITED,CreatedAt:1722445767509992622,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-234651,io
.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d9982d04d181fbc6333c44627c777728,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d677e670bb4261a20839c274a89255a77d159fe3e55246ea9089551953087862,PodSandboxId:26976089b091358d1c26e7a6dbd31734c34e960ba5f378698d657955d6735c3f,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722445767451944101,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-234651,io.kubernetes.pod.namespace: kube-system,io.kubernete
s.pod.uid: 7f8584f0ef1731ec6b6fb11b7fa84aeb,},Annotations:map[string]string{io.kubernetes.container.hash: a3feaab1,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:99fe8054c8b333dba8f752ca3b7d5a59656d3429dcd52f299ba255df7a8f6e23,PodSandboxId:83eb828e8f77a4ab85e195c75183e35d71fb7f704a6e5392079d8fcda89ed43a,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722445767438737352,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-234651,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3defc3711c
46905c8aec12eb318ecd3b,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5e4d66f773ff46f7a130d9359ac1316e1117516ebef8b32f9b20515706b3a2f8,PodSandboxId:754996ae28b01f43599725b23a8ef7ea035322c1da4ccbda6c379abd7e897fe0,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722445212877412151,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-qbqb9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4f76f862-d39e-4976-90e6-fb9a25cc485a,},Annotat
ions:map[string]string{io.kubernetes.container.hash: 9bcf3d4b,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e8ef655791fe4b0b98612bab1bc9fea7e42c73e5327bdd0654676979cb2e8542,PodSandboxId:1616415c6b8f6463e22fcd4bc693a293f8fdee9b60f2f3d789bc645ace8300a6,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722445212817731273,Labels:map[string]string{io.
kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-nsx9j,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b2cde006-dbb7-4e6f-a5f1-cf7760740104,},Annotations:map[string]string{io.kubernetes.container.hash: 9d1dd63f,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dd9f6c4536535bb3eb07d6fa75fe49579d9cdee5ff6e4c440e0a58359d4f2f08,PodSandboxId:99ba869aafb121f3032d38c0a926a34605210fa5e62304b252efb3ce0f139b7c,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f
6195b9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_EXITED,CreatedAt:1722445201111519856,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-wfbt4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9eda8095-ce75-4043-8ddf-6e5663de8212,},Annotations:map[string]string{io.kubernetes.container.hash: f1243e66,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:631c8cee6152ae1808419f27d4b63fa0a6bcebb15c757dd96090c6267d63e6c2,PodSandboxId:88fd41ca8aad113a54b218f3a36159bdb37464ee5f76ec8ae31f27cd31f7daa1,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string
{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_EXITED,CreatedAt:1722445197367295822,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-jfgs8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5ead85d8-0fd0-4900-8c02-2f23217ca208,},Annotations:map[string]string{io.kubernetes.container.hash: 85f1c355,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e5b3417940cd8bf8065dfefafb918da0e5bfda9fd5f7a2f9ccea0f1615151657,PodSandboxId:8ccf20bcd63f37163f71bce3d8a364eb92d86b0694bc75949c8979e5633c5cc2,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,}
,ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1722445176479177868,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-234651,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7f8584f0ef1731ec6b6fb11b7fa84aeb,},Annotations:map[string]string{io.kubernetes.container.hash: a3feaab1,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ded6421f2f11d7f64ce80d1681e3d4c9d1453ff8aecc860e184ad0865768adde,PodSandboxId:90ca479a21b17fc36c4328bfd82fe7023d144a893b3a1fbc315c66af59696ea2,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caa
cbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_EXITED,CreatedAt:1722445176364856785,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-234651,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3defc3711c46905c8aec12eb318ecd3b,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=df9ec804-7ef9-4afb-93a4-1c03fb27990b name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 17:15:10 ha-234651 crio[3611]: time="2024-07-31 17:15:10.948683534Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=33477c13-6b09-4ab1-8d62-98137b1b606d name=/runtime.v1.RuntimeService/Version
	Jul 31 17:15:10 ha-234651 crio[3611]: time="2024-07-31 17:15:10.948797661Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=33477c13-6b09-4ab1-8d62-98137b1b606d name=/runtime.v1.RuntimeService/Version
	Jul 31 17:15:10 ha-234651 crio[3611]: time="2024-07-31 17:15:10.950223147Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=6e2ef6f7-ba9d-4b5e-af72-bfb1c697be9e name=/runtime.v1.ImageService/ImageFsInfo
	Jul 31 17:15:10 ha-234651 crio[3611]: time="2024-07-31 17:15:10.950885358Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722446110950853472,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:154769,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=6e2ef6f7-ba9d-4b5e-af72-bfb1c697be9e name=/runtime.v1.ImageService/ImageFsInfo
	Jul 31 17:15:10 ha-234651 crio[3611]: time="2024-07-31 17:15:10.951822185Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=28cc32ed-36d8-4278-8d0c-3d95ca2d899a name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 17:15:10 ha-234651 crio[3611]: time="2024-07-31 17:15:10.951899408Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=28cc32ed-36d8-4278-8d0c-3d95ca2d899a name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 17:15:10 ha-234651 crio[3611]: time="2024-07-31 17:15:10.952502794Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:85be3f20ebc7900e3256473a0facbee8f719800984555243bf73c81abad28de1,PodSandboxId:dd5c3cc8dfa5b5df9f8ef10a50841655cb93beea4fbf4be59c4e3709a91a67e8,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:4,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_EXITED,CreatedAt:1722446049249215457,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-234651,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 399c842a2f5d3312c2f955f494ccfe00,},Annotations:map[string]string{io.kubernetes.container.hash: c9b9fb5b,io.kubernetes.container.restartCount: 4,io.kubernetes.container.termination
MessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:03bd4d260ce130b59638a210602cbec4825abdc1ff858b726a83f84760b887bd,PodSandboxId:7b03581dae9766a752753718256518416c0d25615764fbf9757a9900acc1a591,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:1,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1722445952147247491,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-234651,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1ec63e59dba1cb738446a43a6fa45bd3,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.ku
bernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8dd73ca3aef16028d31ef6414de5ed3f197f944dc44724cf18deb1d4de51450d,PodSandboxId:a35af548ab44d4f950c1407f245051ec4ddf2065461d5b25630b7cdd2dace264,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1722445925454373387,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-j8sz6,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 33623743-6f0a-4ac1-b412-b44cc7efa4be,},Annotations:map[string]string{io.kubernetes.container.hash: 653b61ed,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kub
ernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e254329cb95e187e2c887d98abcd0e7789ef5b45371d528f1b02e554af8fa7d8,PodSandboxId:636290b7ff37db3c7b09bfcf1187777bdc384f8d48b91a63f5b87fb894604abb,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:5,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1722445878885734851,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 87455537-bdb8-438b-8122-db85bed01d09,},Annotations:map[string]string{io.kubernetes.container.hash: f1005df0,io.kubernetes.container.restartCount: 5,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.conta
iner.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d3c496d013e95de87039db5d01f03c4419be22469ae9a37ce56d28e927b11599,PodSandboxId:fe52abfdd68d1b413668bc51f2e6f0a739aa55ede4d310878d640c81581d43fd,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722445808880587985,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-234651,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d9982d04d181fbc6333c44627c777728,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes
.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5fb8c0770f47453fe0f2c6f7122df80e417aaafa2536b243fbfa363d7bb7f8f9,PodSandboxId:7b03581dae9766a752753718256518416c0d25615764fbf9757a9900acc1a591,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_EXITED,CreatedAt:1722445778774855214,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-234651,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1ec63e59dba1cb738446a43a6fa45bd3,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: Fil
e,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aad4a823ee5ed456a27996750e3b981bb7867983b27a76041fde18d92f1aff17,PodSandboxId:2db80018f77dc188d1f4ac6d3682ebf3d2b34cb3f84d5555586e48a727d443cc,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722445768062018813,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-nsx9j,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b2cde006-dbb7-4e6f-a5f1-cf7760740104,},Annotations:map[string]string{io.kubernetes.container.hash: 9d1dd63f,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPo
rt\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:11197df475b3001b3fd4ae5f027e3f3d3524805332a4f3556041ef5a33fba07d,PodSandboxId:6fc26849386a99727929b3f7e5f13ae61d2191960f680dc2e43f447c05053383,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722445767604809778,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-jfgs8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5ead85d8-0fd0-4900-8c02-2f23217ca208,},Annotations:map[string]string{io.kubernetes.container.hash: 85f1c355,
io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:97b26172311050320f93611435daf7f831aac7a433159e06423c34df3f98184f,PodSandboxId:cf494ddbf7d6d9905f5a17873a3ca1a29aa2bd0465775fc735a7c40618618e96,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_RUNNING,CreatedAt:1722445767627236576,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-wfbt4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9eda8095-ce75-4043-8ddf-6e5663de8212,},Annotations:map[string]string{io.kubernetes.container.hash: f1243e66,io.kubernetes.container.restartCou
nt: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a243770babc3658aea247fc713c71fc93fa1ffeb129584a57f6edb33a122803e,PodSandboxId:6041bfee3d69efce4e7651c2704eda209c766de18ba9cbc9d375b709f37a8765,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722445767742379835,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-qbqb9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4f76f862-d39e-4976-90e6-fb9a25cc485a,},Annotations:map[string]string{io.kubernetes.container.hash: 9bcf3d4b,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort
\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e3d0e33db247e8b10836bed4eff51af3c70dbab815033d0f99c72fab296601fe,PodSandboxId:fe52abfdd68d1b413668bc51f2e6f0a739aa55ede4d310878d640c81581d43fd,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_EXITED,CreatedAt:1722445767509992622,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-234651,io
.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d9982d04d181fbc6333c44627c777728,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d677e670bb4261a20839c274a89255a77d159fe3e55246ea9089551953087862,PodSandboxId:26976089b091358d1c26e7a6dbd31734c34e960ba5f378698d657955d6735c3f,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722445767451944101,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-234651,io.kubernetes.pod.namespace: kube-system,io.kubernete
s.pod.uid: 7f8584f0ef1731ec6b6fb11b7fa84aeb,},Annotations:map[string]string{io.kubernetes.container.hash: a3feaab1,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:99fe8054c8b333dba8f752ca3b7d5a59656d3429dcd52f299ba255df7a8f6e23,PodSandboxId:83eb828e8f77a4ab85e195c75183e35d71fb7f704a6e5392079d8fcda89ed43a,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722445767438737352,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-234651,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3defc3711c
46905c8aec12eb318ecd3b,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5e4d66f773ff46f7a130d9359ac1316e1117516ebef8b32f9b20515706b3a2f8,PodSandboxId:754996ae28b01f43599725b23a8ef7ea035322c1da4ccbda6c379abd7e897fe0,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722445212877412151,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-qbqb9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4f76f862-d39e-4976-90e6-fb9a25cc485a,},Annotat
ions:map[string]string{io.kubernetes.container.hash: 9bcf3d4b,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e8ef655791fe4b0b98612bab1bc9fea7e42c73e5327bdd0654676979cb2e8542,PodSandboxId:1616415c6b8f6463e22fcd4bc693a293f8fdee9b60f2f3d789bc645ace8300a6,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722445212817731273,Labels:map[string]string{io.
kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-nsx9j,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b2cde006-dbb7-4e6f-a5f1-cf7760740104,},Annotations:map[string]string{io.kubernetes.container.hash: 9d1dd63f,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dd9f6c4536535bb3eb07d6fa75fe49579d9cdee5ff6e4c440e0a58359d4f2f08,PodSandboxId:99ba869aafb121f3032d38c0a926a34605210fa5e62304b252efb3ce0f139b7c,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f
6195b9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_EXITED,CreatedAt:1722445201111519856,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-wfbt4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9eda8095-ce75-4043-8ddf-6e5663de8212,},Annotations:map[string]string{io.kubernetes.container.hash: f1243e66,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:631c8cee6152ae1808419f27d4b63fa0a6bcebb15c757dd96090c6267d63e6c2,PodSandboxId:88fd41ca8aad113a54b218f3a36159bdb37464ee5f76ec8ae31f27cd31f7daa1,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string
{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_EXITED,CreatedAt:1722445197367295822,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-jfgs8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5ead85d8-0fd0-4900-8c02-2f23217ca208,},Annotations:map[string]string{io.kubernetes.container.hash: 85f1c355,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e5b3417940cd8bf8065dfefafb918da0e5bfda9fd5f7a2f9ccea0f1615151657,PodSandboxId:8ccf20bcd63f37163f71bce3d8a364eb92d86b0694bc75949c8979e5633c5cc2,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,}
,ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1722445176479177868,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-234651,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7f8584f0ef1731ec6b6fb11b7fa84aeb,},Annotations:map[string]string{io.kubernetes.container.hash: a3feaab1,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ded6421f2f11d7f64ce80d1681e3d4c9d1453ff8aecc860e184ad0865768adde,PodSandboxId:90ca479a21b17fc36c4328bfd82fe7023d144a893b3a1fbc315c66af59696ea2,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caa
cbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_EXITED,CreatedAt:1722445176364856785,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-234651,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3defc3711c46905c8aec12eb318ecd3b,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=28cc32ed-36d8-4278-8d0c-3d95ca2d899a name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	85be3f20ebc79       1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d                                      About a minute ago   Exited              kube-apiserver            4                   dd5c3cc8dfa5b       kube-apiserver-ha-234651
	03bd4d260ce13       38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12                                      2 minutes ago        Running             kube-vip                  1                   7b03581dae976       kube-vip-ha-234651
	8dd73ca3aef16       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   3 minutes ago        Running             busybox                   0                   a35af548ab44d       busybox-fc5497c4f-j8sz6
	e254329cb95e1       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      3 minutes ago        Exited              storage-provisioner       5                   636290b7ff37d       storage-provisioner
	d3c496d013e95       76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e                                      5 minutes ago        Running             kube-controller-manager   2                   fe52abfdd68d1       kube-controller-manager-ha-234651
	5fb8c0770f474       38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12                                      5 minutes ago        Exited              kube-vip                  0                   7b03581dae976       kube-vip-ha-234651
	aad4a823ee5ed       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      5 minutes ago        Running             coredns                   1                   2db80018f77dc       coredns-7db6d8ff4d-nsx9j
	a243770babc36       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      5 minutes ago        Running             coredns                   1                   6041bfee3d69e       coredns-7db6d8ff4d-qbqb9
	97b2617231105       6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46                                      5 minutes ago        Running             kindnet-cni               1                   cf494ddbf7d6d       kindnet-wfbt4
	11197df475b30       55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1                                      5 minutes ago        Running             kube-proxy                1                   6fc26849386a9       kube-proxy-jfgs8
	e3d0e33db247e       76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e                                      5 minutes ago        Exited              kube-controller-manager   1                   fe52abfdd68d1       kube-controller-manager-ha-234651
	d677e670bb426       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      5 minutes ago        Running             etcd                      1                   26976089b0913       etcd-ha-234651
	99fe8054c8b33       3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2                                      5 minutes ago        Running             kube-scheduler            1                   83eb828e8f77a       kube-scheduler-ha-234651
	5e4d66f773ff4       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      14 minutes ago       Exited              coredns                   0                   754996ae28b01       coredns-7db6d8ff4d-qbqb9
	e8ef655791fe4       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      14 minutes ago       Exited              coredns                   0                   1616415c6b8f6       coredns-7db6d8ff4d-nsx9j
	dd9f6c4536535       docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9    15 minutes ago       Exited              kindnet-cni               0                   99ba869aafb12       kindnet-wfbt4
	631c8cee6152a       55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1                                      15 minutes ago       Exited              kube-proxy                0                   88fd41ca8aad1       kube-proxy-jfgs8
	e5b3417940cd8       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      15 minutes ago       Exited              etcd                      0                   8ccf20bcd63f3       etcd-ha-234651
	ded6421f2f11d       3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2                                      15 minutes ago       Exited              kube-scheduler            0                   90ca479a21b17       kube-scheduler-ha-234651
	
	
	==> coredns [5e4d66f773ff46f7a130d9359ac1316e1117516ebef8b32f9b20515706b3a2f8] <==
	[INFO] 10.244.1.3:46716 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000125267s
	[INFO] 10.244.2.2:44128 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00011516s
	[INFO] 10.244.2.2:51451 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000094315s
	[INFO] 10.244.2.2:36147 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.0001399s
	[INFO] 10.244.2.2:36545 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001276628s
	[INFO] 10.244.1.2:43917 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000113879s
	[INFO] 10.244.1.2:52270 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.00173961s
	[INFO] 10.244.1.2:43272 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000090127s
	[INFO] 10.244.1.2:40969 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001253454s
	[INFO] 10.244.1.2:36005 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000101429s
	[INFO] 10.244.1.3:57882 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000155324s
	[INFO] 10.244.1.3:52921 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000104436s
	[INFO] 10.244.1.3:53848 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000118293s
	[INFO] 10.244.1.2:59324 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000114877s
	[INFO] 10.244.1.2:35559 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000080871s
	[INFO] 10.244.1.3:36523 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000149158s
	[INFO] 10.244.1.3:43713 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000113949s
	[INFO] 10.244.2.2:57100 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000104476s
	[INFO] 10.244.2.2:36343 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000075949s
	[INFO] 10.244.1.2:36593 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000110887s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: the server has asked for the client to provide credentials (get services) - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=19, ErrCode=NO_ERROR, debug=""
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: the server has asked for the client to provide credentials (get endpointslices.discovery.k8s.io) - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=19, ErrCode=NO_ERROR, debug=""
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: the server has asked for the client to provide credentials (get namespaces) - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=19, ErrCode=NO_ERROR, debug=""
	
	
	==> coredns [a243770babc3658aea247fc713c71fc93fa1ffeb129584a57f6edb33a122803e] <==
	Trace[1393399778]: ---"Objects listed" error:Unauthorized 12213ms (17:14:48.298)
	Trace[1393399778]: [12.213861048s] [12.213861048s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Unauthorized
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Unauthorized
	[INFO] plugin/kubernetes: Trace[540948719]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (31-Jul-2024 17:14:36.170) (total time: 12128ms):
	Trace[540948719]: ---"Objects listed" error:Unauthorized 12128ms (17:14:48.298)
	Trace[540948719]: [12.128715672s] [12.128715672s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Unauthorized
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Unauthorized
	[INFO] plugin/kubernetes: Trace[2018750601]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (31-Jul-2024 17:14:36.332) (total time: 11970ms):
	Trace[2018750601]: ---"Objects listed" error:Unauthorized 11970ms (17:14:48.302)
	Trace[2018750601]: [11.970310629s] [11.970310629s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Unauthorized
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?resourceVersion=2369": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?resourceVersion=2369": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?resourceVersion=2422": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?resourceVersion=2422": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?resourceVersion=2384": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?resourceVersion=2384": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?resourceVersion=2369": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?resourceVersion=2369": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?resourceVersion=2422": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?resourceVersion=2422": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?resourceVersion=2384": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?resourceVersion=2384": dial tcp 10.96.0.1:443: connect: no route to host
	
	
	==> coredns [aad4a823ee5ed456a27996750e3b981bb7867983b27a76041fde18d92f1aff17] <==
	[INFO] plugin/kubernetes: Trace[1573451597]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (31-Jul-2024 17:14:37.475) (total time: 10832ms):
	Trace[1573451597]: ---"Objects listed" error:Unauthorized 10832ms (17:14:48.307)
	Trace[1573451597]: [10.832571992s] [10.832571992s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Unauthorized
	[INFO] plugin/kubernetes: Trace[1816589961]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (31-Jul-2024 17:14:35.908) (total time: 12398ms):
	Trace[1816589961]: ---"Objects listed" error:Unauthorized 12398ms (17:14:48.307)
	Trace[1816589961]: [12.398988903s] [12.398988903s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Unauthorized
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Unauthorized
	[INFO] plugin/kubernetes: Trace[671007508]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (31-Jul-2024 17:14:36.192) (total time: 12116ms):
	Trace[671007508]: ---"Objects listed" error:Unauthorized 12115ms (17:14:48.307)
	Trace[671007508]: [12.116054058s] [12.116054058s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Unauthorized
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?resourceVersion=2620": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?resourceVersion=2620": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?resourceVersion=2617": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?resourceVersion=2617": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?resourceVersion=2619": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?resourceVersion=2619": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?resourceVersion=2620": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?resourceVersion=2620": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?resourceVersion=2617": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?resourceVersion=2617": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?resourceVersion=2619": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?resourceVersion=2619": dial tcp 10.96.0.1:443: connect: connection refused
	
	
	==> coredns [e8ef655791fe4b0b98612bab1bc9fea7e42c73e5327bdd0654676979cb2e8542] <==
	[INFO] 10.244.1.3:35968 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.003019868s
	[INFO] 10.244.1.3:50760 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000096087s
	[INFO] 10.244.2.2:47184 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001873446s
	[INFO] 10.244.2.2:52684 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000141574s
	[INFO] 10.244.2.2:55915 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000097985s
	[INFO] 10.244.2.2:37641 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000064285s
	[INFO] 10.244.1.2:44538 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000098479s
	[INFO] 10.244.1.2:51050 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000063987s
	[INFO] 10.244.1.2:53102 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000117625s
	[INFO] 10.244.1.3:34472 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000093028s
	[INFO] 10.244.2.2:50493 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000198464s
	[INFO] 10.244.2.2:59387 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000091819s
	[INFO] 10.244.2.2:46587 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000140652s
	[INFO] 10.244.2.2:44332 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000062045s
	[INFO] 10.244.1.2:56100 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000129501s
	[INFO] 10.244.1.2:52904 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000075504s
	[INFO] 10.244.1.3:45513 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000201365s
	[INFO] 10.244.1.3:56964 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000220702s
	[INFO] 10.244.2.2:52612 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000221354s
	[INFO] 10.244.2.2:34847 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000096723s
	[INFO] 10.244.1.2:54098 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00017441s
	[INFO] 10.244.1.2:35429 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000097269s
	[INFO] 10.244.1.2:35606 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000150264s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
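	(The refused connection above simply means the apiserver on this control-plane node was down while these logs were gathered. A minimal sketch for confirming that by hand, assuming `minikube ssh` access to the ha-234651 profile and that crictl and curl are present on the guest as in the stock minikube ISO; these commands are illustrative and not part of the test run:)
	
	    # List any kube-apiserver containers (running or exited) on this node
	    minikube -p ha-234651 ssh -- sudo crictl ps -a --name kube-apiserver
	    # Probe the same local endpoint that kubectl was refused by
	    minikube -p ha-234651 ssh -- curl -sk https://localhost:8443/healthz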
	
	
	==> dmesg <==
	[  +0.000008] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +9.022347] systemd-fstab-generator[596]: Ignoring "noauto" option for root device
	[  +0.054568] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.050413] systemd-fstab-generator[608]: Ignoring "noauto" option for root device
	[  +0.168871] systemd-fstab-generator[622]: Ignoring "noauto" option for root device
	[  +0.140385] systemd-fstab-generator[634]: Ignoring "noauto" option for root device
	[  +0.264161] systemd-fstab-generator[667]: Ignoring "noauto" option for root device
	[  +3.967546] systemd-fstab-generator[766]: Ignoring "noauto" option for root device
	[  +4.664986] systemd-fstab-generator[951]: Ignoring "noauto" option for root device
	[  +0.073312] kauditd_printk_skb: 158 callbacks suppressed
	[  +7.322437] systemd-fstab-generator[1369]: Ignoring "noauto" option for root device
	[  +0.079901] kauditd_printk_skb: 79 callbacks suppressed
	[ +13.802679] kauditd_printk_skb: 21 callbacks suppressed
	[Jul31 17:00] kauditd_printk_skb: 34 callbacks suppressed
	[ +47.907295] kauditd_printk_skb: 26 callbacks suppressed
	[Jul31 17:07] kauditd_printk_skb: 1 callbacks suppressed
	[Jul31 17:09] systemd-fstab-generator[3531]: Ignoring "noauto" option for root device
	[  +0.140478] systemd-fstab-generator[3543]: Ignoring "noauto" option for root device
	[  +0.172285] systemd-fstab-generator[3557]: Ignoring "noauto" option for root device
	[  +0.137076] systemd-fstab-generator[3570]: Ignoring "noauto" option for root device
	[  +0.256187] systemd-fstab-generator[3598]: Ignoring "noauto" option for root device
	[  +7.725144] systemd-fstab-generator[3693]: Ignoring "noauto" option for root device
	[  +0.082305] kauditd_printk_skb: 100 callbacks suppressed
	[ +14.434425] kauditd_printk_skb: 102 callbacks suppressed
	
	
	==> etcd [d677e670bb4261a20839c274a89255a77d159fe3e55246ea9089551953087862] <==
	{"level":"info","ts":"2024-07-31T17:15:06.328465Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"4d6f7e7e767b3ff3 became pre-candidate at term 3"}
	{"level":"info","ts":"2024-07-31T17:15:06.328498Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"4d6f7e7e767b3ff3 received MsgPreVoteResp from 4d6f7e7e767b3ff3 at term 3"}
	{"level":"info","ts":"2024-07-31T17:15:06.328532Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"4d6f7e7e767b3ff3 [logterm: 3, index: 3099] sent MsgPreVote request to 6d43d0f719899a7e at term 3"}
	{"level":"info","ts":"2024-07-31T17:15:07.328385Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"4d6f7e7e767b3ff3 is starting a new election at term 3"}
	{"level":"info","ts":"2024-07-31T17:15:07.328431Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"4d6f7e7e767b3ff3 became pre-candidate at term 3"}
	{"level":"info","ts":"2024-07-31T17:15:07.328445Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"4d6f7e7e767b3ff3 received MsgPreVoteResp from 4d6f7e7e767b3ff3 at term 3"}
	{"level":"info","ts":"2024-07-31T17:15:07.32846Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"4d6f7e7e767b3ff3 [logterm: 3, index: 3099] sent MsgPreVote request to 6d43d0f719899a7e at term 3"}
	{"level":"info","ts":"2024-07-31T17:15:08.327834Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"4d6f7e7e767b3ff3 is starting a new election at term 3"}
	{"level":"info","ts":"2024-07-31T17:15:08.3279Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"4d6f7e7e767b3ff3 became pre-candidate at term 3"}
	{"level":"info","ts":"2024-07-31T17:15:08.327914Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"4d6f7e7e767b3ff3 received MsgPreVoteResp from 4d6f7e7e767b3ff3 at term 3"}
	{"level":"info","ts":"2024-07-31T17:15:08.327928Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"4d6f7e7e767b3ff3 [logterm: 3, index: 3099] sent MsgPreVote request to 6d43d0f719899a7e at term 3"}
	{"level":"warn","ts":"2024-07-31T17:15:08.659862Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"6d43d0f719899a7e","rtt":"10.021726ms","error":"dial tcp 192.168.39.235:2380: i/o timeout"}
	{"level":"warn","ts":"2024-07-31T17:15:08.669139Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"6d43d0f719899a7e","rtt":"883.14µs","error":"dial tcp 192.168.39.235:2380: i/o timeout"}
	{"level":"info","ts":"2024-07-31T17:15:09.328242Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"4d6f7e7e767b3ff3 is starting a new election at term 3"}
	{"level":"info","ts":"2024-07-31T17:15:09.328421Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"4d6f7e7e767b3ff3 became pre-candidate at term 3"}
	{"level":"info","ts":"2024-07-31T17:15:09.328456Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"4d6f7e7e767b3ff3 received MsgPreVoteResp from 4d6f7e7e767b3ff3 at term 3"}
	{"level":"info","ts":"2024-07-31T17:15:09.328489Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"4d6f7e7e767b3ff3 [logterm: 3, index: 3099] sent MsgPreVote request to 6d43d0f719899a7e at term 3"}
	{"level":"info","ts":"2024-07-31T17:15:10.327725Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"4d6f7e7e767b3ff3 is starting a new election at term 3"}
	{"level":"info","ts":"2024-07-31T17:15:10.327774Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"4d6f7e7e767b3ff3 became pre-candidate at term 3"}
	{"level":"info","ts":"2024-07-31T17:15:10.327788Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"4d6f7e7e767b3ff3 received MsgPreVoteResp from 4d6f7e7e767b3ff3 at term 3"}
	{"level":"info","ts":"2024-07-31T17:15:10.327802Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"4d6f7e7e767b3ff3 [logterm: 3, index: 3099] sent MsgPreVote request to 6d43d0f719899a7e at term 3"}
	{"level":"info","ts":"2024-07-31T17:15:11.328422Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"4d6f7e7e767b3ff3 is starting a new election at term 3"}
	{"level":"info","ts":"2024-07-31T17:15:11.32848Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"4d6f7e7e767b3ff3 became pre-candidate at term 3"}
	{"level":"info","ts":"2024-07-31T17:15:11.328494Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"4d6f7e7e767b3ff3 received MsgPreVoteResp from 4d6f7e7e767b3ff3 at term 3"}
	{"level":"info","ts":"2024-07-31T17:15:11.328514Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"4d6f7e7e767b3ff3 [logterm: 3, index: 3099] sent MsgPreVote request to 6d43d0f719899a7e at term 3"}
	
	
	==> etcd [e5b3417940cd8bf8065dfefafb918da0e5bfda9fd5f7a2f9ccea0f1615151657] <==
	2024/07/31 17:09:09 WARNING: [core] [Server #8] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	2024/07/31 17:09:09 WARNING: [core] [Server #8] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	2024/07/31 17:09:09 WARNING: [core] [Server #8] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	2024/07/31 17:09:09 WARNING: [core] [Server #8] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	2024/07/31 17:09:09 WARNING: [core] [Server #8] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	{"level":"warn","ts":"2024-07-31T17:09:09.636276Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.39.243:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-07-31T17:09:09.636393Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.39.243:2379: use of closed network connection"}
	{"level":"info","ts":"2024-07-31T17:09:09.636593Z","caller":"etcdserver/server.go:1462","msg":"skipped leadership transfer; local server is not leader","local-member-id":"4d6f7e7e767b3ff3","current-leader-member-id":"0"}
	{"level":"info","ts":"2024-07-31T17:09:09.636801Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"6d43d0f719899a7e"}
	{"level":"info","ts":"2024-07-31T17:09:09.636948Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"6d43d0f719899a7e"}
	{"level":"info","ts":"2024-07-31T17:09:09.637039Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"6d43d0f719899a7e"}
	{"level":"info","ts":"2024-07-31T17:09:09.637134Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"4d6f7e7e767b3ff3","remote-peer-id":"6d43d0f719899a7e"}
	{"level":"info","ts":"2024-07-31T17:09:09.637191Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"4d6f7e7e767b3ff3","remote-peer-id":"6d43d0f719899a7e"}
	{"level":"info","ts":"2024-07-31T17:09:09.637261Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"4d6f7e7e767b3ff3","remote-peer-id":"6d43d0f719899a7e"}
	{"level":"info","ts":"2024-07-31T17:09:09.637288Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"6d43d0f719899a7e"}
	{"level":"info","ts":"2024-07-31T17:09:09.637296Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"3a31efdaa5cc8d31"}
	{"level":"info","ts":"2024-07-31T17:09:09.637307Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"3a31efdaa5cc8d31"}
	{"level":"info","ts":"2024-07-31T17:09:09.637367Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"3a31efdaa5cc8d31"}
	{"level":"info","ts":"2024-07-31T17:09:09.637492Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"4d6f7e7e767b3ff3","remote-peer-id":"3a31efdaa5cc8d31"}
	{"level":"info","ts":"2024-07-31T17:09:09.63759Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"4d6f7e7e767b3ff3","remote-peer-id":"3a31efdaa5cc8d31"}
	{"level":"info","ts":"2024-07-31T17:09:09.637726Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"4d6f7e7e767b3ff3","remote-peer-id":"3a31efdaa5cc8d31"}
	{"level":"info","ts":"2024-07-31T17:09:09.6378Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"3a31efdaa5cc8d31"}
	{"level":"info","ts":"2024-07-31T17:09:09.640565Z","caller":"embed/etcd.go:579","msg":"stopping serving peer traffic","address":"192.168.39.243:2380"}
	{"level":"info","ts":"2024-07-31T17:09:09.640778Z","caller":"embed/etcd.go:584","msg":"stopped serving peer traffic","address":"192.168.39.243:2380"}
	{"level":"info","ts":"2024-07-31T17:09:09.640812Z","caller":"embed/etcd.go:377","msg":"closed etcd server","name":"ha-234651","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.243:2380"],"advertise-client-urls":["https://192.168.39.243:2379"]}
	
	
	==> kernel <==
	 17:15:11 up 16 min,  0 users,  load average: 0.26, 0.65, 0.49
	Linux ha-234651 5.10.207 #1 SMP Mon Jul 29 15:19:02 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [97b26172311050320f93611435daf7f831aac7a433159e06423c34df3f98184f] <==
	E0731 17:14:41.284165       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.2/tools/cache/reflector.go:232: Failed to watch *v1.Node: failed to list *v1.Node: Unauthorized
	W0731 17:14:45.797737       1 reflector.go:547] pkg/mod/k8s.io/client-go@v0.30.2/tools/cache/reflector.go:232: failed to list *v1.Node: Get "https://10.96.0.1:443/api/v1/nodes?resourceVersion=2620": dial tcp 10.96.0.1:443: connect: no route to host
	E0731 17:14:45.797795       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.2/tools/cache/reflector.go:232: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.96.0.1:443/api/v1/nodes?resourceVersion=2620": dial tcp 10.96.0.1:443: connect: no route to host
	I0731 17:14:48.746419       1 main.go:295] Handling node with IPs: map[192.168.39.243:{}]
	I0731 17:14:48.746462       1 main.go:299] handling current node
	I0731 17:14:48.746494       1 main.go:295] Handling node with IPs: map[192.168.39.235:{}]
	I0731 17:14:48.746500       1 main.go:322] Node ha-234651-m02 has CIDR [10.244.1.0/24] 
	I0731 17:14:48.746659       1 main.go:295] Handling node with IPs: map[192.168.39.216:{}]
	I0731 17:14:48.746679       1 main.go:322] Node ha-234651-m04 has CIDR [10.244.3.0/24] 
	W0731 17:14:53.029816       1 reflector.go:547] pkg/mod/k8s.io/client-go@v0.30.2/tools/cache/reflector.go:232: failed to list *v1.Node: Get "https://10.96.0.1:443/api/v1/nodes?resourceVersion=2620": dial tcp 10.96.0.1:443: connect: no route to host
	E0731 17:14:53.029891       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.2/tools/cache/reflector.go:232: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.96.0.1:443/api/v1/nodes?resourceVersion=2620": dial tcp 10.96.0.1:443: connect: no route to host
	I0731 17:14:58.749678       1 main.go:295] Handling node with IPs: map[192.168.39.243:{}]
	I0731 17:14:58.749727       1 main.go:299] handling current node
	I0731 17:14:58.749743       1 main.go:295] Handling node with IPs: map[192.168.39.235:{}]
	I0731 17:14:58.749748       1 main.go:322] Node ha-234651-m02 has CIDR [10.244.1.0/24] 
	I0731 17:14:58.749948       1 main.go:295] Handling node with IPs: map[192.168.39.216:{}]
	I0731 17:14:58.749982       1 main.go:322] Node ha-234651-m04 has CIDR [10.244.3.0/24] 
	W0731 17:15:07.301732       1 reflector.go:547] pkg/mod/k8s.io/client-go@v0.30.2/tools/cache/reflector.go:232: failed to list *v1.Node: Get "https://10.96.0.1:443/api/v1/nodes?resourceVersion=2620": dial tcp 10.96.0.1:443: connect: no route to host
	E0731 17:15:07.301779       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.2/tools/cache/reflector.go:232: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.96.0.1:443/api/v1/nodes?resourceVersion=2620": dial tcp 10.96.0.1:443: connect: no route to host
	I0731 17:15:08.744776       1 main.go:295] Handling node with IPs: map[192.168.39.243:{}]
	I0731 17:15:08.744831       1 main.go:299] handling current node
	I0731 17:15:08.744846       1 main.go:295] Handling node with IPs: map[192.168.39.235:{}]
	I0731 17:15:08.744852       1 main.go:322] Node ha-234651-m02 has CIDR [10.244.1.0/24] 
	I0731 17:15:08.745012       1 main.go:295] Handling node with IPs: map[192.168.39.216:{}]
	I0731 17:15:08.745033       1 main.go:322] Node ha-234651-m04 has CIDR [10.244.3.0/24] 
	
	
	==> kindnet [dd9f6c4536535bb3eb07d6fa75fe49579d9cdee5ff6e4c440e0a58359d4f2f08] <==
	I0731 17:08:32.129403       1 main.go:299] handling current node
	I0731 17:08:42.134528       1 main.go:295] Handling node with IPs: map[192.168.39.243:{}]
	I0731 17:08:42.134597       1 main.go:299] handling current node
	I0731 17:08:42.134613       1 main.go:295] Handling node with IPs: map[192.168.39.235:{}]
	I0731 17:08:42.134618       1 main.go:322] Node ha-234651-m02 has CIDR [10.244.1.0/24] 
	I0731 17:08:42.134757       1 main.go:295] Handling node with IPs: map[192.168.39.139:{}]
	I0731 17:08:42.134778       1 main.go:322] Node ha-234651-m03 has CIDR [10.244.2.0/24] 
	I0731 17:08:42.134840       1 main.go:295] Handling node with IPs: map[192.168.39.216:{}]
	I0731 17:08:42.134844       1 main.go:322] Node ha-234651-m04 has CIDR [10.244.3.0/24] 
	I0731 17:08:52.129737       1 main.go:295] Handling node with IPs: map[192.168.39.235:{}]
	I0731 17:08:52.129782       1 main.go:322] Node ha-234651-m02 has CIDR [10.244.1.0/24] 
	I0731 17:08:52.129928       1 main.go:295] Handling node with IPs: map[192.168.39.139:{}]
	I0731 17:08:52.129948       1 main.go:322] Node ha-234651-m03 has CIDR [10.244.2.0/24] 
	I0731 17:08:52.130018       1 main.go:295] Handling node with IPs: map[192.168.39.216:{}]
	I0731 17:08:52.130036       1 main.go:322] Node ha-234651-m04 has CIDR [10.244.3.0/24] 
	I0731 17:08:52.130085       1 main.go:295] Handling node with IPs: map[192.168.39.243:{}]
	I0731 17:08:52.130100       1 main.go:299] handling current node
	I0731 17:09:02.126927       1 main.go:295] Handling node with IPs: map[192.168.39.243:{}]
	I0731 17:09:02.127018       1 main.go:299] handling current node
	I0731 17:09:02.127047       1 main.go:295] Handling node with IPs: map[192.168.39.235:{}]
	I0731 17:09:02.127066       1 main.go:322] Node ha-234651-m02 has CIDR [10.244.1.0/24] 
	I0731 17:09:02.127239       1 main.go:295] Handling node with IPs: map[192.168.39.139:{}]
	I0731 17:09:02.127262       1 main.go:322] Node ha-234651-m03 has CIDR [10.244.2.0/24] 
	I0731 17:09:02.127395       1 main.go:295] Handling node with IPs: map[192.168.39.216:{}]
	I0731 17:09:02.127429       1 main.go:322] Node ha-234651-m04 has CIDR [10.244.3.0/24] 
	
	
	==> kube-apiserver [85be3f20ebc7900e3256473a0facbee8f719800984555243bf73c81abad28de1] <==
	E0731 17:14:48.306615       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: etcdserver: request timed out
	E0731 17:14:48.306740       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, etcdserver: request timed out]"
	E0731 17:14:48.306834       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, etcdserver: request timed out]"
	E0731 17:14:48.306893       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, etcdserver: request timed out]"
	I0731 17:14:48.307028       1 trace.go:236] Trace[584994387]: "List(recursive=true) etcd3" audit-id:,key:/persistentvolumes,resourceVersion:,resourceVersionMatch:,limit:10000,continue: (31-Jul-2024 17:14:35.305) (total time: 13001ms):
	Trace[584994387]: [13.001536809s] [13.001536809s] END
	W0731 17:14:48.307287       1 reflector.go:547] storage/cacher.go:/persistentvolumes: failed to list *core.PersistentVolume: etcdserver: request timed out
	I0731 17:14:48.307454       1 trace.go:236] Trace[1961985492]: "Reflector ListAndWatch" name:storage/cacher.go:/persistentvolumes (31-Jul-2024 17:14:35.305) (total time: 13001ms):
	Trace[1961985492]: ---"Objects listed" error:etcdserver: request timed out 13001ms (17:14:48.307)
	Trace[1961985492]: [13.001980872s] [13.001980872s] END
	E0731 17:14:48.307968       1 cacher.go:475] cacher (persistentvolumes): unexpected ListAndWatch error: failed to list *core.PersistentVolume: etcdserver: request timed out; reinitializing...
	E0731 17:14:48.307208       1 status.go:71] apiserver received an error that is not an metav1.Status: rpctypes.EtcdError{code:0xe, desc:"etcdserver: request timed out"}: etcdserver: request timed out
	I0731 17:14:48.308080       1 trace.go:236] Trace[1673236993]: "List" accept:application/vnd.kubernetes.protobuf, */*,audit-id:9c672e6a-6a09-467f-a6a5-032b25bd9a7e,client:127.0.0.1,api-group:,api-version:v1,name:,subresource:,namespace:,protocol:HTTP/2.0,resource:persistentvolumes,scope:cluster,url:/api/v1/persistentvolumes,user-agent:kube-apiserver/v1.30.3 (linux/amd64) kubernetes/6fc0a69,verb:LIST (31-Jul-2024 17:14:36.791) (total time: 11516ms):
	Trace[1673236993]: ["List(recursive=true) etcd3" audit-id:9c672e6a-6a09-467f-a6a5-032b25bd9a7e,key:/persistentvolumes,resourceVersion:0,resourceVersionMatch:,limit:500,continue: 11516ms (17:14:36.791)]
	Trace[1673236993]: [11.516138371s] [11.516138371s] END
	W0731 17:14:48.308456       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: etcdserver: request timed out
	I0731 17:14:48.308643       1 trace.go:236] Trace[442588294]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (31-Jul-2024 17:14:36.791) (total time: 11517ms):
	Trace[442588294]: ---"Objects listed" error:etcdserver: request timed out 11517ms (17:14:48.308)
	Trace[442588294]: [11.517487105s] [11.517487105s] END
	E0731 17:14:48.308676       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: etcdserver: request timed out
	E0731 17:14:55.294164       1 status.go:71] apiserver received an error that is not an metav1.Status: rpctypes.EtcdError{code:0xe, desc:"etcdserver: request timed out"}: etcdserver: request timed out
	I0731 17:14:55.294281       1 trace.go:236] Trace[191912644]: "Get" accept:application/vnd.kubernetes.protobuf, */*,audit-id:b4ad2375-835b-4cf7-8ff4-a55643b1b976,client:127.0.0.1,api-group:scheduling.k8s.io,api-version:v1,name:system-node-critical,subresource:,namespace:,protocol:HTTP/2.0,resource:priorityclasses,scope:resource,url:/apis/scheduling.k8s.io/v1/priorityclasses/system-node-critical,user-agent:kube-apiserver/v1.30.3 (linux/amd64) kubernetes/6fc0a69,verb:GET (31-Jul-2024 17:14:48.290) (total time: 7003ms):
	Trace[191912644]: [7.003363647s] [7.003363647s] END
	W0731 17:14:55.295078       1 storage_scheduling.go:106] unable to get PriorityClass system-node-critical: etcdserver: request timed out. Retrying...
	F0731 17:14:55.295126       1 hooks.go:203] PostStartHook "scheduling/bootstrap-system-priority-classes" failed: unable to add default system priority classes: timed out waiting for the condition
	
	
	==> kube-controller-manager [d3c496d013e95de87039db5d01f03c4419be22469ae9a37ce56d28e927b11599] <==
	W0731 17:15:00.841855       1 client_builder_dynamic.go:197] get or create service account failed: Get "https://192.168.39.243:8443/api/v1/namespaces/kube-system/serviceaccounts/node-controller": dial tcp 192.168.39.243:8443: connect: connection refused
	W0731 17:15:01.842746       1 client_builder_dynamic.go:197] get or create service account failed: Get "https://192.168.39.243:8443/api/v1/namespaces/kube-system/serviceaccounts/node-controller": dial tcp 192.168.39.243:8443: connect: connection refused
	E0731 17:15:02.795981       1 gc_controller.go:153] "Failed to get node" err="node \"ha-234651-m03\" not found" logger="pod-garbage-collector-controller" node="ha-234651-m03"
	E0731 17:15:02.796015       1 gc_controller.go:153] "Failed to get node" err="node \"ha-234651-m03\" not found" logger="pod-garbage-collector-controller" node="ha-234651-m03"
	E0731 17:15:02.796022       1 gc_controller.go:153] "Failed to get node" err="node \"ha-234651-m03\" not found" logger="pod-garbage-collector-controller" node="ha-234651-m03"
	E0731 17:15:02.796028       1 gc_controller.go:153] "Failed to get node" err="node \"ha-234651-m03\" not found" logger="pod-garbage-collector-controller" node="ha-234651-m03"
	E0731 17:15:02.796032       1 gc_controller.go:153] "Failed to get node" err="node \"ha-234651-m03\" not found" logger="pod-garbage-collector-controller" node="ha-234651-m03"
	W0731 17:15:02.796633       1 client_builder_dynamic.go:197] get or create service account failed: Get "https://192.168.39.243:8443/api/v1/namespaces/kube-system/serviceaccounts/pod-garbage-collector": dial tcp 192.168.39.243:8443: connect: connection refused
	W0731 17:15:03.298047       1 client_builder_dynamic.go:197] get or create service account failed: Get "https://192.168.39.243:8443/api/v1/namespaces/kube-system/serviceaccounts/pod-garbage-collector": dial tcp 192.168.39.243:8443: connect: connection refused
	W0731 17:15:03.843593       1 client_builder_dynamic.go:197] get or create service account failed: Get "https://192.168.39.243:8443/api/v1/namespaces/kube-system/serviceaccounts/node-controller": dial tcp 192.168.39.243:8443: connect: connection refused
	E0731 17:15:03.843664       1 node_lifecycle_controller.go:715] "Failed while getting a Node to retry updating node health. Probably Node was deleted" logger="node-lifecycle-controller" node="ha-234651-m02"
	E0731 17:15:03.843679       1 node_lifecycle_controller.go:720] "Update health of Node from Controller error, Skipping - no pods will be evicted" err="Get \"https://192.168.39.243:8443/api/v1/nodes/ha-234651-m02\": failed to get token for kube-system/node-controller: timed out waiting for the condition" logger="node-lifecycle-controller" node=""
	W0731 17:15:04.299221       1 client_builder_dynamic.go:197] get or create service account failed: Get "https://192.168.39.243:8443/api/v1/namespaces/kube-system/serviceaccounts/pod-garbage-collector": dial tcp 192.168.39.243:8443: connect: connection refused
	W0731 17:15:04.306863       1 client_builder_dynamic.go:197] get or create service account failed: Get "https://192.168.39.243:8443/api/v1/namespaces/kube-system/serviceaccounts/resourcequota-controller": dial tcp 192.168.39.243:8443: connect: connection refused
	W0731 17:15:04.808205       1 client_builder_dynamic.go:197] get or create service account failed: Get "https://192.168.39.243:8443/api/v1/namespaces/kube-system/serviceaccounts/resourcequota-controller": dial tcp 192.168.39.243:8443: connect: connection refused
	W0731 17:15:05.809289       1 client_builder_dynamic.go:197] get or create service account failed: Get "https://192.168.39.243:8443/api/v1/namespaces/kube-system/serviceaccounts/resourcequota-controller": dial tcp 192.168.39.243:8443: connect: connection refused
	W0731 17:15:06.300650       1 client_builder_dynamic.go:197] get or create service account failed: Get "https://192.168.39.243:8443/api/v1/namespaces/kube-system/serviceaccounts/pod-garbage-collector": dial tcp 192.168.39.243:8443: connect: connection refused
	E0731 17:15:06.300818       1 gc_controller.go:278] "Error while getting node" err="Get \"https://192.168.39.243:8443/api/v1/nodes/ha-234651-m03\": failed to get token for kube-system/pod-garbage-collector: timed out waiting for the condition" logger="pod-garbage-collector-controller" node="ha-234651-m03"
	W0731 17:15:07.810667       1 client_builder_dynamic.go:197] get or create service account failed: Get "https://192.168.39.243:8443/api/v1/namespaces/kube-system/serviceaccounts/resourcequota-controller": dial tcp 192.168.39.243:8443: connect: connection refused
	E0731 17:15:07.810794       1 resource_quota_controller.go:440] failed to discover resources: Get "https://192.168.39.243:8443/api": failed to get token for kube-system/resourcequota-controller: timed out waiting for the condition
	I0731 17:15:08.844125       1 node_lifecycle_controller.go:1227] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	W0731 17:15:08.844847       1 client_builder_dynamic.go:197] get or create service account failed: Get "https://192.168.39.243:8443/api/v1/namespaces/kube-system/serviceaccounts/node-controller": dial tcp 192.168.39.243:8443: connect: connection refused
	W0731 17:15:09.345882       1 client_builder_dynamic.go:197] get or create service account failed: Get "https://192.168.39.243:8443/api/v1/namespaces/kube-system/serviceaccounts/node-controller": dial tcp 192.168.39.243:8443: connect: connection refused
	W0731 17:15:10.346851       1 client_builder_dynamic.go:197] get or create service account failed: Get "https://192.168.39.243:8443/api/v1/namespaces/kube-system/serviceaccounts/node-controller": dial tcp 192.168.39.243:8443: connect: connection refused
	W0731 17:15:11.285291       1 client_builder_dynamic.go:197] get or create service account failed: Get "https://192.168.39.243:8443/api/v1/namespaces/kube-system/serviceaccounts/generic-garbage-collector": dial tcp 192.168.39.243:8443: connect: connection refused
	
	
	==> kube-controller-manager [e3d0e33db247e8b10836bed4eff51af3c70dbab815033d0f99c72fab296601fe] <==
	I0731 17:09:28.924323       1 serving.go:380] Generated self-signed cert in-memory
	I0731 17:09:29.519803       1 controllermanager.go:189] "Starting" version="v1.30.3"
	I0731 17:09:29.519839       1 controllermanager.go:191] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0731 17:09:29.521449       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0731 17:09:29.521633       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0731 17:09:29.521671       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0731 17:09:29.521641       1 secure_serving.go:213] Serving securely on 127.0.0.1:10257
	E0731 17:09:49.946085       1 controllermanager.go:234] "Error building controller context" err="failed to wait for apiserver being healthy: timed out waiting for the condition: failed to get apiserver /healthz status: Get \"https://192.168.39.243:8443/healthz\": dial tcp 192.168.39.243:8443: connect: connection refused"
	
	
	==> kube-proxy [11197df475b3001b3fd4ae5f027e3f3d3524805332a4f3556041ef5a33fba07d] <==
	E0731 17:13:16.391709       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=2596": dial tcp 192.168.39.254:8443: connect: no route to host
	W0731 17:13:16.391996       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=2600": dial tcp 192.168.39.254:8443: connect: no route to host
	E0731 17:13:16.392080       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=2600": dial tcp 192.168.39.254:8443: connect: no route to host
	W0731 17:13:25.606740       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=2600": dial tcp 192.168.39.254:8443: connect: no route to host
	E0731 17:13:25.606799       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=2600": dial tcp 192.168.39.254:8443: connect: no route to host
	W0731 17:13:28.678153       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=2596": dial tcp 192.168.39.254:8443: connect: no route to host
	E0731 17:13:28.678229       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=2596": dial tcp 192.168.39.254:8443: connect: no route to host
	W0731 17:13:31.750418       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-234651&resourceVersion=2605": dial tcp 192.168.39.254:8443: connect: no route to host
	E0731 17:13:31.750542       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-234651&resourceVersion=2605": dial tcp 192.168.39.254:8443: connect: no route to host
	W0731 17:13:47.109934       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=2600": dial tcp 192.168.39.254:8443: connect: no route to host
	E0731 17:13:47.110792       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=2600": dial tcp 192.168.39.254:8443: connect: no route to host
	W0731 17:13:56.326639       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-234651&resourceVersion=2605": dial tcp 192.168.39.254:8443: connect: no route to host
	E0731 17:13:56.327318       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-234651&resourceVersion=2605": dial tcp 192.168.39.254:8443: connect: no route to host
	W0731 17:13:56.327833       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=2596": dial tcp 192.168.39.254:8443: connect: no route to host
	E0731 17:13:56.328041       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=2596": dial tcp 192.168.39.254:8443: connect: no route to host
	W0731 17:14:14.758221       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=2600": dial tcp 192.168.39.254:8443: connect: no route to host
	E0731 17:14:14.758484       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=2600": dial tcp 192.168.39.254:8443: connect: no route to host
	W0731 17:14:30.118260       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-234651&resourceVersion=2605": dial tcp 192.168.39.254:8443: connect: no route to host
	E0731 17:14:30.118544       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-234651&resourceVersion=2605": dial tcp 192.168.39.254:8443: connect: no route to host
	W0731 17:14:45.479169       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=2596": dial tcp 192.168.39.254:8443: connect: no route to host
	E0731 17:14:45.479450       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=2596": dial tcp 192.168.39.254:8443: connect: no route to host
	W0731 17:15:06.982478       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-234651&resourceVersion=2605": dial tcp 192.168.39.254:8443: connect: no route to host
	E0731 17:15:06.982619       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-234651&resourceVersion=2605": dial tcp 192.168.39.254:8443: connect: no route to host
	W0731 17:15:06.982961       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=2600": dial tcp 192.168.39.254:8443: connect: no route to host
	E0731 17:15:06.983050       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=2600": dial tcp 192.168.39.254:8443: connect: no route to host
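	Note: the %!s(MISSING), %!C(MISSING), %!D(MISSING) and %!F(MISSING) sequences in the kube-proxy URLs above are not corruption in this report. They are ordinary Go fmt output: the percent-encoded query string (%21, %2C, %3D, %2F) ended up being used as a printf-style format string, so each encoded byte is parsed as a verb with a missing operand. A minimal standalone Go sketch reproducing the effect (illustration only, not minikube or client-go code):

	package main

	import "fmt"

	func main() {
		// Percent-encoded query taken from the log lines above; %3D is '=' URL-encoded.
		query := "fieldSelector=metadata.name%3Dha-234651"

		// Using the URL itself as the format string: %3D is read as width 3 + verb D
		// with no operand, which fmt renders as "%!D(MISSING)", exactly as in the logs.
		fmt.Printf(query + "\n") // fieldSelector=metadata.name%!D(MISSING)ha-234651

		// Passing the URL as an operand to an explicit %s verb leaves it intact.
		fmt.Printf("%s\n", query) // fieldSelector=metadata.name%3Dha-234651
	}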
	
	
	==> kube-proxy [631c8cee6152ae1808419f27d4b63fa0a6bcebb15c757dd96090c6267d63e6c2] <==
	E0731 17:07:51.205891       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1856": dial tcp 192.168.39.254:8443: connect: no route to host
	W0731 17:07:51.205970       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-234651&resourceVersion=1793": dial tcp 192.168.39.254:8443: connect: no route to host
	E0731 17:07:51.206000       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-234651&resourceVersion=1793": dial tcp 192.168.39.254:8443: connect: no route to host
	W0731 17:07:51.206164       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1835": dial tcp 192.168.39.254:8443: connect: no route to host
	E0731 17:07:51.206227       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1835": dial tcp 192.168.39.254:8443: connect: no route to host
	W0731 17:07:57.605884       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-234651&resourceVersion=1793": dial tcp 192.168.39.254:8443: connect: no route to host
	E0731 17:07:57.605958       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-234651&resourceVersion=1793": dial tcp 192.168.39.254:8443: connect: no route to host
	W0731 17:07:57.606037       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1835": dial tcp 192.168.39.254:8443: connect: no route to host
	E0731 17:07:57.606083       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1835": dial tcp 192.168.39.254:8443: connect: no route to host
	W0731 17:07:57.606055       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1856": dial tcp 192.168.39.254:8443: connect: no route to host
	E0731 17:07:57.606143       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1856": dial tcp 192.168.39.254:8443: connect: no route to host
	W0731 17:08:09.575524       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-234651&resourceVersion=1793": dial tcp 192.168.39.254:8443: connect: no route to host
	E0731 17:08:09.576213       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-234651&resourceVersion=1793": dial tcp 192.168.39.254:8443: connect: no route to host
	W0731 17:08:09.576233       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1856": dial tcp 192.168.39.254:8443: connect: no route to host
	W0731 17:08:09.575493       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1835": dial tcp 192.168.39.254:8443: connect: no route to host
	E0731 17:08:09.576392       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1835": dial tcp 192.168.39.254:8443: connect: no route to host
	E0731 17:08:09.576440       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1856": dial tcp 192.168.39.254:8443: connect: no route to host
	W0731 17:08:28.007089       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-234651&resourceVersion=1793": dial tcp 192.168.39.254:8443: connect: no route to host
	E0731 17:08:28.007254       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-234651&resourceVersion=1793": dial tcp 192.168.39.254:8443: connect: no route to host
	W0731 17:08:28.007079       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1856": dial tcp 192.168.39.254:8443: connect: no route to host
	E0731 17:08:28.007397       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1856": dial tcp 192.168.39.254:8443: connect: no route to host
	W0731 17:08:31.078543       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1835": dial tcp 192.168.39.254:8443: connect: no route to host
	E0731 17:08:31.078697       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1835": dial tcp 192.168.39.254:8443: connect: no route to host
	W0731 17:09:07.943276       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-234651&resourceVersion=1793": dial tcp 192.168.39.254:8443: connect: no route to host
	E0731 17:09:07.943401       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-234651&resourceVersion=1793": dial tcp 192.168.39.254:8443: connect: no route to host
	
	
	==> kube-scheduler [99fe8054c8b333dba8f752ca3b7d5a59656d3429dcd52f299ba255df7a8f6e23] <==
	E0731 17:14:42.685978       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0731 17:14:43.162050       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0731 17:14:43.162161       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0731 17:14:43.302781       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0731 17:14:43.302898       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0731 17:14:43.607739       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0731 17:14:43.607877       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0731 17:14:44.680956       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0731 17:14:44.681017       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0731 17:14:45.004295       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0731 17:14:45.004554       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0731 17:14:45.295396       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0731 17:14:45.295514       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0731 17:14:48.714613       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0731 17:14:48.714666       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0731 17:14:49.780745       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0731 17:14:49.780853       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0731 17:14:50.626117       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0731 17:14:50.626250       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0731 17:14:51.260880       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0731 17:14:51.260925       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0731 17:14:52.127406       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0731 17:14:52.127510       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0731 17:15:09.410176       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: Get "https://192.168.39.243:8443/api/v1/pods?fieldSelector=status.phase%3DSucceeded%!C(MISSING)status.phase%3DFailed&resourceVersion=2612": dial tcp 192.168.39.243:8443: connect: connection refused
	E0731 17:15:09.410307       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: Get "https://192.168.39.243:8443/api/v1/pods?fieldSelector=status.phase%3DSucceeded%!C(MISSING)status.phase%3DFailed&resourceVersion=2612": dial tcp 192.168.39.243:8443: connect: connection refused
	
	
	==> kube-scheduler [ded6421f2f11d7f64ce80d1681e3d4c9d1453ff8aecc860e184ad0865768adde] <==
	W0731 17:09:04.792897       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0731 17:09:04.792975       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0731 17:09:04.914103       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0731 17:09:04.914192       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0731 17:09:07.307475       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0731 17:09:07.307516       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0731 17:09:07.586862       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0731 17:09:07.586994       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0731 17:09:07.868625       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0731 17:09:07.868673       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0731 17:09:07.954793       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0731 17:09:07.954843       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0731 17:09:08.086478       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0731 17:09:08.086556       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0731 17:09:08.135035       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0731 17:09:08.135109       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0731 17:09:08.524801       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0731 17:09:08.524893       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0731 17:09:08.546473       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0731 17:09:08.546565       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0731 17:09:08.658250       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0731 17:09:08.658437       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0731 17:09:09.119633       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0731 17:09:09.119662       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0731 17:09:09.564811       1 run.go:74] "command failed" err="finished without leader elect"
	
	
	==> kubelet <==
	Jul 31 17:14:55 ha-234651 kubelet[1377]: I0731 17:14:55.803956    1377 scope.go:117] "RemoveContainer" containerID="f674e9fa7967bc4f5ac1136de9ccd505e7fe0f1638629fda624b6de0e2587f08"
	Jul 31 17:14:55 ha-234651 kubelet[1377]: I0731 17:14:55.804309    1377 scope.go:117] "RemoveContainer" containerID="85be3f20ebc7900e3256473a0facbee8f719800984555243bf73c81abad28de1"
	Jul 31 17:14:55 ha-234651 kubelet[1377]: E0731 17:14:55.804809    1377 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-apiserver pod=kube-apiserver-ha-234651_kube-system(399c842a2f5d3312c2f955f494ccfe00)\"" pod="kube-system/kube-apiserver-ha-234651" podUID="399c842a2f5d3312c2f955f494ccfe00"
	Jul 31 17:14:57 ha-234651 kubelet[1377]: I0731 17:14:57.119656    1377 scope.go:117] "RemoveContainer" containerID="85be3f20ebc7900e3256473a0facbee8f719800984555243bf73c81abad28de1"
	Jul 31 17:14:57 ha-234651 kubelet[1377]: E0731 17:14:57.120474    1377 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-apiserver pod=kube-apiserver-ha-234651_kube-system(399c842a2f5d3312c2f955f494ccfe00)\"" pod="kube-system/kube-apiserver-ha-234651" podUID="399c842a2f5d3312c2f955f494ccfe00"
	Jul 31 17:14:57 ha-234651 kubelet[1377]: E0731 17:14:57.765733    1377 kubelet_node_status.go:544] "Error updating node status, will retry" err="error getting node \"ha-234651\": Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-234651?resourceVersion=0&timeout=10s\": dial tcp 192.168.39.254:8443: connect: no route to host"
	Jul 31 17:14:57 ha-234651 kubelet[1377]: I0731 17:14:57.765728    1377 status_manager.go:853] "Failed to get status for pod" podUID="1ec63e59dba1cb738446a43a6fa45bd3" pod="kube-system/kube-vip-ha-234651" err="Get \"https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/pods/kube-vip-ha-234651\": dial tcp 192.168.39.254:8443: connect: no route to host"
	Jul 31 17:14:58 ha-234651 kubelet[1377]: I0731 17:14:58.442735    1377 scope.go:117] "RemoveContainer" containerID="85be3f20ebc7900e3256473a0facbee8f719800984555243bf73c81abad28de1"
	Jul 31 17:14:58 ha-234651 kubelet[1377]: E0731 17:14:58.443640    1377 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-apiserver pod=kube-apiserver-ha-234651_kube-system(399c842a2f5d3312c2f955f494ccfe00)\"" pod="kube-system/kube-apiserver-ha-234651" podUID="399c842a2f5d3312c2f955f494ccfe00"
	Jul 31 17:15:00 ha-234651 kubelet[1377]: E0731 17:15:00.837809    1377 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ha-234651?timeout=10s\": dial tcp 192.168.39.254:8443: connect: no route to host" interval="7s"
	Jul 31 17:15:00 ha-234651 kubelet[1377]: E0731 17:15:00.837963    1377 kubelet_node_status.go:544] "Error updating node status, will retry" err="error getting node \"ha-234651\": Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-234651?timeout=10s\": dial tcp 192.168.39.254:8443: connect: no route to host"
	Jul 31 17:15:00 ha-234651 kubelet[1377]: I0731 17:15:00.837841    1377 status_manager.go:853] "Failed to get status for pod" podUID="87455537-bdb8-438b-8122-db85bed01d09" pod="kube-system/storage-provisioner" err="Get \"https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/pods/storage-provisioner\": dial tcp 192.168.39.254:8443: connect: no route to host"
	Jul 31 17:15:00 ha-234651 kubelet[1377]: I0731 17:15:00.863455    1377 scope.go:117] "RemoveContainer" containerID="e254329cb95e187e2c887d98abcd0e7789ef5b45371d528f1b02e554af8fa7d8"
	Jul 31 17:15:00 ha-234651 kubelet[1377]: E0731 17:15:00.863765    1377 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(87455537-bdb8-438b-8122-db85bed01d09)\"" pod="kube-system/storage-provisioner" podUID="87455537-bdb8-438b-8122-db85bed01d09"
	Jul 31 17:15:03 ha-234651 kubelet[1377]: I0731 17:15:03.909820    1377 status_manager.go:853] "Failed to get status for pod" podUID="399c842a2f5d3312c2f955f494ccfe00" pod="kube-system/kube-apiserver-ha-234651" err="Get \"https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-234651\": dial tcp 192.168.39.254:8443: connect: no route to host"
	Jul 31 17:15:03 ha-234651 kubelet[1377]: E0731 17:15:03.909766    1377 event.go:368] "Unable to write event (may retry after sleeping)" err="Patch \"https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/events/kube-apiserver-ha-234651.17e75b2d5733b784\": dial tcp 192.168.39.254:8443: connect: no route to host" event="&Event{ObjectMeta:{kube-apiserver-ha-234651.17e75b2d5733b784  kube-system   2017 0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:kube-apiserver-ha-234651,UID:399c842a2f5d3312c2f955f494ccfe00,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Unhealthy,Message:Readiness probe failed: HTTP probe failed with statuscode: 500,Source:EventSource{Component:kubelet,Host:ha-234651,},FirstTimestamp:2024-07-31 17:07:12 +0000 UTC,LastTimestamp:2024-07-31 17:12:25.788701639 +0000 UTC m=+763.090489139,Count:26,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ha-234651,}"
	Jul 31 17:15:03 ha-234651 kubelet[1377]: E0731 17:15:03.909921    1377 kubelet_node_status.go:544] "Error updating node status, will retry" err="error getting node \"ha-234651\": Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-234651?timeout=10s\": dial tcp 192.168.39.254:8443: connect: no route to host"
	Jul 31 17:15:06 ha-234651 kubelet[1377]: W0731 17:15:06.981849    1377 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/csidrivers?resourceVersion=2376": dial tcp 192.168.39.254:8443: connect: no route to host
	Jul 31 17:15:06 ha-234651 kubelet[1377]: E0731 17:15:06.981955    1377 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/csidrivers?resourceVersion=2376": dial tcp 192.168.39.254:8443: connect: no route to host
	Jul 31 17:15:06 ha-234651 kubelet[1377]: I0731 17:15:06.982041    1377 status_manager.go:853] "Failed to get status for pod" podUID="1ec63e59dba1cb738446a43a6fa45bd3" pod="kube-system/kube-vip-ha-234651" err="Get \"https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/pods/kube-vip-ha-234651\": dial tcp 192.168.39.254:8443: connect: no route to host"
	Jul 31 17:15:06 ha-234651 kubelet[1377]: E0731 17:15:06.982765    1377 kubelet_node_status.go:544] "Error updating node status, will retry" err="error getting node \"ha-234651\": Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-234651?timeout=10s\": dial tcp 192.168.39.254:8443: connect: no route to host"
	Jul 31 17:15:10 ha-234651 kubelet[1377]: E0731 17:15:10.053831    1377 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ha-234651?timeout=10s\": dial tcp 192.168.39.254:8443: connect: no route to host" interval="7s"
	Jul 31 17:15:10 ha-234651 kubelet[1377]: I0731 17:15:10.054279    1377 status_manager.go:853] "Failed to get status for pod" podUID="399c842a2f5d3312c2f955f494ccfe00" pod="kube-system/kube-apiserver-ha-234651" err="Get \"https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-234651\": dial tcp 192.168.39.254:8443: connect: no route to host"
	Jul 31 17:15:10 ha-234651 kubelet[1377]: E0731 17:15:10.054535    1377 kubelet_node_status.go:544] "Error updating node status, will retry" err="error getting node \"ha-234651\": Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-234651?timeout=10s\": dial tcp 192.168.39.254:8443: connect: no route to host"
	Jul 31 17:15:10 ha-234651 kubelet[1377]: E0731 17:15:10.054573    1377 kubelet_node_status.go:531] "Unable to update node status" err="update node status exceeds retry count"
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0731 17:15:10.567570   34975 logs.go:258] failed to output last start logs: failed to read file /home/jenkins/minikube-integration/19349-8084/.minikube/logs/lastStart.txt: bufio.Scanner: token too long

                                                
                                                
** /stderr **
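The "bufio.Scanner: token too long" error in the stderr block above is Go's bufio.ErrTooLong: by default a bufio.Scanner refuses lines longer than 64 KiB, and lastStart.txt evidently contains one. A minimal sketch of reading such a file with an enlarged per-line buffer (hypothetical helper, not minikube's actual logs.go):

	package main

	import (
		"bufio"
		"fmt"
		"os"
	)

	// readLongLines scans a file line by line with a 10 MiB per-line limit,
	// avoiding bufio.ErrTooLong ("token too long") on very long lines.
	func readLongLines(path string) ([]string, error) {
		f, err := os.Open(path)
		if err != nil {
			return nil, err
		}
		defer f.Close()

		sc := bufio.NewScanner(f)
		sc.Buffer(make([]byte, 0, 64*1024), 10*1024*1024) // default max is 64 KiB

		var lines []string
		for sc.Scan() {
			lines = append(lines, sc.Text())
		}
		return lines, sc.Err()
	}

	func main() {
		lines, err := readLongLines("lastStart.txt") // hypothetical local path
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		fmt.Printf("read %d lines\n", len(lines))
	}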
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-234651 -n ha-234651
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-234651 -n ha-234651: exit status 2 (210.470939ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "ha-234651" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/StopCluster (172.82s)

                                                
                                    
x
+
TestMultiNode/serial/RestartKeepsNodes (335.37s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-498089
multinode_test.go:321: (dbg) Run:  out/minikube-linux-amd64 stop -p multinode-498089
multinode_test.go:321: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p multinode-498089: exit status 82 (2m1.756740306s)

                                                
                                                
-- stdout --
	* Stopping node "multinode-498089-m03"  ...
	* Stopping node "multinode-498089-m02"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:323: failed to run minikube stop. args "out/minikube-linux-amd64 node list -p multinode-498089" : exit status 82
multinode_test.go:326: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-498089 --wait=true -v=8 --alsologtostderr
E0731 17:32:57.005598   15259 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/functional-496242/client.crt: no such file or directory
E0731 17:34:05.346143   15259 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/addons-190022/client.crt: no such file or directory
multinode_test.go:326: (dbg) Done: out/minikube-linux-amd64 start -p multinode-498089 --wait=true -v=8 --alsologtostderr: (3m31.438249276s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-498089
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p multinode-498089 -n multinode-498089
helpers_test.go:244: <<< TestMultiNode/serial/RestartKeepsNodes FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiNode/serial/RestartKeepsNodes]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p multinode-498089 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p multinode-498089 logs -n 25: (1.455342599s)
helpers_test.go:252: TestMultiNode/serial/RestartKeepsNodes logs: 
-- stdout --
	
	==> Audit <==
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| Command |                                          Args                                           |     Profile      |  User   | Version |     Start Time      |      End Time       |
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| ssh     | multinode-498089 ssh -n                                                                 | multinode-498089 | jenkins | v1.33.1 | 31 Jul 24 17:29 UTC | 31 Jul 24 17:29 UTC |
	|         | multinode-498089-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-498089 cp multinode-498089-m02:/home/docker/cp-test.txt                       | multinode-498089 | jenkins | v1.33.1 | 31 Jul 24 17:29 UTC | 31 Jul 24 17:29 UTC |
	|         | /tmp/TestMultiNodeserialCopyFile179218134/001/cp-test_multinode-498089-m02.txt          |                  |         |         |                     |                     |
	| ssh     | multinode-498089 ssh -n                                                                 | multinode-498089 | jenkins | v1.33.1 | 31 Jul 24 17:29 UTC | 31 Jul 24 17:29 UTC |
	|         | multinode-498089-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-498089 cp multinode-498089-m02:/home/docker/cp-test.txt                       | multinode-498089 | jenkins | v1.33.1 | 31 Jul 24 17:29 UTC | 31 Jul 24 17:29 UTC |
	|         | multinode-498089:/home/docker/cp-test_multinode-498089-m02_multinode-498089.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-498089 ssh -n                                                                 | multinode-498089 | jenkins | v1.33.1 | 31 Jul 24 17:29 UTC | 31 Jul 24 17:29 UTC |
	|         | multinode-498089-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-498089 ssh -n multinode-498089 sudo cat                                       | multinode-498089 | jenkins | v1.33.1 | 31 Jul 24 17:29 UTC | 31 Jul 24 17:29 UTC |
	|         | /home/docker/cp-test_multinode-498089-m02_multinode-498089.txt                          |                  |         |         |                     |                     |
	| cp      | multinode-498089 cp multinode-498089-m02:/home/docker/cp-test.txt                       | multinode-498089 | jenkins | v1.33.1 | 31 Jul 24 17:29 UTC | 31 Jul 24 17:29 UTC |
	|         | multinode-498089-m03:/home/docker/cp-test_multinode-498089-m02_multinode-498089-m03.txt |                  |         |         |                     |                     |
	| ssh     | multinode-498089 ssh -n                                                                 | multinode-498089 | jenkins | v1.33.1 | 31 Jul 24 17:29 UTC | 31 Jul 24 17:29 UTC |
	|         | multinode-498089-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-498089 ssh -n multinode-498089-m03 sudo cat                                   | multinode-498089 | jenkins | v1.33.1 | 31 Jul 24 17:29 UTC | 31 Jul 24 17:29 UTC |
	|         | /home/docker/cp-test_multinode-498089-m02_multinode-498089-m03.txt                      |                  |         |         |                     |                     |
	| cp      | multinode-498089 cp testdata/cp-test.txt                                                | multinode-498089 | jenkins | v1.33.1 | 31 Jul 24 17:29 UTC | 31 Jul 24 17:29 UTC |
	|         | multinode-498089-m03:/home/docker/cp-test.txt                                           |                  |         |         |                     |                     |
	| ssh     | multinode-498089 ssh -n                                                                 | multinode-498089 | jenkins | v1.33.1 | 31 Jul 24 17:29 UTC | 31 Jul 24 17:29 UTC |
	|         | multinode-498089-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-498089 cp multinode-498089-m03:/home/docker/cp-test.txt                       | multinode-498089 | jenkins | v1.33.1 | 31 Jul 24 17:29 UTC | 31 Jul 24 17:29 UTC |
	|         | /tmp/TestMultiNodeserialCopyFile179218134/001/cp-test_multinode-498089-m03.txt          |                  |         |         |                     |                     |
	| ssh     | multinode-498089 ssh -n                                                                 | multinode-498089 | jenkins | v1.33.1 | 31 Jul 24 17:29 UTC | 31 Jul 24 17:29 UTC |
	|         | multinode-498089-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-498089 cp multinode-498089-m03:/home/docker/cp-test.txt                       | multinode-498089 | jenkins | v1.33.1 | 31 Jul 24 17:29 UTC | 31 Jul 24 17:29 UTC |
	|         | multinode-498089:/home/docker/cp-test_multinode-498089-m03_multinode-498089.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-498089 ssh -n                                                                 | multinode-498089 | jenkins | v1.33.1 | 31 Jul 24 17:29 UTC | 31 Jul 24 17:29 UTC |
	|         | multinode-498089-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-498089 ssh -n multinode-498089 sudo cat                                       | multinode-498089 | jenkins | v1.33.1 | 31 Jul 24 17:29 UTC | 31 Jul 24 17:29 UTC |
	|         | /home/docker/cp-test_multinode-498089-m03_multinode-498089.txt                          |                  |         |         |                     |                     |
	| cp      | multinode-498089 cp multinode-498089-m03:/home/docker/cp-test.txt                       | multinode-498089 | jenkins | v1.33.1 | 31 Jul 24 17:29 UTC | 31 Jul 24 17:29 UTC |
	|         | multinode-498089-m02:/home/docker/cp-test_multinode-498089-m03_multinode-498089-m02.txt |                  |         |         |                     |                     |
	| ssh     | multinode-498089 ssh -n                                                                 | multinode-498089 | jenkins | v1.33.1 | 31 Jul 24 17:29 UTC | 31 Jul 24 17:29 UTC |
	|         | multinode-498089-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-498089 ssh -n multinode-498089-m02 sudo cat                                   | multinode-498089 | jenkins | v1.33.1 | 31 Jul 24 17:29 UTC | 31 Jul 24 17:29 UTC |
	|         | /home/docker/cp-test_multinode-498089-m03_multinode-498089-m02.txt                      |                  |         |         |                     |                     |
	| node    | multinode-498089 node stop m03                                                          | multinode-498089 | jenkins | v1.33.1 | 31 Jul 24 17:29 UTC | 31 Jul 24 17:29 UTC |
	| node    | multinode-498089 node start                                                             | multinode-498089 | jenkins | v1.33.1 | 31 Jul 24 17:29 UTC | 31 Jul 24 17:30 UTC |
	|         | m03 -v=7 --alsologtostderr                                                              |                  |         |         |                     |                     |
	| node    | list -p multinode-498089                                                                | multinode-498089 | jenkins | v1.33.1 | 31 Jul 24 17:30 UTC |                     |
	| stop    | -p multinode-498089                                                                     | multinode-498089 | jenkins | v1.33.1 | 31 Jul 24 17:30 UTC |                     |
	| start   | -p multinode-498089                                                                     | multinode-498089 | jenkins | v1.33.1 | 31 Jul 24 17:32 UTC | 31 Jul 24 17:35 UTC |
	|         | --wait=true -v=8                                                                        |                  |         |         |                     |                     |
	|         | --alsologtostderr                                                                       |                  |         |         |                     |                     |
	| node    | list -p multinode-498089                                                                | multinode-498089 | jenkins | v1.33.1 | 31 Jul 24 17:35 UTC |                     |
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/31 17:32:04
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0731 17:32:04.858081   44770 out.go:291] Setting OutFile to fd 1 ...
	I0731 17:32:04.858323   44770 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 17:32:04.858333   44770 out.go:304] Setting ErrFile to fd 2...
	I0731 17:32:04.858339   44770 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 17:32:04.858549   44770 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19349-8084/.minikube/bin
	I0731 17:32:04.859081   44770 out.go:298] Setting JSON to false
	I0731 17:32:04.859996   44770 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":4469,"bootTime":1722442656,"procs":186,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1065-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0731 17:32:04.860046   44770 start.go:139] virtualization: kvm guest
	I0731 17:32:04.862270   44770 out.go:177] * [multinode-498089] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0731 17:32:04.863505   44770 out.go:177]   - MINIKUBE_LOCATION=19349
	I0731 17:32:04.863518   44770 notify.go:220] Checking for updates...
	I0731 17:32:04.865687   44770 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0731 17:32:04.866837   44770 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19349-8084/kubeconfig
	I0731 17:32:04.867941   44770 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19349-8084/.minikube
	I0731 17:32:04.869044   44770 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0731 17:32:04.870223   44770 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0731 17:32:04.871695   44770 config.go:182] Loaded profile config "multinode-498089": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0731 17:32:04.871777   44770 driver.go:392] Setting default libvirt URI to qemu:///system
	I0731 17:32:04.872269   44770 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 17:32:04.872347   44770 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 17:32:04.887540   44770 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32893
	I0731 17:32:04.887927   44770 main.go:141] libmachine: () Calling .GetVersion
	I0731 17:32:04.888530   44770 main.go:141] libmachine: Using API Version  1
	I0731 17:32:04.888552   44770 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 17:32:04.888922   44770 main.go:141] libmachine: () Calling .GetMachineName
	I0731 17:32:04.889099   44770 main.go:141] libmachine: (multinode-498089) Calling .DriverName
	I0731 17:32:04.923316   44770 out.go:177] * Using the kvm2 driver based on existing profile
	I0731 17:32:04.924753   44770 start.go:297] selected driver: kvm2
	I0731 17:32:04.924767   44770 start.go:901] validating driver "kvm2" against &{Name:multinode-498089 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kubern
etesVersion:v1.30.3 ClusterName:multinode-498089 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.100 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.228 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.204 Port:0 KubernetesVersion:v1.30.3 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ing
ress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryM
irror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0731 17:32:04.924931   44770 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0731 17:32:04.925276   44770 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0731 17:32:04.925360   44770 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19349-8084/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0731 17:32:04.939542   44770 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0731 17:32:04.940209   44770 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0731 17:32:04.940279   44770 cni.go:84] Creating CNI manager for ""
	I0731 17:32:04.940293   44770 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0731 17:32:04.940355   44770 start.go:340] cluster config:
	{Name:multinode-498089 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:multinode-498089 Namespace:default APIServerHA
VIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.100 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.228 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.204 Port:0 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false
kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwareP
ath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0731 17:32:04.940498   44770 iso.go:125] acquiring lock: {Name:mk4494bb5a63902bf12a909c3109db85229bc516 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0731 17:32:04.942246   44770 out.go:177] * Starting "multinode-498089" primary control-plane node in "multinode-498089" cluster
	I0731 17:32:04.943354   44770 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0731 17:32:04.943385   44770 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19349-8084/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4
	I0731 17:32:04.943392   44770 cache.go:56] Caching tarball of preloaded images
	I0731 17:32:04.943463   44770 preload.go:172] Found /home/jenkins/minikube-integration/19349-8084/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0731 17:32:04.943473   44770 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on crio
	I0731 17:32:04.943581   44770 profile.go:143] Saving config to /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/multinode-498089/config.json ...
	I0731 17:32:04.943837   44770 start.go:360] acquireMachinesLock for multinode-498089: {Name:mkd78eacfab7d6c3058c6674434b3d889ec957e0 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0731 17:32:04.943898   44770 start.go:364] duration metric: took 38.407µs to acquireMachinesLock for "multinode-498089"
	I0731 17:32:04.943916   44770 start.go:96] Skipping create...Using existing machine configuration
	I0731 17:32:04.943923   44770 fix.go:54] fixHost starting: 
	I0731 17:32:04.944189   44770 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 17:32:04.944225   44770 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 17:32:04.958902   44770 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39019
	I0731 17:32:04.959335   44770 main.go:141] libmachine: () Calling .GetVersion
	I0731 17:32:04.959754   44770 main.go:141] libmachine: Using API Version  1
	I0731 17:32:04.959769   44770 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 17:32:04.960127   44770 main.go:141] libmachine: () Calling .GetMachineName
	I0731 17:32:04.960325   44770 main.go:141] libmachine: (multinode-498089) Calling .DriverName
	I0731 17:32:04.960447   44770 main.go:141] libmachine: (multinode-498089) Calling .GetState
	I0731 17:32:04.962243   44770 fix.go:112] recreateIfNeeded on multinode-498089: state=Running err=<nil>
	W0731 17:32:04.962258   44770 fix.go:138] unexpected machine state, will restart: <nil>
	I0731 17:32:04.964305   44770 out.go:177] * Updating the running kvm2 "multinode-498089" VM ...
	I0731 17:32:04.965538   44770 machine.go:94] provisionDockerMachine start ...
	I0731 17:32:04.965559   44770 main.go:141] libmachine: (multinode-498089) Calling .DriverName
	I0731 17:32:04.965773   44770 main.go:141] libmachine: (multinode-498089) Calling .GetSSHHostname
	I0731 17:32:04.968333   44770 main.go:141] libmachine: (multinode-498089) DBG | domain multinode-498089 has defined MAC address 52:54:00:b0:35:3d in network mk-multinode-498089
	I0731 17:32:04.968788   44770 main.go:141] libmachine: (multinode-498089) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:35:3d", ip: ""} in network mk-multinode-498089: {Iface:virbr1 ExpiryTime:2024-07-31 18:26:40 +0000 UTC Type:0 Mac:52:54:00:b0:35:3d Iaid: IPaddr:192.168.39.100 Prefix:24 Hostname:multinode-498089 Clientid:01:52:54:00:b0:35:3d}
	I0731 17:32:04.968814   44770 main.go:141] libmachine: (multinode-498089) DBG | domain multinode-498089 has defined IP address 192.168.39.100 and MAC address 52:54:00:b0:35:3d in network mk-multinode-498089
	I0731 17:32:04.968964   44770 main.go:141] libmachine: (multinode-498089) Calling .GetSSHPort
	I0731 17:32:04.969131   44770 main.go:141] libmachine: (multinode-498089) Calling .GetSSHKeyPath
	I0731 17:32:04.969275   44770 main.go:141] libmachine: (multinode-498089) Calling .GetSSHKeyPath
	I0731 17:32:04.969411   44770 main.go:141] libmachine: (multinode-498089) Calling .GetSSHUsername
	I0731 17:32:04.969569   44770 main.go:141] libmachine: Using SSH client type: native
	I0731 17:32:04.969747   44770 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.100 22 <nil> <nil>}
	I0731 17:32:04.969757   44770 main.go:141] libmachine: About to run SSH command:
	hostname
	I0731 17:32:05.079751   44770 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-498089
	
	I0731 17:32:05.079775   44770 main.go:141] libmachine: (multinode-498089) Calling .GetMachineName
	I0731 17:32:05.080010   44770 buildroot.go:166] provisioning hostname "multinode-498089"
	I0731 17:32:05.080036   44770 main.go:141] libmachine: (multinode-498089) Calling .GetMachineName
	I0731 17:32:05.080184   44770 main.go:141] libmachine: (multinode-498089) Calling .GetSSHHostname
	I0731 17:32:05.082906   44770 main.go:141] libmachine: (multinode-498089) DBG | domain multinode-498089 has defined MAC address 52:54:00:b0:35:3d in network mk-multinode-498089
	I0731 17:32:05.083273   44770 main.go:141] libmachine: (multinode-498089) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:35:3d", ip: ""} in network mk-multinode-498089: {Iface:virbr1 ExpiryTime:2024-07-31 18:26:40 +0000 UTC Type:0 Mac:52:54:00:b0:35:3d Iaid: IPaddr:192.168.39.100 Prefix:24 Hostname:multinode-498089 Clientid:01:52:54:00:b0:35:3d}
	I0731 17:32:05.083302   44770 main.go:141] libmachine: (multinode-498089) DBG | domain multinode-498089 has defined IP address 192.168.39.100 and MAC address 52:54:00:b0:35:3d in network mk-multinode-498089
	I0731 17:32:05.083471   44770 main.go:141] libmachine: (multinode-498089) Calling .GetSSHPort
	I0731 17:32:05.083626   44770 main.go:141] libmachine: (multinode-498089) Calling .GetSSHKeyPath
	I0731 17:32:05.083812   44770 main.go:141] libmachine: (multinode-498089) Calling .GetSSHKeyPath
	I0731 17:32:05.083963   44770 main.go:141] libmachine: (multinode-498089) Calling .GetSSHUsername
	I0731 17:32:05.084116   44770 main.go:141] libmachine: Using SSH client type: native
	I0731 17:32:05.084263   44770 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.100 22 <nil> <nil>}
	I0731 17:32:05.084276   44770 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-498089 && echo "multinode-498089" | sudo tee /etc/hostname
	I0731 17:32:05.212409   44770 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-498089
	
	I0731 17:32:05.212438   44770 main.go:141] libmachine: (multinode-498089) Calling .GetSSHHostname
	I0731 17:32:05.215195   44770 main.go:141] libmachine: (multinode-498089) DBG | domain multinode-498089 has defined MAC address 52:54:00:b0:35:3d in network mk-multinode-498089
	I0731 17:32:05.215633   44770 main.go:141] libmachine: (multinode-498089) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:35:3d", ip: ""} in network mk-multinode-498089: {Iface:virbr1 ExpiryTime:2024-07-31 18:26:40 +0000 UTC Type:0 Mac:52:54:00:b0:35:3d Iaid: IPaddr:192.168.39.100 Prefix:24 Hostname:multinode-498089 Clientid:01:52:54:00:b0:35:3d}
	I0731 17:32:05.215678   44770 main.go:141] libmachine: (multinode-498089) DBG | domain multinode-498089 has defined IP address 192.168.39.100 and MAC address 52:54:00:b0:35:3d in network mk-multinode-498089
	I0731 17:32:05.215795   44770 main.go:141] libmachine: (multinode-498089) Calling .GetSSHPort
	I0731 17:32:05.215985   44770 main.go:141] libmachine: (multinode-498089) Calling .GetSSHKeyPath
	I0731 17:32:05.216158   44770 main.go:141] libmachine: (multinode-498089) Calling .GetSSHKeyPath
	I0731 17:32:05.216293   44770 main.go:141] libmachine: (multinode-498089) Calling .GetSSHUsername
	I0731 17:32:05.216490   44770 main.go:141] libmachine: Using SSH client type: native
	I0731 17:32:05.216722   44770 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.100 22 <nil> <nil>}
	I0731 17:32:05.216741   44770 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-498089' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-498089/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-498089' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0731 17:32:05.323809   44770 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0731 17:32:05.323838   44770 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19349-8084/.minikube CaCertPath:/home/jenkins/minikube-integration/19349-8084/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19349-8084/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19349-8084/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19349-8084/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19349-8084/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19349-8084/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19349-8084/.minikube}
	I0731 17:32:05.323885   44770 buildroot.go:174] setting up certificates
	I0731 17:32:05.323900   44770 provision.go:84] configureAuth start
	I0731 17:32:05.323916   44770 main.go:141] libmachine: (multinode-498089) Calling .GetMachineName
	I0731 17:32:05.324174   44770 main.go:141] libmachine: (multinode-498089) Calling .GetIP
	I0731 17:32:05.326844   44770 main.go:141] libmachine: (multinode-498089) DBG | domain multinode-498089 has defined MAC address 52:54:00:b0:35:3d in network mk-multinode-498089
	I0731 17:32:05.327214   44770 main.go:141] libmachine: (multinode-498089) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:35:3d", ip: ""} in network mk-multinode-498089: {Iface:virbr1 ExpiryTime:2024-07-31 18:26:40 +0000 UTC Type:0 Mac:52:54:00:b0:35:3d Iaid: IPaddr:192.168.39.100 Prefix:24 Hostname:multinode-498089 Clientid:01:52:54:00:b0:35:3d}
	I0731 17:32:05.327237   44770 main.go:141] libmachine: (multinode-498089) DBG | domain multinode-498089 has defined IP address 192.168.39.100 and MAC address 52:54:00:b0:35:3d in network mk-multinode-498089
	I0731 17:32:05.327389   44770 main.go:141] libmachine: (multinode-498089) Calling .GetSSHHostname
	I0731 17:32:05.329551   44770 main.go:141] libmachine: (multinode-498089) DBG | domain multinode-498089 has defined MAC address 52:54:00:b0:35:3d in network mk-multinode-498089
	I0731 17:32:05.329954   44770 main.go:141] libmachine: (multinode-498089) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:35:3d", ip: ""} in network mk-multinode-498089: {Iface:virbr1 ExpiryTime:2024-07-31 18:26:40 +0000 UTC Type:0 Mac:52:54:00:b0:35:3d Iaid: IPaddr:192.168.39.100 Prefix:24 Hostname:multinode-498089 Clientid:01:52:54:00:b0:35:3d}
	I0731 17:32:05.329977   44770 main.go:141] libmachine: (multinode-498089) DBG | domain multinode-498089 has defined IP address 192.168.39.100 and MAC address 52:54:00:b0:35:3d in network mk-multinode-498089
	I0731 17:32:05.330101   44770 provision.go:143] copyHostCerts
	I0731 17:32:05.330128   44770 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19349-8084/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19349-8084/.minikube/ca.pem
	I0731 17:32:05.330158   44770 exec_runner.go:144] found /home/jenkins/minikube-integration/19349-8084/.minikube/ca.pem, removing ...
	I0731 17:32:05.330166   44770 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19349-8084/.minikube/ca.pem
	I0731 17:32:05.330229   44770 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19349-8084/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19349-8084/.minikube/ca.pem (1082 bytes)
	I0731 17:32:05.330321   44770 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19349-8084/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19349-8084/.minikube/cert.pem
	I0731 17:32:05.330355   44770 exec_runner.go:144] found /home/jenkins/minikube-integration/19349-8084/.minikube/cert.pem, removing ...
	I0731 17:32:05.330364   44770 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19349-8084/.minikube/cert.pem
	I0731 17:32:05.330416   44770 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19349-8084/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19349-8084/.minikube/cert.pem (1123 bytes)
	I0731 17:32:05.330468   44770 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19349-8084/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19349-8084/.minikube/key.pem
	I0731 17:32:05.330485   44770 exec_runner.go:144] found /home/jenkins/minikube-integration/19349-8084/.minikube/key.pem, removing ...
	I0731 17:32:05.330492   44770 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19349-8084/.minikube/key.pem
	I0731 17:32:05.330514   44770 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19349-8084/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19349-8084/.minikube/key.pem (1675 bytes)
	I0731 17:32:05.330558   44770 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19349-8084/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19349-8084/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19349-8084/.minikube/certs/ca-key.pem org=jenkins.multinode-498089 san=[127.0.0.1 192.168.39.100 localhost minikube multinode-498089]
	I0731 17:32:05.542031   44770 provision.go:177] copyRemoteCerts
	I0731 17:32:05.542088   44770 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0731 17:32:05.542111   44770 main.go:141] libmachine: (multinode-498089) Calling .GetSSHHostname
	I0731 17:32:05.544909   44770 main.go:141] libmachine: (multinode-498089) DBG | domain multinode-498089 has defined MAC address 52:54:00:b0:35:3d in network mk-multinode-498089
	I0731 17:32:05.545284   44770 main.go:141] libmachine: (multinode-498089) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:35:3d", ip: ""} in network mk-multinode-498089: {Iface:virbr1 ExpiryTime:2024-07-31 18:26:40 +0000 UTC Type:0 Mac:52:54:00:b0:35:3d Iaid: IPaddr:192.168.39.100 Prefix:24 Hostname:multinode-498089 Clientid:01:52:54:00:b0:35:3d}
	I0731 17:32:05.545305   44770 main.go:141] libmachine: (multinode-498089) DBG | domain multinode-498089 has defined IP address 192.168.39.100 and MAC address 52:54:00:b0:35:3d in network mk-multinode-498089
	I0731 17:32:05.545466   44770 main.go:141] libmachine: (multinode-498089) Calling .GetSSHPort
	I0731 17:32:05.545667   44770 main.go:141] libmachine: (multinode-498089) Calling .GetSSHKeyPath
	I0731 17:32:05.545801   44770 main.go:141] libmachine: (multinode-498089) Calling .GetSSHUsername
	I0731 17:32:05.545932   44770 sshutil.go:53] new ssh client: &{IP:192.168.39.100 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19349-8084/.minikube/machines/multinode-498089/id_rsa Username:docker}
	I0731 17:32:05.628730   44770 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19349-8084/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0731 17:32:05.628807   44770 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19349-8084/.minikube/machines/server.pem --> /etc/docker/server.pem (1216 bytes)
	I0731 17:32:05.651861   44770 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19349-8084/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0731 17:32:05.651921   44770 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19349-8084/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0731 17:32:05.674267   44770 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19349-8084/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0731 17:32:05.674345   44770 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19349-8084/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0731 17:32:05.696904   44770 provision.go:87] duration metric: took 372.990017ms to configureAuth
	I0731 17:32:05.696930   44770 buildroot.go:189] setting minikube options for container-runtime
	I0731 17:32:05.697130   44770 config.go:182] Loaded profile config "multinode-498089": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0731 17:32:05.697197   44770 main.go:141] libmachine: (multinode-498089) Calling .GetSSHHostname
	I0731 17:32:05.699779   44770 main.go:141] libmachine: (multinode-498089) DBG | domain multinode-498089 has defined MAC address 52:54:00:b0:35:3d in network mk-multinode-498089
	I0731 17:32:05.700212   44770 main.go:141] libmachine: (multinode-498089) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:35:3d", ip: ""} in network mk-multinode-498089: {Iface:virbr1 ExpiryTime:2024-07-31 18:26:40 +0000 UTC Type:0 Mac:52:54:00:b0:35:3d Iaid: IPaddr:192.168.39.100 Prefix:24 Hostname:multinode-498089 Clientid:01:52:54:00:b0:35:3d}
	I0731 17:32:05.700240   44770 main.go:141] libmachine: (multinode-498089) DBG | domain multinode-498089 has defined IP address 192.168.39.100 and MAC address 52:54:00:b0:35:3d in network mk-multinode-498089
	I0731 17:32:05.700405   44770 main.go:141] libmachine: (multinode-498089) Calling .GetSSHPort
	I0731 17:32:05.700592   44770 main.go:141] libmachine: (multinode-498089) Calling .GetSSHKeyPath
	I0731 17:32:05.700780   44770 main.go:141] libmachine: (multinode-498089) Calling .GetSSHKeyPath
	I0731 17:32:05.700896   44770 main.go:141] libmachine: (multinode-498089) Calling .GetSSHUsername
	I0731 17:32:05.701034   44770 main.go:141] libmachine: Using SSH client type: native
	I0731 17:32:05.701197   44770 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.100 22 <nil> <nil>}
	I0731 17:32:05.701212   44770 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0731 17:33:36.388058   44770 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0731 17:33:36.388081   44770 machine.go:97] duration metric: took 1m31.4225305s to provisionDockerMachine
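The `%!s(MISSING)` tokens in the command above are artifacts of minikube's own log formatting (a Go `%s` verb left unfilled when the message was logged), not part of what ran on the guest. Reconstructed from the output echoed back on the following lines, the executed script was roughly:

	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio

The ninety-odd second gap between the 17:32:05 command and the 17:33:36 response is spent in that `systemctl restart crio`, which is also why provisionDockerMachine reports a duration of about 1m31s.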
	I0731 17:33:36.388094   44770 start.go:293] postStartSetup for "multinode-498089" (driver="kvm2")
	I0731 17:33:36.388103   44770 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0731 17:33:36.388135   44770 main.go:141] libmachine: (multinode-498089) Calling .DriverName
	I0731 17:33:36.388476   44770 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0731 17:33:36.388513   44770 main.go:141] libmachine: (multinode-498089) Calling .GetSSHHostname
	I0731 17:33:36.391503   44770 main.go:141] libmachine: (multinode-498089) DBG | domain multinode-498089 has defined MAC address 52:54:00:b0:35:3d in network mk-multinode-498089
	I0731 17:33:36.391952   44770 main.go:141] libmachine: (multinode-498089) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:35:3d", ip: ""} in network mk-multinode-498089: {Iface:virbr1 ExpiryTime:2024-07-31 18:26:40 +0000 UTC Type:0 Mac:52:54:00:b0:35:3d Iaid: IPaddr:192.168.39.100 Prefix:24 Hostname:multinode-498089 Clientid:01:52:54:00:b0:35:3d}
	I0731 17:33:36.391981   44770 main.go:141] libmachine: (multinode-498089) DBG | domain multinode-498089 has defined IP address 192.168.39.100 and MAC address 52:54:00:b0:35:3d in network mk-multinode-498089
	I0731 17:33:36.392120   44770 main.go:141] libmachine: (multinode-498089) Calling .GetSSHPort
	I0731 17:33:36.392309   44770 main.go:141] libmachine: (multinode-498089) Calling .GetSSHKeyPath
	I0731 17:33:36.392451   44770 main.go:141] libmachine: (multinode-498089) Calling .GetSSHUsername
	I0731 17:33:36.392619   44770 sshutil.go:53] new ssh client: &{IP:192.168.39.100 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19349-8084/.minikube/machines/multinode-498089/id_rsa Username:docker}
	I0731 17:33:36.478415   44770 ssh_runner.go:195] Run: cat /etc/os-release
	I0731 17:33:36.482171   44770 command_runner.go:130] > NAME=Buildroot
	I0731 17:33:36.482191   44770 command_runner.go:130] > VERSION=2023.02.9-dirty
	I0731 17:33:36.482197   44770 command_runner.go:130] > ID=buildroot
	I0731 17:33:36.482204   44770 command_runner.go:130] > VERSION_ID=2023.02.9
	I0731 17:33:36.482212   44770 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I0731 17:33:36.482359   44770 info.go:137] Remote host: Buildroot 2023.02.9
	I0731 17:33:36.482374   44770 filesync.go:126] Scanning /home/jenkins/minikube-integration/19349-8084/.minikube/addons for local assets ...
	I0731 17:33:36.482426   44770 filesync.go:126] Scanning /home/jenkins/minikube-integration/19349-8084/.minikube/files for local assets ...
	I0731 17:33:36.482492   44770 filesync.go:149] local asset: /home/jenkins/minikube-integration/19349-8084/.minikube/files/etc/ssl/certs/152592.pem -> 152592.pem in /etc/ssl/certs
	I0731 17:33:36.482500   44770 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19349-8084/.minikube/files/etc/ssl/certs/152592.pem -> /etc/ssl/certs/152592.pem
	I0731 17:33:36.482594   44770 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0731 17:33:36.491281   44770 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19349-8084/.minikube/files/etc/ssl/certs/152592.pem --> /etc/ssl/certs/152592.pem (1708 bytes)
	I0731 17:33:36.513081   44770 start.go:296] duration metric: took 124.973721ms for postStartSetup
	I0731 17:33:36.513132   44770 fix.go:56] duration metric: took 1m31.569207544s for fixHost
	I0731 17:33:36.513159   44770 main.go:141] libmachine: (multinode-498089) Calling .GetSSHHostname
	I0731 17:33:36.515858   44770 main.go:141] libmachine: (multinode-498089) DBG | domain multinode-498089 has defined MAC address 52:54:00:b0:35:3d in network mk-multinode-498089
	I0731 17:33:36.516231   44770 main.go:141] libmachine: (multinode-498089) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:35:3d", ip: ""} in network mk-multinode-498089: {Iface:virbr1 ExpiryTime:2024-07-31 18:26:40 +0000 UTC Type:0 Mac:52:54:00:b0:35:3d Iaid: IPaddr:192.168.39.100 Prefix:24 Hostname:multinode-498089 Clientid:01:52:54:00:b0:35:3d}
	I0731 17:33:36.516252   44770 main.go:141] libmachine: (multinode-498089) DBG | domain multinode-498089 has defined IP address 192.168.39.100 and MAC address 52:54:00:b0:35:3d in network mk-multinode-498089
	I0731 17:33:36.516427   44770 main.go:141] libmachine: (multinode-498089) Calling .GetSSHPort
	I0731 17:33:36.516624   44770 main.go:141] libmachine: (multinode-498089) Calling .GetSSHKeyPath
	I0731 17:33:36.516793   44770 main.go:141] libmachine: (multinode-498089) Calling .GetSSHKeyPath
	I0731 17:33:36.516944   44770 main.go:141] libmachine: (multinode-498089) Calling .GetSSHUsername
	I0731 17:33:36.517100   44770 main.go:141] libmachine: Using SSH client type: native
	I0731 17:33:36.517312   44770 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.100 22 <nil> <nil>}
	I0731 17:33:36.517325   44770 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0731 17:33:36.623347   44770 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722447216.595759064
	
	I0731 17:33:36.623367   44770 fix.go:216] guest clock: 1722447216.595759064
	I0731 17:33:36.623394   44770 fix.go:229] Guest: 2024-07-31 17:33:36.595759064 +0000 UTC Remote: 2024-07-31 17:33:36.513139254 +0000 UTC m=+91.689064171 (delta=82.61981ms)
	I0731 17:33:36.623417   44770 fix.go:200] guest clock delta is within tolerance: 82.61981ms
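`date +%!s(MISSING).%!N(MISSING)` is the same logging artifact again; the probe most likely sent to the guest was:

	date +%s.%N

i.e. the guest's epoch time with nanosecond precision (here 1722447216.595759064). The delta reported above is simply Guest minus Remote, about 82.6ms, which is inside minikube's clock-skew tolerance, so the start continues without adjusting the guest clock.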
	I0731 17:33:36.623424   44770 start.go:83] releasing machines lock for "multinode-498089", held for 1m31.679514445s
	I0731 17:33:36.623448   44770 main.go:141] libmachine: (multinode-498089) Calling .DriverName
	I0731 17:33:36.623727   44770 main.go:141] libmachine: (multinode-498089) Calling .GetIP
	I0731 17:33:36.626386   44770 main.go:141] libmachine: (multinode-498089) DBG | domain multinode-498089 has defined MAC address 52:54:00:b0:35:3d in network mk-multinode-498089
	I0731 17:33:36.626818   44770 main.go:141] libmachine: (multinode-498089) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:35:3d", ip: ""} in network mk-multinode-498089: {Iface:virbr1 ExpiryTime:2024-07-31 18:26:40 +0000 UTC Type:0 Mac:52:54:00:b0:35:3d Iaid: IPaddr:192.168.39.100 Prefix:24 Hostname:multinode-498089 Clientid:01:52:54:00:b0:35:3d}
	I0731 17:33:36.626844   44770 main.go:141] libmachine: (multinode-498089) DBG | domain multinode-498089 has defined IP address 192.168.39.100 and MAC address 52:54:00:b0:35:3d in network mk-multinode-498089
	I0731 17:33:36.626977   44770 main.go:141] libmachine: (multinode-498089) Calling .DriverName
	I0731 17:33:36.627481   44770 main.go:141] libmachine: (multinode-498089) Calling .DriverName
	I0731 17:33:36.627676   44770 main.go:141] libmachine: (multinode-498089) Calling .DriverName
	I0731 17:33:36.627776   44770 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0731 17:33:36.627814   44770 main.go:141] libmachine: (multinode-498089) Calling .GetSSHHostname
	I0731 17:33:36.627856   44770 ssh_runner.go:195] Run: cat /version.json
	I0731 17:33:36.627879   44770 main.go:141] libmachine: (multinode-498089) Calling .GetSSHHostname
	I0731 17:33:36.630486   44770 main.go:141] libmachine: (multinode-498089) DBG | domain multinode-498089 has defined MAC address 52:54:00:b0:35:3d in network mk-multinode-498089
	I0731 17:33:36.630771   44770 main.go:141] libmachine: (multinode-498089) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:35:3d", ip: ""} in network mk-multinode-498089: {Iface:virbr1 ExpiryTime:2024-07-31 18:26:40 +0000 UTC Type:0 Mac:52:54:00:b0:35:3d Iaid: IPaddr:192.168.39.100 Prefix:24 Hostname:multinode-498089 Clientid:01:52:54:00:b0:35:3d}
	I0731 17:33:36.630797   44770 main.go:141] libmachine: (multinode-498089) DBG | domain multinode-498089 has defined IP address 192.168.39.100 and MAC address 52:54:00:b0:35:3d in network mk-multinode-498089
	I0731 17:33:36.630935   44770 main.go:141] libmachine: (multinode-498089) Calling .GetSSHPort
	I0731 17:33:36.631078   44770 main.go:141] libmachine: (multinode-498089) Calling .GetSSHKeyPath
	I0731 17:33:36.631102   44770 main.go:141] libmachine: (multinode-498089) DBG | domain multinode-498089 has defined MAC address 52:54:00:b0:35:3d in network mk-multinode-498089
	I0731 17:33:36.631236   44770 main.go:141] libmachine: (multinode-498089) Calling .GetSSHUsername
	I0731 17:33:36.631386   44770 sshutil.go:53] new ssh client: &{IP:192.168.39.100 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19349-8084/.minikube/machines/multinode-498089/id_rsa Username:docker}
	I0731 17:33:36.631528   44770 main.go:141] libmachine: (multinode-498089) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:35:3d", ip: ""} in network mk-multinode-498089: {Iface:virbr1 ExpiryTime:2024-07-31 18:26:40 +0000 UTC Type:0 Mac:52:54:00:b0:35:3d Iaid: IPaddr:192.168.39.100 Prefix:24 Hostname:multinode-498089 Clientid:01:52:54:00:b0:35:3d}
	I0731 17:33:36.631553   44770 main.go:141] libmachine: (multinode-498089) DBG | domain multinode-498089 has defined IP address 192.168.39.100 and MAC address 52:54:00:b0:35:3d in network mk-multinode-498089
	I0731 17:33:36.631707   44770 main.go:141] libmachine: (multinode-498089) Calling .GetSSHPort
	I0731 17:33:36.631850   44770 main.go:141] libmachine: (multinode-498089) Calling .GetSSHKeyPath
	I0731 17:33:36.631978   44770 main.go:141] libmachine: (multinode-498089) Calling .GetSSHUsername
	I0731 17:33:36.632084   44770 sshutil.go:53] new ssh client: &{IP:192.168.39.100 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19349-8084/.minikube/machines/multinode-498089/id_rsa Username:docker}
	I0731 17:33:36.707368   44770 command_runner.go:130] > {"iso_version": "v1.33.1-1722248113-19339", "kicbase_version": "v0.0.44-1721902582-19326", "minikube_version": "v1.33.1", "commit": "b8389556a97747a5bbaa1906d238251ad536d76e"}
	I0731 17:33:36.733194   44770 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0731 17:33:36.733996   44770 ssh_runner.go:195] Run: systemctl --version
	I0731 17:33:36.739650   44770 command_runner.go:130] > systemd 252 (252)
	I0731 17:33:36.739674   44770 command_runner.go:130] > -PAM -AUDIT -SELINUX -APPARMOR -IMA -SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL -ELFUTILS -FIDO2 -IDN2 -IDN +IPTC +KMOD -LIBCRYPTSETUP +LIBFDISK -PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 -BZIP2 +LZ4 +XZ +ZLIB -ZSTD -BPF_FRAMEWORK -XKBCOMMON -UTMP -SYSVINIT default-hierarchy=unified
	I0731 17:33:36.739878   44770 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0731 17:33:36.899731   44770 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0731 17:33:36.906085   44770 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0731 17:33:36.906440   44770 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0731 17:33:36.906507   44770 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0731 17:33:36.915290   44770 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
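As above, `%!p(MISSING)` stands in for a literal find format verb in the executed command; a reconstruction (with shell escaping added for readability, so not a verbatim capture) looks roughly like:

	sudo find /etc/cni/net.d -maxdepth 1 -type f \
	  \( \( -name '*bridge*' -or -name '*podman*' \) -and -not -name '*.mk_disabled' \) \
	  -printf "%p, " -exec sh -c 'sudo mv {} {}.mk_disabled' \;

It renames any stray bridge/podman CNI configs out of the way so kindnet can manage pod networking; here it found none, hence the "nothing to disable" message.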
	I0731 17:33:36.915325   44770 start.go:495] detecting cgroup driver to use...
	I0731 17:33:36.915384   44770 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0731 17:33:36.930656   44770 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0731 17:33:36.944288   44770 docker.go:217] disabling cri-docker service (if available) ...
	I0731 17:33:36.944355   44770 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0731 17:33:36.957127   44770 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0731 17:33:36.969429   44770 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0731 17:33:37.107233   44770 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0731 17:33:37.244913   44770 docker.go:233] disabling docker service ...
	I0731 17:33:37.245025   44770 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0731 17:33:37.262053   44770 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0731 17:33:37.275191   44770 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0731 17:33:37.417025   44770 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0731 17:33:37.554431   44770 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0731 17:33:37.568310   44770 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0731 17:33:37.585678   44770 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
	I0731 17:33:37.585718   44770 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0731 17:33:37.585773   44770 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 17:33:37.596050   44770 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0731 17:33:37.596101   44770 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 17:33:37.606013   44770 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 17:33:37.615771   44770 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 17:33:37.625468   44770 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0731 17:33:37.635825   44770 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 17:33:37.645486   44770 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 17:33:37.656233   44770 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 17:33:37.666052   44770 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0731 17:33:37.675501   44770 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0731 17:33:37.675548   44770 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0731 17:33:37.684175   44770 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0731 17:33:37.823283   44770 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0731 17:33:44.743654   44770 ssh_runner.go:235] Completed: sudo systemctl restart crio: (6.920338488s)
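Taken together, the sed commands above should leave /etc/crio/crio.conf.d/02-crio.conf with the following relevant settings before the restart (reconstructed from the edits themselves rather than read back from the guest):

	pause_image = "registry.k8s.io/pause:3.9"
	cgroup_manager = "cgroupfs"
	conmon_cgroup = "pod"
	default_sysctls = [
	  "net.ipv4.ip_unprivileged_port_start=0",
	]

The restart itself accounts for the ~6.9s reported on the Completed line above.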
	I0731 17:33:44.743679   44770 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0731 17:33:44.743719   44770 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0731 17:33:44.748369   44770 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I0731 17:33:44.748389   44770 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0731 17:33:44.748411   44770 command_runner.go:130] > Device: 0,22	Inode: 1332        Links: 1
	I0731 17:33:44.748420   44770 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I0731 17:33:44.748429   44770 command_runner.go:130] > Access: 2024-07-31 17:33:44.612491405 +0000
	I0731 17:33:44.748437   44770 command_runner.go:130] > Modify: 2024-07-31 17:33:44.612491405 +0000
	I0731 17:33:44.748450   44770 command_runner.go:130] > Change: 2024-07-31 17:33:44.612491405 +0000
	I0731 17:33:44.748455   44770 command_runner.go:130] >  Birth: -
	I0731 17:33:44.748540   44770 start.go:563] Will wait 60s for crictl version
	I0731 17:33:44.748588   44770 ssh_runner.go:195] Run: which crictl
	I0731 17:33:44.752043   44770 command_runner.go:130] > /usr/bin/crictl
	I0731 17:33:44.752654   44770 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0731 17:33:44.798701   44770 command_runner.go:130] > Version:  0.1.0
	I0731 17:33:44.798728   44770 command_runner.go:130] > RuntimeName:  cri-o
	I0731 17:33:44.798734   44770 command_runner.go:130] > RuntimeVersion:  1.29.1
	I0731 17:33:44.798739   44770 command_runner.go:130] > RuntimeApiVersion:  v1
	I0731 17:33:44.799872   44770 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
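The parsed values (cri-o 1.29.1 speaking CRI API v1) are what the rest of the start sequence keys off. To reproduce this check by hand against the same profile, something along these lines should work (profile name taken from this log; the ssh pass-through form mirrors how other tests in this report invoke commands on the node):

	out/minikube-linux-amd64 -p multinode-498089 ssh "sudo crictl version"
	out/minikube-linux-amd64 -p multinode-498089 ssh "crio --version"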
	I0731 17:33:44.799956   44770 ssh_runner.go:195] Run: crio --version
	I0731 17:33:44.830303   44770 command_runner.go:130] > crio version 1.29.1
	I0731 17:33:44.830323   44770 command_runner.go:130] > Version:        1.29.1
	I0731 17:33:44.830331   44770 command_runner.go:130] > GitCommit:      unknown
	I0731 17:33:44.830337   44770 command_runner.go:130] > GitCommitDate:  unknown
	I0731 17:33:44.830341   44770 command_runner.go:130] > GitTreeState:   clean
	I0731 17:33:44.830346   44770 command_runner.go:130] > BuildDate:      2024-07-29T16:04:01Z
	I0731 17:33:44.830354   44770 command_runner.go:130] > GoVersion:      go1.21.6
	I0731 17:33:44.830358   44770 command_runner.go:130] > Compiler:       gc
	I0731 17:33:44.830363   44770 command_runner.go:130] > Platform:       linux/amd64
	I0731 17:33:44.830369   44770 command_runner.go:130] > Linkmode:       dynamic
	I0731 17:33:44.830384   44770 command_runner.go:130] > BuildTags:      
	I0731 17:33:44.830393   44770 command_runner.go:130] >   containers_image_ostree_stub
	I0731 17:33:44.830401   44770 command_runner.go:130] >   exclude_graphdriver_btrfs
	I0731 17:33:44.830408   44770 command_runner.go:130] >   btrfs_noversion
	I0731 17:33:44.830415   44770 command_runner.go:130] >   exclude_graphdriver_devicemapper
	I0731 17:33:44.830420   44770 command_runner.go:130] >   libdm_no_deferred_remove
	I0731 17:33:44.830423   44770 command_runner.go:130] >   seccomp
	I0731 17:33:44.830427   44770 command_runner.go:130] > LDFlags:          unknown
	I0731 17:33:44.830431   44770 command_runner.go:130] > SeccompEnabled:   true
	I0731 17:33:44.830435   44770 command_runner.go:130] > AppArmorEnabled:  false
	I0731 17:33:44.830500   44770 ssh_runner.go:195] Run: crio --version
	I0731 17:33:44.858545   44770 command_runner.go:130] > crio version 1.29.1
	I0731 17:33:44.858561   44770 command_runner.go:130] > Version:        1.29.1
	I0731 17:33:44.858567   44770 command_runner.go:130] > GitCommit:      unknown
	I0731 17:33:44.858571   44770 command_runner.go:130] > GitCommitDate:  unknown
	I0731 17:33:44.858575   44770 command_runner.go:130] > GitTreeState:   clean
	I0731 17:33:44.858582   44770 command_runner.go:130] > BuildDate:      2024-07-29T16:04:01Z
	I0731 17:33:44.858587   44770 command_runner.go:130] > GoVersion:      go1.21.6
	I0731 17:33:44.858590   44770 command_runner.go:130] > Compiler:       gc
	I0731 17:33:44.858595   44770 command_runner.go:130] > Platform:       linux/amd64
	I0731 17:33:44.858607   44770 command_runner.go:130] > Linkmode:       dynamic
	I0731 17:33:44.858613   44770 command_runner.go:130] > BuildTags:      
	I0731 17:33:44.858698   44770 command_runner.go:130] >   containers_image_ostree_stub
	I0731 17:33:44.858706   44770 command_runner.go:130] >   exclude_graphdriver_btrfs
	I0731 17:33:44.858712   44770 command_runner.go:130] >   btrfs_noversion
	I0731 17:33:44.858723   44770 command_runner.go:130] >   exclude_graphdriver_devicemapper
	I0731 17:33:44.858730   44770 command_runner.go:130] >   libdm_no_deferred_remove
	I0731 17:33:44.858737   44770 command_runner.go:130] >   seccomp
	I0731 17:33:44.858743   44770 command_runner.go:130] > LDFlags:          unknown
	I0731 17:33:44.858749   44770 command_runner.go:130] > SeccompEnabled:   true
	I0731 17:33:44.858753   44770 command_runner.go:130] > AppArmorEnabled:  false
	I0731 17:33:44.861646   44770 out.go:177] * Preparing Kubernetes v1.30.3 on CRI-O 1.29.1 ...
	I0731 17:33:44.862886   44770 main.go:141] libmachine: (multinode-498089) Calling .GetIP
	I0731 17:33:44.865474   44770 main.go:141] libmachine: (multinode-498089) DBG | domain multinode-498089 has defined MAC address 52:54:00:b0:35:3d in network mk-multinode-498089
	I0731 17:33:44.865851   44770 main.go:141] libmachine: (multinode-498089) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:35:3d", ip: ""} in network mk-multinode-498089: {Iface:virbr1 ExpiryTime:2024-07-31 18:26:40 +0000 UTC Type:0 Mac:52:54:00:b0:35:3d Iaid: IPaddr:192.168.39.100 Prefix:24 Hostname:multinode-498089 Clientid:01:52:54:00:b0:35:3d}
	I0731 17:33:44.865879   44770 main.go:141] libmachine: (multinode-498089) DBG | domain multinode-498089 has defined IP address 192.168.39.100 and MAC address 52:54:00:b0:35:3d in network mk-multinode-498089
	I0731 17:33:44.866066   44770 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0731 17:33:44.869745   44770 command_runner.go:130] > 192.168.39.1	host.minikube.internal
	I0731 17:33:44.869903   44770 kubeadm.go:883] updating cluster {Name:multinode-498089 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.
30.3 ClusterName:multinode-498089 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.100 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.228 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.204 Port:0 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:fa
lse inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: Disa
bleOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0731 17:33:44.870084   44770 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0731 17:33:44.870146   44770 ssh_runner.go:195] Run: sudo crictl images --output json
	I0731 17:33:44.909541   44770 command_runner.go:130] > {
	I0731 17:33:44.909561   44770 command_runner.go:130] >   "images": [
	I0731 17:33:44.909566   44770 command_runner.go:130] >     {
	I0731 17:33:44.909577   44770 command_runner.go:130] >       "id": "5cc3abe5717dbf8e0db6fc2e890c3f82fe95e2ce5c5d7320f98a5b71c767d42f",
	I0731 17:33:44.909592   44770 command_runner.go:130] >       "repoTags": [
	I0731 17:33:44.909601   44770 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240715-585640e9"
	I0731 17:33:44.909608   44770 command_runner.go:130] >       ],
	I0731 17:33:44.909617   44770 command_runner.go:130] >       "repoDigests": [
	I0731 17:33:44.909635   44770 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:3b93f681916ee780a9941d48cb20622486c08af54f8d87d801412bcca0832115",
	I0731 17:33:44.909650   44770 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:88ed2adbc140254762f98fad7f4b16d279117356ebaf95aebf191713c828a493"
	I0731 17:33:44.909659   44770 command_runner.go:130] >       ],
	I0731 17:33:44.909667   44770 command_runner.go:130] >       "size": "87165492",
	I0731 17:33:44.909677   44770 command_runner.go:130] >       "uid": null,
	I0731 17:33:44.909686   44770 command_runner.go:130] >       "username": "",
	I0731 17:33:44.909699   44770 command_runner.go:130] >       "spec": null,
	I0731 17:33:44.909709   44770 command_runner.go:130] >       "pinned": false
	I0731 17:33:44.909717   44770 command_runner.go:130] >     },
	I0731 17:33:44.909726   44770 command_runner.go:130] >     {
	I0731 17:33:44.909739   44770 command_runner.go:130] >       "id": "6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46",
	I0731 17:33:44.909749   44770 command_runner.go:130] >       "repoTags": [
	I0731 17:33:44.909760   44770 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240719-e7903573"
	I0731 17:33:44.909768   44770 command_runner.go:130] >       ],
	I0731 17:33:44.909776   44770 command_runner.go:130] >       "repoDigests": [
	I0731 17:33:44.909792   44770 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9",
	I0731 17:33:44.909810   44770 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:da8ad203ec15a72c313015e5609db44bfad7c95d8ce63e87ff97c66363b5680a"
	I0731 17:33:44.909819   44770 command_runner.go:130] >       ],
	I0731 17:33:44.909827   44770 command_runner.go:130] >       "size": "87174707",
	I0731 17:33:44.909836   44770 command_runner.go:130] >       "uid": null,
	I0731 17:33:44.909856   44770 command_runner.go:130] >       "username": "",
	I0731 17:33:44.909865   44770 command_runner.go:130] >       "spec": null,
	I0731 17:33:44.909872   44770 command_runner.go:130] >       "pinned": false
	I0731 17:33:44.909877   44770 command_runner.go:130] >     },
	I0731 17:33:44.909883   44770 command_runner.go:130] >     {
	I0731 17:33:44.909896   44770 command_runner.go:130] >       "id": "8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a",
	I0731 17:33:44.909905   44770 command_runner.go:130] >       "repoTags": [
	I0731 17:33:44.909915   44770 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox:1.28"
	I0731 17:33:44.909923   44770 command_runner.go:130] >       ],
	I0731 17:33:44.909930   44770 command_runner.go:130] >       "repoDigests": [
	I0731 17:33:44.909945   44770 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335",
	I0731 17:33:44.909960   44770 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12"
	I0731 17:33:44.909968   44770 command_runner.go:130] >       ],
	I0731 17:33:44.909976   44770 command_runner.go:130] >       "size": "1363676",
	I0731 17:33:44.909985   44770 command_runner.go:130] >       "uid": null,
	I0731 17:33:44.909994   44770 command_runner.go:130] >       "username": "",
	I0731 17:33:44.910002   44770 command_runner.go:130] >       "spec": null,
	I0731 17:33:44.910009   44770 command_runner.go:130] >       "pinned": false
	I0731 17:33:44.910016   44770 command_runner.go:130] >     },
	I0731 17:33:44.910025   44770 command_runner.go:130] >     {
	I0731 17:33:44.910036   44770 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I0731 17:33:44.910045   44770 command_runner.go:130] >       "repoTags": [
	I0731 17:33:44.910054   44770 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I0731 17:33:44.910061   44770 command_runner.go:130] >       ],
	I0731 17:33:44.910069   44770 command_runner.go:130] >       "repoDigests": [
	I0731 17:33:44.910084   44770 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I0731 17:33:44.910107   44770 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I0731 17:33:44.910115   44770 command_runner.go:130] >       ],
	I0731 17:33:44.910123   44770 command_runner.go:130] >       "size": "31470524",
	I0731 17:33:44.910133   44770 command_runner.go:130] >       "uid": null,
	I0731 17:33:44.910142   44770 command_runner.go:130] >       "username": "",
	I0731 17:33:44.910151   44770 command_runner.go:130] >       "spec": null,
	I0731 17:33:44.910165   44770 command_runner.go:130] >       "pinned": false
	I0731 17:33:44.910173   44770 command_runner.go:130] >     },
	I0731 17:33:44.910180   44770 command_runner.go:130] >     {
	I0731 17:33:44.910191   44770 command_runner.go:130] >       "id": "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4",
	I0731 17:33:44.910209   44770 command_runner.go:130] >       "repoTags": [
	I0731 17:33:44.910220   44770 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.11.1"
	I0731 17:33:44.910226   44770 command_runner.go:130] >       ],
	I0731 17:33:44.910235   44770 command_runner.go:130] >       "repoDigests": [
	I0731 17:33:44.910252   44770 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1",
	I0731 17:33:44.910267   44770 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:2169b3b96af988cf69d7dd69efbcc59433eb027320eb185c6110e0850b997870"
	I0731 17:33:44.910276   44770 command_runner.go:130] >       ],
	I0731 17:33:44.910283   44770 command_runner.go:130] >       "size": "61245718",
	I0731 17:33:44.910292   44770 command_runner.go:130] >       "uid": null,
	I0731 17:33:44.910300   44770 command_runner.go:130] >       "username": "nonroot",
	I0731 17:33:44.910309   44770 command_runner.go:130] >       "spec": null,
	I0731 17:33:44.910317   44770 command_runner.go:130] >       "pinned": false
	I0731 17:33:44.910325   44770 command_runner.go:130] >     },
	I0731 17:33:44.910331   44770 command_runner.go:130] >     {
	I0731 17:33:44.910340   44770 command_runner.go:130] >       "id": "3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899",
	I0731 17:33:44.910346   44770 command_runner.go:130] >       "repoTags": [
	I0731 17:33:44.910360   44770 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.12-0"
	I0731 17:33:44.910370   44770 command_runner.go:130] >       ],
	I0731 17:33:44.910377   44770 command_runner.go:130] >       "repoDigests": [
	I0731 17:33:44.910391   44770 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:2e6b9c67730f1f1dce4c6e16d60135e00608728567f537e8ff70c244756cbb62",
	I0731 17:33:44.910410   44770 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b"
	I0731 17:33:44.910418   44770 command_runner.go:130] >       ],
	I0731 17:33:44.910426   44770 command_runner.go:130] >       "size": "150779692",
	I0731 17:33:44.910436   44770 command_runner.go:130] >       "uid": {
	I0731 17:33:44.910445   44770 command_runner.go:130] >         "value": "0"
	I0731 17:33:44.910452   44770 command_runner.go:130] >       },
	I0731 17:33:44.910462   44770 command_runner.go:130] >       "username": "",
	I0731 17:33:44.910471   44770 command_runner.go:130] >       "spec": null,
	I0731 17:33:44.910480   44770 command_runner.go:130] >       "pinned": false
	I0731 17:33:44.910486   44770 command_runner.go:130] >     },
	I0731 17:33:44.910494   44770 command_runner.go:130] >     {
	I0731 17:33:44.910505   44770 command_runner.go:130] >       "id": "1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d",
	I0731 17:33:44.910514   44770 command_runner.go:130] >       "repoTags": [
	I0731 17:33:44.910525   44770 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.30.3"
	I0731 17:33:44.910533   44770 command_runner.go:130] >       ],
	I0731 17:33:44.910541   44770 command_runner.go:130] >       "repoDigests": [
	I0731 17:33:44.910560   44770 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:a36d558835e48950f6d13b1edbe20605b8dfbc81e088f58221796631e107966c",
	I0731 17:33:44.910574   44770 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:a3a6c80030a6e720734ae3291448388f70b6f1d463f103e4f06f358f8a170315"
	I0731 17:33:44.910581   44770 command_runner.go:130] >       ],
	I0731 17:33:44.910594   44770 command_runner.go:130] >       "size": "117609954",
	I0731 17:33:44.910602   44770 command_runner.go:130] >       "uid": {
	I0731 17:33:44.910608   44770 command_runner.go:130] >         "value": "0"
	I0731 17:33:44.910617   44770 command_runner.go:130] >       },
	I0731 17:33:44.910624   44770 command_runner.go:130] >       "username": "",
	I0731 17:33:44.910633   44770 command_runner.go:130] >       "spec": null,
	I0731 17:33:44.910647   44770 command_runner.go:130] >       "pinned": false
	I0731 17:33:44.910655   44770 command_runner.go:130] >     },
	I0731 17:33:44.910661   44770 command_runner.go:130] >     {
	I0731 17:33:44.910672   44770 command_runner.go:130] >       "id": "76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e",
	I0731 17:33:44.910682   44770 command_runner.go:130] >       "repoTags": [
	I0731 17:33:44.910693   44770 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.30.3"
	I0731 17:33:44.910701   44770 command_runner.go:130] >       ],
	I0731 17:33:44.910709   44770 command_runner.go:130] >       "repoDigests": [
	I0731 17:33:44.910738   44770 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:eff43da55a29a5e66ec9480f28233d733a6a8433b7a46f6e8c07086fa4ef69b7",
	I0731 17:33:44.910754   44770 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:fa179d147c6bacddd1586f6d12ff79a844e951c7b159fdcb92cdf56f3033d91e"
	I0731 17:33:44.910760   44770 command_runner.go:130] >       ],
	I0731 17:33:44.910767   44770 command_runner.go:130] >       "size": "112198984",
	I0731 17:33:44.910775   44770 command_runner.go:130] >       "uid": {
	I0731 17:33:44.910780   44770 command_runner.go:130] >         "value": "0"
	I0731 17:33:44.910785   44770 command_runner.go:130] >       },
	I0731 17:33:44.910789   44770 command_runner.go:130] >       "username": "",
	I0731 17:33:44.910798   44770 command_runner.go:130] >       "spec": null,
	I0731 17:33:44.910803   44770 command_runner.go:130] >       "pinned": false
	I0731 17:33:44.910808   44770 command_runner.go:130] >     },
	I0731 17:33:44.910812   44770 command_runner.go:130] >     {
	I0731 17:33:44.910820   44770 command_runner.go:130] >       "id": "55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1",
	I0731 17:33:44.910824   44770 command_runner.go:130] >       "repoTags": [
	I0731 17:33:44.910831   44770 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.30.3"
	I0731 17:33:44.910835   44770 command_runner.go:130] >       ],
	I0731 17:33:44.910840   44770 command_runner.go:130] >       "repoDigests": [
	I0731 17:33:44.910849   44770 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:8c178447597867a03bbcdf0d1ce43fc8f6807ead2321bd1ec0e845a2f12dad80",
	I0731 17:33:44.910859   44770 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:b26e535e8ee1cbd7dc5642fb61bd36e9d23f32e9242ae0010b2905656e664f65"
	I0731 17:33:44.910871   44770 command_runner.go:130] >       ],
	I0731 17:33:44.910876   44770 command_runner.go:130] >       "size": "85953945",
	I0731 17:33:44.910882   44770 command_runner.go:130] >       "uid": null,
	I0731 17:33:44.910888   44770 command_runner.go:130] >       "username": "",
	I0731 17:33:44.910894   44770 command_runner.go:130] >       "spec": null,
	I0731 17:33:44.910900   44770 command_runner.go:130] >       "pinned": false
	I0731 17:33:44.910905   44770 command_runner.go:130] >     },
	I0731 17:33:44.910910   44770 command_runner.go:130] >     {
	I0731 17:33:44.910921   44770 command_runner.go:130] >       "id": "3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2",
	I0731 17:33:44.910927   44770 command_runner.go:130] >       "repoTags": [
	I0731 17:33:44.910934   44770 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.30.3"
	I0731 17:33:44.910940   44770 command_runner.go:130] >       ],
	I0731 17:33:44.910947   44770 command_runner.go:130] >       "repoDigests": [
	I0731 17:33:44.910957   44770 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:1738178fb116d10e7cde2cfc3671f5dfdad518d773677af740483f2dfe674266",
	I0731 17:33:44.910972   44770 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:2147ab5d2c73dd84e28332fcbee6826d1648eed30a531a52a96501b37d7ee4e4"
	I0731 17:33:44.910978   44770 command_runner.go:130] >       ],
	I0731 17:33:44.910986   44770 command_runner.go:130] >       "size": "63051080",
	I0731 17:33:44.910992   44770 command_runner.go:130] >       "uid": {
	I0731 17:33:44.910999   44770 command_runner.go:130] >         "value": "0"
	I0731 17:33:44.911004   44770 command_runner.go:130] >       },
	I0731 17:33:44.911009   44770 command_runner.go:130] >       "username": "",
	I0731 17:33:44.911016   44770 command_runner.go:130] >       "spec": null,
	I0731 17:33:44.911020   44770 command_runner.go:130] >       "pinned": false
	I0731 17:33:44.911023   44770 command_runner.go:130] >     },
	I0731 17:33:44.911026   44770 command_runner.go:130] >     {
	I0731 17:33:44.911032   44770 command_runner.go:130] >       "id": "e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c",
	I0731 17:33:44.911037   44770 command_runner.go:130] >       "repoTags": [
	I0731 17:33:44.911041   44770 command_runner.go:130] >         "registry.k8s.io/pause:3.9"
	I0731 17:33:44.911045   44770 command_runner.go:130] >       ],
	I0731 17:33:44.911049   44770 command_runner.go:130] >       "repoDigests": [
	I0731 17:33:44.911056   44770 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097",
	I0731 17:33:44.911065   44770 command_runner.go:130] >         "registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10"
	I0731 17:33:44.911068   44770 command_runner.go:130] >       ],
	I0731 17:33:44.911072   44770 command_runner.go:130] >       "size": "750414",
	I0731 17:33:44.911076   44770 command_runner.go:130] >       "uid": {
	I0731 17:33:44.911080   44770 command_runner.go:130] >         "value": "65535"
	I0731 17:33:44.911091   44770 command_runner.go:130] >       },
	I0731 17:33:44.911097   44770 command_runner.go:130] >       "username": "",
	I0731 17:33:44.911101   44770 command_runner.go:130] >       "spec": null,
	I0731 17:33:44.911105   44770 command_runner.go:130] >       "pinned": true
	I0731 17:33:44.911123   44770 command_runner.go:130] >     }
	I0731 17:33:44.911129   44770 command_runner.go:130] >   ]
	I0731 17:33:44.911135   44770 command_runner.go:130] > }
	I0731 17:33:44.911363   44770 crio.go:514] all images are preloaded for cri-o runtime.
	I0731 17:33:44.911376   44770 crio.go:433] Images already preloaded, skipping extraction
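The pair of "sudo crictl images --output json" calls above is how minikube decides whether the preloaded image tarball for v1.30.3 still needs to be extracted: once every required image tag already appears in CRI-O's image store, extraction is skipped ("all images are preloaded for cri-o runtime."). A minimal sketch of reproducing that check by hand against this node follows; it is not minikube's own preload.go/crio.go logic, it assumes jq is installed on the workstation, and the /tmp file names are purely illustrative.

# List the tags CRI-O already has on the node (same command the log shows being run over SSH).
out/minikube-linux-amd64 -p multinode-498089 ssh "sudo crictl images --output json" \
  | jq -r '.images[].repoTags[]' | sort > /tmp/present.txt
# Expected images for Kubernetes v1.30.3 on CRI-O, taken from the listing above.
printf '%s\n' \
  registry.k8s.io/kube-apiserver:v1.30.3 \
  registry.k8s.io/kube-controller-manager:v1.30.3 \
  registry.k8s.io/kube-scheduler:v1.30.3 \
  registry.k8s.io/kube-proxy:v1.30.3 \
  registry.k8s.io/etcd:3.5.12-0 \
  registry.k8s.io/coredns/coredns:v1.11.1 \
  registry.k8s.io/pause:3.9 \
  | sort > /tmp/expected.txt
# Any line printed here is an expected image that is not yet in the store.
comm -13 /tmp/present.txt /tmp/expected.txt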
	I0731 17:33:44.911416   44770 ssh_runner.go:195] Run: sudo crictl images --output json
	I0731 17:33:44.943099   44770 command_runner.go:130] > {
	I0731 17:33:44.943143   44770 command_runner.go:130] >   "images": [
	I0731 17:33:44.943149   44770 command_runner.go:130] >     {
	I0731 17:33:44.943158   44770 command_runner.go:130] >       "id": "5cc3abe5717dbf8e0db6fc2e890c3f82fe95e2ce5c5d7320f98a5b71c767d42f",
	I0731 17:33:44.943164   44770 command_runner.go:130] >       "repoTags": [
	I0731 17:33:44.943173   44770 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240715-585640e9"
	I0731 17:33:44.943178   44770 command_runner.go:130] >       ],
	I0731 17:33:44.943184   44770 command_runner.go:130] >       "repoDigests": [
	I0731 17:33:44.943198   44770 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:3b93f681916ee780a9941d48cb20622486c08af54f8d87d801412bcca0832115",
	I0731 17:33:44.943213   44770 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:88ed2adbc140254762f98fad7f4b16d279117356ebaf95aebf191713c828a493"
	I0731 17:33:44.943223   44770 command_runner.go:130] >       ],
	I0731 17:33:44.943230   44770 command_runner.go:130] >       "size": "87165492",
	I0731 17:33:44.943239   44770 command_runner.go:130] >       "uid": null,
	I0731 17:33:44.943244   44770 command_runner.go:130] >       "username": "",
	I0731 17:33:44.943250   44770 command_runner.go:130] >       "spec": null,
	I0731 17:33:44.943254   44770 command_runner.go:130] >       "pinned": false
	I0731 17:33:44.943258   44770 command_runner.go:130] >     },
	I0731 17:33:44.943266   44770 command_runner.go:130] >     {
	I0731 17:33:44.943275   44770 command_runner.go:130] >       "id": "6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46",
	I0731 17:33:44.943285   44770 command_runner.go:130] >       "repoTags": [
	I0731 17:33:44.943294   44770 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240719-e7903573"
	I0731 17:33:44.943302   44770 command_runner.go:130] >       ],
	I0731 17:33:44.943325   44770 command_runner.go:130] >       "repoDigests": [
	I0731 17:33:44.943340   44770 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9",
	I0731 17:33:44.943350   44770 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:da8ad203ec15a72c313015e5609db44bfad7c95d8ce63e87ff97c66363b5680a"
	I0731 17:33:44.943357   44770 command_runner.go:130] >       ],
	I0731 17:33:44.943364   44770 command_runner.go:130] >       "size": "87174707",
	I0731 17:33:44.943373   44770 command_runner.go:130] >       "uid": null,
	I0731 17:33:44.943389   44770 command_runner.go:130] >       "username": "",
	I0731 17:33:44.943399   44770 command_runner.go:130] >       "spec": null,
	I0731 17:33:44.943408   44770 command_runner.go:130] >       "pinned": false
	I0731 17:33:44.943417   44770 command_runner.go:130] >     },
	I0731 17:33:44.943425   44770 command_runner.go:130] >     {
	I0731 17:33:44.943437   44770 command_runner.go:130] >       "id": "8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a",
	I0731 17:33:44.943447   44770 command_runner.go:130] >       "repoTags": [
	I0731 17:33:44.943456   44770 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox:1.28"
	I0731 17:33:44.943462   44770 command_runner.go:130] >       ],
	I0731 17:33:44.943468   44770 command_runner.go:130] >       "repoDigests": [
	I0731 17:33:44.943482   44770 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335",
	I0731 17:33:44.943497   44770 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12"
	I0731 17:33:44.943506   44770 command_runner.go:130] >       ],
	I0731 17:33:44.943516   44770 command_runner.go:130] >       "size": "1363676",
	I0731 17:33:44.943525   44770 command_runner.go:130] >       "uid": null,
	I0731 17:33:44.943534   44770 command_runner.go:130] >       "username": "",
	I0731 17:33:44.943543   44770 command_runner.go:130] >       "spec": null,
	I0731 17:33:44.943553   44770 command_runner.go:130] >       "pinned": false
	I0731 17:33:44.943557   44770 command_runner.go:130] >     },
	I0731 17:33:44.943563   44770 command_runner.go:130] >     {
	I0731 17:33:44.943573   44770 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I0731 17:33:44.943582   44770 command_runner.go:130] >       "repoTags": [
	I0731 17:33:44.943594   44770 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I0731 17:33:44.943604   44770 command_runner.go:130] >       ],
	I0731 17:33:44.943613   44770 command_runner.go:130] >       "repoDigests": [
	I0731 17:33:44.943628   44770 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I0731 17:33:44.943650   44770 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I0731 17:33:44.943657   44770 command_runner.go:130] >       ],
	I0731 17:33:44.943661   44770 command_runner.go:130] >       "size": "31470524",
	I0731 17:33:44.943668   44770 command_runner.go:130] >       "uid": null,
	I0731 17:33:44.943684   44770 command_runner.go:130] >       "username": "",
	I0731 17:33:44.943693   44770 command_runner.go:130] >       "spec": null,
	I0731 17:33:44.943700   44770 command_runner.go:130] >       "pinned": false
	I0731 17:33:44.943709   44770 command_runner.go:130] >     },
	I0731 17:33:44.943717   44770 command_runner.go:130] >     {
	I0731 17:33:44.943728   44770 command_runner.go:130] >       "id": "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4",
	I0731 17:33:44.943737   44770 command_runner.go:130] >       "repoTags": [
	I0731 17:33:44.943748   44770 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.11.1"
	I0731 17:33:44.943756   44770 command_runner.go:130] >       ],
	I0731 17:33:44.943764   44770 command_runner.go:130] >       "repoDigests": [
	I0731 17:33:44.943773   44770 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1",
	I0731 17:33:44.943786   44770 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:2169b3b96af988cf69d7dd69efbcc59433eb027320eb185c6110e0850b997870"
	I0731 17:33:44.943795   44770 command_runner.go:130] >       ],
	I0731 17:33:44.943803   44770 command_runner.go:130] >       "size": "61245718",
	I0731 17:33:44.943812   44770 command_runner.go:130] >       "uid": null,
	I0731 17:33:44.943822   44770 command_runner.go:130] >       "username": "nonroot",
	I0731 17:33:44.943831   44770 command_runner.go:130] >       "spec": null,
	I0731 17:33:44.943840   44770 command_runner.go:130] >       "pinned": false
	I0731 17:33:44.943848   44770 command_runner.go:130] >     },
	I0731 17:33:44.943856   44770 command_runner.go:130] >     {
	I0731 17:33:44.943864   44770 command_runner.go:130] >       "id": "3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899",
	I0731 17:33:44.943870   44770 command_runner.go:130] >       "repoTags": [
	I0731 17:33:44.943877   44770 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.12-0"
	I0731 17:33:44.943885   44770 command_runner.go:130] >       ],
	I0731 17:33:44.943895   44770 command_runner.go:130] >       "repoDigests": [
	I0731 17:33:44.943906   44770 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:2e6b9c67730f1f1dce4c6e16d60135e00608728567f537e8ff70c244756cbb62",
	I0731 17:33:44.943920   44770 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b"
	I0731 17:33:44.943928   44770 command_runner.go:130] >       ],
	I0731 17:33:44.943935   44770 command_runner.go:130] >       "size": "150779692",
	I0731 17:33:44.943944   44770 command_runner.go:130] >       "uid": {
	I0731 17:33:44.943953   44770 command_runner.go:130] >         "value": "0"
	I0731 17:33:44.943961   44770 command_runner.go:130] >       },
	I0731 17:33:44.943965   44770 command_runner.go:130] >       "username": "",
	I0731 17:33:44.943969   44770 command_runner.go:130] >       "spec": null,
	I0731 17:33:44.943974   44770 command_runner.go:130] >       "pinned": false
	I0731 17:33:44.943982   44770 command_runner.go:130] >     },
	I0731 17:33:44.943997   44770 command_runner.go:130] >     {
	I0731 17:33:44.944011   44770 command_runner.go:130] >       "id": "1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d",
	I0731 17:33:44.944021   44770 command_runner.go:130] >       "repoTags": [
	I0731 17:33:44.944034   44770 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.30.3"
	I0731 17:33:44.944043   44770 command_runner.go:130] >       ],
	I0731 17:33:44.944052   44770 command_runner.go:130] >       "repoDigests": [
	I0731 17:33:44.944063   44770 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:a36d558835e48950f6d13b1edbe20605b8dfbc81e088f58221796631e107966c",
	I0731 17:33:44.944076   44770 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:a3a6c80030a6e720734ae3291448388f70b6f1d463f103e4f06f358f8a170315"
	I0731 17:33:44.944085   44770 command_runner.go:130] >       ],
	I0731 17:33:44.944092   44770 command_runner.go:130] >       "size": "117609954",
	I0731 17:33:44.944101   44770 command_runner.go:130] >       "uid": {
	I0731 17:33:44.944107   44770 command_runner.go:130] >         "value": "0"
	I0731 17:33:44.944115   44770 command_runner.go:130] >       },
	I0731 17:33:44.944122   44770 command_runner.go:130] >       "username": "",
	I0731 17:33:44.944134   44770 command_runner.go:130] >       "spec": null,
	I0731 17:33:44.944144   44770 command_runner.go:130] >       "pinned": false
	I0731 17:33:44.944149   44770 command_runner.go:130] >     },
	I0731 17:33:44.944154   44770 command_runner.go:130] >     {
	I0731 17:33:44.944161   44770 command_runner.go:130] >       "id": "76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e",
	I0731 17:33:44.944166   44770 command_runner.go:130] >       "repoTags": [
	I0731 17:33:44.944178   44770 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.30.3"
	I0731 17:33:44.944186   44770 command_runner.go:130] >       ],
	I0731 17:33:44.944193   44770 command_runner.go:130] >       "repoDigests": [
	I0731 17:33:44.945399   44770 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:eff43da55a29a5e66ec9480f28233d733a6a8433b7a46f6e8c07086fa4ef69b7",
	I0731 17:33:44.945427   44770 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:fa179d147c6bacddd1586f6d12ff79a844e951c7b159fdcb92cdf56f3033d91e"
	I0731 17:33:44.945431   44770 command_runner.go:130] >       ],
	I0731 17:33:44.945439   44770 command_runner.go:130] >       "size": "112198984",
	I0731 17:33:44.945442   44770 command_runner.go:130] >       "uid": {
	I0731 17:33:44.945446   44770 command_runner.go:130] >         "value": "0"
	I0731 17:33:44.945451   44770 command_runner.go:130] >       },
	I0731 17:33:44.945458   44770 command_runner.go:130] >       "username": "",
	I0731 17:33:44.945466   44770 command_runner.go:130] >       "spec": null,
	I0731 17:33:44.945476   44770 command_runner.go:130] >       "pinned": false
	I0731 17:33:44.945485   44770 command_runner.go:130] >     },
	I0731 17:33:44.945492   44770 command_runner.go:130] >     {
	I0731 17:33:44.945506   44770 command_runner.go:130] >       "id": "55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1",
	I0731 17:33:44.945522   44770 command_runner.go:130] >       "repoTags": [
	I0731 17:33:44.945541   44770 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.30.3"
	I0731 17:33:44.945548   44770 command_runner.go:130] >       ],
	I0731 17:33:44.945554   44770 command_runner.go:130] >       "repoDigests": [
	I0731 17:33:44.945565   44770 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:8c178447597867a03bbcdf0d1ce43fc8f6807ead2321bd1ec0e845a2f12dad80",
	I0731 17:33:44.945586   44770 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:b26e535e8ee1cbd7dc5642fb61bd36e9d23f32e9242ae0010b2905656e664f65"
	I0731 17:33:44.945601   44770 command_runner.go:130] >       ],
	I0731 17:33:44.945612   44770 command_runner.go:130] >       "size": "85953945",
	I0731 17:33:44.945622   44770 command_runner.go:130] >       "uid": null,
	I0731 17:33:44.945636   44770 command_runner.go:130] >       "username": "",
	I0731 17:33:44.945644   44770 command_runner.go:130] >       "spec": null,
	I0731 17:33:44.945652   44770 command_runner.go:130] >       "pinned": false
	I0731 17:33:44.945661   44770 command_runner.go:130] >     },
	I0731 17:33:44.945670   44770 command_runner.go:130] >     {
	I0731 17:33:44.945686   44770 command_runner.go:130] >       "id": "3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2",
	I0731 17:33:44.945695   44770 command_runner.go:130] >       "repoTags": [
	I0731 17:33:44.945706   44770 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.30.3"
	I0731 17:33:44.945715   44770 command_runner.go:130] >       ],
	I0731 17:33:44.945725   44770 command_runner.go:130] >       "repoDigests": [
	I0731 17:33:44.945742   44770 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:1738178fb116d10e7cde2cfc3671f5dfdad518d773677af740483f2dfe674266",
	I0731 17:33:44.945755   44770 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:2147ab5d2c73dd84e28332fcbee6826d1648eed30a531a52a96501b37d7ee4e4"
	I0731 17:33:44.945764   44770 command_runner.go:130] >       ],
	I0731 17:33:44.945780   44770 command_runner.go:130] >       "size": "63051080",
	I0731 17:33:44.945789   44770 command_runner.go:130] >       "uid": {
	I0731 17:33:44.945798   44770 command_runner.go:130] >         "value": "0"
	I0731 17:33:44.945807   44770 command_runner.go:130] >       },
	I0731 17:33:44.945816   44770 command_runner.go:130] >       "username": "",
	I0731 17:33:44.945825   44770 command_runner.go:130] >       "spec": null,
	I0731 17:33:44.945834   44770 command_runner.go:130] >       "pinned": false
	I0731 17:33:44.945843   44770 command_runner.go:130] >     },
	I0731 17:33:44.945851   44770 command_runner.go:130] >     {
	I0731 17:33:44.945861   44770 command_runner.go:130] >       "id": "e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c",
	I0731 17:33:44.945871   44770 command_runner.go:130] >       "repoTags": [
	I0731 17:33:44.945886   44770 command_runner.go:130] >         "registry.k8s.io/pause:3.9"
	I0731 17:33:44.945895   44770 command_runner.go:130] >       ],
	I0731 17:33:44.945904   44770 command_runner.go:130] >       "repoDigests": [
	I0731 17:33:44.945922   44770 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097",
	I0731 17:33:44.945942   44770 command_runner.go:130] >         "registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10"
	I0731 17:33:44.945952   44770 command_runner.go:130] >       ],
	I0731 17:33:44.945958   44770 command_runner.go:130] >       "size": "750414",
	I0731 17:33:44.945968   44770 command_runner.go:130] >       "uid": {
	I0731 17:33:44.945978   44770 command_runner.go:130] >         "value": "65535"
	I0731 17:33:44.945992   44770 command_runner.go:130] >       },
	I0731 17:33:44.946001   44770 command_runner.go:130] >       "username": "",
	I0731 17:33:44.946019   44770 command_runner.go:130] >       "spec": null,
	I0731 17:33:44.946025   44770 command_runner.go:130] >       "pinned": true
	I0731 17:33:44.946030   44770 command_runner.go:130] >     }
	I0731 17:33:44.946038   44770 command_runner.go:130] >   ]
	I0731 17:33:44.946044   44770 command_runner.go:130] > }
	I0731 17:33:44.946351   44770 crio.go:514] all images are preloaded for cri-o runtime.
	I0731 17:33:44.946666   44770 cache_images.go:84] Images are preloaded, skipping loading
	I0731 17:33:44.946686   44770 kubeadm.go:934] updating node { 192.168.39.100 8443 v1.30.3 crio true true} ...
	I0731 17:33:44.946804   44770 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=multinode-498089 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.100
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:multinode-498089 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
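The [Unit]/[Service]/[Install] fragment printed above is the kubelet systemd drop-in minikube generates for this node; the empty ExecStart= line clears the unit's default command before the versioned kubelet binary is relaunched with the node-specific flags. A hedged sketch of how the same drop-in would typically be installed and activated on a kubeadm-style node is shown below; the drop-in path is the conventional kubeadm location and is an assumption here, not something read from this log.

# Write the drop-in (content copied from the log above), then reload and restart the kubelet.
sudo tee /etc/systemd/system/kubelet.service.d/10-kubeadm.conf <<'EOF'
[Unit]
Wants=crio.service

[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=multinode-498089 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.100

[Install]
EOF
sudo systemctl daemon-reload
sudo systemctl restart kubelet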
	I0731 17:33:44.946887   44770 ssh_runner.go:195] Run: crio config
	I0731 17:33:44.987825   44770 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I0731 17:33:44.987854   44770 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I0731 17:33:44.987861   44770 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I0731 17:33:44.987865   44770 command_runner.go:130] > #
	I0731 17:33:44.987872   44770 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I0731 17:33:44.987878   44770 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I0731 17:33:44.987886   44770 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I0731 17:33:44.987897   44770 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I0731 17:33:44.987902   44770 command_runner.go:130] > # reload'.
	I0731 17:33:44.987912   44770 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I0731 17:33:44.987921   44770 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I0731 17:33:44.987927   44770 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I0731 17:33:44.987934   44770 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I0731 17:33:44.987938   44770 command_runner.go:130] > [crio]
	I0731 17:33:44.987944   44770 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I0731 17:33:44.987950   44770 command_runner.go:130] > # containers images, in this directory.
	I0731 17:33:44.988283   44770 command_runner.go:130] > root = "/var/lib/containers/storage"
	I0731 17:33:44.988325   44770 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I0731 17:33:44.988334   44770 command_runner.go:130] > runroot = "/var/run/containers/storage"
	I0731 17:33:44.988347   44770 command_runner.go:130] > # Path to the "imagestore". If CRI-O stores all of its images in this directory differently than Root.
	I0731 17:33:44.988356   44770 command_runner.go:130] > # imagestore = ""
	I0731 17:33:44.988366   44770 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I0731 17:33:44.988376   44770 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I0731 17:33:44.988387   44770 command_runner.go:130] > storage_driver = "overlay"
	I0731 17:33:44.988397   44770 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I0731 17:33:44.988410   44770 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I0731 17:33:44.988416   44770 command_runner.go:130] > storage_option = [
	I0731 17:33:44.988424   44770 command_runner.go:130] > 	"overlay.mountopt=nodev,metacopy=on",
	I0731 17:33:44.988430   44770 command_runner.go:130] > ]
	I0731 17:33:44.988440   44770 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I0731 17:33:44.988449   44770 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I0731 17:33:44.988456   44770 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I0731 17:33:44.988466   44770 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I0731 17:33:44.988475   44770 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I0731 17:33:44.988485   44770 command_runner.go:130] > # always happen on a node reboot
	I0731 17:33:44.988493   44770 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I0731 17:33:44.988510   44770 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I0731 17:33:44.988523   44770 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I0731 17:33:44.988531   44770 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I0731 17:33:44.988543   44770 command_runner.go:130] > version_file_persist = "/var/lib/crio/version"
	I0731 17:33:44.988561   44770 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I0731 17:33:44.988578   44770 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I0731 17:33:44.988585   44770 command_runner.go:130] > # internal_wipe = true
	I0731 17:33:44.988596   44770 command_runner.go:130] > # InternalRepair is whether CRI-O should check if the container and image storage was corrupted after a sudden restart.
	I0731 17:33:44.988607   44770 command_runner.go:130] > # If it was, CRI-O also attempts to repair the storage.
	I0731 17:33:44.988615   44770 command_runner.go:130] > # internal_repair = false
	I0731 17:33:44.988626   44770 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I0731 17:33:44.988635   44770 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I0731 17:33:44.988647   44770 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I0731 17:33:44.988656   44770 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I0731 17:33:44.988669   44770 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I0731 17:33:44.988679   44770 command_runner.go:130] > [crio.api]
	I0731 17:33:44.988689   44770 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I0731 17:33:44.988700   44770 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I0731 17:33:44.988711   44770 command_runner.go:130] > # IP address on which the stream server will listen.
	I0731 17:33:44.988725   44770 command_runner.go:130] > # stream_address = "127.0.0.1"
	I0731 17:33:44.988739   44770 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I0731 17:33:44.988748   44770 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I0731 17:33:44.988760   44770 command_runner.go:130] > # stream_port = "0"
	I0731 17:33:44.988770   44770 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I0731 17:33:44.988780   44770 command_runner.go:130] > # stream_enable_tls = false
	I0731 17:33:44.988789   44770 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I0731 17:33:44.988801   44770 command_runner.go:130] > # stream_idle_timeout = ""
	I0731 17:33:44.988816   44770 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I0731 17:33:44.988828   44770 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes within 5
	I0731 17:33:44.988834   44770 command_runner.go:130] > # minutes.
	I0731 17:33:44.988843   44770 command_runner.go:130] > # stream_tls_cert = ""
	I0731 17:33:44.988853   44770 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I0731 17:33:44.988865   44770 command_runner.go:130] > # change and CRI-O will automatically pick up the changes within 5 minutes.
	I0731 17:33:44.988872   44770 command_runner.go:130] > # stream_tls_key = ""
	I0731 17:33:44.988885   44770 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I0731 17:33:44.988899   44770 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I0731 17:33:44.988930   44770 command_runner.go:130] > # automatically pick up the changes within 5 minutes.
	I0731 17:33:44.988940   44770 command_runner.go:130] > # stream_tls_ca = ""
	I0731 17:33:44.988952   44770 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 80 * 1024 * 1024.
	I0731 17:33:44.988964   44770 command_runner.go:130] > grpc_max_send_msg_size = 16777216
	I0731 17:33:44.988977   44770 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 80 * 1024 * 1024.
	I0731 17:33:44.988987   44770 command_runner.go:130] > grpc_max_recv_msg_size = 16777216
	I0731 17:33:44.988997   44770 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I0731 17:33:44.989009   44770 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I0731 17:33:44.989018   44770 command_runner.go:130] > [crio.runtime]
	I0731 17:33:44.989028   44770 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I0731 17:33:44.989040   44770 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I0731 17:33:44.989049   44770 command_runner.go:130] > # "nofile=1024:2048"
	I0731 17:33:44.989063   44770 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I0731 17:33:44.989073   44770 command_runner.go:130] > # default_ulimits = [
	I0731 17:33:44.989078   44770 command_runner.go:130] > # ]
	I0731 17:33:44.989088   44770 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I0731 17:33:44.989097   44770 command_runner.go:130] > # no_pivot = false
	I0731 17:33:44.989105   44770 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I0731 17:33:44.989117   44770 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I0731 17:33:44.989125   44770 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I0731 17:33:44.989137   44770 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I0731 17:33:44.989147   44770 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I0731 17:33:44.989158   44770 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0731 17:33:44.989170   44770 command_runner.go:130] > conmon = "/usr/libexec/crio/conmon"
	I0731 17:33:44.989182   44770 command_runner.go:130] > # Cgroup setting for conmon
	I0731 17:33:44.989194   44770 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I0731 17:33:44.989204   44770 command_runner.go:130] > conmon_cgroup = "pod"
	I0731 17:33:44.989213   44770 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I0731 17:33:44.989223   44770 command_runner.go:130] > # environment variables to conmon or the runtime.
	I0731 17:33:44.989236   44770 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0731 17:33:44.989245   44770 command_runner.go:130] > conmon_env = [
	I0731 17:33:44.989254   44770 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I0731 17:33:44.989262   44770 command_runner.go:130] > ]
	I0731 17:33:44.989271   44770 command_runner.go:130] > # Additional environment variables to set for all the
	I0731 17:33:44.989283   44770 command_runner.go:130] > # containers. These are overridden if set in the
	I0731 17:33:44.989293   44770 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I0731 17:33:44.989301   44770 command_runner.go:130] > # default_env = [
	I0731 17:33:44.989307   44770 command_runner.go:130] > # ]
	I0731 17:33:44.989320   44770 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I0731 17:33:44.989335   44770 command_runner.go:130] > # This option is deprecated, and be interpreted from whether SELinux is enabled on the host in the future.
	I0731 17:33:44.989341   44770 command_runner.go:130] > # selinux = false
	I0731 17:33:44.989351   44770 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I0731 17:33:44.989361   44770 command_runner.go:130] > # for the runtime. If not specified, then the internal default seccomp profile
	I0731 17:33:44.989372   44770 command_runner.go:130] > # will be used. This option supports live configuration reload.
	I0731 17:33:44.989378   44770 command_runner.go:130] > # seccomp_profile = ""
	I0731 17:33:44.989387   44770 command_runner.go:130] > # Changes the meaning of an empty seccomp profile. By default
	I0731 17:33:44.989395   44770 command_runner.go:130] > # (and according to CRI spec), an empty profile means unconfined.
	I0731 17:33:44.989408   44770 command_runner.go:130] > # This option tells CRI-O to treat an empty profile as the default profile,
	I0731 17:33:44.989418   44770 command_runner.go:130] > # which might increase security.
	I0731 17:33:44.989425   44770 command_runner.go:130] > # This option is currently deprecated,
	I0731 17:33:44.989438   44770 command_runner.go:130] > # and will be replaced by the SeccompDefault FeatureGate in Kubernetes.
	I0731 17:33:44.989449   44770 command_runner.go:130] > seccomp_use_default_when_empty = false
	I0731 17:33:44.989462   44770 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I0731 17:33:44.989476   44770 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I0731 17:33:44.989490   44770 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I0731 17:33:44.989504   44770 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I0731 17:33:44.989516   44770 command_runner.go:130] > # This option supports live configuration reload.
	I0731 17:33:44.989527   44770 command_runner.go:130] > # apparmor_profile = "crio-default"
	I0731 17:33:44.989537   44770 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I0731 17:33:44.989556   44770 command_runner.go:130] > # the cgroup blockio controller.
	I0731 17:33:44.989566   44770 command_runner.go:130] > # blockio_config_file = ""
	I0731 17:33:44.989577   44770 command_runner.go:130] > # Reload blockio-config-file and rescan blockio devices in the system before applying
	I0731 17:33:44.989585   44770 command_runner.go:130] > # blockio parameters.
	I0731 17:33:44.989592   44770 command_runner.go:130] > # blockio_reload = false
	I0731 17:33:44.989602   44770 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I0731 17:33:44.989619   44770 command_runner.go:130] > # irqbalance daemon.
	I0731 17:33:44.989632   44770 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I0731 17:33:44.989644   44770 command_runner.go:130] > # irqbalance_config_restore_file allows to set a cpu mask CRI-O should
	I0731 17:33:44.989657   44770 command_runner.go:130] > # restore as irqbalance config at startup. Set to empty string to disable this flow entirely.
	I0731 17:33:44.989672   44770 command_runner.go:130] > # By default, CRI-O manages the irqbalance configuration to enable dynamic IRQ pinning.
	I0731 17:33:44.989684   44770 command_runner.go:130] > # irqbalance_config_restore_file = "/etc/sysconfig/orig_irq_banned_cpus"
	I0731 17:33:44.989695   44770 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I0731 17:33:44.989707   44770 command_runner.go:130] > # This option supports live configuration reload.
	I0731 17:33:44.989715   44770 command_runner.go:130] > # rdt_config_file = ""
	I0731 17:33:44.989723   44770 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I0731 17:33:44.989733   44770 command_runner.go:130] > cgroup_manager = "cgroupfs"
	I0731 17:33:44.989815   44770 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I0731 17:33:44.989830   44770 command_runner.go:130] > # separate_pull_cgroup = ""
	I0731 17:33:44.989845   44770 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I0731 17:33:44.989859   44770 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I0731 17:33:44.989868   44770 command_runner.go:130] > # will be added.
	I0731 17:33:44.989875   44770 command_runner.go:130] > # default_capabilities = [
	I0731 17:33:44.989885   44770 command_runner.go:130] > # 	"CHOWN",
	I0731 17:33:44.989891   44770 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I0731 17:33:44.989898   44770 command_runner.go:130] > # 	"FSETID",
	I0731 17:33:44.989904   44770 command_runner.go:130] > # 	"FOWNER",
	I0731 17:33:44.989912   44770 command_runner.go:130] > # 	"SETGID",
	I0731 17:33:44.989918   44770 command_runner.go:130] > # 	"SETUID",
	I0731 17:33:44.989926   44770 command_runner.go:130] > # 	"SETPCAP",
	I0731 17:33:44.989934   44770 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I0731 17:33:44.989942   44770 command_runner.go:130] > # 	"KILL",
	I0731 17:33:44.989948   44770 command_runner.go:130] > # ]
	I0731 17:33:44.989962   44770 command_runner.go:130] > # Add capabilities to the inheritable set, as well as the default group of permitted, bounding and effective.
	I0731 17:33:44.989975   44770 command_runner.go:130] > # If capabilities are expected to work for non-root users, this option should be set.
	I0731 17:33:44.989986   44770 command_runner.go:130] > # add_inheritable_capabilities = false
	I0731 17:33:44.989999   44770 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I0731 17:33:44.990011   44770 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0731 17:33:44.990020   44770 command_runner.go:130] > default_sysctls = [
	I0731 17:33:44.990028   44770 command_runner.go:130] > 	"net.ipv4.ip_unprivileged_port_start=0",
	I0731 17:33:44.990037   44770 command_runner.go:130] > ]
	I0731 17:33:44.990044   44770 command_runner.go:130] > # List of devices on the host that a
	I0731 17:33:44.990058   44770 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I0731 17:33:44.990068   44770 command_runner.go:130] > # allowed_devices = [
	I0731 17:33:44.990077   44770 command_runner.go:130] > # 	"/dev/fuse",
	I0731 17:33:44.990082   44770 command_runner.go:130] > # ]
	I0731 17:33:44.990090   44770 command_runner.go:130] > # List of additional devices. specified as
	I0731 17:33:44.990104   44770 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I0731 17:33:44.990115   44770 command_runner.go:130] > # If it is empty or commented out, only the devices
	I0731 17:33:44.990124   44770 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0731 17:33:44.990135   44770 command_runner.go:130] > # additional_devices = [
	I0731 17:33:44.990144   44770 command_runner.go:130] > # ]
	I0731 17:33:44.990152   44770 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I0731 17:33:44.990170   44770 command_runner.go:130] > # cdi_spec_dirs = [
	I0731 17:33:44.990180   44770 command_runner.go:130] > # 	"/etc/cdi",
	I0731 17:33:44.990187   44770 command_runner.go:130] > # 	"/var/run/cdi",
	I0731 17:33:44.990193   44770 command_runner.go:130] > # ]
	I0731 17:33:44.990203   44770 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I0731 17:33:44.990217   44770 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I0731 17:33:44.990225   44770 command_runner.go:130] > # Defaults to false.
	I0731 17:33:44.990234   44770 command_runner.go:130] > # device_ownership_from_security_context = false
	I0731 17:33:44.990248   44770 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I0731 17:33:44.990260   44770 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I0731 17:33:44.990269   44770 command_runner.go:130] > # hooks_dir = [
	I0731 17:33:44.990277   44770 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I0731 17:33:44.990286   44770 command_runner.go:130] > # ]
	I0731 17:33:44.990296   44770 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I0731 17:33:44.990309   44770 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I0731 17:33:44.990320   44770 command_runner.go:130] > # its default mounts from the following two files:
	I0731 17:33:44.990328   44770 command_runner.go:130] > #
	I0731 17:33:44.990337   44770 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I0731 17:33:44.990351   44770 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I0731 17:33:44.990361   44770 command_runner.go:130] > #      override the default mounts shipped with the package.
	I0731 17:33:44.990369   44770 command_runner.go:130] > #
	I0731 17:33:44.990380   44770 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I0731 17:33:44.990393   44770 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I0731 17:33:44.990408   44770 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I0731 17:33:44.990420   44770 command_runner.go:130] > #      only add mounts it finds in this file.
	I0731 17:33:44.990428   44770 command_runner.go:130] > #
	I0731 17:33:44.990435   44770 command_runner.go:130] > # default_mounts_file = ""
	I0731 17:33:44.990448   44770 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I0731 17:33:44.990462   44770 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I0731 17:33:44.990472   44770 command_runner.go:130] > pids_limit = 1024
	I0731 17:33:44.990481   44770 command_runner.go:130] > # Maximum sized allowed for the container log file. Negative numbers indicate
	I0731 17:33:44.990493   44770 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I0731 17:33:44.990506   44770 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I0731 17:33:44.990518   44770 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I0731 17:33:44.990527   44770 command_runner.go:130] > # log_size_max = -1
	I0731 17:33:44.990541   44770 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kubernetes log file
	I0731 17:33:44.990566   44770 command_runner.go:130] > # log_to_journald = false
	I0731 17:33:44.990580   44770 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I0731 17:33:44.990591   44770 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I0731 17:33:44.990602   44770 command_runner.go:130] > # Path to directory for container attach sockets.
	I0731 17:33:44.990614   44770 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I0731 17:33:44.990627   44770 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I0731 17:33:44.990637   44770 command_runner.go:130] > # bind_mount_prefix = ""
	I0731 17:33:44.990646   44770 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I0731 17:33:44.990655   44770 command_runner.go:130] > # read_only = false
	I0731 17:33:44.990666   44770 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I0731 17:33:44.990678   44770 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I0731 17:33:44.990687   44770 command_runner.go:130] > # live configuration reload.
	I0731 17:33:44.990694   44770 command_runner.go:130] > # log_level = "info"
	I0731 17:33:44.990704   44770 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I0731 17:33:44.990714   44770 command_runner.go:130] > # This option supports live configuration reload.
	I0731 17:33:44.990723   44770 command_runner.go:130] > # log_filter = ""
	I0731 17:33:44.990732   44770 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I0731 17:33:44.990748   44770 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I0731 17:33:44.990757   44770 command_runner.go:130] > # separated by comma.
	I0731 17:33:44.990767   44770 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0731 17:33:44.990776   44770 command_runner.go:130] > # uid_mappings = ""
	I0731 17:33:44.990785   44770 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I0731 17:33:44.990796   44770 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I0731 17:33:44.990807   44770 command_runner.go:130] > # separated by comma.
	I0731 17:33:44.990821   44770 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0731 17:33:44.990845   44770 command_runner.go:130] > # gid_mappings = ""
	I0731 17:33:44.990861   44770 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I0731 17:33:44.990873   44770 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0731 17:33:44.990885   44770 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0731 17:33:44.990899   44770 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0731 17:33:44.990909   44770 command_runner.go:130] > # minimum_mappable_uid = -1
	I0731 17:33:44.990918   44770 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I0731 17:33:44.990930   44770 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0731 17:33:44.990941   44770 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0731 17:33:44.990956   44770 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0731 17:33:44.990966   44770 command_runner.go:130] > # minimum_mappable_gid = -1
	I0731 17:33:44.990986   44770 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I0731 17:33:44.990996   44770 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I0731 17:33:44.991003   44770 command_runner.go:130] > # value is 30s, whereas lower values are not considered by CRI-O.
	I0731 17:33:44.991010   44770 command_runner.go:130] > # ctr_stop_timeout = 30
	I0731 17:33:44.991015   44770 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I0731 17:33:44.991023   44770 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I0731 17:33:44.991028   44770 command_runner.go:130] > # a kernel separating runtime (like kata).
	I0731 17:33:44.991035   44770 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I0731 17:33:44.991038   44770 command_runner.go:130] > drop_infra_ctr = false
	I0731 17:33:44.991046   44770 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I0731 17:33:44.991051   44770 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I0731 17:33:44.991059   44770 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I0731 17:33:44.991065   44770 command_runner.go:130] > # infra_ctr_cpuset = ""
	I0731 17:33:44.991072   44770 command_runner.go:130] > # shared_cpuset  determines the CPU set which is allowed to be shared between guaranteed containers,
	I0731 17:33:44.991078   44770 command_runner.go:130] > # regardless of, and in addition to, the exclusiveness of their CPUs.
	I0731 17:33:44.991089   44770 command_runner.go:130] > # This field is optional and would not be used if not specified.
	I0731 17:33:44.991095   44770 command_runner.go:130] > # You can specify CPUs in the Linux CPU list format.
	I0731 17:33:44.991104   44770 command_runner.go:130] > # shared_cpuset = ""
	I0731 17:33:44.991125   44770 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I0731 17:33:44.991136   44770 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I0731 17:33:44.991143   44770 command_runner.go:130] > # namespaces_dir = "/var/run"
	I0731 17:33:44.991156   44770 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I0731 17:33:44.991167   44770 command_runner.go:130] > pinns_path = "/usr/bin/pinns"
	I0731 17:33:44.991178   44770 command_runner.go:130] > # Globally enable/disable CRIU support which is necessary to
	I0731 17:33:44.991191   44770 command_runner.go:130] > # checkpoint and restore container or pods (even if CRIU is found in $PATH).
	I0731 17:33:44.991201   44770 command_runner.go:130] > # enable_criu_support = false
	I0731 17:33:44.991212   44770 command_runner.go:130] > # Enable/disable the generation of the container,
	I0731 17:33:44.991224   44770 command_runner.go:130] > # sandbox lifecycle events to be sent to the Kubelet to optimize the PLEG
	I0731 17:33:44.991233   44770 command_runner.go:130] > # enable_pod_events = false
	I0731 17:33:44.991240   44770 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I0731 17:33:44.991252   44770 command_runner.go:130] > # The name is matched against the runtimes map below.
	I0731 17:33:44.991259   44770 command_runner.go:130] > # default_runtime = "runc"
	I0731 17:33:44.991265   44770 command_runner.go:130] > # A list of paths that, when absent from the host,
	I0731 17:33:44.991274   44770 command_runner.go:130] > # will cause a container creation to fail (as opposed to the current behavior being created as a directory).
	I0731 17:33:44.991283   44770 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jeopardize the health of the node, and whose
	I0731 17:33:44.991297   44770 command_runner.go:130] > # creation as a file is not desired either.
	I0731 17:33:44.991308   44770 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I0731 17:33:44.991316   44770 command_runner.go:130] > # the hostname is being managed dynamically.
	I0731 17:33:44.991320   44770 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I0731 17:33:44.991326   44770 command_runner.go:130] > # ]
	I0731 17:33:44.991331   44770 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I0731 17:33:44.991339   44770 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I0731 17:33:44.991345   44770 command_runner.go:130] > # If no runtime handler is provided, the "default_runtime" will be used.
	I0731 17:33:44.991352   44770 command_runner.go:130] > # Each entry in the table should follow the format:
	I0731 17:33:44.991355   44770 command_runner.go:130] > #
	I0731 17:33:44.991359   44770 command_runner.go:130] > # [crio.runtime.runtimes.runtime-handler]
	I0731 17:33:44.991364   44770 command_runner.go:130] > # runtime_path = "/path/to/the/executable"
	I0731 17:33:44.991408   44770 command_runner.go:130] > # runtime_type = "oci"
	I0731 17:33:44.991415   44770 command_runner.go:130] > # runtime_root = "/path/to/the/root"
	I0731 17:33:44.991419   44770 command_runner.go:130] > # monitor_path = "/path/to/container/monitor"
	I0731 17:33:44.991423   44770 command_runner.go:130] > # monitor_cgroup = "/cgroup/path"
	I0731 17:33:44.991427   44770 command_runner.go:130] > # monitor_exec_cgroup = "/cgroup/path"
	I0731 17:33:44.991431   44770 command_runner.go:130] > # monitor_env = []
	I0731 17:33:44.991436   44770 command_runner.go:130] > # privileged_without_host_devices = false
	I0731 17:33:44.991441   44770 command_runner.go:130] > # allowed_annotations = []
	I0731 17:33:44.991446   44770 command_runner.go:130] > # platform_runtime_paths = { "os/arch" = "/path/to/binary" }
	I0731 17:33:44.991450   44770 command_runner.go:130] > # Where:
	I0731 17:33:44.991455   44770 command_runner.go:130] > # - runtime-handler: Name used to identify the runtime.
	I0731 17:33:44.991461   44770 command_runner.go:130] > # - runtime_path (optional, string): Absolute path to the runtime executable in
	I0731 17:33:44.991469   44770 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I0731 17:33:44.991475   44770 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I0731 17:33:44.991480   44770 command_runner.go:130] > #   in $PATH.
	I0731 17:33:44.991488   44770 command_runner.go:130] > # - runtime_type (optional, string): Type of runtime, one of: "oci", "vm". If
	I0731 17:33:44.991494   44770 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I0731 17:33:44.991499   44770 command_runner.go:130] > # - runtime_root (optional, string): Root directory for storage of containers
	I0731 17:33:44.991505   44770 command_runner.go:130] > #   state.
	I0731 17:33:44.991511   44770 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I0731 17:33:44.991518   44770 command_runner.go:130] > #   file. This can only be used when using the VM runtime_type.
	I0731 17:33:44.991524   44770 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I0731 17:33:44.991531   44770 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I0731 17:33:44.991536   44770 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I0731 17:33:44.991553   44770 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I0731 17:33:44.991560   44770 command_runner.go:130] > #   The currently recognized values are:
	I0731 17:33:44.991566   44770 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I0731 17:33:44.991573   44770 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I0731 17:33:44.991581   44770 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I0731 17:33:44.991586   44770 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I0731 17:33:44.991595   44770 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I0731 17:33:44.991601   44770 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I0731 17:33:44.991609   44770 command_runner.go:130] > #   "io.kubernetes.cri-o.seccompNotifierAction" for enabling the seccomp notifier feature.
	I0731 17:33:44.991615   44770 command_runner.go:130] > #   "io.kubernetes.cri-o.umask" for setting the umask for container init process.
	I0731 17:33:44.991622   44770 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I0731 17:33:44.991628   44770 command_runner.go:130] > # - monitor_path (optional, string): The path of the monitor binary. Replaces
	I0731 17:33:44.991632   44770 command_runner.go:130] > #   deprecated option "conmon".
	I0731 17:33:44.991640   44770 command_runner.go:130] > # - monitor_cgroup (optional, string): The cgroup the container monitor process will be put in.
	I0731 17:33:44.991645   44770 command_runner.go:130] > #   Replaces deprecated option "conmon_cgroup".
	I0731 17:33:44.991653   44770 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): If set to "container", indicates exec probes
	I0731 17:33:44.991658   44770 command_runner.go:130] > #   should be moved to the container's cgroup
	I0731 17:33:44.991666   44770 command_runner.go:130] > # - monitor_env (optional, array of strings): Environment variables to pass to the monitor.
	I0731 17:33:44.991670   44770 command_runner.go:130] > #   Replaces deprecated option "conmon_env".
	I0731 17:33:44.991678   44770 command_runner.go:130] > # - platform_runtime_paths (optional, map): A mapping of platforms to the corresponding
	I0731 17:33:44.991683   44770 command_runner.go:130] > #   runtime executable paths for the runtime handler.
	I0731 17:33:44.991688   44770 command_runner.go:130] > #
	I0731 17:33:44.991692   44770 command_runner.go:130] > # Using the seccomp notifier feature:
	I0731 17:33:44.991695   44770 command_runner.go:130] > #
	I0731 17:33:44.991700   44770 command_runner.go:130] > # This feature can help you to debug seccomp related issues, for example if
	I0731 17:33:44.991707   44770 command_runner.go:130] > # blocked syscalls (permission denied errors) have negative impact on the workload.
	I0731 17:33:44.991710   44770 command_runner.go:130] > #
	I0731 17:33:44.991718   44770 command_runner.go:130] > # To be able to use this feature, configure a runtime which has the annotation
	I0731 17:33:44.991730   44770 command_runner.go:130] > # "io.kubernetes.cri-o.seccompNotifierAction" in the allowed_annotations array.
	I0731 17:33:44.991738   44770 command_runner.go:130] > #
	I0731 17:33:44.991749   44770 command_runner.go:130] > # It also requires at least runc 1.1.0 or crun 0.19 which support the notifier
	I0731 17:33:44.991758   44770 command_runner.go:130] > # feature.
	I0731 17:33:44.991761   44770 command_runner.go:130] > #
	I0731 17:33:44.991767   44770 command_runner.go:130] > # If everything is setup, CRI-O will modify chosen seccomp profiles for
	I0731 17:33:44.991775   44770 command_runner.go:130] > # containers if the annotation "io.kubernetes.cri-o.seccompNotifierAction" is
	I0731 17:33:44.991781   44770 command_runner.go:130] > # set on the Pod sandbox. CRI-O will then get notified if a container is using
	I0731 17:33:44.991794   44770 command_runner.go:130] > # a blocked syscall and then terminate the workload after a timeout of 5
	I0731 17:33:44.991801   44770 command_runner.go:130] > # seconds if the value of "io.kubernetes.cri-o.seccompNotifierAction=stop".
	I0731 17:33:44.991805   44770 command_runner.go:130] > #
	I0731 17:33:44.991810   44770 command_runner.go:130] > # This also means that multiple syscalls can be captured during that period,
	I0731 17:33:44.991817   44770 command_runner.go:130] > # while the timeout will get reset once a new syscall has been discovered.
	I0731 17:33:44.991820   44770 command_runner.go:130] > #
	I0731 17:33:44.991826   44770 command_runner.go:130] > # This also means that the Pods "restartPolicy" has to be set to "Never",
	I0731 17:33:44.991833   44770 command_runner.go:130] > # otherwise the kubelet will restart the container immediately.
	I0731 17:33:44.991836   44770 command_runner.go:130] > #
	I0731 17:33:44.991842   44770 command_runner.go:130] > # Please be aware that CRI-O is not able to get notified if a syscall gets
	I0731 17:33:44.991848   44770 command_runner.go:130] > # blocked based on the seccomp defaultAction, which is a general runtime
	I0731 17:33:44.991851   44770 command_runner.go:130] > # limitation.
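	(Illustrative sketch, not part of the captured log: assuming a runtime handler whose allowed_annotations includes "io.kubernetes.cri-o.seccompNotifierAction", a Pod could opt into the notifier roughly as below. The pod name and image are placeholders.)

apiVersion: v1
kind: Pod
metadata:
  name: seccomp-notifier-demo                     # hypothetical name
  annotations:
    # ask CRI-O to stop the workload ~5s after a blocked syscall is reported
    io.kubernetes.cri-o.seccompNotifierAction: "stop"
spec:
  restartPolicy: Never                            # required, otherwise the kubelet restarts the container
  containers:
  - name: app
    image: registry.example.com/app:latest        # placeholder image
    securityContext:
      seccompProfile:
        type: RuntimeDefault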
	I0731 17:33:44.991856   44770 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I0731 17:33:44.991861   44770 command_runner.go:130] > runtime_path = "/usr/bin/runc"
	I0731 17:33:44.991865   44770 command_runner.go:130] > runtime_type = "oci"
	I0731 17:33:44.991871   44770 command_runner.go:130] > runtime_root = "/run/runc"
	I0731 17:33:44.991875   44770 command_runner.go:130] > runtime_config_path = ""
	I0731 17:33:44.991879   44770 command_runner.go:130] > monitor_path = "/usr/libexec/crio/conmon"
	I0731 17:33:44.991883   44770 command_runner.go:130] > monitor_cgroup = "pod"
	I0731 17:33:44.991887   44770 command_runner.go:130] > monitor_exec_cgroup = ""
	I0731 17:33:44.991890   44770 command_runner.go:130] > monitor_env = [
	I0731 17:33:44.991897   44770 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I0731 17:33:44.991899   44770 command_runner.go:130] > ]
	I0731 17:33:44.991906   44770 command_runner.go:130] > privileged_without_host_devices = false
	I0731 17:33:44.991914   44770 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I0731 17:33:44.991919   44770 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I0731 17:33:44.991926   44770 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I0731 17:33:44.991932   44770 command_runner.go:130] > # Each workload, has a name, activation_annotation, annotation_prefix and set of resources it supports mutating.
	I0731 17:33:44.991942   44770 command_runner.go:130] > # The currently supported resources are "cpu" (to configure the cpu shares) and "cpuset" to configure the cpuset.
	I0731 17:33:44.991947   44770 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I0731 17:33:44.991956   44770 command_runner.go:130] > # For a container to opt-into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I0731 17:33:44.991965   44770 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I0731 17:33:44.991971   44770 command_runner.go:130] > # signifying for that resource type to override the default value.
	I0731 17:33:44.991979   44770 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I0731 17:33:44.991983   44770 command_runner.go:130] > # Example:
	I0731 17:33:44.991987   44770 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I0731 17:33:44.991997   44770 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I0731 17:33:44.992002   44770 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I0731 17:33:44.992006   44770 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I0731 17:33:44.992010   44770 command_runner.go:130] > # cpuset = 0
	I0731 17:33:44.992013   44770 command_runner.go:130] > # cpushares = "0-1"
	I0731 17:33:44.992016   44770 command_runner.go:130] > # Where:
	I0731 17:33:44.992020   44770 command_runner.go:130] > # The workload name is workload-type.
	I0731 17:33:44.992026   44770 command_runner.go:130] > # To specify, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I0731 17:33:44.992031   44770 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I0731 17:33:44.992036   44770 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I0731 17:33:44.992043   44770 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I0731 17:33:44.992048   44770 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
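	(Illustrative sketch, not from the log: a Pod opting into the example "workload-type" workload above. The annotation keys follow the sample config; the pod/container names and image are placeholders.)

apiVersion: v1
kind: Pod
metadata:
  name: workload-demo                              # hypothetical name
  annotations:
    io.crio/workload: ""                           # activation annotation (key only, value ignored)
    io.crio.workload-type/app: '{"cpushares": "512"}'   # per-container override for container "app"
spec:
  containers:
  - name: app
    image: registry.example.com/app:latest         # placeholder image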
	I0731 17:33:44.992055   44770 command_runner.go:130] > # hostnetwork_disable_selinux determines whether
	I0731 17:33:44.992065   44770 command_runner.go:130] > # SELinux should be disabled within a pod when it is running in the host network namespace
	I0731 17:33:44.992072   44770 command_runner.go:130] > # Default value is set to true
	I0731 17:33:44.992078   44770 command_runner.go:130] > # hostnetwork_disable_selinux = true
	I0731 17:33:44.992086   44770 command_runner.go:130] > # disable_hostport_mapping determines whether to enable/disable
	I0731 17:33:44.992094   44770 command_runner.go:130] > # the container hostport mapping in CRI-O.
	I0731 17:33:44.992100   44770 command_runner.go:130] > # Default value is set to 'false'
	I0731 17:33:44.992107   44770 command_runner.go:130] > # disable_hostport_mapping = false
	I0731 17:33:44.992116   44770 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I0731 17:33:44.992120   44770 command_runner.go:130] > #
	I0731 17:33:44.992128   44770 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I0731 17:33:44.992137   44770 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf. If
	I0731 17:33:44.992144   44770 command_runner.go:130] > # you want to modify just CRI-O, you can change the registries configuration in
	I0731 17:33:44.992149   44770 command_runner.go:130] > # this file. Otherwise, leave insecure_registries and registries commented out to
	I0731 17:33:44.992154   44770 command_runner.go:130] > # use the system's defaults from /etc/containers/registries.conf.
	I0731 17:33:44.992157   44770 command_runner.go:130] > [crio.image]
	I0731 17:33:44.992163   44770 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I0731 17:33:44.992167   44770 command_runner.go:130] > # default_transport = "docker://"
	I0731 17:33:44.992172   44770 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I0731 17:33:44.992178   44770 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I0731 17:33:44.992182   44770 command_runner.go:130] > # global_auth_file = ""
	I0731 17:33:44.992186   44770 command_runner.go:130] > # The image used to instantiate infra containers.
	I0731 17:33:44.992194   44770 command_runner.go:130] > # This option supports live configuration reload.
	I0731 17:33:44.992200   44770 command_runner.go:130] > # pause_image = "registry.k8s.io/pause:3.9"
	I0731 17:33:44.992216   44770 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I0731 17:33:44.992230   44770 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I0731 17:33:44.992238   44770 command_runner.go:130] > # This option supports live configuration reload.
	I0731 17:33:44.992248   44770 command_runner.go:130] > # pause_image_auth_file = ""
	I0731 17:33:44.992256   44770 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I0731 17:33:44.992268   44770 command_runner.go:130] > # When explicitly set to "", it will fallback to the entrypoint and command
	I0731 17:33:44.992280   44770 command_runner.go:130] > # specified in the pause image. When commented out, it will fallback to the
	I0731 17:33:44.992291   44770 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I0731 17:33:44.992301   44770 command_runner.go:130] > # pause_command = "/pause"
	I0731 17:33:44.992311   44770 command_runner.go:130] > # List of images to be excluded from the kubelet's garbage collection.
	I0731 17:33:44.992320   44770 command_runner.go:130] > # It allows specifying image names using either exact, glob, or keyword
	I0731 17:33:44.992326   44770 command_runner.go:130] > # patterns. Exact matches must match the entire name, glob matches can
	I0731 17:33:44.992332   44770 command_runner.go:130] > # have a wildcard * at the end, and keyword matches can have wildcards
	I0731 17:33:44.992340   44770 command_runner.go:130] > # on both ends. By default, this list includes the "pause" image if
	I0731 17:33:44.992346   44770 command_runner.go:130] > # configured by the user, which is used as a placeholder in Kubernetes pods.
	I0731 17:33:44.992352   44770 command_runner.go:130] > # pinned_images = [
	I0731 17:33:44.992358   44770 command_runner.go:130] > # ]
	I0731 17:33:44.992370   44770 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I0731 17:33:44.992382   44770 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I0731 17:33:44.992395   44770 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I0731 17:33:44.992407   44770 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I0731 17:33:44.992427   44770 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I0731 17:33:44.992433   44770 command_runner.go:130] > # signature_policy = ""
	I0731 17:33:44.992439   44770 command_runner.go:130] > # Root path for pod namespace-separated signature policies.
	I0731 17:33:44.992451   44770 command_runner.go:130] > # The final policy to be used on image pull will be <SIGNATURE_POLICY_DIR>/<NAMESPACE>.json.
	I0731 17:33:44.992465   44770 command_runner.go:130] > # If no pod namespace is being provided on image pull (via the sandbox config),
	I0731 17:33:44.992477   44770 command_runner.go:130] > # or the concatenated path is non existent, then the signature_policy or system
	I0731 17:33:44.992490   44770 command_runner.go:130] > # wide policy will be used as fallback. Must be an absolute path.
	I0731 17:33:44.992500   44770 command_runner.go:130] > # signature_policy_dir = "/etc/crio/policies"
	I0731 17:33:44.992512   44770 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I0731 17:33:44.992522   44770 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I0731 17:33:44.992530   44770 command_runner.go:130] > # changing them here.
	I0731 17:33:44.992540   44770 command_runner.go:130] > # insecure_registries = [
	I0731 17:33:44.992553   44770 command_runner.go:130] > # ]
	I0731 17:33:44.992566   44770 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I0731 17:33:44.992578   44770 command_runner.go:130] > # ignore; the latter will ignore volumes entirely.
	I0731 17:33:44.992596   44770 command_runner.go:130] > # image_volumes = "mkdir"
	I0731 17:33:44.992605   44770 command_runner.go:130] > # Temporary directory to use for storing big files
	I0731 17:33:44.992612   44770 command_runner.go:130] > # big_files_temporary_dir = ""
	I0731 17:33:44.992621   44770 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I0731 17:33:44.992630   44770 command_runner.go:130] > # CNI plugins.
	I0731 17:33:44.992640   44770 command_runner.go:130] > [crio.network]
	I0731 17:33:44.992649   44770 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I0731 17:33:44.992660   44770 command_runner.go:130] > # CRI-O will pick-up the first one found in network_dir.
	I0731 17:33:44.992669   44770 command_runner.go:130] > # cni_default_network = ""
	I0731 17:33:44.992681   44770 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I0731 17:33:44.992691   44770 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I0731 17:33:44.992699   44770 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I0731 17:33:44.992706   44770 command_runner.go:130] > # plugin_dirs = [
	I0731 17:33:44.992710   44770 command_runner.go:130] > # 	"/opt/cni/bin/",
	I0731 17:33:44.992715   44770 command_runner.go:130] > # ]
	I0731 17:33:44.992722   44770 command_runner.go:130] > # A necessary configuration for Prometheus based metrics retrieval
	I0731 17:33:44.992731   44770 command_runner.go:130] > [crio.metrics]
	I0731 17:33:44.992740   44770 command_runner.go:130] > # Globally enable or disable metrics support.
	I0731 17:33:44.992749   44770 command_runner.go:130] > enable_metrics = true
	I0731 17:33:44.992760   44770 command_runner.go:130] > # Specify enabled metrics collectors.
	I0731 17:33:44.992770   44770 command_runner.go:130] > # Per default all metrics are enabled.
	I0731 17:33:44.992782   44770 command_runner.go:130] > # It is possible, to prefix the metrics with "container_runtime_" and "crio_".
	I0731 17:33:44.992795   44770 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I0731 17:33:44.992803   44770 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I0731 17:33:44.992809   44770 command_runner.go:130] > # metrics_collectors = [
	I0731 17:33:44.992813   44770 command_runner.go:130] > # 	"operations",
	I0731 17:33:44.992820   44770 command_runner.go:130] > # 	"operations_latency_microseconds_total",
	I0731 17:33:44.992827   44770 command_runner.go:130] > # 	"operations_latency_microseconds",
	I0731 17:33:44.992831   44770 command_runner.go:130] > # 	"operations_errors",
	I0731 17:33:44.992837   44770 command_runner.go:130] > # 	"image_pulls_by_digest",
	I0731 17:33:44.992841   44770 command_runner.go:130] > # 	"image_pulls_by_name",
	I0731 17:33:44.992847   44770 command_runner.go:130] > # 	"image_pulls_by_name_skipped",
	I0731 17:33:44.992851   44770 command_runner.go:130] > # 	"image_pulls_failures",
	I0731 17:33:44.992857   44770 command_runner.go:130] > # 	"image_pulls_successes",
	I0731 17:33:44.992861   44770 command_runner.go:130] > # 	"image_pulls_layer_size",
	I0731 17:33:44.992868   44770 command_runner.go:130] > # 	"image_layer_reuse",
	I0731 17:33:44.992881   44770 command_runner.go:130] > # 	"containers_events_dropped_total",
	I0731 17:33:44.992892   44770 command_runner.go:130] > # 	"containers_oom_total",
	I0731 17:33:44.992898   44770 command_runner.go:130] > # 	"containers_oom",
	I0731 17:33:44.992907   44770 command_runner.go:130] > # 	"processes_defunct",
	I0731 17:33:44.992916   44770 command_runner.go:130] > # 	"operations_total",
	I0731 17:33:44.992926   44770 command_runner.go:130] > # 	"operations_latency_seconds",
	I0731 17:33:44.992936   44770 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I0731 17:33:44.992944   44770 command_runner.go:130] > # 	"operations_errors_total",
	I0731 17:33:44.992950   44770 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I0731 17:33:44.992954   44770 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I0731 17:33:44.992960   44770 command_runner.go:130] > # 	"image_pulls_failure_total",
	I0731 17:33:44.992964   44770 command_runner.go:130] > # 	"image_pulls_success_total",
	I0731 17:33:44.992970   44770 command_runner.go:130] > # 	"image_layer_reuse_total",
	I0731 17:33:44.992974   44770 command_runner.go:130] > # 	"containers_oom_count_total",
	I0731 17:33:44.992982   44770 command_runner.go:130] > # 	"containers_seccomp_notifier_count_total",
	I0731 17:33:44.992987   44770 command_runner.go:130] > # 	"resources_stalled_at_stage",
	I0731 17:33:44.992991   44770 command_runner.go:130] > # ]
	I0731 17:33:44.992997   44770 command_runner.go:130] > # The port on which the metrics server will listen.
	I0731 17:33:44.993001   44770 command_runner.go:130] > # metrics_port = 9090
	I0731 17:33:44.993008   44770 command_runner.go:130] > # Local socket path to bind the metrics server to
	I0731 17:33:44.993012   44770 command_runner.go:130] > # metrics_socket = ""
	I0731 17:33:44.993019   44770 command_runner.go:130] > # The certificate for the secure metrics server.
	I0731 17:33:44.993025   44770 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I0731 17:33:44.993032   44770 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I0731 17:33:44.993039   44770 command_runner.go:130] > # certificate on any modification event.
	I0731 17:33:44.993043   44770 command_runner.go:130] > # metrics_cert = ""
	I0731 17:33:44.993048   44770 command_runner.go:130] > # The certificate key for the secure metrics server.
	I0731 17:33:44.993054   44770 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I0731 17:33:44.993058   44770 command_runner.go:130] > # metrics_key = ""
	I0731 17:33:44.993066   44770 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I0731 17:33:44.993072   44770 command_runner.go:130] > [crio.tracing]
	I0731 17:33:44.993078   44770 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I0731 17:33:44.993087   44770 command_runner.go:130] > # enable_tracing = false
	I0731 17:33:44.993097   44770 command_runner.go:130] > # Address on which the gRPC trace collector listens on.
	I0731 17:33:44.993106   44770 command_runner.go:130] > # tracing_endpoint = "0.0.0.0:4317"
	I0731 17:33:44.993118   44770 command_runner.go:130] > # Number of samples to collect per million spans. Set to 1000000 to always sample.
	I0731 17:33:44.993135   44770 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
	I0731 17:33:44.993144   44770 command_runner.go:130] > # CRI-O NRI configuration.
	I0731 17:33:44.993152   44770 command_runner.go:130] > [crio.nri]
	I0731 17:33:44.993162   44770 command_runner.go:130] > # Globally enable or disable NRI.
	I0731 17:33:44.993169   44770 command_runner.go:130] > # enable_nri = false
	I0731 17:33:44.993179   44770 command_runner.go:130] > # NRI socket to listen on.
	I0731 17:33:44.993189   44770 command_runner.go:130] > # nri_listen = "/var/run/nri/nri.sock"
	I0731 17:33:44.993198   44770 command_runner.go:130] > # NRI plugin directory to use.
	I0731 17:33:44.993208   44770 command_runner.go:130] > # nri_plugin_dir = "/opt/nri/plugins"
	I0731 17:33:44.993217   44770 command_runner.go:130] > # NRI plugin configuration directory to use.
	I0731 17:33:44.993224   44770 command_runner.go:130] > # nri_plugin_config_dir = "/etc/nri/conf.d"
	I0731 17:33:44.993229   44770 command_runner.go:130] > # Disable connections from externally launched NRI plugins.
	I0731 17:33:44.993236   44770 command_runner.go:130] > # nri_disable_connections = false
	I0731 17:33:44.993241   44770 command_runner.go:130] > # Timeout for a plugin to register itself with NRI.
	I0731 17:33:44.993247   44770 command_runner.go:130] > # nri_plugin_registration_timeout = "5s"
	I0731 17:33:44.993252   44770 command_runner.go:130] > # Timeout for a plugin to handle an NRI request.
	I0731 17:33:44.993258   44770 command_runner.go:130] > # nri_plugin_request_timeout = "2s"
	I0731 17:33:44.993264   44770 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I0731 17:33:44.993269   44770 command_runner.go:130] > [crio.stats]
	I0731 17:33:44.993274   44770 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I0731 17:33:44.993281   44770 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I0731 17:33:44.993285   44770 command_runner.go:130] > # stats_collection_period = 0
	I0731 17:33:44.993319   44770 command_runner.go:130] ! time="2024-07-31 17:33:44.951683195Z" level=info msg="Starting CRI-O, version: 1.29.1, git: unknown(clean)"
	I0731 17:33:44.993333   44770 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
	I0731 17:33:44.993434   44770 cni.go:84] Creating CNI manager for ""
	I0731 17:33:44.993442   44770 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0731 17:33:44.993450   44770 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0731 17:33:44.993469   44770 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.100 APIServerPort:8443 KubernetesVersion:v1.30.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-498089 NodeName:multinode-498089 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.100"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.100 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:
/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0731 17:33:44.993609   44770 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.100
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "multinode-498089"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.100
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.100"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0731 17:33:44.993663   44770 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0731 17:33:45.002855   44770 command_runner.go:130] > kubeadm
	I0731 17:33:45.002873   44770 command_runner.go:130] > kubectl
	I0731 17:33:45.002877   44770 command_runner.go:130] > kubelet
	I0731 17:33:45.002893   44770 binaries.go:44] Found k8s binaries, skipping transfer
	I0731 17:33:45.002942   44770 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0731 17:33:45.011742   44770 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (316 bytes)
	I0731 17:33:45.027439   44770 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0731 17:33:45.042519   44770 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2160 bytes)
	I0731 17:33:45.057536   44770 ssh_runner.go:195] Run: grep 192.168.39.100	control-plane.minikube.internal$ /etc/hosts
	I0731 17:33:45.060949   44770 command_runner.go:130] > 192.168.39.100	control-plane.minikube.internal
	I0731 17:33:45.061020   44770 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0731 17:33:45.197866   44770 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0731 17:33:45.212199   44770 certs.go:68] Setting up /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/multinode-498089 for IP: 192.168.39.100
	I0731 17:33:45.212224   44770 certs.go:194] generating shared ca certs ...
	I0731 17:33:45.212238   44770 certs.go:226] acquiring lock for ca certs: {Name:mk49eed439085f9c30746706460ce213570d1997 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 17:33:45.212393   44770 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19349-8084/.minikube/ca.key
	I0731 17:33:45.212434   44770 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19349-8084/.minikube/proxy-client-ca.key
	I0731 17:33:45.212444   44770 certs.go:256] generating profile certs ...
	I0731 17:33:45.212517   44770 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/multinode-498089/client.key
	I0731 17:33:45.212579   44770 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/multinode-498089/apiserver.key.4dfe397f
	I0731 17:33:45.212614   44770 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/multinode-498089/proxy-client.key
	I0731 17:33:45.212624   44770 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19349-8084/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0731 17:33:45.212635   44770 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19349-8084/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0731 17:33:45.212647   44770 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19349-8084/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0731 17:33:45.212660   44770 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19349-8084/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0731 17:33:45.212672   44770 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/multinode-498089/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0731 17:33:45.212686   44770 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/multinode-498089/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0731 17:33:45.212699   44770 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/multinode-498089/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0731 17:33:45.212710   44770 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/multinode-498089/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0731 17:33:45.212767   44770 certs.go:484] found cert: /home/jenkins/minikube-integration/19349-8084/.minikube/certs/15259.pem (1338 bytes)
	W0731 17:33:45.212794   44770 certs.go:480] ignoring /home/jenkins/minikube-integration/19349-8084/.minikube/certs/15259_empty.pem, impossibly tiny 0 bytes
	I0731 17:33:45.212803   44770 certs.go:484] found cert: /home/jenkins/minikube-integration/19349-8084/.minikube/certs/ca-key.pem (1675 bytes)
	I0731 17:33:45.212825   44770 certs.go:484] found cert: /home/jenkins/minikube-integration/19349-8084/.minikube/certs/ca.pem (1082 bytes)
	I0731 17:33:45.212847   44770 certs.go:484] found cert: /home/jenkins/minikube-integration/19349-8084/.minikube/certs/cert.pem (1123 bytes)
	I0731 17:33:45.212869   44770 certs.go:484] found cert: /home/jenkins/minikube-integration/19349-8084/.minikube/certs/key.pem (1675 bytes)
	I0731 17:33:45.212906   44770 certs.go:484] found cert: /home/jenkins/minikube-integration/19349-8084/.minikube/files/etc/ssl/certs/152592.pem (1708 bytes)
	I0731 17:33:45.212932   44770 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19349-8084/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0731 17:33:45.212945   44770 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19349-8084/.minikube/certs/15259.pem -> /usr/share/ca-certificates/15259.pem
	I0731 17:33:45.212957   44770 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19349-8084/.minikube/files/etc/ssl/certs/152592.pem -> /usr/share/ca-certificates/152592.pem
	I0731 17:33:45.213549   44770 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19349-8084/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0731 17:33:45.236566   44770 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19349-8084/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0731 17:33:45.259826   44770 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19349-8084/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0731 17:33:45.281036   44770 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19349-8084/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0731 17:33:45.302707   44770 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/multinode-498089/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0731 17:33:45.323906   44770 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/multinode-498089/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0731 17:33:45.344988   44770 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/multinode-498089/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0731 17:33:45.366610   44770 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/multinode-498089/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0731 17:33:45.387934   44770 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19349-8084/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0731 17:33:45.410131   44770 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19349-8084/.minikube/certs/15259.pem --> /usr/share/ca-certificates/15259.pem (1338 bytes)
	I0731 17:33:45.431621   44770 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19349-8084/.minikube/files/etc/ssl/certs/152592.pem --> /usr/share/ca-certificates/152592.pem (1708 bytes)
	I0731 17:33:45.453058   44770 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0731 17:33:45.468501   44770 ssh_runner.go:195] Run: openssl version
	I0731 17:33:45.474421   44770 command_runner.go:130] > OpenSSL 1.1.1w  11 Sep 2023
	I0731 17:33:45.474524   44770 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/15259.pem && ln -fs /usr/share/ca-certificates/15259.pem /etc/ssl/certs/15259.pem"
	I0731 17:33:45.484952   44770 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/15259.pem
	I0731 17:33:45.488872   44770 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Jul 31 16:54 /usr/share/ca-certificates/15259.pem
	I0731 17:33:45.488898   44770 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 31 16:54 /usr/share/ca-certificates/15259.pem
	I0731 17:33:45.488929   44770 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/15259.pem
	I0731 17:33:45.494098   44770 command_runner.go:130] > 51391683
	I0731 17:33:45.494326   44770 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/15259.pem /etc/ssl/certs/51391683.0"
	I0731 17:33:45.502963   44770 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/152592.pem && ln -fs /usr/share/ca-certificates/152592.pem /etc/ssl/certs/152592.pem"
	I0731 17:33:45.512663   44770 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/152592.pem
	I0731 17:33:45.516646   44770 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Jul 31 16:54 /usr/share/ca-certificates/152592.pem
	I0731 17:33:45.516668   44770 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 31 16:54 /usr/share/ca-certificates/152592.pem
	I0731 17:33:45.516697   44770 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/152592.pem
	I0731 17:33:45.521667   44770 command_runner.go:130] > 3ec20f2e
	I0731 17:33:45.521751   44770 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/152592.pem /etc/ssl/certs/3ec20f2e.0"
	I0731 17:33:45.531731   44770 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0731 17:33:45.565828   44770 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0731 17:33:45.574147   44770 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Jul 31 16:42 /usr/share/ca-certificates/minikubeCA.pem
	I0731 17:33:45.574185   44770 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 31 16:42 /usr/share/ca-certificates/minikubeCA.pem
	I0731 17:33:45.574244   44770 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0731 17:33:45.587511   44770 command_runner.go:130] > b5213941
	I0731 17:33:45.587595   44770 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0731 17:33:45.622653   44770 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0731 17:33:45.630913   44770 command_runner.go:130] >   File: /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0731 17:33:45.630943   44770 command_runner.go:130] >   Size: 1176      	Blocks: 8          IO Block: 4096   regular file
	I0731 17:33:45.630952   44770 command_runner.go:130] > Device: 253,1	Inode: 2103851     Links: 1
	I0731 17:33:45.630963   44770 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I0731 17:33:45.630972   44770 command_runner.go:130] > Access: 2024-07-31 17:26:55.141030049 +0000
	I0731 17:33:45.630979   44770 command_runner.go:130] > Modify: 2024-07-31 17:26:55.141030049 +0000
	I0731 17:33:45.630987   44770 command_runner.go:130] > Change: 2024-07-31 17:26:55.141030049 +0000
	I0731 17:33:45.630995   44770 command_runner.go:130] >  Birth: 2024-07-31 17:26:55.141030049 +0000
	I0731 17:33:45.631090   44770 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0731 17:33:45.637587   44770 command_runner.go:130] > Certificate will not expire
	I0731 17:33:45.637654   44770 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0731 17:33:45.645645   44770 command_runner.go:130] > Certificate will not expire
	I0731 17:33:45.645937   44770 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0731 17:33:45.658621   44770 command_runner.go:130] > Certificate will not expire
	I0731 17:33:45.658878   44770 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0731 17:33:45.664737   44770 command_runner.go:130] > Certificate will not expire
	I0731 17:33:45.665016   44770 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0731 17:33:45.676183   44770 command_runner.go:130] > Certificate will not expire
	I0731 17:33:45.676451   44770 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0731 17:33:45.683303   44770 command_runner.go:130] > Certificate will not expire
	I0731 17:33:45.683576   44770 kubeadm.go:392] StartCluster: {Name:multinode-498089 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.
3 ClusterName:multinode-498089 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.100 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.228 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.204 Port:0 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false
inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: Disable
Optimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0731 17:33:45.683676   44770 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0731 17:33:45.683736   44770 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0731 17:33:45.738411   44770 command_runner.go:130] > 6f5a861ab18dbae1101a6f14bc50fff1a1ae8370bcdabb6eaa0a7f743c803677
	I0731 17:33:45.738434   44770 command_runner.go:130] > f713a250ad9e752c0b2d45c5140624e047b77ac24ace1449a8979bc7ce711e59
	I0731 17:33:45.738441   44770 command_runner.go:130] > 11f278a9b370395f6b3a7cd171498efe28e15ce5b995f45ba5e647d8dff3db53
	I0731 17:33:45.738447   44770 command_runner.go:130] > 87a364936bdef18ba9cf82cf399cac1f0c9d855f2bbd704616f08ba3c0154e86
	I0731 17:33:45.738453   44770 command_runner.go:130] > fc640bbf40886bebe629913240d2f5ee56a9b60ed6a6ce6e7da42e278f0303e3
	I0731 17:33:45.738464   44770 command_runner.go:130] > 741c5d19093c08d4889a8536dba6fa217e98a0ff4f96ef237f57d12dd34a844b
	I0731 17:33:45.738473   44770 command_runner.go:130] > 7e257027c671ae2fa9681210deb701fd3b243b02260ef45b7eb80c12efe72477
	I0731 17:33:45.738489   44770 command_runner.go:130] > 7706d4d5c5b20f986a98dbe39b9b5f757972cfa89dd0f91e0d780ab4229cd836
	I0731 17:33:45.738516   44770 cri.go:89] found id: "6f5a861ab18dbae1101a6f14bc50fff1a1ae8370bcdabb6eaa0a7f743c803677"
	I0731 17:33:45.738527   44770 cri.go:89] found id: "f713a250ad9e752c0b2d45c5140624e047b77ac24ace1449a8979bc7ce711e59"
	I0731 17:33:45.738531   44770 cri.go:89] found id: "11f278a9b370395f6b3a7cd171498efe28e15ce5b995f45ba5e647d8dff3db53"
	I0731 17:33:45.738536   44770 cri.go:89] found id: "87a364936bdef18ba9cf82cf399cac1f0c9d855f2bbd704616f08ba3c0154e86"
	I0731 17:33:45.738540   44770 cri.go:89] found id: "fc640bbf40886bebe629913240d2f5ee56a9b60ed6a6ce6e7da42e278f0303e3"
	I0731 17:33:45.738554   44770 cri.go:89] found id: "741c5d19093c08d4889a8536dba6fa217e98a0ff4f96ef237f57d12dd34a844b"
	I0731 17:33:45.738559   44770 cri.go:89] found id: "7e257027c671ae2fa9681210deb701fd3b243b02260ef45b7eb80c12efe72477"
	I0731 17:33:45.738562   44770 cri.go:89] found id: "7706d4d5c5b20f986a98dbe39b9b5f757972cfa89dd0f91e0d780ab4229cd836"
	I0731 17:33:45.738565   44770 cri.go:89] found id: ""
	I0731 17:33:45.738602   44770 ssh_runner.go:195] Run: sudo runc list -f json
	I0731 17:33:45.767561   44770 command_runner.go:130] ! load container 6edfcf47092f7b6539d7a28ad02cdce800748c9d7c70a7bf92a9dee67cb4d6e3: container does not exist
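
The block above shows minikube, at the start of StartCluster, taking inventory of the kube-system containers already on the node: it runs crictl over SSH with a pod-namespace label filter, records each returned ID as "found id", and then cross-checks the list against "sudo runc list -f json" (one exited container, 6edfcf47..., is no longer known to runc). As a standalone sketch only, not minikube's own implementation, the same inventory can be reproduced on the node roughly as follows, assuming crictl and sudo are available there (for example via minikube ssh):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// listKubeSystemContainers mirrors the command in the log above: ask the CRI
// runtime (via crictl) for every container, running or exited, whose pod is
// in the kube-system namespace, and return the bare container IDs.
func listKubeSystemContainers() ([]string, error) {
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet",
		"--label", "io.kubernetes.pod.namespace=kube-system").Output()
	if err != nil {
		return nil, fmt.Errorf("crictl ps failed: %w", err)
	}
	var ids []string
	for _, line := range strings.Split(strings.TrimSpace(string(out)), "\n") {
		if line != "" {
			ids = append(ids, line)
		}
	}
	return ids, nil
}

func main() {
	ids, err := listKubeSystemContainers()
	if err != nil {
		fmt.Println("error:", err)
		return
	}
	for _, id := range ids {
		fmt.Println("found id:", id)
	}
}

Each non-empty line of crictl's --quiet output is a full container ID, which is exactly what the cri.go "found id:" lines above echo.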
	
	
	==> CRI-O <==
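
The journal excerpt below is CRI-O's debug log for the same window: repeated runtime.v1.RuntimeService and runtime.v1.ImageService calls (Version, ImageFsInfo, ListContainers with an empty filter, ListPodSandbox) arrive at the runtime, and each request and full response is logged from otel-collector/interceptors.go. Purely to illustrate the API being exercised, and assuming the k8s.io/cri-api client package and CRI-O's default socket at /var/run/crio/crio.sock, an equivalent unfiltered ListContainers call can be issued directly from Go:

package main

import (
	"context"
	"fmt"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	// Dial CRI-O's unix socket; the path is an assumption (CRI-O's default).
	conn, err := grpc.Dial("unix:///var/run/crio/crio.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		panic(err)
	}
	defer conn.Close()

	client := runtimeapi.NewRuntimeServiceClient(conn)
	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
	defer cancel()

	// Sending no filter returns the full container list, matching the
	// "No filters were applied, returning full container list" entries below.
	resp, err := client.ListContainers(ctx, &runtimeapi.ListContainersRequest{})
	if err != nil {
		panic(err)
	}
	for _, c := range resp.Containers {
		fmt.Printf("%s  %-25s  %s\n", c.Id, c.Metadata.Name, c.State)
	}
}

crictl (and the kubelet) drive CRI-O through these same RPCs; crictl ps -a is essentially a wrapper around this ListContainers call.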
	Jul 31 17:35:36 multinode-498089 crio[2873]: time="2024-07-31 17:35:36.928470640Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=133641a6-4626-4208-8568-5b106998ddf9 name=/runtime.v1.RuntimeService/Version
	Jul 31 17:35:36 multinode-498089 crio[2873]: time="2024-07-31 17:35:36.929614286Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=172658a5-a870-4826-963c-18fa8eb7fc34 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 31 17:35:36 multinode-498089 crio[2873]: time="2024-07-31 17:35:36.930095150Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722447336930069167,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143052,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=172658a5-a870-4826-963c-18fa8eb7fc34 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 31 17:35:36 multinode-498089 crio[2873]: time="2024-07-31 17:35:36.930579548Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=ae0b4f07-f073-447d-b5da-b4093f252cb0 name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 17:35:36 multinode-498089 crio[2873]: time="2024-07-31 17:35:36.930637271Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=ae0b4f07-f073-447d-b5da-b4093f252cb0 name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 17:35:36 multinode-498089 crio[2873]: time="2024-07-31 17:35:36.930988597Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:6c307c10d491e1b37eefec0b1f0f26f544260f80f7b5121d2175ba0f280a1993,PodSandboxId:1eda2cad03dd79c622a01ae97c4428d2c284ef771ff3a5bcb6e37a51060dbb03,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1722447265355133149,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-tm4jn,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 1616740e-4445-47e5-9891-6dc753c5f655,},Annotations:map[string]string{io.kubernetes.container.hash: 81b7c80c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:04f3c08ad545a96ddd065cd0e6e06a3ea8e50d3fe11d31e16e77052c6179dc24,PodSandboxId:c48f791ae73c77a09e60b2525cf0448c9575d1888a59b781db0854018f9b239b,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722447238113423306,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-8qccd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aa7bae7b-0ce2-407a-b46f-94178fb43071,},Annotations:map[string]string{io.kubernetes.container.hash: c6c137a2,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\
",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:220f5364b7adf5b5afb91f5fb001a204f8d65e35d532d4414f05c18f55443647,PodSandboxId:9fb2f68daf6b4c3f642fe0bf15c8564304161c163b9a7841e7199e290f8266d0,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722447232191060397,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: da4e724
7-c042-4da0-9015-e4242d18d043,},Annotations:map[string]string{io.kubernetes.container.hash: 3feca24c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:382ba3eb1c2830200fa38a7ce4a97e818badbd910e8576f35bdf33aca05b304d,PodSandboxId:a02944a7e020ead4730ba570dbf278d1dca969c8908f82d409692291b67b11a2,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_RUNNING,CreatedAt:1722447231968256795,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-pklkm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 23b4d2f3-c925-4a0e-8c9a-ecda421332bf,},An
notations:map[string]string{io.kubernetes.container.hash: 3edbfc23,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:468fe46f35c8cc80128ee1ec6a6050cf94aef0cb919789ed8d25c9880b9dddfc,PodSandboxId:cf7fedce8a8e2283c2bf7e044adbfee19fd6000751ae43fca8a4acec4b252b04,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722447231885288652,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-v6xrd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cb9f0873-0309-40a8-a2f1-c1c6f0713034,},Annotations:map[string]string{io.ku
bernetes.container.hash: bd0970e4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c69ac13c491b850716408eaae15dee0ae649f67e725ee682990cc4a93bea42c7,PodSandboxId:2bf9cbf0d84160d2a35a75b8ead180d4b5d00a2b6df4167e059c390adc4feb5b,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722447231787642917,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-498089,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2950d4410b3899ccbdb0536947b29f40,},Annotations:map[string]string{io.kubernetes.conta
iner.hash: 836ad03f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ba2f3c3d981875bdc99216720fb78f7187ed031bddd10f4b276b85fbc1df0506,PodSandboxId:3d398e58b156ce29cb02ed249f6dc03fa818bd96b64d85485f9caed70558b00e,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722447231774717639,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-498089,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e893babbbefa5fea11a4d995a36606db,},Annotations:map[string]string{io.kub
ernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ac3076f8ec41fd7a430cd72c45fdcc849cb9cf7d80dfe935dd4dd7508fb27c0b,PodSandboxId:e682678fcf5c0d70faf9f60078593d5aac3cfa6f19ed070b054499cd1777ac4c,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722447231731178026,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-498089,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: be60b8808f43a8b7b2c4a1a6190bedac,},Annotations:map[string]string{io.kubernetes.container.hash: ed86d2d2,io.kubernet
es.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7df64e7974e8c57b3618f920f29d46ee4cfe200717f7cba6834e6697f01fc257,PodSandboxId:55e5e7a975946471ddc15d954d8aae8ead7fdf08b655cb2a094568d0b96448e5,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722447231698881591,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-498089,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: da27596f1275a6292fe83598b4e87488,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.r
estartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6edfcf47092f7b6539d7a28ad02cdce800748c9d7c70a7bf92a9dee67cb4d6e3,PodSandboxId:c48f791ae73c77a09e60b2525cf0448c9575d1888a59b781db0854018f9b239b,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722447225720418667,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-8qccd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aa7bae7b-0ce2-407a-b46f-94178fb43071,},Annotations:map[string]string{io.kubernetes.container.hash: c6c137a2,io.kubernetes.container.ports: [{\"name\":\"dns\",\"conta
inerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b3483951aa5a43223a694e974bdd881840004a97e4f10e615f4b7e77226b510b,PodSandboxId:1b608d19493aec6aee4416c17b5fa94aca430d3038a95515eeb351fb189a7485,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1722446905505296513,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-tm4jn,i
o.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 1616740e-4445-47e5-9891-6dc753c5f655,},Annotations:map[string]string{io.kubernetes.container.hash: 81b7c80c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f713a250ad9e752c0b2d45c5140624e047b77ac24ace1449a8979bc7ce711e59,PodSandboxId:5f299cbe8df6c27e091c1d433f7b097f9ffa033439712432efb2428b97ec1448,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1722446853710746781,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.
namespace: kube-system,io.kubernetes.pod.uid: da4e7247-c042-4da0-9015-e4242d18d043,},Annotations:map[string]string{io.kubernetes.container.hash: 3feca24c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:11f278a9b370395f6b3a7cd171498efe28e15ce5b995f45ba5e647d8dff3db53,PodSandboxId:2a0003bbb9ba5f20c9d30e31392966f73c497407209243ed9580004366cf542e,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_EXITED,CreatedAt:1722446841873004606,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-pklkm,io.kubernetes.pod.n
amespace: kube-system,io.kubernetes.pod.uid: 23b4d2f3-c925-4a0e-8c9a-ecda421332bf,},Annotations:map[string]string{io.kubernetes.container.hash: 3edbfc23,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:87a364936bdef18ba9cf82cf399cac1f0c9d855f2bbd704616f08ba3c0154e86,PodSandboxId:91d6c19702c62d87dc679aec3596bc68e23bf1a4709fa745a785db0238ce000a,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_EXITED,CreatedAt:1722446838113033393,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-v6xrd,io.kubernetes.pod.namespace: kube-system,io.kubernete
s.pod.uid: cb9f0873-0309-40a8-a2f1-c1c6f0713034,},Annotations:map[string]string{io.kubernetes.container.hash: bd0970e4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fc640bbf40886bebe629913240d2f5ee56a9b60ed6a6ce6e7da42e278f0303e3,PodSandboxId:49b123da2a4acafe5dd6d86339589e7ed5e3cf1c97e5e26ebe6872488e23e4f1,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_EXITED,CreatedAt:1722446819083942198,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-498089,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
da27596f1275a6292fe83598b4e87488,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:741c5d19093c08d4889a8536dba6fa217e98a0ff4f96ef237f57d12dd34a844b,PodSandboxId:9cd9353b23377bcce311f56786acd497e80b965f81b4be89c96f16a3864b308b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_EXITED,CreatedAt:1722446819078861479,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-498089,io.kubernetes.pod.namespace: kube-system,io.kubernet
es.pod.uid: e893babbbefa5fea11a4d995a36606db,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7e257027c671ae2fa9681210deb701fd3b243b02260ef45b7eb80c12efe72477,PodSandboxId:6d8470cc9c9f5bb56d63f5c516d5fdf90527f9a92dd424db9d3c91087b7531e1,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1722446819035923110,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-498089,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: be60b8808f43a8b7b2c4a1a6190bedac,
},Annotations:map[string]string{io.kubernetes.container.hash: ed86d2d2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7706d4d5c5b20f986a98dbe39b9b5f757972cfa89dd0f91e0d780ab4229cd836,PodSandboxId:0d0cc81cdaa8e63316d97c94da40dea16ff0b5d9045b12763cdd34428d116101,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_EXITED,CreatedAt:1722446819025862909,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-498089,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2950d4410b3899ccbdb0536947b29f40,},Annotations:m
ap[string]string{io.kubernetes.container.hash: 836ad03f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=ae0b4f07-f073-447d-b5da-b4093f252cb0 name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 17:35:36 multinode-498089 crio[2873]: time="2024-07-31 17:35:36.953057786Z" level=debug msg="Request: &ListPodSandboxRequest{Filter:nil,}" file="otel-collector/interceptors.go:62" id=af6f46c8-ac7c-43af-8a95-426bb4c894b0 name=/runtime.v1.RuntimeService/ListPodSandbox
	Jul 31 17:35:36 multinode-498089 crio[2873]: time="2024-07-31 17:35:36.953760413Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:1eda2cad03dd79c622a01ae97c4428d2c284ef771ff3a5bcb6e37a51060dbb03,Metadata:&PodSandboxMetadata{Name:busybox-fc5497c4f-tm4jn,Uid:1616740e-4445-47e5-9891-6dc753c5f655,Namespace:default,Attempt:1,},State:SANDBOX_READY,CreatedAt:1722447265235946558,Labels:map[string]string{app: busybox,io.kubernetes.container.name: POD,io.kubernetes.pod.name: busybox-fc5497c4f-tm4jn,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 1616740e-4445-47e5-9891-6dc753c5f655,pod-template-hash: fc5497c4f,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-07-31T17:33:57.613668791Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:3d398e58b156ce29cb02ed249f6dc03fa818bd96b64d85485f9caed70558b00e,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-multinode-498089,Uid:e893babbbefa5fea11a4d995a36606db,Namespace:kube-system
,Attempt:1,},State:SANDBOX_READY,CreatedAt:1722447231475693471,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-multinode-498089,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e893babbbefa5fea11a4d995a36606db,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: e893babbbefa5fea11a4d995a36606db,kubernetes.io/config.seen: 2024-07-31T17:27:04.063689255Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:2bf9cbf0d84160d2a35a75b8ead180d4b5d00a2b6df4167e059c390adc4feb5b,Metadata:&PodSandboxMetadata{Name:kube-apiserver-multinode-498089,Uid:2950d4410b3899ccbdb0536947b29f40,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1722447231470161592,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-multinode-498089,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2950d4410
b3899ccbdb0536947b29f40,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.39.100:8443,kubernetes.io/config.hash: 2950d4410b3899ccbdb0536947b29f40,kubernetes.io/config.seen: 2024-07-31T17:27:04.063688103Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:e682678fcf5c0d70faf9f60078593d5aac3cfa6f19ed070b054499cd1777ac4c,Metadata:&PodSandboxMetadata{Name:etcd-multinode-498089,Uid:be60b8808f43a8b7b2c4a1a6190bedac,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1722447231470030814,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-multinode-498089,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: be60b8808f43a8b7b2c4a1a6190bedac,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.39.100:2379,kubernetes.io/config.hash: be60b8808f43a8b7b2c4a1a6190bedac,kubernetes.io/config.seen:
2024-07-31T17:27:04.063683720Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:9fb2f68daf6b4c3f642fe0bf15c8564304161c163b9a7841e7199e290f8266d0,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:da4e7247-c042-4da0-9015-e4242d18d043,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1722447231466197488,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: da4e7247-c042-4da0-9015-e4242d18d043,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\
"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io/config.seen: 2024-07-31T17:27:33.287558839Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:cf7fedce8a8e2283c2bf7e044adbfee19fd6000751ae43fca8a4acec4b252b04,Metadata:&PodSandboxMetadata{Name:kube-proxy-v6xrd,Uid:cb9f0873-0309-40a8-a2f1-c1c6f0713034,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1722447231464642812,Labels:map[string]string{controller-revision-hash: 5bbc78d4f8,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-v6xrd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cb9f0873-0309-40a8-a2f1-c1c6f0713034,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[stri
ng]string{kubernetes.io/config.seen: 2024-07-31T17:27:17.528864744Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:a02944a7e020ead4730ba570dbf278d1dca969c8908f82d409692291b67b11a2,Metadata:&PodSandboxMetadata{Name:kindnet-pklkm,Uid:23b4d2f3-c925-4a0e-8c9a-ecda421332bf,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1722447231449770462,Labels:map[string]string{app: kindnet,controller-revision-hash: 549967b474,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kindnet-pklkm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 23b4d2f3-c925-4a0e-8c9a-ecda421332bf,k8s-app: kindnet,pod-template-generation: 1,tier: node,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-07-31T17:27:17.543153791Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:55e5e7a975946471ddc15d954d8aae8ead7fdf08b655cb2a094568d0b96448e5,Metadata:&PodSandboxMetadata{Name:kube-scheduler-multinode-498089,Uid:da27596f1275a6292fe83598b4e87488,Namespace:kube-system
,Attempt:1,},State:SANDBOX_READY,CreatedAt:1722447231446620113,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-multinode-498089,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: da27596f1275a6292fe83598b4e87488,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: da27596f1275a6292fe83598b4e87488,kubernetes.io/config.seen: 2024-07-31T17:27:04.063690453Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:c48f791ae73c77a09e60b2525cf0448c9575d1888a59b781db0854018f9b239b,Metadata:&PodSandboxMetadata{Name:coredns-7db6d8ff4d-8qccd,Uid:aa7bae7b-0ce2-407a-b46f-94178fb43071,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1722447225576544700,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-7db6d8ff4d-8qccd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aa7bae7b-0ce2-407a-b46f-94178fb43071,k8s-app: kube-dns,pod-temp
late-hash: 7db6d8ff4d,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-07-31T17:27:33.281395213Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:1b608d19493aec6aee4416c17b5fa94aca430d3038a95515eeb351fb189a7485,Metadata:&PodSandboxMetadata{Name:busybox-fc5497c4f-tm4jn,Uid:1616740e-4445-47e5-9891-6dc753c5f655,Namespace:default,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1722446903053692990,Labels:map[string]string{app: busybox,io.kubernetes.container.name: POD,io.kubernetes.pod.name: busybox-fc5497c4f-tm4jn,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 1616740e-4445-47e5-9891-6dc753c5f655,pod-template-hash: fc5497c4f,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-07-31T17:28:22.738953325Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:5f299cbe8df6c27e091c1d433f7b097f9ffa033439712432efb2428b97ec1448,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:da4e7247-c042-4da0-9015-e4242d18d043,Namespace:kube-system,Attempt:0,}
,State:SANDBOX_NOTREADY,CreatedAt:1722446853594266062,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: da4e7247-c042-4da0-9015-e4242d18d043,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path
\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io/config.seen: 2024-07-31T17:27:33.287558839Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:2a0003bbb9ba5f20c9d30e31392966f73c497407209243ed9580004366cf542e,Metadata:&PodSandboxMetadata{Name:kindnet-pklkm,Uid:23b4d2f3-c925-4a0e-8c9a-ecda421332bf,Namespace:kube-system,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1722446837877159863,Labels:map[string]string{app: kindnet,controller-revision-hash: 549967b474,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kindnet-pklkm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 23b4d2f3-c925-4a0e-8c9a-ecda421332bf,k8s-app: kindnet,pod-template-generation: 1,tier: node,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-07-31T17:27:17.543153791Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:91d6c19702c62d87dc679aec3596bc68e23bf1a4709fa745a785db0238ce000a,Metadata:&PodSandboxMetadata{Name:kube-proxy-v6xrd,Uid:cb9f0873-0309-40a
8-a2f1-c1c6f0713034,Namespace:kube-system,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1722446837843814541,Labels:map[string]string{controller-revision-hash: 5bbc78d4f8,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-v6xrd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cb9f0873-0309-40a8-a2f1-c1c6f0713034,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-07-31T17:27:17.528864744Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:49b123da2a4acafe5dd6d86339589e7ed5e3cf1c97e5e26ebe6872488e23e4f1,Metadata:&PodSandboxMetadata{Name:kube-scheduler-multinode-498089,Uid:da27596f1275a6292fe83598b4e87488,Namespace:kube-system,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1722446818873047648,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-multinode-498089,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: da27596f1275a6
292fe83598b4e87488,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: da27596f1275a6292fe83598b4e87488,kubernetes.io/config.seen: 2024-07-31T17:26:58.397261640Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:6d8470cc9c9f5bb56d63f5c516d5fdf90527f9a92dd424db9d3c91087b7531e1,Metadata:&PodSandboxMetadata{Name:etcd-multinode-498089,Uid:be60b8808f43a8b7b2c4a1a6190bedac,Namespace:kube-system,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1722446818847169707,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-multinode-498089,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: be60b8808f43a8b7b2c4a1a6190bedac,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.39.100:2379,kubernetes.io/config.hash: be60b8808f43a8b7b2c4a1a6190bedac,kubernetes.io/config.seen: 2024-07-31T17:26:58.397254772Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&
PodSandbox{Id:0d0cc81cdaa8e63316d97c94da40dea16ff0b5d9045b12763cdd34428d116101,Metadata:&PodSandboxMetadata{Name:kube-apiserver-multinode-498089,Uid:2950d4410b3899ccbdb0536947b29f40,Namespace:kube-system,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1722446818845657372,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-multinode-498089,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2950d4410b3899ccbdb0536947b29f40,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.39.100:8443,kubernetes.io/config.hash: 2950d4410b3899ccbdb0536947b29f40,kubernetes.io/config.seen: 2024-07-31T17:26:58.397259294Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:9cd9353b23377bcce311f56786acd497e80b965f81b4be89c96f16a3864b308b,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-multinode-498089,Uid:e893babbbefa5fea11a4d995a36606db,Namespace:kube-s
ystem,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1722446818838970893,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-multinode-498089,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e893babbbefa5fea11a4d995a36606db,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: e893babbbefa5fea11a4d995a36606db,kubernetes.io/config.seen: 2024-07-31T17:26:58.397260705Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="otel-collector/interceptors.go:74" id=af6f46c8-ac7c-43af-8a95-426bb4c894b0 name=/runtime.v1.RuntimeService/ListPodSandbox
	Jul 31 17:35:36 multinode-498089 crio[2873]: time="2024-07-31 17:35:36.956487199Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=2618e400-fef0-44ab-b73b-9cd2f24a92c3 name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 17:35:36 multinode-498089 crio[2873]: time="2024-07-31 17:35:36.956541650Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=2618e400-fef0-44ab-b73b-9cd2f24a92c3 name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 17:35:36 multinode-498089 crio[2873]: time="2024-07-31 17:35:36.956896985Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:6c307c10d491e1b37eefec0b1f0f26f544260f80f7b5121d2175ba0f280a1993,PodSandboxId:1eda2cad03dd79c622a01ae97c4428d2c284ef771ff3a5bcb6e37a51060dbb03,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1722447265355133149,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-tm4jn,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 1616740e-4445-47e5-9891-6dc753c5f655,},Annotations:map[string]string{io.kubernetes.container.hash: 81b7c80c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:04f3c08ad545a96ddd065cd0e6e06a3ea8e50d3fe11d31e16e77052c6179dc24,PodSandboxId:c48f791ae73c77a09e60b2525cf0448c9575d1888a59b781db0854018f9b239b,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722447238113423306,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-8qccd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aa7bae7b-0ce2-407a-b46f-94178fb43071,},Annotations:map[string]string{io.kubernetes.container.hash: c6c137a2,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\
",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:220f5364b7adf5b5afb91f5fb001a204f8d65e35d532d4414f05c18f55443647,PodSandboxId:9fb2f68daf6b4c3f642fe0bf15c8564304161c163b9a7841e7199e290f8266d0,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722447232191060397,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: da4e724
7-c042-4da0-9015-e4242d18d043,},Annotations:map[string]string{io.kubernetes.container.hash: 3feca24c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:382ba3eb1c2830200fa38a7ce4a97e818badbd910e8576f35bdf33aca05b304d,PodSandboxId:a02944a7e020ead4730ba570dbf278d1dca969c8908f82d409692291b67b11a2,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_RUNNING,CreatedAt:1722447231968256795,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-pklkm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 23b4d2f3-c925-4a0e-8c9a-ecda421332bf,},An
notations:map[string]string{io.kubernetes.container.hash: 3edbfc23,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:468fe46f35c8cc80128ee1ec6a6050cf94aef0cb919789ed8d25c9880b9dddfc,PodSandboxId:cf7fedce8a8e2283c2bf7e044adbfee19fd6000751ae43fca8a4acec4b252b04,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722447231885288652,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-v6xrd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cb9f0873-0309-40a8-a2f1-c1c6f0713034,},Annotations:map[string]string{io.ku
bernetes.container.hash: bd0970e4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c69ac13c491b850716408eaae15dee0ae649f67e725ee682990cc4a93bea42c7,PodSandboxId:2bf9cbf0d84160d2a35a75b8ead180d4b5d00a2b6df4167e059c390adc4feb5b,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722447231787642917,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-498089,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2950d4410b3899ccbdb0536947b29f40,},Annotations:map[string]string{io.kubernetes.conta
iner.hash: 836ad03f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ba2f3c3d981875bdc99216720fb78f7187ed031bddd10f4b276b85fbc1df0506,PodSandboxId:3d398e58b156ce29cb02ed249f6dc03fa818bd96b64d85485f9caed70558b00e,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722447231774717639,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-498089,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e893babbbefa5fea11a4d995a36606db,},Annotations:map[string]string{io.kub
ernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ac3076f8ec41fd7a430cd72c45fdcc849cb9cf7d80dfe935dd4dd7508fb27c0b,PodSandboxId:e682678fcf5c0d70faf9f60078593d5aac3cfa6f19ed070b054499cd1777ac4c,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722447231731178026,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-498089,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: be60b8808f43a8b7b2c4a1a6190bedac,},Annotations:map[string]string{io.kubernetes.container.hash: ed86d2d2,io.kubernet
es.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7df64e7974e8c57b3618f920f29d46ee4cfe200717f7cba6834e6697f01fc257,PodSandboxId:55e5e7a975946471ddc15d954d8aae8ead7fdf08b655cb2a094568d0b96448e5,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722447231698881591,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-498089,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: da27596f1275a6292fe83598b4e87488,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.r
estartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6edfcf47092f7b6539d7a28ad02cdce800748c9d7c70a7bf92a9dee67cb4d6e3,PodSandboxId:c48f791ae73c77a09e60b2525cf0448c9575d1888a59b781db0854018f9b239b,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722447225720418667,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-8qccd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aa7bae7b-0ce2-407a-b46f-94178fb43071,},Annotations:map[string]string{io.kubernetes.container.hash: c6c137a2,io.kubernetes.container.ports: [{\"name\":\"dns\",\"conta
inerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b3483951aa5a43223a694e974bdd881840004a97e4f10e615f4b7e77226b510b,PodSandboxId:1b608d19493aec6aee4416c17b5fa94aca430d3038a95515eeb351fb189a7485,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1722446905505296513,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-tm4jn,i
o.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 1616740e-4445-47e5-9891-6dc753c5f655,},Annotations:map[string]string{io.kubernetes.container.hash: 81b7c80c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f713a250ad9e752c0b2d45c5140624e047b77ac24ace1449a8979bc7ce711e59,PodSandboxId:5f299cbe8df6c27e091c1d433f7b097f9ffa033439712432efb2428b97ec1448,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1722446853710746781,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.
namespace: kube-system,io.kubernetes.pod.uid: da4e7247-c042-4da0-9015-e4242d18d043,},Annotations:map[string]string{io.kubernetes.container.hash: 3feca24c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:11f278a9b370395f6b3a7cd171498efe28e15ce5b995f45ba5e647d8dff3db53,PodSandboxId:2a0003bbb9ba5f20c9d30e31392966f73c497407209243ed9580004366cf542e,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_EXITED,CreatedAt:1722446841873004606,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-pklkm,io.kubernetes.pod.n
amespace: kube-system,io.kubernetes.pod.uid: 23b4d2f3-c925-4a0e-8c9a-ecda421332bf,},Annotations:map[string]string{io.kubernetes.container.hash: 3edbfc23,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:87a364936bdef18ba9cf82cf399cac1f0c9d855f2bbd704616f08ba3c0154e86,PodSandboxId:91d6c19702c62d87dc679aec3596bc68e23bf1a4709fa745a785db0238ce000a,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_EXITED,CreatedAt:1722446838113033393,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-v6xrd,io.kubernetes.pod.namespace: kube-system,io.kubernete
s.pod.uid: cb9f0873-0309-40a8-a2f1-c1c6f0713034,},Annotations:map[string]string{io.kubernetes.container.hash: bd0970e4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fc640bbf40886bebe629913240d2f5ee56a9b60ed6a6ce6e7da42e278f0303e3,PodSandboxId:49b123da2a4acafe5dd6d86339589e7ed5e3cf1c97e5e26ebe6872488e23e4f1,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_EXITED,CreatedAt:1722446819083942198,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-498089,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
da27596f1275a6292fe83598b4e87488,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:741c5d19093c08d4889a8536dba6fa217e98a0ff4f96ef237f57d12dd34a844b,PodSandboxId:9cd9353b23377bcce311f56786acd497e80b965f81b4be89c96f16a3864b308b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_EXITED,CreatedAt:1722446819078861479,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-498089,io.kubernetes.pod.namespace: kube-system,io.kubernet
es.pod.uid: e893babbbefa5fea11a4d995a36606db,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7e257027c671ae2fa9681210deb701fd3b243b02260ef45b7eb80c12efe72477,PodSandboxId:6d8470cc9c9f5bb56d63f5c516d5fdf90527f9a92dd424db9d3c91087b7531e1,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1722446819035923110,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-498089,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: be60b8808f43a8b7b2c4a1a6190bedac,
},Annotations:map[string]string{io.kubernetes.container.hash: ed86d2d2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7706d4d5c5b20f986a98dbe39b9b5f757972cfa89dd0f91e0d780ab4229cd836,PodSandboxId:0d0cc81cdaa8e63316d97c94da40dea16ff0b5d9045b12763cdd34428d116101,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_EXITED,CreatedAt:1722446819025862909,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-498089,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2950d4410b3899ccbdb0536947b29f40,},Annotations:m
ap[string]string{io.kubernetes.container.hash: 836ad03f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=2618e400-fef0-44ab-b73b-9cd2f24a92c3 name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 17:35:36 multinode-498089 crio[2873]: time="2024-07-31 17:35:36.987421489Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=a9dca75f-9568-4c19-b354-e3b32ab4f68b name=/runtime.v1.RuntimeService/Version
	Jul 31 17:35:36 multinode-498089 crio[2873]: time="2024-07-31 17:35:36.987504794Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=a9dca75f-9568-4c19-b354-e3b32ab4f68b name=/runtime.v1.RuntimeService/Version
	Jul 31 17:35:36 multinode-498089 crio[2873]: time="2024-07-31 17:35:36.988575356Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=6a0dc0d5-028c-4b81-aacc-2434aa1fb731 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 31 17:35:36 multinode-498089 crio[2873]: time="2024-07-31 17:35:36.989012312Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722447336988989188,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143052,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=6a0dc0d5-028c-4b81-aacc-2434aa1fb731 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 31 17:35:36 multinode-498089 crio[2873]: time="2024-07-31 17:35:36.989566909Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=083e8034-356f-4310-a19a-a3f72edc0158 name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 17:35:36 multinode-498089 crio[2873]: time="2024-07-31 17:35:36.989758062Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=083e8034-356f-4310-a19a-a3f72edc0158 name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 17:35:36 multinode-498089 crio[2873]: time="2024-07-31 17:35:36.990111873Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:6c307c10d491e1b37eefec0b1f0f26f544260f80f7b5121d2175ba0f280a1993,PodSandboxId:1eda2cad03dd79c622a01ae97c4428d2c284ef771ff3a5bcb6e37a51060dbb03,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1722447265355133149,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-tm4jn,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 1616740e-4445-47e5-9891-6dc753c5f655,},Annotations:map[string]string{io.kubernetes.container.hash: 81b7c80c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:04f3c08ad545a96ddd065cd0e6e06a3ea8e50d3fe11d31e16e77052c6179dc24,PodSandboxId:c48f791ae73c77a09e60b2525cf0448c9575d1888a59b781db0854018f9b239b,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722447238113423306,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-8qccd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aa7bae7b-0ce2-407a-b46f-94178fb43071,},Annotations:map[string]string{io.kubernetes.container.hash: c6c137a2,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\
",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:220f5364b7adf5b5afb91f5fb001a204f8d65e35d532d4414f05c18f55443647,PodSandboxId:9fb2f68daf6b4c3f642fe0bf15c8564304161c163b9a7841e7199e290f8266d0,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722447232191060397,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: da4e724
7-c042-4da0-9015-e4242d18d043,},Annotations:map[string]string{io.kubernetes.container.hash: 3feca24c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:382ba3eb1c2830200fa38a7ce4a97e818badbd910e8576f35bdf33aca05b304d,PodSandboxId:a02944a7e020ead4730ba570dbf278d1dca969c8908f82d409692291b67b11a2,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_RUNNING,CreatedAt:1722447231968256795,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-pklkm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 23b4d2f3-c925-4a0e-8c9a-ecda421332bf,},An
notations:map[string]string{io.kubernetes.container.hash: 3edbfc23,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:468fe46f35c8cc80128ee1ec6a6050cf94aef0cb919789ed8d25c9880b9dddfc,PodSandboxId:cf7fedce8a8e2283c2bf7e044adbfee19fd6000751ae43fca8a4acec4b252b04,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722447231885288652,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-v6xrd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cb9f0873-0309-40a8-a2f1-c1c6f0713034,},Annotations:map[string]string{io.ku
bernetes.container.hash: bd0970e4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c69ac13c491b850716408eaae15dee0ae649f67e725ee682990cc4a93bea42c7,PodSandboxId:2bf9cbf0d84160d2a35a75b8ead180d4b5d00a2b6df4167e059c390adc4feb5b,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722447231787642917,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-498089,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2950d4410b3899ccbdb0536947b29f40,},Annotations:map[string]string{io.kubernetes.conta
iner.hash: 836ad03f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ba2f3c3d981875bdc99216720fb78f7187ed031bddd10f4b276b85fbc1df0506,PodSandboxId:3d398e58b156ce29cb02ed249f6dc03fa818bd96b64d85485f9caed70558b00e,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722447231774717639,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-498089,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e893babbbefa5fea11a4d995a36606db,},Annotations:map[string]string{io.kub
ernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ac3076f8ec41fd7a430cd72c45fdcc849cb9cf7d80dfe935dd4dd7508fb27c0b,PodSandboxId:e682678fcf5c0d70faf9f60078593d5aac3cfa6f19ed070b054499cd1777ac4c,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722447231731178026,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-498089,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: be60b8808f43a8b7b2c4a1a6190bedac,},Annotations:map[string]string{io.kubernetes.container.hash: ed86d2d2,io.kubernet
es.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7df64e7974e8c57b3618f920f29d46ee4cfe200717f7cba6834e6697f01fc257,PodSandboxId:55e5e7a975946471ddc15d954d8aae8ead7fdf08b655cb2a094568d0b96448e5,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722447231698881591,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-498089,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: da27596f1275a6292fe83598b4e87488,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.r
estartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6edfcf47092f7b6539d7a28ad02cdce800748c9d7c70a7bf92a9dee67cb4d6e3,PodSandboxId:c48f791ae73c77a09e60b2525cf0448c9575d1888a59b781db0854018f9b239b,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722447225720418667,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-8qccd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aa7bae7b-0ce2-407a-b46f-94178fb43071,},Annotations:map[string]string{io.kubernetes.container.hash: c6c137a2,io.kubernetes.container.ports: [{\"name\":\"dns\",\"conta
inerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b3483951aa5a43223a694e974bdd881840004a97e4f10e615f4b7e77226b510b,PodSandboxId:1b608d19493aec6aee4416c17b5fa94aca430d3038a95515eeb351fb189a7485,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1722446905505296513,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-tm4jn,i
o.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 1616740e-4445-47e5-9891-6dc753c5f655,},Annotations:map[string]string{io.kubernetes.container.hash: 81b7c80c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f713a250ad9e752c0b2d45c5140624e047b77ac24ace1449a8979bc7ce711e59,PodSandboxId:5f299cbe8df6c27e091c1d433f7b097f9ffa033439712432efb2428b97ec1448,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1722446853710746781,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.
namespace: kube-system,io.kubernetes.pod.uid: da4e7247-c042-4da0-9015-e4242d18d043,},Annotations:map[string]string{io.kubernetes.container.hash: 3feca24c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:11f278a9b370395f6b3a7cd171498efe28e15ce5b995f45ba5e647d8dff3db53,PodSandboxId:2a0003bbb9ba5f20c9d30e31392966f73c497407209243ed9580004366cf542e,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_EXITED,CreatedAt:1722446841873004606,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-pklkm,io.kubernetes.pod.n
amespace: kube-system,io.kubernetes.pod.uid: 23b4d2f3-c925-4a0e-8c9a-ecda421332bf,},Annotations:map[string]string{io.kubernetes.container.hash: 3edbfc23,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:87a364936bdef18ba9cf82cf399cac1f0c9d855f2bbd704616f08ba3c0154e86,PodSandboxId:91d6c19702c62d87dc679aec3596bc68e23bf1a4709fa745a785db0238ce000a,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_EXITED,CreatedAt:1722446838113033393,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-v6xrd,io.kubernetes.pod.namespace: kube-system,io.kubernete
s.pod.uid: cb9f0873-0309-40a8-a2f1-c1c6f0713034,},Annotations:map[string]string{io.kubernetes.container.hash: bd0970e4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fc640bbf40886bebe629913240d2f5ee56a9b60ed6a6ce6e7da42e278f0303e3,PodSandboxId:49b123da2a4acafe5dd6d86339589e7ed5e3cf1c97e5e26ebe6872488e23e4f1,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_EXITED,CreatedAt:1722446819083942198,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-498089,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
da27596f1275a6292fe83598b4e87488,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:741c5d19093c08d4889a8536dba6fa217e98a0ff4f96ef237f57d12dd34a844b,PodSandboxId:9cd9353b23377bcce311f56786acd497e80b965f81b4be89c96f16a3864b308b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_EXITED,CreatedAt:1722446819078861479,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-498089,io.kubernetes.pod.namespace: kube-system,io.kubernet
es.pod.uid: e893babbbefa5fea11a4d995a36606db,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7e257027c671ae2fa9681210deb701fd3b243b02260ef45b7eb80c12efe72477,PodSandboxId:6d8470cc9c9f5bb56d63f5c516d5fdf90527f9a92dd424db9d3c91087b7531e1,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1722446819035923110,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-498089,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: be60b8808f43a8b7b2c4a1a6190bedac,
},Annotations:map[string]string{io.kubernetes.container.hash: ed86d2d2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7706d4d5c5b20f986a98dbe39b9b5f757972cfa89dd0f91e0d780ab4229cd836,PodSandboxId:0d0cc81cdaa8e63316d97c94da40dea16ff0b5d9045b12763cdd34428d116101,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_EXITED,CreatedAt:1722446819025862909,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-498089,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2950d4410b3899ccbdb0536947b29f40,},Annotations:m
ap[string]string{io.kubernetes.container.hash: 836ad03f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=083e8034-356f-4310-a19a-a3f72edc0158 name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 17:35:37 multinode-498089 crio[2873]: time="2024-07-31 17:35:37.028998963Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=d774b9e1-591f-431a-9892-defcdab98e48 name=/runtime.v1.RuntimeService/Version
	Jul 31 17:35:37 multinode-498089 crio[2873]: time="2024-07-31 17:35:37.029157918Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=d774b9e1-591f-431a-9892-defcdab98e48 name=/runtime.v1.RuntimeService/Version
	Jul 31 17:35:37 multinode-498089 crio[2873]: time="2024-07-31 17:35:37.030576591Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=9d7ce407-c027-4efb-b908-30e472b67d8f name=/runtime.v1.ImageService/ImageFsInfo
	Jul 31 17:35:37 multinode-498089 crio[2873]: time="2024-07-31 17:35:37.031055438Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722447337031026077,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143052,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=9d7ce407-c027-4efb-b908-30e472b67d8f name=/runtime.v1.ImageService/ImageFsInfo
	Jul 31 17:35:37 multinode-498089 crio[2873]: time="2024-07-31 17:35:37.031620718Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=fe65a0ea-139a-4e46-9da7-82c4c462a795 name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 17:35:37 multinode-498089 crio[2873]: time="2024-07-31 17:35:37.031676383Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=fe65a0ea-139a-4e46-9da7-82c4c462a795 name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 17:35:37 multinode-498089 crio[2873]: time="2024-07-31 17:35:37.032375274Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:6c307c10d491e1b37eefec0b1f0f26f544260f80f7b5121d2175ba0f280a1993,PodSandboxId:1eda2cad03dd79c622a01ae97c4428d2c284ef771ff3a5bcb6e37a51060dbb03,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1722447265355133149,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-tm4jn,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 1616740e-4445-47e5-9891-6dc753c5f655,},Annotations:map[string]string{io.kubernetes.container.hash: 81b7c80c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:04f3c08ad545a96ddd065cd0e6e06a3ea8e50d3fe11d31e16e77052c6179dc24,PodSandboxId:c48f791ae73c77a09e60b2525cf0448c9575d1888a59b781db0854018f9b239b,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722447238113423306,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-8qccd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aa7bae7b-0ce2-407a-b46f-94178fb43071,},Annotations:map[string]string{io.kubernetes.container.hash: c6c137a2,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\
",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:220f5364b7adf5b5afb91f5fb001a204f8d65e35d532d4414f05c18f55443647,PodSandboxId:9fb2f68daf6b4c3f642fe0bf15c8564304161c163b9a7841e7199e290f8266d0,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722447232191060397,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: da4e724
7-c042-4da0-9015-e4242d18d043,},Annotations:map[string]string{io.kubernetes.container.hash: 3feca24c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:382ba3eb1c2830200fa38a7ce4a97e818badbd910e8576f35bdf33aca05b304d,PodSandboxId:a02944a7e020ead4730ba570dbf278d1dca969c8908f82d409692291b67b11a2,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_RUNNING,CreatedAt:1722447231968256795,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-pklkm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 23b4d2f3-c925-4a0e-8c9a-ecda421332bf,},An
notations:map[string]string{io.kubernetes.container.hash: 3edbfc23,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:468fe46f35c8cc80128ee1ec6a6050cf94aef0cb919789ed8d25c9880b9dddfc,PodSandboxId:cf7fedce8a8e2283c2bf7e044adbfee19fd6000751ae43fca8a4acec4b252b04,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722447231885288652,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-v6xrd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cb9f0873-0309-40a8-a2f1-c1c6f0713034,},Annotations:map[string]string{io.ku
bernetes.container.hash: bd0970e4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c69ac13c491b850716408eaae15dee0ae649f67e725ee682990cc4a93bea42c7,PodSandboxId:2bf9cbf0d84160d2a35a75b8ead180d4b5d00a2b6df4167e059c390adc4feb5b,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722447231787642917,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-498089,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2950d4410b3899ccbdb0536947b29f40,},Annotations:map[string]string{io.kubernetes.conta
iner.hash: 836ad03f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ba2f3c3d981875bdc99216720fb78f7187ed031bddd10f4b276b85fbc1df0506,PodSandboxId:3d398e58b156ce29cb02ed249f6dc03fa818bd96b64d85485f9caed70558b00e,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722447231774717639,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-498089,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e893babbbefa5fea11a4d995a36606db,},Annotations:map[string]string{io.kub
ernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ac3076f8ec41fd7a430cd72c45fdcc849cb9cf7d80dfe935dd4dd7508fb27c0b,PodSandboxId:e682678fcf5c0d70faf9f60078593d5aac3cfa6f19ed070b054499cd1777ac4c,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722447231731178026,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-498089,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: be60b8808f43a8b7b2c4a1a6190bedac,},Annotations:map[string]string{io.kubernetes.container.hash: ed86d2d2,io.kubernet
es.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7df64e7974e8c57b3618f920f29d46ee4cfe200717f7cba6834e6697f01fc257,PodSandboxId:55e5e7a975946471ddc15d954d8aae8ead7fdf08b655cb2a094568d0b96448e5,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722447231698881591,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-498089,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: da27596f1275a6292fe83598b4e87488,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.r
estartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6edfcf47092f7b6539d7a28ad02cdce800748c9d7c70a7bf92a9dee67cb4d6e3,PodSandboxId:c48f791ae73c77a09e60b2525cf0448c9575d1888a59b781db0854018f9b239b,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722447225720418667,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-8qccd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aa7bae7b-0ce2-407a-b46f-94178fb43071,},Annotations:map[string]string{io.kubernetes.container.hash: c6c137a2,io.kubernetes.container.ports: [{\"name\":\"dns\",\"conta
inerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b3483951aa5a43223a694e974bdd881840004a97e4f10e615f4b7e77226b510b,PodSandboxId:1b608d19493aec6aee4416c17b5fa94aca430d3038a95515eeb351fb189a7485,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1722446905505296513,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-tm4jn,i
o.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 1616740e-4445-47e5-9891-6dc753c5f655,},Annotations:map[string]string{io.kubernetes.container.hash: 81b7c80c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f713a250ad9e752c0b2d45c5140624e047b77ac24ace1449a8979bc7ce711e59,PodSandboxId:5f299cbe8df6c27e091c1d433f7b097f9ffa033439712432efb2428b97ec1448,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1722446853710746781,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.
namespace: kube-system,io.kubernetes.pod.uid: da4e7247-c042-4da0-9015-e4242d18d043,},Annotations:map[string]string{io.kubernetes.container.hash: 3feca24c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:11f278a9b370395f6b3a7cd171498efe28e15ce5b995f45ba5e647d8dff3db53,PodSandboxId:2a0003bbb9ba5f20c9d30e31392966f73c497407209243ed9580004366cf542e,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_EXITED,CreatedAt:1722446841873004606,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-pklkm,io.kubernetes.pod.n
amespace: kube-system,io.kubernetes.pod.uid: 23b4d2f3-c925-4a0e-8c9a-ecda421332bf,},Annotations:map[string]string{io.kubernetes.container.hash: 3edbfc23,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:87a364936bdef18ba9cf82cf399cac1f0c9d855f2bbd704616f08ba3c0154e86,PodSandboxId:91d6c19702c62d87dc679aec3596bc68e23bf1a4709fa745a785db0238ce000a,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_EXITED,CreatedAt:1722446838113033393,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-v6xrd,io.kubernetes.pod.namespace: kube-system,io.kubernete
s.pod.uid: cb9f0873-0309-40a8-a2f1-c1c6f0713034,},Annotations:map[string]string{io.kubernetes.container.hash: bd0970e4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fc640bbf40886bebe629913240d2f5ee56a9b60ed6a6ce6e7da42e278f0303e3,PodSandboxId:49b123da2a4acafe5dd6d86339589e7ed5e3cf1c97e5e26ebe6872488e23e4f1,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_EXITED,CreatedAt:1722446819083942198,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-498089,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
da27596f1275a6292fe83598b4e87488,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:741c5d19093c08d4889a8536dba6fa217e98a0ff4f96ef237f57d12dd34a844b,PodSandboxId:9cd9353b23377bcce311f56786acd497e80b965f81b4be89c96f16a3864b308b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_EXITED,CreatedAt:1722446819078861479,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-498089,io.kubernetes.pod.namespace: kube-system,io.kubernet
es.pod.uid: e893babbbefa5fea11a4d995a36606db,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7e257027c671ae2fa9681210deb701fd3b243b02260ef45b7eb80c12efe72477,PodSandboxId:6d8470cc9c9f5bb56d63f5c516d5fdf90527f9a92dd424db9d3c91087b7531e1,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1722446819035923110,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-498089,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: be60b8808f43a8b7b2c4a1a6190bedac,
},Annotations:map[string]string{io.kubernetes.container.hash: ed86d2d2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7706d4d5c5b20f986a98dbe39b9b5f757972cfa89dd0f91e0d780ab4229cd836,PodSandboxId:0d0cc81cdaa8e63316d97c94da40dea16ff0b5d9045b12763cdd34428d116101,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_EXITED,CreatedAt:1722446819025862909,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-498089,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2950d4410b3899ccbdb0536947b29f40,},Annotations:m
ap[string]string{io.kubernetes.container.hash: 836ad03f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=fe65a0ea-139a-4e46-9da7-82c4c462a795 name=/runtime.v1.RuntimeService/ListContainers
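
The Version, ImageFsInfo and ListContainers request/response pairs above are the kubelet's periodic CRI polling of CRI-O. Outside the test harness, the same endpoints can be queried by hand on the node; a minimal sketch, assuming the multinode-498089 profile from this run is still available (these commands are illustrative and not part of the captured output):

    out/minikube-linux-amd64 -p multinode-498089 ssh "sudo crictl version"        # RuntimeService/Version
    out/minikube-linux-amd64 -p multinode-498089 ssh "sudo crictl imagefsinfo"    # ImageService/ImageFsInfo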
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	6c307c10d491e       8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a                                      About a minute ago   Running             busybox                   1                   1eda2cad03dd7       busybox-fc5497c4f-tm4jn
	04f3c08ad545a       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      About a minute ago   Running             coredns                   2                   c48f791ae73c7       coredns-7db6d8ff4d-8qccd
	220f5364b7adf       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      About a minute ago   Running             storage-provisioner       1                   9fb2f68daf6b4       storage-provisioner
	382ba3eb1c283       6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46                                      About a minute ago   Running             kindnet-cni               1                   a02944a7e020e       kindnet-pklkm
	468fe46f35c8c       55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1                                      About a minute ago   Running             kube-proxy                1                   cf7fedce8a8e2       kube-proxy-v6xrd
	c69ac13c491b8       1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d                                      About a minute ago   Running             kube-apiserver            1                   2bf9cbf0d8416       kube-apiserver-multinode-498089
	ba2f3c3d98187       76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e                                      About a minute ago   Running             kube-controller-manager   1                   3d398e58b156c       kube-controller-manager-multinode-498089
	ac3076f8ec41f       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      About a minute ago   Running             etcd                      1                   e682678fcf5c0       etcd-multinode-498089
	7df64e7974e8c       3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2                                      About a minute ago   Running             kube-scheduler            1                   55e5e7a975946       kube-scheduler-multinode-498089
	6edfcf47092f7       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      About a minute ago   Exited              coredns                   1                   c48f791ae73c7       coredns-7db6d8ff4d-8qccd
	b3483951aa5a4       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   7 minutes ago        Exited              busybox                   0                   1b608d19493ae       busybox-fc5497c4f-tm4jn
	f713a250ad9e7       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      8 minutes ago        Exited              storage-provisioner       0                   5f299cbe8df6c       storage-provisioner
	11f278a9b3703       docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9    8 minutes ago        Exited              kindnet-cni               0                   2a0003bbb9ba5       kindnet-pklkm
	87a364936bdef       55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1                                      8 minutes ago        Exited              kube-proxy                0                   91d6c19702c62       kube-proxy-v6xrd
	fc640bbf40886       3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2                                      8 minutes ago        Exited              kube-scheduler            0                   49b123da2a4ac       kube-scheduler-multinode-498089
	741c5d19093c0       76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e                                      8 minutes ago        Exited              kube-controller-manager   0                   9cd9353b23377       kube-controller-manager-multinode-498089
	7e257027c671a       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      8 minutes ago        Exited              etcd                      0                   6d8470cc9c9f5       etcd-multinode-498089
	7706d4d5c5b20       1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d                                      8 minutes ago        Exited              kube-apiserver            0                   0d0cc81cdaa8e       kube-apiserver-multinode-498089
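
The table above shows the pre-restart control-plane containers (attempt 0) as Exited and their post-restart replacements (attempt 1, or attempt 2 for coredns) as Running. A sketch of how the same view can be pulled ad hoc with crictl, assuming the multinode-498089 profile still exists (not part of the captured output):

    out/minikube-linux-amd64 -p multinode-498089 ssh "sudo crictl ps -a"
    out/minikube-linux-amd64 -p multinode-498089 ssh "sudo crictl ps -a --name coredns -o json"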
	
	
	==> coredns [04f3c08ad545a96ddd065cd0e6e06a3ea8e50d3fe11d31e16e77052c6179dc24] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:37019 - 17009 "HINFO IN 935776094504024851.1920634029359281869. udp 56 false 512" NXDOMAIN qr,rd,ra 56 0.011088744s
	
	
	==> coredns [6edfcf47092f7b6539d7a28ad02cdce800748c9d7c70a7bf92a9dee67cb4d6e3] <==
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] plugin/health: Going into lameduck mode for 5s
	[INFO] 127.0.0.1:50152 - 8875 "HINFO IN 4546732856793314619.2462259501056362839. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.011965382s
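
The connection-refused errors in this earlier coredns instance (attempt 1) cover the window in which the API server at 10.96.0.1:443 was unreachable during the restart; the replacement instance (attempt 2, logged above) starts cleanly. A minimal sketch for confirming API reachability and CoreDNS health afterwards, using the kubectl context created for this profile (commands not part of the captured output):

    kubectl --context multinode-498089 get --raw /readyz
    kubectl --context multinode-498089 -n kube-system get pods -l k8s-app=kube-dns -o wide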
	
	
	==> describe nodes <==
	Name:               multinode-498089
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-498089
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=1d737dad7efa60c56d30434fcd857dd3b14c91d9
	                    minikube.k8s.io/name=multinode-498089
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_07_31T17_27_05_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 31 Jul 2024 17:27:01 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-498089
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 31 Jul 2024 17:35:30 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 31 Jul 2024 17:33:57 +0000   Wed, 31 Jul 2024 17:26:59 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 31 Jul 2024 17:33:57 +0000   Wed, 31 Jul 2024 17:26:59 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 31 Jul 2024 17:33:57 +0000   Wed, 31 Jul 2024 17:26:59 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 31 Jul 2024 17:33:57 +0000   Wed, 31 Jul 2024 17:27:33 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.100
	  Hostname:    multinode-498089
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 4bbeac4f7a8446b2b13fe255fcd04320
	  System UUID:                4bbeac4f-7a84-46b2-b13f-e255fcd04320
	  Boot ID:                    6eea6dfe-41da-4ca0-a8df-788bf7c1456e
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-tm4jn                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m15s
	  kube-system                 coredns-7db6d8ff4d-8qccd                    100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     8m20s
	  kube-system                 etcd-multinode-498089                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         8m33s
	  kube-system                 kindnet-pklkm                               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      8m20s
	  kube-system                 kube-apiserver-multinode-498089             250m (12%)    0 (0%)      0 (0%)           0 (0%)         8m35s
	  kube-system                 kube-controller-manager-multinode-498089    200m (10%)    0 (0%)      0 (0%)           0 (0%)         8m33s
	  kube-system                 kube-proxy-v6xrd                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m20s
	  kube-system                 kube-scheduler-multinode-498089             100m (5%)     0 (0%)      0 (0%)           0 (0%)         8m33s
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m19s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   100m (5%)
	  memory             220Mi (10%)  220Mi (10%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type     Reason                   Age    From             Message
	  ----     ------                   ----   ----             -------
	  Normal   Starting                 101s   kube-proxy       
	  Normal   Starting                 8m18s  kube-proxy       
	  Normal   Starting                 8m33s  kubelet          Starting kubelet.
	  Normal   NodeAllocatableEnforced  8m33s  kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  8m33s  kubelet          Node multinode-498089 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    8m33s  kubelet          Node multinode-498089 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     8m33s  kubelet          Node multinode-498089 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           8m21s  node-controller  Node multinode-498089 event: Registered Node multinode-498089 in Controller
	  Normal   NodeReady                8m4s   kubelet          Node multinode-498089 status is now: NodeReady
	  Warning  ContainerGCFailed        2m33s  kubelet          rpc error: code = Unavailable desc = connection error: desc = "transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory"
	  Normal   Starting                 100s   kubelet          Starting kubelet.
	  Normal   NodeAllocatableEnforced  100s   kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  100s   kubelet          Node multinode-498089 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    100s   kubelet          Node multinode-498089 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     100s   kubelet          Node multinode-498089 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           90s    node-controller  Node multinode-498089 event: Registered Node multinode-498089 in Controller
	
	
	Name:               multinode-498089-m02
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-498089-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=1d737dad7efa60c56d30434fcd857dd3b14c91d9
	                    minikube.k8s.io/name=multinode-498089
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_07_31T17_34_36_0700
	                    minikube.k8s.io/version=v1.33.1
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 31 Jul 2024 17:34:36 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-498089-m02
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 31 Jul 2024 17:35:27 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 31 Jul 2024 17:35:06 +0000   Wed, 31 Jul 2024 17:34:36 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 31 Jul 2024 17:35:06 +0000   Wed, 31 Jul 2024 17:34:36 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 31 Jul 2024 17:35:06 +0000   Wed, 31 Jul 2024 17:34:36 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 31 Jul 2024 17:35:06 +0000   Wed, 31 Jul 2024 17:34:55 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.228
	  Hostname:    multinode-498089-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 6666b100748c404c89f1f7ec662d9f1f
	  System UUID:                6666b100-748c-404c-89f1-f7ec662d9f1f
	  Boot ID:                    9c7b8a87-0000-49d5-bf02-b17e82c6b0e8
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                       ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-tzt7d    0 (0%)        0 (0%)      0 (0%)           0 (0%)         65s
	  kube-system                 kindnet-5lbxl              100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      7m36s
	  kube-system                 kube-proxy-kwpbv           0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m36s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                    From        Message
	  ----    ------                   ----                   ----        -------
	  Normal  Starting                 7m32s                  kube-proxy  
	  Normal  Starting                 56s                    kube-proxy  
	  Normal  NodeHasSufficientMemory  7m37s (x2 over 7m37s)  kubelet     Node multinode-498089-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    7m37s (x2 over 7m37s)  kubelet     Node multinode-498089-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     7m37s (x2 over 7m37s)  kubelet     Node multinode-498089-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  7m36s                  kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeReady                7m17s                  kubelet     Node multinode-498089-m02 status is now: NodeReady
	  Normal  NodeHasSufficientMemory  61s (x2 over 61s)      kubelet     Node multinode-498089-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    61s (x2 over 61s)      kubelet     Node multinode-498089-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     61s (x2 over 61s)      kubelet     Node multinode-498089-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  61s                    kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeReady                42s                    kubelet     Node multinode-498089-m02 status is now: NodeReady
	
	
	Name:               multinode-498089-m03
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-498089-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=1d737dad7efa60c56d30434fcd857dd3b14c91d9
	                    minikube.k8s.io/name=multinode-498089
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_07_31T17_35_15_0700
	                    minikube.k8s.io/version=v1.33.1
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 31 Jul 2024 17:35:14 +0000
	Taints:             node.kubernetes.io/not-ready:NoExecute
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-498089-m03
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 31 Jul 2024 17:35:35 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 31 Jul 2024 17:35:34 +0000   Wed, 31 Jul 2024 17:35:14 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 31 Jul 2024 17:35:34 +0000   Wed, 31 Jul 2024 17:35:14 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 31 Jul 2024 17:35:34 +0000   Wed, 31 Jul 2024 17:35:14 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 31 Jul 2024 17:35:34 +0000   Wed, 31 Jul 2024 17:35:34 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.204
	  Hostname:    multinode-498089-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 5fda031417bb4b35a5f72c67bb783813
	  System UUID:                5fda0314-17bb-4b35-a5f7-2c67bb783813
	  Boot ID:                    6490a1ae-4cb2-462b-bee9-03fb708b8f65
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-pckbq       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      6m45s
	  kube-system                 kube-proxy-ppb5z    0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m45s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                    From        Message
	  ----    ------                   ----                   ----        -------
	  Normal  Starting                 6m40s                  kube-proxy  
	  Normal  Starting                 17s                    kube-proxy  
	  Normal  Starting                 5m51s                  kube-proxy  
	  Normal  NodeHasNoDiskPressure    6m45s (x2 over 6m45s)  kubelet     Node multinode-498089-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m45s (x2 over 6m45s)  kubelet     Node multinode-498089-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  6m45s                  kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  6m45s (x2 over 6m45s)  kubelet     Node multinode-498089-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeReady                6m25s                  kubelet     Node multinode-498089-m03 status is now: NodeReady
	  Normal  NodeAllocatableEnforced  5m57s                  kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientPID     5m56s (x2 over 5m57s)  kubelet     Node multinode-498089-m03 status is now: NodeHasSufficientPID
	  Normal  NodeHasNoDiskPressure    5m56s (x2 over 5m57s)  kubelet     Node multinode-498089-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  5m56s (x2 over 5m57s)  kubelet     Node multinode-498089-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeReady                5m37s                  kubelet     Node multinode-498089-m03 status is now: NodeReady
	  Normal  NodeHasSufficientMemory  23s (x2 over 23s)      kubelet     Node multinode-498089-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    23s (x2 over 23s)      kubelet     Node multinode-498089-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     23s (x2 over 23s)      kubelet     Node multinode-498089-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  23s                    kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeReady                3s                     kubelet     Node multinode-498089-m03 status is now: NodeReady
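multinode-498089-m03 was re-added with the node.kubernetes.io/not-ready:NoExecute taint shown above; once the kubelet reports Ready, the node lifecycle controller normally removes it. A quick sketch for verifying the taint is gone, assuming the kubectl context matches the profile name (an assumption, not shown in this log):

  kubectl --context multinode-498089 get node multinode-498089-m03 -o jsonpath='{.spec.taints}'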
	
	
	==> dmesg <==
	[  +0.186092] systemd-fstab-generator[617]: Ignoring "noauto" option for root device
	[  +0.132350] systemd-fstab-generator[629]: Ignoring "noauto" option for root device
	[  +0.246079] systemd-fstab-generator[659]: Ignoring "noauto" option for root device
	[  +3.876296] systemd-fstab-generator[757]: Ignoring "noauto" option for root device
	[  +4.442091] systemd-fstab-generator[938]: Ignoring "noauto" option for root device
	[  +0.055536] kauditd_printk_skb: 158 callbacks suppressed
	[Jul31 17:27] systemd-fstab-generator[1268]: Ignoring "noauto" option for root device
	[  +0.079775] kauditd_printk_skb: 69 callbacks suppressed
	[  +7.157910] kauditd_printk_skb: 18 callbacks suppressed
	[  +6.394220] systemd-fstab-generator[1456]: Ignoring "noauto" option for root device
	[  +5.222878] kauditd_printk_skb: 57 callbacks suppressed
	[Jul31 17:28] kauditd_printk_skb: 12 callbacks suppressed
	[Jul31 17:33] systemd-fstab-generator[2793]: Ignoring "noauto" option for root device
	[  +0.138604] systemd-fstab-generator[2805]: Ignoring "noauto" option for root device
	[  +0.165295] systemd-fstab-generator[2819]: Ignoring "noauto" option for root device
	[  +0.142416] systemd-fstab-generator[2831]: Ignoring "noauto" option for root device
	[  +0.265089] systemd-fstab-generator[2859]: Ignoring "noauto" option for root device
	[  +7.376665] systemd-fstab-generator[2957]: Ignoring "noauto" option for root device
	[  +0.083685] kauditd_printk_skb: 100 callbacks suppressed
	[  +6.367429] kauditd_printk_skb: 22 callbacks suppressed
	[  +5.792878] systemd-fstab-generator[3827]: Ignoring "noauto" option for root device
	[  +0.095197] kauditd_printk_skb: 62 callbacks suppressed
	[Jul31 17:34] kauditd_printk_skb: 19 callbacks suppressed
	[  +4.896080] systemd-fstab-generator[3990]: Ignoring "noauto" option for root device
	[ +12.856099] kauditd_printk_skb: 14 callbacks suppressed
	
	
	==> etcd [7e257027c671ae2fa9681210deb701fd3b243b02260ef45b7eb80c12efe72477] <==
	{"level":"info","ts":"2024-07-31T17:27:00.217593Z","caller":"traceutil/trace.go:171","msg":"trace[1565674192] range","detail":"{range_begin:/registry/namespaces/; range_end:/registry/namespaces0; response_count:0; response_revision:1; }","duration":"102.991985ms","start":"2024-07-31T17:27:00.114586Z","end":"2024-07-31T17:27:00.217578Z","steps":["trace[1565674192] 'range keys from in-memory index tree'  (duration: 102.708148ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-31T17:27:00.216593Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"101.933947ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/namespaces/\" range_end:\"/registry/namespaces0\" count_only:true ","response":"range_response_count:0 size:4"}
	{"level":"info","ts":"2024-07-31T17:27:00.218495Z","caller":"traceutil/trace.go:171","msg":"trace[2126615920] range","detail":"{range_begin:/registry/namespaces/; range_end:/registry/namespaces0; response_count:0; response_revision:1; }","duration":"103.84998ms","start":"2024-07-31T17:27:00.114634Z","end":"2024-07-31T17:27:00.218484Z","steps":["trace[2126615920] 'count revisions from in-memory index tree'  (duration: 101.912998ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-31T17:28:01.069355Z","caller":"traceutil/trace.go:171","msg":"trace[810982051] transaction","detail":"{read_only:false; response_revision:442; number_of_response:1; }","duration":"168.052498ms","start":"2024-07-31T17:28:00.901218Z","end":"2024-07-31T17:28:01.06927Z","steps":["trace[810982051] 'process raft request'  (duration: 167.994131ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-31T17:28:01.069615Z","caller":"traceutil/trace.go:171","msg":"trace[760668613] transaction","detail":"{read_only:false; response_revision:441; number_of_response:1; }","duration":"230.533677ms","start":"2024-07-31T17:28:00.839064Z","end":"2024-07-31T17:28:01.069598Z","steps":["trace[760668613] 'process raft request'  (duration: 134.638601ms)","trace[760668613] 'compare'  (duration: 94.999503ms)"],"step_count":2}
	{"level":"info","ts":"2024-07-31T17:28:01.069781Z","caller":"traceutil/trace.go:171","msg":"trace[1606502940] linearizableReadLoop","detail":"{readStateIndex:463; appliedIndex:462; }","duration":"207.956319ms","start":"2024-07-31T17:28:00.861813Z","end":"2024-07-31T17:28:01.06977Z","steps":["trace[1606502940] 'read index received'  (duration: 112.043139ms)","trace[1606502940] 'applied index is now lower than readState.Index'  (duration: 95.912259ms)"],"step_count":2}
	{"level":"warn","ts":"2024-07-31T17:28:01.070156Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"208.253258ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/apiregistration.k8s.io/apiservices/\" range_end:\"/registry/apiregistration.k8s.io/apiservices0\" count_only:true ","response":"range_response_count:0 size:7"}
	{"level":"info","ts":"2024-07-31T17:28:01.070218Z","caller":"traceutil/trace.go:171","msg":"trace[692972716] range","detail":"{range_begin:/registry/apiregistration.k8s.io/apiservices/; range_end:/registry/apiregistration.k8s.io/apiservices0; response_count:0; response_revision:442; }","duration":"208.412365ms","start":"2024-07-31T17:28:00.86179Z","end":"2024-07-31T17:28:01.070202Z","steps":["trace[692972716] 'agreement among raft nodes before linearized reading'  (duration: 208.198087ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-31T17:28:52.871542Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"158.136735ms","expected-duration":"100ms","prefix":"","request":"header:<ID:2176523991335837742 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/events/default/multinode-498089-m03.17e75c5bfac225e3\" mod_revision:0 > success:<request_put:<key:\"/registry/events/default/multinode-498089-m03.17e75c5bfac225e3\" value_size:642 lease:2176523991335837389 >> failure:<>>","response":"size:16"}
	{"level":"info","ts":"2024-07-31T17:28:52.871737Z","caller":"traceutil/trace.go:171","msg":"trace[49114635] transaction","detail":"{read_only:false; response_revision:574; number_of_response:1; }","duration":"177.190214ms","start":"2024-07-31T17:28:52.69453Z","end":"2024-07-31T17:28:52.87172Z","steps":["trace[49114635] 'process raft request'  (duration: 177.145285ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-31T17:28:52.871818Z","caller":"traceutil/trace.go:171","msg":"trace[127697052] linearizableReadLoop","detail":"{readStateIndex:609; appliedIndex:608; }","duration":"190.248395ms","start":"2024-07-31T17:28:52.681556Z","end":"2024-07-31T17:28:52.871804Z","steps":["trace[127697052] 'read index received'  (duration: 31.293419ms)","trace[127697052] 'applied index is now lower than readState.Index'  (duration: 158.953245ms)"],"step_count":2}
	{"level":"info","ts":"2024-07-31T17:28:52.87197Z","caller":"traceutil/trace.go:171","msg":"trace[1484313498] transaction","detail":"{read_only:false; response_revision:573; number_of_response:1; }","duration":"236.038193ms","start":"2024-07-31T17:28:52.635925Z","end":"2024-07-31T17:28:52.871963Z","steps":["trace[1484313498] 'process raft request'  (duration: 76.917235ms)","trace[1484313498] 'compare'  (duration: 158.030867ms)"],"step_count":2}
	{"level":"warn","ts":"2024-07-31T17:28:52.872062Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"190.491237ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/multinode-498089-m03\" ","response":"range_response_count:1 size:1926"}
	{"level":"info","ts":"2024-07-31T17:28:52.875132Z","caller":"traceutil/trace.go:171","msg":"trace[413867055] range","detail":"{range_begin:/registry/minions/multinode-498089-m03; range_end:; response_count:1; response_revision:574; }","duration":"193.58366ms","start":"2024-07-31T17:28:52.681531Z","end":"2024-07-31T17:28:52.875115Z","steps":["trace[413867055] 'agreement among raft nodes before linearized reading'  (duration: 190.461391ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-31T17:32:05.825562Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2024-07-31T17:32:05.825803Z","caller":"embed/etcd.go:375","msg":"closing etcd server","name":"multinode-498089","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.100:2380"],"advertise-client-urls":["https://192.168.39.100:2379"]}
	{"level":"warn","ts":"2024-07-31T17:32:05.839775Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-07-31T17:32:05.839946Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	2024/07/31 17:32:05 WARNING: [core] [Server #8] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	{"level":"warn","ts":"2024-07-31T17:32:05.885774Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.39.100:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-07-31T17:32:05.885862Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.39.100:2379: use of closed network connection"}
	{"level":"info","ts":"2024-07-31T17:32:05.887301Z","caller":"etcdserver/server.go:1471","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"3276445ff8d31e34","current-leader-member-id":"3276445ff8d31e34"}
	{"level":"info","ts":"2024-07-31T17:32:05.889712Z","caller":"embed/etcd.go:579","msg":"stopping serving peer traffic","address":"192.168.39.100:2380"}
	{"level":"info","ts":"2024-07-31T17:32:05.889882Z","caller":"embed/etcd.go:584","msg":"stopped serving peer traffic","address":"192.168.39.100:2380"}
	{"level":"info","ts":"2024-07-31T17:32:05.889913Z","caller":"embed/etcd.go:377","msg":"closed etcd server","name":"multinode-498089","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.100:2380"],"advertise-client-urls":["https://192.168.39.100:2379"]}
	
	
	==> etcd [ac3076f8ec41fd7a430cd72c45fdcc849cb9cf7d80dfe935dd4dd7508fb27c0b] <==
	{"level":"info","ts":"2024-07-31T17:33:52.249042Z","caller":"embed/etcd.go:857","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-07-31T17:33:52.249271Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"3276445ff8d31e34 switched to configuration voters=(3636168928135421492)"}
	{"level":"info","ts":"2024-07-31T17:33:52.249409Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"6cf58294dcaef1c8","local-member-id":"3276445ff8d31e34","added-peer-id":"3276445ff8d31e34","added-peer-peer-urls":["https://192.168.39.100:2380"]}
	{"level":"info","ts":"2024-07-31T17:33:52.249524Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"6cf58294dcaef1c8","local-member-id":"3276445ff8d31e34","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-31T17:33:52.249561Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-31T17:33:52.250067Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.39.100:2380"}
	{"level":"info","ts":"2024-07-31T17:33:52.250103Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.39.100:2380"}
	{"level":"info","ts":"2024-07-31T17:33:52.256077Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap.db","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-07-31T17:33:52.256143Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-07-31T17:33:52.256153Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-07-31T17:33:53.897681Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"3276445ff8d31e34 is starting a new election at term 2"}
	{"level":"info","ts":"2024-07-31T17:33:53.897821Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"3276445ff8d31e34 became pre-candidate at term 2"}
	{"level":"info","ts":"2024-07-31T17:33:53.897886Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"3276445ff8d31e34 received MsgPreVoteResp from 3276445ff8d31e34 at term 2"}
	{"level":"info","ts":"2024-07-31T17:33:53.897926Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"3276445ff8d31e34 became candidate at term 3"}
	{"level":"info","ts":"2024-07-31T17:33:53.89795Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"3276445ff8d31e34 received MsgVoteResp from 3276445ff8d31e34 at term 3"}
	{"level":"info","ts":"2024-07-31T17:33:53.897977Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"3276445ff8d31e34 became leader at term 3"}
	{"level":"info","ts":"2024-07-31T17:33:53.898008Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 3276445ff8d31e34 elected leader 3276445ff8d31e34 at term 3"}
	{"level":"info","ts":"2024-07-31T17:33:53.902645Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-07-31T17:33:53.902886Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-07-31T17:33:53.903136Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-07-31T17:33:53.903177Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-07-31T17:33:53.902643Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"3276445ff8d31e34","local-member-attributes":"{Name:multinode-498089 ClientURLs:[https://192.168.39.100:2379]}","request-path":"/0/members/3276445ff8d31e34/attributes","cluster-id":"6cf58294dcaef1c8","publish-timeout":"7s"}
	{"level":"info","ts":"2024-07-31T17:33:53.904874Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-07-31T17:33:53.904875Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.100:2379"}
	{"level":"info","ts":"2024-07-31T17:35:19.149738Z","caller":"traceutil/trace.go:171","msg":"trace[1335440042] transaction","detail":"{read_only:false; response_revision:1188; number_of_response:1; }","duration":"156.181866ms","start":"2024-07-31T17:35:18.993453Z","end":"2024-07-31T17:35:19.149635Z","steps":["trace[1335440042] 'process raft request'  (duration: 156.03787ms)"],"step_count":1}
	
	
	==> kernel <==
	 17:35:37 up 9 min,  0 users,  load average: 0.37, 0.30, 0.18
	Linux multinode-498089 5.10.207 #1 SMP Mon Jul 29 15:19:02 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [11f278a9b370395f6b3a7cd171498efe28e15ce5b995f45ba5e647d8dff3db53] <==
	I0731 17:31:22.810829       1 main.go:322] Node multinode-498089-m03 has CIDR [10.244.3.0/24] 
	I0731 17:31:32.817996       1 main.go:295] Handling node with IPs: map[192.168.39.100:{}]
	I0731 17:31:32.818174       1 main.go:299] handling current node
	I0731 17:31:32.818220       1 main.go:295] Handling node with IPs: map[192.168.39.228:{}]
	I0731 17:31:32.818244       1 main.go:322] Node multinode-498089-m02 has CIDR [10.244.1.0/24] 
	I0731 17:31:32.818477       1 main.go:295] Handling node with IPs: map[192.168.39.204:{}]
	I0731 17:31:32.818516       1 main.go:322] Node multinode-498089-m03 has CIDR [10.244.3.0/24] 
	I0731 17:31:42.816702       1 main.go:295] Handling node with IPs: map[192.168.39.100:{}]
	I0731 17:31:42.816804       1 main.go:299] handling current node
	I0731 17:31:42.816836       1 main.go:295] Handling node with IPs: map[192.168.39.228:{}]
	I0731 17:31:42.816844       1 main.go:322] Node multinode-498089-m02 has CIDR [10.244.1.0/24] 
	I0731 17:31:42.817003       1 main.go:295] Handling node with IPs: map[192.168.39.204:{}]
	I0731 17:31:42.817023       1 main.go:322] Node multinode-498089-m03 has CIDR [10.244.3.0/24] 
	I0731 17:31:52.816936       1 main.go:295] Handling node with IPs: map[192.168.39.228:{}]
	I0731 17:31:52.817075       1 main.go:322] Node multinode-498089-m02 has CIDR [10.244.1.0/24] 
	I0731 17:31:52.817262       1 main.go:295] Handling node with IPs: map[192.168.39.204:{}]
	I0731 17:31:52.817289       1 main.go:322] Node multinode-498089-m03 has CIDR [10.244.3.0/24] 
	I0731 17:31:52.817434       1 main.go:295] Handling node with IPs: map[192.168.39.100:{}]
	I0731 17:31:52.817463       1 main.go:299] handling current node
	I0731 17:32:02.809971       1 main.go:295] Handling node with IPs: map[192.168.39.100:{}]
	I0731 17:32:02.810146       1 main.go:299] handling current node
	I0731 17:32:02.810186       1 main.go:295] Handling node with IPs: map[192.168.39.228:{}]
	I0731 17:32:02.810205       1 main.go:322] Node multinode-498089-m02 has CIDR [10.244.1.0/24] 
	I0731 17:32:02.810462       1 main.go:295] Handling node with IPs: map[192.168.39.204:{}]
	I0731 17:32:02.810526       1 main.go:322] Node multinode-498089-m03 has CIDR [10.244.3.0/24] 
	
	
	==> kindnet [382ba3eb1c2830200fa38a7ce4a97e818badbd910e8576f35bdf33aca05b304d] <==
	I0731 17:34:52.922859       1 main.go:322] Node multinode-498089-m03 has CIDR [10.244.3.0/24] 
	I0731 17:35:02.924538       1 main.go:295] Handling node with IPs: map[192.168.39.100:{}]
	I0731 17:35:02.924662       1 main.go:299] handling current node
	I0731 17:35:02.924701       1 main.go:295] Handling node with IPs: map[192.168.39.228:{}]
	I0731 17:35:02.924721       1 main.go:322] Node multinode-498089-m02 has CIDR [10.244.1.0/24] 
	I0731 17:35:02.924873       1 main.go:295] Handling node with IPs: map[192.168.39.204:{}]
	I0731 17:35:02.924899       1 main.go:322] Node multinode-498089-m03 has CIDR [10.244.3.0/24] 
	I0731 17:35:12.928381       1 main.go:295] Handling node with IPs: map[192.168.39.100:{}]
	I0731 17:35:12.928424       1 main.go:299] handling current node
	I0731 17:35:12.928439       1 main.go:295] Handling node with IPs: map[192.168.39.228:{}]
	I0731 17:35:12.928445       1 main.go:322] Node multinode-498089-m02 has CIDR [10.244.1.0/24] 
	I0731 17:35:12.928738       1 main.go:295] Handling node with IPs: map[192.168.39.204:{}]
	I0731 17:35:12.928823       1 main.go:322] Node multinode-498089-m03 has CIDR [10.244.3.0/24] 
	I0731 17:35:22.922672       1 main.go:295] Handling node with IPs: map[192.168.39.100:{}]
	I0731 17:35:22.922791       1 main.go:299] handling current node
	I0731 17:35:22.922826       1 main.go:295] Handling node with IPs: map[192.168.39.228:{}]
	I0731 17:35:22.922845       1 main.go:322] Node multinode-498089-m02 has CIDR [10.244.1.0/24] 
	I0731 17:35:22.923012       1 main.go:295] Handling node with IPs: map[192.168.39.204:{}]
	I0731 17:35:22.923034       1 main.go:322] Node multinode-498089-m03 has CIDR [10.244.2.0/24] 
	I0731 17:35:32.926211       1 main.go:295] Handling node with IPs: map[192.168.39.100:{}]
	I0731 17:35:32.926286       1 main.go:299] handling current node
	I0731 17:35:32.926342       1 main.go:295] Handling node with IPs: map[192.168.39.228:{}]
	I0731 17:35:32.926352       1 main.go:322] Node multinode-498089-m02 has CIDR [10.244.1.0/24] 
	I0731 17:35:32.926608       1 main.go:295] Handling node with IPs: map[192.168.39.204:{}]
	I0731 17:35:32.926627       1 main.go:322] Node multinode-498089-m03 has CIDR [10.244.2.0/24] 
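The kindnet logs above record each node's PodCIDR, and they show multinode-498089-m03 moving from 10.244.3.0/24 to 10.244.2.0/24 after it was re-registered. A sketch for listing the currently assigned ranges, assuming the kubectl context matches the profile name:

  kubectl --context multinode-498089 get nodes \
    -o custom-columns=NAME:.metadata.name,PODCIDR:.spec.podCIDR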
	
	
	==> kube-apiserver [7706d4d5c5b20f986a98dbe39b9b5f757972cfa89dd0f91e0d780ab4229cd836] <==
	I0731 17:32:05.844423       1 crd_finalizer.go:278] Shutting down CRDFinalizer
	I0731 17:32:05.844472       1 apiapproval_controller.go:198] Shutting down KubernetesAPIApprovalPolicyConformantConditionController
	I0731 17:32:05.844525       1 nonstructuralschema_controller.go:204] Shutting down NonStructuralSchemaConditionController
	I0731 17:32:05.844546       1 establishing_controller.go:87] Shutting down EstablishingController
	I0731 17:32:05.844581       1 naming_controller.go:302] Shutting down NamingConditionController
	I0731 17:32:05.844626       1 crdregistration_controller.go:142] Shutting down crd-autoregister controller
	I0731 17:32:05.844656       1 system_namespaces_controller.go:77] Shutting down system namespaces controller
	I0731 17:32:05.844678       1 apiservice_controller.go:131] Shutting down APIServiceRegistrationController
	I0731 17:32:05.844715       1 customresource_discovery_controller.go:325] Shutting down DiscoveryController
	I0731 17:32:05.844739       1 apf_controller.go:386] Shutting down API Priority and Fairness config worker
	I0731 17:32:05.844788       1 controller.go:129] Ending legacy_token_tracking_controller
	I0731 17:32:05.844813       1 controller.go:130] Shutting down legacy_token_tracking_controller
	I0731 17:32:05.844841       1 controller.go:117] Shutting down OpenAPI V3 controller
	I0731 17:32:05.844883       1 controller.go:167] Shutting down OpenAPI controller
	I0731 17:32:05.844920       1 available_controller.go:439] Shutting down AvailableConditionController
	I0731 17:32:05.845012       1 dynamic_serving_content.go:146] "Shutting down controller" name="aggregator-proxy-cert::/var/lib/minikube/certs/front-proxy-client.crt::/var/lib/minikube/certs/front-proxy-client.key"
	I0731 17:32:05.846474       1 dynamic_cafile_content.go:171] "Shutting down controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0731 17:32:05.846572       1 dynamic_cafile_content.go:171] "Shutting down controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0731 17:32:05.846619       1 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0731 17:32:05.846658       1 controller.go:86] Shutting down OpenAPI V3 AggregationController
	I0731 17:32:05.846682       1 controller.go:84] Shutting down OpenAPI AggregationController
	I0731 17:32:05.846974       1 controller.go:176] quota evaluator worker shutdown
	I0731 17:32:05.847007       1 controller.go:176] quota evaluator worker shutdown
	I0731 17:32:05.847015       1 controller.go:176] quota evaluator worker shutdown
	I0731 17:32:05.847024       1 controller.go:176] quota evaluator worker shutdown
	
	
	==> kube-apiserver [c69ac13c491b850716408eaae15dee0ae649f67e725ee682990cc4a93bea42c7] <==
	I0731 17:33:55.166875       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0731 17:33:55.223839       1 shared_informer.go:320] Caches are synced for configmaps
	I0731 17:33:55.226012       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0731 17:33:55.228294       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0731 17:33:55.228414       1 policy_source.go:224] refreshing policies
	I0731 17:33:55.229241       1 apf_controller.go:379] Running API Priority and Fairness config worker
	I0731 17:33:55.229293       1 apf_controller.go:382] Running API Priority and Fairness periodic rebalancing process
	I0731 17:33:55.230114       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0731 17:33:55.267641       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0731 17:33:55.267820       1 aggregator.go:165] initial CRD sync complete...
	I0731 17:33:55.267864       1 autoregister_controller.go:141] Starting autoregister controller
	I0731 17:33:55.267889       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0731 17:33:55.267912       1 cache.go:39] Caches are synced for autoregister controller
	I0731 17:33:55.309879       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0731 17:33:55.318264       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0731 17:33:55.320541       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0731 17:33:55.333817       1 handler_discovery.go:447] Starting ResourceDiscoveryManager
	I0731 17:33:56.121106       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0731 17:33:58.245379       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0731 17:33:58.364074       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0731 17:33:58.375456       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0731 17:33:58.437856       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0731 17:33:58.443020       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0731 17:34:07.533249       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0731 17:34:07.581837       1 controller.go:615] quota admission added evaluator for: endpoints
	
	
	==> kube-controller-manager [741c5d19093c08d4889a8536dba6fa217e98a0ff4f96ef237f57d12dd34a844b] <==
	I0731 17:28:01.071204       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-498089-m02\" does not exist"
	I0731 17:28:01.101996       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-498089-m02" podCIDRs=["10.244.1.0/24"]
	I0731 17:28:01.598130       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-498089-m02"
	I0731 17:28:20.298672       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-498089-m02"
	I0731 17:28:22.744590       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="37.571731ms"
	I0731 17:28:22.769456       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="24.803508ms"
	I0731 17:28:22.769796       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="35.885µs"
	I0731 17:28:22.769917       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="25.109µs"
	I0731 17:28:26.022002       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="7.936772ms"
	I0731 17:28:26.023255       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="75.696µs"
	I0731 17:28:26.435181       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="6.031937ms"
	I0731 17:28:26.435385       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="86.519µs"
	I0731 17:28:52.873864       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-498089-m03\" does not exist"
	I0731 17:28:52.873917       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-498089-m02"
	I0731 17:28:52.882620       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-498089-m03" podCIDRs=["10.244.2.0/24"]
	I0731 17:28:56.614570       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-498089-m03"
	I0731 17:29:12.288472       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-498089-m02"
	I0731 17:29:40.039580       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-498089-m02"
	I0731 17:29:41.037376       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-498089-m02"
	I0731 17:29:41.039533       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-498089-m03\" does not exist"
	I0731 17:29:41.049940       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-498089-m03" podCIDRs=["10.244.3.0/24"]
	I0731 17:30:00.392946       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-498089-m02"
	I0731 17:30:41.671528       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-498089-m02"
	I0731 17:30:46.744740       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="9.966199ms"
	I0731 17:30:46.745227       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="71.331µs"
	
	
	==> kube-controller-manager [ba2f3c3d981875bdc99216720fb78f7187ed031bddd10f4b276b85fbc1df0506] <==
	I0731 17:34:07.775871       1 shared_informer.go:320] Caches are synced for resource quota
	I0731 17:34:08.187198       1 shared_informer.go:320] Caches are synced for garbage collector
	I0731 17:34:08.187242       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I0731 17:34:08.217215       1 shared_informer.go:320] Caches are synced for garbage collector
	I0731 17:34:29.663965       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="68.003µs"
	I0731 17:34:32.161986       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="25.310778ms"
	I0731 17:34:32.171345       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="8.761246ms"
	I0731 17:34:32.171759       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="122.089µs"
	I0731 17:34:36.330463       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-498089-m02\" does not exist"
	I0731 17:34:36.344966       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-498089-m02" podCIDRs=["10.244.1.0/24"]
	I0731 17:34:38.278649       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="39.396µs"
	I0731 17:34:38.298732       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="40.263µs"
	I0731 17:34:38.315778       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="50.297µs"
	I0731 17:34:38.323530       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="51.229µs"
	I0731 17:34:38.325820       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="33.306µs"
	I0731 17:34:55.781291       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-498089-m02"
	I0731 17:34:55.800723       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="188.351µs"
	I0731 17:34:55.815260       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="38.934µs"
	I0731 17:34:59.369363       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="12.865655ms"
	I0731 17:34:59.369469       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="45.23µs"
	I0731 17:35:13.845487       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-498089-m02"
	I0731 17:35:14.842438       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-498089-m02"
	I0731 17:35:14.842954       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-498089-m03\" does not exist"
	I0731 17:35:14.868148       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-498089-m03" podCIDRs=["10.244.2.0/24"]
	I0731 17:35:34.240820       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-498089-m02"
	
	
	==> kube-proxy [468fe46f35c8cc80128ee1ec6a6050cf94aef0cb919789ed8d25c9880b9dddfc] <==
	I0731 17:33:53.234727       1 server_linux.go:69] "Using iptables proxy"
	I0731 17:33:55.259776       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.39.100"]
	I0731 17:33:55.360861       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0731 17:33:55.360925       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0731 17:33:55.360944       1 server_linux.go:165] "Using iptables Proxier"
	I0731 17:33:55.370578       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0731 17:33:55.371043       1 server.go:872] "Version info" version="v1.30.3"
	I0731 17:33:55.371083       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0731 17:33:55.373882       1 config.go:192] "Starting service config controller"
	I0731 17:33:55.373923       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0731 17:33:55.375226       1 config.go:101] "Starting endpoint slice config controller"
	I0731 17:33:55.375246       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0731 17:33:55.376236       1 config.go:319] "Starting node config controller"
	I0731 17:33:55.376256       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0731 17:33:55.476072       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0731 17:33:55.476420       1 shared_informer.go:320] Caches are synced for node config
	I0731 17:33:55.476447       1 shared_informer.go:320] Caches are synced for service config
	
	
	==> kube-proxy [87a364936bdef18ba9cf82cf399cac1f0c9d855f2bbd704616f08ba3c0154e86] <==
	I0731 17:27:18.507586       1 server_linux.go:69] "Using iptables proxy"
	I0731 17:27:18.537337       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.39.100"]
	I0731 17:27:18.604697       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0731 17:27:18.604739       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0731 17:27:18.604756       1 server_linux.go:165] "Using iptables Proxier"
	I0731 17:27:18.607226       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0731 17:27:18.607905       1 server.go:872] "Version info" version="v1.30.3"
	I0731 17:27:18.607921       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0731 17:27:18.609625       1 config.go:192] "Starting service config controller"
	I0731 17:27:18.609675       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0731 17:27:18.609714       1 config.go:101] "Starting endpoint slice config controller"
	I0731 17:27:18.609732       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0731 17:27:18.611733       1 config.go:319] "Starting node config controller"
	I0731 17:27:18.611776       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0731 17:27:18.709869       1 shared_informer.go:320] Caches are synced for service config
	I0731 17:27:18.709868       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0731 17:27:18.712657       1 shared_informer.go:320] Caches are synced for node config
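Both kube-proxy instances report "Using iptables Proxier", so service routing on this node is programmed as iptables chains. A sketch for confirming the chains exist on the guest, assuming the profile name multinode-498089 (not part of the captured output):

  minikube -p multinode-498089 ssh "sudo iptables-save | grep -c 'KUBE-SVC'"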
	
	
	==> kube-scheduler [7df64e7974e8c57b3618f920f29d46ee4cfe200717f7cba6834e6697f01fc257] <==
	W0731 17:33:55.180694       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0731 17:33:55.180774       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0731 17:33:55.180804       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0731 17:33:55.180828       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0731 17:33:55.216047       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.30.3"
	I0731 17:33:55.216148       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0731 17:33:55.217744       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0731 17:33:55.220404       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0731 17:33:55.220504       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0731 17:33:55.226661       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	W0731 17:33:55.229720       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0731 17:33:55.229798       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0731 17:33:55.229894       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0731 17:33:55.229927       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0731 17:33:55.229991       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope: RBAC: clusterrole.rbac.authorization.k8s.io "system:basic-user" not found
	E0731 17:33:55.230024       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope: RBAC: clusterrole.rbac.authorization.k8s.io "system:basic-user" not found
	W0731 17:33:55.232634       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0731 17:33:55.232737       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0731 17:33:55.232847       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0731 17:33:55.232882       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0731 17:33:55.232939       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0731 17:33:55.232968       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0731 17:33:55.233023       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0731 17:33:55.233051       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	I0731 17:33:55.327368       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
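The "forbidden" warnings during this scheduler start are transient: the scheduler begins listing resources before the restarted API server has finished syncing RBAC, and they stop once the caches sync (last line above). A sketch for confirming the scheduler's permissions afterwards, assuming the kubectl context matches the profile name and that the admin kubeconfig allows impersonation:

  kubectl --context multinode-498089 auth can-i list pods --as=system:kube-scheduler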
	
	
	==> kube-scheduler [fc640bbf40886bebe629913240d2f5ee56a9b60ed6a6ce6e7da42e278f0303e3] <==
	W0731 17:27:01.405651       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0731 17:27:01.406216       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0731 17:27:01.405707       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0731 17:27:01.406242       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0731 17:27:02.435489       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0731 17:27:02.435592       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0731 17:27:02.445901       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0731 17:27:02.446014       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0731 17:27:02.459579       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0731 17:27:02.459672       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0731 17:27:02.495280       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0731 17:27:02.495419       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0731 17:27:02.532977       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0731 17:27:02.533110       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0731 17:27:02.537874       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0731 17:27:02.537923       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0731 17:27:02.583027       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0731 17:27:02.583068       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0731 17:27:02.609935       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0731 17:27:02.609980       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	I0731 17:27:04.894386       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0731 17:32:05.817704       1 secure_serving.go:258] Stopped listening on 127.0.0.1:10259
	I0731 17:32:05.817835       1 tlsconfig.go:255] "Shutting down DynamicServingCertificateController"
	I0731 17:32:05.818217       1 configmap_cafile_content.go:223] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	E0731 17:32:05.819703       1 run.go:74] "command failed" err="finished without leader elect"
	
	
	==> kubelet <==
	Jul 31 17:33:57 multinode-498089 kubelet[3834]: I0731 17:33:57.788160    3834 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1b608d19493aec6aee4416c17b5fa94aca430d3038a95515eeb351fb189a7485"
	Jul 31 17:33:57 multinode-498089 kubelet[3834]: I0731 17:33:57.812942    3834 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world"
	Jul 31 17:33:57 multinode-498089 kubelet[3834]: I0731 17:33:57.852766    3834 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/e893babbbefa5fea11a4d995a36606db-k8s-certs\") pod \"kube-controller-manager-multinode-498089\" (UID: \"e893babbbefa5fea11a4d995a36606db\") " pod="kube-system/kube-controller-manager-multinode-498089"
	Jul 31 17:33:57 multinode-498089 kubelet[3834]: I0731 17:33:57.852814    3834 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/e893babbbefa5fea11a4d995a36606db-usr-share-ca-certificates\") pod \"kube-controller-manager-multinode-498089\" (UID: \"e893babbbefa5fea11a4d995a36606db\") " pod="kube-system/kube-controller-manager-multinode-498089"
	Jul 31 17:33:57 multinode-498089 kubelet[3834]: I0731 17:33:57.852910    3834 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/2950d4410b3899ccbdb0536947b29f40-ca-certs\") pod \"kube-apiserver-multinode-498089\" (UID: \"2950d4410b3899ccbdb0536947b29f40\") " pod="kube-system/kube-apiserver-multinode-498089"
	Jul 31 17:33:57 multinode-498089 kubelet[3834]: I0731 17:33:57.852930    3834 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/23b4d2f3-c925-4a0e-8c9a-ecda421332bf-cni-cfg\") pod \"kindnet-pklkm\" (UID: \"23b4d2f3-c925-4a0e-8c9a-ecda421332bf\") " pod="kube-system/kindnet-pklkm"
	Jul 31 17:33:57 multinode-498089 kubelet[3834]: I0731 17:33:57.852945    3834 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/23b4d2f3-c925-4a0e-8c9a-ecda421332bf-lib-modules\") pod \"kindnet-pklkm\" (UID: \"23b4d2f3-c925-4a0e-8c9a-ecda421332bf\") " pod="kube-system/kindnet-pklkm"
	Jul 31 17:33:57 multinode-498089 kubelet[3834]: I0731 17:33:57.852958    3834 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-data\" (UniqueName: \"kubernetes.io/host-path/be60b8808f43a8b7b2c4a1a6190bedac-etcd-data\") pod \"etcd-multinode-498089\" (UID: \"be60b8808f43a8b7b2c4a1a6190bedac\") " pod="kube-system/etcd-multinode-498089"
	Jul 31 17:33:57 multinode-498089 kubelet[3834]: I0731 17:33:57.852975    3834 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/2950d4410b3899ccbdb0536947b29f40-k8s-certs\") pod \"kube-apiserver-multinode-498089\" (UID: \"2950d4410b3899ccbdb0536947b29f40\") " pod="kube-system/kube-apiserver-multinode-498089"
	Jul 31 17:33:57 multinode-498089 kubelet[3834]: I0731 17:33:57.852989    3834 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/e893babbbefa5fea11a4d995a36606db-flexvolume-dir\") pod \"kube-controller-manager-multinode-498089\" (UID: \"e893babbbefa5fea11a4d995a36606db\") " pod="kube-system/kube-controller-manager-multinode-498089"
	Jul 31 17:33:57 multinode-498089 kubelet[3834]: I0731 17:33:57.853022    3834 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/cb9f0873-0309-40a8-a2f1-c1c6f0713034-xtables-lock\") pod \"kube-proxy-v6xrd\" (UID: \"cb9f0873-0309-40a8-a2f1-c1c6f0713034\") " pod="kube-system/kube-proxy-v6xrd"
	Jul 31 17:33:57 multinode-498089 kubelet[3834]: I0731 17:33:57.853043    3834 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/da4e7247-c042-4da0-9015-e4242d18d043-tmp\") pod \"storage-provisioner\" (UID: \"da4e7247-c042-4da0-9015-e4242d18d043\") " pod="kube-system/storage-provisioner"
	Jul 31 17:33:57 multinode-498089 kubelet[3834]: I0731 17:33:57.853061    3834 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/2950d4410b3899ccbdb0536947b29f40-usr-share-ca-certificates\") pod \"kube-apiserver-multinode-498089\" (UID: \"2950d4410b3899ccbdb0536947b29f40\") " pod="kube-system/kube-apiserver-multinode-498089"
	Jul 31 17:33:57 multinode-498089 kubelet[3834]: I0731 17:33:57.853078    3834 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/e893babbbefa5fea11a4d995a36606db-kubeconfig\") pod \"kube-controller-manager-multinode-498089\" (UID: \"e893babbbefa5fea11a4d995a36606db\") " pod="kube-system/kube-controller-manager-multinode-498089"
	Jul 31 17:33:57 multinode-498089 kubelet[3834]: I0731 17:33:57.853112    3834 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-certs\" (UniqueName: \"kubernetes.io/host-path/be60b8808f43a8b7b2c4a1a6190bedac-etcd-certs\") pod \"etcd-multinode-498089\" (UID: \"be60b8808f43a8b7b2c4a1a6190bedac\") " pod="kube-system/etcd-multinode-498089"
	Jul 31 17:33:57 multinode-498089 kubelet[3834]: I0731 17:33:57.853126    3834 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/23b4d2f3-c925-4a0e-8c9a-ecda421332bf-xtables-lock\") pod \"kindnet-pklkm\" (UID: \"23b4d2f3-c925-4a0e-8c9a-ecda421332bf\") " pod="kube-system/kindnet-pklkm"
	Jul 31 17:33:57 multinode-498089 kubelet[3834]: I0731 17:33:57.853188    3834 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/e893babbbefa5fea11a4d995a36606db-ca-certs\") pod \"kube-controller-manager-multinode-498089\" (UID: \"e893babbbefa5fea11a4d995a36606db\") " pod="kube-system/kube-controller-manager-multinode-498089"
	Jul 31 17:33:57 multinode-498089 kubelet[3834]: I0731 17:33:57.853205    3834 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/cb9f0873-0309-40a8-a2f1-c1c6f0713034-lib-modules\") pod \"kube-proxy-v6xrd\" (UID: \"cb9f0873-0309-40a8-a2f1-c1c6f0713034\") " pod="kube-system/kube-proxy-v6xrd"
	Jul 31 17:33:57 multinode-498089 kubelet[3834]: I0731 17:33:57.853229    3834 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/da27596f1275a6292fe83598b4e87488-kubeconfig\") pod \"kube-scheduler-multinode-498089\" (UID: \"da27596f1275a6292fe83598b4e87488\") " pod="kube-system/kube-scheduler-multinode-498089"
	Jul 31 17:33:58 multinode-498089 kubelet[3834]: I0731 17:33:58.089457    3834 scope.go:117] "RemoveContainer" containerID="6edfcf47092f7b6539d7a28ad02cdce800748c9d7c70a7bf92a9dee67cb4d6e3"
	Jul 31 17:34:57 multinode-498089 kubelet[3834]: E0731 17:34:57.743636    3834 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 31 17:34:57 multinode-498089 kubelet[3834]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 31 17:34:57 multinode-498089 kubelet[3834]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 31 17:34:57 multinode-498089 kubelet[3834]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 31 17:34:57 multinode-498089 kubelet[3834]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0731 17:35:36.619646   45914 logs.go:258] failed to output last start logs: failed to read file /home/jenkins/minikube-integration/19349-8084/.minikube/logs/lastStart.txt: bufio.Scanner: token too long

                                                
                                                
** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p multinode-498089 -n multinode-498089
helpers_test.go:261: (dbg) Run:  kubectl --context multinode-498089 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiNode/serial/RestartKeepsNodes FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiNode/serial/RestartKeepsNodes (335.37s)
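
The "bufio.Scanner: token too long" error in the stderr block above is Go's default per-line limit in bufio.Scanner: a single line in lastStart.txt exceeds the 64 KiB cap, so the logs command cannot replay the last-start log. A minimal, self-contained Go sketch of reading such a file with an enlarged buffer; the path below is a stand-in for illustration, not the report's actual workspace path:

	package main

	import (
		"bufio"
		"fmt"
		"os"
	)

	func main() {
		// Stand-in path; the report's file is lastStart.txt under the Jenkins workspace.
		f, err := os.Open("/tmp/lastStart.txt")
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			return
		}
		defer f.Close()

		sc := bufio.NewScanner(f)
		// bufio.Scanner rejects lines longer than 64 KiB by default and reports
		// "bufio.Scanner: token too long"; raise the cap (here to 1 MiB).
		sc.Buffer(make([]byte, 0, 64*1024), 1024*1024)
		for sc.Scan() {
			_ = sc.Text() // each (possibly very long) log line
		}
		if err := sc.Err(); err != nil {
			fmt.Fprintln(os.Stderr, err)
		}
	}

The failure is only in reading back the log file, not in the cluster state itself; raising the second argument to Buffer is enough to read lines of that length.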

                                                
                                    
x
+
TestMultiNode/serial/StopMultiNode (141.34s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-amd64 -p multinode-498089 stop
E0731 17:36:00.053591   15259 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/functional-496242/client.crt: no such file or directory
multinode_test.go:345: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-498089 stop: exit status 82 (2m0.451029081s)

                                                
                                                
-- stdout --
	* Stopping node "multinode-498089-m02"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:347: failed to stop cluster. args "out/minikube-linux-amd64 -p multinode-498089 stop": exit status 82
multinode_test.go:351: (dbg) Run:  out/minikube-linux-amd64 -p multinode-498089 status
E0731 17:37:57.004771   15259 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/functional-496242/client.crt: no such file or directory
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-498089 status: exit status 3 (18.865712882s)

                                                
                                                
-- stdout --
	multinode-498089
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-498089-m02
	type: Worker
	host: Error
	kubelet: Nonexistent
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0731 17:38:00.099407   46569 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.228:22: connect: no route to host
	E0731 17:38:00.099448   46569 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.228:22: connect: no route to host

                                                
                                                
** /stderr **
multinode_test.go:354: failed to run minikube status. args "out/minikube-linux-amd64 -p multinode-498089 status" : exit status 3
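
The exit status 3 from "minikube status" follows directly from the stderr above: the status check cannot open an SSH session to the worker at 192.168.39.228:22 ("connect: no route to host"), so the node is reported as host: Error and kubelet: Nonexistent. A minimal Go sketch of that kind of TCP reachability probe, with the address copied from this report purely for illustration; this is not minikube's actual implementation:

	package main

	import (
		"fmt"
		"net"
		"time"
	)

	func main() {
		// Address taken from the report's stderr; illustrative only.
		addr := "192.168.39.228:22"
		conn, err := net.DialTimeout("tcp", addr, 5*time.Second)
		if err != nil {
			// An unreachable node surfaces here as "connect: no route to host"
			// or an i/o timeout, which is what drives the host: Error status.
			fmt.Println("node unreachable:", err)
			return
		}
		defer conn.Close()
		fmt.Println("ssh port reachable on", addr)
	}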
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p multinode-498089 -n multinode-498089
helpers_test.go:244: <<< TestMultiNode/serial/StopMultiNode FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiNode/serial/StopMultiNode]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p multinode-498089 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p multinode-498089 logs -n 25: (1.381223217s)
helpers_test.go:252: TestMultiNode/serial/StopMultiNode logs: 
-- stdout --
	
	==> Audit <==
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| Command |                                          Args                                           |     Profile      |  User   | Version |     Start Time      |      End Time       |
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| ssh     | multinode-498089 ssh -n                                                                 | multinode-498089 | jenkins | v1.33.1 | 31 Jul 24 17:29 UTC | 31 Jul 24 17:29 UTC |
	|         | multinode-498089-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-498089 cp multinode-498089-m02:/home/docker/cp-test.txt                       | multinode-498089 | jenkins | v1.33.1 | 31 Jul 24 17:29 UTC | 31 Jul 24 17:29 UTC |
	|         | multinode-498089:/home/docker/cp-test_multinode-498089-m02_multinode-498089.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-498089 ssh -n                                                                 | multinode-498089 | jenkins | v1.33.1 | 31 Jul 24 17:29 UTC | 31 Jul 24 17:29 UTC |
	|         | multinode-498089-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-498089 ssh -n multinode-498089 sudo cat                                       | multinode-498089 | jenkins | v1.33.1 | 31 Jul 24 17:29 UTC | 31 Jul 24 17:29 UTC |
	|         | /home/docker/cp-test_multinode-498089-m02_multinode-498089.txt                          |                  |         |         |                     |                     |
	| cp      | multinode-498089 cp multinode-498089-m02:/home/docker/cp-test.txt                       | multinode-498089 | jenkins | v1.33.1 | 31 Jul 24 17:29 UTC | 31 Jul 24 17:29 UTC |
	|         | multinode-498089-m03:/home/docker/cp-test_multinode-498089-m02_multinode-498089-m03.txt |                  |         |         |                     |                     |
	| ssh     | multinode-498089 ssh -n                                                                 | multinode-498089 | jenkins | v1.33.1 | 31 Jul 24 17:29 UTC | 31 Jul 24 17:29 UTC |
	|         | multinode-498089-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-498089 ssh -n multinode-498089-m03 sudo cat                                   | multinode-498089 | jenkins | v1.33.1 | 31 Jul 24 17:29 UTC | 31 Jul 24 17:29 UTC |
	|         | /home/docker/cp-test_multinode-498089-m02_multinode-498089-m03.txt                      |                  |         |         |                     |                     |
	| cp      | multinode-498089 cp testdata/cp-test.txt                                                | multinode-498089 | jenkins | v1.33.1 | 31 Jul 24 17:29 UTC | 31 Jul 24 17:29 UTC |
	|         | multinode-498089-m03:/home/docker/cp-test.txt                                           |                  |         |         |                     |                     |
	| ssh     | multinode-498089 ssh -n                                                                 | multinode-498089 | jenkins | v1.33.1 | 31 Jul 24 17:29 UTC | 31 Jul 24 17:29 UTC |
	|         | multinode-498089-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-498089 cp multinode-498089-m03:/home/docker/cp-test.txt                       | multinode-498089 | jenkins | v1.33.1 | 31 Jul 24 17:29 UTC | 31 Jul 24 17:29 UTC |
	|         | /tmp/TestMultiNodeserialCopyFile179218134/001/cp-test_multinode-498089-m03.txt          |                  |         |         |                     |                     |
	| ssh     | multinode-498089 ssh -n                                                                 | multinode-498089 | jenkins | v1.33.1 | 31 Jul 24 17:29 UTC | 31 Jul 24 17:29 UTC |
	|         | multinode-498089-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-498089 cp multinode-498089-m03:/home/docker/cp-test.txt                       | multinode-498089 | jenkins | v1.33.1 | 31 Jul 24 17:29 UTC | 31 Jul 24 17:29 UTC |
	|         | multinode-498089:/home/docker/cp-test_multinode-498089-m03_multinode-498089.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-498089 ssh -n                                                                 | multinode-498089 | jenkins | v1.33.1 | 31 Jul 24 17:29 UTC | 31 Jul 24 17:29 UTC |
	|         | multinode-498089-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-498089 ssh -n multinode-498089 sudo cat                                       | multinode-498089 | jenkins | v1.33.1 | 31 Jul 24 17:29 UTC | 31 Jul 24 17:29 UTC |
	|         | /home/docker/cp-test_multinode-498089-m03_multinode-498089.txt                          |                  |         |         |                     |                     |
	| cp      | multinode-498089 cp multinode-498089-m03:/home/docker/cp-test.txt                       | multinode-498089 | jenkins | v1.33.1 | 31 Jul 24 17:29 UTC | 31 Jul 24 17:29 UTC |
	|         | multinode-498089-m02:/home/docker/cp-test_multinode-498089-m03_multinode-498089-m02.txt |                  |         |         |                     |                     |
	| ssh     | multinode-498089 ssh -n                                                                 | multinode-498089 | jenkins | v1.33.1 | 31 Jul 24 17:29 UTC | 31 Jul 24 17:29 UTC |
	|         | multinode-498089-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-498089 ssh -n multinode-498089-m02 sudo cat                                   | multinode-498089 | jenkins | v1.33.1 | 31 Jul 24 17:29 UTC | 31 Jul 24 17:29 UTC |
	|         | /home/docker/cp-test_multinode-498089-m03_multinode-498089-m02.txt                      |                  |         |         |                     |                     |
	| node    | multinode-498089 node stop m03                                                          | multinode-498089 | jenkins | v1.33.1 | 31 Jul 24 17:29 UTC | 31 Jul 24 17:29 UTC |
	| node    | multinode-498089 node start                                                             | multinode-498089 | jenkins | v1.33.1 | 31 Jul 24 17:29 UTC | 31 Jul 24 17:30 UTC |
	|         | m03 -v=7 --alsologtostderr                                                              |                  |         |         |                     |                     |
	| node    | list -p multinode-498089                                                                | multinode-498089 | jenkins | v1.33.1 | 31 Jul 24 17:30 UTC |                     |
	| stop    | -p multinode-498089                                                                     | multinode-498089 | jenkins | v1.33.1 | 31 Jul 24 17:30 UTC |                     |
	| start   | -p multinode-498089                                                                     | multinode-498089 | jenkins | v1.33.1 | 31 Jul 24 17:32 UTC | 31 Jul 24 17:35 UTC |
	|         | --wait=true -v=8                                                                        |                  |         |         |                     |                     |
	|         | --alsologtostderr                                                                       |                  |         |         |                     |                     |
	| node    | list -p multinode-498089                                                                | multinode-498089 | jenkins | v1.33.1 | 31 Jul 24 17:35 UTC |                     |
	| node    | multinode-498089 node delete                                                            | multinode-498089 | jenkins | v1.33.1 | 31 Jul 24 17:35 UTC | 31 Jul 24 17:35 UTC |
	|         | m03                                                                                     |                  |         |         |                     |                     |
	| stop    | multinode-498089 stop                                                                   | multinode-498089 | jenkins | v1.33.1 | 31 Jul 24 17:35 UTC |                     |
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/31 17:32:04
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0731 17:32:04.858081   44770 out.go:291] Setting OutFile to fd 1 ...
	I0731 17:32:04.858323   44770 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 17:32:04.858333   44770 out.go:304] Setting ErrFile to fd 2...
	I0731 17:32:04.858339   44770 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 17:32:04.858549   44770 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19349-8084/.minikube/bin
	I0731 17:32:04.859081   44770 out.go:298] Setting JSON to false
	I0731 17:32:04.859996   44770 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":4469,"bootTime":1722442656,"procs":186,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1065-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0731 17:32:04.860046   44770 start.go:139] virtualization: kvm guest
	I0731 17:32:04.862270   44770 out.go:177] * [multinode-498089] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0731 17:32:04.863505   44770 out.go:177]   - MINIKUBE_LOCATION=19349
	I0731 17:32:04.863518   44770 notify.go:220] Checking for updates...
	I0731 17:32:04.865687   44770 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0731 17:32:04.866837   44770 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19349-8084/kubeconfig
	I0731 17:32:04.867941   44770 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19349-8084/.minikube
	I0731 17:32:04.869044   44770 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0731 17:32:04.870223   44770 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0731 17:32:04.871695   44770 config.go:182] Loaded profile config "multinode-498089": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0731 17:32:04.871777   44770 driver.go:392] Setting default libvirt URI to qemu:///system
	I0731 17:32:04.872269   44770 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 17:32:04.872347   44770 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 17:32:04.887540   44770 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32893
	I0731 17:32:04.887927   44770 main.go:141] libmachine: () Calling .GetVersion
	I0731 17:32:04.888530   44770 main.go:141] libmachine: Using API Version  1
	I0731 17:32:04.888552   44770 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 17:32:04.888922   44770 main.go:141] libmachine: () Calling .GetMachineName
	I0731 17:32:04.889099   44770 main.go:141] libmachine: (multinode-498089) Calling .DriverName
	I0731 17:32:04.923316   44770 out.go:177] * Using the kvm2 driver based on existing profile
	I0731 17:32:04.924753   44770 start.go:297] selected driver: kvm2
	I0731 17:32:04.924767   44770 start.go:901] validating driver "kvm2" against &{Name:multinode-498089 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:multinode-498089 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.100 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.228 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.204 Port:0 KubernetesVersion:v1.30.3 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0731 17:32:04.924931   44770 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0731 17:32:04.925276   44770 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0731 17:32:04.925360   44770 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19349-8084/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0731 17:32:04.939542   44770 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0731 17:32:04.940209   44770 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0731 17:32:04.940279   44770 cni.go:84] Creating CNI manager for ""
	I0731 17:32:04.940293   44770 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0731 17:32:04.940355   44770 start.go:340] cluster config:
	{Name:multinode-498089 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:multinode-498089 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.100 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.228 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.204 Port:0 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0731 17:32:04.940498   44770 iso.go:125] acquiring lock: {Name:mk4494bb5a63902bf12a909c3109db85229bc516 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0731 17:32:04.942246   44770 out.go:177] * Starting "multinode-498089" primary control-plane node in "multinode-498089" cluster
	I0731 17:32:04.943354   44770 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0731 17:32:04.943385   44770 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19349-8084/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4
	I0731 17:32:04.943392   44770 cache.go:56] Caching tarball of preloaded images
	I0731 17:32:04.943463   44770 preload.go:172] Found /home/jenkins/minikube-integration/19349-8084/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0731 17:32:04.943473   44770 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on crio
	I0731 17:32:04.943581   44770 profile.go:143] Saving config to /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/multinode-498089/config.json ...
	I0731 17:32:04.943837   44770 start.go:360] acquireMachinesLock for multinode-498089: {Name:mkd78eacfab7d6c3058c6674434b3d889ec957e0 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0731 17:32:04.943898   44770 start.go:364] duration metric: took 38.407µs to acquireMachinesLock for "multinode-498089"
	I0731 17:32:04.943916   44770 start.go:96] Skipping create...Using existing machine configuration
	I0731 17:32:04.943923   44770 fix.go:54] fixHost starting: 
	I0731 17:32:04.944189   44770 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 17:32:04.944225   44770 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 17:32:04.958902   44770 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39019
	I0731 17:32:04.959335   44770 main.go:141] libmachine: () Calling .GetVersion
	I0731 17:32:04.959754   44770 main.go:141] libmachine: Using API Version  1
	I0731 17:32:04.959769   44770 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 17:32:04.960127   44770 main.go:141] libmachine: () Calling .GetMachineName
	I0731 17:32:04.960325   44770 main.go:141] libmachine: (multinode-498089) Calling .DriverName
	I0731 17:32:04.960447   44770 main.go:141] libmachine: (multinode-498089) Calling .GetState
	I0731 17:32:04.962243   44770 fix.go:112] recreateIfNeeded on multinode-498089: state=Running err=<nil>
	W0731 17:32:04.962258   44770 fix.go:138] unexpected machine state, will restart: <nil>
	I0731 17:32:04.964305   44770 out.go:177] * Updating the running kvm2 "multinode-498089" VM ...
	I0731 17:32:04.965538   44770 machine.go:94] provisionDockerMachine start ...
	I0731 17:32:04.965559   44770 main.go:141] libmachine: (multinode-498089) Calling .DriverName
	I0731 17:32:04.965773   44770 main.go:141] libmachine: (multinode-498089) Calling .GetSSHHostname
	I0731 17:32:04.968333   44770 main.go:141] libmachine: (multinode-498089) DBG | domain multinode-498089 has defined MAC address 52:54:00:b0:35:3d in network mk-multinode-498089
	I0731 17:32:04.968788   44770 main.go:141] libmachine: (multinode-498089) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:35:3d", ip: ""} in network mk-multinode-498089: {Iface:virbr1 ExpiryTime:2024-07-31 18:26:40 +0000 UTC Type:0 Mac:52:54:00:b0:35:3d Iaid: IPaddr:192.168.39.100 Prefix:24 Hostname:multinode-498089 Clientid:01:52:54:00:b0:35:3d}
	I0731 17:32:04.968814   44770 main.go:141] libmachine: (multinode-498089) DBG | domain multinode-498089 has defined IP address 192.168.39.100 and MAC address 52:54:00:b0:35:3d in network mk-multinode-498089
	I0731 17:32:04.968964   44770 main.go:141] libmachine: (multinode-498089) Calling .GetSSHPort
	I0731 17:32:04.969131   44770 main.go:141] libmachine: (multinode-498089) Calling .GetSSHKeyPath
	I0731 17:32:04.969275   44770 main.go:141] libmachine: (multinode-498089) Calling .GetSSHKeyPath
	I0731 17:32:04.969411   44770 main.go:141] libmachine: (multinode-498089) Calling .GetSSHUsername
	I0731 17:32:04.969569   44770 main.go:141] libmachine: Using SSH client type: native
	I0731 17:32:04.969747   44770 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.100 22 <nil> <nil>}
	I0731 17:32:04.969757   44770 main.go:141] libmachine: About to run SSH command:
	hostname
	I0731 17:32:05.079751   44770 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-498089
	
	I0731 17:32:05.079775   44770 main.go:141] libmachine: (multinode-498089) Calling .GetMachineName
	I0731 17:32:05.080010   44770 buildroot.go:166] provisioning hostname "multinode-498089"
	I0731 17:32:05.080036   44770 main.go:141] libmachine: (multinode-498089) Calling .GetMachineName
	I0731 17:32:05.080184   44770 main.go:141] libmachine: (multinode-498089) Calling .GetSSHHostname
	I0731 17:32:05.082906   44770 main.go:141] libmachine: (multinode-498089) DBG | domain multinode-498089 has defined MAC address 52:54:00:b0:35:3d in network mk-multinode-498089
	I0731 17:32:05.083273   44770 main.go:141] libmachine: (multinode-498089) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:35:3d", ip: ""} in network mk-multinode-498089: {Iface:virbr1 ExpiryTime:2024-07-31 18:26:40 +0000 UTC Type:0 Mac:52:54:00:b0:35:3d Iaid: IPaddr:192.168.39.100 Prefix:24 Hostname:multinode-498089 Clientid:01:52:54:00:b0:35:3d}
	I0731 17:32:05.083302   44770 main.go:141] libmachine: (multinode-498089) DBG | domain multinode-498089 has defined IP address 192.168.39.100 and MAC address 52:54:00:b0:35:3d in network mk-multinode-498089
	I0731 17:32:05.083471   44770 main.go:141] libmachine: (multinode-498089) Calling .GetSSHPort
	I0731 17:32:05.083626   44770 main.go:141] libmachine: (multinode-498089) Calling .GetSSHKeyPath
	I0731 17:32:05.083812   44770 main.go:141] libmachine: (multinode-498089) Calling .GetSSHKeyPath
	I0731 17:32:05.083963   44770 main.go:141] libmachine: (multinode-498089) Calling .GetSSHUsername
	I0731 17:32:05.084116   44770 main.go:141] libmachine: Using SSH client type: native
	I0731 17:32:05.084263   44770 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.100 22 <nil> <nil>}
	I0731 17:32:05.084276   44770 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-498089 && echo "multinode-498089" | sudo tee /etc/hostname
	I0731 17:32:05.212409   44770 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-498089
	
	I0731 17:32:05.212438   44770 main.go:141] libmachine: (multinode-498089) Calling .GetSSHHostname
	I0731 17:32:05.215195   44770 main.go:141] libmachine: (multinode-498089) DBG | domain multinode-498089 has defined MAC address 52:54:00:b0:35:3d in network mk-multinode-498089
	I0731 17:32:05.215633   44770 main.go:141] libmachine: (multinode-498089) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:35:3d", ip: ""} in network mk-multinode-498089: {Iface:virbr1 ExpiryTime:2024-07-31 18:26:40 +0000 UTC Type:0 Mac:52:54:00:b0:35:3d Iaid: IPaddr:192.168.39.100 Prefix:24 Hostname:multinode-498089 Clientid:01:52:54:00:b0:35:3d}
	I0731 17:32:05.215678   44770 main.go:141] libmachine: (multinode-498089) DBG | domain multinode-498089 has defined IP address 192.168.39.100 and MAC address 52:54:00:b0:35:3d in network mk-multinode-498089
	I0731 17:32:05.215795   44770 main.go:141] libmachine: (multinode-498089) Calling .GetSSHPort
	I0731 17:32:05.215985   44770 main.go:141] libmachine: (multinode-498089) Calling .GetSSHKeyPath
	I0731 17:32:05.216158   44770 main.go:141] libmachine: (multinode-498089) Calling .GetSSHKeyPath
	I0731 17:32:05.216293   44770 main.go:141] libmachine: (multinode-498089) Calling .GetSSHUsername
	I0731 17:32:05.216490   44770 main.go:141] libmachine: Using SSH client type: native
	I0731 17:32:05.216722   44770 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.100 22 <nil> <nil>}
	I0731 17:32:05.216741   44770 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-498089' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-498089/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-498089' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0731 17:32:05.323809   44770 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0731 17:32:05.323838   44770 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19349-8084/.minikube CaCertPath:/home/jenkins/minikube-integration/19349-8084/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19349-8084/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19349-8084/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19349-8084/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19349-8084/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19349-8084/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19349-8084/.minikube}
	I0731 17:32:05.323885   44770 buildroot.go:174] setting up certificates
	I0731 17:32:05.323900   44770 provision.go:84] configureAuth start
	I0731 17:32:05.323916   44770 main.go:141] libmachine: (multinode-498089) Calling .GetMachineName
	I0731 17:32:05.324174   44770 main.go:141] libmachine: (multinode-498089) Calling .GetIP
	I0731 17:32:05.326844   44770 main.go:141] libmachine: (multinode-498089) DBG | domain multinode-498089 has defined MAC address 52:54:00:b0:35:3d in network mk-multinode-498089
	I0731 17:32:05.327214   44770 main.go:141] libmachine: (multinode-498089) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:35:3d", ip: ""} in network mk-multinode-498089: {Iface:virbr1 ExpiryTime:2024-07-31 18:26:40 +0000 UTC Type:0 Mac:52:54:00:b0:35:3d Iaid: IPaddr:192.168.39.100 Prefix:24 Hostname:multinode-498089 Clientid:01:52:54:00:b0:35:3d}
	I0731 17:32:05.327237   44770 main.go:141] libmachine: (multinode-498089) DBG | domain multinode-498089 has defined IP address 192.168.39.100 and MAC address 52:54:00:b0:35:3d in network mk-multinode-498089
	I0731 17:32:05.327389   44770 main.go:141] libmachine: (multinode-498089) Calling .GetSSHHostname
	I0731 17:32:05.329551   44770 main.go:141] libmachine: (multinode-498089) DBG | domain multinode-498089 has defined MAC address 52:54:00:b0:35:3d in network mk-multinode-498089
	I0731 17:32:05.329954   44770 main.go:141] libmachine: (multinode-498089) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:35:3d", ip: ""} in network mk-multinode-498089: {Iface:virbr1 ExpiryTime:2024-07-31 18:26:40 +0000 UTC Type:0 Mac:52:54:00:b0:35:3d Iaid: IPaddr:192.168.39.100 Prefix:24 Hostname:multinode-498089 Clientid:01:52:54:00:b0:35:3d}
	I0731 17:32:05.329977   44770 main.go:141] libmachine: (multinode-498089) DBG | domain multinode-498089 has defined IP address 192.168.39.100 and MAC address 52:54:00:b0:35:3d in network mk-multinode-498089
	I0731 17:32:05.330101   44770 provision.go:143] copyHostCerts
	I0731 17:32:05.330128   44770 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19349-8084/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19349-8084/.minikube/ca.pem
	I0731 17:32:05.330158   44770 exec_runner.go:144] found /home/jenkins/minikube-integration/19349-8084/.minikube/ca.pem, removing ...
	I0731 17:32:05.330166   44770 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19349-8084/.minikube/ca.pem
	I0731 17:32:05.330229   44770 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19349-8084/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19349-8084/.minikube/ca.pem (1082 bytes)
	I0731 17:32:05.330321   44770 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19349-8084/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19349-8084/.minikube/cert.pem
	I0731 17:32:05.330355   44770 exec_runner.go:144] found /home/jenkins/minikube-integration/19349-8084/.minikube/cert.pem, removing ...
	I0731 17:32:05.330364   44770 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19349-8084/.minikube/cert.pem
	I0731 17:32:05.330416   44770 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19349-8084/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19349-8084/.minikube/cert.pem (1123 bytes)
	I0731 17:32:05.330468   44770 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19349-8084/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19349-8084/.minikube/key.pem
	I0731 17:32:05.330485   44770 exec_runner.go:144] found /home/jenkins/minikube-integration/19349-8084/.minikube/key.pem, removing ...
	I0731 17:32:05.330492   44770 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19349-8084/.minikube/key.pem
	I0731 17:32:05.330514   44770 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19349-8084/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19349-8084/.minikube/key.pem (1675 bytes)
	I0731 17:32:05.330558   44770 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19349-8084/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19349-8084/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19349-8084/.minikube/certs/ca-key.pem org=jenkins.multinode-498089 san=[127.0.0.1 192.168.39.100 localhost minikube multinode-498089]
	I0731 17:32:05.542031   44770 provision.go:177] copyRemoteCerts
	I0731 17:32:05.542088   44770 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0731 17:32:05.542111   44770 main.go:141] libmachine: (multinode-498089) Calling .GetSSHHostname
	I0731 17:32:05.544909   44770 main.go:141] libmachine: (multinode-498089) DBG | domain multinode-498089 has defined MAC address 52:54:00:b0:35:3d in network mk-multinode-498089
	I0731 17:32:05.545284   44770 main.go:141] libmachine: (multinode-498089) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:35:3d", ip: ""} in network mk-multinode-498089: {Iface:virbr1 ExpiryTime:2024-07-31 18:26:40 +0000 UTC Type:0 Mac:52:54:00:b0:35:3d Iaid: IPaddr:192.168.39.100 Prefix:24 Hostname:multinode-498089 Clientid:01:52:54:00:b0:35:3d}
	I0731 17:32:05.545305   44770 main.go:141] libmachine: (multinode-498089) DBG | domain multinode-498089 has defined IP address 192.168.39.100 and MAC address 52:54:00:b0:35:3d in network mk-multinode-498089
	I0731 17:32:05.545466   44770 main.go:141] libmachine: (multinode-498089) Calling .GetSSHPort
	I0731 17:32:05.545667   44770 main.go:141] libmachine: (multinode-498089) Calling .GetSSHKeyPath
	I0731 17:32:05.545801   44770 main.go:141] libmachine: (multinode-498089) Calling .GetSSHUsername
	I0731 17:32:05.545932   44770 sshutil.go:53] new ssh client: &{IP:192.168.39.100 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19349-8084/.minikube/machines/multinode-498089/id_rsa Username:docker}
	I0731 17:32:05.628730   44770 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19349-8084/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0731 17:32:05.628807   44770 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19349-8084/.minikube/machines/server.pem --> /etc/docker/server.pem (1216 bytes)
	I0731 17:32:05.651861   44770 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19349-8084/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0731 17:32:05.651921   44770 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19349-8084/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0731 17:32:05.674267   44770 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19349-8084/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0731 17:32:05.674345   44770 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19349-8084/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0731 17:32:05.696904   44770 provision.go:87] duration metric: took 372.990017ms to configureAuth
	I0731 17:32:05.696930   44770 buildroot.go:189] setting minikube options for container-runtime
	I0731 17:32:05.697130   44770 config.go:182] Loaded profile config "multinode-498089": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0731 17:32:05.697197   44770 main.go:141] libmachine: (multinode-498089) Calling .GetSSHHostname
	I0731 17:32:05.699779   44770 main.go:141] libmachine: (multinode-498089) DBG | domain multinode-498089 has defined MAC address 52:54:00:b0:35:3d in network mk-multinode-498089
	I0731 17:32:05.700212   44770 main.go:141] libmachine: (multinode-498089) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:35:3d", ip: ""} in network mk-multinode-498089: {Iface:virbr1 ExpiryTime:2024-07-31 18:26:40 +0000 UTC Type:0 Mac:52:54:00:b0:35:3d Iaid: IPaddr:192.168.39.100 Prefix:24 Hostname:multinode-498089 Clientid:01:52:54:00:b0:35:3d}
	I0731 17:32:05.700240   44770 main.go:141] libmachine: (multinode-498089) DBG | domain multinode-498089 has defined IP address 192.168.39.100 and MAC address 52:54:00:b0:35:3d in network mk-multinode-498089
	I0731 17:32:05.700405   44770 main.go:141] libmachine: (multinode-498089) Calling .GetSSHPort
	I0731 17:32:05.700592   44770 main.go:141] libmachine: (multinode-498089) Calling .GetSSHKeyPath
	I0731 17:32:05.700780   44770 main.go:141] libmachine: (multinode-498089) Calling .GetSSHKeyPath
	I0731 17:32:05.700896   44770 main.go:141] libmachine: (multinode-498089) Calling .GetSSHUsername
	I0731 17:32:05.701034   44770 main.go:141] libmachine: Using SSH client type: native
	I0731 17:32:05.701197   44770 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.100 22 <nil> <nil>}
	I0731 17:32:05.701212   44770 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0731 17:33:36.388058   44770 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0731 17:33:36.388081   44770 machine.go:97] duration metric: took 1m31.4225305s to provisionDockerMachine
	I0731 17:33:36.388094   44770 start.go:293] postStartSetup for "multinode-498089" (driver="kvm2")
	I0731 17:33:36.388103   44770 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0731 17:33:36.388135   44770 main.go:141] libmachine: (multinode-498089) Calling .DriverName
	I0731 17:33:36.388476   44770 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0731 17:33:36.388513   44770 main.go:141] libmachine: (multinode-498089) Calling .GetSSHHostname
	I0731 17:33:36.391503   44770 main.go:141] libmachine: (multinode-498089) DBG | domain multinode-498089 has defined MAC address 52:54:00:b0:35:3d in network mk-multinode-498089
	I0731 17:33:36.391952   44770 main.go:141] libmachine: (multinode-498089) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:35:3d", ip: ""} in network mk-multinode-498089: {Iface:virbr1 ExpiryTime:2024-07-31 18:26:40 +0000 UTC Type:0 Mac:52:54:00:b0:35:3d Iaid: IPaddr:192.168.39.100 Prefix:24 Hostname:multinode-498089 Clientid:01:52:54:00:b0:35:3d}
	I0731 17:33:36.391981   44770 main.go:141] libmachine: (multinode-498089) DBG | domain multinode-498089 has defined IP address 192.168.39.100 and MAC address 52:54:00:b0:35:3d in network mk-multinode-498089
	I0731 17:33:36.392120   44770 main.go:141] libmachine: (multinode-498089) Calling .GetSSHPort
	I0731 17:33:36.392309   44770 main.go:141] libmachine: (multinode-498089) Calling .GetSSHKeyPath
	I0731 17:33:36.392451   44770 main.go:141] libmachine: (multinode-498089) Calling .GetSSHUsername
	I0731 17:33:36.392619   44770 sshutil.go:53] new ssh client: &{IP:192.168.39.100 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19349-8084/.minikube/machines/multinode-498089/id_rsa Username:docker}
	I0731 17:33:36.478415   44770 ssh_runner.go:195] Run: cat /etc/os-release
	I0731 17:33:36.482171   44770 command_runner.go:130] > NAME=Buildroot
	I0731 17:33:36.482191   44770 command_runner.go:130] > VERSION=2023.02.9-dirty
	I0731 17:33:36.482197   44770 command_runner.go:130] > ID=buildroot
	I0731 17:33:36.482204   44770 command_runner.go:130] > VERSION_ID=2023.02.9
	I0731 17:33:36.482212   44770 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I0731 17:33:36.482359   44770 info.go:137] Remote host: Buildroot 2023.02.9
	I0731 17:33:36.482374   44770 filesync.go:126] Scanning /home/jenkins/minikube-integration/19349-8084/.minikube/addons for local assets ...
	I0731 17:33:36.482426   44770 filesync.go:126] Scanning /home/jenkins/minikube-integration/19349-8084/.minikube/files for local assets ...
	I0731 17:33:36.482492   44770 filesync.go:149] local asset: /home/jenkins/minikube-integration/19349-8084/.minikube/files/etc/ssl/certs/152592.pem -> 152592.pem in /etc/ssl/certs
	I0731 17:33:36.482500   44770 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19349-8084/.minikube/files/etc/ssl/certs/152592.pem -> /etc/ssl/certs/152592.pem
	I0731 17:33:36.482594   44770 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0731 17:33:36.491281   44770 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19349-8084/.minikube/files/etc/ssl/certs/152592.pem --> /etc/ssl/certs/152592.pem (1708 bytes)
	I0731 17:33:36.513081   44770 start.go:296] duration metric: took 124.973721ms for postStartSetup
	I0731 17:33:36.513132   44770 fix.go:56] duration metric: took 1m31.569207544s for fixHost
	I0731 17:33:36.513159   44770 main.go:141] libmachine: (multinode-498089) Calling .GetSSHHostname
	I0731 17:33:36.515858   44770 main.go:141] libmachine: (multinode-498089) DBG | domain multinode-498089 has defined MAC address 52:54:00:b0:35:3d in network mk-multinode-498089
	I0731 17:33:36.516231   44770 main.go:141] libmachine: (multinode-498089) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:35:3d", ip: ""} in network mk-multinode-498089: {Iface:virbr1 ExpiryTime:2024-07-31 18:26:40 +0000 UTC Type:0 Mac:52:54:00:b0:35:3d Iaid: IPaddr:192.168.39.100 Prefix:24 Hostname:multinode-498089 Clientid:01:52:54:00:b0:35:3d}
	I0731 17:33:36.516252   44770 main.go:141] libmachine: (multinode-498089) DBG | domain multinode-498089 has defined IP address 192.168.39.100 and MAC address 52:54:00:b0:35:3d in network mk-multinode-498089
	I0731 17:33:36.516427   44770 main.go:141] libmachine: (multinode-498089) Calling .GetSSHPort
	I0731 17:33:36.516624   44770 main.go:141] libmachine: (multinode-498089) Calling .GetSSHKeyPath
	I0731 17:33:36.516793   44770 main.go:141] libmachine: (multinode-498089) Calling .GetSSHKeyPath
	I0731 17:33:36.516944   44770 main.go:141] libmachine: (multinode-498089) Calling .GetSSHUsername
	I0731 17:33:36.517100   44770 main.go:141] libmachine: Using SSH client type: native
	I0731 17:33:36.517312   44770 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.100 22 <nil> <nil>}
	I0731 17:33:36.517325   44770 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0731 17:33:36.623347   44770 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722447216.595759064
	
	I0731 17:33:36.623367   44770 fix.go:216] guest clock: 1722447216.595759064
	I0731 17:33:36.623394   44770 fix.go:229] Guest: 2024-07-31 17:33:36.595759064 +0000 UTC Remote: 2024-07-31 17:33:36.513139254 +0000 UTC m=+91.689064171 (delta=82.61981ms)
	I0731 17:33:36.623417   44770 fix.go:200] guest clock delta is within tolerance: 82.61981ms
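
	The clock check above runs date +%s.%N in the guest over SSH and compares the result with the host wall clock, accepting the logged delta of roughly 82.6ms. A minimal sketch of that comparison follows; the 2s tolerance is an assumption for illustration, not necessarily minikube's actual threshold.

	package main

	import (
		"fmt"
		"strconv"
		"strings"
		"time"
	)

	// parseUnixNano turns "seconds.nanoseconds" (the date +%s.%N output) into a time.Time.
	func parseUnixNano(s string) time.Time {
		parts := strings.SplitN(strings.TrimSpace(s), ".", 2)
		sec, _ := strconv.ParseInt(parts[0], 10, 64)
		nsec := int64(0)
		if len(parts) == 2 {
			nsec, _ = strconv.ParseInt(parts[1], 10, 64)
		}
		return time.Unix(sec, nsec)
	}

	func main() {
		guest := parseUnixNano("1722447216.595759064") // value from the SSH output above
		host := time.Date(2024, 7, 31, 17, 33, 36, 513139254, time.UTC)
		delta := guest.Sub(host)
		const tolerance = 2 * time.Second // assumed threshold
		fmt.Printf("delta=%v within tolerance=%v: %v\n", delta, tolerance,
			delta < tolerance && delta > -tolerance)
	}
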
	I0731 17:33:36.623424   44770 start.go:83] releasing machines lock for "multinode-498089", held for 1m31.679514445s
	I0731 17:33:36.623448   44770 main.go:141] libmachine: (multinode-498089) Calling .DriverName
	I0731 17:33:36.623727   44770 main.go:141] libmachine: (multinode-498089) Calling .GetIP
	I0731 17:33:36.626386   44770 main.go:141] libmachine: (multinode-498089) DBG | domain multinode-498089 has defined MAC address 52:54:00:b0:35:3d in network mk-multinode-498089
	I0731 17:33:36.626818   44770 main.go:141] libmachine: (multinode-498089) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:35:3d", ip: ""} in network mk-multinode-498089: {Iface:virbr1 ExpiryTime:2024-07-31 18:26:40 +0000 UTC Type:0 Mac:52:54:00:b0:35:3d Iaid: IPaddr:192.168.39.100 Prefix:24 Hostname:multinode-498089 Clientid:01:52:54:00:b0:35:3d}
	I0731 17:33:36.626844   44770 main.go:141] libmachine: (multinode-498089) DBG | domain multinode-498089 has defined IP address 192.168.39.100 and MAC address 52:54:00:b0:35:3d in network mk-multinode-498089
	I0731 17:33:36.626977   44770 main.go:141] libmachine: (multinode-498089) Calling .DriverName
	I0731 17:33:36.627481   44770 main.go:141] libmachine: (multinode-498089) Calling .DriverName
	I0731 17:33:36.627676   44770 main.go:141] libmachine: (multinode-498089) Calling .DriverName
	I0731 17:33:36.627776   44770 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0731 17:33:36.627814   44770 main.go:141] libmachine: (multinode-498089) Calling .GetSSHHostname
	I0731 17:33:36.627856   44770 ssh_runner.go:195] Run: cat /version.json
	I0731 17:33:36.627879   44770 main.go:141] libmachine: (multinode-498089) Calling .GetSSHHostname
	I0731 17:33:36.630486   44770 main.go:141] libmachine: (multinode-498089) DBG | domain multinode-498089 has defined MAC address 52:54:00:b0:35:3d in network mk-multinode-498089
	I0731 17:33:36.630771   44770 main.go:141] libmachine: (multinode-498089) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:35:3d", ip: ""} in network mk-multinode-498089: {Iface:virbr1 ExpiryTime:2024-07-31 18:26:40 +0000 UTC Type:0 Mac:52:54:00:b0:35:3d Iaid: IPaddr:192.168.39.100 Prefix:24 Hostname:multinode-498089 Clientid:01:52:54:00:b0:35:3d}
	I0731 17:33:36.630797   44770 main.go:141] libmachine: (multinode-498089) DBG | domain multinode-498089 has defined IP address 192.168.39.100 and MAC address 52:54:00:b0:35:3d in network mk-multinode-498089
	I0731 17:33:36.630935   44770 main.go:141] libmachine: (multinode-498089) Calling .GetSSHPort
	I0731 17:33:36.631078   44770 main.go:141] libmachine: (multinode-498089) Calling .GetSSHKeyPath
	I0731 17:33:36.631102   44770 main.go:141] libmachine: (multinode-498089) DBG | domain multinode-498089 has defined MAC address 52:54:00:b0:35:3d in network mk-multinode-498089
	I0731 17:33:36.631236   44770 main.go:141] libmachine: (multinode-498089) Calling .GetSSHUsername
	I0731 17:33:36.631386   44770 sshutil.go:53] new ssh client: &{IP:192.168.39.100 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19349-8084/.minikube/machines/multinode-498089/id_rsa Username:docker}
	I0731 17:33:36.631528   44770 main.go:141] libmachine: (multinode-498089) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:35:3d", ip: ""} in network mk-multinode-498089: {Iface:virbr1 ExpiryTime:2024-07-31 18:26:40 +0000 UTC Type:0 Mac:52:54:00:b0:35:3d Iaid: IPaddr:192.168.39.100 Prefix:24 Hostname:multinode-498089 Clientid:01:52:54:00:b0:35:3d}
	I0731 17:33:36.631553   44770 main.go:141] libmachine: (multinode-498089) DBG | domain multinode-498089 has defined IP address 192.168.39.100 and MAC address 52:54:00:b0:35:3d in network mk-multinode-498089
	I0731 17:33:36.631707   44770 main.go:141] libmachine: (multinode-498089) Calling .GetSSHPort
	I0731 17:33:36.631850   44770 main.go:141] libmachine: (multinode-498089) Calling .GetSSHKeyPath
	I0731 17:33:36.631978   44770 main.go:141] libmachine: (multinode-498089) Calling .GetSSHUsername
	I0731 17:33:36.632084   44770 sshutil.go:53] new ssh client: &{IP:192.168.39.100 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19349-8084/.minikube/machines/multinode-498089/id_rsa Username:docker}
	I0731 17:33:36.707368   44770 command_runner.go:130] > {"iso_version": "v1.33.1-1722248113-19339", "kicbase_version": "v0.0.44-1721902582-19326", "minikube_version": "v1.33.1", "commit": "b8389556a97747a5bbaa1906d238251ad536d76e"}
	I0731 17:33:36.733194   44770 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0731 17:33:36.733996   44770 ssh_runner.go:195] Run: systemctl --version
	I0731 17:33:36.739650   44770 command_runner.go:130] > systemd 252 (252)
	I0731 17:33:36.739674   44770 command_runner.go:130] > -PAM -AUDIT -SELINUX -APPARMOR -IMA -SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL -ELFUTILS -FIDO2 -IDN2 -IDN +IPTC +KMOD -LIBCRYPTSETUP +LIBFDISK -PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 -BZIP2 +LZ4 +XZ +ZLIB -ZSTD -BPF_FRAMEWORK -XKBCOMMON -UTMP -SYSVINIT default-hierarchy=unified
	I0731 17:33:36.739878   44770 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0731 17:33:36.899731   44770 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0731 17:33:36.906085   44770 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0731 17:33:36.906440   44770 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0731 17:33:36.906507   44770 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0731 17:33:36.915290   44770 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0731 17:33:36.915325   44770 start.go:495] detecting cgroup driver to use...
	I0731 17:33:36.915384   44770 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0731 17:33:36.930656   44770 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0731 17:33:36.944288   44770 docker.go:217] disabling cri-docker service (if available) ...
	I0731 17:33:36.944355   44770 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0731 17:33:36.957127   44770 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0731 17:33:36.969429   44770 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0731 17:33:37.107233   44770 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0731 17:33:37.244913   44770 docker.go:233] disabling docker service ...
	I0731 17:33:37.245025   44770 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0731 17:33:37.262053   44770 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0731 17:33:37.275191   44770 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0731 17:33:37.417025   44770 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0731 17:33:37.554431   44770 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0731 17:33:37.568310   44770 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0731 17:33:37.585678   44770 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
	I0731 17:33:37.585718   44770 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0731 17:33:37.585773   44770 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 17:33:37.596050   44770 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0731 17:33:37.596101   44770 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 17:33:37.606013   44770 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 17:33:37.615771   44770 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 17:33:37.625468   44770 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0731 17:33:37.635825   44770 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 17:33:37.645486   44770 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 17:33:37.656233   44770 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 17:33:37.666052   44770 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0731 17:33:37.675501   44770 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0731 17:33:37.675548   44770 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0731 17:33:37.684175   44770 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0731 17:33:37.823283   44770 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0731 17:33:44.743654   44770 ssh_runner.go:235] Completed: sudo systemctl restart crio: (6.920338488s)
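
	The sed commands above edit /etc/crio/crio.conf.d/02-crio.conf in place before crio is restarted: pin pause_image to registry.k8s.io/pause:3.9, switch cgroup_manager to cgroupfs, drop any existing conmon_cgroup line and re-add conmon_cgroup = "pod". A rough Go equivalent of those edits, purely for illustration (the helper name and sample input are made up; this is not minikube's crio.go):

	package main

	import (
		"fmt"
		"regexp"
	)

	// configureCrio applies the same three rewrites as the sed commands in the log.
	func configureCrio(conf string) string {
		// pin the pause image
		conf = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
			ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.9"`)
		// drop any existing conmon_cgroup line (sed '/conmon_cgroup = .*/d')
		conf = regexp.MustCompile(`(?m)^conmon_cgroup = .*\n?`).ReplaceAllString(conf, "")
		// use cgroupfs and re-add conmon_cgroup = "pod" right after it
		conf = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
			ReplaceAllString(conf, "cgroup_manager = \"cgroupfs\"\nconmon_cgroup = \"pod\"")
		return conf
	}

	func main() {
		in := `pause_image = "registry.k8s.io/pause:3.8"
	cgroup_manager = "systemd"
	conmon_cgroup = "system.slice"`
		fmt.Println(configureCrio(in))
	}
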
	I0731 17:33:44.743679   44770 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0731 17:33:44.743719   44770 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0731 17:33:44.748369   44770 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I0731 17:33:44.748389   44770 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0731 17:33:44.748411   44770 command_runner.go:130] > Device: 0,22	Inode: 1332        Links: 1
	I0731 17:33:44.748420   44770 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I0731 17:33:44.748429   44770 command_runner.go:130] > Access: 2024-07-31 17:33:44.612491405 +0000
	I0731 17:33:44.748437   44770 command_runner.go:130] > Modify: 2024-07-31 17:33:44.612491405 +0000
	I0731 17:33:44.748450   44770 command_runner.go:130] > Change: 2024-07-31 17:33:44.612491405 +0000
	I0731 17:33:44.748455   44770 command_runner.go:130] >  Birth: -
	I0731 17:33:44.748540   44770 start.go:563] Will wait 60s for crictl version
	I0731 17:33:44.748588   44770 ssh_runner.go:195] Run: which crictl
	I0731 17:33:44.752043   44770 command_runner.go:130] > /usr/bin/crictl
	I0731 17:33:44.752654   44770 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0731 17:33:44.798701   44770 command_runner.go:130] > Version:  0.1.0
	I0731 17:33:44.798728   44770 command_runner.go:130] > RuntimeName:  cri-o
	I0731 17:33:44.798734   44770 command_runner.go:130] > RuntimeVersion:  1.29.1
	I0731 17:33:44.798739   44770 command_runner.go:130] > RuntimeApiVersion:  v1
	I0731 17:33:44.799872   44770 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0731 17:33:44.799956   44770 ssh_runner.go:195] Run: crio --version
	I0731 17:33:44.830303   44770 command_runner.go:130] > crio version 1.29.1
	I0731 17:33:44.830323   44770 command_runner.go:130] > Version:        1.29.1
	I0731 17:33:44.830331   44770 command_runner.go:130] > GitCommit:      unknown
	I0731 17:33:44.830337   44770 command_runner.go:130] > GitCommitDate:  unknown
	I0731 17:33:44.830341   44770 command_runner.go:130] > GitTreeState:   clean
	I0731 17:33:44.830346   44770 command_runner.go:130] > BuildDate:      2024-07-29T16:04:01Z
	I0731 17:33:44.830354   44770 command_runner.go:130] > GoVersion:      go1.21.6
	I0731 17:33:44.830358   44770 command_runner.go:130] > Compiler:       gc
	I0731 17:33:44.830363   44770 command_runner.go:130] > Platform:       linux/amd64
	I0731 17:33:44.830369   44770 command_runner.go:130] > Linkmode:       dynamic
	I0731 17:33:44.830384   44770 command_runner.go:130] > BuildTags:      
	I0731 17:33:44.830393   44770 command_runner.go:130] >   containers_image_ostree_stub
	I0731 17:33:44.830401   44770 command_runner.go:130] >   exclude_graphdriver_btrfs
	I0731 17:33:44.830408   44770 command_runner.go:130] >   btrfs_noversion
	I0731 17:33:44.830415   44770 command_runner.go:130] >   exclude_graphdriver_devicemapper
	I0731 17:33:44.830420   44770 command_runner.go:130] >   libdm_no_deferred_remove
	I0731 17:33:44.830423   44770 command_runner.go:130] >   seccomp
	I0731 17:33:44.830427   44770 command_runner.go:130] > LDFlags:          unknown
	I0731 17:33:44.830431   44770 command_runner.go:130] > SeccompEnabled:   true
	I0731 17:33:44.830435   44770 command_runner.go:130] > AppArmorEnabled:  false
	I0731 17:33:44.830500   44770 ssh_runner.go:195] Run: crio --version
	I0731 17:33:44.858545   44770 command_runner.go:130] > crio version 1.29.1
	I0731 17:33:44.858561   44770 command_runner.go:130] > Version:        1.29.1
	I0731 17:33:44.858567   44770 command_runner.go:130] > GitCommit:      unknown
	I0731 17:33:44.858571   44770 command_runner.go:130] > GitCommitDate:  unknown
	I0731 17:33:44.858575   44770 command_runner.go:130] > GitTreeState:   clean
	I0731 17:33:44.858582   44770 command_runner.go:130] > BuildDate:      2024-07-29T16:04:01Z
	I0731 17:33:44.858587   44770 command_runner.go:130] > GoVersion:      go1.21.6
	I0731 17:33:44.858590   44770 command_runner.go:130] > Compiler:       gc
	I0731 17:33:44.858595   44770 command_runner.go:130] > Platform:       linux/amd64
	I0731 17:33:44.858607   44770 command_runner.go:130] > Linkmode:       dynamic
	I0731 17:33:44.858613   44770 command_runner.go:130] > BuildTags:      
	I0731 17:33:44.858698   44770 command_runner.go:130] >   containers_image_ostree_stub
	I0731 17:33:44.858706   44770 command_runner.go:130] >   exclude_graphdriver_btrfs
	I0731 17:33:44.858712   44770 command_runner.go:130] >   btrfs_noversion
	I0731 17:33:44.858723   44770 command_runner.go:130] >   exclude_graphdriver_devicemapper
	I0731 17:33:44.858730   44770 command_runner.go:130] >   libdm_no_deferred_remove
	I0731 17:33:44.858737   44770 command_runner.go:130] >   seccomp
	I0731 17:33:44.858743   44770 command_runner.go:130] > LDFlags:          unknown
	I0731 17:33:44.858749   44770 command_runner.go:130] > SeccompEnabled:   true
	I0731 17:33:44.858753   44770 command_runner.go:130] > AppArmorEnabled:  false
	I0731 17:33:44.861646   44770 out.go:177] * Preparing Kubernetes v1.30.3 on CRI-O 1.29.1 ...
	I0731 17:33:44.862886   44770 main.go:141] libmachine: (multinode-498089) Calling .GetIP
	I0731 17:33:44.865474   44770 main.go:141] libmachine: (multinode-498089) DBG | domain multinode-498089 has defined MAC address 52:54:00:b0:35:3d in network mk-multinode-498089
	I0731 17:33:44.865851   44770 main.go:141] libmachine: (multinode-498089) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:35:3d", ip: ""} in network mk-multinode-498089: {Iface:virbr1 ExpiryTime:2024-07-31 18:26:40 +0000 UTC Type:0 Mac:52:54:00:b0:35:3d Iaid: IPaddr:192.168.39.100 Prefix:24 Hostname:multinode-498089 Clientid:01:52:54:00:b0:35:3d}
	I0731 17:33:44.865879   44770 main.go:141] libmachine: (multinode-498089) DBG | domain multinode-498089 has defined IP address 192.168.39.100 and MAC address 52:54:00:b0:35:3d in network mk-multinode-498089
	I0731 17:33:44.866066   44770 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0731 17:33:44.869745   44770 command_runner.go:130] > 192.168.39.1	host.minikube.internal
	I0731 17:33:44.869903   44770 kubeadm.go:883] updating cluster {Name:multinode-498089 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:multinode-498089 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.100 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.228 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.204 Port:0 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0731 17:33:44.870084   44770 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0731 17:33:44.870146   44770 ssh_runner.go:195] Run: sudo crictl images --output json
	I0731 17:33:44.909541   44770 command_runner.go:130] > {
	I0731 17:33:44.909561   44770 command_runner.go:130] >   "images": [
	I0731 17:33:44.909566   44770 command_runner.go:130] >     {
	I0731 17:33:44.909577   44770 command_runner.go:130] >       "id": "5cc3abe5717dbf8e0db6fc2e890c3f82fe95e2ce5c5d7320f98a5b71c767d42f",
	I0731 17:33:44.909592   44770 command_runner.go:130] >       "repoTags": [
	I0731 17:33:44.909601   44770 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240715-585640e9"
	I0731 17:33:44.909608   44770 command_runner.go:130] >       ],
	I0731 17:33:44.909617   44770 command_runner.go:130] >       "repoDigests": [
	I0731 17:33:44.909635   44770 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:3b93f681916ee780a9941d48cb20622486c08af54f8d87d801412bcca0832115",
	I0731 17:33:44.909650   44770 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:88ed2adbc140254762f98fad7f4b16d279117356ebaf95aebf191713c828a493"
	I0731 17:33:44.909659   44770 command_runner.go:130] >       ],
	I0731 17:33:44.909667   44770 command_runner.go:130] >       "size": "87165492",
	I0731 17:33:44.909677   44770 command_runner.go:130] >       "uid": null,
	I0731 17:33:44.909686   44770 command_runner.go:130] >       "username": "",
	I0731 17:33:44.909699   44770 command_runner.go:130] >       "spec": null,
	I0731 17:33:44.909709   44770 command_runner.go:130] >       "pinned": false
	I0731 17:33:44.909717   44770 command_runner.go:130] >     },
	I0731 17:33:44.909726   44770 command_runner.go:130] >     {
	I0731 17:33:44.909739   44770 command_runner.go:130] >       "id": "6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46",
	I0731 17:33:44.909749   44770 command_runner.go:130] >       "repoTags": [
	I0731 17:33:44.909760   44770 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240719-e7903573"
	I0731 17:33:44.909768   44770 command_runner.go:130] >       ],
	I0731 17:33:44.909776   44770 command_runner.go:130] >       "repoDigests": [
	I0731 17:33:44.909792   44770 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9",
	I0731 17:33:44.909810   44770 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:da8ad203ec15a72c313015e5609db44bfad7c95d8ce63e87ff97c66363b5680a"
	I0731 17:33:44.909819   44770 command_runner.go:130] >       ],
	I0731 17:33:44.909827   44770 command_runner.go:130] >       "size": "87174707",
	I0731 17:33:44.909836   44770 command_runner.go:130] >       "uid": null,
	I0731 17:33:44.909856   44770 command_runner.go:130] >       "username": "",
	I0731 17:33:44.909865   44770 command_runner.go:130] >       "spec": null,
	I0731 17:33:44.909872   44770 command_runner.go:130] >       "pinned": false
	I0731 17:33:44.909877   44770 command_runner.go:130] >     },
	I0731 17:33:44.909883   44770 command_runner.go:130] >     {
	I0731 17:33:44.909896   44770 command_runner.go:130] >       "id": "8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a",
	I0731 17:33:44.909905   44770 command_runner.go:130] >       "repoTags": [
	I0731 17:33:44.909915   44770 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox:1.28"
	I0731 17:33:44.909923   44770 command_runner.go:130] >       ],
	I0731 17:33:44.909930   44770 command_runner.go:130] >       "repoDigests": [
	I0731 17:33:44.909945   44770 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335",
	I0731 17:33:44.909960   44770 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12"
	I0731 17:33:44.909968   44770 command_runner.go:130] >       ],
	I0731 17:33:44.909976   44770 command_runner.go:130] >       "size": "1363676",
	I0731 17:33:44.909985   44770 command_runner.go:130] >       "uid": null,
	I0731 17:33:44.909994   44770 command_runner.go:130] >       "username": "",
	I0731 17:33:44.910002   44770 command_runner.go:130] >       "spec": null,
	I0731 17:33:44.910009   44770 command_runner.go:130] >       "pinned": false
	I0731 17:33:44.910016   44770 command_runner.go:130] >     },
	I0731 17:33:44.910025   44770 command_runner.go:130] >     {
	I0731 17:33:44.910036   44770 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I0731 17:33:44.910045   44770 command_runner.go:130] >       "repoTags": [
	I0731 17:33:44.910054   44770 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I0731 17:33:44.910061   44770 command_runner.go:130] >       ],
	I0731 17:33:44.910069   44770 command_runner.go:130] >       "repoDigests": [
	I0731 17:33:44.910084   44770 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I0731 17:33:44.910107   44770 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I0731 17:33:44.910115   44770 command_runner.go:130] >       ],
	I0731 17:33:44.910123   44770 command_runner.go:130] >       "size": "31470524",
	I0731 17:33:44.910133   44770 command_runner.go:130] >       "uid": null,
	I0731 17:33:44.910142   44770 command_runner.go:130] >       "username": "",
	I0731 17:33:44.910151   44770 command_runner.go:130] >       "spec": null,
	I0731 17:33:44.910165   44770 command_runner.go:130] >       "pinned": false
	I0731 17:33:44.910173   44770 command_runner.go:130] >     },
	I0731 17:33:44.910180   44770 command_runner.go:130] >     {
	I0731 17:33:44.910191   44770 command_runner.go:130] >       "id": "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4",
	I0731 17:33:44.910209   44770 command_runner.go:130] >       "repoTags": [
	I0731 17:33:44.910220   44770 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.11.1"
	I0731 17:33:44.910226   44770 command_runner.go:130] >       ],
	I0731 17:33:44.910235   44770 command_runner.go:130] >       "repoDigests": [
	I0731 17:33:44.910252   44770 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1",
	I0731 17:33:44.910267   44770 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:2169b3b96af988cf69d7dd69efbcc59433eb027320eb185c6110e0850b997870"
	I0731 17:33:44.910276   44770 command_runner.go:130] >       ],
	I0731 17:33:44.910283   44770 command_runner.go:130] >       "size": "61245718",
	I0731 17:33:44.910292   44770 command_runner.go:130] >       "uid": null,
	I0731 17:33:44.910300   44770 command_runner.go:130] >       "username": "nonroot",
	I0731 17:33:44.910309   44770 command_runner.go:130] >       "spec": null,
	I0731 17:33:44.910317   44770 command_runner.go:130] >       "pinned": false
	I0731 17:33:44.910325   44770 command_runner.go:130] >     },
	I0731 17:33:44.910331   44770 command_runner.go:130] >     {
	I0731 17:33:44.910340   44770 command_runner.go:130] >       "id": "3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899",
	I0731 17:33:44.910346   44770 command_runner.go:130] >       "repoTags": [
	I0731 17:33:44.910360   44770 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.12-0"
	I0731 17:33:44.910370   44770 command_runner.go:130] >       ],
	I0731 17:33:44.910377   44770 command_runner.go:130] >       "repoDigests": [
	I0731 17:33:44.910391   44770 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:2e6b9c67730f1f1dce4c6e16d60135e00608728567f537e8ff70c244756cbb62",
	I0731 17:33:44.910410   44770 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b"
	I0731 17:33:44.910418   44770 command_runner.go:130] >       ],
	I0731 17:33:44.910426   44770 command_runner.go:130] >       "size": "150779692",
	I0731 17:33:44.910436   44770 command_runner.go:130] >       "uid": {
	I0731 17:33:44.910445   44770 command_runner.go:130] >         "value": "0"
	I0731 17:33:44.910452   44770 command_runner.go:130] >       },
	I0731 17:33:44.910462   44770 command_runner.go:130] >       "username": "",
	I0731 17:33:44.910471   44770 command_runner.go:130] >       "spec": null,
	I0731 17:33:44.910480   44770 command_runner.go:130] >       "pinned": false
	I0731 17:33:44.910486   44770 command_runner.go:130] >     },
	I0731 17:33:44.910494   44770 command_runner.go:130] >     {
	I0731 17:33:44.910505   44770 command_runner.go:130] >       "id": "1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d",
	I0731 17:33:44.910514   44770 command_runner.go:130] >       "repoTags": [
	I0731 17:33:44.910525   44770 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.30.3"
	I0731 17:33:44.910533   44770 command_runner.go:130] >       ],
	I0731 17:33:44.910541   44770 command_runner.go:130] >       "repoDigests": [
	I0731 17:33:44.910560   44770 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:a36d558835e48950f6d13b1edbe20605b8dfbc81e088f58221796631e107966c",
	I0731 17:33:44.910574   44770 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:a3a6c80030a6e720734ae3291448388f70b6f1d463f103e4f06f358f8a170315"
	I0731 17:33:44.910581   44770 command_runner.go:130] >       ],
	I0731 17:33:44.910594   44770 command_runner.go:130] >       "size": "117609954",
	I0731 17:33:44.910602   44770 command_runner.go:130] >       "uid": {
	I0731 17:33:44.910608   44770 command_runner.go:130] >         "value": "0"
	I0731 17:33:44.910617   44770 command_runner.go:130] >       },
	I0731 17:33:44.910624   44770 command_runner.go:130] >       "username": "",
	I0731 17:33:44.910633   44770 command_runner.go:130] >       "spec": null,
	I0731 17:33:44.910647   44770 command_runner.go:130] >       "pinned": false
	I0731 17:33:44.910655   44770 command_runner.go:130] >     },
	I0731 17:33:44.910661   44770 command_runner.go:130] >     {
	I0731 17:33:44.910672   44770 command_runner.go:130] >       "id": "76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e",
	I0731 17:33:44.910682   44770 command_runner.go:130] >       "repoTags": [
	I0731 17:33:44.910693   44770 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.30.3"
	I0731 17:33:44.910701   44770 command_runner.go:130] >       ],
	I0731 17:33:44.910709   44770 command_runner.go:130] >       "repoDigests": [
	I0731 17:33:44.910738   44770 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:eff43da55a29a5e66ec9480f28233d733a6a8433b7a46f6e8c07086fa4ef69b7",
	I0731 17:33:44.910754   44770 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:fa179d147c6bacddd1586f6d12ff79a844e951c7b159fdcb92cdf56f3033d91e"
	I0731 17:33:44.910760   44770 command_runner.go:130] >       ],
	I0731 17:33:44.910767   44770 command_runner.go:130] >       "size": "112198984",
	I0731 17:33:44.910775   44770 command_runner.go:130] >       "uid": {
	I0731 17:33:44.910780   44770 command_runner.go:130] >         "value": "0"
	I0731 17:33:44.910785   44770 command_runner.go:130] >       },
	I0731 17:33:44.910789   44770 command_runner.go:130] >       "username": "",
	I0731 17:33:44.910798   44770 command_runner.go:130] >       "spec": null,
	I0731 17:33:44.910803   44770 command_runner.go:130] >       "pinned": false
	I0731 17:33:44.910808   44770 command_runner.go:130] >     },
	I0731 17:33:44.910812   44770 command_runner.go:130] >     {
	I0731 17:33:44.910820   44770 command_runner.go:130] >       "id": "55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1",
	I0731 17:33:44.910824   44770 command_runner.go:130] >       "repoTags": [
	I0731 17:33:44.910831   44770 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.30.3"
	I0731 17:33:44.910835   44770 command_runner.go:130] >       ],
	I0731 17:33:44.910840   44770 command_runner.go:130] >       "repoDigests": [
	I0731 17:33:44.910849   44770 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:8c178447597867a03bbcdf0d1ce43fc8f6807ead2321bd1ec0e845a2f12dad80",
	I0731 17:33:44.910859   44770 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:b26e535e8ee1cbd7dc5642fb61bd36e9d23f32e9242ae0010b2905656e664f65"
	I0731 17:33:44.910871   44770 command_runner.go:130] >       ],
	I0731 17:33:44.910876   44770 command_runner.go:130] >       "size": "85953945",
	I0731 17:33:44.910882   44770 command_runner.go:130] >       "uid": null,
	I0731 17:33:44.910888   44770 command_runner.go:130] >       "username": "",
	I0731 17:33:44.910894   44770 command_runner.go:130] >       "spec": null,
	I0731 17:33:44.910900   44770 command_runner.go:130] >       "pinned": false
	I0731 17:33:44.910905   44770 command_runner.go:130] >     },
	I0731 17:33:44.910910   44770 command_runner.go:130] >     {
	I0731 17:33:44.910921   44770 command_runner.go:130] >       "id": "3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2",
	I0731 17:33:44.910927   44770 command_runner.go:130] >       "repoTags": [
	I0731 17:33:44.910934   44770 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.30.3"
	I0731 17:33:44.910940   44770 command_runner.go:130] >       ],
	I0731 17:33:44.910947   44770 command_runner.go:130] >       "repoDigests": [
	I0731 17:33:44.910957   44770 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:1738178fb116d10e7cde2cfc3671f5dfdad518d773677af740483f2dfe674266",
	I0731 17:33:44.910972   44770 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:2147ab5d2c73dd84e28332fcbee6826d1648eed30a531a52a96501b37d7ee4e4"
	I0731 17:33:44.910978   44770 command_runner.go:130] >       ],
	I0731 17:33:44.910986   44770 command_runner.go:130] >       "size": "63051080",
	I0731 17:33:44.910992   44770 command_runner.go:130] >       "uid": {
	I0731 17:33:44.910999   44770 command_runner.go:130] >         "value": "0"
	I0731 17:33:44.911004   44770 command_runner.go:130] >       },
	I0731 17:33:44.911009   44770 command_runner.go:130] >       "username": "",
	I0731 17:33:44.911016   44770 command_runner.go:130] >       "spec": null,
	I0731 17:33:44.911020   44770 command_runner.go:130] >       "pinned": false
	I0731 17:33:44.911023   44770 command_runner.go:130] >     },
	I0731 17:33:44.911026   44770 command_runner.go:130] >     {
	I0731 17:33:44.911032   44770 command_runner.go:130] >       "id": "e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c",
	I0731 17:33:44.911037   44770 command_runner.go:130] >       "repoTags": [
	I0731 17:33:44.911041   44770 command_runner.go:130] >         "registry.k8s.io/pause:3.9"
	I0731 17:33:44.911045   44770 command_runner.go:130] >       ],
	I0731 17:33:44.911049   44770 command_runner.go:130] >       "repoDigests": [
	I0731 17:33:44.911056   44770 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097",
	I0731 17:33:44.911065   44770 command_runner.go:130] >         "registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10"
	I0731 17:33:44.911068   44770 command_runner.go:130] >       ],
	I0731 17:33:44.911072   44770 command_runner.go:130] >       "size": "750414",
	I0731 17:33:44.911076   44770 command_runner.go:130] >       "uid": {
	I0731 17:33:44.911080   44770 command_runner.go:130] >         "value": "65535"
	I0731 17:33:44.911091   44770 command_runner.go:130] >       },
	I0731 17:33:44.911097   44770 command_runner.go:130] >       "username": "",
	I0731 17:33:44.911101   44770 command_runner.go:130] >       "spec": null,
	I0731 17:33:44.911105   44770 command_runner.go:130] >       "pinned": true
	I0731 17:33:44.911123   44770 command_runner.go:130] >     }
	I0731 17:33:44.911129   44770 command_runner.go:130] >   ]
	I0731 17:33:44.911135   44770 command_runner.go:130] > }
	I0731 17:33:44.911363   44770 crio.go:514] all images are preloaded for cri-o runtime.
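
	The preload check parses the JSON emitted by "sudo crictl images --output json" (shown above) and confirms that the kubeadm images for v1.30.3 are already present on the node. A small self-contained sketch of that check follows; the required-image list here is taken from the log rather than from minikube's own image table.

	package main

	import (
		"encoding/json"
		"fmt"
		"os/exec"
	)

	// imageList mirrors the shape of the crictl JSON payload above.
	type imageList struct {
		Images []struct {
			RepoTags []string `json:"repoTags"`
		} `json:"images"`
	}

	func main() {
		out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
		if err != nil {
			panic(err)
		}
		var list imageList
		if err := json.Unmarshal(out, &list); err != nil {
			panic(err)
		}
		have := map[string]bool{}
		for _, img := range list.Images {
			for _, tag := range img.RepoTags {
				have[tag] = true
			}
		}
		required := []string{
			"registry.k8s.io/kube-apiserver:v1.30.3",
			"registry.k8s.io/kube-controller-manager:v1.30.3",
			"registry.k8s.io/kube-scheduler:v1.30.3",
			"registry.k8s.io/kube-proxy:v1.30.3",
			"registry.k8s.io/etcd:3.5.12-0",
			"registry.k8s.io/coredns/coredns:v1.11.1",
			"registry.k8s.io/pause:3.9",
		}
		for _, r := range required {
			fmt.Printf("%-55s preloaded=%v\n", r, have[r])
		}
	}
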
	I0731 17:33:44.911376   44770 crio.go:433] Images already preloaded, skipping extraction
	I0731 17:33:44.911416   44770 ssh_runner.go:195] Run: sudo crictl images --output json
	I0731 17:33:44.943099   44770 command_runner.go:130] > {
	I0731 17:33:44.943143   44770 command_runner.go:130] >   "images": [
	I0731 17:33:44.943149   44770 command_runner.go:130] >     {
	I0731 17:33:44.943158   44770 command_runner.go:130] >       "id": "5cc3abe5717dbf8e0db6fc2e890c3f82fe95e2ce5c5d7320f98a5b71c767d42f",
	I0731 17:33:44.943164   44770 command_runner.go:130] >       "repoTags": [
	I0731 17:33:44.943173   44770 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240715-585640e9"
	I0731 17:33:44.943178   44770 command_runner.go:130] >       ],
	I0731 17:33:44.943184   44770 command_runner.go:130] >       "repoDigests": [
	I0731 17:33:44.943198   44770 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:3b93f681916ee780a9941d48cb20622486c08af54f8d87d801412bcca0832115",
	I0731 17:33:44.943213   44770 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:88ed2adbc140254762f98fad7f4b16d279117356ebaf95aebf191713c828a493"
	I0731 17:33:44.943223   44770 command_runner.go:130] >       ],
	I0731 17:33:44.943230   44770 command_runner.go:130] >       "size": "87165492",
	I0731 17:33:44.943239   44770 command_runner.go:130] >       "uid": null,
	I0731 17:33:44.943244   44770 command_runner.go:130] >       "username": "",
	I0731 17:33:44.943250   44770 command_runner.go:130] >       "spec": null,
	I0731 17:33:44.943254   44770 command_runner.go:130] >       "pinned": false
	I0731 17:33:44.943258   44770 command_runner.go:130] >     },
	I0731 17:33:44.943266   44770 command_runner.go:130] >     {
	I0731 17:33:44.943275   44770 command_runner.go:130] >       "id": "6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46",
	I0731 17:33:44.943285   44770 command_runner.go:130] >       "repoTags": [
	I0731 17:33:44.943294   44770 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240719-e7903573"
	I0731 17:33:44.943302   44770 command_runner.go:130] >       ],
	I0731 17:33:44.943325   44770 command_runner.go:130] >       "repoDigests": [
	I0731 17:33:44.943340   44770 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9",
	I0731 17:33:44.943350   44770 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:da8ad203ec15a72c313015e5609db44bfad7c95d8ce63e87ff97c66363b5680a"
	I0731 17:33:44.943357   44770 command_runner.go:130] >       ],
	I0731 17:33:44.943364   44770 command_runner.go:130] >       "size": "87174707",
	I0731 17:33:44.943373   44770 command_runner.go:130] >       "uid": null,
	I0731 17:33:44.943389   44770 command_runner.go:130] >       "username": "",
	I0731 17:33:44.943399   44770 command_runner.go:130] >       "spec": null,
	I0731 17:33:44.943408   44770 command_runner.go:130] >       "pinned": false
	I0731 17:33:44.943417   44770 command_runner.go:130] >     },
	I0731 17:33:44.943425   44770 command_runner.go:130] >     {
	I0731 17:33:44.943437   44770 command_runner.go:130] >       "id": "8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a",
	I0731 17:33:44.943447   44770 command_runner.go:130] >       "repoTags": [
	I0731 17:33:44.943456   44770 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox:1.28"
	I0731 17:33:44.943462   44770 command_runner.go:130] >       ],
	I0731 17:33:44.943468   44770 command_runner.go:130] >       "repoDigests": [
	I0731 17:33:44.943482   44770 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335",
	I0731 17:33:44.943497   44770 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12"
	I0731 17:33:44.943506   44770 command_runner.go:130] >       ],
	I0731 17:33:44.943516   44770 command_runner.go:130] >       "size": "1363676",
	I0731 17:33:44.943525   44770 command_runner.go:130] >       "uid": null,
	I0731 17:33:44.943534   44770 command_runner.go:130] >       "username": "",
	I0731 17:33:44.943543   44770 command_runner.go:130] >       "spec": null,
	I0731 17:33:44.943553   44770 command_runner.go:130] >       "pinned": false
	I0731 17:33:44.943557   44770 command_runner.go:130] >     },
	I0731 17:33:44.943563   44770 command_runner.go:130] >     {
	I0731 17:33:44.943573   44770 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I0731 17:33:44.943582   44770 command_runner.go:130] >       "repoTags": [
	I0731 17:33:44.943594   44770 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I0731 17:33:44.943604   44770 command_runner.go:130] >       ],
	I0731 17:33:44.943613   44770 command_runner.go:130] >       "repoDigests": [
	I0731 17:33:44.943628   44770 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I0731 17:33:44.943650   44770 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I0731 17:33:44.943657   44770 command_runner.go:130] >       ],
	I0731 17:33:44.943661   44770 command_runner.go:130] >       "size": "31470524",
	I0731 17:33:44.943668   44770 command_runner.go:130] >       "uid": null,
	I0731 17:33:44.943684   44770 command_runner.go:130] >       "username": "",
	I0731 17:33:44.943693   44770 command_runner.go:130] >       "spec": null,
	I0731 17:33:44.943700   44770 command_runner.go:130] >       "pinned": false
	I0731 17:33:44.943709   44770 command_runner.go:130] >     },
	I0731 17:33:44.943717   44770 command_runner.go:130] >     {
	I0731 17:33:44.943728   44770 command_runner.go:130] >       "id": "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4",
	I0731 17:33:44.943737   44770 command_runner.go:130] >       "repoTags": [
	I0731 17:33:44.943748   44770 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.11.1"
	I0731 17:33:44.943756   44770 command_runner.go:130] >       ],
	I0731 17:33:44.943764   44770 command_runner.go:130] >       "repoDigests": [
	I0731 17:33:44.943773   44770 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1",
	I0731 17:33:44.943786   44770 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:2169b3b96af988cf69d7dd69efbcc59433eb027320eb185c6110e0850b997870"
	I0731 17:33:44.943795   44770 command_runner.go:130] >       ],
	I0731 17:33:44.943803   44770 command_runner.go:130] >       "size": "61245718",
	I0731 17:33:44.943812   44770 command_runner.go:130] >       "uid": null,
	I0731 17:33:44.943822   44770 command_runner.go:130] >       "username": "nonroot",
	I0731 17:33:44.943831   44770 command_runner.go:130] >       "spec": null,
	I0731 17:33:44.943840   44770 command_runner.go:130] >       "pinned": false
	I0731 17:33:44.943848   44770 command_runner.go:130] >     },
	I0731 17:33:44.943856   44770 command_runner.go:130] >     {
	I0731 17:33:44.943864   44770 command_runner.go:130] >       "id": "3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899",
	I0731 17:33:44.943870   44770 command_runner.go:130] >       "repoTags": [
	I0731 17:33:44.943877   44770 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.12-0"
	I0731 17:33:44.943885   44770 command_runner.go:130] >       ],
	I0731 17:33:44.943895   44770 command_runner.go:130] >       "repoDigests": [
	I0731 17:33:44.943906   44770 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:2e6b9c67730f1f1dce4c6e16d60135e00608728567f537e8ff70c244756cbb62",
	I0731 17:33:44.943920   44770 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b"
	I0731 17:33:44.943928   44770 command_runner.go:130] >       ],
	I0731 17:33:44.943935   44770 command_runner.go:130] >       "size": "150779692",
	I0731 17:33:44.943944   44770 command_runner.go:130] >       "uid": {
	I0731 17:33:44.943953   44770 command_runner.go:130] >         "value": "0"
	I0731 17:33:44.943961   44770 command_runner.go:130] >       },
	I0731 17:33:44.943965   44770 command_runner.go:130] >       "username": "",
	I0731 17:33:44.943969   44770 command_runner.go:130] >       "spec": null,
	I0731 17:33:44.943974   44770 command_runner.go:130] >       "pinned": false
	I0731 17:33:44.943982   44770 command_runner.go:130] >     },
	I0731 17:33:44.943997   44770 command_runner.go:130] >     {
	I0731 17:33:44.944011   44770 command_runner.go:130] >       "id": "1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d",
	I0731 17:33:44.944021   44770 command_runner.go:130] >       "repoTags": [
	I0731 17:33:44.944034   44770 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.30.3"
	I0731 17:33:44.944043   44770 command_runner.go:130] >       ],
	I0731 17:33:44.944052   44770 command_runner.go:130] >       "repoDigests": [
	I0731 17:33:44.944063   44770 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:a36d558835e48950f6d13b1edbe20605b8dfbc81e088f58221796631e107966c",
	I0731 17:33:44.944076   44770 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:a3a6c80030a6e720734ae3291448388f70b6f1d463f103e4f06f358f8a170315"
	I0731 17:33:44.944085   44770 command_runner.go:130] >       ],
	I0731 17:33:44.944092   44770 command_runner.go:130] >       "size": "117609954",
	I0731 17:33:44.944101   44770 command_runner.go:130] >       "uid": {
	I0731 17:33:44.944107   44770 command_runner.go:130] >         "value": "0"
	I0731 17:33:44.944115   44770 command_runner.go:130] >       },
	I0731 17:33:44.944122   44770 command_runner.go:130] >       "username": "",
	I0731 17:33:44.944134   44770 command_runner.go:130] >       "spec": null,
	I0731 17:33:44.944144   44770 command_runner.go:130] >       "pinned": false
	I0731 17:33:44.944149   44770 command_runner.go:130] >     },
	I0731 17:33:44.944154   44770 command_runner.go:130] >     {
	I0731 17:33:44.944161   44770 command_runner.go:130] >       "id": "76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e",
	I0731 17:33:44.944166   44770 command_runner.go:130] >       "repoTags": [
	I0731 17:33:44.944178   44770 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.30.3"
	I0731 17:33:44.944186   44770 command_runner.go:130] >       ],
	I0731 17:33:44.944193   44770 command_runner.go:130] >       "repoDigests": [
	I0731 17:33:44.945399   44770 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:eff43da55a29a5e66ec9480f28233d733a6a8433b7a46f6e8c07086fa4ef69b7",
	I0731 17:33:44.945427   44770 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:fa179d147c6bacddd1586f6d12ff79a844e951c7b159fdcb92cdf56f3033d91e"
	I0731 17:33:44.945431   44770 command_runner.go:130] >       ],
	I0731 17:33:44.945439   44770 command_runner.go:130] >       "size": "112198984",
	I0731 17:33:44.945442   44770 command_runner.go:130] >       "uid": {
	I0731 17:33:44.945446   44770 command_runner.go:130] >         "value": "0"
	I0731 17:33:44.945451   44770 command_runner.go:130] >       },
	I0731 17:33:44.945458   44770 command_runner.go:130] >       "username": "",
	I0731 17:33:44.945466   44770 command_runner.go:130] >       "spec": null,
	I0731 17:33:44.945476   44770 command_runner.go:130] >       "pinned": false
	I0731 17:33:44.945485   44770 command_runner.go:130] >     },
	I0731 17:33:44.945492   44770 command_runner.go:130] >     {
	I0731 17:33:44.945506   44770 command_runner.go:130] >       "id": "55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1",
	I0731 17:33:44.945522   44770 command_runner.go:130] >       "repoTags": [
	I0731 17:33:44.945541   44770 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.30.3"
	I0731 17:33:44.945548   44770 command_runner.go:130] >       ],
	I0731 17:33:44.945554   44770 command_runner.go:130] >       "repoDigests": [
	I0731 17:33:44.945565   44770 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:8c178447597867a03bbcdf0d1ce43fc8f6807ead2321bd1ec0e845a2f12dad80",
	I0731 17:33:44.945586   44770 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:b26e535e8ee1cbd7dc5642fb61bd36e9d23f32e9242ae0010b2905656e664f65"
	I0731 17:33:44.945601   44770 command_runner.go:130] >       ],
	I0731 17:33:44.945612   44770 command_runner.go:130] >       "size": "85953945",
	I0731 17:33:44.945622   44770 command_runner.go:130] >       "uid": null,
	I0731 17:33:44.945636   44770 command_runner.go:130] >       "username": "",
	I0731 17:33:44.945644   44770 command_runner.go:130] >       "spec": null,
	I0731 17:33:44.945652   44770 command_runner.go:130] >       "pinned": false
	I0731 17:33:44.945661   44770 command_runner.go:130] >     },
	I0731 17:33:44.945670   44770 command_runner.go:130] >     {
	I0731 17:33:44.945686   44770 command_runner.go:130] >       "id": "3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2",
	I0731 17:33:44.945695   44770 command_runner.go:130] >       "repoTags": [
	I0731 17:33:44.945706   44770 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.30.3"
	I0731 17:33:44.945715   44770 command_runner.go:130] >       ],
	I0731 17:33:44.945725   44770 command_runner.go:130] >       "repoDigests": [
	I0731 17:33:44.945742   44770 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:1738178fb116d10e7cde2cfc3671f5dfdad518d773677af740483f2dfe674266",
	I0731 17:33:44.945755   44770 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:2147ab5d2c73dd84e28332fcbee6826d1648eed30a531a52a96501b37d7ee4e4"
	I0731 17:33:44.945764   44770 command_runner.go:130] >       ],
	I0731 17:33:44.945780   44770 command_runner.go:130] >       "size": "63051080",
	I0731 17:33:44.945789   44770 command_runner.go:130] >       "uid": {
	I0731 17:33:44.945798   44770 command_runner.go:130] >         "value": "0"
	I0731 17:33:44.945807   44770 command_runner.go:130] >       },
	I0731 17:33:44.945816   44770 command_runner.go:130] >       "username": "",
	I0731 17:33:44.945825   44770 command_runner.go:130] >       "spec": null,
	I0731 17:33:44.945834   44770 command_runner.go:130] >       "pinned": false
	I0731 17:33:44.945843   44770 command_runner.go:130] >     },
	I0731 17:33:44.945851   44770 command_runner.go:130] >     {
	I0731 17:33:44.945861   44770 command_runner.go:130] >       "id": "e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c",
	I0731 17:33:44.945871   44770 command_runner.go:130] >       "repoTags": [
	I0731 17:33:44.945886   44770 command_runner.go:130] >         "registry.k8s.io/pause:3.9"
	I0731 17:33:44.945895   44770 command_runner.go:130] >       ],
	I0731 17:33:44.945904   44770 command_runner.go:130] >       "repoDigests": [
	I0731 17:33:44.945922   44770 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097",
	I0731 17:33:44.945942   44770 command_runner.go:130] >         "registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10"
	I0731 17:33:44.945952   44770 command_runner.go:130] >       ],
	I0731 17:33:44.945958   44770 command_runner.go:130] >       "size": "750414",
	I0731 17:33:44.945968   44770 command_runner.go:130] >       "uid": {
	I0731 17:33:44.945978   44770 command_runner.go:130] >         "value": "65535"
	I0731 17:33:44.945992   44770 command_runner.go:130] >       },
	I0731 17:33:44.946001   44770 command_runner.go:130] >       "username": "",
	I0731 17:33:44.946019   44770 command_runner.go:130] >       "spec": null,
	I0731 17:33:44.946025   44770 command_runner.go:130] >       "pinned": true
	I0731 17:33:44.946030   44770 command_runner.go:130] >     }
	I0731 17:33:44.946038   44770 command_runner.go:130] >   ]
	I0731 17:33:44.946044   44770 command_runner.go:130] > }
	I0731 17:33:44.946351   44770 crio.go:514] all images are preloaded for cri-o runtime.
	I0731 17:33:44.946666   44770 cache_images.go:84] Images are preloaded, skipping loading
	I0731 17:33:44.946686   44770 kubeadm.go:934] updating node { 192.168.39.100 8443 v1.30.3 crio true true} ...
	I0731 17:33:44.946804   44770 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=multinode-498089 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.100
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:multinode-498089 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0731 17:33:44.946887   44770 ssh_runner.go:195] Run: crio config
	I0731 17:33:44.987825   44770 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I0731 17:33:44.987854   44770 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I0731 17:33:44.987861   44770 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I0731 17:33:44.987865   44770 command_runner.go:130] > #
	I0731 17:33:44.987872   44770 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I0731 17:33:44.987878   44770 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I0731 17:33:44.987886   44770 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I0731 17:33:44.987897   44770 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I0731 17:33:44.987902   44770 command_runner.go:130] > # reload'.
	I0731 17:33:44.987912   44770 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I0731 17:33:44.987921   44770 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I0731 17:33:44.987927   44770 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I0731 17:33:44.987934   44770 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I0731 17:33:44.987938   44770 command_runner.go:130] > [crio]
	I0731 17:33:44.987944   44770 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I0731 17:33:44.987950   44770 command_runner.go:130] > # container images, in this directory.
	I0731 17:33:44.988283   44770 command_runner.go:130] > root = "/var/lib/containers/storage"
	I0731 17:33:44.988325   44770 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I0731 17:33:44.988334   44770 command_runner.go:130] > runroot = "/var/run/containers/storage"
	I0731 17:33:44.988347   44770 command_runner.go:130] > # Path to the "imagestore". If set, CRI-O stores all of its images in this directory, separately from Root.
	I0731 17:33:44.988356   44770 command_runner.go:130] > # imagestore = ""
	I0731 17:33:44.988366   44770 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I0731 17:33:44.988376   44770 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I0731 17:33:44.988387   44770 command_runner.go:130] > storage_driver = "overlay"
	I0731 17:33:44.988397   44770 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I0731 17:33:44.988410   44770 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I0731 17:33:44.988416   44770 command_runner.go:130] > storage_option = [
	I0731 17:33:44.988424   44770 command_runner.go:130] > 	"overlay.mountopt=nodev,metacopy=on",
	I0731 17:33:44.988430   44770 command_runner.go:130] > ]
	I0731 17:33:44.988440   44770 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I0731 17:33:44.988449   44770 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I0731 17:33:44.988456   44770 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I0731 17:33:44.988466   44770 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I0731 17:33:44.988475   44770 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I0731 17:33:44.988485   44770 command_runner.go:130] > # always happen on a node reboot
	I0731 17:33:44.988493   44770 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I0731 17:33:44.988510   44770 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I0731 17:33:44.988523   44770 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I0731 17:33:44.988531   44770 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I0731 17:33:44.988543   44770 command_runner.go:130] > version_file_persist = "/var/lib/crio/version"
	I0731 17:33:44.988561   44770 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I0731 17:33:44.988578   44770 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I0731 17:33:44.988585   44770 command_runner.go:130] > # internal_wipe = true
	I0731 17:33:44.988596   44770 command_runner.go:130] > # InternalRepair is whether CRI-O should check if the container and image storage was corrupted after a sudden restart.
	I0731 17:33:44.988607   44770 command_runner.go:130] > # If it was, CRI-O also attempts to repair the storage.
	I0731 17:33:44.988615   44770 command_runner.go:130] > # internal_repair = false
	I0731 17:33:44.988626   44770 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I0731 17:33:44.988635   44770 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I0731 17:33:44.988647   44770 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I0731 17:33:44.988656   44770 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I0731 17:33:44.988669   44770 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I0731 17:33:44.988679   44770 command_runner.go:130] > [crio.api]
	I0731 17:33:44.988689   44770 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I0731 17:33:44.988700   44770 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I0731 17:33:44.988711   44770 command_runner.go:130] > # IP address on which the stream server will listen.
	I0731 17:33:44.988725   44770 command_runner.go:130] > # stream_address = "127.0.0.1"
	I0731 17:33:44.988739   44770 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I0731 17:33:44.988748   44770 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I0731 17:33:44.988760   44770 command_runner.go:130] > # stream_port = "0"
	I0731 17:33:44.988770   44770 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I0731 17:33:44.988780   44770 command_runner.go:130] > # stream_enable_tls = false
	I0731 17:33:44.988789   44770 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I0731 17:33:44.988801   44770 command_runner.go:130] > # stream_idle_timeout = ""
	I0731 17:33:44.988816   44770 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I0731 17:33:44.988828   44770 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes within 5
	I0731 17:33:44.988834   44770 command_runner.go:130] > # minutes.
	I0731 17:33:44.988843   44770 command_runner.go:130] > # stream_tls_cert = ""
	I0731 17:33:44.988853   44770 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I0731 17:33:44.988865   44770 command_runner.go:130] > # change and CRI-O will automatically pick up the changes within 5 minutes.
	I0731 17:33:44.988872   44770 command_runner.go:130] > # stream_tls_key = ""
	I0731 17:33:44.988885   44770 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I0731 17:33:44.988899   44770 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I0731 17:33:44.988930   44770 command_runner.go:130] > # automatically pick up the changes within 5 minutes.
	I0731 17:33:44.988940   44770 command_runner.go:130] > # stream_tls_ca = ""
	I0731 17:33:44.988952   44770 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 80 * 1024 * 1024.
	I0731 17:33:44.988964   44770 command_runner.go:130] > grpc_max_send_msg_size = 16777216
	I0731 17:33:44.988977   44770 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 80 * 1024 * 1024.
	I0731 17:33:44.988987   44770 command_runner.go:130] > grpc_max_recv_msg_size = 16777216
	I0731 17:33:44.988997   44770 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I0731 17:33:44.989009   44770 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I0731 17:33:44.989018   44770 command_runner.go:130] > [crio.runtime]
	I0731 17:33:44.989028   44770 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I0731 17:33:44.989040   44770 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I0731 17:33:44.989049   44770 command_runner.go:130] > # "nofile=1024:2048"
	I0731 17:33:44.989063   44770 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I0731 17:33:44.989073   44770 command_runner.go:130] > # default_ulimits = [
	I0731 17:33:44.989078   44770 command_runner.go:130] > # ]
	I0731 17:33:44.989088   44770 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I0731 17:33:44.989097   44770 command_runner.go:130] > # no_pivot = false
	I0731 17:33:44.989105   44770 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I0731 17:33:44.989117   44770 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I0731 17:33:44.989125   44770 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I0731 17:33:44.989137   44770 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I0731 17:33:44.989147   44770 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I0731 17:33:44.989158   44770 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0731 17:33:44.989170   44770 command_runner.go:130] > conmon = "/usr/libexec/crio/conmon"
	I0731 17:33:44.989182   44770 command_runner.go:130] > # Cgroup setting for conmon
	I0731 17:33:44.989194   44770 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I0731 17:33:44.989204   44770 command_runner.go:130] > conmon_cgroup = "pod"
	I0731 17:33:44.989213   44770 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I0731 17:33:44.989223   44770 command_runner.go:130] > # environment variables to conmon or the runtime.
	I0731 17:33:44.989236   44770 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0731 17:33:44.989245   44770 command_runner.go:130] > conmon_env = [
	I0731 17:33:44.989254   44770 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I0731 17:33:44.989262   44770 command_runner.go:130] > ]
	I0731 17:33:44.989271   44770 command_runner.go:130] > # Additional environment variables to set for all the
	I0731 17:33:44.989283   44770 command_runner.go:130] > # containers. These are overridden if set in the
	I0731 17:33:44.989293   44770 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I0731 17:33:44.989301   44770 command_runner.go:130] > # default_env = [
	I0731 17:33:44.989307   44770 command_runner.go:130] > # ]
	I0731 17:33:44.989320   44770 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I0731 17:33:44.989335   44770 command_runner.go:130] > # This option is deprecated, and will be interpreted from whether SELinux is enabled on the host in the future.
	I0731 17:33:44.989341   44770 command_runner.go:130] > # selinux = false
	I0731 17:33:44.989351   44770 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I0731 17:33:44.989361   44770 command_runner.go:130] > # for the runtime. If not specified, then the internal default seccomp profile
	I0731 17:33:44.989372   44770 command_runner.go:130] > # will be used. This option supports live configuration reload.
	I0731 17:33:44.989378   44770 command_runner.go:130] > # seccomp_profile = ""
	I0731 17:33:44.989387   44770 command_runner.go:130] > # Changes the meaning of an empty seccomp profile. By default
	I0731 17:33:44.989395   44770 command_runner.go:130] > # (and according to CRI spec), an empty profile means unconfined.
	I0731 17:33:44.989408   44770 command_runner.go:130] > # This option tells CRI-O to treat an empty profile as the default profile,
	I0731 17:33:44.989418   44770 command_runner.go:130] > # which might increase security.
	I0731 17:33:44.989425   44770 command_runner.go:130] > # This option is currently deprecated,
	I0731 17:33:44.989438   44770 command_runner.go:130] > # and will be replaced by the SeccompDefault FeatureGate in Kubernetes.
	I0731 17:33:44.989449   44770 command_runner.go:130] > seccomp_use_default_when_empty = false
	I0731 17:33:44.989462   44770 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I0731 17:33:44.989476   44770 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I0731 17:33:44.989490   44770 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I0731 17:33:44.989504   44770 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I0731 17:33:44.989516   44770 command_runner.go:130] > # This option supports live configuration reload.
	I0731 17:33:44.989527   44770 command_runner.go:130] > # apparmor_profile = "crio-default"
	I0731 17:33:44.989537   44770 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I0731 17:33:44.989556   44770 command_runner.go:130] > # the cgroup blockio controller.
	I0731 17:33:44.989566   44770 command_runner.go:130] > # blockio_config_file = ""
	I0731 17:33:44.989577   44770 command_runner.go:130] > # Reload blockio-config-file and rescan blockio devices in the system before applying
	I0731 17:33:44.989585   44770 command_runner.go:130] > # blockio parameters.
	I0731 17:33:44.989592   44770 command_runner.go:130] > # blockio_reload = false
	I0731 17:33:44.989602   44770 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I0731 17:33:44.989619   44770 command_runner.go:130] > # irqbalance daemon.
	I0731 17:33:44.989632   44770 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I0731 17:33:44.989644   44770 command_runner.go:130] > # irqbalance_config_restore_file allows to set a cpu mask CRI-O should
	I0731 17:33:44.989657   44770 command_runner.go:130] > # restore as irqbalance config at startup. Set to empty string to disable this flow entirely.
	I0731 17:33:44.989672   44770 command_runner.go:130] > # By default, CRI-O manages the irqbalance configuration to enable dynamic IRQ pinning.
	I0731 17:33:44.989684   44770 command_runner.go:130] > # irqbalance_config_restore_file = "/etc/sysconfig/orig_irq_banned_cpus"
	I0731 17:33:44.989695   44770 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I0731 17:33:44.989707   44770 command_runner.go:130] > # This option supports live configuration reload.
	I0731 17:33:44.989715   44770 command_runner.go:130] > # rdt_config_file = ""
	I0731 17:33:44.989723   44770 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I0731 17:33:44.989733   44770 command_runner.go:130] > cgroup_manager = "cgroupfs"
	I0731 17:33:44.989815   44770 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I0731 17:33:44.989830   44770 command_runner.go:130] > # separate_pull_cgroup = ""
	I0731 17:33:44.989845   44770 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I0731 17:33:44.989859   44770 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I0731 17:33:44.989868   44770 command_runner.go:130] > # will be added.
	I0731 17:33:44.989875   44770 command_runner.go:130] > # default_capabilities = [
	I0731 17:33:44.989885   44770 command_runner.go:130] > # 	"CHOWN",
	I0731 17:33:44.989891   44770 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I0731 17:33:44.989898   44770 command_runner.go:130] > # 	"FSETID",
	I0731 17:33:44.989904   44770 command_runner.go:130] > # 	"FOWNER",
	I0731 17:33:44.989912   44770 command_runner.go:130] > # 	"SETGID",
	I0731 17:33:44.989918   44770 command_runner.go:130] > # 	"SETUID",
	I0731 17:33:44.989926   44770 command_runner.go:130] > # 	"SETPCAP",
	I0731 17:33:44.989934   44770 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I0731 17:33:44.989942   44770 command_runner.go:130] > # 	"KILL",
	I0731 17:33:44.989948   44770 command_runner.go:130] > # ]
	I0731 17:33:44.989962   44770 command_runner.go:130] > # Add capabilities to the inheritable set, as well as the default group of permitted, bounding and effective.
	I0731 17:33:44.989975   44770 command_runner.go:130] > # If capabilities are expected to work for non-root users, this option should be set.
	I0731 17:33:44.989986   44770 command_runner.go:130] > # add_inheritable_capabilities = false
	I0731 17:33:44.989999   44770 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I0731 17:33:44.990011   44770 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0731 17:33:44.990020   44770 command_runner.go:130] > default_sysctls = [
	I0731 17:33:44.990028   44770 command_runner.go:130] > 	"net.ipv4.ip_unprivileged_port_start=0",
	I0731 17:33:44.990037   44770 command_runner.go:130] > ]
	I0731 17:33:44.990044   44770 command_runner.go:130] > # List of devices on the host that a
	I0731 17:33:44.990058   44770 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I0731 17:33:44.990068   44770 command_runner.go:130] > # allowed_devices = [
	I0731 17:33:44.990077   44770 command_runner.go:130] > # 	"/dev/fuse",
	I0731 17:33:44.990082   44770 command_runner.go:130] > # ]
	I0731 17:33:44.990090   44770 command_runner.go:130] > # List of additional devices. specified as
	I0731 17:33:44.990104   44770 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I0731 17:33:44.990115   44770 command_runner.go:130] > # If it is empty or commented out, only the devices
	I0731 17:33:44.990124   44770 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0731 17:33:44.990135   44770 command_runner.go:130] > # additional_devices = [
	I0731 17:33:44.990144   44770 command_runner.go:130] > # ]
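	As a concrete instance of the "<device-on-host>:<device-on-container>:<permissions>" format documented above, an uncommented entry could look like the following sketch (the device paths are illustrative assumptions, not taken from this run):

	additional_devices = [
		"/dev/sdc:/dev/xvdc:rwm",
	]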
	I0731 17:33:44.990152   44770 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I0731 17:33:44.990170   44770 command_runner.go:130] > # cdi_spec_dirs = [
	I0731 17:33:44.990180   44770 command_runner.go:130] > # 	"/etc/cdi",
	I0731 17:33:44.990187   44770 command_runner.go:130] > # 	"/var/run/cdi",
	I0731 17:33:44.990193   44770 command_runner.go:130] > # ]
	I0731 17:33:44.990203   44770 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I0731 17:33:44.990217   44770 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I0731 17:33:44.990225   44770 command_runner.go:130] > # Defaults to false.
	I0731 17:33:44.990234   44770 command_runner.go:130] > # device_ownership_from_security_context = false
	I0731 17:33:44.990248   44770 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I0731 17:33:44.990260   44770 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I0731 17:33:44.990269   44770 command_runner.go:130] > # hooks_dir = [
	I0731 17:33:44.990277   44770 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I0731 17:33:44.990286   44770 command_runner.go:130] > # ]
	I0731 17:33:44.990296   44770 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I0731 17:33:44.990309   44770 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I0731 17:33:44.990320   44770 command_runner.go:130] > # its default mounts from the following two files:
	I0731 17:33:44.990328   44770 command_runner.go:130] > #
	I0731 17:33:44.990337   44770 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I0731 17:33:44.990351   44770 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I0731 17:33:44.990361   44770 command_runner.go:130] > #      override the default mounts shipped with the package.
	I0731 17:33:44.990369   44770 command_runner.go:130] > #
	I0731 17:33:44.990380   44770 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I0731 17:33:44.990393   44770 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I0731 17:33:44.990408   44770 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I0731 17:33:44.990420   44770 command_runner.go:130] > #      only add mounts it finds in this file.
	I0731 17:33:44.990428   44770 command_runner.go:130] > #
	I0731 17:33:44.990435   44770 command_runner.go:130] > # default_mounts_file = ""
	I0731 17:33:44.990448   44770 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I0731 17:33:44.990462   44770 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I0731 17:33:44.990472   44770 command_runner.go:130] > pids_limit = 1024
	I0731 17:33:44.990481   44770 command_runner.go:130] > # Maximum size allowed for the container log file. Negative numbers indicate
	I0731 17:33:44.990493   44770 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I0731 17:33:44.990506   44770 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I0731 17:33:44.990518   44770 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I0731 17:33:44.990527   44770 command_runner.go:130] > # log_size_max = -1
	I0731 17:33:44.990541   44770 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kubernetes log file
	I0731 17:33:44.990566   44770 command_runner.go:130] > # log_to_journald = false
	I0731 17:33:44.990580   44770 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I0731 17:33:44.990591   44770 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I0731 17:33:44.990602   44770 command_runner.go:130] > # Path to directory for container attach sockets.
	I0731 17:33:44.990614   44770 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I0731 17:33:44.990627   44770 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I0731 17:33:44.990637   44770 command_runner.go:130] > # bind_mount_prefix = ""
	I0731 17:33:44.990646   44770 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I0731 17:33:44.990655   44770 command_runner.go:130] > # read_only = false
	I0731 17:33:44.990666   44770 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I0731 17:33:44.990678   44770 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I0731 17:33:44.990687   44770 command_runner.go:130] > # live configuration reload.
	I0731 17:33:44.990694   44770 command_runner.go:130] > # log_level = "info"
	I0731 17:33:44.990704   44770 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I0731 17:33:44.990714   44770 command_runner.go:130] > # This option supports live configuration reload.
	I0731 17:33:44.990723   44770 command_runner.go:130] > # log_filter = ""
	I0731 17:33:44.990732   44770 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I0731 17:33:44.990748   44770 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I0731 17:33:44.990757   44770 command_runner.go:130] > # separated by comma.
	I0731 17:33:44.990767   44770 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0731 17:33:44.990776   44770 command_runner.go:130] > # uid_mappings = ""
	I0731 17:33:44.990785   44770 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I0731 17:33:44.990796   44770 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I0731 17:33:44.990807   44770 command_runner.go:130] > # separated by comma.
	I0731 17:33:44.990821   44770 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0731 17:33:44.990845   44770 command_runner.go:130] > # gid_mappings = ""
	I0731 17:33:44.990861   44770 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I0731 17:33:44.990873   44770 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0731 17:33:44.990885   44770 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0731 17:33:44.990899   44770 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0731 17:33:44.990909   44770 command_runner.go:130] > # minimum_mappable_uid = -1
	I0731 17:33:44.990918   44770 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I0731 17:33:44.990930   44770 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0731 17:33:44.990941   44770 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0731 17:33:44.990956   44770 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0731 17:33:44.990966   44770 command_runner.go:130] > # minimum_mappable_gid = -1
	I0731 17:33:44.990986   44770 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I0731 17:33:44.990996   44770 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I0731 17:33:44.991003   44770 command_runner.go:130] > # value is 30s, whereas lower values are not considered by CRI-O.
	I0731 17:33:44.991010   44770 command_runner.go:130] > # ctr_stop_timeout = 30
	I0731 17:33:44.991015   44770 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I0731 17:33:44.991023   44770 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I0731 17:33:44.991028   44770 command_runner.go:130] > # a kernel separating runtime (like kata).
	I0731 17:33:44.991035   44770 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I0731 17:33:44.991038   44770 command_runner.go:130] > drop_infra_ctr = false
	I0731 17:33:44.991046   44770 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I0731 17:33:44.991051   44770 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I0731 17:33:44.991059   44770 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I0731 17:33:44.991065   44770 command_runner.go:130] > # infra_ctr_cpuset = ""
	I0731 17:33:44.991072   44770 command_runner.go:130] > # shared_cpuset  determines the CPU set which is allowed to be shared between guaranteed containers,
	I0731 17:33:44.991078   44770 command_runner.go:130] > # regardless of, and in addition to, the exclusiveness of their CPUs.
	I0731 17:33:44.991089   44770 command_runner.go:130] > # This field is optional and would not be used if not specified.
	I0731 17:33:44.991095   44770 command_runner.go:130] > # You can specify CPUs in the Linux CPU list format.
	I0731 17:33:44.991104   44770 command_runner.go:130] > # shared_cpuset = ""
	I0731 17:33:44.991125   44770 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I0731 17:33:44.991136   44770 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I0731 17:33:44.991143   44770 command_runner.go:130] > # namespaces_dir = "/var/run"
	I0731 17:33:44.991156   44770 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I0731 17:33:44.991167   44770 command_runner.go:130] > pinns_path = "/usr/bin/pinns"
	I0731 17:33:44.991178   44770 command_runner.go:130] > # Globally enable/disable CRIU support which is necessary to
	I0731 17:33:44.991191   44770 command_runner.go:130] > # checkpoint and restore container or pods (even if CRIU is found in $PATH).
	I0731 17:33:44.991201   44770 command_runner.go:130] > # enable_criu_support = false
	I0731 17:33:44.991212   44770 command_runner.go:130] > # Enable/disable the generation of the container,
	I0731 17:33:44.991224   44770 command_runner.go:130] > # sandbox lifecycle events to be sent to the Kubelet to optimize the PLEG
	I0731 17:33:44.991233   44770 command_runner.go:130] > # enable_pod_events = false
	I0731 17:33:44.991240   44770 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I0731 17:33:44.991252   44770 command_runner.go:130] > # The name is matched against the runtimes map below.
	I0731 17:33:44.991259   44770 command_runner.go:130] > # default_runtime = "runc"
	I0731 17:33:44.991265   44770 command_runner.go:130] > # A list of paths that, when absent from the host,
	I0731 17:33:44.991274   44770 command_runner.go:130] > # will cause a container creation to fail (as opposed to the current behavior of being created as a directory).
	I0731 17:33:44.991283   44770 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jeopardize the health of the node, and whose
	I0731 17:33:44.991297   44770 command_runner.go:130] > # creation as a file is not desired either.
	I0731 17:33:44.991308   44770 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I0731 17:33:44.991316   44770 command_runner.go:130] > # the hostname is being managed dynamically.
	I0731 17:33:44.991320   44770 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I0731 17:33:44.991326   44770 command_runner.go:130] > # ]
	I0731 17:33:44.991331   44770 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I0731 17:33:44.991339   44770 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I0731 17:33:44.991345   44770 command_runner.go:130] > # If no runtime handler is provided, the "default_runtime" will be used.
	I0731 17:33:44.991352   44770 command_runner.go:130] > # Each entry in the table should follow the format:
	I0731 17:33:44.991355   44770 command_runner.go:130] > #
	I0731 17:33:44.991359   44770 command_runner.go:130] > # [crio.runtime.runtimes.runtime-handler]
	I0731 17:33:44.991364   44770 command_runner.go:130] > # runtime_path = "/path/to/the/executable"
	I0731 17:33:44.991408   44770 command_runner.go:130] > # runtime_type = "oci"
	I0731 17:33:44.991415   44770 command_runner.go:130] > # runtime_root = "/path/to/the/root"
	I0731 17:33:44.991419   44770 command_runner.go:130] > # monitor_path = "/path/to/container/monitor"
	I0731 17:33:44.991423   44770 command_runner.go:130] > # monitor_cgroup = "/cgroup/path"
	I0731 17:33:44.991427   44770 command_runner.go:130] > # monitor_exec_cgroup = "/cgroup/path"
	I0731 17:33:44.991431   44770 command_runner.go:130] > # monitor_env = []
	I0731 17:33:44.991436   44770 command_runner.go:130] > # privileged_without_host_devices = false
	I0731 17:33:44.991441   44770 command_runner.go:130] > # allowed_annotations = []
	I0731 17:33:44.991446   44770 command_runner.go:130] > # platform_runtime_paths = { "os/arch" = "/path/to/binary" }
	I0731 17:33:44.991450   44770 command_runner.go:130] > # Where:
	I0731 17:33:44.991455   44770 command_runner.go:130] > # - runtime-handler: Name used to identify the runtime.
	I0731 17:33:44.991461   44770 command_runner.go:130] > # - runtime_path (optional, string): Absolute path to the runtime executable in
	I0731 17:33:44.991469   44770 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I0731 17:33:44.991475   44770 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I0731 17:33:44.991480   44770 command_runner.go:130] > #   in $PATH.
	I0731 17:33:44.991488   44770 command_runner.go:130] > # - runtime_type (optional, string): Type of runtime, one of: "oci", "vm". If
	I0731 17:33:44.991494   44770 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I0731 17:33:44.991499   44770 command_runner.go:130] > # - runtime_root (optional, string): Root directory for storage of containers
	I0731 17:33:44.991505   44770 command_runner.go:130] > #   state.
	I0731 17:33:44.991511   44770 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I0731 17:33:44.991518   44770 command_runner.go:130] > #   file. This can only be used when using the VM runtime_type.
	I0731 17:33:44.991524   44770 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I0731 17:33:44.991531   44770 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I0731 17:33:44.991536   44770 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I0731 17:33:44.991553   44770 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I0731 17:33:44.991560   44770 command_runner.go:130] > #   The currently recognized values are:
	I0731 17:33:44.991566   44770 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I0731 17:33:44.991573   44770 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I0731 17:33:44.991581   44770 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I0731 17:33:44.991586   44770 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I0731 17:33:44.991595   44770 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I0731 17:33:44.991601   44770 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I0731 17:33:44.991609   44770 command_runner.go:130] > #   "io.kubernetes.cri-o.seccompNotifierAction" for enabling the seccomp notifier feature.
	I0731 17:33:44.991615   44770 command_runner.go:130] > #   "io.kubernetes.cri-o.umask" for setting the umask for container init process.
	I0731 17:33:44.991622   44770 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I0731 17:33:44.991628   44770 command_runner.go:130] > # - monitor_path (optional, string): The path of the monitor binary. Replaces
	I0731 17:33:44.991632   44770 command_runner.go:130] > #   deprecated option "conmon".
	I0731 17:33:44.991640   44770 command_runner.go:130] > # - monitor_cgroup (optional, string): The cgroup the container monitor process will be put in.
	I0731 17:33:44.991645   44770 command_runner.go:130] > #   Replaces deprecated option "conmon_cgroup".
	I0731 17:33:44.991653   44770 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): If set to "container", indicates exec probes
	I0731 17:33:44.991658   44770 command_runner.go:130] > #   should be moved to the container's cgroup
	I0731 17:33:44.991666   44770 command_runner.go:130] > # - monitor_env (optional, array of strings): Environment variables to pass to the monitor.
	I0731 17:33:44.991670   44770 command_runner.go:130] > #   Replaces deprecated option "conmon_env".
	I0731 17:33:44.991678   44770 command_runner.go:130] > # - platform_runtime_paths (optional, map): A mapping of platforms to the corresponding
	I0731 17:33:44.991683   44770 command_runner.go:130] > #   runtime executable paths for the runtime handler.
	I0731 17:33:44.991688   44770 command_runner.go:130] > #
	I0731 17:33:44.991692   44770 command_runner.go:130] > # Using the seccomp notifier feature:
	I0731 17:33:44.991695   44770 command_runner.go:130] > #
	I0731 17:33:44.991700   44770 command_runner.go:130] > # This feature can help you to debug seccomp related issues, for example if
	I0731 17:33:44.991707   44770 command_runner.go:130] > # blocked syscalls (permission denied errors) have negative impact on the workload.
	I0731 17:33:44.991710   44770 command_runner.go:130] > #
	I0731 17:33:44.991718   44770 command_runner.go:130] > # To be able to use this feature, configure a runtime which has the annotation
	I0731 17:33:44.991730   44770 command_runner.go:130] > # "io.kubernetes.cri-o.seccompNotifierAction" in the allowed_annotations array.
	I0731 17:33:44.991738   44770 command_runner.go:130] > #
	I0731 17:33:44.991749   44770 command_runner.go:130] > # It also requires at least runc 1.1.0 or crun 0.19 which support the notifier
	I0731 17:33:44.991758   44770 command_runner.go:130] > # feature.
	I0731 17:33:44.991761   44770 command_runner.go:130] > #
	I0731 17:33:44.991767   44770 command_runner.go:130] > # If everything is setup, CRI-O will modify chosen seccomp profiles for
	I0731 17:33:44.991775   44770 command_runner.go:130] > # containers if the annotation "io.kubernetes.cri-o.seccompNotifierAction" is
	I0731 17:33:44.991781   44770 command_runner.go:130] > # set on the Pod sandbox. CRI-O will then get notified if a container is using
	I0731 17:33:44.991794   44770 command_runner.go:130] > # a blocked syscall and then terminate the workload after a timeout of 5
	I0731 17:33:44.991801   44770 command_runner.go:130] > # seconds if the value of "io.kubernetes.cri-o.seccompNotifierAction=stop".
	I0731 17:33:44.991805   44770 command_runner.go:130] > #
	I0731 17:33:44.991810   44770 command_runner.go:130] > # This also means that multiple syscalls can be captured during that period,
	I0731 17:33:44.991817   44770 command_runner.go:130] > # while the timeout will get reset once a new syscall has been discovered.
	I0731 17:33:44.991820   44770 command_runner.go:130] > #
	I0731 17:33:44.991826   44770 command_runner.go:130] > # This also means that the Pods "restartPolicy" has to be set to "Never",
	I0731 17:33:44.991833   44770 command_runner.go:130] > # otherwise the kubelet will restart the container immediately.
	I0731 17:33:44.991836   44770 command_runner.go:130] > #
	I0731 17:33:44.991842   44770 command_runner.go:130] > # Please be aware that CRI-O is not able to get notified if a syscall gets
	I0731 17:33:44.991848   44770 command_runner.go:130] > # blocked based on the seccomp defaultAction, which is a general runtime
	I0731 17:33:44.991851   44770 command_runner.go:130] > # limitation.
	I0731 17:33:44.991856   44770 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I0731 17:33:44.991861   44770 command_runner.go:130] > runtime_path = "/usr/bin/runc"
	I0731 17:33:44.991865   44770 command_runner.go:130] > runtime_type = "oci"
	I0731 17:33:44.991871   44770 command_runner.go:130] > runtime_root = "/run/runc"
	I0731 17:33:44.991875   44770 command_runner.go:130] > runtime_config_path = ""
	I0731 17:33:44.991879   44770 command_runner.go:130] > monitor_path = "/usr/libexec/crio/conmon"
	I0731 17:33:44.991883   44770 command_runner.go:130] > monitor_cgroup = "pod"
	I0731 17:33:44.991887   44770 command_runner.go:130] > monitor_exec_cgroup = ""
	I0731 17:33:44.991890   44770 command_runner.go:130] > monitor_env = [
	I0731 17:33:44.991897   44770 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I0731 17:33:44.991899   44770 command_runner.go:130] > ]
	I0731 17:33:44.991906   44770 command_runner.go:130] > privileged_without_host_devices = false
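	Following the runtime-handler format documented above, an additional handler such as crun would be declared as a second table alongside [crio.runtime.runtimes.runc]. A minimal sketch, assuming crun is installed at /usr/bin/crun and uses /run/crun as its root (neither is part of the generated configuration), with the seccomp notifier annotation allowed as described earlier:

	[crio.runtime.runtimes.crun]
	runtime_path = "/usr/bin/crun"
	runtime_type = "oci"
	runtime_root = "/run/crun"
	monitor_path = "/usr/libexec/crio/conmon"
	monitor_cgroup = "pod"
	monitor_env = [
		"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	]
	allowed_annotations = [
		"io.kubernetes.cri-o.seccompNotifierAction",
	]

	A pod would select this handler through a RuntimeClass whose handler name matches "crun"; otherwise the default_runtime ("runc") is used.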
	I0731 17:33:44.991914   44770 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I0731 17:33:44.991919   44770 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I0731 17:33:44.991926   44770 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I0731 17:33:44.991932   44770 command_runner.go:130] > # Each workload has a name, activation_annotation, annotation_prefix and a set of resources it supports mutating.
	I0731 17:33:44.991942   44770 command_runner.go:130] > # The currently supported resources are "cpu" (to configure the cpu shares) and "cpuset" to configure the cpuset.
	I0731 17:33:44.991947   44770 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I0731 17:33:44.991956   44770 command_runner.go:130] > # For a container to opt-into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I0731 17:33:44.991965   44770 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I0731 17:33:44.991971   44770 command_runner.go:130] > # signifying for that resource type to override the default value.
	I0731 17:33:44.991979   44770 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I0731 17:33:44.991983   44770 command_runner.go:130] > # Example:
	I0731 17:33:44.991987   44770 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I0731 17:33:44.991997   44770 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I0731 17:33:44.992002   44770 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I0731 17:33:44.992006   44770 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I0731 17:33:44.992010   44770 command_runner.go:130] > # cpuset = 0
	I0731 17:33:44.992013   44770 command_runner.go:130] > # cpushares = "0-1"
	I0731 17:33:44.992016   44770 command_runner.go:130] > # Where:
	I0731 17:33:44.992020   44770 command_runner.go:130] > # The workload name is workload-type.
	I0731 17:33:44.992026   44770 command_runner.go:130] > # To specify, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I0731 17:33:44.992031   44770 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I0731 17:33:44.992036   44770 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I0731 17:33:44.992043   44770 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I0731 17:33:44.992048   44770 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
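	Put together, the workload described in the comments above corresponds to a table like the following sketch; the workload name and the cpuset value are placeholders for illustration, and the exact resource value types should be checked against crio.conf(5):

	[crio.runtime.workloads.workload-type]
	activation_annotation = "io.crio/workload"
	annotation_prefix = "io.crio.workload-type"
	[crio.runtime.workloads.workload-type.resources]
	cpuset = "0-1"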
	I0731 17:33:44.992055   44770 command_runner.go:130] > # hostnetwork_disable_selinux determines whether
	I0731 17:33:44.992065   44770 command_runner.go:130] > # SELinux should be disabled within a pod when it is running in the host network namespace
	I0731 17:33:44.992072   44770 command_runner.go:130] > # Default value is set to true
	I0731 17:33:44.992078   44770 command_runner.go:130] > # hostnetwork_disable_selinux = true
	I0731 17:33:44.992086   44770 command_runner.go:130] > # disable_hostport_mapping determines whether to enable/disable
	I0731 17:33:44.992094   44770 command_runner.go:130] > # the container hostport mapping in CRI-O.
	I0731 17:33:44.992100   44770 command_runner.go:130] > # Default value is set to 'false'
	I0731 17:33:44.992107   44770 command_runner.go:130] > # disable_hostport_mapping = false
	I0731 17:33:44.992116   44770 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I0731 17:33:44.992120   44770 command_runner.go:130] > #
	I0731 17:33:44.992128   44770 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I0731 17:33:44.992137   44770 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf. If
	I0731 17:33:44.992144   44770 command_runner.go:130] > # you want to modify just CRI-O, you can change the registries configuration in
	I0731 17:33:44.992149   44770 command_runner.go:130] > # this file. Otherwise, leave insecure_registries and registries commented out to
	I0731 17:33:44.992154   44770 command_runner.go:130] > # use the system's defaults from /etc/containers/registries.conf.
	I0731 17:33:44.992157   44770 command_runner.go:130] > [crio.image]
	I0731 17:33:44.992163   44770 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I0731 17:33:44.992167   44770 command_runner.go:130] > # default_transport = "docker://"
	I0731 17:33:44.992172   44770 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I0731 17:33:44.992178   44770 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I0731 17:33:44.992182   44770 command_runner.go:130] > # global_auth_file = ""
	I0731 17:33:44.992186   44770 command_runner.go:130] > # The image used to instantiate infra containers.
	I0731 17:33:44.992194   44770 command_runner.go:130] > # This option supports live configuration reload.
	I0731 17:33:44.992200   44770 command_runner.go:130] > # pause_image = "registry.k8s.io/pause:3.9"
	I0731 17:33:44.992216   44770 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I0731 17:33:44.992230   44770 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I0731 17:33:44.992238   44770 command_runner.go:130] > # This option supports live configuration reload.
	I0731 17:33:44.992248   44770 command_runner.go:130] > # pause_image_auth_file = ""
	I0731 17:33:44.992256   44770 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I0731 17:33:44.992268   44770 command_runner.go:130] > # When explicitly set to "", it will fallback to the entrypoint and command
	I0731 17:33:44.992280   44770 command_runner.go:130] > # specified in the pause image. When commented out, it will fallback to the
	I0731 17:33:44.992291   44770 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I0731 17:33:44.992301   44770 command_runner.go:130] > # pause_command = "/pause"
	I0731 17:33:44.992311   44770 command_runner.go:130] > # List of images to be excluded from the kubelet's garbage collection.
	I0731 17:33:44.992320   44770 command_runner.go:130] > # It allows specifying image names using either exact, glob, or keyword
	I0731 17:33:44.992326   44770 command_runner.go:130] > # patterns. Exact matches must match the entire name, glob matches can
	I0731 17:33:44.992332   44770 command_runner.go:130] > # have a wildcard * at the end, and keyword matches can have wildcards
	I0731 17:33:44.992340   44770 command_runner.go:130] > # on both ends. By default, this list includes the "pause" image if
	I0731 17:33:44.992346   44770 command_runner.go:130] > # configured by the user, which is used as a placeholder in Kubernetes pods.
	I0731 17:33:44.992352   44770 command_runner.go:130] > # pinned_images = [
	I0731 17:33:44.992358   44770 command_runner.go:130] > # ]
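	An uncommented pinned_images list using the exact and glob match styles described above would look like this sketch; the entries are illustrative assumptions (only the pause image is implied by this configuration):

	pinned_images = [
		"registry.k8s.io/pause:3.9",
		"registry.k8s.io/kube-apiserver*",
	]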
	I0731 17:33:44.992370   44770 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I0731 17:33:44.992382   44770 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I0731 17:33:44.992395   44770 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I0731 17:33:44.992407   44770 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I0731 17:33:44.992427   44770 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I0731 17:33:44.992433   44770 command_runner.go:130] > # signature_policy = ""
	I0731 17:33:44.992439   44770 command_runner.go:130] > # Root path for pod namespace-separated signature policies.
	I0731 17:33:44.992451   44770 command_runner.go:130] > # The final policy to be used on image pull will be <SIGNATURE_POLICY_DIR>/<NAMESPACE>.json.
	I0731 17:33:44.992465   44770 command_runner.go:130] > # If no pod namespace is being provided on image pull (via the sandbox config),
	I0731 17:33:44.992477   44770 command_runner.go:130] > # or the concatenated path is non existent, then the signature_policy or system
	I0731 17:33:44.992490   44770 command_runner.go:130] > # wide policy will be used as fallback. Must be an absolute path.
	I0731 17:33:44.992500   44770 command_runner.go:130] > # signature_policy_dir = "/etc/crio/policies"
	I0731 17:33:44.992512   44770 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I0731 17:33:44.992522   44770 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I0731 17:33:44.992530   44770 command_runner.go:130] > # changing them here.
	I0731 17:33:44.992540   44770 command_runner.go:130] > # insecure_registries = [
	I0731 17:33:44.992553   44770 command_runner.go:130] > # ]
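	Rather than uncommenting insecure_registries here, the comments above point to configuring registries via /etc/containers/registries.conf. A minimal sketch of such an entry in that file's TOML format, with an assumed registry address that is not taken from this cluster:

	[[registry]]
	location = "192.168.39.1:5000"
	insecure = true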
	I0731 17:33:44.992566   44770 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I0731 17:33:44.992578   44770 command_runner.go:130] > # ignore; the latter will ignore volumes entirely.
	I0731 17:33:44.992596   44770 command_runner.go:130] > # image_volumes = "mkdir"
	I0731 17:33:44.992605   44770 command_runner.go:130] > # Temporary directory to use for storing big files
	I0731 17:33:44.992612   44770 command_runner.go:130] > # big_files_temporary_dir = ""
	I0731 17:33:44.992621   44770 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I0731 17:33:44.992630   44770 command_runner.go:130] > # CNI plugins.
	I0731 17:33:44.992640   44770 command_runner.go:130] > [crio.network]
	I0731 17:33:44.992649   44770 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I0731 17:33:44.992660   44770 command_runner.go:130] > # CRI-O will pick-up the first one found in network_dir.
	I0731 17:33:44.992669   44770 command_runner.go:130] > # cni_default_network = ""
	I0731 17:33:44.992681   44770 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I0731 17:33:44.992691   44770 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I0731 17:33:44.992699   44770 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I0731 17:33:44.992706   44770 command_runner.go:130] > # plugin_dirs = [
	I0731 17:33:44.992710   44770 command_runner.go:130] > # 	"/opt/cni/bin/",
	I0731 17:33:44.992715   44770 command_runner.go:130] > # ]
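The network_dir and plugin_dirs options above control where CRI-O looks for CNI configuration files and plugin binaries; when cni_default_network is left empty, the first configuration found in network_dir is used. As an illustrative sketch only (this cluster actually uses kindnet, and the network name below is hypothetical; only the 10.244.0.0/16 pod subnet is taken from the log), a minimal bridge configuration dropped into network_dir would look roughly like:

	sudo tee /etc/cni/net.d/10-example-bridge.conflist <<'EOF'
	{
	  "cniVersion": "0.4.0",
	  "name": "example-bridge",
	  "plugins": [
	    {
	      "type": "bridge",
	      "bridge": "cni0",
	      "isGateway": true,
	      "ipMasq": true,
	      "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
	    }
	  ]
	}
	EOF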
	I0731 17:33:44.992722   44770 command_runner.go:130] > # A necessary configuration for Prometheus based metrics retrieval
	I0731 17:33:44.992731   44770 command_runner.go:130] > [crio.metrics]
	I0731 17:33:44.992740   44770 command_runner.go:130] > # Globally enable or disable metrics support.
	I0731 17:33:44.992749   44770 command_runner.go:130] > enable_metrics = true
	I0731 17:33:44.992760   44770 command_runner.go:130] > # Specify enabled metrics collectors.
	I0731 17:33:44.992770   44770 command_runner.go:130] > # Per default all metrics are enabled.
	I0731 17:33:44.992782   44770 command_runner.go:130] > # It is possible, to prefix the metrics with "container_runtime_" and "crio_".
	I0731 17:33:44.992795   44770 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I0731 17:33:44.992803   44770 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I0731 17:33:44.992809   44770 command_runner.go:130] > # metrics_collectors = [
	I0731 17:33:44.992813   44770 command_runner.go:130] > # 	"operations",
	I0731 17:33:44.992820   44770 command_runner.go:130] > # 	"operations_latency_microseconds_total",
	I0731 17:33:44.992827   44770 command_runner.go:130] > # 	"operations_latency_microseconds",
	I0731 17:33:44.992831   44770 command_runner.go:130] > # 	"operations_errors",
	I0731 17:33:44.992837   44770 command_runner.go:130] > # 	"image_pulls_by_digest",
	I0731 17:33:44.992841   44770 command_runner.go:130] > # 	"image_pulls_by_name",
	I0731 17:33:44.992847   44770 command_runner.go:130] > # 	"image_pulls_by_name_skipped",
	I0731 17:33:44.992851   44770 command_runner.go:130] > # 	"image_pulls_failures",
	I0731 17:33:44.992857   44770 command_runner.go:130] > # 	"image_pulls_successes",
	I0731 17:33:44.992861   44770 command_runner.go:130] > # 	"image_pulls_layer_size",
	I0731 17:33:44.992868   44770 command_runner.go:130] > # 	"image_layer_reuse",
	I0731 17:33:44.992881   44770 command_runner.go:130] > # 	"containers_events_dropped_total",
	I0731 17:33:44.992892   44770 command_runner.go:130] > # 	"containers_oom_total",
	I0731 17:33:44.992898   44770 command_runner.go:130] > # 	"containers_oom",
	I0731 17:33:44.992907   44770 command_runner.go:130] > # 	"processes_defunct",
	I0731 17:33:44.992916   44770 command_runner.go:130] > # 	"operations_total",
	I0731 17:33:44.992926   44770 command_runner.go:130] > # 	"operations_latency_seconds",
	I0731 17:33:44.992936   44770 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I0731 17:33:44.992944   44770 command_runner.go:130] > # 	"operations_errors_total",
	I0731 17:33:44.992950   44770 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I0731 17:33:44.992954   44770 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I0731 17:33:44.992960   44770 command_runner.go:130] > # 	"image_pulls_failure_total",
	I0731 17:33:44.992964   44770 command_runner.go:130] > # 	"image_pulls_success_total",
	I0731 17:33:44.992970   44770 command_runner.go:130] > # 	"image_layer_reuse_total",
	I0731 17:33:44.992974   44770 command_runner.go:130] > # 	"containers_oom_count_total",
	I0731 17:33:44.992982   44770 command_runner.go:130] > # 	"containers_seccomp_notifier_count_total",
	I0731 17:33:44.992987   44770 command_runner.go:130] > # 	"resources_stalled_at_stage",
	I0731 17:33:44.992991   44770 command_runner.go:130] > # ]
	I0731 17:33:44.992997   44770 command_runner.go:130] > # The port on which the metrics server will listen.
	I0731 17:33:44.993001   44770 command_runner.go:130] > # metrics_port = 9090
	I0731 17:33:44.993008   44770 command_runner.go:130] > # Local socket path to bind the metrics server to
	I0731 17:33:44.993012   44770 command_runner.go:130] > # metrics_socket = ""
	I0731 17:33:44.993019   44770 command_runner.go:130] > # The certificate for the secure metrics server.
	I0731 17:33:44.993025   44770 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I0731 17:33:44.993032   44770 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I0731 17:33:44.993039   44770 command_runner.go:130] > # certificate on any modification event.
	I0731 17:33:44.993043   44770 command_runner.go:130] > # metrics_cert = ""
	I0731 17:33:44.993048   44770 command_runner.go:130] > # The certificate key for the secure metrics server.
	I0731 17:33:44.993054   44770 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I0731 17:33:44.993058   44770 command_runner.go:130] > # metrics_key = ""
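With enable_metrics = true set above, CRI-O exposes Prometheus metrics on metrics_port (default 9090). A minimal check from the node, assuming the default port and that no metrics_socket or TLS cert has been configured:

	curl -s http://127.0.0.1:9090/metrics | grep -E '^crio_(operations|image_pulls)' | head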
	I0731 17:33:44.993066   44770 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I0731 17:33:44.993072   44770 command_runner.go:130] > [crio.tracing]
	I0731 17:33:44.993078   44770 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I0731 17:33:44.993087   44770 command_runner.go:130] > # enable_tracing = false
	I0731 17:33:44.993097   44770 command_runner.go:130] > # Address on which the gRPC trace collector listens on.
	I0731 17:33:44.993106   44770 command_runner.go:130] > # tracing_endpoint = "0.0.0.0:4317"
	I0731 17:33:44.993118   44770 command_runner.go:130] > # Number of samples to collect per million spans. Set to 1000000 to always sample.
	I0731 17:33:44.993135   44770 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
	I0731 17:33:44.993144   44770 command_runner.go:130] > # CRI-O NRI configuration.
	I0731 17:33:44.993152   44770 command_runner.go:130] > [crio.nri]
	I0731 17:33:44.993162   44770 command_runner.go:130] > # Globally enable or disable NRI.
	I0731 17:33:44.993169   44770 command_runner.go:130] > # enable_nri = false
	I0731 17:33:44.993179   44770 command_runner.go:130] > # NRI socket to listen on.
	I0731 17:33:44.993189   44770 command_runner.go:130] > # nri_listen = "/var/run/nri/nri.sock"
	I0731 17:33:44.993198   44770 command_runner.go:130] > # NRI plugin directory to use.
	I0731 17:33:44.993208   44770 command_runner.go:130] > # nri_plugin_dir = "/opt/nri/plugins"
	I0731 17:33:44.993217   44770 command_runner.go:130] > # NRI plugin configuration directory to use.
	I0731 17:33:44.993224   44770 command_runner.go:130] > # nri_plugin_config_dir = "/etc/nri/conf.d"
	I0731 17:33:44.993229   44770 command_runner.go:130] > # Disable connections from externally launched NRI plugins.
	I0731 17:33:44.993236   44770 command_runner.go:130] > # nri_disable_connections = false
	I0731 17:33:44.993241   44770 command_runner.go:130] > # Timeout for a plugin to register itself with NRI.
	I0731 17:33:44.993247   44770 command_runner.go:130] > # nri_plugin_registration_timeout = "5s"
	I0731 17:33:44.993252   44770 command_runner.go:130] > # Timeout for a plugin to handle an NRI request.
	I0731 17:33:44.993258   44770 command_runner.go:130] > # nri_plugin_request_timeout = "2s"
	I0731 17:33:44.993264   44770 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I0731 17:33:44.993269   44770 command_runner.go:130] > [crio.stats]
	I0731 17:33:44.993274   44770 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I0731 17:33:44.993281   44770 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I0731 17:33:44.993285   44770 command_runner.go:130] > # stats_collection_period = 0
	I0731 17:33:44.993319   44770 command_runner.go:130] ! time="2024-07-31 17:33:44.951683195Z" level=info msg="Starting CRI-O, version: 1.29.1, git: unknown(clean)"
	I0731 17:33:44.993333   44770 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
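The defaults dumped above can be overridden without editing the main configuration file, since CRI-O also reads TOML drop-ins from /etc/crio/crio.conf.d/. A hedged sketch (the file name is arbitrary and tracing is chosen purely as an example; the keys themselves are the ones shown in the dump):

	sudo tee /etc/crio/crio.conf.d/90-tracing.conf <<'EOF'
	[crio.tracing]
	enable_tracing = true
	tracing_endpoint = "0.0.0.0:4317"
	EOF
	sudo systemctl restart crio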
	I0731 17:33:44.993434   44770 cni.go:84] Creating CNI manager for ""
	I0731 17:33:44.993442   44770 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0731 17:33:44.993450   44770 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0731 17:33:44.993469   44770 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.100 APIServerPort:8443 KubernetesVersion:v1.30.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-498089 NodeName:multinode-498089 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.100"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.100 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:
/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0731 17:33:44.993609   44770 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.100
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "multinode-498089"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.100
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.100"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0731 17:33:44.993663   44770 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0731 17:33:45.002855   44770 command_runner.go:130] > kubeadm
	I0731 17:33:45.002873   44770 command_runner.go:130] > kubectl
	I0731 17:33:45.002877   44770 command_runner.go:130] > kubelet
	I0731 17:33:45.002893   44770 binaries.go:44] Found k8s binaries, skipping transfer
	I0731 17:33:45.002942   44770 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0731 17:33:45.011742   44770 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (316 bytes)
	I0731 17:33:45.027439   44770 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0731 17:33:45.042519   44770 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2160 bytes)
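The kubeadm configuration rendered above is copied to /var/tmp/minikube/kubeadm.yaml.new on the node. Purely as an illustrative sketch and not part of the test flow (minikube drives kubeadm itself, and the binary lives under /var/lib/minikube/binaries/v1.30.3/ rather than necessarily on PATH), such a config can be exercised without changing the cluster by using kubeadm's dry-run mode:

	sudo /var/lib/minikube/binaries/v1.30.3/kubeadm init --config /var/tmp/minikube/kubeadm.yaml.new --dry-run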
	I0731 17:33:45.057536   44770 ssh_runner.go:195] Run: grep 192.168.39.100	control-plane.minikube.internal$ /etc/hosts
	I0731 17:33:45.060949   44770 command_runner.go:130] > 192.168.39.100	control-plane.minikube.internal
	I0731 17:33:45.061020   44770 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0731 17:33:45.197866   44770 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0731 17:33:45.212199   44770 certs.go:68] Setting up /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/multinode-498089 for IP: 192.168.39.100
	I0731 17:33:45.212224   44770 certs.go:194] generating shared ca certs ...
	I0731 17:33:45.212238   44770 certs.go:226] acquiring lock for ca certs: {Name:mk49eed439085f9c30746706460ce213570d1997 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 17:33:45.212393   44770 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19349-8084/.minikube/ca.key
	I0731 17:33:45.212434   44770 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19349-8084/.minikube/proxy-client-ca.key
	I0731 17:33:45.212444   44770 certs.go:256] generating profile certs ...
	I0731 17:33:45.212517   44770 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/multinode-498089/client.key
	I0731 17:33:45.212579   44770 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/multinode-498089/apiserver.key.4dfe397f
	I0731 17:33:45.212614   44770 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/multinode-498089/proxy-client.key
	I0731 17:33:45.212624   44770 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19349-8084/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0731 17:33:45.212635   44770 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19349-8084/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0731 17:33:45.212647   44770 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19349-8084/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0731 17:33:45.212660   44770 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19349-8084/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0731 17:33:45.212672   44770 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/multinode-498089/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0731 17:33:45.212686   44770 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/multinode-498089/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0731 17:33:45.212699   44770 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/multinode-498089/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0731 17:33:45.212710   44770 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/multinode-498089/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0731 17:33:45.212767   44770 certs.go:484] found cert: /home/jenkins/minikube-integration/19349-8084/.minikube/certs/15259.pem (1338 bytes)
	W0731 17:33:45.212794   44770 certs.go:480] ignoring /home/jenkins/minikube-integration/19349-8084/.minikube/certs/15259_empty.pem, impossibly tiny 0 bytes
	I0731 17:33:45.212803   44770 certs.go:484] found cert: /home/jenkins/minikube-integration/19349-8084/.minikube/certs/ca-key.pem (1675 bytes)
	I0731 17:33:45.212825   44770 certs.go:484] found cert: /home/jenkins/minikube-integration/19349-8084/.minikube/certs/ca.pem (1082 bytes)
	I0731 17:33:45.212847   44770 certs.go:484] found cert: /home/jenkins/minikube-integration/19349-8084/.minikube/certs/cert.pem (1123 bytes)
	I0731 17:33:45.212869   44770 certs.go:484] found cert: /home/jenkins/minikube-integration/19349-8084/.minikube/certs/key.pem (1675 bytes)
	I0731 17:33:45.212906   44770 certs.go:484] found cert: /home/jenkins/minikube-integration/19349-8084/.minikube/files/etc/ssl/certs/152592.pem (1708 bytes)
	I0731 17:33:45.212932   44770 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19349-8084/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0731 17:33:45.212945   44770 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19349-8084/.minikube/certs/15259.pem -> /usr/share/ca-certificates/15259.pem
	I0731 17:33:45.212957   44770 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19349-8084/.minikube/files/etc/ssl/certs/152592.pem -> /usr/share/ca-certificates/152592.pem
	I0731 17:33:45.213549   44770 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19349-8084/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0731 17:33:45.236566   44770 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19349-8084/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0731 17:33:45.259826   44770 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19349-8084/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0731 17:33:45.281036   44770 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19349-8084/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0731 17:33:45.302707   44770 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/multinode-498089/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0731 17:33:45.323906   44770 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/multinode-498089/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0731 17:33:45.344988   44770 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/multinode-498089/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0731 17:33:45.366610   44770 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/multinode-498089/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0731 17:33:45.387934   44770 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19349-8084/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0731 17:33:45.410131   44770 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19349-8084/.minikube/certs/15259.pem --> /usr/share/ca-certificates/15259.pem (1338 bytes)
	I0731 17:33:45.431621   44770 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19349-8084/.minikube/files/etc/ssl/certs/152592.pem --> /usr/share/ca-certificates/152592.pem (1708 bytes)
	I0731 17:33:45.453058   44770 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0731 17:33:45.468501   44770 ssh_runner.go:195] Run: openssl version
	I0731 17:33:45.474421   44770 command_runner.go:130] > OpenSSL 1.1.1w  11 Sep 2023
	I0731 17:33:45.474524   44770 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/15259.pem && ln -fs /usr/share/ca-certificates/15259.pem /etc/ssl/certs/15259.pem"
	I0731 17:33:45.484952   44770 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/15259.pem
	I0731 17:33:45.488872   44770 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Jul 31 16:54 /usr/share/ca-certificates/15259.pem
	I0731 17:33:45.488898   44770 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 31 16:54 /usr/share/ca-certificates/15259.pem
	I0731 17:33:45.488929   44770 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/15259.pem
	I0731 17:33:45.494098   44770 command_runner.go:130] > 51391683
	I0731 17:33:45.494326   44770 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/15259.pem /etc/ssl/certs/51391683.0"
	I0731 17:33:45.502963   44770 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/152592.pem && ln -fs /usr/share/ca-certificates/152592.pem /etc/ssl/certs/152592.pem"
	I0731 17:33:45.512663   44770 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/152592.pem
	I0731 17:33:45.516646   44770 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Jul 31 16:54 /usr/share/ca-certificates/152592.pem
	I0731 17:33:45.516668   44770 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 31 16:54 /usr/share/ca-certificates/152592.pem
	I0731 17:33:45.516697   44770 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/152592.pem
	I0731 17:33:45.521667   44770 command_runner.go:130] > 3ec20f2e
	I0731 17:33:45.521751   44770 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/152592.pem /etc/ssl/certs/3ec20f2e.0"
	I0731 17:33:45.531731   44770 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0731 17:33:45.565828   44770 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0731 17:33:45.574147   44770 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Jul 31 16:42 /usr/share/ca-certificates/minikubeCA.pem
	I0731 17:33:45.574185   44770 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 31 16:42 /usr/share/ca-certificates/minikubeCA.pem
	I0731 17:33:45.574244   44770 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0731 17:33:45.587511   44770 command_runner.go:130] > b5213941
	I0731 17:33:45.587595   44770 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
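The three certificates are installed using OpenSSL's subject-hash convention: each PEM under /usr/share/ca-certificates is symlinked as <hash>.0 in /etc/ssl/certs so tools can locate it by hash (b5213941.0 for the minikube CA above). A condensed sketch of the same two steps, with paths taken from the log:

	HASH=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/${HASH}.0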
	I0731 17:33:45.622653   44770 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0731 17:33:45.630913   44770 command_runner.go:130] >   File: /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0731 17:33:45.630943   44770 command_runner.go:130] >   Size: 1176      	Blocks: 8          IO Block: 4096   regular file
	I0731 17:33:45.630952   44770 command_runner.go:130] > Device: 253,1	Inode: 2103851     Links: 1
	I0731 17:33:45.630963   44770 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I0731 17:33:45.630972   44770 command_runner.go:130] > Access: 2024-07-31 17:26:55.141030049 +0000
	I0731 17:33:45.630979   44770 command_runner.go:130] > Modify: 2024-07-31 17:26:55.141030049 +0000
	I0731 17:33:45.630987   44770 command_runner.go:130] > Change: 2024-07-31 17:26:55.141030049 +0000
	I0731 17:33:45.630995   44770 command_runner.go:130] >  Birth: 2024-07-31 17:26:55.141030049 +0000
	I0731 17:33:45.631090   44770 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0731 17:33:45.637587   44770 command_runner.go:130] > Certificate will not expire
	I0731 17:33:45.637654   44770 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0731 17:33:45.645645   44770 command_runner.go:130] > Certificate will not expire
	I0731 17:33:45.645937   44770 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0731 17:33:45.658621   44770 command_runner.go:130] > Certificate will not expire
	I0731 17:33:45.658878   44770 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0731 17:33:45.664737   44770 command_runner.go:130] > Certificate will not expire
	I0731 17:33:45.665016   44770 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0731 17:33:45.676183   44770 command_runner.go:130] > Certificate will not expire
	I0731 17:33:45.676451   44770 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0731 17:33:45.683303   44770 command_runner.go:130] > Certificate will not expire
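Each "Certificate will not expire" line above comes from openssl's -checkend flag, which prints "Certificate will expire" and exits non-zero if the certificate expires within the given number of seconds; 86400 corresponds to the next 24 hours. For example, repeating the apiserver check by hand:

	sudo openssl x509 -noout -in /var/lib/minikube/certs/apiserver.crt -checkend 86400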
	I0731 17:33:45.683576   44770 kubeadm.go:392] StartCluster: {Name:multinode-498089 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.
3 ClusterName:multinode-498089 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.100 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.228 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.204 Port:0 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false
inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: Disable
Optimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0731 17:33:45.683676   44770 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0731 17:33:45.683736   44770 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0731 17:33:45.738411   44770 command_runner.go:130] > 6f5a861ab18dbae1101a6f14bc50fff1a1ae8370bcdabb6eaa0a7f743c803677
	I0731 17:33:45.738434   44770 command_runner.go:130] > f713a250ad9e752c0b2d45c5140624e047b77ac24ace1449a8979bc7ce711e59
	I0731 17:33:45.738441   44770 command_runner.go:130] > 11f278a9b370395f6b3a7cd171498efe28e15ce5b995f45ba5e647d8dff3db53
	I0731 17:33:45.738447   44770 command_runner.go:130] > 87a364936bdef18ba9cf82cf399cac1f0c9d855f2bbd704616f08ba3c0154e86
	I0731 17:33:45.738453   44770 command_runner.go:130] > fc640bbf40886bebe629913240d2f5ee56a9b60ed6a6ce6e7da42e278f0303e3
	I0731 17:33:45.738464   44770 command_runner.go:130] > 741c5d19093c08d4889a8536dba6fa217e98a0ff4f96ef237f57d12dd34a844b
	I0731 17:33:45.738473   44770 command_runner.go:130] > 7e257027c671ae2fa9681210deb701fd3b243b02260ef45b7eb80c12efe72477
	I0731 17:33:45.738489   44770 command_runner.go:130] > 7706d4d5c5b20f986a98dbe39b9b5f757972cfa89dd0f91e0d780ab4229cd836
	I0731 17:33:45.738516   44770 cri.go:89] found id: "6f5a861ab18dbae1101a6f14bc50fff1a1ae8370bcdabb6eaa0a7f743c803677"
	I0731 17:33:45.738527   44770 cri.go:89] found id: "f713a250ad9e752c0b2d45c5140624e047b77ac24ace1449a8979bc7ce711e59"
	I0731 17:33:45.738531   44770 cri.go:89] found id: "11f278a9b370395f6b3a7cd171498efe28e15ce5b995f45ba5e647d8dff3db53"
	I0731 17:33:45.738536   44770 cri.go:89] found id: "87a364936bdef18ba9cf82cf399cac1f0c9d855f2bbd704616f08ba3c0154e86"
	I0731 17:33:45.738540   44770 cri.go:89] found id: "fc640bbf40886bebe629913240d2f5ee56a9b60ed6a6ce6e7da42e278f0303e3"
	I0731 17:33:45.738554   44770 cri.go:89] found id: "741c5d19093c08d4889a8536dba6fa217e98a0ff4f96ef237f57d12dd34a844b"
	I0731 17:33:45.738559   44770 cri.go:89] found id: "7e257027c671ae2fa9681210deb701fd3b243b02260ef45b7eb80c12efe72477"
	I0731 17:33:45.738562   44770 cri.go:89] found id: "7706d4d5c5b20f986a98dbe39b9b5f757972cfa89dd0f91e0d780ab4229cd836"
	I0731 17:33:45.738565   44770 cri.go:89] found id: ""
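The container IDs collected above come from the crictl invocation shown a few lines earlier. To map such IDs back to their pod and container names interactively, the same label filter can be run without --quiet (a hedged example, not part of the test run):

	sudo crictl ps -a --label io.kubernetes.pod.namespace=kube-system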
	I0731 17:33:45.738602   44770 ssh_runner.go:195] Run: sudo runc list -f json
	I0731 17:33:45.767561   44770 command_runner.go:130] ! load container 6edfcf47092f7b6539d7a28ad02cdce800748c9d7c70a7bf92a9dee67cb4d6e3: container does not exist
	
	
	==> CRI-O <==
	Jul 31 17:38:00 multinode-498089 crio[2873]: time="2024-07-31 17:38:00.690503681Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722447480690480265,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143052,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=b62385f3-358d-4539-b5d7-ecb136ac9db8 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 31 17:38:00 multinode-498089 crio[2873]: time="2024-07-31 17:38:00.690978230Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=e793fc66-d208-4bf4-af81-b7c9789571a5 name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 17:38:00 multinode-498089 crio[2873]: time="2024-07-31 17:38:00.691038107Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=e793fc66-d208-4bf4-af81-b7c9789571a5 name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 17:38:00 multinode-498089 crio[2873]: time="2024-07-31 17:38:00.691473815Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:6c307c10d491e1b37eefec0b1f0f26f544260f80f7b5121d2175ba0f280a1993,PodSandboxId:1eda2cad03dd79c622a01ae97c4428d2c284ef771ff3a5bcb6e37a51060dbb03,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1722447265355133149,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-tm4jn,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 1616740e-4445-47e5-9891-6dc753c5f655,},Annotations:map[string]string{io.kubernetes.container.hash: 81b7c80c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:04f3c08ad545a96ddd065cd0e6e06a3ea8e50d3fe11d31e16e77052c6179dc24,PodSandboxId:c48f791ae73c77a09e60b2525cf0448c9575d1888a59b781db0854018f9b239b,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722447238113423306,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-8qccd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aa7bae7b-0ce2-407a-b46f-94178fb43071,},Annotations:map[string]string{io.kubernetes.container.hash: c6c137a2,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\
",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:220f5364b7adf5b5afb91f5fb001a204f8d65e35d532d4414f05c18f55443647,PodSandboxId:9fb2f68daf6b4c3f642fe0bf15c8564304161c163b9a7841e7199e290f8266d0,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722447232191060397,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: da4e724
7-c042-4da0-9015-e4242d18d043,},Annotations:map[string]string{io.kubernetes.container.hash: 3feca24c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:382ba3eb1c2830200fa38a7ce4a97e818badbd910e8576f35bdf33aca05b304d,PodSandboxId:a02944a7e020ead4730ba570dbf278d1dca969c8908f82d409692291b67b11a2,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_RUNNING,CreatedAt:1722447231968256795,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-pklkm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 23b4d2f3-c925-4a0e-8c9a-ecda421332bf,},An
notations:map[string]string{io.kubernetes.container.hash: 3edbfc23,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:468fe46f35c8cc80128ee1ec6a6050cf94aef0cb919789ed8d25c9880b9dddfc,PodSandboxId:cf7fedce8a8e2283c2bf7e044adbfee19fd6000751ae43fca8a4acec4b252b04,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722447231885288652,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-v6xrd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cb9f0873-0309-40a8-a2f1-c1c6f0713034,},Annotations:map[string]string{io.ku
bernetes.container.hash: bd0970e4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c69ac13c491b850716408eaae15dee0ae649f67e725ee682990cc4a93bea42c7,PodSandboxId:2bf9cbf0d84160d2a35a75b8ead180d4b5d00a2b6df4167e059c390adc4feb5b,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722447231787642917,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-498089,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2950d4410b3899ccbdb0536947b29f40,},Annotations:map[string]string{io.kubernetes.conta
iner.hash: 836ad03f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ba2f3c3d981875bdc99216720fb78f7187ed031bddd10f4b276b85fbc1df0506,PodSandboxId:3d398e58b156ce29cb02ed249f6dc03fa818bd96b64d85485f9caed70558b00e,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722447231774717639,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-498089,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e893babbbefa5fea11a4d995a36606db,},Annotations:map[string]string{io.kub
ernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ac3076f8ec41fd7a430cd72c45fdcc849cb9cf7d80dfe935dd4dd7508fb27c0b,PodSandboxId:e682678fcf5c0d70faf9f60078593d5aac3cfa6f19ed070b054499cd1777ac4c,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722447231731178026,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-498089,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: be60b8808f43a8b7b2c4a1a6190bedac,},Annotations:map[string]string{io.kubernetes.container.hash: ed86d2d2,io.kubernet
es.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7df64e7974e8c57b3618f920f29d46ee4cfe200717f7cba6834e6697f01fc257,PodSandboxId:55e5e7a975946471ddc15d954d8aae8ead7fdf08b655cb2a094568d0b96448e5,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722447231698881591,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-498089,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: da27596f1275a6292fe83598b4e87488,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.r
estartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6edfcf47092f7b6539d7a28ad02cdce800748c9d7c70a7bf92a9dee67cb4d6e3,PodSandboxId:c48f791ae73c77a09e60b2525cf0448c9575d1888a59b781db0854018f9b239b,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722447225720418667,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-8qccd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aa7bae7b-0ce2-407a-b46f-94178fb43071,},Annotations:map[string]string{io.kubernetes.container.hash: c6c137a2,io.kubernetes.container.ports: [{\"name\":\"dns\",\"conta
inerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b3483951aa5a43223a694e974bdd881840004a97e4f10e615f4b7e77226b510b,PodSandboxId:1b608d19493aec6aee4416c17b5fa94aca430d3038a95515eeb351fb189a7485,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1722446905505296513,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-tm4jn,i
o.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 1616740e-4445-47e5-9891-6dc753c5f655,},Annotations:map[string]string{io.kubernetes.container.hash: 81b7c80c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f713a250ad9e752c0b2d45c5140624e047b77ac24ace1449a8979bc7ce711e59,PodSandboxId:5f299cbe8df6c27e091c1d433f7b097f9ffa033439712432efb2428b97ec1448,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1722446853710746781,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.
namespace: kube-system,io.kubernetes.pod.uid: da4e7247-c042-4da0-9015-e4242d18d043,},Annotations:map[string]string{io.kubernetes.container.hash: 3feca24c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:11f278a9b370395f6b3a7cd171498efe28e15ce5b995f45ba5e647d8dff3db53,PodSandboxId:2a0003bbb9ba5f20c9d30e31392966f73c497407209243ed9580004366cf542e,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_EXITED,CreatedAt:1722446841873004606,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-pklkm,io.kubernetes.pod.n
amespace: kube-system,io.kubernetes.pod.uid: 23b4d2f3-c925-4a0e-8c9a-ecda421332bf,},Annotations:map[string]string{io.kubernetes.container.hash: 3edbfc23,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:87a364936bdef18ba9cf82cf399cac1f0c9d855f2bbd704616f08ba3c0154e86,PodSandboxId:91d6c19702c62d87dc679aec3596bc68e23bf1a4709fa745a785db0238ce000a,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_EXITED,CreatedAt:1722446838113033393,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-v6xrd,io.kubernetes.pod.namespace: kube-system,io.kubernete
s.pod.uid: cb9f0873-0309-40a8-a2f1-c1c6f0713034,},Annotations:map[string]string{io.kubernetes.container.hash: bd0970e4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fc640bbf40886bebe629913240d2f5ee56a9b60ed6a6ce6e7da42e278f0303e3,PodSandboxId:49b123da2a4acafe5dd6d86339589e7ed5e3cf1c97e5e26ebe6872488e23e4f1,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_EXITED,CreatedAt:1722446819083942198,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-498089,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
da27596f1275a6292fe83598b4e87488,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:741c5d19093c08d4889a8536dba6fa217e98a0ff4f96ef237f57d12dd34a844b,PodSandboxId:9cd9353b23377bcce311f56786acd497e80b965f81b4be89c96f16a3864b308b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_EXITED,CreatedAt:1722446819078861479,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-498089,io.kubernetes.pod.namespace: kube-system,io.kubernet
es.pod.uid: e893babbbefa5fea11a4d995a36606db,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7e257027c671ae2fa9681210deb701fd3b243b02260ef45b7eb80c12efe72477,PodSandboxId:6d8470cc9c9f5bb56d63f5c516d5fdf90527f9a92dd424db9d3c91087b7531e1,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1722446819035923110,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-498089,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: be60b8808f43a8b7b2c4a1a6190bedac,
},Annotations:map[string]string{io.kubernetes.container.hash: ed86d2d2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7706d4d5c5b20f986a98dbe39b9b5f757972cfa89dd0f91e0d780ab4229cd836,PodSandboxId:0d0cc81cdaa8e63316d97c94da40dea16ff0b5d9045b12763cdd34428d116101,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_EXITED,CreatedAt:1722446819025862909,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-498089,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2950d4410b3899ccbdb0536947b29f40,},Annotations:m
ap[string]string{io.kubernetes.container.hash: 836ad03f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=e793fc66-d208-4bf4-af81-b7c9789571a5 name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 17:38:00 multinode-498089 crio[2873]: time="2024-07-31 17:38:00.730466060Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=fca25f08-83a5-400a-9fad-41dfdd2810d2 name=/runtime.v1.RuntimeService/Version
	Jul 31 17:38:00 multinode-498089 crio[2873]: time="2024-07-31 17:38:00.730542534Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=fca25f08-83a5-400a-9fad-41dfdd2810d2 name=/runtime.v1.RuntimeService/Version
	Jul 31 17:38:00 multinode-498089 crio[2873]: time="2024-07-31 17:38:00.731639920Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=07953906-87ee-4203-b222-dd73ee1c9c99 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 31 17:38:00 multinode-498089 crio[2873]: time="2024-07-31 17:38:00.732071667Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722447480732051574,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143052,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=07953906-87ee-4203-b222-dd73ee1c9c99 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 31 17:38:00 multinode-498089 crio[2873]: time="2024-07-31 17:38:00.732739911Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=77f4cc81-54b9-4a0a-ae7e-393726402ca4 name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 17:38:00 multinode-498089 crio[2873]: time="2024-07-31 17:38:00.732802309Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=77f4cc81-54b9-4a0a-ae7e-393726402ca4 name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 17:38:00 multinode-498089 crio[2873]: time="2024-07-31 17:38:00.733139885Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:6c307c10d491e1b37eefec0b1f0f26f544260f80f7b5121d2175ba0f280a1993,PodSandboxId:1eda2cad03dd79c622a01ae97c4428d2c284ef771ff3a5bcb6e37a51060dbb03,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1722447265355133149,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-tm4jn,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 1616740e-4445-47e5-9891-6dc753c5f655,},Annotations:map[string]string{io.kubernetes.container.hash: 81b7c80c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:04f3c08ad545a96ddd065cd0e6e06a3ea8e50d3fe11d31e16e77052c6179dc24,PodSandboxId:c48f791ae73c77a09e60b2525cf0448c9575d1888a59b781db0854018f9b239b,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722447238113423306,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-8qccd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aa7bae7b-0ce2-407a-b46f-94178fb43071,},Annotations:map[string]string{io.kubernetes.container.hash: c6c137a2,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\
",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:220f5364b7adf5b5afb91f5fb001a204f8d65e35d532d4414f05c18f55443647,PodSandboxId:9fb2f68daf6b4c3f642fe0bf15c8564304161c163b9a7841e7199e290f8266d0,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722447232191060397,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: da4e724
7-c042-4da0-9015-e4242d18d043,},Annotations:map[string]string{io.kubernetes.container.hash: 3feca24c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:382ba3eb1c2830200fa38a7ce4a97e818badbd910e8576f35bdf33aca05b304d,PodSandboxId:a02944a7e020ead4730ba570dbf278d1dca969c8908f82d409692291b67b11a2,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_RUNNING,CreatedAt:1722447231968256795,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-pklkm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 23b4d2f3-c925-4a0e-8c9a-ecda421332bf,},An
notations:map[string]string{io.kubernetes.container.hash: 3edbfc23,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:468fe46f35c8cc80128ee1ec6a6050cf94aef0cb919789ed8d25c9880b9dddfc,PodSandboxId:cf7fedce8a8e2283c2bf7e044adbfee19fd6000751ae43fca8a4acec4b252b04,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722447231885288652,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-v6xrd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cb9f0873-0309-40a8-a2f1-c1c6f0713034,},Annotations:map[string]string{io.ku
bernetes.container.hash: bd0970e4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c69ac13c491b850716408eaae15dee0ae649f67e725ee682990cc4a93bea42c7,PodSandboxId:2bf9cbf0d84160d2a35a75b8ead180d4b5d00a2b6df4167e059c390adc4feb5b,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722447231787642917,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-498089,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2950d4410b3899ccbdb0536947b29f40,},Annotations:map[string]string{io.kubernetes.conta
iner.hash: 836ad03f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ba2f3c3d981875bdc99216720fb78f7187ed031bddd10f4b276b85fbc1df0506,PodSandboxId:3d398e58b156ce29cb02ed249f6dc03fa818bd96b64d85485f9caed70558b00e,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722447231774717639,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-498089,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e893babbbefa5fea11a4d995a36606db,},Annotations:map[string]string{io.kub
ernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ac3076f8ec41fd7a430cd72c45fdcc849cb9cf7d80dfe935dd4dd7508fb27c0b,PodSandboxId:e682678fcf5c0d70faf9f60078593d5aac3cfa6f19ed070b054499cd1777ac4c,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722447231731178026,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-498089,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: be60b8808f43a8b7b2c4a1a6190bedac,},Annotations:map[string]string{io.kubernetes.container.hash: ed86d2d2,io.kubernet
es.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7df64e7974e8c57b3618f920f29d46ee4cfe200717f7cba6834e6697f01fc257,PodSandboxId:55e5e7a975946471ddc15d954d8aae8ead7fdf08b655cb2a094568d0b96448e5,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722447231698881591,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-498089,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: da27596f1275a6292fe83598b4e87488,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.r
estartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6edfcf47092f7b6539d7a28ad02cdce800748c9d7c70a7bf92a9dee67cb4d6e3,PodSandboxId:c48f791ae73c77a09e60b2525cf0448c9575d1888a59b781db0854018f9b239b,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722447225720418667,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-8qccd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aa7bae7b-0ce2-407a-b46f-94178fb43071,},Annotations:map[string]string{io.kubernetes.container.hash: c6c137a2,io.kubernetes.container.ports: [{\"name\":\"dns\",\"conta
inerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b3483951aa5a43223a694e974bdd881840004a97e4f10e615f4b7e77226b510b,PodSandboxId:1b608d19493aec6aee4416c17b5fa94aca430d3038a95515eeb351fb189a7485,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1722446905505296513,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-tm4jn,i
o.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 1616740e-4445-47e5-9891-6dc753c5f655,},Annotations:map[string]string{io.kubernetes.container.hash: 81b7c80c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f713a250ad9e752c0b2d45c5140624e047b77ac24ace1449a8979bc7ce711e59,PodSandboxId:5f299cbe8df6c27e091c1d433f7b097f9ffa033439712432efb2428b97ec1448,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1722446853710746781,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.
namespace: kube-system,io.kubernetes.pod.uid: da4e7247-c042-4da0-9015-e4242d18d043,},Annotations:map[string]string{io.kubernetes.container.hash: 3feca24c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:11f278a9b370395f6b3a7cd171498efe28e15ce5b995f45ba5e647d8dff3db53,PodSandboxId:2a0003bbb9ba5f20c9d30e31392966f73c497407209243ed9580004366cf542e,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_EXITED,CreatedAt:1722446841873004606,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-pklkm,io.kubernetes.pod.n
amespace: kube-system,io.kubernetes.pod.uid: 23b4d2f3-c925-4a0e-8c9a-ecda421332bf,},Annotations:map[string]string{io.kubernetes.container.hash: 3edbfc23,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:87a364936bdef18ba9cf82cf399cac1f0c9d855f2bbd704616f08ba3c0154e86,PodSandboxId:91d6c19702c62d87dc679aec3596bc68e23bf1a4709fa745a785db0238ce000a,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_EXITED,CreatedAt:1722446838113033393,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-v6xrd,io.kubernetes.pod.namespace: kube-system,io.kubernete
s.pod.uid: cb9f0873-0309-40a8-a2f1-c1c6f0713034,},Annotations:map[string]string{io.kubernetes.container.hash: bd0970e4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fc640bbf40886bebe629913240d2f5ee56a9b60ed6a6ce6e7da42e278f0303e3,PodSandboxId:49b123da2a4acafe5dd6d86339589e7ed5e3cf1c97e5e26ebe6872488e23e4f1,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_EXITED,CreatedAt:1722446819083942198,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-498089,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
da27596f1275a6292fe83598b4e87488,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:741c5d19093c08d4889a8536dba6fa217e98a0ff4f96ef237f57d12dd34a844b,PodSandboxId:9cd9353b23377bcce311f56786acd497e80b965f81b4be89c96f16a3864b308b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_EXITED,CreatedAt:1722446819078861479,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-498089,io.kubernetes.pod.namespace: kube-system,io.kubernet
es.pod.uid: e893babbbefa5fea11a4d995a36606db,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7e257027c671ae2fa9681210deb701fd3b243b02260ef45b7eb80c12efe72477,PodSandboxId:6d8470cc9c9f5bb56d63f5c516d5fdf90527f9a92dd424db9d3c91087b7531e1,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1722446819035923110,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-498089,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: be60b8808f43a8b7b2c4a1a6190bedac,
},Annotations:map[string]string{io.kubernetes.container.hash: ed86d2d2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7706d4d5c5b20f986a98dbe39b9b5f757972cfa89dd0f91e0d780ab4229cd836,PodSandboxId:0d0cc81cdaa8e63316d97c94da40dea16ff0b5d9045b12763cdd34428d116101,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_EXITED,CreatedAt:1722446819025862909,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-498089,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2950d4410b3899ccbdb0536947b29f40,},Annotations:m
ap[string]string{io.kubernetes.container.hash: 836ad03f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=77f4cc81-54b9-4a0a-ae7e-393726402ca4 name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 17:38:00 multinode-498089 crio[2873]: time="2024-07-31 17:38:00.778256624Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=ba04d4d2-e2eb-4bf1-918e-5f3e4902bd99 name=/runtime.v1.RuntimeService/Version
	Jul 31 17:38:00 multinode-498089 crio[2873]: time="2024-07-31 17:38:00.778393451Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=ba04d4d2-e2eb-4bf1-918e-5f3e4902bd99 name=/runtime.v1.RuntimeService/Version
	Jul 31 17:38:00 multinode-498089 crio[2873]: time="2024-07-31 17:38:00.779640549Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=c37abb73-745a-4f9a-832a-632bd15d49fc name=/runtime.v1.ImageService/ImageFsInfo
	Jul 31 17:38:00 multinode-498089 crio[2873]: time="2024-07-31 17:38:00.780192899Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722447480780170120,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143052,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=c37abb73-745a-4f9a-832a-632bd15d49fc name=/runtime.v1.ImageService/ImageFsInfo
	Jul 31 17:38:00 multinode-498089 crio[2873]: time="2024-07-31 17:38:00.780745655Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=de6cf5cb-e6bf-4542-9c65-3fbb1572fd62 name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 17:38:00 multinode-498089 crio[2873]: time="2024-07-31 17:38:00.780805557Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=de6cf5cb-e6bf-4542-9c65-3fbb1572fd62 name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 17:38:00 multinode-498089 crio[2873]: time="2024-07-31 17:38:00.781480512Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:6c307c10d491e1b37eefec0b1f0f26f544260f80f7b5121d2175ba0f280a1993,PodSandboxId:1eda2cad03dd79c622a01ae97c4428d2c284ef771ff3a5bcb6e37a51060dbb03,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1722447265355133149,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-tm4jn,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 1616740e-4445-47e5-9891-6dc753c5f655,},Annotations:map[string]string{io.kubernetes.container.hash: 81b7c80c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:04f3c08ad545a96ddd065cd0e6e06a3ea8e50d3fe11d31e16e77052c6179dc24,PodSandboxId:c48f791ae73c77a09e60b2525cf0448c9575d1888a59b781db0854018f9b239b,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722447238113423306,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-8qccd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aa7bae7b-0ce2-407a-b46f-94178fb43071,},Annotations:map[string]string{io.kubernetes.container.hash: c6c137a2,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\
",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:220f5364b7adf5b5afb91f5fb001a204f8d65e35d532d4414f05c18f55443647,PodSandboxId:9fb2f68daf6b4c3f642fe0bf15c8564304161c163b9a7841e7199e290f8266d0,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722447232191060397,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: da4e724
7-c042-4da0-9015-e4242d18d043,},Annotations:map[string]string{io.kubernetes.container.hash: 3feca24c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:382ba3eb1c2830200fa38a7ce4a97e818badbd910e8576f35bdf33aca05b304d,PodSandboxId:a02944a7e020ead4730ba570dbf278d1dca969c8908f82d409692291b67b11a2,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_RUNNING,CreatedAt:1722447231968256795,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-pklkm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 23b4d2f3-c925-4a0e-8c9a-ecda421332bf,},An
notations:map[string]string{io.kubernetes.container.hash: 3edbfc23,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:468fe46f35c8cc80128ee1ec6a6050cf94aef0cb919789ed8d25c9880b9dddfc,PodSandboxId:cf7fedce8a8e2283c2bf7e044adbfee19fd6000751ae43fca8a4acec4b252b04,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722447231885288652,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-v6xrd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cb9f0873-0309-40a8-a2f1-c1c6f0713034,},Annotations:map[string]string{io.ku
bernetes.container.hash: bd0970e4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c69ac13c491b850716408eaae15dee0ae649f67e725ee682990cc4a93bea42c7,PodSandboxId:2bf9cbf0d84160d2a35a75b8ead180d4b5d00a2b6df4167e059c390adc4feb5b,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722447231787642917,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-498089,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2950d4410b3899ccbdb0536947b29f40,},Annotations:map[string]string{io.kubernetes.conta
iner.hash: 836ad03f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ba2f3c3d981875bdc99216720fb78f7187ed031bddd10f4b276b85fbc1df0506,PodSandboxId:3d398e58b156ce29cb02ed249f6dc03fa818bd96b64d85485f9caed70558b00e,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722447231774717639,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-498089,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e893babbbefa5fea11a4d995a36606db,},Annotations:map[string]string{io.kub
ernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ac3076f8ec41fd7a430cd72c45fdcc849cb9cf7d80dfe935dd4dd7508fb27c0b,PodSandboxId:e682678fcf5c0d70faf9f60078593d5aac3cfa6f19ed070b054499cd1777ac4c,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722447231731178026,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-498089,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: be60b8808f43a8b7b2c4a1a6190bedac,},Annotations:map[string]string{io.kubernetes.container.hash: ed86d2d2,io.kubernet
es.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7df64e7974e8c57b3618f920f29d46ee4cfe200717f7cba6834e6697f01fc257,PodSandboxId:55e5e7a975946471ddc15d954d8aae8ead7fdf08b655cb2a094568d0b96448e5,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722447231698881591,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-498089,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: da27596f1275a6292fe83598b4e87488,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.r
estartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6edfcf47092f7b6539d7a28ad02cdce800748c9d7c70a7bf92a9dee67cb4d6e3,PodSandboxId:c48f791ae73c77a09e60b2525cf0448c9575d1888a59b781db0854018f9b239b,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722447225720418667,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-8qccd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aa7bae7b-0ce2-407a-b46f-94178fb43071,},Annotations:map[string]string{io.kubernetes.container.hash: c6c137a2,io.kubernetes.container.ports: [{\"name\":\"dns\",\"conta
inerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b3483951aa5a43223a694e974bdd881840004a97e4f10e615f4b7e77226b510b,PodSandboxId:1b608d19493aec6aee4416c17b5fa94aca430d3038a95515eeb351fb189a7485,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1722446905505296513,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-tm4jn,i
o.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 1616740e-4445-47e5-9891-6dc753c5f655,},Annotations:map[string]string{io.kubernetes.container.hash: 81b7c80c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f713a250ad9e752c0b2d45c5140624e047b77ac24ace1449a8979bc7ce711e59,PodSandboxId:5f299cbe8df6c27e091c1d433f7b097f9ffa033439712432efb2428b97ec1448,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1722446853710746781,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.
namespace: kube-system,io.kubernetes.pod.uid: da4e7247-c042-4da0-9015-e4242d18d043,},Annotations:map[string]string{io.kubernetes.container.hash: 3feca24c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:11f278a9b370395f6b3a7cd171498efe28e15ce5b995f45ba5e647d8dff3db53,PodSandboxId:2a0003bbb9ba5f20c9d30e31392966f73c497407209243ed9580004366cf542e,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_EXITED,CreatedAt:1722446841873004606,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-pklkm,io.kubernetes.pod.n
amespace: kube-system,io.kubernetes.pod.uid: 23b4d2f3-c925-4a0e-8c9a-ecda421332bf,},Annotations:map[string]string{io.kubernetes.container.hash: 3edbfc23,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:87a364936bdef18ba9cf82cf399cac1f0c9d855f2bbd704616f08ba3c0154e86,PodSandboxId:91d6c19702c62d87dc679aec3596bc68e23bf1a4709fa745a785db0238ce000a,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_EXITED,CreatedAt:1722446838113033393,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-v6xrd,io.kubernetes.pod.namespace: kube-system,io.kubernete
s.pod.uid: cb9f0873-0309-40a8-a2f1-c1c6f0713034,},Annotations:map[string]string{io.kubernetes.container.hash: bd0970e4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fc640bbf40886bebe629913240d2f5ee56a9b60ed6a6ce6e7da42e278f0303e3,PodSandboxId:49b123da2a4acafe5dd6d86339589e7ed5e3cf1c97e5e26ebe6872488e23e4f1,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_EXITED,CreatedAt:1722446819083942198,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-498089,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
da27596f1275a6292fe83598b4e87488,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:741c5d19093c08d4889a8536dba6fa217e98a0ff4f96ef237f57d12dd34a844b,PodSandboxId:9cd9353b23377bcce311f56786acd497e80b965f81b4be89c96f16a3864b308b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_EXITED,CreatedAt:1722446819078861479,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-498089,io.kubernetes.pod.namespace: kube-system,io.kubernet
es.pod.uid: e893babbbefa5fea11a4d995a36606db,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7e257027c671ae2fa9681210deb701fd3b243b02260ef45b7eb80c12efe72477,PodSandboxId:6d8470cc9c9f5bb56d63f5c516d5fdf90527f9a92dd424db9d3c91087b7531e1,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1722446819035923110,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-498089,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: be60b8808f43a8b7b2c4a1a6190bedac,
},Annotations:map[string]string{io.kubernetes.container.hash: ed86d2d2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7706d4d5c5b20f986a98dbe39b9b5f757972cfa89dd0f91e0d780ab4229cd836,PodSandboxId:0d0cc81cdaa8e63316d97c94da40dea16ff0b5d9045b12763cdd34428d116101,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_EXITED,CreatedAt:1722446819025862909,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-498089,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2950d4410b3899ccbdb0536947b29f40,},Annotations:m
ap[string]string{io.kubernetes.container.hash: 836ad03f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=de6cf5cb-e6bf-4542-9c65-3fbb1572fd62 name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 17:38:00 multinode-498089 crio[2873]: time="2024-07-31 17:38:00.823264261Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=97fb3566-b9cf-4ade-9822-b3f095a525bb name=/runtime.v1.RuntimeService/Version
	Jul 31 17:38:00 multinode-498089 crio[2873]: time="2024-07-31 17:38:00.823377768Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=97fb3566-b9cf-4ade-9822-b3f095a525bb name=/runtime.v1.RuntimeService/Version
	Jul 31 17:38:00 multinode-498089 crio[2873]: time="2024-07-31 17:38:00.824370682Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=9421b6e3-32c9-4de2-8aa2-8402cf39610e name=/runtime.v1.ImageService/ImageFsInfo
	Jul 31 17:38:00 multinode-498089 crio[2873]: time="2024-07-31 17:38:00.824800049Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722447480824777284,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143052,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=9421b6e3-32c9-4de2-8aa2-8402cf39610e name=/runtime.v1.ImageService/ImageFsInfo
	Jul 31 17:38:00 multinode-498089 crio[2873]: time="2024-07-31 17:38:00.825544585Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=00183e26-141b-4fcc-97f7-f851e4c18bb5 name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 17:38:00 multinode-498089 crio[2873]: time="2024-07-31 17:38:00.825599518Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=00183e26-141b-4fcc-97f7-f851e4c18bb5 name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 17:38:00 multinode-498089 crio[2873]: time="2024-07-31 17:38:00.826245140Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:6c307c10d491e1b37eefec0b1f0f26f544260f80f7b5121d2175ba0f280a1993,PodSandboxId:1eda2cad03dd79c622a01ae97c4428d2c284ef771ff3a5bcb6e37a51060dbb03,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1722447265355133149,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-tm4jn,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 1616740e-4445-47e5-9891-6dc753c5f655,},Annotations:map[string]string{io.kubernetes.container.hash: 81b7c80c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:04f3c08ad545a96ddd065cd0e6e06a3ea8e50d3fe11d31e16e77052c6179dc24,PodSandboxId:c48f791ae73c77a09e60b2525cf0448c9575d1888a59b781db0854018f9b239b,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722447238113423306,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-8qccd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aa7bae7b-0ce2-407a-b46f-94178fb43071,},Annotations:map[string]string{io.kubernetes.container.hash: c6c137a2,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\
",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:220f5364b7adf5b5afb91f5fb001a204f8d65e35d532d4414f05c18f55443647,PodSandboxId:9fb2f68daf6b4c3f642fe0bf15c8564304161c163b9a7841e7199e290f8266d0,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722447232191060397,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: da4e724
7-c042-4da0-9015-e4242d18d043,},Annotations:map[string]string{io.kubernetes.container.hash: 3feca24c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:382ba3eb1c2830200fa38a7ce4a97e818badbd910e8576f35bdf33aca05b304d,PodSandboxId:a02944a7e020ead4730ba570dbf278d1dca969c8908f82d409692291b67b11a2,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_RUNNING,CreatedAt:1722447231968256795,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-pklkm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 23b4d2f3-c925-4a0e-8c9a-ecda421332bf,},An
notations:map[string]string{io.kubernetes.container.hash: 3edbfc23,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:468fe46f35c8cc80128ee1ec6a6050cf94aef0cb919789ed8d25c9880b9dddfc,PodSandboxId:cf7fedce8a8e2283c2bf7e044adbfee19fd6000751ae43fca8a4acec4b252b04,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722447231885288652,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-v6xrd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cb9f0873-0309-40a8-a2f1-c1c6f0713034,},Annotations:map[string]string{io.ku
bernetes.container.hash: bd0970e4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c69ac13c491b850716408eaae15dee0ae649f67e725ee682990cc4a93bea42c7,PodSandboxId:2bf9cbf0d84160d2a35a75b8ead180d4b5d00a2b6df4167e059c390adc4feb5b,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722447231787642917,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-498089,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2950d4410b3899ccbdb0536947b29f40,},Annotations:map[string]string{io.kubernetes.conta
iner.hash: 836ad03f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ba2f3c3d981875bdc99216720fb78f7187ed031bddd10f4b276b85fbc1df0506,PodSandboxId:3d398e58b156ce29cb02ed249f6dc03fa818bd96b64d85485f9caed70558b00e,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722447231774717639,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-498089,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e893babbbefa5fea11a4d995a36606db,},Annotations:map[string]string{io.kub
ernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ac3076f8ec41fd7a430cd72c45fdcc849cb9cf7d80dfe935dd4dd7508fb27c0b,PodSandboxId:e682678fcf5c0d70faf9f60078593d5aac3cfa6f19ed070b054499cd1777ac4c,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722447231731178026,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-498089,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: be60b8808f43a8b7b2c4a1a6190bedac,},Annotations:map[string]string{io.kubernetes.container.hash: ed86d2d2,io.kubernet
es.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7df64e7974e8c57b3618f920f29d46ee4cfe200717f7cba6834e6697f01fc257,PodSandboxId:55e5e7a975946471ddc15d954d8aae8ead7fdf08b655cb2a094568d0b96448e5,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722447231698881591,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-498089,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: da27596f1275a6292fe83598b4e87488,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.r
estartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6edfcf47092f7b6539d7a28ad02cdce800748c9d7c70a7bf92a9dee67cb4d6e3,PodSandboxId:c48f791ae73c77a09e60b2525cf0448c9575d1888a59b781db0854018f9b239b,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722447225720418667,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-8qccd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aa7bae7b-0ce2-407a-b46f-94178fb43071,},Annotations:map[string]string{io.kubernetes.container.hash: c6c137a2,io.kubernetes.container.ports: [{\"name\":\"dns\",\"conta
inerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b3483951aa5a43223a694e974bdd881840004a97e4f10e615f4b7e77226b510b,PodSandboxId:1b608d19493aec6aee4416c17b5fa94aca430d3038a95515eeb351fb189a7485,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1722446905505296513,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-tm4jn,i
o.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 1616740e-4445-47e5-9891-6dc753c5f655,},Annotations:map[string]string{io.kubernetes.container.hash: 81b7c80c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f713a250ad9e752c0b2d45c5140624e047b77ac24ace1449a8979bc7ce711e59,PodSandboxId:5f299cbe8df6c27e091c1d433f7b097f9ffa033439712432efb2428b97ec1448,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1722446853710746781,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.
namespace: kube-system,io.kubernetes.pod.uid: da4e7247-c042-4da0-9015-e4242d18d043,},Annotations:map[string]string{io.kubernetes.container.hash: 3feca24c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:11f278a9b370395f6b3a7cd171498efe28e15ce5b995f45ba5e647d8dff3db53,PodSandboxId:2a0003bbb9ba5f20c9d30e31392966f73c497407209243ed9580004366cf542e,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_EXITED,CreatedAt:1722446841873004606,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-pklkm,io.kubernetes.pod.n
amespace: kube-system,io.kubernetes.pod.uid: 23b4d2f3-c925-4a0e-8c9a-ecda421332bf,},Annotations:map[string]string{io.kubernetes.container.hash: 3edbfc23,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:87a364936bdef18ba9cf82cf399cac1f0c9d855f2bbd704616f08ba3c0154e86,PodSandboxId:91d6c19702c62d87dc679aec3596bc68e23bf1a4709fa745a785db0238ce000a,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_EXITED,CreatedAt:1722446838113033393,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-v6xrd,io.kubernetes.pod.namespace: kube-system,io.kubernete
s.pod.uid: cb9f0873-0309-40a8-a2f1-c1c6f0713034,},Annotations:map[string]string{io.kubernetes.container.hash: bd0970e4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fc640bbf40886bebe629913240d2f5ee56a9b60ed6a6ce6e7da42e278f0303e3,PodSandboxId:49b123da2a4acafe5dd6d86339589e7ed5e3cf1c97e5e26ebe6872488e23e4f1,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_EXITED,CreatedAt:1722446819083942198,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-498089,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
da27596f1275a6292fe83598b4e87488,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:741c5d19093c08d4889a8536dba6fa217e98a0ff4f96ef237f57d12dd34a844b,PodSandboxId:9cd9353b23377bcce311f56786acd497e80b965f81b4be89c96f16a3864b308b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_EXITED,CreatedAt:1722446819078861479,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-498089,io.kubernetes.pod.namespace: kube-system,io.kubernet
es.pod.uid: e893babbbefa5fea11a4d995a36606db,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7e257027c671ae2fa9681210deb701fd3b243b02260ef45b7eb80c12efe72477,PodSandboxId:6d8470cc9c9f5bb56d63f5c516d5fdf90527f9a92dd424db9d3c91087b7531e1,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1722446819035923110,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-498089,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: be60b8808f43a8b7b2c4a1a6190bedac,
},Annotations:map[string]string{io.kubernetes.container.hash: ed86d2d2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7706d4d5c5b20f986a98dbe39b9b5f757972cfa89dd0f91e0d780ab4229cd836,PodSandboxId:0d0cc81cdaa8e63316d97c94da40dea16ff0b5d9045b12763cdd34428d116101,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_EXITED,CreatedAt:1722446819025862909,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-498089,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2950d4410b3899ccbdb0536947b29f40,},Annotations:m
ap[string]string{io.kubernetes.container.hash: 836ad03f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=00183e26-141b-4fcc-97f7-f851e4c18bb5 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	6c307c10d491e       8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a                                      3 minutes ago       Running             busybox                   1                   1eda2cad03dd7       busybox-fc5497c4f-tm4jn
	04f3c08ad545a       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      4 minutes ago       Running             coredns                   2                   c48f791ae73c7       coredns-7db6d8ff4d-8qccd
	220f5364b7adf       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      4 minutes ago       Running             storage-provisioner       1                   9fb2f68daf6b4       storage-provisioner
	382ba3eb1c283       6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46                                      4 minutes ago       Running             kindnet-cni               1                   a02944a7e020e       kindnet-pklkm
	468fe46f35c8c       55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1                                      4 minutes ago       Running             kube-proxy                1                   cf7fedce8a8e2       kube-proxy-v6xrd
	c69ac13c491b8       1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d                                      4 minutes ago       Running             kube-apiserver            1                   2bf9cbf0d8416       kube-apiserver-multinode-498089
	ba2f3c3d98187       76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e                                      4 minutes ago       Running             kube-controller-manager   1                   3d398e58b156c       kube-controller-manager-multinode-498089
	ac3076f8ec41f       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      4 minutes ago       Running             etcd                      1                   e682678fcf5c0       etcd-multinode-498089
	7df64e7974e8c       3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2                                      4 minutes ago       Running             kube-scheduler            1                   55e5e7a975946       kube-scheduler-multinode-498089
	6edfcf47092f7       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      4 minutes ago       Exited              coredns                   1                   c48f791ae73c7       coredns-7db6d8ff4d-8qccd
	b3483951aa5a4       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   9 minutes ago       Exited              busybox                   0                   1b608d19493ae       busybox-fc5497c4f-tm4jn
	f713a250ad9e7       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      10 minutes ago      Exited              storage-provisioner       0                   5f299cbe8df6c       storage-provisioner
	11f278a9b3703       docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9    10 minutes ago      Exited              kindnet-cni               0                   2a0003bbb9ba5       kindnet-pklkm
	87a364936bdef       55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1                                      10 minutes ago      Exited              kube-proxy                0                   91d6c19702c62       kube-proxy-v6xrd
	fc640bbf40886       3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2                                      11 minutes ago      Exited              kube-scheduler            0                   49b123da2a4ac       kube-scheduler-multinode-498089
	741c5d19093c0       76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e                                      11 minutes ago      Exited              kube-controller-manager   0                   9cd9353b23377       kube-controller-manager-multinode-498089
	7e257027c671a       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      11 minutes ago      Exited              etcd                      0                   6d8470cc9c9f5       etcd-multinode-498089
	7706d4d5c5b20       1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d                                      11 minutes ago      Exited              kube-apiserver            0                   0d0cc81cdaa8e       kube-apiserver-multinode-498089
	
	
	==> coredns [04f3c08ad545a96ddd065cd0e6e06a3ea8e50d3fe11d31e16e77052c6179dc24] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:37019 - 17009 "HINFO IN 935776094504024851.1920634029359281869. udp 56 false 512" NXDOMAIN qr,rd,ra 56 0.011088744s
	
	
	==> coredns [6edfcf47092f7b6539d7a28ad02cdce800748c9d7c70a7bf92a9dee67cb4d6e3] <==
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] plugin/health: Going into lameduck mode for 5s
	[INFO] 127.0.0.1:50152 - 8875 "HINFO IN 4546732856793314619.2462259501056362839. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.011965382s
	
	
	==> describe nodes <==
	Name:               multinode-498089
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-498089
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=1d737dad7efa60c56d30434fcd857dd3b14c91d9
	                    minikube.k8s.io/name=multinode-498089
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_07_31T17_27_05_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 31 Jul 2024 17:27:01 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-498089
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 31 Jul 2024 17:37:52 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 31 Jul 2024 17:33:57 +0000   Wed, 31 Jul 2024 17:26:59 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 31 Jul 2024 17:33:57 +0000   Wed, 31 Jul 2024 17:26:59 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 31 Jul 2024 17:33:57 +0000   Wed, 31 Jul 2024 17:26:59 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 31 Jul 2024 17:33:57 +0000   Wed, 31 Jul 2024 17:27:33 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.100
	  Hostname:    multinode-498089
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 4bbeac4f7a8446b2b13fe255fcd04320
	  System UUID:                4bbeac4f-7a84-46b2-b13f-e255fcd04320
	  Boot ID:                    6eea6dfe-41da-4ca0-a8df-788bf7c1456e
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-tm4jn                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m39s
	  kube-system                 coredns-7db6d8ff4d-8qccd                    100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     10m
	  kube-system                 etcd-multinode-498089                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         10m
	  kube-system                 kindnet-pklkm                               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      10m
	  kube-system                 kube-apiserver-multinode-498089             250m (12%)    0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-controller-manager-multinode-498089    200m (10%)    0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-proxy-v6xrd                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-scheduler-multinode-498089             100m (5%)     0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   100m (5%)
	  memory             220Mi (10%)  220Mi (10%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type     Reason                   Age    From             Message
	  ----     ------                   ----   ----             -------
	  Normal   Starting                 4m5s   kube-proxy       
	  Normal   Starting                 10m    kube-proxy       
	  Normal   Starting                 10m    kubelet          Starting kubelet.
	  Normal   NodeAllocatableEnforced  10m    kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  10m    kubelet          Node multinode-498089 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    10m    kubelet          Node multinode-498089 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     10m    kubelet          Node multinode-498089 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           10m    node-controller  Node multinode-498089 event: Registered Node multinode-498089 in Controller
	  Normal   NodeReady                10m    kubelet          Node multinode-498089 status is now: NodeReady
	  Warning  ContainerGCFailed        4m57s  kubelet          rpc error: code = Unavailable desc = connection error: desc = "transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory"
	  Normal   Starting                 4m4s   kubelet          Starting kubelet.
	  Normal   NodeAllocatableEnforced  4m4s   kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  4m4s   kubelet          Node multinode-498089 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    4m4s   kubelet          Node multinode-498089 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     4m4s   kubelet          Node multinode-498089 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           3m54s  node-controller  Node multinode-498089 event: Registered Node multinode-498089 in Controller
	
	
	Name:               multinode-498089-m02
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-498089-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=1d737dad7efa60c56d30434fcd857dd3b14c91d9
	                    minikube.k8s.io/name=multinode-498089
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_07_31T17_34_36_0700
	                    minikube.k8s.io/version=v1.33.1
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 31 Jul 2024 17:34:36 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-498089-m02
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 31 Jul 2024 17:35:38 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Wed, 31 Jul 2024 17:35:06 +0000   Wed, 31 Jul 2024 17:36:22 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Wed, 31 Jul 2024 17:35:06 +0000   Wed, 31 Jul 2024 17:36:22 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Wed, 31 Jul 2024 17:35:06 +0000   Wed, 31 Jul 2024 17:36:22 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Wed, 31 Jul 2024 17:35:06 +0000   Wed, 31 Jul 2024 17:36:22 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.39.228
	  Hostname:    multinode-498089-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 6666b100748c404c89f1f7ec662d9f1f
	  System UUID:                6666b100-748c-404c-89f1-f7ec662d9f1f
	  Boot ID:                    9c7b8a87-0000-49d5-bf02-b17e82c6b0e8
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                       ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-tzt7d    0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m29s
	  kube-system                 kindnet-5lbxl              100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      10m
	  kube-system                 kube-proxy-kwpbv           0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 3m20s                  kube-proxy       
	  Normal  Starting                 9m55s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  10m (x2 over 10m)      kubelet          Node multinode-498089-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    10m (x2 over 10m)      kubelet          Node multinode-498089-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     10m (x2 over 10m)      kubelet          Node multinode-498089-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  10m                    kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                9m41s                  kubelet          Node multinode-498089-m02 status is now: NodeReady
	  Normal  NodeHasSufficientMemory  3m25s (x2 over 3m25s)  kubelet          Node multinode-498089-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m25s (x2 over 3m25s)  kubelet          Node multinode-498089-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m25s (x2 over 3m25s)  kubelet          Node multinode-498089-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  3m25s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                3m6s                   kubelet          Node multinode-498089-m02 status is now: NodeReady
	  Normal  NodeNotReady             99s                    node-controller  Node multinode-498089-m02 status is now: NodeNotReady
	
	
	==> dmesg <==
	[  +0.186092] systemd-fstab-generator[617]: Ignoring "noauto" option for root device
	[  +0.132350] systemd-fstab-generator[629]: Ignoring "noauto" option for root device
	[  +0.246079] systemd-fstab-generator[659]: Ignoring "noauto" option for root device
	[  +3.876296] systemd-fstab-generator[757]: Ignoring "noauto" option for root device
	[  +4.442091] systemd-fstab-generator[938]: Ignoring "noauto" option for root device
	[  +0.055536] kauditd_printk_skb: 158 callbacks suppressed
	[Jul31 17:27] systemd-fstab-generator[1268]: Ignoring "noauto" option for root device
	[  +0.079775] kauditd_printk_skb: 69 callbacks suppressed
	[  +7.157910] kauditd_printk_skb: 18 callbacks suppressed
	[  +6.394220] systemd-fstab-generator[1456]: Ignoring "noauto" option for root device
	[  +5.222878] kauditd_printk_skb: 57 callbacks suppressed
	[Jul31 17:28] kauditd_printk_skb: 12 callbacks suppressed
	[Jul31 17:33] systemd-fstab-generator[2793]: Ignoring "noauto" option for root device
	[  +0.138604] systemd-fstab-generator[2805]: Ignoring "noauto" option for root device
	[  +0.165295] systemd-fstab-generator[2819]: Ignoring "noauto" option for root device
	[  +0.142416] systemd-fstab-generator[2831]: Ignoring "noauto" option for root device
	[  +0.265089] systemd-fstab-generator[2859]: Ignoring "noauto" option for root device
	[  +7.376665] systemd-fstab-generator[2957]: Ignoring "noauto" option for root device
	[  +0.083685] kauditd_printk_skb: 100 callbacks suppressed
	[  +6.367429] kauditd_printk_skb: 22 callbacks suppressed
	[  +5.792878] systemd-fstab-generator[3827]: Ignoring "noauto" option for root device
	[  +0.095197] kauditd_printk_skb: 62 callbacks suppressed
	[Jul31 17:34] kauditd_printk_skb: 19 callbacks suppressed
	[  +4.896080] systemd-fstab-generator[3990]: Ignoring "noauto" option for root device
	[ +12.856099] kauditd_printk_skb: 14 callbacks suppressed
	
	
	==> etcd [7e257027c671ae2fa9681210deb701fd3b243b02260ef45b7eb80c12efe72477] <==
	{"level":"info","ts":"2024-07-31T17:27:00.217593Z","caller":"traceutil/trace.go:171","msg":"trace[1565674192] range","detail":"{range_begin:/registry/namespaces/; range_end:/registry/namespaces0; response_count:0; response_revision:1; }","duration":"102.991985ms","start":"2024-07-31T17:27:00.114586Z","end":"2024-07-31T17:27:00.217578Z","steps":["trace[1565674192] 'range keys from in-memory index tree'  (duration: 102.708148ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-31T17:27:00.216593Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"101.933947ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/namespaces/\" range_end:\"/registry/namespaces0\" count_only:true ","response":"range_response_count:0 size:4"}
	{"level":"info","ts":"2024-07-31T17:27:00.218495Z","caller":"traceutil/trace.go:171","msg":"trace[2126615920] range","detail":"{range_begin:/registry/namespaces/; range_end:/registry/namespaces0; response_count:0; response_revision:1; }","duration":"103.84998ms","start":"2024-07-31T17:27:00.114634Z","end":"2024-07-31T17:27:00.218484Z","steps":["trace[2126615920] 'count revisions from in-memory index tree'  (duration: 101.912998ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-31T17:28:01.069355Z","caller":"traceutil/trace.go:171","msg":"trace[810982051] transaction","detail":"{read_only:false; response_revision:442; number_of_response:1; }","duration":"168.052498ms","start":"2024-07-31T17:28:00.901218Z","end":"2024-07-31T17:28:01.06927Z","steps":["trace[810982051] 'process raft request'  (duration: 167.994131ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-31T17:28:01.069615Z","caller":"traceutil/trace.go:171","msg":"trace[760668613] transaction","detail":"{read_only:false; response_revision:441; number_of_response:1; }","duration":"230.533677ms","start":"2024-07-31T17:28:00.839064Z","end":"2024-07-31T17:28:01.069598Z","steps":["trace[760668613] 'process raft request'  (duration: 134.638601ms)","trace[760668613] 'compare'  (duration: 94.999503ms)"],"step_count":2}
	{"level":"info","ts":"2024-07-31T17:28:01.069781Z","caller":"traceutil/trace.go:171","msg":"trace[1606502940] linearizableReadLoop","detail":"{readStateIndex:463; appliedIndex:462; }","duration":"207.956319ms","start":"2024-07-31T17:28:00.861813Z","end":"2024-07-31T17:28:01.06977Z","steps":["trace[1606502940] 'read index received'  (duration: 112.043139ms)","trace[1606502940] 'applied index is now lower than readState.Index'  (duration: 95.912259ms)"],"step_count":2}
	{"level":"warn","ts":"2024-07-31T17:28:01.070156Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"208.253258ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/apiregistration.k8s.io/apiservices/\" range_end:\"/registry/apiregistration.k8s.io/apiservices0\" count_only:true ","response":"range_response_count:0 size:7"}
	{"level":"info","ts":"2024-07-31T17:28:01.070218Z","caller":"traceutil/trace.go:171","msg":"trace[692972716] range","detail":"{range_begin:/registry/apiregistration.k8s.io/apiservices/; range_end:/registry/apiregistration.k8s.io/apiservices0; response_count:0; response_revision:442; }","duration":"208.412365ms","start":"2024-07-31T17:28:00.86179Z","end":"2024-07-31T17:28:01.070202Z","steps":["trace[692972716] 'agreement among raft nodes before linearized reading'  (duration: 208.198087ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-31T17:28:52.871542Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"158.136735ms","expected-duration":"100ms","prefix":"","request":"header:<ID:2176523991335837742 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/events/default/multinode-498089-m03.17e75c5bfac225e3\" mod_revision:0 > success:<request_put:<key:\"/registry/events/default/multinode-498089-m03.17e75c5bfac225e3\" value_size:642 lease:2176523991335837389 >> failure:<>>","response":"size:16"}
	{"level":"info","ts":"2024-07-31T17:28:52.871737Z","caller":"traceutil/trace.go:171","msg":"trace[49114635] transaction","detail":"{read_only:false; response_revision:574; number_of_response:1; }","duration":"177.190214ms","start":"2024-07-31T17:28:52.69453Z","end":"2024-07-31T17:28:52.87172Z","steps":["trace[49114635] 'process raft request'  (duration: 177.145285ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-31T17:28:52.871818Z","caller":"traceutil/trace.go:171","msg":"trace[127697052] linearizableReadLoop","detail":"{readStateIndex:609; appliedIndex:608; }","duration":"190.248395ms","start":"2024-07-31T17:28:52.681556Z","end":"2024-07-31T17:28:52.871804Z","steps":["trace[127697052] 'read index received'  (duration: 31.293419ms)","trace[127697052] 'applied index is now lower than readState.Index'  (duration: 158.953245ms)"],"step_count":2}
	{"level":"info","ts":"2024-07-31T17:28:52.87197Z","caller":"traceutil/trace.go:171","msg":"trace[1484313498] transaction","detail":"{read_only:false; response_revision:573; number_of_response:1; }","duration":"236.038193ms","start":"2024-07-31T17:28:52.635925Z","end":"2024-07-31T17:28:52.871963Z","steps":["trace[1484313498] 'process raft request'  (duration: 76.917235ms)","trace[1484313498] 'compare'  (duration: 158.030867ms)"],"step_count":2}
	{"level":"warn","ts":"2024-07-31T17:28:52.872062Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"190.491237ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/multinode-498089-m03\" ","response":"range_response_count:1 size:1926"}
	{"level":"info","ts":"2024-07-31T17:28:52.875132Z","caller":"traceutil/trace.go:171","msg":"trace[413867055] range","detail":"{range_begin:/registry/minions/multinode-498089-m03; range_end:; response_count:1; response_revision:574; }","duration":"193.58366ms","start":"2024-07-31T17:28:52.681531Z","end":"2024-07-31T17:28:52.875115Z","steps":["trace[413867055] 'agreement among raft nodes before linearized reading'  (duration: 190.461391ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-31T17:32:05.825562Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2024-07-31T17:32:05.825803Z","caller":"embed/etcd.go:375","msg":"closing etcd server","name":"multinode-498089","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.100:2380"],"advertise-client-urls":["https://192.168.39.100:2379"]}
	{"level":"warn","ts":"2024-07-31T17:32:05.839775Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-07-31T17:32:05.839946Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	2024/07/31 17:32:05 WARNING: [core] [Server #8] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	{"level":"warn","ts":"2024-07-31T17:32:05.885774Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.39.100:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-07-31T17:32:05.885862Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.39.100:2379: use of closed network connection"}
	{"level":"info","ts":"2024-07-31T17:32:05.887301Z","caller":"etcdserver/server.go:1471","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"3276445ff8d31e34","current-leader-member-id":"3276445ff8d31e34"}
	{"level":"info","ts":"2024-07-31T17:32:05.889712Z","caller":"embed/etcd.go:579","msg":"stopping serving peer traffic","address":"192.168.39.100:2380"}
	{"level":"info","ts":"2024-07-31T17:32:05.889882Z","caller":"embed/etcd.go:584","msg":"stopped serving peer traffic","address":"192.168.39.100:2380"}
	{"level":"info","ts":"2024-07-31T17:32:05.889913Z","caller":"embed/etcd.go:377","msg":"closed etcd server","name":"multinode-498089","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.100:2380"],"advertise-client-urls":["https://192.168.39.100:2379"]}
	
	
	==> etcd [ac3076f8ec41fd7a430cd72c45fdcc849cb9cf7d80dfe935dd4dd7508fb27c0b] <==
	{"level":"info","ts":"2024-07-31T17:33:52.249042Z","caller":"embed/etcd.go:857","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-07-31T17:33:52.249271Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"3276445ff8d31e34 switched to configuration voters=(3636168928135421492)"}
	{"level":"info","ts":"2024-07-31T17:33:52.249409Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"6cf58294dcaef1c8","local-member-id":"3276445ff8d31e34","added-peer-id":"3276445ff8d31e34","added-peer-peer-urls":["https://192.168.39.100:2380"]}
	{"level":"info","ts":"2024-07-31T17:33:52.249524Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"6cf58294dcaef1c8","local-member-id":"3276445ff8d31e34","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-31T17:33:52.249561Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-31T17:33:52.250067Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.39.100:2380"}
	{"level":"info","ts":"2024-07-31T17:33:52.250103Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.39.100:2380"}
	{"level":"info","ts":"2024-07-31T17:33:52.256077Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap.db","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-07-31T17:33:52.256143Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-07-31T17:33:52.256153Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-07-31T17:33:53.897681Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"3276445ff8d31e34 is starting a new election at term 2"}
	{"level":"info","ts":"2024-07-31T17:33:53.897821Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"3276445ff8d31e34 became pre-candidate at term 2"}
	{"level":"info","ts":"2024-07-31T17:33:53.897886Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"3276445ff8d31e34 received MsgPreVoteResp from 3276445ff8d31e34 at term 2"}
	{"level":"info","ts":"2024-07-31T17:33:53.897926Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"3276445ff8d31e34 became candidate at term 3"}
	{"level":"info","ts":"2024-07-31T17:33:53.89795Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"3276445ff8d31e34 received MsgVoteResp from 3276445ff8d31e34 at term 3"}
	{"level":"info","ts":"2024-07-31T17:33:53.897977Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"3276445ff8d31e34 became leader at term 3"}
	{"level":"info","ts":"2024-07-31T17:33:53.898008Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 3276445ff8d31e34 elected leader 3276445ff8d31e34 at term 3"}
	{"level":"info","ts":"2024-07-31T17:33:53.902645Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-07-31T17:33:53.902886Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-07-31T17:33:53.903136Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-07-31T17:33:53.903177Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-07-31T17:33:53.902643Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"3276445ff8d31e34","local-member-attributes":"{Name:multinode-498089 ClientURLs:[https://192.168.39.100:2379]}","request-path":"/0/members/3276445ff8d31e34/attributes","cluster-id":"6cf58294dcaef1c8","publish-timeout":"7s"}
	{"level":"info","ts":"2024-07-31T17:33:53.904874Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-07-31T17:33:53.904875Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.100:2379"}
	{"level":"info","ts":"2024-07-31T17:35:19.149738Z","caller":"traceutil/trace.go:171","msg":"trace[1335440042] transaction","detail":"{read_only:false; response_revision:1188; number_of_response:1; }","duration":"156.181866ms","start":"2024-07-31T17:35:18.993453Z","end":"2024-07-31T17:35:19.149635Z","steps":["trace[1335440042] 'process raft request'  (duration: 156.03787ms)"],"step_count":1}
	
	
	==> kernel <==
	 17:38:01 up 11 min,  0 users,  load average: 0.14, 0.22, 0.17
	Linux multinode-498089 5.10.207 #1 SMP Mon Jul 29 15:19:02 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [11f278a9b370395f6b3a7cd171498efe28e15ce5b995f45ba5e647d8dff3db53] <==
	I0731 17:31:22.810829       1 main.go:322] Node multinode-498089-m03 has CIDR [10.244.3.0/24] 
	I0731 17:31:32.817996       1 main.go:295] Handling node with IPs: map[192.168.39.100:{}]
	I0731 17:31:32.818174       1 main.go:299] handling current node
	I0731 17:31:32.818220       1 main.go:295] Handling node with IPs: map[192.168.39.228:{}]
	I0731 17:31:32.818244       1 main.go:322] Node multinode-498089-m02 has CIDR [10.244.1.0/24] 
	I0731 17:31:32.818477       1 main.go:295] Handling node with IPs: map[192.168.39.204:{}]
	I0731 17:31:32.818516       1 main.go:322] Node multinode-498089-m03 has CIDR [10.244.3.0/24] 
	I0731 17:31:42.816702       1 main.go:295] Handling node with IPs: map[192.168.39.100:{}]
	I0731 17:31:42.816804       1 main.go:299] handling current node
	I0731 17:31:42.816836       1 main.go:295] Handling node with IPs: map[192.168.39.228:{}]
	I0731 17:31:42.816844       1 main.go:322] Node multinode-498089-m02 has CIDR [10.244.1.0/24] 
	I0731 17:31:42.817003       1 main.go:295] Handling node with IPs: map[192.168.39.204:{}]
	I0731 17:31:42.817023       1 main.go:322] Node multinode-498089-m03 has CIDR [10.244.3.0/24] 
	I0731 17:31:52.816936       1 main.go:295] Handling node with IPs: map[192.168.39.228:{}]
	I0731 17:31:52.817075       1 main.go:322] Node multinode-498089-m02 has CIDR [10.244.1.0/24] 
	I0731 17:31:52.817262       1 main.go:295] Handling node with IPs: map[192.168.39.204:{}]
	I0731 17:31:52.817289       1 main.go:322] Node multinode-498089-m03 has CIDR [10.244.3.0/24] 
	I0731 17:31:52.817434       1 main.go:295] Handling node with IPs: map[192.168.39.100:{}]
	I0731 17:31:52.817463       1 main.go:299] handling current node
	I0731 17:32:02.809971       1 main.go:295] Handling node with IPs: map[192.168.39.100:{}]
	I0731 17:32:02.810146       1 main.go:299] handling current node
	I0731 17:32:02.810186       1 main.go:295] Handling node with IPs: map[192.168.39.228:{}]
	I0731 17:32:02.810205       1 main.go:322] Node multinode-498089-m02 has CIDR [10.244.1.0/24] 
	I0731 17:32:02.810462       1 main.go:295] Handling node with IPs: map[192.168.39.204:{}]
	I0731 17:32:02.810526       1 main.go:322] Node multinode-498089-m03 has CIDR [10.244.3.0/24] 
	
	
	==> kindnet [382ba3eb1c2830200fa38a7ce4a97e818badbd910e8576f35bdf33aca05b304d] <==
	I0731 17:36:52.922818       1 main.go:322] Node multinode-498089-m02 has CIDR [10.244.1.0/24] 
	I0731 17:37:02.927540       1 main.go:295] Handling node with IPs: map[192.168.39.100:{}]
	I0731 17:37:02.927655       1 main.go:299] handling current node
	I0731 17:37:02.927692       1 main.go:295] Handling node with IPs: map[192.168.39.228:{}]
	I0731 17:37:02.927712       1 main.go:322] Node multinode-498089-m02 has CIDR [10.244.1.0/24] 
	I0731 17:37:12.931111       1 main.go:295] Handling node with IPs: map[192.168.39.100:{}]
	I0731 17:37:12.931156       1 main.go:299] handling current node
	I0731 17:37:12.931171       1 main.go:295] Handling node with IPs: map[192.168.39.228:{}]
	I0731 17:37:12.931176       1 main.go:322] Node multinode-498089-m02 has CIDR [10.244.1.0/24] 
	I0731 17:37:22.922859       1 main.go:295] Handling node with IPs: map[192.168.39.228:{}]
	I0731 17:37:22.922979       1 main.go:322] Node multinode-498089-m02 has CIDR [10.244.1.0/24] 
	I0731 17:37:22.923128       1 main.go:295] Handling node with IPs: map[192.168.39.100:{}]
	I0731 17:37:22.923154       1 main.go:299] handling current node
	I0731 17:37:32.927477       1 main.go:295] Handling node with IPs: map[192.168.39.100:{}]
	I0731 17:37:32.927621       1 main.go:299] handling current node
	I0731 17:37:32.927663       1 main.go:295] Handling node with IPs: map[192.168.39.228:{}]
	I0731 17:37:32.927686       1 main.go:322] Node multinode-498089-m02 has CIDR [10.244.1.0/24] 
	I0731 17:37:42.923581       1 main.go:295] Handling node with IPs: map[192.168.39.100:{}]
	I0731 17:37:42.923692       1 main.go:299] handling current node
	I0731 17:37:42.923732       1 main.go:295] Handling node with IPs: map[192.168.39.228:{}]
	I0731 17:37:42.923741       1 main.go:322] Node multinode-498089-m02 has CIDR [10.244.1.0/24] 
	I0731 17:37:52.923540       1 main.go:295] Handling node with IPs: map[192.168.39.100:{}]
	I0731 17:37:52.923604       1 main.go:299] handling current node
	I0731 17:37:52.923625       1 main.go:295] Handling node with IPs: map[192.168.39.228:{}]
	I0731 17:37:52.923630       1 main.go:322] Node multinode-498089-m02 has CIDR [10.244.1.0/24] 
	
	
	==> kube-apiserver [7706d4d5c5b20f986a98dbe39b9b5f757972cfa89dd0f91e0d780ab4229cd836] <==
	I0731 17:32:05.844423       1 crd_finalizer.go:278] Shutting down CRDFinalizer
	I0731 17:32:05.844472       1 apiapproval_controller.go:198] Shutting down KubernetesAPIApprovalPolicyConformantConditionController
	I0731 17:32:05.844525       1 nonstructuralschema_controller.go:204] Shutting down NonStructuralSchemaConditionController
	I0731 17:32:05.844546       1 establishing_controller.go:87] Shutting down EstablishingController
	I0731 17:32:05.844581       1 naming_controller.go:302] Shutting down NamingConditionController
	I0731 17:32:05.844626       1 crdregistration_controller.go:142] Shutting down crd-autoregister controller
	I0731 17:32:05.844656       1 system_namespaces_controller.go:77] Shutting down system namespaces controller
	I0731 17:32:05.844678       1 apiservice_controller.go:131] Shutting down APIServiceRegistrationController
	I0731 17:32:05.844715       1 customresource_discovery_controller.go:325] Shutting down DiscoveryController
	I0731 17:32:05.844739       1 apf_controller.go:386] Shutting down API Priority and Fairness config worker
	I0731 17:32:05.844788       1 controller.go:129] Ending legacy_token_tracking_controller
	I0731 17:32:05.844813       1 controller.go:130] Shutting down legacy_token_tracking_controller
	I0731 17:32:05.844841       1 controller.go:117] Shutting down OpenAPI V3 controller
	I0731 17:32:05.844883       1 controller.go:167] Shutting down OpenAPI controller
	I0731 17:32:05.844920       1 available_controller.go:439] Shutting down AvailableConditionController
	I0731 17:32:05.845012       1 dynamic_serving_content.go:146] "Shutting down controller" name="aggregator-proxy-cert::/var/lib/minikube/certs/front-proxy-client.crt::/var/lib/minikube/certs/front-proxy-client.key"
	I0731 17:32:05.846474       1 dynamic_cafile_content.go:171] "Shutting down controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0731 17:32:05.846572       1 dynamic_cafile_content.go:171] "Shutting down controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0731 17:32:05.846619       1 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0731 17:32:05.846658       1 controller.go:86] Shutting down OpenAPI V3 AggregationController
	I0731 17:32:05.846682       1 controller.go:84] Shutting down OpenAPI AggregationController
	I0731 17:32:05.846974       1 controller.go:176] quota evaluator worker shutdown
	I0731 17:32:05.847007       1 controller.go:176] quota evaluator worker shutdown
	I0731 17:32:05.847015       1 controller.go:176] quota evaluator worker shutdown
	I0731 17:32:05.847024       1 controller.go:176] quota evaluator worker shutdown
	
	
	==> kube-apiserver [c69ac13c491b850716408eaae15dee0ae649f67e725ee682990cc4a93bea42c7] <==
	I0731 17:33:55.166875       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0731 17:33:55.223839       1 shared_informer.go:320] Caches are synced for configmaps
	I0731 17:33:55.226012       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0731 17:33:55.228294       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0731 17:33:55.228414       1 policy_source.go:224] refreshing policies
	I0731 17:33:55.229241       1 apf_controller.go:379] Running API Priority and Fairness config worker
	I0731 17:33:55.229293       1 apf_controller.go:382] Running API Priority and Fairness periodic rebalancing process
	I0731 17:33:55.230114       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0731 17:33:55.267641       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0731 17:33:55.267820       1 aggregator.go:165] initial CRD sync complete...
	I0731 17:33:55.267864       1 autoregister_controller.go:141] Starting autoregister controller
	I0731 17:33:55.267889       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0731 17:33:55.267912       1 cache.go:39] Caches are synced for autoregister controller
	I0731 17:33:55.309879       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0731 17:33:55.318264       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0731 17:33:55.320541       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0731 17:33:55.333817       1 handler_discovery.go:447] Starting ResourceDiscoveryManager
	I0731 17:33:56.121106       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0731 17:33:58.245379       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0731 17:33:58.364074       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0731 17:33:58.375456       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0731 17:33:58.437856       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0731 17:33:58.443020       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0731 17:34:07.533249       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0731 17:34:07.581837       1 controller.go:615] quota admission added evaluator for: endpoints
	
	
	==> kube-controller-manager [741c5d19093c08d4889a8536dba6fa217e98a0ff4f96ef237f57d12dd34a844b] <==
	I0731 17:28:01.071204       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-498089-m02\" does not exist"
	I0731 17:28:01.101996       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-498089-m02" podCIDRs=["10.244.1.0/24"]
	I0731 17:28:01.598130       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-498089-m02"
	I0731 17:28:20.298672       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-498089-m02"
	I0731 17:28:22.744590       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="37.571731ms"
	I0731 17:28:22.769456       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="24.803508ms"
	I0731 17:28:22.769796       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="35.885µs"
	I0731 17:28:22.769917       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="25.109µs"
	I0731 17:28:26.022002       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="7.936772ms"
	I0731 17:28:26.023255       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="75.696µs"
	I0731 17:28:26.435181       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="6.031937ms"
	I0731 17:28:26.435385       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="86.519µs"
	I0731 17:28:52.873864       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-498089-m03\" does not exist"
	I0731 17:28:52.873917       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-498089-m02"
	I0731 17:28:52.882620       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-498089-m03" podCIDRs=["10.244.2.0/24"]
	I0731 17:28:56.614570       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-498089-m03"
	I0731 17:29:12.288472       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-498089-m02"
	I0731 17:29:40.039580       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-498089-m02"
	I0731 17:29:41.037376       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-498089-m02"
	I0731 17:29:41.039533       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-498089-m03\" does not exist"
	I0731 17:29:41.049940       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-498089-m03" podCIDRs=["10.244.3.0/24"]
	I0731 17:30:00.392946       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-498089-m02"
	I0731 17:30:41.671528       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-498089-m02"
	I0731 17:30:46.744740       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="9.966199ms"
	I0731 17:30:46.745227       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="71.331µs"
	
	
	==> kube-controller-manager [ba2f3c3d981875bdc99216720fb78f7187ed031bddd10f4b276b85fbc1df0506] <==
	I0731 17:34:32.171759       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="122.089µs"
	I0731 17:34:36.330463       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-498089-m02\" does not exist"
	I0731 17:34:36.344966       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-498089-m02" podCIDRs=["10.244.1.0/24"]
	I0731 17:34:38.278649       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="39.396µs"
	I0731 17:34:38.298732       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="40.263µs"
	I0731 17:34:38.315778       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="50.297µs"
	I0731 17:34:38.323530       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="51.229µs"
	I0731 17:34:38.325820       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="33.306µs"
	I0731 17:34:55.781291       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-498089-m02"
	I0731 17:34:55.800723       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="188.351µs"
	I0731 17:34:55.815260       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="38.934µs"
	I0731 17:34:59.369363       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="12.865655ms"
	I0731 17:34:59.369469       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="45.23µs"
	I0731 17:35:13.845487       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-498089-m02"
	I0731 17:35:14.842438       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-498089-m02"
	I0731 17:35:14.842954       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-498089-m03\" does not exist"
	I0731 17:35:14.868148       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-498089-m03" podCIDRs=["10.244.2.0/24"]
	I0731 17:35:34.240820       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-498089-m02"
	I0731 17:35:39.492050       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-498089-m02"
	I0731 17:36:22.650473       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="6.735533ms"
	I0731 17:36:22.650907       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="157.108µs"
	I0731 17:36:27.561074       1 gc_controller.go:344] "PodGC is force deleting Pod" logger="pod-garbage-collector-controller" pod="kube-system/kube-proxy-ppb5z"
	I0731 17:36:27.581001       1 gc_controller.go:260] "Forced deletion of orphaned Pod succeeded" logger="pod-garbage-collector-controller" pod="kube-system/kube-proxy-ppb5z"
	I0731 17:36:27.581112       1 gc_controller.go:344] "PodGC is force deleting Pod" logger="pod-garbage-collector-controller" pod="kube-system/kindnet-pckbq"
	I0731 17:36:27.606077       1 gc_controller.go:260] "Forced deletion of orphaned Pod succeeded" logger="pod-garbage-collector-controller" pod="kube-system/kindnet-pckbq"
	
	
	==> kube-proxy [468fe46f35c8cc80128ee1ec6a6050cf94aef0cb919789ed8d25c9880b9dddfc] <==
	I0731 17:33:53.234727       1 server_linux.go:69] "Using iptables proxy"
	I0731 17:33:55.259776       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.39.100"]
	I0731 17:33:55.360861       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0731 17:33:55.360925       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0731 17:33:55.360944       1 server_linux.go:165] "Using iptables Proxier"
	I0731 17:33:55.370578       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0731 17:33:55.371043       1 server.go:872] "Version info" version="v1.30.3"
	I0731 17:33:55.371083       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0731 17:33:55.373882       1 config.go:192] "Starting service config controller"
	I0731 17:33:55.373923       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0731 17:33:55.375226       1 config.go:101] "Starting endpoint slice config controller"
	I0731 17:33:55.375246       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0731 17:33:55.376236       1 config.go:319] "Starting node config controller"
	I0731 17:33:55.376256       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0731 17:33:55.476072       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0731 17:33:55.476420       1 shared_informer.go:320] Caches are synced for node config
	I0731 17:33:55.476447       1 shared_informer.go:320] Caches are synced for service config
	
	
	==> kube-proxy [87a364936bdef18ba9cf82cf399cac1f0c9d855f2bbd704616f08ba3c0154e86] <==
	I0731 17:27:18.507586       1 server_linux.go:69] "Using iptables proxy"
	I0731 17:27:18.537337       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.39.100"]
	I0731 17:27:18.604697       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0731 17:27:18.604739       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0731 17:27:18.604756       1 server_linux.go:165] "Using iptables Proxier"
	I0731 17:27:18.607226       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0731 17:27:18.607905       1 server.go:872] "Version info" version="v1.30.3"
	I0731 17:27:18.607921       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0731 17:27:18.609625       1 config.go:192] "Starting service config controller"
	I0731 17:27:18.609675       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0731 17:27:18.609714       1 config.go:101] "Starting endpoint slice config controller"
	I0731 17:27:18.609732       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0731 17:27:18.611733       1 config.go:319] "Starting node config controller"
	I0731 17:27:18.611776       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0731 17:27:18.709869       1 shared_informer.go:320] Caches are synced for service config
	I0731 17:27:18.709868       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0731 17:27:18.712657       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [7df64e7974e8c57b3618f920f29d46ee4cfe200717f7cba6834e6697f01fc257] <==
	W0731 17:33:55.180694       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0731 17:33:55.180774       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0731 17:33:55.180804       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0731 17:33:55.180828       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0731 17:33:55.216047       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.30.3"
	I0731 17:33:55.216148       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0731 17:33:55.217744       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0731 17:33:55.220404       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0731 17:33:55.220504       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0731 17:33:55.226661       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	W0731 17:33:55.229720       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0731 17:33:55.229798       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0731 17:33:55.229894       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0731 17:33:55.229927       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0731 17:33:55.229991       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope: RBAC: clusterrole.rbac.authorization.k8s.io "system:basic-user" not found
	E0731 17:33:55.230024       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope: RBAC: clusterrole.rbac.authorization.k8s.io "system:basic-user" not found
	W0731 17:33:55.232634       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0731 17:33:55.232737       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0731 17:33:55.232847       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0731 17:33:55.232882       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0731 17:33:55.232939       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0731 17:33:55.232968       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0731 17:33:55.233023       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0731 17:33:55.233051       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	I0731 17:33:55.327368       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kube-scheduler [fc640bbf40886bebe629913240d2f5ee56a9b60ed6a6ce6e7da42e278f0303e3] <==
	W0731 17:27:01.405651       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0731 17:27:01.406216       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0731 17:27:01.405707       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0731 17:27:01.406242       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0731 17:27:02.435489       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0731 17:27:02.435592       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0731 17:27:02.445901       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0731 17:27:02.446014       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0731 17:27:02.459579       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0731 17:27:02.459672       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0731 17:27:02.495280       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0731 17:27:02.495419       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0731 17:27:02.532977       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0731 17:27:02.533110       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0731 17:27:02.537874       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0731 17:27:02.537923       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0731 17:27:02.583027       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0731 17:27:02.583068       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0731 17:27:02.609935       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0731 17:27:02.609980       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	I0731 17:27:04.894386       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0731 17:32:05.817704       1 secure_serving.go:258] Stopped listening on 127.0.0.1:10259
	I0731 17:32:05.817835       1 tlsconfig.go:255] "Shutting down DynamicServingCertificateController"
	I0731 17:32:05.818217       1 configmap_cafile_content.go:223] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	E0731 17:32:05.819703       1 run.go:74] "command failed" err="finished without leader elect"
	
	
	==> kubelet <==
	Jul 31 17:33:57 multinode-498089 kubelet[3834]: I0731 17:33:57.853126    3834 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/23b4d2f3-c925-4a0e-8c9a-ecda421332bf-xtables-lock\") pod \"kindnet-pklkm\" (UID: \"23b4d2f3-c925-4a0e-8c9a-ecda421332bf\") " pod="kube-system/kindnet-pklkm"
	Jul 31 17:33:57 multinode-498089 kubelet[3834]: I0731 17:33:57.853188    3834 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/e893babbbefa5fea11a4d995a36606db-ca-certs\") pod \"kube-controller-manager-multinode-498089\" (UID: \"e893babbbefa5fea11a4d995a36606db\") " pod="kube-system/kube-controller-manager-multinode-498089"
	Jul 31 17:33:57 multinode-498089 kubelet[3834]: I0731 17:33:57.853205    3834 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/cb9f0873-0309-40a8-a2f1-c1c6f0713034-lib-modules\") pod \"kube-proxy-v6xrd\" (UID: \"cb9f0873-0309-40a8-a2f1-c1c6f0713034\") " pod="kube-system/kube-proxy-v6xrd"
	Jul 31 17:33:57 multinode-498089 kubelet[3834]: I0731 17:33:57.853229    3834 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/da27596f1275a6292fe83598b4e87488-kubeconfig\") pod \"kube-scheduler-multinode-498089\" (UID: \"da27596f1275a6292fe83598b4e87488\") " pod="kube-system/kube-scheduler-multinode-498089"
	Jul 31 17:33:58 multinode-498089 kubelet[3834]: I0731 17:33:58.089457    3834 scope.go:117] "RemoveContainer" containerID="6edfcf47092f7b6539d7a28ad02cdce800748c9d7c70a7bf92a9dee67cb4d6e3"
	Jul 31 17:34:57 multinode-498089 kubelet[3834]: E0731 17:34:57.743636    3834 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 31 17:34:57 multinode-498089 kubelet[3834]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 31 17:34:57 multinode-498089 kubelet[3834]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 31 17:34:57 multinode-498089 kubelet[3834]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 31 17:34:57 multinode-498089 kubelet[3834]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 31 17:35:57 multinode-498089 kubelet[3834]: E0731 17:35:57.740098    3834 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 31 17:35:57 multinode-498089 kubelet[3834]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 31 17:35:57 multinode-498089 kubelet[3834]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 31 17:35:57 multinode-498089 kubelet[3834]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 31 17:35:57 multinode-498089 kubelet[3834]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 31 17:36:57 multinode-498089 kubelet[3834]: E0731 17:36:57.739200    3834 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 31 17:36:57 multinode-498089 kubelet[3834]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 31 17:36:57 multinode-498089 kubelet[3834]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 31 17:36:57 multinode-498089 kubelet[3834]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 31 17:36:57 multinode-498089 kubelet[3834]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 31 17:37:57 multinode-498089 kubelet[3834]: E0731 17:37:57.739464    3834 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 31 17:37:57 multinode-498089 kubelet[3834]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 31 17:37:57 multinode-498089 kubelet[3834]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 31 17:37:57 multinode-498089 kubelet[3834]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 31 17:37:57 multinode-498089 kubelet[3834]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0731 17:38:00.424270   46717 logs.go:258] failed to output last start logs: failed to read file /home/jenkins/minikube-integration/19349-8084/.minikube/logs/lastStart.txt: bufio.Scanner: token too long

                                                
                                                
** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p multinode-498089 -n multinode-498089
helpers_test.go:261: (dbg) Run:  kubectl --context multinode-498089 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiNode/serial/StopMultiNode FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiNode/serial/StopMultiNode (141.34s)
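A rough manual sketch of the stop-and-verify sequence this test exercises, assuming the profile name multinode-498089 seen in the post-mortem above (the harness may use additional flags and timeouts not shown here):

    # stop every node of the multi-node profile, then check what the status command reports for the hosts and API server
    out/minikube-linux-amd64 stop -p multinode-498089
    out/minikube-linux-amd64 status --format={{.Host}} -p multinode-498089
    out/minikube-linux-amd64 status --format={{.APIServer}} -p multinode-498089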

                                                
                                    
x
+
TestPreload (217.74s)

                                                
                                                
=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-106909 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.24.4
E0731 17:42:57.005596   15259 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/functional-496242/client.crt: no such file or directory
E0731 17:43:48.393073   15259 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/addons-190022/client.crt: no such file or directory
E0731 17:44:05.346277   15259 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/addons-190022/client.crt: no such file or directory
preload_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-106909 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.24.4: (2m18.25568646s)
preload_test.go:52: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-106909 image pull gcr.io/k8s-minikube/busybox
preload_test.go:52: (dbg) Done: out/minikube-linux-amd64 -p test-preload-106909 image pull gcr.io/k8s-minikube/busybox: (2.800595998s)
preload_test.go:58: (dbg) Run:  out/minikube-linux-amd64 stop -p test-preload-106909
preload_test.go:58: (dbg) Done: out/minikube-linux-amd64 stop -p test-preload-106909: (6.570599651s)
preload_test.go:66: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-106909 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=kvm2  --container-runtime=crio
preload_test.go:66: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-106909 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=kvm2  --container-runtime=crio: (1m7.24657s)
preload_test.go:71: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-106909 image list
preload_test.go:76: Expected to find gcr.io/k8s-minikube/busybox in image list output, instead got 
-- stdout --
	registry.k8s.io/pause:3.7
	registry.k8s.io/kube-scheduler:v1.24.4
	registry.k8s.io/kube-proxy:v1.24.4
	registry.k8s.io/kube-controller-manager:v1.24.4
	registry.k8s.io/kube-apiserver:v1.24.4
	registry.k8s.io/etcd:3.5.3-0
	registry.k8s.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/kube-scheduler:v1.24.4
	k8s.gcr.io/kube-proxy:v1.24.4
	k8s.gcr.io/kube-controller-manager:v1.24.4
	k8s.gcr.io/kube-apiserver:v1.24.4
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/coredns/coredns:v1.8.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	docker.io/kindest/kindnetd:v20220726-ed811e41

                                                
                                                
-- /stdout --
panic.go:626: *** TestPreload FAILED at 2024-07-31 17:45:25.023788722 +0000 UTC m=+3928.730526909
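The image list above is taken after the second start, which runs with preloads enabled for v1.24.4; gcr.io/k8s-minikube/busybox, pulled before the stop, is no longer present, which is what trips the assertion at preload_test.go:76. A minimal manual sketch of the sequence the test runs, using the profile name and core flags from the preload_test.go lines above (--alsologtostderr omitted):

    # 1. create the cluster without a preload tarball, on an older Kubernetes version
    out/minikube-linux-amd64 start -p test-preload-106909 --memory=2200 --wait=true --preload=false --driver=kvm2 --container-runtime=crio --kubernetes-version=v1.24.4
    # 2. pull an extra image so it exists only in the node's container storage
    out/minikube-linux-amd64 -p test-preload-106909 image pull gcr.io/k8s-minikube/busybox
    # 3. stop, then restart; the restart downloads and applies the v1.24.4 preload tarball (see the Last Start log below),
    #    which is the likely reason the previously pulled image disappears
    out/minikube-linux-amd64 stop -p test-preload-106909
    out/minikube-linux-amd64 start -p test-preload-106909 --memory=2200 --wait=true --driver=kvm2 --container-runtime=crio
    # 4. the pulled image is expected to still be listed after the restart
    out/minikube-linux-amd64 -p test-preload-106909 image list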
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p test-preload-106909 -n test-preload-106909
helpers_test.go:244: <<< TestPreload FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestPreload]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-106909 logs -n 25
helpers_test.go:252: TestPreload logs: 
-- stdout --
	
	==> Audit <==
	|---------|-----------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                                          Args                                           |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|-----------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| ssh     | multinode-498089 ssh -n                                                                 | multinode-498089     | jenkins | v1.33.1 | 31 Jul 24 17:29 UTC | 31 Jul 24 17:29 UTC |
	|         | multinode-498089-m03 sudo cat                                                           |                      |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                      |         |         |                     |                     |
	| ssh     | multinode-498089 ssh -n multinode-498089 sudo cat                                       | multinode-498089     | jenkins | v1.33.1 | 31 Jul 24 17:29 UTC | 31 Jul 24 17:29 UTC |
	|         | /home/docker/cp-test_multinode-498089-m03_multinode-498089.txt                          |                      |         |         |                     |                     |
	| cp      | multinode-498089 cp multinode-498089-m03:/home/docker/cp-test.txt                       | multinode-498089     | jenkins | v1.33.1 | 31 Jul 24 17:29 UTC | 31 Jul 24 17:29 UTC |
	|         | multinode-498089-m02:/home/docker/cp-test_multinode-498089-m03_multinode-498089-m02.txt |                      |         |         |                     |                     |
	| ssh     | multinode-498089 ssh -n                                                                 | multinode-498089     | jenkins | v1.33.1 | 31 Jul 24 17:29 UTC | 31 Jul 24 17:29 UTC |
	|         | multinode-498089-m03 sudo cat                                                           |                      |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                      |         |         |                     |                     |
	| ssh     | multinode-498089 ssh -n multinode-498089-m02 sudo cat                                   | multinode-498089     | jenkins | v1.33.1 | 31 Jul 24 17:29 UTC | 31 Jul 24 17:29 UTC |
	|         | /home/docker/cp-test_multinode-498089-m03_multinode-498089-m02.txt                      |                      |         |         |                     |                     |
	| node    | multinode-498089 node stop m03                                                          | multinode-498089     | jenkins | v1.33.1 | 31 Jul 24 17:29 UTC | 31 Jul 24 17:29 UTC |
	| node    | multinode-498089 node start                                                             | multinode-498089     | jenkins | v1.33.1 | 31 Jul 24 17:29 UTC | 31 Jul 24 17:30 UTC |
	|         | m03 -v=7 --alsologtostderr                                                              |                      |         |         |                     |                     |
	| node    | list -p multinode-498089                                                                | multinode-498089     | jenkins | v1.33.1 | 31 Jul 24 17:30 UTC |                     |
	| stop    | -p multinode-498089                                                                     | multinode-498089     | jenkins | v1.33.1 | 31 Jul 24 17:30 UTC |                     |
	| start   | -p multinode-498089                                                                     | multinode-498089     | jenkins | v1.33.1 | 31 Jul 24 17:32 UTC | 31 Jul 24 17:35 UTC |
	|         | --wait=true -v=8                                                                        |                      |         |         |                     |                     |
	|         | --alsologtostderr                                                                       |                      |         |         |                     |                     |
	| node    | list -p multinode-498089                                                                | multinode-498089     | jenkins | v1.33.1 | 31 Jul 24 17:35 UTC |                     |
	| node    | multinode-498089 node delete                                                            | multinode-498089     | jenkins | v1.33.1 | 31 Jul 24 17:35 UTC | 31 Jul 24 17:35 UTC |
	|         | m03                                                                                     |                      |         |         |                     |                     |
	| stop    | multinode-498089 stop                                                                   | multinode-498089     | jenkins | v1.33.1 | 31 Jul 24 17:35 UTC |                     |
	| start   | -p multinode-498089                                                                     | multinode-498089     | jenkins | v1.33.1 | 31 Jul 24 17:38 UTC | 31 Jul 24 17:41 UTC |
	|         | --wait=true -v=8                                                                        |                      |         |         |                     |                     |
	|         | --alsologtostderr                                                                       |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                           |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                |                      |         |         |                     |                     |
	| node    | list -p multinode-498089                                                                | multinode-498089     | jenkins | v1.33.1 | 31 Jul 24 17:41 UTC |                     |
	| start   | -p multinode-498089-m02                                                                 | multinode-498089-m02 | jenkins | v1.33.1 | 31 Jul 24 17:41 UTC |                     |
	|         | --driver=kvm2                                                                           |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                |                      |         |         |                     |                     |
	| start   | -p multinode-498089-m03                                                                 | multinode-498089-m03 | jenkins | v1.33.1 | 31 Jul 24 17:41 UTC | 31 Jul 24 17:41 UTC |
	|         | --driver=kvm2                                                                           |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                |                      |         |         |                     |                     |
	| node    | add -p multinode-498089                                                                 | multinode-498089     | jenkins | v1.33.1 | 31 Jul 24 17:41 UTC |                     |
	| delete  | -p multinode-498089-m03                                                                 | multinode-498089-m03 | jenkins | v1.33.1 | 31 Jul 24 17:41 UTC | 31 Jul 24 17:41 UTC |
	| delete  | -p multinode-498089                                                                     | multinode-498089     | jenkins | v1.33.1 | 31 Jul 24 17:41 UTC | 31 Jul 24 17:41 UTC |
	| start   | -p test-preload-106909                                                                  | test-preload-106909  | jenkins | v1.33.1 | 31 Jul 24 17:41 UTC | 31 Jul 24 17:44 UTC |
	|         | --memory=2200                                                                           |                      |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                                                           |                      |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                                                           |                      |         |         |                     |                     |
	|         |  --container-runtime=crio                                                               |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.24.4                                                            |                      |         |         |                     |                     |
	| image   | test-preload-106909 image pull                                                          | test-preload-106909  | jenkins | v1.33.1 | 31 Jul 24 17:44 UTC | 31 Jul 24 17:44 UTC |
	|         | gcr.io/k8s-minikube/busybox                                                             |                      |         |         |                     |                     |
	| stop    | -p test-preload-106909                                                                  | test-preload-106909  | jenkins | v1.33.1 | 31 Jul 24 17:44 UTC | 31 Jul 24 17:44 UTC |
	| start   | -p test-preload-106909                                                                  | test-preload-106909  | jenkins | v1.33.1 | 31 Jul 24 17:44 UTC | 31 Jul 24 17:45 UTC |
	|         | --memory=2200                                                                           |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                  |                      |         |         |                     |                     |
	|         | --wait=true --driver=kvm2                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                |                      |         |         |                     |                     |
	| image   | test-preload-106909 image list                                                          | test-preload-106909  | jenkins | v1.33.1 | 31 Jul 24 17:45 UTC | 31 Jul 24 17:45 UTC |
	|---------|-----------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/31 17:44:17
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0731 17:44:17.615517   49214 out.go:291] Setting OutFile to fd 1 ...
	I0731 17:44:17.615632   49214 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 17:44:17.615641   49214 out.go:304] Setting ErrFile to fd 2...
	I0731 17:44:17.615645   49214 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 17:44:17.615806   49214 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19349-8084/.minikube/bin
	I0731 17:44:17.616324   49214 out.go:298] Setting JSON to false
	I0731 17:44:17.617157   49214 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":5202,"bootTime":1722442656,"procs":180,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1065-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0731 17:44:17.617216   49214 start.go:139] virtualization: kvm guest
	I0731 17:44:17.619630   49214 out.go:177] * [test-preload-106909] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0731 17:44:17.621169   49214 out.go:177]   - MINIKUBE_LOCATION=19349
	I0731 17:44:17.621166   49214 notify.go:220] Checking for updates...
	I0731 17:44:17.622982   49214 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0731 17:44:17.624333   49214 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19349-8084/kubeconfig
	I0731 17:44:17.625526   49214 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19349-8084/.minikube
	I0731 17:44:17.626791   49214 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0731 17:44:17.628008   49214 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0731 17:44:17.629549   49214 config.go:182] Loaded profile config "test-preload-106909": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.24.4
	I0731 17:44:17.629932   49214 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 17:44:17.629989   49214 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 17:44:17.644393   49214 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39257
	I0731 17:44:17.644755   49214 main.go:141] libmachine: () Calling .GetVersion
	I0731 17:44:17.645225   49214 main.go:141] libmachine: Using API Version  1
	I0731 17:44:17.645246   49214 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 17:44:17.645559   49214 main.go:141] libmachine: () Calling .GetMachineName
	I0731 17:44:17.645722   49214 main.go:141] libmachine: (test-preload-106909) Calling .DriverName
	I0731 17:44:17.647511   49214 out.go:177] * Kubernetes 1.30.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.30.3
	I0731 17:44:17.648833   49214 driver.go:392] Setting default libvirt URI to qemu:///system
	I0731 17:44:17.649102   49214 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 17:44:17.649132   49214 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 17:44:17.663300   49214 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41923
	I0731 17:44:17.663681   49214 main.go:141] libmachine: () Calling .GetVersion
	I0731 17:44:17.664088   49214 main.go:141] libmachine: Using API Version  1
	I0731 17:44:17.664110   49214 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 17:44:17.664467   49214 main.go:141] libmachine: () Calling .GetMachineName
	I0731 17:44:17.664639   49214 main.go:141] libmachine: (test-preload-106909) Calling .DriverName
	I0731 17:44:17.698160   49214 out.go:177] * Using the kvm2 driver based on existing profile
	I0731 17:44:17.699479   49214 start.go:297] selected driver: kvm2
	I0731 17:44:17.699492   49214 start.go:901] validating driver "kvm2" against &{Name:test-preload-106909 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kub
ernetesVersion:v1.24.4 ClusterName:test-preload-106909 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.32 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L M
ountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0731 17:44:17.699614   49214 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0731 17:44:17.700279   49214 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0731 17:44:17.700374   49214 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19349-8084/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0731 17:44:17.714905   49214 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0731 17:44:17.715255   49214 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0731 17:44:17.715287   49214 cni.go:84] Creating CNI manager for ""
	I0731 17:44:17.715298   49214 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0731 17:44:17.715368   49214 start.go:340] cluster config:
	{Name:test-preload-106909 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-106909 Namespace:default APISe
rverHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.32 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountTyp
e:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0731 17:44:17.715531   49214 iso.go:125] acquiring lock: {Name:mk4494bb5a63902bf12a909c3109db85229bc516 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0731 17:44:17.717355   49214 out.go:177] * Starting "test-preload-106909" primary control-plane node in "test-preload-106909" cluster
	I0731 17:44:17.719239   49214 preload.go:131] Checking if preload exists for k8s version v1.24.4 and runtime crio
	I0731 17:44:18.195373   49214 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.24.4/preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4
	I0731 17:44:18.195418   49214 cache.go:56] Caching tarball of preloaded images
	I0731 17:44:18.195570   49214 preload.go:131] Checking if preload exists for k8s version v1.24.4 and runtime crio
	I0731 17:44:18.197484   49214 out.go:177] * Downloading Kubernetes v1.24.4 preload ...
	I0731 17:44:18.198776   49214 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4 ...
	I0731 17:44:18.297558   49214 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.24.4/preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4?checksum=md5:b2ee0ab83ed99f9e7ff71cb0cf27e8f9 -> /home/jenkins/minikube-integration/19349-8084/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4
	I0731 17:44:30.052003   49214 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4 ...
	I0731 17:44:30.052096   49214 preload.go:254] verifying checksum of /home/jenkins/minikube-integration/19349-8084/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4 ...
	I0731 17:44:30.890432   49214 cache.go:59] Finished verifying existence of preloaded tar for v1.24.4 on crio
	I0731 17:44:30.890548   49214 profile.go:143] Saving config to /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/test-preload-106909/config.json ...
	I0731 17:44:30.911742   49214 start.go:360] acquireMachinesLock for test-preload-106909: {Name:mkd78eacfab7d6c3058c6674434b3d889ec957e0 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0731 17:44:30.911814   49214 start.go:364] duration metric: took 45.42µs to acquireMachinesLock for "test-preload-106909"
	I0731 17:44:30.911826   49214 start.go:96] Skipping create...Using existing machine configuration
	I0731 17:44:30.911836   49214 fix.go:54] fixHost starting: 
	I0731 17:44:30.912152   49214 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 17:44:30.912183   49214 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 17:44:30.927302   49214 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34595
	I0731 17:44:30.927785   49214 main.go:141] libmachine: () Calling .GetVersion
	I0731 17:44:30.928270   49214 main.go:141] libmachine: Using API Version  1
	I0731 17:44:30.928292   49214 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 17:44:30.928610   49214 main.go:141] libmachine: () Calling .GetMachineName
	I0731 17:44:30.928789   49214 main.go:141] libmachine: (test-preload-106909) Calling .DriverName
	I0731 17:44:30.928932   49214 main.go:141] libmachine: (test-preload-106909) Calling .GetState
	I0731 17:44:30.930671   49214 fix.go:112] recreateIfNeeded on test-preload-106909: state=Stopped err=<nil>
	I0731 17:44:30.930701   49214 main.go:141] libmachine: (test-preload-106909) Calling .DriverName
	W0731 17:44:30.930862   49214 fix.go:138] unexpected machine state, will restart: <nil>
	I0731 17:44:30.996627   49214 out.go:177] * Restarting existing kvm2 VM for "test-preload-106909" ...
	I0731 17:44:31.059255   49214 main.go:141] libmachine: (test-preload-106909) Calling .Start
	I0731 17:44:31.059560   49214 main.go:141] libmachine: (test-preload-106909) Ensuring networks are active...
	I0731 17:44:31.060728   49214 main.go:141] libmachine: (test-preload-106909) Ensuring network default is active
	I0731 17:44:31.061088   49214 main.go:141] libmachine: (test-preload-106909) Ensuring network mk-test-preload-106909 is active
	I0731 17:44:31.061461   49214 main.go:141] libmachine: (test-preload-106909) Getting domain xml...
	I0731 17:44:31.062252   49214 main.go:141] libmachine: (test-preload-106909) Creating domain...
	I0731 17:44:32.351559   49214 main.go:141] libmachine: (test-preload-106909) Waiting to get IP...
	I0731 17:44:32.352402   49214 main.go:141] libmachine: (test-preload-106909) DBG | domain test-preload-106909 has defined MAC address 52:54:00:e8:5c:bc in network mk-test-preload-106909
	I0731 17:44:32.352873   49214 main.go:141] libmachine: (test-preload-106909) DBG | unable to find current IP address of domain test-preload-106909 in network mk-test-preload-106909
	I0731 17:44:32.352953   49214 main.go:141] libmachine: (test-preload-106909) DBG | I0731 17:44:32.352863   49297 retry.go:31] will retry after 256.536188ms: waiting for machine to come up
	I0731 17:44:32.611519   49214 main.go:141] libmachine: (test-preload-106909) DBG | domain test-preload-106909 has defined MAC address 52:54:00:e8:5c:bc in network mk-test-preload-106909
	I0731 17:44:32.611930   49214 main.go:141] libmachine: (test-preload-106909) DBG | unable to find current IP address of domain test-preload-106909 in network mk-test-preload-106909
	I0731 17:44:32.611959   49214 main.go:141] libmachine: (test-preload-106909) DBG | I0731 17:44:32.611883   49297 retry.go:31] will retry after 296.031185ms: waiting for machine to come up
	I0731 17:44:32.909547   49214 main.go:141] libmachine: (test-preload-106909) DBG | domain test-preload-106909 has defined MAC address 52:54:00:e8:5c:bc in network mk-test-preload-106909
	I0731 17:44:32.909969   49214 main.go:141] libmachine: (test-preload-106909) DBG | unable to find current IP address of domain test-preload-106909 in network mk-test-preload-106909
	I0731 17:44:32.909996   49214 main.go:141] libmachine: (test-preload-106909) DBG | I0731 17:44:32.909942   49297 retry.go:31] will retry after 318.042558ms: waiting for machine to come up
	I0731 17:44:33.229220   49214 main.go:141] libmachine: (test-preload-106909) DBG | domain test-preload-106909 has defined MAC address 52:54:00:e8:5c:bc in network mk-test-preload-106909
	I0731 17:44:33.229653   49214 main.go:141] libmachine: (test-preload-106909) DBG | unable to find current IP address of domain test-preload-106909 in network mk-test-preload-106909
	I0731 17:44:33.229680   49214 main.go:141] libmachine: (test-preload-106909) DBG | I0731 17:44:33.229608   49297 retry.go:31] will retry after 580.304391ms: waiting for machine to come up
	I0731 17:44:33.811401   49214 main.go:141] libmachine: (test-preload-106909) DBG | domain test-preload-106909 has defined MAC address 52:54:00:e8:5c:bc in network mk-test-preload-106909
	I0731 17:44:33.811905   49214 main.go:141] libmachine: (test-preload-106909) DBG | unable to find current IP address of domain test-preload-106909 in network mk-test-preload-106909
	I0731 17:44:33.811927   49214 main.go:141] libmachine: (test-preload-106909) DBG | I0731 17:44:33.811870   49297 retry.go:31] will retry after 515.55849ms: waiting for machine to come up
	I0731 17:44:34.328527   49214 main.go:141] libmachine: (test-preload-106909) DBG | domain test-preload-106909 has defined MAC address 52:54:00:e8:5c:bc in network mk-test-preload-106909
	I0731 17:44:34.328853   49214 main.go:141] libmachine: (test-preload-106909) DBG | unable to find current IP address of domain test-preload-106909 in network mk-test-preload-106909
	I0731 17:44:34.328881   49214 main.go:141] libmachine: (test-preload-106909) DBG | I0731 17:44:34.328802   49297 retry.go:31] will retry after 811.646913ms: waiting for machine to come up
	I0731 17:44:35.141924   49214 main.go:141] libmachine: (test-preload-106909) DBG | domain test-preload-106909 has defined MAC address 52:54:00:e8:5c:bc in network mk-test-preload-106909
	I0731 17:44:35.142319   49214 main.go:141] libmachine: (test-preload-106909) DBG | unable to find current IP address of domain test-preload-106909 in network mk-test-preload-106909
	I0731 17:44:35.142350   49214 main.go:141] libmachine: (test-preload-106909) DBG | I0731 17:44:35.142268   49297 retry.go:31] will retry after 1.09361101s: waiting for machine to come up
	I0731 17:44:36.237941   49214 main.go:141] libmachine: (test-preload-106909) DBG | domain test-preload-106909 has defined MAC address 52:54:00:e8:5c:bc in network mk-test-preload-106909
	I0731 17:44:36.238340   49214 main.go:141] libmachine: (test-preload-106909) DBG | unable to find current IP address of domain test-preload-106909 in network mk-test-preload-106909
	I0731 17:44:36.238371   49214 main.go:141] libmachine: (test-preload-106909) DBG | I0731 17:44:36.238281   49297 retry.go:31] will retry after 1.235246498s: waiting for machine to come up
	I0731 17:44:37.475692   49214 main.go:141] libmachine: (test-preload-106909) DBG | domain test-preload-106909 has defined MAC address 52:54:00:e8:5c:bc in network mk-test-preload-106909
	I0731 17:44:37.476077   49214 main.go:141] libmachine: (test-preload-106909) DBG | unable to find current IP address of domain test-preload-106909 in network mk-test-preload-106909
	I0731 17:44:37.476102   49214 main.go:141] libmachine: (test-preload-106909) DBG | I0731 17:44:37.476034   49297 retry.go:31] will retry after 1.412548266s: waiting for machine to come up
	I0731 17:44:38.889812   49214 main.go:141] libmachine: (test-preload-106909) DBG | domain test-preload-106909 has defined MAC address 52:54:00:e8:5c:bc in network mk-test-preload-106909
	I0731 17:44:38.890288   49214 main.go:141] libmachine: (test-preload-106909) DBG | unable to find current IP address of domain test-preload-106909 in network mk-test-preload-106909
	I0731 17:44:38.890317   49214 main.go:141] libmachine: (test-preload-106909) DBG | I0731 17:44:38.890234   49297 retry.go:31] will retry after 1.481243157s: waiting for machine to come up
	I0731 17:44:40.374013   49214 main.go:141] libmachine: (test-preload-106909) DBG | domain test-preload-106909 has defined MAC address 52:54:00:e8:5c:bc in network mk-test-preload-106909
	I0731 17:44:40.374498   49214 main.go:141] libmachine: (test-preload-106909) DBG | unable to find current IP address of domain test-preload-106909 in network mk-test-preload-106909
	I0731 17:44:40.374528   49214 main.go:141] libmachine: (test-preload-106909) DBG | I0731 17:44:40.374441   49297 retry.go:31] will retry after 2.105849918s: waiting for machine to come up
	I0731 17:44:42.482242   49214 main.go:141] libmachine: (test-preload-106909) DBG | domain test-preload-106909 has defined MAC address 52:54:00:e8:5c:bc in network mk-test-preload-106909
	I0731 17:44:42.482603   49214 main.go:141] libmachine: (test-preload-106909) DBG | unable to find current IP address of domain test-preload-106909 in network mk-test-preload-106909
	I0731 17:44:42.482625   49214 main.go:141] libmachine: (test-preload-106909) DBG | I0731 17:44:42.482579   49297 retry.go:31] will retry after 2.450866204s: waiting for machine to come up
	I0731 17:44:44.936144   49214 main.go:141] libmachine: (test-preload-106909) DBG | domain test-preload-106909 has defined MAC address 52:54:00:e8:5c:bc in network mk-test-preload-106909
	I0731 17:44:44.936482   49214 main.go:141] libmachine: (test-preload-106909) DBG | unable to find current IP address of domain test-preload-106909 in network mk-test-preload-106909
	I0731 17:44:44.936516   49214 main.go:141] libmachine: (test-preload-106909) DBG | I0731 17:44:44.936458   49297 retry.go:31] will retry after 3.380423533s: waiting for machine to come up
	I0731 17:44:48.319999   49214 main.go:141] libmachine: (test-preload-106909) DBG | domain test-preload-106909 has defined MAC address 52:54:00:e8:5c:bc in network mk-test-preload-106909
	I0731 17:44:48.320457   49214 main.go:141] libmachine: (test-preload-106909) Found IP for machine: 192.168.39.32
	I0731 17:44:48.320480   49214 main.go:141] libmachine: (test-preload-106909) Reserving static IP address...
	I0731 17:44:48.320497   49214 main.go:141] libmachine: (test-preload-106909) DBG | domain test-preload-106909 has current primary IP address 192.168.39.32 and MAC address 52:54:00:e8:5c:bc in network mk-test-preload-106909
	I0731 17:44:48.320867   49214 main.go:141] libmachine: (test-preload-106909) DBG | found host DHCP lease matching {name: "test-preload-106909", mac: "52:54:00:e8:5c:bc", ip: "192.168.39.32"} in network mk-test-preload-106909: {Iface:virbr1 ExpiryTime:2024-07-31 18:44:41 +0000 UTC Type:0 Mac:52:54:00:e8:5c:bc Iaid: IPaddr:192.168.39.32 Prefix:24 Hostname:test-preload-106909 Clientid:01:52:54:00:e8:5c:bc}
	I0731 17:44:48.320906   49214 main.go:141] libmachine: (test-preload-106909) DBG | skip adding static IP to network mk-test-preload-106909 - found existing host DHCP lease matching {name: "test-preload-106909", mac: "52:54:00:e8:5c:bc", ip: "192.168.39.32"}
	I0731 17:44:48.320924   49214 main.go:141] libmachine: (test-preload-106909) Reserved static IP address: 192.168.39.32
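The back-to-back "will retry after …" lines above come from a growing, jittered backoff while the test waits for libvirt/DHCP to hand the guest an address. A minimal, self-contained Go sketch of that shape of loop; the waitFor name and the dummy probe are illustrative only, not minikube's actual retry API:

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// waitFor keeps calling probe until it succeeds or the timeout elapses,
// sleeping a growing, slightly randomized interval between attempts, which is
// the shape of the 1.09s, 1.23s, 1.41s, ... cadence logged above.
func waitFor(probe func() error, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	wait := time.Second
	for attempt := 1; ; attempt++ {
		if err := probe(); err == nil {
			return nil
		}
		if time.Now().After(deadline) {
			return errors.New("timed out waiting for machine to come up")
		}
		// Grow the base interval by ~25% each round and add up to 25% jitter.
		sleep := wait + time.Duration(rand.Int63n(int64(wait)/4+1))
		fmt.Printf("attempt %d failed, will retry after %s\n", attempt, sleep)
		time.Sleep(sleep)
		wait = wait * 5 / 4
	}
}

func main() {
	start := time.Now()
	// Dummy probe: pretend the machine's IP shows up after three seconds.
	err := waitFor(func() error {
		if time.Since(start) < 3*time.Second {
			return errors.New("no IP yet")
		}
		return nil
	}, 30*time.Second)
	fmt.Println("done:", err)
}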
	I0731 17:44:48.320933   49214 main.go:141] libmachine: (test-preload-106909) Waiting for SSH to be available...
	I0731 17:44:48.320944   49214 main.go:141] libmachine: (test-preload-106909) DBG | Getting to WaitForSSH function...
	I0731 17:44:48.322939   49214 main.go:141] libmachine: (test-preload-106909) DBG | domain test-preload-106909 has defined MAC address 52:54:00:e8:5c:bc in network mk-test-preload-106909
	I0731 17:44:48.323238   49214 main.go:141] libmachine: (test-preload-106909) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e8:5c:bc", ip: ""} in network mk-test-preload-106909: {Iface:virbr1 ExpiryTime:2024-07-31 18:44:41 +0000 UTC Type:0 Mac:52:54:00:e8:5c:bc Iaid: IPaddr:192.168.39.32 Prefix:24 Hostname:test-preload-106909 Clientid:01:52:54:00:e8:5c:bc}
	I0731 17:44:48.323260   49214 main.go:141] libmachine: (test-preload-106909) DBG | domain test-preload-106909 has defined IP address 192.168.39.32 and MAC address 52:54:00:e8:5c:bc in network mk-test-preload-106909
	I0731 17:44:48.323390   49214 main.go:141] libmachine: (test-preload-106909) DBG | Using SSH client type: external
	I0731 17:44:48.323415   49214 main.go:141] libmachine: (test-preload-106909) DBG | Using SSH private key: /home/jenkins/minikube-integration/19349-8084/.minikube/machines/test-preload-106909/id_rsa (-rw-------)
	I0731 17:44:48.323450   49214 main.go:141] libmachine: (test-preload-106909) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.32 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19349-8084/.minikube/machines/test-preload-106909/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0731 17:44:48.323464   49214 main.go:141] libmachine: (test-preload-106909) DBG | About to run SSH command:
	I0731 17:44:48.323475   49214 main.go:141] libmachine: (test-preload-106909) DBG | exit 0
	I0731 17:44:48.442743   49214 main.go:141] libmachine: (test-preload-106909) DBG | SSH cmd err, output: <nil>: 
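The WaitForSSH step above shells out to an external ssh client and simply runs `exit 0` to confirm the guest is reachable. A rough Go equivalent using os/exec with the same option set as the logged command line; the sshExit0 helper and the hard-coded arguments in main are assumptions for illustration:

package main

import (
	"fmt"
	"os/exec"
)

// sshExit0 runs a reachability probe: an external ssh client executing
// `exit 0` on the guest, with host-key checking disabled as in the log.
func sshExit0(user, ip, keyPath string) error {
	args := []string{
		"-F", "/dev/null",
		"-o", "ConnectionAttempts=3",
		"-o", "ConnectTimeout=10",
		"-o", "PasswordAuthentication=no",
		"-o", "StrictHostKeyChecking=no",
		"-o", "UserKnownHostsFile=/dev/null",
		"-o", "IdentitiesOnly=yes",
		"-i", keyPath,
		"-p", "22",
		fmt.Sprintf("%s@%s", user, ip),
		"exit 0",
	}
	out, err := exec.Command("ssh", args...).CombinedOutput()
	if err != nil {
		return fmt.Errorf("ssh probe failed: %v: %s", err, out)
	}
	return nil
}

func main() {
	key := "/home/jenkins/minikube-integration/19349-8084/.minikube/machines/test-preload-106909/id_rsa"
	if err := sshExit0("docker", "192.168.39.32", key); err != nil {
		fmt.Println(err)
	}
}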
	I0731 17:44:48.443095   49214 main.go:141] libmachine: (test-preload-106909) Calling .GetConfigRaw
	I0731 17:44:48.443691   49214 main.go:141] libmachine: (test-preload-106909) Calling .GetIP
	I0731 17:44:48.446131   49214 main.go:141] libmachine: (test-preload-106909) DBG | domain test-preload-106909 has defined MAC address 52:54:00:e8:5c:bc in network mk-test-preload-106909
	I0731 17:44:48.446547   49214 main.go:141] libmachine: (test-preload-106909) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e8:5c:bc", ip: ""} in network mk-test-preload-106909: {Iface:virbr1 ExpiryTime:2024-07-31 18:44:41 +0000 UTC Type:0 Mac:52:54:00:e8:5c:bc Iaid: IPaddr:192.168.39.32 Prefix:24 Hostname:test-preload-106909 Clientid:01:52:54:00:e8:5c:bc}
	I0731 17:44:48.446579   49214 main.go:141] libmachine: (test-preload-106909) DBG | domain test-preload-106909 has defined IP address 192.168.39.32 and MAC address 52:54:00:e8:5c:bc in network mk-test-preload-106909
	I0731 17:44:48.446816   49214 profile.go:143] Saving config to /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/test-preload-106909/config.json ...
	I0731 17:44:48.447000   49214 machine.go:94] provisionDockerMachine start ...
	I0731 17:44:48.447016   49214 main.go:141] libmachine: (test-preload-106909) Calling .DriverName
	I0731 17:44:48.447265   49214 main.go:141] libmachine: (test-preload-106909) Calling .GetSSHHostname
	I0731 17:44:48.449161   49214 main.go:141] libmachine: (test-preload-106909) DBG | domain test-preload-106909 has defined MAC address 52:54:00:e8:5c:bc in network mk-test-preload-106909
	I0731 17:44:48.449417   49214 main.go:141] libmachine: (test-preload-106909) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e8:5c:bc", ip: ""} in network mk-test-preload-106909: {Iface:virbr1 ExpiryTime:2024-07-31 18:44:41 +0000 UTC Type:0 Mac:52:54:00:e8:5c:bc Iaid: IPaddr:192.168.39.32 Prefix:24 Hostname:test-preload-106909 Clientid:01:52:54:00:e8:5c:bc}
	I0731 17:44:48.449444   49214 main.go:141] libmachine: (test-preload-106909) DBG | domain test-preload-106909 has defined IP address 192.168.39.32 and MAC address 52:54:00:e8:5c:bc in network mk-test-preload-106909
	I0731 17:44:48.449512   49214 main.go:141] libmachine: (test-preload-106909) Calling .GetSSHPort
	I0731 17:44:48.449668   49214 main.go:141] libmachine: (test-preload-106909) Calling .GetSSHKeyPath
	I0731 17:44:48.449801   49214 main.go:141] libmachine: (test-preload-106909) Calling .GetSSHKeyPath
	I0731 17:44:48.449905   49214 main.go:141] libmachine: (test-preload-106909) Calling .GetSSHUsername
	I0731 17:44:48.450038   49214 main.go:141] libmachine: Using SSH client type: native
	I0731 17:44:48.450211   49214 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.32 22 <nil> <nil>}
	I0731 17:44:48.450221   49214 main.go:141] libmachine: About to run SSH command:
	hostname
	I0731 17:44:48.542947   49214 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0731 17:44:48.542979   49214 main.go:141] libmachine: (test-preload-106909) Calling .GetMachineName
	I0731 17:44:48.543233   49214 buildroot.go:166] provisioning hostname "test-preload-106909"
	I0731 17:44:48.543259   49214 main.go:141] libmachine: (test-preload-106909) Calling .GetMachineName
	I0731 17:44:48.543474   49214 main.go:141] libmachine: (test-preload-106909) Calling .GetSSHHostname
	I0731 17:44:48.546055   49214 main.go:141] libmachine: (test-preload-106909) DBG | domain test-preload-106909 has defined MAC address 52:54:00:e8:5c:bc in network mk-test-preload-106909
	I0731 17:44:48.546402   49214 main.go:141] libmachine: (test-preload-106909) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e8:5c:bc", ip: ""} in network mk-test-preload-106909: {Iface:virbr1 ExpiryTime:2024-07-31 18:44:41 +0000 UTC Type:0 Mac:52:54:00:e8:5c:bc Iaid: IPaddr:192.168.39.32 Prefix:24 Hostname:test-preload-106909 Clientid:01:52:54:00:e8:5c:bc}
	I0731 17:44:48.546430   49214 main.go:141] libmachine: (test-preload-106909) DBG | domain test-preload-106909 has defined IP address 192.168.39.32 and MAC address 52:54:00:e8:5c:bc in network mk-test-preload-106909
	I0731 17:44:48.546602   49214 main.go:141] libmachine: (test-preload-106909) Calling .GetSSHPort
	I0731 17:44:48.546773   49214 main.go:141] libmachine: (test-preload-106909) Calling .GetSSHKeyPath
	I0731 17:44:48.546926   49214 main.go:141] libmachine: (test-preload-106909) Calling .GetSSHKeyPath
	I0731 17:44:48.547054   49214 main.go:141] libmachine: (test-preload-106909) Calling .GetSSHUsername
	I0731 17:44:48.547214   49214 main.go:141] libmachine: Using SSH client type: native
	I0731 17:44:48.547369   49214 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.32 22 <nil> <nil>}
	I0731 17:44:48.547390   49214 main.go:141] libmachine: About to run SSH command:
	sudo hostname test-preload-106909 && echo "test-preload-106909" | sudo tee /etc/hostname
	I0731 17:44:48.656157   49214 main.go:141] libmachine: SSH cmd err, output: <nil>: test-preload-106909
	
	I0731 17:44:48.656178   49214 main.go:141] libmachine: (test-preload-106909) Calling .GetSSHHostname
	I0731 17:44:48.658647   49214 main.go:141] libmachine: (test-preload-106909) DBG | domain test-preload-106909 has defined MAC address 52:54:00:e8:5c:bc in network mk-test-preload-106909
	I0731 17:44:48.658977   49214 main.go:141] libmachine: (test-preload-106909) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e8:5c:bc", ip: ""} in network mk-test-preload-106909: {Iface:virbr1 ExpiryTime:2024-07-31 18:44:41 +0000 UTC Type:0 Mac:52:54:00:e8:5c:bc Iaid: IPaddr:192.168.39.32 Prefix:24 Hostname:test-preload-106909 Clientid:01:52:54:00:e8:5c:bc}
	I0731 17:44:48.659005   49214 main.go:141] libmachine: (test-preload-106909) DBG | domain test-preload-106909 has defined IP address 192.168.39.32 and MAC address 52:54:00:e8:5c:bc in network mk-test-preload-106909
	I0731 17:44:48.659154   49214 main.go:141] libmachine: (test-preload-106909) Calling .GetSSHPort
	I0731 17:44:48.659333   49214 main.go:141] libmachine: (test-preload-106909) Calling .GetSSHKeyPath
	I0731 17:44:48.659461   49214 main.go:141] libmachine: (test-preload-106909) Calling .GetSSHKeyPath
	I0731 17:44:48.659615   49214 main.go:141] libmachine: (test-preload-106909) Calling .GetSSHUsername
	I0731 17:44:48.659770   49214 main.go:141] libmachine: Using SSH client type: native
	I0731 17:44:48.659984   49214 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.32 22 <nil> <nil>}
	I0731 17:44:48.660009   49214 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\stest-preload-106909' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 test-preload-106909/g' /etc/hosts;
				else 
					echo '127.0.1.1 test-preload-106909' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0731 17:44:48.767244   49214 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0731 17:44:48.767269   49214 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19349-8084/.minikube CaCertPath:/home/jenkins/minikube-integration/19349-8084/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19349-8084/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19349-8084/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19349-8084/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19349-8084/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19349-8084/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19349-8084/.minikube}
	I0731 17:44:48.767302   49214 buildroot.go:174] setting up certificates
	I0731 17:44:48.767312   49214 provision.go:84] configureAuth start
	I0731 17:44:48.767334   49214 main.go:141] libmachine: (test-preload-106909) Calling .GetMachineName
	I0731 17:44:48.767647   49214 main.go:141] libmachine: (test-preload-106909) Calling .GetIP
	I0731 17:44:48.770174   49214 main.go:141] libmachine: (test-preload-106909) DBG | domain test-preload-106909 has defined MAC address 52:54:00:e8:5c:bc in network mk-test-preload-106909
	I0731 17:44:48.770478   49214 main.go:141] libmachine: (test-preload-106909) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e8:5c:bc", ip: ""} in network mk-test-preload-106909: {Iface:virbr1 ExpiryTime:2024-07-31 18:44:41 +0000 UTC Type:0 Mac:52:54:00:e8:5c:bc Iaid: IPaddr:192.168.39.32 Prefix:24 Hostname:test-preload-106909 Clientid:01:52:54:00:e8:5c:bc}
	I0731 17:44:48.770507   49214 main.go:141] libmachine: (test-preload-106909) DBG | domain test-preload-106909 has defined IP address 192.168.39.32 and MAC address 52:54:00:e8:5c:bc in network mk-test-preload-106909
	I0731 17:44:48.770635   49214 main.go:141] libmachine: (test-preload-106909) Calling .GetSSHHostname
	I0731 17:44:48.772646   49214 main.go:141] libmachine: (test-preload-106909) DBG | domain test-preload-106909 has defined MAC address 52:54:00:e8:5c:bc in network mk-test-preload-106909
	I0731 17:44:48.772921   49214 main.go:141] libmachine: (test-preload-106909) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e8:5c:bc", ip: ""} in network mk-test-preload-106909: {Iface:virbr1 ExpiryTime:2024-07-31 18:44:41 +0000 UTC Type:0 Mac:52:54:00:e8:5c:bc Iaid: IPaddr:192.168.39.32 Prefix:24 Hostname:test-preload-106909 Clientid:01:52:54:00:e8:5c:bc}
	I0731 17:44:48.772944   49214 main.go:141] libmachine: (test-preload-106909) DBG | domain test-preload-106909 has defined IP address 192.168.39.32 and MAC address 52:54:00:e8:5c:bc in network mk-test-preload-106909
	I0731 17:44:48.773086   49214 provision.go:143] copyHostCerts
	I0731 17:44:48.773146   49214 exec_runner.go:144] found /home/jenkins/minikube-integration/19349-8084/.minikube/ca.pem, removing ...
	I0731 17:44:48.773159   49214 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19349-8084/.minikube/ca.pem
	I0731 17:44:48.773231   49214 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19349-8084/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19349-8084/.minikube/ca.pem (1082 bytes)
	I0731 17:44:48.773366   49214 exec_runner.go:144] found /home/jenkins/minikube-integration/19349-8084/.minikube/cert.pem, removing ...
	I0731 17:44:48.773379   49214 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19349-8084/.minikube/cert.pem
	I0731 17:44:48.773420   49214 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19349-8084/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19349-8084/.minikube/cert.pem (1123 bytes)
	I0731 17:44:48.773548   49214 exec_runner.go:144] found /home/jenkins/minikube-integration/19349-8084/.minikube/key.pem, removing ...
	I0731 17:44:48.773559   49214 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19349-8084/.minikube/key.pem
	I0731 17:44:48.773602   49214 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19349-8084/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19349-8084/.minikube/key.pem (1675 bytes)
	I0731 17:44:48.773673   49214 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19349-8084/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19349-8084/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19349-8084/.minikube/certs/ca-key.pem org=jenkins.test-preload-106909 san=[127.0.0.1 192.168.39.32 localhost minikube test-preload-106909]
	I0731 17:44:48.960928   49214 provision.go:177] copyRemoteCerts
	I0731 17:44:48.960994   49214 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0731 17:44:48.961029   49214 main.go:141] libmachine: (test-preload-106909) Calling .GetSSHHostname
	I0731 17:44:48.963833   49214 main.go:141] libmachine: (test-preload-106909) DBG | domain test-preload-106909 has defined MAC address 52:54:00:e8:5c:bc in network mk-test-preload-106909
	I0731 17:44:48.964244   49214 main.go:141] libmachine: (test-preload-106909) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e8:5c:bc", ip: ""} in network mk-test-preload-106909: {Iface:virbr1 ExpiryTime:2024-07-31 18:44:41 +0000 UTC Type:0 Mac:52:54:00:e8:5c:bc Iaid: IPaddr:192.168.39.32 Prefix:24 Hostname:test-preload-106909 Clientid:01:52:54:00:e8:5c:bc}
	I0731 17:44:48.964262   49214 main.go:141] libmachine: (test-preload-106909) DBG | domain test-preload-106909 has defined IP address 192.168.39.32 and MAC address 52:54:00:e8:5c:bc in network mk-test-preload-106909
	I0731 17:44:48.964578   49214 main.go:141] libmachine: (test-preload-106909) Calling .GetSSHPort
	I0731 17:44:48.964788   49214 main.go:141] libmachine: (test-preload-106909) Calling .GetSSHKeyPath
	I0731 17:44:48.964929   49214 main.go:141] libmachine: (test-preload-106909) Calling .GetSSHUsername
	I0731 17:44:48.965051   49214 sshutil.go:53] new ssh client: &{IP:192.168.39.32 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19349-8084/.minikube/machines/test-preload-106909/id_rsa Username:docker}
	I0731 17:44:49.040660   49214 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19349-8084/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0731 17:44:49.062286   49214 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19349-8084/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0731 17:44:49.084080   49214 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19349-8084/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0731 17:44:49.105015   49214 provision.go:87] duration metric: took 337.69158ms to configureAuth
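configureAuth above regenerates a server certificate whose subject-alternative names cover the loopback address, the guest IP, and the machine's hostnames, signed by the profile's CA. A compact crypto/x509 sketch of that step; error handling is dropped and the CA pair is generated in memory rather than loaded from ca.pem/ca-key.pem, so treat it as an illustration only:

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// CA key pair; the real flow loads ca.pem/ca-key.pem from the .minikube
	// certs directory instead of generating one.
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().Add(24 * time.Hour),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	caCert, _ := x509.ParseCertificate(caDER)

	// Server certificate with the SANs listed in the provision log line above.
	srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	srvTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{Organization: []string{"jenkins.test-preload-106909"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.32")},
		DNSNames:     []string{"localhost", "minikube", "test-preload-106909"},
	}
	srvDER, _ := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: srvDER})
}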
	I0731 17:44:49.105043   49214 buildroot.go:189] setting minikube options for container-runtime
	I0731 17:44:49.105211   49214 config.go:182] Loaded profile config "test-preload-106909": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.24.4
	I0731 17:44:49.105276   49214 main.go:141] libmachine: (test-preload-106909) Calling .GetSSHHostname
	I0731 17:44:49.107522   49214 main.go:141] libmachine: (test-preload-106909) DBG | domain test-preload-106909 has defined MAC address 52:54:00:e8:5c:bc in network mk-test-preload-106909
	I0731 17:44:49.107851   49214 main.go:141] libmachine: (test-preload-106909) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e8:5c:bc", ip: ""} in network mk-test-preload-106909: {Iface:virbr1 ExpiryTime:2024-07-31 18:44:41 +0000 UTC Type:0 Mac:52:54:00:e8:5c:bc Iaid: IPaddr:192.168.39.32 Prefix:24 Hostname:test-preload-106909 Clientid:01:52:54:00:e8:5c:bc}
	I0731 17:44:49.107878   49214 main.go:141] libmachine: (test-preload-106909) DBG | domain test-preload-106909 has defined IP address 192.168.39.32 and MAC address 52:54:00:e8:5c:bc in network mk-test-preload-106909
	I0731 17:44:49.108029   49214 main.go:141] libmachine: (test-preload-106909) Calling .GetSSHPort
	I0731 17:44:49.108207   49214 main.go:141] libmachine: (test-preload-106909) Calling .GetSSHKeyPath
	I0731 17:44:49.108376   49214 main.go:141] libmachine: (test-preload-106909) Calling .GetSSHKeyPath
	I0731 17:44:49.108534   49214 main.go:141] libmachine: (test-preload-106909) Calling .GetSSHUsername
	I0731 17:44:49.108702   49214 main.go:141] libmachine: Using SSH client type: native
	I0731 17:44:49.108893   49214 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.32 22 <nil> <nil>}
	I0731 17:44:49.108917   49214 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0731 17:44:49.352444   49214 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0731 17:44:49.352466   49214 machine.go:97] duration metric: took 905.455095ms to provisionDockerMachine
	I0731 17:44:49.352478   49214 start.go:293] postStartSetup for "test-preload-106909" (driver="kvm2")
	I0731 17:44:49.352489   49214 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0731 17:44:49.352508   49214 main.go:141] libmachine: (test-preload-106909) Calling .DriverName
	I0731 17:44:49.352865   49214 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0731 17:44:49.352898   49214 main.go:141] libmachine: (test-preload-106909) Calling .GetSSHHostname
	I0731 17:44:49.355451   49214 main.go:141] libmachine: (test-preload-106909) DBG | domain test-preload-106909 has defined MAC address 52:54:00:e8:5c:bc in network mk-test-preload-106909
	I0731 17:44:49.355758   49214 main.go:141] libmachine: (test-preload-106909) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e8:5c:bc", ip: ""} in network mk-test-preload-106909: {Iface:virbr1 ExpiryTime:2024-07-31 18:44:41 +0000 UTC Type:0 Mac:52:54:00:e8:5c:bc Iaid: IPaddr:192.168.39.32 Prefix:24 Hostname:test-preload-106909 Clientid:01:52:54:00:e8:5c:bc}
	I0731 17:44:49.355786   49214 main.go:141] libmachine: (test-preload-106909) DBG | domain test-preload-106909 has defined IP address 192.168.39.32 and MAC address 52:54:00:e8:5c:bc in network mk-test-preload-106909
	I0731 17:44:49.355926   49214 main.go:141] libmachine: (test-preload-106909) Calling .GetSSHPort
	I0731 17:44:49.356123   49214 main.go:141] libmachine: (test-preload-106909) Calling .GetSSHKeyPath
	I0731 17:44:49.356274   49214 main.go:141] libmachine: (test-preload-106909) Calling .GetSSHUsername
	I0731 17:44:49.356451   49214 sshutil.go:53] new ssh client: &{IP:192.168.39.32 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19349-8084/.minikube/machines/test-preload-106909/id_rsa Username:docker}
	I0731 17:44:49.433819   49214 ssh_runner.go:195] Run: cat /etc/os-release
	I0731 17:44:49.437878   49214 info.go:137] Remote host: Buildroot 2023.02.9
	I0731 17:44:49.437907   49214 filesync.go:126] Scanning /home/jenkins/minikube-integration/19349-8084/.minikube/addons for local assets ...
	I0731 17:44:49.437984   49214 filesync.go:126] Scanning /home/jenkins/minikube-integration/19349-8084/.minikube/files for local assets ...
	I0731 17:44:49.438092   49214 filesync.go:149] local asset: /home/jenkins/minikube-integration/19349-8084/.minikube/files/etc/ssl/certs/152592.pem -> 152592.pem in /etc/ssl/certs
	I0731 17:44:49.438183   49214 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0731 17:44:49.446847   49214 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19349-8084/.minikube/files/etc/ssl/certs/152592.pem --> /etc/ssl/certs/152592.pem (1708 bytes)
	I0731 17:44:49.468559   49214 start.go:296] duration metric: took 116.070304ms for postStartSetup
	I0731 17:44:49.468597   49214 fix.go:56] duration metric: took 18.556765149s for fixHost
	I0731 17:44:49.468614   49214 main.go:141] libmachine: (test-preload-106909) Calling .GetSSHHostname
	I0731 17:44:49.471127   49214 main.go:141] libmachine: (test-preload-106909) DBG | domain test-preload-106909 has defined MAC address 52:54:00:e8:5c:bc in network mk-test-preload-106909
	I0731 17:44:49.471577   49214 main.go:141] libmachine: (test-preload-106909) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e8:5c:bc", ip: ""} in network mk-test-preload-106909: {Iface:virbr1 ExpiryTime:2024-07-31 18:44:41 +0000 UTC Type:0 Mac:52:54:00:e8:5c:bc Iaid: IPaddr:192.168.39.32 Prefix:24 Hostname:test-preload-106909 Clientid:01:52:54:00:e8:5c:bc}
	I0731 17:44:49.471608   49214 main.go:141] libmachine: (test-preload-106909) DBG | domain test-preload-106909 has defined IP address 192.168.39.32 and MAC address 52:54:00:e8:5c:bc in network mk-test-preload-106909
	I0731 17:44:49.471747   49214 main.go:141] libmachine: (test-preload-106909) Calling .GetSSHPort
	I0731 17:44:49.471912   49214 main.go:141] libmachine: (test-preload-106909) Calling .GetSSHKeyPath
	I0731 17:44:49.472088   49214 main.go:141] libmachine: (test-preload-106909) Calling .GetSSHKeyPath
	I0731 17:44:49.472206   49214 main.go:141] libmachine: (test-preload-106909) Calling .GetSSHUsername
	I0731 17:44:49.472365   49214 main.go:141] libmachine: Using SSH client type: native
	I0731 17:44:49.472558   49214 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.32 22 <nil> <nil>}
	I0731 17:44:49.472573   49214 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0731 17:44:49.567564   49214 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722447889.541507030
	
	I0731 17:44:49.567597   49214 fix.go:216] guest clock: 1722447889.541507030
	I0731 17:44:49.567604   49214 fix.go:229] Guest: 2024-07-31 17:44:49.54150703 +0000 UTC Remote: 2024-07-31 17:44:49.468600107 +0000 UTC m=+31.885344808 (delta=72.906923ms)
	I0731 17:44:49.567638   49214 fix.go:200] guest clock delta is within tolerance: 72.906923ms
	I0731 17:44:49.567643   49214 start.go:83] releasing machines lock for "test-preload-106909", held for 18.655822518s
	I0731 17:44:49.567661   49214 main.go:141] libmachine: (test-preload-106909) Calling .DriverName
	I0731 17:44:49.567872   49214 main.go:141] libmachine: (test-preload-106909) Calling .GetIP
	I0731 17:44:49.570641   49214 main.go:141] libmachine: (test-preload-106909) DBG | domain test-preload-106909 has defined MAC address 52:54:00:e8:5c:bc in network mk-test-preload-106909
	I0731 17:44:49.570967   49214 main.go:141] libmachine: (test-preload-106909) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e8:5c:bc", ip: ""} in network mk-test-preload-106909: {Iface:virbr1 ExpiryTime:2024-07-31 18:44:41 +0000 UTC Type:0 Mac:52:54:00:e8:5c:bc Iaid: IPaddr:192.168.39.32 Prefix:24 Hostname:test-preload-106909 Clientid:01:52:54:00:e8:5c:bc}
	I0731 17:44:49.570990   49214 main.go:141] libmachine: (test-preload-106909) DBG | domain test-preload-106909 has defined IP address 192.168.39.32 and MAC address 52:54:00:e8:5c:bc in network mk-test-preload-106909
	I0731 17:44:49.571150   49214 main.go:141] libmachine: (test-preload-106909) Calling .DriverName
	I0731 17:44:49.571648   49214 main.go:141] libmachine: (test-preload-106909) Calling .DriverName
	I0731 17:44:49.571814   49214 main.go:141] libmachine: (test-preload-106909) Calling .DriverName
	I0731 17:44:49.571945   49214 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0731 17:44:49.571980   49214 ssh_runner.go:195] Run: cat /version.json
	I0731 17:44:49.571986   49214 main.go:141] libmachine: (test-preload-106909) Calling .GetSSHHostname
	I0731 17:44:49.571997   49214 main.go:141] libmachine: (test-preload-106909) Calling .GetSSHHostname
	I0731 17:44:49.574478   49214 main.go:141] libmachine: (test-preload-106909) DBG | domain test-preload-106909 has defined MAC address 52:54:00:e8:5c:bc in network mk-test-preload-106909
	I0731 17:44:49.574627   49214 main.go:141] libmachine: (test-preload-106909) DBG | domain test-preload-106909 has defined MAC address 52:54:00:e8:5c:bc in network mk-test-preload-106909
	I0731 17:44:49.574818   49214 main.go:141] libmachine: (test-preload-106909) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e8:5c:bc", ip: ""} in network mk-test-preload-106909: {Iface:virbr1 ExpiryTime:2024-07-31 18:44:41 +0000 UTC Type:0 Mac:52:54:00:e8:5c:bc Iaid: IPaddr:192.168.39.32 Prefix:24 Hostname:test-preload-106909 Clientid:01:52:54:00:e8:5c:bc}
	I0731 17:44:49.574844   49214 main.go:141] libmachine: (test-preload-106909) DBG | domain test-preload-106909 has defined IP address 192.168.39.32 and MAC address 52:54:00:e8:5c:bc in network mk-test-preload-106909
	I0731 17:44:49.574968   49214 main.go:141] libmachine: (test-preload-106909) Calling .GetSSHPort
	I0731 17:44:49.575067   49214 main.go:141] libmachine: (test-preload-106909) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e8:5c:bc", ip: ""} in network mk-test-preload-106909: {Iface:virbr1 ExpiryTime:2024-07-31 18:44:41 +0000 UTC Type:0 Mac:52:54:00:e8:5c:bc Iaid: IPaddr:192.168.39.32 Prefix:24 Hostname:test-preload-106909 Clientid:01:52:54:00:e8:5c:bc}
	I0731 17:44:49.575086   49214 main.go:141] libmachine: (test-preload-106909) DBG | domain test-preload-106909 has defined IP address 192.168.39.32 and MAC address 52:54:00:e8:5c:bc in network mk-test-preload-106909
	I0731 17:44:49.575145   49214 main.go:141] libmachine: (test-preload-106909) Calling .GetSSHKeyPath
	I0731 17:44:49.575266   49214 main.go:141] libmachine: (test-preload-106909) Calling .GetSSHPort
	I0731 17:44:49.575350   49214 main.go:141] libmachine: (test-preload-106909) Calling .GetSSHUsername
	I0731 17:44:49.575420   49214 main.go:141] libmachine: (test-preload-106909) Calling .GetSSHKeyPath
	I0731 17:44:49.575490   49214 sshutil.go:53] new ssh client: &{IP:192.168.39.32 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19349-8084/.minikube/machines/test-preload-106909/id_rsa Username:docker}
	I0731 17:44:49.575528   49214 main.go:141] libmachine: (test-preload-106909) Calling .GetSSHUsername
	I0731 17:44:49.575624   49214 sshutil.go:53] new ssh client: &{IP:192.168.39.32 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19349-8084/.minikube/machines/test-preload-106909/id_rsa Username:docker}
	I0731 17:44:49.685472   49214 ssh_runner.go:195] Run: systemctl --version
	I0731 17:44:49.691078   49214 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0731 17:44:49.830590   49214 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0731 17:44:49.836519   49214 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0731 17:44:49.836587   49214 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0731 17:44:49.851418   49214 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0731 17:44:49.851438   49214 start.go:495] detecting cgroup driver to use...
	I0731 17:44:49.851491   49214 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0731 17:44:49.865602   49214 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0731 17:44:49.878351   49214 docker.go:217] disabling cri-docker service (if available) ...
	I0731 17:44:49.878404   49214 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0731 17:44:49.890568   49214 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0731 17:44:49.904246   49214 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0731 17:44:50.014626   49214 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0731 17:44:50.177381   49214 docker.go:233] disabling docker service ...
	I0731 17:44:50.177462   49214 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0731 17:44:50.190209   49214 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0731 17:44:50.201833   49214 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0731 17:44:50.312143   49214 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0731 17:44:50.420944   49214 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0731 17:44:50.433601   49214 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0731 17:44:50.450093   49214 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.7" pause image...
	I0731 17:44:50.450156   49214 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.7"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 17:44:50.459468   49214 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0731 17:44:50.459534   49214 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 17:44:50.468801   49214 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 17:44:50.477834   49214 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 17:44:50.486767   49214 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0731 17:44:50.496011   49214 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 17:44:50.504814   49214 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 17:44:50.520022   49214 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 17:44:50.529276   49214 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0731 17:44:50.537783   49214 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0731 17:44:50.537838   49214 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0731 17:44:50.549112   49214 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0731 17:44:50.557785   49214 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0731 17:44:50.662066   49214 ssh_runner.go:195] Run: sudo systemctl restart crio
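The sequence of sed invocations above rewrites keys such as pause_image and cgroup_manager in /etc/crio/crio.conf.d/02-crio.conf before crio is restarted. A small Go sketch of that idempotent replace-or-append edit; setConfKey is a hypothetical helper, not minikube's API:

package main

import (
	"fmt"
	"os"
	"regexp"
)

// setConfKey rewrites (or appends) a `key = "value"` line in a conf file, the
// same effect as the `sudo sed -i 's|^.*key = .*$|key = "value"|'` calls above.
func setConfKey(path, key, value string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	line := fmt.Sprintf("%s = %q", key, value)
	re := regexp.MustCompile(`(?m)^.*` + regexp.QuoteMeta(key) + ` = .*$`)
	if re.Match(data) {
		data = re.ReplaceAll(data, []byte(line))
	} else {
		data = append(data, []byte("\n"+line+"\n")...)
	}
	return os.WriteFile(path, data, 0o644)
}

func main() {
	// Mirrors the pause image and cgroup driver edits from the log.
	conf := "/etc/crio/crio.conf.d/02-crio.conf"
	_ = setConfKey(conf, "pause_image", "registry.k8s.io/pause:3.7")
	_ = setConfKey(conf, "cgroup_manager", "cgroupfs")
}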
	I0731 17:44:50.790259   49214 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0731 17:44:50.790339   49214 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0731 17:44:50.794954   49214 start.go:563] Will wait 60s for crictl version
	I0731 17:44:50.795004   49214 ssh_runner.go:195] Run: which crictl
	I0731 17:44:50.798286   49214 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0731 17:44:50.833349   49214 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0731 17:44:50.833431   49214 ssh_runner.go:195] Run: crio --version
	I0731 17:44:50.859161   49214 ssh_runner.go:195] Run: crio --version
	I0731 17:44:50.887059   49214 out.go:177] * Preparing Kubernetes v1.24.4 on CRI-O 1.29.1 ...
	I0731 17:44:50.888402   49214 main.go:141] libmachine: (test-preload-106909) Calling .GetIP
	I0731 17:44:50.890780   49214 main.go:141] libmachine: (test-preload-106909) DBG | domain test-preload-106909 has defined MAC address 52:54:00:e8:5c:bc in network mk-test-preload-106909
	I0731 17:44:50.891125   49214 main.go:141] libmachine: (test-preload-106909) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e8:5c:bc", ip: ""} in network mk-test-preload-106909: {Iface:virbr1 ExpiryTime:2024-07-31 18:44:41 +0000 UTC Type:0 Mac:52:54:00:e8:5c:bc Iaid: IPaddr:192.168.39.32 Prefix:24 Hostname:test-preload-106909 Clientid:01:52:54:00:e8:5c:bc}
	I0731 17:44:50.891154   49214 main.go:141] libmachine: (test-preload-106909) DBG | domain test-preload-106909 has defined IP address 192.168.39.32 and MAC address 52:54:00:e8:5c:bc in network mk-test-preload-106909
	I0731 17:44:50.891412   49214 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0731 17:44:50.894970   49214 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0731 17:44:50.906078   49214 kubeadm.go:883] updating cluster {Name:test-preload-106909 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:
v1.24.4 ClusterName:test-preload-106909 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.32 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker
MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0731 17:44:50.906168   49214 preload.go:131] Checking if preload exists for k8s version v1.24.4 and runtime crio
	I0731 17:44:50.906212   49214 ssh_runner.go:195] Run: sudo crictl images --output json
	I0731 17:44:50.938073   49214 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.24.4". assuming images are not preloaded.
	I0731 17:44:50.938147   49214 ssh_runner.go:195] Run: which lz4
	I0731 17:44:50.941738   49214 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0731 17:44:50.945562   49214 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0731 17:44:50.945584   49214 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19349-8084/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (459355427 bytes)
	I0731 17:44:52.273270   49214 crio.go:462] duration metric: took 1.331561688s to copy over tarball
	I0731 17:44:52.273344   49214 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0731 17:44:54.507387   49214 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.234020575s)
	I0731 17:44:54.507414   49214 crio.go:469] duration metric: took 2.234121546s to extract the tarball
	I0731 17:44:54.507421   49214 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0731 17:44:54.549014   49214 ssh_runner.go:195] Run: sudo crictl images --output json
	I0731 17:44:54.588980   49214 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.24.4". assuming images are not preloaded.
	I0731 17:44:54.589004   49214 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.24.4 registry.k8s.io/kube-controller-manager:v1.24.4 registry.k8s.io/kube-scheduler:v1.24.4 registry.k8s.io/kube-proxy:v1.24.4 registry.k8s.io/pause:3.7 registry.k8s.io/etcd:3.5.3-0 registry.k8s.io/coredns/coredns:v1.8.6 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0731 17:44:54.589073   49214 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0731 17:44:54.589103   49214 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.24.4
	I0731 17:44:54.589083   49214 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.24.4
	I0731 17:44:54.589134   49214 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.24.4
	I0731 17:44:54.589143   49214 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.3-0
	I0731 17:44:54.589126   49214 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.24.4
	I0731 17:44:54.589202   49214 image.go:134] retrieving image: registry.k8s.io/pause:3.7
	I0731 17:44:54.589209   49214 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.8.6
	I0731 17:44:54.590702   49214 image.go:177] daemon lookup for registry.k8s.io/pause:3.7: Error response from daemon: No such image: registry.k8s.io/pause:3.7
	I0731 17:44:54.590718   49214 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.24.4
	I0731 17:44:54.590700   49214 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.24.4
	I0731 17:44:54.590704   49214 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.3-0
	I0731 17:44:54.590746   49214 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0731 17:44:54.590768   49214 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.8.6: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.8.6
	I0731 17:44:54.590774   49214 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.24.4
	I0731 17:44:54.590770   49214 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.24.4
	I0731 17:44:54.819695   49214 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.8.6
	I0731 17:44:54.819814   49214 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.24.4
	I0731 17:44:54.822141   49214 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.24.4
	I0731 17:44:54.822981   49214 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.24.4
	I0731 17:44:54.832078   49214 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.24.4
	I0731 17:44:54.834302   49214 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.7
	I0731 17:44:54.863579   49214 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.3-0
	I0731 17:44:54.942524   49214 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.24.4" needs transfer: "registry.k8s.io/kube-proxy:v1.24.4" does not exist at hash "7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7" in container runtime
	I0731 17:44:54.942566   49214 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.24.4
	I0731 17:44:54.942600   49214 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.8.6" needs transfer: "registry.k8s.io/coredns/coredns:v1.8.6" does not exist at hash "a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03" in container runtime
	I0731 17:44:54.942613   49214 ssh_runner.go:195] Run: which crictl
	I0731 17:44:54.942635   49214 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.8.6
	I0731 17:44:54.942679   49214 ssh_runner.go:195] Run: which crictl
	I0731 17:44:54.967051   49214 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.24.4" needs transfer: "registry.k8s.io/kube-apiserver:v1.24.4" does not exist at hash "6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d" in container runtime
	I0731 17:44:54.967079   49214 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.24.4" needs transfer: "registry.k8s.io/kube-controller-manager:v1.24.4" does not exist at hash "1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48" in container runtime
	I0731 17:44:54.967090   49214 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.24.4
	I0731 17:44:54.967105   49214 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.24.4
	I0731 17:44:54.967152   49214 ssh_runner.go:195] Run: which crictl
	I0731 17:44:54.967156   49214 ssh_runner.go:195] Run: which crictl
	I0731 17:44:54.980243   49214 cache_images.go:116] "registry.k8s.io/pause:3.7" needs transfer: "registry.k8s.io/pause:3.7" does not exist at hash "221177c6082a88ea4f6240ab2450d540955ac6f4d5454f0e15751b653ebda165" in container runtime
	I0731 17:44:54.980295   49214 cache_images.go:116] "registry.k8s.io/etcd:3.5.3-0" needs transfer: "registry.k8s.io/etcd:3.5.3-0" does not exist at hash "aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b" in container runtime
	I0731 17:44:54.980316   49214 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.24.4" needs transfer: "registry.k8s.io/kube-scheduler:v1.24.4" does not exist at hash "03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9" in container runtime
	I0731 17:44:54.980335   49214 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.3-0
	I0731 17:44:54.980343   49214 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.24.4
	I0731 17:44:54.980370   49214 ssh_runner.go:195] Run: which crictl
	I0731 17:44:54.980379   49214 ssh_runner.go:195] Run: which crictl
	I0731 17:44:54.980301   49214 cri.go:218] Removing image: registry.k8s.io/pause:3.7
	I0731 17:44:54.980371   49214 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.24.4
	I0731 17:44:54.980474   49214 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.24.4
	I0731 17:44:54.980415   49214 ssh_runner.go:195] Run: which crictl
	I0731 17:44:54.980417   49214 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.8.6
	I0731 17:44:54.980419   49214 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.24.4
	I0731 17:44:55.051500   49214 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19349-8084/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.24.4
	I0731 17:44:55.051618   49214 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-apiserver_v1.24.4
	I0731 17:44:55.059282   49214 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19349-8084/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.8.6
	I0731 17:44:55.059392   49214 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/coredns_v1.8.6
	I0731 17:44:55.067187   49214 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19349-8084/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.24.4
	I0731 17:44:55.067236   49214 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.24.4
	I0731 17:44:55.067264   49214 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.3-0
	I0731 17:44:55.067268   49214 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-controller-manager_v1.24.4
	I0731 17:44:55.067341   49214 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.7
	I0731 17:44:55.067389   49214 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19349-8084/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.24.4
	I0731 17:44:55.067415   49214 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.24.4 (exists)
	I0731 17:44:55.067426   49214 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.24.4
	I0731 17:44:55.067455   49214 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.24.4
	I0731 17:44:55.067463   49214 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-proxy_v1.24.4
	I0731 17:44:55.067499   49214 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.8.6 (exists)
	I0731 17:44:55.148067   49214 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.24.4 (exists)
	I0731 17:44:55.148196   49214 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19349-8084/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.24.4
	I0731 17:44:55.148300   49214 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-scheduler_v1.24.4
	I0731 17:44:55.154878   49214 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19349-8084/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.3-0
	I0731 17:44:55.154951   49214 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19349-8084/.minikube/cache/images/amd64/registry.k8s.io/pause_3.7
	I0731 17:44:55.154969   49214 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/etcd_3.5.3-0
	I0731 17:44:55.154994   49214 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.24.4 (exists)
	I0731 17:44:55.155027   49214 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/pause_3.7
	I0731 17:44:55.521126   49214 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0731 17:44:58.384518   49214 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.24.4: (3.317041645s)
	I0731 17:44:58.384548   49214 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19349-8084/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.24.4 from cache
	I0731 17:44:58.384590   49214 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.8.6
	I0731 17:44:58.384589   49214 ssh_runner.go:235] Completed: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-scheduler_v1.24.4: (3.236270806s)
	I0731 17:44:58.384648   49214 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.8.6
	I0731 17:44:58.384663   49214 ssh_runner.go:235] Completed: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/pause_3.7: (3.229622821s)
	I0731 17:44:58.384687   49214 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/pause_3.7 (exists)
	I0731 17:44:58.384667   49214 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.24.4 (exists)
	I0731 17:44:58.384635   49214 ssh_runner.go:235] Completed: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/etcd_3.5.3-0: (3.229653381s)
	I0731 17:44:58.384700   49214 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.3-0 (exists)
	I0731 17:44:58.384716   49214 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5: (2.863561496s)
	I0731 17:44:58.724266   49214 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19349-8084/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.8.6 from cache
	I0731 17:44:58.724331   49214 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.24.4
	I0731 17:44:58.724394   49214 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.24.4
	I0731 17:44:59.367371   49214 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19349-8084/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.24.4 from cache
	I0731 17:44:59.367417   49214 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.24.4
	I0731 17:44:59.367469   49214 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.24.4
	I0731 17:45:00.211480   49214 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19349-8084/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.24.4 from cache
	I0731 17:45:00.211521   49214 crio.go:275] Loading image: /var/lib/minikube/images/pause_3.7
	I0731 17:45:00.211587   49214 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/pause_3.7
	I0731 17:45:00.360380   49214 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19349-8084/.minikube/cache/images/amd64/registry.k8s.io/pause_3.7 from cache
	I0731 17:45:00.360437   49214 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.24.4
	I0731 17:45:00.360485   49214 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.24.4
	I0731 17:45:00.808799   49214 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19349-8084/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.24.4 from cache
	I0731 17:45:00.808838   49214 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.5.3-0
	I0731 17:45:00.808884   49214 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.3-0
	I0731 17:45:02.755120   49214 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.3-0: (1.946200853s)
	I0731 17:45:02.755168   49214 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19349-8084/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.3-0 from cache
	I0731 17:45:02.755201   49214 cache_images.go:123] Successfully loaded all cached images
	I0731 17:45:02.755208   49214 cache_images.go:92] duration metric: took 8.166191514s to LoadCachedImages
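Each cached image above follows the same pattern: a stat existence check on the guest, a copy only when the tarball is missing, then `sudo podman load -i`. A loose Go sketch of that loop; run and scpToGuest are stand-ins for minikube's ssh_runner, and the copy is done with plain cp so the sketch stays self-contained:

package main

import (
	"fmt"
	"os/exec"
)

// run is a stand-in for minikube's ssh_runner: here it just executes locally.
func run(name string, args ...string) error {
	out, err := exec.Command(name, args...).CombinedOutput()
	if err != nil {
		return fmt.Errorf("%s %v: %v: %s", name, args, err, out)
	}
	return nil
}

// loadCachedImage skips the transfer when the tarball already exists on the
// guest, otherwise copies it over, then loads it into the image store.
func loadCachedImage(localTar, remoteTar string) error {
	if err := run("stat", "-c", "%s %y", remoteTar); err != nil {
		if err := scpToGuest(localTar, remoteTar); err != nil {
			return err
		}
	}
	return run("sudo", "podman", "load", "-i", remoteTar)
}

func scpToGuest(src, dst string) error {
	// In the real flow this is an scp over the machine's SSH session.
	return run("cp", src, dst)
}

func main() {
	err := loadCachedImage(
		"/home/jenkins/.minikube/cache/images/amd64/registry.k8s.io/pause_3.7",
		"/var/lib/minikube/images/pause_3.7",
	)
	fmt.Println("load result:", err)
}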
	I0731 17:45:02.755219   49214 kubeadm.go:934] updating node { 192.168.39.32 8443 v1.24.4 crio true true} ...
	I0731 17:45:02.755335   49214 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.24.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=test-preload-106909 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.32
	
	[Install]
	 config:
	{KubernetesVersion:v1.24.4 ClusterName:test-preload-106909 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0731 17:45:02.755407   49214 ssh_runner.go:195] Run: crio config
	I0731 17:45:02.799272   49214 cni.go:84] Creating CNI manager for ""
	I0731 17:45:02.799301   49214 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0731 17:45:02.799316   49214 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0731 17:45:02.799337   49214 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.32 APIServerPort:8443 KubernetesVersion:v1.24.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:test-preload-106909 NodeName:test-preload-106909 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.32"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.32 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPa
th:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0731 17:45:02.799496   49214 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.32
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "test-preload-106909"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.32
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.32"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.24.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0731 17:45:02.799565   49214 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.24.4
	I0731 17:45:02.808816   49214 binaries.go:44] Found k8s binaries, skipping transfer
	I0731 17:45:02.808871   49214 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0731 17:45:02.817365   49214 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (378 bytes)
	I0731 17:45:02.831899   49214 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0731 17:45:02.846114   49214 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2103 bytes)
	I0731 17:45:02.861353   49214 ssh_runner.go:195] Run: grep 192.168.39.32	control-plane.minikube.internal$ /etc/hosts
	I0731 17:45:02.864705   49214 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.32	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0731 17:45:02.875662   49214 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0731 17:45:02.984582   49214 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0731 17:45:03.000910   49214 certs.go:68] Setting up /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/test-preload-106909 for IP: 192.168.39.32
	I0731 17:45:03.000937   49214 certs.go:194] generating shared ca certs ...
	I0731 17:45:03.000959   49214 certs.go:226] acquiring lock for ca certs: {Name:mk49eed439085f9c30746706460ce213570d1997 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 17:45:03.001148   49214 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19349-8084/.minikube/ca.key
	I0731 17:45:03.001203   49214 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19349-8084/.minikube/proxy-client-ca.key
	I0731 17:45:03.001215   49214 certs.go:256] generating profile certs ...
	I0731 17:45:03.001324   49214 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/test-preload-106909/client.key
	I0731 17:45:03.001413   49214 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/test-preload-106909/apiserver.key.0085313b
	I0731 17:45:03.001461   49214 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/test-preload-106909/proxy-client.key
	I0731 17:45:03.001634   49214 certs.go:484] found cert: /home/jenkins/minikube-integration/19349-8084/.minikube/certs/15259.pem (1338 bytes)
	W0731 17:45:03.001677   49214 certs.go:480] ignoring /home/jenkins/minikube-integration/19349-8084/.minikube/certs/15259_empty.pem, impossibly tiny 0 bytes
	I0731 17:45:03.001690   49214 certs.go:484] found cert: /home/jenkins/minikube-integration/19349-8084/.minikube/certs/ca-key.pem (1675 bytes)
	I0731 17:45:03.001725   49214 certs.go:484] found cert: /home/jenkins/minikube-integration/19349-8084/.minikube/certs/ca.pem (1082 bytes)
	I0731 17:45:03.001754   49214 certs.go:484] found cert: /home/jenkins/minikube-integration/19349-8084/.minikube/certs/cert.pem (1123 bytes)
	I0731 17:45:03.001784   49214 certs.go:484] found cert: /home/jenkins/minikube-integration/19349-8084/.minikube/certs/key.pem (1675 bytes)
	I0731 17:45:03.001847   49214 certs.go:484] found cert: /home/jenkins/minikube-integration/19349-8084/.minikube/files/etc/ssl/certs/152592.pem (1708 bytes)
	I0731 17:45:03.002711   49214 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19349-8084/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0731 17:45:03.035569   49214 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19349-8084/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0731 17:45:03.071397   49214 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19349-8084/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0731 17:45:03.114684   49214 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19349-8084/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0731 17:45:03.148460   49214 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/test-preload-106909/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I0731 17:45:03.190637   49214 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/test-preload-106909/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0731 17:45:03.213756   49214 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/test-preload-106909/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0731 17:45:03.234930   49214 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/test-preload-106909/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0731 17:45:03.256394   49214 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19349-8084/.minikube/certs/15259.pem --> /usr/share/ca-certificates/15259.pem (1338 bytes)
	I0731 17:45:03.277068   49214 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19349-8084/.minikube/files/etc/ssl/certs/152592.pem --> /usr/share/ca-certificates/152592.pem (1708 bytes)
	I0731 17:45:03.297491   49214 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19349-8084/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0731 17:45:03.318186   49214 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0731 17:45:03.332935   49214 ssh_runner.go:195] Run: openssl version
	I0731 17:45:03.337997   49214 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/15259.pem && ln -fs /usr/share/ca-certificates/15259.pem /etc/ssl/certs/15259.pem"
	I0731 17:45:03.347699   49214 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/15259.pem
	I0731 17:45:03.351600   49214 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 31 16:54 /usr/share/ca-certificates/15259.pem
	I0731 17:45:03.351654   49214 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/15259.pem
	I0731 17:45:03.356779   49214 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/15259.pem /etc/ssl/certs/51391683.0"
	I0731 17:45:03.366301   49214 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/152592.pem && ln -fs /usr/share/ca-certificates/152592.pem /etc/ssl/certs/152592.pem"
	I0731 17:45:03.376014   49214 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/152592.pem
	I0731 17:45:03.379893   49214 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 31 16:54 /usr/share/ca-certificates/152592.pem
	I0731 17:45:03.379933   49214 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/152592.pem
	I0731 17:45:03.384848   49214 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/152592.pem /etc/ssl/certs/3ec20f2e.0"
	I0731 17:45:03.394782   49214 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0731 17:45:03.404657   49214 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0731 17:45:03.408598   49214 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 31 16:42 /usr/share/ca-certificates/minikubeCA.pem
	I0731 17:45:03.408646   49214 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0731 17:45:03.413687   49214 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0731 17:45:03.423542   49214 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0731 17:45:03.427464   49214 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0731 17:45:03.432612   49214 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0731 17:45:03.437714   49214 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0731 17:45:03.443157   49214 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0731 17:45:03.448280   49214 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0731 17:45:03.453469   49214 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0731 17:45:03.458513   49214 kubeadm.go:392] StartCluster: {Name:test-preload-106909 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.
24.4 ClusterName:test-preload-106909 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.32 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker Mou
ntIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0731 17:45:03.458596   49214 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0731 17:45:03.458646   49214 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0731 17:45:03.493029   49214 cri.go:89] found id: ""
	I0731 17:45:03.493123   49214 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0731 17:45:03.502678   49214 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0731 17:45:03.502697   49214 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0731 17:45:03.502747   49214 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0731 17:45:03.511402   49214 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0731 17:45:03.511806   49214 kubeconfig.go:47] verify endpoint returned: get endpoint: "test-preload-106909" does not appear in /home/jenkins/minikube-integration/19349-8084/kubeconfig
	I0731 17:45:03.511905   49214 kubeconfig.go:62] /home/jenkins/minikube-integration/19349-8084/kubeconfig needs updating (will repair): [kubeconfig missing "test-preload-106909" cluster setting kubeconfig missing "test-preload-106909" context setting]
	I0731 17:45:03.512172   49214 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19349-8084/kubeconfig: {Name:mkf2340bc1ada3c683f132aca37228d7588e1fdb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 17:45:03.512709   49214 kapi.go:59] client config for test-preload-106909: &rest.Config{Host:"https://192.168.39.32:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19349-8084/.minikube/profiles/test-preload-106909/client.crt", KeyFile:"/home/jenkins/minikube-integration/19349-8084/.minikube/profiles/test-preload-106909/client.key", CAFile:"/home/jenkins/minikube-integration/19349-8084/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil)
, NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1d02f40), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0731 17:45:03.513234   49214 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0731 17:45:03.521863   49214 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.39.32
	I0731 17:45:03.521886   49214 kubeadm.go:1160] stopping kube-system containers ...
	I0731 17:45:03.521896   49214 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0731 17:45:03.521953   49214 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0731 17:45:03.559127   49214 cri.go:89] found id: ""
	I0731 17:45:03.559188   49214 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0731 17:45:03.574717   49214 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0731 17:45:03.583589   49214 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0731 17:45:03.583615   49214 kubeadm.go:157] found existing configuration files:
	
	I0731 17:45:03.583658   49214 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0731 17:45:03.591901   49214 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0731 17:45:03.591949   49214 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0731 17:45:03.600553   49214 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0731 17:45:03.608640   49214 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0731 17:45:03.608692   49214 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0731 17:45:03.617150   49214 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0731 17:45:03.625374   49214 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0731 17:45:03.625423   49214 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0731 17:45:03.633807   49214 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0731 17:45:03.641996   49214 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0731 17:45:03.642040   49214 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0731 17:45:03.650389   49214 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0731 17:45:03.658728   49214 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0731 17:45:03.749928   49214 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0731 17:45:04.388653   49214 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0731 17:45:04.632426   49214 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0731 17:45:04.697163   49214 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0731 17:45:04.790370   49214 api_server.go:52] waiting for apiserver process to appear ...
	I0731 17:45:04.790446   49214 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 17:45:05.291306   49214 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 17:45:05.790732   49214 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 17:45:05.805314   49214 api_server.go:72] duration metric: took 1.014942018s to wait for apiserver process to appear ...
	I0731 17:45:05.805344   49214 api_server.go:88] waiting for apiserver healthz status ...
	I0731 17:45:05.805364   49214 api_server.go:253] Checking apiserver healthz at https://192.168.39.32:8443/healthz ...
	I0731 17:45:05.805815   49214 api_server.go:269] stopped: https://192.168.39.32:8443/healthz: Get "https://192.168.39.32:8443/healthz": dial tcp 192.168.39.32:8443: connect: connection refused
	I0731 17:45:06.305798   49214 api_server.go:253] Checking apiserver healthz at https://192.168.39.32:8443/healthz ...
	I0731 17:45:10.113963   49214 api_server.go:279] https://192.168.39.32:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0731 17:45:10.113999   49214 api_server.go:103] status: https://192.168.39.32:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0731 17:45:10.114017   49214 api_server.go:253] Checking apiserver healthz at https://192.168.39.32:8443/healthz ...
	I0731 17:45:10.131286   49214 api_server.go:279] https://192.168.39.32:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0731 17:45:10.131313   49214 api_server.go:103] status: https://192.168.39.32:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0731 17:45:10.305534   49214 api_server.go:253] Checking apiserver healthz at https://192.168.39.32:8443/healthz ...
	I0731 17:45:10.311945   49214 api_server.go:279] https://192.168.39.32:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0731 17:45:10.311978   49214 api_server.go:103] status: https://192.168.39.32:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0731 17:45:10.805551   49214 api_server.go:253] Checking apiserver healthz at https://192.168.39.32:8443/healthz ...
	I0731 17:45:10.815590   49214 api_server.go:279] https://192.168.39.32:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0731 17:45:10.815616   49214 api_server.go:103] status: https://192.168.39.32:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0731 17:45:11.305556   49214 api_server.go:253] Checking apiserver healthz at https://192.168.39.32:8443/healthz ...
	I0731 17:45:11.310874   49214 api_server.go:279] https://192.168.39.32:8443/healthz returned 200:
	ok
	I0731 17:45:11.316897   49214 api_server.go:141] control plane version: v1.24.4
	I0731 17:45:11.316922   49214 api_server.go:131] duration metric: took 5.511571883s to wait for apiserver health ...
	I0731 17:45:11.316930   49214 cni.go:84] Creating CNI manager for ""
	I0731 17:45:11.316937   49214 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0731 17:45:11.318685   49214 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0731 17:45:11.320043   49214 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0731 17:45:11.329853   49214 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0731 17:45:11.345166   49214 system_pods.go:43] waiting for kube-system pods to appear ...
	I0731 17:45:11.355879   49214 system_pods.go:59] 7 kube-system pods found
	I0731 17:45:11.355912   49214 system_pods.go:61] "coredns-6d4b75cb6d-l4zr6" [6ee5eaf3-545c-4368-9365-51dbee049dcd] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0731 17:45:11.355920   49214 system_pods.go:61] "etcd-test-preload-106909" [c4891f2b-9e17-4154-b259-07efa05f6074] Running
	I0731 17:45:11.355929   49214 system_pods.go:61] "kube-apiserver-test-preload-106909" [4bc9061b-864d-4d81-9ab6-8d822f40cef5] Running
	I0731 17:45:11.355935   49214 system_pods.go:61] "kube-controller-manager-test-preload-106909" [ce46618d-00d2-476c-bcb4-24d083696d1e] Running
	I0731 17:45:11.355946   49214 system_pods.go:61] "kube-proxy-2wbcp" [b5a7709c-bae5-4bb5-8272-0dcb4e2100a3] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0731 17:45:11.355958   49214 system_pods.go:61] "kube-scheduler-test-preload-106909" [8744c9e3-f9de-41cb-94ed-ab432ce444e5] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0731 17:45:11.355969   49214 system_pods.go:61] "storage-provisioner" [57acd796-98b4-4d7e-910f-6ce5ca605849] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0731 17:45:11.355981   49214 system_pods.go:74] duration metric: took 10.79616ms to wait for pod list to return data ...
	I0731 17:45:11.355994   49214 node_conditions.go:102] verifying NodePressure condition ...
	I0731 17:45:11.361704   49214 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0731 17:45:11.361743   49214 node_conditions.go:123] node cpu capacity is 2
	I0731 17:45:11.361756   49214 node_conditions.go:105] duration metric: took 5.75354ms to run NodePressure ...
	I0731 17:45:11.361782   49214 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0731 17:45:11.599388   49214 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0731 17:45:11.603138   49214 kubeadm.go:739] kubelet initialised
	I0731 17:45:11.603159   49214 kubeadm.go:740] duration metric: took 3.748671ms waiting for restarted kubelet to initialise ...
	I0731 17:45:11.603166   49214 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0731 17:45:11.607837   49214 pod_ready.go:78] waiting up to 4m0s for pod "coredns-6d4b75cb6d-l4zr6" in "kube-system" namespace to be "Ready" ...
	I0731 17:45:11.612954   49214 pod_ready.go:97] node "test-preload-106909" hosting pod "coredns-6d4b75cb6d-l4zr6" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-106909" has status "Ready":"False"
	I0731 17:45:11.612979   49214 pod_ready.go:81] duration metric: took 5.115497ms for pod "coredns-6d4b75cb6d-l4zr6" in "kube-system" namespace to be "Ready" ...
	E0731 17:45:11.612990   49214 pod_ready.go:66] WaitExtra: waitPodCondition: node "test-preload-106909" hosting pod "coredns-6d4b75cb6d-l4zr6" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-106909" has status "Ready":"False"
	I0731 17:45:11.612999   49214 pod_ready.go:78] waiting up to 4m0s for pod "etcd-test-preload-106909" in "kube-system" namespace to be "Ready" ...
	I0731 17:45:11.617071   49214 pod_ready.go:97] node "test-preload-106909" hosting pod "etcd-test-preload-106909" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-106909" has status "Ready":"False"
	I0731 17:45:11.617091   49214 pod_ready.go:81] duration metric: took 4.072423ms for pod "etcd-test-preload-106909" in "kube-system" namespace to be "Ready" ...
	E0731 17:45:11.617100   49214 pod_ready.go:66] WaitExtra: waitPodCondition: node "test-preload-106909" hosting pod "etcd-test-preload-106909" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-106909" has status "Ready":"False"
	I0731 17:45:11.617113   49214 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-test-preload-106909" in "kube-system" namespace to be "Ready" ...
	I0731 17:45:11.620698   49214 pod_ready.go:97] node "test-preload-106909" hosting pod "kube-apiserver-test-preload-106909" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-106909" has status "Ready":"False"
	I0731 17:45:11.620718   49214 pod_ready.go:81] duration metric: took 3.592528ms for pod "kube-apiserver-test-preload-106909" in "kube-system" namespace to be "Ready" ...
	E0731 17:45:11.620727   49214 pod_ready.go:66] WaitExtra: waitPodCondition: node "test-preload-106909" hosting pod "kube-apiserver-test-preload-106909" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-106909" has status "Ready":"False"
	I0731 17:45:11.620735   49214 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-test-preload-106909" in "kube-system" namespace to be "Ready" ...
	I0731 17:45:11.748511   49214 pod_ready.go:97] node "test-preload-106909" hosting pod "kube-controller-manager-test-preload-106909" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-106909" has status "Ready":"False"
	I0731 17:45:11.748539   49214 pod_ready.go:81] duration metric: took 127.793362ms for pod "kube-controller-manager-test-preload-106909" in "kube-system" namespace to be "Ready" ...
	E0731 17:45:11.748551   49214 pod_ready.go:66] WaitExtra: waitPodCondition: node "test-preload-106909" hosting pod "kube-controller-manager-test-preload-106909" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-106909" has status "Ready":"False"
	I0731 17:45:11.748559   49214 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-2wbcp" in "kube-system" namespace to be "Ready" ...
	I0731 17:45:12.148274   49214 pod_ready.go:97] node "test-preload-106909" hosting pod "kube-proxy-2wbcp" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-106909" has status "Ready":"False"
	I0731 17:45:12.148316   49214 pod_ready.go:81] duration metric: took 399.746022ms for pod "kube-proxy-2wbcp" in "kube-system" namespace to be "Ready" ...
	E0731 17:45:12.148328   49214 pod_ready.go:66] WaitExtra: waitPodCondition: node "test-preload-106909" hosting pod "kube-proxy-2wbcp" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-106909" has status "Ready":"False"
	I0731 17:45:12.148336   49214 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-test-preload-106909" in "kube-system" namespace to be "Ready" ...
	I0731 17:45:12.548570   49214 pod_ready.go:97] node "test-preload-106909" hosting pod "kube-scheduler-test-preload-106909" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-106909" has status "Ready":"False"
	I0731 17:45:12.548599   49214 pod_ready.go:81] duration metric: took 400.249387ms for pod "kube-scheduler-test-preload-106909" in "kube-system" namespace to be "Ready" ...
	E0731 17:45:12.548612   49214 pod_ready.go:66] WaitExtra: waitPodCondition: node "test-preload-106909" hosting pod "kube-scheduler-test-preload-106909" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-106909" has status "Ready":"False"
	I0731 17:45:12.548620   49214 pod_ready.go:38] duration metric: took 945.442994ms for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0731 17:45:12.548641   49214 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0731 17:45:12.561312   49214 ops.go:34] apiserver oom_adj: -16
	I0731 17:45:12.561336   49214 kubeadm.go:597] duration metric: took 9.058632714s to restartPrimaryControlPlane
	I0731 17:45:12.561348   49214 kubeadm.go:394] duration metric: took 9.102838148s to StartCluster
	I0731 17:45:12.561368   49214 settings.go:142] acquiring lock: {Name:mk8ac8269296bf0b887a00ff74caaad180005f89 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 17:45:12.561452   49214 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19349-8084/kubeconfig
	I0731 17:45:12.562076   49214 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19349-8084/kubeconfig: {Name:mkf2340bc1ada3c683f132aca37228d7588e1fdb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 17:45:12.562307   49214 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.32 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0731 17:45:12.562383   49214 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0731 17:45:12.562493   49214 addons.go:69] Setting default-storageclass=true in profile "test-preload-106909"
	I0731 17:45:12.562501   49214 addons.go:69] Setting storage-provisioner=true in profile "test-preload-106909"
	I0731 17:45:12.562519   49214 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "test-preload-106909"
	I0731 17:45:12.562538   49214 addons.go:234] Setting addon storage-provisioner=true in "test-preload-106909"
	I0731 17:45:12.562542   49214 config.go:182] Loaded profile config "test-preload-106909": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.24.4
	W0731 17:45:12.562560   49214 addons.go:243] addon storage-provisioner should already be in state true
	I0731 17:45:12.562591   49214 host.go:66] Checking if "test-preload-106909" exists ...
	I0731 17:45:12.562808   49214 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 17:45:12.562844   49214 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 17:45:12.562868   49214 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 17:45:12.562909   49214 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 17:45:12.564137   49214 out.go:177] * Verifying Kubernetes components...
	I0731 17:45:12.565699   49214 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0731 17:45:12.577897   49214 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42589
	I0731 17:45:12.578326   49214 main.go:141] libmachine: () Calling .GetVersion
	I0731 17:45:12.578781   49214 main.go:141] libmachine: Using API Version  1
	I0731 17:45:12.578813   49214 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 17:45:12.579212   49214 main.go:141] libmachine: () Calling .GetMachineName
	I0731 17:45:12.579702   49214 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 17:45:12.579740   49214 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 17:45:12.582066   49214 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46717
	I0731 17:45:12.582432   49214 main.go:141] libmachine: () Calling .GetVersion
	I0731 17:45:12.582863   49214 main.go:141] libmachine: Using API Version  1
	I0731 17:45:12.582891   49214 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 17:45:12.583278   49214 main.go:141] libmachine: () Calling .GetMachineName
	I0731 17:45:12.583487   49214 main.go:141] libmachine: (test-preload-106909) Calling .GetState
	I0731 17:45:12.585815   49214 kapi.go:59] client config for test-preload-106909: &rest.Config{Host:"https://192.168.39.32:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19349-8084/.minikube/profiles/test-preload-106909/client.crt", KeyFile:"/home/jenkins/minikube-integration/19349-8084/.minikube/profiles/test-preload-106909/client.key", CAFile:"/home/jenkins/minikube-integration/19349-8084/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil)
, NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1d02f40), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0731 17:45:12.586144   49214 addons.go:234] Setting addon default-storageclass=true in "test-preload-106909"
	W0731 17:45:12.586167   49214 addons.go:243] addon default-storageclass should already be in state true
	I0731 17:45:12.586194   49214 host.go:66] Checking if "test-preload-106909" exists ...
	I0731 17:45:12.586590   49214 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 17:45:12.586636   49214 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 17:45:12.595554   49214 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35055
	I0731 17:45:12.595985   49214 main.go:141] libmachine: () Calling .GetVersion
	I0731 17:45:12.596466   49214 main.go:141] libmachine: Using API Version  1
	I0731 17:45:12.596492   49214 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 17:45:12.596795   49214 main.go:141] libmachine: () Calling .GetMachineName
	I0731 17:45:12.596998   49214 main.go:141] libmachine: (test-preload-106909) Calling .GetState
	I0731 17:45:12.598608   49214 main.go:141] libmachine: (test-preload-106909) Calling .DriverName
	I0731 17:45:12.600776   49214 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0731 17:45:12.601540   49214 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39501
	I0731 17:45:12.601881   49214 main.go:141] libmachine: () Calling .GetVersion
	I0731 17:45:12.602225   49214 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0731 17:45:12.602244   49214 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0731 17:45:12.602261   49214 main.go:141] libmachine: (test-preload-106909) Calling .GetSSHHostname
	I0731 17:45:12.602353   49214 main.go:141] libmachine: Using API Version  1
	I0731 17:45:12.602370   49214 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 17:45:12.602704   49214 main.go:141] libmachine: () Calling .GetMachineName
	I0731 17:45:12.603358   49214 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 17:45:12.603411   49214 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 17:45:12.605365   49214 main.go:141] libmachine: (test-preload-106909) DBG | domain test-preload-106909 has defined MAC address 52:54:00:e8:5c:bc in network mk-test-preload-106909
	I0731 17:45:12.605836   49214 main.go:141] libmachine: (test-preload-106909) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e8:5c:bc", ip: ""} in network mk-test-preload-106909: {Iface:virbr1 ExpiryTime:2024-07-31 18:44:41 +0000 UTC Type:0 Mac:52:54:00:e8:5c:bc Iaid: IPaddr:192.168.39.32 Prefix:24 Hostname:test-preload-106909 Clientid:01:52:54:00:e8:5c:bc}
	I0731 17:45:12.605864   49214 main.go:141] libmachine: (test-preload-106909) DBG | domain test-preload-106909 has defined IP address 192.168.39.32 and MAC address 52:54:00:e8:5c:bc in network mk-test-preload-106909
	I0731 17:45:12.606085   49214 main.go:141] libmachine: (test-preload-106909) Calling .GetSSHPort
	I0731 17:45:12.606236   49214 main.go:141] libmachine: (test-preload-106909) Calling .GetSSHKeyPath
	I0731 17:45:12.606383   49214 main.go:141] libmachine: (test-preload-106909) Calling .GetSSHUsername
	I0731 17:45:12.606517   49214 sshutil.go:53] new ssh client: &{IP:192.168.39.32 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19349-8084/.minikube/machines/test-preload-106909/id_rsa Username:docker}
	I0731 17:45:12.618958   49214 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45741
	I0731 17:45:12.619390   49214 main.go:141] libmachine: () Calling .GetVersion
	I0731 17:45:12.619921   49214 main.go:141] libmachine: Using API Version  1
	I0731 17:45:12.619945   49214 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 17:45:12.620306   49214 main.go:141] libmachine: () Calling .GetMachineName
	I0731 17:45:12.620507   49214 main.go:141] libmachine: (test-preload-106909) Calling .GetState
	I0731 17:45:12.622197   49214 main.go:141] libmachine: (test-preload-106909) Calling .DriverName
	I0731 17:45:12.622433   49214 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0731 17:45:12.622453   49214 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0731 17:45:12.622473   49214 main.go:141] libmachine: (test-preload-106909) Calling .GetSSHHostname
	I0731 17:45:12.625242   49214 main.go:141] libmachine: (test-preload-106909) DBG | domain test-preload-106909 has defined MAC address 52:54:00:e8:5c:bc in network mk-test-preload-106909
	I0731 17:45:12.625668   49214 main.go:141] libmachine: (test-preload-106909) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e8:5c:bc", ip: ""} in network mk-test-preload-106909: {Iface:virbr1 ExpiryTime:2024-07-31 18:44:41 +0000 UTC Type:0 Mac:52:54:00:e8:5c:bc Iaid: IPaddr:192.168.39.32 Prefix:24 Hostname:test-preload-106909 Clientid:01:52:54:00:e8:5c:bc}
	I0731 17:45:12.625695   49214 main.go:141] libmachine: (test-preload-106909) DBG | domain test-preload-106909 has defined IP address 192.168.39.32 and MAC address 52:54:00:e8:5c:bc in network mk-test-preload-106909
	I0731 17:45:12.625812   49214 main.go:141] libmachine: (test-preload-106909) Calling .GetSSHPort
	I0731 17:45:12.625979   49214 main.go:141] libmachine: (test-preload-106909) Calling .GetSSHKeyPath
	I0731 17:45:12.626127   49214 main.go:141] libmachine: (test-preload-106909) Calling .GetSSHUsername
	I0731 17:45:12.626308   49214 sshutil.go:53] new ssh client: &{IP:192.168.39.32 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19349-8084/.minikube/machines/test-preload-106909/id_rsa Username:docker}
	I0731 17:45:12.729479   49214 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0731 17:45:12.745411   49214 node_ready.go:35] waiting up to 6m0s for node "test-preload-106909" to be "Ready" ...
	I0731 17:45:12.863912   49214 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0731 17:45:12.883357   49214 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0731 17:45:13.853371   49214 main.go:141] libmachine: Making call to close driver server
	I0731 17:45:13.853399   49214 main.go:141] libmachine: (test-preload-106909) Calling .Close
	I0731 17:45:13.853434   49214 main.go:141] libmachine: Making call to close driver server
	I0731 17:45:13.853452   49214 main.go:141] libmachine: (test-preload-106909) Calling .Close
	I0731 17:45:13.853705   49214 main.go:141] libmachine: Successfully made call to close driver server
	I0731 17:45:13.853718   49214 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 17:45:13.853726   49214 main.go:141] libmachine: Making call to close driver server
	I0731 17:45:13.853734   49214 main.go:141] libmachine: (test-preload-106909) Calling .Close
	I0731 17:45:13.853733   49214 main.go:141] libmachine: Successfully made call to close driver server
	I0731 17:45:13.853749   49214 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 17:45:13.853761   49214 main.go:141] libmachine: Making call to close driver server
	I0731 17:45:13.853770   49214 main.go:141] libmachine: (test-preload-106909) Calling .Close
	I0731 17:45:13.854018   49214 main.go:141] libmachine: (test-preload-106909) DBG | Closing plugin on server side
	I0731 17:45:13.854059   49214 main.go:141] libmachine: Successfully made call to close driver server
	I0731 17:45:13.854069   49214 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 17:45:13.854089   49214 main.go:141] libmachine: (test-preload-106909) DBG | Closing plugin on server side
	I0731 17:45:13.854309   49214 main.go:141] libmachine: Successfully made call to close driver server
	I0731 17:45:13.854324   49214 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 17:45:13.858941   49214 main.go:141] libmachine: Making call to close driver server
	I0731 17:45:13.858956   49214 main.go:141] libmachine: (test-preload-106909) Calling .Close
	I0731 17:45:13.859171   49214 main.go:141] libmachine: Successfully made call to close driver server
	I0731 17:45:13.859185   49214 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 17:45:13.860977   49214 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0731 17:45:13.862144   49214 addons.go:510] duration metric: took 1.299767828s for enable addons: enabled=[storage-provisioner default-storageclass]
	I0731 17:45:14.749697   49214 node_ready.go:53] node "test-preload-106909" has status "Ready":"False"
	I0731 17:45:16.750232   49214 node_ready.go:53] node "test-preload-106909" has status "Ready":"False"
	I0731 17:45:19.252652   49214 node_ready.go:53] node "test-preload-106909" has status "Ready":"False"
	I0731 17:45:20.749580   49214 node_ready.go:49] node "test-preload-106909" has status "Ready":"True"
	I0731 17:45:20.749611   49214 node_ready.go:38] duration metric: took 8.004164395s for node "test-preload-106909" to be "Ready" ...
	I0731 17:45:20.749623   49214 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0731 17:45:20.754725   49214 pod_ready.go:78] waiting up to 6m0s for pod "coredns-6d4b75cb6d-l4zr6" in "kube-system" namespace to be "Ready" ...
	I0731 17:45:20.759475   49214 pod_ready.go:92] pod "coredns-6d4b75cb6d-l4zr6" in "kube-system" namespace has status "Ready":"True"
	I0731 17:45:20.759495   49214 pod_ready.go:81] duration metric: took 4.743229ms for pod "coredns-6d4b75cb6d-l4zr6" in "kube-system" namespace to be "Ready" ...
	I0731 17:45:20.759505   49214 pod_ready.go:78] waiting up to 6m0s for pod "etcd-test-preload-106909" in "kube-system" namespace to be "Ready" ...
	I0731 17:45:21.266984   49214 pod_ready.go:92] pod "etcd-test-preload-106909" in "kube-system" namespace has status "Ready":"True"
	I0731 17:45:21.267011   49214 pod_ready.go:81] duration metric: took 507.497729ms for pod "etcd-test-preload-106909" in "kube-system" namespace to be "Ready" ...
	I0731 17:45:21.267025   49214 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-test-preload-106909" in "kube-system" namespace to be "Ready" ...
	I0731 17:45:21.273365   49214 pod_ready.go:92] pod "kube-apiserver-test-preload-106909" in "kube-system" namespace has status "Ready":"True"
	I0731 17:45:21.273384   49214 pod_ready.go:81] duration metric: took 6.351044ms for pod "kube-apiserver-test-preload-106909" in "kube-system" namespace to be "Ready" ...
	I0731 17:45:21.273395   49214 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-test-preload-106909" in "kube-system" namespace to be "Ready" ...
	I0731 17:45:21.279098   49214 pod_ready.go:92] pod "kube-controller-manager-test-preload-106909" in "kube-system" namespace has status "Ready":"True"
	I0731 17:45:21.279129   49214 pod_ready.go:81] duration metric: took 5.72646ms for pod "kube-controller-manager-test-preload-106909" in "kube-system" namespace to be "Ready" ...
	I0731 17:45:21.279140   49214 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-2wbcp" in "kube-system" namespace to be "Ready" ...
	I0731 17:45:21.549477   49214 pod_ready.go:92] pod "kube-proxy-2wbcp" in "kube-system" namespace has status "Ready":"True"
	I0731 17:45:21.549504   49214 pod_ready.go:81] duration metric: took 270.355987ms for pod "kube-proxy-2wbcp" in "kube-system" namespace to be "Ready" ...
	I0731 17:45:21.549516   49214 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-test-preload-106909" in "kube-system" namespace to be "Ready" ...
	I0731 17:45:23.555202   49214 pod_ready.go:102] pod "kube-scheduler-test-preload-106909" in "kube-system" namespace has status "Ready":"False"
	I0731 17:45:24.056012   49214 pod_ready.go:92] pod "kube-scheduler-test-preload-106909" in "kube-system" namespace has status "Ready":"True"
	I0731 17:45:24.056037   49214 pod_ready.go:81] duration metric: took 2.506513268s for pod "kube-scheduler-test-preload-106909" in "kube-system" namespace to be "Ready" ...
	I0731 17:45:24.056046   49214 pod_ready.go:38] duration metric: took 3.306410584s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0731 17:45:24.056059   49214 api_server.go:52] waiting for apiserver process to appear ...
	I0731 17:45:24.056114   49214 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 17:45:24.070854   49214 api_server.go:72] duration metric: took 11.508513251s to wait for apiserver process to appear ...
	I0731 17:45:24.070881   49214 api_server.go:88] waiting for apiserver healthz status ...
	I0731 17:45:24.070902   49214 api_server.go:253] Checking apiserver healthz at https://192.168.39.32:8443/healthz ...
	I0731 17:45:24.075653   49214 api_server.go:279] https://192.168.39.32:8443/healthz returned 200:
	ok
	I0731 17:45:24.076428   49214 api_server.go:141] control plane version: v1.24.4
	I0731 17:45:24.076448   49214 api_server.go:131] duration metric: took 5.560751ms to wait for apiserver health ...
	I0731 17:45:24.076455   49214 system_pods.go:43] waiting for kube-system pods to appear ...
	I0731 17:45:24.153034   49214 system_pods.go:59] 7 kube-system pods found
	I0731 17:45:24.153058   49214 system_pods.go:61] "coredns-6d4b75cb6d-l4zr6" [6ee5eaf3-545c-4368-9365-51dbee049dcd] Running
	I0731 17:45:24.153063   49214 system_pods.go:61] "etcd-test-preload-106909" [c4891f2b-9e17-4154-b259-07efa05f6074] Running
	I0731 17:45:24.153066   49214 system_pods.go:61] "kube-apiserver-test-preload-106909" [4bc9061b-864d-4d81-9ab6-8d822f40cef5] Running
	I0731 17:45:24.153070   49214 system_pods.go:61] "kube-controller-manager-test-preload-106909" [ce46618d-00d2-476c-bcb4-24d083696d1e] Running
	I0731 17:45:24.153073   49214 system_pods.go:61] "kube-proxy-2wbcp" [b5a7709c-bae5-4bb5-8272-0dcb4e2100a3] Running
	I0731 17:45:24.153076   49214 system_pods.go:61] "kube-scheduler-test-preload-106909" [8744c9e3-f9de-41cb-94ed-ab432ce444e5] Running
	I0731 17:45:24.153078   49214 system_pods.go:61] "storage-provisioner" [57acd796-98b4-4d7e-910f-6ce5ca605849] Running
	I0731 17:45:24.153084   49214 system_pods.go:74] duration metric: took 76.624224ms to wait for pod list to return data ...
	I0731 17:45:24.153091   49214 default_sa.go:34] waiting for default service account to be created ...
	I0731 17:45:24.350133   49214 default_sa.go:45] found service account: "default"
	I0731 17:45:24.350167   49214 default_sa.go:55] duration metric: took 197.069449ms for default service account to be created ...
	I0731 17:45:24.350178   49214 system_pods.go:116] waiting for k8s-apps to be running ...
	I0731 17:45:24.553112   49214 system_pods.go:86] 7 kube-system pods found
	I0731 17:45:24.553137   49214 system_pods.go:89] "coredns-6d4b75cb6d-l4zr6" [6ee5eaf3-545c-4368-9365-51dbee049dcd] Running
	I0731 17:45:24.553144   49214 system_pods.go:89] "etcd-test-preload-106909" [c4891f2b-9e17-4154-b259-07efa05f6074] Running
	I0731 17:45:24.553148   49214 system_pods.go:89] "kube-apiserver-test-preload-106909" [4bc9061b-864d-4d81-9ab6-8d822f40cef5] Running
	I0731 17:45:24.553152   49214 system_pods.go:89] "kube-controller-manager-test-preload-106909" [ce46618d-00d2-476c-bcb4-24d083696d1e] Running
	I0731 17:45:24.553156   49214 system_pods.go:89] "kube-proxy-2wbcp" [b5a7709c-bae5-4bb5-8272-0dcb4e2100a3] Running
	I0731 17:45:24.553159   49214 system_pods.go:89] "kube-scheduler-test-preload-106909" [8744c9e3-f9de-41cb-94ed-ab432ce444e5] Running
	I0731 17:45:24.553162   49214 system_pods.go:89] "storage-provisioner" [57acd796-98b4-4d7e-910f-6ce5ca605849] Running
	I0731 17:45:24.553168   49214 system_pods.go:126] duration metric: took 202.984451ms to wait for k8s-apps to be running ...
	I0731 17:45:24.553175   49214 system_svc.go:44] waiting for kubelet service to be running ....
	I0731 17:45:24.553222   49214 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0731 17:45:24.567101   49214 system_svc.go:56] duration metric: took 13.918431ms WaitForService to wait for kubelet
	I0731 17:45:24.567143   49214 kubeadm.go:582] duration metric: took 12.004805281s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0731 17:45:24.567167   49214 node_conditions.go:102] verifying NodePressure condition ...
	I0731 17:45:24.750443   49214 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0731 17:45:24.750464   49214 node_conditions.go:123] node cpu capacity is 2
	I0731 17:45:24.750472   49214 node_conditions.go:105] duration metric: took 183.299248ms to run NodePressure ...
	I0731 17:45:24.750483   49214 start.go:241] waiting for startup goroutines ...
	I0731 17:45:24.750490   49214 start.go:246] waiting for cluster config update ...
	I0731 17:45:24.750499   49214 start.go:255] writing updated cluster config ...
	I0731 17:45:24.750733   49214 ssh_runner.go:195] Run: rm -f paused
	I0731 17:45:24.794804   49214 start.go:600] kubectl: 1.30.3, cluster: 1.24.4 (minor skew: 6)
	I0731 17:45:24.796722   49214 out.go:177] 
	W0731 17:45:24.797896   49214 out.go:239] ! /usr/local/bin/kubectl is version 1.30.3, which may have incompatibilities with Kubernetes 1.24.4.
	I0731 17:45:24.799319   49214 out.go:177]   - Want kubectl v1.24.4? Try 'minikube kubectl -- get pods -A'
	I0731 17:45:24.800555   49214 out.go:177] * Done! kubectl is now configured to use "test-preload-106909" cluster and "default" namespace by default
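	The version-skew warning above (host kubectl 1.30.3 vs. cluster 1.24.4) can be sidestepped by invoking the kubectl bundled with minikube instead of /usr/local/bin/kubectl, as the log itself suggests. A minimal sketch, assuming the profile name from this run (test-preload-106909); exact flags may differ by minikube version:

	  # run the cluster-matched kubectl (v1.24.4) against the test-preload-106909 profile
	  minikube kubectl -p test-preload-106909 -- get pods -A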
	
	
	==> CRI-O <==
	Jul 31 17:45:25 test-preload-106909 crio[677]: time="2024-07-31 17:45:25.605977532Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722447925605956870,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:119830,},InodesUsed:&UInt64Value{Value:76,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=2b5ecf8f-b604-4d57-b0ab-e266554a7233 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 31 17:45:25 test-preload-106909 crio[677]: time="2024-07-31 17:45:25.606514737Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=2089adb1-136e-49bc-881a-44e155598698 name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 17:45:25 test-preload-106909 crio[677]: time="2024-07-31 17:45:25.606573937Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=2089adb1-136e-49bc-881a-44e155598698 name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 17:45:25 test-preload-106909 crio[677]: time="2024-07-31 17:45:25.606787981Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:d39dde4b3e419d667aaa8992c71eee7a8b34c7929d703c107c2dd28bad84a78e,PodSandboxId:a051b0ebcbf63e8f57e7a99cf8c9b4f7dbf49c0d519f8f14bcf93ff00b19b6cd,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,State:CONTAINER_RUNNING,CreatedAt:1722447918987840485,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6d4b75cb6d-l4zr6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6ee5eaf3-545c-4368-9365-51dbee049dcd,},Annotations:map[string]string{io.kubernetes.container.hash: 7e9d0876,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pr
otocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9467f43c00baa74ef1bfb2f710083b072fdc66559725b871fb6f8f26ae0c8b5a,PodSandboxId:c4a67da7fd7816457620653f2b28157d9ef7d0ccc8f011625f706ec3c226443d,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,State:CONTAINER_RUNNING,CreatedAt:1722447911732919717,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-2wbcp,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: b5a7709c-bae5-4bb5-8272-0dcb4e2100a3,},Annotations:map[string]string{io.kubernetes.container.hash: 4ad929b2,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f022a2b1a39fad4c470cb8b9fb588f9913ec53f65729aa39284fa96353b2e855,PodSandboxId:f4769f4940c6083c5d8646327960b9c653490c94856709691759ce6b41896b19,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722447911465553851,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 57
acd796-98b4-4d7e-910f-6ce5ca605849,},Annotations:map[string]string{io.kubernetes.container.hash: 8daf144f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ebc63dd05d5d7923c605ccc86af5f3349ac8a8354da233015203af3899ff7529,PodSandboxId:7343a19c889aa57ba24906b26b1706ebda98ab1f68edac311626258a035c1714,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,State:CONTAINER_RUNNING,CreatedAt:1722447905504365906,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-106909,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7cad36072f45fcbe457231a93a2037ac,},Anno
tations:map[string]string{io.kubernetes.container.hash: 2c07381b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ba9ccd2b196ff1d26128e162cb725e2f973c77ade98df49a493e4d86b86e1136,PodSandboxId:dab237b40888a7f5c0472572cc706212a61f3b4589fda2aeb33cdf88997fe11c,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,State:CONTAINER_RUNNING,CreatedAt:1722447905482515087,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-106909,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6600c2b3cf9380a78ade630df822ee41,},Annotations:map
[string]string{io.kubernetes.container.hash: 5b4977d5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bf02d192d5325e34b963a3f4a262944828450c3f7f9544be0b94327c8657a0b6,PodSandboxId:2a240e00681399944a3eecf9ec69a5562b4257dc74beb58af124425dbbe19083,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,State:CONTAINER_RUNNING,CreatedAt:1722447905473260175,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-106909,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e253cf14074dcbfe8b85d814b736b21d,},Annotations:map[string]str
ing{io.kubernetes.container.hash: 7c6c7deb,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ed3fcb423ac8f618184f853d34d99c1a35717797b9697af48d439817761c3462,PodSandboxId:9c7956acda773e51ef07db37c351941e7a8ce8cb30c8a5e3988e5d341acaa689,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,State:CONTAINER_RUNNING,CreatedAt:1722447905451224148,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-106909,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9b354a724371453e3a78d9cebc86f0e1,},Annotation
s:map[string]string{io.kubernetes.container.hash: eb7ed27d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=2089adb1-136e-49bc-881a-44e155598698 name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 17:45:25 test-preload-106909 crio[677]: time="2024-07-31 17:45:25.639958448Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=28f6dc83-ceba-44ca-9f85-4d5bb3c7a462 name=/runtime.v1.RuntimeService/Version
	Jul 31 17:45:25 test-preload-106909 crio[677]: time="2024-07-31 17:45:25.640076652Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=28f6dc83-ceba-44ca-9f85-4d5bb3c7a462 name=/runtime.v1.RuntimeService/Version
	Jul 31 17:45:25 test-preload-106909 crio[677]: time="2024-07-31 17:45:25.641273161Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=162163f6-fcc3-42be-8362-27d9df16f460 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 31 17:45:25 test-preload-106909 crio[677]: time="2024-07-31 17:45:25.641741223Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722447925641671602,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:119830,},InodesUsed:&UInt64Value{Value:76,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=162163f6-fcc3-42be-8362-27d9df16f460 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 31 17:45:25 test-preload-106909 crio[677]: time="2024-07-31 17:45:25.642261921Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=787589a9-f54e-4802-81c4-a2c247fd5ff3 name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 17:45:25 test-preload-106909 crio[677]: time="2024-07-31 17:45:25.642324466Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=787589a9-f54e-4802-81c4-a2c247fd5ff3 name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 17:45:25 test-preload-106909 crio[677]: time="2024-07-31 17:45:25.642486612Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:d39dde4b3e419d667aaa8992c71eee7a8b34c7929d703c107c2dd28bad84a78e,PodSandboxId:a051b0ebcbf63e8f57e7a99cf8c9b4f7dbf49c0d519f8f14bcf93ff00b19b6cd,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,State:CONTAINER_RUNNING,CreatedAt:1722447918987840485,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6d4b75cb6d-l4zr6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6ee5eaf3-545c-4368-9365-51dbee049dcd,},Annotations:map[string]string{io.kubernetes.container.hash: 7e9d0876,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pr
otocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9467f43c00baa74ef1bfb2f710083b072fdc66559725b871fb6f8f26ae0c8b5a,PodSandboxId:c4a67da7fd7816457620653f2b28157d9ef7d0ccc8f011625f706ec3c226443d,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,State:CONTAINER_RUNNING,CreatedAt:1722447911732919717,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-2wbcp,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: b5a7709c-bae5-4bb5-8272-0dcb4e2100a3,},Annotations:map[string]string{io.kubernetes.container.hash: 4ad929b2,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f022a2b1a39fad4c470cb8b9fb588f9913ec53f65729aa39284fa96353b2e855,PodSandboxId:f4769f4940c6083c5d8646327960b9c653490c94856709691759ce6b41896b19,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722447911465553851,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 57
acd796-98b4-4d7e-910f-6ce5ca605849,},Annotations:map[string]string{io.kubernetes.container.hash: 8daf144f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ebc63dd05d5d7923c605ccc86af5f3349ac8a8354da233015203af3899ff7529,PodSandboxId:7343a19c889aa57ba24906b26b1706ebda98ab1f68edac311626258a035c1714,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,State:CONTAINER_RUNNING,CreatedAt:1722447905504365906,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-106909,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7cad36072f45fcbe457231a93a2037ac,},Anno
tations:map[string]string{io.kubernetes.container.hash: 2c07381b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ba9ccd2b196ff1d26128e162cb725e2f973c77ade98df49a493e4d86b86e1136,PodSandboxId:dab237b40888a7f5c0472572cc706212a61f3b4589fda2aeb33cdf88997fe11c,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,State:CONTAINER_RUNNING,CreatedAt:1722447905482515087,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-106909,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6600c2b3cf9380a78ade630df822ee41,},Annotations:map
[string]string{io.kubernetes.container.hash: 5b4977d5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bf02d192d5325e34b963a3f4a262944828450c3f7f9544be0b94327c8657a0b6,PodSandboxId:2a240e00681399944a3eecf9ec69a5562b4257dc74beb58af124425dbbe19083,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,State:CONTAINER_RUNNING,CreatedAt:1722447905473260175,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-106909,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e253cf14074dcbfe8b85d814b736b21d,},Annotations:map[string]str
ing{io.kubernetes.container.hash: 7c6c7deb,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ed3fcb423ac8f618184f853d34d99c1a35717797b9697af48d439817761c3462,PodSandboxId:9c7956acda773e51ef07db37c351941e7a8ce8cb30c8a5e3988e5d341acaa689,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,State:CONTAINER_RUNNING,CreatedAt:1722447905451224148,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-106909,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9b354a724371453e3a78d9cebc86f0e1,},Annotation
s:map[string]string{io.kubernetes.container.hash: eb7ed27d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=787589a9-f54e-4802-81c4-a2c247fd5ff3 name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 17:45:25 test-preload-106909 crio[677]: time="2024-07-31 17:45:25.675313714Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=00ba22f3-6c2e-4d6b-a8cd-8a833526f891 name=/runtime.v1.RuntimeService/Version
	Jul 31 17:45:25 test-preload-106909 crio[677]: time="2024-07-31 17:45:25.675394074Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=00ba22f3-6c2e-4d6b-a8cd-8a833526f891 name=/runtime.v1.RuntimeService/Version
	Jul 31 17:45:25 test-preload-106909 crio[677]: time="2024-07-31 17:45:25.676271271Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=0f0f5031-e941-44a9-8448-845776ad6a3d name=/runtime.v1.ImageService/ImageFsInfo
	Jul 31 17:45:25 test-preload-106909 crio[677]: time="2024-07-31 17:45:25.676735127Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722447925676674709,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:119830,},InodesUsed:&UInt64Value{Value:76,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=0f0f5031-e941-44a9-8448-845776ad6a3d name=/runtime.v1.ImageService/ImageFsInfo
	Jul 31 17:45:25 test-preload-106909 crio[677]: time="2024-07-31 17:45:25.677107327Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=f60cba48-8cb0-4f36-b23c-46479335e243 name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 17:45:25 test-preload-106909 crio[677]: time="2024-07-31 17:45:25.677169084Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=f60cba48-8cb0-4f36-b23c-46479335e243 name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 17:45:25 test-preload-106909 crio[677]: time="2024-07-31 17:45:25.677356498Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:d39dde4b3e419d667aaa8992c71eee7a8b34c7929d703c107c2dd28bad84a78e,PodSandboxId:a051b0ebcbf63e8f57e7a99cf8c9b4f7dbf49c0d519f8f14bcf93ff00b19b6cd,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,State:CONTAINER_RUNNING,CreatedAt:1722447918987840485,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6d4b75cb6d-l4zr6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6ee5eaf3-545c-4368-9365-51dbee049dcd,},Annotations:map[string]string{io.kubernetes.container.hash: 7e9d0876,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pr
otocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9467f43c00baa74ef1bfb2f710083b072fdc66559725b871fb6f8f26ae0c8b5a,PodSandboxId:c4a67da7fd7816457620653f2b28157d9ef7d0ccc8f011625f706ec3c226443d,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,State:CONTAINER_RUNNING,CreatedAt:1722447911732919717,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-2wbcp,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: b5a7709c-bae5-4bb5-8272-0dcb4e2100a3,},Annotations:map[string]string{io.kubernetes.container.hash: 4ad929b2,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f022a2b1a39fad4c470cb8b9fb588f9913ec53f65729aa39284fa96353b2e855,PodSandboxId:f4769f4940c6083c5d8646327960b9c653490c94856709691759ce6b41896b19,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722447911465553851,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 57
acd796-98b4-4d7e-910f-6ce5ca605849,},Annotations:map[string]string{io.kubernetes.container.hash: 8daf144f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ebc63dd05d5d7923c605ccc86af5f3349ac8a8354da233015203af3899ff7529,PodSandboxId:7343a19c889aa57ba24906b26b1706ebda98ab1f68edac311626258a035c1714,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,State:CONTAINER_RUNNING,CreatedAt:1722447905504365906,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-106909,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7cad36072f45fcbe457231a93a2037ac,},Anno
tations:map[string]string{io.kubernetes.container.hash: 2c07381b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ba9ccd2b196ff1d26128e162cb725e2f973c77ade98df49a493e4d86b86e1136,PodSandboxId:dab237b40888a7f5c0472572cc706212a61f3b4589fda2aeb33cdf88997fe11c,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,State:CONTAINER_RUNNING,CreatedAt:1722447905482515087,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-106909,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6600c2b3cf9380a78ade630df822ee41,},Annotations:map
[string]string{io.kubernetes.container.hash: 5b4977d5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bf02d192d5325e34b963a3f4a262944828450c3f7f9544be0b94327c8657a0b6,PodSandboxId:2a240e00681399944a3eecf9ec69a5562b4257dc74beb58af124425dbbe19083,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,State:CONTAINER_RUNNING,CreatedAt:1722447905473260175,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-106909,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e253cf14074dcbfe8b85d814b736b21d,},Annotations:map[string]str
ing{io.kubernetes.container.hash: 7c6c7deb,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ed3fcb423ac8f618184f853d34d99c1a35717797b9697af48d439817761c3462,PodSandboxId:9c7956acda773e51ef07db37c351941e7a8ce8cb30c8a5e3988e5d341acaa689,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,State:CONTAINER_RUNNING,CreatedAt:1722447905451224148,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-106909,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9b354a724371453e3a78d9cebc86f0e1,},Annotation
s:map[string]string{io.kubernetes.container.hash: eb7ed27d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=f60cba48-8cb0-4f36-b23c-46479335e243 name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 17:45:25 test-preload-106909 crio[677]: time="2024-07-31 17:45:25.709352244Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=4400d420-fd94-473a-b5c9-87ecc4e38094 name=/runtime.v1.RuntimeService/Version
	Jul 31 17:45:25 test-preload-106909 crio[677]: time="2024-07-31 17:45:25.709434488Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=4400d420-fd94-473a-b5c9-87ecc4e38094 name=/runtime.v1.RuntimeService/Version
	Jul 31 17:45:25 test-preload-106909 crio[677]: time="2024-07-31 17:45:25.710464601Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=3f501c39-790d-4140-a86f-a83dbe8a1898 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 31 17:45:25 test-preload-106909 crio[677]: time="2024-07-31 17:45:25.710923135Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722447925710893448,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:119830,},InodesUsed:&UInt64Value{Value:76,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=3f501c39-790d-4140-a86f-a83dbe8a1898 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 31 17:45:25 test-preload-106909 crio[677]: time="2024-07-31 17:45:25.711528126Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=5fc8c77a-4415-46f2-9a15-5d0f69f78d15 name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 17:45:25 test-preload-106909 crio[677]: time="2024-07-31 17:45:25.711575715Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=5fc8c77a-4415-46f2-9a15-5d0f69f78d15 name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 17:45:25 test-preload-106909 crio[677]: time="2024-07-31 17:45:25.711808891Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:d39dde4b3e419d667aaa8992c71eee7a8b34c7929d703c107c2dd28bad84a78e,PodSandboxId:a051b0ebcbf63e8f57e7a99cf8c9b4f7dbf49c0d519f8f14bcf93ff00b19b6cd,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,State:CONTAINER_RUNNING,CreatedAt:1722447918987840485,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6d4b75cb6d-l4zr6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6ee5eaf3-545c-4368-9365-51dbee049dcd,},Annotations:map[string]string{io.kubernetes.container.hash: 7e9d0876,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pr
otocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9467f43c00baa74ef1bfb2f710083b072fdc66559725b871fb6f8f26ae0c8b5a,PodSandboxId:c4a67da7fd7816457620653f2b28157d9ef7d0ccc8f011625f706ec3c226443d,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,State:CONTAINER_RUNNING,CreatedAt:1722447911732919717,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-2wbcp,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: b5a7709c-bae5-4bb5-8272-0dcb4e2100a3,},Annotations:map[string]string{io.kubernetes.container.hash: 4ad929b2,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f022a2b1a39fad4c470cb8b9fb588f9913ec53f65729aa39284fa96353b2e855,PodSandboxId:f4769f4940c6083c5d8646327960b9c653490c94856709691759ce6b41896b19,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722447911465553851,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 57
acd796-98b4-4d7e-910f-6ce5ca605849,},Annotations:map[string]string{io.kubernetes.container.hash: 8daf144f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ebc63dd05d5d7923c605ccc86af5f3349ac8a8354da233015203af3899ff7529,PodSandboxId:7343a19c889aa57ba24906b26b1706ebda98ab1f68edac311626258a035c1714,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,State:CONTAINER_RUNNING,CreatedAt:1722447905504365906,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-106909,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7cad36072f45fcbe457231a93a2037ac,},Anno
tations:map[string]string{io.kubernetes.container.hash: 2c07381b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ba9ccd2b196ff1d26128e162cb725e2f973c77ade98df49a493e4d86b86e1136,PodSandboxId:dab237b40888a7f5c0472572cc706212a61f3b4589fda2aeb33cdf88997fe11c,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,State:CONTAINER_RUNNING,CreatedAt:1722447905482515087,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-106909,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6600c2b3cf9380a78ade630df822ee41,},Annotations:map
[string]string{io.kubernetes.container.hash: 5b4977d5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bf02d192d5325e34b963a3f4a262944828450c3f7f9544be0b94327c8657a0b6,PodSandboxId:2a240e00681399944a3eecf9ec69a5562b4257dc74beb58af124425dbbe19083,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,State:CONTAINER_RUNNING,CreatedAt:1722447905473260175,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-106909,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e253cf14074dcbfe8b85d814b736b21d,},Annotations:map[string]str
ing{io.kubernetes.container.hash: 7c6c7deb,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ed3fcb423ac8f618184f853d34d99c1a35717797b9697af48d439817761c3462,PodSandboxId:9c7956acda773e51ef07db37c351941e7a8ce8cb30c8a5e3988e5d341acaa689,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,State:CONTAINER_RUNNING,CreatedAt:1722447905451224148,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-106909,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9b354a724371453e3a78d9cebc86f0e1,},Annotation
s:map[string]string{io.kubernetes.container.hash: eb7ed27d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=5fc8c77a-4415-46f2-9a15-5d0f69f78d15 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	d39dde4b3e419       a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03   6 seconds ago       Running             coredns                   1                   a051b0ebcbf63       coredns-6d4b75cb6d-l4zr6
	9467f43c00baa       7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7   14 seconds ago      Running             kube-proxy                1                   c4a67da7fd781       kube-proxy-2wbcp
	f022a2b1a39fa       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   14 seconds ago      Running             storage-provisioner       2                   f4769f4940c60       storage-provisioner
	ebc63dd05d5d7       aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b   20 seconds ago      Running             etcd                      1                   7343a19c889aa       etcd-test-preload-106909
	ba9ccd2b196ff       03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9   20 seconds ago      Running             kube-scheduler            1                   dab237b40888a       kube-scheduler-test-preload-106909
	bf02d192d5325       6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d   20 seconds ago      Running             kube-apiserver            1                   2a240e0068139       kube-apiserver-test-preload-106909
	ed3fcb423ac8f       1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48   20 seconds ago      Running             kube-controller-manager   1                   9c7956acda773       kube-controller-manager-test-preload-106909
	
	
	==> coredns [d39dde4b3e419d667aaa8992c71eee7a8b34c7929d703c107c2dd28bad84a78e] <==
	.:53
	[INFO] plugin/reload: Running configuration MD5 = bbeeddb09682f41960fef01b05cb3a3d
	CoreDNS-1.8.6
	linux/amd64, go1.17.1, 13a9191
	[INFO] 127.0.0.1:46187 - 21129 "HINFO IN 8155767617735071448.6224851042122919964. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.010581821s
	
	
	==> describe nodes <==
	Name:               test-preload-106909
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=test-preload-106909
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=1d737dad7efa60c56d30434fcd857dd3b14c91d9
	                    minikube.k8s.io/name=test-preload-106909
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_07_31T17_43_02_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 31 Jul 2024 17:42:59 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  test-preload-106909
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 31 Jul 2024 17:45:20 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 31 Jul 2024 17:45:20 +0000   Wed, 31 Jul 2024 17:42:56 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 31 Jul 2024 17:45:20 +0000   Wed, 31 Jul 2024 17:42:56 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 31 Jul 2024 17:45:20 +0000   Wed, 31 Jul 2024 17:42:56 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 31 Jul 2024 17:45:20 +0000   Wed, 31 Jul 2024 17:45:20 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.32
	  Hostname:    test-preload-106909
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 aa21a051fd5542c99f1dcebf3302822e
	  System UUID:                aa21a051-fd55-42c9-9f1d-cebf3302822e
	  Boot ID:                    d71feda4-67e4-4f0b-9caf-ed14f129428f
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.24.4
	  Kube-Proxy Version:         v1.24.4
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                           CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                           ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-6d4b75cb6d-l4zr6                       100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     2m10s
	  kube-system                 etcd-test-preload-106909                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         2m23s
	  kube-system                 kube-apiserver-test-preload-106909             250m (12%)    0 (0%)      0 (0%)           0 (0%)         2m23s
	  kube-system                 kube-controller-manager-test-preload-106909    200m (10%)    0 (0%)      0 (0%)           0 (0%)         2m23s
	  kube-system                 kube-proxy-2wbcp                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m10s
	  kube-system                 kube-scheduler-test-preload-106909             100m (5%)     0 (0%)      0 (0%)           0 (0%)         2m23s
	  kube-system                 storage-provisioner                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m9s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (8%)  170Mi (8%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 13s                    kube-proxy       
	  Normal  Starting                 2m8s                   kube-proxy       
	  Normal  NodeHasSufficientMemory  2m31s (x5 over 2m31s)  kubelet          Node test-preload-106909 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m31s (x5 over 2m31s)  kubelet          Node test-preload-106909 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m31s (x4 over 2m31s)  kubelet          Node test-preload-106909 status is now: NodeHasSufficientPID
	  Normal  Starting                 2m23s                  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  2m23s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  2m23s                  kubelet          Node test-preload-106909 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m23s                  kubelet          Node test-preload-106909 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m23s                  kubelet          Node test-preload-106909 status is now: NodeHasSufficientPID
	  Normal  NodeReady                2m13s                  kubelet          Node test-preload-106909 status is now: NodeReady
	  Normal  RegisteredNode           2m10s                  node-controller  Node test-preload-106909 event: Registered Node test-preload-106909 in Controller
	  Normal  Starting                 21s                    kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  21s (x8 over 21s)      kubelet          Node test-preload-106909 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    21s (x8 over 21s)      kubelet          Node test-preload-106909 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     21s (x7 over 21s)      kubelet          Node test-preload-106909 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  21s                    kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           3s                     node-controller  Node test-preload-106909 event: Registered Node test-preload-106909 in Controller
	
	
	==> dmesg <==
	[Jul31 17:44] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000000] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.050476] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.037052] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.671264] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +1.740798] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +2.403541] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000005] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +6.691472] systemd-fstab-generator[594]: Ignoring "noauto" option for root device
	[  +0.057580] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.053466] systemd-fstab-generator[606]: Ignoring "noauto" option for root device
	[  +0.190090] systemd-fstab-generator[620]: Ignoring "noauto" option for root device
	[  +0.103107] systemd-fstab-generator[632]: Ignoring "noauto" option for root device
	[  +0.247196] systemd-fstab-generator[662]: Ignoring "noauto" option for root device
	[Jul31 17:45] systemd-fstab-generator[938]: Ignoring "noauto" option for root device
	[  +0.058129] kauditd_printk_skb: 130 callbacks suppressed
	[  +1.580469] systemd-fstab-generator[1066]: Ignoring "noauto" option for root device
	[  +6.868265] kauditd_printk_skb: 105 callbacks suppressed
	[  +1.207981] systemd-fstab-generator[1707]: Ignoring "noauto" option for root device
	[  +6.165633] kauditd_printk_skb: 53 callbacks suppressed
	
	
	==> etcd [ebc63dd05d5d7923c605ccc86af5f3349ac8a8354da233015203af3899ff7529] <==
	{"level":"info","ts":"2024-07-31T17:45:05.880Z","caller":"etcdserver/server.go:851","msg":"starting etcd server","local-member-id":"d4c05646b7156589","local-server-version":"3.5.3","cluster-version":"to_be_decided"}
	{"level":"info","ts":"2024-07-31T17:45:05.897Z","caller":"etcdserver/server.go:752","msg":"starting initial election tick advance","election-ticks":10}
	{"level":"info","ts":"2024-07-31T17:45:05.907Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d4c05646b7156589 switched to configuration voters=(15330347993288500617)"}
	{"level":"info","ts":"2024-07-31T17:45:05.920Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"68bdcbcbc4b793bb","local-member-id":"d4c05646b7156589","added-peer-id":"d4c05646b7156589","added-peer-peer-urls":["https://192.168.39.32:2380"]}
	{"level":"info","ts":"2024-07-31T17:45:05.922Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"68bdcbcbc4b793bb","local-member-id":"d4c05646b7156589","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-31T17:45:05.923Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-31T17:45:05.934Z","caller":"embed/etcd.go:581","msg":"serving peer traffic","address":"192.168.39.32:2380"}
	{"level":"info","ts":"2024-07-31T17:45:05.934Z","caller":"embed/etcd.go:553","msg":"cmux::serve","address":"192.168.39.32:2380"}
	{"level":"info","ts":"2024-07-31T17:45:05.934Z","caller":"embed/etcd.go:688","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-07-31T17:45:05.937Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"d4c05646b7156589","initial-advertise-peer-urls":["https://192.168.39.32:2380"],"listen-peer-urls":["https://192.168.39.32:2380"],"advertise-client-urls":["https://192.168.39.32:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.32:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-07-31T17:45:05.937Z","caller":"embed/etcd.go:763","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-07-31T17:45:07.743Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d4c05646b7156589 is starting a new election at term 2"}
	{"level":"info","ts":"2024-07-31T17:45:07.743Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d4c05646b7156589 became pre-candidate at term 2"}
	{"level":"info","ts":"2024-07-31T17:45:07.743Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d4c05646b7156589 received MsgPreVoteResp from d4c05646b7156589 at term 2"}
	{"level":"info","ts":"2024-07-31T17:45:07.743Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d4c05646b7156589 became candidate at term 3"}
	{"level":"info","ts":"2024-07-31T17:45:07.743Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d4c05646b7156589 received MsgVoteResp from d4c05646b7156589 at term 3"}
	{"level":"info","ts":"2024-07-31T17:45:07.743Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d4c05646b7156589 became leader at term 3"}
	{"level":"info","ts":"2024-07-31T17:45:07.743Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: d4c05646b7156589 elected leader d4c05646b7156589 at term 3"}
	{"level":"info","ts":"2024-07-31T17:45:07.748Z","caller":"etcdserver/server.go:2042","msg":"published local member to cluster through raft","local-member-id":"d4c05646b7156589","local-member-attributes":"{Name:test-preload-106909 ClientURLs:[https://192.168.39.32:2379]}","request-path":"/0/members/d4c05646b7156589/attributes","cluster-id":"68bdcbcbc4b793bb","publish-timeout":"7s"}
	{"level":"info","ts":"2024-07-31T17:45:07.748Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-07-31T17:45:07.749Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-07-31T17:45:07.749Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-07-31T17:45:07.749Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-07-31T17:45:07.750Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-07-31T17:45:07.750Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"192.168.39.32:2379"}
	
	
	==> kernel <==
	 17:45:25 up 0 min,  0 users,  load average: 0.39, 0.12, 0.04
	Linux test-preload-106909 5.10.207 #1 SMP Mon Jul 29 15:19:02 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [bf02d192d5325e34b963a3f4a262944828450c3f7f9544be0b94327c8657a0b6] <==
	I0731 17:45:10.043856       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0731 17:45:10.044201       1 apiservice_controller.go:97] Starting APIServiceRegistrationController
	I0731 17:45:10.044229       1 cache.go:32] Waiting for caches to sync for APIServiceRegistrationController controller
	I0731 17:45:10.030767       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0731 17:45:10.097731       1 apf_controller.go:317] Starting API Priority and Fairness config controller
	I0731 17:45:10.102942       1 crdregistration_controller.go:111] Starting crd-autoregister controller
	I0731 17:45:10.102968       1 shared_informer.go:255] Waiting for caches to sync for crd-autoregister
	I0731 17:45:10.198596       1 apf_controller.go:322] Running API Priority and Fairness config worker
	I0731 17:45:10.204006       1 shared_informer.go:262] Caches are synced for crd-autoregister
	I0731 17:45:10.233206       1 shared_informer.go:262] Caches are synced for cluster_authentication_trust_controller
	I0731 17:45:10.233788       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0731 17:45:10.241747       1 shared_informer.go:262] Caches are synced for node_authorizer
	I0731 17:45:10.244018       1 cache.go:39] Caches are synced for autoregister controller
	I0731 17:45:10.245227       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0731 17:45:10.245371       1 controller.go:611] quota admission added evaluator for: leases.coordination.k8s.io
	I0731 17:45:10.727185       1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I0731 17:45:11.038766       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0731 17:45:11.487366       1 controller.go:611] quota admission added evaluator for: serviceaccounts
	I0731 17:45:11.499990       1 controller.go:611] quota admission added evaluator for: deployments.apps
	I0731 17:45:11.545261       1 controller.go:611] quota admission added evaluator for: daemonsets.apps
	I0731 17:45:11.573524       1 controller.go:611] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0731 17:45:11.580903       1 controller.go:611] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0731 17:45:11.943470       1 controller.go:611] quota admission added evaluator for: events.events.k8s.io
	I0731 17:45:22.692140       1 controller.go:611] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0731 17:45:22.745605       1 controller.go:611] quota admission added evaluator for: endpoints
	
	
	==> kube-controller-manager [ed3fcb423ac8f618184f853d34d99c1a35717797b9697af48d439817761c3462] <==
	I0731 17:45:22.488823       1 shared_informer.go:262] Caches are synced for persistent volume
	I0731 17:45:22.489881       1 shared_informer.go:262] Caches are synced for namespace
	I0731 17:45:22.491192       1 shared_informer.go:262] Caches are synced for ClusterRoleAggregator
	I0731 17:45:22.491817       1 shared_informer.go:262] Caches are synced for GC
	I0731 17:45:22.493129       1 shared_informer.go:262] Caches are synced for TTL
	I0731 17:45:22.494727       1 shared_informer.go:262] Caches are synced for expand
	I0731 17:45:22.496256       1 shared_informer.go:262] Caches are synced for ephemeral
	I0731 17:45:22.504563       1 shared_informer.go:262] Caches are synced for disruption
	I0731 17:45:22.504593       1 disruption.go:371] Sending events to api server.
	I0731 17:45:22.632403       1 shared_informer.go:262] Caches are synced for attach detach
	I0731 17:45:22.645281       1 shared_informer.go:262] Caches are synced for daemon sets
	I0731 17:45:22.656568       1 shared_informer.go:262] Caches are synced for resource quota
	I0731 17:45:22.670859       1 shared_informer.go:262] Caches are synced for taint
	I0731 17:45:22.670958       1 node_lifecycle_controller.go:1399] Initializing eviction metric for zone: 
	I0731 17:45:22.671008       1 taint_manager.go:187] "Starting NoExecuteTaintManager"
	W0731 17:45:22.671089       1 node_lifecycle_controller.go:1014] Missing timestamp for Node test-preload-106909. Assuming now as a timestamp.
	I0731 17:45:22.671134       1 node_lifecycle_controller.go:1215] Controller detected that zone  is now in state Normal.
	I0731 17:45:22.671368       1 event.go:294] "Event occurred" object="test-preload-106909" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node test-preload-106909 event: Registered Node test-preload-106909 in Controller"
	I0731 17:45:22.683870       1 shared_informer.go:262] Caches are synced for endpoint_slice
	I0731 17:45:22.694374       1 shared_informer.go:262] Caches are synced for resource quota
	I0731 17:45:22.736043       1 shared_informer.go:262] Caches are synced for endpoint
	I0731 17:45:22.737792       1 shared_informer.go:262] Caches are synced for endpoint_slice_mirroring
	I0731 17:45:23.100858       1 shared_informer.go:262] Caches are synced for garbage collector
	I0731 17:45:23.100951       1 garbagecollector.go:158] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	I0731 17:45:23.132091       1 shared_informer.go:262] Caches are synced for garbage collector
	
	
	==> kube-proxy [9467f43c00baa74ef1bfb2f710083b072fdc66559725b871fb6f8f26ae0c8b5a] <==
	I0731 17:45:11.898884       1 node.go:163] Successfully retrieved node IP: 192.168.39.32
	I0731 17:45:11.898986       1 server_others.go:138] "Detected node IP" address="192.168.39.32"
	I0731 17:45:11.899020       1 server_others.go:578] "Unknown proxy mode, assuming iptables proxy" proxyMode=""
	I0731 17:45:11.935039       1 server_others.go:199] "kube-proxy running in single-stack mode, this ipFamily is not supported" ipFamily=IPv6
	I0731 17:45:11.935065       1 server_others.go:206] "Using iptables Proxier"
	I0731 17:45:11.935425       1 proxier.go:259] "Setting route_localnet=1, use nodePortAddresses to filter loopback addresses for NodePorts to skip it https://issues.k8s.io/90259"
	I0731 17:45:11.936296       1 server.go:661] "Version info" version="v1.24.4"
	I0731 17:45:11.936319       1 server.go:663] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0731 17:45:11.938002       1 config.go:317] "Starting service config controller"
	I0731 17:45:11.938314       1 shared_informer.go:255] Waiting for caches to sync for service config
	I0731 17:45:11.938340       1 config.go:226] "Starting endpoint slice config controller"
	I0731 17:45:11.938344       1 shared_informer.go:255] Waiting for caches to sync for endpoint slice config
	I0731 17:45:11.939255       1 config.go:444] "Starting node config controller"
	I0731 17:45:11.939277       1 shared_informer.go:255] Waiting for caches to sync for node config
	I0731 17:45:12.038998       1 shared_informer.go:262] Caches are synced for endpoint slice config
	I0731 17:45:12.039029       1 shared_informer.go:262] Caches are synced for service config
	I0731 17:45:12.039462       1 shared_informer.go:262] Caches are synced for node config
	
	
	==> kube-scheduler [ba9ccd2b196ff1d26128e162cb725e2f973c77ade98df49a493e4d86b86e1136] <==
	I0731 17:45:06.244264       1 serving.go:348] Generated self-signed cert in-memory
	W0731 17:45:10.119756       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0731 17:45:10.119819       1 authentication.go:346] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0731 17:45:10.119834       1 authentication.go:347] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0731 17:45:10.119844       1 authentication.go:348] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0731 17:45:10.158272       1 server.go:147] "Starting Kubernetes Scheduler" version="v1.24.4"
	I0731 17:45:10.160278       1 server.go:149] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0731 17:45:10.164649       1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
	I0731 17:45:10.164757       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0731 17:45:10.164864       1 shared_informer.go:255] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0731 17:45:10.164929       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0731 17:45:10.265138       1 shared_informer.go:262] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Jul 31 17:45:10 test-preload-106909 kubelet[1073]: I0731 17:45:10.211508    1073 kubelet_node_status.go:73] "Successfully registered node" node="test-preload-106909"
	Jul 31 17:45:10 test-preload-106909 kubelet[1073]: I0731 17:45:10.214300    1073 setters.go:532] "Node became not ready" node="test-preload-106909" condition={Type:Ready Status:False LastHeartbeatTime:2024-07-31 17:45:10.214239288 +0000 UTC m=+5.589779866 LastTransitionTime:2024-07-31 17:45:10.214239288 +0000 UTC m=+5.589779866 Reason:KubeletNotReady Message:container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?}
	Jul 31 17:45:10 test-preload-106909 kubelet[1073]: I0731 17:45:10.746370    1073 apiserver.go:52] "Watching apiserver"
	Jul 31 17:45:10 test-preload-106909 kubelet[1073]: I0731 17:45:10.754077    1073 topology_manager.go:200] "Topology Admit Handler"
	Jul 31 17:45:10 test-preload-106909 kubelet[1073]: I0731 17:45:10.754179    1073 topology_manager.go:200] "Topology Admit Handler"
	Jul 31 17:45:10 test-preload-106909 kubelet[1073]: I0731 17:45:10.754239    1073 topology_manager.go:200] "Topology Admit Handler"
	Jul 31 17:45:10 test-preload-106909 kubelet[1073]: E0731 17:45:10.756587    1073 pod_workers.go:951] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?" pod="kube-system/coredns-6d4b75cb6d-l4zr6" podUID=6ee5eaf3-545c-4368-9365-51dbee049dcd
	Jul 31 17:45:10 test-preload-106909 kubelet[1073]: I0731 17:45:10.893212    1073 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b27mx\" (UniqueName: \"kubernetes.io/projected/b5a7709c-bae5-4bb5-8272-0dcb4e2100a3-kube-api-access-b27mx\") pod \"kube-proxy-2wbcp\" (UID: \"b5a7709c-bae5-4bb5-8272-0dcb4e2100a3\") " pod="kube-system/kube-proxy-2wbcp"
	Jul 31 17:45:10 test-preload-106909 kubelet[1073]: I0731 17:45:10.893592    1073 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2hv4p\" (UniqueName: \"kubernetes.io/projected/6ee5eaf3-545c-4368-9365-51dbee049dcd-kube-api-access-2hv4p\") pod \"coredns-6d4b75cb6d-l4zr6\" (UID: \"6ee5eaf3-545c-4368-9365-51dbee049dcd\") " pod="kube-system/coredns-6d4b75cb6d-l4zr6"
	Jul 31 17:45:10 test-preload-106909 kubelet[1073]: I0731 17:45:10.893728    1073 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/57acd796-98b4-4d7e-910f-6ce5ca605849-tmp\") pod \"storage-provisioner\" (UID: \"57acd796-98b4-4d7e-910f-6ce5ca605849\") " pod="kube-system/storage-provisioner"
	Jul 31 17:45:10 test-preload-106909 kubelet[1073]: I0731 17:45:10.893793    1073 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b5a7709c-bae5-4bb5-8272-0dcb4e2100a3-xtables-lock\") pod \"kube-proxy-2wbcp\" (UID: \"b5a7709c-bae5-4bb5-8272-0dcb4e2100a3\") " pod="kube-system/kube-proxy-2wbcp"
	Jul 31 17:45:10 test-preload-106909 kubelet[1073]: I0731 17:45:10.893857    1073 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b5a7709c-bae5-4bb5-8272-0dcb4e2100a3-lib-modules\") pod \"kube-proxy-2wbcp\" (UID: \"b5a7709c-bae5-4bb5-8272-0dcb4e2100a3\") " pod="kube-system/kube-proxy-2wbcp"
	Jul 31 17:45:10 test-preload-106909 kubelet[1073]: I0731 17:45:10.893905    1073 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6ee5eaf3-545c-4368-9365-51dbee049dcd-config-volume\") pod \"coredns-6d4b75cb6d-l4zr6\" (UID: \"6ee5eaf3-545c-4368-9365-51dbee049dcd\") " pod="kube-system/coredns-6d4b75cb6d-l4zr6"
	Jul 31 17:45:10 test-preload-106909 kubelet[1073]: I0731 17:45:10.894032    1073 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/b5a7709c-bae5-4bb5-8272-0dcb4e2100a3-kube-proxy\") pod \"kube-proxy-2wbcp\" (UID: \"b5a7709c-bae5-4bb5-8272-0dcb4e2100a3\") " pod="kube-system/kube-proxy-2wbcp"
	Jul 31 17:45:10 test-preload-106909 kubelet[1073]: I0731 17:45:10.894055    1073 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zzww8\" (UniqueName: \"kubernetes.io/projected/57acd796-98b4-4d7e-910f-6ce5ca605849-kube-api-access-zzww8\") pod \"storage-provisioner\" (UID: \"57acd796-98b4-4d7e-910f-6ce5ca605849\") " pod="kube-system/storage-provisioner"
	Jul 31 17:45:10 test-preload-106909 kubelet[1073]: I0731 17:45:10.894124    1073 reconciler.go:159] "Reconciler: start to sync state"
	Jul 31 17:45:10 test-preload-106909 kubelet[1073]: E0731 17:45:10.997187    1073 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Jul 31 17:45:10 test-preload-106909 kubelet[1073]: E0731 17:45:10.997294    1073 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/configmap/6ee5eaf3-545c-4368-9365-51dbee049dcd-config-volume podName:6ee5eaf3-545c-4368-9365-51dbee049dcd nodeName:}" failed. No retries permitted until 2024-07-31 17:45:11.497263371 +0000 UTC m=+6.872803937 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/6ee5eaf3-545c-4368-9365-51dbee049dcd-config-volume") pod "coredns-6d4b75cb6d-l4zr6" (UID: "6ee5eaf3-545c-4368-9365-51dbee049dcd") : object "kube-system"/"coredns" not registered
	Jul 31 17:45:11 test-preload-106909 kubelet[1073]: E0731 17:45:11.500836    1073 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Jul 31 17:45:11 test-preload-106909 kubelet[1073]: E0731 17:45:11.500917    1073 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/configmap/6ee5eaf3-545c-4368-9365-51dbee049dcd-config-volume podName:6ee5eaf3-545c-4368-9365-51dbee049dcd nodeName:}" failed. No retries permitted until 2024-07-31 17:45:12.500896365 +0000 UTC m=+7.876436930 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/6ee5eaf3-545c-4368-9365-51dbee049dcd-config-volume") pod "coredns-6d4b75cb6d-l4zr6" (UID: "6ee5eaf3-545c-4368-9365-51dbee049dcd") : object "kube-system"/"coredns" not registered
	Jul 31 17:45:12 test-preload-106909 kubelet[1073]: E0731 17:45:12.509051    1073 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Jul 31 17:45:12 test-preload-106909 kubelet[1073]: E0731 17:45:12.509123    1073 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/configmap/6ee5eaf3-545c-4368-9365-51dbee049dcd-config-volume podName:6ee5eaf3-545c-4368-9365-51dbee049dcd nodeName:}" failed. No retries permitted until 2024-07-31 17:45:14.509108063 +0000 UTC m=+9.884648641 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/6ee5eaf3-545c-4368-9365-51dbee049dcd-config-volume") pod "coredns-6d4b75cb6d-l4zr6" (UID: "6ee5eaf3-545c-4368-9365-51dbee049dcd") : object "kube-system"/"coredns" not registered
	Jul 31 17:45:12 test-preload-106909 kubelet[1073]: E0731 17:45:12.860009    1073 pod_workers.go:951] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?" pod="kube-system/coredns-6d4b75cb6d-l4zr6" podUID=6ee5eaf3-545c-4368-9365-51dbee049dcd
	Jul 31 17:45:14 test-preload-106909 kubelet[1073]: E0731 17:45:14.527625    1073 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Jul 31 17:45:14 test-preload-106909 kubelet[1073]: E0731 17:45:14.527776    1073 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/configmap/6ee5eaf3-545c-4368-9365-51dbee049dcd-config-volume podName:6ee5eaf3-545c-4368-9365-51dbee049dcd nodeName:}" failed. No retries permitted until 2024-07-31 17:45:18.527753254 +0000 UTC m=+13.903293832 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/6ee5eaf3-545c-4368-9365-51dbee049dcd-config-volume") pod "coredns-6d4b75cb6d-l4zr6" (UID: "6ee5eaf3-545c-4368-9365-51dbee049dcd") : object "kube-system"/"coredns" not registered
	
	
	==> storage-provisioner [f022a2b1a39fad4c470cb8b9fb588f9913ec53f65729aa39284fa96353b2e855] <==
	I0731 17:45:11.572580       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p test-preload-106909 -n test-preload-106909
helpers_test.go:261: (dbg) Run:  kubectl --context test-preload-106909 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestPreload FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
helpers_test.go:175: Cleaning up "test-preload-106909" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-106909
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p test-preload-106909: (1.105570557s)
--- FAIL: TestPreload (217.74s)
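
The kubelet entries in the post-mortem above repeatedly report "No CNI configuration file in /etc/cni/net.d/", and the node stays NotReady until the network plugin writes its config. For triaging a similar failure interactively, here is a minimal sketch of checks against a still-running profile (the profile in this test was already deleted by the cleanup step, so the name below is purely illustrative; all commands are standard minikube/kubectl usage):

    # Illustrative profile name; substitute the profile under investigation.
    PROFILE=test-preload-106909

    # An empty /etc/cni/net.d/ explains the kubelet's NetworkPluginNotReady condition.
    out/minikube-linux-amd64 -p "$PROFILE" ssh "sudo ls -l /etc/cni/net.d/"

    # Did the node ever reach Ready, and which pods are stuck?
    kubectl --context "$PROFILE" get nodes -o wide
    kubectl --context "$PROFILE" get pods -A --field-selector=status.phase!=Running

    # The kubelet journal usually shows when (or whether) a CNI config appeared.
    out/minikube-linux-amd64 -p "$PROFILE" ssh "sudo journalctl -u kubelet --no-pager | grep -i cni"
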

                                                
                                    
TestKubernetesUpgrade (408.99s)

                                                
                                                
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-410576 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:222: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-410576 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: exit status 109 (4m26.825278721s)
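
In the stdout below, "Generating certificates and keys" and "Booting up control plane" each appear twice, which suggests the first control-plane bring-up on v1.20.0 never became healthy and minikube retried kubeadm before giving up with exit status 109. A hedged sketch of commands for gathering more detail from such a run before the profile is torn down (the binary path and profile name are the ones used by this job; the log file path is illustrative):

    # Capture the full minikube log bundle for the failed profile.
    out/minikube-linux-amd64 -p kubernetes-upgrade-410576 logs --file=./kubernetes-upgrade-410576.log

    # Check whether the API server ever answered.
    out/minikube-linux-amd64 -p kubernetes-upgrade-410576 status

    # The kubelet journal inside the VM usually shows why the static pods
    # (kube-apiserver, etcd) failed to come up.
    out/minikube-linux-amd64 -p kubernetes-upgrade-410576 ssh "sudo journalctl -u kubelet --no-pager | tail -n 100"
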

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-410576] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19349
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19349-8084/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19349-8084/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on user configuration
	* Starting "kubernetes-upgrade-410576" primary control-plane node in "kubernetes-upgrade-410576" cluster
	* Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	* Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0731 17:51:01.712153   56427 out.go:291] Setting OutFile to fd 1 ...
	I0731 17:51:01.712274   56427 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 17:51:01.712284   56427 out.go:304] Setting ErrFile to fd 2...
	I0731 17:51:01.712290   56427 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 17:51:01.712462   56427 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19349-8084/.minikube/bin
	I0731 17:51:01.713075   56427 out.go:298] Setting JSON to false
	I0731 17:51:01.713992   56427 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":5606,"bootTime":1722442656,"procs":218,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1065-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0731 17:51:01.714063   56427 start.go:139] virtualization: kvm guest
	I0731 17:51:01.716461   56427 out.go:177] * [kubernetes-upgrade-410576] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0731 17:51:01.717837   56427 out.go:177]   - MINIKUBE_LOCATION=19349
	I0731 17:51:01.717878   56427 notify.go:220] Checking for updates...
	I0731 17:51:01.720343   56427 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0731 17:51:01.721612   56427 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19349-8084/kubeconfig
	I0731 17:51:01.722936   56427 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19349-8084/.minikube
	I0731 17:51:01.724351   56427 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0731 17:51:01.725473   56427 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0731 17:51:01.727079   56427 config.go:182] Loaded profile config "NoKubernetes-231031": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v0.0.0
	I0731 17:51:01.727221   56427 config.go:182] Loaded profile config "cert-expiration-761578": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0731 17:51:01.727336   56427 config.go:182] Loaded profile config "running-upgrade-262154": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.24.1
	I0731 17:51:01.727432   56427 driver.go:392] Setting default libvirt URI to qemu:///system
	I0731 17:51:01.766535   56427 out.go:177] * Using the kvm2 driver based on user configuration
	I0731 17:51:01.767857   56427 start.go:297] selected driver: kvm2
	I0731 17:51:01.767869   56427 start.go:901] validating driver "kvm2" against <nil>
	I0731 17:51:01.767880   56427 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0731 17:51:01.768518   56427 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0731 17:51:01.768593   56427 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19349-8084/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0731 17:51:01.783413   56427 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0731 17:51:01.783452   56427 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0731 17:51:01.783658   56427 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0731 17:51:01.783708   56427 cni.go:84] Creating CNI manager for ""
	I0731 17:51:01.783719   56427 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0731 17:51:01.783725   56427 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0731 17:51:01.783775   56427 start.go:340] cluster config:
	{Name:kubernetes-upgrade-410576 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-410576 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.
local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0731 17:51:01.783864   56427 iso.go:125] acquiring lock: {Name:mk4494bb5a63902bf12a909c3109db85229bc516 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0731 17:51:01.785761   56427 out.go:177] * Starting "kubernetes-upgrade-410576" primary control-plane node in "kubernetes-upgrade-410576" cluster
	I0731 17:51:01.786966   56427 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0731 17:51:01.787017   56427 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19349-8084/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0731 17:51:01.787030   56427 cache.go:56] Caching tarball of preloaded images
	I0731 17:51:01.787137   56427 preload.go:172] Found /home/jenkins/minikube-integration/19349-8084/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0731 17:51:01.787154   56427 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0731 17:51:01.787290   56427 profile.go:143] Saving config to /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/kubernetes-upgrade-410576/config.json ...
	I0731 17:51:01.787316   56427 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/kubernetes-upgrade-410576/config.json: {Name:mk40f6b230bbecb16bb53d29d208e9bae732862b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 17:51:01.787466   56427 start.go:360] acquireMachinesLock for kubernetes-upgrade-410576: {Name:mkd78eacfab7d6c3058c6674434b3d889ec957e0 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0731 17:51:01.787504   56427 start.go:364] duration metric: took 16.947µs to acquireMachinesLock for "kubernetes-upgrade-410576"
	I0731 17:51:01.787517   56427 start.go:93] Provisioning new machine with config: &{Name:kubernetes-upgrade-410576 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesC
onfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-410576 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableO
ptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0731 17:51:01.787577   56427 start.go:125] createHost starting for "" (driver="kvm2")
	I0731 17:51:01.789118   56427 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0731 17:51:01.789269   56427 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 17:51:01.789319   56427 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 17:51:01.803346   56427 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44291
	I0731 17:51:01.803784   56427 main.go:141] libmachine: () Calling .GetVersion
	I0731 17:51:01.804433   56427 main.go:141] libmachine: Using API Version  1
	I0731 17:51:01.804457   56427 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 17:51:01.804805   56427 main.go:141] libmachine: () Calling .GetMachineName
	I0731 17:51:01.805018   56427 main.go:141] libmachine: (kubernetes-upgrade-410576) Calling .GetMachineName
	I0731 17:51:01.805179   56427 main.go:141] libmachine: (kubernetes-upgrade-410576) Calling .DriverName
	I0731 17:51:01.805319   56427 start.go:159] libmachine.API.Create for "kubernetes-upgrade-410576" (driver="kvm2")
	I0731 17:51:01.805345   56427 client.go:168] LocalClient.Create starting
	I0731 17:51:01.805372   56427 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19349-8084/.minikube/certs/ca.pem
	I0731 17:51:01.805410   56427 main.go:141] libmachine: Decoding PEM data...
	I0731 17:51:01.805429   56427 main.go:141] libmachine: Parsing certificate...
	I0731 17:51:01.805493   56427 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19349-8084/.minikube/certs/cert.pem
	I0731 17:51:01.805526   56427 main.go:141] libmachine: Decoding PEM data...
	I0731 17:51:01.805545   56427 main.go:141] libmachine: Parsing certificate...
	I0731 17:51:01.805569   56427 main.go:141] libmachine: Running pre-create checks...
	I0731 17:51:01.805581   56427 main.go:141] libmachine: (kubernetes-upgrade-410576) Calling .PreCreateCheck
	I0731 17:51:01.806032   56427 main.go:141] libmachine: (kubernetes-upgrade-410576) Calling .GetConfigRaw
	I0731 17:51:01.806459   56427 main.go:141] libmachine: Creating machine...
	I0731 17:51:01.806473   56427 main.go:141] libmachine: (kubernetes-upgrade-410576) Calling .Create
	I0731 17:51:01.806736   56427 main.go:141] libmachine: (kubernetes-upgrade-410576) Creating KVM machine...
	I0731 17:51:01.808144   56427 main.go:141] libmachine: (kubernetes-upgrade-410576) DBG | found existing default KVM network
	I0731 17:51:01.811455   56427 main.go:141] libmachine: (kubernetes-upgrade-410576) DBG | I0731 17:51:01.811282   56450 network.go:209] skipping subnet 192.168.39.0/24 that is reserved: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0731 17:51:01.812445   56427 main.go:141] libmachine: (kubernetes-upgrade-410576) DBG | I0731 17:51:01.812326   56450 network.go:211] skipping subnet 192.168.50.0/24 that is taken: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 IsPrivate:true Interface:{IfaceName:virbr3 IfaceIPv4:192.168.50.1 IfaceMTU:1500 IfaceMAC:52:54:00:a9:4e:83} reservation:<nil>}
	I0731 17:51:01.813370   56427 main.go:141] libmachine: (kubernetes-upgrade-410576) DBG | I0731 17:51:01.813313   56450 network.go:206] using free private subnet 192.168.61.0/24: &{IP:192.168.61.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.61.0/24 Gateway:192.168.61.1 ClientMin:192.168.61.2 ClientMax:192.168.61.254 Broadcast:192.168.61.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000015770}
	I0731 17:51:01.813439   56427 main.go:141] libmachine: (kubernetes-upgrade-410576) DBG | created network xml: 
	I0731 17:51:01.813461   56427 main.go:141] libmachine: (kubernetes-upgrade-410576) DBG | <network>
	I0731 17:51:01.813474   56427 main.go:141] libmachine: (kubernetes-upgrade-410576) DBG |   <name>mk-kubernetes-upgrade-410576</name>
	I0731 17:51:01.813492   56427 main.go:141] libmachine: (kubernetes-upgrade-410576) DBG |   <dns enable='no'/>
	I0731 17:51:01.813503   56427 main.go:141] libmachine: (kubernetes-upgrade-410576) DBG |   
	I0731 17:51:01.813513   56427 main.go:141] libmachine: (kubernetes-upgrade-410576) DBG |   <ip address='192.168.61.1' netmask='255.255.255.0'>
	I0731 17:51:01.813524   56427 main.go:141] libmachine: (kubernetes-upgrade-410576) DBG |     <dhcp>
	I0731 17:51:01.813546   56427 main.go:141] libmachine: (kubernetes-upgrade-410576) DBG |       <range start='192.168.61.2' end='192.168.61.253'/>
	I0731 17:51:01.813558   56427 main.go:141] libmachine: (kubernetes-upgrade-410576) DBG |     </dhcp>
	I0731 17:51:01.813574   56427 main.go:141] libmachine: (kubernetes-upgrade-410576) DBG |   </ip>
	I0731 17:51:01.813583   56427 main.go:141] libmachine: (kubernetes-upgrade-410576) DBG |   
	I0731 17:51:01.813594   56427 main.go:141] libmachine: (kubernetes-upgrade-410576) DBG | </network>
	I0731 17:51:01.813603   56427 main.go:141] libmachine: (kubernetes-upgrade-410576) DBG | 
	I0731 17:51:01.819176   56427 main.go:141] libmachine: (kubernetes-upgrade-410576) DBG | trying to create private KVM network mk-kubernetes-upgrade-410576 192.168.61.0/24...
	I0731 17:51:01.892787   56427 main.go:141] libmachine: (kubernetes-upgrade-410576) DBG | private KVM network mk-kubernetes-upgrade-410576 192.168.61.0/24 created
	I0731 17:51:01.892819   56427 main.go:141] libmachine: (kubernetes-upgrade-410576) DBG | I0731 17:51:01.892753   56450 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19349-8084/.minikube
	I0731 17:51:01.892833   56427 main.go:141] libmachine: (kubernetes-upgrade-410576) Setting up store path in /home/jenkins/minikube-integration/19349-8084/.minikube/machines/kubernetes-upgrade-410576 ...
	I0731 17:51:01.892850   56427 main.go:141] libmachine: (kubernetes-upgrade-410576) Building disk image from file:///home/jenkins/minikube-integration/19349-8084/.minikube/cache/iso/amd64/minikube-v1.33.1-1722248113-19339-amd64.iso
	I0731 17:51:01.892980   56427 main.go:141] libmachine: (kubernetes-upgrade-410576) Downloading /home/jenkins/minikube-integration/19349-8084/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19349-8084/.minikube/cache/iso/amd64/minikube-v1.33.1-1722248113-19339-amd64.iso...
	I0731 17:51:02.140160   56427 main.go:141] libmachine: (kubernetes-upgrade-410576) DBG | I0731 17:51:02.140056   56450 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19349-8084/.minikube/machines/kubernetes-upgrade-410576/id_rsa...
	I0731 17:51:02.428666   56427 main.go:141] libmachine: (kubernetes-upgrade-410576) DBG | I0731 17:51:02.428554   56450 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19349-8084/.minikube/machines/kubernetes-upgrade-410576/kubernetes-upgrade-410576.rawdisk...
	I0731 17:51:02.428693   56427 main.go:141] libmachine: (kubernetes-upgrade-410576) DBG | Writing magic tar header
	I0731 17:51:02.428722   56427 main.go:141] libmachine: (kubernetes-upgrade-410576) DBG | Writing SSH key tar header
	I0731 17:51:02.428734   56427 main.go:141] libmachine: (kubernetes-upgrade-410576) DBG | I0731 17:51:02.428667   56450 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19349-8084/.minikube/machines/kubernetes-upgrade-410576 ...
	I0731 17:51:02.428755   56427 main.go:141] libmachine: (kubernetes-upgrade-410576) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19349-8084/.minikube/machines/kubernetes-upgrade-410576
	I0731 17:51:02.428790   56427 main.go:141] libmachine: (kubernetes-upgrade-410576) Setting executable bit set on /home/jenkins/minikube-integration/19349-8084/.minikube/machines/kubernetes-upgrade-410576 (perms=drwx------)
	I0731 17:51:02.428819   56427 main.go:141] libmachine: (kubernetes-upgrade-410576) Setting executable bit set on /home/jenkins/minikube-integration/19349-8084/.minikube/machines (perms=drwxr-xr-x)
	I0731 17:51:02.428837   56427 main.go:141] libmachine: (kubernetes-upgrade-410576) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19349-8084/.minikube/machines
	I0731 17:51:02.428852   56427 main.go:141] libmachine: (kubernetes-upgrade-410576) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19349-8084/.minikube
	I0731 17:51:02.428864   56427 main.go:141] libmachine: (kubernetes-upgrade-410576) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19349-8084
	I0731 17:51:02.428878   56427 main.go:141] libmachine: (kubernetes-upgrade-410576) Setting executable bit set on /home/jenkins/minikube-integration/19349-8084/.minikube (perms=drwxr-xr-x)
	I0731 17:51:02.428900   56427 main.go:141] libmachine: (kubernetes-upgrade-410576) Setting executable bit set on /home/jenkins/minikube-integration/19349-8084 (perms=drwxrwxr-x)
	I0731 17:51:02.428912   56427 main.go:141] libmachine: (kubernetes-upgrade-410576) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0731 17:51:02.428925   56427 main.go:141] libmachine: (kubernetes-upgrade-410576) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0731 17:51:02.428936   56427 main.go:141] libmachine: (kubernetes-upgrade-410576) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0731 17:51:02.428944   56427 main.go:141] libmachine: (kubernetes-upgrade-410576) DBG | Checking permissions on dir: /home/jenkins
	I0731 17:51:02.428955   56427 main.go:141] libmachine: (kubernetes-upgrade-410576) Creating domain...
	I0731 17:51:02.428972   56427 main.go:141] libmachine: (kubernetes-upgrade-410576) DBG | Checking permissions on dir: /home
	I0731 17:51:02.428983   56427 main.go:141] libmachine: (kubernetes-upgrade-410576) DBG | Skipping /home - not owner
	I0731 17:51:02.430681   56427 main.go:141] libmachine: (kubernetes-upgrade-410576) define libvirt domain using xml: 
	I0731 17:51:02.430710   56427 main.go:141] libmachine: (kubernetes-upgrade-410576) <domain type='kvm'>
	I0731 17:51:02.430722   56427 main.go:141] libmachine: (kubernetes-upgrade-410576)   <name>kubernetes-upgrade-410576</name>
	I0731 17:51:02.430729   56427 main.go:141] libmachine: (kubernetes-upgrade-410576)   <memory unit='MiB'>2200</memory>
	I0731 17:51:02.430737   56427 main.go:141] libmachine: (kubernetes-upgrade-410576)   <vcpu>2</vcpu>
	I0731 17:51:02.430745   56427 main.go:141] libmachine: (kubernetes-upgrade-410576)   <features>
	I0731 17:51:02.430755   56427 main.go:141] libmachine: (kubernetes-upgrade-410576)     <acpi/>
	I0731 17:51:02.430771   56427 main.go:141] libmachine: (kubernetes-upgrade-410576)     <apic/>
	I0731 17:51:02.430804   56427 main.go:141] libmachine: (kubernetes-upgrade-410576)     <pae/>
	I0731 17:51:02.430827   56427 main.go:141] libmachine: (kubernetes-upgrade-410576)     
	I0731 17:51:02.430837   56427 main.go:141] libmachine: (kubernetes-upgrade-410576)   </features>
	I0731 17:51:02.430847   56427 main.go:141] libmachine: (kubernetes-upgrade-410576)   <cpu mode='host-passthrough'>
	I0731 17:51:02.430858   56427 main.go:141] libmachine: (kubernetes-upgrade-410576)   
	I0731 17:51:02.430865   56427 main.go:141] libmachine: (kubernetes-upgrade-410576)   </cpu>
	I0731 17:51:02.430877   56427 main.go:141] libmachine: (kubernetes-upgrade-410576)   <os>
	I0731 17:51:02.430887   56427 main.go:141] libmachine: (kubernetes-upgrade-410576)     <type>hvm</type>
	I0731 17:51:02.430897   56427 main.go:141] libmachine: (kubernetes-upgrade-410576)     <boot dev='cdrom'/>
	I0731 17:51:02.430907   56427 main.go:141] libmachine: (kubernetes-upgrade-410576)     <boot dev='hd'/>
	I0731 17:51:02.430916   56427 main.go:141] libmachine: (kubernetes-upgrade-410576)     <bootmenu enable='no'/>
	I0731 17:51:02.430926   56427 main.go:141] libmachine: (kubernetes-upgrade-410576)   </os>
	I0731 17:51:02.430948   56427 main.go:141] libmachine: (kubernetes-upgrade-410576)   <devices>
	I0731 17:51:02.430969   56427 main.go:141] libmachine: (kubernetes-upgrade-410576)     <disk type='file' device='cdrom'>
	I0731 17:51:02.431021   56427 main.go:141] libmachine: (kubernetes-upgrade-410576)       <source file='/home/jenkins/minikube-integration/19349-8084/.minikube/machines/kubernetes-upgrade-410576/boot2docker.iso'/>
	I0731 17:51:02.431051   56427 main.go:141] libmachine: (kubernetes-upgrade-410576)       <target dev='hdc' bus='scsi'/>
	I0731 17:51:02.431065   56427 main.go:141] libmachine: (kubernetes-upgrade-410576)       <readonly/>
	I0731 17:51:02.431076   56427 main.go:141] libmachine: (kubernetes-upgrade-410576)     </disk>
	I0731 17:51:02.431089   56427 main.go:141] libmachine: (kubernetes-upgrade-410576)     <disk type='file' device='disk'>
	I0731 17:51:02.431101   56427 main.go:141] libmachine: (kubernetes-upgrade-410576)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0731 17:51:02.431132   56427 main.go:141] libmachine: (kubernetes-upgrade-410576)       <source file='/home/jenkins/minikube-integration/19349-8084/.minikube/machines/kubernetes-upgrade-410576/kubernetes-upgrade-410576.rawdisk'/>
	I0731 17:51:02.431149   56427 main.go:141] libmachine: (kubernetes-upgrade-410576)       <target dev='hda' bus='virtio'/>
	I0731 17:51:02.431161   56427 main.go:141] libmachine: (kubernetes-upgrade-410576)     </disk>
	I0731 17:51:02.431172   56427 main.go:141] libmachine: (kubernetes-upgrade-410576)     <interface type='network'>
	I0731 17:51:02.431185   56427 main.go:141] libmachine: (kubernetes-upgrade-410576)       <source network='mk-kubernetes-upgrade-410576'/>
	I0731 17:51:02.431196   56427 main.go:141] libmachine: (kubernetes-upgrade-410576)       <model type='virtio'/>
	I0731 17:51:02.431207   56427 main.go:141] libmachine: (kubernetes-upgrade-410576)     </interface>
	I0731 17:51:02.431218   56427 main.go:141] libmachine: (kubernetes-upgrade-410576)     <interface type='network'>
	I0731 17:51:02.431229   56427 main.go:141] libmachine: (kubernetes-upgrade-410576)       <source network='default'/>
	I0731 17:51:02.431241   56427 main.go:141] libmachine: (kubernetes-upgrade-410576)       <model type='virtio'/>
	I0731 17:51:02.431253   56427 main.go:141] libmachine: (kubernetes-upgrade-410576)     </interface>
	I0731 17:51:02.431263   56427 main.go:141] libmachine: (kubernetes-upgrade-410576)     <serial type='pty'>
	I0731 17:51:02.431273   56427 main.go:141] libmachine: (kubernetes-upgrade-410576)       <target port='0'/>
	I0731 17:51:02.431283   56427 main.go:141] libmachine: (kubernetes-upgrade-410576)     </serial>
	I0731 17:51:02.431298   56427 main.go:141] libmachine: (kubernetes-upgrade-410576)     <console type='pty'>
	I0731 17:51:02.431318   56427 main.go:141] libmachine: (kubernetes-upgrade-410576)       <target type='serial' port='0'/>
	I0731 17:51:02.431335   56427 main.go:141] libmachine: (kubernetes-upgrade-410576)     </console>
	I0731 17:51:02.431360   56427 main.go:141] libmachine: (kubernetes-upgrade-410576)     <rng model='virtio'>
	I0731 17:51:02.431374   56427 main.go:141] libmachine: (kubernetes-upgrade-410576)       <backend model='random'>/dev/random</backend>
	I0731 17:51:02.431386   56427 main.go:141] libmachine: (kubernetes-upgrade-410576)     </rng>
	I0731 17:51:02.431395   56427 main.go:141] libmachine: (kubernetes-upgrade-410576)     
	I0731 17:51:02.431403   56427 main.go:141] libmachine: (kubernetes-upgrade-410576)     
	I0731 17:51:02.431414   56427 main.go:141] libmachine: (kubernetes-upgrade-410576)   </devices>
	I0731 17:51:02.431421   56427 main.go:141] libmachine: (kubernetes-upgrade-410576) </domain>
	I0731 17:51:02.431442   56427 main.go:141] libmachine: (kubernetes-upgrade-410576) 
	I0731 17:51:02.436902   56427 main.go:141] libmachine: (kubernetes-upgrade-410576) DBG | domain kubernetes-upgrade-410576 has defined MAC address 52:54:00:2e:92:78 in network default
	I0731 17:51:02.437538   56427 main.go:141] libmachine: (kubernetes-upgrade-410576) Ensuring networks are active...
	I0731 17:51:02.437565   56427 main.go:141] libmachine: (kubernetes-upgrade-410576) DBG | domain kubernetes-upgrade-410576 has defined MAC address 52:54:00:c8:b0:36 in network mk-kubernetes-upgrade-410576
	I0731 17:51:02.439229   56427 main.go:141] libmachine: (kubernetes-upgrade-410576) Ensuring network default is active
	I0731 17:51:02.560069   56427 main.go:141] libmachine: (kubernetes-upgrade-410576) Ensuring network mk-kubernetes-upgrade-410576 is active
	I0731 17:51:03.109719   56427 main.go:141] libmachine: (kubernetes-upgrade-410576) Getting domain xml...
	I0731 17:51:03.110827   56427 main.go:141] libmachine: (kubernetes-upgrade-410576) Creating domain...
	I0731 17:51:04.337969   56427 main.go:141] libmachine: (kubernetes-upgrade-410576) Waiting to get IP...
	I0731 17:51:04.338737   56427 main.go:141] libmachine: (kubernetes-upgrade-410576) DBG | domain kubernetes-upgrade-410576 has defined MAC address 52:54:00:c8:b0:36 in network mk-kubernetes-upgrade-410576
	I0731 17:51:04.339105   56427 main.go:141] libmachine: (kubernetes-upgrade-410576) DBG | unable to find current IP address of domain kubernetes-upgrade-410576 in network mk-kubernetes-upgrade-410576
	I0731 17:51:04.339173   56427 main.go:141] libmachine: (kubernetes-upgrade-410576) DBG | I0731 17:51:04.339084   56450 retry.go:31] will retry after 205.261078ms: waiting for machine to come up
	I0731 17:51:04.545397   56427 main.go:141] libmachine: (kubernetes-upgrade-410576) DBG | domain kubernetes-upgrade-410576 has defined MAC address 52:54:00:c8:b0:36 in network mk-kubernetes-upgrade-410576
	I0731 17:51:04.545935   56427 main.go:141] libmachine: (kubernetes-upgrade-410576) DBG | unable to find current IP address of domain kubernetes-upgrade-410576 in network mk-kubernetes-upgrade-410576
	I0731 17:51:04.545976   56427 main.go:141] libmachine: (kubernetes-upgrade-410576) DBG | I0731 17:51:04.545891   56450 retry.go:31] will retry after 306.002389ms: waiting for machine to come up
	I0731 17:51:04.853439   56427 main.go:141] libmachine: (kubernetes-upgrade-410576) DBG | domain kubernetes-upgrade-410576 has defined MAC address 52:54:00:c8:b0:36 in network mk-kubernetes-upgrade-410576
	I0731 17:51:04.853972   56427 main.go:141] libmachine: (kubernetes-upgrade-410576) DBG | unable to find current IP address of domain kubernetes-upgrade-410576 in network mk-kubernetes-upgrade-410576
	I0731 17:51:04.853999   56427 main.go:141] libmachine: (kubernetes-upgrade-410576) DBG | I0731 17:51:04.853926   56450 retry.go:31] will retry after 295.91162ms: waiting for machine to come up
	I0731 17:51:05.151084   56427 main.go:141] libmachine: (kubernetes-upgrade-410576) DBG | domain kubernetes-upgrade-410576 has defined MAC address 52:54:00:c8:b0:36 in network mk-kubernetes-upgrade-410576
	I0731 17:51:05.151534   56427 main.go:141] libmachine: (kubernetes-upgrade-410576) DBG | unable to find current IP address of domain kubernetes-upgrade-410576 in network mk-kubernetes-upgrade-410576
	I0731 17:51:05.151557   56427 main.go:141] libmachine: (kubernetes-upgrade-410576) DBG | I0731 17:51:05.151485   56450 retry.go:31] will retry after 455.10925ms: waiting for machine to come up
	I0731 17:51:05.608163   56427 main.go:141] libmachine: (kubernetes-upgrade-410576) DBG | domain kubernetes-upgrade-410576 has defined MAC address 52:54:00:c8:b0:36 in network mk-kubernetes-upgrade-410576
	I0731 17:51:05.608706   56427 main.go:141] libmachine: (kubernetes-upgrade-410576) DBG | unable to find current IP address of domain kubernetes-upgrade-410576 in network mk-kubernetes-upgrade-410576
	I0731 17:51:05.608734   56427 main.go:141] libmachine: (kubernetes-upgrade-410576) DBG | I0731 17:51:05.608655   56450 retry.go:31] will retry after 687.005549ms: waiting for machine to come up
	I0731 17:51:06.297772   56427 main.go:141] libmachine: (kubernetes-upgrade-410576) DBG | domain kubernetes-upgrade-410576 has defined MAC address 52:54:00:c8:b0:36 in network mk-kubernetes-upgrade-410576
	I0731 17:51:06.298262   56427 main.go:141] libmachine: (kubernetes-upgrade-410576) DBG | unable to find current IP address of domain kubernetes-upgrade-410576 in network mk-kubernetes-upgrade-410576
	I0731 17:51:06.298287   56427 main.go:141] libmachine: (kubernetes-upgrade-410576) DBG | I0731 17:51:06.298224   56450 retry.go:31] will retry after 909.492058ms: waiting for machine to come up
	I0731 17:51:07.209715   56427 main.go:141] libmachine: (kubernetes-upgrade-410576) DBG | domain kubernetes-upgrade-410576 has defined MAC address 52:54:00:c8:b0:36 in network mk-kubernetes-upgrade-410576
	I0731 17:51:07.210254   56427 main.go:141] libmachine: (kubernetes-upgrade-410576) DBG | unable to find current IP address of domain kubernetes-upgrade-410576 in network mk-kubernetes-upgrade-410576
	I0731 17:51:07.210281   56427 main.go:141] libmachine: (kubernetes-upgrade-410576) DBG | I0731 17:51:07.210189   56450 retry.go:31] will retry after 1.003385313s: waiting for machine to come up
	I0731 17:51:08.215280   56427 main.go:141] libmachine: (kubernetes-upgrade-410576) DBG | domain kubernetes-upgrade-410576 has defined MAC address 52:54:00:c8:b0:36 in network mk-kubernetes-upgrade-410576
	I0731 17:51:08.215802   56427 main.go:141] libmachine: (kubernetes-upgrade-410576) DBG | unable to find current IP address of domain kubernetes-upgrade-410576 in network mk-kubernetes-upgrade-410576
	I0731 17:51:08.215838   56427 main.go:141] libmachine: (kubernetes-upgrade-410576) DBG | I0731 17:51:08.215738   56450 retry.go:31] will retry after 1.035411856s: waiting for machine to come up
	I0731 17:51:09.252941   56427 main.go:141] libmachine: (kubernetes-upgrade-410576) DBG | domain kubernetes-upgrade-410576 has defined MAC address 52:54:00:c8:b0:36 in network mk-kubernetes-upgrade-410576
	I0731 17:51:09.253379   56427 main.go:141] libmachine: (kubernetes-upgrade-410576) DBG | unable to find current IP address of domain kubernetes-upgrade-410576 in network mk-kubernetes-upgrade-410576
	I0731 17:51:09.253414   56427 main.go:141] libmachine: (kubernetes-upgrade-410576) DBG | I0731 17:51:09.253335   56450 retry.go:31] will retry after 1.653762005s: waiting for machine to come up
	I0731 17:51:10.909129   56427 main.go:141] libmachine: (kubernetes-upgrade-410576) DBG | domain kubernetes-upgrade-410576 has defined MAC address 52:54:00:c8:b0:36 in network mk-kubernetes-upgrade-410576
	I0731 17:51:10.909554   56427 main.go:141] libmachine: (kubernetes-upgrade-410576) DBG | unable to find current IP address of domain kubernetes-upgrade-410576 in network mk-kubernetes-upgrade-410576
	I0731 17:51:10.909581   56427 main.go:141] libmachine: (kubernetes-upgrade-410576) DBG | I0731 17:51:10.909514   56450 retry.go:31] will retry after 2.17864086s: waiting for machine to come up
	I0731 17:51:13.090114   56427 main.go:141] libmachine: (kubernetes-upgrade-410576) DBG | domain kubernetes-upgrade-410576 has defined MAC address 52:54:00:c8:b0:36 in network mk-kubernetes-upgrade-410576
	I0731 17:51:13.090557   56427 main.go:141] libmachine: (kubernetes-upgrade-410576) DBG | unable to find current IP address of domain kubernetes-upgrade-410576 in network mk-kubernetes-upgrade-410576
	I0731 17:51:13.090592   56427 main.go:141] libmachine: (kubernetes-upgrade-410576) DBG | I0731 17:51:13.090522   56450 retry.go:31] will retry after 2.178766092s: waiting for machine to come up
	I0731 17:51:15.270756   56427 main.go:141] libmachine: (kubernetes-upgrade-410576) DBG | domain kubernetes-upgrade-410576 has defined MAC address 52:54:00:c8:b0:36 in network mk-kubernetes-upgrade-410576
	I0731 17:51:15.271234   56427 main.go:141] libmachine: (kubernetes-upgrade-410576) DBG | unable to find current IP address of domain kubernetes-upgrade-410576 in network mk-kubernetes-upgrade-410576
	I0731 17:51:15.271265   56427 main.go:141] libmachine: (kubernetes-upgrade-410576) DBG | I0731 17:51:15.271182   56450 retry.go:31] will retry after 3.281742986s: waiting for machine to come up
	I0731 17:51:18.555208   56427 main.go:141] libmachine: (kubernetes-upgrade-410576) DBG | domain kubernetes-upgrade-410576 has defined MAC address 52:54:00:c8:b0:36 in network mk-kubernetes-upgrade-410576
	I0731 17:51:18.555787   56427 main.go:141] libmachine: (kubernetes-upgrade-410576) DBG | unable to find current IP address of domain kubernetes-upgrade-410576 in network mk-kubernetes-upgrade-410576
	I0731 17:51:18.555812   56427 main.go:141] libmachine: (kubernetes-upgrade-410576) DBG | I0731 17:51:18.555747   56450 retry.go:31] will retry after 4.295608124s: waiting for machine to come up
	I0731 17:51:22.853238   56427 main.go:141] libmachine: (kubernetes-upgrade-410576) DBG | domain kubernetes-upgrade-410576 has defined MAC address 52:54:00:c8:b0:36 in network mk-kubernetes-upgrade-410576
	I0731 17:51:22.853690   56427 main.go:141] libmachine: (kubernetes-upgrade-410576) Found IP for machine: 192.168.61.234
	I0731 17:51:22.853720   56427 main.go:141] libmachine: (kubernetes-upgrade-410576) DBG | domain kubernetes-upgrade-410576 has current primary IP address 192.168.61.234 and MAC address 52:54:00:c8:b0:36 in network mk-kubernetes-upgrade-410576
	I0731 17:51:22.853730   56427 main.go:141] libmachine: (kubernetes-upgrade-410576) Reserving static IP address...
	I0731 17:51:22.854092   56427 main.go:141] libmachine: (kubernetes-upgrade-410576) DBG | unable to find host DHCP lease matching {name: "kubernetes-upgrade-410576", mac: "52:54:00:c8:b0:36", ip: "192.168.61.234"} in network mk-kubernetes-upgrade-410576
	I0731 17:51:22.927347   56427 main.go:141] libmachine: (kubernetes-upgrade-410576) Reserved static IP address: 192.168.61.234
	I0731 17:51:22.927378   56427 main.go:141] libmachine: (kubernetes-upgrade-410576) Waiting for SSH to be available...
	I0731 17:51:22.927388   56427 main.go:141] libmachine: (kubernetes-upgrade-410576) DBG | Getting to WaitForSSH function...
	I0731 17:51:22.930281   56427 main.go:141] libmachine: (kubernetes-upgrade-410576) DBG | domain kubernetes-upgrade-410576 has defined MAC address 52:54:00:c8:b0:36 in network mk-kubernetes-upgrade-410576
	I0731 17:51:22.930783   56427 main.go:141] libmachine: (kubernetes-upgrade-410576) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:b0:36", ip: ""} in network mk-kubernetes-upgrade-410576: {Iface:virbr2 ExpiryTime:2024-07-31 18:51:16 +0000 UTC Type:0 Mac:52:54:00:c8:b0:36 Iaid: IPaddr:192.168.61.234 Prefix:24 Hostname:minikube Clientid:01:52:54:00:c8:b0:36}
	I0731 17:51:22.930824   56427 main.go:141] libmachine: (kubernetes-upgrade-410576) DBG | domain kubernetes-upgrade-410576 has defined IP address 192.168.61.234 and MAC address 52:54:00:c8:b0:36 in network mk-kubernetes-upgrade-410576
	I0731 17:51:22.930907   56427 main.go:141] libmachine: (kubernetes-upgrade-410576) DBG | Using SSH client type: external
	I0731 17:51:22.930933   56427 main.go:141] libmachine: (kubernetes-upgrade-410576) DBG | Using SSH private key: /home/jenkins/minikube-integration/19349-8084/.minikube/machines/kubernetes-upgrade-410576/id_rsa (-rw-------)
	I0731 17:51:22.930973   56427 main.go:141] libmachine: (kubernetes-upgrade-410576) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.234 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19349-8084/.minikube/machines/kubernetes-upgrade-410576/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0731 17:51:22.930991   56427 main.go:141] libmachine: (kubernetes-upgrade-410576) DBG | About to run SSH command:
	I0731 17:51:22.931012   56427 main.go:141] libmachine: (kubernetes-upgrade-410576) DBG | exit 0
	I0731 17:51:23.054941   56427 main.go:141] libmachine: (kubernetes-upgrade-410576) DBG | SSH cmd err, output: <nil>: 
	I0731 17:51:23.055209   56427 main.go:141] libmachine: (kubernetes-upgrade-410576) KVM machine creation complete!
	I0731 17:51:23.055594   56427 main.go:141] libmachine: (kubernetes-upgrade-410576) Calling .GetConfigRaw
	I0731 17:51:23.056106   56427 main.go:141] libmachine: (kubernetes-upgrade-410576) Calling .DriverName
	I0731 17:51:23.056291   56427 main.go:141] libmachine: (kubernetes-upgrade-410576) Calling .DriverName
	I0731 17:51:23.056462   56427 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0731 17:51:23.056480   56427 main.go:141] libmachine: (kubernetes-upgrade-410576) Calling .GetState
	I0731 17:51:23.057685   56427 main.go:141] libmachine: Detecting operating system of created instance...
	I0731 17:51:23.057700   56427 main.go:141] libmachine: Waiting for SSH to be available...
	I0731 17:51:23.057705   56427 main.go:141] libmachine: Getting to WaitForSSH function...
	I0731 17:51:23.057710   56427 main.go:141] libmachine: (kubernetes-upgrade-410576) Calling .GetSSHHostname
	I0731 17:51:23.060030   56427 main.go:141] libmachine: (kubernetes-upgrade-410576) DBG | domain kubernetes-upgrade-410576 has defined MAC address 52:54:00:c8:b0:36 in network mk-kubernetes-upgrade-410576
	I0731 17:51:23.060403   56427 main.go:141] libmachine: (kubernetes-upgrade-410576) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:b0:36", ip: ""} in network mk-kubernetes-upgrade-410576: {Iface:virbr2 ExpiryTime:2024-07-31 18:51:16 +0000 UTC Type:0 Mac:52:54:00:c8:b0:36 Iaid: IPaddr:192.168.61.234 Prefix:24 Hostname:kubernetes-upgrade-410576 Clientid:01:52:54:00:c8:b0:36}
	I0731 17:51:23.060429   56427 main.go:141] libmachine: (kubernetes-upgrade-410576) DBG | domain kubernetes-upgrade-410576 has defined IP address 192.168.61.234 and MAC address 52:54:00:c8:b0:36 in network mk-kubernetes-upgrade-410576
	I0731 17:51:23.060586   56427 main.go:141] libmachine: (kubernetes-upgrade-410576) Calling .GetSSHPort
	I0731 17:51:23.060746   56427 main.go:141] libmachine: (kubernetes-upgrade-410576) Calling .GetSSHKeyPath
	I0731 17:51:23.060982   56427 main.go:141] libmachine: (kubernetes-upgrade-410576) Calling .GetSSHKeyPath
	I0731 17:51:23.061125   56427 main.go:141] libmachine: (kubernetes-upgrade-410576) Calling .GetSSHUsername
	I0731 17:51:23.061333   56427 main.go:141] libmachine: Using SSH client type: native
	I0731 17:51:23.061521   56427 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.234 22 <nil> <nil>}
	I0731 17:51:23.061531   56427 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0731 17:51:23.166362   56427 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0731 17:51:23.166428   56427 main.go:141] libmachine: Detecting the provisioner...
	I0731 17:51:23.166449   56427 main.go:141] libmachine: (kubernetes-upgrade-410576) Calling .GetSSHHostname
	I0731 17:51:23.169113   56427 main.go:141] libmachine: (kubernetes-upgrade-410576) DBG | domain kubernetes-upgrade-410576 has defined MAC address 52:54:00:c8:b0:36 in network mk-kubernetes-upgrade-410576
	I0731 17:51:23.169555   56427 main.go:141] libmachine: (kubernetes-upgrade-410576) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:b0:36", ip: ""} in network mk-kubernetes-upgrade-410576: {Iface:virbr2 ExpiryTime:2024-07-31 18:51:16 +0000 UTC Type:0 Mac:52:54:00:c8:b0:36 Iaid: IPaddr:192.168.61.234 Prefix:24 Hostname:kubernetes-upgrade-410576 Clientid:01:52:54:00:c8:b0:36}
	I0731 17:51:23.169597   56427 main.go:141] libmachine: (kubernetes-upgrade-410576) DBG | domain kubernetes-upgrade-410576 has defined IP address 192.168.61.234 and MAC address 52:54:00:c8:b0:36 in network mk-kubernetes-upgrade-410576
	I0731 17:51:23.169727   56427 main.go:141] libmachine: (kubernetes-upgrade-410576) Calling .GetSSHPort
	I0731 17:51:23.169892   56427 main.go:141] libmachine: (kubernetes-upgrade-410576) Calling .GetSSHKeyPath
	I0731 17:51:23.170104   56427 main.go:141] libmachine: (kubernetes-upgrade-410576) Calling .GetSSHKeyPath
	I0731 17:51:23.170286   56427 main.go:141] libmachine: (kubernetes-upgrade-410576) Calling .GetSSHUsername
	I0731 17:51:23.170484   56427 main.go:141] libmachine: Using SSH client type: native
	I0731 17:51:23.170682   56427 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.234 22 <nil> <nil>}
	I0731 17:51:23.170695   56427 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0731 17:51:23.271254   56427 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0731 17:51:23.271346   56427 main.go:141] libmachine: found compatible host: buildroot
	I0731 17:51:23.271358   56427 main.go:141] libmachine: Provisioning with buildroot...
	I0731 17:51:23.271368   56427 main.go:141] libmachine: (kubernetes-upgrade-410576) Calling .GetMachineName
	I0731 17:51:23.271626   56427 buildroot.go:166] provisioning hostname "kubernetes-upgrade-410576"
	I0731 17:51:23.271655   56427 main.go:141] libmachine: (kubernetes-upgrade-410576) Calling .GetMachineName
	I0731 17:51:23.271884   56427 main.go:141] libmachine: (kubernetes-upgrade-410576) Calling .GetSSHHostname
	I0731 17:51:23.274591   56427 main.go:141] libmachine: (kubernetes-upgrade-410576) DBG | domain kubernetes-upgrade-410576 has defined MAC address 52:54:00:c8:b0:36 in network mk-kubernetes-upgrade-410576
	I0731 17:51:23.274905   56427 main.go:141] libmachine: (kubernetes-upgrade-410576) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:b0:36", ip: ""} in network mk-kubernetes-upgrade-410576: {Iface:virbr2 ExpiryTime:2024-07-31 18:51:16 +0000 UTC Type:0 Mac:52:54:00:c8:b0:36 Iaid: IPaddr:192.168.61.234 Prefix:24 Hostname:kubernetes-upgrade-410576 Clientid:01:52:54:00:c8:b0:36}
	I0731 17:51:23.274944   56427 main.go:141] libmachine: (kubernetes-upgrade-410576) DBG | domain kubernetes-upgrade-410576 has defined IP address 192.168.61.234 and MAC address 52:54:00:c8:b0:36 in network mk-kubernetes-upgrade-410576
	I0731 17:51:23.275065   56427 main.go:141] libmachine: (kubernetes-upgrade-410576) Calling .GetSSHPort
	I0731 17:51:23.275260   56427 main.go:141] libmachine: (kubernetes-upgrade-410576) Calling .GetSSHKeyPath
	I0731 17:51:23.275406   56427 main.go:141] libmachine: (kubernetes-upgrade-410576) Calling .GetSSHKeyPath
	I0731 17:51:23.275569   56427 main.go:141] libmachine: (kubernetes-upgrade-410576) Calling .GetSSHUsername
	I0731 17:51:23.275716   56427 main.go:141] libmachine: Using SSH client type: native
	I0731 17:51:23.275871   56427 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.234 22 <nil> <nil>}
	I0731 17:51:23.275884   56427 main.go:141] libmachine: About to run SSH command:
	sudo hostname kubernetes-upgrade-410576 && echo "kubernetes-upgrade-410576" | sudo tee /etc/hostname
	I0731 17:51:23.392351   56427 main.go:141] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-410576
	
	I0731 17:51:23.392380   56427 main.go:141] libmachine: (kubernetes-upgrade-410576) Calling .GetSSHHostname
	I0731 17:51:23.395072   56427 main.go:141] libmachine: (kubernetes-upgrade-410576) DBG | domain kubernetes-upgrade-410576 has defined MAC address 52:54:00:c8:b0:36 in network mk-kubernetes-upgrade-410576
	I0731 17:51:23.395385   56427 main.go:141] libmachine: (kubernetes-upgrade-410576) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:b0:36", ip: ""} in network mk-kubernetes-upgrade-410576: {Iface:virbr2 ExpiryTime:2024-07-31 18:51:16 +0000 UTC Type:0 Mac:52:54:00:c8:b0:36 Iaid: IPaddr:192.168.61.234 Prefix:24 Hostname:kubernetes-upgrade-410576 Clientid:01:52:54:00:c8:b0:36}
	I0731 17:51:23.395428   56427 main.go:141] libmachine: (kubernetes-upgrade-410576) DBG | domain kubernetes-upgrade-410576 has defined IP address 192.168.61.234 and MAC address 52:54:00:c8:b0:36 in network mk-kubernetes-upgrade-410576
	I0731 17:51:23.395536   56427 main.go:141] libmachine: (kubernetes-upgrade-410576) Calling .GetSSHPort
	I0731 17:51:23.395715   56427 main.go:141] libmachine: (kubernetes-upgrade-410576) Calling .GetSSHKeyPath
	I0731 17:51:23.395859   56427 main.go:141] libmachine: (kubernetes-upgrade-410576) Calling .GetSSHKeyPath
	I0731 17:51:23.396021   56427 main.go:141] libmachine: (kubernetes-upgrade-410576) Calling .GetSSHUsername
	I0731 17:51:23.396149   56427 main.go:141] libmachine: Using SSH client type: native
	I0731 17:51:23.396364   56427 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.234 22 <nil> <nil>}
	I0731 17:51:23.396382   56427 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\skubernetes-upgrade-410576' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 kubernetes-upgrade-410576/g' /etc/hosts;
				else 
					echo '127.0.1.1 kubernetes-upgrade-410576' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0731 17:51:23.507161   56427 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0731 17:51:23.507191   56427 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19349-8084/.minikube CaCertPath:/home/jenkins/minikube-integration/19349-8084/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19349-8084/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19349-8084/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19349-8084/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19349-8084/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19349-8084/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19349-8084/.minikube}
	I0731 17:51:23.507209   56427 buildroot.go:174] setting up certificates
	I0731 17:51:23.507219   56427 provision.go:84] configureAuth start
	I0731 17:51:23.507227   56427 main.go:141] libmachine: (kubernetes-upgrade-410576) Calling .GetMachineName
	I0731 17:51:23.507491   56427 main.go:141] libmachine: (kubernetes-upgrade-410576) Calling .GetIP
	I0731 17:51:23.509641   56427 main.go:141] libmachine: (kubernetes-upgrade-410576) DBG | domain kubernetes-upgrade-410576 has defined MAC address 52:54:00:c8:b0:36 in network mk-kubernetes-upgrade-410576
	I0731 17:51:23.510023   56427 main.go:141] libmachine: (kubernetes-upgrade-410576) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:b0:36", ip: ""} in network mk-kubernetes-upgrade-410576: {Iface:virbr2 ExpiryTime:2024-07-31 18:51:16 +0000 UTC Type:0 Mac:52:54:00:c8:b0:36 Iaid: IPaddr:192.168.61.234 Prefix:24 Hostname:kubernetes-upgrade-410576 Clientid:01:52:54:00:c8:b0:36}
	I0731 17:51:23.510048   56427 main.go:141] libmachine: (kubernetes-upgrade-410576) DBG | domain kubernetes-upgrade-410576 has defined IP address 192.168.61.234 and MAC address 52:54:00:c8:b0:36 in network mk-kubernetes-upgrade-410576
	I0731 17:51:23.510169   56427 main.go:141] libmachine: (kubernetes-upgrade-410576) Calling .GetSSHHostname
	I0731 17:51:23.512374   56427 main.go:141] libmachine: (kubernetes-upgrade-410576) DBG | domain kubernetes-upgrade-410576 has defined MAC address 52:54:00:c8:b0:36 in network mk-kubernetes-upgrade-410576
	I0731 17:51:23.512689   56427 main.go:141] libmachine: (kubernetes-upgrade-410576) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:b0:36", ip: ""} in network mk-kubernetes-upgrade-410576: {Iface:virbr2 ExpiryTime:2024-07-31 18:51:16 +0000 UTC Type:0 Mac:52:54:00:c8:b0:36 Iaid: IPaddr:192.168.61.234 Prefix:24 Hostname:kubernetes-upgrade-410576 Clientid:01:52:54:00:c8:b0:36}
	I0731 17:51:23.512731   56427 main.go:141] libmachine: (kubernetes-upgrade-410576) DBG | domain kubernetes-upgrade-410576 has defined IP address 192.168.61.234 and MAC address 52:54:00:c8:b0:36 in network mk-kubernetes-upgrade-410576
	I0731 17:51:23.512829   56427 provision.go:143] copyHostCerts
	I0731 17:51:23.512890   56427 exec_runner.go:144] found /home/jenkins/minikube-integration/19349-8084/.minikube/ca.pem, removing ...
	I0731 17:51:23.512903   56427 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19349-8084/.minikube/ca.pem
	I0731 17:51:23.512967   56427 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19349-8084/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19349-8084/.minikube/ca.pem (1082 bytes)
	I0731 17:51:23.513222   56427 exec_runner.go:144] found /home/jenkins/minikube-integration/19349-8084/.minikube/cert.pem, removing ...
	I0731 17:51:23.513244   56427 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19349-8084/.minikube/cert.pem
	I0731 17:51:23.513282   56427 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19349-8084/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19349-8084/.minikube/cert.pem (1123 bytes)
	I0731 17:51:23.513364   56427 exec_runner.go:144] found /home/jenkins/minikube-integration/19349-8084/.minikube/key.pem, removing ...
	I0731 17:51:23.513374   56427 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19349-8084/.minikube/key.pem
	I0731 17:51:23.513410   56427 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19349-8084/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19349-8084/.minikube/key.pem (1675 bytes)
	I0731 17:51:23.513496   56427 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19349-8084/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19349-8084/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19349-8084/.minikube/certs/ca-key.pem org=jenkins.kubernetes-upgrade-410576 san=[127.0.0.1 192.168.61.234 kubernetes-upgrade-410576 localhost minikube]
	I0731 17:51:23.840887   56427 provision.go:177] copyRemoteCerts
	I0731 17:51:23.840946   56427 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0731 17:51:23.840968   56427 main.go:141] libmachine: (kubernetes-upgrade-410576) Calling .GetSSHHostname
	I0731 17:51:23.843890   56427 main.go:141] libmachine: (kubernetes-upgrade-410576) DBG | domain kubernetes-upgrade-410576 has defined MAC address 52:54:00:c8:b0:36 in network mk-kubernetes-upgrade-410576
	I0731 17:51:23.844192   56427 main.go:141] libmachine: (kubernetes-upgrade-410576) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:b0:36", ip: ""} in network mk-kubernetes-upgrade-410576: {Iface:virbr2 ExpiryTime:2024-07-31 18:51:16 +0000 UTC Type:0 Mac:52:54:00:c8:b0:36 Iaid: IPaddr:192.168.61.234 Prefix:24 Hostname:kubernetes-upgrade-410576 Clientid:01:52:54:00:c8:b0:36}
	I0731 17:51:23.844224   56427 main.go:141] libmachine: (kubernetes-upgrade-410576) DBG | domain kubernetes-upgrade-410576 has defined IP address 192.168.61.234 and MAC address 52:54:00:c8:b0:36 in network mk-kubernetes-upgrade-410576
	I0731 17:51:23.844352   56427 main.go:141] libmachine: (kubernetes-upgrade-410576) Calling .GetSSHPort
	I0731 17:51:23.844562   56427 main.go:141] libmachine: (kubernetes-upgrade-410576) Calling .GetSSHKeyPath
	I0731 17:51:23.844709   56427 main.go:141] libmachine: (kubernetes-upgrade-410576) Calling .GetSSHUsername
	I0731 17:51:23.844861   56427 sshutil.go:53] new ssh client: &{IP:192.168.61.234 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19349-8084/.minikube/machines/kubernetes-upgrade-410576/id_rsa Username:docker}
	I0731 17:51:23.925200   56427 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19349-8084/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I0731 17:51:23.948270   56427 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19349-8084/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0731 17:51:23.970083   56427 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19349-8084/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0731 17:51:24.003416   56427 provision.go:87] duration metric: took 496.183652ms to configureAuth
	I0731 17:51:24.003445   56427 buildroot.go:189] setting minikube options for container-runtime
	I0731 17:51:24.003629   56427 config.go:182] Loaded profile config "kubernetes-upgrade-410576": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0731 17:51:24.003718   56427 main.go:141] libmachine: (kubernetes-upgrade-410576) Calling .GetSSHHostname
	I0731 17:51:24.006511   56427 main.go:141] libmachine: (kubernetes-upgrade-410576) DBG | domain kubernetes-upgrade-410576 has defined MAC address 52:54:00:c8:b0:36 in network mk-kubernetes-upgrade-410576
	I0731 17:51:24.006956   56427 main.go:141] libmachine: (kubernetes-upgrade-410576) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:b0:36", ip: ""} in network mk-kubernetes-upgrade-410576: {Iface:virbr2 ExpiryTime:2024-07-31 18:51:16 +0000 UTC Type:0 Mac:52:54:00:c8:b0:36 Iaid: IPaddr:192.168.61.234 Prefix:24 Hostname:kubernetes-upgrade-410576 Clientid:01:52:54:00:c8:b0:36}
	I0731 17:51:24.006986   56427 main.go:141] libmachine: (kubernetes-upgrade-410576) DBG | domain kubernetes-upgrade-410576 has defined IP address 192.168.61.234 and MAC address 52:54:00:c8:b0:36 in network mk-kubernetes-upgrade-410576
	I0731 17:51:24.007139   56427 main.go:141] libmachine: (kubernetes-upgrade-410576) Calling .GetSSHPort
	I0731 17:51:24.007383   56427 main.go:141] libmachine: (kubernetes-upgrade-410576) Calling .GetSSHKeyPath
	I0731 17:51:24.007577   56427 main.go:141] libmachine: (kubernetes-upgrade-410576) Calling .GetSSHKeyPath
	I0731 17:51:24.007723   56427 main.go:141] libmachine: (kubernetes-upgrade-410576) Calling .GetSSHUsername
	I0731 17:51:24.007861   56427 main.go:141] libmachine: Using SSH client type: native
	I0731 17:51:24.008016   56427 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.234 22 <nil> <nil>}
	I0731 17:51:24.008030   56427 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0731 17:51:24.262915   56427 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0731 17:51:24.262945   56427 main.go:141] libmachine: Checking connection to Docker...
	I0731 17:51:24.262955   56427 main.go:141] libmachine: (kubernetes-upgrade-410576) Calling .GetURL
	I0731 17:51:24.264191   56427 main.go:141] libmachine: (kubernetes-upgrade-410576) DBG | Using libvirt version 6000000
	I0731 17:51:24.266565   56427 main.go:141] libmachine: (kubernetes-upgrade-410576) DBG | domain kubernetes-upgrade-410576 has defined MAC address 52:54:00:c8:b0:36 in network mk-kubernetes-upgrade-410576
	I0731 17:51:24.266864   56427 main.go:141] libmachine: (kubernetes-upgrade-410576) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:b0:36", ip: ""} in network mk-kubernetes-upgrade-410576: {Iface:virbr2 ExpiryTime:2024-07-31 18:51:16 +0000 UTC Type:0 Mac:52:54:00:c8:b0:36 Iaid: IPaddr:192.168.61.234 Prefix:24 Hostname:kubernetes-upgrade-410576 Clientid:01:52:54:00:c8:b0:36}
	I0731 17:51:24.266887   56427 main.go:141] libmachine: (kubernetes-upgrade-410576) DBG | domain kubernetes-upgrade-410576 has defined IP address 192.168.61.234 and MAC address 52:54:00:c8:b0:36 in network mk-kubernetes-upgrade-410576
	I0731 17:51:24.267081   56427 main.go:141] libmachine: Docker is up and running!
	I0731 17:51:24.267097   56427 main.go:141] libmachine: Reticulating splines...
	I0731 17:51:24.267103   56427 client.go:171] duration metric: took 22.46174966s to LocalClient.Create
	I0731 17:51:24.267142   56427 start.go:167] duration metric: took 22.461822562s to libmachine.API.Create "kubernetes-upgrade-410576"
	I0731 17:51:24.267162   56427 start.go:293] postStartSetup for "kubernetes-upgrade-410576" (driver="kvm2")
	I0731 17:51:24.267172   56427 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0731 17:51:24.267195   56427 main.go:141] libmachine: (kubernetes-upgrade-410576) Calling .DriverName
	I0731 17:51:24.267450   56427 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0731 17:51:24.267476   56427 main.go:141] libmachine: (kubernetes-upgrade-410576) Calling .GetSSHHostname
	I0731 17:51:24.269621   56427 main.go:141] libmachine: (kubernetes-upgrade-410576) DBG | domain kubernetes-upgrade-410576 has defined MAC address 52:54:00:c8:b0:36 in network mk-kubernetes-upgrade-410576
	I0731 17:51:24.269958   56427 main.go:141] libmachine: (kubernetes-upgrade-410576) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:b0:36", ip: ""} in network mk-kubernetes-upgrade-410576: {Iface:virbr2 ExpiryTime:2024-07-31 18:51:16 +0000 UTC Type:0 Mac:52:54:00:c8:b0:36 Iaid: IPaddr:192.168.61.234 Prefix:24 Hostname:kubernetes-upgrade-410576 Clientid:01:52:54:00:c8:b0:36}
	I0731 17:51:24.269990   56427 main.go:141] libmachine: (kubernetes-upgrade-410576) DBG | domain kubernetes-upgrade-410576 has defined IP address 192.168.61.234 and MAC address 52:54:00:c8:b0:36 in network mk-kubernetes-upgrade-410576
	I0731 17:51:24.270113   56427 main.go:141] libmachine: (kubernetes-upgrade-410576) Calling .GetSSHPort
	I0731 17:51:24.270271   56427 main.go:141] libmachine: (kubernetes-upgrade-410576) Calling .GetSSHKeyPath
	I0731 17:51:24.270440   56427 main.go:141] libmachine: (kubernetes-upgrade-410576) Calling .GetSSHUsername
	I0731 17:51:24.270580   56427 sshutil.go:53] new ssh client: &{IP:192.168.61.234 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19349-8084/.minikube/machines/kubernetes-upgrade-410576/id_rsa Username:docker}
	I0731 17:51:24.348684   56427 ssh_runner.go:195] Run: cat /etc/os-release
	I0731 17:51:24.352480   56427 info.go:137] Remote host: Buildroot 2023.02.9
	I0731 17:51:24.352501   56427 filesync.go:126] Scanning /home/jenkins/minikube-integration/19349-8084/.minikube/addons for local assets ...
	I0731 17:51:24.352569   56427 filesync.go:126] Scanning /home/jenkins/minikube-integration/19349-8084/.minikube/files for local assets ...
	I0731 17:51:24.352654   56427 filesync.go:149] local asset: /home/jenkins/minikube-integration/19349-8084/.minikube/files/etc/ssl/certs/152592.pem -> 152592.pem in /etc/ssl/certs
	I0731 17:51:24.352756   56427 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0731 17:51:24.361588   56427 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19349-8084/.minikube/files/etc/ssl/certs/152592.pem --> /etc/ssl/certs/152592.pem (1708 bytes)
	I0731 17:51:24.383782   56427 start.go:296] duration metric: took 116.581046ms for postStartSetup
	I0731 17:51:24.383835   56427 main.go:141] libmachine: (kubernetes-upgrade-410576) Calling .GetConfigRaw
	I0731 17:51:24.384391   56427 main.go:141] libmachine: (kubernetes-upgrade-410576) Calling .GetIP
	I0731 17:51:24.386992   56427 main.go:141] libmachine: (kubernetes-upgrade-410576) DBG | domain kubernetes-upgrade-410576 has defined MAC address 52:54:00:c8:b0:36 in network mk-kubernetes-upgrade-410576
	I0731 17:51:24.387373   56427 main.go:141] libmachine: (kubernetes-upgrade-410576) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:b0:36", ip: ""} in network mk-kubernetes-upgrade-410576: {Iface:virbr2 ExpiryTime:2024-07-31 18:51:16 +0000 UTC Type:0 Mac:52:54:00:c8:b0:36 Iaid: IPaddr:192.168.61.234 Prefix:24 Hostname:kubernetes-upgrade-410576 Clientid:01:52:54:00:c8:b0:36}
	I0731 17:51:24.387403   56427 main.go:141] libmachine: (kubernetes-upgrade-410576) DBG | domain kubernetes-upgrade-410576 has defined IP address 192.168.61.234 and MAC address 52:54:00:c8:b0:36 in network mk-kubernetes-upgrade-410576
	I0731 17:51:24.387678   56427 profile.go:143] Saving config to /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/kubernetes-upgrade-410576/config.json ...
	I0731 17:51:24.387917   56427 start.go:128] duration metric: took 22.60032935s to createHost
	I0731 17:51:24.387951   56427 main.go:141] libmachine: (kubernetes-upgrade-410576) Calling .GetSSHHostname
	I0731 17:51:24.390440   56427 main.go:141] libmachine: (kubernetes-upgrade-410576) DBG | domain kubernetes-upgrade-410576 has defined MAC address 52:54:00:c8:b0:36 in network mk-kubernetes-upgrade-410576
	I0731 17:51:24.390784   56427 main.go:141] libmachine: (kubernetes-upgrade-410576) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:b0:36", ip: ""} in network mk-kubernetes-upgrade-410576: {Iface:virbr2 ExpiryTime:2024-07-31 18:51:16 +0000 UTC Type:0 Mac:52:54:00:c8:b0:36 Iaid: IPaddr:192.168.61.234 Prefix:24 Hostname:kubernetes-upgrade-410576 Clientid:01:52:54:00:c8:b0:36}
	I0731 17:51:24.390812   56427 main.go:141] libmachine: (kubernetes-upgrade-410576) DBG | domain kubernetes-upgrade-410576 has defined IP address 192.168.61.234 and MAC address 52:54:00:c8:b0:36 in network mk-kubernetes-upgrade-410576
	I0731 17:51:24.390973   56427 main.go:141] libmachine: (kubernetes-upgrade-410576) Calling .GetSSHPort
	I0731 17:51:24.391191   56427 main.go:141] libmachine: (kubernetes-upgrade-410576) Calling .GetSSHKeyPath
	I0731 17:51:24.391387   56427 main.go:141] libmachine: (kubernetes-upgrade-410576) Calling .GetSSHKeyPath
	I0731 17:51:24.391563   56427 main.go:141] libmachine: (kubernetes-upgrade-410576) Calling .GetSSHUsername
	I0731 17:51:24.391770   56427 main.go:141] libmachine: Using SSH client type: native
	I0731 17:51:24.391971   56427 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.234 22 <nil> <nil>}
	I0731 17:51:24.391986   56427 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0731 17:51:24.499544   56427 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722448284.475646222
	
	I0731 17:51:24.499577   56427 fix.go:216] guest clock: 1722448284.475646222
	I0731 17:51:24.499588   56427 fix.go:229] Guest: 2024-07-31 17:51:24.475646222 +0000 UTC Remote: 2024-07-31 17:51:24.387935276 +0000 UTC m=+22.710499847 (delta=87.710946ms)
	I0731 17:51:24.499655   56427 fix.go:200] guest clock delta is within tolerance: 87.710946ms
	I0731 17:51:24.499663   56427 start.go:83] releasing machines lock for "kubernetes-upgrade-410576", held for 22.712151974s
	I0731 17:51:24.499693   56427 main.go:141] libmachine: (kubernetes-upgrade-410576) Calling .DriverName
	I0731 17:51:24.499964   56427 main.go:141] libmachine: (kubernetes-upgrade-410576) Calling .GetIP
	I0731 17:51:24.502928   56427 main.go:141] libmachine: (kubernetes-upgrade-410576) DBG | domain kubernetes-upgrade-410576 has defined MAC address 52:54:00:c8:b0:36 in network mk-kubernetes-upgrade-410576
	I0731 17:51:24.503356   56427 main.go:141] libmachine: (kubernetes-upgrade-410576) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:b0:36", ip: ""} in network mk-kubernetes-upgrade-410576: {Iface:virbr2 ExpiryTime:2024-07-31 18:51:16 +0000 UTC Type:0 Mac:52:54:00:c8:b0:36 Iaid: IPaddr:192.168.61.234 Prefix:24 Hostname:kubernetes-upgrade-410576 Clientid:01:52:54:00:c8:b0:36}
	I0731 17:51:24.503386   56427 main.go:141] libmachine: (kubernetes-upgrade-410576) DBG | domain kubernetes-upgrade-410576 has defined IP address 192.168.61.234 and MAC address 52:54:00:c8:b0:36 in network mk-kubernetes-upgrade-410576
	I0731 17:51:24.503541   56427 main.go:141] libmachine: (kubernetes-upgrade-410576) Calling .DriverName
	I0731 17:51:24.504134   56427 main.go:141] libmachine: (kubernetes-upgrade-410576) Calling .DriverName
	I0731 17:51:24.504349   56427 main.go:141] libmachine: (kubernetes-upgrade-410576) Calling .DriverName
	I0731 17:51:24.504423   56427 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0731 17:51:24.504460   56427 main.go:141] libmachine: (kubernetes-upgrade-410576) Calling .GetSSHHostname
	I0731 17:51:24.504572   56427 ssh_runner.go:195] Run: cat /version.json
	I0731 17:51:24.504600   56427 main.go:141] libmachine: (kubernetes-upgrade-410576) Calling .GetSSHHostname
	I0731 17:51:24.507284   56427 main.go:141] libmachine: (kubernetes-upgrade-410576) DBG | domain kubernetes-upgrade-410576 has defined MAC address 52:54:00:c8:b0:36 in network mk-kubernetes-upgrade-410576
	I0731 17:51:24.507387   56427 main.go:141] libmachine: (kubernetes-upgrade-410576) DBG | domain kubernetes-upgrade-410576 has defined MAC address 52:54:00:c8:b0:36 in network mk-kubernetes-upgrade-410576
	I0731 17:51:24.507708   56427 main.go:141] libmachine: (kubernetes-upgrade-410576) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:b0:36", ip: ""} in network mk-kubernetes-upgrade-410576: {Iface:virbr2 ExpiryTime:2024-07-31 18:51:16 +0000 UTC Type:0 Mac:52:54:00:c8:b0:36 Iaid: IPaddr:192.168.61.234 Prefix:24 Hostname:kubernetes-upgrade-410576 Clientid:01:52:54:00:c8:b0:36}
	I0731 17:51:24.507740   56427 main.go:141] libmachine: (kubernetes-upgrade-410576) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:b0:36", ip: ""} in network mk-kubernetes-upgrade-410576: {Iface:virbr2 ExpiryTime:2024-07-31 18:51:16 +0000 UTC Type:0 Mac:52:54:00:c8:b0:36 Iaid: IPaddr:192.168.61.234 Prefix:24 Hostname:kubernetes-upgrade-410576 Clientid:01:52:54:00:c8:b0:36}
	I0731 17:51:24.507786   56427 main.go:141] libmachine: (kubernetes-upgrade-410576) DBG | domain kubernetes-upgrade-410576 has defined IP address 192.168.61.234 and MAC address 52:54:00:c8:b0:36 in network mk-kubernetes-upgrade-410576
	I0731 17:51:24.507807   56427 main.go:141] libmachine: (kubernetes-upgrade-410576) DBG | domain kubernetes-upgrade-410576 has defined IP address 192.168.61.234 and MAC address 52:54:00:c8:b0:36 in network mk-kubernetes-upgrade-410576
	I0731 17:51:24.507994   56427 main.go:141] libmachine: (kubernetes-upgrade-410576) Calling .GetSSHPort
	I0731 17:51:24.508137   56427 main.go:141] libmachine: (kubernetes-upgrade-410576) Calling .GetSSHPort
	I0731 17:51:24.508196   56427 main.go:141] libmachine: (kubernetes-upgrade-410576) Calling .GetSSHKeyPath
	I0731 17:51:24.508358   56427 main.go:141] libmachine: (kubernetes-upgrade-410576) Calling .GetSSHKeyPath
	I0731 17:51:24.508360   56427 main.go:141] libmachine: (kubernetes-upgrade-410576) Calling .GetSSHUsername
	I0731 17:51:24.508568   56427 sshutil.go:53] new ssh client: &{IP:192.168.61.234 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19349-8084/.minikube/machines/kubernetes-upgrade-410576/id_rsa Username:docker}
	I0731 17:51:24.508916   56427 main.go:141] libmachine: (kubernetes-upgrade-410576) Calling .GetSSHUsername
	I0731 17:51:24.509108   56427 sshutil.go:53] new ssh client: &{IP:192.168.61.234 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19349-8084/.minikube/machines/kubernetes-upgrade-410576/id_rsa Username:docker}
	I0731 17:51:24.622026   56427 ssh_runner.go:195] Run: systemctl --version
	I0731 17:51:24.628211   56427 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0731 17:51:24.784274   56427 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0731 17:51:24.790782   56427 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0731 17:51:24.790842   56427 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0731 17:51:24.809126   56427 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0731 17:51:24.809151   56427 start.go:495] detecting cgroup driver to use...
	I0731 17:51:24.809220   56427 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0731 17:51:24.826478   56427 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0731 17:51:24.842562   56427 docker.go:217] disabling cri-docker service (if available) ...
	I0731 17:51:24.842633   56427 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0731 17:51:24.861431   56427 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0731 17:51:24.879533   56427 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0731 17:51:25.036440   56427 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0731 17:51:25.211189   56427 docker.go:233] disabling docker service ...
	I0731 17:51:25.211249   56427 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0731 17:51:25.227739   56427 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0731 17:51:25.244687   56427 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0731 17:51:25.392288   56427 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0731 17:51:25.543910   56427 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0731 17:51:25.561898   56427 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0731 17:51:25.583035   56427 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0731 17:51:25.583104   56427 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 17:51:25.594553   56427 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0731 17:51:25.594639   56427 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 17:51:25.607552   56427 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 17:51:25.622685   56427 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 17:51:25.638556   56427 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0731 17:51:25.651065   56427 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0731 17:51:25.661964   56427 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0731 17:51:25.662039   56427 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0731 17:51:25.676436   56427 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0731 17:51:25.688040   56427 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0731 17:51:25.823580   56427 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0731 17:51:25.979245   56427 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0731 17:51:25.979320   56427 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0731 17:51:25.983891   56427 start.go:563] Will wait 60s for crictl version
	I0731 17:51:25.983953   56427 ssh_runner.go:195] Run: which crictl
	I0731 17:51:25.988184   56427 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0731 17:51:26.035393   56427 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0731 17:51:26.035490   56427 ssh_runner.go:195] Run: crio --version
	I0731 17:51:26.067221   56427 ssh_runner.go:195] Run: crio --version
	I0731 17:51:26.099863   56427 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0731 17:51:26.101234   56427 main.go:141] libmachine: (kubernetes-upgrade-410576) Calling .GetIP
	I0731 17:51:26.104286   56427 main.go:141] libmachine: (kubernetes-upgrade-410576) DBG | domain kubernetes-upgrade-410576 has defined MAC address 52:54:00:c8:b0:36 in network mk-kubernetes-upgrade-410576
	I0731 17:51:26.104767   56427 main.go:141] libmachine: (kubernetes-upgrade-410576) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:b0:36", ip: ""} in network mk-kubernetes-upgrade-410576: {Iface:virbr2 ExpiryTime:2024-07-31 18:51:16 +0000 UTC Type:0 Mac:52:54:00:c8:b0:36 Iaid: IPaddr:192.168.61.234 Prefix:24 Hostname:kubernetes-upgrade-410576 Clientid:01:52:54:00:c8:b0:36}
	I0731 17:51:26.104798   56427 main.go:141] libmachine: (kubernetes-upgrade-410576) DBG | domain kubernetes-upgrade-410576 has defined IP address 192.168.61.234 and MAC address 52:54:00:c8:b0:36 in network mk-kubernetes-upgrade-410576
	I0731 17:51:26.105026   56427 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0731 17:51:26.109609   56427 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0731 17:51:26.122344   56427 kubeadm.go:883] updating cluster {Name:kubernetes-upgrade-410576 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-410576 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.234 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0731 17:51:26.122473   56427 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0731 17:51:26.122520   56427 ssh_runner.go:195] Run: sudo crictl images --output json
	I0731 17:51:26.156246   56427 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0731 17:51:26.156304   56427 ssh_runner.go:195] Run: which lz4
	I0731 17:51:26.160451   56427 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0731 17:51:26.164378   56427 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0731 17:51:26.164410   56427 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19349-8084/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0731 17:51:27.656083   56427 crio.go:462] duration metric: took 1.495671799s to copy over tarball
	I0731 17:51:27.656159   56427 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0731 17:51:30.451338   56427 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.795151119s)
	I0731 17:51:30.451371   56427 crio.go:469] duration metric: took 2.795259075s to extract the tarball
	I0731 17:51:30.451386   56427 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0731 17:51:30.496653   56427 ssh_runner.go:195] Run: sudo crictl images --output json
	I0731 17:51:30.545877   56427 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0731 17:51:30.545907   56427 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0731 17:51:30.545969   56427 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0731 17:51:30.546027   56427 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0731 17:51:30.546038   56427 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0731 17:51:30.546067   56427 image.go:134] retrieving image: registry.k8s.io/coredns:1.7.0
	I0731 17:51:30.546082   56427 image.go:134] retrieving image: registry.k8s.io/pause:3.2
	I0731 17:51:30.546045   56427 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0731 17:51:30.546027   56427 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0731 17:51:30.546067   56427 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0731 17:51:30.548285   56427 image.go:177] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0731 17:51:30.548299   56427 image.go:177] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0731 17:51:30.548612   56427 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0731 17:51:30.548637   56427 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0731 17:51:30.548666   56427 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0731 17:51:30.548612   56427 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0731 17:51:30.548690   56427 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0731 17:51:30.548883   56427 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0731 17:51:30.863218   56427 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0731 17:51:30.864345   56427 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0731 17:51:30.889336   56427 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0731 17:51:30.889439   56427 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0731 17:51:30.893226   56427 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0731 17:51:30.893232   56427 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0731 17:51:30.893610   56427 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0731 17:51:30.973215   56427 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0731 17:51:30.973264   56427 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0731 17:51:30.973321   56427 ssh_runner.go:195] Run: which crictl
	I0731 17:51:30.978436   56427 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0731 17:51:30.978477   56427 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0731 17:51:30.978544   56427 ssh_runner.go:195] Run: which crictl
	I0731 17:51:31.035975   56427 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0731 17:51:31.036023   56427 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0731 17:51:31.036046   56427 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0731 17:51:31.036069   56427 ssh_runner.go:195] Run: which crictl
	I0731 17:51:31.036084   56427 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0731 17:51:31.036130   56427 ssh_runner.go:195] Run: which crictl
	I0731 17:51:31.056858   56427 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0731 17:51:31.056888   56427 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0731 17:51:31.056906   56427 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0731 17:51:31.056919   56427 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0731 17:51:31.056949   56427 ssh_runner.go:195] Run: which crictl
	I0731 17:51:31.056968   56427 ssh_runner.go:195] Run: which crictl
	I0731 17:51:31.062927   56427 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0731 17:51:31.063033   56427 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0731 17:51:31.063056   56427 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0731 17:51:31.063083   56427 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0731 17:51:31.063137   56427 ssh_runner.go:195] Run: which crictl
	I0731 17:51:31.063158   56427 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0731 17:51:31.063215   56427 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0731 17:51:31.066123   56427 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0731 17:51:31.066168   56427 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0731 17:51:31.212053   56427 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19349-8084/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0731 17:51:31.212121   56427 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0731 17:51:31.212134   56427 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19349-8084/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0731 17:51:31.212167   56427 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19349-8084/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0731 17:51:31.212208   56427 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19349-8084/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0731 17:51:31.212228   56427 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19349-8084/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0731 17:51:31.212266   56427 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19349-8084/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0731 17:51:31.244428   56427 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19349-8084/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0731 17:51:31.463463   56427 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0731 17:51:31.604657   56427 cache_images.go:92] duration metric: took 1.05873044s to LoadCachedImages
	W0731 17:51:31.604762   56427 out.go:239] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19349-8084/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0: no such file or directory
	X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19349-8084/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0: no such file or directory
	I0731 17:51:31.604779   56427 kubeadm.go:934] updating node { 192.168.61.234 8443 v1.20.0 crio true true} ...
	I0731 17:51:31.604936   56427 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=kubernetes-upgrade-410576 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.61.234
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-410576 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0731 17:51:31.605041   56427 ssh_runner.go:195] Run: crio config
	I0731 17:51:31.655290   56427 cni.go:84] Creating CNI manager for ""
	I0731 17:51:31.655321   56427 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0731 17:51:31.655343   56427 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0731 17:51:31.655368   56427 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.234 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:kubernetes-upgrade-410576 NodeName:kubernetes-upgrade-410576 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.234"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.234 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0731 17:51:31.655543   56427 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.234
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "kubernetes-upgrade-410576"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.234
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.234"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
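	For reference only, and not a step this test run performs: a kubeadm config like the one echoed above can be exercised without mutating node state by pointing the same kubeadm binary at the file minikube copies into place later in this log (/var/tmp/minikube/kubeadm.yaml) with a dry run. A minimal sketch:

	    sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" \
	      kubeadm init --config /var/tmp/minikube/kubeadm.yaml --dry-run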
	I0731 17:51:31.655620   56427 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0731 17:51:31.665424   56427 binaries.go:44] Found k8s binaries, skipping transfer
	I0731 17:51:31.665493   56427 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0731 17:51:31.674172   56427 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (433 bytes)
	I0731 17:51:31.690023   56427 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0731 17:51:31.706622   56427 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2126 bytes)
	I0731 17:51:31.724321   56427 ssh_runner.go:195] Run: grep 192.168.61.234	control-plane.minikube.internal$ /etc/hosts
	I0731 17:51:31.728155   56427 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.234	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0731 17:51:31.740049   56427 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0731 17:51:31.855749   56427 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0731 17:51:31.872085   56427 certs.go:68] Setting up /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/kubernetes-upgrade-410576 for IP: 192.168.61.234
	I0731 17:51:31.872111   56427 certs.go:194] generating shared ca certs ...
	I0731 17:51:31.872129   56427 certs.go:226] acquiring lock for ca certs: {Name:mk49eed439085f9c30746706460ce213570d1997 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 17:51:31.872320   56427 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19349-8084/.minikube/ca.key
	I0731 17:51:31.872384   56427 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19349-8084/.minikube/proxy-client-ca.key
	I0731 17:51:31.872398   56427 certs.go:256] generating profile certs ...
	I0731 17:51:31.872458   56427 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/kubernetes-upgrade-410576/client.key
	I0731 17:51:31.872475   56427 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/kubernetes-upgrade-410576/client.crt with IP's: []
	I0731 17:51:31.963532   56427 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/kubernetes-upgrade-410576/client.crt ...
	I0731 17:51:31.963560   56427 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/kubernetes-upgrade-410576/client.crt: {Name:mkfae2930ffc89eac668912faf13c7950ad7937a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 17:51:31.963729   56427 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/kubernetes-upgrade-410576/client.key ...
	I0731 17:51:31.963748   56427 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/kubernetes-upgrade-410576/client.key: {Name:mkf38a7e810ec6c3ca6996553e53a02faa141ea3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 17:51:31.963827   56427 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/kubernetes-upgrade-410576/apiserver.key.6d5376f6
	I0731 17:51:31.963842   56427 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/kubernetes-upgrade-410576/apiserver.crt.6d5376f6 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.61.234]
	I0731 17:51:32.127989   56427 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/kubernetes-upgrade-410576/apiserver.crt.6d5376f6 ...
	I0731 17:51:32.128048   56427 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/kubernetes-upgrade-410576/apiserver.crt.6d5376f6: {Name:mk8c3bdca2d41aa2360a1e2ee711add6b9548046 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 17:51:32.193579   56427 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/kubernetes-upgrade-410576/apiserver.key.6d5376f6 ...
	I0731 17:51:32.193622   56427 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/kubernetes-upgrade-410576/apiserver.key.6d5376f6: {Name:mke6c3022070b089e7ac499f7d619de2ae255a92 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 17:51:32.193773   56427 certs.go:381] copying /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/kubernetes-upgrade-410576/apiserver.crt.6d5376f6 -> /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/kubernetes-upgrade-410576/apiserver.crt
	I0731 17:51:32.193884   56427 certs.go:385] copying /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/kubernetes-upgrade-410576/apiserver.key.6d5376f6 -> /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/kubernetes-upgrade-410576/apiserver.key
	I0731 17:51:32.193967   56427 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/kubernetes-upgrade-410576/proxy-client.key
	I0731 17:51:32.193990   56427 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/kubernetes-upgrade-410576/proxy-client.crt with IP's: []
	I0731 17:51:32.485766   56427 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/kubernetes-upgrade-410576/proxy-client.crt ...
	I0731 17:51:32.485802   56427 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/kubernetes-upgrade-410576/proxy-client.crt: {Name:mk2833c3ecb2c9ca32bbcbc3a2d891f53d55ec7b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 17:51:32.486010   56427 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/kubernetes-upgrade-410576/proxy-client.key ...
	I0731 17:51:32.486035   56427 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/kubernetes-upgrade-410576/proxy-client.key: {Name:mk62b8cb93e4ca1028e3903efcccbecf46a71ab5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 17:51:32.486297   56427 certs.go:484] found cert: /home/jenkins/minikube-integration/19349-8084/.minikube/certs/15259.pem (1338 bytes)
	W0731 17:51:32.486355   56427 certs.go:480] ignoring /home/jenkins/minikube-integration/19349-8084/.minikube/certs/15259_empty.pem, impossibly tiny 0 bytes
	I0731 17:51:32.486369   56427 certs.go:484] found cert: /home/jenkins/minikube-integration/19349-8084/.minikube/certs/ca-key.pem (1675 bytes)
	I0731 17:51:32.486399   56427 certs.go:484] found cert: /home/jenkins/minikube-integration/19349-8084/.minikube/certs/ca.pem (1082 bytes)
	I0731 17:51:32.486431   56427 certs.go:484] found cert: /home/jenkins/minikube-integration/19349-8084/.minikube/certs/cert.pem (1123 bytes)
	I0731 17:51:32.486463   56427 certs.go:484] found cert: /home/jenkins/minikube-integration/19349-8084/.minikube/certs/key.pem (1675 bytes)
	I0731 17:51:32.486521   56427 certs.go:484] found cert: /home/jenkins/minikube-integration/19349-8084/.minikube/files/etc/ssl/certs/152592.pem (1708 bytes)
	I0731 17:51:32.487460   56427 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19349-8084/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0731 17:51:32.511997   56427 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19349-8084/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0731 17:51:32.535665   56427 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19349-8084/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0731 17:51:32.558886   56427 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19349-8084/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0731 17:51:32.588860   56427 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/kubernetes-upgrade-410576/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0731 17:51:32.613658   56427 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/kubernetes-upgrade-410576/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0731 17:51:32.638305   56427 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/kubernetes-upgrade-410576/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0731 17:51:32.662435   56427 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/kubernetes-upgrade-410576/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0731 17:51:32.687582   56427 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19349-8084/.minikube/files/etc/ssl/certs/152592.pem --> /usr/share/ca-certificates/152592.pem (1708 bytes)
	I0731 17:51:32.713402   56427 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19349-8084/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0731 17:51:32.738808   56427 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19349-8084/.minikube/certs/15259.pem --> /usr/share/ca-certificates/15259.pem (1338 bytes)
	I0731 17:51:32.761649   56427 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0731 17:51:32.790298   56427 ssh_runner.go:195] Run: openssl version
	I0731 17:51:32.796947   56427 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/152592.pem && ln -fs /usr/share/ca-certificates/152592.pem /etc/ssl/certs/152592.pem"
	I0731 17:51:32.811585   56427 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/152592.pem
	I0731 17:51:32.819083   56427 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 31 16:54 /usr/share/ca-certificates/152592.pem
	I0731 17:51:32.819170   56427 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/152592.pem
	I0731 17:51:32.827058   56427 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/152592.pem /etc/ssl/certs/3ec20f2e.0"
	I0731 17:51:32.838372   56427 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0731 17:51:32.849428   56427 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0731 17:51:32.854234   56427 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 31 16:42 /usr/share/ca-certificates/minikubeCA.pem
	I0731 17:51:32.854293   56427 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0731 17:51:32.859982   56427 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0731 17:51:32.871531   56427 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/15259.pem && ln -fs /usr/share/ca-certificates/15259.pem /etc/ssl/certs/15259.pem"
	I0731 17:51:32.882059   56427 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/15259.pem
	I0731 17:51:32.886534   56427 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 31 16:54 /usr/share/ca-certificates/15259.pem
	I0731 17:51:32.886600   56427 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/15259.pem
	I0731 17:51:32.892098   56427 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/15259.pem /etc/ssl/certs/51391683.0"
	I0731 17:51:32.904484   56427 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0731 17:51:32.909116   56427 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0731 17:51:32.909189   56427 kubeadm.go:392] StartCluster: {Name:kubernetes-upgrade-410576 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-410576 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.234 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0731 17:51:32.909292   56427 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0731 17:51:32.909373   56427 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0731 17:51:32.948685   56427 cri.go:89] found id: ""
	I0731 17:51:32.948751   56427 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0731 17:51:32.959483   56427 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0731 17:51:32.971317   56427 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0731 17:51:32.982551   56427 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0731 17:51:32.982578   56427 kubeadm.go:157] found existing configuration files:
	
	I0731 17:51:32.982634   56427 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0731 17:51:32.991824   56427 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0731 17:51:32.991891   56427 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0731 17:51:33.002758   56427 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0731 17:51:33.012655   56427 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0731 17:51:33.012720   56427 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0731 17:51:33.022548   56427 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0731 17:51:33.032975   56427 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0731 17:51:33.033062   56427 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0731 17:51:33.042963   56427 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0731 17:51:33.052226   56427 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0731 17:51:33.052294   56427 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0731 17:51:33.061596   56427 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0731 17:51:33.339766   56427 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0731 17:53:30.669363   56427 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0731 17:53:30.669479   56427 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0731 17:53:30.671199   56427 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0731 17:53:30.671271   56427 kubeadm.go:310] [preflight] Running pre-flight checks
	I0731 17:53:30.671373   56427 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0731 17:53:30.671521   56427 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0731 17:53:30.671663   56427 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0731 17:53:30.671747   56427 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0731 17:53:30.673172   56427 out.go:204]   - Generating certificates and keys ...
	I0731 17:53:30.673272   56427 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0731 17:53:30.673367   56427 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0731 17:53:30.673481   56427 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0731 17:53:30.673571   56427 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0731 17:53:30.673654   56427 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0731 17:53:30.673720   56427 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0731 17:53:30.673792   56427 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0731 17:53:30.673979   56427 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [kubernetes-upgrade-410576 localhost] and IPs [192.168.61.234 127.0.0.1 ::1]
	I0731 17:53:30.674048   56427 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0731 17:53:30.674211   56427 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [kubernetes-upgrade-410576 localhost] and IPs [192.168.61.234 127.0.0.1 ::1]
	I0731 17:53:30.674297   56427 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0731 17:53:30.674373   56427 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0731 17:53:30.674430   56427 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0731 17:53:30.674500   56427 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0731 17:53:30.674562   56427 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0731 17:53:30.674635   56427 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0731 17:53:30.674712   56427 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0731 17:53:30.674781   56427 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0731 17:53:30.674906   56427 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0731 17:53:30.675013   56427 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0731 17:53:30.675061   56427 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0731 17:53:30.675177   56427 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0731 17:53:30.676819   56427 out.go:204]   - Booting up control plane ...
	I0731 17:53:30.676951   56427 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0731 17:53:30.677067   56427 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0731 17:53:30.677180   56427 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0731 17:53:30.677288   56427 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0731 17:53:30.677482   56427 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0731 17:53:30.677569   56427 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0731 17:53:30.677674   56427 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0731 17:53:30.677927   56427 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0731 17:53:30.678029   56427 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0731 17:53:30.678309   56427 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0731 17:53:30.678417   56427 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0731 17:53:30.678693   56427 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0731 17:53:30.678795   56427 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0731 17:53:30.679037   56427 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0731 17:53:30.679169   56427 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0731 17:53:30.679449   56427 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0731 17:53:30.679466   56427 kubeadm.go:310] 
	I0731 17:53:30.679529   56427 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0731 17:53:30.679595   56427 kubeadm.go:310] 		timed out waiting for the condition
	I0731 17:53:30.679617   56427 kubeadm.go:310] 
	I0731 17:53:30.679667   56427 kubeadm.go:310] 	This error is likely caused by:
	I0731 17:53:30.679721   56427 kubeadm.go:310] 		- The kubelet is not running
	I0731 17:53:30.679868   56427 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0731 17:53:30.679879   56427 kubeadm.go:310] 
	I0731 17:53:30.680020   56427 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0731 17:53:30.680065   56427 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0731 17:53:30.680108   56427 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0731 17:53:30.680125   56427 kubeadm.go:310] 
	I0731 17:53:30.680269   56427 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0731 17:53:30.680380   56427 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0731 17:53:30.680390   56427 kubeadm.go:310] 
	I0731 17:53:30.680545   56427 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0731 17:53:30.680706   56427 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0731 17:53:30.680802   56427 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0731 17:53:30.680908   56427 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0731 17:53:30.680962   56427 kubeadm.go:310] 
	W0731 17:53:30.681055   56427 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [kubernetes-upgrade-410576 localhost] and IPs [192.168.61.234 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [kubernetes-upgrade-410576 localhost] and IPs [192.168.61.234 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [kubernetes-upgrade-410576 localhost] and IPs [192.168.61.234 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [kubernetes-upgrade-410576 localhost] and IPs [192.168.61.234 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
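	The repeated failure above is kubeadm's kubelet-check loop: it polls the kubelet's local healthz endpoint on port 10248 until the wait-control-plane timeout expires, and on this node the port was never opened. A minimal sketch of the same probe plus the follow-up commands the output itself recommends (illustrative only; every endpoint, socket path, and command here is the one named in the log above):

	    curl -sSL http://localhost:10248/healthz            # the call that kept returning 'connection refused'
	    sudo systemctl status kubelet
	    sudo journalctl -xeu kubelet
	    sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause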
	I0731 17:53:30.681110   56427 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0731 17:53:31.164236   56427 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0731 17:53:31.182825   56427 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0731 17:53:31.196478   56427 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0731 17:53:31.196500   56427 kubeadm.go:157] found existing configuration files:
	
	I0731 17:53:31.196550   56427 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0731 17:53:31.208319   56427 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0731 17:53:31.208398   56427 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0731 17:53:31.221883   56427 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0731 17:53:31.232768   56427 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0731 17:53:31.232832   56427 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0731 17:53:31.245495   56427 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0731 17:53:31.257689   56427 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0731 17:53:31.257770   56427 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0731 17:53:31.270092   56427 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0731 17:53:31.282965   56427 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0731 17:53:31.283041   56427 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0731 17:53:31.296666   56427 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0731 17:53:31.396524   56427 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0731 17:53:31.396638   56427 kubeadm.go:310] [preflight] Running pre-flight checks
	I0731 17:53:31.587380   56427 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0731 17:53:31.587500   56427 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0731 17:53:31.587598   56427 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0731 17:53:31.828871   56427 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0731 17:53:31.830486   56427 out.go:204]   - Generating certificates and keys ...
	I0731 17:53:31.830631   56427 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0731 17:53:31.830759   56427 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0731 17:53:31.830881   56427 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0731 17:53:31.830966   56427 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0731 17:53:31.831741   56427 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0731 17:53:31.831946   56427 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0731 17:53:31.832319   56427 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0731 17:53:31.832603   56427 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0731 17:53:31.833031   56427 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0731 17:53:31.833610   56427 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0731 17:53:31.833782   56427 kubeadm.go:310] [certs] Using the existing "sa" key
	I0731 17:53:31.833861   56427 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0731 17:53:32.031783   56427 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0731 17:53:32.140338   56427 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0731 17:53:32.388575   56427 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0731 17:53:32.564520   56427 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0731 17:53:32.585651   56427 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0731 17:53:32.587038   56427 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0731 17:53:32.587130   56427 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0731 17:53:32.715426   56427 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0731 17:53:32.717427   56427 out.go:204]   - Booting up control plane ...
	I0731 17:53:32.717565   56427 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0731 17:53:32.724208   56427 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0731 17:53:32.727127   56427 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0731 17:53:32.728483   56427 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0731 17:53:32.730677   56427 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0731 17:54:12.733666   56427 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0731 17:54:12.733800   56427 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0731 17:54:12.734148   56427 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0731 17:54:17.734548   56427 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0731 17:54:17.734779   56427 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0731 17:54:27.735628   56427 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0731 17:54:27.735839   56427 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0731 17:54:47.735335   56427 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0731 17:54:47.735910   56427 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0731 17:55:27.736223   56427 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0731 17:55:27.736555   56427 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0731 17:55:27.736586   56427 kubeadm.go:310] 
	I0731 17:55:27.736653   56427 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0731 17:55:27.736706   56427 kubeadm.go:310] 		timed out waiting for the condition
	I0731 17:55:27.736712   56427 kubeadm.go:310] 
	I0731 17:55:27.736758   56427 kubeadm.go:310] 	This error is likely caused by:
	I0731 17:55:27.736801   56427 kubeadm.go:310] 		- The kubelet is not running
	I0731 17:55:27.736940   56427 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0731 17:55:27.736948   56427 kubeadm.go:310] 
	I0731 17:55:27.737085   56427 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0731 17:55:27.737135   56427 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0731 17:55:27.737183   56427 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0731 17:55:27.737190   56427 kubeadm.go:310] 
	I0731 17:55:27.737343   56427 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0731 17:55:27.737465   56427 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0731 17:55:27.737499   56427 kubeadm.go:310] 
	I0731 17:55:27.737656   56427 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0731 17:55:27.737786   56427 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0731 17:55:27.737897   56427 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0731 17:55:27.737993   56427 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0731 17:55:27.737999   56427 kubeadm.go:310] 
	I0731 17:55:27.741718   56427 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0731 17:55:27.741852   56427 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0731 17:55:27.741933   56427 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0731 17:55:27.742015   56427 kubeadm.go:394] duration metric: took 3m54.832830873s to StartCluster
	I0731 17:55:27.742066   56427 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 17:55:27.742137   56427 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 17:55:27.815770   56427 cri.go:89] found id: ""
	I0731 17:55:27.815800   56427 logs.go:276] 0 containers: []
	W0731 17:55:27.815811   56427 logs.go:278] No container was found matching "kube-apiserver"
	I0731 17:55:27.815819   56427 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 17:55:27.815887   56427 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 17:55:27.859533   56427 cri.go:89] found id: ""
	I0731 17:55:27.859565   56427 logs.go:276] 0 containers: []
	W0731 17:55:27.859576   56427 logs.go:278] No container was found matching "etcd"
	I0731 17:55:27.859584   56427 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 17:55:27.859649   56427 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 17:55:27.902613   56427 cri.go:89] found id: ""
	I0731 17:55:27.902635   56427 logs.go:276] 0 containers: []
	W0731 17:55:27.902642   56427 logs.go:278] No container was found matching "coredns"
	I0731 17:55:27.902648   56427 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 17:55:27.902705   56427 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 17:55:27.946930   56427 cri.go:89] found id: ""
	I0731 17:55:27.946951   56427 logs.go:276] 0 containers: []
	W0731 17:55:27.946959   56427 logs.go:278] No container was found matching "kube-scheduler"
	I0731 17:55:27.946964   56427 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 17:55:27.947008   56427 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 17:55:27.986209   56427 cri.go:89] found id: ""
	I0731 17:55:27.986248   56427 logs.go:276] 0 containers: []
	W0731 17:55:27.986261   56427 logs.go:278] No container was found matching "kube-proxy"
	I0731 17:55:27.986270   56427 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 17:55:27.986339   56427 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 17:55:28.037877   56427 cri.go:89] found id: ""
	I0731 17:55:28.037903   56427 logs.go:276] 0 containers: []
	W0731 17:55:28.037913   56427 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 17:55:28.037921   56427 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 17:55:28.037978   56427 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 17:55:28.105760   56427 cri.go:89] found id: ""
	I0731 17:55:28.105789   56427 logs.go:276] 0 containers: []
	W0731 17:55:28.105798   56427 logs.go:278] No container was found matching "kindnet"
	I0731 17:55:28.105808   56427 logs.go:123] Gathering logs for kubelet ...
	I0731 17:55:28.105822   56427 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 17:55:28.161458   56427 logs.go:123] Gathering logs for dmesg ...
	I0731 17:55:28.161495   56427 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 17:55:28.176947   56427 logs.go:123] Gathering logs for describe nodes ...
	I0731 17:55:28.176975   56427 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 17:55:28.321911   56427 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 17:55:28.321933   56427 logs.go:123] Gathering logs for CRI-O ...
	I0731 17:55:28.321946   56427 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 17:55:28.432266   56427 logs.go:123] Gathering logs for container status ...
	I0731 17:55:28.432302   56427 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W0731 17:55:28.484115   56427 out.go:364] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0731 17:55:28.484174   56427 out.go:239] * 
	* 
	W0731 17:55:28.484256   56427 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0731 17:55:28.484289   56427 out.go:239] * 
	* 
	W0731 17:55:28.485646   56427 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0731 17:55:28.489242   56427 out.go:177] 
	W0731 17:55:28.490322   56427 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0731 17:55:28.490589   56427 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	* Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0731 17:55:28.490627   56427 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	* Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0731 17:55:28.492522   56427 out.go:177] 

                                                
                                                
** /stderr **
version_upgrade_test.go:224: failed to start minikube HEAD with oldest k8s version: out/minikube-linux-amd64 start -p kubernetes-upgrade-410576 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: exit status 109
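A minimal follow-up sketch for chasing this first failure by hand (hedged: the profile name and flags are reused from the failed command above, and the cgroup-driver override is only the suggestion printed later in this log, not a verified fix):

	# inspect why the kubelet never answered on localhost:10248 inside the VM
	out/minikube-linux-amd64 ssh -p kubernetes-upgrade-410576 -- sudo journalctl -xeu kubelet | tail -n 50
	# retry the v1.20.0 start with the kubelet cgroup driver the log suggests
	out/minikube-linux-amd64 start -p kubernetes-upgrade-410576 --memory=2200 --kubernetes-version=v1.20.0 \
	  --driver=kvm2 --container-runtime=crio --extra-config=kubelet.cgroup-driver=systemd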
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-amd64 stop -p kubernetes-upgrade-410576
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-amd64 stop -p kubernetes-upgrade-410576: (6.585876609s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-410576 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-amd64 -p kubernetes-upgrade-410576 status --format={{.Host}}: exit status 7 (64.148008ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-410576 --memory=2200 --kubernetes-version=v1.31.0-beta.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-410576 --memory=2200 --kubernetes-version=v1.31.0-beta.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (1m11.023887593s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-410576 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-410576 --memory=2200 --kubernetes-version=v1.20.0 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-410576 --memory=2200 --kubernetes-version=v1.20.0 --driver=kvm2  --container-runtime=crio: exit status 106 (99.177854ms)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-410576] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19349
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19349-8084/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19349-8084/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.31.0-beta.0 cluster to v1.20.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.20.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-410576
	    minikube start -p kubernetes-upgrade-410576 --kubernetes-version=v1.20.0
	    
	    2) Create a second cluster with Kubernetes 1.20.0, by running:
	    
	    minikube start -p kubernetes-upgrade-4105762 --kubernetes-version=v1.20.0
	    
	    3) Use the existing cluster at version Kubernetes 1.31.0-beta.0, by running:
	    
	    minikube start -p kubernetes-upgrade-410576 --kubernetes-version=v1.31.0-beta.0
	    

                                                
                                                
** /stderr **
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-410576 --memory=2200 --kubernetes-version=v1.31.0-beta.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-410576 --memory=2200 --kubernetes-version=v1.31.0-beta.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (1m0.364170727s)
version_upgrade_test.go:279: *** TestKubernetesUpgrade FAILED at 2024-07-31 17:57:46.77495731 +0000 UTC m=+4670.481695492
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p kubernetes-upgrade-410576 -n kubernetes-upgrade-410576
helpers_test.go:244: <<< TestKubernetesUpgrade FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestKubernetesUpgrade]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-410576 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p kubernetes-upgrade-410576 logs -n 25: (1.786111521s)
helpers_test.go:252: TestKubernetesUpgrade logs: 
-- stdout --
	
	==> Audit <==
	|---------|------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| Command |                         Args                         |        Profile         |  User   | Version |     Start Time      |      End Time       |
	|---------|------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p bridge-985288 sudo                                | bridge-985288          | jenkins | v1.33.1 | 31 Jul 24 17:57 UTC | 31 Jul 24 17:57 UTC |
	|         | systemctl cat kubelet                                |                        |         |         |                     |                     |
	|         | --no-pager                                           |                        |         |         |                     |                     |
	| ssh     | -p bridge-985288 sudo                                | bridge-985288          | jenkins | v1.33.1 | 31 Jul 24 17:57 UTC | 31 Jul 24 17:57 UTC |
	|         | journalctl -xeu kubelet --all                        |                        |         |         |                     |                     |
	|         | --full --no-pager                                    |                        |         |         |                     |                     |
	| ssh     | -p bridge-985288 sudo cat                            | bridge-985288          | jenkins | v1.33.1 | 31 Jul 24 17:57 UTC | 31 Jul 24 17:57 UTC |
	|         | /etc/kubernetes/kubelet.conf                         |                        |         |         |                     |                     |
	| ssh     | -p bridge-985288 sudo cat                            | bridge-985288          | jenkins | v1.33.1 | 31 Jul 24 17:57 UTC | 31 Jul 24 17:57 UTC |
	|         | /var/lib/kubelet/config.yaml                         |                        |         |         |                     |                     |
	| ssh     | -p bridge-985288 sudo                                | bridge-985288          | jenkins | v1.33.1 | 31 Jul 24 17:57 UTC |                     |
	|         | systemctl status docker --all                        |                        |         |         |                     |                     |
	|         | --full --no-pager                                    |                        |         |         |                     |                     |
	| ssh     | -p bridge-985288 sudo                                | bridge-985288          | jenkins | v1.33.1 | 31 Jul 24 17:57 UTC | 31 Jul 24 17:57 UTC |
	|         | systemctl cat docker                                 |                        |         |         |                     |                     |
	|         | --no-pager                                           |                        |         |         |                     |                     |
	| ssh     | -p bridge-985288 sudo cat                            | bridge-985288          | jenkins | v1.33.1 | 31 Jul 24 17:57 UTC | 31 Jul 24 17:57 UTC |
	|         | /etc/docker/daemon.json                              |                        |         |         |                     |                     |
	| ssh     | -p bridge-985288 sudo docker                         | bridge-985288          | jenkins | v1.33.1 | 31 Jul 24 17:57 UTC |                     |
	|         | system info                                          |                        |         |         |                     |                     |
	| ssh     | -p bridge-985288 sudo                                | bridge-985288          | jenkins | v1.33.1 | 31 Jul 24 17:57 UTC |                     |
	|         | systemctl status cri-docker                          |                        |         |         |                     |                     |
	|         | --all --full --no-pager                              |                        |         |         |                     |                     |
	| ssh     | -p bridge-985288 sudo                                | bridge-985288          | jenkins | v1.33.1 | 31 Jul 24 17:57 UTC | 31 Jul 24 17:57 UTC |
	|         | systemctl cat cri-docker                             |                        |         |         |                     |                     |
	|         | --no-pager                                           |                        |         |         |                     |                     |
	| ssh     | -p bridge-985288 sudo cat                            | bridge-985288          | jenkins | v1.33.1 | 31 Jul 24 17:57 UTC |                     |
	|         | /etc/systemd/system/cri-docker.service.d/10-cni.conf |                        |         |         |                     |                     |
	| ssh     | -p bridge-985288 sudo cat                            | bridge-985288          | jenkins | v1.33.1 | 31 Jul 24 17:57 UTC | 31 Jul 24 17:57 UTC |
	|         | /usr/lib/systemd/system/cri-docker.service           |                        |         |         |                     |                     |
	| ssh     | -p bridge-985288 sudo                                | bridge-985288          | jenkins | v1.33.1 | 31 Jul 24 17:57 UTC | 31 Jul 24 17:57 UTC |
	|         | cri-dockerd --version                                |                        |         |         |                     |                     |
	| ssh     | -p bridge-985288 sudo                                | bridge-985288          | jenkins | v1.33.1 | 31 Jul 24 17:57 UTC |                     |
	|         | systemctl status containerd                          |                        |         |         |                     |                     |
	|         | --all --full --no-pager                              |                        |         |         |                     |                     |
	| ssh     | -p bridge-985288 sudo                                | bridge-985288          | jenkins | v1.33.1 | 31 Jul 24 17:57 UTC | 31 Jul 24 17:57 UTC |
	|         | systemctl cat containerd                             |                        |         |         |                     |                     |
	|         | --no-pager                                           |                        |         |         |                     |                     |
	| ssh     | -p bridge-985288 sudo cat                            | bridge-985288          | jenkins | v1.33.1 | 31 Jul 24 17:57 UTC | 31 Jul 24 17:57 UTC |
	|         | /lib/systemd/system/containerd.service               |                        |         |         |                     |                     |
	| ssh     | -p bridge-985288 sudo cat                            | bridge-985288          | jenkins | v1.33.1 | 31 Jul 24 17:57 UTC | 31 Jul 24 17:57 UTC |
	|         | /etc/containerd/config.toml                          |                        |         |         |                     |                     |
	| ssh     | -p bridge-985288 sudo                                | bridge-985288          | jenkins | v1.33.1 | 31 Jul 24 17:57 UTC | 31 Jul 24 17:57 UTC |
	|         | containerd config dump                               |                        |         |         |                     |                     |
	| ssh     | -p bridge-985288 sudo                                | bridge-985288          | jenkins | v1.33.1 | 31 Jul 24 17:57 UTC | 31 Jul 24 17:57 UTC |
	|         | systemctl status crio --all                          |                        |         |         |                     |                     |
	|         | --full --no-pager                                    |                        |         |         |                     |                     |
	| ssh     | -p bridge-985288 sudo                                | bridge-985288          | jenkins | v1.33.1 | 31 Jul 24 17:57 UTC | 31 Jul 24 17:57 UTC |
	|         | systemctl cat crio --no-pager                        |                        |         |         |                     |                     |
	| ssh     | -p bridge-985288 sudo find                           | bridge-985288          | jenkins | v1.33.1 | 31 Jul 24 17:57 UTC | 31 Jul 24 17:57 UTC |
	|         | /etc/crio -type f -exec sh -c                        |                        |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                 |                        |         |         |                     |                     |
	| ssh     | -p bridge-985288 sudo crio                           | bridge-985288          | jenkins | v1.33.1 | 31 Jul 24 17:57 UTC | 31 Jul 24 17:57 UTC |
	|         | config                                               |                        |         |         |                     |                     |
	| delete  | -p bridge-985288                                     | bridge-985288          | jenkins | v1.33.1 | 31 Jul 24 17:57 UTC | 31 Jul 24 17:57 UTC |
	| start   | -p old-k8s-version-276459                            | old-k8s-version-276459 | jenkins | v1.33.1 | 31 Jul 24 17:57 UTC |                     |
	|         | --memory=2200                                        |                        |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                        |                        |         |         |                     |                     |
	|         | --kvm-network=default                                |                        |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                        |                        |         |         |                     |                     |
	|         | --disable-driver-mounts                              |                        |         |         |                     |                     |
	|         | --keep-context=false                                 |                        |         |         |                     |                     |
	|         | --driver=kvm2                                        |                        |         |         |                     |                     |
	|         | --container-runtime=crio                             |                        |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                         |                        |         |         |                     |                     |
	| ssh     | -p flannel-985288 pgrep -a                           | flannel-985288         | jenkins | v1.33.1 | 31 Jul 24 17:57 UTC | 31 Jul 24 17:57 UTC |
	|         | kubelet                                              |                        |         |         |                     |                     |
	|---------|------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/31 17:57:39
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0731 17:57:39.117433   67995 out.go:291] Setting OutFile to fd 1 ...
	I0731 17:57:39.117525   67995 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 17:57:39.117533   67995 out.go:304] Setting ErrFile to fd 2...
	I0731 17:57:39.117537   67995 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 17:57:39.117711   67995 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19349-8084/.minikube/bin
	I0731 17:57:39.118277   67995 out.go:298] Setting JSON to false
	I0731 17:57:39.119514   67995 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":6003,"bootTime":1722442656,"procs":315,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1065-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0731 17:57:39.119576   67995 start.go:139] virtualization: kvm guest
	I0731 17:57:39.121850   67995 out.go:177] * [old-k8s-version-276459] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0731 17:57:39.123125   67995 out.go:177]   - MINIKUBE_LOCATION=19349
	I0731 17:57:39.123178   67995 notify.go:220] Checking for updates...
	I0731 17:57:39.125621   67995 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0731 17:57:39.126856   67995 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19349-8084/kubeconfig
	I0731 17:57:39.128060   67995 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19349-8084/.minikube
	I0731 17:57:39.129307   67995 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0731 17:57:39.130625   67995 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0731 17:57:39.132272   67995 config.go:182] Loaded profile config "enable-default-cni-985288": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0731 17:57:39.132391   67995 config.go:182] Loaded profile config "flannel-985288": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0731 17:57:39.132494   67995 config.go:182] Loaded profile config "kubernetes-upgrade-410576": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0-beta.0
	I0731 17:57:39.132620   67995 driver.go:392] Setting default libvirt URI to qemu:///system
	I0731 17:57:39.169119   67995 out.go:177] * Using the kvm2 driver based on user configuration
	I0731 17:57:39.170527   67995 start.go:297] selected driver: kvm2
	I0731 17:57:39.170542   67995 start.go:901] validating driver "kvm2" against <nil>
	I0731 17:57:39.170558   67995 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0731 17:57:39.171300   67995 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0731 17:57:39.171386   67995 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19349-8084/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0731 17:57:39.186105   67995 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0731 17:57:39.186165   67995 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0731 17:57:39.186441   67995 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0731 17:57:39.186507   67995 cni.go:84] Creating CNI manager for ""
	I0731 17:57:39.186522   67995 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0731 17:57:39.186536   67995 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0731 17:57:39.186601   67995 start.go:340] cluster config:
	{Name:old-k8s-version-276459 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-276459 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local C
ontainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: S
SHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0731 17:57:39.186720   67995 iso.go:125] acquiring lock: {Name:mk4494bb5a63902bf12a909c3109db85229bc516 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0731 17:57:39.188567   67995 out.go:177] * Starting "old-k8s-version-276459" primary control-plane node in "old-k8s-version-276459" cluster
	I0731 17:57:39.189908   67995 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0731 17:57:39.189950   67995 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19349-8084/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0731 17:57:39.189959   67995 cache.go:56] Caching tarball of preloaded images
	I0731 17:57:39.190050   67995 preload.go:172] Found /home/jenkins/minikube-integration/19349-8084/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0731 17:57:39.190062   67995 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0731 17:57:39.190187   67995 profile.go:143] Saving config to /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/old-k8s-version-276459/config.json ...
	I0731 17:57:39.190211   67995 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/old-k8s-version-276459/config.json: {Name:mk7d50135ed2c60e0545008accf64d73827c287c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 17:57:39.190384   67995 start.go:360] acquireMachinesLock for old-k8s-version-276459: {Name:mkd78eacfab7d6c3058c6674434b3d889ec957e0 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0731 17:57:39.190428   67995 start.go:364] duration metric: took 23.264µs to acquireMachinesLock for "old-k8s-version-276459"
	I0731 17:57:39.190453   67995 start.go:93] Provisioning new machine with config: &{Name:old-k8s-version-276459 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-276459 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0731 17:57:39.190531   67995 start.go:125] createHost starting for "" (driver="kvm2")
	I0731 17:57:37.959129   64441 pod_ready.go:102] pod "coredns-7db6d8ff4d-7d7p5" in "kube-system" namespace has status "Ready":"False"
	I0731 17:57:39.459350   64441 pod_ready.go:92] pod "coredns-7db6d8ff4d-7d7p5" in "kube-system" namespace has status "Ready":"True"
	I0731 17:57:39.459383   64441 pod_ready.go:81] duration metric: took 15.007746311s for pod "coredns-7db6d8ff4d-7d7p5" in "kube-system" namespace to be "Ready" ...
	I0731 17:57:39.459398   64441 pod_ready.go:78] waiting up to 15m0s for pod "etcd-flannel-985288" in "kube-system" namespace to be "Ready" ...
	I0731 17:57:39.464848   64441 pod_ready.go:92] pod "etcd-flannel-985288" in "kube-system" namespace has status "Ready":"True"
	I0731 17:57:39.464875   64441 pod_ready.go:81] duration metric: took 5.468122ms for pod "etcd-flannel-985288" in "kube-system" namespace to be "Ready" ...
	I0731 17:57:39.464888   64441 pod_ready.go:78] waiting up to 15m0s for pod "kube-apiserver-flannel-985288" in "kube-system" namespace to be "Ready" ...
	I0731 17:57:39.470087   64441 pod_ready.go:92] pod "kube-apiserver-flannel-985288" in "kube-system" namespace has status "Ready":"True"
	I0731 17:57:39.470115   64441 pod_ready.go:81] duration metric: took 5.218456ms for pod "kube-apiserver-flannel-985288" in "kube-system" namespace to be "Ready" ...
	I0731 17:57:39.470139   64441 pod_ready.go:78] waiting up to 15m0s for pod "kube-controller-manager-flannel-985288" in "kube-system" namespace to be "Ready" ...
	I0731 17:57:39.474876   64441 pod_ready.go:92] pod "kube-controller-manager-flannel-985288" in "kube-system" namespace has status "Ready":"True"
	I0731 17:57:39.474902   64441 pod_ready.go:81] duration metric: took 4.754764ms for pod "kube-controller-manager-flannel-985288" in "kube-system" namespace to be "Ready" ...
	I0731 17:57:39.474914   64441 pod_ready.go:78] waiting up to 15m0s for pod "kube-proxy-865dv" in "kube-system" namespace to be "Ready" ...
	I0731 17:57:39.480314   64441 pod_ready.go:92] pod "kube-proxy-865dv" in "kube-system" namespace has status "Ready":"True"
	I0731 17:57:39.480345   64441 pod_ready.go:81] duration metric: took 5.423677ms for pod "kube-proxy-865dv" in "kube-system" namespace to be "Ready" ...
	I0731 17:57:39.480356   64441 pod_ready.go:78] waiting up to 15m0s for pod "kube-scheduler-flannel-985288" in "kube-system" namespace to be "Ready" ...
	I0731 17:57:39.855488   64441 pod_ready.go:92] pod "kube-scheduler-flannel-985288" in "kube-system" namespace has status "Ready":"True"
	I0731 17:57:39.855509   64441 pod_ready.go:81] duration metric: took 375.145288ms for pod "kube-scheduler-flannel-985288" in "kube-system" namespace to be "Ready" ...
	I0731 17:57:39.855519   64441 pod_ready.go:38] duration metric: took 15.419514646s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0731 17:57:39.855531   64441 api_server.go:52] waiting for apiserver process to appear ...
	I0731 17:57:39.855579   64441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 17:57:39.875733   64441 api_server.go:72] duration metric: took 25.32473917s to wait for apiserver process to appear ...
	I0731 17:57:39.875753   64441 api_server.go:88] waiting for apiserver healthz status ...
	I0731 17:57:39.875781   64441 api_server.go:253] Checking apiserver healthz at https://192.168.50.38:8443/healthz ...
	I0731 17:57:39.880080   64441 api_server.go:279] https://192.168.50.38:8443/healthz returned 200:
	ok
	I0731 17:57:39.881426   64441 api_server.go:141] control plane version: v1.30.3
	I0731 17:57:39.881456   64441 api_server.go:131] duration metric: took 5.696547ms to wait for apiserver health ...
	I0731 17:57:39.881466   64441 system_pods.go:43] waiting for kube-system pods to appear ...
	I0731 17:57:40.059626   64441 system_pods.go:59] 7 kube-system pods found
	I0731 17:57:40.059677   64441 system_pods.go:61] "coredns-7db6d8ff4d-7d7p5" [885fcdd6-055e-4665-bf5a-515f7f40f348] Running
	I0731 17:57:40.059685   64441 system_pods.go:61] "etcd-flannel-985288" [2d56ef1d-d8a0-494e-b92c-96e0867aefeb] Running
	I0731 17:57:40.059691   64441 system_pods.go:61] "kube-apiserver-flannel-985288" [a5db2ff3-33be-495e-8b4e-431f783c5d6c] Running
	I0731 17:57:40.059697   64441 system_pods.go:61] "kube-controller-manager-flannel-985288" [fc9ba061-c84e-407a-8108-5bcacd0496d5] Running
	I0731 17:57:40.059703   64441 system_pods.go:61] "kube-proxy-865dv" [c39af123-e09d-4d6a-8f62-72b09dfc4768] Running
	I0731 17:57:40.059709   64441 system_pods.go:61] "kube-scheduler-flannel-985288" [05d6c650-2f5f-4b07-aeae-cca7cd97aab7] Running
	I0731 17:57:40.059714   64441 system_pods.go:61] "storage-provisioner" [4a714db1-bd20-4ff3-a4ff-c0db2c9ccf7c] Running
	I0731 17:57:40.059723   64441 system_pods.go:74] duration metric: took 178.249743ms to wait for pod list to return data ...
	I0731 17:57:40.059736   64441 default_sa.go:34] waiting for default service account to be created ...
	I0731 17:57:40.256106   64441 default_sa.go:45] found service account: "default"
	I0731 17:57:40.256134   64441 default_sa.go:55] duration metric: took 196.390792ms for default service account to be created ...
	I0731 17:57:40.256145   64441 system_pods.go:116] waiting for k8s-apps to be running ...
	I0731 17:57:40.458752   64441 system_pods.go:86] 7 kube-system pods found
	I0731 17:57:40.458781   64441 system_pods.go:89] "coredns-7db6d8ff4d-7d7p5" [885fcdd6-055e-4665-bf5a-515f7f40f348] Running
	I0731 17:57:40.458788   64441 system_pods.go:89] "etcd-flannel-985288" [2d56ef1d-d8a0-494e-b92c-96e0867aefeb] Running
	I0731 17:57:40.458794   64441 system_pods.go:89] "kube-apiserver-flannel-985288" [a5db2ff3-33be-495e-8b4e-431f783c5d6c] Running
	I0731 17:57:40.458800   64441 system_pods.go:89] "kube-controller-manager-flannel-985288" [fc9ba061-c84e-407a-8108-5bcacd0496d5] Running
	I0731 17:57:40.458806   64441 system_pods.go:89] "kube-proxy-865dv" [c39af123-e09d-4d6a-8f62-72b09dfc4768] Running
	I0731 17:57:40.458811   64441 system_pods.go:89] "kube-scheduler-flannel-985288" [05d6c650-2f5f-4b07-aeae-cca7cd97aab7] Running
	I0731 17:57:40.458816   64441 system_pods.go:89] "storage-provisioner" [4a714db1-bd20-4ff3-a4ff-c0db2c9ccf7c] Running
	I0731 17:57:40.458823   64441 system_pods.go:126] duration metric: took 202.672556ms to wait for k8s-apps to be running ...
	I0731 17:57:40.458832   64441 system_svc.go:44] waiting for kubelet service to be running ....
	I0731 17:57:40.458878   64441 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0731 17:57:40.473560   64441 system_svc.go:56] duration metric: took 14.720082ms WaitForService to wait for kubelet
	I0731 17:57:40.473589   64441 kubeadm.go:582] duration metric: took 25.922596972s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0731 17:57:40.473617   64441 node_conditions.go:102] verifying NodePressure condition ...
	I0731 17:57:40.657652   64441 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0731 17:57:40.657686   64441 node_conditions.go:123] node cpu capacity is 2
	I0731 17:57:40.657699   64441 node_conditions.go:105] duration metric: took 184.075706ms to run NodePressure ...
	I0731 17:57:40.657715   64441 start.go:241] waiting for startup goroutines ...
	I0731 17:57:40.657725   64441 start.go:246] waiting for cluster config update ...
	I0731 17:57:40.657739   64441 start.go:255] writing updated cluster config ...
	I0731 17:57:40.658026   64441 ssh_runner.go:195] Run: rm -f paused
	I0731 17:57:40.724518   64441 start.go:600] kubectl: 1.30.3, cluster: 1.30.3 (minor skew: 0)
	I0731 17:57:40.727755   64441 out.go:177] * Done! kubectl is now configured to use "flannel-985288" cluster and "default" namespace by default
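
The 64441 log above ends with the apiserver health wait: repeated GETs against https://192.168.50.38:8443/healthz until the endpoint answers 200 "ok". Purely as an illustrative sketch (not minikube's actual code; the poll interval, timeout, and the InsecureSkipVerify shortcut are assumptions made for the example, a real check should trust the cluster CA), such a loop could look like:

// healthz_poll.go - illustrative sketch of polling an apiserver /healthz
// endpoint until it returns HTTP 200, as the log lines above do.
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout: 5 * time.Second,
		// Skipping certificate verification only for this sketch.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil // healthz returned 200: ok
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver at %s not healthy within %s", url, timeout)
}

func main() {
	if err := waitForHealthz("https://192.168.50.38:8443/healthz", 2*time.Minute); err != nil {
		fmt.Println(err)
	}
}
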
	I0731 17:57:37.774092   66339 ssh_runner.go:235] Completed: sudo /usr/bin/crictl stop --timeout=10 206a4121a79649e682ff94ef24c8661b0ea16a74394af1f79502c1bdff994938 ad9115ef25d6c62650db1af3db0851e77fe94c2532bf7955814a4e98ba8783ad ff5fcc9fc7c5612da84814e64df5f38f1958d94615a2af74d4961e146d18f5cb 8d71f853e14817ede82c6e339d8dc76a66cfc97c086886e2151dc299a4e7c9c0 18f923af77f99a86e9e79c9d3da51029f9f0828ab4112e6f7c8ec3c858068163 8817455f700c7831efb13c71044d6b1b813c31e0d5a632259f23c2a1e7a87595 8b06b35c4ae1b835ad2634098192f08ccc58ce1449710608870143dbbf92e364 89fd134efa85f72b905731e6cfc5deb3dfa58c77e1d0f610bba984100fab74c8 699130ec5e274a2aec9e57b93f590999d5c8bd36026d73fa7742284337018ab3: (9.78495259s)
	I0731 17:57:37.774162   66339 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0731 17:57:37.817770   66339 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0731 17:57:37.831275   66339 kubeadm.go:157] found existing configuration files:
	-rw------- 1 root root 5651 Jul 31 17:56 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5658 Jul 31 17:56 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 5755 Jul 31 17:56 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5606 Jul 31 17:56 /etc/kubernetes/scheduler.conf
	
	I0731 17:57:37.831348   66339 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0731 17:57:37.842628   66339 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0731 17:57:37.852157   66339 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0731 17:57:37.861699   66339 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0731 17:57:37.861760   66339 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0731 17:57:37.874424   66339 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0731 17:57:37.887318   66339 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0731 17:57:37.887388   66339 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0731 17:57:37.900271   66339 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0731 17:57:37.915826   66339 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0731 17:57:37.977398   66339 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0731 17:57:38.985601   66339 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.008168574s)
	I0731 17:57:38.985641   66339 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0731 17:57:39.251224   66339 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0731 17:57:39.329738   66339 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0731 17:57:39.419206   66339 api_server.go:52] waiting for apiserver process to appear ...
	I0731 17:57:39.419301   66339 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 17:57:39.920355   66339 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 17:57:40.420236   66339 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 17:57:40.437203   66339 api_server.go:72] duration metric: took 1.017995183s to wait for apiserver process to appear ...
	I0731 17:57:40.437231   66339 api_server.go:88] waiting for apiserver healthz status ...
	I0731 17:57:40.437252   66339 api_server.go:253] Checking apiserver healthz at https://192.168.61.234:8443/healthz ...
	I0731 17:57:38.221989   66100 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 17:57:38.721372   66100 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 17:57:39.221179   66100 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 17:57:39.721620   66100 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 17:57:40.220812   66100 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 17:57:40.721314   66100 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 17:57:41.221758   66100 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 17:57:41.721805   66100 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 17:57:42.220773   66100 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 17:57:42.720943   66100 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 17:57:39.192130   67995 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0731 17:57:39.192285   67995 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 17:57:39.192338   67995 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 17:57:39.207371   67995 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38035
	I0731 17:57:39.207857   67995 main.go:141] libmachine: () Calling .GetVersion
	I0731 17:57:39.208398   67995 main.go:141] libmachine: Using API Version  1
	I0731 17:57:39.208419   67995 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 17:57:39.208831   67995 main.go:141] libmachine: () Calling .GetMachineName
	I0731 17:57:39.209071   67995 main.go:141] libmachine: (old-k8s-version-276459) Calling .GetMachineName
	I0731 17:57:39.209265   67995 main.go:141] libmachine: (old-k8s-version-276459) Calling .DriverName
	I0731 17:57:39.209468   67995 start.go:159] libmachine.API.Create for "old-k8s-version-276459" (driver="kvm2")
	I0731 17:57:39.209498   67995 client.go:168] LocalClient.Create starting
	I0731 17:57:39.209536   67995 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19349-8084/.minikube/certs/ca.pem
	I0731 17:57:39.209581   67995 main.go:141] libmachine: Decoding PEM data...
	I0731 17:57:39.209601   67995 main.go:141] libmachine: Parsing certificate...
	I0731 17:57:39.209678   67995 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19349-8084/.minikube/certs/cert.pem
	I0731 17:57:39.209706   67995 main.go:141] libmachine: Decoding PEM data...
	I0731 17:57:39.209729   67995 main.go:141] libmachine: Parsing certificate...
	I0731 17:57:39.209754   67995 main.go:141] libmachine: Running pre-create checks...
	I0731 17:57:39.209775   67995 main.go:141] libmachine: (old-k8s-version-276459) Calling .PreCreateCheck
	I0731 17:57:39.210233   67995 main.go:141] libmachine: (old-k8s-version-276459) Calling .GetConfigRaw
	I0731 17:57:39.210764   67995 main.go:141] libmachine: Creating machine...
	I0731 17:57:39.210779   67995 main.go:141] libmachine: (old-k8s-version-276459) Calling .Create
	I0731 17:57:39.210966   67995 main.go:141] libmachine: (old-k8s-version-276459) Creating KVM machine...
	I0731 17:57:39.212423   67995 main.go:141] libmachine: (old-k8s-version-276459) DBG | found existing default KVM network
	I0731 17:57:39.213833   67995 main.go:141] libmachine: (old-k8s-version-276459) DBG | I0731 17:57:39.213686   68018 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000112c10}
	I0731 17:57:39.213853   67995 main.go:141] libmachine: (old-k8s-version-276459) DBG | created network xml: 
	I0731 17:57:39.213862   67995 main.go:141] libmachine: (old-k8s-version-276459) DBG | <network>
	I0731 17:57:39.213868   67995 main.go:141] libmachine: (old-k8s-version-276459) DBG |   <name>mk-old-k8s-version-276459</name>
	I0731 17:57:39.213875   67995 main.go:141] libmachine: (old-k8s-version-276459) DBG |   <dns enable='no'/>
	I0731 17:57:39.213880   67995 main.go:141] libmachine: (old-k8s-version-276459) DBG |   
	I0731 17:57:39.213886   67995 main.go:141] libmachine: (old-k8s-version-276459) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0731 17:57:39.213905   67995 main.go:141] libmachine: (old-k8s-version-276459) DBG |     <dhcp>
	I0731 17:57:39.213919   67995 main.go:141] libmachine: (old-k8s-version-276459) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0731 17:57:39.213928   67995 main.go:141] libmachine: (old-k8s-version-276459) DBG |     </dhcp>
	I0731 17:57:39.214021   67995 main.go:141] libmachine: (old-k8s-version-276459) DBG |   </ip>
	I0731 17:57:39.214085   67995 main.go:141] libmachine: (old-k8s-version-276459) DBG |   
	I0731 17:57:39.214101   67995 main.go:141] libmachine: (old-k8s-version-276459) DBG | </network>
	I0731 17:57:39.214119   67995 main.go:141] libmachine: (old-k8s-version-276459) DBG | 
	I0731 17:57:39.219563   67995 main.go:141] libmachine: (old-k8s-version-276459) DBG | trying to create private KVM network mk-old-k8s-version-276459 192.168.39.0/24...
	I0731 17:57:39.310430   67995 main.go:141] libmachine: (old-k8s-version-276459) DBG | private KVM network mk-old-k8s-version-276459 192.168.39.0/24 created
	I0731 17:57:39.310478   67995 main.go:141] libmachine: (old-k8s-version-276459) DBG | I0731 17:57:39.310310   68018 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19349-8084/.minikube
	I0731 17:57:39.310493   67995 main.go:141] libmachine: (old-k8s-version-276459) Setting up store path in /home/jenkins/minikube-integration/19349-8084/.minikube/machines/old-k8s-version-276459 ...
	I0731 17:57:39.310522   67995 main.go:141] libmachine: (old-k8s-version-276459) Building disk image from file:///home/jenkins/minikube-integration/19349-8084/.minikube/cache/iso/amd64/minikube-v1.33.1-1722248113-19339-amd64.iso
	I0731 17:57:39.310574   67995 main.go:141] libmachine: (old-k8s-version-276459) Downloading /home/jenkins/minikube-integration/19349-8084/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19349-8084/.minikube/cache/iso/amd64/minikube-v1.33.1-1722248113-19339-amd64.iso...
	I0731 17:57:39.576495   67995 main.go:141] libmachine: (old-k8s-version-276459) DBG | I0731 17:57:39.576315   68018 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19349-8084/.minikube/machines/old-k8s-version-276459/id_rsa...
	I0731 17:57:39.680936   67995 main.go:141] libmachine: (old-k8s-version-276459) DBG | I0731 17:57:39.680813   68018 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19349-8084/.minikube/machines/old-k8s-version-276459/old-k8s-version-276459.rawdisk...
	I0731 17:57:39.680965   67995 main.go:141] libmachine: (old-k8s-version-276459) DBG | Writing magic tar header
	I0731 17:57:39.680982   67995 main.go:141] libmachine: (old-k8s-version-276459) DBG | Writing SSH key tar header
	I0731 17:57:39.681031   67995 main.go:141] libmachine: (old-k8s-version-276459) DBG | I0731 17:57:39.680975   68018 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19349-8084/.minikube/machines/old-k8s-version-276459 ...
	I0731 17:57:39.681109   67995 main.go:141] libmachine: (old-k8s-version-276459) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19349-8084/.minikube/machines/old-k8s-version-276459
	I0731 17:57:39.681131   67995 main.go:141] libmachine: (old-k8s-version-276459) Setting executable bit set on /home/jenkins/minikube-integration/19349-8084/.minikube/machines/old-k8s-version-276459 (perms=drwx------)
	I0731 17:57:39.681144   67995 main.go:141] libmachine: (old-k8s-version-276459) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19349-8084/.minikube/machines
	I0731 17:57:39.681160   67995 main.go:141] libmachine: (old-k8s-version-276459) Setting executable bit set on /home/jenkins/minikube-integration/19349-8084/.minikube/machines (perms=drwxr-xr-x)
	I0731 17:57:39.681181   67995 main.go:141] libmachine: (old-k8s-version-276459) Setting executable bit set on /home/jenkins/minikube-integration/19349-8084/.minikube (perms=drwxr-xr-x)
	I0731 17:57:39.681196   67995 main.go:141] libmachine: (old-k8s-version-276459) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19349-8084/.minikube
	I0731 17:57:39.681211   67995 main.go:141] libmachine: (old-k8s-version-276459) Setting executable bit set on /home/jenkins/minikube-integration/19349-8084 (perms=drwxrwxr-x)
	I0731 17:57:39.681229   67995 main.go:141] libmachine: (old-k8s-version-276459) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0731 17:57:39.681243   67995 main.go:141] libmachine: (old-k8s-version-276459) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19349-8084
	I0731 17:57:39.681261   67995 main.go:141] libmachine: (old-k8s-version-276459) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0731 17:57:39.681274   67995 main.go:141] libmachine: (old-k8s-version-276459) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0731 17:57:39.681291   67995 main.go:141] libmachine: (old-k8s-version-276459) DBG | Checking permissions on dir: /home/jenkins
	I0731 17:57:39.681300   67995 main.go:141] libmachine: (old-k8s-version-276459) Creating domain...
	I0731 17:57:39.681307   67995 main.go:141] libmachine: (old-k8s-version-276459) DBG | Checking permissions on dir: /home
	I0731 17:57:39.681317   67995 main.go:141] libmachine: (old-k8s-version-276459) DBG | Skipping /home - not owner
	I0731 17:57:39.682579   67995 main.go:141] libmachine: (old-k8s-version-276459) define libvirt domain using xml: 
	I0731 17:57:39.682601   67995 main.go:141] libmachine: (old-k8s-version-276459) <domain type='kvm'>
	I0731 17:57:39.682612   67995 main.go:141] libmachine: (old-k8s-version-276459)   <name>old-k8s-version-276459</name>
	I0731 17:57:39.682630   67995 main.go:141] libmachine: (old-k8s-version-276459)   <memory unit='MiB'>2200</memory>
	I0731 17:57:39.682656   67995 main.go:141] libmachine: (old-k8s-version-276459)   <vcpu>2</vcpu>
	I0731 17:57:39.682679   67995 main.go:141] libmachine: (old-k8s-version-276459)   <features>
	I0731 17:57:39.682691   67995 main.go:141] libmachine: (old-k8s-version-276459)     <acpi/>
	I0731 17:57:39.682706   67995 main.go:141] libmachine: (old-k8s-version-276459)     <apic/>
	I0731 17:57:39.682716   67995 main.go:141] libmachine: (old-k8s-version-276459)     <pae/>
	I0731 17:57:39.682738   67995 main.go:141] libmachine: (old-k8s-version-276459)     
	I0731 17:57:39.682749   67995 main.go:141] libmachine: (old-k8s-version-276459)   </features>
	I0731 17:57:39.682761   67995 main.go:141] libmachine: (old-k8s-version-276459)   <cpu mode='host-passthrough'>
	I0731 17:57:39.682772   67995 main.go:141] libmachine: (old-k8s-version-276459)   
	I0731 17:57:39.682782   67995 main.go:141] libmachine: (old-k8s-version-276459)   </cpu>
	I0731 17:57:39.682793   67995 main.go:141] libmachine: (old-k8s-version-276459)   <os>
	I0731 17:57:39.682804   67995 main.go:141] libmachine: (old-k8s-version-276459)     <type>hvm</type>
	I0731 17:57:39.682812   67995 main.go:141] libmachine: (old-k8s-version-276459)     <boot dev='cdrom'/>
	I0731 17:57:39.682826   67995 main.go:141] libmachine: (old-k8s-version-276459)     <boot dev='hd'/>
	I0731 17:57:39.682837   67995 main.go:141] libmachine: (old-k8s-version-276459)     <bootmenu enable='no'/>
	I0731 17:57:39.682850   67995 main.go:141] libmachine: (old-k8s-version-276459)   </os>
	I0731 17:57:39.682871   67995 main.go:141] libmachine: (old-k8s-version-276459)   <devices>
	I0731 17:57:39.682885   67995 main.go:141] libmachine: (old-k8s-version-276459)     <disk type='file' device='cdrom'>
	I0731 17:57:39.682902   67995 main.go:141] libmachine: (old-k8s-version-276459)       <source file='/home/jenkins/minikube-integration/19349-8084/.minikube/machines/old-k8s-version-276459/boot2docker.iso'/>
	I0731 17:57:39.682913   67995 main.go:141] libmachine: (old-k8s-version-276459)       <target dev='hdc' bus='scsi'/>
	I0731 17:57:39.682922   67995 main.go:141] libmachine: (old-k8s-version-276459)       <readonly/>
	I0731 17:57:39.682931   67995 main.go:141] libmachine: (old-k8s-version-276459)     </disk>
	I0731 17:57:39.682941   67995 main.go:141] libmachine: (old-k8s-version-276459)     <disk type='file' device='disk'>
	I0731 17:57:39.682952   67995 main.go:141] libmachine: (old-k8s-version-276459)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0731 17:57:39.682969   67995 main.go:141] libmachine: (old-k8s-version-276459)       <source file='/home/jenkins/minikube-integration/19349-8084/.minikube/machines/old-k8s-version-276459/old-k8s-version-276459.rawdisk'/>
	I0731 17:57:39.682979   67995 main.go:141] libmachine: (old-k8s-version-276459)       <target dev='hda' bus='virtio'/>
	I0731 17:57:39.682986   67995 main.go:141] libmachine: (old-k8s-version-276459)     </disk>
	I0731 17:57:39.682998   67995 main.go:141] libmachine: (old-k8s-version-276459)     <interface type='network'>
	I0731 17:57:39.683010   67995 main.go:141] libmachine: (old-k8s-version-276459)       <source network='mk-old-k8s-version-276459'/>
	I0731 17:57:39.683021   67995 main.go:141] libmachine: (old-k8s-version-276459)       <model type='virtio'/>
	I0731 17:57:39.683036   67995 main.go:141] libmachine: (old-k8s-version-276459)     </interface>
	I0731 17:57:39.683048   67995 main.go:141] libmachine: (old-k8s-version-276459)     <interface type='network'>
	I0731 17:57:39.683060   67995 main.go:141] libmachine: (old-k8s-version-276459)       <source network='default'/>
	I0731 17:57:39.683072   67995 main.go:141] libmachine: (old-k8s-version-276459)       <model type='virtio'/>
	I0731 17:57:39.683081   67995 main.go:141] libmachine: (old-k8s-version-276459)     </interface>
	I0731 17:57:39.683089   67995 main.go:141] libmachine: (old-k8s-version-276459)     <serial type='pty'>
	I0731 17:57:39.683098   67995 main.go:141] libmachine: (old-k8s-version-276459)       <target port='0'/>
	I0731 17:57:39.683138   67995 main.go:141] libmachine: (old-k8s-version-276459)     </serial>
	I0731 17:57:39.683161   67995 main.go:141] libmachine: (old-k8s-version-276459)     <console type='pty'>
	I0731 17:57:39.683176   67995 main.go:141] libmachine: (old-k8s-version-276459)       <target type='serial' port='0'/>
	I0731 17:57:39.683188   67995 main.go:141] libmachine: (old-k8s-version-276459)     </console>
	I0731 17:57:39.683201   67995 main.go:141] libmachine: (old-k8s-version-276459)     <rng model='virtio'>
	I0731 17:57:39.683223   67995 main.go:141] libmachine: (old-k8s-version-276459)       <backend model='random'>/dev/random</backend>
	I0731 17:57:39.683247   67995 main.go:141] libmachine: (old-k8s-version-276459)     </rng>
	I0731 17:57:39.683266   67995 main.go:141] libmachine: (old-k8s-version-276459)     
	I0731 17:57:39.683282   67995 main.go:141] libmachine: (old-k8s-version-276459)     
	I0731 17:57:39.683292   67995 main.go:141] libmachine: (old-k8s-version-276459)   </devices>
	I0731 17:57:39.683298   67995 main.go:141] libmachine: (old-k8s-version-276459) </domain>
	I0731 17:57:39.683310   67995 main.go:141] libmachine: (old-k8s-version-276459) 
	I0731 17:57:39.688638   67995 main.go:141] libmachine: (old-k8s-version-276459) DBG | domain old-k8s-version-276459 has defined MAC address 52:54:00:89:39:4e in network default
	I0731 17:57:39.689230   67995 main.go:141] libmachine: (old-k8s-version-276459) Ensuring networks are active...
	I0731 17:57:39.689252   67995 main.go:141] libmachine: (old-k8s-version-276459) DBG | domain old-k8s-version-276459 has defined MAC address 52:54:00:79:9d:96 in network mk-old-k8s-version-276459
	I0731 17:57:39.689888   67995 main.go:141] libmachine: (old-k8s-version-276459) Ensuring network default is active
	I0731 17:57:39.690220   67995 main.go:141] libmachine: (old-k8s-version-276459) Ensuring network mk-old-k8s-version-276459 is active
	I0731 17:57:39.690746   67995 main.go:141] libmachine: (old-k8s-version-276459) Getting domain xml...
	I0731 17:57:39.691416   67995 main.go:141] libmachine: (old-k8s-version-276459) Creating domain...
	I0731 17:57:41.026258   67995 main.go:141] libmachine: (old-k8s-version-276459) Waiting to get IP...
	I0731 17:57:41.027135   67995 main.go:141] libmachine: (old-k8s-version-276459) DBG | domain old-k8s-version-276459 has defined MAC address 52:54:00:79:9d:96 in network mk-old-k8s-version-276459
	I0731 17:57:41.027688   67995 main.go:141] libmachine: (old-k8s-version-276459) DBG | unable to find current IP address of domain old-k8s-version-276459 in network mk-old-k8s-version-276459
	I0731 17:57:41.027755   67995 main.go:141] libmachine: (old-k8s-version-276459) DBG | I0731 17:57:41.027658   68018 retry.go:31] will retry after 299.311272ms: waiting for machine to come up
	I0731 17:57:41.327950   67995 main.go:141] libmachine: (old-k8s-version-276459) DBG | domain old-k8s-version-276459 has defined MAC address 52:54:00:79:9d:96 in network mk-old-k8s-version-276459
	I0731 17:57:41.328415   67995 main.go:141] libmachine: (old-k8s-version-276459) DBG | unable to find current IP address of domain old-k8s-version-276459 in network mk-old-k8s-version-276459
	I0731 17:57:41.328439   67995 main.go:141] libmachine: (old-k8s-version-276459) DBG | I0731 17:57:41.328372   68018 retry.go:31] will retry after 362.13362ms: waiting for machine to come up
	I0731 17:57:41.691998   67995 main.go:141] libmachine: (old-k8s-version-276459) DBG | domain old-k8s-version-276459 has defined MAC address 52:54:00:79:9d:96 in network mk-old-k8s-version-276459
	I0731 17:57:41.692503   67995 main.go:141] libmachine: (old-k8s-version-276459) DBG | unable to find current IP address of domain old-k8s-version-276459 in network mk-old-k8s-version-276459
	I0731 17:57:41.692536   67995 main.go:141] libmachine: (old-k8s-version-276459) DBG | I0731 17:57:41.692464   68018 retry.go:31] will retry after 432.407689ms: waiting for machine to come up
	I0731 17:57:42.126805   67995 main.go:141] libmachine: (old-k8s-version-276459) DBG | domain old-k8s-version-276459 has defined MAC address 52:54:00:79:9d:96 in network mk-old-k8s-version-276459
	I0731 17:57:42.127489   67995 main.go:141] libmachine: (old-k8s-version-276459) DBG | unable to find current IP address of domain old-k8s-version-276459 in network mk-old-k8s-version-276459
	I0731 17:57:42.127515   67995 main.go:141] libmachine: (old-k8s-version-276459) DBG | I0731 17:57:42.127444   68018 retry.go:31] will retry after 413.716495ms: waiting for machine to come up
	I0731 17:57:42.543029   67995 main.go:141] libmachine: (old-k8s-version-276459) DBG | domain old-k8s-version-276459 has defined MAC address 52:54:00:79:9d:96 in network mk-old-k8s-version-276459
	I0731 17:57:42.543593   67995 main.go:141] libmachine: (old-k8s-version-276459) DBG | unable to find current IP address of domain old-k8s-version-276459 in network mk-old-k8s-version-276459
	I0731 17:57:42.543623   67995 main.go:141] libmachine: (old-k8s-version-276459) DBG | I0731 17:57:42.543539   68018 retry.go:31] will retry after 485.377079ms: waiting for machine to come up
	I0731 17:57:43.030441   67995 main.go:141] libmachine: (old-k8s-version-276459) DBG | domain old-k8s-version-276459 has defined MAC address 52:54:00:79:9d:96 in network mk-old-k8s-version-276459
	I0731 17:57:43.031267   67995 main.go:141] libmachine: (old-k8s-version-276459) DBG | unable to find current IP address of domain old-k8s-version-276459 in network mk-old-k8s-version-276459
	I0731 17:57:43.031290   67995 main.go:141] libmachine: (old-k8s-version-276459) DBG | I0731 17:57:43.031219   68018 retry.go:31] will retry after 923.154755ms: waiting for machine to come up
	I0731 17:57:43.956323   67995 main.go:141] libmachine: (old-k8s-version-276459) DBG | domain old-k8s-version-276459 has defined MAC address 52:54:00:79:9d:96 in network mk-old-k8s-version-276459
	I0731 17:57:43.956817   67995 main.go:141] libmachine: (old-k8s-version-276459) DBG | unable to find current IP address of domain old-k8s-version-276459 in network mk-old-k8s-version-276459
	I0731 17:57:43.956862   67995 main.go:141] libmachine: (old-k8s-version-276459) DBG | I0731 17:57:43.956771   68018 retry.go:31] will retry after 962.456791ms: waiting for machine to come up
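
The retry.go lines above poll for the new VM's DHCP lease, retrying with a growing delay each time the domain's IP is not yet visible. The following is only a stand-alone sketch of that retry-with-backoff pattern; retryUntil, the delay values, and the placeholder check are invented for illustration and are not minikube's implementation:

// retry_sketch.go - illustrative retry loop with a growing delay, mirroring
// the "will retry after ..." lines above. The function being retried is a
// placeholder for whatever check is pending (here: the VM's IP lookup).
package main

import (
	"errors"
	"fmt"
	"time"
)

func retryUntil(maxWait time.Duration, fn func() error) error {
	delay := 300 * time.Millisecond
	deadline := time.Now().Add(maxWait)
	for {
		err := fn()
		if err == nil {
			return nil
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("gave up after %s: %w", maxWait, err)
		}
		fmt.Printf("will retry after %s: %v\n", delay, err)
		time.Sleep(delay)
		delay += delay / 2 // grow the delay between attempts
	}
}

func main() {
	// Placeholder check; the real code asks the hypervisor for the lease.
	attempts := 0
	err := retryUntil(2*time.Minute, func() error {
		attempts++
		if attempts < 5 {
			return errors.New("unable to find current IP address of domain")
		}
		return nil
	})
	fmt.Println("result:", err)
}
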
	I0731 17:57:43.221427   66100 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 17:57:43.720773   66100 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 17:57:44.221388   66100 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 17:57:44.314119   66100 kubeadm.go:1113] duration metric: took 12.239135555s to wait for elevateKubeSystemPrivileges
	I0731 17:57:44.314161   66100 kubeadm.go:394] duration metric: took 24.013024613s to StartCluster
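
The repeated "sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default" runs above poll until the default service account exists (the elevateKubeSystemPrivileges wait). A minimal sketch of that kind of command poll, reusing the kubectl path and kubeconfig from the log; waitForDefaultSA, the interval, and the timeout are assumptions for the example:

// sa_poll.go - illustrative sketch of re-running a command until it succeeds,
// as the repeated "kubectl get sa default" invocations above do.
package main

import (
	"fmt"
	"os/exec"
	"time"
)

func waitForDefaultSA(timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		cmd := exec.Command("sudo", "/var/lib/minikube/binaries/v1.30.3/kubectl",
			"get", "sa", "default", "--kubeconfig=/var/lib/minikube/kubeconfig")
		if err := cmd.Run(); err == nil {
			return nil // the default service account exists
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("default service account not created within %s", timeout)
}

func main() {
	fmt.Println(waitForDefaultSA(3 * time.Minute))
}
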
	I0731 17:57:44.314181   66100 settings.go:142] acquiring lock: {Name:mk8ac8269296bf0b887a00ff74caaad180005f89 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 17:57:44.314250   66100 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19349-8084/kubeconfig
	I0731 17:57:44.316062   66100 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19349-8084/kubeconfig: {Name:mkf2340bc1ada3c683f132aca37228d7588e1fdb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 17:57:44.316324   66100 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0731 17:57:44.316337   66100 start.go:235] Will wait 15m0s for node &{Name: IP:192.168.72.103 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0731 17:57:44.316399   66100 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0731 17:57:44.316500   66100 addons.go:69] Setting storage-provisioner=true in profile "enable-default-cni-985288"
	I0731 17:57:44.316522   66100 config.go:182] Loaded profile config "enable-default-cni-985288": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0731 17:57:44.316530   66100 addons.go:234] Setting addon storage-provisioner=true in "enable-default-cni-985288"
	I0731 17:57:44.316562   66100 host.go:66] Checking if "enable-default-cni-985288" exists ...
	I0731 17:57:44.316569   66100 addons.go:69] Setting default-storageclass=true in profile "enable-default-cni-985288"
	I0731 17:57:44.316604   66100 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "enable-default-cni-985288"
	I0731 17:57:44.317026   66100 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 17:57:44.317042   66100 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 17:57:44.317069   66100 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 17:57:44.317179   66100 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 17:57:44.318154   66100 out.go:177] * Verifying Kubernetes components...
	I0731 17:57:44.319445   66100 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0731 17:57:44.337635   66100 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35799
	I0731 17:57:44.337889   66100 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33371
	I0731 17:57:44.338048   66100 main.go:141] libmachine: () Calling .GetVersion
	I0731 17:57:44.338600   66100 main.go:141] libmachine: Using API Version  1
	I0731 17:57:44.338617   66100 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 17:57:44.338722   66100 main.go:141] libmachine: () Calling .GetVersion
	I0731 17:57:44.338963   66100 main.go:141] libmachine: () Calling .GetMachineName
	I0731 17:57:44.339213   66100 main.go:141] libmachine: (enable-default-cni-985288) Calling .GetState
	I0731 17:57:44.339355   66100 main.go:141] libmachine: Using API Version  1
	I0731 17:57:44.339382   66100 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 17:57:44.340199   66100 main.go:141] libmachine: () Calling .GetMachineName
	I0731 17:57:44.340882   66100 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 17:57:44.341002   66100 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 17:57:44.343223   66100 addons.go:234] Setting addon default-storageclass=true in "enable-default-cni-985288"
	I0731 17:57:44.343261   66100 host.go:66] Checking if "enable-default-cni-985288" exists ...
	I0731 17:57:44.343640   66100 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 17:57:44.343661   66100 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 17:57:44.366785   66100 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43581
	I0731 17:57:44.367940   66100 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42547
	I0731 17:57:44.368080   66100 main.go:141] libmachine: () Calling .GetVersion
	I0731 17:57:44.368349   66100 main.go:141] libmachine: () Calling .GetVersion
	I0731 17:57:44.368636   66100 main.go:141] libmachine: Using API Version  1
	I0731 17:57:44.368656   66100 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 17:57:44.368815   66100 main.go:141] libmachine: Using API Version  1
	I0731 17:57:44.368831   66100 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 17:57:44.369143   66100 main.go:141] libmachine: () Calling .GetMachineName
	I0731 17:57:44.369189   66100 main.go:141] libmachine: () Calling .GetMachineName
	I0731 17:57:44.369303   66100 main.go:141] libmachine: (enable-default-cni-985288) Calling .GetState
	I0731 17:57:44.369834   66100 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 17:57:44.369852   66100 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 17:57:44.375216   66100 main.go:141] libmachine: (enable-default-cni-985288) Calling .DriverName
	I0731 17:57:44.380301   66100 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0731 17:57:43.304972   66339 api_server.go:279] https://192.168.61.234:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0731 17:57:43.305002   66339 api_server.go:103] status: https://192.168.61.234:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0731 17:57:43.305017   66339 api_server.go:253] Checking apiserver healthz at https://192.168.61.234:8443/healthz ...
	I0731 17:57:43.431399   66339 api_server.go:279] https://192.168.61.234:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0731 17:57:43.431432   66339 api_server.go:103] status: https://192.168.61.234:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0731 17:57:43.437565   66339 api_server.go:253] Checking apiserver healthz at https://192.168.61.234:8443/healthz ...
	I0731 17:57:43.456197   66339 api_server.go:279] https://192.168.61.234:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0731 17:57:43.456225   66339 api_server.go:103] status: https://192.168.61.234:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0731 17:57:43.937404   66339 api_server.go:253] Checking apiserver healthz at https://192.168.61.234:8443/healthz ...
	I0731 17:57:43.943022   66339 api_server.go:279] https://192.168.61.234:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0731 17:57:43.943050   66339 api_server.go:103] status: https://192.168.61.234:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0731 17:57:44.437990   66339 api_server.go:253] Checking apiserver healthz at https://192.168.61.234:8443/healthz ...
	I0731 17:57:44.443650   66339 api_server.go:279] https://192.168.61.234:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0731 17:57:44.443682   66339 api_server.go:103] status: https://192.168.61.234:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0731 17:57:44.937917   66339 api_server.go:253] Checking apiserver healthz at https://192.168.61.234:8443/healthz ...
	I0731 17:57:44.948633   66339 api_server.go:279] https://192.168.61.234:8443/healthz returned 200:
	ok
	I0731 17:57:44.963571   66339 api_server.go:141] control plane version: v1.31.0-beta.0
	I0731 17:57:44.963603   66339 api_server.go:131] duration metric: took 4.526363862s to wait for apiserver health ...
	I0731 17:57:44.963615   66339 cni.go:84] Creating CNI manager for ""
	I0731 17:57:44.963626   66339 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0731 17:57:44.965147   66339 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0731 17:57:44.966515   66339 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0731 17:57:44.990700   66339 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0731 17:57:45.011809   66339 system_pods.go:43] waiting for kube-system pods to appear ...
	I0731 17:57:45.032145   66339 system_pods.go:59] 8 kube-system pods found
	I0731 17:57:45.032186   66339 system_pods.go:61] "coredns-5cfdc65f69-w89wl" [5ff43ce4-ded1-4e1d-a56b-7b26520a67ca] Running
	I0731 17:57:45.032197   66339 system_pods.go:61] "coredns-5cfdc65f69-wxfw7" [10f2f1ac-c2ba-4b22-8796-d55a5466d2d7] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0731 17:57:45.032207   66339 system_pods.go:61] "etcd-kubernetes-upgrade-410576" [1f0668d9-cb3e-4518-aeb0-59dd9201fd8c] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0731 17:57:45.032218   66339 system_pods.go:61] "kube-apiserver-kubernetes-upgrade-410576" [46d4af37-ecef-4373-915d-f27b0ffe8afc] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0731 17:57:45.032227   66339 system_pods.go:61] "kube-controller-manager-kubernetes-upgrade-410576" [49e80679-2966-41d8-9493-1522b22526ce] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0731 17:57:45.032235   66339 system_pods.go:61] "kube-proxy-njk2g" [56020402-3c78-4887-b4d1-a0c482227876] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0731 17:57:45.032242   66339 system_pods.go:61] "kube-scheduler-kubernetes-upgrade-410576" [94a5f2e0-e42e-4367-aec9-2cb6a2b20343] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0731 17:57:45.032250   66339 system_pods.go:61] "storage-provisioner" [c9f10202-322d-48c8-9c27-2eccc9c698e3] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0731 17:57:45.032264   66339 system_pods.go:74] duration metric: took 20.430949ms to wait for pod list to return data ...
	I0731 17:57:45.032277   66339 node_conditions.go:102] verifying NodePressure condition ...
	I0731 17:57:45.041310   66339 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0731 17:57:45.041345   66339 node_conditions.go:123] node cpu capacity is 2
	I0731 17:57:45.041357   66339 node_conditions.go:105] duration metric: took 9.073121ms to run NodePressure ...
	I0731 17:57:45.041377   66339 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0731 17:57:45.401504   66339 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0731 17:57:45.414979   66339 ops.go:34] apiserver oom_adj: -16
	I0731 17:57:45.415001   66339 kubeadm.go:597] duration metric: took 17.493744415s to restartPrimaryControlPlane
	I0731 17:57:45.415012   66339 kubeadm.go:394] duration metric: took 17.593324108s to StartCluster
	I0731 17:57:45.415030   66339 settings.go:142] acquiring lock: {Name:mk8ac8269296bf0b887a00ff74caaad180005f89 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 17:57:45.415142   66339 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19349-8084/kubeconfig
	I0731 17:57:45.416813   66339 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19349-8084/kubeconfig: {Name:mkf2340bc1ada3c683f132aca37228d7588e1fdb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 17:57:45.417119   66339 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.61.234 Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0731 17:57:45.417245   66339 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0731 17:57:45.417325   66339 addons.go:69] Setting storage-provisioner=true in profile "kubernetes-upgrade-410576"
	I0731 17:57:45.417361   66339 addons.go:234] Setting addon storage-provisioner=true in "kubernetes-upgrade-410576"
	W0731 17:57:45.417377   66339 addons.go:243] addon storage-provisioner should already be in state true
	I0731 17:57:45.417386   66339 config.go:182] Loaded profile config "kubernetes-upgrade-410576": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0-beta.0
	I0731 17:57:45.417407   66339 host.go:66] Checking if "kubernetes-upgrade-410576" exists ...
	I0731 17:57:45.417520   66339 addons.go:69] Setting default-storageclass=true in profile "kubernetes-upgrade-410576"
	I0731 17:57:45.417573   66339 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "kubernetes-upgrade-410576"
	I0731 17:57:45.417890   66339 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 17:57:45.417915   66339 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 17:57:45.417968   66339 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 17:57:45.417996   66339 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 17:57:45.419865   66339 out.go:177] * Verifying Kubernetes components...
	I0731 17:57:45.421325   66339 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0731 17:57:45.435282   66339 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36137
	I0731 17:57:45.435848   66339 main.go:141] libmachine: () Calling .GetVersion
	I0731 17:57:45.436403   66339 main.go:141] libmachine: Using API Version  1
	I0731 17:57:45.436424   66339 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 17:57:45.436811   66339 main.go:141] libmachine: () Calling .GetMachineName
	I0731 17:57:45.437392   66339 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 17:57:45.437416   66339 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 17:57:45.437425   66339 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38687
	I0731 17:57:45.437784   66339 main.go:141] libmachine: () Calling .GetVersion
	I0731 17:57:45.438258   66339 main.go:141] libmachine: Using API Version  1
	I0731 17:57:45.438273   66339 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 17:57:45.438636   66339 main.go:141] libmachine: () Calling .GetMachineName
	I0731 17:57:45.438829   66339 main.go:141] libmachine: (kubernetes-upgrade-410576) Calling .GetState
	I0731 17:57:45.442176   66339 kapi.go:59] client config for kubernetes-upgrade-410576: &rest.Config{Host:"https://192.168.61.234:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19349-8084/.minikube/profiles/kubernetes-upgrade-410576/client.crt", KeyFile:"/home/jenkins/minikube-integration/19349-8084/.minikube/profiles/kubernetes-upgrade-410576/client.key", CAFile:"/home/jenkins/minikube-integration/19349-8084/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil),
CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1d02f40), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0731 17:57:45.442501   66339 addons.go:234] Setting addon default-storageclass=true in "kubernetes-upgrade-410576"
	W0731 17:57:45.442523   66339 addons.go:243] addon default-storageclass should already be in state true
	I0731 17:57:45.442558   66339 host.go:66] Checking if "kubernetes-upgrade-410576" exists ...
	I0731 17:57:45.442950   66339 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 17:57:45.442983   66339 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 17:57:45.457161   66339 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43689
	I0731 17:57:45.457783   66339 main.go:141] libmachine: () Calling .GetVersion
	I0731 17:57:45.458495   66339 main.go:141] libmachine: Using API Version  1
	I0731 17:57:45.458511   66339 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 17:57:45.458940   66339 main.go:141] libmachine: () Calling .GetMachineName
	I0731 17:57:45.459222   66339 main.go:141] libmachine: (kubernetes-upgrade-410576) Calling .GetState
	I0731 17:57:45.461387   66339 main.go:141] libmachine: (kubernetes-upgrade-410576) Calling .DriverName
	I0731 17:57:45.463489   66339 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0731 17:57:44.386633   66100 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0731 17:57:44.386656   66100 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0731 17:57:44.386680   66100 main.go:141] libmachine: (enable-default-cni-985288) Calling .GetSSHHostname
	I0731 17:57:44.388955   66100 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39155
	I0731 17:57:44.389464   66100 main.go:141] libmachine: () Calling .GetVersion
	I0731 17:57:44.389960   66100 main.go:141] libmachine: Using API Version  1
	I0731 17:57:44.389975   66100 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 17:57:44.390357   66100 main.go:141] libmachine: () Calling .GetMachineName
	I0731 17:57:44.390567   66100 main.go:141] libmachine: (enable-default-cni-985288) Calling .GetState
	I0731 17:57:44.393003   66100 main.go:141] libmachine: (enable-default-cni-985288) Calling .DriverName
	I0731 17:57:44.393289   66100 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0731 17:57:44.393303   66100 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0731 17:57:44.393321   66100 main.go:141] libmachine: (enable-default-cni-985288) Calling .GetSSHHostname
	I0731 17:57:44.395333   66100 main.go:141] libmachine: (enable-default-cni-985288) DBG | domain enable-default-cni-985288 has defined MAC address 52:54:00:ea:c1:da in network mk-enable-default-cni-985288
	I0731 17:57:44.395748   66100 main.go:141] libmachine: (enable-default-cni-985288) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ea:c1:da", ip: ""} in network mk-enable-default-cni-985288: {Iface:virbr1 ExpiryTime:2024-07-31 18:56:59 +0000 UTC Type:0 Mac:52:54:00:ea:c1:da Iaid: IPaddr:192.168.72.103 Prefix:24 Hostname:enable-default-cni-985288 Clientid:01:52:54:00:ea:c1:da}
	I0731 17:57:44.395765   66100 main.go:141] libmachine: (enable-default-cni-985288) DBG | domain enable-default-cni-985288 has defined IP address 192.168.72.103 and MAC address 52:54:00:ea:c1:da in network mk-enable-default-cni-985288
	I0731 17:57:44.396017   66100 main.go:141] libmachine: (enable-default-cni-985288) Calling .GetSSHPort
	I0731 17:57:44.396168   66100 main.go:141] libmachine: (enable-default-cni-985288) Calling .GetSSHKeyPath
	I0731 17:57:44.396370   66100 main.go:141] libmachine: (enable-default-cni-985288) Calling .GetSSHUsername
	I0731 17:57:44.396567   66100 sshutil.go:53] new ssh client: &{IP:192.168.72.103 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19349-8084/.minikube/machines/enable-default-cni-985288/id_rsa Username:docker}
	I0731 17:57:44.399257   66100 main.go:141] libmachine: (enable-default-cni-985288) DBG | domain enable-default-cni-985288 has defined MAC address 52:54:00:ea:c1:da in network mk-enable-default-cni-985288
	I0731 17:57:44.399283   66100 main.go:141] libmachine: (enable-default-cni-985288) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ea:c1:da", ip: ""} in network mk-enable-default-cni-985288: {Iface:virbr1 ExpiryTime:2024-07-31 18:56:59 +0000 UTC Type:0 Mac:52:54:00:ea:c1:da Iaid: IPaddr:192.168.72.103 Prefix:24 Hostname:enable-default-cni-985288 Clientid:01:52:54:00:ea:c1:da}
	I0731 17:57:44.399309   66100 main.go:141] libmachine: (enable-default-cni-985288) DBG | domain enable-default-cni-985288 has defined IP address 192.168.72.103 and MAC address 52:54:00:ea:c1:da in network mk-enable-default-cni-985288
	I0731 17:57:44.399325   66100 main.go:141] libmachine: (enable-default-cni-985288) Calling .GetSSHPort
	I0731 17:57:44.399627   66100 main.go:141] libmachine: (enable-default-cni-985288) Calling .GetSSHKeyPath
	I0731 17:57:44.399778   66100 main.go:141] libmachine: (enable-default-cni-985288) Calling .GetSSHUsername
	I0731 17:57:44.399909   66100 sshutil.go:53] new ssh client: &{IP:192.168.72.103 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19349-8084/.minikube/machines/enable-default-cni-985288/id_rsa Username:docker}
	I0731 17:57:44.581582   66100 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.72.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0731 17:57:44.581743   66100 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0731 17:57:44.741322   66100 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0731 17:57:44.783312   66100 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0731 17:57:45.240643   66100 start.go:971] {"host.minikube.internal": 192.168.72.1} host record injected into CoreDNS's ConfigMap
	I0731 17:57:45.244529   66100 node_ready.go:35] waiting up to 15m0s for node "enable-default-cni-985288" to be "Ready" ...
	I0731 17:57:45.270185   66100 node_ready.go:49] node "enable-default-cni-985288" has status "Ready":"True"
	I0731 17:57:45.270211   66100 node_ready.go:38] duration metric: took 25.65171ms for node "enable-default-cni-985288" to be "Ready" ...
	I0731 17:57:45.270221   66100 pod_ready.go:35] extra waiting up to 15m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0731 17:57:45.284714   66100 pod_ready.go:78] waiting up to 15m0s for pod "coredns-7db6d8ff4d-jnf9w" in "kube-system" namespace to be "Ready" ...
	I0731 17:57:45.750601   66100 kapi.go:214] "coredns" deployment in "kube-system" namespace and "enable-default-cni-985288" context rescaled to 1 replicas
	I0731 17:57:46.111624   66100 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.328263993s)
	I0731 17:57:46.111675   66100 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.370315514s)
	I0731 17:57:46.111684   66100 main.go:141] libmachine: Making call to close driver server
	I0731 17:57:46.111703   66100 main.go:141] libmachine: (enable-default-cni-985288) Calling .Close
	I0731 17:57:46.111709   66100 main.go:141] libmachine: Making call to close driver server
	I0731 17:57:46.111721   66100 main.go:141] libmachine: (enable-default-cni-985288) Calling .Close
	I0731 17:57:46.112110   66100 main.go:141] libmachine: (enable-default-cni-985288) DBG | Closing plugin on server side
	I0731 17:57:46.112136   66100 main.go:141] libmachine: (enable-default-cni-985288) DBG | Closing plugin on server side
	I0731 17:57:46.112170   66100 main.go:141] libmachine: Successfully made call to close driver server
	I0731 17:57:46.112180   66100 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 17:57:46.112190   66100 main.go:141] libmachine: Making call to close driver server
	I0731 17:57:46.112199   66100 main.go:141] libmachine: (enable-default-cni-985288) Calling .Close
	I0731 17:57:46.112293   66100 main.go:141] libmachine: Successfully made call to close driver server
	I0731 17:57:46.112311   66100 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 17:57:46.112321   66100 main.go:141] libmachine: Making call to close driver server
	I0731 17:57:46.112332   66100 main.go:141] libmachine: (enable-default-cni-985288) Calling .Close
	I0731 17:57:46.114255   66100 main.go:141] libmachine: (enable-default-cni-985288) DBG | Closing plugin on server side
	I0731 17:57:46.114278   66100 main.go:141] libmachine: (enable-default-cni-985288) DBG | Closing plugin on server side
	I0731 17:57:46.114288   66100 main.go:141] libmachine: Successfully made call to close driver server
	I0731 17:57:46.114303   66100 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 17:57:46.114321   66100 main.go:141] libmachine: Successfully made call to close driver server
	I0731 17:57:46.114330   66100 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 17:57:46.151419   66100 main.go:141] libmachine: Making call to close driver server
	I0731 17:57:46.151447   66100 main.go:141] libmachine: (enable-default-cni-985288) Calling .Close
	I0731 17:57:46.151770   66100 main.go:141] libmachine: Successfully made call to close driver server
	I0731 17:57:46.151795   66100 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 17:57:46.154169   66100 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0731 17:57:45.465928   66339 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39861
	I0731 17:57:45.466281   66339 main.go:141] libmachine: () Calling .GetVersion
	I0731 17:57:45.466745   66339 main.go:141] libmachine: Using API Version  1
	I0731 17:57:45.466761   66339 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 17:57:45.467123   66339 main.go:141] libmachine: () Calling .GetMachineName
	I0731 17:57:45.467548   66339 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0731 17:57:45.467566   66339 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0731 17:57:45.467592   66339 main.go:141] libmachine: (kubernetes-upgrade-410576) Calling .GetSSHHostname
	I0731 17:57:45.467723   66339 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 17:57:45.467769   66339 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 17:57:45.471981   66339 main.go:141] libmachine: (kubernetes-upgrade-410576) DBG | domain kubernetes-upgrade-410576 has defined MAC address 52:54:00:c8:b0:36 in network mk-kubernetes-upgrade-410576
	I0731 17:57:45.472483   66339 main.go:141] libmachine: (kubernetes-upgrade-410576) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:b0:36", ip: ""} in network mk-kubernetes-upgrade-410576: {Iface:virbr2 ExpiryTime:2024-07-31 18:56:11 +0000 UTC Type:0 Mac:52:54:00:c8:b0:36 Iaid: IPaddr:192.168.61.234 Prefix:24 Hostname:kubernetes-upgrade-410576 Clientid:01:52:54:00:c8:b0:36}
	I0731 17:57:45.472502   66339 main.go:141] libmachine: (kubernetes-upgrade-410576) DBG | domain kubernetes-upgrade-410576 has defined IP address 192.168.61.234 and MAC address 52:54:00:c8:b0:36 in network mk-kubernetes-upgrade-410576
	I0731 17:57:45.472787   66339 main.go:141] libmachine: (kubernetes-upgrade-410576) Calling .GetSSHPort
	I0731 17:57:45.475325   66339 main.go:141] libmachine: (kubernetes-upgrade-410576) Calling .GetSSHKeyPath
	I0731 17:57:45.475502   66339 main.go:141] libmachine: (kubernetes-upgrade-410576) Calling .GetSSHUsername
	I0731 17:57:45.475655   66339 sshutil.go:53] new ssh client: &{IP:192.168.61.234 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19349-8084/.minikube/machines/kubernetes-upgrade-410576/id_rsa Username:docker}
	I0731 17:57:45.490058   66339 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39375
	I0731 17:57:45.490723   66339 main.go:141] libmachine: () Calling .GetVersion
	I0731 17:57:45.491351   66339 main.go:141] libmachine: Using API Version  1
	I0731 17:57:45.491377   66339 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 17:57:45.491952   66339 main.go:141] libmachine: () Calling .GetMachineName
	I0731 17:57:45.492173   66339 main.go:141] libmachine: (kubernetes-upgrade-410576) Calling .GetState
	I0731 17:57:45.494320   66339 main.go:141] libmachine: (kubernetes-upgrade-410576) Calling .DriverName
	I0731 17:57:45.494701   66339 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0731 17:57:45.494719   66339 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0731 17:57:45.494745   66339 main.go:141] libmachine: (kubernetes-upgrade-410576) Calling .GetSSHHostname
	I0731 17:57:45.498197   66339 main.go:141] libmachine: (kubernetes-upgrade-410576) DBG | domain kubernetes-upgrade-410576 has defined MAC address 52:54:00:c8:b0:36 in network mk-kubernetes-upgrade-410576
	I0731 17:57:45.498693   66339 main.go:141] libmachine: (kubernetes-upgrade-410576) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:b0:36", ip: ""} in network mk-kubernetes-upgrade-410576: {Iface:virbr2 ExpiryTime:2024-07-31 18:56:11 +0000 UTC Type:0 Mac:52:54:00:c8:b0:36 Iaid: IPaddr:192.168.61.234 Prefix:24 Hostname:kubernetes-upgrade-410576 Clientid:01:52:54:00:c8:b0:36}
	I0731 17:57:45.498719   66339 main.go:141] libmachine: (kubernetes-upgrade-410576) DBG | domain kubernetes-upgrade-410576 has defined IP address 192.168.61.234 and MAC address 52:54:00:c8:b0:36 in network mk-kubernetes-upgrade-410576
	I0731 17:57:45.498912   66339 main.go:141] libmachine: (kubernetes-upgrade-410576) Calling .GetSSHPort
	I0731 17:57:45.499100   66339 main.go:141] libmachine: (kubernetes-upgrade-410576) Calling .GetSSHKeyPath
	I0731 17:57:45.499298   66339 main.go:141] libmachine: (kubernetes-upgrade-410576) Calling .GetSSHUsername
	I0731 17:57:45.499472   66339 sshutil.go:53] new ssh client: &{IP:192.168.61.234 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19349-8084/.minikube/machines/kubernetes-upgrade-410576/id_rsa Username:docker}
	I0731 17:57:45.679365   66339 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0731 17:57:45.706132   66339 api_server.go:52] waiting for apiserver process to appear ...
	I0731 17:57:45.706226   66339 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 17:57:45.726191   66339 api_server.go:72] duration metric: took 309.029929ms to wait for apiserver process to appear ...
	I0731 17:57:45.726220   66339 api_server.go:88] waiting for apiserver healthz status ...
	I0731 17:57:45.726245   66339 api_server.go:253] Checking apiserver healthz at https://192.168.61.234:8443/healthz ...
	I0731 17:57:45.734087   66339 api_server.go:279] https://192.168.61.234:8443/healthz returned 200:
	ok
	I0731 17:57:45.735992   66339 api_server.go:141] control plane version: v1.31.0-beta.0
	I0731 17:57:45.736017   66339 api_server.go:131] duration metric: took 9.789652ms to wait for apiserver health ...
	I0731 17:57:45.736026   66339 system_pods.go:43] waiting for kube-system pods to appear ...
	I0731 17:57:45.743429   66339 system_pods.go:59] 8 kube-system pods found
	I0731 17:57:45.743456   66339 system_pods.go:61] "coredns-5cfdc65f69-w89wl" [5ff43ce4-ded1-4e1d-a56b-7b26520a67ca] Running
	I0731 17:57:45.743462   66339 system_pods.go:61] "coredns-5cfdc65f69-wxfw7" [10f2f1ac-c2ba-4b22-8796-d55a5466d2d7] Running
	I0731 17:57:45.743471   66339 system_pods.go:61] "etcd-kubernetes-upgrade-410576" [1f0668d9-cb3e-4518-aeb0-59dd9201fd8c] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0731 17:57:45.743477   66339 system_pods.go:61] "kube-apiserver-kubernetes-upgrade-410576" [46d4af37-ecef-4373-915d-f27b0ffe8afc] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0731 17:57:45.743487   66339 system_pods.go:61] "kube-controller-manager-kubernetes-upgrade-410576" [49e80679-2966-41d8-9493-1522b22526ce] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0731 17:57:45.743494   66339 system_pods.go:61] "kube-proxy-njk2g" [56020402-3c78-4887-b4d1-a0c482227876] Running
	I0731 17:57:45.743509   66339 system_pods.go:61] "kube-scheduler-kubernetes-upgrade-410576" [94a5f2e0-e42e-4367-aec9-2cb6a2b20343] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0731 17:57:45.743514   66339 system_pods.go:61] "storage-provisioner" [c9f10202-322d-48c8-9c27-2eccc9c698e3] Running
	I0731 17:57:45.743523   66339 system_pods.go:74] duration metric: took 7.488941ms to wait for pod list to return data ...
	I0731 17:57:45.743537   66339 kubeadm.go:582] duration metric: took 326.380883ms to wait for: map[apiserver:true system_pods:true]
	I0731 17:57:45.743555   66339 node_conditions.go:102] verifying NodePressure condition ...
	I0731 17:57:45.748493   66339 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0731 17:57:45.748515   66339 node_conditions.go:123] node cpu capacity is 2
	I0731 17:57:45.748524   66339 node_conditions.go:105] duration metric: took 4.964653ms to run NodePressure ...
	I0731 17:57:45.748535   66339 start.go:241] waiting for startup goroutines ...
	I0731 17:57:45.852351   66339 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0731 17:57:45.871206   66339 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0731 17:57:46.660350   66339 main.go:141] libmachine: Making call to close driver server
	I0731 17:57:46.660374   66339 main.go:141] libmachine: (kubernetes-upgrade-410576) Calling .Close
	I0731 17:57:46.660553   66339 main.go:141] libmachine: Making call to close driver server
	I0731 17:57:46.660571   66339 main.go:141] libmachine: (kubernetes-upgrade-410576) Calling .Close
	I0731 17:57:46.660702   66339 main.go:141] libmachine: Successfully made call to close driver server
	I0731 17:57:46.660718   66339 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 17:57:46.660728   66339 main.go:141] libmachine: Making call to close driver server
	I0731 17:57:46.660753   66339 main.go:141] libmachine: (kubernetes-upgrade-410576) Calling .Close
	I0731 17:57:46.661164   66339 main.go:141] libmachine: (kubernetes-upgrade-410576) DBG | Closing plugin on server side
	I0731 17:57:46.661214   66339 main.go:141] libmachine: Successfully made call to close driver server
	I0731 17:57:46.661224   66339 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 17:57:46.661231   66339 main.go:141] libmachine: Making call to close driver server
	I0731 17:57:46.661239   66339 main.go:141] libmachine: (kubernetes-upgrade-410576) Calling .Close
	I0731 17:57:46.661244   66339 main.go:141] libmachine: Successfully made call to close driver server
	I0731 17:57:46.661266   66339 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 17:57:46.661265   66339 main.go:141] libmachine: (kubernetes-upgrade-410576) DBG | Closing plugin on server side
	I0731 17:57:46.662885   66339 main.go:141] libmachine: Successfully made call to close driver server
	I0731 17:57:46.662916   66339 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 17:57:46.678938   66339 main.go:141] libmachine: Making call to close driver server
	I0731 17:57:46.678963   66339 main.go:141] libmachine: (kubernetes-upgrade-410576) Calling .Close
	I0731 17:57:46.679306   66339 main.go:141] libmachine: Successfully made call to close driver server
	I0731 17:57:46.679327   66339 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 17:57:46.681515   66339 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0731 17:57:46.682858   66339 addons.go:510] duration metric: took 1.265615598s for enable addons: enabled=[storage-provisioner default-storageclass]
	I0731 17:57:46.682895   66339 start.go:246] waiting for cluster config update ...
	I0731 17:57:46.682909   66339 start.go:255] writing updated cluster config ...
	I0731 17:57:46.683181   66339 ssh_runner.go:195] Run: rm -f paused
	I0731 17:57:46.751730   66339 start.go:600] kubectl: 1.30.3, cluster: 1.31.0-beta.0 (minor skew: 1)
	I0731 17:57:46.753534   66339 out.go:177] * Done! kubectl is now configured to use "kubernetes-upgrade-410576" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Jul 31 17:57:47 kubernetes-upgrade-410576 crio[3090]: time="2024-07-31 17:57:47.578724886Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722448667578672329,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125257,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=53e306bf-37a0-40aa-b2b1-422601974bf7 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 31 17:57:47 kubernetes-upgrade-410576 crio[3090]: time="2024-07-31 17:57:47.580085368Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=ba8462a5-0c37-43a4-91bd-c15005da6975 name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 17:57:47 kubernetes-upgrade-410576 crio[3090]: time="2024-07-31 17:57:47.580475638Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=ba8462a5-0c37-43a4-91bd-c15005da6975 name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 17:57:47 kubernetes-upgrade-410576 crio[3090]: time="2024-07-31 17:57:47.581325209Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:629de0eaa1b081c2caf6f671bcd9e0403b26332e03feff44af6d2e0c2ad8adf4,PodSandboxId:736f7b787e9a8ff0330d847504632133fd9cbb8d3c56741a1822287f7d25abae,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899,State:CONTAINER_RUNNING,CreatedAt:1722448664732971325,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-njk2g,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 56020402-3c78-4887-b4d1-a0c482227876,},Annotations:map[string]string{io.kubernetes.container.hash: 65225ff2,io.kubernetes.container.restartCount: 2,io.kubernetes.container.termin
ationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c40b9ac3395c99df1a84de445e0cb06eadfedee69bfdf09ad378eb4068006d6d,PodSandboxId:c963dd8f11ae92fea8eb01f85a6564163a6b59a558e609af9222d0ff09952758,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722448664716065728,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c9f10202-322d-48c8-9c27-2eccc9c698e3,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eacb3e1a02267ba1c8c9f551fde21eb6ab9608d40e27000f5b5ac19d4f9dfe22,PodSandboxId:2ae8e9103bf15322c13289f332bd594336aca45dc54f021ec00d501a05e564c4,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722448664690439482,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5cfdc65f69-wxfw7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 10f2f1ac-c2ba-4b22-8796-d55a5466d2d7,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\
",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:40773a005fdcee0f6217894d50edb05613801f5ad6625dd3315f24a1db18e7e3,PodSandboxId:23aa46163cecc96909d6b49c01ad3c1c695d84aa92941458e6f13084cd69904d,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa,State:CONTAINER_RUNNING,CreatedAt:1722448659853271733,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-410576,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0bf7953ecb66f0ef8427b99d45
a66b16,},Annotations:map[string]string{io.kubernetes.container.hash: e06de91,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:31df6357a157b3147bc06c033fc167cee993645b9412e914f7c984d2ac6285da,PodSandboxId:9319a1dafe0daf23e1df43f574162eda63910e915d331d6a7c0faf0d57d32db5,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5,State:CONTAINER_RUNNING,CreatedAt:1722448659859999050,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-410576,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 09214
d06b48cae7e5e3758d6c895fac2,},Annotations:map[string]string{io.kubernetes.container.hash: ec666c98,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:429753ff875f7ef2feba8957d2717b031d9aa4eba2ad16d1ce47ae87da4e6314,PodSandboxId:0bd4a644f5c6cfafd6ffed52cf9dc4d5838b07f9dc177642025495846b857c98,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,State:CONTAINER_RUNNING,CreatedAt:1722448659836718280,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-410576,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e1d3e63af2
14df4aad14da8a3bc95d92,},Annotations:map[string]string{io.kubernetes.container.hash: ecb4da08,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:381a87a71330dc352155eb7dc040d753e19bd6d80c227ec956192b984d57c4d3,PodSandboxId:e741a9b7a831f4ba8c7b82e223ec3ba76ab5c2bbc4156d220d6a299782780e56,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b,State:CONTAINER_RUNNING,CreatedAt:1722448656191519477,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-410576,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0c069dd300897f9
14ba30fdd5464a743,},Annotations:map[string]string{io.kubernetes.container.hash: 9efbbee0,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9364ff8bf26e16a16bf273d54905712d78f1ed9dee9d0a29c3f404694ea6aff2,PodSandboxId:2f0cf8d7f45ba400f577da59d5efcd325547bd6fc496cbebf1eb20a7ea7b95b2,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722448655680171780,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5cfdc65f69-w89wl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5ff43ce4-ded1-4e1d-a56b-7b26520a67ca,},Annotations
:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:206a4121a79649e682ff94ef24c8661b0ea16a74394af1f79502c1bdff994938,PodSandboxId:2ae8e9103bf15322c13289f332bd594336aca45dc54f021ec00d501a05e564c4,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722448647400815645,Labels:map[string]string{io.kube
rnetes.container.name: coredns,io.kubernetes.pod.name: coredns-5cfdc65f69-wxfw7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 10f2f1ac-c2ba-4b22-8796-d55a5466d2d7,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ad9115ef25d6c62650db1af3db0851e77fe94c2532bf7955814a4e98ba8783ad,PodSandboxId:58ed9203fdbe6e9fca6f4ae1c80ed58c6a996f233aef581e21e5542b07d9697f,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},Use
rSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722448645159900842,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5cfdc65f69-w89wl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5ff43ce4-ded1-4e1d-a56b-7b26520a67ca,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8b06b35c4ae1b835ad2634098192f08ccc58ce1449710608870143dbbf92e364,PodSandboxId:b1e3fdef5976de519617b76d350de4b630499fefcdaca7a
ecd7cd0f3e64dbf15,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899,State:CONTAINER_EXITED,CreatedAt:1722448643814533950,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-njk2g,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 56020402-3c78-4887-b4d1-a0c482227876,},Annotations:map[string]string{io.kubernetes.container.hash: 65225ff2,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8817455f700c7831efb13c71044d6b1b813c31e0d5a632259f23c2a1e7a87595,PodSandboxId:20ace8b06ff9e1791e658543647b552e9bf050b8866534cd6e4e738c741bbd0d,Metadata:&Contai
nerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1722448643882989991,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c9f10202-322d-48c8-9c27-2eccc9c698e3,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ff5fcc9fc7c5612da84814e64df5f38f1958d94615a2af74d4961e146d18f5cb,PodSandboxId:5200eb3d246ca31bbc7da38f742eac7998598c2ffa2a95717f3592883a5b7c27,Metadata:&ContainerMetadata{N
ame:etcd,Attempt:1,},Image:&ImageSpec{Image:cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa,State:CONTAINER_EXITED,CreatedAt:1722448644070721870,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-410576,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0bf7953ecb66f0ef8427b99d45a66b16,},Annotations:map[string]string{io.kubernetes.container.hash: e06de91,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8d71f853e14817ede82c6e339d8dc76a66cfc97c086886e2151dc299a4e7c9c0,PodSandboxId:ecc82059531ea9294803392abba88300b415cb9e0c6d83847b57eb82ccc77bf6,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:
&ImageSpec{Image:d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b,State:CONTAINER_EXITED,CreatedAt:1722448643951832562,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-410576,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0c069dd300897f914ba30fdd5464a743,},Annotations:map[string]string{io.kubernetes.container.hash: 9efbbee0,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:18f923af77f99a86e9e79c9d3da51029f9f0828ab4112e6f7c8ec3c858068163,PodSandboxId:afc70eba8ddfcc325912a39e1f6c3a304a8195d4052330058ad7a72da28848d9,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&Image
Spec{Image:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,State:CONTAINER_EXITED,CreatedAt:1722448643931716572,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-410576,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e1d3e63af214df4aad14da8a3bc95d92,},Annotations:map[string]string{io.kubernetes.container.hash: ecb4da08,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:89fd134efa85f72b905731e6cfc5deb3dfa58c77e1d0f610bba984100fab74c8,PodSandboxId:52c47f23e64839db2840f493d3e32b4a218d410962459d11b9842152e7ff2a58,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&Im
ageSpec{Image:63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5,State:CONTAINER_EXITED,CreatedAt:1722448643777376741,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-410576,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 09214d06b48cae7e5e3758d6c895fac2,},Annotations:map[string]string{io.kubernetes.container.hash: ec666c98,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=ba8462a5-0c37-43a4-91bd-c15005da6975 name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 17:57:47 kubernetes-upgrade-410576 crio[3090]: time="2024-07-31 17:57:47.630459011Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=5413ed69-6fb7-4382-900a-0f5170d23e45 name=/runtime.v1.RuntimeService/Version
	Jul 31 17:57:47 kubernetes-upgrade-410576 crio[3090]: time="2024-07-31 17:57:47.630809597Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=5413ed69-6fb7-4382-900a-0f5170d23e45 name=/runtime.v1.RuntimeService/Version
	Jul 31 17:57:47 kubernetes-upgrade-410576 crio[3090]: time="2024-07-31 17:57:47.638060223Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=2e831c12-f506-4e8e-92a7-631f9bec8730 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 31 17:57:47 kubernetes-upgrade-410576 crio[3090]: time="2024-07-31 17:57:47.639024742Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722448667638988658,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125257,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=2e831c12-f506-4e8e-92a7-631f9bec8730 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 31 17:57:47 kubernetes-upgrade-410576 crio[3090]: time="2024-07-31 17:57:47.639838306Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=5684f37a-188f-4820-99c5-ca5c7a7ba153 name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 17:57:47 kubernetes-upgrade-410576 crio[3090]: time="2024-07-31 17:57:47.639908626Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=5684f37a-188f-4820-99c5-ca5c7a7ba153 name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 17:57:47 kubernetes-upgrade-410576 crio[3090]: time="2024-07-31 17:57:47.640346844Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:629de0eaa1b081c2caf6f671bcd9e0403b26332e03feff44af6d2e0c2ad8adf4,PodSandboxId:736f7b787e9a8ff0330d847504632133fd9cbb8d3c56741a1822287f7d25abae,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899,State:CONTAINER_RUNNING,CreatedAt:1722448664732971325,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-njk2g,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 56020402-3c78-4887-b4d1-a0c482227876,},Annotations:map[string]string{io.kubernetes.container.hash: 65225ff2,io.kubernetes.container.restartCount: 2,io.kubernetes.container.termin
ationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c40b9ac3395c99df1a84de445e0cb06eadfedee69bfdf09ad378eb4068006d6d,PodSandboxId:c963dd8f11ae92fea8eb01f85a6564163a6b59a558e609af9222d0ff09952758,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722448664716065728,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c9f10202-322d-48c8-9c27-2eccc9c698e3,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eacb3e1a02267ba1c8c9f551fde21eb6ab9608d40e27000f5b5ac19d4f9dfe22,PodSandboxId:2ae8e9103bf15322c13289f332bd594336aca45dc54f021ec00d501a05e564c4,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722448664690439482,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5cfdc65f69-wxfw7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 10f2f1ac-c2ba-4b22-8796-d55a5466d2d7,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\
",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:40773a005fdcee0f6217894d50edb05613801f5ad6625dd3315f24a1db18e7e3,PodSandboxId:23aa46163cecc96909d6b49c01ad3c1c695d84aa92941458e6f13084cd69904d,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa,State:CONTAINER_RUNNING,CreatedAt:1722448659853271733,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-410576,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0bf7953ecb66f0ef8427b99d45
a66b16,},Annotations:map[string]string{io.kubernetes.container.hash: e06de91,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:31df6357a157b3147bc06c033fc167cee993645b9412e914f7c984d2ac6285da,PodSandboxId:9319a1dafe0daf23e1df43f574162eda63910e915d331d6a7c0faf0d57d32db5,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5,State:CONTAINER_RUNNING,CreatedAt:1722448659859999050,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-410576,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 09214
d06b48cae7e5e3758d6c895fac2,},Annotations:map[string]string{io.kubernetes.container.hash: ec666c98,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:429753ff875f7ef2feba8957d2717b031d9aa4eba2ad16d1ce47ae87da4e6314,PodSandboxId:0bd4a644f5c6cfafd6ffed52cf9dc4d5838b07f9dc177642025495846b857c98,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,State:CONTAINER_RUNNING,CreatedAt:1722448659836718280,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-410576,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e1d3e63af2
14df4aad14da8a3bc95d92,},Annotations:map[string]string{io.kubernetes.container.hash: ecb4da08,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:381a87a71330dc352155eb7dc040d753e19bd6d80c227ec956192b984d57c4d3,PodSandboxId:e741a9b7a831f4ba8c7b82e223ec3ba76ab5c2bbc4156d220d6a299782780e56,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b,State:CONTAINER_RUNNING,CreatedAt:1722448656191519477,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-410576,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0c069dd300897f9
14ba30fdd5464a743,},Annotations:map[string]string{io.kubernetes.container.hash: 9efbbee0,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9364ff8bf26e16a16bf273d54905712d78f1ed9dee9d0a29c3f404694ea6aff2,PodSandboxId:2f0cf8d7f45ba400f577da59d5efcd325547bd6fc496cbebf1eb20a7ea7b95b2,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722448655680171780,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5cfdc65f69-w89wl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5ff43ce4-ded1-4e1d-a56b-7b26520a67ca,},Annotations
:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:206a4121a79649e682ff94ef24c8661b0ea16a74394af1f79502c1bdff994938,PodSandboxId:2ae8e9103bf15322c13289f332bd594336aca45dc54f021ec00d501a05e564c4,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722448647400815645,Labels:map[string]string{io.kube
rnetes.container.name: coredns,io.kubernetes.pod.name: coredns-5cfdc65f69-wxfw7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 10f2f1ac-c2ba-4b22-8796-d55a5466d2d7,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ad9115ef25d6c62650db1af3db0851e77fe94c2532bf7955814a4e98ba8783ad,PodSandboxId:58ed9203fdbe6e9fca6f4ae1c80ed58c6a996f233aef581e21e5542b07d9697f,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},Use
rSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722448645159900842,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5cfdc65f69-w89wl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5ff43ce4-ded1-4e1d-a56b-7b26520a67ca,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8b06b35c4ae1b835ad2634098192f08ccc58ce1449710608870143dbbf92e364,PodSandboxId:b1e3fdef5976de519617b76d350de4b630499fefcdaca7a
ecd7cd0f3e64dbf15,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899,State:CONTAINER_EXITED,CreatedAt:1722448643814533950,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-njk2g,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 56020402-3c78-4887-b4d1-a0c482227876,},Annotations:map[string]string{io.kubernetes.container.hash: 65225ff2,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8817455f700c7831efb13c71044d6b1b813c31e0d5a632259f23c2a1e7a87595,PodSandboxId:20ace8b06ff9e1791e658543647b552e9bf050b8866534cd6e4e738c741bbd0d,Metadata:&Contai
nerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1722448643882989991,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c9f10202-322d-48c8-9c27-2eccc9c698e3,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ff5fcc9fc7c5612da84814e64df5f38f1958d94615a2af74d4961e146d18f5cb,PodSandboxId:5200eb3d246ca31bbc7da38f742eac7998598c2ffa2a95717f3592883a5b7c27,Metadata:&ContainerMetadata{N
ame:etcd,Attempt:1,},Image:&ImageSpec{Image:cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa,State:CONTAINER_EXITED,CreatedAt:1722448644070721870,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-410576,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0bf7953ecb66f0ef8427b99d45a66b16,},Annotations:map[string]string{io.kubernetes.container.hash: e06de91,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8d71f853e14817ede82c6e339d8dc76a66cfc97c086886e2151dc299a4e7c9c0,PodSandboxId:ecc82059531ea9294803392abba88300b415cb9e0c6d83847b57eb82ccc77bf6,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:
&ImageSpec{Image:d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b,State:CONTAINER_EXITED,CreatedAt:1722448643951832562,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-410576,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0c069dd300897f914ba30fdd5464a743,},Annotations:map[string]string{io.kubernetes.container.hash: 9efbbee0,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:18f923af77f99a86e9e79c9d3da51029f9f0828ab4112e6f7c8ec3c858068163,PodSandboxId:afc70eba8ddfcc325912a39e1f6c3a304a8195d4052330058ad7a72da28848d9,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&Image
Spec{Image:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,State:CONTAINER_EXITED,CreatedAt:1722448643931716572,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-410576,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e1d3e63af214df4aad14da8a3bc95d92,},Annotations:map[string]string{io.kubernetes.container.hash: ecb4da08,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:89fd134efa85f72b905731e6cfc5deb3dfa58c77e1d0f610bba984100fab74c8,PodSandboxId:52c47f23e64839db2840f493d3e32b4a218d410962459d11b9842152e7ff2a58,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&Im
ageSpec{Image:63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5,State:CONTAINER_EXITED,CreatedAt:1722448643777376741,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-410576,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 09214d06b48cae7e5e3758d6c895fac2,},Annotations:map[string]string{io.kubernetes.container.hash: ec666c98,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=5684f37a-188f-4820-99c5-ca5c7a7ba153 name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 17:57:47 kubernetes-upgrade-410576 crio[3090]: time="2024-07-31 17:57:47.687999358Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=1cb37064-78fb-4af1-9066-5fe19fb9de78 name=/runtime.v1.RuntimeService/Version
	Jul 31 17:57:47 kubernetes-upgrade-410576 crio[3090]: time="2024-07-31 17:57:47.688383973Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=1cb37064-78fb-4af1-9066-5fe19fb9de78 name=/runtime.v1.RuntimeService/Version
	Jul 31 17:57:47 kubernetes-upgrade-410576 crio[3090]: time="2024-07-31 17:57:47.690126532Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=f7243241-02ac-4536-b211-b00cbee40030 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 31 17:57:47 kubernetes-upgrade-410576 crio[3090]: time="2024-07-31 17:57:47.690617758Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722448667690589414,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125257,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=f7243241-02ac-4536-b211-b00cbee40030 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 31 17:57:47 kubernetes-upgrade-410576 crio[3090]: time="2024-07-31 17:57:47.691579697Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=1fca7811-cb82-49a7-858c-a9fe84caeef2 name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 17:57:47 kubernetes-upgrade-410576 crio[3090]: time="2024-07-31 17:57:47.691653074Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=1fca7811-cb82-49a7-858c-a9fe84caeef2 name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 17:57:47 kubernetes-upgrade-410576 crio[3090]: time="2024-07-31 17:57:47.692271209Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:629de0eaa1b081c2caf6f671bcd9e0403b26332e03feff44af6d2e0c2ad8adf4,PodSandboxId:736f7b787e9a8ff0330d847504632133fd9cbb8d3c56741a1822287f7d25abae,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899,State:CONTAINER_RUNNING,CreatedAt:1722448664732971325,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-njk2g,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 56020402-3c78-4887-b4d1-a0c482227876,},Annotations:map[string]string{io.kubernetes.container.hash: 65225ff2,io.kubernetes.container.restartCount: 2,io.kubernetes.container.termin
ationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c40b9ac3395c99df1a84de445e0cb06eadfedee69bfdf09ad378eb4068006d6d,PodSandboxId:c963dd8f11ae92fea8eb01f85a6564163a6b59a558e609af9222d0ff09952758,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722448664716065728,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c9f10202-322d-48c8-9c27-2eccc9c698e3,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eacb3e1a02267ba1c8c9f551fde21eb6ab9608d40e27000f5b5ac19d4f9dfe22,PodSandboxId:2ae8e9103bf15322c13289f332bd594336aca45dc54f021ec00d501a05e564c4,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722448664690439482,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5cfdc65f69-wxfw7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 10f2f1ac-c2ba-4b22-8796-d55a5466d2d7,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\
",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:40773a005fdcee0f6217894d50edb05613801f5ad6625dd3315f24a1db18e7e3,PodSandboxId:23aa46163cecc96909d6b49c01ad3c1c695d84aa92941458e6f13084cd69904d,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa,State:CONTAINER_RUNNING,CreatedAt:1722448659853271733,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-410576,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0bf7953ecb66f0ef8427b99d45
a66b16,},Annotations:map[string]string{io.kubernetes.container.hash: e06de91,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:31df6357a157b3147bc06c033fc167cee993645b9412e914f7c984d2ac6285da,PodSandboxId:9319a1dafe0daf23e1df43f574162eda63910e915d331d6a7c0faf0d57d32db5,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5,State:CONTAINER_RUNNING,CreatedAt:1722448659859999050,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-410576,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 09214
d06b48cae7e5e3758d6c895fac2,},Annotations:map[string]string{io.kubernetes.container.hash: ec666c98,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:429753ff875f7ef2feba8957d2717b031d9aa4eba2ad16d1ce47ae87da4e6314,PodSandboxId:0bd4a644f5c6cfafd6ffed52cf9dc4d5838b07f9dc177642025495846b857c98,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,State:CONTAINER_RUNNING,CreatedAt:1722448659836718280,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-410576,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e1d3e63af2
14df4aad14da8a3bc95d92,},Annotations:map[string]string{io.kubernetes.container.hash: ecb4da08,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:381a87a71330dc352155eb7dc040d753e19bd6d80c227ec956192b984d57c4d3,PodSandboxId:e741a9b7a831f4ba8c7b82e223ec3ba76ab5c2bbc4156d220d6a299782780e56,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b,State:CONTAINER_RUNNING,CreatedAt:1722448656191519477,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-410576,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0c069dd300897f9
14ba30fdd5464a743,},Annotations:map[string]string{io.kubernetes.container.hash: 9efbbee0,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9364ff8bf26e16a16bf273d54905712d78f1ed9dee9d0a29c3f404694ea6aff2,PodSandboxId:2f0cf8d7f45ba400f577da59d5efcd325547bd6fc496cbebf1eb20a7ea7b95b2,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722448655680171780,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5cfdc65f69-w89wl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5ff43ce4-ded1-4e1d-a56b-7b26520a67ca,},Annotations
:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:206a4121a79649e682ff94ef24c8661b0ea16a74394af1f79502c1bdff994938,PodSandboxId:2ae8e9103bf15322c13289f332bd594336aca45dc54f021ec00d501a05e564c4,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722448647400815645,Labels:map[string]string{io.kube
rnetes.container.name: coredns,io.kubernetes.pod.name: coredns-5cfdc65f69-wxfw7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 10f2f1ac-c2ba-4b22-8796-d55a5466d2d7,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ad9115ef25d6c62650db1af3db0851e77fe94c2532bf7955814a4e98ba8783ad,PodSandboxId:58ed9203fdbe6e9fca6f4ae1c80ed58c6a996f233aef581e21e5542b07d9697f,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},Use
rSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722448645159900842,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5cfdc65f69-w89wl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5ff43ce4-ded1-4e1d-a56b-7b26520a67ca,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8b06b35c4ae1b835ad2634098192f08ccc58ce1449710608870143dbbf92e364,PodSandboxId:b1e3fdef5976de519617b76d350de4b630499fefcdaca7a
ecd7cd0f3e64dbf15,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899,State:CONTAINER_EXITED,CreatedAt:1722448643814533950,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-njk2g,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 56020402-3c78-4887-b4d1-a0c482227876,},Annotations:map[string]string{io.kubernetes.container.hash: 65225ff2,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8817455f700c7831efb13c71044d6b1b813c31e0d5a632259f23c2a1e7a87595,PodSandboxId:20ace8b06ff9e1791e658543647b552e9bf050b8866534cd6e4e738c741bbd0d,Metadata:&Contai
nerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1722448643882989991,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c9f10202-322d-48c8-9c27-2eccc9c698e3,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ff5fcc9fc7c5612da84814e64df5f38f1958d94615a2af74d4961e146d18f5cb,PodSandboxId:5200eb3d246ca31bbc7da38f742eac7998598c2ffa2a95717f3592883a5b7c27,Metadata:&ContainerMetadata{N
ame:etcd,Attempt:1,},Image:&ImageSpec{Image:cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa,State:CONTAINER_EXITED,CreatedAt:1722448644070721870,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-410576,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0bf7953ecb66f0ef8427b99d45a66b16,},Annotations:map[string]string{io.kubernetes.container.hash: e06de91,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8d71f853e14817ede82c6e339d8dc76a66cfc97c086886e2151dc299a4e7c9c0,PodSandboxId:ecc82059531ea9294803392abba88300b415cb9e0c6d83847b57eb82ccc77bf6,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:
&ImageSpec{Image:d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b,State:CONTAINER_EXITED,CreatedAt:1722448643951832562,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-410576,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0c069dd300897f914ba30fdd5464a743,},Annotations:map[string]string{io.kubernetes.container.hash: 9efbbee0,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:18f923af77f99a86e9e79c9d3da51029f9f0828ab4112e6f7c8ec3c858068163,PodSandboxId:afc70eba8ddfcc325912a39e1f6c3a304a8195d4052330058ad7a72da28848d9,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&Image
Spec{Image:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,State:CONTAINER_EXITED,CreatedAt:1722448643931716572,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-410576,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e1d3e63af214df4aad14da8a3bc95d92,},Annotations:map[string]string{io.kubernetes.container.hash: ecb4da08,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:89fd134efa85f72b905731e6cfc5deb3dfa58c77e1d0f610bba984100fab74c8,PodSandboxId:52c47f23e64839db2840f493d3e32b4a218d410962459d11b9842152e7ff2a58,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&Im
ageSpec{Image:63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5,State:CONTAINER_EXITED,CreatedAt:1722448643777376741,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-410576,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 09214d06b48cae7e5e3758d6c895fac2,},Annotations:map[string]string{io.kubernetes.container.hash: ec666c98,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=1fca7811-cb82-49a7-858c-a9fe84caeef2 name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 17:57:47 kubernetes-upgrade-410576 crio[3090]: time="2024-07-31 17:57:47.738070202Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=01fe7f9b-8266-48fa-9064-d747b2aea8c6 name=/runtime.v1.RuntimeService/Version
	Jul 31 17:57:47 kubernetes-upgrade-410576 crio[3090]: time="2024-07-31 17:57:47.739095948Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=01fe7f9b-8266-48fa-9064-d747b2aea8c6 name=/runtime.v1.RuntimeService/Version
	Jul 31 17:57:47 kubernetes-upgrade-410576 crio[3090]: time="2024-07-31 17:57:47.740956702Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=35aa73c8-d191-4859-852b-80ecbc86f172 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 31 17:57:47 kubernetes-upgrade-410576 crio[3090]: time="2024-07-31 17:57:47.741712154Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722448667741681635,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125257,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=35aa73c8-d191-4859-852b-80ecbc86f172 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 31 17:57:47 kubernetes-upgrade-410576 crio[3090]: time="2024-07-31 17:57:47.742616671Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=6e7cf6ac-2492-42bd-b1d1-92dc209ec194 name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 17:57:47 kubernetes-upgrade-410576 crio[3090]: time="2024-07-31 17:57:47.742717182Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=6e7cf6ac-2492-42bd-b1d1-92dc209ec194 name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 17:57:47 kubernetes-upgrade-410576 crio[3090]: time="2024-07-31 17:57:47.743831258Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:629de0eaa1b081c2caf6f671bcd9e0403b26332e03feff44af6d2e0c2ad8adf4,PodSandboxId:736f7b787e9a8ff0330d847504632133fd9cbb8d3c56741a1822287f7d25abae,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899,State:CONTAINER_RUNNING,CreatedAt:1722448664732971325,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-njk2g,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 56020402-3c78-4887-b4d1-a0c482227876,},Annotations:map[string]string{io.kubernetes.container.hash: 65225ff2,io.kubernetes.container.restartCount: 2,io.kubernetes.container.termin
ationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c40b9ac3395c99df1a84de445e0cb06eadfedee69bfdf09ad378eb4068006d6d,PodSandboxId:c963dd8f11ae92fea8eb01f85a6564163a6b59a558e609af9222d0ff09952758,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722448664716065728,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c9f10202-322d-48c8-9c27-2eccc9c698e3,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eacb3e1a02267ba1c8c9f551fde21eb6ab9608d40e27000f5b5ac19d4f9dfe22,PodSandboxId:2ae8e9103bf15322c13289f332bd594336aca45dc54f021ec00d501a05e564c4,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722448664690439482,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5cfdc65f69-wxfw7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 10f2f1ac-c2ba-4b22-8796-d55a5466d2d7,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\
",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:40773a005fdcee0f6217894d50edb05613801f5ad6625dd3315f24a1db18e7e3,PodSandboxId:23aa46163cecc96909d6b49c01ad3c1c695d84aa92941458e6f13084cd69904d,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa,State:CONTAINER_RUNNING,CreatedAt:1722448659853271733,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-410576,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0bf7953ecb66f0ef8427b99d45
a66b16,},Annotations:map[string]string{io.kubernetes.container.hash: e06de91,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:31df6357a157b3147bc06c033fc167cee993645b9412e914f7c984d2ac6285da,PodSandboxId:9319a1dafe0daf23e1df43f574162eda63910e915d331d6a7c0faf0d57d32db5,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5,State:CONTAINER_RUNNING,CreatedAt:1722448659859999050,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-410576,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 09214
d06b48cae7e5e3758d6c895fac2,},Annotations:map[string]string{io.kubernetes.container.hash: ec666c98,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:429753ff875f7ef2feba8957d2717b031d9aa4eba2ad16d1ce47ae87da4e6314,PodSandboxId:0bd4a644f5c6cfafd6ffed52cf9dc4d5838b07f9dc177642025495846b857c98,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,State:CONTAINER_RUNNING,CreatedAt:1722448659836718280,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-410576,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e1d3e63af2
14df4aad14da8a3bc95d92,},Annotations:map[string]string{io.kubernetes.container.hash: ecb4da08,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:381a87a71330dc352155eb7dc040d753e19bd6d80c227ec956192b984d57c4d3,PodSandboxId:e741a9b7a831f4ba8c7b82e223ec3ba76ab5c2bbc4156d220d6a299782780e56,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b,State:CONTAINER_RUNNING,CreatedAt:1722448656191519477,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-410576,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0c069dd300897f9
14ba30fdd5464a743,},Annotations:map[string]string{io.kubernetes.container.hash: 9efbbee0,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9364ff8bf26e16a16bf273d54905712d78f1ed9dee9d0a29c3f404694ea6aff2,PodSandboxId:2f0cf8d7f45ba400f577da59d5efcd325547bd6fc496cbebf1eb20a7ea7b95b2,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722448655680171780,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5cfdc65f69-w89wl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5ff43ce4-ded1-4e1d-a56b-7b26520a67ca,},Annotations
:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:206a4121a79649e682ff94ef24c8661b0ea16a74394af1f79502c1bdff994938,PodSandboxId:2ae8e9103bf15322c13289f332bd594336aca45dc54f021ec00d501a05e564c4,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722448647400815645,Labels:map[string]string{io.kube
rnetes.container.name: coredns,io.kubernetes.pod.name: coredns-5cfdc65f69-wxfw7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 10f2f1ac-c2ba-4b22-8796-d55a5466d2d7,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ad9115ef25d6c62650db1af3db0851e77fe94c2532bf7955814a4e98ba8783ad,PodSandboxId:58ed9203fdbe6e9fca6f4ae1c80ed58c6a996f233aef581e21e5542b07d9697f,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},Use
rSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722448645159900842,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5cfdc65f69-w89wl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5ff43ce4-ded1-4e1d-a56b-7b26520a67ca,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8b06b35c4ae1b835ad2634098192f08ccc58ce1449710608870143dbbf92e364,PodSandboxId:b1e3fdef5976de519617b76d350de4b630499fefcdaca7a
ecd7cd0f3e64dbf15,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899,State:CONTAINER_EXITED,CreatedAt:1722448643814533950,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-njk2g,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 56020402-3c78-4887-b4d1-a0c482227876,},Annotations:map[string]string{io.kubernetes.container.hash: 65225ff2,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8817455f700c7831efb13c71044d6b1b813c31e0d5a632259f23c2a1e7a87595,PodSandboxId:20ace8b06ff9e1791e658543647b552e9bf050b8866534cd6e4e738c741bbd0d,Metadata:&Contai
nerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1722448643882989991,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c9f10202-322d-48c8-9c27-2eccc9c698e3,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ff5fcc9fc7c5612da84814e64df5f38f1958d94615a2af74d4961e146d18f5cb,PodSandboxId:5200eb3d246ca31bbc7da38f742eac7998598c2ffa2a95717f3592883a5b7c27,Metadata:&ContainerMetadata{N
ame:etcd,Attempt:1,},Image:&ImageSpec{Image:cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa,State:CONTAINER_EXITED,CreatedAt:1722448644070721870,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-410576,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0bf7953ecb66f0ef8427b99d45a66b16,},Annotations:map[string]string{io.kubernetes.container.hash: e06de91,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8d71f853e14817ede82c6e339d8dc76a66cfc97c086886e2151dc299a4e7c9c0,PodSandboxId:ecc82059531ea9294803392abba88300b415cb9e0c6d83847b57eb82ccc77bf6,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:
&ImageSpec{Image:d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b,State:CONTAINER_EXITED,CreatedAt:1722448643951832562,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-410576,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0c069dd300897f914ba30fdd5464a743,},Annotations:map[string]string{io.kubernetes.container.hash: 9efbbee0,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:18f923af77f99a86e9e79c9d3da51029f9f0828ab4112e6f7c8ec3c858068163,PodSandboxId:afc70eba8ddfcc325912a39e1f6c3a304a8195d4052330058ad7a72da28848d9,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&Image
Spec{Image:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,State:CONTAINER_EXITED,CreatedAt:1722448643931716572,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-410576,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e1d3e63af214df4aad14da8a3bc95d92,},Annotations:map[string]string{io.kubernetes.container.hash: ecb4da08,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:89fd134efa85f72b905731e6cfc5deb3dfa58c77e1d0f610bba984100fab74c8,PodSandboxId:52c47f23e64839db2840f493d3e32b4a218d410962459d11b9842152e7ff2a58,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&Im
ageSpec{Image:63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5,State:CONTAINER_EXITED,CreatedAt:1722448643777376741,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-410576,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 09214d06b48cae7e5e3758d6c895fac2,},Annotations:map[string]string{io.kubernetes.container.hash: ec666c98,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=6e7cf6ac-2492-42bd-b1d1-92dc209ec194 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	629de0eaa1b08       c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899   3 seconds ago       Running             kube-proxy                2                   736f7b787e9a8       kube-proxy-njk2g
	c40b9ac3395c9       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   3 seconds ago       Running             storage-provisioner       2                   c963dd8f11ae9       storage-provisioner
	eacb3e1a02267       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   3 seconds ago       Running             coredns                   2                   2ae8e9103bf15       coredns-5cfdc65f69-wxfw7
	31df6357a157b       63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5   7 seconds ago       Running             kube-controller-manager   2                   9319a1dafe0da       kube-controller-manager-kubernetes-upgrade-410576
	40773a005fdce       cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa   7 seconds ago       Running             etcd                      2                   23aa46163cecc       etcd-kubernetes-upgrade-410576
	429753ff875f7       f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938   7 seconds ago       Running             kube-apiserver            2                   0bd4a644f5c6c       kube-apiserver-kubernetes-upgrade-410576
	381a87a71330d       d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b   11 seconds ago      Running             kube-scheduler            2                   e741a9b7a831f       kube-scheduler-kubernetes-upgrade-410576
	9364ff8bf26e1       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   12 seconds ago      Running             coredns                   2                   2f0cf8d7f45ba       coredns-5cfdc65f69-w89wl
	206a4121a7964       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   20 seconds ago      Exited              coredns                   1                   2ae8e9103bf15       coredns-5cfdc65f69-wxfw7
	ad9115ef25d6c       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   22 seconds ago      Exited              coredns                   1                   58ed9203fdbe6       coredns-5cfdc65f69-w89wl
	ff5fcc9fc7c56       cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa   23 seconds ago      Exited              etcd                      1                   5200eb3d246ca       etcd-kubernetes-upgrade-410576
	8d71f853e1481       d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b   23 seconds ago      Exited              kube-scheduler            1                   ecc82059531ea       kube-scheduler-kubernetes-upgrade-410576
	18f923af77f99       f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938   23 seconds ago      Exited              kube-apiserver            1                   afc70eba8ddfc       kube-apiserver-kubernetes-upgrade-410576
	8817455f700c7       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   23 seconds ago      Exited              storage-provisioner       1                   20ace8b06ff9e       storage-provisioner
	8b06b35c4ae1b       c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899   24 seconds ago      Exited              kube-proxy                1                   b1e3fdef5976d       kube-proxy-njk2g
	89fd134efa85f       63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5   24 seconds ago      Exited              kube-controller-manager   1                   52c47f23e6483       kube-controller-manager-kubernetes-upgrade-410576
	
	
	==> coredns [206a4121a79649e682ff94ef24c8661b0ea16a74394af1f79502c1bdff994938] <==
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [9364ff8bf26e16a16bf273d54905712d78f1ed9dee9d0a29c3f404694ea6aff2] <==
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	
	
	==> coredns [ad9115ef25d6c62650db1af3db0851e77fe94c2532bf7955814a4e98ba8783ad] <==
	
	
	==> coredns [eacb3e1a02267ba1c8c9f551fde21eb6ab9608d40e27000f5b5ac19d4f9dfe22] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	
	
	==> describe nodes <==
	Name:               kubernetes-upgrade-410576
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=kubernetes-upgrade-410576
	                    kubernetes.io/os=linux
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 31 Jul 2024 17:56:33 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  kubernetes-upgrade-410576
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 31 Jul 2024 17:57:43 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 31 Jul 2024 17:57:43 +0000   Wed, 31 Jul 2024 17:56:33 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 31 Jul 2024 17:57:43 +0000   Wed, 31 Jul 2024 17:56:33 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 31 Jul 2024 17:57:43 +0000   Wed, 31 Jul 2024 17:56:33 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 31 Jul 2024 17:57:43 +0000   Wed, 31 Jul 2024 17:56:36 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.61.234
	  Hostname:    kubernetes-upgrade-410576
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 aa3d0f061c2e4dfb8597c717f19863a8
	  System UUID:                aa3d0f06-1c2e-4dfb-8597-c717f19863a8
	  Boot ID:                    38c1a7a7-9ee7-4f87-9d05-d47f1a78f688
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.0-beta.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                 ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-5cfdc65f69-w89wl                             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     64s
	  kube-system                 coredns-5cfdc65f69-wxfw7                             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     64s
	  kube-system                 etcd-kubernetes-upgrade-410576                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         71s
	  kube-system                 kube-apiserver-kubernetes-upgrade-410576             250m (12%)    0 (0%)      0 (0%)           0 (0%)         65s
	  kube-system                 kube-controller-manager-kubernetes-upgrade-410576    200m (10%)    0 (0%)      0 (0%)           0 (0%)         64s
	  kube-system                 kube-proxy-njk2g                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         64s
	  kube-system                 kube-scheduler-kubernetes-upgrade-410576             100m (5%)     0 (0%)      0 (0%)           0 (0%)         64s
	  kube-system                 storage-provisioner                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         62s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             240Mi (11%)  340Mi (16%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 62s                kube-proxy       
	  Normal  Starting                 2s                 kube-proxy       
	  Normal  NodeAllocatableEnforced  80s                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  79s (x8 over 82s)  kubelet          Node kubernetes-upgrade-410576 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    79s (x8 over 82s)  kubelet          Node kubernetes-upgrade-410576 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     79s (x7 over 82s)  kubelet          Node kubernetes-upgrade-410576 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           67s                node-controller  Node kubernetes-upgrade-410576 event: Registered Node kubernetes-upgrade-410576 in Controller
	  Normal  RegisteredNode           1s                 node-controller  Node kubernetes-upgrade-410576 event: Registered Node kubernetes-upgrade-410576 in Controller
	
	
	==> dmesg <==
	[  +1.782894] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +7.406460] systemd-fstab-generator[567]: Ignoring "noauto" option for root device
	[  +0.056219] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.064373] systemd-fstab-generator[579]: Ignoring "noauto" option for root device
	[  +0.189734] systemd-fstab-generator[593]: Ignoring "noauto" option for root device
	[  +0.144825] systemd-fstab-generator[605]: Ignoring "noauto" option for root device
	[  +0.317113] systemd-fstab-generator[635]: Ignoring "noauto" option for root device
	[  +3.977369] systemd-fstab-generator[731]: Ignoring "noauto" option for root device
	[  +2.036739] systemd-fstab-generator[852]: Ignoring "noauto" option for root device
	[  +0.064234] kauditd_printk_skb: 158 callbacks suppressed
	[ +15.124575] kauditd_printk_skb: 69 callbacks suppressed
	[  +2.826726] systemd-fstab-generator[1250]: Ignoring "noauto" option for root device
	[  +2.697872] kauditd_printk_skb: 99 callbacks suppressed
	[Jul31 17:57] systemd-fstab-generator[2196]: Ignoring "noauto" option for root device
	[  +0.457516] systemd-fstab-generator[2390]: Ignoring "noauto" option for root device
	[  +0.591440] systemd-fstab-generator[2615]: Ignoring "noauto" option for root device
	[  +0.401276] systemd-fstab-generator[2761]: Ignoring "noauto" option for root device
	[  +0.525810] systemd-fstab-generator[2901]: Ignoring "noauto" option for root device
	[  +1.615071] systemd-fstab-generator[3386]: Ignoring "noauto" option for root device
	[  +8.943082] kauditd_printk_skb: 308 callbacks suppressed
	[  +3.344058] systemd-fstab-generator[3996]: Ignoring "noauto" option for root device
	[  +5.684322] kauditd_printk_skb: 39 callbacks suppressed
	[  +0.719411] systemd-fstab-generator[4402]: Ignoring "noauto" option for root device
	
	
	==> etcd [40773a005fdcee0f6217894d50edb05613801f5ad6625dd3315f24a1db18e7e3] <==
	{"level":"info","ts":"2024-07-31T17:57:40.178329Z","caller":"embed/etcd.go:727","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-07-31T17:57:40.176552Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"31558a355326f8e0 switched to configuration voters=(3554899442511837408)"}
	{"level":"info","ts":"2024-07-31T17:57:40.180842Z","caller":"embed/etcd.go:598","msg":"serving peer traffic","address":"192.168.61.234:2380"}
	{"level":"info","ts":"2024-07-31T17:57:40.180951Z","caller":"embed/etcd.go:570","msg":"cmux::serve","address":"192.168.61.234:2380"}
	{"level":"info","ts":"2024-07-31T17:57:40.181148Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"31558a355326f8e0","initial-advertise-peer-urls":["https://192.168.61.234:2380"],"listen-peer-urls":["https://192.168.61.234:2380"],"advertise-client-urls":["https://192.168.61.234:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.61.234:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-07-31T17:57:40.181191Z","caller":"embed/etcd.go:858","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-07-31T17:57:40.181309Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"c0e6a32814930a83","local-member-id":"31558a355326f8e0","added-peer-id":"31558a355326f8e0","added-peer-peer-urls":["https://192.168.61.234:2380"]}
	{"level":"info","ts":"2024-07-31T17:57:40.181445Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"c0e6a32814930a83","local-member-id":"31558a355326f8e0","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-31T17:57:40.18383Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-31T17:57:41.925235Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"31558a355326f8e0 is starting a new election at term 2"}
	{"level":"info","ts":"2024-07-31T17:57:41.925312Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"31558a355326f8e0 became pre-candidate at term 2"}
	{"level":"info","ts":"2024-07-31T17:57:41.925349Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"31558a355326f8e0 received MsgPreVoteResp from 31558a355326f8e0 at term 2"}
	{"level":"info","ts":"2024-07-31T17:57:41.925365Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"31558a355326f8e0 became candidate at term 3"}
	{"level":"info","ts":"2024-07-31T17:57:41.925373Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"31558a355326f8e0 received MsgVoteResp from 31558a355326f8e0 at term 3"}
	{"level":"info","ts":"2024-07-31T17:57:41.925385Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"31558a355326f8e0 became leader at term 3"}
	{"level":"info","ts":"2024-07-31T17:57:41.925395Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 31558a355326f8e0 elected leader 31558a355326f8e0 at term 3"}
	{"level":"info","ts":"2024-07-31T17:57:41.930548Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"31558a355326f8e0","local-member-attributes":"{Name:kubernetes-upgrade-410576 ClientURLs:[https://192.168.61.234:2379]}","request-path":"/0/members/31558a355326f8e0/attributes","cluster-id":"c0e6a32814930a83","publish-timeout":"7s"}
	{"level":"info","ts":"2024-07-31T17:57:41.930706Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-07-31T17:57:41.930873Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-07-31T17:57:41.930923Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-07-31T17:57:41.930949Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-07-31T17:57:41.932076Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-07-31T17:57:41.932299Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-07-31T17:57:41.932982Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-07-31T17:57:41.933312Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.61.234:2379"}
	
	
	==> etcd [ff5fcc9fc7c5612da84814e64df5f38f1958d94615a2af74d4961e146d18f5cb] <==
	{"level":"info","ts":"2024-07-31T17:57:25.005632Z","caller":"etcdserver/server.go:532","msg":"No snapshot found. Recovering WAL from scratch!"}
	{"level":"info","ts":"2024-07-31T17:57:25.074368Z","caller":"etcdserver/raft.go:530","msg":"restarting local member","cluster-id":"c0e6a32814930a83","local-member-id":"31558a355326f8e0","commit-index":416}
	{"level":"info","ts":"2024-07-31T17:57:25.074515Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"31558a355326f8e0 switched to configuration voters=()"}
	{"level":"info","ts":"2024-07-31T17:57:25.074576Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"31558a355326f8e0 became follower at term 2"}
	{"level":"info","ts":"2024-07-31T17:57:25.074591Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"newRaft 31558a355326f8e0 [peers: [], term: 2, commit: 416, applied: 0, lastindex: 416, lastterm: 2]"}
	{"level":"warn","ts":"2024-07-31T17:57:25.109834Z","caller":"auth/store.go:1241","msg":"simple token is not cryptographically signed"}
	{"level":"info","ts":"2024-07-31T17:57:25.173284Z","caller":"mvcc/kvstore.go:418","msg":"kvstore restored","current-rev":402}
	{"level":"info","ts":"2024-07-31T17:57:25.183173Z","caller":"etcdserver/quota.go:94","msg":"enabled backend quota with default value","quota-name":"v3-applier","quota-size-bytes":2147483648,"quota-size":"2.1 GB"}
	{"level":"info","ts":"2024-07-31T17:57:25.193467Z","caller":"etcdserver/corrupt.go:96","msg":"starting initial corruption check","local-member-id":"31558a355326f8e0","timeout":"7s"}
	{"level":"info","ts":"2024-07-31T17:57:25.193723Z","caller":"etcdserver/corrupt.go:177","msg":"initial corruption checking passed; no corruption","local-member-id":"31558a355326f8e0"}
	{"level":"info","ts":"2024-07-31T17:57:25.198852Z","caller":"etcdserver/server.go:867","msg":"starting etcd server","local-member-id":"31558a355326f8e0","local-server-version":"3.5.14","cluster-version":"to_be_decided"}
	{"level":"info","ts":"2024-07-31T17:57:25.199478Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-07-31T17:57:25.201912Z","caller":"etcdserver/server.go:767","msg":"starting initial election tick advance","election-ticks":10}
	{"level":"info","ts":"2024-07-31T17:57:25.202075Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap.db","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-07-31T17:57:25.202132Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-07-31T17:57:25.202145Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-07-31T17:57:25.202366Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"31558a355326f8e0 switched to configuration voters=(3554899442511837408)"}
	{"level":"info","ts":"2024-07-31T17:57:25.202426Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"c0e6a32814930a83","local-member-id":"31558a355326f8e0","added-peer-id":"31558a355326f8e0","added-peer-peer-urls":["https://192.168.61.234:2380"]}
	{"level":"info","ts":"2024-07-31T17:57:25.202576Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"c0e6a32814930a83","local-member-id":"31558a355326f8e0","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-31T17:57:25.20261Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-31T17:57:25.245859Z","caller":"embed/etcd.go:727","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-07-31T17:57:25.246009Z","caller":"embed/etcd.go:598","msg":"serving peer traffic","address":"192.168.61.234:2380"}
	{"level":"info","ts":"2024-07-31T17:57:25.246018Z","caller":"embed/etcd.go:570","msg":"cmux::serve","address":"192.168.61.234:2380"}
	{"level":"info","ts":"2024-07-31T17:57:25.255205Z","caller":"embed/etcd.go:858","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-07-31T17:57:25.255149Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"31558a355326f8e0","initial-advertise-peer-urls":["https://192.168.61.234:2380"],"listen-peer-urls":["https://192.168.61.234:2380"],"advertise-client-urls":["https://192.168.61.234:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.61.234:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	
	
	==> kernel <==
	 17:57:48 up 1 min,  0 users,  load average: 1.39, 0.42, 0.15
	Linux kubernetes-upgrade-410576 5.10.207 #1 SMP Mon Jul 29 15:19:02 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [18f923af77f99a86e9e79c9d3da51029f9f0828ab4112e6f7c8ec3c858068163] <==
	I0731 17:57:24.917058       1 options.go:228] external host was not specified, using 192.168.61.234
	I0731 17:57:24.942940       1 server.go:142] Version: v1.31.0-beta.0
	I0731 17:57:24.943735       1 server.go:144] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	
	
	==> kube-apiserver [429753ff875f7ef2feba8957d2717b031d9aa4eba2ad16d1ce47ae87da4e6314] <==
	I0731 17:57:43.434486       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0731 17:57:43.435402       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0731 17:57:43.435430       1 policy_source.go:224] refreshing policies
	I0731 17:57:43.439486       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0731 17:57:43.443032       1 aggregator.go:171] initial CRD sync complete...
	I0731 17:57:43.443102       1 autoregister_controller.go:144] Starting autoregister controller
	I0731 17:57:43.443127       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0731 17:57:43.452196       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0731 17:57:43.526669       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I0731 17:57:43.526697       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I0731 17:57:43.526861       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0731 17:57:43.527377       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0731 17:57:43.528818       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0731 17:57:43.528889       1 shared_informer.go:320] Caches are synced for configmaps
	E0731 17:57:43.534424       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I0731 17:57:43.534575       1 handler_discovery.go:450] Starting ResourceDiscoveryManager
	I0731 17:57:43.560656       1 cache.go:39] Caches are synced for autoregister controller
	I0731 17:57:44.232355       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0731 17:57:45.181986       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0731 17:57:45.217625       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0731 17:57:45.275626       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0731 17:57:45.337298       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0731 17:57:45.352918       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0731 17:57:47.904514       1 controller.go:615] quota admission added evaluator for: endpoints
	I0731 17:57:48.051843       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-controller-manager [31df6357a157b3147bc06c033fc167cee993645b9412e914f7c984d2ac6285da] <==
	I0731 17:57:47.788027       1 shared_informer.go:320] Caches are synced for GC
	I0731 17:57:47.792425       1 shared_informer.go:320] Caches are synced for stateful set
	I0731 17:57:47.792470       1 shared_informer.go:320] Caches are synced for TTL
	I0731 17:57:47.793707       1 shared_informer.go:320] Caches are synced for taint
	I0731 17:57:47.793901       1 node_lifecycle_controller.go:1232] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I0731 17:57:47.793990       1 node_lifecycle_controller.go:884] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="kubernetes-upgrade-410576"
	I0731 17:57:47.794031       1 node_lifecycle_controller.go:1078] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I0731 17:57:47.830999       1 shared_informer.go:320] Caches are synced for attach detach
	I0731 17:57:47.833951       1 shared_informer.go:320] Caches are synced for expand
	I0731 17:57:47.892360       1 shared_informer.go:320] Caches are synced for endpoint
	I0731 17:57:47.897785       1 shared_informer.go:320] Caches are synced for TTL after finished
	I0731 17:57:47.899809       1 shared_informer.go:320] Caches are synced for namespace
	I0731 17:57:47.921214       1 shared_informer.go:320] Caches are synced for job
	I0731 17:57:47.926274       1 shared_informer.go:320] Caches are synced for disruption
	I0731 17:57:47.942025       1 shared_informer.go:320] Caches are synced for cronjob
	I0731 17:57:47.946449       1 shared_informer.go:320] Caches are synced for ClusterRoleAggregator
	I0731 17:57:47.949842       1 shared_informer.go:320] Caches are synced for service account
	I0731 17:57:47.973813       1 shared_informer.go:320] Caches are synced for garbage collector
	I0731 17:57:48.002051       1 shared_informer.go:320] Caches are synced for garbage collector
	I0731 17:57:48.002102       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I0731 17:57:48.013197       1 shared_informer.go:320] Caches are synced for resource quota
	I0731 17:57:48.017789       1 shared_informer.go:320] Caches are synced for resource quota
	I0731 17:57:48.039304       1 shared_informer.go:320] Caches are synced for endpoint_slice_mirroring
	I0731 17:57:48.042793       1 shared_informer.go:320] Caches are synced for endpoint_slice
	I0731 17:57:48.042887       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="kubernetes-upgrade-410576"
	
	
	==> kube-controller-manager [89fd134efa85f72b905731e6cfc5deb3dfa58c77e1d0f610bba984100fab74c8] <==
	
	
	==> kube-proxy [629de0eaa1b081c2caf6f671bcd9e0403b26332e03feff44af6d2e0c2ad8adf4] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0731 17:57:45.118877       1 proxier.go:705] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0731 17:57:45.129333       1 server.go:682] "Successfully retrieved node IP(s)" IPs=["192.168.61.234"]
	E0731 17:57:45.129425       1 server.go:235] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0731 17:57:45.184090       1 server_linux.go:147] "No iptables support for family" ipFamily="IPv6"
	I0731 17:57:45.184159       1 server.go:246] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0731 17:57:45.184202       1 server_linux.go:170] "Using iptables Proxier"
	I0731 17:57:45.187212       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0731 17:57:45.187659       1 server.go:488] "Version info" version="v1.31.0-beta.0"
	I0731 17:57:45.187689       1 server.go:490] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0731 17:57:45.189714       1 config.go:197] "Starting service config controller"
	I0731 17:57:45.190391       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0731 17:57:45.191966       1 config.go:104] "Starting endpoint slice config controller"
	I0731 17:57:45.193845       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0731 17:57:45.192829       1 config.go:326] "Starting node config controller"
	I0731 17:57:45.193907       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0731 17:57:45.293867       1 shared_informer.go:320] Caches are synced for service config
	I0731 17:57:45.293931       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0731 17:57:45.294065       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-proxy [8b06b35c4ae1b835ad2634098192f08ccc58ce1449710608870143dbbf92e364] <==
	
	
	==> kube-scheduler [381a87a71330dc352155eb7dc040d753e19bd6d80c227ec956192b984d57c4d3] <==
	W0731 17:57:43.299136       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0731 17:57:43.299221       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0731 17:57:43.299322       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0731 17:57:43.299352       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0731 17:57:43.299423       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0731 17:57:43.299484       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0731 17:57:43.299596       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0731 17:57:43.299644       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0731 17:57:43.299687       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	E0731 17:57:43.299718       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0731 17:57:43.305895       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0731 17:57:43.305980       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0731 17:57:43.306099       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0731 17:57:43.306140       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0731 17:57:43.306202       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0731 17:57:43.306230       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0731 17:57:43.306275       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0731 17:57:43.306308       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0731 17:57:43.306380       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0731 17:57:43.306427       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0731 17:57:43.306508       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0731 17:57:43.306537       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0731 17:57:43.435115       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0731 17:57:43.435218       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	I0731 17:57:47.800042       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kube-scheduler [8d71f853e14817ede82c6e339d8dc76a66cfc97c086886e2151dc299a4e7c9c0] <==
	
	
	==> kubelet <==
	Jul 31 17:57:39 kubernetes-upgrade-410576 kubelet[4002]: I0731 17:57:39.584032    4002 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/e1d3e63af214df4aad14da8a3bc95d92-usr-share-ca-certificates\") pod \"kube-apiserver-kubernetes-upgrade-410576\" (UID: \"e1d3e63af214df4aad14da8a3bc95d92\") " pod="kube-system/kube-apiserver-kubernetes-upgrade-410576"
	Jul 31 17:57:39 kubernetes-upgrade-410576 kubelet[4002]: I0731 17:57:39.584053    4002 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/09214d06b48cae7e5e3758d6c895fac2-k8s-certs\") pod \"kube-controller-manager-kubernetes-upgrade-410576\" (UID: \"09214d06b48cae7e5e3758d6c895fac2\") " pod="kube-system/kube-controller-manager-kubernetes-upgrade-410576"
	Jul 31 17:57:39 kubernetes-upgrade-410576 kubelet[4002]: I0731 17:57:39.584066    4002 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/09214d06b48cae7e5e3758d6c895fac2-kubeconfig\") pod \"kube-controller-manager-kubernetes-upgrade-410576\" (UID: \"09214d06b48cae7e5e3758d6c895fac2\") " pod="kube-system/kube-controller-manager-kubernetes-upgrade-410576"
	Jul 31 17:57:39 kubernetes-upgrade-410576 kubelet[4002]: I0731 17:57:39.584083    4002 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/09214d06b48cae7e5e3758d6c895fac2-usr-share-ca-certificates\") pod \"kube-controller-manager-kubernetes-upgrade-410576\" (UID: \"09214d06b48cae7e5e3758d6c895fac2\") " pod="kube-system/kube-controller-manager-kubernetes-upgrade-410576"
	Jul 31 17:57:39 kubernetes-upgrade-410576 kubelet[4002]: I0731 17:57:39.671662    4002 kubelet_node_status.go:72] "Attempting to register node" node="kubernetes-upgrade-410576"
	Jul 31 17:57:39 kubernetes-upgrade-410576 kubelet[4002]: E0731 17:57:39.672596    4002 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 192.168.61.234:8443: connect: connection refused" node="kubernetes-upgrade-410576"
	Jul 31 17:57:39 kubernetes-upgrade-410576 kubelet[4002]: I0731 17:57:39.820040    4002 scope.go:117] "RemoveContainer" containerID="18f923af77f99a86e9e79c9d3da51029f9f0828ab4112e6f7c8ec3c858068163"
	Jul 31 17:57:39 kubernetes-upgrade-410576 kubelet[4002]: I0731 17:57:39.820260    4002 scope.go:117] "RemoveContainer" containerID="89fd134efa85f72b905731e6cfc5deb3dfa58c77e1d0f610bba984100fab74c8"
	Jul 31 17:57:39 kubernetes-upgrade-410576 kubelet[4002]: I0731 17:57:39.822068    4002 scope.go:117] "RemoveContainer" containerID="ff5fcc9fc7c5612da84814e64df5f38f1958d94615a2af74d4961e146d18f5cb"
	Jul 31 17:57:39 kubernetes-upgrade-410576 kubelet[4002]: E0731 17:57:39.982334    4002 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/kubernetes-upgrade-410576?timeout=10s\": dial tcp 192.168.61.234:8443: connect: connection refused" interval="800ms"
	Jul 31 17:57:40 kubernetes-upgrade-410576 kubelet[4002]: I0731 17:57:40.074553    4002 kubelet_node_status.go:72] "Attempting to register node" node="kubernetes-upgrade-410576"
	Jul 31 17:57:40 kubernetes-upgrade-410576 kubelet[4002]: E0731 17:57:40.075451    4002 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 192.168.61.234:8443: connect: connection refused" node="kubernetes-upgrade-410576"
	Jul 31 17:57:40 kubernetes-upgrade-410576 kubelet[4002]: I0731 17:57:40.877671    4002 kubelet_node_status.go:72] "Attempting to register node" node="kubernetes-upgrade-410576"
	Jul 31 17:57:43 kubernetes-upgrade-410576 kubelet[4002]: I0731 17:57:43.472108    4002 kubelet_node_status.go:111] "Node was previously registered" node="kubernetes-upgrade-410576"
	Jul 31 17:57:43 kubernetes-upgrade-410576 kubelet[4002]: I0731 17:57:43.472609    4002 kubelet_node_status.go:75] "Successfully registered node" node="kubernetes-upgrade-410576"
	Jul 31 17:57:43 kubernetes-upgrade-410576 kubelet[4002]: I0731 17:57:43.472708    4002 kuberuntime_manager.go:1524] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Jul 31 17:57:43 kubernetes-upgrade-410576 kubelet[4002]: I0731 17:57:43.473882    4002 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Jul 31 17:57:44 kubernetes-upgrade-410576 kubelet[4002]: I0731 17:57:44.345293    4002 apiserver.go:52] "Watching apiserver"
	Jul 31 17:57:44 kubernetes-upgrade-410576 kubelet[4002]: I0731 17:57:44.371205    4002 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
	Jul 31 17:57:44 kubernetes-upgrade-410576 kubelet[4002]: I0731 17:57:44.419448    4002 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/56020402-3c78-4887-b4d1-a0c482227876-xtables-lock\") pod \"kube-proxy-njk2g\" (UID: \"56020402-3c78-4887-b4d1-a0c482227876\") " pod="kube-system/kube-proxy-njk2g"
	Jul 31 17:57:44 kubernetes-upgrade-410576 kubelet[4002]: I0731 17:57:44.419738    4002 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/c9f10202-322d-48c8-9c27-2eccc9c698e3-tmp\") pod \"storage-provisioner\" (UID: \"c9f10202-322d-48c8-9c27-2eccc9c698e3\") " pod="kube-system/storage-provisioner"
	Jul 31 17:57:44 kubernetes-upgrade-410576 kubelet[4002]: I0731 17:57:44.420155    4002 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/56020402-3c78-4887-b4d1-a0c482227876-lib-modules\") pod \"kube-proxy-njk2g\" (UID: \"56020402-3c78-4887-b4d1-a0c482227876\") " pod="kube-system/kube-proxy-njk2g"
	Jul 31 17:57:44 kubernetes-upgrade-410576 kubelet[4002]: I0731 17:57:44.661842    4002 scope.go:117] "RemoveContainer" containerID="8817455f700c7831efb13c71044d6b1b813c31e0d5a632259f23c2a1e7a87595"
	Jul 31 17:57:44 kubernetes-upgrade-410576 kubelet[4002]: I0731 17:57:44.663826    4002 scope.go:117] "RemoveContainer" containerID="206a4121a79649e682ff94ef24c8661b0ea16a74394af1f79502c1bdff994938"
	Jul 31 17:57:44 kubernetes-upgrade-410576 kubelet[4002]: I0731 17:57:44.675269    4002 scope.go:117] "RemoveContainer" containerID="8b06b35c4ae1b835ad2634098192f08ccc58ce1449710608870143dbbf92e364"
	
	
	==> storage-provisioner [8817455f700c7831efb13c71044d6b1b813c31e0d5a632259f23c2a1e7a87595] <==
	
	
	==> storage-provisioner [c40b9ac3395c99df1a84de445e0cb06eadfedee69bfdf09ad378eb4068006d6d] <==
	I0731 17:57:44.880582       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0731 17:57:44.893336       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0731 17:57:44.893460       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p kubernetes-upgrade-410576 -n kubernetes-upgrade-410576
helpers_test.go:261: (dbg) Run:  kubectl --context kubernetes-upgrade-410576 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestKubernetesUpgrade FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
helpers_test.go:175: Cleaning up "kubernetes-upgrade-410576" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubernetes-upgrade-410576
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p kubernetes-upgrade-410576: (1.252759963s)
--- FAIL: TestKubernetesUpgrade (408.99s)

                                                
                                    
x
+
TestPause/serial/SecondStartNoReconfiguration (36.76s)

                                                
                                                
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-amd64 start -p pause-957141 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
pause_test.go:92: (dbg) Done: out/minikube-linux-amd64 start -p pause-957141 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (33.183346938s)
pause_test.go:100: expected the second start log output to include "The running cluster does not require reconfiguration" but got: 
-- stdout --
	* [pause-957141] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19349
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19349-8084/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19349-8084/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	* Starting "pause-957141" primary control-plane node in "pause-957141" cluster
	* Updating the running kvm2 "pause-957141" VM ...
	* Preparing Kubernetes v1.30.3 on CRI-O 1.29.1 ...
	* Configuring bridge CNI (Container Networking Interface) ...
	* Verifying Kubernetes components...
	* Enabled addons: 
	* Done! kubectl is now configured to use "pause-957141" cluster and "default" namespace by default

                                                
                                                
-- /stdout --
** stderr ** 
	I0731 17:53:04.536250   57809 out.go:291] Setting OutFile to fd 1 ...
	I0731 17:53:04.536498   57809 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 17:53:04.536507   57809 out.go:304] Setting ErrFile to fd 2...
	I0731 17:53:04.536514   57809 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 17:53:04.536693   57809 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19349-8084/.minikube/bin
	I0731 17:53:04.537244   57809 out.go:298] Setting JSON to false
	I0731 17:53:04.538163   57809 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":5728,"bootTime":1722442656,"procs":219,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1065-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0731 17:53:04.538219   57809 start.go:139] virtualization: kvm guest
	I0731 17:53:04.540387   57809 out.go:177] * [pause-957141] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0731 17:53:04.541735   57809 notify.go:220] Checking for updates...
	I0731 17:53:04.541784   57809 out.go:177]   - MINIKUBE_LOCATION=19349
	I0731 17:53:04.543053   57809 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0731 17:53:04.544239   57809 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19349-8084/kubeconfig
	I0731 17:53:04.545548   57809 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19349-8084/.minikube
	I0731 17:53:04.546903   57809 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0731 17:53:04.548421   57809 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0731 17:53:04.550230   57809 config.go:182] Loaded profile config "pause-957141": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0731 17:53:04.550905   57809 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 17:53:04.550967   57809 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 17:53:04.565983   57809 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37579
	I0731 17:53:04.566420   57809 main.go:141] libmachine: () Calling .GetVersion
	I0731 17:53:04.567090   57809 main.go:141] libmachine: Using API Version  1
	I0731 17:53:04.567141   57809 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 17:53:04.567539   57809 main.go:141] libmachine: () Calling .GetMachineName
	I0731 17:53:04.567731   57809 main.go:141] libmachine: (pause-957141) Calling .DriverName
	I0731 17:53:04.567970   57809 driver.go:392] Setting default libvirt URI to qemu:///system
	I0731 17:53:04.568306   57809 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 17:53:04.568375   57809 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 17:53:04.585751   57809 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34369
	I0731 17:53:04.586174   57809 main.go:141] libmachine: () Calling .GetVersion
	I0731 17:53:04.586808   57809 main.go:141] libmachine: Using API Version  1
	I0731 17:53:04.586847   57809 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 17:53:04.587245   57809 main.go:141] libmachine: () Calling .GetMachineName
	I0731 17:53:04.587466   57809 main.go:141] libmachine: (pause-957141) Calling .DriverName
	I0731 17:53:04.623078   57809 out.go:177] * Using the kvm2 driver based on existing profile
	I0731 17:53:04.624523   57809 start.go:297] selected driver: kvm2
	I0731 17:53:04.624550   57809 start.go:901] validating driver "kvm2" against &{Name:pause-957141 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:pause-957141 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.24 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0731 17:53:04.624701   57809 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0731 17:53:04.625028   57809 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0731 17:53:04.625116   57809 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19349-8084/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0731 17:53:04.640033   57809 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0731 17:53:04.640985   57809 cni.go:84] Creating CNI manager for ""
	I0731 17:53:04.641006   57809 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0731 17:53:04.641087   57809 start.go:340] cluster config:
	{Name:pause-957141 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:pause-957141 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.24 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0731 17:53:04.641267   57809 iso.go:125] acquiring lock: {Name:mk4494bb5a63902bf12a909c3109db85229bc516 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0731 17:53:04.643077   57809 out.go:177] * Starting "pause-957141" primary control-plane node in "pause-957141" cluster
	I0731 17:53:04.644316   57809 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0731 17:53:04.644346   57809 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19349-8084/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4
	I0731 17:53:04.644353   57809 cache.go:56] Caching tarball of preloaded images
	I0731 17:53:04.644438   57809 preload.go:172] Found /home/jenkins/minikube-integration/19349-8084/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0731 17:53:04.644451   57809 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on crio
	I0731 17:53:04.644560   57809 profile.go:143] Saving config to /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/pause-957141/config.json ...
	I0731 17:53:04.644736   57809 start.go:360] acquireMachinesLock for pause-957141: {Name:mkd78eacfab7d6c3058c6674434b3d889ec957e0 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0731 17:53:04.644775   57809 start.go:364] duration metric: took 21.521µs to acquireMachinesLock for "pause-957141"
	I0731 17:53:04.644788   57809 start.go:96] Skipping create...Using existing machine configuration
	I0731 17:53:04.644795   57809 fix.go:54] fixHost starting: 
	I0731 17:53:04.645042   57809 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 17:53:04.645073   57809 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 17:53:04.659377   57809 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33731
	I0731 17:53:04.659793   57809 main.go:141] libmachine: () Calling .GetVersion
	I0731 17:53:04.660210   57809 main.go:141] libmachine: Using API Version  1
	I0731 17:53:04.660233   57809 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 17:53:04.660543   57809 main.go:141] libmachine: () Calling .GetMachineName
	I0731 17:53:04.660755   57809 main.go:141] libmachine: (pause-957141) Calling .DriverName
	I0731 17:53:04.660913   57809 main.go:141] libmachine: (pause-957141) Calling .GetState
	I0731 17:53:04.662331   57809 fix.go:112] recreateIfNeeded on pause-957141: state=Running err=<nil>
	W0731 17:53:04.662350   57809 fix.go:138] unexpected machine state, will restart: <nil>
	I0731 17:53:04.664077   57809 out.go:177] * Updating the running kvm2 "pause-957141" VM ...
	I0731 17:53:04.665250   57809 machine.go:94] provisionDockerMachine start ...
	I0731 17:53:04.665268   57809 main.go:141] libmachine: (pause-957141) Calling .DriverName
	I0731 17:53:04.665438   57809 main.go:141] libmachine: (pause-957141) Calling .GetSSHHostname
	I0731 17:53:04.667770   57809 main.go:141] libmachine: (pause-957141) DBG | domain pause-957141 has defined MAC address 52:54:00:f9:60:b3 in network mk-pause-957141
	I0731 17:53:04.668121   57809 main.go:141] libmachine: (pause-957141) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f9:60:b3", ip: ""} in network mk-pause-957141: {Iface:virbr3 ExpiryTime:2024-07-31 18:51:39 +0000 UTC Type:0 Mac:52:54:00:f9:60:b3 Iaid: IPaddr:192.168.39.24 Prefix:24 Hostname:pause-957141 Clientid:01:52:54:00:f9:60:b3}
	I0731 17:53:04.668153   57809 main.go:141] libmachine: (pause-957141) DBG | domain pause-957141 has defined IP address 192.168.39.24 and MAC address 52:54:00:f9:60:b3 in network mk-pause-957141
	I0731 17:53:04.668326   57809 main.go:141] libmachine: (pause-957141) Calling .GetSSHPort
	I0731 17:53:04.668485   57809 main.go:141] libmachine: (pause-957141) Calling .GetSSHKeyPath
	I0731 17:53:04.668629   57809 main.go:141] libmachine: (pause-957141) Calling .GetSSHKeyPath
	I0731 17:53:04.668768   57809 main.go:141] libmachine: (pause-957141) Calling .GetSSHUsername
	I0731 17:53:04.668889   57809 main.go:141] libmachine: Using SSH client type: native
	I0731 17:53:04.669136   57809 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.24 22 <nil> <nil>}
	I0731 17:53:04.669155   57809 main.go:141] libmachine: About to run SSH command:
	hostname
	I0731 17:53:04.767910   57809 main.go:141] libmachine: SSH cmd err, output: <nil>: pause-957141
	
	I0731 17:53:04.767939   57809 main.go:141] libmachine: (pause-957141) Calling .GetMachineName
	I0731 17:53:04.768164   57809 buildroot.go:166] provisioning hostname "pause-957141"
	I0731 17:53:04.768193   57809 main.go:141] libmachine: (pause-957141) Calling .GetMachineName
	I0731 17:53:04.768382   57809 main.go:141] libmachine: (pause-957141) Calling .GetSSHHostname
	I0731 17:53:04.771093   57809 main.go:141] libmachine: (pause-957141) DBG | domain pause-957141 has defined MAC address 52:54:00:f9:60:b3 in network mk-pause-957141
	I0731 17:53:04.771549   57809 main.go:141] libmachine: (pause-957141) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f9:60:b3", ip: ""} in network mk-pause-957141: {Iface:virbr3 ExpiryTime:2024-07-31 18:51:39 +0000 UTC Type:0 Mac:52:54:00:f9:60:b3 Iaid: IPaddr:192.168.39.24 Prefix:24 Hostname:pause-957141 Clientid:01:52:54:00:f9:60:b3}
	I0731 17:53:04.771574   57809 main.go:141] libmachine: (pause-957141) DBG | domain pause-957141 has defined IP address 192.168.39.24 and MAC address 52:54:00:f9:60:b3 in network mk-pause-957141
	I0731 17:53:04.771754   57809 main.go:141] libmachine: (pause-957141) Calling .GetSSHPort
	I0731 17:53:04.771898   57809 main.go:141] libmachine: (pause-957141) Calling .GetSSHKeyPath
	I0731 17:53:04.772016   57809 main.go:141] libmachine: (pause-957141) Calling .GetSSHKeyPath
	I0731 17:53:04.772127   57809 main.go:141] libmachine: (pause-957141) Calling .GetSSHUsername
	I0731 17:53:04.772264   57809 main.go:141] libmachine: Using SSH client type: native
	I0731 17:53:04.772439   57809 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.24 22 <nil> <nil>}
	I0731 17:53:04.772451   57809 main.go:141] libmachine: About to run SSH command:
	sudo hostname pause-957141 && echo "pause-957141" | sudo tee /etc/hostname
	I0731 17:53:04.901229   57809 main.go:141] libmachine: SSH cmd err, output: <nil>: pause-957141
	
	I0731 17:53:04.901260   57809 main.go:141] libmachine: (pause-957141) Calling .GetSSHHostname
	I0731 17:53:04.904587   57809 main.go:141] libmachine: (pause-957141) DBG | domain pause-957141 has defined MAC address 52:54:00:f9:60:b3 in network mk-pause-957141
	I0731 17:53:04.904979   57809 main.go:141] libmachine: (pause-957141) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f9:60:b3", ip: ""} in network mk-pause-957141: {Iface:virbr3 ExpiryTime:2024-07-31 18:51:39 +0000 UTC Type:0 Mac:52:54:00:f9:60:b3 Iaid: IPaddr:192.168.39.24 Prefix:24 Hostname:pause-957141 Clientid:01:52:54:00:f9:60:b3}
	I0731 17:53:04.905013   57809 main.go:141] libmachine: (pause-957141) DBG | domain pause-957141 has defined IP address 192.168.39.24 and MAC address 52:54:00:f9:60:b3 in network mk-pause-957141
	I0731 17:53:04.905267   57809 main.go:141] libmachine: (pause-957141) Calling .GetSSHPort
	I0731 17:53:04.905425   57809 main.go:141] libmachine: (pause-957141) Calling .GetSSHKeyPath
	I0731 17:53:04.905571   57809 main.go:141] libmachine: (pause-957141) Calling .GetSSHKeyPath
	I0731 17:53:04.905722   57809 main.go:141] libmachine: (pause-957141) Calling .GetSSHUsername
	I0731 17:53:04.905936   57809 main.go:141] libmachine: Using SSH client type: native
	I0731 17:53:04.906179   57809 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.24 22 <nil> <nil>}
	I0731 17:53:04.906200   57809 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\spause-957141' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 pause-957141/g' /etc/hosts;
				else 
					echo '127.0.1.1 pause-957141' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0731 17:53:05.012557   57809 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0731 17:53:05.012588   57809 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19349-8084/.minikube CaCertPath:/home/jenkins/minikube-integration/19349-8084/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19349-8084/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19349-8084/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19349-8084/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19349-8084/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19349-8084/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19349-8084/.minikube}
	I0731 17:53:05.012612   57809 buildroot.go:174] setting up certificates
	I0731 17:53:05.012625   57809 provision.go:84] configureAuth start
	I0731 17:53:05.012639   57809 main.go:141] libmachine: (pause-957141) Calling .GetMachineName
	I0731 17:53:05.012900   57809 main.go:141] libmachine: (pause-957141) Calling .GetIP
	I0731 17:53:05.015936   57809 main.go:141] libmachine: (pause-957141) DBG | domain pause-957141 has defined MAC address 52:54:00:f9:60:b3 in network mk-pause-957141
	I0731 17:53:05.016378   57809 main.go:141] libmachine: (pause-957141) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f9:60:b3", ip: ""} in network mk-pause-957141: {Iface:virbr3 ExpiryTime:2024-07-31 18:51:39 +0000 UTC Type:0 Mac:52:54:00:f9:60:b3 Iaid: IPaddr:192.168.39.24 Prefix:24 Hostname:pause-957141 Clientid:01:52:54:00:f9:60:b3}
	I0731 17:53:05.016407   57809 main.go:141] libmachine: (pause-957141) DBG | domain pause-957141 has defined IP address 192.168.39.24 and MAC address 52:54:00:f9:60:b3 in network mk-pause-957141
	I0731 17:53:05.016569   57809 main.go:141] libmachine: (pause-957141) Calling .GetSSHHostname
	I0731 17:53:05.019097   57809 main.go:141] libmachine: (pause-957141) DBG | domain pause-957141 has defined MAC address 52:54:00:f9:60:b3 in network mk-pause-957141
	I0731 17:53:05.019564   57809 main.go:141] libmachine: (pause-957141) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f9:60:b3", ip: ""} in network mk-pause-957141: {Iface:virbr3 ExpiryTime:2024-07-31 18:51:39 +0000 UTC Type:0 Mac:52:54:00:f9:60:b3 Iaid: IPaddr:192.168.39.24 Prefix:24 Hostname:pause-957141 Clientid:01:52:54:00:f9:60:b3}
	I0731 17:53:05.019591   57809 main.go:141] libmachine: (pause-957141) DBG | domain pause-957141 has defined IP address 192.168.39.24 and MAC address 52:54:00:f9:60:b3 in network mk-pause-957141
	I0731 17:53:05.019718   57809 provision.go:143] copyHostCerts
	I0731 17:53:05.019784   57809 exec_runner.go:144] found /home/jenkins/minikube-integration/19349-8084/.minikube/ca.pem, removing ...
	I0731 17:53:05.019796   57809 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19349-8084/.minikube/ca.pem
	I0731 17:53:05.019865   57809 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19349-8084/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19349-8084/.minikube/ca.pem (1082 bytes)
	I0731 17:53:05.019981   57809 exec_runner.go:144] found /home/jenkins/minikube-integration/19349-8084/.minikube/cert.pem, removing ...
	I0731 17:53:05.019994   57809 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19349-8084/.minikube/cert.pem
	I0731 17:53:05.020027   57809 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19349-8084/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19349-8084/.minikube/cert.pem (1123 bytes)
	I0731 17:53:05.020105   57809 exec_runner.go:144] found /home/jenkins/minikube-integration/19349-8084/.minikube/key.pem, removing ...
	I0731 17:53:05.020115   57809 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19349-8084/.minikube/key.pem
	I0731 17:53:05.020146   57809 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19349-8084/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19349-8084/.minikube/key.pem (1675 bytes)
	I0731 17:53:05.020229   57809 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19349-8084/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19349-8084/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19349-8084/.minikube/certs/ca-key.pem org=jenkins.pause-957141 san=[127.0.0.1 192.168.39.24 localhost minikube pause-957141]
	I0731 17:53:05.248779   57809 provision.go:177] copyRemoteCerts
	I0731 17:53:05.248852   57809 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0731 17:53:05.248886   57809 main.go:141] libmachine: (pause-957141) Calling .GetSSHHostname
	I0731 17:53:05.252457   57809 main.go:141] libmachine: (pause-957141) DBG | domain pause-957141 has defined MAC address 52:54:00:f9:60:b3 in network mk-pause-957141
	I0731 17:53:05.252864   57809 main.go:141] libmachine: (pause-957141) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f9:60:b3", ip: ""} in network mk-pause-957141: {Iface:virbr3 ExpiryTime:2024-07-31 18:51:39 +0000 UTC Type:0 Mac:52:54:00:f9:60:b3 Iaid: IPaddr:192.168.39.24 Prefix:24 Hostname:pause-957141 Clientid:01:52:54:00:f9:60:b3}
	I0731 17:53:05.252891   57809 main.go:141] libmachine: (pause-957141) DBG | domain pause-957141 has defined IP address 192.168.39.24 and MAC address 52:54:00:f9:60:b3 in network mk-pause-957141
	I0731 17:53:05.253081   57809 main.go:141] libmachine: (pause-957141) Calling .GetSSHPort
	I0731 17:53:05.253273   57809 main.go:141] libmachine: (pause-957141) Calling .GetSSHKeyPath
	I0731 17:53:05.253455   57809 main.go:141] libmachine: (pause-957141) Calling .GetSSHUsername
	I0731 17:53:05.253638   57809 sshutil.go:53] new ssh client: &{IP:192.168.39.24 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19349-8084/.minikube/machines/pause-957141/id_rsa Username:docker}
	I0731 17:53:05.334975   57809 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19349-8084/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0731 17:53:05.365158   57809 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19349-8084/.minikube/machines/server.pem --> /etc/docker/server.pem (1204 bytes)
	I0731 17:53:05.396969   57809 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19349-8084/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0731 17:53:05.431525   57809 provision.go:87] duration metric: took 418.885229ms to configureAuth
	I0731 17:53:05.431559   57809 buildroot.go:189] setting minikube options for container-runtime
	I0731 17:53:05.431843   57809 config.go:182] Loaded profile config "pause-957141": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0731 17:53:05.431947   57809 main.go:141] libmachine: (pause-957141) Calling .GetSSHHostname
	I0731 17:53:05.435385   57809 main.go:141] libmachine: (pause-957141) DBG | domain pause-957141 has defined MAC address 52:54:00:f9:60:b3 in network mk-pause-957141
	I0731 17:53:05.435854   57809 main.go:141] libmachine: (pause-957141) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f9:60:b3", ip: ""} in network mk-pause-957141: {Iface:virbr3 ExpiryTime:2024-07-31 18:51:39 +0000 UTC Type:0 Mac:52:54:00:f9:60:b3 Iaid: IPaddr:192.168.39.24 Prefix:24 Hostname:pause-957141 Clientid:01:52:54:00:f9:60:b3}
	I0731 17:53:05.435886   57809 main.go:141] libmachine: (pause-957141) DBG | domain pause-957141 has defined IP address 192.168.39.24 and MAC address 52:54:00:f9:60:b3 in network mk-pause-957141
	I0731 17:53:05.436056   57809 main.go:141] libmachine: (pause-957141) Calling .GetSSHPort
	I0731 17:53:05.436260   57809 main.go:141] libmachine: (pause-957141) Calling .GetSSHKeyPath
	I0731 17:53:05.436437   57809 main.go:141] libmachine: (pause-957141) Calling .GetSSHKeyPath
	I0731 17:53:05.436667   57809 main.go:141] libmachine: (pause-957141) Calling .GetSSHUsername
	I0731 17:53:05.436896   57809 main.go:141] libmachine: Using SSH client type: native
	I0731 17:53:05.437127   57809 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.24 22 <nil> <nil>}
	I0731 17:53:05.437149   57809 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0731 17:53:10.953154   57809 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0731 17:53:10.953180   57809 machine.go:97] duration metric: took 6.287915694s to provisionDockerMachine
	I0731 17:53:10.953191   57809 start.go:293] postStartSetup for "pause-957141" (driver="kvm2")
	I0731 17:53:10.953201   57809 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0731 17:53:10.953216   57809 main.go:141] libmachine: (pause-957141) Calling .DriverName
	I0731 17:53:10.953652   57809 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0731 17:53:10.953685   57809 main.go:141] libmachine: (pause-957141) Calling .GetSSHHostname
	I0731 17:53:10.956886   57809 main.go:141] libmachine: (pause-957141) DBG | domain pause-957141 has defined MAC address 52:54:00:f9:60:b3 in network mk-pause-957141
	I0731 17:53:10.957283   57809 main.go:141] libmachine: (pause-957141) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f9:60:b3", ip: ""} in network mk-pause-957141: {Iface:virbr3 ExpiryTime:2024-07-31 18:51:39 +0000 UTC Type:0 Mac:52:54:00:f9:60:b3 Iaid: IPaddr:192.168.39.24 Prefix:24 Hostname:pause-957141 Clientid:01:52:54:00:f9:60:b3}
	I0731 17:53:10.957330   57809 main.go:141] libmachine: (pause-957141) DBG | domain pause-957141 has defined IP address 192.168.39.24 and MAC address 52:54:00:f9:60:b3 in network mk-pause-957141
	I0731 17:53:10.957583   57809 main.go:141] libmachine: (pause-957141) Calling .GetSSHPort
	I0731 17:53:10.957765   57809 main.go:141] libmachine: (pause-957141) Calling .GetSSHKeyPath
	I0731 17:53:10.957963   57809 main.go:141] libmachine: (pause-957141) Calling .GetSSHUsername
	I0731 17:53:10.958183   57809 sshutil.go:53] new ssh client: &{IP:192.168.39.24 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19349-8084/.minikube/machines/pause-957141/id_rsa Username:docker}
	I0731 17:53:11.038434   57809 ssh_runner.go:195] Run: cat /etc/os-release
	I0731 17:53:11.042904   57809 info.go:137] Remote host: Buildroot 2023.02.9
	I0731 17:53:11.042935   57809 filesync.go:126] Scanning /home/jenkins/minikube-integration/19349-8084/.minikube/addons for local assets ...
	I0731 17:53:11.043002   57809 filesync.go:126] Scanning /home/jenkins/minikube-integration/19349-8084/.minikube/files for local assets ...
	I0731 17:53:11.043084   57809 filesync.go:149] local asset: /home/jenkins/minikube-integration/19349-8084/.minikube/files/etc/ssl/certs/152592.pem -> 152592.pem in /etc/ssl/certs
	I0731 17:53:11.043215   57809 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0731 17:53:11.053050   57809 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19349-8084/.minikube/files/etc/ssl/certs/152592.pem --> /etc/ssl/certs/152592.pem (1708 bytes)
	I0731 17:53:11.077473   57809 start.go:296] duration metric: took 124.266832ms for postStartSetup
	I0731 17:53:11.077523   57809 fix.go:56] duration metric: took 6.432726831s for fixHost
	I0731 17:53:11.077555   57809 main.go:141] libmachine: (pause-957141) Calling .GetSSHHostname
	I0731 17:53:11.080393   57809 main.go:141] libmachine: (pause-957141) DBG | domain pause-957141 has defined MAC address 52:54:00:f9:60:b3 in network mk-pause-957141
	I0731 17:53:11.080804   57809 main.go:141] libmachine: (pause-957141) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f9:60:b3", ip: ""} in network mk-pause-957141: {Iface:virbr3 ExpiryTime:2024-07-31 18:51:39 +0000 UTC Type:0 Mac:52:54:00:f9:60:b3 Iaid: IPaddr:192.168.39.24 Prefix:24 Hostname:pause-957141 Clientid:01:52:54:00:f9:60:b3}
	I0731 17:53:11.080839   57809 main.go:141] libmachine: (pause-957141) DBG | domain pause-957141 has defined IP address 192.168.39.24 and MAC address 52:54:00:f9:60:b3 in network mk-pause-957141
	I0731 17:53:11.080972   57809 main.go:141] libmachine: (pause-957141) Calling .GetSSHPort
	I0731 17:53:11.081173   57809 main.go:141] libmachine: (pause-957141) Calling .GetSSHKeyPath
	I0731 17:53:11.081348   57809 main.go:141] libmachine: (pause-957141) Calling .GetSSHKeyPath
	I0731 17:53:11.081486   57809 main.go:141] libmachine: (pause-957141) Calling .GetSSHUsername
	I0731 17:53:11.081635   57809 main.go:141] libmachine: Using SSH client type: native
	I0731 17:53:11.081849   57809 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.24 22 <nil> <nil>}
	I0731 17:53:11.081864   57809 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0731 17:53:11.183614   57809 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722448391.178256514
	
	I0731 17:53:11.183649   57809 fix.go:216] guest clock: 1722448391.178256514
	I0731 17:53:11.183656   57809 fix.go:229] Guest: 2024-07-31 17:53:11.178256514 +0000 UTC Remote: 2024-07-31 17:53:11.07752839 +0000 UTC m=+6.581475268 (delta=100.728124ms)
	I0731 17:53:11.183685   57809 fix.go:200] guest clock delta is within tolerance: 100.728124ms
	I0731 17:53:11.183689   57809 start.go:83] releasing machines lock for "pause-957141", held for 6.538906455s
	I0731 17:53:11.183707   57809 main.go:141] libmachine: (pause-957141) Calling .DriverName
	I0731 17:53:11.183968   57809 main.go:141] libmachine: (pause-957141) Calling .GetIP
	I0731 17:53:11.186721   57809 main.go:141] libmachine: (pause-957141) DBG | domain pause-957141 has defined MAC address 52:54:00:f9:60:b3 in network mk-pause-957141
	I0731 17:53:11.187064   57809 main.go:141] libmachine: (pause-957141) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f9:60:b3", ip: ""} in network mk-pause-957141: {Iface:virbr3 ExpiryTime:2024-07-31 18:51:39 +0000 UTC Type:0 Mac:52:54:00:f9:60:b3 Iaid: IPaddr:192.168.39.24 Prefix:24 Hostname:pause-957141 Clientid:01:52:54:00:f9:60:b3}
	I0731 17:53:11.187090   57809 main.go:141] libmachine: (pause-957141) DBG | domain pause-957141 has defined IP address 192.168.39.24 and MAC address 52:54:00:f9:60:b3 in network mk-pause-957141
	I0731 17:53:11.187196   57809 main.go:141] libmachine: (pause-957141) Calling .DriverName
	I0731 17:53:11.187706   57809 main.go:141] libmachine: (pause-957141) Calling .DriverName
	I0731 17:53:11.187873   57809 main.go:141] libmachine: (pause-957141) Calling .DriverName
	I0731 17:53:11.187984   57809 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0731 17:53:11.188024   57809 main.go:141] libmachine: (pause-957141) Calling .GetSSHHostname
	I0731 17:53:11.188079   57809 ssh_runner.go:195] Run: cat /version.json
	I0731 17:53:11.188104   57809 main.go:141] libmachine: (pause-957141) Calling .GetSSHHostname
	I0731 17:53:11.190699   57809 main.go:141] libmachine: (pause-957141) DBG | domain pause-957141 has defined MAC address 52:54:00:f9:60:b3 in network mk-pause-957141
	I0731 17:53:11.190724   57809 main.go:141] libmachine: (pause-957141) DBG | domain pause-957141 has defined MAC address 52:54:00:f9:60:b3 in network mk-pause-957141
	I0731 17:53:11.191072   57809 main.go:141] libmachine: (pause-957141) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f9:60:b3", ip: ""} in network mk-pause-957141: {Iface:virbr3 ExpiryTime:2024-07-31 18:51:39 +0000 UTC Type:0 Mac:52:54:00:f9:60:b3 Iaid: IPaddr:192.168.39.24 Prefix:24 Hostname:pause-957141 Clientid:01:52:54:00:f9:60:b3}
	I0731 17:53:11.191129   57809 main.go:141] libmachine: (pause-957141) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f9:60:b3", ip: ""} in network mk-pause-957141: {Iface:virbr3 ExpiryTime:2024-07-31 18:51:39 +0000 UTC Type:0 Mac:52:54:00:f9:60:b3 Iaid: IPaddr:192.168.39.24 Prefix:24 Hostname:pause-957141 Clientid:01:52:54:00:f9:60:b3}
	I0731 17:53:11.191156   57809 main.go:141] libmachine: (pause-957141) DBG | domain pause-957141 has defined IP address 192.168.39.24 and MAC address 52:54:00:f9:60:b3 in network mk-pause-957141
	I0731 17:53:11.191198   57809 main.go:141] libmachine: (pause-957141) DBG | domain pause-957141 has defined IP address 192.168.39.24 and MAC address 52:54:00:f9:60:b3 in network mk-pause-957141
	I0731 17:53:11.191349   57809 main.go:141] libmachine: (pause-957141) Calling .GetSSHPort
	I0731 17:53:11.191512   57809 main.go:141] libmachine: (pause-957141) Calling .GetSSHPort
	I0731 17:53:11.191518   57809 main.go:141] libmachine: (pause-957141) Calling .GetSSHKeyPath
	I0731 17:53:11.191663   57809 main.go:141] libmachine: (pause-957141) Calling .GetSSHUsername
	I0731 17:53:11.191668   57809 main.go:141] libmachine: (pause-957141) Calling .GetSSHKeyPath
	I0731 17:53:11.191825   57809 main.go:141] libmachine: (pause-957141) Calling .GetSSHUsername
	I0731 17:53:11.191872   57809 sshutil.go:53] new ssh client: &{IP:192.168.39.24 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19349-8084/.minikube/machines/pause-957141/id_rsa Username:docker}
	I0731 17:53:11.191927   57809 sshutil.go:53] new ssh client: &{IP:192.168.39.24 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19349-8084/.minikube/machines/pause-957141/id_rsa Username:docker}
	I0731 17:53:11.295369   57809 ssh_runner.go:195] Run: systemctl --version
	I0731 17:53:11.301630   57809 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0731 17:53:11.455268   57809 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0731 17:53:11.461686   57809 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0731 17:53:11.461754   57809 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0731 17:53:11.470896   57809 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0731 17:53:11.470917   57809 start.go:495] detecting cgroup driver to use...
	I0731 17:53:11.470975   57809 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0731 17:53:11.490271   57809 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0731 17:53:11.508882   57809 docker.go:217] disabling cri-docker service (if available) ...
	I0731 17:53:11.508947   57809 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0731 17:53:11.523160   57809 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0731 17:53:11.535847   57809 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0731 17:53:11.659564   57809 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0731 17:53:11.780674   57809 docker.go:233] disabling docker service ...
	I0731 17:53:11.780744   57809 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0731 17:53:11.796163   57809 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0731 17:53:11.808798   57809 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0731 17:53:11.935632   57809 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0731 17:53:12.061459   57809 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0731 17:53:12.075067   57809 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0731 17:53:12.092554   57809 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0731 17:53:12.092612   57809 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 17:53:12.102028   57809 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0731 17:53:12.102098   57809 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 17:53:12.111871   57809 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 17:53:12.121337   57809 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 17:53:12.131050   57809 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0731 17:53:12.140870   57809 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 17:53:12.150364   57809 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 17:53:12.160267   57809 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 17:53:12.170903   57809 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0731 17:53:12.179576   57809 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0731 17:53:12.188052   57809 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0731 17:53:12.307121   57809 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0731 17:53:13.121017   57809 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0731 17:53:13.121097   57809 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0731 17:53:13.125538   57809 start.go:563] Will wait 60s for crictl version
	I0731 17:53:13.125597   57809 ssh_runner.go:195] Run: which crictl
	I0731 17:53:13.129456   57809 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0731 17:53:13.166316   57809 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0731 17:53:13.166390   57809 ssh_runner.go:195] Run: crio --version
	I0731 17:53:13.197265   57809 ssh_runner.go:195] Run: crio --version
	I0731 17:53:13.229704   57809 out.go:177] * Preparing Kubernetes v1.30.3 on CRI-O 1.29.1 ...
	I0731 17:53:13.231081   57809 main.go:141] libmachine: (pause-957141) Calling .GetIP
	I0731 17:53:13.234072   57809 main.go:141] libmachine: (pause-957141) DBG | domain pause-957141 has defined MAC address 52:54:00:f9:60:b3 in network mk-pause-957141
	I0731 17:53:13.234540   57809 main.go:141] libmachine: (pause-957141) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f9:60:b3", ip: ""} in network mk-pause-957141: {Iface:virbr3 ExpiryTime:2024-07-31 18:51:39 +0000 UTC Type:0 Mac:52:54:00:f9:60:b3 Iaid: IPaddr:192.168.39.24 Prefix:24 Hostname:pause-957141 Clientid:01:52:54:00:f9:60:b3}
	I0731 17:53:13.234574   57809 main.go:141] libmachine: (pause-957141) DBG | domain pause-957141 has defined IP address 192.168.39.24 and MAC address 52:54:00:f9:60:b3 in network mk-pause-957141
	I0731 17:53:13.234873   57809 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0731 17:53:13.239884   57809 kubeadm.go:883] updating cluster {Name:pause-957141 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:pause-957141 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.24 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0731 17:53:13.240045   57809 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0731 17:53:13.240105   57809 ssh_runner.go:195] Run: sudo crictl images --output json
	I0731 17:53:13.297564   57809 crio.go:514] all images are preloaded for cri-o runtime.
	I0731 17:53:13.297588   57809 crio.go:433] Images already preloaded, skipping extraction
	I0731 17:53:13.297636   57809 ssh_runner.go:195] Run: sudo crictl images --output json
	I0731 17:53:13.341280   57809 crio.go:514] all images are preloaded for cri-o runtime.
	I0731 17:53:13.341307   57809 cache_images.go:84] Images are preloaded, skipping loading
	I0731 17:53:13.341317   57809 kubeadm.go:934] updating node { 192.168.39.24 8443 v1.30.3 crio true true} ...
	I0731 17:53:13.341488   57809 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=pause-957141 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.24
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:pause-957141 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0731 17:53:13.341585   57809 ssh_runner.go:195] Run: crio config
	I0731 17:53:13.398764   57809 cni.go:84] Creating CNI manager for ""
	I0731 17:53:13.398796   57809 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0731 17:53:13.398813   57809 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0731 17:53:13.398838   57809 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.24 APIServerPort:8443 KubernetesVersion:v1.30.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:pause-957141 NodeName:pause-957141 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.24"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.24 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0731 17:53:13.399050   57809 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.24
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "pause-957141"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.24
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.24"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0731 17:53:13.399154   57809 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0731 17:53:13.409376   57809 binaries.go:44] Found k8s binaries, skipping transfer
	I0731 17:53:13.409451   57809 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0731 17:53:13.423520   57809 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (311 bytes)
	I0731 17:53:13.445420   57809 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0731 17:53:13.464120   57809 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2153 bytes)
	I0731 17:53:13.484629   57809 ssh_runner.go:195] Run: grep 192.168.39.24	control-plane.minikube.internal$ /etc/hosts
	I0731 17:53:13.488482   57809 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0731 17:53:13.623825   57809 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0731 17:53:13.640251   57809 certs.go:68] Setting up /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/pause-957141 for IP: 192.168.39.24
	I0731 17:53:13.640280   57809 certs.go:194] generating shared ca certs ...
	I0731 17:53:13.640295   57809 certs.go:226] acquiring lock for ca certs: {Name:mk49eed439085f9c30746706460ce213570d1997 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 17:53:13.640465   57809 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19349-8084/.minikube/ca.key
	I0731 17:53:13.640512   57809 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19349-8084/.minikube/proxy-client-ca.key
	I0731 17:53:13.640527   57809 certs.go:256] generating profile certs ...
	I0731 17:53:13.640624   57809 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/pause-957141/client.key
	I0731 17:53:13.640724   57809 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/pause-957141/apiserver.key.df5cb48b
	I0731 17:53:13.640780   57809 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/pause-957141/proxy-client.key
	I0731 17:53:13.640911   57809 certs.go:484] found cert: /home/jenkins/minikube-integration/19349-8084/.minikube/certs/15259.pem (1338 bytes)
	W0731 17:53:13.640941   57809 certs.go:480] ignoring /home/jenkins/minikube-integration/19349-8084/.minikube/certs/15259_empty.pem, impossibly tiny 0 bytes
	I0731 17:53:13.640949   57809 certs.go:484] found cert: /home/jenkins/minikube-integration/19349-8084/.minikube/certs/ca-key.pem (1675 bytes)
	I0731 17:53:13.640985   57809 certs.go:484] found cert: /home/jenkins/minikube-integration/19349-8084/.minikube/certs/ca.pem (1082 bytes)
	I0731 17:53:13.641023   57809 certs.go:484] found cert: /home/jenkins/minikube-integration/19349-8084/.minikube/certs/cert.pem (1123 bytes)
	I0731 17:53:13.641051   57809 certs.go:484] found cert: /home/jenkins/minikube-integration/19349-8084/.minikube/certs/key.pem (1675 bytes)
	I0731 17:53:13.641105   57809 certs.go:484] found cert: /home/jenkins/minikube-integration/19349-8084/.minikube/files/etc/ssl/certs/152592.pem (1708 bytes)
	I0731 17:53:13.641730   57809 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19349-8084/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0731 17:53:13.666366   57809 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19349-8084/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0731 17:53:13.688907   57809 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19349-8084/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0731 17:53:13.715462   57809 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19349-8084/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0731 17:53:13.738462   57809 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/pause-957141/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0731 17:53:13.764016   57809 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/pause-957141/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0731 17:53:13.788424   57809 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/pause-957141/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0731 17:53:13.810305   57809 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/pause-957141/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0731 17:53:13.832328   57809 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19349-8084/.minikube/certs/15259.pem --> /usr/share/ca-certificates/15259.pem (1338 bytes)
	I0731 17:53:13.859901   57809 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19349-8084/.minikube/files/etc/ssl/certs/152592.pem --> /usr/share/ca-certificates/152592.pem (1708 bytes)
	I0731 17:53:13.884903   57809 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19349-8084/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0731 17:53:13.906882   57809 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0731 17:53:13.973187   57809 ssh_runner.go:195] Run: openssl version
	I0731 17:53:14.008337   57809 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/15259.pem && ln -fs /usr/share/ca-certificates/15259.pem /etc/ssl/certs/15259.pem"
	I0731 17:53:14.075507   57809 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/15259.pem
	I0731 17:53:14.117717   57809 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 31 16:54 /usr/share/ca-certificates/15259.pem
	I0731 17:53:14.117802   57809 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/15259.pem
	I0731 17:53:14.182378   57809 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/15259.pem /etc/ssl/certs/51391683.0"
	I0731 17:53:14.230942   57809 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/152592.pem && ln -fs /usr/share/ca-certificates/152592.pem /etc/ssl/certs/152592.pem"
	I0731 17:53:14.267097   57809 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/152592.pem
	I0731 17:53:14.285463   57809 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 31 16:54 /usr/share/ca-certificates/152592.pem
	I0731 17:53:14.285531   57809 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/152592.pem
	I0731 17:53:14.292599   57809 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/152592.pem /etc/ssl/certs/3ec20f2e.0"
	I0731 17:53:14.304039   57809 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0731 17:53:14.316737   57809 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0731 17:53:14.321259   57809 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 31 16:42 /usr/share/ca-certificates/minikubeCA.pem
	I0731 17:53:14.321320   57809 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0731 17:53:14.328388   57809 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0731 17:53:14.339213   57809 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0731 17:53:14.346167   57809 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0731 17:53:14.358394   57809 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0731 17:53:14.372839   57809 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0731 17:53:14.382966   57809 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0731 17:53:14.391728   57809 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0731 17:53:14.399757   57809 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0731 17:53:14.405778   57809 kubeadm.go:392] StartCluster: {Name:pause-957141 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:pause-957141 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.24 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0731 17:53:14.405935   57809 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0731 17:53:14.406009   57809 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0731 17:53:14.456039   57809 cri.go:89] found id: "f6a604f6cdc9ae83bd68b4618c688c09f096a290bca0e7623a30b67bf0b41ecf"
	I0731 17:53:14.456070   57809 cri.go:89] found id: "4e761a3b6c9c7fa02d83225cddba648d385f3378e3cd8eda575bdc0ba4954a20"
	I0731 17:53:14.456077   57809 cri.go:89] found id: "6ea3a326bb1edec1cc3c4627be32566c2f9d53be0f8549b5bf1efb4e4679a51f"
	I0731 17:53:14.456081   57809 cri.go:89] found id: "599e308fce44b39d9ffaac07fa975de9d252453298ccb54bedd41c3adee1d9f9"
	I0731 17:53:14.456085   57809 cri.go:89] found id: "07fb01c7a66a08ce480a7aa54d46cf2af1212af230d47b4e721bf82d2e3c31f9"
	I0731 17:53:14.456089   57809 cri.go:89] found id: "3ff45ea8e5cd03c0f849f2d625879d14cc1411331de18175d027488dd676aac5"
	I0731 17:53:14.456093   57809 cri.go:89] found id: ""
	I0731 17:53:14.456145   57809 ssh_runner.go:195] Run: sudo runc list -f json

                                                
                                                
** /stderr **
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p pause-957141 -n pause-957141
helpers_test.go:244: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestPause/serial/SecondStartNoReconfiguration]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p pause-957141 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p pause-957141 logs -n 25: (1.277810376s)
helpers_test.go:252: TestPause/serial/SecondStartNoReconfiguration logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| Command |                 Args                  |          Profile          |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| delete  | -p offline-crio-195412                | offline-crio-195412       | jenkins | v1.33.1 | 31 Jul 24 17:48 UTC | 31 Jul 24 17:48 UTC |
	| start   | -p cert-options-241744                | cert-options-241744       | jenkins | v1.33.1 | 31 Jul 24 17:48 UTC | 31 Jul 24 17:50 UTC |
	|         | --memory=2048                         |                           |         |         |                     |                     |
	|         | --apiserver-ips=127.0.0.1             |                           |         |         |                     |                     |
	|         | --apiserver-ips=192.168.15.15         |                           |         |         |                     |                     |
	|         | --apiserver-names=localhost           |                           |         |         |                     |                     |
	|         | --apiserver-names=www.google.com      |                           |         |         |                     |                     |
	|         | --apiserver-port=8555                 |                           |         |         |                     |                     |
	|         | --driver=kvm2                         |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| delete  | -p NoKubernetes-231031                | NoKubernetes-231031       | jenkins | v1.33.1 | 31 Jul 24 17:49 UTC | 31 Jul 24 17:49 UTC |
	| start   | -p NoKubernetes-231031                | NoKubernetes-231031       | jenkins | v1.33.1 | 31 Jul 24 17:49 UTC | 31 Jul 24 17:50 UTC |
	|         | --no-kubernetes --driver=kvm2         |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| start   | -p running-upgrade-262154             | running-upgrade-262154    | jenkins | v1.33.1 | 31 Jul 24 17:49 UTC | 31 Jul 24 17:51 UTC |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --alsologtostderr                     |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                    |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| ssh     | cert-options-241744 ssh               | cert-options-241744       | jenkins | v1.33.1 | 31 Jul 24 17:50 UTC | 31 Jul 24 17:50 UTC |
	|         | openssl x509 -text -noout -in         |                           |         |         |                     |                     |
	|         | /var/lib/minikube/certs/apiserver.crt |                           |         |         |                     |                     |
	| ssh     | -p cert-options-241744 -- sudo        | cert-options-241744       | jenkins | v1.33.1 | 31 Jul 24 17:50 UTC | 31 Jul 24 17:50 UTC |
	|         | cat /etc/kubernetes/admin.conf        |                           |         |         |                     |                     |
	| delete  | -p cert-options-241744                | cert-options-241744       | jenkins | v1.33.1 | 31 Jul 24 17:50 UTC | 31 Jul 24 17:50 UTC |
	| start   | -p force-systemd-env-082965           | force-systemd-env-082965  | jenkins | v1.33.1 | 31 Jul 24 17:50 UTC | 31 Jul 24 17:51 UTC |
	|         | --memory=2048                         |                           |         |         |                     |                     |
	|         | --alsologtostderr                     |                           |         |         |                     |                     |
	|         | -v=5 --driver=kvm2                    |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| ssh     | -p NoKubernetes-231031 sudo           | NoKubernetes-231031       | jenkins | v1.33.1 | 31 Jul 24 17:50 UTC |                     |
	|         | systemctl is-active --quiet           |                           |         |         |                     |                     |
	|         | service kubelet                       |                           |         |         |                     |                     |
	| stop    | -p NoKubernetes-231031                | NoKubernetes-231031       | jenkins | v1.33.1 | 31 Jul 24 17:50 UTC | 31 Jul 24 17:50 UTC |
	| start   | -p NoKubernetes-231031                | NoKubernetes-231031       | jenkins | v1.33.1 | 31 Jul 24 17:50 UTC | 31 Jul 24 17:51 UTC |
	|         | --driver=kvm2                         |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| delete  | -p force-systemd-env-082965           | force-systemd-env-082965  | jenkins | v1.33.1 | 31 Jul 24 17:51 UTC | 31 Jul 24 17:51 UTC |
	| start   | -p kubernetes-upgrade-410576          | kubernetes-upgrade-410576 | jenkins | v1.33.1 | 31 Jul 24 17:51 UTC |                     |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0          |                           |         |         |                     |                     |
	|         | --alsologtostderr                     |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                    |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| ssh     | -p NoKubernetes-231031 sudo           | NoKubernetes-231031       | jenkins | v1.33.1 | 31 Jul 24 17:51 UTC |                     |
	|         | systemctl is-active --quiet           |                           |         |         |                     |                     |
	|         | service kubelet                       |                           |         |         |                     |                     |
	| delete  | -p NoKubernetes-231031                | NoKubernetes-231031       | jenkins | v1.33.1 | 31 Jul 24 17:51 UTC | 31 Jul 24 17:51 UTC |
	| start   | -p pause-957141 --memory=2048         | pause-957141              | jenkins | v1.33.1 | 31 Jul 24 17:51 UTC | 31 Jul 24 17:53 UTC |
	|         | --install-addons=false                |                           |         |         |                     |                     |
	|         | --wait=all --driver=kvm2              |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| delete  | -p running-upgrade-262154             | running-upgrade-262154    | jenkins | v1.33.1 | 31 Jul 24 17:51 UTC | 31 Jul 24 17:51 UTC |
	| start   | -p stopped-upgrade-246118             | minikube                  | jenkins | v1.26.0 | 31 Jul 24 17:51 UTC | 31 Jul 24 17:52 UTC |
	|         | --memory=2200 --vm-driver=kvm2        |                           |         |         |                     |                     |
	|         |  --container-runtime=crio             |                           |         |         |                     |                     |
	| stop    | stopped-upgrade-246118 stop           | minikube                  | jenkins | v1.26.0 | 31 Jul 24 17:52 UTC | 31 Jul 24 17:52 UTC |
	| start   | -p stopped-upgrade-246118             | stopped-upgrade-246118    | jenkins | v1.33.1 | 31 Jul 24 17:52 UTC | 31 Jul 24 17:53 UTC |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --alsologtostderr                     |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                    |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| start   | -p cert-expiration-761578             | cert-expiration-761578    | jenkins | v1.33.1 | 31 Jul 24 17:52 UTC | 31 Jul 24 17:53 UTC |
	|         | --memory=2048                         |                           |         |         |                     |                     |
	|         | --cert-expiration=8760h               |                           |         |         |                     |                     |
	|         | --driver=kvm2                         |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| start   | -p pause-957141                       | pause-957141              | jenkins | v1.33.1 | 31 Jul 24 17:53 UTC | 31 Jul 24 17:53 UTC |
	|         | --alsologtostderr                     |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                    |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| delete  | -p cert-expiration-761578             | cert-expiration-761578    | jenkins | v1.33.1 | 31 Jul 24 17:53 UTC | 31 Jul 24 17:53 UTC |
	| start   | -p auto-985288 --memory=3072          | auto-985288               | jenkins | v1.33.1 | 31 Jul 24 17:53 UTC |                     |
	|         | --alsologtostderr --wait=true         |                           |         |         |                     |                     |
	|         | --wait-timeout=15m                    |                           |         |         |                     |                     |
	|         | --driver=kvm2                         |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	|---------|---------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/31 17:53:20
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0731 17:53:20.449009   58027 out.go:291] Setting OutFile to fd 1 ...
	I0731 17:53:20.449126   58027 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 17:53:20.449136   58027 out.go:304] Setting ErrFile to fd 2...
	I0731 17:53:20.449142   58027 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 17:53:20.449350   58027 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19349-8084/.minikube/bin
	I0731 17:53:20.449929   58027 out.go:298] Setting JSON to false
	I0731 17:53:20.450847   58027 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":5744,"bootTime":1722442656,"procs":214,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1065-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0731 17:53:20.450905   58027 start.go:139] virtualization: kvm guest
	I0731 17:53:20.452895   58027 out.go:177] * [auto-985288] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0731 17:53:20.454085   58027 out.go:177]   - MINIKUBE_LOCATION=19349
	I0731 17:53:20.454109   58027 notify.go:220] Checking for updates...
	I0731 17:53:20.456515   58027 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0731 17:53:20.457685   58027 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19349-8084/kubeconfig
	I0731 17:53:20.458820   58027 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19349-8084/.minikube
	I0731 17:53:20.459991   58027 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0731 17:53:20.461070   58027 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0731 17:53:20.462769   58027 config.go:182] Loaded profile config "kubernetes-upgrade-410576": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0731 17:53:20.462960   58027 config.go:182] Loaded profile config "pause-957141": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0731 17:53:20.463091   58027 config.go:182] Loaded profile config "stopped-upgrade-246118": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.24.1
	I0731 17:53:20.463252   58027 driver.go:392] Setting default libvirt URI to qemu:///system
	I0731 17:53:20.499807   58027 out.go:177] * Using the kvm2 driver based on user configuration
	I0731 17:53:20.501098   58027 start.go:297] selected driver: kvm2
	I0731 17:53:20.501111   58027 start.go:901] validating driver "kvm2" against <nil>
	I0731 17:53:20.501126   58027 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0731 17:53:20.501825   58027 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0731 17:53:20.501904   58027 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19349-8084/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0731 17:53:20.517642   58027 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0731 17:53:20.517712   58027 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0731 17:53:20.517948   58027 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0731 17:53:20.517978   58027 cni.go:84] Creating CNI manager for ""
	I0731 17:53:20.517988   58027 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0731 17:53:20.517997   58027 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0731 17:53:20.518070   58027 start.go:340] cluster config:
	{Name:auto-985288 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:auto-985288 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0731 17:53:20.518178   58027 iso.go:125] acquiring lock: {Name:mk4494bb5a63902bf12a909c3109db85229bc516 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0731 17:53:20.520883   58027 out.go:177] * Starting "auto-985288" primary control-plane node in "auto-985288" cluster
	I0731 17:53:20.522101   58027 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0731 17:53:20.522139   58027 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19349-8084/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4
	I0731 17:53:20.522146   58027 cache.go:56] Caching tarball of preloaded images
	I0731 17:53:20.522234   58027 preload.go:172] Found /home/jenkins/minikube-integration/19349-8084/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0731 17:53:20.522247   58027 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on crio
	I0731 17:53:20.522357   58027 profile.go:143] Saving config to /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/auto-985288/config.json ...
	I0731 17:53:20.522375   58027 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/auto-985288/config.json: {Name:mk076dc498896e75fb7de1e88b63645d2e0c035d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 17:53:20.522526   58027 start.go:360] acquireMachinesLock for auto-985288: {Name:mkd78eacfab7d6c3058c6674434b3d889ec957e0 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0731 17:53:20.522558   58027 start.go:364] duration metric: took 16.247µs to acquireMachinesLock for "auto-985288"
	I0731 17:53:20.522577   58027 start.go:93] Provisioning new machine with config: &{Name:auto-985288 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:auto-985288 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0731 17:53:20.522662   58027 start.go:125] createHost starting for "" (driver="kvm2")
	I0731 17:53:19.615722   57809 api_server.go:253] Checking apiserver healthz at https://192.168.39.24:8443/healthz ...
	I0731 17:53:19.621699   57809 api_server.go:279] https://192.168.39.24:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0731 17:53:19.621721   57809 api_server.go:103] status: https://192.168.39.24:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0731 17:53:20.116420   57809 api_server.go:253] Checking apiserver healthz at https://192.168.39.24:8443/healthz ...
	I0731 17:53:20.124930   57809 api_server.go:279] https://192.168.39.24:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0731 17:53:20.124961   57809 api_server.go:103] status: https://192.168.39.24:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0731 17:53:20.616323   57809 api_server.go:253] Checking apiserver healthz at https://192.168.39.24:8443/healthz ...
	I0731 17:53:20.622362   57809 api_server.go:279] https://192.168.39.24:8443/healthz returned 200:
	ok
	I0731 17:53:20.630394   57809 api_server.go:141] control plane version: v1.30.3
	I0731 17:53:20.630430   57809 api_server.go:131] duration metric: took 3.514936276s to wait for apiserver health ...
	I0731 17:53:20.630441   57809 cni.go:84] Creating CNI manager for ""
	I0731 17:53:20.630450   57809 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0731 17:53:20.632132   57809 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0731 17:53:19.589050   57552 api_server.go:269] stopped: https://192.168.50.211:8443/healthz: Get "https://192.168.50.211:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 17:53:19.589096   57552 api_server.go:253] Checking apiserver healthz at https://192.168.50.211:8443/healthz ...
	I0731 17:53:20.633564   57809 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0731 17:53:20.646940   57809 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0731 17:53:20.670728   57809 system_pods.go:43] waiting for kube-system pods to appear ...
	I0731 17:53:20.691497   57809 system_pods.go:59] 6 kube-system pods found
	I0731 17:53:20.691533   57809 system_pods.go:61] "coredns-7db6d8ff4d-x4692" [88b05208-9a2c-431c-8cdf-bda38e0baf8a] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0731 17:53:20.691546   57809 system_pods.go:61] "etcd-pause-957141" [86f1fd21-0923-42b6-9882-e7a234961753] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0731 17:53:20.691557   57809 system_pods.go:61] "kube-apiserver-pause-957141" [a46dd278-a7b5-4297-bc94-f8db9a9cbd87] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0731 17:53:20.691567   57809 system_pods.go:61] "kube-controller-manager-pause-957141" [2eb60947-dca1-430a-9dce-1d00c4a9d91e] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0731 17:53:20.691575   57809 system_pods.go:61] "kube-proxy-7pdbj" [3bf92527-b96a-4157-90cc-5d864b41526d] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0731 17:53:20.691584   57809 system_pods.go:61] "kube-scheduler-pause-957141" [eb761ade-0576-4475-9457-dcfa00a02a9b] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0731 17:53:20.691595   57809 system_pods.go:74] duration metric: took 20.845002ms to wait for pod list to return data ...
	I0731 17:53:20.691608   57809 node_conditions.go:102] verifying NodePressure condition ...
	I0731 17:53:20.701714   57809 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0731 17:53:20.701742   57809 node_conditions.go:123] node cpu capacity is 2
	I0731 17:53:20.701754   57809 node_conditions.go:105] duration metric: took 10.141052ms to run NodePressure ...
	I0731 17:53:20.701774   57809 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0731 17:53:20.988912   57809 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0731 17:53:20.992879   57809 kubeadm.go:739] kubelet initialised
	I0731 17:53:20.992900   57809 kubeadm.go:740] duration metric: took 3.962609ms waiting for restarted kubelet to initialise ...
	I0731 17:53:20.992910   57809 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0731 17:53:20.998012   57809 pod_ready.go:78] waiting up to 4m0s for pod "coredns-7db6d8ff4d-x4692" in "kube-system" namespace to be "Ready" ...
	I0731 17:53:23.005046   57809 pod_ready.go:102] pod "coredns-7db6d8ff4d-x4692" in "kube-system" namespace has status "Ready":"False"
	I0731 17:53:20.524126   58027 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0731 17:53:20.524270   58027 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 17:53:20.524318   58027 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 17:53:20.539181   58027 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39273
	I0731 17:53:20.539587   58027 main.go:141] libmachine: () Calling .GetVersion
	I0731 17:53:20.540126   58027 main.go:141] libmachine: Using API Version  1
	I0731 17:53:20.540148   58027 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 17:53:20.540464   58027 main.go:141] libmachine: () Calling .GetMachineName
	I0731 17:53:20.540663   58027 main.go:141] libmachine: (auto-985288) Calling .GetMachineName
	I0731 17:53:20.540814   58027 main.go:141] libmachine: (auto-985288) Calling .DriverName
	I0731 17:53:20.540975   58027 start.go:159] libmachine.API.Create for "auto-985288" (driver="kvm2")
	I0731 17:53:20.541010   58027 client.go:168] LocalClient.Create starting
	I0731 17:53:20.541091   58027 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19349-8084/.minikube/certs/ca.pem
	I0731 17:53:20.541160   58027 main.go:141] libmachine: Decoding PEM data...
	I0731 17:53:20.541188   58027 main.go:141] libmachine: Parsing certificate...
	I0731 17:53:20.541269   58027 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19349-8084/.minikube/certs/cert.pem
	I0731 17:53:20.541296   58027 main.go:141] libmachine: Decoding PEM data...
	I0731 17:53:20.541334   58027 main.go:141] libmachine: Parsing certificate...
	I0731 17:53:20.541365   58027 main.go:141] libmachine: Running pre-create checks...
	I0731 17:53:20.541376   58027 main.go:141] libmachine: (auto-985288) Calling .PreCreateCheck
	I0731 17:53:20.541746   58027 main.go:141] libmachine: (auto-985288) Calling .GetConfigRaw
	I0731 17:53:20.542185   58027 main.go:141] libmachine: Creating machine...
	I0731 17:53:20.542200   58027 main.go:141] libmachine: (auto-985288) Calling .Create
	I0731 17:53:20.542341   58027 main.go:141] libmachine: (auto-985288) Creating KVM machine...
	I0731 17:53:20.543605   58027 main.go:141] libmachine: (auto-985288) DBG | found existing default KVM network
	I0731 17:53:20.544733   58027 main.go:141] libmachine: (auto-985288) DBG | I0731 17:53:20.544584   58050 network.go:211] skipping subnet 192.168.39.0/24 that is taken: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName:virbr3 IfaceIPv4:192.168.39.1 IfaceMTU:1500 IfaceMAC:52:54:00:ce:a1:16} reservation:<nil>}
	I0731 17:53:20.545748   58027 main.go:141] libmachine: (auto-985288) DBG | I0731 17:53:20.545658   58050 network.go:211] skipping subnet 192.168.50.0/24 that is taken: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 IsPrivate:true Interface:{IfaceName:virbr4 IfaceIPv4:192.168.50.1 IfaceMTU:1500 IfaceMAC:52:54:00:df:c5:c3} reservation:<nil>}
	I0731 17:53:20.546460   58027 main.go:141] libmachine: (auto-985288) DBG | I0731 17:53:20.546368   58050 network.go:211] skipping subnet 192.168.61.0/24 that is taken: &{IP:192.168.61.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.61.0/24 Gateway:192.168.61.1 ClientMin:192.168.61.2 ClientMax:192.168.61.254 Broadcast:192.168.61.255 IsPrivate:true Interface:{IfaceName:virbr2 IfaceIPv4:192.168.61.1 IfaceMTU:1500 IfaceMAC:52:54:00:a3:ff:c2} reservation:<nil>}
	I0731 17:53:20.547481   58027 main.go:141] libmachine: (auto-985288) DBG | I0731 17:53:20.547407   58050 network.go:206] using free private subnet 192.168.72.0/24: &{IP:192.168.72.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.72.0/24 Gateway:192.168.72.1 ClientMin:192.168.72.2 ClientMax:192.168.72.254 Broadcast:192.168.72.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000289ad0}
	I0731 17:53:20.547546   58027 main.go:141] libmachine: (auto-985288) DBG | created network xml: 
	I0731 17:53:20.547580   58027 main.go:141] libmachine: (auto-985288) DBG | <network>
	I0731 17:53:20.547598   58027 main.go:141] libmachine: (auto-985288) DBG |   <name>mk-auto-985288</name>
	I0731 17:53:20.547608   58027 main.go:141] libmachine: (auto-985288) DBG |   <dns enable='no'/>
	I0731 17:53:20.547623   58027 main.go:141] libmachine: (auto-985288) DBG |   
	I0731 17:53:20.547633   58027 main.go:141] libmachine: (auto-985288) DBG |   <ip address='192.168.72.1' netmask='255.255.255.0'>
	I0731 17:53:20.547645   58027 main.go:141] libmachine: (auto-985288) DBG |     <dhcp>
	I0731 17:53:20.547654   58027 main.go:141] libmachine: (auto-985288) DBG |       <range start='192.168.72.2' end='192.168.72.253'/>
	I0731 17:53:20.547674   58027 main.go:141] libmachine: (auto-985288) DBG |     </dhcp>
	I0731 17:53:20.547687   58027 main.go:141] libmachine: (auto-985288) DBG |   </ip>
	I0731 17:53:20.547693   58027 main.go:141] libmachine: (auto-985288) DBG |   
	I0731 17:53:20.547701   58027 main.go:141] libmachine: (auto-985288) DBG | </network>
	I0731 17:53:20.547709   58027 main.go:141] libmachine: (auto-985288) DBG | 
	I0731 17:53:20.552793   58027 main.go:141] libmachine: (auto-985288) DBG | trying to create private KVM network mk-auto-985288 192.168.72.0/24...
	I0731 17:53:20.636887   58027 main.go:141] libmachine: (auto-985288) DBG | private KVM network mk-auto-985288 192.168.72.0/24 created
	I0731 17:53:20.636916   58027 main.go:141] libmachine: (auto-985288) DBG | I0731 17:53:20.636840   58050 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19349-8084/.minikube
	I0731 17:53:20.636929   58027 main.go:141] libmachine: (auto-985288) Setting up store path in /home/jenkins/minikube-integration/19349-8084/.minikube/machines/auto-985288 ...
	I0731 17:53:20.636948   58027 main.go:141] libmachine: (auto-985288) Building disk image from file:///home/jenkins/minikube-integration/19349-8084/.minikube/cache/iso/amd64/minikube-v1.33.1-1722248113-19339-amd64.iso
	I0731 17:53:20.637016   58027 main.go:141] libmachine: (auto-985288) Downloading /home/jenkins/minikube-integration/19349-8084/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19349-8084/.minikube/cache/iso/amd64/minikube-v1.33.1-1722248113-19339-amd64.iso...
	I0731 17:53:20.886700   58027 main.go:141] libmachine: (auto-985288) DBG | I0731 17:53:20.886562   58050 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19349-8084/.minikube/machines/auto-985288/id_rsa...
	I0731 17:53:21.037677   58027 main.go:141] libmachine: (auto-985288) DBG | I0731 17:53:21.037528   58050 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19349-8084/.minikube/machines/auto-985288/auto-985288.rawdisk...
	I0731 17:53:21.037711   58027 main.go:141] libmachine: (auto-985288) DBG | Writing magic tar header
	I0731 17:53:21.037731   58027 main.go:141] libmachine: (auto-985288) DBG | Writing SSH key tar header
	I0731 17:53:21.037756   58027 main.go:141] libmachine: (auto-985288) DBG | I0731 17:53:21.037713   58050 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19349-8084/.minikube/machines/auto-985288 ...
	I0731 17:53:21.037866   58027 main.go:141] libmachine: (auto-985288) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19349-8084/.minikube/machines/auto-985288
	I0731 17:53:21.037889   58027 main.go:141] libmachine: (auto-985288) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19349-8084/.minikube/machines
	I0731 17:53:21.037898   58027 main.go:141] libmachine: (auto-985288) Setting executable bit set on /home/jenkins/minikube-integration/19349-8084/.minikube/machines/auto-985288 (perms=drwx------)
	I0731 17:53:21.037908   58027 main.go:141] libmachine: (auto-985288) Setting executable bit set on /home/jenkins/minikube-integration/19349-8084/.minikube/machines (perms=drwxr-xr-x)
	I0731 17:53:21.037929   58027 main.go:141] libmachine: (auto-985288) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19349-8084/.minikube
	I0731 17:53:21.037944   58027 main.go:141] libmachine: (auto-985288) Setting executable bit set on /home/jenkins/minikube-integration/19349-8084/.minikube (perms=drwxr-xr-x)
	I0731 17:53:21.037956   58027 main.go:141] libmachine: (auto-985288) Setting executable bit set on /home/jenkins/minikube-integration/19349-8084 (perms=drwxrwxr-x)
	I0731 17:53:21.037966   58027 main.go:141] libmachine: (auto-985288) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0731 17:53:21.037972   58027 main.go:141] libmachine: (auto-985288) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19349-8084
	I0731 17:53:21.037980   58027 main.go:141] libmachine: (auto-985288) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0731 17:53:21.037986   58027 main.go:141] libmachine: (auto-985288) DBG | Checking permissions on dir: /home/jenkins
	I0731 17:53:21.037993   58027 main.go:141] libmachine: (auto-985288) DBG | Checking permissions on dir: /home
	I0731 17:53:21.038011   58027 main.go:141] libmachine: (auto-985288) DBG | Skipping /home - not owner
	I0731 17:53:21.038025   58027 main.go:141] libmachine: (auto-985288) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0731 17:53:21.038056   58027 main.go:141] libmachine: (auto-985288) Creating domain...
	I0731 17:53:21.039041   58027 main.go:141] libmachine: (auto-985288) define libvirt domain using xml: 
	I0731 17:53:21.039063   58027 main.go:141] libmachine: (auto-985288) <domain type='kvm'>
	I0731 17:53:21.039073   58027 main.go:141] libmachine: (auto-985288)   <name>auto-985288</name>
	I0731 17:53:21.039082   58027 main.go:141] libmachine: (auto-985288)   <memory unit='MiB'>3072</memory>
	I0731 17:53:21.039133   58027 main.go:141] libmachine: (auto-985288)   <vcpu>2</vcpu>
	I0731 17:53:21.039151   58027 main.go:141] libmachine: (auto-985288)   <features>
	I0731 17:53:21.039162   58027 main.go:141] libmachine: (auto-985288)     <acpi/>
	I0731 17:53:21.039170   58027 main.go:141] libmachine: (auto-985288)     <apic/>
	I0731 17:53:21.039183   58027 main.go:141] libmachine: (auto-985288)     <pae/>
	I0731 17:53:21.039198   58027 main.go:141] libmachine: (auto-985288)     
	I0731 17:53:21.039214   58027 main.go:141] libmachine: (auto-985288)   </features>
	I0731 17:53:21.039227   58027 main.go:141] libmachine: (auto-985288)   <cpu mode='host-passthrough'>
	I0731 17:53:21.039259   58027 main.go:141] libmachine: (auto-985288)   
	I0731 17:53:21.039281   58027 main.go:141] libmachine: (auto-985288)   </cpu>
	I0731 17:53:21.039290   58027 main.go:141] libmachine: (auto-985288)   <os>
	I0731 17:53:21.039305   58027 main.go:141] libmachine: (auto-985288)     <type>hvm</type>
	I0731 17:53:21.039317   58027 main.go:141] libmachine: (auto-985288)     <boot dev='cdrom'/>
	I0731 17:53:21.039327   58027 main.go:141] libmachine: (auto-985288)     <boot dev='hd'/>
	I0731 17:53:21.039340   58027 main.go:141] libmachine: (auto-985288)     <bootmenu enable='no'/>
	I0731 17:53:21.039349   58027 main.go:141] libmachine: (auto-985288)   </os>
	I0731 17:53:21.039355   58027 main.go:141] libmachine: (auto-985288)   <devices>
	I0731 17:53:21.039365   58027 main.go:141] libmachine: (auto-985288)     <disk type='file' device='cdrom'>
	I0731 17:53:21.039382   58027 main.go:141] libmachine: (auto-985288)       <source file='/home/jenkins/minikube-integration/19349-8084/.minikube/machines/auto-985288/boot2docker.iso'/>
	I0731 17:53:21.039403   58027 main.go:141] libmachine: (auto-985288)       <target dev='hdc' bus='scsi'/>
	I0731 17:53:21.039415   58027 main.go:141] libmachine: (auto-985288)       <readonly/>
	I0731 17:53:21.039424   58027 main.go:141] libmachine: (auto-985288)     </disk>
	I0731 17:53:21.039435   58027 main.go:141] libmachine: (auto-985288)     <disk type='file' device='disk'>
	I0731 17:53:21.039447   58027 main.go:141] libmachine: (auto-985288)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0731 17:53:21.039462   58027 main.go:141] libmachine: (auto-985288)       <source file='/home/jenkins/minikube-integration/19349-8084/.minikube/machines/auto-985288/auto-985288.rawdisk'/>
	I0731 17:53:21.039477   58027 main.go:141] libmachine: (auto-985288)       <target dev='hda' bus='virtio'/>
	I0731 17:53:21.039489   58027 main.go:141] libmachine: (auto-985288)     </disk>
	I0731 17:53:21.039498   58027 main.go:141] libmachine: (auto-985288)     <interface type='network'>
	I0731 17:53:21.039508   58027 main.go:141] libmachine: (auto-985288)       <source network='mk-auto-985288'/>
	I0731 17:53:21.039517   58027 main.go:141] libmachine: (auto-985288)       <model type='virtio'/>
	I0731 17:53:21.039526   58027 main.go:141] libmachine: (auto-985288)     </interface>
	I0731 17:53:21.039535   58027 main.go:141] libmachine: (auto-985288)     <interface type='network'>
	I0731 17:53:21.039542   58027 main.go:141] libmachine: (auto-985288)       <source network='default'/>
	I0731 17:53:21.039555   58027 main.go:141] libmachine: (auto-985288)       <model type='virtio'/>
	I0731 17:53:21.039567   58027 main.go:141] libmachine: (auto-985288)     </interface>
	I0731 17:53:21.039574   58027 main.go:141] libmachine: (auto-985288)     <serial type='pty'>
	I0731 17:53:21.039586   58027 main.go:141] libmachine: (auto-985288)       <target port='0'/>
	I0731 17:53:21.039599   58027 main.go:141] libmachine: (auto-985288)     </serial>
	I0731 17:53:21.039610   58027 main.go:141] libmachine: (auto-985288)     <console type='pty'>
	I0731 17:53:21.039620   58027 main.go:141] libmachine: (auto-985288)       <target type='serial' port='0'/>
	I0731 17:53:21.039640   58027 main.go:141] libmachine: (auto-985288)     </console>
	I0731 17:53:21.039653   58027 main.go:141] libmachine: (auto-985288)     <rng model='virtio'>
	I0731 17:53:21.039668   58027 main.go:141] libmachine: (auto-985288)       <backend model='random'>/dev/random</backend>
	I0731 17:53:21.039679   58027 main.go:141] libmachine: (auto-985288)     </rng>
	I0731 17:53:21.039690   58027 main.go:141] libmachine: (auto-985288)     
	I0731 17:53:21.039699   58027 main.go:141] libmachine: (auto-985288)     
	I0731 17:53:21.039708   58027 main.go:141] libmachine: (auto-985288)   </devices>
	I0731 17:53:21.039722   58027 main.go:141] libmachine: (auto-985288) </domain>
	I0731 17:53:21.039736   58027 main.go:141] libmachine: (auto-985288) 
	I0731 17:53:21.043862   58027 main.go:141] libmachine: (auto-985288) DBG | domain auto-985288 has defined MAC address 52:54:00:df:47:c1 in network default
	I0731 17:53:21.044401   58027 main.go:141] libmachine: (auto-985288) Ensuring networks are active...
	I0731 17:53:21.044426   58027 main.go:141] libmachine: (auto-985288) DBG | domain auto-985288 has defined MAC address 52:54:00:1d:b8:77 in network mk-auto-985288
	I0731 17:53:21.045152   58027 main.go:141] libmachine: (auto-985288) Ensuring network default is active
	I0731 17:53:21.045584   58027 main.go:141] libmachine: (auto-985288) Ensuring network mk-auto-985288 is active
	I0731 17:53:21.046153   58027 main.go:141] libmachine: (auto-985288) Getting domain xml...
	I0731 17:53:21.046906   58027 main.go:141] libmachine: (auto-985288) Creating domain...
	I0731 17:53:22.252738   58027 main.go:141] libmachine: (auto-985288) Waiting to get IP...
	I0731 17:53:22.253445   58027 main.go:141] libmachine: (auto-985288) DBG | domain auto-985288 has defined MAC address 52:54:00:1d:b8:77 in network mk-auto-985288
	I0731 17:53:22.253898   58027 main.go:141] libmachine: (auto-985288) DBG | unable to find current IP address of domain auto-985288 in network mk-auto-985288
	I0731 17:53:22.253922   58027 main.go:141] libmachine: (auto-985288) DBG | I0731 17:53:22.253881   58050 retry.go:31] will retry after 202.666937ms: waiting for machine to come up
	I0731 17:53:22.458336   58027 main.go:141] libmachine: (auto-985288) DBG | domain auto-985288 has defined MAC address 52:54:00:1d:b8:77 in network mk-auto-985288
	I0731 17:53:22.459071   58027 main.go:141] libmachine: (auto-985288) DBG | unable to find current IP address of domain auto-985288 in network mk-auto-985288
	I0731 17:53:22.459124   58027 main.go:141] libmachine: (auto-985288) DBG | I0731 17:53:22.459028   58050 retry.go:31] will retry after 294.670213ms: waiting for machine to come up
	I0731 17:53:22.755542   58027 main.go:141] libmachine: (auto-985288) DBG | domain auto-985288 has defined MAC address 52:54:00:1d:b8:77 in network mk-auto-985288
	I0731 17:53:22.756015   58027 main.go:141] libmachine: (auto-985288) DBG | unable to find current IP address of domain auto-985288 in network mk-auto-985288
	I0731 17:53:22.756048   58027 main.go:141] libmachine: (auto-985288) DBG | I0731 17:53:22.755958   58050 retry.go:31] will retry after 433.550274ms: waiting for machine to come up
	I0731 17:53:23.191260   58027 main.go:141] libmachine: (auto-985288) DBG | domain auto-985288 has defined MAC address 52:54:00:1d:b8:77 in network mk-auto-985288
	I0731 17:53:23.191749   58027 main.go:141] libmachine: (auto-985288) DBG | unable to find current IP address of domain auto-985288 in network mk-auto-985288
	I0731 17:53:23.191769   58027 main.go:141] libmachine: (auto-985288) DBG | I0731 17:53:23.191707   58050 retry.go:31] will retry after 544.510943ms: waiting for machine to come up
	I0731 17:53:23.737502   58027 main.go:141] libmachine: (auto-985288) DBG | domain auto-985288 has defined MAC address 52:54:00:1d:b8:77 in network mk-auto-985288
	I0731 17:53:23.737936   58027 main.go:141] libmachine: (auto-985288) DBG | unable to find current IP address of domain auto-985288 in network mk-auto-985288
	I0731 17:53:23.737961   58027 main.go:141] libmachine: (auto-985288) DBG | I0731 17:53:23.737887   58050 retry.go:31] will retry after 521.050458ms: waiting for machine to come up
	I0731 17:53:24.260215   58027 main.go:141] libmachine: (auto-985288) DBG | domain auto-985288 has defined MAC address 52:54:00:1d:b8:77 in network mk-auto-985288
	I0731 17:53:24.260774   58027 main.go:141] libmachine: (auto-985288) DBG | unable to find current IP address of domain auto-985288 in network mk-auto-985288
	I0731 17:53:24.260819   58027 main.go:141] libmachine: (auto-985288) DBG | I0731 17:53:24.260724   58050 retry.go:31] will retry after 602.20301ms: waiting for machine to come up
	I0731 17:53:24.864365   58027 main.go:141] libmachine: (auto-985288) DBG | domain auto-985288 has defined MAC address 52:54:00:1d:b8:77 in network mk-auto-985288
	I0731 17:53:24.864918   58027 main.go:141] libmachine: (auto-985288) DBG | unable to find current IP address of domain auto-985288 in network mk-auto-985288
	I0731 17:53:24.864944   58027 main.go:141] libmachine: (auto-985288) DBG | I0731 17:53:24.864868   58050 retry.go:31] will retry after 728.659942ms: waiting for machine to come up
	I0731 17:53:24.589594   57552 api_server.go:269] stopped: https://192.168.50.211:8443/healthz: Get "https://192.168.50.211:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 17:53:24.589633   57552 api_server.go:253] Checking apiserver healthz at https://192.168.50.211:8443/healthz ...
	I0731 17:53:25.504799   57809 pod_ready.go:102] pod "coredns-7db6d8ff4d-x4692" in "kube-system" namespace has status "Ready":"False"
	I0731 17:53:28.004917   57809 pod_ready.go:102] pod "coredns-7db6d8ff4d-x4692" in "kube-system" namespace has status "Ready":"False"
	I0731 17:53:25.595220   58027 main.go:141] libmachine: (auto-985288) DBG | domain auto-985288 has defined MAC address 52:54:00:1d:b8:77 in network mk-auto-985288
	I0731 17:53:25.595734   58027 main.go:141] libmachine: (auto-985288) DBG | unable to find current IP address of domain auto-985288 in network mk-auto-985288
	I0731 17:53:25.595761   58027 main.go:141] libmachine: (auto-985288) DBG | I0731 17:53:25.595688   58050 retry.go:31] will retry after 1.009743689s: waiting for machine to come up
	I0731 17:53:26.606978   58027 main.go:141] libmachine: (auto-985288) DBG | domain auto-985288 has defined MAC address 52:54:00:1d:b8:77 in network mk-auto-985288
	I0731 17:53:26.607645   58027 main.go:141] libmachine: (auto-985288) DBG | unable to find current IP address of domain auto-985288 in network mk-auto-985288
	I0731 17:53:26.607675   58027 main.go:141] libmachine: (auto-985288) DBG | I0731 17:53:26.607569   58050 retry.go:31] will retry after 1.209050974s: waiting for machine to come up
	I0731 17:53:27.819022   58027 main.go:141] libmachine: (auto-985288) DBG | domain auto-985288 has defined MAC address 52:54:00:1d:b8:77 in network mk-auto-985288
	I0731 17:53:27.819516   58027 main.go:141] libmachine: (auto-985288) DBG | unable to find current IP address of domain auto-985288 in network mk-auto-985288
	I0731 17:53:27.819542   58027 main.go:141] libmachine: (auto-985288) DBG | I0731 17:53:27.819470   58050 retry.go:31] will retry after 1.939067792s: waiting for machine to come up
	I0731 17:53:29.760468   58027 main.go:141] libmachine: (auto-985288) DBG | domain auto-985288 has defined MAC address 52:54:00:1d:b8:77 in network mk-auto-985288
	I0731 17:53:29.760957   58027 main.go:141] libmachine: (auto-985288) DBG | unable to find current IP address of domain auto-985288 in network mk-auto-985288
	I0731 17:53:29.760989   58027 main.go:141] libmachine: (auto-985288) DBG | I0731 17:53:29.760890   58050 retry.go:31] will retry after 2.639081106s: waiting for machine to come up
	I0731 17:53:30.669363   56427 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0731 17:53:30.669479   56427 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0731 17:53:30.671199   56427 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0731 17:53:30.671271   56427 kubeadm.go:310] [preflight] Running pre-flight checks
	I0731 17:53:30.671373   56427 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0731 17:53:30.671521   56427 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0731 17:53:30.671663   56427 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0731 17:53:30.671747   56427 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0731 17:53:30.673172   56427 out.go:204]   - Generating certificates and keys ...
	I0731 17:53:30.673272   56427 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0731 17:53:30.673367   56427 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0731 17:53:30.673481   56427 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0731 17:53:30.673571   56427 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0731 17:53:30.673654   56427 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0731 17:53:30.673720   56427 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0731 17:53:30.673792   56427 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0731 17:53:30.673979   56427 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [kubernetes-upgrade-410576 localhost] and IPs [192.168.61.234 127.0.0.1 ::1]
	I0731 17:53:30.674048   56427 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0731 17:53:30.674211   56427 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [kubernetes-upgrade-410576 localhost] and IPs [192.168.61.234 127.0.0.1 ::1]
	I0731 17:53:30.674297   56427 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0731 17:53:30.674373   56427 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0731 17:53:30.674430   56427 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0731 17:53:30.674500   56427 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0731 17:53:30.674562   56427 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0731 17:53:30.674635   56427 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0731 17:53:30.674712   56427 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0731 17:53:30.674781   56427 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0731 17:53:30.674906   56427 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0731 17:53:30.675013   56427 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0731 17:53:30.675061   56427 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0731 17:53:30.675177   56427 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0731 17:53:29.590505   57552 api_server.go:269] stopped: https://192.168.50.211:8443/healthz: Get "https://192.168.50.211:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 17:53:29.590554   57552 api_server.go:253] Checking apiserver healthz at https://192.168.50.211:8443/healthz ...
	I0731 17:53:29.627816   57552 api_server.go:269] stopped: https://192.168.50.211:8443/healthz: Get "https://192.168.50.211:8443/healthz": read tcp 192.168.50.1:48112->192.168.50.211:8443: read: connection reset by peer
	I0731 17:53:30.088391   57552 api_server.go:253] Checking apiserver healthz at https://192.168.50.211:8443/healthz ...
	I0731 17:53:30.089020   57552 api_server.go:269] stopped: https://192.168.50.211:8443/healthz: Get "https://192.168.50.211:8443/healthz": dial tcp 192.168.50.211:8443: connect: connection refused
	I0731 17:53:30.588291   57552 api_server.go:253] Checking apiserver healthz at https://192.168.50.211:8443/healthz ...
	I0731 17:53:30.588897   57552 api_server.go:269] stopped: https://192.168.50.211:8443/healthz: Get "https://192.168.50.211:8443/healthz": dial tcp 192.168.50.211:8443: connect: connection refused
	I0731 17:53:31.088313   57552 api_server.go:253] Checking apiserver healthz at https://192.168.50.211:8443/healthz ...
	I0731 17:53:30.676819   56427 out.go:204]   - Booting up control plane ...
	I0731 17:53:30.676951   56427 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0731 17:53:30.677067   56427 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0731 17:53:30.677180   56427 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0731 17:53:30.677288   56427 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0731 17:53:30.677482   56427 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0731 17:53:30.677569   56427 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0731 17:53:30.677674   56427 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0731 17:53:30.677927   56427 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0731 17:53:30.678029   56427 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0731 17:53:30.678309   56427 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0731 17:53:30.678417   56427 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0731 17:53:30.678693   56427 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0731 17:53:30.678795   56427 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0731 17:53:30.679037   56427 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0731 17:53:30.679169   56427 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0731 17:53:30.679449   56427 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0731 17:53:30.679466   56427 kubeadm.go:310] 
	I0731 17:53:30.679529   56427 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0731 17:53:30.679595   56427 kubeadm.go:310] 		timed out waiting for the condition
	I0731 17:53:30.679617   56427 kubeadm.go:310] 
	I0731 17:53:30.679667   56427 kubeadm.go:310] 	This error is likely caused by:
	I0731 17:53:30.679721   56427 kubeadm.go:310] 		- The kubelet is not running
	I0731 17:53:30.679868   56427 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0731 17:53:30.679879   56427 kubeadm.go:310] 
	I0731 17:53:30.680020   56427 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0731 17:53:30.680065   56427 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0731 17:53:30.680108   56427 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0731 17:53:30.680125   56427 kubeadm.go:310] 
	I0731 17:53:30.680269   56427 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0731 17:53:30.680380   56427 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0731 17:53:30.680390   56427 kubeadm.go:310] 
	I0731 17:53:30.680545   56427 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0731 17:53:30.680706   56427 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0731 17:53:30.680802   56427 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0731 17:53:30.680908   56427 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0731 17:53:30.680962   56427 kubeadm.go:310] 
	W0731 17:53:30.681055   56427 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [kubernetes-upgrade-410576 localhost] and IPs [192.168.61.234 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [kubernetes-upgrade-410576 localhost] and IPs [192.168.61.234 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0731 17:53:30.681110   56427 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0731 17:53:31.164236   56427 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0731 17:53:31.182825   56427 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0731 17:53:31.196478   56427 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0731 17:53:31.196500   56427 kubeadm.go:157] found existing configuration files:
	
	I0731 17:53:31.196550   56427 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0731 17:53:31.208319   56427 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0731 17:53:31.208398   56427 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0731 17:53:31.221883   56427 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0731 17:53:31.232768   56427 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0731 17:53:31.232832   56427 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0731 17:53:31.245495   56427 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0731 17:53:31.257689   56427 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0731 17:53:31.257770   56427 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0731 17:53:31.270092   56427 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0731 17:53:31.282965   56427 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0731 17:53:31.283041   56427 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0731 17:53:31.296666   56427 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0731 17:53:31.396524   56427 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0731 17:53:31.396638   56427 kubeadm.go:310] [preflight] Running pre-flight checks
	I0731 17:53:31.587380   56427 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0731 17:53:31.587500   56427 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0731 17:53:31.587598   56427 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0731 17:53:31.828871   56427 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0731 17:53:31.830486   56427 out.go:204]   - Generating certificates and keys ...
	I0731 17:53:31.830631   56427 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0731 17:53:31.830759   56427 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0731 17:53:31.830881   56427 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0731 17:53:31.830966   56427 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0731 17:53:31.831741   56427 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0731 17:53:31.831946   56427 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0731 17:53:31.832319   56427 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0731 17:53:31.832603   56427 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0731 17:53:31.833031   56427 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0731 17:53:31.833610   56427 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0731 17:53:31.833782   56427 kubeadm.go:310] [certs] Using the existing "sa" key
	I0731 17:53:31.833861   56427 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0731 17:53:32.031783   56427 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0731 17:53:32.140338   56427 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0731 17:53:32.388575   56427 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0731 17:53:32.564520   56427 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0731 17:53:32.585651   56427 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0731 17:53:32.587038   56427 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0731 17:53:32.587130   56427 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0731 17:53:32.715426   56427 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0731 17:53:30.004886   57809 pod_ready.go:92] pod "coredns-7db6d8ff4d-x4692" in "kube-system" namespace has status "Ready":"True"
	I0731 17:53:30.004910   57809 pod_ready.go:81] duration metric: took 9.006871906s for pod "coredns-7db6d8ff4d-x4692" in "kube-system" namespace to be "Ready" ...
	I0731 17:53:30.004919   57809 pod_ready.go:78] waiting up to 4m0s for pod "etcd-pause-957141" in "kube-system" namespace to be "Ready" ...
	I0731 17:53:32.012047   57809 pod_ready.go:102] pod "etcd-pause-957141" in "kube-system" namespace has status "Ready":"False"
	I0731 17:53:34.011580   57809 pod_ready.go:92] pod "etcd-pause-957141" in "kube-system" namespace has status "Ready":"True"
	I0731 17:53:34.011603   57809 pod_ready.go:81] duration metric: took 4.006677522s for pod "etcd-pause-957141" in "kube-system" namespace to be "Ready" ...
	I0731 17:53:34.011611   57809 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-pause-957141" in "kube-system" namespace to be "Ready" ...
	I0731 17:53:34.016677   57809 pod_ready.go:92] pod "kube-apiserver-pause-957141" in "kube-system" namespace has status "Ready":"True"
	I0731 17:53:34.016698   57809 pod_ready.go:81] duration metric: took 5.07976ms for pod "kube-apiserver-pause-957141" in "kube-system" namespace to be "Ready" ...
	I0731 17:53:34.016711   57809 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-pause-957141" in "kube-system" namespace to be "Ready" ...
	I0731 17:53:34.021489   57809 pod_ready.go:92] pod "kube-controller-manager-pause-957141" in "kube-system" namespace has status "Ready":"True"
	I0731 17:53:34.021515   57809 pod_ready.go:81] duration metric: took 4.789749ms for pod "kube-controller-manager-pause-957141" in "kube-system" namespace to be "Ready" ...
	I0731 17:53:34.021527   57809 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-7pdbj" in "kube-system" namespace to be "Ready" ...
	I0731 17:53:34.025834   57809 pod_ready.go:92] pod "kube-proxy-7pdbj" in "kube-system" namespace has status "Ready":"True"
	I0731 17:53:34.025851   57809 pod_ready.go:81] duration metric: took 4.317306ms for pod "kube-proxy-7pdbj" in "kube-system" namespace to be "Ready" ...
	I0731 17:53:34.025864   57809 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-pause-957141" in "kube-system" namespace to be "Ready" ...
	I0731 17:53:34.030168   57809 pod_ready.go:92] pod "kube-scheduler-pause-957141" in "kube-system" namespace has status "Ready":"True"
	I0731 17:53:34.030187   57809 pod_ready.go:81] duration metric: took 4.314742ms for pod "kube-scheduler-pause-957141" in "kube-system" namespace to be "Ready" ...
	I0731 17:53:34.030195   57809 pod_ready.go:38] duration metric: took 13.037275114s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0731 17:53:34.030217   57809 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0731 17:53:34.041935   57809 ops.go:34] apiserver oom_adj: -16
	I0731 17:53:34.041955   57809 kubeadm.go:597] duration metric: took 19.537000795s to restartPrimaryControlPlane
	I0731 17:53:34.041964   57809 kubeadm.go:394] duration metric: took 19.636195508s to StartCluster
	I0731 17:53:34.041979   57809 settings.go:142] acquiring lock: {Name:mk8ac8269296bf0b887a00ff74caaad180005f89 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 17:53:34.042061   57809 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19349-8084/kubeconfig
	I0731 17:53:34.043186   57809 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19349-8084/kubeconfig: {Name:mkf2340bc1ada3c683f132aca37228d7588e1fdb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 17:53:34.043501   57809 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.24 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0731 17:53:34.043616   57809 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0731 17:53:34.043754   57809 config.go:182] Loaded profile config "pause-957141": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0731 17:53:34.045315   57809 out.go:177] * Verifying Kubernetes components...
	I0731 17:53:34.046104   57809 out.go:177] * Enabled addons: 
	I0731 17:53:34.046964   57809 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0731 17:53:34.047645   57809 addons.go:510] duration metric: took 4.031038ms for enable addons: enabled=[]
	I0731 17:53:34.222679   57809 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0731 17:53:34.239345   57809 node_ready.go:35] waiting up to 6m0s for node "pause-957141" to be "Ready" ...
	I0731 17:53:34.243478   57809 node_ready.go:49] node "pause-957141" has status "Ready":"True"
	I0731 17:53:34.243499   57809 node_ready.go:38] duration metric: took 4.113003ms for node "pause-957141" to be "Ready" ...
	I0731 17:53:34.243508   57809 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0731 17:53:34.411914   57809 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-x4692" in "kube-system" namespace to be "Ready" ...
	I0731 17:53:32.401518   58027 main.go:141] libmachine: (auto-985288) DBG | domain auto-985288 has defined MAC address 52:54:00:1d:b8:77 in network mk-auto-985288
	I0731 17:53:32.401982   58027 main.go:141] libmachine: (auto-985288) DBG | unable to find current IP address of domain auto-985288 in network mk-auto-985288
	I0731 17:53:32.402010   58027 main.go:141] libmachine: (auto-985288) DBG | I0731 17:53:32.401933   58050 retry.go:31] will retry after 3.235426967s: waiting for machine to come up
	I0731 17:53:34.603647   57552 api_server.go:279] https://192.168.50.211:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0731 17:53:34.603683   57552 api_server.go:103] status: https://192.168.50.211:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0731 17:53:34.603701   57552 api_server.go:253] Checking apiserver healthz at https://192.168.50.211:8443/healthz ...
	I0731 17:53:34.647165   57552 api_server.go:279] https://192.168.50.211:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0731 17:53:34.647208   57552 api_server.go:103] status: https://192.168.50.211:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0731 17:53:35.087665   57552 api_server.go:253] Checking apiserver healthz at https://192.168.50.211:8443/healthz ...
	I0731 17:53:35.092525   57552 api_server.go:279] https://192.168.50.211:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0731 17:53:35.092546   57552 api_server.go:103] status: https://192.168.50.211:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0731 17:53:35.587872   57552 api_server.go:253] Checking apiserver healthz at https://192.168.50.211:8443/healthz ...
	I0731 17:53:35.593019   57552 api_server.go:279] https://192.168.50.211:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0731 17:53:35.593064   57552 api_server.go:103] status: https://192.168.50.211:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0731 17:53:36.087831   57552 api_server.go:253] Checking apiserver healthz at https://192.168.50.211:8443/healthz ...
	I0731 17:53:36.092716   57552 api_server.go:279] https://192.168.50.211:8443/healthz returned 200:
	ok
	I0731 17:53:36.100048   57552 api_server.go:141] control plane version: v1.24.1
	I0731 17:53:36.100081   57552 api_server.go:131] duration metric: took 27.012571397s to wait for apiserver health ...
	I0731 17:53:36.100089   57552 cni.go:84] Creating CNI manager for ""
	I0731 17:53:36.100096   57552 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0731 17:53:36.102034   57552 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0731 17:53:36.103326   57552 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0731 17:53:36.111821   57552 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0731 17:53:36.125751   57552 system_pods.go:43] waiting for kube-system pods to appear ...
	I0731 17:53:36.135273   57552 system_pods.go:59] 5 kube-system pods found
	I0731 17:53:36.135300   57552 system_pods.go:61] "etcd-stopped-upgrade-246118" [90a9454d-5cda-4309-b75b-05cba2f3f016] Pending
	I0731 17:53:36.135305   57552 system_pods.go:61] "kube-apiserver-stopped-upgrade-246118" [681e771d-aa45-4c10-9661-c72225484be6] Pending
	I0731 17:53:36.135309   57552 system_pods.go:61] "kube-controller-manager-stopped-upgrade-246118" [48f4dc64-d2e7-4031-87bf-e05aa130b2e6] Pending
	I0731 17:53:36.135313   57552 system_pods.go:61] "kube-scheduler-stopped-upgrade-246118" [145706f5-830f-48c8-974a-088a77a21df4] Pending
	I0731 17:53:36.135319   57552 system_pods.go:61] "storage-provisioner" [8a03afe9-5333-4d1e-a786-8cc0d604c504] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I0731 17:53:36.135325   57552 system_pods.go:74] duration metric: took 9.557138ms to wait for pod list to return data ...
	I0731 17:53:36.135334   57552 node_conditions.go:102] verifying NodePressure condition ...
	I0731 17:53:36.138511   57552 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0731 17:53:36.138539   57552 node_conditions.go:123] node cpu capacity is 2
	I0731 17:53:36.138571   57552 node_conditions.go:105] duration metric: took 3.232274ms to run NodePressure ...
	I0731 17:53:36.138586   57552 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0731 17:53:36.309846   57552 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0731 17:53:36.319156   57552 ops.go:34] apiserver oom_adj: -16
	I0731 17:53:36.319182   57552 kubeadm.go:597] duration metric: took 30.65158708s to restartPrimaryControlPlane
	I0731 17:53:36.319193   57552 kubeadm.go:394] duration metric: took 30.690644994s to StartCluster
	I0731 17:53:36.319209   57552 settings.go:142] acquiring lock: {Name:mk8ac8269296bf0b887a00ff74caaad180005f89 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 17:53:36.319293   57552 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19349-8084/kubeconfig
	I0731 17:53:36.320086   57552 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19349-8084/kubeconfig: {Name:mkf2340bc1ada3c683f132aca37228d7588e1fdb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 17:53:36.320325   57552 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.50.211 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0731 17:53:36.320391   57552 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0731 17:53:36.320492   57552 addons.go:69] Setting storage-provisioner=true in profile "stopped-upgrade-246118"
	I0731 17:53:36.320522   57552 addons.go:234] Setting addon storage-provisioner=true in "stopped-upgrade-246118"
	I0731 17:53:36.320520   57552 addons.go:69] Setting default-storageclass=true in profile "stopped-upgrade-246118"
	I0731 17:53:36.320559   57552 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "stopped-upgrade-246118"
	I0731 17:53:36.320588   57552 config.go:182] Loaded profile config "stopped-upgrade-246118": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.24.1
	W0731 17:53:36.320533   57552 addons.go:243] addon storage-provisioner should already be in state true
	I0731 17:53:36.320643   57552 host.go:66] Checking if "stopped-upgrade-246118" exists ...
	I0731 17:53:36.320909   57552 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 17:53:36.320935   57552 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 17:53:36.320962   57552 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 17:53:36.321003   57552 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 17:53:36.322078   57552 out.go:177] * Verifying Kubernetes components...
	I0731 17:53:36.323355   57552 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0731 17:53:36.335630   57552 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45501
	I0731 17:53:36.336032   57552 main.go:141] libmachine: () Calling .GetVersion
	I0731 17:53:36.336383   57552 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42629
	I0731 17:53:36.336536   57552 main.go:141] libmachine: Using API Version  1
	I0731 17:53:36.336562   57552 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 17:53:36.336865   57552 main.go:141] libmachine: () Calling .GetVersion
	I0731 17:53:36.336900   57552 main.go:141] libmachine: () Calling .GetMachineName
	I0731 17:53:36.337174   57552 main.go:141] libmachine: (stopped-upgrade-246118) Calling .GetState
	I0731 17:53:36.337323   57552 main.go:141] libmachine: Using API Version  1
	I0731 17:53:36.337352   57552 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 17:53:36.337724   57552 main.go:141] libmachine: () Calling .GetMachineName
	I0731 17:53:36.338276   57552 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 17:53:36.338322   57552 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 17:53:36.339962   57552 kapi.go:59] client config for stopped-upgrade-246118: &rest.Config{Host:"https://192.168.50.211:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19349-8084/.minikube/profiles/stopped-upgrade-246118/client.crt", KeyFile:"/home/jenkins/minikube-integration/19349-8084/.minikube/profiles/stopped-upgrade-246118/client.key", CAFile:"/home/jenkins/minikube-integration/19349-8084/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1d02f40), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0731 17:53:36.340303   57552 addons.go:234] Setting addon default-storageclass=true in "stopped-upgrade-246118"
	W0731 17:53:36.340323   57552 addons.go:243] addon default-storageclass should already be in state true
	I0731 17:53:36.340351   57552 host.go:66] Checking if "stopped-upgrade-246118" exists ...
	I0731 17:53:36.340737   57552 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 17:53:36.340779   57552 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 17:53:36.354047   57552 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37049
	I0731 17:53:36.354587   57552 main.go:141] libmachine: () Calling .GetVersion
	I0731 17:53:36.354651   57552 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46423
	I0731 17:53:36.355126   57552 main.go:141] libmachine: Using API Version  1
	I0731 17:53:36.355149   57552 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 17:53:36.355187   57552 main.go:141] libmachine: () Calling .GetVersion
	I0731 17:53:36.355505   57552 main.go:141] libmachine: () Calling .GetMachineName
	I0731 17:53:36.355723   57552 main.go:141] libmachine: Using API Version  1
	I0731 17:53:36.355739   57552 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 17:53:36.355772   57552 main.go:141] libmachine: (stopped-upgrade-246118) Calling .GetState
	I0731 17:53:36.356064   57552 main.go:141] libmachine: () Calling .GetMachineName
	I0731 17:53:36.356803   57552 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 17:53:36.356834   57552 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 17:53:36.357797   57552 main.go:141] libmachine: (stopped-upgrade-246118) Calling .DriverName
	I0731 17:53:36.359941   57552 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0731 17:53:32.717427   56427 out.go:204]   - Booting up control plane ...
	I0731 17:53:32.717565   56427 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0731 17:53:32.724208   56427 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0731 17:53:32.727127   56427 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0731 17:53:32.728483   56427 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0731 17:53:32.730677   56427 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0731 17:53:36.361410   57552 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0731 17:53:36.361430   57552 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0731 17:53:36.361444   57552 main.go:141] libmachine: (stopped-upgrade-246118) Calling .GetSSHHostname
	I0731 17:53:36.364749   57552 main.go:141] libmachine: (stopped-upgrade-246118) DBG | domain stopped-upgrade-246118 has defined MAC address 52:54:00:8d:97:35 in network mk-stopped-upgrade-246118
	I0731 17:53:36.365243   57552 main.go:141] libmachine: (stopped-upgrade-246118) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8d:97:35", ip: ""} in network mk-stopped-upgrade-246118: {Iface:virbr4 ExpiryTime:2024-07-31 18:52:02 +0000 UTC Type:0 Mac:52:54:00:8d:97:35 Iaid: IPaddr:192.168.50.211 Prefix:24 Hostname:stopped-upgrade-246118 Clientid:01:52:54:00:8d:97:35}
	I0731 17:53:36.365263   57552 main.go:141] libmachine: (stopped-upgrade-246118) DBG | domain stopped-upgrade-246118 has defined IP address 192.168.50.211 and MAC address 52:54:00:8d:97:35 in network mk-stopped-upgrade-246118
	I0731 17:53:36.365457   57552 main.go:141] libmachine: (stopped-upgrade-246118) Calling .GetSSHPort
	I0731 17:53:36.365620   57552 main.go:141] libmachine: (stopped-upgrade-246118) Calling .GetSSHKeyPath
	I0731 17:53:36.365803   57552 main.go:141] libmachine: (stopped-upgrade-246118) Calling .GetSSHUsername
	I0731 17:53:36.365900   57552 sshutil.go:53] new ssh client: &{IP:192.168.50.211 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19349-8084/.minikube/machines/stopped-upgrade-246118/id_rsa Username:docker}
	I0731 17:53:36.372444   57552 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44735
	I0731 17:53:36.372842   57552 main.go:141] libmachine: () Calling .GetVersion
	I0731 17:53:36.373293   57552 main.go:141] libmachine: Using API Version  1
	I0731 17:53:36.373316   57552 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 17:53:36.373661   57552 main.go:141] libmachine: () Calling .GetMachineName
	I0731 17:53:36.373842   57552 main.go:141] libmachine: (stopped-upgrade-246118) Calling .GetState
	I0731 17:53:36.375354   57552 main.go:141] libmachine: (stopped-upgrade-246118) Calling .DriverName
	I0731 17:53:36.375560   57552 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0731 17:53:36.375573   57552 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0731 17:53:36.375585   57552 main.go:141] libmachine: (stopped-upgrade-246118) Calling .GetSSHHostname
	I0731 17:53:36.378203   57552 main.go:141] libmachine: (stopped-upgrade-246118) DBG | domain stopped-upgrade-246118 has defined MAC address 52:54:00:8d:97:35 in network mk-stopped-upgrade-246118
	I0731 17:53:36.378634   57552 main.go:141] libmachine: (stopped-upgrade-246118) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8d:97:35", ip: ""} in network mk-stopped-upgrade-246118: {Iface:virbr4 ExpiryTime:2024-07-31 18:52:02 +0000 UTC Type:0 Mac:52:54:00:8d:97:35 Iaid: IPaddr:192.168.50.211 Prefix:24 Hostname:stopped-upgrade-246118 Clientid:01:52:54:00:8d:97:35}
	I0731 17:53:36.378658   57552 main.go:141] libmachine: (stopped-upgrade-246118) DBG | domain stopped-upgrade-246118 has defined IP address 192.168.50.211 and MAC address 52:54:00:8d:97:35 in network mk-stopped-upgrade-246118
	I0731 17:53:36.378827   57552 main.go:141] libmachine: (stopped-upgrade-246118) Calling .GetSSHPort
	I0731 17:53:36.379003   57552 main.go:141] libmachine: (stopped-upgrade-246118) Calling .GetSSHKeyPath
	I0731 17:53:36.379536   57552 main.go:141] libmachine: (stopped-upgrade-246118) Calling .GetSSHUsername
	I0731 17:53:36.379682   57552 sshutil.go:53] new ssh client: &{IP:192.168.50.211 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19349-8084/.minikube/machines/stopped-upgrade-246118/id_rsa Username:docker}
	I0731 17:53:36.452355   57552 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0731 17:53:36.465227   57552 api_server.go:52] waiting for apiserver process to appear ...
	I0731 17:53:36.465340   57552 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 17:53:36.476656   57552 api_server.go:72] duration metric: took 156.299869ms to wait for apiserver process to appear ...
	I0731 17:53:36.476681   57552 api_server.go:88] waiting for apiserver healthz status ...
	I0731 17:53:36.476702   57552 api_server.go:253] Checking apiserver healthz at https://192.168.50.211:8443/healthz ...
	I0731 17:53:36.483228   57552 api_server.go:279] https://192.168.50.211:8443/healthz returned 200:
	ok
	I0731 17:53:36.484432   57552 api_server.go:141] control plane version: v1.24.1
	I0731 17:53:36.484454   57552 api_server.go:131] duration metric: took 7.76628ms to wait for apiserver health ...
	I0731 17:53:36.484464   57552 system_pods.go:43] waiting for kube-system pods to appear ...
	I0731 17:53:36.489415   57552 system_pods.go:59] 5 kube-system pods found
	I0731 17:53:36.489446   57552 system_pods.go:61] "etcd-stopped-upgrade-246118" [90a9454d-5cda-4309-b75b-05cba2f3f016] Pending
	I0731 17:53:36.489460   57552 system_pods.go:61] "kube-apiserver-stopped-upgrade-246118" [681e771d-aa45-4c10-9661-c72225484be6] Pending
	I0731 17:53:36.489465   57552 system_pods.go:61] "kube-controller-manager-stopped-upgrade-246118" [48f4dc64-d2e7-4031-87bf-e05aa130b2e6] Pending
	I0731 17:53:36.489470   57552 system_pods.go:61] "kube-scheduler-stopped-upgrade-246118" [145706f5-830f-48c8-974a-088a77a21df4] Pending
	I0731 17:53:36.489479   57552 system_pods.go:61] "storage-provisioner" [8a03afe9-5333-4d1e-a786-8cc0d604c504] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I0731 17:53:36.489493   57552 system_pods.go:74] duration metric: took 5.021149ms to wait for pod list to return data ...
	I0731 17:53:36.489508   57552 kubeadm.go:582] duration metric: took 169.15513ms to wait for: map[apiserver:true system_pods:true]
	I0731 17:53:36.489532   57552 node_conditions.go:102] verifying NodePressure condition ...
	I0731 17:53:36.492543   57552 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0731 17:53:36.492561   57552 node_conditions.go:123] node cpu capacity is 2
	I0731 17:53:36.492570   57552 node_conditions.go:105] duration metric: took 3.033807ms to run NodePressure ...
	I0731 17:53:36.492581   57552 start.go:241] waiting for startup goroutines ...
	I0731 17:53:36.543427   57552 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0731 17:53:36.599895   57552 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0731 17:53:37.421894   57552 main.go:141] libmachine: Making call to close driver server
	I0731 17:53:37.421925   57552 main.go:141] libmachine: (stopped-upgrade-246118) Calling .Close
	I0731 17:53:37.422203   57552 main.go:141] libmachine: Successfully made call to close driver server
	I0731 17:53:37.422257   57552 main.go:141] libmachine: (stopped-upgrade-246118) DBG | Closing plugin on server side
	I0731 17:53:37.422262   57552 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 17:53:37.422288   57552 main.go:141] libmachine: Making call to close driver server
	I0731 17:53:37.422298   57552 main.go:141] libmachine: (stopped-upgrade-246118) Calling .Close
	I0731 17:53:37.422524   57552 main.go:141] libmachine: Successfully made call to close driver server
	I0731 17:53:37.422536   57552 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 17:53:37.422560   57552 main.go:141] libmachine: (stopped-upgrade-246118) DBG | Closing plugin on server side
	I0731 17:53:37.428124   57552 main.go:141] libmachine: Making call to close driver server
	I0731 17:53:37.428147   57552 main.go:141] libmachine: (stopped-upgrade-246118) Calling .Close
	I0731 17:53:37.428400   57552 main.go:141] libmachine: Successfully made call to close driver server
	I0731 17:53:37.428418   57552 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 17:53:37.452535   57552 main.go:141] libmachine: Making call to close driver server
	I0731 17:53:37.452558   57552 main.go:141] libmachine: (stopped-upgrade-246118) Calling .Close
	I0731 17:53:37.452856   57552 main.go:141] libmachine: Successfully made call to close driver server
	I0731 17:53:37.452866   57552 main.go:141] libmachine: (stopped-upgrade-246118) DBG | Closing plugin on server side
	I0731 17:53:37.452878   57552 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 17:53:37.452889   57552 main.go:141] libmachine: Making call to close driver server
	I0731 17:53:37.452897   57552 main.go:141] libmachine: (stopped-upgrade-246118) Calling .Close
	I0731 17:53:37.453094   57552 main.go:141] libmachine: Successfully made call to close driver server
	I0731 17:53:37.453106   57552 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 17:53:37.454687   57552 out.go:177] * Enabled addons: default-storageclass, storage-provisioner
	I0731 17:53:37.456081   57552 addons.go:510] duration metric: took 1.135703252s for enable addons: enabled=[default-storageclass storage-provisioner]
	I0731 17:53:37.456110   57552 start.go:246] waiting for cluster config update ...
	I0731 17:53:37.456121   57552 start.go:255] writing updated cluster config ...
	I0731 17:53:37.456319   57552 ssh_runner.go:195] Run: rm -f paused
	I0731 17:53:37.500901   57552 start.go:600] kubectl: 1.30.3, cluster: 1.24.1 (minor skew: 6)
	I0731 17:53:37.502788   57552 out.go:177] 
	W0731 17:53:37.504163   57552 out.go:239] ! /usr/local/bin/kubectl is version 1.30.3, which may have incompatibilities with Kubernetes 1.24.1.
	I0731 17:53:37.505504   57552 out.go:177]   - Want kubectl v1.24.1? Try 'minikube kubectl -- get pods -A'
	I0731 17:53:37.506686   57552 out.go:177] * Done! kubectl is now configured to use "stopped-upgrade-246118" cluster and "default" namespace by default
	I0731 17:53:34.809037   57809 pod_ready.go:92] pod "coredns-7db6d8ff4d-x4692" in "kube-system" namespace has status "Ready":"True"
	I0731 17:53:34.809063   57809 pod_ready.go:81] duration metric: took 397.125827ms for pod "coredns-7db6d8ff4d-x4692" in "kube-system" namespace to be "Ready" ...
	I0731 17:53:34.809076   57809 pod_ready.go:78] waiting up to 6m0s for pod "etcd-pause-957141" in "kube-system" namespace to be "Ready" ...
	I0731 17:53:35.209253   57809 pod_ready.go:92] pod "etcd-pause-957141" in "kube-system" namespace has status "Ready":"True"
	I0731 17:53:35.209278   57809 pod_ready.go:81] duration metric: took 400.196324ms for pod "etcd-pause-957141" in "kube-system" namespace to be "Ready" ...
	I0731 17:53:35.209287   57809 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-pause-957141" in "kube-system" namespace to be "Ready" ...
	I0731 17:53:35.609203   57809 pod_ready.go:92] pod "kube-apiserver-pause-957141" in "kube-system" namespace has status "Ready":"True"
	I0731 17:53:35.609228   57809 pod_ready.go:81] duration metric: took 399.934116ms for pod "kube-apiserver-pause-957141" in "kube-system" namespace to be "Ready" ...
	I0731 17:53:35.609240   57809 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-pause-957141" in "kube-system" namespace to be "Ready" ...
	I0731 17:53:36.009533   57809 pod_ready.go:92] pod "kube-controller-manager-pause-957141" in "kube-system" namespace has status "Ready":"True"
	I0731 17:53:36.009555   57809 pod_ready.go:81] duration metric: took 400.308158ms for pod "kube-controller-manager-pause-957141" in "kube-system" namespace to be "Ready" ...
	I0731 17:53:36.009565   57809 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-7pdbj" in "kube-system" namespace to be "Ready" ...
	I0731 17:53:36.409899   57809 pod_ready.go:92] pod "kube-proxy-7pdbj" in "kube-system" namespace has status "Ready":"True"
	I0731 17:53:36.409919   57809 pod_ready.go:81] duration metric: took 400.348892ms for pod "kube-proxy-7pdbj" in "kube-system" namespace to be "Ready" ...
	I0731 17:53:36.409930   57809 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-pause-957141" in "kube-system" namespace to be "Ready" ...
	I0731 17:53:36.809747   57809 pod_ready.go:92] pod "kube-scheduler-pause-957141" in "kube-system" namespace has status "Ready":"True"
	I0731 17:53:36.809768   57809 pod_ready.go:81] duration metric: took 399.83249ms for pod "kube-scheduler-pause-957141" in "kube-system" namespace to be "Ready" ...
	I0731 17:53:36.809777   57809 pod_ready.go:38] duration metric: took 2.56626002s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0731 17:53:36.809789   57809 api_server.go:52] waiting for apiserver process to appear ...
	I0731 17:53:36.809837   57809 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 17:53:36.828765   57809 api_server.go:72] duration metric: took 2.785218488s to wait for apiserver process to appear ...
	I0731 17:53:36.828796   57809 api_server.go:88] waiting for apiserver healthz status ...
	I0731 17:53:36.828820   57809 api_server.go:253] Checking apiserver healthz at https://192.168.39.24:8443/healthz ...
	I0731 17:53:36.837299   57809 api_server.go:279] https://192.168.39.24:8443/healthz returned 200:
	ok
	I0731 17:53:36.838524   57809 api_server.go:141] control plane version: v1.30.3
	I0731 17:53:36.838552   57809 api_server.go:131] duration metric: took 9.747851ms to wait for apiserver health ...
	I0731 17:53:36.838564   57809 system_pods.go:43] waiting for kube-system pods to appear ...
	I0731 17:53:37.012454   57809 system_pods.go:59] 6 kube-system pods found
	I0731 17:53:37.012480   57809 system_pods.go:61] "coredns-7db6d8ff4d-x4692" [88b05208-9a2c-431c-8cdf-bda38e0baf8a] Running
	I0731 17:53:37.012484   57809 system_pods.go:61] "etcd-pause-957141" [86f1fd21-0923-42b6-9882-e7a234961753] Running
	I0731 17:53:37.012490   57809 system_pods.go:61] "kube-apiserver-pause-957141" [a46dd278-a7b5-4297-bc94-f8db9a9cbd87] Running
	I0731 17:53:37.012494   57809 system_pods.go:61] "kube-controller-manager-pause-957141" [2eb60947-dca1-430a-9dce-1d00c4a9d91e] Running
	I0731 17:53:37.012497   57809 system_pods.go:61] "kube-proxy-7pdbj" [3bf92527-b96a-4157-90cc-5d864b41526d] Running
	I0731 17:53:37.012499   57809 system_pods.go:61] "kube-scheduler-pause-957141" [eb761ade-0576-4475-9457-dcfa00a02a9b] Running
	I0731 17:53:37.012504   57809 system_pods.go:74] duration metric: took 173.935249ms to wait for pod list to return data ...
	I0731 17:53:37.012511   57809 default_sa.go:34] waiting for default service account to be created ...
	I0731 17:53:37.210155   57809 default_sa.go:45] found service account: "default"
	I0731 17:53:37.210182   57809 default_sa.go:55] duration metric: took 197.665899ms for default service account to be created ...
	I0731 17:53:37.210193   57809 system_pods.go:116] waiting for k8s-apps to be running ...
	I0731 17:53:37.413682   57809 system_pods.go:86] 6 kube-system pods found
	I0731 17:53:37.413725   57809 system_pods.go:89] "coredns-7db6d8ff4d-x4692" [88b05208-9a2c-431c-8cdf-bda38e0baf8a] Running
	I0731 17:53:37.413749   57809 system_pods.go:89] "etcd-pause-957141" [86f1fd21-0923-42b6-9882-e7a234961753] Running
	I0731 17:53:37.413757   57809 system_pods.go:89] "kube-apiserver-pause-957141" [a46dd278-a7b5-4297-bc94-f8db9a9cbd87] Running
	I0731 17:53:37.413767   57809 system_pods.go:89] "kube-controller-manager-pause-957141" [2eb60947-dca1-430a-9dce-1d00c4a9d91e] Running
	I0731 17:53:37.413774   57809 system_pods.go:89] "kube-proxy-7pdbj" [3bf92527-b96a-4157-90cc-5d864b41526d] Running
	I0731 17:53:37.413782   57809 system_pods.go:89] "kube-scheduler-pause-957141" [eb761ade-0576-4475-9457-dcfa00a02a9b] Running
	I0731 17:53:37.413799   57809 system_pods.go:126] duration metric: took 203.597559ms to wait for k8s-apps to be running ...
	I0731 17:53:37.413815   57809 system_svc.go:44] waiting for kubelet service to be running ....
	I0731 17:53:37.413894   57809 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0731 17:53:37.428705   57809 system_svc.go:56] duration metric: took 14.885075ms WaitForService to wait for kubelet
	I0731 17:53:37.428728   57809 kubeadm.go:582] duration metric: took 3.385184085s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0731 17:53:37.428751   57809 node_conditions.go:102] verifying NodePressure condition ...
	I0731 17:53:37.609355   57809 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0731 17:53:37.609381   57809 node_conditions.go:123] node cpu capacity is 2
	I0731 17:53:37.609394   57809 node_conditions.go:105] duration metric: took 180.637198ms to run NodePressure ...
	I0731 17:53:37.609408   57809 start.go:241] waiting for startup goroutines ...
	I0731 17:53:37.609417   57809 start.go:246] waiting for cluster config update ...
	I0731 17:53:37.609430   57809 start.go:255] writing updated cluster config ...
	I0731 17:53:37.609692   57809 ssh_runner.go:195] Run: rm -f paused
	I0731 17:53:37.658863   57809 start.go:600] kubectl: 1.30.3, cluster: 1.30.3 (minor skew: 0)
	I0731 17:53:37.661553   57809 out.go:177] * Done! kubectl is now configured to use "pause-957141" cluster and "default" namespace by default
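
The startup trace above waits for the kube-apiserver process, then polls https://192.168.39.24:8443/healthz until it returns 200 before checking kube-system pods, the default service account, the kubelet service, and node conditions. The following is a minimal Go sketch of that healthz probe only, under stated assumptions: it is hypothetical illustration code, not minikube's api_server.go implementation; the address is copied from the log, and TLS verification is skipped because the sketch does not load the cluster CA.

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	// Skipping certificate verification is an assumption of this sketch;
	// a real client would trust the cluster CA instead.
	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}

	// Endpoint taken from the "Checking apiserver healthz" log line above.
	resp, err := client.Get("https://192.168.39.24:8443/healthz")
	if err != nil {
		fmt.Println("healthz check failed:", err)
		return
	}
	defer resp.Body.Close()

	body, _ := io.ReadAll(resp.Body)
	fmt.Printf("%s returned %d: %s\n", resp.Request.URL, resp.StatusCode, body)
}

A 200 response with body "ok" corresponds to the "returned 200: ok" lines in the trace; anything else would keep the wait loop retrying until its timeout.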
	
	
	==> CRI-O <==
	Jul 31 17:53:38 pause-957141 crio[2245]: time="2024-07-31 17:53:38.310674970Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722448418310656494,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:124365,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=95c5a2c5-e229-4678-9b54-d6e13e2fe450 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 31 17:53:38 pause-957141 crio[2245]: time="2024-07-31 17:53:38.311399882Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=66718670-25aa-4f07-b948-e004991610b4 name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 17:53:38 pause-957141 crio[2245]: time="2024-07-31 17:53:38.311456092Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=66718670-25aa-4f07-b948-e004991610b4 name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 17:53:38 pause-957141 crio[2245]: time="2024-07-31 17:53:38.311673893Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:c10b93b11252bf16765c1ba5b7c71f0bd7929634945cf80f86ce8ba767199fd6,PodSandboxId:91045b7b3d6043c95fcfb56a6caaa1618b3311f72b69950900b17d7161c2d569,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722448400416680836,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-x4692,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 88b05208-9a2c-431c-8cdf-bda38e0baf8a,},Annotations:map[string]string{io.kubernetes.container.hash: 92de576d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol
\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c76eb85348eddce6b1c551eb07c87c84bb2008ad73594adadce509e776605eb7,PodSandboxId:1a9fe6c39c104705f0c010f47b1d19789a43d64d47d4726c0b56f190d69a8773,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722448400429509662,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-7pdbj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.u
id: 3bf92527-b96a-4157-90cc-5d864b41526d,},Annotations:map[string]string{io.kubernetes.container.hash: d857380c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a02903735701be08d0f2540035c3d0c8ae3ae0487fef31d79d3420daa491ceef,PodSandboxId:7fa2f4020fd8f6e32b67a50f9e521f5690f44cf347833bb2bce3938ec3a10338,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722448396592319197,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-957141,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f36c04e787af1107623cfc87e34dfb95,},Annot
ations:map[string]string{io.kubernetes.container.hash: caa42e73,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:169febbca04884722fd4a27582be05d9c8428337d03e73a2b265c4d9015a561e,PodSandboxId:1a2b5733413009a61ee747829969830fba95da64116ed32e71dfdf0421aa3d6a,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722448396591556491,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-957141,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f6d3c791123308155b7da1ffde857c24,},Annotations:map[string]
string{io.kubernetes.container.hash: d4155cf3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:145822f96f5253022af018277ccec9937090df84d447df2f6b062809bbdbf124,PodSandboxId:7e706d5498e23252642a11e44585a13f268aad9b10e585643a95ef5ee0017c1b,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722448396567629900,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-957141,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fad42619c5d00c7c1c196169a6f9eb4d,},Annotations:map[string]string{io.kubernet
es.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:40eeb9de6b5a087cf086625c0c830be11a561c0f4663ceb4117702ccb3f52db0,PodSandboxId:35922cda041e63613841a6a79f8fc00cbc2a4119d20ed36235764f23294797ec,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722448396554506249,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-957141,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 369c1a8500deb33f817892ba954cb45e,},Annotations:map[string]string{io
.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f6a604f6cdc9ae83bd68b4618c688c09f096a290bca0e7623a30b67bf0b41ecf,PodSandboxId:6bf9df1079de4647053f734d3b5c4b71cd6081a4fef8a06c6833bd111badb964,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_EXITED,CreatedAt:1722448343831535096,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-7pdbj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3bf92527-b96a-4157-90cc-5d864b41526d,},Annotations:map[string]string{io.kubernetes.container.hash: d85738
0c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4e761a3b6c9c7fa02d83225cddba648d385f3378e3cd8eda575bdc0ba4954a20,PodSandboxId:ea5eb9223ab19c59bf7c8efd694f5a350b363616b5d2bafce916652e7e14b16d,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722448343419497730,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-x4692,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 88b05208-9a2c-431c-8cdf-bda38e0baf8a,},Annotations:map[string]string{io.kubernetes.container.hash: 92de576d,io.kubernetes.container.ports
: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6ea3a326bb1edec1cc3c4627be32566c2f9d53be0f8549b5bf1efb4e4679a51f,PodSandboxId:b25ff220a3477a8002cdbf3836a198edf357c3d54b304755a823af1f5857cbd6,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_EXITED,CreatedAt:1722448320920409581,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pau
se-957141,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f6d3c791123308155b7da1ffde857c24,},Annotations:map[string]string{io.kubernetes.container.hash: d4155cf3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:599e308fce44b39d9ffaac07fa975de9d252453298ccb54bedd41c3adee1d9f9,PodSandboxId:0cf0ff28d7a24a1d1c1106e6749a31c6f4c1f22a2db69d1d6e715e736310d332,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1722448320908711950,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-957141,io.kubernetes.pod.namespace: kube-syste
m,io.kubernetes.pod.uid: f36c04e787af1107623cfc87e34dfb95,},Annotations:map[string]string{io.kubernetes.container.hash: caa42e73,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:07fb01c7a66a08ce480a7aa54d46cf2af1212af230d47b4e721bf82d2e3c31f9,PodSandboxId:6a3b5534e49033d62d743e3d9b12068af46669f55fec73d7e21555fe512a3d3b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_EXITED,CreatedAt:1722448320864297845,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-957141,io.kubernetes.pod.namespace: ku
be-system,io.kubernetes.pod.uid: 369c1a8500deb33f817892ba954cb45e,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3ff45ea8e5cd03c0f849f2d625879d14cc1411331de18175d027488dd676aac5,PodSandboxId:1309b324f049fd78e90ae2122c4e516b02d5de2e1e477cdc9031cae1d6df1afb,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_EXITED,CreatedAt:1722448320846960855,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-957141,io.kubernetes.pod.namespace: kube-system,io.kubern
etes.pod.uid: fad42619c5d00c7c1c196169a6f9eb4d,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=66718670-25aa-4f07-b948-e004991610b4 name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 17:53:38 pause-957141 crio[2245]: time="2024-07-31 17:53:38.351442193Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=7bac9c33-b146-4745-a3b3-bb2d03af66fa name=/runtime.v1.RuntimeService/Version
	Jul 31 17:53:38 pause-957141 crio[2245]: time="2024-07-31 17:53:38.351528122Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=7bac9c33-b146-4745-a3b3-bb2d03af66fa name=/runtime.v1.RuntimeService/Version
	Jul 31 17:53:38 pause-957141 crio[2245]: time="2024-07-31 17:53:38.353001592Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=1a45df2f-1e3b-416b-ae67-19ce3a1e4be4 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 31 17:53:38 pause-957141 crio[2245]: time="2024-07-31 17:53:38.353381792Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722448418353357379,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:124365,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=1a45df2f-1e3b-416b-ae67-19ce3a1e4be4 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 31 17:53:38 pause-957141 crio[2245]: time="2024-07-31 17:53:38.354030476Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=b71013f2-505a-4d94-98d4-8de1305e2482 name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 17:53:38 pause-957141 crio[2245]: time="2024-07-31 17:53:38.354095522Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=b71013f2-505a-4d94-98d4-8de1305e2482 name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 17:53:38 pause-957141 crio[2245]: time="2024-07-31 17:53:38.354357832Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:c10b93b11252bf16765c1ba5b7c71f0bd7929634945cf80f86ce8ba767199fd6,PodSandboxId:91045b7b3d6043c95fcfb56a6caaa1618b3311f72b69950900b17d7161c2d569,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722448400416680836,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-x4692,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 88b05208-9a2c-431c-8cdf-bda38e0baf8a,},Annotations:map[string]string{io.kubernetes.container.hash: 92de576d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol
\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c76eb85348eddce6b1c551eb07c87c84bb2008ad73594adadce509e776605eb7,PodSandboxId:1a9fe6c39c104705f0c010f47b1d19789a43d64d47d4726c0b56f190d69a8773,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722448400429509662,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-7pdbj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.u
id: 3bf92527-b96a-4157-90cc-5d864b41526d,},Annotations:map[string]string{io.kubernetes.container.hash: d857380c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a02903735701be08d0f2540035c3d0c8ae3ae0487fef31d79d3420daa491ceef,PodSandboxId:7fa2f4020fd8f6e32b67a50f9e521f5690f44cf347833bb2bce3938ec3a10338,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722448396592319197,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-957141,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f36c04e787af1107623cfc87e34dfb95,},Annot
ations:map[string]string{io.kubernetes.container.hash: caa42e73,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:169febbca04884722fd4a27582be05d9c8428337d03e73a2b265c4d9015a561e,PodSandboxId:1a2b5733413009a61ee747829969830fba95da64116ed32e71dfdf0421aa3d6a,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722448396591556491,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-957141,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f6d3c791123308155b7da1ffde857c24,},Annotations:map[string]
string{io.kubernetes.container.hash: d4155cf3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:145822f96f5253022af018277ccec9937090df84d447df2f6b062809bbdbf124,PodSandboxId:7e706d5498e23252642a11e44585a13f268aad9b10e585643a95ef5ee0017c1b,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722448396567629900,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-957141,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fad42619c5d00c7c1c196169a6f9eb4d,},Annotations:map[string]string{io.kubernet
es.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:40eeb9de6b5a087cf086625c0c830be11a561c0f4663ceb4117702ccb3f52db0,PodSandboxId:35922cda041e63613841a6a79f8fc00cbc2a4119d20ed36235764f23294797ec,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722448396554506249,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-957141,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 369c1a8500deb33f817892ba954cb45e,},Annotations:map[string]string{io
.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f6a604f6cdc9ae83bd68b4618c688c09f096a290bca0e7623a30b67bf0b41ecf,PodSandboxId:6bf9df1079de4647053f734d3b5c4b71cd6081a4fef8a06c6833bd111badb964,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_EXITED,CreatedAt:1722448343831535096,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-7pdbj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3bf92527-b96a-4157-90cc-5d864b41526d,},Annotations:map[string]string{io.kubernetes.container.hash: d85738
0c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4e761a3b6c9c7fa02d83225cddba648d385f3378e3cd8eda575bdc0ba4954a20,PodSandboxId:ea5eb9223ab19c59bf7c8efd694f5a350b363616b5d2bafce916652e7e14b16d,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722448343419497730,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-x4692,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 88b05208-9a2c-431c-8cdf-bda38e0baf8a,},Annotations:map[string]string{io.kubernetes.container.hash: 92de576d,io.kubernetes.container.ports
: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6ea3a326bb1edec1cc3c4627be32566c2f9d53be0f8549b5bf1efb4e4679a51f,PodSandboxId:b25ff220a3477a8002cdbf3836a198edf357c3d54b304755a823af1f5857cbd6,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_EXITED,CreatedAt:1722448320920409581,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pau
se-957141,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f6d3c791123308155b7da1ffde857c24,},Annotations:map[string]string{io.kubernetes.container.hash: d4155cf3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:599e308fce44b39d9ffaac07fa975de9d252453298ccb54bedd41c3adee1d9f9,PodSandboxId:0cf0ff28d7a24a1d1c1106e6749a31c6f4c1f22a2db69d1d6e715e736310d332,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1722448320908711950,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-957141,io.kubernetes.pod.namespace: kube-syste
m,io.kubernetes.pod.uid: f36c04e787af1107623cfc87e34dfb95,},Annotations:map[string]string{io.kubernetes.container.hash: caa42e73,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:07fb01c7a66a08ce480a7aa54d46cf2af1212af230d47b4e721bf82d2e3c31f9,PodSandboxId:6a3b5534e49033d62d743e3d9b12068af46669f55fec73d7e21555fe512a3d3b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_EXITED,CreatedAt:1722448320864297845,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-957141,io.kubernetes.pod.namespace: ku
be-system,io.kubernetes.pod.uid: 369c1a8500deb33f817892ba954cb45e,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3ff45ea8e5cd03c0f849f2d625879d14cc1411331de18175d027488dd676aac5,PodSandboxId:1309b324f049fd78e90ae2122c4e516b02d5de2e1e477cdc9031cae1d6df1afb,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_EXITED,CreatedAt:1722448320846960855,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-957141,io.kubernetes.pod.namespace: kube-system,io.kubern
etes.pod.uid: fad42619c5d00c7c1c196169a6f9eb4d,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=b71013f2-505a-4d94-98d4-8de1305e2482 name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 17:53:38 pause-957141 crio[2245]: time="2024-07-31 17:53:38.396501754Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=f14fb0ba-8a1c-4090-b6c7-78a03d49215b name=/runtime.v1.RuntimeService/Version
	Jul 31 17:53:38 pause-957141 crio[2245]: time="2024-07-31 17:53:38.396577461Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=f14fb0ba-8a1c-4090-b6c7-78a03d49215b name=/runtime.v1.RuntimeService/Version
	Jul 31 17:53:38 pause-957141 crio[2245]: time="2024-07-31 17:53:38.397971516Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=91a4308e-e0ba-45c3-8f4d-8a27511a4d2e name=/runtime.v1.ImageService/ImageFsInfo
	Jul 31 17:53:38 pause-957141 crio[2245]: time="2024-07-31 17:53:38.398356792Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722448418398332714,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:124365,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=91a4308e-e0ba-45c3-8f4d-8a27511a4d2e name=/runtime.v1.ImageService/ImageFsInfo
	Jul 31 17:53:38 pause-957141 crio[2245]: time="2024-07-31 17:53:38.399121286Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=dcbab31b-e08e-4d87-b578-6a0210a58f66 name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 17:53:38 pause-957141 crio[2245]: time="2024-07-31 17:53:38.399275081Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=dcbab31b-e08e-4d87-b578-6a0210a58f66 name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 17:53:38 pause-957141 crio[2245]: time="2024-07-31 17:53:38.399750684Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:c10b93b11252bf16765c1ba5b7c71f0bd7929634945cf80f86ce8ba767199fd6,PodSandboxId:91045b7b3d6043c95fcfb56a6caaa1618b3311f72b69950900b17d7161c2d569,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722448400416680836,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-x4692,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 88b05208-9a2c-431c-8cdf-bda38e0baf8a,},Annotations:map[string]string{io.kubernetes.container.hash: 92de576d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol
\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c76eb85348eddce6b1c551eb07c87c84bb2008ad73594adadce509e776605eb7,PodSandboxId:1a9fe6c39c104705f0c010f47b1d19789a43d64d47d4726c0b56f190d69a8773,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722448400429509662,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-7pdbj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.u
id: 3bf92527-b96a-4157-90cc-5d864b41526d,},Annotations:map[string]string{io.kubernetes.container.hash: d857380c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a02903735701be08d0f2540035c3d0c8ae3ae0487fef31d79d3420daa491ceef,PodSandboxId:7fa2f4020fd8f6e32b67a50f9e521f5690f44cf347833bb2bce3938ec3a10338,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722448396592319197,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-957141,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f36c04e787af1107623cfc87e34dfb95,},Annot
ations:map[string]string{io.kubernetes.container.hash: caa42e73,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:169febbca04884722fd4a27582be05d9c8428337d03e73a2b265c4d9015a561e,PodSandboxId:1a2b5733413009a61ee747829969830fba95da64116ed32e71dfdf0421aa3d6a,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722448396591556491,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-957141,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f6d3c791123308155b7da1ffde857c24,},Annotations:map[string]
string{io.kubernetes.container.hash: d4155cf3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:145822f96f5253022af018277ccec9937090df84d447df2f6b062809bbdbf124,PodSandboxId:7e706d5498e23252642a11e44585a13f268aad9b10e585643a95ef5ee0017c1b,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722448396567629900,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-957141,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fad42619c5d00c7c1c196169a6f9eb4d,},Annotations:map[string]string{io.kubernet
es.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:40eeb9de6b5a087cf086625c0c830be11a561c0f4663ceb4117702ccb3f52db0,PodSandboxId:35922cda041e63613841a6a79f8fc00cbc2a4119d20ed36235764f23294797ec,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722448396554506249,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-957141,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 369c1a8500deb33f817892ba954cb45e,},Annotations:map[string]string{io
.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f6a604f6cdc9ae83bd68b4618c688c09f096a290bca0e7623a30b67bf0b41ecf,PodSandboxId:6bf9df1079de4647053f734d3b5c4b71cd6081a4fef8a06c6833bd111badb964,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_EXITED,CreatedAt:1722448343831535096,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-7pdbj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3bf92527-b96a-4157-90cc-5d864b41526d,},Annotations:map[string]string{io.kubernetes.container.hash: d85738
0c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4e761a3b6c9c7fa02d83225cddba648d385f3378e3cd8eda575bdc0ba4954a20,PodSandboxId:ea5eb9223ab19c59bf7c8efd694f5a350b363616b5d2bafce916652e7e14b16d,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722448343419497730,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-x4692,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 88b05208-9a2c-431c-8cdf-bda38e0baf8a,},Annotations:map[string]string{io.kubernetes.container.hash: 92de576d,io.kubernetes.container.ports
: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6ea3a326bb1edec1cc3c4627be32566c2f9d53be0f8549b5bf1efb4e4679a51f,PodSandboxId:b25ff220a3477a8002cdbf3836a198edf357c3d54b304755a823af1f5857cbd6,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_EXITED,CreatedAt:1722448320920409581,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pau
se-957141,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f6d3c791123308155b7da1ffde857c24,},Annotations:map[string]string{io.kubernetes.container.hash: d4155cf3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:599e308fce44b39d9ffaac07fa975de9d252453298ccb54bedd41c3adee1d9f9,PodSandboxId:0cf0ff28d7a24a1d1c1106e6749a31c6f4c1f22a2db69d1d6e715e736310d332,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1722448320908711950,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-957141,io.kubernetes.pod.namespace: kube-syste
m,io.kubernetes.pod.uid: f36c04e787af1107623cfc87e34dfb95,},Annotations:map[string]string{io.kubernetes.container.hash: caa42e73,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:07fb01c7a66a08ce480a7aa54d46cf2af1212af230d47b4e721bf82d2e3c31f9,PodSandboxId:6a3b5534e49033d62d743e3d9b12068af46669f55fec73d7e21555fe512a3d3b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_EXITED,CreatedAt:1722448320864297845,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-957141,io.kubernetes.pod.namespace: ku
be-system,io.kubernetes.pod.uid: 369c1a8500deb33f817892ba954cb45e,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3ff45ea8e5cd03c0f849f2d625879d14cc1411331de18175d027488dd676aac5,PodSandboxId:1309b324f049fd78e90ae2122c4e516b02d5de2e1e477cdc9031cae1d6df1afb,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_EXITED,CreatedAt:1722448320846960855,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-957141,io.kubernetes.pod.namespace: kube-system,io.kubern
etes.pod.uid: fad42619c5d00c7c1c196169a6f9eb4d,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=dcbab31b-e08e-4d87-b578-6a0210a58f66 name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 17:53:38 pause-957141 crio[2245]: time="2024-07-31 17:53:38.445547577Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=da8f13e8-669a-48d2-b143-b6313f9fb6d3 name=/runtime.v1.RuntimeService/Version
	Jul 31 17:53:38 pause-957141 crio[2245]: time="2024-07-31 17:53:38.445781969Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=da8f13e8-669a-48d2-b143-b6313f9fb6d3 name=/runtime.v1.RuntimeService/Version
	Jul 31 17:53:38 pause-957141 crio[2245]: time="2024-07-31 17:53:38.447964704Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=1ce0d668-d784-4673-bd58-e87c01e194bf name=/runtime.v1.ImageService/ImageFsInfo
	Jul 31 17:53:38 pause-957141 crio[2245]: time="2024-07-31 17:53:38.448568049Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722448418448533608,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:124365,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=1ce0d668-d784-4673-bd58-e87c01e194bf name=/runtime.v1.ImageService/ImageFsInfo
	Jul 31 17:53:38 pause-957141 crio[2245]: time="2024-07-31 17:53:38.449779449Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=282e4026-5ffc-4d7e-ab10-2416b3ccc510 name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 17:53:38 pause-957141 crio[2245]: time="2024-07-31 17:53:38.449896254Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=282e4026-5ffc-4d7e-ab10-2416b3ccc510 name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 17:53:38 pause-957141 crio[2245]: time="2024-07-31 17:53:38.450149143Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:c10b93b11252bf16765c1ba5b7c71f0bd7929634945cf80f86ce8ba767199fd6,PodSandboxId:91045b7b3d6043c95fcfb56a6caaa1618b3311f72b69950900b17d7161c2d569,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722448400416680836,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-x4692,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 88b05208-9a2c-431c-8cdf-bda38e0baf8a,},Annotations:map[string]string{io.kubernetes.container.hash: 92de576d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol
\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c76eb85348eddce6b1c551eb07c87c84bb2008ad73594adadce509e776605eb7,PodSandboxId:1a9fe6c39c104705f0c010f47b1d19789a43d64d47d4726c0b56f190d69a8773,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722448400429509662,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-7pdbj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.u
id: 3bf92527-b96a-4157-90cc-5d864b41526d,},Annotations:map[string]string{io.kubernetes.container.hash: d857380c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a02903735701be08d0f2540035c3d0c8ae3ae0487fef31d79d3420daa491ceef,PodSandboxId:7fa2f4020fd8f6e32b67a50f9e521f5690f44cf347833bb2bce3938ec3a10338,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722448396592319197,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-957141,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f36c04e787af1107623cfc87e34dfb95,},Annot
ations:map[string]string{io.kubernetes.container.hash: caa42e73,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:169febbca04884722fd4a27582be05d9c8428337d03e73a2b265c4d9015a561e,PodSandboxId:1a2b5733413009a61ee747829969830fba95da64116ed32e71dfdf0421aa3d6a,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722448396591556491,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-957141,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f6d3c791123308155b7da1ffde857c24,},Annotations:map[string]
string{io.kubernetes.container.hash: d4155cf3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:145822f96f5253022af018277ccec9937090df84d447df2f6b062809bbdbf124,PodSandboxId:7e706d5498e23252642a11e44585a13f268aad9b10e585643a95ef5ee0017c1b,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722448396567629900,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-957141,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fad42619c5d00c7c1c196169a6f9eb4d,},Annotations:map[string]string{io.kubernet
es.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:40eeb9de6b5a087cf086625c0c830be11a561c0f4663ceb4117702ccb3f52db0,PodSandboxId:35922cda041e63613841a6a79f8fc00cbc2a4119d20ed36235764f23294797ec,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722448396554506249,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-957141,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 369c1a8500deb33f817892ba954cb45e,},Annotations:map[string]string{io
.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f6a604f6cdc9ae83bd68b4618c688c09f096a290bca0e7623a30b67bf0b41ecf,PodSandboxId:6bf9df1079de4647053f734d3b5c4b71cd6081a4fef8a06c6833bd111badb964,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_EXITED,CreatedAt:1722448343831535096,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-7pdbj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3bf92527-b96a-4157-90cc-5d864b41526d,},Annotations:map[string]string{io.kubernetes.container.hash: d85738
0c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4e761a3b6c9c7fa02d83225cddba648d385f3378e3cd8eda575bdc0ba4954a20,PodSandboxId:ea5eb9223ab19c59bf7c8efd694f5a350b363616b5d2bafce916652e7e14b16d,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722448343419497730,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-x4692,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 88b05208-9a2c-431c-8cdf-bda38e0baf8a,},Annotations:map[string]string{io.kubernetes.container.hash: 92de576d,io.kubernetes.container.ports
: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6ea3a326bb1edec1cc3c4627be32566c2f9d53be0f8549b5bf1efb4e4679a51f,PodSandboxId:b25ff220a3477a8002cdbf3836a198edf357c3d54b304755a823af1f5857cbd6,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_EXITED,CreatedAt:1722448320920409581,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pau
se-957141,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f6d3c791123308155b7da1ffde857c24,},Annotations:map[string]string{io.kubernetes.container.hash: d4155cf3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:599e308fce44b39d9ffaac07fa975de9d252453298ccb54bedd41c3adee1d9f9,PodSandboxId:0cf0ff28d7a24a1d1c1106e6749a31c6f4c1f22a2db69d1d6e715e736310d332,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1722448320908711950,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-957141,io.kubernetes.pod.namespace: kube-syste
m,io.kubernetes.pod.uid: f36c04e787af1107623cfc87e34dfb95,},Annotations:map[string]string{io.kubernetes.container.hash: caa42e73,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:07fb01c7a66a08ce480a7aa54d46cf2af1212af230d47b4e721bf82d2e3c31f9,PodSandboxId:6a3b5534e49033d62d743e3d9b12068af46669f55fec73d7e21555fe512a3d3b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_EXITED,CreatedAt:1722448320864297845,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-957141,io.kubernetes.pod.namespace: ku
be-system,io.kubernetes.pod.uid: 369c1a8500deb33f817892ba954cb45e,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3ff45ea8e5cd03c0f849f2d625879d14cc1411331de18175d027488dd676aac5,PodSandboxId:1309b324f049fd78e90ae2122c4e516b02d5de2e1e477cdc9031cae1d6df1afb,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_EXITED,CreatedAt:1722448320846960855,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-957141,io.kubernetes.pod.namespace: kube-system,io.kubern
etes.pod.uid: fad42619c5d00c7c1c196169a6f9eb4d,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=282e4026-5ffc-4d7e-ab10-2416b3ccc510 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	c76eb85348edd       55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1   18 seconds ago       Running             kube-proxy                1                   1a9fe6c39c104       kube-proxy-7pdbj
	c10b93b11252b       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   18 seconds ago       Running             coredns                   1                   91045b7b3d604       coredns-7db6d8ff4d-x4692
	a02903735701b       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899   21 seconds ago       Running             etcd                      1                   7fa2f4020fd8f       etcd-pause-957141
	169febbca0488       1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d   21 seconds ago       Running             kube-apiserver            1                   1a2b573341300       kube-apiserver-pause-957141
	145822f96f525       3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2   21 seconds ago       Running             kube-scheduler            1                   7e706d5498e23       kube-scheduler-pause-957141
	40eeb9de6b5a0       76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e   21 seconds ago       Running             kube-controller-manager   1                   35922cda041e6       kube-controller-manager-pause-957141
	f6a604f6cdc9a       55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1   About a minute ago   Exited              kube-proxy                0                   6bf9df1079de4       kube-proxy-7pdbj
	4e761a3b6c9c7       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   About a minute ago   Exited              coredns                   0                   ea5eb9223ab19       coredns-7db6d8ff4d-x4692
	6ea3a326bb1ed       1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d   About a minute ago   Exited              kube-apiserver            0                   b25ff220a3477       kube-apiserver-pause-957141
	599e308fce44b       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899   About a minute ago   Exited              etcd                      0                   0cf0ff28d7a24       etcd-pause-957141
	07fb01c7a66a0       76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e   About a minute ago   Exited              kube-controller-manager   0                   6a3b5534e4903       kube-controller-manager-pause-957141
	3ff45ea8e5cd0       3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2   About a minute ago   Exited              kube-scheduler            0                   1309b324f049f       kube-scheduler-pause-957141
	
	
	==> coredns [4e761a3b6c9c7fa02d83225cddba648d385f3378e3cd8eda575bdc0ba4954a20] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:38432 - 6890 "HINFO IN 3662218827591387347.8007845490931809510. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.00691505s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[2077430852]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (31-Jul-2024 17:52:23.607) (total time: 30002ms):
	Trace[2077430852]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30002ms (17:52:53.609)
	Trace[2077430852]: [30.002853434s] [30.002853434s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[160954076]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (31-Jul-2024 17:52:23.607) (total time: 30003ms):
	Trace[160954076]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30003ms (17:52:53.610)
	Trace[160954076]: [30.003110252s] [30.003110252s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[1593891359]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (31-Jul-2024 17:52:23.608) (total time: 30002ms):
	Trace[1593891359]: ---"Objects listed" error:Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30001ms (17:52:53.609)
	Trace[1593891359]: [30.002476812s] [30.002476812s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [c10b93b11252bf16765c1ba5b7c71f0bd7929634945cf80f86ce8ba767199fd6] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:38505 - 8853 "HINFO IN 6238839676201480773.6228709059102171477. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.010399691s
	
	
	==> describe nodes <==
	Name:               pause-957141
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=pause-957141
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=1d737dad7efa60c56d30434fcd857dd3b14c91d9
	                    minikube.k8s.io/name=pause-957141
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_07_31T17_52_06_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 31 Jul 2024 17:52:03 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  pause-957141
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 31 Jul 2024 17:53:29 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 31 Jul 2024 17:53:19 +0000   Wed, 31 Jul 2024 17:52:01 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 31 Jul 2024 17:53:19 +0000   Wed, 31 Jul 2024 17:52:01 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 31 Jul 2024 17:53:19 +0000   Wed, 31 Jul 2024 17:52:01 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 31 Jul 2024 17:53:19 +0000   Wed, 31 Jul 2024 17:52:06 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.24
	  Hostname:    pause-957141
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2015704Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2015704Ki
	  pods:               110
	System Info:
	  Machine ID:                 d2cb4cf1fe7948c390f780ff401aeecc
	  System UUID:                d2cb4cf1-fe79-48c3-90f7-80ff401aeecc
	  Boot ID:                    b7a5e3c9-7c32-48ce-87c7-3771eff8d37e
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-7db6d8ff4d-x4692                100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     78s
	  kube-system                 etcd-pause-957141                       100m (5%)     0 (0%)      100Mi (5%)       0 (0%)         93s
	  kube-system                 kube-apiserver-pause-957141             250m (12%)    0 (0%)      0 (0%)           0 (0%)         93s
	  kube-system                 kube-controller-manager-pause-957141    200m (10%)    0 (0%)      0 (0%)           0 (0%)         93s
	  kube-system                 kube-proxy-7pdbj                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         79s
	  kube-system                 kube-scheduler-pause-957141             100m (5%)     0 (0%)      0 (0%)           0 (0%)         93s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (8%)  170Mi (8%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 74s                kube-proxy       
	  Normal  Starting                 17s                kube-proxy       
	  Normal  NodeHasSufficientPID     93s                kubelet          Node pause-957141 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  93s                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  93s                kubelet          Node pause-957141 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    93s                kubelet          Node pause-957141 status is now: NodeHasNoDiskPressure
	  Normal  Starting                 93s                kubelet          Starting kubelet.
	  Normal  NodeReady                92s                kubelet          Node pause-957141 status is now: NodeReady
	  Normal  RegisteredNode           80s                node-controller  Node pause-957141 event: Registered Node pause-957141 in Controller
	  Normal  Starting                 22s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  22s (x8 over 22s)  kubelet          Node pause-957141 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    22s (x8 over 22s)  kubelet          Node pause-957141 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     22s (x7 over 22s)  kubelet          Node pause-957141 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  22s                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           7s                 node-controller  Node pause-957141 event: Registered Node pause-957141 in Controller
	
	
	==> dmesg <==
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +8.550697] systemd-fstab-generator[602]: Ignoring "noauto" option for root device
	[  +0.060704] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.074241] systemd-fstab-generator[614]: Ignoring "noauto" option for root device
	[  +0.182473] systemd-fstab-generator[628]: Ignoring "noauto" option for root device
	[  +0.151677] systemd-fstab-generator[640]: Ignoring "noauto" option for root device
	[  +0.267471] systemd-fstab-generator[670]: Ignoring "noauto" option for root device
	[  +4.204725] systemd-fstab-generator[767]: Ignoring "noauto" option for root device
	[  +5.063083] systemd-fstab-generator[952]: Ignoring "noauto" option for root device
	[  +0.060865] kauditd_printk_skb: 158 callbacks suppressed
	[Jul31 17:52] systemd-fstab-generator[1284]: Ignoring "noauto" option for root device
	[  +0.091265] kauditd_printk_skb: 69 callbacks suppressed
	[ +15.861852] systemd-fstab-generator[1508]: Ignoring "noauto" option for root device
	[  +0.078113] kauditd_printk_skb: 21 callbacks suppressed
	[Jul31 17:53] kauditd_printk_skb: 67 callbacks suppressed
	[  +8.318078] systemd-fstab-generator[2163]: Ignoring "noauto" option for root device
	[  +0.128326] systemd-fstab-generator[2175]: Ignoring "noauto" option for root device
	[  +0.152095] systemd-fstab-generator[2189]: Ignoring "noauto" option for root device
	[  +0.123495] systemd-fstab-generator[2201]: Ignoring "noauto" option for root device
	[  +0.246573] systemd-fstab-generator[2229]: Ignoring "noauto" option for root device
	[  +1.310403] systemd-fstab-generator[2355]: Ignoring "noauto" option for root device
	[  +2.321236] systemd-fstab-generator[2759]: Ignoring "noauto" option for root device
	[  +0.792680] kauditd_printk_skb: 187 callbacks suppressed
	[ +15.336353] kauditd_printk_skb: 22 callbacks suppressed
	[  +2.124831] systemd-fstab-generator[3152]: Ignoring "noauto" option for root device
	
	
	==> etcd [599e308fce44b39d9ffaac07fa975de9d252453298ccb54bedd41c3adee1d9f9] <==
	{"level":"info","ts":"2024-07-31T17:52:22.502065Z","caller":"traceutil/trace.go:171","msg":"trace[1228088793] transaction","detail":"{read_only:false; response_revision:343; number_of_response:1; }","duration":"481.637948ms","start":"2024-07-31T17:52:22.020415Z","end":"2024-07-31T17:52:22.502053Z","steps":["trace[1228088793] 'process raft request'  (duration: 473.042058ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-31T17:52:22.504555Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-07-31T17:52:22.020404Z","time spent":"483.989138ms","remote":"127.0.0.1:42576","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":3555,"response count":0,"response size":38,"request content":"compare:<target:MOD key:\"/registry/pods/kube-system/coredns-7db6d8ff4d-jlcql\" mod_revision:336 > success:<request_put:<key:\"/registry/pods/kube-system/coredns-7db6d8ff4d-jlcql\" value_size:3496 >> failure:<request_range:<key:\"/registry/pods/kube-system/coredns-7db6d8ff4d-jlcql\" > >"}
	{"level":"info","ts":"2024-07-31T17:52:22.679Z","caller":"traceutil/trace.go:171","msg":"trace[519005476] linearizableReadLoop","detail":"{readStateIndex:358; appliedIndex:357; }","duration":"156.9674ms","start":"2024-07-31T17:52:22.522018Z","end":"2024-07-31T17:52:22.678985Z","steps":["trace[519005476] 'read index received'  (duration: 150.644216ms)","trace[519005476] 'applied index is now lower than readState.Index'  (duration: 6.322543ms)"],"step_count":2}
	{"level":"info","ts":"2024-07-31T17:52:22.679112Z","caller":"traceutil/trace.go:171","msg":"trace[782073005] transaction","detail":"{read_only:false; response_revision:345; number_of_response:1; }","duration":"170.350082ms","start":"2024-07-31T17:52:22.508755Z","end":"2024-07-31T17:52:22.679105Z","steps":["trace[782073005] 'process raft request'  (duration: 163.898739ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-31T17:52:22.679163Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"157.13042ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/replicasets/kube-system/coredns-7db6d8ff4d\" ","response":"range_response_count:1 size:3752"}
	{"level":"info","ts":"2024-07-31T17:52:22.679757Z","caller":"traceutil/trace.go:171","msg":"trace[815962755] range","detail":"{range_begin:/registry/replicasets/kube-system/coredns-7db6d8ff4d; range_end:; response_count:1; response_revision:345; }","duration":"157.75407ms","start":"2024-07-31T17:52:22.521993Z","end":"2024-07-31T17:52:22.679747Z","steps":["trace[815962755] 'agreement among raft nodes before linearized reading'  (duration: 157.122993ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-31T17:52:22.679177Z","caller":"traceutil/trace.go:171","msg":"trace[496895721] transaction","detail":"{read_only:false; response_revision:346; number_of_response:1; }","duration":"154.157369ms","start":"2024-07-31T17:52:22.525015Z","end":"2024-07-31T17:52:22.679173Z","steps":["trace[496895721] 'process raft request'  (duration: 154.122455ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-31T17:52:22.679379Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"150.041133ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/deployments/kube-system/coredns\" ","response":"range_response_count:1 size:3995"}
	{"level":"info","ts":"2024-07-31T17:52:22.681785Z","caller":"traceutil/trace.go:171","msg":"trace[776802858] range","detail":"{range_begin:/registry/deployments/kube-system/coredns; range_end:; response_count:1; response_revision:346; }","duration":"152.503915ms","start":"2024-07-31T17:52:22.52927Z","end":"2024-07-31T17:52:22.681774Z","steps":["trace[776802858] 'agreement among raft nodes before linearized reading'  (duration: 150.027221ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-31T17:52:22.679548Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"150.151603ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/pause-957141\" ","response":"range_response_count:1 size:5426"}
	{"level":"info","ts":"2024-07-31T17:52:22.682442Z","caller":"traceutil/trace.go:171","msg":"trace[28348542] range","detail":"{range_begin:/registry/minions/pause-957141; range_end:; response_count:1; response_revision:346; }","duration":"153.053229ms","start":"2024-07-31T17:52:22.529378Z","end":"2024-07-31T17:52:22.682431Z","steps":["trace[28348542] 'agreement among raft nodes before linearized reading'  (duration: 150.140034ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-31T17:53:00.390163Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"258.388938ms","expected-duration":"100ms","prefix":"","request":"header:<ID:1654387909922034039 > lease_revoke:<id:16f59109eb9a9112>","response":"size:27"}
	{"level":"info","ts":"2024-07-31T17:53:00.390893Z","caller":"traceutil/trace.go:171","msg":"trace[1986015461] linearizableReadLoop","detail":"{readStateIndex:413; appliedIndex:412; }","duration":"170.373459ms","start":"2024-07-31T17:53:00.220465Z","end":"2024-07-31T17:53:00.390838Z","steps":["trace[1986015461] 'read index received'  (duration: 36.38µs)","trace[1986015461] 'applied index is now lower than readState.Index'  (duration: 170.334975ms)"],"step_count":2}
	{"level":"warn","ts":"2024-07-31T17:53:00.391134Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"170.646277ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/coredns-7db6d8ff4d-x4692\" ","response":"range_response_count:1 size:4907"}
	{"level":"info","ts":"2024-07-31T17:53:00.391202Z","caller":"traceutil/trace.go:171","msg":"trace[353426435] range","detail":"{range_begin:/registry/pods/kube-system/coredns-7db6d8ff4d-x4692; range_end:; response_count:1; response_revision:391; }","duration":"170.777543ms","start":"2024-07-31T17:53:00.220409Z","end":"2024-07-31T17:53:00.391186Z","steps":["trace[353426435] 'agreement among raft nodes before linearized reading'  (duration: 170.644564ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-31T17:53:05.572406Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2024-07-31T17:53:05.572497Z","caller":"embed/etcd.go:375","msg":"closing etcd server","name":"pause-957141","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.24:2380"],"advertise-client-urls":["https://192.168.39.24:2379"]}
	{"level":"warn","ts":"2024-07-31T17:53:05.572629Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-07-31T17:53:05.572744Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-07-31T17:53:05.634773Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.39.24:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-07-31T17:53:05.634868Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.39.24:2379: use of closed network connection"}
	{"level":"info","ts":"2024-07-31T17:53:05.634967Z","caller":"etcdserver/server.go:1471","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"602226ed500416f5","current-leader-member-id":"602226ed500416f5"}
	{"level":"info","ts":"2024-07-31T17:53:05.638201Z","caller":"embed/etcd.go:579","msg":"stopping serving peer traffic","address":"192.168.39.24:2380"}
	{"level":"info","ts":"2024-07-31T17:53:05.638385Z","caller":"embed/etcd.go:584","msg":"stopped serving peer traffic","address":"192.168.39.24:2380"}
	{"level":"info","ts":"2024-07-31T17:53:05.638418Z","caller":"embed/etcd.go:377","msg":"closed etcd server","name":"pause-957141","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.24:2380"],"advertise-client-urls":["https://192.168.39.24:2379"]}
	
	
	==> etcd [a02903735701be08d0f2540035c3d0c8ae3ae0487fef31d79d3420daa491ceef] <==
	{"level":"info","ts":"2024-07-31T17:53:16.97484Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-07-31T17:53:16.974874Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-07-31T17:53:16.980996Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"602226ed500416f5 switched to configuration voters=(6927141977540794101)"}
	{"level":"info","ts":"2024-07-31T17:53:16.981094Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"6c3e0d5efc74209","local-member-id":"602226ed500416f5","added-peer-id":"602226ed500416f5","added-peer-peer-urls":["https://192.168.39.24:2380"]}
	{"level":"info","ts":"2024-07-31T17:53:16.981241Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"6c3e0d5efc74209","local-member-id":"602226ed500416f5","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-31T17:53:16.981292Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-31T17:53:16.985172Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-07-31T17:53:16.988147Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"602226ed500416f5","initial-advertise-peer-urls":["https://192.168.39.24:2380"],"listen-peer-urls":["https://192.168.39.24:2380"],"advertise-client-urls":["https://192.168.39.24:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.24:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-07-31T17:53:16.987924Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.39.24:2380"}
	{"level":"info","ts":"2024-07-31T17:53:16.98966Z","caller":"embed/etcd.go:857","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-07-31T17:53:16.989952Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.39.24:2380"}
	{"level":"info","ts":"2024-07-31T17:53:17.926095Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"602226ed500416f5 is starting a new election at term 2"}
	{"level":"info","ts":"2024-07-31T17:53:17.92621Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"602226ed500416f5 became pre-candidate at term 2"}
	{"level":"info","ts":"2024-07-31T17:53:17.926256Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"602226ed500416f5 received MsgPreVoteResp from 602226ed500416f5 at term 2"}
	{"level":"info","ts":"2024-07-31T17:53:17.926291Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"602226ed500416f5 became candidate at term 3"}
	{"level":"info","ts":"2024-07-31T17:53:17.926315Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"602226ed500416f5 received MsgVoteResp from 602226ed500416f5 at term 3"}
	{"level":"info","ts":"2024-07-31T17:53:17.926342Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"602226ed500416f5 became leader at term 3"}
	{"level":"info","ts":"2024-07-31T17:53:17.926367Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 602226ed500416f5 elected leader 602226ed500416f5 at term 3"}
	{"level":"info","ts":"2024-07-31T17:53:17.932432Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"602226ed500416f5","local-member-attributes":"{Name:pause-957141 ClientURLs:[https://192.168.39.24:2379]}","request-path":"/0/members/602226ed500416f5/attributes","cluster-id":"6c3e0d5efc74209","publish-timeout":"7s"}
	{"level":"info","ts":"2024-07-31T17:53:17.932674Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-07-31T17:53:17.933077Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-07-31T17:53:17.9347Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-07-31T17:53:17.937042Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.24:2379"}
	{"level":"info","ts":"2024-07-31T17:53:17.945473Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-07-31T17:53:17.945561Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	
	
	==> kernel <==
	 17:53:38 up 2 min,  0 users,  load average: 2.26, 0.65, 0.23
	Linux pause-957141 5.10.207 #1 SMP Mon Jul 29 15:19:02 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [169febbca04884722fd4a27582be05d9c8428337d03e73a2b265c4d9015a561e] <==
	I0731 17:53:19.303655       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0731 17:53:19.310707       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0731 17:53:19.310918       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0731 17:53:19.311022       1 shared_informer.go:320] Caches are synced for configmaps
	I0731 17:53:19.311114       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0731 17:53:19.312938       1 handler_discovery.go:447] Starting ResourceDiscoveryManager
	I0731 17:53:19.327061       1 apf_controller.go:379] Running API Priority and Fairness config worker
	I0731 17:53:19.327095       1 apf_controller.go:382] Running API Priority and Fairness periodic rebalancing process
	I0731 17:53:19.332649       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0731 17:53:19.333567       1 aggregator.go:165] initial CRD sync complete...
	I0731 17:53:19.333626       1 autoregister_controller.go:141] Starting autoregister controller
	I0731 17:53:19.333651       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0731 17:53:19.333675       1 cache.go:39] Caches are synced for autoregister controller
	I0731 17:53:19.337468       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0731 17:53:19.337538       1 policy_source.go:224] refreshing policies
	E0731 17:53:19.342002       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I0731 17:53:19.350113       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0731 17:53:20.206468       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0731 17:53:20.825045       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0731 17:53:20.842445       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0731 17:53:20.888855       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0731 17:53:20.919199       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0731 17:53:20.925458       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0731 17:53:31.969230       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0731 17:53:32.068471       1 controller.go:615] quota admission added evaluator for: endpoints
	
	
	==> kube-apiserver [6ea3a326bb1edec1cc3c4627be32566c2f9d53be0f8549b5bf1efb4e4679a51f] <==
	Trace[483281170]: [514.883143ms] [514.883143ms] END
	I0731 17:52:22.520614       1 trace.go:236] Trace[1666184154]: "Create" accept:application/vnd.kubernetes.protobuf, */*,audit-id:bf33ba64-964a-43ae-9768-639952cf1039,client:192.168.39.24,api-group:,api-version:v1,name:coredns-7db6d8ff4d-jlcql,subresource:binding,namespace:kube-system,protocol:HTTP/2.0,resource:pods,scope:resource,url:/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-jlcql/binding,user-agent:kube-scheduler/v1.30.3 (linux/amd64) kubernetes/6fc0a69/scheduler,verb:POST (31-Jul-2024 17:52:22.018) (total time: 501ms):
	Trace[1666184154]: ["GuaranteedUpdate etcd3" audit-id:bf33ba64-964a-43ae-9768-639952cf1039,key:/pods/kube-system/coredns-7db6d8ff4d-jlcql,type:*core.Pod,resource:pods 501ms (17:52:22.019)
	Trace[1666184154]:  ---"Txn call completed" 500ms (17:52:22.520)]
	Trace[1666184154]: [501.716572ms] [501.716572ms] END
	I0731 17:53:05.572509       1 controller.go:128] Shutting down kubernetes service endpoint reconciler
	E0731 17:53:05.590331       1 controller.go:131] Unable to remove endpoints from kubernetes service: rpc error: code = Unknown desc = malformed header: missing HTTP content-type
	I0731 17:53:05.592260       1 controller.go:86] Shutting down OpenAPI V3 AggregationController
	I0731 17:53:05.592770       1 storage_flowcontrol.go:187] APF bootstrap ensurer is exiting
	I0731 17:53:05.593295       1 cluster_authentication_trust_controller.go:463] Shutting down cluster_authentication_trust_controller controller
	I0731 17:53:05.594679       1 available_controller.go:439] Shutting down AvailableConditionController
	I0731 17:53:05.594767       1 apf_controller.go:386] Shutting down API Priority and Fairness config worker
	I0731 17:53:05.596120       1 dynamic_cafile_content.go:171] "Shutting down controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0731 17:53:05.596750       1 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0731 17:53:05.596921       1 dynamic_serving_content.go:146] "Shutting down controller" name="aggregator-proxy-cert::/var/lib/minikube/certs/front-proxy-client.crt::/var/lib/minikube/certs/front-proxy-client.key"
	I0731 17:53:05.597455       1 dynamic_cafile_content.go:171] "Shutting down controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0731 17:53:05.597568       1 object_count_tracker.go:151] "StorageObjectCountTracker pruner is exiting"
	I0731 17:53:05.597667       1 controller.go:84] Shutting down OpenAPI AggregationController
	I0731 17:53:05.597776       1 controller.go:157] Shutting down quota evaluator
	I0731 17:53:05.597882       1 controller.go:176] quota evaluator worker shutdown
	I0731 17:53:05.593566       1 apiservice_controller.go:131] Shutting down APIServiceRegistrationController
	I0731 17:53:05.598430       1 tlsconfig.go:255] "Shutting down DynamicServingCertificateController"
	I0731 17:53:05.598642       1 secure_serving.go:258] Stopped listening on [::]:8443
	I0731 17:53:05.603120       1 dynamic_serving_content.go:146] "Shutting down controller" name="serving-cert::/var/lib/minikube/certs/apiserver.crt::/var/lib/minikube/certs/apiserver.key"
	I0731 17:53:05.603474       1 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	
	
	==> kube-controller-manager [07fb01c7a66a08ce480a7aa54d46cf2af1212af230d47b4e721bf82d2e3c31f9] <==
	I0731 17:52:18.582903       1 shared_informer.go:320] Caches are synced for namespace
	I0731 17:52:18.619102       1 shared_informer.go:320] Caches are synced for PV protection
	I0731 17:52:18.625203       1 shared_informer.go:320] Caches are synced for service account
	I0731 17:52:18.659062       1 shared_informer.go:320] Caches are synced for attach detach
	I0731 17:52:18.702501       1 shared_informer.go:320] Caches are synced for resource quota
	I0731 17:52:18.718296       1 shared_informer.go:320] Caches are synced for persistent volume
	I0731 17:52:18.725016       1 shared_informer.go:320] Caches are synced for endpoint_slice_mirroring
	I0731 17:52:18.747767       1 shared_informer.go:320] Caches are synced for resource quota
	I0731 17:52:18.753247       1 shared_informer.go:320] Caches are synced for endpoint
	I0731 17:52:19.166015       1 shared_informer.go:320] Caches are synced for garbage collector
	I0731 17:52:19.166104       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I0731 17:52:19.176900       1 shared_informer.go:320] Caches are synced for garbage collector
	I0731 17:52:22.502247       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="3.366331678s"
	I0731 17:52:22.707563       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="205.213531ms"
	I0731 17:52:22.721069       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="13.444454ms"
	I0731 17:52:22.721635       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="64.541µs"
	I0731 17:52:22.762361       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="35.193342ms"
	I0731 17:52:22.797337       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="34.891642ms"
	I0731 17:52:22.797482       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="100.423µs"
	I0731 17:52:23.774686       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="52.37µs"
	I0731 17:52:23.784228       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="37.507µs"
	I0731 17:52:23.788555       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="35.837µs"
	I0731 17:52:23.802076       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="41.7µs"
	I0731 17:53:03.242467       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="17.501951ms"
	I0731 17:53:03.242681       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="43.032µs"
	
	
	==> kube-controller-manager [40eeb9de6b5a087cf086625c0c830be11a561c0f4663ceb4117702ccb3f52db0] <==
	I0731 17:53:31.764906       1 shared_informer.go:320] Caches are synced for TTL after finished
	I0731 17:53:31.780992       1 shared_informer.go:320] Caches are synced for node
	I0731 17:53:31.781368       1 range_allocator.go:175] "Sending events to api server" logger="node-ipam-controller"
	I0731 17:53:31.781584       1 range_allocator.go:179] "Starting range CIDR allocator" logger="node-ipam-controller"
	I0731 17:53:31.781620       1 shared_informer.go:313] Waiting for caches to sync for cidrallocator
	I0731 17:53:31.781701       1 shared_informer.go:320] Caches are synced for cidrallocator
	I0731 17:53:31.787638       1 shared_informer.go:320] Caches are synced for stateful set
	I0731 17:53:31.787877       1 shared_informer.go:320] Caches are synced for service account
	I0731 17:53:31.790453       1 shared_informer.go:320] Caches are synced for daemon sets
	I0731 17:53:31.796842       1 shared_informer.go:320] Caches are synced for disruption
	I0731 17:53:31.803642       1 shared_informer.go:320] Caches are synced for attach detach
	I0731 17:53:31.807919       1 shared_informer.go:320] Caches are synced for crt configmap
	I0731 17:53:31.815485       1 shared_informer.go:320] Caches are synced for endpoint_slice
	I0731 17:53:31.818906       1 shared_informer.go:320] Caches are synced for job
	I0731 17:53:31.829565       1 shared_informer.go:320] Caches are synced for expand
	I0731 17:53:31.847196       1 shared_informer.go:320] Caches are synced for persistent volume
	I0731 17:53:31.848645       1 shared_informer.go:320] Caches are synced for ephemeral
	I0731 17:53:31.854028       1 shared_informer.go:320] Caches are synced for PVC protection
	I0731 17:53:31.932064       1 shared_informer.go:320] Caches are synced for endpoint_slice_mirroring
	I0731 17:53:31.973180       1 shared_informer.go:320] Caches are synced for resource quota
	I0731 17:53:31.989314       1 shared_informer.go:320] Caches are synced for resource quota
	I0731 17:53:32.010731       1 shared_informer.go:320] Caches are synced for endpoint
	I0731 17:53:32.439449       1 shared_informer.go:320] Caches are synced for garbage collector
	I0731 17:53:32.445772       1 shared_informer.go:320] Caches are synced for garbage collector
	I0731 17:53:32.445858       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	
	
	==> kube-proxy [c76eb85348eddce6b1c551eb07c87c84bb2008ad73594adadce509e776605eb7] <==
	I0731 17:53:20.640517       1 server_linux.go:69] "Using iptables proxy"
	I0731 17:53:20.657254       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.39.24"]
	I0731 17:53:20.732934       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0731 17:53:20.733000       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0731 17:53:20.733016       1 server_linux.go:165] "Using iptables Proxier"
	I0731 17:53:20.736421       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0731 17:53:20.736695       1 server.go:872] "Version info" version="v1.30.3"
	I0731 17:53:20.736708       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0731 17:53:20.738281       1 config.go:192] "Starting service config controller"
	I0731 17:53:20.738299       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0731 17:53:20.738326       1 config.go:101] "Starting endpoint slice config controller"
	I0731 17:53:20.738329       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0731 17:53:20.738945       1 config.go:319] "Starting node config controller"
	I0731 17:53:20.738952       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0731 17:53:20.839232       1 shared_informer.go:320] Caches are synced for node config
	I0731 17:53:20.839301       1 shared_informer.go:320] Caches are synced for service config
	I0731 17:53:20.839325       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-proxy [f6a604f6cdc9ae83bd68b4618c688c09f096a290bca0e7623a30b67bf0b41ecf] <==
	I0731 17:52:24.033748       1 server_linux.go:69] "Using iptables proxy"
	I0731 17:52:24.052538       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.39.24"]
	I0731 17:52:24.103914       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0731 17:52:24.103967       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0731 17:52:24.103984       1 server_linux.go:165] "Using iptables Proxier"
	I0731 17:52:24.107304       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0731 17:52:24.107970       1 server.go:872] "Version info" version="v1.30.3"
	I0731 17:52:24.108030       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0731 17:52:24.110750       1 config.go:192] "Starting service config controller"
	I0731 17:52:24.111035       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0731 17:52:24.111100       1 config.go:319] "Starting node config controller"
	I0731 17:52:24.111117       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0731 17:52:24.111139       1 config.go:101] "Starting endpoint slice config controller"
	I0731 17:52:24.111198       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0731 17:52:24.211834       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0731 17:52:24.211845       1 shared_informer.go:320] Caches are synced for node config
	I0731 17:52:24.211860       1 shared_informer.go:320] Caches are synced for service config
	
	
	==> kube-scheduler [145822f96f5253022af018277ccec9937090df84d447df2f6b062809bbdbf124] <==
	I0731 17:53:17.683736       1 serving.go:380] Generated self-signed cert in-memory
	W0731 17:53:19.237389       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0731 17:53:19.237497       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0731 17:53:19.237559       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0731 17:53:19.237598       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0731 17:53:19.279220       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.30.3"
	I0731 17:53:19.279306       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0731 17:53:19.288279       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0731 17:53:19.288510       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0731 17:53:19.288563       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0731 17:53:19.288611       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0731 17:53:19.388758       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kube-scheduler [3ff45ea8e5cd03c0f849f2d625879d14cc1411331de18175d027488dd676aac5] <==
	W0731 17:52:03.295732       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0731 17:52:03.295753       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0731 17:52:03.295940       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0731 17:52:03.295969       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0731 17:52:04.110471       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0731 17:52:04.110556       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0731 17:52:04.139441       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0731 17:52:04.139542       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0731 17:52:04.165733       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0731 17:52:04.165871       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0731 17:52:04.175594       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0731 17:52:04.175717       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0731 17:52:04.211919       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0731 17:52:04.211971       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0731 17:52:04.221548       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0731 17:52:04.221632       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0731 17:52:04.372429       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0731 17:52:04.372473       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0731 17:52:04.420512       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0731 17:52:04.420560       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0731 17:52:04.521326       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0731 17:52:04.521453       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	I0731 17:52:06.790001       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0731 17:53:05.574829       1 secure_serving.go:258] Stopped listening on 127.0.0.1:10259
	E0731 17:53:05.576521       1 run.go:74] "command failed" err="finished without leader elect"
	
	
	==> kubelet <==
	Jul 31 17:53:16 pause-957141 kubelet[2766]: I0731 17:53:16.302409    2766 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/f6d3c791123308155b7da1ffde857c24-usr-share-ca-certificates\") pod \"kube-apiserver-pause-957141\" (UID: \"f6d3c791123308155b7da1ffde857c24\") " pod="kube-system/kube-apiserver-pause-957141"
	Jul 31 17:53:16 pause-957141 kubelet[2766]: E0731 17:53:16.305945    2766 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/pause-957141?timeout=10s\": dial tcp 192.168.39.24:8443: connect: connection refused" interval="400ms"
	Jul 31 17:53:16 pause-957141 kubelet[2766]: I0731 17:53:16.402000    2766 kubelet_node_status.go:73] "Attempting to register node" node="pause-957141"
	Jul 31 17:53:16 pause-957141 kubelet[2766]: E0731 17:53:16.402986    2766 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 192.168.39.24:8443: connect: connection refused" node="pause-957141"
	Jul 31 17:53:16 pause-957141 kubelet[2766]: I0731 17:53:16.523049    2766 scope.go:117] "RemoveContainer" containerID="07fb01c7a66a08ce480a7aa54d46cf2af1212af230d47b4e721bf82d2e3c31f9"
	Jul 31 17:53:16 pause-957141 kubelet[2766]: I0731 17:53:16.524083    2766 scope.go:117] "RemoveContainer" containerID="3ff45ea8e5cd03c0f849f2d625879d14cc1411331de18175d027488dd676aac5"
	Jul 31 17:53:16 pause-957141 kubelet[2766]: I0731 17:53:16.528990    2766 scope.go:117] "RemoveContainer" containerID="6ea3a326bb1edec1cc3c4627be32566c2f9d53be0f8549b5bf1efb4e4679a51f"
	Jul 31 17:53:16 pause-957141 kubelet[2766]: I0731 17:53:16.530246    2766 scope.go:117] "RemoveContainer" containerID="599e308fce44b39d9ffaac07fa975de9d252453298ccb54bedd41c3adee1d9f9"
	Jul 31 17:53:16 pause-957141 kubelet[2766]: E0731 17:53:16.711243    2766 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/pause-957141?timeout=10s\": dial tcp 192.168.39.24:8443: connect: connection refused" interval="800ms"
	Jul 31 17:53:16 pause-957141 kubelet[2766]: I0731 17:53:16.804784    2766 kubelet_node_status.go:73] "Attempting to register node" node="pause-957141"
	Jul 31 17:53:16 pause-957141 kubelet[2766]: E0731 17:53:16.805637    2766 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 192.168.39.24:8443: connect: connection refused" node="pause-957141"
	Jul 31 17:53:17 pause-957141 kubelet[2766]: I0731 17:53:17.607292    2766 kubelet_node_status.go:73] "Attempting to register node" node="pause-957141"
	Jul 31 17:53:19 pause-957141 kubelet[2766]: I0731 17:53:19.355404    2766 kubelet_node_status.go:112] "Node was previously registered" node="pause-957141"
	Jul 31 17:53:19 pause-957141 kubelet[2766]: I0731 17:53:19.355844    2766 kubelet_node_status.go:76] "Successfully registered node" node="pause-957141"
	Jul 31 17:53:19 pause-957141 kubelet[2766]: I0731 17:53:19.357147    2766 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Jul 31 17:53:19 pause-957141 kubelet[2766]: I0731 17:53:19.358623    2766 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Jul 31 17:53:20 pause-957141 kubelet[2766]: I0731 17:53:20.090368    2766 apiserver.go:52] "Watching apiserver"
	Jul 31 17:53:20 pause-957141 kubelet[2766]: I0731 17:53:20.094136    2766 topology_manager.go:215] "Topology Admit Handler" podUID="3bf92527-b96a-4157-90cc-5d864b41526d" podNamespace="kube-system" podName="kube-proxy-7pdbj"
	Jul 31 17:53:20 pause-957141 kubelet[2766]: I0731 17:53:20.094373    2766 topology_manager.go:215] "Topology Admit Handler" podUID="88b05208-9a2c-431c-8cdf-bda38e0baf8a" podNamespace="kube-system" podName="coredns-7db6d8ff4d-x4692"
	Jul 31 17:53:20 pause-957141 kubelet[2766]: I0731 17:53:20.099020    2766 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world"
	Jul 31 17:53:20 pause-957141 kubelet[2766]: I0731 17:53:20.152957    2766 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/3bf92527-b96a-4157-90cc-5d864b41526d-lib-modules\") pod \"kube-proxy-7pdbj\" (UID: \"3bf92527-b96a-4157-90cc-5d864b41526d\") " pod="kube-system/kube-proxy-7pdbj"
	Jul 31 17:53:20 pause-957141 kubelet[2766]: I0731 17:53:20.153059    2766 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/3bf92527-b96a-4157-90cc-5d864b41526d-xtables-lock\") pod \"kube-proxy-7pdbj\" (UID: \"3bf92527-b96a-4157-90cc-5d864b41526d\") " pod="kube-system/kube-proxy-7pdbj"
	Jul 31 17:53:20 pause-957141 kubelet[2766]: I0731 17:53:20.394932    2766 scope.go:117] "RemoveContainer" containerID="4e761a3b6c9c7fa02d83225cddba648d385f3378e3cd8eda575bdc0ba4954a20"
	Jul 31 17:53:20 pause-957141 kubelet[2766]: I0731 17:53:20.395957    2766 scope.go:117] "RemoveContainer" containerID="f6a604f6cdc9ae83bd68b4618c688c09f096a290bca0e7623a30b67bf0b41ecf"
	Jul 31 17:53:29 pause-957141 kubelet[2766]: I0731 17:53:29.969006    2766 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-957141 -n pause-957141
helpers_test.go:261: (dbg) Run:  kubectl --context pause-957141 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p pause-957141 -n pause-957141
helpers_test.go:244: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestPause/serial/SecondStartNoReconfiguration]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p pause-957141 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p pause-957141 logs -n 25: (1.136889934s)
helpers_test.go:252: TestPause/serial/SecondStartNoReconfiguration logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| Command |                 Args                  |          Profile          |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| delete  | -p NoKubernetes-231031                | NoKubernetes-231031       | jenkins | v1.33.1 | 31 Jul 24 17:49 UTC | 31 Jul 24 17:49 UTC |
	| start   | -p NoKubernetes-231031                | NoKubernetes-231031       | jenkins | v1.33.1 | 31 Jul 24 17:49 UTC | 31 Jul 24 17:50 UTC |
	|         | --no-kubernetes --driver=kvm2         |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| start   | -p running-upgrade-262154             | running-upgrade-262154    | jenkins | v1.33.1 | 31 Jul 24 17:49 UTC | 31 Jul 24 17:51 UTC |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --alsologtostderr                     |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                    |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| ssh     | cert-options-241744 ssh               | cert-options-241744       | jenkins | v1.33.1 | 31 Jul 24 17:50 UTC | 31 Jul 24 17:50 UTC |
	|         | openssl x509 -text -noout -in         |                           |         |         |                     |                     |
	|         | /var/lib/minikube/certs/apiserver.crt |                           |         |         |                     |                     |
	| ssh     | -p cert-options-241744 -- sudo        | cert-options-241744       | jenkins | v1.33.1 | 31 Jul 24 17:50 UTC | 31 Jul 24 17:50 UTC |
	|         | cat /etc/kubernetes/admin.conf        |                           |         |         |                     |                     |
	| delete  | -p cert-options-241744                | cert-options-241744       | jenkins | v1.33.1 | 31 Jul 24 17:50 UTC | 31 Jul 24 17:50 UTC |
	| start   | -p force-systemd-env-082965           | force-systemd-env-082965  | jenkins | v1.33.1 | 31 Jul 24 17:50 UTC | 31 Jul 24 17:51 UTC |
	|         | --memory=2048                         |                           |         |         |                     |                     |
	|         | --alsologtostderr                     |                           |         |         |                     |                     |
	|         | -v=5 --driver=kvm2                    |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| ssh     | -p NoKubernetes-231031 sudo           | NoKubernetes-231031       | jenkins | v1.33.1 | 31 Jul 24 17:50 UTC |                     |
	|         | systemctl is-active --quiet           |                           |         |         |                     |                     |
	|         | service kubelet                       |                           |         |         |                     |                     |
	| stop    | -p NoKubernetes-231031                | NoKubernetes-231031       | jenkins | v1.33.1 | 31 Jul 24 17:50 UTC | 31 Jul 24 17:50 UTC |
	| start   | -p NoKubernetes-231031                | NoKubernetes-231031       | jenkins | v1.33.1 | 31 Jul 24 17:50 UTC | 31 Jul 24 17:51 UTC |
	|         | --driver=kvm2                         |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| delete  | -p force-systemd-env-082965           | force-systemd-env-082965  | jenkins | v1.33.1 | 31 Jul 24 17:51 UTC | 31 Jul 24 17:51 UTC |
	| start   | -p kubernetes-upgrade-410576          | kubernetes-upgrade-410576 | jenkins | v1.33.1 | 31 Jul 24 17:51 UTC |                     |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0          |                           |         |         |                     |                     |
	|         | --alsologtostderr                     |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                    |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| ssh     | -p NoKubernetes-231031 sudo           | NoKubernetes-231031       | jenkins | v1.33.1 | 31 Jul 24 17:51 UTC |                     |
	|         | systemctl is-active --quiet           |                           |         |         |                     |                     |
	|         | service kubelet                       |                           |         |         |                     |                     |
	| delete  | -p NoKubernetes-231031                | NoKubernetes-231031       | jenkins | v1.33.1 | 31 Jul 24 17:51 UTC | 31 Jul 24 17:51 UTC |
	| start   | -p pause-957141 --memory=2048         | pause-957141              | jenkins | v1.33.1 | 31 Jul 24 17:51 UTC | 31 Jul 24 17:53 UTC |
	|         | --install-addons=false                |                           |         |         |                     |                     |
	|         | --wait=all --driver=kvm2              |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| delete  | -p running-upgrade-262154             | running-upgrade-262154    | jenkins | v1.33.1 | 31 Jul 24 17:51 UTC | 31 Jul 24 17:51 UTC |
	| start   | -p stopped-upgrade-246118             | minikube                  | jenkins | v1.26.0 | 31 Jul 24 17:51 UTC | 31 Jul 24 17:52 UTC |
	|         | --memory=2200 --vm-driver=kvm2        |                           |         |         |                     |                     |
	|         |  --container-runtime=crio             |                           |         |         |                     |                     |
	| stop    | stopped-upgrade-246118 stop           | minikube                  | jenkins | v1.26.0 | 31 Jul 24 17:52 UTC | 31 Jul 24 17:52 UTC |
	| start   | -p stopped-upgrade-246118             | stopped-upgrade-246118    | jenkins | v1.33.1 | 31 Jul 24 17:52 UTC | 31 Jul 24 17:53 UTC |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --alsologtostderr                     |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                    |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| start   | -p cert-expiration-761578             | cert-expiration-761578    | jenkins | v1.33.1 | 31 Jul 24 17:52 UTC | 31 Jul 24 17:53 UTC |
	|         | --memory=2048                         |                           |         |         |                     |                     |
	|         | --cert-expiration=8760h               |                           |         |         |                     |                     |
	|         | --driver=kvm2                         |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| start   | -p pause-957141                       | pause-957141              | jenkins | v1.33.1 | 31 Jul 24 17:53 UTC | 31 Jul 24 17:53 UTC |
	|         | --alsologtostderr                     |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                    |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| delete  | -p cert-expiration-761578             | cert-expiration-761578    | jenkins | v1.33.1 | 31 Jul 24 17:53 UTC | 31 Jul 24 17:53 UTC |
	| start   | -p auto-985288 --memory=3072          | auto-985288               | jenkins | v1.33.1 | 31 Jul 24 17:53 UTC |                     |
	|         | --alsologtostderr --wait=true         |                           |         |         |                     |                     |
	|         | --wait-timeout=15m                    |                           |         |         |                     |                     |
	|         | --driver=kvm2                         |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| delete  | -p stopped-upgrade-246118             | stopped-upgrade-246118    | jenkins | v1.33.1 | 31 Jul 24 17:53 UTC | 31 Jul 24 17:53 UTC |
	| start   | -p kindnet-985288                     | kindnet-985288            | jenkins | v1.33.1 | 31 Jul 24 17:53 UTC |                     |
	|         | --memory=3072                         |                           |         |         |                     |                     |
	|         | --alsologtostderr --wait=true         |                           |         |         |                     |                     |
	|         | --wait-timeout=15m                    |                           |         |         |                     |                     |
	|         | --cni=kindnet --driver=kvm2           |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	|---------|---------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/31 17:53:39
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0731 17:53:39.604899   58482 out.go:291] Setting OutFile to fd 1 ...
	I0731 17:53:39.605181   58482 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 17:53:39.605193   58482 out.go:304] Setting ErrFile to fd 2...
	I0731 17:53:39.605200   58482 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 17:53:39.605398   58482 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19349-8084/.minikube/bin
	I0731 17:53:39.606126   58482 out.go:298] Setting JSON to false
	I0731 17:53:39.607246   58482 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":5764,"bootTime":1722442656,"procs":224,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1065-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0731 17:53:39.607308   58482 start.go:139] virtualization: kvm guest
	I0731 17:53:39.610522   58482 out.go:177] * [kindnet-985288] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0731 17:53:39.612008   58482 notify.go:220] Checking for updates...
	I0731 17:53:39.612014   58482 out.go:177]   - MINIKUBE_LOCATION=19349
	I0731 17:53:39.613422   58482 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0731 17:53:39.614835   58482 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19349-8084/kubeconfig
	I0731 17:53:39.616551   58482 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19349-8084/.minikube
	I0731 17:53:39.617769   58482 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0731 17:53:39.619095   58482 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0731 17:53:39.620915   58482 config.go:182] Loaded profile config "auto-985288": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0731 17:53:39.621108   58482 config.go:182] Loaded profile config "kubernetes-upgrade-410576": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0731 17:53:39.621301   58482 config.go:182] Loaded profile config "pause-957141": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0731 17:53:39.621443   58482 driver.go:392] Setting default libvirt URI to qemu:///system
	I0731 17:53:39.661254   58482 out.go:177] * Using the kvm2 driver based on user configuration
	I0731 17:53:39.662580   58482 start.go:297] selected driver: kvm2
	I0731 17:53:39.662598   58482 start.go:901] validating driver "kvm2" against <nil>
	I0731 17:53:39.662612   58482 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0731 17:53:39.663326   58482 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0731 17:53:39.663416   58482 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19349-8084/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0731 17:53:39.678901   58482 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0731 17:53:39.678944   58482 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0731 17:53:39.679188   58482 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0731 17:53:39.679217   58482 cni.go:84] Creating CNI manager for "kindnet"
	I0731 17:53:39.679223   58482 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0731 17:53:39.679273   58482 start.go:340] cluster config:
	{Name:kindnet-985288 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:kindnet-985288 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime
:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthS
ock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0731 17:53:39.679359   58482 iso.go:125] acquiring lock: {Name:mk4494bb5a63902bf12a909c3109db85229bc516 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0731 17:53:39.681061   58482 out.go:177] * Starting "kindnet-985288" primary control-plane node in "kindnet-985288" cluster
	
	
	==> CRI-O <==
	Jul 31 17:53:40 pause-957141 crio[2245]: time="2024-07-31 17:53:40.129040366Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722448420129012096,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:124365,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=ab12fa3f-98fb-4d2a-9125-fd061ea9d367 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 31 17:53:40 pause-957141 crio[2245]: time="2024-07-31 17:53:40.129442595Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=86743c24-636a-4214-84e5-f70f67f55ddb name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 17:53:40 pause-957141 crio[2245]: time="2024-07-31 17:53:40.129507769Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=86743c24-636a-4214-84e5-f70f67f55ddb name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 17:53:40 pause-957141 crio[2245]: time="2024-07-31 17:53:40.129730649Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:c10b93b11252bf16765c1ba5b7c71f0bd7929634945cf80f86ce8ba767199fd6,PodSandboxId:91045b7b3d6043c95fcfb56a6caaa1618b3311f72b69950900b17d7161c2d569,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722448400416680836,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-x4692,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 88b05208-9a2c-431c-8cdf-bda38e0baf8a,},Annotations:map[string]string{io.kubernetes.container.hash: 92de576d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol
\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c76eb85348eddce6b1c551eb07c87c84bb2008ad73594adadce509e776605eb7,PodSandboxId:1a9fe6c39c104705f0c010f47b1d19789a43d64d47d4726c0b56f190d69a8773,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722448400429509662,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-7pdbj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.u
id: 3bf92527-b96a-4157-90cc-5d864b41526d,},Annotations:map[string]string{io.kubernetes.container.hash: d857380c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a02903735701be08d0f2540035c3d0c8ae3ae0487fef31d79d3420daa491ceef,PodSandboxId:7fa2f4020fd8f6e32b67a50f9e521f5690f44cf347833bb2bce3938ec3a10338,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722448396592319197,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-957141,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f36c04e787af1107623cfc87e34dfb95,},Annot
ations:map[string]string{io.kubernetes.container.hash: caa42e73,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:169febbca04884722fd4a27582be05d9c8428337d03e73a2b265c4d9015a561e,PodSandboxId:1a2b5733413009a61ee747829969830fba95da64116ed32e71dfdf0421aa3d6a,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722448396591556491,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-957141,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f6d3c791123308155b7da1ffde857c24,},Annotations:map[string]
string{io.kubernetes.container.hash: d4155cf3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:145822f96f5253022af018277ccec9937090df84d447df2f6b062809bbdbf124,PodSandboxId:7e706d5498e23252642a11e44585a13f268aad9b10e585643a95ef5ee0017c1b,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722448396567629900,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-957141,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fad42619c5d00c7c1c196169a6f9eb4d,},Annotations:map[string]string{io.kubernet
es.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:40eeb9de6b5a087cf086625c0c830be11a561c0f4663ceb4117702ccb3f52db0,PodSandboxId:35922cda041e63613841a6a79f8fc00cbc2a4119d20ed36235764f23294797ec,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722448396554506249,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-957141,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 369c1a8500deb33f817892ba954cb45e,},Annotations:map[string]string{io
.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f6a604f6cdc9ae83bd68b4618c688c09f096a290bca0e7623a30b67bf0b41ecf,PodSandboxId:6bf9df1079de4647053f734d3b5c4b71cd6081a4fef8a06c6833bd111badb964,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_EXITED,CreatedAt:1722448343831535096,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-7pdbj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3bf92527-b96a-4157-90cc-5d864b41526d,},Annotations:map[string]string{io.kubernetes.container.hash: d85738
0c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4e761a3b6c9c7fa02d83225cddba648d385f3378e3cd8eda575bdc0ba4954a20,PodSandboxId:ea5eb9223ab19c59bf7c8efd694f5a350b363616b5d2bafce916652e7e14b16d,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722448343419497730,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-x4692,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 88b05208-9a2c-431c-8cdf-bda38e0baf8a,},Annotations:map[string]string{io.kubernetes.container.hash: 92de576d,io.kubernetes.container.ports
: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6ea3a326bb1edec1cc3c4627be32566c2f9d53be0f8549b5bf1efb4e4679a51f,PodSandboxId:b25ff220a3477a8002cdbf3836a198edf357c3d54b304755a823af1f5857cbd6,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_EXITED,CreatedAt:1722448320920409581,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pau
se-957141,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f6d3c791123308155b7da1ffde857c24,},Annotations:map[string]string{io.kubernetes.container.hash: d4155cf3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:599e308fce44b39d9ffaac07fa975de9d252453298ccb54bedd41c3adee1d9f9,PodSandboxId:0cf0ff28d7a24a1d1c1106e6749a31c6f4c1f22a2db69d1d6e715e736310d332,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1722448320908711950,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-957141,io.kubernetes.pod.namespace: kube-syste
m,io.kubernetes.pod.uid: f36c04e787af1107623cfc87e34dfb95,},Annotations:map[string]string{io.kubernetes.container.hash: caa42e73,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:07fb01c7a66a08ce480a7aa54d46cf2af1212af230d47b4e721bf82d2e3c31f9,PodSandboxId:6a3b5534e49033d62d743e3d9b12068af46669f55fec73d7e21555fe512a3d3b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_EXITED,CreatedAt:1722448320864297845,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-957141,io.kubernetes.pod.namespace: ku
be-system,io.kubernetes.pod.uid: 369c1a8500deb33f817892ba954cb45e,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3ff45ea8e5cd03c0f849f2d625879d14cc1411331de18175d027488dd676aac5,PodSandboxId:1309b324f049fd78e90ae2122c4e516b02d5de2e1e477cdc9031cae1d6df1afb,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_EXITED,CreatedAt:1722448320846960855,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-957141,io.kubernetes.pod.namespace: kube-system,io.kubern
etes.pod.uid: fad42619c5d00c7c1c196169a6f9eb4d,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=86743c24-636a-4214-84e5-f70f67f55ddb name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 17:53:40 pause-957141 crio[2245]: time="2024-07-31 17:53:40.166735100Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=426b975a-49e0-4f69-a85e-a7f41ae6d2cb name=/runtime.v1.RuntimeService/Version
	Jul 31 17:53:40 pause-957141 crio[2245]: time="2024-07-31 17:53:40.166909982Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=426b975a-49e0-4f69-a85e-a7f41ae6d2cb name=/runtime.v1.RuntimeService/Version
	Jul 31 17:53:40 pause-957141 crio[2245]: time="2024-07-31 17:53:40.167768546Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=d4337250-af86-4dea-a41d-9fed23d2e6f0 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 31 17:53:40 pause-957141 crio[2245]: time="2024-07-31 17:53:40.168393360Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722448420168365930,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:124365,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=d4337250-af86-4dea-a41d-9fed23d2e6f0 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 31 17:53:40 pause-957141 crio[2245]: time="2024-07-31 17:53:40.168880575Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=589f51e6-3a3f-4c73-9feb-798042ab7f42 name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 17:53:40 pause-957141 crio[2245]: time="2024-07-31 17:53:40.168947355Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=589f51e6-3a3f-4c73-9feb-798042ab7f42 name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 17:53:40 pause-957141 crio[2245]: time="2024-07-31 17:53:40.169191191Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:c10b93b11252bf16765c1ba5b7c71f0bd7929634945cf80f86ce8ba767199fd6,PodSandboxId:91045b7b3d6043c95fcfb56a6caaa1618b3311f72b69950900b17d7161c2d569,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722448400416680836,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-x4692,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 88b05208-9a2c-431c-8cdf-bda38e0baf8a,},Annotations:map[string]string{io.kubernetes.container.hash: 92de576d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol
\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c76eb85348eddce6b1c551eb07c87c84bb2008ad73594adadce509e776605eb7,PodSandboxId:1a9fe6c39c104705f0c010f47b1d19789a43d64d47d4726c0b56f190d69a8773,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722448400429509662,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-7pdbj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.u
id: 3bf92527-b96a-4157-90cc-5d864b41526d,},Annotations:map[string]string{io.kubernetes.container.hash: d857380c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a02903735701be08d0f2540035c3d0c8ae3ae0487fef31d79d3420daa491ceef,PodSandboxId:7fa2f4020fd8f6e32b67a50f9e521f5690f44cf347833bb2bce3938ec3a10338,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722448396592319197,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-957141,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f36c04e787af1107623cfc87e34dfb95,},Annot
ations:map[string]string{io.kubernetes.container.hash: caa42e73,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:169febbca04884722fd4a27582be05d9c8428337d03e73a2b265c4d9015a561e,PodSandboxId:1a2b5733413009a61ee747829969830fba95da64116ed32e71dfdf0421aa3d6a,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722448396591556491,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-957141,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f6d3c791123308155b7da1ffde857c24,},Annotations:map[string]
string{io.kubernetes.container.hash: d4155cf3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:145822f96f5253022af018277ccec9937090df84d447df2f6b062809bbdbf124,PodSandboxId:7e706d5498e23252642a11e44585a13f268aad9b10e585643a95ef5ee0017c1b,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722448396567629900,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-957141,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fad42619c5d00c7c1c196169a6f9eb4d,},Annotations:map[string]string{io.kubernet
es.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:40eeb9de6b5a087cf086625c0c830be11a561c0f4663ceb4117702ccb3f52db0,PodSandboxId:35922cda041e63613841a6a79f8fc00cbc2a4119d20ed36235764f23294797ec,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722448396554506249,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-957141,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 369c1a8500deb33f817892ba954cb45e,},Annotations:map[string]string{io
.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f6a604f6cdc9ae83bd68b4618c688c09f096a290bca0e7623a30b67bf0b41ecf,PodSandboxId:6bf9df1079de4647053f734d3b5c4b71cd6081a4fef8a06c6833bd111badb964,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_EXITED,CreatedAt:1722448343831535096,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-7pdbj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3bf92527-b96a-4157-90cc-5d864b41526d,},Annotations:map[string]string{io.kubernetes.container.hash: d85738
0c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4e761a3b6c9c7fa02d83225cddba648d385f3378e3cd8eda575bdc0ba4954a20,PodSandboxId:ea5eb9223ab19c59bf7c8efd694f5a350b363616b5d2bafce916652e7e14b16d,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722448343419497730,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-x4692,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 88b05208-9a2c-431c-8cdf-bda38e0baf8a,},Annotations:map[string]string{io.kubernetes.container.hash: 92de576d,io.kubernetes.container.ports
: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6ea3a326bb1edec1cc3c4627be32566c2f9d53be0f8549b5bf1efb4e4679a51f,PodSandboxId:b25ff220a3477a8002cdbf3836a198edf357c3d54b304755a823af1f5857cbd6,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_EXITED,CreatedAt:1722448320920409581,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pau
se-957141,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f6d3c791123308155b7da1ffde857c24,},Annotations:map[string]string{io.kubernetes.container.hash: d4155cf3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:599e308fce44b39d9ffaac07fa975de9d252453298ccb54bedd41c3adee1d9f9,PodSandboxId:0cf0ff28d7a24a1d1c1106e6749a31c6f4c1f22a2db69d1d6e715e736310d332,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1722448320908711950,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-957141,io.kubernetes.pod.namespace: kube-syste
m,io.kubernetes.pod.uid: f36c04e787af1107623cfc87e34dfb95,},Annotations:map[string]string{io.kubernetes.container.hash: caa42e73,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:07fb01c7a66a08ce480a7aa54d46cf2af1212af230d47b4e721bf82d2e3c31f9,PodSandboxId:6a3b5534e49033d62d743e3d9b12068af46669f55fec73d7e21555fe512a3d3b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_EXITED,CreatedAt:1722448320864297845,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-957141,io.kubernetes.pod.namespace: ku
be-system,io.kubernetes.pod.uid: 369c1a8500deb33f817892ba954cb45e,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3ff45ea8e5cd03c0f849f2d625879d14cc1411331de18175d027488dd676aac5,PodSandboxId:1309b324f049fd78e90ae2122c4e516b02d5de2e1e477cdc9031cae1d6df1afb,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_EXITED,CreatedAt:1722448320846960855,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-957141,io.kubernetes.pod.namespace: kube-system,io.kubern
etes.pod.uid: fad42619c5d00c7c1c196169a6f9eb4d,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=589f51e6-3a3f-4c73-9feb-798042ab7f42 name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 17:53:40 pause-957141 crio[2245]: time="2024-07-31 17:53:40.205612276Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=fee8b331-5498-4863-839f-3fa79fb35821 name=/runtime.v1.RuntimeService/Version
	Jul 31 17:53:40 pause-957141 crio[2245]: time="2024-07-31 17:53:40.205682766Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=fee8b331-5498-4863-839f-3fa79fb35821 name=/runtime.v1.RuntimeService/Version
	Jul 31 17:53:40 pause-957141 crio[2245]: time="2024-07-31 17:53:40.206775109Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=a7aa31e0-a304-4a21-950a-cc2f941e7862 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 31 17:53:40 pause-957141 crio[2245]: time="2024-07-31 17:53:40.207205248Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722448420207176805,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:124365,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=a7aa31e0-a304-4a21-950a-cc2f941e7862 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 31 17:53:40 pause-957141 crio[2245]: time="2024-07-31 17:53:40.207746974Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=b7fbcac1-ba64-4031-aaa4-7d26c0f8a946 name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 17:53:40 pause-957141 crio[2245]: time="2024-07-31 17:53:40.207843473Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=b7fbcac1-ba64-4031-aaa4-7d26c0f8a946 name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 17:53:40 pause-957141 crio[2245]: time="2024-07-31 17:53:40.208117100Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:c10b93b11252bf16765c1ba5b7c71f0bd7929634945cf80f86ce8ba767199fd6,PodSandboxId:91045b7b3d6043c95fcfb56a6caaa1618b3311f72b69950900b17d7161c2d569,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722448400416680836,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-x4692,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 88b05208-9a2c-431c-8cdf-bda38e0baf8a,},Annotations:map[string]string{io.kubernetes.container.hash: 92de576d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol
\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c76eb85348eddce6b1c551eb07c87c84bb2008ad73594adadce509e776605eb7,PodSandboxId:1a9fe6c39c104705f0c010f47b1d19789a43d64d47d4726c0b56f190d69a8773,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722448400429509662,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-7pdbj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.u
id: 3bf92527-b96a-4157-90cc-5d864b41526d,},Annotations:map[string]string{io.kubernetes.container.hash: d857380c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a02903735701be08d0f2540035c3d0c8ae3ae0487fef31d79d3420daa491ceef,PodSandboxId:7fa2f4020fd8f6e32b67a50f9e521f5690f44cf347833bb2bce3938ec3a10338,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722448396592319197,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-957141,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f36c04e787af1107623cfc87e34dfb95,},Annot
ations:map[string]string{io.kubernetes.container.hash: caa42e73,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:169febbca04884722fd4a27582be05d9c8428337d03e73a2b265c4d9015a561e,PodSandboxId:1a2b5733413009a61ee747829969830fba95da64116ed32e71dfdf0421aa3d6a,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722448396591556491,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-957141,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f6d3c791123308155b7da1ffde857c24,},Annotations:map[string]
string{io.kubernetes.container.hash: d4155cf3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:145822f96f5253022af018277ccec9937090df84d447df2f6b062809bbdbf124,PodSandboxId:7e706d5498e23252642a11e44585a13f268aad9b10e585643a95ef5ee0017c1b,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722448396567629900,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-957141,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fad42619c5d00c7c1c196169a6f9eb4d,},Annotations:map[string]string{io.kubernet
es.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:40eeb9de6b5a087cf086625c0c830be11a561c0f4663ceb4117702ccb3f52db0,PodSandboxId:35922cda041e63613841a6a79f8fc00cbc2a4119d20ed36235764f23294797ec,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722448396554506249,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-957141,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 369c1a8500deb33f817892ba954cb45e,},Annotations:map[string]string{io
.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f6a604f6cdc9ae83bd68b4618c688c09f096a290bca0e7623a30b67bf0b41ecf,PodSandboxId:6bf9df1079de4647053f734d3b5c4b71cd6081a4fef8a06c6833bd111badb964,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_EXITED,CreatedAt:1722448343831535096,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-7pdbj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3bf92527-b96a-4157-90cc-5d864b41526d,},Annotations:map[string]string{io.kubernetes.container.hash: d85738
0c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4e761a3b6c9c7fa02d83225cddba648d385f3378e3cd8eda575bdc0ba4954a20,PodSandboxId:ea5eb9223ab19c59bf7c8efd694f5a350b363616b5d2bafce916652e7e14b16d,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722448343419497730,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-x4692,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 88b05208-9a2c-431c-8cdf-bda38e0baf8a,},Annotations:map[string]string{io.kubernetes.container.hash: 92de576d,io.kubernetes.container.ports
: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6ea3a326bb1edec1cc3c4627be32566c2f9d53be0f8549b5bf1efb4e4679a51f,PodSandboxId:b25ff220a3477a8002cdbf3836a198edf357c3d54b304755a823af1f5857cbd6,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_EXITED,CreatedAt:1722448320920409581,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pau
se-957141,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f6d3c791123308155b7da1ffde857c24,},Annotations:map[string]string{io.kubernetes.container.hash: d4155cf3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:599e308fce44b39d9ffaac07fa975de9d252453298ccb54bedd41c3adee1d9f9,PodSandboxId:0cf0ff28d7a24a1d1c1106e6749a31c6f4c1f22a2db69d1d6e715e736310d332,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1722448320908711950,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-957141,io.kubernetes.pod.namespace: kube-syste
m,io.kubernetes.pod.uid: f36c04e787af1107623cfc87e34dfb95,},Annotations:map[string]string{io.kubernetes.container.hash: caa42e73,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:07fb01c7a66a08ce480a7aa54d46cf2af1212af230d47b4e721bf82d2e3c31f9,PodSandboxId:6a3b5534e49033d62d743e3d9b12068af46669f55fec73d7e21555fe512a3d3b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_EXITED,CreatedAt:1722448320864297845,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-957141,io.kubernetes.pod.namespace: ku
be-system,io.kubernetes.pod.uid: 369c1a8500deb33f817892ba954cb45e,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3ff45ea8e5cd03c0f849f2d625879d14cc1411331de18175d027488dd676aac5,PodSandboxId:1309b324f049fd78e90ae2122c4e516b02d5de2e1e477cdc9031cae1d6df1afb,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_EXITED,CreatedAt:1722448320846960855,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-957141,io.kubernetes.pod.namespace: kube-system,io.kubern
etes.pod.uid: fad42619c5d00c7c1c196169a6f9eb4d,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=b7fbcac1-ba64-4031-aaa4-7d26c0f8a946 name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 17:53:40 pause-957141 crio[2245]: time="2024-07-31 17:53:40.245017128Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=85f47f85-b5d0-46a5-824a-756779b8f1b2 name=/runtime.v1.RuntimeService/Version
	Jul 31 17:53:40 pause-957141 crio[2245]: time="2024-07-31 17:53:40.245089917Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=85f47f85-b5d0-46a5-824a-756779b8f1b2 name=/runtime.v1.RuntimeService/Version
	Jul 31 17:53:40 pause-957141 crio[2245]: time="2024-07-31 17:53:40.246363496Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=ccf779db-1c79-4f8e-9f09-e24e9cb3a66c name=/runtime.v1.ImageService/ImageFsInfo
	Jul 31 17:53:40 pause-957141 crio[2245]: time="2024-07-31 17:53:40.246994219Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722448420246966911,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:124365,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=ccf779db-1c79-4f8e-9f09-e24e9cb3a66c name=/runtime.v1.ImageService/ImageFsInfo
	Jul 31 17:53:40 pause-957141 crio[2245]: time="2024-07-31 17:53:40.250094918Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=f50559b8-0f2f-4ebe-8d00-354d78aaf784 name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 17:53:40 pause-957141 crio[2245]: time="2024-07-31 17:53:40.250225532Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=f50559b8-0f2f-4ebe-8d00-354d78aaf784 name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 17:53:40 pause-957141 crio[2245]: time="2024-07-31 17:53:40.250605887Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:c10b93b11252bf16765c1ba5b7c71f0bd7929634945cf80f86ce8ba767199fd6,PodSandboxId:91045b7b3d6043c95fcfb56a6caaa1618b3311f72b69950900b17d7161c2d569,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722448400416680836,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-x4692,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 88b05208-9a2c-431c-8cdf-bda38e0baf8a,},Annotations:map[string]string{io.kubernetes.container.hash: 92de576d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol
\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c76eb85348eddce6b1c551eb07c87c84bb2008ad73594adadce509e776605eb7,PodSandboxId:1a9fe6c39c104705f0c010f47b1d19789a43d64d47d4726c0b56f190d69a8773,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722448400429509662,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-7pdbj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.u
id: 3bf92527-b96a-4157-90cc-5d864b41526d,},Annotations:map[string]string{io.kubernetes.container.hash: d857380c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a02903735701be08d0f2540035c3d0c8ae3ae0487fef31d79d3420daa491ceef,PodSandboxId:7fa2f4020fd8f6e32b67a50f9e521f5690f44cf347833bb2bce3938ec3a10338,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722448396592319197,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-957141,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f36c04e787af1107623cfc87e34dfb95,},Annot
ations:map[string]string{io.kubernetes.container.hash: caa42e73,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:169febbca04884722fd4a27582be05d9c8428337d03e73a2b265c4d9015a561e,PodSandboxId:1a2b5733413009a61ee747829969830fba95da64116ed32e71dfdf0421aa3d6a,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722448396591556491,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-957141,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f6d3c791123308155b7da1ffde857c24,},Annotations:map[string]
string{io.kubernetes.container.hash: d4155cf3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:145822f96f5253022af018277ccec9937090df84d447df2f6b062809bbdbf124,PodSandboxId:7e706d5498e23252642a11e44585a13f268aad9b10e585643a95ef5ee0017c1b,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722448396567629900,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-957141,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fad42619c5d00c7c1c196169a6f9eb4d,},Annotations:map[string]string{io.kubernet
es.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:40eeb9de6b5a087cf086625c0c830be11a561c0f4663ceb4117702ccb3f52db0,PodSandboxId:35922cda041e63613841a6a79f8fc00cbc2a4119d20ed36235764f23294797ec,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722448396554506249,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-957141,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 369c1a8500deb33f817892ba954cb45e,},Annotations:map[string]string{io
.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f6a604f6cdc9ae83bd68b4618c688c09f096a290bca0e7623a30b67bf0b41ecf,PodSandboxId:6bf9df1079de4647053f734d3b5c4b71cd6081a4fef8a06c6833bd111badb964,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_EXITED,CreatedAt:1722448343831535096,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-7pdbj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3bf92527-b96a-4157-90cc-5d864b41526d,},Annotations:map[string]string{io.kubernetes.container.hash: d85738
0c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4e761a3b6c9c7fa02d83225cddba648d385f3378e3cd8eda575bdc0ba4954a20,PodSandboxId:ea5eb9223ab19c59bf7c8efd694f5a350b363616b5d2bafce916652e7e14b16d,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722448343419497730,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-x4692,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 88b05208-9a2c-431c-8cdf-bda38e0baf8a,},Annotations:map[string]string{io.kubernetes.container.hash: 92de576d,io.kubernetes.container.ports
: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6ea3a326bb1edec1cc3c4627be32566c2f9d53be0f8549b5bf1efb4e4679a51f,PodSandboxId:b25ff220a3477a8002cdbf3836a198edf357c3d54b304755a823af1f5857cbd6,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_EXITED,CreatedAt:1722448320920409581,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pau
se-957141,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f6d3c791123308155b7da1ffde857c24,},Annotations:map[string]string{io.kubernetes.container.hash: d4155cf3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:599e308fce44b39d9ffaac07fa975de9d252453298ccb54bedd41c3adee1d9f9,PodSandboxId:0cf0ff28d7a24a1d1c1106e6749a31c6f4c1f22a2db69d1d6e715e736310d332,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1722448320908711950,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-957141,io.kubernetes.pod.namespace: kube-syste
m,io.kubernetes.pod.uid: f36c04e787af1107623cfc87e34dfb95,},Annotations:map[string]string{io.kubernetes.container.hash: caa42e73,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:07fb01c7a66a08ce480a7aa54d46cf2af1212af230d47b4e721bf82d2e3c31f9,PodSandboxId:6a3b5534e49033d62d743e3d9b12068af46669f55fec73d7e21555fe512a3d3b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_EXITED,CreatedAt:1722448320864297845,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-957141,io.kubernetes.pod.namespace: ku
be-system,io.kubernetes.pod.uid: 369c1a8500deb33f817892ba954cb45e,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3ff45ea8e5cd03c0f849f2d625879d14cc1411331de18175d027488dd676aac5,PodSandboxId:1309b324f049fd78e90ae2122c4e516b02d5de2e1e477cdc9031cae1d6df1afb,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_EXITED,CreatedAt:1722448320846960855,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-957141,io.kubernetes.pod.namespace: kube-system,io.kubern
etes.pod.uid: fad42619c5d00c7c1c196169a6f9eb4d,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=f50559b8-0f2f-4ebe-8d00-354d78aaf784 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	c76eb85348edd       55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1   19 seconds ago       Running             kube-proxy                1                   1a9fe6c39c104       kube-proxy-7pdbj
	c10b93b11252b       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   19 seconds ago       Running             coredns                   1                   91045b7b3d604       coredns-7db6d8ff4d-x4692
	a02903735701b       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899   23 seconds ago       Running             etcd                      1                   7fa2f4020fd8f       etcd-pause-957141
	169febbca0488       1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d   23 seconds ago       Running             kube-apiserver            1                   1a2b573341300       kube-apiserver-pause-957141
	145822f96f525       3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2   23 seconds ago       Running             kube-scheduler            1                   7e706d5498e23       kube-scheduler-pause-957141
	40eeb9de6b5a0       76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e   23 seconds ago       Running             kube-controller-manager   1                   35922cda041e6       kube-controller-manager-pause-957141
	f6a604f6cdc9a       55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1   About a minute ago   Exited              kube-proxy                0                   6bf9df1079de4       kube-proxy-7pdbj
	4e761a3b6c9c7       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   About a minute ago   Exited              coredns                   0                   ea5eb9223ab19       coredns-7db6d8ff4d-x4692
	6ea3a326bb1ed       1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d   About a minute ago   Exited              kube-apiserver            0                   b25ff220a3477       kube-apiserver-pause-957141
	599e308fce44b       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899   About a minute ago   Exited              etcd                      0                   0cf0ff28d7a24       etcd-pause-957141
	07fb01c7a66a0       76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e   About a minute ago   Exited              kube-controller-manager   0                   6a3b5534e4903       kube-controller-manager-pause-957141
	3ff45ea8e5cd0       3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2   About a minute ago   Exited              kube-scheduler            0                   1309b324f049f       kube-scheduler-pause-957141
	
	
	==> coredns [4e761a3b6c9c7fa02d83225cddba648d385f3378e3cd8eda575bdc0ba4954a20] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:38432 - 6890 "HINFO IN 3662218827591387347.8007845490931809510. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.00691505s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[2077430852]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (31-Jul-2024 17:52:23.607) (total time: 30002ms):
	Trace[2077430852]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30002ms (17:52:53.609)
	Trace[2077430852]: [30.002853434s] [30.002853434s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[160954076]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (31-Jul-2024 17:52:23.607) (total time: 30003ms):
	Trace[160954076]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30003ms (17:52:53.610)
	Trace[160954076]: [30.003110252s] [30.003110252s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[1593891359]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (31-Jul-2024 17:52:23.608) (total time: 30002ms):
	Trace[1593891359]: ---"Objects listed" error:Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30001ms (17:52:53.609)
	Trace[1593891359]: [30.002476812s] [30.002476812s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [c10b93b11252bf16765c1ba5b7c71f0bd7929634945cf80f86ce8ba767199fd6] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:38505 - 8853 "HINFO IN 6238839676201480773.6228709059102171477. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.010399691s
	
	
	==> describe nodes <==
	Name:               pause-957141
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=pause-957141
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=1d737dad7efa60c56d30434fcd857dd3b14c91d9
	                    minikube.k8s.io/name=pause-957141
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_07_31T17_52_06_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 31 Jul 2024 17:52:03 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  pause-957141
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 31 Jul 2024 17:53:39 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 31 Jul 2024 17:53:19 +0000   Wed, 31 Jul 2024 17:52:01 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 31 Jul 2024 17:53:19 +0000   Wed, 31 Jul 2024 17:52:01 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 31 Jul 2024 17:53:19 +0000   Wed, 31 Jul 2024 17:52:01 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 31 Jul 2024 17:53:19 +0000   Wed, 31 Jul 2024 17:52:06 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.24
	  Hostname:    pause-957141
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2015704Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2015704Ki
	  pods:               110
	System Info:
	  Machine ID:                 d2cb4cf1fe7948c390f780ff401aeecc
	  System UUID:                d2cb4cf1-fe79-48c3-90f7-80ff401aeecc
	  Boot ID:                    b7a5e3c9-7c32-48ce-87c7-3771eff8d37e
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-7db6d8ff4d-x4692                100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     80s
	  kube-system                 etcd-pause-957141                       100m (5%)     0 (0%)      100Mi (5%)       0 (0%)         95s
	  kube-system                 kube-apiserver-pause-957141             250m (12%)    0 (0%)      0 (0%)           0 (0%)         95s
	  kube-system                 kube-controller-manager-pause-957141    200m (10%)    0 (0%)      0 (0%)           0 (0%)         95s
	  kube-system                 kube-proxy-7pdbj                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         81s
	  kube-system                 kube-scheduler-pause-957141             100m (5%)     0 (0%)      0 (0%)           0 (0%)         95s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (8%)  170Mi (8%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 76s                kube-proxy       
	  Normal  Starting                 19s                kube-proxy       
	  Normal  NodeHasSufficientPID     95s                kubelet          Node pause-957141 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  95s                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  95s                kubelet          Node pause-957141 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    95s                kubelet          Node pause-957141 status is now: NodeHasNoDiskPressure
	  Normal  Starting                 95s                kubelet          Starting kubelet.
	  Normal  NodeReady                94s                kubelet          Node pause-957141 status is now: NodeReady
	  Normal  RegisteredNode           82s                node-controller  Node pause-957141 event: Registered Node pause-957141 in Controller
	  Normal  Starting                 24s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  24s (x8 over 24s)  kubelet          Node pause-957141 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    24s (x8 over 24s)  kubelet          Node pause-957141 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     24s (x7 over 24s)  kubelet          Node pause-957141 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  24s                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           9s                 node-controller  Node pause-957141 event: Registered Node pause-957141 in Controller
	
	
	==> dmesg <==
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +8.550697] systemd-fstab-generator[602]: Ignoring "noauto" option for root device
	[  +0.060704] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.074241] systemd-fstab-generator[614]: Ignoring "noauto" option for root device
	[  +0.182473] systemd-fstab-generator[628]: Ignoring "noauto" option for root device
	[  +0.151677] systemd-fstab-generator[640]: Ignoring "noauto" option for root device
	[  +0.267471] systemd-fstab-generator[670]: Ignoring "noauto" option for root device
	[  +4.204725] systemd-fstab-generator[767]: Ignoring "noauto" option for root device
	[  +5.063083] systemd-fstab-generator[952]: Ignoring "noauto" option for root device
	[  +0.060865] kauditd_printk_skb: 158 callbacks suppressed
	[Jul31 17:52] systemd-fstab-generator[1284]: Ignoring "noauto" option for root device
	[  +0.091265] kauditd_printk_skb: 69 callbacks suppressed
	[ +15.861852] systemd-fstab-generator[1508]: Ignoring "noauto" option for root device
	[  +0.078113] kauditd_printk_skb: 21 callbacks suppressed
	[Jul31 17:53] kauditd_printk_skb: 67 callbacks suppressed
	[  +8.318078] systemd-fstab-generator[2163]: Ignoring "noauto" option for root device
	[  +0.128326] systemd-fstab-generator[2175]: Ignoring "noauto" option for root device
	[  +0.152095] systemd-fstab-generator[2189]: Ignoring "noauto" option for root device
	[  +0.123495] systemd-fstab-generator[2201]: Ignoring "noauto" option for root device
	[  +0.246573] systemd-fstab-generator[2229]: Ignoring "noauto" option for root device
	[  +1.310403] systemd-fstab-generator[2355]: Ignoring "noauto" option for root device
	[  +2.321236] systemd-fstab-generator[2759]: Ignoring "noauto" option for root device
	[  +0.792680] kauditd_printk_skb: 187 callbacks suppressed
	[ +15.336353] kauditd_printk_skb: 22 callbacks suppressed
	[  +2.124831] systemd-fstab-generator[3152]: Ignoring "noauto" option for root device
	
	
	==> etcd [599e308fce44b39d9ffaac07fa975de9d252453298ccb54bedd41c3adee1d9f9] <==
	{"level":"info","ts":"2024-07-31T17:52:22.502065Z","caller":"traceutil/trace.go:171","msg":"trace[1228088793] transaction","detail":"{read_only:false; response_revision:343; number_of_response:1; }","duration":"481.637948ms","start":"2024-07-31T17:52:22.020415Z","end":"2024-07-31T17:52:22.502053Z","steps":["trace[1228088793] 'process raft request'  (duration: 473.042058ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-31T17:52:22.504555Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-07-31T17:52:22.020404Z","time spent":"483.989138ms","remote":"127.0.0.1:42576","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":3555,"response count":0,"response size":38,"request content":"compare:<target:MOD key:\"/registry/pods/kube-system/coredns-7db6d8ff4d-jlcql\" mod_revision:336 > success:<request_put:<key:\"/registry/pods/kube-system/coredns-7db6d8ff4d-jlcql\" value_size:3496 >> failure:<request_range:<key:\"/registry/pods/kube-system/coredns-7db6d8ff4d-jlcql\" > >"}
	{"level":"info","ts":"2024-07-31T17:52:22.679Z","caller":"traceutil/trace.go:171","msg":"trace[519005476] linearizableReadLoop","detail":"{readStateIndex:358; appliedIndex:357; }","duration":"156.9674ms","start":"2024-07-31T17:52:22.522018Z","end":"2024-07-31T17:52:22.678985Z","steps":["trace[519005476] 'read index received'  (duration: 150.644216ms)","trace[519005476] 'applied index is now lower than readState.Index'  (duration: 6.322543ms)"],"step_count":2}
	{"level":"info","ts":"2024-07-31T17:52:22.679112Z","caller":"traceutil/trace.go:171","msg":"trace[782073005] transaction","detail":"{read_only:false; response_revision:345; number_of_response:1; }","duration":"170.350082ms","start":"2024-07-31T17:52:22.508755Z","end":"2024-07-31T17:52:22.679105Z","steps":["trace[782073005] 'process raft request'  (duration: 163.898739ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-31T17:52:22.679163Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"157.13042ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/replicasets/kube-system/coredns-7db6d8ff4d\" ","response":"range_response_count:1 size:3752"}
	{"level":"info","ts":"2024-07-31T17:52:22.679757Z","caller":"traceutil/trace.go:171","msg":"trace[815962755] range","detail":"{range_begin:/registry/replicasets/kube-system/coredns-7db6d8ff4d; range_end:; response_count:1; response_revision:345; }","duration":"157.75407ms","start":"2024-07-31T17:52:22.521993Z","end":"2024-07-31T17:52:22.679747Z","steps":["trace[815962755] 'agreement among raft nodes before linearized reading'  (duration: 157.122993ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-31T17:52:22.679177Z","caller":"traceutil/trace.go:171","msg":"trace[496895721] transaction","detail":"{read_only:false; response_revision:346; number_of_response:1; }","duration":"154.157369ms","start":"2024-07-31T17:52:22.525015Z","end":"2024-07-31T17:52:22.679173Z","steps":["trace[496895721] 'process raft request'  (duration: 154.122455ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-31T17:52:22.679379Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"150.041133ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/deployments/kube-system/coredns\" ","response":"range_response_count:1 size:3995"}
	{"level":"info","ts":"2024-07-31T17:52:22.681785Z","caller":"traceutil/trace.go:171","msg":"trace[776802858] range","detail":"{range_begin:/registry/deployments/kube-system/coredns; range_end:; response_count:1; response_revision:346; }","duration":"152.503915ms","start":"2024-07-31T17:52:22.52927Z","end":"2024-07-31T17:52:22.681774Z","steps":["trace[776802858] 'agreement among raft nodes before linearized reading'  (duration: 150.027221ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-31T17:52:22.679548Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"150.151603ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/pause-957141\" ","response":"range_response_count:1 size:5426"}
	{"level":"info","ts":"2024-07-31T17:52:22.682442Z","caller":"traceutil/trace.go:171","msg":"trace[28348542] range","detail":"{range_begin:/registry/minions/pause-957141; range_end:; response_count:1; response_revision:346; }","duration":"153.053229ms","start":"2024-07-31T17:52:22.529378Z","end":"2024-07-31T17:52:22.682431Z","steps":["trace[28348542] 'agreement among raft nodes before linearized reading'  (duration: 150.140034ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-31T17:53:00.390163Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"258.388938ms","expected-duration":"100ms","prefix":"","request":"header:<ID:1654387909922034039 > lease_revoke:<id:16f59109eb9a9112>","response":"size:27"}
	{"level":"info","ts":"2024-07-31T17:53:00.390893Z","caller":"traceutil/trace.go:171","msg":"trace[1986015461] linearizableReadLoop","detail":"{readStateIndex:413; appliedIndex:412; }","duration":"170.373459ms","start":"2024-07-31T17:53:00.220465Z","end":"2024-07-31T17:53:00.390838Z","steps":["trace[1986015461] 'read index received'  (duration: 36.38µs)","trace[1986015461] 'applied index is now lower than readState.Index'  (duration: 170.334975ms)"],"step_count":2}
	{"level":"warn","ts":"2024-07-31T17:53:00.391134Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"170.646277ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/coredns-7db6d8ff4d-x4692\" ","response":"range_response_count:1 size:4907"}
	{"level":"info","ts":"2024-07-31T17:53:00.391202Z","caller":"traceutil/trace.go:171","msg":"trace[353426435] range","detail":"{range_begin:/registry/pods/kube-system/coredns-7db6d8ff4d-x4692; range_end:; response_count:1; response_revision:391; }","duration":"170.777543ms","start":"2024-07-31T17:53:00.220409Z","end":"2024-07-31T17:53:00.391186Z","steps":["trace[353426435] 'agreement among raft nodes before linearized reading'  (duration: 170.644564ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-31T17:53:05.572406Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2024-07-31T17:53:05.572497Z","caller":"embed/etcd.go:375","msg":"closing etcd server","name":"pause-957141","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.24:2380"],"advertise-client-urls":["https://192.168.39.24:2379"]}
	{"level":"warn","ts":"2024-07-31T17:53:05.572629Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-07-31T17:53:05.572744Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-07-31T17:53:05.634773Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.39.24:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-07-31T17:53:05.634868Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.39.24:2379: use of closed network connection"}
	{"level":"info","ts":"2024-07-31T17:53:05.634967Z","caller":"etcdserver/server.go:1471","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"602226ed500416f5","current-leader-member-id":"602226ed500416f5"}
	{"level":"info","ts":"2024-07-31T17:53:05.638201Z","caller":"embed/etcd.go:579","msg":"stopping serving peer traffic","address":"192.168.39.24:2380"}
	{"level":"info","ts":"2024-07-31T17:53:05.638385Z","caller":"embed/etcd.go:584","msg":"stopped serving peer traffic","address":"192.168.39.24:2380"}
	{"level":"info","ts":"2024-07-31T17:53:05.638418Z","caller":"embed/etcd.go:377","msg":"closed etcd server","name":"pause-957141","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.24:2380"],"advertise-client-urls":["https://192.168.39.24:2379"]}
	
	
	==> etcd [a02903735701be08d0f2540035c3d0c8ae3ae0487fef31d79d3420daa491ceef] <==
	{"level":"info","ts":"2024-07-31T17:53:16.97484Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-07-31T17:53:16.974874Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-07-31T17:53:16.980996Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"602226ed500416f5 switched to configuration voters=(6927141977540794101)"}
	{"level":"info","ts":"2024-07-31T17:53:16.981094Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"6c3e0d5efc74209","local-member-id":"602226ed500416f5","added-peer-id":"602226ed500416f5","added-peer-peer-urls":["https://192.168.39.24:2380"]}
	{"level":"info","ts":"2024-07-31T17:53:16.981241Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"6c3e0d5efc74209","local-member-id":"602226ed500416f5","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-31T17:53:16.981292Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-31T17:53:16.985172Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-07-31T17:53:16.988147Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"602226ed500416f5","initial-advertise-peer-urls":["https://192.168.39.24:2380"],"listen-peer-urls":["https://192.168.39.24:2380"],"advertise-client-urls":["https://192.168.39.24:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.24:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-07-31T17:53:16.987924Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.39.24:2380"}
	{"level":"info","ts":"2024-07-31T17:53:16.98966Z","caller":"embed/etcd.go:857","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-07-31T17:53:16.989952Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.39.24:2380"}
	{"level":"info","ts":"2024-07-31T17:53:17.926095Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"602226ed500416f5 is starting a new election at term 2"}
	{"level":"info","ts":"2024-07-31T17:53:17.92621Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"602226ed500416f5 became pre-candidate at term 2"}
	{"level":"info","ts":"2024-07-31T17:53:17.926256Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"602226ed500416f5 received MsgPreVoteResp from 602226ed500416f5 at term 2"}
	{"level":"info","ts":"2024-07-31T17:53:17.926291Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"602226ed500416f5 became candidate at term 3"}
	{"level":"info","ts":"2024-07-31T17:53:17.926315Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"602226ed500416f5 received MsgVoteResp from 602226ed500416f5 at term 3"}
	{"level":"info","ts":"2024-07-31T17:53:17.926342Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"602226ed500416f5 became leader at term 3"}
	{"level":"info","ts":"2024-07-31T17:53:17.926367Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 602226ed500416f5 elected leader 602226ed500416f5 at term 3"}
	{"level":"info","ts":"2024-07-31T17:53:17.932432Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"602226ed500416f5","local-member-attributes":"{Name:pause-957141 ClientURLs:[https://192.168.39.24:2379]}","request-path":"/0/members/602226ed500416f5/attributes","cluster-id":"6c3e0d5efc74209","publish-timeout":"7s"}
	{"level":"info","ts":"2024-07-31T17:53:17.932674Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-07-31T17:53:17.933077Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-07-31T17:53:17.9347Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-07-31T17:53:17.937042Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.24:2379"}
	{"level":"info","ts":"2024-07-31T17:53:17.945473Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-07-31T17:53:17.945561Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	
	
	==> kernel <==
	 17:53:40 up 2 min,  0 users,  load average: 2.16, 0.66, 0.23
	Linux pause-957141 5.10.207 #1 SMP Mon Jul 29 15:19:02 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [169febbca04884722fd4a27582be05d9c8428337d03e73a2b265c4d9015a561e] <==
	I0731 17:53:19.303655       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0731 17:53:19.310707       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0731 17:53:19.310918       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0731 17:53:19.311022       1 shared_informer.go:320] Caches are synced for configmaps
	I0731 17:53:19.311114       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0731 17:53:19.312938       1 handler_discovery.go:447] Starting ResourceDiscoveryManager
	I0731 17:53:19.327061       1 apf_controller.go:379] Running API Priority and Fairness config worker
	I0731 17:53:19.327095       1 apf_controller.go:382] Running API Priority and Fairness periodic rebalancing process
	I0731 17:53:19.332649       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0731 17:53:19.333567       1 aggregator.go:165] initial CRD sync complete...
	I0731 17:53:19.333626       1 autoregister_controller.go:141] Starting autoregister controller
	I0731 17:53:19.333651       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0731 17:53:19.333675       1 cache.go:39] Caches are synced for autoregister controller
	I0731 17:53:19.337468       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0731 17:53:19.337538       1 policy_source.go:224] refreshing policies
	E0731 17:53:19.342002       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I0731 17:53:19.350113       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0731 17:53:20.206468       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0731 17:53:20.825045       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0731 17:53:20.842445       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0731 17:53:20.888855       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0731 17:53:20.919199       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0731 17:53:20.925458       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0731 17:53:31.969230       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0731 17:53:32.068471       1 controller.go:615] quota admission added evaluator for: endpoints
	
	
	==> kube-apiserver [6ea3a326bb1edec1cc3c4627be32566c2f9d53be0f8549b5bf1efb4e4679a51f] <==
	Trace[483281170]: [514.883143ms] [514.883143ms] END
	I0731 17:52:22.520614       1 trace.go:236] Trace[1666184154]: "Create" accept:application/vnd.kubernetes.protobuf, */*,audit-id:bf33ba64-964a-43ae-9768-639952cf1039,client:192.168.39.24,api-group:,api-version:v1,name:coredns-7db6d8ff4d-jlcql,subresource:binding,namespace:kube-system,protocol:HTTP/2.0,resource:pods,scope:resource,url:/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-jlcql/binding,user-agent:kube-scheduler/v1.30.3 (linux/amd64) kubernetes/6fc0a69/scheduler,verb:POST (31-Jul-2024 17:52:22.018) (total time: 501ms):
	Trace[1666184154]: ["GuaranteedUpdate etcd3" audit-id:bf33ba64-964a-43ae-9768-639952cf1039,key:/pods/kube-system/coredns-7db6d8ff4d-jlcql,type:*core.Pod,resource:pods 501ms (17:52:22.019)
	Trace[1666184154]:  ---"Txn call completed" 500ms (17:52:22.520)]
	Trace[1666184154]: [501.716572ms] [501.716572ms] END
	I0731 17:53:05.572509       1 controller.go:128] Shutting down kubernetes service endpoint reconciler
	E0731 17:53:05.590331       1 controller.go:131] Unable to remove endpoints from kubernetes service: rpc error: code = Unknown desc = malformed header: missing HTTP content-type
	I0731 17:53:05.592260       1 controller.go:86] Shutting down OpenAPI V3 AggregationController
	I0731 17:53:05.592770       1 storage_flowcontrol.go:187] APF bootstrap ensurer is exiting
	I0731 17:53:05.593295       1 cluster_authentication_trust_controller.go:463] Shutting down cluster_authentication_trust_controller controller
	I0731 17:53:05.594679       1 available_controller.go:439] Shutting down AvailableConditionController
	I0731 17:53:05.594767       1 apf_controller.go:386] Shutting down API Priority and Fairness config worker
	I0731 17:53:05.596120       1 dynamic_cafile_content.go:171] "Shutting down controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0731 17:53:05.596750       1 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0731 17:53:05.596921       1 dynamic_serving_content.go:146] "Shutting down controller" name="aggregator-proxy-cert::/var/lib/minikube/certs/front-proxy-client.crt::/var/lib/minikube/certs/front-proxy-client.key"
	I0731 17:53:05.597455       1 dynamic_cafile_content.go:171] "Shutting down controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0731 17:53:05.597568       1 object_count_tracker.go:151] "StorageObjectCountTracker pruner is exiting"
	I0731 17:53:05.597667       1 controller.go:84] Shutting down OpenAPI AggregationController
	I0731 17:53:05.597776       1 controller.go:157] Shutting down quota evaluator
	I0731 17:53:05.597882       1 controller.go:176] quota evaluator worker shutdown
	I0731 17:53:05.593566       1 apiservice_controller.go:131] Shutting down APIServiceRegistrationController
	I0731 17:53:05.598430       1 tlsconfig.go:255] "Shutting down DynamicServingCertificateController"
	I0731 17:53:05.598642       1 secure_serving.go:258] Stopped listening on [::]:8443
	I0731 17:53:05.603120       1 dynamic_serving_content.go:146] "Shutting down controller" name="serving-cert::/var/lib/minikube/certs/apiserver.crt::/var/lib/minikube/certs/apiserver.key"
	I0731 17:53:05.603474       1 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	
	
	==> kube-controller-manager [07fb01c7a66a08ce480a7aa54d46cf2af1212af230d47b4e721bf82d2e3c31f9] <==
	I0731 17:52:18.582903       1 shared_informer.go:320] Caches are synced for namespace
	I0731 17:52:18.619102       1 shared_informer.go:320] Caches are synced for PV protection
	I0731 17:52:18.625203       1 shared_informer.go:320] Caches are synced for service account
	I0731 17:52:18.659062       1 shared_informer.go:320] Caches are synced for attach detach
	I0731 17:52:18.702501       1 shared_informer.go:320] Caches are synced for resource quota
	I0731 17:52:18.718296       1 shared_informer.go:320] Caches are synced for persistent volume
	I0731 17:52:18.725016       1 shared_informer.go:320] Caches are synced for endpoint_slice_mirroring
	I0731 17:52:18.747767       1 shared_informer.go:320] Caches are synced for resource quota
	I0731 17:52:18.753247       1 shared_informer.go:320] Caches are synced for endpoint
	I0731 17:52:19.166015       1 shared_informer.go:320] Caches are synced for garbage collector
	I0731 17:52:19.166104       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I0731 17:52:19.176900       1 shared_informer.go:320] Caches are synced for garbage collector
	I0731 17:52:22.502247       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="3.366331678s"
	I0731 17:52:22.707563       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="205.213531ms"
	I0731 17:52:22.721069       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="13.444454ms"
	I0731 17:52:22.721635       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="64.541µs"
	I0731 17:52:22.762361       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="35.193342ms"
	I0731 17:52:22.797337       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="34.891642ms"
	I0731 17:52:22.797482       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="100.423µs"
	I0731 17:52:23.774686       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="52.37µs"
	I0731 17:52:23.784228       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="37.507µs"
	I0731 17:52:23.788555       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="35.837µs"
	I0731 17:52:23.802076       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="41.7µs"
	I0731 17:53:03.242467       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="17.501951ms"
	I0731 17:53:03.242681       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="43.032µs"
	
	
	==> kube-controller-manager [40eeb9de6b5a087cf086625c0c830be11a561c0f4663ceb4117702ccb3f52db0] <==
	I0731 17:53:31.764906       1 shared_informer.go:320] Caches are synced for TTL after finished
	I0731 17:53:31.780992       1 shared_informer.go:320] Caches are synced for node
	I0731 17:53:31.781368       1 range_allocator.go:175] "Sending events to api server" logger="node-ipam-controller"
	I0731 17:53:31.781584       1 range_allocator.go:179] "Starting range CIDR allocator" logger="node-ipam-controller"
	I0731 17:53:31.781620       1 shared_informer.go:313] Waiting for caches to sync for cidrallocator
	I0731 17:53:31.781701       1 shared_informer.go:320] Caches are synced for cidrallocator
	I0731 17:53:31.787638       1 shared_informer.go:320] Caches are synced for stateful set
	I0731 17:53:31.787877       1 shared_informer.go:320] Caches are synced for service account
	I0731 17:53:31.790453       1 shared_informer.go:320] Caches are synced for daemon sets
	I0731 17:53:31.796842       1 shared_informer.go:320] Caches are synced for disruption
	I0731 17:53:31.803642       1 shared_informer.go:320] Caches are synced for attach detach
	I0731 17:53:31.807919       1 shared_informer.go:320] Caches are synced for crt configmap
	I0731 17:53:31.815485       1 shared_informer.go:320] Caches are synced for endpoint_slice
	I0731 17:53:31.818906       1 shared_informer.go:320] Caches are synced for job
	I0731 17:53:31.829565       1 shared_informer.go:320] Caches are synced for expand
	I0731 17:53:31.847196       1 shared_informer.go:320] Caches are synced for persistent volume
	I0731 17:53:31.848645       1 shared_informer.go:320] Caches are synced for ephemeral
	I0731 17:53:31.854028       1 shared_informer.go:320] Caches are synced for PVC protection
	I0731 17:53:31.932064       1 shared_informer.go:320] Caches are synced for endpoint_slice_mirroring
	I0731 17:53:31.973180       1 shared_informer.go:320] Caches are synced for resource quota
	I0731 17:53:31.989314       1 shared_informer.go:320] Caches are synced for resource quota
	I0731 17:53:32.010731       1 shared_informer.go:320] Caches are synced for endpoint
	I0731 17:53:32.439449       1 shared_informer.go:320] Caches are synced for garbage collector
	I0731 17:53:32.445772       1 shared_informer.go:320] Caches are synced for garbage collector
	I0731 17:53:32.445858       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	
	
	==> kube-proxy [c76eb85348eddce6b1c551eb07c87c84bb2008ad73594adadce509e776605eb7] <==
	I0731 17:53:20.640517       1 server_linux.go:69] "Using iptables proxy"
	I0731 17:53:20.657254       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.39.24"]
	I0731 17:53:20.732934       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0731 17:53:20.733000       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0731 17:53:20.733016       1 server_linux.go:165] "Using iptables Proxier"
	I0731 17:53:20.736421       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0731 17:53:20.736695       1 server.go:872] "Version info" version="v1.30.3"
	I0731 17:53:20.736708       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0731 17:53:20.738281       1 config.go:192] "Starting service config controller"
	I0731 17:53:20.738299       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0731 17:53:20.738326       1 config.go:101] "Starting endpoint slice config controller"
	I0731 17:53:20.738329       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0731 17:53:20.738945       1 config.go:319] "Starting node config controller"
	I0731 17:53:20.738952       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0731 17:53:20.839232       1 shared_informer.go:320] Caches are synced for node config
	I0731 17:53:20.839301       1 shared_informer.go:320] Caches are synced for service config
	I0731 17:53:20.839325       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-proxy [f6a604f6cdc9ae83bd68b4618c688c09f096a290bca0e7623a30b67bf0b41ecf] <==
	I0731 17:52:24.033748       1 server_linux.go:69] "Using iptables proxy"
	I0731 17:52:24.052538       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.39.24"]
	I0731 17:52:24.103914       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0731 17:52:24.103967       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0731 17:52:24.103984       1 server_linux.go:165] "Using iptables Proxier"
	I0731 17:52:24.107304       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0731 17:52:24.107970       1 server.go:872] "Version info" version="v1.30.3"
	I0731 17:52:24.108030       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0731 17:52:24.110750       1 config.go:192] "Starting service config controller"
	I0731 17:52:24.111035       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0731 17:52:24.111100       1 config.go:319] "Starting node config controller"
	I0731 17:52:24.111117       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0731 17:52:24.111139       1 config.go:101] "Starting endpoint slice config controller"
	I0731 17:52:24.111198       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0731 17:52:24.211834       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0731 17:52:24.211845       1 shared_informer.go:320] Caches are synced for node config
	I0731 17:52:24.211860       1 shared_informer.go:320] Caches are synced for service config
	
	
	==> kube-scheduler [145822f96f5253022af018277ccec9937090df84d447df2f6b062809bbdbf124] <==
	I0731 17:53:17.683736       1 serving.go:380] Generated self-signed cert in-memory
	W0731 17:53:19.237389       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0731 17:53:19.237497       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0731 17:53:19.237559       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0731 17:53:19.237598       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0731 17:53:19.279220       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.30.3"
	I0731 17:53:19.279306       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0731 17:53:19.288279       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0731 17:53:19.288510       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0731 17:53:19.288563       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0731 17:53:19.288611       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0731 17:53:19.388758       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kube-scheduler [3ff45ea8e5cd03c0f849f2d625879d14cc1411331de18175d027488dd676aac5] <==
	W0731 17:52:03.295732       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0731 17:52:03.295753       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0731 17:52:03.295940       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0731 17:52:03.295969       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0731 17:52:04.110471       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0731 17:52:04.110556       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0731 17:52:04.139441       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0731 17:52:04.139542       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0731 17:52:04.165733       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0731 17:52:04.165871       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0731 17:52:04.175594       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0731 17:52:04.175717       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0731 17:52:04.211919       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0731 17:52:04.211971       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0731 17:52:04.221548       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0731 17:52:04.221632       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0731 17:52:04.372429       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0731 17:52:04.372473       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0731 17:52:04.420512       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0731 17:52:04.420560       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0731 17:52:04.521326       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0731 17:52:04.521453       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	I0731 17:52:06.790001       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0731 17:53:05.574829       1 secure_serving.go:258] Stopped listening on 127.0.0.1:10259
	E0731 17:53:05.576521       1 run.go:74] "command failed" err="finished without leader elect"
	
	
	==> kubelet <==
	Jul 31 17:53:16 pause-957141 kubelet[2766]: I0731 17:53:16.302409    2766 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/f6d3c791123308155b7da1ffde857c24-usr-share-ca-certificates\") pod \"kube-apiserver-pause-957141\" (UID: \"f6d3c791123308155b7da1ffde857c24\") " pod="kube-system/kube-apiserver-pause-957141"
	Jul 31 17:53:16 pause-957141 kubelet[2766]: E0731 17:53:16.305945    2766 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/pause-957141?timeout=10s\": dial tcp 192.168.39.24:8443: connect: connection refused" interval="400ms"
	Jul 31 17:53:16 pause-957141 kubelet[2766]: I0731 17:53:16.402000    2766 kubelet_node_status.go:73] "Attempting to register node" node="pause-957141"
	Jul 31 17:53:16 pause-957141 kubelet[2766]: E0731 17:53:16.402986    2766 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 192.168.39.24:8443: connect: connection refused" node="pause-957141"
	Jul 31 17:53:16 pause-957141 kubelet[2766]: I0731 17:53:16.523049    2766 scope.go:117] "RemoveContainer" containerID="07fb01c7a66a08ce480a7aa54d46cf2af1212af230d47b4e721bf82d2e3c31f9"
	Jul 31 17:53:16 pause-957141 kubelet[2766]: I0731 17:53:16.524083    2766 scope.go:117] "RemoveContainer" containerID="3ff45ea8e5cd03c0f849f2d625879d14cc1411331de18175d027488dd676aac5"
	Jul 31 17:53:16 pause-957141 kubelet[2766]: I0731 17:53:16.528990    2766 scope.go:117] "RemoveContainer" containerID="6ea3a326bb1edec1cc3c4627be32566c2f9d53be0f8549b5bf1efb4e4679a51f"
	Jul 31 17:53:16 pause-957141 kubelet[2766]: I0731 17:53:16.530246    2766 scope.go:117] "RemoveContainer" containerID="599e308fce44b39d9ffaac07fa975de9d252453298ccb54bedd41c3adee1d9f9"
	Jul 31 17:53:16 pause-957141 kubelet[2766]: E0731 17:53:16.711243    2766 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/pause-957141?timeout=10s\": dial tcp 192.168.39.24:8443: connect: connection refused" interval="800ms"
	Jul 31 17:53:16 pause-957141 kubelet[2766]: I0731 17:53:16.804784    2766 kubelet_node_status.go:73] "Attempting to register node" node="pause-957141"
	Jul 31 17:53:16 pause-957141 kubelet[2766]: E0731 17:53:16.805637    2766 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 192.168.39.24:8443: connect: connection refused" node="pause-957141"
	Jul 31 17:53:17 pause-957141 kubelet[2766]: I0731 17:53:17.607292    2766 kubelet_node_status.go:73] "Attempting to register node" node="pause-957141"
	Jul 31 17:53:19 pause-957141 kubelet[2766]: I0731 17:53:19.355404    2766 kubelet_node_status.go:112] "Node was previously registered" node="pause-957141"
	Jul 31 17:53:19 pause-957141 kubelet[2766]: I0731 17:53:19.355844    2766 kubelet_node_status.go:76] "Successfully registered node" node="pause-957141"
	Jul 31 17:53:19 pause-957141 kubelet[2766]: I0731 17:53:19.357147    2766 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Jul 31 17:53:19 pause-957141 kubelet[2766]: I0731 17:53:19.358623    2766 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Jul 31 17:53:20 pause-957141 kubelet[2766]: I0731 17:53:20.090368    2766 apiserver.go:52] "Watching apiserver"
	Jul 31 17:53:20 pause-957141 kubelet[2766]: I0731 17:53:20.094136    2766 topology_manager.go:215] "Topology Admit Handler" podUID="3bf92527-b96a-4157-90cc-5d864b41526d" podNamespace="kube-system" podName="kube-proxy-7pdbj"
	Jul 31 17:53:20 pause-957141 kubelet[2766]: I0731 17:53:20.094373    2766 topology_manager.go:215] "Topology Admit Handler" podUID="88b05208-9a2c-431c-8cdf-bda38e0baf8a" podNamespace="kube-system" podName="coredns-7db6d8ff4d-x4692"
	Jul 31 17:53:20 pause-957141 kubelet[2766]: I0731 17:53:20.099020    2766 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world"
	Jul 31 17:53:20 pause-957141 kubelet[2766]: I0731 17:53:20.152957    2766 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/3bf92527-b96a-4157-90cc-5d864b41526d-lib-modules\") pod \"kube-proxy-7pdbj\" (UID: \"3bf92527-b96a-4157-90cc-5d864b41526d\") " pod="kube-system/kube-proxy-7pdbj"
	Jul 31 17:53:20 pause-957141 kubelet[2766]: I0731 17:53:20.153059    2766 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/3bf92527-b96a-4157-90cc-5d864b41526d-xtables-lock\") pod \"kube-proxy-7pdbj\" (UID: \"3bf92527-b96a-4157-90cc-5d864b41526d\") " pod="kube-system/kube-proxy-7pdbj"
	Jul 31 17:53:20 pause-957141 kubelet[2766]: I0731 17:53:20.394932    2766 scope.go:117] "RemoveContainer" containerID="4e761a3b6c9c7fa02d83225cddba648d385f3378e3cd8eda575bdc0ba4954a20"
	Jul 31 17:53:20 pause-957141 kubelet[2766]: I0731 17:53:20.395957    2766 scope.go:117] "RemoveContainer" containerID="f6a604f6cdc9ae83bd68b4618c688c09f096a290bca0e7623a30b67bf0b41ecf"
	Jul 31 17:53:29 pause-957141 kubelet[2766]: I0731 17:53:29.969006    2766 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-957141 -n pause-957141
helpers_test.go:261: (dbg) Run:  kubectl --context pause-957141 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestPause/serial/SecondStartNoReconfiguration (36.76s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/FirstStart (272.91s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-276459 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p old-k8s-version-276459 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0: exit status 109 (4m32.64373404s)

                                                
                                                
-- stdout --
	* [old-k8s-version-276459] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19349
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19349-8084/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19349-8084/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on user configuration
	* Starting "old-k8s-version-276459" primary control-plane node in "old-k8s-version-276459" cluster
	* Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	* Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0731 17:57:39.117433   67995 out.go:291] Setting OutFile to fd 1 ...
	I0731 17:57:39.117525   67995 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 17:57:39.117533   67995 out.go:304] Setting ErrFile to fd 2...
	I0731 17:57:39.117537   67995 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 17:57:39.117711   67995 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19349-8084/.minikube/bin
	I0731 17:57:39.118277   67995 out.go:298] Setting JSON to false
	I0731 17:57:39.119514   67995 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":6003,"bootTime":1722442656,"procs":315,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1065-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0731 17:57:39.119576   67995 start.go:139] virtualization: kvm guest
	I0731 17:57:39.121850   67995 out.go:177] * [old-k8s-version-276459] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0731 17:57:39.123125   67995 out.go:177]   - MINIKUBE_LOCATION=19349
	I0731 17:57:39.123178   67995 notify.go:220] Checking for updates...
	I0731 17:57:39.125621   67995 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0731 17:57:39.126856   67995 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19349-8084/kubeconfig
	I0731 17:57:39.128060   67995 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19349-8084/.minikube
	I0731 17:57:39.129307   67995 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0731 17:57:39.130625   67995 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0731 17:57:39.132272   67995 config.go:182] Loaded profile config "enable-default-cni-985288": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0731 17:57:39.132391   67995 config.go:182] Loaded profile config "flannel-985288": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0731 17:57:39.132494   67995 config.go:182] Loaded profile config "kubernetes-upgrade-410576": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0-beta.0
	I0731 17:57:39.132620   67995 driver.go:392] Setting default libvirt URI to qemu:///system
	I0731 17:57:39.169119   67995 out.go:177] * Using the kvm2 driver based on user configuration
	I0731 17:57:39.170527   67995 start.go:297] selected driver: kvm2
	I0731 17:57:39.170542   67995 start.go:901] validating driver "kvm2" against <nil>
	I0731 17:57:39.170558   67995 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0731 17:57:39.171300   67995 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0731 17:57:39.171386   67995 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19349-8084/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0731 17:57:39.186105   67995 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0731 17:57:39.186165   67995 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0731 17:57:39.186441   67995 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0731 17:57:39.186507   67995 cni.go:84] Creating CNI manager for ""
	I0731 17:57:39.186522   67995 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0731 17:57:39.186536   67995 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0731 17:57:39.186601   67995 start.go:340] cluster config:
	{Name:old-k8s-version-276459 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-276459 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0731 17:57:39.186720   67995 iso.go:125] acquiring lock: {Name:mk4494bb5a63902bf12a909c3109db85229bc516 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0731 17:57:39.188567   67995 out.go:177] * Starting "old-k8s-version-276459" primary control-plane node in "old-k8s-version-276459" cluster
	I0731 17:57:39.189908   67995 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0731 17:57:39.189950   67995 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19349-8084/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0731 17:57:39.189959   67995 cache.go:56] Caching tarball of preloaded images
	I0731 17:57:39.190050   67995 preload.go:172] Found /home/jenkins/minikube-integration/19349-8084/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0731 17:57:39.190062   67995 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0731 17:57:39.190187   67995 profile.go:143] Saving config to /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/old-k8s-version-276459/config.json ...
	I0731 17:57:39.190211   67995 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/old-k8s-version-276459/config.json: {Name:mk7d50135ed2c60e0545008accf64d73827c287c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 17:57:39.190384   67995 start.go:360] acquireMachinesLock for old-k8s-version-276459: {Name:mkd78eacfab7d6c3058c6674434b3d889ec957e0 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0731 17:57:39.190428   67995 start.go:364] duration metric: took 23.264µs to acquireMachinesLock for "old-k8s-version-276459"
	I0731 17:57:39.190453   67995 start.go:93] Provisioning new machine with config: &{Name:old-k8s-version-276459 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-276459 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0731 17:57:39.190531   67995 start.go:125] createHost starting for "" (driver="kvm2")
	I0731 17:57:39.192130   67995 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0731 17:57:39.192285   67995 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 17:57:39.192338   67995 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 17:57:39.207371   67995 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38035
	I0731 17:57:39.207857   67995 main.go:141] libmachine: () Calling .GetVersion
	I0731 17:57:39.208398   67995 main.go:141] libmachine: Using API Version  1
	I0731 17:57:39.208419   67995 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 17:57:39.208831   67995 main.go:141] libmachine: () Calling .GetMachineName
	I0731 17:57:39.209071   67995 main.go:141] libmachine: (old-k8s-version-276459) Calling .GetMachineName
	I0731 17:57:39.209265   67995 main.go:141] libmachine: (old-k8s-version-276459) Calling .DriverName
	I0731 17:57:39.209468   67995 start.go:159] libmachine.API.Create for "old-k8s-version-276459" (driver="kvm2")
	I0731 17:57:39.209498   67995 client.go:168] LocalClient.Create starting
	I0731 17:57:39.209536   67995 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19349-8084/.minikube/certs/ca.pem
	I0731 17:57:39.209581   67995 main.go:141] libmachine: Decoding PEM data...
	I0731 17:57:39.209601   67995 main.go:141] libmachine: Parsing certificate...
	I0731 17:57:39.209678   67995 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19349-8084/.minikube/certs/cert.pem
	I0731 17:57:39.209706   67995 main.go:141] libmachine: Decoding PEM data...
	I0731 17:57:39.209729   67995 main.go:141] libmachine: Parsing certificate...
	I0731 17:57:39.209754   67995 main.go:141] libmachine: Running pre-create checks...
	I0731 17:57:39.209775   67995 main.go:141] libmachine: (old-k8s-version-276459) Calling .PreCreateCheck
	I0731 17:57:39.210233   67995 main.go:141] libmachine: (old-k8s-version-276459) Calling .GetConfigRaw
	I0731 17:57:39.210764   67995 main.go:141] libmachine: Creating machine...
	I0731 17:57:39.210779   67995 main.go:141] libmachine: (old-k8s-version-276459) Calling .Create
	I0731 17:57:39.210966   67995 main.go:141] libmachine: (old-k8s-version-276459) Creating KVM machine...
	I0731 17:57:39.212423   67995 main.go:141] libmachine: (old-k8s-version-276459) DBG | found existing default KVM network
	I0731 17:57:39.213833   67995 main.go:141] libmachine: (old-k8s-version-276459) DBG | I0731 17:57:39.213686   68018 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000112c10}
	I0731 17:57:39.213853   67995 main.go:141] libmachine: (old-k8s-version-276459) DBG | created network xml: 
	I0731 17:57:39.213862   67995 main.go:141] libmachine: (old-k8s-version-276459) DBG | <network>
	I0731 17:57:39.213868   67995 main.go:141] libmachine: (old-k8s-version-276459) DBG |   <name>mk-old-k8s-version-276459</name>
	I0731 17:57:39.213875   67995 main.go:141] libmachine: (old-k8s-version-276459) DBG |   <dns enable='no'/>
	I0731 17:57:39.213880   67995 main.go:141] libmachine: (old-k8s-version-276459) DBG |   
	I0731 17:57:39.213886   67995 main.go:141] libmachine: (old-k8s-version-276459) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0731 17:57:39.213905   67995 main.go:141] libmachine: (old-k8s-version-276459) DBG |     <dhcp>
	I0731 17:57:39.213919   67995 main.go:141] libmachine: (old-k8s-version-276459) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0731 17:57:39.213928   67995 main.go:141] libmachine: (old-k8s-version-276459) DBG |     </dhcp>
	I0731 17:57:39.214021   67995 main.go:141] libmachine: (old-k8s-version-276459) DBG |   </ip>
	I0731 17:57:39.214085   67995 main.go:141] libmachine: (old-k8s-version-276459) DBG |   
	I0731 17:57:39.214101   67995 main.go:141] libmachine: (old-k8s-version-276459) DBG | </network>
	I0731 17:57:39.214119   67995 main.go:141] libmachine: (old-k8s-version-276459) DBG | 
	I0731 17:57:39.219563   67995 main.go:141] libmachine: (old-k8s-version-276459) DBG | trying to create private KVM network mk-old-k8s-version-276459 192.168.39.0/24...
	I0731 17:57:39.310430   67995 main.go:141] libmachine: (old-k8s-version-276459) DBG | private KVM network mk-old-k8s-version-276459 192.168.39.0/24 created
	I0731 17:57:39.310478   67995 main.go:141] libmachine: (old-k8s-version-276459) DBG | I0731 17:57:39.310310   68018 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19349-8084/.minikube
	I0731 17:57:39.310493   67995 main.go:141] libmachine: (old-k8s-version-276459) Setting up store path in /home/jenkins/minikube-integration/19349-8084/.minikube/machines/old-k8s-version-276459 ...
	I0731 17:57:39.310522   67995 main.go:141] libmachine: (old-k8s-version-276459) Building disk image from file:///home/jenkins/minikube-integration/19349-8084/.minikube/cache/iso/amd64/minikube-v1.33.1-1722248113-19339-amd64.iso
	I0731 17:57:39.310574   67995 main.go:141] libmachine: (old-k8s-version-276459) Downloading /home/jenkins/minikube-integration/19349-8084/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19349-8084/.minikube/cache/iso/amd64/minikube-v1.33.1-1722248113-19339-amd64.iso...
	I0731 17:57:39.576495   67995 main.go:141] libmachine: (old-k8s-version-276459) DBG | I0731 17:57:39.576315   68018 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19349-8084/.minikube/machines/old-k8s-version-276459/id_rsa...
	I0731 17:57:39.680936   67995 main.go:141] libmachine: (old-k8s-version-276459) DBG | I0731 17:57:39.680813   68018 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19349-8084/.minikube/machines/old-k8s-version-276459/old-k8s-version-276459.rawdisk...
	I0731 17:57:39.680965   67995 main.go:141] libmachine: (old-k8s-version-276459) DBG | Writing magic tar header
	I0731 17:57:39.680982   67995 main.go:141] libmachine: (old-k8s-version-276459) DBG | Writing SSH key tar header
	I0731 17:57:39.681031   67995 main.go:141] libmachine: (old-k8s-version-276459) DBG | I0731 17:57:39.680975   68018 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19349-8084/.minikube/machines/old-k8s-version-276459 ...
	I0731 17:57:39.681109   67995 main.go:141] libmachine: (old-k8s-version-276459) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19349-8084/.minikube/machines/old-k8s-version-276459
	I0731 17:57:39.681131   67995 main.go:141] libmachine: (old-k8s-version-276459) Setting executable bit set on /home/jenkins/minikube-integration/19349-8084/.minikube/machines/old-k8s-version-276459 (perms=drwx------)
	I0731 17:57:39.681144   67995 main.go:141] libmachine: (old-k8s-version-276459) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19349-8084/.minikube/machines
	I0731 17:57:39.681160   67995 main.go:141] libmachine: (old-k8s-version-276459) Setting executable bit set on /home/jenkins/minikube-integration/19349-8084/.minikube/machines (perms=drwxr-xr-x)
	I0731 17:57:39.681181   67995 main.go:141] libmachine: (old-k8s-version-276459) Setting executable bit set on /home/jenkins/minikube-integration/19349-8084/.minikube (perms=drwxr-xr-x)
	I0731 17:57:39.681196   67995 main.go:141] libmachine: (old-k8s-version-276459) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19349-8084/.minikube
	I0731 17:57:39.681211   67995 main.go:141] libmachine: (old-k8s-version-276459) Setting executable bit set on /home/jenkins/minikube-integration/19349-8084 (perms=drwxrwxr-x)
	I0731 17:57:39.681229   67995 main.go:141] libmachine: (old-k8s-version-276459) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0731 17:57:39.681243   67995 main.go:141] libmachine: (old-k8s-version-276459) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19349-8084
	I0731 17:57:39.681261   67995 main.go:141] libmachine: (old-k8s-version-276459) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0731 17:57:39.681274   67995 main.go:141] libmachine: (old-k8s-version-276459) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0731 17:57:39.681291   67995 main.go:141] libmachine: (old-k8s-version-276459) DBG | Checking permissions on dir: /home/jenkins
	I0731 17:57:39.681300   67995 main.go:141] libmachine: (old-k8s-version-276459) Creating domain...
	I0731 17:57:39.681307   67995 main.go:141] libmachine: (old-k8s-version-276459) DBG | Checking permissions on dir: /home
	I0731 17:57:39.681317   67995 main.go:141] libmachine: (old-k8s-version-276459) DBG | Skipping /home - not owner
	I0731 17:57:39.682579   67995 main.go:141] libmachine: (old-k8s-version-276459) define libvirt domain using xml: 
	I0731 17:57:39.682601   67995 main.go:141] libmachine: (old-k8s-version-276459) <domain type='kvm'>
	I0731 17:57:39.682612   67995 main.go:141] libmachine: (old-k8s-version-276459)   <name>old-k8s-version-276459</name>
	I0731 17:57:39.682630   67995 main.go:141] libmachine: (old-k8s-version-276459)   <memory unit='MiB'>2200</memory>
	I0731 17:57:39.682656   67995 main.go:141] libmachine: (old-k8s-version-276459)   <vcpu>2</vcpu>
	I0731 17:57:39.682679   67995 main.go:141] libmachine: (old-k8s-version-276459)   <features>
	I0731 17:57:39.682691   67995 main.go:141] libmachine: (old-k8s-version-276459)     <acpi/>
	I0731 17:57:39.682706   67995 main.go:141] libmachine: (old-k8s-version-276459)     <apic/>
	I0731 17:57:39.682716   67995 main.go:141] libmachine: (old-k8s-version-276459)     <pae/>
	I0731 17:57:39.682738   67995 main.go:141] libmachine: (old-k8s-version-276459)     
	I0731 17:57:39.682749   67995 main.go:141] libmachine: (old-k8s-version-276459)   </features>
	I0731 17:57:39.682761   67995 main.go:141] libmachine: (old-k8s-version-276459)   <cpu mode='host-passthrough'>
	I0731 17:57:39.682772   67995 main.go:141] libmachine: (old-k8s-version-276459)   
	I0731 17:57:39.682782   67995 main.go:141] libmachine: (old-k8s-version-276459)   </cpu>
	I0731 17:57:39.682793   67995 main.go:141] libmachine: (old-k8s-version-276459)   <os>
	I0731 17:57:39.682804   67995 main.go:141] libmachine: (old-k8s-version-276459)     <type>hvm</type>
	I0731 17:57:39.682812   67995 main.go:141] libmachine: (old-k8s-version-276459)     <boot dev='cdrom'/>
	I0731 17:57:39.682826   67995 main.go:141] libmachine: (old-k8s-version-276459)     <boot dev='hd'/>
	I0731 17:57:39.682837   67995 main.go:141] libmachine: (old-k8s-version-276459)     <bootmenu enable='no'/>
	I0731 17:57:39.682850   67995 main.go:141] libmachine: (old-k8s-version-276459)   </os>
	I0731 17:57:39.682871   67995 main.go:141] libmachine: (old-k8s-version-276459)   <devices>
	I0731 17:57:39.682885   67995 main.go:141] libmachine: (old-k8s-version-276459)     <disk type='file' device='cdrom'>
	I0731 17:57:39.682902   67995 main.go:141] libmachine: (old-k8s-version-276459)       <source file='/home/jenkins/minikube-integration/19349-8084/.minikube/machines/old-k8s-version-276459/boot2docker.iso'/>
	I0731 17:57:39.682913   67995 main.go:141] libmachine: (old-k8s-version-276459)       <target dev='hdc' bus='scsi'/>
	I0731 17:57:39.682922   67995 main.go:141] libmachine: (old-k8s-version-276459)       <readonly/>
	I0731 17:57:39.682931   67995 main.go:141] libmachine: (old-k8s-version-276459)     </disk>
	I0731 17:57:39.682941   67995 main.go:141] libmachine: (old-k8s-version-276459)     <disk type='file' device='disk'>
	I0731 17:57:39.682952   67995 main.go:141] libmachine: (old-k8s-version-276459)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0731 17:57:39.682969   67995 main.go:141] libmachine: (old-k8s-version-276459)       <source file='/home/jenkins/minikube-integration/19349-8084/.minikube/machines/old-k8s-version-276459/old-k8s-version-276459.rawdisk'/>
	I0731 17:57:39.682979   67995 main.go:141] libmachine: (old-k8s-version-276459)       <target dev='hda' bus='virtio'/>
	I0731 17:57:39.682986   67995 main.go:141] libmachine: (old-k8s-version-276459)     </disk>
	I0731 17:57:39.682998   67995 main.go:141] libmachine: (old-k8s-version-276459)     <interface type='network'>
	I0731 17:57:39.683010   67995 main.go:141] libmachine: (old-k8s-version-276459)       <source network='mk-old-k8s-version-276459'/>
	I0731 17:57:39.683021   67995 main.go:141] libmachine: (old-k8s-version-276459)       <model type='virtio'/>
	I0731 17:57:39.683036   67995 main.go:141] libmachine: (old-k8s-version-276459)     </interface>
	I0731 17:57:39.683048   67995 main.go:141] libmachine: (old-k8s-version-276459)     <interface type='network'>
	I0731 17:57:39.683060   67995 main.go:141] libmachine: (old-k8s-version-276459)       <source network='default'/>
	I0731 17:57:39.683072   67995 main.go:141] libmachine: (old-k8s-version-276459)       <model type='virtio'/>
	I0731 17:57:39.683081   67995 main.go:141] libmachine: (old-k8s-version-276459)     </interface>
	I0731 17:57:39.683089   67995 main.go:141] libmachine: (old-k8s-version-276459)     <serial type='pty'>
	I0731 17:57:39.683098   67995 main.go:141] libmachine: (old-k8s-version-276459)       <target port='0'/>
	I0731 17:57:39.683138   67995 main.go:141] libmachine: (old-k8s-version-276459)     </serial>
	I0731 17:57:39.683161   67995 main.go:141] libmachine: (old-k8s-version-276459)     <console type='pty'>
	I0731 17:57:39.683176   67995 main.go:141] libmachine: (old-k8s-version-276459)       <target type='serial' port='0'/>
	I0731 17:57:39.683188   67995 main.go:141] libmachine: (old-k8s-version-276459)     </console>
	I0731 17:57:39.683201   67995 main.go:141] libmachine: (old-k8s-version-276459)     <rng model='virtio'>
	I0731 17:57:39.683223   67995 main.go:141] libmachine: (old-k8s-version-276459)       <backend model='random'>/dev/random</backend>
	I0731 17:57:39.683247   67995 main.go:141] libmachine: (old-k8s-version-276459)     </rng>
	I0731 17:57:39.683266   67995 main.go:141] libmachine: (old-k8s-version-276459)     
	I0731 17:57:39.683282   67995 main.go:141] libmachine: (old-k8s-version-276459)     
	I0731 17:57:39.683292   67995 main.go:141] libmachine: (old-k8s-version-276459)   </devices>
	I0731 17:57:39.683298   67995 main.go:141] libmachine: (old-k8s-version-276459) </domain>
	I0731 17:57:39.683310   67995 main.go:141] libmachine: (old-k8s-version-276459) 
	I0731 17:57:39.688638   67995 main.go:141] libmachine: (old-k8s-version-276459) DBG | domain old-k8s-version-276459 has defined MAC address 52:54:00:89:39:4e in network default
	I0731 17:57:39.689230   67995 main.go:141] libmachine: (old-k8s-version-276459) Ensuring networks are active...
	I0731 17:57:39.689252   67995 main.go:141] libmachine: (old-k8s-version-276459) DBG | domain old-k8s-version-276459 has defined MAC address 52:54:00:79:9d:96 in network mk-old-k8s-version-276459
	I0731 17:57:39.689888   67995 main.go:141] libmachine: (old-k8s-version-276459) Ensuring network default is active
	I0731 17:57:39.690220   67995 main.go:141] libmachine: (old-k8s-version-276459) Ensuring network mk-old-k8s-version-276459 is active
	I0731 17:57:39.690746   67995 main.go:141] libmachine: (old-k8s-version-276459) Getting domain xml...
	I0731 17:57:39.691416   67995 main.go:141] libmachine: (old-k8s-version-276459) Creating domain...
	I0731 17:57:41.026258   67995 main.go:141] libmachine: (old-k8s-version-276459) Waiting to get IP...
	I0731 17:57:41.027135   67995 main.go:141] libmachine: (old-k8s-version-276459) DBG | domain old-k8s-version-276459 has defined MAC address 52:54:00:79:9d:96 in network mk-old-k8s-version-276459
	I0731 17:57:41.027688   67995 main.go:141] libmachine: (old-k8s-version-276459) DBG | unable to find current IP address of domain old-k8s-version-276459 in network mk-old-k8s-version-276459
	I0731 17:57:41.027755   67995 main.go:141] libmachine: (old-k8s-version-276459) DBG | I0731 17:57:41.027658   68018 retry.go:31] will retry after 299.311272ms: waiting for machine to come up
	I0731 17:57:41.327950   67995 main.go:141] libmachine: (old-k8s-version-276459) DBG | domain old-k8s-version-276459 has defined MAC address 52:54:00:79:9d:96 in network mk-old-k8s-version-276459
	I0731 17:57:41.328415   67995 main.go:141] libmachine: (old-k8s-version-276459) DBG | unable to find current IP address of domain old-k8s-version-276459 in network mk-old-k8s-version-276459
	I0731 17:57:41.328439   67995 main.go:141] libmachine: (old-k8s-version-276459) DBG | I0731 17:57:41.328372   68018 retry.go:31] will retry after 362.13362ms: waiting for machine to come up
	I0731 17:57:41.691998   67995 main.go:141] libmachine: (old-k8s-version-276459) DBG | domain old-k8s-version-276459 has defined MAC address 52:54:00:79:9d:96 in network mk-old-k8s-version-276459
	I0731 17:57:41.692503   67995 main.go:141] libmachine: (old-k8s-version-276459) DBG | unable to find current IP address of domain old-k8s-version-276459 in network mk-old-k8s-version-276459
	I0731 17:57:41.692536   67995 main.go:141] libmachine: (old-k8s-version-276459) DBG | I0731 17:57:41.692464   68018 retry.go:31] will retry after 432.407689ms: waiting for machine to come up
	I0731 17:57:42.126805   67995 main.go:141] libmachine: (old-k8s-version-276459) DBG | domain old-k8s-version-276459 has defined MAC address 52:54:00:79:9d:96 in network mk-old-k8s-version-276459
	I0731 17:57:42.127489   67995 main.go:141] libmachine: (old-k8s-version-276459) DBG | unable to find current IP address of domain old-k8s-version-276459 in network mk-old-k8s-version-276459
	I0731 17:57:42.127515   67995 main.go:141] libmachine: (old-k8s-version-276459) DBG | I0731 17:57:42.127444   68018 retry.go:31] will retry after 413.716495ms: waiting for machine to come up
	I0731 17:57:42.543029   67995 main.go:141] libmachine: (old-k8s-version-276459) DBG | domain old-k8s-version-276459 has defined MAC address 52:54:00:79:9d:96 in network mk-old-k8s-version-276459
	I0731 17:57:42.543593   67995 main.go:141] libmachine: (old-k8s-version-276459) DBG | unable to find current IP address of domain old-k8s-version-276459 in network mk-old-k8s-version-276459
	I0731 17:57:42.543623   67995 main.go:141] libmachine: (old-k8s-version-276459) DBG | I0731 17:57:42.543539   68018 retry.go:31] will retry after 485.377079ms: waiting for machine to come up
	I0731 17:57:43.030441   67995 main.go:141] libmachine: (old-k8s-version-276459) DBG | domain old-k8s-version-276459 has defined MAC address 52:54:00:79:9d:96 in network mk-old-k8s-version-276459
	I0731 17:57:43.031267   67995 main.go:141] libmachine: (old-k8s-version-276459) DBG | unable to find current IP address of domain old-k8s-version-276459 in network mk-old-k8s-version-276459
	I0731 17:57:43.031290   67995 main.go:141] libmachine: (old-k8s-version-276459) DBG | I0731 17:57:43.031219   68018 retry.go:31] will retry after 923.154755ms: waiting for machine to come up
	I0731 17:57:43.956323   67995 main.go:141] libmachine: (old-k8s-version-276459) DBG | domain old-k8s-version-276459 has defined MAC address 52:54:00:79:9d:96 in network mk-old-k8s-version-276459
	I0731 17:57:43.956817   67995 main.go:141] libmachine: (old-k8s-version-276459) DBG | unable to find current IP address of domain old-k8s-version-276459 in network mk-old-k8s-version-276459
	I0731 17:57:43.956862   67995 main.go:141] libmachine: (old-k8s-version-276459) DBG | I0731 17:57:43.956771   68018 retry.go:31] will retry after 962.456791ms: waiting for machine to come up
	I0731 17:57:44.920531   67995 main.go:141] libmachine: (old-k8s-version-276459) DBG | domain old-k8s-version-276459 has defined MAC address 52:54:00:79:9d:96 in network mk-old-k8s-version-276459
	I0731 17:57:44.921182   67995 main.go:141] libmachine: (old-k8s-version-276459) DBG | unable to find current IP address of domain old-k8s-version-276459 in network mk-old-k8s-version-276459
	I0731 17:57:44.921204   67995 main.go:141] libmachine: (old-k8s-version-276459) DBG | I0731 17:57:44.921097   68018 retry.go:31] will retry after 1.107879536s: waiting for machine to come up
	I0731 17:57:46.030530   67995 main.go:141] libmachine: (old-k8s-version-276459) DBG | domain old-k8s-version-276459 has defined MAC address 52:54:00:79:9d:96 in network mk-old-k8s-version-276459
	I0731 17:57:46.031222   67995 main.go:141] libmachine: (old-k8s-version-276459) DBG | unable to find current IP address of domain old-k8s-version-276459 in network mk-old-k8s-version-276459
	I0731 17:57:46.031247   67995 main.go:141] libmachine: (old-k8s-version-276459) DBG | I0731 17:57:46.031142   68018 retry.go:31] will retry after 1.67805442s: waiting for machine to come up
	I0731 17:57:47.710539   67995 main.go:141] libmachine: (old-k8s-version-276459) DBG | domain old-k8s-version-276459 has defined MAC address 52:54:00:79:9d:96 in network mk-old-k8s-version-276459
	I0731 17:57:47.711335   67995 main.go:141] libmachine: (old-k8s-version-276459) DBG | unable to find current IP address of domain old-k8s-version-276459 in network mk-old-k8s-version-276459
	I0731 17:57:47.711367   67995 main.go:141] libmachine: (old-k8s-version-276459) DBG | I0731 17:57:47.711269   68018 retry.go:31] will retry after 1.585652652s: waiting for machine to come up
	I0731 17:57:49.298743   67995 main.go:141] libmachine: (old-k8s-version-276459) DBG | domain old-k8s-version-276459 has defined MAC address 52:54:00:79:9d:96 in network mk-old-k8s-version-276459
	I0731 17:57:49.299368   67995 main.go:141] libmachine: (old-k8s-version-276459) DBG | unable to find current IP address of domain old-k8s-version-276459 in network mk-old-k8s-version-276459
	I0731 17:57:49.299393   67995 main.go:141] libmachine: (old-k8s-version-276459) DBG | I0731 17:57:49.299317   68018 retry.go:31] will retry after 2.429908417s: waiting for machine to come up
	I0731 17:57:51.731416   67995 main.go:141] libmachine: (old-k8s-version-276459) DBG | domain old-k8s-version-276459 has defined MAC address 52:54:00:79:9d:96 in network mk-old-k8s-version-276459
	I0731 17:57:51.731885   67995 main.go:141] libmachine: (old-k8s-version-276459) DBG | unable to find current IP address of domain old-k8s-version-276459 in network mk-old-k8s-version-276459
	I0731 17:57:51.731909   67995 main.go:141] libmachine: (old-k8s-version-276459) DBG | I0731 17:57:51.731839   68018 retry.go:31] will retry after 2.296376405s: waiting for machine to come up
	I0731 17:57:54.030279   67995 main.go:141] libmachine: (old-k8s-version-276459) DBG | domain old-k8s-version-276459 has defined MAC address 52:54:00:79:9d:96 in network mk-old-k8s-version-276459
	I0731 17:57:54.030794   67995 main.go:141] libmachine: (old-k8s-version-276459) DBG | unable to find current IP address of domain old-k8s-version-276459 in network mk-old-k8s-version-276459
	I0731 17:57:54.030839   67995 main.go:141] libmachine: (old-k8s-version-276459) DBG | I0731 17:57:54.030748   68018 retry.go:31] will retry after 2.914320434s: waiting for machine to come up
	I0731 17:57:56.948124   67995 main.go:141] libmachine: (old-k8s-version-276459) DBG | domain old-k8s-version-276459 has defined MAC address 52:54:00:79:9d:96 in network mk-old-k8s-version-276459
	I0731 17:57:56.948641   67995 main.go:141] libmachine: (old-k8s-version-276459) DBG | unable to find current IP address of domain old-k8s-version-276459 in network mk-old-k8s-version-276459
	I0731 17:57:56.948668   67995 main.go:141] libmachine: (old-k8s-version-276459) DBG | I0731 17:57:56.948597   68018 retry.go:31] will retry after 3.886986085s: waiting for machine to come up
	I0731 17:58:00.837371   67995 main.go:141] libmachine: (old-k8s-version-276459) DBG | domain old-k8s-version-276459 has defined MAC address 52:54:00:79:9d:96 in network mk-old-k8s-version-276459
	I0731 17:58:00.837940   67995 main.go:141] libmachine: (old-k8s-version-276459) Found IP for machine: 192.168.39.26
	I0731 17:58:00.837986   67995 main.go:141] libmachine: (old-k8s-version-276459) DBG | domain old-k8s-version-276459 has current primary IP address 192.168.39.26 and MAC address 52:54:00:79:9d:96 in network mk-old-k8s-version-276459
	I0731 17:58:00.837996   67995 main.go:141] libmachine: (old-k8s-version-276459) Reserving static IP address...
	I0731 17:58:00.838391   67995 main.go:141] libmachine: (old-k8s-version-276459) DBG | unable to find host DHCP lease matching {name: "old-k8s-version-276459", mac: "52:54:00:79:9d:96", ip: "192.168.39.26"} in network mk-old-k8s-version-276459
	I0731 17:58:00.912934   67995 main.go:141] libmachine: (old-k8s-version-276459) DBG | Getting to WaitForSSH function...
	I0731 17:58:00.912968   67995 main.go:141] libmachine: (old-k8s-version-276459) Reserved static IP address: 192.168.39.26
	I0731 17:58:00.912981   67995 main.go:141] libmachine: (old-k8s-version-276459) Waiting for SSH to be available...
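The retry loop above polls libvirt until the new domain obtains a DHCP lease on the private network; here it took roughly 20 seconds. A quick way to watch the same lease from the host, assuming virsh access to the same libvirt instance:

    # The lease for MAC 52:54:00:79:9d:96 should appear with IP 192.168.39.26
    virsh net-dhcp-leases mk-old-k8s-version-276459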
	I0731 17:58:00.915542   67995 main.go:141] libmachine: (old-k8s-version-276459) DBG | domain old-k8s-version-276459 has defined MAC address 52:54:00:79:9d:96 in network mk-old-k8s-version-276459
	I0731 17:58:00.915878   67995 main.go:141] libmachine: (old-k8s-version-276459) DBG | unable to find host DHCP lease matching {name: "", mac: "52:54:00:79:9d:96", ip: ""} in network mk-old-k8s-version-276459
	I0731 17:58:00.915904   67995 main.go:141] libmachine: (old-k8s-version-276459) DBG | unable to find defined IP address of network mk-old-k8s-version-276459 interface with MAC address 52:54:00:79:9d:96
	I0731 17:58:00.916108   67995 main.go:141] libmachine: (old-k8s-version-276459) DBG | Using SSH client type: external
	I0731 17:58:00.916133   67995 main.go:141] libmachine: (old-k8s-version-276459) DBG | Using SSH private key: /home/jenkins/minikube-integration/19349-8084/.minikube/machines/old-k8s-version-276459/id_rsa (-rw-------)
	I0731 17:58:00.916163   67995 main.go:141] libmachine: (old-k8s-version-276459) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@ -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19349-8084/.minikube/machines/old-k8s-version-276459/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0731 17:58:00.916189   67995 main.go:141] libmachine: (old-k8s-version-276459) DBG | About to run SSH command:
	I0731 17:58:00.916204   67995 main.go:141] libmachine: (old-k8s-version-276459) DBG | exit 0
	I0731 17:58:00.919684   67995 main.go:141] libmachine: (old-k8s-version-276459) DBG | SSH cmd err, output: exit status 255: 
	I0731 17:58:00.919706   67995 main.go:141] libmachine: (old-k8s-version-276459) DBG | Error getting ssh command 'exit 0' : ssh command error:
	I0731 17:58:00.919715   67995 main.go:141] libmachine: (old-k8s-version-276459) DBG | command : exit 0
	I0731 17:58:00.919723   67995 main.go:141] libmachine: (old-k8s-version-276459) DBG | err     : exit status 255
	I0731 17:58:00.919731   67995 main.go:141] libmachine: (old-k8s-version-276459) DBG | output  : 
	I0731 17:58:03.920241   67995 main.go:141] libmachine: (old-k8s-version-276459) DBG | Getting to WaitForSSH function...
	I0731 17:58:03.923291   67995 main.go:141] libmachine: (old-k8s-version-276459) DBG | domain old-k8s-version-276459 has defined MAC address 52:54:00:79:9d:96 in network mk-old-k8s-version-276459
	I0731 17:58:03.923884   67995 main.go:141] libmachine: (old-k8s-version-276459) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:79:9d:96", ip: ""} in network mk-old-k8s-version-276459: {Iface:virbr3 ExpiryTime:2024-07-31 18:57:55 +0000 UTC Type:0 Mac:52:54:00:79:9d:96 Iaid: IPaddr:192.168.39.26 Prefix:24 Hostname:old-k8s-version-276459 Clientid:01:52:54:00:79:9d:96}
	I0731 17:58:03.923923   67995 main.go:141] libmachine: (old-k8s-version-276459) DBG | domain old-k8s-version-276459 has defined IP address 192.168.39.26 and MAC address 52:54:00:79:9d:96 in network mk-old-k8s-version-276459
	I0731 17:58:03.924092   67995 main.go:141] libmachine: (old-k8s-version-276459) DBG | Using SSH client type: external
	I0731 17:58:03.924121   67995 main.go:141] libmachine: (old-k8s-version-276459) DBG | Using SSH private key: /home/jenkins/minikube-integration/19349-8084/.minikube/machines/old-k8s-version-276459/id_rsa (-rw-------)
	I0731 17:58:03.924170   67995 main.go:141] libmachine: (old-k8s-version-276459) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.26 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19349-8084/.minikube/machines/old-k8s-version-276459/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0731 17:58:03.924183   67995 main.go:141] libmachine: (old-k8s-version-276459) DBG | About to run SSH command:
	I0731 17:58:03.924223   67995 main.go:141] libmachine: (old-k8s-version-276459) DBG | exit 0
	I0731 17:58:04.055523   67995 main.go:141] libmachine: (old-k8s-version-276459) DBG | SSH cmd err, output: <nil>: 
	I0731 17:58:04.055797   67995 main.go:141] libmachine: (old-k8s-version-276459) KVM machine creation complete!
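The wait loop succeeds once a plain `exit 0` over SSH returns status 0; the first attempt above failed with status 255 because the guest was not yet listening. A sketch of the equivalent probe from the host, reusing the key, user and address shown in the log:

    # Returns 0 once the guest accepts the key and runs the command
    ssh -o ConnectTimeout=10 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null \
        -i /home/jenkins/minikube-integration/19349-8084/.minikube/machines/old-k8s-version-276459/id_rsa \
        docker@192.168.39.26 'exit 0' && echo "ssh ready"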
	I0731 17:58:04.056177   67995 main.go:141] libmachine: (old-k8s-version-276459) Calling .GetConfigRaw
	I0731 17:58:04.056816   67995 main.go:141] libmachine: (old-k8s-version-276459) Calling .DriverName
	I0731 17:58:04.057030   67995 main.go:141] libmachine: (old-k8s-version-276459) Calling .DriverName
	I0731 17:58:04.057247   67995 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0731 17:58:04.057274   67995 main.go:141] libmachine: (old-k8s-version-276459) Calling .GetState
	I0731 17:58:04.058921   67995 main.go:141] libmachine: Detecting operating system of created instance...
	I0731 17:58:04.058934   67995 main.go:141] libmachine: Waiting for SSH to be available...
	I0731 17:58:04.058939   67995 main.go:141] libmachine: Getting to WaitForSSH function...
	I0731 17:58:04.058946   67995 main.go:141] libmachine: (old-k8s-version-276459) Calling .GetSSHHostname
	I0731 17:58:04.061789   67995 main.go:141] libmachine: (old-k8s-version-276459) DBG | domain old-k8s-version-276459 has defined MAC address 52:54:00:79:9d:96 in network mk-old-k8s-version-276459
	I0731 17:58:04.062098   67995 main.go:141] libmachine: (old-k8s-version-276459) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:79:9d:96", ip: ""} in network mk-old-k8s-version-276459: {Iface:virbr3 ExpiryTime:2024-07-31 18:57:55 +0000 UTC Type:0 Mac:52:54:00:79:9d:96 Iaid: IPaddr:192.168.39.26 Prefix:24 Hostname:old-k8s-version-276459 Clientid:01:52:54:00:79:9d:96}
	I0731 17:58:04.062116   67995 main.go:141] libmachine: (old-k8s-version-276459) DBG | domain old-k8s-version-276459 has defined IP address 192.168.39.26 and MAC address 52:54:00:79:9d:96 in network mk-old-k8s-version-276459
	I0731 17:58:04.062320   67995 main.go:141] libmachine: (old-k8s-version-276459) Calling .GetSSHPort
	I0731 17:58:04.062506   67995 main.go:141] libmachine: (old-k8s-version-276459) Calling .GetSSHKeyPath
	I0731 17:58:04.062649   67995 main.go:141] libmachine: (old-k8s-version-276459) Calling .GetSSHKeyPath
	I0731 17:58:04.062770   67995 main.go:141] libmachine: (old-k8s-version-276459) Calling .GetSSHUsername
	I0731 17:58:04.062972   67995 main.go:141] libmachine: Using SSH client type: native
	I0731 17:58:04.063226   67995 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.26 22 <nil> <nil>}
	I0731 17:58:04.063243   67995 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0731 17:58:04.179036   67995 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0731 17:58:04.179060   67995 main.go:141] libmachine: Detecting the provisioner...
	I0731 17:58:04.179071   67995 main.go:141] libmachine: (old-k8s-version-276459) Calling .GetSSHHostname
	I0731 17:58:04.182228   67995 main.go:141] libmachine: (old-k8s-version-276459) DBG | domain old-k8s-version-276459 has defined MAC address 52:54:00:79:9d:96 in network mk-old-k8s-version-276459
	I0731 17:58:04.182627   67995 main.go:141] libmachine: (old-k8s-version-276459) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:79:9d:96", ip: ""} in network mk-old-k8s-version-276459: {Iface:virbr3 ExpiryTime:2024-07-31 18:57:55 +0000 UTC Type:0 Mac:52:54:00:79:9d:96 Iaid: IPaddr:192.168.39.26 Prefix:24 Hostname:old-k8s-version-276459 Clientid:01:52:54:00:79:9d:96}
	I0731 17:58:04.182665   67995 main.go:141] libmachine: (old-k8s-version-276459) DBG | domain old-k8s-version-276459 has defined IP address 192.168.39.26 and MAC address 52:54:00:79:9d:96 in network mk-old-k8s-version-276459
	I0731 17:58:04.182767   67995 main.go:141] libmachine: (old-k8s-version-276459) Calling .GetSSHPort
	I0731 17:58:04.182944   67995 main.go:141] libmachine: (old-k8s-version-276459) Calling .GetSSHKeyPath
	I0731 17:58:04.183094   67995 main.go:141] libmachine: (old-k8s-version-276459) Calling .GetSSHKeyPath
	I0731 17:58:04.183256   67995 main.go:141] libmachine: (old-k8s-version-276459) Calling .GetSSHUsername
	I0731 17:58:04.183416   67995 main.go:141] libmachine: Using SSH client type: native
	I0731 17:58:04.183589   67995 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.26 22 <nil> <nil>}
	I0731 17:58:04.183604   67995 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0731 17:58:04.303826   67995 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0731 17:58:04.303907   67995 main.go:141] libmachine: found compatible host: buildroot
	I0731 17:58:04.303922   67995 main.go:141] libmachine: Provisioning with buildroot...
	I0731 17:58:04.303939   67995 main.go:141] libmachine: (old-k8s-version-276459) Calling .GetMachineName
	I0731 17:58:04.304211   67995 buildroot.go:166] provisioning hostname "old-k8s-version-276459"
	I0731 17:58:04.304235   67995 main.go:141] libmachine: (old-k8s-version-276459) Calling .GetMachineName
	I0731 17:58:04.304403   67995 main.go:141] libmachine: (old-k8s-version-276459) Calling .GetSSHHostname
	I0731 17:58:04.306991   67995 main.go:141] libmachine: (old-k8s-version-276459) DBG | domain old-k8s-version-276459 has defined MAC address 52:54:00:79:9d:96 in network mk-old-k8s-version-276459
	I0731 17:58:04.307364   67995 main.go:141] libmachine: (old-k8s-version-276459) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:79:9d:96", ip: ""} in network mk-old-k8s-version-276459: {Iface:virbr3 ExpiryTime:2024-07-31 18:57:55 +0000 UTC Type:0 Mac:52:54:00:79:9d:96 Iaid: IPaddr:192.168.39.26 Prefix:24 Hostname:old-k8s-version-276459 Clientid:01:52:54:00:79:9d:96}
	I0731 17:58:04.307396   67995 main.go:141] libmachine: (old-k8s-version-276459) DBG | domain old-k8s-version-276459 has defined IP address 192.168.39.26 and MAC address 52:54:00:79:9d:96 in network mk-old-k8s-version-276459
	I0731 17:58:04.307552   67995 main.go:141] libmachine: (old-k8s-version-276459) Calling .GetSSHPort
	I0731 17:58:04.307729   67995 main.go:141] libmachine: (old-k8s-version-276459) Calling .GetSSHKeyPath
	I0731 17:58:04.307908   67995 main.go:141] libmachine: (old-k8s-version-276459) Calling .GetSSHKeyPath
	I0731 17:58:04.308055   67995 main.go:141] libmachine: (old-k8s-version-276459) Calling .GetSSHUsername
	I0731 17:58:04.308221   67995 main.go:141] libmachine: Using SSH client type: native
	I0731 17:58:04.308400   67995 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.26 22 <nil> <nil>}
	I0731 17:58:04.308411   67995 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-276459 && echo "old-k8s-version-276459" | sudo tee /etc/hostname
	I0731 17:58:04.440653   67995 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-276459
	
	I0731 17:58:04.440682   67995 main.go:141] libmachine: (old-k8s-version-276459) Calling .GetSSHHostname
	I0731 17:58:04.443793   67995 main.go:141] libmachine: (old-k8s-version-276459) DBG | domain old-k8s-version-276459 has defined MAC address 52:54:00:79:9d:96 in network mk-old-k8s-version-276459
	I0731 17:58:04.444149   67995 main.go:141] libmachine: (old-k8s-version-276459) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:79:9d:96", ip: ""} in network mk-old-k8s-version-276459: {Iface:virbr3 ExpiryTime:2024-07-31 18:57:55 +0000 UTC Type:0 Mac:52:54:00:79:9d:96 Iaid: IPaddr:192.168.39.26 Prefix:24 Hostname:old-k8s-version-276459 Clientid:01:52:54:00:79:9d:96}
	I0731 17:58:04.444178   67995 main.go:141] libmachine: (old-k8s-version-276459) DBG | domain old-k8s-version-276459 has defined IP address 192.168.39.26 and MAC address 52:54:00:79:9d:96 in network mk-old-k8s-version-276459
	I0731 17:58:04.444406   67995 main.go:141] libmachine: (old-k8s-version-276459) Calling .GetSSHPort
	I0731 17:58:04.444602   67995 main.go:141] libmachine: (old-k8s-version-276459) Calling .GetSSHKeyPath
	I0731 17:58:04.444823   67995 main.go:141] libmachine: (old-k8s-version-276459) Calling .GetSSHKeyPath
	I0731 17:58:04.444971   67995 main.go:141] libmachine: (old-k8s-version-276459) Calling .GetSSHUsername
	I0731 17:58:04.445134   67995 main.go:141] libmachine: Using SSH client type: native
	I0731 17:58:04.445340   67995 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.26 22 <nil> <nil>}
	I0731 17:58:04.445364   67995 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-276459' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-276459/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-276459' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0731 17:58:04.568874   67995 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0731 17:58:04.568910   67995 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19349-8084/.minikube CaCertPath:/home/jenkins/minikube-integration/19349-8084/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19349-8084/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19349-8084/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19349-8084/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19349-8084/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19349-8084/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19349-8084/.minikube}
	I0731 17:58:04.568935   67995 buildroot.go:174] setting up certificates
	I0731 17:58:04.568955   67995 provision.go:84] configureAuth start
	I0731 17:58:04.568972   67995 main.go:141] libmachine: (old-k8s-version-276459) Calling .GetMachineName
	I0731 17:58:04.569226   67995 main.go:141] libmachine: (old-k8s-version-276459) Calling .GetIP
	I0731 17:58:04.571687   67995 main.go:141] libmachine: (old-k8s-version-276459) DBG | domain old-k8s-version-276459 has defined MAC address 52:54:00:79:9d:96 in network mk-old-k8s-version-276459
	I0731 17:58:04.572047   67995 main.go:141] libmachine: (old-k8s-version-276459) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:79:9d:96", ip: ""} in network mk-old-k8s-version-276459: {Iface:virbr3 ExpiryTime:2024-07-31 18:57:55 +0000 UTC Type:0 Mac:52:54:00:79:9d:96 Iaid: IPaddr:192.168.39.26 Prefix:24 Hostname:old-k8s-version-276459 Clientid:01:52:54:00:79:9d:96}
	I0731 17:58:04.572073   67995 main.go:141] libmachine: (old-k8s-version-276459) DBG | domain old-k8s-version-276459 has defined IP address 192.168.39.26 and MAC address 52:54:00:79:9d:96 in network mk-old-k8s-version-276459
	I0731 17:58:04.572216   67995 main.go:141] libmachine: (old-k8s-version-276459) Calling .GetSSHHostname
	I0731 17:58:04.574518   67995 main.go:141] libmachine: (old-k8s-version-276459) DBG | domain old-k8s-version-276459 has defined MAC address 52:54:00:79:9d:96 in network mk-old-k8s-version-276459
	I0731 17:58:04.574822   67995 main.go:141] libmachine: (old-k8s-version-276459) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:79:9d:96", ip: ""} in network mk-old-k8s-version-276459: {Iface:virbr3 ExpiryTime:2024-07-31 18:57:55 +0000 UTC Type:0 Mac:52:54:00:79:9d:96 Iaid: IPaddr:192.168.39.26 Prefix:24 Hostname:old-k8s-version-276459 Clientid:01:52:54:00:79:9d:96}
	I0731 17:58:04.574863   67995 main.go:141] libmachine: (old-k8s-version-276459) DBG | domain old-k8s-version-276459 has defined IP address 192.168.39.26 and MAC address 52:54:00:79:9d:96 in network mk-old-k8s-version-276459
	I0731 17:58:04.574978   67995 provision.go:143] copyHostCerts
	I0731 17:58:04.575027   67995 exec_runner.go:144] found /home/jenkins/minikube-integration/19349-8084/.minikube/ca.pem, removing ...
	I0731 17:58:04.575036   67995 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19349-8084/.minikube/ca.pem
	I0731 17:58:04.575096   67995 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19349-8084/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19349-8084/.minikube/ca.pem (1082 bytes)
	I0731 17:58:04.575244   67995 exec_runner.go:144] found /home/jenkins/minikube-integration/19349-8084/.minikube/cert.pem, removing ...
	I0731 17:58:04.575254   67995 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19349-8084/.minikube/cert.pem
	I0731 17:58:04.575282   67995 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19349-8084/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19349-8084/.minikube/cert.pem (1123 bytes)
	I0731 17:58:04.575352   67995 exec_runner.go:144] found /home/jenkins/minikube-integration/19349-8084/.minikube/key.pem, removing ...
	I0731 17:58:04.575359   67995 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19349-8084/.minikube/key.pem
	I0731 17:58:04.575381   67995 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19349-8084/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19349-8084/.minikube/key.pem (1675 bytes)
	I0731 17:58:04.575459   67995 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19349-8084/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19349-8084/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19349-8084/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-276459 san=[127.0.0.1 192.168.39.26 localhost minikube old-k8s-version-276459]
	I0731 17:58:04.733167   67995 provision.go:177] copyRemoteCerts
	I0731 17:58:04.733212   67995 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0731 17:58:04.733245   67995 main.go:141] libmachine: (old-k8s-version-276459) Calling .GetSSHHostname
	I0731 17:58:04.736023   67995 main.go:141] libmachine: (old-k8s-version-276459) DBG | domain old-k8s-version-276459 has defined MAC address 52:54:00:79:9d:96 in network mk-old-k8s-version-276459
	I0731 17:58:04.736439   67995 main.go:141] libmachine: (old-k8s-version-276459) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:79:9d:96", ip: ""} in network mk-old-k8s-version-276459: {Iface:virbr3 ExpiryTime:2024-07-31 18:57:55 +0000 UTC Type:0 Mac:52:54:00:79:9d:96 Iaid: IPaddr:192.168.39.26 Prefix:24 Hostname:old-k8s-version-276459 Clientid:01:52:54:00:79:9d:96}
	I0731 17:58:04.736475   67995 main.go:141] libmachine: (old-k8s-version-276459) DBG | domain old-k8s-version-276459 has defined IP address 192.168.39.26 and MAC address 52:54:00:79:9d:96 in network mk-old-k8s-version-276459
	I0731 17:58:04.736652   67995 main.go:141] libmachine: (old-k8s-version-276459) Calling .GetSSHPort
	I0731 17:58:04.736819   67995 main.go:141] libmachine: (old-k8s-version-276459) Calling .GetSSHKeyPath
	I0731 17:58:04.736952   67995 main.go:141] libmachine: (old-k8s-version-276459) Calling .GetSSHUsername
	I0731 17:58:04.737044   67995 sshutil.go:53] new ssh client: &{IP:192.168.39.26 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19349-8084/.minikube/machines/old-k8s-version-276459/id_rsa Username:docker}
	I0731 17:58:04.825053   67995 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19349-8084/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0731 17:58:04.848356   67995 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19349-8084/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0731 17:58:04.873600   67995 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19349-8084/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0731 17:58:04.898773   67995 provision.go:87] duration metric: took 329.804543ms to configureAuth
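configureAuth generates a server certificate with the SAN list shown above and copies it to /etc/docker on the guest. A quick verification sketch, assuming a reasonably recent openssl inside the guest (the -ext flag needs OpenSSL 1.1.1 or newer):

    # Subject and SANs of the provisioned server certificate; the SANs should include
    # 127.0.0.1, 192.168.39.26, localhost, minikube and old-k8s-version-276459
    sudo openssl x509 -in /etc/docker/server.pem -noout -subject -ext subjectAltName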
	I0731 17:58:04.898799   67995 buildroot.go:189] setting minikube options for container-runtime
	I0731 17:58:04.898978   67995 config.go:182] Loaded profile config "old-k8s-version-276459": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0731 17:58:04.899060   67995 main.go:141] libmachine: (old-k8s-version-276459) Calling .GetSSHHostname
	I0731 17:58:04.902014   67995 main.go:141] libmachine: (old-k8s-version-276459) DBG | domain old-k8s-version-276459 has defined MAC address 52:54:00:79:9d:96 in network mk-old-k8s-version-276459
	I0731 17:58:04.902448   67995 main.go:141] libmachine: (old-k8s-version-276459) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:79:9d:96", ip: ""} in network mk-old-k8s-version-276459: {Iface:virbr3 ExpiryTime:2024-07-31 18:57:55 +0000 UTC Type:0 Mac:52:54:00:79:9d:96 Iaid: IPaddr:192.168.39.26 Prefix:24 Hostname:old-k8s-version-276459 Clientid:01:52:54:00:79:9d:96}
	I0731 17:58:04.902475   67995 main.go:141] libmachine: (old-k8s-version-276459) DBG | domain old-k8s-version-276459 has defined IP address 192.168.39.26 and MAC address 52:54:00:79:9d:96 in network mk-old-k8s-version-276459
	I0731 17:58:04.902648   67995 main.go:141] libmachine: (old-k8s-version-276459) Calling .GetSSHPort
	I0731 17:58:04.902865   67995 main.go:141] libmachine: (old-k8s-version-276459) Calling .GetSSHKeyPath
	I0731 17:58:04.903076   67995 main.go:141] libmachine: (old-k8s-version-276459) Calling .GetSSHKeyPath
	I0731 17:58:04.903269   67995 main.go:141] libmachine: (old-k8s-version-276459) Calling .GetSSHUsername
	I0731 17:58:04.903453   67995 main.go:141] libmachine: Using SSH client type: native
	I0731 17:58:04.903657   67995 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.26 22 <nil> <nil>}
	I0731 17:58:04.903673   67995 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0731 17:58:05.201873   67995 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0731 17:58:05.201898   67995 main.go:141] libmachine: Checking connection to Docker...
	I0731 17:58:05.201910   67995 main.go:141] libmachine: (old-k8s-version-276459) Calling .GetURL
	I0731 17:58:05.203512   67995 main.go:141] libmachine: (old-k8s-version-276459) DBG | Using libvirt version 6000000
	I0731 17:58:05.206035   67995 main.go:141] libmachine: (old-k8s-version-276459) DBG | domain old-k8s-version-276459 has defined MAC address 52:54:00:79:9d:96 in network mk-old-k8s-version-276459
	I0731 17:58:05.206506   67995 main.go:141] libmachine: (old-k8s-version-276459) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:79:9d:96", ip: ""} in network mk-old-k8s-version-276459: {Iface:virbr3 ExpiryTime:2024-07-31 18:57:55 +0000 UTC Type:0 Mac:52:54:00:79:9d:96 Iaid: IPaddr:192.168.39.26 Prefix:24 Hostname:old-k8s-version-276459 Clientid:01:52:54:00:79:9d:96}
	I0731 17:58:05.206536   67995 main.go:141] libmachine: (old-k8s-version-276459) DBG | domain old-k8s-version-276459 has defined IP address 192.168.39.26 and MAC address 52:54:00:79:9d:96 in network mk-old-k8s-version-276459
	I0731 17:58:05.206681   67995 main.go:141] libmachine: Docker is up and running!
	I0731 17:58:05.206698   67995 main.go:141] libmachine: Reticulating splines...
	I0731 17:58:05.206706   67995 client.go:171] duration metric: took 25.997201308s to LocalClient.Create
	I0731 17:58:05.206730   67995 start.go:167] duration metric: took 25.997263222s to libmachine.API.Create "old-k8s-version-276459"
	I0731 17:58:05.206743   67995 start.go:293] postStartSetup for "old-k8s-version-276459" (driver="kvm2")
	I0731 17:58:05.206754   67995 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0731 17:58:05.206777   67995 main.go:141] libmachine: (old-k8s-version-276459) Calling .DriverName
	I0731 17:58:05.207071   67995 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0731 17:58:05.207096   67995 main.go:141] libmachine: (old-k8s-version-276459) Calling .GetSSHHostname
	I0731 17:58:05.209592   67995 main.go:141] libmachine: (old-k8s-version-276459) DBG | domain old-k8s-version-276459 has defined MAC address 52:54:00:79:9d:96 in network mk-old-k8s-version-276459
	I0731 17:58:05.209960   67995 main.go:141] libmachine: (old-k8s-version-276459) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:79:9d:96", ip: ""} in network mk-old-k8s-version-276459: {Iface:virbr3 ExpiryTime:2024-07-31 18:57:55 +0000 UTC Type:0 Mac:52:54:00:79:9d:96 Iaid: IPaddr:192.168.39.26 Prefix:24 Hostname:old-k8s-version-276459 Clientid:01:52:54:00:79:9d:96}
	I0731 17:58:05.209987   67995 main.go:141] libmachine: (old-k8s-version-276459) DBG | domain old-k8s-version-276459 has defined IP address 192.168.39.26 and MAC address 52:54:00:79:9d:96 in network mk-old-k8s-version-276459
	I0731 17:58:05.210203   67995 main.go:141] libmachine: (old-k8s-version-276459) Calling .GetSSHPort
	I0731 17:58:05.210401   67995 main.go:141] libmachine: (old-k8s-version-276459) Calling .GetSSHKeyPath
	I0731 17:58:05.210574   67995 main.go:141] libmachine: (old-k8s-version-276459) Calling .GetSSHUsername
	I0731 17:58:05.210729   67995 sshutil.go:53] new ssh client: &{IP:192.168.39.26 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19349-8084/.minikube/machines/old-k8s-version-276459/id_rsa Username:docker}
	I0731 17:58:05.302690   67995 ssh_runner.go:195] Run: cat /etc/os-release
	I0731 17:58:05.306881   67995 info.go:137] Remote host: Buildroot 2023.02.9
	I0731 17:58:05.306906   67995 filesync.go:126] Scanning /home/jenkins/minikube-integration/19349-8084/.minikube/addons for local assets ...
	I0731 17:58:05.306964   67995 filesync.go:126] Scanning /home/jenkins/minikube-integration/19349-8084/.minikube/files for local assets ...
	I0731 17:58:05.307061   67995 filesync.go:149] local asset: /home/jenkins/minikube-integration/19349-8084/.minikube/files/etc/ssl/certs/152592.pem -> 152592.pem in /etc/ssl/certs
	I0731 17:58:05.307201   67995 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0731 17:58:05.316574   67995 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19349-8084/.minikube/files/etc/ssl/certs/152592.pem --> /etc/ssl/certs/152592.pem (1708 bytes)
	I0731 17:58:05.340706   67995 start.go:296] duration metric: took 133.953084ms for postStartSetup
	I0731 17:58:05.340748   67995 main.go:141] libmachine: (old-k8s-version-276459) Calling .GetConfigRaw
	I0731 17:58:05.341418   67995 main.go:141] libmachine: (old-k8s-version-276459) Calling .GetIP
	I0731 17:58:05.343860   67995 main.go:141] libmachine: (old-k8s-version-276459) DBG | domain old-k8s-version-276459 has defined MAC address 52:54:00:79:9d:96 in network mk-old-k8s-version-276459
	I0731 17:58:05.344184   67995 main.go:141] libmachine: (old-k8s-version-276459) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:79:9d:96", ip: ""} in network mk-old-k8s-version-276459: {Iface:virbr3 ExpiryTime:2024-07-31 18:57:55 +0000 UTC Type:0 Mac:52:54:00:79:9d:96 Iaid: IPaddr:192.168.39.26 Prefix:24 Hostname:old-k8s-version-276459 Clientid:01:52:54:00:79:9d:96}
	I0731 17:58:05.344215   67995 main.go:141] libmachine: (old-k8s-version-276459) DBG | domain old-k8s-version-276459 has defined IP address 192.168.39.26 and MAC address 52:54:00:79:9d:96 in network mk-old-k8s-version-276459
	I0731 17:58:05.344380   67995 profile.go:143] Saving config to /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/old-k8s-version-276459/config.json ...
	I0731 17:58:05.344562   67995 start.go:128] duration metric: took 26.154018034s to createHost
	I0731 17:58:05.344581   67995 main.go:141] libmachine: (old-k8s-version-276459) Calling .GetSSHHostname
	I0731 17:58:05.346631   67995 main.go:141] libmachine: (old-k8s-version-276459) DBG | domain old-k8s-version-276459 has defined MAC address 52:54:00:79:9d:96 in network mk-old-k8s-version-276459
	I0731 17:58:05.346921   67995 main.go:141] libmachine: (old-k8s-version-276459) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:79:9d:96", ip: ""} in network mk-old-k8s-version-276459: {Iface:virbr3 ExpiryTime:2024-07-31 18:57:55 +0000 UTC Type:0 Mac:52:54:00:79:9d:96 Iaid: IPaddr:192.168.39.26 Prefix:24 Hostname:old-k8s-version-276459 Clientid:01:52:54:00:79:9d:96}
	I0731 17:58:05.346948   67995 main.go:141] libmachine: (old-k8s-version-276459) DBG | domain old-k8s-version-276459 has defined IP address 192.168.39.26 and MAC address 52:54:00:79:9d:96 in network mk-old-k8s-version-276459
	I0731 17:58:05.347059   67995 main.go:141] libmachine: (old-k8s-version-276459) Calling .GetSSHPort
	I0731 17:58:05.347235   67995 main.go:141] libmachine: (old-k8s-version-276459) Calling .GetSSHKeyPath
	I0731 17:58:05.347402   67995 main.go:141] libmachine: (old-k8s-version-276459) Calling .GetSSHKeyPath
	I0731 17:58:05.347548   67995 main.go:141] libmachine: (old-k8s-version-276459) Calling .GetSSHUsername
	I0731 17:58:05.347701   67995 main.go:141] libmachine: Using SSH client type: native
	I0731 17:58:05.347895   67995 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.26 22 <nil> <nil>}
	I0731 17:58:05.347907   67995 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0731 17:58:05.460751   67995 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722448685.436586704
	
	I0731 17:58:05.460780   67995 fix.go:216] guest clock: 1722448685.436586704
	I0731 17:58:05.460791   67995 fix.go:229] Guest: 2024-07-31 17:58:05.436586704 +0000 UTC Remote: 2024-07-31 17:58:05.344571634 +0000 UTC m=+26.267860021 (delta=92.01507ms)
	I0731 17:58:05.460815   67995 fix.go:200] guest clock delta is within tolerance: 92.01507ms
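The delta reported above comes from reading the guest clock over SSH with `date +%s.%N` and comparing it with the host-side timestamp of the same moment. A minimal way to reproduce the check by hand, assuming the same key as above:

    # Guest and host clocks; their difference is the skew minikube reports as "delta"
    ssh -i /home/jenkins/minikube-integration/19349-8084/.minikube/machines/old-k8s-version-276459/id_rsa \
        docker@192.168.39.26 'date +%s.%N'
    date +%s.%N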
	I0731 17:58:05.460823   67995 start.go:83] releasing machines lock for "old-k8s-version-276459", held for 26.270384134s
	I0731 17:58:05.460854   67995 main.go:141] libmachine: (old-k8s-version-276459) Calling .DriverName
	I0731 17:58:05.461115   67995 main.go:141] libmachine: (old-k8s-version-276459) Calling .GetIP
	I0731 17:58:05.464094   67995 main.go:141] libmachine: (old-k8s-version-276459) DBG | domain old-k8s-version-276459 has defined MAC address 52:54:00:79:9d:96 in network mk-old-k8s-version-276459
	I0731 17:58:05.464511   67995 main.go:141] libmachine: (old-k8s-version-276459) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:79:9d:96", ip: ""} in network mk-old-k8s-version-276459: {Iface:virbr3 ExpiryTime:2024-07-31 18:57:55 +0000 UTC Type:0 Mac:52:54:00:79:9d:96 Iaid: IPaddr:192.168.39.26 Prefix:24 Hostname:old-k8s-version-276459 Clientid:01:52:54:00:79:9d:96}
	I0731 17:58:05.464560   67995 main.go:141] libmachine: (old-k8s-version-276459) DBG | domain old-k8s-version-276459 has defined IP address 192.168.39.26 and MAC address 52:54:00:79:9d:96 in network mk-old-k8s-version-276459
	I0731 17:58:05.464767   67995 main.go:141] libmachine: (old-k8s-version-276459) Calling .DriverName
	I0731 17:58:05.465324   67995 main.go:141] libmachine: (old-k8s-version-276459) Calling .DriverName
	I0731 17:58:05.465795   67995 main.go:141] libmachine: (old-k8s-version-276459) Calling .DriverName
	I0731 17:58:05.465896   67995 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0731 17:58:05.465940   67995 main.go:141] libmachine: (old-k8s-version-276459) Calling .GetSSHHostname
	I0731 17:58:05.466025   67995 ssh_runner.go:195] Run: cat /version.json
	I0731 17:58:05.466051   67995 main.go:141] libmachine: (old-k8s-version-276459) Calling .GetSSHHostname
	I0731 17:58:05.469082   67995 main.go:141] libmachine: (old-k8s-version-276459) DBG | domain old-k8s-version-276459 has defined MAC address 52:54:00:79:9d:96 in network mk-old-k8s-version-276459
	I0731 17:58:05.469374   67995 main.go:141] libmachine: (old-k8s-version-276459) DBG | domain old-k8s-version-276459 has defined MAC address 52:54:00:79:9d:96 in network mk-old-k8s-version-276459
	I0731 17:58:05.469503   67995 main.go:141] libmachine: (old-k8s-version-276459) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:79:9d:96", ip: ""} in network mk-old-k8s-version-276459: {Iface:virbr3 ExpiryTime:2024-07-31 18:57:55 +0000 UTC Type:0 Mac:52:54:00:79:9d:96 Iaid: IPaddr:192.168.39.26 Prefix:24 Hostname:old-k8s-version-276459 Clientid:01:52:54:00:79:9d:96}
	I0731 17:58:05.469531   67995 main.go:141] libmachine: (old-k8s-version-276459) DBG | domain old-k8s-version-276459 has defined IP address 192.168.39.26 and MAC address 52:54:00:79:9d:96 in network mk-old-k8s-version-276459
	I0731 17:58:05.469657   67995 main.go:141] libmachine: (old-k8s-version-276459) Calling .GetSSHPort
	I0731 17:58:05.469809   67995 main.go:141] libmachine: (old-k8s-version-276459) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:79:9d:96", ip: ""} in network mk-old-k8s-version-276459: {Iface:virbr3 ExpiryTime:2024-07-31 18:57:55 +0000 UTC Type:0 Mac:52:54:00:79:9d:96 Iaid: IPaddr:192.168.39.26 Prefix:24 Hostname:old-k8s-version-276459 Clientid:01:52:54:00:79:9d:96}
	I0731 17:58:05.469830   67995 main.go:141] libmachine: (old-k8s-version-276459) DBG | domain old-k8s-version-276459 has defined IP address 192.168.39.26 and MAC address 52:54:00:79:9d:96 in network mk-old-k8s-version-276459
	I0731 17:58:05.469860   67995 main.go:141] libmachine: (old-k8s-version-276459) Calling .GetSSHKeyPath
	I0731 17:58:05.469997   67995 main.go:141] libmachine: (old-k8s-version-276459) Calling .GetSSHUsername
	I0731 17:58:05.470017   67995 main.go:141] libmachine: (old-k8s-version-276459) Calling .GetSSHPort
	I0731 17:58:05.470160   67995 main.go:141] libmachine: (old-k8s-version-276459) Calling .GetSSHKeyPath
	I0731 17:58:05.470154   67995 sshutil.go:53] new ssh client: &{IP:192.168.39.26 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19349-8084/.minikube/machines/old-k8s-version-276459/id_rsa Username:docker}
	I0731 17:58:05.470317   67995 main.go:141] libmachine: (old-k8s-version-276459) Calling .GetSSHUsername
	I0731 17:58:05.470446   67995 sshutil.go:53] new ssh client: &{IP:192.168.39.26 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19349-8084/.minikube/machines/old-k8s-version-276459/id_rsa Username:docker}
	I0731 17:58:05.595846   67995 ssh_runner.go:195] Run: systemctl --version
	I0731 17:58:05.603187   67995 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0731 17:58:05.763357   67995 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0731 17:58:05.769601   67995 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0731 17:58:05.769677   67995 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0731 17:58:05.786689   67995 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0731 17:58:05.786709   67995 start.go:495] detecting cgroup driver to use...
	I0731 17:58:05.786761   67995 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0731 17:58:05.808633   67995 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0731 17:58:05.824857   67995 docker.go:217] disabling cri-docker service (if available) ...
	I0731 17:58:05.824929   67995 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0731 17:58:05.838902   67995 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0731 17:58:05.854831   67995 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0731 17:58:05.989233   67995 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0731 17:58:06.154472   67995 docker.go:233] disabling docker service ...
	I0731 17:58:06.154555   67995 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0731 17:58:06.170168   67995 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0731 17:58:06.185206   67995 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0731 17:58:06.346174   67995 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0731 17:58:06.480008   67995 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
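The block above disables the cri-docker and docker units so CRI-O is the only runtime left on the node. Condensed into a standalone sketch, the sequence the log records is roughly:

    # Stop, disable and mask the Docker-based runtimes, then confirm docker is inactive
    sudo systemctl stop -f cri-docker.socket cri-docker.service
    sudo systemctl disable cri-docker.socket
    sudo systemctl mask cri-docker.service
    sudo systemctl stop -f docker.socket docker.service
    sudo systemctl disable docker.socket
    sudo systemctl mask docker.service
    sudo systemctl is-active docker || echo "docker inactive"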
	I0731 17:58:06.493937   67995 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0731 17:58:06.512195   67995 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0731 17:58:06.512246   67995 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 17:58:06.524490   67995 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0731 17:58:06.524575   67995 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 17:58:06.537686   67995 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 17:58:06.547877   67995 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 17:58:06.559959   67995 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0731 17:58:06.573367   67995 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0731 17:58:06.583080   67995 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0731 17:58:06.583160   67995 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0731 17:58:06.599516   67995 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0731 17:58:06.609470   67995 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0731 17:58:06.744474   67995 ssh_runner.go:195] Run: sudo systemctl restart crio
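After the sed edits to /etc/crio/crio.conf.d/02-crio.conf and the restart, the effective settings can be spot-checked on the guest. A small verification sketch, assuming the restarted runtime answers on the default socket:

    # The drop-in should now pin the pause image and the cgroupfs driver
    grep -E 'pause_image|cgroup_manager|conmon_cgroup' /etc/crio/crio.conf.d/02-crio.conf
    # Confirm the restarted runtime answers on /var/run/crio/crio.sock
    sudo /usr/bin/crictl version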
	I0731 17:58:06.924488   67995 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0731 17:58:06.924582   67995 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0731 17:58:06.929250   67995 start.go:563] Will wait 60s for crictl version
	I0731 17:58:06.929306   67995 ssh_runner.go:195] Run: which crictl
	I0731 17:58:06.932813   67995 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0731 17:58:06.972195   67995 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0731 17:58:06.972273   67995 ssh_runner.go:195] Run: crio --version
	I0731 17:58:07.004263   67995 ssh_runner.go:195] Run: crio --version
	I0731 17:58:07.042486   67995 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0731 17:58:07.043911   67995 main.go:141] libmachine: (old-k8s-version-276459) Calling .GetIP
	I0731 17:58:07.047132   67995 main.go:141] libmachine: (old-k8s-version-276459) DBG | domain old-k8s-version-276459 has defined MAC address 52:54:00:79:9d:96 in network mk-old-k8s-version-276459
	I0731 17:58:07.047688   67995 main.go:141] libmachine: (old-k8s-version-276459) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:79:9d:96", ip: ""} in network mk-old-k8s-version-276459: {Iface:virbr3 ExpiryTime:2024-07-31 18:57:55 +0000 UTC Type:0 Mac:52:54:00:79:9d:96 Iaid: IPaddr:192.168.39.26 Prefix:24 Hostname:old-k8s-version-276459 Clientid:01:52:54:00:79:9d:96}
	I0731 17:58:07.047720   67995 main.go:141] libmachine: (old-k8s-version-276459) DBG | domain old-k8s-version-276459 has defined IP address 192.168.39.26 and MAC address 52:54:00:79:9d:96 in network mk-old-k8s-version-276459
	I0731 17:58:07.047966   67995 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0731 17:58:07.052389   67995 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0731 17:58:07.066189   67995 kubeadm.go:883] updating cluster {Name:old-k8s-version-276459 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-276459 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.26 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0731 17:58:07.066315   67995 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0731 17:58:07.066365   67995 ssh_runner.go:195] Run: sudo crictl images --output json
	I0731 17:58:07.101795   67995 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0731 17:58:07.101865   67995 ssh_runner.go:195] Run: which lz4
	I0731 17:58:07.106007   67995 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0731 17:58:07.114368   67995 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0731 17:58:07.114400   67995 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19349-8084/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0731 17:58:08.713134   67995 crio.go:462] duration metric: took 1.607150286s to copy over tarball
	I0731 17:58:08.713209   67995 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0731 17:58:11.620940   67995 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.907693181s)
	I0731 17:58:11.620972   67995 crio.go:469] duration metric: took 2.907807763s to extract the tarball
	I0731 17:58:11.620981   67995 ssh_runner.go:146] rm: /preloaded.tar.lz4
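The preload step copies the ~473 MB tarball to /preloaded.tar.lz4 and unpacks it into /var before deleting it. The same check-then-extract sequence, as a standalone sketch with the file name and flags taken from the lines above:

    # Skip the copy if a tarball is already on the guest (the stat here failed, so it was copied)
    stat -c "%s %y" /preloaded.tar.lz4
    # Unpack container images and metadata into /var, preserving xattrs, then clean up
    sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
    sudo rm -f /preloaded.tar.lz4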
	I0731 17:58:11.665324   67995 ssh_runner.go:195] Run: sudo crictl images --output json
	I0731 17:58:11.716977   67995 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0731 17:58:11.717006   67995 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0731 17:58:11.717080   67995 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0731 17:58:11.717392   67995 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0731 17:58:11.717411   67995 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0731 17:58:11.717422   67995 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0731 17:58:11.717589   67995 image.go:134] retrieving image: registry.k8s.io/coredns:1.7.0
	I0731 17:58:11.717617   67995 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0731 17:58:11.717625   67995 image.go:134] retrieving image: registry.k8s.io/pause:3.2
	I0731 17:58:11.717302   67995 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0731 17:58:11.718840   67995 image.go:177] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0731 17:58:11.718848   67995 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0731 17:58:11.718875   67995 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0731 17:58:11.718950   67995 image.go:177] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0731 17:58:11.718956   67995 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0731 17:58:11.719228   67995 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0731 17:58:11.719267   67995 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0731 17:58:11.719542   67995 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0731 17:58:11.968891   67995 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0731 17:58:11.971597   67995 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0731 17:58:11.975177   67995 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0731 17:58:11.975409   67995 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0731 17:58:11.984322   67995 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0731 17:58:11.991915   67995 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0731 17:58:12.061059   67995 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0731 17:58:12.061105   67995 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0731 17:58:12.061149   67995 ssh_runner.go:195] Run: which crictl
	I0731 17:58:12.125910   67995 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0731 17:58:12.125955   67995 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0731 17:58:12.126000   67995 ssh_runner.go:195] Run: which crictl
	I0731 17:58:12.135373   67995 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0731 17:58:12.135414   67995 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0731 17:58:12.135429   67995 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0731 17:58:12.135446   67995 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0731 17:58:12.135483   67995 ssh_runner.go:195] Run: which crictl
	I0731 17:58:12.135489   67995 ssh_runner.go:195] Run: which crictl
	I0731 17:58:12.135534   67995 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0731 17:58:12.135552   67995 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0731 17:58:12.135579   67995 ssh_runner.go:195] Run: which crictl
	I0731 17:58:12.148884   67995 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0731 17:58:12.148923   67995 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0731 17:58:12.148970   67995 ssh_runner.go:195] Run: which crictl
	I0731 17:58:12.148995   67995 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0731 17:58:12.149058   67995 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0731 17:58:12.150366   67995 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0731 17:58:12.150401   67995 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0731 17:58:12.155415   67995 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0731 17:58:12.160264   67995 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0731 17:58:12.249447   67995 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19349-8084/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0731 17:58:12.249548   67995 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19349-8084/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0731 17:58:12.279159   67995 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19349-8084/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0731 17:58:12.280321   67995 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19349-8084/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0731 17:58:12.280335   67995 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19349-8084/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0731 17:58:12.280399   67995 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19349-8084/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0731 17:58:12.288072   67995 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0731 17:58:12.330492   67995 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0731 17:58:12.330534   67995 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0731 17:58:12.330583   67995 ssh_runner.go:195] Run: which crictl
	I0731 17:58:12.335475   67995 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0731 17:58:12.371431   67995 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19349-8084/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0731 17:58:12.584137   67995 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0731 17:58:12.735370   67995 cache_images.go:92] duration metric: took 1.018348553s to LoadCachedImages
	W0731 17:58:12.735444   67995 out.go:239] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19349-8084/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0: no such file or directory
	X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19349-8084/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0: no such file or directory
	I0731 17:58:12.735458   67995 kubeadm.go:934] updating node { 192.168.39.26 8443 v1.20.0 crio true true} ...
	I0731 17:58:12.735596   67995 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-276459 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.39.26
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-276459 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0731 17:58:12.735681   67995 ssh_runner.go:195] Run: crio config
	I0731 17:58:12.781021   67995 cni.go:84] Creating CNI manager for ""
	I0731 17:58:12.781042   67995 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0731 17:58:12.781051   67995 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0731 17:58:12.781067   67995 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.26 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-276459 NodeName:old-k8s-version-276459 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.26"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.26 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0731 17:58:12.781227   67995 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.26
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-276459"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.26
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.26"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0731 17:58:12.781301   67995 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0731 17:58:12.792170   67995 binaries.go:44] Found k8s binaries, skipping transfer
	I0731 17:58:12.792246   67995 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0731 17:58:12.801785   67995 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (429 bytes)
	I0731 17:58:12.817793   67995 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0731 17:58:12.834303   67995 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2120 bytes)
	I0731 17:58:12.851098   67995 ssh_runner.go:195] Run: grep 192.168.39.26	control-plane.minikube.internal$ /etc/hosts
	I0731 17:58:12.855226   67995 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.26	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0731 17:58:12.866904   67995 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0731 17:58:13.013776   67995 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0731 17:58:13.037494   67995 certs.go:68] Setting up /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/old-k8s-version-276459 for IP: 192.168.39.26
	I0731 17:58:13.037519   67995 certs.go:194] generating shared ca certs ...
	I0731 17:58:13.037534   67995 certs.go:226] acquiring lock for ca certs: {Name:mk49eed439085f9c30746706460ce213570d1997 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 17:58:13.037755   67995 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19349-8084/.minikube/ca.key
	I0731 17:58:13.037824   67995 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19349-8084/.minikube/proxy-client-ca.key
	I0731 17:58:13.037841   67995 certs.go:256] generating profile certs ...
	I0731 17:58:13.037915   67995 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/old-k8s-version-276459/client.key
	I0731 17:58:13.037937   67995 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/old-k8s-version-276459/client.crt with IP's: []
	I0731 17:58:13.240802   67995 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/old-k8s-version-276459/client.crt ...
	I0731 17:58:13.240835   67995 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/old-k8s-version-276459/client.crt: {Name:mk87dcfcabfd5e9b0b22ccf8b01d833e6856862c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 17:58:13.241032   67995 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/old-k8s-version-276459/client.key ...
	I0731 17:58:13.241053   67995 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/old-k8s-version-276459/client.key: {Name:mk0b993f99482a1ab4aace1b405b47451a37545e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 17:58:13.241169   67995 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/old-k8s-version-276459/apiserver.key.7c620cac
	I0731 17:58:13.241188   67995 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/old-k8s-version-276459/apiserver.crt.7c620cac with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.26]
	I0731 17:58:13.456843   67995 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/old-k8s-version-276459/apiserver.crt.7c620cac ...
	I0731 17:58:13.456879   67995 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/old-k8s-version-276459/apiserver.crt.7c620cac: {Name:mk27907551fc7a941834b628d48348a9b93118f9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 17:58:13.457097   67995 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/old-k8s-version-276459/apiserver.key.7c620cac ...
	I0731 17:58:13.457121   67995 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/old-k8s-version-276459/apiserver.key.7c620cac: {Name:mk55bf645aa942510332fbb3e38c723520d91bb5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 17:58:13.457224   67995 certs.go:381] copying /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/old-k8s-version-276459/apiserver.crt.7c620cac -> /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/old-k8s-version-276459/apiserver.crt
	I0731 17:58:13.457330   67995 certs.go:385] copying /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/old-k8s-version-276459/apiserver.key.7c620cac -> /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/old-k8s-version-276459/apiserver.key
	I0731 17:58:13.457404   67995 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/old-k8s-version-276459/proxy-client.key
	I0731 17:58:13.457422   67995 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/old-k8s-version-276459/proxy-client.crt with IP's: []
	I0731 17:58:13.620720   67995 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/old-k8s-version-276459/proxy-client.crt ...
	I0731 17:58:13.620750   67995 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/old-k8s-version-276459/proxy-client.crt: {Name:mkc88f4cf331f48f9df3c79c804f8ff148998c77 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 17:58:13.620938   67995 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/old-k8s-version-276459/proxy-client.key ...
	I0731 17:58:13.620954   67995 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/old-k8s-version-276459/proxy-client.key: {Name:mk173f15970a2ac972c6480e0481b0bd7521484d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 17:58:13.621142   67995 certs.go:484] found cert: /home/jenkins/minikube-integration/19349-8084/.minikube/certs/15259.pem (1338 bytes)
	W0731 17:58:13.621188   67995 certs.go:480] ignoring /home/jenkins/minikube-integration/19349-8084/.minikube/certs/15259_empty.pem, impossibly tiny 0 bytes
	I0731 17:58:13.621200   67995 certs.go:484] found cert: /home/jenkins/minikube-integration/19349-8084/.minikube/certs/ca-key.pem (1675 bytes)
	I0731 17:58:13.621222   67995 certs.go:484] found cert: /home/jenkins/minikube-integration/19349-8084/.minikube/certs/ca.pem (1082 bytes)
	I0731 17:58:13.621245   67995 certs.go:484] found cert: /home/jenkins/minikube-integration/19349-8084/.minikube/certs/cert.pem (1123 bytes)
	I0731 17:58:13.621265   67995 certs.go:484] found cert: /home/jenkins/minikube-integration/19349-8084/.minikube/certs/key.pem (1675 bytes)
	I0731 17:58:13.621300   67995 certs.go:484] found cert: /home/jenkins/minikube-integration/19349-8084/.minikube/files/etc/ssl/certs/152592.pem (1708 bytes)
	I0731 17:58:13.622159   67995 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19349-8084/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0731 17:58:13.648863   67995 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19349-8084/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0731 17:58:13.672630   67995 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19349-8084/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0731 17:58:13.698879   67995 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19349-8084/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0731 17:58:13.727525   67995 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/old-k8s-version-276459/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0731 17:58:13.755989   67995 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/old-k8s-version-276459/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0731 17:58:13.783577   67995 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/old-k8s-version-276459/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0731 17:58:13.812561   67995 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/old-k8s-version-276459/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0731 17:58:13.838172   67995 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19349-8084/.minikube/files/etc/ssl/certs/152592.pem --> /usr/share/ca-certificates/152592.pem (1708 bytes)
	I0731 17:58:13.861827   67995 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19349-8084/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0731 17:58:13.890382   67995 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19349-8084/.minikube/certs/15259.pem --> /usr/share/ca-certificates/15259.pem (1338 bytes)
	I0731 17:58:13.919046   67995 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0731 17:58:13.937008   67995 ssh_runner.go:195] Run: openssl version
	I0731 17:58:13.947269   67995 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/152592.pem && ln -fs /usr/share/ca-certificates/152592.pem /etc/ssl/certs/152592.pem"
	I0731 17:58:13.968796   67995 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/152592.pem
	I0731 17:58:13.977075   67995 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 31 16:54 /usr/share/ca-certificates/152592.pem
	I0731 17:58:13.977137   67995 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/152592.pem
	I0731 17:58:13.989508   67995 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/152592.pem /etc/ssl/certs/3ec20f2e.0"
	I0731 17:58:14.007323   67995 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0731 17:58:14.027919   67995 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0731 17:58:14.034891   67995 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 31 16:42 /usr/share/ca-certificates/minikubeCA.pem
	I0731 17:58:14.034967   67995 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0731 17:58:14.040368   67995 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0731 17:58:14.050578   67995 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/15259.pem && ln -fs /usr/share/ca-certificates/15259.pem /etc/ssl/certs/15259.pem"
	I0731 17:58:14.061202   67995 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/15259.pem
	I0731 17:58:14.065519   67995 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 31 16:54 /usr/share/ca-certificates/15259.pem
	I0731 17:58:14.065578   67995 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/15259.pem
	I0731 17:58:14.071230   67995 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/15259.pem /etc/ssl/certs/51391683.0"
	I0731 17:58:14.081356   67995 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0731 17:58:14.085290   67995 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0731 17:58:14.085353   67995 kubeadm.go:392] StartCluster: {Name:old-k8s-version-276459 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-276459 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.26 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0731 17:58:14.085443   67995 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0731 17:58:14.085519   67995 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0731 17:58:14.126449   67995 cri.go:89] found id: ""
	I0731 17:58:14.126519   67995 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0731 17:58:14.138484   67995 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0731 17:58:14.149445   67995 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0731 17:58:14.161279   67995 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0731 17:58:14.161300   67995 kubeadm.go:157] found existing configuration files:
	
	I0731 17:58:14.161349   67995 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0731 17:58:14.173634   67995 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0731 17:58:14.173696   67995 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0731 17:58:14.186438   67995 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0731 17:58:14.198397   67995 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0731 17:58:14.198445   67995 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0731 17:58:14.211748   67995 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0731 17:58:14.224200   67995 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0731 17:58:14.224260   67995 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0731 17:58:14.235540   67995 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0731 17:58:14.247138   67995 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0731 17:58:14.247201   67995 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0731 17:58:14.259374   67995 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0731 17:58:14.536613   67995 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0731 18:00:12.868746   67995 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0731 18:00:12.868881   67995 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0731 18:00:12.870244   67995 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0731 18:00:12.870302   67995 kubeadm.go:310] [preflight] Running pre-flight checks
	I0731 18:00:12.870386   67995 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0731 18:00:12.870548   67995 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0731 18:00:12.870708   67995 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0731 18:00:12.870803   67995 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0731 18:00:12.872641   67995 out.go:204]   - Generating certificates and keys ...
	I0731 18:00:12.872754   67995 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0731 18:00:12.872850   67995 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0731 18:00:12.872958   67995 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0731 18:00:12.873042   67995 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0731 18:00:12.873123   67995 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0731 18:00:12.873192   67995 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0731 18:00:12.873263   67995 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0731 18:00:12.873415   67995 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [localhost old-k8s-version-276459] and IPs [192.168.39.26 127.0.0.1 ::1]
	I0731 18:00:12.873481   67995 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0731 18:00:12.873645   67995 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [localhost old-k8s-version-276459] and IPs [192.168.39.26 127.0.0.1 ::1]
	I0731 18:00:12.873727   67995 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0731 18:00:12.873805   67995 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0731 18:00:12.873861   67995 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0731 18:00:12.873928   67995 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0731 18:00:12.873992   67995 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0731 18:00:12.874056   67995 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0731 18:00:12.874135   67995 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0731 18:00:12.874204   67995 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0731 18:00:12.874331   67995 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0731 18:00:12.874434   67995 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0731 18:00:12.874497   67995 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0731 18:00:12.874583   67995 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0731 18:00:12.875974   67995 out.go:204]   - Booting up control plane ...
	I0731 18:00:12.876062   67995 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0731 18:00:12.876124   67995 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0731 18:00:12.876182   67995 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0731 18:00:12.876260   67995 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0731 18:00:12.876432   67995 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0731 18:00:12.876499   67995 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0731 18:00:12.876592   67995 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0731 18:00:12.876835   67995 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0731 18:00:12.876928   67995 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0731 18:00:12.877114   67995 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0731 18:00:12.877189   67995 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0731 18:00:12.877347   67995 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0731 18:00:12.877404   67995 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0731 18:00:12.877567   67995 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0731 18:00:12.877653   67995 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0731 18:00:12.877890   67995 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0731 18:00:12.877898   67995 kubeadm.go:310] 
	I0731 18:00:12.877945   67995 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0731 18:00:12.877978   67995 kubeadm.go:310] 		timed out waiting for the condition
	I0731 18:00:12.877984   67995 kubeadm.go:310] 
	I0731 18:00:12.878024   67995 kubeadm.go:310] 	This error is likely caused by:
	I0731 18:00:12.878075   67995 kubeadm.go:310] 		- The kubelet is not running
	I0731 18:00:12.878235   67995 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0731 18:00:12.878250   67995 kubeadm.go:310] 
	I0731 18:00:12.878392   67995 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0731 18:00:12.878431   67995 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0731 18:00:12.878458   67995 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0731 18:00:12.878464   67995 kubeadm.go:310] 
	I0731 18:00:12.878578   67995 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0731 18:00:12.878693   67995 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0731 18:00:12.878701   67995 kubeadm.go:310] 
	I0731 18:00:12.878777   67995 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0731 18:00:12.878856   67995 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0731 18:00:12.878926   67995 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0731 18:00:12.879011   67995 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0731 18:00:12.879078   67995 kubeadm.go:310] 
	W0731 18:00:12.879132   67995 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [localhost old-k8s-version-276459] and IPs [192.168.39.26 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [localhost old-k8s-version-276459] and IPs [192.168.39.26 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [localhost old-k8s-version-276459] and IPs [192.168.39.26 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [localhost old-k8s-version-276459] and IPs [192.168.39.26 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0731 18:00:12.879178   67995 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0731 18:00:14.332977   67995 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (1.453772943s)
	I0731 18:00:14.333079   67995 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0731 18:00:14.351717   67995 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0731 18:00:14.365192   67995 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0731 18:00:14.365216   67995 kubeadm.go:157] found existing configuration files:
	
	I0731 18:00:14.365263   67995 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0731 18:00:14.377655   67995 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0731 18:00:14.377705   67995 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0731 18:00:14.390126   67995 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0731 18:00:14.402381   67995 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0731 18:00:14.402455   67995 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0731 18:00:14.415181   67995 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0731 18:00:14.425058   67995 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0731 18:00:14.425120   67995 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0731 18:00:14.435203   67995 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0731 18:00:14.444471   67995 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0731 18:00:14.444533   67995 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0731 18:00:14.455834   67995 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0731 18:00:14.526627   67995 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0731 18:00:14.526691   67995 kubeadm.go:310] [preflight] Running pre-flight checks
	I0731 18:00:14.666243   67995 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0731 18:00:14.666406   67995 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0731 18:00:14.666596   67995 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0731 18:00:14.856924   67995 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0731 18:00:14.859354   67995 out.go:204]   - Generating certificates and keys ...
	I0731 18:00:14.859466   67995 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0731 18:00:14.859556   67995 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0731 18:00:14.859673   67995 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0731 18:00:14.859763   67995 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0731 18:00:14.859871   67995 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0731 18:00:14.859961   67995 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0731 18:00:14.860060   67995 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0731 18:00:14.860147   67995 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0731 18:00:14.861437   67995 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0731 18:00:14.861575   67995 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0731 18:00:14.861645   67995 kubeadm.go:310] [certs] Using the existing "sa" key
	I0731 18:00:14.861726   67995 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0731 18:00:14.981277   67995 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0731 18:00:15.476263   67995 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0731 18:00:15.843641   67995 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0731 18:00:15.942973   67995 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0731 18:00:15.960713   67995 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0731 18:00:15.962181   67995 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0731 18:00:15.962279   67995 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0731 18:00:16.116470   67995 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0731 18:00:16.118327   67995 out.go:204]   - Booting up control plane ...
	I0731 18:00:16.118457   67995 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0731 18:00:16.126555   67995 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0731 18:00:16.129957   67995 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0731 18:00:16.130090   67995 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0731 18:00:16.134664   67995 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0731 18:00:56.137383   67995 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0731 18:00:56.137885   67995 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0731 18:00:56.138057   67995 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0731 18:01:01.138845   67995 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0731 18:01:01.139223   67995 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0731 18:01:11.139342   67995 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0731 18:01:11.139567   67995 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0731 18:01:31.138399   67995 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0731 18:01:31.138593   67995 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0731 18:02:11.138009   67995 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0731 18:02:11.138254   67995 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0731 18:02:11.138279   67995 kubeadm.go:310] 
	I0731 18:02:11.138350   67995 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0731 18:02:11.138405   67995 kubeadm.go:310] 		timed out waiting for the condition
	I0731 18:02:11.138420   67995 kubeadm.go:310] 
	I0731 18:02:11.138474   67995 kubeadm.go:310] 	This error is likely caused by:
	I0731 18:02:11.138516   67995 kubeadm.go:310] 		- The kubelet is not running
	I0731 18:02:11.138617   67995 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0731 18:02:11.138624   67995 kubeadm.go:310] 
	I0731 18:02:11.138707   67995 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0731 18:02:11.138737   67995 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0731 18:02:11.138765   67995 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0731 18:02:11.138770   67995 kubeadm.go:310] 
	I0731 18:02:11.138858   67995 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0731 18:02:11.138936   67995 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0731 18:02:11.138942   67995 kubeadm.go:310] 
	I0731 18:02:11.139031   67995 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0731 18:02:11.139142   67995 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0731 18:02:11.139242   67995 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0731 18:02:11.139308   67995 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0731 18:02:11.139317   67995 kubeadm.go:310] 
	I0731 18:02:11.140288   67995 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0731 18:02:11.140374   67995 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0731 18:02:11.140435   67995 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0731 18:02:11.140493   67995 kubeadm.go:394] duration metric: took 3m57.05515081s to StartCluster
	I0731 18:02:11.140532   67995 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 18:02:11.140579   67995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 18:02:11.182354   67995 cri.go:89] found id: ""
	I0731 18:02:11.182380   67995 logs.go:276] 0 containers: []
	W0731 18:02:11.182388   67995 logs.go:278] No container was found matching "kube-apiserver"
	I0731 18:02:11.182394   67995 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 18:02:11.182439   67995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 18:02:11.214130   67995 cri.go:89] found id: ""
	I0731 18:02:11.214153   67995 logs.go:276] 0 containers: []
	W0731 18:02:11.214160   67995 logs.go:278] No container was found matching "etcd"
	I0731 18:02:11.214166   67995 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 18:02:11.214209   67995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 18:02:11.244156   67995 cri.go:89] found id: ""
	I0731 18:02:11.244189   67995 logs.go:276] 0 containers: []
	W0731 18:02:11.244200   67995 logs.go:278] No container was found matching "coredns"
	I0731 18:02:11.244207   67995 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 18:02:11.244267   67995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 18:02:11.274175   67995 cri.go:89] found id: ""
	I0731 18:02:11.274210   67995 logs.go:276] 0 containers: []
	W0731 18:02:11.274220   67995 logs.go:278] No container was found matching "kube-scheduler"
	I0731 18:02:11.274228   67995 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 18:02:11.274290   67995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 18:02:11.308135   67995 cri.go:89] found id: ""
	I0731 18:02:11.308167   67995 logs.go:276] 0 containers: []
	W0731 18:02:11.308176   67995 logs.go:278] No container was found matching "kube-proxy"
	I0731 18:02:11.308182   67995 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 18:02:11.308244   67995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 18:02:11.338714   67995 cri.go:89] found id: ""
	I0731 18:02:11.338742   67995 logs.go:276] 0 containers: []
	W0731 18:02:11.338752   67995 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 18:02:11.338759   67995 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 18:02:11.338817   67995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 18:02:11.369211   67995 cri.go:89] found id: ""
	I0731 18:02:11.369247   67995 logs.go:276] 0 containers: []
	W0731 18:02:11.369259   67995 logs.go:278] No container was found matching "kindnet"
	I0731 18:02:11.369272   67995 logs.go:123] Gathering logs for dmesg ...
	I0731 18:02:11.369288   67995 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 18:02:11.381547   67995 logs.go:123] Gathering logs for describe nodes ...
	I0731 18:02:11.381575   67995 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 18:02:11.530092   67995 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 18:02:11.530113   67995 logs.go:123] Gathering logs for CRI-O ...
	I0731 18:02:11.530126   67995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 18:02:11.619769   67995 logs.go:123] Gathering logs for container status ...
	I0731 18:02:11.619804   67995 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 18:02:11.654780   67995 logs.go:123] Gathering logs for kubelet ...
	I0731 18:02:11.654811   67995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0731 18:02:11.705960   67995 out.go:364] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0731 18:02:11.706004   67995 out.go:239] * 
	W0731 18:02:11.706058   67995 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0731 18:02:11.706078   67995 out.go:239] * 
	W0731 18:02:11.706835   67995 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0731 18:02:11.709551   67995 out.go:177] 
	W0731 18:02:11.710642   67995 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0731 18:02:11.710700   67995 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0731 18:02:11.710726   67995 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0731 18:02:11.712127   67995 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-linux-amd64 start -p old-k8s-version-276459 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0": exit status 109
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-276459 -n old-k8s-version-276459
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-276459 -n old-k8s-version-276459: exit status 6 (217.52363ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0731 18:02:11.972784   73162 status.go:417] kubeconfig endpoint: get endpoint: "old-k8s-version-276459" does not appear in /home/jenkins/minikube-integration/19349-8084/kubeconfig

                                                
                                                
** /stderr **
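Note: the status check above exits 6 because the profile's endpoint is missing from the kubeconfig ("old-k8s-version-276459" does not appear in /home/jenkins/minikube-integration/19349-8084/kubeconfig), which is also why the harness below treats the host as not running even though the VM state string is "Running". A minimal sketch of how that stale-context warning is normally cleared, using the command the status output itself suggests (profile name taken from this run; this only helps if the cluster actually came up, which it did not here):
	out/minikube-linux-amd64 update-context -p old-k8s-version-276459
	kubectl config current-context
	kubectl --context old-k8s-version-276459 get nodes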
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-276459" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestStartStop/group/old-k8s-version/serial/FirstStart (272.91s)
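Note: the FirstStart failure above is the kubelet never answering its health check: kubeadm polls http://localhost:10248/healthz for up to 4 minutes and every call is refused, and minikube's own suggestion is to retry with --extra-config=kubelet.cgroup-driver=systemd (pointing at a possible kubelet/cri-o cgroup-driver mismatch on v1.20.0). A minimal manual-repro sketch built around the checks the kubeadm output itself recommends, wrapped in `minikube ssh` (not part of the original log) to reach the node; assumes the VM is still up and reachable, and that curl is present in the guest:
	# inspect the kubelet on the node
	out/minikube-linux-amd64 ssh -p old-k8s-version-276459 -- sudo systemctl status kubelet
	out/minikube-linux-amd64 ssh -p old-k8s-version-276459 -- sudo journalctl -xeu kubelet
	# the health endpoint kubeadm was polling
	out/minikube-linux-amd64 ssh -p old-k8s-version-276459 -- curl -sSL http://localhost:10248/healthz
	# list any control-plane containers cri-o managed to start
	out/minikube-linux-amd64 ssh -p old-k8s-version-276459 -- sudo crictl ps -a
	# retry the start with the suggested kubelet cgroup driver
	out/minikube-linux-amd64 start -p old-k8s-version-276459 --driver=kvm2 --container-runtime=crio --kubernetes-version=v1.20.0 --extra-config=kubelet.cgroup-driver=systemd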

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/Stop (138.92s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p no-preload-673754 --alsologtostderr -v=3
E0731 18:00:00.361910   15259 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/kindnet-985288/client.crt: no such file or directory
E0731 18:00:00.367211   15259 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/kindnet-985288/client.crt: no such file or directory
E0731 18:00:00.377534   15259 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/kindnet-985288/client.crt: no such file or directory
E0731 18:00:00.397817   15259 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/kindnet-985288/client.crt: no such file or directory
E0731 18:00:00.438148   15259 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/kindnet-985288/client.crt: no such file or directory
E0731 18:00:00.519105   15259 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/kindnet-985288/client.crt: no such file or directory
E0731 18:00:00.679581   15259 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/kindnet-985288/client.crt: no such file or directory
E0731 18:00:00.999948   15259 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/kindnet-985288/client.crt: no such file or directory
E0731 18:00:01.640955   15259 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/kindnet-985288/client.crt: no such file or directory
start_stop_delete_test.go:228: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p no-preload-673754 --alsologtostderr -v=3: exit status 82 (2m0.5039004s)

                                                
                                                
-- stdout --
	* Stopping node "no-preload-673754"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0731 17:59:55.551191   72396 out.go:291] Setting OutFile to fd 1 ...
	I0731 17:59:55.551433   72396 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 17:59:55.551442   72396 out.go:304] Setting ErrFile to fd 2...
	I0731 17:59:55.551446   72396 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 17:59:55.551635   72396 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19349-8084/.minikube/bin
	I0731 17:59:55.551857   72396 out.go:298] Setting JSON to false
	I0731 17:59:55.551949   72396 mustload.go:65] Loading cluster: no-preload-673754
	I0731 17:59:55.552265   72396 config.go:182] Loaded profile config "no-preload-673754": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0-beta.0
	I0731 17:59:55.552328   72396 profile.go:143] Saving config to /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/no-preload-673754/config.json ...
	I0731 17:59:55.552508   72396 mustload.go:65] Loading cluster: no-preload-673754
	I0731 17:59:55.552619   72396 config.go:182] Loaded profile config "no-preload-673754": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0-beta.0
	I0731 17:59:55.552650   72396 stop.go:39] StopHost: no-preload-673754
	I0731 17:59:55.553001   72396 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 17:59:55.553049   72396 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 17:59:55.567729   72396 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38765
	I0731 17:59:55.568158   72396 main.go:141] libmachine: () Calling .GetVersion
	I0731 17:59:55.568727   72396 main.go:141] libmachine: Using API Version  1
	I0731 17:59:55.568744   72396 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 17:59:55.569080   72396 main.go:141] libmachine: () Calling .GetMachineName
	I0731 17:59:55.572520   72396 out.go:177] * Stopping node "no-preload-673754"  ...
	I0731 17:59:55.574041   72396 machine.go:157] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0731 17:59:55.574079   72396 main.go:141] libmachine: (no-preload-673754) Calling .DriverName
	I0731 17:59:55.574299   72396 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0731 17:59:55.574320   72396 main.go:141] libmachine: (no-preload-673754) Calling .GetSSHHostname
	I0731 17:59:55.577283   72396 main.go:141] libmachine: (no-preload-673754) DBG | domain no-preload-673754 has defined MAC address 52:54:00:5a:ec:78 in network mk-no-preload-673754
	I0731 17:59:55.577734   72396 main.go:141] libmachine: (no-preload-673754) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:ec:78", ip: ""} in network mk-no-preload-673754: {Iface:virbr2 ExpiryTime:2024-07-31 18:58:21 +0000 UTC Type:0 Mac:52:54:00:5a:ec:78 Iaid: IPaddr:192.168.61.126 Prefix:24 Hostname:no-preload-673754 Clientid:01:52:54:00:5a:ec:78}
	I0731 17:59:55.577771   72396 main.go:141] libmachine: (no-preload-673754) DBG | domain no-preload-673754 has defined IP address 192.168.61.126 and MAC address 52:54:00:5a:ec:78 in network mk-no-preload-673754
	I0731 17:59:55.577898   72396 main.go:141] libmachine: (no-preload-673754) Calling .GetSSHPort
	I0731 17:59:55.578103   72396 main.go:141] libmachine: (no-preload-673754) Calling .GetSSHKeyPath
	I0731 17:59:55.578280   72396 main.go:141] libmachine: (no-preload-673754) Calling .GetSSHUsername
	I0731 17:59:55.578443   72396 sshutil.go:53] new ssh client: &{IP:192.168.61.126 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19349-8084/.minikube/machines/no-preload-673754/id_rsa Username:docker}
	I0731 17:59:55.680668   72396 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0731 17:59:55.742889   72396 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I0731 17:59:55.805706   72396 main.go:141] libmachine: Stopping "no-preload-673754"...
	I0731 17:59:55.805753   72396 main.go:141] libmachine: (no-preload-673754) Calling .GetState
	I0731 17:59:55.807527   72396 main.go:141] libmachine: (no-preload-673754) Calling .Stop
	I0731 17:59:55.811603   72396 main.go:141] libmachine: (no-preload-673754) Waiting for machine to stop 0/120
	I0731 17:59:56.813585   72396 main.go:141] libmachine: (no-preload-673754) Waiting for machine to stop 1/120
	I0731 17:59:57.814664   72396 main.go:141] libmachine: (no-preload-673754) Waiting for machine to stop 2/120
	I0731 17:59:58.816107   72396 main.go:141] libmachine: (no-preload-673754) Waiting for machine to stop 3/120
	I0731 17:59:59.818368   72396 main.go:141] libmachine: (no-preload-673754) Waiting for machine to stop 4/120
	I0731 18:00:00.820396   72396 main.go:141] libmachine: (no-preload-673754) Waiting for machine to stop 5/120
	I0731 18:00:01.821802   72396 main.go:141] libmachine: (no-preload-673754) Waiting for machine to stop 6/120
	I0731 18:00:02.823445   72396 main.go:141] libmachine: (no-preload-673754) Waiting for machine to stop 7/120
	I0731 18:00:03.825743   72396 main.go:141] libmachine: (no-preload-673754) Waiting for machine to stop 8/120
	I0731 18:00:04.827072   72396 main.go:141] libmachine: (no-preload-673754) Waiting for machine to stop 9/120
	I0731 18:00:05.829213   72396 main.go:141] libmachine: (no-preload-673754) Waiting for machine to stop 10/120
	I0731 18:00:06.830542   72396 main.go:141] libmachine: (no-preload-673754) Waiting for machine to stop 11/120
	I0731 18:00:07.832928   72396 main.go:141] libmachine: (no-preload-673754) Waiting for machine to stop 12/120
	I0731 18:00:08.835320   72396 main.go:141] libmachine: (no-preload-673754) Waiting for machine to stop 13/120
	I0731 18:00:09.837067   72396 main.go:141] libmachine: (no-preload-673754) Waiting for machine to stop 14/120
	I0731 18:00:10.839143   72396 main.go:141] libmachine: (no-preload-673754) Waiting for machine to stop 15/120
	I0731 18:00:11.841287   72396 main.go:141] libmachine: (no-preload-673754) Waiting for machine to stop 16/120
	I0731 18:00:12.842700   72396 main.go:141] libmachine: (no-preload-673754) Waiting for machine to stop 17/120
	I0731 18:00:13.844305   72396 main.go:141] libmachine: (no-preload-673754) Waiting for machine to stop 18/120
	I0731 18:00:14.846485   72396 main.go:141] libmachine: (no-preload-673754) Waiting for machine to stop 19/120
	I0731 18:00:15.848789   72396 main.go:141] libmachine: (no-preload-673754) Waiting for machine to stop 20/120
	I0731 18:00:16.850070   72396 main.go:141] libmachine: (no-preload-673754) Waiting for machine to stop 21/120
	I0731 18:00:17.851455   72396 main.go:141] libmachine: (no-preload-673754) Waiting for machine to stop 22/120
	I0731 18:00:18.852867   72396 main.go:141] libmachine: (no-preload-673754) Waiting for machine to stop 23/120
	I0731 18:00:19.854390   72396 main.go:141] libmachine: (no-preload-673754) Waiting for machine to stop 24/120
	I0731 18:00:20.856722   72396 main.go:141] libmachine: (no-preload-673754) Waiting for machine to stop 25/120
	I0731 18:00:21.858029   72396 main.go:141] libmachine: (no-preload-673754) Waiting for machine to stop 26/120
	I0731 18:00:22.860139   72396 main.go:141] libmachine: (no-preload-673754) Waiting for machine to stop 27/120
	I0731 18:00:23.861583   72396 main.go:141] libmachine: (no-preload-673754) Waiting for machine to stop 28/120
	I0731 18:00:24.862908   72396 main.go:141] libmachine: (no-preload-673754) Waiting for machine to stop 29/120
	I0731 18:00:25.864919   72396 main.go:141] libmachine: (no-preload-673754) Waiting for machine to stop 30/120
	I0731 18:00:26.866209   72396 main.go:141] libmachine: (no-preload-673754) Waiting for machine to stop 31/120
	I0731 18:00:27.867498   72396 main.go:141] libmachine: (no-preload-673754) Waiting for machine to stop 32/120
	I0731 18:00:28.869082   72396 main.go:141] libmachine: (no-preload-673754) Waiting for machine to stop 33/120
	I0731 18:00:29.870399   72396 main.go:141] libmachine: (no-preload-673754) Waiting for machine to stop 34/120
	I0731 18:00:30.872169   72396 main.go:141] libmachine: (no-preload-673754) Waiting for machine to stop 35/120
	I0731 18:00:31.873866   72396 main.go:141] libmachine: (no-preload-673754) Waiting for machine to stop 36/120
	I0731 18:00:32.875449   72396 main.go:141] libmachine: (no-preload-673754) Waiting for machine to stop 37/120
	I0731 18:00:33.877074   72396 main.go:141] libmachine: (no-preload-673754) Waiting for machine to stop 38/120
	I0731 18:00:34.878655   72396 main.go:141] libmachine: (no-preload-673754) Waiting for machine to stop 39/120
	I0731 18:00:35.880753   72396 main.go:141] libmachine: (no-preload-673754) Waiting for machine to stop 40/120
	I0731 18:00:36.882547   72396 main.go:141] libmachine: (no-preload-673754) Waiting for machine to stop 41/120
	I0731 18:00:37.884054   72396 main.go:141] libmachine: (no-preload-673754) Waiting for machine to stop 42/120
	I0731 18:00:38.885465   72396 main.go:141] libmachine: (no-preload-673754) Waiting for machine to stop 43/120
	I0731 18:00:39.887433   72396 main.go:141] libmachine: (no-preload-673754) Waiting for machine to stop 44/120
	I0731 18:00:40.889603   72396 main.go:141] libmachine: (no-preload-673754) Waiting for machine to stop 45/120
	I0731 18:00:41.890985   72396 main.go:141] libmachine: (no-preload-673754) Waiting for machine to stop 46/120
	I0731 18:00:42.892651   72396 main.go:141] libmachine: (no-preload-673754) Waiting for machine to stop 47/120
	I0731 18:00:43.894093   72396 main.go:141] libmachine: (no-preload-673754) Waiting for machine to stop 48/120
	I0731 18:00:44.895923   72396 main.go:141] libmachine: (no-preload-673754) Waiting for machine to stop 49/120
	I0731 18:00:45.898201   72396 main.go:141] libmachine: (no-preload-673754) Waiting for machine to stop 50/120
	I0731 18:00:46.899372   72396 main.go:141] libmachine: (no-preload-673754) Waiting for machine to stop 51/120
	I0731 18:00:47.900777   72396 main.go:141] libmachine: (no-preload-673754) Waiting for machine to stop 52/120
	I0731 18:00:48.902047   72396 main.go:141] libmachine: (no-preload-673754) Waiting for machine to stop 53/120
	I0731 18:00:49.903389   72396 main.go:141] libmachine: (no-preload-673754) Waiting for machine to stop 54/120
	I0731 18:00:50.905219   72396 main.go:141] libmachine: (no-preload-673754) Waiting for machine to stop 55/120
	I0731 18:00:51.906584   72396 main.go:141] libmachine: (no-preload-673754) Waiting for machine to stop 56/120
	I0731 18:00:52.908171   72396 main.go:141] libmachine: (no-preload-673754) Waiting for machine to stop 57/120
	I0731 18:00:53.909487   72396 main.go:141] libmachine: (no-preload-673754) Waiting for machine to stop 58/120
	I0731 18:00:54.910887   72396 main.go:141] libmachine: (no-preload-673754) Waiting for machine to stop 59/120
	I0731 18:00:55.912929   72396 main.go:141] libmachine: (no-preload-673754) Waiting for machine to stop 60/120
	I0731 18:00:56.914204   72396 main.go:141] libmachine: (no-preload-673754) Waiting for machine to stop 61/120
	I0731 18:00:57.915573   72396 main.go:141] libmachine: (no-preload-673754) Waiting for machine to stop 62/120
	I0731 18:00:58.917552   72396 main.go:141] libmachine: (no-preload-673754) Waiting for machine to stop 63/120
	I0731 18:00:59.918983   72396 main.go:141] libmachine: (no-preload-673754) Waiting for machine to stop 64/120
	I0731 18:01:00.920966   72396 main.go:141] libmachine: (no-preload-673754) Waiting for machine to stop 65/120
	I0731 18:01:01.922312   72396 main.go:141] libmachine: (no-preload-673754) Waiting for machine to stop 66/120
	I0731 18:01:02.924027   72396 main.go:141] libmachine: (no-preload-673754) Waiting for machine to stop 67/120
	I0731 18:01:03.925390   72396 main.go:141] libmachine: (no-preload-673754) Waiting for machine to stop 68/120
	I0731 18:01:04.926952   72396 main.go:141] libmachine: (no-preload-673754) Waiting for machine to stop 69/120
	I0731 18:01:05.929186   72396 main.go:141] libmachine: (no-preload-673754) Waiting for machine to stop 70/120
	I0731 18:01:06.930732   72396 main.go:141] libmachine: (no-preload-673754) Waiting for machine to stop 71/120
	I0731 18:01:07.932164   72396 main.go:141] libmachine: (no-preload-673754) Waiting for machine to stop 72/120
	I0731 18:01:08.933717   72396 main.go:141] libmachine: (no-preload-673754) Waiting for machine to stop 73/120
	I0731 18:01:09.935320   72396 main.go:141] libmachine: (no-preload-673754) Waiting for machine to stop 74/120
	I0731 18:01:10.937071   72396 main.go:141] libmachine: (no-preload-673754) Waiting for machine to stop 75/120
	I0731 18:01:11.938518   72396 main.go:141] libmachine: (no-preload-673754) Waiting for machine to stop 76/120
	I0731 18:01:12.939822   72396 main.go:141] libmachine: (no-preload-673754) Waiting for machine to stop 77/120
	I0731 18:01:13.941284   72396 main.go:141] libmachine: (no-preload-673754) Waiting for machine to stop 78/120
	I0731 18:01:14.943069   72396 main.go:141] libmachine: (no-preload-673754) Waiting for machine to stop 79/120
	I0731 18:01:15.945187   72396 main.go:141] libmachine: (no-preload-673754) Waiting for machine to stop 80/120
	I0731 18:01:16.946507   72396 main.go:141] libmachine: (no-preload-673754) Waiting for machine to stop 81/120
	I0731 18:01:17.947904   72396 main.go:141] libmachine: (no-preload-673754) Waiting for machine to stop 82/120
	I0731 18:01:18.949253   72396 main.go:141] libmachine: (no-preload-673754) Waiting for machine to stop 83/120
	I0731 18:01:19.950820   72396 main.go:141] libmachine: (no-preload-673754) Waiting for machine to stop 84/120
	I0731 18:01:20.952913   72396 main.go:141] libmachine: (no-preload-673754) Waiting for machine to stop 85/120
	I0731 18:01:21.954167   72396 main.go:141] libmachine: (no-preload-673754) Waiting for machine to stop 86/120
	I0731 18:01:22.955761   72396 main.go:141] libmachine: (no-preload-673754) Waiting for machine to stop 87/120
	I0731 18:01:23.957153   72396 main.go:141] libmachine: (no-preload-673754) Waiting for machine to stop 88/120
	I0731 18:01:24.958608   72396 main.go:141] libmachine: (no-preload-673754) Waiting for machine to stop 89/120
	I0731 18:01:25.960827   72396 main.go:141] libmachine: (no-preload-673754) Waiting for machine to stop 90/120
	I0731 18:01:26.962423   72396 main.go:141] libmachine: (no-preload-673754) Waiting for machine to stop 91/120
	I0731 18:01:27.963720   72396 main.go:141] libmachine: (no-preload-673754) Waiting for machine to stop 92/120
	I0731 18:01:28.965012   72396 main.go:141] libmachine: (no-preload-673754) Waiting for machine to stop 93/120
	I0731 18:01:29.966604   72396 main.go:141] libmachine: (no-preload-673754) Waiting for machine to stop 94/120
	I0731 18:01:30.968335   72396 main.go:141] libmachine: (no-preload-673754) Waiting for machine to stop 95/120
	I0731 18:01:31.969708   72396 main.go:141] libmachine: (no-preload-673754) Waiting for machine to stop 96/120
	I0731 18:01:32.970993   72396 main.go:141] libmachine: (no-preload-673754) Waiting for machine to stop 97/120
	I0731 18:01:33.972528   72396 main.go:141] libmachine: (no-preload-673754) Waiting for machine to stop 98/120
	I0731 18:01:34.973906   72396 main.go:141] libmachine: (no-preload-673754) Waiting for machine to stop 99/120
	I0731 18:01:35.975812   72396 main.go:141] libmachine: (no-preload-673754) Waiting for machine to stop 100/120
	I0731 18:01:36.977368   72396 main.go:141] libmachine: (no-preload-673754) Waiting for machine to stop 101/120
	I0731 18:01:37.978632   72396 main.go:141] libmachine: (no-preload-673754) Waiting for machine to stop 102/120
	I0731 18:01:38.980078   72396 main.go:141] libmachine: (no-preload-673754) Waiting for machine to stop 103/120
	I0731 18:01:39.981415   72396 main.go:141] libmachine: (no-preload-673754) Waiting for machine to stop 104/120
	I0731 18:01:40.983493   72396 main.go:141] libmachine: (no-preload-673754) Waiting for machine to stop 105/120
	I0731 18:01:41.984823   72396 main.go:141] libmachine: (no-preload-673754) Waiting for machine to stop 106/120
	I0731 18:01:42.986062   72396 main.go:141] libmachine: (no-preload-673754) Waiting for machine to stop 107/120
	I0731 18:01:43.987554   72396 main.go:141] libmachine: (no-preload-673754) Waiting for machine to stop 108/120
	I0731 18:01:44.988939   72396 main.go:141] libmachine: (no-preload-673754) Waiting for machine to stop 109/120
	I0731 18:01:45.991190   72396 main.go:141] libmachine: (no-preload-673754) Waiting for machine to stop 110/120
	I0731 18:01:46.992689   72396 main.go:141] libmachine: (no-preload-673754) Waiting for machine to stop 111/120
	I0731 18:01:47.994034   72396 main.go:141] libmachine: (no-preload-673754) Waiting for machine to stop 112/120
	I0731 18:01:48.995647   72396 main.go:141] libmachine: (no-preload-673754) Waiting for machine to stop 113/120
	I0731 18:01:49.996998   72396 main.go:141] libmachine: (no-preload-673754) Waiting for machine to stop 114/120
	I0731 18:01:50.999200   72396 main.go:141] libmachine: (no-preload-673754) Waiting for machine to stop 115/120
	I0731 18:01:52.000593   72396 main.go:141] libmachine: (no-preload-673754) Waiting for machine to stop 116/120
	I0731 18:01:53.002074   72396 main.go:141] libmachine: (no-preload-673754) Waiting for machine to stop 117/120
	I0731 18:01:54.003638   72396 main.go:141] libmachine: (no-preload-673754) Waiting for machine to stop 118/120
	I0731 18:01:55.005235   72396 main.go:141] libmachine: (no-preload-673754) Waiting for machine to stop 119/120
	I0731 18:01:56.006387   72396 stop.go:66] stop err: unable to stop vm, current state "Running"
	W0731 18:01:56.006465   72396 stop.go:165] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I0731 18:01:56.008630   72396 out.go:177] 
	W0731 18:01:56.010032   72396 out.go:239] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W0731 18:01:56.010054   72396 out.go:239] * 
	* 
	W0731 18:01:56.012599   72396 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0731 18:01:56.013841   72396 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:230: failed stopping minikube - first stop-. args "out/minikube-linux-amd64 stop -p no-preload-673754 --alsologtostderr -v=3" : exit status 82
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-673754 -n no-preload-673754
E0731 18:01:57.915773   15259 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/calico-985288/client.crt: no such file or directory
E0731 18:02:05.551711   15259 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/auto-985288/client.crt: no such file or directory
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-673754 -n no-preload-673754: exit status 3 (18.419856316s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0731 18:02:14.435485   73082 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.61.126:22: connect: no route to host
	E0731 18:02:14.435505   73082 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.61.126:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "no-preload-673754" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/no-preload/serial/Stop (138.92s)
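
Note on the failure pattern above: the "Waiting for machine to stop N/120" lines reflect a once-per-second polling loop that gives the guest two minutes to shut down before the stop is declared a failure, which is what then surfaces as GUEST_STOP_TIMEOUT and exit status 82. The following is a minimal Go sketch of that polling pattern, not minikube's actual stop.go code; the getState helper and the attempt count passed in main are assumptions for illustration only.

package main

import (
	"errors"
	"fmt"
	"time"
)

// getState stands in for a driver call such as libmachine's GetState.
// It is a hypothetical helper used only to make the sketch runnable.
func getState() (string, error) { return "Running", nil }

// waitForStop polls the VM state once per second, mirroring the
// "Waiting for machine to stop N/120" log lines in the report.
func waitForStop(attempts int) error {
	for i := 0; i < attempts; i++ {
		st, err := getState()
		if err != nil {
			return err
		}
		if st == "Stopped" {
			return nil
		}
		fmt.Printf("Waiting for machine to stop %d/%d\n", i, attempts)
		time.Sleep(time.Second)
	}
	// Exhausting the attempts is what the report shows as
	// `stop err: unable to stop vm, current state "Running"`.
	return errors.New(`unable to stop vm, current state "Running"`)
}

func main() {
	// The failing runs above use 120 attempts; a small number keeps the demo short.
	if err := waitForStop(3); err != nil {
		fmt.Println("stop err:", err)
	}
}
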

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/Stop (139.19s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p default-k8s-diff-port-094310 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p default-k8s-diff-port-094310 --alsologtostderr -v=3: exit status 82 (2m0.535235671s)

                                                
                                                
-- stdout --
	* Stopping node "default-k8s-diff-port-094310"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0731 18:00:14.486761   72598 out.go:291] Setting OutFile to fd 1 ...
	I0731 18:00:14.486874   72598 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 18:00:14.486883   72598 out.go:304] Setting ErrFile to fd 2...
	I0731 18:00:14.486887   72598 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 18:00:14.487069   72598 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19349-8084/.minikube/bin
	I0731 18:00:14.487348   72598 out.go:298] Setting JSON to false
	I0731 18:00:14.487437   72598 mustload.go:65] Loading cluster: default-k8s-diff-port-094310
	I0731 18:00:14.487910   72598 config.go:182] Loaded profile config "default-k8s-diff-port-094310": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0731 18:00:14.488005   72598 profile.go:143] Saving config to /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/default-k8s-diff-port-094310/config.json ...
	I0731 18:00:14.488191   72598 mustload.go:65] Loading cluster: default-k8s-diff-port-094310
	I0731 18:00:14.488340   72598 config.go:182] Loaded profile config "default-k8s-diff-port-094310": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0731 18:00:14.488383   72598 stop.go:39] StopHost: default-k8s-diff-port-094310
	I0731 18:00:14.488797   72598 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 18:00:14.488852   72598 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 18:00:14.504288   72598 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43471
	I0731 18:00:14.504817   72598 main.go:141] libmachine: () Calling .GetVersion
	I0731 18:00:14.505378   72598 main.go:141] libmachine: Using API Version  1
	I0731 18:00:14.505403   72598 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 18:00:14.505734   72598 main.go:141] libmachine: () Calling .GetMachineName
	I0731 18:00:14.507788   72598 out.go:177] * Stopping node "default-k8s-diff-port-094310"  ...
	I0731 18:00:14.509041   72598 machine.go:157] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0731 18:00:14.509086   72598 main.go:141] libmachine: (default-k8s-diff-port-094310) Calling .DriverName
	I0731 18:00:14.509339   72598 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0731 18:00:14.509364   72598 main.go:141] libmachine: (default-k8s-diff-port-094310) Calling .GetSSHHostname
	I0731 18:00:14.512682   72598 main.go:141] libmachine: (default-k8s-diff-port-094310) DBG | domain default-k8s-diff-port-094310 has defined MAC address 52:54:00:a9:b2:ae in network mk-default-k8s-diff-port-094310
	I0731 18:00:14.513076   72598 main.go:141] libmachine: (default-k8s-diff-port-094310) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a9:b2:ae", ip: ""} in network mk-default-k8s-diff-port-094310: {Iface:virbr1 ExpiryTime:2024-07-31 18:59:14 +0000 UTC Type:0 Mac:52:54:00:a9:b2:ae Iaid: IPaddr:192.168.72.197 Prefix:24 Hostname:default-k8s-diff-port-094310 Clientid:01:52:54:00:a9:b2:ae}
	I0731 18:00:14.513123   72598 main.go:141] libmachine: (default-k8s-diff-port-094310) DBG | domain default-k8s-diff-port-094310 has defined IP address 192.168.72.197 and MAC address 52:54:00:a9:b2:ae in network mk-default-k8s-diff-port-094310
	I0731 18:00:14.513303   72598 main.go:141] libmachine: (default-k8s-diff-port-094310) Calling .GetSSHPort
	I0731 18:00:14.513500   72598 main.go:141] libmachine: (default-k8s-diff-port-094310) Calling .GetSSHKeyPath
	I0731 18:00:14.513669   72598 main.go:141] libmachine: (default-k8s-diff-port-094310) Calling .GetSSHUsername
	I0731 18:00:14.513805   72598 sshutil.go:53] new ssh client: &{IP:192.168.72.197 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19349-8084/.minikube/machines/default-k8s-diff-port-094310/id_rsa Username:docker}
	I0731 18:00:14.608835   72598 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0731 18:00:14.691633   72598 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I0731 18:00:14.771379   72598 main.go:141] libmachine: Stopping "default-k8s-diff-port-094310"...
	I0731 18:00:14.771413   72598 main.go:141] libmachine: (default-k8s-diff-port-094310) Calling .GetState
	I0731 18:00:14.772819   72598 main.go:141] libmachine: (default-k8s-diff-port-094310) Calling .Stop
	I0731 18:00:14.776794   72598 main.go:141] libmachine: (default-k8s-diff-port-094310) Waiting for machine to stop 0/120
	I0731 18:00:15.778547   72598 main.go:141] libmachine: (default-k8s-diff-port-094310) Waiting for machine to stop 1/120
	I0731 18:00:16.780576   72598 main.go:141] libmachine: (default-k8s-diff-port-094310) Waiting for machine to stop 2/120
	I0731 18:00:17.782235   72598 main.go:141] libmachine: (default-k8s-diff-port-094310) Waiting for machine to stop 3/120
	I0731 18:00:18.783624   72598 main.go:141] libmachine: (default-k8s-diff-port-094310) Waiting for machine to stop 4/120
	I0731 18:00:19.785745   72598 main.go:141] libmachine: (default-k8s-diff-port-094310) Waiting for machine to stop 5/120
	I0731 18:00:20.787209   72598 main.go:141] libmachine: (default-k8s-diff-port-094310) Waiting for machine to stop 6/120
	I0731 18:00:21.788525   72598 main.go:141] libmachine: (default-k8s-diff-port-094310) Waiting for machine to stop 7/120
	I0731 18:00:22.790002   72598 main.go:141] libmachine: (default-k8s-diff-port-094310) Waiting for machine to stop 8/120
	I0731 18:00:23.791567   72598 main.go:141] libmachine: (default-k8s-diff-port-094310) Waiting for machine to stop 9/120
	I0731 18:00:24.793935   72598 main.go:141] libmachine: (default-k8s-diff-port-094310) Waiting for machine to stop 10/120
	I0731 18:00:25.795212   72598 main.go:141] libmachine: (default-k8s-diff-port-094310) Waiting for machine to stop 11/120
	I0731 18:00:26.796734   72598 main.go:141] libmachine: (default-k8s-diff-port-094310) Waiting for machine to stop 12/120
	I0731 18:00:27.798158   72598 main.go:141] libmachine: (default-k8s-diff-port-094310) Waiting for machine to stop 13/120
	I0731 18:00:28.799682   72598 main.go:141] libmachine: (default-k8s-diff-port-094310) Waiting for machine to stop 14/120
	I0731 18:00:29.801799   72598 main.go:141] libmachine: (default-k8s-diff-port-094310) Waiting for machine to stop 15/120
	I0731 18:00:30.802934   72598 main.go:141] libmachine: (default-k8s-diff-port-094310) Waiting for machine to stop 16/120
	I0731 18:00:31.804349   72598 main.go:141] libmachine: (default-k8s-diff-port-094310) Waiting for machine to stop 17/120
	I0731 18:00:32.805745   72598 main.go:141] libmachine: (default-k8s-diff-port-094310) Waiting for machine to stop 18/120
	I0731 18:00:33.807136   72598 main.go:141] libmachine: (default-k8s-diff-port-094310) Waiting for machine to stop 19/120
	I0731 18:00:34.809458   72598 main.go:141] libmachine: (default-k8s-diff-port-094310) Waiting for machine to stop 20/120
	I0731 18:00:35.811020   72598 main.go:141] libmachine: (default-k8s-diff-port-094310) Waiting for machine to stop 21/120
	I0731 18:00:36.812535   72598 main.go:141] libmachine: (default-k8s-diff-port-094310) Waiting for machine to stop 22/120
	I0731 18:00:37.813754   72598 main.go:141] libmachine: (default-k8s-diff-port-094310) Waiting for machine to stop 23/120
	I0731 18:00:38.815361   72598 main.go:141] libmachine: (default-k8s-diff-port-094310) Waiting for machine to stop 24/120
	I0731 18:00:39.817494   72598 main.go:141] libmachine: (default-k8s-diff-port-094310) Waiting for machine to stop 25/120
	I0731 18:00:40.819090   72598 main.go:141] libmachine: (default-k8s-diff-port-094310) Waiting for machine to stop 26/120
	I0731 18:00:41.820525   72598 main.go:141] libmachine: (default-k8s-diff-port-094310) Waiting for machine to stop 27/120
	I0731 18:00:42.822152   72598 main.go:141] libmachine: (default-k8s-diff-port-094310) Waiting for machine to stop 28/120
	I0731 18:00:43.823574   72598 main.go:141] libmachine: (default-k8s-diff-port-094310) Waiting for machine to stop 29/120
	I0731 18:00:44.825837   72598 main.go:141] libmachine: (default-k8s-diff-port-094310) Waiting for machine to stop 30/120
	I0731 18:00:45.827189   72598 main.go:141] libmachine: (default-k8s-diff-port-094310) Waiting for machine to stop 31/120
	I0731 18:00:46.828664   72598 main.go:141] libmachine: (default-k8s-diff-port-094310) Waiting for machine to stop 32/120
	I0731 18:00:47.830186   72598 main.go:141] libmachine: (default-k8s-diff-port-094310) Waiting for machine to stop 33/120
	I0731 18:00:48.832031   72598 main.go:141] libmachine: (default-k8s-diff-port-094310) Waiting for machine to stop 34/120
	I0731 18:00:49.834070   72598 main.go:141] libmachine: (default-k8s-diff-port-094310) Waiting for machine to stop 35/120
	I0731 18:00:50.835648   72598 main.go:141] libmachine: (default-k8s-diff-port-094310) Waiting for machine to stop 36/120
	I0731 18:00:51.837142   72598 main.go:141] libmachine: (default-k8s-diff-port-094310) Waiting for machine to stop 37/120
	I0731 18:00:52.838558   72598 main.go:141] libmachine: (default-k8s-diff-port-094310) Waiting for machine to stop 38/120
	I0731 18:00:53.839908   72598 main.go:141] libmachine: (default-k8s-diff-port-094310) Waiting for machine to stop 39/120
	I0731 18:00:54.842268   72598 main.go:141] libmachine: (default-k8s-diff-port-094310) Waiting for machine to stop 40/120
	I0731 18:00:55.843626   72598 main.go:141] libmachine: (default-k8s-diff-port-094310) Waiting for machine to stop 41/120
	I0731 18:00:56.845031   72598 main.go:141] libmachine: (default-k8s-diff-port-094310) Waiting for machine to stop 42/120
	I0731 18:00:57.846535   72598 main.go:141] libmachine: (default-k8s-diff-port-094310) Waiting for machine to stop 43/120
	I0731 18:00:58.847992   72598 main.go:141] libmachine: (default-k8s-diff-port-094310) Waiting for machine to stop 44/120
	I0731 18:00:59.850044   72598 main.go:141] libmachine: (default-k8s-diff-port-094310) Waiting for machine to stop 45/120
	I0731 18:01:00.851749   72598 main.go:141] libmachine: (default-k8s-diff-port-094310) Waiting for machine to stop 46/120
	I0731 18:01:01.853209   72598 main.go:141] libmachine: (default-k8s-diff-port-094310) Waiting for machine to stop 47/120
	I0731 18:01:02.854617   72598 main.go:141] libmachine: (default-k8s-diff-port-094310) Waiting for machine to stop 48/120
	I0731 18:01:03.856022   72598 main.go:141] libmachine: (default-k8s-diff-port-094310) Waiting for machine to stop 49/120
	I0731 18:01:04.858290   72598 main.go:141] libmachine: (default-k8s-diff-port-094310) Waiting for machine to stop 50/120
	I0731 18:01:05.859623   72598 main.go:141] libmachine: (default-k8s-diff-port-094310) Waiting for machine to stop 51/120
	I0731 18:01:06.861112   72598 main.go:141] libmachine: (default-k8s-diff-port-094310) Waiting for machine to stop 52/120
	I0731 18:01:07.862532   72598 main.go:141] libmachine: (default-k8s-diff-port-094310) Waiting for machine to stop 53/120
	I0731 18:01:08.863830   72598 main.go:141] libmachine: (default-k8s-diff-port-094310) Waiting for machine to stop 54/120
	I0731 18:01:09.865912   72598 main.go:141] libmachine: (default-k8s-diff-port-094310) Waiting for machine to stop 55/120
	I0731 18:01:10.867256   72598 main.go:141] libmachine: (default-k8s-diff-port-094310) Waiting for machine to stop 56/120
	I0731 18:01:11.868716   72598 main.go:141] libmachine: (default-k8s-diff-port-094310) Waiting for machine to stop 57/120
	I0731 18:01:12.870165   72598 main.go:141] libmachine: (default-k8s-diff-port-094310) Waiting for machine to stop 58/120
	I0731 18:01:13.871610   72598 main.go:141] libmachine: (default-k8s-diff-port-094310) Waiting for machine to stop 59/120
	I0731 18:01:14.874059   72598 main.go:141] libmachine: (default-k8s-diff-port-094310) Waiting for machine to stop 60/120
	I0731 18:01:15.876128   72598 main.go:141] libmachine: (default-k8s-diff-port-094310) Waiting for machine to stop 61/120
	I0731 18:01:16.877714   72598 main.go:141] libmachine: (default-k8s-diff-port-094310) Waiting for machine to stop 62/120
	I0731 18:01:17.879095   72598 main.go:141] libmachine: (default-k8s-diff-port-094310) Waiting for machine to stop 63/120
	I0731 18:01:18.880798   72598 main.go:141] libmachine: (default-k8s-diff-port-094310) Waiting for machine to stop 64/120
	I0731 18:01:19.883233   72598 main.go:141] libmachine: (default-k8s-diff-port-094310) Waiting for machine to stop 65/120
	I0731 18:01:20.884950   72598 main.go:141] libmachine: (default-k8s-diff-port-094310) Waiting for machine to stop 66/120
	I0731 18:01:21.886315   72598 main.go:141] libmachine: (default-k8s-diff-port-094310) Waiting for machine to stop 67/120
	I0731 18:01:22.887788   72598 main.go:141] libmachine: (default-k8s-diff-port-094310) Waiting for machine to stop 68/120
	I0731 18:01:23.889123   72598 main.go:141] libmachine: (default-k8s-diff-port-094310) Waiting for machine to stop 69/120
	I0731 18:01:24.891296   72598 main.go:141] libmachine: (default-k8s-diff-port-094310) Waiting for machine to stop 70/120
	I0731 18:01:25.892699   72598 main.go:141] libmachine: (default-k8s-diff-port-094310) Waiting for machine to stop 71/120
	I0731 18:01:26.894389   72598 main.go:141] libmachine: (default-k8s-diff-port-094310) Waiting for machine to stop 72/120
	I0731 18:01:27.895923   72598 main.go:141] libmachine: (default-k8s-diff-port-094310) Waiting for machine to stop 73/120
	I0731 18:01:28.897231   72598 main.go:141] libmachine: (default-k8s-diff-port-094310) Waiting for machine to stop 74/120
	I0731 18:01:29.899314   72598 main.go:141] libmachine: (default-k8s-diff-port-094310) Waiting for machine to stop 75/120
	I0731 18:01:30.900760   72598 main.go:141] libmachine: (default-k8s-diff-port-094310) Waiting for machine to stop 76/120
	I0731 18:01:31.902449   72598 main.go:141] libmachine: (default-k8s-diff-port-094310) Waiting for machine to stop 77/120
	I0731 18:01:32.903979   72598 main.go:141] libmachine: (default-k8s-diff-port-094310) Waiting for machine to stop 78/120
	I0731 18:01:33.905401   72598 main.go:141] libmachine: (default-k8s-diff-port-094310) Waiting for machine to stop 79/120
	I0731 18:01:34.906938   72598 main.go:141] libmachine: (default-k8s-diff-port-094310) Waiting for machine to stop 80/120
	I0731 18:01:35.908398   72598 main.go:141] libmachine: (default-k8s-diff-port-094310) Waiting for machine to stop 81/120
	I0731 18:01:36.909873   72598 main.go:141] libmachine: (default-k8s-diff-port-094310) Waiting for machine to stop 82/120
	I0731 18:01:37.911282   72598 main.go:141] libmachine: (default-k8s-diff-port-094310) Waiting for machine to stop 83/120
	I0731 18:01:38.912857   72598 main.go:141] libmachine: (default-k8s-diff-port-094310) Waiting for machine to stop 84/120
	I0731 18:01:39.915026   72598 main.go:141] libmachine: (default-k8s-diff-port-094310) Waiting for machine to stop 85/120
	I0731 18:01:40.916462   72598 main.go:141] libmachine: (default-k8s-diff-port-094310) Waiting for machine to stop 86/120
	I0731 18:01:41.917856   72598 main.go:141] libmachine: (default-k8s-diff-port-094310) Waiting for machine to stop 87/120
	I0731 18:01:42.919483   72598 main.go:141] libmachine: (default-k8s-diff-port-094310) Waiting for machine to stop 88/120
	I0731 18:01:43.920998   72598 main.go:141] libmachine: (default-k8s-diff-port-094310) Waiting for machine to stop 89/120
	I0731 18:01:44.923464   72598 main.go:141] libmachine: (default-k8s-diff-port-094310) Waiting for machine to stop 90/120
	I0731 18:01:45.925641   72598 main.go:141] libmachine: (default-k8s-diff-port-094310) Waiting for machine to stop 91/120
	I0731 18:01:46.927351   72598 main.go:141] libmachine: (default-k8s-diff-port-094310) Waiting for machine to stop 92/120
	I0731 18:01:47.928872   72598 main.go:141] libmachine: (default-k8s-diff-port-094310) Waiting for machine to stop 93/120
	I0731 18:01:48.930550   72598 main.go:141] libmachine: (default-k8s-diff-port-094310) Waiting for machine to stop 94/120
	I0731 18:01:49.932722   72598 main.go:141] libmachine: (default-k8s-diff-port-094310) Waiting for machine to stop 95/120
	I0731 18:01:50.934386   72598 main.go:141] libmachine: (default-k8s-diff-port-094310) Waiting for machine to stop 96/120
	I0731 18:01:51.935869   72598 main.go:141] libmachine: (default-k8s-diff-port-094310) Waiting for machine to stop 97/120
	I0731 18:01:52.937842   72598 main.go:141] libmachine: (default-k8s-diff-port-094310) Waiting for machine to stop 98/120
	I0731 18:01:53.939244   72598 main.go:141] libmachine: (default-k8s-diff-port-094310) Waiting for machine to stop 99/120
	I0731 18:01:54.941548   72598 main.go:141] libmachine: (default-k8s-diff-port-094310) Waiting for machine to stop 100/120
	I0731 18:01:55.943073   72598 main.go:141] libmachine: (default-k8s-diff-port-094310) Waiting for machine to stop 101/120
	I0731 18:01:56.944527   72598 main.go:141] libmachine: (default-k8s-diff-port-094310) Waiting for machine to stop 102/120
	I0731 18:01:57.945903   72598 main.go:141] libmachine: (default-k8s-diff-port-094310) Waiting for machine to stop 103/120
	I0731 18:01:58.947661   72598 main.go:141] libmachine: (default-k8s-diff-port-094310) Waiting for machine to stop 104/120
	I0731 18:01:59.949704   72598 main.go:141] libmachine: (default-k8s-diff-port-094310) Waiting for machine to stop 105/120
	I0731 18:02:00.951101   72598 main.go:141] libmachine: (default-k8s-diff-port-094310) Waiting for machine to stop 106/120
	I0731 18:02:01.952617   72598 main.go:141] libmachine: (default-k8s-diff-port-094310) Waiting for machine to stop 107/120
	I0731 18:02:02.953859   72598 main.go:141] libmachine: (default-k8s-diff-port-094310) Waiting for machine to stop 108/120
	I0731 18:02:03.955309   72598 main.go:141] libmachine: (default-k8s-diff-port-094310) Waiting for machine to stop 109/120
	I0731 18:02:04.956757   72598 main.go:141] libmachine: (default-k8s-diff-port-094310) Waiting for machine to stop 110/120
	I0731 18:02:05.958375   72598 main.go:141] libmachine: (default-k8s-diff-port-094310) Waiting for machine to stop 111/120
	I0731 18:02:06.959827   72598 main.go:141] libmachine: (default-k8s-diff-port-094310) Waiting for machine to stop 112/120
	I0731 18:02:07.961521   72598 main.go:141] libmachine: (default-k8s-diff-port-094310) Waiting for machine to stop 113/120
	I0731 18:02:08.962915   72598 main.go:141] libmachine: (default-k8s-diff-port-094310) Waiting for machine to stop 114/120
	I0731 18:02:09.965079   72598 main.go:141] libmachine: (default-k8s-diff-port-094310) Waiting for machine to stop 115/120
	I0731 18:02:10.966334   72598 main.go:141] libmachine: (default-k8s-diff-port-094310) Waiting for machine to stop 116/120
	I0731 18:02:11.967713   72598 main.go:141] libmachine: (default-k8s-diff-port-094310) Waiting for machine to stop 117/120
	I0731 18:02:12.969177   72598 main.go:141] libmachine: (default-k8s-diff-port-094310) Waiting for machine to stop 118/120
	I0731 18:02:13.970614   72598 main.go:141] libmachine: (default-k8s-diff-port-094310) Waiting for machine to stop 119/120
	I0731 18:02:14.971496   72598 stop.go:66] stop err: unable to stop vm, current state "Running"
	W0731 18:02:14.971545   72598 stop.go:165] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I0731 18:02:14.973477   72598 out.go:177] 
	W0731 18:02:14.974642   72598 out.go:239] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W0731 18:02:14.974663   72598 out.go:239] * 
	* 
	W0731 18:02:14.977228   72598 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0731 18:02:14.978480   72598 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:230: failed stopping minikube - first stop-. args "out/minikube-linux-amd64 stop -p default-k8s-diff-port-094310 --alsologtostderr -v=3" : exit status 82
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-094310 -n default-k8s-diff-port-094310
E0731 18:02:16.167948   15259 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/bridge-985288/client.crt: no such file or directory
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-094310 -n default-k8s-diff-port-094310: exit status 3 (18.655044343s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0731 18:02:33.635444   73322 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.72.197:22: connect: no route to host
	E0731 18:02:33.635468   73322 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.72.197:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-094310" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/Stop (139.19s)
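
Note on the post-mortem above: after the stop times out, the follow-up status check fails because the SSH dial to the guest (192.168.72.197:22 in this run) returns "connect: no route to host". The sketch below only illustrates how such a dial error surfaces from Go's standard library; the address and timeout are assumptions for the example and are not taken from the report's code.

package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// Illustrative address; the failing guest in this run was 192.168.72.197:22.
	addr := "192.168.72.197:22"
	conn, err := net.DialTimeout("tcp", addr, 5*time.Second)
	if err != nil {
		// Against an unreachable guest this prints something like:
		// dial tcp 192.168.72.197:22: connect: no route to host
		fmt.Println("status error:", err)
		return
	}
	defer conn.Close()
	fmt.Println("ssh port reachable")
}
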

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/Stop (139.1s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p embed-certs-436067 --alsologtostderr -v=3
E0731 18:00:28.393318   15259 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/addons-190022/client.crt: no such file or directory
E0731 18:00:35.992178   15259 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/calico-985288/client.crt: no such file or directory
E0731 18:00:35.997421   15259 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/calico-985288/client.crt: no such file or directory
E0731 18:00:36.007689   15259 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/calico-985288/client.crt: no such file or directory
E0731 18:00:36.027951   15259 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/calico-985288/client.crt: no such file or directory
E0731 18:00:36.068275   15259 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/calico-985288/client.crt: no such file or directory
E0731 18:00:36.148611   15259 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/calico-985288/client.crt: no such file or directory
E0731 18:00:36.309584   15259 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/calico-985288/client.crt: no such file or directory
E0731 18:00:36.630215   15259 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/calico-985288/client.crt: no such file or directory
E0731 18:00:37.270482   15259 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/calico-985288/client.crt: no such file or directory
E0731 18:00:38.551360   15259 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/calico-985288/client.crt: no such file or directory
E0731 18:00:41.112100   15259 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/calico-985288/client.crt: no such file or directory
E0731 18:00:41.325414   15259 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/kindnet-985288/client.crt: no such file or directory
E0731 18:00:43.630673   15259 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/auto-985288/client.crt: no such file or directory
E0731 18:00:46.233031   15259 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/calico-985288/client.crt: no such file or directory
E0731 18:00:56.474026   15259 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/calico-985288/client.crt: no such file or directory
E0731 18:01:07.908932   15259 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/custom-flannel-985288/client.crt: no such file or directory
E0731 18:01:07.914173   15259 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/custom-flannel-985288/client.crt: no such file or directory
E0731 18:01:07.924419   15259 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/custom-flannel-985288/client.crt: no such file or directory
E0731 18:01:07.945412   15259 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/custom-flannel-985288/client.crt: no such file or directory
E0731 18:01:07.985685   15259 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/custom-flannel-985288/client.crt: no such file or directory
E0731 18:01:08.065993   15259 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/custom-flannel-985288/client.crt: no such file or directory
E0731 18:01:08.226508   15259 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/custom-flannel-985288/client.crt: no such file or directory
E0731 18:01:08.546700   15259 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/custom-flannel-985288/client.crt: no such file or directory
E0731 18:01:09.187264   15259 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/custom-flannel-985288/client.crt: no such file or directory
E0731 18:01:10.467763   15259 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/custom-flannel-985288/client.crt: no such file or directory
E0731 18:01:13.028188   15259 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/custom-flannel-985288/client.crt: no such file or directory
E0731 18:01:16.954770   15259 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/calico-985288/client.crt: no such file or directory
E0731 18:01:18.148376   15259 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/custom-flannel-985288/client.crt: no such file or directory
E0731 18:01:22.285741   15259 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/kindnet-985288/client.crt: no such file or directory
E0731 18:01:28.389455   15259 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/custom-flannel-985288/client.crt: no such file or directory
E0731 18:01:48.870384   15259 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/custom-flannel-985288/client.crt: no such file or directory
start_stop_delete_test.go:228: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p embed-certs-436067 --alsologtostderr -v=3: exit status 82 (2m0.501110453s)

                                                
                                                
-- stdout --
	* Stopping node "embed-certs-436067"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0731 18:00:23.023922   72700 out.go:291] Setting OutFile to fd 1 ...
	I0731 18:00:23.024136   72700 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 18:00:23.024145   72700 out.go:304] Setting ErrFile to fd 2...
	I0731 18:00:23.024149   72700 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 18:00:23.024293   72700 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19349-8084/.minikube/bin
	I0731 18:00:23.024509   72700 out.go:298] Setting JSON to false
	I0731 18:00:23.024576   72700 mustload.go:65] Loading cluster: embed-certs-436067
	I0731 18:00:23.024866   72700 config.go:182] Loaded profile config "embed-certs-436067": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0731 18:00:23.024925   72700 profile.go:143] Saving config to /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/embed-certs-436067/config.json ...
	I0731 18:00:23.025085   72700 mustload.go:65] Loading cluster: embed-certs-436067
	I0731 18:00:23.025178   72700 config.go:182] Loaded profile config "embed-certs-436067": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0731 18:00:23.025203   72700 stop.go:39] StopHost: embed-certs-436067
	I0731 18:00:23.025587   72700 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 18:00:23.025647   72700 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 18:00:23.040855   72700 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34089
	I0731 18:00:23.041292   72700 main.go:141] libmachine: () Calling .GetVersion
	I0731 18:00:23.041848   72700 main.go:141] libmachine: Using API Version  1
	I0731 18:00:23.041871   72700 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 18:00:23.042225   72700 main.go:141] libmachine: () Calling .GetMachineName
	I0731 18:00:23.044619   72700 out.go:177] * Stopping node "embed-certs-436067"  ...
	I0731 18:00:23.045874   72700 machine.go:157] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0731 18:00:23.045924   72700 main.go:141] libmachine: (embed-certs-436067) Calling .DriverName
	I0731 18:00:23.046155   72700 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0731 18:00:23.046188   72700 main.go:141] libmachine: (embed-certs-436067) Calling .GetSSHHostname
	I0731 18:00:23.049092   72700 main.go:141] libmachine: (embed-certs-436067) DBG | domain embed-certs-436067 has defined MAC address 52:54:00:87:1e:25 in network mk-embed-certs-436067
	I0731 18:00:23.049701   72700 main.go:141] libmachine: (embed-certs-436067) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:1e:25", ip: ""} in network mk-embed-certs-436067: {Iface:virbr4 ExpiryTime:2024-07-31 18:58:48 +0000 UTC Type:0 Mac:52:54:00:87:1e:25 Iaid: IPaddr:192.168.50.86 Prefix:24 Hostname:embed-certs-436067 Clientid:01:52:54:00:87:1e:25}
	I0731 18:00:23.049736   72700 main.go:141] libmachine: (embed-certs-436067) DBG | domain embed-certs-436067 has defined IP address 192.168.50.86 and MAC address 52:54:00:87:1e:25 in network mk-embed-certs-436067
	I0731 18:00:23.049871   72700 main.go:141] libmachine: (embed-certs-436067) Calling .GetSSHPort
	I0731 18:00:23.050046   72700 main.go:141] libmachine: (embed-certs-436067) Calling .GetSSHKeyPath
	I0731 18:00:23.050170   72700 main.go:141] libmachine: (embed-certs-436067) Calling .GetSSHUsername
	I0731 18:00:23.050296   72700 sshutil.go:53] new ssh client: &{IP:192.168.50.86 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19349-8084/.minikube/machines/embed-certs-436067/id_rsa Username:docker}
	I0731 18:00:23.158322   72700 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0731 18:00:23.222530   72700 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I0731 18:00:23.278780   72700 main.go:141] libmachine: Stopping "embed-certs-436067"...
	I0731 18:00:23.278814   72700 main.go:141] libmachine: (embed-certs-436067) Calling .GetState
	I0731 18:00:23.280414   72700 main.go:141] libmachine: (embed-certs-436067) Calling .Stop
	I0731 18:00:23.284274   72700 main.go:141] libmachine: (embed-certs-436067) Waiting for machine to stop 0/120
	I0731 18:00:24.286007   72700 main.go:141] libmachine: (embed-certs-436067) Waiting for machine to stop 1/120
	I0731 18:00:25.287554   72700 main.go:141] libmachine: (embed-certs-436067) Waiting for machine to stop 2/120
	I0731 18:00:26.289113   72700 main.go:141] libmachine: (embed-certs-436067) Waiting for machine to stop 3/120
	I0731 18:00:27.290693   72700 main.go:141] libmachine: (embed-certs-436067) Waiting for machine to stop 4/120
	I0731 18:00:28.293053   72700 main.go:141] libmachine: (embed-certs-436067) Waiting for machine to stop 5/120
	I0731 18:00:29.294647   72700 main.go:141] libmachine: (embed-certs-436067) Waiting for machine to stop 6/120
	I0731 18:00:30.295958   72700 main.go:141] libmachine: (embed-certs-436067) Waiting for machine to stop 7/120
	I0731 18:00:31.297557   72700 main.go:141] libmachine: (embed-certs-436067) Waiting for machine to stop 8/120
	I0731 18:00:32.299092   72700 main.go:141] libmachine: (embed-certs-436067) Waiting for machine to stop 9/120
	I0731 18:00:33.301583   72700 main.go:141] libmachine: (embed-certs-436067) Waiting for machine to stop 10/120
	I0731 18:00:34.303255   72700 main.go:141] libmachine: (embed-certs-436067) Waiting for machine to stop 11/120
	I0731 18:00:35.304867   72700 main.go:141] libmachine: (embed-certs-436067) Waiting for machine to stop 12/120
	I0731 18:00:36.306354   72700 main.go:141] libmachine: (embed-certs-436067) Waiting for machine to stop 13/120
	I0731 18:00:37.307894   72700 main.go:141] libmachine: (embed-certs-436067) Waiting for machine to stop 14/120
	I0731 18:00:38.309798   72700 main.go:141] libmachine: (embed-certs-436067) Waiting for machine to stop 15/120
	I0731 18:00:39.311327   72700 main.go:141] libmachine: (embed-certs-436067) Waiting for machine to stop 16/120
	I0731 18:00:40.312740   72700 main.go:141] libmachine: (embed-certs-436067) Waiting for machine to stop 17/120
	I0731 18:00:41.314119   72700 main.go:141] libmachine: (embed-certs-436067) Waiting for machine to stop 18/120
	I0731 18:00:42.315581   72700 main.go:141] libmachine: (embed-certs-436067) Waiting for machine to stop 19/120
	I0731 18:00:43.317919   72700 main.go:141] libmachine: (embed-certs-436067) Waiting for machine to stop 20/120
	I0731 18:00:44.319326   72700 main.go:141] libmachine: (embed-certs-436067) Waiting for machine to stop 21/120
	I0731 18:00:45.320928   72700 main.go:141] libmachine: (embed-certs-436067) Waiting for machine to stop 22/120
	I0731 18:00:46.322272   72700 main.go:141] libmachine: (embed-certs-436067) Waiting for machine to stop 23/120
	I0731 18:00:47.324147   72700 main.go:141] libmachine: (embed-certs-436067) Waiting for machine to stop 24/120
	I0731 18:00:48.326632   72700 main.go:141] libmachine: (embed-certs-436067) Waiting for machine to stop 25/120
	I0731 18:00:49.327969   72700 main.go:141] libmachine: (embed-certs-436067) Waiting for machine to stop 26/120
	I0731 18:00:50.329318   72700 main.go:141] libmachine: (embed-certs-436067) Waiting for machine to stop 27/120
	I0731 18:00:51.330580   72700 main.go:141] libmachine: (embed-certs-436067) Waiting for machine to stop 28/120
	I0731 18:00:52.331941   72700 main.go:141] libmachine: (embed-certs-436067) Waiting for machine to stop 29/120
	I0731 18:00:53.334192   72700 main.go:141] libmachine: (embed-certs-436067) Waiting for machine to stop 30/120
	I0731 18:00:54.335591   72700 main.go:141] libmachine: (embed-certs-436067) Waiting for machine to stop 31/120
	I0731 18:00:55.336962   72700 main.go:141] libmachine: (embed-certs-436067) Waiting for machine to stop 32/120
	I0731 18:00:56.338577   72700 main.go:141] libmachine: (embed-certs-436067) Waiting for machine to stop 33/120
	I0731 18:00:57.340014   72700 main.go:141] libmachine: (embed-certs-436067) Waiting for machine to stop 34/120
	I0731 18:00:58.342147   72700 main.go:141] libmachine: (embed-certs-436067) Waiting for machine to stop 35/120
	I0731 18:00:59.343505   72700 main.go:141] libmachine: (embed-certs-436067) Waiting for machine to stop 36/120
	I0731 18:01:00.344796   72700 main.go:141] libmachine: (embed-certs-436067) Waiting for machine to stop 37/120
	I0731 18:01:01.346089   72700 main.go:141] libmachine: (embed-certs-436067) Waiting for machine to stop 38/120
	I0731 18:01:02.347477   72700 main.go:141] libmachine: (embed-certs-436067) Waiting for machine to stop 39/120
	I0731 18:01:03.348684   72700 main.go:141] libmachine: (embed-certs-436067) Waiting for machine to stop 40/120
	I0731 18:01:04.349946   72700 main.go:141] libmachine: (embed-certs-436067) Waiting for machine to stop 41/120
	I0731 18:01:05.351210   72700 main.go:141] libmachine: (embed-certs-436067) Waiting for machine to stop 42/120
	I0731 18:01:06.352486   72700 main.go:141] libmachine: (embed-certs-436067) Waiting for machine to stop 43/120
	I0731 18:01:07.354016   72700 main.go:141] libmachine: (embed-certs-436067) Waiting for machine to stop 44/120
	I0731 18:01:08.355355   72700 main.go:141] libmachine: (embed-certs-436067) Waiting for machine to stop 45/120
	I0731 18:01:09.356812   72700 main.go:141] libmachine: (embed-certs-436067) Waiting for machine to stop 46/120
	I0731 18:01:10.358212   72700 main.go:141] libmachine: (embed-certs-436067) Waiting for machine to stop 47/120
	I0731 18:01:11.359595   72700 main.go:141] libmachine: (embed-certs-436067) Waiting for machine to stop 48/120
	I0731 18:01:12.361254   72700 main.go:141] libmachine: (embed-certs-436067) Waiting for machine to stop 49/120
	I0731 18:01:13.363456   72700 main.go:141] libmachine: (embed-certs-436067) Waiting for machine to stop 50/120
	I0731 18:01:14.364806   72700 main.go:141] libmachine: (embed-certs-436067) Waiting for machine to stop 51/120
	I0731 18:01:15.366119   72700 main.go:141] libmachine: (embed-certs-436067) Waiting for machine to stop 52/120
	I0731 18:01:16.367874   72700 main.go:141] libmachine: (embed-certs-436067) Waiting for machine to stop 53/120
	I0731 18:01:17.369498   72700 main.go:141] libmachine: (embed-certs-436067) Waiting for machine to stop 54/120
	I0731 18:01:18.371649   72700 main.go:141] libmachine: (embed-certs-436067) Waiting for machine to stop 55/120
	I0731 18:01:19.373095   72700 main.go:141] libmachine: (embed-certs-436067) Waiting for machine to stop 56/120
	I0731 18:01:20.374799   72700 main.go:141] libmachine: (embed-certs-436067) Waiting for machine to stop 57/120
	I0731 18:01:21.376132   72700 main.go:141] libmachine: (embed-certs-436067) Waiting for machine to stop 58/120
	I0731 18:01:22.377561   72700 main.go:141] libmachine: (embed-certs-436067) Waiting for machine to stop 59/120
	I0731 18:01:23.379796   72700 main.go:141] libmachine: (embed-certs-436067) Waiting for machine to stop 60/120
	I0731 18:01:24.381171   72700 main.go:141] libmachine: (embed-certs-436067) Waiting for machine to stop 61/120
	I0731 18:01:25.382422   72700 main.go:141] libmachine: (embed-certs-436067) Waiting for machine to stop 62/120
	I0731 18:01:26.384284   72700 main.go:141] libmachine: (embed-certs-436067) Waiting for machine to stop 63/120
	I0731 18:01:27.385882   72700 main.go:141] libmachine: (embed-certs-436067) Waiting for machine to stop 64/120
	I0731 18:01:28.388255   72700 main.go:141] libmachine: (embed-certs-436067) Waiting for machine to stop 65/120
	I0731 18:01:29.389980   72700 main.go:141] libmachine: (embed-certs-436067) Waiting for machine to stop 66/120
	I0731 18:01:30.391392   72700 main.go:141] libmachine: (embed-certs-436067) Waiting for machine to stop 67/120
	I0731 18:01:31.392598   72700 main.go:141] libmachine: (embed-certs-436067) Waiting for machine to stop 68/120
	I0731 18:01:32.393950   72700 main.go:141] libmachine: (embed-certs-436067) Waiting for machine to stop 69/120
	I0731 18:01:33.396464   72700 main.go:141] libmachine: (embed-certs-436067) Waiting for machine to stop 70/120
	I0731 18:01:34.397895   72700 main.go:141] libmachine: (embed-certs-436067) Waiting for machine to stop 71/120
	I0731 18:01:35.399337   72700 main.go:141] libmachine: (embed-certs-436067) Waiting for machine to stop 72/120
	I0731 18:01:36.400616   72700 main.go:141] libmachine: (embed-certs-436067) Waiting for machine to stop 73/120
	I0731 18:01:37.401980   72700 main.go:141] libmachine: (embed-certs-436067) Waiting for machine to stop 74/120
	I0731 18:01:38.404146   72700 main.go:141] libmachine: (embed-certs-436067) Waiting for machine to stop 75/120
	I0731 18:01:39.405560   72700 main.go:141] libmachine: (embed-certs-436067) Waiting for machine to stop 76/120
	I0731 18:01:40.406967   72700 main.go:141] libmachine: (embed-certs-436067) Waiting for machine to stop 77/120
	I0731 18:01:41.408434   72700 main.go:141] libmachine: (embed-certs-436067) Waiting for machine to stop 78/120
	I0731 18:01:42.409636   72700 main.go:141] libmachine: (embed-certs-436067) Waiting for machine to stop 79/120
	I0731 18:01:43.411938   72700 main.go:141] libmachine: (embed-certs-436067) Waiting for machine to stop 80/120
	I0731 18:01:44.413622   72700 main.go:141] libmachine: (embed-certs-436067) Waiting for machine to stop 81/120
	I0731 18:01:45.415031   72700 main.go:141] libmachine: (embed-certs-436067) Waiting for machine to stop 82/120
	I0731 18:01:46.416754   72700 main.go:141] libmachine: (embed-certs-436067) Waiting for machine to stop 83/120
	I0731 18:01:47.418369   72700 main.go:141] libmachine: (embed-certs-436067) Waiting for machine to stop 84/120
	I0731 18:01:48.420501   72700 main.go:141] libmachine: (embed-certs-436067) Waiting for machine to stop 85/120
	I0731 18:01:49.422084   72700 main.go:141] libmachine: (embed-certs-436067) Waiting for machine to stop 86/120
	I0731 18:01:50.423588   72700 main.go:141] libmachine: (embed-certs-436067) Waiting for machine to stop 87/120
	I0731 18:01:51.425081   72700 main.go:141] libmachine: (embed-certs-436067) Waiting for machine to stop 88/120
	I0731 18:01:52.426465   72700 main.go:141] libmachine: (embed-certs-436067) Waiting for machine to stop 89/120
	I0731 18:01:53.428052   72700 main.go:141] libmachine: (embed-certs-436067) Waiting for machine to stop 90/120
	I0731 18:01:54.430046   72700 main.go:141] libmachine: (embed-certs-436067) Waiting for machine to stop 91/120
	I0731 18:01:55.431893   72700 main.go:141] libmachine: (embed-certs-436067) Waiting for machine to stop 92/120
	I0731 18:01:56.433409   72700 main.go:141] libmachine: (embed-certs-436067) Waiting for machine to stop 93/120
	I0731 18:01:57.434914   72700 main.go:141] libmachine: (embed-certs-436067) Waiting for machine to stop 94/120
	I0731 18:01:58.437017   72700 main.go:141] libmachine: (embed-certs-436067) Waiting for machine to stop 95/120
	I0731 18:01:59.438602   72700 main.go:141] libmachine: (embed-certs-436067) Waiting for machine to stop 96/120
	I0731 18:02:00.440070   72700 main.go:141] libmachine: (embed-certs-436067) Waiting for machine to stop 97/120
	I0731 18:02:01.441701   72700 main.go:141] libmachine: (embed-certs-436067) Waiting for machine to stop 98/120
	I0731 18:02:02.443183   72700 main.go:141] libmachine: (embed-certs-436067) Waiting for machine to stop 99/120
	I0731 18:02:03.445331   72700 main.go:141] libmachine: (embed-certs-436067) Waiting for machine to stop 100/120
	I0731 18:02:04.446684   72700 main.go:141] libmachine: (embed-certs-436067) Waiting for machine to stop 101/120
	I0731 18:02:05.448141   72700 main.go:141] libmachine: (embed-certs-436067) Waiting for machine to stop 102/120
	I0731 18:02:06.449599   72700 main.go:141] libmachine: (embed-certs-436067) Waiting for machine to stop 103/120
	I0731 18:02:07.451378   72700 main.go:141] libmachine: (embed-certs-436067) Waiting for machine to stop 104/120
	I0731 18:02:08.453756   72700 main.go:141] libmachine: (embed-certs-436067) Waiting for machine to stop 105/120
	I0731 18:02:09.455272   72700 main.go:141] libmachine: (embed-certs-436067) Waiting for machine to stop 106/120
	I0731 18:02:10.456837   72700 main.go:141] libmachine: (embed-certs-436067) Waiting for machine to stop 107/120
	I0731 18:02:11.458103   72700 main.go:141] libmachine: (embed-certs-436067) Waiting for machine to stop 108/120
	I0731 18:02:12.459816   72700 main.go:141] libmachine: (embed-certs-436067) Waiting for machine to stop 109/120
	I0731 18:02:13.462147   72700 main.go:141] libmachine: (embed-certs-436067) Waiting for machine to stop 110/120
	I0731 18:02:14.463396   72700 main.go:141] libmachine: (embed-certs-436067) Waiting for machine to stop 111/120
	I0731 18:02:15.464822   72700 main.go:141] libmachine: (embed-certs-436067) Waiting for machine to stop 112/120
	I0731 18:02:16.466330   72700 main.go:141] libmachine: (embed-certs-436067) Waiting for machine to stop 113/120
	I0731 18:02:17.467638   72700 main.go:141] libmachine: (embed-certs-436067) Waiting for machine to stop 114/120
	I0731 18:02:18.469611   72700 main.go:141] libmachine: (embed-certs-436067) Waiting for machine to stop 115/120
	I0731 18:02:19.471002   72700 main.go:141] libmachine: (embed-certs-436067) Waiting for machine to stop 116/120
	I0731 18:02:20.472623   72700 main.go:141] libmachine: (embed-certs-436067) Waiting for machine to stop 117/120
	I0731 18:02:21.474209   72700 main.go:141] libmachine: (embed-certs-436067) Waiting for machine to stop 118/120
	I0731 18:02:22.475891   72700 main.go:141] libmachine: (embed-certs-436067) Waiting for machine to stop 119/120
	I0731 18:02:23.477480   72700 stop.go:66] stop err: unable to stop vm, current state "Running"
	W0731 18:02:23.477548   72700 stop.go:165] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I0731 18:02:23.479395   72700 out.go:177] 
	W0731 18:02:23.480549   72700 out.go:239] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W0731 18:02:23.480566   72700 out.go:239] * 
	* 
	W0731 18:02:23.483248   72700 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0731 18:02:23.484517   72700 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:230: failed stopping minikube - first stop-. args "out/minikube-linux-amd64 stop -p embed-certs-436067 --alsologtostderr -v=3" : exit status 82
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-436067 -n embed-certs-436067
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-436067 -n embed-certs-436067: exit status 3 (18.597091769s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0731 18:02:42.083442   73402 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.50.86:22: connect: no route to host
	E0731 18:02:42.083461   73402 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.50.86:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "embed-certs-436067" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/embed-certs/serial/Stop (139.10s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/DeployApp (0.47s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-276459 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context old-k8s-version-276459 create -f testdata/busybox.yaml: exit status 1 (43.625643ms)

                                                
                                                
** stderr ** 
	error: context "old-k8s-version-276459" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:196: kubectl --context old-k8s-version-276459 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-276459 -n old-k8s-version-276459
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-276459 -n old-k8s-version-276459: exit status 6 (211.702729ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0731 18:02:12.229130   73202 status.go:417] kubeconfig endpoint: get endpoint: "old-k8s-version-276459" does not appear in /home/jenkins/minikube-integration/19349-8084/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-276459" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-276459 -n old-k8s-version-276459
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-276459 -n old-k8s-version-276459: exit status 6 (215.657286ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0731 18:02:12.444951   73232 status.go:417] kubeconfig endpoint: get endpoint: "old-k8s-version-276459" does not appear in /home/jenkins/minikube-integration/19349-8084/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-276459" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestStartStop/group/old-k8s-version/serial/DeployApp (0.47s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (100.31s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-276459 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
E0731 18:02:13.608590   15259 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/bridge-985288/client.crt: no such file or directory
E0731 18:02:13.613963   15259 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/bridge-985288/client.crt: no such file or directory
E0731 18:02:13.624234   15259 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/bridge-985288/client.crt: no such file or directory
E0731 18:02:13.644494   15259 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/bridge-985288/client.crt: no such file or directory
E0731 18:02:13.684882   15259 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/bridge-985288/client.crt: no such file or directory
E0731 18:02:13.765274   15259 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/bridge-985288/client.crt: no such file or directory
E0731 18:02:13.925724   15259 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/bridge-985288/client.crt: no such file or directory
E0731 18:02:14.246096   15259 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/bridge-985288/client.crt: no such file or directory
start_stop_delete_test.go:205: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-276459 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 10 (1m40.053207311s)

                                                
                                                
-- stdout --
	* metrics-server is an addon maintained by Kubernetes. For any concerns contact minikube on GitHub.
	You can view the list of minikube maintainers at: https://github.com/kubernetes/minikube/blob/master/OWNERS
	  - Using image fake.domain/registry.k8s.io/echoserver:1.4
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE: enable failed: run callbacks: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	]
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:207: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-276459 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 10
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-276459 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context old-k8s-version-276459 describe deploy/metrics-server -n kube-system: exit status 1 (41.790764ms)

                                                
                                                
** stderr ** 
	error: context "old-k8s-version-276459" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context old-k8s-version-276459 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-276459 -n old-k8s-version-276459
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-276459 -n old-k8s-version-276459: exit status 6 (216.944818ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0731 18:03:52.756676   74092 status.go:417] kubeconfig endpoint: get endpoint: "old-k8s-version-276459" does not appear in /home/jenkins/minikube-integration/19349-8084/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-276459" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (100.31s)

                                                
                                    
TestStartStop/group/no-preload/serial/EnableAddonAfterStop (12.38s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-673754 -n no-preload-673754
E0731 18:02:14.887164   15259 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/bridge-985288/client.crt: no such file or directory
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-673754 -n no-preload-673754: exit status 3 (3.167844447s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0731 18:02:17.603455   73292 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.61.126:22: connect: no route to host
	E0731 18:02:17.603484   73292 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.61.126:22: connect: no route to host

                                                
                                                
** /stderr **
start_stop_delete_test.go:239: status error: exit status 3 (may be ok)
start_stop_delete_test.go:241: expected post-stop host status to be -"Stopped"- but got *"Error"*
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p no-preload-673754 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
E0731 18:02:18.728109   15259 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/bridge-985288/client.crt: no such file or directory
start_stop_delete_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p no-preload-673754 --images=MetricsScraper=registry.k8s.io/echoserver:1.4: exit status 11 (6.152204993s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.61.126:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_a2d68fa011bbbda55500e636dff79fec124b29e3_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:248: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable dashboard -p no-preload-673754 --images=MetricsScraper=registry.k8s.io/echoserver:1.4": exit status 11
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-673754 -n no-preload-673754
E0731 18:02:23.848437   15259 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/bridge-985288/client.crt: no such file or directory
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-673754 -n no-preload-673754: exit status 3 (3.063468516s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0731 18:02:26.819513   73432 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.61.126:22: connect: no route to host
	E0731 18:02:26.819568   73432 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.61.126:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "no-preload-673754" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (12.38s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (12.38s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-094310 -n default-k8s-diff-port-094310
E0731 18:02:34.089079   15259 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/bridge-985288/client.crt: no such file or directory
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-094310 -n default-k8s-diff-port-094310: exit status 3 (3.168115395s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0731 18:02:36.803446   73536 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.72.197:22: connect: no route to host
	E0731 18:02:36.803465   73536 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.72.197:22: connect: no route to host

                                                
                                                
** /stderr **
start_stop_delete_test.go:239: status error: exit status 3 (may be ok)
start_stop_delete_test.go:241: expected post-stop host status to be -"Stopped"- but got *"Error"*
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-094310 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
E0731 18:02:40.752550   15259 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/flannel-985288/client.crt: no such file or directory
E0731 18:02:40.757803   15259 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/flannel-985288/client.crt: no such file or directory
E0731 18:02:40.768051   15259 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/flannel-985288/client.crt: no such file or directory
E0731 18:02:40.788321   15259 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/flannel-985288/client.crt: no such file or directory
E0731 18:02:40.828581   15259 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/flannel-985288/client.crt: no such file or directory
E0731 18:02:40.908889   15259 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/flannel-985288/client.crt: no such file or directory
E0731 18:02:41.069395   15259 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/flannel-985288/client.crt: no such file or directory
E0731 18:02:41.390037   15259 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/flannel-985288/client.crt: no such file or directory
E0731 18:02:42.031041   15259 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/flannel-985288/client.crt: no such file or directory
start_stop_delete_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-094310 --images=MetricsScraper=registry.k8s.io/echoserver:1.4: exit status 11 (6.152138055s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.72.197:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_a2d68fa011bbbda55500e636dff79fec124b29e3_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:248: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-094310 --images=MetricsScraper=registry.k8s.io/echoserver:1.4": exit status 11
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-094310 -n default-k8s-diff-port-094310
E0731 18:02:43.311703   15259 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/flannel-985288/client.crt: no such file or directory
E0731 18:02:44.205904   15259 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/kindnet-985288/client.crt: no such file or directory
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-094310 -n default-k8s-diff-port-094310: exit status 3 (3.063786416s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0731 18:02:46.019516   73647 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.72.197:22: connect: no route to host
	E0731 18:02:46.019554   73647 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.72.197:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-094310" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (12.38s)

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (12.38s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-436067 -n embed-certs-436067
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-436067 -n embed-certs-436067: exit status 3 (3.167964119s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0731 18:02:45.251475   73617 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.50.86:22: connect: no route to host
	E0731 18:02:45.251496   73617 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.50.86:22: connect: no route to host

                                                
                                                
** /stderr **
start_stop_delete_test.go:239: status error: exit status 3 (may be ok)
start_stop_delete_test.go:241: expected post-stop host status to be -"Stopped"- but got *"Error"*
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p embed-certs-436067 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
E0731 18:02:45.872008   15259 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/flannel-985288/client.crt: no such file or directory
start_stop_delete_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p embed-certs-436067 --images=MetricsScraper=registry.k8s.io/echoserver:1.4: exit status 11 (6.151662998s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.50.86:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_a2d68fa011bbbda55500e636dff79fec124b29e3_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:248: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable dashboard -p embed-certs-436067 --images=MetricsScraper=registry.k8s.io/echoserver:1.4": exit status 11
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-436067 -n embed-certs-436067
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-436067 -n embed-certs-436067: exit status 3 (3.063917004s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0731 18:02:54.467572   73770 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.50.86:22: connect: no route to host
	E0731 18:02:54.467595   73770 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.50.86:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "embed-certs-436067" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (12.38s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/SecondStart (744.17s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-276459 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0
E0731 18:04:02.673641   15259 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/flannel-985288/client.crt: no such file or directory
E0731 18:04:05.346643   15259 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/addons-190022/client.crt: no such file or directory
E0731 18:04:07.895152   15259 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/enable-default-cni-985288/client.crt: no such file or directory
E0731 18:04:21.705753   15259 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/auto-985288/client.crt: no such file or directory
E0731 18:04:48.855733   15259 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/enable-default-cni-985288/client.crt: no such file or directory
E0731 18:04:49.392403   15259 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/auto-985288/client.crt: no such file or directory
E0731 18:04:57.450056   15259 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/bridge-985288/client.crt: no such file or directory
E0731 18:05:00.361725   15259 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/kindnet-985288/client.crt: no such file or directory
E0731 18:05:24.594051   15259 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/flannel-985288/client.crt: no such file or directory
E0731 18:05:28.046553   15259 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/kindnet-985288/client.crt: no such file or directory
E0731 18:05:35.991729   15259 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/calico-985288/client.crt: no such file or directory
E0731 18:06:03.677223   15259 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/calico-985288/client.crt: no such file or directory
E0731 18:06:07.908486   15259 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/custom-flannel-985288/client.crt: no such file or directory
E0731 18:06:10.776063   15259 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/enable-default-cni-985288/client.crt: no such file or directory
E0731 18:06:35.593612   15259 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/custom-flannel-985288/client.crt: no such file or directory
E0731 18:07:13.609167   15259 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/bridge-985288/client.crt: no such file or directory
E0731 18:07:40.752602   15259 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/flannel-985288/client.crt: no such file or directory
E0731 18:07:41.291167   15259 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/bridge-985288/client.crt: no such file or directory
E0731 18:07:57.004683   15259 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/functional-496242/client.crt: no such file or directory
E0731 18:08:08.434192   15259 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/flannel-985288/client.crt: no such file or directory
E0731 18:08:26.933530   15259 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/enable-default-cni-985288/client.crt: no such file or directory
E0731 18:08:54.617196   15259 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/enable-default-cni-985288/client.crt: no such file or directory
E0731 18:09:05.346604   15259 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/addons-190022/client.crt: no such file or directory
E0731 18:09:20.054705   15259 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/functional-496242/client.crt: no such file or directory
E0731 18:09:21.705556   15259 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/auto-985288/client.crt: no such file or directory
E0731 18:10:00.361993   15259 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/kindnet-985288/client.crt: no such file or directory
E0731 18:10:35.992075   15259 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/calico-985288/client.crt: no such file or directory
E0731 18:11:07.908626   15259 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/custom-flannel-985288/client.crt: no such file or directory
E0731 18:12:13.609204   15259 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/bridge-985288/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p old-k8s-version-276459 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0: exit status 109 (12m20.652227331s)

                                                
                                                
-- stdout --
	* [old-k8s-version-276459] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19349
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19349-8084/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19349-8084/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.30.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.30.3
	* Using the kvm2 driver based on existing profile
	* Starting "old-k8s-version-276459" primary control-plane node in "old-k8s-version-276459" cluster
	* Restarting existing kvm2 VM for "old-k8s-version-276459" ...
	* Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0731 18:03:55.344211   74203 out.go:291] Setting OutFile to fd 1 ...
	I0731 18:03:55.344313   74203 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 18:03:55.344321   74203 out.go:304] Setting ErrFile to fd 2...
	I0731 18:03:55.344324   74203 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 18:03:55.344541   74203 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19349-8084/.minikube/bin
	I0731 18:03:55.345055   74203 out.go:298] Setting JSON to false
	I0731 18:03:55.345905   74203 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":6379,"bootTime":1722442656,"procs":196,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1065-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0731 18:03:55.345962   74203 start.go:139] virtualization: kvm guest
	I0731 18:03:55.347848   74203 out.go:177] * [old-k8s-version-276459] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0731 18:03:55.349045   74203 notify.go:220] Checking for updates...
	I0731 18:03:55.349052   74203 out.go:177]   - MINIKUBE_LOCATION=19349
	I0731 18:03:55.350359   74203 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0731 18:03:55.351583   74203 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19349-8084/kubeconfig
	I0731 18:03:55.352789   74203 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19349-8084/.minikube
	I0731 18:03:55.354046   74203 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0731 18:03:55.355244   74203 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0731 18:03:55.356819   74203 config.go:182] Loaded profile config "old-k8s-version-276459": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0731 18:03:55.357218   74203 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 18:03:55.357268   74203 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 18:03:55.372081   74203 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38815
	I0731 18:03:55.372424   74203 main.go:141] libmachine: () Calling .GetVersion
	I0731 18:03:55.372950   74203 main.go:141] libmachine: Using API Version  1
	I0731 18:03:55.372972   74203 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 18:03:55.373263   74203 main.go:141] libmachine: () Calling .GetMachineName
	I0731 18:03:55.373466   74203 main.go:141] libmachine: (old-k8s-version-276459) Calling .DriverName
	I0731 18:03:55.375198   74203 out.go:177] * Kubernetes 1.30.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.30.3
	I0731 18:03:55.376370   74203 driver.go:392] Setting default libvirt URI to qemu:///system
	I0731 18:03:55.376714   74203 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 18:03:55.376748   74203 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 18:03:55.390924   74203 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46311
	I0731 18:03:55.391380   74203 main.go:141] libmachine: () Calling .GetVersion
	I0731 18:03:55.391853   74203 main.go:141] libmachine: Using API Version  1
	I0731 18:03:55.391875   74203 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 18:03:55.392165   74203 main.go:141] libmachine: () Calling .GetMachineName
	I0731 18:03:55.392389   74203 main.go:141] libmachine: (old-k8s-version-276459) Calling .DriverName
	I0731 18:03:55.425283   74203 out.go:177] * Using the kvm2 driver based on existing profile
	I0731 18:03:55.426485   74203 start.go:297] selected driver: kvm2
	I0731 18:03:55.426517   74203 start.go:901] validating driver "kvm2" against &{Name:old-k8s-version-276459 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{K
ubernetesVersion:v1.20.0 ClusterName:old-k8s-version-276459 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.26 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280
h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0731 18:03:55.426632   74203 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0731 18:03:55.427322   74203 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0731 18:03:55.427419   74203 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19349-8084/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0731 18:03:55.441518   74203 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0731 18:03:55.441891   74203 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0731 18:03:55.441921   74203 cni.go:84] Creating CNI manager for ""
	I0731 18:03:55.441928   74203 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0731 18:03:55.441970   74203 start.go:340] cluster config:
	{Name:old-k8s-version-276459 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-276459 Namespace:default
APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.26 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p20
00.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0731 18:03:55.442088   74203 iso.go:125] acquiring lock: {Name:mk4494bb5a63902bf12a909c3109db85229bc516 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0731 18:03:55.443745   74203 out.go:177] * Starting "old-k8s-version-276459" primary control-plane node in "old-k8s-version-276459" cluster
	I0731 18:03:55.445026   74203 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0731 18:03:55.445062   74203 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19349-8084/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0731 18:03:55.445085   74203 cache.go:56] Caching tarball of preloaded images
	I0731 18:03:55.445157   74203 preload.go:172] Found /home/jenkins/minikube-integration/19349-8084/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0731 18:03:55.445167   74203 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0731 18:03:55.445250   74203 profile.go:143] Saving config to /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/old-k8s-version-276459/config.json ...
	I0731 18:03:55.445412   74203 start.go:360] acquireMachinesLock for old-k8s-version-276459: {Name:mkd78eacfab7d6c3058c6674434b3d889ec957e0 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0731 18:07:41.552094   74203 start.go:364] duration metric: took 3m46.106649241s to acquireMachinesLock for "old-k8s-version-276459"
	I0731 18:07:41.552166   74203 start.go:96] Skipping create...Using existing machine configuration
	I0731 18:07:41.552174   74203 fix.go:54] fixHost starting: 
	I0731 18:07:41.552553   74203 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 18:07:41.552595   74203 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 18:07:41.569965   74203 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43329
	I0731 18:07:41.570361   74203 main.go:141] libmachine: () Calling .GetVersion
	I0731 18:07:41.570884   74203 main.go:141] libmachine: Using API Version  1
	I0731 18:07:41.570905   74203 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 18:07:41.571247   74203 main.go:141] libmachine: () Calling .GetMachineName
	I0731 18:07:41.571454   74203 main.go:141] libmachine: (old-k8s-version-276459) Calling .DriverName
	I0731 18:07:41.571605   74203 main.go:141] libmachine: (old-k8s-version-276459) Calling .GetState
	I0731 18:07:41.573081   74203 fix.go:112] recreateIfNeeded on old-k8s-version-276459: state=Stopped err=<nil>
	I0731 18:07:41.573114   74203 main.go:141] libmachine: (old-k8s-version-276459) Calling .DriverName
	W0731 18:07:41.573276   74203 fix.go:138] unexpected machine state, will restart: <nil>
	I0731 18:07:41.575254   74203 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-276459" ...
	I0731 18:07:41.576648   74203 main.go:141] libmachine: (old-k8s-version-276459) Calling .Start
	I0731 18:07:41.576823   74203 main.go:141] libmachine: (old-k8s-version-276459) Ensuring networks are active...
	I0731 18:07:41.577511   74203 main.go:141] libmachine: (old-k8s-version-276459) Ensuring network default is active
	I0731 18:07:41.578015   74203 main.go:141] libmachine: (old-k8s-version-276459) Ensuring network mk-old-k8s-version-276459 is active
	I0731 18:07:41.578469   74203 main.go:141] libmachine: (old-k8s-version-276459) Getting domain xml...
	I0731 18:07:41.579474   74203 main.go:141] libmachine: (old-k8s-version-276459) Creating domain...
	I0731 18:07:42.876409   74203 main.go:141] libmachine: (old-k8s-version-276459) Waiting to get IP...
	I0731 18:07:42.877345   74203 main.go:141] libmachine: (old-k8s-version-276459) DBG | domain old-k8s-version-276459 has defined MAC address 52:54:00:79:9d:96 in network mk-old-k8s-version-276459
	I0731 18:07:42.877788   74203 main.go:141] libmachine: (old-k8s-version-276459) DBG | unable to find current IP address of domain old-k8s-version-276459 in network mk-old-k8s-version-276459
	I0731 18:07:42.877841   74203 main.go:141] libmachine: (old-k8s-version-276459) DBG | I0731 18:07:42.877763   75164 retry.go:31] will retry after 218.764988ms: waiting for machine to come up
	I0731 18:07:43.098230   74203 main.go:141] libmachine: (old-k8s-version-276459) DBG | domain old-k8s-version-276459 has defined MAC address 52:54:00:79:9d:96 in network mk-old-k8s-version-276459
	I0731 18:07:43.098697   74203 main.go:141] libmachine: (old-k8s-version-276459) DBG | unable to find current IP address of domain old-k8s-version-276459 in network mk-old-k8s-version-276459
	I0731 18:07:43.098722   74203 main.go:141] libmachine: (old-k8s-version-276459) DBG | I0731 18:07:43.098650   75164 retry.go:31] will retry after 285.579707ms: waiting for machine to come up
	I0731 18:07:43.386356   74203 main.go:141] libmachine: (old-k8s-version-276459) DBG | domain old-k8s-version-276459 has defined MAC address 52:54:00:79:9d:96 in network mk-old-k8s-version-276459
	I0731 18:07:43.386897   74203 main.go:141] libmachine: (old-k8s-version-276459) DBG | unable to find current IP address of domain old-k8s-version-276459 in network mk-old-k8s-version-276459
	I0731 18:07:43.386928   74203 main.go:141] libmachine: (old-k8s-version-276459) DBG | I0731 18:07:43.386852   75164 retry.go:31] will retry after 389.197253ms: waiting for machine to come up
	I0731 18:07:43.778183   74203 main.go:141] libmachine: (old-k8s-version-276459) DBG | domain old-k8s-version-276459 has defined MAC address 52:54:00:79:9d:96 in network mk-old-k8s-version-276459
	I0731 18:07:43.778672   74203 main.go:141] libmachine: (old-k8s-version-276459) DBG | unable to find current IP address of domain old-k8s-version-276459 in network mk-old-k8s-version-276459
	I0731 18:07:43.778698   74203 main.go:141] libmachine: (old-k8s-version-276459) DBG | I0731 18:07:43.778622   75164 retry.go:31] will retry after 484.5108ms: waiting for machine to come up
	I0731 18:07:44.264412   74203 main.go:141] libmachine: (old-k8s-version-276459) DBG | domain old-k8s-version-276459 has defined MAC address 52:54:00:79:9d:96 in network mk-old-k8s-version-276459
	I0731 18:07:44.265042   74203 main.go:141] libmachine: (old-k8s-version-276459) DBG | unable to find current IP address of domain old-k8s-version-276459 in network mk-old-k8s-version-276459
	I0731 18:07:44.265073   74203 main.go:141] libmachine: (old-k8s-version-276459) DBG | I0731 18:07:44.264955   75164 retry.go:31] will retry after 621.551625ms: waiting for machine to come up
	I0731 18:07:44.887986   74203 main.go:141] libmachine: (old-k8s-version-276459) DBG | domain old-k8s-version-276459 has defined MAC address 52:54:00:79:9d:96 in network mk-old-k8s-version-276459
	I0731 18:07:44.888534   74203 main.go:141] libmachine: (old-k8s-version-276459) DBG | unable to find current IP address of domain old-k8s-version-276459 in network mk-old-k8s-version-276459
	I0731 18:07:44.888563   74203 main.go:141] libmachine: (old-k8s-version-276459) DBG | I0731 18:07:44.888489   75164 retry.go:31] will retry after 610.567971ms: waiting for machine to come up
	I0731 18:07:45.500400   74203 main.go:141] libmachine: (old-k8s-version-276459) DBG | domain old-k8s-version-276459 has defined MAC address 52:54:00:79:9d:96 in network mk-old-k8s-version-276459
	I0731 18:07:45.500938   74203 main.go:141] libmachine: (old-k8s-version-276459) DBG | unable to find current IP address of domain old-k8s-version-276459 in network mk-old-k8s-version-276459
	I0731 18:07:45.500966   74203 main.go:141] libmachine: (old-k8s-version-276459) DBG | I0731 18:07:45.500890   75164 retry.go:31] will retry after 1.069889786s: waiting for machine to come up
	I0731 18:07:46.572634   74203 main.go:141] libmachine: (old-k8s-version-276459) DBG | domain old-k8s-version-276459 has defined MAC address 52:54:00:79:9d:96 in network mk-old-k8s-version-276459
	I0731 18:07:46.573085   74203 main.go:141] libmachine: (old-k8s-version-276459) DBG | unable to find current IP address of domain old-k8s-version-276459 in network mk-old-k8s-version-276459
	I0731 18:07:46.573128   74203 main.go:141] libmachine: (old-k8s-version-276459) DBG | I0731 18:07:46.572979   75164 retry.go:31] will retry after 1.047722466s: waiting for machine to come up
	I0731 18:07:47.622035   74203 main.go:141] libmachine: (old-k8s-version-276459) DBG | domain old-k8s-version-276459 has defined MAC address 52:54:00:79:9d:96 in network mk-old-k8s-version-276459
	I0731 18:07:47.622479   74203 main.go:141] libmachine: (old-k8s-version-276459) DBG | unable to find current IP address of domain old-k8s-version-276459 in network mk-old-k8s-version-276459
	I0731 18:07:47.622507   74203 main.go:141] libmachine: (old-k8s-version-276459) DBG | I0731 18:07:47.622435   75164 retry.go:31] will retry after 1.292658555s: waiting for machine to come up
	I0731 18:07:48.916255   74203 main.go:141] libmachine: (old-k8s-version-276459) DBG | domain old-k8s-version-276459 has defined MAC address 52:54:00:79:9d:96 in network mk-old-k8s-version-276459
	I0731 18:07:48.916755   74203 main.go:141] libmachine: (old-k8s-version-276459) DBG | unable to find current IP address of domain old-k8s-version-276459 in network mk-old-k8s-version-276459
	I0731 18:07:48.916778   74203 main.go:141] libmachine: (old-k8s-version-276459) DBG | I0731 18:07:48.916701   75164 retry.go:31] will retry after 2.006539925s: waiting for machine to come up
	I0731 18:07:50.925369   74203 main.go:141] libmachine: (old-k8s-version-276459) DBG | domain old-k8s-version-276459 has defined MAC address 52:54:00:79:9d:96 in network mk-old-k8s-version-276459
	I0731 18:07:50.925835   74203 main.go:141] libmachine: (old-k8s-version-276459) DBG | unable to find current IP address of domain old-k8s-version-276459 in network mk-old-k8s-version-276459
	I0731 18:07:50.925856   74203 main.go:141] libmachine: (old-k8s-version-276459) DBG | I0731 18:07:50.925790   75164 retry.go:31] will retry after 2.875577792s: waiting for machine to come up
	I0731 18:07:53.802729   74203 main.go:141] libmachine: (old-k8s-version-276459) DBG | domain old-k8s-version-276459 has defined MAC address 52:54:00:79:9d:96 in network mk-old-k8s-version-276459
	I0731 18:07:53.803164   74203 main.go:141] libmachine: (old-k8s-version-276459) DBG | unable to find current IP address of domain old-k8s-version-276459 in network mk-old-k8s-version-276459
	I0731 18:07:53.803192   74203 main.go:141] libmachine: (old-k8s-version-276459) DBG | I0731 18:07:53.803122   75164 retry.go:31] will retry after 2.352020729s: waiting for machine to come up
	I0731 18:07:56.157721   74203 main.go:141] libmachine: (old-k8s-version-276459) DBG | domain old-k8s-version-276459 has defined MAC address 52:54:00:79:9d:96 in network mk-old-k8s-version-276459
	I0731 18:07:56.158176   74203 main.go:141] libmachine: (old-k8s-version-276459) DBG | unable to find current IP address of domain old-k8s-version-276459 in network mk-old-k8s-version-276459
	I0731 18:07:56.158216   74203 main.go:141] libmachine: (old-k8s-version-276459) DBG | I0731 18:07:56.158110   75164 retry.go:31] will retry after 3.552824334s: waiting for machine to come up
	I0731 18:07:59.712249   74203 main.go:141] libmachine: (old-k8s-version-276459) DBG | domain old-k8s-version-276459 has defined MAC address 52:54:00:79:9d:96 in network mk-old-k8s-version-276459
	I0731 18:07:59.712759   74203 main.go:141] libmachine: (old-k8s-version-276459) Found IP for machine: 192.168.39.26
	I0731 18:07:59.712783   74203 main.go:141] libmachine: (old-k8s-version-276459) DBG | domain old-k8s-version-276459 has current primary IP address 192.168.39.26 and MAC address 52:54:00:79:9d:96 in network mk-old-k8s-version-276459
	I0731 18:07:59.712793   74203 main.go:141] libmachine: (old-k8s-version-276459) Reserving static IP address...
	I0731 18:07:59.713268   74203 main.go:141] libmachine: (old-k8s-version-276459) DBG | found host DHCP lease matching {name: "old-k8s-version-276459", mac: "52:54:00:79:9d:96", ip: "192.168.39.26"} in network mk-old-k8s-version-276459: {Iface:virbr3 ExpiryTime:2024-07-31 19:07:52 +0000 UTC Type:0 Mac:52:54:00:79:9d:96 Iaid: IPaddr:192.168.39.26 Prefix:24 Hostname:old-k8s-version-276459 Clientid:01:52:54:00:79:9d:96}
	I0731 18:07:59.713297   74203 main.go:141] libmachine: (old-k8s-version-276459) DBG | skip adding static IP to network mk-old-k8s-version-276459 - found existing host DHCP lease matching {name: "old-k8s-version-276459", mac: "52:54:00:79:9d:96", ip: "192.168.39.26"}
	I0731 18:07:59.713320   74203 main.go:141] libmachine: (old-k8s-version-276459) Reserved static IP address: 192.168.39.26
	I0731 18:07:59.713343   74203 main.go:141] libmachine: (old-k8s-version-276459) Waiting for SSH to be available...
	I0731 18:07:59.713355   74203 main.go:141] libmachine: (old-k8s-version-276459) DBG | Getting to WaitForSSH function...
	I0731 18:07:59.716068   74203 main.go:141] libmachine: (old-k8s-version-276459) DBG | domain old-k8s-version-276459 has defined MAC address 52:54:00:79:9d:96 in network mk-old-k8s-version-276459
	I0731 18:07:59.716456   74203 main.go:141] libmachine: (old-k8s-version-276459) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:79:9d:96", ip: ""} in network mk-old-k8s-version-276459: {Iface:virbr3 ExpiryTime:2024-07-31 19:07:52 +0000 UTC Type:0 Mac:52:54:00:79:9d:96 Iaid: IPaddr:192.168.39.26 Prefix:24 Hostname:old-k8s-version-276459 Clientid:01:52:54:00:79:9d:96}
	I0731 18:07:59.716490   74203 main.go:141] libmachine: (old-k8s-version-276459) DBG | domain old-k8s-version-276459 has defined IP address 192.168.39.26 and MAC address 52:54:00:79:9d:96 in network mk-old-k8s-version-276459
	I0731 18:07:59.716701   74203 main.go:141] libmachine: (old-k8s-version-276459) DBG | Using SSH client type: external
	I0731 18:07:59.716725   74203 main.go:141] libmachine: (old-k8s-version-276459) DBG | Using SSH private key: /home/jenkins/minikube-integration/19349-8084/.minikube/machines/old-k8s-version-276459/id_rsa (-rw-------)
	I0731 18:07:59.716762   74203 main.go:141] libmachine: (old-k8s-version-276459) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.26 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19349-8084/.minikube/machines/old-k8s-version-276459/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0731 18:07:59.716776   74203 main.go:141] libmachine: (old-k8s-version-276459) DBG | About to run SSH command:
	I0731 18:07:59.716792   74203 main.go:141] libmachine: (old-k8s-version-276459) DBG | exit 0
	I0731 18:07:59.847720   74203 main.go:141] libmachine: (old-k8s-version-276459) DBG | SSH cmd err, output: <nil>: 
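The restart sequence above is a plain poll-with-backoff loop: query the DHCP lease for the VM's MAC address, sleep a growing jittered interval, and give up after a deadline. A minimal Go sketch of that pattern, assuming a hypothetical lookupIP callback in place of the KVM driver's lease query:

    package main

    import (
        "errors"
        "fmt"
        "math/rand"
        "time"
    )

    // waitForIP polls lookupIP until it reports an address or the deadline passes.
    // lookupIP is a stand-in for the driver's DHCP-lease lookup (an assumption for
    // this sketch, not minikube's API).
    func waitForIP(lookupIP func() (string, error), timeout time.Duration) (string, error) {
        deadline := time.Now().Add(timeout)
        delay := 200 * time.Millisecond
        for time.Now().Before(deadline) {
            if ip, err := lookupIP(); err == nil && ip != "" {
                return ip, nil
            }
            // Jittered, growing delay, mirroring the 218ms -> 285ms -> ... steps in the log.
            time.Sleep(delay + time.Duration(rand.Int63n(int64(delay))))
            if delay < 4*time.Second {
                delay *= 2
            }
        }
        return "", errors.New("timed out waiting for machine to come up")
    }

    func main() {
        ip, err := waitForIP(func() (string, error) { return "192.168.39.26", nil }, time.Minute)
        fmt.Println(ip, err)
    }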
	I0731 18:07:59.848089   74203 main.go:141] libmachine: (old-k8s-version-276459) Calling .GetConfigRaw
	I0731 18:07:59.848847   74203 main.go:141] libmachine: (old-k8s-version-276459) Calling .GetIP
	I0731 18:07:59.851632   74203 main.go:141] libmachine: (old-k8s-version-276459) DBG | domain old-k8s-version-276459 has defined MAC address 52:54:00:79:9d:96 in network mk-old-k8s-version-276459
	I0731 18:07:59.852004   74203 main.go:141] libmachine: (old-k8s-version-276459) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:79:9d:96", ip: ""} in network mk-old-k8s-version-276459: {Iface:virbr3 ExpiryTime:2024-07-31 19:07:52 +0000 UTC Type:0 Mac:52:54:00:79:9d:96 Iaid: IPaddr:192.168.39.26 Prefix:24 Hostname:old-k8s-version-276459 Clientid:01:52:54:00:79:9d:96}
	I0731 18:07:59.852030   74203 main.go:141] libmachine: (old-k8s-version-276459) DBG | domain old-k8s-version-276459 has defined IP address 192.168.39.26 and MAC address 52:54:00:79:9d:96 in network mk-old-k8s-version-276459
	I0731 18:07:59.852321   74203 profile.go:143] Saving config to /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/old-k8s-version-276459/config.json ...
	I0731 18:07:59.852505   74203 machine.go:94] provisionDockerMachine start ...
	I0731 18:07:59.852524   74203 main.go:141] libmachine: (old-k8s-version-276459) Calling .DriverName
	I0731 18:07:59.852752   74203 main.go:141] libmachine: (old-k8s-version-276459) Calling .GetSSHHostname
	I0731 18:07:59.855198   74203 main.go:141] libmachine: (old-k8s-version-276459) DBG | domain old-k8s-version-276459 has defined MAC address 52:54:00:79:9d:96 in network mk-old-k8s-version-276459
	I0731 18:07:59.855596   74203 main.go:141] libmachine: (old-k8s-version-276459) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:79:9d:96", ip: ""} in network mk-old-k8s-version-276459: {Iface:virbr3 ExpiryTime:2024-07-31 19:07:52 +0000 UTC Type:0 Mac:52:54:00:79:9d:96 Iaid: IPaddr:192.168.39.26 Prefix:24 Hostname:old-k8s-version-276459 Clientid:01:52:54:00:79:9d:96}
	I0731 18:07:59.855626   74203 main.go:141] libmachine: (old-k8s-version-276459) DBG | domain old-k8s-version-276459 has defined IP address 192.168.39.26 and MAC address 52:54:00:79:9d:96 in network mk-old-k8s-version-276459
	I0731 18:07:59.855756   74203 main.go:141] libmachine: (old-k8s-version-276459) Calling .GetSSHPort
	I0731 18:07:59.855920   74203 main.go:141] libmachine: (old-k8s-version-276459) Calling .GetSSHKeyPath
	I0731 18:07:59.856071   74203 main.go:141] libmachine: (old-k8s-version-276459) Calling .GetSSHKeyPath
	I0731 18:07:59.856208   74203 main.go:141] libmachine: (old-k8s-version-276459) Calling .GetSSHUsername
	I0731 18:07:59.856372   74203 main.go:141] libmachine: Using SSH client type: native
	I0731 18:07:59.856601   74203 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.26 22 <nil> <nil>}
	I0731 18:07:59.856614   74203 main.go:141] libmachine: About to run SSH command:
	hostname
	I0731 18:07:59.963492   74203 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0731 18:07:59.963524   74203 main.go:141] libmachine: (old-k8s-version-276459) Calling .GetMachineName
	I0731 18:07:59.963762   74203 buildroot.go:166] provisioning hostname "old-k8s-version-276459"
	I0731 18:07:59.963794   74203 main.go:141] libmachine: (old-k8s-version-276459) Calling .GetMachineName
	I0731 18:07:59.963992   74203 main.go:141] libmachine: (old-k8s-version-276459) Calling .GetSSHHostname
	I0731 18:07:59.967261   74203 main.go:141] libmachine: (old-k8s-version-276459) DBG | domain old-k8s-version-276459 has defined MAC address 52:54:00:79:9d:96 in network mk-old-k8s-version-276459
	I0731 18:07:59.967725   74203 main.go:141] libmachine: (old-k8s-version-276459) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:79:9d:96", ip: ""} in network mk-old-k8s-version-276459: {Iface:virbr3 ExpiryTime:2024-07-31 19:07:52 +0000 UTC Type:0 Mac:52:54:00:79:9d:96 Iaid: IPaddr:192.168.39.26 Prefix:24 Hostname:old-k8s-version-276459 Clientid:01:52:54:00:79:9d:96}
	I0731 18:07:59.967762   74203 main.go:141] libmachine: (old-k8s-version-276459) DBG | domain old-k8s-version-276459 has defined IP address 192.168.39.26 and MAC address 52:54:00:79:9d:96 in network mk-old-k8s-version-276459
	I0731 18:07:59.967938   74203 main.go:141] libmachine: (old-k8s-version-276459) Calling .GetSSHPort
	I0731 18:07:59.968131   74203 main.go:141] libmachine: (old-k8s-version-276459) Calling .GetSSHKeyPath
	I0731 18:07:59.968316   74203 main.go:141] libmachine: (old-k8s-version-276459) Calling .GetSSHKeyPath
	I0731 18:07:59.968487   74203 main.go:141] libmachine: (old-k8s-version-276459) Calling .GetSSHUsername
	I0731 18:07:59.968687   74203 main.go:141] libmachine: Using SSH client type: native
	I0731 18:07:59.968872   74203 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.26 22 <nil> <nil>}
	I0731 18:07:59.968890   74203 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-276459 && echo "old-k8s-version-276459" | sudo tee /etc/hostname
	I0731 18:08:00.084360   74203 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-276459
	
	I0731 18:08:00.084390   74203 main.go:141] libmachine: (old-k8s-version-276459) Calling .GetSSHHostname
	I0731 18:08:00.087433   74203 main.go:141] libmachine: (old-k8s-version-276459) DBG | domain old-k8s-version-276459 has defined MAC address 52:54:00:79:9d:96 in network mk-old-k8s-version-276459
	I0731 18:08:00.087833   74203 main.go:141] libmachine: (old-k8s-version-276459) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:79:9d:96", ip: ""} in network mk-old-k8s-version-276459: {Iface:virbr3 ExpiryTime:2024-07-31 19:07:52 +0000 UTC Type:0 Mac:52:54:00:79:9d:96 Iaid: IPaddr:192.168.39.26 Prefix:24 Hostname:old-k8s-version-276459 Clientid:01:52:54:00:79:9d:96}
	I0731 18:08:00.087862   74203 main.go:141] libmachine: (old-k8s-version-276459) DBG | domain old-k8s-version-276459 has defined IP address 192.168.39.26 and MAC address 52:54:00:79:9d:96 in network mk-old-k8s-version-276459
	I0731 18:08:00.088016   74203 main.go:141] libmachine: (old-k8s-version-276459) Calling .GetSSHPort
	I0731 18:08:00.088187   74203 main.go:141] libmachine: (old-k8s-version-276459) Calling .GetSSHKeyPath
	I0731 18:08:00.088371   74203 main.go:141] libmachine: (old-k8s-version-276459) Calling .GetSSHKeyPath
	I0731 18:08:00.088521   74203 main.go:141] libmachine: (old-k8s-version-276459) Calling .GetSSHUsername
	I0731 18:08:00.088719   74203 main.go:141] libmachine: Using SSH client type: native
	I0731 18:08:00.088893   74203 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.26 22 <nil> <nil>}
	I0731 18:08:00.088915   74203 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-276459' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-276459/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-276459' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0731 18:08:00.200012   74203 main.go:141] libmachine: SSH cmd err, output: <nil>: 
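The shell fragment above is an idempotent /etc/hosts patch: replace an existing 127.0.1.1 entry if there is one, otherwise append a new line. A small Go sketch that only assembles the same command string; how it is then run over SSH is left out:

    package main

    import "fmt"

    // hostsPatchCmd builds the same shell fragment the log runs to pin the
    // hostname to 127.0.1.1 in /etc/hosts.
    func hostsPatchCmd(hostname string) string {
        return fmt.Sprintf(`
    if ! grep -xq '.*\s%[1]s' /etc/hosts; then
      if grep -xq '127.0.1.1\s.*' /etc/hosts; then
        sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 %[1]s/g' /etc/hosts;
      else
        echo '127.0.1.1 %[1]s' | sudo tee -a /etc/hosts;
      fi
    fi`, hostname)
    }

    func main() {
        fmt.Println(hostsPatchCmd("old-k8s-version-276459"))
    }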
	I0731 18:08:00.200038   74203 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19349-8084/.minikube CaCertPath:/home/jenkins/minikube-integration/19349-8084/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19349-8084/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19349-8084/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19349-8084/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19349-8084/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19349-8084/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19349-8084/.minikube}
	I0731 18:08:00.200069   74203 buildroot.go:174] setting up certificates
	I0731 18:08:00.200081   74203 provision.go:84] configureAuth start
	I0731 18:08:00.200093   74203 main.go:141] libmachine: (old-k8s-version-276459) Calling .GetMachineName
	I0731 18:08:00.200360   74203 main.go:141] libmachine: (old-k8s-version-276459) Calling .GetIP
	I0731 18:08:00.203352   74203 main.go:141] libmachine: (old-k8s-version-276459) DBG | domain old-k8s-version-276459 has defined MAC address 52:54:00:79:9d:96 in network mk-old-k8s-version-276459
	I0731 18:08:00.203694   74203 main.go:141] libmachine: (old-k8s-version-276459) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:79:9d:96", ip: ""} in network mk-old-k8s-version-276459: {Iface:virbr3 ExpiryTime:2024-07-31 19:07:52 +0000 UTC Type:0 Mac:52:54:00:79:9d:96 Iaid: IPaddr:192.168.39.26 Prefix:24 Hostname:old-k8s-version-276459 Clientid:01:52:54:00:79:9d:96}
	I0731 18:08:00.203721   74203 main.go:141] libmachine: (old-k8s-version-276459) DBG | domain old-k8s-version-276459 has defined IP address 192.168.39.26 and MAC address 52:54:00:79:9d:96 in network mk-old-k8s-version-276459
	I0731 18:08:00.203951   74203 main.go:141] libmachine: (old-k8s-version-276459) Calling .GetSSHHostname
	I0731 18:08:00.206061   74203 main.go:141] libmachine: (old-k8s-version-276459) DBG | domain old-k8s-version-276459 has defined MAC address 52:54:00:79:9d:96 in network mk-old-k8s-version-276459
	I0731 18:08:00.206398   74203 main.go:141] libmachine: (old-k8s-version-276459) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:79:9d:96", ip: ""} in network mk-old-k8s-version-276459: {Iface:virbr3 ExpiryTime:2024-07-31 19:07:52 +0000 UTC Type:0 Mac:52:54:00:79:9d:96 Iaid: IPaddr:192.168.39.26 Prefix:24 Hostname:old-k8s-version-276459 Clientid:01:52:54:00:79:9d:96}
	I0731 18:08:00.206432   74203 main.go:141] libmachine: (old-k8s-version-276459) DBG | domain old-k8s-version-276459 has defined IP address 192.168.39.26 and MAC address 52:54:00:79:9d:96 in network mk-old-k8s-version-276459
	I0731 18:08:00.206510   74203 provision.go:143] copyHostCerts
	I0731 18:08:00.206580   74203 exec_runner.go:144] found /home/jenkins/minikube-integration/19349-8084/.minikube/ca.pem, removing ...
	I0731 18:08:00.206591   74203 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19349-8084/.minikube/ca.pem
	I0731 18:08:00.206654   74203 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19349-8084/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19349-8084/.minikube/ca.pem (1082 bytes)
	I0731 18:08:00.206759   74203 exec_runner.go:144] found /home/jenkins/minikube-integration/19349-8084/.minikube/cert.pem, removing ...
	I0731 18:08:00.206769   74203 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19349-8084/.minikube/cert.pem
	I0731 18:08:00.206799   74203 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19349-8084/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19349-8084/.minikube/cert.pem (1123 bytes)
	I0731 18:08:00.206876   74203 exec_runner.go:144] found /home/jenkins/minikube-integration/19349-8084/.minikube/key.pem, removing ...
	I0731 18:08:00.206885   74203 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19349-8084/.minikube/key.pem
	I0731 18:08:00.206913   74203 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19349-8084/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19349-8084/.minikube/key.pem (1675 bytes)
	I0731 18:08:00.207047   74203 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19349-8084/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19349-8084/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19349-8084/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-276459 san=[127.0.0.1 192.168.39.26 localhost minikube old-k8s-version-276459]
	I0731 18:08:00.279363   74203 provision.go:177] copyRemoteCerts
	I0731 18:08:00.279423   74203 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0731 18:08:00.279456   74203 main.go:141] libmachine: (old-k8s-version-276459) Calling .GetSSHHostname
	I0731 18:08:00.282234   74203 main.go:141] libmachine: (old-k8s-version-276459) DBG | domain old-k8s-version-276459 has defined MAC address 52:54:00:79:9d:96 in network mk-old-k8s-version-276459
	I0731 18:08:00.282601   74203 main.go:141] libmachine: (old-k8s-version-276459) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:79:9d:96", ip: ""} in network mk-old-k8s-version-276459: {Iface:virbr3 ExpiryTime:2024-07-31 19:07:52 +0000 UTC Type:0 Mac:52:54:00:79:9d:96 Iaid: IPaddr:192.168.39.26 Prefix:24 Hostname:old-k8s-version-276459 Clientid:01:52:54:00:79:9d:96}
	I0731 18:08:00.282630   74203 main.go:141] libmachine: (old-k8s-version-276459) DBG | domain old-k8s-version-276459 has defined IP address 192.168.39.26 and MAC address 52:54:00:79:9d:96 in network mk-old-k8s-version-276459
	I0731 18:08:00.282751   74203 main.go:141] libmachine: (old-k8s-version-276459) Calling .GetSSHPort
	I0731 18:08:00.283004   74203 main.go:141] libmachine: (old-k8s-version-276459) Calling .GetSSHKeyPath
	I0731 18:08:00.283178   74203 main.go:141] libmachine: (old-k8s-version-276459) Calling .GetSSHUsername
	I0731 18:08:00.283361   74203 sshutil.go:53] new ssh client: &{IP:192.168.39.26 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19349-8084/.minikube/machines/old-k8s-version-276459/id_rsa Username:docker}
	I0731 18:08:00.365254   74203 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19349-8084/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0731 18:08:00.389729   74203 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19349-8084/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0731 18:08:00.413143   74203 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19349-8084/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0731 18:08:00.436040   74203 provision.go:87] duration metric: took 235.932619ms to configureAuth
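configureAuth regenerates a server certificate whose SANs cover every name and address the node may be reached by (127.0.0.1, 192.168.39.26, localhost, minikube, old-k8s-version-276459 in the provision line above). A self-signed sketch with Go's crypto/x509; the real flow signs with the minikube CA key rather than self-signing, and the key size and usages here are assumptions:

    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "fmt"
        "math/big"
        "net"
        "time"
    )

    func main() {
        key, err := rsa.GenerateKey(rand.Reader, 2048)
        if err != nil {
            panic(err)
        }
        tmpl := x509.Certificate{
            SerialNumber: big.NewInt(1),
            Subject:      pkix.Name{Organization: []string{"jenkins.old-k8s-version-276459"}},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().Add(26280 * time.Hour), // matches CertExpiration in the config dump
            // SANs taken from the provision line in the log.
            DNSNames:    []string{"localhost", "minikube", "old-k8s-version-276459"},
            IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.26")},
            KeyUsage:    x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage: []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
        }
        // Self-signed here for brevity; the real flow signs with the CA in certs/ca-key.pem.
        der, err := x509.CreateCertificate(rand.Reader, &tmpl, &tmpl, &key.PublicKey, key)
        if err != nil {
            panic(err)
        }
        fmt.Print(string(pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der})))
    }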
	I0731 18:08:00.436080   74203 buildroot.go:189] setting minikube options for container-runtime
	I0731 18:08:00.436288   74203 config.go:182] Loaded profile config "old-k8s-version-276459": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0731 18:08:00.436403   74203 main.go:141] libmachine: (old-k8s-version-276459) Calling .GetSSHHostname
	I0731 18:08:00.439184   74203 main.go:141] libmachine: (old-k8s-version-276459) DBG | domain old-k8s-version-276459 has defined MAC address 52:54:00:79:9d:96 in network mk-old-k8s-version-276459
	I0731 18:08:00.439543   74203 main.go:141] libmachine: (old-k8s-version-276459) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:79:9d:96", ip: ""} in network mk-old-k8s-version-276459: {Iface:virbr3 ExpiryTime:2024-07-31 19:07:52 +0000 UTC Type:0 Mac:52:54:00:79:9d:96 Iaid: IPaddr:192.168.39.26 Prefix:24 Hostname:old-k8s-version-276459 Clientid:01:52:54:00:79:9d:96}
	I0731 18:08:00.439575   74203 main.go:141] libmachine: (old-k8s-version-276459) DBG | domain old-k8s-version-276459 has defined IP address 192.168.39.26 and MAC address 52:54:00:79:9d:96 in network mk-old-k8s-version-276459
	I0731 18:08:00.439734   74203 main.go:141] libmachine: (old-k8s-version-276459) Calling .GetSSHPort
	I0731 18:08:00.439898   74203 main.go:141] libmachine: (old-k8s-version-276459) Calling .GetSSHKeyPath
	I0731 18:08:00.440093   74203 main.go:141] libmachine: (old-k8s-version-276459) Calling .GetSSHKeyPath
	I0731 18:08:00.440271   74203 main.go:141] libmachine: (old-k8s-version-276459) Calling .GetSSHUsername
	I0731 18:08:00.440450   74203 main.go:141] libmachine: Using SSH client type: native
	I0731 18:08:00.440661   74203 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.26 22 <nil> <nil>}
	I0731 18:08:00.440679   74203 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0731 18:08:00.707438   74203 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0731 18:08:00.707467   74203 machine.go:97] duration metric: took 854.948491ms to provisionDockerMachine
	I0731 18:08:00.707482   74203 start.go:293] postStartSetup for "old-k8s-version-276459" (driver="kvm2")
	I0731 18:08:00.707494   74203 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0731 18:08:00.707510   74203 main.go:141] libmachine: (old-k8s-version-276459) Calling .DriverName
	I0731 18:08:00.707811   74203 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0731 18:08:00.707837   74203 main.go:141] libmachine: (old-k8s-version-276459) Calling .GetSSHHostname
	I0731 18:08:00.710726   74203 main.go:141] libmachine: (old-k8s-version-276459) DBG | domain old-k8s-version-276459 has defined MAC address 52:54:00:79:9d:96 in network mk-old-k8s-version-276459
	I0731 18:08:00.711285   74203 main.go:141] libmachine: (old-k8s-version-276459) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:79:9d:96", ip: ""} in network mk-old-k8s-version-276459: {Iface:virbr3 ExpiryTime:2024-07-31 19:07:52 +0000 UTC Type:0 Mac:52:54:00:79:9d:96 Iaid: IPaddr:192.168.39.26 Prefix:24 Hostname:old-k8s-version-276459 Clientid:01:52:54:00:79:9d:96}
	I0731 18:08:00.711315   74203 main.go:141] libmachine: (old-k8s-version-276459) DBG | domain old-k8s-version-276459 has defined IP address 192.168.39.26 and MAC address 52:54:00:79:9d:96 in network mk-old-k8s-version-276459
	I0731 18:08:00.711458   74203 main.go:141] libmachine: (old-k8s-version-276459) Calling .GetSSHPort
	I0731 18:08:00.711703   74203 main.go:141] libmachine: (old-k8s-version-276459) Calling .GetSSHKeyPath
	I0731 18:08:00.711895   74203 main.go:141] libmachine: (old-k8s-version-276459) Calling .GetSSHUsername
	I0731 18:08:00.712049   74203 sshutil.go:53] new ssh client: &{IP:192.168.39.26 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19349-8084/.minikube/machines/old-k8s-version-276459/id_rsa Username:docker}
	I0731 18:08:00.793719   74203 ssh_runner.go:195] Run: cat /etc/os-release
	I0731 18:08:00.797858   74203 info.go:137] Remote host: Buildroot 2023.02.9
	I0731 18:08:00.797888   74203 filesync.go:126] Scanning /home/jenkins/minikube-integration/19349-8084/.minikube/addons for local assets ...
	I0731 18:08:00.797960   74203 filesync.go:126] Scanning /home/jenkins/minikube-integration/19349-8084/.minikube/files for local assets ...
	I0731 18:08:00.798038   74203 filesync.go:149] local asset: /home/jenkins/minikube-integration/19349-8084/.minikube/files/etc/ssl/certs/152592.pem -> 152592.pem in /etc/ssl/certs
	I0731 18:08:00.798130   74203 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0731 18:08:00.807013   74203 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19349-8084/.minikube/files/etc/ssl/certs/152592.pem --> /etc/ssl/certs/152592.pem (1708 bytes)
	I0731 18:08:00.829440   74203 start.go:296] duration metric: took 121.944271ms for postStartSetup
	I0731 18:08:00.829487   74203 fix.go:56] duration metric: took 19.277312964s for fixHost
	I0731 18:08:00.829518   74203 main.go:141] libmachine: (old-k8s-version-276459) Calling .GetSSHHostname
	I0731 18:08:00.832718   74203 main.go:141] libmachine: (old-k8s-version-276459) DBG | domain old-k8s-version-276459 has defined MAC address 52:54:00:79:9d:96 in network mk-old-k8s-version-276459
	I0731 18:08:00.833048   74203 main.go:141] libmachine: (old-k8s-version-276459) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:79:9d:96", ip: ""} in network mk-old-k8s-version-276459: {Iface:virbr3 ExpiryTime:2024-07-31 19:07:52 +0000 UTC Type:0 Mac:52:54:00:79:9d:96 Iaid: IPaddr:192.168.39.26 Prefix:24 Hostname:old-k8s-version-276459 Clientid:01:52:54:00:79:9d:96}
	I0731 18:08:00.833082   74203 main.go:141] libmachine: (old-k8s-version-276459) DBG | domain old-k8s-version-276459 has defined IP address 192.168.39.26 and MAC address 52:54:00:79:9d:96 in network mk-old-k8s-version-276459
	I0731 18:08:00.833317   74203 main.go:141] libmachine: (old-k8s-version-276459) Calling .GetSSHPort
	I0731 18:08:00.833533   74203 main.go:141] libmachine: (old-k8s-version-276459) Calling .GetSSHKeyPath
	I0731 18:08:00.833752   74203 main.go:141] libmachine: (old-k8s-version-276459) Calling .GetSSHKeyPath
	I0731 18:08:00.833887   74203 main.go:141] libmachine: (old-k8s-version-276459) Calling .GetSSHUsername
	I0731 18:08:00.834189   74203 main.go:141] libmachine: Using SSH client type: native
	I0731 18:08:00.834364   74203 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.26 22 <nil> <nil>}
	I0731 18:08:00.834377   74203 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0731 18:08:00.935834   74203 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722449280.899364873
	
	I0731 18:08:00.935853   74203 fix.go:216] guest clock: 1722449280.899364873
	I0731 18:08:00.935860   74203 fix.go:229] Guest: 2024-07-31 18:08:00.899364873 +0000 UTC Remote: 2024-07-31 18:08:00.829491013 +0000 UTC m=+245.518063325 (delta=69.87386ms)
	I0731 18:08:00.935894   74203 fix.go:200] guest clock delta is within tolerance: 69.87386ms
	I0731 18:08:00.935899   74203 start.go:83] releasing machines lock for "old-k8s-version-276459", held for 19.38376262s
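The guest-clock check above runs `date +%s.%N` on the VM and compares the result with the host clock; the restart proceeds without a resync because the ~70ms delta is inside tolerance. A compact sketch of that comparison; the 2s tolerance used here is illustrative, not necessarily the value minikube applies:

    package main

    import (
        "fmt"
        "strconv"
        "time"
    )

    // clockDelta parses the guest's `date +%s.%N` output and returns host minus guest.
    func clockDelta(guestOut string, host time.Time) (time.Duration, error) {
        secs, err := strconv.ParseFloat(guestOut, 64)
        if err != nil {
            return 0, err
        }
        guest := time.Unix(0, int64(secs*float64(time.Second)))
        return host.Sub(guest), nil
    }

    func main() {
        host := time.Date(2024, 7, 31, 18, 8, 0, 829491013, time.UTC) // the "Remote" timestamp from the log
        d, _ := clockDelta("1722449280.899364873", host)
        const tolerance = 2 * time.Second // illustrative tolerance (assumption)
        fmt.Printf("delta=%v within %v: %v\n", d, tolerance, d < tolerance && -d < tolerance)
    }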
	I0731 18:08:00.935937   74203 main.go:141] libmachine: (old-k8s-version-276459) Calling .DriverName
	I0731 18:08:00.936220   74203 main.go:141] libmachine: (old-k8s-version-276459) Calling .GetIP
	I0731 18:08:00.939282   74203 main.go:141] libmachine: (old-k8s-version-276459) DBG | domain old-k8s-version-276459 has defined MAC address 52:54:00:79:9d:96 in network mk-old-k8s-version-276459
	I0731 18:08:00.939691   74203 main.go:141] libmachine: (old-k8s-version-276459) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:79:9d:96", ip: ""} in network mk-old-k8s-version-276459: {Iface:virbr3 ExpiryTime:2024-07-31 19:07:52 +0000 UTC Type:0 Mac:52:54:00:79:9d:96 Iaid: IPaddr:192.168.39.26 Prefix:24 Hostname:old-k8s-version-276459 Clientid:01:52:54:00:79:9d:96}
	I0731 18:08:00.939722   74203 main.go:141] libmachine: (old-k8s-version-276459) DBG | domain old-k8s-version-276459 has defined IP address 192.168.39.26 and MAC address 52:54:00:79:9d:96 in network mk-old-k8s-version-276459
	I0731 18:08:00.939911   74203 main.go:141] libmachine: (old-k8s-version-276459) Calling .DriverName
	I0731 18:08:00.940506   74203 main.go:141] libmachine: (old-k8s-version-276459) Calling .DriverName
	I0731 18:08:00.940704   74203 main.go:141] libmachine: (old-k8s-version-276459) Calling .DriverName
	I0731 18:08:00.940790   74203 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0731 18:08:00.940831   74203 main.go:141] libmachine: (old-k8s-version-276459) Calling .GetSSHHostname
	I0731 18:08:00.940960   74203 ssh_runner.go:195] Run: cat /version.json
	I0731 18:08:00.941043   74203 main.go:141] libmachine: (old-k8s-version-276459) Calling .GetSSHHostname
	I0731 18:08:00.943883   74203 main.go:141] libmachine: (old-k8s-version-276459) DBG | domain old-k8s-version-276459 has defined MAC address 52:54:00:79:9d:96 in network mk-old-k8s-version-276459
	I0731 18:08:00.943909   74203 main.go:141] libmachine: (old-k8s-version-276459) DBG | domain old-k8s-version-276459 has defined MAC address 52:54:00:79:9d:96 in network mk-old-k8s-version-276459
	I0731 18:08:00.944361   74203 main.go:141] libmachine: (old-k8s-version-276459) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:79:9d:96", ip: ""} in network mk-old-k8s-version-276459: {Iface:virbr3 ExpiryTime:2024-07-31 19:07:52 +0000 UTC Type:0 Mac:52:54:00:79:9d:96 Iaid: IPaddr:192.168.39.26 Prefix:24 Hostname:old-k8s-version-276459 Clientid:01:52:54:00:79:9d:96}
	I0731 18:08:00.944405   74203 main.go:141] libmachine: (old-k8s-version-276459) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:79:9d:96", ip: ""} in network mk-old-k8s-version-276459: {Iface:virbr3 ExpiryTime:2024-07-31 19:07:52 +0000 UTC Type:0 Mac:52:54:00:79:9d:96 Iaid: IPaddr:192.168.39.26 Prefix:24 Hostname:old-k8s-version-276459 Clientid:01:52:54:00:79:9d:96}
	I0731 18:08:00.944429   74203 main.go:141] libmachine: (old-k8s-version-276459) DBG | domain old-k8s-version-276459 has defined IP address 192.168.39.26 and MAC address 52:54:00:79:9d:96 in network mk-old-k8s-version-276459
	I0731 18:08:00.944442   74203 main.go:141] libmachine: (old-k8s-version-276459) DBG | domain old-k8s-version-276459 has defined IP address 192.168.39.26 and MAC address 52:54:00:79:9d:96 in network mk-old-k8s-version-276459
	I0731 18:08:00.944542   74203 main.go:141] libmachine: (old-k8s-version-276459) Calling .GetSSHPort
	I0731 18:08:00.944639   74203 main.go:141] libmachine: (old-k8s-version-276459) Calling .GetSSHPort
	I0731 18:08:00.944766   74203 main.go:141] libmachine: (old-k8s-version-276459) Calling .GetSSHKeyPath
	I0731 18:08:00.944817   74203 main.go:141] libmachine: (old-k8s-version-276459) Calling .GetSSHKeyPath
	I0731 18:08:00.944899   74203 main.go:141] libmachine: (old-k8s-version-276459) Calling .GetSSHUsername
	I0731 18:08:00.944979   74203 main.go:141] libmachine: (old-k8s-version-276459) Calling .GetSSHUsername
	I0731 18:08:00.945039   74203 sshutil.go:53] new ssh client: &{IP:192.168.39.26 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19349-8084/.minikube/machines/old-k8s-version-276459/id_rsa Username:docker}
	I0731 18:08:00.945110   74203 sshutil.go:53] new ssh client: &{IP:192.168.39.26 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19349-8084/.minikube/machines/old-k8s-version-276459/id_rsa Username:docker}
	I0731 18:08:01.023818   74203 ssh_runner.go:195] Run: systemctl --version
	I0731 18:08:01.063390   74203 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0731 18:08:01.205084   74203 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0731 18:08:01.210972   74203 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0731 18:08:01.211049   74203 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0731 18:08:01.226156   74203 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0731 18:08:01.226180   74203 start.go:495] detecting cgroup driver to use...
	I0731 18:08:01.226257   74203 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0731 18:08:01.241506   74203 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0731 18:08:01.256615   74203 docker.go:217] disabling cri-docker service (if available) ...
	I0731 18:08:01.256671   74203 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0731 18:08:01.271515   74203 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0731 18:08:01.287213   74203 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0731 18:08:01.415827   74203 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0731 18:08:01.578122   74203 docker.go:233] disabling docker service ...
	I0731 18:08:01.578208   74203 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0731 18:08:01.596564   74203 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0731 18:08:01.611984   74203 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0731 18:08:01.748972   74203 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0731 18:08:01.896911   74203 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0731 18:08:01.912921   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0731 18:08:01.931671   74203 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0731 18:08:01.931749   74203 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 18:08:01.943737   74203 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0731 18:08:01.943798   74203 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 18:08:01.954571   74203 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 18:08:01.964733   74203 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 18:08:01.976087   74203 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0731 18:08:01.987193   74203 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0731 18:08:01.996620   74203 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0731 18:08:01.996670   74203 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0731 18:08:02.011046   74203 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0731 18:08:02.022199   74203 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0731 18:08:02.147855   74203 ssh_runner.go:195] Run: sudo systemctl restart crio
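Before the crio restart above, the runtime is reconfigured entirely through in-place sed edits of /etc/crio/crio.conf.d/02-crio.conf: the pause image is pinned and the cgroup manager switched to cgroupfs. A sketch that only assembles the same commands; executing them over SSH is omitted:

    package main

    import "fmt"

    // crioConfigCmds returns the shell commands the log runs to point CRI-O at the
    // desired pause image and cgroup manager, then restart the service.
    func crioConfigCmds(pauseImage, cgroupManager string) []string {
        conf := "/etc/crio/crio.conf.d/02-crio.conf"
        return []string{
            fmt.Sprintf(`sudo sed -i 's|^.*pause_image = .*$|pause_image = "%s"|' %s`, pauseImage, conf),
            fmt.Sprintf(`sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "%s"|' %s`, cgroupManager, conf),
            "sudo systemctl restart crio",
        }
    }

    func main() {
        for _, c := range crioConfigCmds("registry.k8s.io/pause:3.2", "cgroupfs") {
            fmt.Println(c)
        }
    }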
	I0731 18:08:02.309868   74203 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0731 18:08:02.309940   74203 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0731 18:08:02.314966   74203 start.go:563] Will wait 60s for crictl version
	I0731 18:08:02.315031   74203 ssh_runner.go:195] Run: which crictl
	I0731 18:08:02.318685   74203 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0731 18:08:02.359361   74203 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0731 18:08:02.359460   74203 ssh_runner.go:195] Run: crio --version
	I0731 18:08:02.387053   74203 ssh_runner.go:195] Run: crio --version
	I0731 18:08:02.417054   74203 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0731 18:08:02.418272   74203 main.go:141] libmachine: (old-k8s-version-276459) Calling .GetIP
	I0731 18:08:02.421211   74203 main.go:141] libmachine: (old-k8s-version-276459) DBG | domain old-k8s-version-276459 has defined MAC address 52:54:00:79:9d:96 in network mk-old-k8s-version-276459
	I0731 18:08:02.421714   74203 main.go:141] libmachine: (old-k8s-version-276459) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:79:9d:96", ip: ""} in network mk-old-k8s-version-276459: {Iface:virbr3 ExpiryTime:2024-07-31 19:07:52 +0000 UTC Type:0 Mac:52:54:00:79:9d:96 Iaid: IPaddr:192.168.39.26 Prefix:24 Hostname:old-k8s-version-276459 Clientid:01:52:54:00:79:9d:96}
	I0731 18:08:02.421743   74203 main.go:141] libmachine: (old-k8s-version-276459) DBG | domain old-k8s-version-276459 has defined IP address 192.168.39.26 and MAC address 52:54:00:79:9d:96 in network mk-old-k8s-version-276459
	I0731 18:08:02.421949   74203 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0731 18:08:02.425878   74203 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0731 18:08:02.438082   74203 kubeadm.go:883] updating cluster {Name:old-k8s-version-276459 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-276459 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.26 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0731 18:08:02.438222   74203 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0731 18:08:02.438293   74203 ssh_runner.go:195] Run: sudo crictl images --output json
	I0731 18:08:02.484113   74203 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0731 18:08:02.484189   74203 ssh_runner.go:195] Run: which lz4
	I0731 18:08:02.488365   74203 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0731 18:08:02.492321   74203 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0731 18:08:02.492352   74203 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19349-8084/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0731 18:08:03.946187   74203 crio.go:462] duration metric: took 1.457852426s to copy over tarball
	I0731 18:08:03.946261   74203 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0731 18:08:06.945760   74203 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.99946679s)
	I0731 18:08:06.945790   74203 crio.go:469] duration metric: took 2.999576832s to extract the tarball
	I0731 18:08:06.945800   74203 ssh_runner.go:146] rm: /preloaded.tar.lz4
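The preload step checks for /preloaded.tar.lz4 on the guest, copies the cached tarball over when it is missing, and unpacks it into /var with lz4 while preserving xattrs so image layers keep their file capabilities. A local sketch of the extraction command only; the existence check and scp step are omitted:

    package main

    import (
        "fmt"
        "os/exec"
    )

    // extractPreload mirrors the tar invocation in the log:
    // sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
    func extractPreload() error {
        cmd := exec.Command("sudo", "tar",
            "--xattrs", "--xattrs-include", "security.capability",
            "-I", "lz4", "-C", "/var", "-xf", "/preloaded.tar.lz4")
        out, err := cmd.CombinedOutput()
        if err != nil {
            return fmt.Errorf("extract preload: %v: %s", err, out)
        }
        return nil
    }

    func main() {
        if err := extractPreload(); err != nil {
            fmt.Println(err)
        }
    }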
	I0731 18:08:06.989081   74203 ssh_runner.go:195] Run: sudo crictl images --output json
	I0731 18:08:07.024521   74203 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0731 18:08:07.024545   74203 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0731 18:08:07.024615   74203 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0731 18:08:07.024645   74203 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0731 18:08:07.024695   74203 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0731 18:08:07.024729   74203 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0731 18:08:07.024718   74203 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0731 18:08:07.024780   74203 image.go:134] retrieving image: registry.k8s.io/pause:3.2
	I0731 18:08:07.024822   74203 image.go:134] retrieving image: registry.k8s.io/coredns:1.7.0
	I0731 18:08:07.024716   74203 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0731 18:08:07.026228   74203 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0731 18:08:07.026237   74203 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0731 18:08:07.026242   74203 image.go:177] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0731 18:08:07.026263   74203 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0731 18:08:07.026231   74203 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0731 18:08:07.026231   74203 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0731 18:08:07.026863   74203 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0731 18:08:07.027091   74203 image.go:177] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0731 18:08:07.282735   74203 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0731 18:08:07.284464   74203 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0731 18:08:07.287001   74203 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0731 18:08:07.305873   74203 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0731 18:08:07.307144   74203 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0731 18:08:07.311401   74203 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0731 18:08:07.318119   74203 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0731 18:08:07.366929   74203 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0731 18:08:07.366979   74203 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0731 18:08:07.367041   74203 ssh_runner.go:195] Run: which crictl
	I0731 18:08:07.393481   74203 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0731 18:08:07.393534   74203 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0731 18:08:07.393594   74203 ssh_runner.go:195] Run: which crictl
	I0731 18:08:07.441987   74203 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0731 18:08:07.442036   74203 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0731 18:08:07.442083   74203 ssh_runner.go:195] Run: which crictl
	I0731 18:08:07.449033   74203 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0731 18:08:07.449085   74203 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0731 18:08:07.449137   74203 ssh_runner.go:195] Run: which crictl
	I0731 18:08:07.465248   74203 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0731 18:08:07.465291   74203 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0731 18:08:07.465341   74203 ssh_runner.go:195] Run: which crictl
	I0731 18:08:07.476013   74203 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0731 18:08:07.476053   74203 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0731 18:08:07.476074   74203 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0731 18:08:07.476090   74203 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0731 18:08:07.476111   74203 ssh_runner.go:195] Run: which crictl
	I0731 18:08:07.476129   74203 ssh_runner.go:195] Run: which crictl
	I0731 18:08:07.476146   74203 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0731 18:08:07.476111   74203 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0731 18:08:07.476196   74203 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0731 18:08:07.476220   74203 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0731 18:08:07.476273   74203 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0731 18:08:07.592532   74203 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0731 18:08:07.592677   74203 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19349-8084/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0731 18:08:07.592709   74203 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19349-8084/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0731 18:08:07.592797   74203 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0731 18:08:07.637254   74203 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19349-8084/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0731 18:08:07.637276   74203 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19349-8084/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0731 18:08:07.637288   74203 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19349-8084/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0731 18:08:07.637292   74203 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19349-8084/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0731 18:08:07.640419   74203 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19349-8084/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0731 18:08:07.860814   74203 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0731 18:08:08.002115   74203 cache_images.go:92] duration metric: took 977.553376ms to LoadCachedImages
	W0731 18:08:08.002248   74203 out.go:239] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19349-8084/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2: no such file or directory
	X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19349-8084/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2: no such file or directory
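Each "needs transfer" line above is the result of comparing the image ID reported by `sudo crictl images --output json` against the ID the cache expects; a mismatch, or a missing image, means the image must be removed and reloaded. A rough sketch of that decision, with type and field names chosen for the example rather than taken from minikube:

    package main

    import "fmt"

    // runtimeImage is a hypothetical, trimmed view of one entry from crictl's JSON output.
    type runtimeImage struct {
        Ref string
        ID  string
    }

    // needsTransfer reports whether the runtime is missing the wanted image or has
    // it under a different image ID than the cache expects.
    func needsTransfer(wantRef, wantID string, have []runtimeImage) bool {
        for _, img := range have {
            if img.Ref == wantRef && img.ID == wantID {
                return false // already present with the expected content
            }
        }
        return true
    }

    func main() {
        have := []runtimeImage{{Ref: "registry.k8s.io/pause:3.2", ID: "sha256:80d28bed..."}}
        fmt.Println(needsTransfer("registry.k8s.io/coredns:1.7.0", "sha256:bfe3a36e...", have))
    }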
	I0731 18:08:08.002267   74203 kubeadm.go:934] updating node { 192.168.39.26 8443 v1.20.0 crio true true} ...
	I0731 18:08:08.002404   74203 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-276459 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.39.26
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-276459 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0731 18:08:08.002500   74203 ssh_runner.go:195] Run: crio config
	I0731 18:08:08.059237   74203 cni.go:84] Creating CNI manager for ""
	I0731 18:08:08.059264   74203 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0731 18:08:08.059281   74203 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0731 18:08:08.059313   74203 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.26 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-276459 NodeName:old-k8s-version-276459 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.26"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.26 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0731 18:08:08.059503   74203 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.26
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-276459"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.26
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.26"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0731 18:08:08.059575   74203 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0731 18:08:08.070299   74203 binaries.go:44] Found k8s binaries, skipping transfer
	I0731 18:08:08.070388   74203 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0731 18:08:08.082083   74203 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (429 bytes)
	I0731 18:08:08.101728   74203 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0731 18:08:08.120721   74203 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2120 bytes)
	I0731 18:08:08.137997   74203 ssh_runner.go:195] Run: grep 192.168.39.26	control-plane.minikube.internal$ /etc/hosts
	I0731 18:08:08.141797   74203 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.26	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
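	The one-liner above rewrites /etc/hosts by filtering out any stale control-plane.minikube.internal line, appending the current entry, and copying a temp file back into place. A minimal Go sketch of the same idea (not minikube's code; IP and hostname taken from the log line above):

	// Rebuild the hosts file without any old control-plane entry, then append the fresh one.
	package main

	import (
		"fmt"
		"os"
		"strings"
	)

	func main() {
		const entry = "192.168.39.26\tcontrol-plane.minikube.internal"
		data, err := os.ReadFile("/etc/hosts")
		if err != nil {
			panic(err)
		}
		var kept []string
		for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
			// Drop any existing line that ends in the control-plane hostname.
			if !strings.HasSuffix(line, "\tcontrol-plane.minikube.internal") {
				kept = append(kept, line)
			}
		}
		kept = append(kept, entry)
		tmp := fmt.Sprintf("/tmp/hosts.%d", os.Getpid())
		if err := os.WriteFile(tmp, []byte(strings.Join(kept, "\n")+"\n"), 0644); err != nil {
			panic(err)
		}
		// As in the log, a final "sudo cp" of the temp file replaces /etc/hosts in one step.
		fmt.Println("staged hosts file at", tmp)
	}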
	I0731 18:08:08.156861   74203 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0731 18:08:08.287700   74203 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0731 18:08:08.307598   74203 certs.go:68] Setting up /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/old-k8s-version-276459 for IP: 192.168.39.26
	I0731 18:08:08.307623   74203 certs.go:194] generating shared ca certs ...
	I0731 18:08:08.307644   74203 certs.go:226] acquiring lock for ca certs: {Name:mk49eed439085f9c30746706460ce213570d1997 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 18:08:08.307811   74203 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19349-8084/.minikube/ca.key
	I0731 18:08:08.307855   74203 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19349-8084/.minikube/proxy-client-ca.key
	I0731 18:08:08.307868   74203 certs.go:256] generating profile certs ...
	I0731 18:08:08.307987   74203 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/old-k8s-version-276459/client.key
	I0731 18:08:08.308062   74203 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/old-k8s-version-276459/apiserver.key.7c620cac
	I0731 18:08:08.308123   74203 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/old-k8s-version-276459/proxy-client.key
	I0731 18:08:08.308283   74203 certs.go:484] found cert: /home/jenkins/minikube-integration/19349-8084/.minikube/certs/15259.pem (1338 bytes)
	W0731 18:08:08.308315   74203 certs.go:480] ignoring /home/jenkins/minikube-integration/19349-8084/.minikube/certs/15259_empty.pem, impossibly tiny 0 bytes
	I0731 18:08:08.308324   74203 certs.go:484] found cert: /home/jenkins/minikube-integration/19349-8084/.minikube/certs/ca-key.pem (1675 bytes)
	I0731 18:08:08.308362   74203 certs.go:484] found cert: /home/jenkins/minikube-integration/19349-8084/.minikube/certs/ca.pem (1082 bytes)
	I0731 18:08:08.308382   74203 certs.go:484] found cert: /home/jenkins/minikube-integration/19349-8084/.minikube/certs/cert.pem (1123 bytes)
	I0731 18:08:08.308402   74203 certs.go:484] found cert: /home/jenkins/minikube-integration/19349-8084/.minikube/certs/key.pem (1675 bytes)
	I0731 18:08:08.308438   74203 certs.go:484] found cert: /home/jenkins/minikube-integration/19349-8084/.minikube/files/etc/ssl/certs/152592.pem (1708 bytes)
	I0731 18:08:08.309095   74203 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19349-8084/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0731 18:08:08.355508   74203 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19349-8084/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0731 18:08:08.391999   74203 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19349-8084/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0731 18:08:08.427937   74203 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19349-8084/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0731 18:08:08.456268   74203 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/old-k8s-version-276459/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0731 18:08:08.486991   74203 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/old-k8s-version-276459/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0731 18:08:08.519564   74203 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/old-k8s-version-276459/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0731 18:08:08.557029   74203 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/old-k8s-version-276459/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0731 18:08:08.583971   74203 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19349-8084/.minikube/files/etc/ssl/certs/152592.pem --> /usr/share/ca-certificates/152592.pem (1708 bytes)
	I0731 18:08:08.608505   74203 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19349-8084/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0731 18:08:08.630279   74203 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19349-8084/.minikube/certs/15259.pem --> /usr/share/ca-certificates/15259.pem (1338 bytes)
	I0731 18:08:08.655012   74203 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0731 18:08:08.671907   74203 ssh_runner.go:195] Run: openssl version
	I0731 18:08:08.677538   74203 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/152592.pem && ln -fs /usr/share/ca-certificates/152592.pem /etc/ssl/certs/152592.pem"
	I0731 18:08:08.687877   74203 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/152592.pem
	I0731 18:08:08.692201   74203 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 31 16:54 /usr/share/ca-certificates/152592.pem
	I0731 18:08:08.692258   74203 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/152592.pem
	I0731 18:08:08.698563   74203 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/152592.pem /etc/ssl/certs/3ec20f2e.0"
	I0731 18:08:08.708986   74203 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0731 18:08:08.719132   74203 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0731 18:08:08.723242   74203 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 31 16:42 /usr/share/ca-certificates/minikubeCA.pem
	I0731 18:08:08.723299   74203 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0731 18:08:08.729032   74203 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0731 18:08:08.739306   74203 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/15259.pem && ln -fs /usr/share/ca-certificates/15259.pem /etc/ssl/certs/15259.pem"
	I0731 18:08:08.749759   74203 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/15259.pem
	I0731 18:08:08.754167   74203 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 31 16:54 /usr/share/ca-certificates/15259.pem
	I0731 18:08:08.754228   74203 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/15259.pem
	I0731 18:08:08.759786   74203 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/15259.pem /etc/ssl/certs/51391683.0"
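	The three repetitions above all follow the same trust-store pattern: "openssl x509 -hash -noout" yields the subject hash, and a /etc/ssl/certs/<hash>.0 symlink makes OpenSSL-based clients find the certificate. A minimal sketch, assuming root and using the minikubeCA path from the log (this is illustrative, not minikube's implementation):

	// Link a CA certificate into /etc/ssl/certs under its OpenSSL subject-hash name.
	package main

	import (
		"fmt"
		"os"
		"os/exec"
		"strings"
	)

	func main() {
		cert := "/usr/share/ca-certificates/minikubeCA.pem"
		out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", cert).Output()
		if err != nil {
			panic(err)
		}
		link := "/etc/ssl/certs/" + strings.TrimSpace(string(out)) + ".0"
		// Recreate the symlink idempotently, like the "test -L || ln -fs" shell step above.
		_ = os.Remove(link)
		if err := os.Symlink(cert, link); err != nil {
			panic(err)
		}
		fmt.Println("linked", link, "->", cert)
	}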
	I0731 18:08:08.770180   74203 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0731 18:08:08.775414   74203 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0731 18:08:08.781830   74203 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0731 18:08:08.787876   74203 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0731 18:08:08.793927   74203 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0731 18:08:08.800090   74203 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0731 18:08:08.806169   74203 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
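	The run of "-checkend 86400" probes above checks each control-plane certificate for expiry within the next 24 hours; openssl exits non-zero when a certificate is that close to expiring. A minimal sketch of the same check, with cert paths copied from the log (a sketch, not minikube's own code):

	// Report which certificates expire within 24h using "openssl x509 -checkend".
	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		certs := []string{
			"/var/lib/minikube/certs/apiserver-kubelet-client.crt",
			"/var/lib/minikube/certs/etcd/server.crt",
			"/var/lib/minikube/certs/front-proxy-client.crt",
		}
		for _, c := range certs {
			// -checkend 86400: non-zero exit if the cert expires within 86400 seconds.
			if err := exec.Command("openssl", "x509", "-noout", "-in", c, "-checkend", "86400").Run(); err != nil {
				fmt.Printf("%s: expires within 24h or is unreadable (%v)\n", c, err)
				continue
			}
			fmt.Printf("%s: valid for at least 24h\n", c)
		}
	}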
	I0731 18:08:08.811895   74203 kubeadm.go:392] StartCluster: {Name:old-k8s-version-276459 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-276459 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.26 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0731 18:08:08.811983   74203 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0731 18:08:08.812040   74203 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0731 18:08:08.853889   74203 cri.go:89] found id: ""
	I0731 18:08:08.853989   74203 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0731 18:08:08.863817   74203 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0731 18:08:08.863837   74203 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0731 18:08:08.863887   74203 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0731 18:08:08.873411   74203 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0731 18:08:08.874616   74203 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-276459" does not appear in /home/jenkins/minikube-integration/19349-8084/kubeconfig
	I0731 18:08:08.875356   74203 kubeconfig.go:62] /home/jenkins/minikube-integration/19349-8084/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-276459" cluster setting kubeconfig missing "old-k8s-version-276459" context setting]
	I0731 18:08:08.876650   74203 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19349-8084/kubeconfig: {Name:mkf2340bc1ada3c683f132aca37228d7588e1fdb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 18:08:08.918433   74203 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0731 18:08:08.931013   74203 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.39.26
	I0731 18:08:08.931067   74203 kubeadm.go:1160] stopping kube-system containers ...
	I0731 18:08:08.931083   74203 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0731 18:08:08.931163   74203 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0731 18:08:08.964683   74203 cri.go:89] found id: ""
	I0731 18:08:08.964759   74203 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0731 18:08:08.980459   74203 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0731 18:08:08.989969   74203 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0731 18:08:08.989997   74203 kubeadm.go:157] found existing configuration files:
	
	I0731 18:08:08.990049   74203 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0731 18:08:08.999015   74203 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0731 18:08:08.999074   74203 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0731 18:08:09.008055   74203 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0731 18:08:09.016532   74203 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0731 18:08:09.016599   74203 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0731 18:08:09.025791   74203 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0731 18:08:09.034160   74203 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0731 18:08:09.034227   74203 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0731 18:08:09.043381   74203 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0731 18:08:09.053419   74203 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0731 18:08:09.053832   74203 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
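	The four grep/rm pairs above implement a simple stale-config cleanup: each kubeconfig under /etc/kubernetes is kept only if it already points at https://control-plane.minikube.internal:8443, otherwise it is removed so kubeadm can regenerate it. A minimal sketch of that loop, assuming passwordless sudo (file names from the log; the loop itself is illustrative):

	// Remove kubeconfigs that do not reference the expected control-plane endpoint.
	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		endpoint := "https://control-plane.minikube.internal:8443"
		files := []string{"admin.conf", "kubelet.conf", "controller-manager.conf", "scheduler.conf"}
		for _, f := range files {
			path := "/etc/kubernetes/" + f
			// grep exits non-zero when the endpoint (or the file itself) is missing.
			if err := exec.Command("sudo", "grep", endpoint, path).Run(); err != nil {
				fmt.Printf("%s: endpoint not found (%v), removing\n", path, err)
				_ = exec.Command("sudo", "rm", "-f", path).Run()
				continue
			}
			fmt.Printf("%s: already points at %s\n", path, endpoint)
		}
	}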
	I0731 18:08:09.064966   74203 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0731 18:08:09.073962   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0731 18:08:09.198503   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0731 18:08:10.048258   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0731 18:08:10.283812   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0731 18:08:10.390012   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
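	The five commands above run the individual "kubeadm init phase" steps in order (certs, kubeconfig, kubelet-start, control-plane, etcd) against the generated /var/tmp/minikube/kubeadm.yaml, with PATH pointed at the pinned v1.20.0 binaries. A minimal sketch of the same sequence, mirroring the shell invocations in the log (illustrative, not minikube's implementation):

	// Run the kubeadm init phases used by the restart path, stopping on the first failure.
	package main

	import (
		"fmt"
		"log"
		"os/exec"
	)

	func main() {
		phases := []string{"certs all", "kubeconfig all", "kubelet-start", "control-plane all", "etcd local"}
		for _, p := range phases {
			cmd := fmt.Sprintf(`sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase %s --config /var/tmp/minikube/kubeadm.yaml`, p)
			out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
			if err != nil {
				log.Fatalf("kubeadm init phase %s failed: %v\n%s", p, err, out)
			}
			fmt.Printf("kubeadm init phase %s: ok\n", p)
		}
	}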
	I0731 18:08:10.477969   74203 api_server.go:52] waiting for apiserver process to appear ...
	I0731 18:08:10.478093   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:08:10.978427   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:08:11.478715   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:08:11.978685   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:08:12.478211   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:08:12.978218   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:08:13.478493   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:08:13.978778   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:08:14.478489   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:08:14.978983   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:08:15.478444   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:08:15.978399   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:08:16.478641   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:08:16.979036   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:08:17.479053   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:08:17.978819   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:08:18.478280   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:08:18.978448   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:08:19.479056   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:08:19.978969   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:08:20.478866   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:08:20.978311   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:08:21.478333   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:08:21.978289   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:08:22.479138   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:08:22.978406   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:08:23.478138   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:08:23.979189   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:08:24.478688   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:08:24.978795   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:08:25.478432   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:08:25.978823   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:08:26.478416   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:08:26.979075   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:08:27.478228   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:08:27.978970   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:08:28.478319   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:08:28.979028   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:08:29.479060   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:08:29.978544   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:08:30.478387   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:08:30.978443   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:08:31.478484   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:08:31.979231   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:08:32.478928   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:08:32.978790   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:08:33.478426   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:08:33.978839   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:08:34.478411   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:08:34.978378   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:08:35.478287   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:08:35.978546   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:08:36.479138   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:08:36.979173   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:08:37.478319   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:08:37.978768   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:08:38.479161   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:08:38.979129   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:08:39.478128   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:08:39.979147   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:08:40.478967   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:08:40.978610   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:08:41.479192   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:08:41.978737   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:08:42.479051   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:08:42.978274   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:08:43.478957   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:08:43.978973   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:08:44.478269   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:08:44.978737   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:08:45.478411   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:08:45.978802   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:08:46.478407   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:08:46.978134   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:08:47.479125   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:08:47.978991   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:08:48.478597   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:08:48.978742   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:08:49.479320   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:08:49.978288   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:08:50.478112   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:08:50.978272   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:08:51.478319   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:08:51.978880   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:08:52.479176   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:08:52.979001   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:08:53.478508   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:08:53.978517   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:08:54.478857   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:08:54.978290   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:08:55.478727   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:08:55.978552   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:08:56.478246   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:08:56.978732   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:08:57.478262   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:08:57.978216   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:08:58.478212   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:08:58.978406   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:08:59.478270   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:08:59.978221   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:09:00.478785   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:09:00.978350   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:09:01.478635   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:09:01.978192   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:09:02.478480   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:09:02.979021   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:09:03.478366   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:09:03.978984   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:09:04.479143   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:09:04.978913   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:09:05.478608   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:09:05.978345   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:09:06.478435   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:09:06.978551   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:09:07.478131   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:09:07.978354   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:09:08.478977   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:09:08.979122   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:09:09.478279   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:09:09.978350   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
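	The repeated pgrep lines above are a fixed-interval wait: roughly every 500ms the runner asks whether a kube-apiserver process exists yet, and after the deadline it falls through to log collection (below). A minimal sketch of that loop, with the interval and pattern taken from the log (a sketch under those assumptions, not minikube's code):

	// Poll for a kube-apiserver process every 500ms until a deadline passes.
	package main

	import (
		"fmt"
		"os/exec"
		"time"
	)

	func main() {
		deadline := time.Now().Add(60 * time.Second)
		for time.Now().Before(deadline) {
			// pgrep exits 0 only when a matching process exists.
			if exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run() == nil {
				fmt.Println("kube-apiserver process found")
				return
			}
			time.Sleep(500 * time.Millisecond)
		}
		fmt.Println("timed out waiting for kube-apiserver; collecting diagnostics instead")
	}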
	I0731 18:09:10.479086   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 18:09:10.479175   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 18:09:10.516364   74203 cri.go:89] found id: ""
	I0731 18:09:10.516389   74203 logs.go:276] 0 containers: []
	W0731 18:09:10.516405   74203 logs.go:278] No container was found matching "kube-apiserver"
	I0731 18:09:10.516411   74203 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 18:09:10.516464   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 18:09:10.549398   74203 cri.go:89] found id: ""
	I0731 18:09:10.549422   74203 logs.go:276] 0 containers: []
	W0731 18:09:10.549433   74203 logs.go:278] No container was found matching "etcd"
	I0731 18:09:10.549440   74203 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 18:09:10.549503   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 18:09:10.584290   74203 cri.go:89] found id: ""
	I0731 18:09:10.584314   74203 logs.go:276] 0 containers: []
	W0731 18:09:10.584322   74203 logs.go:278] No container was found matching "coredns"
	I0731 18:09:10.584327   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 18:09:10.584381   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 18:09:10.615832   74203 cri.go:89] found id: ""
	I0731 18:09:10.615860   74203 logs.go:276] 0 containers: []
	W0731 18:09:10.615871   74203 logs.go:278] No container was found matching "kube-scheduler"
	I0731 18:09:10.615878   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 18:09:10.615941   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 18:09:10.647597   74203 cri.go:89] found id: ""
	I0731 18:09:10.647617   74203 logs.go:276] 0 containers: []
	W0731 18:09:10.647624   74203 logs.go:278] No container was found matching "kube-proxy"
	I0731 18:09:10.647629   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 18:09:10.647686   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 18:09:10.680981   74203 cri.go:89] found id: ""
	I0731 18:09:10.681016   74203 logs.go:276] 0 containers: []
	W0731 18:09:10.681027   74203 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 18:09:10.681033   74203 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 18:09:10.681093   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 18:09:10.713798   74203 cri.go:89] found id: ""
	I0731 18:09:10.713839   74203 logs.go:276] 0 containers: []
	W0731 18:09:10.713851   74203 logs.go:278] No container was found matching "kindnet"
	I0731 18:09:10.713865   74203 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 18:09:10.713937   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 18:09:10.746378   74203 cri.go:89] found id: ""
	I0731 18:09:10.746405   74203 logs.go:276] 0 containers: []
	W0731 18:09:10.746413   74203 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 18:09:10.746423   74203 logs.go:123] Gathering logs for kubelet ...
	I0731 18:09:10.746439   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 18:09:10.799156   74203 logs.go:123] Gathering logs for dmesg ...
	I0731 18:09:10.799187   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 18:09:10.812388   74203 logs.go:123] Gathering logs for describe nodes ...
	I0731 18:09:10.812413   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 18:09:10.932251   74203 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 18:09:10.932271   74203 logs.go:123] Gathering logs for CRI-O ...
	I0731 18:09:10.932285   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 18:09:10.996810   74203 logs.go:123] Gathering logs for container status ...
	I0731 18:09:10.996840   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
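	When the apiserver never appears, the runner gathers diagnostics from several sources even though "kubectl describe nodes" fails against the dead endpoint: kubelet and CRI-O journals, dmesg, and a crictl listing with a docker fallback. A minimal sketch of that collection pass, with the commands copied from the log lines above (ordering and error handling are illustrative):

	// Collect node diagnostics via the same shell commands the log shows.
	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		sources := []struct{ name, cmd string }{
			{"kubelet", "sudo journalctl -u kubelet -n 400"},
			{"dmesg", "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"},
			{"describe nodes", "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"},
			{"CRI-O", "sudo journalctl -u crio -n 400"},
			{"container status", "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"},
		}
		for _, s := range sources {
			// Errors are reported but do not stop the remaining collectors.
			out, err := exec.Command("/bin/bash", "-c", s.cmd).CombinedOutput()
			fmt.Printf("==> %s (err: %v)\n%s\n", s.name, err, out)
		}
	}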
	I0731 18:09:13.533936   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:09:13.549194   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 18:09:13.549250   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 18:09:13.599350   74203 cri.go:89] found id: ""
	I0731 18:09:13.599389   74203 logs.go:276] 0 containers: []
	W0731 18:09:13.599400   74203 logs.go:278] No container was found matching "kube-apiserver"
	I0731 18:09:13.599407   74203 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 18:09:13.599466   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 18:09:13.651736   74203 cri.go:89] found id: ""
	I0731 18:09:13.651771   74203 logs.go:276] 0 containers: []
	W0731 18:09:13.651791   74203 logs.go:278] No container was found matching "etcd"
	I0731 18:09:13.651798   74203 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 18:09:13.651855   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 18:09:13.699804   74203 cri.go:89] found id: ""
	I0731 18:09:13.699832   74203 logs.go:276] 0 containers: []
	W0731 18:09:13.699841   74203 logs.go:278] No container was found matching "coredns"
	I0731 18:09:13.699846   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 18:09:13.699906   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 18:09:13.732760   74203 cri.go:89] found id: ""
	I0731 18:09:13.732781   74203 logs.go:276] 0 containers: []
	W0731 18:09:13.732788   74203 logs.go:278] No container was found matching "kube-scheduler"
	I0731 18:09:13.732794   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 18:09:13.732849   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 18:09:13.766865   74203 cri.go:89] found id: ""
	I0731 18:09:13.766892   74203 logs.go:276] 0 containers: []
	W0731 18:09:13.766902   74203 logs.go:278] No container was found matching "kube-proxy"
	I0731 18:09:13.766910   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 18:09:13.766964   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 18:09:13.804706   74203 cri.go:89] found id: ""
	I0731 18:09:13.804733   74203 logs.go:276] 0 containers: []
	W0731 18:09:13.804743   74203 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 18:09:13.804757   74203 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 18:09:13.804821   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 18:09:13.838432   74203 cri.go:89] found id: ""
	I0731 18:09:13.838461   74203 logs.go:276] 0 containers: []
	W0731 18:09:13.838472   74203 logs.go:278] No container was found matching "kindnet"
	I0731 18:09:13.838479   74203 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 18:09:13.838534   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 18:09:13.870455   74203 cri.go:89] found id: ""
	I0731 18:09:13.870480   74203 logs.go:276] 0 containers: []
	W0731 18:09:13.870490   74203 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 18:09:13.870498   74203 logs.go:123] Gathering logs for kubelet ...
	I0731 18:09:13.870510   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 18:09:13.922911   74203 logs.go:123] Gathering logs for dmesg ...
	I0731 18:09:13.922947   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 18:09:13.936075   74203 logs.go:123] Gathering logs for describe nodes ...
	I0731 18:09:13.936098   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 18:09:14.006766   74203 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 18:09:14.006790   74203 logs.go:123] Gathering logs for CRI-O ...
	I0731 18:09:14.006810   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 18:09:14.071066   74203 logs.go:123] Gathering logs for container status ...
	I0731 18:09:14.071100   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 18:09:16.615212   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:09:16.627439   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 18:09:16.627499   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 18:09:16.660764   74203 cri.go:89] found id: ""
	I0731 18:09:16.660785   74203 logs.go:276] 0 containers: []
	W0731 18:09:16.660792   74203 logs.go:278] No container was found matching "kube-apiserver"
	I0731 18:09:16.660798   74203 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 18:09:16.660842   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 18:09:16.697154   74203 cri.go:89] found id: ""
	I0731 18:09:16.697182   74203 logs.go:276] 0 containers: []
	W0731 18:09:16.697196   74203 logs.go:278] No container was found matching "etcd"
	I0731 18:09:16.697201   74203 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 18:09:16.697259   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 18:09:16.730263   74203 cri.go:89] found id: ""
	I0731 18:09:16.730284   74203 logs.go:276] 0 containers: []
	W0731 18:09:16.730291   74203 logs.go:278] No container was found matching "coredns"
	I0731 18:09:16.730318   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 18:09:16.730369   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 18:09:16.765226   74203 cri.go:89] found id: ""
	I0731 18:09:16.765249   74203 logs.go:276] 0 containers: []
	W0731 18:09:16.765257   74203 logs.go:278] No container was found matching "kube-scheduler"
	I0731 18:09:16.765262   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 18:09:16.765336   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 18:09:16.800502   74203 cri.go:89] found id: ""
	I0731 18:09:16.800528   74203 logs.go:276] 0 containers: []
	W0731 18:09:16.800535   74203 logs.go:278] No container was found matching "kube-proxy"
	I0731 18:09:16.800541   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 18:09:16.800599   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 18:09:16.837391   74203 cri.go:89] found id: ""
	I0731 18:09:16.837418   74203 logs.go:276] 0 containers: []
	W0731 18:09:16.837427   74203 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 18:09:16.837435   74203 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 18:09:16.837490   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 18:09:16.867606   74203 cri.go:89] found id: ""
	I0731 18:09:16.867628   74203 logs.go:276] 0 containers: []
	W0731 18:09:16.867637   74203 logs.go:278] No container was found matching "kindnet"
	I0731 18:09:16.867642   74203 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 18:09:16.867696   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 18:09:16.901639   74203 cri.go:89] found id: ""
	I0731 18:09:16.901669   74203 logs.go:276] 0 containers: []
	W0731 18:09:16.901681   74203 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 18:09:16.901693   74203 logs.go:123] Gathering logs for kubelet ...
	I0731 18:09:16.901707   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 18:09:16.951692   74203 logs.go:123] Gathering logs for dmesg ...
	I0731 18:09:16.951729   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 18:09:16.965069   74203 logs.go:123] Gathering logs for describe nodes ...
	I0731 18:09:16.965101   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 18:09:17.040337   74203 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 18:09:17.040358   74203 logs.go:123] Gathering logs for CRI-O ...
	I0731 18:09:17.040371   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 18:09:17.115058   74203 logs.go:123] Gathering logs for container status ...
	I0731 18:09:17.115093   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 18:09:19.651538   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:09:19.663682   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 18:09:19.663739   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 18:09:19.697851   74203 cri.go:89] found id: ""
	I0731 18:09:19.697879   74203 logs.go:276] 0 containers: []
	W0731 18:09:19.697894   74203 logs.go:278] No container was found matching "kube-apiserver"
	I0731 18:09:19.697900   74203 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 18:09:19.697996   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 18:09:19.732745   74203 cri.go:89] found id: ""
	I0731 18:09:19.732772   74203 logs.go:276] 0 containers: []
	W0731 18:09:19.732783   74203 logs.go:278] No container was found matching "etcd"
	I0731 18:09:19.732790   74203 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 18:09:19.732855   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 18:09:19.763843   74203 cri.go:89] found id: ""
	I0731 18:09:19.763865   74203 logs.go:276] 0 containers: []
	W0731 18:09:19.763873   74203 logs.go:278] No container was found matching "coredns"
	I0731 18:09:19.763878   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 18:09:19.763934   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 18:09:19.797398   74203 cri.go:89] found id: ""
	I0731 18:09:19.797422   74203 logs.go:276] 0 containers: []
	W0731 18:09:19.797429   74203 logs.go:278] No container was found matching "kube-scheduler"
	I0731 18:09:19.797434   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 18:09:19.797504   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 18:09:19.833239   74203 cri.go:89] found id: ""
	I0731 18:09:19.833268   74203 logs.go:276] 0 containers: []
	W0731 18:09:19.833278   74203 logs.go:278] No container was found matching "kube-proxy"
	I0731 18:09:19.833284   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 18:09:19.833340   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 18:09:19.866135   74203 cri.go:89] found id: ""
	I0731 18:09:19.866163   74203 logs.go:276] 0 containers: []
	W0731 18:09:19.866173   74203 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 18:09:19.866181   74203 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 18:09:19.866242   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 18:09:19.900581   74203 cri.go:89] found id: ""
	I0731 18:09:19.900606   74203 logs.go:276] 0 containers: []
	W0731 18:09:19.900615   74203 logs.go:278] No container was found matching "kindnet"
	I0731 18:09:19.900621   74203 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 18:09:19.900720   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 18:09:19.936451   74203 cri.go:89] found id: ""
	I0731 18:09:19.936475   74203 logs.go:276] 0 containers: []
	W0731 18:09:19.936487   74203 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 18:09:19.936496   74203 logs.go:123] Gathering logs for kubelet ...
	I0731 18:09:19.936508   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 18:09:19.990522   74203 logs.go:123] Gathering logs for dmesg ...
	I0731 18:09:19.990559   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 18:09:20.003460   74203 logs.go:123] Gathering logs for describe nodes ...
	I0731 18:09:20.003487   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 18:09:20.070869   74203 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 18:09:20.070893   74203 logs.go:123] Gathering logs for CRI-O ...
	I0731 18:09:20.070912   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 18:09:20.148316   74203 logs.go:123] Gathering logs for container status ...
	I0731 18:09:20.148354   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 18:09:22.685964   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:09:22.698740   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 18:09:22.698814   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 18:09:22.735321   74203 cri.go:89] found id: ""
	I0731 18:09:22.735350   74203 logs.go:276] 0 containers: []
	W0731 18:09:22.735360   74203 logs.go:278] No container was found matching "kube-apiserver"
	I0731 18:09:22.735367   74203 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 18:09:22.735428   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 18:09:22.767689   74203 cri.go:89] found id: ""
	I0731 18:09:22.767718   74203 logs.go:276] 0 containers: []
	W0731 18:09:22.767729   74203 logs.go:278] No container was found matching "etcd"
	I0731 18:09:22.767736   74203 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 18:09:22.767795   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 18:09:22.804010   74203 cri.go:89] found id: ""
	I0731 18:09:22.804036   74203 logs.go:276] 0 containers: []
	W0731 18:09:22.804045   74203 logs.go:278] No container was found matching "coredns"
	I0731 18:09:22.804050   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 18:09:22.804101   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 18:09:22.836820   74203 cri.go:89] found id: ""
	I0731 18:09:22.836847   74203 logs.go:276] 0 containers: []
	W0731 18:09:22.836858   74203 logs.go:278] No container was found matching "kube-scheduler"
	I0731 18:09:22.836874   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 18:09:22.836933   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 18:09:22.870163   74203 cri.go:89] found id: ""
	I0731 18:09:22.870187   74203 logs.go:276] 0 containers: []
	W0731 18:09:22.870194   74203 logs.go:278] No container was found matching "kube-proxy"
	I0731 18:09:22.870199   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 18:09:22.870270   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 18:09:22.905926   74203 cri.go:89] found id: ""
	I0731 18:09:22.905951   74203 logs.go:276] 0 containers: []
	W0731 18:09:22.905959   74203 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 18:09:22.905965   74203 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 18:09:22.906020   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 18:09:22.938926   74203 cri.go:89] found id: ""
	I0731 18:09:22.938949   74203 logs.go:276] 0 containers: []
	W0731 18:09:22.938957   74203 logs.go:278] No container was found matching "kindnet"
	I0731 18:09:22.938963   74203 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 18:09:22.939008   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 18:09:22.975150   74203 cri.go:89] found id: ""
	I0731 18:09:22.975185   74203 logs.go:276] 0 containers: []
	W0731 18:09:22.975194   74203 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 18:09:22.975204   74203 logs.go:123] Gathering logs for describe nodes ...
	I0731 18:09:22.975219   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 18:09:23.043265   74203 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 18:09:23.043290   74203 logs.go:123] Gathering logs for CRI-O ...
	I0731 18:09:23.043302   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 18:09:23.122681   74203 logs.go:123] Gathering logs for container status ...
	I0731 18:09:23.122717   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 18:09:23.161745   74203 logs.go:123] Gathering logs for kubelet ...
	I0731 18:09:23.161769   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 18:09:23.211274   74203 logs.go:123] Gathering logs for dmesg ...
	I0731 18:09:23.211305   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
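The entries above form one complete probe-and-gather cycle: for each expected control-plane component, minikube asks the CRI runtime for a matching container (sudo crictl ps -a --quiet --name=...), finds none, and then falls back to collecting kubelet, dmesg, describe-nodes, CRI-O, and container-status output before retrying a few seconds later. A minimal Go sketch of that probe loop follows; it is illustrative only (not minikube's actual cri.go/logs.go code) and assumes nothing beyond crictl being available on the host:

// probe_sketch.go -- editor-added illustration, not minikube source.
// It mirrors the pattern visible in the log: list containers for each
// component by name and report when none is found, retrying periodically.
package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

// containerIDs runs the equivalent of: sudo crictl ps -a --quiet --name=<name>
// and returns the container IDs it printed (one per line), or nil on error.
func containerIDs(name string) []string {
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
	if err != nil {
		return nil
	}
	return strings.Fields(string(out))
}

func main() {
	// Component names taken from the log above; the 3-second interval
	// is an approximation of the spacing between probe cycles.
	components := []string{
		"kube-apiserver", "etcd", "coredns", "kube-scheduler",
		"kube-proxy", "kube-controller-manager", "kindnet", "kubernetes-dashboard",
	}
	for attempt := 0; attempt < 3; attempt++ {
		for _, c := range components {
			if ids := containerIDs(c); len(ids) == 0 {
				fmt.Printf("no container was found matching %q\n", c)
			} else {
				fmt.Printf("found %d container(s) for %q\n", len(ids), c)
			}
		}
		time.Sleep(3 * time.Second)
	}
}

The repeated blocks that follow in the log are further iterations of this same cycle.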
	I0731 18:09:25.724702   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:09:25.739335   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 18:09:25.739415   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 18:09:25.778238   74203 cri.go:89] found id: ""
	I0731 18:09:25.778264   74203 logs.go:276] 0 containers: []
	W0731 18:09:25.778274   74203 logs.go:278] No container was found matching "kube-apiserver"
	I0731 18:09:25.778282   74203 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 18:09:25.778349   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 18:09:25.816530   74203 cri.go:89] found id: ""
	I0731 18:09:25.816566   74203 logs.go:276] 0 containers: []
	W0731 18:09:25.816579   74203 logs.go:278] No container was found matching "etcd"
	I0731 18:09:25.816587   74203 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 18:09:25.816652   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 18:09:25.853524   74203 cri.go:89] found id: ""
	I0731 18:09:25.853562   74203 logs.go:276] 0 containers: []
	W0731 18:09:25.853575   74203 logs.go:278] No container was found matching "coredns"
	I0731 18:09:25.853583   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 18:09:25.853661   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 18:09:25.889690   74203 cri.go:89] found id: ""
	I0731 18:09:25.889719   74203 logs.go:276] 0 containers: []
	W0731 18:09:25.889728   74203 logs.go:278] No container was found matching "kube-scheduler"
	I0731 18:09:25.889734   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 18:09:25.889803   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 18:09:25.922409   74203 cri.go:89] found id: ""
	I0731 18:09:25.922441   74203 logs.go:276] 0 containers: []
	W0731 18:09:25.922452   74203 logs.go:278] No container was found matching "kube-proxy"
	I0731 18:09:25.922459   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 18:09:25.922512   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 18:09:25.956849   74203 cri.go:89] found id: ""
	I0731 18:09:25.956876   74203 logs.go:276] 0 containers: []
	W0731 18:09:25.956886   74203 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 18:09:25.956893   74203 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 18:09:25.956958   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 18:09:25.994190   74203 cri.go:89] found id: ""
	I0731 18:09:25.994212   74203 logs.go:276] 0 containers: []
	W0731 18:09:25.994220   74203 logs.go:278] No container was found matching "kindnet"
	I0731 18:09:25.994225   74203 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 18:09:25.994270   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 18:09:26.027980   74203 cri.go:89] found id: ""
	I0731 18:09:26.028005   74203 logs.go:276] 0 containers: []
	W0731 18:09:26.028014   74203 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 18:09:26.028025   74203 logs.go:123] Gathering logs for kubelet ...
	I0731 18:09:26.028044   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 18:09:26.076627   74203 logs.go:123] Gathering logs for dmesg ...
	I0731 18:09:26.076661   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 18:09:26.089439   74203 logs.go:123] Gathering logs for describe nodes ...
	I0731 18:09:26.089464   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 18:09:26.167298   74203 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 18:09:26.167319   74203 logs.go:123] Gathering logs for CRI-O ...
	I0731 18:09:26.167333   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 18:09:26.244611   74203 logs.go:123] Gathering logs for container status ...
	I0731 18:09:26.244644   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 18:09:28.787238   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:09:28.800136   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 18:09:28.800221   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 18:09:28.843038   74203 cri.go:89] found id: ""
	I0731 18:09:28.843062   74203 logs.go:276] 0 containers: []
	W0731 18:09:28.843070   74203 logs.go:278] No container was found matching "kube-apiserver"
	I0731 18:09:28.843076   74203 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 18:09:28.843154   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 18:09:28.876979   74203 cri.go:89] found id: ""
	I0731 18:09:28.877010   74203 logs.go:276] 0 containers: []
	W0731 18:09:28.877021   74203 logs.go:278] No container was found matching "etcd"
	I0731 18:09:28.877028   74203 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 18:09:28.877095   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 18:09:28.913105   74203 cri.go:89] found id: ""
	I0731 18:09:28.913137   74203 logs.go:276] 0 containers: []
	W0731 18:09:28.913147   74203 logs.go:278] No container was found matching "coredns"
	I0731 18:09:28.913155   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 18:09:28.913216   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 18:09:28.949113   74203 cri.go:89] found id: ""
	I0731 18:09:28.949144   74203 logs.go:276] 0 containers: []
	W0731 18:09:28.949153   74203 logs.go:278] No container was found matching "kube-scheduler"
	I0731 18:09:28.949160   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 18:09:28.949208   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 18:09:28.983159   74203 cri.go:89] found id: ""
	I0731 18:09:28.983187   74203 logs.go:276] 0 containers: []
	W0731 18:09:28.983195   74203 logs.go:278] No container was found matching "kube-proxy"
	I0731 18:09:28.983200   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 18:09:28.983276   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 18:09:29.016316   74203 cri.go:89] found id: ""
	I0731 18:09:29.016356   74203 logs.go:276] 0 containers: []
	W0731 18:09:29.016364   74203 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 18:09:29.016370   74203 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 18:09:29.016419   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 18:09:29.050015   74203 cri.go:89] found id: ""
	I0731 18:09:29.050047   74203 logs.go:276] 0 containers: []
	W0731 18:09:29.050058   74203 logs.go:278] No container was found matching "kindnet"
	I0731 18:09:29.050069   74203 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 18:09:29.050124   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 18:09:29.084711   74203 cri.go:89] found id: ""
	I0731 18:09:29.084739   74203 logs.go:276] 0 containers: []
	W0731 18:09:29.084749   74203 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 18:09:29.084760   74203 logs.go:123] Gathering logs for kubelet ...
	I0731 18:09:29.084777   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 18:09:29.135474   74203 logs.go:123] Gathering logs for dmesg ...
	I0731 18:09:29.135516   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 18:09:29.149989   74203 logs.go:123] Gathering logs for describe nodes ...
	I0731 18:09:29.150022   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 18:09:29.223652   74203 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 18:09:29.223676   74203 logs.go:123] Gathering logs for CRI-O ...
	I0731 18:09:29.223688   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 18:09:29.307949   74203 logs.go:123] Gathering logs for container status ...
	I0731 18:09:29.307983   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 18:09:31.848760   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:09:31.861409   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 18:09:31.861470   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 18:09:31.894485   74203 cri.go:89] found id: ""
	I0731 18:09:31.894505   74203 logs.go:276] 0 containers: []
	W0731 18:09:31.894513   74203 logs.go:278] No container was found matching "kube-apiserver"
	I0731 18:09:31.894518   74203 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 18:09:31.894563   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 18:09:31.926760   74203 cri.go:89] found id: ""
	I0731 18:09:31.926784   74203 logs.go:276] 0 containers: []
	W0731 18:09:31.926791   74203 logs.go:278] No container was found matching "etcd"
	I0731 18:09:31.926797   74203 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 18:09:31.926857   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 18:09:31.963010   74203 cri.go:89] found id: ""
	I0731 18:09:31.963042   74203 logs.go:276] 0 containers: []
	W0731 18:09:31.963055   74203 logs.go:278] No container was found matching "coredns"
	I0731 18:09:31.963062   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 18:09:31.963165   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 18:09:31.995221   74203 cri.go:89] found id: ""
	I0731 18:09:31.995249   74203 logs.go:276] 0 containers: []
	W0731 18:09:31.995260   74203 logs.go:278] No container was found matching "kube-scheduler"
	I0731 18:09:31.995268   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 18:09:31.995333   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 18:09:32.033912   74203 cri.go:89] found id: ""
	I0731 18:09:32.033942   74203 logs.go:276] 0 containers: []
	W0731 18:09:32.033955   74203 logs.go:278] No container was found matching "kube-proxy"
	I0731 18:09:32.033963   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 18:09:32.034038   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 18:09:32.066416   74203 cri.go:89] found id: ""
	I0731 18:09:32.066446   74203 logs.go:276] 0 containers: []
	W0731 18:09:32.066477   74203 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 18:09:32.066486   74203 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 18:09:32.066549   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 18:09:32.100097   74203 cri.go:89] found id: ""
	I0731 18:09:32.100121   74203 logs.go:276] 0 containers: []
	W0731 18:09:32.100129   74203 logs.go:278] No container was found matching "kindnet"
	I0731 18:09:32.100135   74203 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 18:09:32.100191   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 18:09:32.133061   74203 cri.go:89] found id: ""
	I0731 18:09:32.133088   74203 logs.go:276] 0 containers: []
	W0731 18:09:32.133096   74203 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 18:09:32.133106   74203 logs.go:123] Gathering logs for container status ...
	I0731 18:09:32.133120   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 18:09:32.169869   74203 logs.go:123] Gathering logs for kubelet ...
	I0731 18:09:32.169897   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 18:09:32.218668   74203 logs.go:123] Gathering logs for dmesg ...
	I0731 18:09:32.218707   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 18:09:32.231016   74203 logs.go:123] Gathering logs for describe nodes ...
	I0731 18:09:32.231039   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 18:09:32.304319   74203 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 18:09:32.304342   74203 logs.go:123] Gathering logs for CRI-O ...
	I0731 18:09:32.304353   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 18:09:34.880423   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:09:34.893775   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 18:09:34.893853   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 18:09:34.925073   74203 cri.go:89] found id: ""
	I0731 18:09:34.925101   74203 logs.go:276] 0 containers: []
	W0731 18:09:34.925109   74203 logs.go:278] No container was found matching "kube-apiserver"
	I0731 18:09:34.925115   74203 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 18:09:34.925178   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 18:09:34.960870   74203 cri.go:89] found id: ""
	I0731 18:09:34.960896   74203 logs.go:276] 0 containers: []
	W0731 18:09:34.960904   74203 logs.go:278] No container was found matching "etcd"
	I0731 18:09:34.960910   74203 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 18:09:34.960961   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 18:09:34.996290   74203 cri.go:89] found id: ""
	I0731 18:09:34.996332   74203 logs.go:276] 0 containers: []
	W0731 18:09:34.996341   74203 logs.go:278] No container was found matching "coredns"
	I0731 18:09:34.996347   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 18:09:34.996401   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 18:09:35.027900   74203 cri.go:89] found id: ""
	I0731 18:09:35.027932   74203 logs.go:276] 0 containers: []
	W0731 18:09:35.027940   74203 logs.go:278] No container was found matching "kube-scheduler"
	I0731 18:09:35.027945   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 18:09:35.028004   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 18:09:35.060533   74203 cri.go:89] found id: ""
	I0731 18:09:35.060562   74203 logs.go:276] 0 containers: []
	W0731 18:09:35.060579   74203 logs.go:278] No container was found matching "kube-proxy"
	I0731 18:09:35.060586   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 18:09:35.060653   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 18:09:35.095307   74203 cri.go:89] found id: ""
	I0731 18:09:35.095339   74203 logs.go:276] 0 containers: []
	W0731 18:09:35.095348   74203 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 18:09:35.095355   74203 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 18:09:35.095421   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 18:09:35.127060   74203 cri.go:89] found id: ""
	I0731 18:09:35.127082   74203 logs.go:276] 0 containers: []
	W0731 18:09:35.127090   74203 logs.go:278] No container was found matching "kindnet"
	I0731 18:09:35.127095   74203 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 18:09:35.127169   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 18:09:35.161300   74203 cri.go:89] found id: ""
	I0731 18:09:35.161328   74203 logs.go:276] 0 containers: []
	W0731 18:09:35.161339   74203 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 18:09:35.161350   74203 logs.go:123] Gathering logs for describe nodes ...
	I0731 18:09:35.161369   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 18:09:35.233033   74203 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 18:09:35.233060   74203 logs.go:123] Gathering logs for CRI-O ...
	I0731 18:09:35.233074   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 18:09:35.313279   74203 logs.go:123] Gathering logs for container status ...
	I0731 18:09:35.313311   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 18:09:35.356120   74203 logs.go:123] Gathering logs for kubelet ...
	I0731 18:09:35.356145   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 18:09:35.408231   74203 logs.go:123] Gathering logs for dmesg ...
	I0731 18:09:35.408263   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 18:09:37.921242   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:09:37.933986   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 18:09:37.934044   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 18:09:37.964524   74203 cri.go:89] found id: ""
	I0731 18:09:37.964558   74203 logs.go:276] 0 containers: []
	W0731 18:09:37.964567   74203 logs.go:278] No container was found matching "kube-apiserver"
	I0731 18:09:37.964574   74203 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 18:09:37.964632   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 18:09:37.998157   74203 cri.go:89] found id: ""
	I0731 18:09:37.998183   74203 logs.go:276] 0 containers: []
	W0731 18:09:37.998191   74203 logs.go:278] No container was found matching "etcd"
	I0731 18:09:37.998196   74203 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 18:09:37.998257   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 18:09:38.034611   74203 cri.go:89] found id: ""
	I0731 18:09:38.034637   74203 logs.go:276] 0 containers: []
	W0731 18:09:38.034645   74203 logs.go:278] No container was found matching "coredns"
	I0731 18:09:38.034650   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 18:09:38.034708   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 18:09:38.068005   74203 cri.go:89] found id: ""
	I0731 18:09:38.068029   74203 logs.go:276] 0 containers: []
	W0731 18:09:38.068039   74203 logs.go:278] No container was found matching "kube-scheduler"
	I0731 18:09:38.068047   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 18:09:38.068104   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 18:09:38.106110   74203 cri.go:89] found id: ""
	I0731 18:09:38.106133   74203 logs.go:276] 0 containers: []
	W0731 18:09:38.106141   74203 logs.go:278] No container was found matching "kube-proxy"
	I0731 18:09:38.106146   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 18:09:38.106192   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 18:09:38.138337   74203 cri.go:89] found id: ""
	I0731 18:09:38.138364   74203 logs.go:276] 0 containers: []
	W0731 18:09:38.138375   74203 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 18:09:38.138383   74203 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 18:09:38.138440   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 18:09:38.171517   74203 cri.go:89] found id: ""
	I0731 18:09:38.171546   74203 logs.go:276] 0 containers: []
	W0731 18:09:38.171557   74203 logs.go:278] No container was found matching "kindnet"
	I0731 18:09:38.171564   74203 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 18:09:38.171643   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 18:09:38.208708   74203 cri.go:89] found id: ""
	I0731 18:09:38.208733   74203 logs.go:276] 0 containers: []
	W0731 18:09:38.208741   74203 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 18:09:38.208750   74203 logs.go:123] Gathering logs for container status ...
	I0731 18:09:38.208760   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 18:09:38.243711   74203 logs.go:123] Gathering logs for kubelet ...
	I0731 18:09:38.243736   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 18:09:38.298673   74203 logs.go:123] Gathering logs for dmesg ...
	I0731 18:09:38.298705   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 18:09:38.311936   74203 logs.go:123] Gathering logs for describe nodes ...
	I0731 18:09:38.311962   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 18:09:38.384023   74203 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 18:09:38.384049   74203 logs.go:123] Gathering logs for CRI-O ...
	I0731 18:09:38.384067   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 18:09:40.959426   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:09:40.972581   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 18:09:40.972645   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 18:09:41.008917   74203 cri.go:89] found id: ""
	I0731 18:09:41.008941   74203 logs.go:276] 0 containers: []
	W0731 18:09:41.008950   74203 logs.go:278] No container was found matching "kube-apiserver"
	I0731 18:09:41.008957   74203 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 18:09:41.009018   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 18:09:41.045342   74203 cri.go:89] found id: ""
	I0731 18:09:41.045375   74203 logs.go:276] 0 containers: []
	W0731 18:09:41.045384   74203 logs.go:278] No container was found matching "etcd"
	I0731 18:09:41.045390   74203 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 18:09:41.045454   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 18:09:41.081385   74203 cri.go:89] found id: ""
	I0731 18:09:41.081409   74203 logs.go:276] 0 containers: []
	W0731 18:09:41.081417   74203 logs.go:278] No container was found matching "coredns"
	I0731 18:09:41.081423   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 18:09:41.081469   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 18:09:41.118028   74203 cri.go:89] found id: ""
	I0731 18:09:41.118051   74203 logs.go:276] 0 containers: []
	W0731 18:09:41.118062   74203 logs.go:278] No container was found matching "kube-scheduler"
	I0731 18:09:41.118067   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 18:09:41.118114   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 18:09:41.154162   74203 cri.go:89] found id: ""
	I0731 18:09:41.154190   74203 logs.go:276] 0 containers: []
	W0731 18:09:41.154201   74203 logs.go:278] No container was found matching "kube-proxy"
	I0731 18:09:41.154209   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 18:09:41.154271   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 18:09:41.190789   74203 cri.go:89] found id: ""
	I0731 18:09:41.190814   74203 logs.go:276] 0 containers: []
	W0731 18:09:41.190822   74203 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 18:09:41.190827   74203 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 18:09:41.190887   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 18:09:41.226281   74203 cri.go:89] found id: ""
	I0731 18:09:41.226312   74203 logs.go:276] 0 containers: []
	W0731 18:09:41.226321   74203 logs.go:278] No container was found matching "kindnet"
	I0731 18:09:41.226327   74203 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 18:09:41.226382   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 18:09:41.258270   74203 cri.go:89] found id: ""
	I0731 18:09:41.258299   74203 logs.go:276] 0 containers: []
	W0731 18:09:41.258309   74203 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 18:09:41.258321   74203 logs.go:123] Gathering logs for CRI-O ...
	I0731 18:09:41.258335   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 18:09:41.342713   74203 logs.go:123] Gathering logs for container status ...
	I0731 18:09:41.342749   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 18:09:41.389772   74203 logs.go:123] Gathering logs for kubelet ...
	I0731 18:09:41.389795   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 18:09:41.442645   74203 logs.go:123] Gathering logs for dmesg ...
	I0731 18:09:41.442676   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 18:09:41.455850   74203 logs.go:123] Gathering logs for describe nodes ...
	I0731 18:09:41.455874   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 18:09:41.522017   74203 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 18:09:44.022439   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:09:44.035190   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 18:09:44.035258   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 18:09:44.070759   74203 cri.go:89] found id: ""
	I0731 18:09:44.070783   74203 logs.go:276] 0 containers: []
	W0731 18:09:44.070790   74203 logs.go:278] No container was found matching "kube-apiserver"
	I0731 18:09:44.070796   74203 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 18:09:44.070857   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 18:09:44.105313   74203 cri.go:89] found id: ""
	I0731 18:09:44.105350   74203 logs.go:276] 0 containers: []
	W0731 18:09:44.105358   74203 logs.go:278] No container was found matching "etcd"
	I0731 18:09:44.105364   74203 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 18:09:44.105416   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 18:09:44.140159   74203 cri.go:89] found id: ""
	I0731 18:09:44.140208   74203 logs.go:276] 0 containers: []
	W0731 18:09:44.140220   74203 logs.go:278] No container was found matching "coredns"
	I0731 18:09:44.140229   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 18:09:44.140301   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 18:09:44.176407   74203 cri.go:89] found id: ""
	I0731 18:09:44.176429   74203 logs.go:276] 0 containers: []
	W0731 18:09:44.176437   74203 logs.go:278] No container was found matching "kube-scheduler"
	I0731 18:09:44.176442   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 18:09:44.176490   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 18:09:44.210875   74203 cri.go:89] found id: ""
	I0731 18:09:44.210899   74203 logs.go:276] 0 containers: []
	W0731 18:09:44.210907   74203 logs.go:278] No container was found matching "kube-proxy"
	I0731 18:09:44.210916   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 18:09:44.210969   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 18:09:44.247021   74203 cri.go:89] found id: ""
	I0731 18:09:44.247045   74203 logs.go:276] 0 containers: []
	W0731 18:09:44.247055   74203 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 18:09:44.247061   74203 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 18:09:44.247141   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 18:09:44.282983   74203 cri.go:89] found id: ""
	I0731 18:09:44.283011   74203 logs.go:276] 0 containers: []
	W0731 18:09:44.283021   74203 logs.go:278] No container was found matching "kindnet"
	I0731 18:09:44.283029   74203 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 18:09:44.283092   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 18:09:44.319717   74203 cri.go:89] found id: ""
	I0731 18:09:44.319742   74203 logs.go:276] 0 containers: []
	W0731 18:09:44.319750   74203 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 18:09:44.319766   74203 logs.go:123] Gathering logs for CRI-O ...
	I0731 18:09:44.319781   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 18:09:44.398602   74203 logs.go:123] Gathering logs for container status ...
	I0731 18:09:44.398636   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 18:09:44.435350   74203 logs.go:123] Gathering logs for kubelet ...
	I0731 18:09:44.435384   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 18:09:44.488021   74203 logs.go:123] Gathering logs for dmesg ...
	I0731 18:09:44.488053   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 18:09:44.501790   74203 logs.go:123] Gathering logs for describe nodes ...
	I0731 18:09:44.501813   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 18:09:44.578374   74203 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 18:09:47.079192   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:09:47.093516   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 18:09:47.093597   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 18:09:47.132872   74203 cri.go:89] found id: ""
	I0731 18:09:47.132899   74203 logs.go:276] 0 containers: []
	W0731 18:09:47.132907   74203 logs.go:278] No container was found matching "kube-apiserver"
	I0731 18:09:47.132913   74203 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 18:09:47.132969   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 18:09:47.167428   74203 cri.go:89] found id: ""
	I0731 18:09:47.167460   74203 logs.go:276] 0 containers: []
	W0731 18:09:47.167472   74203 logs.go:278] No container was found matching "etcd"
	I0731 18:09:47.167480   74203 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 18:09:47.167551   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 18:09:47.202206   74203 cri.go:89] found id: ""
	I0731 18:09:47.202237   74203 logs.go:276] 0 containers: []
	W0731 18:09:47.202250   74203 logs.go:278] No container was found matching "coredns"
	I0731 18:09:47.202256   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 18:09:47.202308   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 18:09:47.238513   74203 cri.go:89] found id: ""
	I0731 18:09:47.238537   74203 logs.go:276] 0 containers: []
	W0731 18:09:47.238545   74203 logs.go:278] No container was found matching "kube-scheduler"
	I0731 18:09:47.238551   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 18:09:47.238604   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 18:09:47.271732   74203 cri.go:89] found id: ""
	I0731 18:09:47.271755   74203 logs.go:276] 0 containers: []
	W0731 18:09:47.271764   74203 logs.go:278] No container was found matching "kube-proxy"
	I0731 18:09:47.271770   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 18:09:47.271828   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 18:09:47.305906   74203 cri.go:89] found id: ""
	I0731 18:09:47.305932   74203 logs.go:276] 0 containers: []
	W0731 18:09:47.305943   74203 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 18:09:47.305948   74203 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 18:09:47.305996   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 18:09:47.338427   74203 cri.go:89] found id: ""
	I0731 18:09:47.338452   74203 logs.go:276] 0 containers: []
	W0731 18:09:47.338461   74203 logs.go:278] No container was found matching "kindnet"
	I0731 18:09:47.338468   74203 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 18:09:47.338526   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 18:09:47.374909   74203 cri.go:89] found id: ""
	I0731 18:09:47.374943   74203 logs.go:276] 0 containers: []
	W0731 18:09:47.374954   74203 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 18:09:47.374963   74203 logs.go:123] Gathering logs for dmesg ...
	I0731 18:09:47.374976   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 18:09:47.387739   74203 logs.go:123] Gathering logs for describe nodes ...
	I0731 18:09:47.387765   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 18:09:47.480479   74203 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 18:09:47.480505   74203 logs.go:123] Gathering logs for CRI-O ...
	I0731 18:09:47.480519   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 18:09:47.562857   74203 logs.go:123] Gathering logs for container status ...
	I0731 18:09:47.562890   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 18:09:47.608435   74203 logs.go:123] Gathering logs for kubelet ...
	I0731 18:09:47.608466   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 18:09:50.164351   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:09:50.177485   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 18:09:50.177546   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 18:09:50.211474   74203 cri.go:89] found id: ""
	I0731 18:09:50.211502   74203 logs.go:276] 0 containers: []
	W0731 18:09:50.211512   74203 logs.go:278] No container was found matching "kube-apiserver"
	I0731 18:09:50.211520   74203 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 18:09:50.211583   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 18:09:50.248167   74203 cri.go:89] found id: ""
	I0731 18:09:50.248190   74203 logs.go:276] 0 containers: []
	W0731 18:09:50.248197   74203 logs.go:278] No container was found matching "etcd"
	I0731 18:09:50.248203   74203 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 18:09:50.248250   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 18:09:50.286323   74203 cri.go:89] found id: ""
	I0731 18:09:50.286358   74203 logs.go:276] 0 containers: []
	W0731 18:09:50.286366   74203 logs.go:278] No container was found matching "coredns"
	I0731 18:09:50.286372   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 18:09:50.286420   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 18:09:50.316634   74203 cri.go:89] found id: ""
	I0731 18:09:50.316661   74203 logs.go:276] 0 containers: []
	W0731 18:09:50.316670   74203 logs.go:278] No container was found matching "kube-scheduler"
	I0731 18:09:50.316675   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 18:09:50.316726   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 18:09:50.349881   74203 cri.go:89] found id: ""
	I0731 18:09:50.349909   74203 logs.go:276] 0 containers: []
	W0731 18:09:50.349919   74203 logs.go:278] No container was found matching "kube-proxy"
	I0731 18:09:50.349926   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 18:09:50.349989   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 18:09:50.384147   74203 cri.go:89] found id: ""
	I0731 18:09:50.384181   74203 logs.go:276] 0 containers: []
	W0731 18:09:50.384194   74203 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 18:09:50.384203   74203 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 18:09:50.384272   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 18:09:50.418024   74203 cri.go:89] found id: ""
	I0731 18:09:50.418052   74203 logs.go:276] 0 containers: []
	W0731 18:09:50.418062   74203 logs.go:278] No container was found matching "kindnet"
	I0731 18:09:50.418069   74203 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 18:09:50.418130   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 18:09:50.454484   74203 cri.go:89] found id: ""
	I0731 18:09:50.454517   74203 logs.go:276] 0 containers: []
	W0731 18:09:50.454525   74203 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 18:09:50.454533   74203 logs.go:123] Gathering logs for kubelet ...
	I0731 18:09:50.454544   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 18:09:50.505508   74203 logs.go:123] Gathering logs for dmesg ...
	I0731 18:09:50.505545   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 18:09:50.518504   74203 logs.go:123] Gathering logs for describe nodes ...
	I0731 18:09:50.518529   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 18:09:50.587950   74203 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 18:09:50.587974   74203 logs.go:123] Gathering logs for CRI-O ...
	I0731 18:09:50.587989   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 18:09:50.669268   74203 logs.go:123] Gathering logs for container status ...
	I0731 18:09:50.669302   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 18:09:53.209229   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:09:53.222114   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 18:09:53.222175   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 18:09:53.255330   74203 cri.go:89] found id: ""
	I0731 18:09:53.255356   74203 logs.go:276] 0 containers: []
	W0731 18:09:53.255365   74203 logs.go:278] No container was found matching "kube-apiserver"
	I0731 18:09:53.255371   74203 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 18:09:53.255432   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 18:09:53.290354   74203 cri.go:89] found id: ""
	I0731 18:09:53.290375   74203 logs.go:276] 0 containers: []
	W0731 18:09:53.290382   74203 logs.go:278] No container was found matching "etcd"
	I0731 18:09:53.290387   74203 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 18:09:53.290438   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 18:09:53.323621   74203 cri.go:89] found id: ""
	I0731 18:09:53.323645   74203 logs.go:276] 0 containers: []
	W0731 18:09:53.323653   74203 logs.go:278] No container was found matching "coredns"
	I0731 18:09:53.323658   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 18:09:53.323718   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 18:09:53.355850   74203 cri.go:89] found id: ""
	I0731 18:09:53.355877   74203 logs.go:276] 0 containers: []
	W0731 18:09:53.355887   74203 logs.go:278] No container was found matching "kube-scheduler"
	I0731 18:09:53.355894   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 18:09:53.355957   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 18:09:53.388686   74203 cri.go:89] found id: ""
	I0731 18:09:53.388716   74203 logs.go:276] 0 containers: []
	W0731 18:09:53.388726   74203 logs.go:278] No container was found matching "kube-proxy"
	I0731 18:09:53.388733   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 18:09:53.388785   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 18:09:53.426924   74203 cri.go:89] found id: ""
	I0731 18:09:53.426952   74203 logs.go:276] 0 containers: []
	W0731 18:09:53.426961   74203 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 18:09:53.426967   74203 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 18:09:53.427019   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 18:09:53.462041   74203 cri.go:89] found id: ""
	I0731 18:09:53.462067   74203 logs.go:276] 0 containers: []
	W0731 18:09:53.462078   74203 logs.go:278] No container was found matching "kindnet"
	I0731 18:09:53.462084   74203 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 18:09:53.462145   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 18:09:53.493810   74203 cri.go:89] found id: ""
	I0731 18:09:53.493833   74203 logs.go:276] 0 containers: []
	W0731 18:09:53.493842   74203 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 18:09:53.493852   74203 logs.go:123] Gathering logs for container status ...
	I0731 18:09:53.493867   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 18:09:53.530019   74203 logs.go:123] Gathering logs for kubelet ...
	I0731 18:09:53.530053   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 18:09:53.580749   74203 logs.go:123] Gathering logs for dmesg ...
	I0731 18:09:53.580782   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 18:09:53.594457   74203 logs.go:123] Gathering logs for describe nodes ...
	I0731 18:09:53.594482   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 18:09:53.662096   74203 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 18:09:53.662116   74203 logs.go:123] Gathering logs for CRI-O ...
	I0731 18:09:53.662134   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 18:09:56.238479   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:09:56.251272   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 18:09:56.251350   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 18:09:56.287380   74203 cri.go:89] found id: ""
	I0731 18:09:56.287406   74203 logs.go:276] 0 containers: []
	W0731 18:09:56.287414   74203 logs.go:278] No container was found matching "kube-apiserver"
	I0731 18:09:56.287419   74203 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 18:09:56.287471   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 18:09:56.322490   74203 cri.go:89] found id: ""
	I0731 18:09:56.322512   74203 logs.go:276] 0 containers: []
	W0731 18:09:56.322520   74203 logs.go:278] No container was found matching "etcd"
	I0731 18:09:56.322526   74203 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 18:09:56.322572   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 18:09:56.355845   74203 cri.go:89] found id: ""
	I0731 18:09:56.355874   74203 logs.go:276] 0 containers: []
	W0731 18:09:56.355885   74203 logs.go:278] No container was found matching "coredns"
	I0731 18:09:56.355895   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 18:09:56.355958   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 18:09:56.388304   74203 cri.go:89] found id: ""
	I0731 18:09:56.388330   74203 logs.go:276] 0 containers: []
	W0731 18:09:56.388340   74203 logs.go:278] No container was found matching "kube-scheduler"
	I0731 18:09:56.388348   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 18:09:56.388411   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 18:09:56.420837   74203 cri.go:89] found id: ""
	I0731 18:09:56.420867   74203 logs.go:276] 0 containers: []
	W0731 18:09:56.420877   74203 logs.go:278] No container was found matching "kube-proxy"
	I0731 18:09:56.420884   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 18:09:56.420950   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 18:09:56.453095   74203 cri.go:89] found id: ""
	I0731 18:09:56.453135   74203 logs.go:276] 0 containers: []
	W0731 18:09:56.453146   74203 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 18:09:56.453155   74203 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 18:09:56.453214   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 18:09:56.484245   74203 cri.go:89] found id: ""
	I0731 18:09:56.484272   74203 logs.go:276] 0 containers: []
	W0731 18:09:56.484282   74203 logs.go:278] No container was found matching "kindnet"
	I0731 18:09:56.484296   74203 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 18:09:56.484366   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 18:09:56.519473   74203 cri.go:89] found id: ""
	I0731 18:09:56.519501   74203 logs.go:276] 0 containers: []
	W0731 18:09:56.519508   74203 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 18:09:56.519516   74203 logs.go:123] Gathering logs for dmesg ...
	I0731 18:09:56.519530   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 18:09:56.532178   74203 logs.go:123] Gathering logs for describe nodes ...
	I0731 18:09:56.532203   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 18:09:56.600092   74203 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 18:09:56.600122   74203 logs.go:123] Gathering logs for CRI-O ...
	I0731 18:09:56.600137   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 18:09:56.679176   74203 logs.go:123] Gathering logs for container status ...
	I0731 18:09:56.679208   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 18:09:56.715464   74203 logs.go:123] Gathering logs for kubelet ...
	I0731 18:09:56.715499   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 18:09:59.267214   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:09:59.280666   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 18:09:59.280740   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 18:09:59.312898   74203 cri.go:89] found id: ""
	I0731 18:09:59.312928   74203 logs.go:276] 0 containers: []
	W0731 18:09:59.312940   74203 logs.go:278] No container was found matching "kube-apiserver"
	I0731 18:09:59.312947   74203 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 18:09:59.313013   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 18:09:59.347881   74203 cri.go:89] found id: ""
	I0731 18:09:59.347907   74203 logs.go:276] 0 containers: []
	W0731 18:09:59.347915   74203 logs.go:278] No container was found matching "etcd"
	I0731 18:09:59.347919   74203 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 18:09:59.347978   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 18:09:59.382566   74203 cri.go:89] found id: ""
	I0731 18:09:59.382603   74203 logs.go:276] 0 containers: []
	W0731 18:09:59.382615   74203 logs.go:278] No container was found matching "coredns"
	I0731 18:09:59.382629   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 18:09:59.382691   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 18:09:59.417123   74203 cri.go:89] found id: ""
	I0731 18:09:59.417148   74203 logs.go:276] 0 containers: []
	W0731 18:09:59.417157   74203 logs.go:278] No container was found matching "kube-scheduler"
	I0731 18:09:59.417163   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 18:09:59.417220   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 18:09:59.452674   74203 cri.go:89] found id: ""
	I0731 18:09:59.452699   74203 logs.go:276] 0 containers: []
	W0731 18:09:59.452709   74203 logs.go:278] No container was found matching "kube-proxy"
	I0731 18:09:59.452715   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 18:09:59.452775   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 18:09:59.488879   74203 cri.go:89] found id: ""
	I0731 18:09:59.488905   74203 logs.go:276] 0 containers: []
	W0731 18:09:59.488913   74203 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 18:09:59.488921   74203 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 18:09:59.488981   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 18:09:59.521773   74203 cri.go:89] found id: ""
	I0731 18:09:59.521801   74203 logs.go:276] 0 containers: []
	W0731 18:09:59.521809   74203 logs.go:278] No container was found matching "kindnet"
	I0731 18:09:59.521816   74203 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 18:09:59.521876   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 18:09:59.566619   74203 cri.go:89] found id: ""
	I0731 18:09:59.566649   74203 logs.go:276] 0 containers: []
	W0731 18:09:59.566659   74203 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 18:09:59.566670   74203 logs.go:123] Gathering logs for describe nodes ...
	I0731 18:09:59.566687   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 18:09:59.638301   74203 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 18:09:59.638351   74203 logs.go:123] Gathering logs for CRI-O ...
	I0731 18:09:59.638367   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 18:09:59.721561   74203 logs.go:123] Gathering logs for container status ...
	I0731 18:09:59.721597   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 18:09:59.759371   74203 logs.go:123] Gathering logs for kubelet ...
	I0731 18:09:59.759402   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 18:09:59.811223   74203 logs.go:123] Gathering logs for dmesg ...
	I0731 18:09:59.811255   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 18:10:02.325339   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:10:02.337908   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 18:10:02.337963   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 18:10:02.369343   74203 cri.go:89] found id: ""
	I0731 18:10:02.369369   74203 logs.go:276] 0 containers: []
	W0731 18:10:02.369378   74203 logs.go:278] No container was found matching "kube-apiserver"
	I0731 18:10:02.369384   74203 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 18:10:02.369442   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 18:10:02.406207   74203 cri.go:89] found id: ""
	I0731 18:10:02.406234   74203 logs.go:276] 0 containers: []
	W0731 18:10:02.406242   74203 logs.go:278] No container was found matching "etcd"
	I0731 18:10:02.406247   74203 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 18:10:02.406297   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 18:10:02.442001   74203 cri.go:89] found id: ""
	I0731 18:10:02.442031   74203 logs.go:276] 0 containers: []
	W0731 18:10:02.442041   74203 logs.go:278] No container was found matching "coredns"
	I0731 18:10:02.442049   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 18:10:02.442109   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 18:10:02.478407   74203 cri.go:89] found id: ""
	I0731 18:10:02.478431   74203 logs.go:276] 0 containers: []
	W0731 18:10:02.478439   74203 logs.go:278] No container was found matching "kube-scheduler"
	I0731 18:10:02.478444   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 18:10:02.478491   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 18:10:02.513832   74203 cri.go:89] found id: ""
	I0731 18:10:02.513875   74203 logs.go:276] 0 containers: []
	W0731 18:10:02.513888   74203 logs.go:278] No container was found matching "kube-proxy"
	I0731 18:10:02.513896   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 18:10:02.513962   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 18:10:02.550830   74203 cri.go:89] found id: ""
	I0731 18:10:02.550856   74203 logs.go:276] 0 containers: []
	W0731 18:10:02.550867   74203 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 18:10:02.550874   74203 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 18:10:02.550937   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 18:10:02.584649   74203 cri.go:89] found id: ""
	I0731 18:10:02.584676   74203 logs.go:276] 0 containers: []
	W0731 18:10:02.584684   74203 logs.go:278] No container was found matching "kindnet"
	I0731 18:10:02.584691   74203 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 18:10:02.584752   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 18:10:02.617436   74203 cri.go:89] found id: ""
	I0731 18:10:02.617464   74203 logs.go:276] 0 containers: []
	W0731 18:10:02.617475   74203 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 18:10:02.617485   74203 logs.go:123] Gathering logs for kubelet ...
	I0731 18:10:02.617500   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 18:10:02.671571   74203 logs.go:123] Gathering logs for dmesg ...
	I0731 18:10:02.671609   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 18:10:02.686657   74203 logs.go:123] Gathering logs for describe nodes ...
	I0731 18:10:02.686694   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 18:10:02.755974   74203 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 18:10:02.756008   74203 logs.go:123] Gathering logs for CRI-O ...
	I0731 18:10:02.756025   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 18:10:02.837976   74203 logs.go:123] Gathering logs for container status ...
	I0731 18:10:02.838012   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 18:10:05.375212   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:10:05.388635   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 18:10:05.388703   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 18:10:05.427583   74203 cri.go:89] found id: ""
	I0731 18:10:05.427610   74203 logs.go:276] 0 containers: []
	W0731 18:10:05.427617   74203 logs.go:278] No container was found matching "kube-apiserver"
	I0731 18:10:05.427622   74203 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 18:10:05.427673   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 18:10:05.462550   74203 cri.go:89] found id: ""
	I0731 18:10:05.462575   74203 logs.go:276] 0 containers: []
	W0731 18:10:05.462583   74203 logs.go:278] No container was found matching "etcd"
	I0731 18:10:05.462589   74203 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 18:10:05.462645   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 18:10:05.501768   74203 cri.go:89] found id: ""
	I0731 18:10:05.501790   74203 logs.go:276] 0 containers: []
	W0731 18:10:05.501797   74203 logs.go:278] No container was found matching "coredns"
	I0731 18:10:05.501802   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 18:10:05.501860   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 18:10:05.539692   74203 cri.go:89] found id: ""
	I0731 18:10:05.539719   74203 logs.go:276] 0 containers: []
	W0731 18:10:05.539731   74203 logs.go:278] No container was found matching "kube-scheduler"
	I0731 18:10:05.539737   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 18:10:05.539798   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 18:10:05.573844   74203 cri.go:89] found id: ""
	I0731 18:10:05.573872   74203 logs.go:276] 0 containers: []
	W0731 18:10:05.573884   74203 logs.go:278] No container was found matching "kube-proxy"
	I0731 18:10:05.573891   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 18:10:05.573953   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 18:10:05.607827   74203 cri.go:89] found id: ""
	I0731 18:10:05.607848   74203 logs.go:276] 0 containers: []
	W0731 18:10:05.607858   74203 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 18:10:05.607863   74203 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 18:10:05.607913   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 18:10:05.639644   74203 cri.go:89] found id: ""
	I0731 18:10:05.639673   74203 logs.go:276] 0 containers: []
	W0731 18:10:05.639684   74203 logs.go:278] No container was found matching "kindnet"
	I0731 18:10:05.639691   74203 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 18:10:05.639753   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 18:10:05.673164   74203 cri.go:89] found id: ""
	I0731 18:10:05.673188   74203 logs.go:276] 0 containers: []
	W0731 18:10:05.673195   74203 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 18:10:05.673203   74203 logs.go:123] Gathering logs for CRI-O ...
	I0731 18:10:05.673215   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 18:10:05.755189   74203 logs.go:123] Gathering logs for container status ...
	I0731 18:10:05.755221   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 18:10:05.793686   74203 logs.go:123] Gathering logs for kubelet ...
	I0731 18:10:05.793715   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 18:10:05.844930   74203 logs.go:123] Gathering logs for dmesg ...
	I0731 18:10:05.844965   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 18:10:05.859150   74203 logs.go:123] Gathering logs for describe nodes ...
	I0731 18:10:05.859176   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 18:10:05.929945   74203 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 18:10:08.430669   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:10:08.444918   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 18:10:08.444989   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 18:10:08.482598   74203 cri.go:89] found id: ""
	I0731 18:10:08.482625   74203 logs.go:276] 0 containers: []
	W0731 18:10:08.482635   74203 logs.go:278] No container was found matching "kube-apiserver"
	I0731 18:10:08.482642   74203 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 18:10:08.482708   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 18:10:08.519687   74203 cri.go:89] found id: ""
	I0731 18:10:08.519717   74203 logs.go:276] 0 containers: []
	W0731 18:10:08.519726   74203 logs.go:278] No container was found matching "etcd"
	I0731 18:10:08.519734   74203 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 18:10:08.519795   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 18:10:08.551600   74203 cri.go:89] found id: ""
	I0731 18:10:08.551638   74203 logs.go:276] 0 containers: []
	W0731 18:10:08.551649   74203 logs.go:278] No container was found matching "coredns"
	I0731 18:10:08.551657   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 18:10:08.551713   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 18:10:08.585233   74203 cri.go:89] found id: ""
	I0731 18:10:08.585263   74203 logs.go:276] 0 containers: []
	W0731 18:10:08.585274   74203 logs.go:278] No container was found matching "kube-scheduler"
	I0731 18:10:08.585282   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 18:10:08.585343   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 18:10:08.622464   74203 cri.go:89] found id: ""
	I0731 18:10:08.622492   74203 logs.go:276] 0 containers: []
	W0731 18:10:08.622502   74203 logs.go:278] No container was found matching "kube-proxy"
	I0731 18:10:08.622510   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 18:10:08.622569   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 18:10:08.658360   74203 cri.go:89] found id: ""
	I0731 18:10:08.658390   74203 logs.go:276] 0 containers: []
	W0731 18:10:08.658402   74203 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 18:10:08.658410   74203 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 18:10:08.658471   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 18:10:08.692076   74203 cri.go:89] found id: ""
	I0731 18:10:08.692100   74203 logs.go:276] 0 containers: []
	W0731 18:10:08.692109   74203 logs.go:278] No container was found matching "kindnet"
	I0731 18:10:08.692116   74203 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 18:10:08.692179   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 18:10:08.729584   74203 cri.go:89] found id: ""
	I0731 18:10:08.729612   74203 logs.go:276] 0 containers: []
	W0731 18:10:08.729622   74203 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 18:10:08.729633   74203 logs.go:123] Gathering logs for describe nodes ...
	I0731 18:10:08.729647   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 18:10:08.806395   74203 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 18:10:08.806457   74203 logs.go:123] Gathering logs for CRI-O ...
	I0731 18:10:08.806485   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 18:10:08.884008   74203 logs.go:123] Gathering logs for container status ...
	I0731 18:10:08.884046   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 18:10:08.924359   74203 logs.go:123] Gathering logs for kubelet ...
	I0731 18:10:08.924398   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 18:10:08.978161   74203 logs.go:123] Gathering logs for dmesg ...
	I0731 18:10:08.978195   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 18:10:11.491784   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:10:11.504711   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 18:10:11.504784   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 18:10:11.541314   74203 cri.go:89] found id: ""
	I0731 18:10:11.541353   74203 logs.go:276] 0 containers: []
	W0731 18:10:11.541361   74203 logs.go:278] No container was found matching "kube-apiserver"
	I0731 18:10:11.541366   74203 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 18:10:11.541424   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 18:10:11.576481   74203 cri.go:89] found id: ""
	I0731 18:10:11.576509   74203 logs.go:276] 0 containers: []
	W0731 18:10:11.576527   74203 logs.go:278] No container was found matching "etcd"
	I0731 18:10:11.576535   74203 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 18:10:11.576597   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 18:10:11.610370   74203 cri.go:89] found id: ""
	I0731 18:10:11.610395   74203 logs.go:276] 0 containers: []
	W0731 18:10:11.610404   74203 logs.go:278] No container was found matching "coredns"
	I0731 18:10:11.610412   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 18:10:11.610470   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 18:10:11.645559   74203 cri.go:89] found id: ""
	I0731 18:10:11.645586   74203 logs.go:276] 0 containers: []
	W0731 18:10:11.645593   74203 logs.go:278] No container was found matching "kube-scheduler"
	I0731 18:10:11.645598   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 18:10:11.645654   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 18:10:11.677576   74203 cri.go:89] found id: ""
	I0731 18:10:11.677613   74203 logs.go:276] 0 containers: []
	W0731 18:10:11.677624   74203 logs.go:278] No container was found matching "kube-proxy"
	I0731 18:10:11.677631   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 18:10:11.677681   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 18:10:11.710173   74203 cri.go:89] found id: ""
	I0731 18:10:11.710199   74203 logs.go:276] 0 containers: []
	W0731 18:10:11.710208   74203 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 18:10:11.710215   74203 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 18:10:11.710273   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 18:10:11.743722   74203 cri.go:89] found id: ""
	I0731 18:10:11.743752   74203 logs.go:276] 0 containers: []
	W0731 18:10:11.743763   74203 logs.go:278] No container was found matching "kindnet"
	I0731 18:10:11.743782   74203 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 18:10:11.743857   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 18:10:11.776730   74203 cri.go:89] found id: ""
	I0731 18:10:11.776752   74203 logs.go:276] 0 containers: []
	W0731 18:10:11.776759   74203 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 18:10:11.776766   74203 logs.go:123] Gathering logs for describe nodes ...
	I0731 18:10:11.776777   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 18:10:11.846385   74203 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 18:10:11.846404   74203 logs.go:123] Gathering logs for CRI-O ...
	I0731 18:10:11.846415   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 18:10:11.923748   74203 logs.go:123] Gathering logs for container status ...
	I0731 18:10:11.923779   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 18:10:11.959700   74203 logs.go:123] Gathering logs for kubelet ...
	I0731 18:10:11.959734   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 18:10:12.009971   74203 logs.go:123] Gathering logs for dmesg ...
	I0731 18:10:12.010002   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 18:10:14.524097   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:10:14.537349   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 18:10:14.537449   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 18:10:14.569907   74203 cri.go:89] found id: ""
	I0731 18:10:14.569934   74203 logs.go:276] 0 containers: []
	W0731 18:10:14.569941   74203 logs.go:278] No container was found matching "kube-apiserver"
	I0731 18:10:14.569947   74203 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 18:10:14.569999   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 18:10:14.605058   74203 cri.go:89] found id: ""
	I0731 18:10:14.605085   74203 logs.go:276] 0 containers: []
	W0731 18:10:14.605095   74203 logs.go:278] No container was found matching "etcd"
	I0731 18:10:14.605102   74203 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 18:10:14.605155   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 18:10:14.640941   74203 cri.go:89] found id: ""
	I0731 18:10:14.640964   74203 logs.go:276] 0 containers: []
	W0731 18:10:14.640975   74203 logs.go:278] No container was found matching "coredns"
	I0731 18:10:14.640982   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 18:10:14.641039   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 18:10:14.678774   74203 cri.go:89] found id: ""
	I0731 18:10:14.678803   74203 logs.go:276] 0 containers: []
	W0731 18:10:14.678814   74203 logs.go:278] No container was found matching "kube-scheduler"
	I0731 18:10:14.678822   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 18:10:14.678880   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 18:10:14.714123   74203 cri.go:89] found id: ""
	I0731 18:10:14.714152   74203 logs.go:276] 0 containers: []
	W0731 18:10:14.714163   74203 logs.go:278] No container was found matching "kube-proxy"
	I0731 18:10:14.714171   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 18:10:14.714230   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 18:10:14.750212   74203 cri.go:89] found id: ""
	I0731 18:10:14.750243   74203 logs.go:276] 0 containers: []
	W0731 18:10:14.750255   74203 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 18:10:14.750262   74203 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 18:10:14.750322   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 18:10:14.786820   74203 cri.go:89] found id: ""
	I0731 18:10:14.786842   74203 logs.go:276] 0 containers: []
	W0731 18:10:14.786850   74203 logs.go:278] No container was found matching "kindnet"
	I0731 18:10:14.786856   74203 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 18:10:14.786904   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 18:10:14.819667   74203 cri.go:89] found id: ""
	I0731 18:10:14.819689   74203 logs.go:276] 0 containers: []
	W0731 18:10:14.819697   74203 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 18:10:14.819705   74203 logs.go:123] Gathering logs for dmesg ...
	I0731 18:10:14.819725   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 18:10:14.832525   74203 logs.go:123] Gathering logs for describe nodes ...
	I0731 18:10:14.832550   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 18:10:14.901190   74203 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 18:10:14.901216   74203 logs.go:123] Gathering logs for CRI-O ...
	I0731 18:10:14.901229   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 18:10:14.977123   74203 logs.go:123] Gathering logs for container status ...
	I0731 18:10:14.977158   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 18:10:15.014882   74203 logs.go:123] Gathering logs for kubelet ...
	I0731 18:10:15.014912   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 18:10:17.564989   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:10:17.578676   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 18:10:17.578740   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 18:10:17.610077   74203 cri.go:89] found id: ""
	I0731 18:10:17.610103   74203 logs.go:276] 0 containers: []
	W0731 18:10:17.610112   74203 logs.go:278] No container was found matching "kube-apiserver"
	I0731 18:10:17.610117   74203 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 18:10:17.610169   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 18:10:17.643143   74203 cri.go:89] found id: ""
	I0731 18:10:17.643166   74203 logs.go:276] 0 containers: []
	W0731 18:10:17.643173   74203 logs.go:278] No container was found matching "etcd"
	I0731 18:10:17.643179   74203 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 18:10:17.643225   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 18:10:17.677979   74203 cri.go:89] found id: ""
	I0731 18:10:17.678002   74203 logs.go:276] 0 containers: []
	W0731 18:10:17.678010   74203 logs.go:278] No container was found matching "coredns"
	I0731 18:10:17.678016   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 18:10:17.678086   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 18:10:17.711905   74203 cri.go:89] found id: ""
	I0731 18:10:17.711941   74203 logs.go:276] 0 containers: []
	W0731 18:10:17.711953   74203 logs.go:278] No container was found matching "kube-scheduler"
	I0731 18:10:17.711960   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 18:10:17.712013   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 18:10:17.745842   74203 cri.go:89] found id: ""
	I0731 18:10:17.745870   74203 logs.go:276] 0 containers: []
	W0731 18:10:17.745881   74203 logs.go:278] No container was found matching "kube-proxy"
	I0731 18:10:17.745889   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 18:10:17.745949   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 18:10:17.778170   74203 cri.go:89] found id: ""
	I0731 18:10:17.778242   74203 logs.go:276] 0 containers: []
	W0731 18:10:17.778260   74203 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 18:10:17.778272   74203 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 18:10:17.778340   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 18:10:17.810717   74203 cri.go:89] found id: ""
	I0731 18:10:17.810744   74203 logs.go:276] 0 containers: []
	W0731 18:10:17.810755   74203 logs.go:278] No container was found matching "kindnet"
	I0731 18:10:17.810762   74203 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 18:10:17.810823   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 18:10:17.843237   74203 cri.go:89] found id: ""
	I0731 18:10:17.843268   74203 logs.go:276] 0 containers: []
	W0731 18:10:17.843278   74203 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 18:10:17.843288   74203 logs.go:123] Gathering logs for kubelet ...
	I0731 18:10:17.843303   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 18:10:17.894338   74203 logs.go:123] Gathering logs for dmesg ...
	I0731 18:10:17.894376   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 18:10:17.907898   74203 logs.go:123] Gathering logs for describe nodes ...
	I0731 18:10:17.907927   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 18:10:17.977115   74203 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 18:10:17.977133   74203 logs.go:123] Gathering logs for CRI-O ...
	I0731 18:10:17.977145   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 18:10:18.059924   74203 logs.go:123] Gathering logs for container status ...
	I0731 18:10:18.059968   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 18:10:20.600903   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:10:20.613609   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 18:10:20.613680   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 18:10:20.646352   74203 cri.go:89] found id: ""
	I0731 18:10:20.646379   74203 logs.go:276] 0 containers: []
	W0731 18:10:20.646388   74203 logs.go:278] No container was found matching "kube-apiserver"
	I0731 18:10:20.646395   74203 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 18:10:20.646453   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 18:10:20.680448   74203 cri.go:89] found id: ""
	I0731 18:10:20.680475   74203 logs.go:276] 0 containers: []
	W0731 18:10:20.680486   74203 logs.go:278] No container was found matching "etcd"
	I0731 18:10:20.680493   74203 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 18:10:20.680555   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 18:10:20.716330   74203 cri.go:89] found id: ""
	I0731 18:10:20.716365   74203 logs.go:276] 0 containers: []
	W0731 18:10:20.716378   74203 logs.go:278] No container was found matching "coredns"
	I0731 18:10:20.716387   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 18:10:20.716448   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 18:10:20.748630   74203 cri.go:89] found id: ""
	I0731 18:10:20.748657   74203 logs.go:276] 0 containers: []
	W0731 18:10:20.748665   74203 logs.go:278] No container was found matching "kube-scheduler"
	I0731 18:10:20.748670   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 18:10:20.748736   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 18:10:20.787769   74203 cri.go:89] found id: ""
	I0731 18:10:20.787793   74203 logs.go:276] 0 containers: []
	W0731 18:10:20.787802   74203 logs.go:278] No container was found matching "kube-proxy"
	I0731 18:10:20.787809   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 18:10:20.787869   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 18:10:20.819884   74203 cri.go:89] found id: ""
	I0731 18:10:20.819911   74203 logs.go:276] 0 containers: []
	W0731 18:10:20.819921   74203 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 18:10:20.819929   74203 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 18:10:20.819988   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 18:10:20.853414   74203 cri.go:89] found id: ""
	I0731 18:10:20.853437   74203 logs.go:276] 0 containers: []
	W0731 18:10:20.853445   74203 logs.go:278] No container was found matching "kindnet"
	I0731 18:10:20.853450   74203 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 18:10:20.853508   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 18:10:20.889198   74203 cri.go:89] found id: ""
	I0731 18:10:20.889224   74203 logs.go:276] 0 containers: []
	W0731 18:10:20.889231   74203 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 18:10:20.889239   74203 logs.go:123] Gathering logs for dmesg ...
	I0731 18:10:20.889251   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 18:10:20.903240   74203 logs.go:123] Gathering logs for describe nodes ...
	I0731 18:10:20.903268   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 18:10:20.971003   74203 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 18:10:20.971032   74203 logs.go:123] Gathering logs for CRI-O ...
	I0731 18:10:20.971051   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 18:10:21.045856   74203 logs.go:123] Gathering logs for container status ...
	I0731 18:10:21.045888   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 18:10:21.086089   74203 logs.go:123] Gathering logs for kubelet ...
	I0731 18:10:21.086121   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 18:10:23.639664   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:10:23.652573   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 18:10:23.652632   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 18:10:23.684719   74203 cri.go:89] found id: ""
	I0731 18:10:23.684746   74203 logs.go:276] 0 containers: []
	W0731 18:10:23.684757   74203 logs.go:278] No container was found matching "kube-apiserver"
	I0731 18:10:23.684765   74203 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 18:10:23.684820   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 18:10:23.717315   74203 cri.go:89] found id: ""
	I0731 18:10:23.717350   74203 logs.go:276] 0 containers: []
	W0731 18:10:23.717362   74203 logs.go:278] No container was found matching "etcd"
	I0731 18:10:23.717369   74203 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 18:10:23.717432   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 18:10:23.750251   74203 cri.go:89] found id: ""
	I0731 18:10:23.750275   74203 logs.go:276] 0 containers: []
	W0731 18:10:23.750286   74203 logs.go:278] No container was found matching "coredns"
	I0731 18:10:23.750293   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 18:10:23.750397   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 18:10:23.785700   74203 cri.go:89] found id: ""
	I0731 18:10:23.785726   74203 logs.go:276] 0 containers: []
	W0731 18:10:23.785737   74203 logs.go:278] No container was found matching "kube-scheduler"
	I0731 18:10:23.785745   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 18:10:23.785792   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 18:10:23.816856   74203 cri.go:89] found id: ""
	I0731 18:10:23.816885   74203 logs.go:276] 0 containers: []
	W0731 18:10:23.816895   74203 logs.go:278] No container was found matching "kube-proxy"
	I0731 18:10:23.816902   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 18:10:23.816965   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 18:10:23.849931   74203 cri.go:89] found id: ""
	I0731 18:10:23.849962   74203 logs.go:276] 0 containers: []
	W0731 18:10:23.849972   74203 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 18:10:23.849980   74203 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 18:10:23.850043   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 18:10:23.881413   74203 cri.go:89] found id: ""
	I0731 18:10:23.881444   74203 logs.go:276] 0 containers: []
	W0731 18:10:23.881452   74203 logs.go:278] No container was found matching "kindnet"
	I0731 18:10:23.881458   74203 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 18:10:23.881516   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 18:10:23.914272   74203 cri.go:89] found id: ""
	I0731 18:10:23.914303   74203 logs.go:276] 0 containers: []
	W0731 18:10:23.914313   74203 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 18:10:23.914325   74203 logs.go:123] Gathering logs for describe nodes ...
	I0731 18:10:23.914352   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 18:10:23.979988   74203 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 18:10:23.980015   74203 logs.go:123] Gathering logs for CRI-O ...
	I0731 18:10:23.980027   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 18:10:24.057159   74203 logs.go:123] Gathering logs for container status ...
	I0731 18:10:24.057198   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 18:10:24.097567   74203 logs.go:123] Gathering logs for kubelet ...
	I0731 18:10:24.097603   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 18:10:24.154740   74203 logs.go:123] Gathering logs for dmesg ...
	I0731 18:10:24.154781   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 18:10:26.670324   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:10:26.683866   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 18:10:26.683951   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 18:10:26.717671   74203 cri.go:89] found id: ""
	I0731 18:10:26.717722   74203 logs.go:276] 0 containers: []
	W0731 18:10:26.717733   74203 logs.go:278] No container was found matching "kube-apiserver"
	I0731 18:10:26.717739   74203 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 18:10:26.717790   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 18:10:26.751201   74203 cri.go:89] found id: ""
	I0731 18:10:26.751228   74203 logs.go:276] 0 containers: []
	W0731 18:10:26.751236   74203 logs.go:278] No container was found matching "etcd"
	I0731 18:10:26.751246   74203 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 18:10:26.751315   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 18:10:26.784768   74203 cri.go:89] found id: ""
	I0731 18:10:26.784793   74203 logs.go:276] 0 containers: []
	W0731 18:10:26.784803   74203 logs.go:278] No container was found matching "coredns"
	I0731 18:10:26.784811   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 18:10:26.784868   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 18:10:26.822269   74203 cri.go:89] found id: ""
	I0731 18:10:26.822298   74203 logs.go:276] 0 containers: []
	W0731 18:10:26.822307   74203 logs.go:278] No container was found matching "kube-scheduler"
	I0731 18:10:26.822315   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 18:10:26.822378   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 18:10:26.854405   74203 cri.go:89] found id: ""
	I0731 18:10:26.854427   74203 logs.go:276] 0 containers: []
	W0731 18:10:26.854434   74203 logs.go:278] No container was found matching "kube-proxy"
	I0731 18:10:26.854441   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 18:10:26.854490   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 18:10:26.888975   74203 cri.go:89] found id: ""
	I0731 18:10:26.889000   74203 logs.go:276] 0 containers: []
	W0731 18:10:26.889007   74203 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 18:10:26.889013   74203 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 18:10:26.889085   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 18:10:26.922940   74203 cri.go:89] found id: ""
	I0731 18:10:26.922967   74203 logs.go:276] 0 containers: []
	W0731 18:10:26.922976   74203 logs.go:278] No container was found matching "kindnet"
	I0731 18:10:26.922981   74203 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 18:10:26.923040   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 18:10:26.955717   74203 cri.go:89] found id: ""
	I0731 18:10:26.955743   74203 logs.go:276] 0 containers: []
	W0731 18:10:26.955754   74203 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 18:10:26.955764   74203 logs.go:123] Gathering logs for kubelet ...
	I0731 18:10:26.955779   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 18:10:27.006453   74203 logs.go:123] Gathering logs for dmesg ...
	I0731 18:10:27.006481   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 18:10:27.019136   74203 logs.go:123] Gathering logs for describe nodes ...
	I0731 18:10:27.019159   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 18:10:27.086988   74203 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 18:10:27.087014   74203 logs.go:123] Gathering logs for CRI-O ...
	I0731 18:10:27.087031   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 18:10:27.161574   74203 logs.go:123] Gathering logs for container status ...
	I0731 18:10:27.161604   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 18:10:29.705620   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:10:29.718718   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 18:10:29.718775   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 18:10:29.751079   74203 cri.go:89] found id: ""
	I0731 18:10:29.751123   74203 logs.go:276] 0 containers: []
	W0731 18:10:29.751134   74203 logs.go:278] No container was found matching "kube-apiserver"
	I0731 18:10:29.751142   74203 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 18:10:29.751198   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 18:10:29.790944   74203 cri.go:89] found id: ""
	I0731 18:10:29.790971   74203 logs.go:276] 0 containers: []
	W0731 18:10:29.790982   74203 logs.go:278] No container was found matching "etcd"
	I0731 18:10:29.790988   74203 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 18:10:29.791041   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 18:10:29.827921   74203 cri.go:89] found id: ""
	I0731 18:10:29.827951   74203 logs.go:276] 0 containers: []
	W0731 18:10:29.827965   74203 logs.go:278] No container was found matching "coredns"
	I0731 18:10:29.827971   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 18:10:29.828031   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 18:10:29.861365   74203 cri.go:89] found id: ""
	I0731 18:10:29.861398   74203 logs.go:276] 0 containers: []
	W0731 18:10:29.861409   74203 logs.go:278] No container was found matching "kube-scheduler"
	I0731 18:10:29.861417   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 18:10:29.861472   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 18:10:29.894509   74203 cri.go:89] found id: ""
	I0731 18:10:29.894537   74203 logs.go:276] 0 containers: []
	W0731 18:10:29.894546   74203 logs.go:278] No container was found matching "kube-proxy"
	I0731 18:10:29.894552   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 18:10:29.894614   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 18:10:29.926793   74203 cri.go:89] found id: ""
	I0731 18:10:29.926821   74203 logs.go:276] 0 containers: []
	W0731 18:10:29.926832   74203 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 18:10:29.926839   74203 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 18:10:29.926904   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 18:10:29.963765   74203 cri.go:89] found id: ""
	I0731 18:10:29.963792   74203 logs.go:276] 0 containers: []
	W0731 18:10:29.963802   74203 logs.go:278] No container was found matching "kindnet"
	I0731 18:10:29.963809   74203 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 18:10:29.963870   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 18:10:29.998577   74203 cri.go:89] found id: ""
	I0731 18:10:29.998604   74203 logs.go:276] 0 containers: []
	W0731 18:10:29.998611   74203 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 18:10:29.998619   74203 logs.go:123] Gathering logs for kubelet ...
	I0731 18:10:29.998630   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 18:10:30.050035   74203 logs.go:123] Gathering logs for dmesg ...
	I0731 18:10:30.050072   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 18:10:30.064147   74203 logs.go:123] Gathering logs for describe nodes ...
	I0731 18:10:30.064178   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 18:10:30.136990   74203 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 18:10:30.137012   74203 logs.go:123] Gathering logs for CRI-O ...
	I0731 18:10:30.137030   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 18:10:30.214687   74203 logs.go:123] Gathering logs for container status ...
	I0731 18:10:30.214719   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 18:10:32.753503   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:10:32.766795   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 18:10:32.766873   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 18:10:32.812134   74203 cri.go:89] found id: ""
	I0731 18:10:32.812161   74203 logs.go:276] 0 containers: []
	W0731 18:10:32.812169   74203 logs.go:278] No container was found matching "kube-apiserver"
	I0731 18:10:32.812175   74203 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 18:10:32.812229   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 18:10:32.846997   74203 cri.go:89] found id: ""
	I0731 18:10:32.847029   74203 logs.go:276] 0 containers: []
	W0731 18:10:32.847039   74203 logs.go:278] No container was found matching "etcd"
	I0731 18:10:32.847044   74203 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 18:10:32.847092   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 18:10:32.884093   74203 cri.go:89] found id: ""
	I0731 18:10:32.884123   74203 logs.go:276] 0 containers: []
	W0731 18:10:32.884132   74203 logs.go:278] No container was found matching "coredns"
	I0731 18:10:32.884138   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 18:10:32.884188   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 18:10:32.920160   74203 cri.go:89] found id: ""
	I0731 18:10:32.920186   74203 logs.go:276] 0 containers: []
	W0731 18:10:32.920197   74203 logs.go:278] No container was found matching "kube-scheduler"
	I0731 18:10:32.920204   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 18:10:32.920263   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 18:10:32.952750   74203 cri.go:89] found id: ""
	I0731 18:10:32.952777   74203 logs.go:276] 0 containers: []
	W0731 18:10:32.952788   74203 logs.go:278] No container was found matching "kube-proxy"
	I0731 18:10:32.952795   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 18:10:32.952865   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 18:10:32.989086   74203 cri.go:89] found id: ""
	I0731 18:10:32.989115   74203 logs.go:276] 0 containers: []
	W0731 18:10:32.989125   74203 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 18:10:32.989135   74203 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 18:10:32.989189   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 18:10:33.021554   74203 cri.go:89] found id: ""
	I0731 18:10:33.021590   74203 logs.go:276] 0 containers: []
	W0731 18:10:33.021602   74203 logs.go:278] No container was found matching "kindnet"
	I0731 18:10:33.021609   74203 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 18:10:33.021662   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 18:10:33.061097   74203 cri.go:89] found id: ""
	I0731 18:10:33.061128   74203 logs.go:276] 0 containers: []
	W0731 18:10:33.061141   74203 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 18:10:33.061160   74203 logs.go:123] Gathering logs for kubelet ...
	I0731 18:10:33.061174   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 18:10:33.113497   74203 logs.go:123] Gathering logs for dmesg ...
	I0731 18:10:33.113534   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 18:10:33.126816   74203 logs.go:123] Gathering logs for describe nodes ...
	I0731 18:10:33.126842   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 18:10:33.196713   74203 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 18:10:33.196733   74203 logs.go:123] Gathering logs for CRI-O ...
	I0731 18:10:33.196744   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 18:10:33.277697   74203 logs.go:123] Gathering logs for container status ...
	I0731 18:10:33.277724   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 18:10:35.817143   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:10:35.829760   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 18:10:35.829820   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 18:10:35.862974   74203 cri.go:89] found id: ""
	I0731 18:10:35.863002   74203 logs.go:276] 0 containers: []
	W0731 18:10:35.863014   74203 logs.go:278] No container was found matching "kube-apiserver"
	I0731 18:10:35.863022   74203 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 18:10:35.863078   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 18:10:35.898547   74203 cri.go:89] found id: ""
	I0731 18:10:35.898576   74203 logs.go:276] 0 containers: []
	W0731 18:10:35.898584   74203 logs.go:278] No container was found matching "etcd"
	I0731 18:10:35.898590   74203 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 18:10:35.898651   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 18:10:35.930351   74203 cri.go:89] found id: ""
	I0731 18:10:35.930379   74203 logs.go:276] 0 containers: []
	W0731 18:10:35.930390   74203 logs.go:278] No container was found matching "coredns"
	I0731 18:10:35.930396   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 18:10:35.930463   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 18:10:35.962623   74203 cri.go:89] found id: ""
	I0731 18:10:35.962652   74203 logs.go:276] 0 containers: []
	W0731 18:10:35.962663   74203 logs.go:278] No container was found matching "kube-scheduler"
	I0731 18:10:35.962670   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 18:10:35.962727   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 18:10:35.998213   74203 cri.go:89] found id: ""
	I0731 18:10:35.998233   74203 logs.go:276] 0 containers: []
	W0731 18:10:35.998240   74203 logs.go:278] No container was found matching "kube-proxy"
	I0731 18:10:35.998245   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 18:10:35.998291   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 18:10:36.032670   74203 cri.go:89] found id: ""
	I0731 18:10:36.032695   74203 logs.go:276] 0 containers: []
	W0731 18:10:36.032703   74203 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 18:10:36.032709   74203 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 18:10:36.032757   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 18:10:36.066349   74203 cri.go:89] found id: ""
	I0731 18:10:36.066381   74203 logs.go:276] 0 containers: []
	W0731 18:10:36.066392   74203 logs.go:278] No container was found matching "kindnet"
	I0731 18:10:36.066399   74203 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 18:10:36.066461   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 18:10:36.104137   74203 cri.go:89] found id: ""
	I0731 18:10:36.104168   74203 logs.go:276] 0 containers: []
	W0731 18:10:36.104180   74203 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 18:10:36.104200   74203 logs.go:123] Gathering logs for kubelet ...
	I0731 18:10:36.104215   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 18:10:36.155814   74203 logs.go:123] Gathering logs for dmesg ...
	I0731 18:10:36.155844   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 18:10:36.168885   74203 logs.go:123] Gathering logs for describe nodes ...
	I0731 18:10:36.168912   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 18:10:36.235950   74203 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 18:10:36.235972   74203 logs.go:123] Gathering logs for CRI-O ...
	I0731 18:10:36.235987   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 18:10:36.318382   74203 logs.go:123] Gathering logs for container status ...
	I0731 18:10:36.318414   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 18:10:38.853972   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:10:38.867018   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 18:10:38.867089   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 18:10:38.902069   74203 cri.go:89] found id: ""
	I0731 18:10:38.902097   74203 logs.go:276] 0 containers: []
	W0731 18:10:38.902109   74203 logs.go:278] No container was found matching "kube-apiserver"
	I0731 18:10:38.902115   74203 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 18:10:38.902181   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 18:10:38.935272   74203 cri.go:89] found id: ""
	I0731 18:10:38.935296   74203 logs.go:276] 0 containers: []
	W0731 18:10:38.935316   74203 logs.go:278] No container was found matching "etcd"
	I0731 18:10:38.935329   74203 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 18:10:38.935387   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 18:10:38.968582   74203 cri.go:89] found id: ""
	I0731 18:10:38.968610   74203 logs.go:276] 0 containers: []
	W0731 18:10:38.968621   74203 logs.go:278] No container was found matching "coredns"
	I0731 18:10:38.968629   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 18:10:38.968688   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 18:10:38.999740   74203 cri.go:89] found id: ""
	I0731 18:10:38.999770   74203 logs.go:276] 0 containers: []
	W0731 18:10:38.999780   74203 logs.go:278] No container was found matching "kube-scheduler"
	I0731 18:10:38.999787   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 18:10:38.999845   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 18:10:39.032964   74203 cri.go:89] found id: ""
	I0731 18:10:39.032993   74203 logs.go:276] 0 containers: []
	W0731 18:10:39.033008   74203 logs.go:278] No container was found matching "kube-proxy"
	I0731 18:10:39.033015   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 18:10:39.033099   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 18:10:39.064121   74203 cri.go:89] found id: ""
	I0731 18:10:39.064149   74203 logs.go:276] 0 containers: []
	W0731 18:10:39.064158   74203 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 18:10:39.064164   74203 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 18:10:39.064222   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 18:10:39.098462   74203 cri.go:89] found id: ""
	I0731 18:10:39.098488   74203 logs.go:276] 0 containers: []
	W0731 18:10:39.098498   74203 logs.go:278] No container was found matching "kindnet"
	I0731 18:10:39.098505   74203 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 18:10:39.098564   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 18:10:39.130627   74203 cri.go:89] found id: ""
	I0731 18:10:39.130653   74203 logs.go:276] 0 containers: []
	W0731 18:10:39.130663   74203 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 18:10:39.130674   74203 logs.go:123] Gathering logs for CRI-O ...
	I0731 18:10:39.130687   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 18:10:39.223664   74203 logs.go:123] Gathering logs for container status ...
	I0731 18:10:39.223698   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 18:10:39.260502   74203 logs.go:123] Gathering logs for kubelet ...
	I0731 18:10:39.260533   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 18:10:39.315643   74203 logs.go:123] Gathering logs for dmesg ...
	I0731 18:10:39.315675   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 18:10:39.329731   74203 logs.go:123] Gathering logs for describe nodes ...
	I0731 18:10:39.329761   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 18:10:39.395078   74203 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 18:10:41.895698   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:10:41.910111   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 18:10:41.910191   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 18:10:41.943700   74203 cri.go:89] found id: ""
	I0731 18:10:41.943732   74203 logs.go:276] 0 containers: []
	W0731 18:10:41.943743   74203 logs.go:278] No container was found matching "kube-apiserver"
	I0731 18:10:41.943751   74203 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 18:10:41.943812   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 18:10:41.976848   74203 cri.go:89] found id: ""
	I0731 18:10:41.976879   74203 logs.go:276] 0 containers: []
	W0731 18:10:41.976888   74203 logs.go:278] No container was found matching "etcd"
	I0731 18:10:41.976894   74203 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 18:10:41.976967   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 18:10:42.009424   74203 cri.go:89] found id: ""
	I0731 18:10:42.009451   74203 logs.go:276] 0 containers: []
	W0731 18:10:42.009462   74203 logs.go:278] No container was found matching "coredns"
	I0731 18:10:42.009477   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 18:10:42.009546   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 18:10:42.047233   74203 cri.go:89] found id: ""
	I0731 18:10:42.047260   74203 logs.go:276] 0 containers: []
	W0731 18:10:42.047268   74203 logs.go:278] No container was found matching "kube-scheduler"
	I0731 18:10:42.047274   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 18:10:42.047342   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 18:10:42.079900   74203 cri.go:89] found id: ""
	I0731 18:10:42.079928   74203 logs.go:276] 0 containers: []
	W0731 18:10:42.079938   74203 logs.go:278] No container was found matching "kube-proxy"
	I0731 18:10:42.079945   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 18:10:42.080025   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 18:10:42.114122   74203 cri.go:89] found id: ""
	I0731 18:10:42.114152   74203 logs.go:276] 0 containers: []
	W0731 18:10:42.114164   74203 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 18:10:42.114172   74203 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 18:10:42.114224   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 18:10:42.148741   74203 cri.go:89] found id: ""
	I0731 18:10:42.148768   74203 logs.go:276] 0 containers: []
	W0731 18:10:42.148780   74203 logs.go:278] No container was found matching "kindnet"
	I0731 18:10:42.148789   74203 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 18:10:42.148853   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 18:10:42.184739   74203 cri.go:89] found id: ""
	I0731 18:10:42.184762   74203 logs.go:276] 0 containers: []
	W0731 18:10:42.184769   74203 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 18:10:42.184777   74203 logs.go:123] Gathering logs for describe nodes ...
	I0731 18:10:42.184791   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 18:10:42.254676   74203 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 18:10:42.254694   74203 logs.go:123] Gathering logs for CRI-O ...
	I0731 18:10:42.254706   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 18:10:42.334936   74203 logs.go:123] Gathering logs for container status ...
	I0731 18:10:42.334978   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 18:10:42.371511   74203 logs.go:123] Gathering logs for kubelet ...
	I0731 18:10:42.371540   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 18:10:42.421800   74203 logs.go:123] Gathering logs for dmesg ...
	I0731 18:10:42.421831   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 18:10:44.934983   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:10:44.947212   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 18:10:44.947293   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 18:10:44.979722   74203 cri.go:89] found id: ""
	I0731 18:10:44.979748   74203 logs.go:276] 0 containers: []
	W0731 18:10:44.979760   74203 logs.go:278] No container was found matching "kube-apiserver"
	I0731 18:10:44.979767   74203 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 18:10:44.979819   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 18:10:45.011594   74203 cri.go:89] found id: ""
	I0731 18:10:45.011620   74203 logs.go:276] 0 containers: []
	W0731 18:10:45.011630   74203 logs.go:278] No container was found matching "etcd"
	I0731 18:10:45.011637   74203 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 18:10:45.011803   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 18:10:45.043174   74203 cri.go:89] found id: ""
	I0731 18:10:45.043197   74203 logs.go:276] 0 containers: []
	W0731 18:10:45.043207   74203 logs.go:278] No container was found matching "coredns"
	I0731 18:10:45.043214   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 18:10:45.043278   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 18:10:45.074629   74203 cri.go:89] found id: ""
	I0731 18:10:45.074652   74203 logs.go:276] 0 containers: []
	W0731 18:10:45.074662   74203 logs.go:278] No container was found matching "kube-scheduler"
	I0731 18:10:45.074669   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 18:10:45.074727   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 18:10:45.108917   74203 cri.go:89] found id: ""
	I0731 18:10:45.108944   74203 logs.go:276] 0 containers: []
	W0731 18:10:45.108952   74203 logs.go:278] No container was found matching "kube-proxy"
	I0731 18:10:45.108959   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 18:10:45.109018   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 18:10:45.142200   74203 cri.go:89] found id: ""
	I0731 18:10:45.142227   74203 logs.go:276] 0 containers: []
	W0731 18:10:45.142237   74203 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 18:10:45.142244   74203 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 18:10:45.142306   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 18:10:45.177076   74203 cri.go:89] found id: ""
	I0731 18:10:45.177101   74203 logs.go:276] 0 containers: []
	W0731 18:10:45.177109   74203 logs.go:278] No container was found matching "kindnet"
	I0731 18:10:45.177114   74203 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 18:10:45.177168   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 18:10:45.209352   74203 cri.go:89] found id: ""
	I0731 18:10:45.209376   74203 logs.go:276] 0 containers: []
	W0731 18:10:45.209383   74203 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 18:10:45.209392   74203 logs.go:123] Gathering logs for kubelet ...
	I0731 18:10:45.209407   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 18:10:45.257966   74203 logs.go:123] Gathering logs for dmesg ...
	I0731 18:10:45.257998   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 18:10:45.272429   74203 logs.go:123] Gathering logs for describe nodes ...
	I0731 18:10:45.272462   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 18:10:45.347952   74203 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 18:10:45.347973   74203 logs.go:123] Gathering logs for CRI-O ...
	I0731 18:10:45.347988   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 18:10:45.428556   74203 logs.go:123] Gathering logs for container status ...
	I0731 18:10:45.428609   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 18:10:47.971089   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:10:47.986677   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 18:10:47.986749   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 18:10:48.020396   74203 cri.go:89] found id: ""
	I0731 18:10:48.020426   74203 logs.go:276] 0 containers: []
	W0731 18:10:48.020438   74203 logs.go:278] No container was found matching "kube-apiserver"
	I0731 18:10:48.020446   74203 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 18:10:48.020512   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 18:10:48.058129   74203 cri.go:89] found id: ""
	I0731 18:10:48.058161   74203 logs.go:276] 0 containers: []
	W0731 18:10:48.058172   74203 logs.go:278] No container was found matching "etcd"
	I0731 18:10:48.058180   74203 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 18:10:48.058249   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 18:10:48.091894   74203 cri.go:89] found id: ""
	I0731 18:10:48.091922   74203 logs.go:276] 0 containers: []
	W0731 18:10:48.091932   74203 logs.go:278] No container was found matching "coredns"
	I0731 18:10:48.091939   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 18:10:48.091998   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 18:10:48.124757   74203 cri.go:89] found id: ""
	I0731 18:10:48.124788   74203 logs.go:276] 0 containers: []
	W0731 18:10:48.124798   74203 logs.go:278] No container was found matching "kube-scheduler"
	I0731 18:10:48.124807   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 18:10:48.124871   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 18:10:48.159145   74203 cri.go:89] found id: ""
	I0731 18:10:48.159172   74203 logs.go:276] 0 containers: []
	W0731 18:10:48.159184   74203 logs.go:278] No container was found matching "kube-proxy"
	I0731 18:10:48.159191   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 18:10:48.159253   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 18:10:48.200024   74203 cri.go:89] found id: ""
	I0731 18:10:48.200051   74203 logs.go:276] 0 containers: []
	W0731 18:10:48.200061   74203 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 18:10:48.200069   74203 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 18:10:48.200128   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 18:10:48.233838   74203 cri.go:89] found id: ""
	I0731 18:10:48.233870   74203 logs.go:276] 0 containers: []
	W0731 18:10:48.233880   74203 logs.go:278] No container was found matching "kindnet"
	I0731 18:10:48.233886   74203 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 18:10:48.233941   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 18:10:48.265786   74203 cri.go:89] found id: ""
	I0731 18:10:48.265812   74203 logs.go:276] 0 containers: []
	W0731 18:10:48.265821   74203 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 18:10:48.265832   74203 logs.go:123] Gathering logs for dmesg ...
	I0731 18:10:48.265846   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 18:10:48.280422   74203 logs.go:123] Gathering logs for describe nodes ...
	I0731 18:10:48.280449   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 18:10:48.346774   74203 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 18:10:48.346796   74203 logs.go:123] Gathering logs for CRI-O ...
	I0731 18:10:48.346808   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 18:10:48.424017   74203 logs.go:123] Gathering logs for container status ...
	I0731 18:10:48.424052   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 18:10:48.464139   74203 logs.go:123] Gathering logs for kubelet ...
	I0731 18:10:48.464166   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 18:10:51.013681   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:10:51.028745   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 18:10:51.028814   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 18:10:51.062656   74203 cri.go:89] found id: ""
	I0731 18:10:51.062683   74203 logs.go:276] 0 containers: []
	W0731 18:10:51.062691   74203 logs.go:278] No container was found matching "kube-apiserver"
	I0731 18:10:51.062700   74203 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 18:10:51.062749   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 18:10:51.099203   74203 cri.go:89] found id: ""
	I0731 18:10:51.099228   74203 logs.go:276] 0 containers: []
	W0731 18:10:51.099237   74203 logs.go:278] No container was found matching "etcd"
	I0731 18:10:51.099243   74203 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 18:10:51.099310   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 18:10:51.133507   74203 cri.go:89] found id: ""
	I0731 18:10:51.133533   74203 logs.go:276] 0 containers: []
	W0731 18:10:51.133540   74203 logs.go:278] No container was found matching "coredns"
	I0731 18:10:51.133546   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 18:10:51.133596   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 18:10:51.169935   74203 cri.go:89] found id: ""
	I0731 18:10:51.169954   74203 logs.go:276] 0 containers: []
	W0731 18:10:51.169961   74203 logs.go:278] No container was found matching "kube-scheduler"
	I0731 18:10:51.169966   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 18:10:51.170012   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 18:10:51.202877   74203 cri.go:89] found id: ""
	I0731 18:10:51.202903   74203 logs.go:276] 0 containers: []
	W0731 18:10:51.202913   74203 logs.go:278] No container was found matching "kube-proxy"
	I0731 18:10:51.202919   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 18:10:51.202988   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 18:10:51.239913   74203 cri.go:89] found id: ""
	I0731 18:10:51.239939   74203 logs.go:276] 0 containers: []
	W0731 18:10:51.239949   74203 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 18:10:51.239957   74203 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 18:10:51.240018   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 18:10:51.272024   74203 cri.go:89] found id: ""
	I0731 18:10:51.272095   74203 logs.go:276] 0 containers: []
	W0731 18:10:51.272115   74203 logs.go:278] No container was found matching "kindnet"
	I0731 18:10:51.272123   74203 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 18:10:51.272185   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 18:10:51.307016   74203 cri.go:89] found id: ""
	I0731 18:10:51.307043   74203 logs.go:276] 0 containers: []
	W0731 18:10:51.307053   74203 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 18:10:51.307063   74203 logs.go:123] Gathering logs for kubelet ...
	I0731 18:10:51.307079   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 18:10:51.364018   74203 logs.go:123] Gathering logs for dmesg ...
	I0731 18:10:51.364066   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 18:10:51.384277   74203 logs.go:123] Gathering logs for describe nodes ...
	I0731 18:10:51.384303   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 18:10:51.472657   74203 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 18:10:51.472679   74203 logs.go:123] Gathering logs for CRI-O ...
	I0731 18:10:51.472696   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 18:10:51.548408   74203 logs.go:123] Gathering logs for container status ...
	I0731 18:10:51.548439   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 18:10:54.086526   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:10:54.099293   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 18:10:54.099368   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 18:10:54.129927   74203 cri.go:89] found id: ""
	I0731 18:10:54.129954   74203 logs.go:276] 0 containers: []
	W0731 18:10:54.129965   74203 logs.go:278] No container was found matching "kube-apiserver"
	I0731 18:10:54.129972   74203 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 18:10:54.130042   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 18:10:54.166428   74203 cri.go:89] found id: ""
	I0731 18:10:54.166457   74203 logs.go:276] 0 containers: []
	W0731 18:10:54.166468   74203 logs.go:278] No container was found matching "etcd"
	I0731 18:10:54.166476   74203 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 18:10:54.166538   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 18:10:54.204523   74203 cri.go:89] found id: ""
	I0731 18:10:54.204549   74203 logs.go:276] 0 containers: []
	W0731 18:10:54.204556   74203 logs.go:278] No container was found matching "coredns"
	I0731 18:10:54.204562   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 18:10:54.204619   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 18:10:54.241706   74203 cri.go:89] found id: ""
	I0731 18:10:54.241730   74203 logs.go:276] 0 containers: []
	W0731 18:10:54.241737   74203 logs.go:278] No container was found matching "kube-scheduler"
	I0731 18:10:54.241744   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 18:10:54.241802   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 18:10:54.277154   74203 cri.go:89] found id: ""
	I0731 18:10:54.277178   74203 logs.go:276] 0 containers: []
	W0731 18:10:54.277187   74203 logs.go:278] No container was found matching "kube-proxy"
	I0731 18:10:54.277193   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 18:10:54.277255   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 18:10:54.310198   74203 cri.go:89] found id: ""
	I0731 18:10:54.310223   74203 logs.go:276] 0 containers: []
	W0731 18:10:54.310231   74203 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 18:10:54.310237   74203 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 18:10:54.310283   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 18:10:54.344807   74203 cri.go:89] found id: ""
	I0731 18:10:54.344837   74203 logs.go:276] 0 containers: []
	W0731 18:10:54.344847   74203 logs.go:278] No container was found matching "kindnet"
	I0731 18:10:54.344854   74203 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 18:10:54.344915   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 18:10:54.383358   74203 cri.go:89] found id: ""
	I0731 18:10:54.383391   74203 logs.go:276] 0 containers: []
	W0731 18:10:54.383400   74203 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 18:10:54.383410   74203 logs.go:123] Gathering logs for kubelet ...
	I0731 18:10:54.383424   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 18:10:54.431876   74203 logs.go:123] Gathering logs for dmesg ...
	I0731 18:10:54.431908   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 18:10:54.444797   74203 logs.go:123] Gathering logs for describe nodes ...
	I0731 18:10:54.444824   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 18:10:54.518816   74203 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 18:10:54.518839   74203 logs.go:123] Gathering logs for CRI-O ...
	I0731 18:10:54.518855   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 18:10:54.600072   74203 logs.go:123] Gathering logs for container status ...
	I0731 18:10:54.600109   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 18:10:57.141070   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:10:57.155903   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 18:10:57.155975   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 18:10:57.189406   74203 cri.go:89] found id: ""
	I0731 18:10:57.189428   74203 logs.go:276] 0 containers: []
	W0731 18:10:57.189435   74203 logs.go:278] No container was found matching "kube-apiserver"
	I0731 18:10:57.189441   74203 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 18:10:57.189510   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 18:10:57.221507   74203 cri.go:89] found id: ""
	I0731 18:10:57.221531   74203 logs.go:276] 0 containers: []
	W0731 18:10:57.221540   74203 logs.go:278] No container was found matching "etcd"
	I0731 18:10:57.221547   74203 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 18:10:57.221614   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 18:10:57.257843   74203 cri.go:89] found id: ""
	I0731 18:10:57.257868   74203 logs.go:276] 0 containers: []
	W0731 18:10:57.257880   74203 logs.go:278] No container was found matching "coredns"
	I0731 18:10:57.257887   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 18:10:57.257939   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 18:10:57.292697   74203 cri.go:89] found id: ""
	I0731 18:10:57.292728   74203 logs.go:276] 0 containers: []
	W0731 18:10:57.292737   74203 logs.go:278] No container was found matching "kube-scheduler"
	I0731 18:10:57.292744   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 18:10:57.292802   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 18:10:57.325705   74203 cri.go:89] found id: ""
	I0731 18:10:57.325728   74203 logs.go:276] 0 containers: []
	W0731 18:10:57.325735   74203 logs.go:278] No container was found matching "kube-proxy"
	I0731 18:10:57.325740   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 18:10:57.325787   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 18:10:57.357436   74203 cri.go:89] found id: ""
	I0731 18:10:57.357463   74203 logs.go:276] 0 containers: []
	W0731 18:10:57.357471   74203 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 18:10:57.357477   74203 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 18:10:57.357534   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 18:10:57.388215   74203 cri.go:89] found id: ""
	I0731 18:10:57.388240   74203 logs.go:276] 0 containers: []
	W0731 18:10:57.388249   74203 logs.go:278] No container was found matching "kindnet"
	I0731 18:10:57.388256   74203 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 18:10:57.388315   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 18:10:57.419609   74203 cri.go:89] found id: ""
	I0731 18:10:57.419631   74203 logs.go:276] 0 containers: []
	W0731 18:10:57.419643   74203 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 18:10:57.419652   74203 logs.go:123] Gathering logs for CRI-O ...
	I0731 18:10:57.419663   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 18:10:57.497157   74203 logs.go:123] Gathering logs for container status ...
	I0731 18:10:57.497188   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 18:10:57.533512   74203 logs.go:123] Gathering logs for kubelet ...
	I0731 18:10:57.533552   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 18:10:57.587866   74203 logs.go:123] Gathering logs for dmesg ...
	I0731 18:10:57.587904   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 18:10:57.601191   74203 logs.go:123] Gathering logs for describe nodes ...
	I0731 18:10:57.601222   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 18:10:57.681899   74203 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 18:11:00.182160   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:11:00.195509   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 18:11:00.195598   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 18:11:00.230650   74203 cri.go:89] found id: ""
	I0731 18:11:00.230674   74203 logs.go:276] 0 containers: []
	W0731 18:11:00.230682   74203 logs.go:278] No container was found matching "kube-apiserver"
	I0731 18:11:00.230689   74203 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 18:11:00.230747   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 18:11:00.268629   74203 cri.go:89] found id: ""
	I0731 18:11:00.268656   74203 logs.go:276] 0 containers: []
	W0731 18:11:00.268666   74203 logs.go:278] No container was found matching "etcd"
	I0731 18:11:00.268672   74203 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 18:11:00.268740   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 18:11:00.301805   74203 cri.go:89] found id: ""
	I0731 18:11:00.301827   74203 logs.go:276] 0 containers: []
	W0731 18:11:00.301836   74203 logs.go:278] No container was found matching "coredns"
	I0731 18:11:00.301843   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 18:11:00.301901   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 18:11:00.333844   74203 cri.go:89] found id: ""
	I0731 18:11:00.333871   74203 logs.go:276] 0 containers: []
	W0731 18:11:00.333882   74203 logs.go:278] No container was found matching "kube-scheduler"
	I0731 18:11:00.333889   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 18:11:00.333949   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 18:11:00.366250   74203 cri.go:89] found id: ""
	I0731 18:11:00.366278   74203 logs.go:276] 0 containers: []
	W0731 18:11:00.366288   74203 logs.go:278] No container was found matching "kube-proxy"
	I0731 18:11:00.366295   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 18:11:00.366358   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 18:11:00.399301   74203 cri.go:89] found id: ""
	I0731 18:11:00.399325   74203 logs.go:276] 0 containers: []
	W0731 18:11:00.399335   74203 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 18:11:00.399342   74203 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 18:11:00.399405   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 18:11:00.432182   74203 cri.go:89] found id: ""
	I0731 18:11:00.432207   74203 logs.go:276] 0 containers: []
	W0731 18:11:00.432218   74203 logs.go:278] No container was found matching "kindnet"
	I0731 18:11:00.432224   74203 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 18:11:00.432284   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 18:11:00.465395   74203 cri.go:89] found id: ""
	I0731 18:11:00.465423   74203 logs.go:276] 0 containers: []
	W0731 18:11:00.465432   74203 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 18:11:00.465440   74203 logs.go:123] Gathering logs for kubelet ...
	I0731 18:11:00.465453   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 18:11:00.516042   74203 logs.go:123] Gathering logs for dmesg ...
	I0731 18:11:00.516077   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 18:11:00.528621   74203 logs.go:123] Gathering logs for describe nodes ...
	I0731 18:11:00.528647   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 18:11:00.600297   74203 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 18:11:00.600322   74203 logs.go:123] Gathering logs for CRI-O ...
	I0731 18:11:00.600339   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 18:11:00.680368   74203 logs.go:123] Gathering logs for container status ...
	I0731 18:11:00.680399   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 18:11:03.217684   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:11:03.230691   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 18:11:03.230752   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 18:11:03.264882   74203 cri.go:89] found id: ""
	I0731 18:11:03.264910   74203 logs.go:276] 0 containers: []
	W0731 18:11:03.264918   74203 logs.go:278] No container was found matching "kube-apiserver"
	I0731 18:11:03.264924   74203 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 18:11:03.264976   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 18:11:03.301608   74203 cri.go:89] found id: ""
	I0731 18:11:03.301733   74203 logs.go:276] 0 containers: []
	W0731 18:11:03.301754   74203 logs.go:278] No container was found matching "etcd"
	I0731 18:11:03.301765   74203 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 18:11:03.301838   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 18:11:03.335077   74203 cri.go:89] found id: ""
	I0731 18:11:03.335102   74203 logs.go:276] 0 containers: []
	W0731 18:11:03.335121   74203 logs.go:278] No container was found matching "coredns"
	I0731 18:11:03.335128   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 18:11:03.335196   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 18:11:03.370755   74203 cri.go:89] found id: ""
	I0731 18:11:03.370783   74203 logs.go:276] 0 containers: []
	W0731 18:11:03.370794   74203 logs.go:278] No container was found matching "kube-scheduler"
	I0731 18:11:03.370802   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 18:11:03.370862   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 18:11:03.403004   74203 cri.go:89] found id: ""
	I0731 18:11:03.403035   74203 logs.go:276] 0 containers: []
	W0731 18:11:03.403045   74203 logs.go:278] No container was found matching "kube-proxy"
	I0731 18:11:03.403052   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 18:11:03.403125   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 18:11:03.437169   74203 cri.go:89] found id: ""
	I0731 18:11:03.437209   74203 logs.go:276] 0 containers: []
	W0731 18:11:03.437219   74203 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 18:11:03.437235   74203 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 18:11:03.437296   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 18:11:03.469956   74203 cri.go:89] found id: ""
	I0731 18:11:03.469981   74203 logs.go:276] 0 containers: []
	W0731 18:11:03.469991   74203 logs.go:278] No container was found matching "kindnet"
	I0731 18:11:03.469998   74203 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 18:11:03.470056   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 18:11:03.503850   74203 cri.go:89] found id: ""
	I0731 18:11:03.503878   74203 logs.go:276] 0 containers: []
	W0731 18:11:03.503888   74203 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 18:11:03.503898   74203 logs.go:123] Gathering logs for kubelet ...
	I0731 18:11:03.503913   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 18:11:03.554993   74203 logs.go:123] Gathering logs for dmesg ...
	I0731 18:11:03.555036   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 18:11:03.567898   74203 logs.go:123] Gathering logs for describe nodes ...
	I0731 18:11:03.567925   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 18:11:03.630151   74203 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 18:11:03.630188   74203 logs.go:123] Gathering logs for CRI-O ...
	I0731 18:11:03.630207   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 18:11:03.708552   74203 logs.go:123] Gathering logs for container status ...
	I0731 18:11:03.708596   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 18:11:06.249728   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:11:06.261923   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 18:11:06.261998   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 18:11:06.296249   74203 cri.go:89] found id: ""
	I0731 18:11:06.296276   74203 logs.go:276] 0 containers: []
	W0731 18:11:06.296286   74203 logs.go:278] No container was found matching "kube-apiserver"
	I0731 18:11:06.296292   74203 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 18:11:06.296356   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 18:11:06.329355   74203 cri.go:89] found id: ""
	I0731 18:11:06.329381   74203 logs.go:276] 0 containers: []
	W0731 18:11:06.329389   74203 logs.go:278] No container was found matching "etcd"
	I0731 18:11:06.329395   74203 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 18:11:06.329443   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 18:11:06.362585   74203 cri.go:89] found id: ""
	I0731 18:11:06.362618   74203 logs.go:276] 0 containers: []
	W0731 18:11:06.362630   74203 logs.go:278] No container was found matching "coredns"
	I0731 18:11:06.362643   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 18:11:06.362704   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 18:11:06.396489   74203 cri.go:89] found id: ""
	I0731 18:11:06.396514   74203 logs.go:276] 0 containers: []
	W0731 18:11:06.396521   74203 logs.go:278] No container was found matching "kube-scheduler"
	I0731 18:11:06.396527   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 18:11:06.396590   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 18:11:06.428859   74203 cri.go:89] found id: ""
	I0731 18:11:06.428888   74203 logs.go:276] 0 containers: []
	W0731 18:11:06.428897   74203 logs.go:278] No container was found matching "kube-proxy"
	I0731 18:11:06.428903   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 18:11:06.428960   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 18:11:06.468817   74203 cri.go:89] found id: ""
	I0731 18:11:06.468846   74203 logs.go:276] 0 containers: []
	W0731 18:11:06.468856   74203 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 18:11:06.468864   74203 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 18:11:06.468924   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 18:11:06.499975   74203 cri.go:89] found id: ""
	I0731 18:11:06.500000   74203 logs.go:276] 0 containers: []
	W0731 18:11:06.500008   74203 logs.go:278] No container was found matching "kindnet"
	I0731 18:11:06.500013   74203 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 18:11:06.500067   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 18:11:06.537410   74203 cri.go:89] found id: ""
	I0731 18:11:06.537440   74203 logs.go:276] 0 containers: []
	W0731 18:11:06.537451   74203 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 18:11:06.537461   74203 logs.go:123] Gathering logs for kubelet ...
	I0731 18:11:06.537476   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 18:11:06.589664   74203 logs.go:123] Gathering logs for dmesg ...
	I0731 18:11:06.589709   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 18:11:06.603978   74203 logs.go:123] Gathering logs for describe nodes ...
	I0731 18:11:06.604005   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 18:11:06.673436   74203 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 18:11:06.673454   74203 logs.go:123] Gathering logs for CRI-O ...
	I0731 18:11:06.673465   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 18:11:06.757101   74203 logs.go:123] Gathering logs for container status ...
	I0731 18:11:06.757143   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 18:11:09.299562   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:11:09.311910   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 18:11:09.311971   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 18:11:09.346517   74203 cri.go:89] found id: ""
	I0731 18:11:09.346545   74203 logs.go:276] 0 containers: []
	W0731 18:11:09.346555   74203 logs.go:278] No container was found matching "kube-apiserver"
	I0731 18:11:09.346562   74203 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 18:11:09.346634   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 18:11:09.377688   74203 cri.go:89] found id: ""
	I0731 18:11:09.377713   74203 logs.go:276] 0 containers: []
	W0731 18:11:09.377720   74203 logs.go:278] No container was found matching "etcd"
	I0731 18:11:09.377726   74203 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 18:11:09.377787   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 18:11:09.412149   74203 cri.go:89] found id: ""
	I0731 18:11:09.412176   74203 logs.go:276] 0 containers: []
	W0731 18:11:09.412186   74203 logs.go:278] No container was found matching "coredns"
	I0731 18:11:09.412193   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 18:11:09.412259   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 18:11:09.444134   74203 cri.go:89] found id: ""
	I0731 18:11:09.444162   74203 logs.go:276] 0 containers: []
	W0731 18:11:09.444172   74203 logs.go:278] No container was found matching "kube-scheduler"
	I0731 18:11:09.444178   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 18:11:09.444233   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 18:11:09.481407   74203 cri.go:89] found id: ""
	I0731 18:11:09.481436   74203 logs.go:276] 0 containers: []
	W0731 18:11:09.481447   74203 logs.go:278] No container was found matching "kube-proxy"
	I0731 18:11:09.481453   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 18:11:09.481513   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 18:11:09.514926   74203 cri.go:89] found id: ""
	I0731 18:11:09.514950   74203 logs.go:276] 0 containers: []
	W0731 18:11:09.514967   74203 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 18:11:09.514974   74203 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 18:11:09.515036   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 18:11:09.547253   74203 cri.go:89] found id: ""
	I0731 18:11:09.547278   74203 logs.go:276] 0 containers: []
	W0731 18:11:09.547285   74203 logs.go:278] No container was found matching "kindnet"
	I0731 18:11:09.547291   74203 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 18:11:09.547376   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 18:11:09.587585   74203 cri.go:89] found id: ""
	I0731 18:11:09.587614   74203 logs.go:276] 0 containers: []
	W0731 18:11:09.587622   74203 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 18:11:09.587632   74203 logs.go:123] Gathering logs for kubelet ...
	I0731 18:11:09.587646   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 18:11:09.642024   74203 logs.go:123] Gathering logs for dmesg ...
	I0731 18:11:09.642057   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 18:11:09.655244   74203 logs.go:123] Gathering logs for describe nodes ...
	I0731 18:11:09.655270   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 18:11:09.721446   74203 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 18:11:09.721474   74203 logs.go:123] Gathering logs for CRI-O ...
	I0731 18:11:09.721489   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 18:11:09.803315   74203 logs.go:123] Gathering logs for container status ...
	I0731 18:11:09.803349   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 18:11:12.344355   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:11:12.357122   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 18:11:12.357194   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 18:11:12.392237   74203 cri.go:89] found id: ""
	I0731 18:11:12.392258   74203 logs.go:276] 0 containers: []
	W0731 18:11:12.392267   74203 logs.go:278] No container was found matching "kube-apiserver"
	I0731 18:11:12.392272   74203 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 18:11:12.392339   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 18:11:12.424490   74203 cri.go:89] found id: ""
	I0731 18:11:12.424514   74203 logs.go:276] 0 containers: []
	W0731 18:11:12.424523   74203 logs.go:278] No container was found matching "etcd"
	I0731 18:11:12.424529   74203 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 18:11:12.424587   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 18:11:12.458438   74203 cri.go:89] found id: ""
	I0731 18:11:12.458467   74203 logs.go:276] 0 containers: []
	W0731 18:11:12.458477   74203 logs.go:278] No container was found matching "coredns"
	I0731 18:11:12.458483   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 18:11:12.458545   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 18:11:12.495343   74203 cri.go:89] found id: ""
	I0731 18:11:12.495371   74203 logs.go:276] 0 containers: []
	W0731 18:11:12.495383   74203 logs.go:278] No container was found matching "kube-scheduler"
	I0731 18:11:12.495391   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 18:11:12.495455   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 18:11:12.527285   74203 cri.go:89] found id: ""
	I0731 18:11:12.527314   74203 logs.go:276] 0 containers: []
	W0731 18:11:12.527324   74203 logs.go:278] No container was found matching "kube-proxy"
	I0731 18:11:12.527334   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 18:11:12.527393   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 18:11:12.560341   74203 cri.go:89] found id: ""
	I0731 18:11:12.560369   74203 logs.go:276] 0 containers: []
	W0731 18:11:12.560379   74203 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 18:11:12.560387   74203 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 18:11:12.560444   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 18:11:12.595084   74203 cri.go:89] found id: ""
	I0731 18:11:12.595120   74203 logs.go:276] 0 containers: []
	W0731 18:11:12.595133   74203 logs.go:278] No container was found matching "kindnet"
	I0731 18:11:12.595141   74203 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 18:11:12.595215   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 18:11:12.630666   74203 cri.go:89] found id: ""
	I0731 18:11:12.630692   74203 logs.go:276] 0 containers: []
	W0731 18:11:12.630702   74203 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 18:11:12.630711   74203 logs.go:123] Gathering logs for kubelet ...
	I0731 18:11:12.630727   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 18:11:12.683588   74203 logs.go:123] Gathering logs for dmesg ...
	I0731 18:11:12.683620   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 18:11:12.696899   74203 logs.go:123] Gathering logs for describe nodes ...
	I0731 18:11:12.696925   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 18:11:12.757815   74203 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 18:11:12.757837   74203 logs.go:123] Gathering logs for CRI-O ...
	I0731 18:11:12.757870   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 18:11:12.834888   74203 logs.go:123] Gathering logs for container status ...
	I0731 18:11:12.834927   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 18:11:15.372797   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:11:15.386268   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 18:11:15.386356   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 18:11:15.420446   74203 cri.go:89] found id: ""
	I0731 18:11:15.420477   74203 logs.go:276] 0 containers: []
	W0731 18:11:15.420488   74203 logs.go:278] No container was found matching "kube-apiserver"
	I0731 18:11:15.420497   74203 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 18:11:15.420556   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 18:11:15.456092   74203 cri.go:89] found id: ""
	I0731 18:11:15.456118   74203 logs.go:276] 0 containers: []
	W0731 18:11:15.456129   74203 logs.go:278] No container was found matching "etcd"
	I0731 18:11:15.456136   74203 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 18:11:15.456194   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 18:11:15.488277   74203 cri.go:89] found id: ""
	I0731 18:11:15.488304   74203 logs.go:276] 0 containers: []
	W0731 18:11:15.488316   74203 logs.go:278] No container was found matching "coredns"
	I0731 18:11:15.488323   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 18:11:15.488384   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 18:11:15.520701   74203 cri.go:89] found id: ""
	I0731 18:11:15.520730   74203 logs.go:276] 0 containers: []
	W0731 18:11:15.520741   74203 logs.go:278] No container was found matching "kube-scheduler"
	I0731 18:11:15.520749   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 18:11:15.520818   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 18:11:15.552831   74203 cri.go:89] found id: ""
	I0731 18:11:15.552854   74203 logs.go:276] 0 containers: []
	W0731 18:11:15.552862   74203 logs.go:278] No container was found matching "kube-proxy"
	I0731 18:11:15.552867   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 18:11:15.552920   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 18:11:15.589161   74203 cri.go:89] found id: ""
	I0731 18:11:15.589191   74203 logs.go:276] 0 containers: []
	W0731 18:11:15.589203   74203 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 18:11:15.589210   74203 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 18:11:15.589274   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 18:11:15.622501   74203 cri.go:89] found id: ""
	I0731 18:11:15.622532   74203 logs.go:276] 0 containers: []
	W0731 18:11:15.622544   74203 logs.go:278] No container was found matching "kindnet"
	I0731 18:11:15.622552   74203 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 18:11:15.622611   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 18:11:15.654772   74203 cri.go:89] found id: ""
	I0731 18:11:15.654801   74203 logs.go:276] 0 containers: []
	W0731 18:11:15.654815   74203 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 18:11:15.654826   74203 logs.go:123] Gathering logs for kubelet ...
	I0731 18:11:15.654843   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 18:11:15.703103   74203 logs.go:123] Gathering logs for dmesg ...
	I0731 18:11:15.703148   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 18:11:15.716620   74203 logs.go:123] Gathering logs for describe nodes ...
	I0731 18:11:15.716645   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 18:11:15.783391   74203 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 18:11:15.783416   74203 logs.go:123] Gathering logs for CRI-O ...
	I0731 18:11:15.783431   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 18:11:15.857462   74203 logs.go:123] Gathering logs for container status ...
	I0731 18:11:15.857495   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 18:11:18.394223   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:11:18.407297   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 18:11:18.407374   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 18:11:18.439542   74203 cri.go:89] found id: ""
	I0731 18:11:18.439564   74203 logs.go:276] 0 containers: []
	W0731 18:11:18.439572   74203 logs.go:278] No container was found matching "kube-apiserver"
	I0731 18:11:18.439578   74203 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 18:11:18.439625   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 18:11:18.471838   74203 cri.go:89] found id: ""
	I0731 18:11:18.471863   74203 logs.go:276] 0 containers: []
	W0731 18:11:18.471873   74203 logs.go:278] No container was found matching "etcd"
	I0731 18:11:18.471883   74203 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 18:11:18.471943   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 18:11:18.505325   74203 cri.go:89] found id: ""
	I0731 18:11:18.505355   74203 logs.go:276] 0 containers: []
	W0731 18:11:18.505365   74203 logs.go:278] No container was found matching "coredns"
	I0731 18:11:18.505372   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 18:11:18.505432   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 18:11:18.536155   74203 cri.go:89] found id: ""
	I0731 18:11:18.536180   74203 logs.go:276] 0 containers: []
	W0731 18:11:18.536189   74203 logs.go:278] No container was found matching "kube-scheduler"
	I0731 18:11:18.536194   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 18:11:18.536241   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 18:11:18.569301   74203 cri.go:89] found id: ""
	I0731 18:11:18.569329   74203 logs.go:276] 0 containers: []
	W0731 18:11:18.569339   74203 logs.go:278] No container was found matching "kube-proxy"
	I0731 18:11:18.569344   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 18:11:18.569398   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 18:11:18.603053   74203 cri.go:89] found id: ""
	I0731 18:11:18.603079   74203 logs.go:276] 0 containers: []
	W0731 18:11:18.603087   74203 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 18:11:18.603092   74203 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 18:11:18.603170   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 18:11:18.636259   74203 cri.go:89] found id: ""
	I0731 18:11:18.636287   74203 logs.go:276] 0 containers: []
	W0731 18:11:18.636298   74203 logs.go:278] No container was found matching "kindnet"
	I0731 18:11:18.636305   74203 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 18:11:18.636361   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 18:11:18.667839   74203 cri.go:89] found id: ""
	I0731 18:11:18.667863   74203 logs.go:276] 0 containers: []
	W0731 18:11:18.667873   74203 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 18:11:18.667883   74203 logs.go:123] Gathering logs for dmesg ...
	I0731 18:11:18.667897   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 18:11:18.681005   74203 logs.go:123] Gathering logs for describe nodes ...
	I0731 18:11:18.681030   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 18:11:18.747793   74203 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 18:11:18.747875   74203 logs.go:123] Gathering logs for CRI-O ...
	I0731 18:11:18.747892   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 18:11:18.828970   74203 logs.go:123] Gathering logs for container status ...
	I0731 18:11:18.829005   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 18:11:18.866724   74203 logs.go:123] Gathering logs for kubelet ...
	I0731 18:11:18.866749   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 18:11:21.416598   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:11:21.431968   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 18:11:21.432027   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 18:11:21.469670   74203 cri.go:89] found id: ""
	I0731 18:11:21.469696   74203 logs.go:276] 0 containers: []
	W0731 18:11:21.469703   74203 logs.go:278] No container was found matching "kube-apiserver"
	I0731 18:11:21.469709   74203 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 18:11:21.469762   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 18:11:21.508461   74203 cri.go:89] found id: ""
	I0731 18:11:21.508490   74203 logs.go:276] 0 containers: []
	W0731 18:11:21.508500   74203 logs.go:278] No container was found matching "etcd"
	I0731 18:11:21.508506   74203 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 18:11:21.508570   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 18:11:21.548101   74203 cri.go:89] found id: ""
	I0731 18:11:21.548127   74203 logs.go:276] 0 containers: []
	W0731 18:11:21.548136   74203 logs.go:278] No container was found matching "coredns"
	I0731 18:11:21.548142   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 18:11:21.548204   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 18:11:21.582617   74203 cri.go:89] found id: ""
	I0731 18:11:21.582646   74203 logs.go:276] 0 containers: []
	W0731 18:11:21.582653   74203 logs.go:278] No container was found matching "kube-scheduler"
	I0731 18:11:21.582659   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 18:11:21.582712   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 18:11:21.614185   74203 cri.go:89] found id: ""
	I0731 18:11:21.614210   74203 logs.go:276] 0 containers: []
	W0731 18:11:21.614218   74203 logs.go:278] No container was found matching "kube-proxy"
	I0731 18:11:21.614223   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 18:11:21.614278   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 18:11:21.647596   74203 cri.go:89] found id: ""
	I0731 18:11:21.647619   74203 logs.go:276] 0 containers: []
	W0731 18:11:21.647629   74203 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 18:11:21.647636   74203 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 18:11:21.647693   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 18:11:21.680106   74203 cri.go:89] found id: ""
	I0731 18:11:21.680132   74203 logs.go:276] 0 containers: []
	W0731 18:11:21.680142   74203 logs.go:278] No container was found matching "kindnet"
	I0731 18:11:21.680149   74203 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 18:11:21.680208   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 18:11:21.714708   74203 cri.go:89] found id: ""
	I0731 18:11:21.714733   74203 logs.go:276] 0 containers: []
	W0731 18:11:21.714742   74203 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 18:11:21.714754   74203 logs.go:123] Gathering logs for describe nodes ...
	I0731 18:11:21.714779   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 18:11:21.783425   74203 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 18:11:21.783448   74203 logs.go:123] Gathering logs for CRI-O ...
	I0731 18:11:21.783462   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 18:11:21.859943   74203 logs.go:123] Gathering logs for container status ...
	I0731 18:11:21.859980   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 18:11:21.898374   74203 logs.go:123] Gathering logs for kubelet ...
	I0731 18:11:21.898405   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 18:11:21.945753   74203 logs.go:123] Gathering logs for dmesg ...
	I0731 18:11:21.945784   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 18:11:24.459481   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:11:24.471376   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 18:11:24.471435   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 18:11:24.506474   74203 cri.go:89] found id: ""
	I0731 18:11:24.506502   74203 logs.go:276] 0 containers: []
	W0731 18:11:24.506511   74203 logs.go:278] No container was found matching "kube-apiserver"
	I0731 18:11:24.506516   74203 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 18:11:24.506572   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 18:11:24.547298   74203 cri.go:89] found id: ""
	I0731 18:11:24.547324   74203 logs.go:276] 0 containers: []
	W0731 18:11:24.547332   74203 logs.go:278] No container was found matching "etcd"
	I0731 18:11:24.547337   74203 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 18:11:24.547402   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 18:11:24.579912   74203 cri.go:89] found id: ""
	I0731 18:11:24.579944   74203 logs.go:276] 0 containers: []
	W0731 18:11:24.579955   74203 logs.go:278] No container was found matching "coredns"
	I0731 18:11:24.579963   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 18:11:24.580032   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 18:11:24.613754   74203 cri.go:89] found id: ""
	I0731 18:11:24.613782   74203 logs.go:276] 0 containers: []
	W0731 18:11:24.613791   74203 logs.go:278] No container was found matching "kube-scheduler"
	I0731 18:11:24.613799   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 18:11:24.613859   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 18:11:24.649782   74203 cri.go:89] found id: ""
	I0731 18:11:24.649811   74203 logs.go:276] 0 containers: []
	W0731 18:11:24.649822   74203 logs.go:278] No container was found matching "kube-proxy"
	I0731 18:11:24.649829   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 18:11:24.649888   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 18:11:24.689232   74203 cri.go:89] found id: ""
	I0731 18:11:24.689264   74203 logs.go:276] 0 containers: []
	W0731 18:11:24.689274   74203 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 18:11:24.689283   74203 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 18:11:24.689361   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 18:11:24.727861   74203 cri.go:89] found id: ""
	I0731 18:11:24.727894   74203 logs.go:276] 0 containers: []
	W0731 18:11:24.727902   74203 logs.go:278] No container was found matching "kindnet"
	I0731 18:11:24.727924   74203 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 18:11:24.727983   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 18:11:24.763839   74203 cri.go:89] found id: ""
	I0731 18:11:24.763866   74203 logs.go:276] 0 containers: []
	W0731 18:11:24.763876   74203 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 18:11:24.763886   74203 logs.go:123] Gathering logs for CRI-O ...
	I0731 18:11:24.763901   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 18:11:24.841090   74203 logs.go:123] Gathering logs for container status ...
	I0731 18:11:24.841131   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 18:11:24.877206   74203 logs.go:123] Gathering logs for kubelet ...
	I0731 18:11:24.877231   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 18:11:24.926149   74203 logs.go:123] Gathering logs for dmesg ...
	I0731 18:11:24.926180   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 18:11:24.938795   74203 logs.go:123] Gathering logs for describe nodes ...
	I0731 18:11:24.938822   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 18:11:25.008349   74203 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 18:11:27.509192   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:11:27.522506   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 18:11:27.522582   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 18:11:27.557915   74203 cri.go:89] found id: ""
	I0731 18:11:27.557943   74203 logs.go:276] 0 containers: []
	W0731 18:11:27.557954   74203 logs.go:278] No container was found matching "kube-apiserver"
	I0731 18:11:27.557962   74203 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 18:11:27.558019   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 18:11:27.594295   74203 cri.go:89] found id: ""
	I0731 18:11:27.594322   74203 logs.go:276] 0 containers: []
	W0731 18:11:27.594332   74203 logs.go:278] No container was found matching "etcd"
	I0731 18:11:27.594348   74203 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 18:11:27.594410   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 18:11:27.626830   74203 cri.go:89] found id: ""
	I0731 18:11:27.626857   74203 logs.go:276] 0 containers: []
	W0731 18:11:27.626868   74203 logs.go:278] No container was found matching "coredns"
	I0731 18:11:27.626875   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 18:11:27.626964   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 18:11:27.662062   74203 cri.go:89] found id: ""
	I0731 18:11:27.662084   74203 logs.go:276] 0 containers: []
	W0731 18:11:27.662092   74203 logs.go:278] No container was found matching "kube-scheduler"
	I0731 18:11:27.662099   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 18:11:27.662158   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 18:11:27.695686   74203 cri.go:89] found id: ""
	I0731 18:11:27.695715   74203 logs.go:276] 0 containers: []
	W0731 18:11:27.695727   74203 logs.go:278] No container was found matching "kube-proxy"
	I0731 18:11:27.695735   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 18:11:27.695785   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 18:11:27.729444   74203 cri.go:89] found id: ""
	I0731 18:11:27.729467   74203 logs.go:276] 0 containers: []
	W0731 18:11:27.729475   74203 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 18:11:27.729481   74203 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 18:11:27.729531   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 18:11:27.761889   74203 cri.go:89] found id: ""
	I0731 18:11:27.761916   74203 logs.go:276] 0 containers: []
	W0731 18:11:27.761926   74203 logs.go:278] No container was found matching "kindnet"
	I0731 18:11:27.761934   74203 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 18:11:27.761995   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 18:11:27.796178   74203 cri.go:89] found id: ""
	I0731 18:11:27.796199   74203 logs.go:276] 0 containers: []
	W0731 18:11:27.796206   74203 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 18:11:27.796214   74203 logs.go:123] Gathering logs for kubelet ...
	I0731 18:11:27.796227   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 18:11:27.849613   74203 logs.go:123] Gathering logs for dmesg ...
	I0731 18:11:27.849650   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 18:11:27.862892   74203 logs.go:123] Gathering logs for describe nodes ...
	I0731 18:11:27.862923   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 18:11:27.928691   74203 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 18:11:27.928717   74203 logs.go:123] Gathering logs for CRI-O ...
	I0731 18:11:27.928740   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 18:11:28.006310   74203 logs.go:123] Gathering logs for container status ...
	I0731 18:11:28.006340   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 18:11:30.543065   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:11:30.555951   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 18:11:30.556013   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 18:11:30.597411   74203 cri.go:89] found id: ""
	I0731 18:11:30.597440   74203 logs.go:276] 0 containers: []
	W0731 18:11:30.597451   74203 logs.go:278] No container was found matching "kube-apiserver"
	I0731 18:11:30.597458   74203 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 18:11:30.597518   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 18:11:30.629836   74203 cri.go:89] found id: ""
	I0731 18:11:30.629866   74203 logs.go:276] 0 containers: []
	W0731 18:11:30.629873   74203 logs.go:278] No container was found matching "etcd"
	I0731 18:11:30.629878   74203 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 18:11:30.629932   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 18:11:30.667402   74203 cri.go:89] found id: ""
	I0731 18:11:30.667432   74203 logs.go:276] 0 containers: []
	W0731 18:11:30.667443   74203 logs.go:278] No container was found matching "coredns"
	I0731 18:11:30.667450   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 18:11:30.667513   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 18:11:30.701677   74203 cri.go:89] found id: ""
	I0731 18:11:30.701708   74203 logs.go:276] 0 containers: []
	W0731 18:11:30.701716   74203 logs.go:278] No container was found matching "kube-scheduler"
	I0731 18:11:30.701722   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 18:11:30.701773   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 18:11:30.736685   74203 cri.go:89] found id: ""
	I0731 18:11:30.736714   74203 logs.go:276] 0 containers: []
	W0731 18:11:30.736721   74203 logs.go:278] No container was found matching "kube-proxy"
	I0731 18:11:30.736736   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 18:11:30.736786   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 18:11:30.771501   74203 cri.go:89] found id: ""
	I0731 18:11:30.771526   74203 logs.go:276] 0 containers: []
	W0731 18:11:30.771543   74203 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 18:11:30.771549   74203 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 18:11:30.771597   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 18:11:30.805878   74203 cri.go:89] found id: ""
	I0731 18:11:30.805902   74203 logs.go:276] 0 containers: []
	W0731 18:11:30.805911   74203 logs.go:278] No container was found matching "kindnet"
	I0731 18:11:30.805921   74203 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 18:11:30.805966   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 18:11:30.839001   74203 cri.go:89] found id: ""
	I0731 18:11:30.839027   74203 logs.go:276] 0 containers: []
	W0731 18:11:30.839038   74203 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 18:11:30.839048   74203 logs.go:123] Gathering logs for kubelet ...
	I0731 18:11:30.839062   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 18:11:30.893357   74203 logs.go:123] Gathering logs for dmesg ...
	I0731 18:11:30.893387   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 18:11:30.907222   74203 logs.go:123] Gathering logs for describe nodes ...
	I0731 18:11:30.907248   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 18:11:30.985626   74203 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 18:11:30.985648   74203 logs.go:123] Gathering logs for CRI-O ...
	I0731 18:11:30.985668   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 18:11:31.067900   74203 logs.go:123] Gathering logs for container status ...
	I0731 18:11:31.067948   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 18:11:33.607259   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:11:33.621596   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 18:11:33.621656   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 18:11:33.663616   74203 cri.go:89] found id: ""
	I0731 18:11:33.663642   74203 logs.go:276] 0 containers: []
	W0731 18:11:33.663649   74203 logs.go:278] No container was found matching "kube-apiserver"
	I0731 18:11:33.663655   74203 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 18:11:33.663704   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 18:11:33.702133   74203 cri.go:89] found id: ""
	I0731 18:11:33.702159   74203 logs.go:276] 0 containers: []
	W0731 18:11:33.702167   74203 logs.go:278] No container was found matching "etcd"
	I0731 18:11:33.702173   74203 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 18:11:33.702226   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 18:11:33.733730   74203 cri.go:89] found id: ""
	I0731 18:11:33.733752   74203 logs.go:276] 0 containers: []
	W0731 18:11:33.733760   74203 logs.go:278] No container was found matching "coredns"
	I0731 18:11:33.733765   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 18:11:33.733813   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 18:11:33.765036   74203 cri.go:89] found id: ""
	I0731 18:11:33.765064   74203 logs.go:276] 0 containers: []
	W0731 18:11:33.765074   74203 logs.go:278] No container was found matching "kube-scheduler"
	I0731 18:11:33.765080   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 18:11:33.765128   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 18:11:33.799604   74203 cri.go:89] found id: ""
	I0731 18:11:33.799630   74203 logs.go:276] 0 containers: []
	W0731 18:11:33.799640   74203 logs.go:278] No container was found matching "kube-proxy"
	I0731 18:11:33.799648   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 18:11:33.799716   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 18:11:33.831434   74203 cri.go:89] found id: ""
	I0731 18:11:33.831455   74203 logs.go:276] 0 containers: []
	W0731 18:11:33.831464   74203 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 18:11:33.831469   74203 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 18:11:33.831514   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 18:11:33.862975   74203 cri.go:89] found id: ""
	I0731 18:11:33.863004   74203 logs.go:276] 0 containers: []
	W0731 18:11:33.863014   74203 logs.go:278] No container was found matching "kindnet"
	I0731 18:11:33.863022   74203 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 18:11:33.863090   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 18:11:33.895674   74203 cri.go:89] found id: ""
	I0731 18:11:33.895704   74203 logs.go:276] 0 containers: []
	W0731 18:11:33.895714   74203 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 18:11:33.895723   74203 logs.go:123] Gathering logs for container status ...
	I0731 18:11:33.895737   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 18:11:33.931954   74203 logs.go:123] Gathering logs for kubelet ...
	I0731 18:11:33.931980   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 18:11:33.985353   74203 logs.go:123] Gathering logs for dmesg ...
	I0731 18:11:33.985385   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 18:11:33.997857   74203 logs.go:123] Gathering logs for describe nodes ...
	I0731 18:11:33.997882   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 18:11:34.060523   74203 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 18:11:34.060553   74203 logs.go:123] Gathering logs for CRI-O ...
	I0731 18:11:34.060575   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 18:11:36.643003   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:11:36.659306   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 18:11:36.659385   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 18:11:36.717097   74203 cri.go:89] found id: ""
	I0731 18:11:36.717129   74203 logs.go:276] 0 containers: []
	W0731 18:11:36.717141   74203 logs.go:278] No container was found matching "kube-apiserver"
	I0731 18:11:36.717149   74203 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 18:11:36.717212   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 18:11:36.750288   74203 cri.go:89] found id: ""
	I0731 18:11:36.750314   74203 logs.go:276] 0 containers: []
	W0731 18:11:36.750325   74203 logs.go:278] No container was found matching "etcd"
	I0731 18:11:36.750331   74203 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 18:11:36.750391   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 18:11:36.785272   74203 cri.go:89] found id: ""
	I0731 18:11:36.785296   74203 logs.go:276] 0 containers: []
	W0731 18:11:36.785304   74203 logs.go:278] No container was found matching "coredns"
	I0731 18:11:36.785310   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 18:11:36.785360   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 18:11:36.818927   74203 cri.go:89] found id: ""
	I0731 18:11:36.818953   74203 logs.go:276] 0 containers: []
	W0731 18:11:36.818965   74203 logs.go:278] No container was found matching "kube-scheduler"
	I0731 18:11:36.818972   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 18:11:36.819020   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 18:11:36.854562   74203 cri.go:89] found id: ""
	I0731 18:11:36.854593   74203 logs.go:276] 0 containers: []
	W0731 18:11:36.854602   74203 logs.go:278] No container was found matching "kube-proxy"
	I0731 18:11:36.854607   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 18:11:36.854670   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 18:11:36.887786   74203 cri.go:89] found id: ""
	I0731 18:11:36.887814   74203 logs.go:276] 0 containers: []
	W0731 18:11:36.887825   74203 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 18:11:36.887833   74203 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 18:11:36.887893   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 18:11:36.919418   74203 cri.go:89] found id: ""
	I0731 18:11:36.919446   74203 logs.go:276] 0 containers: []
	W0731 18:11:36.919457   74203 logs.go:278] No container was found matching "kindnet"
	I0731 18:11:36.919471   74203 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 18:11:36.919533   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 18:11:36.956934   74203 cri.go:89] found id: ""
	I0731 18:11:36.956957   74203 logs.go:276] 0 containers: []
	W0731 18:11:36.956964   74203 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 18:11:36.956971   74203 logs.go:123] Gathering logs for kubelet ...
	I0731 18:11:36.956989   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 18:11:37.003755   74203 logs.go:123] Gathering logs for dmesg ...
	I0731 18:11:37.003783   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 18:11:37.016977   74203 logs.go:123] Gathering logs for describe nodes ...
	I0731 18:11:37.017004   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 18:11:37.091617   74203 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 18:11:37.091646   74203 logs.go:123] Gathering logs for CRI-O ...
	I0731 18:11:37.091662   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 18:11:37.170870   74203 logs.go:123] Gathering logs for container status ...
	I0731 18:11:37.170903   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 18:11:39.714271   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:11:39.730306   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 18:11:39.730383   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 18:11:39.765368   74203 cri.go:89] found id: ""
	I0731 18:11:39.765399   74203 logs.go:276] 0 containers: []
	W0731 18:11:39.765407   74203 logs.go:278] No container was found matching "kube-apiserver"
	I0731 18:11:39.765412   74203 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 18:11:39.765471   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 18:11:39.800394   74203 cri.go:89] found id: ""
	I0731 18:11:39.800419   74203 logs.go:276] 0 containers: []
	W0731 18:11:39.800427   74203 logs.go:278] No container was found matching "etcd"
	I0731 18:11:39.800433   74203 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 18:11:39.800486   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 18:11:39.834861   74203 cri.go:89] found id: ""
	I0731 18:11:39.834889   74203 logs.go:276] 0 containers: []
	W0731 18:11:39.834898   74203 logs.go:278] No container was found matching "coredns"
	I0731 18:11:39.834903   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 18:11:39.834958   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 18:11:39.868108   74203 cri.go:89] found id: ""
	I0731 18:11:39.868132   74203 logs.go:276] 0 containers: []
	W0731 18:11:39.868141   74203 logs.go:278] No container was found matching "kube-scheduler"
	I0731 18:11:39.868146   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 18:11:39.868220   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 18:11:39.902097   74203 cri.go:89] found id: ""
	I0731 18:11:39.902120   74203 logs.go:276] 0 containers: []
	W0731 18:11:39.902128   74203 logs.go:278] No container was found matching "kube-proxy"
	I0731 18:11:39.902134   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 18:11:39.902184   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 18:11:39.933073   74203 cri.go:89] found id: ""
	I0731 18:11:39.933100   74203 logs.go:276] 0 containers: []
	W0731 18:11:39.933109   74203 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 18:11:39.933114   74203 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 18:11:39.933165   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 18:11:39.965748   74203 cri.go:89] found id: ""
	I0731 18:11:39.965775   74203 logs.go:276] 0 containers: []
	W0731 18:11:39.965785   74203 logs.go:278] No container was found matching "kindnet"
	I0731 18:11:39.965796   74203 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 18:11:39.965856   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 18:11:39.998164   74203 cri.go:89] found id: ""
	I0731 18:11:39.998189   74203 logs.go:276] 0 containers: []
	W0731 18:11:39.998197   74203 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 18:11:39.998205   74203 logs.go:123] Gathering logs for kubelet ...
	I0731 18:11:39.998222   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 18:11:40.049991   74203 logs.go:123] Gathering logs for dmesg ...
	I0731 18:11:40.050027   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 18:11:40.063676   74203 logs.go:123] Gathering logs for describe nodes ...
	I0731 18:11:40.063705   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 18:11:40.125855   74203 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 18:11:40.125880   74203 logs.go:123] Gathering logs for CRI-O ...
	I0731 18:11:40.125896   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 18:11:40.207937   74203 logs.go:123] Gathering logs for container status ...
	I0731 18:11:40.207970   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 18:11:42.746315   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:11:42.758998   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 18:11:42.759053   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 18:11:42.791921   74203 cri.go:89] found id: ""
	I0731 18:11:42.791946   74203 logs.go:276] 0 containers: []
	W0731 18:11:42.791954   74203 logs.go:278] No container was found matching "kube-apiserver"
	I0731 18:11:42.791958   74203 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 18:11:42.792004   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 18:11:42.822888   74203 cri.go:89] found id: ""
	I0731 18:11:42.822914   74203 logs.go:276] 0 containers: []
	W0731 18:11:42.822922   74203 logs.go:278] No container was found matching "etcd"
	I0731 18:11:42.822927   74203 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 18:11:42.822973   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 18:11:42.854516   74203 cri.go:89] found id: ""
	I0731 18:11:42.854545   74203 logs.go:276] 0 containers: []
	W0731 18:11:42.854564   74203 logs.go:278] No container was found matching "coredns"
	I0731 18:11:42.854574   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 18:11:42.854638   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 18:11:42.890933   74203 cri.go:89] found id: ""
	I0731 18:11:42.890955   74203 logs.go:276] 0 containers: []
	W0731 18:11:42.890963   74203 logs.go:278] No container was found matching "kube-scheduler"
	I0731 18:11:42.890968   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 18:11:42.891013   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 18:11:42.925170   74203 cri.go:89] found id: ""
	I0731 18:11:42.925196   74203 logs.go:276] 0 containers: []
	W0731 18:11:42.925206   74203 logs.go:278] No container was found matching "kube-proxy"
	I0731 18:11:42.925213   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 18:11:42.925273   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 18:11:42.959845   74203 cri.go:89] found id: ""
	I0731 18:11:42.959868   74203 logs.go:276] 0 containers: []
	W0731 18:11:42.959876   74203 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 18:11:42.959881   74203 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 18:11:42.959938   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 18:11:42.997305   74203 cri.go:89] found id: ""
	I0731 18:11:42.997346   74203 logs.go:276] 0 containers: []
	W0731 18:11:42.997358   74203 logs.go:278] No container was found matching "kindnet"
	I0731 18:11:42.997366   74203 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 18:11:42.997427   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 18:11:43.030663   74203 cri.go:89] found id: ""
	I0731 18:11:43.030690   74203 logs.go:276] 0 containers: []
	W0731 18:11:43.030700   74203 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 18:11:43.030711   74203 logs.go:123] Gathering logs for describe nodes ...
	I0731 18:11:43.030725   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 18:11:43.112280   74203 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 18:11:43.112303   74203 logs.go:123] Gathering logs for CRI-O ...
	I0731 18:11:43.112318   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 18:11:43.209002   74203 logs.go:123] Gathering logs for container status ...
	I0731 18:11:43.209035   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 18:11:43.249596   74203 logs.go:123] Gathering logs for kubelet ...
	I0731 18:11:43.249629   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 18:11:43.302419   74203 logs.go:123] Gathering logs for dmesg ...
	I0731 18:11:43.302449   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 18:11:45.816910   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:11:45.829909   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 18:11:45.829976   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 18:11:45.865534   74203 cri.go:89] found id: ""
	I0731 18:11:45.865561   74203 logs.go:276] 0 containers: []
	W0731 18:11:45.865572   74203 logs.go:278] No container was found matching "kube-apiserver"
	I0731 18:11:45.865584   74203 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 18:11:45.865646   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 18:11:45.901552   74203 cri.go:89] found id: ""
	I0731 18:11:45.901585   74203 logs.go:276] 0 containers: []
	W0731 18:11:45.901593   74203 logs.go:278] No container was found matching "etcd"
	I0731 18:11:45.901598   74203 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 18:11:45.901678   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 18:11:45.938790   74203 cri.go:89] found id: ""
	I0731 18:11:45.938820   74203 logs.go:276] 0 containers: []
	W0731 18:11:45.938842   74203 logs.go:278] No container was found matching "coredns"
	I0731 18:11:45.938859   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 18:11:45.938926   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 18:11:45.971502   74203 cri.go:89] found id: ""
	I0731 18:11:45.971534   74203 logs.go:276] 0 containers: []
	W0731 18:11:45.971546   74203 logs.go:278] No container was found matching "kube-scheduler"
	I0731 18:11:45.971553   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 18:11:45.971620   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 18:11:46.009281   74203 cri.go:89] found id: ""
	I0731 18:11:46.009316   74203 logs.go:276] 0 containers: []
	W0731 18:11:46.009327   74203 logs.go:278] No container was found matching "kube-proxy"
	I0731 18:11:46.009335   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 18:11:46.009399   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 18:11:46.042899   74203 cri.go:89] found id: ""
	I0731 18:11:46.042928   74203 logs.go:276] 0 containers: []
	W0731 18:11:46.042939   74203 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 18:11:46.042947   74203 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 18:11:46.043005   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 18:11:46.079982   74203 cri.go:89] found id: ""
	I0731 18:11:46.080013   74203 logs.go:276] 0 containers: []
	W0731 18:11:46.080024   74203 logs.go:278] No container was found matching "kindnet"
	I0731 18:11:46.080031   74203 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 18:11:46.080098   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 18:11:46.113136   74203 cri.go:89] found id: ""
	I0731 18:11:46.113168   74203 logs.go:276] 0 containers: []
	W0731 18:11:46.113179   74203 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 18:11:46.113191   74203 logs.go:123] Gathering logs for kubelet ...
	I0731 18:11:46.113206   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 18:11:46.165818   74203 logs.go:123] Gathering logs for dmesg ...
	I0731 18:11:46.165855   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 18:11:46.181058   74203 logs.go:123] Gathering logs for describe nodes ...
	I0731 18:11:46.181083   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 18:11:46.256805   74203 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 18:11:46.256826   74203 logs.go:123] Gathering logs for CRI-O ...
	I0731 18:11:46.256838   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 18:11:46.353045   74203 logs.go:123] Gathering logs for container status ...
	I0731 18:11:46.353093   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 18:11:48.894656   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:11:48.910648   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 18:11:48.910723   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 18:11:48.941080   74203 cri.go:89] found id: ""
	I0731 18:11:48.941103   74203 logs.go:276] 0 containers: []
	W0731 18:11:48.941111   74203 logs.go:278] No container was found matching "kube-apiserver"
	I0731 18:11:48.941117   74203 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 18:11:48.941164   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 18:11:48.972113   74203 cri.go:89] found id: ""
	I0731 18:11:48.972136   74203 logs.go:276] 0 containers: []
	W0731 18:11:48.972146   74203 logs.go:278] No container was found matching "etcd"
	I0731 18:11:48.972151   74203 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 18:11:48.972208   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 18:11:49.004521   74203 cri.go:89] found id: ""
	I0731 18:11:49.004547   74203 logs.go:276] 0 containers: []
	W0731 18:11:49.004557   74203 logs.go:278] No container was found matching "coredns"
	I0731 18:11:49.004571   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 18:11:49.004658   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 18:11:49.036600   74203 cri.go:89] found id: ""
	I0731 18:11:49.036622   74203 logs.go:276] 0 containers: []
	W0731 18:11:49.036629   74203 logs.go:278] No container was found matching "kube-scheduler"
	I0731 18:11:49.036635   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 18:11:49.036683   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 18:11:49.071397   74203 cri.go:89] found id: ""
	I0731 18:11:49.071426   74203 logs.go:276] 0 containers: []
	W0731 18:11:49.071436   74203 logs.go:278] No container was found matching "kube-proxy"
	I0731 18:11:49.071444   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 18:11:49.071501   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 18:11:49.108907   74203 cri.go:89] found id: ""
	I0731 18:11:49.108933   74203 logs.go:276] 0 containers: []
	W0731 18:11:49.108944   74203 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 18:11:49.108952   74203 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 18:11:49.109007   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 18:11:49.141808   74203 cri.go:89] found id: ""
	I0731 18:11:49.141834   74203 logs.go:276] 0 containers: []
	W0731 18:11:49.141844   74203 logs.go:278] No container was found matching "kindnet"
	I0731 18:11:49.141856   74203 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 18:11:49.141917   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 18:11:49.174063   74203 cri.go:89] found id: ""
	I0731 18:11:49.174087   74203 logs.go:276] 0 containers: []
	W0731 18:11:49.174095   74203 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 18:11:49.174104   74203 logs.go:123] Gathering logs for container status ...
	I0731 18:11:49.174116   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 18:11:49.212152   74203 logs.go:123] Gathering logs for kubelet ...
	I0731 18:11:49.212181   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 18:11:49.267297   74203 logs.go:123] Gathering logs for dmesg ...
	I0731 18:11:49.267324   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 18:11:49.281342   74203 logs.go:123] Gathering logs for describe nodes ...
	I0731 18:11:49.281365   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 18:11:49.349843   74203 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 18:11:49.349866   74203 logs.go:123] Gathering logs for CRI-O ...
	I0731 18:11:49.349882   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 18:11:51.927764   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:11:51.940480   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 18:11:51.940539   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 18:11:51.973731   74203 cri.go:89] found id: ""
	I0731 18:11:51.973759   74203 logs.go:276] 0 containers: []
	W0731 18:11:51.973768   74203 logs.go:278] No container was found matching "kube-apiserver"
	I0731 18:11:51.973780   74203 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 18:11:51.973837   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 18:11:52.003761   74203 cri.go:89] found id: ""
	I0731 18:11:52.003783   74203 logs.go:276] 0 containers: []
	W0731 18:11:52.003790   74203 logs.go:278] No container was found matching "etcd"
	I0731 18:11:52.003795   74203 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 18:11:52.003844   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 18:11:52.035009   74203 cri.go:89] found id: ""
	I0731 18:11:52.035028   74203 logs.go:276] 0 containers: []
	W0731 18:11:52.035035   74203 logs.go:278] No container was found matching "coredns"
	I0731 18:11:52.035041   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 18:11:52.035100   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 18:11:52.065475   74203 cri.go:89] found id: ""
	I0731 18:11:52.065501   74203 logs.go:276] 0 containers: []
	W0731 18:11:52.065509   74203 logs.go:278] No container was found matching "kube-scheduler"
	I0731 18:11:52.065515   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 18:11:52.065574   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 18:11:52.097529   74203 cri.go:89] found id: ""
	I0731 18:11:52.097558   74203 logs.go:276] 0 containers: []
	W0731 18:11:52.097567   74203 logs.go:278] No container was found matching "kube-proxy"
	I0731 18:11:52.097573   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 18:11:52.097622   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 18:11:52.128881   74203 cri.go:89] found id: ""
	I0731 18:11:52.128909   74203 logs.go:276] 0 containers: []
	W0731 18:11:52.128917   74203 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 18:11:52.128923   74203 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 18:11:52.128974   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 18:11:52.159894   74203 cri.go:89] found id: ""
	I0731 18:11:52.159921   74203 logs.go:276] 0 containers: []
	W0731 18:11:52.159931   74203 logs.go:278] No container was found matching "kindnet"
	I0731 18:11:52.159939   74203 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 18:11:52.159998   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 18:11:52.191955   74203 cri.go:89] found id: ""
	I0731 18:11:52.191981   74203 logs.go:276] 0 containers: []
	W0731 18:11:52.191990   74203 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 18:11:52.191999   74203 logs.go:123] Gathering logs for kubelet ...
	I0731 18:11:52.192009   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 18:11:52.246389   74203 logs.go:123] Gathering logs for dmesg ...
	I0731 18:11:52.246423   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 18:11:52.260226   74203 logs.go:123] Gathering logs for describe nodes ...
	I0731 18:11:52.260253   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 18:11:52.328423   74203 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 18:11:52.328447   74203 logs.go:123] Gathering logs for CRI-O ...
	I0731 18:11:52.328459   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 18:11:52.408456   74203 logs.go:123] Gathering logs for container status ...
	I0731 18:11:52.408495   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 18:11:54.947734   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:11:54.960359   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 18:11:54.960420   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 18:11:54.994231   74203 cri.go:89] found id: ""
	I0731 18:11:54.994256   74203 logs.go:276] 0 containers: []
	W0731 18:11:54.994264   74203 logs.go:278] No container was found matching "kube-apiserver"
	I0731 18:11:54.994270   74203 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 18:11:54.994332   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 18:11:55.027323   74203 cri.go:89] found id: ""
	I0731 18:11:55.027364   74203 logs.go:276] 0 containers: []
	W0731 18:11:55.027374   74203 logs.go:278] No container was found matching "etcd"
	I0731 18:11:55.027382   74203 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 18:11:55.027440   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 18:11:55.061741   74203 cri.go:89] found id: ""
	I0731 18:11:55.061763   74203 logs.go:276] 0 containers: []
	W0731 18:11:55.061771   74203 logs.go:278] No container was found matching "coredns"
	I0731 18:11:55.061776   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 18:11:55.061822   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 18:11:55.100685   74203 cri.go:89] found id: ""
	I0731 18:11:55.100712   74203 logs.go:276] 0 containers: []
	W0731 18:11:55.100720   74203 logs.go:278] No container was found matching "kube-scheduler"
	I0731 18:11:55.100726   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 18:11:55.100780   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 18:11:55.141917   74203 cri.go:89] found id: ""
	I0731 18:11:55.141958   74203 logs.go:276] 0 containers: []
	W0731 18:11:55.141971   74203 logs.go:278] No container was found matching "kube-proxy"
	I0731 18:11:55.141980   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 18:11:55.142054   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 18:11:55.176669   74203 cri.go:89] found id: ""
	I0731 18:11:55.176702   74203 logs.go:276] 0 containers: []
	W0731 18:11:55.176711   74203 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 18:11:55.176718   74203 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 18:11:55.176780   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 18:11:55.209795   74203 cri.go:89] found id: ""
	I0731 18:11:55.209829   74203 logs.go:276] 0 containers: []
	W0731 18:11:55.209842   74203 logs.go:278] No container was found matching "kindnet"
	I0731 18:11:55.209850   74203 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 18:11:55.209915   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 18:11:55.244503   74203 cri.go:89] found id: ""
	I0731 18:11:55.244527   74203 logs.go:276] 0 containers: []
	W0731 18:11:55.244537   74203 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 18:11:55.244556   74203 logs.go:123] Gathering logs for CRI-O ...
	I0731 18:11:55.244572   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 18:11:55.320033   74203 logs.go:123] Gathering logs for container status ...
	I0731 18:11:55.320071   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 18:11:55.357684   74203 logs.go:123] Gathering logs for kubelet ...
	I0731 18:11:55.357719   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 18:11:55.411465   74203 logs.go:123] Gathering logs for dmesg ...
	I0731 18:11:55.411501   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 18:11:55.423802   74203 logs.go:123] Gathering logs for describe nodes ...
	I0731 18:11:55.423833   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 18:11:55.487945   74203 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 18:11:57.988078   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:11:58.001639   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 18:11:58.001724   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 18:11:58.036075   74203 cri.go:89] found id: ""
	I0731 18:11:58.036099   74203 logs.go:276] 0 containers: []
	W0731 18:11:58.036107   74203 logs.go:278] No container was found matching "kube-apiserver"
	I0731 18:11:58.036112   74203 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 18:11:58.036163   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 18:11:58.067316   74203 cri.go:89] found id: ""
	I0731 18:11:58.067340   74203 logs.go:276] 0 containers: []
	W0731 18:11:58.067348   74203 logs.go:278] No container was found matching "etcd"
	I0731 18:11:58.067353   74203 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 18:11:58.067420   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 18:11:58.102446   74203 cri.go:89] found id: ""
	I0731 18:11:58.102470   74203 logs.go:276] 0 containers: []
	W0731 18:11:58.102479   74203 logs.go:278] No container was found matching "coredns"
	I0731 18:11:58.102485   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 18:11:58.102553   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 18:11:58.134924   74203 cri.go:89] found id: ""
	I0731 18:11:58.134949   74203 logs.go:276] 0 containers: []
	W0731 18:11:58.134957   74203 logs.go:278] No container was found matching "kube-scheduler"
	I0731 18:11:58.134963   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 18:11:58.135023   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 18:11:58.171589   74203 cri.go:89] found id: ""
	I0731 18:11:58.171611   74203 logs.go:276] 0 containers: []
	W0731 18:11:58.171620   74203 logs.go:278] No container was found matching "kube-proxy"
	I0731 18:11:58.171625   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 18:11:58.171673   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 18:11:58.203813   74203 cri.go:89] found id: ""
	I0731 18:11:58.203836   74203 logs.go:276] 0 containers: []
	W0731 18:11:58.203844   74203 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 18:11:58.203850   74203 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 18:11:58.203911   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 18:11:58.236251   74203 cri.go:89] found id: ""
	I0731 18:11:58.236277   74203 logs.go:276] 0 containers: []
	W0731 18:11:58.236288   74203 logs.go:278] No container was found matching "kindnet"
	I0731 18:11:58.236295   74203 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 18:11:58.236357   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 18:11:58.270595   74203 cri.go:89] found id: ""
	I0731 18:11:58.270624   74203 logs.go:276] 0 containers: []
	W0731 18:11:58.270636   74203 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 18:11:58.270647   74203 logs.go:123] Gathering logs for kubelet ...
	I0731 18:11:58.270662   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 18:11:58.321889   74203 logs.go:123] Gathering logs for dmesg ...
	I0731 18:11:58.321927   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 18:11:58.334529   74203 logs.go:123] Gathering logs for describe nodes ...
	I0731 18:11:58.334552   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 18:11:58.398489   74203 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 18:11:58.398515   74203 logs.go:123] Gathering logs for CRI-O ...
	I0731 18:11:58.398540   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 18:11:58.479657   74203 logs.go:123] Gathering logs for container status ...
	I0731 18:11:58.479695   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 18:12:01.014684   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:12:01.027959   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 18:12:01.028026   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 18:12:01.065423   74203 cri.go:89] found id: ""
	I0731 18:12:01.065459   74203 logs.go:276] 0 containers: []
	W0731 18:12:01.065472   74203 logs.go:278] No container was found matching "kube-apiserver"
	I0731 18:12:01.065481   74203 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 18:12:01.065545   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 18:12:01.099519   74203 cri.go:89] found id: ""
	I0731 18:12:01.099549   74203 logs.go:276] 0 containers: []
	W0731 18:12:01.099561   74203 logs.go:278] No container was found matching "etcd"
	I0731 18:12:01.099568   74203 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 18:12:01.099630   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 18:12:01.131239   74203 cri.go:89] found id: ""
	I0731 18:12:01.131262   74203 logs.go:276] 0 containers: []
	W0731 18:12:01.131270   74203 logs.go:278] No container was found matching "coredns"
	I0731 18:12:01.131275   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 18:12:01.131321   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 18:12:01.163209   74203 cri.go:89] found id: ""
	I0731 18:12:01.163229   74203 logs.go:276] 0 containers: []
	W0731 18:12:01.163237   74203 logs.go:278] No container was found matching "kube-scheduler"
	I0731 18:12:01.163242   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 18:12:01.163295   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 18:12:01.201165   74203 cri.go:89] found id: ""
	I0731 18:12:01.201193   74203 logs.go:276] 0 containers: []
	W0731 18:12:01.201204   74203 logs.go:278] No container was found matching "kube-proxy"
	I0731 18:12:01.201217   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 18:12:01.201274   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 18:12:01.233310   74203 cri.go:89] found id: ""
	I0731 18:12:01.233334   74203 logs.go:276] 0 containers: []
	W0731 18:12:01.233342   74203 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 18:12:01.233348   74203 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 18:12:01.233415   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 18:12:01.263412   74203 cri.go:89] found id: ""
	I0731 18:12:01.263442   74203 logs.go:276] 0 containers: []
	W0731 18:12:01.263452   74203 logs.go:278] No container was found matching "kindnet"
	I0731 18:12:01.263459   74203 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 18:12:01.263521   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 18:12:01.296598   74203 cri.go:89] found id: ""
	I0731 18:12:01.296624   74203 logs.go:276] 0 containers: []
	W0731 18:12:01.296632   74203 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 18:12:01.296642   74203 logs.go:123] Gathering logs for describe nodes ...
	I0731 18:12:01.296656   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 18:12:01.372362   74203 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 18:12:01.372381   74203 logs.go:123] Gathering logs for CRI-O ...
	I0731 18:12:01.372395   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 18:12:01.461997   74203 logs.go:123] Gathering logs for container status ...
	I0731 18:12:01.462029   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 18:12:01.507610   74203 logs.go:123] Gathering logs for kubelet ...
	I0731 18:12:01.507636   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 18:12:01.558335   74203 logs.go:123] Gathering logs for dmesg ...
	I0731 18:12:01.558375   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 18:12:04.073333   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:12:04.091122   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 18:12:04.091205   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 18:12:04.130510   74203 cri.go:89] found id: ""
	I0731 18:12:04.130545   74203 logs.go:276] 0 containers: []
	W0731 18:12:04.130558   74203 logs.go:278] No container was found matching "kube-apiserver"
	I0731 18:12:04.130566   74203 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 18:12:04.130632   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 18:12:04.174749   74203 cri.go:89] found id: ""
	I0731 18:12:04.174775   74203 logs.go:276] 0 containers: []
	W0731 18:12:04.174785   74203 logs.go:278] No container was found matching "etcd"
	I0731 18:12:04.174792   74203 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 18:12:04.174846   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 18:12:04.212123   74203 cri.go:89] found id: ""
	I0731 18:12:04.212160   74203 logs.go:276] 0 containers: []
	W0731 18:12:04.212172   74203 logs.go:278] No container was found matching "coredns"
	I0731 18:12:04.212180   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 18:12:04.212254   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 18:12:04.251558   74203 cri.go:89] found id: ""
	I0731 18:12:04.251589   74203 logs.go:276] 0 containers: []
	W0731 18:12:04.251600   74203 logs.go:278] No container was found matching "kube-scheduler"
	I0731 18:12:04.251608   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 18:12:04.251671   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 18:12:04.284831   74203 cri.go:89] found id: ""
	I0731 18:12:04.284864   74203 logs.go:276] 0 containers: []
	W0731 18:12:04.284878   74203 logs.go:278] No container was found matching "kube-proxy"
	I0731 18:12:04.284886   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 18:12:04.284954   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 18:12:04.325076   74203 cri.go:89] found id: ""
	I0731 18:12:04.325115   74203 logs.go:276] 0 containers: []
	W0731 18:12:04.325126   74203 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 18:12:04.325135   74203 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 18:12:04.325195   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 18:12:04.370883   74203 cri.go:89] found id: ""
	I0731 18:12:04.370922   74203 logs.go:276] 0 containers: []
	W0731 18:12:04.370933   74203 logs.go:278] No container was found matching "kindnet"
	I0731 18:12:04.370940   74203 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 18:12:04.370999   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 18:12:04.410639   74203 cri.go:89] found id: ""
	I0731 18:12:04.410671   74203 logs.go:276] 0 containers: []
	W0731 18:12:04.410685   74203 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 18:12:04.410697   74203 logs.go:123] Gathering logs for kubelet ...
	I0731 18:12:04.410713   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 18:12:04.462988   74203 logs.go:123] Gathering logs for dmesg ...
	I0731 18:12:04.463023   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 18:12:04.479086   74203 logs.go:123] Gathering logs for describe nodes ...
	I0731 18:12:04.479123   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 18:12:04.544675   74203 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 18:12:04.544699   74203 logs.go:123] Gathering logs for CRI-O ...
	I0731 18:12:04.544712   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 18:12:04.633231   74203 logs.go:123] Gathering logs for container status ...
	I0731 18:12:04.633267   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 18:12:07.174252   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:12:07.187289   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 18:12:07.187393   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 18:12:07.220927   74203 cri.go:89] found id: ""
	I0731 18:12:07.220953   74203 logs.go:276] 0 containers: []
	W0731 18:12:07.220964   74203 logs.go:278] No container was found matching "kube-apiserver"
	I0731 18:12:07.220972   74203 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 18:12:07.221040   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 18:12:07.256817   74203 cri.go:89] found id: ""
	I0731 18:12:07.256849   74203 logs.go:276] 0 containers: []
	W0731 18:12:07.256861   74203 logs.go:278] No container was found matching "etcd"
	I0731 18:12:07.256870   74203 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 18:12:07.256935   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 18:12:07.290267   74203 cri.go:89] found id: ""
	I0731 18:12:07.290297   74203 logs.go:276] 0 containers: []
	W0731 18:12:07.290309   74203 logs.go:278] No container was found matching "coredns"
	I0731 18:12:07.290315   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 18:12:07.290373   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 18:12:07.330037   74203 cri.go:89] found id: ""
	I0731 18:12:07.330068   74203 logs.go:276] 0 containers: []
	W0731 18:12:07.330079   74203 logs.go:278] No container was found matching "kube-scheduler"
	I0731 18:12:07.330087   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 18:12:07.330143   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 18:12:07.366745   74203 cri.go:89] found id: ""
	I0731 18:12:07.366770   74203 logs.go:276] 0 containers: []
	W0731 18:12:07.366778   74203 logs.go:278] No container was found matching "kube-proxy"
	I0731 18:12:07.366783   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 18:12:07.366861   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 18:12:07.400608   74203 cri.go:89] found id: ""
	I0731 18:12:07.400637   74203 logs.go:276] 0 containers: []
	W0731 18:12:07.400648   74203 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 18:12:07.400661   74203 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 18:12:07.400727   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 18:12:07.434996   74203 cri.go:89] found id: ""
	I0731 18:12:07.435028   74203 logs.go:276] 0 containers: []
	W0731 18:12:07.435037   74203 logs.go:278] No container was found matching "kindnet"
	I0731 18:12:07.435044   74203 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 18:12:07.435130   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 18:12:07.474347   74203 cri.go:89] found id: ""
	I0731 18:12:07.474375   74203 logs.go:276] 0 containers: []
	W0731 18:12:07.474387   74203 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 18:12:07.474400   74203 logs.go:123] Gathering logs for CRI-O ...
	I0731 18:12:07.474415   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 18:12:07.549009   74203 logs.go:123] Gathering logs for container status ...
	I0731 18:12:07.549045   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 18:12:07.586710   74203 logs.go:123] Gathering logs for kubelet ...
	I0731 18:12:07.586736   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 18:12:07.640770   74203 logs.go:123] Gathering logs for dmesg ...
	I0731 18:12:07.640800   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 18:12:07.654380   74203 logs.go:123] Gathering logs for describe nodes ...
	I0731 18:12:07.654405   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 18:12:07.721479   74203 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 18:12:10.221837   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:12:10.235686   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 18:12:10.235746   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 18:12:10.268769   74203 cri.go:89] found id: ""
	I0731 18:12:10.268794   74203 logs.go:276] 0 containers: []
	W0731 18:12:10.268802   74203 logs.go:278] No container was found matching "kube-apiserver"
	I0731 18:12:10.268808   74203 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 18:12:10.268860   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 18:12:10.305229   74203 cri.go:89] found id: ""
	I0731 18:12:10.305264   74203 logs.go:276] 0 containers: []
	W0731 18:12:10.305277   74203 logs.go:278] No container was found matching "etcd"
	I0731 18:12:10.305290   74203 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 18:12:10.305353   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 18:12:10.337070   74203 cri.go:89] found id: ""
	I0731 18:12:10.337095   74203 logs.go:276] 0 containers: []
	W0731 18:12:10.337104   74203 logs.go:278] No container was found matching "coredns"
	I0731 18:12:10.337109   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 18:12:10.337155   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 18:12:10.372979   74203 cri.go:89] found id: ""
	I0731 18:12:10.373005   74203 logs.go:276] 0 containers: []
	W0731 18:12:10.373015   74203 logs.go:278] No container was found matching "kube-scheduler"
	I0731 18:12:10.373022   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 18:12:10.373079   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 18:12:10.407225   74203 cri.go:89] found id: ""
	I0731 18:12:10.407252   74203 logs.go:276] 0 containers: []
	W0731 18:12:10.407264   74203 logs.go:278] No container was found matching "kube-proxy"
	I0731 18:12:10.407270   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 18:12:10.407327   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 18:12:10.443338   74203 cri.go:89] found id: ""
	I0731 18:12:10.443366   74203 logs.go:276] 0 containers: []
	W0731 18:12:10.443377   74203 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 18:12:10.443385   74203 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 18:12:10.443474   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 18:12:10.477005   74203 cri.go:89] found id: ""
	I0731 18:12:10.477030   74203 logs.go:276] 0 containers: []
	W0731 18:12:10.477038   74203 logs.go:278] No container was found matching "kindnet"
	I0731 18:12:10.477043   74203 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 18:12:10.477092   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 18:12:10.509338   74203 cri.go:89] found id: ""
	I0731 18:12:10.509367   74203 logs.go:276] 0 containers: []
	W0731 18:12:10.509378   74203 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 18:12:10.509389   74203 logs.go:123] Gathering logs for kubelet ...
	I0731 18:12:10.509405   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 18:12:10.559604   74203 logs.go:123] Gathering logs for dmesg ...
	I0731 18:12:10.559639   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 18:12:10.572652   74203 logs.go:123] Gathering logs for describe nodes ...
	I0731 18:12:10.572682   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 18:12:10.642749   74203 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 18:12:10.642772   74203 logs.go:123] Gathering logs for CRI-O ...
	I0731 18:12:10.642789   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 18:12:10.728716   74203 logs.go:123] Gathering logs for container status ...
	I0731 18:12:10.728753   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 18:12:13.267783   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:12:13.282235   74203 kubeadm.go:597] duration metric: took 4m4.41837453s to restartPrimaryControlPlane
	W0731 18:12:13.282324   74203 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0731 18:12:13.282355   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0731 18:12:16.959318   74203 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (3.676937263s)
	I0731 18:12:16.959425   74203 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0731 18:12:16.973440   74203 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0731 18:12:16.983482   74203 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0731 18:12:16.993930   74203 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0731 18:12:16.993951   74203 kubeadm.go:157] found existing configuration files:
	
	I0731 18:12:16.993993   74203 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0731 18:12:17.002713   74203 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0731 18:12:17.002771   74203 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0731 18:12:17.012107   74203 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0731 18:12:17.022548   74203 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0731 18:12:17.022604   74203 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0731 18:12:17.033569   74203 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0731 18:12:17.043338   74203 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0731 18:12:17.043391   74203 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0731 18:12:17.052064   74203 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0731 18:12:17.060785   74203 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0731 18:12:17.060850   74203 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0731 18:12:17.069499   74203 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0731 18:12:17.136512   74203 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0731 18:12:17.136579   74203 kubeadm.go:310] [preflight] Running pre-flight checks
	I0731 18:12:17.286224   74203 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0731 18:12:17.286383   74203 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0731 18:12:17.286506   74203 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0731 18:12:17.467092   74203 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0731 18:12:17.468918   74203 out.go:204]   - Generating certificates and keys ...
	I0731 18:12:17.469024   74203 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0731 18:12:17.469135   74203 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0731 18:12:17.469229   74203 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0731 18:12:17.469307   74203 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0731 18:12:17.469439   74203 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0731 18:12:17.469525   74203 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0731 18:12:17.469609   74203 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0731 18:12:17.470025   74203 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0731 18:12:17.470501   74203 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0731 18:12:17.470852   74203 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0731 18:12:17.470899   74203 kubeadm.go:310] [certs] Using the existing "sa" key
	I0731 18:12:17.470949   74203 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0731 18:12:17.673308   74203 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0731 18:12:17.922789   74203 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0731 18:12:18.391239   74203 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0731 18:12:18.464854   74203 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0731 18:12:18.480495   74203 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0731 18:12:18.480675   74203 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0731 18:12:18.480746   74203 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0731 18:12:18.632564   74203 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0731 18:12:18.635416   74203 out.go:204]   - Booting up control plane ...
	I0731 18:12:18.635542   74203 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0731 18:12:18.643338   74203 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0731 18:12:18.645881   74203 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0731 18:12:18.646898   74203 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0731 18:12:18.650052   74203 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0731 18:12:58.652853   74203 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0731 18:12:58.653480   74203 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0731 18:12:58.653735   74203 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0731 18:13:03.654494   74203 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0731 18:13:03.654747   74203 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0731 18:13:13.655618   74203 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0731 18:13:13.655843   74203 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0731 18:13:33.657356   74203 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0731 18:13:33.657560   74203 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0731 18:14:13.660934   74203 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0731 18:14:13.661161   74203 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0731 18:14:13.661183   74203 kubeadm.go:310] 
	I0731 18:14:13.661216   74203 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0731 18:14:13.661251   74203 kubeadm.go:310] 		timed out waiting for the condition
	I0731 18:14:13.661279   74203 kubeadm.go:310] 
	I0731 18:14:13.661338   74203 kubeadm.go:310] 	This error is likely caused by:
	I0731 18:14:13.661378   74203 kubeadm.go:310] 		- The kubelet is not running
	I0731 18:14:13.661477   74203 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0731 18:14:13.661483   74203 kubeadm.go:310] 
	I0731 18:14:13.661577   74203 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0731 18:14:13.661617   74203 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0731 18:14:13.661646   74203 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0731 18:14:13.661651   74203 kubeadm.go:310] 
	I0731 18:14:13.661781   74203 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0731 18:14:13.661897   74203 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0731 18:14:13.661909   74203 kubeadm.go:310] 
	I0731 18:14:13.662044   74203 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0731 18:14:13.662164   74203 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0731 18:14:13.662265   74203 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0731 18:14:13.662444   74203 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0731 18:14:13.662477   74203 kubeadm.go:310] 
	I0731 18:14:13.663123   74203 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0731 18:14:13.663235   74203 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0731 18:14:13.663331   74203 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	W0731 18:14:13.663497   74203 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0731 18:14:13.663559   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0731 18:14:18.956376   74203 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (5.292787213s)
	I0731 18:14:18.956479   74203 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0731 18:14:18.970820   74203 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0731 18:14:18.980747   74203 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0731 18:14:18.980771   74203 kubeadm.go:157] found existing configuration files:
	
	I0731 18:14:18.980816   74203 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0731 18:14:18.989985   74203 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0731 18:14:18.990063   74203 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0731 18:14:18.999143   74203 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0731 18:14:19.008740   74203 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0731 18:14:19.008798   74203 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0731 18:14:19.018729   74203 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0731 18:14:19.028953   74203 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0731 18:14:19.029015   74203 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0731 18:14:19.039399   74203 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0731 18:14:19.049072   74203 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0731 18:14:19.049124   74203 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0731 18:14:19.059592   74203 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0731 18:14:19.121542   74203 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0731 18:14:19.121613   74203 kubeadm.go:310] [preflight] Running pre-flight checks
	I0731 18:14:19.271989   74203 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0731 18:14:19.272100   74203 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0731 18:14:19.272223   74203 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0731 18:14:19.440224   74203 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0731 18:14:19.441929   74203 out.go:204]   - Generating certificates and keys ...
	I0731 18:14:19.442025   74203 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0731 18:14:19.442104   74203 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0731 18:14:19.442196   74203 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0731 18:14:19.442245   74203 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0731 18:14:19.442326   74203 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0731 18:14:19.442395   74203 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0731 18:14:19.442498   74203 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0731 18:14:19.442610   74203 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0731 18:14:19.442687   74203 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0731 18:14:19.442770   74203 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0731 18:14:19.442813   74203 kubeadm.go:310] [certs] Using the existing "sa" key
	I0731 18:14:19.442887   74203 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0731 18:14:19.481696   74203 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0731 18:14:19.804252   74203 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0731 18:14:20.038734   74203 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0731 18:14:20.211133   74203 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0731 18:14:20.225726   74203 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0731 18:14:20.227920   74203 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0731 18:14:20.227977   74203 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0731 18:14:20.364068   74203 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0731 18:14:20.365991   74203 out.go:204]   - Booting up control plane ...
	I0731 18:14:20.366094   74203 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0731 18:14:20.366195   74203 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0731 18:14:20.366270   74203 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0731 18:14:20.366379   74203 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0731 18:14:20.367688   74203 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0731 18:15:00.365616   74203 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0731 18:15:00.366184   74203 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0731 18:15:00.366412   74203 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0731 18:15:05.366332   74203 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0731 18:15:05.366529   74203 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0731 18:15:15.366241   74203 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0731 18:15:15.366499   74203 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0731 18:15:35.366114   74203 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0731 18:15:35.366344   74203 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0731 18:16:15.365995   74203 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0731 18:16:15.366181   74203 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0731 18:16:15.366191   74203 kubeadm.go:310] 
	I0731 18:16:15.366224   74203 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0731 18:16:15.366448   74203 kubeadm.go:310] 		timed out waiting for the condition
	I0731 18:16:15.366472   74203 kubeadm.go:310] 
	I0731 18:16:15.366517   74203 kubeadm.go:310] 	This error is likely caused by:
	I0731 18:16:15.366568   74203 kubeadm.go:310] 		- The kubelet is not running
	I0731 18:16:15.366723   74203 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0731 18:16:15.366740   74203 kubeadm.go:310] 
	I0731 18:16:15.366863   74203 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0731 18:16:15.366924   74203 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0731 18:16:15.366986   74203 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0731 18:16:15.366999   74203 kubeadm.go:310] 
	I0731 18:16:15.367153   74203 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0731 18:16:15.367271   74203 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0731 18:16:15.367283   74203 kubeadm.go:310] 
	I0731 18:16:15.367418   74203 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0731 18:16:15.367504   74203 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0731 18:16:15.367609   74203 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0731 18:16:15.367725   74203 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0731 18:16:15.367734   74203 kubeadm.go:310] 
	I0731 18:16:15.369210   74203 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0731 18:16:15.369361   74203 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0731 18:16:15.369434   74203 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0731 18:16:15.369496   74203 kubeadm.go:394] duration metric: took 8m6.557607575s to StartCluster
	I0731 18:16:15.369537   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 18:16:15.369590   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 18:16:15.432899   74203 cri.go:89] found id: ""
	I0731 18:16:15.432929   74203 logs.go:276] 0 containers: []
	W0731 18:16:15.432941   74203 logs.go:278] No container was found matching "kube-apiserver"
	I0731 18:16:15.432947   74203 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 18:16:15.433005   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 18:16:15.470506   74203 cri.go:89] found id: ""
	I0731 18:16:15.470534   74203 logs.go:276] 0 containers: []
	W0731 18:16:15.470542   74203 logs.go:278] No container was found matching "etcd"
	I0731 18:16:15.470549   74203 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 18:16:15.470609   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 18:16:15.502032   74203 cri.go:89] found id: ""
	I0731 18:16:15.502055   74203 logs.go:276] 0 containers: []
	W0731 18:16:15.502062   74203 logs.go:278] No container was found matching "coredns"
	I0731 18:16:15.502067   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 18:16:15.502115   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 18:16:15.533897   74203 cri.go:89] found id: ""
	I0731 18:16:15.533918   74203 logs.go:276] 0 containers: []
	W0731 18:16:15.533925   74203 logs.go:278] No container was found matching "kube-scheduler"
	I0731 18:16:15.533930   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 18:16:15.533980   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 18:16:15.565275   74203 cri.go:89] found id: ""
	I0731 18:16:15.565311   74203 logs.go:276] 0 containers: []
	W0731 18:16:15.565326   74203 logs.go:278] No container was found matching "kube-proxy"
	I0731 18:16:15.565333   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 18:16:15.565395   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 18:16:15.601402   74203 cri.go:89] found id: ""
	I0731 18:16:15.601427   74203 logs.go:276] 0 containers: []
	W0731 18:16:15.601435   74203 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 18:16:15.601440   74203 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 18:16:15.601489   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 18:16:15.638778   74203 cri.go:89] found id: ""
	I0731 18:16:15.638801   74203 logs.go:276] 0 containers: []
	W0731 18:16:15.638808   74203 logs.go:278] No container was found matching "kindnet"
	I0731 18:16:15.638813   74203 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 18:16:15.638861   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 18:16:15.675697   74203 cri.go:89] found id: ""
	I0731 18:16:15.675720   74203 logs.go:276] 0 containers: []
	W0731 18:16:15.675728   74203 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 18:16:15.675736   74203 logs.go:123] Gathering logs for describe nodes ...
	I0731 18:16:15.675748   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 18:16:15.745287   74203 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 18:16:15.745325   74203 logs.go:123] Gathering logs for CRI-O ...
	I0731 18:16:15.745341   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 18:16:15.848503   74203 logs.go:123] Gathering logs for container status ...
	I0731 18:16:15.848536   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 18:16:15.887234   74203 logs.go:123] Gathering logs for kubelet ...
	I0731 18:16:15.887258   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 18:16:15.934871   74203 logs.go:123] Gathering logs for dmesg ...
	I0731 18:16:15.934901   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	W0731 18:16:15.947727   74203 out.go:364] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0731 18:16:15.947769   74203 out.go:239] * 
	W0731 18:16:15.947817   74203 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0731 18:16:15.947836   74203 out.go:239] * 
	W0731 18:16:15.948669   74203 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0731 18:16:15.952286   74203 out.go:177] 
	W0731 18:16:15.953375   74203 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0731 18:16:15.953424   74203 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0731 18:16:15.953442   74203 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0731 18:16:15.954734   74203 out.go:177] 

                                                
                                                
** /stderr **
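The kubeadm output captured above already names the next diagnostic steps: check the kubelet via systemctl/journalctl, list the control-plane containers with crictl, and retry with the kubelet cgroup driver forced to systemd. A minimal sketch of those steps, assuming they are run from the same workspace against this profile (the profile name, crio socket path, and the --extra-config flag are taken verbatim from the log above; whether the retry resolves this particular failure is not verified):

	out/minikube-linux-amd64 -p old-k8s-version-276459 ssh "sudo systemctl status kubelet"
	out/minikube-linux-amd64 -p old-k8s-version-276459 ssh "sudo journalctl -xeu kubelet"
	out/minikube-linux-amd64 -p old-k8s-version-276459 ssh "sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause"
	out/minikube-linux-amd64 start -p old-k8s-version-276459 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2 --container-runtime=crio --kubernetes-version=v1.20.0 --extra-config=kubelet.cgroup-driver=systemd
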
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-linux-amd64 start -p old-k8s-version-276459 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0": exit status 109
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-276459 -n old-k8s-version-276459
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-276459 -n old-k8s-version-276459: exit status 2 (215.395177ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/SecondStart FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/SecondStart]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-276459 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p old-k8s-version-276459 logs -n 25: (1.534603637s)
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/SecondStart logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p enable-default-cni-985288                           | enable-default-cni-985288    | jenkins | v1.33.1 | 31 Jul 24 17:58 UTC | 31 Jul 24 17:58 UTC |
	|         | sudo cat                                               |                              |         |         |                     |                     |
	|         | /etc/containerd/config.toml                            |                              |         |         |                     |                     |
	| ssh     | -p enable-default-cni-985288                           | enable-default-cni-985288    | jenkins | v1.33.1 | 31 Jul 24 17:58 UTC | 31 Jul 24 17:58 UTC |
	|         | sudo containerd config dump                            |                              |         |         |                     |                     |
	| ssh     | -p enable-default-cni-985288                           | enable-default-cni-985288    | jenkins | v1.33.1 | 31 Jul 24 17:58 UTC | 31 Jul 24 17:58 UTC |
	|         | sudo systemctl status crio                             |                              |         |         |                     |                     |
	|         | --all --full --no-pager                                |                              |         |         |                     |                     |
	| ssh     | -p enable-default-cni-985288                           | enable-default-cni-985288    | jenkins | v1.33.1 | 31 Jul 24 17:58 UTC | 31 Jul 24 17:58 UTC |
	|         | sudo systemctl cat crio                                |                              |         |         |                     |                     |
	|         | --no-pager                                             |                              |         |         |                     |                     |
	| ssh     | -p enable-default-cni-985288                           | enable-default-cni-985288    | jenkins | v1.33.1 | 31 Jul 24 17:58 UTC | 31 Jul 24 17:58 UTC |
	|         | sudo find /etc/crio -type f                            |                              |         |         |                     |                     |
	|         | -exec sh -c 'echo {}; cat {}'                          |                              |         |         |                     |                     |
	|         | \;                                                     |                              |         |         |                     |                     |
	| ssh     | -p enable-default-cni-985288                           | enable-default-cni-985288    | jenkins | v1.33.1 | 31 Jul 24 17:58 UTC | 31 Jul 24 17:58 UTC |
	|         | sudo crio config                                       |                              |         |         |                     |                     |
	| delete  | -p enable-default-cni-985288                           | enable-default-cni-985288    | jenkins | v1.33.1 | 31 Jul 24 17:58 UTC | 31 Jul 24 17:58 UTC |
	| delete  | -p                                                     | disable-driver-mounts-280161 | jenkins | v1.33.1 | 31 Jul 24 17:58 UTC | 31 Jul 24 17:58 UTC |
	|         | disable-driver-mounts-280161                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-094310 | jenkins | v1.33.1 | 31 Jul 24 17:58 UTC | 31 Jul 24 18:00 UTC |
	|         | default-k8s-diff-port-094310                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-673754             | no-preload-673754            | jenkins | v1.33.1 | 31 Jul 24 17:59 UTC | 31 Jul 24 17:59 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-673754                                   | no-preload-673754            | jenkins | v1.33.1 | 31 Jul 24 17:59 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-094310  | default-k8s-diff-port-094310 | jenkins | v1.33.1 | 31 Jul 24 18:00 UTC | 31 Jul 24 18:00 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-094310 | jenkins | v1.33.1 | 31 Jul 24 18:00 UTC |                     |
	|         | default-k8s-diff-port-094310                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-436067            | embed-certs-436067           | jenkins | v1.33.1 | 31 Jul 24 18:00 UTC | 31 Jul 24 18:00 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-436067                                  | embed-certs-436067           | jenkins | v1.33.1 | 31 Jul 24 18:00 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-276459        | old-k8s-version-276459       | jenkins | v1.33.1 | 31 Jul 24 18:02 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-673754                  | no-preload-673754            | jenkins | v1.33.1 | 31 Jul 24 18:02 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-673754 --memory=2200                     | no-preload-673754            | jenkins | v1.33.1 | 31 Jul 24 18:02 UTC | 31 Jul 24 18:13 UTC |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0-beta.0                    |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-094310       | default-k8s-diff-port-094310 | jenkins | v1.33.1 | 31 Jul 24 18:02 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-436067                 | embed-certs-436067           | jenkins | v1.33.1 | 31 Jul 24 18:02 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-094310 | jenkins | v1.33.1 | 31 Jul 24 18:02 UTC | 31 Jul 24 18:12 UTC |
	|         | default-k8s-diff-port-094310                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3                           |                              |         |         |                     |                     |
	| start   | -p embed-certs-436067                                  | embed-certs-436067           | jenkins | v1.33.1 | 31 Jul 24 18:02 UTC | 31 Jul 24 18:12 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3                           |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-276459                              | old-k8s-version-276459       | jenkins | v1.33.1 | 31 Jul 24 18:03 UTC | 31 Jul 24 18:03 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-276459             | old-k8s-version-276459       | jenkins | v1.33.1 | 31 Jul 24 18:03 UTC | 31 Jul 24 18:03 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-276459                              | old-k8s-version-276459       | jenkins | v1.33.1 | 31 Jul 24 18:03 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/31 18:03:55
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0731 18:03:55.344211   74203 out.go:291] Setting OutFile to fd 1 ...
	I0731 18:03:55.344313   74203 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 18:03:55.344321   74203 out.go:304] Setting ErrFile to fd 2...
	I0731 18:03:55.344324   74203 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 18:03:55.344541   74203 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19349-8084/.minikube/bin
	I0731 18:03:55.345055   74203 out.go:298] Setting JSON to false
	I0731 18:03:55.345905   74203 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":6379,"bootTime":1722442656,"procs":196,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1065-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0731 18:03:55.345962   74203 start.go:139] virtualization: kvm guest
	I0731 18:03:55.347848   74203 out.go:177] * [old-k8s-version-276459] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0731 18:03:55.349045   74203 notify.go:220] Checking for updates...
	I0731 18:03:55.349052   74203 out.go:177]   - MINIKUBE_LOCATION=19349
	I0731 18:03:55.350359   74203 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0731 18:03:55.351583   74203 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19349-8084/kubeconfig
	I0731 18:03:55.352789   74203 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19349-8084/.minikube
	I0731 18:03:55.354046   74203 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0731 18:03:55.355244   74203 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0731 18:03:55.356819   74203 config.go:182] Loaded profile config "old-k8s-version-276459": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0731 18:03:55.357218   74203 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 18:03:55.357268   74203 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 18:03:55.372081   74203 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38815
	I0731 18:03:55.372424   74203 main.go:141] libmachine: () Calling .GetVersion
	I0731 18:03:55.372950   74203 main.go:141] libmachine: Using API Version  1
	I0731 18:03:55.372972   74203 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 18:03:55.373263   74203 main.go:141] libmachine: () Calling .GetMachineName
	I0731 18:03:55.373466   74203 main.go:141] libmachine: (old-k8s-version-276459) Calling .DriverName
	I0731 18:03:55.375198   74203 out.go:177] * Kubernetes 1.30.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.30.3
	I0731 18:03:55.376370   74203 driver.go:392] Setting default libvirt URI to qemu:///system
	I0731 18:03:55.376714   74203 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 18:03:55.376748   74203 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 18:03:55.390924   74203 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46311
	I0731 18:03:55.391380   74203 main.go:141] libmachine: () Calling .GetVersion
	I0731 18:03:55.391853   74203 main.go:141] libmachine: Using API Version  1
	I0731 18:03:55.391875   74203 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 18:03:55.392165   74203 main.go:141] libmachine: () Calling .GetMachineName
	I0731 18:03:55.392389   74203 main.go:141] libmachine: (old-k8s-version-276459) Calling .DriverName
	I0731 18:03:55.425283   74203 out.go:177] * Using the kvm2 driver based on existing profile
	I0731 18:03:55.426485   74203 start.go:297] selected driver: kvm2
	I0731 18:03:55.426517   74203 start.go:901] validating driver "kvm2" against &{Name:old-k8s-version-276459 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-276459 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.26 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0731 18:03:55.426632   74203 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0731 18:03:55.427322   74203 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0731 18:03:55.427419   74203 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19349-8084/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0731 18:03:55.441518   74203 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0731 18:03:55.441891   74203 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0731 18:03:55.441921   74203 cni.go:84] Creating CNI manager for ""
	I0731 18:03:55.441928   74203 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0731 18:03:55.441970   74203 start.go:340] cluster config:
	{Name:old-k8s-version-276459 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-276459 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.26 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0731 18:03:55.442088   74203 iso.go:125] acquiring lock: {Name:mk4494bb5a63902bf12a909c3109db85229bc516 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0731 18:03:55.443745   74203 out.go:177] * Starting "old-k8s-version-276459" primary control-plane node in "old-k8s-version-276459" cluster
	I0731 18:03:55.299338   73479 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.126:22: connect: no route to host
	I0731 18:03:55.445026   74203 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0731 18:03:55.445062   74203 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19349-8084/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0731 18:03:55.445085   74203 cache.go:56] Caching tarball of preloaded images
	I0731 18:03:55.445157   74203 preload.go:172] Found /home/jenkins/minikube-integration/19349-8084/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0731 18:03:55.445167   74203 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0731 18:03:55.445250   74203 profile.go:143] Saving config to /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/old-k8s-version-276459/config.json ...
	I0731 18:03:55.445412   74203 start.go:360] acquireMachinesLock for old-k8s-version-276459: {Name:mkd78eacfab7d6c3058c6674434b3d889ec957e0 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0731 18:03:58.371340   73479 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.126:22: connect: no route to host
	I0731 18:04:04.451379   73479 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.126:22: connect: no route to host
	I0731 18:04:07.523408   73479 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.126:22: connect: no route to host
	I0731 18:04:13.603407   73479 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.126:22: connect: no route to host
	I0731 18:04:16.675437   73479 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.126:22: connect: no route to host
	I0731 18:04:22.755418   73479 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.126:22: connect: no route to host
	I0731 18:04:25.827434   73479 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.126:22: connect: no route to host
	I0731 18:04:31.907379   73479 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.126:22: connect: no route to host
	I0731 18:04:34.979426   73479 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.126:22: connect: no route to host
	I0731 18:04:41.059417   73479 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.126:22: connect: no route to host
	I0731 18:04:44.131434   73479 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.126:22: connect: no route to host
	I0731 18:04:50.211391   73479 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.126:22: connect: no route to host
	I0731 18:04:53.283445   73479 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.126:22: connect: no route to host
	I0731 18:04:59.363428   73479 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.126:22: connect: no route to host
	I0731 18:05:02.435450   73479 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.126:22: connect: no route to host
	I0731 18:05:08.515394   73479 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.126:22: connect: no route to host
	I0731 18:05:11.587394   73479 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.126:22: connect: no route to host
	I0731 18:05:17.667388   73479 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.126:22: connect: no route to host
	I0731 18:05:20.739413   73479 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.126:22: connect: no route to host
	I0731 18:05:26.819368   73479 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.126:22: connect: no route to host
	I0731 18:05:29.891394   73479 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.126:22: connect: no route to host
	I0731 18:05:35.971391   73479 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.126:22: connect: no route to host
	I0731 18:05:39.043445   73479 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.126:22: connect: no route to host
	I0731 18:05:45.123378   73479 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.126:22: connect: no route to host
	I0731 18:05:48.195378   73479 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.126:22: connect: no route to host
	I0731 18:05:54.275417   73479 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.126:22: connect: no route to host
	I0731 18:05:57.347374   73479 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.126:22: connect: no route to host
	I0731 18:06:03.427390   73479 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.126:22: connect: no route to host
	I0731 18:06:06.499378   73479 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.126:22: connect: no route to host
	I0731 18:06:12.579395   73479 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.126:22: connect: no route to host
	I0731 18:06:15.651447   73479 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.126:22: connect: no route to host
	I0731 18:06:21.731394   73479 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.126:22: connect: no route to host
	I0731 18:06:24.803405   73479 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.126:22: connect: no route to host
	I0731 18:06:30.883468   73479 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.126:22: connect: no route to host
	I0731 18:06:33.955397   73479 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.126:22: connect: no route to host
	I0731 18:06:40.035387   73479 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.126:22: connect: no route to host
	I0731 18:06:43.107448   73479 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.126:22: connect: no route to host
	I0731 18:06:49.187413   73479 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.126:22: connect: no route to host
	I0731 18:06:52.259420   73479 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.126:22: connect: no route to host
	I0731 18:06:58.339413   73479 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.126:22: connect: no route to host
	I0731 18:07:01.411396   73479 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.126:22: connect: no route to host
	I0731 18:07:04.416121   73696 start.go:364] duration metric: took 4m18.256589549s to acquireMachinesLock for "default-k8s-diff-port-094310"
	I0731 18:07:04.416183   73696 start.go:96] Skipping create...Using existing machine configuration
	I0731 18:07:04.416192   73696 fix.go:54] fixHost starting: 
	I0731 18:07:04.416522   73696 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 18:07:04.416570   73696 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 18:07:04.432249   73696 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32817
	I0731 18:07:04.432715   73696 main.go:141] libmachine: () Calling .GetVersion
	I0731 18:07:04.433206   73696 main.go:141] libmachine: Using API Version  1
	I0731 18:07:04.433234   73696 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 18:07:04.433616   73696 main.go:141] libmachine: () Calling .GetMachineName
	I0731 18:07:04.433833   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) Calling .DriverName
	I0731 18:07:04.434001   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) Calling .GetState
	I0731 18:07:04.436061   73696 fix.go:112] recreateIfNeeded on default-k8s-diff-port-094310: state=Stopped err=<nil>
	I0731 18:07:04.436082   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) Calling .DriverName
	W0731 18:07:04.436241   73696 fix.go:138] unexpected machine state, will restart: <nil>
	I0731 18:07:04.438139   73696 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-094310" ...
	I0731 18:07:04.439463   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) Calling .Start
	I0731 18:07:04.439678   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) Ensuring networks are active...
	I0731 18:07:04.440645   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) Ensuring network default is active
	I0731 18:07:04.441067   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) Ensuring network mk-default-k8s-diff-port-094310 is active
	I0731 18:07:04.441473   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) Getting domain xml...
	I0731 18:07:04.442331   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) Creating domain...
	I0731 18:07:05.660745   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) Waiting to get IP...
	I0731 18:07:05.661963   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) DBG | domain default-k8s-diff-port-094310 has defined MAC address 52:54:00:a9:b2:ae in network mk-default-k8s-diff-port-094310
	I0731 18:07:05.662532   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) DBG | unable to find current IP address of domain default-k8s-diff-port-094310 in network mk-default-k8s-diff-port-094310
	I0731 18:07:05.662620   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) DBG | I0731 18:07:05.662524   74854 retry.go:31] will retry after 294.438382ms: waiting for machine to come up
	I0731 18:07:05.959200   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) DBG | domain default-k8s-diff-port-094310 has defined MAC address 52:54:00:a9:b2:ae in network mk-default-k8s-diff-port-094310
	I0731 18:07:05.959668   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) DBG | unable to find current IP address of domain default-k8s-diff-port-094310 in network mk-default-k8s-diff-port-094310
	I0731 18:07:05.959699   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) DBG | I0731 18:07:05.959619   74854 retry.go:31] will retry after 331.316387ms: waiting for machine to come up
	I0731 18:07:04.413166   73479 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0731 18:07:04.413216   73479 main.go:141] libmachine: (no-preload-673754) Calling .GetMachineName
	I0731 18:07:04.413580   73479 buildroot.go:166] provisioning hostname "no-preload-673754"
	I0731 18:07:04.413609   73479 main.go:141] libmachine: (no-preload-673754) Calling .GetMachineName
	I0731 18:07:04.413827   73479 main.go:141] libmachine: (no-preload-673754) Calling .GetSSHHostname
	I0731 18:07:04.415964   73479 machine.go:97] duration metric: took 4m37.431900974s to provisionDockerMachine
	I0731 18:07:04.416013   73479 fix.go:56] duration metric: took 4m37.452176305s for fixHost
	I0731 18:07:04.416023   73479 start.go:83] releasing machines lock for "no-preload-673754", held for 4m37.452227129s
	W0731 18:07:04.416048   73479 start.go:714] error starting host: provision: host is not running
	W0731 18:07:04.416143   73479 out.go:239] ! StartHost failed, but will try again: provision: host is not running
	I0731 18:07:04.416157   73479 start.go:729] Will try again in 5 seconds ...
	I0731 18:07:06.292146   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) DBG | domain default-k8s-diff-port-094310 has defined MAC address 52:54:00:a9:b2:ae in network mk-default-k8s-diff-port-094310
	I0731 18:07:06.292533   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) DBG | unable to find current IP address of domain default-k8s-diff-port-094310 in network mk-default-k8s-diff-port-094310
	I0731 18:07:06.292555   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) DBG | I0731 18:07:06.292487   74854 retry.go:31] will retry after 324.512889ms: waiting for machine to come up
	I0731 18:07:06.619045   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) DBG | domain default-k8s-diff-port-094310 has defined MAC address 52:54:00:a9:b2:ae in network mk-default-k8s-diff-port-094310
	I0731 18:07:06.619440   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) DBG | unable to find current IP address of domain default-k8s-diff-port-094310 in network mk-default-k8s-diff-port-094310
	I0731 18:07:06.619470   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) DBG | I0731 18:07:06.619404   74854 retry.go:31] will retry after 556.332506ms: waiting for machine to come up
	I0731 18:07:07.177224   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) DBG | domain default-k8s-diff-port-094310 has defined MAC address 52:54:00:a9:b2:ae in network mk-default-k8s-diff-port-094310
	I0731 18:07:07.177689   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) DBG | unable to find current IP address of domain default-k8s-diff-port-094310 in network mk-default-k8s-diff-port-094310
	I0731 18:07:07.177722   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) DBG | I0731 18:07:07.177631   74854 retry.go:31] will retry after 599.567638ms: waiting for machine to come up
	I0731 18:07:07.778444   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) DBG | domain default-k8s-diff-port-094310 has defined MAC address 52:54:00:a9:b2:ae in network mk-default-k8s-diff-port-094310
	I0731 18:07:07.778848   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) DBG | unable to find current IP address of domain default-k8s-diff-port-094310 in network mk-default-k8s-diff-port-094310
	I0731 18:07:07.778885   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) DBG | I0731 18:07:07.778820   74854 retry.go:31] will retry after 944.17246ms: waiting for machine to come up
	I0731 18:07:08.724983   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) DBG | domain default-k8s-diff-port-094310 has defined MAC address 52:54:00:a9:b2:ae in network mk-default-k8s-diff-port-094310
	I0731 18:07:08.725484   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) DBG | unable to find current IP address of domain default-k8s-diff-port-094310 in network mk-default-k8s-diff-port-094310
	I0731 18:07:08.725512   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) DBG | I0731 18:07:08.725433   74854 retry.go:31] will retry after 1.077726279s: waiting for machine to come up
	I0731 18:07:09.805196   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) DBG | domain default-k8s-diff-port-094310 has defined MAC address 52:54:00:a9:b2:ae in network mk-default-k8s-diff-port-094310
	I0731 18:07:09.805629   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) DBG | unable to find current IP address of domain default-k8s-diff-port-094310 in network mk-default-k8s-diff-port-094310
	I0731 18:07:09.805667   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) DBG | I0731 18:07:09.805575   74854 retry.go:31] will retry after 1.140059854s: waiting for machine to come up
	I0731 18:07:10.951633   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) DBG | domain default-k8s-diff-port-094310 has defined MAC address 52:54:00:a9:b2:ae in network mk-default-k8s-diff-port-094310
	I0731 18:07:10.952066   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) DBG | unable to find current IP address of domain default-k8s-diff-port-094310 in network mk-default-k8s-diff-port-094310
	I0731 18:07:10.952091   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) DBG | I0731 18:07:10.952028   74854 retry.go:31] will retry after 1.691707383s: waiting for machine to come up
	I0731 18:07:09.418606   73479 start.go:360] acquireMachinesLock for no-preload-673754: {Name:mkd78eacfab7d6c3058c6674434b3d889ec957e0 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0731 18:07:12.645970   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) DBG | domain default-k8s-diff-port-094310 has defined MAC address 52:54:00:a9:b2:ae in network mk-default-k8s-diff-port-094310
	I0731 18:07:12.646588   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) DBG | unable to find current IP address of domain default-k8s-diff-port-094310 in network mk-default-k8s-diff-port-094310
	I0731 18:07:12.646623   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) DBG | I0731 18:07:12.646525   74854 retry.go:31] will retry after 2.257630784s: waiting for machine to come up
	I0731 18:07:14.905494   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) DBG | domain default-k8s-diff-port-094310 has defined MAC address 52:54:00:a9:b2:ae in network mk-default-k8s-diff-port-094310
	I0731 18:07:14.905891   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) DBG | unable to find current IP address of domain default-k8s-diff-port-094310 in network mk-default-k8s-diff-port-094310
	I0731 18:07:14.905922   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) DBG | I0731 18:07:14.905833   74854 retry.go:31] will retry after 2.877713561s: waiting for machine to come up
	I0731 18:07:17.786797   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) DBG | domain default-k8s-diff-port-094310 has defined MAC address 52:54:00:a9:b2:ae in network mk-default-k8s-diff-port-094310
	I0731 18:07:17.787173   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) DBG | unable to find current IP address of domain default-k8s-diff-port-094310 in network mk-default-k8s-diff-port-094310
	I0731 18:07:17.787194   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) DBG | I0731 18:07:17.787140   74854 retry.go:31] will retry after 3.028611559s: waiting for machine to come up
	I0731 18:07:20.817593   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) DBG | domain default-k8s-diff-port-094310 has defined MAC address 52:54:00:a9:b2:ae in network mk-default-k8s-diff-port-094310
	I0731 18:07:20.817898   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) Found IP for machine: 192.168.72.197
	I0731 18:07:20.817921   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) Reserving static IP address...
	I0731 18:07:20.817934   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) DBG | domain default-k8s-diff-port-094310 has current primary IP address 192.168.72.197 and MAC address 52:54:00:a9:b2:ae in network mk-default-k8s-diff-port-094310
	I0731 18:07:20.818352   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-094310", mac: "52:54:00:a9:b2:ae", ip: "192.168.72.197"} in network mk-default-k8s-diff-port-094310: {Iface:virbr1 ExpiryTime:2024-07-31 19:07:14 +0000 UTC Type:0 Mac:52:54:00:a9:b2:ae Iaid: IPaddr:192.168.72.197 Prefix:24 Hostname:default-k8s-diff-port-094310 Clientid:01:52:54:00:a9:b2:ae}
	I0731 18:07:20.818379   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) Reserved static IP address: 192.168.72.197
	I0731 18:07:20.818400   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) DBG | skip adding static IP to network mk-default-k8s-diff-port-094310 - found existing host DHCP lease matching {name: "default-k8s-diff-port-094310", mac: "52:54:00:a9:b2:ae", ip: "192.168.72.197"}
	I0731 18:07:20.818414   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) Waiting for SSH to be available...
	I0731 18:07:20.818431   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) DBG | Getting to WaitForSSH function...
	I0731 18:07:20.820417   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) DBG | domain default-k8s-diff-port-094310 has defined MAC address 52:54:00:a9:b2:ae in network mk-default-k8s-diff-port-094310
	I0731 18:07:20.820731   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a9:b2:ae", ip: ""} in network mk-default-k8s-diff-port-094310: {Iface:virbr1 ExpiryTime:2024-07-31 19:07:14 +0000 UTC Type:0 Mac:52:54:00:a9:b2:ae Iaid: IPaddr:192.168.72.197 Prefix:24 Hostname:default-k8s-diff-port-094310 Clientid:01:52:54:00:a9:b2:ae}
	I0731 18:07:20.820758   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) DBG | domain default-k8s-diff-port-094310 has defined IP address 192.168.72.197 and MAC address 52:54:00:a9:b2:ae in network mk-default-k8s-diff-port-094310
	I0731 18:07:20.820893   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) DBG | Using SSH client type: external
	I0731 18:07:20.820916   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) DBG | Using SSH private key: /home/jenkins/minikube-integration/19349-8084/.minikube/machines/default-k8s-diff-port-094310/id_rsa (-rw-------)
	I0731 18:07:20.820940   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.197 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19349-8084/.minikube/machines/default-k8s-diff-port-094310/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0731 18:07:20.820950   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) DBG | About to run SSH command:
	I0731 18:07:20.820959   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) DBG | exit 0
	I0731 18:07:20.943348   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) DBG | SSH cmd err, output: <nil>: 
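	Note: the "exit 0" run above is the driver's WaitForSSH probe; the VM counts as reachable once an external ssh invocation with the logged options returns a zero exit status. Below is a minimal Go sketch of the same probe (hypothetical code, not minikube's implementation; the key path and guest IP are copied from this run's log and will differ on other runs):

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		// Same client options as the WaitForSSH command logged above.
		cmd := exec.Command("ssh",
			"-F", "/dev/null",
			"-o", "ConnectionAttempts=3",
			"-o", "ConnectTimeout=10",
			"-o", "StrictHostKeyChecking=no",
			"-o", "UserKnownHostsFile=/dev/null",
			"-o", "IdentitiesOnly=yes",
			"-i", "/home/jenkins/minikube-integration/19349-8084/.minikube/machines/default-k8s-diff-port-094310/id_rsa",
			"-p", "22",
			"docker@192.168.72.197",
			"exit 0")
		// A nil error corresponds to the "SSH cmd err, output: <nil>" line in the log.
		fmt.Println("ssh probe error:", cmd.Run())
	}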
	I0731 18:07:20.943708   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) Calling .GetConfigRaw
	I0731 18:07:20.944373   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) Calling .GetIP
	I0731 18:07:20.947080   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) DBG | domain default-k8s-diff-port-094310 has defined MAC address 52:54:00:a9:b2:ae in network mk-default-k8s-diff-port-094310
	I0731 18:07:20.947465   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a9:b2:ae", ip: ""} in network mk-default-k8s-diff-port-094310: {Iface:virbr1 ExpiryTime:2024-07-31 19:07:14 +0000 UTC Type:0 Mac:52:54:00:a9:b2:ae Iaid: IPaddr:192.168.72.197 Prefix:24 Hostname:default-k8s-diff-port-094310 Clientid:01:52:54:00:a9:b2:ae}
	I0731 18:07:20.947499   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) DBG | domain default-k8s-diff-port-094310 has defined IP address 192.168.72.197 and MAC address 52:54:00:a9:b2:ae in network mk-default-k8s-diff-port-094310
	I0731 18:07:20.947731   73696 profile.go:143] Saving config to /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/default-k8s-diff-port-094310/config.json ...
	I0731 18:07:20.947909   73696 machine.go:94] provisionDockerMachine start ...
	I0731 18:07:20.947926   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) Calling .DriverName
	I0731 18:07:20.948124   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) Calling .GetSSHHostname
	I0731 18:07:20.950698   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) DBG | domain default-k8s-diff-port-094310 has defined MAC address 52:54:00:a9:b2:ae in network mk-default-k8s-diff-port-094310
	I0731 18:07:20.951056   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a9:b2:ae", ip: ""} in network mk-default-k8s-diff-port-094310: {Iface:virbr1 ExpiryTime:2024-07-31 19:07:14 +0000 UTC Type:0 Mac:52:54:00:a9:b2:ae Iaid: IPaddr:192.168.72.197 Prefix:24 Hostname:default-k8s-diff-port-094310 Clientid:01:52:54:00:a9:b2:ae}
	I0731 18:07:20.951083   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) DBG | domain default-k8s-diff-port-094310 has defined IP address 192.168.72.197 and MAC address 52:54:00:a9:b2:ae in network mk-default-k8s-diff-port-094310
	I0731 18:07:20.951228   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) Calling .GetSSHPort
	I0731 18:07:20.951443   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) Calling .GetSSHKeyPath
	I0731 18:07:20.951608   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) Calling .GetSSHKeyPath
	I0731 18:07:20.951780   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) Calling .GetSSHUsername
	I0731 18:07:20.952016   73696 main.go:141] libmachine: Using SSH client type: native
	I0731 18:07:20.952208   73696 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.72.197 22 <nil> <nil>}
	I0731 18:07:20.952220   73696 main.go:141] libmachine: About to run SSH command:
	hostname
	I0731 18:07:21.051082   73696 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0731 18:07:21.051137   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) Calling .GetMachineName
	I0731 18:07:21.051424   73696 buildroot.go:166] provisioning hostname "default-k8s-diff-port-094310"
	I0731 18:07:21.051454   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) Calling .GetMachineName
	I0731 18:07:21.051650   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) Calling .GetSSHHostname
	I0731 18:07:21.054527   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) DBG | domain default-k8s-diff-port-094310 has defined MAC address 52:54:00:a9:b2:ae in network mk-default-k8s-diff-port-094310
	I0731 18:07:21.054913   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a9:b2:ae", ip: ""} in network mk-default-k8s-diff-port-094310: {Iface:virbr1 ExpiryTime:2024-07-31 19:07:14 +0000 UTC Type:0 Mac:52:54:00:a9:b2:ae Iaid: IPaddr:192.168.72.197 Prefix:24 Hostname:default-k8s-diff-port-094310 Clientid:01:52:54:00:a9:b2:ae}
	I0731 18:07:21.054940   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) DBG | domain default-k8s-diff-port-094310 has defined IP address 192.168.72.197 and MAC address 52:54:00:a9:b2:ae in network mk-default-k8s-diff-port-094310
	I0731 18:07:21.055151   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) Calling .GetSSHPort
	I0731 18:07:21.055377   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) Calling .GetSSHKeyPath
	I0731 18:07:21.055516   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) Calling .GetSSHKeyPath
	I0731 18:07:21.055670   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) Calling .GetSSHUsername
	I0731 18:07:21.055838   73696 main.go:141] libmachine: Using SSH client type: native
	I0731 18:07:21.056037   73696 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.72.197 22 <nil> <nil>}
	I0731 18:07:21.056051   73696 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-094310 && echo "default-k8s-diff-port-094310" | sudo tee /etc/hostname
	I0731 18:07:22.127802   73800 start.go:364] duration metric: took 4m27.5245732s to acquireMachinesLock for "embed-certs-436067"
	I0731 18:07:22.127861   73800 start.go:96] Skipping create...Using existing machine configuration
	I0731 18:07:22.127871   73800 fix.go:54] fixHost starting: 
	I0731 18:07:22.128296   73800 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 18:07:22.128386   73800 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 18:07:22.144783   73800 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34313
	I0731 18:07:22.145111   73800 main.go:141] libmachine: () Calling .GetVersion
	I0731 18:07:22.145531   73800 main.go:141] libmachine: Using API Version  1
	I0731 18:07:22.145549   73800 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 18:07:22.145894   73800 main.go:141] libmachine: () Calling .GetMachineName
	I0731 18:07:22.146086   73800 main.go:141] libmachine: (embed-certs-436067) Calling .DriverName
	I0731 18:07:22.146226   73800 main.go:141] libmachine: (embed-certs-436067) Calling .GetState
	I0731 18:07:22.147718   73800 fix.go:112] recreateIfNeeded on embed-certs-436067: state=Stopped err=<nil>
	I0731 18:07:22.147737   73800 main.go:141] libmachine: (embed-certs-436067) Calling .DriverName
	W0731 18:07:22.147878   73800 fix.go:138] unexpected machine state, will restart: <nil>
	I0731 18:07:22.149896   73800 out.go:177] * Restarting existing kvm2 VM for "embed-certs-436067" ...
	I0731 18:07:21.168797   73696 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-094310
	
	I0731 18:07:21.168828   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) Calling .GetSSHHostname
	I0731 18:07:21.171672   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) DBG | domain default-k8s-diff-port-094310 has defined MAC address 52:54:00:a9:b2:ae in network mk-default-k8s-diff-port-094310
	I0731 18:07:21.172012   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a9:b2:ae", ip: ""} in network mk-default-k8s-diff-port-094310: {Iface:virbr1 ExpiryTime:2024-07-31 19:07:14 +0000 UTC Type:0 Mac:52:54:00:a9:b2:ae Iaid: IPaddr:192.168.72.197 Prefix:24 Hostname:default-k8s-diff-port-094310 Clientid:01:52:54:00:a9:b2:ae}
	I0731 18:07:21.172043   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) DBG | domain default-k8s-diff-port-094310 has defined IP address 192.168.72.197 and MAC address 52:54:00:a9:b2:ae in network mk-default-k8s-diff-port-094310
	I0731 18:07:21.172183   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) Calling .GetSSHPort
	I0731 18:07:21.172351   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) Calling .GetSSHKeyPath
	I0731 18:07:21.172510   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) Calling .GetSSHKeyPath
	I0731 18:07:21.172633   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) Calling .GetSSHUsername
	I0731 18:07:21.172800   73696 main.go:141] libmachine: Using SSH client type: native
	I0731 18:07:21.172976   73696 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.72.197 22 <nil> <nil>}
	I0731 18:07:21.173010   73696 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-094310' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-094310/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-094310' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0731 18:07:21.284583   73696 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0731 18:07:21.284610   73696 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19349-8084/.minikube CaCertPath:/home/jenkins/minikube-integration/19349-8084/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19349-8084/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19349-8084/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19349-8084/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19349-8084/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19349-8084/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19349-8084/.minikube}
	I0731 18:07:21.284633   73696 buildroot.go:174] setting up certificates
	I0731 18:07:21.284645   73696 provision.go:84] configureAuth start
	I0731 18:07:21.284656   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) Calling .GetMachineName
	I0731 18:07:21.284931   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) Calling .GetIP
	I0731 18:07:21.287526   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) DBG | domain default-k8s-diff-port-094310 has defined MAC address 52:54:00:a9:b2:ae in network mk-default-k8s-diff-port-094310
	I0731 18:07:21.287945   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a9:b2:ae", ip: ""} in network mk-default-k8s-diff-port-094310: {Iface:virbr1 ExpiryTime:2024-07-31 19:07:14 +0000 UTC Type:0 Mac:52:54:00:a9:b2:ae Iaid: IPaddr:192.168.72.197 Prefix:24 Hostname:default-k8s-diff-port-094310 Clientid:01:52:54:00:a9:b2:ae}
	I0731 18:07:21.287973   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) DBG | domain default-k8s-diff-port-094310 has defined IP address 192.168.72.197 and MAC address 52:54:00:a9:b2:ae in network mk-default-k8s-diff-port-094310
	I0731 18:07:21.288161   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) Calling .GetSSHHostname
	I0731 18:07:21.290169   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) DBG | domain default-k8s-diff-port-094310 has defined MAC address 52:54:00:a9:b2:ae in network mk-default-k8s-diff-port-094310
	I0731 18:07:21.290469   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a9:b2:ae", ip: ""} in network mk-default-k8s-diff-port-094310: {Iface:virbr1 ExpiryTime:2024-07-31 19:07:14 +0000 UTC Type:0 Mac:52:54:00:a9:b2:ae Iaid: IPaddr:192.168.72.197 Prefix:24 Hostname:default-k8s-diff-port-094310 Clientid:01:52:54:00:a9:b2:ae}
	I0731 18:07:21.290495   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) DBG | domain default-k8s-diff-port-094310 has defined IP address 192.168.72.197 and MAC address 52:54:00:a9:b2:ae in network mk-default-k8s-diff-port-094310
	I0731 18:07:21.290602   73696 provision.go:143] copyHostCerts
	I0731 18:07:21.290661   73696 exec_runner.go:144] found /home/jenkins/minikube-integration/19349-8084/.minikube/ca.pem, removing ...
	I0731 18:07:21.290673   73696 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19349-8084/.minikube/ca.pem
	I0731 18:07:21.290757   73696 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19349-8084/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19349-8084/.minikube/ca.pem (1082 bytes)
	I0731 18:07:21.290844   73696 exec_runner.go:144] found /home/jenkins/minikube-integration/19349-8084/.minikube/cert.pem, removing ...
	I0731 18:07:21.290856   73696 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19349-8084/.minikube/cert.pem
	I0731 18:07:21.290881   73696 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19349-8084/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19349-8084/.minikube/cert.pem (1123 bytes)
	I0731 18:07:21.290933   73696 exec_runner.go:144] found /home/jenkins/minikube-integration/19349-8084/.minikube/key.pem, removing ...
	I0731 18:07:21.290939   73696 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19349-8084/.minikube/key.pem
	I0731 18:07:21.290959   73696 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19349-8084/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19349-8084/.minikube/key.pem (1675 bytes)
	I0731 18:07:21.291005   73696 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19349-8084/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19349-8084/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19349-8084/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-094310 san=[127.0.0.1 192.168.72.197 default-k8s-diff-port-094310 localhost minikube]
	I0731 18:07:21.483241   73696 provision.go:177] copyRemoteCerts
	I0731 18:07:21.483314   73696 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0731 18:07:21.483343   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) Calling .GetSSHHostname
	I0731 18:07:21.486231   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) DBG | domain default-k8s-diff-port-094310 has defined MAC address 52:54:00:a9:b2:ae in network mk-default-k8s-diff-port-094310
	I0731 18:07:21.486619   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a9:b2:ae", ip: ""} in network mk-default-k8s-diff-port-094310: {Iface:virbr1 ExpiryTime:2024-07-31 19:07:14 +0000 UTC Type:0 Mac:52:54:00:a9:b2:ae Iaid: IPaddr:192.168.72.197 Prefix:24 Hostname:default-k8s-diff-port-094310 Clientid:01:52:54:00:a9:b2:ae}
	I0731 18:07:21.486659   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) DBG | domain default-k8s-diff-port-094310 has defined IP address 192.168.72.197 and MAC address 52:54:00:a9:b2:ae in network mk-default-k8s-diff-port-094310
	I0731 18:07:21.486850   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) Calling .GetSSHPort
	I0731 18:07:21.487084   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) Calling .GetSSHKeyPath
	I0731 18:07:21.487285   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) Calling .GetSSHUsername
	I0731 18:07:21.487443   73696 sshutil.go:53] new ssh client: &{IP:192.168.72.197 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19349-8084/.minikube/machines/default-k8s-diff-port-094310/id_rsa Username:docker}
	I0731 18:07:21.568564   73696 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19349-8084/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0731 18:07:21.598766   73696 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19349-8084/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I0731 18:07:21.621602   73696 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19349-8084/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0731 18:07:21.643361   73696 provision.go:87] duration metric: took 358.702982ms to configureAuth
	I0731 18:07:21.643393   73696 buildroot.go:189] setting minikube options for container-runtime
	I0731 18:07:21.643598   73696 config.go:182] Loaded profile config "default-k8s-diff-port-094310": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0731 18:07:21.643699   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) Calling .GetSSHHostname
	I0731 18:07:21.646487   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) DBG | domain default-k8s-diff-port-094310 has defined MAC address 52:54:00:a9:b2:ae in network mk-default-k8s-diff-port-094310
	I0731 18:07:21.646921   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a9:b2:ae", ip: ""} in network mk-default-k8s-diff-port-094310: {Iface:virbr1 ExpiryTime:2024-07-31 19:07:14 +0000 UTC Type:0 Mac:52:54:00:a9:b2:ae Iaid: IPaddr:192.168.72.197 Prefix:24 Hostname:default-k8s-diff-port-094310 Clientid:01:52:54:00:a9:b2:ae}
	I0731 18:07:21.646967   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) DBG | domain default-k8s-diff-port-094310 has defined IP address 192.168.72.197 and MAC address 52:54:00:a9:b2:ae in network mk-default-k8s-diff-port-094310
	I0731 18:07:21.647126   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) Calling .GetSSHPort
	I0731 18:07:21.647331   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) Calling .GetSSHKeyPath
	I0731 18:07:21.647527   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) Calling .GetSSHKeyPath
	I0731 18:07:21.647675   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) Calling .GetSSHUsername
	I0731 18:07:21.647879   73696 main.go:141] libmachine: Using SSH client type: native
	I0731 18:07:21.648051   73696 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.72.197 22 <nil> <nil>}
	I0731 18:07:21.648066   73696 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0731 18:07:21.896109   73696 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0731 18:07:21.896138   73696 machine.go:97] duration metric: took 948.216479ms to provisionDockerMachine
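	Note: the %!s(MISSING) and %!N(MISSING) tokens in the command logged above (and in "date +%!s(MISSING).%!N(MISSING)", the crictl.yaml write, and the evictionHard percentages further down) are almost certainly not part of the commands that ran on the guest: they look like Go fmt's "missing argument" notation, produced when a string containing literal % verbs is routed through a printf-style logging call. A short illustrative Go snippet reproduces the effect:

	package main

	import "fmt"

	func main() {
		cmd := `sudo mkdir -p /etc/sysconfig && printf %s "CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '"`
		// Passing a string that still contains a literal %s through a printf-style
		// call with no matching argument renders it as %!s(MISSING), matching what
		// the log shows for this command.
		fmt.Printf(cmd + "\n")
	}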
	I0731 18:07:21.896152   73696 start.go:293] postStartSetup for "default-k8s-diff-port-094310" (driver="kvm2")
	I0731 18:07:21.896166   73696 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0731 18:07:21.896185   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) Calling .DriverName
	I0731 18:07:21.896500   73696 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0731 18:07:21.896533   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) Calling .GetSSHHostname
	I0731 18:07:21.899447   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) DBG | domain default-k8s-diff-port-094310 has defined MAC address 52:54:00:a9:b2:ae in network mk-default-k8s-diff-port-094310
	I0731 18:07:21.899784   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a9:b2:ae", ip: ""} in network mk-default-k8s-diff-port-094310: {Iface:virbr1 ExpiryTime:2024-07-31 19:07:14 +0000 UTC Type:0 Mac:52:54:00:a9:b2:ae Iaid: IPaddr:192.168.72.197 Prefix:24 Hostname:default-k8s-diff-port-094310 Clientid:01:52:54:00:a9:b2:ae}
	I0731 18:07:21.899817   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) DBG | domain default-k8s-diff-port-094310 has defined IP address 192.168.72.197 and MAC address 52:54:00:a9:b2:ae in network mk-default-k8s-diff-port-094310
	I0731 18:07:21.899936   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) Calling .GetSSHPort
	I0731 18:07:21.900136   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) Calling .GetSSHKeyPath
	I0731 18:07:21.900268   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) Calling .GetSSHUsername
	I0731 18:07:21.900415   73696 sshutil.go:53] new ssh client: &{IP:192.168.72.197 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19349-8084/.minikube/machines/default-k8s-diff-port-094310/id_rsa Username:docker}
	I0731 18:07:21.981347   73696 ssh_runner.go:195] Run: cat /etc/os-release
	I0731 18:07:21.985297   73696 info.go:137] Remote host: Buildroot 2023.02.9
	I0731 18:07:21.985324   73696 filesync.go:126] Scanning /home/jenkins/minikube-integration/19349-8084/.minikube/addons for local assets ...
	I0731 18:07:21.985397   73696 filesync.go:126] Scanning /home/jenkins/minikube-integration/19349-8084/.minikube/files for local assets ...
	I0731 18:07:21.985513   73696 filesync.go:149] local asset: /home/jenkins/minikube-integration/19349-8084/.minikube/files/etc/ssl/certs/152592.pem -> 152592.pem in /etc/ssl/certs
	I0731 18:07:21.985646   73696 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0731 18:07:21.994700   73696 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19349-8084/.minikube/files/etc/ssl/certs/152592.pem --> /etc/ssl/certs/152592.pem (1708 bytes)
	I0731 18:07:22.022005   73696 start.go:296] duration metric: took 125.838186ms for postStartSetup
	I0731 18:07:22.022052   73696 fix.go:56] duration metric: took 17.605858897s for fixHost
	I0731 18:07:22.022075   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) Calling .GetSSHHostname
	I0731 18:07:22.025151   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) DBG | domain default-k8s-diff-port-094310 has defined MAC address 52:54:00:a9:b2:ae in network mk-default-k8s-diff-port-094310
	I0731 18:07:22.025445   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a9:b2:ae", ip: ""} in network mk-default-k8s-diff-port-094310: {Iface:virbr1 ExpiryTime:2024-07-31 19:07:14 +0000 UTC Type:0 Mac:52:54:00:a9:b2:ae Iaid: IPaddr:192.168.72.197 Prefix:24 Hostname:default-k8s-diff-port-094310 Clientid:01:52:54:00:a9:b2:ae}
	I0731 18:07:22.025478   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) DBG | domain default-k8s-diff-port-094310 has defined IP address 192.168.72.197 and MAC address 52:54:00:a9:b2:ae in network mk-default-k8s-diff-port-094310
	I0731 18:07:22.025622   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) Calling .GetSSHPort
	I0731 18:07:22.025829   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) Calling .GetSSHKeyPath
	I0731 18:07:22.026023   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) Calling .GetSSHKeyPath
	I0731 18:07:22.026199   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) Calling .GetSSHUsername
	I0731 18:07:22.026390   73696 main.go:141] libmachine: Using SSH client type: native
	I0731 18:07:22.026632   73696 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.72.197 22 <nil> <nil>}
	I0731 18:07:22.026653   73696 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0731 18:07:22.127643   73696 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722449242.103036947
	
	I0731 18:07:22.127668   73696 fix.go:216] guest clock: 1722449242.103036947
	I0731 18:07:22.127675   73696 fix.go:229] Guest: 2024-07-31 18:07:22.103036947 +0000 UTC Remote: 2024-07-31 18:07:22.022056299 +0000 UTC m=+275.995802468 (delta=80.980648ms)
	I0731 18:07:22.127698   73696 fix.go:200] guest clock delta is within tolerance: 80.980648ms
	I0731 18:07:22.127704   73696 start.go:83] releasing machines lock for "default-k8s-diff-port-094310", held for 17.711543911s
	I0731 18:07:22.127735   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) Calling .DriverName
	I0731 18:07:22.128006   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) Calling .GetIP
	I0731 18:07:22.130905   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) DBG | domain default-k8s-diff-port-094310 has defined MAC address 52:54:00:a9:b2:ae in network mk-default-k8s-diff-port-094310
	I0731 18:07:22.131291   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a9:b2:ae", ip: ""} in network mk-default-k8s-diff-port-094310: {Iface:virbr1 ExpiryTime:2024-07-31 19:07:14 +0000 UTC Type:0 Mac:52:54:00:a9:b2:ae Iaid: IPaddr:192.168.72.197 Prefix:24 Hostname:default-k8s-diff-port-094310 Clientid:01:52:54:00:a9:b2:ae}
	I0731 18:07:22.131322   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) DBG | domain default-k8s-diff-port-094310 has defined IP address 192.168.72.197 and MAC address 52:54:00:a9:b2:ae in network mk-default-k8s-diff-port-094310
	I0731 18:07:22.131568   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) Calling .DriverName
	I0731 18:07:22.132072   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) Calling .DriverName
	I0731 18:07:22.132244   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) Calling .DriverName
	I0731 18:07:22.132334   73696 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0731 18:07:22.132373   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) Calling .GetSSHHostname
	I0731 18:07:22.132488   73696 ssh_runner.go:195] Run: cat /version.json
	I0731 18:07:22.132511   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) Calling .GetSSHHostname
	I0731 18:07:22.134976   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) DBG | domain default-k8s-diff-port-094310 has defined MAC address 52:54:00:a9:b2:ae in network mk-default-k8s-diff-port-094310
	I0731 18:07:22.135269   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) DBG | domain default-k8s-diff-port-094310 has defined MAC address 52:54:00:a9:b2:ae in network mk-default-k8s-diff-port-094310
	I0731 18:07:22.135350   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a9:b2:ae", ip: ""} in network mk-default-k8s-diff-port-094310: {Iface:virbr1 ExpiryTime:2024-07-31 19:07:14 +0000 UTC Type:0 Mac:52:54:00:a9:b2:ae Iaid: IPaddr:192.168.72.197 Prefix:24 Hostname:default-k8s-diff-port-094310 Clientid:01:52:54:00:a9:b2:ae}
	I0731 18:07:22.135386   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) DBG | domain default-k8s-diff-port-094310 has defined IP address 192.168.72.197 and MAC address 52:54:00:a9:b2:ae in network mk-default-k8s-diff-port-094310
	I0731 18:07:22.135583   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) Calling .GetSSHPort
	I0731 18:07:22.135702   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a9:b2:ae", ip: ""} in network mk-default-k8s-diff-port-094310: {Iface:virbr1 ExpiryTime:2024-07-31 19:07:14 +0000 UTC Type:0 Mac:52:54:00:a9:b2:ae Iaid: IPaddr:192.168.72.197 Prefix:24 Hostname:default-k8s-diff-port-094310 Clientid:01:52:54:00:a9:b2:ae}
	I0731 18:07:22.135735   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) DBG | domain default-k8s-diff-port-094310 has defined IP address 192.168.72.197 and MAC address 52:54:00:a9:b2:ae in network mk-default-k8s-diff-port-094310
	I0731 18:07:22.135751   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) Calling .GetSSHKeyPath
	I0731 18:07:22.135837   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) Calling .GetSSHPort
	I0731 18:07:22.135945   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) Calling .GetSSHUsername
	I0731 18:07:22.135966   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) Calling .GetSSHKeyPath
	I0731 18:07:22.136068   73696 sshutil.go:53] new ssh client: &{IP:192.168.72.197 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19349-8084/.minikube/machines/default-k8s-diff-port-094310/id_rsa Username:docker}
	I0731 18:07:22.136101   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) Calling .GetSSHUsername
	I0731 18:07:22.136246   73696 sshutil.go:53] new ssh client: &{IP:192.168.72.197 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19349-8084/.minikube/machines/default-k8s-diff-port-094310/id_rsa Username:docker}
	I0731 18:07:22.245752   73696 ssh_runner.go:195] Run: systemctl --version
	I0731 18:07:22.251574   73696 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0731 18:07:22.391398   73696 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0731 18:07:22.396765   73696 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0731 18:07:22.396842   73696 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0731 18:07:22.412102   73696 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0731 18:07:22.412119   73696 start.go:495] detecting cgroup driver to use...
	I0731 18:07:22.412170   73696 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0731 18:07:22.427198   73696 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0731 18:07:22.441511   73696 docker.go:217] disabling cri-docker service (if available) ...
	I0731 18:07:22.441589   73696 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0731 18:07:22.455498   73696 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0731 18:07:22.469702   73696 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0731 18:07:22.584218   73696 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0731 18:07:22.719105   73696 docker.go:233] disabling docker service ...
	I0731 18:07:22.719195   73696 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0731 18:07:22.733625   73696 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0731 18:07:22.746500   73696 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0731 18:07:22.893624   73696 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0731 18:07:23.012965   73696 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0731 18:07:23.027132   73696 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0731 18:07:23.044766   73696 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0731 18:07:23.044832   73696 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 18:07:23.054276   73696 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0731 18:07:23.054363   73696 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 18:07:23.063873   73696 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 18:07:23.073392   73696 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 18:07:23.082908   73696 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0731 18:07:23.093468   73696 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 18:07:23.103419   73696 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 18:07:23.119920   73696 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 18:07:23.130427   73696 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0731 18:07:23.139397   73696 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0731 18:07:23.139465   73696 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0731 18:07:23.152275   73696 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
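	Note: the failed sysctl above is expected on a freshly restarted guest: /proc/sys/net/bridge/ only appears once the br_netfilter module is loaded, so the provisioner falls back to modprobe and then enables IPv4 forwarding. A minimal Go sketch mirroring that fallback (illustrative only, not minikube's actual code):

	package main

	import (
		"log"
		"os/exec"
	)

	func main() {
		// The sysctl key only exists after br_netfilter is loaded, so fall back to
		// modprobe when the first check fails, as the runner commands above do.
		if err := exec.Command("sudo", "sysctl", "net.bridge.bridge-nf-call-iptables").Run(); err != nil {
			if err := exec.Command("sudo", "modprobe", "br_netfilter").Run(); err != nil {
				log.Fatal(err)
			}
		}
		// Pod networking also needs IPv4 forwarding enabled.
		if err := exec.Command("sudo", "sh", "-c", "echo 1 > /proc/sys/net/ipv4/ip_forward").Run(); err != nil {
			log.Fatal(err)
		}
	}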
	I0731 18:07:23.162439   73696 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0731 18:07:23.280030   73696 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0731 18:07:23.412019   73696 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0731 18:07:23.412083   73696 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0731 18:07:23.416884   73696 start.go:563] Will wait 60s for crictl version
	I0731 18:07:23.416930   73696 ssh_runner.go:195] Run: which crictl
	I0731 18:07:23.420518   73696 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0731 18:07:23.458895   73696 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0731 18:07:23.458976   73696 ssh_runner.go:195] Run: crio --version
	I0731 18:07:23.486961   73696 ssh_runner.go:195] Run: crio --version
	I0731 18:07:23.519648   73696 out.go:177] * Preparing Kubernetes v1.30.3 on CRI-O 1.29.1 ...
	I0731 18:07:22.151159   73800 main.go:141] libmachine: (embed-certs-436067) Calling .Start
	I0731 18:07:22.151319   73800 main.go:141] libmachine: (embed-certs-436067) Ensuring networks are active...
	I0731 18:07:22.151951   73800 main.go:141] libmachine: (embed-certs-436067) Ensuring network default is active
	I0731 18:07:22.152245   73800 main.go:141] libmachine: (embed-certs-436067) Ensuring network mk-embed-certs-436067 is active
	I0731 18:07:22.152747   73800 main.go:141] libmachine: (embed-certs-436067) Getting domain xml...
	I0731 18:07:22.153446   73800 main.go:141] libmachine: (embed-certs-436067) Creating domain...
	I0731 18:07:23.410530   73800 main.go:141] libmachine: (embed-certs-436067) Waiting to get IP...
	I0731 18:07:23.411687   73800 main.go:141] libmachine: (embed-certs-436067) DBG | domain embed-certs-436067 has defined MAC address 52:54:00:87:1e:25 in network mk-embed-certs-436067
	I0731 18:07:23.412152   73800 main.go:141] libmachine: (embed-certs-436067) DBG | unable to find current IP address of domain embed-certs-436067 in network mk-embed-certs-436067
	I0731 18:07:23.412231   73800 main.go:141] libmachine: (embed-certs-436067) DBG | I0731 18:07:23.412133   74994 retry.go:31] will retry after 233.281104ms: waiting for machine to come up
	I0731 18:07:23.646659   73800 main.go:141] libmachine: (embed-certs-436067) DBG | domain embed-certs-436067 has defined MAC address 52:54:00:87:1e:25 in network mk-embed-certs-436067
	I0731 18:07:23.647147   73800 main.go:141] libmachine: (embed-certs-436067) DBG | unable to find current IP address of domain embed-certs-436067 in network mk-embed-certs-436067
	I0731 18:07:23.647174   73800 main.go:141] libmachine: (embed-certs-436067) DBG | I0731 18:07:23.647069   74994 retry.go:31] will retry after 307.068766ms: waiting for machine to come up
	I0731 18:07:23.955614   73800 main.go:141] libmachine: (embed-certs-436067) DBG | domain embed-certs-436067 has defined MAC address 52:54:00:87:1e:25 in network mk-embed-certs-436067
	I0731 18:07:23.956140   73800 main.go:141] libmachine: (embed-certs-436067) DBG | unable to find current IP address of domain embed-certs-436067 in network mk-embed-certs-436067
	I0731 18:07:23.956166   73800 main.go:141] libmachine: (embed-certs-436067) DBG | I0731 18:07:23.956094   74994 retry.go:31] will retry after 410.095032ms: waiting for machine to come up
	I0731 18:07:24.367793   73800 main.go:141] libmachine: (embed-certs-436067) DBG | domain embed-certs-436067 has defined MAC address 52:54:00:87:1e:25 in network mk-embed-certs-436067
	I0731 18:07:24.368231   73800 main.go:141] libmachine: (embed-certs-436067) DBG | unable to find current IP address of domain embed-certs-436067 in network mk-embed-certs-436067
	I0731 18:07:24.368264   73800 main.go:141] libmachine: (embed-certs-436067) DBG | I0731 18:07:24.368188   74994 retry.go:31] will retry after 366.242055ms: waiting for machine to come up
	I0731 18:07:23.520927   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) Calling .GetIP
	I0731 18:07:23.524167   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) DBG | domain default-k8s-diff-port-094310 has defined MAC address 52:54:00:a9:b2:ae in network mk-default-k8s-diff-port-094310
	I0731 18:07:23.524615   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a9:b2:ae", ip: ""} in network mk-default-k8s-diff-port-094310: {Iface:virbr1 ExpiryTime:2024-07-31 19:07:14 +0000 UTC Type:0 Mac:52:54:00:a9:b2:ae Iaid: IPaddr:192.168.72.197 Prefix:24 Hostname:default-k8s-diff-port-094310 Clientid:01:52:54:00:a9:b2:ae}
	I0731 18:07:23.524663   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) DBG | domain default-k8s-diff-port-094310 has defined IP address 192.168.72.197 and MAC address 52:54:00:a9:b2:ae in network mk-default-k8s-diff-port-094310
	I0731 18:07:23.524913   73696 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0731 18:07:23.528924   73696 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0731 18:07:23.540496   73696 kubeadm.go:883] updating cluster {Name:default-k8s-diff-port-094310 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:default-k8s-diff-port-094310 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.197 Port:8444 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0731 18:07:23.540633   73696 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0731 18:07:23.540681   73696 ssh_runner.go:195] Run: sudo crictl images --output json
	I0731 18:07:23.579224   73696 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.3". assuming images are not preloaded.
	I0731 18:07:23.579295   73696 ssh_runner.go:195] Run: which lz4
	I0731 18:07:23.583060   73696 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0731 18:07:23.586888   73696 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0731 18:07:23.586922   73696 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19349-8084/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (406200976 bytes)
	I0731 18:07:24.864241   73696 crio.go:462] duration metric: took 1.281254602s to copy over tarball
	I0731 18:07:24.864321   73696 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0731 18:07:24.735741   73800 main.go:141] libmachine: (embed-certs-436067) DBG | domain embed-certs-436067 has defined MAC address 52:54:00:87:1e:25 in network mk-embed-certs-436067
	I0731 18:07:24.736325   73800 main.go:141] libmachine: (embed-certs-436067) DBG | unable to find current IP address of domain embed-certs-436067 in network mk-embed-certs-436067
	I0731 18:07:24.736356   73800 main.go:141] libmachine: (embed-certs-436067) DBG | I0731 18:07:24.736275   74994 retry.go:31] will retry after 593.179812ms: waiting for machine to come up
	I0731 18:07:25.331004   73800 main.go:141] libmachine: (embed-certs-436067) DBG | domain embed-certs-436067 has defined MAC address 52:54:00:87:1e:25 in network mk-embed-certs-436067
	I0731 18:07:25.331406   73800 main.go:141] libmachine: (embed-certs-436067) DBG | unable to find current IP address of domain embed-certs-436067 in network mk-embed-certs-436067
	I0731 18:07:25.331470   73800 main.go:141] libmachine: (embed-certs-436067) DBG | I0731 18:07:25.331381   74994 retry.go:31] will retry after 778.352855ms: waiting for machine to come up
	I0731 18:07:26.111327   73800 main.go:141] libmachine: (embed-certs-436067) DBG | domain embed-certs-436067 has defined MAC address 52:54:00:87:1e:25 in network mk-embed-certs-436067
	I0731 18:07:26.111828   73800 main.go:141] libmachine: (embed-certs-436067) DBG | unable to find current IP address of domain embed-certs-436067 in network mk-embed-certs-436067
	I0731 18:07:26.111855   73800 main.go:141] libmachine: (embed-certs-436067) DBG | I0731 18:07:26.111757   74994 retry.go:31] will retry after 993.157171ms: waiting for machine to come up
	I0731 18:07:27.106111   73800 main.go:141] libmachine: (embed-certs-436067) DBG | domain embed-certs-436067 has defined MAC address 52:54:00:87:1e:25 in network mk-embed-certs-436067
	I0731 18:07:27.106543   73800 main.go:141] libmachine: (embed-certs-436067) DBG | unable to find current IP address of domain embed-certs-436067 in network mk-embed-certs-436067
	I0731 18:07:27.106574   73800 main.go:141] libmachine: (embed-certs-436067) DBG | I0731 18:07:27.106507   74994 retry.go:31] will retry after 963.581879ms: waiting for machine to come up
	I0731 18:07:28.072100   73800 main.go:141] libmachine: (embed-certs-436067) DBG | domain embed-certs-436067 has defined MAC address 52:54:00:87:1e:25 in network mk-embed-certs-436067
	I0731 18:07:28.072628   73800 main.go:141] libmachine: (embed-certs-436067) DBG | unable to find current IP address of domain embed-certs-436067 in network mk-embed-certs-436067
	I0731 18:07:28.072657   73800 main.go:141] libmachine: (embed-certs-436067) DBG | I0731 18:07:28.072560   74994 retry.go:31] will retry after 1.608497907s: waiting for machine to come up
	I0731 18:07:27.052512   73696 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.188157854s)
	I0731 18:07:27.052542   73696 crio.go:469] duration metric: took 2.188269884s to extract the tarball
	I0731 18:07:27.052557   73696 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0731 18:07:27.089250   73696 ssh_runner.go:195] Run: sudo crictl images --output json
	I0731 18:07:27.130507   73696 crio.go:514] all images are preloaded for cri-o runtime.
	I0731 18:07:27.130536   73696 cache_images.go:84] Images are preloaded, skipping loading
	I0731 18:07:27.130546   73696 kubeadm.go:934] updating node { 192.168.72.197 8444 v1.30.3 crio true true} ...
	I0731 18:07:27.130666   73696 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=default-k8s-diff-port-094310 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.197
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:default-k8s-diff-port-094310 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0731 18:07:27.130751   73696 ssh_runner.go:195] Run: crio config
	I0731 18:07:27.176571   73696 cni.go:84] Creating CNI manager for ""
	I0731 18:07:27.176598   73696 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0731 18:07:27.176614   73696 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0731 18:07:27.176640   73696 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.197 APIServerPort:8444 KubernetesVersion:v1.30.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-094310 NodeName:default-k8s-diff-port-094310 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.197"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.197 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0731 18:07:27.176821   73696 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.197
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-094310"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.197
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.197"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0731 18:07:27.176904   73696 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0731 18:07:27.186582   73696 binaries.go:44] Found k8s binaries, skipping transfer
	I0731 18:07:27.186647   73696 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0731 18:07:27.195571   73696 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I0731 18:07:27.211103   73696 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0731 18:07:27.226226   73696 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2172 bytes)
	I0731 18:07:27.241763   73696 ssh_runner.go:195] Run: grep 192.168.72.197	control-plane.minikube.internal$ /etc/hosts
	I0731 18:07:27.245286   73696 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.197	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0731 18:07:27.256317   73696 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0731 18:07:27.377904   73696 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0731 18:07:27.394151   73696 certs.go:68] Setting up /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/default-k8s-diff-port-094310 for IP: 192.168.72.197
	I0731 18:07:27.394181   73696 certs.go:194] generating shared ca certs ...
	I0731 18:07:27.394201   73696 certs.go:226] acquiring lock for ca certs: {Name:mk49eed439085f9c30746706460ce213570d1997 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 18:07:27.394382   73696 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19349-8084/.minikube/ca.key
	I0731 18:07:27.394451   73696 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19349-8084/.minikube/proxy-client-ca.key
	I0731 18:07:27.394465   73696 certs.go:256] generating profile certs ...
	I0731 18:07:27.394577   73696 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/default-k8s-diff-port-094310/client.key
	I0731 18:07:27.394656   73696 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/default-k8s-diff-port-094310/apiserver.key.5264b27d
	I0731 18:07:27.394703   73696 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/default-k8s-diff-port-094310/proxy-client.key
	I0731 18:07:27.394851   73696 certs.go:484] found cert: /home/jenkins/minikube-integration/19349-8084/.minikube/certs/15259.pem (1338 bytes)
	W0731 18:07:27.394896   73696 certs.go:480] ignoring /home/jenkins/minikube-integration/19349-8084/.minikube/certs/15259_empty.pem, impossibly tiny 0 bytes
	I0731 18:07:27.394908   73696 certs.go:484] found cert: /home/jenkins/minikube-integration/19349-8084/.minikube/certs/ca-key.pem (1675 bytes)
	I0731 18:07:27.394935   73696 certs.go:484] found cert: /home/jenkins/minikube-integration/19349-8084/.minikube/certs/ca.pem (1082 bytes)
	I0731 18:07:27.394969   73696 certs.go:484] found cert: /home/jenkins/minikube-integration/19349-8084/.minikube/certs/cert.pem (1123 bytes)
	I0731 18:07:27.394990   73696 certs.go:484] found cert: /home/jenkins/minikube-integration/19349-8084/.minikube/certs/key.pem (1675 bytes)
	I0731 18:07:27.395028   73696 certs.go:484] found cert: /home/jenkins/minikube-integration/19349-8084/.minikube/files/etc/ssl/certs/152592.pem (1708 bytes)
	I0731 18:07:27.395749   73696 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19349-8084/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0731 18:07:27.425292   73696 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19349-8084/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0731 18:07:27.452753   73696 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19349-8084/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0731 18:07:27.481508   73696 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19349-8084/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0731 18:07:27.506990   73696 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/default-k8s-diff-port-094310/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0731 18:07:27.544385   73696 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/default-k8s-diff-port-094310/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0731 18:07:27.572947   73696 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/default-k8s-diff-port-094310/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0731 18:07:27.597895   73696 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/default-k8s-diff-port-094310/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0731 18:07:27.619324   73696 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19349-8084/.minikube/files/etc/ssl/certs/152592.pem --> /usr/share/ca-certificates/152592.pem (1708 bytes)
	I0731 18:07:27.641000   73696 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19349-8084/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0731 18:07:27.662483   73696 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19349-8084/.minikube/certs/15259.pem --> /usr/share/ca-certificates/15259.pem (1338 bytes)
	I0731 18:07:27.684400   73696 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0731 18:07:27.700058   73696 ssh_runner.go:195] Run: openssl version
	I0731 18:07:27.705637   73696 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/152592.pem && ln -fs /usr/share/ca-certificates/152592.pem /etc/ssl/certs/152592.pem"
	I0731 18:07:27.715558   73696 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/152592.pem
	I0731 18:07:27.719545   73696 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 31 16:54 /usr/share/ca-certificates/152592.pem
	I0731 18:07:27.719611   73696 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/152592.pem
	I0731 18:07:27.725076   73696 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/152592.pem /etc/ssl/certs/3ec20f2e.0"
	I0731 18:07:27.736589   73696 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0731 18:07:27.747908   73696 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0731 18:07:27.752392   73696 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 31 16:42 /usr/share/ca-certificates/minikubeCA.pem
	I0731 18:07:27.752448   73696 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0731 18:07:27.757939   73696 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0731 18:07:27.769571   73696 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/15259.pem && ln -fs /usr/share/ca-certificates/15259.pem /etc/ssl/certs/15259.pem"
	I0731 18:07:27.780730   73696 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/15259.pem
	I0731 18:07:27.785059   73696 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 31 16:54 /usr/share/ca-certificates/15259.pem
	I0731 18:07:27.785112   73696 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/15259.pem
	I0731 18:07:27.790477   73696 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/15259.pem /etc/ssl/certs/51391683.0"
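Each certificate copied to /usr/share/ca-certificates is hashed with `openssl x509 -hash` and then symlinked as /etc/ssl/certs/<hash>.0 so OpenSSL can find it by subject hash. A rough Go sketch of that pairing, shelling out to openssl just as the ssh_runner lines above do; the function name and example paths are assumptions for illustration:

	package main

	import (
		"fmt"
		"log"
		"os"
		"os/exec"
		"path/filepath"
		"strings"
	)

	// linkBySubjectHash creates <certsDir>/<hash>.0 pointing at certPath, mirroring the
	// "openssl x509 -hash -noout -in ..." + "ln -fs" steps in the log.
	func linkBySubjectHash(certPath, certsDir string) error {
		out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
		if err != nil {
			return fmt.Errorf("hashing %s: %w", certPath, err)
		}
		hash := strings.TrimSpace(string(out))
		link := filepath.Join(certsDir, hash+".0")
		_ = os.Remove(link) // replace any stale link, like ln -fs
		return os.Symlink(certPath, link)
	}

	func main() {
		// Placeholder input; the run above links files such as minikubeCA.pem and 152592.pem.
		if err := linkBySubjectHash("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
			log.Fatal(err)
		}
	}
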
	I0731 18:07:27.801519   73696 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0731 18:07:27.805654   73696 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0731 18:07:27.811381   73696 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0731 18:07:27.816786   73696 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0731 18:07:27.822643   73696 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0731 18:07:27.828371   73696 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0731 18:07:27.833908   73696 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
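`openssl x509 -checkend 86400` exits non-zero when a certificate becomes invalid within the next 24 hours, which is how this run decides the existing control-plane certs are still usable. A small in-process equivalent using Go's crypto/x509; the file name is a placeholder, not a path from this run:

	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"log"
		"os"
		"time"
	)

	// expiresWithin reports whether the first certificate in the PEM file becomes
	// invalid within d (the -checkend semantics).
	func expiresWithin(path string, d time.Duration) (bool, error) {
		data, err := os.ReadFile(path)
		if err != nil {
			return false, err
		}
		block, _ := pem.Decode(data)
		if block == nil {
			return false, fmt.Errorf("no PEM block in %s", path)
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			return false, err
		}
		return time.Now().Add(d).After(cert.NotAfter), nil
	}

	func main() {
		soon, err := expiresWithin("apiserver-kubelet-client.crt", 24*time.Hour)
		if err != nil {
			log.Fatal(err)
		}
		fmt.Println("expires within 24h:", soon)
	}
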
	I0731 18:07:27.839455   73696 kubeadm.go:392] StartCluster: {Name:default-k8s-diff-port-094310 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:default-k8s-diff-port-094310 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.197 Port:8444 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0731 18:07:27.839537   73696 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0731 18:07:27.839605   73696 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0731 18:07:27.882993   73696 cri.go:89] found id: ""
	I0731 18:07:27.883055   73696 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0731 18:07:27.894363   73696 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0731 18:07:27.894386   73696 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0731 18:07:27.894431   73696 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0731 18:07:27.905192   73696 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0731 18:07:27.906138   73696 kubeconfig.go:125] found "default-k8s-diff-port-094310" server: "https://192.168.72.197:8444"
	I0731 18:07:27.908339   73696 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0731 18:07:27.918565   73696 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.72.197
	I0731 18:07:27.918603   73696 kubeadm.go:1160] stopping kube-system containers ...
	I0731 18:07:27.918613   73696 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0731 18:07:27.918663   73696 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0731 18:07:27.955675   73696 cri.go:89] found id: ""
	I0731 18:07:27.955744   73696 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0731 18:07:27.972234   73696 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0731 18:07:27.981273   73696 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0731 18:07:27.981289   73696 kubeadm.go:157] found existing configuration files:
	
	I0731 18:07:27.981323   73696 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0731 18:07:27.989775   73696 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0731 18:07:27.989837   73696 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0731 18:07:27.998816   73696 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0731 18:07:28.007142   73696 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0731 18:07:28.007197   73696 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0731 18:07:28.016124   73696 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0731 18:07:28.024471   73696 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0731 18:07:28.024519   73696 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0731 18:07:28.033105   73696 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0731 18:07:28.041306   73696 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0731 18:07:28.041355   73696 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0731 18:07:28.049958   73696 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0731 18:07:28.058718   73696 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0731 18:07:28.167720   73696 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0731 18:07:29.013539   73696 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0731 18:07:29.225696   73696 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0731 18:07:29.300822   73696 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0731 18:07:29.403471   73696 api_server.go:52] waiting for apiserver process to appear ...
	I0731 18:07:29.403567   73696 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:07:29.903755   73696 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:07:30.403896   73696 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:07:30.904160   73696 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:07:29.683622   73800 main.go:141] libmachine: (embed-certs-436067) DBG | domain embed-certs-436067 has defined MAC address 52:54:00:87:1e:25 in network mk-embed-certs-436067
	I0731 18:07:29.684148   73800 main.go:141] libmachine: (embed-certs-436067) DBG | unable to find current IP address of domain embed-certs-436067 in network mk-embed-certs-436067
	I0731 18:07:29.684180   73800 main.go:141] libmachine: (embed-certs-436067) DBG | I0731 18:07:29.684088   74994 retry.go:31] will retry after 1.813922887s: waiting for machine to come up
	I0731 18:07:31.500225   73800 main.go:141] libmachine: (embed-certs-436067) DBG | domain embed-certs-436067 has defined MAC address 52:54:00:87:1e:25 in network mk-embed-certs-436067
	I0731 18:07:31.500738   73800 main.go:141] libmachine: (embed-certs-436067) DBG | unable to find current IP address of domain embed-certs-436067 in network mk-embed-certs-436067
	I0731 18:07:31.500769   73800 main.go:141] libmachine: (embed-certs-436067) DBG | I0731 18:07:31.500694   74994 retry.go:31] will retry after 2.381670698s: waiting for machine to come up
	I0731 18:07:33.884129   73800 main.go:141] libmachine: (embed-certs-436067) DBG | domain embed-certs-436067 has defined MAC address 52:54:00:87:1e:25 in network mk-embed-certs-436067
	I0731 18:07:33.884564   73800 main.go:141] libmachine: (embed-certs-436067) DBG | unable to find current IP address of domain embed-certs-436067 in network mk-embed-certs-436067
	I0731 18:07:33.884587   73800 main.go:141] libmachine: (embed-certs-436067) DBG | I0731 18:07:33.884539   74994 retry.go:31] will retry after 3.269400744s: waiting for machine to come up
	I0731 18:07:31.404093   73696 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:07:31.417483   73696 api_server.go:72] duration metric: took 2.014013675s to wait for apiserver process to appear ...
	I0731 18:07:31.417511   73696 api_server.go:88] waiting for apiserver healthz status ...
	I0731 18:07:31.417533   73696 api_server.go:253] Checking apiserver healthz at https://192.168.72.197:8444/healthz ...
	I0731 18:07:34.340211   73696 api_server.go:279] https://192.168.72.197:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0731 18:07:34.340240   73696 api_server.go:103] status: https://192.168.72.197:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0731 18:07:34.340274   73696 api_server.go:253] Checking apiserver healthz at https://192.168.72.197:8444/healthz ...
	I0731 18:07:34.426446   73696 api_server.go:279] https://192.168.72.197:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0731 18:07:34.426504   73696 api_server.go:103] status: https://192.168.72.197:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0731 18:07:34.426522   73696 api_server.go:253] Checking apiserver healthz at https://192.168.72.197:8444/healthz ...
	I0731 18:07:34.436383   73696 api_server.go:279] https://192.168.72.197:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0731 18:07:34.436416   73696 api_server.go:103] status: https://192.168.72.197:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0731 18:07:34.918371   73696 api_server.go:253] Checking apiserver healthz at https://192.168.72.197:8444/healthz ...
	I0731 18:07:34.922668   73696 api_server.go:279] https://192.168.72.197:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0731 18:07:34.922699   73696 api_server.go:103] status: https://192.168.72.197:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0731 18:07:35.418265   73696 api_server.go:253] Checking apiserver healthz at https://192.168.72.197:8444/healthz ...
	I0731 18:07:35.435931   73696 api_server.go:279] https://192.168.72.197:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0731 18:07:35.435966   73696 api_server.go:103] status: https://192.168.72.197:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0731 18:07:35.918570   73696 api_server.go:253] Checking apiserver healthz at https://192.168.72.197:8444/healthz ...
	I0731 18:07:35.923674   73696 api_server.go:279] https://192.168.72.197:8444/healthz returned 200:
	ok
	I0731 18:07:35.929781   73696 api_server.go:141] control plane version: v1.30.3
	I0731 18:07:35.929809   73696 api_server.go:131] duration metric: took 4.512290009s to wait for apiserver health ...
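The wait loop above keeps probing https://192.168.72.197:8444/healthz, treating the 403 (anonymous request) and 500 (post-start hooks still pending) responses as retryable until a plain 200/ok comes back. A bare-bones sketch of such a poller; the timeouts and the decision to skip TLS verification are assumptions for the example, not minikube's actual api_server.go client setup:

	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"log"
		"net/http"
		"time"
	)

	func main() {
		// Address taken from the log; adjust for your cluster.
		url := "https://192.168.72.197:8444/healthz"
		client := &http.Client{
			Timeout: 5 * time.Second,
			Transport: &http.Transport{
				// The probe only cares about liveness, not server identity.
				TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
			},
		}
		deadline := time.Now().Add(2 * time.Minute)
		for time.Now().Before(deadline) {
			resp, err := client.Get(url)
			if err == nil {
				body, _ := io.ReadAll(resp.Body)
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					fmt.Println("healthz:", string(body))
					return
				}
				log.Printf("healthz returned %d, retrying", resp.StatusCode)
			} else {
				log.Printf("healthz not reachable yet: %v", err)
			}
			time.Sleep(500 * time.Millisecond)
		}
		log.Fatal("apiserver never became healthy")
	}
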
	I0731 18:07:35.929820   73696 cni.go:84] Creating CNI manager for ""
	I0731 18:07:35.929827   73696 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0731 18:07:35.931827   73696 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0731 18:07:35.933104   73696 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0731 18:07:35.943548   73696 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0731 18:07:35.961932   73696 system_pods.go:43] waiting for kube-system pods to appear ...
	I0731 18:07:35.977855   73696 system_pods.go:59] 8 kube-system pods found
	I0731 18:07:35.977894   73696 system_pods.go:61] "coredns-7db6d8ff4d-kvxmb" [df8cf19b-5e62-4c38-9124-3257fea48fbb] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0731 18:07:35.977905   73696 system_pods.go:61] "etcd-default-k8s-diff-port-094310" [fe526f06-bd6c-4708-a0f3-e49b731e3a61] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0731 18:07:35.977915   73696 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-094310" [f0191941-87ad-4934-a02a-75b07649d5dc] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0731 18:07:35.977924   73696 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-094310" [28b4bdc4-4eea-41c0-9182-b07034d7363e] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0731 18:07:35.977936   73696 system_pods.go:61] "kube-proxy-8bgl7" [577052d5-fe7d-4547-bfbf-d3c938884767] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0731 18:07:35.977946   73696 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-094310" [df25971f-b25a-4344-a91e-c4b0c9ee5282] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0731 18:07:35.977964   73696 system_pods.go:61] "metrics-server-569cc877fc-64hp4" [847243bf-6568-41ff-a1e4-70b0a89c63dd] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0731 18:07:35.977978   73696 system_pods.go:61] "storage-provisioner" [6493bfa6-e40b-405c-93b6-ee5053efbdf6] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0731 18:07:35.977991   73696 system_pods.go:74] duration metric: took 16.038231ms to wait for pod list to return data ...
	I0731 18:07:35.978003   73696 node_conditions.go:102] verifying NodePressure condition ...
	I0731 18:07:35.983206   73696 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0731 18:07:35.983234   73696 node_conditions.go:123] node cpu capacity is 2
	I0731 18:07:35.983251   73696 node_conditions.go:105] duration metric: took 5.239492ms to run NodePressure ...
	I0731 18:07:35.983270   73696 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
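Before the addon phase runs, the restart path lists the kube-system pods (eight found above) and reads node capacity to verify the NodePressure condition. A hedged client-go sketch of the same pod listing; the kubeconfig path is a placeholder and the snippet only illustrates the idea, it is not minikube's system_pods.go implementation:

	package main

	import (
		"context"
		"fmt"
		"log"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		// Placeholder kubeconfig path; the test harness uses its own profile directory.
		cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
		if err != nil {
			log.Fatal(err)
		}
		client, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			log.Fatal(err)
		}
		pods, err := client.CoreV1().Pods("kube-system").List(context.Background(), metav1.ListOptions{})
		if err != nil {
			log.Fatal(err)
		}
		for _, p := range pods.Items {
			fmt.Printf("%s\t%s\n", p.Name, p.Status.Phase)
		}
	}
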
	I0731 18:07:37.155307   73800 main.go:141] libmachine: (embed-certs-436067) DBG | domain embed-certs-436067 has defined MAC address 52:54:00:87:1e:25 in network mk-embed-certs-436067
	I0731 18:07:37.155787   73800 main.go:141] libmachine: (embed-certs-436067) DBG | unable to find current IP address of domain embed-certs-436067 in network mk-embed-certs-436067
	I0731 18:07:37.155822   73800 main.go:141] libmachine: (embed-certs-436067) DBG | I0731 18:07:37.155717   74994 retry.go:31] will retry after 3.095991533s: waiting for machine to come up
	I0731 18:07:36.249072   73696 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0731 18:07:36.253639   73696 kubeadm.go:739] kubelet initialised
	I0731 18:07:36.253661   73696 kubeadm.go:740] duration metric: took 4.559461ms waiting for restarted kubelet to initialise ...
	I0731 18:07:36.253669   73696 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0731 18:07:36.258632   73696 pod_ready.go:78] waiting up to 4m0s for pod "coredns-7db6d8ff4d-kvxmb" in "kube-system" namespace to be "Ready" ...
	I0731 18:07:36.262785   73696 pod_ready.go:97] node "default-k8s-diff-port-094310" hosting pod "coredns-7db6d8ff4d-kvxmb" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-094310" has status "Ready":"False"
	I0731 18:07:36.262811   73696 pod_ready.go:81] duration metric: took 4.157359ms for pod "coredns-7db6d8ff4d-kvxmb" in "kube-system" namespace to be "Ready" ...
	E0731 18:07:36.262823   73696 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-094310" hosting pod "coredns-7db6d8ff4d-kvxmb" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-094310" has status "Ready":"False"
	I0731 18:07:36.262831   73696 pod_ready.go:78] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-094310" in "kube-system" namespace to be "Ready" ...
	I0731 18:07:36.269224   73696 pod_ready.go:97] node "default-k8s-diff-port-094310" hosting pod "etcd-default-k8s-diff-port-094310" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-094310" has status "Ready":"False"
	I0731 18:07:36.269250   73696 pod_ready.go:81] duration metric: took 6.406018ms for pod "etcd-default-k8s-diff-port-094310" in "kube-system" namespace to be "Ready" ...
	E0731 18:07:36.269263   73696 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-094310" hosting pod "etcd-default-k8s-diff-port-094310" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-094310" has status "Ready":"False"
	I0731 18:07:36.269270   73696 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-094310" in "kube-system" namespace to be "Ready" ...
	I0731 18:07:36.273379   73696 pod_ready.go:97] node "default-k8s-diff-port-094310" hosting pod "kube-apiserver-default-k8s-diff-port-094310" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-094310" has status "Ready":"False"
	I0731 18:07:36.273400   73696 pod_ready.go:81] duration metric: took 4.119945ms for pod "kube-apiserver-default-k8s-diff-port-094310" in "kube-system" namespace to be "Ready" ...
	E0731 18:07:36.273408   73696 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-094310" hosting pod "kube-apiserver-default-k8s-diff-port-094310" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-094310" has status "Ready":"False"
	I0731 18:07:36.273414   73696 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-094310" in "kube-system" namespace to be "Ready" ...
	I0731 18:07:36.365153   73696 pod_ready.go:97] node "default-k8s-diff-port-094310" hosting pod "kube-controller-manager-default-k8s-diff-port-094310" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-094310" has status "Ready":"False"
	I0731 18:07:36.365183   73696 pod_ready.go:81] duration metric: took 91.758203ms for pod "kube-controller-manager-default-k8s-diff-port-094310" in "kube-system" namespace to be "Ready" ...
	E0731 18:07:36.365195   73696 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-094310" hosting pod "kube-controller-manager-default-k8s-diff-port-094310" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-094310" has status "Ready":"False"
	I0731 18:07:36.365201   73696 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-8bgl7" in "kube-system" namespace to be "Ready" ...
	I0731 18:07:36.765371   73696 pod_ready.go:92] pod "kube-proxy-8bgl7" in "kube-system" namespace has status "Ready":"True"
	I0731 18:07:36.765393   73696 pod_ready.go:81] duration metric: took 400.181854ms for pod "kube-proxy-8bgl7" in "kube-system" namespace to be "Ready" ...
	I0731 18:07:36.765405   73696 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-094310" in "kube-system" namespace to be "Ready" ...
	I0731 18:07:38.770757   73696 pod_ready.go:102] pod "kube-scheduler-default-k8s-diff-port-094310" in "kube-system" namespace has status "Ready":"False"
	I0731 18:07:40.772702   73696 pod_ready.go:102] pod "kube-scheduler-default-k8s-diff-port-094310" in "kube-system" namespace has status "Ready":"False"
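The pod_ready waits above poll each system pod until its Ready condition reports True, skipping pods whose node is itself not Ready. A simplified sketch of that readiness check against the same cluster; the kubeconfig path, timeout, and polling interval are assumptions, not the values used by pod_ready.go:

	package main

	import (
		"context"
		"fmt"
		"log"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// podReady reports whether the pod's Ready condition is True.
	func podReady(pod *corev1.Pod) bool {
		for _, c := range pod.Status.Conditions {
			if c.Type == corev1.PodReady {
				return c.Status == corev1.ConditionTrue
			}
		}
		return false
	}

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig") // placeholder path
		if err != nil {
			log.Fatal(err)
		}
		client, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			log.Fatal(err)
		}
		name := "kube-scheduler-default-k8s-diff-port-094310" // pod name from the log
		deadline := time.Now().Add(4 * time.Minute)
		for time.Now().Before(deadline) {
			pod, err := client.CoreV1().Pods("kube-system").Get(context.Background(), name, metav1.GetOptions{})
			if err == nil && podReady(pod) {
				fmt.Println(name, "is Ready")
				return
			}
			time.Sleep(2 * time.Second)
		}
		log.Fatalf("%s never became Ready", name)
	}
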
	I0731 18:07:41.552094   74203 start.go:364] duration metric: took 3m46.106649241s to acquireMachinesLock for "old-k8s-version-276459"
	I0731 18:07:41.552166   74203 start.go:96] Skipping create...Using existing machine configuration
	I0731 18:07:41.552174   74203 fix.go:54] fixHost starting: 
	I0731 18:07:41.552553   74203 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 18:07:41.552595   74203 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 18:07:41.569965   74203 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43329
	I0731 18:07:41.570361   74203 main.go:141] libmachine: () Calling .GetVersion
	I0731 18:07:41.570884   74203 main.go:141] libmachine: Using API Version  1
	I0731 18:07:41.570905   74203 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 18:07:41.571247   74203 main.go:141] libmachine: () Calling .GetMachineName
	I0731 18:07:41.571454   74203 main.go:141] libmachine: (old-k8s-version-276459) Calling .DriverName
	I0731 18:07:41.571605   74203 main.go:141] libmachine: (old-k8s-version-276459) Calling .GetState
	I0731 18:07:41.573081   74203 fix.go:112] recreateIfNeeded on old-k8s-version-276459: state=Stopped err=<nil>
	I0731 18:07:41.573114   74203 main.go:141] libmachine: (old-k8s-version-276459) Calling .DriverName
	W0731 18:07:41.573276   74203 fix.go:138] unexpected machine state, will restart: <nil>
	I0731 18:07:41.575254   74203 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-276459" ...
	I0731 18:07:40.254868   73800 main.go:141] libmachine: (embed-certs-436067) DBG | domain embed-certs-436067 has defined MAC address 52:54:00:87:1e:25 in network mk-embed-certs-436067
	I0731 18:07:40.255367   73800 main.go:141] libmachine: (embed-certs-436067) Found IP for machine: 192.168.50.86
	I0731 18:07:40.255385   73800 main.go:141] libmachine: (embed-certs-436067) Reserving static IP address...
	I0731 18:07:40.255405   73800 main.go:141] libmachine: (embed-certs-436067) DBG | domain embed-certs-436067 has current primary IP address 192.168.50.86 and MAC address 52:54:00:87:1e:25 in network mk-embed-certs-436067
	I0731 18:07:40.255798   73800 main.go:141] libmachine: (embed-certs-436067) DBG | found host DHCP lease matching {name: "embed-certs-436067", mac: "52:54:00:87:1e:25", ip: "192.168.50.86"} in network mk-embed-certs-436067: {Iface:virbr4 ExpiryTime:2024-07-31 19:07:32 +0000 UTC Type:0 Mac:52:54:00:87:1e:25 Iaid: IPaddr:192.168.50.86 Prefix:24 Hostname:embed-certs-436067 Clientid:01:52:54:00:87:1e:25}
	I0731 18:07:40.255822   73800 main.go:141] libmachine: (embed-certs-436067) Reserved static IP address: 192.168.50.86
	I0731 18:07:40.255839   73800 main.go:141] libmachine: (embed-certs-436067) DBG | skip adding static IP to network mk-embed-certs-436067 - found existing host DHCP lease matching {name: "embed-certs-436067", mac: "52:54:00:87:1e:25", ip: "192.168.50.86"}
	I0731 18:07:40.255853   73800 main.go:141] libmachine: (embed-certs-436067) DBG | Getting to WaitForSSH function...
	I0731 18:07:40.255865   73800 main.go:141] libmachine: (embed-certs-436067) Waiting for SSH to be available...
	I0731 18:07:40.257994   73800 main.go:141] libmachine: (embed-certs-436067) DBG | domain embed-certs-436067 has defined MAC address 52:54:00:87:1e:25 in network mk-embed-certs-436067
	I0731 18:07:40.258304   73800 main.go:141] libmachine: (embed-certs-436067) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:1e:25", ip: ""} in network mk-embed-certs-436067: {Iface:virbr4 ExpiryTime:2024-07-31 19:07:32 +0000 UTC Type:0 Mac:52:54:00:87:1e:25 Iaid: IPaddr:192.168.50.86 Prefix:24 Hostname:embed-certs-436067 Clientid:01:52:54:00:87:1e:25}
	I0731 18:07:40.258331   73800 main.go:141] libmachine: (embed-certs-436067) DBG | domain embed-certs-436067 has defined IP address 192.168.50.86 and MAC address 52:54:00:87:1e:25 in network mk-embed-certs-436067
	I0731 18:07:40.258475   73800 main.go:141] libmachine: (embed-certs-436067) DBG | Using SSH client type: external
	I0731 18:07:40.258492   73800 main.go:141] libmachine: (embed-certs-436067) DBG | Using SSH private key: /home/jenkins/minikube-integration/19349-8084/.minikube/machines/embed-certs-436067/id_rsa (-rw-------)
	I0731 18:07:40.258594   73800 main.go:141] libmachine: (embed-certs-436067) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.86 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19349-8084/.minikube/machines/embed-certs-436067/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0731 18:07:40.258625   73800 main.go:141] libmachine: (embed-certs-436067) DBG | About to run SSH command:
	I0731 18:07:40.258644   73800 main.go:141] libmachine: (embed-certs-436067) DBG | exit 0
	I0731 18:07:40.387051   73800 main.go:141] libmachine: (embed-certs-436067) DBG | SSH cmd err, output: <nil>: 
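WaitForSSH above retries `ssh ... exit 0` against the freshly started VM until the command succeeds. A minimal sketch of the same wait using golang.org/x/crypto/ssh instead of the external ssh binary; the address and key path are copied from the log, the retry interval and overall deadline are assumptions, and host-key checking is skipped just as the logged command does with StrictHostKeyChecking=no:

	package main

	import (
		"log"
		"os"
		"time"

		"golang.org/x/crypto/ssh"
	)

	func main() {
		keyBytes, err := os.ReadFile("/home/jenkins/minikube-integration/19349-8084/.minikube/machines/embed-certs-436067/id_rsa")
		if err != nil {
			log.Fatal(err)
		}
		signer, err := ssh.ParsePrivateKey(keyBytes)
		if err != nil {
			log.Fatal(err)
		}
		cfg := &ssh.ClientConfig{
			User:            "docker",
			Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
			HostKeyCallback: ssh.InsecureIgnoreHostKey(), // mirrors StrictHostKeyChecking=no
			Timeout:         10 * time.Second,
		}
		deadline := time.Now().Add(3 * time.Minute)
		for time.Now().Before(deadline) {
			client, err := ssh.Dial("tcp", "192.168.50.86:22", cfg)
			if err == nil {
				client.Close()
				log.Println("SSH is available")
				return
			}
			log.Printf("SSH not ready yet: %v", err)
			time.Sleep(2 * time.Second)
		}
		log.Fatal("timed out waiting for SSH")
	}
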
	I0731 18:07:40.387459   73800 main.go:141] libmachine: (embed-certs-436067) Calling .GetConfigRaw
	I0731 18:07:40.388093   73800 main.go:141] libmachine: (embed-certs-436067) Calling .GetIP
	I0731 18:07:40.390805   73800 main.go:141] libmachine: (embed-certs-436067) DBG | domain embed-certs-436067 has defined MAC address 52:54:00:87:1e:25 in network mk-embed-certs-436067
	I0731 18:07:40.391260   73800 main.go:141] libmachine: (embed-certs-436067) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:1e:25", ip: ""} in network mk-embed-certs-436067: {Iface:virbr4 ExpiryTime:2024-07-31 19:07:32 +0000 UTC Type:0 Mac:52:54:00:87:1e:25 Iaid: IPaddr:192.168.50.86 Prefix:24 Hostname:embed-certs-436067 Clientid:01:52:54:00:87:1e:25}
	I0731 18:07:40.391306   73800 main.go:141] libmachine: (embed-certs-436067) DBG | domain embed-certs-436067 has defined IP address 192.168.50.86 and MAC address 52:54:00:87:1e:25 in network mk-embed-certs-436067
	I0731 18:07:40.391534   73800 profile.go:143] Saving config to /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/embed-certs-436067/config.json ...
	I0731 18:07:40.391769   73800 machine.go:94] provisionDockerMachine start ...
	I0731 18:07:40.391793   73800 main.go:141] libmachine: (embed-certs-436067) Calling .DriverName
	I0731 18:07:40.392012   73800 main.go:141] libmachine: (embed-certs-436067) Calling .GetSSHHostname
	I0731 18:07:40.394412   73800 main.go:141] libmachine: (embed-certs-436067) DBG | domain embed-certs-436067 has defined MAC address 52:54:00:87:1e:25 in network mk-embed-certs-436067
	I0731 18:07:40.394809   73800 main.go:141] libmachine: (embed-certs-436067) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:1e:25", ip: ""} in network mk-embed-certs-436067: {Iface:virbr4 ExpiryTime:2024-07-31 19:07:32 +0000 UTC Type:0 Mac:52:54:00:87:1e:25 Iaid: IPaddr:192.168.50.86 Prefix:24 Hostname:embed-certs-436067 Clientid:01:52:54:00:87:1e:25}
	I0731 18:07:40.394839   73800 main.go:141] libmachine: (embed-certs-436067) DBG | domain embed-certs-436067 has defined IP address 192.168.50.86 and MAC address 52:54:00:87:1e:25 in network mk-embed-certs-436067
	I0731 18:07:40.395029   73800 main.go:141] libmachine: (embed-certs-436067) Calling .GetSSHPort
	I0731 18:07:40.395209   73800 main.go:141] libmachine: (embed-certs-436067) Calling .GetSSHKeyPath
	I0731 18:07:40.395372   73800 main.go:141] libmachine: (embed-certs-436067) Calling .GetSSHKeyPath
	I0731 18:07:40.395480   73800 main.go:141] libmachine: (embed-certs-436067) Calling .GetSSHUsername
	I0731 18:07:40.395624   73800 main.go:141] libmachine: Using SSH client type: native
	I0731 18:07:40.395808   73800 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.86 22 <nil> <nil>}
	I0731 18:07:40.395817   73800 main.go:141] libmachine: About to run SSH command:
	hostname
	I0731 18:07:40.503041   73800 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0731 18:07:40.503073   73800 main.go:141] libmachine: (embed-certs-436067) Calling .GetMachineName
	I0731 18:07:40.503326   73800 buildroot.go:166] provisioning hostname "embed-certs-436067"
	I0731 18:07:40.503352   73800 main.go:141] libmachine: (embed-certs-436067) Calling .GetMachineName
	I0731 18:07:40.503539   73800 main.go:141] libmachine: (embed-certs-436067) Calling .GetSSHHostname
	I0731 18:07:40.506604   73800 main.go:141] libmachine: (embed-certs-436067) DBG | domain embed-certs-436067 has defined MAC address 52:54:00:87:1e:25 in network mk-embed-certs-436067
	I0731 18:07:40.506940   73800 main.go:141] libmachine: (embed-certs-436067) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:1e:25", ip: ""} in network mk-embed-certs-436067: {Iface:virbr4 ExpiryTime:2024-07-31 19:07:32 +0000 UTC Type:0 Mac:52:54:00:87:1e:25 Iaid: IPaddr:192.168.50.86 Prefix:24 Hostname:embed-certs-436067 Clientid:01:52:54:00:87:1e:25}
	I0731 18:07:40.506967   73800 main.go:141] libmachine: (embed-certs-436067) DBG | domain embed-certs-436067 has defined IP address 192.168.50.86 and MAC address 52:54:00:87:1e:25 in network mk-embed-certs-436067
	I0731 18:07:40.507124   73800 main.go:141] libmachine: (embed-certs-436067) Calling .GetSSHPort
	I0731 18:07:40.507296   73800 main.go:141] libmachine: (embed-certs-436067) Calling .GetSSHKeyPath
	I0731 18:07:40.507438   73800 main.go:141] libmachine: (embed-certs-436067) Calling .GetSSHKeyPath
	I0731 18:07:40.507577   73800 main.go:141] libmachine: (embed-certs-436067) Calling .GetSSHUsername
	I0731 18:07:40.507752   73800 main.go:141] libmachine: Using SSH client type: native
	I0731 18:07:40.507912   73800 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.86 22 <nil> <nil>}
	I0731 18:07:40.507927   73800 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-436067 && echo "embed-certs-436067" | sudo tee /etc/hostname
	I0731 18:07:40.632627   73800 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-436067
	
	I0731 18:07:40.632678   73800 main.go:141] libmachine: (embed-certs-436067) Calling .GetSSHHostname
	I0731 18:07:40.635632   73800 main.go:141] libmachine: (embed-certs-436067) DBG | domain embed-certs-436067 has defined MAC address 52:54:00:87:1e:25 in network mk-embed-certs-436067
	I0731 18:07:40.635989   73800 main.go:141] libmachine: (embed-certs-436067) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:1e:25", ip: ""} in network mk-embed-certs-436067: {Iface:virbr4 ExpiryTime:2024-07-31 19:07:32 +0000 UTC Type:0 Mac:52:54:00:87:1e:25 Iaid: IPaddr:192.168.50.86 Prefix:24 Hostname:embed-certs-436067 Clientid:01:52:54:00:87:1e:25}
	I0731 18:07:40.636017   73800 main.go:141] libmachine: (embed-certs-436067) DBG | domain embed-certs-436067 has defined IP address 192.168.50.86 and MAC address 52:54:00:87:1e:25 in network mk-embed-certs-436067
	I0731 18:07:40.636168   73800 main.go:141] libmachine: (embed-certs-436067) Calling .GetSSHPort
	I0731 18:07:40.636386   73800 main.go:141] libmachine: (embed-certs-436067) Calling .GetSSHKeyPath
	I0731 18:07:40.636554   73800 main.go:141] libmachine: (embed-certs-436067) Calling .GetSSHKeyPath
	I0731 18:07:40.636751   73800 main.go:141] libmachine: (embed-certs-436067) Calling .GetSSHUsername
	I0731 18:07:40.636963   73800 main.go:141] libmachine: Using SSH client type: native
	I0731 18:07:40.637192   73800 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.86 22 <nil> <nil>}
	I0731 18:07:40.637213   73800 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-436067' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-436067/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-436067' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0731 18:07:40.755249   73800 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0731 18:07:40.755273   73800 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19349-8084/.minikube CaCertPath:/home/jenkins/minikube-integration/19349-8084/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19349-8084/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19349-8084/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19349-8084/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19349-8084/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19349-8084/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19349-8084/.minikube}
	I0731 18:07:40.755291   73800 buildroot.go:174] setting up certificates
	I0731 18:07:40.755301   73800 provision.go:84] configureAuth start
	I0731 18:07:40.755310   73800 main.go:141] libmachine: (embed-certs-436067) Calling .GetMachineName
	I0731 18:07:40.755602   73800 main.go:141] libmachine: (embed-certs-436067) Calling .GetIP
	I0731 18:07:40.758306   73800 main.go:141] libmachine: (embed-certs-436067) DBG | domain embed-certs-436067 has defined MAC address 52:54:00:87:1e:25 in network mk-embed-certs-436067
	I0731 18:07:40.758705   73800 main.go:141] libmachine: (embed-certs-436067) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:1e:25", ip: ""} in network mk-embed-certs-436067: {Iface:virbr4 ExpiryTime:2024-07-31 19:07:32 +0000 UTC Type:0 Mac:52:54:00:87:1e:25 Iaid: IPaddr:192.168.50.86 Prefix:24 Hostname:embed-certs-436067 Clientid:01:52:54:00:87:1e:25}
	I0731 18:07:40.758731   73800 main.go:141] libmachine: (embed-certs-436067) DBG | domain embed-certs-436067 has defined IP address 192.168.50.86 and MAC address 52:54:00:87:1e:25 in network mk-embed-certs-436067
	I0731 18:07:40.758865   73800 main.go:141] libmachine: (embed-certs-436067) Calling .GetSSHHostname
	I0731 18:07:40.760790   73800 main.go:141] libmachine: (embed-certs-436067) DBG | domain embed-certs-436067 has defined MAC address 52:54:00:87:1e:25 in network mk-embed-certs-436067
	I0731 18:07:40.761061   73800 main.go:141] libmachine: (embed-certs-436067) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:1e:25", ip: ""} in network mk-embed-certs-436067: {Iface:virbr4 ExpiryTime:2024-07-31 19:07:32 +0000 UTC Type:0 Mac:52:54:00:87:1e:25 Iaid: IPaddr:192.168.50.86 Prefix:24 Hostname:embed-certs-436067 Clientid:01:52:54:00:87:1e:25}
	I0731 18:07:40.761090   73800 main.go:141] libmachine: (embed-certs-436067) DBG | domain embed-certs-436067 has defined IP address 192.168.50.86 and MAC address 52:54:00:87:1e:25 in network mk-embed-certs-436067
	I0731 18:07:40.761244   73800 provision.go:143] copyHostCerts
	I0731 18:07:40.761299   73800 exec_runner.go:144] found /home/jenkins/minikube-integration/19349-8084/.minikube/ca.pem, removing ...
	I0731 18:07:40.761323   73800 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19349-8084/.minikube/ca.pem
	I0731 18:07:40.761376   73800 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19349-8084/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19349-8084/.minikube/ca.pem (1082 bytes)
	I0731 18:07:40.761479   73800 exec_runner.go:144] found /home/jenkins/minikube-integration/19349-8084/.minikube/cert.pem, removing ...
	I0731 18:07:40.761488   73800 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19349-8084/.minikube/cert.pem
	I0731 18:07:40.761509   73800 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19349-8084/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19349-8084/.minikube/cert.pem (1123 bytes)
	I0731 18:07:40.761562   73800 exec_runner.go:144] found /home/jenkins/minikube-integration/19349-8084/.minikube/key.pem, removing ...
	I0731 18:07:40.761569   73800 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19349-8084/.minikube/key.pem
	I0731 18:07:40.761586   73800 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19349-8084/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19349-8084/.minikube/key.pem (1675 bytes)
	I0731 18:07:40.761635   73800 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19349-8084/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19349-8084/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19349-8084/.minikube/certs/ca-key.pem org=jenkins.embed-certs-436067 san=[127.0.0.1 192.168.50.86 embed-certs-436067 localhost minikube]
	I0731 18:07:40.874612   73800 provision.go:177] copyRemoteCerts
	I0731 18:07:40.874666   73800 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0731 18:07:40.874691   73800 main.go:141] libmachine: (embed-certs-436067) Calling .GetSSHHostname
	I0731 18:07:40.877623   73800 main.go:141] libmachine: (embed-certs-436067) DBG | domain embed-certs-436067 has defined MAC address 52:54:00:87:1e:25 in network mk-embed-certs-436067
	I0731 18:07:40.878044   73800 main.go:141] libmachine: (embed-certs-436067) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:1e:25", ip: ""} in network mk-embed-certs-436067: {Iface:virbr4 ExpiryTime:2024-07-31 19:07:32 +0000 UTC Type:0 Mac:52:54:00:87:1e:25 Iaid: IPaddr:192.168.50.86 Prefix:24 Hostname:embed-certs-436067 Clientid:01:52:54:00:87:1e:25}
	I0731 18:07:40.878075   73800 main.go:141] libmachine: (embed-certs-436067) DBG | domain embed-certs-436067 has defined IP address 192.168.50.86 and MAC address 52:54:00:87:1e:25 in network mk-embed-certs-436067
	I0731 18:07:40.878206   73800 main.go:141] libmachine: (embed-certs-436067) Calling .GetSSHPort
	I0731 18:07:40.878403   73800 main.go:141] libmachine: (embed-certs-436067) Calling .GetSSHKeyPath
	I0731 18:07:40.878556   73800 main.go:141] libmachine: (embed-certs-436067) Calling .GetSSHUsername
	I0731 18:07:40.878706   73800 sshutil.go:53] new ssh client: &{IP:192.168.50.86 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19349-8084/.minikube/machines/embed-certs-436067/id_rsa Username:docker}
	I0731 18:07:40.965720   73800 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19349-8084/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0731 18:07:40.987836   73800 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19349-8084/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0731 18:07:41.012423   73800 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19349-8084/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0731 18:07:41.036366   73800 provision.go:87] duration metric: took 281.054266ms to configureAuth
	I0731 18:07:41.036392   73800 buildroot.go:189] setting minikube options for container-runtime
	I0731 18:07:41.036561   73800 config.go:182] Loaded profile config "embed-certs-436067": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0731 18:07:41.036626   73800 main.go:141] libmachine: (embed-certs-436067) Calling .GetSSHHostname
	I0731 18:07:41.039204   73800 main.go:141] libmachine: (embed-certs-436067) DBG | domain embed-certs-436067 has defined MAC address 52:54:00:87:1e:25 in network mk-embed-certs-436067
	I0731 18:07:41.039587   73800 main.go:141] libmachine: (embed-certs-436067) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:1e:25", ip: ""} in network mk-embed-certs-436067: {Iface:virbr4 ExpiryTime:2024-07-31 19:07:32 +0000 UTC Type:0 Mac:52:54:00:87:1e:25 Iaid: IPaddr:192.168.50.86 Prefix:24 Hostname:embed-certs-436067 Clientid:01:52:54:00:87:1e:25}
	I0731 18:07:41.039615   73800 main.go:141] libmachine: (embed-certs-436067) DBG | domain embed-certs-436067 has defined IP address 192.168.50.86 and MAC address 52:54:00:87:1e:25 in network mk-embed-certs-436067
	I0731 18:07:41.039814   73800 main.go:141] libmachine: (embed-certs-436067) Calling .GetSSHPort
	I0731 18:07:41.040021   73800 main.go:141] libmachine: (embed-certs-436067) Calling .GetSSHKeyPath
	I0731 18:07:41.040162   73800 main.go:141] libmachine: (embed-certs-436067) Calling .GetSSHKeyPath
	I0731 18:07:41.040293   73800 main.go:141] libmachine: (embed-certs-436067) Calling .GetSSHUsername
	I0731 18:07:41.040462   73800 main.go:141] libmachine: Using SSH client type: native
	I0731 18:07:41.040642   73800 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.86 22 <nil> <nil>}
	I0731 18:07:41.040663   73800 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0731 18:07:41.307915   73800 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0731 18:07:41.307945   73800 machine.go:97] duration metric: took 916.161297ms to provisionDockerMachine
	I0731 18:07:41.307958   73800 start.go:293] postStartSetup for "embed-certs-436067" (driver="kvm2")
	I0731 18:07:41.307971   73800 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0731 18:07:41.307990   73800 main.go:141] libmachine: (embed-certs-436067) Calling .DriverName
	I0731 18:07:41.308383   73800 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0731 18:07:41.308409   73800 main.go:141] libmachine: (embed-certs-436067) Calling .GetSSHHostname
	I0731 18:07:41.311172   73800 main.go:141] libmachine: (embed-certs-436067) DBG | domain embed-certs-436067 has defined MAC address 52:54:00:87:1e:25 in network mk-embed-certs-436067
	I0731 18:07:41.311532   73800 main.go:141] libmachine: (embed-certs-436067) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:1e:25", ip: ""} in network mk-embed-certs-436067: {Iface:virbr4 ExpiryTime:2024-07-31 19:07:32 +0000 UTC Type:0 Mac:52:54:00:87:1e:25 Iaid: IPaddr:192.168.50.86 Prefix:24 Hostname:embed-certs-436067 Clientid:01:52:54:00:87:1e:25}
	I0731 18:07:41.311559   73800 main.go:141] libmachine: (embed-certs-436067) DBG | domain embed-certs-436067 has defined IP address 192.168.50.86 and MAC address 52:54:00:87:1e:25 in network mk-embed-certs-436067
	I0731 18:07:41.311712   73800 main.go:141] libmachine: (embed-certs-436067) Calling .GetSSHPort
	I0731 18:07:41.311940   73800 main.go:141] libmachine: (embed-certs-436067) Calling .GetSSHKeyPath
	I0731 18:07:41.312132   73800 main.go:141] libmachine: (embed-certs-436067) Calling .GetSSHUsername
	I0731 18:07:41.312251   73800 sshutil.go:53] new ssh client: &{IP:192.168.50.86 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19349-8084/.minikube/machines/embed-certs-436067/id_rsa Username:docker}
	I0731 18:07:41.397229   73800 ssh_runner.go:195] Run: cat /etc/os-release
	I0731 18:07:41.401356   73800 info.go:137] Remote host: Buildroot 2023.02.9
	I0731 18:07:41.401380   73800 filesync.go:126] Scanning /home/jenkins/minikube-integration/19349-8084/.minikube/addons for local assets ...
	I0731 18:07:41.401458   73800 filesync.go:126] Scanning /home/jenkins/minikube-integration/19349-8084/.minikube/files for local assets ...
	I0731 18:07:41.401571   73800 filesync.go:149] local asset: /home/jenkins/minikube-integration/19349-8084/.minikube/files/etc/ssl/certs/152592.pem -> 152592.pem in /etc/ssl/certs
	I0731 18:07:41.401696   73800 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0731 18:07:41.410540   73800 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19349-8084/.minikube/files/etc/ssl/certs/152592.pem --> /etc/ssl/certs/152592.pem (1708 bytes)
	I0731 18:07:41.434298   73800 start.go:296] duration metric: took 126.324424ms for postStartSetup
	I0731 18:07:41.434342   73800 fix.go:56] duration metric: took 19.306472215s for fixHost
	I0731 18:07:41.434363   73800 main.go:141] libmachine: (embed-certs-436067) Calling .GetSSHHostname
	I0731 18:07:41.437502   73800 main.go:141] libmachine: (embed-certs-436067) DBG | domain embed-certs-436067 has defined MAC address 52:54:00:87:1e:25 in network mk-embed-certs-436067
	I0731 18:07:41.438007   73800 main.go:141] libmachine: (embed-certs-436067) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:1e:25", ip: ""} in network mk-embed-certs-436067: {Iface:virbr4 ExpiryTime:2024-07-31 19:07:32 +0000 UTC Type:0 Mac:52:54:00:87:1e:25 Iaid: IPaddr:192.168.50.86 Prefix:24 Hostname:embed-certs-436067 Clientid:01:52:54:00:87:1e:25}
	I0731 18:07:41.438038   73800 main.go:141] libmachine: (embed-certs-436067) DBG | domain embed-certs-436067 has defined IP address 192.168.50.86 and MAC address 52:54:00:87:1e:25 in network mk-embed-certs-436067
	I0731 18:07:41.438221   73800 main.go:141] libmachine: (embed-certs-436067) Calling .GetSSHPort
	I0731 18:07:41.438435   73800 main.go:141] libmachine: (embed-certs-436067) Calling .GetSSHKeyPath
	I0731 18:07:41.438613   73800 main.go:141] libmachine: (embed-certs-436067) Calling .GetSSHKeyPath
	I0731 18:07:41.438752   73800 main.go:141] libmachine: (embed-certs-436067) Calling .GetSSHUsername
	I0731 18:07:41.438932   73800 main.go:141] libmachine: Using SSH client type: native
	I0731 18:07:41.439086   73800 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.86 22 <nil> <nil>}
	I0731 18:07:41.439095   73800 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0731 18:07:41.551915   73800 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722449261.529568895
	
	I0731 18:07:41.551937   73800 fix.go:216] guest clock: 1722449261.529568895
	I0731 18:07:41.551944   73800 fix.go:229] Guest: 2024-07-31 18:07:41.529568895 +0000 UTC Remote: 2024-07-31 18:07:41.434346377 +0000 UTC m=+286.960766339 (delta=95.222518ms)
	I0731 18:07:41.551999   73800 fix.go:200] guest clock delta is within tolerance: 95.222518ms
	I0731 18:07:41.552010   73800 start.go:83] releasing machines lock for "embed-certs-436067", held for 19.42417291s
	I0731 18:07:41.552036   73800 main.go:141] libmachine: (embed-certs-436067) Calling .DriverName
	I0731 18:07:41.552377   73800 main.go:141] libmachine: (embed-certs-436067) Calling .GetIP
	I0731 18:07:41.554945   73800 main.go:141] libmachine: (embed-certs-436067) DBG | domain embed-certs-436067 has defined MAC address 52:54:00:87:1e:25 in network mk-embed-certs-436067
	I0731 18:07:41.555385   73800 main.go:141] libmachine: (embed-certs-436067) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:1e:25", ip: ""} in network mk-embed-certs-436067: {Iface:virbr4 ExpiryTime:2024-07-31 19:07:32 +0000 UTC Type:0 Mac:52:54:00:87:1e:25 Iaid: IPaddr:192.168.50.86 Prefix:24 Hostname:embed-certs-436067 Clientid:01:52:54:00:87:1e:25}
	I0731 18:07:41.555415   73800 main.go:141] libmachine: (embed-certs-436067) DBG | domain embed-certs-436067 has defined IP address 192.168.50.86 and MAC address 52:54:00:87:1e:25 in network mk-embed-certs-436067
	I0731 18:07:41.555583   73800 main.go:141] libmachine: (embed-certs-436067) Calling .DriverName
	I0731 18:07:41.556139   73800 main.go:141] libmachine: (embed-certs-436067) Calling .DriverName
	I0731 18:07:41.556362   73800 main.go:141] libmachine: (embed-certs-436067) Calling .DriverName
	I0731 18:07:41.556448   73800 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0731 18:07:41.556507   73800 main.go:141] libmachine: (embed-certs-436067) Calling .GetSSHHostname
	I0731 18:07:41.556619   73800 ssh_runner.go:195] Run: cat /version.json
	I0731 18:07:41.556634   73800 main.go:141] libmachine: (embed-certs-436067) Calling .GetSSHHostname
	I0731 18:07:41.559700   73800 main.go:141] libmachine: (embed-certs-436067) DBG | domain embed-certs-436067 has defined MAC address 52:54:00:87:1e:25 in network mk-embed-certs-436067
	I0731 18:07:41.559847   73800 main.go:141] libmachine: (embed-certs-436067) DBG | domain embed-certs-436067 has defined MAC address 52:54:00:87:1e:25 in network mk-embed-certs-436067
	I0731 18:07:41.560160   73800 main.go:141] libmachine: (embed-certs-436067) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:1e:25", ip: ""} in network mk-embed-certs-436067: {Iface:virbr4 ExpiryTime:2024-07-31 19:07:32 +0000 UTC Type:0 Mac:52:54:00:87:1e:25 Iaid: IPaddr:192.168.50.86 Prefix:24 Hostname:embed-certs-436067 Clientid:01:52:54:00:87:1e:25}
	I0731 18:07:41.560227   73800 main.go:141] libmachine: (embed-certs-436067) DBG | domain embed-certs-436067 has defined IP address 192.168.50.86 and MAC address 52:54:00:87:1e:25 in network mk-embed-certs-436067
	I0731 18:07:41.560277   73800 main.go:141] libmachine: (embed-certs-436067) Calling .GetSSHPort
	I0731 18:07:41.560374   73800 main.go:141] libmachine: (embed-certs-436067) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:1e:25", ip: ""} in network mk-embed-certs-436067: {Iface:virbr4 ExpiryTime:2024-07-31 19:07:32 +0000 UTC Type:0 Mac:52:54:00:87:1e:25 Iaid: IPaddr:192.168.50.86 Prefix:24 Hostname:embed-certs-436067 Clientid:01:52:54:00:87:1e:25}
	I0731 18:07:41.560440   73800 main.go:141] libmachine: (embed-certs-436067) DBG | domain embed-certs-436067 has defined IP address 192.168.50.86 and MAC address 52:54:00:87:1e:25 in network mk-embed-certs-436067
	I0731 18:07:41.560582   73800 main.go:141] libmachine: (embed-certs-436067) Calling .GetSSHKeyPath
	I0731 18:07:41.560652   73800 main.go:141] libmachine: (embed-certs-436067) Calling .GetSSHPort
	I0731 18:07:41.560697   73800 main.go:141] libmachine: (embed-certs-436067) Calling .GetSSHUsername
	I0731 18:07:41.560745   73800 main.go:141] libmachine: (embed-certs-436067) Calling .GetSSHKeyPath
	I0731 18:07:41.560833   73800 sshutil.go:53] new ssh client: &{IP:192.168.50.86 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19349-8084/.minikube/machines/embed-certs-436067/id_rsa Username:docker}
	I0731 18:07:41.560909   73800 main.go:141] libmachine: (embed-certs-436067) Calling .GetSSHUsername
	I0731 18:07:41.561060   73800 sshutil.go:53] new ssh client: &{IP:192.168.50.86 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19349-8084/.minikube/machines/embed-certs-436067/id_rsa Username:docker}
	I0731 18:07:41.640796   73800 ssh_runner.go:195] Run: systemctl --version
	I0731 18:07:41.671461   73800 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0731 18:07:41.820881   73800 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0731 18:07:41.826610   73800 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0731 18:07:41.826673   73800 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0731 18:07:41.841766   73800 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0731 18:07:41.841789   73800 start.go:495] detecting cgroup driver to use...
	I0731 18:07:41.841872   73800 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0731 18:07:41.858636   73800 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0731 18:07:41.873090   73800 docker.go:217] disabling cri-docker service (if available) ...
	I0731 18:07:41.873152   73800 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0731 18:07:41.890967   73800 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0731 18:07:41.907886   73800 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0731 18:07:42.022724   73800 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0731 18:07:42.173885   73800 docker.go:233] disabling docker service ...
	I0731 18:07:42.173969   73800 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0731 18:07:42.190959   73800 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0731 18:07:42.205274   73800 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0731 18:07:42.358130   73800 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0731 18:07:42.497981   73800 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0731 18:07:42.513774   73800 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0731 18:07:42.532713   73800 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0731 18:07:42.532808   73800 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 18:07:42.544367   73800 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0731 18:07:42.544427   73800 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 18:07:42.556427   73800 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 18:07:42.566399   73800 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 18:07:42.576633   73800 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0731 18:07:42.588508   73800 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 18:07:42.600011   73800 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 18:07:42.618858   73800 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 18:07:42.630437   73800 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0731 18:07:42.641459   73800 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0731 18:07:42.641528   73800 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0731 18:07:42.655000   73800 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0731 18:07:42.664912   73800 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0731 18:07:42.791781   73800 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0731 18:07:42.936709   73800 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0731 18:07:42.936778   73800 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0731 18:07:42.941132   73800 start.go:563] Will wait 60s for crictl version
	I0731 18:07:42.941189   73800 ssh_runner.go:195] Run: which crictl
	I0731 18:07:42.944870   73800 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0731 18:07:42.983069   73800 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0731 18:07:42.983181   73800 ssh_runner.go:195] Run: crio --version
	I0731 18:07:43.011636   73800 ssh_runner.go:195] Run: crio --version
	I0731 18:07:43.043295   73800 out.go:177] * Preparing Kubernetes v1.30.3 on CRI-O 1.29.1 ...
	I0731 18:07:43.044545   73800 main.go:141] libmachine: (embed-certs-436067) Calling .GetIP
	I0731 18:07:43.047635   73800 main.go:141] libmachine: (embed-certs-436067) DBG | domain embed-certs-436067 has defined MAC address 52:54:00:87:1e:25 in network mk-embed-certs-436067
	I0731 18:07:43.048049   73800 main.go:141] libmachine: (embed-certs-436067) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:1e:25", ip: ""} in network mk-embed-certs-436067: {Iface:virbr4 ExpiryTime:2024-07-31 19:07:32 +0000 UTC Type:0 Mac:52:54:00:87:1e:25 Iaid: IPaddr:192.168.50.86 Prefix:24 Hostname:embed-certs-436067 Clientid:01:52:54:00:87:1e:25}
	I0731 18:07:43.048080   73800 main.go:141] libmachine: (embed-certs-436067) DBG | domain embed-certs-436067 has defined IP address 192.168.50.86 and MAC address 52:54:00:87:1e:25 in network mk-embed-certs-436067
	I0731 18:07:43.048330   73800 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0731 18:07:43.052269   73800 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0731 18:07:43.064116   73800 kubeadm.go:883] updating cluster {Name:embed-certs-436067 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1
.30.3 ClusterName:embed-certs-436067 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.86 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:f
alse MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0731 18:07:43.064283   73800 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0731 18:07:43.064361   73800 ssh_runner.go:195] Run: sudo crictl images --output json
	I0731 18:07:43.100437   73800 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.3". assuming images are not preloaded.
	I0731 18:07:43.100516   73800 ssh_runner.go:195] Run: which lz4
	I0731 18:07:43.104627   73800 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0731 18:07:43.108552   73800 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0731 18:07:43.108586   73800 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19349-8084/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (406200976 bytes)
	I0731 18:07:44.368238   73800 crio.go:462] duration metric: took 1.263636259s to copy over tarball
	I0731 18:07:44.368322   73800 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0731 18:07:41.576648   74203 main.go:141] libmachine: (old-k8s-version-276459) Calling .Start
	I0731 18:07:41.576823   74203 main.go:141] libmachine: (old-k8s-version-276459) Ensuring networks are active...
	I0731 18:07:41.577511   74203 main.go:141] libmachine: (old-k8s-version-276459) Ensuring network default is active
	I0731 18:07:41.578015   74203 main.go:141] libmachine: (old-k8s-version-276459) Ensuring network mk-old-k8s-version-276459 is active
	I0731 18:07:41.578469   74203 main.go:141] libmachine: (old-k8s-version-276459) Getting domain xml...
	I0731 18:07:41.579474   74203 main.go:141] libmachine: (old-k8s-version-276459) Creating domain...
	I0731 18:07:42.876409   74203 main.go:141] libmachine: (old-k8s-version-276459) Waiting to get IP...
	I0731 18:07:42.877345   74203 main.go:141] libmachine: (old-k8s-version-276459) DBG | domain old-k8s-version-276459 has defined MAC address 52:54:00:79:9d:96 in network mk-old-k8s-version-276459
	I0731 18:07:42.877788   74203 main.go:141] libmachine: (old-k8s-version-276459) DBG | unable to find current IP address of domain old-k8s-version-276459 in network mk-old-k8s-version-276459
	I0731 18:07:42.877841   74203 main.go:141] libmachine: (old-k8s-version-276459) DBG | I0731 18:07:42.877763   75164 retry.go:31] will retry after 218.764988ms: waiting for machine to come up
	I0731 18:07:43.098230   74203 main.go:141] libmachine: (old-k8s-version-276459) DBG | domain old-k8s-version-276459 has defined MAC address 52:54:00:79:9d:96 in network mk-old-k8s-version-276459
	I0731 18:07:43.098697   74203 main.go:141] libmachine: (old-k8s-version-276459) DBG | unable to find current IP address of domain old-k8s-version-276459 in network mk-old-k8s-version-276459
	I0731 18:07:43.098722   74203 main.go:141] libmachine: (old-k8s-version-276459) DBG | I0731 18:07:43.098650   75164 retry.go:31] will retry after 285.579707ms: waiting for machine to come up
	I0731 18:07:43.386356   74203 main.go:141] libmachine: (old-k8s-version-276459) DBG | domain old-k8s-version-276459 has defined MAC address 52:54:00:79:9d:96 in network mk-old-k8s-version-276459
	I0731 18:07:43.386897   74203 main.go:141] libmachine: (old-k8s-version-276459) DBG | unable to find current IP address of domain old-k8s-version-276459 in network mk-old-k8s-version-276459
	I0731 18:07:43.386928   74203 main.go:141] libmachine: (old-k8s-version-276459) DBG | I0731 18:07:43.386852   75164 retry.go:31] will retry after 389.197253ms: waiting for machine to come up
	I0731 18:07:43.778183   74203 main.go:141] libmachine: (old-k8s-version-276459) DBG | domain old-k8s-version-276459 has defined MAC address 52:54:00:79:9d:96 in network mk-old-k8s-version-276459
	I0731 18:07:43.778672   74203 main.go:141] libmachine: (old-k8s-version-276459) DBG | unable to find current IP address of domain old-k8s-version-276459 in network mk-old-k8s-version-276459
	I0731 18:07:43.778698   74203 main.go:141] libmachine: (old-k8s-version-276459) DBG | I0731 18:07:43.778622   75164 retry.go:31] will retry after 484.5108ms: waiting for machine to come up
	I0731 18:07:44.264412   74203 main.go:141] libmachine: (old-k8s-version-276459) DBG | domain old-k8s-version-276459 has defined MAC address 52:54:00:79:9d:96 in network mk-old-k8s-version-276459
	I0731 18:07:44.265042   74203 main.go:141] libmachine: (old-k8s-version-276459) DBG | unable to find current IP address of domain old-k8s-version-276459 in network mk-old-k8s-version-276459
	I0731 18:07:44.265073   74203 main.go:141] libmachine: (old-k8s-version-276459) DBG | I0731 18:07:44.264955   75164 retry.go:31] will retry after 621.551625ms: waiting for machine to come up
	I0731 18:07:44.887986   74203 main.go:141] libmachine: (old-k8s-version-276459) DBG | domain old-k8s-version-276459 has defined MAC address 52:54:00:79:9d:96 in network mk-old-k8s-version-276459
	I0731 18:07:44.888534   74203 main.go:141] libmachine: (old-k8s-version-276459) DBG | unable to find current IP address of domain old-k8s-version-276459 in network mk-old-k8s-version-276459
	I0731 18:07:44.888563   74203 main.go:141] libmachine: (old-k8s-version-276459) DBG | I0731 18:07:44.888489   75164 retry.go:31] will retry after 610.567971ms: waiting for machine to come up
	I0731 18:07:42.773583   73696 pod_ready.go:102] pod "kube-scheduler-default-k8s-diff-port-094310" in "kube-system" namespace has status "Ready":"False"
	I0731 18:07:44.272853   73696 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-094310" in "kube-system" namespace has status "Ready":"True"
	I0731 18:07:44.272874   73696 pod_ready.go:81] duration metric: took 7.507462023s for pod "kube-scheduler-default-k8s-diff-port-094310" in "kube-system" namespace to be "Ready" ...
	I0731 18:07:44.272886   73696 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-569cc877fc-64hp4" in "kube-system" namespace to be "Ready" ...
	I0731 18:07:46.689701   73800 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.321340678s)
	I0731 18:07:46.689730   73800 crio.go:469] duration metric: took 2.321463484s to extract the tarball
	I0731 18:07:46.689738   73800 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0731 18:07:46.749205   73800 ssh_runner.go:195] Run: sudo crictl images --output json
	I0731 18:07:46.805950   73800 crio.go:514] all images are preloaded for cri-o runtime.
	I0731 18:07:46.805979   73800 cache_images.go:84] Images are preloaded, skipping loading
	I0731 18:07:46.805990   73800 kubeadm.go:934] updating node { 192.168.50.86 8443 v1.30.3 crio true true} ...
	I0731 18:07:46.806135   73800 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-436067 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.86
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:embed-certs-436067 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0731 18:07:46.806233   73800 ssh_runner.go:195] Run: crio config
	I0731 18:07:46.865815   73800 cni.go:84] Creating CNI manager for ""
	I0731 18:07:46.865838   73800 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0731 18:07:46.865852   73800 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0731 18:07:46.865873   73800 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.86 APIServerPort:8443 KubernetesVersion:v1.30.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-436067 NodeName:embed-certs-436067 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.86"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.86 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath
:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0731 18:07:46.866048   73800 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.86
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-436067"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.86
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.86"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0731 18:07:46.866121   73800 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0731 18:07:46.875722   73800 binaries.go:44] Found k8s binaries, skipping transfer
	I0731 18:07:46.875786   73800 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0731 18:07:46.885107   73800 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I0731 18:07:46.903868   73800 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0731 18:07:46.919585   73800 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2159 bytes)
	I0731 18:07:46.939034   73800 ssh_runner.go:195] Run: grep 192.168.50.86	control-plane.minikube.internal$ /etc/hosts
	I0731 18:07:46.943460   73800 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.86	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0731 18:07:46.957699   73800 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0731 18:07:47.065714   73800 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0731 18:07:47.080655   73800 certs.go:68] Setting up /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/embed-certs-436067 for IP: 192.168.50.86
	I0731 18:07:47.080681   73800 certs.go:194] generating shared ca certs ...
	I0731 18:07:47.080717   73800 certs.go:226] acquiring lock for ca certs: {Name:mk49eed439085f9c30746706460ce213570d1997 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 18:07:47.080879   73800 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19349-8084/.minikube/ca.key
	I0731 18:07:47.080938   73800 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19349-8084/.minikube/proxy-client-ca.key
	I0731 18:07:47.080950   73800 certs.go:256] generating profile certs ...
	I0731 18:07:47.081046   73800 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/embed-certs-436067/client.key
	I0731 18:07:47.081113   73800 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/embed-certs-436067/apiserver.key.7b8160da
	I0731 18:07:47.081168   73800 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/embed-certs-436067/proxy-client.key
	I0731 18:07:47.081312   73800 certs.go:484] found cert: /home/jenkins/minikube-integration/19349-8084/.minikube/certs/15259.pem (1338 bytes)
	W0731 18:07:47.081367   73800 certs.go:480] ignoring /home/jenkins/minikube-integration/19349-8084/.minikube/certs/15259_empty.pem, impossibly tiny 0 bytes
	I0731 18:07:47.081380   73800 certs.go:484] found cert: /home/jenkins/minikube-integration/19349-8084/.minikube/certs/ca-key.pem (1675 bytes)
	I0731 18:07:47.081413   73800 certs.go:484] found cert: /home/jenkins/minikube-integration/19349-8084/.minikube/certs/ca.pem (1082 bytes)
	I0731 18:07:47.081438   73800 certs.go:484] found cert: /home/jenkins/minikube-integration/19349-8084/.minikube/certs/cert.pem (1123 bytes)
	I0731 18:07:47.081468   73800 certs.go:484] found cert: /home/jenkins/minikube-integration/19349-8084/.minikube/certs/key.pem (1675 bytes)
	I0731 18:07:47.081508   73800 certs.go:484] found cert: /home/jenkins/minikube-integration/19349-8084/.minikube/files/etc/ssl/certs/152592.pem (1708 bytes)
	I0731 18:07:47.082355   73800 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19349-8084/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0731 18:07:47.130037   73800 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19349-8084/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0731 18:07:47.171218   73800 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19349-8084/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0731 18:07:47.215745   73800 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19349-8084/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0731 18:07:47.244883   73800 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/embed-certs-436067/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I0731 18:07:47.270032   73800 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/embed-certs-436067/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0731 18:07:47.294900   73800 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/embed-certs-436067/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0731 18:07:47.317285   73800 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/embed-certs-436067/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0731 18:07:47.343000   73800 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19349-8084/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0731 18:07:47.369906   73800 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19349-8084/.minikube/certs/15259.pem --> /usr/share/ca-certificates/15259.pem (1338 bytes)
	I0731 18:07:47.392022   73800 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19349-8084/.minikube/files/etc/ssl/certs/152592.pem --> /usr/share/ca-certificates/152592.pem (1708 bytes)
	I0731 18:07:47.414219   73800 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0731 18:07:47.431931   73800 ssh_runner.go:195] Run: openssl version
	I0731 18:07:47.437602   73800 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0731 18:07:47.447585   73800 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0731 18:07:47.451779   73800 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 31 16:42 /usr/share/ca-certificates/minikubeCA.pem
	I0731 18:07:47.451833   73800 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0731 18:07:47.457309   73800 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0731 18:07:47.466917   73800 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/15259.pem && ln -fs /usr/share/ca-certificates/15259.pem /etc/ssl/certs/15259.pem"
	I0731 18:07:47.476211   73800 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/15259.pem
	I0731 18:07:47.480149   73800 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 31 16:54 /usr/share/ca-certificates/15259.pem
	I0731 18:07:47.480215   73800 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/15259.pem
	I0731 18:07:47.485412   73800 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/15259.pem /etc/ssl/certs/51391683.0"
	I0731 18:07:47.494852   73800 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/152592.pem && ln -fs /usr/share/ca-certificates/152592.pem /etc/ssl/certs/152592.pem"
	I0731 18:07:47.504407   73800 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/152592.pem
	I0731 18:07:47.509594   73800 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 31 16:54 /usr/share/ca-certificates/152592.pem
	I0731 18:07:47.509658   73800 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/152592.pem
	I0731 18:07:47.515728   73800 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/152592.pem /etc/ssl/certs/3ec20f2e.0"
	I0731 18:07:47.525660   73800 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0731 18:07:47.529953   73800 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0731 18:07:47.535576   73800 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0731 18:07:47.541158   73800 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0731 18:07:47.546633   73800 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0731 18:07:47.551827   73800 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0731 18:07:47.557100   73800 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0731 18:07:47.562447   73800 kubeadm.go:392] StartCluster: {Name:embed-certs-436067 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30
.3 ClusterName:embed-certs-436067 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.86 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:fals
e MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0731 18:07:47.562551   73800 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0731 18:07:47.562616   73800 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0731 18:07:47.610318   73800 cri.go:89] found id: ""
	I0731 18:07:47.610382   73800 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0731 18:07:47.623036   73800 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0731 18:07:47.623053   73800 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0731 18:07:47.623101   73800 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0731 18:07:47.631709   73800 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0731 18:07:47.632699   73800 kubeconfig.go:125] found "embed-certs-436067" server: "https://192.168.50.86:8443"
	I0731 18:07:47.634724   73800 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0731 18:07:47.643183   73800 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.50.86
	I0731 18:07:47.643207   73800 kubeadm.go:1160] stopping kube-system containers ...
	I0731 18:07:47.643218   73800 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0731 18:07:47.643264   73800 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0731 18:07:47.677438   73800 cri.go:89] found id: ""
	I0731 18:07:47.677527   73800 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0731 18:07:47.693427   73800 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0731 18:07:47.702889   73800 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0731 18:07:47.702907   73800 kubeadm.go:157] found existing configuration files:
	
	I0731 18:07:47.702956   73800 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0731 18:07:47.713958   73800 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0731 18:07:47.714017   73800 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0731 18:07:47.723931   73800 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0731 18:07:47.732615   73800 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0731 18:07:47.732673   73800 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0731 18:07:47.741168   73800 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0731 18:07:47.749164   73800 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0731 18:07:47.749217   73800 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0731 18:07:47.757691   73800 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0731 18:07:47.765479   73800 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0731 18:07:47.765530   73800 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0731 18:07:47.774002   73800 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0731 18:07:47.783757   73800 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0731 18:07:47.890835   73800 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0731 18:07:48.951421   73800 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.060547503s)
	I0731 18:07:48.951466   73800 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0731 18:07:49.152745   73800 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0731 18:07:49.224334   73800 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0731 18:07:49.341066   73800 api_server.go:52] waiting for apiserver process to appear ...
	I0731 18:07:49.341147   73800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:07:45.500400   74203 main.go:141] libmachine: (old-k8s-version-276459) DBG | domain old-k8s-version-276459 has defined MAC address 52:54:00:79:9d:96 in network mk-old-k8s-version-276459
	I0731 18:07:45.500938   74203 main.go:141] libmachine: (old-k8s-version-276459) DBG | unable to find current IP address of domain old-k8s-version-276459 in network mk-old-k8s-version-276459
	I0731 18:07:45.500966   74203 main.go:141] libmachine: (old-k8s-version-276459) DBG | I0731 18:07:45.500890   75164 retry.go:31] will retry after 1.069889786s: waiting for machine to come up
	I0731 18:07:46.572634   74203 main.go:141] libmachine: (old-k8s-version-276459) DBG | domain old-k8s-version-276459 has defined MAC address 52:54:00:79:9d:96 in network mk-old-k8s-version-276459
	I0731 18:07:46.573085   74203 main.go:141] libmachine: (old-k8s-version-276459) DBG | unable to find current IP address of domain old-k8s-version-276459 in network mk-old-k8s-version-276459
	I0731 18:07:46.573128   74203 main.go:141] libmachine: (old-k8s-version-276459) DBG | I0731 18:07:46.572979   75164 retry.go:31] will retry after 1.047722466s: waiting for machine to come up
	I0731 18:07:47.622035   74203 main.go:141] libmachine: (old-k8s-version-276459) DBG | domain old-k8s-version-276459 has defined MAC address 52:54:00:79:9d:96 in network mk-old-k8s-version-276459
	I0731 18:07:47.622479   74203 main.go:141] libmachine: (old-k8s-version-276459) DBG | unable to find current IP address of domain old-k8s-version-276459 in network mk-old-k8s-version-276459
	I0731 18:07:47.622507   74203 main.go:141] libmachine: (old-k8s-version-276459) DBG | I0731 18:07:47.622435   75164 retry.go:31] will retry after 1.292658555s: waiting for machine to come up
	I0731 18:07:48.916255   74203 main.go:141] libmachine: (old-k8s-version-276459) DBG | domain old-k8s-version-276459 has defined MAC address 52:54:00:79:9d:96 in network mk-old-k8s-version-276459
	I0731 18:07:48.916755   74203 main.go:141] libmachine: (old-k8s-version-276459) DBG | unable to find current IP address of domain old-k8s-version-276459 in network mk-old-k8s-version-276459
	I0731 18:07:48.916778   74203 main.go:141] libmachine: (old-k8s-version-276459) DBG | I0731 18:07:48.916701   75164 retry.go:31] will retry after 2.006539925s: waiting for machine to come up
	I0731 18:07:46.281654   73696 pod_ready.go:102] pod "metrics-server-569cc877fc-64hp4" in "kube-system" namespace has status "Ready":"False"
	I0731 18:07:49.189881   73696 pod_ready.go:102] pod "metrics-server-569cc877fc-64hp4" in "kube-system" namespace has status "Ready":"False"
	I0731 18:07:49.841397   73800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:07:50.341264   73800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:07:50.409398   73800 api_server.go:72] duration metric: took 1.068329172s to wait for apiserver process to appear ...
	I0731 18:07:50.409432   73800 api_server.go:88] waiting for apiserver healthz status ...
	I0731 18:07:50.409457   73800 api_server.go:253] Checking apiserver healthz at https://192.168.50.86:8443/healthz ...
	I0731 18:07:50.410135   73800 api_server.go:269] stopped: https://192.168.50.86:8443/healthz: Get "https://192.168.50.86:8443/healthz": dial tcp 192.168.50.86:8443: connect: connection refused
	I0731 18:07:50.909802   73800 api_server.go:253] Checking apiserver healthz at https://192.168.50.86:8443/healthz ...
	I0731 18:07:52.636930   73800 api_server.go:279] https://192.168.50.86:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0731 18:07:52.636972   73800 api_server.go:103] status: https://192.168.50.86:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0731 18:07:52.636989   73800 api_server.go:253] Checking apiserver healthz at https://192.168.50.86:8443/healthz ...
	I0731 18:07:52.666947   73800 api_server.go:279] https://192.168.50.86:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0731 18:07:52.666980   73800 api_server.go:103] status: https://192.168.50.86:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0731 18:07:52.910391   73800 api_server.go:253] Checking apiserver healthz at https://192.168.50.86:8443/healthz ...
	I0731 18:07:52.916305   73800 api_server.go:279] https://192.168.50.86:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0731 18:07:52.916342   73800 api_server.go:103] status: https://192.168.50.86:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0731 18:07:53.409623   73800 api_server.go:253] Checking apiserver healthz at https://192.168.50.86:8443/healthz ...
	I0731 18:07:53.419159   73800 api_server.go:279] https://192.168.50.86:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0731 18:07:53.419205   73800 api_server.go:103] status: https://192.168.50.86:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0731 18:07:53.909654   73800 api_server.go:253] Checking apiserver healthz at https://192.168.50.86:8443/healthz ...
	I0731 18:07:53.913518   73800 api_server.go:279] https://192.168.50.86:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0731 18:07:53.913541   73800 api_server.go:103] status: https://192.168.50.86:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0731 18:07:54.409879   73800 api_server.go:253] Checking apiserver healthz at https://192.168.50.86:8443/healthz ...
	I0731 18:07:54.413948   73800 api_server.go:279] https://192.168.50.86:8443/healthz returned 200:
	ok
	I0731 18:07:54.422414   73800 api_server.go:141] control plane version: v1.30.3
	I0731 18:07:54.422444   73800 api_server.go:131] duration metric: took 4.013003689s to wait for apiserver health ...
	I0731 18:07:54.422458   73800 cni.go:84] Creating CNI manager for ""
	I0731 18:07:54.422467   73800 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0731 18:07:54.424680   73800 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0731 18:07:54.425887   73800 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0731 18:07:54.436394   73800 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0731 18:07:54.454533   73800 system_pods.go:43] waiting for kube-system pods to appear ...
	I0731 18:07:54.464268   73800 system_pods.go:59] 8 kube-system pods found
	I0731 18:07:54.464304   73800 system_pods.go:61] "coredns-7db6d8ff4d-h6ckp" [84faf557-0c8d-4026-b620-37265e017ed0] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0731 18:07:54.464315   73800 system_pods.go:61] "etcd-embed-certs-436067" [787466df-6e3f-4209-a996-037875d63dc8] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0731 18:07:54.464326   73800 system_pods.go:61] "kube-apiserver-embed-certs-436067" [6366e38e-21f3-41a4-af7a-433953b70eaa] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0731 18:07:54.464335   73800 system_pods.go:61] "kube-controller-manager-embed-certs-436067" [a97f6a49-40cf-433a-8196-c433e3cda8e8] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0731 18:07:54.464341   73800 system_pods.go:61] "kube-proxy-tl9pj" [0124eb62-5c00-4f75-a73f-c3e92ddc4a42] Running
	I0731 18:07:54.464354   73800 system_pods.go:61] "kube-scheduler-embed-certs-436067" [afbb9117-f229-44ea-8939-d28c4a402c6b] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0731 18:07:54.464366   73800 system_pods.go:61] "metrics-server-569cc877fc-fzxrw" [2ecdab2a-8ce8-4771-bd94-4e24dee34386] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0731 18:07:54.464374   73800 system_pods.go:61] "storage-provisioner" [29b17f6d-f9e4-4272-b6da-368431264701] Running
	I0731 18:07:54.464382   73800 system_pods.go:74] duration metric: took 9.82125ms to wait for pod list to return data ...
	I0731 18:07:54.464395   73800 node_conditions.go:102] verifying NodePressure condition ...
	I0731 18:07:54.467718   73800 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0731 18:07:54.467748   73800 node_conditions.go:123] node cpu capacity is 2
	I0731 18:07:54.467761   73800 node_conditions.go:105] duration metric: took 3.3602ms to run NodePressure ...
	I0731 18:07:54.467779   73800 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0731 18:07:50.925369   74203 main.go:141] libmachine: (old-k8s-version-276459) DBG | domain old-k8s-version-276459 has defined MAC address 52:54:00:79:9d:96 in network mk-old-k8s-version-276459
	I0731 18:07:50.925835   74203 main.go:141] libmachine: (old-k8s-version-276459) DBG | unable to find current IP address of domain old-k8s-version-276459 in network mk-old-k8s-version-276459
	I0731 18:07:50.925856   74203 main.go:141] libmachine: (old-k8s-version-276459) DBG | I0731 18:07:50.925790   75164 retry.go:31] will retry after 2.875577792s: waiting for machine to come up
	I0731 18:07:53.802729   74203 main.go:141] libmachine: (old-k8s-version-276459) DBG | domain old-k8s-version-276459 has defined MAC address 52:54:00:79:9d:96 in network mk-old-k8s-version-276459
	I0731 18:07:53.803164   74203 main.go:141] libmachine: (old-k8s-version-276459) DBG | unable to find current IP address of domain old-k8s-version-276459 in network mk-old-k8s-version-276459
	I0731 18:07:53.803192   74203 main.go:141] libmachine: (old-k8s-version-276459) DBG | I0731 18:07:53.803122   75164 retry.go:31] will retry after 2.352020729s: waiting for machine to come up
	I0731 18:07:51.279883   73696 pod_ready.go:102] pod "metrics-server-569cc877fc-64hp4" in "kube-system" namespace has status "Ready":"False"
	I0731 18:07:53.279992   73696 pod_ready.go:102] pod "metrics-server-569cc877fc-64hp4" in "kube-system" namespace has status "Ready":"False"
	I0731 18:07:55.778812   73696 pod_ready.go:102] pod "metrics-server-569cc877fc-64hp4" in "kube-system" namespace has status "Ready":"False"
	I0731 18:07:54.732921   73800 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0731 18:07:54.736779   73800 kubeadm.go:739] kubelet initialised
	I0731 18:07:54.736798   73800 kubeadm.go:740] duration metric: took 3.850446ms waiting for restarted kubelet to initialise ...
	I0731 18:07:54.736809   73800 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0731 18:07:54.741733   73800 pod_ready.go:78] waiting up to 4m0s for pod "coredns-7db6d8ff4d-h6ckp" in "kube-system" namespace to be "Ready" ...
	I0731 18:07:54.745722   73800 pod_ready.go:97] node "embed-certs-436067" hosting pod "coredns-7db6d8ff4d-h6ckp" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-436067" has status "Ready":"False"
	I0731 18:07:54.745742   73800 pod_ready.go:81] duration metric: took 3.986968ms for pod "coredns-7db6d8ff4d-h6ckp" in "kube-system" namespace to be "Ready" ...
	E0731 18:07:54.745751   73800 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-436067" hosting pod "coredns-7db6d8ff4d-h6ckp" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-436067" has status "Ready":"False"
	I0731 18:07:54.745757   73800 pod_ready.go:78] waiting up to 4m0s for pod "etcd-embed-certs-436067" in "kube-system" namespace to be "Ready" ...
	I0731 18:07:54.749650   73800 pod_ready.go:97] node "embed-certs-436067" hosting pod "etcd-embed-certs-436067" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-436067" has status "Ready":"False"
	I0731 18:07:54.749666   73800 pod_ready.go:81] duration metric: took 3.895483ms for pod "etcd-embed-certs-436067" in "kube-system" namespace to be "Ready" ...
	E0731 18:07:54.749673   73800 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-436067" hosting pod "etcd-embed-certs-436067" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-436067" has status "Ready":"False"
	I0731 18:07:54.749679   73800 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-embed-certs-436067" in "kube-system" namespace to be "Ready" ...
	I0731 18:07:54.753326   73800 pod_ready.go:97] node "embed-certs-436067" hosting pod "kube-apiserver-embed-certs-436067" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-436067" has status "Ready":"False"
	I0731 18:07:54.753351   73800 pod_ready.go:81] duration metric: took 3.66496ms for pod "kube-apiserver-embed-certs-436067" in "kube-system" namespace to be "Ready" ...
	E0731 18:07:54.753362   73800 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-436067" hosting pod "kube-apiserver-embed-certs-436067" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-436067" has status "Ready":"False"
	I0731 18:07:54.753370   73800 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-436067" in "kube-system" namespace to be "Ready" ...
	I0731 18:07:54.857956   73800 pod_ready.go:97] node "embed-certs-436067" hosting pod "kube-controller-manager-embed-certs-436067" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-436067" has status "Ready":"False"
	I0731 18:07:54.857978   73800 pod_ready.go:81] duration metric: took 104.599259ms for pod "kube-controller-manager-embed-certs-436067" in "kube-system" namespace to be "Ready" ...
	E0731 18:07:54.857988   73800 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-436067" hosting pod "kube-controller-manager-embed-certs-436067" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-436067" has status "Ready":"False"
	I0731 18:07:54.857995   73800 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-tl9pj" in "kube-system" namespace to be "Ready" ...
	I0731 18:07:55.257589   73800 pod_ready.go:92] pod "kube-proxy-tl9pj" in "kube-system" namespace has status "Ready":"True"
	I0731 18:07:55.257621   73800 pod_ready.go:81] duration metric: took 399.617003ms for pod "kube-proxy-tl9pj" in "kube-system" namespace to be "Ready" ...
	I0731 18:07:55.257630   73800 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-embed-certs-436067" in "kube-system" namespace to be "Ready" ...
	I0731 18:07:57.262770   73800 pod_ready.go:102] pod "kube-scheduler-embed-certs-436067" in "kube-system" namespace has status "Ready":"False"
	I0731 18:07:59.271094   73800 pod_ready.go:102] pod "kube-scheduler-embed-certs-436067" in "kube-system" namespace has status "Ready":"False"
	I0731 18:07:56.157721   74203 main.go:141] libmachine: (old-k8s-version-276459) DBG | domain old-k8s-version-276459 has defined MAC address 52:54:00:79:9d:96 in network mk-old-k8s-version-276459
	I0731 18:07:56.158176   74203 main.go:141] libmachine: (old-k8s-version-276459) DBG | unable to find current IP address of domain old-k8s-version-276459 in network mk-old-k8s-version-276459
	I0731 18:07:56.158216   74203 main.go:141] libmachine: (old-k8s-version-276459) DBG | I0731 18:07:56.158110   75164 retry.go:31] will retry after 3.552824334s: waiting for machine to come up
	I0731 18:07:59.712249   74203 main.go:141] libmachine: (old-k8s-version-276459) DBG | domain old-k8s-version-276459 has defined MAC address 52:54:00:79:9d:96 in network mk-old-k8s-version-276459
	I0731 18:07:59.712759   74203 main.go:141] libmachine: (old-k8s-version-276459) Found IP for machine: 192.168.39.26
	I0731 18:07:59.712783   74203 main.go:141] libmachine: (old-k8s-version-276459) DBG | domain old-k8s-version-276459 has current primary IP address 192.168.39.26 and MAC address 52:54:00:79:9d:96 in network mk-old-k8s-version-276459
	I0731 18:07:59.712793   74203 main.go:141] libmachine: (old-k8s-version-276459) Reserving static IP address...
	I0731 18:07:59.713268   74203 main.go:141] libmachine: (old-k8s-version-276459) DBG | found host DHCP lease matching {name: "old-k8s-version-276459", mac: "52:54:00:79:9d:96", ip: "192.168.39.26"} in network mk-old-k8s-version-276459: {Iface:virbr3 ExpiryTime:2024-07-31 19:07:52 +0000 UTC Type:0 Mac:52:54:00:79:9d:96 Iaid: IPaddr:192.168.39.26 Prefix:24 Hostname:old-k8s-version-276459 Clientid:01:52:54:00:79:9d:96}
	I0731 18:07:59.713297   74203 main.go:141] libmachine: (old-k8s-version-276459) DBG | skip adding static IP to network mk-old-k8s-version-276459 - found existing host DHCP lease matching {name: "old-k8s-version-276459", mac: "52:54:00:79:9d:96", ip: "192.168.39.26"}
	I0731 18:07:59.713320   74203 main.go:141] libmachine: (old-k8s-version-276459) Reserved static IP address: 192.168.39.26
	I0731 18:07:59.713343   74203 main.go:141] libmachine: (old-k8s-version-276459) Waiting for SSH to be available...
	I0731 18:07:59.713355   74203 main.go:141] libmachine: (old-k8s-version-276459) DBG | Getting to WaitForSSH function...
	I0731 18:07:59.716068   74203 main.go:141] libmachine: (old-k8s-version-276459) DBG | domain old-k8s-version-276459 has defined MAC address 52:54:00:79:9d:96 in network mk-old-k8s-version-276459
	I0731 18:07:59.716456   74203 main.go:141] libmachine: (old-k8s-version-276459) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:79:9d:96", ip: ""} in network mk-old-k8s-version-276459: {Iface:virbr3 ExpiryTime:2024-07-31 19:07:52 +0000 UTC Type:0 Mac:52:54:00:79:9d:96 Iaid: IPaddr:192.168.39.26 Prefix:24 Hostname:old-k8s-version-276459 Clientid:01:52:54:00:79:9d:96}
	I0731 18:07:59.716490   74203 main.go:141] libmachine: (old-k8s-version-276459) DBG | domain old-k8s-version-276459 has defined IP address 192.168.39.26 and MAC address 52:54:00:79:9d:96 in network mk-old-k8s-version-276459
	I0731 18:07:59.716701   74203 main.go:141] libmachine: (old-k8s-version-276459) DBG | Using SSH client type: external
	I0731 18:07:59.716725   74203 main.go:141] libmachine: (old-k8s-version-276459) DBG | Using SSH private key: /home/jenkins/minikube-integration/19349-8084/.minikube/machines/old-k8s-version-276459/id_rsa (-rw-------)
	I0731 18:07:59.716762   74203 main.go:141] libmachine: (old-k8s-version-276459) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.26 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19349-8084/.minikube/machines/old-k8s-version-276459/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0731 18:07:59.716776   74203 main.go:141] libmachine: (old-k8s-version-276459) DBG | About to run SSH command:
	I0731 18:07:59.716792   74203 main.go:141] libmachine: (old-k8s-version-276459) DBG | exit 0
	I0731 18:07:59.847720   74203 main.go:141] libmachine: (old-k8s-version-276459) DBG | SSH cmd err, output: <nil>: 
	I0731 18:07:59.848089   74203 main.go:141] libmachine: (old-k8s-version-276459) Calling .GetConfigRaw
	I0731 18:07:59.848847   74203 main.go:141] libmachine: (old-k8s-version-276459) Calling .GetIP
	I0731 18:07:59.851632   74203 main.go:141] libmachine: (old-k8s-version-276459) DBG | domain old-k8s-version-276459 has defined MAC address 52:54:00:79:9d:96 in network mk-old-k8s-version-276459
	I0731 18:07:59.852004   74203 main.go:141] libmachine: (old-k8s-version-276459) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:79:9d:96", ip: ""} in network mk-old-k8s-version-276459: {Iface:virbr3 ExpiryTime:2024-07-31 19:07:52 +0000 UTC Type:0 Mac:52:54:00:79:9d:96 Iaid: IPaddr:192.168.39.26 Prefix:24 Hostname:old-k8s-version-276459 Clientid:01:52:54:00:79:9d:96}
	I0731 18:07:59.852030   74203 main.go:141] libmachine: (old-k8s-version-276459) DBG | domain old-k8s-version-276459 has defined IP address 192.168.39.26 and MAC address 52:54:00:79:9d:96 in network mk-old-k8s-version-276459
	I0731 18:07:59.852321   74203 profile.go:143] Saving config to /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/old-k8s-version-276459/config.json ...
	I0731 18:07:59.852505   74203 machine.go:94] provisionDockerMachine start ...
	I0731 18:07:59.852524   74203 main.go:141] libmachine: (old-k8s-version-276459) Calling .DriverName
	I0731 18:07:59.852752   74203 main.go:141] libmachine: (old-k8s-version-276459) Calling .GetSSHHostname
	I0731 18:07:59.855198   74203 main.go:141] libmachine: (old-k8s-version-276459) DBG | domain old-k8s-version-276459 has defined MAC address 52:54:00:79:9d:96 in network mk-old-k8s-version-276459
	I0731 18:07:59.855596   74203 main.go:141] libmachine: (old-k8s-version-276459) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:79:9d:96", ip: ""} in network mk-old-k8s-version-276459: {Iface:virbr3 ExpiryTime:2024-07-31 19:07:52 +0000 UTC Type:0 Mac:52:54:00:79:9d:96 Iaid: IPaddr:192.168.39.26 Prefix:24 Hostname:old-k8s-version-276459 Clientid:01:52:54:00:79:9d:96}
	I0731 18:07:59.855626   74203 main.go:141] libmachine: (old-k8s-version-276459) DBG | domain old-k8s-version-276459 has defined IP address 192.168.39.26 and MAC address 52:54:00:79:9d:96 in network mk-old-k8s-version-276459
	I0731 18:07:59.855756   74203 main.go:141] libmachine: (old-k8s-version-276459) Calling .GetSSHPort
	I0731 18:07:59.855920   74203 main.go:141] libmachine: (old-k8s-version-276459) Calling .GetSSHKeyPath
	I0731 18:07:59.856071   74203 main.go:141] libmachine: (old-k8s-version-276459) Calling .GetSSHKeyPath
	I0731 18:07:59.856208   74203 main.go:141] libmachine: (old-k8s-version-276459) Calling .GetSSHUsername
	I0731 18:07:59.856372   74203 main.go:141] libmachine: Using SSH client type: native
	I0731 18:07:59.856601   74203 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.26 22 <nil> <nil>}
	I0731 18:07:59.856614   74203 main.go:141] libmachine: About to run SSH command:
	hostname
	I0731 18:07:59.963492   74203 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0731 18:07:59.963524   74203 main.go:141] libmachine: (old-k8s-version-276459) Calling .GetMachineName
	I0731 18:07:59.963762   74203 buildroot.go:166] provisioning hostname "old-k8s-version-276459"
	I0731 18:07:59.963794   74203 main.go:141] libmachine: (old-k8s-version-276459) Calling .GetMachineName
	I0731 18:07:59.963992   74203 main.go:141] libmachine: (old-k8s-version-276459) Calling .GetSSHHostname
	I0731 18:07:59.967261   74203 main.go:141] libmachine: (old-k8s-version-276459) DBG | domain old-k8s-version-276459 has defined MAC address 52:54:00:79:9d:96 in network mk-old-k8s-version-276459
	I0731 18:07:59.967725   74203 main.go:141] libmachine: (old-k8s-version-276459) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:79:9d:96", ip: ""} in network mk-old-k8s-version-276459: {Iface:virbr3 ExpiryTime:2024-07-31 19:07:52 +0000 UTC Type:0 Mac:52:54:00:79:9d:96 Iaid: IPaddr:192.168.39.26 Prefix:24 Hostname:old-k8s-version-276459 Clientid:01:52:54:00:79:9d:96}
	I0731 18:07:59.967762   74203 main.go:141] libmachine: (old-k8s-version-276459) DBG | domain old-k8s-version-276459 has defined IP address 192.168.39.26 and MAC address 52:54:00:79:9d:96 in network mk-old-k8s-version-276459
	I0731 18:07:59.967938   74203 main.go:141] libmachine: (old-k8s-version-276459) Calling .GetSSHPort
	I0731 18:07:59.968131   74203 main.go:141] libmachine: (old-k8s-version-276459) Calling .GetSSHKeyPath
	I0731 18:07:59.968316   74203 main.go:141] libmachine: (old-k8s-version-276459) Calling .GetSSHKeyPath
	I0731 18:07:59.968487   74203 main.go:141] libmachine: (old-k8s-version-276459) Calling .GetSSHUsername
	I0731 18:07:59.968687   74203 main.go:141] libmachine: Using SSH client type: native
	I0731 18:07:59.968872   74203 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.26 22 <nil> <nil>}
	I0731 18:07:59.968890   74203 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-276459 && echo "old-k8s-version-276459" | sudo tee /etc/hostname
	I0731 18:08:00.084360   74203 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-276459
	
	I0731 18:08:00.084390   74203 main.go:141] libmachine: (old-k8s-version-276459) Calling .GetSSHHostname
	I0731 18:08:00.087433   74203 main.go:141] libmachine: (old-k8s-version-276459) DBG | domain old-k8s-version-276459 has defined MAC address 52:54:00:79:9d:96 in network mk-old-k8s-version-276459
	I0731 18:08:00.087833   74203 main.go:141] libmachine: (old-k8s-version-276459) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:79:9d:96", ip: ""} in network mk-old-k8s-version-276459: {Iface:virbr3 ExpiryTime:2024-07-31 19:07:52 +0000 UTC Type:0 Mac:52:54:00:79:9d:96 Iaid: IPaddr:192.168.39.26 Prefix:24 Hostname:old-k8s-version-276459 Clientid:01:52:54:00:79:9d:96}
	I0731 18:08:00.087862   74203 main.go:141] libmachine: (old-k8s-version-276459) DBG | domain old-k8s-version-276459 has defined IP address 192.168.39.26 and MAC address 52:54:00:79:9d:96 in network mk-old-k8s-version-276459
	I0731 18:08:00.088016   74203 main.go:141] libmachine: (old-k8s-version-276459) Calling .GetSSHPort
	I0731 18:08:00.088187   74203 main.go:141] libmachine: (old-k8s-version-276459) Calling .GetSSHKeyPath
	I0731 18:08:00.088371   74203 main.go:141] libmachine: (old-k8s-version-276459) Calling .GetSSHKeyPath
	I0731 18:08:00.088521   74203 main.go:141] libmachine: (old-k8s-version-276459) Calling .GetSSHUsername
	I0731 18:08:00.088719   74203 main.go:141] libmachine: Using SSH client type: native
	I0731 18:08:00.088893   74203 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.26 22 <nil> <nil>}
	I0731 18:08:00.088915   74203 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-276459' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-276459/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-276459' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0731 18:08:00.200012   74203 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0731 18:08:00.200038   74203 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19349-8084/.minikube CaCertPath:/home/jenkins/minikube-integration/19349-8084/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19349-8084/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19349-8084/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19349-8084/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19349-8084/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19349-8084/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19349-8084/.minikube}
	I0731 18:08:00.200069   74203 buildroot.go:174] setting up certificates
	I0731 18:08:00.200081   74203 provision.go:84] configureAuth start
	I0731 18:08:00.200093   74203 main.go:141] libmachine: (old-k8s-version-276459) Calling .GetMachineName
	I0731 18:08:00.200360   74203 main.go:141] libmachine: (old-k8s-version-276459) Calling .GetIP
	I0731 18:08:00.203352   74203 main.go:141] libmachine: (old-k8s-version-276459) DBG | domain old-k8s-version-276459 has defined MAC address 52:54:00:79:9d:96 in network mk-old-k8s-version-276459
	I0731 18:08:00.203694   74203 main.go:141] libmachine: (old-k8s-version-276459) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:79:9d:96", ip: ""} in network mk-old-k8s-version-276459: {Iface:virbr3 ExpiryTime:2024-07-31 19:07:52 +0000 UTC Type:0 Mac:52:54:00:79:9d:96 Iaid: IPaddr:192.168.39.26 Prefix:24 Hostname:old-k8s-version-276459 Clientid:01:52:54:00:79:9d:96}
	I0731 18:08:00.203721   74203 main.go:141] libmachine: (old-k8s-version-276459) DBG | domain old-k8s-version-276459 has defined IP address 192.168.39.26 and MAC address 52:54:00:79:9d:96 in network mk-old-k8s-version-276459
	I0731 18:08:00.203951   74203 main.go:141] libmachine: (old-k8s-version-276459) Calling .GetSSHHostname
	I0731 18:08:00.206061   74203 main.go:141] libmachine: (old-k8s-version-276459) DBG | domain old-k8s-version-276459 has defined MAC address 52:54:00:79:9d:96 in network mk-old-k8s-version-276459
	I0731 18:08:00.206398   74203 main.go:141] libmachine: (old-k8s-version-276459) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:79:9d:96", ip: ""} in network mk-old-k8s-version-276459: {Iface:virbr3 ExpiryTime:2024-07-31 19:07:52 +0000 UTC Type:0 Mac:52:54:00:79:9d:96 Iaid: IPaddr:192.168.39.26 Prefix:24 Hostname:old-k8s-version-276459 Clientid:01:52:54:00:79:9d:96}
	I0731 18:08:00.206432   74203 main.go:141] libmachine: (old-k8s-version-276459) DBG | domain old-k8s-version-276459 has defined IP address 192.168.39.26 and MAC address 52:54:00:79:9d:96 in network mk-old-k8s-version-276459
	I0731 18:08:00.206510   74203 provision.go:143] copyHostCerts
	I0731 18:08:00.206580   74203 exec_runner.go:144] found /home/jenkins/minikube-integration/19349-8084/.minikube/ca.pem, removing ...
	I0731 18:08:00.206591   74203 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19349-8084/.minikube/ca.pem
	I0731 18:08:00.206654   74203 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19349-8084/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19349-8084/.minikube/ca.pem (1082 bytes)
	I0731 18:08:00.206759   74203 exec_runner.go:144] found /home/jenkins/minikube-integration/19349-8084/.minikube/cert.pem, removing ...
	I0731 18:08:00.206769   74203 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19349-8084/.minikube/cert.pem
	I0731 18:08:00.206799   74203 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19349-8084/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19349-8084/.minikube/cert.pem (1123 bytes)
	I0731 18:08:00.206876   74203 exec_runner.go:144] found /home/jenkins/minikube-integration/19349-8084/.minikube/key.pem, removing ...
	I0731 18:08:00.206885   74203 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19349-8084/.minikube/key.pem
	I0731 18:08:00.206913   74203 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19349-8084/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19349-8084/.minikube/key.pem (1675 bytes)
	I0731 18:08:00.207047   74203 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19349-8084/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19349-8084/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19349-8084/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-276459 san=[127.0.0.1 192.168.39.26 localhost minikube old-k8s-version-276459]
	I0731 18:08:00.279363   74203 provision.go:177] copyRemoteCerts
	I0731 18:08:00.279423   74203 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0731 18:08:00.279456   74203 main.go:141] libmachine: (old-k8s-version-276459) Calling .GetSSHHostname
	I0731 18:08:00.282234   74203 main.go:141] libmachine: (old-k8s-version-276459) DBG | domain old-k8s-version-276459 has defined MAC address 52:54:00:79:9d:96 in network mk-old-k8s-version-276459
	I0731 18:08:00.282601   74203 main.go:141] libmachine: (old-k8s-version-276459) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:79:9d:96", ip: ""} in network mk-old-k8s-version-276459: {Iface:virbr3 ExpiryTime:2024-07-31 19:07:52 +0000 UTC Type:0 Mac:52:54:00:79:9d:96 Iaid: IPaddr:192.168.39.26 Prefix:24 Hostname:old-k8s-version-276459 Clientid:01:52:54:00:79:9d:96}
	I0731 18:08:00.282630   74203 main.go:141] libmachine: (old-k8s-version-276459) DBG | domain old-k8s-version-276459 has defined IP address 192.168.39.26 and MAC address 52:54:00:79:9d:96 in network mk-old-k8s-version-276459
	I0731 18:08:00.282751   74203 main.go:141] libmachine: (old-k8s-version-276459) Calling .GetSSHPort
	I0731 18:08:00.283004   74203 main.go:141] libmachine: (old-k8s-version-276459) Calling .GetSSHKeyPath
	I0731 18:08:00.283178   74203 main.go:141] libmachine: (old-k8s-version-276459) Calling .GetSSHUsername
	I0731 18:08:00.283361   74203 sshutil.go:53] new ssh client: &{IP:192.168.39.26 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19349-8084/.minikube/machines/old-k8s-version-276459/id_rsa Username:docker}
	I0731 18:08:00.935990   73479 start.go:364] duration metric: took 51.517312901s to acquireMachinesLock for "no-preload-673754"
	I0731 18:08:00.936054   73479 start.go:96] Skipping create...Using existing machine configuration
	I0731 18:08:00.936066   73479 fix.go:54] fixHost starting: 
	I0731 18:08:00.936534   73479 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 18:08:00.936589   73479 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 18:08:00.954868   73479 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39013
	I0731 18:08:00.955405   73479 main.go:141] libmachine: () Calling .GetVersion
	I0731 18:08:00.955980   73479 main.go:141] libmachine: Using API Version  1
	I0731 18:08:00.956012   73479 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 18:08:00.956386   73479 main.go:141] libmachine: () Calling .GetMachineName
	I0731 18:08:00.956589   73479 main.go:141] libmachine: (no-preload-673754) Calling .DriverName
	I0731 18:08:00.956752   73479 main.go:141] libmachine: (no-preload-673754) Calling .GetState
	I0731 18:08:00.958461   73479 fix.go:112] recreateIfNeeded on no-preload-673754: state=Stopped err=<nil>
	I0731 18:08:00.958485   73479 main.go:141] libmachine: (no-preload-673754) Calling .DriverName
	W0731 18:08:00.958655   73479 fix.go:138] unexpected machine state, will restart: <nil>
	I0731 18:08:00.960117   73479 out.go:177] * Restarting existing kvm2 VM for "no-preload-673754" ...
	I0731 18:07:57.779258   73696 pod_ready.go:102] pod "metrics-server-569cc877fc-64hp4" in "kube-system" namespace has status "Ready":"False"
	I0731 18:07:59.780834   73696 pod_ready.go:102] pod "metrics-server-569cc877fc-64hp4" in "kube-system" namespace has status "Ready":"False"
	I0731 18:08:00.961340   73479 main.go:141] libmachine: (no-preload-673754) Calling .Start
	I0731 18:08:00.961543   73479 main.go:141] libmachine: (no-preload-673754) Ensuring networks are active...
	I0731 18:08:00.962332   73479 main.go:141] libmachine: (no-preload-673754) Ensuring network default is active
	I0731 18:08:00.962661   73479 main.go:141] libmachine: (no-preload-673754) Ensuring network mk-no-preload-673754 is active
	I0731 18:08:00.963165   73479 main.go:141] libmachine: (no-preload-673754) Getting domain xml...
	I0731 18:08:00.963982   73479 main.go:141] libmachine: (no-preload-673754) Creating domain...
	I0731 18:08:00.365254   74203 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19349-8084/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0731 18:08:00.389729   74203 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19349-8084/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0731 18:08:00.413143   74203 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19349-8084/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0731 18:08:00.436040   74203 provision.go:87] duration metric: took 235.932619ms to configureAuth
	I0731 18:08:00.436080   74203 buildroot.go:189] setting minikube options for container-runtime
	I0731 18:08:00.436288   74203 config.go:182] Loaded profile config "old-k8s-version-276459": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0731 18:08:00.436403   74203 main.go:141] libmachine: (old-k8s-version-276459) Calling .GetSSHHostname
	I0731 18:08:00.439184   74203 main.go:141] libmachine: (old-k8s-version-276459) DBG | domain old-k8s-version-276459 has defined MAC address 52:54:00:79:9d:96 in network mk-old-k8s-version-276459
	I0731 18:08:00.439543   74203 main.go:141] libmachine: (old-k8s-version-276459) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:79:9d:96", ip: ""} in network mk-old-k8s-version-276459: {Iface:virbr3 ExpiryTime:2024-07-31 19:07:52 +0000 UTC Type:0 Mac:52:54:00:79:9d:96 Iaid: IPaddr:192.168.39.26 Prefix:24 Hostname:old-k8s-version-276459 Clientid:01:52:54:00:79:9d:96}
	I0731 18:08:00.439575   74203 main.go:141] libmachine: (old-k8s-version-276459) DBG | domain old-k8s-version-276459 has defined IP address 192.168.39.26 and MAC address 52:54:00:79:9d:96 in network mk-old-k8s-version-276459
	I0731 18:08:00.439734   74203 main.go:141] libmachine: (old-k8s-version-276459) Calling .GetSSHPort
	I0731 18:08:00.439898   74203 main.go:141] libmachine: (old-k8s-version-276459) Calling .GetSSHKeyPath
	I0731 18:08:00.440093   74203 main.go:141] libmachine: (old-k8s-version-276459) Calling .GetSSHKeyPath
	I0731 18:08:00.440271   74203 main.go:141] libmachine: (old-k8s-version-276459) Calling .GetSSHUsername
	I0731 18:08:00.440450   74203 main.go:141] libmachine: Using SSH client type: native
	I0731 18:08:00.440661   74203 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.26 22 <nil> <nil>}
	I0731 18:08:00.440679   74203 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0731 18:08:00.707438   74203 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0731 18:08:00.707467   74203 machine.go:97] duration metric: took 854.948491ms to provisionDockerMachine
	I0731 18:08:00.707482   74203 start.go:293] postStartSetup for "old-k8s-version-276459" (driver="kvm2")
	I0731 18:08:00.707494   74203 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0731 18:08:00.707510   74203 main.go:141] libmachine: (old-k8s-version-276459) Calling .DriverName
	I0731 18:08:00.707811   74203 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0731 18:08:00.707837   74203 main.go:141] libmachine: (old-k8s-version-276459) Calling .GetSSHHostname
	I0731 18:08:00.710726   74203 main.go:141] libmachine: (old-k8s-version-276459) DBG | domain old-k8s-version-276459 has defined MAC address 52:54:00:79:9d:96 in network mk-old-k8s-version-276459
	I0731 18:08:00.711285   74203 main.go:141] libmachine: (old-k8s-version-276459) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:79:9d:96", ip: ""} in network mk-old-k8s-version-276459: {Iface:virbr3 ExpiryTime:2024-07-31 19:07:52 +0000 UTC Type:0 Mac:52:54:00:79:9d:96 Iaid: IPaddr:192.168.39.26 Prefix:24 Hostname:old-k8s-version-276459 Clientid:01:52:54:00:79:9d:96}
	I0731 18:08:00.711315   74203 main.go:141] libmachine: (old-k8s-version-276459) DBG | domain old-k8s-version-276459 has defined IP address 192.168.39.26 and MAC address 52:54:00:79:9d:96 in network mk-old-k8s-version-276459
	I0731 18:08:00.711458   74203 main.go:141] libmachine: (old-k8s-version-276459) Calling .GetSSHPort
	I0731 18:08:00.711703   74203 main.go:141] libmachine: (old-k8s-version-276459) Calling .GetSSHKeyPath
	I0731 18:08:00.711895   74203 main.go:141] libmachine: (old-k8s-version-276459) Calling .GetSSHUsername
	I0731 18:08:00.712049   74203 sshutil.go:53] new ssh client: &{IP:192.168.39.26 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19349-8084/.minikube/machines/old-k8s-version-276459/id_rsa Username:docker}
	I0731 18:08:00.793719   74203 ssh_runner.go:195] Run: cat /etc/os-release
	I0731 18:08:00.797858   74203 info.go:137] Remote host: Buildroot 2023.02.9
	I0731 18:08:00.797888   74203 filesync.go:126] Scanning /home/jenkins/minikube-integration/19349-8084/.minikube/addons for local assets ...
	I0731 18:08:00.797960   74203 filesync.go:126] Scanning /home/jenkins/minikube-integration/19349-8084/.minikube/files for local assets ...
	I0731 18:08:00.798038   74203 filesync.go:149] local asset: /home/jenkins/minikube-integration/19349-8084/.minikube/files/etc/ssl/certs/152592.pem -> 152592.pem in /etc/ssl/certs
	I0731 18:08:00.798130   74203 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0731 18:08:00.807013   74203 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19349-8084/.minikube/files/etc/ssl/certs/152592.pem --> /etc/ssl/certs/152592.pem (1708 bytes)
	I0731 18:08:00.829440   74203 start.go:296] duration metric: took 121.944271ms for postStartSetup
	I0731 18:08:00.829487   74203 fix.go:56] duration metric: took 19.277312964s for fixHost
	I0731 18:08:00.829518   74203 main.go:141] libmachine: (old-k8s-version-276459) Calling .GetSSHHostname
	I0731 18:08:00.832718   74203 main.go:141] libmachine: (old-k8s-version-276459) DBG | domain old-k8s-version-276459 has defined MAC address 52:54:00:79:9d:96 in network mk-old-k8s-version-276459
	I0731 18:08:00.833048   74203 main.go:141] libmachine: (old-k8s-version-276459) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:79:9d:96", ip: ""} in network mk-old-k8s-version-276459: {Iface:virbr3 ExpiryTime:2024-07-31 19:07:52 +0000 UTC Type:0 Mac:52:54:00:79:9d:96 Iaid: IPaddr:192.168.39.26 Prefix:24 Hostname:old-k8s-version-276459 Clientid:01:52:54:00:79:9d:96}
	I0731 18:08:00.833082   74203 main.go:141] libmachine: (old-k8s-version-276459) DBG | domain old-k8s-version-276459 has defined IP address 192.168.39.26 and MAC address 52:54:00:79:9d:96 in network mk-old-k8s-version-276459
	I0731 18:08:00.833317   74203 main.go:141] libmachine: (old-k8s-version-276459) Calling .GetSSHPort
	I0731 18:08:00.833533   74203 main.go:141] libmachine: (old-k8s-version-276459) Calling .GetSSHKeyPath
	I0731 18:08:00.833752   74203 main.go:141] libmachine: (old-k8s-version-276459) Calling .GetSSHKeyPath
	I0731 18:08:00.833887   74203 main.go:141] libmachine: (old-k8s-version-276459) Calling .GetSSHUsername
	I0731 18:08:00.834189   74203 main.go:141] libmachine: Using SSH client type: native
	I0731 18:08:00.834364   74203 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.26 22 <nil> <nil>}
	I0731 18:08:00.834377   74203 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0731 18:08:00.935834   74203 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722449280.899364873
	
	I0731 18:08:00.935853   74203 fix.go:216] guest clock: 1722449280.899364873
	I0731 18:08:00.935860   74203 fix.go:229] Guest: 2024-07-31 18:08:00.899364873 +0000 UTC Remote: 2024-07-31 18:08:00.829491013 +0000 UTC m=+245.518063325 (delta=69.87386ms)
	I0731 18:08:00.935894   74203 fix.go:200] guest clock delta is within tolerance: 69.87386ms
	I0731 18:08:00.935899   74203 start.go:83] releasing machines lock for "old-k8s-version-276459", held for 19.38376262s
	I0731 18:08:00.935937   74203 main.go:141] libmachine: (old-k8s-version-276459) Calling .DriverName
	I0731 18:08:00.936220   74203 main.go:141] libmachine: (old-k8s-version-276459) Calling .GetIP
	I0731 18:08:00.939282   74203 main.go:141] libmachine: (old-k8s-version-276459) DBG | domain old-k8s-version-276459 has defined MAC address 52:54:00:79:9d:96 in network mk-old-k8s-version-276459
	I0731 18:08:00.939691   74203 main.go:141] libmachine: (old-k8s-version-276459) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:79:9d:96", ip: ""} in network mk-old-k8s-version-276459: {Iface:virbr3 ExpiryTime:2024-07-31 19:07:52 +0000 UTC Type:0 Mac:52:54:00:79:9d:96 Iaid: IPaddr:192.168.39.26 Prefix:24 Hostname:old-k8s-version-276459 Clientid:01:52:54:00:79:9d:96}
	I0731 18:08:00.939722   74203 main.go:141] libmachine: (old-k8s-version-276459) DBG | domain old-k8s-version-276459 has defined IP address 192.168.39.26 and MAC address 52:54:00:79:9d:96 in network mk-old-k8s-version-276459
	I0731 18:08:00.939911   74203 main.go:141] libmachine: (old-k8s-version-276459) Calling .DriverName
	I0731 18:08:00.940506   74203 main.go:141] libmachine: (old-k8s-version-276459) Calling .DriverName
	I0731 18:08:00.940704   74203 main.go:141] libmachine: (old-k8s-version-276459) Calling .DriverName
	I0731 18:08:00.940790   74203 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0731 18:08:00.940831   74203 main.go:141] libmachine: (old-k8s-version-276459) Calling .GetSSHHostname
	I0731 18:08:00.940960   74203 ssh_runner.go:195] Run: cat /version.json
	I0731 18:08:00.941043   74203 main.go:141] libmachine: (old-k8s-version-276459) Calling .GetSSHHostname
	I0731 18:08:00.943883   74203 main.go:141] libmachine: (old-k8s-version-276459) DBG | domain old-k8s-version-276459 has defined MAC address 52:54:00:79:9d:96 in network mk-old-k8s-version-276459
	I0731 18:08:00.943909   74203 main.go:141] libmachine: (old-k8s-version-276459) DBG | domain old-k8s-version-276459 has defined MAC address 52:54:00:79:9d:96 in network mk-old-k8s-version-276459
	I0731 18:08:00.944361   74203 main.go:141] libmachine: (old-k8s-version-276459) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:79:9d:96", ip: ""} in network mk-old-k8s-version-276459: {Iface:virbr3 ExpiryTime:2024-07-31 19:07:52 +0000 UTC Type:0 Mac:52:54:00:79:9d:96 Iaid: IPaddr:192.168.39.26 Prefix:24 Hostname:old-k8s-version-276459 Clientid:01:52:54:00:79:9d:96}
	I0731 18:08:00.944405   74203 main.go:141] libmachine: (old-k8s-version-276459) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:79:9d:96", ip: ""} in network mk-old-k8s-version-276459: {Iface:virbr3 ExpiryTime:2024-07-31 19:07:52 +0000 UTC Type:0 Mac:52:54:00:79:9d:96 Iaid: IPaddr:192.168.39.26 Prefix:24 Hostname:old-k8s-version-276459 Clientid:01:52:54:00:79:9d:96}
	I0731 18:08:00.944429   74203 main.go:141] libmachine: (old-k8s-version-276459) DBG | domain old-k8s-version-276459 has defined IP address 192.168.39.26 and MAC address 52:54:00:79:9d:96 in network mk-old-k8s-version-276459
	I0731 18:08:00.944442   74203 main.go:141] libmachine: (old-k8s-version-276459) DBG | domain old-k8s-version-276459 has defined IP address 192.168.39.26 and MAC address 52:54:00:79:9d:96 in network mk-old-k8s-version-276459
	I0731 18:08:00.944542   74203 main.go:141] libmachine: (old-k8s-version-276459) Calling .GetSSHPort
	I0731 18:08:00.944639   74203 main.go:141] libmachine: (old-k8s-version-276459) Calling .GetSSHPort
	I0731 18:08:00.944766   74203 main.go:141] libmachine: (old-k8s-version-276459) Calling .GetSSHKeyPath
	I0731 18:08:00.944817   74203 main.go:141] libmachine: (old-k8s-version-276459) Calling .GetSSHKeyPath
	I0731 18:08:00.944899   74203 main.go:141] libmachine: (old-k8s-version-276459) Calling .GetSSHUsername
	I0731 18:08:00.944979   74203 main.go:141] libmachine: (old-k8s-version-276459) Calling .GetSSHUsername
	I0731 18:08:00.945039   74203 sshutil.go:53] new ssh client: &{IP:192.168.39.26 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19349-8084/.minikube/machines/old-k8s-version-276459/id_rsa Username:docker}
	I0731 18:08:00.945110   74203 sshutil.go:53] new ssh client: &{IP:192.168.39.26 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19349-8084/.minikube/machines/old-k8s-version-276459/id_rsa Username:docker}
	I0731 18:08:01.023818   74203 ssh_runner.go:195] Run: systemctl --version
	I0731 18:08:01.063390   74203 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0731 18:08:01.205084   74203 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0731 18:08:01.210972   74203 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0731 18:08:01.211049   74203 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0731 18:08:01.226156   74203 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0731 18:08:01.226180   74203 start.go:495] detecting cgroup driver to use...
	I0731 18:08:01.226257   74203 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0731 18:08:01.241506   74203 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0731 18:08:01.256615   74203 docker.go:217] disabling cri-docker service (if available) ...
	I0731 18:08:01.256671   74203 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0731 18:08:01.271515   74203 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0731 18:08:01.287213   74203 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0731 18:08:01.415827   74203 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0731 18:08:01.578122   74203 docker.go:233] disabling docker service ...
	I0731 18:08:01.578208   74203 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0731 18:08:01.596564   74203 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0731 18:08:01.611984   74203 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0731 18:08:01.748972   74203 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0731 18:08:01.896911   74203 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0731 18:08:01.912921   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0731 18:08:01.931671   74203 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0731 18:08:01.931749   74203 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 18:08:01.943737   74203 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0731 18:08:01.943798   74203 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 18:08:01.954571   74203 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 18:08:01.964733   74203 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
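The three sed edits above point cri-o at the v1.20-era pause image and switch it to the cgroupfs driver with conmon running in the pod cgroup. A minimal way to sanity-check the resulting drop-in on the node (a sketch; the surrounding layout of 02-crio.conf is not shown in the log, only the values set above):
	sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup' /etc/crio/crio.conf.d/02-crio.conf
	# expected, roughly:
	#   pause_image = "registry.k8s.io/pause:3.2"
	#   cgroup_manager = "cgroupfs"
	#   conmon_cgroup = "pod"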
	I0731 18:08:01.976087   74203 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0731 18:08:01.987193   74203 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0731 18:08:01.996620   74203 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0731 18:08:01.996670   74203 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0731 18:08:02.011046   74203 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
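The sysctl probe above fails only because the br_netfilter module is not loaded yet, so the run falls back to loading the module and enabling IPv4 forwarding directly. The equivalent manual sequence (a sketch; re-reading the sysctl afterwards is just a verification step, not something the log itself does):
	sudo modprobe br_netfilter
	sudo sysctl net.bridge.bridge-nf-call-iptables        # resolves once the module is loaded
	sudo sh -c 'echo 1 > /proc/sys/net/ipv4/ip_forward'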
	I0731 18:08:02.022199   74203 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0731 18:08:02.147855   74203 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0731 18:08:02.309868   74203 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0731 18:08:02.309940   74203 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0731 18:08:02.314966   74203 start.go:563] Will wait 60s for crictl version
	I0731 18:08:02.315031   74203 ssh_runner.go:195] Run: which crictl
	I0731 18:08:02.318685   74203 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0731 18:08:02.359361   74203 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0731 18:08:02.359460   74203 ssh_runner.go:195] Run: crio --version
	I0731 18:08:02.387053   74203 ssh_runner.go:195] Run: crio --version
	I0731 18:08:02.417054   74203 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0731 18:08:01.265323   73800 pod_ready.go:92] pod "kube-scheduler-embed-certs-436067" in "kube-system" namespace has status "Ready":"True"
	I0731 18:08:01.265363   73800 pod_ready.go:81] duration metric: took 6.007715949s for pod "kube-scheduler-embed-certs-436067" in "kube-system" namespace to be "Ready" ...
	I0731 18:08:01.265376   73800 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-569cc877fc-fzxrw" in "kube-system" namespace to be "Ready" ...
	I0731 18:08:03.271693   73800 pod_ready.go:102] pod "metrics-server-569cc877fc-fzxrw" in "kube-system" namespace has status "Ready":"False"
	I0731 18:08:02.418272   74203 main.go:141] libmachine: (old-k8s-version-276459) Calling .GetIP
	I0731 18:08:02.421211   74203 main.go:141] libmachine: (old-k8s-version-276459) DBG | domain old-k8s-version-276459 has defined MAC address 52:54:00:79:9d:96 in network mk-old-k8s-version-276459
	I0731 18:08:02.421714   74203 main.go:141] libmachine: (old-k8s-version-276459) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:79:9d:96", ip: ""} in network mk-old-k8s-version-276459: {Iface:virbr3 ExpiryTime:2024-07-31 19:07:52 +0000 UTC Type:0 Mac:52:54:00:79:9d:96 Iaid: IPaddr:192.168.39.26 Prefix:24 Hostname:old-k8s-version-276459 Clientid:01:52:54:00:79:9d:96}
	I0731 18:08:02.421743   74203 main.go:141] libmachine: (old-k8s-version-276459) DBG | domain old-k8s-version-276459 has defined IP address 192.168.39.26 and MAC address 52:54:00:79:9d:96 in network mk-old-k8s-version-276459
	I0731 18:08:02.421949   74203 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0731 18:08:02.425878   74203 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
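The /etc/hosts rewrite above (repeated later for control-plane.minikube.internal) goes through a temporary file followed by sudo cp because a plain '>' redirect would be opened by the unprivileged shell rather than by sudo. Stripped of the minikube plumbing, the pattern is:
	{ grep -v $'\thost.minikube.internal$' /etc/hosts; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$
	sudo cp /tmp/h.$$ /etc/hosts   # the copy, not the redirect, runs with elevated privileges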
	I0731 18:08:02.438082   74203 kubeadm.go:883] updating cluster {Name:old-k8s-version-276459 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersio
n:v1.20.0 ClusterName:old-k8s-version-276459 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.26 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:fal
se MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0731 18:08:02.438222   74203 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0731 18:08:02.438293   74203 ssh_runner.go:195] Run: sudo crictl images --output json
	I0731 18:08:02.484113   74203 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0731 18:08:02.484189   74203 ssh_runner.go:195] Run: which lz4
	I0731 18:08:02.488365   74203 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0731 18:08:02.492321   74203 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0731 18:08:02.492352   74203 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19349-8084/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0731 18:08:03.946187   74203 crio.go:462] duration metric: took 1.457852426s to copy over tarball
	I0731 18:08:03.946261   74203 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0731 18:08:01.781606   73696 pod_ready.go:102] pod "metrics-server-569cc877fc-64hp4" in "kube-system" namespace has status "Ready":"False"
	I0731 18:08:03.781786   73696 pod_ready.go:102] pod "metrics-server-569cc877fc-64hp4" in "kube-system" namespace has status "Ready":"False"
	I0731 18:08:02.287159   73479 main.go:141] libmachine: (no-preload-673754) Waiting to get IP...
	I0731 18:08:02.288338   73479 main.go:141] libmachine: (no-preload-673754) DBG | domain no-preload-673754 has defined MAC address 52:54:00:5a:ec:78 in network mk-no-preload-673754
	I0731 18:08:02.288812   73479 main.go:141] libmachine: (no-preload-673754) DBG | unable to find current IP address of domain no-preload-673754 in network mk-no-preload-673754
	I0731 18:08:02.288879   73479 main.go:141] libmachine: (no-preload-673754) DBG | I0731 18:08:02.288799   75356 retry.go:31] will retry after 229.074083ms: waiting for machine to come up
	I0731 18:08:02.519266   73479 main.go:141] libmachine: (no-preload-673754) DBG | domain no-preload-673754 has defined MAC address 52:54:00:5a:ec:78 in network mk-no-preload-673754
	I0731 18:08:02.519697   73479 main.go:141] libmachine: (no-preload-673754) DBG | unable to find current IP address of domain no-preload-673754 in network mk-no-preload-673754
	I0731 18:08:02.519720   73479 main.go:141] libmachine: (no-preload-673754) DBG | I0731 18:08:02.519663   75356 retry.go:31] will retry after 328.345922ms: waiting for machine to come up
	I0731 18:08:02.849290   73479 main.go:141] libmachine: (no-preload-673754) DBG | domain no-preload-673754 has defined MAC address 52:54:00:5a:ec:78 in network mk-no-preload-673754
	I0731 18:08:02.849839   73479 main.go:141] libmachine: (no-preload-673754) DBG | unable to find current IP address of domain no-preload-673754 in network mk-no-preload-673754
	I0731 18:08:02.849871   73479 main.go:141] libmachine: (no-preload-673754) DBG | I0731 18:08:02.849787   75356 retry.go:31] will retry after 339.030371ms: waiting for machine to come up
	I0731 18:08:03.190065   73479 main.go:141] libmachine: (no-preload-673754) DBG | domain no-preload-673754 has defined MAC address 52:54:00:5a:ec:78 in network mk-no-preload-673754
	I0731 18:08:03.190587   73479 main.go:141] libmachine: (no-preload-673754) DBG | unable to find current IP address of domain no-preload-673754 in network mk-no-preload-673754
	I0731 18:08:03.190620   73479 main.go:141] libmachine: (no-preload-673754) DBG | I0731 18:08:03.190539   75356 retry.go:31] will retry after 514.955663ms: waiting for machine to come up
	I0731 18:08:03.707808   73479 main.go:141] libmachine: (no-preload-673754) DBG | domain no-preload-673754 has defined MAC address 52:54:00:5a:ec:78 in network mk-no-preload-673754
	I0731 18:08:03.708382   73479 main.go:141] libmachine: (no-preload-673754) DBG | unable to find current IP address of domain no-preload-673754 in network mk-no-preload-673754
	I0731 18:08:03.708418   73479 main.go:141] libmachine: (no-preload-673754) DBG | I0731 18:08:03.708349   75356 retry.go:31] will retry after 543.558992ms: waiting for machine to come up
	I0731 18:08:04.253224   73479 main.go:141] libmachine: (no-preload-673754) DBG | domain no-preload-673754 has defined MAC address 52:54:00:5a:ec:78 in network mk-no-preload-673754
	I0731 18:08:04.253760   73479 main.go:141] libmachine: (no-preload-673754) DBG | unable to find current IP address of domain no-preload-673754 in network mk-no-preload-673754
	I0731 18:08:04.253781   73479 main.go:141] libmachine: (no-preload-673754) DBG | I0731 18:08:04.253708   75356 retry.go:31] will retry after 925.348689ms: waiting for machine to come up
	I0731 18:08:05.180439   73479 main.go:141] libmachine: (no-preload-673754) DBG | domain no-preload-673754 has defined MAC address 52:54:00:5a:ec:78 in network mk-no-preload-673754
	I0731 18:08:05.180833   73479 main.go:141] libmachine: (no-preload-673754) DBG | unable to find current IP address of domain no-preload-673754 in network mk-no-preload-673754
	I0731 18:08:05.180857   73479 main.go:141] libmachine: (no-preload-673754) DBG | I0731 18:08:05.180786   75356 retry.go:31] will retry after 1.014666798s: waiting for machine to come up
	I0731 18:08:06.196879   73479 main.go:141] libmachine: (no-preload-673754) DBG | domain no-preload-673754 has defined MAC address 52:54:00:5a:ec:78 in network mk-no-preload-673754
	I0731 18:08:06.197321   73479 main.go:141] libmachine: (no-preload-673754) DBG | unable to find current IP address of domain no-preload-673754 in network mk-no-preload-673754
	I0731 18:08:06.197355   73479 main.go:141] libmachine: (no-preload-673754) DBG | I0731 18:08:06.197258   75356 retry.go:31] will retry after 1.163649074s: waiting for machine to come up
	I0731 18:08:05.278001   73800 pod_ready.go:102] pod "metrics-server-569cc877fc-fzxrw" in "kube-system" namespace has status "Ready":"False"
	I0731 18:08:07.771870   73800 pod_ready.go:102] pod "metrics-server-569cc877fc-fzxrw" in "kube-system" namespace has status "Ready":"False"
	I0731 18:08:06.945760   74203 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.99946679s)
	I0731 18:08:06.945790   74203 crio.go:469] duration metric: took 2.999576832s to extract the tarball
	I0731 18:08:06.945800   74203 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0731 18:08:06.989081   74203 ssh_runner.go:195] Run: sudo crictl images --output json
	I0731 18:08:07.024521   74203 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0731 18:08:07.024545   74203 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0731 18:08:07.024615   74203 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0731 18:08:07.024645   74203 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0731 18:08:07.024695   74203 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0731 18:08:07.024729   74203 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0731 18:08:07.024718   74203 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0731 18:08:07.024780   74203 image.go:134] retrieving image: registry.k8s.io/pause:3.2
	I0731 18:08:07.024822   74203 image.go:134] retrieving image: registry.k8s.io/coredns:1.7.0
	I0731 18:08:07.024716   74203 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0731 18:08:07.026228   74203 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0731 18:08:07.026237   74203 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0731 18:08:07.026242   74203 image.go:177] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0731 18:08:07.026263   74203 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0731 18:08:07.026231   74203 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0731 18:08:07.026231   74203 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0731 18:08:07.026863   74203 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0731 18:08:07.027091   74203 image.go:177] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0731 18:08:07.282735   74203 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0731 18:08:07.284464   74203 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0731 18:08:07.287001   74203 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0731 18:08:07.305873   74203 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0731 18:08:07.307144   74203 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0731 18:08:07.311401   74203 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0731 18:08:07.318119   74203 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0731 18:08:07.366929   74203 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0731 18:08:07.366979   74203 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0731 18:08:07.367041   74203 ssh_runner.go:195] Run: which crictl
	I0731 18:08:07.393481   74203 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0731 18:08:07.393534   74203 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0731 18:08:07.393594   74203 ssh_runner.go:195] Run: which crictl
	I0731 18:08:07.441987   74203 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0731 18:08:07.442036   74203 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0731 18:08:07.442083   74203 ssh_runner.go:195] Run: which crictl
	I0731 18:08:07.449033   74203 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0731 18:08:07.449085   74203 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0731 18:08:07.449137   74203 ssh_runner.go:195] Run: which crictl
	I0731 18:08:07.465248   74203 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0731 18:08:07.465291   74203 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0731 18:08:07.465341   74203 ssh_runner.go:195] Run: which crictl
	I0731 18:08:07.476013   74203 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0731 18:08:07.476053   74203 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0731 18:08:07.476074   74203 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0731 18:08:07.476090   74203 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0731 18:08:07.476111   74203 ssh_runner.go:195] Run: which crictl
	I0731 18:08:07.476129   74203 ssh_runner.go:195] Run: which crictl
	I0731 18:08:07.476146   74203 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0731 18:08:07.476111   74203 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0731 18:08:07.476196   74203 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0731 18:08:07.476220   74203 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0731 18:08:07.476273   74203 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0731 18:08:07.592532   74203 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0731 18:08:07.592677   74203 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19349-8084/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0731 18:08:07.592709   74203 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19349-8084/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0731 18:08:07.592797   74203 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0731 18:08:07.637254   74203 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19349-8084/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0731 18:08:07.637276   74203 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19349-8084/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0731 18:08:07.637288   74203 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19349-8084/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0731 18:08:07.637292   74203 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19349-8084/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0731 18:08:07.640419   74203 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19349-8084/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0731 18:08:07.860814   74203 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0731 18:08:08.002115   74203 cache_images.go:92] duration metric: took 977.553376ms to LoadCachedImages
	W0731 18:08:08.002248   74203 out.go:239] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19349-8084/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2: no such file or directory
	I0731 18:08:08.002267   74203 kubeadm.go:934] updating node { 192.168.39.26 8443 v1.20.0 crio true true} ...
	I0731 18:08:08.002404   74203 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-276459 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.39.26
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-276459 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0731 18:08:08.002500   74203 ssh_runner.go:195] Run: crio config
	I0731 18:08:08.059237   74203 cni.go:84] Creating CNI manager for ""
	I0731 18:08:08.059264   74203 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0731 18:08:08.059281   74203 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0731 18:08:08.059313   74203 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.26 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-276459 NodeName:old-k8s-version-276459 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.26"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.26 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt Stati
cPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0731 18:08:08.059503   74203 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.26
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-276459"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.26
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.26"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0731 18:08:08.059575   74203 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0731 18:08:08.070299   74203 binaries.go:44] Found k8s binaries, skipping transfer
	I0731 18:08:08.070388   74203 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0731 18:08:08.082083   74203 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (429 bytes)
	I0731 18:08:08.101728   74203 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0731 18:08:08.120721   74203 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2120 bytes)
	I0731 18:08:08.137997   74203 ssh_runner.go:195] Run: grep 192.168.39.26	control-plane.minikube.internal$ /etc/hosts
	I0731 18:08:08.141797   74203 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.26	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0731 18:08:08.156861   74203 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0731 18:08:08.287700   74203 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0731 18:08:08.307598   74203 certs.go:68] Setting up /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/old-k8s-version-276459 for IP: 192.168.39.26
	I0731 18:08:08.307623   74203 certs.go:194] generating shared ca certs ...
	I0731 18:08:08.307644   74203 certs.go:226] acquiring lock for ca certs: {Name:mk49eed439085f9c30746706460ce213570d1997 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 18:08:08.307811   74203 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19349-8084/.minikube/ca.key
	I0731 18:08:08.307855   74203 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19349-8084/.minikube/proxy-client-ca.key
	I0731 18:08:08.307868   74203 certs.go:256] generating profile certs ...
	I0731 18:08:08.307987   74203 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/old-k8s-version-276459/client.key
	I0731 18:08:08.308062   74203 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/old-k8s-version-276459/apiserver.key.7c620cac
	I0731 18:08:08.308123   74203 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/old-k8s-version-276459/proxy-client.key
	I0731 18:08:08.308283   74203 certs.go:484] found cert: /home/jenkins/minikube-integration/19349-8084/.minikube/certs/15259.pem (1338 bytes)
	W0731 18:08:08.308315   74203 certs.go:480] ignoring /home/jenkins/minikube-integration/19349-8084/.minikube/certs/15259_empty.pem, impossibly tiny 0 bytes
	I0731 18:08:08.308324   74203 certs.go:484] found cert: /home/jenkins/minikube-integration/19349-8084/.minikube/certs/ca-key.pem (1675 bytes)
	I0731 18:08:08.308362   74203 certs.go:484] found cert: /home/jenkins/minikube-integration/19349-8084/.minikube/certs/ca.pem (1082 bytes)
	I0731 18:08:08.308382   74203 certs.go:484] found cert: /home/jenkins/minikube-integration/19349-8084/.minikube/certs/cert.pem (1123 bytes)
	I0731 18:08:08.308402   74203 certs.go:484] found cert: /home/jenkins/minikube-integration/19349-8084/.minikube/certs/key.pem (1675 bytes)
	I0731 18:08:08.308438   74203 certs.go:484] found cert: /home/jenkins/minikube-integration/19349-8084/.minikube/files/etc/ssl/certs/152592.pem (1708 bytes)
	I0731 18:08:08.309095   74203 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19349-8084/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0731 18:08:08.355508   74203 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19349-8084/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0731 18:08:08.391999   74203 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19349-8084/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0731 18:08:08.427937   74203 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19349-8084/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0731 18:08:08.456268   74203 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/old-k8s-version-276459/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0731 18:08:08.486991   74203 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/old-k8s-version-276459/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0731 18:08:08.519564   74203 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/old-k8s-version-276459/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0731 18:08:08.557029   74203 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/old-k8s-version-276459/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0731 18:08:08.583971   74203 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19349-8084/.minikube/files/etc/ssl/certs/152592.pem --> /usr/share/ca-certificates/152592.pem (1708 bytes)
	I0731 18:08:08.608505   74203 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19349-8084/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0731 18:08:08.630279   74203 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19349-8084/.minikube/certs/15259.pem --> /usr/share/ca-certificates/15259.pem (1338 bytes)
	I0731 18:08:08.655012   74203 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0731 18:08:08.671907   74203 ssh_runner.go:195] Run: openssl version
	I0731 18:08:08.677538   74203 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/152592.pem && ln -fs /usr/share/ca-certificates/152592.pem /etc/ssl/certs/152592.pem"
	I0731 18:08:08.687877   74203 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/152592.pem
	I0731 18:08:08.692201   74203 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 31 16:54 /usr/share/ca-certificates/152592.pem
	I0731 18:08:08.692258   74203 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/152592.pem
	I0731 18:08:08.698563   74203 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/152592.pem /etc/ssl/certs/3ec20f2e.0"
	I0731 18:08:08.708986   74203 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0731 18:08:08.719132   74203 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0731 18:08:08.723242   74203 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 31 16:42 /usr/share/ca-certificates/minikubeCA.pem
	I0731 18:08:08.723299   74203 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0731 18:08:08.729032   74203 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0731 18:08:08.739306   74203 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/15259.pem && ln -fs /usr/share/ca-certificates/15259.pem /etc/ssl/certs/15259.pem"
	I0731 18:08:08.749759   74203 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/15259.pem
	I0731 18:08:08.754167   74203 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 31 16:54 /usr/share/ca-certificates/15259.pem
	I0731 18:08:08.754228   74203 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/15259.pem
	I0731 18:08:08.759786   74203 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/15259.pem /etc/ssl/certs/51391683.0"
	I0731 18:08:08.770180   74203 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0731 18:08:08.775414   74203 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0731 18:08:08.781830   74203 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0731 18:08:08.787876   74203 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0731 18:08:08.793927   74203 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0731 18:08:08.800090   74203 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0731 18:08:08.806169   74203 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
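Each openssl probe above uses -checkend 86400, which exits 0 only if the certificate is still valid 24 hours (86400 seconds) from now. A standalone sketch of the same check, using one of the certificate paths already listed:
	openssl x509 -noout -checkend 86400 -in /var/lib/minikube/certs/apiserver-kubelet-client.crt \
	  && echo "ok: valid for at least 24h" \
	  || echo "expires within 24h"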
	I0731 18:08:08.811895   74203 kubeadm.go:392] StartCluster: {Name:old-k8s-version-276459 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v
1.20.0 ClusterName:old-k8s-version-276459 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.26 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false
MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0731 18:08:08.811983   74203 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0731 18:08:08.812040   74203 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0731 18:08:08.853889   74203 cri.go:89] found id: ""
	I0731 18:08:08.853989   74203 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0731 18:08:08.863817   74203 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0731 18:08:08.863837   74203 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0731 18:08:08.863887   74203 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0731 18:08:08.873411   74203 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0731 18:08:08.874616   74203 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-276459" does not appear in /home/jenkins/minikube-integration/19349-8084/kubeconfig
	I0731 18:08:08.875356   74203 kubeconfig.go:62] /home/jenkins/minikube-integration/19349-8084/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-276459" cluster setting kubeconfig missing "old-k8s-version-276459" context setting]
	I0731 18:08:08.876650   74203 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19349-8084/kubeconfig: {Name:mkf2340bc1ada3c683f132aca37228d7588e1fdb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 18:08:08.918433   74203 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0731 18:08:08.931013   74203 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.39.26
	I0731 18:08:08.931067   74203 kubeadm.go:1160] stopping kube-system containers ...
	I0731 18:08:08.931083   74203 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0731 18:08:08.931163   74203 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0731 18:08:08.964683   74203 cri.go:89] found id: ""
	I0731 18:08:08.964759   74203 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0731 18:08:08.980459   74203 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0731 18:08:08.989969   74203 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0731 18:08:08.989997   74203 kubeadm.go:157] found existing configuration files:
	
	I0731 18:08:08.990049   74203 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0731 18:08:08.999015   74203 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0731 18:08:08.999074   74203 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0731 18:08:09.008055   74203 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0731 18:08:09.016532   74203 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0731 18:08:09.016599   74203 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0731 18:08:09.025791   74203 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0731 18:08:09.034160   74203 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0731 18:08:09.034227   74203 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0731 18:08:09.043381   74203 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0731 18:08:09.053419   74203 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0731 18:08:09.053832   74203 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0731 18:08:09.064966   74203 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0731 18:08:09.073962   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0731 18:08:09.198503   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0731 18:08:10.048258   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0731 18:08:10.283812   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0731 18:08:06.285091   73696 pod_ready.go:102] pod "metrics-server-569cc877fc-64hp4" in "kube-system" namespace has status "Ready":"False"
	I0731 18:08:08.779998   73696 pod_ready.go:102] pod "metrics-server-569cc877fc-64hp4" in "kube-system" namespace has status "Ready":"False"
	I0731 18:08:10.780198   73696 pod_ready.go:102] pod "metrics-server-569cc877fc-64hp4" in "kube-system" namespace has status "Ready":"False"
	I0731 18:08:07.362756   73479 main.go:141] libmachine: (no-preload-673754) DBG | domain no-preload-673754 has defined MAC address 52:54:00:5a:ec:78 in network mk-no-preload-673754
	I0731 18:08:07.363299   73479 main.go:141] libmachine: (no-preload-673754) DBG | unable to find current IP address of domain no-preload-673754 in network mk-no-preload-673754
	I0731 18:08:07.363328   73479 main.go:141] libmachine: (no-preload-673754) DBG | I0731 18:08:07.363231   75356 retry.go:31] will retry after 1.508296616s: waiting for machine to come up
	I0731 18:08:08.873528   73479 main.go:141] libmachine: (no-preload-673754) DBG | domain no-preload-673754 has defined MAC address 52:54:00:5a:ec:78 in network mk-no-preload-673754
	I0731 18:08:08.874013   73479 main.go:141] libmachine: (no-preload-673754) DBG | unable to find current IP address of domain no-preload-673754 in network mk-no-preload-673754
	I0731 18:08:08.874051   73479 main.go:141] libmachine: (no-preload-673754) DBG | I0731 18:08:08.873971   75356 retry.go:31] will retry after 2.281343566s: waiting for machine to come up
	I0731 18:08:11.157083   73479 main.go:141] libmachine: (no-preload-673754) DBG | domain no-preload-673754 has defined MAC address 52:54:00:5a:ec:78 in network mk-no-preload-673754
	I0731 18:08:11.157578   73479 main.go:141] libmachine: (no-preload-673754) DBG | unable to find current IP address of domain no-preload-673754 in network mk-no-preload-673754
	I0731 18:08:11.157609   73479 main.go:141] libmachine: (no-preload-673754) DBG | I0731 18:08:11.157537   75356 retry.go:31] will retry after 2.49049752s: waiting for machine to come up
	I0731 18:08:09.802010   73800 pod_ready.go:102] pod "metrics-server-569cc877fc-fzxrw" in "kube-system" namespace has status "Ready":"False"
	I0731 18:08:12.271900   73800 pod_ready.go:102] pod "metrics-server-569cc877fc-fzxrw" in "kube-system" namespace has status "Ready":"False"
	I0731 18:08:10.390012   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0731 18:08:10.477969   74203 api_server.go:52] waiting for apiserver process to appear ...
	I0731 18:08:10.478093   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:08:10.978427   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:08:11.478715   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:08:11.978685   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:08:12.478211   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:08:12.978218   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:08:13.478493   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:08:13.978778   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:08:14.478489   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:08:14.978983   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
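The repeated pgrep runs above are the wait loop after the kubeadm init phases: roughly twice a second the process list is checked for a kube-apiserver launched with the minikube config, until it appears or the apiserver wait gives up. A hand-rolled equivalent (the 60-iteration cap is an illustrative assumption, not the timeout minikube uses):
	for i in $(seq 1 60); do
	  sudo pgrep -xnf 'kube-apiserver.*minikube.*' && break   # stop once the process exists
	  sleep 0.5
	done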
	I0731 18:08:13.278943   73696 pod_ready.go:102] pod "metrics-server-569cc877fc-64hp4" in "kube-system" namespace has status "Ready":"False"
	I0731 18:08:15.778760   73696 pod_ready.go:102] pod "metrics-server-569cc877fc-64hp4" in "kube-system" namespace has status "Ready":"False"
	I0731 18:08:13.650131   73479 main.go:141] libmachine: (no-preload-673754) DBG | domain no-preload-673754 has defined MAC address 52:54:00:5a:ec:78 in network mk-no-preload-673754
	I0731 18:08:13.650459   73479 main.go:141] libmachine: (no-preload-673754) DBG | unable to find current IP address of domain no-preload-673754 in network mk-no-preload-673754
	I0731 18:08:13.650480   73479 main.go:141] libmachine: (no-preload-673754) DBG | I0731 18:08:13.650428   75356 retry.go:31] will retry after 3.437877467s: waiting for machine to come up
	I0731 18:08:14.771879   73800 pod_ready.go:102] pod "metrics-server-569cc877fc-fzxrw" in "kube-system" namespace has status "Ready":"False"
	I0731 18:08:17.272673   73800 pod_ready.go:102] pod "metrics-server-569cc877fc-fzxrw" in "kube-system" namespace has status "Ready":"False"
	I0731 18:08:15.478444   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:08:15.978399   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:08:16.478641   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:08:16.979036   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:08:17.479053   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:08:17.978819   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:08:18.478280   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:08:18.978448   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:08:19.479056   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:08:19.978969   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:08:18.279604   73696 pod_ready.go:102] pod "metrics-server-569cc877fc-64hp4" in "kube-system" namespace has status "Ready":"False"
	I0731 18:08:20.778532   73696 pod_ready.go:102] pod "metrics-server-569cc877fc-64hp4" in "kube-system" namespace has status "Ready":"False"
	I0731 18:08:17.089986   73479 main.go:141] libmachine: (no-preload-673754) DBG | domain no-preload-673754 has defined MAC address 52:54:00:5a:ec:78 in network mk-no-preload-673754
	I0731 18:08:17.090556   73479 main.go:141] libmachine: (no-preload-673754) DBG | unable to find current IP address of domain no-preload-673754 in network mk-no-preload-673754
	I0731 18:08:17.090590   73479 main.go:141] libmachine: (no-preload-673754) DBG | I0731 18:08:17.090509   75356 retry.go:31] will retry after 2.95036051s: waiting for machine to come up
	I0731 18:08:20.044455   73479 main.go:141] libmachine: (no-preload-673754) DBG | domain no-preload-673754 has defined MAC address 52:54:00:5a:ec:78 in network mk-no-preload-673754
	I0731 18:08:20.044914   73479 main.go:141] libmachine: (no-preload-673754) Found IP for machine: 192.168.61.126
	I0731 18:08:20.044935   73479 main.go:141] libmachine: (no-preload-673754) Reserving static IP address...
	I0731 18:08:20.044948   73479 main.go:141] libmachine: (no-preload-673754) DBG | domain no-preload-673754 has current primary IP address 192.168.61.126 and MAC address 52:54:00:5a:ec:78 in network mk-no-preload-673754
	I0731 18:08:20.045286   73479 main.go:141] libmachine: (no-preload-673754) Reserved static IP address: 192.168.61.126
	I0731 18:08:20.045308   73479 main.go:141] libmachine: (no-preload-673754) Waiting for SSH to be available...
	I0731 18:08:20.045331   73479 main.go:141] libmachine: (no-preload-673754) DBG | found host DHCP lease matching {name: "no-preload-673754", mac: "52:54:00:5a:ec:78", ip: "192.168.61.126"} in network mk-no-preload-673754: {Iface:virbr2 ExpiryTime:2024-07-31 19:08:12 +0000 UTC Type:0 Mac:52:54:00:5a:ec:78 Iaid: IPaddr:192.168.61.126 Prefix:24 Hostname:no-preload-673754 Clientid:01:52:54:00:5a:ec:78}
	I0731 18:08:20.045352   73479 main.go:141] libmachine: (no-preload-673754) DBG | skip adding static IP to network mk-no-preload-673754 - found existing host DHCP lease matching {name: "no-preload-673754", mac: "52:54:00:5a:ec:78", ip: "192.168.61.126"}
	I0731 18:08:20.045367   73479 main.go:141] libmachine: (no-preload-673754) DBG | Getting to WaitForSSH function...
	I0731 18:08:20.047574   73479 main.go:141] libmachine: (no-preload-673754) DBG | domain no-preload-673754 has defined MAC address 52:54:00:5a:ec:78 in network mk-no-preload-673754
	I0731 18:08:20.047913   73479 main.go:141] libmachine: (no-preload-673754) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:ec:78", ip: ""} in network mk-no-preload-673754: {Iface:virbr2 ExpiryTime:2024-07-31 19:08:12 +0000 UTC Type:0 Mac:52:54:00:5a:ec:78 Iaid: IPaddr:192.168.61.126 Prefix:24 Hostname:no-preload-673754 Clientid:01:52:54:00:5a:ec:78}
	I0731 18:08:20.047939   73479 main.go:141] libmachine: (no-preload-673754) DBG | domain no-preload-673754 has defined IP address 192.168.61.126 and MAC address 52:54:00:5a:ec:78 in network mk-no-preload-673754
	I0731 18:08:20.048069   73479 main.go:141] libmachine: (no-preload-673754) DBG | Using SSH client type: external
	I0731 18:08:20.048106   73479 main.go:141] libmachine: (no-preload-673754) DBG | Using SSH private key: /home/jenkins/minikube-integration/19349-8084/.minikube/machines/no-preload-673754/id_rsa (-rw-------)
	I0731 18:08:20.048150   73479 main.go:141] libmachine: (no-preload-673754) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.126 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19349-8084/.minikube/machines/no-preload-673754/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0731 18:08:20.048168   73479 main.go:141] libmachine: (no-preload-673754) DBG | About to run SSH command:
	I0731 18:08:20.048181   73479 main.go:141] libmachine: (no-preload-673754) DBG | exit 0
	I0731 18:08:20.175606   73479 main.go:141] libmachine: (no-preload-673754) DBG | SSH cmd err, output: <nil>: 
	I0731 18:08:20.175917   73479 main.go:141] libmachine: (no-preload-673754) Calling .GetConfigRaw
	I0731 18:08:20.176508   73479 main.go:141] libmachine: (no-preload-673754) Calling .GetIP
	I0731 18:08:20.179035   73479 main.go:141] libmachine: (no-preload-673754) DBG | domain no-preload-673754 has defined MAC address 52:54:00:5a:ec:78 in network mk-no-preload-673754
	I0731 18:08:20.179374   73479 main.go:141] libmachine: (no-preload-673754) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:ec:78", ip: ""} in network mk-no-preload-673754: {Iface:virbr2 ExpiryTime:2024-07-31 19:08:12 +0000 UTC Type:0 Mac:52:54:00:5a:ec:78 Iaid: IPaddr:192.168.61.126 Prefix:24 Hostname:no-preload-673754 Clientid:01:52:54:00:5a:ec:78}
	I0731 18:08:20.179404   73479 main.go:141] libmachine: (no-preload-673754) DBG | domain no-preload-673754 has defined IP address 192.168.61.126 and MAC address 52:54:00:5a:ec:78 in network mk-no-preload-673754
	I0731 18:08:20.179686   73479 profile.go:143] Saving config to /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/no-preload-673754/config.json ...
	I0731 18:08:20.179869   73479 machine.go:94] provisionDockerMachine start ...
	I0731 18:08:20.179885   73479 main.go:141] libmachine: (no-preload-673754) Calling .DriverName
	I0731 18:08:20.180088   73479 main.go:141] libmachine: (no-preload-673754) Calling .GetSSHHostname
	I0731 18:08:20.182345   73479 main.go:141] libmachine: (no-preload-673754) DBG | domain no-preload-673754 has defined MAC address 52:54:00:5a:ec:78 in network mk-no-preload-673754
	I0731 18:08:20.182702   73479 main.go:141] libmachine: (no-preload-673754) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:ec:78", ip: ""} in network mk-no-preload-673754: {Iface:virbr2 ExpiryTime:2024-07-31 19:08:12 +0000 UTC Type:0 Mac:52:54:00:5a:ec:78 Iaid: IPaddr:192.168.61.126 Prefix:24 Hostname:no-preload-673754 Clientid:01:52:54:00:5a:ec:78}
	I0731 18:08:20.182727   73479 main.go:141] libmachine: (no-preload-673754) DBG | domain no-preload-673754 has defined IP address 192.168.61.126 and MAC address 52:54:00:5a:ec:78 in network mk-no-preload-673754
	I0731 18:08:20.182848   73479 main.go:141] libmachine: (no-preload-673754) Calling .GetSSHPort
	I0731 18:08:20.183060   73479 main.go:141] libmachine: (no-preload-673754) Calling .GetSSHKeyPath
	I0731 18:08:20.183227   73479 main.go:141] libmachine: (no-preload-673754) Calling .GetSSHKeyPath
	I0731 18:08:20.183414   73479 main.go:141] libmachine: (no-preload-673754) Calling .GetSSHUsername
	I0731 18:08:20.183572   73479 main.go:141] libmachine: Using SSH client type: native
	I0731 18:08:20.183747   73479 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.126 22 <nil> <nil>}
	I0731 18:08:20.183757   73479 main.go:141] libmachine: About to run SSH command:
	hostname
	I0731 18:08:20.295090   73479 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0731 18:08:20.295149   73479 main.go:141] libmachine: (no-preload-673754) Calling .GetMachineName
	I0731 18:08:20.295424   73479 buildroot.go:166] provisioning hostname "no-preload-673754"
	I0731 18:08:20.295454   73479 main.go:141] libmachine: (no-preload-673754) Calling .GetMachineName
	I0731 18:08:20.295631   73479 main.go:141] libmachine: (no-preload-673754) Calling .GetSSHHostname
	I0731 18:08:20.298467   73479 main.go:141] libmachine: (no-preload-673754) DBG | domain no-preload-673754 has defined MAC address 52:54:00:5a:ec:78 in network mk-no-preload-673754
	I0731 18:08:20.298771   73479 main.go:141] libmachine: (no-preload-673754) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:ec:78", ip: ""} in network mk-no-preload-673754: {Iface:virbr2 ExpiryTime:2024-07-31 19:08:12 +0000 UTC Type:0 Mac:52:54:00:5a:ec:78 Iaid: IPaddr:192.168.61.126 Prefix:24 Hostname:no-preload-673754 Clientid:01:52:54:00:5a:ec:78}
	I0731 18:08:20.298815   73479 main.go:141] libmachine: (no-preload-673754) DBG | domain no-preload-673754 has defined IP address 192.168.61.126 and MAC address 52:54:00:5a:ec:78 in network mk-no-preload-673754
	I0731 18:08:20.298897   73479 main.go:141] libmachine: (no-preload-673754) Calling .GetSSHPort
	I0731 18:08:20.299094   73479 main.go:141] libmachine: (no-preload-673754) Calling .GetSSHKeyPath
	I0731 18:08:20.299276   73479 main.go:141] libmachine: (no-preload-673754) Calling .GetSSHKeyPath
	I0731 18:08:20.299462   73479 main.go:141] libmachine: (no-preload-673754) Calling .GetSSHUsername
	I0731 18:08:20.299652   73479 main.go:141] libmachine: Using SSH client type: native
	I0731 18:08:20.299806   73479 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.126 22 <nil> <nil>}
	I0731 18:08:20.299817   73479 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-673754 && echo "no-preload-673754" | sudo tee /etc/hostname
	I0731 18:08:20.424901   73479 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-673754
	
	I0731 18:08:20.424951   73479 main.go:141] libmachine: (no-preload-673754) Calling .GetSSHHostname
	I0731 18:08:20.427679   73479 main.go:141] libmachine: (no-preload-673754) DBG | domain no-preload-673754 has defined MAC address 52:54:00:5a:ec:78 in network mk-no-preload-673754
	I0731 18:08:20.428049   73479 main.go:141] libmachine: (no-preload-673754) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:ec:78", ip: ""} in network mk-no-preload-673754: {Iface:virbr2 ExpiryTime:2024-07-31 19:08:12 +0000 UTC Type:0 Mac:52:54:00:5a:ec:78 Iaid: IPaddr:192.168.61.126 Prefix:24 Hostname:no-preload-673754 Clientid:01:52:54:00:5a:ec:78}
	I0731 18:08:20.428083   73479 main.go:141] libmachine: (no-preload-673754) DBG | domain no-preload-673754 has defined IP address 192.168.61.126 and MAC address 52:54:00:5a:ec:78 in network mk-no-preload-673754
	I0731 18:08:20.428230   73479 main.go:141] libmachine: (no-preload-673754) Calling .GetSSHPort
	I0731 18:08:20.428419   73479 main.go:141] libmachine: (no-preload-673754) Calling .GetSSHKeyPath
	I0731 18:08:20.428601   73479 main.go:141] libmachine: (no-preload-673754) Calling .GetSSHKeyPath
	I0731 18:08:20.428767   73479 main.go:141] libmachine: (no-preload-673754) Calling .GetSSHUsername
	I0731 18:08:20.428965   73479 main.go:141] libmachine: Using SSH client type: native
	I0731 18:08:20.429127   73479 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.126 22 <nil> <nil>}
	I0731 18:08:20.429142   73479 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-673754' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-673754/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-673754' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0731 18:08:20.546853   73479 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0731 18:08:20.546884   73479 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19349-8084/.minikube CaCertPath:/home/jenkins/minikube-integration/19349-8084/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19349-8084/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19349-8084/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19349-8084/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19349-8084/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19349-8084/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19349-8084/.minikube}
	I0731 18:08:20.546938   73479 buildroot.go:174] setting up certificates
	I0731 18:08:20.546955   73479 provision.go:84] configureAuth start
	I0731 18:08:20.546971   73479 main.go:141] libmachine: (no-preload-673754) Calling .GetMachineName
	I0731 18:08:20.547275   73479 main.go:141] libmachine: (no-preload-673754) Calling .GetIP
	I0731 18:08:20.550019   73479 main.go:141] libmachine: (no-preload-673754) DBG | domain no-preload-673754 has defined MAC address 52:54:00:5a:ec:78 in network mk-no-preload-673754
	I0731 18:08:20.550372   73479 main.go:141] libmachine: (no-preload-673754) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:ec:78", ip: ""} in network mk-no-preload-673754: {Iface:virbr2 ExpiryTime:2024-07-31 19:08:12 +0000 UTC Type:0 Mac:52:54:00:5a:ec:78 Iaid: IPaddr:192.168.61.126 Prefix:24 Hostname:no-preload-673754 Clientid:01:52:54:00:5a:ec:78}
	I0731 18:08:20.550400   73479 main.go:141] libmachine: (no-preload-673754) DBG | domain no-preload-673754 has defined IP address 192.168.61.126 and MAC address 52:54:00:5a:ec:78 in network mk-no-preload-673754
	I0731 18:08:20.550525   73479 main.go:141] libmachine: (no-preload-673754) Calling .GetSSHHostname
	I0731 18:08:20.552914   73479 main.go:141] libmachine: (no-preload-673754) DBG | domain no-preload-673754 has defined MAC address 52:54:00:5a:ec:78 in network mk-no-preload-673754
	I0731 18:08:20.553261   73479 main.go:141] libmachine: (no-preload-673754) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:ec:78", ip: ""} in network mk-no-preload-673754: {Iface:virbr2 ExpiryTime:2024-07-31 19:08:12 +0000 UTC Type:0 Mac:52:54:00:5a:ec:78 Iaid: IPaddr:192.168.61.126 Prefix:24 Hostname:no-preload-673754 Clientid:01:52:54:00:5a:ec:78}
	I0731 18:08:20.553290   73479 main.go:141] libmachine: (no-preload-673754) DBG | domain no-preload-673754 has defined IP address 192.168.61.126 and MAC address 52:54:00:5a:ec:78 in network mk-no-preload-673754
	I0731 18:08:20.553416   73479 provision.go:143] copyHostCerts
	I0731 18:08:20.553479   73479 exec_runner.go:144] found /home/jenkins/minikube-integration/19349-8084/.minikube/ca.pem, removing ...
	I0731 18:08:20.553490   73479 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19349-8084/.minikube/ca.pem
	I0731 18:08:20.553547   73479 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19349-8084/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19349-8084/.minikube/ca.pem (1082 bytes)
	I0731 18:08:20.553675   73479 exec_runner.go:144] found /home/jenkins/minikube-integration/19349-8084/.minikube/cert.pem, removing ...
	I0731 18:08:20.553687   73479 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19349-8084/.minikube/cert.pem
	I0731 18:08:20.553718   73479 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19349-8084/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19349-8084/.minikube/cert.pem (1123 bytes)
	I0731 18:08:20.553796   73479 exec_runner.go:144] found /home/jenkins/minikube-integration/19349-8084/.minikube/key.pem, removing ...
	I0731 18:08:20.553806   73479 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19349-8084/.minikube/key.pem
	I0731 18:08:20.553826   73479 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19349-8084/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19349-8084/.minikube/key.pem (1675 bytes)
	I0731 18:08:20.553883   73479 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19349-8084/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19349-8084/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19349-8084/.minikube/certs/ca-key.pem org=jenkins.no-preload-673754 san=[127.0.0.1 192.168.61.126 localhost minikube no-preload-673754]
	I0731 18:08:20.878891   73479 provision.go:177] copyRemoteCerts
	I0731 18:08:20.878963   73479 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0731 18:08:20.878990   73479 main.go:141] libmachine: (no-preload-673754) Calling .GetSSHHostname
	I0731 18:08:20.881529   73479 main.go:141] libmachine: (no-preload-673754) DBG | domain no-preload-673754 has defined MAC address 52:54:00:5a:ec:78 in network mk-no-preload-673754
	I0731 18:08:20.881868   73479 main.go:141] libmachine: (no-preload-673754) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:ec:78", ip: ""} in network mk-no-preload-673754: {Iface:virbr2 ExpiryTime:2024-07-31 19:08:12 +0000 UTC Type:0 Mac:52:54:00:5a:ec:78 Iaid: IPaddr:192.168.61.126 Prefix:24 Hostname:no-preload-673754 Clientid:01:52:54:00:5a:ec:78}
	I0731 18:08:20.881900   73479 main.go:141] libmachine: (no-preload-673754) DBG | domain no-preload-673754 has defined IP address 192.168.61.126 and MAC address 52:54:00:5a:ec:78 in network mk-no-preload-673754
	I0731 18:08:20.882053   73479 main.go:141] libmachine: (no-preload-673754) Calling .GetSSHPort
	I0731 18:08:20.882245   73479 main.go:141] libmachine: (no-preload-673754) Calling .GetSSHKeyPath
	I0731 18:08:20.882450   73479 main.go:141] libmachine: (no-preload-673754) Calling .GetSSHUsername
	I0731 18:08:20.882617   73479 sshutil.go:53] new ssh client: &{IP:192.168.61.126 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19349-8084/.minikube/machines/no-preload-673754/id_rsa Username:docker}
	I0731 18:08:20.968757   73479 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19349-8084/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0731 18:08:20.992136   73479 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19349-8084/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0731 18:08:21.013768   73479 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19349-8084/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0731 18:08:21.035808   73479 provision.go:87] duration metric: took 488.837788ms to configureAuth
	I0731 18:08:21.035839   73479 buildroot.go:189] setting minikube options for container-runtime
	I0731 18:08:21.036018   73479 config.go:182] Loaded profile config "no-preload-673754": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0-beta.0
	I0731 18:08:21.036099   73479 main.go:141] libmachine: (no-preload-673754) Calling .GetSSHHostname
	I0731 18:08:21.038949   73479 main.go:141] libmachine: (no-preload-673754) DBG | domain no-preload-673754 has defined MAC address 52:54:00:5a:ec:78 in network mk-no-preload-673754
	I0731 18:08:21.039335   73479 main.go:141] libmachine: (no-preload-673754) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:ec:78", ip: ""} in network mk-no-preload-673754: {Iface:virbr2 ExpiryTime:2024-07-31 19:08:12 +0000 UTC Type:0 Mac:52:54:00:5a:ec:78 Iaid: IPaddr:192.168.61.126 Prefix:24 Hostname:no-preload-673754 Clientid:01:52:54:00:5a:ec:78}
	I0731 18:08:21.039363   73479 main.go:141] libmachine: (no-preload-673754) DBG | domain no-preload-673754 has defined IP address 192.168.61.126 and MAC address 52:54:00:5a:ec:78 in network mk-no-preload-673754
	I0731 18:08:21.039556   73479 main.go:141] libmachine: (no-preload-673754) Calling .GetSSHPort
	I0731 18:08:21.039756   73479 main.go:141] libmachine: (no-preload-673754) Calling .GetSSHKeyPath
	I0731 18:08:21.039960   73479 main.go:141] libmachine: (no-preload-673754) Calling .GetSSHKeyPath
	I0731 18:08:21.040071   73479 main.go:141] libmachine: (no-preload-673754) Calling .GetSSHUsername
	I0731 18:08:21.040219   73479 main.go:141] libmachine: Using SSH client type: native
	I0731 18:08:21.040380   73479 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.126 22 <nil> <nil>}
	I0731 18:08:21.040396   73479 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0731 18:08:21.319623   73479 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0731 18:08:21.319657   73479 machine.go:97] duration metric: took 1.139776085s to provisionDockerMachine
	I0731 18:08:21.319672   73479 start.go:293] postStartSetup for "no-preload-673754" (driver="kvm2")
	I0731 18:08:21.319689   73479 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0731 18:08:21.319710   73479 main.go:141] libmachine: (no-preload-673754) Calling .DriverName
	I0731 18:08:21.320049   73479 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0731 18:08:21.320076   73479 main.go:141] libmachine: (no-preload-673754) Calling .GetSSHHostname
	I0731 18:08:21.322963   73479 main.go:141] libmachine: (no-preload-673754) DBG | domain no-preload-673754 has defined MAC address 52:54:00:5a:ec:78 in network mk-no-preload-673754
	I0731 18:08:21.323436   73479 main.go:141] libmachine: (no-preload-673754) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:ec:78", ip: ""} in network mk-no-preload-673754: {Iface:virbr2 ExpiryTime:2024-07-31 19:08:12 +0000 UTC Type:0 Mac:52:54:00:5a:ec:78 Iaid: IPaddr:192.168.61.126 Prefix:24 Hostname:no-preload-673754 Clientid:01:52:54:00:5a:ec:78}
	I0731 18:08:21.323465   73479 main.go:141] libmachine: (no-preload-673754) DBG | domain no-preload-673754 has defined IP address 192.168.61.126 and MAC address 52:54:00:5a:ec:78 in network mk-no-preload-673754
	I0731 18:08:21.323634   73479 main.go:141] libmachine: (no-preload-673754) Calling .GetSSHPort
	I0731 18:08:21.323809   73479 main.go:141] libmachine: (no-preload-673754) Calling .GetSSHKeyPath
	I0731 18:08:21.324003   73479 main.go:141] libmachine: (no-preload-673754) Calling .GetSSHUsername
	I0731 18:08:21.324127   73479 sshutil.go:53] new ssh client: &{IP:192.168.61.126 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19349-8084/.minikube/machines/no-preload-673754/id_rsa Username:docker}
	I0731 18:08:21.409076   73479 ssh_runner.go:195] Run: cat /etc/os-release
	I0731 18:08:21.412884   73479 info.go:137] Remote host: Buildroot 2023.02.9
	I0731 18:08:21.412917   73479 filesync.go:126] Scanning /home/jenkins/minikube-integration/19349-8084/.minikube/addons for local assets ...
	I0731 18:08:21.413020   73479 filesync.go:126] Scanning /home/jenkins/minikube-integration/19349-8084/.minikube/files for local assets ...
	I0731 18:08:21.413108   73479 filesync.go:149] local asset: /home/jenkins/minikube-integration/19349-8084/.minikube/files/etc/ssl/certs/152592.pem -> 152592.pem in /etc/ssl/certs
	I0731 18:08:21.413233   73479 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0731 18:08:21.421812   73479 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19349-8084/.minikube/files/etc/ssl/certs/152592.pem --> /etc/ssl/certs/152592.pem (1708 bytes)
	I0731 18:08:21.447124   73479 start.go:296] duration metric: took 127.423498ms for postStartSetup
	I0731 18:08:21.447196   73479 fix.go:56] duration metric: took 20.511108968s for fixHost
	I0731 18:08:21.447226   73479 main.go:141] libmachine: (no-preload-673754) Calling .GetSSHHostname
	I0731 18:08:21.450022   73479 main.go:141] libmachine: (no-preload-673754) DBG | domain no-preload-673754 has defined MAC address 52:54:00:5a:ec:78 in network mk-no-preload-673754
	I0731 18:08:21.450408   73479 main.go:141] libmachine: (no-preload-673754) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:ec:78", ip: ""} in network mk-no-preload-673754: {Iface:virbr2 ExpiryTime:2024-07-31 19:08:12 +0000 UTC Type:0 Mac:52:54:00:5a:ec:78 Iaid: IPaddr:192.168.61.126 Prefix:24 Hostname:no-preload-673754 Clientid:01:52:54:00:5a:ec:78}
	I0731 18:08:21.450431   73479 main.go:141] libmachine: (no-preload-673754) DBG | domain no-preload-673754 has defined IP address 192.168.61.126 and MAC address 52:54:00:5a:ec:78 in network mk-no-preload-673754
	I0731 18:08:21.450628   73479 main.go:141] libmachine: (no-preload-673754) Calling .GetSSHPort
	I0731 18:08:21.450846   73479 main.go:141] libmachine: (no-preload-673754) Calling .GetSSHKeyPath
	I0731 18:08:21.451009   73479 main.go:141] libmachine: (no-preload-673754) Calling .GetSSHKeyPath
	I0731 18:08:21.451161   73479 main.go:141] libmachine: (no-preload-673754) Calling .GetSSHUsername
	I0731 18:08:21.451327   73479 main.go:141] libmachine: Using SSH client type: native
	I0731 18:08:21.451527   73479 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.126 22 <nil> <nil>}
	I0731 18:08:21.451541   73479 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0731 18:08:21.563653   73479 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722449301.536356236
	
	I0731 18:08:21.563672   73479 fix.go:216] guest clock: 1722449301.536356236
	I0731 18:08:21.563679   73479 fix.go:229] Guest: 2024-07-31 18:08:21.536356236 +0000 UTC Remote: 2024-07-31 18:08:21.447206545 +0000 UTC m=+354.621330953 (delta=89.149691ms)
	I0731 18:08:21.563702   73479 fix.go:200] guest clock delta is within tolerance: 89.149691ms
	I0731 18:08:21.563709   73479 start.go:83] releasing machines lock for "no-preload-673754", held for 20.627680156s
	I0731 18:08:21.563734   73479 main.go:141] libmachine: (no-preload-673754) Calling .DriverName
	I0731 18:08:21.563992   73479 main.go:141] libmachine: (no-preload-673754) Calling .GetIP
	I0731 18:08:21.566875   73479 main.go:141] libmachine: (no-preload-673754) DBG | domain no-preload-673754 has defined MAC address 52:54:00:5a:ec:78 in network mk-no-preload-673754
	I0731 18:08:21.567265   73479 main.go:141] libmachine: (no-preload-673754) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:ec:78", ip: ""} in network mk-no-preload-673754: {Iface:virbr2 ExpiryTime:2024-07-31 19:08:12 +0000 UTC Type:0 Mac:52:54:00:5a:ec:78 Iaid: IPaddr:192.168.61.126 Prefix:24 Hostname:no-preload-673754 Clientid:01:52:54:00:5a:ec:78}
	I0731 18:08:21.567290   73479 main.go:141] libmachine: (no-preload-673754) DBG | domain no-preload-673754 has defined IP address 192.168.61.126 and MAC address 52:54:00:5a:ec:78 in network mk-no-preload-673754
	I0731 18:08:21.567505   73479 main.go:141] libmachine: (no-preload-673754) Calling .DriverName
	I0731 18:08:21.568045   73479 main.go:141] libmachine: (no-preload-673754) Calling .DriverName
	I0731 18:08:21.568237   73479 main.go:141] libmachine: (no-preload-673754) Calling .DriverName
	I0731 18:08:21.568368   73479 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0731 18:08:21.568408   73479 main.go:141] libmachine: (no-preload-673754) Calling .GetSSHHostname
	I0731 18:08:21.568465   73479 ssh_runner.go:195] Run: cat /version.json
	I0731 18:08:21.568492   73479 main.go:141] libmachine: (no-preload-673754) Calling .GetSSHHostname
	I0731 18:08:21.571178   73479 main.go:141] libmachine: (no-preload-673754) DBG | domain no-preload-673754 has defined MAC address 52:54:00:5a:ec:78 in network mk-no-preload-673754
	I0731 18:08:21.571554   73479 main.go:141] libmachine: (no-preload-673754) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:ec:78", ip: ""} in network mk-no-preload-673754: {Iface:virbr2 ExpiryTime:2024-07-31 19:08:12 +0000 UTC Type:0 Mac:52:54:00:5a:ec:78 Iaid: IPaddr:192.168.61.126 Prefix:24 Hostname:no-preload-673754 Clientid:01:52:54:00:5a:ec:78}
	I0731 18:08:21.571603   73479 main.go:141] libmachine: (no-preload-673754) DBG | domain no-preload-673754 has defined IP address 192.168.61.126 and MAC address 52:54:00:5a:ec:78 in network mk-no-preload-673754
	I0731 18:08:21.571653   73479 main.go:141] libmachine: (no-preload-673754) DBG | domain no-preload-673754 has defined MAC address 52:54:00:5a:ec:78 in network mk-no-preload-673754
	I0731 18:08:21.571729   73479 main.go:141] libmachine: (no-preload-673754) Calling .GetSSHPort
	I0731 18:08:21.571902   73479 main.go:141] libmachine: (no-preload-673754) Calling .GetSSHKeyPath
	I0731 18:08:21.572071   73479 main.go:141] libmachine: (no-preload-673754) Calling .GetSSHUsername
	I0731 18:08:21.572213   73479 main.go:141] libmachine: (no-preload-673754) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:ec:78", ip: ""} in network mk-no-preload-673754: {Iface:virbr2 ExpiryTime:2024-07-31 19:08:12 +0000 UTC Type:0 Mac:52:54:00:5a:ec:78 Iaid: IPaddr:192.168.61.126 Prefix:24 Hostname:no-preload-673754 Clientid:01:52:54:00:5a:ec:78}
	I0731 18:08:21.572240   73479 main.go:141] libmachine: (no-preload-673754) DBG | domain no-preload-673754 has defined IP address 192.168.61.126 and MAC address 52:54:00:5a:ec:78 in network mk-no-preload-673754
	I0731 18:08:21.572256   73479 sshutil.go:53] new ssh client: &{IP:192.168.61.126 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19349-8084/.minikube/machines/no-preload-673754/id_rsa Username:docker}
	I0731 18:08:21.572373   73479 main.go:141] libmachine: (no-preload-673754) Calling .GetSSHPort
	I0731 18:08:21.572505   73479 main.go:141] libmachine: (no-preload-673754) Calling .GetSSHKeyPath
	I0731 18:08:21.572609   73479 main.go:141] libmachine: (no-preload-673754) Calling .GetSSHUsername
	I0731 18:08:21.572739   73479 sshutil.go:53] new ssh client: &{IP:192.168.61.126 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19349-8084/.minikube/machines/no-preload-673754/id_rsa Username:docker}
	I0731 18:08:21.682894   73479 ssh_runner.go:195] Run: systemctl --version
	I0731 18:08:21.689126   73479 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0731 18:08:21.829572   73479 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0731 18:08:21.836507   73479 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0731 18:08:21.836589   73479 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0731 18:08:21.855127   73479 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
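	The find invocation at 18:08:21.836589 is logged through a format function, so its -printf verb shows up as "%!p(MISSING), ". A minimal shell sketch of the equivalent command as typed by hand, assuming the reconstructed -printf format and the same .mk_disabled convention shown in the log:
	# Disable any bridge/podman CNI configs so they no longer shadow the CNI minikube manages.
	# The -printf '%p, ' format is an assumption reconstructing the garbled "%!p(MISSING), " verb.
	sudo find /etc/cni/net.d -maxdepth 1 -type f \
	  \( \( -name '*bridge*' -or -name '*podman*' \) -and -not -name '*.mk_disabled' \) \
	  -printf '%p, ' -exec sh -c 'sudo mv "$1" "$1.mk_disabled"' _ {} \;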
	I0731 18:08:21.855176   73479 start.go:495] detecting cgroup driver to use...
	I0731 18:08:21.855256   73479 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0731 18:08:21.870886   73479 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0731 18:08:21.884762   73479 docker.go:217] disabling cri-docker service (if available) ...
	I0731 18:08:21.884833   73479 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0731 18:08:21.899480   73479 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0731 18:08:21.912438   73479 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0731 18:08:22.024528   73479 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0731 18:08:22.177400   73479 docker.go:233] disabling docker service ...
	I0731 18:08:22.177500   73479 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0731 18:08:22.191225   73479 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0731 18:08:22.204004   73479 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0731 18:08:22.327408   73479 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0731 18:08:22.449116   73479 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0731 18:08:22.463031   73479 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0731 18:08:22.481864   73479 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0731 18:08:22.481935   73479 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 18:08:22.491687   73479 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0731 18:08:22.491768   73479 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 18:08:22.501686   73479 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 18:08:22.511207   73479 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 18:08:22.521390   73479 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0731 18:08:22.531355   73479 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 18:08:22.541544   73479 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 18:08:22.556829   73479 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 18:08:22.566012   73479 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0731 18:08:22.574865   73479 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0731 18:08:22.574938   73479 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0731 18:08:22.588125   73479 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0731 18:08:22.597257   73479 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0731 18:08:22.716379   73479 ssh_runner.go:195] Run: sudo systemctl restart crio
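	Taken together, the sed/sysctl steps above amount to the following CRI-O reconfiguration; a condensed shell sketch, assuming the same /etc/crio/crio.conf.d/02-crio.conf drop-in used in the log (the default_sysctls edit for net.ipv4.ip_unprivileged_port_start=0 is summarized in a comment rather than repeated verbatim):
	# Point CRI-O at the expected pause image and the cgroupfs cgroup manager.
	sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf
	sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf
	sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf
	sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf
	sudo rm -rf /etc/cni/net.mk
	# (the log also injects "net.ipv4.ip_unprivileged_port_start=0" into default_sysctls at this point)
	sudo modprobe br_netfilter                          # bridge traffic must traverse iptables
	sudo sh -c 'echo 1 > /proc/sys/net/ipv4/ip_forward'
	sudo systemctl daemon-reload && sudo systemctl restart crio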
	I0731 18:08:22.855465   73479 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0731 18:08:22.855526   73479 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0731 18:08:22.860016   73479 start.go:563] Will wait 60s for crictl version
	I0731 18:08:22.860088   73479 ssh_runner.go:195] Run: which crictl
	I0731 18:08:22.863395   73479 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0731 18:08:22.904523   73479 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0731 18:08:22.904611   73479 ssh_runner.go:195] Run: crio --version
	I0731 18:08:22.934571   73479 ssh_runner.go:195] Run: crio --version
	I0731 18:08:22.965884   73479 out.go:177] * Preparing Kubernetes v1.31.0-beta.0 on CRI-O 1.29.1 ...
	I0731 18:08:19.771740   73800 pod_ready.go:102] pod "metrics-server-569cc877fc-fzxrw" in "kube-system" namespace has status "Ready":"False"
	I0731 18:08:22.272491   73800 pod_ready.go:102] pod "metrics-server-569cc877fc-fzxrw" in "kube-system" namespace has status "Ready":"False"
	I0731 18:08:20.478866   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:08:20.978311   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:08:21.478333   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:08:21.978289   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:08:22.479138   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:08:22.978406   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:08:23.478138   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:08:23.979189   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:08:24.478688   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:08:24.978795   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:08:22.779215   73696 pod_ready.go:102] pod "metrics-server-569cc877fc-64hp4" in "kube-system" namespace has status "Ready":"False"
	I0731 18:08:24.782366   73696 pod_ready.go:102] pod "metrics-server-569cc877fc-64hp4" in "kube-system" namespace has status "Ready":"False"
	I0731 18:08:22.967087   73479 main.go:141] libmachine: (no-preload-673754) Calling .GetIP
	I0731 18:08:22.969442   73479 main.go:141] libmachine: (no-preload-673754) DBG | domain no-preload-673754 has defined MAC address 52:54:00:5a:ec:78 in network mk-no-preload-673754
	I0731 18:08:22.969722   73479 main.go:141] libmachine: (no-preload-673754) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:ec:78", ip: ""} in network mk-no-preload-673754: {Iface:virbr2 ExpiryTime:2024-07-31 19:08:12 +0000 UTC Type:0 Mac:52:54:00:5a:ec:78 Iaid: IPaddr:192.168.61.126 Prefix:24 Hostname:no-preload-673754 Clientid:01:52:54:00:5a:ec:78}
	I0731 18:08:22.969746   73479 main.go:141] libmachine: (no-preload-673754) DBG | domain no-preload-673754 has defined IP address 192.168.61.126 and MAC address 52:54:00:5a:ec:78 in network mk-no-preload-673754
	I0731 18:08:22.970005   73479 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0731 18:08:22.974229   73479 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0731 18:08:22.986153   73479 kubeadm.go:883] updating cluster {Name:no-preload-673754 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0-beta.0 ClusterName:no-preload-673754 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.126 Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0731 18:08:22.986292   73479 preload.go:131] Checking if preload exists for k8s version v1.31.0-beta.0 and runtime crio
	I0731 18:08:22.986321   73479 ssh_runner.go:195] Run: sudo crictl images --output json
	I0731 18:08:23.020129   73479 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.0-beta.0". assuming images are not preloaded.
	I0731 18:08:23.020153   73479 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.31.0-beta.0 registry.k8s.io/kube-controller-manager:v1.31.0-beta.0 registry.k8s.io/kube-scheduler:v1.31.0-beta.0 registry.k8s.io/kube-proxy:v1.31.0-beta.0 registry.k8s.io/pause:3.10 registry.k8s.io/etcd:3.5.14-0 registry.k8s.io/coredns/coredns:v1.11.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0731 18:08:23.020215   73479 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0731 18:08:23.020234   73479 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.31.0-beta.0
	I0731 18:08:23.020266   73479 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.31.0-beta.0
	I0731 18:08:23.020322   73479 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.31.0-beta.0
	I0731 18:08:23.020337   73479 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.14-0
	I0731 18:08:23.020390   73479 image.go:134] retrieving image: registry.k8s.io/pause:3.10
	I0731 18:08:23.020431   73479 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.11.1
	I0731 18:08:23.020457   73479 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.31.0-beta.0
	I0731 18:08:23.021820   73479 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0731 18:08:23.021821   73479 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.14-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.14-0
	I0731 18:08:23.021820   73479 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.31.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.31.0-beta.0
	I0731 18:08:23.021901   73479 image.go:177] daemon lookup for registry.k8s.io/pause:3.10: Error response from daemon: No such image: registry.k8s.io/pause:3.10
	I0731 18:08:23.021978   73479 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.31.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.31.0-beta.0
	I0731 18:08:23.021833   73479 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.31.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.31.0-beta.0
	I0731 18:08:23.021826   73479 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.31.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.31.0-beta.0
	I0731 18:08:23.021821   73479 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.1
	I0731 18:08:23.254700   73479 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.31.0-beta.0
	I0731 18:08:23.268999   73479 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.11.1
	I0731 18:08:23.271466   73479 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.10
	I0731 18:08:23.272011   73479 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.31.0-beta.0
	I0731 18:08:23.275695   73479 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.31.0-beta.0
	I0731 18:08:23.298363   73479 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.31.0-beta.0
	I0731 18:08:23.320031   73479 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.14-0
	I0731 18:08:23.340960   73479 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.31.0-beta.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.31.0-beta.0" does not exist at hash "f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938" in container runtime
	I0731 18:08:23.341004   73479 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.31.0-beta.0
	I0731 18:08:23.341050   73479 ssh_runner.go:195] Run: which crictl
	I0731 18:08:23.381391   73479 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.11.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.11.1" does not exist at hash "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4" in container runtime
	I0731 18:08:23.381441   73479 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.11.1
	I0731 18:08:23.381511   73479 ssh_runner.go:195] Run: which crictl
	I0731 18:08:23.508590   73479 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.31.0-beta.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.31.0-beta.0" does not exist at hash "63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5" in container runtime
	I0731 18:08:23.508650   73479 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.31.0-beta.0
	I0731 18:08:23.508676   73479 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.31.0-beta.0" needs transfer: "registry.k8s.io/kube-proxy:v1.31.0-beta.0" does not exist at hash "c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899" in container runtime
	I0731 18:08:23.508702   73479 ssh_runner.go:195] Run: which crictl
	I0731 18:08:23.508716   73479 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.31.0-beta.0
	I0731 18:08:23.508729   73479 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.31.0-beta.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.31.0-beta.0" does not exist at hash "d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b" in container runtime
	I0731 18:08:23.508751   73479 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.31.0-beta.0
	I0731 18:08:23.508772   73479 ssh_runner.go:195] Run: which crictl
	I0731 18:08:23.508781   73479 ssh_runner.go:195] Run: which crictl
	I0731 18:08:23.508800   73479 cache_images.go:116] "registry.k8s.io/etcd:3.5.14-0" needs transfer: "registry.k8s.io/etcd:3.5.14-0" does not exist at hash "cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa" in container runtime
	I0731 18:08:23.508830   73479 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.14-0
	I0731 18:08:23.508838   73479 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.0-beta.0
	I0731 18:08:23.508860   73479 ssh_runner.go:195] Run: which crictl
	I0731 18:08:23.508879   73479 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1
	I0731 18:08:23.519809   73479 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.0-beta.0
	I0731 18:08:23.519834   73479 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.0-beta.0
	I0731 18:08:23.519907   73479 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.0-beta.0
	I0731 18:08:23.595474   73479 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.14-0
	I0731 18:08:23.595484   73479 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19349-8084/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.0-beta.0
	I0731 18:08:23.595590   73479 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19349-8084/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1
	I0731 18:08:23.595628   73479 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-apiserver_v1.31.0-beta.0
	I0731 18:08:23.595683   73479 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/coredns_v1.11.1
	I0731 18:08:23.622893   73479 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19349-8084/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.0-beta.0
	I0731 18:08:23.623024   73479 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-proxy_v1.31.0-beta.0
	I0731 18:08:23.629140   73479 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19349-8084/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.0-beta.0
	I0731 18:08:23.629173   73479 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19349-8084/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.0-beta.0
	I0731 18:08:23.629242   73479 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-scheduler_v1.31.0-beta.0
	I0731 18:08:23.629246   73479 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-controller-manager_v1.31.0-beta.0
	I0731 18:08:23.659281   73479 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19349-8084/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.14-0
	I0731 18:08:23.659321   73479 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.31.0-beta.0 (exists)
	I0731 18:08:23.659336   73479 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.31.0-beta.0
	I0731 18:08:23.659379   73479 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.0-beta.0
	I0731 18:08:23.659385   73479 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.11.1 (exists)
	I0731 18:08:23.659425   73479 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.31.0-beta.0 (exists)
	I0731 18:08:23.659381   73479 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/etcd_3.5.14-0
	I0731 18:08:23.659465   73479 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.31.0-beta.0 (exists)
	I0731 18:08:23.659494   73479 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.31.0-beta.0 (exists)
	I0731 18:08:23.857129   73479 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0731 18:08:26.136212   73479 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.0-beta.0: (2.476802709s)
	I0731 18:08:26.136251   73479 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19349-8084/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.0-beta.0 from cache
	I0731 18:08:26.136264   73479 ssh_runner.go:235] Completed: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/etcd_3.5.14-0: (2.476807388s)
	I0731 18:08:26.136276   73479 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.11.1
	I0731 18:08:26.136293   73479 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.14-0 (exists)
	I0731 18:08:26.136329   73479 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1
	I0731 18:08:26.136366   73479 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5: (2.279204335s)
	I0731 18:08:26.136423   73479 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I0731 18:08:26.136474   73479 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0731 18:08:26.136521   73479 ssh_runner.go:195] Run: which crictl
	I0731 18:08:24.770974   73800 pod_ready.go:102] pod "metrics-server-569cc877fc-fzxrw" in "kube-system" namespace has status "Ready":"False"
	I0731 18:08:26.771954   73800 pod_ready.go:102] pod "metrics-server-569cc877fc-fzxrw" in "kube-system" namespace has status "Ready":"False"
	I0731 18:08:29.274931   73800 pod_ready.go:102] pod "metrics-server-569cc877fc-fzxrw" in "kube-system" namespace has status "Ready":"False"
	I0731 18:08:25.478432   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:08:25.978823   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:08:26.478416   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:08:26.979075   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:08:27.478228   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:08:27.978970   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:08:28.478319   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:08:28.979028   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:08:29.479060   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:08:29.978544   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:08:27.278482   73696 pod_ready.go:102] pod "metrics-server-569cc877fc-64hp4" in "kube-system" namespace has status "Ready":"False"
	I0731 18:08:29.279820   73696 pod_ready.go:102] pod "metrics-server-569cc877fc-64hp4" in "kube-system" namespace has status "Ready":"False"
	I0731 18:08:27.993828   73479 ssh_runner.go:235] Completed: which crictl: (1.857279777s)
	I0731 18:08:27.993908   73479 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0731 18:08:27.993918   73479 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1: (1.857561411s)
	I0731 18:08:27.993947   73479 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19349-8084/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1 from cache
	I0731 18:08:27.993981   73479 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.31.0-beta.0
	I0731 18:08:27.994029   73479 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.0-beta.0
	I0731 18:08:28.037163   73479 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19349-8084/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0731 18:08:28.037288   73479 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/storage-provisioner_v5
	I0731 18:08:29.880343   73479 ssh_runner.go:235] Completed: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/storage-provisioner_v5: (1.843037657s)
	I0731 18:08:29.880392   73479 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I0731 18:08:29.880339   73479 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.0-beta.0: (1.886261639s)
	I0731 18:08:29.880412   73479 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19349-8084/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.0-beta.0 from cache
	I0731 18:08:29.880442   73479 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.31.0-beta.0
	I0731 18:08:29.880509   73479 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.0-beta.0
	I0731 18:08:31.229448   73479 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.0-beta.0: (1.348909634s)
	I0731 18:08:31.229478   73479 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19349-8084/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.0-beta.0 from cache
	I0731 18:08:31.229512   73479 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.31.0-beta.0
	I0731 18:08:31.229575   73479 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.0-beta.0
	I0731 18:08:31.771695   73800 pod_ready.go:102] pod "metrics-server-569cc877fc-fzxrw" in "kube-system" namespace has status "Ready":"False"
	I0731 18:08:34.271817   73800 pod_ready.go:102] pod "metrics-server-569cc877fc-fzxrw" in "kube-system" namespace has status "Ready":"False"
	I0731 18:08:30.478387   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:08:30.978443   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:08:31.478484   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:08:31.979231   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:08:32.478928   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:08:32.978790   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:08:33.478426   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:08:33.978839   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:08:34.478411   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:08:34.978378   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:08:31.280261   73696 pod_ready.go:102] pod "metrics-server-569cc877fc-64hp4" in "kube-system" namespace has status "Ready":"False"
	I0731 18:08:33.780411   73696 pod_ready.go:102] pod "metrics-server-569cc877fc-64hp4" in "kube-system" namespace has status "Ready":"False"
	I0731 18:08:35.783181   73696 pod_ready.go:102] pod "metrics-server-569cc877fc-64hp4" in "kube-system" namespace has status "Ready":"False"
	I0731 18:08:33.084098   73479 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.0-beta.0: (1.854499641s)
	I0731 18:08:33.084136   73479 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19349-8084/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.0-beta.0 from cache
	I0731 18:08:33.084175   73479 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.5.14-0
	I0731 18:08:33.084255   73479 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.14-0
	I0731 18:08:36.378466   73479 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.14-0: (3.294181026s)
	I0731 18:08:36.378501   73479 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19349-8084/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.14-0 from cache
	I0731 18:08:36.378530   73479 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0731 18:08:36.378575   73479 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I0731 18:08:36.772963   73800 pod_ready.go:102] pod "metrics-server-569cc877fc-fzxrw" in "kube-system" namespace has status "Ready":"False"
	I0731 18:08:39.270915   73800 pod_ready.go:102] pod "metrics-server-569cc877fc-fzxrw" in "kube-system" namespace has status "Ready":"False"
	I0731 18:08:35.478287   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:08:35.978546   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:08:36.479138   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:08:36.979173   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:08:37.478319   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:08:37.978768   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:08:38.479161   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:08:38.979129   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:08:39.478128   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:08:39.979147   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:08:38.278970   73696 pod_ready.go:102] pod "metrics-server-569cc877fc-64hp4" in "kube-system" namespace has status "Ready":"False"
	I0731 18:08:40.279298   73696 pod_ready.go:102] pod "metrics-server-569cc877fc-64hp4" in "kube-system" namespace has status "Ready":"False"
	I0731 18:08:37.022757   73479 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19349-8084/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0731 18:08:37.022807   73479 cache_images.go:123] Successfully loaded all cached images
	I0731 18:08:37.022815   73479 cache_images.go:92] duration metric: took 14.002647196s to LoadCachedImages
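The lines above show each cached image tarball being transferred to the VM and loaded one at a time with `sudo podman load -i <tarball>`, with the total duration recorded at the end. Below is a rough, hypothetical Go sketch of that sequential pattern, not minikube's actual cache_images.go logic; the tarball paths are taken from the log, while the function name and error handling are assumptions.

```go
package main

import (
	"fmt"
	"log"
	"os/exec"
	"time"
)

// loadImages is an illustrative sketch: it loads each image tarball with
// `sudo podman load -i <path>` sequentially, mirroring the order seen in
// the log above.
func loadImages(paths []string) error {
	start := time.Now()
	for _, p := range paths {
		cmd := exec.Command("sudo", "podman", "load", "-i", p)
		if out, err := cmd.CombinedOutput(); err != nil {
			return fmt.Errorf("podman load %s: %v\n%s", p, err, out)
		}
	}
	fmt.Printf("loaded %d images in %s\n", len(paths), time.Since(start))
	return nil
}

func main() {
	// Paths as they appear on the remote VM in the log above.
	paths := []string{
		"/var/lib/minikube/images/kube-scheduler_v1.31.0-beta.0",
		"/var/lib/minikube/images/kube-controller-manager_v1.31.0-beta.0",
		"/var/lib/minikube/images/etcd_3.5.14-0",
		"/var/lib/minikube/images/storage-provisioner_v5",
	}
	if err := loadImages(paths); err != nil {
		log.Fatal(err)
	}
}
```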
	I0731 18:08:37.022829   73479 kubeadm.go:934] updating node { 192.168.61.126 8443 v1.31.0-beta.0 crio true true} ...
	I0731 18:08:37.022954   73479 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-673754 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.126
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0-beta.0 ClusterName:no-preload-673754 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0731 18:08:37.023035   73479 ssh_runner.go:195] Run: crio config
	I0731 18:08:37.064803   73479 cni.go:84] Creating CNI manager for ""
	I0731 18:08:37.064825   73479 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0731 18:08:37.064834   73479 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0731 18:08:37.064856   73479 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.126 APIServerPort:8443 KubernetesVersion:v1.31.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-673754 NodeName:no-preload-673754 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.126"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.126 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0731 18:08:37.065028   73479 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.126
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-673754"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.126
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.126"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0731 18:08:37.065108   73479 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0-beta.0
	I0731 18:08:37.077141   73479 binaries.go:44] Found k8s binaries, skipping transfer
	I0731 18:08:37.077215   73479 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0731 18:08:37.086553   73479 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (324 bytes)
	I0731 18:08:37.102646   73479 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I0731 18:08:37.118113   73479 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2168 bytes)
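The kubeadm config printed above is a single multi-document YAML stream (InitConfiguration, ClusterConfiguration, KubeletConfiguration and KubeProxyConfiguration, separated by `---`), which the log shows being written to /var/tmp/minikube/kubeadm.yaml.new. A quick, hypothetical way to sanity-check such a file is to walk its documents with gopkg.in/yaml.v3 and print each kind; the file path comes from the log, everything else in the sketch is illustrative.

```go
package main

import (
	"errors"
	"fmt"
	"io"
	"log"
	"os"

	"gopkg.in/yaml.v3"
)

func main() {
	// Path from the log above; this assumes the file is readable locally.
	f, err := os.Open("/var/tmp/minikube/kubeadm.yaml.new")
	if err != nil {
		log.Fatal(err)
	}
	defer f.Close()

	// Decode each YAML document in the stream and print its API group/kind.
	dec := yaml.NewDecoder(f)
	for {
		var doc map[string]interface{}
		if err := dec.Decode(&doc); err != nil {
			if errors.Is(err, io.EOF) {
				break
			}
			log.Fatal(err)
		}
		fmt.Printf("%v/%v\n", doc["apiVersion"], doc["kind"])
	}
}
```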
	I0731 18:08:37.134702   73479 ssh_runner.go:195] Run: grep 192.168.61.126	control-plane.minikube.internal$ /etc/hosts
	I0731 18:08:37.138593   73479 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.126	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0731 18:08:37.151319   73479 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0731 18:08:37.270019   73479 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0731 18:08:37.287378   73479 certs.go:68] Setting up /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/no-preload-673754 for IP: 192.168.61.126
	I0731 18:08:37.287400   73479 certs.go:194] generating shared ca certs ...
	I0731 18:08:37.287413   73479 certs.go:226] acquiring lock for ca certs: {Name:mk49eed439085f9c30746706460ce213570d1997 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 18:08:37.287540   73479 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19349-8084/.minikube/ca.key
	I0731 18:08:37.287577   73479 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19349-8084/.minikube/proxy-client-ca.key
	I0731 18:08:37.287584   73479 certs.go:256] generating profile certs ...
	I0731 18:08:37.287692   73479 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/no-preload-673754/client.key
	I0731 18:08:37.287761   73479 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/no-preload-673754/apiserver.key.3fff3ffc
	I0731 18:08:37.287803   73479 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/no-preload-673754/proxy-client.key
	I0731 18:08:37.287938   73479 certs.go:484] found cert: /home/jenkins/minikube-integration/19349-8084/.minikube/certs/15259.pem (1338 bytes)
	W0731 18:08:37.287973   73479 certs.go:480] ignoring /home/jenkins/minikube-integration/19349-8084/.minikube/certs/15259_empty.pem, impossibly tiny 0 bytes
	I0731 18:08:37.287985   73479 certs.go:484] found cert: /home/jenkins/minikube-integration/19349-8084/.minikube/certs/ca-key.pem (1675 bytes)
	I0731 18:08:37.288020   73479 certs.go:484] found cert: /home/jenkins/minikube-integration/19349-8084/.minikube/certs/ca.pem (1082 bytes)
	I0731 18:08:37.288049   73479 certs.go:484] found cert: /home/jenkins/minikube-integration/19349-8084/.minikube/certs/cert.pem (1123 bytes)
	I0731 18:08:37.288079   73479 certs.go:484] found cert: /home/jenkins/minikube-integration/19349-8084/.minikube/certs/key.pem (1675 bytes)
	I0731 18:08:37.288143   73479 certs.go:484] found cert: /home/jenkins/minikube-integration/19349-8084/.minikube/files/etc/ssl/certs/152592.pem (1708 bytes)
	I0731 18:08:37.288831   73479 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19349-8084/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0731 18:08:37.334317   73479 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19349-8084/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0731 18:08:37.370553   73479 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19349-8084/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0731 18:08:37.403436   73479 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19349-8084/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0731 18:08:37.449133   73479 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/no-preload-673754/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0731 18:08:37.486169   73479 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/no-preload-673754/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0731 18:08:37.517241   73479 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/no-preload-673754/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0731 18:08:37.541089   73479 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/no-preload-673754/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0731 18:08:37.563068   73479 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19349-8084/.minikube/certs/15259.pem --> /usr/share/ca-certificates/15259.pem (1338 bytes)
	I0731 18:08:37.585396   73479 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19349-8084/.minikube/files/etc/ssl/certs/152592.pem --> /usr/share/ca-certificates/152592.pem (1708 bytes)
	I0731 18:08:37.608142   73479 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19349-8084/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0731 18:08:37.630178   73479 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0731 18:08:37.645994   73479 ssh_runner.go:195] Run: openssl version
	I0731 18:08:37.651663   73479 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/15259.pem && ln -fs /usr/share/ca-certificates/15259.pem /etc/ssl/certs/15259.pem"
	I0731 18:08:37.661494   73479 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/15259.pem
	I0731 18:08:37.665519   73479 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 31 16:54 /usr/share/ca-certificates/15259.pem
	I0731 18:08:37.665575   73479 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/15259.pem
	I0731 18:08:37.671143   73479 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/15259.pem /etc/ssl/certs/51391683.0"
	I0731 18:08:37.681076   73479 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/152592.pem && ln -fs /usr/share/ca-certificates/152592.pem /etc/ssl/certs/152592.pem"
	I0731 18:08:37.692253   73479 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/152592.pem
	I0731 18:08:37.696802   73479 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 31 16:54 /usr/share/ca-certificates/152592.pem
	I0731 18:08:37.696850   73479 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/152592.pem
	I0731 18:08:37.702282   73479 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/152592.pem /etc/ssl/certs/3ec20f2e.0"
	I0731 18:08:37.713051   73479 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0731 18:08:37.723644   73479 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0731 18:08:37.728170   73479 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 31 16:42 /usr/share/ca-certificates/minikubeCA.pem
	I0731 18:08:37.728225   73479 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0731 18:08:37.733912   73479 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0731 18:08:37.744004   73479 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0731 18:08:37.748076   73479 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0731 18:08:37.753645   73479 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0731 18:08:37.759077   73479 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0731 18:08:37.764344   73479 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0731 18:08:37.769735   73479 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0731 18:08:37.775894   73479 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
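Each certificate above is validated with `openssl x509 -checkend 86400`, which exits non-zero if the certificate expires within the next 86,400 seconds (24 hours). Below is a minimal Go equivalent using crypto/x509; the certificate path is one of those from the log, and the helper name is made up for illustration.

```go
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"log"
	"os"
	"time"
)

// expiresWithin reports whether the PEM-encoded certificate at path expires
// within d, mirroring `openssl x509 -checkend <seconds>`.
func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM block in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	// One of the certificates checked in the log above.
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("expires within 24h:", soon)
}
```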
	I0731 18:08:37.781699   73479 kubeadm.go:392] StartCluster: {Name:no-preload-673754 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0-beta.0 ClusterName:no-preload-673754 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.126 Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0731 18:08:37.781771   73479 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0731 18:08:37.781833   73479 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0731 18:08:37.825614   73479 cri.go:89] found id: ""
	I0731 18:08:37.825685   73479 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0731 18:08:37.835584   73479 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0731 18:08:37.835604   73479 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0731 18:08:37.835659   73479 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0731 18:08:37.844529   73479 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0731 18:08:37.845534   73479 kubeconfig.go:125] found "no-preload-673754" server: "https://192.168.61.126:8443"
	I0731 18:08:37.847698   73479 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0731 18:08:37.856360   73479 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.61.126
	I0731 18:08:37.856386   73479 kubeadm.go:1160] stopping kube-system containers ...
	I0731 18:08:37.856396   73479 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0731 18:08:37.856440   73479 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0731 18:08:37.894614   73479 cri.go:89] found id: ""
	I0731 18:08:37.894689   73479 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0731 18:08:37.910921   73479 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0731 18:08:37.919796   73479 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0731 18:08:37.919814   73479 kubeadm.go:157] found existing configuration files:
	
	I0731 18:08:37.919859   73479 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0731 18:08:37.928562   73479 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0731 18:08:37.928617   73479 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0731 18:08:37.937099   73479 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0731 18:08:37.945298   73479 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0731 18:08:37.945378   73479 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0731 18:08:37.953976   73479 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0731 18:08:37.962069   73479 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0731 18:08:37.962119   73479 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0731 18:08:37.970719   73479 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0731 18:08:37.979265   73479 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0731 18:08:37.979318   73479 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0731 18:08:37.988286   73479 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0731 18:08:37.997742   73479 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0731 18:08:38.105503   73479 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0731 18:08:39.403672   73479 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.298131314s)
	I0731 18:08:39.403710   73479 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0731 18:08:39.609739   73479 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0731 18:08:39.677484   73479 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0731 18:08:39.773387   73479 api_server.go:52] waiting for apiserver process to appear ...
	I0731 18:08:39.773469   73479 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:08:40.274185   73479 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:08:40.774562   73479 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:08:40.792346   73479 api_server.go:72] duration metric: took 1.018961231s to wait for apiserver process to appear ...
	I0731 18:08:40.792368   73479 api_server.go:88] waiting for apiserver healthz status ...
	I0731 18:08:40.792384   73479 api_server.go:253] Checking apiserver healthz at https://192.168.61.126:8443/healthz ...
	I0731 18:08:41.271890   73800 pod_ready.go:102] pod "metrics-server-569cc877fc-fzxrw" in "kube-system" namespace has status "Ready":"False"
	I0731 18:08:43.771546   73800 pod_ready.go:102] pod "metrics-server-569cc877fc-fzxrw" in "kube-system" namespace has status "Ready":"False"
	I0731 18:08:43.476911   73479 api_server.go:279] https://192.168.61.126:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0731 18:08:43.476938   73479 api_server.go:103] status: https://192.168.61.126:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0731 18:08:43.476952   73479 api_server.go:253] Checking apiserver healthz at https://192.168.61.126:8443/healthz ...
	I0731 18:08:43.536762   73479 api_server.go:279] https://192.168.61.126:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0731 18:08:43.536794   73479 api_server.go:103] status: https://192.168.61.126:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0731 18:08:43.793157   73479 api_server.go:253] Checking apiserver healthz at https://192.168.61.126:8443/healthz ...
	I0731 18:08:43.798895   73479 api_server.go:279] https://192.168.61.126:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0731 18:08:43.798924   73479 api_server.go:103] status: https://192.168.61.126:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0731 18:08:44.292527   73479 api_server.go:253] Checking apiserver healthz at https://192.168.61.126:8443/healthz ...
	I0731 18:08:44.300596   73479 api_server.go:279] https://192.168.61.126:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0731 18:08:44.300632   73479 api_server.go:103] status: https://192.168.61.126:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0731 18:08:44.793206   73479 api_server.go:253] Checking apiserver healthz at https://192.168.61.126:8443/healthz ...
	I0731 18:08:44.797982   73479 api_server.go:279] https://192.168.61.126:8443/healthz returned 200:
	ok
	I0731 18:08:44.806150   73479 api_server.go:141] control plane version: v1.31.0-beta.0
	I0731 18:08:44.806172   73479 api_server.go:131] duration metric: took 4.013797537s to wait for apiserver health ...
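The healthz probes above progress from 403 (the unauthenticated probe is rejected, likely because the RBAC bootstrap roles that permit anonymous access to /healthz are not yet in place), through 500 (the rbac/bootstrap-roles and scheduling/bootstrap-system-priority-classes post-start hooks are still failing), to a final 200 with body "ok". Below is a minimal sketch of such a wait loop, assuming the endpoint from the log, skipped TLS verification because no client certificate is presented, and an arbitrary 500ms poll interval; it is not the harness's api_server.go implementation.

```go
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"log"
	"net/http"
	"time"
)

// waitForHealthz polls the apiserver /healthz endpoint until it answers
// 200 "ok" or the deadline passes, treating 403/500 as "not ready yet".
func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Transport: &http.Transport{
			// No client certificate is presented, so skip verification of the
			// apiserver's self-signed serving certificate.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
		Timeout: 5 * time.Second,
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				fmt.Printf("healthz: %s\n", body)
				return nil
			}
		}
		time.Sleep(500 * time.Millisecond) // poll interval is an assumption
	}
	return fmt.Errorf("apiserver not healthy after %s", timeout)
}

func main() {
	if err := waitForHealthz("https://192.168.61.126:8443/healthz", 4*time.Minute); err != nil {
		log.Fatal(err)
	}
}
```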
	I0731 18:08:44.806183   73479 cni.go:84] Creating CNI manager for ""
	I0731 18:08:44.806191   73479 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0731 18:08:44.807774   73479 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0731 18:08:40.478967   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:08:40.978610   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:08:41.479192   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:08:41.978737   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:08:42.479051   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:08:42.978274   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:08:43.478957   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:08:43.978973   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:08:44.478269   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:08:44.978737   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:08:42.778330   73696 pod_ready.go:102] pod "metrics-server-569cc877fc-64hp4" in "kube-system" namespace has status "Ready":"False"
	I0731 18:08:44.779163   73696 pod_ready.go:102] pod "metrics-server-569cc877fc-64hp4" in "kube-system" namespace has status "Ready":"False"
	I0731 18:08:44.809068   73479 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0731 18:08:44.823284   73479 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0731 18:08:44.878894   73479 system_pods.go:43] waiting for kube-system pods to appear ...
	I0731 18:08:44.892969   73479 system_pods.go:59] 8 kube-system pods found
	I0731 18:08:44.893020   73479 system_pods.go:61] "coredns-5cfdc65f69-k7clq" [e4be77b6-aa7c-45e2-90a1-6a8264fd5101] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0731 18:08:44.893031   73479 system_pods.go:61] "etcd-no-preload-673754" [d64e9194-dd33-4a28-b8c3-d25462f1dae8] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0731 18:08:44.893042   73479 system_pods.go:61] "kube-apiserver-no-preload-673754" [4497614d-51de-4a5f-93dc-1397446fe4c8] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0731 18:08:44.893055   73479 system_pods.go:61] "kube-controller-manager-no-preload-673754" [fe7f714e-d778-49e7-b4ef-44e6efb5bc33] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0731 18:08:44.893067   73479 system_pods.go:61] "kube-proxy-hqxh6" [4fb623c0-4be3-421c-85d6-1d76a90b874f] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0731 18:08:44.893078   73479 system_pods.go:61] "kube-scheduler-no-preload-673754" [558e9b78-a6a9-46b8-9db8-004126359c74] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0731 18:08:44.893088   73479 system_pods.go:61] "metrics-server-78fcd8795b-27pkr" [a63a156b-0446-4bd9-8619-de75edaeb481] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0731 18:08:44.893098   73479 system_pods.go:61] "storage-provisioner" [a9f0ab70-18f4-4fac-b858-a9177077fe29] Running
	I0731 18:08:44.893109   73479 system_pods.go:74] duration metric: took 14.191984ms to wait for pod list to return data ...
	I0731 18:08:44.893120   73479 node_conditions.go:102] verifying NodePressure condition ...
	I0731 18:08:44.908236   73479 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0731 18:08:44.908270   73479 node_conditions.go:123] node cpu capacity is 2
	I0731 18:08:44.908283   73479 node_conditions.go:105] duration metric: took 15.154491ms to run NodePressure ...
	I0731 18:08:44.908307   73479 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0731 18:08:45.248571   73479 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0731 18:08:45.252305   73479 kubeadm.go:739] kubelet initialised
	I0731 18:08:45.252332   73479 kubeadm.go:740] duration metric: took 3.734022ms waiting for restarted kubelet to initialise ...
	I0731 18:08:45.252342   73479 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0731 18:08:45.256748   73479 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5cfdc65f69-k7clq" in "kube-system" namespace to be "Ready" ...
	I0731 18:08:45.261130   73479 pod_ready.go:97] node "no-preload-673754" hosting pod "coredns-5cfdc65f69-k7clq" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-673754" has status "Ready":"False"
	I0731 18:08:45.261149   73479 pod_ready.go:81] duration metric: took 4.373068ms for pod "coredns-5cfdc65f69-k7clq" in "kube-system" namespace to be "Ready" ...
	E0731 18:08:45.261157   73479 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-673754" hosting pod "coredns-5cfdc65f69-k7clq" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-673754" has status "Ready":"False"
	I0731 18:08:45.261162   73479 pod_ready.go:78] waiting up to 4m0s for pod "etcd-no-preload-673754" in "kube-system" namespace to be "Ready" ...
	I0731 18:08:45.265115   73479 pod_ready.go:97] node "no-preload-673754" hosting pod "etcd-no-preload-673754" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-673754" has status "Ready":"False"
	I0731 18:08:45.265135   73479 pod_ready.go:81] duration metric: took 3.965586ms for pod "etcd-no-preload-673754" in "kube-system" namespace to be "Ready" ...
	E0731 18:08:45.265142   73479 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-673754" hosting pod "etcd-no-preload-673754" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-673754" has status "Ready":"False"
	I0731 18:08:45.265147   73479 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-no-preload-673754" in "kube-system" namespace to be "Ready" ...
	I0731 18:08:45.269566   73479 pod_ready.go:97] node "no-preload-673754" hosting pod "kube-apiserver-no-preload-673754" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-673754" has status "Ready":"False"
	I0731 18:08:45.269585   73479 pod_ready.go:81] duration metric: took 4.431367ms for pod "kube-apiserver-no-preload-673754" in "kube-system" namespace to be "Ready" ...
	E0731 18:08:45.269595   73479 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-673754" hosting pod "kube-apiserver-no-preload-673754" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-673754" has status "Ready":"False"
	I0731 18:08:45.269603   73479 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-no-preload-673754" in "kube-system" namespace to be "Ready" ...
	I0731 18:08:45.281026   73479 pod_ready.go:97] node "no-preload-673754" hosting pod "kube-controller-manager-no-preload-673754" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-673754" has status "Ready":"False"
	I0731 18:08:45.281048   73479 pod_ready.go:81] duration metric: took 11.435327ms for pod "kube-controller-manager-no-preload-673754" in "kube-system" namespace to be "Ready" ...
	E0731 18:08:45.281057   73479 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-673754" hosting pod "kube-controller-manager-no-preload-673754" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-673754" has status "Ready":"False"
	I0731 18:08:45.281065   73479 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-hqxh6" in "kube-system" namespace to be "Ready" ...
	I0731 18:08:45.684313   73479 pod_ready.go:97] node "no-preload-673754" hosting pod "kube-proxy-hqxh6" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-673754" has status "Ready":"False"
	I0731 18:08:45.684347   73479 pod_ready.go:81] duration metric: took 403.272559ms for pod "kube-proxy-hqxh6" in "kube-system" namespace to be "Ready" ...
	E0731 18:08:45.684356   73479 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-673754" hosting pod "kube-proxy-hqxh6" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-673754" has status "Ready":"False"
	I0731 18:08:45.684362   73479 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-no-preload-673754" in "kube-system" namespace to be "Ready" ...
	I0731 18:08:46.082388   73479 pod_ready.go:97] node "no-preload-673754" hosting pod "kube-scheduler-no-preload-673754" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-673754" has status "Ready":"False"
	I0731 18:08:46.082419   73479 pod_ready.go:81] duration metric: took 398.048808ms for pod "kube-scheduler-no-preload-673754" in "kube-system" namespace to be "Ready" ...
	E0731 18:08:46.082432   73479 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-673754" hosting pod "kube-scheduler-no-preload-673754" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-673754" has status "Ready":"False"
	I0731 18:08:46.082442   73479 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-78fcd8795b-27pkr" in "kube-system" namespace to be "Ready" ...
	I0731 18:08:46.482445   73479 pod_ready.go:97] node "no-preload-673754" hosting pod "metrics-server-78fcd8795b-27pkr" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-673754" has status "Ready":"False"
	I0731 18:08:46.482472   73479 pod_ready.go:81] duration metric: took 400.02111ms for pod "metrics-server-78fcd8795b-27pkr" in "kube-system" namespace to be "Ready" ...
	E0731 18:08:46.482486   73479 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-673754" hosting pod "metrics-server-78fcd8795b-27pkr" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-673754" has status "Ready":"False"
	I0731 18:08:46.482493   73479 pod_ready.go:38] duration metric: took 1.230141723s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
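The pod_ready checks above poll each system-critical pod and short-circuit with a "skipping!" message whenever the hosting node itself is not yet "Ready". For orientation, here is a hypothetical client-go sketch of a single such readiness check; the kubeconfig path, namespace and pod name are taken from the log, while the function name and the direct use of client-go are assumptions (the test harness has its own pod_ready.go helpers).

```go
package main

import (
	"context"
	"fmt"
	"log"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// podIsReady reports whether the pod's PodReady condition is True, which is
// what the "Ready" status in the log lines above refers to.
func podIsReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	// Kubeconfig path as written by the run above.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/19349-8084/kubeconfig")
	if err != nil {
		log.Fatal(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}
	pod, err := cs.CoreV1().Pods("kube-system").Get(context.TODO(), "coredns-5cfdc65f69-k7clq", metav1.GetOptions{})
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("ready:", podIsReady(pod))
}
```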
	I0731 18:08:46.482509   73479 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0731 18:08:46.495481   73479 ops.go:34] apiserver oom_adj: -16
	I0731 18:08:46.495502   73479 kubeadm.go:597] duration metric: took 8.65989212s to restartPrimaryControlPlane
	I0731 18:08:46.495513   73479 kubeadm.go:394] duration metric: took 8.71382049s to StartCluster
	I0731 18:08:46.495533   73479 settings.go:142] acquiring lock: {Name:mk8ac8269296bf0b887a00ff74caaad180005f89 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 18:08:46.495615   73479 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19349-8084/kubeconfig
	I0731 18:08:46.497426   73479 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19349-8084/kubeconfig: {Name:mkf2340bc1ada3c683f132aca37228d7588e1fdb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 18:08:46.497742   73479 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.61.126 Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0731 18:08:46.497816   73479 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0731 18:08:46.497911   73479 addons.go:69] Setting storage-provisioner=true in profile "no-preload-673754"
	I0731 18:08:46.497929   73479 addons.go:69] Setting default-storageclass=true in profile "no-preload-673754"
	I0731 18:08:46.497956   73479 addons.go:69] Setting metrics-server=true in profile "no-preload-673754"
	I0731 18:08:46.497973   73479 config.go:182] Loaded profile config "no-preload-673754": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0-beta.0
	I0731 18:08:46.497979   73479 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-673754"
	I0731 18:08:46.497988   73479 addons.go:234] Setting addon metrics-server=true in "no-preload-673754"
	W0731 18:08:46.498008   73479 addons.go:243] addon metrics-server should already be in state true
	I0731 18:08:46.497946   73479 addons.go:234] Setting addon storage-provisioner=true in "no-preload-673754"
	I0731 18:08:46.498056   73479 host.go:66] Checking if "no-preload-673754" exists ...
	W0731 18:08:46.498064   73479 addons.go:243] addon storage-provisioner should already be in state true
	I0731 18:08:46.498109   73479 host.go:66] Checking if "no-preload-673754" exists ...
	I0731 18:08:46.498315   73479 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 18:08:46.498315   73479 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 18:08:46.498333   73479 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 18:08:46.498340   73479 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 18:08:46.498448   73479 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 18:08:46.498470   73479 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 18:08:46.501144   73479 out.go:177] * Verifying Kubernetes components...
	I0731 18:08:46.502755   73479 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0731 18:08:46.514922   73479 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46797
	I0731 18:08:46.514923   73479 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44241
	I0731 18:08:46.515418   73479 main.go:141] libmachine: () Calling .GetVersion
	I0731 18:08:46.515618   73479 main.go:141] libmachine: () Calling .GetVersion
	I0731 18:08:46.515928   73479 main.go:141] libmachine: Using API Version  1
	I0731 18:08:46.515950   73479 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 18:08:46.516066   73479 main.go:141] libmachine: Using API Version  1
	I0731 18:08:46.516089   73479 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 18:08:46.516370   73479 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40219
	I0731 18:08:46.516440   73479 main.go:141] libmachine: () Calling .GetMachineName
	I0731 18:08:46.516663   73479 main.go:141] libmachine: () Calling .GetMachineName
	I0731 18:08:46.516809   73479 main.go:141] libmachine: () Calling .GetVersion
	I0731 18:08:46.516811   73479 main.go:141] libmachine: (no-preload-673754) Calling .GetState
	I0731 18:08:46.517213   73479 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 18:08:46.517247   73479 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 18:08:46.517280   73479 main.go:141] libmachine: Using API Version  1
	I0731 18:08:46.517302   73479 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 18:08:46.517618   73479 main.go:141] libmachine: () Calling .GetMachineName
	I0731 18:08:46.518191   73479 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 18:08:46.518220   73479 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 18:08:46.520511   73479 addons.go:234] Setting addon default-storageclass=true in "no-preload-673754"
	W0731 18:08:46.520536   73479 addons.go:243] addon default-storageclass should already be in state true
	I0731 18:08:46.520566   73479 host.go:66] Checking if "no-preload-673754" exists ...
	I0731 18:08:46.520917   73479 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 18:08:46.520968   73479 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 18:08:46.533349   73479 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38669
	I0731 18:08:46.533802   73479 main.go:141] libmachine: () Calling .GetVersion
	I0731 18:08:46.534250   73479 main.go:141] libmachine: Using API Version  1
	I0731 18:08:46.534272   73479 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 18:08:46.534582   73479 main.go:141] libmachine: () Calling .GetMachineName
	I0731 18:08:46.534720   73479 main.go:141] libmachine: (no-preload-673754) Calling .GetState
	I0731 18:08:46.535556   73479 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46055
	I0731 18:08:46.535979   73479 main.go:141] libmachine: () Calling .GetVersion
	I0731 18:08:46.536648   73479 main.go:141] libmachine: Using API Version  1
	I0731 18:08:46.536667   73479 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 18:08:46.537080   73479 main.go:141] libmachine: () Calling .GetMachineName
	I0731 18:08:46.537331   73479 main.go:141] libmachine: (no-preload-673754) Calling .DriverName
	I0731 18:08:46.537398   73479 main.go:141] libmachine: (no-preload-673754) Calling .GetState
	I0731 18:08:46.538365   73479 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39435
	I0731 18:08:46.538929   73479 main.go:141] libmachine: () Calling .GetVersion
	I0731 18:08:46.539194   73479 main.go:141] libmachine: (no-preload-673754) Calling .DriverName
	I0731 18:08:46.539401   73479 main.go:141] libmachine: Using API Version  1
	I0731 18:08:46.539419   73479 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 18:08:46.539766   73479 main.go:141] libmachine: () Calling .GetMachineName
	I0731 18:08:46.540360   73479 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0731 18:08:46.540447   73479 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 18:08:46.540801   73479 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 18:08:46.541139   73479 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0731 18:08:46.541916   73479 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0731 18:08:46.541932   73479 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0731 18:08:46.541952   73479 main.go:141] libmachine: (no-preload-673754) Calling .GetSSHHostname
	I0731 18:08:46.542506   73479 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0731 18:08:46.542524   73479 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0731 18:08:46.542541   73479 main.go:141] libmachine: (no-preload-673754) Calling .GetSSHHostname
	I0731 18:08:46.545293   73479 main.go:141] libmachine: (no-preload-673754) DBG | domain no-preload-673754 has defined MAC address 52:54:00:5a:ec:78 in network mk-no-preload-673754
	I0731 18:08:46.545631   73479 main.go:141] libmachine: (no-preload-673754) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:ec:78", ip: ""} in network mk-no-preload-673754: {Iface:virbr2 ExpiryTime:2024-07-31 19:08:12 +0000 UTC Type:0 Mac:52:54:00:5a:ec:78 Iaid: IPaddr:192.168.61.126 Prefix:24 Hostname:no-preload-673754 Clientid:01:52:54:00:5a:ec:78}
	I0731 18:08:46.545759   73479 main.go:141] libmachine: (no-preload-673754) DBG | domain no-preload-673754 has defined IP address 192.168.61.126 and MAC address 52:54:00:5a:ec:78 in network mk-no-preload-673754
	I0731 18:08:46.545829   73479 main.go:141] libmachine: (no-preload-673754) Calling .GetSSHPort
	I0731 18:08:46.545985   73479 main.go:141] libmachine: (no-preload-673754) Calling .GetSSHKeyPath
	I0731 18:08:46.546116   73479 main.go:141] libmachine: (no-preload-673754) Calling .GetSSHUsername
	I0731 18:08:46.546256   73479 sshutil.go:53] new ssh client: &{IP:192.168.61.126 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19349-8084/.minikube/machines/no-preload-673754/id_rsa Username:docker}
	I0731 18:08:46.546384   73479 main.go:141] libmachine: (no-preload-673754) DBG | domain no-preload-673754 has defined MAC address 52:54:00:5a:ec:78 in network mk-no-preload-673754
	I0731 18:08:46.546888   73479 main.go:141] libmachine: (no-preload-673754) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:ec:78", ip: ""} in network mk-no-preload-673754: {Iface:virbr2 ExpiryTime:2024-07-31 19:08:12 +0000 UTC Type:0 Mac:52:54:00:5a:ec:78 Iaid: IPaddr:192.168.61.126 Prefix:24 Hostname:no-preload-673754 Clientid:01:52:54:00:5a:ec:78}
	I0731 18:08:46.546907   73479 main.go:141] libmachine: (no-preload-673754) DBG | domain no-preload-673754 has defined IP address 192.168.61.126 and MAC address 52:54:00:5a:ec:78 in network mk-no-preload-673754
	I0731 18:08:46.546924   73479 main.go:141] libmachine: (no-preload-673754) Calling .GetSSHPort
	I0731 18:08:46.547090   73479 main.go:141] libmachine: (no-preload-673754) Calling .GetSSHKeyPath
	I0731 18:08:46.547256   73479 main.go:141] libmachine: (no-preload-673754) Calling .GetSSHUsername
	I0731 18:08:46.547434   73479 sshutil.go:53] new ssh client: &{IP:192.168.61.126 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19349-8084/.minikube/machines/no-preload-673754/id_rsa Username:docker}
	I0731 18:08:46.570759   73479 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40485
	I0731 18:08:46.571222   73479 main.go:141] libmachine: () Calling .GetVersion
	I0731 18:08:46.571668   73479 main.go:141] libmachine: Using API Version  1
	I0731 18:08:46.571688   73479 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 18:08:46.572207   73479 main.go:141] libmachine: () Calling .GetMachineName
	I0731 18:08:46.572367   73479 main.go:141] libmachine: (no-preload-673754) Calling .GetState
	I0731 18:08:46.574368   73479 main.go:141] libmachine: (no-preload-673754) Calling .DriverName
	I0731 18:08:46.574582   73479 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0731 18:08:46.574607   73479 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0731 18:08:46.574627   73479 main.go:141] libmachine: (no-preload-673754) Calling .GetSSHHostname
	I0731 18:08:46.577768   73479 main.go:141] libmachine: (no-preload-673754) DBG | domain no-preload-673754 has defined MAC address 52:54:00:5a:ec:78 in network mk-no-preload-673754
	I0731 18:08:46.578542   73479 main.go:141] libmachine: (no-preload-673754) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:ec:78", ip: ""} in network mk-no-preload-673754: {Iface:virbr2 ExpiryTime:2024-07-31 19:08:12 +0000 UTC Type:0 Mac:52:54:00:5a:ec:78 Iaid: IPaddr:192.168.61.126 Prefix:24 Hostname:no-preload-673754 Clientid:01:52:54:00:5a:ec:78}
	I0731 18:08:46.578567   73479 main.go:141] libmachine: (no-preload-673754) DBG | domain no-preload-673754 has defined IP address 192.168.61.126 and MAC address 52:54:00:5a:ec:78 in network mk-no-preload-673754
	I0731 18:08:46.578741   73479 main.go:141] libmachine: (no-preload-673754) Calling .GetSSHPort
	I0731 18:08:46.578911   73479 main.go:141] libmachine: (no-preload-673754) Calling .GetSSHKeyPath
	I0731 18:08:46.579047   73479 main.go:141] libmachine: (no-preload-673754) Calling .GetSSHUsername
	I0731 18:08:46.579459   73479 sshutil.go:53] new ssh client: &{IP:192.168.61.126 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19349-8084/.minikube/machines/no-preload-673754/id_rsa Username:docker}
	I0731 18:08:46.700752   73479 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0731 18:08:46.720967   73479 node_ready.go:35] waiting up to 6m0s for node "no-preload-673754" to be "Ready" ...
	I0731 18:08:46.798188   73479 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0731 18:08:46.802534   73479 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0731 18:08:46.802564   73479 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0731 18:08:46.828038   73479 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0731 18:08:46.859309   73479 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0731 18:08:46.859337   73479 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0731 18:08:46.921507   73479 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0731 18:08:46.921536   73479 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0731 18:08:46.958759   73479 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0731 18:08:48.106542   73479 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.278462071s)
	I0731 18:08:48.106599   73479 main.go:141] libmachine: Making call to close driver server
	I0731 18:08:48.106608   73479 main.go:141] libmachine: (no-preload-673754) Calling .Close
	I0731 18:08:48.107151   73479 main.go:141] libmachine: Successfully made call to close driver server
	I0731 18:08:48.107177   73479 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 18:08:48.107187   73479 main.go:141] libmachine: Making call to close driver server
	I0731 18:08:48.107196   73479 main.go:141] libmachine: (no-preload-673754) Calling .Close
	I0731 18:08:48.107601   73479 main.go:141] libmachine: (no-preload-673754) DBG | Closing plugin on server side
	I0731 18:08:48.107604   73479 main.go:141] libmachine: Successfully made call to close driver server
	I0731 18:08:48.107631   73479 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 18:08:48.107831   73479 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.309610972s)
	I0731 18:08:48.107872   73479 main.go:141] libmachine: Making call to close driver server
	I0731 18:08:48.107882   73479 main.go:141] libmachine: (no-preload-673754) Calling .Close
	I0731 18:08:48.108105   73479 main.go:141] libmachine: Successfully made call to close driver server
	I0731 18:08:48.108119   73479 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 18:08:48.108138   73479 main.go:141] libmachine: Making call to close driver server
	I0731 18:08:48.108150   73479 main.go:141] libmachine: (no-preload-673754) Calling .Close
	I0731 18:08:48.108351   73479 main.go:141] libmachine: Successfully made call to close driver server
	I0731 18:08:48.108367   73479 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 18:08:48.118038   73479 main.go:141] libmachine: Making call to close driver server
	I0731 18:08:48.118055   73479 main.go:141] libmachine: (no-preload-673754) Calling .Close
	I0731 18:08:48.118329   73479 main.go:141] libmachine: Successfully made call to close driver server
	I0731 18:08:48.118349   73479 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 18:08:48.128563   73479 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.169765123s)
	I0731 18:08:48.128606   73479 main.go:141] libmachine: Making call to close driver server
	I0731 18:08:48.128619   73479 main.go:141] libmachine: (no-preload-673754) Calling .Close
	I0731 18:08:48.128901   73479 main.go:141] libmachine: Successfully made call to close driver server
	I0731 18:08:48.128915   73479 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 18:08:48.128924   73479 main.go:141] libmachine: Making call to close driver server
	I0731 18:08:48.128932   73479 main.go:141] libmachine: (no-preload-673754) Calling .Close
	I0731 18:08:48.129137   73479 main.go:141] libmachine: Successfully made call to close driver server
	I0731 18:08:48.129152   73479 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 18:08:48.129162   73479 addons.go:475] Verifying addon metrics-server=true in "no-preload-673754"
	I0731 18:08:48.129174   73479 main.go:141] libmachine: (no-preload-673754) DBG | Closing plugin on server side
	I0731 18:08:48.130887   73479 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0731 18:08:46.271648   73800 pod_ready.go:102] pod "metrics-server-569cc877fc-fzxrw" in "kube-system" namespace has status "Ready":"False"
	I0731 18:08:48.271754   73800 pod_ready.go:102] pod "metrics-server-569cc877fc-fzxrw" in "kube-system" namespace has status "Ready":"False"
	I0731 18:08:45.478411   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:08:45.978802   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:08:46.478407   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:08:46.978134   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:08:47.479125   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:08:47.978991   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:08:48.478597   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:08:48.978742   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:08:49.479320   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:08:49.978288   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:08:46.779263   73696 pod_ready.go:102] pod "metrics-server-569cc877fc-64hp4" in "kube-system" namespace has status "Ready":"False"
	I0731 18:08:48.779361   73696 pod_ready.go:102] pod "metrics-server-569cc877fc-64hp4" in "kube-system" namespace has status "Ready":"False"
	I0731 18:08:48.131964   73479 addons.go:510] duration metric: took 1.634151286s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I0731 18:08:48.725682   73479 node_ready.go:53] node "no-preload-673754" has status "Ready":"False"
	I0731 18:08:51.231081   73479 node_ready.go:53] node "no-preload-673754" has status "Ready":"False"
	I0731 18:08:50.771387   73800 pod_ready.go:102] pod "metrics-server-569cc877fc-fzxrw" in "kube-system" namespace has status "Ready":"False"
	I0731 18:08:52.771438   73800 pod_ready.go:102] pod "metrics-server-569cc877fc-fzxrw" in "kube-system" namespace has status "Ready":"False"
	I0731 18:08:50.478112   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:08:50.978272   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:08:51.478319   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:08:51.978880   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:08:52.479176   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:08:52.979001   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:08:53.478508   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:08:53.978517   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:08:54.478857   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:08:54.978290   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:08:51.278348   73696 pod_ready.go:102] pod "metrics-server-569cc877fc-64hp4" in "kube-system" namespace has status "Ready":"False"
	I0731 18:08:53.278456   73696 pod_ready.go:102] pod "metrics-server-569cc877fc-64hp4" in "kube-system" namespace has status "Ready":"False"
	I0731 18:08:55.278495   73696 pod_ready.go:102] pod "metrics-server-569cc877fc-64hp4" in "kube-system" namespace has status "Ready":"False"
	I0731 18:08:53.725153   73479 node_ready.go:53] node "no-preload-673754" has status "Ready":"False"
	I0731 18:08:54.224475   73479 node_ready.go:49] node "no-preload-673754" has status "Ready":"True"
	I0731 18:08:54.224505   73479 node_ready.go:38] duration metric: took 7.503503116s for node "no-preload-673754" to be "Ready" ...
	I0731 18:08:54.224517   73479 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0731 18:08:54.231434   73479 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5cfdc65f69-k7clq" in "kube-system" namespace to be "Ready" ...
	I0731 18:08:56.237804   73479 pod_ready.go:102] pod "coredns-5cfdc65f69-k7clq" in "kube-system" namespace has status "Ready":"False"
	I0731 18:08:54.772597   73800 pod_ready.go:102] pod "metrics-server-569cc877fc-fzxrw" in "kube-system" namespace has status "Ready":"False"
	I0731 18:08:57.271778   73800 pod_ready.go:102] pod "metrics-server-569cc877fc-fzxrw" in "kube-system" namespace has status "Ready":"False"
	I0731 18:08:55.478727   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:08:55.978552   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:08:56.478246   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:08:56.978732   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:08:57.478262   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:08:57.978216   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:08:58.478212   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:08:58.978406   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:08:59.478270   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:08:59.978221   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:08:57.781459   73696 pod_ready.go:102] pod "metrics-server-569cc877fc-64hp4" in "kube-system" namespace has status "Ready":"False"
	I0731 18:09:00.278913   73696 pod_ready.go:102] pod "metrics-server-569cc877fc-64hp4" in "kube-system" namespace has status "Ready":"False"
	I0731 18:08:58.740148   73479 pod_ready.go:102] pod "coredns-5cfdc65f69-k7clq" in "kube-system" namespace has status "Ready":"False"
	I0731 18:09:01.237849   73479 pod_ready.go:92] pod "coredns-5cfdc65f69-k7clq" in "kube-system" namespace has status "Ready":"True"
	I0731 18:09:01.237874   73479 pod_ready.go:81] duration metric: took 7.00641308s for pod "coredns-5cfdc65f69-k7clq" in "kube-system" namespace to be "Ready" ...
	I0731 18:09:01.237887   73479 pod_ready.go:78] waiting up to 6m0s for pod "etcd-no-preload-673754" in "kube-system" namespace to be "Ready" ...
	I0731 18:09:01.242105   73479 pod_ready.go:92] pod "etcd-no-preload-673754" in "kube-system" namespace has status "Ready":"True"
	I0731 18:09:01.242122   73479 pod_ready.go:81] duration metric: took 4.229266ms for pod "etcd-no-preload-673754" in "kube-system" namespace to be "Ready" ...
	I0731 18:09:01.242133   73479 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-no-preload-673754" in "kube-system" namespace to be "Ready" ...
	I0731 18:09:01.246652   73479 pod_ready.go:92] pod "kube-apiserver-no-preload-673754" in "kube-system" namespace has status "Ready":"True"
	I0731 18:09:01.246674   73479 pod_ready.go:81] duration metric: took 4.534937ms for pod "kube-apiserver-no-preload-673754" in "kube-system" namespace to be "Ready" ...
	I0731 18:09:01.246686   73479 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-no-preload-673754" in "kube-system" namespace to be "Ready" ...
	I0731 18:09:01.251284   73479 pod_ready.go:92] pod "kube-controller-manager-no-preload-673754" in "kube-system" namespace has status "Ready":"True"
	I0731 18:09:01.251302   73479 pod_ready.go:81] duration metric: took 4.608584ms for pod "kube-controller-manager-no-preload-673754" in "kube-system" namespace to be "Ready" ...
	I0731 18:09:01.251321   73479 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-hqxh6" in "kube-system" namespace to be "Ready" ...
	I0731 18:09:01.255030   73479 pod_ready.go:92] pod "kube-proxy-hqxh6" in "kube-system" namespace has status "Ready":"True"
	I0731 18:09:01.255045   73479 pod_ready.go:81] duration metric: took 3.718917ms for pod "kube-proxy-hqxh6" in "kube-system" namespace to be "Ready" ...
	I0731 18:09:01.255052   73479 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-no-preload-673754" in "kube-system" namespace to be "Ready" ...
	I0731 18:09:01.636799   73479 pod_ready.go:92] pod "kube-scheduler-no-preload-673754" in "kube-system" namespace has status "Ready":"True"
	I0731 18:09:01.636826   73479 pod_ready.go:81] duration metric: took 381.767881ms for pod "kube-scheduler-no-preload-673754" in "kube-system" namespace to be "Ready" ...
	I0731 18:09:01.636835   73479 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-78fcd8795b-27pkr" in "kube-system" namespace to be "Ready" ...
	I0731 18:08:59.771686   73800 pod_ready.go:102] pod "metrics-server-569cc877fc-fzxrw" in "kube-system" namespace has status "Ready":"False"
	I0731 18:09:02.271396   73800 pod_ready.go:102] pod "metrics-server-569cc877fc-fzxrw" in "kube-system" namespace has status "Ready":"False"
	I0731 18:09:00.478785   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:09:00.978350   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:09:01.478635   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:09:01.978192   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:09:02.478480   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:09:02.979021   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:09:03.478366   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:09:03.978984   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:09:04.479143   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:09:04.978913   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:09:02.279613   73696 pod_ready.go:102] pod "metrics-server-569cc877fc-64hp4" in "kube-system" namespace has status "Ready":"False"
	I0731 18:09:04.778482   73696 pod_ready.go:102] pod "metrics-server-569cc877fc-64hp4" in "kube-system" namespace has status "Ready":"False"
	I0731 18:09:03.642978   73479 pod_ready.go:102] pod "metrics-server-78fcd8795b-27pkr" in "kube-system" namespace has status "Ready":"False"
	I0731 18:09:05.644941   73479 pod_ready.go:102] pod "metrics-server-78fcd8795b-27pkr" in "kube-system" namespace has status "Ready":"False"
	I0731 18:09:04.771938   73800 pod_ready.go:102] pod "metrics-server-569cc877fc-fzxrw" in "kube-system" namespace has status "Ready":"False"
	I0731 18:09:07.271165   73800 pod_ready.go:102] pod "metrics-server-569cc877fc-fzxrw" in "kube-system" namespace has status "Ready":"False"
	I0731 18:09:05.478608   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:09:05.978345   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:09:06.478435   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:09:06.978551   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:09:07.478131   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:09:07.978354   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:09:08.478977   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:09:08.979122   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:09:09.478279   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:09:09.978350   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:09:06.780364   73696 pod_ready.go:102] pod "metrics-server-569cc877fc-64hp4" in "kube-system" namespace has status "Ready":"False"
	I0731 18:09:09.278573   73696 pod_ready.go:102] pod "metrics-server-569cc877fc-64hp4" in "kube-system" namespace has status "Ready":"False"
	I0731 18:09:08.142974   73479 pod_ready.go:102] pod "metrics-server-78fcd8795b-27pkr" in "kube-system" namespace has status "Ready":"False"
	I0731 18:09:10.643136   73479 pod_ready.go:102] pod "metrics-server-78fcd8795b-27pkr" in "kube-system" namespace has status "Ready":"False"
	I0731 18:09:09.771950   73800 pod_ready.go:102] pod "metrics-server-569cc877fc-fzxrw" in "kube-system" namespace has status "Ready":"False"
	I0731 18:09:11.772464   73800 pod_ready.go:102] pod "metrics-server-569cc877fc-fzxrw" in "kube-system" namespace has status "Ready":"False"
	I0731 18:09:13.773164   73800 pod_ready.go:102] pod "metrics-server-569cc877fc-fzxrw" in "kube-system" namespace has status "Ready":"False"
	I0731 18:09:10.479086   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 18:09:10.479175   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 18:09:10.516364   74203 cri.go:89] found id: ""
	I0731 18:09:10.516389   74203 logs.go:276] 0 containers: []
	W0731 18:09:10.516405   74203 logs.go:278] No container was found matching "kube-apiserver"
	I0731 18:09:10.516411   74203 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 18:09:10.516464   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 18:09:10.549398   74203 cri.go:89] found id: ""
	I0731 18:09:10.549422   74203 logs.go:276] 0 containers: []
	W0731 18:09:10.549433   74203 logs.go:278] No container was found matching "etcd"
	I0731 18:09:10.549440   74203 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 18:09:10.549503   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 18:09:10.584290   74203 cri.go:89] found id: ""
	I0731 18:09:10.584314   74203 logs.go:276] 0 containers: []
	W0731 18:09:10.584322   74203 logs.go:278] No container was found matching "coredns"
	I0731 18:09:10.584327   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 18:09:10.584381   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 18:09:10.615832   74203 cri.go:89] found id: ""
	I0731 18:09:10.615860   74203 logs.go:276] 0 containers: []
	W0731 18:09:10.615871   74203 logs.go:278] No container was found matching "kube-scheduler"
	I0731 18:09:10.615878   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 18:09:10.615941   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 18:09:10.647597   74203 cri.go:89] found id: ""
	I0731 18:09:10.647617   74203 logs.go:276] 0 containers: []
	W0731 18:09:10.647624   74203 logs.go:278] No container was found matching "kube-proxy"
	I0731 18:09:10.647629   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 18:09:10.647686   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 18:09:10.680981   74203 cri.go:89] found id: ""
	I0731 18:09:10.681016   74203 logs.go:276] 0 containers: []
	W0731 18:09:10.681027   74203 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 18:09:10.681033   74203 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 18:09:10.681093   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 18:09:10.713798   74203 cri.go:89] found id: ""
	I0731 18:09:10.713839   74203 logs.go:276] 0 containers: []
	W0731 18:09:10.713851   74203 logs.go:278] No container was found matching "kindnet"
	I0731 18:09:10.713865   74203 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 18:09:10.713937   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 18:09:10.746378   74203 cri.go:89] found id: ""
	I0731 18:09:10.746405   74203 logs.go:276] 0 containers: []
	W0731 18:09:10.746413   74203 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 18:09:10.746423   74203 logs.go:123] Gathering logs for kubelet ...
	I0731 18:09:10.746439   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 18:09:10.799156   74203 logs.go:123] Gathering logs for dmesg ...
	I0731 18:09:10.799187   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 18:09:10.812388   74203 logs.go:123] Gathering logs for describe nodes ...
	I0731 18:09:10.812413   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 18:09:10.932251   74203 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 18:09:10.932271   74203 logs.go:123] Gathering logs for CRI-O ...
	I0731 18:09:10.932285   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 18:09:10.996810   74203 logs.go:123] Gathering logs for container status ...
	I0731 18:09:10.996840   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 18:09:13.533936   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:09:13.549194   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 18:09:13.549250   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 18:09:13.599350   74203 cri.go:89] found id: ""
	I0731 18:09:13.599389   74203 logs.go:276] 0 containers: []
	W0731 18:09:13.599400   74203 logs.go:278] No container was found matching "kube-apiserver"
	I0731 18:09:13.599407   74203 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 18:09:13.599466   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 18:09:13.651736   74203 cri.go:89] found id: ""
	I0731 18:09:13.651771   74203 logs.go:276] 0 containers: []
	W0731 18:09:13.651791   74203 logs.go:278] No container was found matching "etcd"
	I0731 18:09:13.651798   74203 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 18:09:13.651855   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 18:09:13.699804   74203 cri.go:89] found id: ""
	I0731 18:09:13.699832   74203 logs.go:276] 0 containers: []
	W0731 18:09:13.699841   74203 logs.go:278] No container was found matching "coredns"
	I0731 18:09:13.699846   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 18:09:13.699906   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 18:09:13.732760   74203 cri.go:89] found id: ""
	I0731 18:09:13.732781   74203 logs.go:276] 0 containers: []
	W0731 18:09:13.732788   74203 logs.go:278] No container was found matching "kube-scheduler"
	I0731 18:09:13.732794   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 18:09:13.732849   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 18:09:13.766865   74203 cri.go:89] found id: ""
	I0731 18:09:13.766892   74203 logs.go:276] 0 containers: []
	W0731 18:09:13.766902   74203 logs.go:278] No container was found matching "kube-proxy"
	I0731 18:09:13.766910   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 18:09:13.766964   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 18:09:13.804706   74203 cri.go:89] found id: ""
	I0731 18:09:13.804733   74203 logs.go:276] 0 containers: []
	W0731 18:09:13.804743   74203 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 18:09:13.804757   74203 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 18:09:13.804821   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 18:09:13.838432   74203 cri.go:89] found id: ""
	I0731 18:09:13.838461   74203 logs.go:276] 0 containers: []
	W0731 18:09:13.838472   74203 logs.go:278] No container was found matching "kindnet"
	I0731 18:09:13.838479   74203 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 18:09:13.838534   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 18:09:13.870455   74203 cri.go:89] found id: ""
	I0731 18:09:13.870480   74203 logs.go:276] 0 containers: []
	W0731 18:09:13.870490   74203 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 18:09:13.870498   74203 logs.go:123] Gathering logs for kubelet ...
	I0731 18:09:13.870510   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 18:09:13.922911   74203 logs.go:123] Gathering logs for dmesg ...
	I0731 18:09:13.922947   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 18:09:13.936075   74203 logs.go:123] Gathering logs for describe nodes ...
	I0731 18:09:13.936098   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 18:09:14.006766   74203 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 18:09:14.006790   74203 logs.go:123] Gathering logs for CRI-O ...
	I0731 18:09:14.006810   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 18:09:14.071066   74203 logs.go:123] Gathering logs for container status ...
	I0731 18:09:14.071100   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 18:09:11.278892   73696 pod_ready.go:102] pod "metrics-server-569cc877fc-64hp4" in "kube-system" namespace has status "Ready":"False"
	I0731 18:09:13.279644   73696 pod_ready.go:102] pod "metrics-server-569cc877fc-64hp4" in "kube-system" namespace has status "Ready":"False"
	I0731 18:09:15.280298   73696 pod_ready.go:102] pod "metrics-server-569cc877fc-64hp4" in "kube-system" namespace has status "Ready":"False"
	I0731 18:09:12.643341   73479 pod_ready.go:102] pod "metrics-server-78fcd8795b-27pkr" in "kube-system" namespace has status "Ready":"False"
	I0731 18:09:14.643636   73479 pod_ready.go:102] pod "metrics-server-78fcd8795b-27pkr" in "kube-system" namespace has status "Ready":"False"
	I0731 18:09:16.280976   73800 pod_ready.go:102] pod "metrics-server-569cc877fc-fzxrw" in "kube-system" namespace has status "Ready":"False"
	I0731 18:09:18.772338   73800 pod_ready.go:102] pod "metrics-server-569cc877fc-fzxrw" in "kube-system" namespace has status "Ready":"False"
	I0731 18:09:16.615212   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:09:16.627439   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 18:09:16.627499   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 18:09:16.660764   74203 cri.go:89] found id: ""
	I0731 18:09:16.660785   74203 logs.go:276] 0 containers: []
	W0731 18:09:16.660792   74203 logs.go:278] No container was found matching "kube-apiserver"
	I0731 18:09:16.660798   74203 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 18:09:16.660842   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 18:09:16.697154   74203 cri.go:89] found id: ""
	I0731 18:09:16.697182   74203 logs.go:276] 0 containers: []
	W0731 18:09:16.697196   74203 logs.go:278] No container was found matching "etcd"
	I0731 18:09:16.697201   74203 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 18:09:16.697259   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 18:09:16.730263   74203 cri.go:89] found id: ""
	I0731 18:09:16.730284   74203 logs.go:276] 0 containers: []
	W0731 18:09:16.730291   74203 logs.go:278] No container was found matching "coredns"
	I0731 18:09:16.730318   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 18:09:16.730369   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 18:09:16.765226   74203 cri.go:89] found id: ""
	I0731 18:09:16.765249   74203 logs.go:276] 0 containers: []
	W0731 18:09:16.765257   74203 logs.go:278] No container was found matching "kube-scheduler"
	I0731 18:09:16.765262   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 18:09:16.765336   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 18:09:16.800502   74203 cri.go:89] found id: ""
	I0731 18:09:16.800528   74203 logs.go:276] 0 containers: []
	W0731 18:09:16.800535   74203 logs.go:278] No container was found matching "kube-proxy"
	I0731 18:09:16.800541   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 18:09:16.800599   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 18:09:16.837391   74203 cri.go:89] found id: ""
	I0731 18:09:16.837418   74203 logs.go:276] 0 containers: []
	W0731 18:09:16.837427   74203 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 18:09:16.837435   74203 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 18:09:16.837490   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 18:09:16.867606   74203 cri.go:89] found id: ""
	I0731 18:09:16.867628   74203 logs.go:276] 0 containers: []
	W0731 18:09:16.867637   74203 logs.go:278] No container was found matching "kindnet"
	I0731 18:09:16.867642   74203 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 18:09:16.867696   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 18:09:16.901639   74203 cri.go:89] found id: ""
	I0731 18:09:16.901669   74203 logs.go:276] 0 containers: []
	W0731 18:09:16.901681   74203 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 18:09:16.901693   74203 logs.go:123] Gathering logs for kubelet ...
	I0731 18:09:16.901707   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 18:09:16.951692   74203 logs.go:123] Gathering logs for dmesg ...
	I0731 18:09:16.951729   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 18:09:16.965069   74203 logs.go:123] Gathering logs for describe nodes ...
	I0731 18:09:16.965101   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 18:09:17.040337   74203 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 18:09:17.040358   74203 logs.go:123] Gathering logs for CRI-O ...
	I0731 18:09:17.040371   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 18:09:17.115058   74203 logs.go:123] Gathering logs for container status ...
	I0731 18:09:17.115093   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 18:09:19.651538   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:09:19.663682   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 18:09:19.663739   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 18:09:19.697851   74203 cri.go:89] found id: ""
	I0731 18:09:19.697879   74203 logs.go:276] 0 containers: []
	W0731 18:09:19.697894   74203 logs.go:278] No container was found matching "kube-apiserver"
	I0731 18:09:19.697900   74203 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 18:09:19.697996   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 18:09:19.732745   74203 cri.go:89] found id: ""
	I0731 18:09:19.732772   74203 logs.go:276] 0 containers: []
	W0731 18:09:19.732783   74203 logs.go:278] No container was found matching "etcd"
	I0731 18:09:19.732790   74203 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 18:09:19.732855   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 18:09:19.763843   74203 cri.go:89] found id: ""
	I0731 18:09:19.763865   74203 logs.go:276] 0 containers: []
	W0731 18:09:19.763873   74203 logs.go:278] No container was found matching "coredns"
	I0731 18:09:19.763878   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 18:09:19.763934   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 18:09:19.797398   74203 cri.go:89] found id: ""
	I0731 18:09:19.797422   74203 logs.go:276] 0 containers: []
	W0731 18:09:19.797429   74203 logs.go:278] No container was found matching "kube-scheduler"
	I0731 18:09:19.797434   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 18:09:19.797504   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 18:09:19.833239   74203 cri.go:89] found id: ""
	I0731 18:09:19.833268   74203 logs.go:276] 0 containers: []
	W0731 18:09:19.833278   74203 logs.go:278] No container was found matching "kube-proxy"
	I0731 18:09:19.833284   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 18:09:19.833340   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 18:09:19.866135   74203 cri.go:89] found id: ""
	I0731 18:09:19.866163   74203 logs.go:276] 0 containers: []
	W0731 18:09:19.866173   74203 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 18:09:19.866181   74203 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 18:09:19.866242   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 18:09:19.900581   74203 cri.go:89] found id: ""
	I0731 18:09:19.900606   74203 logs.go:276] 0 containers: []
	W0731 18:09:19.900615   74203 logs.go:278] No container was found matching "kindnet"
	I0731 18:09:19.900621   74203 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 18:09:19.900720   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 18:09:19.936451   74203 cri.go:89] found id: ""
	I0731 18:09:19.936475   74203 logs.go:276] 0 containers: []
	W0731 18:09:19.936487   74203 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 18:09:19.936496   74203 logs.go:123] Gathering logs for kubelet ...
	I0731 18:09:19.936508   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 18:09:19.990522   74203 logs.go:123] Gathering logs for dmesg ...
	I0731 18:09:19.990559   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 18:09:20.003460   74203 logs.go:123] Gathering logs for describe nodes ...
	I0731 18:09:20.003487   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 18:09:20.070869   74203 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 18:09:20.070893   74203 logs.go:123] Gathering logs for CRI-O ...
	I0731 18:09:20.070912   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 18:09:20.148316   74203 logs.go:123] Gathering logs for container status ...
	I0731 18:09:20.148354   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 18:09:17.779144   73696 pod_ready.go:102] pod "metrics-server-569cc877fc-64hp4" in "kube-system" namespace has status "Ready":"False"
	I0731 18:09:19.781539   73696 pod_ready.go:102] pod "metrics-server-569cc877fc-64hp4" in "kube-system" namespace has status "Ready":"False"
	I0731 18:09:17.143894   73479 pod_ready.go:102] pod "metrics-server-78fcd8795b-27pkr" in "kube-system" namespace has status "Ready":"False"
	I0731 18:09:19.642139   73479 pod_ready.go:102] pod "metrics-server-78fcd8795b-27pkr" in "kube-system" namespace has status "Ready":"False"
	I0731 18:09:21.642234   73479 pod_ready.go:102] pod "metrics-server-78fcd8795b-27pkr" in "kube-system" namespace has status "Ready":"False"
	I0731 18:09:21.271074   73800 pod_ready.go:102] pod "metrics-server-569cc877fc-fzxrw" in "kube-system" namespace has status "Ready":"False"
	I0731 18:09:23.771002   73800 pod_ready.go:102] pod "metrics-server-569cc877fc-fzxrw" in "kube-system" namespace has status "Ready":"False"
	I0731 18:09:22.685964   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:09:22.698740   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 18:09:22.698814   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 18:09:22.735321   74203 cri.go:89] found id: ""
	I0731 18:09:22.735350   74203 logs.go:276] 0 containers: []
	W0731 18:09:22.735360   74203 logs.go:278] No container was found matching "kube-apiserver"
	I0731 18:09:22.735367   74203 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 18:09:22.735428   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 18:09:22.767689   74203 cri.go:89] found id: ""
	I0731 18:09:22.767718   74203 logs.go:276] 0 containers: []
	W0731 18:09:22.767729   74203 logs.go:278] No container was found matching "etcd"
	I0731 18:09:22.767736   74203 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 18:09:22.767795   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 18:09:22.804010   74203 cri.go:89] found id: ""
	I0731 18:09:22.804036   74203 logs.go:276] 0 containers: []
	W0731 18:09:22.804045   74203 logs.go:278] No container was found matching "coredns"
	I0731 18:09:22.804050   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 18:09:22.804101   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 18:09:22.836820   74203 cri.go:89] found id: ""
	I0731 18:09:22.836847   74203 logs.go:276] 0 containers: []
	W0731 18:09:22.836858   74203 logs.go:278] No container was found matching "kube-scheduler"
	I0731 18:09:22.836874   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 18:09:22.836933   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 18:09:22.870163   74203 cri.go:89] found id: ""
	I0731 18:09:22.870187   74203 logs.go:276] 0 containers: []
	W0731 18:09:22.870194   74203 logs.go:278] No container was found matching "kube-proxy"
	I0731 18:09:22.870199   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 18:09:22.870270   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 18:09:22.905926   74203 cri.go:89] found id: ""
	I0731 18:09:22.905951   74203 logs.go:276] 0 containers: []
	W0731 18:09:22.905959   74203 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 18:09:22.905965   74203 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 18:09:22.906020   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 18:09:22.938926   74203 cri.go:89] found id: ""
	I0731 18:09:22.938949   74203 logs.go:276] 0 containers: []
	W0731 18:09:22.938957   74203 logs.go:278] No container was found matching "kindnet"
	I0731 18:09:22.938963   74203 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 18:09:22.939008   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 18:09:22.975150   74203 cri.go:89] found id: ""
	I0731 18:09:22.975185   74203 logs.go:276] 0 containers: []
	W0731 18:09:22.975194   74203 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 18:09:22.975204   74203 logs.go:123] Gathering logs for describe nodes ...
	I0731 18:09:22.975219   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 18:09:23.043265   74203 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 18:09:23.043290   74203 logs.go:123] Gathering logs for CRI-O ...
	I0731 18:09:23.043302   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 18:09:23.122681   74203 logs.go:123] Gathering logs for container status ...
	I0731 18:09:23.122717   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 18:09:23.161745   74203 logs.go:123] Gathering logs for kubelet ...
	I0731 18:09:23.161769   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 18:09:23.211274   74203 logs.go:123] Gathering logs for dmesg ...
	I0731 18:09:23.211305   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 18:09:22.278664   73696 pod_ready.go:102] pod "metrics-server-569cc877fc-64hp4" in "kube-system" namespace has status "Ready":"False"
	I0731 18:09:24.778771   73696 pod_ready.go:102] pod "metrics-server-569cc877fc-64hp4" in "kube-system" namespace has status "Ready":"False"
	I0731 18:09:23.643871   73479 pod_ready.go:102] pod "metrics-server-78fcd8795b-27pkr" in "kube-system" namespace has status "Ready":"False"
	I0731 18:09:26.143509   73479 pod_ready.go:102] pod "metrics-server-78fcd8795b-27pkr" in "kube-system" namespace has status "Ready":"False"
	I0731 18:09:25.771922   73800 pod_ready.go:102] pod "metrics-server-569cc877fc-fzxrw" in "kube-system" namespace has status "Ready":"False"
	I0731 18:09:27.772156   73800 pod_ready.go:102] pod "metrics-server-569cc877fc-fzxrw" in "kube-system" namespace has status "Ready":"False"
	I0731 18:09:25.724702   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:09:25.739335   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 18:09:25.739415   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 18:09:25.778238   74203 cri.go:89] found id: ""
	I0731 18:09:25.778264   74203 logs.go:276] 0 containers: []
	W0731 18:09:25.778274   74203 logs.go:278] No container was found matching "kube-apiserver"
	I0731 18:09:25.778282   74203 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 18:09:25.778349   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 18:09:25.816530   74203 cri.go:89] found id: ""
	I0731 18:09:25.816566   74203 logs.go:276] 0 containers: []
	W0731 18:09:25.816579   74203 logs.go:278] No container was found matching "etcd"
	I0731 18:09:25.816587   74203 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 18:09:25.816652   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 18:09:25.853524   74203 cri.go:89] found id: ""
	I0731 18:09:25.853562   74203 logs.go:276] 0 containers: []
	W0731 18:09:25.853575   74203 logs.go:278] No container was found matching "coredns"
	I0731 18:09:25.853583   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 18:09:25.853661   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 18:09:25.889690   74203 cri.go:89] found id: ""
	I0731 18:09:25.889719   74203 logs.go:276] 0 containers: []
	W0731 18:09:25.889728   74203 logs.go:278] No container was found matching "kube-scheduler"
	I0731 18:09:25.889734   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 18:09:25.889803   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 18:09:25.922409   74203 cri.go:89] found id: ""
	I0731 18:09:25.922441   74203 logs.go:276] 0 containers: []
	W0731 18:09:25.922452   74203 logs.go:278] No container was found matching "kube-proxy"
	I0731 18:09:25.922459   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 18:09:25.922512   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 18:09:25.956849   74203 cri.go:89] found id: ""
	I0731 18:09:25.956876   74203 logs.go:276] 0 containers: []
	W0731 18:09:25.956886   74203 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 18:09:25.956893   74203 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 18:09:25.956958   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 18:09:25.994190   74203 cri.go:89] found id: ""
	I0731 18:09:25.994212   74203 logs.go:276] 0 containers: []
	W0731 18:09:25.994220   74203 logs.go:278] No container was found matching "kindnet"
	I0731 18:09:25.994225   74203 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 18:09:25.994270   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 18:09:26.027980   74203 cri.go:89] found id: ""
	I0731 18:09:26.028005   74203 logs.go:276] 0 containers: []
	W0731 18:09:26.028014   74203 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 18:09:26.028025   74203 logs.go:123] Gathering logs for kubelet ...
	I0731 18:09:26.028044   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 18:09:26.076627   74203 logs.go:123] Gathering logs for dmesg ...
	I0731 18:09:26.076661   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 18:09:26.089439   74203 logs.go:123] Gathering logs for describe nodes ...
	I0731 18:09:26.089464   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 18:09:26.167298   74203 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 18:09:26.167319   74203 logs.go:123] Gathering logs for CRI-O ...
	I0731 18:09:26.167333   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 18:09:26.244611   74203 logs.go:123] Gathering logs for container status ...
	I0731 18:09:26.244644   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 18:09:28.787238   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:09:28.800136   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 18:09:28.800221   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 18:09:28.843038   74203 cri.go:89] found id: ""
	I0731 18:09:28.843062   74203 logs.go:276] 0 containers: []
	W0731 18:09:28.843070   74203 logs.go:278] No container was found matching "kube-apiserver"
	I0731 18:09:28.843076   74203 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 18:09:28.843154   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 18:09:28.876979   74203 cri.go:89] found id: ""
	I0731 18:09:28.877010   74203 logs.go:276] 0 containers: []
	W0731 18:09:28.877021   74203 logs.go:278] No container was found matching "etcd"
	I0731 18:09:28.877028   74203 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 18:09:28.877095   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 18:09:28.913105   74203 cri.go:89] found id: ""
	I0731 18:09:28.913137   74203 logs.go:276] 0 containers: []
	W0731 18:09:28.913147   74203 logs.go:278] No container was found matching "coredns"
	I0731 18:09:28.913155   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 18:09:28.913216   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 18:09:28.949113   74203 cri.go:89] found id: ""
	I0731 18:09:28.949144   74203 logs.go:276] 0 containers: []
	W0731 18:09:28.949153   74203 logs.go:278] No container was found matching "kube-scheduler"
	I0731 18:09:28.949160   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 18:09:28.949208   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 18:09:28.983159   74203 cri.go:89] found id: ""
	I0731 18:09:28.983187   74203 logs.go:276] 0 containers: []
	W0731 18:09:28.983195   74203 logs.go:278] No container was found matching "kube-proxy"
	I0731 18:09:28.983200   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 18:09:28.983276   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 18:09:29.016316   74203 cri.go:89] found id: ""
	I0731 18:09:29.016356   74203 logs.go:276] 0 containers: []
	W0731 18:09:29.016364   74203 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 18:09:29.016370   74203 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 18:09:29.016419   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 18:09:29.050015   74203 cri.go:89] found id: ""
	I0731 18:09:29.050047   74203 logs.go:276] 0 containers: []
	W0731 18:09:29.050058   74203 logs.go:278] No container was found matching "kindnet"
	I0731 18:09:29.050069   74203 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 18:09:29.050124   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 18:09:29.084711   74203 cri.go:89] found id: ""
	I0731 18:09:29.084739   74203 logs.go:276] 0 containers: []
	W0731 18:09:29.084749   74203 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 18:09:29.084760   74203 logs.go:123] Gathering logs for kubelet ...
	I0731 18:09:29.084777   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 18:09:29.135474   74203 logs.go:123] Gathering logs for dmesg ...
	I0731 18:09:29.135516   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 18:09:29.149989   74203 logs.go:123] Gathering logs for describe nodes ...
	I0731 18:09:29.150022   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 18:09:29.223652   74203 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 18:09:29.223676   74203 logs.go:123] Gathering logs for CRI-O ...
	I0731 18:09:29.223688   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 18:09:29.307949   74203 logs.go:123] Gathering logs for container status ...
	I0731 18:09:29.307983   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 18:09:26.779082   73696 pod_ready.go:102] pod "metrics-server-569cc877fc-64hp4" in "kube-system" namespace has status "Ready":"False"
	I0731 18:09:29.280030   73696 pod_ready.go:102] pod "metrics-server-569cc877fc-64hp4" in "kube-system" namespace has status "Ready":"False"
	I0731 18:09:28.143957   73479 pod_ready.go:102] pod "metrics-server-78fcd8795b-27pkr" in "kube-system" namespace has status "Ready":"False"
	I0731 18:09:30.643349   73479 pod_ready.go:102] pod "metrics-server-78fcd8795b-27pkr" in "kube-system" namespace has status "Ready":"False"
	I0731 18:09:30.271524   73800 pod_ready.go:102] pod "metrics-server-569cc877fc-fzxrw" in "kube-system" namespace has status "Ready":"False"
	I0731 18:09:32.271862   73800 pod_ready.go:102] pod "metrics-server-569cc877fc-fzxrw" in "kube-system" namespace has status "Ready":"False"
	I0731 18:09:31.848760   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:09:31.861409   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 18:09:31.861470   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 18:09:31.894485   74203 cri.go:89] found id: ""
	I0731 18:09:31.894505   74203 logs.go:276] 0 containers: []
	W0731 18:09:31.894513   74203 logs.go:278] No container was found matching "kube-apiserver"
	I0731 18:09:31.894518   74203 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 18:09:31.894563   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 18:09:31.926760   74203 cri.go:89] found id: ""
	I0731 18:09:31.926784   74203 logs.go:276] 0 containers: []
	W0731 18:09:31.926791   74203 logs.go:278] No container was found matching "etcd"
	I0731 18:09:31.926797   74203 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 18:09:31.926857   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 18:09:31.963010   74203 cri.go:89] found id: ""
	I0731 18:09:31.963042   74203 logs.go:276] 0 containers: []
	W0731 18:09:31.963055   74203 logs.go:278] No container was found matching "coredns"
	I0731 18:09:31.963062   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 18:09:31.963165   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 18:09:31.995221   74203 cri.go:89] found id: ""
	I0731 18:09:31.995249   74203 logs.go:276] 0 containers: []
	W0731 18:09:31.995260   74203 logs.go:278] No container was found matching "kube-scheduler"
	I0731 18:09:31.995268   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 18:09:31.995333   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 18:09:32.033912   74203 cri.go:89] found id: ""
	I0731 18:09:32.033942   74203 logs.go:276] 0 containers: []
	W0731 18:09:32.033955   74203 logs.go:278] No container was found matching "kube-proxy"
	I0731 18:09:32.033963   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 18:09:32.034038   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 18:09:32.066416   74203 cri.go:89] found id: ""
	I0731 18:09:32.066446   74203 logs.go:276] 0 containers: []
	W0731 18:09:32.066477   74203 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 18:09:32.066486   74203 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 18:09:32.066549   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 18:09:32.100097   74203 cri.go:89] found id: ""
	I0731 18:09:32.100121   74203 logs.go:276] 0 containers: []
	W0731 18:09:32.100129   74203 logs.go:278] No container was found matching "kindnet"
	I0731 18:09:32.100135   74203 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 18:09:32.100191   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 18:09:32.133061   74203 cri.go:89] found id: ""
	I0731 18:09:32.133088   74203 logs.go:276] 0 containers: []
	W0731 18:09:32.133096   74203 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 18:09:32.133106   74203 logs.go:123] Gathering logs for container status ...
	I0731 18:09:32.133120   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 18:09:32.169869   74203 logs.go:123] Gathering logs for kubelet ...
	I0731 18:09:32.169897   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 18:09:32.218668   74203 logs.go:123] Gathering logs for dmesg ...
	I0731 18:09:32.218707   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 18:09:32.231016   74203 logs.go:123] Gathering logs for describe nodes ...
	I0731 18:09:32.231039   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 18:09:32.304319   74203 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 18:09:32.304342   74203 logs.go:123] Gathering logs for CRI-O ...
	I0731 18:09:32.304353   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 18:09:34.880423   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:09:34.893775   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 18:09:34.893853   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 18:09:34.925073   74203 cri.go:89] found id: ""
	I0731 18:09:34.925101   74203 logs.go:276] 0 containers: []
	W0731 18:09:34.925109   74203 logs.go:278] No container was found matching "kube-apiserver"
	I0731 18:09:34.925115   74203 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 18:09:34.925178   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 18:09:34.960870   74203 cri.go:89] found id: ""
	I0731 18:09:34.960896   74203 logs.go:276] 0 containers: []
	W0731 18:09:34.960904   74203 logs.go:278] No container was found matching "etcd"
	I0731 18:09:34.960910   74203 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 18:09:34.960961   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 18:09:34.996290   74203 cri.go:89] found id: ""
	I0731 18:09:34.996332   74203 logs.go:276] 0 containers: []
	W0731 18:09:34.996341   74203 logs.go:278] No container was found matching "coredns"
	I0731 18:09:34.996347   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 18:09:34.996401   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 18:09:35.027900   74203 cri.go:89] found id: ""
	I0731 18:09:35.027932   74203 logs.go:276] 0 containers: []
	W0731 18:09:35.027940   74203 logs.go:278] No container was found matching "kube-scheduler"
	I0731 18:09:35.027945   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 18:09:35.028004   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 18:09:35.060533   74203 cri.go:89] found id: ""
	I0731 18:09:35.060562   74203 logs.go:276] 0 containers: []
	W0731 18:09:35.060579   74203 logs.go:278] No container was found matching "kube-proxy"
	I0731 18:09:35.060586   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 18:09:35.060653   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 18:09:35.095307   74203 cri.go:89] found id: ""
	I0731 18:09:35.095339   74203 logs.go:276] 0 containers: []
	W0731 18:09:35.095348   74203 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 18:09:35.095355   74203 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 18:09:35.095421   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 18:09:35.127060   74203 cri.go:89] found id: ""
	I0731 18:09:35.127082   74203 logs.go:276] 0 containers: []
	W0731 18:09:35.127090   74203 logs.go:278] No container was found matching "kindnet"
	I0731 18:09:35.127095   74203 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 18:09:35.127169   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 18:09:35.161300   74203 cri.go:89] found id: ""
	I0731 18:09:35.161328   74203 logs.go:276] 0 containers: []
	W0731 18:09:35.161339   74203 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 18:09:35.161350   74203 logs.go:123] Gathering logs for describe nodes ...
	I0731 18:09:35.161369   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 18:09:35.233033   74203 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 18:09:35.233060   74203 logs.go:123] Gathering logs for CRI-O ...
	I0731 18:09:35.233074   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 18:09:35.313279   74203 logs.go:123] Gathering logs for container status ...
	I0731 18:09:35.313311   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 18:09:31.779160   73696 pod_ready.go:102] pod "metrics-server-569cc877fc-64hp4" in "kube-system" namespace has status "Ready":"False"
	I0731 18:09:33.779209   73696 pod_ready.go:102] pod "metrics-server-569cc877fc-64hp4" in "kube-system" namespace has status "Ready":"False"
	I0731 18:09:32.644329   73479 pod_ready.go:102] pod "metrics-server-78fcd8795b-27pkr" in "kube-system" namespace has status "Ready":"False"
	I0731 18:09:35.143744   73479 pod_ready.go:102] pod "metrics-server-78fcd8795b-27pkr" in "kube-system" namespace has status "Ready":"False"
	I0731 18:09:34.774758   73800 pod_ready.go:102] pod "metrics-server-569cc877fc-fzxrw" in "kube-system" namespace has status "Ready":"False"
	I0731 18:09:37.271690   73800 pod_ready.go:102] pod "metrics-server-569cc877fc-fzxrw" in "kube-system" namespace has status "Ready":"False"
	I0731 18:09:35.356120   74203 logs.go:123] Gathering logs for kubelet ...
	I0731 18:09:35.356145   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 18:09:35.408231   74203 logs.go:123] Gathering logs for dmesg ...
	I0731 18:09:35.408263   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 18:09:37.921242   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:09:37.933986   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 18:09:37.934044   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 18:09:37.964524   74203 cri.go:89] found id: ""
	I0731 18:09:37.964558   74203 logs.go:276] 0 containers: []
	W0731 18:09:37.964567   74203 logs.go:278] No container was found matching "kube-apiserver"
	I0731 18:09:37.964574   74203 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 18:09:37.964632   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 18:09:37.998157   74203 cri.go:89] found id: ""
	I0731 18:09:37.998183   74203 logs.go:276] 0 containers: []
	W0731 18:09:37.998191   74203 logs.go:278] No container was found matching "etcd"
	I0731 18:09:37.998196   74203 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 18:09:37.998257   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 18:09:38.034611   74203 cri.go:89] found id: ""
	I0731 18:09:38.034637   74203 logs.go:276] 0 containers: []
	W0731 18:09:38.034645   74203 logs.go:278] No container was found matching "coredns"
	I0731 18:09:38.034650   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 18:09:38.034708   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 18:09:38.068005   74203 cri.go:89] found id: ""
	I0731 18:09:38.068029   74203 logs.go:276] 0 containers: []
	W0731 18:09:38.068039   74203 logs.go:278] No container was found matching "kube-scheduler"
	I0731 18:09:38.068047   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 18:09:38.068104   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 18:09:38.106110   74203 cri.go:89] found id: ""
	I0731 18:09:38.106133   74203 logs.go:276] 0 containers: []
	W0731 18:09:38.106141   74203 logs.go:278] No container was found matching "kube-proxy"
	I0731 18:09:38.106146   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 18:09:38.106192   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 18:09:38.138337   74203 cri.go:89] found id: ""
	I0731 18:09:38.138364   74203 logs.go:276] 0 containers: []
	W0731 18:09:38.138375   74203 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 18:09:38.138383   74203 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 18:09:38.138440   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 18:09:38.171517   74203 cri.go:89] found id: ""
	I0731 18:09:38.171546   74203 logs.go:276] 0 containers: []
	W0731 18:09:38.171557   74203 logs.go:278] No container was found matching "kindnet"
	I0731 18:09:38.171564   74203 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 18:09:38.171643   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 18:09:38.208708   74203 cri.go:89] found id: ""
	I0731 18:09:38.208733   74203 logs.go:276] 0 containers: []
	W0731 18:09:38.208741   74203 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 18:09:38.208750   74203 logs.go:123] Gathering logs for container status ...
	I0731 18:09:38.208760   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 18:09:38.243711   74203 logs.go:123] Gathering logs for kubelet ...
	I0731 18:09:38.243736   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 18:09:38.298673   74203 logs.go:123] Gathering logs for dmesg ...
	I0731 18:09:38.298705   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 18:09:38.311936   74203 logs.go:123] Gathering logs for describe nodes ...
	I0731 18:09:38.311962   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 18:09:38.384023   74203 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 18:09:38.384049   74203 logs.go:123] Gathering logs for CRI-O ...
	I0731 18:09:38.384067   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 18:09:36.278948   73696 pod_ready.go:102] pod "metrics-server-569cc877fc-64hp4" in "kube-system" namespace has status "Ready":"False"
	I0731 18:09:38.279423   73696 pod_ready.go:102] pod "metrics-server-569cc877fc-64hp4" in "kube-system" namespace has status "Ready":"False"
	I0731 18:09:40.281213   73696 pod_ready.go:102] pod "metrics-server-569cc877fc-64hp4" in "kube-system" namespace has status "Ready":"False"
	I0731 18:09:37.644041   73479 pod_ready.go:102] pod "metrics-server-78fcd8795b-27pkr" in "kube-system" namespace has status "Ready":"False"
	I0731 18:09:40.143131   73479 pod_ready.go:102] pod "metrics-server-78fcd8795b-27pkr" in "kube-system" namespace has status "Ready":"False"
	I0731 18:09:39.772098   73800 pod_ready.go:102] pod "metrics-server-569cc877fc-fzxrw" in "kube-system" namespace has status "Ready":"False"
	I0731 18:09:42.272096   73800 pod_ready.go:102] pod "metrics-server-569cc877fc-fzxrw" in "kube-system" namespace has status "Ready":"False"
	I0731 18:09:40.959426   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:09:40.972581   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 18:09:40.972645   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 18:09:41.008917   74203 cri.go:89] found id: ""
	I0731 18:09:41.008941   74203 logs.go:276] 0 containers: []
	W0731 18:09:41.008950   74203 logs.go:278] No container was found matching "kube-apiserver"
	I0731 18:09:41.008957   74203 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 18:09:41.009018   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 18:09:41.045342   74203 cri.go:89] found id: ""
	I0731 18:09:41.045375   74203 logs.go:276] 0 containers: []
	W0731 18:09:41.045384   74203 logs.go:278] No container was found matching "etcd"
	I0731 18:09:41.045390   74203 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 18:09:41.045454   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 18:09:41.081385   74203 cri.go:89] found id: ""
	I0731 18:09:41.081409   74203 logs.go:276] 0 containers: []
	W0731 18:09:41.081417   74203 logs.go:278] No container was found matching "coredns"
	I0731 18:09:41.081423   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 18:09:41.081469   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 18:09:41.118028   74203 cri.go:89] found id: ""
	I0731 18:09:41.118051   74203 logs.go:276] 0 containers: []
	W0731 18:09:41.118062   74203 logs.go:278] No container was found matching "kube-scheduler"
	I0731 18:09:41.118067   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 18:09:41.118114   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 18:09:41.154162   74203 cri.go:89] found id: ""
	I0731 18:09:41.154190   74203 logs.go:276] 0 containers: []
	W0731 18:09:41.154201   74203 logs.go:278] No container was found matching "kube-proxy"
	I0731 18:09:41.154209   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 18:09:41.154271   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 18:09:41.190789   74203 cri.go:89] found id: ""
	I0731 18:09:41.190814   74203 logs.go:276] 0 containers: []
	W0731 18:09:41.190822   74203 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 18:09:41.190827   74203 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 18:09:41.190887   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 18:09:41.226281   74203 cri.go:89] found id: ""
	I0731 18:09:41.226312   74203 logs.go:276] 0 containers: []
	W0731 18:09:41.226321   74203 logs.go:278] No container was found matching "kindnet"
	I0731 18:09:41.226327   74203 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 18:09:41.226382   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 18:09:41.258270   74203 cri.go:89] found id: ""
	I0731 18:09:41.258299   74203 logs.go:276] 0 containers: []
	W0731 18:09:41.258309   74203 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 18:09:41.258321   74203 logs.go:123] Gathering logs for CRI-O ...
	I0731 18:09:41.258335   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 18:09:41.342713   74203 logs.go:123] Gathering logs for container status ...
	I0731 18:09:41.342749   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 18:09:41.389772   74203 logs.go:123] Gathering logs for kubelet ...
	I0731 18:09:41.389795   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 18:09:41.442645   74203 logs.go:123] Gathering logs for dmesg ...
	I0731 18:09:41.442676   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 18:09:41.455850   74203 logs.go:123] Gathering logs for describe nodes ...
	I0731 18:09:41.455874   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 18:09:41.522017   74203 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 18:09:44.022439   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:09:44.035190   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 18:09:44.035258   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 18:09:44.070759   74203 cri.go:89] found id: ""
	I0731 18:09:44.070783   74203 logs.go:276] 0 containers: []
	W0731 18:09:44.070790   74203 logs.go:278] No container was found matching "kube-apiserver"
	I0731 18:09:44.070796   74203 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 18:09:44.070857   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 18:09:44.105313   74203 cri.go:89] found id: ""
	I0731 18:09:44.105350   74203 logs.go:276] 0 containers: []
	W0731 18:09:44.105358   74203 logs.go:278] No container was found matching "etcd"
	I0731 18:09:44.105364   74203 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 18:09:44.105416   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 18:09:44.140159   74203 cri.go:89] found id: ""
	I0731 18:09:44.140208   74203 logs.go:276] 0 containers: []
	W0731 18:09:44.140220   74203 logs.go:278] No container was found matching "coredns"
	I0731 18:09:44.140229   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 18:09:44.140301   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 18:09:44.176407   74203 cri.go:89] found id: ""
	I0731 18:09:44.176429   74203 logs.go:276] 0 containers: []
	W0731 18:09:44.176437   74203 logs.go:278] No container was found matching "kube-scheduler"
	I0731 18:09:44.176442   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 18:09:44.176490   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 18:09:44.210875   74203 cri.go:89] found id: ""
	I0731 18:09:44.210899   74203 logs.go:276] 0 containers: []
	W0731 18:09:44.210907   74203 logs.go:278] No container was found matching "kube-proxy"
	I0731 18:09:44.210916   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 18:09:44.210969   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 18:09:44.247021   74203 cri.go:89] found id: ""
	I0731 18:09:44.247045   74203 logs.go:276] 0 containers: []
	W0731 18:09:44.247055   74203 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 18:09:44.247061   74203 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 18:09:44.247141   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 18:09:44.282983   74203 cri.go:89] found id: ""
	I0731 18:09:44.283011   74203 logs.go:276] 0 containers: []
	W0731 18:09:44.283021   74203 logs.go:278] No container was found matching "kindnet"
	I0731 18:09:44.283029   74203 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 18:09:44.283092   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 18:09:44.319717   74203 cri.go:89] found id: ""
	I0731 18:09:44.319742   74203 logs.go:276] 0 containers: []
	W0731 18:09:44.319750   74203 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 18:09:44.319766   74203 logs.go:123] Gathering logs for CRI-O ...
	I0731 18:09:44.319781   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 18:09:44.398602   74203 logs.go:123] Gathering logs for container status ...
	I0731 18:09:44.398636   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 18:09:44.435350   74203 logs.go:123] Gathering logs for kubelet ...
	I0731 18:09:44.435384   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 18:09:44.488021   74203 logs.go:123] Gathering logs for dmesg ...
	I0731 18:09:44.488053   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 18:09:44.501790   74203 logs.go:123] Gathering logs for describe nodes ...
	I0731 18:09:44.501813   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 18:09:44.578374   74203 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 18:09:42.779304   73696 pod_ready.go:102] pod "metrics-server-569cc877fc-64hp4" in "kube-system" namespace has status "Ready":"False"
	I0731 18:09:45.279008   73696 pod_ready.go:102] pod "metrics-server-569cc877fc-64hp4" in "kube-system" namespace has status "Ready":"False"
	I0731 18:09:42.143287   73479 pod_ready.go:102] pod "metrics-server-78fcd8795b-27pkr" in "kube-system" namespace has status "Ready":"False"
	I0731 18:09:44.144123   73479 pod_ready.go:102] pod "metrics-server-78fcd8795b-27pkr" in "kube-system" namespace has status "Ready":"False"
	I0731 18:09:46.643499   73479 pod_ready.go:102] pod "metrics-server-78fcd8795b-27pkr" in "kube-system" namespace has status "Ready":"False"
	I0731 18:09:44.771059   73800 pod_ready.go:102] pod "metrics-server-569cc877fc-fzxrw" in "kube-system" namespace has status "Ready":"False"
	I0731 18:09:46.771846   73800 pod_ready.go:102] pod "metrics-server-569cc877fc-fzxrw" in "kube-system" namespace has status "Ready":"False"
	I0731 18:09:48.772300   73800 pod_ready.go:102] pod "metrics-server-569cc877fc-fzxrw" in "kube-system" namespace has status "Ready":"False"
	I0731 18:09:47.079192   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:09:47.093516   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 18:09:47.093597   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 18:09:47.132872   74203 cri.go:89] found id: ""
	I0731 18:09:47.132899   74203 logs.go:276] 0 containers: []
	W0731 18:09:47.132907   74203 logs.go:278] No container was found matching "kube-apiserver"
	I0731 18:09:47.132913   74203 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 18:09:47.132969   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 18:09:47.167428   74203 cri.go:89] found id: ""
	I0731 18:09:47.167460   74203 logs.go:276] 0 containers: []
	W0731 18:09:47.167472   74203 logs.go:278] No container was found matching "etcd"
	I0731 18:09:47.167480   74203 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 18:09:47.167551   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 18:09:47.202206   74203 cri.go:89] found id: ""
	I0731 18:09:47.202237   74203 logs.go:276] 0 containers: []
	W0731 18:09:47.202250   74203 logs.go:278] No container was found matching "coredns"
	I0731 18:09:47.202256   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 18:09:47.202308   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 18:09:47.238513   74203 cri.go:89] found id: ""
	I0731 18:09:47.238537   74203 logs.go:276] 0 containers: []
	W0731 18:09:47.238545   74203 logs.go:278] No container was found matching "kube-scheduler"
	I0731 18:09:47.238551   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 18:09:47.238604   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 18:09:47.271732   74203 cri.go:89] found id: ""
	I0731 18:09:47.271755   74203 logs.go:276] 0 containers: []
	W0731 18:09:47.271764   74203 logs.go:278] No container was found matching "kube-proxy"
	I0731 18:09:47.271770   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 18:09:47.271828   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 18:09:47.305906   74203 cri.go:89] found id: ""
	I0731 18:09:47.305932   74203 logs.go:276] 0 containers: []
	W0731 18:09:47.305943   74203 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 18:09:47.305948   74203 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 18:09:47.305996   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 18:09:47.338427   74203 cri.go:89] found id: ""
	I0731 18:09:47.338452   74203 logs.go:276] 0 containers: []
	W0731 18:09:47.338461   74203 logs.go:278] No container was found matching "kindnet"
	I0731 18:09:47.338468   74203 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 18:09:47.338526   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 18:09:47.374909   74203 cri.go:89] found id: ""
	I0731 18:09:47.374943   74203 logs.go:276] 0 containers: []
	W0731 18:09:47.374954   74203 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 18:09:47.374963   74203 logs.go:123] Gathering logs for dmesg ...
	I0731 18:09:47.374976   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 18:09:47.387739   74203 logs.go:123] Gathering logs for describe nodes ...
	I0731 18:09:47.387765   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 18:09:47.480479   74203 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 18:09:47.480505   74203 logs.go:123] Gathering logs for CRI-O ...
	I0731 18:09:47.480519   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 18:09:47.562857   74203 logs.go:123] Gathering logs for container status ...
	I0731 18:09:47.562890   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 18:09:47.608435   74203 logs.go:123] Gathering logs for kubelet ...
	I0731 18:09:47.608466   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 18:09:50.164351   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:09:50.177485   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 18:09:50.177546   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 18:09:50.211474   74203 cri.go:89] found id: ""
	I0731 18:09:50.211502   74203 logs.go:276] 0 containers: []
	W0731 18:09:50.211512   74203 logs.go:278] No container was found matching "kube-apiserver"
	I0731 18:09:50.211520   74203 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 18:09:50.211583   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 18:09:50.248167   74203 cri.go:89] found id: ""
	I0731 18:09:50.248190   74203 logs.go:276] 0 containers: []
	W0731 18:09:50.248197   74203 logs.go:278] No container was found matching "etcd"
	I0731 18:09:50.248203   74203 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 18:09:50.248250   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 18:09:50.286323   74203 cri.go:89] found id: ""
	I0731 18:09:50.286358   74203 logs.go:276] 0 containers: []
	W0731 18:09:50.286366   74203 logs.go:278] No container was found matching "coredns"
	I0731 18:09:50.286372   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 18:09:50.286420   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 18:09:50.316634   74203 cri.go:89] found id: ""
	I0731 18:09:50.316661   74203 logs.go:276] 0 containers: []
	W0731 18:09:50.316670   74203 logs.go:278] No container was found matching "kube-scheduler"
	I0731 18:09:50.316675   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 18:09:50.316726   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 18:09:47.279198   73696 pod_ready.go:102] pod "metrics-server-569cc877fc-64hp4" in "kube-system" namespace has status "Ready":"False"
	I0731 18:09:49.280511   73696 pod_ready.go:102] pod "metrics-server-569cc877fc-64hp4" in "kube-system" namespace has status "Ready":"False"
	I0731 18:09:49.144581   73479 pod_ready.go:102] pod "metrics-server-78fcd8795b-27pkr" in "kube-system" namespace has status "Ready":"False"
	I0731 18:09:51.642915   73479 pod_ready.go:102] pod "metrics-server-78fcd8795b-27pkr" in "kube-system" namespace has status "Ready":"False"
	I0731 18:09:51.272079   73800 pod_ready.go:102] pod "metrics-server-569cc877fc-fzxrw" in "kube-system" namespace has status "Ready":"False"
	I0731 18:09:53.272815   73800 pod_ready.go:102] pod "metrics-server-569cc877fc-fzxrw" in "kube-system" namespace has status "Ready":"False"
	I0731 18:09:50.349881   74203 cri.go:89] found id: ""
	I0731 18:09:50.349909   74203 logs.go:276] 0 containers: []
	W0731 18:09:50.349919   74203 logs.go:278] No container was found matching "kube-proxy"
	I0731 18:09:50.349926   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 18:09:50.349989   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 18:09:50.384147   74203 cri.go:89] found id: ""
	I0731 18:09:50.384181   74203 logs.go:276] 0 containers: []
	W0731 18:09:50.384194   74203 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 18:09:50.384203   74203 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 18:09:50.384272   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 18:09:50.418024   74203 cri.go:89] found id: ""
	I0731 18:09:50.418052   74203 logs.go:276] 0 containers: []
	W0731 18:09:50.418062   74203 logs.go:278] No container was found matching "kindnet"
	I0731 18:09:50.418069   74203 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 18:09:50.418130   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 18:09:50.454484   74203 cri.go:89] found id: ""
	I0731 18:09:50.454517   74203 logs.go:276] 0 containers: []
	W0731 18:09:50.454525   74203 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 18:09:50.454533   74203 logs.go:123] Gathering logs for kubelet ...
	I0731 18:09:50.454544   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 18:09:50.505508   74203 logs.go:123] Gathering logs for dmesg ...
	I0731 18:09:50.505545   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 18:09:50.518504   74203 logs.go:123] Gathering logs for describe nodes ...
	I0731 18:09:50.518529   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 18:09:50.587950   74203 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 18:09:50.587974   74203 logs.go:123] Gathering logs for CRI-O ...
	I0731 18:09:50.587989   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 18:09:50.669268   74203 logs.go:123] Gathering logs for container status ...
	I0731 18:09:50.669302   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 18:09:53.209229   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:09:53.222114   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 18:09:53.222175   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 18:09:53.255330   74203 cri.go:89] found id: ""
	I0731 18:09:53.255356   74203 logs.go:276] 0 containers: []
	W0731 18:09:53.255365   74203 logs.go:278] No container was found matching "kube-apiserver"
	I0731 18:09:53.255371   74203 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 18:09:53.255432   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 18:09:53.290354   74203 cri.go:89] found id: ""
	I0731 18:09:53.290375   74203 logs.go:276] 0 containers: []
	W0731 18:09:53.290382   74203 logs.go:278] No container was found matching "etcd"
	I0731 18:09:53.290387   74203 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 18:09:53.290438   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 18:09:53.323621   74203 cri.go:89] found id: ""
	I0731 18:09:53.323645   74203 logs.go:276] 0 containers: []
	W0731 18:09:53.323653   74203 logs.go:278] No container was found matching "coredns"
	I0731 18:09:53.323658   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 18:09:53.323718   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 18:09:53.355850   74203 cri.go:89] found id: ""
	I0731 18:09:53.355877   74203 logs.go:276] 0 containers: []
	W0731 18:09:53.355887   74203 logs.go:278] No container was found matching "kube-scheduler"
	I0731 18:09:53.355894   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 18:09:53.355957   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 18:09:53.388686   74203 cri.go:89] found id: ""
	I0731 18:09:53.388716   74203 logs.go:276] 0 containers: []
	W0731 18:09:53.388726   74203 logs.go:278] No container was found matching "kube-proxy"
	I0731 18:09:53.388733   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 18:09:53.388785   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 18:09:53.426924   74203 cri.go:89] found id: ""
	I0731 18:09:53.426952   74203 logs.go:276] 0 containers: []
	W0731 18:09:53.426961   74203 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 18:09:53.426967   74203 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 18:09:53.427019   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 18:09:53.462041   74203 cri.go:89] found id: ""
	I0731 18:09:53.462067   74203 logs.go:276] 0 containers: []
	W0731 18:09:53.462078   74203 logs.go:278] No container was found matching "kindnet"
	I0731 18:09:53.462084   74203 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 18:09:53.462145   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 18:09:53.493810   74203 cri.go:89] found id: ""
	I0731 18:09:53.493833   74203 logs.go:276] 0 containers: []
	W0731 18:09:53.493842   74203 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 18:09:53.493852   74203 logs.go:123] Gathering logs for container status ...
	I0731 18:09:53.493867   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 18:09:53.530019   74203 logs.go:123] Gathering logs for kubelet ...
	I0731 18:09:53.530053   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 18:09:53.580749   74203 logs.go:123] Gathering logs for dmesg ...
	I0731 18:09:53.580782   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 18:09:53.594457   74203 logs.go:123] Gathering logs for describe nodes ...
	I0731 18:09:53.594482   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 18:09:53.662096   74203 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 18:09:53.662116   74203 logs.go:123] Gathering logs for CRI-O ...
	I0731 18:09:53.662134   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 18:09:51.778292   73696 pod_ready.go:102] pod "metrics-server-569cc877fc-64hp4" in "kube-system" namespace has status "Ready":"False"
	I0731 18:09:53.779043   73696 pod_ready.go:102] pod "metrics-server-569cc877fc-64hp4" in "kube-system" namespace has status "Ready":"False"
	I0731 18:09:53.643914   73479 pod_ready.go:102] pod "metrics-server-78fcd8795b-27pkr" in "kube-system" namespace has status "Ready":"False"
	I0731 18:09:56.142699   73479 pod_ready.go:102] pod "metrics-server-78fcd8795b-27pkr" in "kube-system" namespace has status "Ready":"False"
	I0731 18:09:55.772106   73800 pod_ready.go:102] pod "metrics-server-569cc877fc-fzxrw" in "kube-system" namespace has status "Ready":"False"
	I0731 18:09:58.271063   73800 pod_ready.go:102] pod "metrics-server-569cc877fc-fzxrw" in "kube-system" namespace has status "Ready":"False"
	I0731 18:09:56.238479   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:09:56.251272   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 18:09:56.251350   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 18:09:56.287380   74203 cri.go:89] found id: ""
	I0731 18:09:56.287406   74203 logs.go:276] 0 containers: []
	W0731 18:09:56.287414   74203 logs.go:278] No container was found matching "kube-apiserver"
	I0731 18:09:56.287419   74203 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 18:09:56.287471   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 18:09:56.322490   74203 cri.go:89] found id: ""
	I0731 18:09:56.322512   74203 logs.go:276] 0 containers: []
	W0731 18:09:56.322520   74203 logs.go:278] No container was found matching "etcd"
	I0731 18:09:56.322526   74203 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 18:09:56.322572   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 18:09:56.355845   74203 cri.go:89] found id: ""
	I0731 18:09:56.355874   74203 logs.go:276] 0 containers: []
	W0731 18:09:56.355885   74203 logs.go:278] No container was found matching "coredns"
	I0731 18:09:56.355895   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 18:09:56.355958   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 18:09:56.388304   74203 cri.go:89] found id: ""
	I0731 18:09:56.388330   74203 logs.go:276] 0 containers: []
	W0731 18:09:56.388340   74203 logs.go:278] No container was found matching "kube-scheduler"
	I0731 18:09:56.388348   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 18:09:56.388411   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 18:09:56.420837   74203 cri.go:89] found id: ""
	I0731 18:09:56.420867   74203 logs.go:276] 0 containers: []
	W0731 18:09:56.420877   74203 logs.go:278] No container was found matching "kube-proxy"
	I0731 18:09:56.420884   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 18:09:56.420950   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 18:09:56.453095   74203 cri.go:89] found id: ""
	I0731 18:09:56.453135   74203 logs.go:276] 0 containers: []
	W0731 18:09:56.453146   74203 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 18:09:56.453155   74203 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 18:09:56.453214   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 18:09:56.484245   74203 cri.go:89] found id: ""
	I0731 18:09:56.484272   74203 logs.go:276] 0 containers: []
	W0731 18:09:56.484282   74203 logs.go:278] No container was found matching "kindnet"
	I0731 18:09:56.484296   74203 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 18:09:56.484366   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 18:09:56.519473   74203 cri.go:89] found id: ""
	I0731 18:09:56.519501   74203 logs.go:276] 0 containers: []
	W0731 18:09:56.519508   74203 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 18:09:56.519516   74203 logs.go:123] Gathering logs for dmesg ...
	I0731 18:09:56.519530   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 18:09:56.532178   74203 logs.go:123] Gathering logs for describe nodes ...
	I0731 18:09:56.532203   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 18:09:56.600092   74203 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 18:09:56.600122   74203 logs.go:123] Gathering logs for CRI-O ...
	I0731 18:09:56.600137   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 18:09:56.679176   74203 logs.go:123] Gathering logs for container status ...
	I0731 18:09:56.679208   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 18:09:56.715464   74203 logs.go:123] Gathering logs for kubelet ...
	I0731 18:09:56.715499   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 18:09:59.267214   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:09:59.280666   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 18:09:59.280740   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 18:09:59.312898   74203 cri.go:89] found id: ""
	I0731 18:09:59.312928   74203 logs.go:276] 0 containers: []
	W0731 18:09:59.312940   74203 logs.go:278] No container was found matching "kube-apiserver"
	I0731 18:09:59.312947   74203 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 18:09:59.313013   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 18:09:59.347881   74203 cri.go:89] found id: ""
	I0731 18:09:59.347907   74203 logs.go:276] 0 containers: []
	W0731 18:09:59.347915   74203 logs.go:278] No container was found matching "etcd"
	I0731 18:09:59.347919   74203 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 18:09:59.347978   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 18:09:59.382566   74203 cri.go:89] found id: ""
	I0731 18:09:59.382603   74203 logs.go:276] 0 containers: []
	W0731 18:09:59.382615   74203 logs.go:278] No container was found matching "coredns"
	I0731 18:09:59.382629   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 18:09:59.382691   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 18:09:59.417123   74203 cri.go:89] found id: ""
	I0731 18:09:59.417148   74203 logs.go:276] 0 containers: []
	W0731 18:09:59.417157   74203 logs.go:278] No container was found matching "kube-scheduler"
	I0731 18:09:59.417163   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 18:09:59.417220   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 18:09:59.452674   74203 cri.go:89] found id: ""
	I0731 18:09:59.452699   74203 logs.go:276] 0 containers: []
	W0731 18:09:59.452709   74203 logs.go:278] No container was found matching "kube-proxy"
	I0731 18:09:59.452715   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 18:09:59.452775   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 18:09:59.488879   74203 cri.go:89] found id: ""
	I0731 18:09:59.488905   74203 logs.go:276] 0 containers: []
	W0731 18:09:59.488913   74203 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 18:09:59.488921   74203 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 18:09:59.488981   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 18:09:59.521773   74203 cri.go:89] found id: ""
	I0731 18:09:59.521801   74203 logs.go:276] 0 containers: []
	W0731 18:09:59.521809   74203 logs.go:278] No container was found matching "kindnet"
	I0731 18:09:59.521816   74203 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 18:09:59.521876   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 18:09:59.566619   74203 cri.go:89] found id: ""
	I0731 18:09:59.566649   74203 logs.go:276] 0 containers: []
	W0731 18:09:59.566659   74203 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 18:09:59.566670   74203 logs.go:123] Gathering logs for describe nodes ...
	I0731 18:09:59.566687   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 18:09:59.638301   74203 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 18:09:59.638351   74203 logs.go:123] Gathering logs for CRI-O ...
	I0731 18:09:59.638367   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 18:09:59.721561   74203 logs.go:123] Gathering logs for container status ...
	I0731 18:09:59.721597   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 18:09:59.759371   74203 logs.go:123] Gathering logs for kubelet ...
	I0731 18:09:59.759402   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 18:09:59.811223   74203 logs.go:123] Gathering logs for dmesg ...
	I0731 18:09:59.811255   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 18:09:56.280351   73696 pod_ready.go:102] pod "metrics-server-569cc877fc-64hp4" in "kube-system" namespace has status "Ready":"False"
	I0731 18:09:58.777896   73696 pod_ready.go:102] pod "metrics-server-569cc877fc-64hp4" in "kube-system" namespace has status "Ready":"False"
	I0731 18:10:00.779028   73696 pod_ready.go:102] pod "metrics-server-569cc877fc-64hp4" in "kube-system" namespace has status "Ready":"False"
	I0731 18:09:58.144006   73479 pod_ready.go:102] pod "metrics-server-78fcd8795b-27pkr" in "kube-system" namespace has status "Ready":"False"
	I0731 18:10:00.643536   73479 pod_ready.go:102] pod "metrics-server-78fcd8795b-27pkr" in "kube-system" namespace has status "Ready":"False"
	I0731 18:10:00.772456   73800 pod_ready.go:102] pod "metrics-server-569cc877fc-fzxrw" in "kube-system" namespace has status "Ready":"False"
	I0731 18:10:03.270710   73800 pod_ready.go:102] pod "metrics-server-569cc877fc-fzxrw" in "kube-system" namespace has status "Ready":"False"
	I0731 18:10:02.325339   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:10:02.337908   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 18:10:02.337963   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 18:10:02.369343   74203 cri.go:89] found id: ""
	I0731 18:10:02.369369   74203 logs.go:276] 0 containers: []
	W0731 18:10:02.369378   74203 logs.go:278] No container was found matching "kube-apiserver"
	I0731 18:10:02.369384   74203 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 18:10:02.369442   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 18:10:02.406207   74203 cri.go:89] found id: ""
	I0731 18:10:02.406234   74203 logs.go:276] 0 containers: []
	W0731 18:10:02.406242   74203 logs.go:278] No container was found matching "etcd"
	I0731 18:10:02.406247   74203 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 18:10:02.406297   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 18:10:02.442001   74203 cri.go:89] found id: ""
	I0731 18:10:02.442031   74203 logs.go:276] 0 containers: []
	W0731 18:10:02.442041   74203 logs.go:278] No container was found matching "coredns"
	I0731 18:10:02.442049   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 18:10:02.442109   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 18:10:02.478407   74203 cri.go:89] found id: ""
	I0731 18:10:02.478431   74203 logs.go:276] 0 containers: []
	W0731 18:10:02.478439   74203 logs.go:278] No container was found matching "kube-scheduler"
	I0731 18:10:02.478444   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 18:10:02.478491   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 18:10:02.513832   74203 cri.go:89] found id: ""
	I0731 18:10:02.513875   74203 logs.go:276] 0 containers: []
	W0731 18:10:02.513888   74203 logs.go:278] No container was found matching "kube-proxy"
	I0731 18:10:02.513896   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 18:10:02.513962   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 18:10:02.550830   74203 cri.go:89] found id: ""
	I0731 18:10:02.550856   74203 logs.go:276] 0 containers: []
	W0731 18:10:02.550867   74203 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 18:10:02.550874   74203 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 18:10:02.550937   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 18:10:02.584649   74203 cri.go:89] found id: ""
	I0731 18:10:02.584676   74203 logs.go:276] 0 containers: []
	W0731 18:10:02.584684   74203 logs.go:278] No container was found matching "kindnet"
	I0731 18:10:02.584691   74203 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 18:10:02.584752   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 18:10:02.617436   74203 cri.go:89] found id: ""
	I0731 18:10:02.617464   74203 logs.go:276] 0 containers: []
	W0731 18:10:02.617475   74203 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 18:10:02.617485   74203 logs.go:123] Gathering logs for kubelet ...
	I0731 18:10:02.617500   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 18:10:02.671571   74203 logs.go:123] Gathering logs for dmesg ...
	I0731 18:10:02.671609   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 18:10:02.686657   74203 logs.go:123] Gathering logs for describe nodes ...
	I0731 18:10:02.686694   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 18:10:02.755974   74203 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 18:10:02.756008   74203 logs.go:123] Gathering logs for CRI-O ...
	I0731 18:10:02.756025   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 18:10:02.837976   74203 logs.go:123] Gathering logs for container status ...
	I0731 18:10:02.838012   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 18:10:02.779666   73696 pod_ready.go:102] pod "metrics-server-569cc877fc-64hp4" in "kube-system" namespace has status "Ready":"False"
	I0731 18:10:04.779994   73696 pod_ready.go:102] pod "metrics-server-569cc877fc-64hp4" in "kube-system" namespace has status "Ready":"False"
	I0731 18:10:02.644075   73479 pod_ready.go:102] pod "metrics-server-78fcd8795b-27pkr" in "kube-system" namespace has status "Ready":"False"
	I0731 18:10:05.142859   73479 pod_ready.go:102] pod "metrics-server-78fcd8795b-27pkr" in "kube-system" namespace has status "Ready":"False"
	I0731 18:10:05.272500   73800 pod_ready.go:102] pod "metrics-server-569cc877fc-fzxrw" in "kube-system" namespace has status "Ready":"False"
	I0731 18:10:07.771599   73800 pod_ready.go:102] pod "metrics-server-569cc877fc-fzxrw" in "kube-system" namespace has status "Ready":"False"
	I0731 18:10:05.375212   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:10:05.388635   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 18:10:05.388703   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 18:10:05.427583   74203 cri.go:89] found id: ""
	I0731 18:10:05.427610   74203 logs.go:276] 0 containers: []
	W0731 18:10:05.427617   74203 logs.go:278] No container was found matching "kube-apiserver"
	I0731 18:10:05.427622   74203 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 18:10:05.427673   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 18:10:05.462550   74203 cri.go:89] found id: ""
	I0731 18:10:05.462575   74203 logs.go:276] 0 containers: []
	W0731 18:10:05.462583   74203 logs.go:278] No container was found matching "etcd"
	I0731 18:10:05.462589   74203 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 18:10:05.462645   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 18:10:05.501768   74203 cri.go:89] found id: ""
	I0731 18:10:05.501790   74203 logs.go:276] 0 containers: []
	W0731 18:10:05.501797   74203 logs.go:278] No container was found matching "coredns"
	I0731 18:10:05.501802   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 18:10:05.501860   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 18:10:05.539692   74203 cri.go:89] found id: ""
	I0731 18:10:05.539719   74203 logs.go:276] 0 containers: []
	W0731 18:10:05.539731   74203 logs.go:278] No container was found matching "kube-scheduler"
	I0731 18:10:05.539737   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 18:10:05.539798   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 18:10:05.573844   74203 cri.go:89] found id: ""
	I0731 18:10:05.573872   74203 logs.go:276] 0 containers: []
	W0731 18:10:05.573884   74203 logs.go:278] No container was found matching "kube-proxy"
	I0731 18:10:05.573891   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 18:10:05.573953   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 18:10:05.607827   74203 cri.go:89] found id: ""
	I0731 18:10:05.607848   74203 logs.go:276] 0 containers: []
	W0731 18:10:05.607858   74203 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 18:10:05.607863   74203 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 18:10:05.607913   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 18:10:05.639644   74203 cri.go:89] found id: ""
	I0731 18:10:05.639673   74203 logs.go:276] 0 containers: []
	W0731 18:10:05.639684   74203 logs.go:278] No container was found matching "kindnet"
	I0731 18:10:05.639691   74203 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 18:10:05.639753   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 18:10:05.673164   74203 cri.go:89] found id: ""
	I0731 18:10:05.673188   74203 logs.go:276] 0 containers: []
	W0731 18:10:05.673195   74203 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 18:10:05.673203   74203 logs.go:123] Gathering logs for CRI-O ...
	I0731 18:10:05.673215   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 18:10:05.755189   74203 logs.go:123] Gathering logs for container status ...
	I0731 18:10:05.755221   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 18:10:05.793686   74203 logs.go:123] Gathering logs for kubelet ...
	I0731 18:10:05.793715   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 18:10:05.844930   74203 logs.go:123] Gathering logs for dmesg ...
	I0731 18:10:05.844965   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 18:10:05.859150   74203 logs.go:123] Gathering logs for describe nodes ...
	I0731 18:10:05.859176   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 18:10:05.929945   74203 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 18:10:08.430669   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:10:08.444918   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 18:10:08.444989   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 18:10:08.482598   74203 cri.go:89] found id: ""
	I0731 18:10:08.482625   74203 logs.go:276] 0 containers: []
	W0731 18:10:08.482635   74203 logs.go:278] No container was found matching "kube-apiserver"
	I0731 18:10:08.482642   74203 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 18:10:08.482708   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 18:10:08.519687   74203 cri.go:89] found id: ""
	I0731 18:10:08.519717   74203 logs.go:276] 0 containers: []
	W0731 18:10:08.519726   74203 logs.go:278] No container was found matching "etcd"
	I0731 18:10:08.519734   74203 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 18:10:08.519795   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 18:10:08.551600   74203 cri.go:89] found id: ""
	I0731 18:10:08.551638   74203 logs.go:276] 0 containers: []
	W0731 18:10:08.551649   74203 logs.go:278] No container was found matching "coredns"
	I0731 18:10:08.551657   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 18:10:08.551713   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 18:10:08.585233   74203 cri.go:89] found id: ""
	I0731 18:10:08.585263   74203 logs.go:276] 0 containers: []
	W0731 18:10:08.585274   74203 logs.go:278] No container was found matching "kube-scheduler"
	I0731 18:10:08.585282   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 18:10:08.585343   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 18:10:08.622464   74203 cri.go:89] found id: ""
	I0731 18:10:08.622492   74203 logs.go:276] 0 containers: []
	W0731 18:10:08.622502   74203 logs.go:278] No container was found matching "kube-proxy"
	I0731 18:10:08.622510   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 18:10:08.622569   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 18:10:08.658360   74203 cri.go:89] found id: ""
	I0731 18:10:08.658390   74203 logs.go:276] 0 containers: []
	W0731 18:10:08.658402   74203 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 18:10:08.658410   74203 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 18:10:08.658471   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 18:10:08.692076   74203 cri.go:89] found id: ""
	I0731 18:10:08.692100   74203 logs.go:276] 0 containers: []
	W0731 18:10:08.692109   74203 logs.go:278] No container was found matching "kindnet"
	I0731 18:10:08.692116   74203 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 18:10:08.692179   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 18:10:08.729584   74203 cri.go:89] found id: ""
	I0731 18:10:08.729612   74203 logs.go:276] 0 containers: []
	W0731 18:10:08.729622   74203 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 18:10:08.729633   74203 logs.go:123] Gathering logs for describe nodes ...
	I0731 18:10:08.729647   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 18:10:08.806395   74203 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 18:10:08.806457   74203 logs.go:123] Gathering logs for CRI-O ...
	I0731 18:10:08.806485   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 18:10:08.884008   74203 logs.go:123] Gathering logs for container status ...
	I0731 18:10:08.884046   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 18:10:08.924359   74203 logs.go:123] Gathering logs for kubelet ...
	I0731 18:10:08.924398   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 18:10:08.978161   74203 logs.go:123] Gathering logs for dmesg ...
	I0731 18:10:08.978195   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 18:10:07.279327   73696 pod_ready.go:102] pod "metrics-server-569cc877fc-64hp4" in "kube-system" namespace has status "Ready":"False"
	I0731 18:10:09.281214   73696 pod_ready.go:102] pod "metrics-server-569cc877fc-64hp4" in "kube-system" namespace has status "Ready":"False"
	I0731 18:10:07.143145   73479 pod_ready.go:102] pod "metrics-server-78fcd8795b-27pkr" in "kube-system" namespace has status "Ready":"False"
	I0731 18:10:09.143995   73479 pod_ready.go:102] pod "metrics-server-78fcd8795b-27pkr" in "kube-system" namespace has status "Ready":"False"
	I0731 18:10:11.643254   73479 pod_ready.go:102] pod "metrics-server-78fcd8795b-27pkr" in "kube-system" namespace has status "Ready":"False"
	I0731 18:10:09.773024   73800 pod_ready.go:102] pod "metrics-server-569cc877fc-fzxrw" in "kube-system" namespace has status "Ready":"False"
	I0731 18:10:12.272862   73800 pod_ready.go:102] pod "metrics-server-569cc877fc-fzxrw" in "kube-system" namespace has status "Ready":"False"
	I0731 18:10:14.273615   73800 pod_ready.go:102] pod "metrics-server-569cc877fc-fzxrw" in "kube-system" namespace has status "Ready":"False"
	I0731 18:10:11.491784   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:10:11.504711   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 18:10:11.504784   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 18:10:11.541314   74203 cri.go:89] found id: ""
	I0731 18:10:11.541353   74203 logs.go:276] 0 containers: []
	W0731 18:10:11.541361   74203 logs.go:278] No container was found matching "kube-apiserver"
	I0731 18:10:11.541366   74203 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 18:10:11.541424   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 18:10:11.576481   74203 cri.go:89] found id: ""
	I0731 18:10:11.576509   74203 logs.go:276] 0 containers: []
	W0731 18:10:11.576527   74203 logs.go:278] No container was found matching "etcd"
	I0731 18:10:11.576535   74203 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 18:10:11.576597   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 18:10:11.610370   74203 cri.go:89] found id: ""
	I0731 18:10:11.610395   74203 logs.go:276] 0 containers: []
	W0731 18:10:11.610404   74203 logs.go:278] No container was found matching "coredns"
	I0731 18:10:11.610412   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 18:10:11.610470   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 18:10:11.645559   74203 cri.go:89] found id: ""
	I0731 18:10:11.645586   74203 logs.go:276] 0 containers: []
	W0731 18:10:11.645593   74203 logs.go:278] No container was found matching "kube-scheduler"
	I0731 18:10:11.645598   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 18:10:11.645654   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 18:10:11.677576   74203 cri.go:89] found id: ""
	I0731 18:10:11.677613   74203 logs.go:276] 0 containers: []
	W0731 18:10:11.677624   74203 logs.go:278] No container was found matching "kube-proxy"
	I0731 18:10:11.677631   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 18:10:11.677681   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 18:10:11.710173   74203 cri.go:89] found id: ""
	I0731 18:10:11.710199   74203 logs.go:276] 0 containers: []
	W0731 18:10:11.710208   74203 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 18:10:11.710215   74203 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 18:10:11.710273   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 18:10:11.743722   74203 cri.go:89] found id: ""
	I0731 18:10:11.743752   74203 logs.go:276] 0 containers: []
	W0731 18:10:11.743763   74203 logs.go:278] No container was found matching "kindnet"
	I0731 18:10:11.743782   74203 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 18:10:11.743857   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 18:10:11.776730   74203 cri.go:89] found id: ""
	I0731 18:10:11.776752   74203 logs.go:276] 0 containers: []
	W0731 18:10:11.776759   74203 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 18:10:11.776766   74203 logs.go:123] Gathering logs for describe nodes ...
	I0731 18:10:11.776777   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 18:10:11.846385   74203 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 18:10:11.846404   74203 logs.go:123] Gathering logs for CRI-O ...
	I0731 18:10:11.846415   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 18:10:11.923748   74203 logs.go:123] Gathering logs for container status ...
	I0731 18:10:11.923779   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 18:10:11.959700   74203 logs.go:123] Gathering logs for kubelet ...
	I0731 18:10:11.959734   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 18:10:12.009971   74203 logs.go:123] Gathering logs for dmesg ...
	I0731 18:10:12.010002   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 18:10:14.524097   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:10:14.537349   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 18:10:14.537449   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 18:10:14.569907   74203 cri.go:89] found id: ""
	I0731 18:10:14.569934   74203 logs.go:276] 0 containers: []
	W0731 18:10:14.569941   74203 logs.go:278] No container was found matching "kube-apiserver"
	I0731 18:10:14.569947   74203 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 18:10:14.569999   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 18:10:14.605058   74203 cri.go:89] found id: ""
	I0731 18:10:14.605085   74203 logs.go:276] 0 containers: []
	W0731 18:10:14.605095   74203 logs.go:278] No container was found matching "etcd"
	I0731 18:10:14.605102   74203 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 18:10:14.605155   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 18:10:14.640941   74203 cri.go:89] found id: ""
	I0731 18:10:14.640964   74203 logs.go:276] 0 containers: []
	W0731 18:10:14.640975   74203 logs.go:278] No container was found matching "coredns"
	I0731 18:10:14.640982   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 18:10:14.641039   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 18:10:14.678774   74203 cri.go:89] found id: ""
	I0731 18:10:14.678803   74203 logs.go:276] 0 containers: []
	W0731 18:10:14.678814   74203 logs.go:278] No container was found matching "kube-scheduler"
	I0731 18:10:14.678822   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 18:10:14.678880   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 18:10:14.714123   74203 cri.go:89] found id: ""
	I0731 18:10:14.714152   74203 logs.go:276] 0 containers: []
	W0731 18:10:14.714163   74203 logs.go:278] No container was found matching "kube-proxy"
	I0731 18:10:14.714171   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 18:10:14.714230   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 18:10:14.750212   74203 cri.go:89] found id: ""
	I0731 18:10:14.750243   74203 logs.go:276] 0 containers: []
	W0731 18:10:14.750255   74203 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 18:10:14.750262   74203 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 18:10:14.750322   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 18:10:14.786820   74203 cri.go:89] found id: ""
	I0731 18:10:14.786842   74203 logs.go:276] 0 containers: []
	W0731 18:10:14.786850   74203 logs.go:278] No container was found matching "kindnet"
	I0731 18:10:14.786856   74203 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 18:10:14.786904   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 18:10:14.819667   74203 cri.go:89] found id: ""
	I0731 18:10:14.819689   74203 logs.go:276] 0 containers: []
	W0731 18:10:14.819697   74203 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 18:10:14.819705   74203 logs.go:123] Gathering logs for dmesg ...
	I0731 18:10:14.819725   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 18:10:14.832525   74203 logs.go:123] Gathering logs for describe nodes ...
	I0731 18:10:14.832550   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 18:10:14.901190   74203 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 18:10:14.901216   74203 logs.go:123] Gathering logs for CRI-O ...
	I0731 18:10:14.901229   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 18:10:14.977123   74203 logs.go:123] Gathering logs for container status ...
	I0731 18:10:14.977158   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 18:10:15.014882   74203 logs.go:123] Gathering logs for kubelet ...
	I0731 18:10:15.014912   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 18:10:11.779007   73696 pod_ready.go:102] pod "metrics-server-569cc877fc-64hp4" in "kube-system" namespace has status "Ready":"False"
	I0731 18:10:14.279638   73696 pod_ready.go:102] pod "metrics-server-569cc877fc-64hp4" in "kube-system" namespace has status "Ready":"False"
	I0731 18:10:14.142303   73479 pod_ready.go:102] pod "metrics-server-78fcd8795b-27pkr" in "kube-system" namespace has status "Ready":"False"
	I0731 18:10:16.143713   73479 pod_ready.go:102] pod "metrics-server-78fcd8795b-27pkr" in "kube-system" namespace has status "Ready":"False"
	I0731 18:10:16.770910   73800 pod_ready.go:102] pod "metrics-server-569cc877fc-fzxrw" in "kube-system" namespace has status "Ready":"False"
	I0731 18:10:18.771058   73800 pod_ready.go:102] pod "metrics-server-569cc877fc-fzxrw" in "kube-system" namespace has status "Ready":"False"
	I0731 18:10:17.564989   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:10:17.578676   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 18:10:17.578740   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 18:10:17.610077   74203 cri.go:89] found id: ""
	I0731 18:10:17.610103   74203 logs.go:276] 0 containers: []
	W0731 18:10:17.610112   74203 logs.go:278] No container was found matching "kube-apiserver"
	I0731 18:10:17.610117   74203 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 18:10:17.610169   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 18:10:17.643143   74203 cri.go:89] found id: ""
	I0731 18:10:17.643166   74203 logs.go:276] 0 containers: []
	W0731 18:10:17.643173   74203 logs.go:278] No container was found matching "etcd"
	I0731 18:10:17.643179   74203 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 18:10:17.643225   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 18:10:17.677979   74203 cri.go:89] found id: ""
	I0731 18:10:17.678002   74203 logs.go:276] 0 containers: []
	W0731 18:10:17.678010   74203 logs.go:278] No container was found matching "coredns"
	I0731 18:10:17.678016   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 18:10:17.678086   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 18:10:17.711905   74203 cri.go:89] found id: ""
	I0731 18:10:17.711941   74203 logs.go:276] 0 containers: []
	W0731 18:10:17.711953   74203 logs.go:278] No container was found matching "kube-scheduler"
	I0731 18:10:17.711960   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 18:10:17.712013   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 18:10:17.745842   74203 cri.go:89] found id: ""
	I0731 18:10:17.745870   74203 logs.go:276] 0 containers: []
	W0731 18:10:17.745881   74203 logs.go:278] No container was found matching "kube-proxy"
	I0731 18:10:17.745889   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 18:10:17.745949   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 18:10:17.778170   74203 cri.go:89] found id: ""
	I0731 18:10:17.778242   74203 logs.go:276] 0 containers: []
	W0731 18:10:17.778260   74203 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 18:10:17.778272   74203 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 18:10:17.778340   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 18:10:17.810717   74203 cri.go:89] found id: ""
	I0731 18:10:17.810744   74203 logs.go:276] 0 containers: []
	W0731 18:10:17.810755   74203 logs.go:278] No container was found matching "kindnet"
	I0731 18:10:17.810762   74203 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 18:10:17.810823   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 18:10:17.843237   74203 cri.go:89] found id: ""
	I0731 18:10:17.843268   74203 logs.go:276] 0 containers: []
	W0731 18:10:17.843278   74203 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 18:10:17.843288   74203 logs.go:123] Gathering logs for kubelet ...
	I0731 18:10:17.843303   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 18:10:17.894338   74203 logs.go:123] Gathering logs for dmesg ...
	I0731 18:10:17.894376   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 18:10:17.907898   74203 logs.go:123] Gathering logs for describe nodes ...
	I0731 18:10:17.907927   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 18:10:17.977115   74203 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 18:10:17.977133   74203 logs.go:123] Gathering logs for CRI-O ...
	I0731 18:10:17.977145   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 18:10:18.059924   74203 logs.go:123] Gathering logs for container status ...
	I0731 18:10:18.059968   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 18:10:16.279697   73696 pod_ready.go:102] pod "metrics-server-569cc877fc-64hp4" in "kube-system" namespace has status "Ready":"False"
	I0731 18:10:18.780698   73696 pod_ready.go:102] pod "metrics-server-569cc877fc-64hp4" in "kube-system" namespace has status "Ready":"False"
	I0731 18:10:18.144063   73479 pod_ready.go:102] pod "metrics-server-78fcd8795b-27pkr" in "kube-system" namespace has status "Ready":"False"
	I0731 18:10:20.643891   73479 pod_ready.go:102] pod "metrics-server-78fcd8795b-27pkr" in "kube-system" namespace has status "Ready":"False"
	I0731 18:10:20.772956   73800 pod_ready.go:102] pod "metrics-server-569cc877fc-fzxrw" in "kube-system" namespace has status "Ready":"False"
	I0731 18:10:23.270974   73800 pod_ready.go:102] pod "metrics-server-569cc877fc-fzxrw" in "kube-system" namespace has status "Ready":"False"
	I0731 18:10:20.600903   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:10:20.613609   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 18:10:20.613680   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 18:10:20.646352   74203 cri.go:89] found id: ""
	I0731 18:10:20.646379   74203 logs.go:276] 0 containers: []
	W0731 18:10:20.646388   74203 logs.go:278] No container was found matching "kube-apiserver"
	I0731 18:10:20.646395   74203 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 18:10:20.646453   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 18:10:20.680448   74203 cri.go:89] found id: ""
	I0731 18:10:20.680475   74203 logs.go:276] 0 containers: []
	W0731 18:10:20.680486   74203 logs.go:278] No container was found matching "etcd"
	I0731 18:10:20.680493   74203 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 18:10:20.680555   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 18:10:20.716330   74203 cri.go:89] found id: ""
	I0731 18:10:20.716365   74203 logs.go:276] 0 containers: []
	W0731 18:10:20.716378   74203 logs.go:278] No container was found matching "coredns"
	I0731 18:10:20.716387   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 18:10:20.716448   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 18:10:20.748630   74203 cri.go:89] found id: ""
	I0731 18:10:20.748657   74203 logs.go:276] 0 containers: []
	W0731 18:10:20.748665   74203 logs.go:278] No container was found matching "kube-scheduler"
	I0731 18:10:20.748670   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 18:10:20.748736   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 18:10:20.787769   74203 cri.go:89] found id: ""
	I0731 18:10:20.787793   74203 logs.go:276] 0 containers: []
	W0731 18:10:20.787802   74203 logs.go:278] No container was found matching "kube-proxy"
	I0731 18:10:20.787809   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 18:10:20.787869   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 18:10:20.819884   74203 cri.go:89] found id: ""
	I0731 18:10:20.819911   74203 logs.go:276] 0 containers: []
	W0731 18:10:20.819921   74203 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 18:10:20.819929   74203 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 18:10:20.819988   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 18:10:20.853414   74203 cri.go:89] found id: ""
	I0731 18:10:20.853437   74203 logs.go:276] 0 containers: []
	W0731 18:10:20.853445   74203 logs.go:278] No container was found matching "kindnet"
	I0731 18:10:20.853450   74203 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 18:10:20.853508   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 18:10:20.889198   74203 cri.go:89] found id: ""
	I0731 18:10:20.889224   74203 logs.go:276] 0 containers: []
	W0731 18:10:20.889231   74203 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 18:10:20.889239   74203 logs.go:123] Gathering logs for dmesg ...
	I0731 18:10:20.889251   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 18:10:20.903240   74203 logs.go:123] Gathering logs for describe nodes ...
	I0731 18:10:20.903268   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 18:10:20.971003   74203 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 18:10:20.971032   74203 logs.go:123] Gathering logs for CRI-O ...
	I0731 18:10:20.971051   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 18:10:21.045856   74203 logs.go:123] Gathering logs for container status ...
	I0731 18:10:21.045888   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 18:10:21.086089   74203 logs.go:123] Gathering logs for kubelet ...
	I0731 18:10:21.086121   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 18:10:23.639664   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:10:23.652573   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 18:10:23.652632   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 18:10:23.684719   74203 cri.go:89] found id: ""
	I0731 18:10:23.684746   74203 logs.go:276] 0 containers: []
	W0731 18:10:23.684757   74203 logs.go:278] No container was found matching "kube-apiserver"
	I0731 18:10:23.684765   74203 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 18:10:23.684820   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 18:10:23.717315   74203 cri.go:89] found id: ""
	I0731 18:10:23.717350   74203 logs.go:276] 0 containers: []
	W0731 18:10:23.717362   74203 logs.go:278] No container was found matching "etcd"
	I0731 18:10:23.717369   74203 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 18:10:23.717432   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 18:10:23.750251   74203 cri.go:89] found id: ""
	I0731 18:10:23.750275   74203 logs.go:276] 0 containers: []
	W0731 18:10:23.750286   74203 logs.go:278] No container was found matching "coredns"
	I0731 18:10:23.750293   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 18:10:23.750397   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 18:10:23.785700   74203 cri.go:89] found id: ""
	I0731 18:10:23.785726   74203 logs.go:276] 0 containers: []
	W0731 18:10:23.785737   74203 logs.go:278] No container was found matching "kube-scheduler"
	I0731 18:10:23.785745   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 18:10:23.785792   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 18:10:23.816856   74203 cri.go:89] found id: ""
	I0731 18:10:23.816885   74203 logs.go:276] 0 containers: []
	W0731 18:10:23.816895   74203 logs.go:278] No container was found matching "kube-proxy"
	I0731 18:10:23.816902   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 18:10:23.816965   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 18:10:23.849931   74203 cri.go:89] found id: ""
	I0731 18:10:23.849962   74203 logs.go:276] 0 containers: []
	W0731 18:10:23.849972   74203 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 18:10:23.849980   74203 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 18:10:23.850043   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 18:10:23.881413   74203 cri.go:89] found id: ""
	I0731 18:10:23.881444   74203 logs.go:276] 0 containers: []
	W0731 18:10:23.881452   74203 logs.go:278] No container was found matching "kindnet"
	I0731 18:10:23.881458   74203 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 18:10:23.881516   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 18:10:23.914272   74203 cri.go:89] found id: ""
	I0731 18:10:23.914303   74203 logs.go:276] 0 containers: []
	W0731 18:10:23.914313   74203 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 18:10:23.914325   74203 logs.go:123] Gathering logs for describe nodes ...
	I0731 18:10:23.914352   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 18:10:23.979988   74203 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 18:10:23.980015   74203 logs.go:123] Gathering logs for CRI-O ...
	I0731 18:10:23.980027   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 18:10:24.057159   74203 logs.go:123] Gathering logs for container status ...
	I0731 18:10:24.057198   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 18:10:24.097567   74203 logs.go:123] Gathering logs for kubelet ...
	I0731 18:10:24.097603   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 18:10:24.154740   74203 logs.go:123] Gathering logs for dmesg ...
	I0731 18:10:24.154781   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 18:10:21.279091   73696 pod_ready.go:102] pod "metrics-server-569cc877fc-64hp4" in "kube-system" namespace has status "Ready":"False"
	I0731 18:10:23.779103   73696 pod_ready.go:102] pod "metrics-server-569cc877fc-64hp4" in "kube-system" namespace has status "Ready":"False"
	I0731 18:10:25.779754   73696 pod_ready.go:102] pod "metrics-server-569cc877fc-64hp4" in "kube-system" namespace has status "Ready":"False"
	I0731 18:10:23.142423   73479 pod_ready.go:102] pod "metrics-server-78fcd8795b-27pkr" in "kube-system" namespace has status "Ready":"False"
	I0731 18:10:25.642901   73479 pod_ready.go:102] pod "metrics-server-78fcd8795b-27pkr" in "kube-system" namespace has status "Ready":"False"
	I0731 18:10:25.272277   73800 pod_ready.go:102] pod "metrics-server-569cc877fc-fzxrw" in "kube-system" namespace has status "Ready":"False"
	I0731 18:10:27.771221   73800 pod_ready.go:102] pod "metrics-server-569cc877fc-fzxrw" in "kube-system" namespace has status "Ready":"False"
	I0731 18:10:26.670324   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:10:26.683866   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 18:10:26.683951   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 18:10:26.717671   74203 cri.go:89] found id: ""
	I0731 18:10:26.717722   74203 logs.go:276] 0 containers: []
	W0731 18:10:26.717733   74203 logs.go:278] No container was found matching "kube-apiserver"
	I0731 18:10:26.717739   74203 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 18:10:26.717790   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 18:10:26.751201   74203 cri.go:89] found id: ""
	I0731 18:10:26.751228   74203 logs.go:276] 0 containers: []
	W0731 18:10:26.751236   74203 logs.go:278] No container was found matching "etcd"
	I0731 18:10:26.751246   74203 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 18:10:26.751315   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 18:10:26.784768   74203 cri.go:89] found id: ""
	I0731 18:10:26.784793   74203 logs.go:276] 0 containers: []
	W0731 18:10:26.784803   74203 logs.go:278] No container was found matching "coredns"
	I0731 18:10:26.784811   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 18:10:26.784868   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 18:10:26.822269   74203 cri.go:89] found id: ""
	I0731 18:10:26.822298   74203 logs.go:276] 0 containers: []
	W0731 18:10:26.822307   74203 logs.go:278] No container was found matching "kube-scheduler"
	I0731 18:10:26.822315   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 18:10:26.822378   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 18:10:26.854405   74203 cri.go:89] found id: ""
	I0731 18:10:26.854427   74203 logs.go:276] 0 containers: []
	W0731 18:10:26.854434   74203 logs.go:278] No container was found matching "kube-proxy"
	I0731 18:10:26.854441   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 18:10:26.854490   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 18:10:26.888975   74203 cri.go:89] found id: ""
	I0731 18:10:26.889000   74203 logs.go:276] 0 containers: []
	W0731 18:10:26.889007   74203 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 18:10:26.889013   74203 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 18:10:26.889085   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 18:10:26.922940   74203 cri.go:89] found id: ""
	I0731 18:10:26.922967   74203 logs.go:276] 0 containers: []
	W0731 18:10:26.922976   74203 logs.go:278] No container was found matching "kindnet"
	I0731 18:10:26.922981   74203 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 18:10:26.923040   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 18:10:26.955717   74203 cri.go:89] found id: ""
	I0731 18:10:26.955743   74203 logs.go:276] 0 containers: []
	W0731 18:10:26.955754   74203 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 18:10:26.955764   74203 logs.go:123] Gathering logs for kubelet ...
	I0731 18:10:26.955779   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 18:10:27.006453   74203 logs.go:123] Gathering logs for dmesg ...
	I0731 18:10:27.006481   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 18:10:27.019136   74203 logs.go:123] Gathering logs for describe nodes ...
	I0731 18:10:27.019159   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 18:10:27.086988   74203 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 18:10:27.087014   74203 logs.go:123] Gathering logs for CRI-O ...
	I0731 18:10:27.087031   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 18:10:27.161574   74203 logs.go:123] Gathering logs for container status ...
	I0731 18:10:27.161604   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 18:10:29.705620   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:10:29.718718   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 18:10:29.718775   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 18:10:29.751079   74203 cri.go:89] found id: ""
	I0731 18:10:29.751123   74203 logs.go:276] 0 containers: []
	W0731 18:10:29.751134   74203 logs.go:278] No container was found matching "kube-apiserver"
	I0731 18:10:29.751142   74203 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 18:10:29.751198   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 18:10:29.790944   74203 cri.go:89] found id: ""
	I0731 18:10:29.790971   74203 logs.go:276] 0 containers: []
	W0731 18:10:29.790982   74203 logs.go:278] No container was found matching "etcd"
	I0731 18:10:29.790988   74203 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 18:10:29.791041   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 18:10:29.827921   74203 cri.go:89] found id: ""
	I0731 18:10:29.827951   74203 logs.go:276] 0 containers: []
	W0731 18:10:29.827965   74203 logs.go:278] No container was found matching "coredns"
	I0731 18:10:29.827971   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 18:10:29.828031   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 18:10:29.861365   74203 cri.go:89] found id: ""
	I0731 18:10:29.861398   74203 logs.go:276] 0 containers: []
	W0731 18:10:29.861409   74203 logs.go:278] No container was found matching "kube-scheduler"
	I0731 18:10:29.861417   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 18:10:29.861472   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 18:10:29.894509   74203 cri.go:89] found id: ""
	I0731 18:10:29.894537   74203 logs.go:276] 0 containers: []
	W0731 18:10:29.894546   74203 logs.go:278] No container was found matching "kube-proxy"
	I0731 18:10:29.894552   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 18:10:29.894614   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 18:10:29.926793   74203 cri.go:89] found id: ""
	I0731 18:10:29.926821   74203 logs.go:276] 0 containers: []
	W0731 18:10:29.926832   74203 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 18:10:29.926839   74203 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 18:10:29.926904   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 18:10:29.963765   74203 cri.go:89] found id: ""
	I0731 18:10:29.963792   74203 logs.go:276] 0 containers: []
	W0731 18:10:29.963802   74203 logs.go:278] No container was found matching "kindnet"
	I0731 18:10:29.963809   74203 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 18:10:29.963870   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 18:10:29.998577   74203 cri.go:89] found id: ""
	I0731 18:10:29.998604   74203 logs.go:276] 0 containers: []
	W0731 18:10:29.998611   74203 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 18:10:29.998619   74203 logs.go:123] Gathering logs for kubelet ...
	I0731 18:10:29.998630   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 18:10:30.050035   74203 logs.go:123] Gathering logs for dmesg ...
	I0731 18:10:30.050072   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 18:10:30.064147   74203 logs.go:123] Gathering logs for describe nodes ...
	I0731 18:10:30.064178   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 18:10:30.136990   74203 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 18:10:30.137012   74203 logs.go:123] Gathering logs for CRI-O ...
	I0731 18:10:30.137030   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 18:10:30.214687   74203 logs.go:123] Gathering logs for container status ...
	I0731 18:10:30.214719   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 18:10:28.279257   73696 pod_ready.go:102] pod "metrics-server-569cc877fc-64hp4" in "kube-system" namespace has status "Ready":"False"
	I0731 18:10:30.778466   73696 pod_ready.go:102] pod "metrics-server-569cc877fc-64hp4" in "kube-system" namespace has status "Ready":"False"
	I0731 18:10:27.644082   73479 pod_ready.go:102] pod "metrics-server-78fcd8795b-27pkr" in "kube-system" namespace has status "Ready":"False"
	I0731 18:10:30.144191   73479 pod_ready.go:102] pod "metrics-server-78fcd8795b-27pkr" in "kube-system" namespace has status "Ready":"False"
	I0731 18:10:29.772316   73800 pod_ready.go:102] pod "metrics-server-569cc877fc-fzxrw" in "kube-system" namespace has status "Ready":"False"
	I0731 18:10:32.272651   73800 pod_ready.go:102] pod "metrics-server-569cc877fc-fzxrw" in "kube-system" namespace has status "Ready":"False"
	I0731 18:10:32.753503   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:10:32.766795   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 18:10:32.766873   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 18:10:32.812134   74203 cri.go:89] found id: ""
	I0731 18:10:32.812161   74203 logs.go:276] 0 containers: []
	W0731 18:10:32.812169   74203 logs.go:278] No container was found matching "kube-apiserver"
	I0731 18:10:32.812175   74203 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 18:10:32.812229   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 18:10:32.846997   74203 cri.go:89] found id: ""
	I0731 18:10:32.847029   74203 logs.go:276] 0 containers: []
	W0731 18:10:32.847039   74203 logs.go:278] No container was found matching "etcd"
	I0731 18:10:32.847044   74203 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 18:10:32.847092   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 18:10:32.884093   74203 cri.go:89] found id: ""
	I0731 18:10:32.884123   74203 logs.go:276] 0 containers: []
	W0731 18:10:32.884132   74203 logs.go:278] No container was found matching "coredns"
	I0731 18:10:32.884138   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 18:10:32.884188   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 18:10:32.920160   74203 cri.go:89] found id: ""
	I0731 18:10:32.920186   74203 logs.go:276] 0 containers: []
	W0731 18:10:32.920197   74203 logs.go:278] No container was found matching "kube-scheduler"
	I0731 18:10:32.920204   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 18:10:32.920263   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 18:10:32.952750   74203 cri.go:89] found id: ""
	I0731 18:10:32.952777   74203 logs.go:276] 0 containers: []
	W0731 18:10:32.952788   74203 logs.go:278] No container was found matching "kube-proxy"
	I0731 18:10:32.952795   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 18:10:32.952865   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 18:10:32.989086   74203 cri.go:89] found id: ""
	I0731 18:10:32.989115   74203 logs.go:276] 0 containers: []
	W0731 18:10:32.989125   74203 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 18:10:32.989135   74203 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 18:10:32.989189   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 18:10:33.021554   74203 cri.go:89] found id: ""
	I0731 18:10:33.021590   74203 logs.go:276] 0 containers: []
	W0731 18:10:33.021602   74203 logs.go:278] No container was found matching "kindnet"
	I0731 18:10:33.021609   74203 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 18:10:33.021662   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 18:10:33.061097   74203 cri.go:89] found id: ""
	I0731 18:10:33.061128   74203 logs.go:276] 0 containers: []
	W0731 18:10:33.061141   74203 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 18:10:33.061160   74203 logs.go:123] Gathering logs for kubelet ...
	I0731 18:10:33.061174   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 18:10:33.113497   74203 logs.go:123] Gathering logs for dmesg ...
	I0731 18:10:33.113534   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 18:10:33.126816   74203 logs.go:123] Gathering logs for describe nodes ...
	I0731 18:10:33.126842   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 18:10:33.196713   74203 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 18:10:33.196733   74203 logs.go:123] Gathering logs for CRI-O ...
	I0731 18:10:33.196744   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 18:10:33.277697   74203 logs.go:123] Gathering logs for container status ...
	I0731 18:10:33.277724   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 18:10:33.279738   73696 pod_ready.go:102] pod "metrics-server-569cc877fc-64hp4" in "kube-system" namespace has status "Ready":"False"
	I0731 18:10:35.780181   73696 pod_ready.go:102] pod "metrics-server-569cc877fc-64hp4" in "kube-system" namespace has status "Ready":"False"
	I0731 18:10:32.643177   73479 pod_ready.go:102] pod "metrics-server-78fcd8795b-27pkr" in "kube-system" namespace has status "Ready":"False"
	I0731 18:10:35.143606   73479 pod_ready.go:102] pod "metrics-server-78fcd8795b-27pkr" in "kube-system" namespace has status "Ready":"False"
	I0731 18:10:34.771678   73800 pod_ready.go:102] pod "metrics-server-569cc877fc-fzxrw" in "kube-system" namespace has status "Ready":"False"
	I0731 18:10:36.772167   73800 pod_ready.go:102] pod "metrics-server-569cc877fc-fzxrw" in "kube-system" namespace has status "Ready":"False"
	I0731 18:10:39.272752   73800 pod_ready.go:102] pod "metrics-server-569cc877fc-fzxrw" in "kube-system" namespace has status "Ready":"False"
	I0731 18:10:35.817143   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:10:35.829760   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 18:10:35.829820   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 18:10:35.862974   74203 cri.go:89] found id: ""
	I0731 18:10:35.863002   74203 logs.go:276] 0 containers: []
	W0731 18:10:35.863014   74203 logs.go:278] No container was found matching "kube-apiserver"
	I0731 18:10:35.863022   74203 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 18:10:35.863078   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 18:10:35.898547   74203 cri.go:89] found id: ""
	I0731 18:10:35.898576   74203 logs.go:276] 0 containers: []
	W0731 18:10:35.898584   74203 logs.go:278] No container was found matching "etcd"
	I0731 18:10:35.898590   74203 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 18:10:35.898651   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 18:10:35.930351   74203 cri.go:89] found id: ""
	I0731 18:10:35.930379   74203 logs.go:276] 0 containers: []
	W0731 18:10:35.930390   74203 logs.go:278] No container was found matching "coredns"
	I0731 18:10:35.930396   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 18:10:35.930463   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 18:10:35.962623   74203 cri.go:89] found id: ""
	I0731 18:10:35.962652   74203 logs.go:276] 0 containers: []
	W0731 18:10:35.962663   74203 logs.go:278] No container was found matching "kube-scheduler"
	I0731 18:10:35.962670   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 18:10:35.962727   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 18:10:35.998213   74203 cri.go:89] found id: ""
	I0731 18:10:35.998233   74203 logs.go:276] 0 containers: []
	W0731 18:10:35.998240   74203 logs.go:278] No container was found matching "kube-proxy"
	I0731 18:10:35.998245   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 18:10:35.998291   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 18:10:36.032670   74203 cri.go:89] found id: ""
	I0731 18:10:36.032695   74203 logs.go:276] 0 containers: []
	W0731 18:10:36.032703   74203 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 18:10:36.032709   74203 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 18:10:36.032757   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 18:10:36.066349   74203 cri.go:89] found id: ""
	I0731 18:10:36.066381   74203 logs.go:276] 0 containers: []
	W0731 18:10:36.066392   74203 logs.go:278] No container was found matching "kindnet"
	I0731 18:10:36.066399   74203 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 18:10:36.066461   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 18:10:36.104137   74203 cri.go:89] found id: ""
	I0731 18:10:36.104168   74203 logs.go:276] 0 containers: []
	W0731 18:10:36.104180   74203 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 18:10:36.104200   74203 logs.go:123] Gathering logs for kubelet ...
	I0731 18:10:36.104215   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 18:10:36.155814   74203 logs.go:123] Gathering logs for dmesg ...
	I0731 18:10:36.155844   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 18:10:36.168885   74203 logs.go:123] Gathering logs for describe nodes ...
	I0731 18:10:36.168912   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 18:10:36.235950   74203 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 18:10:36.235972   74203 logs.go:123] Gathering logs for CRI-O ...
	I0731 18:10:36.235987   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 18:10:36.318382   74203 logs.go:123] Gathering logs for container status ...
	I0731 18:10:36.318414   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 18:10:38.853972   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:10:38.867018   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 18:10:38.867089   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 18:10:38.902069   74203 cri.go:89] found id: ""
	I0731 18:10:38.902097   74203 logs.go:276] 0 containers: []
	W0731 18:10:38.902109   74203 logs.go:278] No container was found matching "kube-apiserver"
	I0731 18:10:38.902115   74203 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 18:10:38.902181   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 18:10:38.935272   74203 cri.go:89] found id: ""
	I0731 18:10:38.935296   74203 logs.go:276] 0 containers: []
	W0731 18:10:38.935316   74203 logs.go:278] No container was found matching "etcd"
	I0731 18:10:38.935329   74203 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 18:10:38.935387   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 18:10:38.968582   74203 cri.go:89] found id: ""
	I0731 18:10:38.968610   74203 logs.go:276] 0 containers: []
	W0731 18:10:38.968621   74203 logs.go:278] No container was found matching "coredns"
	I0731 18:10:38.968629   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 18:10:38.968688   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 18:10:38.999740   74203 cri.go:89] found id: ""
	I0731 18:10:38.999770   74203 logs.go:276] 0 containers: []
	W0731 18:10:38.999780   74203 logs.go:278] No container was found matching "kube-scheduler"
	I0731 18:10:38.999787   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 18:10:38.999845   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 18:10:39.032964   74203 cri.go:89] found id: ""
	I0731 18:10:39.032993   74203 logs.go:276] 0 containers: []
	W0731 18:10:39.033008   74203 logs.go:278] No container was found matching "kube-proxy"
	I0731 18:10:39.033015   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 18:10:39.033099   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 18:10:39.064121   74203 cri.go:89] found id: ""
	I0731 18:10:39.064149   74203 logs.go:276] 0 containers: []
	W0731 18:10:39.064158   74203 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 18:10:39.064164   74203 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 18:10:39.064222   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 18:10:39.098462   74203 cri.go:89] found id: ""
	I0731 18:10:39.098488   74203 logs.go:276] 0 containers: []
	W0731 18:10:39.098498   74203 logs.go:278] No container was found matching "kindnet"
	I0731 18:10:39.098505   74203 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 18:10:39.098564   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 18:10:39.130627   74203 cri.go:89] found id: ""
	I0731 18:10:39.130653   74203 logs.go:276] 0 containers: []
	W0731 18:10:39.130663   74203 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 18:10:39.130674   74203 logs.go:123] Gathering logs for CRI-O ...
	I0731 18:10:39.130687   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 18:10:39.223664   74203 logs.go:123] Gathering logs for container status ...
	I0731 18:10:39.223698   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 18:10:39.260502   74203 logs.go:123] Gathering logs for kubelet ...
	I0731 18:10:39.260533   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 18:10:39.315643   74203 logs.go:123] Gathering logs for dmesg ...
	I0731 18:10:39.315675   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 18:10:39.329731   74203 logs.go:123] Gathering logs for describe nodes ...
	I0731 18:10:39.329761   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 18:10:39.395078   74203 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 18:10:38.278911   73696 pod_ready.go:102] pod "metrics-server-569cc877fc-64hp4" in "kube-system" namespace has status "Ready":"False"
	I0731 18:10:40.779921   73696 pod_ready.go:102] pod "metrics-server-569cc877fc-64hp4" in "kube-system" namespace has status "Ready":"False"
	I0731 18:10:37.643246   73479 pod_ready.go:102] pod "metrics-server-78fcd8795b-27pkr" in "kube-system" namespace has status "Ready":"False"
	I0731 18:10:39.643862   73479 pod_ready.go:102] pod "metrics-server-78fcd8795b-27pkr" in "kube-system" namespace has status "Ready":"False"
	I0731 18:10:41.772051   73800 pod_ready.go:102] pod "metrics-server-569cc877fc-fzxrw" in "kube-system" namespace has status "Ready":"False"
	I0731 18:10:44.271544   73800 pod_ready.go:102] pod "metrics-server-569cc877fc-fzxrw" in "kube-system" namespace has status "Ready":"False"
	I0731 18:10:41.895698   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:10:41.910111   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 18:10:41.910191   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 18:10:41.943700   74203 cri.go:89] found id: ""
	I0731 18:10:41.943732   74203 logs.go:276] 0 containers: []
	W0731 18:10:41.943743   74203 logs.go:278] No container was found matching "kube-apiserver"
	I0731 18:10:41.943751   74203 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 18:10:41.943812   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 18:10:41.976848   74203 cri.go:89] found id: ""
	I0731 18:10:41.976879   74203 logs.go:276] 0 containers: []
	W0731 18:10:41.976888   74203 logs.go:278] No container was found matching "etcd"
	I0731 18:10:41.976894   74203 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 18:10:41.976967   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 18:10:42.009424   74203 cri.go:89] found id: ""
	I0731 18:10:42.009451   74203 logs.go:276] 0 containers: []
	W0731 18:10:42.009462   74203 logs.go:278] No container was found matching "coredns"
	I0731 18:10:42.009477   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 18:10:42.009546   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 18:10:42.047233   74203 cri.go:89] found id: ""
	I0731 18:10:42.047260   74203 logs.go:276] 0 containers: []
	W0731 18:10:42.047268   74203 logs.go:278] No container was found matching "kube-scheduler"
	I0731 18:10:42.047274   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 18:10:42.047342   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 18:10:42.079900   74203 cri.go:89] found id: ""
	I0731 18:10:42.079928   74203 logs.go:276] 0 containers: []
	W0731 18:10:42.079938   74203 logs.go:278] No container was found matching "kube-proxy"
	I0731 18:10:42.079945   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 18:10:42.080025   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 18:10:42.114122   74203 cri.go:89] found id: ""
	I0731 18:10:42.114152   74203 logs.go:276] 0 containers: []
	W0731 18:10:42.114164   74203 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 18:10:42.114172   74203 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 18:10:42.114224   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 18:10:42.148741   74203 cri.go:89] found id: ""
	I0731 18:10:42.148768   74203 logs.go:276] 0 containers: []
	W0731 18:10:42.148780   74203 logs.go:278] No container was found matching "kindnet"
	I0731 18:10:42.148789   74203 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 18:10:42.148853   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 18:10:42.184739   74203 cri.go:89] found id: ""
	I0731 18:10:42.184762   74203 logs.go:276] 0 containers: []
	W0731 18:10:42.184769   74203 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 18:10:42.184777   74203 logs.go:123] Gathering logs for describe nodes ...
	I0731 18:10:42.184791   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 18:10:42.254676   74203 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 18:10:42.254694   74203 logs.go:123] Gathering logs for CRI-O ...
	I0731 18:10:42.254706   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 18:10:42.334936   74203 logs.go:123] Gathering logs for container status ...
	I0731 18:10:42.334978   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 18:10:42.371511   74203 logs.go:123] Gathering logs for kubelet ...
	I0731 18:10:42.371540   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 18:10:42.421800   74203 logs.go:123] Gathering logs for dmesg ...
	I0731 18:10:42.421831   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 18:10:44.934983   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:10:44.947212   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 18:10:44.947293   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 18:10:44.979722   74203 cri.go:89] found id: ""
	I0731 18:10:44.979748   74203 logs.go:276] 0 containers: []
	W0731 18:10:44.979760   74203 logs.go:278] No container was found matching "kube-apiserver"
	I0731 18:10:44.979767   74203 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 18:10:44.979819   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 18:10:45.011594   74203 cri.go:89] found id: ""
	I0731 18:10:45.011620   74203 logs.go:276] 0 containers: []
	W0731 18:10:45.011630   74203 logs.go:278] No container was found matching "etcd"
	I0731 18:10:45.011637   74203 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 18:10:45.011803   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 18:10:45.043174   74203 cri.go:89] found id: ""
	I0731 18:10:45.043197   74203 logs.go:276] 0 containers: []
	W0731 18:10:45.043207   74203 logs.go:278] No container was found matching "coredns"
	I0731 18:10:45.043214   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 18:10:45.043278   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 18:10:45.074629   74203 cri.go:89] found id: ""
	I0731 18:10:45.074652   74203 logs.go:276] 0 containers: []
	W0731 18:10:45.074662   74203 logs.go:278] No container was found matching "kube-scheduler"
	I0731 18:10:45.074669   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 18:10:45.074727   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 18:10:45.108917   74203 cri.go:89] found id: ""
	I0731 18:10:45.108944   74203 logs.go:276] 0 containers: []
	W0731 18:10:45.108952   74203 logs.go:278] No container was found matching "kube-proxy"
	I0731 18:10:45.108959   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 18:10:45.109018   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 18:10:45.142200   74203 cri.go:89] found id: ""
	I0731 18:10:45.142227   74203 logs.go:276] 0 containers: []
	W0731 18:10:45.142237   74203 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 18:10:45.142244   74203 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 18:10:45.142306   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 18:10:45.177076   74203 cri.go:89] found id: ""
	I0731 18:10:45.177101   74203 logs.go:276] 0 containers: []
	W0731 18:10:45.177109   74203 logs.go:278] No container was found matching "kindnet"
	I0731 18:10:45.177114   74203 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 18:10:45.177168   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 18:10:45.209352   74203 cri.go:89] found id: ""
	I0731 18:10:45.209376   74203 logs.go:276] 0 containers: []
	W0731 18:10:45.209383   74203 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 18:10:45.209392   74203 logs.go:123] Gathering logs for kubelet ...
	I0731 18:10:45.209407   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 18:10:45.257966   74203 logs.go:123] Gathering logs for dmesg ...
	I0731 18:10:45.257998   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 18:10:45.272429   74203 logs.go:123] Gathering logs for describe nodes ...
	I0731 18:10:45.272462   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 18:10:43.279626   73696 pod_ready.go:102] pod "metrics-server-569cc877fc-64hp4" in "kube-system" namespace has status "Ready":"False"
	I0731 18:10:45.778975   73696 pod_ready.go:102] pod "metrics-server-569cc877fc-64hp4" in "kube-system" namespace has status "Ready":"False"
	I0731 18:10:42.145247   73479 pod_ready.go:102] pod "metrics-server-78fcd8795b-27pkr" in "kube-system" namespace has status "Ready":"False"
	I0731 18:10:44.642278   73479 pod_ready.go:102] pod "metrics-server-78fcd8795b-27pkr" in "kube-system" namespace has status "Ready":"False"
	I0731 18:10:46.644897   73479 pod_ready.go:102] pod "metrics-server-78fcd8795b-27pkr" in "kube-system" namespace has status "Ready":"False"
	I0731 18:10:46.771785   73800 pod_ready.go:102] pod "metrics-server-569cc877fc-fzxrw" in "kube-system" namespace has status "Ready":"False"
	I0731 18:10:48.772117   73800 pod_ready.go:102] pod "metrics-server-569cc877fc-fzxrw" in "kube-system" namespace has status "Ready":"False"
	W0731 18:10:45.347952   74203 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 18:10:45.347973   74203 logs.go:123] Gathering logs for CRI-O ...
	I0731 18:10:45.347988   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 18:10:45.428556   74203 logs.go:123] Gathering logs for container status ...
	I0731 18:10:45.428609   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 18:10:47.971089   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:10:47.986677   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 18:10:47.986749   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 18:10:48.020396   74203 cri.go:89] found id: ""
	I0731 18:10:48.020426   74203 logs.go:276] 0 containers: []
	W0731 18:10:48.020438   74203 logs.go:278] No container was found matching "kube-apiserver"
	I0731 18:10:48.020446   74203 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 18:10:48.020512   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 18:10:48.058129   74203 cri.go:89] found id: ""
	I0731 18:10:48.058161   74203 logs.go:276] 0 containers: []
	W0731 18:10:48.058172   74203 logs.go:278] No container was found matching "etcd"
	I0731 18:10:48.058180   74203 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 18:10:48.058249   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 18:10:48.091894   74203 cri.go:89] found id: ""
	I0731 18:10:48.091922   74203 logs.go:276] 0 containers: []
	W0731 18:10:48.091932   74203 logs.go:278] No container was found matching "coredns"
	I0731 18:10:48.091939   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 18:10:48.091998   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 18:10:48.124757   74203 cri.go:89] found id: ""
	I0731 18:10:48.124788   74203 logs.go:276] 0 containers: []
	W0731 18:10:48.124798   74203 logs.go:278] No container was found matching "kube-scheduler"
	I0731 18:10:48.124807   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 18:10:48.124871   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 18:10:48.159145   74203 cri.go:89] found id: ""
	I0731 18:10:48.159172   74203 logs.go:276] 0 containers: []
	W0731 18:10:48.159184   74203 logs.go:278] No container was found matching "kube-proxy"
	I0731 18:10:48.159191   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 18:10:48.159253   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 18:10:48.200024   74203 cri.go:89] found id: ""
	I0731 18:10:48.200051   74203 logs.go:276] 0 containers: []
	W0731 18:10:48.200061   74203 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 18:10:48.200069   74203 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 18:10:48.200128   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 18:10:48.233838   74203 cri.go:89] found id: ""
	I0731 18:10:48.233870   74203 logs.go:276] 0 containers: []
	W0731 18:10:48.233880   74203 logs.go:278] No container was found matching "kindnet"
	I0731 18:10:48.233886   74203 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 18:10:48.233941   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 18:10:48.265786   74203 cri.go:89] found id: ""
	I0731 18:10:48.265812   74203 logs.go:276] 0 containers: []
	W0731 18:10:48.265821   74203 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 18:10:48.265832   74203 logs.go:123] Gathering logs for dmesg ...
	I0731 18:10:48.265846   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 18:10:48.280422   74203 logs.go:123] Gathering logs for describe nodes ...
	I0731 18:10:48.280449   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 18:10:48.346774   74203 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 18:10:48.346796   74203 logs.go:123] Gathering logs for CRI-O ...
	I0731 18:10:48.346808   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 18:10:48.424017   74203 logs.go:123] Gathering logs for container status ...
	I0731 18:10:48.424052   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 18:10:48.464139   74203 logs.go:123] Gathering logs for kubelet ...
	I0731 18:10:48.464166   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 18:10:47.781556   73696 pod_ready.go:102] pod "metrics-server-569cc877fc-64hp4" in "kube-system" namespace has status "Ready":"False"
	I0731 18:10:50.278635   73696 pod_ready.go:102] pod "metrics-server-569cc877fc-64hp4" in "kube-system" namespace has status "Ready":"False"
	I0731 18:10:49.143684   73479 pod_ready.go:102] pod "metrics-server-78fcd8795b-27pkr" in "kube-system" namespace has status "Ready":"False"
	I0731 18:10:51.144631   73479 pod_ready.go:102] pod "metrics-server-78fcd8795b-27pkr" in "kube-system" namespace has status "Ready":"False"
	I0731 18:10:51.272847   73800 pod_ready.go:102] pod "metrics-server-569cc877fc-fzxrw" in "kube-system" namespace has status "Ready":"False"
	I0731 18:10:53.771397   73800 pod_ready.go:102] pod "metrics-server-569cc877fc-fzxrw" in "kube-system" namespace has status "Ready":"False"
	I0731 18:10:51.013681   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:10:51.028745   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 18:10:51.028814   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 18:10:51.062656   74203 cri.go:89] found id: ""
	I0731 18:10:51.062683   74203 logs.go:276] 0 containers: []
	W0731 18:10:51.062691   74203 logs.go:278] No container was found matching "kube-apiserver"
	I0731 18:10:51.062700   74203 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 18:10:51.062749   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 18:10:51.099203   74203 cri.go:89] found id: ""
	I0731 18:10:51.099228   74203 logs.go:276] 0 containers: []
	W0731 18:10:51.099237   74203 logs.go:278] No container was found matching "etcd"
	I0731 18:10:51.099243   74203 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 18:10:51.099310   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 18:10:51.133507   74203 cri.go:89] found id: ""
	I0731 18:10:51.133533   74203 logs.go:276] 0 containers: []
	W0731 18:10:51.133540   74203 logs.go:278] No container was found matching "coredns"
	I0731 18:10:51.133546   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 18:10:51.133596   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 18:10:51.169935   74203 cri.go:89] found id: ""
	I0731 18:10:51.169954   74203 logs.go:276] 0 containers: []
	W0731 18:10:51.169961   74203 logs.go:278] No container was found matching "kube-scheduler"
	I0731 18:10:51.169966   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 18:10:51.170012   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 18:10:51.202877   74203 cri.go:89] found id: ""
	I0731 18:10:51.202903   74203 logs.go:276] 0 containers: []
	W0731 18:10:51.202913   74203 logs.go:278] No container was found matching "kube-proxy"
	I0731 18:10:51.202919   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 18:10:51.202988   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 18:10:51.239913   74203 cri.go:89] found id: ""
	I0731 18:10:51.239939   74203 logs.go:276] 0 containers: []
	W0731 18:10:51.239949   74203 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 18:10:51.239957   74203 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 18:10:51.240018   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 18:10:51.272024   74203 cri.go:89] found id: ""
	I0731 18:10:51.272095   74203 logs.go:276] 0 containers: []
	W0731 18:10:51.272115   74203 logs.go:278] No container was found matching "kindnet"
	I0731 18:10:51.272123   74203 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 18:10:51.272185   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 18:10:51.307016   74203 cri.go:89] found id: ""
	I0731 18:10:51.307043   74203 logs.go:276] 0 containers: []
	W0731 18:10:51.307053   74203 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 18:10:51.307063   74203 logs.go:123] Gathering logs for kubelet ...
	I0731 18:10:51.307079   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 18:10:51.364018   74203 logs.go:123] Gathering logs for dmesg ...
	I0731 18:10:51.364066   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 18:10:51.384277   74203 logs.go:123] Gathering logs for describe nodes ...
	I0731 18:10:51.384303   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 18:10:51.472657   74203 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 18:10:51.472679   74203 logs.go:123] Gathering logs for CRI-O ...
	I0731 18:10:51.472696   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 18:10:51.548408   74203 logs.go:123] Gathering logs for container status ...
	I0731 18:10:51.548439   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 18:10:54.086526   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:10:54.099293   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 18:10:54.099368   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 18:10:54.129927   74203 cri.go:89] found id: ""
	I0731 18:10:54.129954   74203 logs.go:276] 0 containers: []
	W0731 18:10:54.129965   74203 logs.go:278] No container was found matching "kube-apiserver"
	I0731 18:10:54.129972   74203 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 18:10:54.130042   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 18:10:54.166428   74203 cri.go:89] found id: ""
	I0731 18:10:54.166457   74203 logs.go:276] 0 containers: []
	W0731 18:10:54.166468   74203 logs.go:278] No container was found matching "etcd"
	I0731 18:10:54.166476   74203 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 18:10:54.166538   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 18:10:54.204523   74203 cri.go:89] found id: ""
	I0731 18:10:54.204549   74203 logs.go:276] 0 containers: []
	W0731 18:10:54.204556   74203 logs.go:278] No container was found matching "coredns"
	I0731 18:10:54.204562   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 18:10:54.204619   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 18:10:54.241706   74203 cri.go:89] found id: ""
	I0731 18:10:54.241730   74203 logs.go:276] 0 containers: []
	W0731 18:10:54.241737   74203 logs.go:278] No container was found matching "kube-scheduler"
	I0731 18:10:54.241744   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 18:10:54.241802   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 18:10:54.277154   74203 cri.go:89] found id: ""
	I0731 18:10:54.277178   74203 logs.go:276] 0 containers: []
	W0731 18:10:54.277187   74203 logs.go:278] No container was found matching "kube-proxy"
	I0731 18:10:54.277193   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 18:10:54.277255   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 18:10:54.310198   74203 cri.go:89] found id: ""
	I0731 18:10:54.310223   74203 logs.go:276] 0 containers: []
	W0731 18:10:54.310231   74203 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 18:10:54.310237   74203 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 18:10:54.310283   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 18:10:54.344807   74203 cri.go:89] found id: ""
	I0731 18:10:54.344837   74203 logs.go:276] 0 containers: []
	W0731 18:10:54.344847   74203 logs.go:278] No container was found matching "kindnet"
	I0731 18:10:54.344854   74203 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 18:10:54.344915   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 18:10:54.383358   74203 cri.go:89] found id: ""
	I0731 18:10:54.383391   74203 logs.go:276] 0 containers: []
	W0731 18:10:54.383400   74203 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 18:10:54.383410   74203 logs.go:123] Gathering logs for kubelet ...
	I0731 18:10:54.383424   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 18:10:54.431876   74203 logs.go:123] Gathering logs for dmesg ...
	I0731 18:10:54.431908   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 18:10:54.444797   74203 logs.go:123] Gathering logs for describe nodes ...
	I0731 18:10:54.444824   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 18:10:54.518816   74203 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 18:10:54.518839   74203 logs.go:123] Gathering logs for CRI-O ...
	I0731 18:10:54.518855   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 18:10:54.600072   74203 logs.go:123] Gathering logs for container status ...
	I0731 18:10:54.600109   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 18:10:52.279006   73696 pod_ready.go:102] pod "metrics-server-569cc877fc-64hp4" in "kube-system" namespace has status "Ready":"False"
	I0731 18:10:54.279520   73696 pod_ready.go:102] pod "metrics-server-569cc877fc-64hp4" in "kube-system" namespace has status "Ready":"False"
	I0731 18:10:53.643093   73479 pod_ready.go:102] pod "metrics-server-78fcd8795b-27pkr" in "kube-system" namespace has status "Ready":"False"
	I0731 18:10:56.143250   73479 pod_ready.go:102] pod "metrics-server-78fcd8795b-27pkr" in "kube-system" namespace has status "Ready":"False"
	I0731 18:10:56.272955   73800 pod_ready.go:102] pod "metrics-server-569cc877fc-fzxrw" in "kube-system" namespace has status "Ready":"False"
	I0731 18:10:58.771584   73800 pod_ready.go:102] pod "metrics-server-569cc877fc-fzxrw" in "kube-system" namespace has status "Ready":"False"
	I0731 18:10:57.141070   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:10:57.155903   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 18:10:57.155975   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 18:10:57.189406   74203 cri.go:89] found id: ""
	I0731 18:10:57.189428   74203 logs.go:276] 0 containers: []
	W0731 18:10:57.189435   74203 logs.go:278] No container was found matching "kube-apiserver"
	I0731 18:10:57.189441   74203 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 18:10:57.189510   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 18:10:57.221507   74203 cri.go:89] found id: ""
	I0731 18:10:57.221531   74203 logs.go:276] 0 containers: []
	W0731 18:10:57.221540   74203 logs.go:278] No container was found matching "etcd"
	I0731 18:10:57.221547   74203 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 18:10:57.221614   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 18:10:57.257843   74203 cri.go:89] found id: ""
	I0731 18:10:57.257868   74203 logs.go:276] 0 containers: []
	W0731 18:10:57.257880   74203 logs.go:278] No container was found matching "coredns"
	I0731 18:10:57.257887   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 18:10:57.257939   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 18:10:57.292697   74203 cri.go:89] found id: ""
	I0731 18:10:57.292728   74203 logs.go:276] 0 containers: []
	W0731 18:10:57.292737   74203 logs.go:278] No container was found matching "kube-scheduler"
	I0731 18:10:57.292744   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 18:10:57.292802   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 18:10:57.325705   74203 cri.go:89] found id: ""
	I0731 18:10:57.325728   74203 logs.go:276] 0 containers: []
	W0731 18:10:57.325735   74203 logs.go:278] No container was found matching "kube-proxy"
	I0731 18:10:57.325740   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 18:10:57.325787   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 18:10:57.357436   74203 cri.go:89] found id: ""
	I0731 18:10:57.357463   74203 logs.go:276] 0 containers: []
	W0731 18:10:57.357471   74203 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 18:10:57.357477   74203 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 18:10:57.357534   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 18:10:57.388215   74203 cri.go:89] found id: ""
	I0731 18:10:57.388240   74203 logs.go:276] 0 containers: []
	W0731 18:10:57.388249   74203 logs.go:278] No container was found matching "kindnet"
	I0731 18:10:57.388256   74203 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 18:10:57.388315   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 18:10:57.419609   74203 cri.go:89] found id: ""
	I0731 18:10:57.419631   74203 logs.go:276] 0 containers: []
	W0731 18:10:57.419643   74203 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 18:10:57.419652   74203 logs.go:123] Gathering logs for CRI-O ...
	I0731 18:10:57.419663   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 18:10:57.497157   74203 logs.go:123] Gathering logs for container status ...
	I0731 18:10:57.497188   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 18:10:57.533512   74203 logs.go:123] Gathering logs for kubelet ...
	I0731 18:10:57.533552   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 18:10:57.587866   74203 logs.go:123] Gathering logs for dmesg ...
	I0731 18:10:57.587904   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 18:10:57.601191   74203 logs.go:123] Gathering logs for describe nodes ...
	I0731 18:10:57.601222   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 18:10:57.681899   74203 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 18:11:00.182160   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:11:00.195509   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 18:11:00.195598   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 18:11:00.230650   74203 cri.go:89] found id: ""
	I0731 18:11:00.230674   74203 logs.go:276] 0 containers: []
	W0731 18:11:00.230682   74203 logs.go:278] No container was found matching "kube-apiserver"
	I0731 18:11:00.230689   74203 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 18:11:00.230747   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 18:11:00.268629   74203 cri.go:89] found id: ""
	I0731 18:11:00.268656   74203 logs.go:276] 0 containers: []
	W0731 18:11:00.268666   74203 logs.go:278] No container was found matching "etcd"
	I0731 18:11:00.268672   74203 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 18:11:00.268740   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 18:11:00.301805   74203 cri.go:89] found id: ""
	I0731 18:11:00.301827   74203 logs.go:276] 0 containers: []
	W0731 18:11:00.301836   74203 logs.go:278] No container was found matching "coredns"
	I0731 18:11:00.301843   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 18:11:00.301901   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 18:11:00.333844   74203 cri.go:89] found id: ""
	I0731 18:11:00.333871   74203 logs.go:276] 0 containers: []
	W0731 18:11:00.333882   74203 logs.go:278] No container was found matching "kube-scheduler"
	I0731 18:11:00.333889   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 18:11:00.333949   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 18:10:56.779307   73696 pod_ready.go:102] pod "metrics-server-569cc877fc-64hp4" in "kube-system" namespace has status "Ready":"False"
	I0731 18:10:58.779655   73696 pod_ready.go:102] pod "metrics-server-569cc877fc-64hp4" in "kube-system" namespace has status "Ready":"False"
	I0731 18:10:58.643375   73479 pod_ready.go:102] pod "metrics-server-78fcd8795b-27pkr" in "kube-system" namespace has status "Ready":"False"
	I0731 18:11:00.643713   73479 pod_ready.go:102] pod "metrics-server-78fcd8795b-27pkr" in "kube-system" namespace has status "Ready":"False"
	I0731 18:11:01.272195   73800 pod_ready.go:102] pod "metrics-server-569cc877fc-fzxrw" in "kube-system" namespace has status "Ready":"False"
	I0731 18:11:03.272739   73800 pod_ready.go:102] pod "metrics-server-569cc877fc-fzxrw" in "kube-system" namespace has status "Ready":"False"
	I0731 18:11:00.366250   74203 cri.go:89] found id: ""
	I0731 18:11:00.366278   74203 logs.go:276] 0 containers: []
	W0731 18:11:00.366288   74203 logs.go:278] No container was found matching "kube-proxy"
	I0731 18:11:00.366295   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 18:11:00.366358   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 18:11:00.399301   74203 cri.go:89] found id: ""
	I0731 18:11:00.399325   74203 logs.go:276] 0 containers: []
	W0731 18:11:00.399335   74203 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 18:11:00.399342   74203 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 18:11:00.399405   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 18:11:00.432182   74203 cri.go:89] found id: ""
	I0731 18:11:00.432207   74203 logs.go:276] 0 containers: []
	W0731 18:11:00.432218   74203 logs.go:278] No container was found matching "kindnet"
	I0731 18:11:00.432224   74203 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 18:11:00.432284   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 18:11:00.465395   74203 cri.go:89] found id: ""
	I0731 18:11:00.465423   74203 logs.go:276] 0 containers: []
	W0731 18:11:00.465432   74203 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 18:11:00.465440   74203 logs.go:123] Gathering logs for kubelet ...
	I0731 18:11:00.465453   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 18:11:00.516042   74203 logs.go:123] Gathering logs for dmesg ...
	I0731 18:11:00.516077   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 18:11:00.528621   74203 logs.go:123] Gathering logs for describe nodes ...
	I0731 18:11:00.528647   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 18:11:00.600297   74203 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 18:11:00.600322   74203 logs.go:123] Gathering logs for CRI-O ...
	I0731 18:11:00.600339   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 18:11:00.680368   74203 logs.go:123] Gathering logs for container status ...
	I0731 18:11:00.680399   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 18:11:03.217684   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:11:03.230691   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 18:11:03.230752   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 18:11:03.264882   74203 cri.go:89] found id: ""
	I0731 18:11:03.264910   74203 logs.go:276] 0 containers: []
	W0731 18:11:03.264918   74203 logs.go:278] No container was found matching "kube-apiserver"
	I0731 18:11:03.264924   74203 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 18:11:03.264976   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 18:11:03.301608   74203 cri.go:89] found id: ""
	I0731 18:11:03.301733   74203 logs.go:276] 0 containers: []
	W0731 18:11:03.301754   74203 logs.go:278] No container was found matching "etcd"
	I0731 18:11:03.301765   74203 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 18:11:03.301838   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 18:11:03.335077   74203 cri.go:89] found id: ""
	I0731 18:11:03.335102   74203 logs.go:276] 0 containers: []
	W0731 18:11:03.335121   74203 logs.go:278] No container was found matching "coredns"
	I0731 18:11:03.335128   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 18:11:03.335196   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 18:11:03.370755   74203 cri.go:89] found id: ""
	I0731 18:11:03.370783   74203 logs.go:276] 0 containers: []
	W0731 18:11:03.370794   74203 logs.go:278] No container was found matching "kube-scheduler"
	I0731 18:11:03.370802   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 18:11:03.370862   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 18:11:03.403004   74203 cri.go:89] found id: ""
	I0731 18:11:03.403035   74203 logs.go:276] 0 containers: []
	W0731 18:11:03.403045   74203 logs.go:278] No container was found matching "kube-proxy"
	I0731 18:11:03.403052   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 18:11:03.403125   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 18:11:03.437169   74203 cri.go:89] found id: ""
	I0731 18:11:03.437209   74203 logs.go:276] 0 containers: []
	W0731 18:11:03.437219   74203 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 18:11:03.437235   74203 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 18:11:03.437296   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 18:11:03.469956   74203 cri.go:89] found id: ""
	I0731 18:11:03.469981   74203 logs.go:276] 0 containers: []
	W0731 18:11:03.469991   74203 logs.go:278] No container was found matching "kindnet"
	I0731 18:11:03.469998   74203 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 18:11:03.470056   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 18:11:03.503850   74203 cri.go:89] found id: ""
	I0731 18:11:03.503878   74203 logs.go:276] 0 containers: []
	W0731 18:11:03.503888   74203 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 18:11:03.503898   74203 logs.go:123] Gathering logs for kubelet ...
	I0731 18:11:03.503913   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 18:11:03.554993   74203 logs.go:123] Gathering logs for dmesg ...
	I0731 18:11:03.555036   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 18:11:03.567898   74203 logs.go:123] Gathering logs for describe nodes ...
	I0731 18:11:03.567925   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 18:11:03.630151   74203 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 18:11:03.630188   74203 logs.go:123] Gathering logs for CRI-O ...
	I0731 18:11:03.630207   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 18:11:03.708552   74203 logs.go:123] Gathering logs for container status ...
	I0731 18:11:03.708596   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 18:11:01.278830   73696 pod_ready.go:102] pod "metrics-server-569cc877fc-64hp4" in "kube-system" namespace has status "Ready":"False"
	I0731 18:11:03.278880   73696 pod_ready.go:102] pod "metrics-server-569cc877fc-64hp4" in "kube-system" namespace has status "Ready":"False"
	I0731 18:11:05.778296   73696 pod_ready.go:102] pod "metrics-server-569cc877fc-64hp4" in "kube-system" namespace has status "Ready":"False"
	I0731 18:11:03.143289   73479 pod_ready.go:102] pod "metrics-server-78fcd8795b-27pkr" in "kube-system" namespace has status "Ready":"False"
	I0731 18:11:05.152015   73479 pod_ready.go:102] pod "metrics-server-78fcd8795b-27pkr" in "kube-system" namespace has status "Ready":"False"
	I0731 18:11:05.771810   73800 pod_ready.go:102] pod "metrics-server-569cc877fc-fzxrw" in "kube-system" namespace has status "Ready":"False"
	I0731 18:11:08.271205   73800 pod_ready.go:102] pod "metrics-server-569cc877fc-fzxrw" in "kube-system" namespace has status "Ready":"False"
	I0731 18:11:06.249728   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:11:06.261923   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 18:11:06.261998   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 18:11:06.296249   74203 cri.go:89] found id: ""
	I0731 18:11:06.296276   74203 logs.go:276] 0 containers: []
	W0731 18:11:06.296286   74203 logs.go:278] No container was found matching "kube-apiserver"
	I0731 18:11:06.296292   74203 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 18:11:06.296356   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 18:11:06.329355   74203 cri.go:89] found id: ""
	I0731 18:11:06.329381   74203 logs.go:276] 0 containers: []
	W0731 18:11:06.329389   74203 logs.go:278] No container was found matching "etcd"
	I0731 18:11:06.329395   74203 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 18:11:06.329443   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 18:11:06.362585   74203 cri.go:89] found id: ""
	I0731 18:11:06.362618   74203 logs.go:276] 0 containers: []
	W0731 18:11:06.362630   74203 logs.go:278] No container was found matching "coredns"
	I0731 18:11:06.362643   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 18:11:06.362704   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 18:11:06.396489   74203 cri.go:89] found id: ""
	I0731 18:11:06.396514   74203 logs.go:276] 0 containers: []
	W0731 18:11:06.396521   74203 logs.go:278] No container was found matching "kube-scheduler"
	I0731 18:11:06.396527   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 18:11:06.396590   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 18:11:06.428859   74203 cri.go:89] found id: ""
	I0731 18:11:06.428888   74203 logs.go:276] 0 containers: []
	W0731 18:11:06.428897   74203 logs.go:278] No container was found matching "kube-proxy"
	I0731 18:11:06.428903   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 18:11:06.428960   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 18:11:06.468817   74203 cri.go:89] found id: ""
	I0731 18:11:06.468846   74203 logs.go:276] 0 containers: []
	W0731 18:11:06.468856   74203 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 18:11:06.468864   74203 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 18:11:06.468924   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 18:11:06.499975   74203 cri.go:89] found id: ""
	I0731 18:11:06.500000   74203 logs.go:276] 0 containers: []
	W0731 18:11:06.500008   74203 logs.go:278] No container was found matching "kindnet"
	I0731 18:11:06.500013   74203 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 18:11:06.500067   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 18:11:06.537410   74203 cri.go:89] found id: ""
	I0731 18:11:06.537440   74203 logs.go:276] 0 containers: []
	W0731 18:11:06.537451   74203 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 18:11:06.537461   74203 logs.go:123] Gathering logs for kubelet ...
	I0731 18:11:06.537476   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 18:11:06.589664   74203 logs.go:123] Gathering logs for dmesg ...
	I0731 18:11:06.589709   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 18:11:06.603978   74203 logs.go:123] Gathering logs for describe nodes ...
	I0731 18:11:06.604005   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 18:11:06.673436   74203 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 18:11:06.673454   74203 logs.go:123] Gathering logs for CRI-O ...
	I0731 18:11:06.673465   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 18:11:06.757101   74203 logs.go:123] Gathering logs for container status ...
	I0731 18:11:06.757143   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 18:11:09.299562   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:11:09.311910   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 18:11:09.311971   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 18:11:09.346517   74203 cri.go:89] found id: ""
	I0731 18:11:09.346545   74203 logs.go:276] 0 containers: []
	W0731 18:11:09.346555   74203 logs.go:278] No container was found matching "kube-apiserver"
	I0731 18:11:09.346562   74203 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 18:11:09.346634   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 18:11:09.377688   74203 cri.go:89] found id: ""
	I0731 18:11:09.377713   74203 logs.go:276] 0 containers: []
	W0731 18:11:09.377720   74203 logs.go:278] No container was found matching "etcd"
	I0731 18:11:09.377726   74203 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 18:11:09.377787   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 18:11:09.412149   74203 cri.go:89] found id: ""
	I0731 18:11:09.412176   74203 logs.go:276] 0 containers: []
	W0731 18:11:09.412186   74203 logs.go:278] No container was found matching "coredns"
	I0731 18:11:09.412193   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 18:11:09.412259   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 18:11:09.444134   74203 cri.go:89] found id: ""
	I0731 18:11:09.444162   74203 logs.go:276] 0 containers: []
	W0731 18:11:09.444172   74203 logs.go:278] No container was found matching "kube-scheduler"
	I0731 18:11:09.444178   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 18:11:09.444233   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 18:11:09.481407   74203 cri.go:89] found id: ""
	I0731 18:11:09.481436   74203 logs.go:276] 0 containers: []
	W0731 18:11:09.481447   74203 logs.go:278] No container was found matching "kube-proxy"
	I0731 18:11:09.481453   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 18:11:09.481513   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 18:11:09.514926   74203 cri.go:89] found id: ""
	I0731 18:11:09.514950   74203 logs.go:276] 0 containers: []
	W0731 18:11:09.514967   74203 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 18:11:09.514974   74203 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 18:11:09.515036   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 18:11:09.547253   74203 cri.go:89] found id: ""
	I0731 18:11:09.547278   74203 logs.go:276] 0 containers: []
	W0731 18:11:09.547285   74203 logs.go:278] No container was found matching "kindnet"
	I0731 18:11:09.547291   74203 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 18:11:09.547376   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 18:11:09.587585   74203 cri.go:89] found id: ""
	I0731 18:11:09.587614   74203 logs.go:276] 0 containers: []
	W0731 18:11:09.587622   74203 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 18:11:09.587632   74203 logs.go:123] Gathering logs for kubelet ...
	I0731 18:11:09.587646   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 18:11:09.642024   74203 logs.go:123] Gathering logs for dmesg ...
	I0731 18:11:09.642057   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 18:11:09.655244   74203 logs.go:123] Gathering logs for describe nodes ...
	I0731 18:11:09.655270   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 18:11:09.721446   74203 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 18:11:09.721474   74203 logs.go:123] Gathering logs for CRI-O ...
	I0731 18:11:09.721489   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 18:11:09.803315   74203 logs.go:123] Gathering logs for container status ...
	I0731 18:11:09.803349   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 18:11:07.779195   73696 pod_ready.go:102] pod "metrics-server-569cc877fc-64hp4" in "kube-system" namespace has status "Ready":"False"
	I0731 18:11:10.278028   73696 pod_ready.go:102] pod "metrics-server-569cc877fc-64hp4" in "kube-system" namespace has status "Ready":"False"
	I0731 18:11:07.643242   73479 pod_ready.go:102] pod "metrics-server-78fcd8795b-27pkr" in "kube-system" namespace has status "Ready":"False"
	I0731 18:11:10.143895   73479 pod_ready.go:102] pod "metrics-server-78fcd8795b-27pkr" in "kube-system" namespace has status "Ready":"False"
	I0731 18:11:10.271515   73800 pod_ready.go:102] pod "metrics-server-569cc877fc-fzxrw" in "kube-system" namespace has status "Ready":"False"
	I0731 18:11:12.771322   73800 pod_ready.go:102] pod "metrics-server-569cc877fc-fzxrw" in "kube-system" namespace has status "Ready":"False"
	I0731 18:11:12.344355   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:11:12.357122   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 18:11:12.357194   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 18:11:12.392237   74203 cri.go:89] found id: ""
	I0731 18:11:12.392258   74203 logs.go:276] 0 containers: []
	W0731 18:11:12.392267   74203 logs.go:278] No container was found matching "kube-apiserver"
	I0731 18:11:12.392272   74203 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 18:11:12.392339   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 18:11:12.424490   74203 cri.go:89] found id: ""
	I0731 18:11:12.424514   74203 logs.go:276] 0 containers: []
	W0731 18:11:12.424523   74203 logs.go:278] No container was found matching "etcd"
	I0731 18:11:12.424529   74203 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 18:11:12.424587   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 18:11:12.458438   74203 cri.go:89] found id: ""
	I0731 18:11:12.458467   74203 logs.go:276] 0 containers: []
	W0731 18:11:12.458477   74203 logs.go:278] No container was found matching "coredns"
	I0731 18:11:12.458483   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 18:11:12.458545   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 18:11:12.495343   74203 cri.go:89] found id: ""
	I0731 18:11:12.495371   74203 logs.go:276] 0 containers: []
	W0731 18:11:12.495383   74203 logs.go:278] No container was found matching "kube-scheduler"
	I0731 18:11:12.495391   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 18:11:12.495455   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 18:11:12.527285   74203 cri.go:89] found id: ""
	I0731 18:11:12.527314   74203 logs.go:276] 0 containers: []
	W0731 18:11:12.527324   74203 logs.go:278] No container was found matching "kube-proxy"
	I0731 18:11:12.527334   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 18:11:12.527393   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 18:11:12.560341   74203 cri.go:89] found id: ""
	I0731 18:11:12.560369   74203 logs.go:276] 0 containers: []
	W0731 18:11:12.560379   74203 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 18:11:12.560387   74203 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 18:11:12.560444   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 18:11:12.595084   74203 cri.go:89] found id: ""
	I0731 18:11:12.595120   74203 logs.go:276] 0 containers: []
	W0731 18:11:12.595133   74203 logs.go:278] No container was found matching "kindnet"
	I0731 18:11:12.595141   74203 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 18:11:12.595215   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 18:11:12.630666   74203 cri.go:89] found id: ""
	I0731 18:11:12.630692   74203 logs.go:276] 0 containers: []
	W0731 18:11:12.630702   74203 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 18:11:12.630711   74203 logs.go:123] Gathering logs for kubelet ...
	I0731 18:11:12.630727   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 18:11:12.683588   74203 logs.go:123] Gathering logs for dmesg ...
	I0731 18:11:12.683620   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 18:11:12.696899   74203 logs.go:123] Gathering logs for describe nodes ...
	I0731 18:11:12.696925   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 18:11:12.757815   74203 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 18:11:12.757837   74203 logs.go:123] Gathering logs for CRI-O ...
	I0731 18:11:12.757870   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 18:11:12.834888   74203 logs.go:123] Gathering logs for container status ...
	I0731 18:11:12.834927   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 18:11:12.278464   73696 pod_ready.go:102] pod "metrics-server-569cc877fc-64hp4" in "kube-system" namespace has status "Ready":"False"
	I0731 18:11:14.279031   73696 pod_ready.go:102] pod "metrics-server-569cc877fc-64hp4" in "kube-system" namespace has status "Ready":"False"
	I0731 18:11:12.643960   73479 pod_ready.go:102] pod "metrics-server-78fcd8795b-27pkr" in "kube-system" namespace has status "Ready":"False"
	I0731 18:11:15.142811   73479 pod_ready.go:102] pod "metrics-server-78fcd8795b-27pkr" in "kube-system" namespace has status "Ready":"False"
	I0731 18:11:14.771367   73800 pod_ready.go:102] pod "metrics-server-569cc877fc-fzxrw" in "kube-system" namespace has status "Ready":"False"
	I0731 18:11:16.772010   73800 pod_ready.go:102] pod "metrics-server-569cc877fc-fzxrw" in "kube-system" namespace has status "Ready":"False"
	I0731 18:11:19.271857   73800 pod_ready.go:102] pod "metrics-server-569cc877fc-fzxrw" in "kube-system" namespace has status "Ready":"False"
	I0731 18:11:15.372797   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:11:15.386268   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 18:11:15.386356   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 18:11:15.420446   74203 cri.go:89] found id: ""
	I0731 18:11:15.420477   74203 logs.go:276] 0 containers: []
	W0731 18:11:15.420488   74203 logs.go:278] No container was found matching "kube-apiserver"
	I0731 18:11:15.420497   74203 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 18:11:15.420556   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 18:11:15.456092   74203 cri.go:89] found id: ""
	I0731 18:11:15.456118   74203 logs.go:276] 0 containers: []
	W0731 18:11:15.456129   74203 logs.go:278] No container was found matching "etcd"
	I0731 18:11:15.456136   74203 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 18:11:15.456194   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 18:11:15.488277   74203 cri.go:89] found id: ""
	I0731 18:11:15.488304   74203 logs.go:276] 0 containers: []
	W0731 18:11:15.488316   74203 logs.go:278] No container was found matching "coredns"
	I0731 18:11:15.488323   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 18:11:15.488384   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 18:11:15.520701   74203 cri.go:89] found id: ""
	I0731 18:11:15.520730   74203 logs.go:276] 0 containers: []
	W0731 18:11:15.520741   74203 logs.go:278] No container was found matching "kube-scheduler"
	I0731 18:11:15.520749   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 18:11:15.520818   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 18:11:15.552831   74203 cri.go:89] found id: ""
	I0731 18:11:15.552854   74203 logs.go:276] 0 containers: []
	W0731 18:11:15.552862   74203 logs.go:278] No container was found matching "kube-proxy"
	I0731 18:11:15.552867   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 18:11:15.552920   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 18:11:15.589161   74203 cri.go:89] found id: ""
	I0731 18:11:15.589191   74203 logs.go:276] 0 containers: []
	W0731 18:11:15.589203   74203 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 18:11:15.589210   74203 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 18:11:15.589274   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 18:11:15.622501   74203 cri.go:89] found id: ""
	I0731 18:11:15.622532   74203 logs.go:276] 0 containers: []
	W0731 18:11:15.622544   74203 logs.go:278] No container was found matching "kindnet"
	I0731 18:11:15.622552   74203 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 18:11:15.622611   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 18:11:15.654772   74203 cri.go:89] found id: ""
	I0731 18:11:15.654801   74203 logs.go:276] 0 containers: []
	W0731 18:11:15.654815   74203 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 18:11:15.654826   74203 logs.go:123] Gathering logs for kubelet ...
	I0731 18:11:15.654843   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 18:11:15.703103   74203 logs.go:123] Gathering logs for dmesg ...
	I0731 18:11:15.703148   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 18:11:15.716620   74203 logs.go:123] Gathering logs for describe nodes ...
	I0731 18:11:15.716645   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 18:11:15.783391   74203 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 18:11:15.783416   74203 logs.go:123] Gathering logs for CRI-O ...
	I0731 18:11:15.783431   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 18:11:15.857462   74203 logs.go:123] Gathering logs for container status ...
	I0731 18:11:15.857495   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 18:11:18.394223   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:11:18.407297   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 18:11:18.407374   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 18:11:18.439542   74203 cri.go:89] found id: ""
	I0731 18:11:18.439564   74203 logs.go:276] 0 containers: []
	W0731 18:11:18.439572   74203 logs.go:278] No container was found matching "kube-apiserver"
	I0731 18:11:18.439578   74203 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 18:11:18.439625   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 18:11:18.471838   74203 cri.go:89] found id: ""
	I0731 18:11:18.471863   74203 logs.go:276] 0 containers: []
	W0731 18:11:18.471873   74203 logs.go:278] No container was found matching "etcd"
	I0731 18:11:18.471883   74203 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 18:11:18.471943   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 18:11:18.505325   74203 cri.go:89] found id: ""
	I0731 18:11:18.505355   74203 logs.go:276] 0 containers: []
	W0731 18:11:18.505365   74203 logs.go:278] No container was found matching "coredns"
	I0731 18:11:18.505372   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 18:11:18.505432   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 18:11:18.536155   74203 cri.go:89] found id: ""
	I0731 18:11:18.536180   74203 logs.go:276] 0 containers: []
	W0731 18:11:18.536189   74203 logs.go:278] No container was found matching "kube-scheduler"
	I0731 18:11:18.536194   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 18:11:18.536241   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 18:11:18.569301   74203 cri.go:89] found id: ""
	I0731 18:11:18.569329   74203 logs.go:276] 0 containers: []
	W0731 18:11:18.569339   74203 logs.go:278] No container was found matching "kube-proxy"
	I0731 18:11:18.569344   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 18:11:18.569398   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 18:11:18.603053   74203 cri.go:89] found id: ""
	I0731 18:11:18.603079   74203 logs.go:276] 0 containers: []
	W0731 18:11:18.603087   74203 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 18:11:18.603092   74203 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 18:11:18.603170   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 18:11:18.636259   74203 cri.go:89] found id: ""
	I0731 18:11:18.636287   74203 logs.go:276] 0 containers: []
	W0731 18:11:18.636298   74203 logs.go:278] No container was found matching "kindnet"
	I0731 18:11:18.636305   74203 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 18:11:18.636361   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 18:11:18.667839   74203 cri.go:89] found id: ""
	I0731 18:11:18.667863   74203 logs.go:276] 0 containers: []
	W0731 18:11:18.667873   74203 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 18:11:18.667883   74203 logs.go:123] Gathering logs for dmesg ...
	I0731 18:11:18.667897   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 18:11:18.681005   74203 logs.go:123] Gathering logs for describe nodes ...
	I0731 18:11:18.681030   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 18:11:18.747793   74203 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 18:11:18.747875   74203 logs.go:123] Gathering logs for CRI-O ...
	I0731 18:11:18.747892   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 18:11:18.828970   74203 logs.go:123] Gathering logs for container status ...
	I0731 18:11:18.829005   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 18:11:18.866724   74203 logs.go:123] Gathering logs for kubelet ...
	I0731 18:11:18.866749   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 18:11:16.279368   73696 pod_ready.go:102] pod "metrics-server-569cc877fc-64hp4" in "kube-system" namespace has status "Ready":"False"
	I0731 18:11:18.778730   73696 pod_ready.go:102] pod "metrics-server-569cc877fc-64hp4" in "kube-system" namespace has status "Ready":"False"
	I0731 18:11:20.779465   73696 pod_ready.go:102] pod "metrics-server-569cc877fc-64hp4" in "kube-system" namespace has status "Ready":"False"
	I0731 18:11:17.144041   73479 pod_ready.go:102] pod "metrics-server-78fcd8795b-27pkr" in "kube-system" namespace has status "Ready":"False"
	I0731 18:11:19.645356   73479 pod_ready.go:102] pod "metrics-server-78fcd8795b-27pkr" in "kube-system" namespace has status "Ready":"False"
	I0731 18:11:21.272651   73800 pod_ready.go:102] pod "metrics-server-569cc877fc-fzxrw" in "kube-system" namespace has status "Ready":"False"
	I0731 18:11:23.771240   73800 pod_ready.go:102] pod "metrics-server-569cc877fc-fzxrw" in "kube-system" namespace has status "Ready":"False"
	I0731 18:11:21.416598   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:11:21.431968   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 18:11:21.432027   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 18:11:21.469670   74203 cri.go:89] found id: ""
	I0731 18:11:21.469696   74203 logs.go:276] 0 containers: []
	W0731 18:11:21.469703   74203 logs.go:278] No container was found matching "kube-apiserver"
	I0731 18:11:21.469709   74203 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 18:11:21.469762   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 18:11:21.508461   74203 cri.go:89] found id: ""
	I0731 18:11:21.508490   74203 logs.go:276] 0 containers: []
	W0731 18:11:21.508500   74203 logs.go:278] No container was found matching "etcd"
	I0731 18:11:21.508506   74203 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 18:11:21.508570   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 18:11:21.548101   74203 cri.go:89] found id: ""
	I0731 18:11:21.548127   74203 logs.go:276] 0 containers: []
	W0731 18:11:21.548136   74203 logs.go:278] No container was found matching "coredns"
	I0731 18:11:21.548142   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 18:11:21.548204   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 18:11:21.582617   74203 cri.go:89] found id: ""
	I0731 18:11:21.582646   74203 logs.go:276] 0 containers: []
	W0731 18:11:21.582653   74203 logs.go:278] No container was found matching "kube-scheduler"
	I0731 18:11:21.582659   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 18:11:21.582712   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 18:11:21.614185   74203 cri.go:89] found id: ""
	I0731 18:11:21.614210   74203 logs.go:276] 0 containers: []
	W0731 18:11:21.614218   74203 logs.go:278] No container was found matching "kube-proxy"
	I0731 18:11:21.614223   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 18:11:21.614278   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 18:11:21.647596   74203 cri.go:89] found id: ""
	I0731 18:11:21.647619   74203 logs.go:276] 0 containers: []
	W0731 18:11:21.647629   74203 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 18:11:21.647636   74203 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 18:11:21.647693   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 18:11:21.680106   74203 cri.go:89] found id: ""
	I0731 18:11:21.680132   74203 logs.go:276] 0 containers: []
	W0731 18:11:21.680142   74203 logs.go:278] No container was found matching "kindnet"
	I0731 18:11:21.680149   74203 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 18:11:21.680208   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 18:11:21.714708   74203 cri.go:89] found id: ""
	I0731 18:11:21.714733   74203 logs.go:276] 0 containers: []
	W0731 18:11:21.714742   74203 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 18:11:21.714754   74203 logs.go:123] Gathering logs for describe nodes ...
	I0731 18:11:21.714779   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 18:11:21.783425   74203 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 18:11:21.783448   74203 logs.go:123] Gathering logs for CRI-O ...
	I0731 18:11:21.783462   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 18:11:21.859943   74203 logs.go:123] Gathering logs for container status ...
	I0731 18:11:21.859980   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 18:11:21.898374   74203 logs.go:123] Gathering logs for kubelet ...
	I0731 18:11:21.898405   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 18:11:21.945753   74203 logs.go:123] Gathering logs for dmesg ...
	I0731 18:11:21.945784   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 18:11:24.459481   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:11:24.471376   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 18:11:24.471435   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 18:11:24.506474   74203 cri.go:89] found id: ""
	I0731 18:11:24.506502   74203 logs.go:276] 0 containers: []
	W0731 18:11:24.506511   74203 logs.go:278] No container was found matching "kube-apiserver"
	I0731 18:11:24.506516   74203 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 18:11:24.506572   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 18:11:24.547298   74203 cri.go:89] found id: ""
	I0731 18:11:24.547324   74203 logs.go:276] 0 containers: []
	W0731 18:11:24.547332   74203 logs.go:278] No container was found matching "etcd"
	I0731 18:11:24.547337   74203 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 18:11:24.547402   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 18:11:24.579912   74203 cri.go:89] found id: ""
	I0731 18:11:24.579944   74203 logs.go:276] 0 containers: []
	W0731 18:11:24.579955   74203 logs.go:278] No container was found matching "coredns"
	I0731 18:11:24.579963   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 18:11:24.580032   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 18:11:24.613754   74203 cri.go:89] found id: ""
	I0731 18:11:24.613782   74203 logs.go:276] 0 containers: []
	W0731 18:11:24.613791   74203 logs.go:278] No container was found matching "kube-scheduler"
	I0731 18:11:24.613799   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 18:11:24.613859   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 18:11:24.649782   74203 cri.go:89] found id: ""
	I0731 18:11:24.649811   74203 logs.go:276] 0 containers: []
	W0731 18:11:24.649822   74203 logs.go:278] No container was found matching "kube-proxy"
	I0731 18:11:24.649829   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 18:11:24.649888   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 18:11:24.689232   74203 cri.go:89] found id: ""
	I0731 18:11:24.689264   74203 logs.go:276] 0 containers: []
	W0731 18:11:24.689274   74203 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 18:11:24.689283   74203 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 18:11:24.689361   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 18:11:24.727861   74203 cri.go:89] found id: ""
	I0731 18:11:24.727894   74203 logs.go:276] 0 containers: []
	W0731 18:11:24.727902   74203 logs.go:278] No container was found matching "kindnet"
	I0731 18:11:24.727924   74203 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 18:11:24.727983   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 18:11:24.763839   74203 cri.go:89] found id: ""
	I0731 18:11:24.763866   74203 logs.go:276] 0 containers: []
	W0731 18:11:24.763876   74203 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 18:11:24.763886   74203 logs.go:123] Gathering logs for CRI-O ...
	I0731 18:11:24.763901   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 18:11:24.841090   74203 logs.go:123] Gathering logs for container status ...
	I0731 18:11:24.841131   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 18:11:24.877206   74203 logs.go:123] Gathering logs for kubelet ...
	I0731 18:11:24.877231   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 18:11:24.926149   74203 logs.go:123] Gathering logs for dmesg ...
	I0731 18:11:24.926180   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 18:11:24.938795   74203 logs.go:123] Gathering logs for describe nodes ...
	I0731 18:11:24.938822   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 18:11:25.008349   74203 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 18:11:23.279256   73696 pod_ready.go:102] pod "metrics-server-569cc877fc-64hp4" in "kube-system" namespace has status "Ready":"False"
	I0731 18:11:25.778644   73696 pod_ready.go:102] pod "metrics-server-569cc877fc-64hp4" in "kube-system" namespace has status "Ready":"False"
	I0731 18:11:22.143312   73479 pod_ready.go:102] pod "metrics-server-78fcd8795b-27pkr" in "kube-system" namespace has status "Ready":"False"
	I0731 18:11:24.144259   73479 pod_ready.go:102] pod "metrics-server-78fcd8795b-27pkr" in "kube-system" namespace has status "Ready":"False"
	I0731 18:11:26.144310   73479 pod_ready.go:102] pod "metrics-server-78fcd8795b-27pkr" in "kube-system" namespace has status "Ready":"False"
	I0731 18:11:25.771403   73800 pod_ready.go:102] pod "metrics-server-569cc877fc-fzxrw" in "kube-system" namespace has status "Ready":"False"
	I0731 18:11:28.270613   73800 pod_ready.go:102] pod "metrics-server-569cc877fc-fzxrw" in "kube-system" namespace has status "Ready":"False"
	I0731 18:11:27.509192   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:11:27.522506   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 18:11:27.522582   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 18:11:27.557915   74203 cri.go:89] found id: ""
	I0731 18:11:27.557943   74203 logs.go:276] 0 containers: []
	W0731 18:11:27.557954   74203 logs.go:278] No container was found matching "kube-apiserver"
	I0731 18:11:27.557962   74203 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 18:11:27.558019   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 18:11:27.594295   74203 cri.go:89] found id: ""
	I0731 18:11:27.594322   74203 logs.go:276] 0 containers: []
	W0731 18:11:27.594332   74203 logs.go:278] No container was found matching "etcd"
	I0731 18:11:27.594348   74203 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 18:11:27.594410   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 18:11:27.626830   74203 cri.go:89] found id: ""
	I0731 18:11:27.626857   74203 logs.go:276] 0 containers: []
	W0731 18:11:27.626868   74203 logs.go:278] No container was found matching "coredns"
	I0731 18:11:27.626875   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 18:11:27.626964   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 18:11:27.662062   74203 cri.go:89] found id: ""
	I0731 18:11:27.662084   74203 logs.go:276] 0 containers: []
	W0731 18:11:27.662092   74203 logs.go:278] No container was found matching "kube-scheduler"
	I0731 18:11:27.662099   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 18:11:27.662158   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 18:11:27.695686   74203 cri.go:89] found id: ""
	I0731 18:11:27.695715   74203 logs.go:276] 0 containers: []
	W0731 18:11:27.695727   74203 logs.go:278] No container was found matching "kube-proxy"
	I0731 18:11:27.695735   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 18:11:27.695785   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 18:11:27.729444   74203 cri.go:89] found id: ""
	I0731 18:11:27.729467   74203 logs.go:276] 0 containers: []
	W0731 18:11:27.729475   74203 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 18:11:27.729481   74203 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 18:11:27.729531   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 18:11:27.761889   74203 cri.go:89] found id: ""
	I0731 18:11:27.761916   74203 logs.go:276] 0 containers: []
	W0731 18:11:27.761926   74203 logs.go:278] No container was found matching "kindnet"
	I0731 18:11:27.761934   74203 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 18:11:27.761995   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 18:11:27.796178   74203 cri.go:89] found id: ""
	I0731 18:11:27.796199   74203 logs.go:276] 0 containers: []
	W0731 18:11:27.796206   74203 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 18:11:27.796214   74203 logs.go:123] Gathering logs for kubelet ...
	I0731 18:11:27.796227   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 18:11:27.849613   74203 logs.go:123] Gathering logs for dmesg ...
	I0731 18:11:27.849650   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 18:11:27.862892   74203 logs.go:123] Gathering logs for describe nodes ...
	I0731 18:11:27.862923   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 18:11:27.928691   74203 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 18:11:27.928717   74203 logs.go:123] Gathering logs for CRI-O ...
	I0731 18:11:27.928740   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 18:11:28.006310   74203 logs.go:123] Gathering logs for container status ...
	I0731 18:11:28.006340   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 18:11:27.779125   73696 pod_ready.go:102] pod "metrics-server-569cc877fc-64hp4" in "kube-system" namespace has status "Ready":"False"
	I0731 18:11:30.279252   73696 pod_ready.go:102] pod "metrics-server-569cc877fc-64hp4" in "kube-system" namespace has status "Ready":"False"
	I0731 18:11:28.643172   73479 pod_ready.go:102] pod "metrics-server-78fcd8795b-27pkr" in "kube-system" namespace has status "Ready":"False"
	I0731 18:11:30.645474   73479 pod_ready.go:102] pod "metrics-server-78fcd8795b-27pkr" in "kube-system" namespace has status "Ready":"False"
	I0731 18:11:30.271016   73800 pod_ready.go:102] pod "metrics-server-569cc877fc-fzxrw" in "kube-system" namespace has status "Ready":"False"
	I0731 18:11:32.771684   73800 pod_ready.go:102] pod "metrics-server-569cc877fc-fzxrw" in "kube-system" namespace has status "Ready":"False"
	I0731 18:11:30.543065   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:11:30.555951   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 18:11:30.556013   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 18:11:30.597411   74203 cri.go:89] found id: ""
	I0731 18:11:30.597440   74203 logs.go:276] 0 containers: []
	W0731 18:11:30.597451   74203 logs.go:278] No container was found matching "kube-apiserver"
	I0731 18:11:30.597458   74203 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 18:11:30.597518   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 18:11:30.629836   74203 cri.go:89] found id: ""
	I0731 18:11:30.629866   74203 logs.go:276] 0 containers: []
	W0731 18:11:30.629873   74203 logs.go:278] No container was found matching "etcd"
	I0731 18:11:30.629878   74203 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 18:11:30.629932   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 18:11:30.667402   74203 cri.go:89] found id: ""
	I0731 18:11:30.667432   74203 logs.go:276] 0 containers: []
	W0731 18:11:30.667443   74203 logs.go:278] No container was found matching "coredns"
	I0731 18:11:30.667450   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 18:11:30.667513   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 18:11:30.701677   74203 cri.go:89] found id: ""
	I0731 18:11:30.701708   74203 logs.go:276] 0 containers: []
	W0731 18:11:30.701716   74203 logs.go:278] No container was found matching "kube-scheduler"
	I0731 18:11:30.701722   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 18:11:30.701773   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 18:11:30.736685   74203 cri.go:89] found id: ""
	I0731 18:11:30.736714   74203 logs.go:276] 0 containers: []
	W0731 18:11:30.736721   74203 logs.go:278] No container was found matching "kube-proxy"
	I0731 18:11:30.736736   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 18:11:30.736786   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 18:11:30.771501   74203 cri.go:89] found id: ""
	I0731 18:11:30.771526   74203 logs.go:276] 0 containers: []
	W0731 18:11:30.771543   74203 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 18:11:30.771549   74203 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 18:11:30.771597   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 18:11:30.805878   74203 cri.go:89] found id: ""
	I0731 18:11:30.805902   74203 logs.go:276] 0 containers: []
	W0731 18:11:30.805911   74203 logs.go:278] No container was found matching "kindnet"
	I0731 18:11:30.805921   74203 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 18:11:30.805966   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 18:11:30.839001   74203 cri.go:89] found id: ""
	I0731 18:11:30.839027   74203 logs.go:276] 0 containers: []
	W0731 18:11:30.839038   74203 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 18:11:30.839048   74203 logs.go:123] Gathering logs for kubelet ...
	I0731 18:11:30.839062   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 18:11:30.893357   74203 logs.go:123] Gathering logs for dmesg ...
	I0731 18:11:30.893387   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 18:11:30.907222   74203 logs.go:123] Gathering logs for describe nodes ...
	I0731 18:11:30.907248   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 18:11:30.985626   74203 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 18:11:30.985648   74203 logs.go:123] Gathering logs for CRI-O ...
	I0731 18:11:30.985668   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 18:11:31.067900   74203 logs.go:123] Gathering logs for container status ...
	I0731 18:11:31.067948   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 18:11:33.607259   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:11:33.621596   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 18:11:33.621656   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 18:11:33.663616   74203 cri.go:89] found id: ""
	I0731 18:11:33.663642   74203 logs.go:276] 0 containers: []
	W0731 18:11:33.663649   74203 logs.go:278] No container was found matching "kube-apiserver"
	I0731 18:11:33.663655   74203 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 18:11:33.663704   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 18:11:33.702133   74203 cri.go:89] found id: ""
	I0731 18:11:33.702159   74203 logs.go:276] 0 containers: []
	W0731 18:11:33.702167   74203 logs.go:278] No container was found matching "etcd"
	I0731 18:11:33.702173   74203 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 18:11:33.702226   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 18:11:33.733730   74203 cri.go:89] found id: ""
	I0731 18:11:33.733752   74203 logs.go:276] 0 containers: []
	W0731 18:11:33.733760   74203 logs.go:278] No container was found matching "coredns"
	I0731 18:11:33.733765   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 18:11:33.733813   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 18:11:33.765036   74203 cri.go:89] found id: ""
	I0731 18:11:33.765064   74203 logs.go:276] 0 containers: []
	W0731 18:11:33.765074   74203 logs.go:278] No container was found matching "kube-scheduler"
	I0731 18:11:33.765080   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 18:11:33.765128   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 18:11:33.799604   74203 cri.go:89] found id: ""
	I0731 18:11:33.799630   74203 logs.go:276] 0 containers: []
	W0731 18:11:33.799640   74203 logs.go:278] No container was found matching "kube-proxy"
	I0731 18:11:33.799648   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 18:11:33.799716   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 18:11:33.831434   74203 cri.go:89] found id: ""
	I0731 18:11:33.831455   74203 logs.go:276] 0 containers: []
	W0731 18:11:33.831464   74203 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 18:11:33.831469   74203 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 18:11:33.831514   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 18:11:33.862975   74203 cri.go:89] found id: ""
	I0731 18:11:33.863004   74203 logs.go:276] 0 containers: []
	W0731 18:11:33.863014   74203 logs.go:278] No container was found matching "kindnet"
	I0731 18:11:33.863022   74203 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 18:11:33.863090   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 18:11:33.895674   74203 cri.go:89] found id: ""
	I0731 18:11:33.895704   74203 logs.go:276] 0 containers: []
	W0731 18:11:33.895714   74203 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 18:11:33.895723   74203 logs.go:123] Gathering logs for container status ...
	I0731 18:11:33.895737   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 18:11:33.931954   74203 logs.go:123] Gathering logs for kubelet ...
	I0731 18:11:33.931980   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 18:11:33.985353   74203 logs.go:123] Gathering logs for dmesg ...
	I0731 18:11:33.985385   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 18:11:33.997857   74203 logs.go:123] Gathering logs for describe nodes ...
	I0731 18:11:33.997882   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 18:11:34.060523   74203 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 18:11:34.060553   74203 logs.go:123] Gathering logs for CRI-O ...
	I0731 18:11:34.060575   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 18:11:32.778212   73696 pod_ready.go:102] pod "metrics-server-569cc877fc-64hp4" in "kube-system" namespace has status "Ready":"False"
	I0731 18:11:35.278655   73696 pod_ready.go:102] pod "metrics-server-569cc877fc-64hp4" in "kube-system" namespace has status "Ready":"False"
	I0731 18:11:33.151579   73479 pod_ready.go:102] pod "metrics-server-78fcd8795b-27pkr" in "kube-system" namespace has status "Ready":"False"
	I0731 18:11:35.643326   73479 pod_ready.go:102] pod "metrics-server-78fcd8795b-27pkr" in "kube-system" namespace has status "Ready":"False"
	I0731 18:11:34.771873   73800 pod_ready.go:102] pod "metrics-server-569cc877fc-fzxrw" in "kube-system" namespace has status "Ready":"False"
	I0731 18:11:36.772309   73800 pod_ready.go:102] pod "metrics-server-569cc877fc-fzxrw" in "kube-system" namespace has status "Ready":"False"
	I0731 18:11:39.271582   73800 pod_ready.go:102] pod "metrics-server-569cc877fc-fzxrw" in "kube-system" namespace has status "Ready":"False"
	I0731 18:11:36.643003   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:11:36.659306   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 18:11:36.659385   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 18:11:36.717097   74203 cri.go:89] found id: ""
	I0731 18:11:36.717129   74203 logs.go:276] 0 containers: []
	W0731 18:11:36.717141   74203 logs.go:278] No container was found matching "kube-apiserver"
	I0731 18:11:36.717149   74203 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 18:11:36.717212   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 18:11:36.750288   74203 cri.go:89] found id: ""
	I0731 18:11:36.750314   74203 logs.go:276] 0 containers: []
	W0731 18:11:36.750325   74203 logs.go:278] No container was found matching "etcd"
	I0731 18:11:36.750331   74203 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 18:11:36.750391   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 18:11:36.785272   74203 cri.go:89] found id: ""
	I0731 18:11:36.785296   74203 logs.go:276] 0 containers: []
	W0731 18:11:36.785304   74203 logs.go:278] No container was found matching "coredns"
	I0731 18:11:36.785310   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 18:11:36.785360   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 18:11:36.818927   74203 cri.go:89] found id: ""
	I0731 18:11:36.818953   74203 logs.go:276] 0 containers: []
	W0731 18:11:36.818965   74203 logs.go:278] No container was found matching "kube-scheduler"
	I0731 18:11:36.818972   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 18:11:36.819020   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 18:11:36.854562   74203 cri.go:89] found id: ""
	I0731 18:11:36.854593   74203 logs.go:276] 0 containers: []
	W0731 18:11:36.854602   74203 logs.go:278] No container was found matching "kube-proxy"
	I0731 18:11:36.854607   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 18:11:36.854670   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 18:11:36.887786   74203 cri.go:89] found id: ""
	I0731 18:11:36.887814   74203 logs.go:276] 0 containers: []
	W0731 18:11:36.887825   74203 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 18:11:36.887833   74203 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 18:11:36.887893   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 18:11:36.919418   74203 cri.go:89] found id: ""
	I0731 18:11:36.919446   74203 logs.go:276] 0 containers: []
	W0731 18:11:36.919457   74203 logs.go:278] No container was found matching "kindnet"
	I0731 18:11:36.919471   74203 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 18:11:36.919533   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 18:11:36.956934   74203 cri.go:89] found id: ""
	I0731 18:11:36.956957   74203 logs.go:276] 0 containers: []
	W0731 18:11:36.956964   74203 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 18:11:36.956971   74203 logs.go:123] Gathering logs for kubelet ...
	I0731 18:11:36.956989   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 18:11:37.003755   74203 logs.go:123] Gathering logs for dmesg ...
	I0731 18:11:37.003783   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 18:11:37.016977   74203 logs.go:123] Gathering logs for describe nodes ...
	I0731 18:11:37.017004   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 18:11:37.091617   74203 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 18:11:37.091646   74203 logs.go:123] Gathering logs for CRI-O ...
	I0731 18:11:37.091662   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 18:11:37.170870   74203 logs.go:123] Gathering logs for container status ...
	I0731 18:11:37.170903   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 18:11:39.714271   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:11:39.730306   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 18:11:39.730383   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 18:11:39.765368   74203 cri.go:89] found id: ""
	I0731 18:11:39.765399   74203 logs.go:276] 0 containers: []
	W0731 18:11:39.765407   74203 logs.go:278] No container was found matching "kube-apiserver"
	I0731 18:11:39.765412   74203 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 18:11:39.765471   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 18:11:39.800394   74203 cri.go:89] found id: ""
	I0731 18:11:39.800419   74203 logs.go:276] 0 containers: []
	W0731 18:11:39.800427   74203 logs.go:278] No container was found matching "etcd"
	I0731 18:11:39.800433   74203 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 18:11:39.800486   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 18:11:39.834861   74203 cri.go:89] found id: ""
	I0731 18:11:39.834889   74203 logs.go:276] 0 containers: []
	W0731 18:11:39.834898   74203 logs.go:278] No container was found matching "coredns"
	I0731 18:11:39.834903   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 18:11:39.834958   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 18:11:39.868108   74203 cri.go:89] found id: ""
	I0731 18:11:39.868132   74203 logs.go:276] 0 containers: []
	W0731 18:11:39.868141   74203 logs.go:278] No container was found matching "kube-scheduler"
	I0731 18:11:39.868146   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 18:11:39.868220   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 18:11:39.902097   74203 cri.go:89] found id: ""
	I0731 18:11:39.902120   74203 logs.go:276] 0 containers: []
	W0731 18:11:39.902128   74203 logs.go:278] No container was found matching "kube-proxy"
	I0731 18:11:39.902134   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 18:11:39.902184   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 18:11:39.933073   74203 cri.go:89] found id: ""
	I0731 18:11:39.933100   74203 logs.go:276] 0 containers: []
	W0731 18:11:39.933109   74203 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 18:11:39.933114   74203 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 18:11:39.933165   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 18:11:39.965748   74203 cri.go:89] found id: ""
	I0731 18:11:39.965775   74203 logs.go:276] 0 containers: []
	W0731 18:11:39.965785   74203 logs.go:278] No container was found matching "kindnet"
	I0731 18:11:39.965796   74203 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 18:11:39.965856   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 18:11:39.998164   74203 cri.go:89] found id: ""
	I0731 18:11:39.998189   74203 logs.go:276] 0 containers: []
	W0731 18:11:39.998197   74203 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 18:11:39.998205   74203 logs.go:123] Gathering logs for kubelet ...
	I0731 18:11:39.998222   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 18:11:40.049991   74203 logs.go:123] Gathering logs for dmesg ...
	I0731 18:11:40.050027   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 18:11:40.063676   74203 logs.go:123] Gathering logs for describe nodes ...
	I0731 18:11:40.063705   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 18:11:40.125855   74203 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 18:11:40.125880   74203 logs.go:123] Gathering logs for CRI-O ...
	I0731 18:11:40.125896   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 18:11:40.207937   74203 logs.go:123] Gathering logs for container status ...
	I0731 18:11:40.207970   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 18:11:37.778894   73696 pod_ready.go:102] pod "metrics-server-569cc877fc-64hp4" in "kube-system" namespace has status "Ready":"False"
	I0731 18:11:40.278489   73696 pod_ready.go:102] pod "metrics-server-569cc877fc-64hp4" in "kube-system" namespace has status "Ready":"False"
	I0731 18:11:37.643651   73479 pod_ready.go:102] pod "metrics-server-78fcd8795b-27pkr" in "kube-system" namespace has status "Ready":"False"
	I0731 18:11:40.144731   73479 pod_ready.go:102] pod "metrics-server-78fcd8795b-27pkr" in "kube-system" namespace has status "Ready":"False"
	I0731 18:11:41.271897   73800 pod_ready.go:102] pod "metrics-server-569cc877fc-fzxrw" in "kube-system" namespace has status "Ready":"False"
	I0731 18:11:43.771556   73800 pod_ready.go:102] pod "metrics-server-569cc877fc-fzxrw" in "kube-system" namespace has status "Ready":"False"
	I0731 18:11:42.746315   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:11:42.758998   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 18:11:42.759053   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 18:11:42.791921   74203 cri.go:89] found id: ""
	I0731 18:11:42.791946   74203 logs.go:276] 0 containers: []
	W0731 18:11:42.791954   74203 logs.go:278] No container was found matching "kube-apiserver"
	I0731 18:11:42.791958   74203 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 18:11:42.792004   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 18:11:42.822888   74203 cri.go:89] found id: ""
	I0731 18:11:42.822914   74203 logs.go:276] 0 containers: []
	W0731 18:11:42.822922   74203 logs.go:278] No container was found matching "etcd"
	I0731 18:11:42.822927   74203 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 18:11:42.822973   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 18:11:42.854516   74203 cri.go:89] found id: ""
	I0731 18:11:42.854545   74203 logs.go:276] 0 containers: []
	W0731 18:11:42.854564   74203 logs.go:278] No container was found matching "coredns"
	I0731 18:11:42.854574   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 18:11:42.854638   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 18:11:42.890933   74203 cri.go:89] found id: ""
	I0731 18:11:42.890955   74203 logs.go:276] 0 containers: []
	W0731 18:11:42.890963   74203 logs.go:278] No container was found matching "kube-scheduler"
	I0731 18:11:42.890968   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 18:11:42.891013   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 18:11:42.925170   74203 cri.go:89] found id: ""
	I0731 18:11:42.925196   74203 logs.go:276] 0 containers: []
	W0731 18:11:42.925206   74203 logs.go:278] No container was found matching "kube-proxy"
	I0731 18:11:42.925213   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 18:11:42.925273   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 18:11:42.959845   74203 cri.go:89] found id: ""
	I0731 18:11:42.959868   74203 logs.go:276] 0 containers: []
	W0731 18:11:42.959876   74203 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 18:11:42.959881   74203 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 18:11:42.959938   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 18:11:42.997305   74203 cri.go:89] found id: ""
	I0731 18:11:42.997346   74203 logs.go:276] 0 containers: []
	W0731 18:11:42.997358   74203 logs.go:278] No container was found matching "kindnet"
	I0731 18:11:42.997366   74203 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 18:11:42.997427   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 18:11:43.030663   74203 cri.go:89] found id: ""
	I0731 18:11:43.030690   74203 logs.go:276] 0 containers: []
	W0731 18:11:43.030700   74203 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 18:11:43.030711   74203 logs.go:123] Gathering logs for describe nodes ...
	I0731 18:11:43.030725   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 18:11:43.112280   74203 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 18:11:43.112303   74203 logs.go:123] Gathering logs for CRI-O ...
	I0731 18:11:43.112318   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 18:11:43.209002   74203 logs.go:123] Gathering logs for container status ...
	I0731 18:11:43.209035   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 18:11:43.249596   74203 logs.go:123] Gathering logs for kubelet ...
	I0731 18:11:43.249629   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 18:11:43.302419   74203 logs.go:123] Gathering logs for dmesg ...
	I0731 18:11:43.302449   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 18:11:42.278874   73696 pod_ready.go:102] pod "metrics-server-569cc877fc-64hp4" in "kube-system" namespace has status "Ready":"False"
	I0731 18:11:44.273355   73696 pod_ready.go:81] duration metric: took 4m0.000454583s for pod "metrics-server-569cc877fc-64hp4" in "kube-system" namespace to be "Ready" ...
	E0731 18:11:44.273380   73696 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-569cc877fc-64hp4" in "kube-system" namespace to be "Ready" (will not retry!)
	I0731 18:11:44.273399   73696 pod_ready.go:38] duration metric: took 4m8.019714552s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0731 18:11:44.273430   73696 kubeadm.go:597] duration metric: took 4m16.379038728s to restartPrimaryControlPlane
	W0731 18:11:44.273506   73696 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0731 18:11:44.273531   73696 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0731 18:11:42.643165   73479 pod_ready.go:102] pod "metrics-server-78fcd8795b-27pkr" in "kube-system" namespace has status "Ready":"False"
	I0731 18:11:44.644976   73479 pod_ready.go:102] pod "metrics-server-78fcd8795b-27pkr" in "kube-system" namespace has status "Ready":"False"
	I0731 18:11:46.271751   73800 pod_ready.go:102] pod "metrics-server-569cc877fc-fzxrw" in "kube-system" namespace has status "Ready":"False"
	I0731 18:11:48.771274   73800 pod_ready.go:102] pod "metrics-server-569cc877fc-fzxrw" in "kube-system" namespace has status "Ready":"False"
	I0731 18:11:45.816910   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:11:45.829909   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 18:11:45.829976   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 18:11:45.865534   74203 cri.go:89] found id: ""
	I0731 18:11:45.865561   74203 logs.go:276] 0 containers: []
	W0731 18:11:45.865572   74203 logs.go:278] No container was found matching "kube-apiserver"
	I0731 18:11:45.865584   74203 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 18:11:45.865646   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 18:11:45.901552   74203 cri.go:89] found id: ""
	I0731 18:11:45.901585   74203 logs.go:276] 0 containers: []
	W0731 18:11:45.901593   74203 logs.go:278] No container was found matching "etcd"
	I0731 18:11:45.901598   74203 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 18:11:45.901678   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 18:11:45.938790   74203 cri.go:89] found id: ""
	I0731 18:11:45.938820   74203 logs.go:276] 0 containers: []
	W0731 18:11:45.938842   74203 logs.go:278] No container was found matching "coredns"
	I0731 18:11:45.938859   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 18:11:45.938926   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 18:11:45.971502   74203 cri.go:89] found id: ""
	I0731 18:11:45.971534   74203 logs.go:276] 0 containers: []
	W0731 18:11:45.971546   74203 logs.go:278] No container was found matching "kube-scheduler"
	I0731 18:11:45.971553   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 18:11:45.971620   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 18:11:46.009281   74203 cri.go:89] found id: ""
	I0731 18:11:46.009316   74203 logs.go:276] 0 containers: []
	W0731 18:11:46.009327   74203 logs.go:278] No container was found matching "kube-proxy"
	I0731 18:11:46.009335   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 18:11:46.009399   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 18:11:46.042899   74203 cri.go:89] found id: ""
	I0731 18:11:46.042928   74203 logs.go:276] 0 containers: []
	W0731 18:11:46.042939   74203 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 18:11:46.042947   74203 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 18:11:46.043005   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 18:11:46.079982   74203 cri.go:89] found id: ""
	I0731 18:11:46.080013   74203 logs.go:276] 0 containers: []
	W0731 18:11:46.080024   74203 logs.go:278] No container was found matching "kindnet"
	I0731 18:11:46.080031   74203 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 18:11:46.080098   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 18:11:46.113136   74203 cri.go:89] found id: ""
	I0731 18:11:46.113168   74203 logs.go:276] 0 containers: []
	W0731 18:11:46.113179   74203 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 18:11:46.113191   74203 logs.go:123] Gathering logs for kubelet ...
	I0731 18:11:46.113206   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 18:11:46.165818   74203 logs.go:123] Gathering logs for dmesg ...
	I0731 18:11:46.165855   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 18:11:46.181058   74203 logs.go:123] Gathering logs for describe nodes ...
	I0731 18:11:46.181083   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 18:11:46.256805   74203 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 18:11:46.256826   74203 logs.go:123] Gathering logs for CRI-O ...
	I0731 18:11:46.256838   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 18:11:46.353045   74203 logs.go:123] Gathering logs for container status ...
	I0731 18:11:46.353093   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 18:11:48.894656   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:11:48.910648   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 18:11:48.910723   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 18:11:48.941080   74203 cri.go:89] found id: ""
	I0731 18:11:48.941103   74203 logs.go:276] 0 containers: []
	W0731 18:11:48.941111   74203 logs.go:278] No container was found matching "kube-apiserver"
	I0731 18:11:48.941117   74203 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 18:11:48.941164   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 18:11:48.972113   74203 cri.go:89] found id: ""
	I0731 18:11:48.972136   74203 logs.go:276] 0 containers: []
	W0731 18:11:48.972146   74203 logs.go:278] No container was found matching "etcd"
	I0731 18:11:48.972151   74203 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 18:11:48.972208   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 18:11:49.004521   74203 cri.go:89] found id: ""
	I0731 18:11:49.004547   74203 logs.go:276] 0 containers: []
	W0731 18:11:49.004557   74203 logs.go:278] No container was found matching "coredns"
	I0731 18:11:49.004571   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 18:11:49.004658   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 18:11:49.036600   74203 cri.go:89] found id: ""
	I0731 18:11:49.036622   74203 logs.go:276] 0 containers: []
	W0731 18:11:49.036629   74203 logs.go:278] No container was found matching "kube-scheduler"
	I0731 18:11:49.036635   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 18:11:49.036683   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 18:11:49.071397   74203 cri.go:89] found id: ""
	I0731 18:11:49.071426   74203 logs.go:276] 0 containers: []
	W0731 18:11:49.071436   74203 logs.go:278] No container was found matching "kube-proxy"
	I0731 18:11:49.071444   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 18:11:49.071501   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 18:11:49.108907   74203 cri.go:89] found id: ""
	I0731 18:11:49.108933   74203 logs.go:276] 0 containers: []
	W0731 18:11:49.108944   74203 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 18:11:49.108952   74203 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 18:11:49.109007   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 18:11:49.141808   74203 cri.go:89] found id: ""
	I0731 18:11:49.141834   74203 logs.go:276] 0 containers: []
	W0731 18:11:49.141844   74203 logs.go:278] No container was found matching "kindnet"
	I0731 18:11:49.141856   74203 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 18:11:49.141917   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 18:11:49.174063   74203 cri.go:89] found id: ""
	I0731 18:11:49.174087   74203 logs.go:276] 0 containers: []
	W0731 18:11:49.174095   74203 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 18:11:49.174104   74203 logs.go:123] Gathering logs for container status ...
	I0731 18:11:49.174116   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 18:11:49.212152   74203 logs.go:123] Gathering logs for kubelet ...
	I0731 18:11:49.212181   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 18:11:49.267297   74203 logs.go:123] Gathering logs for dmesg ...
	I0731 18:11:49.267324   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 18:11:49.281342   74203 logs.go:123] Gathering logs for describe nodes ...
	I0731 18:11:49.281365   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 18:11:49.349843   74203 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 18:11:49.349866   74203 logs.go:123] Gathering logs for CRI-O ...
	I0731 18:11:49.349882   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 18:11:47.144588   73479 pod_ready.go:102] pod "metrics-server-78fcd8795b-27pkr" in "kube-system" namespace has status "Ready":"False"
	I0731 18:11:49.644395   73479 pod_ready.go:102] pod "metrics-server-78fcd8795b-27pkr" in "kube-system" namespace has status "Ready":"False"
	I0731 18:11:51.271203   73800 pod_ready.go:102] pod "metrics-server-569cc877fc-fzxrw" in "kube-system" namespace has status "Ready":"False"
	I0731 18:11:53.770849   73800 pod_ready.go:102] pod "metrics-server-569cc877fc-fzxrw" in "kube-system" namespace has status "Ready":"False"
	I0731 18:11:51.927764   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:11:51.940480   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 18:11:51.940539   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 18:11:51.973731   74203 cri.go:89] found id: ""
	I0731 18:11:51.973759   74203 logs.go:276] 0 containers: []
	W0731 18:11:51.973768   74203 logs.go:278] No container was found matching "kube-apiserver"
	I0731 18:11:51.973780   74203 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 18:11:51.973837   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 18:11:52.003761   74203 cri.go:89] found id: ""
	I0731 18:11:52.003783   74203 logs.go:276] 0 containers: []
	W0731 18:11:52.003790   74203 logs.go:278] No container was found matching "etcd"
	I0731 18:11:52.003795   74203 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 18:11:52.003844   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 18:11:52.035009   74203 cri.go:89] found id: ""
	I0731 18:11:52.035028   74203 logs.go:276] 0 containers: []
	W0731 18:11:52.035035   74203 logs.go:278] No container was found matching "coredns"
	I0731 18:11:52.035041   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 18:11:52.035100   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 18:11:52.065475   74203 cri.go:89] found id: ""
	I0731 18:11:52.065501   74203 logs.go:276] 0 containers: []
	W0731 18:11:52.065509   74203 logs.go:278] No container was found matching "kube-scheduler"
	I0731 18:11:52.065515   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 18:11:52.065574   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 18:11:52.097529   74203 cri.go:89] found id: ""
	I0731 18:11:52.097558   74203 logs.go:276] 0 containers: []
	W0731 18:11:52.097567   74203 logs.go:278] No container was found matching "kube-proxy"
	I0731 18:11:52.097573   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 18:11:52.097622   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 18:11:52.128881   74203 cri.go:89] found id: ""
	I0731 18:11:52.128909   74203 logs.go:276] 0 containers: []
	W0731 18:11:52.128917   74203 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 18:11:52.128923   74203 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 18:11:52.128974   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 18:11:52.159894   74203 cri.go:89] found id: ""
	I0731 18:11:52.159921   74203 logs.go:276] 0 containers: []
	W0731 18:11:52.159931   74203 logs.go:278] No container was found matching "kindnet"
	I0731 18:11:52.159939   74203 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 18:11:52.159998   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 18:11:52.191955   74203 cri.go:89] found id: ""
	I0731 18:11:52.191981   74203 logs.go:276] 0 containers: []
	W0731 18:11:52.191990   74203 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 18:11:52.191999   74203 logs.go:123] Gathering logs for kubelet ...
	I0731 18:11:52.192009   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 18:11:52.246389   74203 logs.go:123] Gathering logs for dmesg ...
	I0731 18:11:52.246423   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 18:11:52.260226   74203 logs.go:123] Gathering logs for describe nodes ...
	I0731 18:11:52.260253   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 18:11:52.328423   74203 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 18:11:52.328447   74203 logs.go:123] Gathering logs for CRI-O ...
	I0731 18:11:52.328459   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 18:11:52.408456   74203 logs.go:123] Gathering logs for container status ...
	I0731 18:11:52.408495   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 18:11:54.947734   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:11:54.960359   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 18:11:54.960420   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 18:11:54.994231   74203 cri.go:89] found id: ""
	I0731 18:11:54.994256   74203 logs.go:276] 0 containers: []
	W0731 18:11:54.994264   74203 logs.go:278] No container was found matching "kube-apiserver"
	I0731 18:11:54.994270   74203 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 18:11:54.994332   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 18:11:55.027323   74203 cri.go:89] found id: ""
	I0731 18:11:55.027364   74203 logs.go:276] 0 containers: []
	W0731 18:11:55.027374   74203 logs.go:278] No container was found matching "etcd"
	I0731 18:11:55.027382   74203 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 18:11:55.027440   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 18:11:55.061741   74203 cri.go:89] found id: ""
	I0731 18:11:55.061763   74203 logs.go:276] 0 containers: []
	W0731 18:11:55.061771   74203 logs.go:278] No container was found matching "coredns"
	I0731 18:11:55.061776   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 18:11:55.061822   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 18:11:55.100685   74203 cri.go:89] found id: ""
	I0731 18:11:55.100712   74203 logs.go:276] 0 containers: []
	W0731 18:11:55.100720   74203 logs.go:278] No container was found matching "kube-scheduler"
	I0731 18:11:55.100726   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 18:11:55.100780   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 18:11:55.141917   74203 cri.go:89] found id: ""
	I0731 18:11:55.141958   74203 logs.go:276] 0 containers: []
	W0731 18:11:55.141971   74203 logs.go:278] No container was found matching "kube-proxy"
	I0731 18:11:55.141980   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 18:11:55.142054   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 18:11:55.176669   74203 cri.go:89] found id: ""
	I0731 18:11:55.176702   74203 logs.go:276] 0 containers: []
	W0731 18:11:55.176711   74203 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 18:11:55.176718   74203 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 18:11:55.176780   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 18:11:55.209795   74203 cri.go:89] found id: ""
	I0731 18:11:55.209829   74203 logs.go:276] 0 containers: []
	W0731 18:11:55.209842   74203 logs.go:278] No container was found matching "kindnet"
	I0731 18:11:55.209850   74203 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 18:11:55.209915   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 18:11:55.244503   74203 cri.go:89] found id: ""
	I0731 18:11:55.244527   74203 logs.go:276] 0 containers: []
	W0731 18:11:55.244537   74203 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 18:11:55.244556   74203 logs.go:123] Gathering logs for CRI-O ...
	I0731 18:11:55.244572   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 18:11:55.320033   74203 logs.go:123] Gathering logs for container status ...
	I0731 18:11:55.320071   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 18:11:52.143803   73479 pod_ready.go:102] pod "metrics-server-78fcd8795b-27pkr" in "kube-system" namespace has status "Ready":"False"
	I0731 18:11:54.644223   73479 pod_ready.go:102] pod "metrics-server-78fcd8795b-27pkr" in "kube-system" namespace has status "Ready":"False"
	I0731 18:11:56.273321   73800 pod_ready.go:102] pod "metrics-server-569cc877fc-fzxrw" in "kube-system" namespace has status "Ready":"False"
	I0731 18:11:58.772541   73800 pod_ready.go:102] pod "metrics-server-569cc877fc-fzxrw" in "kube-system" namespace has status "Ready":"False"
	I0731 18:11:55.357684   74203 logs.go:123] Gathering logs for kubelet ...
	I0731 18:11:55.357719   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 18:11:55.411465   74203 logs.go:123] Gathering logs for dmesg ...
	I0731 18:11:55.411501   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 18:11:55.423802   74203 logs.go:123] Gathering logs for describe nodes ...
	I0731 18:11:55.423833   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 18:11:55.487945   74203 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 18:11:57.988078   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:11:58.001639   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 18:11:58.001724   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 18:11:58.036075   74203 cri.go:89] found id: ""
	I0731 18:11:58.036099   74203 logs.go:276] 0 containers: []
	W0731 18:11:58.036107   74203 logs.go:278] No container was found matching "kube-apiserver"
	I0731 18:11:58.036112   74203 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 18:11:58.036163   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 18:11:58.067316   74203 cri.go:89] found id: ""
	I0731 18:11:58.067340   74203 logs.go:276] 0 containers: []
	W0731 18:11:58.067348   74203 logs.go:278] No container was found matching "etcd"
	I0731 18:11:58.067353   74203 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 18:11:58.067420   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 18:11:58.102446   74203 cri.go:89] found id: ""
	I0731 18:11:58.102470   74203 logs.go:276] 0 containers: []
	W0731 18:11:58.102479   74203 logs.go:278] No container was found matching "coredns"
	I0731 18:11:58.102485   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 18:11:58.102553   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 18:11:58.134924   74203 cri.go:89] found id: ""
	I0731 18:11:58.134949   74203 logs.go:276] 0 containers: []
	W0731 18:11:58.134957   74203 logs.go:278] No container was found matching "kube-scheduler"
	I0731 18:11:58.134963   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 18:11:58.135023   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 18:11:58.171589   74203 cri.go:89] found id: ""
	I0731 18:11:58.171611   74203 logs.go:276] 0 containers: []
	W0731 18:11:58.171620   74203 logs.go:278] No container was found matching "kube-proxy"
	I0731 18:11:58.171625   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 18:11:58.171673   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 18:11:58.203813   74203 cri.go:89] found id: ""
	I0731 18:11:58.203836   74203 logs.go:276] 0 containers: []
	W0731 18:11:58.203844   74203 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 18:11:58.203850   74203 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 18:11:58.203911   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 18:11:58.236251   74203 cri.go:89] found id: ""
	I0731 18:11:58.236277   74203 logs.go:276] 0 containers: []
	W0731 18:11:58.236288   74203 logs.go:278] No container was found matching "kindnet"
	I0731 18:11:58.236295   74203 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 18:11:58.236357   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 18:11:58.270595   74203 cri.go:89] found id: ""
	I0731 18:11:58.270624   74203 logs.go:276] 0 containers: []
	W0731 18:11:58.270636   74203 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 18:11:58.270647   74203 logs.go:123] Gathering logs for kubelet ...
	I0731 18:11:58.270662   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 18:11:58.321889   74203 logs.go:123] Gathering logs for dmesg ...
	I0731 18:11:58.321927   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 18:11:58.334529   74203 logs.go:123] Gathering logs for describe nodes ...
	I0731 18:11:58.334552   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 18:11:58.398489   74203 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 18:11:58.398515   74203 logs.go:123] Gathering logs for CRI-O ...
	I0731 18:11:58.398540   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 18:11:58.479657   74203 logs.go:123] Gathering logs for container status ...
	I0731 18:11:58.479695   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 18:11:57.143080   73479 pod_ready.go:102] pod "metrics-server-78fcd8795b-27pkr" in "kube-system" namespace has status "Ready":"False"
	I0731 18:11:59.144357   73479 pod_ready.go:102] pod "metrics-server-78fcd8795b-27pkr" in "kube-system" namespace has status "Ready":"False"
	I0731 18:12:01.643343   73479 pod_ready.go:102] pod "metrics-server-78fcd8795b-27pkr" in "kube-system" namespace has status "Ready":"False"
	I0731 18:12:01.266100   73800 pod_ready.go:81] duration metric: took 4m0.000711681s for pod "metrics-server-569cc877fc-fzxrw" in "kube-system" namespace to be "Ready" ...
	E0731 18:12:01.266123   73800 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-569cc877fc-fzxrw" in "kube-system" namespace to be "Ready" (will not retry!)
	I0731 18:12:01.266160   73800 pod_ready.go:38] duration metric: took 4m6.529342365s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0731 18:12:01.266205   73800 kubeadm.go:597] duration metric: took 4m13.643145888s to restartPrimaryControlPlane
	W0731 18:12:01.266270   73800 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0731 18:12:01.266297   73800 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0731 18:12:01.014684   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:12:01.027959   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 18:12:01.028026   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 18:12:01.065423   74203 cri.go:89] found id: ""
	I0731 18:12:01.065459   74203 logs.go:276] 0 containers: []
	W0731 18:12:01.065472   74203 logs.go:278] No container was found matching "kube-apiserver"
	I0731 18:12:01.065481   74203 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 18:12:01.065545   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 18:12:01.099519   74203 cri.go:89] found id: ""
	I0731 18:12:01.099549   74203 logs.go:276] 0 containers: []
	W0731 18:12:01.099561   74203 logs.go:278] No container was found matching "etcd"
	I0731 18:12:01.099568   74203 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 18:12:01.099630   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 18:12:01.131239   74203 cri.go:89] found id: ""
	I0731 18:12:01.131262   74203 logs.go:276] 0 containers: []
	W0731 18:12:01.131270   74203 logs.go:278] No container was found matching "coredns"
	I0731 18:12:01.131275   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 18:12:01.131321   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 18:12:01.163209   74203 cri.go:89] found id: ""
	I0731 18:12:01.163229   74203 logs.go:276] 0 containers: []
	W0731 18:12:01.163237   74203 logs.go:278] No container was found matching "kube-scheduler"
	I0731 18:12:01.163242   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 18:12:01.163295   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 18:12:01.201165   74203 cri.go:89] found id: ""
	I0731 18:12:01.201193   74203 logs.go:276] 0 containers: []
	W0731 18:12:01.201204   74203 logs.go:278] No container was found matching "kube-proxy"
	I0731 18:12:01.201217   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 18:12:01.201274   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 18:12:01.233310   74203 cri.go:89] found id: ""
	I0731 18:12:01.233334   74203 logs.go:276] 0 containers: []
	W0731 18:12:01.233342   74203 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 18:12:01.233348   74203 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 18:12:01.233415   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 18:12:01.263412   74203 cri.go:89] found id: ""
	I0731 18:12:01.263442   74203 logs.go:276] 0 containers: []
	W0731 18:12:01.263452   74203 logs.go:278] No container was found matching "kindnet"
	I0731 18:12:01.263459   74203 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 18:12:01.263521   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 18:12:01.296598   74203 cri.go:89] found id: ""
	I0731 18:12:01.296624   74203 logs.go:276] 0 containers: []
	W0731 18:12:01.296632   74203 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 18:12:01.296642   74203 logs.go:123] Gathering logs for describe nodes ...
	I0731 18:12:01.296656   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 18:12:01.372362   74203 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 18:12:01.372381   74203 logs.go:123] Gathering logs for CRI-O ...
	I0731 18:12:01.372395   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 18:12:01.461997   74203 logs.go:123] Gathering logs for container status ...
	I0731 18:12:01.462029   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 18:12:01.507610   74203 logs.go:123] Gathering logs for kubelet ...
	I0731 18:12:01.507636   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 18:12:01.558335   74203 logs.go:123] Gathering logs for dmesg ...
	I0731 18:12:01.558375   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 18:12:04.073333   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:12:04.091122   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 18:12:04.091205   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 18:12:04.130510   74203 cri.go:89] found id: ""
	I0731 18:12:04.130545   74203 logs.go:276] 0 containers: []
	W0731 18:12:04.130558   74203 logs.go:278] No container was found matching "kube-apiserver"
	I0731 18:12:04.130566   74203 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 18:12:04.130632   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 18:12:04.174749   74203 cri.go:89] found id: ""
	I0731 18:12:04.174775   74203 logs.go:276] 0 containers: []
	W0731 18:12:04.174785   74203 logs.go:278] No container was found matching "etcd"
	I0731 18:12:04.174792   74203 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 18:12:04.174846   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 18:12:04.212123   74203 cri.go:89] found id: ""
	I0731 18:12:04.212160   74203 logs.go:276] 0 containers: []
	W0731 18:12:04.212172   74203 logs.go:278] No container was found matching "coredns"
	I0731 18:12:04.212180   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 18:12:04.212254   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 18:12:04.251558   74203 cri.go:89] found id: ""
	I0731 18:12:04.251589   74203 logs.go:276] 0 containers: []
	W0731 18:12:04.251600   74203 logs.go:278] No container was found matching "kube-scheduler"
	I0731 18:12:04.251608   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 18:12:04.251671   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 18:12:04.284831   74203 cri.go:89] found id: ""
	I0731 18:12:04.284864   74203 logs.go:276] 0 containers: []
	W0731 18:12:04.284878   74203 logs.go:278] No container was found matching "kube-proxy"
	I0731 18:12:04.284886   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 18:12:04.284954   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 18:12:04.325076   74203 cri.go:89] found id: ""
	I0731 18:12:04.325115   74203 logs.go:276] 0 containers: []
	W0731 18:12:04.325126   74203 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 18:12:04.325135   74203 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 18:12:04.325195   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 18:12:04.370883   74203 cri.go:89] found id: ""
	I0731 18:12:04.370922   74203 logs.go:276] 0 containers: []
	W0731 18:12:04.370933   74203 logs.go:278] No container was found matching "kindnet"
	I0731 18:12:04.370940   74203 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 18:12:04.370999   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 18:12:04.410639   74203 cri.go:89] found id: ""
	I0731 18:12:04.410671   74203 logs.go:276] 0 containers: []
	W0731 18:12:04.410685   74203 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 18:12:04.410697   74203 logs.go:123] Gathering logs for kubelet ...
	I0731 18:12:04.410713   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 18:12:04.462988   74203 logs.go:123] Gathering logs for dmesg ...
	I0731 18:12:04.463023   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 18:12:04.479086   74203 logs.go:123] Gathering logs for describe nodes ...
	I0731 18:12:04.479123   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 18:12:04.544675   74203 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 18:12:04.544699   74203 logs.go:123] Gathering logs for CRI-O ...
	I0731 18:12:04.544712   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 18:12:04.633231   74203 logs.go:123] Gathering logs for container status ...
	I0731 18:12:04.633267   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 18:12:03.645118   73479 pod_ready.go:102] pod "metrics-server-78fcd8795b-27pkr" in "kube-system" namespace has status "Ready":"False"
	I0731 18:12:06.143865   73479 pod_ready.go:102] pod "metrics-server-78fcd8795b-27pkr" in "kube-system" namespace has status "Ready":"False"
	I0731 18:12:07.174252   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:12:07.187289   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 18:12:07.187393   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 18:12:07.220927   74203 cri.go:89] found id: ""
	I0731 18:12:07.220953   74203 logs.go:276] 0 containers: []
	W0731 18:12:07.220964   74203 logs.go:278] No container was found matching "kube-apiserver"
	I0731 18:12:07.220972   74203 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 18:12:07.221040   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 18:12:07.256817   74203 cri.go:89] found id: ""
	I0731 18:12:07.256849   74203 logs.go:276] 0 containers: []
	W0731 18:12:07.256861   74203 logs.go:278] No container was found matching "etcd"
	I0731 18:12:07.256870   74203 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 18:12:07.256935   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 18:12:07.290267   74203 cri.go:89] found id: ""
	I0731 18:12:07.290297   74203 logs.go:276] 0 containers: []
	W0731 18:12:07.290309   74203 logs.go:278] No container was found matching "coredns"
	I0731 18:12:07.290315   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 18:12:07.290373   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 18:12:07.330037   74203 cri.go:89] found id: ""
	I0731 18:12:07.330068   74203 logs.go:276] 0 containers: []
	W0731 18:12:07.330079   74203 logs.go:278] No container was found matching "kube-scheduler"
	I0731 18:12:07.330087   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 18:12:07.330143   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 18:12:07.366745   74203 cri.go:89] found id: ""
	I0731 18:12:07.366770   74203 logs.go:276] 0 containers: []
	W0731 18:12:07.366778   74203 logs.go:278] No container was found matching "kube-proxy"
	I0731 18:12:07.366783   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 18:12:07.366861   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 18:12:07.400608   74203 cri.go:89] found id: ""
	I0731 18:12:07.400637   74203 logs.go:276] 0 containers: []
	W0731 18:12:07.400648   74203 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 18:12:07.400661   74203 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 18:12:07.400727   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 18:12:07.434996   74203 cri.go:89] found id: ""
	I0731 18:12:07.435028   74203 logs.go:276] 0 containers: []
	W0731 18:12:07.435037   74203 logs.go:278] No container was found matching "kindnet"
	I0731 18:12:07.435044   74203 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 18:12:07.435130   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 18:12:07.474347   74203 cri.go:89] found id: ""
	I0731 18:12:07.474375   74203 logs.go:276] 0 containers: []
	W0731 18:12:07.474387   74203 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 18:12:07.474400   74203 logs.go:123] Gathering logs for CRI-O ...
	I0731 18:12:07.474415   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 18:12:07.549009   74203 logs.go:123] Gathering logs for container status ...
	I0731 18:12:07.549045   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 18:12:07.586710   74203 logs.go:123] Gathering logs for kubelet ...
	I0731 18:12:07.586736   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 18:12:07.640770   74203 logs.go:123] Gathering logs for dmesg ...
	I0731 18:12:07.640800   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 18:12:07.654380   74203 logs.go:123] Gathering logs for describe nodes ...
	I0731 18:12:07.654405   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 18:12:07.721479   74203 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 18:12:10.221837   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:12:10.235686   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 18:12:10.235746   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 18:12:10.268769   74203 cri.go:89] found id: ""
	I0731 18:12:10.268794   74203 logs.go:276] 0 containers: []
	W0731 18:12:10.268802   74203 logs.go:278] No container was found matching "kube-apiserver"
	I0731 18:12:10.268808   74203 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 18:12:10.268860   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 18:12:10.305229   74203 cri.go:89] found id: ""
	I0731 18:12:10.305264   74203 logs.go:276] 0 containers: []
	W0731 18:12:10.305277   74203 logs.go:278] No container was found matching "etcd"
	I0731 18:12:10.305290   74203 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 18:12:10.305353   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 18:12:10.337070   74203 cri.go:89] found id: ""
	I0731 18:12:10.337095   74203 logs.go:276] 0 containers: []
	W0731 18:12:10.337104   74203 logs.go:278] No container was found matching "coredns"
	I0731 18:12:10.337109   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 18:12:10.337155   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 18:12:08.643708   73479 pod_ready.go:102] pod "metrics-server-78fcd8795b-27pkr" in "kube-system" namespace has status "Ready":"False"
	I0731 18:12:10.645483   73479 pod_ready.go:102] pod "metrics-server-78fcd8795b-27pkr" in "kube-system" namespace has status "Ready":"False"
	I0731 18:12:10.372979   74203 cri.go:89] found id: ""
	I0731 18:12:10.373005   74203 logs.go:276] 0 containers: []
	W0731 18:12:10.373015   74203 logs.go:278] No container was found matching "kube-scheduler"
	I0731 18:12:10.373022   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 18:12:10.373079   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 18:12:10.407225   74203 cri.go:89] found id: ""
	I0731 18:12:10.407252   74203 logs.go:276] 0 containers: []
	W0731 18:12:10.407264   74203 logs.go:278] No container was found matching "kube-proxy"
	I0731 18:12:10.407270   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 18:12:10.407327   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 18:12:10.443338   74203 cri.go:89] found id: ""
	I0731 18:12:10.443366   74203 logs.go:276] 0 containers: []
	W0731 18:12:10.443377   74203 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 18:12:10.443385   74203 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 18:12:10.443474   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 18:12:10.477005   74203 cri.go:89] found id: ""
	I0731 18:12:10.477030   74203 logs.go:276] 0 containers: []
	W0731 18:12:10.477038   74203 logs.go:278] No container was found matching "kindnet"
	I0731 18:12:10.477043   74203 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 18:12:10.477092   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 18:12:10.509338   74203 cri.go:89] found id: ""
	I0731 18:12:10.509367   74203 logs.go:276] 0 containers: []
	W0731 18:12:10.509378   74203 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 18:12:10.509389   74203 logs.go:123] Gathering logs for kubelet ...
	I0731 18:12:10.509405   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 18:12:10.559604   74203 logs.go:123] Gathering logs for dmesg ...
	I0731 18:12:10.559639   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 18:12:10.572652   74203 logs.go:123] Gathering logs for describe nodes ...
	I0731 18:12:10.572682   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 18:12:10.642749   74203 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 18:12:10.642772   74203 logs.go:123] Gathering logs for CRI-O ...
	I0731 18:12:10.642789   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 18:12:10.728716   74203 logs.go:123] Gathering logs for container status ...
	I0731 18:12:10.728753   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 18:12:13.267783   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:12:13.282235   74203 kubeadm.go:597] duration metric: took 4m4.41837453s to restartPrimaryControlPlane
	W0731 18:12:13.282324   74203 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0731 18:12:13.282355   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0731 18:12:15.410363   73696 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (31.136815784s)
	I0731 18:12:15.410431   73696 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0731 18:12:15.426599   73696 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0731 18:12:15.435823   73696 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0731 18:12:15.444553   73696 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0731 18:12:15.444581   73696 kubeadm.go:157] found existing configuration files:
	
	I0731 18:12:15.444624   73696 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0731 18:12:15.453198   73696 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0731 18:12:15.453273   73696 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0731 18:12:15.461988   73696 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0731 18:12:15.470178   73696 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0731 18:12:15.470238   73696 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0731 18:12:15.478903   73696 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0731 18:12:15.487176   73696 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0731 18:12:15.487215   73696 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0731 18:12:15.496114   73696 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0731 18:12:15.504518   73696 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0731 18:12:15.504579   73696 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0731 18:12:15.513915   73696 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0731 18:12:15.563318   73696 kubeadm.go:310] [init] Using Kubernetes version: v1.30.3
	I0731 18:12:15.563381   73696 kubeadm.go:310] [preflight] Running pre-flight checks
	I0731 18:12:15.697426   73696 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0731 18:12:15.697574   73696 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0731 18:12:15.697688   73696 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0731 18:12:15.902621   73696 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0731 18:12:15.904763   73696 out.go:204]   - Generating certificates and keys ...
	I0731 18:12:15.904869   73696 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0731 18:12:15.904948   73696 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0731 18:12:15.905049   73696 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0731 18:12:15.905149   73696 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0731 18:12:15.905247   73696 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0731 18:12:15.905328   73696 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0731 18:12:15.905426   73696 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0731 18:12:15.905516   73696 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0731 18:12:15.905620   73696 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0731 18:12:15.905729   73696 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0731 18:12:15.905812   73696 kubeadm.go:310] [certs] Using the existing "sa" key
	I0731 18:12:15.905890   73696 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0731 18:12:16.011366   73696 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0731 18:12:16.171776   73696 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0731 18:12:16.404302   73696 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0731 18:12:16.559451   73696 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0731 18:12:16.686612   73696 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0731 18:12:16.687311   73696 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0731 18:12:16.689956   73696 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0731 18:12:13.142855   73479 pod_ready.go:102] pod "metrics-server-78fcd8795b-27pkr" in "kube-system" namespace has status "Ready":"False"
	I0731 18:12:15.144107   73479 pod_ready.go:102] pod "metrics-server-78fcd8795b-27pkr" in "kube-system" namespace has status "Ready":"False"
	I0731 18:12:16.959318   74203 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (3.676937263s)
	I0731 18:12:16.959425   74203 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0731 18:12:16.973440   74203 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0731 18:12:16.983482   74203 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0731 18:12:16.993930   74203 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0731 18:12:16.993951   74203 kubeadm.go:157] found existing configuration files:
	
	I0731 18:12:16.993993   74203 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0731 18:12:17.002713   74203 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0731 18:12:17.002771   74203 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0731 18:12:17.012107   74203 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0731 18:12:17.022548   74203 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0731 18:12:17.022604   74203 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0731 18:12:17.033569   74203 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0731 18:12:17.043338   74203 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0731 18:12:17.043391   74203 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0731 18:12:17.052064   74203 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0731 18:12:17.060785   74203 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0731 18:12:17.060850   74203 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0731 18:12:17.069499   74203 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0731 18:12:17.136512   74203 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0731 18:12:17.136579   74203 kubeadm.go:310] [preflight] Running pre-flight checks
	I0731 18:12:17.286224   74203 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0731 18:12:17.286383   74203 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0731 18:12:17.286506   74203 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0731 18:12:17.467092   74203 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0731 18:12:17.468918   74203 out.go:204]   - Generating certificates and keys ...
	I0731 18:12:17.469024   74203 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0731 18:12:17.469135   74203 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0731 18:12:17.469229   74203 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0731 18:12:17.469307   74203 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0731 18:12:17.469439   74203 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0731 18:12:17.469525   74203 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0731 18:12:17.469609   74203 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0731 18:12:17.470025   74203 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0731 18:12:17.470501   74203 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0731 18:12:17.470852   74203 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0731 18:12:17.470899   74203 kubeadm.go:310] [certs] Using the existing "sa" key
	I0731 18:12:17.470949   74203 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0731 18:12:17.673308   74203 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0731 18:12:17.922789   74203 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0731 18:12:18.391239   74203 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0731 18:12:18.464854   74203 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0731 18:12:18.480495   74203 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0731 18:12:18.480675   74203 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0731 18:12:18.480746   74203 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0731 18:12:18.632564   74203 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0731 18:12:18.635416   74203 out.go:204]   - Booting up control plane ...
	I0731 18:12:18.635542   74203 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0731 18:12:18.643338   74203 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0731 18:12:18.645881   74203 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0731 18:12:18.646898   74203 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0731 18:12:18.650052   74203 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0731 18:12:16.691876   73696 out.go:204]   - Booting up control plane ...
	I0731 18:12:16.691967   73696 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0731 18:12:16.692064   73696 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0731 18:12:16.692643   73696 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0731 18:12:16.713038   73696 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0731 18:12:16.713123   73696 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0731 18:12:16.713159   73696 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0731 18:12:16.855506   73696 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0731 18:12:16.855638   73696 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0731 18:12:17.856697   73696 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.001297342s
	I0731 18:12:17.856823   73696 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0731 18:12:17.144295   73479 pod_ready.go:102] pod "metrics-server-78fcd8795b-27pkr" in "kube-system" namespace has status "Ready":"False"
	I0731 18:12:19.644100   73479 pod_ready.go:102] pod "metrics-server-78fcd8795b-27pkr" in "kube-system" namespace has status "Ready":"False"
	I0731 18:12:21.644654   73479 pod_ready.go:102] pod "metrics-server-78fcd8795b-27pkr" in "kube-system" namespace has status "Ready":"False"
	I0731 18:12:22.358287   73696 kubeadm.go:310] [api-check] The API server is healthy after 4.501118217s
	I0731 18:12:22.370066   73696 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0731 18:12:22.382929   73696 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0731 18:12:22.402765   73696 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0731 18:12:22.403044   73696 kubeadm.go:310] [mark-control-plane] Marking the node default-k8s-diff-port-094310 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0731 18:12:22.419724   73696 kubeadm.go:310] [bootstrap-token] Using token: hduea8.ix2m91ewiu6okgi9
	I0731 18:12:22.421231   73696 out.go:204]   - Configuring RBAC rules ...
	I0731 18:12:22.421382   73696 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0731 18:12:22.426230   73696 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0731 18:12:22.434423   73696 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0731 18:12:22.437839   73696 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0731 18:12:22.449264   73696 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0731 18:12:22.452420   73696 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0731 18:12:22.764876   73696 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0731 18:12:23.216229   73696 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0731 18:12:23.765173   73696 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0731 18:12:23.766223   73696 kubeadm.go:310] 
	I0731 18:12:23.766311   73696 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0731 18:12:23.766356   73696 kubeadm.go:310] 
	I0731 18:12:23.766466   73696 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0731 18:12:23.766487   73696 kubeadm.go:310] 
	I0731 18:12:23.766521   73696 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0731 18:12:23.766641   73696 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0731 18:12:23.766726   73696 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0731 18:12:23.766741   73696 kubeadm.go:310] 
	I0731 18:12:23.766827   73696 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0731 18:12:23.766844   73696 kubeadm.go:310] 
	I0731 18:12:23.766899   73696 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0731 18:12:23.766910   73696 kubeadm.go:310] 
	I0731 18:12:23.766986   73696 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0731 18:12:23.767089   73696 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0731 18:12:23.767225   73696 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0731 18:12:23.767237   73696 kubeadm.go:310] 
	I0731 18:12:23.767310   73696 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0731 18:12:23.767401   73696 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0731 18:12:23.767411   73696 kubeadm.go:310] 
	I0731 18:12:23.767531   73696 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8444 --token hduea8.ix2m91ewiu6okgi9 \
	I0731 18:12:23.767662   73696 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:460b24b51d10a3760f2391542a8a89e401359854f437f922cbee3f2d8ed6c856 \
	I0731 18:12:23.767695   73696 kubeadm.go:310] 	--control-plane 
	I0731 18:12:23.767702   73696 kubeadm.go:310] 
	I0731 18:12:23.767773   73696 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0731 18:12:23.767782   73696 kubeadm.go:310] 
	I0731 18:12:23.767847   73696 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8444 --token hduea8.ix2m91ewiu6okgi9 \
	I0731 18:12:23.767930   73696 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:460b24b51d10a3760f2391542a8a89e401359854f437f922cbee3f2d8ed6c856 
	I0731 18:12:23.768912   73696 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0731 18:12:23.769058   73696 cni.go:84] Creating CNI manager for ""
	I0731 18:12:23.769073   73696 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0731 18:12:23.771596   73696 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0731 18:12:23.773122   73696 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0731 18:12:23.782944   73696 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0731 18:12:23.800254   73696 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0731 18:12:23.800383   73696 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 18:12:23.800398   73696 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes default-k8s-diff-port-094310 minikube.k8s.io/updated_at=2024_07_31T18_12_23_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=1d737dad7efa60c56d30434fcd857dd3b14c91d9 minikube.k8s.io/name=default-k8s-diff-port-094310 minikube.k8s.io/primary=true
	I0731 18:12:23.827190   73696 ops.go:34] apiserver oom_adj: -16
	I0731 18:12:23.990425   73696 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 18:12:24.490585   73696 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 18:12:24.991490   73696 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 18:12:25.490948   73696 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 18:12:25.991461   73696 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 18:12:23.645259   73479 pod_ready.go:102] pod "metrics-server-78fcd8795b-27pkr" in "kube-system" namespace has status "Ready":"False"
	I0731 18:12:26.144352   73479 pod_ready.go:102] pod "metrics-server-78fcd8795b-27pkr" in "kube-system" namespace has status "Ready":"False"
	I0731 18:12:26.491041   73696 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 18:12:26.990516   73696 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 18:12:27.491386   73696 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 18:12:27.991150   73696 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 18:12:28.490838   73696 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 18:12:28.991267   73696 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 18:12:29.490459   73696 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 18:12:29.990672   73696 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 18:12:30.491302   73696 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 18:12:30.990644   73696 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 18:12:28.644749   73479 pod_ready.go:102] pod "metrics-server-78fcd8795b-27pkr" in "kube-system" namespace has status "Ready":"False"
	I0731 18:12:31.143617   73479 pod_ready.go:102] pod "metrics-server-78fcd8795b-27pkr" in "kube-system" namespace has status "Ready":"False"
	I0731 18:12:32.532203   73800 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (31.265875459s)
	I0731 18:12:32.532286   73800 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0731 18:12:32.548139   73800 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0731 18:12:32.558049   73800 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0731 18:12:32.567036   73800 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0731 18:12:32.567060   73800 kubeadm.go:157] found existing configuration files:
	
	I0731 18:12:32.567133   73800 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0731 18:12:32.576069   73800 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0731 18:12:32.576124   73800 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0731 18:12:32.584762   73800 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0731 18:12:32.592927   73800 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0731 18:12:32.592980   73800 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0731 18:12:32.601309   73800 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0731 18:12:32.609478   73800 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0731 18:12:32.609525   73800 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0731 18:12:32.617980   73800 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0731 18:12:32.625943   73800 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0731 18:12:32.625978   73800 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0731 18:12:32.634091   73800 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0731 18:12:32.821569   73800 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0731 18:12:31.491226   73696 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 18:12:31.991099   73696 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 18:12:32.490751   73696 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 18:12:32.991252   73696 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 18:12:33.490564   73696 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 18:12:33.990977   73696 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 18:12:34.491037   73696 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 18:12:34.990696   73696 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 18:12:35.491381   73696 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 18:12:35.990793   73696 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 18:12:36.490926   73696 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 18:12:36.581312   73696 kubeadm.go:1113] duration metric: took 12.780981821s to wait for elevateKubeSystemPrivileges
	I0731 18:12:36.581370   73696 kubeadm.go:394] duration metric: took 5m8.741923744s to StartCluster
	I0731 18:12:36.581393   73696 settings.go:142] acquiring lock: {Name:mk8ac8269296bf0b887a00ff74caaad180005f89 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 18:12:36.581485   73696 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19349-8084/kubeconfig
	I0731 18:12:36.583690   73696 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19349-8084/kubeconfig: {Name:mkf2340bc1ada3c683f132aca37228d7588e1fdb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 18:12:36.583986   73696 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.72.197 Port:8444 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0731 18:12:36.585079   73696 config.go:182] Loaded profile config "default-k8s-diff-port-094310": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0731 18:12:36.585328   73696 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0731 18:12:36.585677   73696 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-094310"
	I0731 18:12:36.585686   73696 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-094310"
	I0731 18:12:36.585688   73696 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-094310"
	I0731 18:12:36.585705   73696 addons.go:234] Setting addon storage-provisioner=true in "default-k8s-diff-port-094310"
	W0731 18:12:36.585717   73696 addons.go:243] addon storage-provisioner should already be in state true
	I0731 18:12:36.585720   73696 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-094310"
	I0731 18:12:36.585732   73696 addons.go:234] Setting addon metrics-server=true in "default-k8s-diff-port-094310"
	W0731 18:12:36.585740   73696 addons.go:243] addon metrics-server should already be in state true
	I0731 18:12:36.585752   73696 host.go:66] Checking if "default-k8s-diff-port-094310" exists ...
	I0731 18:12:36.585766   73696 host.go:66] Checking if "default-k8s-diff-port-094310" exists ...
	I0731 18:12:36.586152   73696 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 18:12:36.586166   73696 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 18:12:36.586166   73696 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 18:12:36.586174   73696 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 18:12:36.586180   73696 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 18:12:36.586188   73696 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 18:12:36.586456   73696 out.go:177] * Verifying Kubernetes components...
	I0731 18:12:36.588174   73696 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0731 18:12:36.605611   73696 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38873
	I0731 18:12:36.605856   73696 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44661
	I0731 18:12:36.606122   73696 main.go:141] libmachine: () Calling .GetVersion
	I0731 18:12:36.606710   73696 main.go:141] libmachine: Using API Version  1
	I0731 18:12:36.606731   73696 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 18:12:36.606809   73696 main.go:141] libmachine: () Calling .GetVersion
	I0731 18:12:36.607072   73696 main.go:141] libmachine: () Calling .GetMachineName
	I0731 18:12:36.607240   73696 main.go:141] libmachine: Using API Version  1
	I0731 18:12:36.607262   73696 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 18:12:36.607789   73696 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 18:12:36.607817   73696 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 18:12:36.608000   73696 main.go:141] libmachine: () Calling .GetMachineName
	I0731 18:12:36.608173   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) Calling .GetState
	I0731 18:12:36.609009   73696 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39155
	I0731 18:12:36.609469   73696 main.go:141] libmachine: () Calling .GetVersion
	I0731 18:12:36.609954   73696 main.go:141] libmachine: Using API Version  1
	I0731 18:12:36.609973   73696 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 18:12:36.610333   73696 main.go:141] libmachine: () Calling .GetMachineName
	I0731 18:12:36.610936   73696 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 18:12:36.610998   73696 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 18:12:36.612199   73696 addons.go:234] Setting addon default-storageclass=true in "default-k8s-diff-port-094310"
	W0731 18:12:36.612224   73696 addons.go:243] addon default-storageclass should already be in state true
	I0731 18:12:36.612254   73696 host.go:66] Checking if "default-k8s-diff-port-094310" exists ...
	I0731 18:12:36.612624   73696 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 18:12:36.612659   73696 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 18:12:36.626474   73696 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38009
	I0731 18:12:36.626981   73696 main.go:141] libmachine: () Calling .GetVersion
	I0731 18:12:36.627514   73696 main.go:141] libmachine: Using API Version  1
	I0731 18:12:36.627534   73696 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 18:12:36.627836   73696 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43349
	I0731 18:12:36.628007   73696 main.go:141] libmachine: () Calling .GetMachineName
	I0731 18:12:36.628336   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) Calling .GetState
	I0731 18:12:36.628415   73696 main.go:141] libmachine: () Calling .GetVersion
	I0731 18:12:36.628816   73696 main.go:141] libmachine: Using API Version  1
	I0731 18:12:36.628831   73696 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 18:12:36.629237   73696 main.go:141] libmachine: () Calling .GetMachineName
	I0731 18:12:36.629450   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) Calling .GetState
	I0731 18:12:36.630518   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) Calling .DriverName
	I0731 18:12:36.631198   73696 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43471
	I0731 18:12:36.631550   73696 main.go:141] libmachine: () Calling .GetVersion
	I0731 18:12:36.632064   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) Calling .DriverName
	I0731 18:12:36.632200   73696 main.go:141] libmachine: Using API Version  1
	I0731 18:12:36.632217   73696 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 18:12:36.632576   73696 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0731 18:12:36.632739   73696 main.go:141] libmachine: () Calling .GetMachineName
	I0731 18:12:36.633275   73696 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 18:12:36.633313   73696 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 18:12:36.633711   73696 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0731 18:12:33.642776   73479 pod_ready.go:102] pod "metrics-server-78fcd8795b-27pkr" in "kube-system" namespace has status "Ready":"False"
	I0731 18:12:35.643640   73479 pod_ready.go:102] pod "metrics-server-78fcd8795b-27pkr" in "kube-system" namespace has status "Ready":"False"
	I0731 18:12:36.633805   73696 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0731 18:12:36.633820   73696 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0731 18:12:36.633840   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) Calling .GetSSHHostname
	I0731 18:12:36.634990   73696 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0731 18:12:36.635005   73696 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0731 18:12:36.635022   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) Calling .GetSSHHostname
	I0731 18:12:36.637135   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) DBG | domain default-k8s-diff-port-094310 has defined MAC address 52:54:00:a9:b2:ae in network mk-default-k8s-diff-port-094310
	I0731 18:12:36.637767   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a9:b2:ae", ip: ""} in network mk-default-k8s-diff-port-094310: {Iface:virbr1 ExpiryTime:2024-07-31 19:07:14 +0000 UTC Type:0 Mac:52:54:00:a9:b2:ae Iaid: IPaddr:192.168.72.197 Prefix:24 Hostname:default-k8s-diff-port-094310 Clientid:01:52:54:00:a9:b2:ae}
	I0731 18:12:36.637792   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) DBG | domain default-k8s-diff-port-094310 has defined IP address 192.168.72.197 and MAC address 52:54:00:a9:b2:ae in network mk-default-k8s-diff-port-094310
	I0731 18:12:36.637891   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) Calling .GetSSHPort
	I0731 18:12:36.639047   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) DBG | domain default-k8s-diff-port-094310 has defined MAC address 52:54:00:a9:b2:ae in network mk-default-k8s-diff-port-094310
	I0731 18:12:36.639583   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a9:b2:ae", ip: ""} in network mk-default-k8s-diff-port-094310: {Iface:virbr1 ExpiryTime:2024-07-31 19:07:14 +0000 UTC Type:0 Mac:52:54:00:a9:b2:ae Iaid: IPaddr:192.168.72.197 Prefix:24 Hostname:default-k8s-diff-port-094310 Clientid:01:52:54:00:a9:b2:ae}
	I0731 18:12:36.639617   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) DBG | domain default-k8s-diff-port-094310 has defined IP address 192.168.72.197 and MAC address 52:54:00:a9:b2:ae in network mk-default-k8s-diff-port-094310
	I0731 18:12:36.639885   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) Calling .GetSSHPort
	I0731 18:12:36.640106   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) Calling .GetSSHKeyPath
	I0731 18:12:36.640235   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) Calling .GetSSHUsername
	I0731 18:12:36.640419   73696 sshutil.go:53] new ssh client: &{IP:192.168.72.197 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19349-8084/.minikube/machines/default-k8s-diff-port-094310/id_rsa Username:docker}
	I0731 18:12:36.641860   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) Calling .GetSSHKeyPath
	I0731 18:12:36.642037   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) Calling .GetSSHUsername
	I0731 18:12:36.642205   73696 sshutil.go:53] new ssh client: &{IP:192.168.72.197 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19349-8084/.minikube/machines/default-k8s-diff-port-094310/id_rsa Username:docker}
	I0731 18:12:36.659960   73696 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36389
	I0731 18:12:36.660280   73696 main.go:141] libmachine: () Calling .GetVersion
	I0731 18:12:36.660692   73696 main.go:141] libmachine: Using API Version  1
	I0731 18:12:36.660713   73696 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 18:12:36.660986   73696 main.go:141] libmachine: () Calling .GetMachineName
	I0731 18:12:36.661150   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) Calling .GetState
	I0731 18:12:36.663024   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) Calling .DriverName
	I0731 18:12:36.663232   73696 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0731 18:12:36.663245   73696 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0731 18:12:36.663264   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) Calling .GetSSHHostname
	I0731 18:12:36.666016   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) DBG | domain default-k8s-diff-port-094310 has defined MAC address 52:54:00:a9:b2:ae in network mk-default-k8s-diff-port-094310
	I0731 18:12:36.666393   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a9:b2:ae", ip: ""} in network mk-default-k8s-diff-port-094310: {Iface:virbr1 ExpiryTime:2024-07-31 19:07:14 +0000 UTC Type:0 Mac:52:54:00:a9:b2:ae Iaid: IPaddr:192.168.72.197 Prefix:24 Hostname:default-k8s-diff-port-094310 Clientid:01:52:54:00:a9:b2:ae}
	I0731 18:12:36.666472   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) DBG | domain default-k8s-diff-port-094310 has defined IP address 192.168.72.197 and MAC address 52:54:00:a9:b2:ae in network mk-default-k8s-diff-port-094310
	I0731 18:12:36.666562   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) Calling .GetSSHPort
	I0731 18:12:36.666730   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) Calling .GetSSHKeyPath
	I0731 18:12:36.666832   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) Calling .GetSSHUsername
	I0731 18:12:36.666935   73696 sshutil.go:53] new ssh client: &{IP:192.168.72.197 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19349-8084/.minikube/machines/default-k8s-diff-port-094310/id_rsa Username:docker}
	I0731 18:12:36.813977   73696 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0731 18:12:36.832201   73696 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-094310" to be "Ready" ...
	I0731 18:12:36.849864   73696 node_ready.go:49] node "default-k8s-diff-port-094310" has status "Ready":"True"
	I0731 18:12:36.849891   73696 node_ready.go:38] duration metric: took 17.657098ms for node "default-k8s-diff-port-094310" to be "Ready" ...
	I0731 18:12:36.849903   73696 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0731 18:12:36.860981   73696 pod_ready.go:78] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-094310" in "kube-system" namespace to be "Ready" ...
	I0731 18:12:36.865178   73696 pod_ready.go:92] pod "etcd-default-k8s-diff-port-094310" in "kube-system" namespace has status "Ready":"True"
	I0731 18:12:36.865198   73696 pod_ready.go:81] duration metric: took 4.190559ms for pod "etcd-default-k8s-diff-port-094310" in "kube-system" namespace to be "Ready" ...
	I0731 18:12:36.865209   73696 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-094310" in "kube-system" namespace to be "Ready" ...
	I0731 18:12:36.869977   73696 pod_ready.go:92] pod "kube-apiserver-default-k8s-diff-port-094310" in "kube-system" namespace has status "Ready":"True"
	I0731 18:12:36.869998   73696 pod_ready.go:81] duration metric: took 4.780295ms for pod "kube-apiserver-default-k8s-diff-port-094310" in "kube-system" namespace to be "Ready" ...
	I0731 18:12:36.870008   73696 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-094310" in "kube-system" namespace to be "Ready" ...
	I0731 18:12:36.874051   73696 pod_ready.go:92] pod "kube-controller-manager-default-k8s-diff-port-094310" in "kube-system" namespace has status "Ready":"True"
	I0731 18:12:36.874069   73696 pod_ready.go:81] duration metric: took 4.053362ms for pod "kube-controller-manager-default-k8s-diff-port-094310" in "kube-system" namespace to be "Ready" ...
	I0731 18:12:36.874079   73696 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-094310" in "kube-system" namespace to be "Ready" ...
	I0731 18:12:36.878519   73696 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-094310" in "kube-system" namespace has status "Ready":"True"
	I0731 18:12:36.878536   73696 pod_ready.go:81] duration metric: took 4.448692ms for pod "kube-scheduler-default-k8s-diff-port-094310" in "kube-system" namespace to be "Ready" ...
	I0731 18:12:36.878544   73696 pod_ready.go:38] duration metric: took 28.628924ms for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
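Each pod_ready check above reduces to reading a pod's status conditions and requiring the `Ready` condition to be `True`. A minimal client-go sketch of that predicate, assuming the kubeconfig path and pod name taken from the log (any kube-system pod works):

```go
package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// isPodReady reports whether the pod's Ready condition is True, which is the
// predicate the pod_ready checks in the log reduce to.
func isPodReady(pod *corev1.Pod) bool {
	for _, cond := range pod.Status.Conditions {
		if cond.Type == corev1.PodReady {
			return cond.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(config)
	pod, err := client.CoreV1().Pods("kube-system").Get(context.TODO(),
		"etcd-default-k8s-diff-port-094310", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Printf("%s Ready=%v\n", pod.Name, isPodReady(pod))
}
```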
	I0731 18:12:36.878564   73696 api_server.go:52] waiting for apiserver process to appear ...
	I0731 18:12:36.878622   73696 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:12:36.892011   73696 api_server.go:72] duration metric: took 307.983877ms to wait for apiserver process to appear ...
	I0731 18:12:36.892031   73696 api_server.go:88] waiting for apiserver healthz status ...
	I0731 18:12:36.892049   73696 api_server.go:253] Checking apiserver healthz at https://192.168.72.197:8444/healthz ...
	I0731 18:12:36.895929   73696 api_server.go:279] https://192.168.72.197:8444/healthz returned 200:
	ok
	I0731 18:12:36.896760   73696 api_server.go:141] control plane version: v1.30.3
	I0731 18:12:36.896780   73696 api_server.go:131] duration metric: took 4.741896ms to wait for apiserver health ...
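The healthz probe above is a plain HTTPS GET against the apiserver's /healthz endpoint; a 200 response with body "ok" counts as healthy. A rough equivalent in Go is sketched below. Skipping TLS verification is purely to keep the sketch short; the real client trusts the cluster CA instead.

```go
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// checkHealthz performs the same kind of probe the log shows: GET /healthz
// and require HTTP 200. InsecureSkipVerify is a simplification for this
// sketch, not how minikube actually authenticates the apiserver.
func checkHealthz(url string) error {
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	resp, err := client.Get(url)
	if err != nil {
		return err
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	if resp.StatusCode != http.StatusOK {
		return fmt.Errorf("healthz returned %d: %s", resp.StatusCode, body)
	}
	fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body)
	return nil
}

func main() {
	if err := checkHealthz("https://192.168.72.197:8444/healthz"); err != nil {
		fmt.Println(err)
	}
}
```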
	I0731 18:12:36.896789   73696 system_pods.go:43] waiting for kube-system pods to appear ...
	I0731 18:12:36.974073   73696 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0731 18:12:36.974092   73696 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0731 18:12:37.010218   73696 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0731 18:12:37.018536   73696 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0731 18:12:37.039734   73696 system_pods.go:59] 5 kube-system pods found
	I0731 18:12:37.039767   73696 system_pods.go:61] "etcd-default-k8s-diff-port-094310" [f73c4fb4-b160-4e79-a0a4-8fba255cfb8c] Running
	I0731 18:12:37.039773   73696 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-094310" [820215b3-0eaa-46ca-bf0b-4fa73942160a] Running
	I0731 18:12:37.039778   73696 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-094310" [5ca0ed32-5439-49b5-9a85-1a4b1628371d] Running
	I0731 18:12:37.039787   73696 system_pods.go:61] "kube-proxy-4vvjq" [a8dd9db6-5f45-4c8d-a9d3-03ede1db07eb] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0731 18:12:37.039792   73696 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-094310" [11590718-4e74-46eb-be90-6405c3f1759f] Running
	I0731 18:12:37.039802   73696 system_pods.go:74] duration metric: took 143.007992ms to wait for pod list to return data ...
	I0731 18:12:37.039812   73696 default_sa.go:34] waiting for default service account to be created ...
	I0731 18:12:37.041650   73696 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0731 18:12:37.041672   73696 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0731 18:12:37.096891   73696 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0731 18:12:37.096920   73696 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0731 18:12:37.159438   73696 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0731 18:12:37.235560   73696 default_sa.go:45] found service account: "default"
	I0731 18:12:37.235599   73696 default_sa.go:55] duration metric: took 195.778976ms for default service account to be created ...
	I0731 18:12:37.235612   73696 system_pods.go:116] waiting for k8s-apps to be running ...
	I0731 18:12:37.439935   73696 system_pods.go:86] 7 kube-system pods found
	I0731 18:12:37.439966   73696 system_pods.go:89] "coredns-7db6d8ff4d-2r7zb" [e7acc926-db53-4c1c-a62f-45e303c69fc7] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0731 18:12:37.439975   73696 system_pods.go:89] "coredns-7db6d8ff4d-756jj" [c91e86b1-8348-4dc7-9aa1-6909055c2dde] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0731 18:12:37.439982   73696 system_pods.go:89] "etcd-default-k8s-diff-port-094310" [f73c4fb4-b160-4e79-a0a4-8fba255cfb8c] Running
	I0731 18:12:37.439988   73696 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-094310" [820215b3-0eaa-46ca-bf0b-4fa73942160a] Running
	I0731 18:12:37.439993   73696 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-094310" [5ca0ed32-5439-49b5-9a85-1a4b1628371d] Running
	I0731 18:12:37.439998   73696 system_pods.go:89] "kube-proxy-4vvjq" [a8dd9db6-5f45-4c8d-a9d3-03ede1db07eb] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0731 18:12:37.440003   73696 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-094310" [11590718-4e74-46eb-be90-6405c3f1759f] Running
	I0731 18:12:37.440020   73696 retry.go:31] will retry after 230.300903ms: missing components: kube-dns, kube-proxy
	I0731 18:12:37.676385   73696 system_pods.go:86] 7 kube-system pods found
	I0731 18:12:37.676411   73696 system_pods.go:89] "coredns-7db6d8ff4d-2r7zb" [e7acc926-db53-4c1c-a62f-45e303c69fc7] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0731 18:12:37.676421   73696 system_pods.go:89] "coredns-7db6d8ff4d-756jj" [c91e86b1-8348-4dc7-9aa1-6909055c2dde] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0731 18:12:37.676429   73696 system_pods.go:89] "etcd-default-k8s-diff-port-094310" [f73c4fb4-b160-4e79-a0a4-8fba255cfb8c] Running
	I0731 18:12:37.676436   73696 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-094310" [820215b3-0eaa-46ca-bf0b-4fa73942160a] Running
	I0731 18:12:37.676442   73696 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-094310" [5ca0ed32-5439-49b5-9a85-1a4b1628371d] Running
	I0731 18:12:37.676451   73696 system_pods.go:89] "kube-proxy-4vvjq" [a8dd9db6-5f45-4c8d-a9d3-03ede1db07eb] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0731 18:12:37.676456   73696 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-094310" [11590718-4e74-46eb-be90-6405c3f1759f] Running
	I0731 18:12:37.676475   73696 retry.go:31] will retry after 311.28179ms: missing components: kube-dns, kube-proxy
	I0731 18:12:37.813837   73696 main.go:141] libmachine: Making call to close driver server
	I0731 18:12:37.813870   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) Calling .Close
	I0731 18:12:37.814017   73696 main.go:141] libmachine: Making call to close driver server
	I0731 18:12:37.814039   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) Calling .Close
	I0731 18:12:37.814265   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) DBG | Closing plugin on server side
	I0731 18:12:37.814316   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) DBG | Closing plugin on server side
	I0731 18:12:37.814363   73696 main.go:141] libmachine: Successfully made call to close driver server
	I0731 18:12:37.814376   73696 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 18:12:37.814391   73696 main.go:141] libmachine: Making call to close driver server
	I0731 18:12:37.814402   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) Calling .Close
	I0731 18:12:37.814531   73696 main.go:141] libmachine: Successfully made call to close driver server
	I0731 18:12:37.814556   73696 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 18:12:37.814598   73696 main.go:141] libmachine: Making call to close driver server
	I0731 18:12:37.814608   73696 main.go:141] libmachine: Successfully made call to close driver server
	I0731 18:12:37.814631   73696 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 18:12:37.814622   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) Calling .Close
	I0731 18:12:37.816102   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) DBG | Closing plugin on server side
	I0731 18:12:37.816268   73696 main.go:141] libmachine: Successfully made call to close driver server
	I0731 18:12:37.816280   73696 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 18:12:37.830991   73696 main.go:141] libmachine: Making call to close driver server
	I0731 18:12:37.831018   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) Calling .Close
	I0731 18:12:37.831354   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) DBG | Closing plugin on server side
	I0731 18:12:37.831354   73696 main.go:141] libmachine: Successfully made call to close driver server
	I0731 18:12:37.831380   73696 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 18:12:37.995206   73696 system_pods.go:86] 8 kube-system pods found
	I0731 18:12:37.995248   73696 system_pods.go:89] "coredns-7db6d8ff4d-2r7zb" [e7acc926-db53-4c1c-a62f-45e303c69fc7] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0731 18:12:37.995262   73696 system_pods.go:89] "coredns-7db6d8ff4d-756jj" [c91e86b1-8348-4dc7-9aa1-6909055c2dde] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0731 18:12:37.995272   73696 system_pods.go:89] "etcd-default-k8s-diff-port-094310" [f73c4fb4-b160-4e79-a0a4-8fba255cfb8c] Running
	I0731 18:12:37.995295   73696 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-094310" [820215b3-0eaa-46ca-bf0b-4fa73942160a] Running
	I0731 18:12:37.995310   73696 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-094310" [5ca0ed32-5439-49b5-9a85-1a4b1628371d] Running
	I0731 18:12:37.995322   73696 system_pods.go:89] "kube-proxy-4vvjq" [a8dd9db6-5f45-4c8d-a9d3-03ede1db07eb] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0731 18:12:37.995332   73696 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-094310" [11590718-4e74-46eb-be90-6405c3f1759f] Running
	I0731 18:12:37.995345   73696 system_pods.go:89] "storage-provisioner" [2e4c5a96-4dfc-4af6-8d3e-bab865644328] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0731 18:12:37.995370   73696 retry.go:31] will retry after 381.430275ms: missing components: kube-dns, kube-proxy
	I0731 18:12:38.392678   73696 system_pods.go:86] 9 kube-system pods found
	I0731 18:12:38.392719   73696 system_pods.go:89] "coredns-7db6d8ff4d-2r7zb" [e7acc926-db53-4c1c-a62f-45e303c69fc7] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0731 18:12:38.392732   73696 system_pods.go:89] "coredns-7db6d8ff4d-756jj" [c91e86b1-8348-4dc7-9aa1-6909055c2dde] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0731 18:12:38.392742   73696 system_pods.go:89] "etcd-default-k8s-diff-port-094310" [f73c4fb4-b160-4e79-a0a4-8fba255cfb8c] Running
	I0731 18:12:38.392751   73696 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-094310" [820215b3-0eaa-46ca-bf0b-4fa73942160a] Running
	I0731 18:12:38.392760   73696 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-094310" [5ca0ed32-5439-49b5-9a85-1a4b1628371d] Running
	I0731 18:12:38.392770   73696 system_pods.go:89] "kube-proxy-4vvjq" [a8dd9db6-5f45-4c8d-a9d3-03ede1db07eb] Running
	I0731 18:12:38.392778   73696 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-094310" [11590718-4e74-46eb-be90-6405c3f1759f] Running
	I0731 18:12:38.392787   73696 system_pods.go:89] "metrics-server-569cc877fc-mskwc" [c57990b4-1d91-4764-9c33-2fd5f7d2f83b] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0731 18:12:38.392802   73696 system_pods.go:89] "storage-provisioner" [2e4c5a96-4dfc-4af6-8d3e-bab865644328] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0731 18:12:38.392823   73696 retry.go:31] will retry after 567.905994ms: missing components: kube-dns
	I0731 18:12:38.501117   73696 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.341621275s)
	I0731 18:12:38.501181   73696 main.go:141] libmachine: Making call to close driver server
	I0731 18:12:38.501200   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) Calling .Close
	I0731 18:12:38.501595   73696 main.go:141] libmachine: Successfully made call to close driver server
	I0731 18:12:38.501615   73696 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 18:12:38.501625   73696 main.go:141] libmachine: Making call to close driver server
	I0731 18:12:38.501634   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) Calling .Close
	I0731 18:12:38.501593   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) DBG | Closing plugin on server side
	I0731 18:12:38.501907   73696 main.go:141] libmachine: Successfully made call to close driver server
	I0731 18:12:38.501953   73696 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 18:12:38.501966   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) DBG | Closing plugin on server side
	I0731 18:12:38.501975   73696 addons.go:475] Verifying addon metrics-server=true in "default-k8s-diff-port-094310"
	I0731 18:12:38.505204   73696 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0731 18:12:38.506517   73696 addons.go:510] duration metric: took 1.921658263s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I0731 18:12:38.967657   73696 system_pods.go:86] 9 kube-system pods found
	I0731 18:12:38.967691   73696 system_pods.go:89] "coredns-7db6d8ff4d-2r7zb" [e7acc926-db53-4c1c-a62f-45e303c69fc7] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0731 18:12:38.967700   73696 system_pods.go:89] "coredns-7db6d8ff4d-756jj" [c91e86b1-8348-4dc7-9aa1-6909055c2dde] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0731 18:12:38.967708   73696 system_pods.go:89] "etcd-default-k8s-diff-port-094310" [f73c4fb4-b160-4e79-a0a4-8fba255cfb8c] Running
	I0731 18:12:38.967716   73696 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-094310" [820215b3-0eaa-46ca-bf0b-4fa73942160a] Running
	I0731 18:12:38.967723   73696 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-094310" [5ca0ed32-5439-49b5-9a85-1a4b1628371d] Running
	I0731 18:12:38.967729   73696 system_pods.go:89] "kube-proxy-4vvjq" [a8dd9db6-5f45-4c8d-a9d3-03ede1db07eb] Running
	I0731 18:12:38.967736   73696 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-094310" [11590718-4e74-46eb-be90-6405c3f1759f] Running
	I0731 18:12:38.967746   73696 system_pods.go:89] "metrics-server-569cc877fc-mskwc" [c57990b4-1d91-4764-9c33-2fd5f7d2f83b] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0731 18:12:38.967759   73696 system_pods.go:89] "storage-provisioner" [2e4c5a96-4dfc-4af6-8d3e-bab865644328] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0731 18:12:38.967779   73696 retry.go:31] will retry after 488.293971ms: missing components: kube-dns
	I0731 18:12:39.464918   73696 system_pods.go:86] 9 kube-system pods found
	I0731 18:12:39.464956   73696 system_pods.go:89] "coredns-7db6d8ff4d-2r7zb" [e7acc926-db53-4c1c-a62f-45e303c69fc7] Running
	I0731 18:12:39.464965   73696 system_pods.go:89] "coredns-7db6d8ff4d-756jj" [c91e86b1-8348-4dc7-9aa1-6909055c2dde] Running
	I0731 18:12:39.464972   73696 system_pods.go:89] "etcd-default-k8s-diff-port-094310" [f73c4fb4-b160-4e79-a0a4-8fba255cfb8c] Running
	I0731 18:12:39.464978   73696 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-094310" [820215b3-0eaa-46ca-bf0b-4fa73942160a] Running
	I0731 18:12:39.464986   73696 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-094310" [5ca0ed32-5439-49b5-9a85-1a4b1628371d] Running
	I0731 18:12:39.464992   73696 system_pods.go:89] "kube-proxy-4vvjq" [a8dd9db6-5f45-4c8d-a9d3-03ede1db07eb] Running
	I0731 18:12:39.464999   73696 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-094310" [11590718-4e74-46eb-be90-6405c3f1759f] Running
	I0731 18:12:39.465017   73696 system_pods.go:89] "metrics-server-569cc877fc-mskwc" [c57990b4-1d91-4764-9c33-2fd5f7d2f83b] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0731 18:12:39.465028   73696 system_pods.go:89] "storage-provisioner" [2e4c5a96-4dfc-4af6-8d3e-bab865644328] Running
	I0731 18:12:39.465041   73696 system_pods.go:126] duration metric: took 2.229422302s to wait for k8s-apps to be running ...
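The retry.go lines above show how the k8s-apps wait works: each attempt lists the kube-system pods, determines which expected components (kube-dns, kube-proxy, ...) still have no Running pod, and retries after a short, growing delay until nothing is missing. A compact sketch of that loop using client-go; the component prefixes, the phase-based check, and the 200ms step are assumptions for illustration.

```go
package main

import (
	"context"
	"fmt"
	"strings"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// missingComponents lists which expected kube-system components have no
// Running pod yet. The expected prefixes are illustrative.
func missingComponents(pods *corev1.PodList) []string {
	expected := []string{"coredns", "etcd", "kube-apiserver",
		"kube-controller-manager", "kube-proxy", "kube-scheduler"}
	var missing []string
	for _, name := range expected {
		found := false
		for _, p := range pods.Items {
			if strings.HasPrefix(p.Name, name) && p.Status.Phase == corev1.PodRunning {
				found = true
				break
			}
		}
		if !found {
			missing = append(missing, name)
		}
	}
	return missing
}

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(config)
	for delay := 200 * time.Millisecond; ; delay += 200 * time.Millisecond {
		pods, err := client.CoreV1().Pods("kube-system").List(context.TODO(), metav1.ListOptions{})
		if err != nil {
			panic(err)
		}
		missing := missingComponents(pods)
		if len(missing) == 0 {
			fmt.Println("all kube-system components running")
			return
		}
		fmt.Printf("will retry after %v: missing components: %s\n", delay, strings.Join(missing, ", "))
		time.Sleep(delay)
	}
}
```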
	I0731 18:12:39.465053   73696 system_svc.go:44] waiting for kubelet service to be running ....
	I0731 18:12:39.465111   73696 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0731 18:12:39.482063   73696 system_svc.go:56] duration metric: took 16.998965ms WaitForService to wait for kubelet
	I0731 18:12:39.482092   73696 kubeadm.go:582] duration metric: took 2.898066741s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0731 18:12:39.482138   73696 node_conditions.go:102] verifying NodePressure condition ...
	I0731 18:12:39.486728   73696 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0731 18:12:39.486752   73696 node_conditions.go:123] node cpu capacity is 2
	I0731 18:12:39.486764   73696 node_conditions.go:105] duration metric: took 4.617934ms to run NodePressure ...
	I0731 18:12:39.486777   73696 start.go:241] waiting for startup goroutines ...
	I0731 18:12:39.486787   73696 start.go:246] waiting for cluster config update ...
	I0731 18:12:39.486798   73696 start.go:255] writing updated cluster config ...
	I0731 18:12:39.487565   73696 ssh_runner.go:195] Run: rm -f paused
	I0731 18:12:39.539591   73696 start.go:600] kubectl: 1.30.3, cluster: 1.30.3 (minor skew: 0)
	I0731 18:12:39.541533   73696 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-094310" cluster and "default" namespace by default
	I0731 18:12:37.644379   73479 pod_ready.go:102] pod "metrics-server-78fcd8795b-27pkr" in "kube-system" namespace has status "Ready":"False"
	I0731 18:12:39.645608   73479 pod_ready.go:102] pod "metrics-server-78fcd8795b-27pkr" in "kube-system" namespace has status "Ready":"False"
	I0731 18:12:41.969949   73800 kubeadm.go:310] [init] Using Kubernetes version: v1.30.3
	I0731 18:12:41.970018   73800 kubeadm.go:310] [preflight] Running pre-flight checks
	I0731 18:12:41.970137   73800 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0731 18:12:41.970234   73800 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0731 18:12:41.970386   73800 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0731 18:12:41.970495   73800 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0731 18:12:41.972177   73800 out.go:204]   - Generating certificates and keys ...
	I0731 18:12:41.972244   73800 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0731 18:12:41.972314   73800 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0731 18:12:41.972403   73800 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0731 18:12:41.972480   73800 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0731 18:12:41.972538   73800 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0731 18:12:41.972588   73800 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0731 18:12:41.972654   73800 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0731 18:12:41.972748   73800 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0731 18:12:41.972859   73800 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0731 18:12:41.972982   73800 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0731 18:12:41.973027   73800 kubeadm.go:310] [certs] Using the existing "sa" key
	I0731 18:12:41.973082   73800 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0731 18:12:41.973152   73800 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0731 18:12:41.973205   73800 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0731 18:12:41.973252   73800 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0731 18:12:41.973323   73800 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0731 18:12:41.973387   73800 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0731 18:12:41.973456   73800 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0731 18:12:41.973553   73800 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0731 18:12:41.974927   73800 out.go:204]   - Booting up control plane ...
	I0731 18:12:41.975019   73800 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0731 18:12:41.975128   73800 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0731 18:12:41.975215   73800 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0731 18:12:41.975342   73800 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0731 18:12:41.975425   73800 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0731 18:12:41.975474   73800 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0731 18:12:41.975635   73800 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0731 18:12:41.975710   73800 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0731 18:12:41.975766   73800 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.001397088s
	I0731 18:12:41.975824   73800 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0731 18:12:41.975909   73800 kubeadm.go:310] [api-check] The API server is healthy after 5.001258426s
	I0731 18:12:41.976064   73800 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0731 18:12:41.976241   73800 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0731 18:12:41.976355   73800 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0731 18:12:41.976528   73800 kubeadm.go:310] [mark-control-plane] Marking the node embed-certs-436067 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0731 18:12:41.976605   73800 kubeadm.go:310] [bootstrap-token] Using token: m9csv8.j58cj919sgzkgy1k
	I0731 18:12:41.978880   73800 out.go:204]   - Configuring RBAC rules ...
	I0731 18:12:41.978976   73800 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0731 18:12:41.979087   73800 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0731 18:12:41.979277   73800 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0731 18:12:41.979441   73800 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0731 18:12:41.979622   73800 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0731 18:12:41.979708   73800 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0731 18:12:41.979835   73800 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0731 18:12:41.979875   73800 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0731 18:12:41.979918   73800 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0731 18:12:41.979924   73800 kubeadm.go:310] 
	I0731 18:12:41.979971   73800 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0731 18:12:41.979979   73800 kubeadm.go:310] 
	I0731 18:12:41.980058   73800 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0731 18:12:41.980067   73800 kubeadm.go:310] 
	I0731 18:12:41.980098   73800 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0731 18:12:41.980160   73800 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0731 18:12:41.980229   73800 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0731 18:12:41.980236   73800 kubeadm.go:310] 
	I0731 18:12:41.980300   73800 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0731 18:12:41.980311   73800 kubeadm.go:310] 
	I0731 18:12:41.980384   73800 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0731 18:12:41.980393   73800 kubeadm.go:310] 
	I0731 18:12:41.980446   73800 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0731 18:12:41.980548   73800 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0731 18:12:41.980644   73800 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0731 18:12:41.980653   73800 kubeadm.go:310] 
	I0731 18:12:41.980759   73800 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0731 18:12:41.980824   73800 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0731 18:12:41.980830   73800 kubeadm.go:310] 
	I0731 18:12:41.980896   73800 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token m9csv8.j58cj919sgzkgy1k \
	I0731 18:12:41.980984   73800 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:460b24b51d10a3760f2391542a8a89e401359854f437f922cbee3f2d8ed6c856 \
	I0731 18:12:41.981011   73800 kubeadm.go:310] 	--control-plane 
	I0731 18:12:41.981023   73800 kubeadm.go:310] 
	I0731 18:12:41.981093   73800 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0731 18:12:41.981099   73800 kubeadm.go:310] 
	I0731 18:12:41.981183   73800 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token m9csv8.j58cj919sgzkgy1k \
	I0731 18:12:41.981306   73800 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:460b24b51d10a3760f2391542a8a89e401359854f437f922cbee3f2d8ed6c856 
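The `--discovery-token-ca-cert-hash` printed in the join command above is "sha256:" followed by the hex SHA-256 of the cluster CA certificate's Subject Public Key Info (per RFC 7469). A short Go sketch that recomputes it; the ca.crt path is assumed from the certs directory mentioned earlier in the log.

```go
package main

import (
	"crypto/sha256"
	"crypto/x509"
	"encoding/hex"
	"encoding/pem"
	"fmt"
	"os"
)

// Recompute the kubeadm discovery-token-ca-cert-hash: SHA-256 over the DER
// encoding of the CA certificate's Subject Public Key Info.
func main() {
	data, err := os.ReadFile("/var/lib/minikube/certs/ca.crt") // assumed path
	if err != nil {
		panic(err)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		panic("no PEM block found in ca.crt")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		panic(err)
	}
	sum := sha256.Sum256(cert.RawSubjectPublicKeyInfo)
	fmt.Printf("sha256:%s\n", hex.EncodeToString(sum[:]))
}
```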
	I0731 18:12:41.981317   73800 cni.go:84] Creating CNI manager for ""
	I0731 18:12:41.981324   73800 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0731 18:12:41.982701   73800 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0731 18:12:41.983929   73800 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0731 18:12:41.995272   73800 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
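The bridge CNI step above writes a single conflist into /etc/cni/net.d so crio can attach pods to a Linux bridge with host-local IPAM. The log does not show the contents of 1-k8s.conflist; the sketch below writes a generic bridge + portmap conflist of the same shape from Go, and the plugin chain, bridge name, subnet, and file mode are all assumptions rather than minikube's exact configuration.

```go
package main

import "os"

// A generic bridge CNI conflist in the shape the log implies. The exact
// contents of minikube's 1-k8s.conflist may differ; this is illustrative.
const bridgeConflist = `{
  "cniVersion": "0.3.1",
  "name": "bridge",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "bridge",
      "isDefaultGateway": true,
      "ipMasq": true,
      "hairpinMode": true,
      "ipam": {
        "type": "host-local",
        "subnet": "10.244.0.0/16"
      }
    },
    {
      "type": "portmap",
      "capabilities": { "portMappings": true }
    }
  ]
}`

func main() {
	// 0644 matches typical /etc/cni/net.d permissions; this is an assumption.
	if err := os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(bridgeConflist), 0o644); err != nil {
		panic(err)
	}
}
```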
	I0731 18:12:42.014929   73800 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0731 18:12:42.014984   73800 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 18:12:42.015033   73800 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes embed-certs-436067 minikube.k8s.io/updated_at=2024_07_31T18_12_42_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=1d737dad7efa60c56d30434fcd857dd3b14c91d9 minikube.k8s.io/name=embed-certs-436067 minikube.k8s.io/primary=true
	I0731 18:12:42.164811   73800 ops.go:34] apiserver oom_adj: -16
	I0731 18:12:42.164934   73800 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 18:12:42.665108   73800 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 18:12:43.165818   73800 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 18:12:43.665733   73800 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 18:12:44.165074   73800 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 18:12:42.144896   73479 pod_ready.go:102] pod "metrics-server-78fcd8795b-27pkr" in "kube-system" namespace has status "Ready":"False"
	I0731 18:12:44.644077   73479 pod_ready.go:102] pod "metrics-server-78fcd8795b-27pkr" in "kube-system" namespace has status "Ready":"False"
	I0731 18:12:44.665477   73800 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 18:12:45.165127   73800 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 18:12:45.665440   73800 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 18:12:46.165555   73800 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 18:12:46.665998   73800 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 18:12:47.165829   73800 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 18:12:47.665704   73800 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 18:12:48.164973   73800 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 18:12:48.665549   73800 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 18:12:49.165210   73800 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 18:12:47.142947   73479 pod_ready.go:102] pod "metrics-server-78fcd8795b-27pkr" in "kube-system" namespace has status "Ready":"False"
	I0731 18:12:49.144015   73479 pod_ready.go:102] pod "metrics-server-78fcd8795b-27pkr" in "kube-system" namespace has status "Ready":"False"
	I0731 18:12:51.644495   73479 pod_ready.go:102] pod "metrics-server-78fcd8795b-27pkr" in "kube-system" namespace has status "Ready":"False"
	I0731 18:12:49.665500   73800 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 18:12:50.165567   73800 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 18:12:50.665547   73800 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 18:12:51.166002   73800 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 18:12:51.664981   73800 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 18:12:52.165135   73800 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 18:12:52.665927   73800 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 18:12:53.165045   73800 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 18:12:53.664981   73800 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 18:12:54.165715   73800 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 18:12:54.252373   73800 kubeadm.go:1113] duration metric: took 12.237438799s to wait for elevateKubeSystemPrivileges
	I0731 18:12:54.252415   73800 kubeadm.go:394] duration metric: took 5m6.689979758s to StartCluster
	I0731 18:12:54.252435   73800 settings.go:142] acquiring lock: {Name:mk8ac8269296bf0b887a00ff74caaad180005f89 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 18:12:54.252509   73800 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19349-8084/kubeconfig
	I0731 18:12:54.254175   73800 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19349-8084/kubeconfig: {Name:mkf2340bc1ada3c683f132aca37228d7588e1fdb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 18:12:54.254495   73800 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.50.86 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0731 18:12:54.254600   73800 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0731 18:12:54.254687   73800 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-436067"
	I0731 18:12:54.254721   73800 addons.go:234] Setting addon storage-provisioner=true in "embed-certs-436067"
	I0731 18:12:54.254724   73800 addons.go:69] Setting default-storageclass=true in profile "embed-certs-436067"
	W0731 18:12:54.254734   73800 addons.go:243] addon storage-provisioner should already be in state true
	I0731 18:12:54.254737   73800 config.go:182] Loaded profile config "embed-certs-436067": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0731 18:12:54.254743   73800 addons.go:69] Setting metrics-server=true in profile "embed-certs-436067"
	I0731 18:12:54.254760   73800 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-436067"
	I0731 18:12:54.254769   73800 host.go:66] Checking if "embed-certs-436067" exists ...
	I0731 18:12:54.254785   73800 addons.go:234] Setting addon metrics-server=true in "embed-certs-436067"
	W0731 18:12:54.254795   73800 addons.go:243] addon metrics-server should already be in state true
	I0731 18:12:54.254826   73800 host.go:66] Checking if "embed-certs-436067" exists ...
	I0731 18:12:54.255205   73800 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 18:12:54.255208   73800 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 18:12:54.255233   73800 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 18:12:54.255238   73800 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 18:12:54.255302   73800 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 18:12:54.255323   73800 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 18:12:54.256412   73800 out.go:177] * Verifying Kubernetes components...
	I0731 18:12:54.257653   73800 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0731 18:12:54.274456   73800 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34433
	I0731 18:12:54.274959   73800 main.go:141] libmachine: () Calling .GetVersion
	I0731 18:12:54.275532   73800 main.go:141] libmachine: Using API Version  1
	I0731 18:12:54.275554   73800 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 18:12:54.275828   73800 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44423
	I0731 18:12:54.275851   73800 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41513
	I0731 18:12:54.276001   73800 main.go:141] libmachine: () Calling .GetMachineName
	I0731 18:12:54.276152   73800 main.go:141] libmachine: () Calling .GetVersion
	I0731 18:12:54.276225   73800 main.go:141] libmachine: () Calling .GetVersion
	I0731 18:12:54.276498   73800 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 18:12:54.276534   73800 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 18:12:54.276592   73800 main.go:141] libmachine: Using API Version  1
	I0731 18:12:54.276606   73800 main.go:141] libmachine: Using API Version  1
	I0731 18:12:54.276613   73800 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 18:12:54.276616   73800 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 18:12:54.276954   73800 main.go:141] libmachine: () Calling .GetMachineName
	I0731 18:12:54.277055   73800 main.go:141] libmachine: () Calling .GetMachineName
	I0731 18:12:54.277103   73800 main.go:141] libmachine: (embed-certs-436067) Calling .GetState
	I0731 18:12:54.277663   73800 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 18:12:54.277704   73800 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 18:12:54.280559   73800 addons.go:234] Setting addon default-storageclass=true in "embed-certs-436067"
	W0731 18:12:54.280583   73800 addons.go:243] addon default-storageclass should already be in state true
	I0731 18:12:54.280615   73800 host.go:66] Checking if "embed-certs-436067" exists ...
	I0731 18:12:54.280969   73800 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 18:12:54.281000   73800 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 18:12:54.293211   73800 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38393
	I0731 18:12:54.293657   73800 main.go:141] libmachine: () Calling .GetVersion
	I0731 18:12:54.294121   73800 main.go:141] libmachine: Using API Version  1
	I0731 18:12:54.294142   73800 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 18:12:54.294444   73800 main.go:141] libmachine: () Calling .GetMachineName
	I0731 18:12:54.294642   73800 main.go:141] libmachine: (embed-certs-436067) Calling .GetState
	I0731 18:12:54.294724   73800 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38771
	I0731 18:12:54.295077   73800 main.go:141] libmachine: () Calling .GetVersion
	I0731 18:12:54.295590   73800 main.go:141] libmachine: Using API Version  1
	I0731 18:12:54.295609   73800 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 18:12:54.296058   73800 main.go:141] libmachine: () Calling .GetMachineName
	I0731 18:12:54.296285   73800 main.go:141] libmachine: (embed-certs-436067) Calling .GetState
	I0731 18:12:54.296377   73800 main.go:141] libmachine: (embed-certs-436067) Calling .DriverName
	I0731 18:12:54.298013   73800 main.go:141] libmachine: (embed-certs-436067) Calling .DriverName
	I0731 18:12:54.298541   73800 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0731 18:12:54.299454   73800 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0731 18:12:54.299489   73800 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0731 18:12:54.299501   73800 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0731 18:12:54.299515   73800 main.go:141] libmachine: (embed-certs-436067) Calling .GetSSHHostname
	I0731 18:12:54.300664   73800 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0731 18:12:54.300682   73800 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0731 18:12:54.300699   73800 main.go:141] libmachine: (embed-certs-436067) Calling .GetSSHHostname
	I0731 18:12:54.301018   73800 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36647
	I0731 18:12:54.301671   73800 main.go:141] libmachine: () Calling .GetVersion
	I0731 18:12:54.302210   73800 main.go:141] libmachine: Using API Version  1
	I0731 18:12:54.302229   73800 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 18:12:54.302731   73800 main.go:141] libmachine: () Calling .GetMachineName
	I0731 18:12:54.302857   73800 main.go:141] libmachine: (embed-certs-436067) DBG | domain embed-certs-436067 has defined MAC address 52:54:00:87:1e:25 in network mk-embed-certs-436067
	I0731 18:12:54.303479   73800 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 18:12:54.303503   73800 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 18:12:54.303710   73800 main.go:141] libmachine: (embed-certs-436067) Calling .GetSSHPort
	I0731 18:12:54.303744   73800 main.go:141] libmachine: (embed-certs-436067) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:1e:25", ip: ""} in network mk-embed-certs-436067: {Iface:virbr4 ExpiryTime:2024-07-31 19:07:32 +0000 UTC Type:0 Mac:52:54:00:87:1e:25 Iaid: IPaddr:192.168.50.86 Prefix:24 Hostname:embed-certs-436067 Clientid:01:52:54:00:87:1e:25}
	I0731 18:12:54.303768   73800 main.go:141] libmachine: (embed-certs-436067) DBG | domain embed-certs-436067 has defined IP address 192.168.50.86 and MAC address 52:54:00:87:1e:25 in network mk-embed-certs-436067
	I0731 18:12:54.303893   73800 main.go:141] libmachine: (embed-certs-436067) Calling .GetSSHKeyPath
	I0731 18:12:54.304071   73800 main.go:141] libmachine: (embed-certs-436067) Calling .GetSSHUsername
	I0731 18:12:54.304232   73800 sshutil.go:53] new ssh client: &{IP:192.168.50.86 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19349-8084/.minikube/machines/embed-certs-436067/id_rsa Username:docker}
	I0731 18:12:54.304601   73800 main.go:141] libmachine: (embed-certs-436067) DBG | domain embed-certs-436067 has defined MAC address 52:54:00:87:1e:25 in network mk-embed-certs-436067
	I0731 18:12:54.305040   73800 main.go:141] libmachine: (embed-certs-436067) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:1e:25", ip: ""} in network mk-embed-certs-436067: {Iface:virbr4 ExpiryTime:2024-07-31 19:07:32 +0000 UTC Type:0 Mac:52:54:00:87:1e:25 Iaid: IPaddr:192.168.50.86 Prefix:24 Hostname:embed-certs-436067 Clientid:01:52:54:00:87:1e:25}
	I0731 18:12:54.305063   73800 main.go:141] libmachine: (embed-certs-436067) DBG | domain embed-certs-436067 has defined IP address 192.168.50.86 and MAC address 52:54:00:87:1e:25 in network mk-embed-certs-436067
	I0731 18:12:54.305311   73800 main.go:141] libmachine: (embed-certs-436067) Calling .GetSSHPort
	I0731 18:12:54.305480   73800 main.go:141] libmachine: (embed-certs-436067) Calling .GetSSHKeyPath
	I0731 18:12:54.305594   73800 main.go:141] libmachine: (embed-certs-436067) Calling .GetSSHUsername
	I0731 18:12:54.305712   73800 sshutil.go:53] new ssh client: &{IP:192.168.50.86 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19349-8084/.minikube/machines/embed-certs-436067/id_rsa Username:docker}
	I0731 18:12:54.318168   73800 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33711
	I0731 18:12:54.318558   73800 main.go:141] libmachine: () Calling .GetVersion
	I0731 18:12:54.319015   73800 main.go:141] libmachine: Using API Version  1
	I0731 18:12:54.319033   73800 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 18:12:54.319355   73800 main.go:141] libmachine: () Calling .GetMachineName
	I0731 18:12:54.319552   73800 main.go:141] libmachine: (embed-certs-436067) Calling .GetState
	I0731 18:12:54.321369   73800 main.go:141] libmachine: (embed-certs-436067) Calling .DriverName
	I0731 18:12:54.321540   73800 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0731 18:12:54.321553   73800 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0731 18:12:54.321565   73800 main.go:141] libmachine: (embed-certs-436067) Calling .GetSSHHostname
	I0731 18:12:54.324613   73800 main.go:141] libmachine: (embed-certs-436067) DBG | domain embed-certs-436067 has defined MAC address 52:54:00:87:1e:25 in network mk-embed-certs-436067
	I0731 18:12:54.324994   73800 main.go:141] libmachine: (embed-certs-436067) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:1e:25", ip: ""} in network mk-embed-certs-436067: {Iface:virbr4 ExpiryTime:2024-07-31 19:07:32 +0000 UTC Type:0 Mac:52:54:00:87:1e:25 Iaid: IPaddr:192.168.50.86 Prefix:24 Hostname:embed-certs-436067 Clientid:01:52:54:00:87:1e:25}
	I0731 18:12:54.325011   73800 main.go:141] libmachine: (embed-certs-436067) DBG | domain embed-certs-436067 has defined IP address 192.168.50.86 and MAC address 52:54:00:87:1e:25 in network mk-embed-certs-436067
	I0731 18:12:54.325310   73800 main.go:141] libmachine: (embed-certs-436067) Calling .GetSSHPort
	I0731 18:12:54.325437   73800 main.go:141] libmachine: (embed-certs-436067) Calling .GetSSHKeyPath
	I0731 18:12:54.325571   73800 main.go:141] libmachine: (embed-certs-436067) Calling .GetSSHUsername
	I0731 18:12:54.325683   73800 sshutil.go:53] new ssh client: &{IP:192.168.50.86 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19349-8084/.minikube/machines/embed-certs-436067/id_rsa Username:docker}
	I0731 18:12:54.435485   73800 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0731 18:12:54.462541   73800 node_ready.go:35] waiting up to 6m0s for node "embed-certs-436067" to be "Ready" ...
	I0731 18:12:54.473787   73800 node_ready.go:49] node "embed-certs-436067" has status "Ready":"True"
	I0731 18:12:54.473810   73800 node_ready.go:38] duration metric: took 11.237808ms for node "embed-certs-436067" to be "Ready" ...
	I0731 18:12:54.473819   73800 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0731 18:12:54.485589   73800 pod_ready.go:78] waiting up to 6m0s for pod "etcd-embed-certs-436067" in "kube-system" namespace to be "Ready" ...
	I0731 18:12:54.507887   73800 pod_ready.go:92] pod "etcd-embed-certs-436067" in "kube-system" namespace has status "Ready":"True"
	I0731 18:12:54.507910   73800 pod_ready.go:81] duration metric: took 22.296215ms for pod "etcd-embed-certs-436067" in "kube-system" namespace to be "Ready" ...
	I0731 18:12:54.507921   73800 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-embed-certs-436067" in "kube-system" namespace to be "Ready" ...
	I0731 18:12:54.524721   73800 pod_ready.go:92] pod "kube-apiserver-embed-certs-436067" in "kube-system" namespace has status "Ready":"True"
	I0731 18:12:54.524742   73800 pod_ready.go:81] duration metric: took 16.814491ms for pod "kube-apiserver-embed-certs-436067" in "kube-system" namespace to be "Ready" ...
	I0731 18:12:54.524751   73800 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-436067" in "kube-system" namespace to be "Ready" ...
	I0731 18:12:54.536810   73800 pod_ready.go:92] pod "kube-controller-manager-embed-certs-436067" in "kube-system" namespace has status "Ready":"True"
	I0731 18:12:54.536837   73800 pod_ready.go:81] duration metric: took 12.078703ms for pod "kube-controller-manager-embed-certs-436067" in "kube-system" namespace to be "Ready" ...
	I0731 18:12:54.536848   73800 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-85spm" in "kube-system" namespace to be "Ready" ...
	I0731 18:12:54.552538   73800 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0731 18:12:54.579223   73800 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0731 18:12:54.579244   73800 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0731 18:12:54.596087   73800 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0731 18:12:54.617180   73800 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0731 18:12:54.617209   73800 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0731 18:12:54.679879   73800 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0731 18:12:54.679908   73800 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0731 18:12:54.775272   73800 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0731 18:12:55.199299   73800 main.go:141] libmachine: Making call to close driver server
	I0731 18:12:55.199335   73800 main.go:141] libmachine: (embed-certs-436067) Calling .Close
	I0731 18:12:55.199342   73800 main.go:141] libmachine: Making call to close driver server
	I0731 18:12:55.199361   73800 main.go:141] libmachine: (embed-certs-436067) Calling .Close
	I0731 18:12:55.199618   73800 main.go:141] libmachine: Successfully made call to close driver server
	I0731 18:12:55.199666   73800 main.go:141] libmachine: Successfully made call to close driver server
	I0731 18:12:55.199678   73800 main.go:141] libmachine: (embed-certs-436067) DBG | Closing plugin on server side
	I0731 18:12:55.199634   73800 main.go:141] libmachine: (embed-certs-436067) DBG | Closing plugin on server side
	I0731 18:12:55.199685   73800 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 18:12:55.199710   73800 main.go:141] libmachine: Making call to close driver server
	I0731 18:12:55.199689   73800 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 18:12:55.199717   73800 main.go:141] libmachine: (embed-certs-436067) Calling .Close
	I0731 18:12:55.199726   73800 main.go:141] libmachine: Making call to close driver server
	I0731 18:12:55.199735   73800 main.go:141] libmachine: (embed-certs-436067) Calling .Close
	I0731 18:12:55.200002   73800 main.go:141] libmachine: Successfully made call to close driver server
	I0731 18:12:55.200016   73800 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 18:12:55.200079   73800 main.go:141] libmachine: (embed-certs-436067) DBG | Closing plugin on server side
	I0731 18:12:55.200107   73800 main.go:141] libmachine: Successfully made call to close driver server
	I0731 18:12:55.200120   73800 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 18:12:55.227472   73800 main.go:141] libmachine: Making call to close driver server
	I0731 18:12:55.227497   73800 main.go:141] libmachine: (embed-certs-436067) Calling .Close
	I0731 18:12:55.227792   73800 main.go:141] libmachine: Successfully made call to close driver server
	I0731 18:12:55.227811   73800 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 18:12:55.712134   73800 main.go:141] libmachine: Making call to close driver server
	I0731 18:12:55.712159   73800 main.go:141] libmachine: (embed-certs-436067) Calling .Close
	I0731 18:12:55.712516   73800 main.go:141] libmachine: Successfully made call to close driver server
	I0731 18:12:55.712568   73800 main.go:141] libmachine: (embed-certs-436067) DBG | Closing plugin on server side
	I0731 18:12:55.712574   73800 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 18:12:55.712596   73800 main.go:141] libmachine: Making call to close driver server
	I0731 18:12:55.712605   73800 main.go:141] libmachine: (embed-certs-436067) Calling .Close
	I0731 18:12:55.712851   73800 main.go:141] libmachine: Successfully made call to close driver server
	I0731 18:12:55.712868   73800 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 18:12:55.712867   73800 main.go:141] libmachine: (embed-certs-436067) DBG | Closing plugin on server side
	I0731 18:12:55.712877   73800 addons.go:475] Verifying addon metrics-server=true in "embed-certs-436067"
	I0731 18:12:55.714432   73800 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0731 18:12:54.143455   73479 pod_ready.go:102] pod "metrics-server-78fcd8795b-27pkr" in "kube-system" namespace has status "Ready":"False"
	I0731 18:12:56.144177   73479 pod_ready.go:102] pod "metrics-server-78fcd8795b-27pkr" in "kube-system" namespace has status "Ready":"False"
	I0731 18:12:55.715903   73800 addons.go:510] duration metric: took 1.461304856s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I0731 18:12:56.542100   73800 pod_ready.go:92] pod "kube-proxy-85spm" in "kube-system" namespace has status "Ready":"True"
	I0731 18:12:56.542122   73800 pod_ready.go:81] duration metric: took 2.005265959s for pod "kube-proxy-85spm" in "kube-system" namespace to be "Ready" ...
	I0731 18:12:56.542135   73800 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-embed-certs-436067" in "kube-system" namespace to be "Ready" ...
	I0731 18:12:56.553810   73800 pod_ready.go:92] pod "kube-scheduler-embed-certs-436067" in "kube-system" namespace has status "Ready":"True"
	I0731 18:12:56.553831   73800 pod_ready.go:81] duration metric: took 11.689814ms for pod "kube-scheduler-embed-certs-436067" in "kube-system" namespace to be "Ready" ...
	I0731 18:12:56.553840   73800 pod_ready.go:38] duration metric: took 2.080010607s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0731 18:12:56.553853   73800 api_server.go:52] waiting for apiserver process to appear ...
	I0731 18:12:56.553899   73800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:12:56.568301   73800 api_server.go:72] duration metric: took 2.313759916s to wait for apiserver process to appear ...
	I0731 18:12:56.568327   73800 api_server.go:88] waiting for apiserver healthz status ...
	I0731 18:12:56.568345   73800 api_server.go:253] Checking apiserver healthz at https://192.168.50.86:8443/healthz ...
	I0731 18:12:56.573861   73800 api_server.go:279] https://192.168.50.86:8443/healthz returned 200:
	ok
	I0731 18:12:56.575494   73800 api_server.go:141] control plane version: v1.30.3
	I0731 18:12:56.575513   73800 api_server.go:131] duration metric: took 7.1795ms to wait for apiserver health ...
	I0731 18:12:56.575520   73800 system_pods.go:43] waiting for kube-system pods to appear ...
	I0731 18:12:56.669169   73800 system_pods.go:59] 9 kube-system pods found
	I0731 18:12:56.669197   73800 system_pods.go:61] "coredns-7db6d8ff4d-fqkfd" [2e5a67a3-6f2b-43a5-8b94-cf48202c5958] Running
	I0731 18:12:56.669202   73800 system_pods.go:61] "coredns-7db6d8ff4d-qpb62" [e57f157b-01b5-42ec-9630-b9b5ae94fe5d] Running
	I0731 18:12:56.669206   73800 system_pods.go:61] "etcd-embed-certs-436067" [d0439817-b307-4673-8a64-34aa31266fbb] Running
	I0731 18:12:56.669210   73800 system_pods.go:61] "kube-apiserver-embed-certs-436067" [c2409e33-d42b-4132-8f63-197c4d445704] Running
	I0731 18:12:56.669214   73800 system_pods.go:61] "kube-controller-manager-embed-certs-436067" [dfea1f44-48a3-4a3c-8a27-be6f01f58add] Running
	I0731 18:12:56.669218   73800 system_pods.go:61] "kube-proxy-85spm" [eecb399a-0365-436d-8f14-a7853a5a2ea3] Running
	I0731 18:12:56.669221   73800 system_pods.go:61] "kube-scheduler-embed-certs-436067" [e406617d-adf7-4baa-9a8a-6fd92847a12f] Running
	I0731 18:12:56.669228   73800 system_pods.go:61] "metrics-server-569cc877fc-pgf6q" [b799fa04-0bf7-4914-9738-964b825577b5] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0731 18:12:56.669231   73800 system_pods.go:61] "storage-provisioner" [a7d95d32-1002-4fba-b0fd-555b872efa1f] Running
	I0731 18:12:56.669240   73800 system_pods.go:74] duration metric: took 93.714593ms to wait for pod list to return data ...
	I0731 18:12:56.669247   73800 default_sa.go:34] waiting for default service account to be created ...
	I0731 18:12:56.866494   73800 default_sa.go:45] found service account: "default"
	I0731 18:12:56.866521   73800 default_sa.go:55] duration metric: took 197.264891ms for default service account to be created ...
	I0731 18:12:56.866532   73800 system_pods.go:116] waiting for k8s-apps to be running ...
	I0731 18:12:57.068903   73800 system_pods.go:86] 9 kube-system pods found
	I0731 18:12:57.068930   73800 system_pods.go:89] "coredns-7db6d8ff4d-fqkfd" [2e5a67a3-6f2b-43a5-8b94-cf48202c5958] Running
	I0731 18:12:57.068936   73800 system_pods.go:89] "coredns-7db6d8ff4d-qpb62" [e57f157b-01b5-42ec-9630-b9b5ae94fe5d] Running
	I0731 18:12:57.068940   73800 system_pods.go:89] "etcd-embed-certs-436067" [d0439817-b307-4673-8a64-34aa31266fbb] Running
	I0731 18:12:57.068944   73800 system_pods.go:89] "kube-apiserver-embed-certs-436067" [c2409e33-d42b-4132-8f63-197c4d445704] Running
	I0731 18:12:57.068948   73800 system_pods.go:89] "kube-controller-manager-embed-certs-436067" [dfea1f44-48a3-4a3c-8a27-be6f01f58add] Running
	I0731 18:12:57.068951   73800 system_pods.go:89] "kube-proxy-85spm" [eecb399a-0365-436d-8f14-a7853a5a2ea3] Running
	I0731 18:12:57.068955   73800 system_pods.go:89] "kube-scheduler-embed-certs-436067" [e406617d-adf7-4baa-9a8a-6fd92847a12f] Running
	I0731 18:12:57.068961   73800 system_pods.go:89] "metrics-server-569cc877fc-pgf6q" [b799fa04-0bf7-4914-9738-964b825577b5] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0731 18:12:57.068965   73800 system_pods.go:89] "storage-provisioner" [a7d95d32-1002-4fba-b0fd-555b872efa1f] Running
	I0731 18:12:57.068972   73800 system_pods.go:126] duration metric: took 202.435205ms to wait for k8s-apps to be running ...
	I0731 18:12:57.068980   73800 system_svc.go:44] waiting for kubelet service to be running ....
	I0731 18:12:57.069018   73800 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0731 18:12:57.083728   73800 system_svc.go:56] duration metric: took 14.739831ms WaitForService to wait for kubelet
	I0731 18:12:57.083756   73800 kubeadm.go:582] duration metric: took 2.829227102s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0731 18:12:57.083782   73800 node_conditions.go:102] verifying NodePressure condition ...
	I0731 18:12:57.266463   73800 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0731 18:12:57.266486   73800 node_conditions.go:123] node cpu capacity is 2
	I0731 18:12:57.266495   73800 node_conditions.go:105] duration metric: took 182.707869ms to run NodePressure ...
	I0731 18:12:57.266505   73800 start.go:241] waiting for startup goroutines ...
	I0731 18:12:57.266512   73800 start.go:246] waiting for cluster config update ...
	I0731 18:12:57.266521   73800 start.go:255] writing updated cluster config ...
	I0731 18:12:57.266767   73800 ssh_runner.go:195] Run: rm -f paused
	I0731 18:12:57.313723   73800 start.go:600] kubectl: 1.30.3, cluster: 1.30.3 (minor skew: 0)
	I0731 18:12:57.315966   73800 out.go:177] * Done! kubectl is now configured to use "embed-certs-436067" cluster and "default" namespace by default
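
The healthz check logged above (api_server.go:253/279) is simply an HTTPS GET against the apiserver's /healthz endpoint, where an HTTP 200 response with the body "ok" is treated as healthy. A minimal sketch of such a probe in Go, assuming a self-signed serving certificate and reusing the node address reported in this run (192.168.50.86:8443), might look like:

    package main

    import (
            "crypto/tls"
            "fmt"
            "io"
            "net/http"
            "time"
    )

    func main() {
            // Probe the apiserver the same way the log above does: GET /healthz
            // and treat HTTP 200 with body "ok" as healthy. The address is the
            // node IP/port reported in this run; substitute your own.
            client := &http.Client{
                    Timeout: 5 * time.Second,
                    // Skip certificate verification for a quick probe; a real
                    // client would trust the cluster CA instead.
                    Transport: &http.Transport{
                            TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
                    },
            }
            resp, err := client.Get("https://192.168.50.86:8443/healthz")
            if err != nil {
                    fmt.Println("healthz check failed:", err)
                    return
            }
            defer resp.Body.Close()
            body, _ := io.ReadAll(resp.Body)
            fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body)
    }

This is only an illustrative probe, not minikube's implementation; it shows why the log can print "returned 200:" followed by "ok" on the next line.
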
	I0731 18:12:58.652853   74203 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0731 18:12:58.653480   74203 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0731 18:12:58.653735   74203 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0731 18:12:58.643237   73479 pod_ready.go:102] pod "metrics-server-78fcd8795b-27pkr" in "kube-system" namespace has status "Ready":"False"
	I0731 18:13:01.143274   73479 pod_ready.go:102] pod "metrics-server-78fcd8795b-27pkr" in "kube-system" namespace has status "Ready":"False"
	I0731 18:13:01.643357   73479 pod_ready.go:81] duration metric: took 4m0.006506347s for pod "metrics-server-78fcd8795b-27pkr" in "kube-system" namespace to be "Ready" ...
	E0731 18:13:01.643382   73479 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0731 18:13:01.643388   73479 pod_ready.go:38] duration metric: took 4m7.418860701s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0731 18:13:01.643402   73479 api_server.go:52] waiting for apiserver process to appear ...
	I0731 18:13:01.643428   73479 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 18:13:01.643481   73479 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 18:13:01.692071   73479 cri.go:89] found id: "895465d024797e36349af3a0f7cd0689ec7e813d811a3205a301baae081986e0"
	I0731 18:13:01.692092   73479 cri.go:89] found id: ""
	I0731 18:13:01.692101   73479 logs.go:276] 1 containers: [895465d024797e36349af3a0f7cd0689ec7e813d811a3205a301baae081986e0]
	I0731 18:13:01.692159   73479 ssh_runner.go:195] Run: which crictl
	I0731 18:13:01.697266   73479 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 18:13:01.697356   73479 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 18:13:01.736299   73479 cri.go:89] found id: "65ef90d7b082adf059c9973e7f1dfde4770ed9718d440c8e0d525d427b16f645"
	I0731 18:13:01.736350   73479 cri.go:89] found id: ""
	I0731 18:13:01.736360   73479 logs.go:276] 1 containers: [65ef90d7b082adf059c9973e7f1dfde4770ed9718d440c8e0d525d427b16f645]
	I0731 18:13:01.736417   73479 ssh_runner.go:195] Run: which crictl
	I0731 18:13:01.740672   73479 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 18:13:01.740733   73479 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 18:13:01.774782   73479 cri.go:89] found id: "f043eb2392c22ee26a4729969283f278a4dc7ff4784addf7139958beb2cba855"
	I0731 18:13:01.774816   73479 cri.go:89] found id: ""
	I0731 18:13:01.774826   73479 logs.go:276] 1 containers: [f043eb2392c22ee26a4729969283f278a4dc7ff4784addf7139958beb2cba855]
	I0731 18:13:01.774893   73479 ssh_runner.go:195] Run: which crictl
	I0731 18:13:01.778542   73479 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 18:13:01.778618   73479 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 18:13:01.818749   73479 cri.go:89] found id: "ed1c40e21d8aa2b6321516625802154255f5eda1642411e377150b8df1c88bd2"
	I0731 18:13:01.818769   73479 cri.go:89] found id: ""
	I0731 18:13:01.818776   73479 logs.go:276] 1 containers: [ed1c40e21d8aa2b6321516625802154255f5eda1642411e377150b8df1c88bd2]
	I0731 18:13:01.818828   73479 ssh_runner.go:195] Run: which crictl
	I0731 18:13:01.827176   73479 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 18:13:01.827248   73479 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 18:13:01.860700   73479 cri.go:89] found id: "57bdb8e09be40a0925d7a86eb613ef8a0455415666562743e36e6800b8636847"
	I0731 18:13:01.860730   73479 cri.go:89] found id: ""
	I0731 18:13:01.860739   73479 logs.go:276] 1 containers: [57bdb8e09be40a0925d7a86eb613ef8a0455415666562743e36e6800b8636847]
	I0731 18:13:01.860825   73479 ssh_runner.go:195] Run: which crictl
	I0731 18:13:03.654494   74203 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0731 18:13:03.654747   74203 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0731 18:13:01.864629   73479 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 18:13:01.864702   73479 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 18:13:01.899293   73479 cri.go:89] found id: "ee75a53c57652705a335378f60ff96b2fce4543143edc35249ac027c7c6e4a2c"
	I0731 18:13:01.899338   73479 cri.go:89] found id: ""
	I0731 18:13:01.899347   73479 logs.go:276] 1 containers: [ee75a53c57652705a335378f60ff96b2fce4543143edc35249ac027c7c6e4a2c]
	I0731 18:13:01.899406   73479 ssh_runner.go:195] Run: which crictl
	I0731 18:13:01.903202   73479 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 18:13:01.903272   73479 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 18:13:01.934472   73479 cri.go:89] found id: ""
	I0731 18:13:01.934505   73479 logs.go:276] 0 containers: []
	W0731 18:13:01.934516   73479 logs.go:278] No container was found matching "kindnet"
	I0731 18:13:01.934523   73479 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0731 18:13:01.934588   73479 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0731 18:13:01.967244   73479 cri.go:89] found id: "6f311536202ba854461dd5abcc13c5b8d006399426b52e0c8d10bc41f06826df"
	I0731 18:13:01.967271   73479 cri.go:89] found id: "9ea2bc105f57ae4d05e4535f2851a35b9014c7574a63fef533fa0c487002941c"
	I0731 18:13:01.967276   73479 cri.go:89] found id: ""
	I0731 18:13:01.967285   73479 logs.go:276] 2 containers: [6f311536202ba854461dd5abcc13c5b8d006399426b52e0c8d10bc41f06826df 9ea2bc105f57ae4d05e4535f2851a35b9014c7574a63fef533fa0c487002941c]
	I0731 18:13:01.967349   73479 ssh_runner.go:195] Run: which crictl
	I0731 18:13:01.971167   73479 ssh_runner.go:195] Run: which crictl
	I0731 18:13:01.975648   73479 logs.go:123] Gathering logs for kubelet ...
	I0731 18:13:01.975670   73479 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 18:13:02.031430   73479 logs.go:123] Gathering logs for describe nodes ...
	I0731 18:13:02.031472   73479 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 18:13:02.158774   73479 logs.go:123] Gathering logs for kube-apiserver [895465d024797e36349af3a0f7cd0689ec7e813d811a3205a301baae081986e0] ...
	I0731 18:13:02.158803   73479 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 895465d024797e36349af3a0f7cd0689ec7e813d811a3205a301baae081986e0"
	I0731 18:13:02.199495   73479 logs.go:123] Gathering logs for kube-scheduler [ed1c40e21d8aa2b6321516625802154255f5eda1642411e377150b8df1c88bd2] ...
	I0731 18:13:02.199521   73479 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ed1c40e21d8aa2b6321516625802154255f5eda1642411e377150b8df1c88bd2"
	I0731 18:13:02.232285   73479 logs.go:123] Gathering logs for kube-proxy [57bdb8e09be40a0925d7a86eb613ef8a0455415666562743e36e6800b8636847] ...
	I0731 18:13:02.232327   73479 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 57bdb8e09be40a0925d7a86eb613ef8a0455415666562743e36e6800b8636847"
	I0731 18:13:02.272360   73479 logs.go:123] Gathering logs for storage-provisioner [9ea2bc105f57ae4d05e4535f2851a35b9014c7574a63fef533fa0c487002941c] ...
	I0731 18:13:02.272389   73479 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9ea2bc105f57ae4d05e4535f2851a35b9014c7574a63fef533fa0c487002941c"
	I0731 18:13:02.305902   73479 logs.go:123] Gathering logs for dmesg ...
	I0731 18:13:02.305931   73479 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 18:13:02.319954   73479 logs.go:123] Gathering logs for etcd [65ef90d7b082adf059c9973e7f1dfde4770ed9718d440c8e0d525d427b16f645] ...
	I0731 18:13:02.319984   73479 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 65ef90d7b082adf059c9973e7f1dfde4770ed9718d440c8e0d525d427b16f645"
	I0731 18:13:02.361657   73479 logs.go:123] Gathering logs for coredns [f043eb2392c22ee26a4729969283f278a4dc7ff4784addf7139958beb2cba855] ...
	I0731 18:13:02.361685   73479 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f043eb2392c22ee26a4729969283f278a4dc7ff4784addf7139958beb2cba855"
	I0731 18:13:02.395696   73479 logs.go:123] Gathering logs for kube-controller-manager [ee75a53c57652705a335378f60ff96b2fce4543143edc35249ac027c7c6e4a2c] ...
	I0731 18:13:02.395724   73479 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ee75a53c57652705a335378f60ff96b2fce4543143edc35249ac027c7c6e4a2c"
	I0731 18:13:02.444671   73479 logs.go:123] Gathering logs for storage-provisioner [6f311536202ba854461dd5abcc13c5b8d006399426b52e0c8d10bc41f06826df] ...
	I0731 18:13:02.444704   73479 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6f311536202ba854461dd5abcc13c5b8d006399426b52e0c8d10bc41f06826df"
	I0731 18:13:02.480666   73479 logs.go:123] Gathering logs for CRI-O ...
	I0731 18:13:02.480693   73479 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 18:13:02.967693   73479 logs.go:123] Gathering logs for container status ...
	I0731 18:13:02.967741   73479 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 18:13:05.512381   73479 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:13:05.528582   73479 api_server.go:72] duration metric: took 4m19.030809429s to wait for apiserver process to appear ...
	I0731 18:13:05.528612   73479 api_server.go:88] waiting for apiserver healthz status ...
	I0731 18:13:05.528652   73479 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 18:13:05.528730   73479 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 18:13:05.567984   73479 cri.go:89] found id: "895465d024797e36349af3a0f7cd0689ec7e813d811a3205a301baae081986e0"
	I0731 18:13:05.568004   73479 cri.go:89] found id: ""
	I0731 18:13:05.568013   73479 logs.go:276] 1 containers: [895465d024797e36349af3a0f7cd0689ec7e813d811a3205a301baae081986e0]
	I0731 18:13:05.568073   73479 ssh_runner.go:195] Run: which crictl
	I0731 18:13:05.571946   73479 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 18:13:05.572003   73479 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 18:13:05.620468   73479 cri.go:89] found id: "65ef90d7b082adf059c9973e7f1dfde4770ed9718d440c8e0d525d427b16f645"
	I0731 18:13:05.620495   73479 cri.go:89] found id: ""
	I0731 18:13:05.620504   73479 logs.go:276] 1 containers: [65ef90d7b082adf059c9973e7f1dfde4770ed9718d440c8e0d525d427b16f645]
	I0731 18:13:05.620571   73479 ssh_runner.go:195] Run: which crictl
	I0731 18:13:05.624599   73479 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 18:13:05.624653   73479 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 18:13:05.663717   73479 cri.go:89] found id: "f043eb2392c22ee26a4729969283f278a4dc7ff4784addf7139958beb2cba855"
	I0731 18:13:05.663740   73479 cri.go:89] found id: ""
	I0731 18:13:05.663748   73479 logs.go:276] 1 containers: [f043eb2392c22ee26a4729969283f278a4dc7ff4784addf7139958beb2cba855]
	I0731 18:13:05.663803   73479 ssh_runner.go:195] Run: which crictl
	I0731 18:13:05.667601   73479 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 18:13:05.667672   73479 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 18:13:05.699764   73479 cri.go:89] found id: "ed1c40e21d8aa2b6321516625802154255f5eda1642411e377150b8df1c88bd2"
	I0731 18:13:05.699791   73479 cri.go:89] found id: ""
	I0731 18:13:05.699801   73479 logs.go:276] 1 containers: [ed1c40e21d8aa2b6321516625802154255f5eda1642411e377150b8df1c88bd2]
	I0731 18:13:05.699858   73479 ssh_runner.go:195] Run: which crictl
	I0731 18:13:05.703965   73479 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 18:13:05.704036   73479 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 18:13:05.739460   73479 cri.go:89] found id: "57bdb8e09be40a0925d7a86eb613ef8a0455415666562743e36e6800b8636847"
	I0731 18:13:05.739487   73479 cri.go:89] found id: ""
	I0731 18:13:05.739496   73479 logs.go:276] 1 containers: [57bdb8e09be40a0925d7a86eb613ef8a0455415666562743e36e6800b8636847]
	I0731 18:13:05.739558   73479 ssh_runner.go:195] Run: which crictl
	I0731 18:13:05.743180   73479 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 18:13:05.743232   73479 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 18:13:05.777369   73479 cri.go:89] found id: "ee75a53c57652705a335378f60ff96b2fce4543143edc35249ac027c7c6e4a2c"
	I0731 18:13:05.777390   73479 cri.go:89] found id: ""
	I0731 18:13:05.777397   73479 logs.go:276] 1 containers: [ee75a53c57652705a335378f60ff96b2fce4543143edc35249ac027c7c6e4a2c]
	I0731 18:13:05.777449   73479 ssh_runner.go:195] Run: which crictl
	I0731 18:13:05.781388   73479 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 18:13:05.781435   73479 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 18:13:05.825567   73479 cri.go:89] found id: ""
	I0731 18:13:05.825599   73479 logs.go:276] 0 containers: []
	W0731 18:13:05.825610   73479 logs.go:278] No container was found matching "kindnet"
	I0731 18:13:05.825617   73479 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0731 18:13:05.825689   73479 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0731 18:13:05.859538   73479 cri.go:89] found id: "6f311536202ba854461dd5abcc13c5b8d006399426b52e0c8d10bc41f06826df"
	I0731 18:13:05.859570   73479 cri.go:89] found id: "9ea2bc105f57ae4d05e4535f2851a35b9014c7574a63fef533fa0c487002941c"
	I0731 18:13:05.859577   73479 cri.go:89] found id: ""
	I0731 18:13:05.859586   73479 logs.go:276] 2 containers: [6f311536202ba854461dd5abcc13c5b8d006399426b52e0c8d10bc41f06826df 9ea2bc105f57ae4d05e4535f2851a35b9014c7574a63fef533fa0c487002941c]
	I0731 18:13:05.859657   73479 ssh_runner.go:195] Run: which crictl
	I0731 18:13:05.863513   73479 ssh_runner.go:195] Run: which crictl
	I0731 18:13:05.866989   73479 logs.go:123] Gathering logs for CRI-O ...
	I0731 18:13:05.867011   73479 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 18:13:06.314116   73479 logs.go:123] Gathering logs for container status ...
	I0731 18:13:06.314166   73479 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 18:13:06.357738   73479 logs.go:123] Gathering logs for kubelet ...
	I0731 18:13:06.357764   73479 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 18:13:06.407330   73479 logs.go:123] Gathering logs for describe nodes ...
	I0731 18:13:06.407365   73479 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 18:13:06.508580   73479 logs.go:123] Gathering logs for kube-apiserver [895465d024797e36349af3a0f7cd0689ec7e813d811a3205a301baae081986e0] ...
	I0731 18:13:06.508616   73479 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 895465d024797e36349af3a0f7cd0689ec7e813d811a3205a301baae081986e0"
	I0731 18:13:06.550032   73479 logs.go:123] Gathering logs for coredns [f043eb2392c22ee26a4729969283f278a4dc7ff4784addf7139958beb2cba855] ...
	I0731 18:13:06.550071   73479 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f043eb2392c22ee26a4729969283f278a4dc7ff4784addf7139958beb2cba855"
	I0731 18:13:06.588519   73479 logs.go:123] Gathering logs for kube-proxy [57bdb8e09be40a0925d7a86eb613ef8a0455415666562743e36e6800b8636847] ...
	I0731 18:13:06.588548   73479 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 57bdb8e09be40a0925d7a86eb613ef8a0455415666562743e36e6800b8636847"
	I0731 18:13:06.622872   73479 logs.go:123] Gathering logs for storage-provisioner [6f311536202ba854461dd5abcc13c5b8d006399426b52e0c8d10bc41f06826df] ...
	I0731 18:13:06.622901   73479 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6f311536202ba854461dd5abcc13c5b8d006399426b52e0c8d10bc41f06826df"
	I0731 18:13:06.666694   73479 logs.go:123] Gathering logs for dmesg ...
	I0731 18:13:06.666721   73479 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 18:13:06.680326   73479 logs.go:123] Gathering logs for etcd [65ef90d7b082adf059c9973e7f1dfde4770ed9718d440c8e0d525d427b16f645] ...
	I0731 18:13:06.680355   73479 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 65ef90d7b082adf059c9973e7f1dfde4770ed9718d440c8e0d525d427b16f645"
	I0731 18:13:06.723966   73479 logs.go:123] Gathering logs for kube-scheduler [ed1c40e21d8aa2b6321516625802154255f5eda1642411e377150b8df1c88bd2] ...
	I0731 18:13:06.723997   73479 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ed1c40e21d8aa2b6321516625802154255f5eda1642411e377150b8df1c88bd2"
	I0731 18:13:06.760873   73479 logs.go:123] Gathering logs for kube-controller-manager [ee75a53c57652705a335378f60ff96b2fce4543143edc35249ac027c7c6e4a2c] ...
	I0731 18:13:06.760901   73479 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ee75a53c57652705a335378f60ff96b2fce4543143edc35249ac027c7c6e4a2c"
	I0731 18:13:06.809348   73479 logs.go:123] Gathering logs for storage-provisioner [9ea2bc105f57ae4d05e4535f2851a35b9014c7574a63fef533fa0c487002941c] ...
	I0731 18:13:06.809387   73479 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9ea2bc105f57ae4d05e4535f2851a35b9014c7574a63fef533fa0c487002941c"
	I0731 18:13:09.341394   73479 api_server.go:253] Checking apiserver healthz at https://192.168.61.126:8443/healthz ...
	I0731 18:13:09.346642   73479 api_server.go:279] https://192.168.61.126:8443/healthz returned 200:
	ok
	I0731 18:13:09.347803   73479 api_server.go:141] control plane version: v1.31.0-beta.0
	I0731 18:13:09.347821   73479 api_server.go:131] duration metric: took 3.819202346s to wait for apiserver health ...
	I0731 18:13:09.347828   73479 system_pods.go:43] waiting for kube-system pods to appear ...
	I0731 18:13:09.347850   73479 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 18:13:09.347903   73479 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 18:13:09.391857   73479 cri.go:89] found id: "895465d024797e36349af3a0f7cd0689ec7e813d811a3205a301baae081986e0"
	I0731 18:13:09.391885   73479 cri.go:89] found id: ""
	I0731 18:13:09.391895   73479 logs.go:276] 1 containers: [895465d024797e36349af3a0f7cd0689ec7e813d811a3205a301baae081986e0]
	I0731 18:13:09.391956   73479 ssh_runner.go:195] Run: which crictl
	I0731 18:13:09.395723   73479 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 18:13:09.395789   73479 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 18:13:09.430108   73479 cri.go:89] found id: "65ef90d7b082adf059c9973e7f1dfde4770ed9718d440c8e0d525d427b16f645"
	I0731 18:13:09.430128   73479 cri.go:89] found id: ""
	I0731 18:13:09.430135   73479 logs.go:276] 1 containers: [65ef90d7b082adf059c9973e7f1dfde4770ed9718d440c8e0d525d427b16f645]
	I0731 18:13:09.430180   73479 ssh_runner.go:195] Run: which crictl
	I0731 18:13:09.433933   73479 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 18:13:09.434037   73479 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 18:13:09.471630   73479 cri.go:89] found id: "f043eb2392c22ee26a4729969283f278a4dc7ff4784addf7139958beb2cba855"
	I0731 18:13:09.471655   73479 cri.go:89] found id: ""
	I0731 18:13:09.471663   73479 logs.go:276] 1 containers: [f043eb2392c22ee26a4729969283f278a4dc7ff4784addf7139958beb2cba855]
	I0731 18:13:09.471709   73479 ssh_runner.go:195] Run: which crictl
	I0731 18:13:09.476432   73479 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 18:13:09.476496   73479 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 18:13:09.519568   73479 cri.go:89] found id: "ed1c40e21d8aa2b6321516625802154255f5eda1642411e377150b8df1c88bd2"
	I0731 18:13:09.519590   73479 cri.go:89] found id: ""
	I0731 18:13:09.519598   73479 logs.go:276] 1 containers: [ed1c40e21d8aa2b6321516625802154255f5eda1642411e377150b8df1c88bd2]
	I0731 18:13:09.519641   73479 ssh_runner.go:195] Run: which crictl
	I0731 18:13:09.523587   73479 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 18:13:09.523656   73479 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 18:13:09.559405   73479 cri.go:89] found id: "57bdb8e09be40a0925d7a86eb613ef8a0455415666562743e36e6800b8636847"
	I0731 18:13:09.559429   73479 cri.go:89] found id: ""
	I0731 18:13:09.559438   73479 logs.go:276] 1 containers: [57bdb8e09be40a0925d7a86eb613ef8a0455415666562743e36e6800b8636847]
	I0731 18:13:09.559485   73479 ssh_runner.go:195] Run: which crictl
	I0731 18:13:09.564137   73479 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 18:13:09.564199   73479 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 18:13:09.605298   73479 cri.go:89] found id: "ee75a53c57652705a335378f60ff96b2fce4543143edc35249ac027c7c6e4a2c"
	I0731 18:13:09.605324   73479 cri.go:89] found id: ""
	I0731 18:13:09.605332   73479 logs.go:276] 1 containers: [ee75a53c57652705a335378f60ff96b2fce4543143edc35249ac027c7c6e4a2c]
	I0731 18:13:09.605403   73479 ssh_runner.go:195] Run: which crictl
	I0731 18:13:09.612233   73479 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 18:13:09.612296   73479 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 18:13:09.648804   73479 cri.go:89] found id: ""
	I0731 18:13:09.648836   73479 logs.go:276] 0 containers: []
	W0731 18:13:09.648848   73479 logs.go:278] No container was found matching "kindnet"
	I0731 18:13:09.648855   73479 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0731 18:13:09.648916   73479 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0731 18:13:09.694708   73479 cri.go:89] found id: "6f311536202ba854461dd5abcc13c5b8d006399426b52e0c8d10bc41f06826df"
	I0731 18:13:09.694733   73479 cri.go:89] found id: "9ea2bc105f57ae4d05e4535f2851a35b9014c7574a63fef533fa0c487002941c"
	I0731 18:13:09.694737   73479 cri.go:89] found id: ""
	I0731 18:13:09.694743   73479 logs.go:276] 2 containers: [6f311536202ba854461dd5abcc13c5b8d006399426b52e0c8d10bc41f06826df 9ea2bc105f57ae4d05e4535f2851a35b9014c7574a63fef533fa0c487002941c]
	I0731 18:13:09.694794   73479 ssh_runner.go:195] Run: which crictl
	I0731 18:13:09.698687   73479 ssh_runner.go:195] Run: which crictl
	I0731 18:13:09.702244   73479 logs.go:123] Gathering logs for storage-provisioner [6f311536202ba854461dd5abcc13c5b8d006399426b52e0c8d10bc41f06826df] ...
	I0731 18:13:09.702261   73479 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6f311536202ba854461dd5abcc13c5b8d006399426b52e0c8d10bc41f06826df"
	I0731 18:13:09.737777   73479 logs.go:123] Gathering logs for storage-provisioner [9ea2bc105f57ae4d05e4535f2851a35b9014c7574a63fef533fa0c487002941c] ...
	I0731 18:13:09.737808   73479 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9ea2bc105f57ae4d05e4535f2851a35b9014c7574a63fef533fa0c487002941c"
	I0731 18:13:09.771128   73479 logs.go:123] Gathering logs for container status ...
	I0731 18:13:09.771161   73479 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 18:13:09.817498   73479 logs.go:123] Gathering logs for dmesg ...
	I0731 18:13:09.817525   73479 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 18:13:09.833574   73479 logs.go:123] Gathering logs for etcd [65ef90d7b082adf059c9973e7f1dfde4770ed9718d440c8e0d525d427b16f645] ...
	I0731 18:13:09.833607   73479 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 65ef90d7b082adf059c9973e7f1dfde4770ed9718d440c8e0d525d427b16f645"
	I0731 18:13:09.872664   73479 logs.go:123] Gathering logs for kube-scheduler [ed1c40e21d8aa2b6321516625802154255f5eda1642411e377150b8df1c88bd2] ...
	I0731 18:13:09.872691   73479 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ed1c40e21d8aa2b6321516625802154255f5eda1642411e377150b8df1c88bd2"
	I0731 18:13:09.913741   73479 logs.go:123] Gathering logs for coredns [f043eb2392c22ee26a4729969283f278a4dc7ff4784addf7139958beb2cba855] ...
	I0731 18:13:09.913771   73479 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f043eb2392c22ee26a4729969283f278a4dc7ff4784addf7139958beb2cba855"
	I0731 18:13:09.949469   73479 logs.go:123] Gathering logs for kube-proxy [57bdb8e09be40a0925d7a86eb613ef8a0455415666562743e36e6800b8636847] ...
	I0731 18:13:09.949512   73479 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 57bdb8e09be40a0925d7a86eb613ef8a0455415666562743e36e6800b8636847"
	I0731 18:13:09.985409   73479 logs.go:123] Gathering logs for kube-controller-manager [ee75a53c57652705a335378f60ff96b2fce4543143edc35249ac027c7c6e4a2c] ...
	I0731 18:13:09.985447   73479 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ee75a53c57652705a335378f60ff96b2fce4543143edc35249ac027c7c6e4a2c"
	I0731 18:13:10.039018   73479 logs.go:123] Gathering logs for CRI-O ...
	I0731 18:13:10.039048   73479 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 18:13:10.406380   73479 logs.go:123] Gathering logs for kubelet ...
	I0731 18:13:10.406416   73479 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 18:13:10.459944   73479 logs.go:123] Gathering logs for describe nodes ...
	I0731 18:13:10.459997   73479 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 18:13:10.564092   73479 logs.go:123] Gathering logs for kube-apiserver [895465d024797e36349af3a0f7cd0689ec7e813d811a3205a301baae081986e0] ...
	I0731 18:13:10.564134   73479 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 895465d024797e36349af3a0f7cd0689ec7e813d811a3205a301baae081986e0"
	I0731 18:13:13.124074   73479 system_pods.go:59] 8 kube-system pods found
	I0731 18:13:13.124102   73479 system_pods.go:61] "coredns-5cfdc65f69-k7clq" [e4be77b6-aa7c-45e2-90a1-6a8264fd5101] Running
	I0731 18:13:13.124107   73479 system_pods.go:61] "etcd-no-preload-673754" [d64e9194-dd33-4a28-b8c3-d25462f1dae8] Running
	I0731 18:13:13.124110   73479 system_pods.go:61] "kube-apiserver-no-preload-673754" [4497614d-51de-4a5f-93dc-1397446fe4c8] Running
	I0731 18:13:13.124114   73479 system_pods.go:61] "kube-controller-manager-no-preload-673754" [fe7f714e-d778-49e7-b4ef-44e6efb5bc33] Running
	I0731 18:13:13.124117   73479 system_pods.go:61] "kube-proxy-hqxh6" [4fb623c0-4be3-421c-85d6-1d76a90b874f] Running
	I0731 18:13:13.124119   73479 system_pods.go:61] "kube-scheduler-no-preload-673754" [558e9b78-a6a9-46b8-9db8-004126359c74] Running
	I0731 18:13:13.124125   73479 system_pods.go:61] "metrics-server-78fcd8795b-27pkr" [a63a156b-0446-4bd9-8619-de75edaeb481] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0731 18:13:13.124129   73479 system_pods.go:61] "storage-provisioner" [a9f0ab70-18f4-4fac-b858-a9177077fe29] Running
	I0731 18:13:13.124135   73479 system_pods.go:74] duration metric: took 3.776302431s to wait for pod list to return data ...
	I0731 18:13:13.124141   73479 default_sa.go:34] waiting for default service account to be created ...
	I0731 18:13:13.127100   73479 default_sa.go:45] found service account: "default"
	I0731 18:13:13.127137   73479 default_sa.go:55] duration metric: took 2.989455ms for default service account to be created ...
	I0731 18:13:13.127148   73479 system_pods.go:116] waiting for k8s-apps to be running ...
	I0731 18:13:13.132359   73479 system_pods.go:86] 8 kube-system pods found
	I0731 18:13:13.132379   73479 system_pods.go:89] "coredns-5cfdc65f69-k7clq" [e4be77b6-aa7c-45e2-90a1-6a8264fd5101] Running
	I0731 18:13:13.132387   73479 system_pods.go:89] "etcd-no-preload-673754" [d64e9194-dd33-4a28-b8c3-d25462f1dae8] Running
	I0731 18:13:13.132393   73479 system_pods.go:89] "kube-apiserver-no-preload-673754" [4497614d-51de-4a5f-93dc-1397446fe4c8] Running
	I0731 18:13:13.132399   73479 system_pods.go:89] "kube-controller-manager-no-preload-673754" [fe7f714e-d778-49e7-b4ef-44e6efb5bc33] Running
	I0731 18:13:13.132405   73479 system_pods.go:89] "kube-proxy-hqxh6" [4fb623c0-4be3-421c-85d6-1d76a90b874f] Running
	I0731 18:13:13.132410   73479 system_pods.go:89] "kube-scheduler-no-preload-673754" [558e9b78-a6a9-46b8-9db8-004126359c74] Running
	I0731 18:13:13.132420   73479 system_pods.go:89] "metrics-server-78fcd8795b-27pkr" [a63a156b-0446-4bd9-8619-de75edaeb481] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0731 18:13:13.132427   73479 system_pods.go:89] "storage-provisioner" [a9f0ab70-18f4-4fac-b858-a9177077fe29] Running
	I0731 18:13:13.132435   73479 system_pods.go:126] duration metric: took 5.281138ms to wait for k8s-apps to be running ...
	I0731 18:13:13.132443   73479 system_svc.go:44] waiting for kubelet service to be running ....
	I0731 18:13:13.132488   73479 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0731 18:13:13.148254   73479 system_svc.go:56] duration metric: took 15.802724ms WaitForService to wait for kubelet
	I0731 18:13:13.148281   73479 kubeadm.go:582] duration metric: took 4m26.650509962s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0731 18:13:13.148315   73479 node_conditions.go:102] verifying NodePressure condition ...
	I0731 18:13:13.151986   73479 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0731 18:13:13.152006   73479 node_conditions.go:123] node cpu capacity is 2
	I0731 18:13:13.152018   73479 node_conditions.go:105] duration metric: took 3.693857ms to run NodePressure ...
	I0731 18:13:13.152031   73479 start.go:241] waiting for startup goroutines ...
	I0731 18:13:13.152043   73479 start.go:246] waiting for cluster config update ...
	I0731 18:13:13.152058   73479 start.go:255] writing updated cluster config ...
	I0731 18:13:13.152347   73479 ssh_runner.go:195] Run: rm -f paused
	I0731 18:13:13.202434   73479 start.go:600] kubectl: 1.30.3, cluster: 1.31.0-beta.0 (minor skew: 1)
	I0731 18:13:13.205205   73479 out.go:177] * Done! kubectl is now configured to use "no-preload-673754" cluster and "default" namespace by default
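
The repeated "Gathering logs for ..." steps above shell out to crictl once per control-plane container (kube-apiserver, etcd, coredns, and so on), tailing the last 400 lines of each. A rough sketch of that pattern in Go, assuming crictl is installed at /usr/bin/crictl and borrowing the kube-apiserver container ID from this run, could be:

    package main

    import (
            "fmt"
            "os/exec"
    )

    // tailContainerLogs runs `sudo crictl logs --tail <n> <id>`, mirroring
    // the per-container log-gathering steps in the output above.
    func tailContainerLogs(id string, tail int) (string, error) {
            out, err := exec.Command("sudo", "/usr/bin/crictl", "logs", "--tail",
                    fmt.Sprint(tail), id).CombinedOutput()
            return string(out), err
    }

    func main() {
            // Container ID taken from this run's kube-apiserver entry.
            logs, err := tailContainerLogs(
                    "895465d024797e36349af3a0f7cd0689ec7e813d811a3205a301baae081986e0", 400)
            if err != nil {
                    fmt.Println("crictl failed:", err)
            }
            fmt.Print(logs)
    }

This is a hypothetical sketch of the same idea, not the code path the test harness uses.
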
	I0731 18:13:13.655618   74203 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0731 18:13:13.655843   74203 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0731 18:13:33.657356   74203 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0731 18:13:33.657560   74203 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0731 18:14:13.660934   74203 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0731 18:14:13.661161   74203 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0731 18:14:13.661183   74203 kubeadm.go:310] 
	I0731 18:14:13.661216   74203 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0731 18:14:13.661251   74203 kubeadm.go:310] 		timed out waiting for the condition
	I0731 18:14:13.661279   74203 kubeadm.go:310] 
	I0731 18:14:13.661338   74203 kubeadm.go:310] 	This error is likely caused by:
	I0731 18:14:13.661378   74203 kubeadm.go:310] 		- The kubelet is not running
	I0731 18:14:13.661477   74203 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0731 18:14:13.661483   74203 kubeadm.go:310] 
	I0731 18:14:13.661577   74203 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0731 18:14:13.661617   74203 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0731 18:14:13.661646   74203 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0731 18:14:13.661651   74203 kubeadm.go:310] 
	I0731 18:14:13.661781   74203 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0731 18:14:13.661897   74203 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0731 18:14:13.661909   74203 kubeadm.go:310] 
	I0731 18:14:13.662044   74203 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0731 18:14:13.662164   74203 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0731 18:14:13.662265   74203 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0731 18:14:13.662444   74203 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0731 18:14:13.662477   74203 kubeadm.go:310] 
	I0731 18:14:13.663123   74203 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0731 18:14:13.663235   74203 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0731 18:14:13.663331   74203 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	W0731 18:14:13.663497   74203 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0731 18:14:13.663559   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0731 18:14:18.956376   74203 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (5.292787213s)
	I0731 18:14:18.956479   74203 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0731 18:14:18.970820   74203 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0731 18:14:18.980747   74203 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0731 18:14:18.980771   74203 kubeadm.go:157] found existing configuration files:
	
	I0731 18:14:18.980816   74203 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0731 18:14:18.989985   74203 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0731 18:14:18.990063   74203 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0731 18:14:18.999143   74203 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0731 18:14:19.008740   74203 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0731 18:14:19.008798   74203 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0731 18:14:19.018729   74203 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0731 18:14:19.028953   74203 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0731 18:14:19.029015   74203 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0731 18:14:19.039399   74203 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0731 18:14:19.049072   74203 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0731 18:14:19.049124   74203 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0731 18:14:19.059592   74203 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0731 18:14:19.121542   74203 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0731 18:14:19.121613   74203 kubeadm.go:310] [preflight] Running pre-flight checks
	I0731 18:14:19.271989   74203 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0731 18:14:19.272100   74203 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0731 18:14:19.272223   74203 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0731 18:14:19.440224   74203 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0731 18:14:19.441929   74203 out.go:204]   - Generating certificates and keys ...
	I0731 18:14:19.442025   74203 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0731 18:14:19.442104   74203 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0731 18:14:19.442196   74203 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0731 18:14:19.442245   74203 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0731 18:14:19.442326   74203 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0731 18:14:19.442395   74203 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0731 18:14:19.442498   74203 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0731 18:14:19.442610   74203 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0731 18:14:19.442687   74203 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0731 18:14:19.442770   74203 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0731 18:14:19.442813   74203 kubeadm.go:310] [certs] Using the existing "sa" key
	I0731 18:14:19.442887   74203 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0731 18:14:19.481696   74203 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0731 18:14:19.804252   74203 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0731 18:14:20.038734   74203 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0731 18:14:20.211133   74203 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0731 18:14:20.225726   74203 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0731 18:14:20.227920   74203 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0731 18:14:20.227977   74203 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0731 18:14:20.364068   74203 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0731 18:14:20.365991   74203 out.go:204]   - Booting up control plane ...
	I0731 18:14:20.366094   74203 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0731 18:14:20.366195   74203 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0731 18:14:20.366270   74203 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0731 18:14:20.366379   74203 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0731 18:14:20.367688   74203 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0731 18:15:00.365616   74203 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0731 18:15:00.366184   74203 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0731 18:15:00.366412   74203 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0731 18:15:05.366332   74203 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0731 18:15:05.366529   74203 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0731 18:15:15.366241   74203 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0731 18:15:15.366499   74203 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0731 18:15:35.366114   74203 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0731 18:15:35.366344   74203 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0731 18:16:15.365995   74203 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0731 18:16:15.366181   74203 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0731 18:16:15.366191   74203 kubeadm.go:310] 
	I0731 18:16:15.366224   74203 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0731 18:16:15.366448   74203 kubeadm.go:310] 		timed out waiting for the condition
	I0731 18:16:15.366472   74203 kubeadm.go:310] 
	I0731 18:16:15.366517   74203 kubeadm.go:310] 	This error is likely caused by:
	I0731 18:16:15.366568   74203 kubeadm.go:310] 		- The kubelet is not running
	I0731 18:16:15.366723   74203 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0731 18:16:15.366740   74203 kubeadm.go:310] 
	I0731 18:16:15.366863   74203 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0731 18:16:15.366924   74203 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0731 18:16:15.366986   74203 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0731 18:16:15.366999   74203 kubeadm.go:310] 
	I0731 18:16:15.367153   74203 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0731 18:16:15.367271   74203 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0731 18:16:15.367283   74203 kubeadm.go:310] 
	I0731 18:16:15.367418   74203 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0731 18:16:15.367504   74203 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0731 18:16:15.367609   74203 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0731 18:16:15.367725   74203 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0731 18:16:15.367734   74203 kubeadm.go:310] 
	I0731 18:16:15.369210   74203 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0731 18:16:15.369361   74203 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0731 18:16:15.369434   74203 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0731 18:16:15.369496   74203 kubeadm.go:394] duration metric: took 8m6.557607575s to StartCluster
	I0731 18:16:15.369537   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 18:16:15.369590   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 18:16:15.432899   74203 cri.go:89] found id: ""
	I0731 18:16:15.432929   74203 logs.go:276] 0 containers: []
	W0731 18:16:15.432941   74203 logs.go:278] No container was found matching "kube-apiserver"
	I0731 18:16:15.432947   74203 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 18:16:15.433005   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 18:16:15.470506   74203 cri.go:89] found id: ""
	I0731 18:16:15.470534   74203 logs.go:276] 0 containers: []
	W0731 18:16:15.470542   74203 logs.go:278] No container was found matching "etcd"
	I0731 18:16:15.470549   74203 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 18:16:15.470609   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 18:16:15.502032   74203 cri.go:89] found id: ""
	I0731 18:16:15.502055   74203 logs.go:276] 0 containers: []
	W0731 18:16:15.502062   74203 logs.go:278] No container was found matching "coredns"
	I0731 18:16:15.502067   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 18:16:15.502115   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 18:16:15.533897   74203 cri.go:89] found id: ""
	I0731 18:16:15.533918   74203 logs.go:276] 0 containers: []
	W0731 18:16:15.533925   74203 logs.go:278] No container was found matching "kube-scheduler"
	I0731 18:16:15.533930   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 18:16:15.533980   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 18:16:15.565275   74203 cri.go:89] found id: ""
	I0731 18:16:15.565311   74203 logs.go:276] 0 containers: []
	W0731 18:16:15.565326   74203 logs.go:278] No container was found matching "kube-proxy"
	I0731 18:16:15.565333   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 18:16:15.565395   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 18:16:15.601402   74203 cri.go:89] found id: ""
	I0731 18:16:15.601427   74203 logs.go:276] 0 containers: []
	W0731 18:16:15.601435   74203 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 18:16:15.601440   74203 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 18:16:15.601489   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 18:16:15.638778   74203 cri.go:89] found id: ""
	I0731 18:16:15.638801   74203 logs.go:276] 0 containers: []
	W0731 18:16:15.638808   74203 logs.go:278] No container was found matching "kindnet"
	I0731 18:16:15.638813   74203 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 18:16:15.638861   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 18:16:15.675697   74203 cri.go:89] found id: ""
	I0731 18:16:15.675720   74203 logs.go:276] 0 containers: []
	W0731 18:16:15.675728   74203 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 18:16:15.675736   74203 logs.go:123] Gathering logs for describe nodes ...
	I0731 18:16:15.675748   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 18:16:15.745287   74203 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 18:16:15.745325   74203 logs.go:123] Gathering logs for CRI-O ...
	I0731 18:16:15.745341   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 18:16:15.848503   74203 logs.go:123] Gathering logs for container status ...
	I0731 18:16:15.848536   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 18:16:15.887234   74203 logs.go:123] Gathering logs for kubelet ...
	I0731 18:16:15.887258   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 18:16:15.934871   74203 logs.go:123] Gathering logs for dmesg ...
	I0731 18:16:15.934901   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	W0731 18:16:15.947727   74203 out.go:364] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0731 18:16:15.947769   74203 out.go:239] * 
	W0731 18:16:15.947817   74203 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0731 18:16:15.947836   74203 out.go:239] * 
	W0731 18:16:15.948669   74203 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0731 18:16:15.952286   74203 out.go:177] 
	W0731 18:16:15.953375   74203 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0731 18:16:15.953424   74203 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0731 18:16:15.953442   74203 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0731 18:16:15.954734   74203 out.go:177] 
	
	
	==> CRI-O <==
	Jul 31 18:16:17 old-k8s-version-276459 crio[645]: time="2024-07-31 18:16:17.764491886Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722449777764467635,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=eea0d175-0f9f-406a-badd-4d0e8c9263bf name=/runtime.v1.ImageService/ImageFsInfo
	Jul 31 18:16:17 old-k8s-version-276459 crio[645]: time="2024-07-31 18:16:17.765181483Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=da02a493-29c4-414b-9d20-4bd985f77810 name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 18:16:17 old-k8s-version-276459 crio[645]: time="2024-07-31 18:16:17.765235712Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=da02a493-29c4-414b-9d20-4bd985f77810 name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 18:16:17 old-k8s-version-276459 crio[645]: time="2024-07-31 18:16:17.765265769Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=da02a493-29c4-414b-9d20-4bd985f77810 name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 18:16:17 old-k8s-version-276459 crio[645]: time="2024-07-31 18:16:17.794898018Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=1e322208-4ec2-4315-8f26-a7fbd65bb7ac name=/runtime.v1.RuntimeService/Version
	Jul 31 18:16:17 old-k8s-version-276459 crio[645]: time="2024-07-31 18:16:17.794995905Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=1e322208-4ec2-4315-8f26-a7fbd65bb7ac name=/runtime.v1.RuntimeService/Version
	Jul 31 18:16:17 old-k8s-version-276459 crio[645]: time="2024-07-31 18:16:17.796045090Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=cbacf0f0-9edc-4baa-9c7b-8963dfd33d77 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 31 18:16:17 old-k8s-version-276459 crio[645]: time="2024-07-31 18:16:17.796459757Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722449777796384708,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=cbacf0f0-9edc-4baa-9c7b-8963dfd33d77 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 31 18:16:17 old-k8s-version-276459 crio[645]: time="2024-07-31 18:16:17.796921165Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=3eaed1e9-cbbb-4464-b863-fdcc7d84ad6c name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 18:16:17 old-k8s-version-276459 crio[645]: time="2024-07-31 18:16:17.796989939Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=3eaed1e9-cbbb-4464-b863-fdcc7d84ad6c name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 18:16:17 old-k8s-version-276459 crio[645]: time="2024-07-31 18:16:17.797025535Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=3eaed1e9-cbbb-4464-b863-fdcc7d84ad6c name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 18:16:17 old-k8s-version-276459 crio[645]: time="2024-07-31 18:16:17.828330899Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=76fcad03-eaf1-42e2-ad28-ddebdfdd94f8 name=/runtime.v1.RuntimeService/Version
	Jul 31 18:16:17 old-k8s-version-276459 crio[645]: time="2024-07-31 18:16:17.828433441Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=76fcad03-eaf1-42e2-ad28-ddebdfdd94f8 name=/runtime.v1.RuntimeService/Version
	Jul 31 18:16:17 old-k8s-version-276459 crio[645]: time="2024-07-31 18:16:17.829905309Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=2d0b7cda-ce90-4fe2-9947-19691dbfbbfb name=/runtime.v1.ImageService/ImageFsInfo
	Jul 31 18:16:17 old-k8s-version-276459 crio[645]: time="2024-07-31 18:16:17.830258538Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722449777830237694,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=2d0b7cda-ce90-4fe2-9947-19691dbfbbfb name=/runtime.v1.ImageService/ImageFsInfo
	Jul 31 18:16:17 old-k8s-version-276459 crio[645]: time="2024-07-31 18:16:17.830707788Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=b5193865-57f5-4e5a-ad34-c44d5b91b8b0 name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 18:16:17 old-k8s-version-276459 crio[645]: time="2024-07-31 18:16:17.830770349Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=b5193865-57f5-4e5a-ad34-c44d5b91b8b0 name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 18:16:17 old-k8s-version-276459 crio[645]: time="2024-07-31 18:16:17.830804556Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=b5193865-57f5-4e5a-ad34-c44d5b91b8b0 name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 18:16:17 old-k8s-version-276459 crio[645]: time="2024-07-31 18:16:17.859458233Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=b0448d56-fb05-4a16-a49a-def1d2d58044 name=/runtime.v1.RuntimeService/Version
	Jul 31 18:16:17 old-k8s-version-276459 crio[645]: time="2024-07-31 18:16:17.859539890Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=b0448d56-fb05-4a16-a49a-def1d2d58044 name=/runtime.v1.RuntimeService/Version
	Jul 31 18:16:17 old-k8s-version-276459 crio[645]: time="2024-07-31 18:16:17.860765241Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=bcc9f7d9-ab65-4a86-9cbe-39f23fb378cd name=/runtime.v1.ImageService/ImageFsInfo
	Jul 31 18:16:17 old-k8s-version-276459 crio[645]: time="2024-07-31 18:16:17.861106401Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722449777861086945,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=bcc9f7d9-ab65-4a86-9cbe-39f23fb378cd name=/runtime.v1.ImageService/ImageFsInfo
	Jul 31 18:16:17 old-k8s-version-276459 crio[645]: time="2024-07-31 18:16:17.861574873Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=d0195a12-fe05-47ae-a37b-952e128dea96 name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 18:16:17 old-k8s-version-276459 crio[645]: time="2024-07-31 18:16:17.861621184Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=d0195a12-fe05-47ae-a37b-952e128dea96 name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 18:16:17 old-k8s-version-276459 crio[645]: time="2024-07-31 18:16:17.861649345Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=d0195a12-fe05-47ae-a37b-952e128dea96 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Jul31 18:07] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.051412] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.042688] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.944073] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +1.816954] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.537194] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000002] NFSD: Unable to initialize client recovery tracking! (-2)
	[Jul31 18:08] systemd-fstab-generator[565]: Ignoring "noauto" option for root device
	[  +0.060507] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.075418] systemd-fstab-generator[577]: Ignoring "noauto" option for root device
	[  +0.176279] systemd-fstab-generator[591]: Ignoring "noauto" option for root device
	[  +0.160769] systemd-fstab-generator[603]: Ignoring "noauto" option for root device
	[  +0.263587] systemd-fstab-generator[630]: Ignoring "noauto" option for root device
	[  +6.137429] systemd-fstab-generator[832]: Ignoring "noauto" option for root device
	[  +0.060102] kauditd_printk_skb: 130 callbacks suppressed
	[  +1.916258] systemd-fstab-generator[956]: Ignoring "noauto" option for root device
	[ +12.736454] kauditd_printk_skb: 46 callbacks suppressed
	[Jul31 18:12] systemd-fstab-generator[5055]: Ignoring "noauto" option for root device
	[Jul31 18:14] systemd-fstab-generator[5341]: Ignoring "noauto" option for root device
	[  +0.064729] kauditd_printk_skb: 12 callbacks suppressed
	
	
	==> kernel <==
	 18:16:18 up 8 min,  0 users,  load average: 0.01, 0.08, 0.05
	Linux old-k8s-version-276459 5.10.207 #1 SMP Mon Jul 29 15:19:02 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kubelet <==
	Jul 31 18:16:15 old-k8s-version-276459 kubelet[5520]: k8s.io/kubernetes/vendor/k8s.io/client-go/util/connrotation.(*Dialer).DialContext(0xc000ac9bc0, 0x4f7fe00, 0xc000120018, 0x48ab5d6, 0x3, 0xc000b6cdb0, 0x24, 0x60, 0x7f2fbc77d5f0, 0x118, ...)
	Jul 31 18:16:15 old-k8s-version-276459 kubelet[5520]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/util/connrotation/connrotation.go:73 +0x7e
	Jul 31 18:16:15 old-k8s-version-276459 kubelet[5520]: net/http.(*Transport).dial(0xc000a2f2c0, 0x4f7fe00, 0xc000120018, 0x48ab5d6, 0x3, 0xc000b6cdb0, 0x24, 0x0, 0x0, 0x0, ...)
	Jul 31 18:16:15 old-k8s-version-276459 kubelet[5520]:         /usr/local/go/src/net/http/transport.go:1141 +0x1fd
	Jul 31 18:16:15 old-k8s-version-276459 kubelet[5520]: net/http.(*Transport).dialConn(0xc000a2f2c0, 0x4f7fe00, 0xc000120018, 0x0, 0xc000b03c20, 0x5, 0xc000b6cdb0, 0x24, 0x0, 0xc000b1efc0, ...)
	Jul 31 18:16:15 old-k8s-version-276459 kubelet[5520]:         /usr/local/go/src/net/http/transport.go:1575 +0x1abb
	Jul 31 18:16:15 old-k8s-version-276459 kubelet[5520]: net/http.(*Transport).dialConnFor(0xc000a2f2c0, 0xc000c60000)
	Jul 31 18:16:15 old-k8s-version-276459 kubelet[5520]:         /usr/local/go/src/net/http/transport.go:1421 +0xc6
	Jul 31 18:16:15 old-k8s-version-276459 kubelet[5520]: created by net/http.(*Transport).queueForDial
	Jul 31 18:16:15 old-k8s-version-276459 kubelet[5520]:         /usr/local/go/src/net/http/transport.go:1390 +0x40f
	Jul 31 18:16:15 old-k8s-version-276459 kubelet[5520]: goroutine 163 [select]:
	Jul 31 18:16:15 old-k8s-version-276459 kubelet[5520]: net.(*netFD).connect.func2(0x4f7fe40, 0xc000c07ec0, 0xc000b00c80, 0xc000b03ec0, 0xc000b03e60)
	Jul 31 18:16:15 old-k8s-version-276459 kubelet[5520]:         /usr/local/go/src/net/fd_unix.go:118 +0xc5
	Jul 31 18:16:15 old-k8s-version-276459 kubelet[5520]: created by net.(*netFD).connect
	Jul 31 18:16:15 old-k8s-version-276459 kubelet[5520]:         /usr/local/go/src/net/fd_unix.go:117 +0x234
	Jul 31 18:16:15 old-k8s-version-276459 systemd[1]: kubelet.service: Main process exited, code=exited, status=255/EXCEPTION
	Jul 31 18:16:15 old-k8s-version-276459 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Jul 31 18:16:16 old-k8s-version-276459 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 20.
	Jul 31 18:16:16 old-k8s-version-276459 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	Jul 31 18:16:16 old-k8s-version-276459 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	Jul 31 18:16:16 old-k8s-version-276459 kubelet[5587]: I0731 18:16:16.153539    5587 server.go:416] Version: v1.20.0
	Jul 31 18:16:16 old-k8s-version-276459 kubelet[5587]: I0731 18:16:16.154009    5587 server.go:837] Client rotation is on, will bootstrap in background
	Jul 31 18:16:16 old-k8s-version-276459 kubelet[5587]: I0731 18:16:16.156800    5587 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
	Jul 31 18:16:16 old-k8s-version-276459 kubelet[5587]: W0731 18:16:16.157866    5587 manager.go:159] Cannot detect current cgroup on cgroup v2
	Jul 31 18:16:16 old-k8s-version-276459 kubelet[5587]: I0731 18:16:16.158022    5587 dynamic_cafile_content.go:167] Starting client-ca-bundle::/var/lib/minikube/certs/ca.crt
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-276459 -n old-k8s-version-276459
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-276459 -n old-k8s-version-276459: exit status 2 (226.300774ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "old-k8s-version-276459" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/SecondStart (744.17s)
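The kubelet stack traces and the systemd messages above ("Main process exited, code=exited, status=255", "restart counter is at 20") show the kubelet on old-k8s-version-276459 crash-looping while the apiserver stayed Stopped. As a rough sketch of how the unit could be inspected directly on the node (the commands below are illustrative and not part of the test harness; they assume the profile VM is still reachable):

out/minikube-linux-amd64 -p old-k8s-version-276459 ssh "sudo journalctl -u kubelet --no-pager -n 100"
out/minikube-linux-amd64 status -p old-k8s-version-276459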

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (544.13s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
E0731 18:12:40.752712   15259 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/flannel-985288/client.crt: no such file or directory
E0731 18:12:57.005178   15259 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/functional-496242/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: rate: Wait(n=1) would exceed context deadline
start_stop_delete_test.go:274: ***** TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:274: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-094310 -n default-k8s-diff-port-094310
start_stop_delete_test.go:274: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: showing logs for failed pods as of 2024-07-31 18:21:40.087879002 +0000 UTC m=+6103.794617179
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
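For reference, the wait in start_stop_delete_test.go polls for pods by label in the kubernetes-dashboard namespace; an equivalent manual check is sketched below (assuming the kubeconfig context carries the profile name, which is minikube's default, and 540s matching the 9m0s test timeout; these commands are illustrative, not part of the test):

kubectl --context default-k8s-diff-port-094310 -n kubernetes-dashboard get pods -l k8s-app=kubernetes-dashboard
kubectl --context default-k8s-diff-port-094310 -n kubernetes-dashboard wait pod -l k8s-app=kubernetes-dashboard --for=condition=Ready --timeout=540s
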
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-094310 -n default-k8s-diff-port-094310
helpers_test.go:244: <<< TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-094310 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p default-k8s-diff-port-094310 logs -n 25: (2.015913144s)
helpers_test.go:252: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p enable-default-cni-985288                           | enable-default-cni-985288    | jenkins | v1.33.1 | 31 Jul 24 17:58 UTC | 31 Jul 24 17:58 UTC |
	|         | sudo cat                                               |                              |         |         |                     |                     |
	|         | /etc/containerd/config.toml                            |                              |         |         |                     |                     |
	| ssh     | -p enable-default-cni-985288                           | enable-default-cni-985288    | jenkins | v1.33.1 | 31 Jul 24 17:58 UTC | 31 Jul 24 17:58 UTC |
	|         | sudo containerd config dump                            |                              |         |         |                     |                     |
	| ssh     | -p enable-default-cni-985288                           | enable-default-cni-985288    | jenkins | v1.33.1 | 31 Jul 24 17:58 UTC | 31 Jul 24 17:58 UTC |
	|         | sudo systemctl status crio                             |                              |         |         |                     |                     |
	|         | --all --full --no-pager                                |                              |         |         |                     |                     |
	| ssh     | -p enable-default-cni-985288                           | enable-default-cni-985288    | jenkins | v1.33.1 | 31 Jul 24 17:58 UTC | 31 Jul 24 17:58 UTC |
	|         | sudo systemctl cat crio                                |                              |         |         |                     |                     |
	|         | --no-pager                                             |                              |         |         |                     |                     |
	| ssh     | -p enable-default-cni-985288                           | enable-default-cni-985288    | jenkins | v1.33.1 | 31 Jul 24 17:58 UTC | 31 Jul 24 17:58 UTC |
	|         | sudo find /etc/crio -type f                            |                              |         |         |                     |                     |
	|         | -exec sh -c 'echo {}; cat {}'                          |                              |         |         |                     |                     |
	|         | \;                                                     |                              |         |         |                     |                     |
	| ssh     | -p enable-default-cni-985288                           | enable-default-cni-985288    | jenkins | v1.33.1 | 31 Jul 24 17:58 UTC | 31 Jul 24 17:58 UTC |
	|         | sudo crio config                                       |                              |         |         |                     |                     |
	| delete  | -p enable-default-cni-985288                           | enable-default-cni-985288    | jenkins | v1.33.1 | 31 Jul 24 17:58 UTC | 31 Jul 24 17:58 UTC |
	| delete  | -p                                                     | disable-driver-mounts-280161 | jenkins | v1.33.1 | 31 Jul 24 17:58 UTC | 31 Jul 24 17:58 UTC |
	|         | disable-driver-mounts-280161                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-094310 | jenkins | v1.33.1 | 31 Jul 24 17:58 UTC | 31 Jul 24 18:00 UTC |
	|         | default-k8s-diff-port-094310                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-673754             | no-preload-673754            | jenkins | v1.33.1 | 31 Jul 24 17:59 UTC | 31 Jul 24 17:59 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-673754                                   | no-preload-673754            | jenkins | v1.33.1 | 31 Jul 24 17:59 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-094310  | default-k8s-diff-port-094310 | jenkins | v1.33.1 | 31 Jul 24 18:00 UTC | 31 Jul 24 18:00 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-094310 | jenkins | v1.33.1 | 31 Jul 24 18:00 UTC |                     |
	|         | default-k8s-diff-port-094310                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-436067            | embed-certs-436067           | jenkins | v1.33.1 | 31 Jul 24 18:00 UTC | 31 Jul 24 18:00 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-436067                                  | embed-certs-436067           | jenkins | v1.33.1 | 31 Jul 24 18:00 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-276459        | old-k8s-version-276459       | jenkins | v1.33.1 | 31 Jul 24 18:02 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-673754                  | no-preload-673754            | jenkins | v1.33.1 | 31 Jul 24 18:02 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-673754 --memory=2200                     | no-preload-673754            | jenkins | v1.33.1 | 31 Jul 24 18:02 UTC | 31 Jul 24 18:13 UTC |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0-beta.0                    |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-094310       | default-k8s-diff-port-094310 | jenkins | v1.33.1 | 31 Jul 24 18:02 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-436067                 | embed-certs-436067           | jenkins | v1.33.1 | 31 Jul 24 18:02 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-094310 | jenkins | v1.33.1 | 31 Jul 24 18:02 UTC | 31 Jul 24 18:12 UTC |
	|         | default-k8s-diff-port-094310                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3                           |                              |         |         |                     |                     |
	| start   | -p embed-certs-436067                                  | embed-certs-436067           | jenkins | v1.33.1 | 31 Jul 24 18:02 UTC | 31 Jul 24 18:12 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3                           |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-276459                              | old-k8s-version-276459       | jenkins | v1.33.1 | 31 Jul 24 18:03 UTC | 31 Jul 24 18:03 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-276459             | old-k8s-version-276459       | jenkins | v1.33.1 | 31 Jul 24 18:03 UTC | 31 Jul 24 18:03 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-276459                              | old-k8s-version-276459       | jenkins | v1.33.1 | 31 Jul 24 18:03 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/31 18:03:55
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0731 18:03:55.344211   74203 out.go:291] Setting OutFile to fd 1 ...
	I0731 18:03:55.344313   74203 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 18:03:55.344321   74203 out.go:304] Setting ErrFile to fd 2...
	I0731 18:03:55.344324   74203 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 18:03:55.344541   74203 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19349-8084/.minikube/bin
	I0731 18:03:55.345055   74203 out.go:298] Setting JSON to false
	I0731 18:03:55.345905   74203 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":6379,"bootTime":1722442656,"procs":196,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1065-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0731 18:03:55.345962   74203 start.go:139] virtualization: kvm guest
	I0731 18:03:55.347848   74203 out.go:177] * [old-k8s-version-276459] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0731 18:03:55.349045   74203 notify.go:220] Checking for updates...
	I0731 18:03:55.349052   74203 out.go:177]   - MINIKUBE_LOCATION=19349
	I0731 18:03:55.350359   74203 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0731 18:03:55.351583   74203 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19349-8084/kubeconfig
	I0731 18:03:55.352789   74203 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19349-8084/.minikube
	I0731 18:03:55.354046   74203 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0731 18:03:55.355244   74203 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0731 18:03:55.356819   74203 config.go:182] Loaded profile config "old-k8s-version-276459": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0731 18:03:55.357218   74203 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 18:03:55.357268   74203 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 18:03:55.372081   74203 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38815
	I0731 18:03:55.372424   74203 main.go:141] libmachine: () Calling .GetVersion
	I0731 18:03:55.372950   74203 main.go:141] libmachine: Using API Version  1
	I0731 18:03:55.372972   74203 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 18:03:55.373263   74203 main.go:141] libmachine: () Calling .GetMachineName
	I0731 18:03:55.373466   74203 main.go:141] libmachine: (old-k8s-version-276459) Calling .DriverName
	I0731 18:03:55.375198   74203 out.go:177] * Kubernetes 1.30.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.30.3
	I0731 18:03:55.376370   74203 driver.go:392] Setting default libvirt URI to qemu:///system
	I0731 18:03:55.376714   74203 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 18:03:55.376748   74203 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 18:03:55.390924   74203 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46311
	I0731 18:03:55.391380   74203 main.go:141] libmachine: () Calling .GetVersion
	I0731 18:03:55.391853   74203 main.go:141] libmachine: Using API Version  1
	I0731 18:03:55.391875   74203 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 18:03:55.392165   74203 main.go:141] libmachine: () Calling .GetMachineName
	I0731 18:03:55.392389   74203 main.go:141] libmachine: (old-k8s-version-276459) Calling .DriverName
	I0731 18:03:55.425283   74203 out.go:177] * Using the kvm2 driver based on existing profile
	I0731 18:03:55.426485   74203 start.go:297] selected driver: kvm2
	I0731 18:03:55.426517   74203 start.go:901] validating driver "kvm2" against &{Name:old-k8s-version-276459 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-276459 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.26 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0731 18:03:55.426632   74203 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0731 18:03:55.427322   74203 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0731 18:03:55.427419   74203 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19349-8084/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0731 18:03:55.441518   74203 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0731 18:03:55.441891   74203 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0731 18:03:55.441921   74203 cni.go:84] Creating CNI manager for ""
	I0731 18:03:55.441928   74203 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0731 18:03:55.441970   74203 start.go:340] cluster config:
	{Name:old-k8s-version-276459 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-276459 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.26 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0731 18:03:55.442088   74203 iso.go:125] acquiring lock: {Name:mk4494bb5a63902bf12a909c3109db85229bc516 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0731 18:03:55.443745   74203 out.go:177] * Starting "old-k8s-version-276459" primary control-plane node in "old-k8s-version-276459" cluster
	I0731 18:03:55.299338   73479 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.126:22: connect: no route to host
	I0731 18:03:55.445026   74203 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0731 18:03:55.445062   74203 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19349-8084/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0731 18:03:55.445085   74203 cache.go:56] Caching tarball of preloaded images
	I0731 18:03:55.445157   74203 preload.go:172] Found /home/jenkins/minikube-integration/19349-8084/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0731 18:03:55.445167   74203 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0731 18:03:55.445250   74203 profile.go:143] Saving config to /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/old-k8s-version-276459/config.json ...
	I0731 18:03:55.445412   74203 start.go:360] acquireMachinesLock for old-k8s-version-276459: {Name:mkd78eacfab7d6c3058c6674434b3d889ec957e0 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0731 18:03:58.371340   73479 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.126:22: connect: no route to host
	I0731 18:04:04.451379   73479 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.126:22: connect: no route to host
	I0731 18:04:07.523408   73479 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.126:22: connect: no route to host
	I0731 18:04:13.603407   73479 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.126:22: connect: no route to host
	I0731 18:04:16.675437   73479 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.126:22: connect: no route to host
	I0731 18:04:22.755418   73479 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.126:22: connect: no route to host
	I0731 18:04:25.827434   73479 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.126:22: connect: no route to host
	I0731 18:04:31.907379   73479 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.126:22: connect: no route to host
	I0731 18:04:34.979426   73479 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.126:22: connect: no route to host
	I0731 18:04:41.059417   73479 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.126:22: connect: no route to host
	I0731 18:04:44.131434   73479 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.126:22: connect: no route to host
	I0731 18:04:50.211391   73479 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.126:22: connect: no route to host
	I0731 18:04:53.283445   73479 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.126:22: connect: no route to host
	I0731 18:04:59.363428   73479 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.126:22: connect: no route to host
	I0731 18:05:02.435450   73479 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.126:22: connect: no route to host
	I0731 18:05:08.515394   73479 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.126:22: connect: no route to host
	I0731 18:05:11.587394   73479 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.126:22: connect: no route to host
	I0731 18:05:17.667388   73479 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.126:22: connect: no route to host
	I0731 18:05:20.739413   73479 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.126:22: connect: no route to host
	I0731 18:05:26.819368   73479 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.126:22: connect: no route to host
	I0731 18:05:29.891394   73479 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.126:22: connect: no route to host
	I0731 18:05:35.971391   73479 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.126:22: connect: no route to host
	I0731 18:05:39.043445   73479 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.126:22: connect: no route to host
	I0731 18:05:45.123378   73479 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.126:22: connect: no route to host
	I0731 18:05:48.195378   73479 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.126:22: connect: no route to host
	I0731 18:05:54.275417   73479 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.126:22: connect: no route to host
	I0731 18:05:57.347374   73479 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.126:22: connect: no route to host
	I0731 18:06:03.427390   73479 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.126:22: connect: no route to host
	I0731 18:06:06.499378   73479 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.126:22: connect: no route to host
	I0731 18:06:12.579395   73479 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.126:22: connect: no route to host
	I0731 18:06:15.651447   73479 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.126:22: connect: no route to host
	I0731 18:06:21.731394   73479 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.126:22: connect: no route to host
	I0731 18:06:24.803405   73479 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.126:22: connect: no route to host
	I0731 18:06:30.883468   73479 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.126:22: connect: no route to host
	I0731 18:06:33.955397   73479 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.126:22: connect: no route to host
	I0731 18:06:40.035387   73479 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.126:22: connect: no route to host
	I0731 18:06:43.107448   73479 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.126:22: connect: no route to host
	I0731 18:06:49.187413   73479 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.126:22: connect: no route to host
	I0731 18:06:52.259420   73479 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.126:22: connect: no route to host
	I0731 18:06:58.339413   73479 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.126:22: connect: no route to host
	I0731 18:07:01.411396   73479 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.126:22: connect: no route to host
	I0731 18:07:04.416121   73696 start.go:364] duration metric: took 4m18.256589549s to acquireMachinesLock for "default-k8s-diff-port-094310"
	I0731 18:07:04.416183   73696 start.go:96] Skipping create...Using existing machine configuration
	I0731 18:07:04.416192   73696 fix.go:54] fixHost starting: 
	I0731 18:07:04.416522   73696 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 18:07:04.416570   73696 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 18:07:04.432249   73696 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32817
	I0731 18:07:04.432715   73696 main.go:141] libmachine: () Calling .GetVersion
	I0731 18:07:04.433206   73696 main.go:141] libmachine: Using API Version  1
	I0731 18:07:04.433234   73696 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 18:07:04.433616   73696 main.go:141] libmachine: () Calling .GetMachineName
	I0731 18:07:04.433833   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) Calling .DriverName
	I0731 18:07:04.434001   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) Calling .GetState
	I0731 18:07:04.436061   73696 fix.go:112] recreateIfNeeded on default-k8s-diff-port-094310: state=Stopped err=<nil>
	I0731 18:07:04.436082   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) Calling .DriverName
	W0731 18:07:04.436241   73696 fix.go:138] unexpected machine state, will restart: <nil>
	I0731 18:07:04.438139   73696 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-094310" ...
	I0731 18:07:04.439463   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) Calling .Start
	I0731 18:07:04.439678   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) Ensuring networks are active...
	I0731 18:07:04.440645   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) Ensuring network default is active
	I0731 18:07:04.441067   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) Ensuring network mk-default-k8s-diff-port-094310 is active
	I0731 18:07:04.441473   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) Getting domain xml...
	I0731 18:07:04.442331   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) Creating domain...
	I0731 18:07:05.660745   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) Waiting to get IP...
	I0731 18:07:05.661963   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) DBG | domain default-k8s-diff-port-094310 has defined MAC address 52:54:00:a9:b2:ae in network mk-default-k8s-diff-port-094310
	I0731 18:07:05.662532   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) DBG | unable to find current IP address of domain default-k8s-diff-port-094310 in network mk-default-k8s-diff-port-094310
	I0731 18:07:05.662620   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) DBG | I0731 18:07:05.662524   74854 retry.go:31] will retry after 294.438382ms: waiting for machine to come up
	I0731 18:07:05.959200   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) DBG | domain default-k8s-diff-port-094310 has defined MAC address 52:54:00:a9:b2:ae in network mk-default-k8s-diff-port-094310
	I0731 18:07:05.959668   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) DBG | unable to find current IP address of domain default-k8s-diff-port-094310 in network mk-default-k8s-diff-port-094310
	I0731 18:07:05.959699   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) DBG | I0731 18:07:05.959619   74854 retry.go:31] will retry after 331.316387ms: waiting for machine to come up
	I0731 18:07:04.413166   73479 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0731 18:07:04.413216   73479 main.go:141] libmachine: (no-preload-673754) Calling .GetMachineName
	I0731 18:07:04.413580   73479 buildroot.go:166] provisioning hostname "no-preload-673754"
	I0731 18:07:04.413609   73479 main.go:141] libmachine: (no-preload-673754) Calling .GetMachineName
	I0731 18:07:04.413827   73479 main.go:141] libmachine: (no-preload-673754) Calling .GetSSHHostname
	I0731 18:07:04.415964   73479 machine.go:97] duration metric: took 4m37.431900974s to provisionDockerMachine
	I0731 18:07:04.416013   73479 fix.go:56] duration metric: took 4m37.452176305s for fixHost
	I0731 18:07:04.416023   73479 start.go:83] releasing machines lock for "no-preload-673754", held for 4m37.452227129s
	W0731 18:07:04.416048   73479 start.go:714] error starting host: provision: host is not running
	W0731 18:07:04.416143   73479 out.go:239] ! StartHost failed, but will try again: provision: host is not running
	I0731 18:07:04.416157   73479 start.go:729] Will try again in 5 seconds ...
	I0731 18:07:06.292146   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) DBG | domain default-k8s-diff-port-094310 has defined MAC address 52:54:00:a9:b2:ae in network mk-default-k8s-diff-port-094310
	I0731 18:07:06.292533   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) DBG | unable to find current IP address of domain default-k8s-diff-port-094310 in network mk-default-k8s-diff-port-094310
	I0731 18:07:06.292555   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) DBG | I0731 18:07:06.292487   74854 retry.go:31] will retry after 324.512889ms: waiting for machine to come up
	I0731 18:07:06.619045   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) DBG | domain default-k8s-diff-port-094310 has defined MAC address 52:54:00:a9:b2:ae in network mk-default-k8s-diff-port-094310
	I0731 18:07:06.619440   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) DBG | unable to find current IP address of domain default-k8s-diff-port-094310 in network mk-default-k8s-diff-port-094310
	I0731 18:07:06.619470   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) DBG | I0731 18:07:06.619404   74854 retry.go:31] will retry after 556.332506ms: waiting for machine to come up
	I0731 18:07:07.177224   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) DBG | domain default-k8s-diff-port-094310 has defined MAC address 52:54:00:a9:b2:ae in network mk-default-k8s-diff-port-094310
	I0731 18:07:07.177689   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) DBG | unable to find current IP address of domain default-k8s-diff-port-094310 in network mk-default-k8s-diff-port-094310
	I0731 18:07:07.177722   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) DBG | I0731 18:07:07.177631   74854 retry.go:31] will retry after 599.567638ms: waiting for machine to come up
	I0731 18:07:07.778444   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) DBG | domain default-k8s-diff-port-094310 has defined MAC address 52:54:00:a9:b2:ae in network mk-default-k8s-diff-port-094310
	I0731 18:07:07.778848   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) DBG | unable to find current IP address of domain default-k8s-diff-port-094310 in network mk-default-k8s-diff-port-094310
	I0731 18:07:07.778885   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) DBG | I0731 18:07:07.778820   74854 retry.go:31] will retry after 944.17246ms: waiting for machine to come up
	I0731 18:07:08.724983   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) DBG | domain default-k8s-diff-port-094310 has defined MAC address 52:54:00:a9:b2:ae in network mk-default-k8s-diff-port-094310
	I0731 18:07:08.725484   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) DBG | unable to find current IP address of domain default-k8s-diff-port-094310 in network mk-default-k8s-diff-port-094310
	I0731 18:07:08.725512   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) DBG | I0731 18:07:08.725433   74854 retry.go:31] will retry after 1.077726279s: waiting for machine to come up
	I0731 18:07:09.805196   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) DBG | domain default-k8s-diff-port-094310 has defined MAC address 52:54:00:a9:b2:ae in network mk-default-k8s-diff-port-094310
	I0731 18:07:09.805629   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) DBG | unable to find current IP address of domain default-k8s-diff-port-094310 in network mk-default-k8s-diff-port-094310
	I0731 18:07:09.805667   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) DBG | I0731 18:07:09.805575   74854 retry.go:31] will retry after 1.140059854s: waiting for machine to come up
	I0731 18:07:10.951633   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) DBG | domain default-k8s-diff-port-094310 has defined MAC address 52:54:00:a9:b2:ae in network mk-default-k8s-diff-port-094310
	I0731 18:07:10.952066   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) DBG | unable to find current IP address of domain default-k8s-diff-port-094310 in network mk-default-k8s-diff-port-094310
	I0731 18:07:10.952091   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) DBG | I0731 18:07:10.952028   74854 retry.go:31] will retry after 1.691707383s: waiting for machine to come up
	I0731 18:07:09.418606   73479 start.go:360] acquireMachinesLock for no-preload-673754: {Name:mkd78eacfab7d6c3058c6674434b3d889ec957e0 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0731 18:07:12.645970   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) DBG | domain default-k8s-diff-port-094310 has defined MAC address 52:54:00:a9:b2:ae in network mk-default-k8s-diff-port-094310
	I0731 18:07:12.646588   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) DBG | unable to find current IP address of domain default-k8s-diff-port-094310 in network mk-default-k8s-diff-port-094310
	I0731 18:07:12.646623   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) DBG | I0731 18:07:12.646525   74854 retry.go:31] will retry after 2.257630784s: waiting for machine to come up
	I0731 18:07:14.905494   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) DBG | domain default-k8s-diff-port-094310 has defined MAC address 52:54:00:a9:b2:ae in network mk-default-k8s-diff-port-094310
	I0731 18:07:14.905891   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) DBG | unable to find current IP address of domain default-k8s-diff-port-094310 in network mk-default-k8s-diff-port-094310
	I0731 18:07:14.905922   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) DBG | I0731 18:07:14.905833   74854 retry.go:31] will retry after 2.877713561s: waiting for machine to come up
	I0731 18:07:17.786797   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) DBG | domain default-k8s-diff-port-094310 has defined MAC address 52:54:00:a9:b2:ae in network mk-default-k8s-diff-port-094310
	I0731 18:07:17.787173   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) DBG | unable to find current IP address of domain default-k8s-diff-port-094310 in network mk-default-k8s-diff-port-094310
	I0731 18:07:17.787194   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) DBG | I0731 18:07:17.787140   74854 retry.go:31] will retry after 3.028611559s: waiting for machine to come up
	I0731 18:07:20.817593   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) DBG | domain default-k8s-diff-port-094310 has defined MAC address 52:54:00:a9:b2:ae in network mk-default-k8s-diff-port-094310
	I0731 18:07:20.817898   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) Found IP for machine: 192.168.72.197
	I0731 18:07:20.817921   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) Reserving static IP address...
	I0731 18:07:20.817934   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) DBG | domain default-k8s-diff-port-094310 has current primary IP address 192.168.72.197 and MAC address 52:54:00:a9:b2:ae in network mk-default-k8s-diff-port-094310
	I0731 18:07:20.818352   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-094310", mac: "52:54:00:a9:b2:ae", ip: "192.168.72.197"} in network mk-default-k8s-diff-port-094310: {Iface:virbr1 ExpiryTime:2024-07-31 19:07:14 +0000 UTC Type:0 Mac:52:54:00:a9:b2:ae Iaid: IPaddr:192.168.72.197 Prefix:24 Hostname:default-k8s-diff-port-094310 Clientid:01:52:54:00:a9:b2:ae}
	I0731 18:07:20.818379   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) Reserved static IP address: 192.168.72.197
	I0731 18:07:20.818400   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) DBG | skip adding static IP to network mk-default-k8s-diff-port-094310 - found existing host DHCP lease matching {name: "default-k8s-diff-port-094310", mac: "52:54:00:a9:b2:ae", ip: "192.168.72.197"}
	I0731 18:07:20.818414   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) Waiting for SSH to be available...
	I0731 18:07:20.818431   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) DBG | Getting to WaitForSSH function...
	I0731 18:07:20.820417   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) DBG | domain default-k8s-diff-port-094310 has defined MAC address 52:54:00:a9:b2:ae in network mk-default-k8s-diff-port-094310
	I0731 18:07:20.820731   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a9:b2:ae", ip: ""} in network mk-default-k8s-diff-port-094310: {Iface:virbr1 ExpiryTime:2024-07-31 19:07:14 +0000 UTC Type:0 Mac:52:54:00:a9:b2:ae Iaid: IPaddr:192.168.72.197 Prefix:24 Hostname:default-k8s-diff-port-094310 Clientid:01:52:54:00:a9:b2:ae}
	I0731 18:07:20.820758   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) DBG | domain default-k8s-diff-port-094310 has defined IP address 192.168.72.197 and MAC address 52:54:00:a9:b2:ae in network mk-default-k8s-diff-port-094310
	I0731 18:07:20.820893   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) DBG | Using SSH client type: external
	I0731 18:07:20.820916   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) DBG | Using SSH private key: /home/jenkins/minikube-integration/19349-8084/.minikube/machines/default-k8s-diff-port-094310/id_rsa (-rw-------)
	I0731 18:07:20.820940   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.197 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19349-8084/.minikube/machines/default-k8s-diff-port-094310/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0731 18:07:20.820950   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) DBG | About to run SSH command:
	I0731 18:07:20.820959   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) DBG | exit 0
	I0731 18:07:20.943348   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) DBG | SSH cmd err, output: <nil>: 
	I0731 18:07:20.943708   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) Calling .GetConfigRaw
	I0731 18:07:20.944373   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) Calling .GetIP
	I0731 18:07:20.947080   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) DBG | domain default-k8s-diff-port-094310 has defined MAC address 52:54:00:a9:b2:ae in network mk-default-k8s-diff-port-094310
	I0731 18:07:20.947465   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a9:b2:ae", ip: ""} in network mk-default-k8s-diff-port-094310: {Iface:virbr1 ExpiryTime:2024-07-31 19:07:14 +0000 UTC Type:0 Mac:52:54:00:a9:b2:ae Iaid: IPaddr:192.168.72.197 Prefix:24 Hostname:default-k8s-diff-port-094310 Clientid:01:52:54:00:a9:b2:ae}
	I0731 18:07:20.947499   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) DBG | domain default-k8s-diff-port-094310 has defined IP address 192.168.72.197 and MAC address 52:54:00:a9:b2:ae in network mk-default-k8s-diff-port-094310
	I0731 18:07:20.947731   73696 profile.go:143] Saving config to /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/default-k8s-diff-port-094310/config.json ...
	I0731 18:07:20.947909   73696 machine.go:94] provisionDockerMachine start ...
	I0731 18:07:20.947926   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) Calling .DriverName
	I0731 18:07:20.948124   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) Calling .GetSSHHostname
	I0731 18:07:20.950698   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) DBG | domain default-k8s-diff-port-094310 has defined MAC address 52:54:00:a9:b2:ae in network mk-default-k8s-diff-port-094310
	I0731 18:07:20.951056   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a9:b2:ae", ip: ""} in network mk-default-k8s-diff-port-094310: {Iface:virbr1 ExpiryTime:2024-07-31 19:07:14 +0000 UTC Type:0 Mac:52:54:00:a9:b2:ae Iaid: IPaddr:192.168.72.197 Prefix:24 Hostname:default-k8s-diff-port-094310 Clientid:01:52:54:00:a9:b2:ae}
	I0731 18:07:20.951083   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) DBG | domain default-k8s-diff-port-094310 has defined IP address 192.168.72.197 and MAC address 52:54:00:a9:b2:ae in network mk-default-k8s-diff-port-094310
	I0731 18:07:20.951228   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) Calling .GetSSHPort
	I0731 18:07:20.951443   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) Calling .GetSSHKeyPath
	I0731 18:07:20.951608   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) Calling .GetSSHKeyPath
	I0731 18:07:20.951780   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) Calling .GetSSHUsername
	I0731 18:07:20.952016   73696 main.go:141] libmachine: Using SSH client type: native
	I0731 18:07:20.952208   73696 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.72.197 22 <nil> <nil>}
	I0731 18:07:20.952220   73696 main.go:141] libmachine: About to run SSH command:
	hostname
	I0731 18:07:21.051082   73696 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0731 18:07:21.051137   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) Calling .GetMachineName
	I0731 18:07:21.051424   73696 buildroot.go:166] provisioning hostname "default-k8s-diff-port-094310"
	I0731 18:07:21.051454   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) Calling .GetMachineName
	I0731 18:07:21.051650   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) Calling .GetSSHHostname
	I0731 18:07:21.054527   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) DBG | domain default-k8s-diff-port-094310 has defined MAC address 52:54:00:a9:b2:ae in network mk-default-k8s-diff-port-094310
	I0731 18:07:21.054913   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a9:b2:ae", ip: ""} in network mk-default-k8s-diff-port-094310: {Iface:virbr1 ExpiryTime:2024-07-31 19:07:14 +0000 UTC Type:0 Mac:52:54:00:a9:b2:ae Iaid: IPaddr:192.168.72.197 Prefix:24 Hostname:default-k8s-diff-port-094310 Clientid:01:52:54:00:a9:b2:ae}
	I0731 18:07:21.054940   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) DBG | domain default-k8s-diff-port-094310 has defined IP address 192.168.72.197 and MAC address 52:54:00:a9:b2:ae in network mk-default-k8s-diff-port-094310
	I0731 18:07:21.055151   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) Calling .GetSSHPort
	I0731 18:07:21.055377   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) Calling .GetSSHKeyPath
	I0731 18:07:21.055516   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) Calling .GetSSHKeyPath
	I0731 18:07:21.055670   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) Calling .GetSSHUsername
	I0731 18:07:21.055838   73696 main.go:141] libmachine: Using SSH client type: native
	I0731 18:07:21.056037   73696 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.72.197 22 <nil> <nil>}
	I0731 18:07:21.056051   73696 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-094310 && echo "default-k8s-diff-port-094310" | sudo tee /etc/hostname
	I0731 18:07:22.127802   73800 start.go:364] duration metric: took 4m27.5245732s to acquireMachinesLock for "embed-certs-436067"
	I0731 18:07:22.127861   73800 start.go:96] Skipping create...Using existing machine configuration
	I0731 18:07:22.127871   73800 fix.go:54] fixHost starting: 
	I0731 18:07:22.128296   73800 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 18:07:22.128386   73800 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 18:07:22.144783   73800 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34313
	I0731 18:07:22.145111   73800 main.go:141] libmachine: () Calling .GetVersion
	I0731 18:07:22.145531   73800 main.go:141] libmachine: Using API Version  1
	I0731 18:07:22.145549   73800 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 18:07:22.145894   73800 main.go:141] libmachine: () Calling .GetMachineName
	I0731 18:07:22.146086   73800 main.go:141] libmachine: (embed-certs-436067) Calling .DriverName
	I0731 18:07:22.146226   73800 main.go:141] libmachine: (embed-certs-436067) Calling .GetState
	I0731 18:07:22.147718   73800 fix.go:112] recreateIfNeeded on embed-certs-436067: state=Stopped err=<nil>
	I0731 18:07:22.147737   73800 main.go:141] libmachine: (embed-certs-436067) Calling .DriverName
	W0731 18:07:22.147878   73800 fix.go:138] unexpected machine state, will restart: <nil>
	I0731 18:07:22.149896   73800 out.go:177] * Restarting existing kvm2 VM for "embed-certs-436067" ...
	I0731 18:07:21.168797   73696 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-094310
	
	I0731 18:07:21.168828   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) Calling .GetSSHHostname
	I0731 18:07:21.171672   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) DBG | domain default-k8s-diff-port-094310 has defined MAC address 52:54:00:a9:b2:ae in network mk-default-k8s-diff-port-094310
	I0731 18:07:21.172012   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a9:b2:ae", ip: ""} in network mk-default-k8s-diff-port-094310: {Iface:virbr1 ExpiryTime:2024-07-31 19:07:14 +0000 UTC Type:0 Mac:52:54:00:a9:b2:ae Iaid: IPaddr:192.168.72.197 Prefix:24 Hostname:default-k8s-diff-port-094310 Clientid:01:52:54:00:a9:b2:ae}
	I0731 18:07:21.172043   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) DBG | domain default-k8s-diff-port-094310 has defined IP address 192.168.72.197 and MAC address 52:54:00:a9:b2:ae in network mk-default-k8s-diff-port-094310
	I0731 18:07:21.172183   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) Calling .GetSSHPort
	I0731 18:07:21.172351   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) Calling .GetSSHKeyPath
	I0731 18:07:21.172510   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) Calling .GetSSHKeyPath
	I0731 18:07:21.172633   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) Calling .GetSSHUsername
	I0731 18:07:21.172800   73696 main.go:141] libmachine: Using SSH client type: native
	I0731 18:07:21.172976   73696 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.72.197 22 <nil> <nil>}
	I0731 18:07:21.173010   73696 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-094310' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-094310/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-094310' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0731 18:07:21.284583   73696 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0731 18:07:21.284610   73696 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19349-8084/.minikube CaCertPath:/home/jenkins/minikube-integration/19349-8084/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19349-8084/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19349-8084/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19349-8084/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19349-8084/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19349-8084/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19349-8084/.minikube}
	I0731 18:07:21.284633   73696 buildroot.go:174] setting up certificates
	I0731 18:07:21.284645   73696 provision.go:84] configureAuth start
	I0731 18:07:21.284656   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) Calling .GetMachineName
	I0731 18:07:21.284931   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) Calling .GetIP
	I0731 18:07:21.287526   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) DBG | domain default-k8s-diff-port-094310 has defined MAC address 52:54:00:a9:b2:ae in network mk-default-k8s-diff-port-094310
	I0731 18:07:21.287945   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a9:b2:ae", ip: ""} in network mk-default-k8s-diff-port-094310: {Iface:virbr1 ExpiryTime:2024-07-31 19:07:14 +0000 UTC Type:0 Mac:52:54:00:a9:b2:ae Iaid: IPaddr:192.168.72.197 Prefix:24 Hostname:default-k8s-diff-port-094310 Clientid:01:52:54:00:a9:b2:ae}
	I0731 18:07:21.287973   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) DBG | domain default-k8s-diff-port-094310 has defined IP address 192.168.72.197 and MAC address 52:54:00:a9:b2:ae in network mk-default-k8s-diff-port-094310
	I0731 18:07:21.288161   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) Calling .GetSSHHostname
	I0731 18:07:21.290169   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) DBG | domain default-k8s-diff-port-094310 has defined MAC address 52:54:00:a9:b2:ae in network mk-default-k8s-diff-port-094310
	I0731 18:07:21.290469   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a9:b2:ae", ip: ""} in network mk-default-k8s-diff-port-094310: {Iface:virbr1 ExpiryTime:2024-07-31 19:07:14 +0000 UTC Type:0 Mac:52:54:00:a9:b2:ae Iaid: IPaddr:192.168.72.197 Prefix:24 Hostname:default-k8s-diff-port-094310 Clientid:01:52:54:00:a9:b2:ae}
	I0731 18:07:21.290495   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) DBG | domain default-k8s-diff-port-094310 has defined IP address 192.168.72.197 and MAC address 52:54:00:a9:b2:ae in network mk-default-k8s-diff-port-094310
	I0731 18:07:21.290602   73696 provision.go:143] copyHostCerts
	I0731 18:07:21.290661   73696 exec_runner.go:144] found /home/jenkins/minikube-integration/19349-8084/.minikube/ca.pem, removing ...
	I0731 18:07:21.290673   73696 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19349-8084/.minikube/ca.pem
	I0731 18:07:21.290757   73696 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19349-8084/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19349-8084/.minikube/ca.pem (1082 bytes)
	I0731 18:07:21.290844   73696 exec_runner.go:144] found /home/jenkins/minikube-integration/19349-8084/.minikube/cert.pem, removing ...
	I0731 18:07:21.290856   73696 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19349-8084/.minikube/cert.pem
	I0731 18:07:21.290881   73696 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19349-8084/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19349-8084/.minikube/cert.pem (1123 bytes)
	I0731 18:07:21.290933   73696 exec_runner.go:144] found /home/jenkins/minikube-integration/19349-8084/.minikube/key.pem, removing ...
	I0731 18:07:21.290939   73696 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19349-8084/.minikube/key.pem
	I0731 18:07:21.290959   73696 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19349-8084/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19349-8084/.minikube/key.pem (1675 bytes)
	I0731 18:07:21.291005   73696 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19349-8084/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19349-8084/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19349-8084/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-094310 san=[127.0.0.1 192.168.72.197 default-k8s-diff-port-094310 localhost minikube]
	I0731 18:07:21.483241   73696 provision.go:177] copyRemoteCerts
	I0731 18:07:21.483314   73696 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0731 18:07:21.483343   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) Calling .GetSSHHostname
	I0731 18:07:21.486231   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) DBG | domain default-k8s-diff-port-094310 has defined MAC address 52:54:00:a9:b2:ae in network mk-default-k8s-diff-port-094310
	I0731 18:07:21.486619   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a9:b2:ae", ip: ""} in network mk-default-k8s-diff-port-094310: {Iface:virbr1 ExpiryTime:2024-07-31 19:07:14 +0000 UTC Type:0 Mac:52:54:00:a9:b2:ae Iaid: IPaddr:192.168.72.197 Prefix:24 Hostname:default-k8s-diff-port-094310 Clientid:01:52:54:00:a9:b2:ae}
	I0731 18:07:21.486659   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) DBG | domain default-k8s-diff-port-094310 has defined IP address 192.168.72.197 and MAC address 52:54:00:a9:b2:ae in network mk-default-k8s-diff-port-094310
	I0731 18:07:21.486850   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) Calling .GetSSHPort
	I0731 18:07:21.487084   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) Calling .GetSSHKeyPath
	I0731 18:07:21.487285   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) Calling .GetSSHUsername
	I0731 18:07:21.487443   73696 sshutil.go:53] new ssh client: &{IP:192.168.72.197 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19349-8084/.minikube/machines/default-k8s-diff-port-094310/id_rsa Username:docker}
	I0731 18:07:21.568564   73696 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19349-8084/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0731 18:07:21.598766   73696 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19349-8084/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I0731 18:07:21.621602   73696 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19349-8084/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0731 18:07:21.643361   73696 provision.go:87] duration metric: took 358.702982ms to configureAuth
	I0731 18:07:21.643393   73696 buildroot.go:189] setting minikube options for container-runtime
	I0731 18:07:21.643598   73696 config.go:182] Loaded profile config "default-k8s-diff-port-094310": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0731 18:07:21.643699   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) Calling .GetSSHHostname
	I0731 18:07:21.646487   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) DBG | domain default-k8s-diff-port-094310 has defined MAC address 52:54:00:a9:b2:ae in network mk-default-k8s-diff-port-094310
	I0731 18:07:21.646921   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a9:b2:ae", ip: ""} in network mk-default-k8s-diff-port-094310: {Iface:virbr1 ExpiryTime:2024-07-31 19:07:14 +0000 UTC Type:0 Mac:52:54:00:a9:b2:ae Iaid: IPaddr:192.168.72.197 Prefix:24 Hostname:default-k8s-diff-port-094310 Clientid:01:52:54:00:a9:b2:ae}
	I0731 18:07:21.646967   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) DBG | domain default-k8s-diff-port-094310 has defined IP address 192.168.72.197 and MAC address 52:54:00:a9:b2:ae in network mk-default-k8s-diff-port-094310
	I0731 18:07:21.647126   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) Calling .GetSSHPort
	I0731 18:07:21.647331   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) Calling .GetSSHKeyPath
	I0731 18:07:21.647527   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) Calling .GetSSHKeyPath
	I0731 18:07:21.647675   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) Calling .GetSSHUsername
	I0731 18:07:21.647879   73696 main.go:141] libmachine: Using SSH client type: native
	I0731 18:07:21.648051   73696 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.72.197 22 <nil> <nil>}
	I0731 18:07:21.648066   73696 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0731 18:07:21.896109   73696 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0731 18:07:21.896138   73696 machine.go:97] duration metric: took 948.216479ms to provisionDockerMachine
	I0731 18:07:21.896152   73696 start.go:293] postStartSetup for "default-k8s-diff-port-094310" (driver="kvm2")
	I0731 18:07:21.896166   73696 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0731 18:07:21.896185   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) Calling .DriverName
	I0731 18:07:21.896500   73696 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0731 18:07:21.896533   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) Calling .GetSSHHostname
	I0731 18:07:21.899447   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) DBG | domain default-k8s-diff-port-094310 has defined MAC address 52:54:00:a9:b2:ae in network mk-default-k8s-diff-port-094310
	I0731 18:07:21.899784   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a9:b2:ae", ip: ""} in network mk-default-k8s-diff-port-094310: {Iface:virbr1 ExpiryTime:2024-07-31 19:07:14 +0000 UTC Type:0 Mac:52:54:00:a9:b2:ae Iaid: IPaddr:192.168.72.197 Prefix:24 Hostname:default-k8s-diff-port-094310 Clientid:01:52:54:00:a9:b2:ae}
	I0731 18:07:21.899817   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) DBG | domain default-k8s-diff-port-094310 has defined IP address 192.168.72.197 and MAC address 52:54:00:a9:b2:ae in network mk-default-k8s-diff-port-094310
	I0731 18:07:21.899936   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) Calling .GetSSHPort
	I0731 18:07:21.900136   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) Calling .GetSSHKeyPath
	I0731 18:07:21.900268   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) Calling .GetSSHUsername
	I0731 18:07:21.900415   73696 sshutil.go:53] new ssh client: &{IP:192.168.72.197 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19349-8084/.minikube/machines/default-k8s-diff-port-094310/id_rsa Username:docker}
	I0731 18:07:21.981347   73696 ssh_runner.go:195] Run: cat /etc/os-release
	I0731 18:07:21.985297   73696 info.go:137] Remote host: Buildroot 2023.02.9
	I0731 18:07:21.985324   73696 filesync.go:126] Scanning /home/jenkins/minikube-integration/19349-8084/.minikube/addons for local assets ...
	I0731 18:07:21.985397   73696 filesync.go:126] Scanning /home/jenkins/minikube-integration/19349-8084/.minikube/files for local assets ...
	I0731 18:07:21.985513   73696 filesync.go:149] local asset: /home/jenkins/minikube-integration/19349-8084/.minikube/files/etc/ssl/certs/152592.pem -> 152592.pem in /etc/ssl/certs
	I0731 18:07:21.985646   73696 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0731 18:07:21.994700   73696 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19349-8084/.minikube/files/etc/ssl/certs/152592.pem --> /etc/ssl/certs/152592.pem (1708 bytes)
	I0731 18:07:22.022005   73696 start.go:296] duration metric: took 125.838186ms for postStartSetup
	I0731 18:07:22.022052   73696 fix.go:56] duration metric: took 17.605858897s for fixHost
	I0731 18:07:22.022075   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) Calling .GetSSHHostname
	I0731 18:07:22.025151   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) DBG | domain default-k8s-diff-port-094310 has defined MAC address 52:54:00:a9:b2:ae in network mk-default-k8s-diff-port-094310
	I0731 18:07:22.025445   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a9:b2:ae", ip: ""} in network mk-default-k8s-diff-port-094310: {Iface:virbr1 ExpiryTime:2024-07-31 19:07:14 +0000 UTC Type:0 Mac:52:54:00:a9:b2:ae Iaid: IPaddr:192.168.72.197 Prefix:24 Hostname:default-k8s-diff-port-094310 Clientid:01:52:54:00:a9:b2:ae}
	I0731 18:07:22.025478   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) DBG | domain default-k8s-diff-port-094310 has defined IP address 192.168.72.197 and MAC address 52:54:00:a9:b2:ae in network mk-default-k8s-diff-port-094310
	I0731 18:07:22.025622   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) Calling .GetSSHPort
	I0731 18:07:22.025829   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) Calling .GetSSHKeyPath
	I0731 18:07:22.026023   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) Calling .GetSSHKeyPath
	I0731 18:07:22.026199   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) Calling .GetSSHUsername
	I0731 18:07:22.026390   73696 main.go:141] libmachine: Using SSH client type: native
	I0731 18:07:22.026632   73696 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.72.197 22 <nil> <nil>}
	I0731 18:07:22.026653   73696 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0731 18:07:22.127643   73696 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722449242.103036947
	
	I0731 18:07:22.127668   73696 fix.go:216] guest clock: 1722449242.103036947
	I0731 18:07:22.127675   73696 fix.go:229] Guest: 2024-07-31 18:07:22.103036947 +0000 UTC Remote: 2024-07-31 18:07:22.022056299 +0000 UTC m=+275.995802468 (delta=80.980648ms)
	I0731 18:07:22.127698   73696 fix.go:200] guest clock delta is within tolerance: 80.980648ms
	I0731 18:07:22.127704   73696 start.go:83] releasing machines lock for "default-k8s-diff-port-094310", held for 17.711543911s
	I0731 18:07:22.127735   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) Calling .DriverName
	I0731 18:07:22.128006   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) Calling .GetIP
	I0731 18:07:22.130905   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) DBG | domain default-k8s-diff-port-094310 has defined MAC address 52:54:00:a9:b2:ae in network mk-default-k8s-diff-port-094310
	I0731 18:07:22.131291   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a9:b2:ae", ip: ""} in network mk-default-k8s-diff-port-094310: {Iface:virbr1 ExpiryTime:2024-07-31 19:07:14 +0000 UTC Type:0 Mac:52:54:00:a9:b2:ae Iaid: IPaddr:192.168.72.197 Prefix:24 Hostname:default-k8s-diff-port-094310 Clientid:01:52:54:00:a9:b2:ae}
	I0731 18:07:22.131322   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) DBG | domain default-k8s-diff-port-094310 has defined IP address 192.168.72.197 and MAC address 52:54:00:a9:b2:ae in network mk-default-k8s-diff-port-094310
	I0731 18:07:22.131568   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) Calling .DriverName
	I0731 18:07:22.132072   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) Calling .DriverName
	I0731 18:07:22.132244   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) Calling .DriverName
	I0731 18:07:22.132334   73696 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0731 18:07:22.132373   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) Calling .GetSSHHostname
	I0731 18:07:22.132488   73696 ssh_runner.go:195] Run: cat /version.json
	I0731 18:07:22.132511   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) Calling .GetSSHHostname
	I0731 18:07:22.134976   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) DBG | domain default-k8s-diff-port-094310 has defined MAC address 52:54:00:a9:b2:ae in network mk-default-k8s-diff-port-094310
	I0731 18:07:22.135269   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) DBG | domain default-k8s-diff-port-094310 has defined MAC address 52:54:00:a9:b2:ae in network mk-default-k8s-diff-port-094310
	I0731 18:07:22.135350   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a9:b2:ae", ip: ""} in network mk-default-k8s-diff-port-094310: {Iface:virbr1 ExpiryTime:2024-07-31 19:07:14 +0000 UTC Type:0 Mac:52:54:00:a9:b2:ae Iaid: IPaddr:192.168.72.197 Prefix:24 Hostname:default-k8s-diff-port-094310 Clientid:01:52:54:00:a9:b2:ae}
	I0731 18:07:22.135386   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) DBG | domain default-k8s-diff-port-094310 has defined IP address 192.168.72.197 and MAC address 52:54:00:a9:b2:ae in network mk-default-k8s-diff-port-094310
	I0731 18:07:22.135583   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) Calling .GetSSHPort
	I0731 18:07:22.135702   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a9:b2:ae", ip: ""} in network mk-default-k8s-diff-port-094310: {Iface:virbr1 ExpiryTime:2024-07-31 19:07:14 +0000 UTC Type:0 Mac:52:54:00:a9:b2:ae Iaid: IPaddr:192.168.72.197 Prefix:24 Hostname:default-k8s-diff-port-094310 Clientid:01:52:54:00:a9:b2:ae}
	I0731 18:07:22.135735   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) DBG | domain default-k8s-diff-port-094310 has defined IP address 192.168.72.197 and MAC address 52:54:00:a9:b2:ae in network mk-default-k8s-diff-port-094310
	I0731 18:07:22.135751   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) Calling .GetSSHKeyPath
	I0731 18:07:22.135837   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) Calling .GetSSHPort
	I0731 18:07:22.135945   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) Calling .GetSSHUsername
	I0731 18:07:22.135966   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) Calling .GetSSHKeyPath
	I0731 18:07:22.136068   73696 sshutil.go:53] new ssh client: &{IP:192.168.72.197 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19349-8084/.minikube/machines/default-k8s-diff-port-094310/id_rsa Username:docker}
	I0731 18:07:22.136101   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) Calling .GetSSHUsername
	I0731 18:07:22.136246   73696 sshutil.go:53] new ssh client: &{IP:192.168.72.197 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19349-8084/.minikube/machines/default-k8s-diff-port-094310/id_rsa Username:docker}
	I0731 18:07:22.245752   73696 ssh_runner.go:195] Run: systemctl --version
	I0731 18:07:22.251574   73696 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0731 18:07:22.391398   73696 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0731 18:07:22.396765   73696 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0731 18:07:22.396842   73696 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0731 18:07:22.412102   73696 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0731 18:07:22.412119   73696 start.go:495] detecting cgroup driver to use...
	I0731 18:07:22.412170   73696 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0731 18:07:22.427198   73696 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0731 18:07:22.441511   73696 docker.go:217] disabling cri-docker service (if available) ...
	I0731 18:07:22.441589   73696 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0731 18:07:22.455498   73696 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0731 18:07:22.469702   73696 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0731 18:07:22.584218   73696 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0731 18:07:22.719105   73696 docker.go:233] disabling docker service ...
	I0731 18:07:22.719195   73696 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0731 18:07:22.733625   73696 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0731 18:07:22.746500   73696 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0731 18:07:22.893624   73696 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0731 18:07:23.012965   73696 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0731 18:07:23.027132   73696 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0731 18:07:23.044766   73696 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0731 18:07:23.044832   73696 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 18:07:23.054276   73696 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0731 18:07:23.054363   73696 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 18:07:23.063873   73696 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 18:07:23.073392   73696 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 18:07:23.082908   73696 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0731 18:07:23.093468   73696 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 18:07:23.103419   73696 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 18:07:23.119920   73696 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 18:07:23.130427   73696 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0731 18:07:23.139397   73696 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0731 18:07:23.139465   73696 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0731 18:07:23.152275   73696 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0731 18:07:23.162439   73696 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0731 18:07:23.280030   73696 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0731 18:07:23.412019   73696 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0731 18:07:23.412083   73696 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0731 18:07:23.416884   73696 start.go:563] Will wait 60s for crictl version
	I0731 18:07:23.416930   73696 ssh_runner.go:195] Run: which crictl
	I0731 18:07:23.420518   73696 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0731 18:07:23.458895   73696 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0731 18:07:23.458976   73696 ssh_runner.go:195] Run: crio --version
	I0731 18:07:23.486961   73696 ssh_runner.go:195] Run: crio --version
	I0731 18:07:23.519648   73696 out.go:177] * Preparing Kubernetes v1.30.3 on CRI-O 1.29.1 ...
	I0731 18:07:22.151159   73800 main.go:141] libmachine: (embed-certs-436067) Calling .Start
	I0731 18:07:22.151319   73800 main.go:141] libmachine: (embed-certs-436067) Ensuring networks are active...
	I0731 18:07:22.151951   73800 main.go:141] libmachine: (embed-certs-436067) Ensuring network default is active
	I0731 18:07:22.152245   73800 main.go:141] libmachine: (embed-certs-436067) Ensuring network mk-embed-certs-436067 is active
	I0731 18:07:22.152747   73800 main.go:141] libmachine: (embed-certs-436067) Getting domain xml...
	I0731 18:07:22.153446   73800 main.go:141] libmachine: (embed-certs-436067) Creating domain...
	I0731 18:07:23.410530   73800 main.go:141] libmachine: (embed-certs-436067) Waiting to get IP...
	I0731 18:07:23.411687   73800 main.go:141] libmachine: (embed-certs-436067) DBG | domain embed-certs-436067 has defined MAC address 52:54:00:87:1e:25 in network mk-embed-certs-436067
	I0731 18:07:23.412152   73800 main.go:141] libmachine: (embed-certs-436067) DBG | unable to find current IP address of domain embed-certs-436067 in network mk-embed-certs-436067
	I0731 18:07:23.412231   73800 main.go:141] libmachine: (embed-certs-436067) DBG | I0731 18:07:23.412133   74994 retry.go:31] will retry after 233.281104ms: waiting for machine to come up
	I0731 18:07:23.646659   73800 main.go:141] libmachine: (embed-certs-436067) DBG | domain embed-certs-436067 has defined MAC address 52:54:00:87:1e:25 in network mk-embed-certs-436067
	I0731 18:07:23.647147   73800 main.go:141] libmachine: (embed-certs-436067) DBG | unable to find current IP address of domain embed-certs-436067 in network mk-embed-certs-436067
	I0731 18:07:23.647174   73800 main.go:141] libmachine: (embed-certs-436067) DBG | I0731 18:07:23.647069   74994 retry.go:31] will retry after 307.068766ms: waiting for machine to come up
	I0731 18:07:23.955614   73800 main.go:141] libmachine: (embed-certs-436067) DBG | domain embed-certs-436067 has defined MAC address 52:54:00:87:1e:25 in network mk-embed-certs-436067
	I0731 18:07:23.956140   73800 main.go:141] libmachine: (embed-certs-436067) DBG | unable to find current IP address of domain embed-certs-436067 in network mk-embed-certs-436067
	I0731 18:07:23.956166   73800 main.go:141] libmachine: (embed-certs-436067) DBG | I0731 18:07:23.956094   74994 retry.go:31] will retry after 410.095032ms: waiting for machine to come up
	I0731 18:07:24.367793   73800 main.go:141] libmachine: (embed-certs-436067) DBG | domain embed-certs-436067 has defined MAC address 52:54:00:87:1e:25 in network mk-embed-certs-436067
	I0731 18:07:24.368231   73800 main.go:141] libmachine: (embed-certs-436067) DBG | unable to find current IP address of domain embed-certs-436067 in network mk-embed-certs-436067
	I0731 18:07:24.368264   73800 main.go:141] libmachine: (embed-certs-436067) DBG | I0731 18:07:24.368188   74994 retry.go:31] will retry after 366.242055ms: waiting for machine to come up
	I0731 18:07:23.520927   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) Calling .GetIP
	I0731 18:07:23.524167   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) DBG | domain default-k8s-diff-port-094310 has defined MAC address 52:54:00:a9:b2:ae in network mk-default-k8s-diff-port-094310
	I0731 18:07:23.524615   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a9:b2:ae", ip: ""} in network mk-default-k8s-diff-port-094310: {Iface:virbr1 ExpiryTime:2024-07-31 19:07:14 +0000 UTC Type:0 Mac:52:54:00:a9:b2:ae Iaid: IPaddr:192.168.72.197 Prefix:24 Hostname:default-k8s-diff-port-094310 Clientid:01:52:54:00:a9:b2:ae}
	I0731 18:07:23.524663   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) DBG | domain default-k8s-diff-port-094310 has defined IP address 192.168.72.197 and MAC address 52:54:00:a9:b2:ae in network mk-default-k8s-diff-port-094310
	I0731 18:07:23.524913   73696 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0731 18:07:23.528924   73696 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0731 18:07:23.540496   73696 kubeadm.go:883] updating cluster {Name:default-k8s-diff-port-094310 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:default-k8s-diff-port-094310 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.197 Port:8444 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0731 18:07:23.540633   73696 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0731 18:07:23.540681   73696 ssh_runner.go:195] Run: sudo crictl images --output json
	I0731 18:07:23.579224   73696 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.3". assuming images are not preloaded.
	I0731 18:07:23.579295   73696 ssh_runner.go:195] Run: which lz4
	I0731 18:07:23.583060   73696 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0731 18:07:23.586888   73696 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0731 18:07:23.586922   73696 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19349-8084/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (406200976 bytes)
	I0731 18:07:24.864241   73696 crio.go:462] duration metric: took 1.281254602s to copy over tarball
	I0731 18:07:24.864321   73696 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0731 18:07:24.735741   73800 main.go:141] libmachine: (embed-certs-436067) DBG | domain embed-certs-436067 has defined MAC address 52:54:00:87:1e:25 in network mk-embed-certs-436067
	I0731 18:07:24.736325   73800 main.go:141] libmachine: (embed-certs-436067) DBG | unable to find current IP address of domain embed-certs-436067 in network mk-embed-certs-436067
	I0731 18:07:24.736356   73800 main.go:141] libmachine: (embed-certs-436067) DBG | I0731 18:07:24.736275   74994 retry.go:31] will retry after 593.179812ms: waiting for machine to come up
	I0731 18:07:25.331004   73800 main.go:141] libmachine: (embed-certs-436067) DBG | domain embed-certs-436067 has defined MAC address 52:54:00:87:1e:25 in network mk-embed-certs-436067
	I0731 18:07:25.331406   73800 main.go:141] libmachine: (embed-certs-436067) DBG | unable to find current IP address of domain embed-certs-436067 in network mk-embed-certs-436067
	I0731 18:07:25.331470   73800 main.go:141] libmachine: (embed-certs-436067) DBG | I0731 18:07:25.331381   74994 retry.go:31] will retry after 778.352855ms: waiting for machine to come up
	I0731 18:07:26.111327   73800 main.go:141] libmachine: (embed-certs-436067) DBG | domain embed-certs-436067 has defined MAC address 52:54:00:87:1e:25 in network mk-embed-certs-436067
	I0731 18:07:26.111828   73800 main.go:141] libmachine: (embed-certs-436067) DBG | unable to find current IP address of domain embed-certs-436067 in network mk-embed-certs-436067
	I0731 18:07:26.111855   73800 main.go:141] libmachine: (embed-certs-436067) DBG | I0731 18:07:26.111757   74994 retry.go:31] will retry after 993.157171ms: waiting for machine to come up
	I0731 18:07:27.106111   73800 main.go:141] libmachine: (embed-certs-436067) DBG | domain embed-certs-436067 has defined MAC address 52:54:00:87:1e:25 in network mk-embed-certs-436067
	I0731 18:07:27.106543   73800 main.go:141] libmachine: (embed-certs-436067) DBG | unable to find current IP address of domain embed-certs-436067 in network mk-embed-certs-436067
	I0731 18:07:27.106574   73800 main.go:141] libmachine: (embed-certs-436067) DBG | I0731 18:07:27.106507   74994 retry.go:31] will retry after 963.581879ms: waiting for machine to come up
	I0731 18:07:28.072100   73800 main.go:141] libmachine: (embed-certs-436067) DBG | domain embed-certs-436067 has defined MAC address 52:54:00:87:1e:25 in network mk-embed-certs-436067
	I0731 18:07:28.072628   73800 main.go:141] libmachine: (embed-certs-436067) DBG | unable to find current IP address of domain embed-certs-436067 in network mk-embed-certs-436067
	I0731 18:07:28.072657   73800 main.go:141] libmachine: (embed-certs-436067) DBG | I0731 18:07:28.072560   74994 retry.go:31] will retry after 1.608497907s: waiting for machine to come up
	I0731 18:07:27.052512   73696 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.188157854s)
	I0731 18:07:27.052542   73696 crio.go:469] duration metric: took 2.188269884s to extract the tarball
	I0731 18:07:27.052557   73696 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0731 18:07:27.089250   73696 ssh_runner.go:195] Run: sudo crictl images --output json
	I0731 18:07:27.130507   73696 crio.go:514] all images are preloaded for cri-o runtime.
	I0731 18:07:27.130536   73696 cache_images.go:84] Images are preloaded, skipping loading
	I0731 18:07:27.130546   73696 kubeadm.go:934] updating node { 192.168.72.197 8444 v1.30.3 crio true true} ...
	I0731 18:07:27.130666   73696 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=default-k8s-diff-port-094310 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.197
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:default-k8s-diff-port-094310 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0731 18:07:27.130751   73696 ssh_runner.go:195] Run: crio config
	I0731 18:07:27.176571   73696 cni.go:84] Creating CNI manager for ""
	I0731 18:07:27.176598   73696 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0731 18:07:27.176614   73696 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0731 18:07:27.176640   73696 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.197 APIServerPort:8444 KubernetesVersion:v1.30.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-094310 NodeName:default-k8s-diff-port-094310 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.197"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.197 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0731 18:07:27.176821   73696 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.197
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-094310"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.197
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.197"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0731 18:07:27.176904   73696 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0731 18:07:27.186582   73696 binaries.go:44] Found k8s binaries, skipping transfer
	I0731 18:07:27.186647   73696 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0731 18:07:27.195571   73696 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I0731 18:07:27.211103   73696 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0731 18:07:27.226226   73696 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2172 bytes)
	I0731 18:07:27.241763   73696 ssh_runner.go:195] Run: grep 192.168.72.197	control-plane.minikube.internal$ /etc/hosts
	I0731 18:07:27.245286   73696 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.197	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0731 18:07:27.256317   73696 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0731 18:07:27.377904   73696 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0731 18:07:27.394151   73696 certs.go:68] Setting up /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/default-k8s-diff-port-094310 for IP: 192.168.72.197
	I0731 18:07:27.394181   73696 certs.go:194] generating shared ca certs ...
	I0731 18:07:27.394201   73696 certs.go:226] acquiring lock for ca certs: {Name:mk49eed439085f9c30746706460ce213570d1997 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 18:07:27.394382   73696 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19349-8084/.minikube/ca.key
	I0731 18:07:27.394451   73696 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19349-8084/.minikube/proxy-client-ca.key
	I0731 18:07:27.394465   73696 certs.go:256] generating profile certs ...
	I0731 18:07:27.394577   73696 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/default-k8s-diff-port-094310/client.key
	I0731 18:07:27.394656   73696 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/default-k8s-diff-port-094310/apiserver.key.5264b27d
	I0731 18:07:27.394703   73696 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/default-k8s-diff-port-094310/proxy-client.key
	I0731 18:07:27.394851   73696 certs.go:484] found cert: /home/jenkins/minikube-integration/19349-8084/.minikube/certs/15259.pem (1338 bytes)
	W0731 18:07:27.394896   73696 certs.go:480] ignoring /home/jenkins/minikube-integration/19349-8084/.minikube/certs/15259_empty.pem, impossibly tiny 0 bytes
	I0731 18:07:27.394908   73696 certs.go:484] found cert: /home/jenkins/minikube-integration/19349-8084/.minikube/certs/ca-key.pem (1675 bytes)
	I0731 18:07:27.394935   73696 certs.go:484] found cert: /home/jenkins/minikube-integration/19349-8084/.minikube/certs/ca.pem (1082 bytes)
	I0731 18:07:27.394969   73696 certs.go:484] found cert: /home/jenkins/minikube-integration/19349-8084/.minikube/certs/cert.pem (1123 bytes)
	I0731 18:07:27.394990   73696 certs.go:484] found cert: /home/jenkins/minikube-integration/19349-8084/.minikube/certs/key.pem (1675 bytes)
	I0731 18:07:27.395028   73696 certs.go:484] found cert: /home/jenkins/minikube-integration/19349-8084/.minikube/files/etc/ssl/certs/152592.pem (1708 bytes)
	I0731 18:07:27.395749   73696 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19349-8084/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0731 18:07:27.425292   73696 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19349-8084/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0731 18:07:27.452753   73696 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19349-8084/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0731 18:07:27.481508   73696 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19349-8084/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0731 18:07:27.506990   73696 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/default-k8s-diff-port-094310/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0731 18:07:27.544385   73696 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/default-k8s-diff-port-094310/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0731 18:07:27.572947   73696 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/default-k8s-diff-port-094310/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0731 18:07:27.597895   73696 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/default-k8s-diff-port-094310/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0731 18:07:27.619324   73696 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19349-8084/.minikube/files/etc/ssl/certs/152592.pem --> /usr/share/ca-certificates/152592.pem (1708 bytes)
	I0731 18:07:27.641000   73696 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19349-8084/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0731 18:07:27.662483   73696 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19349-8084/.minikube/certs/15259.pem --> /usr/share/ca-certificates/15259.pem (1338 bytes)
	I0731 18:07:27.684400   73696 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0731 18:07:27.700058   73696 ssh_runner.go:195] Run: openssl version
	I0731 18:07:27.705637   73696 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/152592.pem && ln -fs /usr/share/ca-certificates/152592.pem /etc/ssl/certs/152592.pem"
	I0731 18:07:27.715558   73696 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/152592.pem
	I0731 18:07:27.719545   73696 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 31 16:54 /usr/share/ca-certificates/152592.pem
	I0731 18:07:27.719611   73696 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/152592.pem
	I0731 18:07:27.725076   73696 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/152592.pem /etc/ssl/certs/3ec20f2e.0"
	I0731 18:07:27.736589   73696 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0731 18:07:27.747908   73696 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0731 18:07:27.752392   73696 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 31 16:42 /usr/share/ca-certificates/minikubeCA.pem
	I0731 18:07:27.752448   73696 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0731 18:07:27.757939   73696 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0731 18:07:27.769571   73696 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/15259.pem && ln -fs /usr/share/ca-certificates/15259.pem /etc/ssl/certs/15259.pem"
	I0731 18:07:27.780730   73696 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/15259.pem
	I0731 18:07:27.785059   73696 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 31 16:54 /usr/share/ca-certificates/15259.pem
	I0731 18:07:27.785112   73696 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/15259.pem
	I0731 18:07:27.790477   73696 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/15259.pem /etc/ssl/certs/51391683.0"
	I0731 18:07:27.801519   73696 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0731 18:07:27.805654   73696 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0731 18:07:27.811381   73696 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0731 18:07:27.816786   73696 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0731 18:07:27.822643   73696 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0731 18:07:27.828371   73696 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0731 18:07:27.833908   73696 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0731 18:07:27.839455   73696 kubeadm.go:392] StartCluster: {Name:default-k8s-diff-port-094310 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:default-k8s-diff-port-094310 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.197 Port:8444 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0731 18:07:27.839537   73696 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0731 18:07:27.839605   73696 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0731 18:07:27.882993   73696 cri.go:89] found id: ""
	I0731 18:07:27.883055   73696 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0731 18:07:27.894363   73696 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0731 18:07:27.894386   73696 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0731 18:07:27.894431   73696 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0731 18:07:27.905192   73696 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0731 18:07:27.906138   73696 kubeconfig.go:125] found "default-k8s-diff-port-094310" server: "https://192.168.72.197:8444"
	I0731 18:07:27.908339   73696 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0731 18:07:27.918565   73696 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.72.197
	I0731 18:07:27.918603   73696 kubeadm.go:1160] stopping kube-system containers ...
	I0731 18:07:27.918613   73696 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0731 18:07:27.918663   73696 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0731 18:07:27.955675   73696 cri.go:89] found id: ""
	I0731 18:07:27.955744   73696 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0731 18:07:27.972234   73696 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0731 18:07:27.981273   73696 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0731 18:07:27.981289   73696 kubeadm.go:157] found existing configuration files:
	
	I0731 18:07:27.981323   73696 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0731 18:07:27.989775   73696 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0731 18:07:27.989837   73696 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0731 18:07:27.998816   73696 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0731 18:07:28.007142   73696 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0731 18:07:28.007197   73696 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0731 18:07:28.016124   73696 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0731 18:07:28.024471   73696 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0731 18:07:28.024519   73696 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0731 18:07:28.033105   73696 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0731 18:07:28.041306   73696 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0731 18:07:28.041355   73696 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0731 18:07:28.049958   73696 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0731 18:07:28.058718   73696 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0731 18:07:28.167720   73696 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0731 18:07:29.013539   73696 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0731 18:07:29.225696   73696 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0731 18:07:29.300822   73696 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0731 18:07:29.403471   73696 api_server.go:52] waiting for apiserver process to appear ...
	I0731 18:07:29.403567   73696 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:07:29.903755   73696 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:07:30.403896   73696 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:07:30.904160   73696 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:07:29.683622   73800 main.go:141] libmachine: (embed-certs-436067) DBG | domain embed-certs-436067 has defined MAC address 52:54:00:87:1e:25 in network mk-embed-certs-436067
	I0731 18:07:29.684148   73800 main.go:141] libmachine: (embed-certs-436067) DBG | unable to find current IP address of domain embed-certs-436067 in network mk-embed-certs-436067
	I0731 18:07:29.684180   73800 main.go:141] libmachine: (embed-certs-436067) DBG | I0731 18:07:29.684088   74994 retry.go:31] will retry after 1.813922887s: waiting for machine to come up
	I0731 18:07:31.500225   73800 main.go:141] libmachine: (embed-certs-436067) DBG | domain embed-certs-436067 has defined MAC address 52:54:00:87:1e:25 in network mk-embed-certs-436067
	I0731 18:07:31.500738   73800 main.go:141] libmachine: (embed-certs-436067) DBG | unable to find current IP address of domain embed-certs-436067 in network mk-embed-certs-436067
	I0731 18:07:31.500769   73800 main.go:141] libmachine: (embed-certs-436067) DBG | I0731 18:07:31.500694   74994 retry.go:31] will retry after 2.381670698s: waiting for machine to come up
	I0731 18:07:33.884129   73800 main.go:141] libmachine: (embed-certs-436067) DBG | domain embed-certs-436067 has defined MAC address 52:54:00:87:1e:25 in network mk-embed-certs-436067
	I0731 18:07:33.884564   73800 main.go:141] libmachine: (embed-certs-436067) DBG | unable to find current IP address of domain embed-certs-436067 in network mk-embed-certs-436067
	I0731 18:07:33.884587   73800 main.go:141] libmachine: (embed-certs-436067) DBG | I0731 18:07:33.884539   74994 retry.go:31] will retry after 3.269400744s: waiting for machine to come up
	I0731 18:07:31.404093   73696 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:07:31.417483   73696 api_server.go:72] duration metric: took 2.014013675s to wait for apiserver process to appear ...
	I0731 18:07:31.417511   73696 api_server.go:88] waiting for apiserver healthz status ...
	I0731 18:07:31.417533   73696 api_server.go:253] Checking apiserver healthz at https://192.168.72.197:8444/healthz ...
	I0731 18:07:34.340211   73696 api_server.go:279] https://192.168.72.197:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0731 18:07:34.340240   73696 api_server.go:103] status: https://192.168.72.197:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0731 18:07:34.340274   73696 api_server.go:253] Checking apiserver healthz at https://192.168.72.197:8444/healthz ...
	I0731 18:07:34.426446   73696 api_server.go:279] https://192.168.72.197:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0731 18:07:34.426504   73696 api_server.go:103] status: https://192.168.72.197:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0731 18:07:34.426522   73696 api_server.go:253] Checking apiserver healthz at https://192.168.72.197:8444/healthz ...
	I0731 18:07:34.436383   73696 api_server.go:279] https://192.168.72.197:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0731 18:07:34.436416   73696 api_server.go:103] status: https://192.168.72.197:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0731 18:07:34.918371   73696 api_server.go:253] Checking apiserver healthz at https://192.168.72.197:8444/healthz ...
	I0731 18:07:34.922668   73696 api_server.go:279] https://192.168.72.197:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0731 18:07:34.922699   73696 api_server.go:103] status: https://192.168.72.197:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0731 18:07:35.418265   73696 api_server.go:253] Checking apiserver healthz at https://192.168.72.197:8444/healthz ...
	I0731 18:07:35.435931   73696 api_server.go:279] https://192.168.72.197:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0731 18:07:35.435966   73696 api_server.go:103] status: https://192.168.72.197:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0731 18:07:35.918570   73696 api_server.go:253] Checking apiserver healthz at https://192.168.72.197:8444/healthz ...
	I0731 18:07:35.923674   73696 api_server.go:279] https://192.168.72.197:8444/healthz returned 200:
	ok
	I0731 18:07:35.929781   73696 api_server.go:141] control plane version: v1.30.3
	I0731 18:07:35.929809   73696 api_server.go:131] duration metric: took 4.512290009s to wait for apiserver health ...
	I0731 18:07:35.929820   73696 cni.go:84] Creating CNI manager for ""
	I0731 18:07:35.929827   73696 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0731 18:07:35.931827   73696 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0731 18:07:35.933104   73696 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0731 18:07:35.943548   73696 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0731 18:07:35.961932   73696 system_pods.go:43] waiting for kube-system pods to appear ...
	I0731 18:07:35.977855   73696 system_pods.go:59] 8 kube-system pods found
	I0731 18:07:35.977894   73696 system_pods.go:61] "coredns-7db6d8ff4d-kvxmb" [df8cf19b-5e62-4c38-9124-3257fea48fbb] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0731 18:07:35.977905   73696 system_pods.go:61] "etcd-default-k8s-diff-port-094310" [fe526f06-bd6c-4708-a0f3-e49b731e3a61] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0731 18:07:35.977915   73696 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-094310" [f0191941-87ad-4934-a02a-75b07649d5dc] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0731 18:07:35.977924   73696 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-094310" [28b4bdc4-4eea-41c0-9182-b07034d7363e] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0731 18:07:35.977936   73696 system_pods.go:61] "kube-proxy-8bgl7" [577052d5-fe7d-4547-bfbf-d3c938884767] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0731 18:07:35.977946   73696 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-094310" [df25971f-b25a-4344-a91e-c4b0c9ee5282] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0731 18:07:35.977964   73696 system_pods.go:61] "metrics-server-569cc877fc-64hp4" [847243bf-6568-41ff-a1e4-70b0a89c63dd] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0731 18:07:35.977978   73696 system_pods.go:61] "storage-provisioner" [6493bfa6-e40b-405c-93b6-ee5053efbdf6] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0731 18:07:35.977991   73696 system_pods.go:74] duration metric: took 16.038231ms to wait for pod list to return data ...
	I0731 18:07:35.978003   73696 node_conditions.go:102] verifying NodePressure condition ...
	I0731 18:07:35.983206   73696 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0731 18:07:35.983234   73696 node_conditions.go:123] node cpu capacity is 2
	I0731 18:07:35.983251   73696 node_conditions.go:105] duration metric: took 5.239492ms to run NodePressure ...
	I0731 18:07:35.983270   73696 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0731 18:07:37.155307   73800 main.go:141] libmachine: (embed-certs-436067) DBG | domain embed-certs-436067 has defined MAC address 52:54:00:87:1e:25 in network mk-embed-certs-436067
	I0731 18:07:37.155787   73800 main.go:141] libmachine: (embed-certs-436067) DBG | unable to find current IP address of domain embed-certs-436067 in network mk-embed-certs-436067
	I0731 18:07:37.155822   73800 main.go:141] libmachine: (embed-certs-436067) DBG | I0731 18:07:37.155717   74994 retry.go:31] will retry after 3.095991533s: waiting for machine to come up
	I0731 18:07:36.249072   73696 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0731 18:07:36.253639   73696 kubeadm.go:739] kubelet initialised
	I0731 18:07:36.253661   73696 kubeadm.go:740] duration metric: took 4.559461ms waiting for restarted kubelet to initialise ...
	I0731 18:07:36.253669   73696 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0731 18:07:36.258632   73696 pod_ready.go:78] waiting up to 4m0s for pod "coredns-7db6d8ff4d-kvxmb" in "kube-system" namespace to be "Ready" ...
	I0731 18:07:36.262785   73696 pod_ready.go:97] node "default-k8s-diff-port-094310" hosting pod "coredns-7db6d8ff4d-kvxmb" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-094310" has status "Ready":"False"
	I0731 18:07:36.262811   73696 pod_ready.go:81] duration metric: took 4.157359ms for pod "coredns-7db6d8ff4d-kvxmb" in "kube-system" namespace to be "Ready" ...
	E0731 18:07:36.262823   73696 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-094310" hosting pod "coredns-7db6d8ff4d-kvxmb" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-094310" has status "Ready":"False"
	I0731 18:07:36.262831   73696 pod_ready.go:78] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-094310" in "kube-system" namespace to be "Ready" ...
	I0731 18:07:36.269224   73696 pod_ready.go:97] node "default-k8s-diff-port-094310" hosting pod "etcd-default-k8s-diff-port-094310" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-094310" has status "Ready":"False"
	I0731 18:07:36.269250   73696 pod_ready.go:81] duration metric: took 6.406018ms for pod "etcd-default-k8s-diff-port-094310" in "kube-system" namespace to be "Ready" ...
	E0731 18:07:36.269263   73696 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-094310" hosting pod "etcd-default-k8s-diff-port-094310" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-094310" has status "Ready":"False"
	I0731 18:07:36.269270   73696 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-094310" in "kube-system" namespace to be "Ready" ...
	I0731 18:07:36.273379   73696 pod_ready.go:97] node "default-k8s-diff-port-094310" hosting pod "kube-apiserver-default-k8s-diff-port-094310" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-094310" has status "Ready":"False"
	I0731 18:07:36.273400   73696 pod_ready.go:81] duration metric: took 4.119945ms for pod "kube-apiserver-default-k8s-diff-port-094310" in "kube-system" namespace to be "Ready" ...
	E0731 18:07:36.273408   73696 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-094310" hosting pod "kube-apiserver-default-k8s-diff-port-094310" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-094310" has status "Ready":"False"
	I0731 18:07:36.273414   73696 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-094310" in "kube-system" namespace to be "Ready" ...
	I0731 18:07:36.365153   73696 pod_ready.go:97] node "default-k8s-diff-port-094310" hosting pod "kube-controller-manager-default-k8s-diff-port-094310" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-094310" has status "Ready":"False"
	I0731 18:07:36.365183   73696 pod_ready.go:81] duration metric: took 91.758203ms for pod "kube-controller-manager-default-k8s-diff-port-094310" in "kube-system" namespace to be "Ready" ...
	E0731 18:07:36.365195   73696 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-094310" hosting pod "kube-controller-manager-default-k8s-diff-port-094310" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-094310" has status "Ready":"False"
	I0731 18:07:36.365201   73696 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-8bgl7" in "kube-system" namespace to be "Ready" ...
	I0731 18:07:36.765371   73696 pod_ready.go:92] pod "kube-proxy-8bgl7" in "kube-system" namespace has status "Ready":"True"
	I0731 18:07:36.765393   73696 pod_ready.go:81] duration metric: took 400.181854ms for pod "kube-proxy-8bgl7" in "kube-system" namespace to be "Ready" ...
	I0731 18:07:36.765405   73696 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-094310" in "kube-system" namespace to be "Ready" ...
	I0731 18:07:38.770757   73696 pod_ready.go:102] pod "kube-scheduler-default-k8s-diff-port-094310" in "kube-system" namespace has status "Ready":"False"
	I0731 18:07:40.772702   73696 pod_ready.go:102] pod "kube-scheduler-default-k8s-diff-port-094310" in "kube-system" namespace has status "Ready":"False"
	I0731 18:07:41.552094   74203 start.go:364] duration metric: took 3m46.106649241s to acquireMachinesLock for "old-k8s-version-276459"
	I0731 18:07:41.552166   74203 start.go:96] Skipping create...Using existing machine configuration
	I0731 18:07:41.552174   74203 fix.go:54] fixHost starting: 
	I0731 18:07:41.552553   74203 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 18:07:41.552595   74203 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 18:07:41.569965   74203 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43329
	I0731 18:07:41.570361   74203 main.go:141] libmachine: () Calling .GetVersion
	I0731 18:07:41.570884   74203 main.go:141] libmachine: Using API Version  1
	I0731 18:07:41.570905   74203 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 18:07:41.571247   74203 main.go:141] libmachine: () Calling .GetMachineName
	I0731 18:07:41.571454   74203 main.go:141] libmachine: (old-k8s-version-276459) Calling .DriverName
	I0731 18:07:41.571605   74203 main.go:141] libmachine: (old-k8s-version-276459) Calling .GetState
	I0731 18:07:41.573081   74203 fix.go:112] recreateIfNeeded on old-k8s-version-276459: state=Stopped err=<nil>
	I0731 18:07:41.573114   74203 main.go:141] libmachine: (old-k8s-version-276459) Calling .DriverName
	W0731 18:07:41.573276   74203 fix.go:138] unexpected machine state, will restart: <nil>
	I0731 18:07:41.575254   74203 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-276459" ...
	I0731 18:07:40.254868   73800 main.go:141] libmachine: (embed-certs-436067) DBG | domain embed-certs-436067 has defined MAC address 52:54:00:87:1e:25 in network mk-embed-certs-436067
	I0731 18:07:40.255367   73800 main.go:141] libmachine: (embed-certs-436067) Found IP for machine: 192.168.50.86
	I0731 18:07:40.255385   73800 main.go:141] libmachine: (embed-certs-436067) Reserving static IP address...
	I0731 18:07:40.255405   73800 main.go:141] libmachine: (embed-certs-436067) DBG | domain embed-certs-436067 has current primary IP address 192.168.50.86 and MAC address 52:54:00:87:1e:25 in network mk-embed-certs-436067
	I0731 18:07:40.255798   73800 main.go:141] libmachine: (embed-certs-436067) DBG | found host DHCP lease matching {name: "embed-certs-436067", mac: "52:54:00:87:1e:25", ip: "192.168.50.86"} in network mk-embed-certs-436067: {Iface:virbr4 ExpiryTime:2024-07-31 19:07:32 +0000 UTC Type:0 Mac:52:54:00:87:1e:25 Iaid: IPaddr:192.168.50.86 Prefix:24 Hostname:embed-certs-436067 Clientid:01:52:54:00:87:1e:25}
	I0731 18:07:40.255822   73800 main.go:141] libmachine: (embed-certs-436067) Reserved static IP address: 192.168.50.86
	I0731 18:07:40.255839   73800 main.go:141] libmachine: (embed-certs-436067) DBG | skip adding static IP to network mk-embed-certs-436067 - found existing host DHCP lease matching {name: "embed-certs-436067", mac: "52:54:00:87:1e:25", ip: "192.168.50.86"}
	I0731 18:07:40.255853   73800 main.go:141] libmachine: (embed-certs-436067) DBG | Getting to WaitForSSH function...
	I0731 18:07:40.255865   73800 main.go:141] libmachine: (embed-certs-436067) Waiting for SSH to be available...
	I0731 18:07:40.257994   73800 main.go:141] libmachine: (embed-certs-436067) DBG | domain embed-certs-436067 has defined MAC address 52:54:00:87:1e:25 in network mk-embed-certs-436067
	I0731 18:07:40.258304   73800 main.go:141] libmachine: (embed-certs-436067) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:1e:25", ip: ""} in network mk-embed-certs-436067: {Iface:virbr4 ExpiryTime:2024-07-31 19:07:32 +0000 UTC Type:0 Mac:52:54:00:87:1e:25 Iaid: IPaddr:192.168.50.86 Prefix:24 Hostname:embed-certs-436067 Clientid:01:52:54:00:87:1e:25}
	I0731 18:07:40.258331   73800 main.go:141] libmachine: (embed-certs-436067) DBG | domain embed-certs-436067 has defined IP address 192.168.50.86 and MAC address 52:54:00:87:1e:25 in network mk-embed-certs-436067
	I0731 18:07:40.258475   73800 main.go:141] libmachine: (embed-certs-436067) DBG | Using SSH client type: external
	I0731 18:07:40.258492   73800 main.go:141] libmachine: (embed-certs-436067) DBG | Using SSH private key: /home/jenkins/minikube-integration/19349-8084/.minikube/machines/embed-certs-436067/id_rsa (-rw-------)
	I0731 18:07:40.258594   73800 main.go:141] libmachine: (embed-certs-436067) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.86 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19349-8084/.minikube/machines/embed-certs-436067/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0731 18:07:40.258625   73800 main.go:141] libmachine: (embed-certs-436067) DBG | About to run SSH command:
	I0731 18:07:40.258644   73800 main.go:141] libmachine: (embed-certs-436067) DBG | exit 0
	I0731 18:07:40.387051   73800 main.go:141] libmachine: (embed-certs-436067) DBG | SSH cmd err, output: <nil>: 
	I0731 18:07:40.387459   73800 main.go:141] libmachine: (embed-certs-436067) Calling .GetConfigRaw
	I0731 18:07:40.388093   73800 main.go:141] libmachine: (embed-certs-436067) Calling .GetIP
	I0731 18:07:40.390805   73800 main.go:141] libmachine: (embed-certs-436067) DBG | domain embed-certs-436067 has defined MAC address 52:54:00:87:1e:25 in network mk-embed-certs-436067
	I0731 18:07:40.391260   73800 main.go:141] libmachine: (embed-certs-436067) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:1e:25", ip: ""} in network mk-embed-certs-436067: {Iface:virbr4 ExpiryTime:2024-07-31 19:07:32 +0000 UTC Type:0 Mac:52:54:00:87:1e:25 Iaid: IPaddr:192.168.50.86 Prefix:24 Hostname:embed-certs-436067 Clientid:01:52:54:00:87:1e:25}
	I0731 18:07:40.391306   73800 main.go:141] libmachine: (embed-certs-436067) DBG | domain embed-certs-436067 has defined IP address 192.168.50.86 and MAC address 52:54:00:87:1e:25 in network mk-embed-certs-436067
	I0731 18:07:40.391534   73800 profile.go:143] Saving config to /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/embed-certs-436067/config.json ...
	I0731 18:07:40.391769   73800 machine.go:94] provisionDockerMachine start ...
	I0731 18:07:40.391793   73800 main.go:141] libmachine: (embed-certs-436067) Calling .DriverName
	I0731 18:07:40.392012   73800 main.go:141] libmachine: (embed-certs-436067) Calling .GetSSHHostname
	I0731 18:07:40.394412   73800 main.go:141] libmachine: (embed-certs-436067) DBG | domain embed-certs-436067 has defined MAC address 52:54:00:87:1e:25 in network mk-embed-certs-436067
	I0731 18:07:40.394809   73800 main.go:141] libmachine: (embed-certs-436067) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:1e:25", ip: ""} in network mk-embed-certs-436067: {Iface:virbr4 ExpiryTime:2024-07-31 19:07:32 +0000 UTC Type:0 Mac:52:54:00:87:1e:25 Iaid: IPaddr:192.168.50.86 Prefix:24 Hostname:embed-certs-436067 Clientid:01:52:54:00:87:1e:25}
	I0731 18:07:40.394839   73800 main.go:141] libmachine: (embed-certs-436067) DBG | domain embed-certs-436067 has defined IP address 192.168.50.86 and MAC address 52:54:00:87:1e:25 in network mk-embed-certs-436067
	I0731 18:07:40.395029   73800 main.go:141] libmachine: (embed-certs-436067) Calling .GetSSHPort
	I0731 18:07:40.395209   73800 main.go:141] libmachine: (embed-certs-436067) Calling .GetSSHKeyPath
	I0731 18:07:40.395372   73800 main.go:141] libmachine: (embed-certs-436067) Calling .GetSSHKeyPath
	I0731 18:07:40.395480   73800 main.go:141] libmachine: (embed-certs-436067) Calling .GetSSHUsername
	I0731 18:07:40.395624   73800 main.go:141] libmachine: Using SSH client type: native
	I0731 18:07:40.395808   73800 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.86 22 <nil> <nil>}
	I0731 18:07:40.395817   73800 main.go:141] libmachine: About to run SSH command:
	hostname
	I0731 18:07:40.503041   73800 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0731 18:07:40.503073   73800 main.go:141] libmachine: (embed-certs-436067) Calling .GetMachineName
	I0731 18:07:40.503326   73800 buildroot.go:166] provisioning hostname "embed-certs-436067"
	I0731 18:07:40.503352   73800 main.go:141] libmachine: (embed-certs-436067) Calling .GetMachineName
	I0731 18:07:40.503539   73800 main.go:141] libmachine: (embed-certs-436067) Calling .GetSSHHostname
	I0731 18:07:40.506604   73800 main.go:141] libmachine: (embed-certs-436067) DBG | domain embed-certs-436067 has defined MAC address 52:54:00:87:1e:25 in network mk-embed-certs-436067
	I0731 18:07:40.506940   73800 main.go:141] libmachine: (embed-certs-436067) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:1e:25", ip: ""} in network mk-embed-certs-436067: {Iface:virbr4 ExpiryTime:2024-07-31 19:07:32 +0000 UTC Type:0 Mac:52:54:00:87:1e:25 Iaid: IPaddr:192.168.50.86 Prefix:24 Hostname:embed-certs-436067 Clientid:01:52:54:00:87:1e:25}
	I0731 18:07:40.506967   73800 main.go:141] libmachine: (embed-certs-436067) DBG | domain embed-certs-436067 has defined IP address 192.168.50.86 and MAC address 52:54:00:87:1e:25 in network mk-embed-certs-436067
	I0731 18:07:40.507124   73800 main.go:141] libmachine: (embed-certs-436067) Calling .GetSSHPort
	I0731 18:07:40.507296   73800 main.go:141] libmachine: (embed-certs-436067) Calling .GetSSHKeyPath
	I0731 18:07:40.507438   73800 main.go:141] libmachine: (embed-certs-436067) Calling .GetSSHKeyPath
	I0731 18:07:40.507577   73800 main.go:141] libmachine: (embed-certs-436067) Calling .GetSSHUsername
	I0731 18:07:40.507752   73800 main.go:141] libmachine: Using SSH client type: native
	I0731 18:07:40.507912   73800 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.86 22 <nil> <nil>}
	I0731 18:07:40.507927   73800 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-436067 && echo "embed-certs-436067" | sudo tee /etc/hostname
	I0731 18:07:40.632627   73800 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-436067
	
	I0731 18:07:40.632678   73800 main.go:141] libmachine: (embed-certs-436067) Calling .GetSSHHostname
	I0731 18:07:40.635632   73800 main.go:141] libmachine: (embed-certs-436067) DBG | domain embed-certs-436067 has defined MAC address 52:54:00:87:1e:25 in network mk-embed-certs-436067
	I0731 18:07:40.635989   73800 main.go:141] libmachine: (embed-certs-436067) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:1e:25", ip: ""} in network mk-embed-certs-436067: {Iface:virbr4 ExpiryTime:2024-07-31 19:07:32 +0000 UTC Type:0 Mac:52:54:00:87:1e:25 Iaid: IPaddr:192.168.50.86 Prefix:24 Hostname:embed-certs-436067 Clientid:01:52:54:00:87:1e:25}
	I0731 18:07:40.636017   73800 main.go:141] libmachine: (embed-certs-436067) DBG | domain embed-certs-436067 has defined IP address 192.168.50.86 and MAC address 52:54:00:87:1e:25 in network mk-embed-certs-436067
	I0731 18:07:40.636168   73800 main.go:141] libmachine: (embed-certs-436067) Calling .GetSSHPort
	I0731 18:07:40.636386   73800 main.go:141] libmachine: (embed-certs-436067) Calling .GetSSHKeyPath
	I0731 18:07:40.636554   73800 main.go:141] libmachine: (embed-certs-436067) Calling .GetSSHKeyPath
	I0731 18:07:40.636751   73800 main.go:141] libmachine: (embed-certs-436067) Calling .GetSSHUsername
	I0731 18:07:40.636963   73800 main.go:141] libmachine: Using SSH client type: native
	I0731 18:07:40.637192   73800 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.86 22 <nil> <nil>}
	I0731 18:07:40.637213   73800 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-436067' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-436067/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-436067' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0731 18:07:40.755249   73800 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0731 18:07:40.755273   73800 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19349-8084/.minikube CaCertPath:/home/jenkins/minikube-integration/19349-8084/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19349-8084/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19349-8084/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19349-8084/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19349-8084/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19349-8084/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19349-8084/.minikube}
	I0731 18:07:40.755291   73800 buildroot.go:174] setting up certificates
	I0731 18:07:40.755301   73800 provision.go:84] configureAuth start
	I0731 18:07:40.755310   73800 main.go:141] libmachine: (embed-certs-436067) Calling .GetMachineName
	I0731 18:07:40.755602   73800 main.go:141] libmachine: (embed-certs-436067) Calling .GetIP
	I0731 18:07:40.758306   73800 main.go:141] libmachine: (embed-certs-436067) DBG | domain embed-certs-436067 has defined MAC address 52:54:00:87:1e:25 in network mk-embed-certs-436067
	I0731 18:07:40.758705   73800 main.go:141] libmachine: (embed-certs-436067) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:1e:25", ip: ""} in network mk-embed-certs-436067: {Iface:virbr4 ExpiryTime:2024-07-31 19:07:32 +0000 UTC Type:0 Mac:52:54:00:87:1e:25 Iaid: IPaddr:192.168.50.86 Prefix:24 Hostname:embed-certs-436067 Clientid:01:52:54:00:87:1e:25}
	I0731 18:07:40.758731   73800 main.go:141] libmachine: (embed-certs-436067) DBG | domain embed-certs-436067 has defined IP address 192.168.50.86 and MAC address 52:54:00:87:1e:25 in network mk-embed-certs-436067
	I0731 18:07:40.758865   73800 main.go:141] libmachine: (embed-certs-436067) Calling .GetSSHHostname
	I0731 18:07:40.760790   73800 main.go:141] libmachine: (embed-certs-436067) DBG | domain embed-certs-436067 has defined MAC address 52:54:00:87:1e:25 in network mk-embed-certs-436067
	I0731 18:07:40.761061   73800 main.go:141] libmachine: (embed-certs-436067) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:1e:25", ip: ""} in network mk-embed-certs-436067: {Iface:virbr4 ExpiryTime:2024-07-31 19:07:32 +0000 UTC Type:0 Mac:52:54:00:87:1e:25 Iaid: IPaddr:192.168.50.86 Prefix:24 Hostname:embed-certs-436067 Clientid:01:52:54:00:87:1e:25}
	I0731 18:07:40.761090   73800 main.go:141] libmachine: (embed-certs-436067) DBG | domain embed-certs-436067 has defined IP address 192.168.50.86 and MAC address 52:54:00:87:1e:25 in network mk-embed-certs-436067
	I0731 18:07:40.761244   73800 provision.go:143] copyHostCerts
	I0731 18:07:40.761299   73800 exec_runner.go:144] found /home/jenkins/minikube-integration/19349-8084/.minikube/ca.pem, removing ...
	I0731 18:07:40.761323   73800 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19349-8084/.minikube/ca.pem
	I0731 18:07:40.761376   73800 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19349-8084/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19349-8084/.minikube/ca.pem (1082 bytes)
	I0731 18:07:40.761479   73800 exec_runner.go:144] found /home/jenkins/minikube-integration/19349-8084/.minikube/cert.pem, removing ...
	I0731 18:07:40.761488   73800 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19349-8084/.minikube/cert.pem
	I0731 18:07:40.761509   73800 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19349-8084/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19349-8084/.minikube/cert.pem (1123 bytes)
	I0731 18:07:40.761562   73800 exec_runner.go:144] found /home/jenkins/minikube-integration/19349-8084/.minikube/key.pem, removing ...
	I0731 18:07:40.761569   73800 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19349-8084/.minikube/key.pem
	I0731 18:07:40.761586   73800 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19349-8084/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19349-8084/.minikube/key.pem (1675 bytes)
	I0731 18:07:40.761635   73800 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19349-8084/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19349-8084/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19349-8084/.minikube/certs/ca-key.pem org=jenkins.embed-certs-436067 san=[127.0.0.1 192.168.50.86 embed-certs-436067 localhost minikube]
	I0731 18:07:40.874612   73800 provision.go:177] copyRemoteCerts
	I0731 18:07:40.874666   73800 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0731 18:07:40.874691   73800 main.go:141] libmachine: (embed-certs-436067) Calling .GetSSHHostname
	I0731 18:07:40.877623   73800 main.go:141] libmachine: (embed-certs-436067) DBG | domain embed-certs-436067 has defined MAC address 52:54:00:87:1e:25 in network mk-embed-certs-436067
	I0731 18:07:40.878044   73800 main.go:141] libmachine: (embed-certs-436067) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:1e:25", ip: ""} in network mk-embed-certs-436067: {Iface:virbr4 ExpiryTime:2024-07-31 19:07:32 +0000 UTC Type:0 Mac:52:54:00:87:1e:25 Iaid: IPaddr:192.168.50.86 Prefix:24 Hostname:embed-certs-436067 Clientid:01:52:54:00:87:1e:25}
	I0731 18:07:40.878075   73800 main.go:141] libmachine: (embed-certs-436067) DBG | domain embed-certs-436067 has defined IP address 192.168.50.86 and MAC address 52:54:00:87:1e:25 in network mk-embed-certs-436067
	I0731 18:07:40.878206   73800 main.go:141] libmachine: (embed-certs-436067) Calling .GetSSHPort
	I0731 18:07:40.878403   73800 main.go:141] libmachine: (embed-certs-436067) Calling .GetSSHKeyPath
	I0731 18:07:40.878556   73800 main.go:141] libmachine: (embed-certs-436067) Calling .GetSSHUsername
	I0731 18:07:40.878706   73800 sshutil.go:53] new ssh client: &{IP:192.168.50.86 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19349-8084/.minikube/machines/embed-certs-436067/id_rsa Username:docker}
	I0731 18:07:40.965720   73800 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19349-8084/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0731 18:07:40.987836   73800 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19349-8084/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0731 18:07:41.012423   73800 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19349-8084/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0731 18:07:41.036366   73800 provision.go:87] duration metric: took 281.054266ms to configureAuth
	I0731 18:07:41.036392   73800 buildroot.go:189] setting minikube options for container-runtime
	I0731 18:07:41.036561   73800 config.go:182] Loaded profile config "embed-certs-436067": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0731 18:07:41.036626   73800 main.go:141] libmachine: (embed-certs-436067) Calling .GetSSHHostname
	I0731 18:07:41.039204   73800 main.go:141] libmachine: (embed-certs-436067) DBG | domain embed-certs-436067 has defined MAC address 52:54:00:87:1e:25 in network mk-embed-certs-436067
	I0731 18:07:41.039587   73800 main.go:141] libmachine: (embed-certs-436067) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:1e:25", ip: ""} in network mk-embed-certs-436067: {Iface:virbr4 ExpiryTime:2024-07-31 19:07:32 +0000 UTC Type:0 Mac:52:54:00:87:1e:25 Iaid: IPaddr:192.168.50.86 Prefix:24 Hostname:embed-certs-436067 Clientid:01:52:54:00:87:1e:25}
	I0731 18:07:41.039615   73800 main.go:141] libmachine: (embed-certs-436067) DBG | domain embed-certs-436067 has defined IP address 192.168.50.86 and MAC address 52:54:00:87:1e:25 in network mk-embed-certs-436067
	I0731 18:07:41.039814   73800 main.go:141] libmachine: (embed-certs-436067) Calling .GetSSHPort
	I0731 18:07:41.040021   73800 main.go:141] libmachine: (embed-certs-436067) Calling .GetSSHKeyPath
	I0731 18:07:41.040162   73800 main.go:141] libmachine: (embed-certs-436067) Calling .GetSSHKeyPath
	I0731 18:07:41.040293   73800 main.go:141] libmachine: (embed-certs-436067) Calling .GetSSHUsername
	I0731 18:07:41.040462   73800 main.go:141] libmachine: Using SSH client type: native
	I0731 18:07:41.040642   73800 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.86 22 <nil> <nil>}
	I0731 18:07:41.040663   73800 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0731 18:07:41.307915   73800 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0731 18:07:41.307945   73800 machine.go:97] duration metric: took 916.161297ms to provisionDockerMachine
	I0731 18:07:41.307958   73800 start.go:293] postStartSetup for "embed-certs-436067" (driver="kvm2")
	I0731 18:07:41.307971   73800 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0731 18:07:41.307990   73800 main.go:141] libmachine: (embed-certs-436067) Calling .DriverName
	I0731 18:07:41.308383   73800 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0731 18:07:41.308409   73800 main.go:141] libmachine: (embed-certs-436067) Calling .GetSSHHostname
	I0731 18:07:41.311172   73800 main.go:141] libmachine: (embed-certs-436067) DBG | domain embed-certs-436067 has defined MAC address 52:54:00:87:1e:25 in network mk-embed-certs-436067
	I0731 18:07:41.311532   73800 main.go:141] libmachine: (embed-certs-436067) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:1e:25", ip: ""} in network mk-embed-certs-436067: {Iface:virbr4 ExpiryTime:2024-07-31 19:07:32 +0000 UTC Type:0 Mac:52:54:00:87:1e:25 Iaid: IPaddr:192.168.50.86 Prefix:24 Hostname:embed-certs-436067 Clientid:01:52:54:00:87:1e:25}
	I0731 18:07:41.311559   73800 main.go:141] libmachine: (embed-certs-436067) DBG | domain embed-certs-436067 has defined IP address 192.168.50.86 and MAC address 52:54:00:87:1e:25 in network mk-embed-certs-436067
	I0731 18:07:41.311712   73800 main.go:141] libmachine: (embed-certs-436067) Calling .GetSSHPort
	I0731 18:07:41.311940   73800 main.go:141] libmachine: (embed-certs-436067) Calling .GetSSHKeyPath
	I0731 18:07:41.312132   73800 main.go:141] libmachine: (embed-certs-436067) Calling .GetSSHUsername
	I0731 18:07:41.312251   73800 sshutil.go:53] new ssh client: &{IP:192.168.50.86 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19349-8084/.minikube/machines/embed-certs-436067/id_rsa Username:docker}
	I0731 18:07:41.397229   73800 ssh_runner.go:195] Run: cat /etc/os-release
	I0731 18:07:41.401356   73800 info.go:137] Remote host: Buildroot 2023.02.9
	I0731 18:07:41.401380   73800 filesync.go:126] Scanning /home/jenkins/minikube-integration/19349-8084/.minikube/addons for local assets ...
	I0731 18:07:41.401458   73800 filesync.go:126] Scanning /home/jenkins/minikube-integration/19349-8084/.minikube/files for local assets ...
	I0731 18:07:41.401571   73800 filesync.go:149] local asset: /home/jenkins/minikube-integration/19349-8084/.minikube/files/etc/ssl/certs/152592.pem -> 152592.pem in /etc/ssl/certs
	I0731 18:07:41.401696   73800 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0731 18:07:41.410540   73800 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19349-8084/.minikube/files/etc/ssl/certs/152592.pem --> /etc/ssl/certs/152592.pem (1708 bytes)
	I0731 18:07:41.434298   73800 start.go:296] duration metric: took 126.324424ms for postStartSetup
	I0731 18:07:41.434342   73800 fix.go:56] duration metric: took 19.306472215s for fixHost
	I0731 18:07:41.434363   73800 main.go:141] libmachine: (embed-certs-436067) Calling .GetSSHHostname
	I0731 18:07:41.437502   73800 main.go:141] libmachine: (embed-certs-436067) DBG | domain embed-certs-436067 has defined MAC address 52:54:00:87:1e:25 in network mk-embed-certs-436067
	I0731 18:07:41.438007   73800 main.go:141] libmachine: (embed-certs-436067) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:1e:25", ip: ""} in network mk-embed-certs-436067: {Iface:virbr4 ExpiryTime:2024-07-31 19:07:32 +0000 UTC Type:0 Mac:52:54:00:87:1e:25 Iaid: IPaddr:192.168.50.86 Prefix:24 Hostname:embed-certs-436067 Clientid:01:52:54:00:87:1e:25}
	I0731 18:07:41.438038   73800 main.go:141] libmachine: (embed-certs-436067) DBG | domain embed-certs-436067 has defined IP address 192.168.50.86 and MAC address 52:54:00:87:1e:25 in network mk-embed-certs-436067
	I0731 18:07:41.438221   73800 main.go:141] libmachine: (embed-certs-436067) Calling .GetSSHPort
	I0731 18:07:41.438435   73800 main.go:141] libmachine: (embed-certs-436067) Calling .GetSSHKeyPath
	I0731 18:07:41.438613   73800 main.go:141] libmachine: (embed-certs-436067) Calling .GetSSHKeyPath
	I0731 18:07:41.438752   73800 main.go:141] libmachine: (embed-certs-436067) Calling .GetSSHUsername
	I0731 18:07:41.438932   73800 main.go:141] libmachine: Using SSH client type: native
	I0731 18:07:41.439086   73800 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.86 22 <nil> <nil>}
	I0731 18:07:41.439095   73800 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0731 18:07:41.551915   73800 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722449261.529568895
	
	I0731 18:07:41.551937   73800 fix.go:216] guest clock: 1722449261.529568895
	I0731 18:07:41.551944   73800 fix.go:229] Guest: 2024-07-31 18:07:41.529568895 +0000 UTC Remote: 2024-07-31 18:07:41.434346377 +0000 UTC m=+286.960766339 (delta=95.222518ms)
	I0731 18:07:41.551999   73800 fix.go:200] guest clock delta is within tolerance: 95.222518ms
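
The fix.go lines above compare the guest clock against the host and only act when the delta exceeds a tolerance. An illustrative sketch of that check follows; the 2-second tolerance is an assumed value, not the threshold minikube actually uses.

	// Illustrative clock-delta check; the tolerance is an assumption.
	package main

	import (
		"fmt"
		"time"
	)

	const clockTolerance = 2 * time.Second // assumed for illustration

	func clockDeltaWithinTolerance(guest, host time.Time) (time.Duration, bool) {
		delta := guest.Sub(host)
		if delta < 0 {
			delta = -delta
		}
		return delta, delta <= clockTolerance
	}

	func main() {
		host := time.Now()
		guest := host.Add(95 * time.Millisecond) // roughly the delta seen in the log
		d, ok := clockDeltaWithinTolerance(guest, host)
		fmt.Printf("delta=%v withinTolerance=%v\n", d, ok)
	}
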
	I0731 18:07:41.552010   73800 start.go:83] releasing machines lock for "embed-certs-436067", held for 19.42417291s
	I0731 18:07:41.552036   73800 main.go:141] libmachine: (embed-certs-436067) Calling .DriverName
	I0731 18:07:41.552377   73800 main.go:141] libmachine: (embed-certs-436067) Calling .GetIP
	I0731 18:07:41.554945   73800 main.go:141] libmachine: (embed-certs-436067) DBG | domain embed-certs-436067 has defined MAC address 52:54:00:87:1e:25 in network mk-embed-certs-436067
	I0731 18:07:41.555385   73800 main.go:141] libmachine: (embed-certs-436067) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:1e:25", ip: ""} in network mk-embed-certs-436067: {Iface:virbr4 ExpiryTime:2024-07-31 19:07:32 +0000 UTC Type:0 Mac:52:54:00:87:1e:25 Iaid: IPaddr:192.168.50.86 Prefix:24 Hostname:embed-certs-436067 Clientid:01:52:54:00:87:1e:25}
	I0731 18:07:41.555415   73800 main.go:141] libmachine: (embed-certs-436067) DBG | domain embed-certs-436067 has defined IP address 192.168.50.86 and MAC address 52:54:00:87:1e:25 in network mk-embed-certs-436067
	I0731 18:07:41.555583   73800 main.go:141] libmachine: (embed-certs-436067) Calling .DriverName
	I0731 18:07:41.556139   73800 main.go:141] libmachine: (embed-certs-436067) Calling .DriverName
	I0731 18:07:41.556362   73800 main.go:141] libmachine: (embed-certs-436067) Calling .DriverName
	I0731 18:07:41.556448   73800 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0731 18:07:41.556507   73800 main.go:141] libmachine: (embed-certs-436067) Calling .GetSSHHostname
	I0731 18:07:41.556619   73800 ssh_runner.go:195] Run: cat /version.json
	I0731 18:07:41.556634   73800 main.go:141] libmachine: (embed-certs-436067) Calling .GetSSHHostname
	I0731 18:07:41.559700   73800 main.go:141] libmachine: (embed-certs-436067) DBG | domain embed-certs-436067 has defined MAC address 52:54:00:87:1e:25 in network mk-embed-certs-436067
	I0731 18:07:41.559847   73800 main.go:141] libmachine: (embed-certs-436067) DBG | domain embed-certs-436067 has defined MAC address 52:54:00:87:1e:25 in network mk-embed-certs-436067
	I0731 18:07:41.560160   73800 main.go:141] libmachine: (embed-certs-436067) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:1e:25", ip: ""} in network mk-embed-certs-436067: {Iface:virbr4 ExpiryTime:2024-07-31 19:07:32 +0000 UTC Type:0 Mac:52:54:00:87:1e:25 Iaid: IPaddr:192.168.50.86 Prefix:24 Hostname:embed-certs-436067 Clientid:01:52:54:00:87:1e:25}
	I0731 18:07:41.560227   73800 main.go:141] libmachine: (embed-certs-436067) DBG | domain embed-certs-436067 has defined IP address 192.168.50.86 and MAC address 52:54:00:87:1e:25 in network mk-embed-certs-436067
	I0731 18:07:41.560277   73800 main.go:141] libmachine: (embed-certs-436067) Calling .GetSSHPort
	I0731 18:07:41.560374   73800 main.go:141] libmachine: (embed-certs-436067) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:1e:25", ip: ""} in network mk-embed-certs-436067: {Iface:virbr4 ExpiryTime:2024-07-31 19:07:32 +0000 UTC Type:0 Mac:52:54:00:87:1e:25 Iaid: IPaddr:192.168.50.86 Prefix:24 Hostname:embed-certs-436067 Clientid:01:52:54:00:87:1e:25}
	I0731 18:07:41.560440   73800 main.go:141] libmachine: (embed-certs-436067) DBG | domain embed-certs-436067 has defined IP address 192.168.50.86 and MAC address 52:54:00:87:1e:25 in network mk-embed-certs-436067
	I0731 18:07:41.560582   73800 main.go:141] libmachine: (embed-certs-436067) Calling .GetSSHKeyPath
	I0731 18:07:41.560652   73800 main.go:141] libmachine: (embed-certs-436067) Calling .GetSSHPort
	I0731 18:07:41.560697   73800 main.go:141] libmachine: (embed-certs-436067) Calling .GetSSHUsername
	I0731 18:07:41.560745   73800 main.go:141] libmachine: (embed-certs-436067) Calling .GetSSHKeyPath
	I0731 18:07:41.560833   73800 sshutil.go:53] new ssh client: &{IP:192.168.50.86 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19349-8084/.minikube/machines/embed-certs-436067/id_rsa Username:docker}
	I0731 18:07:41.560909   73800 main.go:141] libmachine: (embed-certs-436067) Calling .GetSSHUsername
	I0731 18:07:41.561060   73800 sshutil.go:53] new ssh client: &{IP:192.168.50.86 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19349-8084/.minikube/machines/embed-certs-436067/id_rsa Username:docker}
	I0731 18:07:41.640796   73800 ssh_runner.go:195] Run: systemctl --version
	I0731 18:07:41.671461   73800 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0731 18:07:41.820881   73800 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0731 18:07:41.826610   73800 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0731 18:07:41.826673   73800 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0731 18:07:41.841766   73800 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0731 18:07:41.841789   73800 start.go:495] detecting cgroup driver to use...
	I0731 18:07:41.841872   73800 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0731 18:07:41.858636   73800 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0731 18:07:41.873090   73800 docker.go:217] disabling cri-docker service (if available) ...
	I0731 18:07:41.873152   73800 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0731 18:07:41.890967   73800 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0731 18:07:41.907886   73800 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0731 18:07:42.022724   73800 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0731 18:07:42.173885   73800 docker.go:233] disabling docker service ...
	I0731 18:07:42.173969   73800 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0731 18:07:42.190959   73800 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0731 18:07:42.205274   73800 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0731 18:07:42.358130   73800 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0731 18:07:42.497981   73800 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0731 18:07:42.513774   73800 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0731 18:07:42.532713   73800 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0731 18:07:42.532808   73800 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 18:07:42.544367   73800 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0731 18:07:42.544427   73800 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 18:07:42.556427   73800 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 18:07:42.566399   73800 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 18:07:42.576633   73800 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0731 18:07:42.588508   73800 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 18:07:42.600011   73800 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 18:07:42.618858   73800 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
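
The sed invocations above rewrite /etc/crio/crio.conf.d/02-crio.conf to pin the pause image and switch the cgroup manager. A hedged in-process sketch covering just those two edits (run as root; the conmon_cgroup and default_sysctls edits are omitted):

	// Sketch of the pause_image and cgroup_manager rewrites shown above.
	package main

	import (
		"os"
		"regexp"
	)

	func main() {
		const conf = "/etc/crio/crio.conf.d/02-crio.conf"
		data, err := os.ReadFile(conf)
		if err != nil {
			panic(err)
		}
		text := string(data)
		text = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
			ReplaceAllString(text, `pause_image = "registry.k8s.io/pause:3.9"`)
		text = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
			ReplaceAllString(text, `cgroup_manager = "cgroupfs"`)
		if err := os.WriteFile(conf, []byte(text), 0o644); err != nil {
			panic(err)
		}
	}
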
	I0731 18:07:42.630437   73800 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0731 18:07:42.641459   73800 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0731 18:07:42.641528   73800 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0731 18:07:42.655000   73800 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
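
When the bridge-netfilter sysctl is missing, the flow above falls back to loading br_netfilter and then enables IP forwarding. A condensed sketch of that fallback using os/exec; the run helper is illustrative only.

	// Sketch of the netfilter fallback and ip_forward enablement.
	package main

	import (
		"log"
		"os/exec"
	)

	func run(name string, args ...string) error {
		out, err := exec.Command(name, args...).CombinedOutput()
		if err != nil {
			log.Printf("%s %v failed: %v\n%s", name, args, err, out)
		}
		return err
	}

	func main() {
		// The sysctl may not exist until br_netfilter is loaded, so a failure here
		// is treated as a hint rather than a hard error (matching the log above).
		if err := run("sudo", "sysctl", "net.bridge.bridge-nf-call-iptables"); err != nil {
			_ = run("sudo", "modprobe", "br_netfilter")
		}
		_ = run("sudo", "sh", "-c", "echo 1 > /proc/sys/net/ipv4/ip_forward")
	}
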
	I0731 18:07:42.664912   73800 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0731 18:07:42.791781   73800 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0731 18:07:42.936709   73800 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0731 18:07:42.936778   73800 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0731 18:07:42.941132   73800 start.go:563] Will wait 60s for crictl version
	I0731 18:07:42.941189   73800 ssh_runner.go:195] Run: which crictl
	I0731 18:07:42.944870   73800 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0731 18:07:42.983069   73800 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0731 18:07:42.983181   73800 ssh_runner.go:195] Run: crio --version
	I0731 18:07:43.011636   73800 ssh_runner.go:195] Run: crio --version
	I0731 18:07:43.043295   73800 out.go:177] * Preparing Kubernetes v1.30.3 on CRI-O 1.29.1 ...
	I0731 18:07:43.044545   73800 main.go:141] libmachine: (embed-certs-436067) Calling .GetIP
	I0731 18:07:43.047635   73800 main.go:141] libmachine: (embed-certs-436067) DBG | domain embed-certs-436067 has defined MAC address 52:54:00:87:1e:25 in network mk-embed-certs-436067
	I0731 18:07:43.048049   73800 main.go:141] libmachine: (embed-certs-436067) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:1e:25", ip: ""} in network mk-embed-certs-436067: {Iface:virbr4 ExpiryTime:2024-07-31 19:07:32 +0000 UTC Type:0 Mac:52:54:00:87:1e:25 Iaid: IPaddr:192.168.50.86 Prefix:24 Hostname:embed-certs-436067 Clientid:01:52:54:00:87:1e:25}
	I0731 18:07:43.048080   73800 main.go:141] libmachine: (embed-certs-436067) DBG | domain embed-certs-436067 has defined IP address 192.168.50.86 and MAC address 52:54:00:87:1e:25 in network mk-embed-certs-436067
	I0731 18:07:43.048330   73800 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0731 18:07:43.052269   73800 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0731 18:07:43.064116   73800 kubeadm.go:883] updating cluster {Name:embed-certs-436067 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1
.30.3 ClusterName:embed-certs-436067 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.86 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:f
alse MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0731 18:07:43.064283   73800 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0731 18:07:43.064361   73800 ssh_runner.go:195] Run: sudo crictl images --output json
	I0731 18:07:43.100437   73800 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.3". assuming images are not preloaded.
	I0731 18:07:43.100516   73800 ssh_runner.go:195] Run: which lz4
	I0731 18:07:43.104627   73800 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0731 18:07:43.108552   73800 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0731 18:07:43.108586   73800 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19349-8084/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (406200976 bytes)
	I0731 18:07:44.368238   73800 crio.go:462] duration metric: took 1.263636259s to copy over tarball
	I0731 18:07:44.368322   73800 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
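
The preload step copies preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 to /preloaded.tar.lz4 on the guest and unpacks it into /var. A reduced sketch of the extraction half, assuming the tarball is already in place:

	// Sketch: extract the preload tarball so images land in CRI-O's storage.
	package main

	import (
		"log"
		"os"
		"os/exec"
	)

	func main() {
		if _, err := os.Stat("/preloaded.tar.lz4"); err != nil {
			log.Fatalf("preload tarball missing: %v", err)
		}
		cmd := exec.Command("sudo", "tar", "--xattrs", "--xattrs-include", "security.capability",
			"-I", "lz4", "-C", "/var", "-xf", "/preloaded.tar.lz4")
		if out, err := cmd.CombinedOutput(); err != nil {
			log.Fatalf("extract failed: %v\n%s", err, out)
		}
	}
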
	I0731 18:07:41.576648   74203 main.go:141] libmachine: (old-k8s-version-276459) Calling .Start
	I0731 18:07:41.576823   74203 main.go:141] libmachine: (old-k8s-version-276459) Ensuring networks are active...
	I0731 18:07:41.577511   74203 main.go:141] libmachine: (old-k8s-version-276459) Ensuring network default is active
	I0731 18:07:41.578015   74203 main.go:141] libmachine: (old-k8s-version-276459) Ensuring network mk-old-k8s-version-276459 is active
	I0731 18:07:41.578469   74203 main.go:141] libmachine: (old-k8s-version-276459) Getting domain xml...
	I0731 18:07:41.579474   74203 main.go:141] libmachine: (old-k8s-version-276459) Creating domain...
	I0731 18:07:42.876409   74203 main.go:141] libmachine: (old-k8s-version-276459) Waiting to get IP...
	I0731 18:07:42.877345   74203 main.go:141] libmachine: (old-k8s-version-276459) DBG | domain old-k8s-version-276459 has defined MAC address 52:54:00:79:9d:96 in network mk-old-k8s-version-276459
	I0731 18:07:42.877788   74203 main.go:141] libmachine: (old-k8s-version-276459) DBG | unable to find current IP address of domain old-k8s-version-276459 in network mk-old-k8s-version-276459
	I0731 18:07:42.877841   74203 main.go:141] libmachine: (old-k8s-version-276459) DBG | I0731 18:07:42.877763   75164 retry.go:31] will retry after 218.764988ms: waiting for machine to come up
	I0731 18:07:43.098230   74203 main.go:141] libmachine: (old-k8s-version-276459) DBG | domain old-k8s-version-276459 has defined MAC address 52:54:00:79:9d:96 in network mk-old-k8s-version-276459
	I0731 18:07:43.098697   74203 main.go:141] libmachine: (old-k8s-version-276459) DBG | unable to find current IP address of domain old-k8s-version-276459 in network mk-old-k8s-version-276459
	I0731 18:07:43.098722   74203 main.go:141] libmachine: (old-k8s-version-276459) DBG | I0731 18:07:43.098650   75164 retry.go:31] will retry after 285.579707ms: waiting for machine to come up
	I0731 18:07:43.386356   74203 main.go:141] libmachine: (old-k8s-version-276459) DBG | domain old-k8s-version-276459 has defined MAC address 52:54:00:79:9d:96 in network mk-old-k8s-version-276459
	I0731 18:07:43.386897   74203 main.go:141] libmachine: (old-k8s-version-276459) DBG | unable to find current IP address of domain old-k8s-version-276459 in network mk-old-k8s-version-276459
	I0731 18:07:43.386928   74203 main.go:141] libmachine: (old-k8s-version-276459) DBG | I0731 18:07:43.386852   75164 retry.go:31] will retry after 389.197253ms: waiting for machine to come up
	I0731 18:07:43.778183   74203 main.go:141] libmachine: (old-k8s-version-276459) DBG | domain old-k8s-version-276459 has defined MAC address 52:54:00:79:9d:96 in network mk-old-k8s-version-276459
	I0731 18:07:43.778672   74203 main.go:141] libmachine: (old-k8s-version-276459) DBG | unable to find current IP address of domain old-k8s-version-276459 in network mk-old-k8s-version-276459
	I0731 18:07:43.778698   74203 main.go:141] libmachine: (old-k8s-version-276459) DBG | I0731 18:07:43.778622   75164 retry.go:31] will retry after 484.5108ms: waiting for machine to come up
	I0731 18:07:44.264412   74203 main.go:141] libmachine: (old-k8s-version-276459) DBG | domain old-k8s-version-276459 has defined MAC address 52:54:00:79:9d:96 in network mk-old-k8s-version-276459
	I0731 18:07:44.265042   74203 main.go:141] libmachine: (old-k8s-version-276459) DBG | unable to find current IP address of domain old-k8s-version-276459 in network mk-old-k8s-version-276459
	I0731 18:07:44.265073   74203 main.go:141] libmachine: (old-k8s-version-276459) DBG | I0731 18:07:44.264955   75164 retry.go:31] will retry after 621.551625ms: waiting for machine to come up
	I0731 18:07:44.887986   74203 main.go:141] libmachine: (old-k8s-version-276459) DBG | domain old-k8s-version-276459 has defined MAC address 52:54:00:79:9d:96 in network mk-old-k8s-version-276459
	I0731 18:07:44.888534   74203 main.go:141] libmachine: (old-k8s-version-276459) DBG | unable to find current IP address of domain old-k8s-version-276459 in network mk-old-k8s-version-276459
	I0731 18:07:44.888563   74203 main.go:141] libmachine: (old-k8s-version-276459) DBG | I0731 18:07:44.888489   75164 retry.go:31] will retry after 610.567971ms: waiting for machine to come up
	I0731 18:07:42.773583   73696 pod_ready.go:102] pod "kube-scheduler-default-k8s-diff-port-094310" in "kube-system" namespace has status "Ready":"False"
	I0731 18:07:44.272853   73696 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-094310" in "kube-system" namespace has status "Ready":"True"
	I0731 18:07:44.272874   73696 pod_ready.go:81] duration metric: took 7.507462023s for pod "kube-scheduler-default-k8s-diff-port-094310" in "kube-system" namespace to be "Ready" ...
	I0731 18:07:44.272886   73696 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-569cc877fc-64hp4" in "kube-system" namespace to be "Ready" ...
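
The pod_ready.go lines above poll until the pod's Ready condition is True, with a 4m0s budget for metrics-server. A hedged client-go sketch of such a loop; the kubeconfig path, poll interval, and error handling are stand-ins, not minikube's actual pod_ready.go.

	// Sketch: wait for a pod's Ready condition using client-go.
	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func podReady(pod *corev1.Pod) bool {
		for _, c := range pod.Status.Conditions {
			if c.Type == corev1.PodReady {
				return c.Status == corev1.ConditionTrue
			}
		}
		return false
	}

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		deadline := time.Now().Add(4 * time.Minute) // the 4m0s budget from the log
		for time.Now().Before(deadline) {
			pod, err := cs.CoreV1().Pods("kube-system").Get(context.TODO(),
				"metrics-server-569cc877fc-64hp4", metav1.GetOptions{})
			if err == nil && podReady(pod) {
				fmt.Println("pod is Ready")
				return
			}
			time.Sleep(2 * time.Second)
		}
		fmt.Println("timed out waiting for pod to become Ready")
	}
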
	I0731 18:07:46.689701   73800 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.321340678s)
	I0731 18:07:46.689730   73800 crio.go:469] duration metric: took 2.321463484s to extract the tarball
	I0731 18:07:46.689738   73800 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0731 18:07:46.749205   73800 ssh_runner.go:195] Run: sudo crictl images --output json
	I0731 18:07:46.805950   73800 crio.go:514] all images are preloaded for cri-o runtime.
	I0731 18:07:46.805979   73800 cache_images.go:84] Images are preloaded, skipping loading
	I0731 18:07:46.805990   73800 kubeadm.go:934] updating node { 192.168.50.86 8443 v1.30.3 crio true true} ...
	I0731 18:07:46.806135   73800 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-436067 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.86
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:embed-certs-436067 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0731 18:07:46.806233   73800 ssh_runner.go:195] Run: crio config
	I0731 18:07:46.865815   73800 cni.go:84] Creating CNI manager for ""
	I0731 18:07:46.865838   73800 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0731 18:07:46.865852   73800 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0731 18:07:46.865873   73800 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.86 APIServerPort:8443 KubernetesVersion:v1.30.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-436067 NodeName:embed-certs-436067 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.86"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.86 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath
:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0731 18:07:46.866048   73800 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.86
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-436067"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.86
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.86"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0731 18:07:46.866121   73800 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0731 18:07:46.875722   73800 binaries.go:44] Found k8s binaries, skipping transfer
	I0731 18:07:46.875786   73800 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0731 18:07:46.885107   73800 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I0731 18:07:46.903868   73800 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0731 18:07:46.919585   73800 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2159 bytes)
	I0731 18:07:46.939034   73800 ssh_runner.go:195] Run: grep 192.168.50.86	control-plane.minikube.internal$ /etc/hosts
	I0731 18:07:46.943460   73800 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.86	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0731 18:07:46.957699   73800 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0731 18:07:47.065714   73800 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0731 18:07:47.080655   73800 certs.go:68] Setting up /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/embed-certs-436067 for IP: 192.168.50.86
	I0731 18:07:47.080681   73800 certs.go:194] generating shared ca certs ...
	I0731 18:07:47.080717   73800 certs.go:226] acquiring lock for ca certs: {Name:mk49eed439085f9c30746706460ce213570d1997 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 18:07:47.080879   73800 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19349-8084/.minikube/ca.key
	I0731 18:07:47.080938   73800 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19349-8084/.minikube/proxy-client-ca.key
	I0731 18:07:47.080950   73800 certs.go:256] generating profile certs ...
	I0731 18:07:47.081046   73800 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/embed-certs-436067/client.key
	I0731 18:07:47.081113   73800 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/embed-certs-436067/apiserver.key.7b8160da
	I0731 18:07:47.081168   73800 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/embed-certs-436067/proxy-client.key
	I0731 18:07:47.081312   73800 certs.go:484] found cert: /home/jenkins/minikube-integration/19349-8084/.minikube/certs/15259.pem (1338 bytes)
	W0731 18:07:47.081367   73800 certs.go:480] ignoring /home/jenkins/minikube-integration/19349-8084/.minikube/certs/15259_empty.pem, impossibly tiny 0 bytes
	I0731 18:07:47.081380   73800 certs.go:484] found cert: /home/jenkins/minikube-integration/19349-8084/.minikube/certs/ca-key.pem (1675 bytes)
	I0731 18:07:47.081413   73800 certs.go:484] found cert: /home/jenkins/minikube-integration/19349-8084/.minikube/certs/ca.pem (1082 bytes)
	I0731 18:07:47.081438   73800 certs.go:484] found cert: /home/jenkins/minikube-integration/19349-8084/.minikube/certs/cert.pem (1123 bytes)
	I0731 18:07:47.081468   73800 certs.go:484] found cert: /home/jenkins/minikube-integration/19349-8084/.minikube/certs/key.pem (1675 bytes)
	I0731 18:07:47.081508   73800 certs.go:484] found cert: /home/jenkins/minikube-integration/19349-8084/.minikube/files/etc/ssl/certs/152592.pem (1708 bytes)
	I0731 18:07:47.082355   73800 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19349-8084/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0731 18:07:47.130037   73800 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19349-8084/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0731 18:07:47.171218   73800 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19349-8084/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0731 18:07:47.215745   73800 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19349-8084/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0731 18:07:47.244883   73800 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/embed-certs-436067/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I0731 18:07:47.270032   73800 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/embed-certs-436067/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0731 18:07:47.294900   73800 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/embed-certs-436067/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0731 18:07:47.317285   73800 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/embed-certs-436067/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0731 18:07:47.343000   73800 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19349-8084/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0731 18:07:47.369906   73800 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19349-8084/.minikube/certs/15259.pem --> /usr/share/ca-certificates/15259.pem (1338 bytes)
	I0731 18:07:47.392022   73800 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19349-8084/.minikube/files/etc/ssl/certs/152592.pem --> /usr/share/ca-certificates/152592.pem (1708 bytes)
	I0731 18:07:47.414219   73800 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0731 18:07:47.431931   73800 ssh_runner.go:195] Run: openssl version
	I0731 18:07:47.437602   73800 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0731 18:07:47.447585   73800 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0731 18:07:47.451779   73800 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 31 16:42 /usr/share/ca-certificates/minikubeCA.pem
	I0731 18:07:47.451833   73800 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0731 18:07:47.457309   73800 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0731 18:07:47.466917   73800 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/15259.pem && ln -fs /usr/share/ca-certificates/15259.pem /etc/ssl/certs/15259.pem"
	I0731 18:07:47.476211   73800 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/15259.pem
	I0731 18:07:47.480149   73800 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 31 16:54 /usr/share/ca-certificates/15259.pem
	I0731 18:07:47.480215   73800 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/15259.pem
	I0731 18:07:47.485412   73800 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/15259.pem /etc/ssl/certs/51391683.0"
	I0731 18:07:47.494852   73800 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/152592.pem && ln -fs /usr/share/ca-certificates/152592.pem /etc/ssl/certs/152592.pem"
	I0731 18:07:47.504407   73800 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/152592.pem
	I0731 18:07:47.509594   73800 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 31 16:54 /usr/share/ca-certificates/152592.pem
	I0731 18:07:47.509658   73800 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/152592.pem
	I0731 18:07:47.515728   73800 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/152592.pem /etc/ssl/certs/3ec20f2e.0"
	I0731 18:07:47.525660   73800 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0731 18:07:47.529953   73800 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0731 18:07:47.535576   73800 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0731 18:07:47.541158   73800 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0731 18:07:47.546633   73800 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0731 18:07:47.551827   73800 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0731 18:07:47.557100   73800 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
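
Each certificate above is checked with openssl x509 -checkend 86400, i.e. "does it expire within 24 hours". A standard-library equivalent in Go; the path is just one of the certs from the log.

	// Sketch: report whether a PEM certificate expires within a window.
	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"os"
		"time"
	)

	func expiresWithin(path string, window time.Duration) (bool, error) {
		data, err := os.ReadFile(path)
		if err != nil {
			return false, err
		}
		block, _ := pem.Decode(data)
		if block == nil {
			return false, fmt.Errorf("no PEM data in %s", path)
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			return false, err
		}
		return time.Now().Add(window).After(cert.NotAfter), nil
	}

	func main() {
		soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		fmt.Println("expires within 24h:", soon)
	}
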
	I0731 18:07:47.562447   73800 kubeadm.go:392] StartCluster: {Name:embed-certs-436067 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30
.3 ClusterName:embed-certs-436067 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.86 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:fals
e MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0731 18:07:47.562551   73800 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0731 18:07:47.562616   73800 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0731 18:07:47.610318   73800 cri.go:89] found id: ""
	I0731 18:07:47.610382   73800 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0731 18:07:47.623036   73800 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0731 18:07:47.623053   73800 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0731 18:07:47.623101   73800 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0731 18:07:47.631709   73800 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0731 18:07:47.632699   73800 kubeconfig.go:125] found "embed-certs-436067" server: "https://192.168.50.86:8443"
	I0731 18:07:47.634724   73800 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0731 18:07:47.643183   73800 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.50.86
	I0731 18:07:47.643207   73800 kubeadm.go:1160] stopping kube-system containers ...
	I0731 18:07:47.643218   73800 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0731 18:07:47.643264   73800 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0731 18:07:47.677438   73800 cri.go:89] found id: ""
	I0731 18:07:47.677527   73800 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0731 18:07:47.693427   73800 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0731 18:07:47.702889   73800 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0731 18:07:47.702907   73800 kubeadm.go:157] found existing configuration files:
	
	I0731 18:07:47.702956   73800 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0731 18:07:47.713958   73800 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0731 18:07:47.714017   73800 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0731 18:07:47.723931   73800 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0731 18:07:47.732615   73800 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0731 18:07:47.732673   73800 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0731 18:07:47.741168   73800 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0731 18:07:47.749164   73800 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0731 18:07:47.749217   73800 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0731 18:07:47.757691   73800 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0731 18:07:47.765479   73800 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0731 18:07:47.765530   73800 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0731 18:07:47.774002   73800 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0731 18:07:47.783757   73800 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0731 18:07:47.890835   73800 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0731 18:07:48.951421   73800 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.060547503s)
	I0731 18:07:48.951466   73800 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0731 18:07:49.152745   73800 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0731 18:07:49.224334   73800 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
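
Because existing configuration was found, the restart path runs individual kubeadm init phases against the generated kubeadm.yaml rather than a full kubeadm init. A condensed sketch of that sequence; output handling is simplified.

	// Sketch: run the kubeadm init phases used in the restart path above.
	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		phases := [][]string{
			{"init", "phase", "certs", "all"},
			{"init", "phase", "kubeconfig", "all"},
			{"init", "phase", "kubelet-start"},
			{"init", "phase", "control-plane", "all"},
			{"init", "phase", "etcd", "local"},
		}
		for _, p := range phases {
			args := append(p, "--config", "/var/tmp/minikube/kubeadm.yaml")
			out, err := exec.Command("/var/lib/minikube/binaries/v1.30.3/kubeadm", args...).CombinedOutput()
			fmt.Printf("kubeadm %v: err=%v\n%s\n", args, err, out)
		}
	}
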
	I0731 18:07:49.341066   73800 api_server.go:52] waiting for apiserver process to appear ...
	I0731 18:07:49.341147   73800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:07:45.500400   74203 main.go:141] libmachine: (old-k8s-version-276459) DBG | domain old-k8s-version-276459 has defined MAC address 52:54:00:79:9d:96 in network mk-old-k8s-version-276459
	I0731 18:07:45.500938   74203 main.go:141] libmachine: (old-k8s-version-276459) DBG | unable to find current IP address of domain old-k8s-version-276459 in network mk-old-k8s-version-276459
	I0731 18:07:45.500966   74203 main.go:141] libmachine: (old-k8s-version-276459) DBG | I0731 18:07:45.500890   75164 retry.go:31] will retry after 1.069889786s: waiting for machine to come up
	I0731 18:07:46.572634   74203 main.go:141] libmachine: (old-k8s-version-276459) DBG | domain old-k8s-version-276459 has defined MAC address 52:54:00:79:9d:96 in network mk-old-k8s-version-276459
	I0731 18:07:46.573085   74203 main.go:141] libmachine: (old-k8s-version-276459) DBG | unable to find current IP address of domain old-k8s-version-276459 in network mk-old-k8s-version-276459
	I0731 18:07:46.573128   74203 main.go:141] libmachine: (old-k8s-version-276459) DBG | I0731 18:07:46.572979   75164 retry.go:31] will retry after 1.047722466s: waiting for machine to come up
	I0731 18:07:47.622035   74203 main.go:141] libmachine: (old-k8s-version-276459) DBG | domain old-k8s-version-276459 has defined MAC address 52:54:00:79:9d:96 in network mk-old-k8s-version-276459
	I0731 18:07:47.622479   74203 main.go:141] libmachine: (old-k8s-version-276459) DBG | unable to find current IP address of domain old-k8s-version-276459 in network mk-old-k8s-version-276459
	I0731 18:07:47.622507   74203 main.go:141] libmachine: (old-k8s-version-276459) DBG | I0731 18:07:47.622435   75164 retry.go:31] will retry after 1.292658555s: waiting for machine to come up
	I0731 18:07:48.916255   74203 main.go:141] libmachine: (old-k8s-version-276459) DBG | domain old-k8s-version-276459 has defined MAC address 52:54:00:79:9d:96 in network mk-old-k8s-version-276459
	I0731 18:07:48.916755   74203 main.go:141] libmachine: (old-k8s-version-276459) DBG | unable to find current IP address of domain old-k8s-version-276459 in network mk-old-k8s-version-276459
	I0731 18:07:48.916778   74203 main.go:141] libmachine: (old-k8s-version-276459) DBG | I0731 18:07:48.916701   75164 retry.go:31] will retry after 2.006539925s: waiting for machine to come up
	I0731 18:07:46.281654   73696 pod_ready.go:102] pod "metrics-server-569cc877fc-64hp4" in "kube-system" namespace has status "Ready":"False"
	I0731 18:07:49.189881   73696 pod_ready.go:102] pod "metrics-server-569cc877fc-64hp4" in "kube-system" namespace has status "Ready":"False"
	I0731 18:07:49.841397   73800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:07:50.341264   73800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:07:50.409398   73800 api_server.go:72] duration metric: took 1.068329172s to wait for apiserver process to appear ...
	I0731 18:07:50.409432   73800 api_server.go:88] waiting for apiserver healthz status ...
	I0731 18:07:50.409457   73800 api_server.go:253] Checking apiserver healthz at https://192.168.50.86:8443/healthz ...
	I0731 18:07:50.410135   73800 api_server.go:269] stopped: https://192.168.50.86:8443/healthz: Get "https://192.168.50.86:8443/healthz": dial tcp 192.168.50.86:8443: connect: connection refused
	I0731 18:07:50.909802   73800 api_server.go:253] Checking apiserver healthz at https://192.168.50.86:8443/healthz ...
	I0731 18:07:52.636930   73800 api_server.go:279] https://192.168.50.86:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0731 18:07:52.636972   73800 api_server.go:103] status: https://192.168.50.86:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0731 18:07:52.636989   73800 api_server.go:253] Checking apiserver healthz at https://192.168.50.86:8443/healthz ...
	I0731 18:07:52.666947   73800 api_server.go:279] https://192.168.50.86:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0731 18:07:52.666980   73800 api_server.go:103] status: https://192.168.50.86:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0731 18:07:52.910391   73800 api_server.go:253] Checking apiserver healthz at https://192.168.50.86:8443/healthz ...
	I0731 18:07:52.916305   73800 api_server.go:279] https://192.168.50.86:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0731 18:07:52.916342   73800 api_server.go:103] status: https://192.168.50.86:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0731 18:07:53.409623   73800 api_server.go:253] Checking apiserver healthz at https://192.168.50.86:8443/healthz ...
	I0731 18:07:53.419159   73800 api_server.go:279] https://192.168.50.86:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0731 18:07:53.419205   73800 api_server.go:103] status: https://192.168.50.86:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0731 18:07:53.909654   73800 api_server.go:253] Checking apiserver healthz at https://192.168.50.86:8443/healthz ...
	I0731 18:07:53.913518   73800 api_server.go:279] https://192.168.50.86:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0731 18:07:53.913541   73800 api_server.go:103] status: https://192.168.50.86:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0731 18:07:54.409879   73800 api_server.go:253] Checking apiserver healthz at https://192.168.50.86:8443/healthz ...
	I0731 18:07:54.413948   73800 api_server.go:279] https://192.168.50.86:8443/healthz returned 200:
	ok
	I0731 18:07:54.422414   73800 api_server.go:141] control plane version: v1.30.3
	I0731 18:07:54.422444   73800 api_server.go:131] duration metric: took 4.013003689s to wait for apiserver health ...
	I0731 18:07:54.422458   73800 cni.go:84] Creating CNI manager for ""
	I0731 18:07:54.422467   73800 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0731 18:07:54.424680   73800 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0731 18:07:54.425887   73800 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0731 18:07:54.436394   73800 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0731 18:07:54.454533   73800 system_pods.go:43] waiting for kube-system pods to appear ...
	I0731 18:07:54.464268   73800 system_pods.go:59] 8 kube-system pods found
	I0731 18:07:54.464304   73800 system_pods.go:61] "coredns-7db6d8ff4d-h6ckp" [84faf557-0c8d-4026-b620-37265e017ed0] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0731 18:07:54.464315   73800 system_pods.go:61] "etcd-embed-certs-436067" [787466df-6e3f-4209-a996-037875d63dc8] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0731 18:07:54.464326   73800 system_pods.go:61] "kube-apiserver-embed-certs-436067" [6366e38e-21f3-41a4-af7a-433953b70eaa] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0731 18:07:54.464335   73800 system_pods.go:61] "kube-controller-manager-embed-certs-436067" [a97f6a49-40cf-433a-8196-c433e3cda8e8] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0731 18:07:54.464341   73800 system_pods.go:61] "kube-proxy-tl9pj" [0124eb62-5c00-4f75-a73f-c3e92ddc4a42] Running
	I0731 18:07:54.464354   73800 system_pods.go:61] "kube-scheduler-embed-certs-436067" [afbb9117-f229-44ea-8939-d28c4a402c6b] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0731 18:07:54.464366   73800 system_pods.go:61] "metrics-server-569cc877fc-fzxrw" [2ecdab2a-8ce8-4771-bd94-4e24dee34386] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0731 18:07:54.464374   73800 system_pods.go:61] "storage-provisioner" [29b17f6d-f9e4-4272-b6da-368431264701] Running
	I0731 18:07:54.464382   73800 system_pods.go:74] duration metric: took 9.82125ms to wait for pod list to return data ...
	I0731 18:07:54.464395   73800 node_conditions.go:102] verifying NodePressure condition ...
	I0731 18:07:54.467718   73800 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0731 18:07:54.467748   73800 node_conditions.go:123] node cpu capacity is 2
	I0731 18:07:54.467761   73800 node_conditions.go:105] duration metric: took 3.3602ms to run NodePressure ...
	I0731 18:07:54.467779   73800 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0731 18:07:50.925369   74203 main.go:141] libmachine: (old-k8s-version-276459) DBG | domain old-k8s-version-276459 has defined MAC address 52:54:00:79:9d:96 in network mk-old-k8s-version-276459
	I0731 18:07:50.925835   74203 main.go:141] libmachine: (old-k8s-version-276459) DBG | unable to find current IP address of domain old-k8s-version-276459 in network mk-old-k8s-version-276459
	I0731 18:07:50.925856   74203 main.go:141] libmachine: (old-k8s-version-276459) DBG | I0731 18:07:50.925790   75164 retry.go:31] will retry after 2.875577792s: waiting for machine to come up
	I0731 18:07:53.802729   74203 main.go:141] libmachine: (old-k8s-version-276459) DBG | domain old-k8s-version-276459 has defined MAC address 52:54:00:79:9d:96 in network mk-old-k8s-version-276459
	I0731 18:07:53.803164   74203 main.go:141] libmachine: (old-k8s-version-276459) DBG | unable to find current IP address of domain old-k8s-version-276459 in network mk-old-k8s-version-276459
	I0731 18:07:53.803192   74203 main.go:141] libmachine: (old-k8s-version-276459) DBG | I0731 18:07:53.803122   75164 retry.go:31] will retry after 2.352020729s: waiting for machine to come up
	I0731 18:07:51.279883   73696 pod_ready.go:102] pod "metrics-server-569cc877fc-64hp4" in "kube-system" namespace has status "Ready":"False"
	I0731 18:07:53.279992   73696 pod_ready.go:102] pod "metrics-server-569cc877fc-64hp4" in "kube-system" namespace has status "Ready":"False"
	I0731 18:07:55.778812   73696 pod_ready.go:102] pod "metrics-server-569cc877fc-64hp4" in "kube-system" namespace has status "Ready":"False"
	I0731 18:07:54.732921   73800 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0731 18:07:54.736779   73800 kubeadm.go:739] kubelet initialised
	I0731 18:07:54.736798   73800 kubeadm.go:740] duration metric: took 3.850446ms waiting for restarted kubelet to initialise ...
	I0731 18:07:54.736809   73800 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0731 18:07:54.741733   73800 pod_ready.go:78] waiting up to 4m0s for pod "coredns-7db6d8ff4d-h6ckp" in "kube-system" namespace to be "Ready" ...
	I0731 18:07:54.745722   73800 pod_ready.go:97] node "embed-certs-436067" hosting pod "coredns-7db6d8ff4d-h6ckp" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-436067" has status "Ready":"False"
	I0731 18:07:54.745742   73800 pod_ready.go:81] duration metric: took 3.986968ms for pod "coredns-7db6d8ff4d-h6ckp" in "kube-system" namespace to be "Ready" ...
	E0731 18:07:54.745751   73800 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-436067" hosting pod "coredns-7db6d8ff4d-h6ckp" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-436067" has status "Ready":"False"
	I0731 18:07:54.745757   73800 pod_ready.go:78] waiting up to 4m0s for pod "etcd-embed-certs-436067" in "kube-system" namespace to be "Ready" ...
	I0731 18:07:54.749650   73800 pod_ready.go:97] node "embed-certs-436067" hosting pod "etcd-embed-certs-436067" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-436067" has status "Ready":"False"
	I0731 18:07:54.749666   73800 pod_ready.go:81] duration metric: took 3.895483ms for pod "etcd-embed-certs-436067" in "kube-system" namespace to be "Ready" ...
	E0731 18:07:54.749673   73800 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-436067" hosting pod "etcd-embed-certs-436067" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-436067" has status "Ready":"False"
	I0731 18:07:54.749679   73800 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-embed-certs-436067" in "kube-system" namespace to be "Ready" ...
	I0731 18:07:54.753326   73800 pod_ready.go:97] node "embed-certs-436067" hosting pod "kube-apiserver-embed-certs-436067" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-436067" has status "Ready":"False"
	I0731 18:07:54.753351   73800 pod_ready.go:81] duration metric: took 3.66496ms for pod "kube-apiserver-embed-certs-436067" in "kube-system" namespace to be "Ready" ...
	E0731 18:07:54.753362   73800 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-436067" hosting pod "kube-apiserver-embed-certs-436067" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-436067" has status "Ready":"False"
	I0731 18:07:54.753370   73800 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-436067" in "kube-system" namespace to be "Ready" ...
	I0731 18:07:54.857956   73800 pod_ready.go:97] node "embed-certs-436067" hosting pod "kube-controller-manager-embed-certs-436067" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-436067" has status "Ready":"False"
	I0731 18:07:54.857978   73800 pod_ready.go:81] duration metric: took 104.599259ms for pod "kube-controller-manager-embed-certs-436067" in "kube-system" namespace to be "Ready" ...
	E0731 18:07:54.857988   73800 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-436067" hosting pod "kube-controller-manager-embed-certs-436067" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-436067" has status "Ready":"False"
	I0731 18:07:54.857995   73800 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-tl9pj" in "kube-system" namespace to be "Ready" ...
	I0731 18:07:55.257589   73800 pod_ready.go:92] pod "kube-proxy-tl9pj" in "kube-system" namespace has status "Ready":"True"
	I0731 18:07:55.257621   73800 pod_ready.go:81] duration metric: took 399.617003ms for pod "kube-proxy-tl9pj" in "kube-system" namespace to be "Ready" ...
	I0731 18:07:55.257630   73800 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-embed-certs-436067" in "kube-system" namespace to be "Ready" ...
	I0731 18:07:57.262770   73800 pod_ready.go:102] pod "kube-scheduler-embed-certs-436067" in "kube-system" namespace has status "Ready":"False"
	I0731 18:07:59.271094   73800 pod_ready.go:102] pod "kube-scheduler-embed-certs-436067" in "kube-system" namespace has status "Ready":"False"
	I0731 18:07:56.157721   74203 main.go:141] libmachine: (old-k8s-version-276459) DBG | domain old-k8s-version-276459 has defined MAC address 52:54:00:79:9d:96 in network mk-old-k8s-version-276459
	I0731 18:07:56.158176   74203 main.go:141] libmachine: (old-k8s-version-276459) DBG | unable to find current IP address of domain old-k8s-version-276459 in network mk-old-k8s-version-276459
	I0731 18:07:56.158216   74203 main.go:141] libmachine: (old-k8s-version-276459) DBG | I0731 18:07:56.158110   75164 retry.go:31] will retry after 3.552824334s: waiting for machine to come up
	I0731 18:07:59.712249   74203 main.go:141] libmachine: (old-k8s-version-276459) DBG | domain old-k8s-version-276459 has defined MAC address 52:54:00:79:9d:96 in network mk-old-k8s-version-276459
	I0731 18:07:59.712759   74203 main.go:141] libmachine: (old-k8s-version-276459) Found IP for machine: 192.168.39.26
	I0731 18:07:59.712783   74203 main.go:141] libmachine: (old-k8s-version-276459) DBG | domain old-k8s-version-276459 has current primary IP address 192.168.39.26 and MAC address 52:54:00:79:9d:96 in network mk-old-k8s-version-276459
	I0731 18:07:59.712793   74203 main.go:141] libmachine: (old-k8s-version-276459) Reserving static IP address...
	I0731 18:07:59.713268   74203 main.go:141] libmachine: (old-k8s-version-276459) DBG | found host DHCP lease matching {name: "old-k8s-version-276459", mac: "52:54:00:79:9d:96", ip: "192.168.39.26"} in network mk-old-k8s-version-276459: {Iface:virbr3 ExpiryTime:2024-07-31 19:07:52 +0000 UTC Type:0 Mac:52:54:00:79:9d:96 Iaid: IPaddr:192.168.39.26 Prefix:24 Hostname:old-k8s-version-276459 Clientid:01:52:54:00:79:9d:96}
	I0731 18:07:59.713297   74203 main.go:141] libmachine: (old-k8s-version-276459) DBG | skip adding static IP to network mk-old-k8s-version-276459 - found existing host DHCP lease matching {name: "old-k8s-version-276459", mac: "52:54:00:79:9d:96", ip: "192.168.39.26"}
	I0731 18:07:59.713320   74203 main.go:141] libmachine: (old-k8s-version-276459) Reserved static IP address: 192.168.39.26
	I0731 18:07:59.713343   74203 main.go:141] libmachine: (old-k8s-version-276459) Waiting for SSH to be available...
	I0731 18:07:59.713355   74203 main.go:141] libmachine: (old-k8s-version-276459) DBG | Getting to WaitForSSH function...
	I0731 18:07:59.716068   74203 main.go:141] libmachine: (old-k8s-version-276459) DBG | domain old-k8s-version-276459 has defined MAC address 52:54:00:79:9d:96 in network mk-old-k8s-version-276459
	I0731 18:07:59.716456   74203 main.go:141] libmachine: (old-k8s-version-276459) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:79:9d:96", ip: ""} in network mk-old-k8s-version-276459: {Iface:virbr3 ExpiryTime:2024-07-31 19:07:52 +0000 UTC Type:0 Mac:52:54:00:79:9d:96 Iaid: IPaddr:192.168.39.26 Prefix:24 Hostname:old-k8s-version-276459 Clientid:01:52:54:00:79:9d:96}
	I0731 18:07:59.716490   74203 main.go:141] libmachine: (old-k8s-version-276459) DBG | domain old-k8s-version-276459 has defined IP address 192.168.39.26 and MAC address 52:54:00:79:9d:96 in network mk-old-k8s-version-276459
	I0731 18:07:59.716701   74203 main.go:141] libmachine: (old-k8s-version-276459) DBG | Using SSH client type: external
	I0731 18:07:59.716725   74203 main.go:141] libmachine: (old-k8s-version-276459) DBG | Using SSH private key: /home/jenkins/minikube-integration/19349-8084/.minikube/machines/old-k8s-version-276459/id_rsa (-rw-------)
	I0731 18:07:59.716762   74203 main.go:141] libmachine: (old-k8s-version-276459) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.26 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19349-8084/.minikube/machines/old-k8s-version-276459/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0731 18:07:59.716776   74203 main.go:141] libmachine: (old-k8s-version-276459) DBG | About to run SSH command:
	I0731 18:07:59.716792   74203 main.go:141] libmachine: (old-k8s-version-276459) DBG | exit 0
	I0731 18:07:59.847720   74203 main.go:141] libmachine: (old-k8s-version-276459) DBG | SSH cmd err, output: <nil>: 
	I0731 18:07:59.848089   74203 main.go:141] libmachine: (old-k8s-version-276459) Calling .GetConfigRaw
	I0731 18:07:59.848847   74203 main.go:141] libmachine: (old-k8s-version-276459) Calling .GetIP
	I0731 18:07:59.851632   74203 main.go:141] libmachine: (old-k8s-version-276459) DBG | domain old-k8s-version-276459 has defined MAC address 52:54:00:79:9d:96 in network mk-old-k8s-version-276459
	I0731 18:07:59.852004   74203 main.go:141] libmachine: (old-k8s-version-276459) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:79:9d:96", ip: ""} in network mk-old-k8s-version-276459: {Iface:virbr3 ExpiryTime:2024-07-31 19:07:52 +0000 UTC Type:0 Mac:52:54:00:79:9d:96 Iaid: IPaddr:192.168.39.26 Prefix:24 Hostname:old-k8s-version-276459 Clientid:01:52:54:00:79:9d:96}
	I0731 18:07:59.852030   74203 main.go:141] libmachine: (old-k8s-version-276459) DBG | domain old-k8s-version-276459 has defined IP address 192.168.39.26 and MAC address 52:54:00:79:9d:96 in network mk-old-k8s-version-276459
	I0731 18:07:59.852321   74203 profile.go:143] Saving config to /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/old-k8s-version-276459/config.json ...
	I0731 18:07:59.852505   74203 machine.go:94] provisionDockerMachine start ...
	I0731 18:07:59.852524   74203 main.go:141] libmachine: (old-k8s-version-276459) Calling .DriverName
	I0731 18:07:59.852752   74203 main.go:141] libmachine: (old-k8s-version-276459) Calling .GetSSHHostname
	I0731 18:07:59.855198   74203 main.go:141] libmachine: (old-k8s-version-276459) DBG | domain old-k8s-version-276459 has defined MAC address 52:54:00:79:9d:96 in network mk-old-k8s-version-276459
	I0731 18:07:59.855596   74203 main.go:141] libmachine: (old-k8s-version-276459) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:79:9d:96", ip: ""} in network mk-old-k8s-version-276459: {Iface:virbr3 ExpiryTime:2024-07-31 19:07:52 +0000 UTC Type:0 Mac:52:54:00:79:9d:96 Iaid: IPaddr:192.168.39.26 Prefix:24 Hostname:old-k8s-version-276459 Clientid:01:52:54:00:79:9d:96}
	I0731 18:07:59.855626   74203 main.go:141] libmachine: (old-k8s-version-276459) DBG | domain old-k8s-version-276459 has defined IP address 192.168.39.26 and MAC address 52:54:00:79:9d:96 in network mk-old-k8s-version-276459
	I0731 18:07:59.855756   74203 main.go:141] libmachine: (old-k8s-version-276459) Calling .GetSSHPort
	I0731 18:07:59.855920   74203 main.go:141] libmachine: (old-k8s-version-276459) Calling .GetSSHKeyPath
	I0731 18:07:59.856071   74203 main.go:141] libmachine: (old-k8s-version-276459) Calling .GetSSHKeyPath
	I0731 18:07:59.856208   74203 main.go:141] libmachine: (old-k8s-version-276459) Calling .GetSSHUsername
	I0731 18:07:59.856372   74203 main.go:141] libmachine: Using SSH client type: native
	I0731 18:07:59.856601   74203 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.26 22 <nil> <nil>}
	I0731 18:07:59.856614   74203 main.go:141] libmachine: About to run SSH command:
	hostname
	I0731 18:07:59.963492   74203 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0731 18:07:59.963524   74203 main.go:141] libmachine: (old-k8s-version-276459) Calling .GetMachineName
	I0731 18:07:59.963762   74203 buildroot.go:166] provisioning hostname "old-k8s-version-276459"
	I0731 18:07:59.963794   74203 main.go:141] libmachine: (old-k8s-version-276459) Calling .GetMachineName
	I0731 18:07:59.963992   74203 main.go:141] libmachine: (old-k8s-version-276459) Calling .GetSSHHostname
	I0731 18:07:59.967261   74203 main.go:141] libmachine: (old-k8s-version-276459) DBG | domain old-k8s-version-276459 has defined MAC address 52:54:00:79:9d:96 in network mk-old-k8s-version-276459
	I0731 18:07:59.967725   74203 main.go:141] libmachine: (old-k8s-version-276459) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:79:9d:96", ip: ""} in network mk-old-k8s-version-276459: {Iface:virbr3 ExpiryTime:2024-07-31 19:07:52 +0000 UTC Type:0 Mac:52:54:00:79:9d:96 Iaid: IPaddr:192.168.39.26 Prefix:24 Hostname:old-k8s-version-276459 Clientid:01:52:54:00:79:9d:96}
	I0731 18:07:59.967762   74203 main.go:141] libmachine: (old-k8s-version-276459) DBG | domain old-k8s-version-276459 has defined IP address 192.168.39.26 and MAC address 52:54:00:79:9d:96 in network mk-old-k8s-version-276459
	I0731 18:07:59.967938   74203 main.go:141] libmachine: (old-k8s-version-276459) Calling .GetSSHPort
	I0731 18:07:59.968131   74203 main.go:141] libmachine: (old-k8s-version-276459) Calling .GetSSHKeyPath
	I0731 18:07:59.968316   74203 main.go:141] libmachine: (old-k8s-version-276459) Calling .GetSSHKeyPath
	I0731 18:07:59.968487   74203 main.go:141] libmachine: (old-k8s-version-276459) Calling .GetSSHUsername
	I0731 18:07:59.968687   74203 main.go:141] libmachine: Using SSH client type: native
	I0731 18:07:59.968872   74203 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.26 22 <nil> <nil>}
	I0731 18:07:59.968890   74203 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-276459 && echo "old-k8s-version-276459" | sudo tee /etc/hostname
	I0731 18:08:00.084360   74203 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-276459
	
	I0731 18:08:00.084390   74203 main.go:141] libmachine: (old-k8s-version-276459) Calling .GetSSHHostname
	I0731 18:08:00.087433   74203 main.go:141] libmachine: (old-k8s-version-276459) DBG | domain old-k8s-version-276459 has defined MAC address 52:54:00:79:9d:96 in network mk-old-k8s-version-276459
	I0731 18:08:00.087833   74203 main.go:141] libmachine: (old-k8s-version-276459) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:79:9d:96", ip: ""} in network mk-old-k8s-version-276459: {Iface:virbr3 ExpiryTime:2024-07-31 19:07:52 +0000 UTC Type:0 Mac:52:54:00:79:9d:96 Iaid: IPaddr:192.168.39.26 Prefix:24 Hostname:old-k8s-version-276459 Clientid:01:52:54:00:79:9d:96}
	I0731 18:08:00.087862   74203 main.go:141] libmachine: (old-k8s-version-276459) DBG | domain old-k8s-version-276459 has defined IP address 192.168.39.26 and MAC address 52:54:00:79:9d:96 in network mk-old-k8s-version-276459
	I0731 18:08:00.088016   74203 main.go:141] libmachine: (old-k8s-version-276459) Calling .GetSSHPort
	I0731 18:08:00.088187   74203 main.go:141] libmachine: (old-k8s-version-276459) Calling .GetSSHKeyPath
	I0731 18:08:00.088371   74203 main.go:141] libmachine: (old-k8s-version-276459) Calling .GetSSHKeyPath
	I0731 18:08:00.088521   74203 main.go:141] libmachine: (old-k8s-version-276459) Calling .GetSSHUsername
	I0731 18:08:00.088719   74203 main.go:141] libmachine: Using SSH client type: native
	I0731 18:08:00.088893   74203 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.26 22 <nil> <nil>}
	I0731 18:08:00.088915   74203 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-276459' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-276459/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-276459' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0731 18:08:00.200012   74203 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0731 18:08:00.200038   74203 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19349-8084/.minikube CaCertPath:/home/jenkins/minikube-integration/19349-8084/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19349-8084/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19349-8084/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19349-8084/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19349-8084/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19349-8084/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19349-8084/.minikube}
	I0731 18:08:00.200069   74203 buildroot.go:174] setting up certificates
	I0731 18:08:00.200081   74203 provision.go:84] configureAuth start
	I0731 18:08:00.200093   74203 main.go:141] libmachine: (old-k8s-version-276459) Calling .GetMachineName
	I0731 18:08:00.200360   74203 main.go:141] libmachine: (old-k8s-version-276459) Calling .GetIP
	I0731 18:08:00.203352   74203 main.go:141] libmachine: (old-k8s-version-276459) DBG | domain old-k8s-version-276459 has defined MAC address 52:54:00:79:9d:96 in network mk-old-k8s-version-276459
	I0731 18:08:00.203694   74203 main.go:141] libmachine: (old-k8s-version-276459) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:79:9d:96", ip: ""} in network mk-old-k8s-version-276459: {Iface:virbr3 ExpiryTime:2024-07-31 19:07:52 +0000 UTC Type:0 Mac:52:54:00:79:9d:96 Iaid: IPaddr:192.168.39.26 Prefix:24 Hostname:old-k8s-version-276459 Clientid:01:52:54:00:79:9d:96}
	I0731 18:08:00.203721   74203 main.go:141] libmachine: (old-k8s-version-276459) DBG | domain old-k8s-version-276459 has defined IP address 192.168.39.26 and MAC address 52:54:00:79:9d:96 in network mk-old-k8s-version-276459
	I0731 18:08:00.203951   74203 main.go:141] libmachine: (old-k8s-version-276459) Calling .GetSSHHostname
	I0731 18:08:00.206061   74203 main.go:141] libmachine: (old-k8s-version-276459) DBG | domain old-k8s-version-276459 has defined MAC address 52:54:00:79:9d:96 in network mk-old-k8s-version-276459
	I0731 18:08:00.206398   74203 main.go:141] libmachine: (old-k8s-version-276459) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:79:9d:96", ip: ""} in network mk-old-k8s-version-276459: {Iface:virbr3 ExpiryTime:2024-07-31 19:07:52 +0000 UTC Type:0 Mac:52:54:00:79:9d:96 Iaid: IPaddr:192.168.39.26 Prefix:24 Hostname:old-k8s-version-276459 Clientid:01:52:54:00:79:9d:96}
	I0731 18:08:00.206432   74203 main.go:141] libmachine: (old-k8s-version-276459) DBG | domain old-k8s-version-276459 has defined IP address 192.168.39.26 and MAC address 52:54:00:79:9d:96 in network mk-old-k8s-version-276459
	I0731 18:08:00.206510   74203 provision.go:143] copyHostCerts
	I0731 18:08:00.206580   74203 exec_runner.go:144] found /home/jenkins/minikube-integration/19349-8084/.minikube/ca.pem, removing ...
	I0731 18:08:00.206591   74203 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19349-8084/.minikube/ca.pem
	I0731 18:08:00.206654   74203 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19349-8084/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19349-8084/.minikube/ca.pem (1082 bytes)
	I0731 18:08:00.206759   74203 exec_runner.go:144] found /home/jenkins/minikube-integration/19349-8084/.minikube/cert.pem, removing ...
	I0731 18:08:00.206769   74203 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19349-8084/.minikube/cert.pem
	I0731 18:08:00.206799   74203 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19349-8084/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19349-8084/.minikube/cert.pem (1123 bytes)
	I0731 18:08:00.206876   74203 exec_runner.go:144] found /home/jenkins/minikube-integration/19349-8084/.minikube/key.pem, removing ...
	I0731 18:08:00.206885   74203 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19349-8084/.minikube/key.pem
	I0731 18:08:00.206913   74203 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19349-8084/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19349-8084/.minikube/key.pem (1675 bytes)
	I0731 18:08:00.207047   74203 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19349-8084/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19349-8084/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19349-8084/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-276459 san=[127.0.0.1 192.168.39.26 localhost minikube old-k8s-version-276459]
	I0731 18:08:00.279363   74203 provision.go:177] copyRemoteCerts
	I0731 18:08:00.279423   74203 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0731 18:08:00.279456   74203 main.go:141] libmachine: (old-k8s-version-276459) Calling .GetSSHHostname
	I0731 18:08:00.282234   74203 main.go:141] libmachine: (old-k8s-version-276459) DBG | domain old-k8s-version-276459 has defined MAC address 52:54:00:79:9d:96 in network mk-old-k8s-version-276459
	I0731 18:08:00.282601   74203 main.go:141] libmachine: (old-k8s-version-276459) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:79:9d:96", ip: ""} in network mk-old-k8s-version-276459: {Iface:virbr3 ExpiryTime:2024-07-31 19:07:52 +0000 UTC Type:0 Mac:52:54:00:79:9d:96 Iaid: IPaddr:192.168.39.26 Prefix:24 Hostname:old-k8s-version-276459 Clientid:01:52:54:00:79:9d:96}
	I0731 18:08:00.282630   74203 main.go:141] libmachine: (old-k8s-version-276459) DBG | domain old-k8s-version-276459 has defined IP address 192.168.39.26 and MAC address 52:54:00:79:9d:96 in network mk-old-k8s-version-276459
	I0731 18:08:00.282751   74203 main.go:141] libmachine: (old-k8s-version-276459) Calling .GetSSHPort
	I0731 18:08:00.283004   74203 main.go:141] libmachine: (old-k8s-version-276459) Calling .GetSSHKeyPath
	I0731 18:08:00.283178   74203 main.go:141] libmachine: (old-k8s-version-276459) Calling .GetSSHUsername
	I0731 18:08:00.283361   74203 sshutil.go:53] new ssh client: &{IP:192.168.39.26 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19349-8084/.minikube/machines/old-k8s-version-276459/id_rsa Username:docker}
	I0731 18:08:00.935990   73479 start.go:364] duration metric: took 51.517312901s to acquireMachinesLock for "no-preload-673754"
	I0731 18:08:00.936054   73479 start.go:96] Skipping create...Using existing machine configuration
	I0731 18:08:00.936066   73479 fix.go:54] fixHost starting: 
	I0731 18:08:00.936534   73479 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 18:08:00.936589   73479 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 18:08:00.954868   73479 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39013
	I0731 18:08:00.955405   73479 main.go:141] libmachine: () Calling .GetVersion
	I0731 18:08:00.955980   73479 main.go:141] libmachine: Using API Version  1
	I0731 18:08:00.956012   73479 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 18:08:00.956386   73479 main.go:141] libmachine: () Calling .GetMachineName
	I0731 18:08:00.956589   73479 main.go:141] libmachine: (no-preload-673754) Calling .DriverName
	I0731 18:08:00.956752   73479 main.go:141] libmachine: (no-preload-673754) Calling .GetState
	I0731 18:08:00.958461   73479 fix.go:112] recreateIfNeeded on no-preload-673754: state=Stopped err=<nil>
	I0731 18:08:00.958485   73479 main.go:141] libmachine: (no-preload-673754) Calling .DriverName
	W0731 18:08:00.958655   73479 fix.go:138] unexpected machine state, will restart: <nil>
	I0731 18:08:00.960117   73479 out.go:177] * Restarting existing kvm2 VM for "no-preload-673754" ...
	I0731 18:07:57.779258   73696 pod_ready.go:102] pod "metrics-server-569cc877fc-64hp4" in "kube-system" namespace has status "Ready":"False"
	I0731 18:07:59.780834   73696 pod_ready.go:102] pod "metrics-server-569cc877fc-64hp4" in "kube-system" namespace has status "Ready":"False"
	I0731 18:08:00.961340   73479 main.go:141] libmachine: (no-preload-673754) Calling .Start
	I0731 18:08:00.961543   73479 main.go:141] libmachine: (no-preload-673754) Ensuring networks are active...
	I0731 18:08:00.962332   73479 main.go:141] libmachine: (no-preload-673754) Ensuring network default is active
	I0731 18:08:00.962661   73479 main.go:141] libmachine: (no-preload-673754) Ensuring network mk-no-preload-673754 is active
	I0731 18:08:00.963165   73479 main.go:141] libmachine: (no-preload-673754) Getting domain xml...
	I0731 18:08:00.963982   73479 main.go:141] libmachine: (no-preload-673754) Creating domain...
	I0731 18:08:00.365254   74203 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19349-8084/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0731 18:08:00.389729   74203 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19349-8084/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0731 18:08:00.413143   74203 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19349-8084/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0731 18:08:00.436040   74203 provision.go:87] duration metric: took 235.932619ms to configureAuth
	I0731 18:08:00.436080   74203 buildroot.go:189] setting minikube options for container-runtime
	I0731 18:08:00.436288   74203 config.go:182] Loaded profile config "old-k8s-version-276459": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0731 18:08:00.436403   74203 main.go:141] libmachine: (old-k8s-version-276459) Calling .GetSSHHostname
	I0731 18:08:00.439184   74203 main.go:141] libmachine: (old-k8s-version-276459) DBG | domain old-k8s-version-276459 has defined MAC address 52:54:00:79:9d:96 in network mk-old-k8s-version-276459
	I0731 18:08:00.439543   74203 main.go:141] libmachine: (old-k8s-version-276459) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:79:9d:96", ip: ""} in network mk-old-k8s-version-276459: {Iface:virbr3 ExpiryTime:2024-07-31 19:07:52 +0000 UTC Type:0 Mac:52:54:00:79:9d:96 Iaid: IPaddr:192.168.39.26 Prefix:24 Hostname:old-k8s-version-276459 Clientid:01:52:54:00:79:9d:96}
	I0731 18:08:00.439575   74203 main.go:141] libmachine: (old-k8s-version-276459) DBG | domain old-k8s-version-276459 has defined IP address 192.168.39.26 and MAC address 52:54:00:79:9d:96 in network mk-old-k8s-version-276459
	I0731 18:08:00.439734   74203 main.go:141] libmachine: (old-k8s-version-276459) Calling .GetSSHPort
	I0731 18:08:00.439898   74203 main.go:141] libmachine: (old-k8s-version-276459) Calling .GetSSHKeyPath
	I0731 18:08:00.440093   74203 main.go:141] libmachine: (old-k8s-version-276459) Calling .GetSSHKeyPath
	I0731 18:08:00.440271   74203 main.go:141] libmachine: (old-k8s-version-276459) Calling .GetSSHUsername
	I0731 18:08:00.440450   74203 main.go:141] libmachine: Using SSH client type: native
	I0731 18:08:00.440661   74203 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.26 22 <nil> <nil>}
	I0731 18:08:00.440679   74203 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0731 18:08:00.707438   74203 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0731 18:08:00.707467   74203 machine.go:97] duration metric: took 854.948491ms to provisionDockerMachine
	I0731 18:08:00.707482   74203 start.go:293] postStartSetup for "old-k8s-version-276459" (driver="kvm2")
	I0731 18:08:00.707494   74203 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0731 18:08:00.707510   74203 main.go:141] libmachine: (old-k8s-version-276459) Calling .DriverName
	I0731 18:08:00.707811   74203 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0731 18:08:00.707837   74203 main.go:141] libmachine: (old-k8s-version-276459) Calling .GetSSHHostname
	I0731 18:08:00.710726   74203 main.go:141] libmachine: (old-k8s-version-276459) DBG | domain old-k8s-version-276459 has defined MAC address 52:54:00:79:9d:96 in network mk-old-k8s-version-276459
	I0731 18:08:00.711285   74203 main.go:141] libmachine: (old-k8s-version-276459) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:79:9d:96", ip: ""} in network mk-old-k8s-version-276459: {Iface:virbr3 ExpiryTime:2024-07-31 19:07:52 +0000 UTC Type:0 Mac:52:54:00:79:9d:96 Iaid: IPaddr:192.168.39.26 Prefix:24 Hostname:old-k8s-version-276459 Clientid:01:52:54:00:79:9d:96}
	I0731 18:08:00.711315   74203 main.go:141] libmachine: (old-k8s-version-276459) DBG | domain old-k8s-version-276459 has defined IP address 192.168.39.26 and MAC address 52:54:00:79:9d:96 in network mk-old-k8s-version-276459
	I0731 18:08:00.711458   74203 main.go:141] libmachine: (old-k8s-version-276459) Calling .GetSSHPort
	I0731 18:08:00.711703   74203 main.go:141] libmachine: (old-k8s-version-276459) Calling .GetSSHKeyPath
	I0731 18:08:00.711895   74203 main.go:141] libmachine: (old-k8s-version-276459) Calling .GetSSHUsername
	I0731 18:08:00.712049   74203 sshutil.go:53] new ssh client: &{IP:192.168.39.26 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19349-8084/.minikube/machines/old-k8s-version-276459/id_rsa Username:docker}
	I0731 18:08:00.793719   74203 ssh_runner.go:195] Run: cat /etc/os-release
	I0731 18:08:00.797858   74203 info.go:137] Remote host: Buildroot 2023.02.9
	I0731 18:08:00.797888   74203 filesync.go:126] Scanning /home/jenkins/minikube-integration/19349-8084/.minikube/addons for local assets ...
	I0731 18:08:00.797960   74203 filesync.go:126] Scanning /home/jenkins/minikube-integration/19349-8084/.minikube/files for local assets ...
	I0731 18:08:00.798038   74203 filesync.go:149] local asset: /home/jenkins/minikube-integration/19349-8084/.minikube/files/etc/ssl/certs/152592.pem -> 152592.pem in /etc/ssl/certs
	I0731 18:08:00.798130   74203 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0731 18:08:00.807013   74203 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19349-8084/.minikube/files/etc/ssl/certs/152592.pem --> /etc/ssl/certs/152592.pem (1708 bytes)
	I0731 18:08:00.829440   74203 start.go:296] duration metric: took 121.944271ms for postStartSetup
	I0731 18:08:00.829487   74203 fix.go:56] duration metric: took 19.277312964s for fixHost
	I0731 18:08:00.829518   74203 main.go:141] libmachine: (old-k8s-version-276459) Calling .GetSSHHostname
	I0731 18:08:00.832718   74203 main.go:141] libmachine: (old-k8s-version-276459) DBG | domain old-k8s-version-276459 has defined MAC address 52:54:00:79:9d:96 in network mk-old-k8s-version-276459
	I0731 18:08:00.833048   74203 main.go:141] libmachine: (old-k8s-version-276459) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:79:9d:96", ip: ""} in network mk-old-k8s-version-276459: {Iface:virbr3 ExpiryTime:2024-07-31 19:07:52 +0000 UTC Type:0 Mac:52:54:00:79:9d:96 Iaid: IPaddr:192.168.39.26 Prefix:24 Hostname:old-k8s-version-276459 Clientid:01:52:54:00:79:9d:96}
	I0731 18:08:00.833082   74203 main.go:141] libmachine: (old-k8s-version-276459) DBG | domain old-k8s-version-276459 has defined IP address 192.168.39.26 and MAC address 52:54:00:79:9d:96 in network mk-old-k8s-version-276459
	I0731 18:08:00.833317   74203 main.go:141] libmachine: (old-k8s-version-276459) Calling .GetSSHPort
	I0731 18:08:00.833533   74203 main.go:141] libmachine: (old-k8s-version-276459) Calling .GetSSHKeyPath
	I0731 18:08:00.833752   74203 main.go:141] libmachine: (old-k8s-version-276459) Calling .GetSSHKeyPath
	I0731 18:08:00.833887   74203 main.go:141] libmachine: (old-k8s-version-276459) Calling .GetSSHUsername
	I0731 18:08:00.834189   74203 main.go:141] libmachine: Using SSH client type: native
	I0731 18:08:00.834364   74203 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.26 22 <nil> <nil>}
	I0731 18:08:00.834377   74203 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0731 18:08:00.935834   74203 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722449280.899364873
	
	I0731 18:08:00.935853   74203 fix.go:216] guest clock: 1722449280.899364873
	I0731 18:08:00.935860   74203 fix.go:229] Guest: 2024-07-31 18:08:00.899364873 +0000 UTC Remote: 2024-07-31 18:08:00.829491013 +0000 UTC m=+245.518063325 (delta=69.87386ms)
	I0731 18:08:00.935894   74203 fix.go:200] guest clock delta is within tolerance: 69.87386ms
	I0731 18:08:00.935899   74203 start.go:83] releasing machines lock for "old-k8s-version-276459", held for 19.38376262s
	I0731 18:08:00.935937   74203 main.go:141] libmachine: (old-k8s-version-276459) Calling .DriverName
	I0731 18:08:00.936220   74203 main.go:141] libmachine: (old-k8s-version-276459) Calling .GetIP
	I0731 18:08:00.939282   74203 main.go:141] libmachine: (old-k8s-version-276459) DBG | domain old-k8s-version-276459 has defined MAC address 52:54:00:79:9d:96 in network mk-old-k8s-version-276459
	I0731 18:08:00.939691   74203 main.go:141] libmachine: (old-k8s-version-276459) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:79:9d:96", ip: ""} in network mk-old-k8s-version-276459: {Iface:virbr3 ExpiryTime:2024-07-31 19:07:52 +0000 UTC Type:0 Mac:52:54:00:79:9d:96 Iaid: IPaddr:192.168.39.26 Prefix:24 Hostname:old-k8s-version-276459 Clientid:01:52:54:00:79:9d:96}
	I0731 18:08:00.939722   74203 main.go:141] libmachine: (old-k8s-version-276459) DBG | domain old-k8s-version-276459 has defined IP address 192.168.39.26 and MAC address 52:54:00:79:9d:96 in network mk-old-k8s-version-276459
	I0731 18:08:00.939911   74203 main.go:141] libmachine: (old-k8s-version-276459) Calling .DriverName
	I0731 18:08:00.940506   74203 main.go:141] libmachine: (old-k8s-version-276459) Calling .DriverName
	I0731 18:08:00.940704   74203 main.go:141] libmachine: (old-k8s-version-276459) Calling .DriverName
	I0731 18:08:00.940790   74203 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0731 18:08:00.940831   74203 main.go:141] libmachine: (old-k8s-version-276459) Calling .GetSSHHostname
	I0731 18:08:00.940960   74203 ssh_runner.go:195] Run: cat /version.json
	I0731 18:08:00.941043   74203 main.go:141] libmachine: (old-k8s-version-276459) Calling .GetSSHHostname
	I0731 18:08:00.943883   74203 main.go:141] libmachine: (old-k8s-version-276459) DBG | domain old-k8s-version-276459 has defined MAC address 52:54:00:79:9d:96 in network mk-old-k8s-version-276459
	I0731 18:08:00.943909   74203 main.go:141] libmachine: (old-k8s-version-276459) DBG | domain old-k8s-version-276459 has defined MAC address 52:54:00:79:9d:96 in network mk-old-k8s-version-276459
	I0731 18:08:00.944361   74203 main.go:141] libmachine: (old-k8s-version-276459) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:79:9d:96", ip: ""} in network mk-old-k8s-version-276459: {Iface:virbr3 ExpiryTime:2024-07-31 19:07:52 +0000 UTC Type:0 Mac:52:54:00:79:9d:96 Iaid: IPaddr:192.168.39.26 Prefix:24 Hostname:old-k8s-version-276459 Clientid:01:52:54:00:79:9d:96}
	I0731 18:08:00.944405   74203 main.go:141] libmachine: (old-k8s-version-276459) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:79:9d:96", ip: ""} in network mk-old-k8s-version-276459: {Iface:virbr3 ExpiryTime:2024-07-31 19:07:52 +0000 UTC Type:0 Mac:52:54:00:79:9d:96 Iaid: IPaddr:192.168.39.26 Prefix:24 Hostname:old-k8s-version-276459 Clientid:01:52:54:00:79:9d:96}
	I0731 18:08:00.944429   74203 main.go:141] libmachine: (old-k8s-version-276459) DBG | domain old-k8s-version-276459 has defined IP address 192.168.39.26 and MAC address 52:54:00:79:9d:96 in network mk-old-k8s-version-276459
	I0731 18:08:00.944442   74203 main.go:141] libmachine: (old-k8s-version-276459) DBG | domain old-k8s-version-276459 has defined IP address 192.168.39.26 and MAC address 52:54:00:79:9d:96 in network mk-old-k8s-version-276459
	I0731 18:08:00.944542   74203 main.go:141] libmachine: (old-k8s-version-276459) Calling .GetSSHPort
	I0731 18:08:00.944639   74203 main.go:141] libmachine: (old-k8s-version-276459) Calling .GetSSHPort
	I0731 18:08:00.944766   74203 main.go:141] libmachine: (old-k8s-version-276459) Calling .GetSSHKeyPath
	I0731 18:08:00.944817   74203 main.go:141] libmachine: (old-k8s-version-276459) Calling .GetSSHKeyPath
	I0731 18:08:00.944899   74203 main.go:141] libmachine: (old-k8s-version-276459) Calling .GetSSHUsername
	I0731 18:08:00.944979   74203 main.go:141] libmachine: (old-k8s-version-276459) Calling .GetSSHUsername
	I0731 18:08:00.945039   74203 sshutil.go:53] new ssh client: &{IP:192.168.39.26 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19349-8084/.minikube/machines/old-k8s-version-276459/id_rsa Username:docker}
	I0731 18:08:00.945110   74203 sshutil.go:53] new ssh client: &{IP:192.168.39.26 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19349-8084/.minikube/machines/old-k8s-version-276459/id_rsa Username:docker}
	I0731 18:08:01.023818   74203 ssh_runner.go:195] Run: systemctl --version
	I0731 18:08:01.063390   74203 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0731 18:08:01.205084   74203 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0731 18:08:01.210972   74203 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0731 18:08:01.211049   74203 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0731 18:08:01.226156   74203 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0731 18:08:01.226180   74203 start.go:495] detecting cgroup driver to use...
	I0731 18:08:01.226257   74203 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0731 18:08:01.241506   74203 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0731 18:08:01.256615   74203 docker.go:217] disabling cri-docker service (if available) ...
	I0731 18:08:01.256671   74203 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0731 18:08:01.271515   74203 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0731 18:08:01.287213   74203 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0731 18:08:01.415827   74203 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0731 18:08:01.578122   74203 docker.go:233] disabling docker service ...
	I0731 18:08:01.578208   74203 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0731 18:08:01.596564   74203 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0731 18:08:01.611984   74203 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0731 18:08:01.748972   74203 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0731 18:08:01.896911   74203 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0731 18:08:01.912921   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0731 18:08:01.931671   74203 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0731 18:08:01.931749   74203 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 18:08:01.943737   74203 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0731 18:08:01.943798   74203 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 18:08:01.954571   74203 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 18:08:01.964733   74203 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 18:08:01.976087   74203 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0731 18:08:01.987193   74203 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0731 18:08:01.996620   74203 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0731 18:08:01.996670   74203 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0731 18:08:02.011046   74203 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0731 18:08:02.022199   74203 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0731 18:08:02.147855   74203 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0731 18:08:02.309868   74203 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0731 18:08:02.309940   74203 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0731 18:08:02.314966   74203 start.go:563] Will wait 60s for crictl version
	I0731 18:08:02.315031   74203 ssh_runner.go:195] Run: which crictl
	I0731 18:08:02.318685   74203 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0731 18:08:02.359361   74203 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0731 18:08:02.359460   74203 ssh_runner.go:195] Run: crio --version
	I0731 18:08:02.387053   74203 ssh_runner.go:195] Run: crio --version
	I0731 18:08:02.417054   74203 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0731 18:08:01.265323   73800 pod_ready.go:92] pod "kube-scheduler-embed-certs-436067" in "kube-system" namespace has status "Ready":"True"
	I0731 18:08:01.265363   73800 pod_ready.go:81] duration metric: took 6.007715949s for pod "kube-scheduler-embed-certs-436067" in "kube-system" namespace to be "Ready" ...
	I0731 18:08:01.265376   73800 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-569cc877fc-fzxrw" in "kube-system" namespace to be "Ready" ...
	I0731 18:08:03.271693   73800 pod_ready.go:102] pod "metrics-server-569cc877fc-fzxrw" in "kube-system" namespace has status "Ready":"False"
	I0731 18:08:02.418272   74203 main.go:141] libmachine: (old-k8s-version-276459) Calling .GetIP
	I0731 18:08:02.421211   74203 main.go:141] libmachine: (old-k8s-version-276459) DBG | domain old-k8s-version-276459 has defined MAC address 52:54:00:79:9d:96 in network mk-old-k8s-version-276459
	I0731 18:08:02.421714   74203 main.go:141] libmachine: (old-k8s-version-276459) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:79:9d:96", ip: ""} in network mk-old-k8s-version-276459: {Iface:virbr3 ExpiryTime:2024-07-31 19:07:52 +0000 UTC Type:0 Mac:52:54:00:79:9d:96 Iaid: IPaddr:192.168.39.26 Prefix:24 Hostname:old-k8s-version-276459 Clientid:01:52:54:00:79:9d:96}
	I0731 18:08:02.421743   74203 main.go:141] libmachine: (old-k8s-version-276459) DBG | domain old-k8s-version-276459 has defined IP address 192.168.39.26 and MAC address 52:54:00:79:9d:96 in network mk-old-k8s-version-276459
	I0731 18:08:02.421949   74203 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0731 18:08:02.425878   74203 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0731 18:08:02.438082   74203 kubeadm.go:883] updating cluster {Name:old-k8s-version-276459 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-276459 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.26 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0731 18:08:02.438222   74203 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0731 18:08:02.438293   74203 ssh_runner.go:195] Run: sudo crictl images --output json
	I0731 18:08:02.484113   74203 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0731 18:08:02.484189   74203 ssh_runner.go:195] Run: which lz4
	I0731 18:08:02.488365   74203 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0731 18:08:02.492321   74203 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0731 18:08:02.492352   74203 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19349-8084/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0731 18:08:03.946187   74203 crio.go:462] duration metric: took 1.457852426s to copy over tarball
	I0731 18:08:03.946261   74203 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0731 18:08:01.781606   73696 pod_ready.go:102] pod "metrics-server-569cc877fc-64hp4" in "kube-system" namespace has status "Ready":"False"
	I0731 18:08:03.781786   73696 pod_ready.go:102] pod "metrics-server-569cc877fc-64hp4" in "kube-system" namespace has status "Ready":"False"
	I0731 18:08:02.287159   73479 main.go:141] libmachine: (no-preload-673754) Waiting to get IP...
	I0731 18:08:02.288338   73479 main.go:141] libmachine: (no-preload-673754) DBG | domain no-preload-673754 has defined MAC address 52:54:00:5a:ec:78 in network mk-no-preload-673754
	I0731 18:08:02.288812   73479 main.go:141] libmachine: (no-preload-673754) DBG | unable to find current IP address of domain no-preload-673754 in network mk-no-preload-673754
	I0731 18:08:02.288879   73479 main.go:141] libmachine: (no-preload-673754) DBG | I0731 18:08:02.288799   75356 retry.go:31] will retry after 229.074083ms: waiting for machine to come up
	I0731 18:08:02.519266   73479 main.go:141] libmachine: (no-preload-673754) DBG | domain no-preload-673754 has defined MAC address 52:54:00:5a:ec:78 in network mk-no-preload-673754
	I0731 18:08:02.519697   73479 main.go:141] libmachine: (no-preload-673754) DBG | unable to find current IP address of domain no-preload-673754 in network mk-no-preload-673754
	I0731 18:08:02.519720   73479 main.go:141] libmachine: (no-preload-673754) DBG | I0731 18:08:02.519663   75356 retry.go:31] will retry after 328.345922ms: waiting for machine to come up
	I0731 18:08:02.849290   73479 main.go:141] libmachine: (no-preload-673754) DBG | domain no-preload-673754 has defined MAC address 52:54:00:5a:ec:78 in network mk-no-preload-673754
	I0731 18:08:02.849839   73479 main.go:141] libmachine: (no-preload-673754) DBG | unable to find current IP address of domain no-preload-673754 in network mk-no-preload-673754
	I0731 18:08:02.849871   73479 main.go:141] libmachine: (no-preload-673754) DBG | I0731 18:08:02.849787   75356 retry.go:31] will retry after 339.030371ms: waiting for machine to come up
	I0731 18:08:03.190065   73479 main.go:141] libmachine: (no-preload-673754) DBG | domain no-preload-673754 has defined MAC address 52:54:00:5a:ec:78 in network mk-no-preload-673754
	I0731 18:08:03.190587   73479 main.go:141] libmachine: (no-preload-673754) DBG | unable to find current IP address of domain no-preload-673754 in network mk-no-preload-673754
	I0731 18:08:03.190620   73479 main.go:141] libmachine: (no-preload-673754) DBG | I0731 18:08:03.190539   75356 retry.go:31] will retry after 514.955663ms: waiting for machine to come up
	I0731 18:08:03.707808   73479 main.go:141] libmachine: (no-preload-673754) DBG | domain no-preload-673754 has defined MAC address 52:54:00:5a:ec:78 in network mk-no-preload-673754
	I0731 18:08:03.708382   73479 main.go:141] libmachine: (no-preload-673754) DBG | unable to find current IP address of domain no-preload-673754 in network mk-no-preload-673754
	I0731 18:08:03.708418   73479 main.go:141] libmachine: (no-preload-673754) DBG | I0731 18:08:03.708349   75356 retry.go:31] will retry after 543.558992ms: waiting for machine to come up
	I0731 18:08:04.253224   73479 main.go:141] libmachine: (no-preload-673754) DBG | domain no-preload-673754 has defined MAC address 52:54:00:5a:ec:78 in network mk-no-preload-673754
	I0731 18:08:04.253760   73479 main.go:141] libmachine: (no-preload-673754) DBG | unable to find current IP address of domain no-preload-673754 in network mk-no-preload-673754
	I0731 18:08:04.253781   73479 main.go:141] libmachine: (no-preload-673754) DBG | I0731 18:08:04.253708   75356 retry.go:31] will retry after 925.348689ms: waiting for machine to come up
	I0731 18:08:05.180439   73479 main.go:141] libmachine: (no-preload-673754) DBG | domain no-preload-673754 has defined MAC address 52:54:00:5a:ec:78 in network mk-no-preload-673754
	I0731 18:08:05.180833   73479 main.go:141] libmachine: (no-preload-673754) DBG | unable to find current IP address of domain no-preload-673754 in network mk-no-preload-673754
	I0731 18:08:05.180857   73479 main.go:141] libmachine: (no-preload-673754) DBG | I0731 18:08:05.180786   75356 retry.go:31] will retry after 1.014666798s: waiting for machine to come up
	I0731 18:08:06.196879   73479 main.go:141] libmachine: (no-preload-673754) DBG | domain no-preload-673754 has defined MAC address 52:54:00:5a:ec:78 in network mk-no-preload-673754
	I0731 18:08:06.197321   73479 main.go:141] libmachine: (no-preload-673754) DBG | unable to find current IP address of domain no-preload-673754 in network mk-no-preload-673754
	I0731 18:08:06.197355   73479 main.go:141] libmachine: (no-preload-673754) DBG | I0731 18:08:06.197258   75356 retry.go:31] will retry after 1.163649074s: waiting for machine to come up
	I0731 18:08:05.278001   73800 pod_ready.go:102] pod "metrics-server-569cc877fc-fzxrw" in "kube-system" namespace has status "Ready":"False"
	I0731 18:08:07.771870   73800 pod_ready.go:102] pod "metrics-server-569cc877fc-fzxrw" in "kube-system" namespace has status "Ready":"False"
	I0731 18:08:06.945760   74203 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.99946679s)
	I0731 18:08:06.945790   74203 crio.go:469] duration metric: took 2.999576832s to extract the tarball
	I0731 18:08:06.945800   74203 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0731 18:08:06.989081   74203 ssh_runner.go:195] Run: sudo crictl images --output json
	I0731 18:08:07.024521   74203 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0731 18:08:07.024545   74203 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0731 18:08:07.024615   74203 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0731 18:08:07.024645   74203 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0731 18:08:07.024695   74203 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0731 18:08:07.024729   74203 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0731 18:08:07.024718   74203 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0731 18:08:07.024780   74203 image.go:134] retrieving image: registry.k8s.io/pause:3.2
	I0731 18:08:07.024822   74203 image.go:134] retrieving image: registry.k8s.io/coredns:1.7.0
	I0731 18:08:07.024716   74203 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0731 18:08:07.026228   74203 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0731 18:08:07.026237   74203 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0731 18:08:07.026242   74203 image.go:177] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0731 18:08:07.026263   74203 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0731 18:08:07.026231   74203 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0731 18:08:07.026231   74203 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0731 18:08:07.026863   74203 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0731 18:08:07.027091   74203 image.go:177] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0731 18:08:07.282735   74203 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0731 18:08:07.284464   74203 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0731 18:08:07.287001   74203 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0731 18:08:07.305873   74203 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0731 18:08:07.307144   74203 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0731 18:08:07.311401   74203 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0731 18:08:07.318119   74203 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0731 18:08:07.366929   74203 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0731 18:08:07.366979   74203 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0731 18:08:07.367041   74203 ssh_runner.go:195] Run: which crictl
	I0731 18:08:07.393481   74203 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0731 18:08:07.393534   74203 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0731 18:08:07.393594   74203 ssh_runner.go:195] Run: which crictl
	I0731 18:08:07.441987   74203 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0731 18:08:07.442036   74203 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0731 18:08:07.442083   74203 ssh_runner.go:195] Run: which crictl
	I0731 18:08:07.449033   74203 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0731 18:08:07.449085   74203 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0731 18:08:07.449137   74203 ssh_runner.go:195] Run: which crictl
	I0731 18:08:07.465248   74203 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0731 18:08:07.465291   74203 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0731 18:08:07.465341   74203 ssh_runner.go:195] Run: which crictl
	I0731 18:08:07.476013   74203 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0731 18:08:07.476053   74203 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0731 18:08:07.476074   74203 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0731 18:08:07.476090   74203 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0731 18:08:07.476111   74203 ssh_runner.go:195] Run: which crictl
	I0731 18:08:07.476129   74203 ssh_runner.go:195] Run: which crictl
	I0731 18:08:07.476146   74203 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0731 18:08:07.476111   74203 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0731 18:08:07.476196   74203 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0731 18:08:07.476220   74203 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0731 18:08:07.476273   74203 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0731 18:08:07.592532   74203 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0731 18:08:07.592677   74203 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19349-8084/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0731 18:08:07.592709   74203 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19349-8084/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0731 18:08:07.592797   74203 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0731 18:08:07.637254   74203 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19349-8084/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0731 18:08:07.637276   74203 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19349-8084/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0731 18:08:07.637288   74203 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19349-8084/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0731 18:08:07.637292   74203 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19349-8084/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0731 18:08:07.640419   74203 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19349-8084/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0731 18:08:07.860814   74203 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0731 18:08:08.002115   74203 cache_images.go:92] duration metric: took 977.553376ms to LoadCachedImages
	W0731 18:08:08.002248   74203 out.go:239] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19349-8084/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2: no such file or directory
	I0731 18:08:08.002267   74203 kubeadm.go:934] updating node { 192.168.39.26 8443 v1.20.0 crio true true} ...
	I0731 18:08:08.002404   74203 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-276459 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.39.26
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-276459 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0731 18:08:08.002500   74203 ssh_runner.go:195] Run: crio config
	I0731 18:08:08.059237   74203 cni.go:84] Creating CNI manager for ""
	I0731 18:08:08.059264   74203 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0731 18:08:08.059281   74203 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0731 18:08:08.059313   74203 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.26 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-276459 NodeName:old-k8s-version-276459 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.26"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.26 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0731 18:08:08.059503   74203 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.26
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-276459"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.26
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.26"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0731 18:08:08.059575   74203 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0731 18:08:08.070299   74203 binaries.go:44] Found k8s binaries, skipping transfer
	I0731 18:08:08.070388   74203 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0731 18:08:08.082083   74203 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (429 bytes)
	I0731 18:08:08.101728   74203 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0731 18:08:08.120721   74203 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2120 bytes)
	I0731 18:08:08.137997   74203 ssh_runner.go:195] Run: grep 192.168.39.26	control-plane.minikube.internal$ /etc/hosts
	I0731 18:08:08.141797   74203 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.26	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0731 18:08:08.156861   74203 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0731 18:08:08.287700   74203 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0731 18:08:08.307598   74203 certs.go:68] Setting up /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/old-k8s-version-276459 for IP: 192.168.39.26
	I0731 18:08:08.307623   74203 certs.go:194] generating shared ca certs ...
	I0731 18:08:08.307644   74203 certs.go:226] acquiring lock for ca certs: {Name:mk49eed439085f9c30746706460ce213570d1997 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 18:08:08.307811   74203 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19349-8084/.minikube/ca.key
	I0731 18:08:08.307855   74203 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19349-8084/.minikube/proxy-client-ca.key
	I0731 18:08:08.307868   74203 certs.go:256] generating profile certs ...
	I0731 18:08:08.307987   74203 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/old-k8s-version-276459/client.key
	I0731 18:08:08.308062   74203 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/old-k8s-version-276459/apiserver.key.7c620cac
	I0731 18:08:08.308123   74203 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/old-k8s-version-276459/proxy-client.key
	I0731 18:08:08.308283   74203 certs.go:484] found cert: /home/jenkins/minikube-integration/19349-8084/.minikube/certs/15259.pem (1338 bytes)
	W0731 18:08:08.308315   74203 certs.go:480] ignoring /home/jenkins/minikube-integration/19349-8084/.minikube/certs/15259_empty.pem, impossibly tiny 0 bytes
	I0731 18:08:08.308324   74203 certs.go:484] found cert: /home/jenkins/minikube-integration/19349-8084/.minikube/certs/ca-key.pem (1675 bytes)
	I0731 18:08:08.308362   74203 certs.go:484] found cert: /home/jenkins/minikube-integration/19349-8084/.minikube/certs/ca.pem (1082 bytes)
	I0731 18:08:08.308382   74203 certs.go:484] found cert: /home/jenkins/minikube-integration/19349-8084/.minikube/certs/cert.pem (1123 bytes)
	I0731 18:08:08.308402   74203 certs.go:484] found cert: /home/jenkins/minikube-integration/19349-8084/.minikube/certs/key.pem (1675 bytes)
	I0731 18:08:08.308438   74203 certs.go:484] found cert: /home/jenkins/minikube-integration/19349-8084/.minikube/files/etc/ssl/certs/152592.pem (1708 bytes)
	I0731 18:08:08.309095   74203 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19349-8084/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0731 18:08:08.355508   74203 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19349-8084/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0731 18:08:08.391999   74203 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19349-8084/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0731 18:08:08.427937   74203 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19349-8084/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0731 18:08:08.456268   74203 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/old-k8s-version-276459/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0731 18:08:08.486991   74203 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/old-k8s-version-276459/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0731 18:08:08.519564   74203 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/old-k8s-version-276459/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0731 18:08:08.557029   74203 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/old-k8s-version-276459/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0731 18:08:08.583971   74203 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19349-8084/.minikube/files/etc/ssl/certs/152592.pem --> /usr/share/ca-certificates/152592.pem (1708 bytes)
	I0731 18:08:08.608505   74203 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19349-8084/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0731 18:08:08.630279   74203 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19349-8084/.minikube/certs/15259.pem --> /usr/share/ca-certificates/15259.pem (1338 bytes)
	I0731 18:08:08.655012   74203 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0731 18:08:08.671907   74203 ssh_runner.go:195] Run: openssl version
	I0731 18:08:08.677538   74203 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/152592.pem && ln -fs /usr/share/ca-certificates/152592.pem /etc/ssl/certs/152592.pem"
	I0731 18:08:08.687877   74203 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/152592.pem
	I0731 18:08:08.692201   74203 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 31 16:54 /usr/share/ca-certificates/152592.pem
	I0731 18:08:08.692258   74203 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/152592.pem
	I0731 18:08:08.698563   74203 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/152592.pem /etc/ssl/certs/3ec20f2e.0"
	I0731 18:08:08.708986   74203 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0731 18:08:08.719132   74203 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0731 18:08:08.723242   74203 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 31 16:42 /usr/share/ca-certificates/minikubeCA.pem
	I0731 18:08:08.723299   74203 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0731 18:08:08.729032   74203 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0731 18:08:08.739306   74203 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/15259.pem && ln -fs /usr/share/ca-certificates/15259.pem /etc/ssl/certs/15259.pem"
	I0731 18:08:08.749759   74203 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/15259.pem
	I0731 18:08:08.754167   74203 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 31 16:54 /usr/share/ca-certificates/15259.pem
	I0731 18:08:08.754228   74203 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/15259.pem
	I0731 18:08:08.759786   74203 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/15259.pem /etc/ssl/certs/51391683.0"
	I0731 18:08:08.770180   74203 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0731 18:08:08.775414   74203 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0731 18:08:08.781830   74203 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0731 18:08:08.787876   74203 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0731 18:08:08.793927   74203 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0731 18:08:08.800090   74203 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0731 18:08:08.806169   74203 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0731 18:08:08.811895   74203 kubeadm.go:392] StartCluster: {Name:old-k8s-version-276459 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-276459 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.26 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0731 18:08:08.811983   74203 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0731 18:08:08.812040   74203 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0731 18:08:08.853889   74203 cri.go:89] found id: ""
	I0731 18:08:08.853989   74203 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0731 18:08:08.863817   74203 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0731 18:08:08.863837   74203 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0731 18:08:08.863887   74203 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0731 18:08:08.873411   74203 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0731 18:08:08.874616   74203 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-276459" does not appear in /home/jenkins/minikube-integration/19349-8084/kubeconfig
	I0731 18:08:08.875356   74203 kubeconfig.go:62] /home/jenkins/minikube-integration/19349-8084/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-276459" cluster setting kubeconfig missing "old-k8s-version-276459" context setting]
	I0731 18:08:08.876650   74203 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19349-8084/kubeconfig: {Name:mkf2340bc1ada3c683f132aca37228d7588e1fdb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 18:08:08.918433   74203 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0731 18:08:08.931013   74203 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.39.26
	I0731 18:08:08.931067   74203 kubeadm.go:1160] stopping kube-system containers ...
	I0731 18:08:08.931083   74203 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0731 18:08:08.931163   74203 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0731 18:08:08.964683   74203 cri.go:89] found id: ""
	I0731 18:08:08.964759   74203 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0731 18:08:08.980459   74203 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0731 18:08:08.989969   74203 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0731 18:08:08.989997   74203 kubeadm.go:157] found existing configuration files:
	
	I0731 18:08:08.990049   74203 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0731 18:08:08.999015   74203 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0731 18:08:08.999074   74203 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0731 18:08:09.008055   74203 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0731 18:08:09.016532   74203 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0731 18:08:09.016599   74203 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0731 18:08:09.025791   74203 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0731 18:08:09.034160   74203 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0731 18:08:09.034227   74203 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0731 18:08:09.043381   74203 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0731 18:08:09.053419   74203 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0731 18:08:09.053832   74203 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0731 18:08:09.064966   74203 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0731 18:08:09.073962   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0731 18:08:09.198503   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0731 18:08:10.048258   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0731 18:08:10.283812   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0731 18:08:06.285091   73696 pod_ready.go:102] pod "metrics-server-569cc877fc-64hp4" in "kube-system" namespace has status "Ready":"False"
	I0731 18:08:08.779998   73696 pod_ready.go:102] pod "metrics-server-569cc877fc-64hp4" in "kube-system" namespace has status "Ready":"False"
	I0731 18:08:10.780198   73696 pod_ready.go:102] pod "metrics-server-569cc877fc-64hp4" in "kube-system" namespace has status "Ready":"False"
	I0731 18:08:07.362756   73479 main.go:141] libmachine: (no-preload-673754) DBG | domain no-preload-673754 has defined MAC address 52:54:00:5a:ec:78 in network mk-no-preload-673754
	I0731 18:08:07.363299   73479 main.go:141] libmachine: (no-preload-673754) DBG | unable to find current IP address of domain no-preload-673754 in network mk-no-preload-673754
	I0731 18:08:07.363328   73479 main.go:141] libmachine: (no-preload-673754) DBG | I0731 18:08:07.363231   75356 retry.go:31] will retry after 1.508296616s: waiting for machine to come up
	I0731 18:08:08.873528   73479 main.go:141] libmachine: (no-preload-673754) DBG | domain no-preload-673754 has defined MAC address 52:54:00:5a:ec:78 in network mk-no-preload-673754
	I0731 18:08:08.874013   73479 main.go:141] libmachine: (no-preload-673754) DBG | unable to find current IP address of domain no-preload-673754 in network mk-no-preload-673754
	I0731 18:08:08.874051   73479 main.go:141] libmachine: (no-preload-673754) DBG | I0731 18:08:08.873971   75356 retry.go:31] will retry after 2.281343566s: waiting for machine to come up
	I0731 18:08:11.157083   73479 main.go:141] libmachine: (no-preload-673754) DBG | domain no-preload-673754 has defined MAC address 52:54:00:5a:ec:78 in network mk-no-preload-673754
	I0731 18:08:11.157578   73479 main.go:141] libmachine: (no-preload-673754) DBG | unable to find current IP address of domain no-preload-673754 in network mk-no-preload-673754
	I0731 18:08:11.157609   73479 main.go:141] libmachine: (no-preload-673754) DBG | I0731 18:08:11.157537   75356 retry.go:31] will retry after 2.49049752s: waiting for machine to come up
	I0731 18:08:09.802010   73800 pod_ready.go:102] pod "metrics-server-569cc877fc-fzxrw" in "kube-system" namespace has status "Ready":"False"
	I0731 18:08:12.271900   73800 pod_ready.go:102] pod "metrics-server-569cc877fc-fzxrw" in "kube-system" namespace has status "Ready":"False"
	I0731 18:08:10.390012   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0731 18:08:10.477969   74203 api_server.go:52] waiting for apiserver process to appear ...
	I0731 18:08:10.478093   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:08:10.978427   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:08:11.478715   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:08:11.978685   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:08:12.478211   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:08:12.978218   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:08:13.478493   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:08:13.978778   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:08:14.478489   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:08:14.978983   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:08:13.278943   73696 pod_ready.go:102] pod "metrics-server-569cc877fc-64hp4" in "kube-system" namespace has status "Ready":"False"
	I0731 18:08:15.778760   73696 pod_ready.go:102] pod "metrics-server-569cc877fc-64hp4" in "kube-system" namespace has status "Ready":"False"
	I0731 18:08:13.650131   73479 main.go:141] libmachine: (no-preload-673754) DBG | domain no-preload-673754 has defined MAC address 52:54:00:5a:ec:78 in network mk-no-preload-673754
	I0731 18:08:13.650459   73479 main.go:141] libmachine: (no-preload-673754) DBG | unable to find current IP address of domain no-preload-673754 in network mk-no-preload-673754
	I0731 18:08:13.650480   73479 main.go:141] libmachine: (no-preload-673754) DBG | I0731 18:08:13.650428   75356 retry.go:31] will retry after 3.437877467s: waiting for machine to come up
	I0731 18:08:14.771879   73800 pod_ready.go:102] pod "metrics-server-569cc877fc-fzxrw" in "kube-system" namespace has status "Ready":"False"
	I0731 18:08:17.272673   73800 pod_ready.go:102] pod "metrics-server-569cc877fc-fzxrw" in "kube-system" namespace has status "Ready":"False"
	I0731 18:08:15.478444   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:08:15.978399   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:08:16.478641   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:08:16.979036   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:08:17.479053   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:08:17.978819   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:08:18.478280   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:08:18.978448   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:08:19.479056   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:08:19.978969   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:08:18.279604   73696 pod_ready.go:102] pod "metrics-server-569cc877fc-64hp4" in "kube-system" namespace has status "Ready":"False"
	I0731 18:08:20.778532   73696 pod_ready.go:102] pod "metrics-server-569cc877fc-64hp4" in "kube-system" namespace has status "Ready":"False"
	I0731 18:08:17.089986   73479 main.go:141] libmachine: (no-preload-673754) DBG | domain no-preload-673754 has defined MAC address 52:54:00:5a:ec:78 in network mk-no-preload-673754
	I0731 18:08:17.090556   73479 main.go:141] libmachine: (no-preload-673754) DBG | unable to find current IP address of domain no-preload-673754 in network mk-no-preload-673754
	I0731 18:08:17.090590   73479 main.go:141] libmachine: (no-preload-673754) DBG | I0731 18:08:17.090509   75356 retry.go:31] will retry after 2.95036051s: waiting for machine to come up
	I0731 18:08:20.044455   73479 main.go:141] libmachine: (no-preload-673754) DBG | domain no-preload-673754 has defined MAC address 52:54:00:5a:ec:78 in network mk-no-preload-673754
	I0731 18:08:20.044914   73479 main.go:141] libmachine: (no-preload-673754) Found IP for machine: 192.168.61.126
	I0731 18:08:20.044935   73479 main.go:141] libmachine: (no-preload-673754) Reserving static IP address...
	I0731 18:08:20.044948   73479 main.go:141] libmachine: (no-preload-673754) DBG | domain no-preload-673754 has current primary IP address 192.168.61.126 and MAC address 52:54:00:5a:ec:78 in network mk-no-preload-673754
	I0731 18:08:20.045286   73479 main.go:141] libmachine: (no-preload-673754) Reserved static IP address: 192.168.61.126
	I0731 18:08:20.045308   73479 main.go:141] libmachine: (no-preload-673754) Waiting for SSH to be available...
	I0731 18:08:20.045331   73479 main.go:141] libmachine: (no-preload-673754) DBG | found host DHCP lease matching {name: "no-preload-673754", mac: "52:54:00:5a:ec:78", ip: "192.168.61.126"} in network mk-no-preload-673754: {Iface:virbr2 ExpiryTime:2024-07-31 19:08:12 +0000 UTC Type:0 Mac:52:54:00:5a:ec:78 Iaid: IPaddr:192.168.61.126 Prefix:24 Hostname:no-preload-673754 Clientid:01:52:54:00:5a:ec:78}
	I0731 18:08:20.045352   73479 main.go:141] libmachine: (no-preload-673754) DBG | skip adding static IP to network mk-no-preload-673754 - found existing host DHCP lease matching {name: "no-preload-673754", mac: "52:54:00:5a:ec:78", ip: "192.168.61.126"}
	I0731 18:08:20.045367   73479 main.go:141] libmachine: (no-preload-673754) DBG | Getting to WaitForSSH function...
	I0731 18:08:20.047574   73479 main.go:141] libmachine: (no-preload-673754) DBG | domain no-preload-673754 has defined MAC address 52:54:00:5a:ec:78 in network mk-no-preload-673754
	I0731 18:08:20.047913   73479 main.go:141] libmachine: (no-preload-673754) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:ec:78", ip: ""} in network mk-no-preload-673754: {Iface:virbr2 ExpiryTime:2024-07-31 19:08:12 +0000 UTC Type:0 Mac:52:54:00:5a:ec:78 Iaid: IPaddr:192.168.61.126 Prefix:24 Hostname:no-preload-673754 Clientid:01:52:54:00:5a:ec:78}
	I0731 18:08:20.047939   73479 main.go:141] libmachine: (no-preload-673754) DBG | domain no-preload-673754 has defined IP address 192.168.61.126 and MAC address 52:54:00:5a:ec:78 in network mk-no-preload-673754
	I0731 18:08:20.048069   73479 main.go:141] libmachine: (no-preload-673754) DBG | Using SSH client type: external
	I0731 18:08:20.048106   73479 main.go:141] libmachine: (no-preload-673754) DBG | Using SSH private key: /home/jenkins/minikube-integration/19349-8084/.minikube/machines/no-preload-673754/id_rsa (-rw-------)
	I0731 18:08:20.048150   73479 main.go:141] libmachine: (no-preload-673754) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.126 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19349-8084/.minikube/machines/no-preload-673754/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0731 18:08:20.048168   73479 main.go:141] libmachine: (no-preload-673754) DBG | About to run SSH command:
	I0731 18:08:20.048181   73479 main.go:141] libmachine: (no-preload-673754) DBG | exit 0
	I0731 18:08:20.175606   73479 main.go:141] libmachine: (no-preload-673754) DBG | SSH cmd err, output: <nil>: 
	I0731 18:08:20.175917   73479 main.go:141] libmachine: (no-preload-673754) Calling .GetConfigRaw
	I0731 18:08:20.176508   73479 main.go:141] libmachine: (no-preload-673754) Calling .GetIP
	I0731 18:08:20.179035   73479 main.go:141] libmachine: (no-preload-673754) DBG | domain no-preload-673754 has defined MAC address 52:54:00:5a:ec:78 in network mk-no-preload-673754
	I0731 18:08:20.179374   73479 main.go:141] libmachine: (no-preload-673754) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:ec:78", ip: ""} in network mk-no-preload-673754: {Iface:virbr2 ExpiryTime:2024-07-31 19:08:12 +0000 UTC Type:0 Mac:52:54:00:5a:ec:78 Iaid: IPaddr:192.168.61.126 Prefix:24 Hostname:no-preload-673754 Clientid:01:52:54:00:5a:ec:78}
	I0731 18:08:20.179404   73479 main.go:141] libmachine: (no-preload-673754) DBG | domain no-preload-673754 has defined IP address 192.168.61.126 and MAC address 52:54:00:5a:ec:78 in network mk-no-preload-673754
	I0731 18:08:20.179686   73479 profile.go:143] Saving config to /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/no-preload-673754/config.json ...
	I0731 18:08:20.179869   73479 machine.go:94] provisionDockerMachine start ...
	I0731 18:08:20.179885   73479 main.go:141] libmachine: (no-preload-673754) Calling .DriverName
	I0731 18:08:20.180088   73479 main.go:141] libmachine: (no-preload-673754) Calling .GetSSHHostname
	I0731 18:08:20.182345   73479 main.go:141] libmachine: (no-preload-673754) DBG | domain no-preload-673754 has defined MAC address 52:54:00:5a:ec:78 in network mk-no-preload-673754
	I0731 18:08:20.182702   73479 main.go:141] libmachine: (no-preload-673754) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:ec:78", ip: ""} in network mk-no-preload-673754: {Iface:virbr2 ExpiryTime:2024-07-31 19:08:12 +0000 UTC Type:0 Mac:52:54:00:5a:ec:78 Iaid: IPaddr:192.168.61.126 Prefix:24 Hostname:no-preload-673754 Clientid:01:52:54:00:5a:ec:78}
	I0731 18:08:20.182727   73479 main.go:141] libmachine: (no-preload-673754) DBG | domain no-preload-673754 has defined IP address 192.168.61.126 and MAC address 52:54:00:5a:ec:78 in network mk-no-preload-673754
	I0731 18:08:20.182848   73479 main.go:141] libmachine: (no-preload-673754) Calling .GetSSHPort
	I0731 18:08:20.183060   73479 main.go:141] libmachine: (no-preload-673754) Calling .GetSSHKeyPath
	I0731 18:08:20.183227   73479 main.go:141] libmachine: (no-preload-673754) Calling .GetSSHKeyPath
	I0731 18:08:20.183414   73479 main.go:141] libmachine: (no-preload-673754) Calling .GetSSHUsername
	I0731 18:08:20.183572   73479 main.go:141] libmachine: Using SSH client type: native
	I0731 18:08:20.183747   73479 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.126 22 <nil> <nil>}
	I0731 18:08:20.183757   73479 main.go:141] libmachine: About to run SSH command:
	hostname
	I0731 18:08:20.295090   73479 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0731 18:08:20.295149   73479 main.go:141] libmachine: (no-preload-673754) Calling .GetMachineName
	I0731 18:08:20.295424   73479 buildroot.go:166] provisioning hostname "no-preload-673754"
	I0731 18:08:20.295454   73479 main.go:141] libmachine: (no-preload-673754) Calling .GetMachineName
	I0731 18:08:20.295631   73479 main.go:141] libmachine: (no-preload-673754) Calling .GetSSHHostname
	I0731 18:08:20.298467   73479 main.go:141] libmachine: (no-preload-673754) DBG | domain no-preload-673754 has defined MAC address 52:54:00:5a:ec:78 in network mk-no-preload-673754
	I0731 18:08:20.298771   73479 main.go:141] libmachine: (no-preload-673754) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:ec:78", ip: ""} in network mk-no-preload-673754: {Iface:virbr2 ExpiryTime:2024-07-31 19:08:12 +0000 UTC Type:0 Mac:52:54:00:5a:ec:78 Iaid: IPaddr:192.168.61.126 Prefix:24 Hostname:no-preload-673754 Clientid:01:52:54:00:5a:ec:78}
	I0731 18:08:20.298815   73479 main.go:141] libmachine: (no-preload-673754) DBG | domain no-preload-673754 has defined IP address 192.168.61.126 and MAC address 52:54:00:5a:ec:78 in network mk-no-preload-673754
	I0731 18:08:20.298897   73479 main.go:141] libmachine: (no-preload-673754) Calling .GetSSHPort
	I0731 18:08:20.299094   73479 main.go:141] libmachine: (no-preload-673754) Calling .GetSSHKeyPath
	I0731 18:08:20.299276   73479 main.go:141] libmachine: (no-preload-673754) Calling .GetSSHKeyPath
	I0731 18:08:20.299462   73479 main.go:141] libmachine: (no-preload-673754) Calling .GetSSHUsername
	I0731 18:08:20.299652   73479 main.go:141] libmachine: Using SSH client type: native
	I0731 18:08:20.299806   73479 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.126 22 <nil> <nil>}
	I0731 18:08:20.299817   73479 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-673754 && echo "no-preload-673754" | sudo tee /etc/hostname
	I0731 18:08:20.424901   73479 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-673754
	
	I0731 18:08:20.424951   73479 main.go:141] libmachine: (no-preload-673754) Calling .GetSSHHostname
	I0731 18:08:20.427679   73479 main.go:141] libmachine: (no-preload-673754) DBG | domain no-preload-673754 has defined MAC address 52:54:00:5a:ec:78 in network mk-no-preload-673754
	I0731 18:08:20.428049   73479 main.go:141] libmachine: (no-preload-673754) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:ec:78", ip: ""} in network mk-no-preload-673754: {Iface:virbr2 ExpiryTime:2024-07-31 19:08:12 +0000 UTC Type:0 Mac:52:54:00:5a:ec:78 Iaid: IPaddr:192.168.61.126 Prefix:24 Hostname:no-preload-673754 Clientid:01:52:54:00:5a:ec:78}
	I0731 18:08:20.428083   73479 main.go:141] libmachine: (no-preload-673754) DBG | domain no-preload-673754 has defined IP address 192.168.61.126 and MAC address 52:54:00:5a:ec:78 in network mk-no-preload-673754
	I0731 18:08:20.428230   73479 main.go:141] libmachine: (no-preload-673754) Calling .GetSSHPort
	I0731 18:08:20.428419   73479 main.go:141] libmachine: (no-preload-673754) Calling .GetSSHKeyPath
	I0731 18:08:20.428601   73479 main.go:141] libmachine: (no-preload-673754) Calling .GetSSHKeyPath
	I0731 18:08:20.428767   73479 main.go:141] libmachine: (no-preload-673754) Calling .GetSSHUsername
	I0731 18:08:20.428965   73479 main.go:141] libmachine: Using SSH client type: native
	I0731 18:08:20.429127   73479 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.126 22 <nil> <nil>}
	I0731 18:08:20.429142   73479 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-673754' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-673754/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-673754' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0731 18:08:20.546853   73479 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0731 18:08:20.546884   73479 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19349-8084/.minikube CaCertPath:/home/jenkins/minikube-integration/19349-8084/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19349-8084/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19349-8084/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19349-8084/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19349-8084/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19349-8084/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19349-8084/.minikube}
	I0731 18:08:20.546938   73479 buildroot.go:174] setting up certificates
	I0731 18:08:20.546955   73479 provision.go:84] configureAuth start
	I0731 18:08:20.546971   73479 main.go:141] libmachine: (no-preload-673754) Calling .GetMachineName
	I0731 18:08:20.547275   73479 main.go:141] libmachine: (no-preload-673754) Calling .GetIP
	I0731 18:08:20.550019   73479 main.go:141] libmachine: (no-preload-673754) DBG | domain no-preload-673754 has defined MAC address 52:54:00:5a:ec:78 in network mk-no-preload-673754
	I0731 18:08:20.550372   73479 main.go:141] libmachine: (no-preload-673754) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:ec:78", ip: ""} in network mk-no-preload-673754: {Iface:virbr2 ExpiryTime:2024-07-31 19:08:12 +0000 UTC Type:0 Mac:52:54:00:5a:ec:78 Iaid: IPaddr:192.168.61.126 Prefix:24 Hostname:no-preload-673754 Clientid:01:52:54:00:5a:ec:78}
	I0731 18:08:20.550400   73479 main.go:141] libmachine: (no-preload-673754) DBG | domain no-preload-673754 has defined IP address 192.168.61.126 and MAC address 52:54:00:5a:ec:78 in network mk-no-preload-673754
	I0731 18:08:20.550525   73479 main.go:141] libmachine: (no-preload-673754) Calling .GetSSHHostname
	I0731 18:08:20.552914   73479 main.go:141] libmachine: (no-preload-673754) DBG | domain no-preload-673754 has defined MAC address 52:54:00:5a:ec:78 in network mk-no-preload-673754
	I0731 18:08:20.553261   73479 main.go:141] libmachine: (no-preload-673754) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:ec:78", ip: ""} in network mk-no-preload-673754: {Iface:virbr2 ExpiryTime:2024-07-31 19:08:12 +0000 UTC Type:0 Mac:52:54:00:5a:ec:78 Iaid: IPaddr:192.168.61.126 Prefix:24 Hostname:no-preload-673754 Clientid:01:52:54:00:5a:ec:78}
	I0731 18:08:20.553290   73479 main.go:141] libmachine: (no-preload-673754) DBG | domain no-preload-673754 has defined IP address 192.168.61.126 and MAC address 52:54:00:5a:ec:78 in network mk-no-preload-673754
	I0731 18:08:20.553416   73479 provision.go:143] copyHostCerts
	I0731 18:08:20.553479   73479 exec_runner.go:144] found /home/jenkins/minikube-integration/19349-8084/.minikube/ca.pem, removing ...
	I0731 18:08:20.553490   73479 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19349-8084/.minikube/ca.pem
	I0731 18:08:20.553547   73479 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19349-8084/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19349-8084/.minikube/ca.pem (1082 bytes)
	I0731 18:08:20.553675   73479 exec_runner.go:144] found /home/jenkins/minikube-integration/19349-8084/.minikube/cert.pem, removing ...
	I0731 18:08:20.553687   73479 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19349-8084/.minikube/cert.pem
	I0731 18:08:20.553718   73479 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19349-8084/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19349-8084/.minikube/cert.pem (1123 bytes)
	I0731 18:08:20.553796   73479 exec_runner.go:144] found /home/jenkins/minikube-integration/19349-8084/.minikube/key.pem, removing ...
	I0731 18:08:20.553806   73479 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19349-8084/.minikube/key.pem
	I0731 18:08:20.553826   73479 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19349-8084/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19349-8084/.minikube/key.pem (1675 bytes)
	I0731 18:08:20.553883   73479 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19349-8084/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19349-8084/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19349-8084/.minikube/certs/ca-key.pem org=jenkins.no-preload-673754 san=[127.0.0.1 192.168.61.126 localhost minikube no-preload-673754]
	I0731 18:08:20.878891   73479 provision.go:177] copyRemoteCerts
	I0731 18:08:20.878963   73479 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0731 18:08:20.878990   73479 main.go:141] libmachine: (no-preload-673754) Calling .GetSSHHostname
	I0731 18:08:20.881529   73479 main.go:141] libmachine: (no-preload-673754) DBG | domain no-preload-673754 has defined MAC address 52:54:00:5a:ec:78 in network mk-no-preload-673754
	I0731 18:08:20.881868   73479 main.go:141] libmachine: (no-preload-673754) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:ec:78", ip: ""} in network mk-no-preload-673754: {Iface:virbr2 ExpiryTime:2024-07-31 19:08:12 +0000 UTC Type:0 Mac:52:54:00:5a:ec:78 Iaid: IPaddr:192.168.61.126 Prefix:24 Hostname:no-preload-673754 Clientid:01:52:54:00:5a:ec:78}
	I0731 18:08:20.881900   73479 main.go:141] libmachine: (no-preload-673754) DBG | domain no-preload-673754 has defined IP address 192.168.61.126 and MAC address 52:54:00:5a:ec:78 in network mk-no-preload-673754
	I0731 18:08:20.882053   73479 main.go:141] libmachine: (no-preload-673754) Calling .GetSSHPort
	I0731 18:08:20.882245   73479 main.go:141] libmachine: (no-preload-673754) Calling .GetSSHKeyPath
	I0731 18:08:20.882450   73479 main.go:141] libmachine: (no-preload-673754) Calling .GetSSHUsername
	I0731 18:08:20.882617   73479 sshutil.go:53] new ssh client: &{IP:192.168.61.126 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19349-8084/.minikube/machines/no-preload-673754/id_rsa Username:docker}
	I0731 18:08:20.968757   73479 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19349-8084/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0731 18:08:20.992136   73479 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19349-8084/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0731 18:08:21.013768   73479 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19349-8084/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0731 18:08:21.035808   73479 provision.go:87] duration metric: took 488.837788ms to configureAuth
	I0731 18:08:21.035839   73479 buildroot.go:189] setting minikube options for container-runtime
	I0731 18:08:21.036018   73479 config.go:182] Loaded profile config "no-preload-673754": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0-beta.0
	I0731 18:08:21.036099   73479 main.go:141] libmachine: (no-preload-673754) Calling .GetSSHHostname
	I0731 18:08:21.038949   73479 main.go:141] libmachine: (no-preload-673754) DBG | domain no-preload-673754 has defined MAC address 52:54:00:5a:ec:78 in network mk-no-preload-673754
	I0731 18:08:21.039335   73479 main.go:141] libmachine: (no-preload-673754) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:ec:78", ip: ""} in network mk-no-preload-673754: {Iface:virbr2 ExpiryTime:2024-07-31 19:08:12 +0000 UTC Type:0 Mac:52:54:00:5a:ec:78 Iaid: IPaddr:192.168.61.126 Prefix:24 Hostname:no-preload-673754 Clientid:01:52:54:00:5a:ec:78}
	I0731 18:08:21.039363   73479 main.go:141] libmachine: (no-preload-673754) DBG | domain no-preload-673754 has defined IP address 192.168.61.126 and MAC address 52:54:00:5a:ec:78 in network mk-no-preload-673754
	I0731 18:08:21.039556   73479 main.go:141] libmachine: (no-preload-673754) Calling .GetSSHPort
	I0731 18:08:21.039756   73479 main.go:141] libmachine: (no-preload-673754) Calling .GetSSHKeyPath
	I0731 18:08:21.039960   73479 main.go:141] libmachine: (no-preload-673754) Calling .GetSSHKeyPath
	I0731 18:08:21.040071   73479 main.go:141] libmachine: (no-preload-673754) Calling .GetSSHUsername
	I0731 18:08:21.040219   73479 main.go:141] libmachine: Using SSH client type: native
	I0731 18:08:21.040380   73479 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.126 22 <nil> <nil>}
	I0731 18:08:21.040396   73479 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0731 18:08:21.319623   73479 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0731 18:08:21.319657   73479 machine.go:97] duration metric: took 1.139776085s to provisionDockerMachine
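A minimal sketch of how a command like the one above can be assembled, assuming the options string contains no double quotes or backslashes; the helper name is illustrative:

package main

import "fmt"

// crioOptionsCmd builds the shell command shown above: write
// CRIO_MINIKUBE_OPTIONS into /etc/sysconfig/crio.minikube and restart crio.
func crioOptionsCmd(opts string) string {
	content := "\nCRIO_MINIKUBE_OPTIONS='" + opts + "'\n"
	return `sudo mkdir -p /etc/sysconfig && printf %s "` + content +
		`" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio`
}

func main() {
	fmt.Println(crioOptionsCmd("--insecure-registry 10.96.0.0/12 "))
}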
	I0731 18:08:21.319672   73479 start.go:293] postStartSetup for "no-preload-673754" (driver="kvm2")
	I0731 18:08:21.319689   73479 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0731 18:08:21.319710   73479 main.go:141] libmachine: (no-preload-673754) Calling .DriverName
	I0731 18:08:21.320049   73479 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0731 18:08:21.320076   73479 main.go:141] libmachine: (no-preload-673754) Calling .GetSSHHostname
	I0731 18:08:21.322963   73479 main.go:141] libmachine: (no-preload-673754) DBG | domain no-preload-673754 has defined MAC address 52:54:00:5a:ec:78 in network mk-no-preload-673754
	I0731 18:08:21.323436   73479 main.go:141] libmachine: (no-preload-673754) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:ec:78", ip: ""} in network mk-no-preload-673754: {Iface:virbr2 ExpiryTime:2024-07-31 19:08:12 +0000 UTC Type:0 Mac:52:54:00:5a:ec:78 Iaid: IPaddr:192.168.61.126 Prefix:24 Hostname:no-preload-673754 Clientid:01:52:54:00:5a:ec:78}
	I0731 18:08:21.323465   73479 main.go:141] libmachine: (no-preload-673754) DBG | domain no-preload-673754 has defined IP address 192.168.61.126 and MAC address 52:54:00:5a:ec:78 in network mk-no-preload-673754
	I0731 18:08:21.323634   73479 main.go:141] libmachine: (no-preload-673754) Calling .GetSSHPort
	I0731 18:08:21.323809   73479 main.go:141] libmachine: (no-preload-673754) Calling .GetSSHKeyPath
	I0731 18:08:21.324003   73479 main.go:141] libmachine: (no-preload-673754) Calling .GetSSHUsername
	I0731 18:08:21.324127   73479 sshutil.go:53] new ssh client: &{IP:192.168.61.126 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19349-8084/.minikube/machines/no-preload-673754/id_rsa Username:docker}
	I0731 18:08:21.409076   73479 ssh_runner.go:195] Run: cat /etc/os-release
	I0731 18:08:21.412884   73479 info.go:137] Remote host: Buildroot 2023.02.9
	I0731 18:08:21.412917   73479 filesync.go:126] Scanning /home/jenkins/minikube-integration/19349-8084/.minikube/addons for local assets ...
	I0731 18:08:21.413020   73479 filesync.go:126] Scanning /home/jenkins/minikube-integration/19349-8084/.minikube/files for local assets ...
	I0731 18:08:21.413108   73479 filesync.go:149] local asset: /home/jenkins/minikube-integration/19349-8084/.minikube/files/etc/ssl/certs/152592.pem -> 152592.pem in /etc/ssl/certs
	I0731 18:08:21.413233   73479 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0731 18:08:21.421812   73479 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19349-8084/.minikube/files/etc/ssl/certs/152592.pem --> /etc/ssl/certs/152592.pem (1708 bytes)
	I0731 18:08:21.447124   73479 start.go:296] duration metric: took 127.423498ms for postStartSetup
	I0731 18:08:21.447196   73479 fix.go:56] duration metric: took 20.511108968s for fixHost
	I0731 18:08:21.447226   73479 main.go:141] libmachine: (no-preload-673754) Calling .GetSSHHostname
	I0731 18:08:21.450022   73479 main.go:141] libmachine: (no-preload-673754) DBG | domain no-preload-673754 has defined MAC address 52:54:00:5a:ec:78 in network mk-no-preload-673754
	I0731 18:08:21.450408   73479 main.go:141] libmachine: (no-preload-673754) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:ec:78", ip: ""} in network mk-no-preload-673754: {Iface:virbr2 ExpiryTime:2024-07-31 19:08:12 +0000 UTC Type:0 Mac:52:54:00:5a:ec:78 Iaid: IPaddr:192.168.61.126 Prefix:24 Hostname:no-preload-673754 Clientid:01:52:54:00:5a:ec:78}
	I0731 18:08:21.450431   73479 main.go:141] libmachine: (no-preload-673754) DBG | domain no-preload-673754 has defined IP address 192.168.61.126 and MAC address 52:54:00:5a:ec:78 in network mk-no-preload-673754
	I0731 18:08:21.450628   73479 main.go:141] libmachine: (no-preload-673754) Calling .GetSSHPort
	I0731 18:08:21.450846   73479 main.go:141] libmachine: (no-preload-673754) Calling .GetSSHKeyPath
	I0731 18:08:21.451009   73479 main.go:141] libmachine: (no-preload-673754) Calling .GetSSHKeyPath
	I0731 18:08:21.451161   73479 main.go:141] libmachine: (no-preload-673754) Calling .GetSSHUsername
	I0731 18:08:21.451327   73479 main.go:141] libmachine: Using SSH client type: native
	I0731 18:08:21.451527   73479 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.126 22 <nil> <nil>}
	I0731 18:08:21.451541   73479 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0731 18:08:21.563653   73479 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722449301.536356236
	
	I0731 18:08:21.563672   73479 fix.go:216] guest clock: 1722449301.536356236
	I0731 18:08:21.563679   73479 fix.go:229] Guest: 2024-07-31 18:08:21.536356236 +0000 UTC Remote: 2024-07-31 18:08:21.447206545 +0000 UTC m=+354.621330953 (delta=89.149691ms)
	I0731 18:08:21.563702   73479 fix.go:200] guest clock delta is within tolerance: 89.149691ms
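The clock comparison logged above (the guest's "date +%s.%N" output against the host-side timestamp) reduces to a small check; a sketch, with the tolerance value assumed for illustration rather than taken from minikube:

package main

import (
	"fmt"
	"time"
)

// withinTolerance reports whether the absolute guest/host clock delta is small
// enough to skip resynchronizing the guest clock. The 2s threshold is an
// assumed value for illustration.
func withinTolerance(guest, host time.Time) (time.Duration, bool) {
	d := guest.Sub(host)
	if d < 0 {
		d = -d
	}
	return d, d <= 2*time.Second
}

func main() {
	guest := time.Unix(1722449301, 536356236) // parsed from the guest's "date +%s.%N" output above
	host := time.Unix(1722449301, 447206545)  // host-side timestamp from the log
	d, ok := withinTolerance(guest, host)
	fmt.Printf("delta=%v withinTolerance=%v\n", d, ok) // delta=89.149691ms withinTolerance=true
}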
	I0731 18:08:21.563709   73479 start.go:83] releasing machines lock for "no-preload-673754", held for 20.627680156s
	I0731 18:08:21.563734   73479 main.go:141] libmachine: (no-preload-673754) Calling .DriverName
	I0731 18:08:21.563992   73479 main.go:141] libmachine: (no-preload-673754) Calling .GetIP
	I0731 18:08:21.566875   73479 main.go:141] libmachine: (no-preload-673754) DBG | domain no-preload-673754 has defined MAC address 52:54:00:5a:ec:78 in network mk-no-preload-673754
	I0731 18:08:21.567265   73479 main.go:141] libmachine: (no-preload-673754) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:ec:78", ip: ""} in network mk-no-preload-673754: {Iface:virbr2 ExpiryTime:2024-07-31 19:08:12 +0000 UTC Type:0 Mac:52:54:00:5a:ec:78 Iaid: IPaddr:192.168.61.126 Prefix:24 Hostname:no-preload-673754 Clientid:01:52:54:00:5a:ec:78}
	I0731 18:08:21.567290   73479 main.go:141] libmachine: (no-preload-673754) DBG | domain no-preload-673754 has defined IP address 192.168.61.126 and MAC address 52:54:00:5a:ec:78 in network mk-no-preload-673754
	I0731 18:08:21.567505   73479 main.go:141] libmachine: (no-preload-673754) Calling .DriverName
	I0731 18:08:21.568045   73479 main.go:141] libmachine: (no-preload-673754) Calling .DriverName
	I0731 18:08:21.568237   73479 main.go:141] libmachine: (no-preload-673754) Calling .DriverName
	I0731 18:08:21.568368   73479 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0731 18:08:21.568408   73479 main.go:141] libmachine: (no-preload-673754) Calling .GetSSHHostname
	I0731 18:08:21.568465   73479 ssh_runner.go:195] Run: cat /version.json
	I0731 18:08:21.568492   73479 main.go:141] libmachine: (no-preload-673754) Calling .GetSSHHostname
	I0731 18:08:21.571178   73479 main.go:141] libmachine: (no-preload-673754) DBG | domain no-preload-673754 has defined MAC address 52:54:00:5a:ec:78 in network mk-no-preload-673754
	I0731 18:08:21.571554   73479 main.go:141] libmachine: (no-preload-673754) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:ec:78", ip: ""} in network mk-no-preload-673754: {Iface:virbr2 ExpiryTime:2024-07-31 19:08:12 +0000 UTC Type:0 Mac:52:54:00:5a:ec:78 Iaid: IPaddr:192.168.61.126 Prefix:24 Hostname:no-preload-673754 Clientid:01:52:54:00:5a:ec:78}
	I0731 18:08:21.571603   73479 main.go:141] libmachine: (no-preload-673754) DBG | domain no-preload-673754 has defined IP address 192.168.61.126 and MAC address 52:54:00:5a:ec:78 in network mk-no-preload-673754
	I0731 18:08:21.571653   73479 main.go:141] libmachine: (no-preload-673754) DBG | domain no-preload-673754 has defined MAC address 52:54:00:5a:ec:78 in network mk-no-preload-673754
	I0731 18:08:21.571729   73479 main.go:141] libmachine: (no-preload-673754) Calling .GetSSHPort
	I0731 18:08:21.571902   73479 main.go:141] libmachine: (no-preload-673754) Calling .GetSSHKeyPath
	I0731 18:08:21.572071   73479 main.go:141] libmachine: (no-preload-673754) Calling .GetSSHUsername
	I0731 18:08:21.572213   73479 main.go:141] libmachine: (no-preload-673754) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:ec:78", ip: ""} in network mk-no-preload-673754: {Iface:virbr2 ExpiryTime:2024-07-31 19:08:12 +0000 UTC Type:0 Mac:52:54:00:5a:ec:78 Iaid: IPaddr:192.168.61.126 Prefix:24 Hostname:no-preload-673754 Clientid:01:52:54:00:5a:ec:78}
	I0731 18:08:21.572240   73479 main.go:141] libmachine: (no-preload-673754) DBG | domain no-preload-673754 has defined IP address 192.168.61.126 and MAC address 52:54:00:5a:ec:78 in network mk-no-preload-673754
	I0731 18:08:21.572256   73479 sshutil.go:53] new ssh client: &{IP:192.168.61.126 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19349-8084/.minikube/machines/no-preload-673754/id_rsa Username:docker}
	I0731 18:08:21.572373   73479 main.go:141] libmachine: (no-preload-673754) Calling .GetSSHPort
	I0731 18:08:21.572505   73479 main.go:141] libmachine: (no-preload-673754) Calling .GetSSHKeyPath
	I0731 18:08:21.572609   73479 main.go:141] libmachine: (no-preload-673754) Calling .GetSSHUsername
	I0731 18:08:21.572739   73479 sshutil.go:53] new ssh client: &{IP:192.168.61.126 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19349-8084/.minikube/machines/no-preload-673754/id_rsa Username:docker}
	I0731 18:08:21.682894   73479 ssh_runner.go:195] Run: systemctl --version
	I0731 18:08:21.689126   73479 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0731 18:08:21.829572   73479 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0731 18:08:21.836507   73479 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0731 18:08:21.836589   73479 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0731 18:08:21.855127   73479 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0731 18:08:21.855176   73479 start.go:495] detecting cgroup driver to use...
	I0731 18:08:21.855256   73479 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0731 18:08:21.870886   73479 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0731 18:08:21.884762   73479 docker.go:217] disabling cri-docker service (if available) ...
	I0731 18:08:21.884833   73479 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0731 18:08:21.899480   73479 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0731 18:08:21.912438   73479 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0731 18:08:22.024528   73479 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0731 18:08:22.177400   73479 docker.go:233] disabling docker service ...
	I0731 18:08:22.177500   73479 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0731 18:08:22.191225   73479 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0731 18:08:22.204004   73479 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0731 18:08:22.327408   73479 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0731 18:08:22.449116   73479 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0731 18:08:22.463031   73479 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0731 18:08:22.481864   73479 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0731 18:08:22.481935   73479 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 18:08:22.491687   73479 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0731 18:08:22.491768   73479 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 18:08:22.501686   73479 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 18:08:22.511207   73479 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 18:08:22.521390   73479 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0731 18:08:22.531355   73479 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 18:08:22.541544   73479 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 18:08:22.556829   73479 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 18:08:22.566012   73479 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0731 18:08:22.574865   73479 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0731 18:08:22.574938   73479 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0731 18:08:22.588125   73479 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
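The fallback sequence above (probe the bridge-netfilter sysctl, load br_netfilter if the key is missing, then enable IPv4 forwarding) can be sketched as follows; the helper is illustrative and the error handling is simplified:

package main

import (
	"fmt"
	"os/exec"
)

// run executes a command and returns a single error carrying its output.
func run(name string, args ...string) error {
	out, err := exec.Command(name, args...).CombinedOutput()
	if err != nil {
		return fmt.Errorf("%s %v: %v: %s", name, args, err, out)
	}
	return nil
}

func main() {
	// Probe the sysctl first; a failure usually just means br_netfilter is not loaded yet.
	if err := run("sudo", "sysctl", "net.bridge.bridge-nf-call-iptables"); err != nil {
		fmt.Println("sysctl probe failed, loading br_netfilter:", err)
		if err := run("sudo", "modprobe", "br_netfilter"); err != nil {
			fmt.Println("modprobe br_netfilter:", err)
		}
	}
	// Enable IPv4 forwarding, matching the "echo 1 > /proc/sys/net/ipv4/ip_forward" step above.
	if err := run("sudo", "sh", "-c", "echo 1 > /proc/sys/net/ipv4/ip_forward"); err != nil {
		fmt.Println("enable ip_forward:", err)
	}
}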
	I0731 18:08:22.597257   73479 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0731 18:08:22.716379   73479 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0731 18:08:22.855465   73479 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0731 18:08:22.855526   73479 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0731 18:08:22.860016   73479 start.go:563] Will wait 60s for crictl version
	I0731 18:08:22.860088   73479 ssh_runner.go:195] Run: which crictl
	I0731 18:08:22.863395   73479 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0731 18:08:22.904523   73479 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0731 18:08:22.904611   73479 ssh_runner.go:195] Run: crio --version
	I0731 18:08:22.934571   73479 ssh_runner.go:195] Run: crio --version
	I0731 18:08:22.965884   73479 out.go:177] * Preparing Kubernetes v1.31.0-beta.0 on CRI-O 1.29.1 ...
	I0731 18:08:19.771740   73800 pod_ready.go:102] pod "metrics-server-569cc877fc-fzxrw" in "kube-system" namespace has status "Ready":"False"
	I0731 18:08:22.272491   73800 pod_ready.go:102] pod "metrics-server-569cc877fc-fzxrw" in "kube-system" namespace has status "Ready":"False"
	I0731 18:08:20.478866   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:08:20.978311   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:08:21.478333   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:08:21.978289   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:08:22.479138   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:08:22.978406   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:08:23.478138   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:08:23.979189   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:08:24.478688   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:08:24.978795   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:08:22.779215   73696 pod_ready.go:102] pod "metrics-server-569cc877fc-64hp4" in "kube-system" namespace has status "Ready":"False"
	I0731 18:08:24.782366   73696 pod_ready.go:102] pod "metrics-server-569cc877fc-64hp4" in "kube-system" namespace has status "Ready":"False"
	I0731 18:08:22.967087   73479 main.go:141] libmachine: (no-preload-673754) Calling .GetIP
	I0731 18:08:22.969442   73479 main.go:141] libmachine: (no-preload-673754) DBG | domain no-preload-673754 has defined MAC address 52:54:00:5a:ec:78 in network mk-no-preload-673754
	I0731 18:08:22.969722   73479 main.go:141] libmachine: (no-preload-673754) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:ec:78", ip: ""} in network mk-no-preload-673754: {Iface:virbr2 ExpiryTime:2024-07-31 19:08:12 +0000 UTC Type:0 Mac:52:54:00:5a:ec:78 Iaid: IPaddr:192.168.61.126 Prefix:24 Hostname:no-preload-673754 Clientid:01:52:54:00:5a:ec:78}
	I0731 18:08:22.969746   73479 main.go:141] libmachine: (no-preload-673754) DBG | domain no-preload-673754 has defined IP address 192.168.61.126 and MAC address 52:54:00:5a:ec:78 in network mk-no-preload-673754
	I0731 18:08:22.970005   73479 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0731 18:08:22.974229   73479 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0731 18:08:22.986153   73479 kubeadm.go:883] updating cluster {Name:no-preload-673754 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1
.31.0-beta.0 ClusterName:no-preload-673754 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.126 Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:2628
0h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0731 18:08:22.986292   73479 preload.go:131] Checking if preload exists for k8s version v1.31.0-beta.0 and runtime crio
	I0731 18:08:22.986321   73479 ssh_runner.go:195] Run: sudo crictl images --output json
	I0731 18:08:23.020129   73479 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.0-beta.0". assuming images are not preloaded.
	I0731 18:08:23.020153   73479 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.31.0-beta.0 registry.k8s.io/kube-controller-manager:v1.31.0-beta.0 registry.k8s.io/kube-scheduler:v1.31.0-beta.0 registry.k8s.io/kube-proxy:v1.31.0-beta.0 registry.k8s.io/pause:3.10 registry.k8s.io/etcd:3.5.14-0 registry.k8s.io/coredns/coredns:v1.11.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0731 18:08:23.020215   73479 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0731 18:08:23.020234   73479 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.31.0-beta.0
	I0731 18:08:23.020266   73479 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.31.0-beta.0
	I0731 18:08:23.020322   73479 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.31.0-beta.0
	I0731 18:08:23.020337   73479 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.14-0
	I0731 18:08:23.020390   73479 image.go:134] retrieving image: registry.k8s.io/pause:3.10
	I0731 18:08:23.020431   73479 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.11.1
	I0731 18:08:23.020457   73479 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.31.0-beta.0
	I0731 18:08:23.021820   73479 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0731 18:08:23.021821   73479 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.14-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.14-0
	I0731 18:08:23.021820   73479 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.31.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.31.0-beta.0
	I0731 18:08:23.021901   73479 image.go:177] daemon lookup for registry.k8s.io/pause:3.10: Error response from daemon: No such image: registry.k8s.io/pause:3.10
	I0731 18:08:23.021978   73479 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.31.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.31.0-beta.0
	I0731 18:08:23.021833   73479 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.31.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.31.0-beta.0
	I0731 18:08:23.021826   73479 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.31.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.31.0-beta.0
	I0731 18:08:23.021821   73479 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.1
	I0731 18:08:23.254700   73479 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.31.0-beta.0
	I0731 18:08:23.268999   73479 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.11.1
	I0731 18:08:23.271466   73479 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.10
	I0731 18:08:23.272011   73479 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.31.0-beta.0
	I0731 18:08:23.275695   73479 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.31.0-beta.0
	I0731 18:08:23.298363   73479 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.31.0-beta.0
	I0731 18:08:23.320031   73479 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.14-0
	I0731 18:08:23.340960   73479 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.31.0-beta.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.31.0-beta.0" does not exist at hash "f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938" in container runtime
	I0731 18:08:23.341004   73479 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.31.0-beta.0
	I0731 18:08:23.341050   73479 ssh_runner.go:195] Run: which crictl
	I0731 18:08:23.381391   73479 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.11.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.11.1" does not exist at hash "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4" in container runtime
	I0731 18:08:23.381441   73479 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.11.1
	I0731 18:08:23.381511   73479 ssh_runner.go:195] Run: which crictl
	I0731 18:08:23.508590   73479 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.31.0-beta.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.31.0-beta.0" does not exist at hash "63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5" in container runtime
	I0731 18:08:23.508650   73479 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.31.0-beta.0
	I0731 18:08:23.508676   73479 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.31.0-beta.0" needs transfer: "registry.k8s.io/kube-proxy:v1.31.0-beta.0" does not exist at hash "c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899" in container runtime
	I0731 18:08:23.508702   73479 ssh_runner.go:195] Run: which crictl
	I0731 18:08:23.508716   73479 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.31.0-beta.0
	I0731 18:08:23.508729   73479 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.31.0-beta.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.31.0-beta.0" does not exist at hash "d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b" in container runtime
	I0731 18:08:23.508751   73479 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.31.0-beta.0
	I0731 18:08:23.508772   73479 ssh_runner.go:195] Run: which crictl
	I0731 18:08:23.508781   73479 ssh_runner.go:195] Run: which crictl
	I0731 18:08:23.508800   73479 cache_images.go:116] "registry.k8s.io/etcd:3.5.14-0" needs transfer: "registry.k8s.io/etcd:3.5.14-0" does not exist at hash "cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa" in container runtime
	I0731 18:08:23.508830   73479 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.14-0
	I0731 18:08:23.508838   73479 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.0-beta.0
	I0731 18:08:23.508860   73479 ssh_runner.go:195] Run: which crictl
	I0731 18:08:23.508879   73479 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1
	I0731 18:08:23.519809   73479 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.0-beta.0
	I0731 18:08:23.519834   73479 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.0-beta.0
	I0731 18:08:23.519907   73479 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.0-beta.0
	I0731 18:08:23.595474   73479 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.14-0
	I0731 18:08:23.595484   73479 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19349-8084/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.0-beta.0
	I0731 18:08:23.595590   73479 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19349-8084/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1
	I0731 18:08:23.595628   73479 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.31.0-beta.0
	I0731 18:08:23.595683   73479 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.11.1
	I0731 18:08:23.622893   73479 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19349-8084/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.0-beta.0
	I0731 18:08:23.623024   73479 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.31.0-beta.0
	I0731 18:08:23.629140   73479 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19349-8084/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.0-beta.0
	I0731 18:08:23.629173   73479 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19349-8084/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.0-beta.0
	I0731 18:08:23.629242   73479 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.31.0-beta.0
	I0731 18:08:23.629246   73479 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.31.0-beta.0
	I0731 18:08:23.659281   73479 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19349-8084/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.14-0
	I0731 18:08:23.659321   73479 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.31.0-beta.0 (exists)
	I0731 18:08:23.659336   73479 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.31.0-beta.0
	I0731 18:08:23.659379   73479 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.0-beta.0
	I0731 18:08:23.659385   73479 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.11.1 (exists)
	I0731 18:08:23.659425   73479 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.31.0-beta.0 (exists)
	I0731 18:08:23.659381   73479 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.14-0
	I0731 18:08:23.659465   73479 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.31.0-beta.0 (exists)
	I0731 18:08:23.659494   73479 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.31.0-beta.0 (exists)
	I0731 18:08:23.857129   73479 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0731 18:08:26.136212   73479 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.0-beta.0: (2.476802709s)
	I0731 18:08:26.136251   73479 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19349-8084/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.0-beta.0 from cache
	I0731 18:08:26.136264   73479 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.14-0: (2.476807388s)
	I0731 18:08:26.136276   73479 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.11.1
	I0731 18:08:26.136293   73479 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.14-0 (exists)
	I0731 18:08:26.136329   73479 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1
	I0731 18:08:26.136366   73479 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5: (2.279204335s)
	I0731 18:08:26.136423   73479 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I0731 18:08:26.136474   73479 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0731 18:08:26.136521   73479 ssh_runner.go:195] Run: which crictl
	I0731 18:08:24.770974   73800 pod_ready.go:102] pod "metrics-server-569cc877fc-fzxrw" in "kube-system" namespace has status "Ready":"False"
	I0731 18:08:26.771954   73800 pod_ready.go:102] pod "metrics-server-569cc877fc-fzxrw" in "kube-system" namespace has status "Ready":"False"
	I0731 18:08:29.274931   73800 pod_ready.go:102] pod "metrics-server-569cc877fc-fzxrw" in "kube-system" namespace has status "Ready":"False"
	I0731 18:08:25.478432   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:08:25.978823   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:08:26.478416   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:08:26.979075   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:08:27.478228   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:08:27.978970   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:08:28.478319   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:08:28.979028   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:08:29.479060   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:08:29.978544   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:08:27.278482   73696 pod_ready.go:102] pod "metrics-server-569cc877fc-64hp4" in "kube-system" namespace has status "Ready":"False"
	I0731 18:08:29.279820   73696 pod_ready.go:102] pod "metrics-server-569cc877fc-64hp4" in "kube-system" namespace has status "Ready":"False"
	I0731 18:08:27.993828   73479 ssh_runner.go:235] Completed: which crictl: (1.857279777s)
	I0731 18:08:27.993908   73479 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0731 18:08:27.993918   73479 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1: (1.857561411s)
	I0731 18:08:27.993947   73479 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19349-8084/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1 from cache
	I0731 18:08:27.993981   73479 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.31.0-beta.0
	I0731 18:08:27.994029   73479 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.0-beta.0
	I0731 18:08:28.037163   73479 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19349-8084/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0731 18:08:28.037288   73479 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0731 18:08:29.880343   73479 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: (1.843037657s)
	I0731 18:08:29.880392   73479 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I0731 18:08:29.880339   73479 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.0-beta.0: (1.886261639s)
	I0731 18:08:29.880412   73479 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19349-8084/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.0-beta.0 from cache
	I0731 18:08:29.880442   73479 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.31.0-beta.0
	I0731 18:08:29.880509   73479 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.0-beta.0
	I0731 18:08:31.229448   73479 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.0-beta.0: (1.348909634s)
	I0731 18:08:31.229478   73479 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19349-8084/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.0-beta.0 from cache
	I0731 18:08:31.229512   73479 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.31.0-beta.0
	I0731 18:08:31.229575   73479 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.0-beta.0
	I0731 18:08:31.771695   73800 pod_ready.go:102] pod "metrics-server-569cc877fc-fzxrw" in "kube-system" namespace has status "Ready":"False"
	I0731 18:08:34.271817   73800 pod_ready.go:102] pod "metrics-server-569cc877fc-fzxrw" in "kube-system" namespace has status "Ready":"False"
	I0731 18:08:30.478387   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:08:30.978443   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:08:31.478484   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:08:31.979231   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:08:32.478928   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:08:32.978790   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:08:33.478426   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:08:33.978839   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:08:34.478411   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:08:34.978378   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:08:31.280261   73696 pod_ready.go:102] pod "metrics-server-569cc877fc-64hp4" in "kube-system" namespace has status "Ready":"False"
	I0731 18:08:33.780411   73696 pod_ready.go:102] pod "metrics-server-569cc877fc-64hp4" in "kube-system" namespace has status "Ready":"False"
	I0731 18:08:35.783181   73696 pod_ready.go:102] pod "metrics-server-569cc877fc-64hp4" in "kube-system" namespace has status "Ready":"False"
	I0731 18:08:33.084098   73479 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.0-beta.0: (1.854499641s)
	I0731 18:08:33.084136   73479 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19349-8084/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.0-beta.0 from cache
	I0731 18:08:33.084175   73479 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.5.14-0
	I0731 18:08:33.084255   73479 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.14-0
	I0731 18:08:36.378466   73479 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.14-0: (3.294181026s)
	I0731 18:08:36.378501   73479 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19349-8084/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.14-0 from cache
	I0731 18:08:36.378530   73479 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0731 18:08:36.378575   73479 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I0731 18:08:36.772963   73800 pod_ready.go:102] pod "metrics-server-569cc877fc-fzxrw" in "kube-system" namespace has status "Ready":"False"
	I0731 18:08:39.270915   73800 pod_ready.go:102] pod "metrics-server-569cc877fc-fzxrw" in "kube-system" namespace has status "Ready":"False"
	I0731 18:08:35.478287   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:08:35.978546   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:08:36.479138   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:08:36.979173   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:08:37.478319   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:08:37.978768   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:08:38.479161   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:08:38.979129   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:08:39.478128   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:08:39.979147   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:08:38.278970   73696 pod_ready.go:102] pod "metrics-server-569cc877fc-64hp4" in "kube-system" namespace has status "Ready":"False"
	I0731 18:08:40.279298   73696 pod_ready.go:102] pod "metrics-server-569cc877fc-64hp4" in "kube-system" namespace has status "Ready":"False"
	I0731 18:08:37.022757   73479 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19349-8084/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0731 18:08:37.022807   73479 cache_images.go:123] Successfully loaded all cached images
	I0731 18:08:37.022815   73479 cache_images.go:92] duration metric: took 14.002647196s to LoadCachedImages
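The LoadCachedImages sequence above boils down to: ask the runtime whether each required image already exists, and if not, load the cached tarball with "podman load". A sketch under those assumptions; runCmd and the tarball naming are illustrative stand-ins, not minikube's API:

package main

import (
	"fmt"
	"os/exec"
	"path/filepath"
	"strings"
)

// runCmd is a stand-in for minikube's ssh_runner; here it runs locally.
func runCmd(cmd string) (string, error) {
	out, err := exec.Command("sh", "-c", cmd).CombinedOutput()
	return strings.TrimSpace(string(out)), err
}

// ensureImage loads a cached image tarball unless the runtime already has the image.
func ensureImage(image, cacheDir string) error {
	if id, err := runCmd("sudo podman image inspect --format {{.Id}} " + image); err == nil && id != "" {
		return nil // image already present in the container runtime
	}
	// Derive the tarball name the way the paths in the log suggest,
	// e.g. registry.k8s.io/etcd:3.5.14-0 -> etcd_3.5.14-0.
	tarball := filepath.Join(cacheDir, strings.ReplaceAll(filepath.Base(image), ":", "_"))
	if out, err := runCmd("sudo podman load -i " + tarball); err != nil {
		return fmt.Errorf("loading %s from %s: %v: %s", image, tarball, err, out)
	}
	return nil
}

func main() {
	for _, img := range []string{"registry.k8s.io/etcd:3.5.14-0", "gcr.io/k8s-minikube/storage-provisioner:v5"} {
		if err := ensureImage(img, "/var/lib/minikube/images"); err != nil {
			fmt.Println(err)
		}
	}
}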
	I0731 18:08:37.022829   73479 kubeadm.go:934] updating node { 192.168.61.126 8443 v1.31.0-beta.0 crio true true} ...
	I0731 18:08:37.022954   73479 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-673754 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.126
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0-beta.0 ClusterName:no-preload-673754 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0731 18:08:37.023035   73479 ssh_runner.go:195] Run: crio config
	I0731 18:08:37.064803   73479 cni.go:84] Creating CNI manager for ""
	I0731 18:08:37.064825   73479 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0731 18:08:37.064834   73479 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0731 18:08:37.064856   73479 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.126 APIServerPort:8443 KubernetesVersion:v1.31.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-673754 NodeName:no-preload-673754 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.126"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.126 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt Stati
cPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0731 18:08:37.065028   73479 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.126
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-673754"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.126
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.126"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0731 18:08:37.065108   73479 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0-beta.0
	I0731 18:08:37.077141   73479 binaries.go:44] Found k8s binaries, skipping transfer
	I0731 18:08:37.077215   73479 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0731 18:08:37.086553   73479 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (324 bytes)
	I0731 18:08:37.102646   73479 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I0731 18:08:37.118113   73479 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2168 bytes)
	I0731 18:08:37.134702   73479 ssh_runner.go:195] Run: grep 192.168.61.126	control-plane.minikube.internal$ /etc/hosts
	I0731 18:08:37.138593   73479 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.126	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0731 18:08:37.151319   73479 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0731 18:08:37.270019   73479 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0731 18:08:37.287378   73479 certs.go:68] Setting up /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/no-preload-673754 for IP: 192.168.61.126
	I0731 18:08:37.287400   73479 certs.go:194] generating shared ca certs ...
	I0731 18:08:37.287413   73479 certs.go:226] acquiring lock for ca certs: {Name:mk49eed439085f9c30746706460ce213570d1997 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 18:08:37.287540   73479 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19349-8084/.minikube/ca.key
	I0731 18:08:37.287577   73479 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19349-8084/.minikube/proxy-client-ca.key
	I0731 18:08:37.287584   73479 certs.go:256] generating profile certs ...
	I0731 18:08:37.287692   73479 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/no-preload-673754/client.key
	I0731 18:08:37.287761   73479 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/no-preload-673754/apiserver.key.3fff3ffc
	I0731 18:08:37.287803   73479 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/no-preload-673754/proxy-client.key
	I0731 18:08:37.287938   73479 certs.go:484] found cert: /home/jenkins/minikube-integration/19349-8084/.minikube/certs/15259.pem (1338 bytes)
	W0731 18:08:37.287973   73479 certs.go:480] ignoring /home/jenkins/minikube-integration/19349-8084/.minikube/certs/15259_empty.pem, impossibly tiny 0 bytes
	I0731 18:08:37.287985   73479 certs.go:484] found cert: /home/jenkins/minikube-integration/19349-8084/.minikube/certs/ca-key.pem (1675 bytes)
	I0731 18:08:37.288020   73479 certs.go:484] found cert: /home/jenkins/minikube-integration/19349-8084/.minikube/certs/ca.pem (1082 bytes)
	I0731 18:08:37.288049   73479 certs.go:484] found cert: /home/jenkins/minikube-integration/19349-8084/.minikube/certs/cert.pem (1123 bytes)
	I0731 18:08:37.288079   73479 certs.go:484] found cert: /home/jenkins/minikube-integration/19349-8084/.minikube/certs/key.pem (1675 bytes)
	I0731 18:08:37.288143   73479 certs.go:484] found cert: /home/jenkins/minikube-integration/19349-8084/.minikube/files/etc/ssl/certs/152592.pem (1708 bytes)
	I0731 18:08:37.288831   73479 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19349-8084/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0731 18:08:37.334317   73479 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19349-8084/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0731 18:08:37.370553   73479 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19349-8084/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0731 18:08:37.403436   73479 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19349-8084/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0731 18:08:37.449133   73479 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/no-preload-673754/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0731 18:08:37.486169   73479 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/no-preload-673754/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0731 18:08:37.517241   73479 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/no-preload-673754/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0731 18:08:37.541089   73479 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/no-preload-673754/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0731 18:08:37.563068   73479 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19349-8084/.minikube/certs/15259.pem --> /usr/share/ca-certificates/15259.pem (1338 bytes)
	I0731 18:08:37.585396   73479 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19349-8084/.minikube/files/etc/ssl/certs/152592.pem --> /usr/share/ca-certificates/152592.pem (1708 bytes)
	I0731 18:08:37.608142   73479 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19349-8084/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0731 18:08:37.630178   73479 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0731 18:08:37.645994   73479 ssh_runner.go:195] Run: openssl version
	I0731 18:08:37.651663   73479 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/15259.pem && ln -fs /usr/share/ca-certificates/15259.pem /etc/ssl/certs/15259.pem"
	I0731 18:08:37.661494   73479 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/15259.pem
	I0731 18:08:37.665519   73479 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 31 16:54 /usr/share/ca-certificates/15259.pem
	I0731 18:08:37.665575   73479 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/15259.pem
	I0731 18:08:37.671143   73479 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/15259.pem /etc/ssl/certs/51391683.0"
	I0731 18:08:37.681076   73479 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/152592.pem && ln -fs /usr/share/ca-certificates/152592.pem /etc/ssl/certs/152592.pem"
	I0731 18:08:37.692253   73479 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/152592.pem
	I0731 18:08:37.696802   73479 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 31 16:54 /usr/share/ca-certificates/152592.pem
	I0731 18:08:37.696850   73479 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/152592.pem
	I0731 18:08:37.702282   73479 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/152592.pem /etc/ssl/certs/3ec20f2e.0"
	I0731 18:08:37.713051   73479 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0731 18:08:37.723644   73479 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0731 18:08:37.728170   73479 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 31 16:42 /usr/share/ca-certificates/minikubeCA.pem
	I0731 18:08:37.728225   73479 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0731 18:08:37.733912   73479 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
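
Note on the symlink names above (51391683.0, 3ec20f2e.0, b5213941.0): they follow OpenSSL's subject-hash convention, where each CA certificate under /etc/ssl/certs is also reachable through a link named <subject-hash>.0. A minimal Go sketch of the same steps the log performs, shelling out to openssl just as the commands above do (paths are illustrative; minikube runs the equivalent shell over SSH):

// installCACert copies the pattern in the log: compute the OpenSSL subject
// hash of a CA PEM and expose it as /etc/ssl/certs/<hash>.0.
package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

func installCACert(pemPath string) error {
	// "openssl x509 -hash -noout -in <pem>" prints the subject hash, e.g. "b5213941".
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
	if err != nil {
		return fmt.Errorf("hashing %s: %w", pemPath, err)
	}
	hash := strings.TrimSpace(string(out))

	// Link /etc/ssl/certs/<hash>.0 -> the PEM so TLS clients scanning the
	// hashed directory can find the CA.
	link := fmt.Sprintf("/etc/ssl/certs/%s.0", hash)
	if _, err := os.Lstat(link); err == nil {
		return nil // link already present, as tested by "test -L ..." above
	}
	return os.Symlink(pemPath, link)
}

func main() {
	if err := installCACert("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}
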
	I0731 18:08:37.744004   73479 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0731 18:08:37.748076   73479 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0731 18:08:37.753645   73479 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0731 18:08:37.759077   73479 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0731 18:08:37.764344   73479 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0731 18:08:37.769735   73479 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0731 18:08:37.775894   73479 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
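
The -checkend 86400 invocations above ask openssl whether each control-plane certificate expires within the next 24 hours (86400 seconds). A hedged standard-library equivalent in Go, for orientation only (the path is one of the files checked above):

// certExpiresWithin reports whether the first certificate in pemPath expires
// within d, which is what "openssl x509 -checkend 86400" checks for d = 24h.
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

func certExpiresWithin(pemPath string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(pemPath)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM block in %s", pemPath)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	expiring, err := certExpiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}
	fmt.Println("expires within 24h:", expiring)
}
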
	I0731 18:08:37.781699   73479 kubeadm.go:392] StartCluster: {Name:no-preload-673754 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0-beta.0 ClusterName:no-preload-673754 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.126 Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0731 18:08:37.781771   73479 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0731 18:08:37.781833   73479 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0731 18:08:37.825614   73479 cri.go:89] found id: ""
	I0731 18:08:37.825685   73479 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0731 18:08:37.835584   73479 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0731 18:08:37.835604   73479 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0731 18:08:37.835659   73479 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0731 18:08:37.844529   73479 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0731 18:08:37.845534   73479 kubeconfig.go:125] found "no-preload-673754" server: "https://192.168.61.126:8443"
	I0731 18:08:37.847698   73479 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0731 18:08:37.856360   73479 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.61.126
	I0731 18:08:37.856386   73479 kubeadm.go:1160] stopping kube-system containers ...
	I0731 18:08:37.856396   73479 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0731 18:08:37.856440   73479 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0731 18:08:37.894614   73479 cri.go:89] found id: ""
	I0731 18:08:37.894689   73479 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0731 18:08:37.910921   73479 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0731 18:08:37.919796   73479 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0731 18:08:37.919814   73479 kubeadm.go:157] found existing configuration files:
	
	I0731 18:08:37.919859   73479 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0731 18:08:37.928562   73479 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0731 18:08:37.928617   73479 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0731 18:08:37.937099   73479 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0731 18:08:37.945298   73479 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0731 18:08:37.945378   73479 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0731 18:08:37.953976   73479 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0731 18:08:37.962069   73479 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0731 18:08:37.962119   73479 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0731 18:08:37.970719   73479 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0731 18:08:37.979265   73479 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0731 18:08:37.979318   73479 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
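
The grep-then-rm sequence above is the stale-config cleanup: each kubeconfig under /etc/kubernetes must reference https://control-plane.minikube.internal:8443, and any file that does not (or, as here, does not exist at all) is removed so the kubeadm phases that follow regenerate it. A small illustrative check in Go, mirroring the grep (minikube runs the grep over SSH rather than reading the file locally):

// hasControlPlaneEndpoint reports whether the kubeconfig at path references
// the expected control-plane endpoint, like the "sudo grep ..." commands above.
package main

import (
	"fmt"
	"os"
	"strings"
)

const endpoint = "https://control-plane.minikube.internal:8443"

func hasControlPlaneEndpoint(path string) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err // e.g. "No such file or directory", as in the log
	}
	return strings.Contains(string(data), endpoint), nil
}

func main() {
	for _, conf := range []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	} {
		ok, err := hasControlPlaneEndpoint(conf)
		fmt.Printf("%s: ok=%v err=%v\n", conf, ok, err)
	}
}
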
	I0731 18:08:37.988286   73479 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0731 18:08:37.997742   73479 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0731 18:08:38.105503   73479 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0731 18:08:39.403672   73479 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.298131314s)
	I0731 18:08:39.403710   73479 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0731 18:08:39.609739   73479 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0731 18:08:39.677484   73479 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
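
Rather than a full "kubeadm init", the restart path re-runs the individual init phases one at a time (certs, kubeconfig, kubelet-start, control-plane, etcd), each against the freshly copied /var/tmp/minikube/kubeadm.yaml. A sketch of the same loop, shelling out as the log does (binary path and phase names copied from the commands above; error handling simplified, and minikube actually runs these over SSH with sudo):

// runKubeadmPhases re-runs the kubeadm init phases seen in the log, in order.
package main

import (
	"fmt"
	"os"
	"os/exec"
)

func runKubeadmPhases() error {
	phases := [][]string{
		{"init", "phase", "certs", "all"},
		{"init", "phase", "kubeconfig", "all"},
		{"init", "phase", "kubelet-start"},
		{"init", "phase", "control-plane", "all"},
		{"init", "phase", "etcd", "local"},
	}
	for _, phase := range phases {
		args := append(phase, "--config", "/var/tmp/minikube/kubeadm.yaml")
		cmd := exec.Command("/var/lib/minikube/binaries/v1.31.0-beta.0/kubeadm", args...)
		cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
		if err := cmd.Run(); err != nil {
			return fmt.Errorf("kubeadm %v: %w", phase, err)
		}
	}
	return nil
}

func main() {
	if err := runKubeadmPhases(); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}
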
	I0731 18:08:39.773387   73479 api_server.go:52] waiting for apiserver process to appear ...
	I0731 18:08:39.773469   73479 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:08:40.274185   73479 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:08:40.774562   73479 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:08:40.792346   73479 api_server.go:72] duration metric: took 1.018961231s to wait for apiserver process to appear ...
	I0731 18:08:40.792368   73479 api_server.go:88] waiting for apiserver healthz status ...
	I0731 18:08:40.792384   73479 api_server.go:253] Checking apiserver healthz at https://192.168.61.126:8443/healthz ...
	I0731 18:08:41.271890   73800 pod_ready.go:102] pod "metrics-server-569cc877fc-fzxrw" in "kube-system" namespace has status "Ready":"False"
	I0731 18:08:43.771546   73800 pod_ready.go:102] pod "metrics-server-569cc877fc-fzxrw" in "kube-system" namespace has status "Ready":"False"
	I0731 18:08:43.476911   73479 api_server.go:279] https://192.168.61.126:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0731 18:08:43.476938   73479 api_server.go:103] status: https://192.168.61.126:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0731 18:08:43.476952   73479 api_server.go:253] Checking apiserver healthz at https://192.168.61.126:8443/healthz ...
	I0731 18:08:43.536762   73479 api_server.go:279] https://192.168.61.126:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0731 18:08:43.536794   73479 api_server.go:103] status: https://192.168.61.126:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0731 18:08:43.793157   73479 api_server.go:253] Checking apiserver healthz at https://192.168.61.126:8443/healthz ...
	I0731 18:08:43.798895   73479 api_server.go:279] https://192.168.61.126:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0731 18:08:43.798924   73479 api_server.go:103] status: https://192.168.61.126:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0731 18:08:44.292527   73479 api_server.go:253] Checking apiserver healthz at https://192.168.61.126:8443/healthz ...
	I0731 18:08:44.300596   73479 api_server.go:279] https://192.168.61.126:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0731 18:08:44.300632   73479 api_server.go:103] status: https://192.168.61.126:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0731 18:08:44.793206   73479 api_server.go:253] Checking apiserver healthz at https://192.168.61.126:8443/healthz ...
	I0731 18:08:44.797982   73479 api_server.go:279] https://192.168.61.126:8443/healthz returned 200:
	ok
	I0731 18:08:44.806150   73479 api_server.go:141] control plane version: v1.31.0-beta.0
	I0731 18:08:44.806172   73479 api_server.go:131] duration metric: took 4.013797537s to wait for apiserver health ...
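
The /healthz progression above is typical of an apiserver restart: 403 while anonymous requests are rejected because the RBAC bootstrap roles that normally allow unauthenticated /healthz access have not been recreated yet, 500 while the rbac/bootstrap-roles and scheduling/bootstrap-system-priority-classes poststarthooks are still pending, then 200 once every hook reports ok. A minimal polling sketch (InsecureSkipVerify is used only to keep the example self-contained; the real check verifies against the cluster CA):

// waitForHealthz polls the apiserver /healthz endpoint until it returns 200,
// mirroring the 403 -> 500 -> 200 progression in the log.
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
			fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, string(body))
		}
		time.Sleep(500 * time.Millisecond) // the log polls roughly every 500ms
	}
	return fmt.Errorf("apiserver did not become healthy within %s", timeout)
}

func main() {
	if err := waitForHealthz("https://192.168.61.126:8443/healthz", 4*time.Minute); err != nil {
		fmt.Println(err)
	}
}
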
	I0731 18:08:44.806183   73479 cni.go:84] Creating CNI manager for ""
	I0731 18:08:44.806191   73479 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0731 18:08:44.807774   73479 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0731 18:08:40.478967   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:08:40.978610   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:08:41.479192   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:08:41.978737   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:08:42.479051   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:08:42.978274   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:08:43.478957   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:08:43.978973   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:08:44.478269   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:08:44.978737   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:08:42.778330   73696 pod_ready.go:102] pod "metrics-server-569cc877fc-64hp4" in "kube-system" namespace has status "Ready":"False"
	I0731 18:08:44.779163   73696 pod_ready.go:102] pod "metrics-server-569cc877fc-64hp4" in "kube-system" namespace has status "Ready":"False"
	I0731 18:08:44.809068   73479 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0731 18:08:44.823284   73479 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
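
The 496-byte file written to /etc/cni/net.d/1-k8s.conflist is the bridge CNI configuration mentioned two lines earlier. Its exact contents are not reproduced in the log; the snippet below is only a generic example of the shape a bridge conflist takes (field values are illustrative, not minikube's rendered file):

// Writes a generic bridge CNI conflist; shown for orientation only.
package main

import (
	"fmt"
	"os"
)

const conflist = `{
  "cniVersion": "0.4.0",
  "name": "bridge",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "bridge",
      "isDefaultGateway": true,
      "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
    },
    { "type": "portmap", "capabilities": { "portMappings": true } }
  ]
}
`

func main() {
	// minikube scp's its own rendered conflist to this path over SSH.
	if err := os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(conflist), 0o644); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}
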
	I0731 18:08:44.878894   73479 system_pods.go:43] waiting for kube-system pods to appear ...
	I0731 18:08:44.892969   73479 system_pods.go:59] 8 kube-system pods found
	I0731 18:08:44.893020   73479 system_pods.go:61] "coredns-5cfdc65f69-k7clq" [e4be77b6-aa7c-45e2-90a1-6a8264fd5101] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0731 18:08:44.893031   73479 system_pods.go:61] "etcd-no-preload-673754" [d64e9194-dd33-4a28-b8c3-d25462f1dae8] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0731 18:08:44.893042   73479 system_pods.go:61] "kube-apiserver-no-preload-673754" [4497614d-51de-4a5f-93dc-1397446fe4c8] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0731 18:08:44.893055   73479 system_pods.go:61] "kube-controller-manager-no-preload-673754" [fe7f714e-d778-49e7-b4ef-44e6efb5bc33] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0731 18:08:44.893067   73479 system_pods.go:61] "kube-proxy-hqxh6" [4fb623c0-4be3-421c-85d6-1d76a90b874f] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0731 18:08:44.893078   73479 system_pods.go:61] "kube-scheduler-no-preload-673754" [558e9b78-a6a9-46b8-9db8-004126359c74] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0731 18:08:44.893088   73479 system_pods.go:61] "metrics-server-78fcd8795b-27pkr" [a63a156b-0446-4bd9-8619-de75edaeb481] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0731 18:08:44.893098   73479 system_pods.go:61] "storage-provisioner" [a9f0ab70-18f4-4fac-b858-a9177077fe29] Running
	I0731 18:08:44.893109   73479 system_pods.go:74] duration metric: took 14.191984ms to wait for pod list to return data ...
	I0731 18:08:44.893120   73479 node_conditions.go:102] verifying NodePressure condition ...
	I0731 18:08:44.908236   73479 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0731 18:08:44.908270   73479 node_conditions.go:123] node cpu capacity is 2
	I0731 18:08:44.908283   73479 node_conditions.go:105] duration metric: took 15.154491ms to run NodePressure ...
	I0731 18:08:44.908307   73479 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0731 18:08:45.248571   73479 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0731 18:08:45.252305   73479 kubeadm.go:739] kubelet initialised
	I0731 18:08:45.252332   73479 kubeadm.go:740] duration metric: took 3.734022ms waiting for restarted kubelet to initialise ...
	I0731 18:08:45.252342   73479 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0731 18:08:45.256748   73479 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5cfdc65f69-k7clq" in "kube-system" namespace to be "Ready" ...
	I0731 18:08:45.261130   73479 pod_ready.go:97] node "no-preload-673754" hosting pod "coredns-5cfdc65f69-k7clq" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-673754" has status "Ready":"False"
	I0731 18:08:45.261149   73479 pod_ready.go:81] duration metric: took 4.373068ms for pod "coredns-5cfdc65f69-k7clq" in "kube-system" namespace to be "Ready" ...
	E0731 18:08:45.261157   73479 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-673754" hosting pod "coredns-5cfdc65f69-k7clq" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-673754" has status "Ready":"False"
	I0731 18:08:45.261162   73479 pod_ready.go:78] waiting up to 4m0s for pod "etcd-no-preload-673754" in "kube-system" namespace to be "Ready" ...
	I0731 18:08:45.265115   73479 pod_ready.go:97] node "no-preload-673754" hosting pod "etcd-no-preload-673754" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-673754" has status "Ready":"False"
	I0731 18:08:45.265135   73479 pod_ready.go:81] duration metric: took 3.965586ms for pod "etcd-no-preload-673754" in "kube-system" namespace to be "Ready" ...
	E0731 18:08:45.265142   73479 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-673754" hosting pod "etcd-no-preload-673754" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-673754" has status "Ready":"False"
	I0731 18:08:45.265147   73479 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-no-preload-673754" in "kube-system" namespace to be "Ready" ...
	I0731 18:08:45.269566   73479 pod_ready.go:97] node "no-preload-673754" hosting pod "kube-apiserver-no-preload-673754" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-673754" has status "Ready":"False"
	I0731 18:08:45.269585   73479 pod_ready.go:81] duration metric: took 4.431367ms for pod "kube-apiserver-no-preload-673754" in "kube-system" namespace to be "Ready" ...
	E0731 18:08:45.269595   73479 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-673754" hosting pod "kube-apiserver-no-preload-673754" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-673754" has status "Ready":"False"
	I0731 18:08:45.269603   73479 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-no-preload-673754" in "kube-system" namespace to be "Ready" ...
	I0731 18:08:45.281026   73479 pod_ready.go:97] node "no-preload-673754" hosting pod "kube-controller-manager-no-preload-673754" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-673754" has status "Ready":"False"
	I0731 18:08:45.281048   73479 pod_ready.go:81] duration metric: took 11.435327ms for pod "kube-controller-manager-no-preload-673754" in "kube-system" namespace to be "Ready" ...
	E0731 18:08:45.281057   73479 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-673754" hosting pod "kube-controller-manager-no-preload-673754" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-673754" has status "Ready":"False"
	I0731 18:08:45.281065   73479 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-hqxh6" in "kube-system" namespace to be "Ready" ...
	I0731 18:08:45.684313   73479 pod_ready.go:97] node "no-preload-673754" hosting pod "kube-proxy-hqxh6" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-673754" has status "Ready":"False"
	I0731 18:08:45.684347   73479 pod_ready.go:81] duration metric: took 403.272559ms for pod "kube-proxy-hqxh6" in "kube-system" namespace to be "Ready" ...
	E0731 18:08:45.684356   73479 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-673754" hosting pod "kube-proxy-hqxh6" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-673754" has status "Ready":"False"
	I0731 18:08:45.684362   73479 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-no-preload-673754" in "kube-system" namespace to be "Ready" ...
	I0731 18:08:46.082388   73479 pod_ready.go:97] node "no-preload-673754" hosting pod "kube-scheduler-no-preload-673754" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-673754" has status "Ready":"False"
	I0731 18:08:46.082419   73479 pod_ready.go:81] duration metric: took 398.048808ms for pod "kube-scheduler-no-preload-673754" in "kube-system" namespace to be "Ready" ...
	E0731 18:08:46.082432   73479 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-673754" hosting pod "kube-scheduler-no-preload-673754" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-673754" has status "Ready":"False"
	I0731 18:08:46.082442   73479 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-78fcd8795b-27pkr" in "kube-system" namespace to be "Ready" ...
	I0731 18:08:46.482445   73479 pod_ready.go:97] node "no-preload-673754" hosting pod "metrics-server-78fcd8795b-27pkr" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-673754" has status "Ready":"False"
	I0731 18:08:46.482472   73479 pod_ready.go:81] duration metric: took 400.02111ms for pod "metrics-server-78fcd8795b-27pkr" in "kube-system" namespace to be "Ready" ...
	E0731 18:08:46.482486   73479 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-673754" hosting pod "metrics-server-78fcd8795b-27pkr" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-673754" has status "Ready":"False"
	I0731 18:08:46.482493   73479 pod_ready.go:38] duration metric: took 1.230141723s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
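
Each per-pod wait above bails out immediately (the E lines) because the node itself still reports Ready:"False"; the waits succeed later once the node becomes Ready, as seen further down in the log. The underlying checks are just the Ready conditions on the node and pod objects. A hedged client-go sketch of those checks (library-style; clientset construction omitted, and the function names here are ours, not minikube's):

// Package readiness shows the checks behind the node_ready/pod_ready log lines:
// a node is Ready when its NodeReady condition is True, a pod when its
// PodReady condition is True.
package readiness

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

func nodeReady(ctx context.Context, c kubernetes.Interface, name string) (bool, error) {
	node, err := c.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
	if err != nil {
		return false, err
	}
	for _, cond := range node.Status.Conditions {
		if cond.Type == corev1.NodeReady {
			return cond.Status == corev1.ConditionTrue, nil
		}
	}
	return false, nil
}

func podReady(ctx context.Context, c kubernetes.Interface, ns, name string) (bool, error) {
	pod, err := c.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
	if err != nil {
		return false, err
	}
	for _, cond := range pod.Status.Conditions {
		if cond.Type == corev1.PodReady {
			return cond.Status == corev1.ConditionTrue, nil
		}
	}
	return false, nil
}
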
	I0731 18:08:46.482509   73479 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0731 18:08:46.495481   73479 ops.go:34] apiserver oom_adj: -16
	I0731 18:08:46.495502   73479 kubeadm.go:597] duration metric: took 8.65989212s to restartPrimaryControlPlane
	I0731 18:08:46.495513   73479 kubeadm.go:394] duration metric: took 8.71382049s to StartCluster
	I0731 18:08:46.495533   73479 settings.go:142] acquiring lock: {Name:mk8ac8269296bf0b887a00ff74caaad180005f89 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 18:08:46.495615   73479 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19349-8084/kubeconfig
	I0731 18:08:46.497426   73479 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19349-8084/kubeconfig: {Name:mkf2340bc1ada3c683f132aca37228d7588e1fdb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 18:08:46.497742   73479 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.61.126 Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0731 18:08:46.497816   73479 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0731 18:08:46.497911   73479 addons.go:69] Setting storage-provisioner=true in profile "no-preload-673754"
	I0731 18:08:46.497929   73479 addons.go:69] Setting default-storageclass=true in profile "no-preload-673754"
	I0731 18:08:46.497956   73479 addons.go:69] Setting metrics-server=true in profile "no-preload-673754"
	I0731 18:08:46.497973   73479 config.go:182] Loaded profile config "no-preload-673754": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0-beta.0
	I0731 18:08:46.497979   73479 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-673754"
	I0731 18:08:46.497988   73479 addons.go:234] Setting addon metrics-server=true in "no-preload-673754"
	W0731 18:08:46.498008   73479 addons.go:243] addon metrics-server should already be in state true
	I0731 18:08:46.497946   73479 addons.go:234] Setting addon storage-provisioner=true in "no-preload-673754"
	I0731 18:08:46.498056   73479 host.go:66] Checking if "no-preload-673754" exists ...
	W0731 18:08:46.498064   73479 addons.go:243] addon storage-provisioner should already be in state true
	I0731 18:08:46.498109   73479 host.go:66] Checking if "no-preload-673754" exists ...
	I0731 18:08:46.498315   73479 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 18:08:46.498315   73479 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 18:08:46.498333   73479 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 18:08:46.498340   73479 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 18:08:46.498448   73479 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 18:08:46.498470   73479 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 18:08:46.501144   73479 out.go:177] * Verifying Kubernetes components...
	I0731 18:08:46.502755   73479 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0731 18:08:46.514922   73479 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46797
	I0731 18:08:46.514923   73479 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44241
	I0731 18:08:46.515418   73479 main.go:141] libmachine: () Calling .GetVersion
	I0731 18:08:46.515618   73479 main.go:141] libmachine: () Calling .GetVersion
	I0731 18:08:46.515928   73479 main.go:141] libmachine: Using API Version  1
	I0731 18:08:46.515950   73479 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 18:08:46.516066   73479 main.go:141] libmachine: Using API Version  1
	I0731 18:08:46.516089   73479 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 18:08:46.516370   73479 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40219
	I0731 18:08:46.516440   73479 main.go:141] libmachine: () Calling .GetMachineName
	I0731 18:08:46.516663   73479 main.go:141] libmachine: () Calling .GetMachineName
	I0731 18:08:46.516809   73479 main.go:141] libmachine: () Calling .GetVersion
	I0731 18:08:46.516811   73479 main.go:141] libmachine: (no-preload-673754) Calling .GetState
	I0731 18:08:46.517213   73479 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 18:08:46.517247   73479 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 18:08:46.517280   73479 main.go:141] libmachine: Using API Version  1
	I0731 18:08:46.517302   73479 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 18:08:46.517618   73479 main.go:141] libmachine: () Calling .GetMachineName
	I0731 18:08:46.518191   73479 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 18:08:46.518220   73479 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 18:08:46.520511   73479 addons.go:234] Setting addon default-storageclass=true in "no-preload-673754"
	W0731 18:08:46.520536   73479 addons.go:243] addon default-storageclass should already be in state true
	I0731 18:08:46.520566   73479 host.go:66] Checking if "no-preload-673754" exists ...
	I0731 18:08:46.520917   73479 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 18:08:46.520968   73479 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 18:08:46.533349   73479 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38669
	I0731 18:08:46.533802   73479 main.go:141] libmachine: () Calling .GetVersion
	I0731 18:08:46.534250   73479 main.go:141] libmachine: Using API Version  1
	I0731 18:08:46.534272   73479 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 18:08:46.534582   73479 main.go:141] libmachine: () Calling .GetMachineName
	I0731 18:08:46.534720   73479 main.go:141] libmachine: (no-preload-673754) Calling .GetState
	I0731 18:08:46.535556   73479 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46055
	I0731 18:08:46.535979   73479 main.go:141] libmachine: () Calling .GetVersion
	I0731 18:08:46.536648   73479 main.go:141] libmachine: Using API Version  1
	I0731 18:08:46.536667   73479 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 18:08:46.537080   73479 main.go:141] libmachine: () Calling .GetMachineName
	I0731 18:08:46.537331   73479 main.go:141] libmachine: (no-preload-673754) Calling .DriverName
	I0731 18:08:46.537398   73479 main.go:141] libmachine: (no-preload-673754) Calling .GetState
	I0731 18:08:46.538365   73479 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39435
	I0731 18:08:46.538929   73479 main.go:141] libmachine: () Calling .GetVersion
	I0731 18:08:46.539194   73479 main.go:141] libmachine: (no-preload-673754) Calling .DriverName
	I0731 18:08:46.539401   73479 main.go:141] libmachine: Using API Version  1
	I0731 18:08:46.539419   73479 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 18:08:46.539766   73479 main.go:141] libmachine: () Calling .GetMachineName
	I0731 18:08:46.540360   73479 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0731 18:08:46.540447   73479 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 18:08:46.540801   73479 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 18:08:46.541139   73479 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0731 18:08:46.541916   73479 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0731 18:08:46.541932   73479 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0731 18:08:46.541952   73479 main.go:141] libmachine: (no-preload-673754) Calling .GetSSHHostname
	I0731 18:08:46.542506   73479 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0731 18:08:46.542524   73479 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0731 18:08:46.542541   73479 main.go:141] libmachine: (no-preload-673754) Calling .GetSSHHostname
	I0731 18:08:46.545293   73479 main.go:141] libmachine: (no-preload-673754) DBG | domain no-preload-673754 has defined MAC address 52:54:00:5a:ec:78 in network mk-no-preload-673754
	I0731 18:08:46.545631   73479 main.go:141] libmachine: (no-preload-673754) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:ec:78", ip: ""} in network mk-no-preload-673754: {Iface:virbr2 ExpiryTime:2024-07-31 19:08:12 +0000 UTC Type:0 Mac:52:54:00:5a:ec:78 Iaid: IPaddr:192.168.61.126 Prefix:24 Hostname:no-preload-673754 Clientid:01:52:54:00:5a:ec:78}
	I0731 18:08:46.545759   73479 main.go:141] libmachine: (no-preload-673754) DBG | domain no-preload-673754 has defined IP address 192.168.61.126 and MAC address 52:54:00:5a:ec:78 in network mk-no-preload-673754
	I0731 18:08:46.545829   73479 main.go:141] libmachine: (no-preload-673754) Calling .GetSSHPort
	I0731 18:08:46.545985   73479 main.go:141] libmachine: (no-preload-673754) Calling .GetSSHKeyPath
	I0731 18:08:46.546116   73479 main.go:141] libmachine: (no-preload-673754) Calling .GetSSHUsername
	I0731 18:08:46.546256   73479 sshutil.go:53] new ssh client: &{IP:192.168.61.126 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19349-8084/.minikube/machines/no-preload-673754/id_rsa Username:docker}
	I0731 18:08:46.546384   73479 main.go:141] libmachine: (no-preload-673754) DBG | domain no-preload-673754 has defined MAC address 52:54:00:5a:ec:78 in network mk-no-preload-673754
	I0731 18:08:46.546888   73479 main.go:141] libmachine: (no-preload-673754) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:ec:78", ip: ""} in network mk-no-preload-673754: {Iface:virbr2 ExpiryTime:2024-07-31 19:08:12 +0000 UTC Type:0 Mac:52:54:00:5a:ec:78 Iaid: IPaddr:192.168.61.126 Prefix:24 Hostname:no-preload-673754 Clientid:01:52:54:00:5a:ec:78}
	I0731 18:08:46.546907   73479 main.go:141] libmachine: (no-preload-673754) DBG | domain no-preload-673754 has defined IP address 192.168.61.126 and MAC address 52:54:00:5a:ec:78 in network mk-no-preload-673754
	I0731 18:08:46.546924   73479 main.go:141] libmachine: (no-preload-673754) Calling .GetSSHPort
	I0731 18:08:46.547090   73479 main.go:141] libmachine: (no-preload-673754) Calling .GetSSHKeyPath
	I0731 18:08:46.547256   73479 main.go:141] libmachine: (no-preload-673754) Calling .GetSSHUsername
	I0731 18:08:46.547434   73479 sshutil.go:53] new ssh client: &{IP:192.168.61.126 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19349-8084/.minikube/machines/no-preload-673754/id_rsa Username:docker}
	I0731 18:08:46.570759   73479 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40485
	I0731 18:08:46.571222   73479 main.go:141] libmachine: () Calling .GetVersion
	I0731 18:08:46.571668   73479 main.go:141] libmachine: Using API Version  1
	I0731 18:08:46.571688   73479 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 18:08:46.572207   73479 main.go:141] libmachine: () Calling .GetMachineName
	I0731 18:08:46.572367   73479 main.go:141] libmachine: (no-preload-673754) Calling .GetState
	I0731 18:08:46.574368   73479 main.go:141] libmachine: (no-preload-673754) Calling .DriverName
	I0731 18:08:46.574582   73479 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0731 18:08:46.574607   73479 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0731 18:08:46.574627   73479 main.go:141] libmachine: (no-preload-673754) Calling .GetSSHHostname
	I0731 18:08:46.577768   73479 main.go:141] libmachine: (no-preload-673754) DBG | domain no-preload-673754 has defined MAC address 52:54:00:5a:ec:78 in network mk-no-preload-673754
	I0731 18:08:46.578542   73479 main.go:141] libmachine: (no-preload-673754) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:ec:78", ip: ""} in network mk-no-preload-673754: {Iface:virbr2 ExpiryTime:2024-07-31 19:08:12 +0000 UTC Type:0 Mac:52:54:00:5a:ec:78 Iaid: IPaddr:192.168.61.126 Prefix:24 Hostname:no-preload-673754 Clientid:01:52:54:00:5a:ec:78}
	I0731 18:08:46.578567   73479 main.go:141] libmachine: (no-preload-673754) DBG | domain no-preload-673754 has defined IP address 192.168.61.126 and MAC address 52:54:00:5a:ec:78 in network mk-no-preload-673754
	I0731 18:08:46.578741   73479 main.go:141] libmachine: (no-preload-673754) Calling .GetSSHPort
	I0731 18:08:46.578911   73479 main.go:141] libmachine: (no-preload-673754) Calling .GetSSHKeyPath
	I0731 18:08:46.579047   73479 main.go:141] libmachine: (no-preload-673754) Calling .GetSSHUsername
	I0731 18:08:46.579459   73479 sshutil.go:53] new ssh client: &{IP:192.168.61.126 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19349-8084/.minikube/machines/no-preload-673754/id_rsa Username:docker}
	I0731 18:08:46.700752   73479 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0731 18:08:46.720967   73479 node_ready.go:35] waiting up to 6m0s for node "no-preload-673754" to be "Ready" ...
	I0731 18:08:46.798188   73479 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0731 18:08:46.802534   73479 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0731 18:08:46.802564   73479 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0731 18:08:46.828038   73479 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0731 18:08:46.859309   73479 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0731 18:08:46.859337   73479 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0731 18:08:46.921507   73479 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0731 18:08:46.921536   73479 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0731 18:08:46.958759   73479 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
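
The three kubectl invocations above install the addons by applying their manifests with the bundled kubectl and KUBECONFIG pointed at the node's kubeconfig. A sketch of the metrics-server apply, mirroring the command in the log (minikube executes it over SSH on the node rather than locally):

// applyAddonManifests runs the bundled kubectl with KUBECONFIG set and applies
// each addon manifest, like the commands above.
package main

import (
	"fmt"
	"os"
	"os/exec"
)

func applyAddonManifests(manifests []string) error {
	args := []string{"apply"}
	for _, m := range manifests {
		args = append(args, "-f", m)
	}
	cmd := exec.Command("/var/lib/minikube/binaries/v1.31.0-beta.0/kubectl", args...)
	cmd.Env = append(os.Environ(), "KUBECONFIG=/var/lib/minikube/kubeconfig")
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	return cmd.Run()
}

func main() {
	manifests := []string{
		"/etc/kubernetes/addons/metrics-apiservice.yaml",
		"/etc/kubernetes/addons/metrics-server-deployment.yaml",
		"/etc/kubernetes/addons/metrics-server-rbac.yaml",
		"/etc/kubernetes/addons/metrics-server-service.yaml",
	}
	if err := applyAddonManifests(manifests); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}
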
	I0731 18:08:48.106542   73479 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.278462071s)
	I0731 18:08:48.106599   73479 main.go:141] libmachine: Making call to close driver server
	I0731 18:08:48.106608   73479 main.go:141] libmachine: (no-preload-673754) Calling .Close
	I0731 18:08:48.107151   73479 main.go:141] libmachine: Successfully made call to close driver server
	I0731 18:08:48.107177   73479 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 18:08:48.107187   73479 main.go:141] libmachine: Making call to close driver server
	I0731 18:08:48.107196   73479 main.go:141] libmachine: (no-preload-673754) Calling .Close
	I0731 18:08:48.107601   73479 main.go:141] libmachine: (no-preload-673754) DBG | Closing plugin on server side
	I0731 18:08:48.107604   73479 main.go:141] libmachine: Successfully made call to close driver server
	I0731 18:08:48.107631   73479 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 18:08:48.107831   73479 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.309610972s)
	I0731 18:08:48.107872   73479 main.go:141] libmachine: Making call to close driver server
	I0731 18:08:48.107882   73479 main.go:141] libmachine: (no-preload-673754) Calling .Close
	I0731 18:08:48.108105   73479 main.go:141] libmachine: Successfully made call to close driver server
	I0731 18:08:48.108119   73479 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 18:08:48.108138   73479 main.go:141] libmachine: Making call to close driver server
	I0731 18:08:48.108150   73479 main.go:141] libmachine: (no-preload-673754) Calling .Close
	I0731 18:08:48.108351   73479 main.go:141] libmachine: Successfully made call to close driver server
	I0731 18:08:48.108367   73479 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 18:08:48.118038   73479 main.go:141] libmachine: Making call to close driver server
	I0731 18:08:48.118055   73479 main.go:141] libmachine: (no-preload-673754) Calling .Close
	I0731 18:08:48.118329   73479 main.go:141] libmachine: Successfully made call to close driver server
	I0731 18:08:48.118349   73479 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 18:08:48.128563   73479 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.169765123s)
	I0731 18:08:48.128606   73479 main.go:141] libmachine: Making call to close driver server
	I0731 18:08:48.128619   73479 main.go:141] libmachine: (no-preload-673754) Calling .Close
	I0731 18:08:48.128901   73479 main.go:141] libmachine: Successfully made call to close driver server
	I0731 18:08:48.128915   73479 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 18:08:48.128924   73479 main.go:141] libmachine: Making call to close driver server
	I0731 18:08:48.128932   73479 main.go:141] libmachine: (no-preload-673754) Calling .Close
	I0731 18:08:48.129137   73479 main.go:141] libmachine: Successfully made call to close driver server
	I0731 18:08:48.129152   73479 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 18:08:48.129162   73479 addons.go:475] Verifying addon metrics-server=true in "no-preload-673754"
	I0731 18:08:48.129174   73479 main.go:141] libmachine: (no-preload-673754) DBG | Closing plugin on server side
	I0731 18:08:48.130887   73479 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0731 18:08:46.271648   73800 pod_ready.go:102] pod "metrics-server-569cc877fc-fzxrw" in "kube-system" namespace has status "Ready":"False"
	I0731 18:08:48.271754   73800 pod_ready.go:102] pod "metrics-server-569cc877fc-fzxrw" in "kube-system" namespace has status "Ready":"False"
	I0731 18:08:45.478411   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:08:45.978802   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:08:46.478407   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:08:46.978134   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:08:47.479125   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:08:47.978991   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:08:48.478597   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:08:48.978742   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:08:49.479320   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:08:49.978288   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:08:46.779263   73696 pod_ready.go:102] pod "metrics-server-569cc877fc-64hp4" in "kube-system" namespace has status "Ready":"False"
	I0731 18:08:48.779361   73696 pod_ready.go:102] pod "metrics-server-569cc877fc-64hp4" in "kube-system" namespace has status "Ready":"False"
	I0731 18:08:48.131964   73479 addons.go:510] duration metric: took 1.634151286s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I0731 18:08:48.725682   73479 node_ready.go:53] node "no-preload-673754" has status "Ready":"False"
	I0731 18:08:51.231081   73479 node_ready.go:53] node "no-preload-673754" has status "Ready":"False"
	I0731 18:08:50.771387   73800 pod_ready.go:102] pod "metrics-server-569cc877fc-fzxrw" in "kube-system" namespace has status "Ready":"False"
	I0731 18:08:52.771438   73800 pod_ready.go:102] pod "metrics-server-569cc877fc-fzxrw" in "kube-system" namespace has status "Ready":"False"
	I0731 18:08:50.478112   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:08:50.978272   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:08:51.478319   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:08:51.978880   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:08:52.479176   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:08:52.979001   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:08:53.478508   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:08:53.978517   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:08:54.478857   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:08:54.978290   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:08:51.278348   73696 pod_ready.go:102] pod "metrics-server-569cc877fc-64hp4" in "kube-system" namespace has status "Ready":"False"
	I0731 18:08:53.278456   73696 pod_ready.go:102] pod "metrics-server-569cc877fc-64hp4" in "kube-system" namespace has status "Ready":"False"
	I0731 18:08:55.278495   73696 pod_ready.go:102] pod "metrics-server-569cc877fc-64hp4" in "kube-system" namespace has status "Ready":"False"
	I0731 18:08:53.725153   73479 node_ready.go:53] node "no-preload-673754" has status "Ready":"False"
	I0731 18:08:54.224475   73479 node_ready.go:49] node "no-preload-673754" has status "Ready":"True"
	I0731 18:08:54.224505   73479 node_ready.go:38] duration metric: took 7.503503116s for node "no-preload-673754" to be "Ready" ...
	I0731 18:08:54.224517   73479 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0731 18:08:54.231434   73479 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5cfdc65f69-k7clq" in "kube-system" namespace to be "Ready" ...
	I0731 18:08:56.237804   73479 pod_ready.go:102] pod "coredns-5cfdc65f69-k7clq" in "kube-system" namespace has status "Ready":"False"
	I0731 18:08:54.772597   73800 pod_ready.go:102] pod "metrics-server-569cc877fc-fzxrw" in "kube-system" namespace has status "Ready":"False"
	I0731 18:08:57.271778   73800 pod_ready.go:102] pod "metrics-server-569cc877fc-fzxrw" in "kube-system" namespace has status "Ready":"False"
	I0731 18:08:55.478727   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:08:55.978552   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:08:56.478246   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:08:56.978732   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:08:57.478262   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:08:57.978216   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:08:58.478212   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:08:58.978406   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:08:59.478270   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:08:59.978221   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:08:57.781459   73696 pod_ready.go:102] pod "metrics-server-569cc877fc-64hp4" in "kube-system" namespace has status "Ready":"False"
	I0731 18:09:00.278913   73696 pod_ready.go:102] pod "metrics-server-569cc877fc-64hp4" in "kube-system" namespace has status "Ready":"False"
	I0731 18:08:58.740148   73479 pod_ready.go:102] pod "coredns-5cfdc65f69-k7clq" in "kube-system" namespace has status "Ready":"False"
	I0731 18:09:01.237849   73479 pod_ready.go:92] pod "coredns-5cfdc65f69-k7clq" in "kube-system" namespace has status "Ready":"True"
	I0731 18:09:01.237874   73479 pod_ready.go:81] duration metric: took 7.00641308s for pod "coredns-5cfdc65f69-k7clq" in "kube-system" namespace to be "Ready" ...
	I0731 18:09:01.237887   73479 pod_ready.go:78] waiting up to 6m0s for pod "etcd-no-preload-673754" in "kube-system" namespace to be "Ready" ...
	I0731 18:09:01.242105   73479 pod_ready.go:92] pod "etcd-no-preload-673754" in "kube-system" namespace has status "Ready":"True"
	I0731 18:09:01.242122   73479 pod_ready.go:81] duration metric: took 4.229266ms for pod "etcd-no-preload-673754" in "kube-system" namespace to be "Ready" ...
	I0731 18:09:01.242133   73479 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-no-preload-673754" in "kube-system" namespace to be "Ready" ...
	I0731 18:09:01.246652   73479 pod_ready.go:92] pod "kube-apiserver-no-preload-673754" in "kube-system" namespace has status "Ready":"True"
	I0731 18:09:01.246674   73479 pod_ready.go:81] duration metric: took 4.534937ms for pod "kube-apiserver-no-preload-673754" in "kube-system" namespace to be "Ready" ...
	I0731 18:09:01.246686   73479 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-no-preload-673754" in "kube-system" namespace to be "Ready" ...
	I0731 18:09:01.251284   73479 pod_ready.go:92] pod "kube-controller-manager-no-preload-673754" in "kube-system" namespace has status "Ready":"True"
	I0731 18:09:01.251302   73479 pod_ready.go:81] duration metric: took 4.608584ms for pod "kube-controller-manager-no-preload-673754" in "kube-system" namespace to be "Ready" ...
	I0731 18:09:01.251321   73479 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-hqxh6" in "kube-system" namespace to be "Ready" ...
	I0731 18:09:01.255030   73479 pod_ready.go:92] pod "kube-proxy-hqxh6" in "kube-system" namespace has status "Ready":"True"
	I0731 18:09:01.255045   73479 pod_ready.go:81] duration metric: took 3.718917ms for pod "kube-proxy-hqxh6" in "kube-system" namespace to be "Ready" ...
	I0731 18:09:01.255052   73479 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-no-preload-673754" in "kube-system" namespace to be "Ready" ...
	I0731 18:09:01.636799   73479 pod_ready.go:92] pod "kube-scheduler-no-preload-673754" in "kube-system" namespace has status "Ready":"True"
	I0731 18:09:01.636826   73479 pod_ready.go:81] duration metric: took 381.767881ms for pod "kube-scheduler-no-preload-673754" in "kube-system" namespace to be "Ready" ...
	I0731 18:09:01.636835   73479 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-78fcd8795b-27pkr" in "kube-system" namespace to be "Ready" ...
	I0731 18:08:59.771686   73800 pod_ready.go:102] pod "metrics-server-569cc877fc-fzxrw" in "kube-system" namespace has status "Ready":"False"
	I0731 18:09:02.271396   73800 pod_ready.go:102] pod "metrics-server-569cc877fc-fzxrw" in "kube-system" namespace has status "Ready":"False"
	I0731 18:09:00.478785   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:09:00.978350   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:09:01.478635   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:09:01.978192   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:09:02.478480   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:09:02.979021   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:09:03.478366   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:09:03.978984   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:09:04.479143   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:09:04.978913   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:09:02.279613   73696 pod_ready.go:102] pod "metrics-server-569cc877fc-64hp4" in "kube-system" namespace has status "Ready":"False"
	I0731 18:09:04.778482   73696 pod_ready.go:102] pod "metrics-server-569cc877fc-64hp4" in "kube-system" namespace has status "Ready":"False"
	I0731 18:09:03.642978   73479 pod_ready.go:102] pod "metrics-server-78fcd8795b-27pkr" in "kube-system" namespace has status "Ready":"False"
	I0731 18:09:05.644941   73479 pod_ready.go:102] pod "metrics-server-78fcd8795b-27pkr" in "kube-system" namespace has status "Ready":"False"
	I0731 18:09:04.771938   73800 pod_ready.go:102] pod "metrics-server-569cc877fc-fzxrw" in "kube-system" namespace has status "Ready":"False"
	I0731 18:09:07.271165   73800 pod_ready.go:102] pod "metrics-server-569cc877fc-fzxrw" in "kube-system" namespace has status "Ready":"False"
	I0731 18:09:05.478608   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:09:05.978345   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:09:06.478435   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:09:06.978551   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:09:07.478131   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:09:07.978354   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:09:08.478977   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:09:08.979122   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:09:09.478279   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:09:09.978350   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:09:06.780364   73696 pod_ready.go:102] pod "metrics-server-569cc877fc-64hp4" in "kube-system" namespace has status "Ready":"False"
	I0731 18:09:09.278573   73696 pod_ready.go:102] pod "metrics-server-569cc877fc-64hp4" in "kube-system" namespace has status "Ready":"False"
	I0731 18:09:08.142974   73479 pod_ready.go:102] pod "metrics-server-78fcd8795b-27pkr" in "kube-system" namespace has status "Ready":"False"
	I0731 18:09:10.643136   73479 pod_ready.go:102] pod "metrics-server-78fcd8795b-27pkr" in "kube-system" namespace has status "Ready":"False"
	I0731 18:09:09.771950   73800 pod_ready.go:102] pod "metrics-server-569cc877fc-fzxrw" in "kube-system" namespace has status "Ready":"False"
	I0731 18:09:11.772464   73800 pod_ready.go:102] pod "metrics-server-569cc877fc-fzxrw" in "kube-system" namespace has status "Ready":"False"
	I0731 18:09:13.773164   73800 pod_ready.go:102] pod "metrics-server-569cc877fc-fzxrw" in "kube-system" namespace has status "Ready":"False"
	I0731 18:09:10.479086   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 18:09:10.479175   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 18:09:10.516364   74203 cri.go:89] found id: ""
	I0731 18:09:10.516389   74203 logs.go:276] 0 containers: []
	W0731 18:09:10.516405   74203 logs.go:278] No container was found matching "kube-apiserver"
	I0731 18:09:10.516411   74203 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 18:09:10.516464   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 18:09:10.549398   74203 cri.go:89] found id: ""
	I0731 18:09:10.549422   74203 logs.go:276] 0 containers: []
	W0731 18:09:10.549433   74203 logs.go:278] No container was found matching "etcd"
	I0731 18:09:10.549440   74203 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 18:09:10.549503   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 18:09:10.584290   74203 cri.go:89] found id: ""
	I0731 18:09:10.584314   74203 logs.go:276] 0 containers: []
	W0731 18:09:10.584322   74203 logs.go:278] No container was found matching "coredns"
	I0731 18:09:10.584327   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 18:09:10.584381   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 18:09:10.615832   74203 cri.go:89] found id: ""
	I0731 18:09:10.615860   74203 logs.go:276] 0 containers: []
	W0731 18:09:10.615871   74203 logs.go:278] No container was found matching "kube-scheduler"
	I0731 18:09:10.615878   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 18:09:10.615941   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 18:09:10.647597   74203 cri.go:89] found id: ""
	I0731 18:09:10.647617   74203 logs.go:276] 0 containers: []
	W0731 18:09:10.647624   74203 logs.go:278] No container was found matching "kube-proxy"
	I0731 18:09:10.647629   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 18:09:10.647686   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 18:09:10.680981   74203 cri.go:89] found id: ""
	I0731 18:09:10.681016   74203 logs.go:276] 0 containers: []
	W0731 18:09:10.681027   74203 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 18:09:10.681033   74203 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 18:09:10.681093   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 18:09:10.713798   74203 cri.go:89] found id: ""
	I0731 18:09:10.713839   74203 logs.go:276] 0 containers: []
	W0731 18:09:10.713851   74203 logs.go:278] No container was found matching "kindnet"
	I0731 18:09:10.713865   74203 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 18:09:10.713937   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 18:09:10.746378   74203 cri.go:89] found id: ""
	I0731 18:09:10.746405   74203 logs.go:276] 0 containers: []
	W0731 18:09:10.746413   74203 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 18:09:10.746423   74203 logs.go:123] Gathering logs for kubelet ...
	I0731 18:09:10.746439   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 18:09:10.799156   74203 logs.go:123] Gathering logs for dmesg ...
	I0731 18:09:10.799187   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 18:09:10.812388   74203 logs.go:123] Gathering logs for describe nodes ...
	I0731 18:09:10.812413   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 18:09:10.932251   74203 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 18:09:10.932271   74203 logs.go:123] Gathering logs for CRI-O ...
	I0731 18:09:10.932285   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 18:09:10.996810   74203 logs.go:123] Gathering logs for container status ...
	I0731 18:09:10.996840   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 18:09:13.533936   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:09:13.549194   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 18:09:13.549250   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 18:09:13.599350   74203 cri.go:89] found id: ""
	I0731 18:09:13.599389   74203 logs.go:276] 0 containers: []
	W0731 18:09:13.599400   74203 logs.go:278] No container was found matching "kube-apiserver"
	I0731 18:09:13.599407   74203 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 18:09:13.599466   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 18:09:13.651736   74203 cri.go:89] found id: ""
	I0731 18:09:13.651771   74203 logs.go:276] 0 containers: []
	W0731 18:09:13.651791   74203 logs.go:278] No container was found matching "etcd"
	I0731 18:09:13.651798   74203 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 18:09:13.651855   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 18:09:13.699804   74203 cri.go:89] found id: ""
	I0731 18:09:13.699832   74203 logs.go:276] 0 containers: []
	W0731 18:09:13.699841   74203 logs.go:278] No container was found matching "coredns"
	I0731 18:09:13.699846   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 18:09:13.699906   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 18:09:13.732760   74203 cri.go:89] found id: ""
	I0731 18:09:13.732781   74203 logs.go:276] 0 containers: []
	W0731 18:09:13.732788   74203 logs.go:278] No container was found matching "kube-scheduler"
	I0731 18:09:13.732794   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 18:09:13.732849   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 18:09:13.766865   74203 cri.go:89] found id: ""
	I0731 18:09:13.766892   74203 logs.go:276] 0 containers: []
	W0731 18:09:13.766902   74203 logs.go:278] No container was found matching "kube-proxy"
	I0731 18:09:13.766910   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 18:09:13.766964   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 18:09:13.804706   74203 cri.go:89] found id: ""
	I0731 18:09:13.804733   74203 logs.go:276] 0 containers: []
	W0731 18:09:13.804743   74203 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 18:09:13.804757   74203 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 18:09:13.804821   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 18:09:13.838432   74203 cri.go:89] found id: ""
	I0731 18:09:13.838461   74203 logs.go:276] 0 containers: []
	W0731 18:09:13.838472   74203 logs.go:278] No container was found matching "kindnet"
	I0731 18:09:13.838479   74203 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 18:09:13.838534   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 18:09:13.870455   74203 cri.go:89] found id: ""
	I0731 18:09:13.870480   74203 logs.go:276] 0 containers: []
	W0731 18:09:13.870490   74203 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 18:09:13.870498   74203 logs.go:123] Gathering logs for kubelet ...
	I0731 18:09:13.870510   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 18:09:13.922911   74203 logs.go:123] Gathering logs for dmesg ...
	I0731 18:09:13.922947   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 18:09:13.936075   74203 logs.go:123] Gathering logs for describe nodes ...
	I0731 18:09:13.936098   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 18:09:14.006766   74203 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 18:09:14.006790   74203 logs.go:123] Gathering logs for CRI-O ...
	I0731 18:09:14.006810   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 18:09:14.071066   74203 logs.go:123] Gathering logs for container status ...
	I0731 18:09:14.071100   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 18:09:11.278892   73696 pod_ready.go:102] pod "metrics-server-569cc877fc-64hp4" in "kube-system" namespace has status "Ready":"False"
	I0731 18:09:13.279644   73696 pod_ready.go:102] pod "metrics-server-569cc877fc-64hp4" in "kube-system" namespace has status "Ready":"False"
	I0731 18:09:15.280298   73696 pod_ready.go:102] pod "metrics-server-569cc877fc-64hp4" in "kube-system" namespace has status "Ready":"False"
	I0731 18:09:12.643341   73479 pod_ready.go:102] pod "metrics-server-78fcd8795b-27pkr" in "kube-system" namespace has status "Ready":"False"
	I0731 18:09:14.643636   73479 pod_ready.go:102] pod "metrics-server-78fcd8795b-27pkr" in "kube-system" namespace has status "Ready":"False"
	I0731 18:09:16.280976   73800 pod_ready.go:102] pod "metrics-server-569cc877fc-fzxrw" in "kube-system" namespace has status "Ready":"False"
	I0731 18:09:18.772338   73800 pod_ready.go:102] pod "metrics-server-569cc877fc-fzxrw" in "kube-system" namespace has status "Ready":"False"
	I0731 18:09:16.615212   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:09:16.627439   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 18:09:16.627499   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 18:09:16.660764   74203 cri.go:89] found id: ""
	I0731 18:09:16.660785   74203 logs.go:276] 0 containers: []
	W0731 18:09:16.660792   74203 logs.go:278] No container was found matching "kube-apiserver"
	I0731 18:09:16.660798   74203 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 18:09:16.660842   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 18:09:16.697154   74203 cri.go:89] found id: ""
	I0731 18:09:16.697182   74203 logs.go:276] 0 containers: []
	W0731 18:09:16.697196   74203 logs.go:278] No container was found matching "etcd"
	I0731 18:09:16.697201   74203 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 18:09:16.697259   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 18:09:16.730263   74203 cri.go:89] found id: ""
	I0731 18:09:16.730284   74203 logs.go:276] 0 containers: []
	W0731 18:09:16.730291   74203 logs.go:278] No container was found matching "coredns"
	I0731 18:09:16.730318   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 18:09:16.730369   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 18:09:16.765226   74203 cri.go:89] found id: ""
	I0731 18:09:16.765249   74203 logs.go:276] 0 containers: []
	W0731 18:09:16.765257   74203 logs.go:278] No container was found matching "kube-scheduler"
	I0731 18:09:16.765262   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 18:09:16.765336   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 18:09:16.800502   74203 cri.go:89] found id: ""
	I0731 18:09:16.800528   74203 logs.go:276] 0 containers: []
	W0731 18:09:16.800535   74203 logs.go:278] No container was found matching "kube-proxy"
	I0731 18:09:16.800541   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 18:09:16.800599   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 18:09:16.837391   74203 cri.go:89] found id: ""
	I0731 18:09:16.837418   74203 logs.go:276] 0 containers: []
	W0731 18:09:16.837427   74203 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 18:09:16.837435   74203 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 18:09:16.837490   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 18:09:16.867606   74203 cri.go:89] found id: ""
	I0731 18:09:16.867628   74203 logs.go:276] 0 containers: []
	W0731 18:09:16.867637   74203 logs.go:278] No container was found matching "kindnet"
	I0731 18:09:16.867642   74203 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 18:09:16.867696   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 18:09:16.901639   74203 cri.go:89] found id: ""
	I0731 18:09:16.901669   74203 logs.go:276] 0 containers: []
	W0731 18:09:16.901681   74203 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 18:09:16.901693   74203 logs.go:123] Gathering logs for kubelet ...
	I0731 18:09:16.901707   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 18:09:16.951692   74203 logs.go:123] Gathering logs for dmesg ...
	I0731 18:09:16.951729   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 18:09:16.965069   74203 logs.go:123] Gathering logs for describe nodes ...
	I0731 18:09:16.965101   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 18:09:17.040337   74203 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 18:09:17.040358   74203 logs.go:123] Gathering logs for CRI-O ...
	I0731 18:09:17.040371   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 18:09:17.115058   74203 logs.go:123] Gathering logs for container status ...
	I0731 18:09:17.115093   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 18:09:19.651538   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:09:19.663682   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 18:09:19.663739   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 18:09:19.697851   74203 cri.go:89] found id: ""
	I0731 18:09:19.697879   74203 logs.go:276] 0 containers: []
	W0731 18:09:19.697894   74203 logs.go:278] No container was found matching "kube-apiserver"
	I0731 18:09:19.697900   74203 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 18:09:19.697996   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 18:09:19.732745   74203 cri.go:89] found id: ""
	I0731 18:09:19.732772   74203 logs.go:276] 0 containers: []
	W0731 18:09:19.732783   74203 logs.go:278] No container was found matching "etcd"
	I0731 18:09:19.732790   74203 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 18:09:19.732855   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 18:09:19.763843   74203 cri.go:89] found id: ""
	I0731 18:09:19.763865   74203 logs.go:276] 0 containers: []
	W0731 18:09:19.763873   74203 logs.go:278] No container was found matching "coredns"
	I0731 18:09:19.763878   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 18:09:19.763934   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 18:09:19.797398   74203 cri.go:89] found id: ""
	I0731 18:09:19.797422   74203 logs.go:276] 0 containers: []
	W0731 18:09:19.797429   74203 logs.go:278] No container was found matching "kube-scheduler"
	I0731 18:09:19.797434   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 18:09:19.797504   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 18:09:19.833239   74203 cri.go:89] found id: ""
	I0731 18:09:19.833268   74203 logs.go:276] 0 containers: []
	W0731 18:09:19.833278   74203 logs.go:278] No container was found matching "kube-proxy"
	I0731 18:09:19.833284   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 18:09:19.833340   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 18:09:19.866135   74203 cri.go:89] found id: ""
	I0731 18:09:19.866163   74203 logs.go:276] 0 containers: []
	W0731 18:09:19.866173   74203 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 18:09:19.866181   74203 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 18:09:19.866242   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 18:09:19.900581   74203 cri.go:89] found id: ""
	I0731 18:09:19.900606   74203 logs.go:276] 0 containers: []
	W0731 18:09:19.900615   74203 logs.go:278] No container was found matching "kindnet"
	I0731 18:09:19.900621   74203 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 18:09:19.900720   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 18:09:19.936451   74203 cri.go:89] found id: ""
	I0731 18:09:19.936475   74203 logs.go:276] 0 containers: []
	W0731 18:09:19.936487   74203 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 18:09:19.936496   74203 logs.go:123] Gathering logs for kubelet ...
	I0731 18:09:19.936508   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 18:09:19.990522   74203 logs.go:123] Gathering logs for dmesg ...
	I0731 18:09:19.990559   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 18:09:20.003460   74203 logs.go:123] Gathering logs for describe nodes ...
	I0731 18:09:20.003487   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 18:09:20.070869   74203 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 18:09:20.070893   74203 logs.go:123] Gathering logs for CRI-O ...
	I0731 18:09:20.070912   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 18:09:20.148316   74203 logs.go:123] Gathering logs for container status ...
	I0731 18:09:20.148354   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 18:09:17.779144   73696 pod_ready.go:102] pod "metrics-server-569cc877fc-64hp4" in "kube-system" namespace has status "Ready":"False"
	I0731 18:09:19.781539   73696 pod_ready.go:102] pod "metrics-server-569cc877fc-64hp4" in "kube-system" namespace has status "Ready":"False"
	I0731 18:09:17.143894   73479 pod_ready.go:102] pod "metrics-server-78fcd8795b-27pkr" in "kube-system" namespace has status "Ready":"False"
	I0731 18:09:19.642139   73479 pod_ready.go:102] pod "metrics-server-78fcd8795b-27pkr" in "kube-system" namespace has status "Ready":"False"
	I0731 18:09:21.642234   73479 pod_ready.go:102] pod "metrics-server-78fcd8795b-27pkr" in "kube-system" namespace has status "Ready":"False"
	I0731 18:09:21.271074   73800 pod_ready.go:102] pod "metrics-server-569cc877fc-fzxrw" in "kube-system" namespace has status "Ready":"False"
	I0731 18:09:23.771002   73800 pod_ready.go:102] pod "metrics-server-569cc877fc-fzxrw" in "kube-system" namespace has status "Ready":"False"
	I0731 18:09:22.685964   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:09:22.698740   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 18:09:22.698814   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 18:09:22.735321   74203 cri.go:89] found id: ""
	I0731 18:09:22.735350   74203 logs.go:276] 0 containers: []
	W0731 18:09:22.735360   74203 logs.go:278] No container was found matching "kube-apiserver"
	I0731 18:09:22.735367   74203 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 18:09:22.735428   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 18:09:22.767689   74203 cri.go:89] found id: ""
	I0731 18:09:22.767718   74203 logs.go:276] 0 containers: []
	W0731 18:09:22.767729   74203 logs.go:278] No container was found matching "etcd"
	I0731 18:09:22.767736   74203 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 18:09:22.767795   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 18:09:22.804010   74203 cri.go:89] found id: ""
	I0731 18:09:22.804036   74203 logs.go:276] 0 containers: []
	W0731 18:09:22.804045   74203 logs.go:278] No container was found matching "coredns"
	I0731 18:09:22.804050   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 18:09:22.804101   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 18:09:22.836820   74203 cri.go:89] found id: ""
	I0731 18:09:22.836847   74203 logs.go:276] 0 containers: []
	W0731 18:09:22.836858   74203 logs.go:278] No container was found matching "kube-scheduler"
	I0731 18:09:22.836874   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 18:09:22.836933   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 18:09:22.870163   74203 cri.go:89] found id: ""
	I0731 18:09:22.870187   74203 logs.go:276] 0 containers: []
	W0731 18:09:22.870194   74203 logs.go:278] No container was found matching "kube-proxy"
	I0731 18:09:22.870199   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 18:09:22.870270   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 18:09:22.905926   74203 cri.go:89] found id: ""
	I0731 18:09:22.905951   74203 logs.go:276] 0 containers: []
	W0731 18:09:22.905959   74203 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 18:09:22.905965   74203 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 18:09:22.906020   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 18:09:22.938926   74203 cri.go:89] found id: ""
	I0731 18:09:22.938949   74203 logs.go:276] 0 containers: []
	W0731 18:09:22.938957   74203 logs.go:278] No container was found matching "kindnet"
	I0731 18:09:22.938963   74203 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 18:09:22.939008   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 18:09:22.975150   74203 cri.go:89] found id: ""
	I0731 18:09:22.975185   74203 logs.go:276] 0 containers: []
	W0731 18:09:22.975194   74203 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 18:09:22.975204   74203 logs.go:123] Gathering logs for describe nodes ...
	I0731 18:09:22.975219   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 18:09:23.043265   74203 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 18:09:23.043290   74203 logs.go:123] Gathering logs for CRI-O ...
	I0731 18:09:23.043302   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 18:09:23.122681   74203 logs.go:123] Gathering logs for container status ...
	I0731 18:09:23.122717   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 18:09:23.161745   74203 logs.go:123] Gathering logs for kubelet ...
	I0731 18:09:23.161769   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 18:09:23.211274   74203 logs.go:123] Gathering logs for dmesg ...
	I0731 18:09:23.211305   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 18:09:22.278664   73696 pod_ready.go:102] pod "metrics-server-569cc877fc-64hp4" in "kube-system" namespace has status "Ready":"False"
	I0731 18:09:24.778771   73696 pod_ready.go:102] pod "metrics-server-569cc877fc-64hp4" in "kube-system" namespace has status "Ready":"False"
	I0731 18:09:23.643871   73479 pod_ready.go:102] pod "metrics-server-78fcd8795b-27pkr" in "kube-system" namespace has status "Ready":"False"
	I0731 18:09:26.143509   73479 pod_ready.go:102] pod "metrics-server-78fcd8795b-27pkr" in "kube-system" namespace has status "Ready":"False"
	I0731 18:09:25.771922   73800 pod_ready.go:102] pod "metrics-server-569cc877fc-fzxrw" in "kube-system" namespace has status "Ready":"False"
	I0731 18:09:27.772156   73800 pod_ready.go:102] pod "metrics-server-569cc877fc-fzxrw" in "kube-system" namespace has status "Ready":"False"
	I0731 18:09:25.724702   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:09:25.739335   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 18:09:25.739415   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 18:09:25.778238   74203 cri.go:89] found id: ""
	I0731 18:09:25.778264   74203 logs.go:276] 0 containers: []
	W0731 18:09:25.778274   74203 logs.go:278] No container was found matching "kube-apiserver"
	I0731 18:09:25.778282   74203 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 18:09:25.778349   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 18:09:25.816530   74203 cri.go:89] found id: ""
	I0731 18:09:25.816566   74203 logs.go:276] 0 containers: []
	W0731 18:09:25.816579   74203 logs.go:278] No container was found matching "etcd"
	I0731 18:09:25.816587   74203 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 18:09:25.816652   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 18:09:25.853524   74203 cri.go:89] found id: ""
	I0731 18:09:25.853562   74203 logs.go:276] 0 containers: []
	W0731 18:09:25.853575   74203 logs.go:278] No container was found matching "coredns"
	I0731 18:09:25.853583   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 18:09:25.853661   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 18:09:25.889690   74203 cri.go:89] found id: ""
	I0731 18:09:25.889719   74203 logs.go:276] 0 containers: []
	W0731 18:09:25.889728   74203 logs.go:278] No container was found matching "kube-scheduler"
	I0731 18:09:25.889734   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 18:09:25.889803   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 18:09:25.922409   74203 cri.go:89] found id: ""
	I0731 18:09:25.922441   74203 logs.go:276] 0 containers: []
	W0731 18:09:25.922452   74203 logs.go:278] No container was found matching "kube-proxy"
	I0731 18:09:25.922459   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 18:09:25.922512   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 18:09:25.956849   74203 cri.go:89] found id: ""
	I0731 18:09:25.956876   74203 logs.go:276] 0 containers: []
	W0731 18:09:25.956886   74203 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 18:09:25.956893   74203 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 18:09:25.956958   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 18:09:25.994190   74203 cri.go:89] found id: ""
	I0731 18:09:25.994212   74203 logs.go:276] 0 containers: []
	W0731 18:09:25.994220   74203 logs.go:278] No container was found matching "kindnet"
	I0731 18:09:25.994225   74203 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 18:09:25.994270   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 18:09:26.027980   74203 cri.go:89] found id: ""
	I0731 18:09:26.028005   74203 logs.go:276] 0 containers: []
	W0731 18:09:26.028014   74203 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 18:09:26.028025   74203 logs.go:123] Gathering logs for kubelet ...
	I0731 18:09:26.028044   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 18:09:26.076627   74203 logs.go:123] Gathering logs for dmesg ...
	I0731 18:09:26.076661   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 18:09:26.089439   74203 logs.go:123] Gathering logs for describe nodes ...
	I0731 18:09:26.089464   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 18:09:26.167298   74203 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 18:09:26.167319   74203 logs.go:123] Gathering logs for CRI-O ...
	I0731 18:09:26.167333   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 18:09:26.244611   74203 logs.go:123] Gathering logs for container status ...
	I0731 18:09:26.244644   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 18:09:28.787238   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:09:28.800136   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 18:09:28.800221   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 18:09:28.843038   74203 cri.go:89] found id: ""
	I0731 18:09:28.843062   74203 logs.go:276] 0 containers: []
	W0731 18:09:28.843070   74203 logs.go:278] No container was found matching "kube-apiserver"
	I0731 18:09:28.843076   74203 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 18:09:28.843154   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 18:09:28.876979   74203 cri.go:89] found id: ""
	I0731 18:09:28.877010   74203 logs.go:276] 0 containers: []
	W0731 18:09:28.877021   74203 logs.go:278] No container was found matching "etcd"
	I0731 18:09:28.877028   74203 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 18:09:28.877095   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 18:09:28.913105   74203 cri.go:89] found id: ""
	I0731 18:09:28.913137   74203 logs.go:276] 0 containers: []
	W0731 18:09:28.913147   74203 logs.go:278] No container was found matching "coredns"
	I0731 18:09:28.913155   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 18:09:28.913216   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 18:09:28.949113   74203 cri.go:89] found id: ""
	I0731 18:09:28.949144   74203 logs.go:276] 0 containers: []
	W0731 18:09:28.949153   74203 logs.go:278] No container was found matching "kube-scheduler"
	I0731 18:09:28.949160   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 18:09:28.949208   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 18:09:28.983159   74203 cri.go:89] found id: ""
	I0731 18:09:28.983187   74203 logs.go:276] 0 containers: []
	W0731 18:09:28.983195   74203 logs.go:278] No container was found matching "kube-proxy"
	I0731 18:09:28.983200   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 18:09:28.983276   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 18:09:29.016316   74203 cri.go:89] found id: ""
	I0731 18:09:29.016356   74203 logs.go:276] 0 containers: []
	W0731 18:09:29.016364   74203 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 18:09:29.016370   74203 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 18:09:29.016419   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 18:09:29.050015   74203 cri.go:89] found id: ""
	I0731 18:09:29.050047   74203 logs.go:276] 0 containers: []
	W0731 18:09:29.050058   74203 logs.go:278] No container was found matching "kindnet"
	I0731 18:09:29.050069   74203 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 18:09:29.050124   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 18:09:29.084711   74203 cri.go:89] found id: ""
	I0731 18:09:29.084739   74203 logs.go:276] 0 containers: []
	W0731 18:09:29.084749   74203 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 18:09:29.084760   74203 logs.go:123] Gathering logs for kubelet ...
	I0731 18:09:29.084777   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 18:09:29.135474   74203 logs.go:123] Gathering logs for dmesg ...
	I0731 18:09:29.135516   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 18:09:29.149989   74203 logs.go:123] Gathering logs for describe nodes ...
	I0731 18:09:29.150022   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 18:09:29.223652   74203 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 18:09:29.223676   74203 logs.go:123] Gathering logs for CRI-O ...
	I0731 18:09:29.223688   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 18:09:29.307949   74203 logs.go:123] Gathering logs for container status ...
	I0731 18:09:29.307983   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 18:09:26.779082   73696 pod_ready.go:102] pod "metrics-server-569cc877fc-64hp4" in "kube-system" namespace has status "Ready":"False"
	I0731 18:09:29.280030   73696 pod_ready.go:102] pod "metrics-server-569cc877fc-64hp4" in "kube-system" namespace has status "Ready":"False"
	I0731 18:09:28.143957   73479 pod_ready.go:102] pod "metrics-server-78fcd8795b-27pkr" in "kube-system" namespace has status "Ready":"False"
	I0731 18:09:30.643349   73479 pod_ready.go:102] pod "metrics-server-78fcd8795b-27pkr" in "kube-system" namespace has status "Ready":"False"
	I0731 18:09:30.271524   73800 pod_ready.go:102] pod "metrics-server-569cc877fc-fzxrw" in "kube-system" namespace has status "Ready":"False"
	I0731 18:09:32.271862   73800 pod_ready.go:102] pod "metrics-server-569cc877fc-fzxrw" in "kube-system" namespace has status "Ready":"False"
	I0731 18:09:31.848760   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:09:31.861409   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 18:09:31.861470   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 18:09:31.894485   74203 cri.go:89] found id: ""
	I0731 18:09:31.894505   74203 logs.go:276] 0 containers: []
	W0731 18:09:31.894513   74203 logs.go:278] No container was found matching "kube-apiserver"
	I0731 18:09:31.894518   74203 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 18:09:31.894563   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 18:09:31.926760   74203 cri.go:89] found id: ""
	I0731 18:09:31.926784   74203 logs.go:276] 0 containers: []
	W0731 18:09:31.926791   74203 logs.go:278] No container was found matching "etcd"
	I0731 18:09:31.926797   74203 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 18:09:31.926857   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 18:09:31.963010   74203 cri.go:89] found id: ""
	I0731 18:09:31.963042   74203 logs.go:276] 0 containers: []
	W0731 18:09:31.963055   74203 logs.go:278] No container was found matching "coredns"
	I0731 18:09:31.963062   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 18:09:31.963165   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 18:09:31.995221   74203 cri.go:89] found id: ""
	I0731 18:09:31.995249   74203 logs.go:276] 0 containers: []
	W0731 18:09:31.995260   74203 logs.go:278] No container was found matching "kube-scheduler"
	I0731 18:09:31.995268   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 18:09:31.995333   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 18:09:32.033912   74203 cri.go:89] found id: ""
	I0731 18:09:32.033942   74203 logs.go:276] 0 containers: []
	W0731 18:09:32.033955   74203 logs.go:278] No container was found matching "kube-proxy"
	I0731 18:09:32.033963   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 18:09:32.034038   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 18:09:32.066416   74203 cri.go:89] found id: ""
	I0731 18:09:32.066446   74203 logs.go:276] 0 containers: []
	W0731 18:09:32.066477   74203 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 18:09:32.066486   74203 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 18:09:32.066549   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 18:09:32.100097   74203 cri.go:89] found id: ""
	I0731 18:09:32.100121   74203 logs.go:276] 0 containers: []
	W0731 18:09:32.100129   74203 logs.go:278] No container was found matching "kindnet"
	I0731 18:09:32.100135   74203 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 18:09:32.100191   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 18:09:32.133061   74203 cri.go:89] found id: ""
	I0731 18:09:32.133088   74203 logs.go:276] 0 containers: []
	W0731 18:09:32.133096   74203 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 18:09:32.133106   74203 logs.go:123] Gathering logs for container status ...
	I0731 18:09:32.133120   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 18:09:32.169869   74203 logs.go:123] Gathering logs for kubelet ...
	I0731 18:09:32.169897   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 18:09:32.218668   74203 logs.go:123] Gathering logs for dmesg ...
	I0731 18:09:32.218707   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 18:09:32.231016   74203 logs.go:123] Gathering logs for describe nodes ...
	I0731 18:09:32.231039   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 18:09:32.304319   74203 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 18:09:32.304342   74203 logs.go:123] Gathering logs for CRI-O ...
	I0731 18:09:32.304353   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 18:09:34.880423   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:09:34.893775   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 18:09:34.893853   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 18:09:34.925073   74203 cri.go:89] found id: ""
	I0731 18:09:34.925101   74203 logs.go:276] 0 containers: []
	W0731 18:09:34.925109   74203 logs.go:278] No container was found matching "kube-apiserver"
	I0731 18:09:34.925115   74203 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 18:09:34.925178   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 18:09:34.960870   74203 cri.go:89] found id: ""
	I0731 18:09:34.960896   74203 logs.go:276] 0 containers: []
	W0731 18:09:34.960904   74203 logs.go:278] No container was found matching "etcd"
	I0731 18:09:34.960910   74203 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 18:09:34.960961   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 18:09:34.996290   74203 cri.go:89] found id: ""
	I0731 18:09:34.996332   74203 logs.go:276] 0 containers: []
	W0731 18:09:34.996341   74203 logs.go:278] No container was found matching "coredns"
	I0731 18:09:34.996347   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 18:09:34.996401   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 18:09:35.027900   74203 cri.go:89] found id: ""
	I0731 18:09:35.027932   74203 logs.go:276] 0 containers: []
	W0731 18:09:35.027940   74203 logs.go:278] No container was found matching "kube-scheduler"
	I0731 18:09:35.027945   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 18:09:35.028004   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 18:09:35.060533   74203 cri.go:89] found id: ""
	I0731 18:09:35.060562   74203 logs.go:276] 0 containers: []
	W0731 18:09:35.060579   74203 logs.go:278] No container was found matching "kube-proxy"
	I0731 18:09:35.060586   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 18:09:35.060653   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 18:09:35.095307   74203 cri.go:89] found id: ""
	I0731 18:09:35.095339   74203 logs.go:276] 0 containers: []
	W0731 18:09:35.095348   74203 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 18:09:35.095355   74203 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 18:09:35.095421   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 18:09:35.127060   74203 cri.go:89] found id: ""
	I0731 18:09:35.127082   74203 logs.go:276] 0 containers: []
	W0731 18:09:35.127090   74203 logs.go:278] No container was found matching "kindnet"
	I0731 18:09:35.127095   74203 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 18:09:35.127169   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 18:09:35.161300   74203 cri.go:89] found id: ""
	I0731 18:09:35.161328   74203 logs.go:276] 0 containers: []
	W0731 18:09:35.161339   74203 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 18:09:35.161350   74203 logs.go:123] Gathering logs for describe nodes ...
	I0731 18:09:35.161369   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 18:09:35.233033   74203 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 18:09:35.233060   74203 logs.go:123] Gathering logs for CRI-O ...
	I0731 18:09:35.233074   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 18:09:35.313279   74203 logs.go:123] Gathering logs for container status ...
	I0731 18:09:35.313311   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 18:09:31.779160   73696 pod_ready.go:102] pod "metrics-server-569cc877fc-64hp4" in "kube-system" namespace has status "Ready":"False"
	I0731 18:09:33.779209   73696 pod_ready.go:102] pod "metrics-server-569cc877fc-64hp4" in "kube-system" namespace has status "Ready":"False"
	I0731 18:09:32.644329   73479 pod_ready.go:102] pod "metrics-server-78fcd8795b-27pkr" in "kube-system" namespace has status "Ready":"False"
	I0731 18:09:35.143744   73479 pod_ready.go:102] pod "metrics-server-78fcd8795b-27pkr" in "kube-system" namespace has status "Ready":"False"
	I0731 18:09:34.774758   73800 pod_ready.go:102] pod "metrics-server-569cc877fc-fzxrw" in "kube-system" namespace has status "Ready":"False"
	I0731 18:09:37.271690   73800 pod_ready.go:102] pod "metrics-server-569cc877fc-fzxrw" in "kube-system" namespace has status "Ready":"False"
	I0731 18:09:35.356120   74203 logs.go:123] Gathering logs for kubelet ...
	I0731 18:09:35.356145   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 18:09:35.408231   74203 logs.go:123] Gathering logs for dmesg ...
	I0731 18:09:35.408263   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 18:09:37.921242   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:09:37.933986   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 18:09:37.934044   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 18:09:37.964524   74203 cri.go:89] found id: ""
	I0731 18:09:37.964558   74203 logs.go:276] 0 containers: []
	W0731 18:09:37.964567   74203 logs.go:278] No container was found matching "kube-apiserver"
	I0731 18:09:37.964574   74203 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 18:09:37.964632   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 18:09:37.998157   74203 cri.go:89] found id: ""
	I0731 18:09:37.998183   74203 logs.go:276] 0 containers: []
	W0731 18:09:37.998191   74203 logs.go:278] No container was found matching "etcd"
	I0731 18:09:37.998196   74203 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 18:09:37.998257   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 18:09:38.034611   74203 cri.go:89] found id: ""
	I0731 18:09:38.034637   74203 logs.go:276] 0 containers: []
	W0731 18:09:38.034645   74203 logs.go:278] No container was found matching "coredns"
	I0731 18:09:38.034650   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 18:09:38.034708   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 18:09:38.068005   74203 cri.go:89] found id: ""
	I0731 18:09:38.068029   74203 logs.go:276] 0 containers: []
	W0731 18:09:38.068039   74203 logs.go:278] No container was found matching "kube-scheduler"
	I0731 18:09:38.068047   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 18:09:38.068104   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 18:09:38.106110   74203 cri.go:89] found id: ""
	I0731 18:09:38.106133   74203 logs.go:276] 0 containers: []
	W0731 18:09:38.106141   74203 logs.go:278] No container was found matching "kube-proxy"
	I0731 18:09:38.106146   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 18:09:38.106192   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 18:09:38.138337   74203 cri.go:89] found id: ""
	I0731 18:09:38.138364   74203 logs.go:276] 0 containers: []
	W0731 18:09:38.138375   74203 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 18:09:38.138383   74203 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 18:09:38.138440   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 18:09:38.171517   74203 cri.go:89] found id: ""
	I0731 18:09:38.171546   74203 logs.go:276] 0 containers: []
	W0731 18:09:38.171557   74203 logs.go:278] No container was found matching "kindnet"
	I0731 18:09:38.171564   74203 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 18:09:38.171643   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 18:09:38.208708   74203 cri.go:89] found id: ""
	I0731 18:09:38.208733   74203 logs.go:276] 0 containers: []
	W0731 18:09:38.208741   74203 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 18:09:38.208750   74203 logs.go:123] Gathering logs for container status ...
	I0731 18:09:38.208760   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 18:09:38.243711   74203 logs.go:123] Gathering logs for kubelet ...
	I0731 18:09:38.243736   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 18:09:38.298673   74203 logs.go:123] Gathering logs for dmesg ...
	I0731 18:09:38.298705   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 18:09:38.311936   74203 logs.go:123] Gathering logs for describe nodes ...
	I0731 18:09:38.311962   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 18:09:38.384023   74203 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 18:09:38.384049   74203 logs.go:123] Gathering logs for CRI-O ...
	I0731 18:09:38.384067   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 18:09:36.278948   73696 pod_ready.go:102] pod "metrics-server-569cc877fc-64hp4" in "kube-system" namespace has status "Ready":"False"
	I0731 18:09:38.279423   73696 pod_ready.go:102] pod "metrics-server-569cc877fc-64hp4" in "kube-system" namespace has status "Ready":"False"
	I0731 18:09:40.281213   73696 pod_ready.go:102] pod "metrics-server-569cc877fc-64hp4" in "kube-system" namespace has status "Ready":"False"
	I0731 18:09:37.644041   73479 pod_ready.go:102] pod "metrics-server-78fcd8795b-27pkr" in "kube-system" namespace has status "Ready":"False"
	I0731 18:09:40.143131   73479 pod_ready.go:102] pod "metrics-server-78fcd8795b-27pkr" in "kube-system" namespace has status "Ready":"False"
	I0731 18:09:39.772098   73800 pod_ready.go:102] pod "metrics-server-569cc877fc-fzxrw" in "kube-system" namespace has status "Ready":"False"
	I0731 18:09:42.272096   73800 pod_ready.go:102] pod "metrics-server-569cc877fc-fzxrw" in "kube-system" namespace has status "Ready":"False"
	I0731 18:09:40.959426   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:09:40.972581   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 18:09:40.972645   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 18:09:41.008917   74203 cri.go:89] found id: ""
	I0731 18:09:41.008941   74203 logs.go:276] 0 containers: []
	W0731 18:09:41.008950   74203 logs.go:278] No container was found matching "kube-apiserver"
	I0731 18:09:41.008957   74203 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 18:09:41.009018   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 18:09:41.045342   74203 cri.go:89] found id: ""
	I0731 18:09:41.045375   74203 logs.go:276] 0 containers: []
	W0731 18:09:41.045384   74203 logs.go:278] No container was found matching "etcd"
	I0731 18:09:41.045390   74203 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 18:09:41.045454   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 18:09:41.081385   74203 cri.go:89] found id: ""
	I0731 18:09:41.081409   74203 logs.go:276] 0 containers: []
	W0731 18:09:41.081417   74203 logs.go:278] No container was found matching "coredns"
	I0731 18:09:41.081423   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 18:09:41.081469   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 18:09:41.118028   74203 cri.go:89] found id: ""
	I0731 18:09:41.118051   74203 logs.go:276] 0 containers: []
	W0731 18:09:41.118062   74203 logs.go:278] No container was found matching "kube-scheduler"
	I0731 18:09:41.118067   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 18:09:41.118114   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 18:09:41.154162   74203 cri.go:89] found id: ""
	I0731 18:09:41.154190   74203 logs.go:276] 0 containers: []
	W0731 18:09:41.154201   74203 logs.go:278] No container was found matching "kube-proxy"
	I0731 18:09:41.154209   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 18:09:41.154271   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 18:09:41.190789   74203 cri.go:89] found id: ""
	I0731 18:09:41.190814   74203 logs.go:276] 0 containers: []
	W0731 18:09:41.190822   74203 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 18:09:41.190827   74203 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 18:09:41.190887   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 18:09:41.226281   74203 cri.go:89] found id: ""
	I0731 18:09:41.226312   74203 logs.go:276] 0 containers: []
	W0731 18:09:41.226321   74203 logs.go:278] No container was found matching "kindnet"
	I0731 18:09:41.226327   74203 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 18:09:41.226382   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 18:09:41.258270   74203 cri.go:89] found id: ""
	I0731 18:09:41.258299   74203 logs.go:276] 0 containers: []
	W0731 18:09:41.258309   74203 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 18:09:41.258321   74203 logs.go:123] Gathering logs for CRI-O ...
	I0731 18:09:41.258335   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 18:09:41.342713   74203 logs.go:123] Gathering logs for container status ...
	I0731 18:09:41.342749   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 18:09:41.389772   74203 logs.go:123] Gathering logs for kubelet ...
	I0731 18:09:41.389795   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 18:09:41.442645   74203 logs.go:123] Gathering logs for dmesg ...
	I0731 18:09:41.442676   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 18:09:41.455850   74203 logs.go:123] Gathering logs for describe nodes ...
	I0731 18:09:41.455874   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 18:09:41.522017   74203 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 18:09:44.022439   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:09:44.035190   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 18:09:44.035258   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 18:09:44.070759   74203 cri.go:89] found id: ""
	I0731 18:09:44.070783   74203 logs.go:276] 0 containers: []
	W0731 18:09:44.070790   74203 logs.go:278] No container was found matching "kube-apiserver"
	I0731 18:09:44.070796   74203 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 18:09:44.070857   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 18:09:44.105313   74203 cri.go:89] found id: ""
	I0731 18:09:44.105350   74203 logs.go:276] 0 containers: []
	W0731 18:09:44.105358   74203 logs.go:278] No container was found matching "etcd"
	I0731 18:09:44.105364   74203 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 18:09:44.105416   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 18:09:44.140159   74203 cri.go:89] found id: ""
	I0731 18:09:44.140208   74203 logs.go:276] 0 containers: []
	W0731 18:09:44.140220   74203 logs.go:278] No container was found matching "coredns"
	I0731 18:09:44.140229   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 18:09:44.140301   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 18:09:44.176407   74203 cri.go:89] found id: ""
	I0731 18:09:44.176429   74203 logs.go:276] 0 containers: []
	W0731 18:09:44.176437   74203 logs.go:278] No container was found matching "kube-scheduler"
	I0731 18:09:44.176442   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 18:09:44.176490   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 18:09:44.210875   74203 cri.go:89] found id: ""
	I0731 18:09:44.210899   74203 logs.go:276] 0 containers: []
	W0731 18:09:44.210907   74203 logs.go:278] No container was found matching "kube-proxy"
	I0731 18:09:44.210916   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 18:09:44.210969   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 18:09:44.247021   74203 cri.go:89] found id: ""
	I0731 18:09:44.247045   74203 logs.go:276] 0 containers: []
	W0731 18:09:44.247055   74203 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 18:09:44.247061   74203 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 18:09:44.247141   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 18:09:44.282983   74203 cri.go:89] found id: ""
	I0731 18:09:44.283011   74203 logs.go:276] 0 containers: []
	W0731 18:09:44.283021   74203 logs.go:278] No container was found matching "kindnet"
	I0731 18:09:44.283029   74203 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 18:09:44.283092   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 18:09:44.319717   74203 cri.go:89] found id: ""
	I0731 18:09:44.319742   74203 logs.go:276] 0 containers: []
	W0731 18:09:44.319750   74203 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 18:09:44.319766   74203 logs.go:123] Gathering logs for CRI-O ...
	I0731 18:09:44.319781   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 18:09:44.398602   74203 logs.go:123] Gathering logs for container status ...
	I0731 18:09:44.398636   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 18:09:44.435350   74203 logs.go:123] Gathering logs for kubelet ...
	I0731 18:09:44.435384   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 18:09:44.488021   74203 logs.go:123] Gathering logs for dmesg ...
	I0731 18:09:44.488053   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 18:09:44.501790   74203 logs.go:123] Gathering logs for describe nodes ...
	I0731 18:09:44.501813   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 18:09:44.578374   74203 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 18:09:42.779304   73696 pod_ready.go:102] pod "metrics-server-569cc877fc-64hp4" in "kube-system" namespace has status "Ready":"False"
	I0731 18:09:45.279008   73696 pod_ready.go:102] pod "metrics-server-569cc877fc-64hp4" in "kube-system" namespace has status "Ready":"False"
	I0731 18:09:42.143287   73479 pod_ready.go:102] pod "metrics-server-78fcd8795b-27pkr" in "kube-system" namespace has status "Ready":"False"
	I0731 18:09:44.144123   73479 pod_ready.go:102] pod "metrics-server-78fcd8795b-27pkr" in "kube-system" namespace has status "Ready":"False"
	I0731 18:09:46.643499   73479 pod_ready.go:102] pod "metrics-server-78fcd8795b-27pkr" in "kube-system" namespace has status "Ready":"False"
	I0731 18:09:44.771059   73800 pod_ready.go:102] pod "metrics-server-569cc877fc-fzxrw" in "kube-system" namespace has status "Ready":"False"
	I0731 18:09:46.771846   73800 pod_ready.go:102] pod "metrics-server-569cc877fc-fzxrw" in "kube-system" namespace has status "Ready":"False"
	I0731 18:09:48.772300   73800 pod_ready.go:102] pod "metrics-server-569cc877fc-fzxrw" in "kube-system" namespace has status "Ready":"False"
	I0731 18:09:47.079192   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:09:47.093516   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 18:09:47.093597   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 18:09:47.132872   74203 cri.go:89] found id: ""
	I0731 18:09:47.132899   74203 logs.go:276] 0 containers: []
	W0731 18:09:47.132907   74203 logs.go:278] No container was found matching "kube-apiserver"
	I0731 18:09:47.132913   74203 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 18:09:47.132969   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 18:09:47.167428   74203 cri.go:89] found id: ""
	I0731 18:09:47.167460   74203 logs.go:276] 0 containers: []
	W0731 18:09:47.167472   74203 logs.go:278] No container was found matching "etcd"
	I0731 18:09:47.167480   74203 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 18:09:47.167551   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 18:09:47.202206   74203 cri.go:89] found id: ""
	I0731 18:09:47.202237   74203 logs.go:276] 0 containers: []
	W0731 18:09:47.202250   74203 logs.go:278] No container was found matching "coredns"
	I0731 18:09:47.202256   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 18:09:47.202308   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 18:09:47.238513   74203 cri.go:89] found id: ""
	I0731 18:09:47.238537   74203 logs.go:276] 0 containers: []
	W0731 18:09:47.238545   74203 logs.go:278] No container was found matching "kube-scheduler"
	I0731 18:09:47.238551   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 18:09:47.238604   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 18:09:47.271732   74203 cri.go:89] found id: ""
	I0731 18:09:47.271755   74203 logs.go:276] 0 containers: []
	W0731 18:09:47.271764   74203 logs.go:278] No container was found matching "kube-proxy"
	I0731 18:09:47.271770   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 18:09:47.271828   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 18:09:47.305906   74203 cri.go:89] found id: ""
	I0731 18:09:47.305932   74203 logs.go:276] 0 containers: []
	W0731 18:09:47.305943   74203 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 18:09:47.305948   74203 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 18:09:47.305996   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 18:09:47.338427   74203 cri.go:89] found id: ""
	I0731 18:09:47.338452   74203 logs.go:276] 0 containers: []
	W0731 18:09:47.338461   74203 logs.go:278] No container was found matching "kindnet"
	I0731 18:09:47.338468   74203 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 18:09:47.338526   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 18:09:47.374909   74203 cri.go:89] found id: ""
	I0731 18:09:47.374943   74203 logs.go:276] 0 containers: []
	W0731 18:09:47.374954   74203 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 18:09:47.374963   74203 logs.go:123] Gathering logs for dmesg ...
	I0731 18:09:47.374976   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 18:09:47.387739   74203 logs.go:123] Gathering logs for describe nodes ...
	I0731 18:09:47.387765   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 18:09:47.480479   74203 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 18:09:47.480505   74203 logs.go:123] Gathering logs for CRI-O ...
	I0731 18:09:47.480519   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 18:09:47.562857   74203 logs.go:123] Gathering logs for container status ...
	I0731 18:09:47.562890   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 18:09:47.608435   74203 logs.go:123] Gathering logs for kubelet ...
	I0731 18:09:47.608466   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 18:09:50.164351   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:09:50.177485   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 18:09:50.177546   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 18:09:50.211474   74203 cri.go:89] found id: ""
	I0731 18:09:50.211502   74203 logs.go:276] 0 containers: []
	W0731 18:09:50.211512   74203 logs.go:278] No container was found matching "kube-apiserver"
	I0731 18:09:50.211520   74203 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 18:09:50.211583   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 18:09:50.248167   74203 cri.go:89] found id: ""
	I0731 18:09:50.248190   74203 logs.go:276] 0 containers: []
	W0731 18:09:50.248197   74203 logs.go:278] No container was found matching "etcd"
	I0731 18:09:50.248203   74203 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 18:09:50.248250   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 18:09:50.286323   74203 cri.go:89] found id: ""
	I0731 18:09:50.286358   74203 logs.go:276] 0 containers: []
	W0731 18:09:50.286366   74203 logs.go:278] No container was found matching "coredns"
	I0731 18:09:50.286372   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 18:09:50.286420   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 18:09:50.316634   74203 cri.go:89] found id: ""
	I0731 18:09:50.316661   74203 logs.go:276] 0 containers: []
	W0731 18:09:50.316670   74203 logs.go:278] No container was found matching "kube-scheduler"
	I0731 18:09:50.316675   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 18:09:50.316726   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 18:09:47.279198   73696 pod_ready.go:102] pod "metrics-server-569cc877fc-64hp4" in "kube-system" namespace has status "Ready":"False"
	I0731 18:09:49.280511   73696 pod_ready.go:102] pod "metrics-server-569cc877fc-64hp4" in "kube-system" namespace has status "Ready":"False"
	I0731 18:09:49.144581   73479 pod_ready.go:102] pod "metrics-server-78fcd8795b-27pkr" in "kube-system" namespace has status "Ready":"False"
	I0731 18:09:51.642915   73479 pod_ready.go:102] pod "metrics-server-78fcd8795b-27pkr" in "kube-system" namespace has status "Ready":"False"
	I0731 18:09:51.272079   73800 pod_ready.go:102] pod "metrics-server-569cc877fc-fzxrw" in "kube-system" namespace has status "Ready":"False"
	I0731 18:09:53.272815   73800 pod_ready.go:102] pod "metrics-server-569cc877fc-fzxrw" in "kube-system" namespace has status "Ready":"False"
	I0731 18:09:50.349881   74203 cri.go:89] found id: ""
	I0731 18:09:50.349909   74203 logs.go:276] 0 containers: []
	W0731 18:09:50.349919   74203 logs.go:278] No container was found matching "kube-proxy"
	I0731 18:09:50.349926   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 18:09:50.349989   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 18:09:50.384147   74203 cri.go:89] found id: ""
	I0731 18:09:50.384181   74203 logs.go:276] 0 containers: []
	W0731 18:09:50.384194   74203 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 18:09:50.384203   74203 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 18:09:50.384272   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 18:09:50.418024   74203 cri.go:89] found id: ""
	I0731 18:09:50.418052   74203 logs.go:276] 0 containers: []
	W0731 18:09:50.418062   74203 logs.go:278] No container was found matching "kindnet"
	I0731 18:09:50.418069   74203 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 18:09:50.418130   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 18:09:50.454484   74203 cri.go:89] found id: ""
	I0731 18:09:50.454517   74203 logs.go:276] 0 containers: []
	W0731 18:09:50.454525   74203 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 18:09:50.454533   74203 logs.go:123] Gathering logs for kubelet ...
	I0731 18:09:50.454544   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 18:09:50.505508   74203 logs.go:123] Gathering logs for dmesg ...
	I0731 18:09:50.505545   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 18:09:50.518504   74203 logs.go:123] Gathering logs for describe nodes ...
	I0731 18:09:50.518529   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 18:09:50.587950   74203 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 18:09:50.587974   74203 logs.go:123] Gathering logs for CRI-O ...
	I0731 18:09:50.587989   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 18:09:50.669268   74203 logs.go:123] Gathering logs for container status ...
	I0731 18:09:50.669302   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 18:09:53.209229   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:09:53.222114   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 18:09:53.222175   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 18:09:53.255330   74203 cri.go:89] found id: ""
	I0731 18:09:53.255356   74203 logs.go:276] 0 containers: []
	W0731 18:09:53.255365   74203 logs.go:278] No container was found matching "kube-apiserver"
	I0731 18:09:53.255371   74203 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 18:09:53.255432   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 18:09:53.290354   74203 cri.go:89] found id: ""
	I0731 18:09:53.290375   74203 logs.go:276] 0 containers: []
	W0731 18:09:53.290382   74203 logs.go:278] No container was found matching "etcd"
	I0731 18:09:53.290387   74203 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 18:09:53.290438   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 18:09:53.323621   74203 cri.go:89] found id: ""
	I0731 18:09:53.323645   74203 logs.go:276] 0 containers: []
	W0731 18:09:53.323653   74203 logs.go:278] No container was found matching "coredns"
	I0731 18:09:53.323658   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 18:09:53.323718   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 18:09:53.355850   74203 cri.go:89] found id: ""
	I0731 18:09:53.355877   74203 logs.go:276] 0 containers: []
	W0731 18:09:53.355887   74203 logs.go:278] No container was found matching "kube-scheduler"
	I0731 18:09:53.355894   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 18:09:53.355957   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 18:09:53.388686   74203 cri.go:89] found id: ""
	I0731 18:09:53.388716   74203 logs.go:276] 0 containers: []
	W0731 18:09:53.388726   74203 logs.go:278] No container was found matching "kube-proxy"
	I0731 18:09:53.388733   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 18:09:53.388785   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 18:09:53.426924   74203 cri.go:89] found id: ""
	I0731 18:09:53.426952   74203 logs.go:276] 0 containers: []
	W0731 18:09:53.426961   74203 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 18:09:53.426967   74203 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 18:09:53.427019   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 18:09:53.462041   74203 cri.go:89] found id: ""
	I0731 18:09:53.462067   74203 logs.go:276] 0 containers: []
	W0731 18:09:53.462078   74203 logs.go:278] No container was found matching "kindnet"
	I0731 18:09:53.462084   74203 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 18:09:53.462145   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 18:09:53.493810   74203 cri.go:89] found id: ""
	I0731 18:09:53.493833   74203 logs.go:276] 0 containers: []
	W0731 18:09:53.493842   74203 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 18:09:53.493852   74203 logs.go:123] Gathering logs for container status ...
	I0731 18:09:53.493867   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 18:09:53.530019   74203 logs.go:123] Gathering logs for kubelet ...
	I0731 18:09:53.530053   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 18:09:53.580749   74203 logs.go:123] Gathering logs for dmesg ...
	I0731 18:09:53.580782   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 18:09:53.594457   74203 logs.go:123] Gathering logs for describe nodes ...
	I0731 18:09:53.594482   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 18:09:53.662096   74203 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 18:09:53.662116   74203 logs.go:123] Gathering logs for CRI-O ...
	I0731 18:09:53.662134   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 18:09:51.778292   73696 pod_ready.go:102] pod "metrics-server-569cc877fc-64hp4" in "kube-system" namespace has status "Ready":"False"
	I0731 18:09:53.779043   73696 pod_ready.go:102] pod "metrics-server-569cc877fc-64hp4" in "kube-system" namespace has status "Ready":"False"
	I0731 18:09:53.643914   73479 pod_ready.go:102] pod "metrics-server-78fcd8795b-27pkr" in "kube-system" namespace has status "Ready":"False"
	I0731 18:09:56.142699   73479 pod_ready.go:102] pod "metrics-server-78fcd8795b-27pkr" in "kube-system" namespace has status "Ready":"False"
	I0731 18:09:55.772106   73800 pod_ready.go:102] pod "metrics-server-569cc877fc-fzxrw" in "kube-system" namespace has status "Ready":"False"
	I0731 18:09:58.271063   73800 pod_ready.go:102] pod "metrics-server-569cc877fc-fzxrw" in "kube-system" namespace has status "Ready":"False"
	I0731 18:09:56.238479   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:09:56.251272   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 18:09:56.251350   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 18:09:56.287380   74203 cri.go:89] found id: ""
	I0731 18:09:56.287406   74203 logs.go:276] 0 containers: []
	W0731 18:09:56.287414   74203 logs.go:278] No container was found matching "kube-apiserver"
	I0731 18:09:56.287419   74203 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 18:09:56.287471   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 18:09:56.322490   74203 cri.go:89] found id: ""
	I0731 18:09:56.322512   74203 logs.go:276] 0 containers: []
	W0731 18:09:56.322520   74203 logs.go:278] No container was found matching "etcd"
	I0731 18:09:56.322526   74203 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 18:09:56.322572   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 18:09:56.355845   74203 cri.go:89] found id: ""
	I0731 18:09:56.355874   74203 logs.go:276] 0 containers: []
	W0731 18:09:56.355885   74203 logs.go:278] No container was found matching "coredns"
	I0731 18:09:56.355895   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 18:09:56.355958   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 18:09:56.388304   74203 cri.go:89] found id: ""
	I0731 18:09:56.388330   74203 logs.go:276] 0 containers: []
	W0731 18:09:56.388340   74203 logs.go:278] No container was found matching "kube-scheduler"
	I0731 18:09:56.388348   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 18:09:56.388411   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 18:09:56.420837   74203 cri.go:89] found id: ""
	I0731 18:09:56.420867   74203 logs.go:276] 0 containers: []
	W0731 18:09:56.420877   74203 logs.go:278] No container was found matching "kube-proxy"
	I0731 18:09:56.420884   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 18:09:56.420950   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 18:09:56.453095   74203 cri.go:89] found id: ""
	I0731 18:09:56.453135   74203 logs.go:276] 0 containers: []
	W0731 18:09:56.453146   74203 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 18:09:56.453155   74203 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 18:09:56.453214   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 18:09:56.484245   74203 cri.go:89] found id: ""
	I0731 18:09:56.484272   74203 logs.go:276] 0 containers: []
	W0731 18:09:56.484282   74203 logs.go:278] No container was found matching "kindnet"
	I0731 18:09:56.484296   74203 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 18:09:56.484366   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 18:09:56.519473   74203 cri.go:89] found id: ""
	I0731 18:09:56.519501   74203 logs.go:276] 0 containers: []
	W0731 18:09:56.519508   74203 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 18:09:56.519516   74203 logs.go:123] Gathering logs for dmesg ...
	I0731 18:09:56.519530   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 18:09:56.532178   74203 logs.go:123] Gathering logs for describe nodes ...
	I0731 18:09:56.532203   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 18:09:56.600092   74203 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 18:09:56.600122   74203 logs.go:123] Gathering logs for CRI-O ...
	I0731 18:09:56.600137   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 18:09:56.679176   74203 logs.go:123] Gathering logs for container status ...
	I0731 18:09:56.679208   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 18:09:56.715464   74203 logs.go:123] Gathering logs for kubelet ...
	I0731 18:09:56.715499   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 18:09:59.267214   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:09:59.280666   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 18:09:59.280740   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 18:09:59.312898   74203 cri.go:89] found id: ""
	I0731 18:09:59.312928   74203 logs.go:276] 0 containers: []
	W0731 18:09:59.312940   74203 logs.go:278] No container was found matching "kube-apiserver"
	I0731 18:09:59.312947   74203 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 18:09:59.313013   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 18:09:59.347881   74203 cri.go:89] found id: ""
	I0731 18:09:59.347907   74203 logs.go:276] 0 containers: []
	W0731 18:09:59.347915   74203 logs.go:278] No container was found matching "etcd"
	I0731 18:09:59.347919   74203 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 18:09:59.347978   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 18:09:59.382566   74203 cri.go:89] found id: ""
	I0731 18:09:59.382603   74203 logs.go:276] 0 containers: []
	W0731 18:09:59.382615   74203 logs.go:278] No container was found matching "coredns"
	I0731 18:09:59.382629   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 18:09:59.382691   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 18:09:59.417123   74203 cri.go:89] found id: ""
	I0731 18:09:59.417148   74203 logs.go:276] 0 containers: []
	W0731 18:09:59.417157   74203 logs.go:278] No container was found matching "kube-scheduler"
	I0731 18:09:59.417163   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 18:09:59.417220   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 18:09:59.452674   74203 cri.go:89] found id: ""
	I0731 18:09:59.452699   74203 logs.go:276] 0 containers: []
	W0731 18:09:59.452709   74203 logs.go:278] No container was found matching "kube-proxy"
	I0731 18:09:59.452715   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 18:09:59.452775   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 18:09:59.488879   74203 cri.go:89] found id: ""
	I0731 18:09:59.488905   74203 logs.go:276] 0 containers: []
	W0731 18:09:59.488913   74203 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 18:09:59.488921   74203 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 18:09:59.488981   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 18:09:59.521773   74203 cri.go:89] found id: ""
	I0731 18:09:59.521801   74203 logs.go:276] 0 containers: []
	W0731 18:09:59.521809   74203 logs.go:278] No container was found matching "kindnet"
	I0731 18:09:59.521816   74203 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 18:09:59.521876   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 18:09:59.566619   74203 cri.go:89] found id: ""
	I0731 18:09:59.566649   74203 logs.go:276] 0 containers: []
	W0731 18:09:59.566659   74203 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 18:09:59.566670   74203 logs.go:123] Gathering logs for describe nodes ...
	I0731 18:09:59.566687   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 18:09:59.638301   74203 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 18:09:59.638351   74203 logs.go:123] Gathering logs for CRI-O ...
	I0731 18:09:59.638367   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 18:09:59.721561   74203 logs.go:123] Gathering logs for container status ...
	I0731 18:09:59.721597   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 18:09:59.759371   74203 logs.go:123] Gathering logs for kubelet ...
	I0731 18:09:59.759402   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 18:09:59.811223   74203 logs.go:123] Gathering logs for dmesg ...
	I0731 18:09:59.811255   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 18:09:56.280351   73696 pod_ready.go:102] pod "metrics-server-569cc877fc-64hp4" in "kube-system" namespace has status "Ready":"False"
	I0731 18:09:58.777896   73696 pod_ready.go:102] pod "metrics-server-569cc877fc-64hp4" in "kube-system" namespace has status "Ready":"False"
	I0731 18:10:00.779028   73696 pod_ready.go:102] pod "metrics-server-569cc877fc-64hp4" in "kube-system" namespace has status "Ready":"False"
	I0731 18:09:58.144006   73479 pod_ready.go:102] pod "metrics-server-78fcd8795b-27pkr" in "kube-system" namespace has status "Ready":"False"
	I0731 18:10:00.643536   73479 pod_ready.go:102] pod "metrics-server-78fcd8795b-27pkr" in "kube-system" namespace has status "Ready":"False"
	I0731 18:10:00.772456   73800 pod_ready.go:102] pod "metrics-server-569cc877fc-fzxrw" in "kube-system" namespace has status "Ready":"False"
	I0731 18:10:03.270710   73800 pod_ready.go:102] pod "metrics-server-569cc877fc-fzxrw" in "kube-system" namespace has status "Ready":"False"
	I0731 18:10:02.325339   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:10:02.337908   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 18:10:02.337963   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 18:10:02.369343   74203 cri.go:89] found id: ""
	I0731 18:10:02.369369   74203 logs.go:276] 0 containers: []
	W0731 18:10:02.369378   74203 logs.go:278] No container was found matching "kube-apiserver"
	I0731 18:10:02.369384   74203 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 18:10:02.369442   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 18:10:02.406207   74203 cri.go:89] found id: ""
	I0731 18:10:02.406234   74203 logs.go:276] 0 containers: []
	W0731 18:10:02.406242   74203 logs.go:278] No container was found matching "etcd"
	I0731 18:10:02.406247   74203 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 18:10:02.406297   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 18:10:02.442001   74203 cri.go:89] found id: ""
	I0731 18:10:02.442031   74203 logs.go:276] 0 containers: []
	W0731 18:10:02.442041   74203 logs.go:278] No container was found matching "coredns"
	I0731 18:10:02.442049   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 18:10:02.442109   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 18:10:02.478407   74203 cri.go:89] found id: ""
	I0731 18:10:02.478431   74203 logs.go:276] 0 containers: []
	W0731 18:10:02.478439   74203 logs.go:278] No container was found matching "kube-scheduler"
	I0731 18:10:02.478444   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 18:10:02.478491   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 18:10:02.513832   74203 cri.go:89] found id: ""
	I0731 18:10:02.513875   74203 logs.go:276] 0 containers: []
	W0731 18:10:02.513888   74203 logs.go:278] No container was found matching "kube-proxy"
	I0731 18:10:02.513896   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 18:10:02.513962   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 18:10:02.550830   74203 cri.go:89] found id: ""
	I0731 18:10:02.550856   74203 logs.go:276] 0 containers: []
	W0731 18:10:02.550867   74203 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 18:10:02.550874   74203 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 18:10:02.550937   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 18:10:02.584649   74203 cri.go:89] found id: ""
	I0731 18:10:02.584676   74203 logs.go:276] 0 containers: []
	W0731 18:10:02.584684   74203 logs.go:278] No container was found matching "kindnet"
	I0731 18:10:02.584691   74203 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 18:10:02.584752   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 18:10:02.617436   74203 cri.go:89] found id: ""
	I0731 18:10:02.617464   74203 logs.go:276] 0 containers: []
	W0731 18:10:02.617475   74203 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 18:10:02.617485   74203 logs.go:123] Gathering logs for kubelet ...
	I0731 18:10:02.617500   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 18:10:02.671571   74203 logs.go:123] Gathering logs for dmesg ...
	I0731 18:10:02.671609   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 18:10:02.686657   74203 logs.go:123] Gathering logs for describe nodes ...
	I0731 18:10:02.686694   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 18:10:02.755974   74203 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 18:10:02.756008   74203 logs.go:123] Gathering logs for CRI-O ...
	I0731 18:10:02.756025   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 18:10:02.837976   74203 logs.go:123] Gathering logs for container status ...
	I0731 18:10:02.838012   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 18:10:02.779666   73696 pod_ready.go:102] pod "metrics-server-569cc877fc-64hp4" in "kube-system" namespace has status "Ready":"False"
	I0731 18:10:04.779994   73696 pod_ready.go:102] pod "metrics-server-569cc877fc-64hp4" in "kube-system" namespace has status "Ready":"False"
	I0731 18:10:02.644075   73479 pod_ready.go:102] pod "metrics-server-78fcd8795b-27pkr" in "kube-system" namespace has status "Ready":"False"
	I0731 18:10:05.142859   73479 pod_ready.go:102] pod "metrics-server-78fcd8795b-27pkr" in "kube-system" namespace has status "Ready":"False"
	I0731 18:10:05.272500   73800 pod_ready.go:102] pod "metrics-server-569cc877fc-fzxrw" in "kube-system" namespace has status "Ready":"False"
	I0731 18:10:07.771599   73800 pod_ready.go:102] pod "metrics-server-569cc877fc-fzxrw" in "kube-system" namespace has status "Ready":"False"
	I0731 18:10:05.375212   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:10:05.388635   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 18:10:05.388703   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 18:10:05.427583   74203 cri.go:89] found id: ""
	I0731 18:10:05.427610   74203 logs.go:276] 0 containers: []
	W0731 18:10:05.427617   74203 logs.go:278] No container was found matching "kube-apiserver"
	I0731 18:10:05.427622   74203 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 18:10:05.427673   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 18:10:05.462550   74203 cri.go:89] found id: ""
	I0731 18:10:05.462575   74203 logs.go:276] 0 containers: []
	W0731 18:10:05.462583   74203 logs.go:278] No container was found matching "etcd"
	I0731 18:10:05.462589   74203 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 18:10:05.462645   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 18:10:05.501768   74203 cri.go:89] found id: ""
	I0731 18:10:05.501790   74203 logs.go:276] 0 containers: []
	W0731 18:10:05.501797   74203 logs.go:278] No container was found matching "coredns"
	I0731 18:10:05.501802   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 18:10:05.501860   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 18:10:05.539692   74203 cri.go:89] found id: ""
	I0731 18:10:05.539719   74203 logs.go:276] 0 containers: []
	W0731 18:10:05.539731   74203 logs.go:278] No container was found matching "kube-scheduler"
	I0731 18:10:05.539737   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 18:10:05.539798   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 18:10:05.573844   74203 cri.go:89] found id: ""
	I0731 18:10:05.573872   74203 logs.go:276] 0 containers: []
	W0731 18:10:05.573884   74203 logs.go:278] No container was found matching "kube-proxy"
	I0731 18:10:05.573891   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 18:10:05.573953   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 18:10:05.607827   74203 cri.go:89] found id: ""
	I0731 18:10:05.607848   74203 logs.go:276] 0 containers: []
	W0731 18:10:05.607858   74203 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 18:10:05.607863   74203 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 18:10:05.607913   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 18:10:05.639644   74203 cri.go:89] found id: ""
	I0731 18:10:05.639673   74203 logs.go:276] 0 containers: []
	W0731 18:10:05.639684   74203 logs.go:278] No container was found matching "kindnet"
	I0731 18:10:05.639691   74203 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 18:10:05.639753   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 18:10:05.673164   74203 cri.go:89] found id: ""
	I0731 18:10:05.673188   74203 logs.go:276] 0 containers: []
	W0731 18:10:05.673195   74203 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 18:10:05.673203   74203 logs.go:123] Gathering logs for CRI-O ...
	I0731 18:10:05.673215   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 18:10:05.755189   74203 logs.go:123] Gathering logs for container status ...
	I0731 18:10:05.755221   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 18:10:05.793686   74203 logs.go:123] Gathering logs for kubelet ...
	I0731 18:10:05.793715   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 18:10:05.844930   74203 logs.go:123] Gathering logs for dmesg ...
	I0731 18:10:05.844965   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 18:10:05.859150   74203 logs.go:123] Gathering logs for describe nodes ...
	I0731 18:10:05.859176   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 18:10:05.929945   74203 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
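	Every "describe nodes" attempt in these passes fails the same way: the bundled kubectl targets localhost:8443 and the connection is refused, which matches the crictl output above showing no kube-apiserver container. A quick way to confirm that state from inside the node, as a sketch (availability of ss/netstat on the node image is an assumption):
	
	  sudo pgrep -xnf 'kube-apiserver.*minikube.*' || echo "no kube-apiserver process"
	  sudo ss -ltnp 'sport = :8443' 2>/dev/null || sudo netstat -ltnp | grep 8443   # anything listening on 8443?
	  sudo /var/lib/minikube/binaries/v1.20.0/kubectl get nodes --kubeconfig=/var/lib/minikube/kubeconfig
	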
	I0731 18:10:08.430669   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:10:08.444918   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 18:10:08.444989   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 18:10:08.482598   74203 cri.go:89] found id: ""
	I0731 18:10:08.482625   74203 logs.go:276] 0 containers: []
	W0731 18:10:08.482635   74203 logs.go:278] No container was found matching "kube-apiserver"
	I0731 18:10:08.482642   74203 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 18:10:08.482708   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 18:10:08.519687   74203 cri.go:89] found id: ""
	I0731 18:10:08.519717   74203 logs.go:276] 0 containers: []
	W0731 18:10:08.519726   74203 logs.go:278] No container was found matching "etcd"
	I0731 18:10:08.519734   74203 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 18:10:08.519795   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 18:10:08.551600   74203 cri.go:89] found id: ""
	I0731 18:10:08.551638   74203 logs.go:276] 0 containers: []
	W0731 18:10:08.551649   74203 logs.go:278] No container was found matching "coredns"
	I0731 18:10:08.551657   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 18:10:08.551713   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 18:10:08.585233   74203 cri.go:89] found id: ""
	I0731 18:10:08.585263   74203 logs.go:276] 0 containers: []
	W0731 18:10:08.585274   74203 logs.go:278] No container was found matching "kube-scheduler"
	I0731 18:10:08.585282   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 18:10:08.585343   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 18:10:08.622464   74203 cri.go:89] found id: ""
	I0731 18:10:08.622492   74203 logs.go:276] 0 containers: []
	W0731 18:10:08.622502   74203 logs.go:278] No container was found matching "kube-proxy"
	I0731 18:10:08.622510   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 18:10:08.622569   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 18:10:08.658360   74203 cri.go:89] found id: ""
	I0731 18:10:08.658390   74203 logs.go:276] 0 containers: []
	W0731 18:10:08.658402   74203 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 18:10:08.658410   74203 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 18:10:08.658471   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 18:10:08.692076   74203 cri.go:89] found id: ""
	I0731 18:10:08.692100   74203 logs.go:276] 0 containers: []
	W0731 18:10:08.692109   74203 logs.go:278] No container was found matching "kindnet"
	I0731 18:10:08.692116   74203 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 18:10:08.692179   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 18:10:08.729584   74203 cri.go:89] found id: ""
	I0731 18:10:08.729612   74203 logs.go:276] 0 containers: []
	W0731 18:10:08.729622   74203 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 18:10:08.729633   74203 logs.go:123] Gathering logs for describe nodes ...
	I0731 18:10:08.729647   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 18:10:08.806395   74203 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 18:10:08.806457   74203 logs.go:123] Gathering logs for CRI-O ...
	I0731 18:10:08.806485   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 18:10:08.884008   74203 logs.go:123] Gathering logs for container status ...
	I0731 18:10:08.884046   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 18:10:08.924359   74203 logs.go:123] Gathering logs for kubelet ...
	I0731 18:10:08.924398   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 18:10:08.978161   74203 logs.go:123] Gathering logs for dmesg ...
	I0731 18:10:08.978195   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 18:10:07.279327   73696 pod_ready.go:102] pod "metrics-server-569cc877fc-64hp4" in "kube-system" namespace has status "Ready":"False"
	I0731 18:10:09.281214   73696 pod_ready.go:102] pod "metrics-server-569cc877fc-64hp4" in "kube-system" namespace has status "Ready":"False"
	I0731 18:10:07.143145   73479 pod_ready.go:102] pod "metrics-server-78fcd8795b-27pkr" in "kube-system" namespace has status "Ready":"False"
	I0731 18:10:09.143995   73479 pod_ready.go:102] pod "metrics-server-78fcd8795b-27pkr" in "kube-system" namespace has status "Ready":"False"
	I0731 18:10:11.643254   73479 pod_ready.go:102] pod "metrics-server-78fcd8795b-27pkr" in "kube-system" namespace has status "Ready":"False"
	I0731 18:10:09.773024   73800 pod_ready.go:102] pod "metrics-server-569cc877fc-fzxrw" in "kube-system" namespace has status "Ready":"False"
	I0731 18:10:12.272862   73800 pod_ready.go:102] pod "metrics-server-569cc877fc-fzxrw" in "kube-system" namespace has status "Ready":"False"
	I0731 18:10:14.273615   73800 pod_ready.go:102] pod "metrics-server-569cc877fc-fzxrw" in "kube-system" namespace has status "Ready":"False"
	I0731 18:10:11.491784   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:10:11.504711   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 18:10:11.504784   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 18:10:11.541314   74203 cri.go:89] found id: ""
	I0731 18:10:11.541353   74203 logs.go:276] 0 containers: []
	W0731 18:10:11.541361   74203 logs.go:278] No container was found matching "kube-apiserver"
	I0731 18:10:11.541366   74203 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 18:10:11.541424   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 18:10:11.576481   74203 cri.go:89] found id: ""
	I0731 18:10:11.576509   74203 logs.go:276] 0 containers: []
	W0731 18:10:11.576527   74203 logs.go:278] No container was found matching "etcd"
	I0731 18:10:11.576535   74203 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 18:10:11.576597   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 18:10:11.610370   74203 cri.go:89] found id: ""
	I0731 18:10:11.610395   74203 logs.go:276] 0 containers: []
	W0731 18:10:11.610404   74203 logs.go:278] No container was found matching "coredns"
	I0731 18:10:11.610412   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 18:10:11.610470   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 18:10:11.645559   74203 cri.go:89] found id: ""
	I0731 18:10:11.645586   74203 logs.go:276] 0 containers: []
	W0731 18:10:11.645593   74203 logs.go:278] No container was found matching "kube-scheduler"
	I0731 18:10:11.645598   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 18:10:11.645654   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 18:10:11.677576   74203 cri.go:89] found id: ""
	I0731 18:10:11.677613   74203 logs.go:276] 0 containers: []
	W0731 18:10:11.677624   74203 logs.go:278] No container was found matching "kube-proxy"
	I0731 18:10:11.677631   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 18:10:11.677681   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 18:10:11.710173   74203 cri.go:89] found id: ""
	I0731 18:10:11.710199   74203 logs.go:276] 0 containers: []
	W0731 18:10:11.710208   74203 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 18:10:11.710215   74203 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 18:10:11.710273   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 18:10:11.743722   74203 cri.go:89] found id: ""
	I0731 18:10:11.743752   74203 logs.go:276] 0 containers: []
	W0731 18:10:11.743763   74203 logs.go:278] No container was found matching "kindnet"
	I0731 18:10:11.743782   74203 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 18:10:11.743857   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 18:10:11.776730   74203 cri.go:89] found id: ""
	I0731 18:10:11.776752   74203 logs.go:276] 0 containers: []
	W0731 18:10:11.776759   74203 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 18:10:11.776766   74203 logs.go:123] Gathering logs for describe nodes ...
	I0731 18:10:11.776777   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 18:10:11.846385   74203 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 18:10:11.846404   74203 logs.go:123] Gathering logs for CRI-O ...
	I0731 18:10:11.846415   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 18:10:11.923748   74203 logs.go:123] Gathering logs for container status ...
	I0731 18:10:11.923779   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 18:10:11.959700   74203 logs.go:123] Gathering logs for kubelet ...
	I0731 18:10:11.959734   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 18:10:12.009971   74203 logs.go:123] Gathering logs for dmesg ...
	I0731 18:10:12.010002   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 18:10:14.524097   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:10:14.537349   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 18:10:14.537449   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 18:10:14.569907   74203 cri.go:89] found id: ""
	I0731 18:10:14.569934   74203 logs.go:276] 0 containers: []
	W0731 18:10:14.569941   74203 logs.go:278] No container was found matching "kube-apiserver"
	I0731 18:10:14.569947   74203 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 18:10:14.569999   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 18:10:14.605058   74203 cri.go:89] found id: ""
	I0731 18:10:14.605085   74203 logs.go:276] 0 containers: []
	W0731 18:10:14.605095   74203 logs.go:278] No container was found matching "etcd"
	I0731 18:10:14.605102   74203 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 18:10:14.605155   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 18:10:14.640941   74203 cri.go:89] found id: ""
	I0731 18:10:14.640964   74203 logs.go:276] 0 containers: []
	W0731 18:10:14.640975   74203 logs.go:278] No container was found matching "coredns"
	I0731 18:10:14.640982   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 18:10:14.641039   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 18:10:14.678774   74203 cri.go:89] found id: ""
	I0731 18:10:14.678803   74203 logs.go:276] 0 containers: []
	W0731 18:10:14.678814   74203 logs.go:278] No container was found matching "kube-scheduler"
	I0731 18:10:14.678822   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 18:10:14.678880   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 18:10:14.714123   74203 cri.go:89] found id: ""
	I0731 18:10:14.714152   74203 logs.go:276] 0 containers: []
	W0731 18:10:14.714163   74203 logs.go:278] No container was found matching "kube-proxy"
	I0731 18:10:14.714171   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 18:10:14.714230   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 18:10:14.750212   74203 cri.go:89] found id: ""
	I0731 18:10:14.750243   74203 logs.go:276] 0 containers: []
	W0731 18:10:14.750255   74203 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 18:10:14.750262   74203 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 18:10:14.750322   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 18:10:14.786820   74203 cri.go:89] found id: ""
	I0731 18:10:14.786842   74203 logs.go:276] 0 containers: []
	W0731 18:10:14.786850   74203 logs.go:278] No container was found matching "kindnet"
	I0731 18:10:14.786856   74203 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 18:10:14.786904   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 18:10:14.819667   74203 cri.go:89] found id: ""
	I0731 18:10:14.819689   74203 logs.go:276] 0 containers: []
	W0731 18:10:14.819697   74203 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 18:10:14.819705   74203 logs.go:123] Gathering logs for dmesg ...
	I0731 18:10:14.819725   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 18:10:14.832525   74203 logs.go:123] Gathering logs for describe nodes ...
	I0731 18:10:14.832550   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 18:10:14.901190   74203 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 18:10:14.901216   74203 logs.go:123] Gathering logs for CRI-O ...
	I0731 18:10:14.901229   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 18:10:14.977123   74203 logs.go:123] Gathering logs for container status ...
	I0731 18:10:14.977158   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 18:10:15.014882   74203 logs.go:123] Gathering logs for kubelet ...
	I0731 18:10:15.014912   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 18:10:11.779007   73696 pod_ready.go:102] pod "metrics-server-569cc877fc-64hp4" in "kube-system" namespace has status "Ready":"False"
	I0731 18:10:14.279638   73696 pod_ready.go:102] pod "metrics-server-569cc877fc-64hp4" in "kube-system" namespace has status "Ready":"False"
	I0731 18:10:14.142303   73479 pod_ready.go:102] pod "metrics-server-78fcd8795b-27pkr" in "kube-system" namespace has status "Ready":"False"
	I0731 18:10:16.143713   73479 pod_ready.go:102] pod "metrics-server-78fcd8795b-27pkr" in "kube-system" namespace has status "Ready":"False"
	I0731 18:10:16.770910   73800 pod_ready.go:102] pod "metrics-server-569cc877fc-fzxrw" in "kube-system" namespace has status "Ready":"False"
	I0731 18:10:18.771058   73800 pod_ready.go:102] pod "metrics-server-569cc877fc-fzxrw" in "kube-system" namespace has status "Ready":"False"
	I0731 18:10:17.564989   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:10:17.578676   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 18:10:17.578740   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 18:10:17.610077   74203 cri.go:89] found id: ""
	I0731 18:10:17.610103   74203 logs.go:276] 0 containers: []
	W0731 18:10:17.610112   74203 logs.go:278] No container was found matching "kube-apiserver"
	I0731 18:10:17.610117   74203 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 18:10:17.610169   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 18:10:17.643143   74203 cri.go:89] found id: ""
	I0731 18:10:17.643166   74203 logs.go:276] 0 containers: []
	W0731 18:10:17.643173   74203 logs.go:278] No container was found matching "etcd"
	I0731 18:10:17.643179   74203 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 18:10:17.643225   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 18:10:17.677979   74203 cri.go:89] found id: ""
	I0731 18:10:17.678002   74203 logs.go:276] 0 containers: []
	W0731 18:10:17.678010   74203 logs.go:278] No container was found matching "coredns"
	I0731 18:10:17.678016   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 18:10:17.678086   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 18:10:17.711905   74203 cri.go:89] found id: ""
	I0731 18:10:17.711941   74203 logs.go:276] 0 containers: []
	W0731 18:10:17.711953   74203 logs.go:278] No container was found matching "kube-scheduler"
	I0731 18:10:17.711960   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 18:10:17.712013   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 18:10:17.745842   74203 cri.go:89] found id: ""
	I0731 18:10:17.745870   74203 logs.go:276] 0 containers: []
	W0731 18:10:17.745881   74203 logs.go:278] No container was found matching "kube-proxy"
	I0731 18:10:17.745889   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 18:10:17.745949   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 18:10:17.778170   74203 cri.go:89] found id: ""
	I0731 18:10:17.778242   74203 logs.go:276] 0 containers: []
	W0731 18:10:17.778260   74203 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 18:10:17.778272   74203 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 18:10:17.778340   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 18:10:17.810717   74203 cri.go:89] found id: ""
	I0731 18:10:17.810744   74203 logs.go:276] 0 containers: []
	W0731 18:10:17.810755   74203 logs.go:278] No container was found matching "kindnet"
	I0731 18:10:17.810762   74203 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 18:10:17.810823   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 18:10:17.843237   74203 cri.go:89] found id: ""
	I0731 18:10:17.843268   74203 logs.go:276] 0 containers: []
	W0731 18:10:17.843278   74203 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 18:10:17.843288   74203 logs.go:123] Gathering logs for kubelet ...
	I0731 18:10:17.843303   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 18:10:17.894338   74203 logs.go:123] Gathering logs for dmesg ...
	I0731 18:10:17.894376   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 18:10:17.907898   74203 logs.go:123] Gathering logs for describe nodes ...
	I0731 18:10:17.907927   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 18:10:17.977115   74203 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 18:10:17.977133   74203 logs.go:123] Gathering logs for CRI-O ...
	I0731 18:10:17.977145   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 18:10:18.059924   74203 logs.go:123] Gathering logs for container status ...
	I0731 18:10:18.059968   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 18:10:16.279697   73696 pod_ready.go:102] pod "metrics-server-569cc877fc-64hp4" in "kube-system" namespace has status "Ready":"False"
	I0731 18:10:18.780698   73696 pod_ready.go:102] pod "metrics-server-569cc877fc-64hp4" in "kube-system" namespace has status "Ready":"False"
	I0731 18:10:18.144063   73479 pod_ready.go:102] pod "metrics-server-78fcd8795b-27pkr" in "kube-system" namespace has status "Ready":"False"
	I0731 18:10:20.643891   73479 pod_ready.go:102] pod "metrics-server-78fcd8795b-27pkr" in "kube-system" namespace has status "Ready":"False"
	I0731 18:10:20.772956   73800 pod_ready.go:102] pod "metrics-server-569cc877fc-fzxrw" in "kube-system" namespace has status "Ready":"False"
	I0731 18:10:23.270974   73800 pod_ready.go:102] pod "metrics-server-569cc877fc-fzxrw" in "kube-system" namespace has status "Ready":"False"
	I0731 18:10:20.600903   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:10:20.613609   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 18:10:20.613680   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 18:10:20.646352   74203 cri.go:89] found id: ""
	I0731 18:10:20.646379   74203 logs.go:276] 0 containers: []
	W0731 18:10:20.646388   74203 logs.go:278] No container was found matching "kube-apiserver"
	I0731 18:10:20.646395   74203 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 18:10:20.646453   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 18:10:20.680448   74203 cri.go:89] found id: ""
	I0731 18:10:20.680475   74203 logs.go:276] 0 containers: []
	W0731 18:10:20.680486   74203 logs.go:278] No container was found matching "etcd"
	I0731 18:10:20.680493   74203 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 18:10:20.680555   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 18:10:20.716330   74203 cri.go:89] found id: ""
	I0731 18:10:20.716365   74203 logs.go:276] 0 containers: []
	W0731 18:10:20.716378   74203 logs.go:278] No container was found matching "coredns"
	I0731 18:10:20.716387   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 18:10:20.716448   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 18:10:20.748630   74203 cri.go:89] found id: ""
	I0731 18:10:20.748657   74203 logs.go:276] 0 containers: []
	W0731 18:10:20.748665   74203 logs.go:278] No container was found matching "kube-scheduler"
	I0731 18:10:20.748670   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 18:10:20.748736   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 18:10:20.787769   74203 cri.go:89] found id: ""
	I0731 18:10:20.787793   74203 logs.go:276] 0 containers: []
	W0731 18:10:20.787802   74203 logs.go:278] No container was found matching "kube-proxy"
	I0731 18:10:20.787809   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 18:10:20.787869   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 18:10:20.819884   74203 cri.go:89] found id: ""
	I0731 18:10:20.819911   74203 logs.go:276] 0 containers: []
	W0731 18:10:20.819921   74203 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 18:10:20.819929   74203 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 18:10:20.819988   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 18:10:20.853414   74203 cri.go:89] found id: ""
	I0731 18:10:20.853437   74203 logs.go:276] 0 containers: []
	W0731 18:10:20.853445   74203 logs.go:278] No container was found matching "kindnet"
	I0731 18:10:20.853450   74203 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 18:10:20.853508   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 18:10:20.889198   74203 cri.go:89] found id: ""
	I0731 18:10:20.889224   74203 logs.go:276] 0 containers: []
	W0731 18:10:20.889231   74203 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 18:10:20.889239   74203 logs.go:123] Gathering logs for dmesg ...
	I0731 18:10:20.889251   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 18:10:20.903240   74203 logs.go:123] Gathering logs for describe nodes ...
	I0731 18:10:20.903268   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 18:10:20.971003   74203 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 18:10:20.971032   74203 logs.go:123] Gathering logs for CRI-O ...
	I0731 18:10:20.971051   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 18:10:21.045856   74203 logs.go:123] Gathering logs for container status ...
	I0731 18:10:21.045888   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 18:10:21.086089   74203 logs.go:123] Gathering logs for kubelet ...
	I0731 18:10:21.086121   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 18:10:23.639664   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:10:23.652573   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 18:10:23.652632   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 18:10:23.684719   74203 cri.go:89] found id: ""
	I0731 18:10:23.684746   74203 logs.go:276] 0 containers: []
	W0731 18:10:23.684757   74203 logs.go:278] No container was found matching "kube-apiserver"
	I0731 18:10:23.684765   74203 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 18:10:23.684820   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 18:10:23.717315   74203 cri.go:89] found id: ""
	I0731 18:10:23.717350   74203 logs.go:276] 0 containers: []
	W0731 18:10:23.717362   74203 logs.go:278] No container was found matching "etcd"
	I0731 18:10:23.717369   74203 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 18:10:23.717432   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 18:10:23.750251   74203 cri.go:89] found id: ""
	I0731 18:10:23.750275   74203 logs.go:276] 0 containers: []
	W0731 18:10:23.750286   74203 logs.go:278] No container was found matching "coredns"
	I0731 18:10:23.750293   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 18:10:23.750397   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 18:10:23.785700   74203 cri.go:89] found id: ""
	I0731 18:10:23.785726   74203 logs.go:276] 0 containers: []
	W0731 18:10:23.785737   74203 logs.go:278] No container was found matching "kube-scheduler"
	I0731 18:10:23.785745   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 18:10:23.785792   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 18:10:23.816856   74203 cri.go:89] found id: ""
	I0731 18:10:23.816885   74203 logs.go:276] 0 containers: []
	W0731 18:10:23.816895   74203 logs.go:278] No container was found matching "kube-proxy"
	I0731 18:10:23.816902   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 18:10:23.816965   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 18:10:23.849931   74203 cri.go:89] found id: ""
	I0731 18:10:23.849962   74203 logs.go:276] 0 containers: []
	W0731 18:10:23.849972   74203 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 18:10:23.849980   74203 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 18:10:23.850043   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 18:10:23.881413   74203 cri.go:89] found id: ""
	I0731 18:10:23.881444   74203 logs.go:276] 0 containers: []
	W0731 18:10:23.881452   74203 logs.go:278] No container was found matching "kindnet"
	I0731 18:10:23.881458   74203 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 18:10:23.881516   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 18:10:23.914272   74203 cri.go:89] found id: ""
	I0731 18:10:23.914303   74203 logs.go:276] 0 containers: []
	W0731 18:10:23.914313   74203 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 18:10:23.914325   74203 logs.go:123] Gathering logs for describe nodes ...
	I0731 18:10:23.914352   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 18:10:23.979988   74203 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 18:10:23.980015   74203 logs.go:123] Gathering logs for CRI-O ...
	I0731 18:10:23.980027   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 18:10:24.057159   74203 logs.go:123] Gathering logs for container status ...
	I0731 18:10:24.057198   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 18:10:24.097567   74203 logs.go:123] Gathering logs for kubelet ...
	I0731 18:10:24.097603   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 18:10:24.154740   74203 logs.go:123] Gathering logs for dmesg ...
	I0731 18:10:24.154781   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 18:10:21.279091   73696 pod_ready.go:102] pod "metrics-server-569cc877fc-64hp4" in "kube-system" namespace has status "Ready":"False"
	I0731 18:10:23.779103   73696 pod_ready.go:102] pod "metrics-server-569cc877fc-64hp4" in "kube-system" namespace has status "Ready":"False"
	I0731 18:10:25.779754   73696 pod_ready.go:102] pod "metrics-server-569cc877fc-64hp4" in "kube-system" namespace has status "Ready":"False"
	I0731 18:10:23.142423   73479 pod_ready.go:102] pod "metrics-server-78fcd8795b-27pkr" in "kube-system" namespace has status "Ready":"False"
	I0731 18:10:25.642901   73479 pod_ready.go:102] pod "metrics-server-78fcd8795b-27pkr" in "kube-system" namespace has status "Ready":"False"
	I0731 18:10:25.272277   73800 pod_ready.go:102] pod "metrics-server-569cc877fc-fzxrw" in "kube-system" namespace has status "Ready":"False"
	I0731 18:10:27.771221   73800 pod_ready.go:102] pod "metrics-server-569cc877fc-fzxrw" in "kube-system" namespace has status "Ready":"False"
	I0731 18:10:26.670324   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:10:26.683866   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 18:10:26.683951   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 18:10:26.717671   74203 cri.go:89] found id: ""
	I0731 18:10:26.717722   74203 logs.go:276] 0 containers: []
	W0731 18:10:26.717733   74203 logs.go:278] No container was found matching "kube-apiserver"
	I0731 18:10:26.717739   74203 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 18:10:26.717790   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 18:10:26.751201   74203 cri.go:89] found id: ""
	I0731 18:10:26.751228   74203 logs.go:276] 0 containers: []
	W0731 18:10:26.751236   74203 logs.go:278] No container was found matching "etcd"
	I0731 18:10:26.751246   74203 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 18:10:26.751315   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 18:10:26.784768   74203 cri.go:89] found id: ""
	I0731 18:10:26.784793   74203 logs.go:276] 0 containers: []
	W0731 18:10:26.784803   74203 logs.go:278] No container was found matching "coredns"
	I0731 18:10:26.784811   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 18:10:26.784868   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 18:10:26.822269   74203 cri.go:89] found id: ""
	I0731 18:10:26.822298   74203 logs.go:276] 0 containers: []
	W0731 18:10:26.822307   74203 logs.go:278] No container was found matching "kube-scheduler"
	I0731 18:10:26.822315   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 18:10:26.822378   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 18:10:26.854405   74203 cri.go:89] found id: ""
	I0731 18:10:26.854427   74203 logs.go:276] 0 containers: []
	W0731 18:10:26.854434   74203 logs.go:278] No container was found matching "kube-proxy"
	I0731 18:10:26.854441   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 18:10:26.854490   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 18:10:26.888975   74203 cri.go:89] found id: ""
	I0731 18:10:26.889000   74203 logs.go:276] 0 containers: []
	W0731 18:10:26.889007   74203 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 18:10:26.889013   74203 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 18:10:26.889085   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 18:10:26.922940   74203 cri.go:89] found id: ""
	I0731 18:10:26.922967   74203 logs.go:276] 0 containers: []
	W0731 18:10:26.922976   74203 logs.go:278] No container was found matching "kindnet"
	I0731 18:10:26.922981   74203 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 18:10:26.923040   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 18:10:26.955717   74203 cri.go:89] found id: ""
	I0731 18:10:26.955743   74203 logs.go:276] 0 containers: []
	W0731 18:10:26.955754   74203 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 18:10:26.955764   74203 logs.go:123] Gathering logs for kubelet ...
	I0731 18:10:26.955779   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 18:10:27.006453   74203 logs.go:123] Gathering logs for dmesg ...
	I0731 18:10:27.006481   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 18:10:27.019136   74203 logs.go:123] Gathering logs for describe nodes ...
	I0731 18:10:27.019159   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 18:10:27.086988   74203 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 18:10:27.087014   74203 logs.go:123] Gathering logs for CRI-O ...
	I0731 18:10:27.087031   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 18:10:27.161574   74203 logs.go:123] Gathering logs for container status ...
	I0731 18:10:27.161604   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 18:10:29.705620   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:10:29.718718   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 18:10:29.718775   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 18:10:29.751079   74203 cri.go:89] found id: ""
	I0731 18:10:29.751123   74203 logs.go:276] 0 containers: []
	W0731 18:10:29.751134   74203 logs.go:278] No container was found matching "kube-apiserver"
	I0731 18:10:29.751142   74203 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 18:10:29.751198   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 18:10:29.790944   74203 cri.go:89] found id: ""
	I0731 18:10:29.790971   74203 logs.go:276] 0 containers: []
	W0731 18:10:29.790982   74203 logs.go:278] No container was found matching "etcd"
	I0731 18:10:29.790988   74203 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 18:10:29.791041   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 18:10:29.827921   74203 cri.go:89] found id: ""
	I0731 18:10:29.827951   74203 logs.go:276] 0 containers: []
	W0731 18:10:29.827965   74203 logs.go:278] No container was found matching "coredns"
	I0731 18:10:29.827971   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 18:10:29.828031   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 18:10:29.861365   74203 cri.go:89] found id: ""
	I0731 18:10:29.861398   74203 logs.go:276] 0 containers: []
	W0731 18:10:29.861409   74203 logs.go:278] No container was found matching "kube-scheduler"
	I0731 18:10:29.861417   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 18:10:29.861472   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 18:10:29.894509   74203 cri.go:89] found id: ""
	I0731 18:10:29.894537   74203 logs.go:276] 0 containers: []
	W0731 18:10:29.894546   74203 logs.go:278] No container was found matching "kube-proxy"
	I0731 18:10:29.894552   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 18:10:29.894614   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 18:10:29.926793   74203 cri.go:89] found id: ""
	I0731 18:10:29.926821   74203 logs.go:276] 0 containers: []
	W0731 18:10:29.926832   74203 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 18:10:29.926839   74203 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 18:10:29.926904   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 18:10:29.963765   74203 cri.go:89] found id: ""
	I0731 18:10:29.963792   74203 logs.go:276] 0 containers: []
	W0731 18:10:29.963802   74203 logs.go:278] No container was found matching "kindnet"
	I0731 18:10:29.963809   74203 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 18:10:29.963870   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 18:10:29.998577   74203 cri.go:89] found id: ""
	I0731 18:10:29.998604   74203 logs.go:276] 0 containers: []
	W0731 18:10:29.998611   74203 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 18:10:29.998619   74203 logs.go:123] Gathering logs for kubelet ...
	I0731 18:10:29.998630   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 18:10:30.050035   74203 logs.go:123] Gathering logs for dmesg ...
	I0731 18:10:30.050072   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 18:10:30.064147   74203 logs.go:123] Gathering logs for describe nodes ...
	I0731 18:10:30.064178   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 18:10:30.136990   74203 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 18:10:30.137012   74203 logs.go:123] Gathering logs for CRI-O ...
	I0731 18:10:30.137030   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 18:10:30.214687   74203 logs.go:123] Gathering logs for container status ...
	I0731 18:10:30.214719   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 18:10:28.279257   73696 pod_ready.go:102] pod "metrics-server-569cc877fc-64hp4" in "kube-system" namespace has status "Ready":"False"
	I0731 18:10:30.778466   73696 pod_ready.go:102] pod "metrics-server-569cc877fc-64hp4" in "kube-system" namespace has status "Ready":"False"
	I0731 18:10:27.644082   73479 pod_ready.go:102] pod "metrics-server-78fcd8795b-27pkr" in "kube-system" namespace has status "Ready":"False"
	I0731 18:10:30.144191   73479 pod_ready.go:102] pod "metrics-server-78fcd8795b-27pkr" in "kube-system" namespace has status "Ready":"False"
	I0731 18:10:29.772316   73800 pod_ready.go:102] pod "metrics-server-569cc877fc-fzxrw" in "kube-system" namespace has status "Ready":"False"
	I0731 18:10:32.272651   73800 pod_ready.go:102] pod "metrics-server-569cc877fc-fzxrw" in "kube-system" namespace has status "Ready":"False"
	I0731 18:10:32.753503   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:10:32.766795   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 18:10:32.766873   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 18:10:32.812134   74203 cri.go:89] found id: ""
	I0731 18:10:32.812161   74203 logs.go:276] 0 containers: []
	W0731 18:10:32.812169   74203 logs.go:278] No container was found matching "kube-apiserver"
	I0731 18:10:32.812175   74203 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 18:10:32.812229   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 18:10:32.846997   74203 cri.go:89] found id: ""
	I0731 18:10:32.847029   74203 logs.go:276] 0 containers: []
	W0731 18:10:32.847039   74203 logs.go:278] No container was found matching "etcd"
	I0731 18:10:32.847044   74203 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 18:10:32.847092   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 18:10:32.884093   74203 cri.go:89] found id: ""
	I0731 18:10:32.884123   74203 logs.go:276] 0 containers: []
	W0731 18:10:32.884132   74203 logs.go:278] No container was found matching "coredns"
	I0731 18:10:32.884138   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 18:10:32.884188   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 18:10:32.920160   74203 cri.go:89] found id: ""
	I0731 18:10:32.920186   74203 logs.go:276] 0 containers: []
	W0731 18:10:32.920197   74203 logs.go:278] No container was found matching "kube-scheduler"
	I0731 18:10:32.920204   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 18:10:32.920263   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 18:10:32.952750   74203 cri.go:89] found id: ""
	I0731 18:10:32.952777   74203 logs.go:276] 0 containers: []
	W0731 18:10:32.952788   74203 logs.go:278] No container was found matching "kube-proxy"
	I0731 18:10:32.952795   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 18:10:32.952865   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 18:10:32.989086   74203 cri.go:89] found id: ""
	I0731 18:10:32.989115   74203 logs.go:276] 0 containers: []
	W0731 18:10:32.989125   74203 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 18:10:32.989135   74203 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 18:10:32.989189   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 18:10:33.021554   74203 cri.go:89] found id: ""
	I0731 18:10:33.021590   74203 logs.go:276] 0 containers: []
	W0731 18:10:33.021602   74203 logs.go:278] No container was found matching "kindnet"
	I0731 18:10:33.021609   74203 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 18:10:33.021662   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 18:10:33.061097   74203 cri.go:89] found id: ""
	I0731 18:10:33.061128   74203 logs.go:276] 0 containers: []
	W0731 18:10:33.061141   74203 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 18:10:33.061160   74203 logs.go:123] Gathering logs for kubelet ...
	I0731 18:10:33.061174   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 18:10:33.113497   74203 logs.go:123] Gathering logs for dmesg ...
	I0731 18:10:33.113534   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 18:10:33.126816   74203 logs.go:123] Gathering logs for describe nodes ...
	I0731 18:10:33.126842   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 18:10:33.196713   74203 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 18:10:33.196733   74203 logs.go:123] Gathering logs for CRI-O ...
	I0731 18:10:33.196744   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 18:10:33.277697   74203 logs.go:123] Gathering logs for container status ...
	I0731 18:10:33.277724   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 18:10:33.279738   73696 pod_ready.go:102] pod "metrics-server-569cc877fc-64hp4" in "kube-system" namespace has status "Ready":"False"
	I0731 18:10:35.780181   73696 pod_ready.go:102] pod "metrics-server-569cc877fc-64hp4" in "kube-system" namespace has status "Ready":"False"
	I0731 18:10:32.643177   73479 pod_ready.go:102] pod "metrics-server-78fcd8795b-27pkr" in "kube-system" namespace has status "Ready":"False"
	I0731 18:10:35.143606   73479 pod_ready.go:102] pod "metrics-server-78fcd8795b-27pkr" in "kube-system" namespace has status "Ready":"False"
	I0731 18:10:34.771678   73800 pod_ready.go:102] pod "metrics-server-569cc877fc-fzxrw" in "kube-system" namespace has status "Ready":"False"
	I0731 18:10:36.772167   73800 pod_ready.go:102] pod "metrics-server-569cc877fc-fzxrw" in "kube-system" namespace has status "Ready":"False"
	I0731 18:10:39.272752   73800 pod_ready.go:102] pod "metrics-server-569cc877fc-fzxrw" in "kube-system" namespace has status "Ready":"False"
	I0731 18:10:35.817143   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:10:35.829760   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 18:10:35.829820   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 18:10:35.862974   74203 cri.go:89] found id: ""
	I0731 18:10:35.863002   74203 logs.go:276] 0 containers: []
	W0731 18:10:35.863014   74203 logs.go:278] No container was found matching "kube-apiserver"
	I0731 18:10:35.863022   74203 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 18:10:35.863078   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 18:10:35.898547   74203 cri.go:89] found id: ""
	I0731 18:10:35.898576   74203 logs.go:276] 0 containers: []
	W0731 18:10:35.898584   74203 logs.go:278] No container was found matching "etcd"
	I0731 18:10:35.898590   74203 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 18:10:35.898651   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 18:10:35.930351   74203 cri.go:89] found id: ""
	I0731 18:10:35.930379   74203 logs.go:276] 0 containers: []
	W0731 18:10:35.930390   74203 logs.go:278] No container was found matching "coredns"
	I0731 18:10:35.930396   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 18:10:35.930463   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 18:10:35.962623   74203 cri.go:89] found id: ""
	I0731 18:10:35.962652   74203 logs.go:276] 0 containers: []
	W0731 18:10:35.962663   74203 logs.go:278] No container was found matching "kube-scheduler"
	I0731 18:10:35.962670   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 18:10:35.962727   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 18:10:35.998213   74203 cri.go:89] found id: ""
	I0731 18:10:35.998233   74203 logs.go:276] 0 containers: []
	W0731 18:10:35.998240   74203 logs.go:278] No container was found matching "kube-proxy"
	I0731 18:10:35.998245   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 18:10:35.998291   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 18:10:36.032670   74203 cri.go:89] found id: ""
	I0731 18:10:36.032695   74203 logs.go:276] 0 containers: []
	W0731 18:10:36.032703   74203 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 18:10:36.032709   74203 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 18:10:36.032757   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 18:10:36.066349   74203 cri.go:89] found id: ""
	I0731 18:10:36.066381   74203 logs.go:276] 0 containers: []
	W0731 18:10:36.066392   74203 logs.go:278] No container was found matching "kindnet"
	I0731 18:10:36.066399   74203 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 18:10:36.066461   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 18:10:36.104137   74203 cri.go:89] found id: ""
	I0731 18:10:36.104168   74203 logs.go:276] 0 containers: []
	W0731 18:10:36.104180   74203 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 18:10:36.104200   74203 logs.go:123] Gathering logs for kubelet ...
	I0731 18:10:36.104215   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 18:10:36.155814   74203 logs.go:123] Gathering logs for dmesg ...
	I0731 18:10:36.155844   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 18:10:36.168885   74203 logs.go:123] Gathering logs for describe nodes ...
	I0731 18:10:36.168912   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 18:10:36.235950   74203 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 18:10:36.235972   74203 logs.go:123] Gathering logs for CRI-O ...
	I0731 18:10:36.235987   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 18:10:36.318382   74203 logs.go:123] Gathering logs for container status ...
	I0731 18:10:36.318414   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 18:10:38.853972   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:10:38.867018   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 18:10:38.867089   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 18:10:38.902069   74203 cri.go:89] found id: ""
	I0731 18:10:38.902097   74203 logs.go:276] 0 containers: []
	W0731 18:10:38.902109   74203 logs.go:278] No container was found matching "kube-apiserver"
	I0731 18:10:38.902115   74203 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 18:10:38.902181   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 18:10:38.935272   74203 cri.go:89] found id: ""
	I0731 18:10:38.935296   74203 logs.go:276] 0 containers: []
	W0731 18:10:38.935316   74203 logs.go:278] No container was found matching "etcd"
	I0731 18:10:38.935329   74203 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 18:10:38.935387   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 18:10:38.968582   74203 cri.go:89] found id: ""
	I0731 18:10:38.968610   74203 logs.go:276] 0 containers: []
	W0731 18:10:38.968621   74203 logs.go:278] No container was found matching "coredns"
	I0731 18:10:38.968629   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 18:10:38.968688   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 18:10:38.999740   74203 cri.go:89] found id: ""
	I0731 18:10:38.999770   74203 logs.go:276] 0 containers: []
	W0731 18:10:38.999780   74203 logs.go:278] No container was found matching "kube-scheduler"
	I0731 18:10:38.999787   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 18:10:38.999845   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 18:10:39.032964   74203 cri.go:89] found id: ""
	I0731 18:10:39.032993   74203 logs.go:276] 0 containers: []
	W0731 18:10:39.033008   74203 logs.go:278] No container was found matching "kube-proxy"
	I0731 18:10:39.033015   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 18:10:39.033099   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 18:10:39.064121   74203 cri.go:89] found id: ""
	I0731 18:10:39.064149   74203 logs.go:276] 0 containers: []
	W0731 18:10:39.064158   74203 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 18:10:39.064164   74203 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 18:10:39.064222   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 18:10:39.098462   74203 cri.go:89] found id: ""
	I0731 18:10:39.098488   74203 logs.go:276] 0 containers: []
	W0731 18:10:39.098498   74203 logs.go:278] No container was found matching "kindnet"
	I0731 18:10:39.098505   74203 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 18:10:39.098564   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 18:10:39.130627   74203 cri.go:89] found id: ""
	I0731 18:10:39.130653   74203 logs.go:276] 0 containers: []
	W0731 18:10:39.130663   74203 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 18:10:39.130674   74203 logs.go:123] Gathering logs for CRI-O ...
	I0731 18:10:39.130687   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 18:10:39.223664   74203 logs.go:123] Gathering logs for container status ...
	I0731 18:10:39.223698   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 18:10:39.260502   74203 logs.go:123] Gathering logs for kubelet ...
	I0731 18:10:39.260533   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 18:10:39.315643   74203 logs.go:123] Gathering logs for dmesg ...
	I0731 18:10:39.315675   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 18:10:39.329731   74203 logs.go:123] Gathering logs for describe nodes ...
	I0731 18:10:39.329761   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 18:10:39.395078   74203 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 18:10:38.278911   73696 pod_ready.go:102] pod "metrics-server-569cc877fc-64hp4" in "kube-system" namespace has status "Ready":"False"
	I0731 18:10:40.779921   73696 pod_ready.go:102] pod "metrics-server-569cc877fc-64hp4" in "kube-system" namespace has status "Ready":"False"
	I0731 18:10:37.643246   73479 pod_ready.go:102] pod "metrics-server-78fcd8795b-27pkr" in "kube-system" namespace has status "Ready":"False"
	I0731 18:10:39.643862   73479 pod_ready.go:102] pod "metrics-server-78fcd8795b-27pkr" in "kube-system" namespace has status "Ready":"False"
	I0731 18:10:41.772051   73800 pod_ready.go:102] pod "metrics-server-569cc877fc-fzxrw" in "kube-system" namespace has status "Ready":"False"
	I0731 18:10:44.271544   73800 pod_ready.go:102] pod "metrics-server-569cc877fc-fzxrw" in "kube-system" namespace has status "Ready":"False"
	I0731 18:10:41.895698   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:10:41.910111   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 18:10:41.910191   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 18:10:41.943700   74203 cri.go:89] found id: ""
	I0731 18:10:41.943732   74203 logs.go:276] 0 containers: []
	W0731 18:10:41.943743   74203 logs.go:278] No container was found matching "kube-apiserver"
	I0731 18:10:41.943751   74203 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 18:10:41.943812   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 18:10:41.976848   74203 cri.go:89] found id: ""
	I0731 18:10:41.976879   74203 logs.go:276] 0 containers: []
	W0731 18:10:41.976888   74203 logs.go:278] No container was found matching "etcd"
	I0731 18:10:41.976894   74203 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 18:10:41.976967   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 18:10:42.009424   74203 cri.go:89] found id: ""
	I0731 18:10:42.009451   74203 logs.go:276] 0 containers: []
	W0731 18:10:42.009462   74203 logs.go:278] No container was found matching "coredns"
	I0731 18:10:42.009477   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 18:10:42.009546   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 18:10:42.047233   74203 cri.go:89] found id: ""
	I0731 18:10:42.047260   74203 logs.go:276] 0 containers: []
	W0731 18:10:42.047268   74203 logs.go:278] No container was found matching "kube-scheduler"
	I0731 18:10:42.047274   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 18:10:42.047342   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 18:10:42.079900   74203 cri.go:89] found id: ""
	I0731 18:10:42.079928   74203 logs.go:276] 0 containers: []
	W0731 18:10:42.079938   74203 logs.go:278] No container was found matching "kube-proxy"
	I0731 18:10:42.079945   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 18:10:42.080025   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 18:10:42.114122   74203 cri.go:89] found id: ""
	I0731 18:10:42.114152   74203 logs.go:276] 0 containers: []
	W0731 18:10:42.114164   74203 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 18:10:42.114172   74203 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 18:10:42.114224   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 18:10:42.148741   74203 cri.go:89] found id: ""
	I0731 18:10:42.148768   74203 logs.go:276] 0 containers: []
	W0731 18:10:42.148780   74203 logs.go:278] No container was found matching "kindnet"
	I0731 18:10:42.148789   74203 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 18:10:42.148853   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 18:10:42.184739   74203 cri.go:89] found id: ""
	I0731 18:10:42.184762   74203 logs.go:276] 0 containers: []
	W0731 18:10:42.184769   74203 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 18:10:42.184777   74203 logs.go:123] Gathering logs for describe nodes ...
	I0731 18:10:42.184791   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 18:10:42.254676   74203 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 18:10:42.254694   74203 logs.go:123] Gathering logs for CRI-O ...
	I0731 18:10:42.254706   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 18:10:42.334936   74203 logs.go:123] Gathering logs for container status ...
	I0731 18:10:42.334978   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 18:10:42.371511   74203 logs.go:123] Gathering logs for kubelet ...
	I0731 18:10:42.371540   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 18:10:42.421800   74203 logs.go:123] Gathering logs for dmesg ...
	I0731 18:10:42.421831   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 18:10:44.934983   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:10:44.947212   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 18:10:44.947293   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 18:10:44.979722   74203 cri.go:89] found id: ""
	I0731 18:10:44.979748   74203 logs.go:276] 0 containers: []
	W0731 18:10:44.979760   74203 logs.go:278] No container was found matching "kube-apiserver"
	I0731 18:10:44.979767   74203 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 18:10:44.979819   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 18:10:45.011594   74203 cri.go:89] found id: ""
	I0731 18:10:45.011620   74203 logs.go:276] 0 containers: []
	W0731 18:10:45.011630   74203 logs.go:278] No container was found matching "etcd"
	I0731 18:10:45.011637   74203 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 18:10:45.011803   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 18:10:45.043174   74203 cri.go:89] found id: ""
	I0731 18:10:45.043197   74203 logs.go:276] 0 containers: []
	W0731 18:10:45.043207   74203 logs.go:278] No container was found matching "coredns"
	I0731 18:10:45.043214   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 18:10:45.043278   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 18:10:45.074629   74203 cri.go:89] found id: ""
	I0731 18:10:45.074652   74203 logs.go:276] 0 containers: []
	W0731 18:10:45.074662   74203 logs.go:278] No container was found matching "kube-scheduler"
	I0731 18:10:45.074669   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 18:10:45.074727   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 18:10:45.108917   74203 cri.go:89] found id: ""
	I0731 18:10:45.108944   74203 logs.go:276] 0 containers: []
	W0731 18:10:45.108952   74203 logs.go:278] No container was found matching "kube-proxy"
	I0731 18:10:45.108959   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 18:10:45.109018   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 18:10:45.142200   74203 cri.go:89] found id: ""
	I0731 18:10:45.142227   74203 logs.go:276] 0 containers: []
	W0731 18:10:45.142237   74203 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 18:10:45.142244   74203 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 18:10:45.142306   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 18:10:45.177076   74203 cri.go:89] found id: ""
	I0731 18:10:45.177101   74203 logs.go:276] 0 containers: []
	W0731 18:10:45.177109   74203 logs.go:278] No container was found matching "kindnet"
	I0731 18:10:45.177114   74203 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 18:10:45.177168   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 18:10:45.209352   74203 cri.go:89] found id: ""
	I0731 18:10:45.209376   74203 logs.go:276] 0 containers: []
	W0731 18:10:45.209383   74203 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 18:10:45.209392   74203 logs.go:123] Gathering logs for kubelet ...
	I0731 18:10:45.209407   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 18:10:45.257966   74203 logs.go:123] Gathering logs for dmesg ...
	I0731 18:10:45.257998   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 18:10:45.272429   74203 logs.go:123] Gathering logs for describe nodes ...
	I0731 18:10:45.272462   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 18:10:43.279626   73696 pod_ready.go:102] pod "metrics-server-569cc877fc-64hp4" in "kube-system" namespace has status "Ready":"False"
	I0731 18:10:45.778975   73696 pod_ready.go:102] pod "metrics-server-569cc877fc-64hp4" in "kube-system" namespace has status "Ready":"False"
	I0731 18:10:42.145247   73479 pod_ready.go:102] pod "metrics-server-78fcd8795b-27pkr" in "kube-system" namespace has status "Ready":"False"
	I0731 18:10:44.642278   73479 pod_ready.go:102] pod "metrics-server-78fcd8795b-27pkr" in "kube-system" namespace has status "Ready":"False"
	I0731 18:10:46.644897   73479 pod_ready.go:102] pod "metrics-server-78fcd8795b-27pkr" in "kube-system" namespace has status "Ready":"False"
	I0731 18:10:46.771785   73800 pod_ready.go:102] pod "metrics-server-569cc877fc-fzxrw" in "kube-system" namespace has status "Ready":"False"
	I0731 18:10:48.772117   73800 pod_ready.go:102] pod "metrics-server-569cc877fc-fzxrw" in "kube-system" namespace has status "Ready":"False"
	W0731 18:10:45.347952   74203 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 18:10:45.347973   74203 logs.go:123] Gathering logs for CRI-O ...
	I0731 18:10:45.347988   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 18:10:45.428556   74203 logs.go:123] Gathering logs for container status ...
	I0731 18:10:45.428609   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 18:10:47.971089   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:10:47.986677   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 18:10:47.986749   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 18:10:48.020396   74203 cri.go:89] found id: ""
	I0731 18:10:48.020426   74203 logs.go:276] 0 containers: []
	W0731 18:10:48.020438   74203 logs.go:278] No container was found matching "kube-apiserver"
	I0731 18:10:48.020446   74203 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 18:10:48.020512   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 18:10:48.058129   74203 cri.go:89] found id: ""
	I0731 18:10:48.058161   74203 logs.go:276] 0 containers: []
	W0731 18:10:48.058172   74203 logs.go:278] No container was found matching "etcd"
	I0731 18:10:48.058180   74203 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 18:10:48.058249   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 18:10:48.091894   74203 cri.go:89] found id: ""
	I0731 18:10:48.091922   74203 logs.go:276] 0 containers: []
	W0731 18:10:48.091932   74203 logs.go:278] No container was found matching "coredns"
	I0731 18:10:48.091939   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 18:10:48.091998   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 18:10:48.124757   74203 cri.go:89] found id: ""
	I0731 18:10:48.124788   74203 logs.go:276] 0 containers: []
	W0731 18:10:48.124798   74203 logs.go:278] No container was found matching "kube-scheduler"
	I0731 18:10:48.124807   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 18:10:48.124871   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 18:10:48.159145   74203 cri.go:89] found id: ""
	I0731 18:10:48.159172   74203 logs.go:276] 0 containers: []
	W0731 18:10:48.159184   74203 logs.go:278] No container was found matching "kube-proxy"
	I0731 18:10:48.159191   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 18:10:48.159253   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 18:10:48.200024   74203 cri.go:89] found id: ""
	I0731 18:10:48.200051   74203 logs.go:276] 0 containers: []
	W0731 18:10:48.200061   74203 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 18:10:48.200069   74203 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 18:10:48.200128   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 18:10:48.233838   74203 cri.go:89] found id: ""
	I0731 18:10:48.233870   74203 logs.go:276] 0 containers: []
	W0731 18:10:48.233880   74203 logs.go:278] No container was found matching "kindnet"
	I0731 18:10:48.233886   74203 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 18:10:48.233941   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 18:10:48.265786   74203 cri.go:89] found id: ""
	I0731 18:10:48.265812   74203 logs.go:276] 0 containers: []
	W0731 18:10:48.265821   74203 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 18:10:48.265832   74203 logs.go:123] Gathering logs for dmesg ...
	I0731 18:10:48.265846   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 18:10:48.280422   74203 logs.go:123] Gathering logs for describe nodes ...
	I0731 18:10:48.280449   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 18:10:48.346774   74203 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 18:10:48.346796   74203 logs.go:123] Gathering logs for CRI-O ...
	I0731 18:10:48.346808   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 18:10:48.424017   74203 logs.go:123] Gathering logs for container status ...
	I0731 18:10:48.424052   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 18:10:48.464139   74203 logs.go:123] Gathering logs for kubelet ...
	I0731 18:10:48.464166   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 18:10:47.781556   73696 pod_ready.go:102] pod "metrics-server-569cc877fc-64hp4" in "kube-system" namespace has status "Ready":"False"
	I0731 18:10:50.278635   73696 pod_ready.go:102] pod "metrics-server-569cc877fc-64hp4" in "kube-system" namespace has status "Ready":"False"
	I0731 18:10:49.143684   73479 pod_ready.go:102] pod "metrics-server-78fcd8795b-27pkr" in "kube-system" namespace has status "Ready":"False"
	I0731 18:10:51.144631   73479 pod_ready.go:102] pod "metrics-server-78fcd8795b-27pkr" in "kube-system" namespace has status "Ready":"False"
	I0731 18:10:51.272847   73800 pod_ready.go:102] pod "metrics-server-569cc877fc-fzxrw" in "kube-system" namespace has status "Ready":"False"
	I0731 18:10:53.771397   73800 pod_ready.go:102] pod "metrics-server-569cc877fc-fzxrw" in "kube-system" namespace has status "Ready":"False"
	I0731 18:10:51.013681   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:10:51.028745   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 18:10:51.028814   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 18:10:51.062656   74203 cri.go:89] found id: ""
	I0731 18:10:51.062683   74203 logs.go:276] 0 containers: []
	W0731 18:10:51.062691   74203 logs.go:278] No container was found matching "kube-apiserver"
	I0731 18:10:51.062700   74203 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 18:10:51.062749   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 18:10:51.099203   74203 cri.go:89] found id: ""
	I0731 18:10:51.099228   74203 logs.go:276] 0 containers: []
	W0731 18:10:51.099237   74203 logs.go:278] No container was found matching "etcd"
	I0731 18:10:51.099243   74203 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 18:10:51.099310   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 18:10:51.133507   74203 cri.go:89] found id: ""
	I0731 18:10:51.133533   74203 logs.go:276] 0 containers: []
	W0731 18:10:51.133540   74203 logs.go:278] No container was found matching "coredns"
	I0731 18:10:51.133546   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 18:10:51.133596   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 18:10:51.169935   74203 cri.go:89] found id: ""
	I0731 18:10:51.169954   74203 logs.go:276] 0 containers: []
	W0731 18:10:51.169961   74203 logs.go:278] No container was found matching "kube-scheduler"
	I0731 18:10:51.169966   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 18:10:51.170012   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 18:10:51.202877   74203 cri.go:89] found id: ""
	I0731 18:10:51.202903   74203 logs.go:276] 0 containers: []
	W0731 18:10:51.202913   74203 logs.go:278] No container was found matching "kube-proxy"
	I0731 18:10:51.202919   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 18:10:51.202988   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 18:10:51.239913   74203 cri.go:89] found id: ""
	I0731 18:10:51.239939   74203 logs.go:276] 0 containers: []
	W0731 18:10:51.239949   74203 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 18:10:51.239957   74203 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 18:10:51.240018   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 18:10:51.272024   74203 cri.go:89] found id: ""
	I0731 18:10:51.272095   74203 logs.go:276] 0 containers: []
	W0731 18:10:51.272115   74203 logs.go:278] No container was found matching "kindnet"
	I0731 18:10:51.272123   74203 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 18:10:51.272185   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 18:10:51.307016   74203 cri.go:89] found id: ""
	I0731 18:10:51.307043   74203 logs.go:276] 0 containers: []
	W0731 18:10:51.307053   74203 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 18:10:51.307063   74203 logs.go:123] Gathering logs for kubelet ...
	I0731 18:10:51.307079   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 18:10:51.364018   74203 logs.go:123] Gathering logs for dmesg ...
	I0731 18:10:51.364066   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 18:10:51.384277   74203 logs.go:123] Gathering logs for describe nodes ...
	I0731 18:10:51.384303   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 18:10:51.472657   74203 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 18:10:51.472679   74203 logs.go:123] Gathering logs for CRI-O ...
	I0731 18:10:51.472696   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 18:10:51.548408   74203 logs.go:123] Gathering logs for container status ...
	I0731 18:10:51.548439   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 18:10:54.086526   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:10:54.099293   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 18:10:54.099368   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 18:10:54.129927   74203 cri.go:89] found id: ""
	I0731 18:10:54.129954   74203 logs.go:276] 0 containers: []
	W0731 18:10:54.129965   74203 logs.go:278] No container was found matching "kube-apiserver"
	I0731 18:10:54.129972   74203 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 18:10:54.130042   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 18:10:54.166428   74203 cri.go:89] found id: ""
	I0731 18:10:54.166457   74203 logs.go:276] 0 containers: []
	W0731 18:10:54.166468   74203 logs.go:278] No container was found matching "etcd"
	I0731 18:10:54.166476   74203 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 18:10:54.166538   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 18:10:54.204523   74203 cri.go:89] found id: ""
	I0731 18:10:54.204549   74203 logs.go:276] 0 containers: []
	W0731 18:10:54.204556   74203 logs.go:278] No container was found matching "coredns"
	I0731 18:10:54.204562   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 18:10:54.204619   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 18:10:54.241706   74203 cri.go:89] found id: ""
	I0731 18:10:54.241730   74203 logs.go:276] 0 containers: []
	W0731 18:10:54.241737   74203 logs.go:278] No container was found matching "kube-scheduler"
	I0731 18:10:54.241744   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 18:10:54.241802   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 18:10:54.277154   74203 cri.go:89] found id: ""
	I0731 18:10:54.277178   74203 logs.go:276] 0 containers: []
	W0731 18:10:54.277187   74203 logs.go:278] No container was found matching "kube-proxy"
	I0731 18:10:54.277193   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 18:10:54.277255   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 18:10:54.310198   74203 cri.go:89] found id: ""
	I0731 18:10:54.310223   74203 logs.go:276] 0 containers: []
	W0731 18:10:54.310231   74203 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 18:10:54.310237   74203 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 18:10:54.310283   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 18:10:54.344807   74203 cri.go:89] found id: ""
	I0731 18:10:54.344837   74203 logs.go:276] 0 containers: []
	W0731 18:10:54.344847   74203 logs.go:278] No container was found matching "kindnet"
	I0731 18:10:54.344854   74203 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 18:10:54.344915   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 18:10:54.383358   74203 cri.go:89] found id: ""
	I0731 18:10:54.383391   74203 logs.go:276] 0 containers: []
	W0731 18:10:54.383400   74203 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 18:10:54.383410   74203 logs.go:123] Gathering logs for kubelet ...
	I0731 18:10:54.383424   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 18:10:54.431876   74203 logs.go:123] Gathering logs for dmesg ...
	I0731 18:10:54.431908   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 18:10:54.444797   74203 logs.go:123] Gathering logs for describe nodes ...
	I0731 18:10:54.444824   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 18:10:54.518816   74203 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 18:10:54.518839   74203 logs.go:123] Gathering logs for CRI-O ...
	I0731 18:10:54.518855   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 18:10:54.600072   74203 logs.go:123] Gathering logs for container status ...
	I0731 18:10:54.600109   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 18:10:52.279006   73696 pod_ready.go:102] pod "metrics-server-569cc877fc-64hp4" in "kube-system" namespace has status "Ready":"False"
	I0731 18:10:54.279520   73696 pod_ready.go:102] pod "metrics-server-569cc877fc-64hp4" in "kube-system" namespace has status "Ready":"False"
	I0731 18:10:53.643093   73479 pod_ready.go:102] pod "metrics-server-78fcd8795b-27pkr" in "kube-system" namespace has status "Ready":"False"
	I0731 18:10:56.143250   73479 pod_ready.go:102] pod "metrics-server-78fcd8795b-27pkr" in "kube-system" namespace has status "Ready":"False"
	I0731 18:10:56.272955   73800 pod_ready.go:102] pod "metrics-server-569cc877fc-fzxrw" in "kube-system" namespace has status "Ready":"False"
	I0731 18:10:58.771584   73800 pod_ready.go:102] pod "metrics-server-569cc877fc-fzxrw" in "kube-system" namespace has status "Ready":"False"
	I0731 18:10:57.141070   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:10:57.155903   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 18:10:57.155975   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 18:10:57.189406   74203 cri.go:89] found id: ""
	I0731 18:10:57.189428   74203 logs.go:276] 0 containers: []
	W0731 18:10:57.189435   74203 logs.go:278] No container was found matching "kube-apiserver"
	I0731 18:10:57.189441   74203 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 18:10:57.189510   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 18:10:57.221507   74203 cri.go:89] found id: ""
	I0731 18:10:57.221531   74203 logs.go:276] 0 containers: []
	W0731 18:10:57.221540   74203 logs.go:278] No container was found matching "etcd"
	I0731 18:10:57.221547   74203 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 18:10:57.221614   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 18:10:57.257843   74203 cri.go:89] found id: ""
	I0731 18:10:57.257868   74203 logs.go:276] 0 containers: []
	W0731 18:10:57.257880   74203 logs.go:278] No container was found matching "coredns"
	I0731 18:10:57.257887   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 18:10:57.257939   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 18:10:57.292697   74203 cri.go:89] found id: ""
	I0731 18:10:57.292728   74203 logs.go:276] 0 containers: []
	W0731 18:10:57.292737   74203 logs.go:278] No container was found matching "kube-scheduler"
	I0731 18:10:57.292744   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 18:10:57.292802   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 18:10:57.325705   74203 cri.go:89] found id: ""
	I0731 18:10:57.325728   74203 logs.go:276] 0 containers: []
	W0731 18:10:57.325735   74203 logs.go:278] No container was found matching "kube-proxy"
	I0731 18:10:57.325740   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 18:10:57.325787   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 18:10:57.357436   74203 cri.go:89] found id: ""
	I0731 18:10:57.357463   74203 logs.go:276] 0 containers: []
	W0731 18:10:57.357471   74203 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 18:10:57.357477   74203 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 18:10:57.357534   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 18:10:57.388215   74203 cri.go:89] found id: ""
	I0731 18:10:57.388240   74203 logs.go:276] 0 containers: []
	W0731 18:10:57.388249   74203 logs.go:278] No container was found matching "kindnet"
	I0731 18:10:57.388256   74203 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 18:10:57.388315   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 18:10:57.419609   74203 cri.go:89] found id: ""
	I0731 18:10:57.419631   74203 logs.go:276] 0 containers: []
	W0731 18:10:57.419643   74203 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 18:10:57.419652   74203 logs.go:123] Gathering logs for CRI-O ...
	I0731 18:10:57.419663   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 18:10:57.497157   74203 logs.go:123] Gathering logs for container status ...
	I0731 18:10:57.497188   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 18:10:57.533512   74203 logs.go:123] Gathering logs for kubelet ...
	I0731 18:10:57.533552   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 18:10:57.587866   74203 logs.go:123] Gathering logs for dmesg ...
	I0731 18:10:57.587904   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 18:10:57.601191   74203 logs.go:123] Gathering logs for describe nodes ...
	I0731 18:10:57.601222   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 18:10:57.681899   74203 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 18:11:00.182160   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:11:00.195509   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 18:11:00.195598   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 18:11:00.230650   74203 cri.go:89] found id: ""
	I0731 18:11:00.230674   74203 logs.go:276] 0 containers: []
	W0731 18:11:00.230682   74203 logs.go:278] No container was found matching "kube-apiserver"
	I0731 18:11:00.230689   74203 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 18:11:00.230747   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 18:11:00.268629   74203 cri.go:89] found id: ""
	I0731 18:11:00.268656   74203 logs.go:276] 0 containers: []
	W0731 18:11:00.268666   74203 logs.go:278] No container was found matching "etcd"
	I0731 18:11:00.268672   74203 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 18:11:00.268740   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 18:11:00.301805   74203 cri.go:89] found id: ""
	I0731 18:11:00.301827   74203 logs.go:276] 0 containers: []
	W0731 18:11:00.301836   74203 logs.go:278] No container was found matching "coredns"
	I0731 18:11:00.301843   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 18:11:00.301901   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 18:11:00.333844   74203 cri.go:89] found id: ""
	I0731 18:11:00.333871   74203 logs.go:276] 0 containers: []
	W0731 18:11:00.333882   74203 logs.go:278] No container was found matching "kube-scheduler"
	I0731 18:11:00.333889   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 18:11:00.333949   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 18:10:56.779307   73696 pod_ready.go:102] pod "metrics-server-569cc877fc-64hp4" in "kube-system" namespace has status "Ready":"False"
	I0731 18:10:58.779655   73696 pod_ready.go:102] pod "metrics-server-569cc877fc-64hp4" in "kube-system" namespace has status "Ready":"False"
	I0731 18:10:58.643375   73479 pod_ready.go:102] pod "metrics-server-78fcd8795b-27pkr" in "kube-system" namespace has status "Ready":"False"
	I0731 18:11:00.643713   73479 pod_ready.go:102] pod "metrics-server-78fcd8795b-27pkr" in "kube-system" namespace has status "Ready":"False"
	I0731 18:11:01.272195   73800 pod_ready.go:102] pod "metrics-server-569cc877fc-fzxrw" in "kube-system" namespace has status "Ready":"False"
	I0731 18:11:03.272739   73800 pod_ready.go:102] pod "metrics-server-569cc877fc-fzxrw" in "kube-system" namespace has status "Ready":"False"
	I0731 18:11:00.366250   74203 cri.go:89] found id: ""
	I0731 18:11:00.366278   74203 logs.go:276] 0 containers: []
	W0731 18:11:00.366288   74203 logs.go:278] No container was found matching "kube-proxy"
	I0731 18:11:00.366295   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 18:11:00.366358   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 18:11:00.399301   74203 cri.go:89] found id: ""
	I0731 18:11:00.399325   74203 logs.go:276] 0 containers: []
	W0731 18:11:00.399335   74203 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 18:11:00.399342   74203 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 18:11:00.399405   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 18:11:00.432182   74203 cri.go:89] found id: ""
	I0731 18:11:00.432207   74203 logs.go:276] 0 containers: []
	W0731 18:11:00.432218   74203 logs.go:278] No container was found matching "kindnet"
	I0731 18:11:00.432224   74203 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 18:11:00.432284   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 18:11:00.465395   74203 cri.go:89] found id: ""
	I0731 18:11:00.465423   74203 logs.go:276] 0 containers: []
	W0731 18:11:00.465432   74203 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 18:11:00.465440   74203 logs.go:123] Gathering logs for kubelet ...
	I0731 18:11:00.465453   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 18:11:00.516042   74203 logs.go:123] Gathering logs for dmesg ...
	I0731 18:11:00.516077   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 18:11:00.528621   74203 logs.go:123] Gathering logs for describe nodes ...
	I0731 18:11:00.528647   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 18:11:00.600297   74203 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 18:11:00.600322   74203 logs.go:123] Gathering logs for CRI-O ...
	I0731 18:11:00.600339   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 18:11:00.680368   74203 logs.go:123] Gathering logs for container status ...
	I0731 18:11:00.680399   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 18:11:03.217684   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:11:03.230691   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 18:11:03.230752   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 18:11:03.264882   74203 cri.go:89] found id: ""
	I0731 18:11:03.264910   74203 logs.go:276] 0 containers: []
	W0731 18:11:03.264918   74203 logs.go:278] No container was found matching "kube-apiserver"
	I0731 18:11:03.264924   74203 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 18:11:03.264976   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 18:11:03.301608   74203 cri.go:89] found id: ""
	I0731 18:11:03.301733   74203 logs.go:276] 0 containers: []
	W0731 18:11:03.301754   74203 logs.go:278] No container was found matching "etcd"
	I0731 18:11:03.301765   74203 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 18:11:03.301838   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 18:11:03.335077   74203 cri.go:89] found id: ""
	I0731 18:11:03.335102   74203 logs.go:276] 0 containers: []
	W0731 18:11:03.335121   74203 logs.go:278] No container was found matching "coredns"
	I0731 18:11:03.335128   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 18:11:03.335196   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 18:11:03.370755   74203 cri.go:89] found id: ""
	I0731 18:11:03.370783   74203 logs.go:276] 0 containers: []
	W0731 18:11:03.370794   74203 logs.go:278] No container was found matching "kube-scheduler"
	I0731 18:11:03.370802   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 18:11:03.370862   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 18:11:03.403004   74203 cri.go:89] found id: ""
	I0731 18:11:03.403035   74203 logs.go:276] 0 containers: []
	W0731 18:11:03.403045   74203 logs.go:278] No container was found matching "kube-proxy"
	I0731 18:11:03.403052   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 18:11:03.403125   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 18:11:03.437169   74203 cri.go:89] found id: ""
	I0731 18:11:03.437209   74203 logs.go:276] 0 containers: []
	W0731 18:11:03.437219   74203 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 18:11:03.437235   74203 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 18:11:03.437296   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 18:11:03.469956   74203 cri.go:89] found id: ""
	I0731 18:11:03.469981   74203 logs.go:276] 0 containers: []
	W0731 18:11:03.469991   74203 logs.go:278] No container was found matching "kindnet"
	I0731 18:11:03.469998   74203 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 18:11:03.470056   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 18:11:03.503850   74203 cri.go:89] found id: ""
	I0731 18:11:03.503878   74203 logs.go:276] 0 containers: []
	W0731 18:11:03.503888   74203 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 18:11:03.503898   74203 logs.go:123] Gathering logs for kubelet ...
	I0731 18:11:03.503913   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 18:11:03.554993   74203 logs.go:123] Gathering logs for dmesg ...
	I0731 18:11:03.555036   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 18:11:03.567898   74203 logs.go:123] Gathering logs for describe nodes ...
	I0731 18:11:03.567925   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 18:11:03.630151   74203 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 18:11:03.630188   74203 logs.go:123] Gathering logs for CRI-O ...
	I0731 18:11:03.630207   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 18:11:03.708552   74203 logs.go:123] Gathering logs for container status ...
	I0731 18:11:03.708596   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 18:11:01.278830   73696 pod_ready.go:102] pod "metrics-server-569cc877fc-64hp4" in "kube-system" namespace has status "Ready":"False"
	I0731 18:11:03.278880   73696 pod_ready.go:102] pod "metrics-server-569cc877fc-64hp4" in "kube-system" namespace has status "Ready":"False"
	I0731 18:11:05.778296   73696 pod_ready.go:102] pod "metrics-server-569cc877fc-64hp4" in "kube-system" namespace has status "Ready":"False"
	I0731 18:11:03.143289   73479 pod_ready.go:102] pod "metrics-server-78fcd8795b-27pkr" in "kube-system" namespace has status "Ready":"False"
	I0731 18:11:05.152015   73479 pod_ready.go:102] pod "metrics-server-78fcd8795b-27pkr" in "kube-system" namespace has status "Ready":"False"
	I0731 18:11:05.771810   73800 pod_ready.go:102] pod "metrics-server-569cc877fc-fzxrw" in "kube-system" namespace has status "Ready":"False"
	I0731 18:11:08.271205   73800 pod_ready.go:102] pod "metrics-server-569cc877fc-fzxrw" in "kube-system" namespace has status "Ready":"False"
	I0731 18:11:06.249728   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:11:06.261923   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 18:11:06.261998   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 18:11:06.296249   74203 cri.go:89] found id: ""
	I0731 18:11:06.296276   74203 logs.go:276] 0 containers: []
	W0731 18:11:06.296286   74203 logs.go:278] No container was found matching "kube-apiserver"
	I0731 18:11:06.296292   74203 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 18:11:06.296356   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 18:11:06.329355   74203 cri.go:89] found id: ""
	I0731 18:11:06.329381   74203 logs.go:276] 0 containers: []
	W0731 18:11:06.329389   74203 logs.go:278] No container was found matching "etcd"
	I0731 18:11:06.329395   74203 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 18:11:06.329443   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 18:11:06.362585   74203 cri.go:89] found id: ""
	I0731 18:11:06.362618   74203 logs.go:276] 0 containers: []
	W0731 18:11:06.362630   74203 logs.go:278] No container was found matching "coredns"
	I0731 18:11:06.362643   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 18:11:06.362704   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 18:11:06.396489   74203 cri.go:89] found id: ""
	I0731 18:11:06.396514   74203 logs.go:276] 0 containers: []
	W0731 18:11:06.396521   74203 logs.go:278] No container was found matching "kube-scheduler"
	I0731 18:11:06.396527   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 18:11:06.396590   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 18:11:06.428859   74203 cri.go:89] found id: ""
	I0731 18:11:06.428888   74203 logs.go:276] 0 containers: []
	W0731 18:11:06.428897   74203 logs.go:278] No container was found matching "kube-proxy"
	I0731 18:11:06.428903   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 18:11:06.428960   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 18:11:06.468817   74203 cri.go:89] found id: ""
	I0731 18:11:06.468846   74203 logs.go:276] 0 containers: []
	W0731 18:11:06.468856   74203 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 18:11:06.468864   74203 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 18:11:06.468924   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 18:11:06.499975   74203 cri.go:89] found id: ""
	I0731 18:11:06.500000   74203 logs.go:276] 0 containers: []
	W0731 18:11:06.500008   74203 logs.go:278] No container was found matching "kindnet"
	I0731 18:11:06.500013   74203 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 18:11:06.500067   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 18:11:06.537410   74203 cri.go:89] found id: ""
	I0731 18:11:06.537440   74203 logs.go:276] 0 containers: []
	W0731 18:11:06.537451   74203 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 18:11:06.537461   74203 logs.go:123] Gathering logs for kubelet ...
	I0731 18:11:06.537476   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 18:11:06.589664   74203 logs.go:123] Gathering logs for dmesg ...
	I0731 18:11:06.589709   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 18:11:06.603978   74203 logs.go:123] Gathering logs for describe nodes ...
	I0731 18:11:06.604005   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 18:11:06.673436   74203 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 18:11:06.673454   74203 logs.go:123] Gathering logs for CRI-O ...
	I0731 18:11:06.673465   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 18:11:06.757101   74203 logs.go:123] Gathering logs for container status ...
	I0731 18:11:06.757143   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 18:11:09.299562   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:11:09.311910   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 18:11:09.311971   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 18:11:09.346517   74203 cri.go:89] found id: ""
	I0731 18:11:09.346545   74203 logs.go:276] 0 containers: []
	W0731 18:11:09.346555   74203 logs.go:278] No container was found matching "kube-apiserver"
	I0731 18:11:09.346562   74203 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 18:11:09.346634   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 18:11:09.377688   74203 cri.go:89] found id: ""
	I0731 18:11:09.377713   74203 logs.go:276] 0 containers: []
	W0731 18:11:09.377720   74203 logs.go:278] No container was found matching "etcd"
	I0731 18:11:09.377726   74203 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 18:11:09.377787   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 18:11:09.412149   74203 cri.go:89] found id: ""
	I0731 18:11:09.412176   74203 logs.go:276] 0 containers: []
	W0731 18:11:09.412186   74203 logs.go:278] No container was found matching "coredns"
	I0731 18:11:09.412193   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 18:11:09.412259   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 18:11:09.444134   74203 cri.go:89] found id: ""
	I0731 18:11:09.444162   74203 logs.go:276] 0 containers: []
	W0731 18:11:09.444172   74203 logs.go:278] No container was found matching "kube-scheduler"
	I0731 18:11:09.444178   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 18:11:09.444233   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 18:11:09.481407   74203 cri.go:89] found id: ""
	I0731 18:11:09.481436   74203 logs.go:276] 0 containers: []
	W0731 18:11:09.481447   74203 logs.go:278] No container was found matching "kube-proxy"
	I0731 18:11:09.481453   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 18:11:09.481513   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 18:11:09.514926   74203 cri.go:89] found id: ""
	I0731 18:11:09.514950   74203 logs.go:276] 0 containers: []
	W0731 18:11:09.514967   74203 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 18:11:09.514974   74203 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 18:11:09.515036   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 18:11:09.547253   74203 cri.go:89] found id: ""
	I0731 18:11:09.547278   74203 logs.go:276] 0 containers: []
	W0731 18:11:09.547285   74203 logs.go:278] No container was found matching "kindnet"
	I0731 18:11:09.547291   74203 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 18:11:09.547376   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 18:11:09.587585   74203 cri.go:89] found id: ""
	I0731 18:11:09.587614   74203 logs.go:276] 0 containers: []
	W0731 18:11:09.587622   74203 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 18:11:09.587632   74203 logs.go:123] Gathering logs for kubelet ...
	I0731 18:11:09.587646   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 18:11:09.642024   74203 logs.go:123] Gathering logs for dmesg ...
	I0731 18:11:09.642057   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 18:11:09.655244   74203 logs.go:123] Gathering logs for describe nodes ...
	I0731 18:11:09.655270   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 18:11:09.721446   74203 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 18:11:09.721474   74203 logs.go:123] Gathering logs for CRI-O ...
	I0731 18:11:09.721489   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 18:11:09.803315   74203 logs.go:123] Gathering logs for container status ...
	I0731 18:11:09.803349   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 18:11:07.779195   73696 pod_ready.go:102] pod "metrics-server-569cc877fc-64hp4" in "kube-system" namespace has status "Ready":"False"
	I0731 18:11:10.278028   73696 pod_ready.go:102] pod "metrics-server-569cc877fc-64hp4" in "kube-system" namespace has status "Ready":"False"
	I0731 18:11:07.643242   73479 pod_ready.go:102] pod "metrics-server-78fcd8795b-27pkr" in "kube-system" namespace has status "Ready":"False"
	I0731 18:11:10.143895   73479 pod_ready.go:102] pod "metrics-server-78fcd8795b-27pkr" in "kube-system" namespace has status "Ready":"False"
	I0731 18:11:10.271515   73800 pod_ready.go:102] pod "metrics-server-569cc877fc-fzxrw" in "kube-system" namespace has status "Ready":"False"
	I0731 18:11:12.771322   73800 pod_ready.go:102] pod "metrics-server-569cc877fc-fzxrw" in "kube-system" namespace has status "Ready":"False"
	I0731 18:11:12.344355   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:11:12.357122   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 18:11:12.357194   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 18:11:12.392237   74203 cri.go:89] found id: ""
	I0731 18:11:12.392258   74203 logs.go:276] 0 containers: []
	W0731 18:11:12.392267   74203 logs.go:278] No container was found matching "kube-apiserver"
	I0731 18:11:12.392272   74203 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 18:11:12.392339   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 18:11:12.424490   74203 cri.go:89] found id: ""
	I0731 18:11:12.424514   74203 logs.go:276] 0 containers: []
	W0731 18:11:12.424523   74203 logs.go:278] No container was found matching "etcd"
	I0731 18:11:12.424529   74203 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 18:11:12.424587   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 18:11:12.458438   74203 cri.go:89] found id: ""
	I0731 18:11:12.458467   74203 logs.go:276] 0 containers: []
	W0731 18:11:12.458477   74203 logs.go:278] No container was found matching "coredns"
	I0731 18:11:12.458483   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 18:11:12.458545   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 18:11:12.495343   74203 cri.go:89] found id: ""
	I0731 18:11:12.495371   74203 logs.go:276] 0 containers: []
	W0731 18:11:12.495383   74203 logs.go:278] No container was found matching "kube-scheduler"
	I0731 18:11:12.495391   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 18:11:12.495455   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 18:11:12.527285   74203 cri.go:89] found id: ""
	I0731 18:11:12.527314   74203 logs.go:276] 0 containers: []
	W0731 18:11:12.527324   74203 logs.go:278] No container was found matching "kube-proxy"
	I0731 18:11:12.527334   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 18:11:12.527393   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 18:11:12.560341   74203 cri.go:89] found id: ""
	I0731 18:11:12.560369   74203 logs.go:276] 0 containers: []
	W0731 18:11:12.560379   74203 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 18:11:12.560387   74203 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 18:11:12.560444   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 18:11:12.595084   74203 cri.go:89] found id: ""
	I0731 18:11:12.595120   74203 logs.go:276] 0 containers: []
	W0731 18:11:12.595133   74203 logs.go:278] No container was found matching "kindnet"
	I0731 18:11:12.595141   74203 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 18:11:12.595215   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 18:11:12.630666   74203 cri.go:89] found id: ""
	I0731 18:11:12.630692   74203 logs.go:276] 0 containers: []
	W0731 18:11:12.630702   74203 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 18:11:12.630711   74203 logs.go:123] Gathering logs for kubelet ...
	I0731 18:11:12.630727   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 18:11:12.683588   74203 logs.go:123] Gathering logs for dmesg ...
	I0731 18:11:12.683620   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 18:11:12.696899   74203 logs.go:123] Gathering logs for describe nodes ...
	I0731 18:11:12.696925   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 18:11:12.757815   74203 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 18:11:12.757837   74203 logs.go:123] Gathering logs for CRI-O ...
	I0731 18:11:12.757870   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 18:11:12.834888   74203 logs.go:123] Gathering logs for container status ...
	I0731 18:11:12.834927   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 18:11:12.278464   73696 pod_ready.go:102] pod "metrics-server-569cc877fc-64hp4" in "kube-system" namespace has status "Ready":"False"
	I0731 18:11:14.279031   73696 pod_ready.go:102] pod "metrics-server-569cc877fc-64hp4" in "kube-system" namespace has status "Ready":"False"
	I0731 18:11:12.643960   73479 pod_ready.go:102] pod "metrics-server-78fcd8795b-27pkr" in "kube-system" namespace has status "Ready":"False"
	I0731 18:11:15.142811   73479 pod_ready.go:102] pod "metrics-server-78fcd8795b-27pkr" in "kube-system" namespace has status "Ready":"False"
	I0731 18:11:14.771367   73800 pod_ready.go:102] pod "metrics-server-569cc877fc-fzxrw" in "kube-system" namespace has status "Ready":"False"
	I0731 18:11:16.772010   73800 pod_ready.go:102] pod "metrics-server-569cc877fc-fzxrw" in "kube-system" namespace has status "Ready":"False"
	I0731 18:11:19.271857   73800 pod_ready.go:102] pod "metrics-server-569cc877fc-fzxrw" in "kube-system" namespace has status "Ready":"False"
	I0731 18:11:15.372797   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:11:15.386268   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 18:11:15.386356   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 18:11:15.420446   74203 cri.go:89] found id: ""
	I0731 18:11:15.420477   74203 logs.go:276] 0 containers: []
	W0731 18:11:15.420488   74203 logs.go:278] No container was found matching "kube-apiserver"
	I0731 18:11:15.420497   74203 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 18:11:15.420556   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 18:11:15.456092   74203 cri.go:89] found id: ""
	I0731 18:11:15.456118   74203 logs.go:276] 0 containers: []
	W0731 18:11:15.456129   74203 logs.go:278] No container was found matching "etcd"
	I0731 18:11:15.456136   74203 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 18:11:15.456194   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 18:11:15.488277   74203 cri.go:89] found id: ""
	I0731 18:11:15.488304   74203 logs.go:276] 0 containers: []
	W0731 18:11:15.488316   74203 logs.go:278] No container was found matching "coredns"
	I0731 18:11:15.488323   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 18:11:15.488384   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 18:11:15.520701   74203 cri.go:89] found id: ""
	I0731 18:11:15.520730   74203 logs.go:276] 0 containers: []
	W0731 18:11:15.520741   74203 logs.go:278] No container was found matching "kube-scheduler"
	I0731 18:11:15.520749   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 18:11:15.520818   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 18:11:15.552831   74203 cri.go:89] found id: ""
	I0731 18:11:15.552854   74203 logs.go:276] 0 containers: []
	W0731 18:11:15.552862   74203 logs.go:278] No container was found matching "kube-proxy"
	I0731 18:11:15.552867   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 18:11:15.552920   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 18:11:15.589161   74203 cri.go:89] found id: ""
	I0731 18:11:15.589191   74203 logs.go:276] 0 containers: []
	W0731 18:11:15.589203   74203 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 18:11:15.589210   74203 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 18:11:15.589274   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 18:11:15.622501   74203 cri.go:89] found id: ""
	I0731 18:11:15.622532   74203 logs.go:276] 0 containers: []
	W0731 18:11:15.622544   74203 logs.go:278] No container was found matching "kindnet"
	I0731 18:11:15.622552   74203 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 18:11:15.622611   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 18:11:15.654772   74203 cri.go:89] found id: ""
	I0731 18:11:15.654801   74203 logs.go:276] 0 containers: []
	W0731 18:11:15.654815   74203 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 18:11:15.654826   74203 logs.go:123] Gathering logs for kubelet ...
	I0731 18:11:15.654843   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 18:11:15.703103   74203 logs.go:123] Gathering logs for dmesg ...
	I0731 18:11:15.703148   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 18:11:15.716620   74203 logs.go:123] Gathering logs for describe nodes ...
	I0731 18:11:15.716645   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 18:11:15.783391   74203 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 18:11:15.783416   74203 logs.go:123] Gathering logs for CRI-O ...
	I0731 18:11:15.783431   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 18:11:15.857462   74203 logs.go:123] Gathering logs for container status ...
	I0731 18:11:15.857495   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 18:11:18.394223   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:11:18.407297   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 18:11:18.407374   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 18:11:18.439542   74203 cri.go:89] found id: ""
	I0731 18:11:18.439564   74203 logs.go:276] 0 containers: []
	W0731 18:11:18.439572   74203 logs.go:278] No container was found matching "kube-apiserver"
	I0731 18:11:18.439578   74203 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 18:11:18.439625   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 18:11:18.471838   74203 cri.go:89] found id: ""
	I0731 18:11:18.471863   74203 logs.go:276] 0 containers: []
	W0731 18:11:18.471873   74203 logs.go:278] No container was found matching "etcd"
	I0731 18:11:18.471883   74203 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 18:11:18.471943   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 18:11:18.505325   74203 cri.go:89] found id: ""
	I0731 18:11:18.505355   74203 logs.go:276] 0 containers: []
	W0731 18:11:18.505365   74203 logs.go:278] No container was found matching "coredns"
	I0731 18:11:18.505372   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 18:11:18.505432   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 18:11:18.536155   74203 cri.go:89] found id: ""
	I0731 18:11:18.536180   74203 logs.go:276] 0 containers: []
	W0731 18:11:18.536189   74203 logs.go:278] No container was found matching "kube-scheduler"
	I0731 18:11:18.536194   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 18:11:18.536241   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 18:11:18.569301   74203 cri.go:89] found id: ""
	I0731 18:11:18.569329   74203 logs.go:276] 0 containers: []
	W0731 18:11:18.569339   74203 logs.go:278] No container was found matching "kube-proxy"
	I0731 18:11:18.569344   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 18:11:18.569398   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 18:11:18.603053   74203 cri.go:89] found id: ""
	I0731 18:11:18.603079   74203 logs.go:276] 0 containers: []
	W0731 18:11:18.603087   74203 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 18:11:18.603092   74203 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 18:11:18.603170   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 18:11:18.636259   74203 cri.go:89] found id: ""
	I0731 18:11:18.636287   74203 logs.go:276] 0 containers: []
	W0731 18:11:18.636298   74203 logs.go:278] No container was found matching "kindnet"
	I0731 18:11:18.636305   74203 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 18:11:18.636361   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 18:11:18.667839   74203 cri.go:89] found id: ""
	I0731 18:11:18.667863   74203 logs.go:276] 0 containers: []
	W0731 18:11:18.667873   74203 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 18:11:18.667883   74203 logs.go:123] Gathering logs for dmesg ...
	I0731 18:11:18.667897   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 18:11:18.681005   74203 logs.go:123] Gathering logs for describe nodes ...
	I0731 18:11:18.681030   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 18:11:18.747793   74203 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 18:11:18.747875   74203 logs.go:123] Gathering logs for CRI-O ...
	I0731 18:11:18.747892   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 18:11:18.828970   74203 logs.go:123] Gathering logs for container status ...
	I0731 18:11:18.829005   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 18:11:18.866724   74203 logs.go:123] Gathering logs for kubelet ...
	I0731 18:11:18.866749   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 18:11:16.279368   73696 pod_ready.go:102] pod "metrics-server-569cc877fc-64hp4" in "kube-system" namespace has status "Ready":"False"
	I0731 18:11:18.778730   73696 pod_ready.go:102] pod "metrics-server-569cc877fc-64hp4" in "kube-system" namespace has status "Ready":"False"
	I0731 18:11:20.779465   73696 pod_ready.go:102] pod "metrics-server-569cc877fc-64hp4" in "kube-system" namespace has status "Ready":"False"
	I0731 18:11:17.144041   73479 pod_ready.go:102] pod "metrics-server-78fcd8795b-27pkr" in "kube-system" namespace has status "Ready":"False"
	I0731 18:11:19.645356   73479 pod_ready.go:102] pod "metrics-server-78fcd8795b-27pkr" in "kube-system" namespace has status "Ready":"False"
	I0731 18:11:21.272651   73800 pod_ready.go:102] pod "metrics-server-569cc877fc-fzxrw" in "kube-system" namespace has status "Ready":"False"
	I0731 18:11:23.771240   73800 pod_ready.go:102] pod "metrics-server-569cc877fc-fzxrw" in "kube-system" namespace has status "Ready":"False"
	I0731 18:11:21.416598   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:11:21.431968   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 18:11:21.432027   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 18:11:21.469670   74203 cri.go:89] found id: ""
	I0731 18:11:21.469696   74203 logs.go:276] 0 containers: []
	W0731 18:11:21.469703   74203 logs.go:278] No container was found matching "kube-apiserver"
	I0731 18:11:21.469709   74203 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 18:11:21.469762   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 18:11:21.508461   74203 cri.go:89] found id: ""
	I0731 18:11:21.508490   74203 logs.go:276] 0 containers: []
	W0731 18:11:21.508500   74203 logs.go:278] No container was found matching "etcd"
	I0731 18:11:21.508506   74203 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 18:11:21.508570   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 18:11:21.548101   74203 cri.go:89] found id: ""
	I0731 18:11:21.548127   74203 logs.go:276] 0 containers: []
	W0731 18:11:21.548136   74203 logs.go:278] No container was found matching "coredns"
	I0731 18:11:21.548142   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 18:11:21.548204   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 18:11:21.582617   74203 cri.go:89] found id: ""
	I0731 18:11:21.582646   74203 logs.go:276] 0 containers: []
	W0731 18:11:21.582653   74203 logs.go:278] No container was found matching "kube-scheduler"
	I0731 18:11:21.582659   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 18:11:21.582712   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 18:11:21.614185   74203 cri.go:89] found id: ""
	I0731 18:11:21.614210   74203 logs.go:276] 0 containers: []
	W0731 18:11:21.614218   74203 logs.go:278] No container was found matching "kube-proxy"
	I0731 18:11:21.614223   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 18:11:21.614278   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 18:11:21.647596   74203 cri.go:89] found id: ""
	I0731 18:11:21.647619   74203 logs.go:276] 0 containers: []
	W0731 18:11:21.647629   74203 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 18:11:21.647636   74203 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 18:11:21.647693   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 18:11:21.680106   74203 cri.go:89] found id: ""
	I0731 18:11:21.680132   74203 logs.go:276] 0 containers: []
	W0731 18:11:21.680142   74203 logs.go:278] No container was found matching "kindnet"
	I0731 18:11:21.680149   74203 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 18:11:21.680208   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 18:11:21.714708   74203 cri.go:89] found id: ""
	I0731 18:11:21.714733   74203 logs.go:276] 0 containers: []
	W0731 18:11:21.714742   74203 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 18:11:21.714754   74203 logs.go:123] Gathering logs for describe nodes ...
	I0731 18:11:21.714779   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 18:11:21.783425   74203 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 18:11:21.783448   74203 logs.go:123] Gathering logs for CRI-O ...
	I0731 18:11:21.783462   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 18:11:21.859943   74203 logs.go:123] Gathering logs for container status ...
	I0731 18:11:21.859980   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 18:11:21.898374   74203 logs.go:123] Gathering logs for kubelet ...
	I0731 18:11:21.898405   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 18:11:21.945753   74203 logs.go:123] Gathering logs for dmesg ...
	I0731 18:11:21.945784   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 18:11:24.459481   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:11:24.471376   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 18:11:24.471435   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 18:11:24.506474   74203 cri.go:89] found id: ""
	I0731 18:11:24.506502   74203 logs.go:276] 0 containers: []
	W0731 18:11:24.506511   74203 logs.go:278] No container was found matching "kube-apiserver"
	I0731 18:11:24.506516   74203 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 18:11:24.506572   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 18:11:24.547298   74203 cri.go:89] found id: ""
	I0731 18:11:24.547324   74203 logs.go:276] 0 containers: []
	W0731 18:11:24.547332   74203 logs.go:278] No container was found matching "etcd"
	I0731 18:11:24.547337   74203 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 18:11:24.547402   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 18:11:24.579912   74203 cri.go:89] found id: ""
	I0731 18:11:24.579944   74203 logs.go:276] 0 containers: []
	W0731 18:11:24.579955   74203 logs.go:278] No container was found matching "coredns"
	I0731 18:11:24.579963   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 18:11:24.580032   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 18:11:24.613754   74203 cri.go:89] found id: ""
	I0731 18:11:24.613782   74203 logs.go:276] 0 containers: []
	W0731 18:11:24.613791   74203 logs.go:278] No container was found matching "kube-scheduler"
	I0731 18:11:24.613799   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 18:11:24.613859   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 18:11:24.649782   74203 cri.go:89] found id: ""
	I0731 18:11:24.649811   74203 logs.go:276] 0 containers: []
	W0731 18:11:24.649822   74203 logs.go:278] No container was found matching "kube-proxy"
	I0731 18:11:24.649829   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 18:11:24.649888   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 18:11:24.689232   74203 cri.go:89] found id: ""
	I0731 18:11:24.689264   74203 logs.go:276] 0 containers: []
	W0731 18:11:24.689274   74203 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 18:11:24.689283   74203 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 18:11:24.689361   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 18:11:24.727861   74203 cri.go:89] found id: ""
	I0731 18:11:24.727894   74203 logs.go:276] 0 containers: []
	W0731 18:11:24.727902   74203 logs.go:278] No container was found matching "kindnet"
	I0731 18:11:24.727924   74203 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 18:11:24.727983   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 18:11:24.763839   74203 cri.go:89] found id: ""
	I0731 18:11:24.763866   74203 logs.go:276] 0 containers: []
	W0731 18:11:24.763876   74203 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 18:11:24.763886   74203 logs.go:123] Gathering logs for CRI-O ...
	I0731 18:11:24.763901   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 18:11:24.841090   74203 logs.go:123] Gathering logs for container status ...
	I0731 18:11:24.841131   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 18:11:24.877206   74203 logs.go:123] Gathering logs for kubelet ...
	I0731 18:11:24.877231   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 18:11:24.926149   74203 logs.go:123] Gathering logs for dmesg ...
	I0731 18:11:24.926180   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 18:11:24.938795   74203 logs.go:123] Gathering logs for describe nodes ...
	I0731 18:11:24.938822   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 18:11:25.008349   74203 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 18:11:23.279256   73696 pod_ready.go:102] pod "metrics-server-569cc877fc-64hp4" in "kube-system" namespace has status "Ready":"False"
	I0731 18:11:25.778644   73696 pod_ready.go:102] pod "metrics-server-569cc877fc-64hp4" in "kube-system" namespace has status "Ready":"False"
	I0731 18:11:22.143312   73479 pod_ready.go:102] pod "metrics-server-78fcd8795b-27pkr" in "kube-system" namespace has status "Ready":"False"
	I0731 18:11:24.144259   73479 pod_ready.go:102] pod "metrics-server-78fcd8795b-27pkr" in "kube-system" namespace has status "Ready":"False"
	I0731 18:11:26.144310   73479 pod_ready.go:102] pod "metrics-server-78fcd8795b-27pkr" in "kube-system" namespace has status "Ready":"False"
	I0731 18:11:25.771403   73800 pod_ready.go:102] pod "metrics-server-569cc877fc-fzxrw" in "kube-system" namespace has status "Ready":"False"
	I0731 18:11:28.270613   73800 pod_ready.go:102] pod "metrics-server-569cc877fc-fzxrw" in "kube-system" namespace has status "Ready":"False"
	I0731 18:11:27.509192   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:11:27.522506   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 18:11:27.522582   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 18:11:27.557915   74203 cri.go:89] found id: ""
	I0731 18:11:27.557943   74203 logs.go:276] 0 containers: []
	W0731 18:11:27.557954   74203 logs.go:278] No container was found matching "kube-apiserver"
	I0731 18:11:27.557962   74203 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 18:11:27.558019   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 18:11:27.594295   74203 cri.go:89] found id: ""
	I0731 18:11:27.594322   74203 logs.go:276] 0 containers: []
	W0731 18:11:27.594332   74203 logs.go:278] No container was found matching "etcd"
	I0731 18:11:27.594348   74203 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 18:11:27.594410   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 18:11:27.626830   74203 cri.go:89] found id: ""
	I0731 18:11:27.626857   74203 logs.go:276] 0 containers: []
	W0731 18:11:27.626868   74203 logs.go:278] No container was found matching "coredns"
	I0731 18:11:27.626875   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 18:11:27.626964   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 18:11:27.662062   74203 cri.go:89] found id: ""
	I0731 18:11:27.662084   74203 logs.go:276] 0 containers: []
	W0731 18:11:27.662092   74203 logs.go:278] No container was found matching "kube-scheduler"
	I0731 18:11:27.662099   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 18:11:27.662158   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 18:11:27.695686   74203 cri.go:89] found id: ""
	I0731 18:11:27.695715   74203 logs.go:276] 0 containers: []
	W0731 18:11:27.695727   74203 logs.go:278] No container was found matching "kube-proxy"
	I0731 18:11:27.695735   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 18:11:27.695785   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 18:11:27.729444   74203 cri.go:89] found id: ""
	I0731 18:11:27.729467   74203 logs.go:276] 0 containers: []
	W0731 18:11:27.729475   74203 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 18:11:27.729481   74203 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 18:11:27.729531   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 18:11:27.761889   74203 cri.go:89] found id: ""
	I0731 18:11:27.761916   74203 logs.go:276] 0 containers: []
	W0731 18:11:27.761926   74203 logs.go:278] No container was found matching "kindnet"
	I0731 18:11:27.761934   74203 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 18:11:27.761995   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 18:11:27.796178   74203 cri.go:89] found id: ""
	I0731 18:11:27.796199   74203 logs.go:276] 0 containers: []
	W0731 18:11:27.796206   74203 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 18:11:27.796214   74203 logs.go:123] Gathering logs for kubelet ...
	I0731 18:11:27.796227   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 18:11:27.849613   74203 logs.go:123] Gathering logs for dmesg ...
	I0731 18:11:27.849650   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 18:11:27.862892   74203 logs.go:123] Gathering logs for describe nodes ...
	I0731 18:11:27.862923   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 18:11:27.928691   74203 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 18:11:27.928717   74203 logs.go:123] Gathering logs for CRI-O ...
	I0731 18:11:27.928740   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 18:11:28.006310   74203 logs.go:123] Gathering logs for container status ...
	I0731 18:11:28.006340   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 18:11:27.779125   73696 pod_ready.go:102] pod "metrics-server-569cc877fc-64hp4" in "kube-system" namespace has status "Ready":"False"
	I0731 18:11:30.279252   73696 pod_ready.go:102] pod "metrics-server-569cc877fc-64hp4" in "kube-system" namespace has status "Ready":"False"
	I0731 18:11:28.643172   73479 pod_ready.go:102] pod "metrics-server-78fcd8795b-27pkr" in "kube-system" namespace has status "Ready":"False"
	I0731 18:11:30.645474   73479 pod_ready.go:102] pod "metrics-server-78fcd8795b-27pkr" in "kube-system" namespace has status "Ready":"False"
	I0731 18:11:30.271016   73800 pod_ready.go:102] pod "metrics-server-569cc877fc-fzxrw" in "kube-system" namespace has status "Ready":"False"
	I0731 18:11:32.771684   73800 pod_ready.go:102] pod "metrics-server-569cc877fc-fzxrw" in "kube-system" namespace has status "Ready":"False"
	I0731 18:11:30.543065   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:11:30.555951   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 18:11:30.556013   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 18:11:30.597411   74203 cri.go:89] found id: ""
	I0731 18:11:30.597440   74203 logs.go:276] 0 containers: []
	W0731 18:11:30.597451   74203 logs.go:278] No container was found matching "kube-apiserver"
	I0731 18:11:30.597458   74203 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 18:11:30.597518   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 18:11:30.629836   74203 cri.go:89] found id: ""
	I0731 18:11:30.629866   74203 logs.go:276] 0 containers: []
	W0731 18:11:30.629873   74203 logs.go:278] No container was found matching "etcd"
	I0731 18:11:30.629878   74203 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 18:11:30.629932   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 18:11:30.667402   74203 cri.go:89] found id: ""
	I0731 18:11:30.667432   74203 logs.go:276] 0 containers: []
	W0731 18:11:30.667443   74203 logs.go:278] No container was found matching "coredns"
	I0731 18:11:30.667450   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 18:11:30.667513   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 18:11:30.701677   74203 cri.go:89] found id: ""
	I0731 18:11:30.701708   74203 logs.go:276] 0 containers: []
	W0731 18:11:30.701716   74203 logs.go:278] No container was found matching "kube-scheduler"
	I0731 18:11:30.701722   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 18:11:30.701773   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 18:11:30.736685   74203 cri.go:89] found id: ""
	I0731 18:11:30.736714   74203 logs.go:276] 0 containers: []
	W0731 18:11:30.736721   74203 logs.go:278] No container was found matching "kube-proxy"
	I0731 18:11:30.736736   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 18:11:30.736786   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 18:11:30.771501   74203 cri.go:89] found id: ""
	I0731 18:11:30.771526   74203 logs.go:276] 0 containers: []
	W0731 18:11:30.771543   74203 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 18:11:30.771549   74203 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 18:11:30.771597   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 18:11:30.805878   74203 cri.go:89] found id: ""
	I0731 18:11:30.805902   74203 logs.go:276] 0 containers: []
	W0731 18:11:30.805911   74203 logs.go:278] No container was found matching "kindnet"
	I0731 18:11:30.805921   74203 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 18:11:30.805966   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 18:11:30.839001   74203 cri.go:89] found id: ""
	I0731 18:11:30.839027   74203 logs.go:276] 0 containers: []
	W0731 18:11:30.839038   74203 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 18:11:30.839048   74203 logs.go:123] Gathering logs for kubelet ...
	I0731 18:11:30.839062   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 18:11:30.893357   74203 logs.go:123] Gathering logs for dmesg ...
	I0731 18:11:30.893387   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 18:11:30.907222   74203 logs.go:123] Gathering logs for describe nodes ...
	I0731 18:11:30.907248   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 18:11:30.985626   74203 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 18:11:30.985648   74203 logs.go:123] Gathering logs for CRI-O ...
	I0731 18:11:30.985668   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 18:11:31.067900   74203 logs.go:123] Gathering logs for container status ...
	I0731 18:11:31.067948   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 18:11:33.607259   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:11:33.621596   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 18:11:33.621656   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 18:11:33.663616   74203 cri.go:89] found id: ""
	I0731 18:11:33.663642   74203 logs.go:276] 0 containers: []
	W0731 18:11:33.663649   74203 logs.go:278] No container was found matching "kube-apiserver"
	I0731 18:11:33.663655   74203 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 18:11:33.663704   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 18:11:33.702133   74203 cri.go:89] found id: ""
	I0731 18:11:33.702159   74203 logs.go:276] 0 containers: []
	W0731 18:11:33.702167   74203 logs.go:278] No container was found matching "etcd"
	I0731 18:11:33.702173   74203 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 18:11:33.702226   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 18:11:33.733730   74203 cri.go:89] found id: ""
	I0731 18:11:33.733752   74203 logs.go:276] 0 containers: []
	W0731 18:11:33.733760   74203 logs.go:278] No container was found matching "coredns"
	I0731 18:11:33.733765   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 18:11:33.733813   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 18:11:33.765036   74203 cri.go:89] found id: ""
	I0731 18:11:33.765064   74203 logs.go:276] 0 containers: []
	W0731 18:11:33.765074   74203 logs.go:278] No container was found matching "kube-scheduler"
	I0731 18:11:33.765080   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 18:11:33.765128   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 18:11:33.799604   74203 cri.go:89] found id: ""
	I0731 18:11:33.799630   74203 logs.go:276] 0 containers: []
	W0731 18:11:33.799640   74203 logs.go:278] No container was found matching "kube-proxy"
	I0731 18:11:33.799648   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 18:11:33.799716   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 18:11:33.831434   74203 cri.go:89] found id: ""
	I0731 18:11:33.831455   74203 logs.go:276] 0 containers: []
	W0731 18:11:33.831464   74203 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 18:11:33.831469   74203 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 18:11:33.831514   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 18:11:33.862975   74203 cri.go:89] found id: ""
	I0731 18:11:33.863004   74203 logs.go:276] 0 containers: []
	W0731 18:11:33.863014   74203 logs.go:278] No container was found matching "kindnet"
	I0731 18:11:33.863022   74203 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 18:11:33.863090   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 18:11:33.895674   74203 cri.go:89] found id: ""
	I0731 18:11:33.895704   74203 logs.go:276] 0 containers: []
	W0731 18:11:33.895714   74203 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 18:11:33.895723   74203 logs.go:123] Gathering logs for container status ...
	I0731 18:11:33.895737   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 18:11:33.931954   74203 logs.go:123] Gathering logs for kubelet ...
	I0731 18:11:33.931980   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 18:11:33.985353   74203 logs.go:123] Gathering logs for dmesg ...
	I0731 18:11:33.985385   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 18:11:33.997857   74203 logs.go:123] Gathering logs for describe nodes ...
	I0731 18:11:33.997882   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 18:11:34.060523   74203 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 18:11:34.060553   74203 logs.go:123] Gathering logs for CRI-O ...
	I0731 18:11:34.060575   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 18:11:32.778212   73696 pod_ready.go:102] pod "metrics-server-569cc877fc-64hp4" in "kube-system" namespace has status "Ready":"False"
	I0731 18:11:35.278655   73696 pod_ready.go:102] pod "metrics-server-569cc877fc-64hp4" in "kube-system" namespace has status "Ready":"False"
	I0731 18:11:33.151579   73479 pod_ready.go:102] pod "metrics-server-78fcd8795b-27pkr" in "kube-system" namespace has status "Ready":"False"
	I0731 18:11:35.643326   73479 pod_ready.go:102] pod "metrics-server-78fcd8795b-27pkr" in "kube-system" namespace has status "Ready":"False"
	I0731 18:11:34.771873   73800 pod_ready.go:102] pod "metrics-server-569cc877fc-fzxrw" in "kube-system" namespace has status "Ready":"False"
	I0731 18:11:36.772309   73800 pod_ready.go:102] pod "metrics-server-569cc877fc-fzxrw" in "kube-system" namespace has status "Ready":"False"
	I0731 18:11:39.271582   73800 pod_ready.go:102] pod "metrics-server-569cc877fc-fzxrw" in "kube-system" namespace has status "Ready":"False"
	I0731 18:11:36.643003   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:11:36.659306   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 18:11:36.659385   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 18:11:36.717097   74203 cri.go:89] found id: ""
	I0731 18:11:36.717129   74203 logs.go:276] 0 containers: []
	W0731 18:11:36.717141   74203 logs.go:278] No container was found matching "kube-apiserver"
	I0731 18:11:36.717149   74203 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 18:11:36.717212   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 18:11:36.750288   74203 cri.go:89] found id: ""
	I0731 18:11:36.750314   74203 logs.go:276] 0 containers: []
	W0731 18:11:36.750325   74203 logs.go:278] No container was found matching "etcd"
	I0731 18:11:36.750331   74203 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 18:11:36.750391   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 18:11:36.785272   74203 cri.go:89] found id: ""
	I0731 18:11:36.785296   74203 logs.go:276] 0 containers: []
	W0731 18:11:36.785304   74203 logs.go:278] No container was found matching "coredns"
	I0731 18:11:36.785310   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 18:11:36.785360   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 18:11:36.818927   74203 cri.go:89] found id: ""
	I0731 18:11:36.818953   74203 logs.go:276] 0 containers: []
	W0731 18:11:36.818965   74203 logs.go:278] No container was found matching "kube-scheduler"
	I0731 18:11:36.818972   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 18:11:36.819020   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 18:11:36.854562   74203 cri.go:89] found id: ""
	I0731 18:11:36.854593   74203 logs.go:276] 0 containers: []
	W0731 18:11:36.854602   74203 logs.go:278] No container was found matching "kube-proxy"
	I0731 18:11:36.854607   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 18:11:36.854670   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 18:11:36.887786   74203 cri.go:89] found id: ""
	I0731 18:11:36.887814   74203 logs.go:276] 0 containers: []
	W0731 18:11:36.887825   74203 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 18:11:36.887833   74203 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 18:11:36.887893   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 18:11:36.919418   74203 cri.go:89] found id: ""
	I0731 18:11:36.919446   74203 logs.go:276] 0 containers: []
	W0731 18:11:36.919457   74203 logs.go:278] No container was found matching "kindnet"
	I0731 18:11:36.919471   74203 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 18:11:36.919533   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 18:11:36.956934   74203 cri.go:89] found id: ""
	I0731 18:11:36.956957   74203 logs.go:276] 0 containers: []
	W0731 18:11:36.956964   74203 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 18:11:36.956971   74203 logs.go:123] Gathering logs for kubelet ...
	I0731 18:11:36.956989   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 18:11:37.003755   74203 logs.go:123] Gathering logs for dmesg ...
	I0731 18:11:37.003783   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 18:11:37.016977   74203 logs.go:123] Gathering logs for describe nodes ...
	I0731 18:11:37.017004   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 18:11:37.091617   74203 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 18:11:37.091646   74203 logs.go:123] Gathering logs for CRI-O ...
	I0731 18:11:37.091662   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 18:11:37.170870   74203 logs.go:123] Gathering logs for container status ...
	I0731 18:11:37.170903   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 18:11:39.714271   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:11:39.730306   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 18:11:39.730383   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 18:11:39.765368   74203 cri.go:89] found id: ""
	I0731 18:11:39.765399   74203 logs.go:276] 0 containers: []
	W0731 18:11:39.765407   74203 logs.go:278] No container was found matching "kube-apiserver"
	I0731 18:11:39.765412   74203 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 18:11:39.765471   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 18:11:39.800394   74203 cri.go:89] found id: ""
	I0731 18:11:39.800419   74203 logs.go:276] 0 containers: []
	W0731 18:11:39.800427   74203 logs.go:278] No container was found matching "etcd"
	I0731 18:11:39.800433   74203 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 18:11:39.800486   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 18:11:39.834861   74203 cri.go:89] found id: ""
	I0731 18:11:39.834889   74203 logs.go:276] 0 containers: []
	W0731 18:11:39.834898   74203 logs.go:278] No container was found matching "coredns"
	I0731 18:11:39.834903   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 18:11:39.834958   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 18:11:39.868108   74203 cri.go:89] found id: ""
	I0731 18:11:39.868132   74203 logs.go:276] 0 containers: []
	W0731 18:11:39.868141   74203 logs.go:278] No container was found matching "kube-scheduler"
	I0731 18:11:39.868146   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 18:11:39.868220   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 18:11:39.902097   74203 cri.go:89] found id: ""
	I0731 18:11:39.902120   74203 logs.go:276] 0 containers: []
	W0731 18:11:39.902128   74203 logs.go:278] No container was found matching "kube-proxy"
	I0731 18:11:39.902134   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 18:11:39.902184   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 18:11:39.933073   74203 cri.go:89] found id: ""
	I0731 18:11:39.933100   74203 logs.go:276] 0 containers: []
	W0731 18:11:39.933109   74203 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 18:11:39.933114   74203 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 18:11:39.933165   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 18:11:39.965748   74203 cri.go:89] found id: ""
	I0731 18:11:39.965775   74203 logs.go:276] 0 containers: []
	W0731 18:11:39.965785   74203 logs.go:278] No container was found matching "kindnet"
	I0731 18:11:39.965796   74203 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 18:11:39.965856   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 18:11:39.998164   74203 cri.go:89] found id: ""
	I0731 18:11:39.998189   74203 logs.go:276] 0 containers: []
	W0731 18:11:39.998197   74203 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 18:11:39.998205   74203 logs.go:123] Gathering logs for kubelet ...
	I0731 18:11:39.998222   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 18:11:40.049991   74203 logs.go:123] Gathering logs for dmesg ...
	I0731 18:11:40.050027   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 18:11:40.063676   74203 logs.go:123] Gathering logs for describe nodes ...
	I0731 18:11:40.063705   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 18:11:40.125855   74203 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 18:11:40.125880   74203 logs.go:123] Gathering logs for CRI-O ...
	I0731 18:11:40.125896   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 18:11:40.207937   74203 logs.go:123] Gathering logs for container status ...
	I0731 18:11:40.207970   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 18:11:37.778894   73696 pod_ready.go:102] pod "metrics-server-569cc877fc-64hp4" in "kube-system" namespace has status "Ready":"False"
	I0731 18:11:40.278489   73696 pod_ready.go:102] pod "metrics-server-569cc877fc-64hp4" in "kube-system" namespace has status "Ready":"False"
	I0731 18:11:37.643651   73479 pod_ready.go:102] pod "metrics-server-78fcd8795b-27pkr" in "kube-system" namespace has status "Ready":"False"
	I0731 18:11:40.144731   73479 pod_ready.go:102] pod "metrics-server-78fcd8795b-27pkr" in "kube-system" namespace has status "Ready":"False"
	I0731 18:11:41.271897   73800 pod_ready.go:102] pod "metrics-server-569cc877fc-fzxrw" in "kube-system" namespace has status "Ready":"False"
	I0731 18:11:43.771556   73800 pod_ready.go:102] pod "metrics-server-569cc877fc-fzxrw" in "kube-system" namespace has status "Ready":"False"
	I0731 18:11:42.746315   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:11:42.758998   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 18:11:42.759053   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 18:11:42.791921   74203 cri.go:89] found id: ""
	I0731 18:11:42.791946   74203 logs.go:276] 0 containers: []
	W0731 18:11:42.791954   74203 logs.go:278] No container was found matching "kube-apiserver"
	I0731 18:11:42.791958   74203 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 18:11:42.792004   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 18:11:42.822888   74203 cri.go:89] found id: ""
	I0731 18:11:42.822914   74203 logs.go:276] 0 containers: []
	W0731 18:11:42.822922   74203 logs.go:278] No container was found matching "etcd"
	I0731 18:11:42.822927   74203 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 18:11:42.822973   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 18:11:42.854516   74203 cri.go:89] found id: ""
	I0731 18:11:42.854545   74203 logs.go:276] 0 containers: []
	W0731 18:11:42.854564   74203 logs.go:278] No container was found matching "coredns"
	I0731 18:11:42.854574   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 18:11:42.854638   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 18:11:42.890933   74203 cri.go:89] found id: ""
	I0731 18:11:42.890955   74203 logs.go:276] 0 containers: []
	W0731 18:11:42.890963   74203 logs.go:278] No container was found matching "kube-scheduler"
	I0731 18:11:42.890968   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 18:11:42.891013   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 18:11:42.925170   74203 cri.go:89] found id: ""
	I0731 18:11:42.925196   74203 logs.go:276] 0 containers: []
	W0731 18:11:42.925206   74203 logs.go:278] No container was found matching "kube-proxy"
	I0731 18:11:42.925213   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 18:11:42.925273   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 18:11:42.959845   74203 cri.go:89] found id: ""
	I0731 18:11:42.959868   74203 logs.go:276] 0 containers: []
	W0731 18:11:42.959876   74203 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 18:11:42.959881   74203 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 18:11:42.959938   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 18:11:42.997305   74203 cri.go:89] found id: ""
	I0731 18:11:42.997346   74203 logs.go:276] 0 containers: []
	W0731 18:11:42.997358   74203 logs.go:278] No container was found matching "kindnet"
	I0731 18:11:42.997366   74203 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 18:11:42.997427   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 18:11:43.030663   74203 cri.go:89] found id: ""
	I0731 18:11:43.030690   74203 logs.go:276] 0 containers: []
	W0731 18:11:43.030700   74203 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 18:11:43.030711   74203 logs.go:123] Gathering logs for describe nodes ...
	I0731 18:11:43.030725   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 18:11:43.112280   74203 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 18:11:43.112303   74203 logs.go:123] Gathering logs for CRI-O ...
	I0731 18:11:43.112318   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 18:11:43.209002   74203 logs.go:123] Gathering logs for container status ...
	I0731 18:11:43.209035   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 18:11:43.249596   74203 logs.go:123] Gathering logs for kubelet ...
	I0731 18:11:43.249629   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 18:11:43.302419   74203 logs.go:123] Gathering logs for dmesg ...
	I0731 18:11:43.302449   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 18:11:42.278874   73696 pod_ready.go:102] pod "metrics-server-569cc877fc-64hp4" in "kube-system" namespace has status "Ready":"False"
	I0731 18:11:44.273355   73696 pod_ready.go:81] duration metric: took 4m0.000454583s for pod "metrics-server-569cc877fc-64hp4" in "kube-system" namespace to be "Ready" ...
	E0731 18:11:44.273380   73696 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-569cc877fc-64hp4" in "kube-system" namespace to be "Ready" (will not retry!)
	I0731 18:11:44.273399   73696 pod_ready.go:38] duration metric: took 4m8.019714552s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0731 18:11:44.273430   73696 kubeadm.go:597] duration metric: took 4m16.379038728s to restartPrimaryControlPlane
	W0731 18:11:44.273506   73696 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0731 18:11:44.273531   73696 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
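	At this point process 73696 has waited the full 4m0s for metrics-server, gives up on restarting the existing control plane, and falls back to a full kubeadm reset of the node. The equivalent manual step, copied from the command above (the binary path and CRI socket are the ones the log uses):

	    # wipe the existing control-plane state so the cluster can be re-initialized
	    sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" \
	      kubeadm reset --cri-socket /var/run/crio/crio.sock --force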
	I0731 18:11:42.643165   73479 pod_ready.go:102] pod "metrics-server-78fcd8795b-27pkr" in "kube-system" namespace has status "Ready":"False"
	I0731 18:11:44.644976   73479 pod_ready.go:102] pod "metrics-server-78fcd8795b-27pkr" in "kube-system" namespace has status "Ready":"False"
	I0731 18:11:46.271751   73800 pod_ready.go:102] pod "metrics-server-569cc877fc-fzxrw" in "kube-system" namespace has status "Ready":"False"
	I0731 18:11:48.771274   73800 pod_ready.go:102] pod "metrics-server-569cc877fc-fzxrw" in "kube-system" namespace has status "Ready":"False"
	I0731 18:11:45.816910   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:11:45.829909   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 18:11:45.829976   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 18:11:45.865534   74203 cri.go:89] found id: ""
	I0731 18:11:45.865561   74203 logs.go:276] 0 containers: []
	W0731 18:11:45.865572   74203 logs.go:278] No container was found matching "kube-apiserver"
	I0731 18:11:45.865584   74203 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 18:11:45.865646   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 18:11:45.901552   74203 cri.go:89] found id: ""
	I0731 18:11:45.901585   74203 logs.go:276] 0 containers: []
	W0731 18:11:45.901593   74203 logs.go:278] No container was found matching "etcd"
	I0731 18:11:45.901598   74203 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 18:11:45.901678   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 18:11:45.938790   74203 cri.go:89] found id: ""
	I0731 18:11:45.938820   74203 logs.go:276] 0 containers: []
	W0731 18:11:45.938842   74203 logs.go:278] No container was found matching "coredns"
	I0731 18:11:45.938859   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 18:11:45.938926   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 18:11:45.971502   74203 cri.go:89] found id: ""
	I0731 18:11:45.971534   74203 logs.go:276] 0 containers: []
	W0731 18:11:45.971546   74203 logs.go:278] No container was found matching "kube-scheduler"
	I0731 18:11:45.971553   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 18:11:45.971620   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 18:11:46.009281   74203 cri.go:89] found id: ""
	I0731 18:11:46.009316   74203 logs.go:276] 0 containers: []
	W0731 18:11:46.009327   74203 logs.go:278] No container was found matching "kube-proxy"
	I0731 18:11:46.009335   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 18:11:46.009399   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 18:11:46.042899   74203 cri.go:89] found id: ""
	I0731 18:11:46.042928   74203 logs.go:276] 0 containers: []
	W0731 18:11:46.042939   74203 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 18:11:46.042947   74203 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 18:11:46.043005   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 18:11:46.079982   74203 cri.go:89] found id: ""
	I0731 18:11:46.080013   74203 logs.go:276] 0 containers: []
	W0731 18:11:46.080024   74203 logs.go:278] No container was found matching "kindnet"
	I0731 18:11:46.080031   74203 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 18:11:46.080098   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 18:11:46.113136   74203 cri.go:89] found id: ""
	I0731 18:11:46.113168   74203 logs.go:276] 0 containers: []
	W0731 18:11:46.113179   74203 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 18:11:46.113191   74203 logs.go:123] Gathering logs for kubelet ...
	I0731 18:11:46.113206   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 18:11:46.165818   74203 logs.go:123] Gathering logs for dmesg ...
	I0731 18:11:46.165855   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 18:11:46.181058   74203 logs.go:123] Gathering logs for describe nodes ...
	I0731 18:11:46.181083   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 18:11:46.256805   74203 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 18:11:46.256826   74203 logs.go:123] Gathering logs for CRI-O ...
	I0731 18:11:46.256838   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 18:11:46.353045   74203 logs.go:123] Gathering logs for container status ...
	I0731 18:11:46.353093   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 18:11:48.894656   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:11:48.910648   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 18:11:48.910723   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 18:11:48.941080   74203 cri.go:89] found id: ""
	I0731 18:11:48.941103   74203 logs.go:276] 0 containers: []
	W0731 18:11:48.941111   74203 logs.go:278] No container was found matching "kube-apiserver"
	I0731 18:11:48.941117   74203 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 18:11:48.941164   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 18:11:48.972113   74203 cri.go:89] found id: ""
	I0731 18:11:48.972136   74203 logs.go:276] 0 containers: []
	W0731 18:11:48.972146   74203 logs.go:278] No container was found matching "etcd"
	I0731 18:11:48.972151   74203 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 18:11:48.972208   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 18:11:49.004521   74203 cri.go:89] found id: ""
	I0731 18:11:49.004547   74203 logs.go:276] 0 containers: []
	W0731 18:11:49.004557   74203 logs.go:278] No container was found matching "coredns"
	I0731 18:11:49.004571   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 18:11:49.004658   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 18:11:49.036600   74203 cri.go:89] found id: ""
	I0731 18:11:49.036622   74203 logs.go:276] 0 containers: []
	W0731 18:11:49.036629   74203 logs.go:278] No container was found matching "kube-scheduler"
	I0731 18:11:49.036635   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 18:11:49.036683   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 18:11:49.071397   74203 cri.go:89] found id: ""
	I0731 18:11:49.071426   74203 logs.go:276] 0 containers: []
	W0731 18:11:49.071436   74203 logs.go:278] No container was found matching "kube-proxy"
	I0731 18:11:49.071444   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 18:11:49.071501   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 18:11:49.108907   74203 cri.go:89] found id: ""
	I0731 18:11:49.108933   74203 logs.go:276] 0 containers: []
	W0731 18:11:49.108944   74203 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 18:11:49.108952   74203 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 18:11:49.109007   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 18:11:49.141808   74203 cri.go:89] found id: ""
	I0731 18:11:49.141834   74203 logs.go:276] 0 containers: []
	W0731 18:11:49.141844   74203 logs.go:278] No container was found matching "kindnet"
	I0731 18:11:49.141856   74203 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 18:11:49.141917   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 18:11:49.174063   74203 cri.go:89] found id: ""
	I0731 18:11:49.174087   74203 logs.go:276] 0 containers: []
	W0731 18:11:49.174095   74203 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 18:11:49.174104   74203 logs.go:123] Gathering logs for container status ...
	I0731 18:11:49.174116   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 18:11:49.212152   74203 logs.go:123] Gathering logs for kubelet ...
	I0731 18:11:49.212181   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 18:11:49.267297   74203 logs.go:123] Gathering logs for dmesg ...
	I0731 18:11:49.267324   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 18:11:49.281342   74203 logs.go:123] Gathering logs for describe nodes ...
	I0731 18:11:49.281365   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 18:11:49.349843   74203 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 18:11:49.349866   74203 logs.go:123] Gathering logs for CRI-O ...
	I0731 18:11:49.349882   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 18:11:47.144588   73479 pod_ready.go:102] pod "metrics-server-78fcd8795b-27pkr" in "kube-system" namespace has status "Ready":"False"
	I0731 18:11:49.644395   73479 pod_ready.go:102] pod "metrics-server-78fcd8795b-27pkr" in "kube-system" namespace has status "Ready":"False"
	I0731 18:11:51.271203   73800 pod_ready.go:102] pod "metrics-server-569cc877fc-fzxrw" in "kube-system" namespace has status "Ready":"False"
	I0731 18:11:53.770849   73800 pod_ready.go:102] pod "metrics-server-569cc877fc-fzxrw" in "kube-system" namespace has status "Ready":"False"
	I0731 18:11:51.927764   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:11:51.940480   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 18:11:51.940539   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 18:11:51.973731   74203 cri.go:89] found id: ""
	I0731 18:11:51.973759   74203 logs.go:276] 0 containers: []
	W0731 18:11:51.973768   74203 logs.go:278] No container was found matching "kube-apiserver"
	I0731 18:11:51.973780   74203 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 18:11:51.973837   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 18:11:52.003761   74203 cri.go:89] found id: ""
	I0731 18:11:52.003783   74203 logs.go:276] 0 containers: []
	W0731 18:11:52.003790   74203 logs.go:278] No container was found matching "etcd"
	I0731 18:11:52.003795   74203 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 18:11:52.003844   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 18:11:52.035009   74203 cri.go:89] found id: ""
	I0731 18:11:52.035028   74203 logs.go:276] 0 containers: []
	W0731 18:11:52.035035   74203 logs.go:278] No container was found matching "coredns"
	I0731 18:11:52.035041   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 18:11:52.035100   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 18:11:52.065475   74203 cri.go:89] found id: ""
	I0731 18:11:52.065501   74203 logs.go:276] 0 containers: []
	W0731 18:11:52.065509   74203 logs.go:278] No container was found matching "kube-scheduler"
	I0731 18:11:52.065515   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 18:11:52.065574   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 18:11:52.097529   74203 cri.go:89] found id: ""
	I0731 18:11:52.097558   74203 logs.go:276] 0 containers: []
	W0731 18:11:52.097567   74203 logs.go:278] No container was found matching "kube-proxy"
	I0731 18:11:52.097573   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 18:11:52.097622   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 18:11:52.128881   74203 cri.go:89] found id: ""
	I0731 18:11:52.128909   74203 logs.go:276] 0 containers: []
	W0731 18:11:52.128917   74203 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 18:11:52.128923   74203 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 18:11:52.128974   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 18:11:52.159894   74203 cri.go:89] found id: ""
	I0731 18:11:52.159921   74203 logs.go:276] 0 containers: []
	W0731 18:11:52.159931   74203 logs.go:278] No container was found matching "kindnet"
	I0731 18:11:52.159939   74203 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 18:11:52.159998   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 18:11:52.191955   74203 cri.go:89] found id: ""
	I0731 18:11:52.191981   74203 logs.go:276] 0 containers: []
	W0731 18:11:52.191990   74203 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 18:11:52.191999   74203 logs.go:123] Gathering logs for kubelet ...
	I0731 18:11:52.192009   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 18:11:52.246389   74203 logs.go:123] Gathering logs for dmesg ...
	I0731 18:11:52.246423   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 18:11:52.260226   74203 logs.go:123] Gathering logs for describe nodes ...
	I0731 18:11:52.260253   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 18:11:52.328423   74203 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 18:11:52.328447   74203 logs.go:123] Gathering logs for CRI-O ...
	I0731 18:11:52.328459   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 18:11:52.408456   74203 logs.go:123] Gathering logs for container status ...
	I0731 18:11:52.408495   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 18:11:54.947734   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:11:54.960359   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 18:11:54.960420   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 18:11:54.994231   74203 cri.go:89] found id: ""
	I0731 18:11:54.994256   74203 logs.go:276] 0 containers: []
	W0731 18:11:54.994264   74203 logs.go:278] No container was found matching "kube-apiserver"
	I0731 18:11:54.994270   74203 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 18:11:54.994332   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 18:11:55.027323   74203 cri.go:89] found id: ""
	I0731 18:11:55.027364   74203 logs.go:276] 0 containers: []
	W0731 18:11:55.027374   74203 logs.go:278] No container was found matching "etcd"
	I0731 18:11:55.027382   74203 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 18:11:55.027440   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 18:11:55.061741   74203 cri.go:89] found id: ""
	I0731 18:11:55.061763   74203 logs.go:276] 0 containers: []
	W0731 18:11:55.061771   74203 logs.go:278] No container was found matching "coredns"
	I0731 18:11:55.061776   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 18:11:55.061822   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 18:11:55.100685   74203 cri.go:89] found id: ""
	I0731 18:11:55.100712   74203 logs.go:276] 0 containers: []
	W0731 18:11:55.100720   74203 logs.go:278] No container was found matching "kube-scheduler"
	I0731 18:11:55.100726   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 18:11:55.100780   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 18:11:55.141917   74203 cri.go:89] found id: ""
	I0731 18:11:55.141958   74203 logs.go:276] 0 containers: []
	W0731 18:11:55.141971   74203 logs.go:278] No container was found matching "kube-proxy"
	I0731 18:11:55.141980   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 18:11:55.142054   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 18:11:55.176669   74203 cri.go:89] found id: ""
	I0731 18:11:55.176702   74203 logs.go:276] 0 containers: []
	W0731 18:11:55.176711   74203 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 18:11:55.176718   74203 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 18:11:55.176780   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 18:11:55.209795   74203 cri.go:89] found id: ""
	I0731 18:11:55.209829   74203 logs.go:276] 0 containers: []
	W0731 18:11:55.209842   74203 logs.go:278] No container was found matching "kindnet"
	I0731 18:11:55.209850   74203 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 18:11:55.209915   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 18:11:55.244503   74203 cri.go:89] found id: ""
	I0731 18:11:55.244527   74203 logs.go:276] 0 containers: []
	W0731 18:11:55.244537   74203 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 18:11:55.244556   74203 logs.go:123] Gathering logs for CRI-O ...
	I0731 18:11:55.244572   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 18:11:55.320033   74203 logs.go:123] Gathering logs for container status ...
	I0731 18:11:55.320071   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 18:11:52.143803   73479 pod_ready.go:102] pod "metrics-server-78fcd8795b-27pkr" in "kube-system" namespace has status "Ready":"False"
	I0731 18:11:54.644223   73479 pod_ready.go:102] pod "metrics-server-78fcd8795b-27pkr" in "kube-system" namespace has status "Ready":"False"
	I0731 18:11:56.273321   73800 pod_ready.go:102] pod "metrics-server-569cc877fc-fzxrw" in "kube-system" namespace has status "Ready":"False"
	I0731 18:11:58.772541   73800 pod_ready.go:102] pod "metrics-server-569cc877fc-fzxrw" in "kube-system" namespace has status "Ready":"False"
	I0731 18:11:55.357684   74203 logs.go:123] Gathering logs for kubelet ...
	I0731 18:11:55.357719   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 18:11:55.411465   74203 logs.go:123] Gathering logs for dmesg ...
	I0731 18:11:55.411501   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 18:11:55.423802   74203 logs.go:123] Gathering logs for describe nodes ...
	I0731 18:11:55.423833   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 18:11:55.487945   74203 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
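	The "describe nodes" step fails in every cycle for the same reason: nothing is answering on the API server port, so kubectl's connection to localhost:8443 is refused. A quick way to confirm that from the node with standard tools (illustrative only, not part of the test run):

	    # confirm nothing is bound to the apiserver port and no apiserver container exists
	    sudo ss -ltnp | grep 8443 || echo "nothing listening on 8443"
	    sudo crictl ps -a --name kube-apiserver
	    curl -sk https://localhost:8443/healthz || true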
	I0731 18:11:57.988078   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:11:58.001639   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 18:11:58.001724   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 18:11:58.036075   74203 cri.go:89] found id: ""
	I0731 18:11:58.036099   74203 logs.go:276] 0 containers: []
	W0731 18:11:58.036107   74203 logs.go:278] No container was found matching "kube-apiserver"
	I0731 18:11:58.036112   74203 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 18:11:58.036163   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 18:11:58.067316   74203 cri.go:89] found id: ""
	I0731 18:11:58.067340   74203 logs.go:276] 0 containers: []
	W0731 18:11:58.067348   74203 logs.go:278] No container was found matching "etcd"
	I0731 18:11:58.067353   74203 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 18:11:58.067420   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 18:11:58.102446   74203 cri.go:89] found id: ""
	I0731 18:11:58.102470   74203 logs.go:276] 0 containers: []
	W0731 18:11:58.102479   74203 logs.go:278] No container was found matching "coredns"
	I0731 18:11:58.102485   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 18:11:58.102553   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 18:11:58.134924   74203 cri.go:89] found id: ""
	I0731 18:11:58.134949   74203 logs.go:276] 0 containers: []
	W0731 18:11:58.134957   74203 logs.go:278] No container was found matching "kube-scheduler"
	I0731 18:11:58.134963   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 18:11:58.135023   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 18:11:58.171589   74203 cri.go:89] found id: ""
	I0731 18:11:58.171611   74203 logs.go:276] 0 containers: []
	W0731 18:11:58.171620   74203 logs.go:278] No container was found matching "kube-proxy"
	I0731 18:11:58.171625   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 18:11:58.171673   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 18:11:58.203813   74203 cri.go:89] found id: ""
	I0731 18:11:58.203836   74203 logs.go:276] 0 containers: []
	W0731 18:11:58.203844   74203 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 18:11:58.203850   74203 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 18:11:58.203911   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 18:11:58.236251   74203 cri.go:89] found id: ""
	I0731 18:11:58.236277   74203 logs.go:276] 0 containers: []
	W0731 18:11:58.236288   74203 logs.go:278] No container was found matching "kindnet"
	I0731 18:11:58.236295   74203 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 18:11:58.236357   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 18:11:58.270595   74203 cri.go:89] found id: ""
	I0731 18:11:58.270624   74203 logs.go:276] 0 containers: []
	W0731 18:11:58.270636   74203 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 18:11:58.270647   74203 logs.go:123] Gathering logs for kubelet ...
	I0731 18:11:58.270662   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 18:11:58.321889   74203 logs.go:123] Gathering logs for dmesg ...
	I0731 18:11:58.321927   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 18:11:58.334529   74203 logs.go:123] Gathering logs for describe nodes ...
	I0731 18:11:58.334552   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 18:11:58.398489   74203 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 18:11:58.398515   74203 logs.go:123] Gathering logs for CRI-O ...
	I0731 18:11:58.398540   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 18:11:58.479657   74203 logs.go:123] Gathering logs for container status ...
	I0731 18:11:58.479695   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 18:11:57.143080   73479 pod_ready.go:102] pod "metrics-server-78fcd8795b-27pkr" in "kube-system" namespace has status "Ready":"False"
	I0731 18:11:59.144357   73479 pod_ready.go:102] pod "metrics-server-78fcd8795b-27pkr" in "kube-system" namespace has status "Ready":"False"
	I0731 18:12:01.643343   73479 pod_ready.go:102] pod "metrics-server-78fcd8795b-27pkr" in "kube-system" namespace has status "Ready":"False"
	I0731 18:12:01.266100   73800 pod_ready.go:81] duration metric: took 4m0.000711681s for pod "metrics-server-569cc877fc-fzxrw" in "kube-system" namespace to be "Ready" ...
	E0731 18:12:01.266123   73800 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-569cc877fc-fzxrw" in "kube-system" namespace to be "Ready" (will not retry!)
	I0731 18:12:01.266160   73800 pod_ready.go:38] duration metric: took 4m6.529342365s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0731 18:12:01.266205   73800 kubeadm.go:597] duration metric: took 4m13.643145888s to restartPrimaryControlPlane
	W0731 18:12:01.266270   73800 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0731 18:12:01.266297   73800 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0731 18:12:01.014684   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:12:01.027959   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 18:12:01.028026   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 18:12:01.065423   74203 cri.go:89] found id: ""
	I0731 18:12:01.065459   74203 logs.go:276] 0 containers: []
	W0731 18:12:01.065472   74203 logs.go:278] No container was found matching "kube-apiserver"
	I0731 18:12:01.065481   74203 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 18:12:01.065545   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 18:12:01.099519   74203 cri.go:89] found id: ""
	I0731 18:12:01.099549   74203 logs.go:276] 0 containers: []
	W0731 18:12:01.099561   74203 logs.go:278] No container was found matching "etcd"
	I0731 18:12:01.099568   74203 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 18:12:01.099630   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 18:12:01.131239   74203 cri.go:89] found id: ""
	I0731 18:12:01.131262   74203 logs.go:276] 0 containers: []
	W0731 18:12:01.131270   74203 logs.go:278] No container was found matching "coredns"
	I0731 18:12:01.131275   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 18:12:01.131321   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 18:12:01.163209   74203 cri.go:89] found id: ""
	I0731 18:12:01.163229   74203 logs.go:276] 0 containers: []
	W0731 18:12:01.163237   74203 logs.go:278] No container was found matching "kube-scheduler"
	I0731 18:12:01.163242   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 18:12:01.163295   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 18:12:01.201165   74203 cri.go:89] found id: ""
	I0731 18:12:01.201193   74203 logs.go:276] 0 containers: []
	W0731 18:12:01.201204   74203 logs.go:278] No container was found matching "kube-proxy"
	I0731 18:12:01.201217   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 18:12:01.201274   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 18:12:01.233310   74203 cri.go:89] found id: ""
	I0731 18:12:01.233334   74203 logs.go:276] 0 containers: []
	W0731 18:12:01.233342   74203 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 18:12:01.233348   74203 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 18:12:01.233415   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 18:12:01.263412   74203 cri.go:89] found id: ""
	I0731 18:12:01.263442   74203 logs.go:276] 0 containers: []
	W0731 18:12:01.263452   74203 logs.go:278] No container was found matching "kindnet"
	I0731 18:12:01.263459   74203 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 18:12:01.263521   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 18:12:01.296598   74203 cri.go:89] found id: ""
	I0731 18:12:01.296624   74203 logs.go:276] 0 containers: []
	W0731 18:12:01.296632   74203 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 18:12:01.296642   74203 logs.go:123] Gathering logs for describe nodes ...
	I0731 18:12:01.296656   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 18:12:01.372362   74203 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 18:12:01.372381   74203 logs.go:123] Gathering logs for CRI-O ...
	I0731 18:12:01.372395   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 18:12:01.461997   74203 logs.go:123] Gathering logs for container status ...
	I0731 18:12:01.462029   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 18:12:01.507610   74203 logs.go:123] Gathering logs for kubelet ...
	I0731 18:12:01.507636   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 18:12:01.558335   74203 logs.go:123] Gathering logs for dmesg ...
	I0731 18:12:01.558375   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 18:12:04.073333   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:12:04.091122   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 18:12:04.091205   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 18:12:04.130510   74203 cri.go:89] found id: ""
	I0731 18:12:04.130545   74203 logs.go:276] 0 containers: []
	W0731 18:12:04.130558   74203 logs.go:278] No container was found matching "kube-apiserver"
	I0731 18:12:04.130566   74203 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 18:12:04.130632   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 18:12:04.174749   74203 cri.go:89] found id: ""
	I0731 18:12:04.174775   74203 logs.go:276] 0 containers: []
	W0731 18:12:04.174785   74203 logs.go:278] No container was found matching "etcd"
	I0731 18:12:04.174792   74203 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 18:12:04.174846   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 18:12:04.212123   74203 cri.go:89] found id: ""
	I0731 18:12:04.212160   74203 logs.go:276] 0 containers: []
	W0731 18:12:04.212172   74203 logs.go:278] No container was found matching "coredns"
	I0731 18:12:04.212180   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 18:12:04.212254   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 18:12:04.251558   74203 cri.go:89] found id: ""
	I0731 18:12:04.251589   74203 logs.go:276] 0 containers: []
	W0731 18:12:04.251600   74203 logs.go:278] No container was found matching "kube-scheduler"
	I0731 18:12:04.251608   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 18:12:04.251671   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 18:12:04.284831   74203 cri.go:89] found id: ""
	I0731 18:12:04.284864   74203 logs.go:276] 0 containers: []
	W0731 18:12:04.284878   74203 logs.go:278] No container was found matching "kube-proxy"
	I0731 18:12:04.284886   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 18:12:04.284954   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 18:12:04.325076   74203 cri.go:89] found id: ""
	I0731 18:12:04.325115   74203 logs.go:276] 0 containers: []
	W0731 18:12:04.325126   74203 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 18:12:04.325135   74203 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 18:12:04.325195   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 18:12:04.370883   74203 cri.go:89] found id: ""
	I0731 18:12:04.370922   74203 logs.go:276] 0 containers: []
	W0731 18:12:04.370933   74203 logs.go:278] No container was found matching "kindnet"
	I0731 18:12:04.370940   74203 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 18:12:04.370999   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 18:12:04.410639   74203 cri.go:89] found id: ""
	I0731 18:12:04.410671   74203 logs.go:276] 0 containers: []
	W0731 18:12:04.410685   74203 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 18:12:04.410697   74203 logs.go:123] Gathering logs for kubelet ...
	I0731 18:12:04.410713   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 18:12:04.462988   74203 logs.go:123] Gathering logs for dmesg ...
	I0731 18:12:04.463023   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 18:12:04.479086   74203 logs.go:123] Gathering logs for describe nodes ...
	I0731 18:12:04.479123   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 18:12:04.544675   74203 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 18:12:04.544699   74203 logs.go:123] Gathering logs for CRI-O ...
	I0731 18:12:04.544712   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 18:12:04.633231   74203 logs.go:123] Gathering logs for container status ...
	I0731 18:12:04.633267   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 18:12:03.645118   73479 pod_ready.go:102] pod "metrics-server-78fcd8795b-27pkr" in "kube-system" namespace has status "Ready":"False"
	I0731 18:12:06.143865   73479 pod_ready.go:102] pod "metrics-server-78fcd8795b-27pkr" in "kube-system" namespace has status "Ready":"False"
	I0731 18:12:07.174252   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:12:07.187289   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 18:12:07.187393   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 18:12:07.220927   74203 cri.go:89] found id: ""
	I0731 18:12:07.220953   74203 logs.go:276] 0 containers: []
	W0731 18:12:07.220964   74203 logs.go:278] No container was found matching "kube-apiserver"
	I0731 18:12:07.220972   74203 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 18:12:07.221040   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 18:12:07.256817   74203 cri.go:89] found id: ""
	I0731 18:12:07.256849   74203 logs.go:276] 0 containers: []
	W0731 18:12:07.256861   74203 logs.go:278] No container was found matching "etcd"
	I0731 18:12:07.256870   74203 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 18:12:07.256935   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 18:12:07.290267   74203 cri.go:89] found id: ""
	I0731 18:12:07.290297   74203 logs.go:276] 0 containers: []
	W0731 18:12:07.290309   74203 logs.go:278] No container was found matching "coredns"
	I0731 18:12:07.290315   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 18:12:07.290373   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 18:12:07.330037   74203 cri.go:89] found id: ""
	I0731 18:12:07.330068   74203 logs.go:276] 0 containers: []
	W0731 18:12:07.330079   74203 logs.go:278] No container was found matching "kube-scheduler"
	I0731 18:12:07.330087   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 18:12:07.330143   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 18:12:07.366745   74203 cri.go:89] found id: ""
	I0731 18:12:07.366770   74203 logs.go:276] 0 containers: []
	W0731 18:12:07.366778   74203 logs.go:278] No container was found matching "kube-proxy"
	I0731 18:12:07.366783   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 18:12:07.366861   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 18:12:07.400608   74203 cri.go:89] found id: ""
	I0731 18:12:07.400637   74203 logs.go:276] 0 containers: []
	W0731 18:12:07.400648   74203 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 18:12:07.400661   74203 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 18:12:07.400727   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 18:12:07.434996   74203 cri.go:89] found id: ""
	I0731 18:12:07.435028   74203 logs.go:276] 0 containers: []
	W0731 18:12:07.435037   74203 logs.go:278] No container was found matching "kindnet"
	I0731 18:12:07.435044   74203 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 18:12:07.435130   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 18:12:07.474347   74203 cri.go:89] found id: ""
	I0731 18:12:07.474375   74203 logs.go:276] 0 containers: []
	W0731 18:12:07.474387   74203 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 18:12:07.474400   74203 logs.go:123] Gathering logs for CRI-O ...
	I0731 18:12:07.474415   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 18:12:07.549009   74203 logs.go:123] Gathering logs for container status ...
	I0731 18:12:07.549045   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 18:12:07.586710   74203 logs.go:123] Gathering logs for kubelet ...
	I0731 18:12:07.586736   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 18:12:07.640770   74203 logs.go:123] Gathering logs for dmesg ...
	I0731 18:12:07.640800   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 18:12:07.654380   74203 logs.go:123] Gathering logs for describe nodes ...
	I0731 18:12:07.654405   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 18:12:07.721479   74203 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 18:12:10.221837   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:12:10.235686   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 18:12:10.235746   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 18:12:10.268769   74203 cri.go:89] found id: ""
	I0731 18:12:10.268794   74203 logs.go:276] 0 containers: []
	W0731 18:12:10.268802   74203 logs.go:278] No container was found matching "kube-apiserver"
	I0731 18:12:10.268808   74203 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 18:12:10.268860   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 18:12:10.305229   74203 cri.go:89] found id: ""
	I0731 18:12:10.305264   74203 logs.go:276] 0 containers: []
	W0731 18:12:10.305277   74203 logs.go:278] No container was found matching "etcd"
	I0731 18:12:10.305290   74203 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 18:12:10.305353   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 18:12:10.337070   74203 cri.go:89] found id: ""
	I0731 18:12:10.337095   74203 logs.go:276] 0 containers: []
	W0731 18:12:10.337104   74203 logs.go:278] No container was found matching "coredns"
	I0731 18:12:10.337109   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 18:12:10.337155   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 18:12:08.643708   73479 pod_ready.go:102] pod "metrics-server-78fcd8795b-27pkr" in "kube-system" namespace has status "Ready":"False"
	I0731 18:12:10.645483   73479 pod_ready.go:102] pod "metrics-server-78fcd8795b-27pkr" in "kube-system" namespace has status "Ready":"False"
	I0731 18:12:10.372979   74203 cri.go:89] found id: ""
	I0731 18:12:10.373005   74203 logs.go:276] 0 containers: []
	W0731 18:12:10.373015   74203 logs.go:278] No container was found matching "kube-scheduler"
	I0731 18:12:10.373022   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 18:12:10.373079   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 18:12:10.407225   74203 cri.go:89] found id: ""
	I0731 18:12:10.407252   74203 logs.go:276] 0 containers: []
	W0731 18:12:10.407264   74203 logs.go:278] No container was found matching "kube-proxy"
	I0731 18:12:10.407270   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 18:12:10.407327   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 18:12:10.443338   74203 cri.go:89] found id: ""
	I0731 18:12:10.443366   74203 logs.go:276] 0 containers: []
	W0731 18:12:10.443377   74203 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 18:12:10.443385   74203 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 18:12:10.443474   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 18:12:10.477005   74203 cri.go:89] found id: ""
	I0731 18:12:10.477030   74203 logs.go:276] 0 containers: []
	W0731 18:12:10.477038   74203 logs.go:278] No container was found matching "kindnet"
	I0731 18:12:10.477043   74203 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 18:12:10.477092   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 18:12:10.509338   74203 cri.go:89] found id: ""
	I0731 18:12:10.509367   74203 logs.go:276] 0 containers: []
	W0731 18:12:10.509378   74203 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 18:12:10.509389   74203 logs.go:123] Gathering logs for kubelet ...
	I0731 18:12:10.509405   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 18:12:10.559604   74203 logs.go:123] Gathering logs for dmesg ...
	I0731 18:12:10.559639   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 18:12:10.572652   74203 logs.go:123] Gathering logs for describe nodes ...
	I0731 18:12:10.572682   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 18:12:10.642749   74203 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 18:12:10.642772   74203 logs.go:123] Gathering logs for CRI-O ...
	I0731 18:12:10.642789   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 18:12:10.728716   74203 logs.go:123] Gathering logs for container status ...
	I0731 18:12:10.728753   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 18:12:13.267783   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:12:13.282235   74203 kubeadm.go:597] duration metric: took 4m4.41837453s to restartPrimaryControlPlane
	W0731 18:12:13.282324   74203 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0731 18:12:13.282355   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0731 18:12:15.410363   73696 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (31.136815784s)
	I0731 18:12:15.410431   73696 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0731 18:12:15.426599   73696 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0731 18:12:15.435823   73696 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0731 18:12:15.444553   73696 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0731 18:12:15.444581   73696 kubeadm.go:157] found existing configuration files:
	
	I0731 18:12:15.444624   73696 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0731 18:12:15.453198   73696 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0731 18:12:15.453273   73696 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0731 18:12:15.461988   73696 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0731 18:12:15.470178   73696 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0731 18:12:15.470238   73696 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0731 18:12:15.478903   73696 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0731 18:12:15.487176   73696 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0731 18:12:15.487215   73696 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0731 18:12:15.496114   73696 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0731 18:12:15.504518   73696 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0731 18:12:15.504579   73696 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0731 18:12:15.513915   73696 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0731 18:12:15.563318   73696 kubeadm.go:310] [init] Using Kubernetes version: v1.30.3
	I0731 18:12:15.563381   73696 kubeadm.go:310] [preflight] Running pre-flight checks
	I0731 18:12:15.697426   73696 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0731 18:12:15.697574   73696 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0731 18:12:15.697688   73696 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0731 18:12:15.902621   73696 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0731 18:12:15.904763   73696 out.go:204]   - Generating certificates and keys ...
	I0731 18:12:15.904869   73696 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0731 18:12:15.904948   73696 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0731 18:12:15.905049   73696 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0731 18:12:15.905149   73696 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0731 18:12:15.905247   73696 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0731 18:12:15.905328   73696 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0731 18:12:15.905426   73696 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0731 18:12:15.905516   73696 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0731 18:12:15.905620   73696 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0731 18:12:15.905729   73696 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0731 18:12:15.905812   73696 kubeadm.go:310] [certs] Using the existing "sa" key
	I0731 18:12:15.905890   73696 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0731 18:12:16.011366   73696 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0731 18:12:16.171776   73696 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0731 18:12:16.404302   73696 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0731 18:12:16.559451   73696 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0731 18:12:16.686612   73696 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0731 18:12:16.687311   73696 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0731 18:12:16.689956   73696 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0731 18:12:13.142855   73479 pod_ready.go:102] pod "metrics-server-78fcd8795b-27pkr" in "kube-system" namespace has status "Ready":"False"
	I0731 18:12:15.144107   73479 pod_ready.go:102] pod "metrics-server-78fcd8795b-27pkr" in "kube-system" namespace has status "Ready":"False"
	I0731 18:12:16.959318   74203 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (3.676937263s)
	I0731 18:12:16.959425   74203 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0731 18:12:16.973440   74203 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0731 18:12:16.983482   74203 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0731 18:12:16.993930   74203 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0731 18:12:16.993951   74203 kubeadm.go:157] found existing configuration files:
	
	I0731 18:12:16.993993   74203 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0731 18:12:17.002713   74203 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0731 18:12:17.002771   74203 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0731 18:12:17.012107   74203 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0731 18:12:17.022548   74203 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0731 18:12:17.022604   74203 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0731 18:12:17.033569   74203 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0731 18:12:17.043338   74203 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0731 18:12:17.043391   74203 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0731 18:12:17.052064   74203 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0731 18:12:17.060785   74203 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0731 18:12:17.060850   74203 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0731 18:12:17.069499   74203 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0731 18:12:17.136512   74203 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0731 18:12:17.136579   74203 kubeadm.go:310] [preflight] Running pre-flight checks
	I0731 18:12:17.286224   74203 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0731 18:12:17.286383   74203 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0731 18:12:17.286506   74203 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0731 18:12:17.467092   74203 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0731 18:12:17.468918   74203 out.go:204]   - Generating certificates and keys ...
	I0731 18:12:17.469024   74203 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0731 18:12:17.469135   74203 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0731 18:12:17.469229   74203 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0731 18:12:17.469307   74203 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0731 18:12:17.469439   74203 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0731 18:12:17.469525   74203 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0731 18:12:17.469609   74203 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0731 18:12:17.470025   74203 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0731 18:12:17.470501   74203 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0731 18:12:17.470852   74203 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0731 18:12:17.470899   74203 kubeadm.go:310] [certs] Using the existing "sa" key
	I0731 18:12:17.470949   74203 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0731 18:12:17.673308   74203 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0731 18:12:17.922789   74203 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0731 18:12:18.391239   74203 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0731 18:12:18.464854   74203 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0731 18:12:18.480495   74203 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0731 18:12:18.480675   74203 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0731 18:12:18.480746   74203 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0731 18:12:18.632564   74203 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0731 18:12:18.635416   74203 out.go:204]   - Booting up control plane ...
	I0731 18:12:18.635542   74203 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0731 18:12:18.643338   74203 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0731 18:12:18.645881   74203 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0731 18:12:18.646898   74203 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0731 18:12:18.650052   74203 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0731 18:12:16.691876   73696 out.go:204]   - Booting up control plane ...
	I0731 18:12:16.691967   73696 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0731 18:12:16.692064   73696 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0731 18:12:16.692643   73696 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0731 18:12:16.713038   73696 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0731 18:12:16.713123   73696 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0731 18:12:16.713159   73696 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0731 18:12:16.855506   73696 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0731 18:12:16.855638   73696 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0731 18:12:17.856697   73696 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.001297342s
	I0731 18:12:17.856823   73696 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0731 18:12:17.144295   73479 pod_ready.go:102] pod "metrics-server-78fcd8795b-27pkr" in "kube-system" namespace has status "Ready":"False"
	I0731 18:12:19.644100   73479 pod_ready.go:102] pod "metrics-server-78fcd8795b-27pkr" in "kube-system" namespace has status "Ready":"False"
	I0731 18:12:21.644654   73479 pod_ready.go:102] pod "metrics-server-78fcd8795b-27pkr" in "kube-system" namespace has status "Ready":"False"
	I0731 18:12:22.358287   73696 kubeadm.go:310] [api-check] The API server is healthy after 4.501118217s
	I0731 18:12:22.370066   73696 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0731 18:12:22.382929   73696 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0731 18:12:22.402765   73696 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0731 18:12:22.403044   73696 kubeadm.go:310] [mark-control-plane] Marking the node default-k8s-diff-port-094310 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0731 18:12:22.419724   73696 kubeadm.go:310] [bootstrap-token] Using token: hduea8.ix2m91ewiu6okgi9
	I0731 18:12:22.421231   73696 out.go:204]   - Configuring RBAC rules ...
	I0731 18:12:22.421382   73696 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0731 18:12:22.426230   73696 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0731 18:12:22.434423   73696 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0731 18:12:22.437839   73696 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0731 18:12:22.449264   73696 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0731 18:12:22.452420   73696 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0731 18:12:22.764876   73696 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0731 18:12:23.216229   73696 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0731 18:12:23.765173   73696 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0731 18:12:23.766223   73696 kubeadm.go:310] 
	I0731 18:12:23.766311   73696 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0731 18:12:23.766356   73696 kubeadm.go:310] 
	I0731 18:12:23.766466   73696 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0731 18:12:23.766487   73696 kubeadm.go:310] 
	I0731 18:12:23.766521   73696 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0731 18:12:23.766641   73696 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0731 18:12:23.766726   73696 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0731 18:12:23.766741   73696 kubeadm.go:310] 
	I0731 18:12:23.766827   73696 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0731 18:12:23.766844   73696 kubeadm.go:310] 
	I0731 18:12:23.766899   73696 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0731 18:12:23.766910   73696 kubeadm.go:310] 
	I0731 18:12:23.766986   73696 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0731 18:12:23.767089   73696 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0731 18:12:23.767225   73696 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0731 18:12:23.767237   73696 kubeadm.go:310] 
	I0731 18:12:23.767310   73696 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0731 18:12:23.767401   73696 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0731 18:12:23.767411   73696 kubeadm.go:310] 
	I0731 18:12:23.767531   73696 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8444 --token hduea8.ix2m91ewiu6okgi9 \
	I0731 18:12:23.767662   73696 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:460b24b51d10a3760f2391542a8a89e401359854f437f922cbee3f2d8ed6c856 \
	I0731 18:12:23.767695   73696 kubeadm.go:310] 	--control-plane 
	I0731 18:12:23.767702   73696 kubeadm.go:310] 
	I0731 18:12:23.767773   73696 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0731 18:12:23.767782   73696 kubeadm.go:310] 
	I0731 18:12:23.767847   73696 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8444 --token hduea8.ix2m91ewiu6okgi9 \
	I0731 18:12:23.767930   73696 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:460b24b51d10a3760f2391542a8a89e401359854f437f922cbee3f2d8ed6c856 
	I0731 18:12:23.768912   73696 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0731 18:12:23.769058   73696 cni.go:84] Creating CNI manager for ""
	I0731 18:12:23.769073   73696 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0731 18:12:23.771596   73696 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0731 18:12:23.773122   73696 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0731 18:12:23.782944   73696 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0731 18:12:23.800254   73696 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0731 18:12:23.800383   73696 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 18:12:23.800398   73696 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes default-k8s-diff-port-094310 minikube.k8s.io/updated_at=2024_07_31T18_12_23_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=1d737dad7efa60c56d30434fcd857dd3b14c91d9 minikube.k8s.io/name=default-k8s-diff-port-094310 minikube.k8s.io/primary=true
	I0731 18:12:23.827190   73696 ops.go:34] apiserver oom_adj: -16
	I0731 18:12:23.990425   73696 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 18:12:24.490585   73696 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 18:12:24.991490   73696 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 18:12:25.490948   73696 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 18:12:25.991461   73696 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 18:12:23.645259   73479 pod_ready.go:102] pod "metrics-server-78fcd8795b-27pkr" in "kube-system" namespace has status "Ready":"False"
	I0731 18:12:26.144352   73479 pod_ready.go:102] pod "metrics-server-78fcd8795b-27pkr" in "kube-system" namespace has status "Ready":"False"
	I0731 18:12:26.491041   73696 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 18:12:26.990516   73696 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 18:12:27.491386   73696 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 18:12:27.991150   73696 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 18:12:28.490838   73696 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 18:12:28.991267   73696 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 18:12:29.490459   73696 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 18:12:29.990672   73696 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 18:12:30.491302   73696 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 18:12:30.990644   73696 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 18:12:28.644749   73479 pod_ready.go:102] pod "metrics-server-78fcd8795b-27pkr" in "kube-system" namespace has status "Ready":"False"
	I0731 18:12:31.143617   73479 pod_ready.go:102] pod "metrics-server-78fcd8795b-27pkr" in "kube-system" namespace has status "Ready":"False"
	I0731 18:12:32.532203   73800 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (31.265875459s)
	I0731 18:12:32.532286   73800 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0731 18:12:32.548139   73800 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0731 18:12:32.558049   73800 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0731 18:12:32.567036   73800 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0731 18:12:32.567060   73800 kubeadm.go:157] found existing configuration files:
	
	I0731 18:12:32.567133   73800 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0731 18:12:32.576069   73800 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0731 18:12:32.576124   73800 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0731 18:12:32.584762   73800 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0731 18:12:32.592927   73800 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0731 18:12:32.592980   73800 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0731 18:12:32.601309   73800 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0731 18:12:32.609478   73800 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0731 18:12:32.609525   73800 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0731 18:12:32.617980   73800 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0731 18:12:32.625943   73800 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0731 18:12:32.625978   73800 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0731 18:12:32.634091   73800 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0731 18:12:32.821569   73800 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0731 18:12:31.491226   73696 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 18:12:31.991099   73696 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 18:12:32.490751   73696 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 18:12:32.991252   73696 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 18:12:33.490564   73696 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 18:12:33.990977   73696 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 18:12:34.491037   73696 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 18:12:34.990696   73696 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 18:12:35.491381   73696 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 18:12:35.990793   73696 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 18:12:36.490926   73696 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 18:12:36.581312   73696 kubeadm.go:1113] duration metric: took 12.780981821s to wait for elevateKubeSystemPrivileges
	I0731 18:12:36.581370   73696 kubeadm.go:394] duration metric: took 5m8.741923744s to StartCluster
	I0731 18:12:36.581393   73696 settings.go:142] acquiring lock: {Name:mk8ac8269296bf0b887a00ff74caaad180005f89 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 18:12:36.581485   73696 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19349-8084/kubeconfig
	I0731 18:12:36.583690   73696 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19349-8084/kubeconfig: {Name:mkf2340bc1ada3c683f132aca37228d7588e1fdb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 18:12:36.583986   73696 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.72.197 Port:8444 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0731 18:12:36.585079   73696 config.go:182] Loaded profile config "default-k8s-diff-port-094310": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0731 18:12:36.585328   73696 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0731 18:12:36.585677   73696 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-094310"
	I0731 18:12:36.585686   73696 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-094310"
	I0731 18:12:36.585688   73696 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-094310"
	I0731 18:12:36.585705   73696 addons.go:234] Setting addon storage-provisioner=true in "default-k8s-diff-port-094310"
	W0731 18:12:36.585717   73696 addons.go:243] addon storage-provisioner should already be in state true
	I0731 18:12:36.585720   73696 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-094310"
	I0731 18:12:36.585732   73696 addons.go:234] Setting addon metrics-server=true in "default-k8s-diff-port-094310"
	W0731 18:12:36.585740   73696 addons.go:243] addon metrics-server should already be in state true
	I0731 18:12:36.585752   73696 host.go:66] Checking if "default-k8s-diff-port-094310" exists ...
	I0731 18:12:36.585766   73696 host.go:66] Checking if "default-k8s-diff-port-094310" exists ...
	I0731 18:12:36.586152   73696 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 18:12:36.586166   73696 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 18:12:36.586166   73696 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 18:12:36.586174   73696 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 18:12:36.586180   73696 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 18:12:36.586188   73696 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 18:12:36.586456   73696 out.go:177] * Verifying Kubernetes components...
	I0731 18:12:36.588174   73696 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0731 18:12:36.605611   73696 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38873
	I0731 18:12:36.605856   73696 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44661
	I0731 18:12:36.606122   73696 main.go:141] libmachine: () Calling .GetVersion
	I0731 18:12:36.606710   73696 main.go:141] libmachine: Using API Version  1
	I0731 18:12:36.606731   73696 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 18:12:36.606809   73696 main.go:141] libmachine: () Calling .GetVersion
	I0731 18:12:36.607072   73696 main.go:141] libmachine: () Calling .GetMachineName
	I0731 18:12:36.607240   73696 main.go:141] libmachine: Using API Version  1
	I0731 18:12:36.607262   73696 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 18:12:36.607789   73696 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 18:12:36.607817   73696 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 18:12:36.608000   73696 main.go:141] libmachine: () Calling .GetMachineName
	I0731 18:12:36.608173   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) Calling .GetState
	I0731 18:12:36.609009   73696 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39155
	I0731 18:12:36.609469   73696 main.go:141] libmachine: () Calling .GetVersion
	I0731 18:12:36.609954   73696 main.go:141] libmachine: Using API Version  1
	I0731 18:12:36.609973   73696 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 18:12:36.610333   73696 main.go:141] libmachine: () Calling .GetMachineName
	I0731 18:12:36.610936   73696 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 18:12:36.610998   73696 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 18:12:36.612199   73696 addons.go:234] Setting addon default-storageclass=true in "default-k8s-diff-port-094310"
	W0731 18:12:36.612224   73696 addons.go:243] addon default-storageclass should already be in state true
	I0731 18:12:36.612254   73696 host.go:66] Checking if "default-k8s-diff-port-094310" exists ...
	I0731 18:12:36.612624   73696 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 18:12:36.612659   73696 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 18:12:36.626474   73696 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38009
	I0731 18:12:36.626981   73696 main.go:141] libmachine: () Calling .GetVersion
	I0731 18:12:36.627514   73696 main.go:141] libmachine: Using API Version  1
	I0731 18:12:36.627534   73696 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 18:12:36.627836   73696 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43349
	I0731 18:12:36.628007   73696 main.go:141] libmachine: () Calling .GetMachineName
	I0731 18:12:36.628336   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) Calling .GetState
	I0731 18:12:36.628415   73696 main.go:141] libmachine: () Calling .GetVersion
	I0731 18:12:36.628816   73696 main.go:141] libmachine: Using API Version  1
	I0731 18:12:36.628831   73696 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 18:12:36.629237   73696 main.go:141] libmachine: () Calling .GetMachineName
	I0731 18:12:36.629450   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) Calling .GetState
	I0731 18:12:36.630518   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) Calling .DriverName
	I0731 18:12:36.631198   73696 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43471
	I0731 18:12:36.631550   73696 main.go:141] libmachine: () Calling .GetVersion
	I0731 18:12:36.632064   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) Calling .DriverName
	I0731 18:12:36.632200   73696 main.go:141] libmachine: Using API Version  1
	I0731 18:12:36.632217   73696 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 18:12:36.632576   73696 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0731 18:12:36.632739   73696 main.go:141] libmachine: () Calling .GetMachineName
	I0731 18:12:36.633275   73696 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 18:12:36.633313   73696 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 18:12:36.633711   73696 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0731 18:12:33.642776   73479 pod_ready.go:102] pod "metrics-server-78fcd8795b-27pkr" in "kube-system" namespace has status "Ready":"False"
	I0731 18:12:35.643640   73479 pod_ready.go:102] pod "metrics-server-78fcd8795b-27pkr" in "kube-system" namespace has status "Ready":"False"
	I0731 18:12:36.633805   73696 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0731 18:12:36.633820   73696 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0731 18:12:36.633840   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) Calling .GetSSHHostname
	I0731 18:12:36.634990   73696 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0731 18:12:36.635005   73696 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0731 18:12:36.635022   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) Calling .GetSSHHostname
	I0731 18:12:36.637135   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) DBG | domain default-k8s-diff-port-094310 has defined MAC address 52:54:00:a9:b2:ae in network mk-default-k8s-diff-port-094310
	I0731 18:12:36.637767   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a9:b2:ae", ip: ""} in network mk-default-k8s-diff-port-094310: {Iface:virbr1 ExpiryTime:2024-07-31 19:07:14 +0000 UTC Type:0 Mac:52:54:00:a9:b2:ae Iaid: IPaddr:192.168.72.197 Prefix:24 Hostname:default-k8s-diff-port-094310 Clientid:01:52:54:00:a9:b2:ae}
	I0731 18:12:36.637792   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) DBG | domain default-k8s-diff-port-094310 has defined IP address 192.168.72.197 and MAC address 52:54:00:a9:b2:ae in network mk-default-k8s-diff-port-094310
	I0731 18:12:36.637891   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) Calling .GetSSHPort
	I0731 18:12:36.639047   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) DBG | domain default-k8s-diff-port-094310 has defined MAC address 52:54:00:a9:b2:ae in network mk-default-k8s-diff-port-094310
	I0731 18:12:36.639583   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a9:b2:ae", ip: ""} in network mk-default-k8s-diff-port-094310: {Iface:virbr1 ExpiryTime:2024-07-31 19:07:14 +0000 UTC Type:0 Mac:52:54:00:a9:b2:ae Iaid: IPaddr:192.168.72.197 Prefix:24 Hostname:default-k8s-diff-port-094310 Clientid:01:52:54:00:a9:b2:ae}
	I0731 18:12:36.639617   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) DBG | domain default-k8s-diff-port-094310 has defined IP address 192.168.72.197 and MAC address 52:54:00:a9:b2:ae in network mk-default-k8s-diff-port-094310
	I0731 18:12:36.639885   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) Calling .GetSSHPort
	I0731 18:12:36.640106   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) Calling .GetSSHKeyPath
	I0731 18:12:36.640235   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) Calling .GetSSHUsername
	I0731 18:12:36.640419   73696 sshutil.go:53] new ssh client: &{IP:192.168.72.197 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19349-8084/.minikube/machines/default-k8s-diff-port-094310/id_rsa Username:docker}
	I0731 18:12:36.641860   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) Calling .GetSSHKeyPath
	I0731 18:12:36.642037   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) Calling .GetSSHUsername
	I0731 18:12:36.642205   73696 sshutil.go:53] new ssh client: &{IP:192.168.72.197 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19349-8084/.minikube/machines/default-k8s-diff-port-094310/id_rsa Username:docker}
	I0731 18:12:36.659960   73696 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36389
	I0731 18:12:36.660280   73696 main.go:141] libmachine: () Calling .GetVersion
	I0731 18:12:36.660692   73696 main.go:141] libmachine: Using API Version  1
	I0731 18:12:36.660713   73696 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 18:12:36.660986   73696 main.go:141] libmachine: () Calling .GetMachineName
	I0731 18:12:36.661150   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) Calling .GetState
	I0731 18:12:36.663024   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) Calling .DriverName
	I0731 18:12:36.663232   73696 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0731 18:12:36.663245   73696 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0731 18:12:36.663264   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) Calling .GetSSHHostname
	I0731 18:12:36.666016   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) DBG | domain default-k8s-diff-port-094310 has defined MAC address 52:54:00:a9:b2:ae in network mk-default-k8s-diff-port-094310
	I0731 18:12:36.666393   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a9:b2:ae", ip: ""} in network mk-default-k8s-diff-port-094310: {Iface:virbr1 ExpiryTime:2024-07-31 19:07:14 +0000 UTC Type:0 Mac:52:54:00:a9:b2:ae Iaid: IPaddr:192.168.72.197 Prefix:24 Hostname:default-k8s-diff-port-094310 Clientid:01:52:54:00:a9:b2:ae}
	I0731 18:12:36.666472   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) DBG | domain default-k8s-diff-port-094310 has defined IP address 192.168.72.197 and MAC address 52:54:00:a9:b2:ae in network mk-default-k8s-diff-port-094310
	I0731 18:12:36.666562   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) Calling .GetSSHPort
	I0731 18:12:36.666730   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) Calling .GetSSHKeyPath
	I0731 18:12:36.666832   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) Calling .GetSSHUsername
	I0731 18:12:36.666935   73696 sshutil.go:53] new ssh client: &{IP:192.168.72.197 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19349-8084/.minikube/machines/default-k8s-diff-port-094310/id_rsa Username:docker}
	I0731 18:12:36.813977   73696 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0731 18:12:36.832201   73696 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-094310" to be "Ready" ...
	I0731 18:12:36.849864   73696 node_ready.go:49] node "default-k8s-diff-port-094310" has status "Ready":"True"
	I0731 18:12:36.849891   73696 node_ready.go:38] duration metric: took 17.657098ms for node "default-k8s-diff-port-094310" to be "Ready" ...
	I0731 18:12:36.849903   73696 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0731 18:12:36.860981   73696 pod_ready.go:78] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-094310" in "kube-system" namespace to be "Ready" ...
	I0731 18:12:36.865178   73696 pod_ready.go:92] pod "etcd-default-k8s-diff-port-094310" in "kube-system" namespace has status "Ready":"True"
	I0731 18:12:36.865198   73696 pod_ready.go:81] duration metric: took 4.190559ms for pod "etcd-default-k8s-diff-port-094310" in "kube-system" namespace to be "Ready" ...
	I0731 18:12:36.865209   73696 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-094310" in "kube-system" namespace to be "Ready" ...
	I0731 18:12:36.869977   73696 pod_ready.go:92] pod "kube-apiserver-default-k8s-diff-port-094310" in "kube-system" namespace has status "Ready":"True"
	I0731 18:12:36.869998   73696 pod_ready.go:81] duration metric: took 4.780295ms for pod "kube-apiserver-default-k8s-diff-port-094310" in "kube-system" namespace to be "Ready" ...
	I0731 18:12:36.870008   73696 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-094310" in "kube-system" namespace to be "Ready" ...
	I0731 18:12:36.874051   73696 pod_ready.go:92] pod "kube-controller-manager-default-k8s-diff-port-094310" in "kube-system" namespace has status "Ready":"True"
	I0731 18:12:36.874069   73696 pod_ready.go:81] duration metric: took 4.053362ms for pod "kube-controller-manager-default-k8s-diff-port-094310" in "kube-system" namespace to be "Ready" ...
	I0731 18:12:36.874079   73696 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-094310" in "kube-system" namespace to be "Ready" ...
	I0731 18:12:36.878519   73696 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-094310" in "kube-system" namespace has status "Ready":"True"
	I0731 18:12:36.878536   73696 pod_ready.go:81] duration metric: took 4.448692ms for pod "kube-scheduler-default-k8s-diff-port-094310" in "kube-system" namespace to be "Ready" ...
	I0731 18:12:36.878544   73696 pod_ready.go:38] duration metric: took 28.628924ms for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0731 18:12:36.878564   73696 api_server.go:52] waiting for apiserver process to appear ...
	I0731 18:12:36.878622   73696 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:12:36.892011   73696 api_server.go:72] duration metric: took 307.983877ms to wait for apiserver process to appear ...
	I0731 18:12:36.892031   73696 api_server.go:88] waiting for apiserver healthz status ...
	I0731 18:12:36.892049   73696 api_server.go:253] Checking apiserver healthz at https://192.168.72.197:8444/healthz ...
	I0731 18:12:36.895929   73696 api_server.go:279] https://192.168.72.197:8444/healthz returned 200:
	ok
	I0731 18:12:36.896760   73696 api_server.go:141] control plane version: v1.30.3
	I0731 18:12:36.896780   73696 api_server.go:131] duration metric: took 4.741896ms to wait for apiserver health ...
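The healthz wait logged above boils down to polling the apiserver's /healthz endpoint until it answers 200 with "ok". A minimal stand-alone sketch of that kind of check (not minikube's actual implementation; the URL is taken from the log for this profile, while the interval, timeout, and the decision to skip TLS verification are assumptions):

    package main

    import (
        "crypto/tls"
        "fmt"
        "io"
        "net/http"
        "time"
    )

    // waitForHealthz polls url until it returns HTTP 200 or the timeout elapses.
    // Interval and timeout are illustrative values, not minikube's defaults.
    func waitForHealthz(url string, interval, timeout time.Duration) error {
        // The apiserver serves a cluster-internal certificate, so this sketch
        // skips verification, the same way a plain `curl -k` would.
        client := &http.Client{
            Timeout:   2 * time.Second,
            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
        }
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            resp, err := client.Get(url)
            if err == nil {
                body, _ := io.ReadAll(resp.Body)
                resp.Body.Close()
                if resp.StatusCode == http.StatusOK {
                    fmt.Printf("%s returned %d: %s\n", url, resp.StatusCode, body)
                    return nil
                }
            }
            time.Sleep(interval)
        }
        return fmt.Errorf("apiserver at %s not healthy after %s", url, timeout)
    }

    func main() {
        // 8444 is the non-default apiserver port used by this profile, per the log above.
        if err := waitForHealthz("https://192.168.72.197:8444/healthz", 500*time.Millisecond, 4*time.Minute); err != nil {
            fmt.Println(err)
        }
    }
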
	I0731 18:12:36.896789   73696 system_pods.go:43] waiting for kube-system pods to appear ...
	I0731 18:12:36.974073   73696 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0731 18:12:36.974092   73696 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0731 18:12:37.010218   73696 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0731 18:12:37.018536   73696 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0731 18:12:37.039734   73696 system_pods.go:59] 5 kube-system pods found
	I0731 18:12:37.039767   73696 system_pods.go:61] "etcd-default-k8s-diff-port-094310" [f73c4fb4-b160-4e79-a0a4-8fba255cfb8c] Running
	I0731 18:12:37.039773   73696 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-094310" [820215b3-0eaa-46ca-bf0b-4fa73942160a] Running
	I0731 18:12:37.039778   73696 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-094310" [5ca0ed32-5439-49b5-9a85-1a4b1628371d] Running
	I0731 18:12:37.039787   73696 system_pods.go:61] "kube-proxy-4vvjq" [a8dd9db6-5f45-4c8d-a9d3-03ede1db07eb] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0731 18:12:37.039792   73696 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-094310" [11590718-4e74-46eb-be90-6405c3f1759f] Running
	I0731 18:12:37.039802   73696 system_pods.go:74] duration metric: took 143.007992ms to wait for pod list to return data ...
	I0731 18:12:37.039812   73696 default_sa.go:34] waiting for default service account to be created ...
	I0731 18:12:37.041650   73696 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0731 18:12:37.041672   73696 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0731 18:12:37.096891   73696 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0731 18:12:37.096920   73696 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0731 18:12:37.159438   73696 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0731 18:12:37.235560   73696 default_sa.go:45] found service account: "default"
	I0731 18:12:37.235599   73696 default_sa.go:55] duration metric: took 195.778976ms for default service account to be created ...
	I0731 18:12:37.235612   73696 system_pods.go:116] waiting for k8s-apps to be running ...
	I0731 18:12:37.439935   73696 system_pods.go:86] 7 kube-system pods found
	I0731 18:12:37.439966   73696 system_pods.go:89] "coredns-7db6d8ff4d-2r7zb" [e7acc926-db53-4c1c-a62f-45e303c69fc7] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0731 18:12:37.439975   73696 system_pods.go:89] "coredns-7db6d8ff4d-756jj" [c91e86b1-8348-4dc7-9aa1-6909055c2dde] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0731 18:12:37.439982   73696 system_pods.go:89] "etcd-default-k8s-diff-port-094310" [f73c4fb4-b160-4e79-a0a4-8fba255cfb8c] Running
	I0731 18:12:37.439988   73696 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-094310" [820215b3-0eaa-46ca-bf0b-4fa73942160a] Running
	I0731 18:12:37.439993   73696 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-094310" [5ca0ed32-5439-49b5-9a85-1a4b1628371d] Running
	I0731 18:12:37.439998   73696 system_pods.go:89] "kube-proxy-4vvjq" [a8dd9db6-5f45-4c8d-a9d3-03ede1db07eb] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0731 18:12:37.440003   73696 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-094310" [11590718-4e74-46eb-be90-6405c3f1759f] Running
	I0731 18:12:37.440020   73696 retry.go:31] will retry after 230.300903ms: missing components: kube-dns, kube-proxy
	I0731 18:12:37.676385   73696 system_pods.go:86] 7 kube-system pods found
	I0731 18:12:37.676411   73696 system_pods.go:89] "coredns-7db6d8ff4d-2r7zb" [e7acc926-db53-4c1c-a62f-45e303c69fc7] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0731 18:12:37.676421   73696 system_pods.go:89] "coredns-7db6d8ff4d-756jj" [c91e86b1-8348-4dc7-9aa1-6909055c2dde] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0731 18:12:37.676429   73696 system_pods.go:89] "etcd-default-k8s-diff-port-094310" [f73c4fb4-b160-4e79-a0a4-8fba255cfb8c] Running
	I0731 18:12:37.676436   73696 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-094310" [820215b3-0eaa-46ca-bf0b-4fa73942160a] Running
	I0731 18:12:37.676442   73696 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-094310" [5ca0ed32-5439-49b5-9a85-1a4b1628371d] Running
	I0731 18:12:37.676451   73696 system_pods.go:89] "kube-proxy-4vvjq" [a8dd9db6-5f45-4c8d-a9d3-03ede1db07eb] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0731 18:12:37.676456   73696 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-094310" [11590718-4e74-46eb-be90-6405c3f1759f] Running
	I0731 18:12:37.676475   73696 retry.go:31] will retry after 311.28179ms: missing components: kube-dns, kube-proxy
	I0731 18:12:37.813837   73696 main.go:141] libmachine: Making call to close driver server
	I0731 18:12:37.813870   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) Calling .Close
	I0731 18:12:37.814017   73696 main.go:141] libmachine: Making call to close driver server
	I0731 18:12:37.814039   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) Calling .Close
	I0731 18:12:37.814265   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) DBG | Closing plugin on server side
	I0731 18:12:37.814316   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) DBG | Closing plugin on server side
	I0731 18:12:37.814363   73696 main.go:141] libmachine: Successfully made call to close driver server
	I0731 18:12:37.814376   73696 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 18:12:37.814391   73696 main.go:141] libmachine: Making call to close driver server
	I0731 18:12:37.814402   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) Calling .Close
	I0731 18:12:37.814531   73696 main.go:141] libmachine: Successfully made call to close driver server
	I0731 18:12:37.814556   73696 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 18:12:37.814598   73696 main.go:141] libmachine: Making call to close driver server
	I0731 18:12:37.814608   73696 main.go:141] libmachine: Successfully made call to close driver server
	I0731 18:12:37.814631   73696 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 18:12:37.814622   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) Calling .Close
	I0731 18:12:37.816102   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) DBG | Closing plugin on server side
	I0731 18:12:37.816268   73696 main.go:141] libmachine: Successfully made call to close driver server
	I0731 18:12:37.816280   73696 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 18:12:37.830991   73696 main.go:141] libmachine: Making call to close driver server
	I0731 18:12:37.831018   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) Calling .Close
	I0731 18:12:37.831354   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) DBG | Closing plugin on server side
	I0731 18:12:37.831354   73696 main.go:141] libmachine: Successfully made call to close driver server
	I0731 18:12:37.831380   73696 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 18:12:37.995206   73696 system_pods.go:86] 8 kube-system pods found
	I0731 18:12:37.995248   73696 system_pods.go:89] "coredns-7db6d8ff4d-2r7zb" [e7acc926-db53-4c1c-a62f-45e303c69fc7] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0731 18:12:37.995262   73696 system_pods.go:89] "coredns-7db6d8ff4d-756jj" [c91e86b1-8348-4dc7-9aa1-6909055c2dde] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0731 18:12:37.995272   73696 system_pods.go:89] "etcd-default-k8s-diff-port-094310" [f73c4fb4-b160-4e79-a0a4-8fba255cfb8c] Running
	I0731 18:12:37.995295   73696 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-094310" [820215b3-0eaa-46ca-bf0b-4fa73942160a] Running
	I0731 18:12:37.995310   73696 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-094310" [5ca0ed32-5439-49b5-9a85-1a4b1628371d] Running
	I0731 18:12:37.995322   73696 system_pods.go:89] "kube-proxy-4vvjq" [a8dd9db6-5f45-4c8d-a9d3-03ede1db07eb] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0731 18:12:37.995332   73696 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-094310" [11590718-4e74-46eb-be90-6405c3f1759f] Running
	I0731 18:12:37.995345   73696 system_pods.go:89] "storage-provisioner" [2e4c5a96-4dfc-4af6-8d3e-bab865644328] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0731 18:12:37.995370   73696 retry.go:31] will retry after 381.430275ms: missing components: kube-dns, kube-proxy
	I0731 18:12:38.392678   73696 system_pods.go:86] 9 kube-system pods found
	I0731 18:12:38.392719   73696 system_pods.go:89] "coredns-7db6d8ff4d-2r7zb" [e7acc926-db53-4c1c-a62f-45e303c69fc7] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0731 18:12:38.392732   73696 system_pods.go:89] "coredns-7db6d8ff4d-756jj" [c91e86b1-8348-4dc7-9aa1-6909055c2dde] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0731 18:12:38.392742   73696 system_pods.go:89] "etcd-default-k8s-diff-port-094310" [f73c4fb4-b160-4e79-a0a4-8fba255cfb8c] Running
	I0731 18:12:38.392751   73696 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-094310" [820215b3-0eaa-46ca-bf0b-4fa73942160a] Running
	I0731 18:12:38.392760   73696 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-094310" [5ca0ed32-5439-49b5-9a85-1a4b1628371d] Running
	I0731 18:12:38.392770   73696 system_pods.go:89] "kube-proxy-4vvjq" [a8dd9db6-5f45-4c8d-a9d3-03ede1db07eb] Running
	I0731 18:12:38.392778   73696 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-094310" [11590718-4e74-46eb-be90-6405c3f1759f] Running
	I0731 18:12:38.392787   73696 system_pods.go:89] "metrics-server-569cc877fc-mskwc" [c57990b4-1d91-4764-9c33-2fd5f7d2f83b] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0731 18:12:38.392802   73696 system_pods.go:89] "storage-provisioner" [2e4c5a96-4dfc-4af6-8d3e-bab865644328] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0731 18:12:38.392823   73696 retry.go:31] will retry after 567.905994ms: missing components: kube-dns
	I0731 18:12:38.501117   73696 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.341621275s)
	I0731 18:12:38.501181   73696 main.go:141] libmachine: Making call to close driver server
	I0731 18:12:38.501200   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) Calling .Close
	I0731 18:12:38.501595   73696 main.go:141] libmachine: Successfully made call to close driver server
	I0731 18:12:38.501615   73696 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 18:12:38.501625   73696 main.go:141] libmachine: Making call to close driver server
	I0731 18:12:38.501634   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) Calling .Close
	I0731 18:12:38.501593   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) DBG | Closing plugin on server side
	I0731 18:12:38.501907   73696 main.go:141] libmachine: Successfully made call to close driver server
	I0731 18:12:38.501953   73696 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 18:12:38.501966   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) DBG | Closing plugin on server side
	I0731 18:12:38.501975   73696 addons.go:475] Verifying addon metrics-server=true in "default-k8s-diff-port-094310"
	I0731 18:12:38.505204   73696 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0731 18:12:38.506517   73696 addons.go:510] duration metric: took 1.921658263s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I0731 18:12:38.967657   73696 system_pods.go:86] 9 kube-system pods found
	I0731 18:12:38.967691   73696 system_pods.go:89] "coredns-7db6d8ff4d-2r7zb" [e7acc926-db53-4c1c-a62f-45e303c69fc7] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0731 18:12:38.967700   73696 system_pods.go:89] "coredns-7db6d8ff4d-756jj" [c91e86b1-8348-4dc7-9aa1-6909055c2dde] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0731 18:12:38.967708   73696 system_pods.go:89] "etcd-default-k8s-diff-port-094310" [f73c4fb4-b160-4e79-a0a4-8fba255cfb8c] Running
	I0731 18:12:38.967716   73696 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-094310" [820215b3-0eaa-46ca-bf0b-4fa73942160a] Running
	I0731 18:12:38.967723   73696 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-094310" [5ca0ed32-5439-49b5-9a85-1a4b1628371d] Running
	I0731 18:12:38.967729   73696 system_pods.go:89] "kube-proxy-4vvjq" [a8dd9db6-5f45-4c8d-a9d3-03ede1db07eb] Running
	I0731 18:12:38.967736   73696 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-094310" [11590718-4e74-46eb-be90-6405c3f1759f] Running
	I0731 18:12:38.967746   73696 system_pods.go:89] "metrics-server-569cc877fc-mskwc" [c57990b4-1d91-4764-9c33-2fd5f7d2f83b] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0731 18:12:38.967759   73696 system_pods.go:89] "storage-provisioner" [2e4c5a96-4dfc-4af6-8d3e-bab865644328] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0731 18:12:38.967779   73696 retry.go:31] will retry after 488.293971ms: missing components: kube-dns
	I0731 18:12:39.464918   73696 system_pods.go:86] 9 kube-system pods found
	I0731 18:12:39.464956   73696 system_pods.go:89] "coredns-7db6d8ff4d-2r7zb" [e7acc926-db53-4c1c-a62f-45e303c69fc7] Running
	I0731 18:12:39.464965   73696 system_pods.go:89] "coredns-7db6d8ff4d-756jj" [c91e86b1-8348-4dc7-9aa1-6909055c2dde] Running
	I0731 18:12:39.464972   73696 system_pods.go:89] "etcd-default-k8s-diff-port-094310" [f73c4fb4-b160-4e79-a0a4-8fba255cfb8c] Running
	I0731 18:12:39.464978   73696 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-094310" [820215b3-0eaa-46ca-bf0b-4fa73942160a] Running
	I0731 18:12:39.464986   73696 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-094310" [5ca0ed32-5439-49b5-9a85-1a4b1628371d] Running
	I0731 18:12:39.464992   73696 system_pods.go:89] "kube-proxy-4vvjq" [a8dd9db6-5f45-4c8d-a9d3-03ede1db07eb] Running
	I0731 18:12:39.464999   73696 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-094310" [11590718-4e74-46eb-be90-6405c3f1759f] Running
	I0731 18:12:39.465017   73696 system_pods.go:89] "metrics-server-569cc877fc-mskwc" [c57990b4-1d91-4764-9c33-2fd5f7d2f83b] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0731 18:12:39.465028   73696 system_pods.go:89] "storage-provisioner" [2e4c5a96-4dfc-4af6-8d3e-bab865644328] Running
	I0731 18:12:39.465041   73696 system_pods.go:126] duration metric: took 2.229422302s to wait for k8s-apps to be running ...
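The retry lines above repeat one pattern: list the kube-system pods, check whether each required component has a pod actually running, and back off briefly before trying again ("missing components: kube-dns, kube-proxy" means no such pod is Running yet). A condensed sketch of that loop using client-go; the label selectors and the fixed 300ms backoff are assumptions loosely modeled on the log, and the Running-phase check is a simplification of the readiness conditions the test actually evaluates:

    package main

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    // anyRunning reports whether at least one pod in the slice is in the Running phase.
    func anyRunning(pods []corev1.Pod) bool {
        for _, p := range pods {
            if p.Status.Phase == corev1.PodRunning {
                return true
            }
        }
        return false
    }

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
        if err != nil {
            panic(err)
        }
        client, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }

        // Components the log waits on, keyed by the label selector used to find their pods.
        required := map[string]string{
            "kube-dns":   "k8s-app=kube-dns",
            "kube-proxy": "k8s-app=kube-proxy",
        }

        for {
            var missing []string
            for name, selector := range required {
                pods, err := client.CoreV1().Pods("kube-system").List(context.TODO(),
                    metav1.ListOptions{LabelSelector: selector})
                if err != nil || !anyRunning(pods.Items) {
                    missing = append(missing, name)
                }
            }
            if len(missing) == 0 {
                fmt.Println("all required kube-system components are running")
                return
            }
            fmt.Printf("will retry: missing components: %v\n", missing)
            time.Sleep(300 * time.Millisecond) // the real backoff in the log varies per attempt
        }
    }
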
	I0731 18:12:39.465053   73696 system_svc.go:44] waiting for kubelet service to be running ....
	I0731 18:12:39.465111   73696 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0731 18:12:39.482063   73696 system_svc.go:56] duration metric: took 16.998965ms WaitForService to wait for kubelet
	I0731 18:12:39.482092   73696 kubeadm.go:582] duration metric: took 2.898066741s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0731 18:12:39.482138   73696 node_conditions.go:102] verifying NodePressure condition ...
	I0731 18:12:39.486728   73696 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0731 18:12:39.486752   73696 node_conditions.go:123] node cpu capacity is 2
	I0731 18:12:39.486764   73696 node_conditions.go:105] duration metric: took 4.617934ms to run NodePressure ...
	I0731 18:12:39.486777   73696 start.go:241] waiting for startup goroutines ...
	I0731 18:12:39.486787   73696 start.go:246] waiting for cluster config update ...
	I0731 18:12:39.486798   73696 start.go:255] writing updated cluster config ...
	I0731 18:12:39.487565   73696 ssh_runner.go:195] Run: rm -f paused
	I0731 18:12:39.539591   73696 start.go:600] kubectl: 1.30.3, cluster: 1.30.3 (minor skew: 0)
	I0731 18:12:39.541533   73696 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-094310" cluster and "default" namespace by default
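At this point the default-k8s-diff-port profile is up, with its apiserver on 8444 rather than the usual 8443, and kubectl has been pointed at it. A quick sanity check of the freshly written context, sketched via os/exec; the context name comes from the log, everything else here is an assumption rather than part of the test:

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        // Equivalent to running `kubectl --context default-k8s-diff-port-094310 get pods -A` by hand.
        out, err := exec.Command("kubectl", "--context", "default-k8s-diff-port-094310",
            "get", "pods", "-A").CombinedOutput()
        if err != nil {
            fmt.Println("cluster not reachable:", err)
        }
        fmt.Print(string(out))
    }
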
	I0731 18:12:37.644379   73479 pod_ready.go:102] pod "metrics-server-78fcd8795b-27pkr" in "kube-system" namespace has status "Ready":"False"
	I0731 18:12:39.645608   73479 pod_ready.go:102] pod "metrics-server-78fcd8795b-27pkr" in "kube-system" namespace has status "Ready":"False"
	I0731 18:12:41.969949   73800 kubeadm.go:310] [init] Using Kubernetes version: v1.30.3
	I0731 18:12:41.970018   73800 kubeadm.go:310] [preflight] Running pre-flight checks
	I0731 18:12:41.970137   73800 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0731 18:12:41.970234   73800 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0731 18:12:41.970386   73800 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0731 18:12:41.970495   73800 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0731 18:12:41.972177   73800 out.go:204]   - Generating certificates and keys ...
	I0731 18:12:41.972244   73800 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0731 18:12:41.972314   73800 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0731 18:12:41.972403   73800 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0731 18:12:41.972480   73800 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0731 18:12:41.972538   73800 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0731 18:12:41.972588   73800 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0731 18:12:41.972654   73800 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0731 18:12:41.972748   73800 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0731 18:12:41.972859   73800 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0731 18:12:41.972982   73800 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0731 18:12:41.973027   73800 kubeadm.go:310] [certs] Using the existing "sa" key
	I0731 18:12:41.973082   73800 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0731 18:12:41.973152   73800 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0731 18:12:41.973205   73800 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0731 18:12:41.973252   73800 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0731 18:12:41.973323   73800 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0731 18:12:41.973387   73800 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0731 18:12:41.973456   73800 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0731 18:12:41.973553   73800 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0731 18:12:41.974927   73800 out.go:204]   - Booting up control plane ...
	I0731 18:12:41.975019   73800 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0731 18:12:41.975128   73800 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0731 18:12:41.975215   73800 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0731 18:12:41.975342   73800 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0731 18:12:41.975425   73800 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0731 18:12:41.975474   73800 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0731 18:12:41.975635   73800 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0731 18:12:41.975710   73800 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0731 18:12:41.975766   73800 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.001397088s
	I0731 18:12:41.975824   73800 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0731 18:12:41.975909   73800 kubeadm.go:310] [api-check] The API server is healthy after 5.001258426s
	I0731 18:12:41.976064   73800 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0731 18:12:41.976241   73800 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0731 18:12:41.976355   73800 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0731 18:12:41.976528   73800 kubeadm.go:310] [mark-control-plane] Marking the node embed-certs-436067 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0731 18:12:41.976605   73800 kubeadm.go:310] [bootstrap-token] Using token: m9csv8.j58cj919sgzkgy1k
	I0731 18:12:41.978880   73800 out.go:204]   - Configuring RBAC rules ...
	I0731 18:12:41.978976   73800 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0731 18:12:41.979087   73800 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0731 18:12:41.979277   73800 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0731 18:12:41.979441   73800 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0731 18:12:41.979622   73800 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0731 18:12:41.979708   73800 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0731 18:12:41.979835   73800 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0731 18:12:41.979875   73800 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0731 18:12:41.979918   73800 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0731 18:12:41.979924   73800 kubeadm.go:310] 
	I0731 18:12:41.979971   73800 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0731 18:12:41.979979   73800 kubeadm.go:310] 
	I0731 18:12:41.980058   73800 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0731 18:12:41.980067   73800 kubeadm.go:310] 
	I0731 18:12:41.980098   73800 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0731 18:12:41.980160   73800 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0731 18:12:41.980229   73800 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0731 18:12:41.980236   73800 kubeadm.go:310] 
	I0731 18:12:41.980300   73800 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0731 18:12:41.980311   73800 kubeadm.go:310] 
	I0731 18:12:41.980384   73800 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0731 18:12:41.980393   73800 kubeadm.go:310] 
	I0731 18:12:41.980446   73800 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0731 18:12:41.980548   73800 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0731 18:12:41.980644   73800 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0731 18:12:41.980653   73800 kubeadm.go:310] 
	I0731 18:12:41.980759   73800 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0731 18:12:41.980824   73800 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0731 18:12:41.980830   73800 kubeadm.go:310] 
	I0731 18:12:41.980896   73800 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token m9csv8.j58cj919sgzkgy1k \
	I0731 18:12:41.980984   73800 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:460b24b51d10a3760f2391542a8a89e401359854f437f922cbee3f2d8ed6c856 \
	I0731 18:12:41.981011   73800 kubeadm.go:310] 	--control-plane 
	I0731 18:12:41.981023   73800 kubeadm.go:310] 
	I0731 18:12:41.981093   73800 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0731 18:12:41.981099   73800 kubeadm.go:310] 
	I0731 18:12:41.981183   73800 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token m9csv8.j58cj919sgzkgy1k \
	I0731 18:12:41.981306   73800 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:460b24b51d10a3760f2391542a8a89e401359854f437f922cbee3f2d8ed6c856 
	I0731 18:12:41.981317   73800 cni.go:84] Creating CNI manager for ""
	I0731 18:12:41.981324   73800 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0731 18:12:41.982701   73800 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0731 18:12:41.983929   73800 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0731 18:12:41.995272   73800 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
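The 496-byte file copied above is the bridge CNI config that backs the "Configuring bridge CNI" step. The sketch below writes a minimal conflist of that shape; it does not reproduce minikube's exact contents, and the cniVersion, subnet, and plugin options shown are illustrative assumptions:

    package main

    import (
        "fmt"
        "os"
    )

    // A minimal bridge CNI config of the kind copied to /etc/cni/net.d/1-k8s.conflist.
    // Subnet and plugin options are assumptions, not the file minikube ships.
    const bridgeConflist = `{
      "cniVersion": "0.3.1",
      "name": "bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "bridge",
          "isDefaultGateway": true,
          "ipMasq": true,
          "hairpinMode": true,
          "ipam": {
            "type": "host-local",
            "subnet": "10.244.0.0/16"
          }
        },
        {
          "type": "portmap",
          "capabilities": { "portMappings": true }
        }
      ]
    }`

    func main() {
        if err := os.MkdirAll("/etc/cni/net.d", 0o755); err != nil {
            fmt.Println(err)
            return
        }
        if err := os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(bridgeConflist), 0o644); err != nil {
            fmt.Println(err)
        }
    }
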
	I0731 18:12:42.014929   73800 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0731 18:12:42.014984   73800 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 18:12:42.015033   73800 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes embed-certs-436067 minikube.k8s.io/updated_at=2024_07_31T18_12_42_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=1d737dad7efa60c56d30434fcd857dd3b14c91d9 minikube.k8s.io/name=embed-certs-436067 minikube.k8s.io/primary=true
	I0731 18:12:42.164811   73800 ops.go:34] apiserver oom_adj: -16
	I0731 18:12:42.164934   73800 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 18:12:42.665108   73800 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 18:12:43.165818   73800 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 18:12:43.665733   73800 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 18:12:44.165074   73800 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 18:12:42.144896   73479 pod_ready.go:102] pod "metrics-server-78fcd8795b-27pkr" in "kube-system" namespace has status "Ready":"False"
	I0731 18:12:44.644077   73479 pod_ready.go:102] pod "metrics-server-78fcd8795b-27pkr" in "kube-system" namespace has status "Ready":"False"
	I0731 18:12:44.665477   73800 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 18:12:45.165127   73800 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 18:12:45.665440   73800 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 18:12:46.165555   73800 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 18:12:46.665998   73800 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 18:12:47.165829   73800 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 18:12:47.665704   73800 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 18:12:48.164973   73800 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 18:12:48.665549   73800 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 18:12:49.165210   73800 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 18:12:47.142947   73479 pod_ready.go:102] pod "metrics-server-78fcd8795b-27pkr" in "kube-system" namespace has status "Ready":"False"
	I0731 18:12:49.144015   73479 pod_ready.go:102] pod "metrics-server-78fcd8795b-27pkr" in "kube-system" namespace has status "Ready":"False"
	I0731 18:12:51.644495   73479 pod_ready.go:102] pod "metrics-server-78fcd8795b-27pkr" in "kube-system" namespace has status "Ready":"False"
	I0731 18:12:49.665500   73800 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 18:12:50.165567   73800 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 18:12:50.665547   73800 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 18:12:51.166002   73800 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 18:12:51.664981   73800 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 18:12:52.165135   73800 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 18:12:52.665927   73800 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 18:12:53.165045   73800 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 18:12:53.664981   73800 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 18:12:54.165715   73800 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 18:12:54.252373   73800 kubeadm.go:1113] duration metric: took 12.237438799s to wait for elevateKubeSystemPrivileges
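The burst of repeated `kubectl get sa default` runs above is a poll: after the minikube-rbac clusterrolebinding is created once, the command is retried until the default service account exists, and that wait is what the 12.2s elevateKubeSystemPrivileges figure measures. A stand-alone sketch of the same wait using kubectl through os/exec; the binary and kubeconfig paths mirror the log, while the helper, its 500ms interval, and the 2-minute timeout are hypothetical:

    package main

    import (
        "fmt"
        "os/exec"
        "time"
    )

    // waitForDefaultSA re-runs `kubectl get sa default` until it succeeds or the
    // timeout elapses, mirroring the polling visible in the log above.
    func waitForDefaultSA(kubectl, kubeconfig string, interval, timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            cmd := exec.Command("sudo", kubectl, "get", "sa", "default", "--kubeconfig="+kubeconfig)
            if out, err := cmd.CombinedOutput(); err == nil {
                fmt.Printf("default service account exists:\n%s", out)
                return nil
            }
            time.Sleep(interval)
        }
        return fmt.Errorf("default service account not created within %s", timeout)
    }

    func main() {
        err := waitForDefaultSA(
            "/var/lib/minikube/binaries/v1.30.3/kubectl",
            "/var/lib/minikube/kubeconfig",
            500*time.Millisecond,
            2*time.Minute,
        )
        if err != nil {
            fmt.Println(err)
        }
    }
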
	I0731 18:12:54.252415   73800 kubeadm.go:394] duration metric: took 5m6.689979758s to StartCluster
	I0731 18:12:54.252435   73800 settings.go:142] acquiring lock: {Name:mk8ac8269296bf0b887a00ff74caaad180005f89 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 18:12:54.252509   73800 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19349-8084/kubeconfig
	I0731 18:12:54.254175   73800 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19349-8084/kubeconfig: {Name:mkf2340bc1ada3c683f132aca37228d7588e1fdb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 18:12:54.254495   73800 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.50.86 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0731 18:12:54.254600   73800 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0731 18:12:54.254687   73800 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-436067"
	I0731 18:12:54.254721   73800 addons.go:234] Setting addon storage-provisioner=true in "embed-certs-436067"
	I0731 18:12:54.254724   73800 addons.go:69] Setting default-storageclass=true in profile "embed-certs-436067"
	W0731 18:12:54.254734   73800 addons.go:243] addon storage-provisioner should already be in state true
	I0731 18:12:54.254737   73800 config.go:182] Loaded profile config "embed-certs-436067": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0731 18:12:54.254743   73800 addons.go:69] Setting metrics-server=true in profile "embed-certs-436067"
	I0731 18:12:54.254760   73800 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-436067"
	I0731 18:12:54.254769   73800 host.go:66] Checking if "embed-certs-436067" exists ...
	I0731 18:12:54.254785   73800 addons.go:234] Setting addon metrics-server=true in "embed-certs-436067"
	W0731 18:12:54.254795   73800 addons.go:243] addon metrics-server should already be in state true
	I0731 18:12:54.254826   73800 host.go:66] Checking if "embed-certs-436067" exists ...
	I0731 18:12:54.255205   73800 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 18:12:54.255208   73800 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 18:12:54.255233   73800 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 18:12:54.255238   73800 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 18:12:54.255302   73800 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 18:12:54.255323   73800 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 18:12:54.256412   73800 out.go:177] * Verifying Kubernetes components...
	I0731 18:12:54.257653   73800 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0731 18:12:54.274456   73800 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34433
	I0731 18:12:54.274959   73800 main.go:141] libmachine: () Calling .GetVersion
	I0731 18:12:54.275532   73800 main.go:141] libmachine: Using API Version  1
	I0731 18:12:54.275554   73800 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 18:12:54.275828   73800 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44423
	I0731 18:12:54.275851   73800 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41513
	I0731 18:12:54.276001   73800 main.go:141] libmachine: () Calling .GetMachineName
	I0731 18:12:54.276152   73800 main.go:141] libmachine: () Calling .GetVersion
	I0731 18:12:54.276225   73800 main.go:141] libmachine: () Calling .GetVersion
	I0731 18:12:54.276498   73800 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 18:12:54.276534   73800 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 18:12:54.276592   73800 main.go:141] libmachine: Using API Version  1
	I0731 18:12:54.276606   73800 main.go:141] libmachine: Using API Version  1
	I0731 18:12:54.276613   73800 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 18:12:54.276616   73800 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 18:12:54.276954   73800 main.go:141] libmachine: () Calling .GetMachineName
	I0731 18:12:54.277055   73800 main.go:141] libmachine: () Calling .GetMachineName
	I0731 18:12:54.277103   73800 main.go:141] libmachine: (embed-certs-436067) Calling .GetState
	I0731 18:12:54.277663   73800 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 18:12:54.277704   73800 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 18:12:54.280559   73800 addons.go:234] Setting addon default-storageclass=true in "embed-certs-436067"
	W0731 18:12:54.280583   73800 addons.go:243] addon default-storageclass should already be in state true
	I0731 18:12:54.280615   73800 host.go:66] Checking if "embed-certs-436067" exists ...
	I0731 18:12:54.280969   73800 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 18:12:54.281000   73800 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 18:12:54.293211   73800 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38393
	I0731 18:12:54.293657   73800 main.go:141] libmachine: () Calling .GetVersion
	I0731 18:12:54.294121   73800 main.go:141] libmachine: Using API Version  1
	I0731 18:12:54.294142   73800 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 18:12:54.294444   73800 main.go:141] libmachine: () Calling .GetMachineName
	I0731 18:12:54.294642   73800 main.go:141] libmachine: (embed-certs-436067) Calling .GetState
	I0731 18:12:54.294724   73800 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38771
	I0731 18:12:54.295077   73800 main.go:141] libmachine: () Calling .GetVersion
	I0731 18:12:54.295590   73800 main.go:141] libmachine: Using API Version  1
	I0731 18:12:54.295609   73800 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 18:12:54.296058   73800 main.go:141] libmachine: () Calling .GetMachineName
	I0731 18:12:54.296285   73800 main.go:141] libmachine: (embed-certs-436067) Calling .GetState
	I0731 18:12:54.296377   73800 main.go:141] libmachine: (embed-certs-436067) Calling .DriverName
	I0731 18:12:54.298013   73800 main.go:141] libmachine: (embed-certs-436067) Calling .DriverName
	I0731 18:12:54.298541   73800 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0731 18:12:54.299454   73800 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0731 18:12:54.299489   73800 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0731 18:12:54.299501   73800 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0731 18:12:54.299515   73800 main.go:141] libmachine: (embed-certs-436067) Calling .GetSSHHostname
	I0731 18:12:54.300664   73800 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0731 18:12:54.300682   73800 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0731 18:12:54.300699   73800 main.go:141] libmachine: (embed-certs-436067) Calling .GetSSHHostname
	I0731 18:12:54.301018   73800 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36647
	I0731 18:12:54.301671   73800 main.go:141] libmachine: () Calling .GetVersion
	I0731 18:12:54.302210   73800 main.go:141] libmachine: Using API Version  1
	I0731 18:12:54.302229   73800 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 18:12:54.302731   73800 main.go:141] libmachine: () Calling .GetMachineName
	I0731 18:12:54.302857   73800 main.go:141] libmachine: (embed-certs-436067) DBG | domain embed-certs-436067 has defined MAC address 52:54:00:87:1e:25 in network mk-embed-certs-436067
	I0731 18:12:54.303479   73800 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 18:12:54.303503   73800 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 18:12:54.303710   73800 main.go:141] libmachine: (embed-certs-436067) Calling .GetSSHPort
	I0731 18:12:54.303744   73800 main.go:141] libmachine: (embed-certs-436067) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:1e:25", ip: ""} in network mk-embed-certs-436067: {Iface:virbr4 ExpiryTime:2024-07-31 19:07:32 +0000 UTC Type:0 Mac:52:54:00:87:1e:25 Iaid: IPaddr:192.168.50.86 Prefix:24 Hostname:embed-certs-436067 Clientid:01:52:54:00:87:1e:25}
	I0731 18:12:54.303768   73800 main.go:141] libmachine: (embed-certs-436067) DBG | domain embed-certs-436067 has defined IP address 192.168.50.86 and MAC address 52:54:00:87:1e:25 in network mk-embed-certs-436067
	I0731 18:12:54.303893   73800 main.go:141] libmachine: (embed-certs-436067) Calling .GetSSHKeyPath
	I0731 18:12:54.304071   73800 main.go:141] libmachine: (embed-certs-436067) Calling .GetSSHUsername
	I0731 18:12:54.304232   73800 sshutil.go:53] new ssh client: &{IP:192.168.50.86 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19349-8084/.minikube/machines/embed-certs-436067/id_rsa Username:docker}
	I0731 18:12:54.304601   73800 main.go:141] libmachine: (embed-certs-436067) DBG | domain embed-certs-436067 has defined MAC address 52:54:00:87:1e:25 in network mk-embed-certs-436067
	I0731 18:12:54.305040   73800 main.go:141] libmachine: (embed-certs-436067) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:1e:25", ip: ""} in network mk-embed-certs-436067: {Iface:virbr4 ExpiryTime:2024-07-31 19:07:32 +0000 UTC Type:0 Mac:52:54:00:87:1e:25 Iaid: IPaddr:192.168.50.86 Prefix:24 Hostname:embed-certs-436067 Clientid:01:52:54:00:87:1e:25}
	I0731 18:12:54.305063   73800 main.go:141] libmachine: (embed-certs-436067) DBG | domain embed-certs-436067 has defined IP address 192.168.50.86 and MAC address 52:54:00:87:1e:25 in network mk-embed-certs-436067
	I0731 18:12:54.305311   73800 main.go:141] libmachine: (embed-certs-436067) Calling .GetSSHPort
	I0731 18:12:54.305480   73800 main.go:141] libmachine: (embed-certs-436067) Calling .GetSSHKeyPath
	I0731 18:12:54.305594   73800 main.go:141] libmachine: (embed-certs-436067) Calling .GetSSHUsername
	I0731 18:12:54.305712   73800 sshutil.go:53] new ssh client: &{IP:192.168.50.86 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19349-8084/.minikube/machines/embed-certs-436067/id_rsa Username:docker}
	I0731 18:12:54.318168   73800 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33711
	I0731 18:12:54.318558   73800 main.go:141] libmachine: () Calling .GetVersion
	I0731 18:12:54.319015   73800 main.go:141] libmachine: Using API Version  1
	I0731 18:12:54.319033   73800 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 18:12:54.319355   73800 main.go:141] libmachine: () Calling .GetMachineName
	I0731 18:12:54.319552   73800 main.go:141] libmachine: (embed-certs-436067) Calling .GetState
	I0731 18:12:54.321369   73800 main.go:141] libmachine: (embed-certs-436067) Calling .DriverName
	I0731 18:12:54.321540   73800 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0731 18:12:54.321553   73800 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0731 18:12:54.321565   73800 main.go:141] libmachine: (embed-certs-436067) Calling .GetSSHHostname
	I0731 18:12:54.324613   73800 main.go:141] libmachine: (embed-certs-436067) DBG | domain embed-certs-436067 has defined MAC address 52:54:00:87:1e:25 in network mk-embed-certs-436067
	I0731 18:12:54.324994   73800 main.go:141] libmachine: (embed-certs-436067) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:1e:25", ip: ""} in network mk-embed-certs-436067: {Iface:virbr4 ExpiryTime:2024-07-31 19:07:32 +0000 UTC Type:0 Mac:52:54:00:87:1e:25 Iaid: IPaddr:192.168.50.86 Prefix:24 Hostname:embed-certs-436067 Clientid:01:52:54:00:87:1e:25}
	I0731 18:12:54.325011   73800 main.go:141] libmachine: (embed-certs-436067) DBG | domain embed-certs-436067 has defined IP address 192.168.50.86 and MAC address 52:54:00:87:1e:25 in network mk-embed-certs-436067
	I0731 18:12:54.325310   73800 main.go:141] libmachine: (embed-certs-436067) Calling .GetSSHPort
	I0731 18:12:54.325437   73800 main.go:141] libmachine: (embed-certs-436067) Calling .GetSSHKeyPath
	I0731 18:12:54.325571   73800 main.go:141] libmachine: (embed-certs-436067) Calling .GetSSHUsername
	I0731 18:12:54.325683   73800 sshutil.go:53] new ssh client: &{IP:192.168.50.86 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19349-8084/.minikube/machines/embed-certs-436067/id_rsa Username:docker}
	I0731 18:12:54.435485   73800 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0731 18:12:54.462541   73800 node_ready.go:35] waiting up to 6m0s for node "embed-certs-436067" to be "Ready" ...
	I0731 18:12:54.473787   73800 node_ready.go:49] node "embed-certs-436067" has status "Ready":"True"
	I0731 18:12:54.473810   73800 node_ready.go:38] duration metric: took 11.237808ms for node "embed-certs-436067" to be "Ready" ...
	I0731 18:12:54.473819   73800 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0731 18:12:54.485589   73800 pod_ready.go:78] waiting up to 6m0s for pod "etcd-embed-certs-436067" in "kube-system" namespace to be "Ready" ...
	I0731 18:12:54.507887   73800 pod_ready.go:92] pod "etcd-embed-certs-436067" in "kube-system" namespace has status "Ready":"True"
	I0731 18:12:54.507910   73800 pod_ready.go:81] duration metric: took 22.296215ms for pod "etcd-embed-certs-436067" in "kube-system" namespace to be "Ready" ...
	I0731 18:12:54.507921   73800 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-embed-certs-436067" in "kube-system" namespace to be "Ready" ...
	I0731 18:12:54.524721   73800 pod_ready.go:92] pod "kube-apiserver-embed-certs-436067" in "kube-system" namespace has status "Ready":"True"
	I0731 18:12:54.524742   73800 pod_ready.go:81] duration metric: took 16.814491ms for pod "kube-apiserver-embed-certs-436067" in "kube-system" namespace to be "Ready" ...
	I0731 18:12:54.524751   73800 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-436067" in "kube-system" namespace to be "Ready" ...
	I0731 18:12:54.536810   73800 pod_ready.go:92] pod "kube-controller-manager-embed-certs-436067" in "kube-system" namespace has status "Ready":"True"
	I0731 18:12:54.536837   73800 pod_ready.go:81] duration metric: took 12.078703ms for pod "kube-controller-manager-embed-certs-436067" in "kube-system" namespace to be "Ready" ...
	I0731 18:12:54.536848   73800 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-85spm" in "kube-system" namespace to be "Ready" ...
	I0731 18:12:54.552538   73800 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0731 18:12:54.579223   73800 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0731 18:12:54.579244   73800 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0731 18:12:54.596087   73800 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0731 18:12:54.617180   73800 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0731 18:12:54.617209   73800 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0731 18:12:54.679879   73800 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0731 18:12:54.679908   73800 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0731 18:12:54.775272   73800 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0731 18:12:55.199299   73800 main.go:141] libmachine: Making call to close driver server
	I0731 18:12:55.199335   73800 main.go:141] libmachine: (embed-certs-436067) Calling .Close
	I0731 18:12:55.199342   73800 main.go:141] libmachine: Making call to close driver server
	I0731 18:12:55.199361   73800 main.go:141] libmachine: (embed-certs-436067) Calling .Close
	I0731 18:12:55.199618   73800 main.go:141] libmachine: Successfully made call to close driver server
	I0731 18:12:55.199666   73800 main.go:141] libmachine: Successfully made call to close driver server
	I0731 18:12:55.199678   73800 main.go:141] libmachine: (embed-certs-436067) DBG | Closing plugin on server side
	I0731 18:12:55.199634   73800 main.go:141] libmachine: (embed-certs-436067) DBG | Closing plugin on server side
	I0731 18:12:55.199685   73800 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 18:12:55.199710   73800 main.go:141] libmachine: Making call to close driver server
	I0731 18:12:55.199689   73800 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 18:12:55.199717   73800 main.go:141] libmachine: (embed-certs-436067) Calling .Close
	I0731 18:12:55.199726   73800 main.go:141] libmachine: Making call to close driver server
	I0731 18:12:55.199735   73800 main.go:141] libmachine: (embed-certs-436067) Calling .Close
	I0731 18:12:55.200002   73800 main.go:141] libmachine: Successfully made call to close driver server
	I0731 18:12:55.200016   73800 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 18:12:55.200079   73800 main.go:141] libmachine: (embed-certs-436067) DBG | Closing plugin on server side
	I0731 18:12:55.200107   73800 main.go:141] libmachine: Successfully made call to close driver server
	I0731 18:12:55.200120   73800 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 18:12:55.227472   73800 main.go:141] libmachine: Making call to close driver server
	I0731 18:12:55.227497   73800 main.go:141] libmachine: (embed-certs-436067) Calling .Close
	I0731 18:12:55.227792   73800 main.go:141] libmachine: Successfully made call to close driver server
	I0731 18:12:55.227811   73800 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 18:12:55.712134   73800 main.go:141] libmachine: Making call to close driver server
	I0731 18:12:55.712159   73800 main.go:141] libmachine: (embed-certs-436067) Calling .Close
	I0731 18:12:55.712516   73800 main.go:141] libmachine: Successfully made call to close driver server
	I0731 18:12:55.712568   73800 main.go:141] libmachine: (embed-certs-436067) DBG | Closing plugin on server side
	I0731 18:12:55.712574   73800 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 18:12:55.712596   73800 main.go:141] libmachine: Making call to close driver server
	I0731 18:12:55.712605   73800 main.go:141] libmachine: (embed-certs-436067) Calling .Close
	I0731 18:12:55.712851   73800 main.go:141] libmachine: Successfully made call to close driver server
	I0731 18:12:55.712868   73800 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 18:12:55.712867   73800 main.go:141] libmachine: (embed-certs-436067) DBG | Closing plugin on server side
	I0731 18:12:55.712877   73800 addons.go:475] Verifying addon metrics-server=true in "embed-certs-436067"
	I0731 18:12:55.714432   73800 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0731 18:12:54.143455   73479 pod_ready.go:102] pod "metrics-server-78fcd8795b-27pkr" in "kube-system" namespace has status "Ready":"False"
	I0731 18:12:56.144177   73479 pod_ready.go:102] pod "metrics-server-78fcd8795b-27pkr" in "kube-system" namespace has status "Ready":"False"
	I0731 18:12:55.715903   73800 addons.go:510] duration metric: took 1.461304856s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I0731 18:12:56.542100   73800 pod_ready.go:92] pod "kube-proxy-85spm" in "kube-system" namespace has status "Ready":"True"
	I0731 18:12:56.542122   73800 pod_ready.go:81] duration metric: took 2.005265959s for pod "kube-proxy-85spm" in "kube-system" namespace to be "Ready" ...
	I0731 18:12:56.542135   73800 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-embed-certs-436067" in "kube-system" namespace to be "Ready" ...
	I0731 18:12:56.553810   73800 pod_ready.go:92] pod "kube-scheduler-embed-certs-436067" in "kube-system" namespace has status "Ready":"True"
	I0731 18:12:56.553831   73800 pod_ready.go:81] duration metric: took 11.689814ms for pod "kube-scheduler-embed-certs-436067" in "kube-system" namespace to be "Ready" ...
	I0731 18:12:56.553840   73800 pod_ready.go:38] duration metric: took 2.080010607s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0731 18:12:56.553853   73800 api_server.go:52] waiting for apiserver process to appear ...
	I0731 18:12:56.553899   73800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:12:56.568301   73800 api_server.go:72] duration metric: took 2.313759916s to wait for apiserver process to appear ...
	I0731 18:12:56.568327   73800 api_server.go:88] waiting for apiserver healthz status ...
	I0731 18:12:56.568345   73800 api_server.go:253] Checking apiserver healthz at https://192.168.50.86:8443/healthz ...
	I0731 18:12:56.573861   73800 api_server.go:279] https://192.168.50.86:8443/healthz returned 200:
	ok
	I0731 18:12:56.575494   73800 api_server.go:141] control plane version: v1.30.3
	I0731 18:12:56.575513   73800 api_server.go:131] duration metric: took 7.1795ms to wait for apiserver health ...
	I0731 18:12:56.575520   73800 system_pods.go:43] waiting for kube-system pods to appear ...
	I0731 18:12:56.669169   73800 system_pods.go:59] 9 kube-system pods found
	I0731 18:12:56.669197   73800 system_pods.go:61] "coredns-7db6d8ff4d-fqkfd" [2e5a67a3-6f2b-43a5-8b94-cf48202c5958] Running
	I0731 18:12:56.669202   73800 system_pods.go:61] "coredns-7db6d8ff4d-qpb62" [e57f157b-01b5-42ec-9630-b9b5ae94fe5d] Running
	I0731 18:12:56.669206   73800 system_pods.go:61] "etcd-embed-certs-436067" [d0439817-b307-4673-8a64-34aa31266fbb] Running
	I0731 18:12:56.669210   73800 system_pods.go:61] "kube-apiserver-embed-certs-436067" [c2409e33-d42b-4132-8f63-197c4d445704] Running
	I0731 18:12:56.669214   73800 system_pods.go:61] "kube-controller-manager-embed-certs-436067" [dfea1f44-48a3-4a3c-8a27-be6f01f58add] Running
	I0731 18:12:56.669218   73800 system_pods.go:61] "kube-proxy-85spm" [eecb399a-0365-436d-8f14-a7853a5a2ea3] Running
	I0731 18:12:56.669221   73800 system_pods.go:61] "kube-scheduler-embed-certs-436067" [e406617d-adf7-4baa-9a8a-6fd92847a12f] Running
	I0731 18:12:56.669228   73800 system_pods.go:61] "metrics-server-569cc877fc-pgf6q" [b799fa04-0bf7-4914-9738-964b825577b5] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0731 18:12:56.669231   73800 system_pods.go:61] "storage-provisioner" [a7d95d32-1002-4fba-b0fd-555b872efa1f] Running
	I0731 18:12:56.669240   73800 system_pods.go:74] duration metric: took 93.714593ms to wait for pod list to return data ...
	I0731 18:12:56.669247   73800 default_sa.go:34] waiting for default service account to be created ...
	I0731 18:12:56.866494   73800 default_sa.go:45] found service account: "default"
	I0731 18:12:56.866521   73800 default_sa.go:55] duration metric: took 197.264891ms for default service account to be created ...
	I0731 18:12:56.866532   73800 system_pods.go:116] waiting for k8s-apps to be running ...
	I0731 18:12:57.068903   73800 system_pods.go:86] 9 kube-system pods found
	I0731 18:12:57.068930   73800 system_pods.go:89] "coredns-7db6d8ff4d-fqkfd" [2e5a67a3-6f2b-43a5-8b94-cf48202c5958] Running
	I0731 18:12:57.068936   73800 system_pods.go:89] "coredns-7db6d8ff4d-qpb62" [e57f157b-01b5-42ec-9630-b9b5ae94fe5d] Running
	I0731 18:12:57.068940   73800 system_pods.go:89] "etcd-embed-certs-436067" [d0439817-b307-4673-8a64-34aa31266fbb] Running
	I0731 18:12:57.068944   73800 system_pods.go:89] "kube-apiserver-embed-certs-436067" [c2409e33-d42b-4132-8f63-197c4d445704] Running
	I0731 18:12:57.068948   73800 system_pods.go:89] "kube-controller-manager-embed-certs-436067" [dfea1f44-48a3-4a3c-8a27-be6f01f58add] Running
	I0731 18:12:57.068951   73800 system_pods.go:89] "kube-proxy-85spm" [eecb399a-0365-436d-8f14-a7853a5a2ea3] Running
	I0731 18:12:57.068955   73800 system_pods.go:89] "kube-scheduler-embed-certs-436067" [e406617d-adf7-4baa-9a8a-6fd92847a12f] Running
	I0731 18:12:57.068961   73800 system_pods.go:89] "metrics-server-569cc877fc-pgf6q" [b799fa04-0bf7-4914-9738-964b825577b5] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0731 18:12:57.068965   73800 system_pods.go:89] "storage-provisioner" [a7d95d32-1002-4fba-b0fd-555b872efa1f] Running
	I0731 18:12:57.068972   73800 system_pods.go:126] duration metric: took 202.435205ms to wait for k8s-apps to be running ...
	I0731 18:12:57.068980   73800 system_svc.go:44] waiting for kubelet service to be running ....
	I0731 18:12:57.069018   73800 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0731 18:12:57.083728   73800 system_svc.go:56] duration metric: took 14.739831ms WaitForService to wait for kubelet
	I0731 18:12:57.083756   73800 kubeadm.go:582] duration metric: took 2.829227102s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0731 18:12:57.083782   73800 node_conditions.go:102] verifying NodePressure condition ...
	I0731 18:12:57.266463   73800 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0731 18:12:57.266486   73800 node_conditions.go:123] node cpu capacity is 2
	I0731 18:12:57.266495   73800 node_conditions.go:105] duration metric: took 182.707869ms to run NodePressure ...
	I0731 18:12:57.266505   73800 start.go:241] waiting for startup goroutines ...
	I0731 18:12:57.266512   73800 start.go:246] waiting for cluster config update ...
	I0731 18:12:57.266521   73800 start.go:255] writing updated cluster config ...
	I0731 18:12:57.266767   73800 ssh_runner.go:195] Run: rm -f paused
	I0731 18:12:57.313723   73800 start.go:600] kubectl: 1.30.3, cluster: 1.30.3 (minor skew: 0)
	I0731 18:12:57.315966   73800 out.go:177] * Done! kubectl is now configured to use "embed-certs-436067" cluster and "default" namespace by default
	I0731 18:12:58.652853   74203 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0731 18:12:58.653480   74203 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0731 18:12:58.653735   74203 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0731 18:12:58.643237   73479 pod_ready.go:102] pod "metrics-server-78fcd8795b-27pkr" in "kube-system" namespace has status "Ready":"False"
	I0731 18:13:01.143274   73479 pod_ready.go:102] pod "metrics-server-78fcd8795b-27pkr" in "kube-system" namespace has status "Ready":"False"
	I0731 18:13:01.643357   73479 pod_ready.go:81] duration metric: took 4m0.006506347s for pod "metrics-server-78fcd8795b-27pkr" in "kube-system" namespace to be "Ready" ...
	E0731 18:13:01.643382   73479 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0731 18:13:01.643388   73479 pod_ready.go:38] duration metric: took 4m7.418860701s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0731 18:13:01.643402   73479 api_server.go:52] waiting for apiserver process to appear ...
	I0731 18:13:01.643428   73479 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 18:13:01.643481   73479 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 18:13:01.692071   73479 cri.go:89] found id: "895465d024797e36349af3a0f7cd0689ec7e813d811a3205a301baae081986e0"
	I0731 18:13:01.692092   73479 cri.go:89] found id: ""
	I0731 18:13:01.692101   73479 logs.go:276] 1 containers: [895465d024797e36349af3a0f7cd0689ec7e813d811a3205a301baae081986e0]
	I0731 18:13:01.692159   73479 ssh_runner.go:195] Run: which crictl
	I0731 18:13:01.697266   73479 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 18:13:01.697356   73479 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 18:13:01.736299   73479 cri.go:89] found id: "65ef90d7b082adf059c9973e7f1dfde4770ed9718d440c8e0d525d427b16f645"
	I0731 18:13:01.736350   73479 cri.go:89] found id: ""
	I0731 18:13:01.736360   73479 logs.go:276] 1 containers: [65ef90d7b082adf059c9973e7f1dfde4770ed9718d440c8e0d525d427b16f645]
	I0731 18:13:01.736417   73479 ssh_runner.go:195] Run: which crictl
	I0731 18:13:01.740672   73479 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 18:13:01.740733   73479 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 18:13:01.774782   73479 cri.go:89] found id: "f043eb2392c22ee26a4729969283f278a4dc7ff4784addf7139958beb2cba855"
	I0731 18:13:01.774816   73479 cri.go:89] found id: ""
	I0731 18:13:01.774826   73479 logs.go:276] 1 containers: [f043eb2392c22ee26a4729969283f278a4dc7ff4784addf7139958beb2cba855]
	I0731 18:13:01.774893   73479 ssh_runner.go:195] Run: which crictl
	I0731 18:13:01.778542   73479 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 18:13:01.778618   73479 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 18:13:01.818749   73479 cri.go:89] found id: "ed1c40e21d8aa2b6321516625802154255f5eda1642411e377150b8df1c88bd2"
	I0731 18:13:01.818769   73479 cri.go:89] found id: ""
	I0731 18:13:01.818776   73479 logs.go:276] 1 containers: [ed1c40e21d8aa2b6321516625802154255f5eda1642411e377150b8df1c88bd2]
	I0731 18:13:01.818828   73479 ssh_runner.go:195] Run: which crictl
	I0731 18:13:01.827176   73479 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 18:13:01.827248   73479 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 18:13:01.860700   73479 cri.go:89] found id: "57bdb8e09be40a0925d7a86eb613ef8a0455415666562743e36e6800b8636847"
	I0731 18:13:01.860730   73479 cri.go:89] found id: ""
	I0731 18:13:01.860739   73479 logs.go:276] 1 containers: [57bdb8e09be40a0925d7a86eb613ef8a0455415666562743e36e6800b8636847]
	I0731 18:13:01.860825   73479 ssh_runner.go:195] Run: which crictl
	I0731 18:13:03.654494   74203 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0731 18:13:03.654747   74203 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0731 18:13:01.864629   73479 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 18:13:01.864702   73479 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 18:13:01.899293   73479 cri.go:89] found id: "ee75a53c57652705a335378f60ff96b2fce4543143edc35249ac027c7c6e4a2c"
	I0731 18:13:01.899338   73479 cri.go:89] found id: ""
	I0731 18:13:01.899347   73479 logs.go:276] 1 containers: [ee75a53c57652705a335378f60ff96b2fce4543143edc35249ac027c7c6e4a2c]
	I0731 18:13:01.899406   73479 ssh_runner.go:195] Run: which crictl
	I0731 18:13:01.903202   73479 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 18:13:01.903272   73479 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 18:13:01.934472   73479 cri.go:89] found id: ""
	I0731 18:13:01.934505   73479 logs.go:276] 0 containers: []
	W0731 18:13:01.934516   73479 logs.go:278] No container was found matching "kindnet"
	I0731 18:13:01.934523   73479 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0731 18:13:01.934588   73479 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0731 18:13:01.967244   73479 cri.go:89] found id: "6f311536202ba854461dd5abcc13c5b8d006399426b52e0c8d10bc41f06826df"
	I0731 18:13:01.967271   73479 cri.go:89] found id: "9ea2bc105f57ae4d05e4535f2851a35b9014c7574a63fef533fa0c487002941c"
	I0731 18:13:01.967276   73479 cri.go:89] found id: ""
	I0731 18:13:01.967285   73479 logs.go:276] 2 containers: [6f311536202ba854461dd5abcc13c5b8d006399426b52e0c8d10bc41f06826df 9ea2bc105f57ae4d05e4535f2851a35b9014c7574a63fef533fa0c487002941c]
	I0731 18:13:01.967349   73479 ssh_runner.go:195] Run: which crictl
	I0731 18:13:01.971167   73479 ssh_runner.go:195] Run: which crictl
	I0731 18:13:01.975648   73479 logs.go:123] Gathering logs for kubelet ...
	I0731 18:13:01.975670   73479 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 18:13:02.031430   73479 logs.go:123] Gathering logs for describe nodes ...
	I0731 18:13:02.031472   73479 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 18:13:02.158774   73479 logs.go:123] Gathering logs for kube-apiserver [895465d024797e36349af3a0f7cd0689ec7e813d811a3205a301baae081986e0] ...
	I0731 18:13:02.158803   73479 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 895465d024797e36349af3a0f7cd0689ec7e813d811a3205a301baae081986e0"
	I0731 18:13:02.199495   73479 logs.go:123] Gathering logs for kube-scheduler [ed1c40e21d8aa2b6321516625802154255f5eda1642411e377150b8df1c88bd2] ...
	I0731 18:13:02.199521   73479 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ed1c40e21d8aa2b6321516625802154255f5eda1642411e377150b8df1c88bd2"
	I0731 18:13:02.232285   73479 logs.go:123] Gathering logs for kube-proxy [57bdb8e09be40a0925d7a86eb613ef8a0455415666562743e36e6800b8636847] ...
	I0731 18:13:02.232327   73479 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 57bdb8e09be40a0925d7a86eb613ef8a0455415666562743e36e6800b8636847"
	I0731 18:13:02.272360   73479 logs.go:123] Gathering logs for storage-provisioner [9ea2bc105f57ae4d05e4535f2851a35b9014c7574a63fef533fa0c487002941c] ...
	I0731 18:13:02.272389   73479 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9ea2bc105f57ae4d05e4535f2851a35b9014c7574a63fef533fa0c487002941c"
	I0731 18:13:02.305902   73479 logs.go:123] Gathering logs for dmesg ...
	I0731 18:13:02.305931   73479 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 18:13:02.319954   73479 logs.go:123] Gathering logs for etcd [65ef90d7b082adf059c9973e7f1dfde4770ed9718d440c8e0d525d427b16f645] ...
	I0731 18:13:02.319984   73479 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 65ef90d7b082adf059c9973e7f1dfde4770ed9718d440c8e0d525d427b16f645"
	I0731 18:13:02.361657   73479 logs.go:123] Gathering logs for coredns [f043eb2392c22ee26a4729969283f278a4dc7ff4784addf7139958beb2cba855] ...
	I0731 18:13:02.361685   73479 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f043eb2392c22ee26a4729969283f278a4dc7ff4784addf7139958beb2cba855"
	I0731 18:13:02.395696   73479 logs.go:123] Gathering logs for kube-controller-manager [ee75a53c57652705a335378f60ff96b2fce4543143edc35249ac027c7c6e4a2c] ...
	I0731 18:13:02.395724   73479 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ee75a53c57652705a335378f60ff96b2fce4543143edc35249ac027c7c6e4a2c"
	I0731 18:13:02.444671   73479 logs.go:123] Gathering logs for storage-provisioner [6f311536202ba854461dd5abcc13c5b8d006399426b52e0c8d10bc41f06826df] ...
	I0731 18:13:02.444704   73479 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6f311536202ba854461dd5abcc13c5b8d006399426b52e0c8d10bc41f06826df"
	I0731 18:13:02.480666   73479 logs.go:123] Gathering logs for CRI-O ...
	I0731 18:13:02.480693   73479 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 18:13:02.967693   73479 logs.go:123] Gathering logs for container status ...
	I0731 18:13:02.967741   73479 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 18:13:05.512381   73479 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:13:05.528582   73479 api_server.go:72] duration metric: took 4m19.030809429s to wait for apiserver process to appear ...
	I0731 18:13:05.528612   73479 api_server.go:88] waiting for apiserver healthz status ...
	I0731 18:13:05.528652   73479 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 18:13:05.528730   73479 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 18:13:05.567984   73479 cri.go:89] found id: "895465d024797e36349af3a0f7cd0689ec7e813d811a3205a301baae081986e0"
	I0731 18:13:05.568004   73479 cri.go:89] found id: ""
	I0731 18:13:05.568013   73479 logs.go:276] 1 containers: [895465d024797e36349af3a0f7cd0689ec7e813d811a3205a301baae081986e0]
	I0731 18:13:05.568073   73479 ssh_runner.go:195] Run: which crictl
	I0731 18:13:05.571946   73479 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 18:13:05.572003   73479 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 18:13:05.620468   73479 cri.go:89] found id: "65ef90d7b082adf059c9973e7f1dfde4770ed9718d440c8e0d525d427b16f645"
	I0731 18:13:05.620495   73479 cri.go:89] found id: ""
	I0731 18:13:05.620504   73479 logs.go:276] 1 containers: [65ef90d7b082adf059c9973e7f1dfde4770ed9718d440c8e0d525d427b16f645]
	I0731 18:13:05.620571   73479 ssh_runner.go:195] Run: which crictl
	I0731 18:13:05.624599   73479 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 18:13:05.624653   73479 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 18:13:05.663717   73479 cri.go:89] found id: "f043eb2392c22ee26a4729969283f278a4dc7ff4784addf7139958beb2cba855"
	I0731 18:13:05.663740   73479 cri.go:89] found id: ""
	I0731 18:13:05.663748   73479 logs.go:276] 1 containers: [f043eb2392c22ee26a4729969283f278a4dc7ff4784addf7139958beb2cba855]
	I0731 18:13:05.663803   73479 ssh_runner.go:195] Run: which crictl
	I0731 18:13:05.667601   73479 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 18:13:05.667672   73479 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 18:13:05.699764   73479 cri.go:89] found id: "ed1c40e21d8aa2b6321516625802154255f5eda1642411e377150b8df1c88bd2"
	I0731 18:13:05.699791   73479 cri.go:89] found id: ""
	I0731 18:13:05.699801   73479 logs.go:276] 1 containers: [ed1c40e21d8aa2b6321516625802154255f5eda1642411e377150b8df1c88bd2]
	I0731 18:13:05.699858   73479 ssh_runner.go:195] Run: which crictl
	I0731 18:13:05.703965   73479 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 18:13:05.704036   73479 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 18:13:05.739460   73479 cri.go:89] found id: "57bdb8e09be40a0925d7a86eb613ef8a0455415666562743e36e6800b8636847"
	I0731 18:13:05.739487   73479 cri.go:89] found id: ""
	I0731 18:13:05.739496   73479 logs.go:276] 1 containers: [57bdb8e09be40a0925d7a86eb613ef8a0455415666562743e36e6800b8636847]
	I0731 18:13:05.739558   73479 ssh_runner.go:195] Run: which crictl
	I0731 18:13:05.743180   73479 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 18:13:05.743232   73479 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 18:13:05.777369   73479 cri.go:89] found id: "ee75a53c57652705a335378f60ff96b2fce4543143edc35249ac027c7c6e4a2c"
	I0731 18:13:05.777390   73479 cri.go:89] found id: ""
	I0731 18:13:05.777397   73479 logs.go:276] 1 containers: [ee75a53c57652705a335378f60ff96b2fce4543143edc35249ac027c7c6e4a2c]
	I0731 18:13:05.777449   73479 ssh_runner.go:195] Run: which crictl
	I0731 18:13:05.781388   73479 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 18:13:05.781435   73479 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 18:13:05.825567   73479 cri.go:89] found id: ""
	I0731 18:13:05.825599   73479 logs.go:276] 0 containers: []
	W0731 18:13:05.825610   73479 logs.go:278] No container was found matching "kindnet"
	I0731 18:13:05.825617   73479 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0731 18:13:05.825689   73479 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0731 18:13:05.859538   73479 cri.go:89] found id: "6f311536202ba854461dd5abcc13c5b8d006399426b52e0c8d10bc41f06826df"
	I0731 18:13:05.859570   73479 cri.go:89] found id: "9ea2bc105f57ae4d05e4535f2851a35b9014c7574a63fef533fa0c487002941c"
	I0731 18:13:05.859577   73479 cri.go:89] found id: ""
	I0731 18:13:05.859586   73479 logs.go:276] 2 containers: [6f311536202ba854461dd5abcc13c5b8d006399426b52e0c8d10bc41f06826df 9ea2bc105f57ae4d05e4535f2851a35b9014c7574a63fef533fa0c487002941c]
	I0731 18:13:05.859657   73479 ssh_runner.go:195] Run: which crictl
	I0731 18:13:05.863513   73479 ssh_runner.go:195] Run: which crictl
	I0731 18:13:05.866989   73479 logs.go:123] Gathering logs for CRI-O ...
	I0731 18:13:05.867011   73479 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 18:13:06.314116   73479 logs.go:123] Gathering logs for container status ...
	I0731 18:13:06.314166   73479 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 18:13:06.357738   73479 logs.go:123] Gathering logs for kubelet ...
	I0731 18:13:06.357764   73479 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 18:13:06.407330   73479 logs.go:123] Gathering logs for describe nodes ...
	I0731 18:13:06.407365   73479 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 18:13:06.508580   73479 logs.go:123] Gathering logs for kube-apiserver [895465d024797e36349af3a0f7cd0689ec7e813d811a3205a301baae081986e0] ...
	I0731 18:13:06.508616   73479 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 895465d024797e36349af3a0f7cd0689ec7e813d811a3205a301baae081986e0"
	I0731 18:13:06.550032   73479 logs.go:123] Gathering logs for coredns [f043eb2392c22ee26a4729969283f278a4dc7ff4784addf7139958beb2cba855] ...
	I0731 18:13:06.550071   73479 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f043eb2392c22ee26a4729969283f278a4dc7ff4784addf7139958beb2cba855"
	I0731 18:13:06.588519   73479 logs.go:123] Gathering logs for kube-proxy [57bdb8e09be40a0925d7a86eb613ef8a0455415666562743e36e6800b8636847] ...
	I0731 18:13:06.588548   73479 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 57bdb8e09be40a0925d7a86eb613ef8a0455415666562743e36e6800b8636847"
	I0731 18:13:06.622872   73479 logs.go:123] Gathering logs for storage-provisioner [6f311536202ba854461dd5abcc13c5b8d006399426b52e0c8d10bc41f06826df] ...
	I0731 18:13:06.622901   73479 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6f311536202ba854461dd5abcc13c5b8d006399426b52e0c8d10bc41f06826df"
	I0731 18:13:06.666694   73479 logs.go:123] Gathering logs for dmesg ...
	I0731 18:13:06.666721   73479 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 18:13:06.680326   73479 logs.go:123] Gathering logs for etcd [65ef90d7b082adf059c9973e7f1dfde4770ed9718d440c8e0d525d427b16f645] ...
	I0731 18:13:06.680355   73479 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 65ef90d7b082adf059c9973e7f1dfde4770ed9718d440c8e0d525d427b16f645"
	I0731 18:13:06.723966   73479 logs.go:123] Gathering logs for kube-scheduler [ed1c40e21d8aa2b6321516625802154255f5eda1642411e377150b8df1c88bd2] ...
	I0731 18:13:06.723997   73479 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ed1c40e21d8aa2b6321516625802154255f5eda1642411e377150b8df1c88bd2"
	I0731 18:13:06.760873   73479 logs.go:123] Gathering logs for kube-controller-manager [ee75a53c57652705a335378f60ff96b2fce4543143edc35249ac027c7c6e4a2c] ...
	I0731 18:13:06.760901   73479 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ee75a53c57652705a335378f60ff96b2fce4543143edc35249ac027c7c6e4a2c"
	I0731 18:13:06.809348   73479 logs.go:123] Gathering logs for storage-provisioner [9ea2bc105f57ae4d05e4535f2851a35b9014c7574a63fef533fa0c487002941c] ...
	I0731 18:13:06.809387   73479 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9ea2bc105f57ae4d05e4535f2851a35b9014c7574a63fef533fa0c487002941c"
	I0731 18:13:09.341394   73479 api_server.go:253] Checking apiserver healthz at https://192.168.61.126:8443/healthz ...
	I0731 18:13:09.346642   73479 api_server.go:279] https://192.168.61.126:8443/healthz returned 200:
	ok
	I0731 18:13:09.347803   73479 api_server.go:141] control plane version: v1.31.0-beta.0
	I0731 18:13:09.347821   73479 api_server.go:131] duration metric: took 3.819202346s to wait for apiserver health ...
	I0731 18:13:09.347828   73479 system_pods.go:43] waiting for kube-system pods to appear ...
	I0731 18:13:09.347850   73479 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 18:13:09.347903   73479 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 18:13:09.391857   73479 cri.go:89] found id: "895465d024797e36349af3a0f7cd0689ec7e813d811a3205a301baae081986e0"
	I0731 18:13:09.391885   73479 cri.go:89] found id: ""
	I0731 18:13:09.391895   73479 logs.go:276] 1 containers: [895465d024797e36349af3a0f7cd0689ec7e813d811a3205a301baae081986e0]
	I0731 18:13:09.391956   73479 ssh_runner.go:195] Run: which crictl
	I0731 18:13:09.395723   73479 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 18:13:09.395789   73479 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 18:13:09.430108   73479 cri.go:89] found id: "65ef90d7b082adf059c9973e7f1dfde4770ed9718d440c8e0d525d427b16f645"
	I0731 18:13:09.430128   73479 cri.go:89] found id: ""
	I0731 18:13:09.430135   73479 logs.go:276] 1 containers: [65ef90d7b082adf059c9973e7f1dfde4770ed9718d440c8e0d525d427b16f645]
	I0731 18:13:09.430180   73479 ssh_runner.go:195] Run: which crictl
	I0731 18:13:09.433933   73479 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 18:13:09.434037   73479 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 18:13:09.471630   73479 cri.go:89] found id: "f043eb2392c22ee26a4729969283f278a4dc7ff4784addf7139958beb2cba855"
	I0731 18:13:09.471655   73479 cri.go:89] found id: ""
	I0731 18:13:09.471663   73479 logs.go:276] 1 containers: [f043eb2392c22ee26a4729969283f278a4dc7ff4784addf7139958beb2cba855]
	I0731 18:13:09.471709   73479 ssh_runner.go:195] Run: which crictl
	I0731 18:13:09.476432   73479 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 18:13:09.476496   73479 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 18:13:09.519568   73479 cri.go:89] found id: "ed1c40e21d8aa2b6321516625802154255f5eda1642411e377150b8df1c88bd2"
	I0731 18:13:09.519590   73479 cri.go:89] found id: ""
	I0731 18:13:09.519598   73479 logs.go:276] 1 containers: [ed1c40e21d8aa2b6321516625802154255f5eda1642411e377150b8df1c88bd2]
	I0731 18:13:09.519641   73479 ssh_runner.go:195] Run: which crictl
	I0731 18:13:09.523587   73479 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 18:13:09.523656   73479 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 18:13:09.559405   73479 cri.go:89] found id: "57bdb8e09be40a0925d7a86eb613ef8a0455415666562743e36e6800b8636847"
	I0731 18:13:09.559429   73479 cri.go:89] found id: ""
	I0731 18:13:09.559438   73479 logs.go:276] 1 containers: [57bdb8e09be40a0925d7a86eb613ef8a0455415666562743e36e6800b8636847]
	I0731 18:13:09.559485   73479 ssh_runner.go:195] Run: which crictl
	I0731 18:13:09.564137   73479 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 18:13:09.564199   73479 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 18:13:09.605298   73479 cri.go:89] found id: "ee75a53c57652705a335378f60ff96b2fce4543143edc35249ac027c7c6e4a2c"
	I0731 18:13:09.605324   73479 cri.go:89] found id: ""
	I0731 18:13:09.605332   73479 logs.go:276] 1 containers: [ee75a53c57652705a335378f60ff96b2fce4543143edc35249ac027c7c6e4a2c]
	I0731 18:13:09.605403   73479 ssh_runner.go:195] Run: which crictl
	I0731 18:13:09.612233   73479 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 18:13:09.612296   73479 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 18:13:09.648804   73479 cri.go:89] found id: ""
	I0731 18:13:09.648836   73479 logs.go:276] 0 containers: []
	W0731 18:13:09.648848   73479 logs.go:278] No container was found matching "kindnet"
	I0731 18:13:09.648855   73479 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0731 18:13:09.648916   73479 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0731 18:13:09.694708   73479 cri.go:89] found id: "6f311536202ba854461dd5abcc13c5b8d006399426b52e0c8d10bc41f06826df"
	I0731 18:13:09.694733   73479 cri.go:89] found id: "9ea2bc105f57ae4d05e4535f2851a35b9014c7574a63fef533fa0c487002941c"
	I0731 18:13:09.694737   73479 cri.go:89] found id: ""
	I0731 18:13:09.694743   73479 logs.go:276] 2 containers: [6f311536202ba854461dd5abcc13c5b8d006399426b52e0c8d10bc41f06826df 9ea2bc105f57ae4d05e4535f2851a35b9014c7574a63fef533fa0c487002941c]
	I0731 18:13:09.694794   73479 ssh_runner.go:195] Run: which crictl
	I0731 18:13:09.698687   73479 ssh_runner.go:195] Run: which crictl
	I0731 18:13:09.702244   73479 logs.go:123] Gathering logs for storage-provisioner [6f311536202ba854461dd5abcc13c5b8d006399426b52e0c8d10bc41f06826df] ...
	I0731 18:13:09.702261   73479 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6f311536202ba854461dd5abcc13c5b8d006399426b52e0c8d10bc41f06826df"
	I0731 18:13:09.737777   73479 logs.go:123] Gathering logs for storage-provisioner [9ea2bc105f57ae4d05e4535f2851a35b9014c7574a63fef533fa0c487002941c] ...
	I0731 18:13:09.737808   73479 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9ea2bc105f57ae4d05e4535f2851a35b9014c7574a63fef533fa0c487002941c"
	I0731 18:13:09.771128   73479 logs.go:123] Gathering logs for container status ...
	I0731 18:13:09.771161   73479 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 18:13:09.817498   73479 logs.go:123] Gathering logs for dmesg ...
	I0731 18:13:09.817525   73479 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 18:13:09.833574   73479 logs.go:123] Gathering logs for etcd [65ef90d7b082adf059c9973e7f1dfde4770ed9718d440c8e0d525d427b16f645] ...
	I0731 18:13:09.833607   73479 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 65ef90d7b082adf059c9973e7f1dfde4770ed9718d440c8e0d525d427b16f645"
	I0731 18:13:09.872664   73479 logs.go:123] Gathering logs for kube-scheduler [ed1c40e21d8aa2b6321516625802154255f5eda1642411e377150b8df1c88bd2] ...
	I0731 18:13:09.872691   73479 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ed1c40e21d8aa2b6321516625802154255f5eda1642411e377150b8df1c88bd2"
	I0731 18:13:09.913741   73479 logs.go:123] Gathering logs for coredns [f043eb2392c22ee26a4729969283f278a4dc7ff4784addf7139958beb2cba855] ...
	I0731 18:13:09.913771   73479 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f043eb2392c22ee26a4729969283f278a4dc7ff4784addf7139958beb2cba855"
	I0731 18:13:09.949469   73479 logs.go:123] Gathering logs for kube-proxy [57bdb8e09be40a0925d7a86eb613ef8a0455415666562743e36e6800b8636847] ...
	I0731 18:13:09.949512   73479 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 57bdb8e09be40a0925d7a86eb613ef8a0455415666562743e36e6800b8636847"
	I0731 18:13:09.985409   73479 logs.go:123] Gathering logs for kube-controller-manager [ee75a53c57652705a335378f60ff96b2fce4543143edc35249ac027c7c6e4a2c] ...
	I0731 18:13:09.985447   73479 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ee75a53c57652705a335378f60ff96b2fce4543143edc35249ac027c7c6e4a2c"
	I0731 18:13:10.039018   73479 logs.go:123] Gathering logs for CRI-O ...
	I0731 18:13:10.039048   73479 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 18:13:10.406380   73479 logs.go:123] Gathering logs for kubelet ...
	I0731 18:13:10.406416   73479 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 18:13:10.459944   73479 logs.go:123] Gathering logs for describe nodes ...
	I0731 18:13:10.459997   73479 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 18:13:10.564092   73479 logs.go:123] Gathering logs for kube-apiserver [895465d024797e36349af3a0f7cd0689ec7e813d811a3205a301baae081986e0] ...
	I0731 18:13:10.564134   73479 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 895465d024797e36349af3a0f7cd0689ec7e813d811a3205a301baae081986e0"
	I0731 18:13:13.124074   73479 system_pods.go:59] 8 kube-system pods found
	I0731 18:13:13.124102   73479 system_pods.go:61] "coredns-5cfdc65f69-k7clq" [e4be77b6-aa7c-45e2-90a1-6a8264fd5101] Running
	I0731 18:13:13.124107   73479 system_pods.go:61] "etcd-no-preload-673754" [d64e9194-dd33-4a28-b8c3-d25462f1dae8] Running
	I0731 18:13:13.124110   73479 system_pods.go:61] "kube-apiserver-no-preload-673754" [4497614d-51de-4a5f-93dc-1397446fe4c8] Running
	I0731 18:13:13.124114   73479 system_pods.go:61] "kube-controller-manager-no-preload-673754" [fe7f714e-d778-49e7-b4ef-44e6efb5bc33] Running
	I0731 18:13:13.124117   73479 system_pods.go:61] "kube-proxy-hqxh6" [4fb623c0-4be3-421c-85d6-1d76a90b874f] Running
	I0731 18:13:13.124119   73479 system_pods.go:61] "kube-scheduler-no-preload-673754" [558e9b78-a6a9-46b8-9db8-004126359c74] Running
	I0731 18:13:13.124125   73479 system_pods.go:61] "metrics-server-78fcd8795b-27pkr" [a63a156b-0446-4bd9-8619-de75edaeb481] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0731 18:13:13.124129   73479 system_pods.go:61] "storage-provisioner" [a9f0ab70-18f4-4fac-b858-a9177077fe29] Running
	I0731 18:13:13.124135   73479 system_pods.go:74] duration metric: took 3.776302431s to wait for pod list to return data ...
	I0731 18:13:13.124141   73479 default_sa.go:34] waiting for default service account to be created ...
	I0731 18:13:13.127100   73479 default_sa.go:45] found service account: "default"
	I0731 18:13:13.127137   73479 default_sa.go:55] duration metric: took 2.989455ms for default service account to be created ...
	I0731 18:13:13.127148   73479 system_pods.go:116] waiting for k8s-apps to be running ...
	I0731 18:13:13.132359   73479 system_pods.go:86] 8 kube-system pods found
	I0731 18:13:13.132379   73479 system_pods.go:89] "coredns-5cfdc65f69-k7clq" [e4be77b6-aa7c-45e2-90a1-6a8264fd5101] Running
	I0731 18:13:13.132387   73479 system_pods.go:89] "etcd-no-preload-673754" [d64e9194-dd33-4a28-b8c3-d25462f1dae8] Running
	I0731 18:13:13.132393   73479 system_pods.go:89] "kube-apiserver-no-preload-673754" [4497614d-51de-4a5f-93dc-1397446fe4c8] Running
	I0731 18:13:13.132399   73479 system_pods.go:89] "kube-controller-manager-no-preload-673754" [fe7f714e-d778-49e7-b4ef-44e6efb5bc33] Running
	I0731 18:13:13.132405   73479 system_pods.go:89] "kube-proxy-hqxh6" [4fb623c0-4be3-421c-85d6-1d76a90b874f] Running
	I0731 18:13:13.132410   73479 system_pods.go:89] "kube-scheduler-no-preload-673754" [558e9b78-a6a9-46b8-9db8-004126359c74] Running
	I0731 18:13:13.132420   73479 system_pods.go:89] "metrics-server-78fcd8795b-27pkr" [a63a156b-0446-4bd9-8619-de75edaeb481] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0731 18:13:13.132427   73479 system_pods.go:89] "storage-provisioner" [a9f0ab70-18f4-4fac-b858-a9177077fe29] Running
	I0731 18:13:13.132435   73479 system_pods.go:126] duration metric: took 5.281138ms to wait for k8s-apps to be running ...
	I0731 18:13:13.132443   73479 system_svc.go:44] waiting for kubelet service to be running ....
	I0731 18:13:13.132488   73479 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0731 18:13:13.148254   73479 system_svc.go:56] duration metric: took 15.802724ms WaitForService to wait for kubelet
	I0731 18:13:13.148281   73479 kubeadm.go:582] duration metric: took 4m26.650509962s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0731 18:13:13.148315   73479 node_conditions.go:102] verifying NodePressure condition ...
	I0731 18:13:13.151986   73479 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0731 18:13:13.152006   73479 node_conditions.go:123] node cpu capacity is 2
	I0731 18:13:13.152018   73479 node_conditions.go:105] duration metric: took 3.693857ms to run NodePressure ...
	I0731 18:13:13.152031   73479 start.go:241] waiting for startup goroutines ...
	I0731 18:13:13.152043   73479 start.go:246] waiting for cluster config update ...
	I0731 18:13:13.152058   73479 start.go:255] writing updated cluster config ...
	I0731 18:13:13.152347   73479 ssh_runner.go:195] Run: rm -f paused
	I0731 18:13:13.202434   73479 start.go:600] kubectl: 1.30.3, cluster: 1.31.0-beta.0 (minor skew: 1)
	I0731 18:13:13.205205   73479 out.go:177] * Done! kubectl is now configured to use "no-preload-673754" cluster and "default" namespace by default
	I0731 18:13:13.655618   74203 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0731 18:13:13.655843   74203 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0731 18:13:33.657356   74203 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0731 18:13:33.657560   74203 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0731 18:14:13.660934   74203 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0731 18:14:13.661161   74203 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0731 18:14:13.661183   74203 kubeadm.go:310] 
	I0731 18:14:13.661216   74203 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0731 18:14:13.661251   74203 kubeadm.go:310] 		timed out waiting for the condition
	I0731 18:14:13.661279   74203 kubeadm.go:310] 
	I0731 18:14:13.661338   74203 kubeadm.go:310] 	This error is likely caused by:
	I0731 18:14:13.661378   74203 kubeadm.go:310] 		- The kubelet is not running
	I0731 18:14:13.661477   74203 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0731 18:14:13.661483   74203 kubeadm.go:310] 
	I0731 18:14:13.661577   74203 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0731 18:14:13.661617   74203 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0731 18:14:13.661646   74203 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0731 18:14:13.661651   74203 kubeadm.go:310] 
	I0731 18:14:13.661781   74203 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0731 18:14:13.661897   74203 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0731 18:14:13.661909   74203 kubeadm.go:310] 
	I0731 18:14:13.662044   74203 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0731 18:14:13.662164   74203 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0731 18:14:13.662265   74203 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0731 18:14:13.662444   74203 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0731 18:14:13.662477   74203 kubeadm.go:310] 
	I0731 18:14:13.663123   74203 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0731 18:14:13.663235   74203 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0731 18:14:13.663331   74203 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	W0731 18:14:13.663497   74203 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0731 18:14:13.663559   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0731 18:14:18.956376   74203 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (5.292787213s)
	I0731 18:14:18.956479   74203 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0731 18:14:18.970820   74203 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0731 18:14:18.980747   74203 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0731 18:14:18.980771   74203 kubeadm.go:157] found existing configuration files:
	
	I0731 18:14:18.980816   74203 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0731 18:14:18.989985   74203 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0731 18:14:18.990063   74203 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0731 18:14:18.999143   74203 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0731 18:14:19.008740   74203 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0731 18:14:19.008798   74203 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0731 18:14:19.018729   74203 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0731 18:14:19.028953   74203 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0731 18:14:19.029015   74203 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0731 18:14:19.039399   74203 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0731 18:14:19.049072   74203 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0731 18:14:19.049124   74203 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
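The ls/grep/rm sequence above is the stale-config check: each kubeconfig under /etc/kubernetes is kept only if it already references the expected control-plane endpoint, otherwise it is removed before kubeadm is re-run. A minimal manual equivalent on the node (shell access assumed, for example via 'minikube ssh'; paths and endpoint taken from the log lines above):

    # list any kubeconfig files left over from a previous cluster
    sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
    # drop a file only when it does not reference the expected endpoint
    sudo grep -q https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf || sudo rm -f /etc/kubernetes/admin.conf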
	I0731 18:14:19.059592   74203 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0731 18:14:19.121542   74203 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0731 18:14:19.121613   74203 kubeadm.go:310] [preflight] Running pre-flight checks
	I0731 18:14:19.271989   74203 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0731 18:14:19.272100   74203 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0731 18:14:19.272223   74203 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0731 18:14:19.440224   74203 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0731 18:14:19.441929   74203 out.go:204]   - Generating certificates and keys ...
	I0731 18:14:19.442025   74203 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0731 18:14:19.442104   74203 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0731 18:14:19.442196   74203 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0731 18:14:19.442245   74203 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0731 18:14:19.442326   74203 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0731 18:14:19.442395   74203 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0731 18:14:19.442498   74203 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0731 18:14:19.442610   74203 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0731 18:14:19.442687   74203 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0731 18:14:19.442770   74203 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0731 18:14:19.442813   74203 kubeadm.go:310] [certs] Using the existing "sa" key
	I0731 18:14:19.442887   74203 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0731 18:14:19.481696   74203 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0731 18:14:19.804252   74203 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0731 18:14:20.038734   74203 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0731 18:14:20.211133   74203 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0731 18:14:20.225726   74203 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0731 18:14:20.227920   74203 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0731 18:14:20.227977   74203 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0731 18:14:20.364068   74203 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0731 18:14:20.365991   74203 out.go:204]   - Booting up control plane ...
	I0731 18:14:20.366094   74203 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0731 18:14:20.366195   74203 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0731 18:14:20.366270   74203 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0731 18:14:20.366379   74203 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0731 18:14:20.367688   74203 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0731 18:15:00.365616   74203 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0731 18:15:00.366184   74203 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0731 18:15:00.366412   74203 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0731 18:15:05.366332   74203 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0731 18:15:05.366529   74203 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0731 18:15:15.366241   74203 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0731 18:15:15.366499   74203 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0731 18:15:35.366114   74203 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0731 18:15:35.366344   74203 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0731 18:16:15.365995   74203 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0731 18:16:15.366181   74203 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0731 18:16:15.366191   74203 kubeadm.go:310] 
	I0731 18:16:15.366224   74203 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0731 18:16:15.366448   74203 kubeadm.go:310] 		timed out waiting for the condition
	I0731 18:16:15.366472   74203 kubeadm.go:310] 
	I0731 18:16:15.366517   74203 kubeadm.go:310] 	This error is likely caused by:
	I0731 18:16:15.366568   74203 kubeadm.go:310] 		- The kubelet is not running
	I0731 18:16:15.366723   74203 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0731 18:16:15.366740   74203 kubeadm.go:310] 
	I0731 18:16:15.366863   74203 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0731 18:16:15.366924   74203 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0731 18:16:15.366986   74203 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0731 18:16:15.366999   74203 kubeadm.go:310] 
	I0731 18:16:15.367153   74203 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0731 18:16:15.367271   74203 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0731 18:16:15.367283   74203 kubeadm.go:310] 
	I0731 18:16:15.367418   74203 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0731 18:16:15.367504   74203 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0731 18:16:15.367609   74203 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0731 18:16:15.367725   74203 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0731 18:16:15.367734   74203 kubeadm.go:310] 
	I0731 18:16:15.369210   74203 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0731 18:16:15.369361   74203 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0731 18:16:15.369434   74203 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
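The repeated [kubelet-check] lines above are kubeadm polling the kubelet's local healthz endpoint and getting connection refused, i.e. the kubelet never came up within the 4m0s window. A quick manual probe on the node, using only commands already named in the output, would be:

    # the same health check kubeadm performs
    curl -sSL http://localhost:10248/healthz
    # inspect the kubelet service and its recent journal
    sudo systemctl status kubelet
    sudo journalctl -xeu kubelet | tail -n 50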
	I0731 18:16:15.369496   74203 kubeadm.go:394] duration metric: took 8m6.557607575s to StartCluster
	I0731 18:16:15.369537   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 18:16:15.369590   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 18:16:15.432899   74203 cri.go:89] found id: ""
	I0731 18:16:15.432929   74203 logs.go:276] 0 containers: []
	W0731 18:16:15.432941   74203 logs.go:278] No container was found matching "kube-apiserver"
	I0731 18:16:15.432947   74203 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 18:16:15.433005   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 18:16:15.470506   74203 cri.go:89] found id: ""
	I0731 18:16:15.470534   74203 logs.go:276] 0 containers: []
	W0731 18:16:15.470542   74203 logs.go:278] No container was found matching "etcd"
	I0731 18:16:15.470549   74203 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 18:16:15.470609   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 18:16:15.502032   74203 cri.go:89] found id: ""
	I0731 18:16:15.502055   74203 logs.go:276] 0 containers: []
	W0731 18:16:15.502062   74203 logs.go:278] No container was found matching "coredns"
	I0731 18:16:15.502067   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 18:16:15.502115   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 18:16:15.533897   74203 cri.go:89] found id: ""
	I0731 18:16:15.533918   74203 logs.go:276] 0 containers: []
	W0731 18:16:15.533925   74203 logs.go:278] No container was found matching "kube-scheduler"
	I0731 18:16:15.533930   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 18:16:15.533980   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 18:16:15.565275   74203 cri.go:89] found id: ""
	I0731 18:16:15.565311   74203 logs.go:276] 0 containers: []
	W0731 18:16:15.565326   74203 logs.go:278] No container was found matching "kube-proxy"
	I0731 18:16:15.565333   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 18:16:15.565395   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 18:16:15.601402   74203 cri.go:89] found id: ""
	I0731 18:16:15.601427   74203 logs.go:276] 0 containers: []
	W0731 18:16:15.601435   74203 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 18:16:15.601440   74203 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 18:16:15.601489   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 18:16:15.638778   74203 cri.go:89] found id: ""
	I0731 18:16:15.638801   74203 logs.go:276] 0 containers: []
	W0731 18:16:15.638808   74203 logs.go:278] No container was found matching "kindnet"
	I0731 18:16:15.638813   74203 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 18:16:15.638861   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 18:16:15.675697   74203 cri.go:89] found id: ""
	I0731 18:16:15.675720   74203 logs.go:276] 0 containers: []
	W0731 18:16:15.675728   74203 logs.go:278] No container was found matching "kubernetes-dashboard"
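Each "listing CRI containers" step above queries the CRI-O socket for one control-plane component, and every query comes back empty, confirming that no Kubernetes containers were ever created. The equivalent manual checks (commands as they appear in this log) are:

    # per-component query used above
    sudo crictl ps -a --quiet --name=kube-apiserver
    # broader listing suggested by kubeadm
    sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause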
	I0731 18:16:15.675736   74203 logs.go:123] Gathering logs for describe nodes ...
	I0731 18:16:15.675748   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 18:16:15.745287   74203 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 18:16:15.745325   74203 logs.go:123] Gathering logs for CRI-O ...
	I0731 18:16:15.745341   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 18:16:15.848503   74203 logs.go:123] Gathering logs for container status ...
	I0731 18:16:15.848536   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 18:16:15.887234   74203 logs.go:123] Gathering logs for kubelet ...
	I0731 18:16:15.887258   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 18:16:15.934871   74203 logs.go:123] Gathering logs for dmesg ...
	I0731 18:16:15.934901   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	W0731 18:16:15.947727   74203 out.go:364] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0731 18:16:15.947769   74203 out.go:239] * 
	W0731 18:16:15.947817   74203 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0731 18:16:15.947836   74203 out.go:239] * 
	W0731 18:16:15.948669   74203 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0731 18:16:15.952286   74203 out.go:177] 
	W0731 18:16:15.953375   74203 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0731 18:16:15.953424   74203 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0731 18:16:15.953442   74203 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0731 18:16:15.954734   74203 out.go:177] 
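The K8S_KUBELET_NOT_RUNNING exit and the suggestion above amount to collecting kubelet evidence and retrying with an explicit cgroup driver; the corresponding commands (flags taken from the messages above, with '-p <profile>' added when a named profile is in use) would be:

    # gather logs to attach to a bug report
    minikube logs --file=logs.txt
    # retry the start with the suggested kubelet cgroup driver
    minikube start --extra-config=kubelet.cgroup-driver=systemd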
	
	
	==> CRI-O <==
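What follows is a CRI-O journal excerpt included in the failure dump; each entry is a debug-level record of a CRI gRPC request or response (Version, ImageFsInfo, ListContainers) logged by crio's otel-collector interceptors. The excerpt corresponds to the same command used earlier in the log:

    sudo journalctl -u crio -n 400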
	Jul 31 18:21:41 default-k8s-diff-port-094310 crio[732]: time="2024-07-31 18:21:41.562471359Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722450101562442133,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133285,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=0d9c3900-a685-49e2-9076-bafd67f4389c name=/runtime.v1.ImageService/ImageFsInfo
	Jul 31 18:21:41 default-k8s-diff-port-094310 crio[732]: time="2024-07-31 18:21:41.562978783Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=e8b8129b-c718-4484-b587-3dfaddaaabc8 name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 18:21:41 default-k8s-diff-port-094310 crio[732]: time="2024-07-31 18:21:41.563037614Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=e8b8129b-c718-4484-b587-3dfaddaaabc8 name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 18:21:41 default-k8s-diff-port-094310 crio[732]: time="2024-07-31 18:21:41.563375702Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:8867449b4946a6b46cb5df1ec153fc2d52afab53d8d19dc9fc5d2d8685f5c162,PodSandboxId:74fa804456abf16fac2e4fa969eee65afe4b7e15d6f538d13d57f28771a4d365,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722449558463916333,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2e4c5a96-4dfc-4af6-8d3e-bab865644328,},Annotations:map[string]string{io.kubernetes.container.hash: 1864afb1,io.kubernetes.container.restartCount: 0,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e823d63ea68928a212649386ed1e1c88825a99b2e51a8fd260abd803ead2f898,PodSandboxId:48fd734a4f028ef2ae7597ea7dacd7fd78736f6d56e69f8e4d542e8d8e3bb1b4,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722449558198634989,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-2r7zb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e7acc926-db53-4c1c-a62f-45e303c69fc7,},Annotations:map[string]string{io.kubernetes.container.hash: 9fb1dd8f,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"proto
col\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ebc062f33e6f9112555983f1590d8754e32c9f6b68fb23b23ead8100dc9fecca,PodSandboxId:a0ea8bf7817a49db99a3ee9ba0faf78e0ccb811c9e36c0a1fb4c9cb41b64efe4,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722449558231808748,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-756jj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.
pod.uid: c91e86b1-8348-4dc7-9aa1-6909055c2dde,},Annotations:map[string]string{io.kubernetes.container.hash: 2b84c784,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2f8486d598e4f1a0155d2efcf42b5d5b6189672809432dddfbf300f911645053,PodSandboxId:75a61419abdf32959044b20bd7157430ccd33922eac014b248e8ebaa12533a4a,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING
,CreatedAt:1722449557536317934,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-4vvjq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a8dd9db6-5f45-4c8d-a9d3-03ede1db07eb,},Annotations:map[string]string{io.kubernetes.container.hash: 53ff8087,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f2cf9a7321c1d465b4ff676a304ab76c71686a8fa65ce79b916fb5388df6c085,PodSandboxId:cec8900941b1a088416eb020a103598cfbb3be666a95976ea8c4d2b45b764cfe,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:172244953
8066367091,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-094310,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2f9d9a2710369dbc0687b88cb4bd4404,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:157f17723a1b942bc7ad5fa1c4a9bb8b7ea85b1e1b216e3275a8bb5dc2cb7825,PodSandboxId:d3a6809c64dd5600a0019669b05eb9fff5535d97af24300d2f118746de5120dd,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:172
2449538059414984,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-094310,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0108a789bab39d2e181154113e900f6c,},Annotations:map[string]string{io.kubernetes.container.hash: 3de99cbf,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a91f54ad2f9d99bd43cf0a559b3727f82d3316da1a4f6b0bcae0d421af842b1c,PodSandboxId:e76d819d9c1a5fc216d7b8d8dbead55ebe011c4aaff616f53493d4c59d3e6267,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722449537998604800,Label
s:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-094310,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2b04f1a21453eb8a8c3d16cebd0427a1,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5575a74c69e5ebdff7ec9a9a2b7dcbc1703d1f590f0dd35611530a040896cbd6,PodSandboxId:91d9f73237ba174e1e107f148a125f315e4e482ea1fc9bd3aacd781e4957fa4b,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722449537950680856,Labels:
map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-094310,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 13952284e64c880c73a35e185bde83ea,},Annotations:map[string]string{io.kubernetes.container.hash: 4e096051,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=e8b8129b-c718-4484-b587-3dfaddaaabc8 name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 18:21:41 default-k8s-diff-port-094310 crio[732]: time="2024-07-31 18:21:41.600631388Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=864e2f9a-120b-4c6a-bf06-f7e27d3affbc name=/runtime.v1.RuntimeService/Version
	Jul 31 18:21:41 default-k8s-diff-port-094310 crio[732]: time="2024-07-31 18:21:41.600716247Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=864e2f9a-120b-4c6a-bf06-f7e27d3affbc name=/runtime.v1.RuntimeService/Version
	Jul 31 18:21:41 default-k8s-diff-port-094310 crio[732]: time="2024-07-31 18:21:41.601899096Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=363f64cf-2109-4854-9c5a-e0d4ffa7212a name=/runtime.v1.ImageService/ImageFsInfo
	Jul 31 18:21:41 default-k8s-diff-port-094310 crio[732]: time="2024-07-31 18:21:41.602562633Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722450101602532467,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133285,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=363f64cf-2109-4854-9c5a-e0d4ffa7212a name=/runtime.v1.ImageService/ImageFsInfo
	Jul 31 18:21:41 default-k8s-diff-port-094310 crio[732]: time="2024-07-31 18:21:41.603291861Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=0110e2c1-0806-4492-9a44-4ff7b9e64d21 name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 18:21:41 default-k8s-diff-port-094310 crio[732]: time="2024-07-31 18:21:41.603354068Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=0110e2c1-0806-4492-9a44-4ff7b9e64d21 name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 18:21:41 default-k8s-diff-port-094310 crio[732]: time="2024-07-31 18:21:41.603545034Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:8867449b4946a6b46cb5df1ec153fc2d52afab53d8d19dc9fc5d2d8685f5c162,PodSandboxId:74fa804456abf16fac2e4fa969eee65afe4b7e15d6f538d13d57f28771a4d365,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722449558463916333,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2e4c5a96-4dfc-4af6-8d3e-bab865644328,},Annotations:map[string]string{io.kubernetes.container.hash: 1864afb1,io.kubernetes.container.restartCount: 0,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e823d63ea68928a212649386ed1e1c88825a99b2e51a8fd260abd803ead2f898,PodSandboxId:48fd734a4f028ef2ae7597ea7dacd7fd78736f6d56e69f8e4d542e8d8e3bb1b4,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722449558198634989,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-2r7zb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e7acc926-db53-4c1c-a62f-45e303c69fc7,},Annotations:map[string]string{io.kubernetes.container.hash: 9fb1dd8f,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"proto
col\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ebc062f33e6f9112555983f1590d8754e32c9f6b68fb23b23ead8100dc9fecca,PodSandboxId:a0ea8bf7817a49db99a3ee9ba0faf78e0ccb811c9e36c0a1fb4c9cb41b64efe4,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722449558231808748,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-756jj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.
pod.uid: c91e86b1-8348-4dc7-9aa1-6909055c2dde,},Annotations:map[string]string{io.kubernetes.container.hash: 2b84c784,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2f8486d598e4f1a0155d2efcf42b5d5b6189672809432dddfbf300f911645053,PodSandboxId:75a61419abdf32959044b20bd7157430ccd33922eac014b248e8ebaa12533a4a,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING
,CreatedAt:1722449557536317934,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-4vvjq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a8dd9db6-5f45-4c8d-a9d3-03ede1db07eb,},Annotations:map[string]string{io.kubernetes.container.hash: 53ff8087,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f2cf9a7321c1d465b4ff676a304ab76c71686a8fa65ce79b916fb5388df6c085,PodSandboxId:cec8900941b1a088416eb020a103598cfbb3be666a95976ea8c4d2b45b764cfe,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:172244953
8066367091,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-094310,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2f9d9a2710369dbc0687b88cb4bd4404,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:157f17723a1b942bc7ad5fa1c4a9bb8b7ea85b1e1b216e3275a8bb5dc2cb7825,PodSandboxId:d3a6809c64dd5600a0019669b05eb9fff5535d97af24300d2f118746de5120dd,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:172
2449538059414984,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-094310,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0108a789bab39d2e181154113e900f6c,},Annotations:map[string]string{io.kubernetes.container.hash: 3de99cbf,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a91f54ad2f9d99bd43cf0a559b3727f82d3316da1a4f6b0bcae0d421af842b1c,PodSandboxId:e76d819d9c1a5fc216d7b8d8dbead55ebe011c4aaff616f53493d4c59d3e6267,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722449537998604800,Label
s:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-094310,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2b04f1a21453eb8a8c3d16cebd0427a1,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5575a74c69e5ebdff7ec9a9a2b7dcbc1703d1f590f0dd35611530a040896cbd6,PodSandboxId:91d9f73237ba174e1e107f148a125f315e4e482ea1fc9bd3aacd781e4957fa4b,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722449537950680856,Labels:
map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-094310,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 13952284e64c880c73a35e185bde83ea,},Annotations:map[string]string{io.kubernetes.container.hash: 4e096051,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=0110e2c1-0806-4492-9a44-4ff7b9e64d21 name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 18:21:41 default-k8s-diff-port-094310 crio[732]: time="2024-07-31 18:21:41.639716204Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=89de1523-8bb6-4b1b-a840-d7e6ead9988c name=/runtime.v1.RuntimeService/Version
	Jul 31 18:21:41 default-k8s-diff-port-094310 crio[732]: time="2024-07-31 18:21:41.639784831Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=89de1523-8bb6-4b1b-a840-d7e6ead9988c name=/runtime.v1.RuntimeService/Version
	Jul 31 18:21:41 default-k8s-diff-port-094310 crio[732]: time="2024-07-31 18:21:41.640742754Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=8f974305-6ed6-4ac9-b592-0277cf0a9677 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 31 18:21:41 default-k8s-diff-port-094310 crio[732]: time="2024-07-31 18:21:41.641409212Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722450101641383674,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133285,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=8f974305-6ed6-4ac9-b592-0277cf0a9677 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 31 18:21:41 default-k8s-diff-port-094310 crio[732]: time="2024-07-31 18:21:41.641868619Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=e5412b91-2c49-444d-9676-e314dbd3921c name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 18:21:41 default-k8s-diff-port-094310 crio[732]: time="2024-07-31 18:21:41.641931117Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=e5412b91-2c49-444d-9676-e314dbd3921c name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 18:21:41 default-k8s-diff-port-094310 crio[732]: time="2024-07-31 18:21:41.642295074Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:8867449b4946a6b46cb5df1ec153fc2d52afab53d8d19dc9fc5d2d8685f5c162,PodSandboxId:74fa804456abf16fac2e4fa969eee65afe4b7e15d6f538d13d57f28771a4d365,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722449558463916333,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2e4c5a96-4dfc-4af6-8d3e-bab865644328,},Annotations:map[string]string{io.kubernetes.container.hash: 1864afb1,io.kubernetes.container.restartCount: 0,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e823d63ea68928a212649386ed1e1c88825a99b2e51a8fd260abd803ead2f898,PodSandboxId:48fd734a4f028ef2ae7597ea7dacd7fd78736f6d56e69f8e4d542e8d8e3bb1b4,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722449558198634989,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-2r7zb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e7acc926-db53-4c1c-a62f-45e303c69fc7,},Annotations:map[string]string{io.kubernetes.container.hash: 9fb1dd8f,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"proto
col\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ebc062f33e6f9112555983f1590d8754e32c9f6b68fb23b23ead8100dc9fecca,PodSandboxId:a0ea8bf7817a49db99a3ee9ba0faf78e0ccb811c9e36c0a1fb4c9cb41b64efe4,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722449558231808748,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-756jj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.
pod.uid: c91e86b1-8348-4dc7-9aa1-6909055c2dde,},Annotations:map[string]string{io.kubernetes.container.hash: 2b84c784,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2f8486d598e4f1a0155d2efcf42b5d5b6189672809432dddfbf300f911645053,PodSandboxId:75a61419abdf32959044b20bd7157430ccd33922eac014b248e8ebaa12533a4a,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING
,CreatedAt:1722449557536317934,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-4vvjq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a8dd9db6-5f45-4c8d-a9d3-03ede1db07eb,},Annotations:map[string]string{io.kubernetes.container.hash: 53ff8087,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f2cf9a7321c1d465b4ff676a304ab76c71686a8fa65ce79b916fb5388df6c085,PodSandboxId:cec8900941b1a088416eb020a103598cfbb3be666a95976ea8c4d2b45b764cfe,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:172244953
8066367091,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-094310,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2f9d9a2710369dbc0687b88cb4bd4404,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:157f17723a1b942bc7ad5fa1c4a9bb8b7ea85b1e1b216e3275a8bb5dc2cb7825,PodSandboxId:d3a6809c64dd5600a0019669b05eb9fff5535d97af24300d2f118746de5120dd,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:172
2449538059414984,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-094310,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0108a789bab39d2e181154113e900f6c,},Annotations:map[string]string{io.kubernetes.container.hash: 3de99cbf,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a91f54ad2f9d99bd43cf0a559b3727f82d3316da1a4f6b0bcae0d421af842b1c,PodSandboxId:e76d819d9c1a5fc216d7b8d8dbead55ebe011c4aaff616f53493d4c59d3e6267,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722449537998604800,Label
s:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-094310,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2b04f1a21453eb8a8c3d16cebd0427a1,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5575a74c69e5ebdff7ec9a9a2b7dcbc1703d1f590f0dd35611530a040896cbd6,PodSandboxId:91d9f73237ba174e1e107f148a125f315e4e482ea1fc9bd3aacd781e4957fa4b,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722449537950680856,Labels:
map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-094310,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 13952284e64c880c73a35e185bde83ea,},Annotations:map[string]string{io.kubernetes.container.hash: 4e096051,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=e5412b91-2c49-444d-9676-e314dbd3921c name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 18:21:41 default-k8s-diff-port-094310 crio[732]: time="2024-07-31 18:21:41.673785757Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=db92ad04-f12c-4ee2-a100-9ca68c3f9fed name=/runtime.v1.RuntimeService/Version
	Jul 31 18:21:41 default-k8s-diff-port-094310 crio[732]: time="2024-07-31 18:21:41.673953924Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=db92ad04-f12c-4ee2-a100-9ca68c3f9fed name=/runtime.v1.RuntimeService/Version
	Jul 31 18:21:41 default-k8s-diff-port-094310 crio[732]: time="2024-07-31 18:21:41.674843708Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=b93be408-8f5d-4a94-b114-5193a9618c34 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 31 18:21:41 default-k8s-diff-port-094310 crio[732]: time="2024-07-31 18:21:41.675343087Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722450101675322926,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133285,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=b93be408-8f5d-4a94-b114-5193a9618c34 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 31 18:21:41 default-k8s-diff-port-094310 crio[732]: time="2024-07-31 18:21:41.675792457Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=745c7f76-207d-475a-935d-d7bb3e9b0bc6 name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 18:21:41 default-k8s-diff-port-094310 crio[732]: time="2024-07-31 18:21:41.675851261Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=745c7f76-207d-475a-935d-d7bb3e9b0bc6 name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 18:21:41 default-k8s-diff-port-094310 crio[732]: time="2024-07-31 18:21:41.676108138Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:8867449b4946a6b46cb5df1ec153fc2d52afab53d8d19dc9fc5d2d8685f5c162,PodSandboxId:74fa804456abf16fac2e4fa969eee65afe4b7e15d6f538d13d57f28771a4d365,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722449558463916333,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2e4c5a96-4dfc-4af6-8d3e-bab865644328,},Annotations:map[string]string{io.kubernetes.container.hash: 1864afb1,io.kubernetes.container.restartCount: 0,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e823d63ea68928a212649386ed1e1c88825a99b2e51a8fd260abd803ead2f898,PodSandboxId:48fd734a4f028ef2ae7597ea7dacd7fd78736f6d56e69f8e4d542e8d8e3bb1b4,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722449558198634989,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-2r7zb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e7acc926-db53-4c1c-a62f-45e303c69fc7,},Annotations:map[string]string{io.kubernetes.container.hash: 9fb1dd8f,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"proto
col\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ebc062f33e6f9112555983f1590d8754e32c9f6b68fb23b23ead8100dc9fecca,PodSandboxId:a0ea8bf7817a49db99a3ee9ba0faf78e0ccb811c9e36c0a1fb4c9cb41b64efe4,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722449558231808748,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-756jj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.
pod.uid: c91e86b1-8348-4dc7-9aa1-6909055c2dde,},Annotations:map[string]string{io.kubernetes.container.hash: 2b84c784,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2f8486d598e4f1a0155d2efcf42b5d5b6189672809432dddfbf300f911645053,PodSandboxId:75a61419abdf32959044b20bd7157430ccd33922eac014b248e8ebaa12533a4a,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING
,CreatedAt:1722449557536317934,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-4vvjq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a8dd9db6-5f45-4c8d-a9d3-03ede1db07eb,},Annotations:map[string]string{io.kubernetes.container.hash: 53ff8087,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f2cf9a7321c1d465b4ff676a304ab76c71686a8fa65ce79b916fb5388df6c085,PodSandboxId:cec8900941b1a088416eb020a103598cfbb3be666a95976ea8c4d2b45b764cfe,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:172244953
8066367091,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-094310,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2f9d9a2710369dbc0687b88cb4bd4404,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:157f17723a1b942bc7ad5fa1c4a9bb8b7ea85b1e1b216e3275a8bb5dc2cb7825,PodSandboxId:d3a6809c64dd5600a0019669b05eb9fff5535d97af24300d2f118746de5120dd,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:172
2449538059414984,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-094310,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0108a789bab39d2e181154113e900f6c,},Annotations:map[string]string{io.kubernetes.container.hash: 3de99cbf,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a91f54ad2f9d99bd43cf0a559b3727f82d3316da1a4f6b0bcae0d421af842b1c,PodSandboxId:e76d819d9c1a5fc216d7b8d8dbead55ebe011c4aaff616f53493d4c59d3e6267,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722449537998604800,Label
s:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-094310,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2b04f1a21453eb8a8c3d16cebd0427a1,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5575a74c69e5ebdff7ec9a9a2b7dcbc1703d1f590f0dd35611530a040896cbd6,PodSandboxId:91d9f73237ba174e1e107f148a125f315e4e482ea1fc9bd3aacd781e4957fa4b,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722449537950680856,Labels:
map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-094310,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 13952284e64c880c73a35e185bde83ea,},Annotations:map[string]string{io.kubernetes.container.hash: 4e096051,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=745c7f76-207d-475a-935d-d7bb3e9b0bc6 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	8867449b4946a       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   9 minutes ago       Running             storage-provisioner       0                   74fa804456abf       storage-provisioner
	ebc062f33e6f9       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   9 minutes ago       Running             coredns                   0                   a0ea8bf7817a4       coredns-7db6d8ff4d-756jj
	e823d63ea6892       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   9 minutes ago       Running             coredns                   0                   48fd734a4f028       coredns-7db6d8ff4d-2r7zb
	2f8486d598e4f       55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1   9 minutes ago       Running             kube-proxy                0                   75a61419abdf3       kube-proxy-4vvjq
	f2cf9a7321c1d       76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e   9 minutes ago       Running             kube-controller-manager   2                   cec8900941b1a       kube-controller-manager-default-k8s-diff-port-094310
	157f17723a1b9       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899   9 minutes ago       Running             etcd                      2                   d3a6809c64dd5       etcd-default-k8s-diff-port-094310
	a91f54ad2f9d9       3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2   9 minutes ago       Running             kube-scheduler            2                   e76d819d9c1a5       kube-scheduler-default-k8s-diff-port-094310
	5575a74c69e5e       1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d   9 minutes ago       Running             kube-apiserver            2                   91d9f73237ba1       kube-apiserver-default-k8s-diff-port-094310
	
	
	==> coredns [e823d63ea68928a212649386ed1e1c88825a99b2e51a8fd260abd803ead2f898] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	
	
	==> coredns [ebc062f33e6f9112555983f1590d8754e32c9f6b68fb23b23ead8100dc9fecca] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-094310
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=default-k8s-diff-port-094310
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=1d737dad7efa60c56d30434fcd857dd3b14c91d9
	                    minikube.k8s.io/name=default-k8s-diff-port-094310
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_07_31T18_12_23_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 31 Jul 2024 18:12:20 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-094310
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 31 Jul 2024 18:21:34 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 31 Jul 2024 18:17:50 +0000   Wed, 31 Jul 2024 18:12:18 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 31 Jul 2024 18:17:50 +0000   Wed, 31 Jul 2024 18:12:18 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 31 Jul 2024 18:17:50 +0000   Wed, 31 Jul 2024 18:12:18 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 31 Jul 2024 18:17:50 +0000   Wed, 31 Jul 2024 18:12:20 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.72.197
	  Hostname:    default-k8s-diff-port-094310
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 b931426af7404ddd8ff19612654a9015
	  System UUID:                b931426a-f740-4ddd-8ff1-9612654a9015
	  Boot ID:                    1cbee3ab-6252-4713-a476-4e77af6b70c8
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-7db6d8ff4d-2r7zb                                100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     9m4s
	  kube-system                 coredns-7db6d8ff4d-756jj                                100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     9m4s
	  kube-system                 etcd-default-k8s-diff-port-094310                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         9m18s
	  kube-system                 kube-apiserver-default-k8s-diff-port-094310             250m (12%)    0 (0%)      0 (0%)           0 (0%)         9m18s
	  kube-system                 kube-controller-manager-default-k8s-diff-port-094310    200m (10%)    0 (0%)      0 (0%)           0 (0%)         9m18s
	  kube-system                 kube-proxy-4vvjq                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m5s
	  kube-system                 kube-scheduler-default-k8s-diff-port-094310             100m (5%)     0 (0%)      0 (0%)           0 (0%)         9m18s
	  kube-system                 metrics-server-569cc877fc-mskwc                         100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         9m3s
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m4s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   0 (0%)
	  memory             440Mi (20%)  340Mi (16%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 9m3s   kube-proxy       
	  Normal  Starting                 9m18s  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  9m18s  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  9m18s  kubelet          Node default-k8s-diff-port-094310 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    9m18s  kubelet          Node default-k8s-diff-port-094310 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     9m18s  kubelet          Node default-k8s-diff-port-094310 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           9m6s   node-controller  Node default-k8s-diff-port-094310 event: Registered Node default-k8s-diff-port-094310 in Controller
	
	
	==> dmesg <==
	[  +0.037041] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.674092] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +1.767925] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.524839] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000005] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +6.779733] systemd-fstab-generator[649]: Ignoring "noauto" option for root device
	[  +0.063172] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.056380] systemd-fstab-generator[661]: Ignoring "noauto" option for root device
	[  +0.170434] systemd-fstab-generator[675]: Ignoring "noauto" option for root device
	[  +0.145133] systemd-fstab-generator[687]: Ignoring "noauto" option for root device
	[  +0.256289] systemd-fstab-generator[716]: Ignoring "noauto" option for root device
	[  +4.107426] systemd-fstab-generator[813]: Ignoring "noauto" option for root device
	[  +1.831623] systemd-fstab-generator[936]: Ignoring "noauto" option for root device
	[  +0.063601] kauditd_printk_skb: 158 callbacks suppressed
	[  +6.493060] kauditd_printk_skb: 69 callbacks suppressed
	[  +6.533131] kauditd_printk_skb: 79 callbacks suppressed
	[  +7.873186] kauditd_printk_skb: 2 callbacks suppressed
	[Jul31 18:12] kauditd_printk_skb: 5 callbacks suppressed
	[  +1.807598] systemd-fstab-generator[3562]: Ignoring "noauto" option for root device
	[  +4.688996] kauditd_printk_skb: 57 callbacks suppressed
	[  +1.368027] systemd-fstab-generator[3889]: Ignoring "noauto" option for root device
	[ +13.860194] systemd-fstab-generator[4081]: Ignoring "noauto" option for root device
	[  +0.130983] kauditd_printk_skb: 14 callbacks suppressed
	[Jul31 18:13] kauditd_printk_skb: 84 callbacks suppressed
	
	
	==> etcd [157f17723a1b942bc7ad5fa1c4a9bb8b7ea85b1e1b216e3275a8bb5dc2cb7825] <==
	{"level":"info","ts":"2024-07-31T18:12:18.517442Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"caefc81a6a0a2f54 switched to configuration voters=(14623126530869047124)"}
	{"level":"info","ts":"2024-07-31T18:12:18.519134Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-07-31T18:12:18.519816Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"caefc81a6a0a2f54","initial-advertise-peer-urls":["https://192.168.72.197:2380"],"listen-peer-urls":["https://192.168.72.197:2380"],"advertise-client-urls":["https://192.168.72.197:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.72.197:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-07-31T18:12:18.51985Z","caller":"embed/etcd.go:857","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-07-31T18:12:18.519216Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.72.197:2380"}
	{"level":"info","ts":"2024-07-31T18:12:18.519948Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.72.197:2380"}
	{"level":"info","ts":"2024-07-31T18:12:18.524311Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"260389a3b5060778","local-member-id":"caefc81a6a0a2f54","added-peer-id":"caefc81a6a0a2f54","added-peer-peer-urls":["https://192.168.72.197:2380"]}
	{"level":"info","ts":"2024-07-31T18:12:18.727206Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"caefc81a6a0a2f54 is starting a new election at term 1"}
	{"level":"info","ts":"2024-07-31T18:12:18.727319Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"caefc81a6a0a2f54 became pre-candidate at term 1"}
	{"level":"info","ts":"2024-07-31T18:12:18.727376Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"caefc81a6a0a2f54 received MsgPreVoteResp from caefc81a6a0a2f54 at term 1"}
	{"level":"info","ts":"2024-07-31T18:12:18.72741Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"caefc81a6a0a2f54 became candidate at term 2"}
	{"level":"info","ts":"2024-07-31T18:12:18.727434Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"caefc81a6a0a2f54 received MsgVoteResp from caefc81a6a0a2f54 at term 2"}
	{"level":"info","ts":"2024-07-31T18:12:18.727461Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"caefc81a6a0a2f54 became leader at term 2"}
	{"level":"info","ts":"2024-07-31T18:12:18.727486Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: caefc81a6a0a2f54 elected leader caefc81a6a0a2f54 at term 2"}
	{"level":"info","ts":"2024-07-31T18:12:18.731301Z","caller":"etcdserver/server.go:2578","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-31T18:12:18.73355Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"caefc81a6a0a2f54","local-member-attributes":"{Name:default-k8s-diff-port-094310 ClientURLs:[https://192.168.72.197:2379]}","request-path":"/0/members/caefc81a6a0a2f54/attributes","cluster-id":"260389a3b5060778","publish-timeout":"7s"}
	{"level":"info","ts":"2024-07-31T18:12:18.73372Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-07-31T18:12:18.740398Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.72.197:2379"}
	{"level":"info","ts":"2024-07-31T18:12:18.745473Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"260389a3b5060778","local-member-id":"caefc81a6a0a2f54","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-31T18:12:18.75029Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-31T18:12:18.750364Z","caller":"etcdserver/server.go:2602","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-31T18:12:18.745496Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-07-31T18:12:18.751258Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-07-31T18:12:18.754186Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-07-31T18:12:18.754963Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	
	
	==> kernel <==
	 18:21:42 up 14 min,  0 users,  load average: 0.01, 0.16, 0.14
	Linux default-k8s-diff-port-094310 5.10.207 #1 SMP Mon Jul 29 15:19:02 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [5575a74c69e5ebdff7ec9a9a2b7dcbc1703d1f590f0dd35611530a040896cbd6] <==
	I0731 18:15:39.024252       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0731 18:17:20.673851       1 handler_proxy.go:93] no RequestInfo found in the context
	E0731 18:17:20.673942       1 controller.go:146] Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	W0731 18:17:21.675201       1 handler_proxy.go:93] no RequestInfo found in the context
	W0731 18:17:21.675309       1 handler_proxy.go:93] no RequestInfo found in the context
	E0731 18:17:21.675366       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0731 18:17:21.675405       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	E0731 18:17:21.675371       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0731 18:17:21.676687       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0731 18:18:21.676446       1 handler_proxy.go:93] no RequestInfo found in the context
	E0731 18:18:21.676515       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0731 18:18:21.676524       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0731 18:18:21.677590       1 handler_proxy.go:93] no RequestInfo found in the context
	E0731 18:18:21.677707       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0731 18:18:21.677751       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0731 18:20:21.677204       1 handler_proxy.go:93] no RequestInfo found in the context
	E0731 18:20:21.677494       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0731 18:20:21.677532       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0731 18:20:21.678312       1 handler_proxy.go:93] no RequestInfo found in the context
	E0731 18:20:21.678380       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0731 18:20:21.679564       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	
	==> kube-controller-manager [f2cf9a7321c1d465b4ff676a304ab76c71686a8fa65ce79b916fb5388df6c085] <==
	I0731 18:16:06.600350       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0731 18:16:36.165607       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0731 18:16:36.608505       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0731 18:17:06.171106       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0731 18:17:06.616417       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0731 18:17:36.176891       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0731 18:17:36.624319       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0731 18:18:06.182966       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0731 18:18:06.632616       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0731 18:18:36.188385       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0731 18:18:36.641760       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0731 18:18:43.079424       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-569cc877fc" duration="331.914µs"
	I0731 18:18:57.080911       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-569cc877fc" duration="462.982µs"
	E0731 18:19:06.192955       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0731 18:19:06.649139       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0731 18:19:36.198480       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0731 18:19:36.657779       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0731 18:20:06.204054       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0731 18:20:06.666953       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0731 18:20:36.209265       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0731 18:20:36.676227       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0731 18:21:06.213940       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0731 18:21:06.685073       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0731 18:21:36.220263       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0731 18:21:36.694310       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	
	==> kube-proxy [2f8486d598e4f1a0155d2efcf42b5d5b6189672809432dddfbf300f911645053] <==
	I0731 18:12:37.898727       1 server_linux.go:69] "Using iptables proxy"
	I0731 18:12:37.923719       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.72.197"]
	I0731 18:12:38.498328       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0731 18:12:38.498701       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0731 18:12:38.501250       1 server_linux.go:165] "Using iptables Proxier"
	I0731 18:12:38.523587       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0731 18:12:38.523882       1 server.go:872] "Version info" version="v1.30.3"
	I0731 18:12:38.524119       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0731 18:12:38.525420       1 config.go:192] "Starting service config controller"
	I0731 18:12:38.525766       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0731 18:12:38.525899       1 config.go:101] "Starting endpoint slice config controller"
	I0731 18:12:38.527439       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0731 18:12:38.528261       1 config.go:319] "Starting node config controller"
	I0731 18:12:38.528382       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0731 18:12:38.627200       1 shared_informer.go:320] Caches are synced for service config
	I0731 18:12:38.629898       1 shared_informer.go:320] Caches are synced for node config
	I0731 18:12:38.630700       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [a91f54ad2f9d99bd43cf0a559b3727f82d3316da1a4f6b0bcae0d421af842b1c] <==
	W0731 18:12:20.705737       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0731 18:12:20.705782       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0731 18:12:20.705811       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0731 18:12:20.705842       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0731 18:12:20.706011       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0731 18:12:20.706061       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0731 18:12:20.706253       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0731 18:12:20.706303       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0731 18:12:21.524350       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0731 18:12:21.524479       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0731 18:12:21.605945       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0731 18:12:21.605999       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0731 18:12:21.657332       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0731 18:12:21.657427       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0731 18:12:21.673778       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0731 18:12:21.673831       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0731 18:12:21.752045       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0731 18:12:21.752213       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0731 18:12:21.770817       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0731 18:12:21.770911       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0731 18:12:21.796357       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0731 18:12:21.796473       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0731 18:12:21.927894       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0731 18:12:21.927941       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	I0731 18:12:23.990915       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Jul 31 18:19:23 default-k8s-diff-port-094310 kubelet[3896]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 31 18:19:23 default-k8s-diff-port-094310 kubelet[3896]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 31 18:19:23 default-k8s-diff-port-094310 kubelet[3896]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 31 18:19:23 default-k8s-diff-port-094310 kubelet[3896]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 31 18:19:33 default-k8s-diff-port-094310 kubelet[3896]: E0731 18:19:33.065430    3896 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-mskwc" podUID="c57990b4-1d91-4764-9c33-2fd5f7d2f83b"
	Jul 31 18:19:44 default-k8s-diff-port-094310 kubelet[3896]: E0731 18:19:44.064134    3896 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-mskwc" podUID="c57990b4-1d91-4764-9c33-2fd5f7d2f83b"
	Jul 31 18:19:55 default-k8s-diff-port-094310 kubelet[3896]: E0731 18:19:55.064265    3896 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-mskwc" podUID="c57990b4-1d91-4764-9c33-2fd5f7d2f83b"
	Jul 31 18:20:06 default-k8s-diff-port-094310 kubelet[3896]: E0731 18:20:06.063427    3896 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-mskwc" podUID="c57990b4-1d91-4764-9c33-2fd5f7d2f83b"
	Jul 31 18:20:20 default-k8s-diff-port-094310 kubelet[3896]: E0731 18:20:20.063919    3896 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-mskwc" podUID="c57990b4-1d91-4764-9c33-2fd5f7d2f83b"
	Jul 31 18:20:23 default-k8s-diff-port-094310 kubelet[3896]: E0731 18:20:23.087908    3896 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 31 18:20:23 default-k8s-diff-port-094310 kubelet[3896]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 31 18:20:23 default-k8s-diff-port-094310 kubelet[3896]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 31 18:20:23 default-k8s-diff-port-094310 kubelet[3896]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 31 18:20:23 default-k8s-diff-port-094310 kubelet[3896]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 31 18:20:31 default-k8s-diff-port-094310 kubelet[3896]: E0731 18:20:31.064460    3896 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-mskwc" podUID="c57990b4-1d91-4764-9c33-2fd5f7d2f83b"
	Jul 31 18:20:46 default-k8s-diff-port-094310 kubelet[3896]: E0731 18:20:46.063582    3896 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-mskwc" podUID="c57990b4-1d91-4764-9c33-2fd5f7d2f83b"
	Jul 31 18:20:57 default-k8s-diff-port-094310 kubelet[3896]: E0731 18:20:57.072312    3896 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-mskwc" podUID="c57990b4-1d91-4764-9c33-2fd5f7d2f83b"
	Jul 31 18:21:12 default-k8s-diff-port-094310 kubelet[3896]: E0731 18:21:12.063771    3896 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-mskwc" podUID="c57990b4-1d91-4764-9c33-2fd5f7d2f83b"
	Jul 31 18:21:23 default-k8s-diff-port-094310 kubelet[3896]: E0731 18:21:23.089324    3896 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 31 18:21:23 default-k8s-diff-port-094310 kubelet[3896]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 31 18:21:23 default-k8s-diff-port-094310 kubelet[3896]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 31 18:21:23 default-k8s-diff-port-094310 kubelet[3896]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 31 18:21:23 default-k8s-diff-port-094310 kubelet[3896]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 31 18:21:25 default-k8s-diff-port-094310 kubelet[3896]: E0731 18:21:25.063850    3896 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-mskwc" podUID="c57990b4-1d91-4764-9c33-2fd5f7d2f83b"
	Jul 31 18:21:37 default-k8s-diff-port-094310 kubelet[3896]: E0731 18:21:37.064859    3896 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-mskwc" podUID="c57990b4-1d91-4764-9c33-2fd5f7d2f83b"
	
	
	==> storage-provisioner [8867449b4946a6b46cb5df1ec153fc2d52afab53d8d19dc9fc5d2d8685f5c162] <==
	I0731 18:12:38.647075       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0731 18:12:38.667448       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0731 18:12:38.667515       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0731 18:12:38.688429       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0731 18:12:38.688714       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-094310_3fb7820d-59af-45e0-9f2c-6a4c796e2267!
	I0731 18:12:38.696716       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"44438c96-eb0c-4bdb-b67c-216ba6e640fa", APIVersion:"v1", ResourceVersion:"404", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-diff-port-094310_3fb7820d-59af-45e0-9f2c-6a4c796e2267 became leader
	I0731 18:12:38.790297       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-094310_3fb7820d-59af-45e0-9f2c-6a4c796e2267!
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-094310 -n default-k8s-diff-port-094310
helpers_test.go:261: (dbg) Run:  kubectl --context default-k8s-diff-port-094310 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-569cc877fc-mskwc
helpers_test.go:274: ======> post-mortem[TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context default-k8s-diff-port-094310 describe pod metrics-server-569cc877fc-mskwc
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-094310 describe pod metrics-server-569cc877fc-mskwc: exit status 1 (61.530763ms)

** stderr ** 
	Error from server (NotFound): pods "metrics-server-569cc877fc-mskwc" not found

** /stderr **
helpers_test.go:279: kubectl --context default-k8s-diff-port-094310 describe pod metrics-server-569cc877fc-mskwc: exit status 1
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (544.13s)
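The post-mortem sequence above (an apiserver status check, a listing of pods whose phase is not Running, then a describe of each one) can be reproduced outside the harness. The following is a minimal Go sketch of the same field-selector query using client-go; it is not the harness's own code, and the kubeconfig path is an illustrative assumption, with the file's current context assumed to point at the cluster under test.

	package main

	import (
		"context"
		"fmt"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		// Build a client from a local kubeconfig; the path is an illustrative assumption.
		config, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config")
		if err != nil {
			panic(err)
		}
		client, err := kubernetes.NewForConfig(config)
		if err != nil {
			panic(err)
		}
		// Same query as `kubectl get po -A --field-selector=status.phase!=Running`.
		pods, err := client.CoreV1().Pods(metav1.NamespaceAll).List(context.TODO(),
			metav1.ListOptions{FieldSelector: "status.phase!=Running"})
		if err != nil {
			panic(err)
		}
		for _, p := range pods.Items {
			fmt.Printf("%s/%s phase=%s\n", p.Namespace, p.Name, p.Status.Phase)
		}
	}

Run against the profile's kubeconfig, such a query would report the same non-running metrics-server pod that the helpers above list before it is garbage-collected.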

x
+
TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (544.09s)

=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:329: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
start_stop_delete_test.go:274: ***** TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:274: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-436067 -n embed-certs-436067
start_stop_delete_test.go:274: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: showing logs for failed pods as of 2024-07-31 18:21:57.838624473 +0000 UTC m=+6121.545362649
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
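The wait that fails here (up to 9m0s for pods matching "k8s-app=kubernetes-dashboard" in the kubernetes-dashboard namespace) amounts to polling the API server with a label selector until a matching pod reports Running. The sketch below illustrates that pattern with client-go; it is not the actual helpers_test.go implementation, and the kubeconfig path, poll interval, and error handling are illustrative assumptions.

	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/apimachinery/pkg/util/wait"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		// Build a client from a local kubeconfig; the path is an illustrative assumption
		// and the file's current context is assumed to point at the cluster under test.
		config, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config")
		if err != nil {
			panic(err)
		}
		client, err := kubernetes.NewForConfig(config)
		if err != nil {
			panic(err)
		}
		// Poll every 5s for up to 9 minutes, mirroring the 9m0s wait in the log above.
		err = wait.PollImmediate(5*time.Second, 9*time.Minute, func() (bool, error) {
			pods, err := client.CoreV1().Pods("kubernetes-dashboard").List(context.TODO(),
				metav1.ListOptions{LabelSelector: "k8s-app=kubernetes-dashboard"})
			if err != nil {
				return false, nil // tolerate transient API errors and keep polling
			}
			for _, p := range pods.Items {
				if p.Status.Phase == corev1.PodRunning {
					return true, nil
				}
			}
			return false, nil
		})
		if err != nil {
			fmt.Println("dashboard pod never became Running:", err)
			return
		}
		fmt.Println("dashboard pod is Running")
	}

When the deadline expires with no Running pod, the poll returns a timeout error, which corresponds to the "context deadline exceeded" reported above before the harness moves on to its post-mortem logging.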
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-436067 -n embed-certs-436067
helpers_test.go:244: <<< TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-436067 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p embed-certs-436067 logs -n 25: (1.959892835s)
helpers_test.go:252: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p enable-default-cni-985288                           | enable-default-cni-985288    | jenkins | v1.33.1 | 31 Jul 24 17:58 UTC | 31 Jul 24 17:58 UTC |
	|         | sudo cat                                               |                              |         |         |                     |                     |
	|         | /etc/containerd/config.toml                            |                              |         |         |                     |                     |
	| ssh     | -p enable-default-cni-985288                           | enable-default-cni-985288    | jenkins | v1.33.1 | 31 Jul 24 17:58 UTC | 31 Jul 24 17:58 UTC |
	|         | sudo containerd config dump                            |                              |         |         |                     |                     |
	| ssh     | -p enable-default-cni-985288                           | enable-default-cni-985288    | jenkins | v1.33.1 | 31 Jul 24 17:58 UTC | 31 Jul 24 17:58 UTC |
	|         | sudo systemctl status crio                             |                              |         |         |                     |                     |
	|         | --all --full --no-pager                                |                              |         |         |                     |                     |
	| ssh     | -p enable-default-cni-985288                           | enable-default-cni-985288    | jenkins | v1.33.1 | 31 Jul 24 17:58 UTC | 31 Jul 24 17:58 UTC |
	|         | sudo systemctl cat crio                                |                              |         |         |                     |                     |
	|         | --no-pager                                             |                              |         |         |                     |                     |
	| ssh     | -p enable-default-cni-985288                           | enable-default-cni-985288    | jenkins | v1.33.1 | 31 Jul 24 17:58 UTC | 31 Jul 24 17:58 UTC |
	|         | sudo find /etc/crio -type f                            |                              |         |         |                     |                     |
	|         | -exec sh -c 'echo {}; cat {}'                          |                              |         |         |                     |                     |
	|         | \;                                                     |                              |         |         |                     |                     |
	| ssh     | -p enable-default-cni-985288                           | enable-default-cni-985288    | jenkins | v1.33.1 | 31 Jul 24 17:58 UTC | 31 Jul 24 17:58 UTC |
	|         | sudo crio config                                       |                              |         |         |                     |                     |
	| delete  | -p enable-default-cni-985288                           | enable-default-cni-985288    | jenkins | v1.33.1 | 31 Jul 24 17:58 UTC | 31 Jul 24 17:58 UTC |
	| delete  | -p                                                     | disable-driver-mounts-280161 | jenkins | v1.33.1 | 31 Jul 24 17:58 UTC | 31 Jul 24 17:58 UTC |
	|         | disable-driver-mounts-280161                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-094310 | jenkins | v1.33.1 | 31 Jul 24 17:58 UTC | 31 Jul 24 18:00 UTC |
	|         | default-k8s-diff-port-094310                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-673754             | no-preload-673754            | jenkins | v1.33.1 | 31 Jul 24 17:59 UTC | 31 Jul 24 17:59 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-673754                                   | no-preload-673754            | jenkins | v1.33.1 | 31 Jul 24 17:59 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-094310  | default-k8s-diff-port-094310 | jenkins | v1.33.1 | 31 Jul 24 18:00 UTC | 31 Jul 24 18:00 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-094310 | jenkins | v1.33.1 | 31 Jul 24 18:00 UTC |                     |
	|         | default-k8s-diff-port-094310                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-436067            | embed-certs-436067           | jenkins | v1.33.1 | 31 Jul 24 18:00 UTC | 31 Jul 24 18:00 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-436067                                  | embed-certs-436067           | jenkins | v1.33.1 | 31 Jul 24 18:00 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-276459        | old-k8s-version-276459       | jenkins | v1.33.1 | 31 Jul 24 18:02 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-673754                  | no-preload-673754            | jenkins | v1.33.1 | 31 Jul 24 18:02 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-673754 --memory=2200                     | no-preload-673754            | jenkins | v1.33.1 | 31 Jul 24 18:02 UTC | 31 Jul 24 18:13 UTC |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0-beta.0                    |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-094310       | default-k8s-diff-port-094310 | jenkins | v1.33.1 | 31 Jul 24 18:02 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-436067                 | embed-certs-436067           | jenkins | v1.33.1 | 31 Jul 24 18:02 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-094310 | jenkins | v1.33.1 | 31 Jul 24 18:02 UTC | 31 Jul 24 18:12 UTC |
	|         | default-k8s-diff-port-094310                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3                           |                              |         |         |                     |                     |
	| start   | -p embed-certs-436067                                  | embed-certs-436067           | jenkins | v1.33.1 | 31 Jul 24 18:02 UTC | 31 Jul 24 18:12 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3                           |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-276459                              | old-k8s-version-276459       | jenkins | v1.33.1 | 31 Jul 24 18:03 UTC | 31 Jul 24 18:03 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-276459             | old-k8s-version-276459       | jenkins | v1.33.1 | 31 Jul 24 18:03 UTC | 31 Jul 24 18:03 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-276459                              | old-k8s-version-276459       | jenkins | v1.33.1 | 31 Jul 24 18:03 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/31 18:03:55
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0731 18:03:55.344211   74203 out.go:291] Setting OutFile to fd 1 ...
	I0731 18:03:55.344313   74203 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 18:03:55.344321   74203 out.go:304] Setting ErrFile to fd 2...
	I0731 18:03:55.344324   74203 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 18:03:55.344541   74203 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19349-8084/.minikube/bin
	I0731 18:03:55.345055   74203 out.go:298] Setting JSON to false
	I0731 18:03:55.345905   74203 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":6379,"bootTime":1722442656,"procs":196,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1065-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0731 18:03:55.345962   74203 start.go:139] virtualization: kvm guest
	I0731 18:03:55.347848   74203 out.go:177] * [old-k8s-version-276459] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0731 18:03:55.349045   74203 notify.go:220] Checking for updates...
	I0731 18:03:55.349052   74203 out.go:177]   - MINIKUBE_LOCATION=19349
	I0731 18:03:55.350359   74203 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0731 18:03:55.351583   74203 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19349-8084/kubeconfig
	I0731 18:03:55.352789   74203 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19349-8084/.minikube
	I0731 18:03:55.354046   74203 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0731 18:03:55.355244   74203 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0731 18:03:55.356819   74203 config.go:182] Loaded profile config "old-k8s-version-276459": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0731 18:03:55.357218   74203 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 18:03:55.357268   74203 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 18:03:55.372081   74203 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38815
	I0731 18:03:55.372424   74203 main.go:141] libmachine: () Calling .GetVersion
	I0731 18:03:55.372950   74203 main.go:141] libmachine: Using API Version  1
	I0731 18:03:55.372972   74203 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 18:03:55.373263   74203 main.go:141] libmachine: () Calling .GetMachineName
	I0731 18:03:55.373466   74203 main.go:141] libmachine: (old-k8s-version-276459) Calling .DriverName
	I0731 18:03:55.375198   74203 out.go:177] * Kubernetes 1.30.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.30.3
	I0731 18:03:55.376370   74203 driver.go:392] Setting default libvirt URI to qemu:///system
	I0731 18:03:55.376714   74203 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 18:03:55.376748   74203 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 18:03:55.390924   74203 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46311
	I0731 18:03:55.391380   74203 main.go:141] libmachine: () Calling .GetVersion
	I0731 18:03:55.391853   74203 main.go:141] libmachine: Using API Version  1
	I0731 18:03:55.391875   74203 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 18:03:55.392165   74203 main.go:141] libmachine: () Calling .GetMachineName
	I0731 18:03:55.392389   74203 main.go:141] libmachine: (old-k8s-version-276459) Calling .DriverName
	I0731 18:03:55.425283   74203 out.go:177] * Using the kvm2 driver based on existing profile
	I0731 18:03:55.426485   74203 start.go:297] selected driver: kvm2
	I0731 18:03:55.426517   74203 start.go:901] validating driver "kvm2" against &{Name:old-k8s-version-276459 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{K
ubernetesVersion:v1.20.0 ClusterName:old-k8s-version-276459 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.26 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280
h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0731 18:03:55.426632   74203 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0731 18:03:55.427322   74203 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0731 18:03:55.427419   74203 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19349-8084/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0731 18:03:55.441518   74203 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0731 18:03:55.441891   74203 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0731 18:03:55.441921   74203 cni.go:84] Creating CNI manager for ""
	I0731 18:03:55.441928   74203 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0731 18:03:55.441970   74203 start.go:340] cluster config:
	{Name:old-k8s-version-276459 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-276459 Namespace:default
APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.26 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p20
00.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0731 18:03:55.442088   74203 iso.go:125] acquiring lock: {Name:mk4494bb5a63902bf12a909c3109db85229bc516 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0731 18:03:55.443745   74203 out.go:177] * Starting "old-k8s-version-276459" primary control-plane node in "old-k8s-version-276459" cluster
	I0731 18:03:55.299338   73479 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.126:22: connect: no route to host
	I0731 18:03:55.445026   74203 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0731 18:03:55.445062   74203 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19349-8084/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0731 18:03:55.445085   74203 cache.go:56] Caching tarball of preloaded images
	I0731 18:03:55.445157   74203 preload.go:172] Found /home/jenkins/minikube-integration/19349-8084/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0731 18:03:55.445167   74203 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0731 18:03:55.445250   74203 profile.go:143] Saving config to /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/old-k8s-version-276459/config.json ...
	I0731 18:03:55.445412   74203 start.go:360] acquireMachinesLock for old-k8s-version-276459: {Name:mkd78eacfab7d6c3058c6674434b3d889ec957e0 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0731 18:03:58.371340   73479 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.126:22: connect: no route to host
	I0731 18:04:04.451379   73479 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.126:22: connect: no route to host
	I0731 18:04:07.523408   73479 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.126:22: connect: no route to host
	I0731 18:04:13.603407   73479 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.126:22: connect: no route to host
	I0731 18:04:16.675437   73479 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.126:22: connect: no route to host
	I0731 18:04:22.755418   73479 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.126:22: connect: no route to host
	I0731 18:04:25.827434   73479 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.126:22: connect: no route to host
	I0731 18:04:31.907379   73479 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.126:22: connect: no route to host
	I0731 18:04:34.979426   73479 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.126:22: connect: no route to host
	I0731 18:04:41.059417   73479 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.126:22: connect: no route to host
	I0731 18:04:44.131434   73479 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.126:22: connect: no route to host
	I0731 18:04:50.211391   73479 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.126:22: connect: no route to host
	I0731 18:04:53.283445   73479 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.126:22: connect: no route to host
	I0731 18:04:59.363428   73479 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.126:22: connect: no route to host
	I0731 18:05:02.435450   73479 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.126:22: connect: no route to host
	I0731 18:05:08.515394   73479 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.126:22: connect: no route to host
	I0731 18:05:11.587394   73479 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.126:22: connect: no route to host
	I0731 18:05:17.667388   73479 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.126:22: connect: no route to host
	I0731 18:05:20.739413   73479 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.126:22: connect: no route to host
	I0731 18:05:26.819368   73479 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.126:22: connect: no route to host
	I0731 18:05:29.891394   73479 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.126:22: connect: no route to host
	I0731 18:05:35.971391   73479 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.126:22: connect: no route to host
	I0731 18:05:39.043445   73479 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.126:22: connect: no route to host
	I0731 18:05:45.123378   73479 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.126:22: connect: no route to host
	I0731 18:05:48.195378   73479 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.126:22: connect: no route to host
	I0731 18:05:54.275417   73479 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.126:22: connect: no route to host
	I0731 18:05:57.347374   73479 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.126:22: connect: no route to host
	I0731 18:06:03.427390   73479 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.126:22: connect: no route to host
	I0731 18:06:06.499378   73479 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.126:22: connect: no route to host
	I0731 18:06:12.579395   73479 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.126:22: connect: no route to host
	I0731 18:06:15.651447   73479 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.126:22: connect: no route to host
	I0731 18:06:21.731394   73479 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.126:22: connect: no route to host
	I0731 18:06:24.803405   73479 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.126:22: connect: no route to host
	I0731 18:06:30.883468   73479 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.126:22: connect: no route to host
	I0731 18:06:33.955397   73479 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.126:22: connect: no route to host
	I0731 18:06:40.035387   73479 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.126:22: connect: no route to host
	I0731 18:06:43.107448   73479 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.126:22: connect: no route to host
	I0731 18:06:49.187413   73479 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.126:22: connect: no route to host
	I0731 18:06:52.259420   73479 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.126:22: connect: no route to host
	I0731 18:06:58.339413   73479 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.126:22: connect: no route to host
	I0731 18:07:01.411396   73479 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.126:22: connect: no route to host
	I0731 18:07:04.416121   73696 start.go:364] duration metric: took 4m18.256589549s to acquireMachinesLock for "default-k8s-diff-port-094310"
	I0731 18:07:04.416183   73696 start.go:96] Skipping create...Using existing machine configuration
	I0731 18:07:04.416192   73696 fix.go:54] fixHost starting: 
	I0731 18:07:04.416522   73696 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 18:07:04.416570   73696 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 18:07:04.432249   73696 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32817
	I0731 18:07:04.432715   73696 main.go:141] libmachine: () Calling .GetVersion
	I0731 18:07:04.433206   73696 main.go:141] libmachine: Using API Version  1
	I0731 18:07:04.433234   73696 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 18:07:04.433616   73696 main.go:141] libmachine: () Calling .GetMachineName
	I0731 18:07:04.433833   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) Calling .DriverName
	I0731 18:07:04.434001   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) Calling .GetState
	I0731 18:07:04.436061   73696 fix.go:112] recreateIfNeeded on default-k8s-diff-port-094310: state=Stopped err=<nil>
	I0731 18:07:04.436082   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) Calling .DriverName
	W0731 18:07:04.436241   73696 fix.go:138] unexpected machine state, will restart: <nil>
	I0731 18:07:04.438139   73696 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-094310" ...
	I0731 18:07:04.439463   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) Calling .Start
	I0731 18:07:04.439678   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) Ensuring networks are active...
	I0731 18:07:04.440645   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) Ensuring network default is active
	I0731 18:07:04.441067   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) Ensuring network mk-default-k8s-diff-port-094310 is active
	I0731 18:07:04.441473   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) Getting domain xml...
	I0731 18:07:04.442331   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) Creating domain...
	I0731 18:07:05.660745   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) Waiting to get IP...
	I0731 18:07:05.661963   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) DBG | domain default-k8s-diff-port-094310 has defined MAC address 52:54:00:a9:b2:ae in network mk-default-k8s-diff-port-094310
	I0731 18:07:05.662532   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) DBG | unable to find current IP address of domain default-k8s-diff-port-094310 in network mk-default-k8s-diff-port-094310
	I0731 18:07:05.662620   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) DBG | I0731 18:07:05.662524   74854 retry.go:31] will retry after 294.438382ms: waiting for machine to come up
	I0731 18:07:05.959200   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) DBG | domain default-k8s-diff-port-094310 has defined MAC address 52:54:00:a9:b2:ae in network mk-default-k8s-diff-port-094310
	I0731 18:07:05.959668   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) DBG | unable to find current IP address of domain default-k8s-diff-port-094310 in network mk-default-k8s-diff-port-094310
	I0731 18:07:05.959699   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) DBG | I0731 18:07:05.959619   74854 retry.go:31] will retry after 331.316387ms: waiting for machine to come up
	I0731 18:07:04.413166   73479 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0731 18:07:04.413216   73479 main.go:141] libmachine: (no-preload-673754) Calling .GetMachineName
	I0731 18:07:04.413580   73479 buildroot.go:166] provisioning hostname "no-preload-673754"
	I0731 18:07:04.413609   73479 main.go:141] libmachine: (no-preload-673754) Calling .GetMachineName
	I0731 18:07:04.413827   73479 main.go:141] libmachine: (no-preload-673754) Calling .GetSSHHostname
	I0731 18:07:04.415964   73479 machine.go:97] duration metric: took 4m37.431900974s to provisionDockerMachine
	I0731 18:07:04.416013   73479 fix.go:56] duration metric: took 4m37.452176305s for fixHost
	I0731 18:07:04.416023   73479 start.go:83] releasing machines lock for "no-preload-673754", held for 4m37.452227129s
	W0731 18:07:04.416048   73479 start.go:714] error starting host: provision: host is not running
	W0731 18:07:04.416143   73479 out.go:239] ! StartHost failed, but will try again: provision: host is not running
	I0731 18:07:04.416157   73479 start.go:729] Will try again in 5 seconds ...
	I0731 18:07:06.292146   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) DBG | domain default-k8s-diff-port-094310 has defined MAC address 52:54:00:a9:b2:ae in network mk-default-k8s-diff-port-094310
	I0731 18:07:06.292533   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) DBG | unable to find current IP address of domain default-k8s-diff-port-094310 in network mk-default-k8s-diff-port-094310
	I0731 18:07:06.292555   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) DBG | I0731 18:07:06.292487   74854 retry.go:31] will retry after 324.512889ms: waiting for machine to come up
	I0731 18:07:06.619045   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) DBG | domain default-k8s-diff-port-094310 has defined MAC address 52:54:00:a9:b2:ae in network mk-default-k8s-diff-port-094310
	I0731 18:07:06.619440   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) DBG | unable to find current IP address of domain default-k8s-diff-port-094310 in network mk-default-k8s-diff-port-094310
	I0731 18:07:06.619470   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) DBG | I0731 18:07:06.619404   74854 retry.go:31] will retry after 556.332506ms: waiting for machine to come up
	I0731 18:07:07.177224   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) DBG | domain default-k8s-diff-port-094310 has defined MAC address 52:54:00:a9:b2:ae in network mk-default-k8s-diff-port-094310
	I0731 18:07:07.177689   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) DBG | unable to find current IP address of domain default-k8s-diff-port-094310 in network mk-default-k8s-diff-port-094310
	I0731 18:07:07.177722   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) DBG | I0731 18:07:07.177631   74854 retry.go:31] will retry after 599.567638ms: waiting for machine to come up
	I0731 18:07:07.778444   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) DBG | domain default-k8s-diff-port-094310 has defined MAC address 52:54:00:a9:b2:ae in network mk-default-k8s-diff-port-094310
	I0731 18:07:07.778848   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) DBG | unable to find current IP address of domain default-k8s-diff-port-094310 in network mk-default-k8s-diff-port-094310
	I0731 18:07:07.778885   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) DBG | I0731 18:07:07.778820   74854 retry.go:31] will retry after 944.17246ms: waiting for machine to come up
	I0731 18:07:08.724983   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) DBG | domain default-k8s-diff-port-094310 has defined MAC address 52:54:00:a9:b2:ae in network mk-default-k8s-diff-port-094310
	I0731 18:07:08.725484   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) DBG | unable to find current IP address of domain default-k8s-diff-port-094310 in network mk-default-k8s-diff-port-094310
	I0731 18:07:08.725512   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) DBG | I0731 18:07:08.725433   74854 retry.go:31] will retry after 1.077726279s: waiting for machine to come up
	I0731 18:07:09.805196   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) DBG | domain default-k8s-diff-port-094310 has defined MAC address 52:54:00:a9:b2:ae in network mk-default-k8s-diff-port-094310
	I0731 18:07:09.805629   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) DBG | unable to find current IP address of domain default-k8s-diff-port-094310 in network mk-default-k8s-diff-port-094310
	I0731 18:07:09.805667   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) DBG | I0731 18:07:09.805575   74854 retry.go:31] will retry after 1.140059854s: waiting for machine to come up
	I0731 18:07:10.951633   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) DBG | domain default-k8s-diff-port-094310 has defined MAC address 52:54:00:a9:b2:ae in network mk-default-k8s-diff-port-094310
	I0731 18:07:10.952066   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) DBG | unable to find current IP address of domain default-k8s-diff-port-094310 in network mk-default-k8s-diff-port-094310
	I0731 18:07:10.952091   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) DBG | I0731 18:07:10.952028   74854 retry.go:31] will retry after 1.691707383s: waiting for machine to come up
	I0731 18:07:09.418606   73479 start.go:360] acquireMachinesLock for no-preload-673754: {Name:mkd78eacfab7d6c3058c6674434b3d889ec957e0 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0731 18:07:12.645970   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) DBG | domain default-k8s-diff-port-094310 has defined MAC address 52:54:00:a9:b2:ae in network mk-default-k8s-diff-port-094310
	I0731 18:07:12.646588   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) DBG | unable to find current IP address of domain default-k8s-diff-port-094310 in network mk-default-k8s-diff-port-094310
	I0731 18:07:12.646623   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) DBG | I0731 18:07:12.646525   74854 retry.go:31] will retry after 2.257630784s: waiting for machine to come up
	I0731 18:07:14.905494   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) DBG | domain default-k8s-diff-port-094310 has defined MAC address 52:54:00:a9:b2:ae in network mk-default-k8s-diff-port-094310
	I0731 18:07:14.905891   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) DBG | unable to find current IP address of domain default-k8s-diff-port-094310 in network mk-default-k8s-diff-port-094310
	I0731 18:07:14.905922   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) DBG | I0731 18:07:14.905833   74854 retry.go:31] will retry after 2.877713561s: waiting for machine to come up
	I0731 18:07:17.786797   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) DBG | domain default-k8s-diff-port-094310 has defined MAC address 52:54:00:a9:b2:ae in network mk-default-k8s-diff-port-094310
	I0731 18:07:17.787173   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) DBG | unable to find current IP address of domain default-k8s-diff-port-094310 in network mk-default-k8s-diff-port-094310
	I0731 18:07:17.787194   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) DBG | I0731 18:07:17.787140   74854 retry.go:31] will retry after 3.028611559s: waiting for machine to come up
	I0731 18:07:20.817593   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) DBG | domain default-k8s-diff-port-094310 has defined MAC address 52:54:00:a9:b2:ae in network mk-default-k8s-diff-port-094310
	I0731 18:07:20.817898   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) Found IP for machine: 192.168.72.197
	I0731 18:07:20.817921   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) Reserving static IP address...
	I0731 18:07:20.817934   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) DBG | domain default-k8s-diff-port-094310 has current primary IP address 192.168.72.197 and MAC address 52:54:00:a9:b2:ae in network mk-default-k8s-diff-port-094310
	I0731 18:07:20.818352   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-094310", mac: "52:54:00:a9:b2:ae", ip: "192.168.72.197"} in network mk-default-k8s-diff-port-094310: {Iface:virbr1 ExpiryTime:2024-07-31 19:07:14 +0000 UTC Type:0 Mac:52:54:00:a9:b2:ae Iaid: IPaddr:192.168.72.197 Prefix:24 Hostname:default-k8s-diff-port-094310 Clientid:01:52:54:00:a9:b2:ae}
	I0731 18:07:20.818379   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) Reserved static IP address: 192.168.72.197
	I0731 18:07:20.818400   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) DBG | skip adding static IP to network mk-default-k8s-diff-port-094310 - found existing host DHCP lease matching {name: "default-k8s-diff-port-094310", mac: "52:54:00:a9:b2:ae", ip: "192.168.72.197"}
	I0731 18:07:20.818414   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) Waiting for SSH to be available...
	I0731 18:07:20.818431   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) DBG | Getting to WaitForSSH function...
	I0731 18:07:20.820417   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) DBG | domain default-k8s-diff-port-094310 has defined MAC address 52:54:00:a9:b2:ae in network mk-default-k8s-diff-port-094310
	I0731 18:07:20.820731   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a9:b2:ae", ip: ""} in network mk-default-k8s-diff-port-094310: {Iface:virbr1 ExpiryTime:2024-07-31 19:07:14 +0000 UTC Type:0 Mac:52:54:00:a9:b2:ae Iaid: IPaddr:192.168.72.197 Prefix:24 Hostname:default-k8s-diff-port-094310 Clientid:01:52:54:00:a9:b2:ae}
	I0731 18:07:20.820758   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) DBG | domain default-k8s-diff-port-094310 has defined IP address 192.168.72.197 and MAC address 52:54:00:a9:b2:ae in network mk-default-k8s-diff-port-094310
	I0731 18:07:20.820893   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) DBG | Using SSH client type: external
	I0731 18:07:20.820916   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) DBG | Using SSH private key: /home/jenkins/minikube-integration/19349-8084/.minikube/machines/default-k8s-diff-port-094310/id_rsa (-rw-------)
	I0731 18:07:20.820940   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.197 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19349-8084/.minikube/machines/default-k8s-diff-port-094310/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0731 18:07:20.820950   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) DBG | About to run SSH command:
	I0731 18:07:20.820959   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) DBG | exit 0
	I0731 18:07:20.943348   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) DBG | SSH cmd err, output: <nil>: 
	I0731 18:07:20.943708   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) Calling .GetConfigRaw
	I0731 18:07:20.944373   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) Calling .GetIP
	I0731 18:07:20.947080   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) DBG | domain default-k8s-diff-port-094310 has defined MAC address 52:54:00:a9:b2:ae in network mk-default-k8s-diff-port-094310
	I0731 18:07:20.947465   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a9:b2:ae", ip: ""} in network mk-default-k8s-diff-port-094310: {Iface:virbr1 ExpiryTime:2024-07-31 19:07:14 +0000 UTC Type:0 Mac:52:54:00:a9:b2:ae Iaid: IPaddr:192.168.72.197 Prefix:24 Hostname:default-k8s-diff-port-094310 Clientid:01:52:54:00:a9:b2:ae}
	I0731 18:07:20.947499   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) DBG | domain default-k8s-diff-port-094310 has defined IP address 192.168.72.197 and MAC address 52:54:00:a9:b2:ae in network mk-default-k8s-diff-port-094310
	I0731 18:07:20.947731   73696 profile.go:143] Saving config to /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/default-k8s-diff-port-094310/config.json ...
	I0731 18:07:20.947909   73696 machine.go:94] provisionDockerMachine start ...
	I0731 18:07:20.947926   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) Calling .DriverName
	I0731 18:07:20.948124   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) Calling .GetSSHHostname
	I0731 18:07:20.950698   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) DBG | domain default-k8s-diff-port-094310 has defined MAC address 52:54:00:a9:b2:ae in network mk-default-k8s-diff-port-094310
	I0731 18:07:20.951056   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a9:b2:ae", ip: ""} in network mk-default-k8s-diff-port-094310: {Iface:virbr1 ExpiryTime:2024-07-31 19:07:14 +0000 UTC Type:0 Mac:52:54:00:a9:b2:ae Iaid: IPaddr:192.168.72.197 Prefix:24 Hostname:default-k8s-diff-port-094310 Clientid:01:52:54:00:a9:b2:ae}
	I0731 18:07:20.951083   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) DBG | domain default-k8s-diff-port-094310 has defined IP address 192.168.72.197 and MAC address 52:54:00:a9:b2:ae in network mk-default-k8s-diff-port-094310
	I0731 18:07:20.951228   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) Calling .GetSSHPort
	I0731 18:07:20.951443   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) Calling .GetSSHKeyPath
	I0731 18:07:20.951608   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) Calling .GetSSHKeyPath
	I0731 18:07:20.951780   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) Calling .GetSSHUsername
	I0731 18:07:20.952016   73696 main.go:141] libmachine: Using SSH client type: native
	I0731 18:07:20.952208   73696 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.72.197 22 <nil> <nil>}
	I0731 18:07:20.952220   73696 main.go:141] libmachine: About to run SSH command:
	hostname
	I0731 18:07:21.051082   73696 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0731 18:07:21.051137   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) Calling .GetMachineName
	I0731 18:07:21.051424   73696 buildroot.go:166] provisioning hostname "default-k8s-diff-port-094310"
	I0731 18:07:21.051454   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) Calling .GetMachineName
	I0731 18:07:21.051650   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) Calling .GetSSHHostname
	I0731 18:07:21.054527   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) DBG | domain default-k8s-diff-port-094310 has defined MAC address 52:54:00:a9:b2:ae in network mk-default-k8s-diff-port-094310
	I0731 18:07:21.054913   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a9:b2:ae", ip: ""} in network mk-default-k8s-diff-port-094310: {Iface:virbr1 ExpiryTime:2024-07-31 19:07:14 +0000 UTC Type:0 Mac:52:54:00:a9:b2:ae Iaid: IPaddr:192.168.72.197 Prefix:24 Hostname:default-k8s-diff-port-094310 Clientid:01:52:54:00:a9:b2:ae}
	I0731 18:07:21.054940   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) DBG | domain default-k8s-diff-port-094310 has defined IP address 192.168.72.197 and MAC address 52:54:00:a9:b2:ae in network mk-default-k8s-diff-port-094310
	I0731 18:07:21.055151   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) Calling .GetSSHPort
	I0731 18:07:21.055377   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) Calling .GetSSHKeyPath
	I0731 18:07:21.055516   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) Calling .GetSSHKeyPath
	I0731 18:07:21.055670   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) Calling .GetSSHUsername
	I0731 18:07:21.055838   73696 main.go:141] libmachine: Using SSH client type: native
	I0731 18:07:21.056037   73696 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.72.197 22 <nil> <nil>}
	I0731 18:07:21.056051   73696 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-094310 && echo "default-k8s-diff-port-094310" | sudo tee /etc/hostname
	I0731 18:07:22.127802   73800 start.go:364] duration metric: took 4m27.5245732s to acquireMachinesLock for "embed-certs-436067"
	I0731 18:07:22.127861   73800 start.go:96] Skipping create...Using existing machine configuration
	I0731 18:07:22.127871   73800 fix.go:54] fixHost starting: 
	I0731 18:07:22.128296   73800 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 18:07:22.128386   73800 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 18:07:22.144783   73800 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34313
	I0731 18:07:22.145111   73800 main.go:141] libmachine: () Calling .GetVersion
	I0731 18:07:22.145531   73800 main.go:141] libmachine: Using API Version  1
	I0731 18:07:22.145549   73800 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 18:07:22.145894   73800 main.go:141] libmachine: () Calling .GetMachineName
	I0731 18:07:22.146086   73800 main.go:141] libmachine: (embed-certs-436067) Calling .DriverName
	I0731 18:07:22.146226   73800 main.go:141] libmachine: (embed-certs-436067) Calling .GetState
	I0731 18:07:22.147718   73800 fix.go:112] recreateIfNeeded on embed-certs-436067: state=Stopped err=<nil>
	I0731 18:07:22.147737   73800 main.go:141] libmachine: (embed-certs-436067) Calling .DriverName
	W0731 18:07:22.147878   73800 fix.go:138] unexpected machine state, will restart: <nil>
	I0731 18:07:22.149896   73800 out.go:177] * Restarting existing kvm2 VM for "embed-certs-436067" ...
	I0731 18:07:21.168797   73696 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-094310
	
	I0731 18:07:21.168828   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) Calling .GetSSHHostname
	I0731 18:07:21.171672   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) DBG | domain default-k8s-diff-port-094310 has defined MAC address 52:54:00:a9:b2:ae in network mk-default-k8s-diff-port-094310
	I0731 18:07:21.172012   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a9:b2:ae", ip: ""} in network mk-default-k8s-diff-port-094310: {Iface:virbr1 ExpiryTime:2024-07-31 19:07:14 +0000 UTC Type:0 Mac:52:54:00:a9:b2:ae Iaid: IPaddr:192.168.72.197 Prefix:24 Hostname:default-k8s-diff-port-094310 Clientid:01:52:54:00:a9:b2:ae}
	I0731 18:07:21.172043   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) DBG | domain default-k8s-diff-port-094310 has defined IP address 192.168.72.197 and MAC address 52:54:00:a9:b2:ae in network mk-default-k8s-diff-port-094310
	I0731 18:07:21.172183   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) Calling .GetSSHPort
	I0731 18:07:21.172351   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) Calling .GetSSHKeyPath
	I0731 18:07:21.172510   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) Calling .GetSSHKeyPath
	I0731 18:07:21.172633   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) Calling .GetSSHUsername
	I0731 18:07:21.172800   73696 main.go:141] libmachine: Using SSH client type: native
	I0731 18:07:21.172976   73696 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.72.197 22 <nil> <nil>}
	I0731 18:07:21.173010   73696 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-094310' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-094310/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-094310' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0731 18:07:21.284583   73696 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0731 18:07:21.284610   73696 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19349-8084/.minikube CaCertPath:/home/jenkins/minikube-integration/19349-8084/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19349-8084/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19349-8084/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19349-8084/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19349-8084/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19349-8084/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19349-8084/.minikube}
	I0731 18:07:21.284633   73696 buildroot.go:174] setting up certificates
	I0731 18:07:21.284645   73696 provision.go:84] configureAuth start
	I0731 18:07:21.284656   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) Calling .GetMachineName
	I0731 18:07:21.284931   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) Calling .GetIP
	I0731 18:07:21.287526   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) DBG | domain default-k8s-diff-port-094310 has defined MAC address 52:54:00:a9:b2:ae in network mk-default-k8s-diff-port-094310
	I0731 18:07:21.287945   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a9:b2:ae", ip: ""} in network mk-default-k8s-diff-port-094310: {Iface:virbr1 ExpiryTime:2024-07-31 19:07:14 +0000 UTC Type:0 Mac:52:54:00:a9:b2:ae Iaid: IPaddr:192.168.72.197 Prefix:24 Hostname:default-k8s-diff-port-094310 Clientid:01:52:54:00:a9:b2:ae}
	I0731 18:07:21.287973   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) DBG | domain default-k8s-diff-port-094310 has defined IP address 192.168.72.197 and MAC address 52:54:00:a9:b2:ae in network mk-default-k8s-diff-port-094310
	I0731 18:07:21.288161   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) Calling .GetSSHHostname
	I0731 18:07:21.290169   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) DBG | domain default-k8s-diff-port-094310 has defined MAC address 52:54:00:a9:b2:ae in network mk-default-k8s-diff-port-094310
	I0731 18:07:21.290469   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a9:b2:ae", ip: ""} in network mk-default-k8s-diff-port-094310: {Iface:virbr1 ExpiryTime:2024-07-31 19:07:14 +0000 UTC Type:0 Mac:52:54:00:a9:b2:ae Iaid: IPaddr:192.168.72.197 Prefix:24 Hostname:default-k8s-diff-port-094310 Clientid:01:52:54:00:a9:b2:ae}
	I0731 18:07:21.290495   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) DBG | domain default-k8s-diff-port-094310 has defined IP address 192.168.72.197 and MAC address 52:54:00:a9:b2:ae in network mk-default-k8s-diff-port-094310
	I0731 18:07:21.290602   73696 provision.go:143] copyHostCerts
	I0731 18:07:21.290661   73696 exec_runner.go:144] found /home/jenkins/minikube-integration/19349-8084/.minikube/ca.pem, removing ...
	I0731 18:07:21.290673   73696 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19349-8084/.minikube/ca.pem
	I0731 18:07:21.290757   73696 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19349-8084/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19349-8084/.minikube/ca.pem (1082 bytes)
	I0731 18:07:21.290844   73696 exec_runner.go:144] found /home/jenkins/minikube-integration/19349-8084/.minikube/cert.pem, removing ...
	I0731 18:07:21.290856   73696 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19349-8084/.minikube/cert.pem
	I0731 18:07:21.290881   73696 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19349-8084/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19349-8084/.minikube/cert.pem (1123 bytes)
	I0731 18:07:21.290933   73696 exec_runner.go:144] found /home/jenkins/minikube-integration/19349-8084/.minikube/key.pem, removing ...
	I0731 18:07:21.290939   73696 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19349-8084/.minikube/key.pem
	I0731 18:07:21.290959   73696 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19349-8084/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19349-8084/.minikube/key.pem (1675 bytes)
	I0731 18:07:21.291005   73696 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19349-8084/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19349-8084/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19349-8084/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-094310 san=[127.0.0.1 192.168.72.197 default-k8s-diff-port-094310 localhost minikube]
	I0731 18:07:21.483241   73696 provision.go:177] copyRemoteCerts
	I0731 18:07:21.483314   73696 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0731 18:07:21.483343   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) Calling .GetSSHHostname
	I0731 18:07:21.486231   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) DBG | domain default-k8s-diff-port-094310 has defined MAC address 52:54:00:a9:b2:ae in network mk-default-k8s-diff-port-094310
	I0731 18:07:21.486619   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a9:b2:ae", ip: ""} in network mk-default-k8s-diff-port-094310: {Iface:virbr1 ExpiryTime:2024-07-31 19:07:14 +0000 UTC Type:0 Mac:52:54:00:a9:b2:ae Iaid: IPaddr:192.168.72.197 Prefix:24 Hostname:default-k8s-diff-port-094310 Clientid:01:52:54:00:a9:b2:ae}
	I0731 18:07:21.486659   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) DBG | domain default-k8s-diff-port-094310 has defined IP address 192.168.72.197 and MAC address 52:54:00:a9:b2:ae in network mk-default-k8s-diff-port-094310
	I0731 18:07:21.486850   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) Calling .GetSSHPort
	I0731 18:07:21.487084   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) Calling .GetSSHKeyPath
	I0731 18:07:21.487285   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) Calling .GetSSHUsername
	I0731 18:07:21.487443   73696 sshutil.go:53] new ssh client: &{IP:192.168.72.197 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19349-8084/.minikube/machines/default-k8s-diff-port-094310/id_rsa Username:docker}
	I0731 18:07:21.568564   73696 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19349-8084/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0731 18:07:21.598766   73696 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19349-8084/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I0731 18:07:21.621602   73696 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19349-8084/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0731 18:07:21.643361   73696 provision.go:87] duration metric: took 358.702982ms to configureAuth
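
The configureAuth step above regenerates the host certs and a server certificate signed by the minikube CA, with the SANs listed in the provision.go:117 line, before copying ca.pem, server.pem and server-key.pem into /etc/docker. A minimal sketch of that certificate generation in Go, assuming a PKCS#1 RSA CA key and illustrative file names; this is not minikube's actual provisioning code:

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

// check panics on error; fine for a throwaway sketch.
func check(err error) {
	if err != nil {
		panic(err)
	}
}

func main() {
	// Load the CA pair (ca.pem / ca-key.pem in the log); PKCS#1 RSA is assumed here.
	caPEM, err := os.ReadFile("ca.pem")
	check(err)
	caKeyPEM, err := os.ReadFile("ca-key.pem")
	check(err)
	caBlock, _ := pem.Decode(caPEM)
	caCert, err := x509.ParseCertificate(caBlock.Bytes)
	check(err)
	keyBlock, _ := pem.Decode(caKeyPEM)
	caKey, err := x509.ParsePKCS1PrivateKey(keyBlock.Bytes)
	check(err)

	// Server key plus a template carrying the SANs from the provision.go:117 line.
	serverKey, err := rsa.GenerateKey(rand.Reader, 2048)
	check(err)
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.default-k8s-diff-port-094310"}},
		DNSNames:     []string{"default-k8s-diff-port-094310", "localhost", "minikube"},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.72.197")},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(26280 * time.Hour), // matches the profile's CertExpiration
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
	}

	der, err := x509.CreateCertificate(rand.Reader, tmpl, caCert, &serverKey.PublicKey, caKey)
	check(err)
	check(os.WriteFile("server.pem", pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der}), 0644))
	keyDER := x509.MarshalPKCS1PrivateKey(serverKey)
	check(os.WriteFile("server-key.pem", pem.EncodeToMemory(&pem.Block{Type: "RSA PRIVATE KEY", Bytes: keyDER}), 0600))
}
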
	I0731 18:07:21.643393   73696 buildroot.go:189] setting minikube options for container-runtime
	I0731 18:07:21.643598   73696 config.go:182] Loaded profile config "default-k8s-diff-port-094310": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0731 18:07:21.643699   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) Calling .GetSSHHostname
	I0731 18:07:21.646487   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) DBG | domain default-k8s-diff-port-094310 has defined MAC address 52:54:00:a9:b2:ae in network mk-default-k8s-diff-port-094310
	I0731 18:07:21.646921   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a9:b2:ae", ip: ""} in network mk-default-k8s-diff-port-094310: {Iface:virbr1 ExpiryTime:2024-07-31 19:07:14 +0000 UTC Type:0 Mac:52:54:00:a9:b2:ae Iaid: IPaddr:192.168.72.197 Prefix:24 Hostname:default-k8s-diff-port-094310 Clientid:01:52:54:00:a9:b2:ae}
	I0731 18:07:21.646967   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) DBG | domain default-k8s-diff-port-094310 has defined IP address 192.168.72.197 and MAC address 52:54:00:a9:b2:ae in network mk-default-k8s-diff-port-094310
	I0731 18:07:21.647126   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) Calling .GetSSHPort
	I0731 18:07:21.647331   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) Calling .GetSSHKeyPath
	I0731 18:07:21.647527   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) Calling .GetSSHKeyPath
	I0731 18:07:21.647675   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) Calling .GetSSHUsername
	I0731 18:07:21.647879   73696 main.go:141] libmachine: Using SSH client type: native
	I0731 18:07:21.648051   73696 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.72.197 22 <nil> <nil>}
	I0731 18:07:21.648066   73696 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0731 18:07:21.896109   73696 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0731 18:07:21.896138   73696 machine.go:97] duration metric: took 948.216479ms to provisionDockerMachine
	I0731 18:07:21.896152   73696 start.go:293] postStartSetup for "default-k8s-diff-port-094310" (driver="kvm2")
	I0731 18:07:21.896166   73696 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0731 18:07:21.896185   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) Calling .DriverName
	I0731 18:07:21.896500   73696 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0731 18:07:21.896533   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) Calling .GetSSHHostname
	I0731 18:07:21.899447   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) DBG | domain default-k8s-diff-port-094310 has defined MAC address 52:54:00:a9:b2:ae in network mk-default-k8s-diff-port-094310
	I0731 18:07:21.899784   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a9:b2:ae", ip: ""} in network mk-default-k8s-diff-port-094310: {Iface:virbr1 ExpiryTime:2024-07-31 19:07:14 +0000 UTC Type:0 Mac:52:54:00:a9:b2:ae Iaid: IPaddr:192.168.72.197 Prefix:24 Hostname:default-k8s-diff-port-094310 Clientid:01:52:54:00:a9:b2:ae}
	I0731 18:07:21.899817   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) DBG | domain default-k8s-diff-port-094310 has defined IP address 192.168.72.197 and MAC address 52:54:00:a9:b2:ae in network mk-default-k8s-diff-port-094310
	I0731 18:07:21.899936   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) Calling .GetSSHPort
	I0731 18:07:21.900136   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) Calling .GetSSHKeyPath
	I0731 18:07:21.900268   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) Calling .GetSSHUsername
	I0731 18:07:21.900415   73696 sshutil.go:53] new ssh client: &{IP:192.168.72.197 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19349-8084/.minikube/machines/default-k8s-diff-port-094310/id_rsa Username:docker}
	I0731 18:07:21.981347   73696 ssh_runner.go:195] Run: cat /etc/os-release
	I0731 18:07:21.985297   73696 info.go:137] Remote host: Buildroot 2023.02.9
	I0731 18:07:21.985324   73696 filesync.go:126] Scanning /home/jenkins/minikube-integration/19349-8084/.minikube/addons for local assets ...
	I0731 18:07:21.985397   73696 filesync.go:126] Scanning /home/jenkins/minikube-integration/19349-8084/.minikube/files for local assets ...
	I0731 18:07:21.985513   73696 filesync.go:149] local asset: /home/jenkins/minikube-integration/19349-8084/.minikube/files/etc/ssl/certs/152592.pem -> 152592.pem in /etc/ssl/certs
	I0731 18:07:21.985646   73696 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0731 18:07:21.994700   73696 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19349-8084/.minikube/files/etc/ssl/certs/152592.pem --> /etc/ssl/certs/152592.pem (1708 bytes)
	I0731 18:07:22.022005   73696 start.go:296] duration metric: took 125.838186ms for postStartSetup
	I0731 18:07:22.022052   73696 fix.go:56] duration metric: took 17.605858897s for fixHost
	I0731 18:07:22.022075   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) Calling .GetSSHHostname
	I0731 18:07:22.025151   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) DBG | domain default-k8s-diff-port-094310 has defined MAC address 52:54:00:a9:b2:ae in network mk-default-k8s-diff-port-094310
	I0731 18:07:22.025445   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a9:b2:ae", ip: ""} in network mk-default-k8s-diff-port-094310: {Iface:virbr1 ExpiryTime:2024-07-31 19:07:14 +0000 UTC Type:0 Mac:52:54:00:a9:b2:ae Iaid: IPaddr:192.168.72.197 Prefix:24 Hostname:default-k8s-diff-port-094310 Clientid:01:52:54:00:a9:b2:ae}
	I0731 18:07:22.025478   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) DBG | domain default-k8s-diff-port-094310 has defined IP address 192.168.72.197 and MAC address 52:54:00:a9:b2:ae in network mk-default-k8s-diff-port-094310
	I0731 18:07:22.025622   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) Calling .GetSSHPort
	I0731 18:07:22.025829   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) Calling .GetSSHKeyPath
	I0731 18:07:22.026023   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) Calling .GetSSHKeyPath
	I0731 18:07:22.026199   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) Calling .GetSSHUsername
	I0731 18:07:22.026390   73696 main.go:141] libmachine: Using SSH client type: native
	I0731 18:07:22.026632   73696 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.72.197 22 <nil> <nil>}
	I0731 18:07:22.026653   73696 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0731 18:07:22.127643   73696 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722449242.103036947
	
	I0731 18:07:22.127668   73696 fix.go:216] guest clock: 1722449242.103036947
	I0731 18:07:22.127675   73696 fix.go:229] Guest: 2024-07-31 18:07:22.103036947 +0000 UTC Remote: 2024-07-31 18:07:22.022056299 +0000 UTC m=+275.995802468 (delta=80.980648ms)
	I0731 18:07:22.127698   73696 fix.go:200] guest clock delta is within tolerance: 80.980648ms
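
The fix.go lines above read the guest clock with date +%s.%N, compare it against the host clock, and accept the machine when the drift stays within a tolerance. A minimal sketch of that comparison, reusing the timestamps from this log and an assumed 2-second tolerance:

package main

import (
	"fmt"
	"strconv"
	"strings"
	"time"
)

func main() {
	guestOut := "1722449242.103036947" // output of `date +%s.%N` on the guest
	parts := strings.SplitN(strings.TrimSpace(guestOut), ".", 2)
	sec, _ := strconv.ParseInt(parts[0], 10, 64)
	nsec, _ := strconv.ParseInt(parts[1], 10, 64)
	guest := time.Unix(sec, nsec)

	// Host-side timestamp taken from the "Remote:" field in the log line above.
	host := time.Date(2024, 7, 31, 18, 7, 22, 22056299, time.UTC)
	delta := guest.Sub(host)
	if delta < 0 {
		delta = -delta
	}
	tolerance := 2 * time.Second // assumed tolerance for this sketch
	fmt.Printf("guest clock delta %v (within tolerance: %v)\n", delta, delta <= tolerance)
}
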
	I0731 18:07:22.127704   73696 start.go:83] releasing machines lock for "default-k8s-diff-port-094310", held for 17.711543911s
	I0731 18:07:22.127735   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) Calling .DriverName
	I0731 18:07:22.128006   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) Calling .GetIP
	I0731 18:07:22.130905   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) DBG | domain default-k8s-diff-port-094310 has defined MAC address 52:54:00:a9:b2:ae in network mk-default-k8s-diff-port-094310
	I0731 18:07:22.131291   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a9:b2:ae", ip: ""} in network mk-default-k8s-diff-port-094310: {Iface:virbr1 ExpiryTime:2024-07-31 19:07:14 +0000 UTC Type:0 Mac:52:54:00:a9:b2:ae Iaid: IPaddr:192.168.72.197 Prefix:24 Hostname:default-k8s-diff-port-094310 Clientid:01:52:54:00:a9:b2:ae}
	I0731 18:07:22.131322   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) DBG | domain default-k8s-diff-port-094310 has defined IP address 192.168.72.197 and MAC address 52:54:00:a9:b2:ae in network mk-default-k8s-diff-port-094310
	I0731 18:07:22.131568   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) Calling .DriverName
	I0731 18:07:22.132072   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) Calling .DriverName
	I0731 18:07:22.132244   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) Calling .DriverName
	I0731 18:07:22.132334   73696 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0731 18:07:22.132373   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) Calling .GetSSHHostname
	I0731 18:07:22.132488   73696 ssh_runner.go:195] Run: cat /version.json
	I0731 18:07:22.132511   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) Calling .GetSSHHostname
	I0731 18:07:22.134976   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) DBG | domain default-k8s-diff-port-094310 has defined MAC address 52:54:00:a9:b2:ae in network mk-default-k8s-diff-port-094310
	I0731 18:07:22.135269   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) DBG | domain default-k8s-diff-port-094310 has defined MAC address 52:54:00:a9:b2:ae in network mk-default-k8s-diff-port-094310
	I0731 18:07:22.135350   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a9:b2:ae", ip: ""} in network mk-default-k8s-diff-port-094310: {Iface:virbr1 ExpiryTime:2024-07-31 19:07:14 +0000 UTC Type:0 Mac:52:54:00:a9:b2:ae Iaid: IPaddr:192.168.72.197 Prefix:24 Hostname:default-k8s-diff-port-094310 Clientid:01:52:54:00:a9:b2:ae}
	I0731 18:07:22.135386   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) DBG | domain default-k8s-diff-port-094310 has defined IP address 192.168.72.197 and MAC address 52:54:00:a9:b2:ae in network mk-default-k8s-diff-port-094310
	I0731 18:07:22.135583   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) Calling .GetSSHPort
	I0731 18:07:22.135702   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a9:b2:ae", ip: ""} in network mk-default-k8s-diff-port-094310: {Iface:virbr1 ExpiryTime:2024-07-31 19:07:14 +0000 UTC Type:0 Mac:52:54:00:a9:b2:ae Iaid: IPaddr:192.168.72.197 Prefix:24 Hostname:default-k8s-diff-port-094310 Clientid:01:52:54:00:a9:b2:ae}
	I0731 18:07:22.135735   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) DBG | domain default-k8s-diff-port-094310 has defined IP address 192.168.72.197 and MAC address 52:54:00:a9:b2:ae in network mk-default-k8s-diff-port-094310
	I0731 18:07:22.135751   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) Calling .GetSSHKeyPath
	I0731 18:07:22.135837   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) Calling .GetSSHPort
	I0731 18:07:22.135945   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) Calling .GetSSHUsername
	I0731 18:07:22.135966   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) Calling .GetSSHKeyPath
	I0731 18:07:22.136068   73696 sshutil.go:53] new ssh client: &{IP:192.168.72.197 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19349-8084/.minikube/machines/default-k8s-diff-port-094310/id_rsa Username:docker}
	I0731 18:07:22.136101   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) Calling .GetSSHUsername
	I0731 18:07:22.136246   73696 sshutil.go:53] new ssh client: &{IP:192.168.72.197 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19349-8084/.minikube/machines/default-k8s-diff-port-094310/id_rsa Username:docker}
	I0731 18:07:22.245752   73696 ssh_runner.go:195] Run: systemctl --version
	I0731 18:07:22.251574   73696 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0731 18:07:22.391398   73696 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0731 18:07:22.396765   73696 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0731 18:07:22.396842   73696 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0731 18:07:22.412102   73696 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0731 18:07:22.412119   73696 start.go:495] detecting cgroup driver to use...
	I0731 18:07:22.412170   73696 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0731 18:07:22.427198   73696 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0731 18:07:22.441511   73696 docker.go:217] disabling cri-docker service (if available) ...
	I0731 18:07:22.441589   73696 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0731 18:07:22.455498   73696 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0731 18:07:22.469702   73696 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0731 18:07:22.584218   73696 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0731 18:07:22.719105   73696 docker.go:233] disabling docker service ...
	I0731 18:07:22.719195   73696 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0731 18:07:22.733625   73696 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0731 18:07:22.746500   73696 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0731 18:07:22.893624   73696 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0731 18:07:23.012965   73696 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0731 18:07:23.027132   73696 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0731 18:07:23.044766   73696 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0731 18:07:23.044832   73696 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 18:07:23.054276   73696 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0731 18:07:23.054363   73696 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 18:07:23.063873   73696 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 18:07:23.073392   73696 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 18:07:23.082908   73696 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0731 18:07:23.093468   73696 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 18:07:23.103419   73696 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 18:07:23.119920   73696 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
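
The sed invocations above rewrite /etc/crio/crio.conf.d/02-crio.conf so CRI-O uses the registry.k8s.io/pause:3.9 pause image and the cgroupfs cgroup manager. A minimal in-process sketch of the same two rewrites; minikube itself shells out to sed as logged, so this is only an illustration:

package main

import (
	"os"
	"regexp"
)

func main() {
	const conf = "/etc/crio/crio.conf.d/02-crio.conf"
	data, err := os.ReadFile(conf)
	if err != nil {
		panic(err)
	}
	// Point pause_image at the image named in crio.go:59 above.
	out := regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAll(data, []byte(`pause_image = "registry.k8s.io/pause:3.9"`))
	// Switch the cgroup manager to cgroupfs, as in crio.go:70 above.
	out = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAll(out, []byte(`cgroup_manager = "cgroupfs"`))
	if err := os.WriteFile(conf, out, 0644); err != nil {
		panic(err)
	}
}
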
	I0731 18:07:23.130427   73696 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0731 18:07:23.139397   73696 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0731 18:07:23.139465   73696 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0731 18:07:23.152275   73696 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
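
When the bridge netfilter sysctl cannot be read (the status-255 failure above), the fallback is to load br_netfilter and then enable IPv4 forwarding. A minimal sketch of that fallback, with simplified error handling and root privileges assumed:

package main

import (
	"log"
	"os"
	"os/exec"
)

func main() {
	// Probe the sysctl first; failure usually means the module isn't loaded yet.
	if err := exec.Command("sudo", "sysctl", "net.bridge.bridge-nf-call-iptables").Run(); err != nil {
		log.Printf("couldn't verify netfilter (%v), loading br_netfilter", err)
		if err := exec.Command("sudo", "modprobe", "br_netfilter").Run(); err != nil {
			log.Fatalf("modprobe br_netfilter: %v", err)
		}
	}
	// Enable IP forwarding, as the log does with `echo 1 > /proc/sys/net/ipv4/ip_forward`
	// (writing /proc directly requires root).
	if err := os.WriteFile("/proc/sys/net/ipv4/ip_forward", []byte("1\n"), 0644); err != nil {
		log.Fatalf("enable ip_forward: %v", err)
	}
}
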
	I0731 18:07:23.162439   73696 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0731 18:07:23.280030   73696 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0731 18:07:23.412019   73696 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0731 18:07:23.412083   73696 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0731 18:07:23.416884   73696 start.go:563] Will wait 60s for crictl version
	I0731 18:07:23.416930   73696 ssh_runner.go:195] Run: which crictl
	I0731 18:07:23.420518   73696 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0731 18:07:23.458895   73696 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0731 18:07:23.458976   73696 ssh_runner.go:195] Run: crio --version
	I0731 18:07:23.486961   73696 ssh_runner.go:195] Run: crio --version
	I0731 18:07:23.519648   73696 out.go:177] * Preparing Kubernetes v1.30.3 on CRI-O 1.29.1 ...
	I0731 18:07:22.151159   73800 main.go:141] libmachine: (embed-certs-436067) Calling .Start
	I0731 18:07:22.151319   73800 main.go:141] libmachine: (embed-certs-436067) Ensuring networks are active...
	I0731 18:07:22.151951   73800 main.go:141] libmachine: (embed-certs-436067) Ensuring network default is active
	I0731 18:07:22.152245   73800 main.go:141] libmachine: (embed-certs-436067) Ensuring network mk-embed-certs-436067 is active
	I0731 18:07:22.152747   73800 main.go:141] libmachine: (embed-certs-436067) Getting domain xml...
	I0731 18:07:22.153446   73800 main.go:141] libmachine: (embed-certs-436067) Creating domain...
	I0731 18:07:23.410530   73800 main.go:141] libmachine: (embed-certs-436067) Waiting to get IP...
	I0731 18:07:23.411687   73800 main.go:141] libmachine: (embed-certs-436067) DBG | domain embed-certs-436067 has defined MAC address 52:54:00:87:1e:25 in network mk-embed-certs-436067
	I0731 18:07:23.412152   73800 main.go:141] libmachine: (embed-certs-436067) DBG | unable to find current IP address of domain embed-certs-436067 in network mk-embed-certs-436067
	I0731 18:07:23.412231   73800 main.go:141] libmachine: (embed-certs-436067) DBG | I0731 18:07:23.412133   74994 retry.go:31] will retry after 233.281104ms: waiting for machine to come up
	I0731 18:07:23.646659   73800 main.go:141] libmachine: (embed-certs-436067) DBG | domain embed-certs-436067 has defined MAC address 52:54:00:87:1e:25 in network mk-embed-certs-436067
	I0731 18:07:23.647147   73800 main.go:141] libmachine: (embed-certs-436067) DBG | unable to find current IP address of domain embed-certs-436067 in network mk-embed-certs-436067
	I0731 18:07:23.647174   73800 main.go:141] libmachine: (embed-certs-436067) DBG | I0731 18:07:23.647069   74994 retry.go:31] will retry after 307.068766ms: waiting for machine to come up
	I0731 18:07:23.955614   73800 main.go:141] libmachine: (embed-certs-436067) DBG | domain embed-certs-436067 has defined MAC address 52:54:00:87:1e:25 in network mk-embed-certs-436067
	I0731 18:07:23.956140   73800 main.go:141] libmachine: (embed-certs-436067) DBG | unable to find current IP address of domain embed-certs-436067 in network mk-embed-certs-436067
	I0731 18:07:23.956166   73800 main.go:141] libmachine: (embed-certs-436067) DBG | I0731 18:07:23.956094   74994 retry.go:31] will retry after 410.095032ms: waiting for machine to come up
	I0731 18:07:24.367793   73800 main.go:141] libmachine: (embed-certs-436067) DBG | domain embed-certs-436067 has defined MAC address 52:54:00:87:1e:25 in network mk-embed-certs-436067
	I0731 18:07:24.368231   73800 main.go:141] libmachine: (embed-certs-436067) DBG | unable to find current IP address of domain embed-certs-436067 in network mk-embed-certs-436067
	I0731 18:07:24.368264   73800 main.go:141] libmachine: (embed-certs-436067) DBG | I0731 18:07:24.368188   74994 retry.go:31] will retry after 366.242055ms: waiting for machine to come up
	I0731 18:07:23.520927   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) Calling .GetIP
	I0731 18:07:23.524167   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) DBG | domain default-k8s-diff-port-094310 has defined MAC address 52:54:00:a9:b2:ae in network mk-default-k8s-diff-port-094310
	I0731 18:07:23.524615   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a9:b2:ae", ip: ""} in network mk-default-k8s-diff-port-094310: {Iface:virbr1 ExpiryTime:2024-07-31 19:07:14 +0000 UTC Type:0 Mac:52:54:00:a9:b2:ae Iaid: IPaddr:192.168.72.197 Prefix:24 Hostname:default-k8s-diff-port-094310 Clientid:01:52:54:00:a9:b2:ae}
	I0731 18:07:23.524663   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) DBG | domain default-k8s-diff-port-094310 has defined IP address 192.168.72.197 and MAC address 52:54:00:a9:b2:ae in network mk-default-k8s-diff-port-094310
	I0731 18:07:23.524913   73696 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0731 18:07:23.528924   73696 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0731 18:07:23.540496   73696 kubeadm.go:883] updating cluster {Name:default-k8s-diff-port-094310 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kubernete
sVersion:v1.30.3 ClusterName:default-k8s-diff-port-094310 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.197 Port:8444 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpirat
ion:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0731 18:07:23.540633   73696 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0731 18:07:23.540681   73696 ssh_runner.go:195] Run: sudo crictl images --output json
	I0731 18:07:23.579224   73696 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.3". assuming images are not preloaded.
	I0731 18:07:23.579295   73696 ssh_runner.go:195] Run: which lz4
	I0731 18:07:23.583060   73696 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0731 18:07:23.586888   73696 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0731 18:07:23.586922   73696 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19349-8084/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (406200976 bytes)
	I0731 18:07:24.864241   73696 crio.go:462] duration metric: took 1.281254602s to copy over tarball
	I0731 18:07:24.864321   73696 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0731 18:07:24.735741   73800 main.go:141] libmachine: (embed-certs-436067) DBG | domain embed-certs-436067 has defined MAC address 52:54:00:87:1e:25 in network mk-embed-certs-436067
	I0731 18:07:24.736325   73800 main.go:141] libmachine: (embed-certs-436067) DBG | unable to find current IP address of domain embed-certs-436067 in network mk-embed-certs-436067
	I0731 18:07:24.736356   73800 main.go:141] libmachine: (embed-certs-436067) DBG | I0731 18:07:24.736275   74994 retry.go:31] will retry after 593.179812ms: waiting for machine to come up
	I0731 18:07:25.331004   73800 main.go:141] libmachine: (embed-certs-436067) DBG | domain embed-certs-436067 has defined MAC address 52:54:00:87:1e:25 in network mk-embed-certs-436067
	I0731 18:07:25.331406   73800 main.go:141] libmachine: (embed-certs-436067) DBG | unable to find current IP address of domain embed-certs-436067 in network mk-embed-certs-436067
	I0731 18:07:25.331470   73800 main.go:141] libmachine: (embed-certs-436067) DBG | I0731 18:07:25.331381   74994 retry.go:31] will retry after 778.352855ms: waiting for machine to come up
	I0731 18:07:26.111327   73800 main.go:141] libmachine: (embed-certs-436067) DBG | domain embed-certs-436067 has defined MAC address 52:54:00:87:1e:25 in network mk-embed-certs-436067
	I0731 18:07:26.111828   73800 main.go:141] libmachine: (embed-certs-436067) DBG | unable to find current IP address of domain embed-certs-436067 in network mk-embed-certs-436067
	I0731 18:07:26.111855   73800 main.go:141] libmachine: (embed-certs-436067) DBG | I0731 18:07:26.111757   74994 retry.go:31] will retry after 993.157171ms: waiting for machine to come up
	I0731 18:07:27.106111   73800 main.go:141] libmachine: (embed-certs-436067) DBG | domain embed-certs-436067 has defined MAC address 52:54:00:87:1e:25 in network mk-embed-certs-436067
	I0731 18:07:27.106543   73800 main.go:141] libmachine: (embed-certs-436067) DBG | unable to find current IP address of domain embed-certs-436067 in network mk-embed-certs-436067
	I0731 18:07:27.106574   73800 main.go:141] libmachine: (embed-certs-436067) DBG | I0731 18:07:27.106507   74994 retry.go:31] will retry after 963.581879ms: waiting for machine to come up
	I0731 18:07:28.072100   73800 main.go:141] libmachine: (embed-certs-436067) DBG | domain embed-certs-436067 has defined MAC address 52:54:00:87:1e:25 in network mk-embed-certs-436067
	I0731 18:07:28.072628   73800 main.go:141] libmachine: (embed-certs-436067) DBG | unable to find current IP address of domain embed-certs-436067 in network mk-embed-certs-436067
	I0731 18:07:28.072657   73800 main.go:141] libmachine: (embed-certs-436067) DBG | I0731 18:07:28.072560   74994 retry.go:31] will retry after 1.608497907s: waiting for machine to come up
	I0731 18:07:27.052512   73696 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.188157854s)
	I0731 18:07:27.052542   73696 crio.go:469] duration metric: took 2.188269884s to extract the tarball
	I0731 18:07:27.052557   73696 ssh_runner.go:146] rm: /preloaded.tar.lz4
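
The preload handling above checks whether /preloaded.tar.lz4 already exists on the guest, copies the cached tarball over when it does not, unpacks it into /var with lz4, and removes it. A rough sketch of that sequence; runOnGuest is a hypothetical helper standing in for minikube's ssh_runner, and the scp step is omitted:

package main

import (
	"fmt"
	"os/exec"
)

// runOnGuest is a hypothetical stand-in for running a command on the VM over SSH,
// using the guest address and user seen in the sshutil lines above.
func runOnGuest(args ...string) error {
	cmd := exec.Command("ssh", append([]string{"docker@192.168.72.197"}, args...)...)
	return cmd.Run()
}

func main() {
	const tarball = "/preloaded.tar.lz4"
	// Existence check, mirroring `stat -c "%s %y" /preloaded.tar.lz4` in the log.
	if err := runOnGuest("stat", "-c", "%s %y", tarball); err != nil {
		fmt.Println("preload not on guest; copy the cached tarball over (scp omitted in this sketch)")
	}
	// Unpack the preloaded images into /var, then clean up.
	if err := runOnGuest("sudo", "tar", "--xattrs", "--xattrs-include", "security.capability",
		"-I", "lz4", "-C", "/var", "-xf", tarball); err != nil {
		panic(err)
	}
	_ = runOnGuest("sudo", "rm", "-f", tarball)
}
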
	I0731 18:07:27.089250   73696 ssh_runner.go:195] Run: sudo crictl images --output json
	I0731 18:07:27.130507   73696 crio.go:514] all images are preloaded for cri-o runtime.
	I0731 18:07:27.130536   73696 cache_images.go:84] Images are preloaded, skipping loading
	I0731 18:07:27.130546   73696 kubeadm.go:934] updating node { 192.168.72.197 8444 v1.30.3 crio true true} ...
	I0731 18:07:27.130666   73696 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=default-k8s-diff-port-094310 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.197
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:default-k8s-diff-port-094310 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0731 18:07:27.130751   73696 ssh_runner.go:195] Run: crio config
	I0731 18:07:27.176571   73696 cni.go:84] Creating CNI manager for ""
	I0731 18:07:27.176598   73696 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0731 18:07:27.176614   73696 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0731 18:07:27.176640   73696 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.197 APIServerPort:8444 KubernetesVersion:v1.30.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-094310 NodeName:default-k8s-diff-port-094310 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.197"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.197 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/cer
ts/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0731 18:07:27.176821   73696 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.197
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-094310"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.197
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.197"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0731 18:07:27.176904   73696 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0731 18:07:27.186582   73696 binaries.go:44] Found k8s binaries, skipping transfer
	I0731 18:07:27.186647   73696 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0731 18:07:27.195571   73696 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I0731 18:07:27.211103   73696 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0731 18:07:27.226226   73696 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2172 bytes)
	I0731 18:07:27.241763   73696 ssh_runner.go:195] Run: grep 192.168.72.197	control-plane.minikube.internal$ /etc/hosts
	I0731 18:07:27.245286   73696 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.197	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0731 18:07:27.256317   73696 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0731 18:07:27.377904   73696 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0731 18:07:27.394151   73696 certs.go:68] Setting up /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/default-k8s-diff-port-094310 for IP: 192.168.72.197
	I0731 18:07:27.394181   73696 certs.go:194] generating shared ca certs ...
	I0731 18:07:27.394201   73696 certs.go:226] acquiring lock for ca certs: {Name:mk49eed439085f9c30746706460ce213570d1997 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 18:07:27.394382   73696 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19349-8084/.minikube/ca.key
	I0731 18:07:27.394451   73696 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19349-8084/.minikube/proxy-client-ca.key
	I0731 18:07:27.394465   73696 certs.go:256] generating profile certs ...
	I0731 18:07:27.394577   73696 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/default-k8s-diff-port-094310/client.key
	I0731 18:07:27.394656   73696 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/default-k8s-diff-port-094310/apiserver.key.5264b27d
	I0731 18:07:27.394703   73696 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/default-k8s-diff-port-094310/proxy-client.key
	I0731 18:07:27.394851   73696 certs.go:484] found cert: /home/jenkins/minikube-integration/19349-8084/.minikube/certs/15259.pem (1338 bytes)
	W0731 18:07:27.394896   73696 certs.go:480] ignoring /home/jenkins/minikube-integration/19349-8084/.minikube/certs/15259_empty.pem, impossibly tiny 0 bytes
	I0731 18:07:27.394908   73696 certs.go:484] found cert: /home/jenkins/minikube-integration/19349-8084/.minikube/certs/ca-key.pem (1675 bytes)
	I0731 18:07:27.394935   73696 certs.go:484] found cert: /home/jenkins/minikube-integration/19349-8084/.minikube/certs/ca.pem (1082 bytes)
	I0731 18:07:27.394969   73696 certs.go:484] found cert: /home/jenkins/minikube-integration/19349-8084/.minikube/certs/cert.pem (1123 bytes)
	I0731 18:07:27.394990   73696 certs.go:484] found cert: /home/jenkins/minikube-integration/19349-8084/.minikube/certs/key.pem (1675 bytes)
	I0731 18:07:27.395028   73696 certs.go:484] found cert: /home/jenkins/minikube-integration/19349-8084/.minikube/files/etc/ssl/certs/152592.pem (1708 bytes)
	I0731 18:07:27.395749   73696 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19349-8084/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0731 18:07:27.425292   73696 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19349-8084/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0731 18:07:27.452753   73696 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19349-8084/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0731 18:07:27.481508   73696 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19349-8084/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0731 18:07:27.506990   73696 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/default-k8s-diff-port-094310/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0731 18:07:27.544385   73696 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/default-k8s-diff-port-094310/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0731 18:07:27.572947   73696 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/default-k8s-diff-port-094310/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0731 18:07:27.597895   73696 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/default-k8s-diff-port-094310/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0731 18:07:27.619324   73696 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19349-8084/.minikube/files/etc/ssl/certs/152592.pem --> /usr/share/ca-certificates/152592.pem (1708 bytes)
	I0731 18:07:27.641000   73696 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19349-8084/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0731 18:07:27.662483   73696 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19349-8084/.minikube/certs/15259.pem --> /usr/share/ca-certificates/15259.pem (1338 bytes)
	I0731 18:07:27.684400   73696 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0731 18:07:27.700058   73696 ssh_runner.go:195] Run: openssl version
	I0731 18:07:27.705637   73696 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/152592.pem && ln -fs /usr/share/ca-certificates/152592.pem /etc/ssl/certs/152592.pem"
	I0731 18:07:27.715558   73696 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/152592.pem
	I0731 18:07:27.719545   73696 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 31 16:54 /usr/share/ca-certificates/152592.pem
	I0731 18:07:27.719611   73696 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/152592.pem
	I0731 18:07:27.725076   73696 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/152592.pem /etc/ssl/certs/3ec20f2e.0"
	I0731 18:07:27.736589   73696 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0731 18:07:27.747908   73696 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0731 18:07:27.752392   73696 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 31 16:42 /usr/share/ca-certificates/minikubeCA.pem
	I0731 18:07:27.752448   73696 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0731 18:07:27.757939   73696 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0731 18:07:27.769571   73696 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/15259.pem && ln -fs /usr/share/ca-certificates/15259.pem /etc/ssl/certs/15259.pem"
	I0731 18:07:27.780730   73696 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/15259.pem
	I0731 18:07:27.785059   73696 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 31 16:54 /usr/share/ca-certificates/15259.pem
	I0731 18:07:27.785112   73696 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/15259.pem
	I0731 18:07:27.790477   73696 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/15259.pem /etc/ssl/certs/51391683.0"
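
Each certificate above is linked into /usr/share/ca-certificates and then symlinked as /etc/ssl/certs/<subject-hash>.0, where the hash comes from openssl x509 -hash. A small sketch of that pattern, with paths taken from the log and error handling simplified:

package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

// installCA mirrors the pattern above: expose a CA under /usr/share/ca-certificates
// and symlink it as /etc/ssl/certs/<subject-hash>.0 so OpenSSL can find it by hash.
func installCA(certPath, name string) error {
	target := "/usr/share/ca-certificates/" + name
	if err := os.Symlink(certPath, target); err != nil && !os.IsExist(err) {
		return err
	}
	// `openssl x509 -hash -noout -in <cert>` prints the subject hash, e.g. b5213941.
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", target).Output()
	if err != nil {
		return err
	}
	hash := strings.TrimSpace(string(out))
	link := fmt.Sprintf("/etc/ssl/certs/%s.0", hash)
	if err := os.Symlink(target, link); err != nil && !os.IsExist(err) {
		return err
	}
	return nil
}

func main() {
	if err := installCA("/var/lib/minikube/certs/ca.crt", "minikubeCA.pem"); err != nil {
		fmt.Println("install failed:", err)
	}
}
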
	I0731 18:07:27.801519   73696 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0731 18:07:27.805654   73696 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0731 18:07:27.811381   73696 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0731 18:07:27.816786   73696 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0731 18:07:27.822643   73696 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0731 18:07:27.828371   73696 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0731 18:07:27.833908   73696 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0731 18:07:27.839455   73696 kubeadm.go:392] StartCluster: {Name:default-k8s-diff-port-094310 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVe
rsion:v1.30.3 ClusterName:default-k8s-diff-port-094310 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.197 Port:8444 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration
:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0731 18:07:27.839537   73696 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0731 18:07:27.839605   73696 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0731 18:07:27.882993   73696 cri.go:89] found id: ""
	I0731 18:07:27.883055   73696 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0731 18:07:27.894363   73696 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0731 18:07:27.894386   73696 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0731 18:07:27.894431   73696 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0731 18:07:27.905192   73696 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0731 18:07:27.906138   73696 kubeconfig.go:125] found "default-k8s-diff-port-094310" server: "https://192.168.72.197:8444"
	I0731 18:07:27.908339   73696 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0731 18:07:27.918565   73696 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.72.197
	I0731 18:07:27.918603   73696 kubeadm.go:1160] stopping kube-system containers ...
	I0731 18:07:27.918613   73696 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0731 18:07:27.918663   73696 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0731 18:07:27.955675   73696 cri.go:89] found id: ""
	I0731 18:07:27.955744   73696 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0731 18:07:27.972234   73696 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0731 18:07:27.981273   73696 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0731 18:07:27.981289   73696 kubeadm.go:157] found existing configuration files:
	
	I0731 18:07:27.981323   73696 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0731 18:07:27.989775   73696 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0731 18:07:27.989837   73696 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0731 18:07:27.998816   73696 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0731 18:07:28.007142   73696 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0731 18:07:28.007197   73696 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0731 18:07:28.016124   73696 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0731 18:07:28.024471   73696 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0731 18:07:28.024519   73696 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0731 18:07:28.033105   73696 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0731 18:07:28.041306   73696 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0731 18:07:28.041355   73696 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0731 18:07:28.049958   73696 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0731 18:07:28.058718   73696 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0731 18:07:28.167720   73696 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0731 18:07:29.013539   73696 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0731 18:07:29.225696   73696 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0731 18:07:29.300822   73696 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0731 18:07:29.403471   73696 api_server.go:52] waiting for apiserver process to appear ...
	I0731 18:07:29.403567   73696 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:07:29.903755   73696 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:07:30.403896   73696 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:07:30.904160   73696 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:07:29.683622   73800 main.go:141] libmachine: (embed-certs-436067) DBG | domain embed-certs-436067 has defined MAC address 52:54:00:87:1e:25 in network mk-embed-certs-436067
	I0731 18:07:29.684148   73800 main.go:141] libmachine: (embed-certs-436067) DBG | unable to find current IP address of domain embed-certs-436067 in network mk-embed-certs-436067
	I0731 18:07:29.684180   73800 main.go:141] libmachine: (embed-certs-436067) DBG | I0731 18:07:29.684088   74994 retry.go:31] will retry after 1.813922887s: waiting for machine to come up
	I0731 18:07:31.500225   73800 main.go:141] libmachine: (embed-certs-436067) DBG | domain embed-certs-436067 has defined MAC address 52:54:00:87:1e:25 in network mk-embed-certs-436067
	I0731 18:07:31.500738   73800 main.go:141] libmachine: (embed-certs-436067) DBG | unable to find current IP address of domain embed-certs-436067 in network mk-embed-certs-436067
	I0731 18:07:31.500769   73800 main.go:141] libmachine: (embed-certs-436067) DBG | I0731 18:07:31.500694   74994 retry.go:31] will retry after 2.381670698s: waiting for machine to come up
	I0731 18:07:33.884129   73800 main.go:141] libmachine: (embed-certs-436067) DBG | domain embed-certs-436067 has defined MAC address 52:54:00:87:1e:25 in network mk-embed-certs-436067
	I0731 18:07:33.884564   73800 main.go:141] libmachine: (embed-certs-436067) DBG | unable to find current IP address of domain embed-certs-436067 in network mk-embed-certs-436067
	I0731 18:07:33.884587   73800 main.go:141] libmachine: (embed-certs-436067) DBG | I0731 18:07:33.884539   74994 retry.go:31] will retry after 3.269400744s: waiting for machine to come up
	I0731 18:07:31.404093   73696 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:07:31.417483   73696 api_server.go:72] duration metric: took 2.014013675s to wait for apiserver process to appear ...
	I0731 18:07:31.417511   73696 api_server.go:88] waiting for apiserver healthz status ...
	I0731 18:07:31.417533   73696 api_server.go:253] Checking apiserver healthz at https://192.168.72.197:8444/healthz ...
	I0731 18:07:34.340211   73696 api_server.go:279] https://192.168.72.197:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0731 18:07:34.340240   73696 api_server.go:103] status: https://192.168.72.197:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0731 18:07:34.340274   73696 api_server.go:253] Checking apiserver healthz at https://192.168.72.197:8444/healthz ...
	I0731 18:07:34.426446   73696 api_server.go:279] https://192.168.72.197:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0731 18:07:34.426504   73696 api_server.go:103] status: https://192.168.72.197:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0731 18:07:34.426522   73696 api_server.go:253] Checking apiserver healthz at https://192.168.72.197:8444/healthz ...
	I0731 18:07:34.436383   73696 api_server.go:279] https://192.168.72.197:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0731 18:07:34.436416   73696 api_server.go:103] status: https://192.168.72.197:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0731 18:07:34.918371   73696 api_server.go:253] Checking apiserver healthz at https://192.168.72.197:8444/healthz ...
	I0731 18:07:34.922668   73696 api_server.go:279] https://192.168.72.197:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0731 18:07:34.922699   73696 api_server.go:103] status: https://192.168.72.197:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0731 18:07:35.418265   73696 api_server.go:253] Checking apiserver healthz at https://192.168.72.197:8444/healthz ...
	I0731 18:07:35.435931   73696 api_server.go:279] https://192.168.72.197:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0731 18:07:35.435966   73696 api_server.go:103] status: https://192.168.72.197:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0731 18:07:35.918570   73696 api_server.go:253] Checking apiserver healthz at https://192.168.72.197:8444/healthz ...
	I0731 18:07:35.923674   73696 api_server.go:279] https://192.168.72.197:8444/healthz returned 200:
	ok
	I0731 18:07:35.929781   73696 api_server.go:141] control plane version: v1.30.3
	I0731 18:07:35.929809   73696 api_server.go:131] duration metric: took 4.512290009s to wait for apiserver health ...
	I0731 18:07:35.929820   73696 cni.go:84] Creating CNI manager for ""
	I0731 18:07:35.929827   73696 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0731 18:07:35.931827   73696 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0731 18:07:35.933104   73696 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0731 18:07:35.943548   73696 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0731 18:07:35.961932   73696 system_pods.go:43] waiting for kube-system pods to appear ...
	I0731 18:07:35.977855   73696 system_pods.go:59] 8 kube-system pods found
	I0731 18:07:35.977894   73696 system_pods.go:61] "coredns-7db6d8ff4d-kvxmb" [df8cf19b-5e62-4c38-9124-3257fea48fbb] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0731 18:07:35.977905   73696 system_pods.go:61] "etcd-default-k8s-diff-port-094310" [fe526f06-bd6c-4708-a0f3-e49b731e3a61] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0731 18:07:35.977915   73696 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-094310" [f0191941-87ad-4934-a02a-75b07649d5dc] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0731 18:07:35.977924   73696 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-094310" [28b4bdc4-4eea-41c0-9182-b07034d7363e] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0731 18:07:35.977936   73696 system_pods.go:61] "kube-proxy-8bgl7" [577052d5-fe7d-4547-bfbf-d3c938884767] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0731 18:07:35.977946   73696 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-094310" [df25971f-b25a-4344-a91e-c4b0c9ee5282] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0731 18:07:35.977964   73696 system_pods.go:61] "metrics-server-569cc877fc-64hp4" [847243bf-6568-41ff-a1e4-70b0a89c63dd] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0731 18:07:35.977978   73696 system_pods.go:61] "storage-provisioner" [6493bfa6-e40b-405c-93b6-ee5053efbdf6] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0731 18:07:35.977991   73696 system_pods.go:74] duration metric: took 16.038231ms to wait for pod list to return data ...
	I0731 18:07:35.978003   73696 node_conditions.go:102] verifying NodePressure condition ...
	I0731 18:07:35.983206   73696 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0731 18:07:35.983234   73696 node_conditions.go:123] node cpu capacity is 2
	I0731 18:07:35.983251   73696 node_conditions.go:105] duration metric: took 5.239492ms to run NodePressure ...
	I0731 18:07:35.983270   73696 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0731 18:07:37.155307   73800 main.go:141] libmachine: (embed-certs-436067) DBG | domain embed-certs-436067 has defined MAC address 52:54:00:87:1e:25 in network mk-embed-certs-436067
	I0731 18:07:37.155787   73800 main.go:141] libmachine: (embed-certs-436067) DBG | unable to find current IP address of domain embed-certs-436067 in network mk-embed-certs-436067
	I0731 18:07:37.155822   73800 main.go:141] libmachine: (embed-certs-436067) DBG | I0731 18:07:37.155717   74994 retry.go:31] will retry after 3.095991533s: waiting for machine to come up
	I0731 18:07:36.249072   73696 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0731 18:07:36.253639   73696 kubeadm.go:739] kubelet initialised
	I0731 18:07:36.253661   73696 kubeadm.go:740] duration metric: took 4.559461ms waiting for restarted kubelet to initialise ...
	I0731 18:07:36.253669   73696 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0731 18:07:36.258632   73696 pod_ready.go:78] waiting up to 4m0s for pod "coredns-7db6d8ff4d-kvxmb" in "kube-system" namespace to be "Ready" ...
	I0731 18:07:36.262785   73696 pod_ready.go:97] node "default-k8s-diff-port-094310" hosting pod "coredns-7db6d8ff4d-kvxmb" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-094310" has status "Ready":"False"
	I0731 18:07:36.262811   73696 pod_ready.go:81] duration metric: took 4.157359ms for pod "coredns-7db6d8ff4d-kvxmb" in "kube-system" namespace to be "Ready" ...
	E0731 18:07:36.262823   73696 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-094310" hosting pod "coredns-7db6d8ff4d-kvxmb" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-094310" has status "Ready":"False"
	I0731 18:07:36.262831   73696 pod_ready.go:78] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-094310" in "kube-system" namespace to be "Ready" ...
	I0731 18:07:36.269224   73696 pod_ready.go:97] node "default-k8s-diff-port-094310" hosting pod "etcd-default-k8s-diff-port-094310" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-094310" has status "Ready":"False"
	I0731 18:07:36.269250   73696 pod_ready.go:81] duration metric: took 6.406018ms for pod "etcd-default-k8s-diff-port-094310" in "kube-system" namespace to be "Ready" ...
	E0731 18:07:36.269263   73696 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-094310" hosting pod "etcd-default-k8s-diff-port-094310" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-094310" has status "Ready":"False"
	I0731 18:07:36.269270   73696 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-094310" in "kube-system" namespace to be "Ready" ...
	I0731 18:07:36.273379   73696 pod_ready.go:97] node "default-k8s-diff-port-094310" hosting pod "kube-apiserver-default-k8s-diff-port-094310" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-094310" has status "Ready":"False"
	I0731 18:07:36.273400   73696 pod_ready.go:81] duration metric: took 4.119945ms for pod "kube-apiserver-default-k8s-diff-port-094310" in "kube-system" namespace to be "Ready" ...
	E0731 18:07:36.273408   73696 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-094310" hosting pod "kube-apiserver-default-k8s-diff-port-094310" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-094310" has status "Ready":"False"
	I0731 18:07:36.273414   73696 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-094310" in "kube-system" namespace to be "Ready" ...
	I0731 18:07:36.365153   73696 pod_ready.go:97] node "default-k8s-diff-port-094310" hosting pod "kube-controller-manager-default-k8s-diff-port-094310" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-094310" has status "Ready":"False"
	I0731 18:07:36.365183   73696 pod_ready.go:81] duration metric: took 91.758203ms for pod "kube-controller-manager-default-k8s-diff-port-094310" in "kube-system" namespace to be "Ready" ...
	E0731 18:07:36.365195   73696 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-094310" hosting pod "kube-controller-manager-default-k8s-diff-port-094310" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-094310" has status "Ready":"False"
	I0731 18:07:36.365201   73696 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-8bgl7" in "kube-system" namespace to be "Ready" ...
	I0731 18:07:36.765371   73696 pod_ready.go:92] pod "kube-proxy-8bgl7" in "kube-system" namespace has status "Ready":"True"
	I0731 18:07:36.765393   73696 pod_ready.go:81] duration metric: took 400.181854ms for pod "kube-proxy-8bgl7" in "kube-system" namespace to be "Ready" ...
	I0731 18:07:36.765405   73696 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-094310" in "kube-system" namespace to be "Ready" ...
	I0731 18:07:38.770757   73696 pod_ready.go:102] pod "kube-scheduler-default-k8s-diff-port-094310" in "kube-system" namespace has status "Ready":"False"
	I0731 18:07:40.772702   73696 pod_ready.go:102] pod "kube-scheduler-default-k8s-diff-port-094310" in "kube-system" namespace has status "Ready":"False"
	I0731 18:07:41.552094   74203 start.go:364] duration metric: took 3m46.106649241s to acquireMachinesLock for "old-k8s-version-276459"
	I0731 18:07:41.552166   74203 start.go:96] Skipping create...Using existing machine configuration
	I0731 18:07:41.552174   74203 fix.go:54] fixHost starting: 
	I0731 18:07:41.552553   74203 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 18:07:41.552595   74203 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 18:07:41.569965   74203 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43329
	I0731 18:07:41.570361   74203 main.go:141] libmachine: () Calling .GetVersion
	I0731 18:07:41.570884   74203 main.go:141] libmachine: Using API Version  1
	I0731 18:07:41.570905   74203 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 18:07:41.571247   74203 main.go:141] libmachine: () Calling .GetMachineName
	I0731 18:07:41.571454   74203 main.go:141] libmachine: (old-k8s-version-276459) Calling .DriverName
	I0731 18:07:41.571605   74203 main.go:141] libmachine: (old-k8s-version-276459) Calling .GetState
	I0731 18:07:41.573081   74203 fix.go:112] recreateIfNeeded on old-k8s-version-276459: state=Stopped err=<nil>
	I0731 18:07:41.573114   74203 main.go:141] libmachine: (old-k8s-version-276459) Calling .DriverName
	W0731 18:07:41.573276   74203 fix.go:138] unexpected machine state, will restart: <nil>
	I0731 18:07:41.575254   74203 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-276459" ...
	I0731 18:07:40.254868   73800 main.go:141] libmachine: (embed-certs-436067) DBG | domain embed-certs-436067 has defined MAC address 52:54:00:87:1e:25 in network mk-embed-certs-436067
	I0731 18:07:40.255367   73800 main.go:141] libmachine: (embed-certs-436067) Found IP for machine: 192.168.50.86
	I0731 18:07:40.255385   73800 main.go:141] libmachine: (embed-certs-436067) Reserving static IP address...
	I0731 18:07:40.255405   73800 main.go:141] libmachine: (embed-certs-436067) DBG | domain embed-certs-436067 has current primary IP address 192.168.50.86 and MAC address 52:54:00:87:1e:25 in network mk-embed-certs-436067
	I0731 18:07:40.255798   73800 main.go:141] libmachine: (embed-certs-436067) DBG | found host DHCP lease matching {name: "embed-certs-436067", mac: "52:54:00:87:1e:25", ip: "192.168.50.86"} in network mk-embed-certs-436067: {Iface:virbr4 ExpiryTime:2024-07-31 19:07:32 +0000 UTC Type:0 Mac:52:54:00:87:1e:25 Iaid: IPaddr:192.168.50.86 Prefix:24 Hostname:embed-certs-436067 Clientid:01:52:54:00:87:1e:25}
	I0731 18:07:40.255822   73800 main.go:141] libmachine: (embed-certs-436067) Reserved static IP address: 192.168.50.86
	I0731 18:07:40.255839   73800 main.go:141] libmachine: (embed-certs-436067) DBG | skip adding static IP to network mk-embed-certs-436067 - found existing host DHCP lease matching {name: "embed-certs-436067", mac: "52:54:00:87:1e:25", ip: "192.168.50.86"}
	I0731 18:07:40.255853   73800 main.go:141] libmachine: (embed-certs-436067) DBG | Getting to WaitForSSH function...
	I0731 18:07:40.255865   73800 main.go:141] libmachine: (embed-certs-436067) Waiting for SSH to be available...
	I0731 18:07:40.257994   73800 main.go:141] libmachine: (embed-certs-436067) DBG | domain embed-certs-436067 has defined MAC address 52:54:00:87:1e:25 in network mk-embed-certs-436067
	I0731 18:07:40.258304   73800 main.go:141] libmachine: (embed-certs-436067) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:1e:25", ip: ""} in network mk-embed-certs-436067: {Iface:virbr4 ExpiryTime:2024-07-31 19:07:32 +0000 UTC Type:0 Mac:52:54:00:87:1e:25 Iaid: IPaddr:192.168.50.86 Prefix:24 Hostname:embed-certs-436067 Clientid:01:52:54:00:87:1e:25}
	I0731 18:07:40.258331   73800 main.go:141] libmachine: (embed-certs-436067) DBG | domain embed-certs-436067 has defined IP address 192.168.50.86 and MAC address 52:54:00:87:1e:25 in network mk-embed-certs-436067
	I0731 18:07:40.258475   73800 main.go:141] libmachine: (embed-certs-436067) DBG | Using SSH client type: external
	I0731 18:07:40.258492   73800 main.go:141] libmachine: (embed-certs-436067) DBG | Using SSH private key: /home/jenkins/minikube-integration/19349-8084/.minikube/machines/embed-certs-436067/id_rsa (-rw-------)
	I0731 18:07:40.258594   73800 main.go:141] libmachine: (embed-certs-436067) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.86 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19349-8084/.minikube/machines/embed-certs-436067/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0731 18:07:40.258625   73800 main.go:141] libmachine: (embed-certs-436067) DBG | About to run SSH command:
	I0731 18:07:40.258644   73800 main.go:141] libmachine: (embed-certs-436067) DBG | exit 0
	I0731 18:07:40.387051   73800 main.go:141] libmachine: (embed-certs-436067) DBG | SSH cmd err, output: <nil>: 
	I0731 18:07:40.387459   73800 main.go:141] libmachine: (embed-certs-436067) Calling .GetConfigRaw
	I0731 18:07:40.388093   73800 main.go:141] libmachine: (embed-certs-436067) Calling .GetIP
	I0731 18:07:40.390805   73800 main.go:141] libmachine: (embed-certs-436067) DBG | domain embed-certs-436067 has defined MAC address 52:54:00:87:1e:25 in network mk-embed-certs-436067
	I0731 18:07:40.391260   73800 main.go:141] libmachine: (embed-certs-436067) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:1e:25", ip: ""} in network mk-embed-certs-436067: {Iface:virbr4 ExpiryTime:2024-07-31 19:07:32 +0000 UTC Type:0 Mac:52:54:00:87:1e:25 Iaid: IPaddr:192.168.50.86 Prefix:24 Hostname:embed-certs-436067 Clientid:01:52:54:00:87:1e:25}
	I0731 18:07:40.391306   73800 main.go:141] libmachine: (embed-certs-436067) DBG | domain embed-certs-436067 has defined IP address 192.168.50.86 and MAC address 52:54:00:87:1e:25 in network mk-embed-certs-436067
	I0731 18:07:40.391534   73800 profile.go:143] Saving config to /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/embed-certs-436067/config.json ...
	I0731 18:07:40.391769   73800 machine.go:94] provisionDockerMachine start ...
	I0731 18:07:40.391793   73800 main.go:141] libmachine: (embed-certs-436067) Calling .DriverName
	I0731 18:07:40.392012   73800 main.go:141] libmachine: (embed-certs-436067) Calling .GetSSHHostname
	I0731 18:07:40.394412   73800 main.go:141] libmachine: (embed-certs-436067) DBG | domain embed-certs-436067 has defined MAC address 52:54:00:87:1e:25 in network mk-embed-certs-436067
	I0731 18:07:40.394809   73800 main.go:141] libmachine: (embed-certs-436067) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:1e:25", ip: ""} in network mk-embed-certs-436067: {Iface:virbr4 ExpiryTime:2024-07-31 19:07:32 +0000 UTC Type:0 Mac:52:54:00:87:1e:25 Iaid: IPaddr:192.168.50.86 Prefix:24 Hostname:embed-certs-436067 Clientid:01:52:54:00:87:1e:25}
	I0731 18:07:40.394839   73800 main.go:141] libmachine: (embed-certs-436067) DBG | domain embed-certs-436067 has defined IP address 192.168.50.86 and MAC address 52:54:00:87:1e:25 in network mk-embed-certs-436067
	I0731 18:07:40.395029   73800 main.go:141] libmachine: (embed-certs-436067) Calling .GetSSHPort
	I0731 18:07:40.395209   73800 main.go:141] libmachine: (embed-certs-436067) Calling .GetSSHKeyPath
	I0731 18:07:40.395372   73800 main.go:141] libmachine: (embed-certs-436067) Calling .GetSSHKeyPath
	I0731 18:07:40.395480   73800 main.go:141] libmachine: (embed-certs-436067) Calling .GetSSHUsername
	I0731 18:07:40.395624   73800 main.go:141] libmachine: Using SSH client type: native
	I0731 18:07:40.395808   73800 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.86 22 <nil> <nil>}
	I0731 18:07:40.395817   73800 main.go:141] libmachine: About to run SSH command:
	hostname
	I0731 18:07:40.503041   73800 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0731 18:07:40.503073   73800 main.go:141] libmachine: (embed-certs-436067) Calling .GetMachineName
	I0731 18:07:40.503326   73800 buildroot.go:166] provisioning hostname "embed-certs-436067"
	I0731 18:07:40.503352   73800 main.go:141] libmachine: (embed-certs-436067) Calling .GetMachineName
	I0731 18:07:40.503539   73800 main.go:141] libmachine: (embed-certs-436067) Calling .GetSSHHostname
	I0731 18:07:40.506604   73800 main.go:141] libmachine: (embed-certs-436067) DBG | domain embed-certs-436067 has defined MAC address 52:54:00:87:1e:25 in network mk-embed-certs-436067
	I0731 18:07:40.506940   73800 main.go:141] libmachine: (embed-certs-436067) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:1e:25", ip: ""} in network mk-embed-certs-436067: {Iface:virbr4 ExpiryTime:2024-07-31 19:07:32 +0000 UTC Type:0 Mac:52:54:00:87:1e:25 Iaid: IPaddr:192.168.50.86 Prefix:24 Hostname:embed-certs-436067 Clientid:01:52:54:00:87:1e:25}
	I0731 18:07:40.506967   73800 main.go:141] libmachine: (embed-certs-436067) DBG | domain embed-certs-436067 has defined IP address 192.168.50.86 and MAC address 52:54:00:87:1e:25 in network mk-embed-certs-436067
	I0731 18:07:40.507124   73800 main.go:141] libmachine: (embed-certs-436067) Calling .GetSSHPort
	I0731 18:07:40.507296   73800 main.go:141] libmachine: (embed-certs-436067) Calling .GetSSHKeyPath
	I0731 18:07:40.507438   73800 main.go:141] libmachine: (embed-certs-436067) Calling .GetSSHKeyPath
	I0731 18:07:40.507577   73800 main.go:141] libmachine: (embed-certs-436067) Calling .GetSSHUsername
	I0731 18:07:40.507752   73800 main.go:141] libmachine: Using SSH client type: native
	I0731 18:07:40.507912   73800 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.86 22 <nil> <nil>}
	I0731 18:07:40.507927   73800 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-436067 && echo "embed-certs-436067" | sudo tee /etc/hostname
	I0731 18:07:40.632627   73800 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-436067
	
	I0731 18:07:40.632678   73800 main.go:141] libmachine: (embed-certs-436067) Calling .GetSSHHostname
	I0731 18:07:40.635632   73800 main.go:141] libmachine: (embed-certs-436067) DBG | domain embed-certs-436067 has defined MAC address 52:54:00:87:1e:25 in network mk-embed-certs-436067
	I0731 18:07:40.635989   73800 main.go:141] libmachine: (embed-certs-436067) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:1e:25", ip: ""} in network mk-embed-certs-436067: {Iface:virbr4 ExpiryTime:2024-07-31 19:07:32 +0000 UTC Type:0 Mac:52:54:00:87:1e:25 Iaid: IPaddr:192.168.50.86 Prefix:24 Hostname:embed-certs-436067 Clientid:01:52:54:00:87:1e:25}
	I0731 18:07:40.636017   73800 main.go:141] libmachine: (embed-certs-436067) DBG | domain embed-certs-436067 has defined IP address 192.168.50.86 and MAC address 52:54:00:87:1e:25 in network mk-embed-certs-436067
	I0731 18:07:40.636168   73800 main.go:141] libmachine: (embed-certs-436067) Calling .GetSSHPort
	I0731 18:07:40.636386   73800 main.go:141] libmachine: (embed-certs-436067) Calling .GetSSHKeyPath
	I0731 18:07:40.636554   73800 main.go:141] libmachine: (embed-certs-436067) Calling .GetSSHKeyPath
	I0731 18:07:40.636751   73800 main.go:141] libmachine: (embed-certs-436067) Calling .GetSSHUsername
	I0731 18:07:40.636963   73800 main.go:141] libmachine: Using SSH client type: native
	I0731 18:07:40.637192   73800 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.86 22 <nil> <nil>}
	I0731 18:07:40.637213   73800 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-436067' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-436067/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-436067' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0731 18:07:40.755249   73800 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0731 18:07:40.755273   73800 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19349-8084/.minikube CaCertPath:/home/jenkins/minikube-integration/19349-8084/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19349-8084/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19349-8084/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19349-8084/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19349-8084/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19349-8084/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19349-8084/.minikube}
	I0731 18:07:40.755291   73800 buildroot.go:174] setting up certificates
	I0731 18:07:40.755301   73800 provision.go:84] configureAuth start
	I0731 18:07:40.755310   73800 main.go:141] libmachine: (embed-certs-436067) Calling .GetMachineName
	I0731 18:07:40.755602   73800 main.go:141] libmachine: (embed-certs-436067) Calling .GetIP
	I0731 18:07:40.758306   73800 main.go:141] libmachine: (embed-certs-436067) DBG | domain embed-certs-436067 has defined MAC address 52:54:00:87:1e:25 in network mk-embed-certs-436067
	I0731 18:07:40.758705   73800 main.go:141] libmachine: (embed-certs-436067) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:1e:25", ip: ""} in network mk-embed-certs-436067: {Iface:virbr4 ExpiryTime:2024-07-31 19:07:32 +0000 UTC Type:0 Mac:52:54:00:87:1e:25 Iaid: IPaddr:192.168.50.86 Prefix:24 Hostname:embed-certs-436067 Clientid:01:52:54:00:87:1e:25}
	I0731 18:07:40.758731   73800 main.go:141] libmachine: (embed-certs-436067) DBG | domain embed-certs-436067 has defined IP address 192.168.50.86 and MAC address 52:54:00:87:1e:25 in network mk-embed-certs-436067
	I0731 18:07:40.758865   73800 main.go:141] libmachine: (embed-certs-436067) Calling .GetSSHHostname
	I0731 18:07:40.760790   73800 main.go:141] libmachine: (embed-certs-436067) DBG | domain embed-certs-436067 has defined MAC address 52:54:00:87:1e:25 in network mk-embed-certs-436067
	I0731 18:07:40.761061   73800 main.go:141] libmachine: (embed-certs-436067) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:1e:25", ip: ""} in network mk-embed-certs-436067: {Iface:virbr4 ExpiryTime:2024-07-31 19:07:32 +0000 UTC Type:0 Mac:52:54:00:87:1e:25 Iaid: IPaddr:192.168.50.86 Prefix:24 Hostname:embed-certs-436067 Clientid:01:52:54:00:87:1e:25}
	I0731 18:07:40.761090   73800 main.go:141] libmachine: (embed-certs-436067) DBG | domain embed-certs-436067 has defined IP address 192.168.50.86 and MAC address 52:54:00:87:1e:25 in network mk-embed-certs-436067
	I0731 18:07:40.761244   73800 provision.go:143] copyHostCerts
	I0731 18:07:40.761299   73800 exec_runner.go:144] found /home/jenkins/minikube-integration/19349-8084/.minikube/ca.pem, removing ...
	I0731 18:07:40.761323   73800 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19349-8084/.minikube/ca.pem
	I0731 18:07:40.761376   73800 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19349-8084/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19349-8084/.minikube/ca.pem (1082 bytes)
	I0731 18:07:40.761479   73800 exec_runner.go:144] found /home/jenkins/minikube-integration/19349-8084/.minikube/cert.pem, removing ...
	I0731 18:07:40.761488   73800 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19349-8084/.minikube/cert.pem
	I0731 18:07:40.761509   73800 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19349-8084/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19349-8084/.minikube/cert.pem (1123 bytes)
	I0731 18:07:40.761562   73800 exec_runner.go:144] found /home/jenkins/minikube-integration/19349-8084/.minikube/key.pem, removing ...
	I0731 18:07:40.761569   73800 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19349-8084/.minikube/key.pem
	I0731 18:07:40.761586   73800 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19349-8084/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19349-8084/.minikube/key.pem (1675 bytes)
	I0731 18:07:40.761635   73800 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19349-8084/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19349-8084/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19349-8084/.minikube/certs/ca-key.pem org=jenkins.embed-certs-436067 san=[127.0.0.1 192.168.50.86 embed-certs-436067 localhost minikube]
	I0731 18:07:40.874612   73800 provision.go:177] copyRemoteCerts
	I0731 18:07:40.874666   73800 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0731 18:07:40.874691   73800 main.go:141] libmachine: (embed-certs-436067) Calling .GetSSHHostname
	I0731 18:07:40.877623   73800 main.go:141] libmachine: (embed-certs-436067) DBG | domain embed-certs-436067 has defined MAC address 52:54:00:87:1e:25 in network mk-embed-certs-436067
	I0731 18:07:40.878044   73800 main.go:141] libmachine: (embed-certs-436067) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:1e:25", ip: ""} in network mk-embed-certs-436067: {Iface:virbr4 ExpiryTime:2024-07-31 19:07:32 +0000 UTC Type:0 Mac:52:54:00:87:1e:25 Iaid: IPaddr:192.168.50.86 Prefix:24 Hostname:embed-certs-436067 Clientid:01:52:54:00:87:1e:25}
	I0731 18:07:40.878075   73800 main.go:141] libmachine: (embed-certs-436067) DBG | domain embed-certs-436067 has defined IP address 192.168.50.86 and MAC address 52:54:00:87:1e:25 in network mk-embed-certs-436067
	I0731 18:07:40.878206   73800 main.go:141] libmachine: (embed-certs-436067) Calling .GetSSHPort
	I0731 18:07:40.878403   73800 main.go:141] libmachine: (embed-certs-436067) Calling .GetSSHKeyPath
	I0731 18:07:40.878556   73800 main.go:141] libmachine: (embed-certs-436067) Calling .GetSSHUsername
	I0731 18:07:40.878706   73800 sshutil.go:53] new ssh client: &{IP:192.168.50.86 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19349-8084/.minikube/machines/embed-certs-436067/id_rsa Username:docker}
	I0731 18:07:40.965720   73800 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19349-8084/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0731 18:07:40.987836   73800 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19349-8084/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0731 18:07:41.012423   73800 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19349-8084/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0731 18:07:41.036366   73800 provision.go:87] duration metric: took 281.054266ms to configureAuth
	I0731 18:07:41.036392   73800 buildroot.go:189] setting minikube options for container-runtime
	I0731 18:07:41.036561   73800 config.go:182] Loaded profile config "embed-certs-436067": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0731 18:07:41.036626   73800 main.go:141] libmachine: (embed-certs-436067) Calling .GetSSHHostname
	I0731 18:07:41.039204   73800 main.go:141] libmachine: (embed-certs-436067) DBG | domain embed-certs-436067 has defined MAC address 52:54:00:87:1e:25 in network mk-embed-certs-436067
	I0731 18:07:41.039587   73800 main.go:141] libmachine: (embed-certs-436067) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:1e:25", ip: ""} in network mk-embed-certs-436067: {Iface:virbr4 ExpiryTime:2024-07-31 19:07:32 +0000 UTC Type:0 Mac:52:54:00:87:1e:25 Iaid: IPaddr:192.168.50.86 Prefix:24 Hostname:embed-certs-436067 Clientid:01:52:54:00:87:1e:25}
	I0731 18:07:41.039615   73800 main.go:141] libmachine: (embed-certs-436067) DBG | domain embed-certs-436067 has defined IP address 192.168.50.86 and MAC address 52:54:00:87:1e:25 in network mk-embed-certs-436067
	I0731 18:07:41.039814   73800 main.go:141] libmachine: (embed-certs-436067) Calling .GetSSHPort
	I0731 18:07:41.040021   73800 main.go:141] libmachine: (embed-certs-436067) Calling .GetSSHKeyPath
	I0731 18:07:41.040162   73800 main.go:141] libmachine: (embed-certs-436067) Calling .GetSSHKeyPath
	I0731 18:07:41.040293   73800 main.go:141] libmachine: (embed-certs-436067) Calling .GetSSHUsername
	I0731 18:07:41.040462   73800 main.go:141] libmachine: Using SSH client type: native
	I0731 18:07:41.040642   73800 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.86 22 <nil> <nil>}
	I0731 18:07:41.040663   73800 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0731 18:07:41.307915   73800 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0731 18:07:41.307945   73800 machine.go:97] duration metric: took 916.161297ms to provisionDockerMachine
	I0731 18:07:41.307958   73800 start.go:293] postStartSetup for "embed-certs-436067" (driver="kvm2")
	I0731 18:07:41.307971   73800 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0731 18:07:41.307990   73800 main.go:141] libmachine: (embed-certs-436067) Calling .DriverName
	I0731 18:07:41.308383   73800 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0731 18:07:41.308409   73800 main.go:141] libmachine: (embed-certs-436067) Calling .GetSSHHostname
	I0731 18:07:41.311172   73800 main.go:141] libmachine: (embed-certs-436067) DBG | domain embed-certs-436067 has defined MAC address 52:54:00:87:1e:25 in network mk-embed-certs-436067
	I0731 18:07:41.311532   73800 main.go:141] libmachine: (embed-certs-436067) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:1e:25", ip: ""} in network mk-embed-certs-436067: {Iface:virbr4 ExpiryTime:2024-07-31 19:07:32 +0000 UTC Type:0 Mac:52:54:00:87:1e:25 Iaid: IPaddr:192.168.50.86 Prefix:24 Hostname:embed-certs-436067 Clientid:01:52:54:00:87:1e:25}
	I0731 18:07:41.311559   73800 main.go:141] libmachine: (embed-certs-436067) DBG | domain embed-certs-436067 has defined IP address 192.168.50.86 and MAC address 52:54:00:87:1e:25 in network mk-embed-certs-436067
	I0731 18:07:41.311712   73800 main.go:141] libmachine: (embed-certs-436067) Calling .GetSSHPort
	I0731 18:07:41.311940   73800 main.go:141] libmachine: (embed-certs-436067) Calling .GetSSHKeyPath
	I0731 18:07:41.312132   73800 main.go:141] libmachine: (embed-certs-436067) Calling .GetSSHUsername
	I0731 18:07:41.312251   73800 sshutil.go:53] new ssh client: &{IP:192.168.50.86 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19349-8084/.minikube/machines/embed-certs-436067/id_rsa Username:docker}
	I0731 18:07:41.397229   73800 ssh_runner.go:195] Run: cat /etc/os-release
	I0731 18:07:41.401356   73800 info.go:137] Remote host: Buildroot 2023.02.9
	I0731 18:07:41.401380   73800 filesync.go:126] Scanning /home/jenkins/minikube-integration/19349-8084/.minikube/addons for local assets ...
	I0731 18:07:41.401458   73800 filesync.go:126] Scanning /home/jenkins/minikube-integration/19349-8084/.minikube/files for local assets ...
	I0731 18:07:41.401571   73800 filesync.go:149] local asset: /home/jenkins/minikube-integration/19349-8084/.minikube/files/etc/ssl/certs/152592.pem -> 152592.pem in /etc/ssl/certs
	I0731 18:07:41.401696   73800 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0731 18:07:41.410540   73800 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19349-8084/.minikube/files/etc/ssl/certs/152592.pem --> /etc/ssl/certs/152592.pem (1708 bytes)
	I0731 18:07:41.434298   73800 start.go:296] duration metric: took 126.324424ms for postStartSetup
	I0731 18:07:41.434342   73800 fix.go:56] duration metric: took 19.306472215s for fixHost
	I0731 18:07:41.434363   73800 main.go:141] libmachine: (embed-certs-436067) Calling .GetSSHHostname
	I0731 18:07:41.437502   73800 main.go:141] libmachine: (embed-certs-436067) DBG | domain embed-certs-436067 has defined MAC address 52:54:00:87:1e:25 in network mk-embed-certs-436067
	I0731 18:07:41.438007   73800 main.go:141] libmachine: (embed-certs-436067) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:1e:25", ip: ""} in network mk-embed-certs-436067: {Iface:virbr4 ExpiryTime:2024-07-31 19:07:32 +0000 UTC Type:0 Mac:52:54:00:87:1e:25 Iaid: IPaddr:192.168.50.86 Prefix:24 Hostname:embed-certs-436067 Clientid:01:52:54:00:87:1e:25}
	I0731 18:07:41.438038   73800 main.go:141] libmachine: (embed-certs-436067) DBG | domain embed-certs-436067 has defined IP address 192.168.50.86 and MAC address 52:54:00:87:1e:25 in network mk-embed-certs-436067
	I0731 18:07:41.438221   73800 main.go:141] libmachine: (embed-certs-436067) Calling .GetSSHPort
	I0731 18:07:41.438435   73800 main.go:141] libmachine: (embed-certs-436067) Calling .GetSSHKeyPath
	I0731 18:07:41.438613   73800 main.go:141] libmachine: (embed-certs-436067) Calling .GetSSHKeyPath
	I0731 18:07:41.438752   73800 main.go:141] libmachine: (embed-certs-436067) Calling .GetSSHUsername
	I0731 18:07:41.438932   73800 main.go:141] libmachine: Using SSH client type: native
	I0731 18:07:41.439086   73800 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.86 22 <nil> <nil>}
	I0731 18:07:41.439095   73800 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0731 18:07:41.551915   73800 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722449261.529568895
	
	I0731 18:07:41.551937   73800 fix.go:216] guest clock: 1722449261.529568895
	I0731 18:07:41.551944   73800 fix.go:229] Guest: 2024-07-31 18:07:41.529568895 +0000 UTC Remote: 2024-07-31 18:07:41.434346377 +0000 UTC m=+286.960766339 (delta=95.222518ms)
	I0731 18:07:41.551999   73800 fix.go:200] guest clock delta is within tolerance: 95.222518ms
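The delta reported above is simply the guest timestamp minus the Remote (host-side) timestamp logged on the previous line:

    1722449261.529568895 - 1722449261.434346377 ≈ 0.095222518 s ≈ 95.22 ms

which is why the fix step accepts the guest clock without a resync.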
	I0731 18:07:41.552010   73800 start.go:83] releasing machines lock for "embed-certs-436067", held for 19.42417291s
	I0731 18:07:41.552036   73800 main.go:141] libmachine: (embed-certs-436067) Calling .DriverName
	I0731 18:07:41.552377   73800 main.go:141] libmachine: (embed-certs-436067) Calling .GetIP
	I0731 18:07:41.554945   73800 main.go:141] libmachine: (embed-certs-436067) DBG | domain embed-certs-436067 has defined MAC address 52:54:00:87:1e:25 in network mk-embed-certs-436067
	I0731 18:07:41.555385   73800 main.go:141] libmachine: (embed-certs-436067) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:1e:25", ip: ""} in network mk-embed-certs-436067: {Iface:virbr4 ExpiryTime:2024-07-31 19:07:32 +0000 UTC Type:0 Mac:52:54:00:87:1e:25 Iaid: IPaddr:192.168.50.86 Prefix:24 Hostname:embed-certs-436067 Clientid:01:52:54:00:87:1e:25}
	I0731 18:07:41.555415   73800 main.go:141] libmachine: (embed-certs-436067) DBG | domain embed-certs-436067 has defined IP address 192.168.50.86 and MAC address 52:54:00:87:1e:25 in network mk-embed-certs-436067
	I0731 18:07:41.555583   73800 main.go:141] libmachine: (embed-certs-436067) Calling .DriverName
	I0731 18:07:41.556139   73800 main.go:141] libmachine: (embed-certs-436067) Calling .DriverName
	I0731 18:07:41.556362   73800 main.go:141] libmachine: (embed-certs-436067) Calling .DriverName
	I0731 18:07:41.556448   73800 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0731 18:07:41.556507   73800 main.go:141] libmachine: (embed-certs-436067) Calling .GetSSHHostname
	I0731 18:07:41.556619   73800 ssh_runner.go:195] Run: cat /version.json
	I0731 18:07:41.556634   73800 main.go:141] libmachine: (embed-certs-436067) Calling .GetSSHHostname
	I0731 18:07:41.559700   73800 main.go:141] libmachine: (embed-certs-436067) DBG | domain embed-certs-436067 has defined MAC address 52:54:00:87:1e:25 in network mk-embed-certs-436067
	I0731 18:07:41.559847   73800 main.go:141] libmachine: (embed-certs-436067) DBG | domain embed-certs-436067 has defined MAC address 52:54:00:87:1e:25 in network mk-embed-certs-436067
	I0731 18:07:41.560160   73800 main.go:141] libmachine: (embed-certs-436067) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:1e:25", ip: ""} in network mk-embed-certs-436067: {Iface:virbr4 ExpiryTime:2024-07-31 19:07:32 +0000 UTC Type:0 Mac:52:54:00:87:1e:25 Iaid: IPaddr:192.168.50.86 Prefix:24 Hostname:embed-certs-436067 Clientid:01:52:54:00:87:1e:25}
	I0731 18:07:41.560227   73800 main.go:141] libmachine: (embed-certs-436067) DBG | domain embed-certs-436067 has defined IP address 192.168.50.86 and MAC address 52:54:00:87:1e:25 in network mk-embed-certs-436067
	I0731 18:07:41.560277   73800 main.go:141] libmachine: (embed-certs-436067) Calling .GetSSHPort
	I0731 18:07:41.560374   73800 main.go:141] libmachine: (embed-certs-436067) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:1e:25", ip: ""} in network mk-embed-certs-436067: {Iface:virbr4 ExpiryTime:2024-07-31 19:07:32 +0000 UTC Type:0 Mac:52:54:00:87:1e:25 Iaid: IPaddr:192.168.50.86 Prefix:24 Hostname:embed-certs-436067 Clientid:01:52:54:00:87:1e:25}
	I0731 18:07:41.560440   73800 main.go:141] libmachine: (embed-certs-436067) DBG | domain embed-certs-436067 has defined IP address 192.168.50.86 and MAC address 52:54:00:87:1e:25 in network mk-embed-certs-436067
	I0731 18:07:41.560582   73800 main.go:141] libmachine: (embed-certs-436067) Calling .GetSSHKeyPath
	I0731 18:07:41.560652   73800 main.go:141] libmachine: (embed-certs-436067) Calling .GetSSHPort
	I0731 18:07:41.560697   73800 main.go:141] libmachine: (embed-certs-436067) Calling .GetSSHUsername
	I0731 18:07:41.560745   73800 main.go:141] libmachine: (embed-certs-436067) Calling .GetSSHKeyPath
	I0731 18:07:41.560833   73800 sshutil.go:53] new ssh client: &{IP:192.168.50.86 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19349-8084/.minikube/machines/embed-certs-436067/id_rsa Username:docker}
	I0731 18:07:41.560909   73800 main.go:141] libmachine: (embed-certs-436067) Calling .GetSSHUsername
	I0731 18:07:41.561060   73800 sshutil.go:53] new ssh client: &{IP:192.168.50.86 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19349-8084/.minikube/machines/embed-certs-436067/id_rsa Username:docker}
	I0731 18:07:41.640796   73800 ssh_runner.go:195] Run: systemctl --version
	I0731 18:07:41.671461   73800 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0731 18:07:41.820881   73800 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0731 18:07:41.826610   73800 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0731 18:07:41.826673   73800 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0731 18:07:41.841766   73800 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0731 18:07:41.841789   73800 start.go:495] detecting cgroup driver to use...
	I0731 18:07:41.841872   73800 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0731 18:07:41.858636   73800 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0731 18:07:41.873090   73800 docker.go:217] disabling cri-docker service (if available) ...
	I0731 18:07:41.873152   73800 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0731 18:07:41.890967   73800 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0731 18:07:41.907886   73800 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0731 18:07:42.022724   73800 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0731 18:07:42.173885   73800 docker.go:233] disabling docker service ...
	I0731 18:07:42.173969   73800 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0731 18:07:42.190959   73800 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0731 18:07:42.205274   73800 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0731 18:07:42.358130   73800 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0731 18:07:42.497981   73800 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0731 18:07:42.513774   73800 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0731 18:07:42.532713   73800 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0731 18:07:42.532808   73800 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 18:07:42.544367   73800 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0731 18:07:42.544427   73800 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 18:07:42.556427   73800 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 18:07:42.566399   73800 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 18:07:42.576633   73800 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0731 18:07:42.588508   73800 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 18:07:42.600011   73800 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 18:07:42.618858   73800 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
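Taken together, the sed edits above leave the CRI-O drop-in with the minikube pause image, the cgroupfs cgroup driver, a pod-scoped conmon cgroup, and the unprivileged-port sysctl. A rough sketch of what /etc/crio/crio.conf.d/02-crio.conf would contain after these edits (section placement assumed; only the keys touched above are shown):

    # sketch only - assumed layout of the drop-in after the sed edits above
    [crio.image]
    pause_image = "registry.k8s.io/pause:3.9"

    [crio.runtime]
    cgroup_manager = "cgroupfs"
    conmon_cgroup = "pod"
    default_sysctls = [
      "net.ipv4.ip_unprivileged_port_start=0",
    ]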
	I0731 18:07:42.630437   73800 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0731 18:07:42.641459   73800 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0731 18:07:42.641528   73800 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0731 18:07:42.655000   73800 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0731 18:07:42.664912   73800 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0731 18:07:42.791781   73800 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0731 18:07:42.936709   73800 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0731 18:07:42.936778   73800 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0731 18:07:42.941132   73800 start.go:563] Will wait 60s for crictl version
	I0731 18:07:42.941189   73800 ssh_runner.go:195] Run: which crictl
	I0731 18:07:42.944870   73800 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0731 18:07:42.983069   73800 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0731 18:07:42.983181   73800 ssh_runner.go:195] Run: crio --version
	I0731 18:07:43.011636   73800 ssh_runner.go:195] Run: crio --version
	I0731 18:07:43.043295   73800 out.go:177] * Preparing Kubernetes v1.30.3 on CRI-O 1.29.1 ...
	I0731 18:07:43.044545   73800 main.go:141] libmachine: (embed-certs-436067) Calling .GetIP
	I0731 18:07:43.047635   73800 main.go:141] libmachine: (embed-certs-436067) DBG | domain embed-certs-436067 has defined MAC address 52:54:00:87:1e:25 in network mk-embed-certs-436067
	I0731 18:07:43.048049   73800 main.go:141] libmachine: (embed-certs-436067) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:1e:25", ip: ""} in network mk-embed-certs-436067: {Iface:virbr4 ExpiryTime:2024-07-31 19:07:32 +0000 UTC Type:0 Mac:52:54:00:87:1e:25 Iaid: IPaddr:192.168.50.86 Prefix:24 Hostname:embed-certs-436067 Clientid:01:52:54:00:87:1e:25}
	I0731 18:07:43.048080   73800 main.go:141] libmachine: (embed-certs-436067) DBG | domain embed-certs-436067 has defined IP address 192.168.50.86 and MAC address 52:54:00:87:1e:25 in network mk-embed-certs-436067
	I0731 18:07:43.048330   73800 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0731 18:07:43.052269   73800 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0731 18:07:43.064116   73800 kubeadm.go:883] updating cluster {Name:embed-certs-436067 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1
.30.3 ClusterName:embed-certs-436067 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.86 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:f
alse MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0731 18:07:43.064283   73800 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0731 18:07:43.064361   73800 ssh_runner.go:195] Run: sudo crictl images --output json
	I0731 18:07:43.100437   73800 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.3". assuming images are not preloaded.
	I0731 18:07:43.100516   73800 ssh_runner.go:195] Run: which lz4
	I0731 18:07:43.104627   73800 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0731 18:07:43.108552   73800 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0731 18:07:43.108586   73800 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19349-8084/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (406200976 bytes)
	I0731 18:07:44.368238   73800 crio.go:462] duration metric: took 1.263636259s to copy over tarball
	I0731 18:07:44.368322   73800 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0731 18:07:41.576648   74203 main.go:141] libmachine: (old-k8s-version-276459) Calling .Start
	I0731 18:07:41.576823   74203 main.go:141] libmachine: (old-k8s-version-276459) Ensuring networks are active...
	I0731 18:07:41.577511   74203 main.go:141] libmachine: (old-k8s-version-276459) Ensuring network default is active
	I0731 18:07:41.578015   74203 main.go:141] libmachine: (old-k8s-version-276459) Ensuring network mk-old-k8s-version-276459 is active
	I0731 18:07:41.578469   74203 main.go:141] libmachine: (old-k8s-version-276459) Getting domain xml...
	I0731 18:07:41.579474   74203 main.go:141] libmachine: (old-k8s-version-276459) Creating domain...
	I0731 18:07:42.876409   74203 main.go:141] libmachine: (old-k8s-version-276459) Waiting to get IP...
	I0731 18:07:42.877345   74203 main.go:141] libmachine: (old-k8s-version-276459) DBG | domain old-k8s-version-276459 has defined MAC address 52:54:00:79:9d:96 in network mk-old-k8s-version-276459
	I0731 18:07:42.877788   74203 main.go:141] libmachine: (old-k8s-version-276459) DBG | unable to find current IP address of domain old-k8s-version-276459 in network mk-old-k8s-version-276459
	I0731 18:07:42.877841   74203 main.go:141] libmachine: (old-k8s-version-276459) DBG | I0731 18:07:42.877763   75164 retry.go:31] will retry after 218.764988ms: waiting for machine to come up
	I0731 18:07:43.098230   74203 main.go:141] libmachine: (old-k8s-version-276459) DBG | domain old-k8s-version-276459 has defined MAC address 52:54:00:79:9d:96 in network mk-old-k8s-version-276459
	I0731 18:07:43.098697   74203 main.go:141] libmachine: (old-k8s-version-276459) DBG | unable to find current IP address of domain old-k8s-version-276459 in network mk-old-k8s-version-276459
	I0731 18:07:43.098722   74203 main.go:141] libmachine: (old-k8s-version-276459) DBG | I0731 18:07:43.098650   75164 retry.go:31] will retry after 285.579707ms: waiting for machine to come up
	I0731 18:07:43.386356   74203 main.go:141] libmachine: (old-k8s-version-276459) DBG | domain old-k8s-version-276459 has defined MAC address 52:54:00:79:9d:96 in network mk-old-k8s-version-276459
	I0731 18:07:43.386897   74203 main.go:141] libmachine: (old-k8s-version-276459) DBG | unable to find current IP address of domain old-k8s-version-276459 in network mk-old-k8s-version-276459
	I0731 18:07:43.386928   74203 main.go:141] libmachine: (old-k8s-version-276459) DBG | I0731 18:07:43.386852   75164 retry.go:31] will retry after 389.197253ms: waiting for machine to come up
	I0731 18:07:43.778183   74203 main.go:141] libmachine: (old-k8s-version-276459) DBG | domain old-k8s-version-276459 has defined MAC address 52:54:00:79:9d:96 in network mk-old-k8s-version-276459
	I0731 18:07:43.778672   74203 main.go:141] libmachine: (old-k8s-version-276459) DBG | unable to find current IP address of domain old-k8s-version-276459 in network mk-old-k8s-version-276459
	I0731 18:07:43.778698   74203 main.go:141] libmachine: (old-k8s-version-276459) DBG | I0731 18:07:43.778622   75164 retry.go:31] will retry after 484.5108ms: waiting for machine to come up
	I0731 18:07:44.264412   74203 main.go:141] libmachine: (old-k8s-version-276459) DBG | domain old-k8s-version-276459 has defined MAC address 52:54:00:79:9d:96 in network mk-old-k8s-version-276459
	I0731 18:07:44.265042   74203 main.go:141] libmachine: (old-k8s-version-276459) DBG | unable to find current IP address of domain old-k8s-version-276459 in network mk-old-k8s-version-276459
	I0731 18:07:44.265073   74203 main.go:141] libmachine: (old-k8s-version-276459) DBG | I0731 18:07:44.264955   75164 retry.go:31] will retry after 621.551625ms: waiting for machine to come up
	I0731 18:07:44.887986   74203 main.go:141] libmachine: (old-k8s-version-276459) DBG | domain old-k8s-version-276459 has defined MAC address 52:54:00:79:9d:96 in network mk-old-k8s-version-276459
	I0731 18:07:44.888534   74203 main.go:141] libmachine: (old-k8s-version-276459) DBG | unable to find current IP address of domain old-k8s-version-276459 in network mk-old-k8s-version-276459
	I0731 18:07:44.888563   74203 main.go:141] libmachine: (old-k8s-version-276459) DBG | I0731 18:07:44.888489   75164 retry.go:31] will retry after 610.567971ms: waiting for machine to come up
	I0731 18:07:42.773583   73696 pod_ready.go:102] pod "kube-scheduler-default-k8s-diff-port-094310" in "kube-system" namespace has status "Ready":"False"
	I0731 18:07:44.272853   73696 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-094310" in "kube-system" namespace has status "Ready":"True"
	I0731 18:07:44.272874   73696 pod_ready.go:81] duration metric: took 7.507462023s for pod "kube-scheduler-default-k8s-diff-port-094310" in "kube-system" namespace to be "Ready" ...
	I0731 18:07:44.272886   73696 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-569cc877fc-64hp4" in "kube-system" namespace to be "Ready" ...
	I0731 18:07:46.689701   73800 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.321340678s)
	I0731 18:07:46.689730   73800 crio.go:469] duration metric: took 2.321463484s to extract the tarball
	I0731 18:07:46.689738   73800 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0731 18:07:46.749205   73800 ssh_runner.go:195] Run: sudo crictl images --output json
	I0731 18:07:46.805950   73800 crio.go:514] all images are preloaded for cri-o runtime.
	I0731 18:07:46.805979   73800 cache_images.go:84] Images are preloaded, skipping loading
	I0731 18:07:46.805990   73800 kubeadm.go:934] updating node { 192.168.50.86 8443 v1.30.3 crio true true} ...
	I0731 18:07:46.806135   73800 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-436067 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.86
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:embed-certs-436067 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0731 18:07:46.806233   73800 ssh_runner.go:195] Run: crio config
	I0731 18:07:46.865815   73800 cni.go:84] Creating CNI manager for ""
	I0731 18:07:46.865838   73800 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0731 18:07:46.865852   73800 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0731 18:07:46.865873   73800 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.86 APIServerPort:8443 KubernetesVersion:v1.30.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-436067 NodeName:embed-certs-436067 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.86"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.86 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath
:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0731 18:07:46.866048   73800 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.86
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-436067"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.86
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.86"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0731 18:07:46.866121   73800 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0731 18:07:46.875722   73800 binaries.go:44] Found k8s binaries, skipping transfer
	I0731 18:07:46.875786   73800 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0731 18:07:46.885107   73800 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I0731 18:07:46.903868   73800 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0731 18:07:46.919585   73800 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2159 bytes)
	I0731 18:07:46.939034   73800 ssh_runner.go:195] Run: grep 192.168.50.86	control-plane.minikube.internal$ /etc/hosts
	I0731 18:07:46.943460   73800 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.86	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0731 18:07:46.957699   73800 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0731 18:07:47.065714   73800 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0731 18:07:47.080655   73800 certs.go:68] Setting up /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/embed-certs-436067 for IP: 192.168.50.86
	I0731 18:07:47.080681   73800 certs.go:194] generating shared ca certs ...
	I0731 18:07:47.080717   73800 certs.go:226] acquiring lock for ca certs: {Name:mk49eed439085f9c30746706460ce213570d1997 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 18:07:47.080879   73800 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19349-8084/.minikube/ca.key
	I0731 18:07:47.080938   73800 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19349-8084/.minikube/proxy-client-ca.key
	I0731 18:07:47.080950   73800 certs.go:256] generating profile certs ...
	I0731 18:07:47.081046   73800 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/embed-certs-436067/client.key
	I0731 18:07:47.081113   73800 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/embed-certs-436067/apiserver.key.7b8160da
	I0731 18:07:47.081168   73800 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/embed-certs-436067/proxy-client.key
	I0731 18:07:47.081312   73800 certs.go:484] found cert: /home/jenkins/minikube-integration/19349-8084/.minikube/certs/15259.pem (1338 bytes)
	W0731 18:07:47.081367   73800 certs.go:480] ignoring /home/jenkins/minikube-integration/19349-8084/.minikube/certs/15259_empty.pem, impossibly tiny 0 bytes
	I0731 18:07:47.081380   73800 certs.go:484] found cert: /home/jenkins/minikube-integration/19349-8084/.minikube/certs/ca-key.pem (1675 bytes)
	I0731 18:07:47.081413   73800 certs.go:484] found cert: /home/jenkins/minikube-integration/19349-8084/.minikube/certs/ca.pem (1082 bytes)
	I0731 18:07:47.081438   73800 certs.go:484] found cert: /home/jenkins/minikube-integration/19349-8084/.minikube/certs/cert.pem (1123 bytes)
	I0731 18:07:47.081468   73800 certs.go:484] found cert: /home/jenkins/minikube-integration/19349-8084/.minikube/certs/key.pem (1675 bytes)
	I0731 18:07:47.081508   73800 certs.go:484] found cert: /home/jenkins/minikube-integration/19349-8084/.minikube/files/etc/ssl/certs/152592.pem (1708 bytes)
	I0731 18:07:47.082355   73800 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19349-8084/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0731 18:07:47.130037   73800 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19349-8084/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0731 18:07:47.171218   73800 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19349-8084/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0731 18:07:47.215745   73800 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19349-8084/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0731 18:07:47.244883   73800 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/embed-certs-436067/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I0731 18:07:47.270032   73800 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/embed-certs-436067/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0731 18:07:47.294900   73800 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/embed-certs-436067/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0731 18:07:47.317285   73800 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/embed-certs-436067/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0731 18:07:47.343000   73800 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19349-8084/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0731 18:07:47.369906   73800 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19349-8084/.minikube/certs/15259.pem --> /usr/share/ca-certificates/15259.pem (1338 bytes)
	I0731 18:07:47.392022   73800 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19349-8084/.minikube/files/etc/ssl/certs/152592.pem --> /usr/share/ca-certificates/152592.pem (1708 bytes)
	I0731 18:07:47.414219   73800 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0731 18:07:47.431931   73800 ssh_runner.go:195] Run: openssl version
	I0731 18:07:47.437602   73800 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0731 18:07:47.447585   73800 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0731 18:07:47.451779   73800 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 31 16:42 /usr/share/ca-certificates/minikubeCA.pem
	I0731 18:07:47.451833   73800 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0731 18:07:47.457309   73800 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0731 18:07:47.466917   73800 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/15259.pem && ln -fs /usr/share/ca-certificates/15259.pem /etc/ssl/certs/15259.pem"
	I0731 18:07:47.476211   73800 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/15259.pem
	I0731 18:07:47.480149   73800 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 31 16:54 /usr/share/ca-certificates/15259.pem
	I0731 18:07:47.480215   73800 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/15259.pem
	I0731 18:07:47.485412   73800 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/15259.pem /etc/ssl/certs/51391683.0"
	I0731 18:07:47.494852   73800 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/152592.pem && ln -fs /usr/share/ca-certificates/152592.pem /etc/ssl/certs/152592.pem"
	I0731 18:07:47.504407   73800 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/152592.pem
	I0731 18:07:47.509594   73800 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 31 16:54 /usr/share/ca-certificates/152592.pem
	I0731 18:07:47.509658   73800 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/152592.pem
	I0731 18:07:47.515728   73800 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/152592.pem /etc/ssl/certs/3ec20f2e.0"
	I0731 18:07:47.525660   73800 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0731 18:07:47.529953   73800 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0731 18:07:47.535576   73800 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0731 18:07:47.541158   73800 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0731 18:07:47.546633   73800 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0731 18:07:47.551827   73800 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0731 18:07:47.557100   73800 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0731 18:07:47.562447   73800 kubeadm.go:392] StartCluster: {Name:embed-certs-436067 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30
.3 ClusterName:embed-certs-436067 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.86 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:fals
e MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0731 18:07:47.562551   73800 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0731 18:07:47.562616   73800 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0731 18:07:47.610318   73800 cri.go:89] found id: ""
	I0731 18:07:47.610382   73800 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0731 18:07:47.623036   73800 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0731 18:07:47.623053   73800 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0731 18:07:47.623101   73800 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0731 18:07:47.631709   73800 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0731 18:07:47.632699   73800 kubeconfig.go:125] found "embed-certs-436067" server: "https://192.168.50.86:8443"
	I0731 18:07:47.634724   73800 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0731 18:07:47.643183   73800 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.50.86
	I0731 18:07:47.643207   73800 kubeadm.go:1160] stopping kube-system containers ...
	I0731 18:07:47.643218   73800 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0731 18:07:47.643264   73800 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0731 18:07:47.677438   73800 cri.go:89] found id: ""
	I0731 18:07:47.677527   73800 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0731 18:07:47.693427   73800 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0731 18:07:47.702889   73800 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0731 18:07:47.702907   73800 kubeadm.go:157] found existing configuration files:
	
	I0731 18:07:47.702956   73800 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0731 18:07:47.713958   73800 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0731 18:07:47.714017   73800 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0731 18:07:47.723931   73800 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0731 18:07:47.732615   73800 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0731 18:07:47.732673   73800 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0731 18:07:47.741168   73800 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0731 18:07:47.749164   73800 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0731 18:07:47.749217   73800 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0731 18:07:47.757691   73800 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0731 18:07:47.765479   73800 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0731 18:07:47.765530   73800 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0731 18:07:47.774002   73800 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0731 18:07:47.783757   73800 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0731 18:07:47.890835   73800 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0731 18:07:48.951421   73800 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.060547503s)
	I0731 18:07:48.951466   73800 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0731 18:07:49.152745   73800 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0731 18:07:49.224334   73800 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0731 18:07:49.341066   73800 api_server.go:52] waiting for apiserver process to appear ...
	I0731 18:07:49.341147   73800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:07:45.500400   74203 main.go:141] libmachine: (old-k8s-version-276459) DBG | domain old-k8s-version-276459 has defined MAC address 52:54:00:79:9d:96 in network mk-old-k8s-version-276459
	I0731 18:07:45.500938   74203 main.go:141] libmachine: (old-k8s-version-276459) DBG | unable to find current IP address of domain old-k8s-version-276459 in network mk-old-k8s-version-276459
	I0731 18:07:45.500966   74203 main.go:141] libmachine: (old-k8s-version-276459) DBG | I0731 18:07:45.500890   75164 retry.go:31] will retry after 1.069889786s: waiting for machine to come up
	I0731 18:07:46.572634   74203 main.go:141] libmachine: (old-k8s-version-276459) DBG | domain old-k8s-version-276459 has defined MAC address 52:54:00:79:9d:96 in network mk-old-k8s-version-276459
	I0731 18:07:46.573085   74203 main.go:141] libmachine: (old-k8s-version-276459) DBG | unable to find current IP address of domain old-k8s-version-276459 in network mk-old-k8s-version-276459
	I0731 18:07:46.573128   74203 main.go:141] libmachine: (old-k8s-version-276459) DBG | I0731 18:07:46.572979   75164 retry.go:31] will retry after 1.047722466s: waiting for machine to come up
	I0731 18:07:47.622035   74203 main.go:141] libmachine: (old-k8s-version-276459) DBG | domain old-k8s-version-276459 has defined MAC address 52:54:00:79:9d:96 in network mk-old-k8s-version-276459
	I0731 18:07:47.622479   74203 main.go:141] libmachine: (old-k8s-version-276459) DBG | unable to find current IP address of domain old-k8s-version-276459 in network mk-old-k8s-version-276459
	I0731 18:07:47.622507   74203 main.go:141] libmachine: (old-k8s-version-276459) DBG | I0731 18:07:47.622435   75164 retry.go:31] will retry after 1.292658555s: waiting for machine to come up
	I0731 18:07:48.916255   74203 main.go:141] libmachine: (old-k8s-version-276459) DBG | domain old-k8s-version-276459 has defined MAC address 52:54:00:79:9d:96 in network mk-old-k8s-version-276459
	I0731 18:07:48.916755   74203 main.go:141] libmachine: (old-k8s-version-276459) DBG | unable to find current IP address of domain old-k8s-version-276459 in network mk-old-k8s-version-276459
	I0731 18:07:48.916778   74203 main.go:141] libmachine: (old-k8s-version-276459) DBG | I0731 18:07:48.916701   75164 retry.go:31] will retry after 2.006539925s: waiting for machine to come up
	I0731 18:07:46.281654   73696 pod_ready.go:102] pod "metrics-server-569cc877fc-64hp4" in "kube-system" namespace has status "Ready":"False"
	I0731 18:07:49.189881   73696 pod_ready.go:102] pod "metrics-server-569cc877fc-64hp4" in "kube-system" namespace has status "Ready":"False"
	I0731 18:07:49.841397   73800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:07:50.341264   73800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:07:50.409398   73800 api_server.go:72] duration metric: took 1.068329172s to wait for apiserver process to appear ...
	I0731 18:07:50.409432   73800 api_server.go:88] waiting for apiserver healthz status ...
	I0731 18:07:50.409457   73800 api_server.go:253] Checking apiserver healthz at https://192.168.50.86:8443/healthz ...
	I0731 18:07:50.410135   73800 api_server.go:269] stopped: https://192.168.50.86:8443/healthz: Get "https://192.168.50.86:8443/healthz": dial tcp 192.168.50.86:8443: connect: connection refused
	I0731 18:07:50.909802   73800 api_server.go:253] Checking apiserver healthz at https://192.168.50.86:8443/healthz ...
	I0731 18:07:52.636930   73800 api_server.go:279] https://192.168.50.86:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0731 18:07:52.636972   73800 api_server.go:103] status: https://192.168.50.86:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0731 18:07:52.636989   73800 api_server.go:253] Checking apiserver healthz at https://192.168.50.86:8443/healthz ...
	I0731 18:07:52.666947   73800 api_server.go:279] https://192.168.50.86:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0731 18:07:52.666980   73800 api_server.go:103] status: https://192.168.50.86:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0731 18:07:52.910391   73800 api_server.go:253] Checking apiserver healthz at https://192.168.50.86:8443/healthz ...
	I0731 18:07:52.916305   73800 api_server.go:279] https://192.168.50.86:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0731 18:07:52.916342   73800 api_server.go:103] status: https://192.168.50.86:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0731 18:07:53.409623   73800 api_server.go:253] Checking apiserver healthz at https://192.168.50.86:8443/healthz ...
	I0731 18:07:53.419159   73800 api_server.go:279] https://192.168.50.86:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0731 18:07:53.419205   73800 api_server.go:103] status: https://192.168.50.86:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0731 18:07:53.909654   73800 api_server.go:253] Checking apiserver healthz at https://192.168.50.86:8443/healthz ...
	I0731 18:07:53.913518   73800 api_server.go:279] https://192.168.50.86:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0731 18:07:53.913541   73800 api_server.go:103] status: https://192.168.50.86:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0731 18:07:54.409879   73800 api_server.go:253] Checking apiserver healthz at https://192.168.50.86:8443/healthz ...
	I0731 18:07:54.413948   73800 api_server.go:279] https://192.168.50.86:8443/healthz returned 200:
	ok
	I0731 18:07:54.422414   73800 api_server.go:141] control plane version: v1.30.3
	I0731 18:07:54.422444   73800 api_server.go:131] duration metric: took 4.013003689s to wait for apiserver health ...
	I0731 18:07:54.422458   73800 cni.go:84] Creating CNI manager for ""
	I0731 18:07:54.422467   73800 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0731 18:07:54.424680   73800 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0731 18:07:54.425887   73800 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0731 18:07:54.436394   73800 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0731 18:07:54.454533   73800 system_pods.go:43] waiting for kube-system pods to appear ...
	I0731 18:07:54.464268   73800 system_pods.go:59] 8 kube-system pods found
	I0731 18:07:54.464304   73800 system_pods.go:61] "coredns-7db6d8ff4d-h6ckp" [84faf557-0c8d-4026-b620-37265e017ed0] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0731 18:07:54.464315   73800 system_pods.go:61] "etcd-embed-certs-436067" [787466df-6e3f-4209-a996-037875d63dc8] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0731 18:07:54.464326   73800 system_pods.go:61] "kube-apiserver-embed-certs-436067" [6366e38e-21f3-41a4-af7a-433953b70eaa] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0731 18:07:54.464335   73800 system_pods.go:61] "kube-controller-manager-embed-certs-436067" [a97f6a49-40cf-433a-8196-c433e3cda8e8] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0731 18:07:54.464341   73800 system_pods.go:61] "kube-proxy-tl9pj" [0124eb62-5c00-4f75-a73f-c3e92ddc4a42] Running
	I0731 18:07:54.464354   73800 system_pods.go:61] "kube-scheduler-embed-certs-436067" [afbb9117-f229-44ea-8939-d28c4a402c6b] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0731 18:07:54.464366   73800 system_pods.go:61] "metrics-server-569cc877fc-fzxrw" [2ecdab2a-8ce8-4771-bd94-4e24dee34386] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0731 18:07:54.464374   73800 system_pods.go:61] "storage-provisioner" [29b17f6d-f9e4-4272-b6da-368431264701] Running
	I0731 18:07:54.464382   73800 system_pods.go:74] duration metric: took 9.82125ms to wait for pod list to return data ...
	I0731 18:07:54.464395   73800 node_conditions.go:102] verifying NodePressure condition ...
	I0731 18:07:54.467718   73800 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0731 18:07:54.467748   73800 node_conditions.go:123] node cpu capacity is 2
	I0731 18:07:54.467761   73800 node_conditions.go:105] duration metric: took 3.3602ms to run NodePressure ...
	I0731 18:07:54.467779   73800 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0731 18:07:50.925369   74203 main.go:141] libmachine: (old-k8s-version-276459) DBG | domain old-k8s-version-276459 has defined MAC address 52:54:00:79:9d:96 in network mk-old-k8s-version-276459
	I0731 18:07:50.925835   74203 main.go:141] libmachine: (old-k8s-version-276459) DBG | unable to find current IP address of domain old-k8s-version-276459 in network mk-old-k8s-version-276459
	I0731 18:07:50.925856   74203 main.go:141] libmachine: (old-k8s-version-276459) DBG | I0731 18:07:50.925790   75164 retry.go:31] will retry after 2.875577792s: waiting for machine to come up
	I0731 18:07:53.802729   74203 main.go:141] libmachine: (old-k8s-version-276459) DBG | domain old-k8s-version-276459 has defined MAC address 52:54:00:79:9d:96 in network mk-old-k8s-version-276459
	I0731 18:07:53.803164   74203 main.go:141] libmachine: (old-k8s-version-276459) DBG | unable to find current IP address of domain old-k8s-version-276459 in network mk-old-k8s-version-276459
	I0731 18:07:53.803192   74203 main.go:141] libmachine: (old-k8s-version-276459) DBG | I0731 18:07:53.803122   75164 retry.go:31] will retry after 2.352020729s: waiting for machine to come up
	I0731 18:07:51.279883   73696 pod_ready.go:102] pod "metrics-server-569cc877fc-64hp4" in "kube-system" namespace has status "Ready":"False"
	I0731 18:07:53.279992   73696 pod_ready.go:102] pod "metrics-server-569cc877fc-64hp4" in "kube-system" namespace has status "Ready":"False"
	I0731 18:07:55.778812   73696 pod_ready.go:102] pod "metrics-server-569cc877fc-64hp4" in "kube-system" namespace has status "Ready":"False"
	I0731 18:07:54.732921   73800 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0731 18:07:54.736779   73800 kubeadm.go:739] kubelet initialised
	I0731 18:07:54.736798   73800 kubeadm.go:740] duration metric: took 3.850446ms waiting for restarted kubelet to initialise ...
	I0731 18:07:54.736809   73800 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0731 18:07:54.741733   73800 pod_ready.go:78] waiting up to 4m0s for pod "coredns-7db6d8ff4d-h6ckp" in "kube-system" namespace to be "Ready" ...
	I0731 18:07:54.745722   73800 pod_ready.go:97] node "embed-certs-436067" hosting pod "coredns-7db6d8ff4d-h6ckp" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-436067" has status "Ready":"False"
	I0731 18:07:54.745742   73800 pod_ready.go:81] duration metric: took 3.986968ms for pod "coredns-7db6d8ff4d-h6ckp" in "kube-system" namespace to be "Ready" ...
	E0731 18:07:54.745751   73800 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-436067" hosting pod "coredns-7db6d8ff4d-h6ckp" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-436067" has status "Ready":"False"
	I0731 18:07:54.745757   73800 pod_ready.go:78] waiting up to 4m0s for pod "etcd-embed-certs-436067" in "kube-system" namespace to be "Ready" ...
	I0731 18:07:54.749650   73800 pod_ready.go:97] node "embed-certs-436067" hosting pod "etcd-embed-certs-436067" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-436067" has status "Ready":"False"
	I0731 18:07:54.749666   73800 pod_ready.go:81] duration metric: took 3.895483ms for pod "etcd-embed-certs-436067" in "kube-system" namespace to be "Ready" ...
	E0731 18:07:54.749673   73800 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-436067" hosting pod "etcd-embed-certs-436067" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-436067" has status "Ready":"False"
	I0731 18:07:54.749679   73800 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-embed-certs-436067" in "kube-system" namespace to be "Ready" ...
	I0731 18:07:54.753326   73800 pod_ready.go:97] node "embed-certs-436067" hosting pod "kube-apiserver-embed-certs-436067" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-436067" has status "Ready":"False"
	I0731 18:07:54.753351   73800 pod_ready.go:81] duration metric: took 3.66496ms for pod "kube-apiserver-embed-certs-436067" in "kube-system" namespace to be "Ready" ...
	E0731 18:07:54.753362   73800 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-436067" hosting pod "kube-apiserver-embed-certs-436067" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-436067" has status "Ready":"False"
	I0731 18:07:54.753370   73800 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-436067" in "kube-system" namespace to be "Ready" ...
	I0731 18:07:54.857956   73800 pod_ready.go:97] node "embed-certs-436067" hosting pod "kube-controller-manager-embed-certs-436067" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-436067" has status "Ready":"False"
	I0731 18:07:54.857978   73800 pod_ready.go:81] duration metric: took 104.599259ms for pod "kube-controller-manager-embed-certs-436067" in "kube-system" namespace to be "Ready" ...
	E0731 18:07:54.857988   73800 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-436067" hosting pod "kube-controller-manager-embed-certs-436067" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-436067" has status "Ready":"False"
	I0731 18:07:54.857995   73800 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-tl9pj" in "kube-system" namespace to be "Ready" ...
	I0731 18:07:55.257589   73800 pod_ready.go:92] pod "kube-proxy-tl9pj" in "kube-system" namespace has status "Ready":"True"
	I0731 18:07:55.257621   73800 pod_ready.go:81] duration metric: took 399.617003ms for pod "kube-proxy-tl9pj" in "kube-system" namespace to be "Ready" ...
	I0731 18:07:55.257630   73800 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-embed-certs-436067" in "kube-system" namespace to be "Ready" ...
	I0731 18:07:57.262770   73800 pod_ready.go:102] pod "kube-scheduler-embed-certs-436067" in "kube-system" namespace has status "Ready":"False"
	I0731 18:07:59.271094   73800 pod_ready.go:102] pod "kube-scheduler-embed-certs-436067" in "kube-system" namespace has status "Ready":"False"
	I0731 18:07:56.157721   74203 main.go:141] libmachine: (old-k8s-version-276459) DBG | domain old-k8s-version-276459 has defined MAC address 52:54:00:79:9d:96 in network mk-old-k8s-version-276459
	I0731 18:07:56.158176   74203 main.go:141] libmachine: (old-k8s-version-276459) DBG | unable to find current IP address of domain old-k8s-version-276459 in network mk-old-k8s-version-276459
	I0731 18:07:56.158216   74203 main.go:141] libmachine: (old-k8s-version-276459) DBG | I0731 18:07:56.158110   75164 retry.go:31] will retry after 3.552824334s: waiting for machine to come up
	I0731 18:07:59.712249   74203 main.go:141] libmachine: (old-k8s-version-276459) DBG | domain old-k8s-version-276459 has defined MAC address 52:54:00:79:9d:96 in network mk-old-k8s-version-276459
	I0731 18:07:59.712759   74203 main.go:141] libmachine: (old-k8s-version-276459) Found IP for machine: 192.168.39.26
	I0731 18:07:59.712783   74203 main.go:141] libmachine: (old-k8s-version-276459) DBG | domain old-k8s-version-276459 has current primary IP address 192.168.39.26 and MAC address 52:54:00:79:9d:96 in network mk-old-k8s-version-276459
	I0731 18:07:59.712793   74203 main.go:141] libmachine: (old-k8s-version-276459) Reserving static IP address...
	I0731 18:07:59.713268   74203 main.go:141] libmachine: (old-k8s-version-276459) DBG | found host DHCP lease matching {name: "old-k8s-version-276459", mac: "52:54:00:79:9d:96", ip: "192.168.39.26"} in network mk-old-k8s-version-276459: {Iface:virbr3 ExpiryTime:2024-07-31 19:07:52 +0000 UTC Type:0 Mac:52:54:00:79:9d:96 Iaid: IPaddr:192.168.39.26 Prefix:24 Hostname:old-k8s-version-276459 Clientid:01:52:54:00:79:9d:96}
	I0731 18:07:59.713297   74203 main.go:141] libmachine: (old-k8s-version-276459) DBG | skip adding static IP to network mk-old-k8s-version-276459 - found existing host DHCP lease matching {name: "old-k8s-version-276459", mac: "52:54:00:79:9d:96", ip: "192.168.39.26"}
	I0731 18:07:59.713320   74203 main.go:141] libmachine: (old-k8s-version-276459) Reserved static IP address: 192.168.39.26
	I0731 18:07:59.713343   74203 main.go:141] libmachine: (old-k8s-version-276459) Waiting for SSH to be available...
	I0731 18:07:59.713355   74203 main.go:141] libmachine: (old-k8s-version-276459) DBG | Getting to WaitForSSH function...
	I0731 18:07:59.716068   74203 main.go:141] libmachine: (old-k8s-version-276459) DBG | domain old-k8s-version-276459 has defined MAC address 52:54:00:79:9d:96 in network mk-old-k8s-version-276459
	I0731 18:07:59.716456   74203 main.go:141] libmachine: (old-k8s-version-276459) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:79:9d:96", ip: ""} in network mk-old-k8s-version-276459: {Iface:virbr3 ExpiryTime:2024-07-31 19:07:52 +0000 UTC Type:0 Mac:52:54:00:79:9d:96 Iaid: IPaddr:192.168.39.26 Prefix:24 Hostname:old-k8s-version-276459 Clientid:01:52:54:00:79:9d:96}
	I0731 18:07:59.716490   74203 main.go:141] libmachine: (old-k8s-version-276459) DBG | domain old-k8s-version-276459 has defined IP address 192.168.39.26 and MAC address 52:54:00:79:9d:96 in network mk-old-k8s-version-276459
	I0731 18:07:59.716701   74203 main.go:141] libmachine: (old-k8s-version-276459) DBG | Using SSH client type: external
	I0731 18:07:59.716725   74203 main.go:141] libmachine: (old-k8s-version-276459) DBG | Using SSH private key: /home/jenkins/minikube-integration/19349-8084/.minikube/machines/old-k8s-version-276459/id_rsa (-rw-------)
	I0731 18:07:59.716762   74203 main.go:141] libmachine: (old-k8s-version-276459) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.26 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19349-8084/.minikube/machines/old-k8s-version-276459/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0731 18:07:59.716776   74203 main.go:141] libmachine: (old-k8s-version-276459) DBG | About to run SSH command:
	I0731 18:07:59.716792   74203 main.go:141] libmachine: (old-k8s-version-276459) DBG | exit 0
	I0731 18:07:59.847720   74203 main.go:141] libmachine: (old-k8s-version-276459) DBG | SSH cmd err, output: <nil>: 
	I0731 18:07:59.848089   74203 main.go:141] libmachine: (old-k8s-version-276459) Calling .GetConfigRaw
	I0731 18:07:59.848847   74203 main.go:141] libmachine: (old-k8s-version-276459) Calling .GetIP
	I0731 18:07:59.851632   74203 main.go:141] libmachine: (old-k8s-version-276459) DBG | domain old-k8s-version-276459 has defined MAC address 52:54:00:79:9d:96 in network mk-old-k8s-version-276459
	I0731 18:07:59.852004   74203 main.go:141] libmachine: (old-k8s-version-276459) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:79:9d:96", ip: ""} in network mk-old-k8s-version-276459: {Iface:virbr3 ExpiryTime:2024-07-31 19:07:52 +0000 UTC Type:0 Mac:52:54:00:79:9d:96 Iaid: IPaddr:192.168.39.26 Prefix:24 Hostname:old-k8s-version-276459 Clientid:01:52:54:00:79:9d:96}
	I0731 18:07:59.852030   74203 main.go:141] libmachine: (old-k8s-version-276459) DBG | domain old-k8s-version-276459 has defined IP address 192.168.39.26 and MAC address 52:54:00:79:9d:96 in network mk-old-k8s-version-276459
	I0731 18:07:59.852321   74203 profile.go:143] Saving config to /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/old-k8s-version-276459/config.json ...
	I0731 18:07:59.852505   74203 machine.go:94] provisionDockerMachine start ...
	I0731 18:07:59.852524   74203 main.go:141] libmachine: (old-k8s-version-276459) Calling .DriverName
	I0731 18:07:59.852752   74203 main.go:141] libmachine: (old-k8s-version-276459) Calling .GetSSHHostname
	I0731 18:07:59.855198   74203 main.go:141] libmachine: (old-k8s-version-276459) DBG | domain old-k8s-version-276459 has defined MAC address 52:54:00:79:9d:96 in network mk-old-k8s-version-276459
	I0731 18:07:59.855596   74203 main.go:141] libmachine: (old-k8s-version-276459) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:79:9d:96", ip: ""} in network mk-old-k8s-version-276459: {Iface:virbr3 ExpiryTime:2024-07-31 19:07:52 +0000 UTC Type:0 Mac:52:54:00:79:9d:96 Iaid: IPaddr:192.168.39.26 Prefix:24 Hostname:old-k8s-version-276459 Clientid:01:52:54:00:79:9d:96}
	I0731 18:07:59.855626   74203 main.go:141] libmachine: (old-k8s-version-276459) DBG | domain old-k8s-version-276459 has defined IP address 192.168.39.26 and MAC address 52:54:00:79:9d:96 in network mk-old-k8s-version-276459
	I0731 18:07:59.855756   74203 main.go:141] libmachine: (old-k8s-version-276459) Calling .GetSSHPort
	I0731 18:07:59.855920   74203 main.go:141] libmachine: (old-k8s-version-276459) Calling .GetSSHKeyPath
	I0731 18:07:59.856071   74203 main.go:141] libmachine: (old-k8s-version-276459) Calling .GetSSHKeyPath
	I0731 18:07:59.856208   74203 main.go:141] libmachine: (old-k8s-version-276459) Calling .GetSSHUsername
	I0731 18:07:59.856372   74203 main.go:141] libmachine: Using SSH client type: native
	I0731 18:07:59.856601   74203 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.26 22 <nil> <nil>}
	I0731 18:07:59.856614   74203 main.go:141] libmachine: About to run SSH command:
	hostname
	I0731 18:07:59.963492   74203 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0731 18:07:59.963524   74203 main.go:141] libmachine: (old-k8s-version-276459) Calling .GetMachineName
	I0731 18:07:59.963762   74203 buildroot.go:166] provisioning hostname "old-k8s-version-276459"
	I0731 18:07:59.963794   74203 main.go:141] libmachine: (old-k8s-version-276459) Calling .GetMachineName
	I0731 18:07:59.963992   74203 main.go:141] libmachine: (old-k8s-version-276459) Calling .GetSSHHostname
	I0731 18:07:59.967261   74203 main.go:141] libmachine: (old-k8s-version-276459) DBG | domain old-k8s-version-276459 has defined MAC address 52:54:00:79:9d:96 in network mk-old-k8s-version-276459
	I0731 18:07:59.967725   74203 main.go:141] libmachine: (old-k8s-version-276459) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:79:9d:96", ip: ""} in network mk-old-k8s-version-276459: {Iface:virbr3 ExpiryTime:2024-07-31 19:07:52 +0000 UTC Type:0 Mac:52:54:00:79:9d:96 Iaid: IPaddr:192.168.39.26 Prefix:24 Hostname:old-k8s-version-276459 Clientid:01:52:54:00:79:9d:96}
	I0731 18:07:59.967762   74203 main.go:141] libmachine: (old-k8s-version-276459) DBG | domain old-k8s-version-276459 has defined IP address 192.168.39.26 and MAC address 52:54:00:79:9d:96 in network mk-old-k8s-version-276459
	I0731 18:07:59.967938   74203 main.go:141] libmachine: (old-k8s-version-276459) Calling .GetSSHPort
	I0731 18:07:59.968131   74203 main.go:141] libmachine: (old-k8s-version-276459) Calling .GetSSHKeyPath
	I0731 18:07:59.968316   74203 main.go:141] libmachine: (old-k8s-version-276459) Calling .GetSSHKeyPath
	I0731 18:07:59.968487   74203 main.go:141] libmachine: (old-k8s-version-276459) Calling .GetSSHUsername
	I0731 18:07:59.968687   74203 main.go:141] libmachine: Using SSH client type: native
	I0731 18:07:59.968872   74203 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.26 22 <nil> <nil>}
	I0731 18:07:59.968890   74203 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-276459 && echo "old-k8s-version-276459" | sudo tee /etc/hostname
	I0731 18:08:00.084360   74203 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-276459
	
	I0731 18:08:00.084390   74203 main.go:141] libmachine: (old-k8s-version-276459) Calling .GetSSHHostname
	I0731 18:08:00.087433   74203 main.go:141] libmachine: (old-k8s-version-276459) DBG | domain old-k8s-version-276459 has defined MAC address 52:54:00:79:9d:96 in network mk-old-k8s-version-276459
	I0731 18:08:00.087833   74203 main.go:141] libmachine: (old-k8s-version-276459) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:79:9d:96", ip: ""} in network mk-old-k8s-version-276459: {Iface:virbr3 ExpiryTime:2024-07-31 19:07:52 +0000 UTC Type:0 Mac:52:54:00:79:9d:96 Iaid: IPaddr:192.168.39.26 Prefix:24 Hostname:old-k8s-version-276459 Clientid:01:52:54:00:79:9d:96}
	I0731 18:08:00.087862   74203 main.go:141] libmachine: (old-k8s-version-276459) DBG | domain old-k8s-version-276459 has defined IP address 192.168.39.26 and MAC address 52:54:00:79:9d:96 in network mk-old-k8s-version-276459
	I0731 18:08:00.088016   74203 main.go:141] libmachine: (old-k8s-version-276459) Calling .GetSSHPort
	I0731 18:08:00.088187   74203 main.go:141] libmachine: (old-k8s-version-276459) Calling .GetSSHKeyPath
	I0731 18:08:00.088371   74203 main.go:141] libmachine: (old-k8s-version-276459) Calling .GetSSHKeyPath
	I0731 18:08:00.088521   74203 main.go:141] libmachine: (old-k8s-version-276459) Calling .GetSSHUsername
	I0731 18:08:00.088719   74203 main.go:141] libmachine: Using SSH client type: native
	I0731 18:08:00.088893   74203 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.26 22 <nil> <nil>}
	I0731 18:08:00.088915   74203 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-276459' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-276459/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-276459' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0731 18:08:00.200012   74203 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0731 18:08:00.200038   74203 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19349-8084/.minikube CaCertPath:/home/jenkins/minikube-integration/19349-8084/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19349-8084/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19349-8084/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19349-8084/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19349-8084/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19349-8084/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19349-8084/.minikube}
	I0731 18:08:00.200069   74203 buildroot.go:174] setting up certificates
	I0731 18:08:00.200081   74203 provision.go:84] configureAuth start
	I0731 18:08:00.200093   74203 main.go:141] libmachine: (old-k8s-version-276459) Calling .GetMachineName
	I0731 18:08:00.200360   74203 main.go:141] libmachine: (old-k8s-version-276459) Calling .GetIP
	I0731 18:08:00.203352   74203 main.go:141] libmachine: (old-k8s-version-276459) DBG | domain old-k8s-version-276459 has defined MAC address 52:54:00:79:9d:96 in network mk-old-k8s-version-276459
	I0731 18:08:00.203694   74203 main.go:141] libmachine: (old-k8s-version-276459) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:79:9d:96", ip: ""} in network mk-old-k8s-version-276459: {Iface:virbr3 ExpiryTime:2024-07-31 19:07:52 +0000 UTC Type:0 Mac:52:54:00:79:9d:96 Iaid: IPaddr:192.168.39.26 Prefix:24 Hostname:old-k8s-version-276459 Clientid:01:52:54:00:79:9d:96}
	I0731 18:08:00.203721   74203 main.go:141] libmachine: (old-k8s-version-276459) DBG | domain old-k8s-version-276459 has defined IP address 192.168.39.26 and MAC address 52:54:00:79:9d:96 in network mk-old-k8s-version-276459
	I0731 18:08:00.203951   74203 main.go:141] libmachine: (old-k8s-version-276459) Calling .GetSSHHostname
	I0731 18:08:00.206061   74203 main.go:141] libmachine: (old-k8s-version-276459) DBG | domain old-k8s-version-276459 has defined MAC address 52:54:00:79:9d:96 in network mk-old-k8s-version-276459
	I0731 18:08:00.206398   74203 main.go:141] libmachine: (old-k8s-version-276459) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:79:9d:96", ip: ""} in network mk-old-k8s-version-276459: {Iface:virbr3 ExpiryTime:2024-07-31 19:07:52 +0000 UTC Type:0 Mac:52:54:00:79:9d:96 Iaid: IPaddr:192.168.39.26 Prefix:24 Hostname:old-k8s-version-276459 Clientid:01:52:54:00:79:9d:96}
	I0731 18:08:00.206432   74203 main.go:141] libmachine: (old-k8s-version-276459) DBG | domain old-k8s-version-276459 has defined IP address 192.168.39.26 and MAC address 52:54:00:79:9d:96 in network mk-old-k8s-version-276459
	I0731 18:08:00.206510   74203 provision.go:143] copyHostCerts
	I0731 18:08:00.206580   74203 exec_runner.go:144] found /home/jenkins/minikube-integration/19349-8084/.minikube/ca.pem, removing ...
	I0731 18:08:00.206591   74203 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19349-8084/.minikube/ca.pem
	I0731 18:08:00.206654   74203 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19349-8084/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19349-8084/.minikube/ca.pem (1082 bytes)
	I0731 18:08:00.206759   74203 exec_runner.go:144] found /home/jenkins/minikube-integration/19349-8084/.minikube/cert.pem, removing ...
	I0731 18:08:00.206769   74203 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19349-8084/.minikube/cert.pem
	I0731 18:08:00.206799   74203 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19349-8084/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19349-8084/.minikube/cert.pem (1123 bytes)
	I0731 18:08:00.206876   74203 exec_runner.go:144] found /home/jenkins/minikube-integration/19349-8084/.minikube/key.pem, removing ...
	I0731 18:08:00.206885   74203 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19349-8084/.minikube/key.pem
	I0731 18:08:00.206913   74203 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19349-8084/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19349-8084/.minikube/key.pem (1675 bytes)
	I0731 18:08:00.207047   74203 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19349-8084/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19349-8084/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19349-8084/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-276459 san=[127.0.0.1 192.168.39.26 localhost minikube old-k8s-version-276459]
	I0731 18:08:00.279363   74203 provision.go:177] copyRemoteCerts
	I0731 18:08:00.279423   74203 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0731 18:08:00.279456   74203 main.go:141] libmachine: (old-k8s-version-276459) Calling .GetSSHHostname
	I0731 18:08:00.282234   74203 main.go:141] libmachine: (old-k8s-version-276459) DBG | domain old-k8s-version-276459 has defined MAC address 52:54:00:79:9d:96 in network mk-old-k8s-version-276459
	I0731 18:08:00.282601   74203 main.go:141] libmachine: (old-k8s-version-276459) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:79:9d:96", ip: ""} in network mk-old-k8s-version-276459: {Iface:virbr3 ExpiryTime:2024-07-31 19:07:52 +0000 UTC Type:0 Mac:52:54:00:79:9d:96 Iaid: IPaddr:192.168.39.26 Prefix:24 Hostname:old-k8s-version-276459 Clientid:01:52:54:00:79:9d:96}
	I0731 18:08:00.282630   74203 main.go:141] libmachine: (old-k8s-version-276459) DBG | domain old-k8s-version-276459 has defined IP address 192.168.39.26 and MAC address 52:54:00:79:9d:96 in network mk-old-k8s-version-276459
	I0731 18:08:00.282751   74203 main.go:141] libmachine: (old-k8s-version-276459) Calling .GetSSHPort
	I0731 18:08:00.283004   74203 main.go:141] libmachine: (old-k8s-version-276459) Calling .GetSSHKeyPath
	I0731 18:08:00.283178   74203 main.go:141] libmachine: (old-k8s-version-276459) Calling .GetSSHUsername
	I0731 18:08:00.283361   74203 sshutil.go:53] new ssh client: &{IP:192.168.39.26 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19349-8084/.minikube/machines/old-k8s-version-276459/id_rsa Username:docker}
	I0731 18:08:00.935990   73479 start.go:364] duration metric: took 51.517312901s to acquireMachinesLock for "no-preload-673754"
	I0731 18:08:00.936054   73479 start.go:96] Skipping create...Using existing machine configuration
	I0731 18:08:00.936066   73479 fix.go:54] fixHost starting: 
	I0731 18:08:00.936534   73479 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 18:08:00.936589   73479 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 18:08:00.954868   73479 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39013
	I0731 18:08:00.955405   73479 main.go:141] libmachine: () Calling .GetVersion
	I0731 18:08:00.955980   73479 main.go:141] libmachine: Using API Version  1
	I0731 18:08:00.956012   73479 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 18:08:00.956386   73479 main.go:141] libmachine: () Calling .GetMachineName
	I0731 18:08:00.956589   73479 main.go:141] libmachine: (no-preload-673754) Calling .DriverName
	I0731 18:08:00.956752   73479 main.go:141] libmachine: (no-preload-673754) Calling .GetState
	I0731 18:08:00.958461   73479 fix.go:112] recreateIfNeeded on no-preload-673754: state=Stopped err=<nil>
	I0731 18:08:00.958485   73479 main.go:141] libmachine: (no-preload-673754) Calling .DriverName
	W0731 18:08:00.958655   73479 fix.go:138] unexpected machine state, will restart: <nil>
	I0731 18:08:00.960117   73479 out.go:177] * Restarting existing kvm2 VM for "no-preload-673754" ...
	I0731 18:07:57.779258   73696 pod_ready.go:102] pod "metrics-server-569cc877fc-64hp4" in "kube-system" namespace has status "Ready":"False"
	I0731 18:07:59.780834   73696 pod_ready.go:102] pod "metrics-server-569cc877fc-64hp4" in "kube-system" namespace has status "Ready":"False"
	I0731 18:08:00.961340   73479 main.go:141] libmachine: (no-preload-673754) Calling .Start
	I0731 18:08:00.961543   73479 main.go:141] libmachine: (no-preload-673754) Ensuring networks are active...
	I0731 18:08:00.962332   73479 main.go:141] libmachine: (no-preload-673754) Ensuring network default is active
	I0731 18:08:00.962661   73479 main.go:141] libmachine: (no-preload-673754) Ensuring network mk-no-preload-673754 is active
	I0731 18:08:00.963165   73479 main.go:141] libmachine: (no-preload-673754) Getting domain xml...
	I0731 18:08:00.963982   73479 main.go:141] libmachine: (no-preload-673754) Creating domain...
	I0731 18:08:00.365254   74203 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19349-8084/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0731 18:08:00.389729   74203 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19349-8084/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0731 18:08:00.413143   74203 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19349-8084/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0731 18:08:00.436040   74203 provision.go:87] duration metric: took 235.932619ms to configureAuth
	I0731 18:08:00.436080   74203 buildroot.go:189] setting minikube options for container-runtime
	I0731 18:08:00.436288   74203 config.go:182] Loaded profile config "old-k8s-version-276459": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0731 18:08:00.436403   74203 main.go:141] libmachine: (old-k8s-version-276459) Calling .GetSSHHostname
	I0731 18:08:00.439184   74203 main.go:141] libmachine: (old-k8s-version-276459) DBG | domain old-k8s-version-276459 has defined MAC address 52:54:00:79:9d:96 in network mk-old-k8s-version-276459
	I0731 18:08:00.439543   74203 main.go:141] libmachine: (old-k8s-version-276459) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:79:9d:96", ip: ""} in network mk-old-k8s-version-276459: {Iface:virbr3 ExpiryTime:2024-07-31 19:07:52 +0000 UTC Type:0 Mac:52:54:00:79:9d:96 Iaid: IPaddr:192.168.39.26 Prefix:24 Hostname:old-k8s-version-276459 Clientid:01:52:54:00:79:9d:96}
	I0731 18:08:00.439575   74203 main.go:141] libmachine: (old-k8s-version-276459) DBG | domain old-k8s-version-276459 has defined IP address 192.168.39.26 and MAC address 52:54:00:79:9d:96 in network mk-old-k8s-version-276459
	I0731 18:08:00.439734   74203 main.go:141] libmachine: (old-k8s-version-276459) Calling .GetSSHPort
	I0731 18:08:00.439898   74203 main.go:141] libmachine: (old-k8s-version-276459) Calling .GetSSHKeyPath
	I0731 18:08:00.440093   74203 main.go:141] libmachine: (old-k8s-version-276459) Calling .GetSSHKeyPath
	I0731 18:08:00.440271   74203 main.go:141] libmachine: (old-k8s-version-276459) Calling .GetSSHUsername
	I0731 18:08:00.440450   74203 main.go:141] libmachine: Using SSH client type: native
	I0731 18:08:00.440661   74203 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.26 22 <nil> <nil>}
	I0731 18:08:00.440679   74203 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0731 18:08:00.707438   74203 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0731 18:08:00.707467   74203 machine.go:97] duration metric: took 854.948491ms to provisionDockerMachine
	I0731 18:08:00.707482   74203 start.go:293] postStartSetup for "old-k8s-version-276459" (driver="kvm2")
	I0731 18:08:00.707494   74203 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0731 18:08:00.707510   74203 main.go:141] libmachine: (old-k8s-version-276459) Calling .DriverName
	I0731 18:08:00.707811   74203 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0731 18:08:00.707837   74203 main.go:141] libmachine: (old-k8s-version-276459) Calling .GetSSHHostname
	I0731 18:08:00.710726   74203 main.go:141] libmachine: (old-k8s-version-276459) DBG | domain old-k8s-version-276459 has defined MAC address 52:54:00:79:9d:96 in network mk-old-k8s-version-276459
	I0731 18:08:00.711285   74203 main.go:141] libmachine: (old-k8s-version-276459) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:79:9d:96", ip: ""} in network mk-old-k8s-version-276459: {Iface:virbr3 ExpiryTime:2024-07-31 19:07:52 +0000 UTC Type:0 Mac:52:54:00:79:9d:96 Iaid: IPaddr:192.168.39.26 Prefix:24 Hostname:old-k8s-version-276459 Clientid:01:52:54:00:79:9d:96}
	I0731 18:08:00.711315   74203 main.go:141] libmachine: (old-k8s-version-276459) DBG | domain old-k8s-version-276459 has defined IP address 192.168.39.26 and MAC address 52:54:00:79:9d:96 in network mk-old-k8s-version-276459
	I0731 18:08:00.711458   74203 main.go:141] libmachine: (old-k8s-version-276459) Calling .GetSSHPort
	I0731 18:08:00.711703   74203 main.go:141] libmachine: (old-k8s-version-276459) Calling .GetSSHKeyPath
	I0731 18:08:00.711895   74203 main.go:141] libmachine: (old-k8s-version-276459) Calling .GetSSHUsername
	I0731 18:08:00.712049   74203 sshutil.go:53] new ssh client: &{IP:192.168.39.26 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19349-8084/.minikube/machines/old-k8s-version-276459/id_rsa Username:docker}
	I0731 18:08:00.793719   74203 ssh_runner.go:195] Run: cat /etc/os-release
	I0731 18:08:00.797858   74203 info.go:137] Remote host: Buildroot 2023.02.9
	I0731 18:08:00.797888   74203 filesync.go:126] Scanning /home/jenkins/minikube-integration/19349-8084/.minikube/addons for local assets ...
	I0731 18:08:00.797960   74203 filesync.go:126] Scanning /home/jenkins/minikube-integration/19349-8084/.minikube/files for local assets ...
	I0731 18:08:00.798038   74203 filesync.go:149] local asset: /home/jenkins/minikube-integration/19349-8084/.minikube/files/etc/ssl/certs/152592.pem -> 152592.pem in /etc/ssl/certs
	I0731 18:08:00.798130   74203 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0731 18:08:00.807013   74203 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19349-8084/.minikube/files/etc/ssl/certs/152592.pem --> /etc/ssl/certs/152592.pem (1708 bytes)
	I0731 18:08:00.829440   74203 start.go:296] duration metric: took 121.944271ms for postStartSetup
	I0731 18:08:00.829487   74203 fix.go:56] duration metric: took 19.277312964s for fixHost
	I0731 18:08:00.829518   74203 main.go:141] libmachine: (old-k8s-version-276459) Calling .GetSSHHostname
	I0731 18:08:00.832718   74203 main.go:141] libmachine: (old-k8s-version-276459) DBG | domain old-k8s-version-276459 has defined MAC address 52:54:00:79:9d:96 in network mk-old-k8s-version-276459
	I0731 18:08:00.833048   74203 main.go:141] libmachine: (old-k8s-version-276459) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:79:9d:96", ip: ""} in network mk-old-k8s-version-276459: {Iface:virbr3 ExpiryTime:2024-07-31 19:07:52 +0000 UTC Type:0 Mac:52:54:00:79:9d:96 Iaid: IPaddr:192.168.39.26 Prefix:24 Hostname:old-k8s-version-276459 Clientid:01:52:54:00:79:9d:96}
	I0731 18:08:00.833082   74203 main.go:141] libmachine: (old-k8s-version-276459) DBG | domain old-k8s-version-276459 has defined IP address 192.168.39.26 and MAC address 52:54:00:79:9d:96 in network mk-old-k8s-version-276459
	I0731 18:08:00.833317   74203 main.go:141] libmachine: (old-k8s-version-276459) Calling .GetSSHPort
	I0731 18:08:00.833533   74203 main.go:141] libmachine: (old-k8s-version-276459) Calling .GetSSHKeyPath
	I0731 18:08:00.833752   74203 main.go:141] libmachine: (old-k8s-version-276459) Calling .GetSSHKeyPath
	I0731 18:08:00.833887   74203 main.go:141] libmachine: (old-k8s-version-276459) Calling .GetSSHUsername
	I0731 18:08:00.834189   74203 main.go:141] libmachine: Using SSH client type: native
	I0731 18:08:00.834364   74203 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.26 22 <nil> <nil>}
	I0731 18:08:00.834377   74203 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0731 18:08:00.935834   74203 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722449280.899364873
	
	I0731 18:08:00.935853   74203 fix.go:216] guest clock: 1722449280.899364873
	I0731 18:08:00.935860   74203 fix.go:229] Guest: 2024-07-31 18:08:00.899364873 +0000 UTC Remote: 2024-07-31 18:08:00.829491013 +0000 UTC m=+245.518063325 (delta=69.87386ms)
	I0731 18:08:00.935894   74203 fix.go:200] guest clock delta is within tolerance: 69.87386ms
	I0731 18:08:00.935899   74203 start.go:83] releasing machines lock for "old-k8s-version-276459", held for 19.38376262s
	I0731 18:08:00.935937   74203 main.go:141] libmachine: (old-k8s-version-276459) Calling .DriverName
	I0731 18:08:00.936220   74203 main.go:141] libmachine: (old-k8s-version-276459) Calling .GetIP
	I0731 18:08:00.939282   74203 main.go:141] libmachine: (old-k8s-version-276459) DBG | domain old-k8s-version-276459 has defined MAC address 52:54:00:79:9d:96 in network mk-old-k8s-version-276459
	I0731 18:08:00.939691   74203 main.go:141] libmachine: (old-k8s-version-276459) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:79:9d:96", ip: ""} in network mk-old-k8s-version-276459: {Iface:virbr3 ExpiryTime:2024-07-31 19:07:52 +0000 UTC Type:0 Mac:52:54:00:79:9d:96 Iaid: IPaddr:192.168.39.26 Prefix:24 Hostname:old-k8s-version-276459 Clientid:01:52:54:00:79:9d:96}
	I0731 18:08:00.939722   74203 main.go:141] libmachine: (old-k8s-version-276459) DBG | domain old-k8s-version-276459 has defined IP address 192.168.39.26 and MAC address 52:54:00:79:9d:96 in network mk-old-k8s-version-276459
	I0731 18:08:00.939911   74203 main.go:141] libmachine: (old-k8s-version-276459) Calling .DriverName
	I0731 18:08:00.940506   74203 main.go:141] libmachine: (old-k8s-version-276459) Calling .DriverName
	I0731 18:08:00.940704   74203 main.go:141] libmachine: (old-k8s-version-276459) Calling .DriverName
	I0731 18:08:00.940790   74203 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0731 18:08:00.940831   74203 main.go:141] libmachine: (old-k8s-version-276459) Calling .GetSSHHostname
	I0731 18:08:00.940960   74203 ssh_runner.go:195] Run: cat /version.json
	I0731 18:08:00.941043   74203 main.go:141] libmachine: (old-k8s-version-276459) Calling .GetSSHHostname
	I0731 18:08:00.943883   74203 main.go:141] libmachine: (old-k8s-version-276459) DBG | domain old-k8s-version-276459 has defined MAC address 52:54:00:79:9d:96 in network mk-old-k8s-version-276459
	I0731 18:08:00.943909   74203 main.go:141] libmachine: (old-k8s-version-276459) DBG | domain old-k8s-version-276459 has defined MAC address 52:54:00:79:9d:96 in network mk-old-k8s-version-276459
	I0731 18:08:00.944361   74203 main.go:141] libmachine: (old-k8s-version-276459) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:79:9d:96", ip: ""} in network mk-old-k8s-version-276459: {Iface:virbr3 ExpiryTime:2024-07-31 19:07:52 +0000 UTC Type:0 Mac:52:54:00:79:9d:96 Iaid: IPaddr:192.168.39.26 Prefix:24 Hostname:old-k8s-version-276459 Clientid:01:52:54:00:79:9d:96}
	I0731 18:08:00.944405   74203 main.go:141] libmachine: (old-k8s-version-276459) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:79:9d:96", ip: ""} in network mk-old-k8s-version-276459: {Iface:virbr3 ExpiryTime:2024-07-31 19:07:52 +0000 UTC Type:0 Mac:52:54:00:79:9d:96 Iaid: IPaddr:192.168.39.26 Prefix:24 Hostname:old-k8s-version-276459 Clientid:01:52:54:00:79:9d:96}
	I0731 18:08:00.944429   74203 main.go:141] libmachine: (old-k8s-version-276459) DBG | domain old-k8s-version-276459 has defined IP address 192.168.39.26 and MAC address 52:54:00:79:9d:96 in network mk-old-k8s-version-276459
	I0731 18:08:00.944442   74203 main.go:141] libmachine: (old-k8s-version-276459) DBG | domain old-k8s-version-276459 has defined IP address 192.168.39.26 and MAC address 52:54:00:79:9d:96 in network mk-old-k8s-version-276459
	I0731 18:08:00.944542   74203 main.go:141] libmachine: (old-k8s-version-276459) Calling .GetSSHPort
	I0731 18:08:00.944639   74203 main.go:141] libmachine: (old-k8s-version-276459) Calling .GetSSHPort
	I0731 18:08:00.944766   74203 main.go:141] libmachine: (old-k8s-version-276459) Calling .GetSSHKeyPath
	I0731 18:08:00.944817   74203 main.go:141] libmachine: (old-k8s-version-276459) Calling .GetSSHKeyPath
	I0731 18:08:00.944899   74203 main.go:141] libmachine: (old-k8s-version-276459) Calling .GetSSHUsername
	I0731 18:08:00.944979   74203 main.go:141] libmachine: (old-k8s-version-276459) Calling .GetSSHUsername
	I0731 18:08:00.945039   74203 sshutil.go:53] new ssh client: &{IP:192.168.39.26 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19349-8084/.minikube/machines/old-k8s-version-276459/id_rsa Username:docker}
	I0731 18:08:00.945110   74203 sshutil.go:53] new ssh client: &{IP:192.168.39.26 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19349-8084/.minikube/machines/old-k8s-version-276459/id_rsa Username:docker}
	I0731 18:08:01.023818   74203 ssh_runner.go:195] Run: systemctl --version
	I0731 18:08:01.063390   74203 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0731 18:08:01.205084   74203 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0731 18:08:01.210972   74203 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0731 18:08:01.211049   74203 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0731 18:08:01.226156   74203 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0731 18:08:01.226180   74203 start.go:495] detecting cgroup driver to use...
	I0731 18:08:01.226257   74203 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0731 18:08:01.241506   74203 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0731 18:08:01.256615   74203 docker.go:217] disabling cri-docker service (if available) ...
	I0731 18:08:01.256671   74203 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0731 18:08:01.271515   74203 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0731 18:08:01.287213   74203 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0731 18:08:01.415827   74203 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0731 18:08:01.578122   74203 docker.go:233] disabling docker service ...
	I0731 18:08:01.578208   74203 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0731 18:08:01.596564   74203 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0731 18:08:01.611984   74203 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0731 18:08:01.748972   74203 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0731 18:08:01.896911   74203 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0731 18:08:01.912921   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0731 18:08:01.931671   74203 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0731 18:08:01.931749   74203 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 18:08:01.943737   74203 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0731 18:08:01.943798   74203 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 18:08:01.954571   74203 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 18:08:01.964733   74203 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 18:08:01.976087   74203 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0731 18:08:01.987193   74203 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0731 18:08:01.996620   74203 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0731 18:08:01.996670   74203 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0731 18:08:02.011046   74203 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0731 18:08:02.022199   74203 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0731 18:08:02.147855   74203 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0731 18:08:02.309868   74203 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0731 18:08:02.309940   74203 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0731 18:08:02.314966   74203 start.go:563] Will wait 60s for crictl version
	I0731 18:08:02.315031   74203 ssh_runner.go:195] Run: which crictl
	I0731 18:08:02.318685   74203 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0731 18:08:02.359361   74203 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0731 18:08:02.359460   74203 ssh_runner.go:195] Run: crio --version
	I0731 18:08:02.387053   74203 ssh_runner.go:195] Run: crio --version
	I0731 18:08:02.417054   74203 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0731 18:08:01.265323   73800 pod_ready.go:92] pod "kube-scheduler-embed-certs-436067" in "kube-system" namespace has status "Ready":"True"
	I0731 18:08:01.265363   73800 pod_ready.go:81] duration metric: took 6.007715949s for pod "kube-scheduler-embed-certs-436067" in "kube-system" namespace to be "Ready" ...
	I0731 18:08:01.265376   73800 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-569cc877fc-fzxrw" in "kube-system" namespace to be "Ready" ...
	I0731 18:08:03.271693   73800 pod_ready.go:102] pod "metrics-server-569cc877fc-fzxrw" in "kube-system" namespace has status "Ready":"False"
	I0731 18:08:02.418272   74203 main.go:141] libmachine: (old-k8s-version-276459) Calling .GetIP
	I0731 18:08:02.421211   74203 main.go:141] libmachine: (old-k8s-version-276459) DBG | domain old-k8s-version-276459 has defined MAC address 52:54:00:79:9d:96 in network mk-old-k8s-version-276459
	I0731 18:08:02.421714   74203 main.go:141] libmachine: (old-k8s-version-276459) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:79:9d:96", ip: ""} in network mk-old-k8s-version-276459: {Iface:virbr3 ExpiryTime:2024-07-31 19:07:52 +0000 UTC Type:0 Mac:52:54:00:79:9d:96 Iaid: IPaddr:192.168.39.26 Prefix:24 Hostname:old-k8s-version-276459 Clientid:01:52:54:00:79:9d:96}
	I0731 18:08:02.421743   74203 main.go:141] libmachine: (old-k8s-version-276459) DBG | domain old-k8s-version-276459 has defined IP address 192.168.39.26 and MAC address 52:54:00:79:9d:96 in network mk-old-k8s-version-276459
	I0731 18:08:02.421949   74203 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0731 18:08:02.425878   74203 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0731 18:08:02.438082   74203 kubeadm.go:883] updating cluster {Name:old-k8s-version-276459 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-276459 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.26 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0731 18:08:02.438222   74203 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0731 18:08:02.438293   74203 ssh_runner.go:195] Run: sudo crictl images --output json
	I0731 18:08:02.484113   74203 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0731 18:08:02.484189   74203 ssh_runner.go:195] Run: which lz4
	I0731 18:08:02.488365   74203 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0731 18:08:02.492321   74203 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0731 18:08:02.492352   74203 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19349-8084/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0731 18:08:03.946187   74203 crio.go:462] duration metric: took 1.457852426s to copy over tarball
	I0731 18:08:03.946261   74203 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
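	The two steps above are the preload fast path: because the runtime reported no images, the pre-baked image store is shipped to the guest as a single lz4 tarball and unpacked over /var instead of pulling each image from a registry. Done by hand it would look roughly like this (a sketch assuming the node's SSH key is loaded in the agent; the host-side path and guest IP are the ones in this log):

	    TARBALL=/home/jenkins/minikube-integration/19349-8084/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	    scp "$TARBALL" docker@192.168.39.26:/preloaded.tar.lz4
	    # extract over /var, preserving security.capability xattrs, then clean up the tarball
	    ssh docker@192.168.39.26 'sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4 && sudo rm /preloaded.tar.lz4'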
	I0731 18:08:01.781606   73696 pod_ready.go:102] pod "metrics-server-569cc877fc-64hp4" in "kube-system" namespace has status "Ready":"False"
	I0731 18:08:03.781786   73696 pod_ready.go:102] pod "metrics-server-569cc877fc-64hp4" in "kube-system" namespace has status "Ready":"False"
	I0731 18:08:02.287159   73479 main.go:141] libmachine: (no-preload-673754) Waiting to get IP...
	I0731 18:08:02.288338   73479 main.go:141] libmachine: (no-preload-673754) DBG | domain no-preload-673754 has defined MAC address 52:54:00:5a:ec:78 in network mk-no-preload-673754
	I0731 18:08:02.288812   73479 main.go:141] libmachine: (no-preload-673754) DBG | unable to find current IP address of domain no-preload-673754 in network mk-no-preload-673754
	I0731 18:08:02.288879   73479 main.go:141] libmachine: (no-preload-673754) DBG | I0731 18:08:02.288799   75356 retry.go:31] will retry after 229.074083ms: waiting for machine to come up
	I0731 18:08:02.519266   73479 main.go:141] libmachine: (no-preload-673754) DBG | domain no-preload-673754 has defined MAC address 52:54:00:5a:ec:78 in network mk-no-preload-673754
	I0731 18:08:02.519697   73479 main.go:141] libmachine: (no-preload-673754) DBG | unable to find current IP address of domain no-preload-673754 in network mk-no-preload-673754
	I0731 18:08:02.519720   73479 main.go:141] libmachine: (no-preload-673754) DBG | I0731 18:08:02.519663   75356 retry.go:31] will retry after 328.345922ms: waiting for machine to come up
	I0731 18:08:02.849290   73479 main.go:141] libmachine: (no-preload-673754) DBG | domain no-preload-673754 has defined MAC address 52:54:00:5a:ec:78 in network mk-no-preload-673754
	I0731 18:08:02.849839   73479 main.go:141] libmachine: (no-preload-673754) DBG | unable to find current IP address of domain no-preload-673754 in network mk-no-preload-673754
	I0731 18:08:02.849871   73479 main.go:141] libmachine: (no-preload-673754) DBG | I0731 18:08:02.849787   75356 retry.go:31] will retry after 339.030371ms: waiting for machine to come up
	I0731 18:08:03.190065   73479 main.go:141] libmachine: (no-preload-673754) DBG | domain no-preload-673754 has defined MAC address 52:54:00:5a:ec:78 in network mk-no-preload-673754
	I0731 18:08:03.190587   73479 main.go:141] libmachine: (no-preload-673754) DBG | unable to find current IP address of domain no-preload-673754 in network mk-no-preload-673754
	I0731 18:08:03.190620   73479 main.go:141] libmachine: (no-preload-673754) DBG | I0731 18:08:03.190539   75356 retry.go:31] will retry after 514.955663ms: waiting for machine to come up
	I0731 18:08:03.707808   73479 main.go:141] libmachine: (no-preload-673754) DBG | domain no-preload-673754 has defined MAC address 52:54:00:5a:ec:78 in network mk-no-preload-673754
	I0731 18:08:03.708382   73479 main.go:141] libmachine: (no-preload-673754) DBG | unable to find current IP address of domain no-preload-673754 in network mk-no-preload-673754
	I0731 18:08:03.708418   73479 main.go:141] libmachine: (no-preload-673754) DBG | I0731 18:08:03.708349   75356 retry.go:31] will retry after 543.558992ms: waiting for machine to come up
	I0731 18:08:04.253224   73479 main.go:141] libmachine: (no-preload-673754) DBG | domain no-preload-673754 has defined MAC address 52:54:00:5a:ec:78 in network mk-no-preload-673754
	I0731 18:08:04.253760   73479 main.go:141] libmachine: (no-preload-673754) DBG | unable to find current IP address of domain no-preload-673754 in network mk-no-preload-673754
	I0731 18:08:04.253781   73479 main.go:141] libmachine: (no-preload-673754) DBG | I0731 18:08:04.253708   75356 retry.go:31] will retry after 925.348689ms: waiting for machine to come up
	I0731 18:08:05.180439   73479 main.go:141] libmachine: (no-preload-673754) DBG | domain no-preload-673754 has defined MAC address 52:54:00:5a:ec:78 in network mk-no-preload-673754
	I0731 18:08:05.180833   73479 main.go:141] libmachine: (no-preload-673754) DBG | unable to find current IP address of domain no-preload-673754 in network mk-no-preload-673754
	I0731 18:08:05.180857   73479 main.go:141] libmachine: (no-preload-673754) DBG | I0731 18:08:05.180786   75356 retry.go:31] will retry after 1.014666798s: waiting for machine to come up
	I0731 18:08:06.196879   73479 main.go:141] libmachine: (no-preload-673754) DBG | domain no-preload-673754 has defined MAC address 52:54:00:5a:ec:78 in network mk-no-preload-673754
	I0731 18:08:06.197321   73479 main.go:141] libmachine: (no-preload-673754) DBG | unable to find current IP address of domain no-preload-673754 in network mk-no-preload-673754
	I0731 18:08:06.197355   73479 main.go:141] libmachine: (no-preload-673754) DBG | I0731 18:08:06.197258   75356 retry.go:31] will retry after 1.163649074s: waiting for machine to come up
	I0731 18:08:05.278001   73800 pod_ready.go:102] pod "metrics-server-569cc877fc-fzxrw" in "kube-system" namespace has status "Ready":"False"
	I0731 18:08:07.771870   73800 pod_ready.go:102] pod "metrics-server-569cc877fc-fzxrw" in "kube-system" namespace has status "Ready":"False"
	I0731 18:08:06.945760   74203 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.99946679s)
	I0731 18:08:06.945790   74203 crio.go:469] duration metric: took 2.999576832s to extract the tarball
	I0731 18:08:06.945800   74203 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0731 18:08:06.989081   74203 ssh_runner.go:195] Run: sudo crictl images --output json
	I0731 18:08:07.024521   74203 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0731 18:08:07.024545   74203 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0731 18:08:07.024615   74203 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0731 18:08:07.024645   74203 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0731 18:08:07.024695   74203 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0731 18:08:07.024729   74203 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0731 18:08:07.024718   74203 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0731 18:08:07.024780   74203 image.go:134] retrieving image: registry.k8s.io/pause:3.2
	I0731 18:08:07.024822   74203 image.go:134] retrieving image: registry.k8s.io/coredns:1.7.0
	I0731 18:08:07.024716   74203 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0731 18:08:07.026228   74203 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0731 18:08:07.026237   74203 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0731 18:08:07.026242   74203 image.go:177] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0731 18:08:07.026263   74203 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0731 18:08:07.026231   74203 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0731 18:08:07.026231   74203 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0731 18:08:07.026863   74203 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0731 18:08:07.027091   74203 image.go:177] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0731 18:08:07.282735   74203 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0731 18:08:07.284464   74203 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0731 18:08:07.287001   74203 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0731 18:08:07.305873   74203 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0731 18:08:07.307144   74203 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0731 18:08:07.311401   74203 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0731 18:08:07.318119   74203 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0731 18:08:07.366929   74203 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0731 18:08:07.366979   74203 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0731 18:08:07.367041   74203 ssh_runner.go:195] Run: which crictl
	I0731 18:08:07.393481   74203 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0731 18:08:07.393534   74203 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0731 18:08:07.393594   74203 ssh_runner.go:195] Run: which crictl
	I0731 18:08:07.441987   74203 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0731 18:08:07.442036   74203 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0731 18:08:07.442083   74203 ssh_runner.go:195] Run: which crictl
	I0731 18:08:07.449033   74203 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0731 18:08:07.449085   74203 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0731 18:08:07.449137   74203 ssh_runner.go:195] Run: which crictl
	I0731 18:08:07.465248   74203 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0731 18:08:07.465291   74203 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0731 18:08:07.465341   74203 ssh_runner.go:195] Run: which crictl
	I0731 18:08:07.476013   74203 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0731 18:08:07.476053   74203 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0731 18:08:07.476074   74203 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0731 18:08:07.476090   74203 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0731 18:08:07.476111   74203 ssh_runner.go:195] Run: which crictl
	I0731 18:08:07.476129   74203 ssh_runner.go:195] Run: which crictl
	I0731 18:08:07.476146   74203 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0731 18:08:07.476111   74203 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0731 18:08:07.476196   74203 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0731 18:08:07.476220   74203 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0731 18:08:07.476273   74203 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0731 18:08:07.592532   74203 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0731 18:08:07.592677   74203 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19349-8084/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0731 18:08:07.592709   74203 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19349-8084/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0731 18:08:07.592797   74203 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0731 18:08:07.637254   74203 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19349-8084/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0731 18:08:07.637276   74203 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19349-8084/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0731 18:08:07.637288   74203 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19349-8084/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0731 18:08:07.637292   74203 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19349-8084/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0731 18:08:07.640419   74203 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19349-8084/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0731 18:08:07.860814   74203 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0731 18:08:08.002115   74203 cache_images.go:92] duration metric: took 977.553376ms to LoadCachedImages
	W0731 18:08:08.002248   74203 out.go:239] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19349-8084/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2: no such file or directory
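	The block ending with the W-line above is the LoadCachedImages fallback: for each required image, ask the runtime whether it is already present, remove any stale tag, then load a per-image tarball from the host-side cache; here it fails because the per-image cache was never populated on this runner. A rough per-image sketch of that check-and-load loop, using generic podman/crictl calls as stand-ins for minikube's own transfer code (paths as printed above):

	    img=registry.k8s.io/pause:3.2
	    cache=/home/jenkins/minikube-integration/19349-8084/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2

	    if ! sudo podman image inspect --format '{{.Id}}' "$img" >/dev/null 2>&1; then
	        sudo crictl rmi "$img" 2>/dev/null || true        # drop any stale or partial tag
	        [ -f "$cache" ] && sudo podman load -i "$cache"   # load the cached image tarball, if it exists
	    fi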
	I0731 18:08:08.002267   74203 kubeadm.go:934] updating node { 192.168.39.26 8443 v1.20.0 crio true true} ...
	I0731 18:08:08.002404   74203 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-276459 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.39.26
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-276459 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
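	The kubelet unit text above is a systemd drop-in: the empty ExecStart= clears the stock kubelet.service command and the second ExecStart= substitutes minikube's versioned invocation. It is written to the 10-kubeadm.conf path that is scp'd a few lines below and picked up by the daemon-reload/start pair that follows; installed by hand the sequence is (content as shown above, file saved locally as 10-kubeadm.conf):

	    sudo mkdir -p /etc/systemd/system/kubelet.service.d
	    sudo cp 10-kubeadm.conf /etc/systemd/system/kubelet.service.d/10-kubeadm.conf   # the [Unit]/[Service]/[Install] text above
	    sudo systemctl daemon-reload
	    sudo systemctl start kubelet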
	I0731 18:08:08.002500   74203 ssh_runner.go:195] Run: crio config
	I0731 18:08:08.059237   74203 cni.go:84] Creating CNI manager for ""
	I0731 18:08:08.059264   74203 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0731 18:08:08.059281   74203 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0731 18:08:08.059313   74203 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.26 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-276459 NodeName:old-k8s-version-276459 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.26"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.26 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0731 18:08:08.059503   74203 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.26
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-276459"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.26
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.26"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
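	The four YAML documents above (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) are written to a single file and then consumed phase by phase during the restart; the equivalent manual sequence, using the paths and versioned kubeadm binary from this log, is:

	    sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	    KPATH="/var/lib/minikube/binaries/v1.20.0:$PATH"
	    sudo env PATH="$KPATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml
	    sudo env PATH="$KPATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml
	    sudo env PATH="$KPATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml
	    sudo env PATH="$KPATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml
	    sudo env PATH="$KPATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml

	These are exactly the kubeadm invocations that appear at 18:08:09–18:08:10 below.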
	I0731 18:08:08.059575   74203 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0731 18:08:08.070299   74203 binaries.go:44] Found k8s binaries, skipping transfer
	I0731 18:08:08.070388   74203 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0731 18:08:08.082083   74203 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (429 bytes)
	I0731 18:08:08.101728   74203 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0731 18:08:08.120721   74203 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2120 bytes)
	I0731 18:08:08.137997   74203 ssh_runner.go:195] Run: grep 192.168.39.26	control-plane.minikube.internal$ /etc/hosts
	I0731 18:08:08.141797   74203 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.26	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0731 18:08:08.156861   74203 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0731 18:08:08.287700   74203 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0731 18:08:08.307598   74203 certs.go:68] Setting up /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/old-k8s-version-276459 for IP: 192.168.39.26
	I0731 18:08:08.307623   74203 certs.go:194] generating shared ca certs ...
	I0731 18:08:08.307644   74203 certs.go:226] acquiring lock for ca certs: {Name:mk49eed439085f9c30746706460ce213570d1997 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 18:08:08.307811   74203 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19349-8084/.minikube/ca.key
	I0731 18:08:08.307855   74203 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19349-8084/.minikube/proxy-client-ca.key
	I0731 18:08:08.307868   74203 certs.go:256] generating profile certs ...
	I0731 18:08:08.307987   74203 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/old-k8s-version-276459/client.key
	I0731 18:08:08.308062   74203 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/old-k8s-version-276459/apiserver.key.7c620cac
	I0731 18:08:08.308123   74203 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/old-k8s-version-276459/proxy-client.key
	I0731 18:08:08.308283   74203 certs.go:484] found cert: /home/jenkins/minikube-integration/19349-8084/.minikube/certs/15259.pem (1338 bytes)
	W0731 18:08:08.308315   74203 certs.go:480] ignoring /home/jenkins/minikube-integration/19349-8084/.minikube/certs/15259_empty.pem, impossibly tiny 0 bytes
	I0731 18:08:08.308324   74203 certs.go:484] found cert: /home/jenkins/minikube-integration/19349-8084/.minikube/certs/ca-key.pem (1675 bytes)
	I0731 18:08:08.308362   74203 certs.go:484] found cert: /home/jenkins/minikube-integration/19349-8084/.minikube/certs/ca.pem (1082 bytes)
	I0731 18:08:08.308382   74203 certs.go:484] found cert: /home/jenkins/minikube-integration/19349-8084/.minikube/certs/cert.pem (1123 bytes)
	I0731 18:08:08.308402   74203 certs.go:484] found cert: /home/jenkins/minikube-integration/19349-8084/.minikube/certs/key.pem (1675 bytes)
	I0731 18:08:08.308438   74203 certs.go:484] found cert: /home/jenkins/minikube-integration/19349-8084/.minikube/files/etc/ssl/certs/152592.pem (1708 bytes)
	I0731 18:08:08.309095   74203 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19349-8084/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0731 18:08:08.355508   74203 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19349-8084/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0731 18:08:08.391999   74203 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19349-8084/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0731 18:08:08.427937   74203 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19349-8084/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0731 18:08:08.456268   74203 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/old-k8s-version-276459/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0731 18:08:08.486991   74203 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/old-k8s-version-276459/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0731 18:08:08.519564   74203 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/old-k8s-version-276459/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0731 18:08:08.557029   74203 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/old-k8s-version-276459/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0731 18:08:08.583971   74203 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19349-8084/.minikube/files/etc/ssl/certs/152592.pem --> /usr/share/ca-certificates/152592.pem (1708 bytes)
	I0731 18:08:08.608505   74203 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19349-8084/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0731 18:08:08.630279   74203 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19349-8084/.minikube/certs/15259.pem --> /usr/share/ca-certificates/15259.pem (1338 bytes)
	I0731 18:08:08.655012   74203 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0731 18:08:08.671907   74203 ssh_runner.go:195] Run: openssl version
	I0731 18:08:08.677538   74203 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/152592.pem && ln -fs /usr/share/ca-certificates/152592.pem /etc/ssl/certs/152592.pem"
	I0731 18:08:08.687877   74203 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/152592.pem
	I0731 18:08:08.692201   74203 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 31 16:54 /usr/share/ca-certificates/152592.pem
	I0731 18:08:08.692258   74203 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/152592.pem
	I0731 18:08:08.698563   74203 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/152592.pem /etc/ssl/certs/3ec20f2e.0"
	I0731 18:08:08.708986   74203 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0731 18:08:08.719132   74203 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0731 18:08:08.723242   74203 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 31 16:42 /usr/share/ca-certificates/minikubeCA.pem
	I0731 18:08:08.723299   74203 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0731 18:08:08.729032   74203 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0731 18:08:08.739306   74203 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/15259.pem && ln -fs /usr/share/ca-certificates/15259.pem /etc/ssl/certs/15259.pem"
	I0731 18:08:08.749759   74203 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/15259.pem
	I0731 18:08:08.754167   74203 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 31 16:54 /usr/share/ca-certificates/15259.pem
	I0731 18:08:08.754228   74203 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/15259.pem
	I0731 18:08:08.759786   74203 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/15259.pem /etc/ssl/certs/51391683.0"
	I0731 18:08:08.770180   74203 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0731 18:08:08.775414   74203 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0731 18:08:08.781830   74203 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0731 18:08:08.787876   74203 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0731 18:08:08.793927   74203 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0731 18:08:08.800090   74203 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0731 18:08:08.806169   74203 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
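	The openssl runs at 18:08:08.67–08.81 above do two independent things: link each CA into the system trust store under its subject-hash name (the /etc/ssl/certs/<hash>.0 symlinks), and confirm that every control-plane certificate stays valid for at least another day. Both are standard OpenSSL idioms; a minimal sketch with file names from the log:

	    # trust-store install: link the PEM under its subject hash, as c_rehash would
	    pem=/usr/share/ca-certificates/minikubeCA.pem
	    hash=$(openssl x509 -hash -noout -in "$pem")
	    sudo ln -fs "$pem" "/etc/ssl/certs/${hash}.0"

	    # expiry check: exits non-zero if the cert expires within 86400 s (one day)
	    openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400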
	I0731 18:08:08.811895   74203 kubeadm.go:392] StartCluster: {Name:old-k8s-version-276459 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-276459 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.26 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0731 18:08:08.811983   74203 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0731 18:08:08.812040   74203 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0731 18:08:08.853889   74203 cri.go:89] found id: ""
	I0731 18:08:08.853989   74203 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0731 18:08:08.863817   74203 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0731 18:08:08.863837   74203 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0731 18:08:08.863887   74203 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0731 18:08:08.873411   74203 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0731 18:08:08.874616   74203 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-276459" does not appear in /home/jenkins/minikube-integration/19349-8084/kubeconfig
	I0731 18:08:08.875356   74203 kubeconfig.go:62] /home/jenkins/minikube-integration/19349-8084/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-276459" cluster setting kubeconfig missing "old-k8s-version-276459" context setting]
	I0731 18:08:08.876650   74203 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19349-8084/kubeconfig: {Name:mkf2340bc1ada3c683f132aca37228d7588e1fdb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 18:08:08.918433   74203 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0731 18:08:08.931013   74203 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.39.26
	I0731 18:08:08.931067   74203 kubeadm.go:1160] stopping kube-system containers ...
	I0731 18:08:08.931083   74203 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0731 18:08:08.931163   74203 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0731 18:08:08.964683   74203 cri.go:89] found id: ""
	I0731 18:08:08.964759   74203 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0731 18:08:08.980459   74203 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0731 18:08:08.989969   74203 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0731 18:08:08.989997   74203 kubeadm.go:157] found existing configuration files:
	
	I0731 18:08:08.990049   74203 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0731 18:08:08.999015   74203 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0731 18:08:08.999074   74203 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0731 18:08:09.008055   74203 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0731 18:08:09.016532   74203 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0731 18:08:09.016599   74203 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0731 18:08:09.025791   74203 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0731 18:08:09.034160   74203 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0731 18:08:09.034227   74203 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0731 18:08:09.043381   74203 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0731 18:08:09.053419   74203 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0731 18:08:09.053832   74203 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0731 18:08:09.064966   74203 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0731 18:08:09.073962   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0731 18:08:09.198503   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0731 18:08:10.048258   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0731 18:08:10.283812   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0731 18:08:06.285091   73696 pod_ready.go:102] pod "metrics-server-569cc877fc-64hp4" in "kube-system" namespace has status "Ready":"False"
	I0731 18:08:08.779998   73696 pod_ready.go:102] pod "metrics-server-569cc877fc-64hp4" in "kube-system" namespace has status "Ready":"False"
	I0731 18:08:10.780198   73696 pod_ready.go:102] pod "metrics-server-569cc877fc-64hp4" in "kube-system" namespace has status "Ready":"False"
	I0731 18:08:07.362756   73479 main.go:141] libmachine: (no-preload-673754) DBG | domain no-preload-673754 has defined MAC address 52:54:00:5a:ec:78 in network mk-no-preload-673754
	I0731 18:08:07.363299   73479 main.go:141] libmachine: (no-preload-673754) DBG | unable to find current IP address of domain no-preload-673754 in network mk-no-preload-673754
	I0731 18:08:07.363328   73479 main.go:141] libmachine: (no-preload-673754) DBG | I0731 18:08:07.363231   75356 retry.go:31] will retry after 1.508296616s: waiting for machine to come up
	I0731 18:08:08.873528   73479 main.go:141] libmachine: (no-preload-673754) DBG | domain no-preload-673754 has defined MAC address 52:54:00:5a:ec:78 in network mk-no-preload-673754
	I0731 18:08:08.874013   73479 main.go:141] libmachine: (no-preload-673754) DBG | unable to find current IP address of domain no-preload-673754 in network mk-no-preload-673754
	I0731 18:08:08.874051   73479 main.go:141] libmachine: (no-preload-673754) DBG | I0731 18:08:08.873971   75356 retry.go:31] will retry after 2.281343566s: waiting for machine to come up
	I0731 18:08:11.157083   73479 main.go:141] libmachine: (no-preload-673754) DBG | domain no-preload-673754 has defined MAC address 52:54:00:5a:ec:78 in network mk-no-preload-673754
	I0731 18:08:11.157578   73479 main.go:141] libmachine: (no-preload-673754) DBG | unable to find current IP address of domain no-preload-673754 in network mk-no-preload-673754
	I0731 18:08:11.157609   73479 main.go:141] libmachine: (no-preload-673754) DBG | I0731 18:08:11.157537   75356 retry.go:31] will retry after 2.49049752s: waiting for machine to come up
	I0731 18:08:09.802010   73800 pod_ready.go:102] pod "metrics-server-569cc877fc-fzxrw" in "kube-system" namespace has status "Ready":"False"
	I0731 18:08:12.271900   73800 pod_ready.go:102] pod "metrics-server-569cc877fc-fzxrw" in "kube-system" namespace has status "Ready":"False"
	I0731 18:08:10.390012   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0731 18:08:10.477969   74203 api_server.go:52] waiting for apiserver process to appear ...
	I0731 18:08:10.478093   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:08:10.978427   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:08:11.478715   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:08:11.978685   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:08:12.478211   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:08:12.978218   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:08:13.478493   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:08:13.978778   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:08:14.478489   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:08:14.978983   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
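	The half-second cadence of the pgrep lines above is minikube polling for the API server after kubeadm init phase etcd local: nothing proceeds until a kube-apiserver process whose command line mentions minikube shows up. Expressed as a plain bounded retry (the interval and cap here are illustrative, not minikube's internal values):

	    for _ in $(seq 1 120); do
	        sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null && { echo "kube-apiserver is running"; break; }
	        sleep 0.5
	    done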
	I0731 18:08:13.278943   73696 pod_ready.go:102] pod "metrics-server-569cc877fc-64hp4" in "kube-system" namespace has status "Ready":"False"
	I0731 18:08:15.778760   73696 pod_ready.go:102] pod "metrics-server-569cc877fc-64hp4" in "kube-system" namespace has status "Ready":"False"
	I0731 18:08:13.650131   73479 main.go:141] libmachine: (no-preload-673754) DBG | domain no-preload-673754 has defined MAC address 52:54:00:5a:ec:78 in network mk-no-preload-673754
	I0731 18:08:13.650459   73479 main.go:141] libmachine: (no-preload-673754) DBG | unable to find current IP address of domain no-preload-673754 in network mk-no-preload-673754
	I0731 18:08:13.650480   73479 main.go:141] libmachine: (no-preload-673754) DBG | I0731 18:08:13.650428   75356 retry.go:31] will retry after 3.437877467s: waiting for machine to come up
	I0731 18:08:14.771879   73800 pod_ready.go:102] pod "metrics-server-569cc877fc-fzxrw" in "kube-system" namespace has status "Ready":"False"
	I0731 18:08:17.272673   73800 pod_ready.go:102] pod "metrics-server-569cc877fc-fzxrw" in "kube-system" namespace has status "Ready":"False"
	I0731 18:08:15.478444   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:08:15.978399   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:08:16.478641   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:08:16.979036   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:08:17.479053   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:08:17.978819   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:08:18.478280   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:08:18.978448   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:08:19.479056   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:08:19.978969   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:08:18.279604   73696 pod_ready.go:102] pod "metrics-server-569cc877fc-64hp4" in "kube-system" namespace has status "Ready":"False"
	I0731 18:08:20.778532   73696 pod_ready.go:102] pod "metrics-server-569cc877fc-64hp4" in "kube-system" namespace has status "Ready":"False"
	I0731 18:08:17.089986   73479 main.go:141] libmachine: (no-preload-673754) DBG | domain no-preload-673754 has defined MAC address 52:54:00:5a:ec:78 in network mk-no-preload-673754
	I0731 18:08:17.090556   73479 main.go:141] libmachine: (no-preload-673754) DBG | unable to find current IP address of domain no-preload-673754 in network mk-no-preload-673754
	I0731 18:08:17.090590   73479 main.go:141] libmachine: (no-preload-673754) DBG | I0731 18:08:17.090509   75356 retry.go:31] will retry after 2.95036051s: waiting for machine to come up
	I0731 18:08:20.044455   73479 main.go:141] libmachine: (no-preload-673754) DBG | domain no-preload-673754 has defined MAC address 52:54:00:5a:ec:78 in network mk-no-preload-673754
	I0731 18:08:20.044914   73479 main.go:141] libmachine: (no-preload-673754) Found IP for machine: 192.168.61.126
	I0731 18:08:20.044935   73479 main.go:141] libmachine: (no-preload-673754) Reserving static IP address...
	I0731 18:08:20.044948   73479 main.go:141] libmachine: (no-preload-673754) DBG | domain no-preload-673754 has current primary IP address 192.168.61.126 and MAC address 52:54:00:5a:ec:78 in network mk-no-preload-673754
	I0731 18:08:20.045286   73479 main.go:141] libmachine: (no-preload-673754) Reserved static IP address: 192.168.61.126
	I0731 18:08:20.045308   73479 main.go:141] libmachine: (no-preload-673754) Waiting for SSH to be available...
	I0731 18:08:20.045331   73479 main.go:141] libmachine: (no-preload-673754) DBG | found host DHCP lease matching {name: "no-preload-673754", mac: "52:54:00:5a:ec:78", ip: "192.168.61.126"} in network mk-no-preload-673754: {Iface:virbr2 ExpiryTime:2024-07-31 19:08:12 +0000 UTC Type:0 Mac:52:54:00:5a:ec:78 Iaid: IPaddr:192.168.61.126 Prefix:24 Hostname:no-preload-673754 Clientid:01:52:54:00:5a:ec:78}
	I0731 18:08:20.045352   73479 main.go:141] libmachine: (no-preload-673754) DBG | skip adding static IP to network mk-no-preload-673754 - found existing host DHCP lease matching {name: "no-preload-673754", mac: "52:54:00:5a:ec:78", ip: "192.168.61.126"}
	I0731 18:08:20.045367   73479 main.go:141] libmachine: (no-preload-673754) DBG | Getting to WaitForSSH function...
	I0731 18:08:20.047574   73479 main.go:141] libmachine: (no-preload-673754) DBG | domain no-preload-673754 has defined MAC address 52:54:00:5a:ec:78 in network mk-no-preload-673754
	I0731 18:08:20.047913   73479 main.go:141] libmachine: (no-preload-673754) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:ec:78", ip: ""} in network mk-no-preload-673754: {Iface:virbr2 ExpiryTime:2024-07-31 19:08:12 +0000 UTC Type:0 Mac:52:54:00:5a:ec:78 Iaid: IPaddr:192.168.61.126 Prefix:24 Hostname:no-preload-673754 Clientid:01:52:54:00:5a:ec:78}
	I0731 18:08:20.047939   73479 main.go:141] libmachine: (no-preload-673754) DBG | domain no-preload-673754 has defined IP address 192.168.61.126 and MAC address 52:54:00:5a:ec:78 in network mk-no-preload-673754
	I0731 18:08:20.048069   73479 main.go:141] libmachine: (no-preload-673754) DBG | Using SSH client type: external
	I0731 18:08:20.048106   73479 main.go:141] libmachine: (no-preload-673754) DBG | Using SSH private key: /home/jenkins/minikube-integration/19349-8084/.minikube/machines/no-preload-673754/id_rsa (-rw-------)
	I0731 18:08:20.048150   73479 main.go:141] libmachine: (no-preload-673754) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.126 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19349-8084/.minikube/machines/no-preload-673754/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0731 18:08:20.048168   73479 main.go:141] libmachine: (no-preload-673754) DBG | About to run SSH command:
	I0731 18:08:20.048181   73479 main.go:141] libmachine: (no-preload-673754) DBG | exit 0
	I0731 18:08:20.175606   73479 main.go:141] libmachine: (no-preload-673754) DBG | SSH cmd err, output: <nil>: 
	I0731 18:08:20.175917   73479 main.go:141] libmachine: (no-preload-673754) Calling .GetConfigRaw
	I0731 18:08:20.176508   73479 main.go:141] libmachine: (no-preload-673754) Calling .GetIP
	I0731 18:08:20.179035   73479 main.go:141] libmachine: (no-preload-673754) DBG | domain no-preload-673754 has defined MAC address 52:54:00:5a:ec:78 in network mk-no-preload-673754
	I0731 18:08:20.179374   73479 main.go:141] libmachine: (no-preload-673754) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:ec:78", ip: ""} in network mk-no-preload-673754: {Iface:virbr2 ExpiryTime:2024-07-31 19:08:12 +0000 UTC Type:0 Mac:52:54:00:5a:ec:78 Iaid: IPaddr:192.168.61.126 Prefix:24 Hostname:no-preload-673754 Clientid:01:52:54:00:5a:ec:78}
	I0731 18:08:20.179404   73479 main.go:141] libmachine: (no-preload-673754) DBG | domain no-preload-673754 has defined IP address 192.168.61.126 and MAC address 52:54:00:5a:ec:78 in network mk-no-preload-673754
	I0731 18:08:20.179686   73479 profile.go:143] Saving config to /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/no-preload-673754/config.json ...
	I0731 18:08:20.179869   73479 machine.go:94] provisionDockerMachine start ...
	I0731 18:08:20.179885   73479 main.go:141] libmachine: (no-preload-673754) Calling .DriverName
	I0731 18:08:20.180088   73479 main.go:141] libmachine: (no-preload-673754) Calling .GetSSHHostname
	I0731 18:08:20.182345   73479 main.go:141] libmachine: (no-preload-673754) DBG | domain no-preload-673754 has defined MAC address 52:54:00:5a:ec:78 in network mk-no-preload-673754
	I0731 18:08:20.182702   73479 main.go:141] libmachine: (no-preload-673754) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:ec:78", ip: ""} in network mk-no-preload-673754: {Iface:virbr2 ExpiryTime:2024-07-31 19:08:12 +0000 UTC Type:0 Mac:52:54:00:5a:ec:78 Iaid: IPaddr:192.168.61.126 Prefix:24 Hostname:no-preload-673754 Clientid:01:52:54:00:5a:ec:78}
	I0731 18:08:20.182727   73479 main.go:141] libmachine: (no-preload-673754) DBG | domain no-preload-673754 has defined IP address 192.168.61.126 and MAC address 52:54:00:5a:ec:78 in network mk-no-preload-673754
	I0731 18:08:20.182848   73479 main.go:141] libmachine: (no-preload-673754) Calling .GetSSHPort
	I0731 18:08:20.183060   73479 main.go:141] libmachine: (no-preload-673754) Calling .GetSSHKeyPath
	I0731 18:08:20.183227   73479 main.go:141] libmachine: (no-preload-673754) Calling .GetSSHKeyPath
	I0731 18:08:20.183414   73479 main.go:141] libmachine: (no-preload-673754) Calling .GetSSHUsername
	I0731 18:08:20.183572   73479 main.go:141] libmachine: Using SSH client type: native
	I0731 18:08:20.183747   73479 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.126 22 <nil> <nil>}
	I0731 18:08:20.183757   73479 main.go:141] libmachine: About to run SSH command:
	hostname
	I0731 18:08:20.295090   73479 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0731 18:08:20.295149   73479 main.go:141] libmachine: (no-preload-673754) Calling .GetMachineName
	I0731 18:08:20.295424   73479 buildroot.go:166] provisioning hostname "no-preload-673754"
	I0731 18:08:20.295454   73479 main.go:141] libmachine: (no-preload-673754) Calling .GetMachineName
	I0731 18:08:20.295631   73479 main.go:141] libmachine: (no-preload-673754) Calling .GetSSHHostname
	I0731 18:08:20.298467   73479 main.go:141] libmachine: (no-preload-673754) DBG | domain no-preload-673754 has defined MAC address 52:54:00:5a:ec:78 in network mk-no-preload-673754
	I0731 18:08:20.298771   73479 main.go:141] libmachine: (no-preload-673754) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:ec:78", ip: ""} in network mk-no-preload-673754: {Iface:virbr2 ExpiryTime:2024-07-31 19:08:12 +0000 UTC Type:0 Mac:52:54:00:5a:ec:78 Iaid: IPaddr:192.168.61.126 Prefix:24 Hostname:no-preload-673754 Clientid:01:52:54:00:5a:ec:78}
	I0731 18:08:20.298815   73479 main.go:141] libmachine: (no-preload-673754) DBG | domain no-preload-673754 has defined IP address 192.168.61.126 and MAC address 52:54:00:5a:ec:78 in network mk-no-preload-673754
	I0731 18:08:20.298897   73479 main.go:141] libmachine: (no-preload-673754) Calling .GetSSHPort
	I0731 18:08:20.299094   73479 main.go:141] libmachine: (no-preload-673754) Calling .GetSSHKeyPath
	I0731 18:08:20.299276   73479 main.go:141] libmachine: (no-preload-673754) Calling .GetSSHKeyPath
	I0731 18:08:20.299462   73479 main.go:141] libmachine: (no-preload-673754) Calling .GetSSHUsername
	I0731 18:08:20.299652   73479 main.go:141] libmachine: Using SSH client type: native
	I0731 18:08:20.299806   73479 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.126 22 <nil> <nil>}
	I0731 18:08:20.299817   73479 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-673754 && echo "no-preload-673754" | sudo tee /etc/hostname
	I0731 18:08:20.424901   73479 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-673754
	
	I0731 18:08:20.424951   73479 main.go:141] libmachine: (no-preload-673754) Calling .GetSSHHostname
	I0731 18:08:20.427679   73479 main.go:141] libmachine: (no-preload-673754) DBG | domain no-preload-673754 has defined MAC address 52:54:00:5a:ec:78 in network mk-no-preload-673754
	I0731 18:08:20.428049   73479 main.go:141] libmachine: (no-preload-673754) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:ec:78", ip: ""} in network mk-no-preload-673754: {Iface:virbr2 ExpiryTime:2024-07-31 19:08:12 +0000 UTC Type:0 Mac:52:54:00:5a:ec:78 Iaid: IPaddr:192.168.61.126 Prefix:24 Hostname:no-preload-673754 Clientid:01:52:54:00:5a:ec:78}
	I0731 18:08:20.428083   73479 main.go:141] libmachine: (no-preload-673754) DBG | domain no-preload-673754 has defined IP address 192.168.61.126 and MAC address 52:54:00:5a:ec:78 in network mk-no-preload-673754
	I0731 18:08:20.428230   73479 main.go:141] libmachine: (no-preload-673754) Calling .GetSSHPort
	I0731 18:08:20.428419   73479 main.go:141] libmachine: (no-preload-673754) Calling .GetSSHKeyPath
	I0731 18:08:20.428601   73479 main.go:141] libmachine: (no-preload-673754) Calling .GetSSHKeyPath
	I0731 18:08:20.428767   73479 main.go:141] libmachine: (no-preload-673754) Calling .GetSSHUsername
	I0731 18:08:20.428965   73479 main.go:141] libmachine: Using SSH client type: native
	I0731 18:08:20.429127   73479 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.126 22 <nil> <nil>}
	I0731 18:08:20.429142   73479 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-673754' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-673754/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-673754' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0731 18:08:20.546853   73479 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0731 18:08:20.546884   73479 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19349-8084/.minikube CaCertPath:/home/jenkins/minikube-integration/19349-8084/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19349-8084/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19349-8084/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19349-8084/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19349-8084/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19349-8084/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19349-8084/.minikube}
	I0731 18:08:20.546938   73479 buildroot.go:174] setting up certificates
	I0731 18:08:20.546955   73479 provision.go:84] configureAuth start
	I0731 18:08:20.546971   73479 main.go:141] libmachine: (no-preload-673754) Calling .GetMachineName
	I0731 18:08:20.547275   73479 main.go:141] libmachine: (no-preload-673754) Calling .GetIP
	I0731 18:08:20.550019   73479 main.go:141] libmachine: (no-preload-673754) DBG | domain no-preload-673754 has defined MAC address 52:54:00:5a:ec:78 in network mk-no-preload-673754
	I0731 18:08:20.550372   73479 main.go:141] libmachine: (no-preload-673754) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:ec:78", ip: ""} in network mk-no-preload-673754: {Iface:virbr2 ExpiryTime:2024-07-31 19:08:12 +0000 UTC Type:0 Mac:52:54:00:5a:ec:78 Iaid: IPaddr:192.168.61.126 Prefix:24 Hostname:no-preload-673754 Clientid:01:52:54:00:5a:ec:78}
	I0731 18:08:20.550400   73479 main.go:141] libmachine: (no-preload-673754) DBG | domain no-preload-673754 has defined IP address 192.168.61.126 and MAC address 52:54:00:5a:ec:78 in network mk-no-preload-673754
	I0731 18:08:20.550525   73479 main.go:141] libmachine: (no-preload-673754) Calling .GetSSHHostname
	I0731 18:08:20.552914   73479 main.go:141] libmachine: (no-preload-673754) DBG | domain no-preload-673754 has defined MAC address 52:54:00:5a:ec:78 in network mk-no-preload-673754
	I0731 18:08:20.553261   73479 main.go:141] libmachine: (no-preload-673754) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:ec:78", ip: ""} in network mk-no-preload-673754: {Iface:virbr2 ExpiryTime:2024-07-31 19:08:12 +0000 UTC Type:0 Mac:52:54:00:5a:ec:78 Iaid: IPaddr:192.168.61.126 Prefix:24 Hostname:no-preload-673754 Clientid:01:52:54:00:5a:ec:78}
	I0731 18:08:20.553290   73479 main.go:141] libmachine: (no-preload-673754) DBG | domain no-preload-673754 has defined IP address 192.168.61.126 and MAC address 52:54:00:5a:ec:78 in network mk-no-preload-673754
	I0731 18:08:20.553416   73479 provision.go:143] copyHostCerts
	I0731 18:08:20.553479   73479 exec_runner.go:144] found /home/jenkins/minikube-integration/19349-8084/.minikube/ca.pem, removing ...
	I0731 18:08:20.553490   73479 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19349-8084/.minikube/ca.pem
	I0731 18:08:20.553547   73479 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19349-8084/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19349-8084/.minikube/ca.pem (1082 bytes)
	I0731 18:08:20.553675   73479 exec_runner.go:144] found /home/jenkins/minikube-integration/19349-8084/.minikube/cert.pem, removing ...
	I0731 18:08:20.553687   73479 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19349-8084/.minikube/cert.pem
	I0731 18:08:20.553718   73479 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19349-8084/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19349-8084/.minikube/cert.pem (1123 bytes)
	I0731 18:08:20.553796   73479 exec_runner.go:144] found /home/jenkins/minikube-integration/19349-8084/.minikube/key.pem, removing ...
	I0731 18:08:20.553806   73479 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19349-8084/.minikube/key.pem
	I0731 18:08:20.553826   73479 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19349-8084/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19349-8084/.minikube/key.pem (1675 bytes)
	I0731 18:08:20.553883   73479 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19349-8084/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19349-8084/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19349-8084/.minikube/certs/ca-key.pem org=jenkins.no-preload-673754 san=[127.0.0.1 192.168.61.126 localhost minikube no-preload-673754]
	I0731 18:08:20.878891   73479 provision.go:177] copyRemoteCerts
	I0731 18:08:20.878963   73479 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0731 18:08:20.878990   73479 main.go:141] libmachine: (no-preload-673754) Calling .GetSSHHostname
	I0731 18:08:20.881529   73479 main.go:141] libmachine: (no-preload-673754) DBG | domain no-preload-673754 has defined MAC address 52:54:00:5a:ec:78 in network mk-no-preload-673754
	I0731 18:08:20.881868   73479 main.go:141] libmachine: (no-preload-673754) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:ec:78", ip: ""} in network mk-no-preload-673754: {Iface:virbr2 ExpiryTime:2024-07-31 19:08:12 +0000 UTC Type:0 Mac:52:54:00:5a:ec:78 Iaid: IPaddr:192.168.61.126 Prefix:24 Hostname:no-preload-673754 Clientid:01:52:54:00:5a:ec:78}
	I0731 18:08:20.881900   73479 main.go:141] libmachine: (no-preload-673754) DBG | domain no-preload-673754 has defined IP address 192.168.61.126 and MAC address 52:54:00:5a:ec:78 in network mk-no-preload-673754
	I0731 18:08:20.882053   73479 main.go:141] libmachine: (no-preload-673754) Calling .GetSSHPort
	I0731 18:08:20.882245   73479 main.go:141] libmachine: (no-preload-673754) Calling .GetSSHKeyPath
	I0731 18:08:20.882450   73479 main.go:141] libmachine: (no-preload-673754) Calling .GetSSHUsername
	I0731 18:08:20.882617   73479 sshutil.go:53] new ssh client: &{IP:192.168.61.126 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19349-8084/.minikube/machines/no-preload-673754/id_rsa Username:docker}
	I0731 18:08:20.968757   73479 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19349-8084/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0731 18:08:20.992136   73479 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19349-8084/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0731 18:08:21.013768   73479 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19349-8084/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0731 18:08:21.035808   73479 provision.go:87] duration metric: took 488.837788ms to configureAuth
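configureAuth above regenerates the machine's server certificate with the SANs listed on the provision.go:117 line (127.0.0.1, 192.168.61.126, localhost, minikube, no-preload-673754) and then copies it to /etc/docker on the guest. A rough, self-contained sketch of producing such a CA-signed server certificate with Go's crypto/x509 follows; newServerCert and the throwaway CA in main are assumptions for illustration only and are not minikube's provisioning code.

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"fmt"
	"math/big"
	"net"
	"time"
)

// newServerCert signs a server certificate for the given SANs with the
// supplied CA, splitting the SAN list into IP and DNS entries the same way
// the san=[...] list in the log mixes both kinds.
func newServerCert(caCert *x509.Certificate, caKey *rsa.PrivateKey, org string, sans []string) ([]byte, []byte, error) {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		return nil, nil, err
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(time.Now().UnixNano()),
		Subject:      pkix.Name{Organization: []string{org}},
		NotBefore:    time.Now().Add(-time.Hour),
		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
	}
	for _, s := range sans {
		if ip := net.ParseIP(s); ip != nil {
			tmpl.IPAddresses = append(tmpl.IPAddresses, ip)
		} else {
			tmpl.DNSNames = append(tmpl.DNSNames, s)
		}
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, caCert, &key.PublicKey, caKey)
	if err != nil {
		return nil, nil, err
	}
	certPEM := pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der})
	keyPEM := pem.EncodeToMemory(&pem.Block{Type: "RSA PRIVATE KEY", Bytes: x509.MarshalPKCS1PrivateKey(key)})
	return certPEM, keyPEM, nil
}

func main() {
	// Throwaway CA generated on the spot, purely so the sketch runs end to end.
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now().Add(-time.Hour),
		NotAfter:              time.Now().Add(24 * time.Hour),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	caCert, _ := x509.ParseCertificate(caDER)

	certPEM, _, err := newServerCert(caCert, caKey, "jenkins.no-preload-673754",
		[]string{"127.0.0.1", "192.168.61.126", "localhost", "minikube", "no-preload-673754"})
	if err != nil {
		panic(err)
	}
	fmt.Print(string(certPEM))
}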
	I0731 18:08:21.035839   73479 buildroot.go:189] setting minikube options for container-runtime
	I0731 18:08:21.036018   73479 config.go:182] Loaded profile config "no-preload-673754": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0-beta.0
	I0731 18:08:21.036099   73479 main.go:141] libmachine: (no-preload-673754) Calling .GetSSHHostname
	I0731 18:08:21.038949   73479 main.go:141] libmachine: (no-preload-673754) DBG | domain no-preload-673754 has defined MAC address 52:54:00:5a:ec:78 in network mk-no-preload-673754
	I0731 18:08:21.039335   73479 main.go:141] libmachine: (no-preload-673754) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:ec:78", ip: ""} in network mk-no-preload-673754: {Iface:virbr2 ExpiryTime:2024-07-31 19:08:12 +0000 UTC Type:0 Mac:52:54:00:5a:ec:78 Iaid: IPaddr:192.168.61.126 Prefix:24 Hostname:no-preload-673754 Clientid:01:52:54:00:5a:ec:78}
	I0731 18:08:21.039363   73479 main.go:141] libmachine: (no-preload-673754) DBG | domain no-preload-673754 has defined IP address 192.168.61.126 and MAC address 52:54:00:5a:ec:78 in network mk-no-preload-673754
	I0731 18:08:21.039556   73479 main.go:141] libmachine: (no-preload-673754) Calling .GetSSHPort
	I0731 18:08:21.039756   73479 main.go:141] libmachine: (no-preload-673754) Calling .GetSSHKeyPath
	I0731 18:08:21.039960   73479 main.go:141] libmachine: (no-preload-673754) Calling .GetSSHKeyPath
	I0731 18:08:21.040071   73479 main.go:141] libmachine: (no-preload-673754) Calling .GetSSHUsername
	I0731 18:08:21.040219   73479 main.go:141] libmachine: Using SSH client type: native
	I0731 18:08:21.040380   73479 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.126 22 <nil> <nil>}
	I0731 18:08:21.040396   73479 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0731 18:08:21.319623   73479 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0731 18:08:21.319657   73479 machine.go:97] duration metric: took 1.139776085s to provisionDockerMachine
	I0731 18:08:21.319672   73479 start.go:293] postStartSetup for "no-preload-673754" (driver="kvm2")
	I0731 18:08:21.319689   73479 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0731 18:08:21.319710   73479 main.go:141] libmachine: (no-preload-673754) Calling .DriverName
	I0731 18:08:21.320049   73479 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0731 18:08:21.320076   73479 main.go:141] libmachine: (no-preload-673754) Calling .GetSSHHostname
	I0731 18:08:21.322963   73479 main.go:141] libmachine: (no-preload-673754) DBG | domain no-preload-673754 has defined MAC address 52:54:00:5a:ec:78 in network mk-no-preload-673754
	I0731 18:08:21.323436   73479 main.go:141] libmachine: (no-preload-673754) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:ec:78", ip: ""} in network mk-no-preload-673754: {Iface:virbr2 ExpiryTime:2024-07-31 19:08:12 +0000 UTC Type:0 Mac:52:54:00:5a:ec:78 Iaid: IPaddr:192.168.61.126 Prefix:24 Hostname:no-preload-673754 Clientid:01:52:54:00:5a:ec:78}
	I0731 18:08:21.323465   73479 main.go:141] libmachine: (no-preload-673754) DBG | domain no-preload-673754 has defined IP address 192.168.61.126 and MAC address 52:54:00:5a:ec:78 in network mk-no-preload-673754
	I0731 18:08:21.323634   73479 main.go:141] libmachine: (no-preload-673754) Calling .GetSSHPort
	I0731 18:08:21.323809   73479 main.go:141] libmachine: (no-preload-673754) Calling .GetSSHKeyPath
	I0731 18:08:21.324003   73479 main.go:141] libmachine: (no-preload-673754) Calling .GetSSHUsername
	I0731 18:08:21.324127   73479 sshutil.go:53] new ssh client: &{IP:192.168.61.126 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19349-8084/.minikube/machines/no-preload-673754/id_rsa Username:docker}
	I0731 18:08:21.409076   73479 ssh_runner.go:195] Run: cat /etc/os-release
	I0731 18:08:21.412884   73479 info.go:137] Remote host: Buildroot 2023.02.9
	I0731 18:08:21.412917   73479 filesync.go:126] Scanning /home/jenkins/minikube-integration/19349-8084/.minikube/addons for local assets ...
	I0731 18:08:21.413020   73479 filesync.go:126] Scanning /home/jenkins/minikube-integration/19349-8084/.minikube/files for local assets ...
	I0731 18:08:21.413108   73479 filesync.go:149] local asset: /home/jenkins/minikube-integration/19349-8084/.minikube/files/etc/ssl/certs/152592.pem -> 152592.pem in /etc/ssl/certs
	I0731 18:08:21.413233   73479 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0731 18:08:21.421812   73479 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19349-8084/.minikube/files/etc/ssl/certs/152592.pem --> /etc/ssl/certs/152592.pem (1708 bytes)
	I0731 18:08:21.447124   73479 start.go:296] duration metric: took 127.423498ms for postStartSetup
	I0731 18:08:21.447196   73479 fix.go:56] duration metric: took 20.511108968s for fixHost
	I0731 18:08:21.447226   73479 main.go:141] libmachine: (no-preload-673754) Calling .GetSSHHostname
	I0731 18:08:21.450022   73479 main.go:141] libmachine: (no-preload-673754) DBG | domain no-preload-673754 has defined MAC address 52:54:00:5a:ec:78 in network mk-no-preload-673754
	I0731 18:08:21.450408   73479 main.go:141] libmachine: (no-preload-673754) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:ec:78", ip: ""} in network mk-no-preload-673754: {Iface:virbr2 ExpiryTime:2024-07-31 19:08:12 +0000 UTC Type:0 Mac:52:54:00:5a:ec:78 Iaid: IPaddr:192.168.61.126 Prefix:24 Hostname:no-preload-673754 Clientid:01:52:54:00:5a:ec:78}
	I0731 18:08:21.450431   73479 main.go:141] libmachine: (no-preload-673754) DBG | domain no-preload-673754 has defined IP address 192.168.61.126 and MAC address 52:54:00:5a:ec:78 in network mk-no-preload-673754
	I0731 18:08:21.450628   73479 main.go:141] libmachine: (no-preload-673754) Calling .GetSSHPort
	I0731 18:08:21.450846   73479 main.go:141] libmachine: (no-preload-673754) Calling .GetSSHKeyPath
	I0731 18:08:21.451009   73479 main.go:141] libmachine: (no-preload-673754) Calling .GetSSHKeyPath
	I0731 18:08:21.451161   73479 main.go:141] libmachine: (no-preload-673754) Calling .GetSSHUsername
	I0731 18:08:21.451327   73479 main.go:141] libmachine: Using SSH client type: native
	I0731 18:08:21.451527   73479 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.126 22 <nil> <nil>}
	I0731 18:08:21.451541   73479 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0731 18:08:21.563653   73479 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722449301.536356236
	
	I0731 18:08:21.563672   73479 fix.go:216] guest clock: 1722449301.536356236
	I0731 18:08:21.563679   73479 fix.go:229] Guest: 2024-07-31 18:08:21.536356236 +0000 UTC Remote: 2024-07-31 18:08:21.447206545 +0000 UTC m=+354.621330953 (delta=89.149691ms)
	I0731 18:08:21.563702   73479 fix.go:200] guest clock delta is within tolerance: 89.149691ms
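fix.go compares the guest clock against the host clock and only forces a resync when the difference exceeds a tolerance; here the delta is roughly 89ms and passes. A small Go sketch of that comparison follows; the 2s tolerance is an assumption chosen for illustration, not necessarily the value minikube uses.

package main

import (
	"fmt"
	"time"
)

// clockDeltaWithinTolerance reports whether the guest clock is close enough
// to the host clock that no resync is needed, returning the absolute delta.
func clockDeltaWithinTolerance(guest, host time.Time, tolerance time.Duration) (time.Duration, bool) {
	delta := guest.Sub(host)
	if delta < 0 {
		delta = -delta
	}
	return delta, delta <= tolerance
}

func main() {
	host := time.Now()
	guest := host.Add(89 * time.Millisecond) // a delta similar to the one logged above
	delta, ok := clockDeltaWithinTolerance(guest, host, 2*time.Second)
	fmt.Printf("delta=%v within tolerance=%v\n", delta, ok)
}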
	I0731 18:08:21.563709   73479 start.go:83] releasing machines lock for "no-preload-673754", held for 20.627680156s
	I0731 18:08:21.563734   73479 main.go:141] libmachine: (no-preload-673754) Calling .DriverName
	I0731 18:08:21.563992   73479 main.go:141] libmachine: (no-preload-673754) Calling .GetIP
	I0731 18:08:21.566875   73479 main.go:141] libmachine: (no-preload-673754) DBG | domain no-preload-673754 has defined MAC address 52:54:00:5a:ec:78 in network mk-no-preload-673754
	I0731 18:08:21.567265   73479 main.go:141] libmachine: (no-preload-673754) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:ec:78", ip: ""} in network mk-no-preload-673754: {Iface:virbr2 ExpiryTime:2024-07-31 19:08:12 +0000 UTC Type:0 Mac:52:54:00:5a:ec:78 Iaid: IPaddr:192.168.61.126 Prefix:24 Hostname:no-preload-673754 Clientid:01:52:54:00:5a:ec:78}
	I0731 18:08:21.567290   73479 main.go:141] libmachine: (no-preload-673754) DBG | domain no-preload-673754 has defined IP address 192.168.61.126 and MAC address 52:54:00:5a:ec:78 in network mk-no-preload-673754
	I0731 18:08:21.567505   73479 main.go:141] libmachine: (no-preload-673754) Calling .DriverName
	I0731 18:08:21.568045   73479 main.go:141] libmachine: (no-preload-673754) Calling .DriverName
	I0731 18:08:21.568237   73479 main.go:141] libmachine: (no-preload-673754) Calling .DriverName
	I0731 18:08:21.568368   73479 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0731 18:08:21.568408   73479 main.go:141] libmachine: (no-preload-673754) Calling .GetSSHHostname
	I0731 18:08:21.568465   73479 ssh_runner.go:195] Run: cat /version.json
	I0731 18:08:21.568492   73479 main.go:141] libmachine: (no-preload-673754) Calling .GetSSHHostname
	I0731 18:08:21.571178   73479 main.go:141] libmachine: (no-preload-673754) DBG | domain no-preload-673754 has defined MAC address 52:54:00:5a:ec:78 in network mk-no-preload-673754
	I0731 18:08:21.571554   73479 main.go:141] libmachine: (no-preload-673754) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:ec:78", ip: ""} in network mk-no-preload-673754: {Iface:virbr2 ExpiryTime:2024-07-31 19:08:12 +0000 UTC Type:0 Mac:52:54:00:5a:ec:78 Iaid: IPaddr:192.168.61.126 Prefix:24 Hostname:no-preload-673754 Clientid:01:52:54:00:5a:ec:78}
	I0731 18:08:21.571603   73479 main.go:141] libmachine: (no-preload-673754) DBG | domain no-preload-673754 has defined IP address 192.168.61.126 and MAC address 52:54:00:5a:ec:78 in network mk-no-preload-673754
	I0731 18:08:21.571653   73479 main.go:141] libmachine: (no-preload-673754) DBG | domain no-preload-673754 has defined MAC address 52:54:00:5a:ec:78 in network mk-no-preload-673754
	I0731 18:08:21.571729   73479 main.go:141] libmachine: (no-preload-673754) Calling .GetSSHPort
	I0731 18:08:21.571902   73479 main.go:141] libmachine: (no-preload-673754) Calling .GetSSHKeyPath
	I0731 18:08:21.572071   73479 main.go:141] libmachine: (no-preload-673754) Calling .GetSSHUsername
	I0731 18:08:21.572213   73479 main.go:141] libmachine: (no-preload-673754) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:ec:78", ip: ""} in network mk-no-preload-673754: {Iface:virbr2 ExpiryTime:2024-07-31 19:08:12 +0000 UTC Type:0 Mac:52:54:00:5a:ec:78 Iaid: IPaddr:192.168.61.126 Prefix:24 Hostname:no-preload-673754 Clientid:01:52:54:00:5a:ec:78}
	I0731 18:08:21.572240   73479 main.go:141] libmachine: (no-preload-673754) DBG | domain no-preload-673754 has defined IP address 192.168.61.126 and MAC address 52:54:00:5a:ec:78 in network mk-no-preload-673754
	I0731 18:08:21.572256   73479 sshutil.go:53] new ssh client: &{IP:192.168.61.126 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19349-8084/.minikube/machines/no-preload-673754/id_rsa Username:docker}
	I0731 18:08:21.572373   73479 main.go:141] libmachine: (no-preload-673754) Calling .GetSSHPort
	I0731 18:08:21.572505   73479 main.go:141] libmachine: (no-preload-673754) Calling .GetSSHKeyPath
	I0731 18:08:21.572609   73479 main.go:141] libmachine: (no-preload-673754) Calling .GetSSHUsername
	I0731 18:08:21.572739   73479 sshutil.go:53] new ssh client: &{IP:192.168.61.126 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19349-8084/.minikube/machines/no-preload-673754/id_rsa Username:docker}
	I0731 18:08:21.682894   73479 ssh_runner.go:195] Run: systemctl --version
	I0731 18:08:21.689126   73479 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0731 18:08:21.829572   73479 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0731 18:08:21.836507   73479 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0731 18:08:21.836589   73479 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0731 18:08:21.855127   73479 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0731 18:08:21.855176   73479 start.go:495] detecting cgroup driver to use...
	I0731 18:08:21.855256   73479 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0731 18:08:21.870886   73479 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0731 18:08:21.884762   73479 docker.go:217] disabling cri-docker service (if available) ...
	I0731 18:08:21.884833   73479 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0731 18:08:21.899480   73479 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0731 18:08:21.912438   73479 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0731 18:08:22.024528   73479 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0731 18:08:22.177400   73479 docker.go:233] disabling docker service ...
	I0731 18:08:22.177500   73479 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0731 18:08:22.191225   73479 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0731 18:08:22.204004   73479 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0731 18:08:22.327408   73479 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0731 18:08:22.449116   73479 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0731 18:08:22.463031   73479 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0731 18:08:22.481864   73479 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0731 18:08:22.481935   73479 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 18:08:22.491687   73479 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0731 18:08:22.491768   73479 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 18:08:22.501686   73479 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 18:08:22.511207   73479 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 18:08:22.521390   73479 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0731 18:08:22.531355   73479 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 18:08:22.541544   73479 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 18:08:22.556829   73479 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 18:08:22.566012   73479 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0731 18:08:22.574865   73479 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0731 18:08:22.574938   73479 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0731 18:08:22.588125   73479 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
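The three commands above form a fallback chain: reading net.bridge.bridge-nf-call-iptables fails while br_netfilter is not yet loaded, so the module is loaded and IPv4 forwarding is switched on before CRI-O is restarted. A hedged Go sketch of the same sequence using os/exec; ensureBridgeNetfilter is an illustrative helper, not the ssh_runner code that actually runs these commands on the guest.

package main

import (
	"fmt"
	"os/exec"
)

// ensureBridgeNetfilter mirrors the fallback visible in the log: if the
// bridge-nf-call-iptables sysctl cannot be read (module not loaded yet),
// load br_netfilter and then enable IPv4 forwarding.
func ensureBridgeNetfilter() error {
	if err := exec.Command("sudo", "sysctl", "net.bridge.bridge-nf-call-iptables").Run(); err == nil {
		return nil // sysctl readable, module already loaded
	}
	if err := exec.Command("sudo", "modprobe", "br_netfilter").Run(); err != nil {
		return fmt.Errorf("modprobe br_netfilter: %w", err)
	}
	return exec.Command("sudo", "sh", "-c", "echo 1 > /proc/sys/net/ipv4/ip_forward").Run()
}

func main() {
	if err := ensureBridgeNetfilter(); err != nil {
		fmt.Println("netfilter setup failed:", err)
	}
}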
	I0731 18:08:22.597257   73479 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0731 18:08:22.716379   73479 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0731 18:08:22.855465   73479 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0731 18:08:22.855526   73479 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0731 18:08:22.860016   73479 start.go:563] Will wait 60s for crictl version
	I0731 18:08:22.860088   73479 ssh_runner.go:195] Run: which crictl
	I0731 18:08:22.863395   73479 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0731 18:08:22.904523   73479 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0731 18:08:22.904611   73479 ssh_runner.go:195] Run: crio --version
	I0731 18:08:22.934571   73479 ssh_runner.go:195] Run: crio --version
	I0731 18:08:22.965884   73479 out.go:177] * Preparing Kubernetes v1.31.0-beta.0 on CRI-O 1.29.1 ...
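After restarting CRI-O, start.go waits up to 60s for /var/run/crio/crio.sock to appear and then up to 60s for a usable crictl version, as the two "Will wait 60s" lines show. A minimal polling sketch in Go; the 500ms poll interval is an assumption for illustration.

package main

import (
	"fmt"
	"os"
	"time"
)

// waitForSocket polls for a path until it exists or the timeout elapses,
// mirroring the wait for /var/run/crio/crio.sock seen above.
func waitForSocket(path string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if _, err := os.Stat(path); err == nil {
			return nil
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("timed out after %v waiting for %s", timeout, path)
}

func main() {
	if err := waitForSocket("/var/run/crio/crio.sock", 60*time.Second); err != nil {
		fmt.Println(err)
	}
}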
	I0731 18:08:19.771740   73800 pod_ready.go:102] pod "metrics-server-569cc877fc-fzxrw" in "kube-system" namespace has status "Ready":"False"
	I0731 18:08:22.272491   73800 pod_ready.go:102] pod "metrics-server-569cc877fc-fzxrw" in "kube-system" namespace has status "Ready":"False"
	I0731 18:08:20.478866   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:08:20.978311   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:08:21.478333   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:08:21.978289   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:08:22.479138   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:08:22.978406   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:08:23.478138   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:08:23.979189   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:08:24.478688   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:08:24.978795   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:08:22.779215   73696 pod_ready.go:102] pod "metrics-server-569cc877fc-64hp4" in "kube-system" namespace has status "Ready":"False"
	I0731 18:08:24.782366   73696 pod_ready.go:102] pod "metrics-server-569cc877fc-64hp4" in "kube-system" namespace has status "Ready":"False"
	I0731 18:08:22.967087   73479 main.go:141] libmachine: (no-preload-673754) Calling .GetIP
	I0731 18:08:22.969442   73479 main.go:141] libmachine: (no-preload-673754) DBG | domain no-preload-673754 has defined MAC address 52:54:00:5a:ec:78 in network mk-no-preload-673754
	I0731 18:08:22.969722   73479 main.go:141] libmachine: (no-preload-673754) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:ec:78", ip: ""} in network mk-no-preload-673754: {Iface:virbr2 ExpiryTime:2024-07-31 19:08:12 +0000 UTC Type:0 Mac:52:54:00:5a:ec:78 Iaid: IPaddr:192.168.61.126 Prefix:24 Hostname:no-preload-673754 Clientid:01:52:54:00:5a:ec:78}
	I0731 18:08:22.969746   73479 main.go:141] libmachine: (no-preload-673754) DBG | domain no-preload-673754 has defined IP address 192.168.61.126 and MAC address 52:54:00:5a:ec:78 in network mk-no-preload-673754
	I0731 18:08:22.970005   73479 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0731 18:08:22.974229   73479 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0731 18:08:22.986153   73479 kubeadm.go:883] updating cluster {Name:no-preload-673754 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0-beta.0 ClusterName:no-preload-673754 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.126 Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0731 18:08:22.986292   73479 preload.go:131] Checking if preload exists for k8s version v1.31.0-beta.0 and runtime crio
	I0731 18:08:22.986321   73479 ssh_runner.go:195] Run: sudo crictl images --output json
	I0731 18:08:23.020129   73479 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.0-beta.0". assuming images are not preloaded.
	I0731 18:08:23.020153   73479 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.31.0-beta.0 registry.k8s.io/kube-controller-manager:v1.31.0-beta.0 registry.k8s.io/kube-scheduler:v1.31.0-beta.0 registry.k8s.io/kube-proxy:v1.31.0-beta.0 registry.k8s.io/pause:3.10 registry.k8s.io/etcd:3.5.14-0 registry.k8s.io/coredns/coredns:v1.11.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0731 18:08:23.020215   73479 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0731 18:08:23.020234   73479 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.31.0-beta.0
	I0731 18:08:23.020266   73479 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.31.0-beta.0
	I0731 18:08:23.020322   73479 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.31.0-beta.0
	I0731 18:08:23.020337   73479 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.14-0
	I0731 18:08:23.020390   73479 image.go:134] retrieving image: registry.k8s.io/pause:3.10
	I0731 18:08:23.020431   73479 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.11.1
	I0731 18:08:23.020457   73479 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.31.0-beta.0
	I0731 18:08:23.021820   73479 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0731 18:08:23.021821   73479 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.14-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.14-0
	I0731 18:08:23.021820   73479 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.31.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.31.0-beta.0
	I0731 18:08:23.021901   73479 image.go:177] daemon lookup for registry.k8s.io/pause:3.10: Error response from daemon: No such image: registry.k8s.io/pause:3.10
	I0731 18:08:23.021978   73479 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.31.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.31.0-beta.0
	I0731 18:08:23.021833   73479 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.31.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.31.0-beta.0
	I0731 18:08:23.021826   73479 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.31.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.31.0-beta.0
	I0731 18:08:23.021821   73479 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.1
	I0731 18:08:23.254700   73479 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.31.0-beta.0
	I0731 18:08:23.268999   73479 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.11.1
	I0731 18:08:23.271466   73479 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.10
	I0731 18:08:23.272011   73479 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.31.0-beta.0
	I0731 18:08:23.275695   73479 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.31.0-beta.0
	I0731 18:08:23.298363   73479 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.31.0-beta.0
	I0731 18:08:23.320031   73479 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.14-0
	I0731 18:08:23.340960   73479 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.31.0-beta.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.31.0-beta.0" does not exist at hash "f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938" in container runtime
	I0731 18:08:23.341004   73479 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.31.0-beta.0
	I0731 18:08:23.341050   73479 ssh_runner.go:195] Run: which crictl
	I0731 18:08:23.381391   73479 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.11.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.11.1" does not exist at hash "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4" in container runtime
	I0731 18:08:23.381441   73479 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.11.1
	I0731 18:08:23.381511   73479 ssh_runner.go:195] Run: which crictl
	I0731 18:08:23.508590   73479 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.31.0-beta.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.31.0-beta.0" does not exist at hash "63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5" in container runtime
	I0731 18:08:23.508650   73479 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.31.0-beta.0
	I0731 18:08:23.508676   73479 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.31.0-beta.0" needs transfer: "registry.k8s.io/kube-proxy:v1.31.0-beta.0" does not exist at hash "c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899" in container runtime
	I0731 18:08:23.508702   73479 ssh_runner.go:195] Run: which crictl
	I0731 18:08:23.508716   73479 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.31.0-beta.0
	I0731 18:08:23.508729   73479 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.31.0-beta.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.31.0-beta.0" does not exist at hash "d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b" in container runtime
	I0731 18:08:23.508751   73479 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.31.0-beta.0
	I0731 18:08:23.508772   73479 ssh_runner.go:195] Run: which crictl
	I0731 18:08:23.508781   73479 ssh_runner.go:195] Run: which crictl
	I0731 18:08:23.508800   73479 cache_images.go:116] "registry.k8s.io/etcd:3.5.14-0" needs transfer: "registry.k8s.io/etcd:3.5.14-0" does not exist at hash "cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa" in container runtime
	I0731 18:08:23.508830   73479 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.14-0
	I0731 18:08:23.508838   73479 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.0-beta.0
	I0731 18:08:23.508860   73479 ssh_runner.go:195] Run: which crictl
	I0731 18:08:23.508879   73479 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1
	I0731 18:08:23.519809   73479 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.0-beta.0
	I0731 18:08:23.519834   73479 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.0-beta.0
	I0731 18:08:23.519907   73479 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.0-beta.0
	I0731 18:08:23.595474   73479 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.14-0
	I0731 18:08:23.595484   73479 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19349-8084/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.0-beta.0
	I0731 18:08:23.595590   73479 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19349-8084/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1
	I0731 18:08:23.595628   73479 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-apiserver_v1.31.0-beta.0
	I0731 18:08:23.595683   73479 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/coredns_v1.11.1
	I0731 18:08:23.622893   73479 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19349-8084/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.0-beta.0
	I0731 18:08:23.623024   73479 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-proxy_v1.31.0-beta.0
	I0731 18:08:23.629140   73479 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19349-8084/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.0-beta.0
	I0731 18:08:23.629173   73479 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19349-8084/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.0-beta.0
	I0731 18:08:23.629242   73479 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-scheduler_v1.31.0-beta.0
	I0731 18:08:23.629246   73479 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-controller-manager_v1.31.0-beta.0
	I0731 18:08:23.659281   73479 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19349-8084/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.14-0
	I0731 18:08:23.659321   73479 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.31.0-beta.0 (exists)
	I0731 18:08:23.659336   73479 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.31.0-beta.0
	I0731 18:08:23.659379   73479 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.0-beta.0
	I0731 18:08:23.659385   73479 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.11.1 (exists)
	I0731 18:08:23.659425   73479 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.31.0-beta.0 (exists)
	I0731 18:08:23.659381   73479 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/etcd_3.5.14-0
	I0731 18:08:23.659465   73479 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.31.0-beta.0 (exists)
	I0731 18:08:23.659494   73479 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.31.0-beta.0 (exists)
	I0731 18:08:23.857129   73479 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0731 18:08:26.136212   73479 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.0-beta.0: (2.476802709s)
	I0731 18:08:26.136251   73479 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19349-8084/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.0-beta.0 from cache
	I0731 18:08:26.136264   73479 ssh_runner.go:235] Completed: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/etcd_3.5.14-0: (2.476807388s)
	I0731 18:08:26.136276   73479 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.11.1
	I0731 18:08:26.136293   73479 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.14-0 (exists)
	I0731 18:08:26.136329   73479 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1
	I0731 18:08:26.136366   73479 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5: (2.279204335s)
	I0731 18:08:26.136423   73479 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I0731 18:08:26.136474   73479 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0731 18:08:26.136521   73479 ssh_runner.go:195] Run: which crictl
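Each "needs transfer" decision above comes down to asking the container runtime for the image ID and comparing it against the expected digest; when the image is absent or mismatched, the stale tag is removed with crictl rmi and the cached tarball is loaded with podman load. A rough Go sketch of that check; needsTransfer is hypothetical, and the digest below is the kube-apiserver hash quoted earlier in the log.

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// needsTransfer reports whether an image must be loaded from the local cache:
// it asks the runtime (via podman, as in the log) for the image ID and treats
// a missing or mismatched ID as "needs transfer".
func needsTransfer(image, wantID string) bool {
	out, err := exec.Command("sudo", "podman", "image", "inspect", "--format", "{{.Id}}", image).Output()
	if err != nil {
		return true // not present in the container runtime at all
	}
	return strings.TrimSpace(string(out)) != wantID
}

func main() {
	fmt.Println(needsTransfer("registry.k8s.io/kube-apiserver:v1.31.0-beta.0",
		"f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938"))
}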
	I0731 18:08:24.770974   73800 pod_ready.go:102] pod "metrics-server-569cc877fc-fzxrw" in "kube-system" namespace has status "Ready":"False"
	I0731 18:08:26.771954   73800 pod_ready.go:102] pod "metrics-server-569cc877fc-fzxrw" in "kube-system" namespace has status "Ready":"False"
	I0731 18:08:29.274931   73800 pod_ready.go:102] pod "metrics-server-569cc877fc-fzxrw" in "kube-system" namespace has status "Ready":"False"
	I0731 18:08:25.478432   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:08:25.978823   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:08:26.478416   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:08:26.979075   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:08:27.478228   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:08:27.978970   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:08:28.478319   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:08:28.979028   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:08:29.479060   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:08:29.978544   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:08:27.278482   73696 pod_ready.go:102] pod "metrics-server-569cc877fc-64hp4" in "kube-system" namespace has status "Ready":"False"
	I0731 18:08:29.279820   73696 pod_ready.go:102] pod "metrics-server-569cc877fc-64hp4" in "kube-system" namespace has status "Ready":"False"
	I0731 18:08:27.993828   73479 ssh_runner.go:235] Completed: which crictl: (1.857279777s)
	I0731 18:08:27.993908   73479 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0731 18:08:27.993918   73479 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1: (1.857561411s)
	I0731 18:08:27.993947   73479 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19349-8084/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1 from cache
	I0731 18:08:27.993981   73479 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.31.0-beta.0
	I0731 18:08:27.994029   73479 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.0-beta.0
	I0731 18:08:28.037163   73479 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19349-8084/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0731 18:08:28.037288   73479 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/storage-provisioner_v5
	I0731 18:08:29.880343   73479 ssh_runner.go:235] Completed: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/storage-provisioner_v5: (1.843037657s)
	I0731 18:08:29.880392   73479 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I0731 18:08:29.880339   73479 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.0-beta.0: (1.886261639s)
	I0731 18:08:29.880412   73479 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19349-8084/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.0-beta.0 from cache
	I0731 18:08:29.880442   73479 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.31.0-beta.0
	I0731 18:08:29.880509   73479 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.0-beta.0
	I0731 18:08:31.229448   73479 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.0-beta.0: (1.348909634s)
	I0731 18:08:31.229478   73479 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19349-8084/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.0-beta.0 from cache
	I0731 18:08:31.229512   73479 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.31.0-beta.0
	I0731 18:08:31.229575   73479 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.0-beta.0
	I0731 18:08:31.771695   73800 pod_ready.go:102] pod "metrics-server-569cc877fc-fzxrw" in "kube-system" namespace has status "Ready":"False"
	I0731 18:08:34.271817   73800 pod_ready.go:102] pod "metrics-server-569cc877fc-fzxrw" in "kube-system" namespace has status "Ready":"False"
	I0731 18:08:30.478387   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:08:30.978443   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:08:31.478484   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:08:31.979231   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:08:32.478928   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:08:32.978790   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:08:33.478426   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:08:33.978839   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:08:34.478411   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:08:34.978378   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:08:31.280261   73696 pod_ready.go:102] pod "metrics-server-569cc877fc-64hp4" in "kube-system" namespace has status "Ready":"False"
	I0731 18:08:33.780411   73696 pod_ready.go:102] pod "metrics-server-569cc877fc-64hp4" in "kube-system" namespace has status "Ready":"False"
	I0731 18:08:35.783181   73696 pod_ready.go:102] pod "metrics-server-569cc877fc-64hp4" in "kube-system" namespace has status "Ready":"False"
	I0731 18:08:33.084098   73479 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.0-beta.0: (1.854499641s)
	I0731 18:08:33.084136   73479 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19349-8084/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.0-beta.0 from cache
	I0731 18:08:33.084175   73479 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.5.14-0
	I0731 18:08:33.084255   73479 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.14-0
	I0731 18:08:36.378466   73479 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.14-0: (3.294181026s)
	I0731 18:08:36.378501   73479 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19349-8084/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.14-0 from cache
	I0731 18:08:36.378530   73479 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0731 18:08:36.378575   73479 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I0731 18:08:36.772963   73800 pod_ready.go:102] pod "metrics-server-569cc877fc-fzxrw" in "kube-system" namespace has status "Ready":"False"
	I0731 18:08:39.270915   73800 pod_ready.go:102] pod "metrics-server-569cc877fc-fzxrw" in "kube-system" namespace has status "Ready":"False"
	I0731 18:08:35.478287   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:08:35.978546   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:08:36.479138   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:08:36.979173   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:08:37.478319   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:08:37.978768   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:08:38.479161   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:08:38.979129   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:08:39.478128   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:08:39.979147   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:08:38.278970   73696 pod_ready.go:102] pod "metrics-server-569cc877fc-64hp4" in "kube-system" namespace has status "Ready":"False"
	I0731 18:08:40.279298   73696 pod_ready.go:102] pod "metrics-server-569cc877fc-64hp4" in "kube-system" namespace has status "Ready":"False"
	I0731 18:08:37.022757   73479 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19349-8084/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0731 18:08:37.022807   73479 cache_images.go:123] Successfully loaded all cached images
	I0731 18:08:37.022815   73479 cache_images.go:92] duration metric: took 14.002647196s to LoadCachedImages
	I0731 18:08:37.022829   73479 kubeadm.go:934] updating node { 192.168.61.126 8443 v1.31.0-beta.0 crio true true} ...
	I0731 18:08:37.022954   73479 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-673754 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.126
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0-beta.0 ClusterName:no-preload-673754 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0731 18:08:37.023035   73479 ssh_runner.go:195] Run: crio config
	I0731 18:08:37.064803   73479 cni.go:84] Creating CNI manager for ""
	I0731 18:08:37.064825   73479 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0731 18:08:37.064834   73479 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0731 18:08:37.064856   73479 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.126 APIServerPort:8443 KubernetesVersion:v1.31.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-673754 NodeName:no-preload-673754 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.126"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.126 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0731 18:08:37.065028   73479 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.126
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-673754"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.126
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.126"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0731 18:08:37.065108   73479 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0-beta.0
	I0731 18:08:37.077141   73479 binaries.go:44] Found k8s binaries, skipping transfer
	I0731 18:08:37.077215   73479 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0731 18:08:37.086553   73479 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (324 bytes)
	I0731 18:08:37.102646   73479 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I0731 18:08:37.118113   73479 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2168 bytes)
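The kubeadm/kubelet/kube-proxy configuration printed above is rendered from templates and then copied to /var/tmp/minikube/kubeadm.yaml.new, as the scp line shows. A much-reduced Go text/template sketch of rendering just the ClusterConfiguration fragment with the values seen in this run; the template string and clusterParams struct here are illustrative stand-ins, not minikube's real bootstrapper types.

package main

import (
	"os"
	"text/template"
)

// A trimmed-down template for the ClusterConfiguration fragment shown above.
// The real template in minikube carries many more options.
const clusterConfigTmpl = `apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
clusterName: mk
controlPlaneEndpoint: control-plane.minikube.internal:{{.APIServerPort}}
kubernetesVersion: {{.KubernetesVersion}}
networking:
  podSubnet: "{{.PodSubnet}}"
  serviceSubnet: {{.ServiceCIDR}}
`

type clusterParams struct {
	APIServerPort     int
	KubernetesVersion string
	PodSubnet         string
	ServiceCIDR       string
}

func main() {
	tmpl := template.Must(template.New("cc").Parse(clusterConfigTmpl))
	// Values taken from the node settings logged in this run.
	_ = tmpl.Execute(os.Stdout, clusterParams{
		APIServerPort:     8443,
		KubernetesVersion: "v1.31.0-beta.0",
		PodSubnet:         "10.244.0.0/16",
		ServiceCIDR:       "10.96.0.0/12",
	})
}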
	I0731 18:08:37.134702   73479 ssh_runner.go:195] Run: grep 192.168.61.126	control-plane.minikube.internal$ /etc/hosts
	I0731 18:08:37.138593   73479 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.126	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0731 18:08:37.151319   73479 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0731 18:08:37.270019   73479 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0731 18:08:37.287378   73479 certs.go:68] Setting up /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/no-preload-673754 for IP: 192.168.61.126
	I0731 18:08:37.287400   73479 certs.go:194] generating shared ca certs ...
	I0731 18:08:37.287413   73479 certs.go:226] acquiring lock for ca certs: {Name:mk49eed439085f9c30746706460ce213570d1997 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 18:08:37.287540   73479 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19349-8084/.minikube/ca.key
	I0731 18:08:37.287577   73479 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19349-8084/.minikube/proxy-client-ca.key
	I0731 18:08:37.287584   73479 certs.go:256] generating profile certs ...
	I0731 18:08:37.287692   73479 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/no-preload-673754/client.key
	I0731 18:08:37.287761   73479 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/no-preload-673754/apiserver.key.3fff3ffc
	I0731 18:08:37.287803   73479 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/no-preload-673754/proxy-client.key
	I0731 18:08:37.287938   73479 certs.go:484] found cert: /home/jenkins/minikube-integration/19349-8084/.minikube/certs/15259.pem (1338 bytes)
	W0731 18:08:37.287973   73479 certs.go:480] ignoring /home/jenkins/minikube-integration/19349-8084/.minikube/certs/15259_empty.pem, impossibly tiny 0 bytes
	I0731 18:08:37.287985   73479 certs.go:484] found cert: /home/jenkins/minikube-integration/19349-8084/.minikube/certs/ca-key.pem (1675 bytes)
	I0731 18:08:37.288020   73479 certs.go:484] found cert: /home/jenkins/minikube-integration/19349-8084/.minikube/certs/ca.pem (1082 bytes)
	I0731 18:08:37.288049   73479 certs.go:484] found cert: /home/jenkins/minikube-integration/19349-8084/.minikube/certs/cert.pem (1123 bytes)
	I0731 18:08:37.288079   73479 certs.go:484] found cert: /home/jenkins/minikube-integration/19349-8084/.minikube/certs/key.pem (1675 bytes)
	I0731 18:08:37.288143   73479 certs.go:484] found cert: /home/jenkins/minikube-integration/19349-8084/.minikube/files/etc/ssl/certs/152592.pem (1708 bytes)
	I0731 18:08:37.288831   73479 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19349-8084/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0731 18:08:37.334317   73479 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19349-8084/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0731 18:08:37.370553   73479 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19349-8084/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0731 18:08:37.403436   73479 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19349-8084/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0731 18:08:37.449133   73479 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/no-preload-673754/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0731 18:08:37.486169   73479 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/no-preload-673754/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0731 18:08:37.517241   73479 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/no-preload-673754/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0731 18:08:37.541089   73479 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/no-preload-673754/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0731 18:08:37.563068   73479 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19349-8084/.minikube/certs/15259.pem --> /usr/share/ca-certificates/15259.pem (1338 bytes)
	I0731 18:08:37.585396   73479 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19349-8084/.minikube/files/etc/ssl/certs/152592.pem --> /usr/share/ca-certificates/152592.pem (1708 bytes)
	I0731 18:08:37.608142   73479 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19349-8084/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0731 18:08:37.630178   73479 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0731 18:08:37.645994   73479 ssh_runner.go:195] Run: openssl version
	I0731 18:08:37.651663   73479 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/15259.pem && ln -fs /usr/share/ca-certificates/15259.pem /etc/ssl/certs/15259.pem"
	I0731 18:08:37.661494   73479 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/15259.pem
	I0731 18:08:37.665519   73479 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 31 16:54 /usr/share/ca-certificates/15259.pem
	I0731 18:08:37.665575   73479 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/15259.pem
	I0731 18:08:37.671143   73479 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/15259.pem /etc/ssl/certs/51391683.0"
	I0731 18:08:37.681076   73479 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/152592.pem && ln -fs /usr/share/ca-certificates/152592.pem /etc/ssl/certs/152592.pem"
	I0731 18:08:37.692253   73479 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/152592.pem
	I0731 18:08:37.696802   73479 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 31 16:54 /usr/share/ca-certificates/152592.pem
	I0731 18:08:37.696850   73479 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/152592.pem
	I0731 18:08:37.702282   73479 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/152592.pem /etc/ssl/certs/3ec20f2e.0"
	I0731 18:08:37.713051   73479 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0731 18:08:37.723644   73479 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0731 18:08:37.728170   73479 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 31 16:42 /usr/share/ca-certificates/minikubeCA.pem
	I0731 18:08:37.728225   73479 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0731 18:08:37.733912   73479 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
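Each CA above is installed twice: once under a readable name in /usr/share/ca-certificates, and once as a symlink in /etc/ssl/certs named after its OpenSSL subject hash (e.g. b5213941.0 for minikubeCA.pem), which is the name OpenSSL-based clients use for lookup. A rough Go equivalent that shells out to openssl exactly as the log does (installCACert is a hypothetical helper, not minikube's own code):

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// installCACert symlinks a CA certificate into /etc/ssl/certs under its
// OpenSSL subject-hash name, mirroring the "openssl x509 -hash -noout" + "ln -fs" pair above.
func installCACert(pemPath string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
	if err != nil {
		return fmt.Errorf("hashing %s: %w", pemPath, err)
	}
	hash := strings.TrimSpace(string(out)) // e.g. "b5213941"
	link := filepath.Join("/etc/ssl/certs", hash+".0")
	_ = os.Remove(link) // behave like "ln -fs": replace any stale link
	return os.Symlink(pemPath, link)
}

func main() {
	if err := installCACert("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}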
	I0731 18:08:37.744004   73479 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0731 18:08:37.748076   73479 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0731 18:08:37.753645   73479 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0731 18:08:37.759077   73479 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0731 18:08:37.764344   73479 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0731 18:08:37.769735   73479 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0731 18:08:37.775894   73479 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
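The -checkend 86400 runs above ask openssl whether each control-plane certificate will still be valid 86400 seconds (24 hours) from now; openssl exits non-zero if the certificate expires inside that window, which would force regeneration. The same check expressed with Go's crypto/x509 (a sketch, not the code that produced these log lines):

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the certificate at certPath expires within d,
// the question "openssl x509 -noout -checkend <seconds>" answers.
func expiresWithin(certPath string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(certPath)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM block in %s", certPath)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("expires within 24h:", soon)
}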
	I0731 18:08:37.781699   73479 kubeadm.go:392] StartCluster: {Name:no-preload-673754 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0-beta.0 ClusterName:no-preload-673754 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.126 Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0731 18:08:37.781771   73479 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0731 18:08:37.781833   73479 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0731 18:08:37.825614   73479 cri.go:89] found id: ""
	I0731 18:08:37.825685   73479 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0731 18:08:37.835584   73479 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0731 18:08:37.835604   73479 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0731 18:08:37.835659   73479 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0731 18:08:37.844529   73479 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0731 18:08:37.845534   73479 kubeconfig.go:125] found "no-preload-673754" server: "https://192.168.61.126:8443"
	I0731 18:08:37.847698   73479 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0731 18:08:37.856360   73479 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.61.126
	I0731 18:08:37.856386   73479 kubeadm.go:1160] stopping kube-system containers ...
	I0731 18:08:37.856396   73479 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0731 18:08:37.856440   73479 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0731 18:08:37.894614   73479 cri.go:89] found id: ""
	I0731 18:08:37.894689   73479 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0731 18:08:37.910921   73479 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0731 18:08:37.919796   73479 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0731 18:08:37.919814   73479 kubeadm.go:157] found existing configuration files:
	
	I0731 18:08:37.919859   73479 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0731 18:08:37.928562   73479 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0731 18:08:37.928617   73479 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0731 18:08:37.937099   73479 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0731 18:08:37.945298   73479 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0731 18:08:37.945378   73479 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0731 18:08:37.953976   73479 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0731 18:08:37.962069   73479 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0731 18:08:37.962119   73479 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0731 18:08:37.970719   73479 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0731 18:08:37.979265   73479 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0731 18:08:37.979318   73479 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
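Because none of the four kubeconfigs exist yet on this node, every grep for https://control-plane.minikube.internal:8443 exits with status 2 and the subsequent rm -f is a no-op; on a node with stale configs the same sequence deletes any file that does not already point at the expected API server so kubeadm can regenerate it. In Go the pruning step could look like this (pruneStaleKubeconfigs is an assumed name, not a minikube function):

package main

import (
	"os"
	"strings"
)

// pruneStaleKubeconfigs removes any kubeconfig that does not already reference
// the expected API server URL, the Go analogue of the grep + "sudo rm -f" pairs above.
func pruneStaleKubeconfigs(server string, paths ...string) {
	for _, p := range paths {
		data, err := os.ReadFile(p)
		if err != nil || !strings.Contains(string(data), server) {
			_ = os.Remove(p)
		}
	}
}

func main() {
	pruneStaleKubeconfigs(
		"https://control-plane.minikube.internal:8443",
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	)
}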
	I0731 18:08:37.988286   73479 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0731 18:08:37.997742   73479 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0731 18:08:38.105503   73479 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0731 18:08:39.403672   73479 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.298131314s)
	I0731 18:08:39.403710   73479 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0731 18:08:39.609739   73479 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0731 18:08:39.677484   73479 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
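Instead of a full "kubeadm init", the restart path replays individual init phases (certs, kubeconfig, kubelet-start, control-plane, etcd) against the freshly copied /var/tmp/minikube/kubeadm.yaml, as the five runs above show. A condensed sketch of that sequence using os/exec (an assumed wrapper; the real runs also prepend sudo and a PATH override, omitted here):

package main

import (
	"fmt"
	"os"
	"os/exec"
)

// runInitPhases replays the kubeadm init phases used on restart, each against
// the same generated kubeadm.yaml.
func runInitPhases(kubeadmBin, config string) error {
	phases := [][]string{
		{"init", "phase", "certs", "all"},
		{"init", "phase", "kubeconfig", "all"},
		{"init", "phase", "kubelet-start"},
		{"init", "phase", "control-plane", "all"},
		{"init", "phase", "etcd", "local"},
	}
	for _, phase := range phases {
		cmd := exec.Command(kubeadmBin, append(phase, "--config", config)...)
		cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
		if err := cmd.Run(); err != nil {
			return fmt.Errorf("kubeadm %v failed: %w", phase, err)
		}
	}
	return nil
}

func main() {
	err := runInitPhases(
		"/var/lib/minikube/binaries/v1.31.0-beta.0/kubeadm",
		"/var/tmp/minikube/kubeadm.yaml",
	)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}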
	I0731 18:08:39.773387   73479 api_server.go:52] waiting for apiserver process to appear ...
	I0731 18:08:39.773469   73479 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:08:40.274185   73479 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:08:40.774562   73479 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:08:40.792346   73479 api_server.go:72] duration metric: took 1.018961231s to wait for apiserver process to appear ...
	I0731 18:08:40.792368   73479 api_server.go:88] waiting for apiserver healthz status ...
	I0731 18:08:40.792384   73479 api_server.go:253] Checking apiserver healthz at https://192.168.61.126:8443/healthz ...
	I0731 18:08:41.271890   73800 pod_ready.go:102] pod "metrics-server-569cc877fc-fzxrw" in "kube-system" namespace has status "Ready":"False"
	I0731 18:08:43.771546   73800 pod_ready.go:102] pod "metrics-server-569cc877fc-fzxrw" in "kube-system" namespace has status "Ready":"False"
	I0731 18:08:43.476911   73479 api_server.go:279] https://192.168.61.126:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0731 18:08:43.476938   73479 api_server.go:103] status: https://192.168.61.126:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0731 18:08:43.476952   73479 api_server.go:253] Checking apiserver healthz at https://192.168.61.126:8443/healthz ...
	I0731 18:08:43.536762   73479 api_server.go:279] https://192.168.61.126:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0731 18:08:43.536794   73479 api_server.go:103] status: https://192.168.61.126:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0731 18:08:43.793157   73479 api_server.go:253] Checking apiserver healthz at https://192.168.61.126:8443/healthz ...
	I0731 18:08:43.798895   73479 api_server.go:279] https://192.168.61.126:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0731 18:08:43.798924   73479 api_server.go:103] status: https://192.168.61.126:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0731 18:08:44.292527   73479 api_server.go:253] Checking apiserver healthz at https://192.168.61.126:8443/healthz ...
	I0731 18:08:44.300596   73479 api_server.go:279] https://192.168.61.126:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0731 18:08:44.300632   73479 api_server.go:103] status: https://192.168.61.126:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0731 18:08:44.793206   73479 api_server.go:253] Checking apiserver healthz at https://192.168.61.126:8443/healthz ...
	I0731 18:08:44.797982   73479 api_server.go:279] https://192.168.61.126:8443/healthz returned 200:
	ok
	I0731 18:08:44.806150   73479 api_server.go:141] control plane version: v1.31.0-beta.0
	I0731 18:08:44.806172   73479 api_server.go:131] duration metric: took 4.013797537s to wait for apiserver health ...
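The healthz wait above tolerates the early 403s (the probe is anonymous, so it is forbidden until the RBAC bootstrap roles exist) and 500s (post-start hooks such as rbac/bootstrap-roles and scheduling/bootstrap-system-priority-classes are still running), and only stops once /healthz returns 200 "ok". A minimal polling loop with the same behaviour (illustrative only; the probe skips TLS verification because the apiserver certificate is not trusted by the probing host):

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// waitForHealthz polls /healthz until it returns 200; any other status or a
// connection error simply means "retry".
func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			ok := resp.StatusCode == http.StatusOK
			resp.Body.Close()
			if ok {
				return nil
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver %s not healthy after %s", url, timeout)
}

func main() {
	if err := waitForHealthz("https://192.168.61.126:8443/healthz", 4*time.Minute); err != nil {
		fmt.Println(err)
	}
}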
	I0731 18:08:44.806183   73479 cni.go:84] Creating CNI manager for ""
	I0731 18:08:44.806191   73479 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0731 18:08:44.807774   73479 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0731 18:08:40.478967   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:08:40.978610   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:08:41.479192   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:08:41.978737   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:08:42.479051   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:08:42.978274   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:08:43.478957   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:08:43.978973   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:08:44.478269   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:08:44.978737   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:08:42.778330   73696 pod_ready.go:102] pod "metrics-server-569cc877fc-64hp4" in "kube-system" namespace has status "Ready":"False"
	I0731 18:08:44.779163   73696 pod_ready.go:102] pod "metrics-server-569cc877fc-64hp4" in "kube-system" namespace has status "Ready":"False"
	I0731 18:08:44.809068   73479 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0731 18:08:44.823284   73479 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0731 18:08:44.878894   73479 system_pods.go:43] waiting for kube-system pods to appear ...
	I0731 18:08:44.892969   73479 system_pods.go:59] 8 kube-system pods found
	I0731 18:08:44.893020   73479 system_pods.go:61] "coredns-5cfdc65f69-k7clq" [e4be77b6-aa7c-45e2-90a1-6a8264fd5101] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0731 18:08:44.893031   73479 system_pods.go:61] "etcd-no-preload-673754" [d64e9194-dd33-4a28-b8c3-d25462f1dae8] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0731 18:08:44.893042   73479 system_pods.go:61] "kube-apiserver-no-preload-673754" [4497614d-51de-4a5f-93dc-1397446fe4c8] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0731 18:08:44.893055   73479 system_pods.go:61] "kube-controller-manager-no-preload-673754" [fe7f714e-d778-49e7-b4ef-44e6efb5bc33] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0731 18:08:44.893067   73479 system_pods.go:61] "kube-proxy-hqxh6" [4fb623c0-4be3-421c-85d6-1d76a90b874f] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0731 18:08:44.893078   73479 system_pods.go:61] "kube-scheduler-no-preload-673754" [558e9b78-a6a9-46b8-9db8-004126359c74] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0731 18:08:44.893088   73479 system_pods.go:61] "metrics-server-78fcd8795b-27pkr" [a63a156b-0446-4bd9-8619-de75edaeb481] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0731 18:08:44.893098   73479 system_pods.go:61] "storage-provisioner" [a9f0ab70-18f4-4fac-b858-a9177077fe29] Running
	I0731 18:08:44.893109   73479 system_pods.go:74] duration metric: took 14.191984ms to wait for pod list to return data ...
	I0731 18:08:44.893120   73479 node_conditions.go:102] verifying NodePressure condition ...
	I0731 18:08:44.908236   73479 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0731 18:08:44.908270   73479 node_conditions.go:123] node cpu capacity is 2
	I0731 18:08:44.908283   73479 node_conditions.go:105] duration metric: took 15.154491ms to run NodePressure ...
	I0731 18:08:44.908307   73479 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0731 18:08:45.248571   73479 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0731 18:08:45.252305   73479 kubeadm.go:739] kubelet initialised
	I0731 18:08:45.252332   73479 kubeadm.go:740] duration metric: took 3.734022ms waiting for restarted kubelet to initialise ...
	I0731 18:08:45.252342   73479 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0731 18:08:45.256748   73479 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5cfdc65f69-k7clq" in "kube-system" namespace to be "Ready" ...
	I0731 18:08:45.261130   73479 pod_ready.go:97] node "no-preload-673754" hosting pod "coredns-5cfdc65f69-k7clq" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-673754" has status "Ready":"False"
	I0731 18:08:45.261149   73479 pod_ready.go:81] duration metric: took 4.373068ms for pod "coredns-5cfdc65f69-k7clq" in "kube-system" namespace to be "Ready" ...
	E0731 18:08:45.261157   73479 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-673754" hosting pod "coredns-5cfdc65f69-k7clq" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-673754" has status "Ready":"False"
	I0731 18:08:45.261162   73479 pod_ready.go:78] waiting up to 4m0s for pod "etcd-no-preload-673754" in "kube-system" namespace to be "Ready" ...
	I0731 18:08:45.265115   73479 pod_ready.go:97] node "no-preload-673754" hosting pod "etcd-no-preload-673754" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-673754" has status "Ready":"False"
	I0731 18:08:45.265135   73479 pod_ready.go:81] duration metric: took 3.965586ms for pod "etcd-no-preload-673754" in "kube-system" namespace to be "Ready" ...
	E0731 18:08:45.265142   73479 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-673754" hosting pod "etcd-no-preload-673754" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-673754" has status "Ready":"False"
	I0731 18:08:45.265147   73479 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-no-preload-673754" in "kube-system" namespace to be "Ready" ...
	I0731 18:08:45.269566   73479 pod_ready.go:97] node "no-preload-673754" hosting pod "kube-apiserver-no-preload-673754" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-673754" has status "Ready":"False"
	I0731 18:08:45.269585   73479 pod_ready.go:81] duration metric: took 4.431367ms for pod "kube-apiserver-no-preload-673754" in "kube-system" namespace to be "Ready" ...
	E0731 18:08:45.269595   73479 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-673754" hosting pod "kube-apiserver-no-preload-673754" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-673754" has status "Ready":"False"
	I0731 18:08:45.269603   73479 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-no-preload-673754" in "kube-system" namespace to be "Ready" ...
	I0731 18:08:45.281026   73479 pod_ready.go:97] node "no-preload-673754" hosting pod "kube-controller-manager-no-preload-673754" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-673754" has status "Ready":"False"
	I0731 18:08:45.281048   73479 pod_ready.go:81] duration metric: took 11.435327ms for pod "kube-controller-manager-no-preload-673754" in "kube-system" namespace to be "Ready" ...
	E0731 18:08:45.281057   73479 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-673754" hosting pod "kube-controller-manager-no-preload-673754" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-673754" has status "Ready":"False"
	I0731 18:08:45.281065   73479 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-hqxh6" in "kube-system" namespace to be "Ready" ...
	I0731 18:08:45.684313   73479 pod_ready.go:97] node "no-preload-673754" hosting pod "kube-proxy-hqxh6" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-673754" has status "Ready":"False"
	I0731 18:08:45.684347   73479 pod_ready.go:81] duration metric: took 403.272559ms for pod "kube-proxy-hqxh6" in "kube-system" namespace to be "Ready" ...
	E0731 18:08:45.684356   73479 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-673754" hosting pod "kube-proxy-hqxh6" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-673754" has status "Ready":"False"
	I0731 18:08:45.684362   73479 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-no-preload-673754" in "kube-system" namespace to be "Ready" ...
	I0731 18:08:46.082388   73479 pod_ready.go:97] node "no-preload-673754" hosting pod "kube-scheduler-no-preload-673754" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-673754" has status "Ready":"False"
	I0731 18:08:46.082419   73479 pod_ready.go:81] duration metric: took 398.048808ms for pod "kube-scheduler-no-preload-673754" in "kube-system" namespace to be "Ready" ...
	E0731 18:08:46.082432   73479 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-673754" hosting pod "kube-scheduler-no-preload-673754" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-673754" has status "Ready":"False"
	I0731 18:08:46.082442   73479 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-78fcd8795b-27pkr" in "kube-system" namespace to be "Ready" ...
	I0731 18:08:46.482445   73479 pod_ready.go:97] node "no-preload-673754" hosting pod "metrics-server-78fcd8795b-27pkr" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-673754" has status "Ready":"False"
	I0731 18:08:46.482472   73479 pod_ready.go:81] duration metric: took 400.02111ms for pod "metrics-server-78fcd8795b-27pkr" in "kube-system" namespace to be "Ready" ...
	E0731 18:08:46.482486   73479 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-673754" hosting pod "metrics-server-78fcd8795b-27pkr" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-673754" has status "Ready":"False"
	I0731 18:08:46.482493   73479 pod_ready.go:38] duration metric: took 1.230141723s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
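Each pod_ready.go wait above ultimately checks the pod's PodReady condition, and while the hosting node is itself NotReady it records the condition and skips ahead rather than blocking. With client-go, the underlying readiness check is roughly the following (a sketch; isPodReady is a hypothetical helper, and the kubeconfig path and pod name are taken from the log):

package main

import (
	"context"
	"fmt"
	"os"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// isPodReady fetches a pod and reports whether its PodReady condition is True.
func isPodReady(ctx context.Context, cs kubernetes.Interface, namespace, name string) (bool, error) {
	pod, err := cs.CoreV1().Pods(namespace).Get(ctx, name, metav1.GetOptions{})
	if err != nil {
		return false, err
	}
	for _, cond := range pod.Status.Conditions {
		if cond.Type == corev1.PodReady {
			return cond.Status == corev1.ConditionTrue, nil
		}
	}
	return false, nil
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/19349-8084/kubeconfig")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	ready, err := isPodReady(context.Background(), cs, "kube-system", "coredns-5cfdc65f69-k7clq")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("ready:", ready)
}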
	I0731 18:08:46.482509   73479 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0731 18:08:46.495481   73479 ops.go:34] apiserver oom_adj: -16
	I0731 18:08:46.495502   73479 kubeadm.go:597] duration metric: took 8.65989212s to restartPrimaryControlPlane
	I0731 18:08:46.495513   73479 kubeadm.go:394] duration metric: took 8.71382049s to StartCluster
	I0731 18:08:46.495533   73479 settings.go:142] acquiring lock: {Name:mk8ac8269296bf0b887a00ff74caaad180005f89 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 18:08:46.495615   73479 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19349-8084/kubeconfig
	I0731 18:08:46.497426   73479 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19349-8084/kubeconfig: {Name:mkf2340bc1ada3c683f132aca37228d7588e1fdb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 18:08:46.497742   73479 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.61.126 Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0731 18:08:46.497816   73479 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0731 18:08:46.497911   73479 addons.go:69] Setting storage-provisioner=true in profile "no-preload-673754"
	I0731 18:08:46.497929   73479 addons.go:69] Setting default-storageclass=true in profile "no-preload-673754"
	I0731 18:08:46.497956   73479 addons.go:69] Setting metrics-server=true in profile "no-preload-673754"
	I0731 18:08:46.497973   73479 config.go:182] Loaded profile config "no-preload-673754": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0-beta.0
	I0731 18:08:46.497979   73479 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-673754"
	I0731 18:08:46.497988   73479 addons.go:234] Setting addon metrics-server=true in "no-preload-673754"
	W0731 18:08:46.498008   73479 addons.go:243] addon metrics-server should already be in state true
	I0731 18:08:46.497946   73479 addons.go:234] Setting addon storage-provisioner=true in "no-preload-673754"
	I0731 18:08:46.498056   73479 host.go:66] Checking if "no-preload-673754" exists ...
	W0731 18:08:46.498064   73479 addons.go:243] addon storage-provisioner should already be in state true
	I0731 18:08:46.498109   73479 host.go:66] Checking if "no-preload-673754" exists ...
	I0731 18:08:46.498315   73479 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 18:08:46.498315   73479 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 18:08:46.498333   73479 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 18:08:46.498340   73479 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 18:08:46.498448   73479 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 18:08:46.498470   73479 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 18:08:46.501144   73479 out.go:177] * Verifying Kubernetes components...
	I0731 18:08:46.502755   73479 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0731 18:08:46.514922   73479 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46797
	I0731 18:08:46.514923   73479 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44241
	I0731 18:08:46.515418   73479 main.go:141] libmachine: () Calling .GetVersion
	I0731 18:08:46.515618   73479 main.go:141] libmachine: () Calling .GetVersion
	I0731 18:08:46.515928   73479 main.go:141] libmachine: Using API Version  1
	I0731 18:08:46.515950   73479 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 18:08:46.516066   73479 main.go:141] libmachine: Using API Version  1
	I0731 18:08:46.516089   73479 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 18:08:46.516370   73479 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40219
	I0731 18:08:46.516440   73479 main.go:141] libmachine: () Calling .GetMachineName
	I0731 18:08:46.516663   73479 main.go:141] libmachine: () Calling .GetMachineName
	I0731 18:08:46.516809   73479 main.go:141] libmachine: () Calling .GetVersion
	I0731 18:08:46.516811   73479 main.go:141] libmachine: (no-preload-673754) Calling .GetState
	I0731 18:08:46.517213   73479 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 18:08:46.517247   73479 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 18:08:46.517280   73479 main.go:141] libmachine: Using API Version  1
	I0731 18:08:46.517302   73479 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 18:08:46.517618   73479 main.go:141] libmachine: () Calling .GetMachineName
	I0731 18:08:46.518191   73479 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 18:08:46.518220   73479 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 18:08:46.520511   73479 addons.go:234] Setting addon default-storageclass=true in "no-preload-673754"
	W0731 18:08:46.520536   73479 addons.go:243] addon default-storageclass should already be in state true
	I0731 18:08:46.520566   73479 host.go:66] Checking if "no-preload-673754" exists ...
	I0731 18:08:46.520917   73479 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 18:08:46.520968   73479 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 18:08:46.533349   73479 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38669
	I0731 18:08:46.533802   73479 main.go:141] libmachine: () Calling .GetVersion
	I0731 18:08:46.534250   73479 main.go:141] libmachine: Using API Version  1
	I0731 18:08:46.534272   73479 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 18:08:46.534582   73479 main.go:141] libmachine: () Calling .GetMachineName
	I0731 18:08:46.534720   73479 main.go:141] libmachine: (no-preload-673754) Calling .GetState
	I0731 18:08:46.535556   73479 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46055
	I0731 18:08:46.535979   73479 main.go:141] libmachine: () Calling .GetVersion
	I0731 18:08:46.536648   73479 main.go:141] libmachine: Using API Version  1
	I0731 18:08:46.536667   73479 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 18:08:46.537080   73479 main.go:141] libmachine: () Calling .GetMachineName
	I0731 18:08:46.537331   73479 main.go:141] libmachine: (no-preload-673754) Calling .DriverName
	I0731 18:08:46.537398   73479 main.go:141] libmachine: (no-preload-673754) Calling .GetState
	I0731 18:08:46.538365   73479 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39435
	I0731 18:08:46.538929   73479 main.go:141] libmachine: () Calling .GetVersion
	I0731 18:08:46.539194   73479 main.go:141] libmachine: (no-preload-673754) Calling .DriverName
	I0731 18:08:46.539401   73479 main.go:141] libmachine: Using API Version  1
	I0731 18:08:46.539419   73479 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 18:08:46.539766   73479 main.go:141] libmachine: () Calling .GetMachineName
	I0731 18:08:46.540360   73479 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0731 18:08:46.540447   73479 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 18:08:46.540801   73479 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 18:08:46.541139   73479 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0731 18:08:46.541916   73479 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0731 18:08:46.541932   73479 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0731 18:08:46.541952   73479 main.go:141] libmachine: (no-preload-673754) Calling .GetSSHHostname
	I0731 18:08:46.542506   73479 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0731 18:08:46.542524   73479 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0731 18:08:46.542541   73479 main.go:141] libmachine: (no-preload-673754) Calling .GetSSHHostname
	I0731 18:08:46.545293   73479 main.go:141] libmachine: (no-preload-673754) DBG | domain no-preload-673754 has defined MAC address 52:54:00:5a:ec:78 in network mk-no-preload-673754
	I0731 18:08:46.545631   73479 main.go:141] libmachine: (no-preload-673754) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:ec:78", ip: ""} in network mk-no-preload-673754: {Iface:virbr2 ExpiryTime:2024-07-31 19:08:12 +0000 UTC Type:0 Mac:52:54:00:5a:ec:78 Iaid: IPaddr:192.168.61.126 Prefix:24 Hostname:no-preload-673754 Clientid:01:52:54:00:5a:ec:78}
	I0731 18:08:46.545759   73479 main.go:141] libmachine: (no-preload-673754) DBG | domain no-preload-673754 has defined IP address 192.168.61.126 and MAC address 52:54:00:5a:ec:78 in network mk-no-preload-673754
	I0731 18:08:46.545829   73479 main.go:141] libmachine: (no-preload-673754) Calling .GetSSHPort
	I0731 18:08:46.545985   73479 main.go:141] libmachine: (no-preload-673754) Calling .GetSSHKeyPath
	I0731 18:08:46.546116   73479 main.go:141] libmachine: (no-preload-673754) Calling .GetSSHUsername
	I0731 18:08:46.546256   73479 sshutil.go:53] new ssh client: &{IP:192.168.61.126 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19349-8084/.minikube/machines/no-preload-673754/id_rsa Username:docker}
	I0731 18:08:46.546384   73479 main.go:141] libmachine: (no-preload-673754) DBG | domain no-preload-673754 has defined MAC address 52:54:00:5a:ec:78 in network mk-no-preload-673754
	I0731 18:08:46.546888   73479 main.go:141] libmachine: (no-preload-673754) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:ec:78", ip: ""} in network mk-no-preload-673754: {Iface:virbr2 ExpiryTime:2024-07-31 19:08:12 +0000 UTC Type:0 Mac:52:54:00:5a:ec:78 Iaid: IPaddr:192.168.61.126 Prefix:24 Hostname:no-preload-673754 Clientid:01:52:54:00:5a:ec:78}
	I0731 18:08:46.546907   73479 main.go:141] libmachine: (no-preload-673754) DBG | domain no-preload-673754 has defined IP address 192.168.61.126 and MAC address 52:54:00:5a:ec:78 in network mk-no-preload-673754
	I0731 18:08:46.546924   73479 main.go:141] libmachine: (no-preload-673754) Calling .GetSSHPort
	I0731 18:08:46.547090   73479 main.go:141] libmachine: (no-preload-673754) Calling .GetSSHKeyPath
	I0731 18:08:46.547256   73479 main.go:141] libmachine: (no-preload-673754) Calling .GetSSHUsername
	I0731 18:08:46.547434   73479 sshutil.go:53] new ssh client: &{IP:192.168.61.126 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19349-8084/.minikube/machines/no-preload-673754/id_rsa Username:docker}
	I0731 18:08:46.570759   73479 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40485
	I0731 18:08:46.571222   73479 main.go:141] libmachine: () Calling .GetVersion
	I0731 18:08:46.571668   73479 main.go:141] libmachine: Using API Version  1
	I0731 18:08:46.571688   73479 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 18:08:46.572207   73479 main.go:141] libmachine: () Calling .GetMachineName
	I0731 18:08:46.572367   73479 main.go:141] libmachine: (no-preload-673754) Calling .GetState
	I0731 18:08:46.574368   73479 main.go:141] libmachine: (no-preload-673754) Calling .DriverName
	I0731 18:08:46.574582   73479 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0731 18:08:46.574607   73479 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0731 18:08:46.574627   73479 main.go:141] libmachine: (no-preload-673754) Calling .GetSSHHostname
	I0731 18:08:46.577768   73479 main.go:141] libmachine: (no-preload-673754) DBG | domain no-preload-673754 has defined MAC address 52:54:00:5a:ec:78 in network mk-no-preload-673754
	I0731 18:08:46.578542   73479 main.go:141] libmachine: (no-preload-673754) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:ec:78", ip: ""} in network mk-no-preload-673754: {Iface:virbr2 ExpiryTime:2024-07-31 19:08:12 +0000 UTC Type:0 Mac:52:54:00:5a:ec:78 Iaid: IPaddr:192.168.61.126 Prefix:24 Hostname:no-preload-673754 Clientid:01:52:54:00:5a:ec:78}
	I0731 18:08:46.578567   73479 main.go:141] libmachine: (no-preload-673754) DBG | domain no-preload-673754 has defined IP address 192.168.61.126 and MAC address 52:54:00:5a:ec:78 in network mk-no-preload-673754
	I0731 18:08:46.578741   73479 main.go:141] libmachine: (no-preload-673754) Calling .GetSSHPort
	I0731 18:08:46.578911   73479 main.go:141] libmachine: (no-preload-673754) Calling .GetSSHKeyPath
	I0731 18:08:46.579047   73479 main.go:141] libmachine: (no-preload-673754) Calling .GetSSHUsername
	I0731 18:08:46.579459   73479 sshutil.go:53] new ssh client: &{IP:192.168.61.126 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19349-8084/.minikube/machines/no-preload-673754/id_rsa Username:docker}
	I0731 18:08:46.700752   73479 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0731 18:08:46.720967   73479 node_ready.go:35] waiting up to 6m0s for node "no-preload-673754" to be "Ready" ...
	I0731 18:08:46.798188   73479 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0731 18:08:46.802534   73479 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0731 18:08:46.802564   73479 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0731 18:08:46.828038   73479 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0731 18:08:46.859309   73479 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0731 18:08:46.859337   73479 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0731 18:08:46.921507   73479 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0731 18:08:46.921536   73479 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0731 18:08:46.958759   73479 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0731 18:08:48.106542   73479 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.278462071s)
	I0731 18:08:48.106599   73479 main.go:141] libmachine: Making call to close driver server
	I0731 18:08:48.106608   73479 main.go:141] libmachine: (no-preload-673754) Calling .Close
	I0731 18:08:48.107151   73479 main.go:141] libmachine: Successfully made call to close driver server
	I0731 18:08:48.107177   73479 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 18:08:48.107187   73479 main.go:141] libmachine: Making call to close driver server
	I0731 18:08:48.107196   73479 main.go:141] libmachine: (no-preload-673754) Calling .Close
	I0731 18:08:48.107601   73479 main.go:141] libmachine: (no-preload-673754) DBG | Closing plugin on server side
	I0731 18:08:48.107604   73479 main.go:141] libmachine: Successfully made call to close driver server
	I0731 18:08:48.107631   73479 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 18:08:48.107831   73479 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.309610972s)
	I0731 18:08:48.107872   73479 main.go:141] libmachine: Making call to close driver server
	I0731 18:08:48.107882   73479 main.go:141] libmachine: (no-preload-673754) Calling .Close
	I0731 18:08:48.108105   73479 main.go:141] libmachine: Successfully made call to close driver server
	I0731 18:08:48.108119   73479 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 18:08:48.108138   73479 main.go:141] libmachine: Making call to close driver server
	I0731 18:08:48.108150   73479 main.go:141] libmachine: (no-preload-673754) Calling .Close
	I0731 18:08:48.108351   73479 main.go:141] libmachine: Successfully made call to close driver server
	I0731 18:08:48.108367   73479 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 18:08:48.118038   73479 main.go:141] libmachine: Making call to close driver server
	I0731 18:08:48.118055   73479 main.go:141] libmachine: (no-preload-673754) Calling .Close
	I0731 18:08:48.118329   73479 main.go:141] libmachine: Successfully made call to close driver server
	I0731 18:08:48.118349   73479 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 18:08:48.128563   73479 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.169765123s)
	I0731 18:08:48.128606   73479 main.go:141] libmachine: Making call to close driver server
	I0731 18:08:48.128619   73479 main.go:141] libmachine: (no-preload-673754) Calling .Close
	I0731 18:08:48.128901   73479 main.go:141] libmachine: Successfully made call to close driver server
	I0731 18:08:48.128915   73479 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 18:08:48.128924   73479 main.go:141] libmachine: Making call to close driver server
	I0731 18:08:48.128932   73479 main.go:141] libmachine: (no-preload-673754) Calling .Close
	I0731 18:08:48.129137   73479 main.go:141] libmachine: Successfully made call to close driver server
	I0731 18:08:48.129152   73479 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 18:08:48.129162   73479 addons.go:475] Verifying addon metrics-server=true in "no-preload-673754"
	I0731 18:08:48.129174   73479 main.go:141] libmachine: (no-preload-673754) DBG | Closing plugin on server side
	I0731 18:08:48.130887   73479 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
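Each addon above is enabled by copying its manifests into /etc/kubernetes/addons on the node and applying them with the bundled kubectl under the cluster's kubeconfig, as the "kubectl apply -f ..." runs show. A condensed sketch of that apply step (applyAddon is an assumed helper; the real runs wrap the command in sudo over SSH):

package main

import (
	"fmt"
	"os"
	"os/exec"
)

// applyAddon applies one or more addon manifests with the bundled kubectl,
// pointing it at the node's kubeconfig via the KUBECONFIG environment variable.
func applyAddon(kubectl, kubeconfig string, manifests ...string) error {
	args := []string{"apply"}
	for _, m := range manifests {
		args = append(args, "-f", m)
	}
	cmd := exec.Command(kubectl, args...)
	cmd.Env = append(os.Environ(), "KUBECONFIG="+kubeconfig)
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	return cmd.Run()
}

func main() {
	err := applyAddon(
		"/var/lib/minikube/binaries/v1.31.0-beta.0/kubectl",
		"/var/lib/minikube/kubeconfig",
		"/etc/kubernetes/addons/metrics-apiservice.yaml",
		"/etc/kubernetes/addons/metrics-server-deployment.yaml",
		"/etc/kubernetes/addons/metrics-server-rbac.yaml",
		"/etc/kubernetes/addons/metrics-server-service.yaml",
	)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}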
	I0731 18:08:46.271648   73800 pod_ready.go:102] pod "metrics-server-569cc877fc-fzxrw" in "kube-system" namespace has status "Ready":"False"
	I0731 18:08:48.271754   73800 pod_ready.go:102] pod "metrics-server-569cc877fc-fzxrw" in "kube-system" namespace has status "Ready":"False"
	I0731 18:08:45.478411   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:08:45.978802   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:08:46.478407   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:08:46.978134   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:08:47.479125   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:08:47.978991   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:08:48.478597   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:08:48.978742   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:08:49.479320   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:08:49.978288   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:08:46.779263   73696 pod_ready.go:102] pod "metrics-server-569cc877fc-64hp4" in "kube-system" namespace has status "Ready":"False"
	I0731 18:08:48.779361   73696 pod_ready.go:102] pod "metrics-server-569cc877fc-64hp4" in "kube-system" namespace has status "Ready":"False"
	I0731 18:08:48.131964   73479 addons.go:510] duration metric: took 1.634151286s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I0731 18:08:48.725682   73479 node_ready.go:53] node "no-preload-673754" has status "Ready":"False"
	I0731 18:08:51.231081   73479 node_ready.go:53] node "no-preload-673754" has status "Ready":"False"
	I0731 18:08:50.771387   73800 pod_ready.go:102] pod "metrics-server-569cc877fc-fzxrw" in "kube-system" namespace has status "Ready":"False"
	I0731 18:08:52.771438   73800 pod_ready.go:102] pod "metrics-server-569cc877fc-fzxrw" in "kube-system" namespace has status "Ready":"False"
	I0731 18:08:50.478112   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:08:50.978272   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:08:51.478319   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:08:51.978880   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:08:52.479176   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:08:52.979001   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:08:53.478508   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:08:53.978517   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:08:54.478857   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:08:54.978290   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:08:51.278348   73696 pod_ready.go:102] pod "metrics-server-569cc877fc-64hp4" in "kube-system" namespace has status "Ready":"False"
	I0731 18:08:53.278456   73696 pod_ready.go:102] pod "metrics-server-569cc877fc-64hp4" in "kube-system" namespace has status "Ready":"False"
	I0731 18:08:55.278495   73696 pod_ready.go:102] pod "metrics-server-569cc877fc-64hp4" in "kube-system" namespace has status "Ready":"False"
	I0731 18:08:53.725153   73479 node_ready.go:53] node "no-preload-673754" has status "Ready":"False"
	I0731 18:08:54.224475   73479 node_ready.go:49] node "no-preload-673754" has status "Ready":"True"
	I0731 18:08:54.224505   73479 node_ready.go:38] duration metric: took 7.503503116s for node "no-preload-673754" to be "Ready" ...
	I0731 18:08:54.224517   73479 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0731 18:08:54.231434   73479 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5cfdc65f69-k7clq" in "kube-system" namespace to be "Ready" ...
	I0731 18:08:56.237804   73479 pod_ready.go:102] pod "coredns-5cfdc65f69-k7clq" in "kube-system" namespace has status "Ready":"False"
	I0731 18:08:54.772597   73800 pod_ready.go:102] pod "metrics-server-569cc877fc-fzxrw" in "kube-system" namespace has status "Ready":"False"
	I0731 18:08:57.271778   73800 pod_ready.go:102] pod "metrics-server-569cc877fc-fzxrw" in "kube-system" namespace has status "Ready":"False"
	I0731 18:08:55.478727   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:08:55.978552   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:08:56.478246   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:08:56.978732   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:08:57.478262   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:08:57.978216   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:08:58.478212   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:08:58.978406   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:08:59.478270   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:08:59.978221   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:08:57.781459   73696 pod_ready.go:102] pod "metrics-server-569cc877fc-64hp4" in "kube-system" namespace has status "Ready":"False"
	I0731 18:09:00.278913   73696 pod_ready.go:102] pod "metrics-server-569cc877fc-64hp4" in "kube-system" namespace has status "Ready":"False"
	I0731 18:08:58.740148   73479 pod_ready.go:102] pod "coredns-5cfdc65f69-k7clq" in "kube-system" namespace has status "Ready":"False"
	I0731 18:09:01.237849   73479 pod_ready.go:92] pod "coredns-5cfdc65f69-k7clq" in "kube-system" namespace has status "Ready":"True"
	I0731 18:09:01.237874   73479 pod_ready.go:81] duration metric: took 7.00641308s for pod "coredns-5cfdc65f69-k7clq" in "kube-system" namespace to be "Ready" ...
	I0731 18:09:01.237887   73479 pod_ready.go:78] waiting up to 6m0s for pod "etcd-no-preload-673754" in "kube-system" namespace to be "Ready" ...
	I0731 18:09:01.242105   73479 pod_ready.go:92] pod "etcd-no-preload-673754" in "kube-system" namespace has status "Ready":"True"
	I0731 18:09:01.242122   73479 pod_ready.go:81] duration metric: took 4.229266ms for pod "etcd-no-preload-673754" in "kube-system" namespace to be "Ready" ...
	I0731 18:09:01.242133   73479 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-no-preload-673754" in "kube-system" namespace to be "Ready" ...
	I0731 18:09:01.246652   73479 pod_ready.go:92] pod "kube-apiserver-no-preload-673754" in "kube-system" namespace has status "Ready":"True"
	I0731 18:09:01.246674   73479 pod_ready.go:81] duration metric: took 4.534937ms for pod "kube-apiserver-no-preload-673754" in "kube-system" namespace to be "Ready" ...
	I0731 18:09:01.246686   73479 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-no-preload-673754" in "kube-system" namespace to be "Ready" ...
	I0731 18:09:01.251284   73479 pod_ready.go:92] pod "kube-controller-manager-no-preload-673754" in "kube-system" namespace has status "Ready":"True"
	I0731 18:09:01.251302   73479 pod_ready.go:81] duration metric: took 4.608584ms for pod "kube-controller-manager-no-preload-673754" in "kube-system" namespace to be "Ready" ...
	I0731 18:09:01.251321   73479 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-hqxh6" in "kube-system" namespace to be "Ready" ...
	I0731 18:09:01.255030   73479 pod_ready.go:92] pod "kube-proxy-hqxh6" in "kube-system" namespace has status "Ready":"True"
	I0731 18:09:01.255045   73479 pod_ready.go:81] duration metric: took 3.718917ms for pod "kube-proxy-hqxh6" in "kube-system" namespace to be "Ready" ...
	I0731 18:09:01.255052   73479 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-no-preload-673754" in "kube-system" namespace to be "Ready" ...
	I0731 18:09:01.636799   73479 pod_ready.go:92] pod "kube-scheduler-no-preload-673754" in "kube-system" namespace has status "Ready":"True"
	I0731 18:09:01.636826   73479 pod_ready.go:81] duration metric: took 381.767881ms for pod "kube-scheduler-no-preload-673754" in "kube-system" namespace to be "Ready" ...
	I0731 18:09:01.636835   73479 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-78fcd8795b-27pkr" in "kube-system" namespace to be "Ready" ...
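The run above is minikube's pod_ready walk: each system-critical pod in kube-system is polled until its Ready condition reports True, after which the wait moves on to the metrics-server pod, which stays not-Ready for the rest of this run. A hedged sketch of an equivalent manual probe for one of these pods, reusing the profile/context and pod names from the log (the jsonpath query is an illustration only, not a command minikube itself runs):

	# Ask the cluster directly for the Ready condition of one control-plane pod.
	kubectl --context no-preload-673754 -n kube-system get pod kube-scheduler-no-preload-673754 \
	  -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'
	# Prints "True" once the kubelet reports the pod as Ready, matching pod_ready.go's check.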
	I0731 18:08:59.771686   73800 pod_ready.go:102] pod "metrics-server-569cc877fc-fzxrw" in "kube-system" namespace has status "Ready":"False"
	I0731 18:09:02.271396   73800 pod_ready.go:102] pod "metrics-server-569cc877fc-fzxrw" in "kube-system" namespace has status "Ready":"False"
	I0731 18:09:00.478785   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:09:00.978350   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:09:01.478635   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:09:01.978192   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:09:02.478480   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:09:02.979021   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:09:03.478366   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:09:03.978984   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:09:04.479143   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:09:04.978913   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:09:02.279613   73696 pod_ready.go:102] pod "metrics-server-569cc877fc-64hp4" in "kube-system" namespace has status "Ready":"False"
	I0731 18:09:04.778482   73696 pod_ready.go:102] pod "metrics-server-569cc877fc-64hp4" in "kube-system" namespace has status "Ready":"False"
	I0731 18:09:03.642978   73479 pod_ready.go:102] pod "metrics-server-78fcd8795b-27pkr" in "kube-system" namespace has status "Ready":"False"
	I0731 18:09:05.644941   73479 pod_ready.go:102] pod "metrics-server-78fcd8795b-27pkr" in "kube-system" namespace has status "Ready":"False"
	I0731 18:09:04.771938   73800 pod_ready.go:102] pod "metrics-server-569cc877fc-fzxrw" in "kube-system" namespace has status "Ready":"False"
	I0731 18:09:07.271165   73800 pod_ready.go:102] pod "metrics-server-569cc877fc-fzxrw" in "kube-system" namespace has status "Ready":"False"
	I0731 18:09:05.478608   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:09:05.978345   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:09:06.478435   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:09:06.978551   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:09:07.478131   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:09:07.978354   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:09:08.478977   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:09:08.979122   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:09:09.478279   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:09:09.978350   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:09:06.780364   73696 pod_ready.go:102] pod "metrics-server-569cc877fc-64hp4" in "kube-system" namespace has status "Ready":"False"
	I0731 18:09:09.278573   73696 pod_ready.go:102] pod "metrics-server-569cc877fc-64hp4" in "kube-system" namespace has status "Ready":"False"
	I0731 18:09:08.142974   73479 pod_ready.go:102] pod "metrics-server-78fcd8795b-27pkr" in "kube-system" namespace has status "Ready":"False"
	I0731 18:09:10.643136   73479 pod_ready.go:102] pod "metrics-server-78fcd8795b-27pkr" in "kube-system" namespace has status "Ready":"False"
	I0731 18:09:09.771950   73800 pod_ready.go:102] pod "metrics-server-569cc877fc-fzxrw" in "kube-system" namespace has status "Ready":"False"
	I0731 18:09:11.772464   73800 pod_ready.go:102] pod "metrics-server-569cc877fc-fzxrw" in "kube-system" namespace has status "Ready":"False"
	I0731 18:09:13.773164   73800 pod_ready.go:102] pod "metrics-server-569cc877fc-fzxrw" in "kube-system" namespace has status "Ready":"False"
	I0731 18:09:10.479086   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 18:09:10.479175   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 18:09:10.516364   74203 cri.go:89] found id: ""
	I0731 18:09:10.516389   74203 logs.go:276] 0 containers: []
	W0731 18:09:10.516405   74203 logs.go:278] No container was found matching "kube-apiserver"
	I0731 18:09:10.516411   74203 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 18:09:10.516464   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 18:09:10.549398   74203 cri.go:89] found id: ""
	I0731 18:09:10.549422   74203 logs.go:276] 0 containers: []
	W0731 18:09:10.549433   74203 logs.go:278] No container was found matching "etcd"
	I0731 18:09:10.549440   74203 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 18:09:10.549503   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 18:09:10.584290   74203 cri.go:89] found id: ""
	I0731 18:09:10.584314   74203 logs.go:276] 0 containers: []
	W0731 18:09:10.584322   74203 logs.go:278] No container was found matching "coredns"
	I0731 18:09:10.584327   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 18:09:10.584381   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 18:09:10.615832   74203 cri.go:89] found id: ""
	I0731 18:09:10.615860   74203 logs.go:276] 0 containers: []
	W0731 18:09:10.615871   74203 logs.go:278] No container was found matching "kube-scheduler"
	I0731 18:09:10.615878   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 18:09:10.615941   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 18:09:10.647597   74203 cri.go:89] found id: ""
	I0731 18:09:10.647617   74203 logs.go:276] 0 containers: []
	W0731 18:09:10.647624   74203 logs.go:278] No container was found matching "kube-proxy"
	I0731 18:09:10.647629   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 18:09:10.647686   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 18:09:10.680981   74203 cri.go:89] found id: ""
	I0731 18:09:10.681016   74203 logs.go:276] 0 containers: []
	W0731 18:09:10.681027   74203 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 18:09:10.681033   74203 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 18:09:10.681093   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 18:09:10.713798   74203 cri.go:89] found id: ""
	I0731 18:09:10.713839   74203 logs.go:276] 0 containers: []
	W0731 18:09:10.713851   74203 logs.go:278] No container was found matching "kindnet"
	I0731 18:09:10.713865   74203 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 18:09:10.713937   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 18:09:10.746378   74203 cri.go:89] found id: ""
	I0731 18:09:10.746405   74203 logs.go:276] 0 containers: []
	W0731 18:09:10.746413   74203 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 18:09:10.746423   74203 logs.go:123] Gathering logs for kubelet ...
	I0731 18:09:10.746439   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 18:09:10.799156   74203 logs.go:123] Gathering logs for dmesg ...
	I0731 18:09:10.799187   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 18:09:10.812388   74203 logs.go:123] Gathering logs for describe nodes ...
	I0731 18:09:10.812413   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 18:09:10.932251   74203 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 18:09:10.932271   74203 logs.go:123] Gathering logs for CRI-O ...
	I0731 18:09:10.932285   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 18:09:10.996810   74203 logs.go:123] Gathering logs for container status ...
	I0731 18:09:10.996840   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
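The roughly half-second retries above are process 74203 probing for a running kube-apiserver; when a pass finds none, logs.go falls back to listing CRI containers and gathering kubelet, dmesg, describe-nodes, CRI-O and container-status output before probing again. A minimal shell sketch of one such probe-and-gather pass, assembled only from commands quoted verbatim in this log (the kubectl binary path and kubeconfig location are the ones reported above and will differ on other setups):

	#!/bin/bash
	# Probe for a kube-apiserver process the way the log does; if absent, gather diagnostics.
	if ! sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null; then
	  # No apiserver process: confirm no container is running either.
	  sudo crictl ps -a --quiet --name=kube-apiserver
	  # Collect the same logs minikube gathers on each failed pass.
	  sudo journalctl -u kubelet -n 400
	  sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
	  sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig
	  sudo journalctl -u crio -n 400
	  sudo $(which crictl || echo crictl) ps -a || sudo docker ps -a
	fi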
	I0731 18:09:13.533936   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:09:13.549194   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 18:09:13.549250   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 18:09:13.599350   74203 cri.go:89] found id: ""
	I0731 18:09:13.599389   74203 logs.go:276] 0 containers: []
	W0731 18:09:13.599400   74203 logs.go:278] No container was found matching "kube-apiserver"
	I0731 18:09:13.599407   74203 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 18:09:13.599466   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 18:09:13.651736   74203 cri.go:89] found id: ""
	I0731 18:09:13.651771   74203 logs.go:276] 0 containers: []
	W0731 18:09:13.651791   74203 logs.go:278] No container was found matching "etcd"
	I0731 18:09:13.651798   74203 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 18:09:13.651855   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 18:09:13.699804   74203 cri.go:89] found id: ""
	I0731 18:09:13.699832   74203 logs.go:276] 0 containers: []
	W0731 18:09:13.699841   74203 logs.go:278] No container was found matching "coredns"
	I0731 18:09:13.699846   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 18:09:13.699906   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 18:09:13.732760   74203 cri.go:89] found id: ""
	I0731 18:09:13.732781   74203 logs.go:276] 0 containers: []
	W0731 18:09:13.732788   74203 logs.go:278] No container was found matching "kube-scheduler"
	I0731 18:09:13.732794   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 18:09:13.732849   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 18:09:13.766865   74203 cri.go:89] found id: ""
	I0731 18:09:13.766892   74203 logs.go:276] 0 containers: []
	W0731 18:09:13.766902   74203 logs.go:278] No container was found matching "kube-proxy"
	I0731 18:09:13.766910   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 18:09:13.766964   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 18:09:13.804706   74203 cri.go:89] found id: ""
	I0731 18:09:13.804733   74203 logs.go:276] 0 containers: []
	W0731 18:09:13.804743   74203 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 18:09:13.804757   74203 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 18:09:13.804821   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 18:09:13.838432   74203 cri.go:89] found id: ""
	I0731 18:09:13.838461   74203 logs.go:276] 0 containers: []
	W0731 18:09:13.838472   74203 logs.go:278] No container was found matching "kindnet"
	I0731 18:09:13.838479   74203 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 18:09:13.838534   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 18:09:13.870455   74203 cri.go:89] found id: ""
	I0731 18:09:13.870480   74203 logs.go:276] 0 containers: []
	W0731 18:09:13.870490   74203 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 18:09:13.870498   74203 logs.go:123] Gathering logs for kubelet ...
	I0731 18:09:13.870510   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 18:09:13.922911   74203 logs.go:123] Gathering logs for dmesg ...
	I0731 18:09:13.922947   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 18:09:13.936075   74203 logs.go:123] Gathering logs for describe nodes ...
	I0731 18:09:13.936098   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 18:09:14.006766   74203 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 18:09:14.006790   74203 logs.go:123] Gathering logs for CRI-O ...
	I0731 18:09:14.006810   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 18:09:14.071066   74203 logs.go:123] Gathering logs for container status ...
	I0731 18:09:14.071100   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 18:09:11.278892   73696 pod_ready.go:102] pod "metrics-server-569cc877fc-64hp4" in "kube-system" namespace has status "Ready":"False"
	I0731 18:09:13.279644   73696 pod_ready.go:102] pod "metrics-server-569cc877fc-64hp4" in "kube-system" namespace has status "Ready":"False"
	I0731 18:09:15.280298   73696 pod_ready.go:102] pod "metrics-server-569cc877fc-64hp4" in "kube-system" namespace has status "Ready":"False"
	I0731 18:09:12.643341   73479 pod_ready.go:102] pod "metrics-server-78fcd8795b-27pkr" in "kube-system" namespace has status "Ready":"False"
	I0731 18:09:14.643636   73479 pod_ready.go:102] pod "metrics-server-78fcd8795b-27pkr" in "kube-system" namespace has status "Ready":"False"
	I0731 18:09:16.280976   73800 pod_ready.go:102] pod "metrics-server-569cc877fc-fzxrw" in "kube-system" namespace has status "Ready":"False"
	I0731 18:09:18.772338   73800 pod_ready.go:102] pod "metrics-server-569cc877fc-fzxrw" in "kube-system" namespace has status "Ready":"False"
	I0731 18:09:16.615212   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:09:16.627439   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 18:09:16.627499   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 18:09:16.660764   74203 cri.go:89] found id: ""
	I0731 18:09:16.660785   74203 logs.go:276] 0 containers: []
	W0731 18:09:16.660792   74203 logs.go:278] No container was found matching "kube-apiserver"
	I0731 18:09:16.660798   74203 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 18:09:16.660842   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 18:09:16.697154   74203 cri.go:89] found id: ""
	I0731 18:09:16.697182   74203 logs.go:276] 0 containers: []
	W0731 18:09:16.697196   74203 logs.go:278] No container was found matching "etcd"
	I0731 18:09:16.697201   74203 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 18:09:16.697259   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 18:09:16.730263   74203 cri.go:89] found id: ""
	I0731 18:09:16.730284   74203 logs.go:276] 0 containers: []
	W0731 18:09:16.730291   74203 logs.go:278] No container was found matching "coredns"
	I0731 18:09:16.730318   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 18:09:16.730369   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 18:09:16.765226   74203 cri.go:89] found id: ""
	I0731 18:09:16.765249   74203 logs.go:276] 0 containers: []
	W0731 18:09:16.765257   74203 logs.go:278] No container was found matching "kube-scheduler"
	I0731 18:09:16.765262   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 18:09:16.765336   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 18:09:16.800502   74203 cri.go:89] found id: ""
	I0731 18:09:16.800528   74203 logs.go:276] 0 containers: []
	W0731 18:09:16.800535   74203 logs.go:278] No container was found matching "kube-proxy"
	I0731 18:09:16.800541   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 18:09:16.800599   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 18:09:16.837391   74203 cri.go:89] found id: ""
	I0731 18:09:16.837418   74203 logs.go:276] 0 containers: []
	W0731 18:09:16.837427   74203 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 18:09:16.837435   74203 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 18:09:16.837490   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 18:09:16.867606   74203 cri.go:89] found id: ""
	I0731 18:09:16.867628   74203 logs.go:276] 0 containers: []
	W0731 18:09:16.867637   74203 logs.go:278] No container was found matching "kindnet"
	I0731 18:09:16.867642   74203 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 18:09:16.867696   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 18:09:16.901639   74203 cri.go:89] found id: ""
	I0731 18:09:16.901669   74203 logs.go:276] 0 containers: []
	W0731 18:09:16.901681   74203 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 18:09:16.901693   74203 logs.go:123] Gathering logs for kubelet ...
	I0731 18:09:16.901707   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 18:09:16.951692   74203 logs.go:123] Gathering logs for dmesg ...
	I0731 18:09:16.951729   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 18:09:16.965069   74203 logs.go:123] Gathering logs for describe nodes ...
	I0731 18:09:16.965101   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 18:09:17.040337   74203 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 18:09:17.040358   74203 logs.go:123] Gathering logs for CRI-O ...
	I0731 18:09:17.040371   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 18:09:17.115058   74203 logs.go:123] Gathering logs for container status ...
	I0731 18:09:17.115093   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 18:09:19.651538   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:09:19.663682   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 18:09:19.663739   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 18:09:19.697851   74203 cri.go:89] found id: ""
	I0731 18:09:19.697879   74203 logs.go:276] 0 containers: []
	W0731 18:09:19.697894   74203 logs.go:278] No container was found matching "kube-apiserver"
	I0731 18:09:19.697900   74203 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 18:09:19.697996   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 18:09:19.732745   74203 cri.go:89] found id: ""
	I0731 18:09:19.732772   74203 logs.go:276] 0 containers: []
	W0731 18:09:19.732783   74203 logs.go:278] No container was found matching "etcd"
	I0731 18:09:19.732790   74203 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 18:09:19.732855   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 18:09:19.763843   74203 cri.go:89] found id: ""
	I0731 18:09:19.763865   74203 logs.go:276] 0 containers: []
	W0731 18:09:19.763873   74203 logs.go:278] No container was found matching "coredns"
	I0731 18:09:19.763878   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 18:09:19.763934   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 18:09:19.797398   74203 cri.go:89] found id: ""
	I0731 18:09:19.797422   74203 logs.go:276] 0 containers: []
	W0731 18:09:19.797429   74203 logs.go:278] No container was found matching "kube-scheduler"
	I0731 18:09:19.797434   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 18:09:19.797504   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 18:09:19.833239   74203 cri.go:89] found id: ""
	I0731 18:09:19.833268   74203 logs.go:276] 0 containers: []
	W0731 18:09:19.833278   74203 logs.go:278] No container was found matching "kube-proxy"
	I0731 18:09:19.833284   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 18:09:19.833340   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 18:09:19.866135   74203 cri.go:89] found id: ""
	I0731 18:09:19.866163   74203 logs.go:276] 0 containers: []
	W0731 18:09:19.866173   74203 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 18:09:19.866181   74203 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 18:09:19.866242   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 18:09:19.900581   74203 cri.go:89] found id: ""
	I0731 18:09:19.900606   74203 logs.go:276] 0 containers: []
	W0731 18:09:19.900615   74203 logs.go:278] No container was found matching "kindnet"
	I0731 18:09:19.900621   74203 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 18:09:19.900720   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 18:09:19.936451   74203 cri.go:89] found id: ""
	I0731 18:09:19.936475   74203 logs.go:276] 0 containers: []
	W0731 18:09:19.936487   74203 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 18:09:19.936496   74203 logs.go:123] Gathering logs for kubelet ...
	I0731 18:09:19.936508   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 18:09:19.990522   74203 logs.go:123] Gathering logs for dmesg ...
	I0731 18:09:19.990559   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 18:09:20.003460   74203 logs.go:123] Gathering logs for describe nodes ...
	I0731 18:09:20.003487   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 18:09:20.070869   74203 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 18:09:20.070893   74203 logs.go:123] Gathering logs for CRI-O ...
	I0731 18:09:20.070912   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 18:09:20.148316   74203 logs.go:123] Gathering logs for container status ...
	I0731 18:09:20.148354   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 18:09:17.779144   73696 pod_ready.go:102] pod "metrics-server-569cc877fc-64hp4" in "kube-system" namespace has status "Ready":"False"
	I0731 18:09:19.781539   73696 pod_ready.go:102] pod "metrics-server-569cc877fc-64hp4" in "kube-system" namespace has status "Ready":"False"
	I0731 18:09:17.143894   73479 pod_ready.go:102] pod "metrics-server-78fcd8795b-27pkr" in "kube-system" namespace has status "Ready":"False"
	I0731 18:09:19.642139   73479 pod_ready.go:102] pod "metrics-server-78fcd8795b-27pkr" in "kube-system" namespace has status "Ready":"False"
	I0731 18:09:21.642234   73479 pod_ready.go:102] pod "metrics-server-78fcd8795b-27pkr" in "kube-system" namespace has status "Ready":"False"
	I0731 18:09:21.271074   73800 pod_ready.go:102] pod "metrics-server-569cc877fc-fzxrw" in "kube-system" namespace has status "Ready":"False"
	I0731 18:09:23.771002   73800 pod_ready.go:102] pod "metrics-server-569cc877fc-fzxrw" in "kube-system" namespace has status "Ready":"False"
	I0731 18:09:22.685964   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:09:22.698740   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 18:09:22.698814   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 18:09:22.735321   74203 cri.go:89] found id: ""
	I0731 18:09:22.735350   74203 logs.go:276] 0 containers: []
	W0731 18:09:22.735360   74203 logs.go:278] No container was found matching "kube-apiserver"
	I0731 18:09:22.735367   74203 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 18:09:22.735428   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 18:09:22.767689   74203 cri.go:89] found id: ""
	I0731 18:09:22.767718   74203 logs.go:276] 0 containers: []
	W0731 18:09:22.767729   74203 logs.go:278] No container was found matching "etcd"
	I0731 18:09:22.767736   74203 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 18:09:22.767795   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 18:09:22.804010   74203 cri.go:89] found id: ""
	I0731 18:09:22.804036   74203 logs.go:276] 0 containers: []
	W0731 18:09:22.804045   74203 logs.go:278] No container was found matching "coredns"
	I0731 18:09:22.804050   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 18:09:22.804101   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 18:09:22.836820   74203 cri.go:89] found id: ""
	I0731 18:09:22.836847   74203 logs.go:276] 0 containers: []
	W0731 18:09:22.836858   74203 logs.go:278] No container was found matching "kube-scheduler"
	I0731 18:09:22.836874   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 18:09:22.836933   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 18:09:22.870163   74203 cri.go:89] found id: ""
	I0731 18:09:22.870187   74203 logs.go:276] 0 containers: []
	W0731 18:09:22.870194   74203 logs.go:278] No container was found matching "kube-proxy"
	I0731 18:09:22.870199   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 18:09:22.870270   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 18:09:22.905926   74203 cri.go:89] found id: ""
	I0731 18:09:22.905951   74203 logs.go:276] 0 containers: []
	W0731 18:09:22.905959   74203 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 18:09:22.905965   74203 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 18:09:22.906020   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 18:09:22.938926   74203 cri.go:89] found id: ""
	I0731 18:09:22.938949   74203 logs.go:276] 0 containers: []
	W0731 18:09:22.938957   74203 logs.go:278] No container was found matching "kindnet"
	I0731 18:09:22.938963   74203 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 18:09:22.939008   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 18:09:22.975150   74203 cri.go:89] found id: ""
	I0731 18:09:22.975185   74203 logs.go:276] 0 containers: []
	W0731 18:09:22.975194   74203 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 18:09:22.975204   74203 logs.go:123] Gathering logs for describe nodes ...
	I0731 18:09:22.975219   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 18:09:23.043265   74203 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 18:09:23.043290   74203 logs.go:123] Gathering logs for CRI-O ...
	I0731 18:09:23.043302   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 18:09:23.122681   74203 logs.go:123] Gathering logs for container status ...
	I0731 18:09:23.122717   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 18:09:23.161745   74203 logs.go:123] Gathering logs for kubelet ...
	I0731 18:09:23.161769   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 18:09:23.211274   74203 logs.go:123] Gathering logs for dmesg ...
	I0731 18:09:23.211305   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 18:09:22.278664   73696 pod_ready.go:102] pod "metrics-server-569cc877fc-64hp4" in "kube-system" namespace has status "Ready":"False"
	I0731 18:09:24.778771   73696 pod_ready.go:102] pod "metrics-server-569cc877fc-64hp4" in "kube-system" namespace has status "Ready":"False"
	I0731 18:09:23.643871   73479 pod_ready.go:102] pod "metrics-server-78fcd8795b-27pkr" in "kube-system" namespace has status "Ready":"False"
	I0731 18:09:26.143509   73479 pod_ready.go:102] pod "metrics-server-78fcd8795b-27pkr" in "kube-system" namespace has status "Ready":"False"
	I0731 18:09:25.771922   73800 pod_ready.go:102] pod "metrics-server-569cc877fc-fzxrw" in "kube-system" namespace has status "Ready":"False"
	I0731 18:09:27.772156   73800 pod_ready.go:102] pod "metrics-server-569cc877fc-fzxrw" in "kube-system" namespace has status "Ready":"False"
	I0731 18:09:25.724702   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:09:25.739335   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 18:09:25.739415   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 18:09:25.778238   74203 cri.go:89] found id: ""
	I0731 18:09:25.778264   74203 logs.go:276] 0 containers: []
	W0731 18:09:25.778274   74203 logs.go:278] No container was found matching "kube-apiserver"
	I0731 18:09:25.778282   74203 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 18:09:25.778349   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 18:09:25.816530   74203 cri.go:89] found id: ""
	I0731 18:09:25.816566   74203 logs.go:276] 0 containers: []
	W0731 18:09:25.816579   74203 logs.go:278] No container was found matching "etcd"
	I0731 18:09:25.816587   74203 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 18:09:25.816652   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 18:09:25.853524   74203 cri.go:89] found id: ""
	I0731 18:09:25.853562   74203 logs.go:276] 0 containers: []
	W0731 18:09:25.853575   74203 logs.go:278] No container was found matching "coredns"
	I0731 18:09:25.853583   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 18:09:25.853661   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 18:09:25.889690   74203 cri.go:89] found id: ""
	I0731 18:09:25.889719   74203 logs.go:276] 0 containers: []
	W0731 18:09:25.889728   74203 logs.go:278] No container was found matching "kube-scheduler"
	I0731 18:09:25.889734   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 18:09:25.889803   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 18:09:25.922409   74203 cri.go:89] found id: ""
	I0731 18:09:25.922441   74203 logs.go:276] 0 containers: []
	W0731 18:09:25.922452   74203 logs.go:278] No container was found matching "kube-proxy"
	I0731 18:09:25.922459   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 18:09:25.922512   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 18:09:25.956849   74203 cri.go:89] found id: ""
	I0731 18:09:25.956876   74203 logs.go:276] 0 containers: []
	W0731 18:09:25.956886   74203 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 18:09:25.956893   74203 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 18:09:25.956958   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 18:09:25.994190   74203 cri.go:89] found id: ""
	I0731 18:09:25.994212   74203 logs.go:276] 0 containers: []
	W0731 18:09:25.994220   74203 logs.go:278] No container was found matching "kindnet"
	I0731 18:09:25.994225   74203 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 18:09:25.994270   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 18:09:26.027980   74203 cri.go:89] found id: ""
	I0731 18:09:26.028005   74203 logs.go:276] 0 containers: []
	W0731 18:09:26.028014   74203 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 18:09:26.028025   74203 logs.go:123] Gathering logs for kubelet ...
	I0731 18:09:26.028044   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 18:09:26.076627   74203 logs.go:123] Gathering logs for dmesg ...
	I0731 18:09:26.076661   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 18:09:26.089439   74203 logs.go:123] Gathering logs for describe nodes ...
	I0731 18:09:26.089464   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 18:09:26.167298   74203 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 18:09:26.167319   74203 logs.go:123] Gathering logs for CRI-O ...
	I0731 18:09:26.167333   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 18:09:26.244611   74203 logs.go:123] Gathering logs for container status ...
	I0731 18:09:26.244644   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 18:09:28.787238   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:09:28.800136   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 18:09:28.800221   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 18:09:28.843038   74203 cri.go:89] found id: ""
	I0731 18:09:28.843062   74203 logs.go:276] 0 containers: []
	W0731 18:09:28.843070   74203 logs.go:278] No container was found matching "kube-apiserver"
	I0731 18:09:28.843076   74203 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 18:09:28.843154   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 18:09:28.876979   74203 cri.go:89] found id: ""
	I0731 18:09:28.877010   74203 logs.go:276] 0 containers: []
	W0731 18:09:28.877021   74203 logs.go:278] No container was found matching "etcd"
	I0731 18:09:28.877028   74203 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 18:09:28.877095   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 18:09:28.913105   74203 cri.go:89] found id: ""
	I0731 18:09:28.913137   74203 logs.go:276] 0 containers: []
	W0731 18:09:28.913147   74203 logs.go:278] No container was found matching "coredns"
	I0731 18:09:28.913155   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 18:09:28.913216   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 18:09:28.949113   74203 cri.go:89] found id: ""
	I0731 18:09:28.949144   74203 logs.go:276] 0 containers: []
	W0731 18:09:28.949153   74203 logs.go:278] No container was found matching "kube-scheduler"
	I0731 18:09:28.949160   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 18:09:28.949208   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 18:09:28.983159   74203 cri.go:89] found id: ""
	I0731 18:09:28.983187   74203 logs.go:276] 0 containers: []
	W0731 18:09:28.983195   74203 logs.go:278] No container was found matching "kube-proxy"
	I0731 18:09:28.983200   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 18:09:28.983276   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 18:09:29.016316   74203 cri.go:89] found id: ""
	I0731 18:09:29.016356   74203 logs.go:276] 0 containers: []
	W0731 18:09:29.016364   74203 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 18:09:29.016370   74203 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 18:09:29.016419   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 18:09:29.050015   74203 cri.go:89] found id: ""
	I0731 18:09:29.050047   74203 logs.go:276] 0 containers: []
	W0731 18:09:29.050058   74203 logs.go:278] No container was found matching "kindnet"
	I0731 18:09:29.050069   74203 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 18:09:29.050124   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 18:09:29.084711   74203 cri.go:89] found id: ""
	I0731 18:09:29.084739   74203 logs.go:276] 0 containers: []
	W0731 18:09:29.084749   74203 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 18:09:29.084760   74203 logs.go:123] Gathering logs for kubelet ...
	I0731 18:09:29.084777   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 18:09:29.135474   74203 logs.go:123] Gathering logs for dmesg ...
	I0731 18:09:29.135516   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 18:09:29.149989   74203 logs.go:123] Gathering logs for describe nodes ...
	I0731 18:09:29.150022   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 18:09:29.223652   74203 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 18:09:29.223676   74203 logs.go:123] Gathering logs for CRI-O ...
	I0731 18:09:29.223688   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 18:09:29.307949   74203 logs.go:123] Gathering logs for container status ...
	I0731 18:09:29.307983   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 18:09:26.779082   73696 pod_ready.go:102] pod "metrics-server-569cc877fc-64hp4" in "kube-system" namespace has status "Ready":"False"
	I0731 18:09:29.280030   73696 pod_ready.go:102] pod "metrics-server-569cc877fc-64hp4" in "kube-system" namespace has status "Ready":"False"
	I0731 18:09:28.143957   73479 pod_ready.go:102] pod "metrics-server-78fcd8795b-27pkr" in "kube-system" namespace has status "Ready":"False"
	I0731 18:09:30.643349   73479 pod_ready.go:102] pod "metrics-server-78fcd8795b-27pkr" in "kube-system" namespace has status "Ready":"False"
	I0731 18:09:30.271524   73800 pod_ready.go:102] pod "metrics-server-569cc877fc-fzxrw" in "kube-system" namespace has status "Ready":"False"
	I0731 18:09:32.271862   73800 pod_ready.go:102] pod "metrics-server-569cc877fc-fzxrw" in "kube-system" namespace has status "Ready":"False"
	I0731 18:09:31.848760   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:09:31.861409   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 18:09:31.861470   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 18:09:31.894485   74203 cri.go:89] found id: ""
	I0731 18:09:31.894505   74203 logs.go:276] 0 containers: []
	W0731 18:09:31.894513   74203 logs.go:278] No container was found matching "kube-apiserver"
	I0731 18:09:31.894518   74203 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 18:09:31.894563   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 18:09:31.926760   74203 cri.go:89] found id: ""
	I0731 18:09:31.926784   74203 logs.go:276] 0 containers: []
	W0731 18:09:31.926791   74203 logs.go:278] No container was found matching "etcd"
	I0731 18:09:31.926797   74203 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 18:09:31.926857   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 18:09:31.963010   74203 cri.go:89] found id: ""
	I0731 18:09:31.963042   74203 logs.go:276] 0 containers: []
	W0731 18:09:31.963055   74203 logs.go:278] No container was found matching "coredns"
	I0731 18:09:31.963062   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 18:09:31.963165   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 18:09:31.995221   74203 cri.go:89] found id: ""
	I0731 18:09:31.995249   74203 logs.go:276] 0 containers: []
	W0731 18:09:31.995260   74203 logs.go:278] No container was found matching "kube-scheduler"
	I0731 18:09:31.995268   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 18:09:31.995333   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 18:09:32.033912   74203 cri.go:89] found id: ""
	I0731 18:09:32.033942   74203 logs.go:276] 0 containers: []
	W0731 18:09:32.033955   74203 logs.go:278] No container was found matching "kube-proxy"
	I0731 18:09:32.033963   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 18:09:32.034038   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 18:09:32.066416   74203 cri.go:89] found id: ""
	I0731 18:09:32.066446   74203 logs.go:276] 0 containers: []
	W0731 18:09:32.066477   74203 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 18:09:32.066486   74203 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 18:09:32.066549   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 18:09:32.100097   74203 cri.go:89] found id: ""
	I0731 18:09:32.100121   74203 logs.go:276] 0 containers: []
	W0731 18:09:32.100129   74203 logs.go:278] No container was found matching "kindnet"
	I0731 18:09:32.100135   74203 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 18:09:32.100191   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 18:09:32.133061   74203 cri.go:89] found id: ""
	I0731 18:09:32.133088   74203 logs.go:276] 0 containers: []
	W0731 18:09:32.133096   74203 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 18:09:32.133106   74203 logs.go:123] Gathering logs for container status ...
	I0731 18:09:32.133120   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 18:09:32.169869   74203 logs.go:123] Gathering logs for kubelet ...
	I0731 18:09:32.169897   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 18:09:32.218668   74203 logs.go:123] Gathering logs for dmesg ...
	I0731 18:09:32.218707   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 18:09:32.231016   74203 logs.go:123] Gathering logs for describe nodes ...
	I0731 18:09:32.231039   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 18:09:32.304319   74203 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 18:09:32.304342   74203 logs.go:123] Gathering logs for CRI-O ...
	I0731 18:09:32.304353   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 18:09:34.880423   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:09:34.893775   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 18:09:34.893853   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 18:09:34.925073   74203 cri.go:89] found id: ""
	I0731 18:09:34.925101   74203 logs.go:276] 0 containers: []
	W0731 18:09:34.925109   74203 logs.go:278] No container was found matching "kube-apiserver"
	I0731 18:09:34.925115   74203 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 18:09:34.925178   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 18:09:34.960870   74203 cri.go:89] found id: ""
	I0731 18:09:34.960896   74203 logs.go:276] 0 containers: []
	W0731 18:09:34.960904   74203 logs.go:278] No container was found matching "etcd"
	I0731 18:09:34.960910   74203 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 18:09:34.960961   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 18:09:34.996290   74203 cri.go:89] found id: ""
	I0731 18:09:34.996332   74203 logs.go:276] 0 containers: []
	W0731 18:09:34.996341   74203 logs.go:278] No container was found matching "coredns"
	I0731 18:09:34.996347   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 18:09:34.996401   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 18:09:35.027900   74203 cri.go:89] found id: ""
	I0731 18:09:35.027932   74203 logs.go:276] 0 containers: []
	W0731 18:09:35.027940   74203 logs.go:278] No container was found matching "kube-scheduler"
	I0731 18:09:35.027945   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 18:09:35.028004   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 18:09:35.060533   74203 cri.go:89] found id: ""
	I0731 18:09:35.060562   74203 logs.go:276] 0 containers: []
	W0731 18:09:35.060579   74203 logs.go:278] No container was found matching "kube-proxy"
	I0731 18:09:35.060586   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 18:09:35.060653   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 18:09:35.095307   74203 cri.go:89] found id: ""
	I0731 18:09:35.095339   74203 logs.go:276] 0 containers: []
	W0731 18:09:35.095348   74203 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 18:09:35.095355   74203 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 18:09:35.095421   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 18:09:35.127060   74203 cri.go:89] found id: ""
	I0731 18:09:35.127082   74203 logs.go:276] 0 containers: []
	W0731 18:09:35.127090   74203 logs.go:278] No container was found matching "kindnet"
	I0731 18:09:35.127095   74203 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 18:09:35.127169   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 18:09:35.161300   74203 cri.go:89] found id: ""
	I0731 18:09:35.161328   74203 logs.go:276] 0 containers: []
	W0731 18:09:35.161339   74203 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 18:09:35.161350   74203 logs.go:123] Gathering logs for describe nodes ...
	I0731 18:09:35.161369   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 18:09:35.233033   74203 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 18:09:35.233060   74203 logs.go:123] Gathering logs for CRI-O ...
	I0731 18:09:35.233074   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 18:09:35.313279   74203 logs.go:123] Gathering logs for container status ...
	I0731 18:09:35.313311   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 18:09:31.779160   73696 pod_ready.go:102] pod "metrics-server-569cc877fc-64hp4" in "kube-system" namespace has status "Ready":"False"
	I0731 18:09:33.779209   73696 pod_ready.go:102] pod "metrics-server-569cc877fc-64hp4" in "kube-system" namespace has status "Ready":"False"
	I0731 18:09:32.644329   73479 pod_ready.go:102] pod "metrics-server-78fcd8795b-27pkr" in "kube-system" namespace has status "Ready":"False"
	I0731 18:09:35.143744   73479 pod_ready.go:102] pod "metrics-server-78fcd8795b-27pkr" in "kube-system" namespace has status "Ready":"False"
	I0731 18:09:34.774758   73800 pod_ready.go:102] pod "metrics-server-569cc877fc-fzxrw" in "kube-system" namespace has status "Ready":"False"
	I0731 18:09:37.271690   73800 pod_ready.go:102] pod "metrics-server-569cc877fc-fzxrw" in "kube-system" namespace has status "Ready":"False"
	I0731 18:09:35.356120   74203 logs.go:123] Gathering logs for kubelet ...
	I0731 18:09:35.356145   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 18:09:35.408231   74203 logs.go:123] Gathering logs for dmesg ...
	I0731 18:09:35.408263   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 18:09:37.921242   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:09:37.933986   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 18:09:37.934044   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 18:09:37.964524   74203 cri.go:89] found id: ""
	I0731 18:09:37.964558   74203 logs.go:276] 0 containers: []
	W0731 18:09:37.964567   74203 logs.go:278] No container was found matching "kube-apiserver"
	I0731 18:09:37.964574   74203 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 18:09:37.964632   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 18:09:37.998157   74203 cri.go:89] found id: ""
	I0731 18:09:37.998183   74203 logs.go:276] 0 containers: []
	W0731 18:09:37.998191   74203 logs.go:278] No container was found matching "etcd"
	I0731 18:09:37.998196   74203 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 18:09:37.998257   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 18:09:38.034611   74203 cri.go:89] found id: ""
	I0731 18:09:38.034637   74203 logs.go:276] 0 containers: []
	W0731 18:09:38.034645   74203 logs.go:278] No container was found matching "coredns"
	I0731 18:09:38.034650   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 18:09:38.034708   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 18:09:38.068005   74203 cri.go:89] found id: ""
	I0731 18:09:38.068029   74203 logs.go:276] 0 containers: []
	W0731 18:09:38.068039   74203 logs.go:278] No container was found matching "kube-scheduler"
	I0731 18:09:38.068047   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 18:09:38.068104   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 18:09:38.106110   74203 cri.go:89] found id: ""
	I0731 18:09:38.106133   74203 logs.go:276] 0 containers: []
	W0731 18:09:38.106141   74203 logs.go:278] No container was found matching "kube-proxy"
	I0731 18:09:38.106146   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 18:09:38.106192   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 18:09:38.138337   74203 cri.go:89] found id: ""
	I0731 18:09:38.138364   74203 logs.go:276] 0 containers: []
	W0731 18:09:38.138375   74203 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 18:09:38.138383   74203 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 18:09:38.138440   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 18:09:38.171517   74203 cri.go:89] found id: ""
	I0731 18:09:38.171546   74203 logs.go:276] 0 containers: []
	W0731 18:09:38.171557   74203 logs.go:278] No container was found matching "kindnet"
	I0731 18:09:38.171564   74203 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 18:09:38.171643   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 18:09:38.208708   74203 cri.go:89] found id: ""
	I0731 18:09:38.208733   74203 logs.go:276] 0 containers: []
	W0731 18:09:38.208741   74203 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 18:09:38.208750   74203 logs.go:123] Gathering logs for container status ...
	I0731 18:09:38.208760   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 18:09:38.243711   74203 logs.go:123] Gathering logs for kubelet ...
	I0731 18:09:38.243736   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 18:09:38.298673   74203 logs.go:123] Gathering logs for dmesg ...
	I0731 18:09:38.298705   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 18:09:38.311936   74203 logs.go:123] Gathering logs for describe nodes ...
	I0731 18:09:38.311962   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 18:09:38.384023   74203 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 18:09:38.384049   74203 logs.go:123] Gathering logs for CRI-O ...
	I0731 18:09:38.384067   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 18:09:36.278948   73696 pod_ready.go:102] pod "metrics-server-569cc877fc-64hp4" in "kube-system" namespace has status "Ready":"False"
	I0731 18:09:38.279423   73696 pod_ready.go:102] pod "metrics-server-569cc877fc-64hp4" in "kube-system" namespace has status "Ready":"False"
	I0731 18:09:40.281213   73696 pod_ready.go:102] pod "metrics-server-569cc877fc-64hp4" in "kube-system" namespace has status "Ready":"False"
	I0731 18:09:37.644041   73479 pod_ready.go:102] pod "metrics-server-78fcd8795b-27pkr" in "kube-system" namespace has status "Ready":"False"
	I0731 18:09:40.143131   73479 pod_ready.go:102] pod "metrics-server-78fcd8795b-27pkr" in "kube-system" namespace has status "Ready":"False"
	I0731 18:09:39.772098   73800 pod_ready.go:102] pod "metrics-server-569cc877fc-fzxrw" in "kube-system" namespace has status "Ready":"False"
	I0731 18:09:42.272096   73800 pod_ready.go:102] pod "metrics-server-569cc877fc-fzxrw" in "kube-system" namespace has status "Ready":"False"
	I0731 18:09:40.959426   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:09:40.972581   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 18:09:40.972645   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 18:09:41.008917   74203 cri.go:89] found id: ""
	I0731 18:09:41.008941   74203 logs.go:276] 0 containers: []
	W0731 18:09:41.008950   74203 logs.go:278] No container was found matching "kube-apiserver"
	I0731 18:09:41.008957   74203 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 18:09:41.009018   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 18:09:41.045342   74203 cri.go:89] found id: ""
	I0731 18:09:41.045375   74203 logs.go:276] 0 containers: []
	W0731 18:09:41.045384   74203 logs.go:278] No container was found matching "etcd"
	I0731 18:09:41.045390   74203 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 18:09:41.045454   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 18:09:41.081385   74203 cri.go:89] found id: ""
	I0731 18:09:41.081409   74203 logs.go:276] 0 containers: []
	W0731 18:09:41.081417   74203 logs.go:278] No container was found matching "coredns"
	I0731 18:09:41.081423   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 18:09:41.081469   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 18:09:41.118028   74203 cri.go:89] found id: ""
	I0731 18:09:41.118051   74203 logs.go:276] 0 containers: []
	W0731 18:09:41.118062   74203 logs.go:278] No container was found matching "kube-scheduler"
	I0731 18:09:41.118067   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 18:09:41.118114   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 18:09:41.154162   74203 cri.go:89] found id: ""
	I0731 18:09:41.154190   74203 logs.go:276] 0 containers: []
	W0731 18:09:41.154201   74203 logs.go:278] No container was found matching "kube-proxy"
	I0731 18:09:41.154209   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 18:09:41.154271   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 18:09:41.190789   74203 cri.go:89] found id: ""
	I0731 18:09:41.190814   74203 logs.go:276] 0 containers: []
	W0731 18:09:41.190822   74203 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 18:09:41.190827   74203 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 18:09:41.190887   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 18:09:41.226281   74203 cri.go:89] found id: ""
	I0731 18:09:41.226312   74203 logs.go:276] 0 containers: []
	W0731 18:09:41.226321   74203 logs.go:278] No container was found matching "kindnet"
	I0731 18:09:41.226327   74203 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 18:09:41.226382   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 18:09:41.258270   74203 cri.go:89] found id: ""
	I0731 18:09:41.258299   74203 logs.go:276] 0 containers: []
	W0731 18:09:41.258309   74203 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 18:09:41.258321   74203 logs.go:123] Gathering logs for CRI-O ...
	I0731 18:09:41.258335   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 18:09:41.342713   74203 logs.go:123] Gathering logs for container status ...
	I0731 18:09:41.342749   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 18:09:41.389772   74203 logs.go:123] Gathering logs for kubelet ...
	I0731 18:09:41.389795   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 18:09:41.442645   74203 logs.go:123] Gathering logs for dmesg ...
	I0731 18:09:41.442676   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 18:09:41.455850   74203 logs.go:123] Gathering logs for describe nodes ...
	I0731 18:09:41.455874   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 18:09:41.522017   74203 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 18:09:44.022439   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:09:44.035190   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 18:09:44.035258   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 18:09:44.070759   74203 cri.go:89] found id: ""
	I0731 18:09:44.070783   74203 logs.go:276] 0 containers: []
	W0731 18:09:44.070790   74203 logs.go:278] No container was found matching "kube-apiserver"
	I0731 18:09:44.070796   74203 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 18:09:44.070857   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 18:09:44.105313   74203 cri.go:89] found id: ""
	I0731 18:09:44.105350   74203 logs.go:276] 0 containers: []
	W0731 18:09:44.105358   74203 logs.go:278] No container was found matching "etcd"
	I0731 18:09:44.105364   74203 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 18:09:44.105416   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 18:09:44.140159   74203 cri.go:89] found id: ""
	I0731 18:09:44.140208   74203 logs.go:276] 0 containers: []
	W0731 18:09:44.140220   74203 logs.go:278] No container was found matching "coredns"
	I0731 18:09:44.140229   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 18:09:44.140301   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 18:09:44.176407   74203 cri.go:89] found id: ""
	I0731 18:09:44.176429   74203 logs.go:276] 0 containers: []
	W0731 18:09:44.176437   74203 logs.go:278] No container was found matching "kube-scheduler"
	I0731 18:09:44.176442   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 18:09:44.176490   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 18:09:44.210875   74203 cri.go:89] found id: ""
	I0731 18:09:44.210899   74203 logs.go:276] 0 containers: []
	W0731 18:09:44.210907   74203 logs.go:278] No container was found matching "kube-proxy"
	I0731 18:09:44.210916   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 18:09:44.210969   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 18:09:44.247021   74203 cri.go:89] found id: ""
	I0731 18:09:44.247045   74203 logs.go:276] 0 containers: []
	W0731 18:09:44.247055   74203 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 18:09:44.247061   74203 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 18:09:44.247141   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 18:09:44.282983   74203 cri.go:89] found id: ""
	I0731 18:09:44.283011   74203 logs.go:276] 0 containers: []
	W0731 18:09:44.283021   74203 logs.go:278] No container was found matching "kindnet"
	I0731 18:09:44.283029   74203 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 18:09:44.283092   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 18:09:44.319717   74203 cri.go:89] found id: ""
	I0731 18:09:44.319742   74203 logs.go:276] 0 containers: []
	W0731 18:09:44.319750   74203 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 18:09:44.319766   74203 logs.go:123] Gathering logs for CRI-O ...
	I0731 18:09:44.319781   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 18:09:44.398602   74203 logs.go:123] Gathering logs for container status ...
	I0731 18:09:44.398636   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 18:09:44.435350   74203 logs.go:123] Gathering logs for kubelet ...
	I0731 18:09:44.435384   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 18:09:44.488021   74203 logs.go:123] Gathering logs for dmesg ...
	I0731 18:09:44.488053   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 18:09:44.501790   74203 logs.go:123] Gathering logs for describe nodes ...
	I0731 18:09:44.501813   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 18:09:44.578374   74203 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 18:09:42.779304   73696 pod_ready.go:102] pod "metrics-server-569cc877fc-64hp4" in "kube-system" namespace has status "Ready":"False"
	I0731 18:09:45.279008   73696 pod_ready.go:102] pod "metrics-server-569cc877fc-64hp4" in "kube-system" namespace has status "Ready":"False"
	I0731 18:09:42.143287   73479 pod_ready.go:102] pod "metrics-server-78fcd8795b-27pkr" in "kube-system" namespace has status "Ready":"False"
	I0731 18:09:44.144123   73479 pod_ready.go:102] pod "metrics-server-78fcd8795b-27pkr" in "kube-system" namespace has status "Ready":"False"
	I0731 18:09:46.643499   73479 pod_ready.go:102] pod "metrics-server-78fcd8795b-27pkr" in "kube-system" namespace has status "Ready":"False"
	I0731 18:09:44.771059   73800 pod_ready.go:102] pod "metrics-server-569cc877fc-fzxrw" in "kube-system" namespace has status "Ready":"False"
	I0731 18:09:46.771846   73800 pod_ready.go:102] pod "metrics-server-569cc877fc-fzxrw" in "kube-system" namespace has status "Ready":"False"
	I0731 18:09:48.772300   73800 pod_ready.go:102] pod "metrics-server-569cc877fc-fzxrw" in "kube-system" namespace has status "Ready":"False"
	I0731 18:09:47.079192   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:09:47.093516   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 18:09:47.093597   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 18:09:47.132872   74203 cri.go:89] found id: ""
	I0731 18:09:47.132899   74203 logs.go:276] 0 containers: []
	W0731 18:09:47.132907   74203 logs.go:278] No container was found matching "kube-apiserver"
	I0731 18:09:47.132913   74203 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 18:09:47.132969   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 18:09:47.167428   74203 cri.go:89] found id: ""
	I0731 18:09:47.167460   74203 logs.go:276] 0 containers: []
	W0731 18:09:47.167472   74203 logs.go:278] No container was found matching "etcd"
	I0731 18:09:47.167480   74203 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 18:09:47.167551   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 18:09:47.202206   74203 cri.go:89] found id: ""
	I0731 18:09:47.202237   74203 logs.go:276] 0 containers: []
	W0731 18:09:47.202250   74203 logs.go:278] No container was found matching "coredns"
	I0731 18:09:47.202256   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 18:09:47.202308   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 18:09:47.238513   74203 cri.go:89] found id: ""
	I0731 18:09:47.238537   74203 logs.go:276] 0 containers: []
	W0731 18:09:47.238545   74203 logs.go:278] No container was found matching "kube-scheduler"
	I0731 18:09:47.238551   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 18:09:47.238604   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 18:09:47.271732   74203 cri.go:89] found id: ""
	I0731 18:09:47.271755   74203 logs.go:276] 0 containers: []
	W0731 18:09:47.271764   74203 logs.go:278] No container was found matching "kube-proxy"
	I0731 18:09:47.271770   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 18:09:47.271828   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 18:09:47.305906   74203 cri.go:89] found id: ""
	I0731 18:09:47.305932   74203 logs.go:276] 0 containers: []
	W0731 18:09:47.305943   74203 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 18:09:47.305948   74203 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 18:09:47.305996   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 18:09:47.338427   74203 cri.go:89] found id: ""
	I0731 18:09:47.338452   74203 logs.go:276] 0 containers: []
	W0731 18:09:47.338461   74203 logs.go:278] No container was found matching "kindnet"
	I0731 18:09:47.338468   74203 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 18:09:47.338526   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 18:09:47.374909   74203 cri.go:89] found id: ""
	I0731 18:09:47.374943   74203 logs.go:276] 0 containers: []
	W0731 18:09:47.374954   74203 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 18:09:47.374963   74203 logs.go:123] Gathering logs for dmesg ...
	I0731 18:09:47.374976   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 18:09:47.387739   74203 logs.go:123] Gathering logs for describe nodes ...
	I0731 18:09:47.387765   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 18:09:47.480479   74203 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 18:09:47.480505   74203 logs.go:123] Gathering logs for CRI-O ...
	I0731 18:09:47.480519   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 18:09:47.562857   74203 logs.go:123] Gathering logs for container status ...
	I0731 18:09:47.562890   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 18:09:47.608435   74203 logs.go:123] Gathering logs for kubelet ...
	I0731 18:09:47.608466   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 18:09:50.164351   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:09:50.177485   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 18:09:50.177546   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 18:09:50.211474   74203 cri.go:89] found id: ""
	I0731 18:09:50.211502   74203 logs.go:276] 0 containers: []
	W0731 18:09:50.211512   74203 logs.go:278] No container was found matching "kube-apiserver"
	I0731 18:09:50.211520   74203 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 18:09:50.211583   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 18:09:50.248167   74203 cri.go:89] found id: ""
	I0731 18:09:50.248190   74203 logs.go:276] 0 containers: []
	W0731 18:09:50.248197   74203 logs.go:278] No container was found matching "etcd"
	I0731 18:09:50.248203   74203 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 18:09:50.248250   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 18:09:50.286323   74203 cri.go:89] found id: ""
	I0731 18:09:50.286358   74203 logs.go:276] 0 containers: []
	W0731 18:09:50.286366   74203 logs.go:278] No container was found matching "coredns"
	I0731 18:09:50.286372   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 18:09:50.286420   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 18:09:50.316634   74203 cri.go:89] found id: ""
	I0731 18:09:50.316661   74203 logs.go:276] 0 containers: []
	W0731 18:09:50.316670   74203 logs.go:278] No container was found matching "kube-scheduler"
	I0731 18:09:50.316675   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 18:09:50.316726   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 18:09:47.279198   73696 pod_ready.go:102] pod "metrics-server-569cc877fc-64hp4" in "kube-system" namespace has status "Ready":"False"
	I0731 18:09:49.280511   73696 pod_ready.go:102] pod "metrics-server-569cc877fc-64hp4" in "kube-system" namespace has status "Ready":"False"
	I0731 18:09:49.144581   73479 pod_ready.go:102] pod "metrics-server-78fcd8795b-27pkr" in "kube-system" namespace has status "Ready":"False"
	I0731 18:09:51.642915   73479 pod_ready.go:102] pod "metrics-server-78fcd8795b-27pkr" in "kube-system" namespace has status "Ready":"False"
	I0731 18:09:51.272079   73800 pod_ready.go:102] pod "metrics-server-569cc877fc-fzxrw" in "kube-system" namespace has status "Ready":"False"
	I0731 18:09:53.272815   73800 pod_ready.go:102] pod "metrics-server-569cc877fc-fzxrw" in "kube-system" namespace has status "Ready":"False"
	I0731 18:09:50.349881   74203 cri.go:89] found id: ""
	I0731 18:09:50.349909   74203 logs.go:276] 0 containers: []
	W0731 18:09:50.349919   74203 logs.go:278] No container was found matching "kube-proxy"
	I0731 18:09:50.349926   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 18:09:50.349989   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 18:09:50.384147   74203 cri.go:89] found id: ""
	I0731 18:09:50.384181   74203 logs.go:276] 0 containers: []
	W0731 18:09:50.384194   74203 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 18:09:50.384203   74203 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 18:09:50.384272   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 18:09:50.418024   74203 cri.go:89] found id: ""
	I0731 18:09:50.418052   74203 logs.go:276] 0 containers: []
	W0731 18:09:50.418062   74203 logs.go:278] No container was found matching "kindnet"
	I0731 18:09:50.418069   74203 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 18:09:50.418130   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 18:09:50.454484   74203 cri.go:89] found id: ""
	I0731 18:09:50.454517   74203 logs.go:276] 0 containers: []
	W0731 18:09:50.454525   74203 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 18:09:50.454533   74203 logs.go:123] Gathering logs for kubelet ...
	I0731 18:09:50.454544   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 18:09:50.505508   74203 logs.go:123] Gathering logs for dmesg ...
	I0731 18:09:50.505545   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 18:09:50.518504   74203 logs.go:123] Gathering logs for describe nodes ...
	I0731 18:09:50.518529   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 18:09:50.587950   74203 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 18:09:50.587974   74203 logs.go:123] Gathering logs for CRI-O ...
	I0731 18:09:50.587989   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 18:09:50.669268   74203 logs.go:123] Gathering logs for container status ...
	I0731 18:09:50.669302   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 18:09:53.209229   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:09:53.222114   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 18:09:53.222175   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 18:09:53.255330   74203 cri.go:89] found id: ""
	I0731 18:09:53.255356   74203 logs.go:276] 0 containers: []
	W0731 18:09:53.255365   74203 logs.go:278] No container was found matching "kube-apiserver"
	I0731 18:09:53.255371   74203 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 18:09:53.255432   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 18:09:53.290354   74203 cri.go:89] found id: ""
	I0731 18:09:53.290375   74203 logs.go:276] 0 containers: []
	W0731 18:09:53.290382   74203 logs.go:278] No container was found matching "etcd"
	I0731 18:09:53.290387   74203 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 18:09:53.290438   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 18:09:53.323621   74203 cri.go:89] found id: ""
	I0731 18:09:53.323645   74203 logs.go:276] 0 containers: []
	W0731 18:09:53.323653   74203 logs.go:278] No container was found matching "coredns"
	I0731 18:09:53.323658   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 18:09:53.323718   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 18:09:53.355850   74203 cri.go:89] found id: ""
	I0731 18:09:53.355877   74203 logs.go:276] 0 containers: []
	W0731 18:09:53.355887   74203 logs.go:278] No container was found matching "kube-scheduler"
	I0731 18:09:53.355894   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 18:09:53.355957   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 18:09:53.388686   74203 cri.go:89] found id: ""
	I0731 18:09:53.388716   74203 logs.go:276] 0 containers: []
	W0731 18:09:53.388726   74203 logs.go:278] No container was found matching "kube-proxy"
	I0731 18:09:53.388733   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 18:09:53.388785   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 18:09:53.426924   74203 cri.go:89] found id: ""
	I0731 18:09:53.426952   74203 logs.go:276] 0 containers: []
	W0731 18:09:53.426961   74203 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 18:09:53.426967   74203 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 18:09:53.427019   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 18:09:53.462041   74203 cri.go:89] found id: ""
	I0731 18:09:53.462067   74203 logs.go:276] 0 containers: []
	W0731 18:09:53.462078   74203 logs.go:278] No container was found matching "kindnet"
	I0731 18:09:53.462084   74203 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 18:09:53.462145   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 18:09:53.493810   74203 cri.go:89] found id: ""
	I0731 18:09:53.493833   74203 logs.go:276] 0 containers: []
	W0731 18:09:53.493842   74203 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 18:09:53.493852   74203 logs.go:123] Gathering logs for container status ...
	I0731 18:09:53.493867   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 18:09:53.530019   74203 logs.go:123] Gathering logs for kubelet ...
	I0731 18:09:53.530053   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 18:09:53.580749   74203 logs.go:123] Gathering logs for dmesg ...
	I0731 18:09:53.580782   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 18:09:53.594457   74203 logs.go:123] Gathering logs for describe nodes ...
	I0731 18:09:53.594482   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 18:09:53.662096   74203 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 18:09:53.662116   74203 logs.go:123] Gathering logs for CRI-O ...
	I0731 18:09:53.662134   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 18:09:51.778292   73696 pod_ready.go:102] pod "metrics-server-569cc877fc-64hp4" in "kube-system" namespace has status "Ready":"False"
	I0731 18:09:53.779043   73696 pod_ready.go:102] pod "metrics-server-569cc877fc-64hp4" in "kube-system" namespace has status "Ready":"False"
	I0731 18:09:53.643914   73479 pod_ready.go:102] pod "metrics-server-78fcd8795b-27pkr" in "kube-system" namespace has status "Ready":"False"
	I0731 18:09:56.142699   73479 pod_ready.go:102] pod "metrics-server-78fcd8795b-27pkr" in "kube-system" namespace has status "Ready":"False"
	I0731 18:09:55.772106   73800 pod_ready.go:102] pod "metrics-server-569cc877fc-fzxrw" in "kube-system" namespace has status "Ready":"False"
	I0731 18:09:58.271063   73800 pod_ready.go:102] pod "metrics-server-569cc877fc-fzxrw" in "kube-system" namespace has status "Ready":"False"
	I0731 18:09:56.238479   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:09:56.251272   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 18:09:56.251350   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 18:09:56.287380   74203 cri.go:89] found id: ""
	I0731 18:09:56.287406   74203 logs.go:276] 0 containers: []
	W0731 18:09:56.287414   74203 logs.go:278] No container was found matching "kube-apiserver"
	I0731 18:09:56.287419   74203 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 18:09:56.287471   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 18:09:56.322490   74203 cri.go:89] found id: ""
	I0731 18:09:56.322512   74203 logs.go:276] 0 containers: []
	W0731 18:09:56.322520   74203 logs.go:278] No container was found matching "etcd"
	I0731 18:09:56.322526   74203 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 18:09:56.322572   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 18:09:56.355845   74203 cri.go:89] found id: ""
	I0731 18:09:56.355874   74203 logs.go:276] 0 containers: []
	W0731 18:09:56.355885   74203 logs.go:278] No container was found matching "coredns"
	I0731 18:09:56.355895   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 18:09:56.355958   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 18:09:56.388304   74203 cri.go:89] found id: ""
	I0731 18:09:56.388330   74203 logs.go:276] 0 containers: []
	W0731 18:09:56.388340   74203 logs.go:278] No container was found matching "kube-scheduler"
	I0731 18:09:56.388348   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 18:09:56.388411   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 18:09:56.420837   74203 cri.go:89] found id: ""
	I0731 18:09:56.420867   74203 logs.go:276] 0 containers: []
	W0731 18:09:56.420877   74203 logs.go:278] No container was found matching "kube-proxy"
	I0731 18:09:56.420884   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 18:09:56.420950   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 18:09:56.453095   74203 cri.go:89] found id: ""
	I0731 18:09:56.453135   74203 logs.go:276] 0 containers: []
	W0731 18:09:56.453146   74203 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 18:09:56.453155   74203 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 18:09:56.453214   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 18:09:56.484245   74203 cri.go:89] found id: ""
	I0731 18:09:56.484272   74203 logs.go:276] 0 containers: []
	W0731 18:09:56.484282   74203 logs.go:278] No container was found matching "kindnet"
	I0731 18:09:56.484296   74203 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 18:09:56.484366   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 18:09:56.519473   74203 cri.go:89] found id: ""
	I0731 18:09:56.519501   74203 logs.go:276] 0 containers: []
	W0731 18:09:56.519508   74203 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 18:09:56.519516   74203 logs.go:123] Gathering logs for dmesg ...
	I0731 18:09:56.519530   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 18:09:56.532178   74203 logs.go:123] Gathering logs for describe nodes ...
	I0731 18:09:56.532203   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 18:09:56.600092   74203 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 18:09:56.600122   74203 logs.go:123] Gathering logs for CRI-O ...
	I0731 18:09:56.600137   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 18:09:56.679176   74203 logs.go:123] Gathering logs for container status ...
	I0731 18:09:56.679208   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 18:09:56.715464   74203 logs.go:123] Gathering logs for kubelet ...
	I0731 18:09:56.715499   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 18:09:59.267214   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:09:59.280666   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 18:09:59.280740   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 18:09:59.312898   74203 cri.go:89] found id: ""
	I0731 18:09:59.312928   74203 logs.go:276] 0 containers: []
	W0731 18:09:59.312940   74203 logs.go:278] No container was found matching "kube-apiserver"
	I0731 18:09:59.312947   74203 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 18:09:59.313013   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 18:09:59.347881   74203 cri.go:89] found id: ""
	I0731 18:09:59.347907   74203 logs.go:276] 0 containers: []
	W0731 18:09:59.347915   74203 logs.go:278] No container was found matching "etcd"
	I0731 18:09:59.347919   74203 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 18:09:59.347978   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 18:09:59.382566   74203 cri.go:89] found id: ""
	I0731 18:09:59.382603   74203 logs.go:276] 0 containers: []
	W0731 18:09:59.382615   74203 logs.go:278] No container was found matching "coredns"
	I0731 18:09:59.382629   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 18:09:59.382691   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 18:09:59.417123   74203 cri.go:89] found id: ""
	I0731 18:09:59.417148   74203 logs.go:276] 0 containers: []
	W0731 18:09:59.417157   74203 logs.go:278] No container was found matching "kube-scheduler"
	I0731 18:09:59.417163   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 18:09:59.417220   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 18:09:59.452674   74203 cri.go:89] found id: ""
	I0731 18:09:59.452699   74203 logs.go:276] 0 containers: []
	W0731 18:09:59.452709   74203 logs.go:278] No container was found matching "kube-proxy"
	I0731 18:09:59.452715   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 18:09:59.452775   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 18:09:59.488879   74203 cri.go:89] found id: ""
	I0731 18:09:59.488905   74203 logs.go:276] 0 containers: []
	W0731 18:09:59.488913   74203 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 18:09:59.488921   74203 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 18:09:59.488981   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 18:09:59.521773   74203 cri.go:89] found id: ""
	I0731 18:09:59.521801   74203 logs.go:276] 0 containers: []
	W0731 18:09:59.521809   74203 logs.go:278] No container was found matching "kindnet"
	I0731 18:09:59.521816   74203 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 18:09:59.521876   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 18:09:59.566619   74203 cri.go:89] found id: ""
	I0731 18:09:59.566649   74203 logs.go:276] 0 containers: []
	W0731 18:09:59.566659   74203 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 18:09:59.566670   74203 logs.go:123] Gathering logs for describe nodes ...
	I0731 18:09:59.566687   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 18:09:59.638301   74203 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 18:09:59.638351   74203 logs.go:123] Gathering logs for CRI-O ...
	I0731 18:09:59.638367   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 18:09:59.721561   74203 logs.go:123] Gathering logs for container status ...
	I0731 18:09:59.721597   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 18:09:59.759371   74203 logs.go:123] Gathering logs for kubelet ...
	I0731 18:09:59.759402   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 18:09:59.811223   74203 logs.go:123] Gathering logs for dmesg ...
	I0731 18:09:59.811255   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 18:09:56.280351   73696 pod_ready.go:102] pod "metrics-server-569cc877fc-64hp4" in "kube-system" namespace has status "Ready":"False"
	I0731 18:09:58.777896   73696 pod_ready.go:102] pod "metrics-server-569cc877fc-64hp4" in "kube-system" namespace has status "Ready":"False"
	I0731 18:10:00.779028   73696 pod_ready.go:102] pod "metrics-server-569cc877fc-64hp4" in "kube-system" namespace has status "Ready":"False"
	I0731 18:09:58.144006   73479 pod_ready.go:102] pod "metrics-server-78fcd8795b-27pkr" in "kube-system" namespace has status "Ready":"False"
	I0731 18:10:00.643536   73479 pod_ready.go:102] pod "metrics-server-78fcd8795b-27pkr" in "kube-system" namespace has status "Ready":"False"
	I0731 18:10:00.772456   73800 pod_ready.go:102] pod "metrics-server-569cc877fc-fzxrw" in "kube-system" namespace has status "Ready":"False"
	I0731 18:10:03.270710   73800 pod_ready.go:102] pod "metrics-server-569cc877fc-fzxrw" in "kube-system" namespace has status "Ready":"False"
	I0731 18:10:02.325339   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:10:02.337908   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 18:10:02.337963   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 18:10:02.369343   74203 cri.go:89] found id: ""
	I0731 18:10:02.369369   74203 logs.go:276] 0 containers: []
	W0731 18:10:02.369378   74203 logs.go:278] No container was found matching "kube-apiserver"
	I0731 18:10:02.369384   74203 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 18:10:02.369442   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 18:10:02.406207   74203 cri.go:89] found id: ""
	I0731 18:10:02.406234   74203 logs.go:276] 0 containers: []
	W0731 18:10:02.406242   74203 logs.go:278] No container was found matching "etcd"
	I0731 18:10:02.406247   74203 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 18:10:02.406297   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 18:10:02.442001   74203 cri.go:89] found id: ""
	I0731 18:10:02.442031   74203 logs.go:276] 0 containers: []
	W0731 18:10:02.442041   74203 logs.go:278] No container was found matching "coredns"
	I0731 18:10:02.442049   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 18:10:02.442109   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 18:10:02.478407   74203 cri.go:89] found id: ""
	I0731 18:10:02.478431   74203 logs.go:276] 0 containers: []
	W0731 18:10:02.478439   74203 logs.go:278] No container was found matching "kube-scheduler"
	I0731 18:10:02.478444   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 18:10:02.478491   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 18:10:02.513832   74203 cri.go:89] found id: ""
	I0731 18:10:02.513875   74203 logs.go:276] 0 containers: []
	W0731 18:10:02.513888   74203 logs.go:278] No container was found matching "kube-proxy"
	I0731 18:10:02.513896   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 18:10:02.513962   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 18:10:02.550830   74203 cri.go:89] found id: ""
	I0731 18:10:02.550856   74203 logs.go:276] 0 containers: []
	W0731 18:10:02.550867   74203 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 18:10:02.550874   74203 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 18:10:02.550937   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 18:10:02.584649   74203 cri.go:89] found id: ""
	I0731 18:10:02.584676   74203 logs.go:276] 0 containers: []
	W0731 18:10:02.584684   74203 logs.go:278] No container was found matching "kindnet"
	I0731 18:10:02.584691   74203 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 18:10:02.584752   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 18:10:02.617436   74203 cri.go:89] found id: ""
	I0731 18:10:02.617464   74203 logs.go:276] 0 containers: []
	W0731 18:10:02.617475   74203 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 18:10:02.617485   74203 logs.go:123] Gathering logs for kubelet ...
	I0731 18:10:02.617500   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 18:10:02.671571   74203 logs.go:123] Gathering logs for dmesg ...
	I0731 18:10:02.671609   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 18:10:02.686657   74203 logs.go:123] Gathering logs for describe nodes ...
	I0731 18:10:02.686694   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 18:10:02.755974   74203 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 18:10:02.756008   74203 logs.go:123] Gathering logs for CRI-O ...
	I0731 18:10:02.756025   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 18:10:02.837976   74203 logs.go:123] Gathering logs for container status ...
	I0731 18:10:02.838012   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 18:10:02.779666   73696 pod_ready.go:102] pod "metrics-server-569cc877fc-64hp4" in "kube-system" namespace has status "Ready":"False"
	I0731 18:10:04.779994   73696 pod_ready.go:102] pod "metrics-server-569cc877fc-64hp4" in "kube-system" namespace has status "Ready":"False"
	I0731 18:10:02.644075   73479 pod_ready.go:102] pod "metrics-server-78fcd8795b-27pkr" in "kube-system" namespace has status "Ready":"False"
	I0731 18:10:05.142859   73479 pod_ready.go:102] pod "metrics-server-78fcd8795b-27pkr" in "kube-system" namespace has status "Ready":"False"
	I0731 18:10:05.272500   73800 pod_ready.go:102] pod "metrics-server-569cc877fc-fzxrw" in "kube-system" namespace has status "Ready":"False"
	I0731 18:10:07.771599   73800 pod_ready.go:102] pod "metrics-server-569cc877fc-fzxrw" in "kube-system" namespace has status "Ready":"False"
	I0731 18:10:05.375212   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:10:05.388635   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 18:10:05.388703   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 18:10:05.427583   74203 cri.go:89] found id: ""
	I0731 18:10:05.427610   74203 logs.go:276] 0 containers: []
	W0731 18:10:05.427617   74203 logs.go:278] No container was found matching "kube-apiserver"
	I0731 18:10:05.427622   74203 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 18:10:05.427673   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 18:10:05.462550   74203 cri.go:89] found id: ""
	I0731 18:10:05.462575   74203 logs.go:276] 0 containers: []
	W0731 18:10:05.462583   74203 logs.go:278] No container was found matching "etcd"
	I0731 18:10:05.462589   74203 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 18:10:05.462645   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 18:10:05.501768   74203 cri.go:89] found id: ""
	I0731 18:10:05.501790   74203 logs.go:276] 0 containers: []
	W0731 18:10:05.501797   74203 logs.go:278] No container was found matching "coredns"
	I0731 18:10:05.501802   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 18:10:05.501860   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 18:10:05.539692   74203 cri.go:89] found id: ""
	I0731 18:10:05.539719   74203 logs.go:276] 0 containers: []
	W0731 18:10:05.539731   74203 logs.go:278] No container was found matching "kube-scheduler"
	I0731 18:10:05.539737   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 18:10:05.539798   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 18:10:05.573844   74203 cri.go:89] found id: ""
	I0731 18:10:05.573872   74203 logs.go:276] 0 containers: []
	W0731 18:10:05.573884   74203 logs.go:278] No container was found matching "kube-proxy"
	I0731 18:10:05.573891   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 18:10:05.573953   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 18:10:05.607827   74203 cri.go:89] found id: ""
	I0731 18:10:05.607848   74203 logs.go:276] 0 containers: []
	W0731 18:10:05.607858   74203 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 18:10:05.607863   74203 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 18:10:05.607913   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 18:10:05.639644   74203 cri.go:89] found id: ""
	I0731 18:10:05.639673   74203 logs.go:276] 0 containers: []
	W0731 18:10:05.639684   74203 logs.go:278] No container was found matching "kindnet"
	I0731 18:10:05.639691   74203 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 18:10:05.639753   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 18:10:05.673164   74203 cri.go:89] found id: ""
	I0731 18:10:05.673188   74203 logs.go:276] 0 containers: []
	W0731 18:10:05.673195   74203 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 18:10:05.673203   74203 logs.go:123] Gathering logs for CRI-O ...
	I0731 18:10:05.673215   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 18:10:05.755189   74203 logs.go:123] Gathering logs for container status ...
	I0731 18:10:05.755221   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 18:10:05.793686   74203 logs.go:123] Gathering logs for kubelet ...
	I0731 18:10:05.793715   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 18:10:05.844930   74203 logs.go:123] Gathering logs for dmesg ...
	I0731 18:10:05.844965   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 18:10:05.859150   74203 logs.go:123] Gathering logs for describe nodes ...
	I0731 18:10:05.859176   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 18:10:05.929945   74203 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 18:10:08.430669   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:10:08.444918   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 18:10:08.444989   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 18:10:08.482598   74203 cri.go:89] found id: ""
	I0731 18:10:08.482625   74203 logs.go:276] 0 containers: []
	W0731 18:10:08.482635   74203 logs.go:278] No container was found matching "kube-apiserver"
	I0731 18:10:08.482642   74203 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 18:10:08.482708   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 18:10:08.519687   74203 cri.go:89] found id: ""
	I0731 18:10:08.519717   74203 logs.go:276] 0 containers: []
	W0731 18:10:08.519726   74203 logs.go:278] No container was found matching "etcd"
	I0731 18:10:08.519734   74203 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 18:10:08.519795   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 18:10:08.551600   74203 cri.go:89] found id: ""
	I0731 18:10:08.551638   74203 logs.go:276] 0 containers: []
	W0731 18:10:08.551649   74203 logs.go:278] No container was found matching "coredns"
	I0731 18:10:08.551657   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 18:10:08.551713   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 18:10:08.585233   74203 cri.go:89] found id: ""
	I0731 18:10:08.585263   74203 logs.go:276] 0 containers: []
	W0731 18:10:08.585274   74203 logs.go:278] No container was found matching "kube-scheduler"
	I0731 18:10:08.585282   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 18:10:08.585343   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 18:10:08.622464   74203 cri.go:89] found id: ""
	I0731 18:10:08.622492   74203 logs.go:276] 0 containers: []
	W0731 18:10:08.622502   74203 logs.go:278] No container was found matching "kube-proxy"
	I0731 18:10:08.622510   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 18:10:08.622569   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 18:10:08.658360   74203 cri.go:89] found id: ""
	I0731 18:10:08.658390   74203 logs.go:276] 0 containers: []
	W0731 18:10:08.658402   74203 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 18:10:08.658410   74203 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 18:10:08.658471   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 18:10:08.692076   74203 cri.go:89] found id: ""
	I0731 18:10:08.692100   74203 logs.go:276] 0 containers: []
	W0731 18:10:08.692109   74203 logs.go:278] No container was found matching "kindnet"
	I0731 18:10:08.692116   74203 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 18:10:08.692179   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 18:10:08.729584   74203 cri.go:89] found id: ""
	I0731 18:10:08.729612   74203 logs.go:276] 0 containers: []
	W0731 18:10:08.729622   74203 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 18:10:08.729633   74203 logs.go:123] Gathering logs for describe nodes ...
	I0731 18:10:08.729647   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 18:10:08.806395   74203 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 18:10:08.806457   74203 logs.go:123] Gathering logs for CRI-O ...
	I0731 18:10:08.806485   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 18:10:08.884008   74203 logs.go:123] Gathering logs for container status ...
	I0731 18:10:08.884046   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 18:10:08.924359   74203 logs.go:123] Gathering logs for kubelet ...
	I0731 18:10:08.924398   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 18:10:08.978161   74203 logs.go:123] Gathering logs for dmesg ...
	I0731 18:10:08.978195   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 18:10:07.279327   73696 pod_ready.go:102] pod "metrics-server-569cc877fc-64hp4" in "kube-system" namespace has status "Ready":"False"
	I0731 18:10:09.281214   73696 pod_ready.go:102] pod "metrics-server-569cc877fc-64hp4" in "kube-system" namespace has status "Ready":"False"
	I0731 18:10:07.143145   73479 pod_ready.go:102] pod "metrics-server-78fcd8795b-27pkr" in "kube-system" namespace has status "Ready":"False"
	I0731 18:10:09.143995   73479 pod_ready.go:102] pod "metrics-server-78fcd8795b-27pkr" in "kube-system" namespace has status "Ready":"False"
	I0731 18:10:11.643254   73479 pod_ready.go:102] pod "metrics-server-78fcd8795b-27pkr" in "kube-system" namespace has status "Ready":"False"
	I0731 18:10:09.773024   73800 pod_ready.go:102] pod "metrics-server-569cc877fc-fzxrw" in "kube-system" namespace has status "Ready":"False"
	I0731 18:10:12.272862   73800 pod_ready.go:102] pod "metrics-server-569cc877fc-fzxrw" in "kube-system" namespace has status "Ready":"False"
	I0731 18:10:14.273615   73800 pod_ready.go:102] pod "metrics-server-569cc877fc-fzxrw" in "kube-system" namespace has status "Ready":"False"
	I0731 18:10:11.491784   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:10:11.504711   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 18:10:11.504784   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 18:10:11.541314   74203 cri.go:89] found id: ""
	I0731 18:10:11.541353   74203 logs.go:276] 0 containers: []
	W0731 18:10:11.541361   74203 logs.go:278] No container was found matching "kube-apiserver"
	I0731 18:10:11.541366   74203 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 18:10:11.541424   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 18:10:11.576481   74203 cri.go:89] found id: ""
	I0731 18:10:11.576509   74203 logs.go:276] 0 containers: []
	W0731 18:10:11.576527   74203 logs.go:278] No container was found matching "etcd"
	I0731 18:10:11.576535   74203 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 18:10:11.576597   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 18:10:11.610370   74203 cri.go:89] found id: ""
	I0731 18:10:11.610395   74203 logs.go:276] 0 containers: []
	W0731 18:10:11.610404   74203 logs.go:278] No container was found matching "coredns"
	I0731 18:10:11.610412   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 18:10:11.610470   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 18:10:11.645559   74203 cri.go:89] found id: ""
	I0731 18:10:11.645586   74203 logs.go:276] 0 containers: []
	W0731 18:10:11.645593   74203 logs.go:278] No container was found matching "kube-scheduler"
	I0731 18:10:11.645598   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 18:10:11.645654   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 18:10:11.677576   74203 cri.go:89] found id: ""
	I0731 18:10:11.677613   74203 logs.go:276] 0 containers: []
	W0731 18:10:11.677624   74203 logs.go:278] No container was found matching "kube-proxy"
	I0731 18:10:11.677631   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 18:10:11.677681   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 18:10:11.710173   74203 cri.go:89] found id: ""
	I0731 18:10:11.710199   74203 logs.go:276] 0 containers: []
	W0731 18:10:11.710208   74203 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 18:10:11.710215   74203 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 18:10:11.710273   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 18:10:11.743722   74203 cri.go:89] found id: ""
	I0731 18:10:11.743752   74203 logs.go:276] 0 containers: []
	W0731 18:10:11.743763   74203 logs.go:278] No container was found matching "kindnet"
	I0731 18:10:11.743782   74203 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 18:10:11.743857   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 18:10:11.776730   74203 cri.go:89] found id: ""
	I0731 18:10:11.776752   74203 logs.go:276] 0 containers: []
	W0731 18:10:11.776759   74203 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 18:10:11.776766   74203 logs.go:123] Gathering logs for describe nodes ...
	I0731 18:10:11.776777   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 18:10:11.846385   74203 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 18:10:11.846404   74203 logs.go:123] Gathering logs for CRI-O ...
	I0731 18:10:11.846415   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 18:10:11.923748   74203 logs.go:123] Gathering logs for container status ...
	I0731 18:10:11.923779   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 18:10:11.959700   74203 logs.go:123] Gathering logs for kubelet ...
	I0731 18:10:11.959734   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 18:10:12.009971   74203 logs.go:123] Gathering logs for dmesg ...
	I0731 18:10:12.010002   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 18:10:14.524097   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:10:14.537349   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 18:10:14.537449   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 18:10:14.569907   74203 cri.go:89] found id: ""
	I0731 18:10:14.569934   74203 logs.go:276] 0 containers: []
	W0731 18:10:14.569941   74203 logs.go:278] No container was found matching "kube-apiserver"
	I0731 18:10:14.569947   74203 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 18:10:14.569999   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 18:10:14.605058   74203 cri.go:89] found id: ""
	I0731 18:10:14.605085   74203 logs.go:276] 0 containers: []
	W0731 18:10:14.605095   74203 logs.go:278] No container was found matching "etcd"
	I0731 18:10:14.605102   74203 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 18:10:14.605155   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 18:10:14.640941   74203 cri.go:89] found id: ""
	I0731 18:10:14.640964   74203 logs.go:276] 0 containers: []
	W0731 18:10:14.640975   74203 logs.go:278] No container was found matching "coredns"
	I0731 18:10:14.640982   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 18:10:14.641039   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 18:10:14.678774   74203 cri.go:89] found id: ""
	I0731 18:10:14.678803   74203 logs.go:276] 0 containers: []
	W0731 18:10:14.678814   74203 logs.go:278] No container was found matching "kube-scheduler"
	I0731 18:10:14.678822   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 18:10:14.678880   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 18:10:14.714123   74203 cri.go:89] found id: ""
	I0731 18:10:14.714152   74203 logs.go:276] 0 containers: []
	W0731 18:10:14.714163   74203 logs.go:278] No container was found matching "kube-proxy"
	I0731 18:10:14.714171   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 18:10:14.714230   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 18:10:14.750212   74203 cri.go:89] found id: ""
	I0731 18:10:14.750243   74203 logs.go:276] 0 containers: []
	W0731 18:10:14.750255   74203 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 18:10:14.750262   74203 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 18:10:14.750322   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 18:10:14.786820   74203 cri.go:89] found id: ""
	I0731 18:10:14.786842   74203 logs.go:276] 0 containers: []
	W0731 18:10:14.786850   74203 logs.go:278] No container was found matching "kindnet"
	I0731 18:10:14.786856   74203 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 18:10:14.786904   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 18:10:14.819667   74203 cri.go:89] found id: ""
	I0731 18:10:14.819689   74203 logs.go:276] 0 containers: []
	W0731 18:10:14.819697   74203 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 18:10:14.819705   74203 logs.go:123] Gathering logs for dmesg ...
	I0731 18:10:14.819725   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 18:10:14.832525   74203 logs.go:123] Gathering logs for describe nodes ...
	I0731 18:10:14.832550   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 18:10:14.901190   74203 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 18:10:14.901216   74203 logs.go:123] Gathering logs for CRI-O ...
	I0731 18:10:14.901229   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 18:10:14.977123   74203 logs.go:123] Gathering logs for container status ...
	I0731 18:10:14.977158   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 18:10:15.014882   74203 logs.go:123] Gathering logs for kubelet ...
	I0731 18:10:15.014912   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 18:10:11.779007   73696 pod_ready.go:102] pod "metrics-server-569cc877fc-64hp4" in "kube-system" namespace has status "Ready":"False"
	I0731 18:10:14.279638   73696 pod_ready.go:102] pod "metrics-server-569cc877fc-64hp4" in "kube-system" namespace has status "Ready":"False"
	I0731 18:10:14.142303   73479 pod_ready.go:102] pod "metrics-server-78fcd8795b-27pkr" in "kube-system" namespace has status "Ready":"False"
	I0731 18:10:16.143713   73479 pod_ready.go:102] pod "metrics-server-78fcd8795b-27pkr" in "kube-system" namespace has status "Ready":"False"
	I0731 18:10:16.770910   73800 pod_ready.go:102] pod "metrics-server-569cc877fc-fzxrw" in "kube-system" namespace has status "Ready":"False"
	I0731 18:10:18.771058   73800 pod_ready.go:102] pod "metrics-server-569cc877fc-fzxrw" in "kube-system" namespace has status "Ready":"False"
	I0731 18:10:17.564989   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:10:17.578676   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 18:10:17.578740   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 18:10:17.610077   74203 cri.go:89] found id: ""
	I0731 18:10:17.610103   74203 logs.go:276] 0 containers: []
	W0731 18:10:17.610112   74203 logs.go:278] No container was found matching "kube-apiserver"
	I0731 18:10:17.610117   74203 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 18:10:17.610169   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 18:10:17.643143   74203 cri.go:89] found id: ""
	I0731 18:10:17.643166   74203 logs.go:276] 0 containers: []
	W0731 18:10:17.643173   74203 logs.go:278] No container was found matching "etcd"
	I0731 18:10:17.643179   74203 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 18:10:17.643225   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 18:10:17.677979   74203 cri.go:89] found id: ""
	I0731 18:10:17.678002   74203 logs.go:276] 0 containers: []
	W0731 18:10:17.678010   74203 logs.go:278] No container was found matching "coredns"
	I0731 18:10:17.678016   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 18:10:17.678086   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 18:10:17.711905   74203 cri.go:89] found id: ""
	I0731 18:10:17.711941   74203 logs.go:276] 0 containers: []
	W0731 18:10:17.711953   74203 logs.go:278] No container was found matching "kube-scheduler"
	I0731 18:10:17.711960   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 18:10:17.712013   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 18:10:17.745842   74203 cri.go:89] found id: ""
	I0731 18:10:17.745870   74203 logs.go:276] 0 containers: []
	W0731 18:10:17.745881   74203 logs.go:278] No container was found matching "kube-proxy"
	I0731 18:10:17.745889   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 18:10:17.745949   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 18:10:17.778170   74203 cri.go:89] found id: ""
	I0731 18:10:17.778242   74203 logs.go:276] 0 containers: []
	W0731 18:10:17.778260   74203 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 18:10:17.778272   74203 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 18:10:17.778340   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 18:10:17.810717   74203 cri.go:89] found id: ""
	I0731 18:10:17.810744   74203 logs.go:276] 0 containers: []
	W0731 18:10:17.810755   74203 logs.go:278] No container was found matching "kindnet"
	I0731 18:10:17.810762   74203 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 18:10:17.810823   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 18:10:17.843237   74203 cri.go:89] found id: ""
	I0731 18:10:17.843268   74203 logs.go:276] 0 containers: []
	W0731 18:10:17.843278   74203 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 18:10:17.843288   74203 logs.go:123] Gathering logs for kubelet ...
	I0731 18:10:17.843303   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 18:10:17.894338   74203 logs.go:123] Gathering logs for dmesg ...
	I0731 18:10:17.894376   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 18:10:17.907898   74203 logs.go:123] Gathering logs for describe nodes ...
	I0731 18:10:17.907927   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 18:10:17.977115   74203 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 18:10:17.977133   74203 logs.go:123] Gathering logs for CRI-O ...
	I0731 18:10:17.977145   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 18:10:18.059924   74203 logs.go:123] Gathering logs for container status ...
	I0731 18:10:18.059968   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 18:10:16.279697   73696 pod_ready.go:102] pod "metrics-server-569cc877fc-64hp4" in "kube-system" namespace has status "Ready":"False"
	I0731 18:10:18.780698   73696 pod_ready.go:102] pod "metrics-server-569cc877fc-64hp4" in "kube-system" namespace has status "Ready":"False"
	I0731 18:10:18.144063   73479 pod_ready.go:102] pod "metrics-server-78fcd8795b-27pkr" in "kube-system" namespace has status "Ready":"False"
	I0731 18:10:20.643891   73479 pod_ready.go:102] pod "metrics-server-78fcd8795b-27pkr" in "kube-system" namespace has status "Ready":"False"
	I0731 18:10:20.772956   73800 pod_ready.go:102] pod "metrics-server-569cc877fc-fzxrw" in "kube-system" namespace has status "Ready":"False"
	I0731 18:10:23.270974   73800 pod_ready.go:102] pod "metrics-server-569cc877fc-fzxrw" in "kube-system" namespace has status "Ready":"False"
	I0731 18:10:20.600903   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:10:20.613609   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 18:10:20.613680   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 18:10:20.646352   74203 cri.go:89] found id: ""
	I0731 18:10:20.646379   74203 logs.go:276] 0 containers: []
	W0731 18:10:20.646388   74203 logs.go:278] No container was found matching "kube-apiserver"
	I0731 18:10:20.646395   74203 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 18:10:20.646453   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 18:10:20.680448   74203 cri.go:89] found id: ""
	I0731 18:10:20.680475   74203 logs.go:276] 0 containers: []
	W0731 18:10:20.680486   74203 logs.go:278] No container was found matching "etcd"
	I0731 18:10:20.680493   74203 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 18:10:20.680555   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 18:10:20.716330   74203 cri.go:89] found id: ""
	I0731 18:10:20.716365   74203 logs.go:276] 0 containers: []
	W0731 18:10:20.716378   74203 logs.go:278] No container was found matching "coredns"
	I0731 18:10:20.716387   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 18:10:20.716448   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 18:10:20.748630   74203 cri.go:89] found id: ""
	I0731 18:10:20.748657   74203 logs.go:276] 0 containers: []
	W0731 18:10:20.748665   74203 logs.go:278] No container was found matching "kube-scheduler"
	I0731 18:10:20.748670   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 18:10:20.748736   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 18:10:20.787769   74203 cri.go:89] found id: ""
	I0731 18:10:20.787793   74203 logs.go:276] 0 containers: []
	W0731 18:10:20.787802   74203 logs.go:278] No container was found matching "kube-proxy"
	I0731 18:10:20.787809   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 18:10:20.787869   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 18:10:20.819884   74203 cri.go:89] found id: ""
	I0731 18:10:20.819911   74203 logs.go:276] 0 containers: []
	W0731 18:10:20.819921   74203 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 18:10:20.819929   74203 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 18:10:20.819988   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 18:10:20.853414   74203 cri.go:89] found id: ""
	I0731 18:10:20.853437   74203 logs.go:276] 0 containers: []
	W0731 18:10:20.853445   74203 logs.go:278] No container was found matching "kindnet"
	I0731 18:10:20.853450   74203 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 18:10:20.853508   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 18:10:20.889198   74203 cri.go:89] found id: ""
	I0731 18:10:20.889224   74203 logs.go:276] 0 containers: []
	W0731 18:10:20.889231   74203 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 18:10:20.889239   74203 logs.go:123] Gathering logs for dmesg ...
	I0731 18:10:20.889251   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 18:10:20.903240   74203 logs.go:123] Gathering logs for describe nodes ...
	I0731 18:10:20.903268   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 18:10:20.971003   74203 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 18:10:20.971032   74203 logs.go:123] Gathering logs for CRI-O ...
	I0731 18:10:20.971051   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 18:10:21.045856   74203 logs.go:123] Gathering logs for container status ...
	I0731 18:10:21.045888   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 18:10:21.086089   74203 logs.go:123] Gathering logs for kubelet ...
	I0731 18:10:21.086121   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 18:10:23.639664   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:10:23.652573   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 18:10:23.652632   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 18:10:23.684719   74203 cri.go:89] found id: ""
	I0731 18:10:23.684746   74203 logs.go:276] 0 containers: []
	W0731 18:10:23.684757   74203 logs.go:278] No container was found matching "kube-apiserver"
	I0731 18:10:23.684765   74203 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 18:10:23.684820   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 18:10:23.717315   74203 cri.go:89] found id: ""
	I0731 18:10:23.717350   74203 logs.go:276] 0 containers: []
	W0731 18:10:23.717362   74203 logs.go:278] No container was found matching "etcd"
	I0731 18:10:23.717369   74203 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 18:10:23.717432   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 18:10:23.750251   74203 cri.go:89] found id: ""
	I0731 18:10:23.750275   74203 logs.go:276] 0 containers: []
	W0731 18:10:23.750286   74203 logs.go:278] No container was found matching "coredns"
	I0731 18:10:23.750293   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 18:10:23.750397   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 18:10:23.785700   74203 cri.go:89] found id: ""
	I0731 18:10:23.785726   74203 logs.go:276] 0 containers: []
	W0731 18:10:23.785737   74203 logs.go:278] No container was found matching "kube-scheduler"
	I0731 18:10:23.785745   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 18:10:23.785792   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 18:10:23.816856   74203 cri.go:89] found id: ""
	I0731 18:10:23.816885   74203 logs.go:276] 0 containers: []
	W0731 18:10:23.816895   74203 logs.go:278] No container was found matching "kube-proxy"
	I0731 18:10:23.816902   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 18:10:23.816965   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 18:10:23.849931   74203 cri.go:89] found id: ""
	I0731 18:10:23.849962   74203 logs.go:276] 0 containers: []
	W0731 18:10:23.849972   74203 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 18:10:23.849980   74203 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 18:10:23.850043   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 18:10:23.881413   74203 cri.go:89] found id: ""
	I0731 18:10:23.881444   74203 logs.go:276] 0 containers: []
	W0731 18:10:23.881452   74203 logs.go:278] No container was found matching "kindnet"
	I0731 18:10:23.881458   74203 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 18:10:23.881516   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 18:10:23.914272   74203 cri.go:89] found id: ""
	I0731 18:10:23.914303   74203 logs.go:276] 0 containers: []
	W0731 18:10:23.914313   74203 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 18:10:23.914325   74203 logs.go:123] Gathering logs for describe nodes ...
	I0731 18:10:23.914352   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 18:10:23.979988   74203 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 18:10:23.980015   74203 logs.go:123] Gathering logs for CRI-O ...
	I0731 18:10:23.980027   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 18:10:24.057159   74203 logs.go:123] Gathering logs for container status ...
	I0731 18:10:24.057198   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 18:10:24.097567   74203 logs.go:123] Gathering logs for kubelet ...
	I0731 18:10:24.097603   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 18:10:24.154740   74203 logs.go:123] Gathering logs for dmesg ...
	I0731 18:10:24.154781   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 18:10:21.279091   73696 pod_ready.go:102] pod "metrics-server-569cc877fc-64hp4" in "kube-system" namespace has status "Ready":"False"
	I0731 18:10:23.779103   73696 pod_ready.go:102] pod "metrics-server-569cc877fc-64hp4" in "kube-system" namespace has status "Ready":"False"
	I0731 18:10:25.779754   73696 pod_ready.go:102] pod "metrics-server-569cc877fc-64hp4" in "kube-system" namespace has status "Ready":"False"
	I0731 18:10:23.142423   73479 pod_ready.go:102] pod "metrics-server-78fcd8795b-27pkr" in "kube-system" namespace has status "Ready":"False"
	I0731 18:10:25.642901   73479 pod_ready.go:102] pod "metrics-server-78fcd8795b-27pkr" in "kube-system" namespace has status "Ready":"False"
	I0731 18:10:25.272277   73800 pod_ready.go:102] pod "metrics-server-569cc877fc-fzxrw" in "kube-system" namespace has status "Ready":"False"
	I0731 18:10:27.771221   73800 pod_ready.go:102] pod "metrics-server-569cc877fc-fzxrw" in "kube-system" namespace has status "Ready":"False"
	I0731 18:10:26.670324   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:10:26.683866   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 18:10:26.683951   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 18:10:26.717671   74203 cri.go:89] found id: ""
	I0731 18:10:26.717722   74203 logs.go:276] 0 containers: []
	W0731 18:10:26.717733   74203 logs.go:278] No container was found matching "kube-apiserver"
	I0731 18:10:26.717739   74203 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 18:10:26.717790   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 18:10:26.751201   74203 cri.go:89] found id: ""
	I0731 18:10:26.751228   74203 logs.go:276] 0 containers: []
	W0731 18:10:26.751236   74203 logs.go:278] No container was found matching "etcd"
	I0731 18:10:26.751246   74203 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 18:10:26.751315   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 18:10:26.784768   74203 cri.go:89] found id: ""
	I0731 18:10:26.784793   74203 logs.go:276] 0 containers: []
	W0731 18:10:26.784803   74203 logs.go:278] No container was found matching "coredns"
	I0731 18:10:26.784811   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 18:10:26.784868   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 18:10:26.822269   74203 cri.go:89] found id: ""
	I0731 18:10:26.822298   74203 logs.go:276] 0 containers: []
	W0731 18:10:26.822307   74203 logs.go:278] No container was found matching "kube-scheduler"
	I0731 18:10:26.822315   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 18:10:26.822378   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 18:10:26.854405   74203 cri.go:89] found id: ""
	I0731 18:10:26.854427   74203 logs.go:276] 0 containers: []
	W0731 18:10:26.854434   74203 logs.go:278] No container was found matching "kube-proxy"
	I0731 18:10:26.854441   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 18:10:26.854490   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 18:10:26.888975   74203 cri.go:89] found id: ""
	I0731 18:10:26.889000   74203 logs.go:276] 0 containers: []
	W0731 18:10:26.889007   74203 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 18:10:26.889013   74203 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 18:10:26.889085   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 18:10:26.922940   74203 cri.go:89] found id: ""
	I0731 18:10:26.922967   74203 logs.go:276] 0 containers: []
	W0731 18:10:26.922976   74203 logs.go:278] No container was found matching "kindnet"
	I0731 18:10:26.922981   74203 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 18:10:26.923040   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 18:10:26.955717   74203 cri.go:89] found id: ""
	I0731 18:10:26.955743   74203 logs.go:276] 0 containers: []
	W0731 18:10:26.955754   74203 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 18:10:26.955764   74203 logs.go:123] Gathering logs for kubelet ...
	I0731 18:10:26.955779   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 18:10:27.006453   74203 logs.go:123] Gathering logs for dmesg ...
	I0731 18:10:27.006481   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 18:10:27.019136   74203 logs.go:123] Gathering logs for describe nodes ...
	I0731 18:10:27.019159   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 18:10:27.086988   74203 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 18:10:27.087014   74203 logs.go:123] Gathering logs for CRI-O ...
	I0731 18:10:27.087031   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 18:10:27.161574   74203 logs.go:123] Gathering logs for container status ...
	I0731 18:10:27.161604   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 18:10:29.705620   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:10:29.718718   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 18:10:29.718775   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 18:10:29.751079   74203 cri.go:89] found id: ""
	I0731 18:10:29.751123   74203 logs.go:276] 0 containers: []
	W0731 18:10:29.751134   74203 logs.go:278] No container was found matching "kube-apiserver"
	I0731 18:10:29.751142   74203 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 18:10:29.751198   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 18:10:29.790944   74203 cri.go:89] found id: ""
	I0731 18:10:29.790971   74203 logs.go:276] 0 containers: []
	W0731 18:10:29.790982   74203 logs.go:278] No container was found matching "etcd"
	I0731 18:10:29.790988   74203 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 18:10:29.791041   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 18:10:29.827921   74203 cri.go:89] found id: ""
	I0731 18:10:29.827951   74203 logs.go:276] 0 containers: []
	W0731 18:10:29.827965   74203 logs.go:278] No container was found matching "coredns"
	I0731 18:10:29.827971   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 18:10:29.828031   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 18:10:29.861365   74203 cri.go:89] found id: ""
	I0731 18:10:29.861398   74203 logs.go:276] 0 containers: []
	W0731 18:10:29.861409   74203 logs.go:278] No container was found matching "kube-scheduler"
	I0731 18:10:29.861417   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 18:10:29.861472   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 18:10:29.894509   74203 cri.go:89] found id: ""
	I0731 18:10:29.894537   74203 logs.go:276] 0 containers: []
	W0731 18:10:29.894546   74203 logs.go:278] No container was found matching "kube-proxy"
	I0731 18:10:29.894552   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 18:10:29.894614   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 18:10:29.926793   74203 cri.go:89] found id: ""
	I0731 18:10:29.926821   74203 logs.go:276] 0 containers: []
	W0731 18:10:29.926832   74203 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 18:10:29.926839   74203 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 18:10:29.926904   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 18:10:29.963765   74203 cri.go:89] found id: ""
	I0731 18:10:29.963792   74203 logs.go:276] 0 containers: []
	W0731 18:10:29.963802   74203 logs.go:278] No container was found matching "kindnet"
	I0731 18:10:29.963809   74203 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 18:10:29.963870   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 18:10:29.998577   74203 cri.go:89] found id: ""
	I0731 18:10:29.998604   74203 logs.go:276] 0 containers: []
	W0731 18:10:29.998611   74203 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 18:10:29.998619   74203 logs.go:123] Gathering logs for kubelet ...
	I0731 18:10:29.998630   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 18:10:30.050035   74203 logs.go:123] Gathering logs for dmesg ...
	I0731 18:10:30.050072   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 18:10:30.064147   74203 logs.go:123] Gathering logs for describe nodes ...
	I0731 18:10:30.064178   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 18:10:30.136990   74203 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 18:10:30.137012   74203 logs.go:123] Gathering logs for CRI-O ...
	I0731 18:10:30.137030   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 18:10:30.214687   74203 logs.go:123] Gathering logs for container status ...
	I0731 18:10:30.214719   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 18:10:28.279257   73696 pod_ready.go:102] pod "metrics-server-569cc877fc-64hp4" in "kube-system" namespace has status "Ready":"False"
	I0731 18:10:30.778466   73696 pod_ready.go:102] pod "metrics-server-569cc877fc-64hp4" in "kube-system" namespace has status "Ready":"False"
	I0731 18:10:27.644082   73479 pod_ready.go:102] pod "metrics-server-78fcd8795b-27pkr" in "kube-system" namespace has status "Ready":"False"
	I0731 18:10:30.144191   73479 pod_ready.go:102] pod "metrics-server-78fcd8795b-27pkr" in "kube-system" namespace has status "Ready":"False"
	I0731 18:10:29.772316   73800 pod_ready.go:102] pod "metrics-server-569cc877fc-fzxrw" in "kube-system" namespace has status "Ready":"False"
	I0731 18:10:32.272651   73800 pod_ready.go:102] pod "metrics-server-569cc877fc-fzxrw" in "kube-system" namespace has status "Ready":"False"
	I0731 18:10:32.753503   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:10:32.766795   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 18:10:32.766873   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 18:10:32.812134   74203 cri.go:89] found id: ""
	I0731 18:10:32.812161   74203 logs.go:276] 0 containers: []
	W0731 18:10:32.812169   74203 logs.go:278] No container was found matching "kube-apiserver"
	I0731 18:10:32.812175   74203 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 18:10:32.812229   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 18:10:32.846997   74203 cri.go:89] found id: ""
	I0731 18:10:32.847029   74203 logs.go:276] 0 containers: []
	W0731 18:10:32.847039   74203 logs.go:278] No container was found matching "etcd"
	I0731 18:10:32.847044   74203 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 18:10:32.847092   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 18:10:32.884093   74203 cri.go:89] found id: ""
	I0731 18:10:32.884123   74203 logs.go:276] 0 containers: []
	W0731 18:10:32.884132   74203 logs.go:278] No container was found matching "coredns"
	I0731 18:10:32.884138   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 18:10:32.884188   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 18:10:32.920160   74203 cri.go:89] found id: ""
	I0731 18:10:32.920186   74203 logs.go:276] 0 containers: []
	W0731 18:10:32.920197   74203 logs.go:278] No container was found matching "kube-scheduler"
	I0731 18:10:32.920204   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 18:10:32.920263   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 18:10:32.952750   74203 cri.go:89] found id: ""
	I0731 18:10:32.952777   74203 logs.go:276] 0 containers: []
	W0731 18:10:32.952788   74203 logs.go:278] No container was found matching "kube-proxy"
	I0731 18:10:32.952795   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 18:10:32.952865   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 18:10:32.989086   74203 cri.go:89] found id: ""
	I0731 18:10:32.989115   74203 logs.go:276] 0 containers: []
	W0731 18:10:32.989125   74203 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 18:10:32.989135   74203 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 18:10:32.989189   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 18:10:33.021554   74203 cri.go:89] found id: ""
	I0731 18:10:33.021590   74203 logs.go:276] 0 containers: []
	W0731 18:10:33.021602   74203 logs.go:278] No container was found matching "kindnet"
	I0731 18:10:33.021609   74203 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 18:10:33.021662   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 18:10:33.061097   74203 cri.go:89] found id: ""
	I0731 18:10:33.061128   74203 logs.go:276] 0 containers: []
	W0731 18:10:33.061141   74203 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 18:10:33.061160   74203 logs.go:123] Gathering logs for kubelet ...
	I0731 18:10:33.061174   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 18:10:33.113497   74203 logs.go:123] Gathering logs for dmesg ...
	I0731 18:10:33.113534   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 18:10:33.126816   74203 logs.go:123] Gathering logs for describe nodes ...
	I0731 18:10:33.126842   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 18:10:33.196713   74203 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 18:10:33.196733   74203 logs.go:123] Gathering logs for CRI-O ...
	I0731 18:10:33.196744   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 18:10:33.277697   74203 logs.go:123] Gathering logs for container status ...
	I0731 18:10:33.277724   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 18:10:33.279738   73696 pod_ready.go:102] pod "metrics-server-569cc877fc-64hp4" in "kube-system" namespace has status "Ready":"False"
	I0731 18:10:35.780181   73696 pod_ready.go:102] pod "metrics-server-569cc877fc-64hp4" in "kube-system" namespace has status "Ready":"False"
	I0731 18:10:32.643177   73479 pod_ready.go:102] pod "metrics-server-78fcd8795b-27pkr" in "kube-system" namespace has status "Ready":"False"
	I0731 18:10:35.143606   73479 pod_ready.go:102] pod "metrics-server-78fcd8795b-27pkr" in "kube-system" namespace has status "Ready":"False"
	I0731 18:10:34.771678   73800 pod_ready.go:102] pod "metrics-server-569cc877fc-fzxrw" in "kube-system" namespace has status "Ready":"False"
	I0731 18:10:36.772167   73800 pod_ready.go:102] pod "metrics-server-569cc877fc-fzxrw" in "kube-system" namespace has status "Ready":"False"
	I0731 18:10:39.272752   73800 pod_ready.go:102] pod "metrics-server-569cc877fc-fzxrw" in "kube-system" namespace has status "Ready":"False"
	I0731 18:10:35.817143   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:10:35.829760   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 18:10:35.829820   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 18:10:35.862974   74203 cri.go:89] found id: ""
	I0731 18:10:35.863002   74203 logs.go:276] 0 containers: []
	W0731 18:10:35.863014   74203 logs.go:278] No container was found matching "kube-apiserver"
	I0731 18:10:35.863022   74203 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 18:10:35.863078   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 18:10:35.898547   74203 cri.go:89] found id: ""
	I0731 18:10:35.898576   74203 logs.go:276] 0 containers: []
	W0731 18:10:35.898584   74203 logs.go:278] No container was found matching "etcd"
	I0731 18:10:35.898590   74203 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 18:10:35.898651   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 18:10:35.930351   74203 cri.go:89] found id: ""
	I0731 18:10:35.930379   74203 logs.go:276] 0 containers: []
	W0731 18:10:35.930390   74203 logs.go:278] No container was found matching "coredns"
	I0731 18:10:35.930396   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 18:10:35.930463   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 18:10:35.962623   74203 cri.go:89] found id: ""
	I0731 18:10:35.962652   74203 logs.go:276] 0 containers: []
	W0731 18:10:35.962663   74203 logs.go:278] No container was found matching "kube-scheduler"
	I0731 18:10:35.962670   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 18:10:35.962727   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 18:10:35.998213   74203 cri.go:89] found id: ""
	I0731 18:10:35.998233   74203 logs.go:276] 0 containers: []
	W0731 18:10:35.998240   74203 logs.go:278] No container was found matching "kube-proxy"
	I0731 18:10:35.998245   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 18:10:35.998291   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 18:10:36.032670   74203 cri.go:89] found id: ""
	I0731 18:10:36.032695   74203 logs.go:276] 0 containers: []
	W0731 18:10:36.032703   74203 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 18:10:36.032709   74203 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 18:10:36.032757   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 18:10:36.066349   74203 cri.go:89] found id: ""
	I0731 18:10:36.066381   74203 logs.go:276] 0 containers: []
	W0731 18:10:36.066392   74203 logs.go:278] No container was found matching "kindnet"
	I0731 18:10:36.066399   74203 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 18:10:36.066461   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 18:10:36.104137   74203 cri.go:89] found id: ""
	I0731 18:10:36.104168   74203 logs.go:276] 0 containers: []
	W0731 18:10:36.104180   74203 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 18:10:36.104200   74203 logs.go:123] Gathering logs for kubelet ...
	I0731 18:10:36.104215   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 18:10:36.155814   74203 logs.go:123] Gathering logs for dmesg ...
	I0731 18:10:36.155844   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 18:10:36.168885   74203 logs.go:123] Gathering logs for describe nodes ...
	I0731 18:10:36.168912   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 18:10:36.235950   74203 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 18:10:36.235972   74203 logs.go:123] Gathering logs for CRI-O ...
	I0731 18:10:36.235987   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 18:10:36.318382   74203 logs.go:123] Gathering logs for container status ...
	I0731 18:10:36.318414   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 18:10:38.853972   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:10:38.867018   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 18:10:38.867089   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 18:10:38.902069   74203 cri.go:89] found id: ""
	I0731 18:10:38.902097   74203 logs.go:276] 0 containers: []
	W0731 18:10:38.902109   74203 logs.go:278] No container was found matching "kube-apiserver"
	I0731 18:10:38.902115   74203 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 18:10:38.902181   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 18:10:38.935272   74203 cri.go:89] found id: ""
	I0731 18:10:38.935296   74203 logs.go:276] 0 containers: []
	W0731 18:10:38.935316   74203 logs.go:278] No container was found matching "etcd"
	I0731 18:10:38.935329   74203 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 18:10:38.935387   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 18:10:38.968582   74203 cri.go:89] found id: ""
	I0731 18:10:38.968610   74203 logs.go:276] 0 containers: []
	W0731 18:10:38.968621   74203 logs.go:278] No container was found matching "coredns"
	I0731 18:10:38.968629   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 18:10:38.968688   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 18:10:38.999740   74203 cri.go:89] found id: ""
	I0731 18:10:38.999770   74203 logs.go:276] 0 containers: []
	W0731 18:10:38.999780   74203 logs.go:278] No container was found matching "kube-scheduler"
	I0731 18:10:38.999787   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 18:10:38.999845   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 18:10:39.032964   74203 cri.go:89] found id: ""
	I0731 18:10:39.032993   74203 logs.go:276] 0 containers: []
	W0731 18:10:39.033008   74203 logs.go:278] No container was found matching "kube-proxy"
	I0731 18:10:39.033015   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 18:10:39.033099   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 18:10:39.064121   74203 cri.go:89] found id: ""
	I0731 18:10:39.064149   74203 logs.go:276] 0 containers: []
	W0731 18:10:39.064158   74203 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 18:10:39.064164   74203 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 18:10:39.064222   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 18:10:39.098462   74203 cri.go:89] found id: ""
	I0731 18:10:39.098488   74203 logs.go:276] 0 containers: []
	W0731 18:10:39.098498   74203 logs.go:278] No container was found matching "kindnet"
	I0731 18:10:39.098505   74203 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 18:10:39.098564   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 18:10:39.130627   74203 cri.go:89] found id: ""
	I0731 18:10:39.130653   74203 logs.go:276] 0 containers: []
	W0731 18:10:39.130663   74203 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 18:10:39.130674   74203 logs.go:123] Gathering logs for CRI-O ...
	I0731 18:10:39.130687   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 18:10:39.223664   74203 logs.go:123] Gathering logs for container status ...
	I0731 18:10:39.223698   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 18:10:39.260502   74203 logs.go:123] Gathering logs for kubelet ...
	I0731 18:10:39.260533   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 18:10:39.315643   74203 logs.go:123] Gathering logs for dmesg ...
	I0731 18:10:39.315675   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 18:10:39.329731   74203 logs.go:123] Gathering logs for describe nodes ...
	I0731 18:10:39.329761   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 18:10:39.395078   74203 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 18:10:38.278911   73696 pod_ready.go:102] pod "metrics-server-569cc877fc-64hp4" in "kube-system" namespace has status "Ready":"False"
	I0731 18:10:40.779921   73696 pod_ready.go:102] pod "metrics-server-569cc877fc-64hp4" in "kube-system" namespace has status "Ready":"False"
	I0731 18:10:37.643246   73479 pod_ready.go:102] pod "metrics-server-78fcd8795b-27pkr" in "kube-system" namespace has status "Ready":"False"
	I0731 18:10:39.643862   73479 pod_ready.go:102] pod "metrics-server-78fcd8795b-27pkr" in "kube-system" namespace has status "Ready":"False"
	I0731 18:10:41.772051   73800 pod_ready.go:102] pod "metrics-server-569cc877fc-fzxrw" in "kube-system" namespace has status "Ready":"False"
	I0731 18:10:44.271544   73800 pod_ready.go:102] pod "metrics-server-569cc877fc-fzxrw" in "kube-system" namespace has status "Ready":"False"
	I0731 18:10:41.895698   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:10:41.910111   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 18:10:41.910191   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 18:10:41.943700   74203 cri.go:89] found id: ""
	I0731 18:10:41.943732   74203 logs.go:276] 0 containers: []
	W0731 18:10:41.943743   74203 logs.go:278] No container was found matching "kube-apiserver"
	I0731 18:10:41.943751   74203 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 18:10:41.943812   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 18:10:41.976848   74203 cri.go:89] found id: ""
	I0731 18:10:41.976879   74203 logs.go:276] 0 containers: []
	W0731 18:10:41.976888   74203 logs.go:278] No container was found matching "etcd"
	I0731 18:10:41.976894   74203 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 18:10:41.976967   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 18:10:42.009424   74203 cri.go:89] found id: ""
	I0731 18:10:42.009451   74203 logs.go:276] 0 containers: []
	W0731 18:10:42.009462   74203 logs.go:278] No container was found matching "coredns"
	I0731 18:10:42.009477   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 18:10:42.009546   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 18:10:42.047233   74203 cri.go:89] found id: ""
	I0731 18:10:42.047260   74203 logs.go:276] 0 containers: []
	W0731 18:10:42.047268   74203 logs.go:278] No container was found matching "kube-scheduler"
	I0731 18:10:42.047274   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 18:10:42.047342   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 18:10:42.079900   74203 cri.go:89] found id: ""
	I0731 18:10:42.079928   74203 logs.go:276] 0 containers: []
	W0731 18:10:42.079938   74203 logs.go:278] No container was found matching "kube-proxy"
	I0731 18:10:42.079945   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 18:10:42.080025   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 18:10:42.114122   74203 cri.go:89] found id: ""
	I0731 18:10:42.114152   74203 logs.go:276] 0 containers: []
	W0731 18:10:42.114164   74203 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 18:10:42.114172   74203 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 18:10:42.114224   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 18:10:42.148741   74203 cri.go:89] found id: ""
	I0731 18:10:42.148768   74203 logs.go:276] 0 containers: []
	W0731 18:10:42.148780   74203 logs.go:278] No container was found matching "kindnet"
	I0731 18:10:42.148789   74203 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 18:10:42.148853   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 18:10:42.184739   74203 cri.go:89] found id: ""
	I0731 18:10:42.184762   74203 logs.go:276] 0 containers: []
	W0731 18:10:42.184769   74203 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 18:10:42.184777   74203 logs.go:123] Gathering logs for describe nodes ...
	I0731 18:10:42.184791   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 18:10:42.254676   74203 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 18:10:42.254694   74203 logs.go:123] Gathering logs for CRI-O ...
	I0731 18:10:42.254706   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 18:10:42.334936   74203 logs.go:123] Gathering logs for container status ...
	I0731 18:10:42.334978   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 18:10:42.371511   74203 logs.go:123] Gathering logs for kubelet ...
	I0731 18:10:42.371540   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 18:10:42.421800   74203 logs.go:123] Gathering logs for dmesg ...
	I0731 18:10:42.421831   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 18:10:44.934983   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:10:44.947212   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 18:10:44.947293   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 18:10:44.979722   74203 cri.go:89] found id: ""
	I0731 18:10:44.979748   74203 logs.go:276] 0 containers: []
	W0731 18:10:44.979760   74203 logs.go:278] No container was found matching "kube-apiserver"
	I0731 18:10:44.979767   74203 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 18:10:44.979819   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 18:10:45.011594   74203 cri.go:89] found id: ""
	I0731 18:10:45.011620   74203 logs.go:276] 0 containers: []
	W0731 18:10:45.011630   74203 logs.go:278] No container was found matching "etcd"
	I0731 18:10:45.011637   74203 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 18:10:45.011803   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 18:10:45.043174   74203 cri.go:89] found id: ""
	I0731 18:10:45.043197   74203 logs.go:276] 0 containers: []
	W0731 18:10:45.043207   74203 logs.go:278] No container was found matching "coredns"
	I0731 18:10:45.043214   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 18:10:45.043278   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 18:10:45.074629   74203 cri.go:89] found id: ""
	I0731 18:10:45.074652   74203 logs.go:276] 0 containers: []
	W0731 18:10:45.074662   74203 logs.go:278] No container was found matching "kube-scheduler"
	I0731 18:10:45.074669   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 18:10:45.074727   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 18:10:45.108917   74203 cri.go:89] found id: ""
	I0731 18:10:45.108944   74203 logs.go:276] 0 containers: []
	W0731 18:10:45.108952   74203 logs.go:278] No container was found matching "kube-proxy"
	I0731 18:10:45.108959   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 18:10:45.109018   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 18:10:45.142200   74203 cri.go:89] found id: ""
	I0731 18:10:45.142227   74203 logs.go:276] 0 containers: []
	W0731 18:10:45.142237   74203 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 18:10:45.142244   74203 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 18:10:45.142306   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 18:10:45.177076   74203 cri.go:89] found id: ""
	I0731 18:10:45.177101   74203 logs.go:276] 0 containers: []
	W0731 18:10:45.177109   74203 logs.go:278] No container was found matching "kindnet"
	I0731 18:10:45.177114   74203 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 18:10:45.177168   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 18:10:45.209352   74203 cri.go:89] found id: ""
	I0731 18:10:45.209376   74203 logs.go:276] 0 containers: []
	W0731 18:10:45.209383   74203 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 18:10:45.209392   74203 logs.go:123] Gathering logs for kubelet ...
	I0731 18:10:45.209407   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 18:10:45.257966   74203 logs.go:123] Gathering logs for dmesg ...
	I0731 18:10:45.257998   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 18:10:45.272429   74203 logs.go:123] Gathering logs for describe nodes ...
	I0731 18:10:45.272462   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 18:10:43.279626   73696 pod_ready.go:102] pod "metrics-server-569cc877fc-64hp4" in "kube-system" namespace has status "Ready":"False"
	I0731 18:10:45.778975   73696 pod_ready.go:102] pod "metrics-server-569cc877fc-64hp4" in "kube-system" namespace has status "Ready":"False"
	I0731 18:10:42.145247   73479 pod_ready.go:102] pod "metrics-server-78fcd8795b-27pkr" in "kube-system" namespace has status "Ready":"False"
	I0731 18:10:44.642278   73479 pod_ready.go:102] pod "metrics-server-78fcd8795b-27pkr" in "kube-system" namespace has status "Ready":"False"
	I0731 18:10:46.644897   73479 pod_ready.go:102] pod "metrics-server-78fcd8795b-27pkr" in "kube-system" namespace has status "Ready":"False"
	I0731 18:10:46.771785   73800 pod_ready.go:102] pod "metrics-server-569cc877fc-fzxrw" in "kube-system" namespace has status "Ready":"False"
	I0731 18:10:48.772117   73800 pod_ready.go:102] pod "metrics-server-569cc877fc-fzxrw" in "kube-system" namespace has status "Ready":"False"
	W0731 18:10:45.347952   74203 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 18:10:45.347973   74203 logs.go:123] Gathering logs for CRI-O ...
	I0731 18:10:45.347988   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 18:10:45.428556   74203 logs.go:123] Gathering logs for container status ...
	I0731 18:10:45.428609   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 18:10:47.971089   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:10:47.986677   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 18:10:47.986749   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 18:10:48.020396   74203 cri.go:89] found id: ""
	I0731 18:10:48.020426   74203 logs.go:276] 0 containers: []
	W0731 18:10:48.020438   74203 logs.go:278] No container was found matching "kube-apiserver"
	I0731 18:10:48.020446   74203 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 18:10:48.020512   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 18:10:48.058129   74203 cri.go:89] found id: ""
	I0731 18:10:48.058161   74203 logs.go:276] 0 containers: []
	W0731 18:10:48.058172   74203 logs.go:278] No container was found matching "etcd"
	I0731 18:10:48.058180   74203 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 18:10:48.058249   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 18:10:48.091894   74203 cri.go:89] found id: ""
	I0731 18:10:48.091922   74203 logs.go:276] 0 containers: []
	W0731 18:10:48.091932   74203 logs.go:278] No container was found matching "coredns"
	I0731 18:10:48.091939   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 18:10:48.091998   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 18:10:48.124757   74203 cri.go:89] found id: ""
	I0731 18:10:48.124788   74203 logs.go:276] 0 containers: []
	W0731 18:10:48.124798   74203 logs.go:278] No container was found matching "kube-scheduler"
	I0731 18:10:48.124807   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 18:10:48.124871   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 18:10:48.159145   74203 cri.go:89] found id: ""
	I0731 18:10:48.159172   74203 logs.go:276] 0 containers: []
	W0731 18:10:48.159184   74203 logs.go:278] No container was found matching "kube-proxy"
	I0731 18:10:48.159191   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 18:10:48.159253   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 18:10:48.200024   74203 cri.go:89] found id: ""
	I0731 18:10:48.200051   74203 logs.go:276] 0 containers: []
	W0731 18:10:48.200061   74203 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 18:10:48.200069   74203 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 18:10:48.200128   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 18:10:48.233838   74203 cri.go:89] found id: ""
	I0731 18:10:48.233870   74203 logs.go:276] 0 containers: []
	W0731 18:10:48.233880   74203 logs.go:278] No container was found matching "kindnet"
	I0731 18:10:48.233886   74203 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 18:10:48.233941   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 18:10:48.265786   74203 cri.go:89] found id: ""
	I0731 18:10:48.265812   74203 logs.go:276] 0 containers: []
	W0731 18:10:48.265821   74203 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 18:10:48.265832   74203 logs.go:123] Gathering logs for dmesg ...
	I0731 18:10:48.265846   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 18:10:48.280422   74203 logs.go:123] Gathering logs for describe nodes ...
	I0731 18:10:48.280449   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 18:10:48.346774   74203 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 18:10:48.346796   74203 logs.go:123] Gathering logs for CRI-O ...
	I0731 18:10:48.346808   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 18:10:48.424017   74203 logs.go:123] Gathering logs for container status ...
	I0731 18:10:48.424052   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 18:10:48.464139   74203 logs.go:123] Gathering logs for kubelet ...
	I0731 18:10:48.464166   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 18:10:47.781556   73696 pod_ready.go:102] pod "metrics-server-569cc877fc-64hp4" in "kube-system" namespace has status "Ready":"False"
	I0731 18:10:50.278635   73696 pod_ready.go:102] pod "metrics-server-569cc877fc-64hp4" in "kube-system" namespace has status "Ready":"False"
	I0731 18:10:49.143684   73479 pod_ready.go:102] pod "metrics-server-78fcd8795b-27pkr" in "kube-system" namespace has status "Ready":"False"
	I0731 18:10:51.144631   73479 pod_ready.go:102] pod "metrics-server-78fcd8795b-27pkr" in "kube-system" namespace has status "Ready":"False"
	I0731 18:10:51.272847   73800 pod_ready.go:102] pod "metrics-server-569cc877fc-fzxrw" in "kube-system" namespace has status "Ready":"False"
	I0731 18:10:53.771397   73800 pod_ready.go:102] pod "metrics-server-569cc877fc-fzxrw" in "kube-system" namespace has status "Ready":"False"
	I0731 18:10:51.013681   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:10:51.028745   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 18:10:51.028814   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 18:10:51.062656   74203 cri.go:89] found id: ""
	I0731 18:10:51.062683   74203 logs.go:276] 0 containers: []
	W0731 18:10:51.062691   74203 logs.go:278] No container was found matching "kube-apiserver"
	I0731 18:10:51.062700   74203 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 18:10:51.062749   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 18:10:51.099203   74203 cri.go:89] found id: ""
	I0731 18:10:51.099228   74203 logs.go:276] 0 containers: []
	W0731 18:10:51.099237   74203 logs.go:278] No container was found matching "etcd"
	I0731 18:10:51.099243   74203 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 18:10:51.099310   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 18:10:51.133507   74203 cri.go:89] found id: ""
	I0731 18:10:51.133533   74203 logs.go:276] 0 containers: []
	W0731 18:10:51.133540   74203 logs.go:278] No container was found matching "coredns"
	I0731 18:10:51.133546   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 18:10:51.133596   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 18:10:51.169935   74203 cri.go:89] found id: ""
	I0731 18:10:51.169954   74203 logs.go:276] 0 containers: []
	W0731 18:10:51.169961   74203 logs.go:278] No container was found matching "kube-scheduler"
	I0731 18:10:51.169966   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 18:10:51.170012   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 18:10:51.202877   74203 cri.go:89] found id: ""
	I0731 18:10:51.202903   74203 logs.go:276] 0 containers: []
	W0731 18:10:51.202913   74203 logs.go:278] No container was found matching "kube-proxy"
	I0731 18:10:51.202919   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 18:10:51.202988   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 18:10:51.239913   74203 cri.go:89] found id: ""
	I0731 18:10:51.239939   74203 logs.go:276] 0 containers: []
	W0731 18:10:51.239949   74203 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 18:10:51.239957   74203 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 18:10:51.240018   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 18:10:51.272024   74203 cri.go:89] found id: ""
	I0731 18:10:51.272095   74203 logs.go:276] 0 containers: []
	W0731 18:10:51.272115   74203 logs.go:278] No container was found matching "kindnet"
	I0731 18:10:51.272123   74203 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 18:10:51.272185   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 18:10:51.307016   74203 cri.go:89] found id: ""
	I0731 18:10:51.307043   74203 logs.go:276] 0 containers: []
	W0731 18:10:51.307053   74203 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 18:10:51.307063   74203 logs.go:123] Gathering logs for kubelet ...
	I0731 18:10:51.307079   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 18:10:51.364018   74203 logs.go:123] Gathering logs for dmesg ...
	I0731 18:10:51.364066   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 18:10:51.384277   74203 logs.go:123] Gathering logs for describe nodes ...
	I0731 18:10:51.384303   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 18:10:51.472657   74203 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 18:10:51.472679   74203 logs.go:123] Gathering logs for CRI-O ...
	I0731 18:10:51.472696   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 18:10:51.548408   74203 logs.go:123] Gathering logs for container status ...
	I0731 18:10:51.548439   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 18:10:54.086526   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:10:54.099293   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 18:10:54.099368   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 18:10:54.129927   74203 cri.go:89] found id: ""
	I0731 18:10:54.129954   74203 logs.go:276] 0 containers: []
	W0731 18:10:54.129965   74203 logs.go:278] No container was found matching "kube-apiserver"
	I0731 18:10:54.129972   74203 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 18:10:54.130042   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 18:10:54.166428   74203 cri.go:89] found id: ""
	I0731 18:10:54.166457   74203 logs.go:276] 0 containers: []
	W0731 18:10:54.166468   74203 logs.go:278] No container was found matching "etcd"
	I0731 18:10:54.166476   74203 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 18:10:54.166538   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 18:10:54.204523   74203 cri.go:89] found id: ""
	I0731 18:10:54.204549   74203 logs.go:276] 0 containers: []
	W0731 18:10:54.204556   74203 logs.go:278] No container was found matching "coredns"
	I0731 18:10:54.204562   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 18:10:54.204619   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 18:10:54.241706   74203 cri.go:89] found id: ""
	I0731 18:10:54.241730   74203 logs.go:276] 0 containers: []
	W0731 18:10:54.241737   74203 logs.go:278] No container was found matching "kube-scheduler"
	I0731 18:10:54.241744   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 18:10:54.241802   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 18:10:54.277154   74203 cri.go:89] found id: ""
	I0731 18:10:54.277178   74203 logs.go:276] 0 containers: []
	W0731 18:10:54.277187   74203 logs.go:278] No container was found matching "kube-proxy"
	I0731 18:10:54.277193   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 18:10:54.277255   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 18:10:54.310198   74203 cri.go:89] found id: ""
	I0731 18:10:54.310223   74203 logs.go:276] 0 containers: []
	W0731 18:10:54.310231   74203 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 18:10:54.310237   74203 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 18:10:54.310283   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 18:10:54.344807   74203 cri.go:89] found id: ""
	I0731 18:10:54.344837   74203 logs.go:276] 0 containers: []
	W0731 18:10:54.344847   74203 logs.go:278] No container was found matching "kindnet"
	I0731 18:10:54.344854   74203 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 18:10:54.344915   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 18:10:54.383358   74203 cri.go:89] found id: ""
	I0731 18:10:54.383391   74203 logs.go:276] 0 containers: []
	W0731 18:10:54.383400   74203 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 18:10:54.383410   74203 logs.go:123] Gathering logs for kubelet ...
	I0731 18:10:54.383424   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 18:10:54.431876   74203 logs.go:123] Gathering logs for dmesg ...
	I0731 18:10:54.431908   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 18:10:54.444797   74203 logs.go:123] Gathering logs for describe nodes ...
	I0731 18:10:54.444824   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 18:10:54.518816   74203 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 18:10:54.518839   74203 logs.go:123] Gathering logs for CRI-O ...
	I0731 18:10:54.518855   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 18:10:54.600072   74203 logs.go:123] Gathering logs for container status ...
	I0731 18:10:54.600109   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 18:10:52.279006   73696 pod_ready.go:102] pod "metrics-server-569cc877fc-64hp4" in "kube-system" namespace has status "Ready":"False"
	I0731 18:10:54.279520   73696 pod_ready.go:102] pod "metrics-server-569cc877fc-64hp4" in "kube-system" namespace has status "Ready":"False"
	I0731 18:10:53.643093   73479 pod_ready.go:102] pod "metrics-server-78fcd8795b-27pkr" in "kube-system" namespace has status "Ready":"False"
	I0731 18:10:56.143250   73479 pod_ready.go:102] pod "metrics-server-78fcd8795b-27pkr" in "kube-system" namespace has status "Ready":"False"
	I0731 18:10:56.272955   73800 pod_ready.go:102] pod "metrics-server-569cc877fc-fzxrw" in "kube-system" namespace has status "Ready":"False"
	I0731 18:10:58.771584   73800 pod_ready.go:102] pod "metrics-server-569cc877fc-fzxrw" in "kube-system" namespace has status "Ready":"False"
	I0731 18:10:57.141070   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:10:57.155903   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 18:10:57.155975   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 18:10:57.189406   74203 cri.go:89] found id: ""
	I0731 18:10:57.189428   74203 logs.go:276] 0 containers: []
	W0731 18:10:57.189435   74203 logs.go:278] No container was found matching "kube-apiserver"
	I0731 18:10:57.189441   74203 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 18:10:57.189510   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 18:10:57.221507   74203 cri.go:89] found id: ""
	I0731 18:10:57.221531   74203 logs.go:276] 0 containers: []
	W0731 18:10:57.221540   74203 logs.go:278] No container was found matching "etcd"
	I0731 18:10:57.221547   74203 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 18:10:57.221614   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 18:10:57.257843   74203 cri.go:89] found id: ""
	I0731 18:10:57.257868   74203 logs.go:276] 0 containers: []
	W0731 18:10:57.257880   74203 logs.go:278] No container was found matching "coredns"
	I0731 18:10:57.257887   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 18:10:57.257939   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 18:10:57.292697   74203 cri.go:89] found id: ""
	I0731 18:10:57.292728   74203 logs.go:276] 0 containers: []
	W0731 18:10:57.292737   74203 logs.go:278] No container was found matching "kube-scheduler"
	I0731 18:10:57.292744   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 18:10:57.292802   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 18:10:57.325705   74203 cri.go:89] found id: ""
	I0731 18:10:57.325728   74203 logs.go:276] 0 containers: []
	W0731 18:10:57.325735   74203 logs.go:278] No container was found matching "kube-proxy"
	I0731 18:10:57.325740   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 18:10:57.325787   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 18:10:57.357436   74203 cri.go:89] found id: ""
	I0731 18:10:57.357463   74203 logs.go:276] 0 containers: []
	W0731 18:10:57.357471   74203 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 18:10:57.357477   74203 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 18:10:57.357534   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 18:10:57.388215   74203 cri.go:89] found id: ""
	I0731 18:10:57.388240   74203 logs.go:276] 0 containers: []
	W0731 18:10:57.388249   74203 logs.go:278] No container was found matching "kindnet"
	I0731 18:10:57.388256   74203 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 18:10:57.388315   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 18:10:57.419609   74203 cri.go:89] found id: ""
	I0731 18:10:57.419631   74203 logs.go:276] 0 containers: []
	W0731 18:10:57.419643   74203 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 18:10:57.419652   74203 logs.go:123] Gathering logs for CRI-O ...
	I0731 18:10:57.419663   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 18:10:57.497157   74203 logs.go:123] Gathering logs for container status ...
	I0731 18:10:57.497188   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 18:10:57.533512   74203 logs.go:123] Gathering logs for kubelet ...
	I0731 18:10:57.533552   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 18:10:57.587866   74203 logs.go:123] Gathering logs for dmesg ...
	I0731 18:10:57.587904   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 18:10:57.601191   74203 logs.go:123] Gathering logs for describe nodes ...
	I0731 18:10:57.601222   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 18:10:57.681899   74203 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 18:11:00.182160   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:11:00.195509   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 18:11:00.195598   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 18:11:00.230650   74203 cri.go:89] found id: ""
	I0731 18:11:00.230674   74203 logs.go:276] 0 containers: []
	W0731 18:11:00.230682   74203 logs.go:278] No container was found matching "kube-apiserver"
	I0731 18:11:00.230689   74203 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 18:11:00.230747   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 18:11:00.268629   74203 cri.go:89] found id: ""
	I0731 18:11:00.268656   74203 logs.go:276] 0 containers: []
	W0731 18:11:00.268666   74203 logs.go:278] No container was found matching "etcd"
	I0731 18:11:00.268672   74203 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 18:11:00.268740   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 18:11:00.301805   74203 cri.go:89] found id: ""
	I0731 18:11:00.301827   74203 logs.go:276] 0 containers: []
	W0731 18:11:00.301836   74203 logs.go:278] No container was found matching "coredns"
	I0731 18:11:00.301843   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 18:11:00.301901   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 18:11:00.333844   74203 cri.go:89] found id: ""
	I0731 18:11:00.333871   74203 logs.go:276] 0 containers: []
	W0731 18:11:00.333882   74203 logs.go:278] No container was found matching "kube-scheduler"
	I0731 18:11:00.333889   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 18:11:00.333949   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 18:10:56.779307   73696 pod_ready.go:102] pod "metrics-server-569cc877fc-64hp4" in "kube-system" namespace has status "Ready":"False"
	I0731 18:10:58.779655   73696 pod_ready.go:102] pod "metrics-server-569cc877fc-64hp4" in "kube-system" namespace has status "Ready":"False"
	I0731 18:10:58.643375   73479 pod_ready.go:102] pod "metrics-server-78fcd8795b-27pkr" in "kube-system" namespace has status "Ready":"False"
	I0731 18:11:00.643713   73479 pod_ready.go:102] pod "metrics-server-78fcd8795b-27pkr" in "kube-system" namespace has status "Ready":"False"
	I0731 18:11:01.272195   73800 pod_ready.go:102] pod "metrics-server-569cc877fc-fzxrw" in "kube-system" namespace has status "Ready":"False"
	I0731 18:11:03.272739   73800 pod_ready.go:102] pod "metrics-server-569cc877fc-fzxrw" in "kube-system" namespace has status "Ready":"False"
	I0731 18:11:00.366250   74203 cri.go:89] found id: ""
	I0731 18:11:00.366278   74203 logs.go:276] 0 containers: []
	W0731 18:11:00.366288   74203 logs.go:278] No container was found matching "kube-proxy"
	I0731 18:11:00.366295   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 18:11:00.366358   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 18:11:00.399301   74203 cri.go:89] found id: ""
	I0731 18:11:00.399325   74203 logs.go:276] 0 containers: []
	W0731 18:11:00.399335   74203 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 18:11:00.399342   74203 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 18:11:00.399405   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 18:11:00.432182   74203 cri.go:89] found id: ""
	I0731 18:11:00.432207   74203 logs.go:276] 0 containers: []
	W0731 18:11:00.432218   74203 logs.go:278] No container was found matching "kindnet"
	I0731 18:11:00.432224   74203 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 18:11:00.432284   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 18:11:00.465395   74203 cri.go:89] found id: ""
	I0731 18:11:00.465423   74203 logs.go:276] 0 containers: []
	W0731 18:11:00.465432   74203 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 18:11:00.465440   74203 logs.go:123] Gathering logs for kubelet ...
	I0731 18:11:00.465453   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 18:11:00.516042   74203 logs.go:123] Gathering logs for dmesg ...
	I0731 18:11:00.516077   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 18:11:00.528621   74203 logs.go:123] Gathering logs for describe nodes ...
	I0731 18:11:00.528647   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 18:11:00.600297   74203 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 18:11:00.600322   74203 logs.go:123] Gathering logs for CRI-O ...
	I0731 18:11:00.600339   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 18:11:00.680368   74203 logs.go:123] Gathering logs for container status ...
	I0731 18:11:00.680399   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 18:11:03.217684   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:11:03.230691   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 18:11:03.230752   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 18:11:03.264882   74203 cri.go:89] found id: ""
	I0731 18:11:03.264910   74203 logs.go:276] 0 containers: []
	W0731 18:11:03.264918   74203 logs.go:278] No container was found matching "kube-apiserver"
	I0731 18:11:03.264924   74203 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 18:11:03.264976   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 18:11:03.301608   74203 cri.go:89] found id: ""
	I0731 18:11:03.301733   74203 logs.go:276] 0 containers: []
	W0731 18:11:03.301754   74203 logs.go:278] No container was found matching "etcd"
	I0731 18:11:03.301765   74203 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 18:11:03.301838   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 18:11:03.335077   74203 cri.go:89] found id: ""
	I0731 18:11:03.335102   74203 logs.go:276] 0 containers: []
	W0731 18:11:03.335121   74203 logs.go:278] No container was found matching "coredns"
	I0731 18:11:03.335128   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 18:11:03.335196   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 18:11:03.370755   74203 cri.go:89] found id: ""
	I0731 18:11:03.370783   74203 logs.go:276] 0 containers: []
	W0731 18:11:03.370794   74203 logs.go:278] No container was found matching "kube-scheduler"
	I0731 18:11:03.370802   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 18:11:03.370862   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 18:11:03.403004   74203 cri.go:89] found id: ""
	I0731 18:11:03.403035   74203 logs.go:276] 0 containers: []
	W0731 18:11:03.403045   74203 logs.go:278] No container was found matching "kube-proxy"
	I0731 18:11:03.403052   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 18:11:03.403125   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 18:11:03.437169   74203 cri.go:89] found id: ""
	I0731 18:11:03.437209   74203 logs.go:276] 0 containers: []
	W0731 18:11:03.437219   74203 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 18:11:03.437235   74203 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 18:11:03.437296   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 18:11:03.469956   74203 cri.go:89] found id: ""
	I0731 18:11:03.469981   74203 logs.go:276] 0 containers: []
	W0731 18:11:03.469991   74203 logs.go:278] No container was found matching "kindnet"
	I0731 18:11:03.469998   74203 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 18:11:03.470056   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 18:11:03.503850   74203 cri.go:89] found id: ""
	I0731 18:11:03.503878   74203 logs.go:276] 0 containers: []
	W0731 18:11:03.503888   74203 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 18:11:03.503898   74203 logs.go:123] Gathering logs for kubelet ...
	I0731 18:11:03.503913   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 18:11:03.554993   74203 logs.go:123] Gathering logs for dmesg ...
	I0731 18:11:03.555036   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 18:11:03.567898   74203 logs.go:123] Gathering logs for describe nodes ...
	I0731 18:11:03.567925   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 18:11:03.630151   74203 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 18:11:03.630188   74203 logs.go:123] Gathering logs for CRI-O ...
	I0731 18:11:03.630207   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 18:11:03.708552   74203 logs.go:123] Gathering logs for container status ...
	I0731 18:11:03.708596   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 18:11:01.278830   73696 pod_ready.go:102] pod "metrics-server-569cc877fc-64hp4" in "kube-system" namespace has status "Ready":"False"
	I0731 18:11:03.278880   73696 pod_ready.go:102] pod "metrics-server-569cc877fc-64hp4" in "kube-system" namespace has status "Ready":"False"
	I0731 18:11:05.778296   73696 pod_ready.go:102] pod "metrics-server-569cc877fc-64hp4" in "kube-system" namespace has status "Ready":"False"
	I0731 18:11:03.143289   73479 pod_ready.go:102] pod "metrics-server-78fcd8795b-27pkr" in "kube-system" namespace has status "Ready":"False"
	I0731 18:11:05.152015   73479 pod_ready.go:102] pod "metrics-server-78fcd8795b-27pkr" in "kube-system" namespace has status "Ready":"False"
	I0731 18:11:05.771810   73800 pod_ready.go:102] pod "metrics-server-569cc877fc-fzxrw" in "kube-system" namespace has status "Ready":"False"
	I0731 18:11:08.271205   73800 pod_ready.go:102] pod "metrics-server-569cc877fc-fzxrw" in "kube-system" namespace has status "Ready":"False"
	I0731 18:11:06.249728   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:11:06.261923   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 18:11:06.261998   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 18:11:06.296249   74203 cri.go:89] found id: ""
	I0731 18:11:06.296276   74203 logs.go:276] 0 containers: []
	W0731 18:11:06.296286   74203 logs.go:278] No container was found matching "kube-apiserver"
	I0731 18:11:06.296292   74203 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 18:11:06.296356   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 18:11:06.329355   74203 cri.go:89] found id: ""
	I0731 18:11:06.329381   74203 logs.go:276] 0 containers: []
	W0731 18:11:06.329389   74203 logs.go:278] No container was found matching "etcd"
	I0731 18:11:06.329395   74203 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 18:11:06.329443   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 18:11:06.362585   74203 cri.go:89] found id: ""
	I0731 18:11:06.362618   74203 logs.go:276] 0 containers: []
	W0731 18:11:06.362630   74203 logs.go:278] No container was found matching "coredns"
	I0731 18:11:06.362643   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 18:11:06.362704   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 18:11:06.396489   74203 cri.go:89] found id: ""
	I0731 18:11:06.396514   74203 logs.go:276] 0 containers: []
	W0731 18:11:06.396521   74203 logs.go:278] No container was found matching "kube-scheduler"
	I0731 18:11:06.396527   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 18:11:06.396590   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 18:11:06.428859   74203 cri.go:89] found id: ""
	I0731 18:11:06.428888   74203 logs.go:276] 0 containers: []
	W0731 18:11:06.428897   74203 logs.go:278] No container was found matching "kube-proxy"
	I0731 18:11:06.428903   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 18:11:06.428960   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 18:11:06.468817   74203 cri.go:89] found id: ""
	I0731 18:11:06.468846   74203 logs.go:276] 0 containers: []
	W0731 18:11:06.468856   74203 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 18:11:06.468864   74203 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 18:11:06.468924   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 18:11:06.499975   74203 cri.go:89] found id: ""
	I0731 18:11:06.500000   74203 logs.go:276] 0 containers: []
	W0731 18:11:06.500008   74203 logs.go:278] No container was found matching "kindnet"
	I0731 18:11:06.500013   74203 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 18:11:06.500067   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 18:11:06.537410   74203 cri.go:89] found id: ""
	I0731 18:11:06.537440   74203 logs.go:276] 0 containers: []
	W0731 18:11:06.537451   74203 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 18:11:06.537461   74203 logs.go:123] Gathering logs for kubelet ...
	I0731 18:11:06.537476   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 18:11:06.589664   74203 logs.go:123] Gathering logs for dmesg ...
	I0731 18:11:06.589709   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 18:11:06.603978   74203 logs.go:123] Gathering logs for describe nodes ...
	I0731 18:11:06.604005   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 18:11:06.673436   74203 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 18:11:06.673454   74203 logs.go:123] Gathering logs for CRI-O ...
	I0731 18:11:06.673465   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 18:11:06.757101   74203 logs.go:123] Gathering logs for container status ...
	I0731 18:11:06.757143   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 18:11:09.299562   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:11:09.311910   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 18:11:09.311971   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 18:11:09.346517   74203 cri.go:89] found id: ""
	I0731 18:11:09.346545   74203 logs.go:276] 0 containers: []
	W0731 18:11:09.346555   74203 logs.go:278] No container was found matching "kube-apiserver"
	I0731 18:11:09.346562   74203 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 18:11:09.346634   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 18:11:09.377688   74203 cri.go:89] found id: ""
	I0731 18:11:09.377713   74203 logs.go:276] 0 containers: []
	W0731 18:11:09.377720   74203 logs.go:278] No container was found matching "etcd"
	I0731 18:11:09.377726   74203 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 18:11:09.377787   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 18:11:09.412149   74203 cri.go:89] found id: ""
	I0731 18:11:09.412176   74203 logs.go:276] 0 containers: []
	W0731 18:11:09.412186   74203 logs.go:278] No container was found matching "coredns"
	I0731 18:11:09.412193   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 18:11:09.412259   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 18:11:09.444134   74203 cri.go:89] found id: ""
	I0731 18:11:09.444162   74203 logs.go:276] 0 containers: []
	W0731 18:11:09.444172   74203 logs.go:278] No container was found matching "kube-scheduler"
	I0731 18:11:09.444178   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 18:11:09.444233   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 18:11:09.481407   74203 cri.go:89] found id: ""
	I0731 18:11:09.481436   74203 logs.go:276] 0 containers: []
	W0731 18:11:09.481447   74203 logs.go:278] No container was found matching "kube-proxy"
	I0731 18:11:09.481453   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 18:11:09.481513   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 18:11:09.514926   74203 cri.go:89] found id: ""
	I0731 18:11:09.514950   74203 logs.go:276] 0 containers: []
	W0731 18:11:09.514967   74203 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 18:11:09.514974   74203 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 18:11:09.515036   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 18:11:09.547253   74203 cri.go:89] found id: ""
	I0731 18:11:09.547278   74203 logs.go:276] 0 containers: []
	W0731 18:11:09.547285   74203 logs.go:278] No container was found matching "kindnet"
	I0731 18:11:09.547291   74203 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 18:11:09.547376   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 18:11:09.587585   74203 cri.go:89] found id: ""
	I0731 18:11:09.587614   74203 logs.go:276] 0 containers: []
	W0731 18:11:09.587622   74203 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 18:11:09.587632   74203 logs.go:123] Gathering logs for kubelet ...
	I0731 18:11:09.587646   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 18:11:09.642024   74203 logs.go:123] Gathering logs for dmesg ...
	I0731 18:11:09.642057   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 18:11:09.655244   74203 logs.go:123] Gathering logs for describe nodes ...
	I0731 18:11:09.655270   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 18:11:09.721446   74203 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 18:11:09.721474   74203 logs.go:123] Gathering logs for CRI-O ...
	I0731 18:11:09.721489   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 18:11:09.803315   74203 logs.go:123] Gathering logs for container status ...
	I0731 18:11:09.803349   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 18:11:07.779195   73696 pod_ready.go:102] pod "metrics-server-569cc877fc-64hp4" in "kube-system" namespace has status "Ready":"False"
	I0731 18:11:10.278028   73696 pod_ready.go:102] pod "metrics-server-569cc877fc-64hp4" in "kube-system" namespace has status "Ready":"False"
	I0731 18:11:07.643242   73479 pod_ready.go:102] pod "metrics-server-78fcd8795b-27pkr" in "kube-system" namespace has status "Ready":"False"
	I0731 18:11:10.143895   73479 pod_ready.go:102] pod "metrics-server-78fcd8795b-27pkr" in "kube-system" namespace has status "Ready":"False"
	I0731 18:11:10.271515   73800 pod_ready.go:102] pod "metrics-server-569cc877fc-fzxrw" in "kube-system" namespace has status "Ready":"False"
	I0731 18:11:12.771322   73800 pod_ready.go:102] pod "metrics-server-569cc877fc-fzxrw" in "kube-system" namespace has status "Ready":"False"
	I0731 18:11:12.344355   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:11:12.357122   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 18:11:12.357194   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 18:11:12.392237   74203 cri.go:89] found id: ""
	I0731 18:11:12.392258   74203 logs.go:276] 0 containers: []
	W0731 18:11:12.392267   74203 logs.go:278] No container was found matching "kube-apiserver"
	I0731 18:11:12.392272   74203 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 18:11:12.392339   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 18:11:12.424490   74203 cri.go:89] found id: ""
	I0731 18:11:12.424514   74203 logs.go:276] 0 containers: []
	W0731 18:11:12.424523   74203 logs.go:278] No container was found matching "etcd"
	I0731 18:11:12.424529   74203 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 18:11:12.424587   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 18:11:12.458438   74203 cri.go:89] found id: ""
	I0731 18:11:12.458467   74203 logs.go:276] 0 containers: []
	W0731 18:11:12.458477   74203 logs.go:278] No container was found matching "coredns"
	I0731 18:11:12.458483   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 18:11:12.458545   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 18:11:12.495343   74203 cri.go:89] found id: ""
	I0731 18:11:12.495371   74203 logs.go:276] 0 containers: []
	W0731 18:11:12.495383   74203 logs.go:278] No container was found matching "kube-scheduler"
	I0731 18:11:12.495391   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 18:11:12.495455   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 18:11:12.527285   74203 cri.go:89] found id: ""
	I0731 18:11:12.527314   74203 logs.go:276] 0 containers: []
	W0731 18:11:12.527324   74203 logs.go:278] No container was found matching "kube-proxy"
	I0731 18:11:12.527334   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 18:11:12.527393   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 18:11:12.560341   74203 cri.go:89] found id: ""
	I0731 18:11:12.560369   74203 logs.go:276] 0 containers: []
	W0731 18:11:12.560379   74203 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 18:11:12.560387   74203 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 18:11:12.560444   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 18:11:12.595084   74203 cri.go:89] found id: ""
	I0731 18:11:12.595120   74203 logs.go:276] 0 containers: []
	W0731 18:11:12.595133   74203 logs.go:278] No container was found matching "kindnet"
	I0731 18:11:12.595141   74203 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 18:11:12.595215   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 18:11:12.630666   74203 cri.go:89] found id: ""
	I0731 18:11:12.630692   74203 logs.go:276] 0 containers: []
	W0731 18:11:12.630702   74203 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 18:11:12.630711   74203 logs.go:123] Gathering logs for kubelet ...
	I0731 18:11:12.630727   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 18:11:12.683588   74203 logs.go:123] Gathering logs for dmesg ...
	I0731 18:11:12.683620   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 18:11:12.696899   74203 logs.go:123] Gathering logs for describe nodes ...
	I0731 18:11:12.696925   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 18:11:12.757815   74203 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 18:11:12.757837   74203 logs.go:123] Gathering logs for CRI-O ...
	I0731 18:11:12.757870   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 18:11:12.834888   74203 logs.go:123] Gathering logs for container status ...
	I0731 18:11:12.834927   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 18:11:12.278464   73696 pod_ready.go:102] pod "metrics-server-569cc877fc-64hp4" in "kube-system" namespace has status "Ready":"False"
	I0731 18:11:14.279031   73696 pod_ready.go:102] pod "metrics-server-569cc877fc-64hp4" in "kube-system" namespace has status "Ready":"False"
	I0731 18:11:12.643960   73479 pod_ready.go:102] pod "metrics-server-78fcd8795b-27pkr" in "kube-system" namespace has status "Ready":"False"
	I0731 18:11:15.142811   73479 pod_ready.go:102] pod "metrics-server-78fcd8795b-27pkr" in "kube-system" namespace has status "Ready":"False"
	I0731 18:11:14.771367   73800 pod_ready.go:102] pod "metrics-server-569cc877fc-fzxrw" in "kube-system" namespace has status "Ready":"False"
	I0731 18:11:16.772010   73800 pod_ready.go:102] pod "metrics-server-569cc877fc-fzxrw" in "kube-system" namespace has status "Ready":"False"
	I0731 18:11:19.271857   73800 pod_ready.go:102] pod "metrics-server-569cc877fc-fzxrw" in "kube-system" namespace has status "Ready":"False"
	I0731 18:11:15.372797   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:11:15.386268   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 18:11:15.386356   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 18:11:15.420446   74203 cri.go:89] found id: ""
	I0731 18:11:15.420477   74203 logs.go:276] 0 containers: []
	W0731 18:11:15.420488   74203 logs.go:278] No container was found matching "kube-apiserver"
	I0731 18:11:15.420497   74203 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 18:11:15.420556   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 18:11:15.456092   74203 cri.go:89] found id: ""
	I0731 18:11:15.456118   74203 logs.go:276] 0 containers: []
	W0731 18:11:15.456129   74203 logs.go:278] No container was found matching "etcd"
	I0731 18:11:15.456136   74203 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 18:11:15.456194   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 18:11:15.488277   74203 cri.go:89] found id: ""
	I0731 18:11:15.488304   74203 logs.go:276] 0 containers: []
	W0731 18:11:15.488316   74203 logs.go:278] No container was found matching "coredns"
	I0731 18:11:15.488323   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 18:11:15.488384   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 18:11:15.520701   74203 cri.go:89] found id: ""
	I0731 18:11:15.520730   74203 logs.go:276] 0 containers: []
	W0731 18:11:15.520741   74203 logs.go:278] No container was found matching "kube-scheduler"
	I0731 18:11:15.520749   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 18:11:15.520818   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 18:11:15.552831   74203 cri.go:89] found id: ""
	I0731 18:11:15.552854   74203 logs.go:276] 0 containers: []
	W0731 18:11:15.552862   74203 logs.go:278] No container was found matching "kube-proxy"
	I0731 18:11:15.552867   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 18:11:15.552920   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 18:11:15.589161   74203 cri.go:89] found id: ""
	I0731 18:11:15.589191   74203 logs.go:276] 0 containers: []
	W0731 18:11:15.589203   74203 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 18:11:15.589210   74203 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 18:11:15.589274   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 18:11:15.622501   74203 cri.go:89] found id: ""
	I0731 18:11:15.622532   74203 logs.go:276] 0 containers: []
	W0731 18:11:15.622544   74203 logs.go:278] No container was found matching "kindnet"
	I0731 18:11:15.622552   74203 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 18:11:15.622611   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 18:11:15.654772   74203 cri.go:89] found id: ""
	I0731 18:11:15.654801   74203 logs.go:276] 0 containers: []
	W0731 18:11:15.654815   74203 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 18:11:15.654826   74203 logs.go:123] Gathering logs for kubelet ...
	I0731 18:11:15.654843   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 18:11:15.703103   74203 logs.go:123] Gathering logs for dmesg ...
	I0731 18:11:15.703148   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 18:11:15.716620   74203 logs.go:123] Gathering logs for describe nodes ...
	I0731 18:11:15.716645   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 18:11:15.783391   74203 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 18:11:15.783416   74203 logs.go:123] Gathering logs for CRI-O ...
	I0731 18:11:15.783431   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 18:11:15.857462   74203 logs.go:123] Gathering logs for container status ...
	I0731 18:11:15.857495   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 18:11:18.394223   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:11:18.407297   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 18:11:18.407374   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 18:11:18.439542   74203 cri.go:89] found id: ""
	I0731 18:11:18.439564   74203 logs.go:276] 0 containers: []
	W0731 18:11:18.439572   74203 logs.go:278] No container was found matching "kube-apiserver"
	I0731 18:11:18.439578   74203 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 18:11:18.439625   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 18:11:18.471838   74203 cri.go:89] found id: ""
	I0731 18:11:18.471863   74203 logs.go:276] 0 containers: []
	W0731 18:11:18.471873   74203 logs.go:278] No container was found matching "etcd"
	I0731 18:11:18.471883   74203 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 18:11:18.471943   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 18:11:18.505325   74203 cri.go:89] found id: ""
	I0731 18:11:18.505355   74203 logs.go:276] 0 containers: []
	W0731 18:11:18.505365   74203 logs.go:278] No container was found matching "coredns"
	I0731 18:11:18.505372   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 18:11:18.505432   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 18:11:18.536155   74203 cri.go:89] found id: ""
	I0731 18:11:18.536180   74203 logs.go:276] 0 containers: []
	W0731 18:11:18.536189   74203 logs.go:278] No container was found matching "kube-scheduler"
	I0731 18:11:18.536194   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 18:11:18.536241   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 18:11:18.569301   74203 cri.go:89] found id: ""
	I0731 18:11:18.569329   74203 logs.go:276] 0 containers: []
	W0731 18:11:18.569339   74203 logs.go:278] No container was found matching "kube-proxy"
	I0731 18:11:18.569344   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 18:11:18.569398   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 18:11:18.603053   74203 cri.go:89] found id: ""
	I0731 18:11:18.603079   74203 logs.go:276] 0 containers: []
	W0731 18:11:18.603087   74203 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 18:11:18.603092   74203 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 18:11:18.603170   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 18:11:18.636259   74203 cri.go:89] found id: ""
	I0731 18:11:18.636287   74203 logs.go:276] 0 containers: []
	W0731 18:11:18.636298   74203 logs.go:278] No container was found matching "kindnet"
	I0731 18:11:18.636305   74203 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 18:11:18.636361   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 18:11:18.667839   74203 cri.go:89] found id: ""
	I0731 18:11:18.667863   74203 logs.go:276] 0 containers: []
	W0731 18:11:18.667873   74203 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 18:11:18.667883   74203 logs.go:123] Gathering logs for dmesg ...
	I0731 18:11:18.667897   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 18:11:18.681005   74203 logs.go:123] Gathering logs for describe nodes ...
	I0731 18:11:18.681030   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 18:11:18.747793   74203 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 18:11:18.747875   74203 logs.go:123] Gathering logs for CRI-O ...
	I0731 18:11:18.747892   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 18:11:18.828970   74203 logs.go:123] Gathering logs for container status ...
	I0731 18:11:18.829005   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 18:11:18.866724   74203 logs.go:123] Gathering logs for kubelet ...
	I0731 18:11:18.866749   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 18:11:16.279368   73696 pod_ready.go:102] pod "metrics-server-569cc877fc-64hp4" in "kube-system" namespace has status "Ready":"False"
	I0731 18:11:18.778730   73696 pod_ready.go:102] pod "metrics-server-569cc877fc-64hp4" in "kube-system" namespace has status "Ready":"False"
	I0731 18:11:20.779465   73696 pod_ready.go:102] pod "metrics-server-569cc877fc-64hp4" in "kube-system" namespace has status "Ready":"False"
	I0731 18:11:17.144041   73479 pod_ready.go:102] pod "metrics-server-78fcd8795b-27pkr" in "kube-system" namespace has status "Ready":"False"
	I0731 18:11:19.645356   73479 pod_ready.go:102] pod "metrics-server-78fcd8795b-27pkr" in "kube-system" namespace has status "Ready":"False"
	I0731 18:11:21.272651   73800 pod_ready.go:102] pod "metrics-server-569cc877fc-fzxrw" in "kube-system" namespace has status "Ready":"False"
	I0731 18:11:23.771240   73800 pod_ready.go:102] pod "metrics-server-569cc877fc-fzxrw" in "kube-system" namespace has status "Ready":"False"
	I0731 18:11:21.416598   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:11:21.431968   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 18:11:21.432027   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 18:11:21.469670   74203 cri.go:89] found id: ""
	I0731 18:11:21.469696   74203 logs.go:276] 0 containers: []
	W0731 18:11:21.469703   74203 logs.go:278] No container was found matching "kube-apiserver"
	I0731 18:11:21.469709   74203 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 18:11:21.469762   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 18:11:21.508461   74203 cri.go:89] found id: ""
	I0731 18:11:21.508490   74203 logs.go:276] 0 containers: []
	W0731 18:11:21.508500   74203 logs.go:278] No container was found matching "etcd"
	I0731 18:11:21.508506   74203 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 18:11:21.508570   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 18:11:21.548101   74203 cri.go:89] found id: ""
	I0731 18:11:21.548127   74203 logs.go:276] 0 containers: []
	W0731 18:11:21.548136   74203 logs.go:278] No container was found matching "coredns"
	I0731 18:11:21.548142   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 18:11:21.548204   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 18:11:21.582617   74203 cri.go:89] found id: ""
	I0731 18:11:21.582646   74203 logs.go:276] 0 containers: []
	W0731 18:11:21.582653   74203 logs.go:278] No container was found matching "kube-scheduler"
	I0731 18:11:21.582659   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 18:11:21.582712   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 18:11:21.614185   74203 cri.go:89] found id: ""
	I0731 18:11:21.614210   74203 logs.go:276] 0 containers: []
	W0731 18:11:21.614218   74203 logs.go:278] No container was found matching "kube-proxy"
	I0731 18:11:21.614223   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 18:11:21.614278   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 18:11:21.647596   74203 cri.go:89] found id: ""
	I0731 18:11:21.647619   74203 logs.go:276] 0 containers: []
	W0731 18:11:21.647629   74203 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 18:11:21.647636   74203 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 18:11:21.647693   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 18:11:21.680106   74203 cri.go:89] found id: ""
	I0731 18:11:21.680132   74203 logs.go:276] 0 containers: []
	W0731 18:11:21.680142   74203 logs.go:278] No container was found matching "kindnet"
	I0731 18:11:21.680149   74203 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 18:11:21.680208   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 18:11:21.714708   74203 cri.go:89] found id: ""
	I0731 18:11:21.714733   74203 logs.go:276] 0 containers: []
	W0731 18:11:21.714742   74203 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 18:11:21.714754   74203 logs.go:123] Gathering logs for describe nodes ...
	I0731 18:11:21.714779   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 18:11:21.783425   74203 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 18:11:21.783448   74203 logs.go:123] Gathering logs for CRI-O ...
	I0731 18:11:21.783462   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 18:11:21.859943   74203 logs.go:123] Gathering logs for container status ...
	I0731 18:11:21.859980   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 18:11:21.898374   74203 logs.go:123] Gathering logs for kubelet ...
	I0731 18:11:21.898405   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 18:11:21.945753   74203 logs.go:123] Gathering logs for dmesg ...
	I0731 18:11:21.945784   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 18:11:24.459481   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:11:24.471376   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 18:11:24.471435   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 18:11:24.506474   74203 cri.go:89] found id: ""
	I0731 18:11:24.506502   74203 logs.go:276] 0 containers: []
	W0731 18:11:24.506511   74203 logs.go:278] No container was found matching "kube-apiserver"
	I0731 18:11:24.506516   74203 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 18:11:24.506572   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 18:11:24.547298   74203 cri.go:89] found id: ""
	I0731 18:11:24.547324   74203 logs.go:276] 0 containers: []
	W0731 18:11:24.547332   74203 logs.go:278] No container was found matching "etcd"
	I0731 18:11:24.547337   74203 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 18:11:24.547402   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 18:11:24.579912   74203 cri.go:89] found id: ""
	I0731 18:11:24.579944   74203 logs.go:276] 0 containers: []
	W0731 18:11:24.579955   74203 logs.go:278] No container was found matching "coredns"
	I0731 18:11:24.579963   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 18:11:24.580032   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 18:11:24.613754   74203 cri.go:89] found id: ""
	I0731 18:11:24.613782   74203 logs.go:276] 0 containers: []
	W0731 18:11:24.613791   74203 logs.go:278] No container was found matching "kube-scheduler"
	I0731 18:11:24.613799   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 18:11:24.613859   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 18:11:24.649782   74203 cri.go:89] found id: ""
	I0731 18:11:24.649811   74203 logs.go:276] 0 containers: []
	W0731 18:11:24.649822   74203 logs.go:278] No container was found matching "kube-proxy"
	I0731 18:11:24.649829   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 18:11:24.649888   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 18:11:24.689232   74203 cri.go:89] found id: ""
	I0731 18:11:24.689264   74203 logs.go:276] 0 containers: []
	W0731 18:11:24.689274   74203 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 18:11:24.689283   74203 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 18:11:24.689361   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 18:11:24.727861   74203 cri.go:89] found id: ""
	I0731 18:11:24.727894   74203 logs.go:276] 0 containers: []
	W0731 18:11:24.727902   74203 logs.go:278] No container was found matching "kindnet"
	I0731 18:11:24.727924   74203 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 18:11:24.727983   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 18:11:24.763839   74203 cri.go:89] found id: ""
	I0731 18:11:24.763866   74203 logs.go:276] 0 containers: []
	W0731 18:11:24.763876   74203 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 18:11:24.763886   74203 logs.go:123] Gathering logs for CRI-O ...
	I0731 18:11:24.763901   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 18:11:24.841090   74203 logs.go:123] Gathering logs for container status ...
	I0731 18:11:24.841131   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 18:11:24.877206   74203 logs.go:123] Gathering logs for kubelet ...
	I0731 18:11:24.877231   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 18:11:24.926149   74203 logs.go:123] Gathering logs for dmesg ...
	I0731 18:11:24.926180   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 18:11:24.938795   74203 logs.go:123] Gathering logs for describe nodes ...
	I0731 18:11:24.938822   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 18:11:25.008349   74203 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 18:11:23.279256   73696 pod_ready.go:102] pod "metrics-server-569cc877fc-64hp4" in "kube-system" namespace has status "Ready":"False"
	I0731 18:11:25.778644   73696 pod_ready.go:102] pod "metrics-server-569cc877fc-64hp4" in "kube-system" namespace has status "Ready":"False"
	I0731 18:11:22.143312   73479 pod_ready.go:102] pod "metrics-server-78fcd8795b-27pkr" in "kube-system" namespace has status "Ready":"False"
	I0731 18:11:24.144259   73479 pod_ready.go:102] pod "metrics-server-78fcd8795b-27pkr" in "kube-system" namespace has status "Ready":"False"
	I0731 18:11:26.144310   73479 pod_ready.go:102] pod "metrics-server-78fcd8795b-27pkr" in "kube-system" namespace has status "Ready":"False"
	I0731 18:11:25.771403   73800 pod_ready.go:102] pod "metrics-server-569cc877fc-fzxrw" in "kube-system" namespace has status "Ready":"False"
	I0731 18:11:28.270613   73800 pod_ready.go:102] pod "metrics-server-569cc877fc-fzxrw" in "kube-system" namespace has status "Ready":"False"
	I0731 18:11:27.509192   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:11:27.522506   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 18:11:27.522582   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 18:11:27.557915   74203 cri.go:89] found id: ""
	I0731 18:11:27.557943   74203 logs.go:276] 0 containers: []
	W0731 18:11:27.557954   74203 logs.go:278] No container was found matching "kube-apiserver"
	I0731 18:11:27.557962   74203 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 18:11:27.558019   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 18:11:27.594295   74203 cri.go:89] found id: ""
	I0731 18:11:27.594322   74203 logs.go:276] 0 containers: []
	W0731 18:11:27.594332   74203 logs.go:278] No container was found matching "etcd"
	I0731 18:11:27.594348   74203 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 18:11:27.594410   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 18:11:27.626830   74203 cri.go:89] found id: ""
	I0731 18:11:27.626857   74203 logs.go:276] 0 containers: []
	W0731 18:11:27.626868   74203 logs.go:278] No container was found matching "coredns"
	I0731 18:11:27.626875   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 18:11:27.626964   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 18:11:27.662062   74203 cri.go:89] found id: ""
	I0731 18:11:27.662084   74203 logs.go:276] 0 containers: []
	W0731 18:11:27.662092   74203 logs.go:278] No container was found matching "kube-scheduler"
	I0731 18:11:27.662099   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 18:11:27.662158   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 18:11:27.695686   74203 cri.go:89] found id: ""
	I0731 18:11:27.695715   74203 logs.go:276] 0 containers: []
	W0731 18:11:27.695727   74203 logs.go:278] No container was found matching "kube-proxy"
	I0731 18:11:27.695735   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 18:11:27.695785   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 18:11:27.729444   74203 cri.go:89] found id: ""
	I0731 18:11:27.729467   74203 logs.go:276] 0 containers: []
	W0731 18:11:27.729475   74203 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 18:11:27.729481   74203 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 18:11:27.729531   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 18:11:27.761889   74203 cri.go:89] found id: ""
	I0731 18:11:27.761916   74203 logs.go:276] 0 containers: []
	W0731 18:11:27.761926   74203 logs.go:278] No container was found matching "kindnet"
	I0731 18:11:27.761934   74203 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 18:11:27.761995   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 18:11:27.796178   74203 cri.go:89] found id: ""
	I0731 18:11:27.796199   74203 logs.go:276] 0 containers: []
	W0731 18:11:27.796206   74203 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 18:11:27.796214   74203 logs.go:123] Gathering logs for kubelet ...
	I0731 18:11:27.796227   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 18:11:27.849613   74203 logs.go:123] Gathering logs for dmesg ...
	I0731 18:11:27.849650   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 18:11:27.862892   74203 logs.go:123] Gathering logs for describe nodes ...
	I0731 18:11:27.862923   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 18:11:27.928691   74203 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 18:11:27.928717   74203 logs.go:123] Gathering logs for CRI-O ...
	I0731 18:11:27.928740   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 18:11:28.006310   74203 logs.go:123] Gathering logs for container status ...
	I0731 18:11:28.006340   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 18:11:27.779125   73696 pod_ready.go:102] pod "metrics-server-569cc877fc-64hp4" in "kube-system" namespace has status "Ready":"False"
	I0731 18:11:30.279252   73696 pod_ready.go:102] pod "metrics-server-569cc877fc-64hp4" in "kube-system" namespace has status "Ready":"False"
	I0731 18:11:28.643172   73479 pod_ready.go:102] pod "metrics-server-78fcd8795b-27pkr" in "kube-system" namespace has status "Ready":"False"
	I0731 18:11:30.645474   73479 pod_ready.go:102] pod "metrics-server-78fcd8795b-27pkr" in "kube-system" namespace has status "Ready":"False"
	I0731 18:11:30.271016   73800 pod_ready.go:102] pod "metrics-server-569cc877fc-fzxrw" in "kube-system" namespace has status "Ready":"False"
	I0731 18:11:32.771684   73800 pod_ready.go:102] pod "metrics-server-569cc877fc-fzxrw" in "kube-system" namespace has status "Ready":"False"
	I0731 18:11:30.543065   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:11:30.555951   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 18:11:30.556013   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 18:11:30.597411   74203 cri.go:89] found id: ""
	I0731 18:11:30.597440   74203 logs.go:276] 0 containers: []
	W0731 18:11:30.597451   74203 logs.go:278] No container was found matching "kube-apiserver"
	I0731 18:11:30.597458   74203 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 18:11:30.597518   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 18:11:30.629836   74203 cri.go:89] found id: ""
	I0731 18:11:30.629866   74203 logs.go:276] 0 containers: []
	W0731 18:11:30.629873   74203 logs.go:278] No container was found matching "etcd"
	I0731 18:11:30.629878   74203 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 18:11:30.629932   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 18:11:30.667402   74203 cri.go:89] found id: ""
	I0731 18:11:30.667432   74203 logs.go:276] 0 containers: []
	W0731 18:11:30.667443   74203 logs.go:278] No container was found matching "coredns"
	I0731 18:11:30.667450   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 18:11:30.667513   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 18:11:30.701677   74203 cri.go:89] found id: ""
	I0731 18:11:30.701708   74203 logs.go:276] 0 containers: []
	W0731 18:11:30.701716   74203 logs.go:278] No container was found matching "kube-scheduler"
	I0731 18:11:30.701722   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 18:11:30.701773   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 18:11:30.736685   74203 cri.go:89] found id: ""
	I0731 18:11:30.736714   74203 logs.go:276] 0 containers: []
	W0731 18:11:30.736721   74203 logs.go:278] No container was found matching "kube-proxy"
	I0731 18:11:30.736736   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 18:11:30.736786   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 18:11:30.771501   74203 cri.go:89] found id: ""
	I0731 18:11:30.771526   74203 logs.go:276] 0 containers: []
	W0731 18:11:30.771543   74203 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 18:11:30.771549   74203 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 18:11:30.771597   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 18:11:30.805878   74203 cri.go:89] found id: ""
	I0731 18:11:30.805902   74203 logs.go:276] 0 containers: []
	W0731 18:11:30.805911   74203 logs.go:278] No container was found matching "kindnet"
	I0731 18:11:30.805921   74203 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 18:11:30.805966   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 18:11:30.839001   74203 cri.go:89] found id: ""
	I0731 18:11:30.839027   74203 logs.go:276] 0 containers: []
	W0731 18:11:30.839038   74203 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 18:11:30.839048   74203 logs.go:123] Gathering logs for kubelet ...
	I0731 18:11:30.839062   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 18:11:30.893357   74203 logs.go:123] Gathering logs for dmesg ...
	I0731 18:11:30.893387   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 18:11:30.907222   74203 logs.go:123] Gathering logs for describe nodes ...
	I0731 18:11:30.907248   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 18:11:30.985626   74203 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 18:11:30.985648   74203 logs.go:123] Gathering logs for CRI-O ...
	I0731 18:11:30.985668   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 18:11:31.067900   74203 logs.go:123] Gathering logs for container status ...
	I0731 18:11:31.067948   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 18:11:33.607259   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:11:33.621596   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 18:11:33.621656   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 18:11:33.663616   74203 cri.go:89] found id: ""
	I0731 18:11:33.663642   74203 logs.go:276] 0 containers: []
	W0731 18:11:33.663649   74203 logs.go:278] No container was found matching "kube-apiserver"
	I0731 18:11:33.663655   74203 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 18:11:33.663704   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 18:11:33.702133   74203 cri.go:89] found id: ""
	I0731 18:11:33.702159   74203 logs.go:276] 0 containers: []
	W0731 18:11:33.702167   74203 logs.go:278] No container was found matching "etcd"
	I0731 18:11:33.702173   74203 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 18:11:33.702226   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 18:11:33.733730   74203 cri.go:89] found id: ""
	I0731 18:11:33.733752   74203 logs.go:276] 0 containers: []
	W0731 18:11:33.733760   74203 logs.go:278] No container was found matching "coredns"
	I0731 18:11:33.733765   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 18:11:33.733813   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 18:11:33.765036   74203 cri.go:89] found id: ""
	I0731 18:11:33.765064   74203 logs.go:276] 0 containers: []
	W0731 18:11:33.765074   74203 logs.go:278] No container was found matching "kube-scheduler"
	I0731 18:11:33.765080   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 18:11:33.765128   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 18:11:33.799604   74203 cri.go:89] found id: ""
	I0731 18:11:33.799630   74203 logs.go:276] 0 containers: []
	W0731 18:11:33.799640   74203 logs.go:278] No container was found matching "kube-proxy"
	I0731 18:11:33.799648   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 18:11:33.799716   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 18:11:33.831434   74203 cri.go:89] found id: ""
	I0731 18:11:33.831455   74203 logs.go:276] 0 containers: []
	W0731 18:11:33.831464   74203 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 18:11:33.831469   74203 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 18:11:33.831514   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 18:11:33.862975   74203 cri.go:89] found id: ""
	I0731 18:11:33.863004   74203 logs.go:276] 0 containers: []
	W0731 18:11:33.863014   74203 logs.go:278] No container was found matching "kindnet"
	I0731 18:11:33.863022   74203 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 18:11:33.863090   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 18:11:33.895674   74203 cri.go:89] found id: ""
	I0731 18:11:33.895704   74203 logs.go:276] 0 containers: []
	W0731 18:11:33.895714   74203 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 18:11:33.895723   74203 logs.go:123] Gathering logs for container status ...
	I0731 18:11:33.895737   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 18:11:33.931954   74203 logs.go:123] Gathering logs for kubelet ...
	I0731 18:11:33.931980   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 18:11:33.985353   74203 logs.go:123] Gathering logs for dmesg ...
	I0731 18:11:33.985385   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 18:11:33.997857   74203 logs.go:123] Gathering logs for describe nodes ...
	I0731 18:11:33.997882   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 18:11:34.060523   74203 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 18:11:34.060553   74203 logs.go:123] Gathering logs for CRI-O ...
	I0731 18:11:34.060575   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 18:11:32.778212   73696 pod_ready.go:102] pod "metrics-server-569cc877fc-64hp4" in "kube-system" namespace has status "Ready":"False"
	I0731 18:11:35.278655   73696 pod_ready.go:102] pod "metrics-server-569cc877fc-64hp4" in "kube-system" namespace has status "Ready":"False"
	I0731 18:11:33.151579   73479 pod_ready.go:102] pod "metrics-server-78fcd8795b-27pkr" in "kube-system" namespace has status "Ready":"False"
	I0731 18:11:35.643326   73479 pod_ready.go:102] pod "metrics-server-78fcd8795b-27pkr" in "kube-system" namespace has status "Ready":"False"
	I0731 18:11:34.771873   73800 pod_ready.go:102] pod "metrics-server-569cc877fc-fzxrw" in "kube-system" namespace has status "Ready":"False"
	I0731 18:11:36.772309   73800 pod_ready.go:102] pod "metrics-server-569cc877fc-fzxrw" in "kube-system" namespace has status "Ready":"False"
	I0731 18:11:39.271582   73800 pod_ready.go:102] pod "metrics-server-569cc877fc-fzxrw" in "kube-system" namespace has status "Ready":"False"
	I0731 18:11:36.643003   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:11:36.659306   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 18:11:36.659385   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 18:11:36.717097   74203 cri.go:89] found id: ""
	I0731 18:11:36.717129   74203 logs.go:276] 0 containers: []
	W0731 18:11:36.717141   74203 logs.go:278] No container was found matching "kube-apiserver"
	I0731 18:11:36.717149   74203 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 18:11:36.717212   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 18:11:36.750288   74203 cri.go:89] found id: ""
	I0731 18:11:36.750314   74203 logs.go:276] 0 containers: []
	W0731 18:11:36.750325   74203 logs.go:278] No container was found matching "etcd"
	I0731 18:11:36.750331   74203 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 18:11:36.750391   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 18:11:36.785272   74203 cri.go:89] found id: ""
	I0731 18:11:36.785296   74203 logs.go:276] 0 containers: []
	W0731 18:11:36.785304   74203 logs.go:278] No container was found matching "coredns"
	I0731 18:11:36.785310   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 18:11:36.785360   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 18:11:36.818927   74203 cri.go:89] found id: ""
	I0731 18:11:36.818953   74203 logs.go:276] 0 containers: []
	W0731 18:11:36.818965   74203 logs.go:278] No container was found matching "kube-scheduler"
	I0731 18:11:36.818972   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 18:11:36.819020   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 18:11:36.854562   74203 cri.go:89] found id: ""
	I0731 18:11:36.854593   74203 logs.go:276] 0 containers: []
	W0731 18:11:36.854602   74203 logs.go:278] No container was found matching "kube-proxy"
	I0731 18:11:36.854607   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 18:11:36.854670   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 18:11:36.887786   74203 cri.go:89] found id: ""
	I0731 18:11:36.887814   74203 logs.go:276] 0 containers: []
	W0731 18:11:36.887825   74203 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 18:11:36.887833   74203 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 18:11:36.887893   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 18:11:36.919418   74203 cri.go:89] found id: ""
	I0731 18:11:36.919446   74203 logs.go:276] 0 containers: []
	W0731 18:11:36.919457   74203 logs.go:278] No container was found matching "kindnet"
	I0731 18:11:36.919471   74203 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 18:11:36.919533   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 18:11:36.956934   74203 cri.go:89] found id: ""
	I0731 18:11:36.956957   74203 logs.go:276] 0 containers: []
	W0731 18:11:36.956964   74203 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 18:11:36.956971   74203 logs.go:123] Gathering logs for kubelet ...
	I0731 18:11:36.956989   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 18:11:37.003755   74203 logs.go:123] Gathering logs for dmesg ...
	I0731 18:11:37.003783   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 18:11:37.016977   74203 logs.go:123] Gathering logs for describe nodes ...
	I0731 18:11:37.017004   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 18:11:37.091617   74203 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 18:11:37.091646   74203 logs.go:123] Gathering logs for CRI-O ...
	I0731 18:11:37.091662   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 18:11:37.170870   74203 logs.go:123] Gathering logs for container status ...
	I0731 18:11:37.170903   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 18:11:39.714271   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:11:39.730306   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 18:11:39.730383   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 18:11:39.765368   74203 cri.go:89] found id: ""
	I0731 18:11:39.765399   74203 logs.go:276] 0 containers: []
	W0731 18:11:39.765407   74203 logs.go:278] No container was found matching "kube-apiserver"
	I0731 18:11:39.765412   74203 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 18:11:39.765471   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 18:11:39.800394   74203 cri.go:89] found id: ""
	I0731 18:11:39.800419   74203 logs.go:276] 0 containers: []
	W0731 18:11:39.800427   74203 logs.go:278] No container was found matching "etcd"
	I0731 18:11:39.800433   74203 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 18:11:39.800486   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 18:11:39.834861   74203 cri.go:89] found id: ""
	I0731 18:11:39.834889   74203 logs.go:276] 0 containers: []
	W0731 18:11:39.834898   74203 logs.go:278] No container was found matching "coredns"
	I0731 18:11:39.834903   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 18:11:39.834958   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 18:11:39.868108   74203 cri.go:89] found id: ""
	I0731 18:11:39.868132   74203 logs.go:276] 0 containers: []
	W0731 18:11:39.868141   74203 logs.go:278] No container was found matching "kube-scheduler"
	I0731 18:11:39.868146   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 18:11:39.868220   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 18:11:39.902097   74203 cri.go:89] found id: ""
	I0731 18:11:39.902120   74203 logs.go:276] 0 containers: []
	W0731 18:11:39.902128   74203 logs.go:278] No container was found matching "kube-proxy"
	I0731 18:11:39.902134   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 18:11:39.902184   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 18:11:39.933073   74203 cri.go:89] found id: ""
	I0731 18:11:39.933100   74203 logs.go:276] 0 containers: []
	W0731 18:11:39.933109   74203 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 18:11:39.933114   74203 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 18:11:39.933165   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 18:11:39.965748   74203 cri.go:89] found id: ""
	I0731 18:11:39.965775   74203 logs.go:276] 0 containers: []
	W0731 18:11:39.965785   74203 logs.go:278] No container was found matching "kindnet"
	I0731 18:11:39.965796   74203 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 18:11:39.965856   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 18:11:39.998164   74203 cri.go:89] found id: ""
	I0731 18:11:39.998189   74203 logs.go:276] 0 containers: []
	W0731 18:11:39.998197   74203 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 18:11:39.998205   74203 logs.go:123] Gathering logs for kubelet ...
	I0731 18:11:39.998222   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 18:11:40.049991   74203 logs.go:123] Gathering logs for dmesg ...
	I0731 18:11:40.050027   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 18:11:40.063676   74203 logs.go:123] Gathering logs for describe nodes ...
	I0731 18:11:40.063705   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 18:11:40.125855   74203 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 18:11:40.125880   74203 logs.go:123] Gathering logs for CRI-O ...
	I0731 18:11:40.125896   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 18:11:40.207937   74203 logs.go:123] Gathering logs for container status ...
	I0731 18:11:40.207970   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 18:11:37.778894   73696 pod_ready.go:102] pod "metrics-server-569cc877fc-64hp4" in "kube-system" namespace has status "Ready":"False"
	I0731 18:11:40.278489   73696 pod_ready.go:102] pod "metrics-server-569cc877fc-64hp4" in "kube-system" namespace has status "Ready":"False"
	I0731 18:11:37.643651   73479 pod_ready.go:102] pod "metrics-server-78fcd8795b-27pkr" in "kube-system" namespace has status "Ready":"False"
	I0731 18:11:40.144731   73479 pod_ready.go:102] pod "metrics-server-78fcd8795b-27pkr" in "kube-system" namespace has status "Ready":"False"
	I0731 18:11:41.271897   73800 pod_ready.go:102] pod "metrics-server-569cc877fc-fzxrw" in "kube-system" namespace has status "Ready":"False"
	I0731 18:11:43.771556   73800 pod_ready.go:102] pod "metrics-server-569cc877fc-fzxrw" in "kube-system" namespace has status "Ready":"False"
	I0731 18:11:42.746315   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:11:42.758998   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 18:11:42.759053   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 18:11:42.791921   74203 cri.go:89] found id: ""
	I0731 18:11:42.791946   74203 logs.go:276] 0 containers: []
	W0731 18:11:42.791954   74203 logs.go:278] No container was found matching "kube-apiserver"
	I0731 18:11:42.791958   74203 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 18:11:42.792004   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 18:11:42.822888   74203 cri.go:89] found id: ""
	I0731 18:11:42.822914   74203 logs.go:276] 0 containers: []
	W0731 18:11:42.822922   74203 logs.go:278] No container was found matching "etcd"
	I0731 18:11:42.822927   74203 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 18:11:42.822973   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 18:11:42.854516   74203 cri.go:89] found id: ""
	I0731 18:11:42.854545   74203 logs.go:276] 0 containers: []
	W0731 18:11:42.854564   74203 logs.go:278] No container was found matching "coredns"
	I0731 18:11:42.854574   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 18:11:42.854638   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 18:11:42.890933   74203 cri.go:89] found id: ""
	I0731 18:11:42.890955   74203 logs.go:276] 0 containers: []
	W0731 18:11:42.890963   74203 logs.go:278] No container was found matching "kube-scheduler"
	I0731 18:11:42.890968   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 18:11:42.891013   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 18:11:42.925170   74203 cri.go:89] found id: ""
	I0731 18:11:42.925196   74203 logs.go:276] 0 containers: []
	W0731 18:11:42.925206   74203 logs.go:278] No container was found matching "kube-proxy"
	I0731 18:11:42.925213   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 18:11:42.925273   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 18:11:42.959845   74203 cri.go:89] found id: ""
	I0731 18:11:42.959868   74203 logs.go:276] 0 containers: []
	W0731 18:11:42.959876   74203 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 18:11:42.959881   74203 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 18:11:42.959938   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 18:11:42.997305   74203 cri.go:89] found id: ""
	I0731 18:11:42.997346   74203 logs.go:276] 0 containers: []
	W0731 18:11:42.997358   74203 logs.go:278] No container was found matching "kindnet"
	I0731 18:11:42.997366   74203 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 18:11:42.997427   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 18:11:43.030663   74203 cri.go:89] found id: ""
	I0731 18:11:43.030690   74203 logs.go:276] 0 containers: []
	W0731 18:11:43.030700   74203 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 18:11:43.030711   74203 logs.go:123] Gathering logs for describe nodes ...
	I0731 18:11:43.030725   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 18:11:43.112280   74203 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 18:11:43.112303   74203 logs.go:123] Gathering logs for CRI-O ...
	I0731 18:11:43.112318   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 18:11:43.209002   74203 logs.go:123] Gathering logs for container status ...
	I0731 18:11:43.209035   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 18:11:43.249596   74203 logs.go:123] Gathering logs for kubelet ...
	I0731 18:11:43.249629   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 18:11:43.302419   74203 logs.go:123] Gathering logs for dmesg ...
	I0731 18:11:43.302449   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 18:11:42.278874   73696 pod_ready.go:102] pod "metrics-server-569cc877fc-64hp4" in "kube-system" namespace has status "Ready":"False"
	I0731 18:11:44.273355   73696 pod_ready.go:81] duration metric: took 4m0.000454583s for pod "metrics-server-569cc877fc-64hp4" in "kube-system" namespace to be "Ready" ...
	E0731 18:11:44.273380   73696 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-569cc877fc-64hp4" in "kube-system" namespace to be "Ready" (will not retry!)
	I0731 18:11:44.273399   73696 pod_ready.go:38] duration metric: took 4m8.019714552s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0731 18:11:44.273430   73696 kubeadm.go:597] duration metric: took 4m16.379038728s to restartPrimaryControlPlane
	W0731 18:11:44.273506   73696 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0731 18:11:44.273531   73696 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0731 18:11:42.643165   73479 pod_ready.go:102] pod "metrics-server-78fcd8795b-27pkr" in "kube-system" namespace has status "Ready":"False"
	I0731 18:11:44.644976   73479 pod_ready.go:102] pod "metrics-server-78fcd8795b-27pkr" in "kube-system" namespace has status "Ready":"False"
	I0731 18:11:46.271751   73800 pod_ready.go:102] pod "metrics-server-569cc877fc-fzxrw" in "kube-system" namespace has status "Ready":"False"
	I0731 18:11:48.771274   73800 pod_ready.go:102] pod "metrics-server-569cc877fc-fzxrw" in "kube-system" namespace has status "Ready":"False"
	I0731 18:11:45.816910   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:11:45.829909   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 18:11:45.829976   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 18:11:45.865534   74203 cri.go:89] found id: ""
	I0731 18:11:45.865561   74203 logs.go:276] 0 containers: []
	W0731 18:11:45.865572   74203 logs.go:278] No container was found matching "kube-apiserver"
	I0731 18:11:45.865584   74203 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 18:11:45.865646   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 18:11:45.901552   74203 cri.go:89] found id: ""
	I0731 18:11:45.901585   74203 logs.go:276] 0 containers: []
	W0731 18:11:45.901593   74203 logs.go:278] No container was found matching "etcd"
	I0731 18:11:45.901598   74203 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 18:11:45.901678   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 18:11:45.938790   74203 cri.go:89] found id: ""
	I0731 18:11:45.938820   74203 logs.go:276] 0 containers: []
	W0731 18:11:45.938842   74203 logs.go:278] No container was found matching "coredns"
	I0731 18:11:45.938859   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 18:11:45.938926   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 18:11:45.971502   74203 cri.go:89] found id: ""
	I0731 18:11:45.971534   74203 logs.go:276] 0 containers: []
	W0731 18:11:45.971546   74203 logs.go:278] No container was found matching "kube-scheduler"
	I0731 18:11:45.971553   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 18:11:45.971620   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 18:11:46.009281   74203 cri.go:89] found id: ""
	I0731 18:11:46.009316   74203 logs.go:276] 0 containers: []
	W0731 18:11:46.009327   74203 logs.go:278] No container was found matching "kube-proxy"
	I0731 18:11:46.009335   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 18:11:46.009399   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 18:11:46.042899   74203 cri.go:89] found id: ""
	I0731 18:11:46.042928   74203 logs.go:276] 0 containers: []
	W0731 18:11:46.042939   74203 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 18:11:46.042947   74203 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 18:11:46.043005   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 18:11:46.079982   74203 cri.go:89] found id: ""
	I0731 18:11:46.080013   74203 logs.go:276] 0 containers: []
	W0731 18:11:46.080024   74203 logs.go:278] No container was found matching "kindnet"
	I0731 18:11:46.080031   74203 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 18:11:46.080098   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 18:11:46.113136   74203 cri.go:89] found id: ""
	I0731 18:11:46.113168   74203 logs.go:276] 0 containers: []
	W0731 18:11:46.113179   74203 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 18:11:46.113191   74203 logs.go:123] Gathering logs for kubelet ...
	I0731 18:11:46.113206   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 18:11:46.165818   74203 logs.go:123] Gathering logs for dmesg ...
	I0731 18:11:46.165855   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 18:11:46.181058   74203 logs.go:123] Gathering logs for describe nodes ...
	I0731 18:11:46.181083   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 18:11:46.256805   74203 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 18:11:46.256826   74203 logs.go:123] Gathering logs for CRI-O ...
	I0731 18:11:46.256838   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 18:11:46.353045   74203 logs.go:123] Gathering logs for container status ...
	I0731 18:11:46.353093   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 18:11:48.894656   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:11:48.910648   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 18:11:48.910723   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 18:11:48.941080   74203 cri.go:89] found id: ""
	I0731 18:11:48.941103   74203 logs.go:276] 0 containers: []
	W0731 18:11:48.941111   74203 logs.go:278] No container was found matching "kube-apiserver"
	I0731 18:11:48.941117   74203 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 18:11:48.941164   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 18:11:48.972113   74203 cri.go:89] found id: ""
	I0731 18:11:48.972136   74203 logs.go:276] 0 containers: []
	W0731 18:11:48.972146   74203 logs.go:278] No container was found matching "etcd"
	I0731 18:11:48.972151   74203 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 18:11:48.972208   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 18:11:49.004521   74203 cri.go:89] found id: ""
	I0731 18:11:49.004547   74203 logs.go:276] 0 containers: []
	W0731 18:11:49.004557   74203 logs.go:278] No container was found matching "coredns"
	I0731 18:11:49.004571   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 18:11:49.004658   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 18:11:49.036600   74203 cri.go:89] found id: ""
	I0731 18:11:49.036622   74203 logs.go:276] 0 containers: []
	W0731 18:11:49.036629   74203 logs.go:278] No container was found matching "kube-scheduler"
	I0731 18:11:49.036635   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 18:11:49.036683   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 18:11:49.071397   74203 cri.go:89] found id: ""
	I0731 18:11:49.071426   74203 logs.go:276] 0 containers: []
	W0731 18:11:49.071436   74203 logs.go:278] No container was found matching "kube-proxy"
	I0731 18:11:49.071444   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 18:11:49.071501   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 18:11:49.108907   74203 cri.go:89] found id: ""
	I0731 18:11:49.108933   74203 logs.go:276] 0 containers: []
	W0731 18:11:49.108944   74203 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 18:11:49.108952   74203 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 18:11:49.109007   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 18:11:49.141808   74203 cri.go:89] found id: ""
	I0731 18:11:49.141834   74203 logs.go:276] 0 containers: []
	W0731 18:11:49.141844   74203 logs.go:278] No container was found matching "kindnet"
	I0731 18:11:49.141856   74203 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 18:11:49.141917   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 18:11:49.174063   74203 cri.go:89] found id: ""
	I0731 18:11:49.174087   74203 logs.go:276] 0 containers: []
	W0731 18:11:49.174095   74203 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 18:11:49.174104   74203 logs.go:123] Gathering logs for container status ...
	I0731 18:11:49.174116   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 18:11:49.212152   74203 logs.go:123] Gathering logs for kubelet ...
	I0731 18:11:49.212181   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 18:11:49.267297   74203 logs.go:123] Gathering logs for dmesg ...
	I0731 18:11:49.267324   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 18:11:49.281342   74203 logs.go:123] Gathering logs for describe nodes ...
	I0731 18:11:49.281365   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 18:11:49.349843   74203 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 18:11:49.349866   74203 logs.go:123] Gathering logs for CRI-O ...
	I0731 18:11:49.349882   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 18:11:47.144588   73479 pod_ready.go:102] pod "metrics-server-78fcd8795b-27pkr" in "kube-system" namespace has status "Ready":"False"
	I0731 18:11:49.644395   73479 pod_ready.go:102] pod "metrics-server-78fcd8795b-27pkr" in "kube-system" namespace has status "Ready":"False"
	I0731 18:11:51.271203   73800 pod_ready.go:102] pod "metrics-server-569cc877fc-fzxrw" in "kube-system" namespace has status "Ready":"False"
	I0731 18:11:53.770849   73800 pod_ready.go:102] pod "metrics-server-569cc877fc-fzxrw" in "kube-system" namespace has status "Ready":"False"
	I0731 18:11:51.927764   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:11:51.940480   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 18:11:51.940539   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 18:11:51.973731   74203 cri.go:89] found id: ""
	I0731 18:11:51.973759   74203 logs.go:276] 0 containers: []
	W0731 18:11:51.973768   74203 logs.go:278] No container was found matching "kube-apiserver"
	I0731 18:11:51.973780   74203 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 18:11:51.973837   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 18:11:52.003761   74203 cri.go:89] found id: ""
	I0731 18:11:52.003783   74203 logs.go:276] 0 containers: []
	W0731 18:11:52.003790   74203 logs.go:278] No container was found matching "etcd"
	I0731 18:11:52.003795   74203 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 18:11:52.003844   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 18:11:52.035009   74203 cri.go:89] found id: ""
	I0731 18:11:52.035028   74203 logs.go:276] 0 containers: []
	W0731 18:11:52.035035   74203 logs.go:278] No container was found matching "coredns"
	I0731 18:11:52.035041   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 18:11:52.035100   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 18:11:52.065475   74203 cri.go:89] found id: ""
	I0731 18:11:52.065501   74203 logs.go:276] 0 containers: []
	W0731 18:11:52.065509   74203 logs.go:278] No container was found matching "kube-scheduler"
	I0731 18:11:52.065515   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 18:11:52.065574   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 18:11:52.097529   74203 cri.go:89] found id: ""
	I0731 18:11:52.097558   74203 logs.go:276] 0 containers: []
	W0731 18:11:52.097567   74203 logs.go:278] No container was found matching "kube-proxy"
	I0731 18:11:52.097573   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 18:11:52.097622   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 18:11:52.128881   74203 cri.go:89] found id: ""
	I0731 18:11:52.128909   74203 logs.go:276] 0 containers: []
	W0731 18:11:52.128917   74203 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 18:11:52.128923   74203 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 18:11:52.128974   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 18:11:52.159894   74203 cri.go:89] found id: ""
	I0731 18:11:52.159921   74203 logs.go:276] 0 containers: []
	W0731 18:11:52.159931   74203 logs.go:278] No container was found matching "kindnet"
	I0731 18:11:52.159939   74203 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 18:11:52.159998   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 18:11:52.191955   74203 cri.go:89] found id: ""
	I0731 18:11:52.191981   74203 logs.go:276] 0 containers: []
	W0731 18:11:52.191990   74203 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 18:11:52.191999   74203 logs.go:123] Gathering logs for kubelet ...
	I0731 18:11:52.192009   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 18:11:52.246389   74203 logs.go:123] Gathering logs for dmesg ...
	I0731 18:11:52.246423   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 18:11:52.260226   74203 logs.go:123] Gathering logs for describe nodes ...
	I0731 18:11:52.260253   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 18:11:52.328423   74203 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 18:11:52.328447   74203 logs.go:123] Gathering logs for CRI-O ...
	I0731 18:11:52.328459   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 18:11:52.408456   74203 logs.go:123] Gathering logs for container status ...
	I0731 18:11:52.408495   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 18:11:54.947734   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:11:54.960359   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 18:11:54.960420   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 18:11:54.994231   74203 cri.go:89] found id: ""
	I0731 18:11:54.994256   74203 logs.go:276] 0 containers: []
	W0731 18:11:54.994264   74203 logs.go:278] No container was found matching "kube-apiserver"
	I0731 18:11:54.994270   74203 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 18:11:54.994332   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 18:11:55.027323   74203 cri.go:89] found id: ""
	I0731 18:11:55.027364   74203 logs.go:276] 0 containers: []
	W0731 18:11:55.027374   74203 logs.go:278] No container was found matching "etcd"
	I0731 18:11:55.027382   74203 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 18:11:55.027440   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 18:11:55.061741   74203 cri.go:89] found id: ""
	I0731 18:11:55.061763   74203 logs.go:276] 0 containers: []
	W0731 18:11:55.061771   74203 logs.go:278] No container was found matching "coredns"
	I0731 18:11:55.061776   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 18:11:55.061822   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 18:11:55.100685   74203 cri.go:89] found id: ""
	I0731 18:11:55.100712   74203 logs.go:276] 0 containers: []
	W0731 18:11:55.100720   74203 logs.go:278] No container was found matching "kube-scheduler"
	I0731 18:11:55.100726   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 18:11:55.100780   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 18:11:55.141917   74203 cri.go:89] found id: ""
	I0731 18:11:55.141958   74203 logs.go:276] 0 containers: []
	W0731 18:11:55.141971   74203 logs.go:278] No container was found matching "kube-proxy"
	I0731 18:11:55.141980   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 18:11:55.142054   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 18:11:55.176669   74203 cri.go:89] found id: ""
	I0731 18:11:55.176702   74203 logs.go:276] 0 containers: []
	W0731 18:11:55.176711   74203 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 18:11:55.176718   74203 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 18:11:55.176780   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 18:11:55.209795   74203 cri.go:89] found id: ""
	I0731 18:11:55.209829   74203 logs.go:276] 0 containers: []
	W0731 18:11:55.209842   74203 logs.go:278] No container was found matching "kindnet"
	I0731 18:11:55.209850   74203 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 18:11:55.209915   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 18:11:55.244503   74203 cri.go:89] found id: ""
	I0731 18:11:55.244527   74203 logs.go:276] 0 containers: []
	W0731 18:11:55.244537   74203 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 18:11:55.244556   74203 logs.go:123] Gathering logs for CRI-O ...
	I0731 18:11:55.244572   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 18:11:55.320033   74203 logs.go:123] Gathering logs for container status ...
	I0731 18:11:55.320071   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 18:11:52.143803   73479 pod_ready.go:102] pod "metrics-server-78fcd8795b-27pkr" in "kube-system" namespace has status "Ready":"False"
	I0731 18:11:54.644223   73479 pod_ready.go:102] pod "metrics-server-78fcd8795b-27pkr" in "kube-system" namespace has status "Ready":"False"
	I0731 18:11:56.273321   73800 pod_ready.go:102] pod "metrics-server-569cc877fc-fzxrw" in "kube-system" namespace has status "Ready":"False"
	I0731 18:11:58.772541   73800 pod_ready.go:102] pod "metrics-server-569cc877fc-fzxrw" in "kube-system" namespace has status "Ready":"False"
	I0731 18:11:55.357684   74203 logs.go:123] Gathering logs for kubelet ...
	I0731 18:11:55.357719   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 18:11:55.411465   74203 logs.go:123] Gathering logs for dmesg ...
	I0731 18:11:55.411501   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 18:11:55.423802   74203 logs.go:123] Gathering logs for describe nodes ...
	I0731 18:11:55.423833   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 18:11:55.487945   74203 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 18:11:57.988078   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:11:58.001639   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 18:11:58.001724   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 18:11:58.036075   74203 cri.go:89] found id: ""
	I0731 18:11:58.036099   74203 logs.go:276] 0 containers: []
	W0731 18:11:58.036107   74203 logs.go:278] No container was found matching "kube-apiserver"
	I0731 18:11:58.036112   74203 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 18:11:58.036163   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 18:11:58.067316   74203 cri.go:89] found id: ""
	I0731 18:11:58.067340   74203 logs.go:276] 0 containers: []
	W0731 18:11:58.067348   74203 logs.go:278] No container was found matching "etcd"
	I0731 18:11:58.067353   74203 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 18:11:58.067420   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 18:11:58.102446   74203 cri.go:89] found id: ""
	I0731 18:11:58.102470   74203 logs.go:276] 0 containers: []
	W0731 18:11:58.102479   74203 logs.go:278] No container was found matching "coredns"
	I0731 18:11:58.102485   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 18:11:58.102553   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 18:11:58.134924   74203 cri.go:89] found id: ""
	I0731 18:11:58.134949   74203 logs.go:276] 0 containers: []
	W0731 18:11:58.134957   74203 logs.go:278] No container was found matching "kube-scheduler"
	I0731 18:11:58.134963   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 18:11:58.135023   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 18:11:58.171589   74203 cri.go:89] found id: ""
	I0731 18:11:58.171611   74203 logs.go:276] 0 containers: []
	W0731 18:11:58.171620   74203 logs.go:278] No container was found matching "kube-proxy"
	I0731 18:11:58.171625   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 18:11:58.171673   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 18:11:58.203813   74203 cri.go:89] found id: ""
	I0731 18:11:58.203836   74203 logs.go:276] 0 containers: []
	W0731 18:11:58.203844   74203 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 18:11:58.203850   74203 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 18:11:58.203911   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 18:11:58.236251   74203 cri.go:89] found id: ""
	I0731 18:11:58.236277   74203 logs.go:276] 0 containers: []
	W0731 18:11:58.236288   74203 logs.go:278] No container was found matching "kindnet"
	I0731 18:11:58.236295   74203 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 18:11:58.236357   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 18:11:58.270595   74203 cri.go:89] found id: ""
	I0731 18:11:58.270624   74203 logs.go:276] 0 containers: []
	W0731 18:11:58.270636   74203 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 18:11:58.270647   74203 logs.go:123] Gathering logs for kubelet ...
	I0731 18:11:58.270662   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 18:11:58.321889   74203 logs.go:123] Gathering logs for dmesg ...
	I0731 18:11:58.321927   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 18:11:58.334529   74203 logs.go:123] Gathering logs for describe nodes ...
	I0731 18:11:58.334552   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 18:11:58.398489   74203 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 18:11:58.398515   74203 logs.go:123] Gathering logs for CRI-O ...
	I0731 18:11:58.398540   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 18:11:58.479657   74203 logs.go:123] Gathering logs for container status ...
	I0731 18:11:58.479695   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 18:11:57.143080   73479 pod_ready.go:102] pod "metrics-server-78fcd8795b-27pkr" in "kube-system" namespace has status "Ready":"False"
	I0731 18:11:59.144357   73479 pod_ready.go:102] pod "metrics-server-78fcd8795b-27pkr" in "kube-system" namespace has status "Ready":"False"
	I0731 18:12:01.643343   73479 pod_ready.go:102] pod "metrics-server-78fcd8795b-27pkr" in "kube-system" namespace has status "Ready":"False"
	I0731 18:12:01.266100   73800 pod_ready.go:81] duration metric: took 4m0.000711681s for pod "metrics-server-569cc877fc-fzxrw" in "kube-system" namespace to be "Ready" ...
	E0731 18:12:01.266123   73800 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-569cc877fc-fzxrw" in "kube-system" namespace to be "Ready" (will not retry!)
	I0731 18:12:01.266160   73800 pod_ready.go:38] duration metric: took 4m6.529342365s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0731 18:12:01.266205   73800 kubeadm.go:597] duration metric: took 4m13.643145888s to restartPrimaryControlPlane
	W0731 18:12:01.266270   73800 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0731 18:12:01.266297   73800 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0731 18:12:01.014684   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:12:01.027959   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 18:12:01.028026   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 18:12:01.065423   74203 cri.go:89] found id: ""
	I0731 18:12:01.065459   74203 logs.go:276] 0 containers: []
	W0731 18:12:01.065472   74203 logs.go:278] No container was found matching "kube-apiserver"
	I0731 18:12:01.065481   74203 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 18:12:01.065545   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 18:12:01.099519   74203 cri.go:89] found id: ""
	I0731 18:12:01.099549   74203 logs.go:276] 0 containers: []
	W0731 18:12:01.099561   74203 logs.go:278] No container was found matching "etcd"
	I0731 18:12:01.099568   74203 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 18:12:01.099630   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 18:12:01.131239   74203 cri.go:89] found id: ""
	I0731 18:12:01.131262   74203 logs.go:276] 0 containers: []
	W0731 18:12:01.131270   74203 logs.go:278] No container was found matching "coredns"
	I0731 18:12:01.131275   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 18:12:01.131321   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 18:12:01.163209   74203 cri.go:89] found id: ""
	I0731 18:12:01.163229   74203 logs.go:276] 0 containers: []
	W0731 18:12:01.163237   74203 logs.go:278] No container was found matching "kube-scheduler"
	I0731 18:12:01.163242   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 18:12:01.163295   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 18:12:01.201165   74203 cri.go:89] found id: ""
	I0731 18:12:01.201193   74203 logs.go:276] 0 containers: []
	W0731 18:12:01.201204   74203 logs.go:278] No container was found matching "kube-proxy"
	I0731 18:12:01.201217   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 18:12:01.201274   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 18:12:01.233310   74203 cri.go:89] found id: ""
	I0731 18:12:01.233334   74203 logs.go:276] 0 containers: []
	W0731 18:12:01.233342   74203 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 18:12:01.233348   74203 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 18:12:01.233415   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 18:12:01.263412   74203 cri.go:89] found id: ""
	I0731 18:12:01.263442   74203 logs.go:276] 0 containers: []
	W0731 18:12:01.263452   74203 logs.go:278] No container was found matching "kindnet"
	I0731 18:12:01.263459   74203 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 18:12:01.263521   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 18:12:01.296598   74203 cri.go:89] found id: ""
	I0731 18:12:01.296624   74203 logs.go:276] 0 containers: []
	W0731 18:12:01.296632   74203 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 18:12:01.296642   74203 logs.go:123] Gathering logs for describe nodes ...
	I0731 18:12:01.296656   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 18:12:01.372362   74203 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 18:12:01.372381   74203 logs.go:123] Gathering logs for CRI-O ...
	I0731 18:12:01.372395   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 18:12:01.461997   74203 logs.go:123] Gathering logs for container status ...
	I0731 18:12:01.462029   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 18:12:01.507610   74203 logs.go:123] Gathering logs for kubelet ...
	I0731 18:12:01.507636   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 18:12:01.558335   74203 logs.go:123] Gathering logs for dmesg ...
	I0731 18:12:01.558375   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 18:12:04.073333   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:12:04.091122   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 18:12:04.091205   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 18:12:04.130510   74203 cri.go:89] found id: ""
	I0731 18:12:04.130545   74203 logs.go:276] 0 containers: []
	W0731 18:12:04.130558   74203 logs.go:278] No container was found matching "kube-apiserver"
	I0731 18:12:04.130566   74203 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 18:12:04.130632   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 18:12:04.174749   74203 cri.go:89] found id: ""
	I0731 18:12:04.174775   74203 logs.go:276] 0 containers: []
	W0731 18:12:04.174785   74203 logs.go:278] No container was found matching "etcd"
	I0731 18:12:04.174792   74203 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 18:12:04.174846   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 18:12:04.212123   74203 cri.go:89] found id: ""
	I0731 18:12:04.212160   74203 logs.go:276] 0 containers: []
	W0731 18:12:04.212172   74203 logs.go:278] No container was found matching "coredns"
	I0731 18:12:04.212180   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 18:12:04.212254   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 18:12:04.251558   74203 cri.go:89] found id: ""
	I0731 18:12:04.251589   74203 logs.go:276] 0 containers: []
	W0731 18:12:04.251600   74203 logs.go:278] No container was found matching "kube-scheduler"
	I0731 18:12:04.251608   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 18:12:04.251671   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 18:12:04.284831   74203 cri.go:89] found id: ""
	I0731 18:12:04.284864   74203 logs.go:276] 0 containers: []
	W0731 18:12:04.284878   74203 logs.go:278] No container was found matching "kube-proxy"
	I0731 18:12:04.284886   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 18:12:04.284954   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 18:12:04.325076   74203 cri.go:89] found id: ""
	I0731 18:12:04.325115   74203 logs.go:276] 0 containers: []
	W0731 18:12:04.325126   74203 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 18:12:04.325135   74203 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 18:12:04.325195   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 18:12:04.370883   74203 cri.go:89] found id: ""
	I0731 18:12:04.370922   74203 logs.go:276] 0 containers: []
	W0731 18:12:04.370933   74203 logs.go:278] No container was found matching "kindnet"
	I0731 18:12:04.370940   74203 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 18:12:04.370999   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 18:12:04.410639   74203 cri.go:89] found id: ""
	I0731 18:12:04.410671   74203 logs.go:276] 0 containers: []
	W0731 18:12:04.410685   74203 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 18:12:04.410697   74203 logs.go:123] Gathering logs for kubelet ...
	I0731 18:12:04.410713   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 18:12:04.462988   74203 logs.go:123] Gathering logs for dmesg ...
	I0731 18:12:04.463023   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 18:12:04.479086   74203 logs.go:123] Gathering logs for describe nodes ...
	I0731 18:12:04.479123   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 18:12:04.544675   74203 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 18:12:04.544699   74203 logs.go:123] Gathering logs for CRI-O ...
	I0731 18:12:04.544712   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 18:12:04.633231   74203 logs.go:123] Gathering logs for container status ...
	I0731 18:12:04.633267   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
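Each polling round above runs the same container checks over SSH; a minimal sketch of reproducing them by hand on the node, assuming crictl is on the PATH (empty output for a name matches the "No container was found matching ..." warnings):

# List any control-plane containers known to the CRI runtime; empty output
# for a name means that component has no container yet.
for name in kube-apiserver etcd coredns kube-scheduler kube-proxy kube-controller-manager kindnet kubernetes-dashboard; do
  echo "== ${name} =="
  sudo crictl ps -a --quiet --name="${name}"
done
# Same crictl-then-docker fallback used by the container status step above:
sudo "$(which crictl || echo crictl)" ps -a || sudo docker ps -a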
	I0731 18:12:03.645118   73479 pod_ready.go:102] pod "metrics-server-78fcd8795b-27pkr" in "kube-system" namespace has status "Ready":"False"
	I0731 18:12:06.143865   73479 pod_ready.go:102] pod "metrics-server-78fcd8795b-27pkr" in "kube-system" namespace has status "Ready":"False"
	I0731 18:12:07.174252   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:12:07.187289   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 18:12:07.187393   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 18:12:07.220927   74203 cri.go:89] found id: ""
	I0731 18:12:07.220953   74203 logs.go:276] 0 containers: []
	W0731 18:12:07.220964   74203 logs.go:278] No container was found matching "kube-apiserver"
	I0731 18:12:07.220972   74203 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 18:12:07.221040   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 18:12:07.256817   74203 cri.go:89] found id: ""
	I0731 18:12:07.256849   74203 logs.go:276] 0 containers: []
	W0731 18:12:07.256861   74203 logs.go:278] No container was found matching "etcd"
	I0731 18:12:07.256870   74203 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 18:12:07.256935   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 18:12:07.290267   74203 cri.go:89] found id: ""
	I0731 18:12:07.290297   74203 logs.go:276] 0 containers: []
	W0731 18:12:07.290309   74203 logs.go:278] No container was found matching "coredns"
	I0731 18:12:07.290315   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 18:12:07.290373   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 18:12:07.330037   74203 cri.go:89] found id: ""
	I0731 18:12:07.330068   74203 logs.go:276] 0 containers: []
	W0731 18:12:07.330079   74203 logs.go:278] No container was found matching "kube-scheduler"
	I0731 18:12:07.330087   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 18:12:07.330143   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 18:12:07.366745   74203 cri.go:89] found id: ""
	I0731 18:12:07.366770   74203 logs.go:276] 0 containers: []
	W0731 18:12:07.366778   74203 logs.go:278] No container was found matching "kube-proxy"
	I0731 18:12:07.366783   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 18:12:07.366861   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 18:12:07.400608   74203 cri.go:89] found id: ""
	I0731 18:12:07.400637   74203 logs.go:276] 0 containers: []
	W0731 18:12:07.400648   74203 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 18:12:07.400661   74203 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 18:12:07.400727   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 18:12:07.434996   74203 cri.go:89] found id: ""
	I0731 18:12:07.435028   74203 logs.go:276] 0 containers: []
	W0731 18:12:07.435037   74203 logs.go:278] No container was found matching "kindnet"
	I0731 18:12:07.435044   74203 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 18:12:07.435130   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 18:12:07.474347   74203 cri.go:89] found id: ""
	I0731 18:12:07.474375   74203 logs.go:276] 0 containers: []
	W0731 18:12:07.474387   74203 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 18:12:07.474400   74203 logs.go:123] Gathering logs for CRI-O ...
	I0731 18:12:07.474415   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 18:12:07.549009   74203 logs.go:123] Gathering logs for container status ...
	I0731 18:12:07.549045   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 18:12:07.586710   74203 logs.go:123] Gathering logs for kubelet ...
	I0731 18:12:07.586736   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 18:12:07.640770   74203 logs.go:123] Gathering logs for dmesg ...
	I0731 18:12:07.640800   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 18:12:07.654380   74203 logs.go:123] Gathering logs for describe nodes ...
	I0731 18:12:07.654405   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 18:12:07.721479   74203 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 18:12:10.221837   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:12:10.235686   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 18:12:10.235746   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 18:12:10.268769   74203 cri.go:89] found id: ""
	I0731 18:12:10.268794   74203 logs.go:276] 0 containers: []
	W0731 18:12:10.268802   74203 logs.go:278] No container was found matching "kube-apiserver"
	I0731 18:12:10.268808   74203 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 18:12:10.268860   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 18:12:10.305229   74203 cri.go:89] found id: ""
	I0731 18:12:10.305264   74203 logs.go:276] 0 containers: []
	W0731 18:12:10.305277   74203 logs.go:278] No container was found matching "etcd"
	I0731 18:12:10.305290   74203 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 18:12:10.305353   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 18:12:10.337070   74203 cri.go:89] found id: ""
	I0731 18:12:10.337095   74203 logs.go:276] 0 containers: []
	W0731 18:12:10.337104   74203 logs.go:278] No container was found matching "coredns"
	I0731 18:12:10.337109   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 18:12:10.337155   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 18:12:08.643708   73479 pod_ready.go:102] pod "metrics-server-78fcd8795b-27pkr" in "kube-system" namespace has status "Ready":"False"
	I0731 18:12:10.645483   73479 pod_ready.go:102] pod "metrics-server-78fcd8795b-27pkr" in "kube-system" namespace has status "Ready":"False"
	I0731 18:12:10.372979   74203 cri.go:89] found id: ""
	I0731 18:12:10.373005   74203 logs.go:276] 0 containers: []
	W0731 18:12:10.373015   74203 logs.go:278] No container was found matching "kube-scheduler"
	I0731 18:12:10.373022   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 18:12:10.373079   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 18:12:10.407225   74203 cri.go:89] found id: ""
	I0731 18:12:10.407252   74203 logs.go:276] 0 containers: []
	W0731 18:12:10.407264   74203 logs.go:278] No container was found matching "kube-proxy"
	I0731 18:12:10.407270   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 18:12:10.407327   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 18:12:10.443338   74203 cri.go:89] found id: ""
	I0731 18:12:10.443366   74203 logs.go:276] 0 containers: []
	W0731 18:12:10.443377   74203 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 18:12:10.443385   74203 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 18:12:10.443474   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 18:12:10.477005   74203 cri.go:89] found id: ""
	I0731 18:12:10.477030   74203 logs.go:276] 0 containers: []
	W0731 18:12:10.477038   74203 logs.go:278] No container was found matching "kindnet"
	I0731 18:12:10.477043   74203 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 18:12:10.477092   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 18:12:10.509338   74203 cri.go:89] found id: ""
	I0731 18:12:10.509367   74203 logs.go:276] 0 containers: []
	W0731 18:12:10.509378   74203 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 18:12:10.509389   74203 logs.go:123] Gathering logs for kubelet ...
	I0731 18:12:10.509405   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 18:12:10.559604   74203 logs.go:123] Gathering logs for dmesg ...
	I0731 18:12:10.559639   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 18:12:10.572652   74203 logs.go:123] Gathering logs for describe nodes ...
	I0731 18:12:10.572682   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 18:12:10.642749   74203 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 18:12:10.642772   74203 logs.go:123] Gathering logs for CRI-O ...
	I0731 18:12:10.642789   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 18:12:10.728716   74203 logs.go:123] Gathering logs for container status ...
	I0731 18:12:10.728753   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 18:12:13.267783   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:12:13.282235   74203 kubeadm.go:597] duration metric: took 4m4.41837453s to restartPrimaryControlPlane
	W0731 18:12:13.282324   74203 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0731 18:12:13.282355   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
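Because no control-plane containers ever came up, the restart path is abandoned and the cluster is reset and re-initialized from the staged config. The rough shape of that recovery, with the preflight ignore list abbreviated here (the full list appears in the init invocation later in the log), is:

# Reset the failed control plane, then re-run kubeadm init from the config
# minikube staged at /var/tmp/minikube/kubeadm.yaml (ignore list trimmed).
sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" \
  kubeadm reset --cri-socket /var/run/crio/crio.sock --force
sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" \
  kubeadm init --config /var/tmp/minikube/kubeadm.yaml \
  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,Port-10250,Swap,NumCPU,Mem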
	I0731 18:12:15.410363   73696 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (31.136815784s)
	I0731 18:12:15.410431   73696 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0731 18:12:15.426599   73696 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0731 18:12:15.435823   73696 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0731 18:12:15.444553   73696 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0731 18:12:15.444581   73696 kubeadm.go:157] found existing configuration files:
	
	I0731 18:12:15.444624   73696 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0731 18:12:15.453198   73696 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0731 18:12:15.453273   73696 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0731 18:12:15.461988   73696 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0731 18:12:15.470178   73696 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0731 18:12:15.470238   73696 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0731 18:12:15.478903   73696 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0731 18:12:15.487176   73696 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0731 18:12:15.487215   73696 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0731 18:12:15.496114   73696 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0731 18:12:15.504518   73696 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0731 18:12:15.504579   73696 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
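The grep/rm sequence above is the stale-kubeconfig cleanup: any of the four kubeconfigs that does not reference the expected control-plane endpoint is removed before kubeadm init regenerates it. A compact equivalent, assuming the same endpoint and file set:

# Remove kubeconfigs that do not point at the expected API endpoint
# (port 8444 for the default-k8s-diff-port profile shown above).
endpoint="https://control-plane.minikube.internal:8444"
for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
  sudo grep -q "$endpoint" "/etc/kubernetes/$f" || sudo rm -f "/etc/kubernetes/$f"
done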
	I0731 18:12:15.513915   73696 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0731 18:12:15.563318   73696 kubeadm.go:310] [init] Using Kubernetes version: v1.30.3
	I0731 18:12:15.563381   73696 kubeadm.go:310] [preflight] Running pre-flight checks
	I0731 18:12:15.697426   73696 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0731 18:12:15.697574   73696 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0731 18:12:15.697688   73696 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0731 18:12:15.902621   73696 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0731 18:12:15.904763   73696 out.go:204]   - Generating certificates and keys ...
	I0731 18:12:15.904869   73696 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0731 18:12:15.904948   73696 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0731 18:12:15.905049   73696 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0731 18:12:15.905149   73696 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0731 18:12:15.905247   73696 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0731 18:12:15.905328   73696 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0731 18:12:15.905426   73696 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0731 18:12:15.905516   73696 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0731 18:12:15.905620   73696 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0731 18:12:15.905729   73696 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0731 18:12:15.905812   73696 kubeadm.go:310] [certs] Using the existing "sa" key
	I0731 18:12:15.905890   73696 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0731 18:12:16.011366   73696 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0731 18:12:16.171776   73696 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0731 18:12:16.404302   73696 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0731 18:12:16.559451   73696 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0731 18:12:16.686612   73696 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0731 18:12:16.687311   73696 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0731 18:12:16.689956   73696 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0731 18:12:13.142855   73479 pod_ready.go:102] pod "metrics-server-78fcd8795b-27pkr" in "kube-system" namespace has status "Ready":"False"
	I0731 18:12:15.144107   73479 pod_ready.go:102] pod "metrics-server-78fcd8795b-27pkr" in "kube-system" namespace has status "Ready":"False"
	I0731 18:12:16.959318   74203 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (3.676937263s)
	I0731 18:12:16.959425   74203 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0731 18:12:16.973440   74203 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0731 18:12:16.983482   74203 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0731 18:12:16.993930   74203 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0731 18:12:16.993951   74203 kubeadm.go:157] found existing configuration files:
	
	I0731 18:12:16.993993   74203 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0731 18:12:17.002713   74203 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0731 18:12:17.002771   74203 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0731 18:12:17.012107   74203 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0731 18:12:17.022548   74203 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0731 18:12:17.022604   74203 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0731 18:12:17.033569   74203 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0731 18:12:17.043338   74203 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0731 18:12:17.043391   74203 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0731 18:12:17.052064   74203 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0731 18:12:17.060785   74203 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0731 18:12:17.060850   74203 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0731 18:12:17.069499   74203 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0731 18:12:17.136512   74203 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0731 18:12:17.136579   74203 kubeadm.go:310] [preflight] Running pre-flight checks
	I0731 18:12:17.286224   74203 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0731 18:12:17.286383   74203 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0731 18:12:17.286506   74203 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0731 18:12:17.467092   74203 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0731 18:12:17.468918   74203 out.go:204]   - Generating certificates and keys ...
	I0731 18:12:17.469024   74203 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0731 18:12:17.469135   74203 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0731 18:12:17.469229   74203 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0731 18:12:17.469307   74203 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0731 18:12:17.469439   74203 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0731 18:12:17.469525   74203 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0731 18:12:17.469609   74203 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0731 18:12:17.470025   74203 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0731 18:12:17.470501   74203 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0731 18:12:17.470852   74203 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0731 18:12:17.470899   74203 kubeadm.go:310] [certs] Using the existing "sa" key
	I0731 18:12:17.470949   74203 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0731 18:12:17.673308   74203 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0731 18:12:17.922789   74203 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0731 18:12:18.391239   74203 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0731 18:12:18.464854   74203 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0731 18:12:18.480495   74203 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0731 18:12:18.480675   74203 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0731 18:12:18.480746   74203 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0731 18:12:18.632564   74203 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0731 18:12:18.635416   74203 out.go:204]   - Booting up control plane ...
	I0731 18:12:18.635542   74203 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0731 18:12:18.643338   74203 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0731 18:12:18.645881   74203 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0731 18:12:18.646898   74203 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0731 18:12:18.650052   74203 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0731 18:12:16.691876   73696 out.go:204]   - Booting up control plane ...
	I0731 18:12:16.691967   73696 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0731 18:12:16.692064   73696 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0731 18:12:16.692643   73696 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0731 18:12:16.713038   73696 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0731 18:12:16.713123   73696 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0731 18:12:16.713159   73696 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0731 18:12:16.855506   73696 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0731 18:12:16.855638   73696 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0731 18:12:17.856697   73696 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.001297342s
	I0731 18:12:17.856823   73696 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0731 18:12:17.144295   73479 pod_ready.go:102] pod "metrics-server-78fcd8795b-27pkr" in "kube-system" namespace has status "Ready":"False"
	I0731 18:12:19.644100   73479 pod_ready.go:102] pod "metrics-server-78fcd8795b-27pkr" in "kube-system" namespace has status "Ready":"False"
	I0731 18:12:21.644654   73479 pod_ready.go:102] pod "metrics-server-78fcd8795b-27pkr" in "kube-system" namespace has status "Ready":"False"
	I0731 18:12:22.358287   73696 kubeadm.go:310] [api-check] The API server is healthy after 4.501118217s
	I0731 18:12:22.370066   73696 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0731 18:12:22.382929   73696 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0731 18:12:22.402765   73696 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0731 18:12:22.403044   73696 kubeadm.go:310] [mark-control-plane] Marking the node default-k8s-diff-port-094310 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0731 18:12:22.419724   73696 kubeadm.go:310] [bootstrap-token] Using token: hduea8.ix2m91ewiu6okgi9
	I0731 18:12:22.421231   73696 out.go:204]   - Configuring RBAC rules ...
	I0731 18:12:22.421382   73696 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0731 18:12:22.426230   73696 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0731 18:12:22.434423   73696 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0731 18:12:22.437839   73696 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0731 18:12:22.449264   73696 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0731 18:12:22.452420   73696 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0731 18:12:22.764876   73696 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0731 18:12:23.216229   73696 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0731 18:12:23.765173   73696 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0731 18:12:23.766223   73696 kubeadm.go:310] 
	I0731 18:12:23.766311   73696 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0731 18:12:23.766356   73696 kubeadm.go:310] 
	I0731 18:12:23.766466   73696 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0731 18:12:23.766487   73696 kubeadm.go:310] 
	I0731 18:12:23.766521   73696 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0731 18:12:23.766641   73696 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0731 18:12:23.766726   73696 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0731 18:12:23.766741   73696 kubeadm.go:310] 
	I0731 18:12:23.766827   73696 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0731 18:12:23.766844   73696 kubeadm.go:310] 
	I0731 18:12:23.766899   73696 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0731 18:12:23.766910   73696 kubeadm.go:310] 
	I0731 18:12:23.766986   73696 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0731 18:12:23.767089   73696 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0731 18:12:23.767225   73696 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0731 18:12:23.767237   73696 kubeadm.go:310] 
	I0731 18:12:23.767310   73696 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0731 18:12:23.767401   73696 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0731 18:12:23.767411   73696 kubeadm.go:310] 
	I0731 18:12:23.767531   73696 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8444 --token hduea8.ix2m91ewiu6okgi9 \
	I0731 18:12:23.767662   73696 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:460b24b51d10a3760f2391542a8a89e401359854f437f922cbee3f2d8ed6c856 \
	I0731 18:12:23.767695   73696 kubeadm.go:310] 	--control-plane 
	I0731 18:12:23.767702   73696 kubeadm.go:310] 
	I0731 18:12:23.767773   73696 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0731 18:12:23.767782   73696 kubeadm.go:310] 
	I0731 18:12:23.767847   73696 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8444 --token hduea8.ix2m91ewiu6okgi9 \
	I0731 18:12:23.767930   73696 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:460b24b51d10a3760f2391542a8a89e401359854f437f922cbee3f2d8ed6c856 
	I0731 18:12:23.768912   73696 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0731 18:12:23.769058   73696 cni.go:84] Creating CNI manager for ""
	I0731 18:12:23.769073   73696 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0731 18:12:23.771596   73696 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0731 18:12:23.773122   73696 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0731 18:12:23.782944   73696 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
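The 496-byte payload copied to /etc/cni/net.d/1-k8s.conflist is not reproduced in the log; the sketch below is a generic bridge-plus-portmap conflist of the kind used for the bridge CNI, purely for illustration (field values and the subnet are assumptions, not the actual file contents):

# Illustrative only: a typical bridge CNI config; the real file written by
# minikube may differ in fields and values.
sudo tee /etc/cni/net.d/1-k8s.conflist >/dev/null <<'EOF'
{
  "cniVersion": "0.3.1",
  "name": "bridge",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "bridge",
      "isDefaultGateway": true,
      "ipMasq": true,
      "hairpinMode": true,
      "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
    },
    { "type": "portmap", "capabilities": { "portMappings": true } }
  ]
}
EOF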
	I0731 18:12:23.800254   73696 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0731 18:12:23.800383   73696 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 18:12:23.800398   73696 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes default-k8s-diff-port-094310 minikube.k8s.io/updated_at=2024_07_31T18_12_23_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=1d737dad7efa60c56d30434fcd857dd3b14c91d9 minikube.k8s.io/name=default-k8s-diff-port-094310 minikube.k8s.io/primary=true
	I0731 18:12:23.827190   73696 ops.go:34] apiserver oom_adj: -16
	I0731 18:12:23.990425   73696 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 18:12:24.490585   73696 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 18:12:24.991490   73696 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 18:12:25.490948   73696 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 18:12:25.991461   73696 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 18:12:23.645259   73479 pod_ready.go:102] pod "metrics-server-78fcd8795b-27pkr" in "kube-system" namespace has status "Ready":"False"
	I0731 18:12:26.144352   73479 pod_ready.go:102] pod "metrics-server-78fcd8795b-27pkr" in "kube-system" namespace has status "Ready":"False"
	I0731 18:12:26.491041   73696 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 18:12:26.990516   73696 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 18:12:27.491386   73696 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 18:12:27.991150   73696 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 18:12:28.490838   73696 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 18:12:28.991267   73696 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 18:12:29.490459   73696 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 18:12:29.990672   73696 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 18:12:30.491302   73696 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 18:12:30.990644   73696 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 18:12:28.644749   73479 pod_ready.go:102] pod "metrics-server-78fcd8795b-27pkr" in "kube-system" namespace has status "Ready":"False"
	I0731 18:12:31.143617   73479 pod_ready.go:102] pod "metrics-server-78fcd8795b-27pkr" in "kube-system" namespace has status "Ready":"False"
	I0731 18:12:32.532203   73800 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (31.265875459s)
	I0731 18:12:32.532286   73800 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0731 18:12:32.548139   73800 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0731 18:12:32.558049   73800 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0731 18:12:32.567036   73800 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0731 18:12:32.567060   73800 kubeadm.go:157] found existing configuration files:
	
	I0731 18:12:32.567133   73800 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0731 18:12:32.576069   73800 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0731 18:12:32.576124   73800 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0731 18:12:32.584762   73800 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0731 18:12:32.592927   73800 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0731 18:12:32.592980   73800 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0731 18:12:32.601309   73800 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0731 18:12:32.609478   73800 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0731 18:12:32.609525   73800 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0731 18:12:32.617980   73800 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0731 18:12:32.625943   73800 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0731 18:12:32.625978   73800 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0731 18:12:32.634091   73800 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0731 18:12:32.821569   73800 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0731 18:12:31.491226   73696 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 18:12:31.991099   73696 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 18:12:32.490751   73696 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 18:12:32.991252   73696 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 18:12:33.490564   73696 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 18:12:33.990977   73696 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 18:12:34.491037   73696 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 18:12:34.990696   73696 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 18:12:35.491381   73696 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 18:12:35.990793   73696 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 18:12:36.490926   73696 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 18:12:36.581312   73696 kubeadm.go:1113] duration metric: took 12.780981821s to wait for elevateKubeSystemPrivileges
	I0731 18:12:36.581370   73696 kubeadm.go:394] duration metric: took 5m8.741923744s to StartCluster
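The repeated "kubectl get sa default" calls between 18:12:23 and 18:12:36 are a readiness poll: the step retries until the "default" ServiceAccount exists, which is what the 12.78s elevateKubeSystemPrivileges metric above measures. A hand-rolled equivalent of that wait, assuming the same binary and kubeconfig paths:

# Poll until the default ServiceAccount exists (the same check retried above).
until sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default \
      --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
  sleep 0.5
done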
	I0731 18:12:36.581393   73696 settings.go:142] acquiring lock: {Name:mk8ac8269296bf0b887a00ff74caaad180005f89 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 18:12:36.581485   73696 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19349-8084/kubeconfig
	I0731 18:12:36.583690   73696 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19349-8084/kubeconfig: {Name:mkf2340bc1ada3c683f132aca37228d7588e1fdb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 18:12:36.583986   73696 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.72.197 Port:8444 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0731 18:12:36.585079   73696 config.go:182] Loaded profile config "default-k8s-diff-port-094310": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0731 18:12:36.585328   73696 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0731 18:12:36.585677   73696 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-094310"
	I0731 18:12:36.585686   73696 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-094310"
	I0731 18:12:36.585688   73696 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-094310"
	I0731 18:12:36.585705   73696 addons.go:234] Setting addon storage-provisioner=true in "default-k8s-diff-port-094310"
	W0731 18:12:36.585717   73696 addons.go:243] addon storage-provisioner should already be in state true
	I0731 18:12:36.585720   73696 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-094310"
	I0731 18:12:36.585732   73696 addons.go:234] Setting addon metrics-server=true in "default-k8s-diff-port-094310"
	W0731 18:12:36.585740   73696 addons.go:243] addon metrics-server should already be in state true
	I0731 18:12:36.585752   73696 host.go:66] Checking if "default-k8s-diff-port-094310" exists ...
	I0731 18:12:36.585766   73696 host.go:66] Checking if "default-k8s-diff-port-094310" exists ...
	I0731 18:12:36.586152   73696 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 18:12:36.586166   73696 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 18:12:36.586166   73696 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 18:12:36.586174   73696 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 18:12:36.586180   73696 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 18:12:36.586188   73696 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 18:12:36.586456   73696 out.go:177] * Verifying Kubernetes components...
	I0731 18:12:36.588174   73696 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0731 18:12:36.605611   73696 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38873
	I0731 18:12:36.605856   73696 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44661
	I0731 18:12:36.606122   73696 main.go:141] libmachine: () Calling .GetVersion
	I0731 18:12:36.606710   73696 main.go:141] libmachine: Using API Version  1
	I0731 18:12:36.606731   73696 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 18:12:36.606809   73696 main.go:141] libmachine: () Calling .GetVersion
	I0731 18:12:36.607072   73696 main.go:141] libmachine: () Calling .GetMachineName
	I0731 18:12:36.607240   73696 main.go:141] libmachine: Using API Version  1
	I0731 18:12:36.607262   73696 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 18:12:36.607789   73696 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 18:12:36.607817   73696 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 18:12:36.608000   73696 main.go:141] libmachine: () Calling .GetMachineName
	I0731 18:12:36.608173   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) Calling .GetState
	I0731 18:12:36.609009   73696 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39155
	I0731 18:12:36.609469   73696 main.go:141] libmachine: () Calling .GetVersion
	I0731 18:12:36.609954   73696 main.go:141] libmachine: Using API Version  1
	I0731 18:12:36.609973   73696 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 18:12:36.610333   73696 main.go:141] libmachine: () Calling .GetMachineName
	I0731 18:12:36.610936   73696 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 18:12:36.610998   73696 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 18:12:36.612199   73696 addons.go:234] Setting addon default-storageclass=true in "default-k8s-diff-port-094310"
	W0731 18:12:36.612224   73696 addons.go:243] addon default-storageclass should already be in state true
	I0731 18:12:36.612254   73696 host.go:66] Checking if "default-k8s-diff-port-094310" exists ...
	I0731 18:12:36.612624   73696 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 18:12:36.612659   73696 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 18:12:36.626474   73696 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38009
	I0731 18:12:36.626981   73696 main.go:141] libmachine: () Calling .GetVersion
	I0731 18:12:36.627514   73696 main.go:141] libmachine: Using API Version  1
	I0731 18:12:36.627534   73696 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 18:12:36.627836   73696 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43349
	I0731 18:12:36.628007   73696 main.go:141] libmachine: () Calling .GetMachineName
	I0731 18:12:36.628336   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) Calling .GetState
	I0731 18:12:36.628415   73696 main.go:141] libmachine: () Calling .GetVersion
	I0731 18:12:36.628816   73696 main.go:141] libmachine: Using API Version  1
	I0731 18:12:36.628831   73696 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 18:12:36.629237   73696 main.go:141] libmachine: () Calling .GetMachineName
	I0731 18:12:36.629450   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) Calling .GetState
	I0731 18:12:36.630518   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) Calling .DriverName
	I0731 18:12:36.631198   73696 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43471
	I0731 18:12:36.631550   73696 main.go:141] libmachine: () Calling .GetVersion
	I0731 18:12:36.632064   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) Calling .DriverName
	I0731 18:12:36.632200   73696 main.go:141] libmachine: Using API Version  1
	I0731 18:12:36.632217   73696 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 18:12:36.632576   73696 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0731 18:12:36.632739   73696 main.go:141] libmachine: () Calling .GetMachineName
	I0731 18:12:36.633275   73696 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 18:12:36.633313   73696 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 18:12:36.633711   73696 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0731 18:12:33.642776   73479 pod_ready.go:102] pod "metrics-server-78fcd8795b-27pkr" in "kube-system" namespace has status "Ready":"False"
	I0731 18:12:35.643640   73479 pod_ready.go:102] pod "metrics-server-78fcd8795b-27pkr" in "kube-system" namespace has status "Ready":"False"
	I0731 18:12:36.633805   73696 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0731 18:12:36.633820   73696 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0731 18:12:36.633840   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) Calling .GetSSHHostname
	I0731 18:12:36.634990   73696 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0731 18:12:36.635005   73696 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0731 18:12:36.635022   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) Calling .GetSSHHostname
	I0731 18:12:36.637135   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) DBG | domain default-k8s-diff-port-094310 has defined MAC address 52:54:00:a9:b2:ae in network mk-default-k8s-diff-port-094310
	I0731 18:12:36.637767   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a9:b2:ae", ip: ""} in network mk-default-k8s-diff-port-094310: {Iface:virbr1 ExpiryTime:2024-07-31 19:07:14 +0000 UTC Type:0 Mac:52:54:00:a9:b2:ae Iaid: IPaddr:192.168.72.197 Prefix:24 Hostname:default-k8s-diff-port-094310 Clientid:01:52:54:00:a9:b2:ae}
	I0731 18:12:36.637792   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) DBG | domain default-k8s-diff-port-094310 has defined IP address 192.168.72.197 and MAC address 52:54:00:a9:b2:ae in network mk-default-k8s-diff-port-094310
	I0731 18:12:36.637891   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) Calling .GetSSHPort
	I0731 18:12:36.639047   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) DBG | domain default-k8s-diff-port-094310 has defined MAC address 52:54:00:a9:b2:ae in network mk-default-k8s-diff-port-094310
	I0731 18:12:36.639583   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a9:b2:ae", ip: ""} in network mk-default-k8s-diff-port-094310: {Iface:virbr1 ExpiryTime:2024-07-31 19:07:14 +0000 UTC Type:0 Mac:52:54:00:a9:b2:ae Iaid: IPaddr:192.168.72.197 Prefix:24 Hostname:default-k8s-diff-port-094310 Clientid:01:52:54:00:a9:b2:ae}
	I0731 18:12:36.639617   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) DBG | domain default-k8s-diff-port-094310 has defined IP address 192.168.72.197 and MAC address 52:54:00:a9:b2:ae in network mk-default-k8s-diff-port-094310
	I0731 18:12:36.639885   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) Calling .GetSSHPort
	I0731 18:12:36.640106   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) Calling .GetSSHKeyPath
	I0731 18:12:36.640235   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) Calling .GetSSHUsername
	I0731 18:12:36.640419   73696 sshutil.go:53] new ssh client: &{IP:192.168.72.197 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19349-8084/.minikube/machines/default-k8s-diff-port-094310/id_rsa Username:docker}
	I0731 18:12:36.641860   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) Calling .GetSSHKeyPath
	I0731 18:12:36.642037   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) Calling .GetSSHUsername
	I0731 18:12:36.642205   73696 sshutil.go:53] new ssh client: &{IP:192.168.72.197 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19349-8084/.minikube/machines/default-k8s-diff-port-094310/id_rsa Username:docker}
	I0731 18:12:36.659960   73696 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36389
	I0731 18:12:36.660280   73696 main.go:141] libmachine: () Calling .GetVersion
	I0731 18:12:36.660692   73696 main.go:141] libmachine: Using API Version  1
	I0731 18:12:36.660713   73696 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 18:12:36.660986   73696 main.go:141] libmachine: () Calling .GetMachineName
	I0731 18:12:36.661150   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) Calling .GetState
	I0731 18:12:36.663024   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) Calling .DriverName
	I0731 18:12:36.663232   73696 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0731 18:12:36.663245   73696 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0731 18:12:36.663264   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) Calling .GetSSHHostname
	I0731 18:12:36.666016   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) DBG | domain default-k8s-diff-port-094310 has defined MAC address 52:54:00:a9:b2:ae in network mk-default-k8s-diff-port-094310
	I0731 18:12:36.666393   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a9:b2:ae", ip: ""} in network mk-default-k8s-diff-port-094310: {Iface:virbr1 ExpiryTime:2024-07-31 19:07:14 +0000 UTC Type:0 Mac:52:54:00:a9:b2:ae Iaid: IPaddr:192.168.72.197 Prefix:24 Hostname:default-k8s-diff-port-094310 Clientid:01:52:54:00:a9:b2:ae}
	I0731 18:12:36.666472   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) DBG | domain default-k8s-diff-port-094310 has defined IP address 192.168.72.197 and MAC address 52:54:00:a9:b2:ae in network mk-default-k8s-diff-port-094310
	I0731 18:12:36.666562   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) Calling .GetSSHPort
	I0731 18:12:36.666730   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) Calling .GetSSHKeyPath
	I0731 18:12:36.666832   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) Calling .GetSSHUsername
	I0731 18:12:36.666935   73696 sshutil.go:53] new ssh client: &{IP:192.168.72.197 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19349-8084/.minikube/machines/default-k8s-diff-port-094310/id_rsa Username:docker}
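For manual debugging, the SSH parameters recorded above (IP 192.168.72.197, user docker, and the profile's generated key) are enough to reproduce the same session by hand; the commands below are illustrative and were not run by the test:

	# direct SSH using the key path and address taken from the log
	ssh -i /home/jenkins/minikube-integration/19349-8084/.minikube/machines/default-k8s-diff-port-094310/id_rsa docker@192.168.72.197
	# or via the minikube binary, assuming the same profile name
	out/minikube-linux-amd64 -p default-k8s-diff-port-094310 ssh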
	I0731 18:12:36.813977   73696 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0731 18:12:36.832201   73696 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-094310" to be "Ready" ...
	I0731 18:12:36.849864   73696 node_ready.go:49] node "default-k8s-diff-port-094310" has status "Ready":"True"
	I0731 18:12:36.849891   73696 node_ready.go:38] duration metric: took 17.657098ms for node "default-k8s-diff-port-094310" to be "Ready" ...
	I0731 18:12:36.849903   73696 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0731 18:12:36.860981   73696 pod_ready.go:78] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-094310" in "kube-system" namespace to be "Ready" ...
	I0731 18:12:36.865178   73696 pod_ready.go:92] pod "etcd-default-k8s-diff-port-094310" in "kube-system" namespace has status "Ready":"True"
	I0731 18:12:36.865198   73696 pod_ready.go:81] duration metric: took 4.190559ms for pod "etcd-default-k8s-diff-port-094310" in "kube-system" namespace to be "Ready" ...
	I0731 18:12:36.865209   73696 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-094310" in "kube-system" namespace to be "Ready" ...
	I0731 18:12:36.869977   73696 pod_ready.go:92] pod "kube-apiserver-default-k8s-diff-port-094310" in "kube-system" namespace has status "Ready":"True"
	I0731 18:12:36.869998   73696 pod_ready.go:81] duration metric: took 4.780295ms for pod "kube-apiserver-default-k8s-diff-port-094310" in "kube-system" namespace to be "Ready" ...
	I0731 18:12:36.870008   73696 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-094310" in "kube-system" namespace to be "Ready" ...
	I0731 18:12:36.874051   73696 pod_ready.go:92] pod "kube-controller-manager-default-k8s-diff-port-094310" in "kube-system" namespace has status "Ready":"True"
	I0731 18:12:36.874069   73696 pod_ready.go:81] duration metric: took 4.053362ms for pod "kube-controller-manager-default-k8s-diff-port-094310" in "kube-system" namespace to be "Ready" ...
	I0731 18:12:36.874079   73696 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-094310" in "kube-system" namespace to be "Ready" ...
	I0731 18:12:36.878519   73696 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-094310" in "kube-system" namespace has status "Ready":"True"
	I0731 18:12:36.878536   73696 pod_ready.go:81] duration metric: took 4.448692ms for pod "kube-scheduler-default-k8s-diff-port-094310" in "kube-system" namespace to be "Ready" ...
	I0731 18:12:36.878544   73696 pod_ready.go:38] duration metric: took 28.628924ms for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
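The readiness gate above polls each system-critical pod's Ready condition using the label selectors listed in the log; a rough manual equivalent (illustrative only, selectors copied from the log, timeout matching the 6m wait) would be:

	kubectl -n kube-system wait --for=condition=Ready pod -l component=etcd --timeout=6m
	kubectl -n kube-system wait --for=condition=Ready pod -l component=kube-apiserver --timeout=6m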
	I0731 18:12:36.878564   73696 api_server.go:52] waiting for apiserver process to appear ...
	I0731 18:12:36.878622   73696 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:12:36.892011   73696 api_server.go:72] duration metric: took 307.983877ms to wait for apiserver process to appear ...
	I0731 18:12:36.892031   73696 api_server.go:88] waiting for apiserver healthz status ...
	I0731 18:12:36.892049   73696 api_server.go:253] Checking apiserver healthz at https://192.168.72.197:8444/healthz ...
	I0731 18:12:36.895929   73696 api_server.go:279] https://192.168.72.197:8444/healthz returned 200:
	ok
	I0731 18:12:36.896760   73696 api_server.go:141] control plane version: v1.30.3
	I0731 18:12:36.896780   73696 api_server.go:131] duration metric: took 4.741896ms to wait for apiserver health ...
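The healthz wait is a plain HTTPS GET against the apiserver on this profile's non-default port 8444; it can be reproduced by hand with (illustrative, -k skips certificate verification):

	curl -k https://192.168.72.197:8444/healthz
	# expected body on success: ok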
	I0731 18:12:36.896789   73696 system_pods.go:43] waiting for kube-system pods to appear ...
	I0731 18:12:36.974073   73696 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0731 18:12:36.974092   73696 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0731 18:12:37.010218   73696 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0731 18:12:37.018536   73696 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0731 18:12:37.039734   73696 system_pods.go:59] 5 kube-system pods found
	I0731 18:12:37.039767   73696 system_pods.go:61] "etcd-default-k8s-diff-port-094310" [f73c4fb4-b160-4e79-a0a4-8fba255cfb8c] Running
	I0731 18:12:37.039773   73696 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-094310" [820215b3-0eaa-46ca-bf0b-4fa73942160a] Running
	I0731 18:12:37.039778   73696 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-094310" [5ca0ed32-5439-49b5-9a85-1a4b1628371d] Running
	I0731 18:12:37.039787   73696 system_pods.go:61] "kube-proxy-4vvjq" [a8dd9db6-5f45-4c8d-a9d3-03ede1db07eb] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0731 18:12:37.039792   73696 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-094310" [11590718-4e74-46eb-be90-6405c3f1759f] Running
	I0731 18:12:37.039802   73696 system_pods.go:74] duration metric: took 143.007992ms to wait for pod list to return data ...
	I0731 18:12:37.039812   73696 default_sa.go:34] waiting for default service account to be created ...
	I0731 18:12:37.041650   73696 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0731 18:12:37.041672   73696 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0731 18:12:37.096891   73696 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0731 18:12:37.096920   73696 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0731 18:12:37.159438   73696 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0731 18:12:37.235560   73696 default_sa.go:45] found service account: "default"
	I0731 18:12:37.235599   73696 default_sa.go:55] duration metric: took 195.778976ms for default service account to be created ...
	I0731 18:12:37.235612   73696 system_pods.go:116] waiting for k8s-apps to be running ...
	I0731 18:12:37.439935   73696 system_pods.go:86] 7 kube-system pods found
	I0731 18:12:37.439966   73696 system_pods.go:89] "coredns-7db6d8ff4d-2r7zb" [e7acc926-db53-4c1c-a62f-45e303c69fc7] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0731 18:12:37.439975   73696 system_pods.go:89] "coredns-7db6d8ff4d-756jj" [c91e86b1-8348-4dc7-9aa1-6909055c2dde] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0731 18:12:37.439982   73696 system_pods.go:89] "etcd-default-k8s-diff-port-094310" [f73c4fb4-b160-4e79-a0a4-8fba255cfb8c] Running
	I0731 18:12:37.439988   73696 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-094310" [820215b3-0eaa-46ca-bf0b-4fa73942160a] Running
	I0731 18:12:37.439993   73696 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-094310" [5ca0ed32-5439-49b5-9a85-1a4b1628371d] Running
	I0731 18:12:37.439998   73696 system_pods.go:89] "kube-proxy-4vvjq" [a8dd9db6-5f45-4c8d-a9d3-03ede1db07eb] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0731 18:12:37.440003   73696 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-094310" [11590718-4e74-46eb-be90-6405c3f1759f] Running
	I0731 18:12:37.440020   73696 retry.go:31] will retry after 230.300903ms: missing components: kube-dns, kube-proxy
	I0731 18:12:37.676385   73696 system_pods.go:86] 7 kube-system pods found
	I0731 18:12:37.676411   73696 system_pods.go:89] "coredns-7db6d8ff4d-2r7zb" [e7acc926-db53-4c1c-a62f-45e303c69fc7] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0731 18:12:37.676421   73696 system_pods.go:89] "coredns-7db6d8ff4d-756jj" [c91e86b1-8348-4dc7-9aa1-6909055c2dde] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0731 18:12:37.676429   73696 system_pods.go:89] "etcd-default-k8s-diff-port-094310" [f73c4fb4-b160-4e79-a0a4-8fba255cfb8c] Running
	I0731 18:12:37.676436   73696 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-094310" [820215b3-0eaa-46ca-bf0b-4fa73942160a] Running
	I0731 18:12:37.676442   73696 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-094310" [5ca0ed32-5439-49b5-9a85-1a4b1628371d] Running
	I0731 18:12:37.676451   73696 system_pods.go:89] "kube-proxy-4vvjq" [a8dd9db6-5f45-4c8d-a9d3-03ede1db07eb] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0731 18:12:37.676456   73696 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-094310" [11590718-4e74-46eb-be90-6405c3f1759f] Running
	I0731 18:12:37.676475   73696 retry.go:31] will retry after 311.28179ms: missing components: kube-dns, kube-proxy
	I0731 18:12:37.813837   73696 main.go:141] libmachine: Making call to close driver server
	I0731 18:12:37.813870   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) Calling .Close
	I0731 18:12:37.814017   73696 main.go:141] libmachine: Making call to close driver server
	I0731 18:12:37.814039   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) Calling .Close
	I0731 18:12:37.814265   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) DBG | Closing plugin on server side
	I0731 18:12:37.814316   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) DBG | Closing plugin on server side
	I0731 18:12:37.814363   73696 main.go:141] libmachine: Successfully made call to close driver server
	I0731 18:12:37.814376   73696 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 18:12:37.814391   73696 main.go:141] libmachine: Making call to close driver server
	I0731 18:12:37.814402   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) Calling .Close
	I0731 18:12:37.814531   73696 main.go:141] libmachine: Successfully made call to close driver server
	I0731 18:12:37.814556   73696 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 18:12:37.814598   73696 main.go:141] libmachine: Making call to close driver server
	I0731 18:12:37.814608   73696 main.go:141] libmachine: Successfully made call to close driver server
	I0731 18:12:37.814631   73696 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 18:12:37.814622   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) Calling .Close
	I0731 18:12:37.816102   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) DBG | Closing plugin on server side
	I0731 18:12:37.816268   73696 main.go:141] libmachine: Successfully made call to close driver server
	I0731 18:12:37.816280   73696 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 18:12:37.830991   73696 main.go:141] libmachine: Making call to close driver server
	I0731 18:12:37.831018   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) Calling .Close
	I0731 18:12:37.831354   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) DBG | Closing plugin on server side
	I0731 18:12:37.831354   73696 main.go:141] libmachine: Successfully made call to close driver server
	I0731 18:12:37.831380   73696 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 18:12:37.995206   73696 system_pods.go:86] 8 kube-system pods found
	I0731 18:12:37.995248   73696 system_pods.go:89] "coredns-7db6d8ff4d-2r7zb" [e7acc926-db53-4c1c-a62f-45e303c69fc7] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0731 18:12:37.995262   73696 system_pods.go:89] "coredns-7db6d8ff4d-756jj" [c91e86b1-8348-4dc7-9aa1-6909055c2dde] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0731 18:12:37.995272   73696 system_pods.go:89] "etcd-default-k8s-diff-port-094310" [f73c4fb4-b160-4e79-a0a4-8fba255cfb8c] Running
	I0731 18:12:37.995295   73696 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-094310" [820215b3-0eaa-46ca-bf0b-4fa73942160a] Running
	I0731 18:12:37.995310   73696 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-094310" [5ca0ed32-5439-49b5-9a85-1a4b1628371d] Running
	I0731 18:12:37.995322   73696 system_pods.go:89] "kube-proxy-4vvjq" [a8dd9db6-5f45-4c8d-a9d3-03ede1db07eb] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0731 18:12:37.995332   73696 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-094310" [11590718-4e74-46eb-be90-6405c3f1759f] Running
	I0731 18:12:37.995345   73696 system_pods.go:89] "storage-provisioner" [2e4c5a96-4dfc-4af6-8d3e-bab865644328] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0731 18:12:37.995370   73696 retry.go:31] will retry after 381.430275ms: missing components: kube-dns, kube-proxy
	I0731 18:12:38.392678   73696 system_pods.go:86] 9 kube-system pods found
	I0731 18:12:38.392719   73696 system_pods.go:89] "coredns-7db6d8ff4d-2r7zb" [e7acc926-db53-4c1c-a62f-45e303c69fc7] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0731 18:12:38.392732   73696 system_pods.go:89] "coredns-7db6d8ff4d-756jj" [c91e86b1-8348-4dc7-9aa1-6909055c2dde] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0731 18:12:38.392742   73696 system_pods.go:89] "etcd-default-k8s-diff-port-094310" [f73c4fb4-b160-4e79-a0a4-8fba255cfb8c] Running
	I0731 18:12:38.392751   73696 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-094310" [820215b3-0eaa-46ca-bf0b-4fa73942160a] Running
	I0731 18:12:38.392760   73696 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-094310" [5ca0ed32-5439-49b5-9a85-1a4b1628371d] Running
	I0731 18:12:38.392770   73696 system_pods.go:89] "kube-proxy-4vvjq" [a8dd9db6-5f45-4c8d-a9d3-03ede1db07eb] Running
	I0731 18:12:38.392778   73696 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-094310" [11590718-4e74-46eb-be90-6405c3f1759f] Running
	I0731 18:12:38.392787   73696 system_pods.go:89] "metrics-server-569cc877fc-mskwc" [c57990b4-1d91-4764-9c33-2fd5f7d2f83b] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0731 18:12:38.392802   73696 system_pods.go:89] "storage-provisioner" [2e4c5a96-4dfc-4af6-8d3e-bab865644328] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0731 18:12:38.392823   73696 retry.go:31] will retry after 567.905994ms: missing components: kube-dns
	I0731 18:12:38.501117   73696 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.341621275s)
	I0731 18:12:38.501181   73696 main.go:141] libmachine: Making call to close driver server
	I0731 18:12:38.501200   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) Calling .Close
	I0731 18:12:38.501595   73696 main.go:141] libmachine: Successfully made call to close driver server
	I0731 18:12:38.501615   73696 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 18:12:38.501625   73696 main.go:141] libmachine: Making call to close driver server
	I0731 18:12:38.501634   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) Calling .Close
	I0731 18:12:38.501593   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) DBG | Closing plugin on server side
	I0731 18:12:38.501907   73696 main.go:141] libmachine: Successfully made call to close driver server
	I0731 18:12:38.501953   73696 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 18:12:38.501966   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) DBG | Closing plugin on server side
	I0731 18:12:38.501975   73696 addons.go:475] Verifying addon metrics-server=true in "default-k8s-diff-port-094310"
	I0731 18:12:38.505204   73696 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0731 18:12:38.506517   73696 addons.go:510] duration metric: took 1.921658263s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
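With the metrics-server manifests applied and addon verification enabled, a later spot check from the host (assuming kubectl is pointed at this cluster; the pod is still Pending at this point in the log) would typically be:

	kubectl -n kube-system rollout status deployment/metrics-server --timeout=5m
	kubectl top nodes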
	I0731 18:12:38.967657   73696 system_pods.go:86] 9 kube-system pods found
	I0731 18:12:38.967691   73696 system_pods.go:89] "coredns-7db6d8ff4d-2r7zb" [e7acc926-db53-4c1c-a62f-45e303c69fc7] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0731 18:12:38.967700   73696 system_pods.go:89] "coredns-7db6d8ff4d-756jj" [c91e86b1-8348-4dc7-9aa1-6909055c2dde] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0731 18:12:38.967708   73696 system_pods.go:89] "etcd-default-k8s-diff-port-094310" [f73c4fb4-b160-4e79-a0a4-8fba255cfb8c] Running
	I0731 18:12:38.967716   73696 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-094310" [820215b3-0eaa-46ca-bf0b-4fa73942160a] Running
	I0731 18:12:38.967723   73696 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-094310" [5ca0ed32-5439-49b5-9a85-1a4b1628371d] Running
	I0731 18:12:38.967729   73696 system_pods.go:89] "kube-proxy-4vvjq" [a8dd9db6-5f45-4c8d-a9d3-03ede1db07eb] Running
	I0731 18:12:38.967736   73696 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-094310" [11590718-4e74-46eb-be90-6405c3f1759f] Running
	I0731 18:12:38.967746   73696 system_pods.go:89] "metrics-server-569cc877fc-mskwc" [c57990b4-1d91-4764-9c33-2fd5f7d2f83b] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0731 18:12:38.967759   73696 system_pods.go:89] "storage-provisioner" [2e4c5a96-4dfc-4af6-8d3e-bab865644328] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0731 18:12:38.967779   73696 retry.go:31] will retry after 488.293971ms: missing components: kube-dns
	I0731 18:12:39.464918   73696 system_pods.go:86] 9 kube-system pods found
	I0731 18:12:39.464956   73696 system_pods.go:89] "coredns-7db6d8ff4d-2r7zb" [e7acc926-db53-4c1c-a62f-45e303c69fc7] Running
	I0731 18:12:39.464965   73696 system_pods.go:89] "coredns-7db6d8ff4d-756jj" [c91e86b1-8348-4dc7-9aa1-6909055c2dde] Running
	I0731 18:12:39.464972   73696 system_pods.go:89] "etcd-default-k8s-diff-port-094310" [f73c4fb4-b160-4e79-a0a4-8fba255cfb8c] Running
	I0731 18:12:39.464978   73696 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-094310" [820215b3-0eaa-46ca-bf0b-4fa73942160a] Running
	I0731 18:12:39.464986   73696 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-094310" [5ca0ed32-5439-49b5-9a85-1a4b1628371d] Running
	I0731 18:12:39.464992   73696 system_pods.go:89] "kube-proxy-4vvjq" [a8dd9db6-5f45-4c8d-a9d3-03ede1db07eb] Running
	I0731 18:12:39.464999   73696 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-094310" [11590718-4e74-46eb-be90-6405c3f1759f] Running
	I0731 18:12:39.465017   73696 system_pods.go:89] "metrics-server-569cc877fc-mskwc" [c57990b4-1d91-4764-9c33-2fd5f7d2f83b] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0731 18:12:39.465028   73696 system_pods.go:89] "storage-provisioner" [2e4c5a96-4dfc-4af6-8d3e-bab865644328] Running
	I0731 18:12:39.465041   73696 system_pods.go:126] duration metric: took 2.229422302s to wait for k8s-apps to be running ...
	I0731 18:12:39.465053   73696 system_svc.go:44] waiting for kubelet service to be running ....
	I0731 18:12:39.465111   73696 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0731 18:12:39.482063   73696 system_svc.go:56] duration metric: took 16.998965ms WaitForService to wait for kubelet
	I0731 18:12:39.482092   73696 kubeadm.go:582] duration metric: took 2.898066741s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0731 18:12:39.482138   73696 node_conditions.go:102] verifying NodePressure condition ...
	I0731 18:12:39.486728   73696 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0731 18:12:39.486752   73696 node_conditions.go:123] node cpu capacity is 2
	I0731 18:12:39.486764   73696 node_conditions.go:105] duration metric: took 4.617934ms to run NodePressure ...
	I0731 18:12:39.486777   73696 start.go:241] waiting for startup goroutines ...
	I0731 18:12:39.486787   73696 start.go:246] waiting for cluster config update ...
	I0731 18:12:39.486798   73696 start.go:255] writing updated cluster config ...
	I0731 18:12:39.487565   73696 ssh_runner.go:195] Run: rm -f paused
	I0731 18:12:39.539591   73696 start.go:600] kubectl: 1.30.3, cluster: 1.30.3 (minor skew: 0)
	I0731 18:12:39.541533   73696 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-094310" cluster and "default" namespace by default
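The "Done!" line means the kubeconfig now has this profile's context selected, so a quick sanity check against the finished cluster (illustrative, not part of the test run) is simply:

	kubectl config current-context   # should print default-k8s-diff-port-094310
	kubectl get nodes -o wide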
	I0731 18:12:37.644379   73479 pod_ready.go:102] pod "metrics-server-78fcd8795b-27pkr" in "kube-system" namespace has status "Ready":"False"
	I0731 18:12:39.645608   73479 pod_ready.go:102] pod "metrics-server-78fcd8795b-27pkr" in "kube-system" namespace has status "Ready":"False"
	I0731 18:12:41.969949   73800 kubeadm.go:310] [init] Using Kubernetes version: v1.30.3
	I0731 18:12:41.970018   73800 kubeadm.go:310] [preflight] Running pre-flight checks
	I0731 18:12:41.970137   73800 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0731 18:12:41.970234   73800 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0731 18:12:41.970386   73800 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0731 18:12:41.970495   73800 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0731 18:12:41.972177   73800 out.go:204]   - Generating certificates and keys ...
	I0731 18:12:41.972244   73800 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0731 18:12:41.972314   73800 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0731 18:12:41.972403   73800 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0731 18:12:41.972480   73800 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0731 18:12:41.972538   73800 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0731 18:12:41.972588   73800 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0731 18:12:41.972654   73800 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0731 18:12:41.972748   73800 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0731 18:12:41.972859   73800 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0731 18:12:41.972982   73800 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0731 18:12:41.973027   73800 kubeadm.go:310] [certs] Using the existing "sa" key
	I0731 18:12:41.973082   73800 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0731 18:12:41.973152   73800 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0731 18:12:41.973205   73800 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0731 18:12:41.973252   73800 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0731 18:12:41.973323   73800 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0731 18:12:41.973387   73800 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0731 18:12:41.973456   73800 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0731 18:12:41.973553   73800 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0731 18:12:41.974927   73800 out.go:204]   - Booting up control plane ...
	I0731 18:12:41.975019   73800 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0731 18:12:41.975128   73800 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0731 18:12:41.975215   73800 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0731 18:12:41.975342   73800 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0731 18:12:41.975425   73800 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0731 18:12:41.975474   73800 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0731 18:12:41.975635   73800 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0731 18:12:41.975710   73800 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0731 18:12:41.975766   73800 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.001397088s
	I0731 18:12:41.975824   73800 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0731 18:12:41.975909   73800 kubeadm.go:310] [api-check] The API server is healthy after 5.001258426s
	I0731 18:12:41.976064   73800 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0731 18:12:41.976241   73800 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0731 18:12:41.976355   73800 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0731 18:12:41.976528   73800 kubeadm.go:310] [mark-control-plane] Marking the node embed-certs-436067 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0731 18:12:41.976605   73800 kubeadm.go:310] [bootstrap-token] Using token: m9csv8.j58cj919sgzkgy1k
	I0731 18:12:41.978880   73800 out.go:204]   - Configuring RBAC rules ...
	I0731 18:12:41.978976   73800 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0731 18:12:41.979087   73800 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0731 18:12:41.979277   73800 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0731 18:12:41.979441   73800 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0731 18:12:41.979622   73800 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0731 18:12:41.979708   73800 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0731 18:12:41.979835   73800 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0731 18:12:41.979875   73800 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0731 18:12:41.979918   73800 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0731 18:12:41.979924   73800 kubeadm.go:310] 
	I0731 18:12:41.979971   73800 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0731 18:12:41.979979   73800 kubeadm.go:310] 
	I0731 18:12:41.980058   73800 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0731 18:12:41.980067   73800 kubeadm.go:310] 
	I0731 18:12:41.980098   73800 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0731 18:12:41.980160   73800 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0731 18:12:41.980229   73800 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0731 18:12:41.980236   73800 kubeadm.go:310] 
	I0731 18:12:41.980300   73800 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0731 18:12:41.980311   73800 kubeadm.go:310] 
	I0731 18:12:41.980384   73800 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0731 18:12:41.980393   73800 kubeadm.go:310] 
	I0731 18:12:41.980446   73800 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0731 18:12:41.980548   73800 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0731 18:12:41.980644   73800 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0731 18:12:41.980653   73800 kubeadm.go:310] 
	I0731 18:12:41.980759   73800 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0731 18:12:41.980824   73800 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0731 18:12:41.980830   73800 kubeadm.go:310] 
	I0731 18:12:41.980896   73800 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token m9csv8.j58cj919sgzkgy1k \
	I0731 18:12:41.980984   73800 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:460b24b51d10a3760f2391542a8a89e401359854f437f922cbee3f2d8ed6c856 \
	I0731 18:12:41.981011   73800 kubeadm.go:310] 	--control-plane 
	I0731 18:12:41.981023   73800 kubeadm.go:310] 
	I0731 18:12:41.981093   73800 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0731 18:12:41.981099   73800 kubeadm.go:310] 
	I0731 18:12:41.981183   73800 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token m9csv8.j58cj919sgzkgy1k \
	I0731 18:12:41.981306   73800 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:460b24b51d10a3760f2391542a8a89e401359854f437f922cbee3f2d8ed6c856 
	I0731 18:12:41.981317   73800 cni.go:84] Creating CNI manager for ""
	I0731 18:12:41.981324   73800 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0731 18:12:41.982701   73800 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0731 18:12:41.983929   73800 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0731 18:12:41.995272   73800 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
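Only the size (496 bytes) of the written conflist is logged, not its contents; for the kvm2 + crio combination this is a standard bridge + host-local IPAM chain. A representative /etc/cni/net.d/1-k8s.conflist, with illustrative field values rather than the exact file from this run, looks like:

	# illustrative sketch only; the exact payload minikube scp'd is not shown in the log
	sudo tee /etc/cni/net.d/1-k8s.conflist >/dev/null <<'EOF'
	{
	  "cniVersion": "0.3.1",
	  "name": "bridge",
	  "plugins": [
	    {
	      "type": "bridge",
	      "bridge": "bridge",
	      "isDefaultGateway": true,
	      "ipMasq": true,
	      "hairpinMode": true,
	      "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
	    },
	    { "type": "portmap", "capabilities": { "portMappings": true } }
	  ]
	}
	EOF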
	I0731 18:12:42.014929   73800 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0731 18:12:42.014984   73800 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 18:12:42.015033   73800 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes embed-certs-436067 minikube.k8s.io/updated_at=2024_07_31T18_12_42_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=1d737dad7efa60c56d30434fcd857dd3b14c91d9 minikube.k8s.io/name=embed-certs-436067 minikube.k8s.io/primary=true
	I0731 18:12:42.164811   73800 ops.go:34] apiserver oom_adj: -16
	I0731 18:12:42.164934   73800 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 18:12:42.665108   73800 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 18:12:43.165818   73800 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 18:12:43.665733   73800 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 18:12:44.165074   73800 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 18:12:42.144896   73479 pod_ready.go:102] pod "metrics-server-78fcd8795b-27pkr" in "kube-system" namespace has status "Ready":"False"
	I0731 18:12:44.644077   73479 pod_ready.go:102] pod "metrics-server-78fcd8795b-27pkr" in "kube-system" namespace has status "Ready":"False"
	I0731 18:12:44.665477   73800 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 18:12:45.165127   73800 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 18:12:45.665440   73800 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 18:12:46.165555   73800 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 18:12:46.665998   73800 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 18:12:47.165829   73800 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 18:12:47.665704   73800 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 18:12:48.164973   73800 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 18:12:48.665549   73800 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 18:12:49.165210   73800 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 18:12:47.142947   73479 pod_ready.go:102] pod "metrics-server-78fcd8795b-27pkr" in "kube-system" namespace has status "Ready":"False"
	I0731 18:12:49.144015   73479 pod_ready.go:102] pod "metrics-server-78fcd8795b-27pkr" in "kube-system" namespace has status "Ready":"False"
	I0731 18:12:51.644495   73479 pod_ready.go:102] pod "metrics-server-78fcd8795b-27pkr" in "kube-system" namespace has status "Ready":"False"
	I0731 18:12:49.665500   73800 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 18:12:50.165567   73800 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 18:12:50.665547   73800 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 18:12:51.166002   73800 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 18:12:51.664981   73800 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 18:12:52.165135   73800 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 18:12:52.665927   73800 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 18:12:53.165045   73800 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 18:12:53.664981   73800 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 18:12:54.165715   73800 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 18:12:54.252373   73800 kubeadm.go:1113] duration metric: took 12.237438799s to wait for elevateKubeSystemPrivileges
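The repeated "kubectl get sa default" calls between 18:12:42 and 18:12:54 are the wait for the "default" ServiceAccount to exist before kube-system privileges are elevated via the minikube-rbac ClusterRoleBinding created earlier; the manual equivalent of the check being polled is simply:

	kubectl -n default get serviceaccount default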
	I0731 18:12:54.252415   73800 kubeadm.go:394] duration metric: took 5m6.689979758s to StartCluster
	I0731 18:12:54.252435   73800 settings.go:142] acquiring lock: {Name:mk8ac8269296bf0b887a00ff74caaad180005f89 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 18:12:54.252509   73800 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19349-8084/kubeconfig
	I0731 18:12:54.254175   73800 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19349-8084/kubeconfig: {Name:mkf2340bc1ada3c683f132aca37228d7588e1fdb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 18:12:54.254495   73800 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.50.86 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0731 18:12:54.254600   73800 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0731 18:12:54.254687   73800 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-436067"
	I0731 18:12:54.254721   73800 addons.go:234] Setting addon storage-provisioner=true in "embed-certs-436067"
	I0731 18:12:54.254724   73800 addons.go:69] Setting default-storageclass=true in profile "embed-certs-436067"
	W0731 18:12:54.254734   73800 addons.go:243] addon storage-provisioner should already be in state true
	I0731 18:12:54.254737   73800 config.go:182] Loaded profile config "embed-certs-436067": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0731 18:12:54.254743   73800 addons.go:69] Setting metrics-server=true in profile "embed-certs-436067"
	I0731 18:12:54.254760   73800 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-436067"
	I0731 18:12:54.254769   73800 host.go:66] Checking if "embed-certs-436067" exists ...
	I0731 18:12:54.254785   73800 addons.go:234] Setting addon metrics-server=true in "embed-certs-436067"
	W0731 18:12:54.254795   73800 addons.go:243] addon metrics-server should already be in state true
	I0731 18:12:54.254826   73800 host.go:66] Checking if "embed-certs-436067" exists ...
	I0731 18:12:54.255205   73800 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 18:12:54.255208   73800 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 18:12:54.255233   73800 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 18:12:54.255238   73800 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 18:12:54.255302   73800 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 18:12:54.255323   73800 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 18:12:54.256412   73800 out.go:177] * Verifying Kubernetes components...
	I0731 18:12:54.257653   73800 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0731 18:12:54.274456   73800 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34433
	I0731 18:12:54.274959   73800 main.go:141] libmachine: () Calling .GetVersion
	I0731 18:12:54.275532   73800 main.go:141] libmachine: Using API Version  1
	I0731 18:12:54.275554   73800 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 18:12:54.275828   73800 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44423
	I0731 18:12:54.275851   73800 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41513
	I0731 18:12:54.276001   73800 main.go:141] libmachine: () Calling .GetMachineName
	I0731 18:12:54.276152   73800 main.go:141] libmachine: () Calling .GetVersion
	I0731 18:12:54.276225   73800 main.go:141] libmachine: () Calling .GetVersion
	I0731 18:12:54.276498   73800 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 18:12:54.276534   73800 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 18:12:54.276592   73800 main.go:141] libmachine: Using API Version  1
	I0731 18:12:54.276606   73800 main.go:141] libmachine: Using API Version  1
	I0731 18:12:54.276613   73800 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 18:12:54.276616   73800 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 18:12:54.276954   73800 main.go:141] libmachine: () Calling .GetMachineName
	I0731 18:12:54.277055   73800 main.go:141] libmachine: () Calling .GetMachineName
	I0731 18:12:54.277103   73800 main.go:141] libmachine: (embed-certs-436067) Calling .GetState
	I0731 18:12:54.277663   73800 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 18:12:54.277704   73800 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 18:12:54.280559   73800 addons.go:234] Setting addon default-storageclass=true in "embed-certs-436067"
	W0731 18:12:54.280583   73800 addons.go:243] addon default-storageclass should already be in state true
	I0731 18:12:54.280615   73800 host.go:66] Checking if "embed-certs-436067" exists ...
	I0731 18:12:54.280969   73800 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 18:12:54.281000   73800 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 18:12:54.293211   73800 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38393
	I0731 18:12:54.293657   73800 main.go:141] libmachine: () Calling .GetVersion
	I0731 18:12:54.294121   73800 main.go:141] libmachine: Using API Version  1
	I0731 18:12:54.294142   73800 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 18:12:54.294444   73800 main.go:141] libmachine: () Calling .GetMachineName
	I0731 18:12:54.294642   73800 main.go:141] libmachine: (embed-certs-436067) Calling .GetState
	I0731 18:12:54.294724   73800 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38771
	I0731 18:12:54.295077   73800 main.go:141] libmachine: () Calling .GetVersion
	I0731 18:12:54.295590   73800 main.go:141] libmachine: Using API Version  1
	I0731 18:12:54.295609   73800 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 18:12:54.296058   73800 main.go:141] libmachine: () Calling .GetMachineName
	I0731 18:12:54.296285   73800 main.go:141] libmachine: (embed-certs-436067) Calling .GetState
	I0731 18:12:54.296377   73800 main.go:141] libmachine: (embed-certs-436067) Calling .DriverName
	I0731 18:12:54.298013   73800 main.go:141] libmachine: (embed-certs-436067) Calling .DriverName
	I0731 18:12:54.298541   73800 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0731 18:12:54.299454   73800 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0731 18:12:54.299489   73800 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0731 18:12:54.299501   73800 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0731 18:12:54.299515   73800 main.go:141] libmachine: (embed-certs-436067) Calling .GetSSHHostname
	I0731 18:12:54.300664   73800 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0731 18:12:54.300682   73800 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0731 18:12:54.300699   73800 main.go:141] libmachine: (embed-certs-436067) Calling .GetSSHHostname
	I0731 18:12:54.301018   73800 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36647
	I0731 18:12:54.301671   73800 main.go:141] libmachine: () Calling .GetVersion
	I0731 18:12:54.302210   73800 main.go:141] libmachine: Using API Version  1
	I0731 18:12:54.302229   73800 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 18:12:54.302731   73800 main.go:141] libmachine: () Calling .GetMachineName
	I0731 18:12:54.302857   73800 main.go:141] libmachine: (embed-certs-436067) DBG | domain embed-certs-436067 has defined MAC address 52:54:00:87:1e:25 in network mk-embed-certs-436067
	I0731 18:12:54.303479   73800 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 18:12:54.303503   73800 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 18:12:54.303710   73800 main.go:141] libmachine: (embed-certs-436067) Calling .GetSSHPort
	I0731 18:12:54.303744   73800 main.go:141] libmachine: (embed-certs-436067) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:1e:25", ip: ""} in network mk-embed-certs-436067: {Iface:virbr4 ExpiryTime:2024-07-31 19:07:32 +0000 UTC Type:0 Mac:52:54:00:87:1e:25 Iaid: IPaddr:192.168.50.86 Prefix:24 Hostname:embed-certs-436067 Clientid:01:52:54:00:87:1e:25}
	I0731 18:12:54.303768   73800 main.go:141] libmachine: (embed-certs-436067) DBG | domain embed-certs-436067 has defined IP address 192.168.50.86 and MAC address 52:54:00:87:1e:25 in network mk-embed-certs-436067
	I0731 18:12:54.303893   73800 main.go:141] libmachine: (embed-certs-436067) Calling .GetSSHKeyPath
	I0731 18:12:54.304071   73800 main.go:141] libmachine: (embed-certs-436067) Calling .GetSSHUsername
	I0731 18:12:54.304232   73800 sshutil.go:53] new ssh client: &{IP:192.168.50.86 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19349-8084/.minikube/machines/embed-certs-436067/id_rsa Username:docker}
	I0731 18:12:54.304601   73800 main.go:141] libmachine: (embed-certs-436067) DBG | domain embed-certs-436067 has defined MAC address 52:54:00:87:1e:25 in network mk-embed-certs-436067
	I0731 18:12:54.305040   73800 main.go:141] libmachine: (embed-certs-436067) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:1e:25", ip: ""} in network mk-embed-certs-436067: {Iface:virbr4 ExpiryTime:2024-07-31 19:07:32 +0000 UTC Type:0 Mac:52:54:00:87:1e:25 Iaid: IPaddr:192.168.50.86 Prefix:24 Hostname:embed-certs-436067 Clientid:01:52:54:00:87:1e:25}
	I0731 18:12:54.305063   73800 main.go:141] libmachine: (embed-certs-436067) DBG | domain embed-certs-436067 has defined IP address 192.168.50.86 and MAC address 52:54:00:87:1e:25 in network mk-embed-certs-436067
	I0731 18:12:54.305311   73800 main.go:141] libmachine: (embed-certs-436067) Calling .GetSSHPort
	I0731 18:12:54.305480   73800 main.go:141] libmachine: (embed-certs-436067) Calling .GetSSHKeyPath
	I0731 18:12:54.305594   73800 main.go:141] libmachine: (embed-certs-436067) Calling .GetSSHUsername
	I0731 18:12:54.305712   73800 sshutil.go:53] new ssh client: &{IP:192.168.50.86 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19349-8084/.minikube/machines/embed-certs-436067/id_rsa Username:docker}
	I0731 18:12:54.318168   73800 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33711
	I0731 18:12:54.318558   73800 main.go:141] libmachine: () Calling .GetVersion
	I0731 18:12:54.319015   73800 main.go:141] libmachine: Using API Version  1
	I0731 18:12:54.319033   73800 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 18:12:54.319355   73800 main.go:141] libmachine: () Calling .GetMachineName
	I0731 18:12:54.319552   73800 main.go:141] libmachine: (embed-certs-436067) Calling .GetState
	I0731 18:12:54.321369   73800 main.go:141] libmachine: (embed-certs-436067) Calling .DriverName
	I0731 18:12:54.321540   73800 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0731 18:12:54.321553   73800 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0731 18:12:54.321565   73800 main.go:141] libmachine: (embed-certs-436067) Calling .GetSSHHostname
	I0731 18:12:54.324613   73800 main.go:141] libmachine: (embed-certs-436067) DBG | domain embed-certs-436067 has defined MAC address 52:54:00:87:1e:25 in network mk-embed-certs-436067
	I0731 18:12:54.324994   73800 main.go:141] libmachine: (embed-certs-436067) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:1e:25", ip: ""} in network mk-embed-certs-436067: {Iface:virbr4 ExpiryTime:2024-07-31 19:07:32 +0000 UTC Type:0 Mac:52:54:00:87:1e:25 Iaid: IPaddr:192.168.50.86 Prefix:24 Hostname:embed-certs-436067 Clientid:01:52:54:00:87:1e:25}
	I0731 18:12:54.325011   73800 main.go:141] libmachine: (embed-certs-436067) DBG | domain embed-certs-436067 has defined IP address 192.168.50.86 and MAC address 52:54:00:87:1e:25 in network mk-embed-certs-436067
	I0731 18:12:54.325310   73800 main.go:141] libmachine: (embed-certs-436067) Calling .GetSSHPort
	I0731 18:12:54.325437   73800 main.go:141] libmachine: (embed-certs-436067) Calling .GetSSHKeyPath
	I0731 18:12:54.325571   73800 main.go:141] libmachine: (embed-certs-436067) Calling .GetSSHUsername
	I0731 18:12:54.325683   73800 sshutil.go:53] new ssh client: &{IP:192.168.50.86 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19349-8084/.minikube/machines/embed-certs-436067/id_rsa Username:docker}
	I0731 18:12:54.435485   73800 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0731 18:12:54.462541   73800 node_ready.go:35] waiting up to 6m0s for node "embed-certs-436067" to be "Ready" ...
	I0731 18:12:54.473787   73800 node_ready.go:49] node "embed-certs-436067" has status "Ready":"True"
	I0731 18:12:54.473810   73800 node_ready.go:38] duration metric: took 11.237808ms for node "embed-certs-436067" to be "Ready" ...
	I0731 18:12:54.473819   73800 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0731 18:12:54.485589   73800 pod_ready.go:78] waiting up to 6m0s for pod "etcd-embed-certs-436067" in "kube-system" namespace to be "Ready" ...
	I0731 18:12:54.507887   73800 pod_ready.go:92] pod "etcd-embed-certs-436067" in "kube-system" namespace has status "Ready":"True"
	I0731 18:12:54.507910   73800 pod_ready.go:81] duration metric: took 22.296215ms for pod "etcd-embed-certs-436067" in "kube-system" namespace to be "Ready" ...
	I0731 18:12:54.507921   73800 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-embed-certs-436067" in "kube-system" namespace to be "Ready" ...
	I0731 18:12:54.524721   73800 pod_ready.go:92] pod "kube-apiserver-embed-certs-436067" in "kube-system" namespace has status "Ready":"True"
	I0731 18:12:54.524742   73800 pod_ready.go:81] duration metric: took 16.814491ms for pod "kube-apiserver-embed-certs-436067" in "kube-system" namespace to be "Ready" ...
	I0731 18:12:54.524751   73800 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-436067" in "kube-system" namespace to be "Ready" ...
	I0731 18:12:54.536810   73800 pod_ready.go:92] pod "kube-controller-manager-embed-certs-436067" in "kube-system" namespace has status "Ready":"True"
	I0731 18:12:54.536837   73800 pod_ready.go:81] duration metric: took 12.078703ms for pod "kube-controller-manager-embed-certs-436067" in "kube-system" namespace to be "Ready" ...
	I0731 18:12:54.536848   73800 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-85spm" in "kube-system" namespace to be "Ready" ...
	I0731 18:12:54.552538   73800 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0731 18:12:54.579223   73800 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0731 18:12:54.579244   73800 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0731 18:12:54.596087   73800 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0731 18:12:54.617180   73800 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0731 18:12:54.617209   73800 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0731 18:12:54.679879   73800 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0731 18:12:54.679908   73800 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0731 18:12:54.775272   73800 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0731 18:12:55.199299   73800 main.go:141] libmachine: Making call to close driver server
	I0731 18:12:55.199335   73800 main.go:141] libmachine: (embed-certs-436067) Calling .Close
	I0731 18:12:55.199342   73800 main.go:141] libmachine: Making call to close driver server
	I0731 18:12:55.199361   73800 main.go:141] libmachine: (embed-certs-436067) Calling .Close
	I0731 18:12:55.199618   73800 main.go:141] libmachine: Successfully made call to close driver server
	I0731 18:12:55.199666   73800 main.go:141] libmachine: Successfully made call to close driver server
	I0731 18:12:55.199678   73800 main.go:141] libmachine: (embed-certs-436067) DBG | Closing plugin on server side
	I0731 18:12:55.199634   73800 main.go:141] libmachine: (embed-certs-436067) DBG | Closing plugin on server side
	I0731 18:12:55.199685   73800 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 18:12:55.199710   73800 main.go:141] libmachine: Making call to close driver server
	I0731 18:12:55.199689   73800 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 18:12:55.199717   73800 main.go:141] libmachine: (embed-certs-436067) Calling .Close
	I0731 18:12:55.199726   73800 main.go:141] libmachine: Making call to close driver server
	I0731 18:12:55.199735   73800 main.go:141] libmachine: (embed-certs-436067) Calling .Close
	I0731 18:12:55.200002   73800 main.go:141] libmachine: Successfully made call to close driver server
	I0731 18:12:55.200016   73800 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 18:12:55.200079   73800 main.go:141] libmachine: (embed-certs-436067) DBG | Closing plugin on server side
	I0731 18:12:55.200107   73800 main.go:141] libmachine: Successfully made call to close driver server
	I0731 18:12:55.200120   73800 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 18:12:55.227472   73800 main.go:141] libmachine: Making call to close driver server
	I0731 18:12:55.227497   73800 main.go:141] libmachine: (embed-certs-436067) Calling .Close
	I0731 18:12:55.227792   73800 main.go:141] libmachine: Successfully made call to close driver server
	I0731 18:12:55.227811   73800 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 18:12:55.712134   73800 main.go:141] libmachine: Making call to close driver server
	I0731 18:12:55.712159   73800 main.go:141] libmachine: (embed-certs-436067) Calling .Close
	I0731 18:12:55.712516   73800 main.go:141] libmachine: Successfully made call to close driver server
	I0731 18:12:55.712568   73800 main.go:141] libmachine: (embed-certs-436067) DBG | Closing plugin on server side
	I0731 18:12:55.712574   73800 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 18:12:55.712596   73800 main.go:141] libmachine: Making call to close driver server
	I0731 18:12:55.712605   73800 main.go:141] libmachine: (embed-certs-436067) Calling .Close
	I0731 18:12:55.712851   73800 main.go:141] libmachine: Successfully made call to close driver server
	I0731 18:12:55.712868   73800 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 18:12:55.712867   73800 main.go:141] libmachine: (embed-certs-436067) DBG | Closing plugin on server side
	I0731 18:12:55.712877   73800 addons.go:475] Verifying addon metrics-server=true in "embed-certs-436067"
	I0731 18:12:55.714432   73800 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0731 18:12:54.143455   73479 pod_ready.go:102] pod "metrics-server-78fcd8795b-27pkr" in "kube-system" namespace has status "Ready":"False"
	I0731 18:12:56.144177   73479 pod_ready.go:102] pod "metrics-server-78fcd8795b-27pkr" in "kube-system" namespace has status "Ready":"False"
	I0731 18:12:55.715903   73800 addons.go:510] duration metric: took 1.461304856s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I0731 18:12:56.542100   73800 pod_ready.go:92] pod "kube-proxy-85spm" in "kube-system" namespace has status "Ready":"True"
	I0731 18:12:56.542122   73800 pod_ready.go:81] duration metric: took 2.005265959s for pod "kube-proxy-85spm" in "kube-system" namespace to be "Ready" ...
	I0731 18:12:56.542135   73800 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-embed-certs-436067" in "kube-system" namespace to be "Ready" ...
	I0731 18:12:56.553810   73800 pod_ready.go:92] pod "kube-scheduler-embed-certs-436067" in "kube-system" namespace has status "Ready":"True"
	I0731 18:12:56.553831   73800 pod_ready.go:81] duration metric: took 11.689814ms for pod "kube-scheduler-embed-certs-436067" in "kube-system" namespace to be "Ready" ...
	I0731 18:12:56.553840   73800 pod_ready.go:38] duration metric: took 2.080010607s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0731 18:12:56.553853   73800 api_server.go:52] waiting for apiserver process to appear ...
	I0731 18:12:56.553899   73800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:12:56.568301   73800 api_server.go:72] duration metric: took 2.313759916s to wait for apiserver process to appear ...
	I0731 18:12:56.568327   73800 api_server.go:88] waiting for apiserver healthz status ...
	I0731 18:12:56.568345   73800 api_server.go:253] Checking apiserver healthz at https://192.168.50.86:8443/healthz ...
	I0731 18:12:56.573861   73800 api_server.go:279] https://192.168.50.86:8443/healthz returned 200:
	ok
	I0731 18:12:56.575494   73800 api_server.go:141] control plane version: v1.30.3
	I0731 18:12:56.575513   73800 api_server.go:131] duration metric: took 7.1795ms to wait for apiserver health ...
	I0731 18:12:56.575520   73800 system_pods.go:43] waiting for kube-system pods to appear ...
	I0731 18:12:56.669169   73800 system_pods.go:59] 9 kube-system pods found
	I0731 18:12:56.669197   73800 system_pods.go:61] "coredns-7db6d8ff4d-fqkfd" [2e5a67a3-6f2b-43a5-8b94-cf48202c5958] Running
	I0731 18:12:56.669202   73800 system_pods.go:61] "coredns-7db6d8ff4d-qpb62" [e57f157b-01b5-42ec-9630-b9b5ae94fe5d] Running
	I0731 18:12:56.669206   73800 system_pods.go:61] "etcd-embed-certs-436067" [d0439817-b307-4673-8a64-34aa31266fbb] Running
	I0731 18:12:56.669210   73800 system_pods.go:61] "kube-apiserver-embed-certs-436067" [c2409e33-d42b-4132-8f63-197c4d445704] Running
	I0731 18:12:56.669214   73800 system_pods.go:61] "kube-controller-manager-embed-certs-436067" [dfea1f44-48a3-4a3c-8a27-be6f01f58add] Running
	I0731 18:12:56.669218   73800 system_pods.go:61] "kube-proxy-85spm" [eecb399a-0365-436d-8f14-a7853a5a2ea3] Running
	I0731 18:12:56.669221   73800 system_pods.go:61] "kube-scheduler-embed-certs-436067" [e406617d-adf7-4baa-9a8a-6fd92847a12f] Running
	I0731 18:12:56.669228   73800 system_pods.go:61] "metrics-server-569cc877fc-pgf6q" [b799fa04-0bf7-4914-9738-964b825577b5] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0731 18:12:56.669231   73800 system_pods.go:61] "storage-provisioner" [a7d95d32-1002-4fba-b0fd-555b872efa1f] Running
	I0731 18:12:56.669240   73800 system_pods.go:74] duration metric: took 93.714593ms to wait for pod list to return data ...
	I0731 18:12:56.669247   73800 default_sa.go:34] waiting for default service account to be created ...
	I0731 18:12:56.866494   73800 default_sa.go:45] found service account: "default"
	I0731 18:12:56.866521   73800 default_sa.go:55] duration metric: took 197.264891ms for default service account to be created ...
	I0731 18:12:56.866532   73800 system_pods.go:116] waiting for k8s-apps to be running ...
	I0731 18:12:57.068903   73800 system_pods.go:86] 9 kube-system pods found
	I0731 18:12:57.068930   73800 system_pods.go:89] "coredns-7db6d8ff4d-fqkfd" [2e5a67a3-6f2b-43a5-8b94-cf48202c5958] Running
	I0731 18:12:57.068936   73800 system_pods.go:89] "coredns-7db6d8ff4d-qpb62" [e57f157b-01b5-42ec-9630-b9b5ae94fe5d] Running
	I0731 18:12:57.068940   73800 system_pods.go:89] "etcd-embed-certs-436067" [d0439817-b307-4673-8a64-34aa31266fbb] Running
	I0731 18:12:57.068944   73800 system_pods.go:89] "kube-apiserver-embed-certs-436067" [c2409e33-d42b-4132-8f63-197c4d445704] Running
	I0731 18:12:57.068948   73800 system_pods.go:89] "kube-controller-manager-embed-certs-436067" [dfea1f44-48a3-4a3c-8a27-be6f01f58add] Running
	I0731 18:12:57.068951   73800 system_pods.go:89] "kube-proxy-85spm" [eecb399a-0365-436d-8f14-a7853a5a2ea3] Running
	I0731 18:12:57.068955   73800 system_pods.go:89] "kube-scheduler-embed-certs-436067" [e406617d-adf7-4baa-9a8a-6fd92847a12f] Running
	I0731 18:12:57.068961   73800 system_pods.go:89] "metrics-server-569cc877fc-pgf6q" [b799fa04-0bf7-4914-9738-964b825577b5] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0731 18:12:57.068965   73800 system_pods.go:89] "storage-provisioner" [a7d95d32-1002-4fba-b0fd-555b872efa1f] Running
	I0731 18:12:57.068972   73800 system_pods.go:126] duration metric: took 202.435205ms to wait for k8s-apps to be running ...
	I0731 18:12:57.068980   73800 system_svc.go:44] waiting for kubelet service to be running ....
	I0731 18:12:57.069018   73800 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0731 18:12:57.083728   73800 system_svc.go:56] duration metric: took 14.739831ms WaitForService to wait for kubelet
	I0731 18:12:57.083756   73800 kubeadm.go:582] duration metric: took 2.829227102s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0731 18:12:57.083782   73800 node_conditions.go:102] verifying NodePressure condition ...
	I0731 18:12:57.266463   73800 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0731 18:12:57.266486   73800 node_conditions.go:123] node cpu capacity is 2
	I0731 18:12:57.266495   73800 node_conditions.go:105] duration metric: took 182.707869ms to run NodePressure ...
	I0731 18:12:57.266505   73800 start.go:241] waiting for startup goroutines ...
	I0731 18:12:57.266512   73800 start.go:246] waiting for cluster config update ...
	I0731 18:12:57.266521   73800 start.go:255] writing updated cluster config ...
	I0731 18:12:57.266767   73800 ssh_runner.go:195] Run: rm -f paused
	I0731 18:12:57.313723   73800 start.go:600] kubectl: 1.30.3, cluster: 1.30.3 (minor skew: 0)
	I0731 18:12:57.315966   73800 out.go:177] * Done! kubectl is now configured to use "embed-certs-436067" cluster and "default" namespace by default
	I0731 18:12:58.652853   74203 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0731 18:12:58.653480   74203 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0731 18:12:58.653735   74203 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0731 18:12:58.643237   73479 pod_ready.go:102] pod "metrics-server-78fcd8795b-27pkr" in "kube-system" namespace has status "Ready":"False"
	I0731 18:13:01.143274   73479 pod_ready.go:102] pod "metrics-server-78fcd8795b-27pkr" in "kube-system" namespace has status "Ready":"False"
	I0731 18:13:01.643357   73479 pod_ready.go:81] duration metric: took 4m0.006506347s for pod "metrics-server-78fcd8795b-27pkr" in "kube-system" namespace to be "Ready" ...
	E0731 18:13:01.643382   73479 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0731 18:13:01.643388   73479 pod_ready.go:38] duration metric: took 4m7.418860701s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0731 18:13:01.643402   73479 api_server.go:52] waiting for apiserver process to appear ...
	I0731 18:13:01.643428   73479 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 18:13:01.643481   73479 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 18:13:01.692071   73479 cri.go:89] found id: "895465d024797e36349af3a0f7cd0689ec7e813d811a3205a301baae081986e0"
	I0731 18:13:01.692092   73479 cri.go:89] found id: ""
	I0731 18:13:01.692101   73479 logs.go:276] 1 containers: [895465d024797e36349af3a0f7cd0689ec7e813d811a3205a301baae081986e0]
	I0731 18:13:01.692159   73479 ssh_runner.go:195] Run: which crictl
	I0731 18:13:01.697266   73479 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 18:13:01.697356   73479 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 18:13:01.736299   73479 cri.go:89] found id: "65ef90d7b082adf059c9973e7f1dfde4770ed9718d440c8e0d525d427b16f645"
	I0731 18:13:01.736350   73479 cri.go:89] found id: ""
	I0731 18:13:01.736360   73479 logs.go:276] 1 containers: [65ef90d7b082adf059c9973e7f1dfde4770ed9718d440c8e0d525d427b16f645]
	I0731 18:13:01.736417   73479 ssh_runner.go:195] Run: which crictl
	I0731 18:13:01.740672   73479 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 18:13:01.740733   73479 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 18:13:01.774782   73479 cri.go:89] found id: "f043eb2392c22ee26a4729969283f278a4dc7ff4784addf7139958beb2cba855"
	I0731 18:13:01.774816   73479 cri.go:89] found id: ""
	I0731 18:13:01.774826   73479 logs.go:276] 1 containers: [f043eb2392c22ee26a4729969283f278a4dc7ff4784addf7139958beb2cba855]
	I0731 18:13:01.774893   73479 ssh_runner.go:195] Run: which crictl
	I0731 18:13:01.778542   73479 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 18:13:01.778618   73479 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 18:13:01.818749   73479 cri.go:89] found id: "ed1c40e21d8aa2b6321516625802154255f5eda1642411e377150b8df1c88bd2"
	I0731 18:13:01.818769   73479 cri.go:89] found id: ""
	I0731 18:13:01.818776   73479 logs.go:276] 1 containers: [ed1c40e21d8aa2b6321516625802154255f5eda1642411e377150b8df1c88bd2]
	I0731 18:13:01.818828   73479 ssh_runner.go:195] Run: which crictl
	I0731 18:13:01.827176   73479 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 18:13:01.827248   73479 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 18:13:01.860700   73479 cri.go:89] found id: "57bdb8e09be40a0925d7a86eb613ef8a0455415666562743e36e6800b8636847"
	I0731 18:13:01.860730   73479 cri.go:89] found id: ""
	I0731 18:13:01.860739   73479 logs.go:276] 1 containers: [57bdb8e09be40a0925d7a86eb613ef8a0455415666562743e36e6800b8636847]
	I0731 18:13:01.860825   73479 ssh_runner.go:195] Run: which crictl
	I0731 18:13:03.654494   74203 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0731 18:13:03.654747   74203 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0731 18:13:01.864629   73479 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 18:13:01.864702   73479 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 18:13:01.899293   73479 cri.go:89] found id: "ee75a53c57652705a335378f60ff96b2fce4543143edc35249ac027c7c6e4a2c"
	I0731 18:13:01.899338   73479 cri.go:89] found id: ""
	I0731 18:13:01.899347   73479 logs.go:276] 1 containers: [ee75a53c57652705a335378f60ff96b2fce4543143edc35249ac027c7c6e4a2c]
	I0731 18:13:01.899406   73479 ssh_runner.go:195] Run: which crictl
	I0731 18:13:01.903202   73479 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 18:13:01.903272   73479 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 18:13:01.934472   73479 cri.go:89] found id: ""
	I0731 18:13:01.934505   73479 logs.go:276] 0 containers: []
	W0731 18:13:01.934516   73479 logs.go:278] No container was found matching "kindnet"
	I0731 18:13:01.934523   73479 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0731 18:13:01.934588   73479 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0731 18:13:01.967244   73479 cri.go:89] found id: "6f311536202ba854461dd5abcc13c5b8d006399426b52e0c8d10bc41f06826df"
	I0731 18:13:01.967271   73479 cri.go:89] found id: "9ea2bc105f57ae4d05e4535f2851a35b9014c7574a63fef533fa0c487002941c"
	I0731 18:13:01.967276   73479 cri.go:89] found id: ""
	I0731 18:13:01.967285   73479 logs.go:276] 2 containers: [6f311536202ba854461dd5abcc13c5b8d006399426b52e0c8d10bc41f06826df 9ea2bc105f57ae4d05e4535f2851a35b9014c7574a63fef533fa0c487002941c]
	I0731 18:13:01.967349   73479 ssh_runner.go:195] Run: which crictl
	I0731 18:13:01.971167   73479 ssh_runner.go:195] Run: which crictl
	I0731 18:13:01.975648   73479 logs.go:123] Gathering logs for kubelet ...
	I0731 18:13:01.975670   73479 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 18:13:02.031430   73479 logs.go:123] Gathering logs for describe nodes ...
	I0731 18:13:02.031472   73479 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 18:13:02.158774   73479 logs.go:123] Gathering logs for kube-apiserver [895465d024797e36349af3a0f7cd0689ec7e813d811a3205a301baae081986e0] ...
	I0731 18:13:02.158803   73479 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 895465d024797e36349af3a0f7cd0689ec7e813d811a3205a301baae081986e0"
	I0731 18:13:02.199495   73479 logs.go:123] Gathering logs for kube-scheduler [ed1c40e21d8aa2b6321516625802154255f5eda1642411e377150b8df1c88bd2] ...
	I0731 18:13:02.199521   73479 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ed1c40e21d8aa2b6321516625802154255f5eda1642411e377150b8df1c88bd2"
	I0731 18:13:02.232285   73479 logs.go:123] Gathering logs for kube-proxy [57bdb8e09be40a0925d7a86eb613ef8a0455415666562743e36e6800b8636847] ...
	I0731 18:13:02.232327   73479 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 57bdb8e09be40a0925d7a86eb613ef8a0455415666562743e36e6800b8636847"
	I0731 18:13:02.272360   73479 logs.go:123] Gathering logs for storage-provisioner [9ea2bc105f57ae4d05e4535f2851a35b9014c7574a63fef533fa0c487002941c] ...
	I0731 18:13:02.272389   73479 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9ea2bc105f57ae4d05e4535f2851a35b9014c7574a63fef533fa0c487002941c"
	I0731 18:13:02.305902   73479 logs.go:123] Gathering logs for dmesg ...
	I0731 18:13:02.305931   73479 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 18:13:02.319954   73479 logs.go:123] Gathering logs for etcd [65ef90d7b082adf059c9973e7f1dfde4770ed9718d440c8e0d525d427b16f645] ...
	I0731 18:13:02.319984   73479 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 65ef90d7b082adf059c9973e7f1dfde4770ed9718d440c8e0d525d427b16f645"
	I0731 18:13:02.361657   73479 logs.go:123] Gathering logs for coredns [f043eb2392c22ee26a4729969283f278a4dc7ff4784addf7139958beb2cba855] ...
	I0731 18:13:02.361685   73479 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f043eb2392c22ee26a4729969283f278a4dc7ff4784addf7139958beb2cba855"
	I0731 18:13:02.395696   73479 logs.go:123] Gathering logs for kube-controller-manager [ee75a53c57652705a335378f60ff96b2fce4543143edc35249ac027c7c6e4a2c] ...
	I0731 18:13:02.395724   73479 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ee75a53c57652705a335378f60ff96b2fce4543143edc35249ac027c7c6e4a2c"
	I0731 18:13:02.444671   73479 logs.go:123] Gathering logs for storage-provisioner [6f311536202ba854461dd5abcc13c5b8d006399426b52e0c8d10bc41f06826df] ...
	I0731 18:13:02.444704   73479 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6f311536202ba854461dd5abcc13c5b8d006399426b52e0c8d10bc41f06826df"
	I0731 18:13:02.480666   73479 logs.go:123] Gathering logs for CRI-O ...
	I0731 18:13:02.480693   73479 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 18:13:02.967693   73479 logs.go:123] Gathering logs for container status ...
	I0731 18:13:02.967741   73479 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 18:13:05.512381   73479 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:13:05.528582   73479 api_server.go:72] duration metric: took 4m19.030809429s to wait for apiserver process to appear ...
	I0731 18:13:05.528612   73479 api_server.go:88] waiting for apiserver healthz status ...
	I0731 18:13:05.528652   73479 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 18:13:05.528730   73479 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 18:13:05.567984   73479 cri.go:89] found id: "895465d024797e36349af3a0f7cd0689ec7e813d811a3205a301baae081986e0"
	I0731 18:13:05.568004   73479 cri.go:89] found id: ""
	I0731 18:13:05.568013   73479 logs.go:276] 1 containers: [895465d024797e36349af3a0f7cd0689ec7e813d811a3205a301baae081986e0]
	I0731 18:13:05.568073   73479 ssh_runner.go:195] Run: which crictl
	I0731 18:13:05.571946   73479 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 18:13:05.572003   73479 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 18:13:05.620468   73479 cri.go:89] found id: "65ef90d7b082adf059c9973e7f1dfde4770ed9718d440c8e0d525d427b16f645"
	I0731 18:13:05.620495   73479 cri.go:89] found id: ""
	I0731 18:13:05.620504   73479 logs.go:276] 1 containers: [65ef90d7b082adf059c9973e7f1dfde4770ed9718d440c8e0d525d427b16f645]
	I0731 18:13:05.620571   73479 ssh_runner.go:195] Run: which crictl
	I0731 18:13:05.624599   73479 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 18:13:05.624653   73479 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 18:13:05.663717   73479 cri.go:89] found id: "f043eb2392c22ee26a4729969283f278a4dc7ff4784addf7139958beb2cba855"
	I0731 18:13:05.663740   73479 cri.go:89] found id: ""
	I0731 18:13:05.663748   73479 logs.go:276] 1 containers: [f043eb2392c22ee26a4729969283f278a4dc7ff4784addf7139958beb2cba855]
	I0731 18:13:05.663803   73479 ssh_runner.go:195] Run: which crictl
	I0731 18:13:05.667601   73479 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 18:13:05.667672   73479 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 18:13:05.699764   73479 cri.go:89] found id: "ed1c40e21d8aa2b6321516625802154255f5eda1642411e377150b8df1c88bd2"
	I0731 18:13:05.699791   73479 cri.go:89] found id: ""
	I0731 18:13:05.699801   73479 logs.go:276] 1 containers: [ed1c40e21d8aa2b6321516625802154255f5eda1642411e377150b8df1c88bd2]
	I0731 18:13:05.699858   73479 ssh_runner.go:195] Run: which crictl
	I0731 18:13:05.703965   73479 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 18:13:05.704036   73479 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 18:13:05.739460   73479 cri.go:89] found id: "57bdb8e09be40a0925d7a86eb613ef8a0455415666562743e36e6800b8636847"
	I0731 18:13:05.739487   73479 cri.go:89] found id: ""
	I0731 18:13:05.739496   73479 logs.go:276] 1 containers: [57bdb8e09be40a0925d7a86eb613ef8a0455415666562743e36e6800b8636847]
	I0731 18:13:05.739558   73479 ssh_runner.go:195] Run: which crictl
	I0731 18:13:05.743180   73479 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 18:13:05.743232   73479 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 18:13:05.777369   73479 cri.go:89] found id: "ee75a53c57652705a335378f60ff96b2fce4543143edc35249ac027c7c6e4a2c"
	I0731 18:13:05.777390   73479 cri.go:89] found id: ""
	I0731 18:13:05.777397   73479 logs.go:276] 1 containers: [ee75a53c57652705a335378f60ff96b2fce4543143edc35249ac027c7c6e4a2c]
	I0731 18:13:05.777449   73479 ssh_runner.go:195] Run: which crictl
	I0731 18:13:05.781388   73479 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 18:13:05.781435   73479 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 18:13:05.825567   73479 cri.go:89] found id: ""
	I0731 18:13:05.825599   73479 logs.go:276] 0 containers: []
	W0731 18:13:05.825610   73479 logs.go:278] No container was found matching "kindnet"
	I0731 18:13:05.825617   73479 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0731 18:13:05.825689   73479 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0731 18:13:05.859538   73479 cri.go:89] found id: "6f311536202ba854461dd5abcc13c5b8d006399426b52e0c8d10bc41f06826df"
	I0731 18:13:05.859570   73479 cri.go:89] found id: "9ea2bc105f57ae4d05e4535f2851a35b9014c7574a63fef533fa0c487002941c"
	I0731 18:13:05.859577   73479 cri.go:89] found id: ""
	I0731 18:13:05.859586   73479 logs.go:276] 2 containers: [6f311536202ba854461dd5abcc13c5b8d006399426b52e0c8d10bc41f06826df 9ea2bc105f57ae4d05e4535f2851a35b9014c7574a63fef533fa0c487002941c]
	I0731 18:13:05.859657   73479 ssh_runner.go:195] Run: which crictl
	I0731 18:13:05.863513   73479 ssh_runner.go:195] Run: which crictl
	I0731 18:13:05.866989   73479 logs.go:123] Gathering logs for CRI-O ...
	I0731 18:13:05.867011   73479 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 18:13:06.314116   73479 logs.go:123] Gathering logs for container status ...
	I0731 18:13:06.314166   73479 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 18:13:06.357738   73479 logs.go:123] Gathering logs for kubelet ...
	I0731 18:13:06.357764   73479 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 18:13:06.407330   73479 logs.go:123] Gathering logs for describe nodes ...
	I0731 18:13:06.407365   73479 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 18:13:06.508580   73479 logs.go:123] Gathering logs for kube-apiserver [895465d024797e36349af3a0f7cd0689ec7e813d811a3205a301baae081986e0] ...
	I0731 18:13:06.508616   73479 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 895465d024797e36349af3a0f7cd0689ec7e813d811a3205a301baae081986e0"
	I0731 18:13:06.550032   73479 logs.go:123] Gathering logs for coredns [f043eb2392c22ee26a4729969283f278a4dc7ff4784addf7139958beb2cba855] ...
	I0731 18:13:06.550071   73479 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f043eb2392c22ee26a4729969283f278a4dc7ff4784addf7139958beb2cba855"
	I0731 18:13:06.588519   73479 logs.go:123] Gathering logs for kube-proxy [57bdb8e09be40a0925d7a86eb613ef8a0455415666562743e36e6800b8636847] ...
	I0731 18:13:06.588548   73479 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 57bdb8e09be40a0925d7a86eb613ef8a0455415666562743e36e6800b8636847"
	I0731 18:13:06.622872   73479 logs.go:123] Gathering logs for storage-provisioner [6f311536202ba854461dd5abcc13c5b8d006399426b52e0c8d10bc41f06826df] ...
	I0731 18:13:06.622901   73479 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6f311536202ba854461dd5abcc13c5b8d006399426b52e0c8d10bc41f06826df"
	I0731 18:13:06.666694   73479 logs.go:123] Gathering logs for dmesg ...
	I0731 18:13:06.666721   73479 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 18:13:06.680326   73479 logs.go:123] Gathering logs for etcd [65ef90d7b082adf059c9973e7f1dfde4770ed9718d440c8e0d525d427b16f645] ...
	I0731 18:13:06.680355   73479 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 65ef90d7b082adf059c9973e7f1dfde4770ed9718d440c8e0d525d427b16f645"
	I0731 18:13:06.723966   73479 logs.go:123] Gathering logs for kube-scheduler [ed1c40e21d8aa2b6321516625802154255f5eda1642411e377150b8df1c88bd2] ...
	I0731 18:13:06.723997   73479 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ed1c40e21d8aa2b6321516625802154255f5eda1642411e377150b8df1c88bd2"
	I0731 18:13:06.760873   73479 logs.go:123] Gathering logs for kube-controller-manager [ee75a53c57652705a335378f60ff96b2fce4543143edc35249ac027c7c6e4a2c] ...
	I0731 18:13:06.760901   73479 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ee75a53c57652705a335378f60ff96b2fce4543143edc35249ac027c7c6e4a2c"
	I0731 18:13:06.809348   73479 logs.go:123] Gathering logs for storage-provisioner [9ea2bc105f57ae4d05e4535f2851a35b9014c7574a63fef533fa0c487002941c] ...
	I0731 18:13:06.809387   73479 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9ea2bc105f57ae4d05e4535f2851a35b9014c7574a63fef533fa0c487002941c"
	I0731 18:13:09.341394   73479 api_server.go:253] Checking apiserver healthz at https://192.168.61.126:8443/healthz ...
	I0731 18:13:09.346642   73479 api_server.go:279] https://192.168.61.126:8443/healthz returned 200:
	ok
	I0731 18:13:09.347803   73479 api_server.go:141] control plane version: v1.31.0-beta.0
	I0731 18:13:09.347821   73479 api_server.go:131] duration metric: took 3.819202346s to wait for apiserver health ...
	I0731 18:13:09.347828   73479 system_pods.go:43] waiting for kube-system pods to appear ...
	I0731 18:13:09.347850   73479 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 18:13:09.347903   73479 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 18:13:09.391857   73479 cri.go:89] found id: "895465d024797e36349af3a0f7cd0689ec7e813d811a3205a301baae081986e0"
	I0731 18:13:09.391885   73479 cri.go:89] found id: ""
	I0731 18:13:09.391895   73479 logs.go:276] 1 containers: [895465d024797e36349af3a0f7cd0689ec7e813d811a3205a301baae081986e0]
	I0731 18:13:09.391956   73479 ssh_runner.go:195] Run: which crictl
	I0731 18:13:09.395723   73479 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 18:13:09.395789   73479 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 18:13:09.430108   73479 cri.go:89] found id: "65ef90d7b082adf059c9973e7f1dfde4770ed9718d440c8e0d525d427b16f645"
	I0731 18:13:09.430128   73479 cri.go:89] found id: ""
	I0731 18:13:09.430135   73479 logs.go:276] 1 containers: [65ef90d7b082adf059c9973e7f1dfde4770ed9718d440c8e0d525d427b16f645]
	I0731 18:13:09.430180   73479 ssh_runner.go:195] Run: which crictl
	I0731 18:13:09.433933   73479 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 18:13:09.434037   73479 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 18:13:09.471630   73479 cri.go:89] found id: "f043eb2392c22ee26a4729969283f278a4dc7ff4784addf7139958beb2cba855"
	I0731 18:13:09.471655   73479 cri.go:89] found id: ""
	I0731 18:13:09.471663   73479 logs.go:276] 1 containers: [f043eb2392c22ee26a4729969283f278a4dc7ff4784addf7139958beb2cba855]
	I0731 18:13:09.471709   73479 ssh_runner.go:195] Run: which crictl
	I0731 18:13:09.476432   73479 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 18:13:09.476496   73479 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 18:13:09.519568   73479 cri.go:89] found id: "ed1c40e21d8aa2b6321516625802154255f5eda1642411e377150b8df1c88bd2"
	I0731 18:13:09.519590   73479 cri.go:89] found id: ""
	I0731 18:13:09.519598   73479 logs.go:276] 1 containers: [ed1c40e21d8aa2b6321516625802154255f5eda1642411e377150b8df1c88bd2]
	I0731 18:13:09.519641   73479 ssh_runner.go:195] Run: which crictl
	I0731 18:13:09.523587   73479 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 18:13:09.523656   73479 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 18:13:09.559405   73479 cri.go:89] found id: "57bdb8e09be40a0925d7a86eb613ef8a0455415666562743e36e6800b8636847"
	I0731 18:13:09.559429   73479 cri.go:89] found id: ""
	I0731 18:13:09.559438   73479 logs.go:276] 1 containers: [57bdb8e09be40a0925d7a86eb613ef8a0455415666562743e36e6800b8636847]
	I0731 18:13:09.559485   73479 ssh_runner.go:195] Run: which crictl
	I0731 18:13:09.564137   73479 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 18:13:09.564199   73479 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 18:13:09.605298   73479 cri.go:89] found id: "ee75a53c57652705a335378f60ff96b2fce4543143edc35249ac027c7c6e4a2c"
	I0731 18:13:09.605324   73479 cri.go:89] found id: ""
	I0731 18:13:09.605332   73479 logs.go:276] 1 containers: [ee75a53c57652705a335378f60ff96b2fce4543143edc35249ac027c7c6e4a2c]
	I0731 18:13:09.605403   73479 ssh_runner.go:195] Run: which crictl
	I0731 18:13:09.612233   73479 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 18:13:09.612296   73479 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 18:13:09.648804   73479 cri.go:89] found id: ""
	I0731 18:13:09.648836   73479 logs.go:276] 0 containers: []
	W0731 18:13:09.648848   73479 logs.go:278] No container was found matching "kindnet"
	I0731 18:13:09.648855   73479 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0731 18:13:09.648916   73479 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0731 18:13:09.694708   73479 cri.go:89] found id: "6f311536202ba854461dd5abcc13c5b8d006399426b52e0c8d10bc41f06826df"
	I0731 18:13:09.694733   73479 cri.go:89] found id: "9ea2bc105f57ae4d05e4535f2851a35b9014c7574a63fef533fa0c487002941c"
	I0731 18:13:09.694737   73479 cri.go:89] found id: ""
	I0731 18:13:09.694743   73479 logs.go:276] 2 containers: [6f311536202ba854461dd5abcc13c5b8d006399426b52e0c8d10bc41f06826df 9ea2bc105f57ae4d05e4535f2851a35b9014c7574a63fef533fa0c487002941c]
	I0731 18:13:09.694794   73479 ssh_runner.go:195] Run: which crictl
	I0731 18:13:09.698687   73479 ssh_runner.go:195] Run: which crictl
	I0731 18:13:09.702244   73479 logs.go:123] Gathering logs for storage-provisioner [6f311536202ba854461dd5abcc13c5b8d006399426b52e0c8d10bc41f06826df] ...
	I0731 18:13:09.702261   73479 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6f311536202ba854461dd5abcc13c5b8d006399426b52e0c8d10bc41f06826df"
	I0731 18:13:09.737777   73479 logs.go:123] Gathering logs for storage-provisioner [9ea2bc105f57ae4d05e4535f2851a35b9014c7574a63fef533fa0c487002941c] ...
	I0731 18:13:09.737808   73479 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9ea2bc105f57ae4d05e4535f2851a35b9014c7574a63fef533fa0c487002941c"
	I0731 18:13:09.771128   73479 logs.go:123] Gathering logs for container status ...
	I0731 18:13:09.771161   73479 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 18:13:09.817498   73479 logs.go:123] Gathering logs for dmesg ...
	I0731 18:13:09.817525   73479 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 18:13:09.833574   73479 logs.go:123] Gathering logs for etcd [65ef90d7b082adf059c9973e7f1dfde4770ed9718d440c8e0d525d427b16f645] ...
	I0731 18:13:09.833607   73479 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 65ef90d7b082adf059c9973e7f1dfde4770ed9718d440c8e0d525d427b16f645"
	I0731 18:13:09.872664   73479 logs.go:123] Gathering logs for kube-scheduler [ed1c40e21d8aa2b6321516625802154255f5eda1642411e377150b8df1c88bd2] ...
	I0731 18:13:09.872691   73479 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ed1c40e21d8aa2b6321516625802154255f5eda1642411e377150b8df1c88bd2"
	I0731 18:13:09.913741   73479 logs.go:123] Gathering logs for coredns [f043eb2392c22ee26a4729969283f278a4dc7ff4784addf7139958beb2cba855] ...
	I0731 18:13:09.913771   73479 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f043eb2392c22ee26a4729969283f278a4dc7ff4784addf7139958beb2cba855"
	I0731 18:13:09.949469   73479 logs.go:123] Gathering logs for kube-proxy [57bdb8e09be40a0925d7a86eb613ef8a0455415666562743e36e6800b8636847] ...
	I0731 18:13:09.949512   73479 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 57bdb8e09be40a0925d7a86eb613ef8a0455415666562743e36e6800b8636847"
	I0731 18:13:09.985409   73479 logs.go:123] Gathering logs for kube-controller-manager [ee75a53c57652705a335378f60ff96b2fce4543143edc35249ac027c7c6e4a2c] ...
	I0731 18:13:09.985447   73479 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ee75a53c57652705a335378f60ff96b2fce4543143edc35249ac027c7c6e4a2c"
	I0731 18:13:10.039018   73479 logs.go:123] Gathering logs for CRI-O ...
	I0731 18:13:10.039048   73479 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 18:13:10.406380   73479 logs.go:123] Gathering logs for kubelet ...
	I0731 18:13:10.406416   73479 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 18:13:10.459944   73479 logs.go:123] Gathering logs for describe nodes ...
	I0731 18:13:10.459997   73479 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 18:13:10.564092   73479 logs.go:123] Gathering logs for kube-apiserver [895465d024797e36349af3a0f7cd0689ec7e813d811a3205a301baae081986e0] ...
	I0731 18:13:10.564134   73479 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 895465d024797e36349af3a0f7cd0689ec7e813d811a3205a301baae081986e0"
	I0731 18:13:13.124074   73479 system_pods.go:59] 8 kube-system pods found
	I0731 18:13:13.124102   73479 system_pods.go:61] "coredns-5cfdc65f69-k7clq" [e4be77b6-aa7c-45e2-90a1-6a8264fd5101] Running
	I0731 18:13:13.124107   73479 system_pods.go:61] "etcd-no-preload-673754" [d64e9194-dd33-4a28-b8c3-d25462f1dae8] Running
	I0731 18:13:13.124110   73479 system_pods.go:61] "kube-apiserver-no-preload-673754" [4497614d-51de-4a5f-93dc-1397446fe4c8] Running
	I0731 18:13:13.124114   73479 system_pods.go:61] "kube-controller-manager-no-preload-673754" [fe7f714e-d778-49e7-b4ef-44e6efb5bc33] Running
	I0731 18:13:13.124117   73479 system_pods.go:61] "kube-proxy-hqxh6" [4fb623c0-4be3-421c-85d6-1d76a90b874f] Running
	I0731 18:13:13.124119   73479 system_pods.go:61] "kube-scheduler-no-preload-673754" [558e9b78-a6a9-46b8-9db8-004126359c74] Running
	I0731 18:13:13.124125   73479 system_pods.go:61] "metrics-server-78fcd8795b-27pkr" [a63a156b-0446-4bd9-8619-de75edaeb481] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0731 18:13:13.124129   73479 system_pods.go:61] "storage-provisioner" [a9f0ab70-18f4-4fac-b858-a9177077fe29] Running
	I0731 18:13:13.124135   73479 system_pods.go:74] duration metric: took 3.776302431s to wait for pod list to return data ...
	I0731 18:13:13.124141   73479 default_sa.go:34] waiting for default service account to be created ...
	I0731 18:13:13.127100   73479 default_sa.go:45] found service account: "default"
	I0731 18:13:13.127137   73479 default_sa.go:55] duration metric: took 2.989455ms for default service account to be created ...
	I0731 18:13:13.127148   73479 system_pods.go:116] waiting for k8s-apps to be running ...
	I0731 18:13:13.132359   73479 system_pods.go:86] 8 kube-system pods found
	I0731 18:13:13.132379   73479 system_pods.go:89] "coredns-5cfdc65f69-k7clq" [e4be77b6-aa7c-45e2-90a1-6a8264fd5101] Running
	I0731 18:13:13.132387   73479 system_pods.go:89] "etcd-no-preload-673754" [d64e9194-dd33-4a28-b8c3-d25462f1dae8] Running
	I0731 18:13:13.132393   73479 system_pods.go:89] "kube-apiserver-no-preload-673754" [4497614d-51de-4a5f-93dc-1397446fe4c8] Running
	I0731 18:13:13.132399   73479 system_pods.go:89] "kube-controller-manager-no-preload-673754" [fe7f714e-d778-49e7-b4ef-44e6efb5bc33] Running
	I0731 18:13:13.132405   73479 system_pods.go:89] "kube-proxy-hqxh6" [4fb623c0-4be3-421c-85d6-1d76a90b874f] Running
	I0731 18:13:13.132410   73479 system_pods.go:89] "kube-scheduler-no-preload-673754" [558e9b78-a6a9-46b8-9db8-004126359c74] Running
	I0731 18:13:13.132420   73479 system_pods.go:89] "metrics-server-78fcd8795b-27pkr" [a63a156b-0446-4bd9-8619-de75edaeb481] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0731 18:13:13.132427   73479 system_pods.go:89] "storage-provisioner" [a9f0ab70-18f4-4fac-b858-a9177077fe29] Running
	I0731 18:13:13.132435   73479 system_pods.go:126] duration metric: took 5.281138ms to wait for k8s-apps to be running ...
	I0731 18:13:13.132443   73479 system_svc.go:44] waiting for kubelet service to be running ....
	I0731 18:13:13.132488   73479 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0731 18:13:13.148254   73479 system_svc.go:56] duration metric: took 15.802724ms WaitForService to wait for kubelet
	I0731 18:13:13.148281   73479 kubeadm.go:582] duration metric: took 4m26.650509962s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0731 18:13:13.148315   73479 node_conditions.go:102] verifying NodePressure condition ...
	I0731 18:13:13.151986   73479 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0731 18:13:13.152006   73479 node_conditions.go:123] node cpu capacity is 2
	I0731 18:13:13.152018   73479 node_conditions.go:105] duration metric: took 3.693857ms to run NodePressure ...
	I0731 18:13:13.152031   73479 start.go:241] waiting for startup goroutines ...
	I0731 18:13:13.152043   73479 start.go:246] waiting for cluster config update ...
	I0731 18:13:13.152058   73479 start.go:255] writing updated cluster config ...
	I0731 18:13:13.152347   73479 ssh_runner.go:195] Run: rm -f paused
	I0731 18:13:13.202434   73479 start.go:600] kubectl: 1.30.3, cluster: 1.31.0-beta.0 (minor skew: 1)
	I0731 18:13:13.205205   73479 out.go:177] * Done! kubectl is now configured to use "no-preload-673754" cluster and "default" namespace by default
	I0731 18:13:13.655618   74203 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0731 18:13:13.655843   74203 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0731 18:13:33.657356   74203 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0731 18:13:33.657560   74203 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0731 18:14:13.660934   74203 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0731 18:14:13.661161   74203 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0731 18:14:13.661183   74203 kubeadm.go:310] 
	I0731 18:14:13.661216   74203 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0731 18:14:13.661251   74203 kubeadm.go:310] 		timed out waiting for the condition
	I0731 18:14:13.661279   74203 kubeadm.go:310] 
	I0731 18:14:13.661338   74203 kubeadm.go:310] 	This error is likely caused by:
	I0731 18:14:13.661378   74203 kubeadm.go:310] 		- The kubelet is not running
	I0731 18:14:13.661477   74203 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0731 18:14:13.661483   74203 kubeadm.go:310] 
	I0731 18:14:13.661577   74203 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0731 18:14:13.661617   74203 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0731 18:14:13.661646   74203 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0731 18:14:13.661651   74203 kubeadm.go:310] 
	I0731 18:14:13.661781   74203 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0731 18:14:13.661897   74203 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0731 18:14:13.661909   74203 kubeadm.go:310] 
	I0731 18:14:13.662044   74203 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0731 18:14:13.662164   74203 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0731 18:14:13.662265   74203 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0731 18:14:13.662444   74203 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0731 18:14:13.662477   74203 kubeadm.go:310] 
	I0731 18:14:13.663123   74203 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0731 18:14:13.663235   74203 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0731 18:14:13.663331   74203 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	W0731 18:14:13.663497   74203 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0731 18:14:13.663559   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0731 18:14:18.956376   74203 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (5.292787213s)
	I0731 18:14:18.956479   74203 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0731 18:14:18.970820   74203 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0731 18:14:18.980747   74203 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0731 18:14:18.980771   74203 kubeadm.go:157] found existing configuration files:
	
	I0731 18:14:18.980816   74203 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0731 18:14:18.989985   74203 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0731 18:14:18.990063   74203 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0731 18:14:18.999143   74203 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0731 18:14:19.008740   74203 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0731 18:14:19.008798   74203 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0731 18:14:19.018729   74203 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0731 18:14:19.028953   74203 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0731 18:14:19.029015   74203 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0731 18:14:19.039399   74203 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0731 18:14:19.049072   74203 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0731 18:14:19.049124   74203 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0731 18:14:19.059592   74203 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0731 18:14:19.121542   74203 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0731 18:14:19.121613   74203 kubeadm.go:310] [preflight] Running pre-flight checks
	I0731 18:14:19.271989   74203 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0731 18:14:19.272100   74203 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0731 18:14:19.272223   74203 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0731 18:14:19.440224   74203 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0731 18:14:19.441929   74203 out.go:204]   - Generating certificates and keys ...
	I0731 18:14:19.442025   74203 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0731 18:14:19.442104   74203 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0731 18:14:19.442196   74203 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0731 18:14:19.442245   74203 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0731 18:14:19.442326   74203 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0731 18:14:19.442395   74203 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0731 18:14:19.442498   74203 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0731 18:14:19.442610   74203 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0731 18:14:19.442687   74203 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0731 18:14:19.442770   74203 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0731 18:14:19.442813   74203 kubeadm.go:310] [certs] Using the existing "sa" key
	I0731 18:14:19.442887   74203 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0731 18:14:19.481696   74203 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0731 18:14:19.804252   74203 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0731 18:14:20.038734   74203 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0731 18:14:20.211133   74203 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0731 18:14:20.225726   74203 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0731 18:14:20.227920   74203 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0731 18:14:20.227977   74203 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0731 18:14:20.364068   74203 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0731 18:14:20.365991   74203 out.go:204]   - Booting up control plane ...
	I0731 18:14:20.366094   74203 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0731 18:14:20.366195   74203 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0731 18:14:20.366270   74203 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0731 18:14:20.366379   74203 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0731 18:14:20.367688   74203 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0731 18:15:00.365616   74203 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0731 18:15:00.366184   74203 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0731 18:15:00.366412   74203 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0731 18:15:05.366332   74203 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0731 18:15:05.366529   74203 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0731 18:15:15.366241   74203 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0731 18:15:15.366499   74203 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0731 18:15:35.366114   74203 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0731 18:15:35.366344   74203 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0731 18:16:15.365995   74203 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0731 18:16:15.366181   74203 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0731 18:16:15.366191   74203 kubeadm.go:310] 
	I0731 18:16:15.366224   74203 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0731 18:16:15.366448   74203 kubeadm.go:310] 		timed out waiting for the condition
	I0731 18:16:15.366472   74203 kubeadm.go:310] 
	I0731 18:16:15.366517   74203 kubeadm.go:310] 	This error is likely caused by:
	I0731 18:16:15.366568   74203 kubeadm.go:310] 		- The kubelet is not running
	I0731 18:16:15.366723   74203 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0731 18:16:15.366740   74203 kubeadm.go:310] 
	I0731 18:16:15.366863   74203 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0731 18:16:15.366924   74203 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0731 18:16:15.366986   74203 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0731 18:16:15.366999   74203 kubeadm.go:310] 
	I0731 18:16:15.367153   74203 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0731 18:16:15.367271   74203 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0731 18:16:15.367283   74203 kubeadm.go:310] 
	I0731 18:16:15.367418   74203 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0731 18:16:15.367504   74203 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0731 18:16:15.367609   74203 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0731 18:16:15.367725   74203 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0731 18:16:15.367734   74203 kubeadm.go:310] 
	I0731 18:16:15.369210   74203 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0731 18:16:15.369361   74203 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0731 18:16:15.369434   74203 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0731 18:16:15.369496   74203 kubeadm.go:394] duration metric: took 8m6.557607575s to StartCluster
	I0731 18:16:15.369537   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 18:16:15.369590   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 18:16:15.432899   74203 cri.go:89] found id: ""
	I0731 18:16:15.432929   74203 logs.go:276] 0 containers: []
	W0731 18:16:15.432941   74203 logs.go:278] No container was found matching "kube-apiserver"
	I0731 18:16:15.432947   74203 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 18:16:15.433005   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 18:16:15.470506   74203 cri.go:89] found id: ""
	I0731 18:16:15.470534   74203 logs.go:276] 0 containers: []
	W0731 18:16:15.470542   74203 logs.go:278] No container was found matching "etcd"
	I0731 18:16:15.470549   74203 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 18:16:15.470609   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 18:16:15.502032   74203 cri.go:89] found id: ""
	I0731 18:16:15.502055   74203 logs.go:276] 0 containers: []
	W0731 18:16:15.502062   74203 logs.go:278] No container was found matching "coredns"
	I0731 18:16:15.502067   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 18:16:15.502115   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 18:16:15.533897   74203 cri.go:89] found id: ""
	I0731 18:16:15.533918   74203 logs.go:276] 0 containers: []
	W0731 18:16:15.533925   74203 logs.go:278] No container was found matching "kube-scheduler"
	I0731 18:16:15.533930   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 18:16:15.533980   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 18:16:15.565275   74203 cri.go:89] found id: ""
	I0731 18:16:15.565311   74203 logs.go:276] 0 containers: []
	W0731 18:16:15.565326   74203 logs.go:278] No container was found matching "kube-proxy"
	I0731 18:16:15.565333   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 18:16:15.565395   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 18:16:15.601402   74203 cri.go:89] found id: ""
	I0731 18:16:15.601427   74203 logs.go:276] 0 containers: []
	W0731 18:16:15.601435   74203 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 18:16:15.601440   74203 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 18:16:15.601489   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 18:16:15.638778   74203 cri.go:89] found id: ""
	I0731 18:16:15.638801   74203 logs.go:276] 0 containers: []
	W0731 18:16:15.638808   74203 logs.go:278] No container was found matching "kindnet"
	I0731 18:16:15.638813   74203 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 18:16:15.638861   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 18:16:15.675697   74203 cri.go:89] found id: ""
	I0731 18:16:15.675720   74203 logs.go:276] 0 containers: []
	W0731 18:16:15.675728   74203 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 18:16:15.675736   74203 logs.go:123] Gathering logs for describe nodes ...
	I0731 18:16:15.675748   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 18:16:15.745287   74203 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 18:16:15.745325   74203 logs.go:123] Gathering logs for CRI-O ...
	I0731 18:16:15.745341   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 18:16:15.848503   74203 logs.go:123] Gathering logs for container status ...
	I0731 18:16:15.848536   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 18:16:15.887234   74203 logs.go:123] Gathering logs for kubelet ...
	I0731 18:16:15.887258   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 18:16:15.934871   74203 logs.go:123] Gathering logs for dmesg ...
	I0731 18:16:15.934901   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	W0731 18:16:15.947727   74203 out.go:364] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0731 18:16:15.947769   74203 out.go:239] * 
	W0731 18:16:15.947817   74203 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0731 18:16:15.947836   74203 out.go:239] * 
	W0731 18:16:15.948669   74203 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0731 18:16:15.952286   74203 out.go:177] 
	W0731 18:16:15.953375   74203 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0731 18:16:15.953424   74203 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0731 18:16:15.953442   74203 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0731 18:16:15.954734   74203 out.go:177] 
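	Given the suggestion and issue link recorded just above, a manual retry of this start with the hinted kubelet setting would look roughly like the following (a sketch only; <profile> is a placeholder, since the profile name for this failing start is not visible in this excerpt, and the kvm2 driver is assumed from the KVM job configuration):
	
	    minikube start -p <profile> --driver=kvm2 --container-runtime=crio \
	      --kubernetes-version=v1.20.0 \
	      --extra-config=kubelet.cgroup-driver=systemd
	    minikube logs -p <profile> --file=logs.txt
	
	If the kubelet still refuses the healthz probe on port 10248 after that, the journalctl and crictl commands quoted in the error text remain the next diagnostic step.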
	
	
	==> CRI-O <==
	Jul 31 18:21:59 embed-certs-436067 crio[725]: time="2024-07-31 18:21:59.296544090Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722450119296518473,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133285,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=bf9be9a6-2d08-4b0c-91b5-1b05a069d00a name=/runtime.v1.ImageService/ImageFsInfo
	Jul 31 18:21:59 embed-certs-436067 crio[725]: time="2024-07-31 18:21:59.296982870Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=a7a9e49a-308c-4d0f-8704-8634e6626d27 name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 18:21:59 embed-certs-436067 crio[725]: time="2024-07-31 18:21:59.297033523Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=a7a9e49a-308c-4d0f-8704-8634e6626d27 name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 18:21:59 embed-certs-436067 crio[725]: time="2024-07-31 18:21:59.297251387Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:e81135eed50d1b7a66d7fe0cf97169684a888d597ad1e09ac13cc822beef8463,PodSandboxId:13ceb832a21bd1dad9e9382b188da32b85c8f81bc49d2ed970691d7d8295b0f2,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722449575925975759,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a7d95d32-1002-4fba-b0fd-555b872efa1f,},Annotations:map[string]string{io.kubernetes.container.hash: 305b4e0b,io.kubernetes.container.restartCount: 0,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cb37176a1402a636ef18f65e18efff58af3508014944aa89a30ef43f65a7087c,PodSandboxId:16503bb7d21bca8ce3dc91e3843567a6fac2d71496cea7ff11459c58ee0f7483,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722449575876581676,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-85spm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: eecb399a-0365-436d-8f14-a7853a5a2ea3,},Annotations:map[string]string{io.kubernetes.container.hash: e95b56e6,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /d
ev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8fa65ac5c2b20c87a2c33c21a876b003c2aee98795df76fce1619374d44a2eca,PodSandboxId:dbda29967246ca2b68e1316e89d393f54987a0186e7659bec1fb3394c3474267,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722449575762374998,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-fqkfd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2e5a67a3-6f2b-43a5-8b94-cf48202c5958,},Annotations:map[string]string{io.kubernetes.container.hash: af4356a7,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"cont
ainerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:99a38e72d22388429038d3580675f16e680416f8fda306bdf4c391f75d1d6f4e,PodSandboxId:4ce18c5492bcc33609c111e5dfc539411ed97b6c88cca97ffe6bc55035c69ab2,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722449575672794330,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-qpb62,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e57f157b-01b5-42ec-9630-b9b5ae94fe
5d,},Annotations:map[string]string{io.kubernetes.container.hash: 64ee34e,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cc1d1518390d9bb43923997af4c62caa816339308e4dc225f452a54a0a1b86e7,PodSandboxId:403ef356717193ee2ac40466ec7781b568ca6f3bf8e61fc6d16f2d8c80f54431,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722449555729790036,
Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-436067,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ec4df7a89391220824463c879809b59a,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c093f881541f076f977e16312dc643448d3f077778055fb2462c6952710814e7,PodSandboxId:29ce686dfcdef81bf5020ea4f2e895dd4f86ab430fa2f43be5579362af403cec,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722449555702714874,Labe
ls:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-436067,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 808aaf12096cf081a2351698f977532c,},Annotations:map[string]string{io.kubernetes.container.hash: f2dabe25,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6db79734980205959c736946c757d63f009ecfd4877f566a300321787965474b,PodSandboxId:410e5e30052883f36056869049399453d5345a52810890a6564042533f131031,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722449555709471226,Labels:map[string]string{io.kubernet
es.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-436067,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a55599c46901235db3664d7ea64eb319,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d9a9772dcf78ae6e2ec7aaaa6496f3c491d1011757c4b667f76dc2cad3bf5b72,PodSandboxId:9cba8f262e7f1a23846ed03a0eab0905455b21ffc753005b5a1a83b4caf2c1ab,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722449555631382221,Labels:map[string]string{io.kubernetes.container
.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-436067,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7a2b92359fe7ffe52886ca4ecfa670b1,},Annotations:map[string]string{io.kubernetes.container.hash: ee5b1ef,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=a7a9e49a-308c-4d0f-8704-8634e6626d27 name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 18:21:59 embed-certs-436067 crio[725]: time="2024-07-31 18:21:59.337044847Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=b36f2ac9-ef16-436f-9fdf-2363a2023abc name=/runtime.v1.RuntimeService/Version
	Jul 31 18:21:59 embed-certs-436067 crio[725]: time="2024-07-31 18:21:59.337115459Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=b36f2ac9-ef16-436f-9fdf-2363a2023abc name=/runtime.v1.RuntimeService/Version
	Jul 31 18:21:59 embed-certs-436067 crio[725]: time="2024-07-31 18:21:59.338597358Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=cd0b1d86-50d6-4010-8e75-d9a74c6f67b3 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 31 18:21:59 embed-certs-436067 crio[725]: time="2024-07-31 18:21:59.339100915Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722450119339075830,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133285,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=cd0b1d86-50d6-4010-8e75-d9a74c6f67b3 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 31 18:21:59 embed-certs-436067 crio[725]: time="2024-07-31 18:21:59.339684632Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=07e7dca2-df65-41c7-9e60-04f6ee271cb3 name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 18:21:59 embed-certs-436067 crio[725]: time="2024-07-31 18:21:59.339747607Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=07e7dca2-df65-41c7-9e60-04f6ee271cb3 name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 18:21:59 embed-certs-436067 crio[725]: time="2024-07-31 18:21:59.339992053Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:e81135eed50d1b7a66d7fe0cf97169684a888d597ad1e09ac13cc822beef8463,PodSandboxId:13ceb832a21bd1dad9e9382b188da32b85c8f81bc49d2ed970691d7d8295b0f2,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722449575925975759,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a7d95d32-1002-4fba-b0fd-555b872efa1f,},Annotations:map[string]string{io.kubernetes.container.hash: 305b4e0b,io.kubernetes.container.restartCount: 0,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cb37176a1402a636ef18f65e18efff58af3508014944aa89a30ef43f65a7087c,PodSandboxId:16503bb7d21bca8ce3dc91e3843567a6fac2d71496cea7ff11459c58ee0f7483,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722449575876581676,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-85spm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: eecb399a-0365-436d-8f14-a7853a5a2ea3,},Annotations:map[string]string{io.kubernetes.container.hash: e95b56e6,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /d
ev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8fa65ac5c2b20c87a2c33c21a876b003c2aee98795df76fce1619374d44a2eca,PodSandboxId:dbda29967246ca2b68e1316e89d393f54987a0186e7659bec1fb3394c3474267,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722449575762374998,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-fqkfd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2e5a67a3-6f2b-43a5-8b94-cf48202c5958,},Annotations:map[string]string{io.kubernetes.container.hash: af4356a7,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"cont
ainerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:99a38e72d22388429038d3580675f16e680416f8fda306bdf4c391f75d1d6f4e,PodSandboxId:4ce18c5492bcc33609c111e5dfc539411ed97b6c88cca97ffe6bc55035c69ab2,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722449575672794330,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-qpb62,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e57f157b-01b5-42ec-9630-b9b5ae94fe
5d,},Annotations:map[string]string{io.kubernetes.container.hash: 64ee34e,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cc1d1518390d9bb43923997af4c62caa816339308e4dc225f452a54a0a1b86e7,PodSandboxId:403ef356717193ee2ac40466ec7781b568ca6f3bf8e61fc6d16f2d8c80f54431,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722449555729790036,
Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-436067,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ec4df7a89391220824463c879809b59a,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c093f881541f076f977e16312dc643448d3f077778055fb2462c6952710814e7,PodSandboxId:29ce686dfcdef81bf5020ea4f2e895dd4f86ab430fa2f43be5579362af403cec,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722449555702714874,Labe
ls:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-436067,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 808aaf12096cf081a2351698f977532c,},Annotations:map[string]string{io.kubernetes.container.hash: f2dabe25,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6db79734980205959c736946c757d63f009ecfd4877f566a300321787965474b,PodSandboxId:410e5e30052883f36056869049399453d5345a52810890a6564042533f131031,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722449555709471226,Labels:map[string]string{io.kubernet
es.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-436067,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a55599c46901235db3664d7ea64eb319,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d9a9772dcf78ae6e2ec7aaaa6496f3c491d1011757c4b667f76dc2cad3bf5b72,PodSandboxId:9cba8f262e7f1a23846ed03a0eab0905455b21ffc753005b5a1a83b4caf2c1ab,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722449555631382221,Labels:map[string]string{io.kubernetes.container
.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-436067,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7a2b92359fe7ffe52886ca4ecfa670b1,},Annotations:map[string]string{io.kubernetes.container.hash: ee5b1ef,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=07e7dca2-df65-41c7-9e60-04f6ee271cb3 name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 18:21:59 embed-certs-436067 crio[725]: time="2024-07-31 18:21:59.377656235Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=6b1d070e-2b74-46ef-9684-16bfcbeb734d name=/runtime.v1.RuntimeService/Version
	Jul 31 18:21:59 embed-certs-436067 crio[725]: time="2024-07-31 18:21:59.377774169Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=6b1d070e-2b74-46ef-9684-16bfcbeb734d name=/runtime.v1.RuntimeService/Version
	Jul 31 18:21:59 embed-certs-436067 crio[725]: time="2024-07-31 18:21:59.378924728Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=d0329aee-0178-4abb-831d-aeaec4148167 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 31 18:21:59 embed-certs-436067 crio[725]: time="2024-07-31 18:21:59.381438712Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722450119381411104,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133285,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=d0329aee-0178-4abb-831d-aeaec4148167 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 31 18:21:59 embed-certs-436067 crio[725]: time="2024-07-31 18:21:59.382070634Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=8cf68de3-1ba0-4db2-8cfd-3ebcbb49ea3a name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 18:21:59 embed-certs-436067 crio[725]: time="2024-07-31 18:21:59.382137217Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=8cf68de3-1ba0-4db2-8cfd-3ebcbb49ea3a name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 18:21:59 embed-certs-436067 crio[725]: time="2024-07-31 18:21:59.382356629Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:e81135eed50d1b7a66d7fe0cf97169684a888d597ad1e09ac13cc822beef8463,PodSandboxId:13ceb832a21bd1dad9e9382b188da32b85c8f81bc49d2ed970691d7d8295b0f2,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722449575925975759,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a7d95d32-1002-4fba-b0fd-555b872efa1f,},Annotations:map[string]string{io.kubernetes.container.hash: 305b4e0b,io.kubernetes.container.restartCount: 0,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cb37176a1402a636ef18f65e18efff58af3508014944aa89a30ef43f65a7087c,PodSandboxId:16503bb7d21bca8ce3dc91e3843567a6fac2d71496cea7ff11459c58ee0f7483,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722449575876581676,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-85spm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: eecb399a-0365-436d-8f14-a7853a5a2ea3,},Annotations:map[string]string{io.kubernetes.container.hash: e95b56e6,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /d
ev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8fa65ac5c2b20c87a2c33c21a876b003c2aee98795df76fce1619374d44a2eca,PodSandboxId:dbda29967246ca2b68e1316e89d393f54987a0186e7659bec1fb3394c3474267,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722449575762374998,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-fqkfd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2e5a67a3-6f2b-43a5-8b94-cf48202c5958,},Annotations:map[string]string{io.kubernetes.container.hash: af4356a7,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"cont
ainerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:99a38e72d22388429038d3580675f16e680416f8fda306bdf4c391f75d1d6f4e,PodSandboxId:4ce18c5492bcc33609c111e5dfc539411ed97b6c88cca97ffe6bc55035c69ab2,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722449575672794330,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-qpb62,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e57f157b-01b5-42ec-9630-b9b5ae94fe
5d,},Annotations:map[string]string{io.kubernetes.container.hash: 64ee34e,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cc1d1518390d9bb43923997af4c62caa816339308e4dc225f452a54a0a1b86e7,PodSandboxId:403ef356717193ee2ac40466ec7781b568ca6f3bf8e61fc6d16f2d8c80f54431,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722449555729790036,
Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-436067,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ec4df7a89391220824463c879809b59a,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c093f881541f076f977e16312dc643448d3f077778055fb2462c6952710814e7,PodSandboxId:29ce686dfcdef81bf5020ea4f2e895dd4f86ab430fa2f43be5579362af403cec,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722449555702714874,Labe
ls:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-436067,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 808aaf12096cf081a2351698f977532c,},Annotations:map[string]string{io.kubernetes.container.hash: f2dabe25,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6db79734980205959c736946c757d63f009ecfd4877f566a300321787965474b,PodSandboxId:410e5e30052883f36056869049399453d5345a52810890a6564042533f131031,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722449555709471226,Labels:map[string]string{io.kubernet
es.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-436067,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a55599c46901235db3664d7ea64eb319,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d9a9772dcf78ae6e2ec7aaaa6496f3c491d1011757c4b667f76dc2cad3bf5b72,PodSandboxId:9cba8f262e7f1a23846ed03a0eab0905455b21ffc753005b5a1a83b4caf2c1ab,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722449555631382221,Labels:map[string]string{io.kubernetes.container
.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-436067,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7a2b92359fe7ffe52886ca4ecfa670b1,},Annotations:map[string]string{io.kubernetes.container.hash: ee5b1ef,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=8cf68de3-1ba0-4db2-8cfd-3ebcbb49ea3a name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 18:21:59 embed-certs-436067 crio[725]: time="2024-07-31 18:21:59.414040951Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=67fdd790-1614-47f2-8aee-f6a188a3f433 name=/runtime.v1.RuntimeService/Version
	Jul 31 18:21:59 embed-certs-436067 crio[725]: time="2024-07-31 18:21:59.414115652Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=67fdd790-1614-47f2-8aee-f6a188a3f433 name=/runtime.v1.RuntimeService/Version
	Jul 31 18:21:59 embed-certs-436067 crio[725]: time="2024-07-31 18:21:59.415633398Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=60f43804-5bb6-4b97-9df8-f0ba17d43352 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 31 18:21:59 embed-certs-436067 crio[725]: time="2024-07-31 18:21:59.416011114Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722450119415989043,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133285,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=60f43804-5bb6-4b97-9df8-f0ba17d43352 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 31 18:21:59 embed-certs-436067 crio[725]: time="2024-07-31 18:21:59.416588325Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=2a180d48-d11b-407d-8de6-1ba9d74c86cf name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 18:21:59 embed-certs-436067 crio[725]: time="2024-07-31 18:21:59.416638814Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=2a180d48-d11b-407d-8de6-1ba9d74c86cf name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 18:21:59 embed-certs-436067 crio[725]: time="2024-07-31 18:21:59.416821445Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:e81135eed50d1b7a66d7fe0cf97169684a888d597ad1e09ac13cc822beef8463,PodSandboxId:13ceb832a21bd1dad9e9382b188da32b85c8f81bc49d2ed970691d7d8295b0f2,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722449575925975759,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a7d95d32-1002-4fba-b0fd-555b872efa1f,},Annotations:map[string]string{io.kubernetes.container.hash: 305b4e0b,io.kubernetes.container.restartCount: 0,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cb37176a1402a636ef18f65e18efff58af3508014944aa89a30ef43f65a7087c,PodSandboxId:16503bb7d21bca8ce3dc91e3843567a6fac2d71496cea7ff11459c58ee0f7483,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722449575876581676,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-85spm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: eecb399a-0365-436d-8f14-a7853a5a2ea3,},Annotations:map[string]string{io.kubernetes.container.hash: e95b56e6,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /d
ev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8fa65ac5c2b20c87a2c33c21a876b003c2aee98795df76fce1619374d44a2eca,PodSandboxId:dbda29967246ca2b68e1316e89d393f54987a0186e7659bec1fb3394c3474267,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722449575762374998,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-fqkfd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2e5a67a3-6f2b-43a5-8b94-cf48202c5958,},Annotations:map[string]string{io.kubernetes.container.hash: af4356a7,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"cont
ainerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:99a38e72d22388429038d3580675f16e680416f8fda306bdf4c391f75d1d6f4e,PodSandboxId:4ce18c5492bcc33609c111e5dfc539411ed97b6c88cca97ffe6bc55035c69ab2,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722449575672794330,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-qpb62,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e57f157b-01b5-42ec-9630-b9b5ae94fe
5d,},Annotations:map[string]string{io.kubernetes.container.hash: 64ee34e,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cc1d1518390d9bb43923997af4c62caa816339308e4dc225f452a54a0a1b86e7,PodSandboxId:403ef356717193ee2ac40466ec7781b568ca6f3bf8e61fc6d16f2d8c80f54431,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722449555729790036,
Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-436067,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ec4df7a89391220824463c879809b59a,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c093f881541f076f977e16312dc643448d3f077778055fb2462c6952710814e7,PodSandboxId:29ce686dfcdef81bf5020ea4f2e895dd4f86ab430fa2f43be5579362af403cec,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722449555702714874,Labe
ls:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-436067,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 808aaf12096cf081a2351698f977532c,},Annotations:map[string]string{io.kubernetes.container.hash: f2dabe25,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6db79734980205959c736946c757d63f009ecfd4877f566a300321787965474b,PodSandboxId:410e5e30052883f36056869049399453d5345a52810890a6564042533f131031,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722449555709471226,Labels:map[string]string{io.kubernet
es.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-436067,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a55599c46901235db3664d7ea64eb319,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d9a9772dcf78ae6e2ec7aaaa6496f3c491d1011757c4b667f76dc2cad3bf5b72,PodSandboxId:9cba8f262e7f1a23846ed03a0eab0905455b21ffc753005b5a1a83b4caf2c1ab,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722449555631382221,Labels:map[string]string{io.kubernetes.container
.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-436067,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7a2b92359fe7ffe52886ca4ecfa670b1,},Annotations:map[string]string{io.kubernetes.container.hash: ee5b1ef,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=2a180d48-d11b-407d-8de6-1ba9d74c86cf name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	e81135eed50d1       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   9 minutes ago       Running             storage-provisioner       0                   13ceb832a21bd       storage-provisioner
	cb37176a1402a       55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1   9 minutes ago       Running             kube-proxy                0                   16503bb7d21bc       kube-proxy-85spm
	8fa65ac5c2b20       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   9 minutes ago       Running             coredns                   0                   dbda29967246c       coredns-7db6d8ff4d-fqkfd
	99a38e72d2238       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   9 minutes ago       Running             coredns                   0                   4ce18c5492bcc       coredns-7db6d8ff4d-qpb62
	cc1d1518390d9       76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e   9 minutes ago       Running             kube-controller-manager   2                   403ef35671719       kube-controller-manager-embed-certs-436067
	6db7973498020       3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2   9 minutes ago       Running             kube-scheduler            2                   410e5e3005288       kube-scheduler-embed-certs-436067
	c093f881541f0       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899   9 minutes ago       Running             etcd                      2                   29ce686dfcdef       etcd-embed-certs-436067
	d9a9772dcf78a       1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d   9 minutes ago       Running             kube-apiserver            2                   9cba8f262e7f1       kube-apiserver-embed-certs-436067
	
	
	==> coredns [8fa65ac5c2b20c87a2c33c21a876b003c2aee98795df76fce1619374d44a2eca] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	
	
	==> coredns [99a38e72d22388429038d3580675f16e680416f8fda306bdf4c391f75d1d6f4e] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	
	
	==> describe nodes <==
	Name:               embed-certs-436067
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=embed-certs-436067
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=1d737dad7efa60c56d30434fcd857dd3b14c91d9
	                    minikube.k8s.io/name=embed-certs-436067
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_07_31T18_12_42_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 31 Jul 2024 18:12:38 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-436067
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 31 Jul 2024 18:21:51 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 31 Jul 2024 18:18:08 +0000   Wed, 31 Jul 2024 18:12:36 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 31 Jul 2024 18:18:08 +0000   Wed, 31 Jul 2024 18:12:36 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 31 Jul 2024 18:18:08 +0000   Wed, 31 Jul 2024 18:12:36 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 31 Jul 2024 18:18:08 +0000   Wed, 31 Jul 2024 18:12:38 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.50.86
	  Hostname:    embed-certs-436067
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 f6fa4d76d50f402ca6798d9445da3dc8
	  System UUID:                f6fa4d76-d50f-402c-a679-8d9445da3dc8
	  Boot ID:                    fde69711-7c4c-4fc8-a71c-0af26845f36a
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-7db6d8ff4d-fqkfd                      100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     9m5s
	  kube-system                 coredns-7db6d8ff4d-qpb62                      100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     9m5s
	  kube-system                 etcd-embed-certs-436067                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         9m18s
	  kube-system                 kube-apiserver-embed-certs-436067             250m (12%)    0 (0%)      0 (0%)           0 (0%)         9m18s
	  kube-system                 kube-controller-manager-embed-certs-436067    200m (10%)    0 (0%)      0 (0%)           0 (0%)         9m18s
	  kube-system                 kube-proxy-85spm                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m5s
	  kube-system                 kube-scheduler-embed-certs-436067             100m (5%)     0 (0%)      0 (0%)           0 (0%)         9m18s
	  kube-system                 metrics-server-569cc877fc-pgf6q               100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         9m4s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m4s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   0 (0%)
	  memory             440Mi (20%)  340Mi (16%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 9m3s   kube-proxy       
	  Normal  NodeHasSufficientMemory  9m24s  kubelet          Node embed-certs-436067 status is now: NodeHasSufficientMemory
	  Normal  Starting                 9m18s  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  9m18s  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  9m18s  kubelet          Node embed-certs-436067 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    9m18s  kubelet          Node embed-certs-436067 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     9m18s  kubelet          Node embed-certs-436067 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           9m6s   node-controller  Node embed-certs-436067 event: Registered Node embed-certs-436067 in Controller
	
	
	==> dmesg <==
	[  +0.052278] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.039889] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.811769] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +1.851620] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.515852] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +7.972241] systemd-fstab-generator[642]: Ignoring "noauto" option for root device
	[  +0.055723] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.070736] systemd-fstab-generator[654]: Ignoring "noauto" option for root device
	[  +0.185374] systemd-fstab-generator[668]: Ignoring "noauto" option for root device
	[  +0.154499] systemd-fstab-generator[680]: Ignoring "noauto" option for root device
	[  +0.297404] systemd-fstab-generator[710]: Ignoring "noauto" option for root device
	[  +4.278734] systemd-fstab-generator[806]: Ignoring "noauto" option for root device
	[  +0.059716] kauditd_printk_skb: 130 callbacks suppressed
	[  +2.006855] systemd-fstab-generator[928]: Ignoring "noauto" option for root device
	[  +4.588837] kauditd_printk_skb: 97 callbacks suppressed
	[  +6.290731] kauditd_printk_skb: 79 callbacks suppressed
	[Jul31 18:12] kauditd_printk_skb: 4 callbacks suppressed
	[  +2.271517] systemd-fstab-generator[3583]: Ignoring "noauto" option for root device
	[  +4.642096] kauditd_printk_skb: 57 callbacks suppressed
	[  +1.913264] systemd-fstab-generator[3909]: Ignoring "noauto" option for root device
	[ +13.311734] systemd-fstab-generator[4097]: Ignoring "noauto" option for root device
	[  +0.083136] kauditd_printk_skb: 14 callbacks suppressed
	[Jul31 18:13] kauditd_printk_skb: 84 callbacks suppressed
	
	
	==> etcd [c093f881541f076f977e16312dc643448d3f077778055fb2462c6952710814e7] <==
	{"level":"info","ts":"2024-07-31T18:12:36.120345Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"c5418a0fb3fcfa37","local-member-id":"1d005a24580f63ce","added-peer-id":"1d005a24580f63ce","added-peer-peer-urls":["https://192.168.50.86:2380"]}
	{"level":"info","ts":"2024-07-31T18:12:36.160273Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-07-31T18:12:36.160554Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"1d005a24580f63ce","initial-advertise-peer-urls":["https://192.168.50.86:2380"],"listen-peer-urls":["https://192.168.50.86:2380"],"advertise-client-urls":["https://192.168.50.86:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.50.86:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-07-31T18:12:36.160613Z","caller":"embed/etcd.go:857","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-07-31T18:12:36.160985Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.50.86:2380"}
	{"level":"info","ts":"2024-07-31T18:12:36.16108Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.50.86:2380"}
	{"level":"info","ts":"2024-07-31T18:12:37.06932Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"1d005a24580f63ce is starting a new election at term 1"}
	{"level":"info","ts":"2024-07-31T18:12:37.069457Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"1d005a24580f63ce became pre-candidate at term 1"}
	{"level":"info","ts":"2024-07-31T18:12:37.069524Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"1d005a24580f63ce received MsgPreVoteResp from 1d005a24580f63ce at term 1"}
	{"level":"info","ts":"2024-07-31T18:12:37.069559Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"1d005a24580f63ce became candidate at term 2"}
	{"level":"info","ts":"2024-07-31T18:12:37.069591Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"1d005a24580f63ce received MsgVoteResp from 1d005a24580f63ce at term 2"}
	{"level":"info","ts":"2024-07-31T18:12:37.069617Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"1d005a24580f63ce became leader at term 2"}
	{"level":"info","ts":"2024-07-31T18:12:37.069641Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 1d005a24580f63ce elected leader 1d005a24580f63ce at term 2"}
	{"level":"info","ts":"2024-07-31T18:12:37.070893Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"1d005a24580f63ce","local-member-attributes":"{Name:embed-certs-436067 ClientURLs:[https://192.168.50.86:2379]}","request-path":"/0/members/1d005a24580f63ce/attributes","cluster-id":"c5418a0fb3fcfa37","publish-timeout":"7s"}
	{"level":"info","ts":"2024-07-31T18:12:37.071123Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-07-31T18:12:37.071704Z","caller":"etcdserver/server.go:2578","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-31T18:12:37.07191Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-07-31T18:12:37.072302Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-07-31T18:12:37.07239Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-07-31T18:12:37.075035Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-07-31T18:12:37.075331Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"c5418a0fb3fcfa37","local-member-id":"1d005a24580f63ce","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-31T18:12:37.075466Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-31T18:12:37.075513Z","caller":"etcdserver/server.go:2602","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-31T18:12:37.081387Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.50.86:2379"}
	2024/07/31 18:12:41 WARNING: [core] [Server #7] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	
	
	==> kernel <==
	 18:21:59 up 14 min,  0 users,  load average: 0.24, 0.20, 0.17
	Linux embed-certs-436067 5.10.207 #1 SMP Mon Jul 29 15:19:02 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [d9a9772dcf78ae6e2ec7aaaa6496f3c491d1011757c4b667f76dc2cad3bf5b72] <==
	I0731 18:15:56.358643       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0731 18:17:38.498912       1 handler_proxy.go:93] no RequestInfo found in the context
	E0731 18:17:38.499251       1 controller.go:146] Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	W0731 18:17:39.500254       1 handler_proxy.go:93] no RequestInfo found in the context
	E0731 18:17:39.500296       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0731 18:17:39.500307       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0731 18:17:39.500437       1 handler_proxy.go:93] no RequestInfo found in the context
	E0731 18:17:39.500543       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0731 18:17:39.501593       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0731 18:18:39.501280       1 handler_proxy.go:93] no RequestInfo found in the context
	E0731 18:18:39.501354       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0731 18:18:39.501363       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0731 18:18:39.502469       1 handler_proxy.go:93] no RequestInfo found in the context
	E0731 18:18:39.502551       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0731 18:18:39.502578       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0731 18:20:39.501584       1 handler_proxy.go:93] no RequestInfo found in the context
	E0731 18:20:39.501680       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0731 18:20:39.501689       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0731 18:20:39.502884       1 handler_proxy.go:93] no RequestInfo found in the context
	E0731 18:20:39.502982       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0731 18:20:39.502991       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	
	==> kube-controller-manager [cc1d1518390d9bb43923997af4c62caa816339308e4dc225f452a54a0a1b86e7] <==
	I0731 18:16:24.445097       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0731 18:16:53.972812       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0731 18:16:54.453928       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0731 18:17:23.977896       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0731 18:17:24.461596       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0731 18:17:53.984253       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0731 18:17:54.470383       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0731 18:18:23.990800       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0731 18:18:24.480178       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0731 18:18:45.266118       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-569cc877fc" duration="487.289µs"
	E0731 18:18:53.996625       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0731 18:18:54.490043       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0731 18:18:57.260251       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-569cc877fc" duration="99.536µs"
	E0731 18:19:24.001398       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0731 18:19:24.497664       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0731 18:19:54.007007       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0731 18:19:54.504880       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0731 18:20:24.011860       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0731 18:20:24.511754       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0731 18:20:54.017922       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0731 18:20:54.519990       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0731 18:21:24.024098       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0731 18:21:24.527952       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0731 18:21:54.029848       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0731 18:21:54.535850       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	
	==> kube-proxy [cb37176a1402a636ef18f65e18efff58af3508014944aa89a30ef43f65a7087c] <==
	I0731 18:12:56.246292       1 server_linux.go:69] "Using iptables proxy"
	I0731 18:12:56.255684       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.50.86"]
	I0731 18:12:56.289788       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0731 18:12:56.289841       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0731 18:12:56.289858       1 server_linux.go:165] "Using iptables Proxier"
	I0731 18:12:56.291948       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0731 18:12:56.292137       1 server.go:872] "Version info" version="v1.30.3"
	I0731 18:12:56.292159       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0731 18:12:56.293688       1 config.go:192] "Starting service config controller"
	I0731 18:12:56.293724       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0731 18:12:56.293752       1 config.go:101] "Starting endpoint slice config controller"
	I0731 18:12:56.293768       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0731 18:12:56.295488       1 config.go:319] "Starting node config controller"
	I0731 18:12:56.295506       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0731 18:12:56.394474       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0731 18:12:56.394531       1 shared_informer.go:320] Caches are synced for service config
	I0731 18:12:56.396132       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [6db79734980205959c736946c757d63f009ecfd4877f566a300321787965474b] <==
	W0731 18:12:38.541521       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0731 18:12:38.541545       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0731 18:12:38.541585       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0731 18:12:38.541618       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0731 18:12:38.541695       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0731 18:12:38.541719       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0731 18:12:39.382562       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0731 18:12:39.382590       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0731 18:12:39.385779       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0731 18:12:39.385811       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0731 18:12:39.435796       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0731 18:12:39.436013       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0731 18:12:39.466528       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0731 18:12:39.466672       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0731 18:12:39.481566       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0731 18:12:39.481896       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0731 18:12:39.525949       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0731 18:12:39.525992       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0731 18:12:39.657006       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0731 18:12:39.657333       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0731 18:12:39.800696       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0731 18:12:39.800830       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0731 18:12:39.804334       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0731 18:12:39.804530       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	I0731 18:12:42.423048       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Jul 31 18:19:41 embed-certs-436067 kubelet[3916]: E0731 18:19:41.264842    3916 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 31 18:19:41 embed-certs-436067 kubelet[3916]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 31 18:19:41 embed-certs-436067 kubelet[3916]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 31 18:19:41 embed-certs-436067 kubelet[3916]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 31 18:19:41 embed-certs-436067 kubelet[3916]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 31 18:19:48 embed-certs-436067 kubelet[3916]: E0731 18:19:48.247536    3916 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-pgf6q" podUID="b799fa04-0bf7-4914-9738-964b825577b5"
	Jul 31 18:20:00 embed-certs-436067 kubelet[3916]: E0731 18:20:00.247010    3916 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-pgf6q" podUID="b799fa04-0bf7-4914-9738-964b825577b5"
	Jul 31 18:20:12 embed-certs-436067 kubelet[3916]: E0731 18:20:12.249921    3916 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-pgf6q" podUID="b799fa04-0bf7-4914-9738-964b825577b5"
	Jul 31 18:20:27 embed-certs-436067 kubelet[3916]: E0731 18:20:27.246922    3916 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-pgf6q" podUID="b799fa04-0bf7-4914-9738-964b825577b5"
	Jul 31 18:20:41 embed-certs-436067 kubelet[3916]: E0731 18:20:41.249444    3916 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-pgf6q" podUID="b799fa04-0bf7-4914-9738-964b825577b5"
	Jul 31 18:20:41 embed-certs-436067 kubelet[3916]: E0731 18:20:41.265053    3916 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 31 18:20:41 embed-certs-436067 kubelet[3916]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 31 18:20:41 embed-certs-436067 kubelet[3916]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 31 18:20:41 embed-certs-436067 kubelet[3916]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 31 18:20:41 embed-certs-436067 kubelet[3916]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 31 18:20:54 embed-certs-436067 kubelet[3916]: E0731 18:20:54.246221    3916 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-pgf6q" podUID="b799fa04-0bf7-4914-9738-964b825577b5"
	Jul 31 18:21:09 embed-certs-436067 kubelet[3916]: E0731 18:21:09.247231    3916 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-pgf6q" podUID="b799fa04-0bf7-4914-9738-964b825577b5"
	Jul 31 18:21:23 embed-certs-436067 kubelet[3916]: E0731 18:21:23.250123    3916 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-pgf6q" podUID="b799fa04-0bf7-4914-9738-964b825577b5"
	Jul 31 18:21:38 embed-certs-436067 kubelet[3916]: E0731 18:21:38.246658    3916 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-pgf6q" podUID="b799fa04-0bf7-4914-9738-964b825577b5"
	Jul 31 18:21:41 embed-certs-436067 kubelet[3916]: E0731 18:21:41.265377    3916 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 31 18:21:41 embed-certs-436067 kubelet[3916]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 31 18:21:41 embed-certs-436067 kubelet[3916]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 31 18:21:41 embed-certs-436067 kubelet[3916]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 31 18:21:41 embed-certs-436067 kubelet[3916]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 31 18:21:50 embed-certs-436067 kubelet[3916]: E0731 18:21:50.246274    3916 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-pgf6q" podUID="b799fa04-0bf7-4914-9738-964b825577b5"
	
	
	==> storage-provisioner [e81135eed50d1b7a66d7fe0cf97169684a888d597ad1e09ac13cc822beef8463] <==
	I0731 18:12:56.208897       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0731 18:12:56.223886       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0731 18:12:56.224019       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0731 18:12:56.235758       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0731 18:12:56.235942       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-436067_416a8f7b-0ebd-4ef5-9f2d-e4f138bf005b!
	I0731 18:12:56.238247       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"f7e2a240-06d4-4b1c-9dc4-46302f362726", APIVersion:"v1", ResourceVersion:"402", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-436067_416a8f7b-0ebd-4ef5-9f2d-e4f138bf005b became leader
	I0731 18:12:56.337789       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-436067_416a8f7b-0ebd-4ef5-9f2d-e4f138bf005b!
	

                                                
                                                
-- /stdout --
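
The storage-provisioner lines at the end of the dump above show the usual client-go leader-election handshake: the process acquires the kube-system/k8s.io-minikube-hostpath lock, emits a LeaderElection event, and only then starts the provisioner controller. Below is a minimal Go sketch of that pattern; it is illustrative only (it uses a Lease lock and made-up timings, whereas the provisioner's event above references an Endpoints object, and it is not minikube's storage-provisioner code).

package main

import (
	"context"
	"log"
	"os"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
	"k8s.io/client-go/tools/leaderelection"
	"k8s.io/client-go/tools/leaderelection/resourcelock"
)

func main() {
	cfg, err := rest.InClusterConfig() // assumes this sketch runs as a pod in the cluster
	if err != nil {
		log.Fatal(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	id, _ := os.Hostname() // each candidate needs a unique identity

	// Lease-based lock; the real provisioner above advertises itself on an
	// Endpoints object instead, but the acquire/renew flow is the same.
	lock := &resourcelock.LeaseLock{
		LeaseMeta:  metav1.ObjectMeta{Name: "k8s.io-minikube-hostpath", Namespace: "kube-system"},
		Client:     client.CoordinationV1(),
		LockConfig: resourcelock.ResourceLockConfig{Identity: id},
	}

	leaderelection.RunOrDie(context.Background(), leaderelection.LeaderElectionConfig{
		Lock:            lock,
		LeaseDuration:   15 * time.Second, // assumed values, not the provisioner's
		RenewDeadline:   10 * time.Second,
		RetryPeriod:     2 * time.Second,
		ReleaseOnCancel: true,
		Callbacks: leaderelection.LeaderCallbacks{
			OnStartedLeading: func(ctx context.Context) {
				log.Println("acquired lease; starting provisioner controller")
				<-ctx.Done() // the controller loop would run here until leadership is lost
			},
			OnStoppedLeading: func() { log.Println("lost lease; shutting down") },
		},
	})
}
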
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-436067 -n embed-certs-436067
helpers_test.go:261: (dbg) Run:  kubectl --context embed-certs-436067 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-569cc877fc-pgf6q
helpers_test.go:274: ======> post-mortem[TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context embed-certs-436067 describe pod metrics-server-569cc877fc-pgf6q
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context embed-certs-436067 describe pod metrics-server-569cc877fc-pgf6q: exit status 1 (61.943584ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-569cc877fc-pgf6q" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context embed-certs-436067 describe pod metrics-server-569cc877fc-pgf6q: exit status 1
--- FAIL: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (544.09s)
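
The non-running pod this post-mortem keeps reporting, metrics-server-569cc877fc-pgf6q, is stuck in ImagePullBackOff by design: the earlier "addons enable metrics-server ... --registries=MetricsServer=fake.domain" step rewrote its image to fake.domain/registry.k8s.io/echoserver:1.4, which the kubelet can never pull (the follow-up kubectl describe above likely returns NotFound only because it runs against the default namespace while the pod lives in kube-system). Below is a minimal client-go sketch of the same check the post-mortem does with kubectl: list non-running pods via a field selector and print each container's waiting reason. It is illustrative only; it is not helpers_test.go, and loading the default kubeconfig is an assumption.

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Assumes the test cluster's context is current in ~/.kube/config.
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	c := kubernetes.NewForConfigOrDie(cfg)

	// Equivalent of --field-selector=status.phase!=Running across all namespaces.
	pods, err := c.CoreV1().Pods("").List(context.Background(),
		metav1.ListOptions{FieldSelector: "status.phase!=Running"})
	if err != nil {
		panic(err)
	}
	for _, p := range pods.Items {
		for _, cs := range p.Status.ContainerStatuses {
			if cs.State.Waiting != nil {
				// For the metrics-server pod above this would print ImagePullBackOff,
				// since its image points at the deliberately unreachable fake.domain registry.
				fmt.Printf("%s/%s: %s (%s)\n", p.Namespace, p.Name, cs.State.Waiting.Reason, cs.State.Waiting.Message)
			}
		}
	}
}
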

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (544.09s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
E0731 18:13:26.933976   15259 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/enable-default-cni-985288/client.crt: no such file or directory
E0731 18:14:05.345895   15259 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/addons-190022/client.crt: no such file or directory
E0731 18:14:21.706032   15259 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/auto-985288/client.crt: no such file or directory
E0731 18:15:00.362524   15259 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/kindnet-985288/client.crt: no such file or directory
E0731 18:15:35.991768   15259 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/calico-985288/client.crt: no such file or directory
E0731 18:15:44.753169   15259 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/auto-985288/client.crt: no such file or directory
E0731 18:16:07.908648   15259 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/custom-flannel-985288/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
start_stop_delete_test.go:274: ***** TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:274: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-673754 -n no-preload-673754
start_stop_delete_test.go:274: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: showing logs for failed pods as of 2024-07-31 18:22:13.730680203 +0000 UTC m=+6137.437418407
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
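
The wait that expires above is a simple poll: list pods in the kubernetes-dashboard namespace carrying the k8s-app=kubernetes-dashboard label every few seconds until one is Running, giving up when the 9m0s context deadline passes (hence the rate-limiter "context deadline exceeded" warning). Below is a minimal sketch of that kind of wait with client-go; it is a hypothetical helper, not the harness's actual pod-wait code, and the 5s interval and kubeconfig loading are assumptions.

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitForLabeledPod polls until a pod matching selector in ns is Running,
// or the timeout elapses (the error is then context.DeadlineExceeded).
func waitForLabeledPod(ctx context.Context, c kubernetes.Interface, ns, selector string, timeout time.Duration) error {
	return wait.PollUntilContextTimeout(ctx, 5*time.Second, timeout, true,
		func(ctx context.Context) (bool, error) {
			pods, err := c.CoreV1().Pods(ns).List(ctx, metav1.ListOptions{LabelSelector: selector})
			if err != nil {
				return false, nil // treat API errors as transient and keep polling
			}
			for _, p := range pods.Items {
				if p.Status.Phase == corev1.PodRunning {
					return true, nil
				}
			}
			return false, nil
		})
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	c := kubernetes.NewForConfigOrDie(cfg)
	err = waitForLabeledPod(context.Background(), c, "kubernetes-dashboard", "k8s-app=kubernetes-dashboard", 9*time.Minute)
	fmt.Println("wait result:", err) // context.DeadlineExceeded when the dashboard pod never starts
}
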
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-673754 -n no-preload-673754
helpers_test.go:244: <<< TestStartStop/group/no-preload/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/no-preload/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-673754 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p no-preload-673754 logs -n 25: (2.015594233s)
helpers_test.go:252: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p enable-default-cni-985288                           | enable-default-cni-985288    | jenkins | v1.33.1 | 31 Jul 24 17:58 UTC | 31 Jul 24 17:58 UTC |
	|         | sudo cat                                               |                              |         |         |                     |                     |
	|         | /etc/containerd/config.toml                            |                              |         |         |                     |                     |
	| ssh     | -p enable-default-cni-985288                           | enable-default-cni-985288    | jenkins | v1.33.1 | 31 Jul 24 17:58 UTC | 31 Jul 24 17:58 UTC |
	|         | sudo containerd config dump                            |                              |         |         |                     |                     |
	| ssh     | -p enable-default-cni-985288                           | enable-default-cni-985288    | jenkins | v1.33.1 | 31 Jul 24 17:58 UTC | 31 Jul 24 17:58 UTC |
	|         | sudo systemctl status crio                             |                              |         |         |                     |                     |
	|         | --all --full --no-pager                                |                              |         |         |                     |                     |
	| ssh     | -p enable-default-cni-985288                           | enable-default-cni-985288    | jenkins | v1.33.1 | 31 Jul 24 17:58 UTC | 31 Jul 24 17:58 UTC |
	|         | sudo systemctl cat crio                                |                              |         |         |                     |                     |
	|         | --no-pager                                             |                              |         |         |                     |                     |
	| ssh     | -p enable-default-cni-985288                           | enable-default-cni-985288    | jenkins | v1.33.1 | 31 Jul 24 17:58 UTC | 31 Jul 24 17:58 UTC |
	|         | sudo find /etc/crio -type f                            |                              |         |         |                     |                     |
	|         | -exec sh -c 'echo {}; cat {}'                          |                              |         |         |                     |                     |
	|         | \;                                                     |                              |         |         |                     |                     |
	| ssh     | -p enable-default-cni-985288                           | enable-default-cni-985288    | jenkins | v1.33.1 | 31 Jul 24 17:58 UTC | 31 Jul 24 17:58 UTC |
	|         | sudo crio config                                       |                              |         |         |                     |                     |
	| delete  | -p enable-default-cni-985288                           | enable-default-cni-985288    | jenkins | v1.33.1 | 31 Jul 24 17:58 UTC | 31 Jul 24 17:58 UTC |
	| delete  | -p                                                     | disable-driver-mounts-280161 | jenkins | v1.33.1 | 31 Jul 24 17:58 UTC | 31 Jul 24 17:58 UTC |
	|         | disable-driver-mounts-280161                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-094310 | jenkins | v1.33.1 | 31 Jul 24 17:58 UTC | 31 Jul 24 18:00 UTC |
	|         | default-k8s-diff-port-094310                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-673754             | no-preload-673754            | jenkins | v1.33.1 | 31 Jul 24 17:59 UTC | 31 Jul 24 17:59 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-673754                                   | no-preload-673754            | jenkins | v1.33.1 | 31 Jul 24 17:59 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-094310  | default-k8s-diff-port-094310 | jenkins | v1.33.1 | 31 Jul 24 18:00 UTC | 31 Jul 24 18:00 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-094310 | jenkins | v1.33.1 | 31 Jul 24 18:00 UTC |                     |
	|         | default-k8s-diff-port-094310                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-436067            | embed-certs-436067           | jenkins | v1.33.1 | 31 Jul 24 18:00 UTC | 31 Jul 24 18:00 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-436067                                  | embed-certs-436067           | jenkins | v1.33.1 | 31 Jul 24 18:00 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-276459        | old-k8s-version-276459       | jenkins | v1.33.1 | 31 Jul 24 18:02 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-673754                  | no-preload-673754            | jenkins | v1.33.1 | 31 Jul 24 18:02 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-673754 --memory=2200                     | no-preload-673754            | jenkins | v1.33.1 | 31 Jul 24 18:02 UTC | 31 Jul 24 18:13 UTC |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0-beta.0                    |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-094310       | default-k8s-diff-port-094310 | jenkins | v1.33.1 | 31 Jul 24 18:02 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-436067                 | embed-certs-436067           | jenkins | v1.33.1 | 31 Jul 24 18:02 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-094310 | jenkins | v1.33.1 | 31 Jul 24 18:02 UTC | 31 Jul 24 18:12 UTC |
	|         | default-k8s-diff-port-094310                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3                           |                              |         |         |                     |                     |
	| start   | -p embed-certs-436067                                  | embed-certs-436067           | jenkins | v1.33.1 | 31 Jul 24 18:02 UTC | 31 Jul 24 18:12 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3                           |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-276459                              | old-k8s-version-276459       | jenkins | v1.33.1 | 31 Jul 24 18:03 UTC | 31 Jul 24 18:03 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-276459             | old-k8s-version-276459       | jenkins | v1.33.1 | 31 Jul 24 18:03 UTC | 31 Jul 24 18:03 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-276459                              | old-k8s-version-276459       | jenkins | v1.33.1 | 31 Jul 24 18:03 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/31 18:03:55
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0731 18:03:55.344211   74203 out.go:291] Setting OutFile to fd 1 ...
	I0731 18:03:55.344313   74203 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 18:03:55.344321   74203 out.go:304] Setting ErrFile to fd 2...
	I0731 18:03:55.344324   74203 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 18:03:55.344541   74203 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19349-8084/.minikube/bin
	I0731 18:03:55.345055   74203 out.go:298] Setting JSON to false
	I0731 18:03:55.345905   74203 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":6379,"bootTime":1722442656,"procs":196,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1065-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0731 18:03:55.345962   74203 start.go:139] virtualization: kvm guest
	I0731 18:03:55.347848   74203 out.go:177] * [old-k8s-version-276459] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0731 18:03:55.349045   74203 notify.go:220] Checking for updates...
	I0731 18:03:55.349052   74203 out.go:177]   - MINIKUBE_LOCATION=19349
	I0731 18:03:55.350359   74203 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0731 18:03:55.351583   74203 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19349-8084/kubeconfig
	I0731 18:03:55.352789   74203 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19349-8084/.minikube
	I0731 18:03:55.354046   74203 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0731 18:03:55.355244   74203 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0731 18:03:55.356819   74203 config.go:182] Loaded profile config "old-k8s-version-276459": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0731 18:03:55.357218   74203 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 18:03:55.357268   74203 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 18:03:55.372081   74203 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38815
	I0731 18:03:55.372424   74203 main.go:141] libmachine: () Calling .GetVersion
	I0731 18:03:55.372950   74203 main.go:141] libmachine: Using API Version  1
	I0731 18:03:55.372972   74203 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 18:03:55.373263   74203 main.go:141] libmachine: () Calling .GetMachineName
	I0731 18:03:55.373466   74203 main.go:141] libmachine: (old-k8s-version-276459) Calling .DriverName
	I0731 18:03:55.375198   74203 out.go:177] * Kubernetes 1.30.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.30.3
	I0731 18:03:55.376370   74203 driver.go:392] Setting default libvirt URI to qemu:///system
	I0731 18:03:55.376714   74203 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 18:03:55.376748   74203 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 18:03:55.390924   74203 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46311
	I0731 18:03:55.391380   74203 main.go:141] libmachine: () Calling .GetVersion
	I0731 18:03:55.391853   74203 main.go:141] libmachine: Using API Version  1
	I0731 18:03:55.391875   74203 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 18:03:55.392165   74203 main.go:141] libmachine: () Calling .GetMachineName
	I0731 18:03:55.392389   74203 main.go:141] libmachine: (old-k8s-version-276459) Calling .DriverName
	I0731 18:03:55.425283   74203 out.go:177] * Using the kvm2 driver based on existing profile
	I0731 18:03:55.426485   74203 start.go:297] selected driver: kvm2
	I0731 18:03:55.426517   74203 start.go:901] validating driver "kvm2" against &{Name:old-k8s-version-276459 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-276459 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.26 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0731 18:03:55.426632   74203 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0731 18:03:55.427322   74203 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0731 18:03:55.427419   74203 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19349-8084/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0731 18:03:55.441518   74203 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0731 18:03:55.441891   74203 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0731 18:03:55.441921   74203 cni.go:84] Creating CNI manager for ""
	I0731 18:03:55.441928   74203 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0731 18:03:55.441970   74203 start.go:340] cluster config:
	{Name:old-k8s-version-276459 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-276459 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.26 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0731 18:03:55.442088   74203 iso.go:125] acquiring lock: {Name:mk4494bb5a63902bf12a909c3109db85229bc516 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0731 18:03:55.443745   74203 out.go:177] * Starting "old-k8s-version-276459" primary control-plane node in "old-k8s-version-276459" cluster
	I0731 18:03:55.299338   73479 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.126:22: connect: no route to host
	I0731 18:03:55.445026   74203 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0731 18:03:55.445062   74203 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19349-8084/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0731 18:03:55.445085   74203 cache.go:56] Caching tarball of preloaded images
	I0731 18:03:55.445157   74203 preload.go:172] Found /home/jenkins/minikube-integration/19349-8084/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0731 18:03:55.445167   74203 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0731 18:03:55.445250   74203 profile.go:143] Saving config to /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/old-k8s-version-276459/config.json ...
	I0731 18:03:55.445412   74203 start.go:360] acquireMachinesLock for old-k8s-version-276459: {Name:mkd78eacfab7d6c3058c6674434b3d889ec957e0 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0731 18:03:58.371340   73479 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.126:22: connect: no route to host
	I0731 18:04:04.451379   73479 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.126:22: connect: no route to host
	I0731 18:04:07.523408   73479 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.126:22: connect: no route to host
	I0731 18:04:13.603407   73479 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.126:22: connect: no route to host
	I0731 18:04:16.675437   73479 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.126:22: connect: no route to host
	I0731 18:04:22.755418   73479 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.126:22: connect: no route to host
	I0731 18:04:25.827434   73479 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.126:22: connect: no route to host
	I0731 18:04:31.907379   73479 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.126:22: connect: no route to host
	I0731 18:04:34.979426   73479 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.126:22: connect: no route to host
	I0731 18:04:41.059417   73479 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.126:22: connect: no route to host
	I0731 18:04:44.131434   73479 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.126:22: connect: no route to host
	I0731 18:04:50.211391   73479 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.126:22: connect: no route to host
	I0731 18:04:53.283445   73479 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.126:22: connect: no route to host
	I0731 18:04:59.363428   73479 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.126:22: connect: no route to host
	I0731 18:05:02.435450   73479 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.126:22: connect: no route to host
	I0731 18:05:08.515394   73479 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.126:22: connect: no route to host
	I0731 18:05:11.587394   73479 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.126:22: connect: no route to host
	I0731 18:05:17.667388   73479 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.126:22: connect: no route to host
	I0731 18:05:20.739413   73479 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.126:22: connect: no route to host
	I0731 18:05:26.819368   73479 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.126:22: connect: no route to host
	I0731 18:05:29.891394   73479 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.126:22: connect: no route to host
	I0731 18:05:35.971391   73479 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.126:22: connect: no route to host
	I0731 18:05:39.043445   73479 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.126:22: connect: no route to host
	I0731 18:05:45.123378   73479 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.126:22: connect: no route to host
	I0731 18:05:48.195378   73479 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.126:22: connect: no route to host
	I0731 18:05:54.275417   73479 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.126:22: connect: no route to host
	I0731 18:05:57.347374   73479 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.126:22: connect: no route to host
	I0731 18:06:03.427390   73479 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.126:22: connect: no route to host
	I0731 18:06:06.499378   73479 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.126:22: connect: no route to host
	I0731 18:06:12.579395   73479 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.126:22: connect: no route to host
	I0731 18:06:15.651447   73479 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.126:22: connect: no route to host
	I0731 18:06:21.731394   73479 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.126:22: connect: no route to host
	I0731 18:06:24.803405   73479 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.126:22: connect: no route to host
	I0731 18:06:30.883468   73479 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.126:22: connect: no route to host
	I0731 18:06:33.955397   73479 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.126:22: connect: no route to host
	I0731 18:06:40.035387   73479 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.126:22: connect: no route to host
	I0731 18:06:43.107448   73479 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.126:22: connect: no route to host
	I0731 18:06:49.187413   73479 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.126:22: connect: no route to host
	I0731 18:06:52.259420   73479 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.126:22: connect: no route to host
	I0731 18:06:58.339413   73479 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.126:22: connect: no route to host
	I0731 18:07:01.411396   73479 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.126:22: connect: no route to host
	I0731 18:07:04.416121   73696 start.go:364] duration metric: took 4m18.256589549s to acquireMachinesLock for "default-k8s-diff-port-094310"
	I0731 18:07:04.416183   73696 start.go:96] Skipping create...Using existing machine configuration
	I0731 18:07:04.416192   73696 fix.go:54] fixHost starting: 
	I0731 18:07:04.416522   73696 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 18:07:04.416570   73696 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 18:07:04.432249   73696 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32817
	I0731 18:07:04.432715   73696 main.go:141] libmachine: () Calling .GetVersion
	I0731 18:07:04.433206   73696 main.go:141] libmachine: Using API Version  1
	I0731 18:07:04.433234   73696 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 18:07:04.433616   73696 main.go:141] libmachine: () Calling .GetMachineName
	I0731 18:07:04.433833   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) Calling .DriverName
	I0731 18:07:04.434001   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) Calling .GetState
	I0731 18:07:04.436061   73696 fix.go:112] recreateIfNeeded on default-k8s-diff-port-094310: state=Stopped err=<nil>
	I0731 18:07:04.436082   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) Calling .DriverName
	W0731 18:07:04.436241   73696 fix.go:138] unexpected machine state, will restart: <nil>
	I0731 18:07:04.438139   73696 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-094310" ...
	I0731 18:07:04.439463   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) Calling .Start
	I0731 18:07:04.439678   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) Ensuring networks are active...
	I0731 18:07:04.440645   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) Ensuring network default is active
	I0731 18:07:04.441067   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) Ensuring network mk-default-k8s-diff-port-094310 is active
	I0731 18:07:04.441473   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) Getting domain xml...
	I0731 18:07:04.442331   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) Creating domain...
	I0731 18:07:05.660745   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) Waiting to get IP...
	I0731 18:07:05.661963   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) DBG | domain default-k8s-diff-port-094310 has defined MAC address 52:54:00:a9:b2:ae in network mk-default-k8s-diff-port-094310
	I0731 18:07:05.662532   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) DBG | unable to find current IP address of domain default-k8s-diff-port-094310 in network mk-default-k8s-diff-port-094310
	I0731 18:07:05.662620   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) DBG | I0731 18:07:05.662524   74854 retry.go:31] will retry after 294.438382ms: waiting for machine to come up
	I0731 18:07:05.959200   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) DBG | domain default-k8s-diff-port-094310 has defined MAC address 52:54:00:a9:b2:ae in network mk-default-k8s-diff-port-094310
	I0731 18:07:05.959668   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) DBG | unable to find current IP address of domain default-k8s-diff-port-094310 in network mk-default-k8s-diff-port-094310
	I0731 18:07:05.959699   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) DBG | I0731 18:07:05.959619   74854 retry.go:31] will retry after 331.316387ms: waiting for machine to come up
	I0731 18:07:04.413166   73479 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0731 18:07:04.413216   73479 main.go:141] libmachine: (no-preload-673754) Calling .GetMachineName
	I0731 18:07:04.413580   73479 buildroot.go:166] provisioning hostname "no-preload-673754"
	I0731 18:07:04.413609   73479 main.go:141] libmachine: (no-preload-673754) Calling .GetMachineName
	I0731 18:07:04.413827   73479 main.go:141] libmachine: (no-preload-673754) Calling .GetSSHHostname
	I0731 18:07:04.415964   73479 machine.go:97] duration metric: took 4m37.431900974s to provisionDockerMachine
	I0731 18:07:04.416013   73479 fix.go:56] duration metric: took 4m37.452176305s for fixHost
	I0731 18:07:04.416023   73479 start.go:83] releasing machines lock for "no-preload-673754", held for 4m37.452227129s
	W0731 18:07:04.416048   73479 start.go:714] error starting host: provision: host is not running
	W0731 18:07:04.416143   73479 out.go:239] ! StartHost failed, but will try again: provision: host is not running
	I0731 18:07:04.416157   73479 start.go:729] Will try again in 5 seconds ...
	I0731 18:07:06.292146   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) DBG | domain default-k8s-diff-port-094310 has defined MAC address 52:54:00:a9:b2:ae in network mk-default-k8s-diff-port-094310
	I0731 18:07:06.292533   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) DBG | unable to find current IP address of domain default-k8s-diff-port-094310 in network mk-default-k8s-diff-port-094310
	I0731 18:07:06.292555   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) DBG | I0731 18:07:06.292487   74854 retry.go:31] will retry after 324.512889ms: waiting for machine to come up
	I0731 18:07:06.619045   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) DBG | domain default-k8s-diff-port-094310 has defined MAC address 52:54:00:a9:b2:ae in network mk-default-k8s-diff-port-094310
	I0731 18:07:06.619440   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) DBG | unable to find current IP address of domain default-k8s-diff-port-094310 in network mk-default-k8s-diff-port-094310
	I0731 18:07:06.619470   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) DBG | I0731 18:07:06.619404   74854 retry.go:31] will retry after 556.332506ms: waiting for machine to come up
	I0731 18:07:07.177224   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) DBG | domain default-k8s-diff-port-094310 has defined MAC address 52:54:00:a9:b2:ae in network mk-default-k8s-diff-port-094310
	I0731 18:07:07.177689   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) DBG | unable to find current IP address of domain default-k8s-diff-port-094310 in network mk-default-k8s-diff-port-094310
	I0731 18:07:07.177722   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) DBG | I0731 18:07:07.177631   74854 retry.go:31] will retry after 599.567638ms: waiting for machine to come up
	I0731 18:07:07.778444   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) DBG | domain default-k8s-diff-port-094310 has defined MAC address 52:54:00:a9:b2:ae in network mk-default-k8s-diff-port-094310
	I0731 18:07:07.778848   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) DBG | unable to find current IP address of domain default-k8s-diff-port-094310 in network mk-default-k8s-diff-port-094310
	I0731 18:07:07.778885   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) DBG | I0731 18:07:07.778820   74854 retry.go:31] will retry after 944.17246ms: waiting for machine to come up
	I0731 18:07:08.724983   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) DBG | domain default-k8s-diff-port-094310 has defined MAC address 52:54:00:a9:b2:ae in network mk-default-k8s-diff-port-094310
	I0731 18:07:08.725484   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) DBG | unable to find current IP address of domain default-k8s-diff-port-094310 in network mk-default-k8s-diff-port-094310
	I0731 18:07:08.725512   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) DBG | I0731 18:07:08.725433   74854 retry.go:31] will retry after 1.077726279s: waiting for machine to come up
	I0731 18:07:09.805196   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) DBG | domain default-k8s-diff-port-094310 has defined MAC address 52:54:00:a9:b2:ae in network mk-default-k8s-diff-port-094310
	I0731 18:07:09.805629   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) DBG | unable to find current IP address of domain default-k8s-diff-port-094310 in network mk-default-k8s-diff-port-094310
	I0731 18:07:09.805667   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) DBG | I0731 18:07:09.805575   74854 retry.go:31] will retry after 1.140059854s: waiting for machine to come up
	I0731 18:07:10.951633   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) DBG | domain default-k8s-diff-port-094310 has defined MAC address 52:54:00:a9:b2:ae in network mk-default-k8s-diff-port-094310
	I0731 18:07:10.952066   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) DBG | unable to find current IP address of domain default-k8s-diff-port-094310 in network mk-default-k8s-diff-port-094310
	I0731 18:07:10.952091   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) DBG | I0731 18:07:10.952028   74854 retry.go:31] will retry after 1.691707383s: waiting for machine to come up
	I0731 18:07:09.418606   73479 start.go:360] acquireMachinesLock for no-preload-673754: {Name:mkd78eacfab7d6c3058c6674434b3d889ec957e0 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0731 18:07:12.645970   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) DBG | domain default-k8s-diff-port-094310 has defined MAC address 52:54:00:a9:b2:ae in network mk-default-k8s-diff-port-094310
	I0731 18:07:12.646588   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) DBG | unable to find current IP address of domain default-k8s-diff-port-094310 in network mk-default-k8s-diff-port-094310
	I0731 18:07:12.646623   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) DBG | I0731 18:07:12.646525   74854 retry.go:31] will retry after 2.257630784s: waiting for machine to come up
	I0731 18:07:14.905494   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) DBG | domain default-k8s-diff-port-094310 has defined MAC address 52:54:00:a9:b2:ae in network mk-default-k8s-diff-port-094310
	I0731 18:07:14.905891   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) DBG | unable to find current IP address of domain default-k8s-diff-port-094310 in network mk-default-k8s-diff-port-094310
	I0731 18:07:14.905922   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) DBG | I0731 18:07:14.905833   74854 retry.go:31] will retry after 2.877713561s: waiting for machine to come up
	I0731 18:07:17.786797   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) DBG | domain default-k8s-diff-port-094310 has defined MAC address 52:54:00:a9:b2:ae in network mk-default-k8s-diff-port-094310
	I0731 18:07:17.787173   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) DBG | unable to find current IP address of domain default-k8s-diff-port-094310 in network mk-default-k8s-diff-port-094310
	I0731 18:07:17.787194   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) DBG | I0731 18:07:17.787140   74854 retry.go:31] will retry after 3.028611559s: waiting for machine to come up
	I0731 18:07:20.817593   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) DBG | domain default-k8s-diff-port-094310 has defined MAC address 52:54:00:a9:b2:ae in network mk-default-k8s-diff-port-094310
	I0731 18:07:20.817898   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) Found IP for machine: 192.168.72.197
	I0731 18:07:20.817921   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) Reserving static IP address...
	I0731 18:07:20.817934   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) DBG | domain default-k8s-diff-port-094310 has current primary IP address 192.168.72.197 and MAC address 52:54:00:a9:b2:ae in network mk-default-k8s-diff-port-094310
	I0731 18:07:20.818352   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-094310", mac: "52:54:00:a9:b2:ae", ip: "192.168.72.197"} in network mk-default-k8s-diff-port-094310: {Iface:virbr1 ExpiryTime:2024-07-31 19:07:14 +0000 UTC Type:0 Mac:52:54:00:a9:b2:ae Iaid: IPaddr:192.168.72.197 Prefix:24 Hostname:default-k8s-diff-port-094310 Clientid:01:52:54:00:a9:b2:ae}
	I0731 18:07:20.818379   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) Reserved static IP address: 192.168.72.197
	I0731 18:07:20.818400   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) DBG | skip adding static IP to network mk-default-k8s-diff-port-094310 - found existing host DHCP lease matching {name: "default-k8s-diff-port-094310", mac: "52:54:00:a9:b2:ae", ip: "192.168.72.197"}
	I0731 18:07:20.818414   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) Waiting for SSH to be available...
	I0731 18:07:20.818431   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) DBG | Getting to WaitForSSH function...
	I0731 18:07:20.820417   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) DBG | domain default-k8s-diff-port-094310 has defined MAC address 52:54:00:a9:b2:ae in network mk-default-k8s-diff-port-094310
	I0731 18:07:20.820731   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a9:b2:ae", ip: ""} in network mk-default-k8s-diff-port-094310: {Iface:virbr1 ExpiryTime:2024-07-31 19:07:14 +0000 UTC Type:0 Mac:52:54:00:a9:b2:ae Iaid: IPaddr:192.168.72.197 Prefix:24 Hostname:default-k8s-diff-port-094310 Clientid:01:52:54:00:a9:b2:ae}
	I0731 18:07:20.820758   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) DBG | domain default-k8s-diff-port-094310 has defined IP address 192.168.72.197 and MAC address 52:54:00:a9:b2:ae in network mk-default-k8s-diff-port-094310
	I0731 18:07:20.820893   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) DBG | Using SSH client type: external
	I0731 18:07:20.820916   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) DBG | Using SSH private key: /home/jenkins/minikube-integration/19349-8084/.minikube/machines/default-k8s-diff-port-094310/id_rsa (-rw-------)
	I0731 18:07:20.820940   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.197 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19349-8084/.minikube/machines/default-k8s-diff-port-094310/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0731 18:07:20.820950   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) DBG | About to run SSH command:
	I0731 18:07:20.820959   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) DBG | exit 0
	I0731 18:07:20.943348   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) DBG | SSH cmd err, output: <nil>: 
	I0731 18:07:20.943708   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) Calling .GetConfigRaw
	I0731 18:07:20.944373   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) Calling .GetIP
	I0731 18:07:20.947080   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) DBG | domain default-k8s-diff-port-094310 has defined MAC address 52:54:00:a9:b2:ae in network mk-default-k8s-diff-port-094310
	I0731 18:07:20.947465   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a9:b2:ae", ip: ""} in network mk-default-k8s-diff-port-094310: {Iface:virbr1 ExpiryTime:2024-07-31 19:07:14 +0000 UTC Type:0 Mac:52:54:00:a9:b2:ae Iaid: IPaddr:192.168.72.197 Prefix:24 Hostname:default-k8s-diff-port-094310 Clientid:01:52:54:00:a9:b2:ae}
	I0731 18:07:20.947499   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) DBG | domain default-k8s-diff-port-094310 has defined IP address 192.168.72.197 and MAC address 52:54:00:a9:b2:ae in network mk-default-k8s-diff-port-094310
	I0731 18:07:20.947731   73696 profile.go:143] Saving config to /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/default-k8s-diff-port-094310/config.json ...
	I0731 18:07:20.947909   73696 machine.go:94] provisionDockerMachine start ...
	I0731 18:07:20.947926   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) Calling .DriverName
	I0731 18:07:20.948124   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) Calling .GetSSHHostname
	I0731 18:07:20.950698   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) DBG | domain default-k8s-diff-port-094310 has defined MAC address 52:54:00:a9:b2:ae in network mk-default-k8s-diff-port-094310
	I0731 18:07:20.951056   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a9:b2:ae", ip: ""} in network mk-default-k8s-diff-port-094310: {Iface:virbr1 ExpiryTime:2024-07-31 19:07:14 +0000 UTC Type:0 Mac:52:54:00:a9:b2:ae Iaid: IPaddr:192.168.72.197 Prefix:24 Hostname:default-k8s-diff-port-094310 Clientid:01:52:54:00:a9:b2:ae}
	I0731 18:07:20.951083   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) DBG | domain default-k8s-diff-port-094310 has defined IP address 192.168.72.197 and MAC address 52:54:00:a9:b2:ae in network mk-default-k8s-diff-port-094310
	I0731 18:07:20.951228   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) Calling .GetSSHPort
	I0731 18:07:20.951443   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) Calling .GetSSHKeyPath
	I0731 18:07:20.951608   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) Calling .GetSSHKeyPath
	I0731 18:07:20.951780   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) Calling .GetSSHUsername
	I0731 18:07:20.952016   73696 main.go:141] libmachine: Using SSH client type: native
	I0731 18:07:20.952208   73696 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.72.197 22 <nil> <nil>}
	I0731 18:07:20.952220   73696 main.go:141] libmachine: About to run SSH command:
	hostname
	I0731 18:07:21.051082   73696 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0731 18:07:21.051137   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) Calling .GetMachineName
	I0731 18:07:21.051424   73696 buildroot.go:166] provisioning hostname "default-k8s-diff-port-094310"
	I0731 18:07:21.051454   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) Calling .GetMachineName
	I0731 18:07:21.051650   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) Calling .GetSSHHostname
	I0731 18:07:21.054527   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) DBG | domain default-k8s-diff-port-094310 has defined MAC address 52:54:00:a9:b2:ae in network mk-default-k8s-diff-port-094310
	I0731 18:07:21.054913   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a9:b2:ae", ip: ""} in network mk-default-k8s-diff-port-094310: {Iface:virbr1 ExpiryTime:2024-07-31 19:07:14 +0000 UTC Type:0 Mac:52:54:00:a9:b2:ae Iaid: IPaddr:192.168.72.197 Prefix:24 Hostname:default-k8s-diff-port-094310 Clientid:01:52:54:00:a9:b2:ae}
	I0731 18:07:21.054940   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) DBG | domain default-k8s-diff-port-094310 has defined IP address 192.168.72.197 and MAC address 52:54:00:a9:b2:ae in network mk-default-k8s-diff-port-094310
	I0731 18:07:21.055151   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) Calling .GetSSHPort
	I0731 18:07:21.055377   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) Calling .GetSSHKeyPath
	I0731 18:07:21.055516   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) Calling .GetSSHKeyPath
	I0731 18:07:21.055670   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) Calling .GetSSHUsername
	I0731 18:07:21.055838   73696 main.go:141] libmachine: Using SSH client type: native
	I0731 18:07:21.056037   73696 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.72.197 22 <nil> <nil>}
	I0731 18:07:21.056051   73696 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-094310 && echo "default-k8s-diff-port-094310" | sudo tee /etc/hostname
	I0731 18:07:22.127802   73800 start.go:364] duration metric: took 4m27.5245732s to acquireMachinesLock for "embed-certs-436067"
	I0731 18:07:22.127861   73800 start.go:96] Skipping create...Using existing machine configuration
	I0731 18:07:22.127871   73800 fix.go:54] fixHost starting: 
	I0731 18:07:22.128296   73800 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 18:07:22.128386   73800 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 18:07:22.144783   73800 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34313
	I0731 18:07:22.145111   73800 main.go:141] libmachine: () Calling .GetVersion
	I0731 18:07:22.145531   73800 main.go:141] libmachine: Using API Version  1
	I0731 18:07:22.145549   73800 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 18:07:22.145894   73800 main.go:141] libmachine: () Calling .GetMachineName
	I0731 18:07:22.146086   73800 main.go:141] libmachine: (embed-certs-436067) Calling .DriverName
	I0731 18:07:22.146226   73800 main.go:141] libmachine: (embed-certs-436067) Calling .GetState
	I0731 18:07:22.147718   73800 fix.go:112] recreateIfNeeded on embed-certs-436067: state=Stopped err=<nil>
	I0731 18:07:22.147737   73800 main.go:141] libmachine: (embed-certs-436067) Calling .DriverName
	W0731 18:07:22.147878   73800 fix.go:138] unexpected machine state, will restart: <nil>
	I0731 18:07:22.149896   73800 out.go:177] * Restarting existing kvm2 VM for "embed-certs-436067" ...
	I0731 18:07:21.168797   73696 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-094310
	
	I0731 18:07:21.168828   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) Calling .GetSSHHostname
	I0731 18:07:21.171672   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) DBG | domain default-k8s-diff-port-094310 has defined MAC address 52:54:00:a9:b2:ae in network mk-default-k8s-diff-port-094310
	I0731 18:07:21.172012   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a9:b2:ae", ip: ""} in network mk-default-k8s-diff-port-094310: {Iface:virbr1 ExpiryTime:2024-07-31 19:07:14 +0000 UTC Type:0 Mac:52:54:00:a9:b2:ae Iaid: IPaddr:192.168.72.197 Prefix:24 Hostname:default-k8s-diff-port-094310 Clientid:01:52:54:00:a9:b2:ae}
	I0731 18:07:21.172043   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) DBG | domain default-k8s-diff-port-094310 has defined IP address 192.168.72.197 and MAC address 52:54:00:a9:b2:ae in network mk-default-k8s-diff-port-094310
	I0731 18:07:21.172183   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) Calling .GetSSHPort
	I0731 18:07:21.172351   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) Calling .GetSSHKeyPath
	I0731 18:07:21.172510   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) Calling .GetSSHKeyPath
	I0731 18:07:21.172633   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) Calling .GetSSHUsername
	I0731 18:07:21.172800   73696 main.go:141] libmachine: Using SSH client type: native
	I0731 18:07:21.172976   73696 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.72.197 22 <nil> <nil>}
	I0731 18:07:21.173010   73696 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-094310' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-094310/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-094310' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0731 18:07:21.284583   73696 main.go:141] libmachine: SSH cmd err, output: <nil>: 
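	(Editor's note, not part of the captured log: the exchange above is minikube's hostname provisioning step, run over SSH against the guest at 192.168.72.197 as user "docker" — set the hostname, then add a 127.0.1.1 entry to /etc/hosts only if it is missing. The sketch below is a rough, self-contained Go illustration of running such a remote command with golang.org/x/crypto/ssh; the private-key path is a placeholder and this is not minikube's own code.)

// Illustrative sketch (assumptions noted above): run one provisioning command
// on the guest over SSH, mirroring the guarded /etc/hosts edit in the log.
package main

import (
	"fmt"
	"log"
	"os"

	"golang.org/x/crypto/ssh"
)

func main() {
	key, err := os.ReadFile("/path/to/machines/id_rsa") // placeholder key path
	if err != nil {
		log.Fatal(err)
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		log.Fatal(err)
	}
	cfg := &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable for a throwaway test VM
	}
	client, err := ssh.Dial("tcp", "192.168.72.197:22", cfg)
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	sess, err := client.NewSession()
	if err != nil {
		log.Fatal(err)
	}
	defer sess.Close()

	// Same idea as the log: only append the hosts entry when it is absent.
	cmd := `grep -q 'default-k8s-diff-port-094310' /etc/hosts || ` +
		`echo '127.0.1.1 default-k8s-diff-port-094310' | sudo tee -a /etc/hosts`
	out, err := sess.CombinedOutput(cmd)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Printf("remote output: %s\n", out)
}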
	I0731 18:07:21.284610   73696 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19349-8084/.minikube CaCertPath:/home/jenkins/minikube-integration/19349-8084/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19349-8084/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19349-8084/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19349-8084/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19349-8084/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19349-8084/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19349-8084/.minikube}
	I0731 18:07:21.284633   73696 buildroot.go:174] setting up certificates
	I0731 18:07:21.284645   73696 provision.go:84] configureAuth start
	I0731 18:07:21.284656   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) Calling .GetMachineName
	I0731 18:07:21.284931   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) Calling .GetIP
	I0731 18:07:21.287526   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) DBG | domain default-k8s-diff-port-094310 has defined MAC address 52:54:00:a9:b2:ae in network mk-default-k8s-diff-port-094310
	I0731 18:07:21.287945   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a9:b2:ae", ip: ""} in network mk-default-k8s-diff-port-094310: {Iface:virbr1 ExpiryTime:2024-07-31 19:07:14 +0000 UTC Type:0 Mac:52:54:00:a9:b2:ae Iaid: IPaddr:192.168.72.197 Prefix:24 Hostname:default-k8s-diff-port-094310 Clientid:01:52:54:00:a9:b2:ae}
	I0731 18:07:21.287973   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) DBG | domain default-k8s-diff-port-094310 has defined IP address 192.168.72.197 and MAC address 52:54:00:a9:b2:ae in network mk-default-k8s-diff-port-094310
	I0731 18:07:21.288161   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) Calling .GetSSHHostname
	I0731 18:07:21.290169   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) DBG | domain default-k8s-diff-port-094310 has defined MAC address 52:54:00:a9:b2:ae in network mk-default-k8s-diff-port-094310
	I0731 18:07:21.290469   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a9:b2:ae", ip: ""} in network mk-default-k8s-diff-port-094310: {Iface:virbr1 ExpiryTime:2024-07-31 19:07:14 +0000 UTC Type:0 Mac:52:54:00:a9:b2:ae Iaid: IPaddr:192.168.72.197 Prefix:24 Hostname:default-k8s-diff-port-094310 Clientid:01:52:54:00:a9:b2:ae}
	I0731 18:07:21.290495   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) DBG | domain default-k8s-diff-port-094310 has defined IP address 192.168.72.197 and MAC address 52:54:00:a9:b2:ae in network mk-default-k8s-diff-port-094310
	I0731 18:07:21.290602   73696 provision.go:143] copyHostCerts
	I0731 18:07:21.290661   73696 exec_runner.go:144] found /home/jenkins/minikube-integration/19349-8084/.minikube/ca.pem, removing ...
	I0731 18:07:21.290673   73696 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19349-8084/.minikube/ca.pem
	I0731 18:07:21.290757   73696 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19349-8084/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19349-8084/.minikube/ca.pem (1082 bytes)
	I0731 18:07:21.290844   73696 exec_runner.go:144] found /home/jenkins/minikube-integration/19349-8084/.minikube/cert.pem, removing ...
	I0731 18:07:21.290856   73696 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19349-8084/.minikube/cert.pem
	I0731 18:07:21.290881   73696 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19349-8084/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19349-8084/.minikube/cert.pem (1123 bytes)
	I0731 18:07:21.290933   73696 exec_runner.go:144] found /home/jenkins/minikube-integration/19349-8084/.minikube/key.pem, removing ...
	I0731 18:07:21.290939   73696 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19349-8084/.minikube/key.pem
	I0731 18:07:21.290959   73696 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19349-8084/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19349-8084/.minikube/key.pem (1675 bytes)
	I0731 18:07:21.291005   73696 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19349-8084/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19349-8084/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19349-8084/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-094310 san=[127.0.0.1 192.168.72.197 default-k8s-diff-port-094310 localhost minikube]
	I0731 18:07:21.483241   73696 provision.go:177] copyRemoteCerts
	I0731 18:07:21.483314   73696 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0731 18:07:21.483343   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) Calling .GetSSHHostname
	I0731 18:07:21.486231   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) DBG | domain default-k8s-diff-port-094310 has defined MAC address 52:54:00:a9:b2:ae in network mk-default-k8s-diff-port-094310
	I0731 18:07:21.486619   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a9:b2:ae", ip: ""} in network mk-default-k8s-diff-port-094310: {Iface:virbr1 ExpiryTime:2024-07-31 19:07:14 +0000 UTC Type:0 Mac:52:54:00:a9:b2:ae Iaid: IPaddr:192.168.72.197 Prefix:24 Hostname:default-k8s-diff-port-094310 Clientid:01:52:54:00:a9:b2:ae}
	I0731 18:07:21.486659   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) DBG | domain default-k8s-diff-port-094310 has defined IP address 192.168.72.197 and MAC address 52:54:00:a9:b2:ae in network mk-default-k8s-diff-port-094310
	I0731 18:07:21.486850   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) Calling .GetSSHPort
	I0731 18:07:21.487084   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) Calling .GetSSHKeyPath
	I0731 18:07:21.487285   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) Calling .GetSSHUsername
	I0731 18:07:21.487443   73696 sshutil.go:53] new ssh client: &{IP:192.168.72.197 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19349-8084/.minikube/machines/default-k8s-diff-port-094310/id_rsa Username:docker}
	I0731 18:07:21.568564   73696 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19349-8084/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0731 18:07:21.598766   73696 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19349-8084/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I0731 18:07:21.621602   73696 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19349-8084/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0731 18:07:21.643361   73696 provision.go:87] duration metric: took 358.702982ms to configureAuth
	I0731 18:07:21.643393   73696 buildroot.go:189] setting minikube options for container-runtime
	I0731 18:07:21.643598   73696 config.go:182] Loaded profile config "default-k8s-diff-port-094310": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0731 18:07:21.643699   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) Calling .GetSSHHostname
	I0731 18:07:21.646487   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) DBG | domain default-k8s-diff-port-094310 has defined MAC address 52:54:00:a9:b2:ae in network mk-default-k8s-diff-port-094310
	I0731 18:07:21.646921   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a9:b2:ae", ip: ""} in network mk-default-k8s-diff-port-094310: {Iface:virbr1 ExpiryTime:2024-07-31 19:07:14 +0000 UTC Type:0 Mac:52:54:00:a9:b2:ae Iaid: IPaddr:192.168.72.197 Prefix:24 Hostname:default-k8s-diff-port-094310 Clientid:01:52:54:00:a9:b2:ae}
	I0731 18:07:21.646967   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) DBG | domain default-k8s-diff-port-094310 has defined IP address 192.168.72.197 and MAC address 52:54:00:a9:b2:ae in network mk-default-k8s-diff-port-094310
	I0731 18:07:21.647126   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) Calling .GetSSHPort
	I0731 18:07:21.647331   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) Calling .GetSSHKeyPath
	I0731 18:07:21.647527   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) Calling .GetSSHKeyPath
	I0731 18:07:21.647675   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) Calling .GetSSHUsername
	I0731 18:07:21.647879   73696 main.go:141] libmachine: Using SSH client type: native
	I0731 18:07:21.648051   73696 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.72.197 22 <nil> <nil>}
	I0731 18:07:21.648066   73696 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0731 18:07:21.896109   73696 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0731 18:07:21.896138   73696 machine.go:97] duration metric: took 948.216479ms to provisionDockerMachine
	I0731 18:07:21.896152   73696 start.go:293] postStartSetup for "default-k8s-diff-port-094310" (driver="kvm2")
	I0731 18:07:21.896166   73696 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0731 18:07:21.896185   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) Calling .DriverName
	I0731 18:07:21.896500   73696 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0731 18:07:21.896533   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) Calling .GetSSHHostname
	I0731 18:07:21.899447   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) DBG | domain default-k8s-diff-port-094310 has defined MAC address 52:54:00:a9:b2:ae in network mk-default-k8s-diff-port-094310
	I0731 18:07:21.899784   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a9:b2:ae", ip: ""} in network mk-default-k8s-diff-port-094310: {Iface:virbr1 ExpiryTime:2024-07-31 19:07:14 +0000 UTC Type:0 Mac:52:54:00:a9:b2:ae Iaid: IPaddr:192.168.72.197 Prefix:24 Hostname:default-k8s-diff-port-094310 Clientid:01:52:54:00:a9:b2:ae}
	I0731 18:07:21.899817   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) DBG | domain default-k8s-diff-port-094310 has defined IP address 192.168.72.197 and MAC address 52:54:00:a9:b2:ae in network mk-default-k8s-diff-port-094310
	I0731 18:07:21.899936   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) Calling .GetSSHPort
	I0731 18:07:21.900136   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) Calling .GetSSHKeyPath
	I0731 18:07:21.900268   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) Calling .GetSSHUsername
	I0731 18:07:21.900415   73696 sshutil.go:53] new ssh client: &{IP:192.168.72.197 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19349-8084/.minikube/machines/default-k8s-diff-port-094310/id_rsa Username:docker}
	I0731 18:07:21.981347   73696 ssh_runner.go:195] Run: cat /etc/os-release
	I0731 18:07:21.985297   73696 info.go:137] Remote host: Buildroot 2023.02.9
	I0731 18:07:21.985324   73696 filesync.go:126] Scanning /home/jenkins/minikube-integration/19349-8084/.minikube/addons for local assets ...
	I0731 18:07:21.985397   73696 filesync.go:126] Scanning /home/jenkins/minikube-integration/19349-8084/.minikube/files for local assets ...
	I0731 18:07:21.985513   73696 filesync.go:149] local asset: /home/jenkins/minikube-integration/19349-8084/.minikube/files/etc/ssl/certs/152592.pem -> 152592.pem in /etc/ssl/certs
	I0731 18:07:21.985646   73696 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0731 18:07:21.994700   73696 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19349-8084/.minikube/files/etc/ssl/certs/152592.pem --> /etc/ssl/certs/152592.pem (1708 bytes)
	I0731 18:07:22.022005   73696 start.go:296] duration metric: took 125.838186ms for postStartSetup
	I0731 18:07:22.022052   73696 fix.go:56] duration metric: took 17.605858897s for fixHost
	I0731 18:07:22.022075   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) Calling .GetSSHHostname
	I0731 18:07:22.025151   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) DBG | domain default-k8s-diff-port-094310 has defined MAC address 52:54:00:a9:b2:ae in network mk-default-k8s-diff-port-094310
	I0731 18:07:22.025445   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a9:b2:ae", ip: ""} in network mk-default-k8s-diff-port-094310: {Iface:virbr1 ExpiryTime:2024-07-31 19:07:14 +0000 UTC Type:0 Mac:52:54:00:a9:b2:ae Iaid: IPaddr:192.168.72.197 Prefix:24 Hostname:default-k8s-diff-port-094310 Clientid:01:52:54:00:a9:b2:ae}
	I0731 18:07:22.025478   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) DBG | domain default-k8s-diff-port-094310 has defined IP address 192.168.72.197 and MAC address 52:54:00:a9:b2:ae in network mk-default-k8s-diff-port-094310
	I0731 18:07:22.025622   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) Calling .GetSSHPort
	I0731 18:07:22.025829   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) Calling .GetSSHKeyPath
	I0731 18:07:22.026023   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) Calling .GetSSHKeyPath
	I0731 18:07:22.026199   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) Calling .GetSSHUsername
	I0731 18:07:22.026390   73696 main.go:141] libmachine: Using SSH client type: native
	I0731 18:07:22.026632   73696 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.72.197 22 <nil> <nil>}
	I0731 18:07:22.026653   73696 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0731 18:07:22.127643   73696 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722449242.103036947
	
	I0731 18:07:22.127668   73696 fix.go:216] guest clock: 1722449242.103036947
	I0731 18:07:22.127675   73696 fix.go:229] Guest: 2024-07-31 18:07:22.103036947 +0000 UTC Remote: 2024-07-31 18:07:22.022056299 +0000 UTC m=+275.995802468 (delta=80.980648ms)
	I0731 18:07:22.127698   73696 fix.go:200] guest clock delta is within tolerance: 80.980648ms
	I0731 18:07:22.127704   73696 start.go:83] releasing machines lock for "default-k8s-diff-port-094310", held for 17.711543911s
	I0731 18:07:22.127735   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) Calling .DriverName
	I0731 18:07:22.128006   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) Calling .GetIP
	I0731 18:07:22.130905   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) DBG | domain default-k8s-diff-port-094310 has defined MAC address 52:54:00:a9:b2:ae in network mk-default-k8s-diff-port-094310
	I0731 18:07:22.131291   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a9:b2:ae", ip: ""} in network mk-default-k8s-diff-port-094310: {Iface:virbr1 ExpiryTime:2024-07-31 19:07:14 +0000 UTC Type:0 Mac:52:54:00:a9:b2:ae Iaid: IPaddr:192.168.72.197 Prefix:24 Hostname:default-k8s-diff-port-094310 Clientid:01:52:54:00:a9:b2:ae}
	I0731 18:07:22.131322   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) DBG | domain default-k8s-diff-port-094310 has defined IP address 192.168.72.197 and MAC address 52:54:00:a9:b2:ae in network mk-default-k8s-diff-port-094310
	I0731 18:07:22.131568   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) Calling .DriverName
	I0731 18:07:22.132072   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) Calling .DriverName
	I0731 18:07:22.132244   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) Calling .DriverName
	I0731 18:07:22.132334   73696 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0731 18:07:22.132373   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) Calling .GetSSHHostname
	I0731 18:07:22.132488   73696 ssh_runner.go:195] Run: cat /version.json
	I0731 18:07:22.132511   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) Calling .GetSSHHostname
	I0731 18:07:22.134976   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) DBG | domain default-k8s-diff-port-094310 has defined MAC address 52:54:00:a9:b2:ae in network mk-default-k8s-diff-port-094310
	I0731 18:07:22.135269   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) DBG | domain default-k8s-diff-port-094310 has defined MAC address 52:54:00:a9:b2:ae in network mk-default-k8s-diff-port-094310
	I0731 18:07:22.135350   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a9:b2:ae", ip: ""} in network mk-default-k8s-diff-port-094310: {Iface:virbr1 ExpiryTime:2024-07-31 19:07:14 +0000 UTC Type:0 Mac:52:54:00:a9:b2:ae Iaid: IPaddr:192.168.72.197 Prefix:24 Hostname:default-k8s-diff-port-094310 Clientid:01:52:54:00:a9:b2:ae}
	I0731 18:07:22.135386   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) DBG | domain default-k8s-diff-port-094310 has defined IP address 192.168.72.197 and MAC address 52:54:00:a9:b2:ae in network mk-default-k8s-diff-port-094310
	I0731 18:07:22.135583   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) Calling .GetSSHPort
	I0731 18:07:22.135702   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a9:b2:ae", ip: ""} in network mk-default-k8s-diff-port-094310: {Iface:virbr1 ExpiryTime:2024-07-31 19:07:14 +0000 UTC Type:0 Mac:52:54:00:a9:b2:ae Iaid: IPaddr:192.168.72.197 Prefix:24 Hostname:default-k8s-diff-port-094310 Clientid:01:52:54:00:a9:b2:ae}
	I0731 18:07:22.135735   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) DBG | domain default-k8s-diff-port-094310 has defined IP address 192.168.72.197 and MAC address 52:54:00:a9:b2:ae in network mk-default-k8s-diff-port-094310
	I0731 18:07:22.135751   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) Calling .GetSSHKeyPath
	I0731 18:07:22.135837   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) Calling .GetSSHPort
	I0731 18:07:22.135945   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) Calling .GetSSHUsername
	I0731 18:07:22.135966   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) Calling .GetSSHKeyPath
	I0731 18:07:22.136068   73696 sshutil.go:53] new ssh client: &{IP:192.168.72.197 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19349-8084/.minikube/machines/default-k8s-diff-port-094310/id_rsa Username:docker}
	I0731 18:07:22.136101   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) Calling .GetSSHUsername
	I0731 18:07:22.136246   73696 sshutil.go:53] new ssh client: &{IP:192.168.72.197 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19349-8084/.minikube/machines/default-k8s-diff-port-094310/id_rsa Username:docker}
	I0731 18:07:22.245752   73696 ssh_runner.go:195] Run: systemctl --version
	I0731 18:07:22.251574   73696 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0731 18:07:22.391398   73696 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0731 18:07:22.396765   73696 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0731 18:07:22.396842   73696 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0731 18:07:22.412102   73696 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0731 18:07:22.412119   73696 start.go:495] detecting cgroup driver to use...
	I0731 18:07:22.412170   73696 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0731 18:07:22.427198   73696 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0731 18:07:22.441511   73696 docker.go:217] disabling cri-docker service (if available) ...
	I0731 18:07:22.441589   73696 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0731 18:07:22.455498   73696 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0731 18:07:22.469702   73696 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0731 18:07:22.584218   73696 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0731 18:07:22.719105   73696 docker.go:233] disabling docker service ...
	I0731 18:07:22.719195   73696 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0731 18:07:22.733625   73696 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0731 18:07:22.746500   73696 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0731 18:07:22.893624   73696 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0731 18:07:23.012965   73696 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0731 18:07:23.027132   73696 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0731 18:07:23.044766   73696 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0731 18:07:23.044832   73696 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 18:07:23.054276   73696 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0731 18:07:23.054363   73696 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 18:07:23.063873   73696 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 18:07:23.073392   73696 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 18:07:23.082908   73696 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0731 18:07:23.093468   73696 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 18:07:23.103419   73696 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 18:07:23.119920   73696 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 18:07:23.130427   73696 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0731 18:07:23.139397   73696 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0731 18:07:23.139465   73696 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0731 18:07:23.152275   73696 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0731 18:07:23.162439   73696 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0731 18:07:23.280030   73696 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0731 18:07:23.412019   73696 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0731 18:07:23.412083   73696 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0731 18:07:23.416884   73696 start.go:563] Will wait 60s for crictl version
	I0731 18:07:23.416930   73696 ssh_runner.go:195] Run: which crictl
	I0731 18:07:23.420518   73696 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0731 18:07:23.458895   73696 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0731 18:07:23.458976   73696 ssh_runner.go:195] Run: crio --version
	I0731 18:07:23.486961   73696 ssh_runner.go:195] Run: crio --version
	I0731 18:07:23.519648   73696 out.go:177] * Preparing Kubernetes v1.30.3 on CRI-O 1.29.1 ...
	I0731 18:07:22.151159   73800 main.go:141] libmachine: (embed-certs-436067) Calling .Start
	I0731 18:07:22.151319   73800 main.go:141] libmachine: (embed-certs-436067) Ensuring networks are active...
	I0731 18:07:22.151951   73800 main.go:141] libmachine: (embed-certs-436067) Ensuring network default is active
	I0731 18:07:22.152245   73800 main.go:141] libmachine: (embed-certs-436067) Ensuring network mk-embed-certs-436067 is active
	I0731 18:07:22.152747   73800 main.go:141] libmachine: (embed-certs-436067) Getting domain xml...
	I0731 18:07:22.153446   73800 main.go:141] libmachine: (embed-certs-436067) Creating domain...
	I0731 18:07:23.410530   73800 main.go:141] libmachine: (embed-certs-436067) Waiting to get IP...
	I0731 18:07:23.411687   73800 main.go:141] libmachine: (embed-certs-436067) DBG | domain embed-certs-436067 has defined MAC address 52:54:00:87:1e:25 in network mk-embed-certs-436067
	I0731 18:07:23.412152   73800 main.go:141] libmachine: (embed-certs-436067) DBG | unable to find current IP address of domain embed-certs-436067 in network mk-embed-certs-436067
	I0731 18:07:23.412231   73800 main.go:141] libmachine: (embed-certs-436067) DBG | I0731 18:07:23.412133   74994 retry.go:31] will retry after 233.281104ms: waiting for machine to come up
	I0731 18:07:23.646659   73800 main.go:141] libmachine: (embed-certs-436067) DBG | domain embed-certs-436067 has defined MAC address 52:54:00:87:1e:25 in network mk-embed-certs-436067
	I0731 18:07:23.647147   73800 main.go:141] libmachine: (embed-certs-436067) DBG | unable to find current IP address of domain embed-certs-436067 in network mk-embed-certs-436067
	I0731 18:07:23.647174   73800 main.go:141] libmachine: (embed-certs-436067) DBG | I0731 18:07:23.647069   74994 retry.go:31] will retry after 307.068766ms: waiting for machine to come up
	I0731 18:07:23.955614   73800 main.go:141] libmachine: (embed-certs-436067) DBG | domain embed-certs-436067 has defined MAC address 52:54:00:87:1e:25 in network mk-embed-certs-436067
	I0731 18:07:23.956140   73800 main.go:141] libmachine: (embed-certs-436067) DBG | unable to find current IP address of domain embed-certs-436067 in network mk-embed-certs-436067
	I0731 18:07:23.956166   73800 main.go:141] libmachine: (embed-certs-436067) DBG | I0731 18:07:23.956094   74994 retry.go:31] will retry after 410.095032ms: waiting for machine to come up
	I0731 18:07:24.367793   73800 main.go:141] libmachine: (embed-certs-436067) DBG | domain embed-certs-436067 has defined MAC address 52:54:00:87:1e:25 in network mk-embed-certs-436067
	I0731 18:07:24.368231   73800 main.go:141] libmachine: (embed-certs-436067) DBG | unable to find current IP address of domain embed-certs-436067 in network mk-embed-certs-436067
	I0731 18:07:24.368264   73800 main.go:141] libmachine: (embed-certs-436067) DBG | I0731 18:07:24.368188   74994 retry.go:31] will retry after 366.242055ms: waiting for machine to come up
	I0731 18:07:23.520927   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) Calling .GetIP
	I0731 18:07:23.524167   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) DBG | domain default-k8s-diff-port-094310 has defined MAC address 52:54:00:a9:b2:ae in network mk-default-k8s-diff-port-094310
	I0731 18:07:23.524615   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a9:b2:ae", ip: ""} in network mk-default-k8s-diff-port-094310: {Iface:virbr1 ExpiryTime:2024-07-31 19:07:14 +0000 UTC Type:0 Mac:52:54:00:a9:b2:ae Iaid: IPaddr:192.168.72.197 Prefix:24 Hostname:default-k8s-diff-port-094310 Clientid:01:52:54:00:a9:b2:ae}
	I0731 18:07:23.524663   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) DBG | domain default-k8s-diff-port-094310 has defined IP address 192.168.72.197 and MAC address 52:54:00:a9:b2:ae in network mk-default-k8s-diff-port-094310
	I0731 18:07:23.524913   73696 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0731 18:07:23.528924   73696 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0731 18:07:23.540496   73696 kubeadm.go:883] updating cluster {Name:default-k8s-diff-port-094310 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:default-k8s-diff-port-094310 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.197 Port:8444 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0731 18:07:23.540633   73696 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0731 18:07:23.540681   73696 ssh_runner.go:195] Run: sudo crictl images --output json
	I0731 18:07:23.579224   73696 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.3". assuming images are not preloaded.
	I0731 18:07:23.579295   73696 ssh_runner.go:195] Run: which lz4
	I0731 18:07:23.583060   73696 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0731 18:07:23.586888   73696 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0731 18:07:23.586922   73696 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19349-8084/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (406200976 bytes)
	I0731 18:07:24.864241   73696 crio.go:462] duration metric: took 1.281254602s to copy over tarball
	I0731 18:07:24.864321   73696 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0731 18:07:24.735741   73800 main.go:141] libmachine: (embed-certs-436067) DBG | domain embed-certs-436067 has defined MAC address 52:54:00:87:1e:25 in network mk-embed-certs-436067
	I0731 18:07:24.736325   73800 main.go:141] libmachine: (embed-certs-436067) DBG | unable to find current IP address of domain embed-certs-436067 in network mk-embed-certs-436067
	I0731 18:07:24.736356   73800 main.go:141] libmachine: (embed-certs-436067) DBG | I0731 18:07:24.736275   74994 retry.go:31] will retry after 593.179812ms: waiting for machine to come up
	I0731 18:07:25.331004   73800 main.go:141] libmachine: (embed-certs-436067) DBG | domain embed-certs-436067 has defined MAC address 52:54:00:87:1e:25 in network mk-embed-certs-436067
	I0731 18:07:25.331406   73800 main.go:141] libmachine: (embed-certs-436067) DBG | unable to find current IP address of domain embed-certs-436067 in network mk-embed-certs-436067
	I0731 18:07:25.331470   73800 main.go:141] libmachine: (embed-certs-436067) DBG | I0731 18:07:25.331381   74994 retry.go:31] will retry after 778.352855ms: waiting for machine to come up
	I0731 18:07:26.111327   73800 main.go:141] libmachine: (embed-certs-436067) DBG | domain embed-certs-436067 has defined MAC address 52:54:00:87:1e:25 in network mk-embed-certs-436067
	I0731 18:07:26.111828   73800 main.go:141] libmachine: (embed-certs-436067) DBG | unable to find current IP address of domain embed-certs-436067 in network mk-embed-certs-436067
	I0731 18:07:26.111855   73800 main.go:141] libmachine: (embed-certs-436067) DBG | I0731 18:07:26.111757   74994 retry.go:31] will retry after 993.157171ms: waiting for machine to come up
	I0731 18:07:27.106111   73800 main.go:141] libmachine: (embed-certs-436067) DBG | domain embed-certs-436067 has defined MAC address 52:54:00:87:1e:25 in network mk-embed-certs-436067
	I0731 18:07:27.106543   73800 main.go:141] libmachine: (embed-certs-436067) DBG | unable to find current IP address of domain embed-certs-436067 in network mk-embed-certs-436067
	I0731 18:07:27.106574   73800 main.go:141] libmachine: (embed-certs-436067) DBG | I0731 18:07:27.106507   74994 retry.go:31] will retry after 963.581879ms: waiting for machine to come up
	I0731 18:07:28.072100   73800 main.go:141] libmachine: (embed-certs-436067) DBG | domain embed-certs-436067 has defined MAC address 52:54:00:87:1e:25 in network mk-embed-certs-436067
	I0731 18:07:28.072628   73800 main.go:141] libmachine: (embed-certs-436067) DBG | unable to find current IP address of domain embed-certs-436067 in network mk-embed-certs-436067
	I0731 18:07:28.072657   73800 main.go:141] libmachine: (embed-certs-436067) DBG | I0731 18:07:28.072560   74994 retry.go:31] will retry after 1.608497907s: waiting for machine to come up
	I0731 18:07:27.052512   73696 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.188157854s)
	I0731 18:07:27.052542   73696 crio.go:469] duration metric: took 2.188269884s to extract the tarball
	I0731 18:07:27.052557   73696 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0731 18:07:27.089250   73696 ssh_runner.go:195] Run: sudo crictl images --output json
	I0731 18:07:27.130507   73696 crio.go:514] all images are preloaded for cri-o runtime.
	I0731 18:07:27.130536   73696 cache_images.go:84] Images are preloaded, skipping loading
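	(Editor's note, not part of the captured log: the preload check above shells out to "sudo crictl images --output json" and decides whether the expected images — e.g. registry.k8s.io/kube-apiserver:v1.30.3 — are already present. A minimal Go sketch of the same check follows; the JSON field names "images" and "repoTags" reflect crictl's output as commonly observed and should be treated as an assumption of this example, not a documented contract.)

// Illustrative sketch: ask crictl for its image list and look for one tag.
package main

import (
	"encoding/json"
	"fmt"
	"log"
	"os/exec"
	"strings"
)

type crictlImages struct {
	Images []struct {
		RepoTags []string `json:"repoTags"` // assumed field name, see note above
	} `json:"images"`
}

func main() {
	out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
	if err != nil {
		log.Fatal(err)
	}
	var imgs crictlImages
	if err := json.Unmarshal(out, &imgs); err != nil {
		log.Fatal(err)
	}
	want := "registry.k8s.io/kube-apiserver:v1.30.3" // image named in the log above
	for _, img := range imgs.Images {
		for _, tag := range img.RepoTags {
			if strings.Contains(tag, want) {
				fmt.Println("found:", tag)
				return
			}
		}
	}
	fmt.Println("not preloaded:", want)
}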
	I0731 18:07:27.130546   73696 kubeadm.go:934] updating node { 192.168.72.197 8444 v1.30.3 crio true true} ...
	I0731 18:07:27.130666   73696 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=default-k8s-diff-port-094310 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.197
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:default-k8s-diff-port-094310 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0731 18:07:27.130751   73696 ssh_runner.go:195] Run: crio config
	I0731 18:07:27.176571   73696 cni.go:84] Creating CNI manager for ""
	I0731 18:07:27.176598   73696 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0731 18:07:27.176614   73696 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0731 18:07:27.176640   73696 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.197 APIServerPort:8444 KubernetesVersion:v1.30.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-094310 NodeName:default-k8s-diff-port-094310 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.197"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.197 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0731 18:07:27.176821   73696 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.197
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-094310"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.197
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.197"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
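	(Editor's note, not part of the captured log: the multi-document YAML printed above is the kubeadm config minikube generates and later ships to /var/tmp/minikube/kubeadm.yaml.new on the guest. As a small illustration — not minikube code — the Go sketch below reads such a multi-document file with gopkg.in/yaml.v3 and reports each document's kind plus the API server bind port, which for this profile should be 8444.)

// Illustrative sketch: walk every YAML document in the generated kubeadm config.
package main

import (
	"fmt"
	"io"
	"log"
	"os"

	"gopkg.in/yaml.v3"
)

func main() {
	f, err := os.Open("/var/tmp/minikube/kubeadm.yaml") // path used in the log above
	if err != nil {
		log.Fatal(err)
	}
	defer f.Close()

	dec := yaml.NewDecoder(f)
	for {
		var doc map[string]interface{}
		if err := dec.Decode(&doc); err == io.EOF {
			break
		} else if err != nil {
			log.Fatal(err)
		}
		fmt.Println("kind:", doc["kind"])
		// The InitConfiguration document carries the advertise address and bind port.
		if ep, ok := doc["localAPIEndpoint"].(map[string]interface{}); ok {
			fmt.Println("  advertiseAddress:", ep["advertiseAddress"], "bindPort:", ep["bindPort"])
		}
	}
}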
	
	I0731 18:07:27.176904   73696 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0731 18:07:27.186582   73696 binaries.go:44] Found k8s binaries, skipping transfer
	I0731 18:07:27.186647   73696 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0731 18:07:27.195571   73696 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I0731 18:07:27.211103   73696 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0731 18:07:27.226226   73696 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2172 bytes)
	I0731 18:07:27.241763   73696 ssh_runner.go:195] Run: grep 192.168.72.197	control-plane.minikube.internal$ /etc/hosts
	I0731 18:07:27.245286   73696 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.197	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0731 18:07:27.256317   73696 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0731 18:07:27.377904   73696 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0731 18:07:27.394151   73696 certs.go:68] Setting up /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/default-k8s-diff-port-094310 for IP: 192.168.72.197
	I0731 18:07:27.394181   73696 certs.go:194] generating shared ca certs ...
	I0731 18:07:27.394201   73696 certs.go:226] acquiring lock for ca certs: {Name:mk49eed439085f9c30746706460ce213570d1997 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 18:07:27.394382   73696 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19349-8084/.minikube/ca.key
	I0731 18:07:27.394451   73696 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19349-8084/.minikube/proxy-client-ca.key
	I0731 18:07:27.394465   73696 certs.go:256] generating profile certs ...
	I0731 18:07:27.394577   73696 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/default-k8s-diff-port-094310/client.key
	I0731 18:07:27.394656   73696 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/default-k8s-diff-port-094310/apiserver.key.5264b27d
	I0731 18:07:27.394703   73696 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/default-k8s-diff-port-094310/proxy-client.key
	I0731 18:07:27.394851   73696 certs.go:484] found cert: /home/jenkins/minikube-integration/19349-8084/.minikube/certs/15259.pem (1338 bytes)
	W0731 18:07:27.394896   73696 certs.go:480] ignoring /home/jenkins/minikube-integration/19349-8084/.minikube/certs/15259_empty.pem, impossibly tiny 0 bytes
	I0731 18:07:27.394908   73696 certs.go:484] found cert: /home/jenkins/minikube-integration/19349-8084/.minikube/certs/ca-key.pem (1675 bytes)
	I0731 18:07:27.394935   73696 certs.go:484] found cert: /home/jenkins/minikube-integration/19349-8084/.minikube/certs/ca.pem (1082 bytes)
	I0731 18:07:27.394969   73696 certs.go:484] found cert: /home/jenkins/minikube-integration/19349-8084/.minikube/certs/cert.pem (1123 bytes)
	I0731 18:07:27.394990   73696 certs.go:484] found cert: /home/jenkins/minikube-integration/19349-8084/.minikube/certs/key.pem (1675 bytes)
	I0731 18:07:27.395028   73696 certs.go:484] found cert: /home/jenkins/minikube-integration/19349-8084/.minikube/files/etc/ssl/certs/152592.pem (1708 bytes)
	I0731 18:07:27.395749   73696 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19349-8084/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0731 18:07:27.425292   73696 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19349-8084/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0731 18:07:27.452753   73696 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19349-8084/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0731 18:07:27.481508   73696 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19349-8084/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0731 18:07:27.506990   73696 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/default-k8s-diff-port-094310/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0731 18:07:27.544385   73696 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/default-k8s-diff-port-094310/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0731 18:07:27.572947   73696 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/default-k8s-diff-port-094310/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0731 18:07:27.597895   73696 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/default-k8s-diff-port-094310/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0731 18:07:27.619324   73696 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19349-8084/.minikube/files/etc/ssl/certs/152592.pem --> /usr/share/ca-certificates/152592.pem (1708 bytes)
	I0731 18:07:27.641000   73696 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19349-8084/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0731 18:07:27.662483   73696 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19349-8084/.minikube/certs/15259.pem --> /usr/share/ca-certificates/15259.pem (1338 bytes)
	I0731 18:07:27.684400   73696 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0731 18:07:27.700058   73696 ssh_runner.go:195] Run: openssl version
	I0731 18:07:27.705637   73696 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/152592.pem && ln -fs /usr/share/ca-certificates/152592.pem /etc/ssl/certs/152592.pem"
	I0731 18:07:27.715558   73696 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/152592.pem
	I0731 18:07:27.719545   73696 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 31 16:54 /usr/share/ca-certificates/152592.pem
	I0731 18:07:27.719611   73696 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/152592.pem
	I0731 18:07:27.725076   73696 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/152592.pem /etc/ssl/certs/3ec20f2e.0"
	I0731 18:07:27.736589   73696 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0731 18:07:27.747908   73696 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0731 18:07:27.752392   73696 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 31 16:42 /usr/share/ca-certificates/minikubeCA.pem
	I0731 18:07:27.752448   73696 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0731 18:07:27.757939   73696 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0731 18:07:27.769571   73696 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/15259.pem && ln -fs /usr/share/ca-certificates/15259.pem /etc/ssl/certs/15259.pem"
	I0731 18:07:27.780730   73696 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/15259.pem
	I0731 18:07:27.785059   73696 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 31 16:54 /usr/share/ca-certificates/15259.pem
	I0731 18:07:27.785112   73696 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/15259.pem
	I0731 18:07:27.790477   73696 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/15259.pem /etc/ssl/certs/51391683.0"
	I0731 18:07:27.801519   73696 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0731 18:07:27.805654   73696 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0731 18:07:27.811381   73696 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0731 18:07:27.816786   73696 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0731 18:07:27.822643   73696 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0731 18:07:27.828371   73696 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0731 18:07:27.833908   73696 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
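	(Editor's note, not part of the captured log: the block above checks each control-plane certificate with "openssl x509 -checkend 86400", i.e. "is this cert still valid 24 hours from now?". The Go sketch below performs the equivalent check with crypto/x509 on one of the same files; it is an illustration, not minikube's implementation.)

// Illustrative sketch: the crypto/x509 equivalent of openssl's -checkend 86400.
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"log"
	"os"
	"time"
)

func main() {
	data, err := os.ReadFile("/var/lib/minikube/certs/apiserver-kubelet-client.crt")
	if err != nil {
		log.Fatal(err)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		log.Fatal("no PEM block found")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		log.Fatal(err)
	}
	deadline := time.Now().Add(24 * time.Hour) // same window as -checkend 86400
	if cert.NotAfter.Before(deadline) {
		fmt.Println("certificate expires within 24h:", cert.NotAfter)
	} else {
		fmt.Println("certificate valid past", deadline.Format(time.RFC3339))
	}
}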
	I0731 18:07:27.839455   73696 kubeadm.go:392] StartCluster: {Name:default-k8s-diff-port-094310 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:default-k8s-diff-port-094310 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.197 Port:8444 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0731 18:07:27.839537   73696 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0731 18:07:27.839605   73696 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0731 18:07:27.882993   73696 cri.go:89] found id: ""
	I0731 18:07:27.883055   73696 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0731 18:07:27.894363   73696 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0731 18:07:27.894386   73696 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0731 18:07:27.894431   73696 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0731 18:07:27.905192   73696 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0731 18:07:27.906138   73696 kubeconfig.go:125] found "default-k8s-diff-port-094310" server: "https://192.168.72.197:8444"
	I0731 18:07:27.908339   73696 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0731 18:07:27.918565   73696 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.72.197
	I0731 18:07:27.918603   73696 kubeadm.go:1160] stopping kube-system containers ...
	I0731 18:07:27.918613   73696 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0731 18:07:27.918663   73696 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0731 18:07:27.955675   73696 cri.go:89] found id: ""
	I0731 18:07:27.955744   73696 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0731 18:07:27.972234   73696 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0731 18:07:27.981273   73696 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0731 18:07:27.981289   73696 kubeadm.go:157] found existing configuration files:
	
	I0731 18:07:27.981323   73696 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0731 18:07:27.989775   73696 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0731 18:07:27.989837   73696 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0731 18:07:27.998816   73696 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0731 18:07:28.007142   73696 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0731 18:07:28.007197   73696 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0731 18:07:28.016124   73696 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0731 18:07:28.024471   73696 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0731 18:07:28.024519   73696 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0731 18:07:28.033105   73696 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0731 18:07:28.041306   73696 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0731 18:07:28.041355   73696 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0731 18:07:28.049958   73696 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0731 18:07:28.058718   73696 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0731 18:07:28.167720   73696 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0731 18:07:29.013539   73696 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0731 18:07:29.225696   73696 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0731 18:07:29.300822   73696 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0731 18:07:29.403471   73696 api_server.go:52] waiting for apiserver process to appear ...
	I0731 18:07:29.403567   73696 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:07:29.903755   73696 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:07:30.403896   73696 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:07:30.904160   73696 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:07:29.683622   73800 main.go:141] libmachine: (embed-certs-436067) DBG | domain embed-certs-436067 has defined MAC address 52:54:00:87:1e:25 in network mk-embed-certs-436067
	I0731 18:07:29.684148   73800 main.go:141] libmachine: (embed-certs-436067) DBG | unable to find current IP address of domain embed-certs-436067 in network mk-embed-certs-436067
	I0731 18:07:29.684180   73800 main.go:141] libmachine: (embed-certs-436067) DBG | I0731 18:07:29.684088   74994 retry.go:31] will retry after 1.813922887s: waiting for machine to come up
	I0731 18:07:31.500225   73800 main.go:141] libmachine: (embed-certs-436067) DBG | domain embed-certs-436067 has defined MAC address 52:54:00:87:1e:25 in network mk-embed-certs-436067
	I0731 18:07:31.500738   73800 main.go:141] libmachine: (embed-certs-436067) DBG | unable to find current IP address of domain embed-certs-436067 in network mk-embed-certs-436067
	I0731 18:07:31.500769   73800 main.go:141] libmachine: (embed-certs-436067) DBG | I0731 18:07:31.500694   74994 retry.go:31] will retry after 2.381670698s: waiting for machine to come up
	I0731 18:07:33.884129   73800 main.go:141] libmachine: (embed-certs-436067) DBG | domain embed-certs-436067 has defined MAC address 52:54:00:87:1e:25 in network mk-embed-certs-436067
	I0731 18:07:33.884564   73800 main.go:141] libmachine: (embed-certs-436067) DBG | unable to find current IP address of domain embed-certs-436067 in network mk-embed-certs-436067
	I0731 18:07:33.884587   73800 main.go:141] libmachine: (embed-certs-436067) DBG | I0731 18:07:33.884539   74994 retry.go:31] will retry after 3.269400744s: waiting for machine to come up
	I0731 18:07:31.404093   73696 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:07:31.417483   73696 api_server.go:72] duration metric: took 2.014013675s to wait for apiserver process to appear ...
	I0731 18:07:31.417511   73696 api_server.go:88] waiting for apiserver healthz status ...
	I0731 18:07:31.417533   73696 api_server.go:253] Checking apiserver healthz at https://192.168.72.197:8444/healthz ...
	I0731 18:07:34.340211   73696 api_server.go:279] https://192.168.72.197:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0731 18:07:34.340240   73696 api_server.go:103] status: https://192.168.72.197:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0731 18:07:34.340274   73696 api_server.go:253] Checking apiserver healthz at https://192.168.72.197:8444/healthz ...
	I0731 18:07:34.426446   73696 api_server.go:279] https://192.168.72.197:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0731 18:07:34.426504   73696 api_server.go:103] status: https://192.168.72.197:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0731 18:07:34.426522   73696 api_server.go:253] Checking apiserver healthz at https://192.168.72.197:8444/healthz ...
	I0731 18:07:34.436383   73696 api_server.go:279] https://192.168.72.197:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0731 18:07:34.436416   73696 api_server.go:103] status: https://192.168.72.197:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0731 18:07:34.918371   73696 api_server.go:253] Checking apiserver healthz at https://192.168.72.197:8444/healthz ...
	I0731 18:07:34.922668   73696 api_server.go:279] https://192.168.72.197:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0731 18:07:34.922699   73696 api_server.go:103] status: https://192.168.72.197:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0731 18:07:35.418265   73696 api_server.go:253] Checking apiserver healthz at https://192.168.72.197:8444/healthz ...
	I0731 18:07:35.435931   73696 api_server.go:279] https://192.168.72.197:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0731 18:07:35.435966   73696 api_server.go:103] status: https://192.168.72.197:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0731 18:07:35.918570   73696 api_server.go:253] Checking apiserver healthz at https://192.168.72.197:8444/healthz ...
	I0731 18:07:35.923674   73696 api_server.go:279] https://192.168.72.197:8444/healthz returned 200:
	ok
	I0731 18:07:35.929781   73696 api_server.go:141] control plane version: v1.30.3
	I0731 18:07:35.929809   73696 api_server.go:131] duration metric: took 4.512290009s to wait for apiserver health ...
	I0731 18:07:35.929820   73696 cni.go:84] Creating CNI manager for ""
	I0731 18:07:35.929827   73696 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0731 18:07:35.931827   73696 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0731 18:07:35.933104   73696 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0731 18:07:35.943548   73696 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0731 18:07:35.961932   73696 system_pods.go:43] waiting for kube-system pods to appear ...
	I0731 18:07:35.977855   73696 system_pods.go:59] 8 kube-system pods found
	I0731 18:07:35.977894   73696 system_pods.go:61] "coredns-7db6d8ff4d-kvxmb" [df8cf19b-5e62-4c38-9124-3257fea48fbb] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0731 18:07:35.977905   73696 system_pods.go:61] "etcd-default-k8s-diff-port-094310" [fe526f06-bd6c-4708-a0f3-e49b731e3a61] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0731 18:07:35.977915   73696 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-094310" [f0191941-87ad-4934-a02a-75b07649d5dc] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0731 18:07:35.977924   73696 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-094310" [28b4bdc4-4eea-41c0-9182-b07034d7363e] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0731 18:07:35.977936   73696 system_pods.go:61] "kube-proxy-8bgl7" [577052d5-fe7d-4547-bfbf-d3c938884767] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0731 18:07:35.977946   73696 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-094310" [df25971f-b25a-4344-a91e-c4b0c9ee5282] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0731 18:07:35.977964   73696 system_pods.go:61] "metrics-server-569cc877fc-64hp4" [847243bf-6568-41ff-a1e4-70b0a89c63dd] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0731 18:07:35.977978   73696 system_pods.go:61] "storage-provisioner" [6493bfa6-e40b-405c-93b6-ee5053efbdf6] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0731 18:07:35.977991   73696 system_pods.go:74] duration metric: took 16.038231ms to wait for pod list to return data ...
	I0731 18:07:35.978003   73696 node_conditions.go:102] verifying NodePressure condition ...
	I0731 18:07:35.983206   73696 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0731 18:07:35.983234   73696 node_conditions.go:123] node cpu capacity is 2
	I0731 18:07:35.983251   73696 node_conditions.go:105] duration metric: took 5.239492ms to run NodePressure ...
	I0731 18:07:35.983270   73696 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0731 18:07:37.155307   73800 main.go:141] libmachine: (embed-certs-436067) DBG | domain embed-certs-436067 has defined MAC address 52:54:00:87:1e:25 in network mk-embed-certs-436067
	I0731 18:07:37.155787   73800 main.go:141] libmachine: (embed-certs-436067) DBG | unable to find current IP address of domain embed-certs-436067 in network mk-embed-certs-436067
	I0731 18:07:37.155822   73800 main.go:141] libmachine: (embed-certs-436067) DBG | I0731 18:07:37.155717   74994 retry.go:31] will retry after 3.095991533s: waiting for machine to come up
	I0731 18:07:36.249072   73696 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0731 18:07:36.253639   73696 kubeadm.go:739] kubelet initialised
	I0731 18:07:36.253661   73696 kubeadm.go:740] duration metric: took 4.559461ms waiting for restarted kubelet to initialise ...
	I0731 18:07:36.253669   73696 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0731 18:07:36.258632   73696 pod_ready.go:78] waiting up to 4m0s for pod "coredns-7db6d8ff4d-kvxmb" in "kube-system" namespace to be "Ready" ...
	I0731 18:07:36.262785   73696 pod_ready.go:97] node "default-k8s-diff-port-094310" hosting pod "coredns-7db6d8ff4d-kvxmb" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-094310" has status "Ready":"False"
	I0731 18:07:36.262811   73696 pod_ready.go:81] duration metric: took 4.157359ms for pod "coredns-7db6d8ff4d-kvxmb" in "kube-system" namespace to be "Ready" ...
	E0731 18:07:36.262823   73696 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-094310" hosting pod "coredns-7db6d8ff4d-kvxmb" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-094310" has status "Ready":"False"
	I0731 18:07:36.262831   73696 pod_ready.go:78] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-094310" in "kube-system" namespace to be "Ready" ...
	I0731 18:07:36.269224   73696 pod_ready.go:97] node "default-k8s-diff-port-094310" hosting pod "etcd-default-k8s-diff-port-094310" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-094310" has status "Ready":"False"
	I0731 18:07:36.269250   73696 pod_ready.go:81] duration metric: took 6.406018ms for pod "etcd-default-k8s-diff-port-094310" in "kube-system" namespace to be "Ready" ...
	E0731 18:07:36.269263   73696 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-094310" hosting pod "etcd-default-k8s-diff-port-094310" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-094310" has status "Ready":"False"
	I0731 18:07:36.269270   73696 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-094310" in "kube-system" namespace to be "Ready" ...
	I0731 18:07:36.273379   73696 pod_ready.go:97] node "default-k8s-diff-port-094310" hosting pod "kube-apiserver-default-k8s-diff-port-094310" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-094310" has status "Ready":"False"
	I0731 18:07:36.273400   73696 pod_ready.go:81] duration metric: took 4.119945ms for pod "kube-apiserver-default-k8s-diff-port-094310" in "kube-system" namespace to be "Ready" ...
	E0731 18:07:36.273408   73696 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-094310" hosting pod "kube-apiserver-default-k8s-diff-port-094310" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-094310" has status "Ready":"False"
	I0731 18:07:36.273414   73696 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-094310" in "kube-system" namespace to be "Ready" ...
	I0731 18:07:36.365153   73696 pod_ready.go:97] node "default-k8s-diff-port-094310" hosting pod "kube-controller-manager-default-k8s-diff-port-094310" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-094310" has status "Ready":"False"
	I0731 18:07:36.365183   73696 pod_ready.go:81] duration metric: took 91.758203ms for pod "kube-controller-manager-default-k8s-diff-port-094310" in "kube-system" namespace to be "Ready" ...
	E0731 18:07:36.365195   73696 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-094310" hosting pod "kube-controller-manager-default-k8s-diff-port-094310" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-094310" has status "Ready":"False"
	I0731 18:07:36.365201   73696 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-8bgl7" in "kube-system" namespace to be "Ready" ...
	I0731 18:07:36.765371   73696 pod_ready.go:92] pod "kube-proxy-8bgl7" in "kube-system" namespace has status "Ready":"True"
	I0731 18:07:36.765393   73696 pod_ready.go:81] duration metric: took 400.181854ms for pod "kube-proxy-8bgl7" in "kube-system" namespace to be "Ready" ...
	I0731 18:07:36.765405   73696 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-094310" in "kube-system" namespace to be "Ready" ...
	I0731 18:07:38.770757   73696 pod_ready.go:102] pod "kube-scheduler-default-k8s-diff-port-094310" in "kube-system" namespace has status "Ready":"False"
	I0731 18:07:40.772702   73696 pod_ready.go:102] pod "kube-scheduler-default-k8s-diff-port-094310" in "kube-system" namespace has status "Ready":"False"
	I0731 18:07:41.552094   74203 start.go:364] duration metric: took 3m46.106649241s to acquireMachinesLock for "old-k8s-version-276459"
	I0731 18:07:41.552166   74203 start.go:96] Skipping create...Using existing machine configuration
	I0731 18:07:41.552174   74203 fix.go:54] fixHost starting: 
	I0731 18:07:41.552553   74203 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 18:07:41.552595   74203 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 18:07:41.569965   74203 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43329
	I0731 18:07:41.570361   74203 main.go:141] libmachine: () Calling .GetVersion
	I0731 18:07:41.570884   74203 main.go:141] libmachine: Using API Version  1
	I0731 18:07:41.570905   74203 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 18:07:41.571247   74203 main.go:141] libmachine: () Calling .GetMachineName
	I0731 18:07:41.571454   74203 main.go:141] libmachine: (old-k8s-version-276459) Calling .DriverName
	I0731 18:07:41.571605   74203 main.go:141] libmachine: (old-k8s-version-276459) Calling .GetState
	I0731 18:07:41.573081   74203 fix.go:112] recreateIfNeeded on old-k8s-version-276459: state=Stopped err=<nil>
	I0731 18:07:41.573114   74203 main.go:141] libmachine: (old-k8s-version-276459) Calling .DriverName
	W0731 18:07:41.573276   74203 fix.go:138] unexpected machine state, will restart: <nil>
	I0731 18:07:41.575254   74203 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-276459" ...
	I0731 18:07:40.254868   73800 main.go:141] libmachine: (embed-certs-436067) DBG | domain embed-certs-436067 has defined MAC address 52:54:00:87:1e:25 in network mk-embed-certs-436067
	I0731 18:07:40.255367   73800 main.go:141] libmachine: (embed-certs-436067) Found IP for machine: 192.168.50.86
	I0731 18:07:40.255385   73800 main.go:141] libmachine: (embed-certs-436067) Reserving static IP address...
	I0731 18:07:40.255405   73800 main.go:141] libmachine: (embed-certs-436067) DBG | domain embed-certs-436067 has current primary IP address 192.168.50.86 and MAC address 52:54:00:87:1e:25 in network mk-embed-certs-436067
	I0731 18:07:40.255798   73800 main.go:141] libmachine: (embed-certs-436067) DBG | found host DHCP lease matching {name: "embed-certs-436067", mac: "52:54:00:87:1e:25", ip: "192.168.50.86"} in network mk-embed-certs-436067: {Iface:virbr4 ExpiryTime:2024-07-31 19:07:32 +0000 UTC Type:0 Mac:52:54:00:87:1e:25 Iaid: IPaddr:192.168.50.86 Prefix:24 Hostname:embed-certs-436067 Clientid:01:52:54:00:87:1e:25}
	I0731 18:07:40.255822   73800 main.go:141] libmachine: (embed-certs-436067) Reserved static IP address: 192.168.50.86
	I0731 18:07:40.255839   73800 main.go:141] libmachine: (embed-certs-436067) DBG | skip adding static IP to network mk-embed-certs-436067 - found existing host DHCP lease matching {name: "embed-certs-436067", mac: "52:54:00:87:1e:25", ip: "192.168.50.86"}
	I0731 18:07:40.255853   73800 main.go:141] libmachine: (embed-certs-436067) DBG | Getting to WaitForSSH function...
	I0731 18:07:40.255865   73800 main.go:141] libmachine: (embed-certs-436067) Waiting for SSH to be available...
	I0731 18:07:40.257994   73800 main.go:141] libmachine: (embed-certs-436067) DBG | domain embed-certs-436067 has defined MAC address 52:54:00:87:1e:25 in network mk-embed-certs-436067
	I0731 18:07:40.258304   73800 main.go:141] libmachine: (embed-certs-436067) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:1e:25", ip: ""} in network mk-embed-certs-436067: {Iface:virbr4 ExpiryTime:2024-07-31 19:07:32 +0000 UTC Type:0 Mac:52:54:00:87:1e:25 Iaid: IPaddr:192.168.50.86 Prefix:24 Hostname:embed-certs-436067 Clientid:01:52:54:00:87:1e:25}
	I0731 18:07:40.258331   73800 main.go:141] libmachine: (embed-certs-436067) DBG | domain embed-certs-436067 has defined IP address 192.168.50.86 and MAC address 52:54:00:87:1e:25 in network mk-embed-certs-436067
	I0731 18:07:40.258475   73800 main.go:141] libmachine: (embed-certs-436067) DBG | Using SSH client type: external
	I0731 18:07:40.258492   73800 main.go:141] libmachine: (embed-certs-436067) DBG | Using SSH private key: /home/jenkins/minikube-integration/19349-8084/.minikube/machines/embed-certs-436067/id_rsa (-rw-------)
	I0731 18:07:40.258594   73800 main.go:141] libmachine: (embed-certs-436067) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.86 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19349-8084/.minikube/machines/embed-certs-436067/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0731 18:07:40.258625   73800 main.go:141] libmachine: (embed-certs-436067) DBG | About to run SSH command:
	I0731 18:07:40.258644   73800 main.go:141] libmachine: (embed-certs-436067) DBG | exit 0
	I0731 18:07:40.387051   73800 main.go:141] libmachine: (embed-certs-436067) DBG | SSH cmd err, output: <nil>: 
	I0731 18:07:40.387459   73800 main.go:141] libmachine: (embed-certs-436067) Calling .GetConfigRaw
	I0731 18:07:40.388093   73800 main.go:141] libmachine: (embed-certs-436067) Calling .GetIP
	I0731 18:07:40.390805   73800 main.go:141] libmachine: (embed-certs-436067) DBG | domain embed-certs-436067 has defined MAC address 52:54:00:87:1e:25 in network mk-embed-certs-436067
	I0731 18:07:40.391260   73800 main.go:141] libmachine: (embed-certs-436067) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:1e:25", ip: ""} in network mk-embed-certs-436067: {Iface:virbr4 ExpiryTime:2024-07-31 19:07:32 +0000 UTC Type:0 Mac:52:54:00:87:1e:25 Iaid: IPaddr:192.168.50.86 Prefix:24 Hostname:embed-certs-436067 Clientid:01:52:54:00:87:1e:25}
	I0731 18:07:40.391306   73800 main.go:141] libmachine: (embed-certs-436067) DBG | domain embed-certs-436067 has defined IP address 192.168.50.86 and MAC address 52:54:00:87:1e:25 in network mk-embed-certs-436067
	I0731 18:07:40.391534   73800 profile.go:143] Saving config to /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/embed-certs-436067/config.json ...
	I0731 18:07:40.391769   73800 machine.go:94] provisionDockerMachine start ...
	I0731 18:07:40.391793   73800 main.go:141] libmachine: (embed-certs-436067) Calling .DriverName
	I0731 18:07:40.392012   73800 main.go:141] libmachine: (embed-certs-436067) Calling .GetSSHHostname
	I0731 18:07:40.394412   73800 main.go:141] libmachine: (embed-certs-436067) DBG | domain embed-certs-436067 has defined MAC address 52:54:00:87:1e:25 in network mk-embed-certs-436067
	I0731 18:07:40.394809   73800 main.go:141] libmachine: (embed-certs-436067) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:1e:25", ip: ""} in network mk-embed-certs-436067: {Iface:virbr4 ExpiryTime:2024-07-31 19:07:32 +0000 UTC Type:0 Mac:52:54:00:87:1e:25 Iaid: IPaddr:192.168.50.86 Prefix:24 Hostname:embed-certs-436067 Clientid:01:52:54:00:87:1e:25}
	I0731 18:07:40.394839   73800 main.go:141] libmachine: (embed-certs-436067) DBG | domain embed-certs-436067 has defined IP address 192.168.50.86 and MAC address 52:54:00:87:1e:25 in network mk-embed-certs-436067
	I0731 18:07:40.395029   73800 main.go:141] libmachine: (embed-certs-436067) Calling .GetSSHPort
	I0731 18:07:40.395209   73800 main.go:141] libmachine: (embed-certs-436067) Calling .GetSSHKeyPath
	I0731 18:07:40.395372   73800 main.go:141] libmachine: (embed-certs-436067) Calling .GetSSHKeyPath
	I0731 18:07:40.395480   73800 main.go:141] libmachine: (embed-certs-436067) Calling .GetSSHUsername
	I0731 18:07:40.395624   73800 main.go:141] libmachine: Using SSH client type: native
	I0731 18:07:40.395808   73800 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.86 22 <nil> <nil>}
	I0731 18:07:40.395817   73800 main.go:141] libmachine: About to run SSH command:
	hostname
	I0731 18:07:40.503041   73800 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0731 18:07:40.503073   73800 main.go:141] libmachine: (embed-certs-436067) Calling .GetMachineName
	I0731 18:07:40.503326   73800 buildroot.go:166] provisioning hostname "embed-certs-436067"
	I0731 18:07:40.503352   73800 main.go:141] libmachine: (embed-certs-436067) Calling .GetMachineName
	I0731 18:07:40.503539   73800 main.go:141] libmachine: (embed-certs-436067) Calling .GetSSHHostname
	I0731 18:07:40.506604   73800 main.go:141] libmachine: (embed-certs-436067) DBG | domain embed-certs-436067 has defined MAC address 52:54:00:87:1e:25 in network mk-embed-certs-436067
	I0731 18:07:40.506940   73800 main.go:141] libmachine: (embed-certs-436067) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:1e:25", ip: ""} in network mk-embed-certs-436067: {Iface:virbr4 ExpiryTime:2024-07-31 19:07:32 +0000 UTC Type:0 Mac:52:54:00:87:1e:25 Iaid: IPaddr:192.168.50.86 Prefix:24 Hostname:embed-certs-436067 Clientid:01:52:54:00:87:1e:25}
	I0731 18:07:40.506967   73800 main.go:141] libmachine: (embed-certs-436067) DBG | domain embed-certs-436067 has defined IP address 192.168.50.86 and MAC address 52:54:00:87:1e:25 in network mk-embed-certs-436067
	I0731 18:07:40.507124   73800 main.go:141] libmachine: (embed-certs-436067) Calling .GetSSHPort
	I0731 18:07:40.507296   73800 main.go:141] libmachine: (embed-certs-436067) Calling .GetSSHKeyPath
	I0731 18:07:40.507438   73800 main.go:141] libmachine: (embed-certs-436067) Calling .GetSSHKeyPath
	I0731 18:07:40.507577   73800 main.go:141] libmachine: (embed-certs-436067) Calling .GetSSHUsername
	I0731 18:07:40.507752   73800 main.go:141] libmachine: Using SSH client type: native
	I0731 18:07:40.507912   73800 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.86 22 <nil> <nil>}
	I0731 18:07:40.507927   73800 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-436067 && echo "embed-certs-436067" | sudo tee /etc/hostname
	I0731 18:07:40.632627   73800 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-436067
	
	I0731 18:07:40.632678   73800 main.go:141] libmachine: (embed-certs-436067) Calling .GetSSHHostname
	I0731 18:07:40.635632   73800 main.go:141] libmachine: (embed-certs-436067) DBG | domain embed-certs-436067 has defined MAC address 52:54:00:87:1e:25 in network mk-embed-certs-436067
	I0731 18:07:40.635989   73800 main.go:141] libmachine: (embed-certs-436067) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:1e:25", ip: ""} in network mk-embed-certs-436067: {Iface:virbr4 ExpiryTime:2024-07-31 19:07:32 +0000 UTC Type:0 Mac:52:54:00:87:1e:25 Iaid: IPaddr:192.168.50.86 Prefix:24 Hostname:embed-certs-436067 Clientid:01:52:54:00:87:1e:25}
	I0731 18:07:40.636017   73800 main.go:141] libmachine: (embed-certs-436067) DBG | domain embed-certs-436067 has defined IP address 192.168.50.86 and MAC address 52:54:00:87:1e:25 in network mk-embed-certs-436067
	I0731 18:07:40.636168   73800 main.go:141] libmachine: (embed-certs-436067) Calling .GetSSHPort
	I0731 18:07:40.636386   73800 main.go:141] libmachine: (embed-certs-436067) Calling .GetSSHKeyPath
	I0731 18:07:40.636554   73800 main.go:141] libmachine: (embed-certs-436067) Calling .GetSSHKeyPath
	I0731 18:07:40.636751   73800 main.go:141] libmachine: (embed-certs-436067) Calling .GetSSHUsername
	I0731 18:07:40.636963   73800 main.go:141] libmachine: Using SSH client type: native
	I0731 18:07:40.637192   73800 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.86 22 <nil> <nil>}
	I0731 18:07:40.637213   73800 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-436067' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-436067/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-436067' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0731 18:07:40.755249   73800 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0731 18:07:40.755273   73800 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19349-8084/.minikube CaCertPath:/home/jenkins/minikube-integration/19349-8084/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19349-8084/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19349-8084/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19349-8084/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19349-8084/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19349-8084/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19349-8084/.minikube}
	I0731 18:07:40.755291   73800 buildroot.go:174] setting up certificates
	I0731 18:07:40.755301   73800 provision.go:84] configureAuth start
	I0731 18:07:40.755310   73800 main.go:141] libmachine: (embed-certs-436067) Calling .GetMachineName
	I0731 18:07:40.755602   73800 main.go:141] libmachine: (embed-certs-436067) Calling .GetIP
	I0731 18:07:40.758306   73800 main.go:141] libmachine: (embed-certs-436067) DBG | domain embed-certs-436067 has defined MAC address 52:54:00:87:1e:25 in network mk-embed-certs-436067
	I0731 18:07:40.758705   73800 main.go:141] libmachine: (embed-certs-436067) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:1e:25", ip: ""} in network mk-embed-certs-436067: {Iface:virbr4 ExpiryTime:2024-07-31 19:07:32 +0000 UTC Type:0 Mac:52:54:00:87:1e:25 Iaid: IPaddr:192.168.50.86 Prefix:24 Hostname:embed-certs-436067 Clientid:01:52:54:00:87:1e:25}
	I0731 18:07:40.758731   73800 main.go:141] libmachine: (embed-certs-436067) DBG | domain embed-certs-436067 has defined IP address 192.168.50.86 and MAC address 52:54:00:87:1e:25 in network mk-embed-certs-436067
	I0731 18:07:40.758865   73800 main.go:141] libmachine: (embed-certs-436067) Calling .GetSSHHostname
	I0731 18:07:40.760790   73800 main.go:141] libmachine: (embed-certs-436067) DBG | domain embed-certs-436067 has defined MAC address 52:54:00:87:1e:25 in network mk-embed-certs-436067
	I0731 18:07:40.761061   73800 main.go:141] libmachine: (embed-certs-436067) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:1e:25", ip: ""} in network mk-embed-certs-436067: {Iface:virbr4 ExpiryTime:2024-07-31 19:07:32 +0000 UTC Type:0 Mac:52:54:00:87:1e:25 Iaid: IPaddr:192.168.50.86 Prefix:24 Hostname:embed-certs-436067 Clientid:01:52:54:00:87:1e:25}
	I0731 18:07:40.761090   73800 main.go:141] libmachine: (embed-certs-436067) DBG | domain embed-certs-436067 has defined IP address 192.168.50.86 and MAC address 52:54:00:87:1e:25 in network mk-embed-certs-436067
	I0731 18:07:40.761244   73800 provision.go:143] copyHostCerts
	I0731 18:07:40.761299   73800 exec_runner.go:144] found /home/jenkins/minikube-integration/19349-8084/.minikube/ca.pem, removing ...
	I0731 18:07:40.761323   73800 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19349-8084/.minikube/ca.pem
	I0731 18:07:40.761376   73800 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19349-8084/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19349-8084/.minikube/ca.pem (1082 bytes)
	I0731 18:07:40.761479   73800 exec_runner.go:144] found /home/jenkins/minikube-integration/19349-8084/.minikube/cert.pem, removing ...
	I0731 18:07:40.761488   73800 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19349-8084/.minikube/cert.pem
	I0731 18:07:40.761509   73800 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19349-8084/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19349-8084/.minikube/cert.pem (1123 bytes)
	I0731 18:07:40.761562   73800 exec_runner.go:144] found /home/jenkins/minikube-integration/19349-8084/.minikube/key.pem, removing ...
	I0731 18:07:40.761569   73800 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19349-8084/.minikube/key.pem
	I0731 18:07:40.761586   73800 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19349-8084/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19349-8084/.minikube/key.pem (1675 bytes)
	I0731 18:07:40.761635   73800 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19349-8084/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19349-8084/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19349-8084/.minikube/certs/ca-key.pem org=jenkins.embed-certs-436067 san=[127.0.0.1 192.168.50.86 embed-certs-436067 localhost minikube]
	I0731 18:07:40.874612   73800 provision.go:177] copyRemoteCerts
	I0731 18:07:40.874666   73800 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0731 18:07:40.874691   73800 main.go:141] libmachine: (embed-certs-436067) Calling .GetSSHHostname
	I0731 18:07:40.877623   73800 main.go:141] libmachine: (embed-certs-436067) DBG | domain embed-certs-436067 has defined MAC address 52:54:00:87:1e:25 in network mk-embed-certs-436067
	I0731 18:07:40.878044   73800 main.go:141] libmachine: (embed-certs-436067) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:1e:25", ip: ""} in network mk-embed-certs-436067: {Iface:virbr4 ExpiryTime:2024-07-31 19:07:32 +0000 UTC Type:0 Mac:52:54:00:87:1e:25 Iaid: IPaddr:192.168.50.86 Prefix:24 Hostname:embed-certs-436067 Clientid:01:52:54:00:87:1e:25}
	I0731 18:07:40.878075   73800 main.go:141] libmachine: (embed-certs-436067) DBG | domain embed-certs-436067 has defined IP address 192.168.50.86 and MAC address 52:54:00:87:1e:25 in network mk-embed-certs-436067
	I0731 18:07:40.878206   73800 main.go:141] libmachine: (embed-certs-436067) Calling .GetSSHPort
	I0731 18:07:40.878403   73800 main.go:141] libmachine: (embed-certs-436067) Calling .GetSSHKeyPath
	I0731 18:07:40.878556   73800 main.go:141] libmachine: (embed-certs-436067) Calling .GetSSHUsername
	I0731 18:07:40.878706   73800 sshutil.go:53] new ssh client: &{IP:192.168.50.86 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19349-8084/.minikube/machines/embed-certs-436067/id_rsa Username:docker}
	I0731 18:07:40.965720   73800 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19349-8084/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0731 18:07:40.987836   73800 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19349-8084/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0731 18:07:41.012423   73800 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19349-8084/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0731 18:07:41.036366   73800 provision.go:87] duration metric: took 281.054266ms to configureAuth
	I0731 18:07:41.036392   73800 buildroot.go:189] setting minikube options for container-runtime
	I0731 18:07:41.036561   73800 config.go:182] Loaded profile config "embed-certs-436067": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0731 18:07:41.036626   73800 main.go:141] libmachine: (embed-certs-436067) Calling .GetSSHHostname
	I0731 18:07:41.039204   73800 main.go:141] libmachine: (embed-certs-436067) DBG | domain embed-certs-436067 has defined MAC address 52:54:00:87:1e:25 in network mk-embed-certs-436067
	I0731 18:07:41.039587   73800 main.go:141] libmachine: (embed-certs-436067) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:1e:25", ip: ""} in network mk-embed-certs-436067: {Iface:virbr4 ExpiryTime:2024-07-31 19:07:32 +0000 UTC Type:0 Mac:52:54:00:87:1e:25 Iaid: IPaddr:192.168.50.86 Prefix:24 Hostname:embed-certs-436067 Clientid:01:52:54:00:87:1e:25}
	I0731 18:07:41.039615   73800 main.go:141] libmachine: (embed-certs-436067) DBG | domain embed-certs-436067 has defined IP address 192.168.50.86 and MAC address 52:54:00:87:1e:25 in network mk-embed-certs-436067
	I0731 18:07:41.039814   73800 main.go:141] libmachine: (embed-certs-436067) Calling .GetSSHPort
	I0731 18:07:41.040021   73800 main.go:141] libmachine: (embed-certs-436067) Calling .GetSSHKeyPath
	I0731 18:07:41.040162   73800 main.go:141] libmachine: (embed-certs-436067) Calling .GetSSHKeyPath
	I0731 18:07:41.040293   73800 main.go:141] libmachine: (embed-certs-436067) Calling .GetSSHUsername
	I0731 18:07:41.040462   73800 main.go:141] libmachine: Using SSH client type: native
	I0731 18:07:41.040642   73800 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.86 22 <nil> <nil>}
	I0731 18:07:41.040663   73800 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0731 18:07:41.307915   73800 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0731 18:07:41.307945   73800 machine.go:97] duration metric: took 916.161297ms to provisionDockerMachine
	I0731 18:07:41.307958   73800 start.go:293] postStartSetup for "embed-certs-436067" (driver="kvm2")
	I0731 18:07:41.307971   73800 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0731 18:07:41.307990   73800 main.go:141] libmachine: (embed-certs-436067) Calling .DriverName
	I0731 18:07:41.308383   73800 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0731 18:07:41.308409   73800 main.go:141] libmachine: (embed-certs-436067) Calling .GetSSHHostname
	I0731 18:07:41.311172   73800 main.go:141] libmachine: (embed-certs-436067) DBG | domain embed-certs-436067 has defined MAC address 52:54:00:87:1e:25 in network mk-embed-certs-436067
	I0731 18:07:41.311532   73800 main.go:141] libmachine: (embed-certs-436067) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:1e:25", ip: ""} in network mk-embed-certs-436067: {Iface:virbr4 ExpiryTime:2024-07-31 19:07:32 +0000 UTC Type:0 Mac:52:54:00:87:1e:25 Iaid: IPaddr:192.168.50.86 Prefix:24 Hostname:embed-certs-436067 Clientid:01:52:54:00:87:1e:25}
	I0731 18:07:41.311559   73800 main.go:141] libmachine: (embed-certs-436067) DBG | domain embed-certs-436067 has defined IP address 192.168.50.86 and MAC address 52:54:00:87:1e:25 in network mk-embed-certs-436067
	I0731 18:07:41.311712   73800 main.go:141] libmachine: (embed-certs-436067) Calling .GetSSHPort
	I0731 18:07:41.311940   73800 main.go:141] libmachine: (embed-certs-436067) Calling .GetSSHKeyPath
	I0731 18:07:41.312132   73800 main.go:141] libmachine: (embed-certs-436067) Calling .GetSSHUsername
	I0731 18:07:41.312251   73800 sshutil.go:53] new ssh client: &{IP:192.168.50.86 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19349-8084/.minikube/machines/embed-certs-436067/id_rsa Username:docker}
	I0731 18:07:41.397229   73800 ssh_runner.go:195] Run: cat /etc/os-release
	I0731 18:07:41.401356   73800 info.go:137] Remote host: Buildroot 2023.02.9
	I0731 18:07:41.401380   73800 filesync.go:126] Scanning /home/jenkins/minikube-integration/19349-8084/.minikube/addons for local assets ...
	I0731 18:07:41.401458   73800 filesync.go:126] Scanning /home/jenkins/minikube-integration/19349-8084/.minikube/files for local assets ...
	I0731 18:07:41.401571   73800 filesync.go:149] local asset: /home/jenkins/minikube-integration/19349-8084/.minikube/files/etc/ssl/certs/152592.pem -> 152592.pem in /etc/ssl/certs
	I0731 18:07:41.401696   73800 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0731 18:07:41.410540   73800 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19349-8084/.minikube/files/etc/ssl/certs/152592.pem --> /etc/ssl/certs/152592.pem (1708 bytes)
	I0731 18:07:41.434298   73800 start.go:296] duration metric: took 126.324424ms for postStartSetup
	I0731 18:07:41.434342   73800 fix.go:56] duration metric: took 19.306472215s for fixHost
	I0731 18:07:41.434363   73800 main.go:141] libmachine: (embed-certs-436067) Calling .GetSSHHostname
	I0731 18:07:41.437502   73800 main.go:141] libmachine: (embed-certs-436067) DBG | domain embed-certs-436067 has defined MAC address 52:54:00:87:1e:25 in network mk-embed-certs-436067
	I0731 18:07:41.438007   73800 main.go:141] libmachine: (embed-certs-436067) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:1e:25", ip: ""} in network mk-embed-certs-436067: {Iface:virbr4 ExpiryTime:2024-07-31 19:07:32 +0000 UTC Type:0 Mac:52:54:00:87:1e:25 Iaid: IPaddr:192.168.50.86 Prefix:24 Hostname:embed-certs-436067 Clientid:01:52:54:00:87:1e:25}
	I0731 18:07:41.438038   73800 main.go:141] libmachine: (embed-certs-436067) DBG | domain embed-certs-436067 has defined IP address 192.168.50.86 and MAC address 52:54:00:87:1e:25 in network mk-embed-certs-436067
	I0731 18:07:41.438221   73800 main.go:141] libmachine: (embed-certs-436067) Calling .GetSSHPort
	I0731 18:07:41.438435   73800 main.go:141] libmachine: (embed-certs-436067) Calling .GetSSHKeyPath
	I0731 18:07:41.438613   73800 main.go:141] libmachine: (embed-certs-436067) Calling .GetSSHKeyPath
	I0731 18:07:41.438752   73800 main.go:141] libmachine: (embed-certs-436067) Calling .GetSSHUsername
	I0731 18:07:41.438932   73800 main.go:141] libmachine: Using SSH client type: native
	I0731 18:07:41.439086   73800 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.86 22 <nil> <nil>}
	I0731 18:07:41.439095   73800 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0731 18:07:41.551915   73800 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722449261.529568895
	
	I0731 18:07:41.551937   73800 fix.go:216] guest clock: 1722449261.529568895
	I0731 18:07:41.551944   73800 fix.go:229] Guest: 2024-07-31 18:07:41.529568895 +0000 UTC Remote: 2024-07-31 18:07:41.434346377 +0000 UTC m=+286.960766339 (delta=95.222518ms)
	I0731 18:07:41.551999   73800 fix.go:200] guest clock delta is within tolerance: 95.222518ms
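For reference, the delta logged above is just the difference of the two timestamps: 18:07:41.529568895 - 18:07:41.434346377 = 0.095222518 s, i.e. the reported 95.222518ms, which is inside the allowed drift, so the guest clock is left alone rather than being reset.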
	I0731 18:07:41.552010   73800 start.go:83] releasing machines lock for "embed-certs-436067", held for 19.42417291s
	I0731 18:07:41.552036   73800 main.go:141] libmachine: (embed-certs-436067) Calling .DriverName
	I0731 18:07:41.552377   73800 main.go:141] libmachine: (embed-certs-436067) Calling .GetIP
	I0731 18:07:41.554945   73800 main.go:141] libmachine: (embed-certs-436067) DBG | domain embed-certs-436067 has defined MAC address 52:54:00:87:1e:25 in network mk-embed-certs-436067
	I0731 18:07:41.555385   73800 main.go:141] libmachine: (embed-certs-436067) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:1e:25", ip: ""} in network mk-embed-certs-436067: {Iface:virbr4 ExpiryTime:2024-07-31 19:07:32 +0000 UTC Type:0 Mac:52:54:00:87:1e:25 Iaid: IPaddr:192.168.50.86 Prefix:24 Hostname:embed-certs-436067 Clientid:01:52:54:00:87:1e:25}
	I0731 18:07:41.555415   73800 main.go:141] libmachine: (embed-certs-436067) DBG | domain embed-certs-436067 has defined IP address 192.168.50.86 and MAC address 52:54:00:87:1e:25 in network mk-embed-certs-436067
	I0731 18:07:41.555583   73800 main.go:141] libmachine: (embed-certs-436067) Calling .DriverName
	I0731 18:07:41.556139   73800 main.go:141] libmachine: (embed-certs-436067) Calling .DriverName
	I0731 18:07:41.556362   73800 main.go:141] libmachine: (embed-certs-436067) Calling .DriverName
	I0731 18:07:41.556448   73800 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0731 18:07:41.556507   73800 main.go:141] libmachine: (embed-certs-436067) Calling .GetSSHHostname
	I0731 18:07:41.556619   73800 ssh_runner.go:195] Run: cat /version.json
	I0731 18:07:41.556634   73800 main.go:141] libmachine: (embed-certs-436067) Calling .GetSSHHostname
	I0731 18:07:41.559700   73800 main.go:141] libmachine: (embed-certs-436067) DBG | domain embed-certs-436067 has defined MAC address 52:54:00:87:1e:25 in network mk-embed-certs-436067
	I0731 18:07:41.559847   73800 main.go:141] libmachine: (embed-certs-436067) DBG | domain embed-certs-436067 has defined MAC address 52:54:00:87:1e:25 in network mk-embed-certs-436067
	I0731 18:07:41.560160   73800 main.go:141] libmachine: (embed-certs-436067) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:1e:25", ip: ""} in network mk-embed-certs-436067: {Iface:virbr4 ExpiryTime:2024-07-31 19:07:32 +0000 UTC Type:0 Mac:52:54:00:87:1e:25 Iaid: IPaddr:192.168.50.86 Prefix:24 Hostname:embed-certs-436067 Clientid:01:52:54:00:87:1e:25}
	I0731 18:07:41.560227   73800 main.go:141] libmachine: (embed-certs-436067) DBG | domain embed-certs-436067 has defined IP address 192.168.50.86 and MAC address 52:54:00:87:1e:25 in network mk-embed-certs-436067
	I0731 18:07:41.560277   73800 main.go:141] libmachine: (embed-certs-436067) Calling .GetSSHPort
	I0731 18:07:41.560374   73800 main.go:141] libmachine: (embed-certs-436067) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:1e:25", ip: ""} in network mk-embed-certs-436067: {Iface:virbr4 ExpiryTime:2024-07-31 19:07:32 +0000 UTC Type:0 Mac:52:54:00:87:1e:25 Iaid: IPaddr:192.168.50.86 Prefix:24 Hostname:embed-certs-436067 Clientid:01:52:54:00:87:1e:25}
	I0731 18:07:41.560440   73800 main.go:141] libmachine: (embed-certs-436067) DBG | domain embed-certs-436067 has defined IP address 192.168.50.86 and MAC address 52:54:00:87:1e:25 in network mk-embed-certs-436067
	I0731 18:07:41.560582   73800 main.go:141] libmachine: (embed-certs-436067) Calling .GetSSHKeyPath
	I0731 18:07:41.560652   73800 main.go:141] libmachine: (embed-certs-436067) Calling .GetSSHPort
	I0731 18:07:41.560697   73800 main.go:141] libmachine: (embed-certs-436067) Calling .GetSSHUsername
	I0731 18:07:41.560745   73800 main.go:141] libmachine: (embed-certs-436067) Calling .GetSSHKeyPath
	I0731 18:07:41.560833   73800 sshutil.go:53] new ssh client: &{IP:192.168.50.86 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19349-8084/.minikube/machines/embed-certs-436067/id_rsa Username:docker}
	I0731 18:07:41.560909   73800 main.go:141] libmachine: (embed-certs-436067) Calling .GetSSHUsername
	I0731 18:07:41.561060   73800 sshutil.go:53] new ssh client: &{IP:192.168.50.86 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19349-8084/.minikube/machines/embed-certs-436067/id_rsa Username:docker}
	I0731 18:07:41.640796   73800 ssh_runner.go:195] Run: systemctl --version
	I0731 18:07:41.671461   73800 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0731 18:07:41.820881   73800 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0731 18:07:41.826610   73800 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0731 18:07:41.826673   73800 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0731 18:07:41.841766   73800 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0731 18:07:41.841789   73800 start.go:495] detecting cgroup driver to use...
	I0731 18:07:41.841872   73800 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0731 18:07:41.858636   73800 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0731 18:07:41.873090   73800 docker.go:217] disabling cri-docker service (if available) ...
	I0731 18:07:41.873152   73800 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0731 18:07:41.890967   73800 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0731 18:07:41.907886   73800 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0731 18:07:42.022724   73800 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0731 18:07:42.173885   73800 docker.go:233] disabling docker service ...
	I0731 18:07:42.173969   73800 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0731 18:07:42.190959   73800 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0731 18:07:42.205274   73800 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0731 18:07:42.358130   73800 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0731 18:07:42.497981   73800 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0731 18:07:42.513774   73800 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0731 18:07:42.532713   73800 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0731 18:07:42.532808   73800 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 18:07:42.544367   73800 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0731 18:07:42.544427   73800 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 18:07:42.556427   73800 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 18:07:42.566399   73800 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 18:07:42.576633   73800 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0731 18:07:42.588508   73800 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 18:07:42.600011   73800 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 18:07:42.618858   73800 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
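Taken together, the sed edits above would leave /etc/crio/crio.conf.d/02-crio.conf with roughly the following keys (a sketch reconstructed from the commands in this log; the file's TOML section headers and any pre-existing settings are not shown):
	pause_image = "registry.k8s.io/pause:3.9"
	cgroup_manager = "cgroupfs"
	conmon_cgroup = "pod"
	default_sysctls = [
	  "net.ipv4.ip_unprivileged_port_start=0",
	]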
	I0731 18:07:42.630437   73800 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0731 18:07:42.641459   73800 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0731 18:07:42.641528   73800 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0731 18:07:42.655000   73800 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0731 18:07:42.664912   73800 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0731 18:07:42.791781   73800 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0731 18:07:42.936709   73800 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0731 18:07:42.936778   73800 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0731 18:07:42.941132   73800 start.go:563] Will wait 60s for crictl version
	I0731 18:07:42.941189   73800 ssh_runner.go:195] Run: which crictl
	I0731 18:07:42.944870   73800 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0731 18:07:42.983069   73800 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0731 18:07:42.983181   73800 ssh_runner.go:195] Run: crio --version
	I0731 18:07:43.011636   73800 ssh_runner.go:195] Run: crio --version
	I0731 18:07:43.043295   73800 out.go:177] * Preparing Kubernetes v1.30.3 on CRI-O 1.29.1 ...
	I0731 18:07:43.044545   73800 main.go:141] libmachine: (embed-certs-436067) Calling .GetIP
	I0731 18:07:43.047635   73800 main.go:141] libmachine: (embed-certs-436067) DBG | domain embed-certs-436067 has defined MAC address 52:54:00:87:1e:25 in network mk-embed-certs-436067
	I0731 18:07:43.048049   73800 main.go:141] libmachine: (embed-certs-436067) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:1e:25", ip: ""} in network mk-embed-certs-436067: {Iface:virbr4 ExpiryTime:2024-07-31 19:07:32 +0000 UTC Type:0 Mac:52:54:00:87:1e:25 Iaid: IPaddr:192.168.50.86 Prefix:24 Hostname:embed-certs-436067 Clientid:01:52:54:00:87:1e:25}
	I0731 18:07:43.048080   73800 main.go:141] libmachine: (embed-certs-436067) DBG | domain embed-certs-436067 has defined IP address 192.168.50.86 and MAC address 52:54:00:87:1e:25 in network mk-embed-certs-436067
	I0731 18:07:43.048330   73800 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0731 18:07:43.052269   73800 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0731 18:07:43.064116   73800 kubeadm.go:883] updating cluster {Name:embed-certs-436067 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1
.30.3 ClusterName:embed-certs-436067 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.86 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:f
alse MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0731 18:07:43.064283   73800 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0731 18:07:43.064361   73800 ssh_runner.go:195] Run: sudo crictl images --output json
	I0731 18:07:43.100437   73800 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.3". assuming images are not preloaded.
	I0731 18:07:43.100516   73800 ssh_runner.go:195] Run: which lz4
	I0731 18:07:43.104627   73800 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0731 18:07:43.108552   73800 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0731 18:07:43.108586   73800 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19349-8084/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (406200976 bytes)
	I0731 18:07:44.368238   73800 crio.go:462] duration metric: took 1.263636259s to copy over tarball
	I0731 18:07:44.368322   73800 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0731 18:07:41.576648   74203 main.go:141] libmachine: (old-k8s-version-276459) Calling .Start
	I0731 18:07:41.576823   74203 main.go:141] libmachine: (old-k8s-version-276459) Ensuring networks are active...
	I0731 18:07:41.577511   74203 main.go:141] libmachine: (old-k8s-version-276459) Ensuring network default is active
	I0731 18:07:41.578015   74203 main.go:141] libmachine: (old-k8s-version-276459) Ensuring network mk-old-k8s-version-276459 is active
	I0731 18:07:41.578469   74203 main.go:141] libmachine: (old-k8s-version-276459) Getting domain xml...
	I0731 18:07:41.579474   74203 main.go:141] libmachine: (old-k8s-version-276459) Creating domain...
	I0731 18:07:42.876409   74203 main.go:141] libmachine: (old-k8s-version-276459) Waiting to get IP...
	I0731 18:07:42.877345   74203 main.go:141] libmachine: (old-k8s-version-276459) DBG | domain old-k8s-version-276459 has defined MAC address 52:54:00:79:9d:96 in network mk-old-k8s-version-276459
	I0731 18:07:42.877788   74203 main.go:141] libmachine: (old-k8s-version-276459) DBG | unable to find current IP address of domain old-k8s-version-276459 in network mk-old-k8s-version-276459
	I0731 18:07:42.877841   74203 main.go:141] libmachine: (old-k8s-version-276459) DBG | I0731 18:07:42.877763   75164 retry.go:31] will retry after 218.764988ms: waiting for machine to come up
	I0731 18:07:43.098230   74203 main.go:141] libmachine: (old-k8s-version-276459) DBG | domain old-k8s-version-276459 has defined MAC address 52:54:00:79:9d:96 in network mk-old-k8s-version-276459
	I0731 18:07:43.098697   74203 main.go:141] libmachine: (old-k8s-version-276459) DBG | unable to find current IP address of domain old-k8s-version-276459 in network mk-old-k8s-version-276459
	I0731 18:07:43.098722   74203 main.go:141] libmachine: (old-k8s-version-276459) DBG | I0731 18:07:43.098650   75164 retry.go:31] will retry after 285.579707ms: waiting for machine to come up
	I0731 18:07:43.386356   74203 main.go:141] libmachine: (old-k8s-version-276459) DBG | domain old-k8s-version-276459 has defined MAC address 52:54:00:79:9d:96 in network mk-old-k8s-version-276459
	I0731 18:07:43.386897   74203 main.go:141] libmachine: (old-k8s-version-276459) DBG | unable to find current IP address of domain old-k8s-version-276459 in network mk-old-k8s-version-276459
	I0731 18:07:43.386928   74203 main.go:141] libmachine: (old-k8s-version-276459) DBG | I0731 18:07:43.386852   75164 retry.go:31] will retry after 389.197253ms: waiting for machine to come up
	I0731 18:07:43.778183   74203 main.go:141] libmachine: (old-k8s-version-276459) DBG | domain old-k8s-version-276459 has defined MAC address 52:54:00:79:9d:96 in network mk-old-k8s-version-276459
	I0731 18:07:43.778672   74203 main.go:141] libmachine: (old-k8s-version-276459) DBG | unable to find current IP address of domain old-k8s-version-276459 in network mk-old-k8s-version-276459
	I0731 18:07:43.778698   74203 main.go:141] libmachine: (old-k8s-version-276459) DBG | I0731 18:07:43.778622   75164 retry.go:31] will retry after 484.5108ms: waiting for machine to come up
	I0731 18:07:44.264412   74203 main.go:141] libmachine: (old-k8s-version-276459) DBG | domain old-k8s-version-276459 has defined MAC address 52:54:00:79:9d:96 in network mk-old-k8s-version-276459
	I0731 18:07:44.265042   74203 main.go:141] libmachine: (old-k8s-version-276459) DBG | unable to find current IP address of domain old-k8s-version-276459 in network mk-old-k8s-version-276459
	I0731 18:07:44.265073   74203 main.go:141] libmachine: (old-k8s-version-276459) DBG | I0731 18:07:44.264955   75164 retry.go:31] will retry after 621.551625ms: waiting for machine to come up
	I0731 18:07:44.887986   74203 main.go:141] libmachine: (old-k8s-version-276459) DBG | domain old-k8s-version-276459 has defined MAC address 52:54:00:79:9d:96 in network mk-old-k8s-version-276459
	I0731 18:07:44.888534   74203 main.go:141] libmachine: (old-k8s-version-276459) DBG | unable to find current IP address of domain old-k8s-version-276459 in network mk-old-k8s-version-276459
	I0731 18:07:44.888563   74203 main.go:141] libmachine: (old-k8s-version-276459) DBG | I0731 18:07:44.888489   75164 retry.go:31] will retry after 610.567971ms: waiting for machine to come up
	I0731 18:07:42.773583   73696 pod_ready.go:102] pod "kube-scheduler-default-k8s-diff-port-094310" in "kube-system" namespace has status "Ready":"False"
	I0731 18:07:44.272853   73696 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-094310" in "kube-system" namespace has status "Ready":"True"
	I0731 18:07:44.272874   73696 pod_ready.go:81] duration metric: took 7.507462023s for pod "kube-scheduler-default-k8s-diff-port-094310" in "kube-system" namespace to be "Ready" ...
	I0731 18:07:44.272886   73696 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-569cc877fc-64hp4" in "kube-system" namespace to be "Ready" ...
	I0731 18:07:46.689701   73800 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.321340678s)
	I0731 18:07:46.689730   73800 crio.go:469] duration metric: took 2.321463484s to extract the tarball
	I0731 18:07:46.689738   73800 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0731 18:07:46.749205   73800 ssh_runner.go:195] Run: sudo crictl images --output json
	I0731 18:07:46.805950   73800 crio.go:514] all images are preloaded for cri-o runtime.
	I0731 18:07:46.805979   73800 cache_images.go:84] Images are preloaded, skipping loading
	I0731 18:07:46.805990   73800 kubeadm.go:934] updating node { 192.168.50.86 8443 v1.30.3 crio true true} ...
	I0731 18:07:46.806135   73800 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-436067 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.86
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:embed-certs-436067 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0731 18:07:46.806233   73800 ssh_runner.go:195] Run: crio config
	I0731 18:07:46.865815   73800 cni.go:84] Creating CNI manager for ""
	I0731 18:07:46.865838   73800 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0731 18:07:46.865852   73800 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0731 18:07:46.865873   73800 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.86 APIServerPort:8443 KubernetesVersion:v1.30.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-436067 NodeName:embed-certs-436067 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.86"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.86 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath
:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0731 18:07:46.866048   73800 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.86
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-436067"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.86
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.86"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0731 18:07:46.866121   73800 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0731 18:07:46.875722   73800 binaries.go:44] Found k8s binaries, skipping transfer
	I0731 18:07:46.875786   73800 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0731 18:07:46.885107   73800 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I0731 18:07:46.903868   73800 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0731 18:07:46.919585   73800 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2159 bytes)
	I0731 18:07:46.939034   73800 ssh_runner.go:195] Run: grep 192.168.50.86	control-plane.minikube.internal$ /etc/hosts
	I0731 18:07:46.943460   73800 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.86	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0731 18:07:46.957699   73800 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0731 18:07:47.065714   73800 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0731 18:07:47.080655   73800 certs.go:68] Setting up /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/embed-certs-436067 for IP: 192.168.50.86
	I0731 18:07:47.080681   73800 certs.go:194] generating shared ca certs ...
	I0731 18:07:47.080717   73800 certs.go:226] acquiring lock for ca certs: {Name:mk49eed439085f9c30746706460ce213570d1997 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 18:07:47.080879   73800 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19349-8084/.minikube/ca.key
	I0731 18:07:47.080938   73800 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19349-8084/.minikube/proxy-client-ca.key
	I0731 18:07:47.080950   73800 certs.go:256] generating profile certs ...
	I0731 18:07:47.081046   73800 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/embed-certs-436067/client.key
	I0731 18:07:47.081113   73800 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/embed-certs-436067/apiserver.key.7b8160da
	I0731 18:07:47.081168   73800 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/embed-certs-436067/proxy-client.key
	I0731 18:07:47.081312   73800 certs.go:484] found cert: /home/jenkins/minikube-integration/19349-8084/.minikube/certs/15259.pem (1338 bytes)
	W0731 18:07:47.081367   73800 certs.go:480] ignoring /home/jenkins/minikube-integration/19349-8084/.minikube/certs/15259_empty.pem, impossibly tiny 0 bytes
	I0731 18:07:47.081380   73800 certs.go:484] found cert: /home/jenkins/minikube-integration/19349-8084/.minikube/certs/ca-key.pem (1675 bytes)
	I0731 18:07:47.081413   73800 certs.go:484] found cert: /home/jenkins/minikube-integration/19349-8084/.minikube/certs/ca.pem (1082 bytes)
	I0731 18:07:47.081438   73800 certs.go:484] found cert: /home/jenkins/minikube-integration/19349-8084/.minikube/certs/cert.pem (1123 bytes)
	I0731 18:07:47.081468   73800 certs.go:484] found cert: /home/jenkins/minikube-integration/19349-8084/.minikube/certs/key.pem (1675 bytes)
	I0731 18:07:47.081508   73800 certs.go:484] found cert: /home/jenkins/minikube-integration/19349-8084/.minikube/files/etc/ssl/certs/152592.pem (1708 bytes)
	I0731 18:07:47.082355   73800 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19349-8084/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0731 18:07:47.130037   73800 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19349-8084/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0731 18:07:47.171218   73800 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19349-8084/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0731 18:07:47.215745   73800 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19349-8084/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0731 18:07:47.244883   73800 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/embed-certs-436067/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I0731 18:07:47.270032   73800 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/embed-certs-436067/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0731 18:07:47.294900   73800 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/embed-certs-436067/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0731 18:07:47.317285   73800 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/embed-certs-436067/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0731 18:07:47.343000   73800 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19349-8084/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0731 18:07:47.369906   73800 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19349-8084/.minikube/certs/15259.pem --> /usr/share/ca-certificates/15259.pem (1338 bytes)
	I0731 18:07:47.392022   73800 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19349-8084/.minikube/files/etc/ssl/certs/152592.pem --> /usr/share/ca-certificates/152592.pem (1708 bytes)
	I0731 18:07:47.414219   73800 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0731 18:07:47.431931   73800 ssh_runner.go:195] Run: openssl version
	I0731 18:07:47.437602   73800 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0731 18:07:47.447585   73800 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0731 18:07:47.451779   73800 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 31 16:42 /usr/share/ca-certificates/minikubeCA.pem
	I0731 18:07:47.451833   73800 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0731 18:07:47.457309   73800 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0731 18:07:47.466917   73800 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/15259.pem && ln -fs /usr/share/ca-certificates/15259.pem /etc/ssl/certs/15259.pem"
	I0731 18:07:47.476211   73800 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/15259.pem
	I0731 18:07:47.480149   73800 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 31 16:54 /usr/share/ca-certificates/15259.pem
	I0731 18:07:47.480215   73800 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/15259.pem
	I0731 18:07:47.485412   73800 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/15259.pem /etc/ssl/certs/51391683.0"
	I0731 18:07:47.494852   73800 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/152592.pem && ln -fs /usr/share/ca-certificates/152592.pem /etc/ssl/certs/152592.pem"
	I0731 18:07:47.504407   73800 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/152592.pem
	I0731 18:07:47.509594   73800 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 31 16:54 /usr/share/ca-certificates/152592.pem
	I0731 18:07:47.509658   73800 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/152592.pem
	I0731 18:07:47.515728   73800 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/152592.pem /etc/ssl/certs/3ec20f2e.0"
	I0731 18:07:47.525660   73800 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0731 18:07:47.529953   73800 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0731 18:07:47.535576   73800 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0731 18:07:47.541158   73800 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0731 18:07:47.546633   73800 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0731 18:07:47.551827   73800 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0731 18:07:47.557100   73800 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
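Each openssl invocation above relies on -checkend 86400: it exits 0 only if the certificate is still valid 86400 seconds (24 hours) from now, which is presumably how minikube decides whether any certificates need regenerating on this restart path. The check can be reproduced by hand against one of the same files, for example:
	openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400 \
	  && echo "valid for at least 24h" || echo "expires within 24h (or unreadable)"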
	I0731 18:07:47.562447   73800 kubeadm.go:392] StartCluster: {Name:embed-certs-436067 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30
.3 ClusterName:embed-certs-436067 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.86 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:fals
e MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0731 18:07:47.562551   73800 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0731 18:07:47.562616   73800 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0731 18:07:47.610318   73800 cri.go:89] found id: ""
	I0731 18:07:47.610382   73800 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0731 18:07:47.623036   73800 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0731 18:07:47.623053   73800 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0731 18:07:47.623101   73800 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0731 18:07:47.631709   73800 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0731 18:07:47.632699   73800 kubeconfig.go:125] found "embed-certs-436067" server: "https://192.168.50.86:8443"
	I0731 18:07:47.634724   73800 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0731 18:07:47.643183   73800 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.50.86
	I0731 18:07:47.643207   73800 kubeadm.go:1160] stopping kube-system containers ...
	I0731 18:07:47.643218   73800 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0731 18:07:47.643264   73800 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0731 18:07:47.677438   73800 cri.go:89] found id: ""
	I0731 18:07:47.677527   73800 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0731 18:07:47.693427   73800 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0731 18:07:47.702889   73800 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0731 18:07:47.702907   73800 kubeadm.go:157] found existing configuration files:
	
	I0731 18:07:47.702956   73800 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0731 18:07:47.713958   73800 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0731 18:07:47.714017   73800 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0731 18:07:47.723931   73800 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0731 18:07:47.732615   73800 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0731 18:07:47.732673   73800 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0731 18:07:47.741168   73800 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0731 18:07:47.749164   73800 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0731 18:07:47.749217   73800 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0731 18:07:47.757691   73800 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0731 18:07:47.765479   73800 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0731 18:07:47.765530   73800 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0731 18:07:47.774002   73800 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0731 18:07:47.783757   73800 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0731 18:07:47.890835   73800 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0731 18:07:48.951421   73800 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.060547503s)
	I0731 18:07:48.951466   73800 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0731 18:07:49.152745   73800 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0731 18:07:49.224334   73800 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0731 18:07:49.341066   73800 api_server.go:52] waiting for apiserver process to appear ...
	I0731 18:07:49.341147   73800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:07:45.500400   74203 main.go:141] libmachine: (old-k8s-version-276459) DBG | domain old-k8s-version-276459 has defined MAC address 52:54:00:79:9d:96 in network mk-old-k8s-version-276459
	I0731 18:07:45.500938   74203 main.go:141] libmachine: (old-k8s-version-276459) DBG | unable to find current IP address of domain old-k8s-version-276459 in network mk-old-k8s-version-276459
	I0731 18:07:45.500966   74203 main.go:141] libmachine: (old-k8s-version-276459) DBG | I0731 18:07:45.500890   75164 retry.go:31] will retry after 1.069889786s: waiting for machine to come up
	I0731 18:07:46.572634   74203 main.go:141] libmachine: (old-k8s-version-276459) DBG | domain old-k8s-version-276459 has defined MAC address 52:54:00:79:9d:96 in network mk-old-k8s-version-276459
	I0731 18:07:46.573085   74203 main.go:141] libmachine: (old-k8s-version-276459) DBG | unable to find current IP address of domain old-k8s-version-276459 in network mk-old-k8s-version-276459
	I0731 18:07:46.573128   74203 main.go:141] libmachine: (old-k8s-version-276459) DBG | I0731 18:07:46.572979   75164 retry.go:31] will retry after 1.047722466s: waiting for machine to come up
	I0731 18:07:47.622035   74203 main.go:141] libmachine: (old-k8s-version-276459) DBG | domain old-k8s-version-276459 has defined MAC address 52:54:00:79:9d:96 in network mk-old-k8s-version-276459
	I0731 18:07:47.622479   74203 main.go:141] libmachine: (old-k8s-version-276459) DBG | unable to find current IP address of domain old-k8s-version-276459 in network mk-old-k8s-version-276459
	I0731 18:07:47.622507   74203 main.go:141] libmachine: (old-k8s-version-276459) DBG | I0731 18:07:47.622435   75164 retry.go:31] will retry after 1.292658555s: waiting for machine to come up
	I0731 18:07:48.916255   74203 main.go:141] libmachine: (old-k8s-version-276459) DBG | domain old-k8s-version-276459 has defined MAC address 52:54:00:79:9d:96 in network mk-old-k8s-version-276459
	I0731 18:07:48.916755   74203 main.go:141] libmachine: (old-k8s-version-276459) DBG | unable to find current IP address of domain old-k8s-version-276459 in network mk-old-k8s-version-276459
	I0731 18:07:48.916778   74203 main.go:141] libmachine: (old-k8s-version-276459) DBG | I0731 18:07:48.916701   75164 retry.go:31] will retry after 2.006539925s: waiting for machine to come up
	I0731 18:07:46.281654   73696 pod_ready.go:102] pod "metrics-server-569cc877fc-64hp4" in "kube-system" namespace has status "Ready":"False"
	I0731 18:07:49.189881   73696 pod_ready.go:102] pod "metrics-server-569cc877fc-64hp4" in "kube-system" namespace has status "Ready":"False"
	I0731 18:07:49.841397   73800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:07:50.341264   73800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:07:50.409398   73800 api_server.go:72] duration metric: took 1.068329172s to wait for apiserver process to appear ...
	I0731 18:07:50.409432   73800 api_server.go:88] waiting for apiserver healthz status ...
	I0731 18:07:50.409457   73800 api_server.go:253] Checking apiserver healthz at https://192.168.50.86:8443/healthz ...
	I0731 18:07:50.410135   73800 api_server.go:269] stopped: https://192.168.50.86:8443/healthz: Get "https://192.168.50.86:8443/healthz": dial tcp 192.168.50.86:8443: connect: connection refused
	I0731 18:07:50.909802   73800 api_server.go:253] Checking apiserver healthz at https://192.168.50.86:8443/healthz ...
	I0731 18:07:52.636930   73800 api_server.go:279] https://192.168.50.86:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0731 18:07:52.636972   73800 api_server.go:103] status: https://192.168.50.86:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0731 18:07:52.636989   73800 api_server.go:253] Checking apiserver healthz at https://192.168.50.86:8443/healthz ...
	I0731 18:07:52.666947   73800 api_server.go:279] https://192.168.50.86:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0731 18:07:52.666980   73800 api_server.go:103] status: https://192.168.50.86:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0731 18:07:52.910391   73800 api_server.go:253] Checking apiserver healthz at https://192.168.50.86:8443/healthz ...
	I0731 18:07:52.916305   73800 api_server.go:279] https://192.168.50.86:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0731 18:07:52.916342   73800 api_server.go:103] status: https://192.168.50.86:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
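The anonymous 403s earlier and the [-]poststarthook/rbac/bootstrap-roles entries above belong to the same startup sequence: unauthenticated access to /healthz is only permitted once the bootstrap RBAC objects exist (the system:public-info-viewer ClusterRole is what normally grants it), and those objects are created by the very post-start hooks still reported as failed here. While the API server is in this state, the same progression can be watched from the host with a plain probe such as:
	curl -sk https://192.168.50.86:8443/healthz
	# first:   forbidden: User "system:anonymous" cannot get path "/healthz"  (403)
	# then:    the per-check listing with failed post-start hooks             (500)
	# finally: ok                                                             (200)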
	I0731 18:07:53.409623   73800 api_server.go:253] Checking apiserver healthz at https://192.168.50.86:8443/healthz ...
	I0731 18:07:53.419159   73800 api_server.go:279] https://192.168.50.86:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0731 18:07:53.419205   73800 api_server.go:103] status: https://192.168.50.86:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0731 18:07:53.909654   73800 api_server.go:253] Checking apiserver healthz at https://192.168.50.86:8443/healthz ...
	I0731 18:07:53.913518   73800 api_server.go:279] https://192.168.50.86:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0731 18:07:53.913541   73800 api_server.go:103] status: https://192.168.50.86:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0731 18:07:54.409879   73800 api_server.go:253] Checking apiserver healthz at https://192.168.50.86:8443/healthz ...
	I0731 18:07:54.413948   73800 api_server.go:279] https://192.168.50.86:8443/healthz returned 200:
	ok
	I0731 18:07:54.422414   73800 api_server.go:141] control plane version: v1.30.3
	I0731 18:07:54.422444   73800 api_server.go:131] duration metric: took 4.013003689s to wait for apiserver health ...
	I0731 18:07:54.422458   73800 cni.go:84] Creating CNI manager for ""
	I0731 18:07:54.422467   73800 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0731 18:07:54.424680   73800 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0731 18:07:54.425887   73800 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0731 18:07:54.436394   73800 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0731 18:07:54.454533   73800 system_pods.go:43] waiting for kube-system pods to appear ...
	I0731 18:07:54.464268   73800 system_pods.go:59] 8 kube-system pods found
	I0731 18:07:54.464304   73800 system_pods.go:61] "coredns-7db6d8ff4d-h6ckp" [84faf557-0c8d-4026-b620-37265e017ed0] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0731 18:07:54.464315   73800 system_pods.go:61] "etcd-embed-certs-436067" [787466df-6e3f-4209-a996-037875d63dc8] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0731 18:07:54.464326   73800 system_pods.go:61] "kube-apiserver-embed-certs-436067" [6366e38e-21f3-41a4-af7a-433953b70eaa] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0731 18:07:54.464335   73800 system_pods.go:61] "kube-controller-manager-embed-certs-436067" [a97f6a49-40cf-433a-8196-c433e3cda8e8] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0731 18:07:54.464341   73800 system_pods.go:61] "kube-proxy-tl9pj" [0124eb62-5c00-4f75-a73f-c3e92ddc4a42] Running
	I0731 18:07:54.464354   73800 system_pods.go:61] "kube-scheduler-embed-certs-436067" [afbb9117-f229-44ea-8939-d28c4a402c6b] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0731 18:07:54.464366   73800 system_pods.go:61] "metrics-server-569cc877fc-fzxrw" [2ecdab2a-8ce8-4771-bd94-4e24dee34386] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0731 18:07:54.464374   73800 system_pods.go:61] "storage-provisioner" [29b17f6d-f9e4-4272-b6da-368431264701] Running
	I0731 18:07:54.464382   73800 system_pods.go:74] duration metric: took 9.82125ms to wait for pod list to return data ...
	I0731 18:07:54.464395   73800 node_conditions.go:102] verifying NodePressure condition ...
	I0731 18:07:54.467718   73800 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0731 18:07:54.467748   73800 node_conditions.go:123] node cpu capacity is 2
	I0731 18:07:54.467761   73800 node_conditions.go:105] duration metric: took 3.3602ms to run NodePressure ...
	I0731 18:07:54.467779   73800 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0731 18:07:50.925369   74203 main.go:141] libmachine: (old-k8s-version-276459) DBG | domain old-k8s-version-276459 has defined MAC address 52:54:00:79:9d:96 in network mk-old-k8s-version-276459
	I0731 18:07:50.925835   74203 main.go:141] libmachine: (old-k8s-version-276459) DBG | unable to find current IP address of domain old-k8s-version-276459 in network mk-old-k8s-version-276459
	I0731 18:07:50.925856   74203 main.go:141] libmachine: (old-k8s-version-276459) DBG | I0731 18:07:50.925790   75164 retry.go:31] will retry after 2.875577792s: waiting for machine to come up
	I0731 18:07:53.802729   74203 main.go:141] libmachine: (old-k8s-version-276459) DBG | domain old-k8s-version-276459 has defined MAC address 52:54:00:79:9d:96 in network mk-old-k8s-version-276459
	I0731 18:07:53.803164   74203 main.go:141] libmachine: (old-k8s-version-276459) DBG | unable to find current IP address of domain old-k8s-version-276459 in network mk-old-k8s-version-276459
	I0731 18:07:53.803192   74203 main.go:141] libmachine: (old-k8s-version-276459) DBG | I0731 18:07:53.803122   75164 retry.go:31] will retry after 2.352020729s: waiting for machine to come up
	I0731 18:07:51.279883   73696 pod_ready.go:102] pod "metrics-server-569cc877fc-64hp4" in "kube-system" namespace has status "Ready":"False"
	I0731 18:07:53.279992   73696 pod_ready.go:102] pod "metrics-server-569cc877fc-64hp4" in "kube-system" namespace has status "Ready":"False"
	I0731 18:07:55.778812   73696 pod_ready.go:102] pod "metrics-server-569cc877fc-64hp4" in "kube-system" namespace has status "Ready":"False"
	I0731 18:07:54.732921   73800 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0731 18:07:54.736779   73800 kubeadm.go:739] kubelet initialised
	I0731 18:07:54.736798   73800 kubeadm.go:740] duration metric: took 3.850446ms waiting for restarted kubelet to initialise ...
	I0731 18:07:54.736809   73800 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0731 18:07:54.741733   73800 pod_ready.go:78] waiting up to 4m0s for pod "coredns-7db6d8ff4d-h6ckp" in "kube-system" namespace to be "Ready" ...
	I0731 18:07:54.745722   73800 pod_ready.go:97] node "embed-certs-436067" hosting pod "coredns-7db6d8ff4d-h6ckp" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-436067" has status "Ready":"False"
	I0731 18:07:54.745742   73800 pod_ready.go:81] duration metric: took 3.986968ms for pod "coredns-7db6d8ff4d-h6ckp" in "kube-system" namespace to be "Ready" ...
	E0731 18:07:54.745751   73800 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-436067" hosting pod "coredns-7db6d8ff4d-h6ckp" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-436067" has status "Ready":"False"
	I0731 18:07:54.745757   73800 pod_ready.go:78] waiting up to 4m0s for pod "etcd-embed-certs-436067" in "kube-system" namespace to be "Ready" ...
	I0731 18:07:54.749650   73800 pod_ready.go:97] node "embed-certs-436067" hosting pod "etcd-embed-certs-436067" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-436067" has status "Ready":"False"
	I0731 18:07:54.749666   73800 pod_ready.go:81] duration metric: took 3.895483ms for pod "etcd-embed-certs-436067" in "kube-system" namespace to be "Ready" ...
	E0731 18:07:54.749673   73800 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-436067" hosting pod "etcd-embed-certs-436067" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-436067" has status "Ready":"False"
	I0731 18:07:54.749679   73800 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-embed-certs-436067" in "kube-system" namespace to be "Ready" ...
	I0731 18:07:54.753326   73800 pod_ready.go:97] node "embed-certs-436067" hosting pod "kube-apiserver-embed-certs-436067" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-436067" has status "Ready":"False"
	I0731 18:07:54.753351   73800 pod_ready.go:81] duration metric: took 3.66496ms for pod "kube-apiserver-embed-certs-436067" in "kube-system" namespace to be "Ready" ...
	E0731 18:07:54.753362   73800 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-436067" hosting pod "kube-apiserver-embed-certs-436067" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-436067" has status "Ready":"False"
	I0731 18:07:54.753370   73800 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-436067" in "kube-system" namespace to be "Ready" ...
	I0731 18:07:54.857956   73800 pod_ready.go:97] node "embed-certs-436067" hosting pod "kube-controller-manager-embed-certs-436067" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-436067" has status "Ready":"False"
	I0731 18:07:54.857978   73800 pod_ready.go:81] duration metric: took 104.599259ms for pod "kube-controller-manager-embed-certs-436067" in "kube-system" namespace to be "Ready" ...
	E0731 18:07:54.857988   73800 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-436067" hosting pod "kube-controller-manager-embed-certs-436067" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-436067" has status "Ready":"False"
	I0731 18:07:54.857995   73800 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-tl9pj" in "kube-system" namespace to be "Ready" ...
	I0731 18:07:55.257589   73800 pod_ready.go:92] pod "kube-proxy-tl9pj" in "kube-system" namespace has status "Ready":"True"
	I0731 18:07:55.257621   73800 pod_ready.go:81] duration metric: took 399.617003ms for pod "kube-proxy-tl9pj" in "kube-system" namespace to be "Ready" ...
	I0731 18:07:55.257630   73800 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-embed-certs-436067" in "kube-system" namespace to be "Ready" ...
	I0731 18:07:57.262770   73800 pod_ready.go:102] pod "kube-scheduler-embed-certs-436067" in "kube-system" namespace has status "Ready":"False"
	I0731 18:07:59.271094   73800 pod_ready.go:102] pod "kube-scheduler-embed-certs-436067" in "kube-system" namespace has status "Ready":"False"
	I0731 18:07:56.157721   74203 main.go:141] libmachine: (old-k8s-version-276459) DBG | domain old-k8s-version-276459 has defined MAC address 52:54:00:79:9d:96 in network mk-old-k8s-version-276459
	I0731 18:07:56.158176   74203 main.go:141] libmachine: (old-k8s-version-276459) DBG | unable to find current IP address of domain old-k8s-version-276459 in network mk-old-k8s-version-276459
	I0731 18:07:56.158216   74203 main.go:141] libmachine: (old-k8s-version-276459) DBG | I0731 18:07:56.158110   75164 retry.go:31] will retry after 3.552824334s: waiting for machine to come up
	I0731 18:07:59.712249   74203 main.go:141] libmachine: (old-k8s-version-276459) DBG | domain old-k8s-version-276459 has defined MAC address 52:54:00:79:9d:96 in network mk-old-k8s-version-276459
	I0731 18:07:59.712759   74203 main.go:141] libmachine: (old-k8s-version-276459) Found IP for machine: 192.168.39.26
	I0731 18:07:59.712783   74203 main.go:141] libmachine: (old-k8s-version-276459) DBG | domain old-k8s-version-276459 has current primary IP address 192.168.39.26 and MAC address 52:54:00:79:9d:96 in network mk-old-k8s-version-276459
	I0731 18:07:59.712793   74203 main.go:141] libmachine: (old-k8s-version-276459) Reserving static IP address...
	I0731 18:07:59.713268   74203 main.go:141] libmachine: (old-k8s-version-276459) DBG | found host DHCP lease matching {name: "old-k8s-version-276459", mac: "52:54:00:79:9d:96", ip: "192.168.39.26"} in network mk-old-k8s-version-276459: {Iface:virbr3 ExpiryTime:2024-07-31 19:07:52 +0000 UTC Type:0 Mac:52:54:00:79:9d:96 Iaid: IPaddr:192.168.39.26 Prefix:24 Hostname:old-k8s-version-276459 Clientid:01:52:54:00:79:9d:96}
	I0731 18:07:59.713297   74203 main.go:141] libmachine: (old-k8s-version-276459) DBG | skip adding static IP to network mk-old-k8s-version-276459 - found existing host DHCP lease matching {name: "old-k8s-version-276459", mac: "52:54:00:79:9d:96", ip: "192.168.39.26"}
	I0731 18:07:59.713320   74203 main.go:141] libmachine: (old-k8s-version-276459) Reserved static IP address: 192.168.39.26
	I0731 18:07:59.713343   74203 main.go:141] libmachine: (old-k8s-version-276459) Waiting for SSH to be available...
	I0731 18:07:59.713355   74203 main.go:141] libmachine: (old-k8s-version-276459) DBG | Getting to WaitForSSH function...
	I0731 18:07:59.716068   74203 main.go:141] libmachine: (old-k8s-version-276459) DBG | domain old-k8s-version-276459 has defined MAC address 52:54:00:79:9d:96 in network mk-old-k8s-version-276459
	I0731 18:07:59.716456   74203 main.go:141] libmachine: (old-k8s-version-276459) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:79:9d:96", ip: ""} in network mk-old-k8s-version-276459: {Iface:virbr3 ExpiryTime:2024-07-31 19:07:52 +0000 UTC Type:0 Mac:52:54:00:79:9d:96 Iaid: IPaddr:192.168.39.26 Prefix:24 Hostname:old-k8s-version-276459 Clientid:01:52:54:00:79:9d:96}
	I0731 18:07:59.716490   74203 main.go:141] libmachine: (old-k8s-version-276459) DBG | domain old-k8s-version-276459 has defined IP address 192.168.39.26 and MAC address 52:54:00:79:9d:96 in network mk-old-k8s-version-276459
	I0731 18:07:59.716701   74203 main.go:141] libmachine: (old-k8s-version-276459) DBG | Using SSH client type: external
	I0731 18:07:59.716725   74203 main.go:141] libmachine: (old-k8s-version-276459) DBG | Using SSH private key: /home/jenkins/minikube-integration/19349-8084/.minikube/machines/old-k8s-version-276459/id_rsa (-rw-------)
	I0731 18:07:59.716762   74203 main.go:141] libmachine: (old-k8s-version-276459) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.26 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19349-8084/.minikube/machines/old-k8s-version-276459/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0731 18:07:59.716776   74203 main.go:141] libmachine: (old-k8s-version-276459) DBG | About to run SSH command:
	I0731 18:07:59.716792   74203 main.go:141] libmachine: (old-k8s-version-276459) DBG | exit 0
	I0731 18:07:59.847720   74203 main.go:141] libmachine: (old-k8s-version-276459) DBG | SSH cmd err, output: <nil>: 
	I0731 18:07:59.848089   74203 main.go:141] libmachine: (old-k8s-version-276459) Calling .GetConfigRaw
	I0731 18:07:59.848847   74203 main.go:141] libmachine: (old-k8s-version-276459) Calling .GetIP
	I0731 18:07:59.851632   74203 main.go:141] libmachine: (old-k8s-version-276459) DBG | domain old-k8s-version-276459 has defined MAC address 52:54:00:79:9d:96 in network mk-old-k8s-version-276459
	I0731 18:07:59.852004   74203 main.go:141] libmachine: (old-k8s-version-276459) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:79:9d:96", ip: ""} in network mk-old-k8s-version-276459: {Iface:virbr3 ExpiryTime:2024-07-31 19:07:52 +0000 UTC Type:0 Mac:52:54:00:79:9d:96 Iaid: IPaddr:192.168.39.26 Prefix:24 Hostname:old-k8s-version-276459 Clientid:01:52:54:00:79:9d:96}
	I0731 18:07:59.852030   74203 main.go:141] libmachine: (old-k8s-version-276459) DBG | domain old-k8s-version-276459 has defined IP address 192.168.39.26 and MAC address 52:54:00:79:9d:96 in network mk-old-k8s-version-276459
	I0731 18:07:59.852321   74203 profile.go:143] Saving config to /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/old-k8s-version-276459/config.json ...
	I0731 18:07:59.852505   74203 machine.go:94] provisionDockerMachine start ...
	I0731 18:07:59.852524   74203 main.go:141] libmachine: (old-k8s-version-276459) Calling .DriverName
	I0731 18:07:59.852752   74203 main.go:141] libmachine: (old-k8s-version-276459) Calling .GetSSHHostname
	I0731 18:07:59.855198   74203 main.go:141] libmachine: (old-k8s-version-276459) DBG | domain old-k8s-version-276459 has defined MAC address 52:54:00:79:9d:96 in network mk-old-k8s-version-276459
	I0731 18:07:59.855596   74203 main.go:141] libmachine: (old-k8s-version-276459) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:79:9d:96", ip: ""} in network mk-old-k8s-version-276459: {Iface:virbr3 ExpiryTime:2024-07-31 19:07:52 +0000 UTC Type:0 Mac:52:54:00:79:9d:96 Iaid: IPaddr:192.168.39.26 Prefix:24 Hostname:old-k8s-version-276459 Clientid:01:52:54:00:79:9d:96}
	I0731 18:07:59.855626   74203 main.go:141] libmachine: (old-k8s-version-276459) DBG | domain old-k8s-version-276459 has defined IP address 192.168.39.26 and MAC address 52:54:00:79:9d:96 in network mk-old-k8s-version-276459
	I0731 18:07:59.855756   74203 main.go:141] libmachine: (old-k8s-version-276459) Calling .GetSSHPort
	I0731 18:07:59.855920   74203 main.go:141] libmachine: (old-k8s-version-276459) Calling .GetSSHKeyPath
	I0731 18:07:59.856071   74203 main.go:141] libmachine: (old-k8s-version-276459) Calling .GetSSHKeyPath
	I0731 18:07:59.856208   74203 main.go:141] libmachine: (old-k8s-version-276459) Calling .GetSSHUsername
	I0731 18:07:59.856372   74203 main.go:141] libmachine: Using SSH client type: native
	I0731 18:07:59.856601   74203 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.26 22 <nil> <nil>}
	I0731 18:07:59.856614   74203 main.go:141] libmachine: About to run SSH command:
	hostname
	I0731 18:07:59.963492   74203 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0731 18:07:59.963524   74203 main.go:141] libmachine: (old-k8s-version-276459) Calling .GetMachineName
	I0731 18:07:59.963762   74203 buildroot.go:166] provisioning hostname "old-k8s-version-276459"
	I0731 18:07:59.963794   74203 main.go:141] libmachine: (old-k8s-version-276459) Calling .GetMachineName
	I0731 18:07:59.963992   74203 main.go:141] libmachine: (old-k8s-version-276459) Calling .GetSSHHostname
	I0731 18:07:59.967261   74203 main.go:141] libmachine: (old-k8s-version-276459) DBG | domain old-k8s-version-276459 has defined MAC address 52:54:00:79:9d:96 in network mk-old-k8s-version-276459
	I0731 18:07:59.967725   74203 main.go:141] libmachine: (old-k8s-version-276459) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:79:9d:96", ip: ""} in network mk-old-k8s-version-276459: {Iface:virbr3 ExpiryTime:2024-07-31 19:07:52 +0000 UTC Type:0 Mac:52:54:00:79:9d:96 Iaid: IPaddr:192.168.39.26 Prefix:24 Hostname:old-k8s-version-276459 Clientid:01:52:54:00:79:9d:96}
	I0731 18:07:59.967762   74203 main.go:141] libmachine: (old-k8s-version-276459) DBG | domain old-k8s-version-276459 has defined IP address 192.168.39.26 and MAC address 52:54:00:79:9d:96 in network mk-old-k8s-version-276459
	I0731 18:07:59.967938   74203 main.go:141] libmachine: (old-k8s-version-276459) Calling .GetSSHPort
	I0731 18:07:59.968131   74203 main.go:141] libmachine: (old-k8s-version-276459) Calling .GetSSHKeyPath
	I0731 18:07:59.968316   74203 main.go:141] libmachine: (old-k8s-version-276459) Calling .GetSSHKeyPath
	I0731 18:07:59.968487   74203 main.go:141] libmachine: (old-k8s-version-276459) Calling .GetSSHUsername
	I0731 18:07:59.968687   74203 main.go:141] libmachine: Using SSH client type: native
	I0731 18:07:59.968872   74203 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.26 22 <nil> <nil>}
	I0731 18:07:59.968890   74203 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-276459 && echo "old-k8s-version-276459" | sudo tee /etc/hostname
	I0731 18:08:00.084360   74203 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-276459
	
	I0731 18:08:00.084390   74203 main.go:141] libmachine: (old-k8s-version-276459) Calling .GetSSHHostname
	I0731 18:08:00.087433   74203 main.go:141] libmachine: (old-k8s-version-276459) DBG | domain old-k8s-version-276459 has defined MAC address 52:54:00:79:9d:96 in network mk-old-k8s-version-276459
	I0731 18:08:00.087833   74203 main.go:141] libmachine: (old-k8s-version-276459) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:79:9d:96", ip: ""} in network mk-old-k8s-version-276459: {Iface:virbr3 ExpiryTime:2024-07-31 19:07:52 +0000 UTC Type:0 Mac:52:54:00:79:9d:96 Iaid: IPaddr:192.168.39.26 Prefix:24 Hostname:old-k8s-version-276459 Clientid:01:52:54:00:79:9d:96}
	I0731 18:08:00.087862   74203 main.go:141] libmachine: (old-k8s-version-276459) DBG | domain old-k8s-version-276459 has defined IP address 192.168.39.26 and MAC address 52:54:00:79:9d:96 in network mk-old-k8s-version-276459
	I0731 18:08:00.088016   74203 main.go:141] libmachine: (old-k8s-version-276459) Calling .GetSSHPort
	I0731 18:08:00.088187   74203 main.go:141] libmachine: (old-k8s-version-276459) Calling .GetSSHKeyPath
	I0731 18:08:00.088371   74203 main.go:141] libmachine: (old-k8s-version-276459) Calling .GetSSHKeyPath
	I0731 18:08:00.088521   74203 main.go:141] libmachine: (old-k8s-version-276459) Calling .GetSSHUsername
	I0731 18:08:00.088719   74203 main.go:141] libmachine: Using SSH client type: native
	I0731 18:08:00.088893   74203 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.26 22 <nil> <nil>}
	I0731 18:08:00.088915   74203 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-276459' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-276459/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-276459' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0731 18:08:00.200012   74203 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0731 18:08:00.200038   74203 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19349-8084/.minikube CaCertPath:/home/jenkins/minikube-integration/19349-8084/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19349-8084/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19349-8084/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19349-8084/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19349-8084/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19349-8084/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19349-8084/.minikube}
	I0731 18:08:00.200069   74203 buildroot.go:174] setting up certificates
	I0731 18:08:00.200081   74203 provision.go:84] configureAuth start
	I0731 18:08:00.200093   74203 main.go:141] libmachine: (old-k8s-version-276459) Calling .GetMachineName
	I0731 18:08:00.200360   74203 main.go:141] libmachine: (old-k8s-version-276459) Calling .GetIP
	I0731 18:08:00.203352   74203 main.go:141] libmachine: (old-k8s-version-276459) DBG | domain old-k8s-version-276459 has defined MAC address 52:54:00:79:9d:96 in network mk-old-k8s-version-276459
	I0731 18:08:00.203694   74203 main.go:141] libmachine: (old-k8s-version-276459) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:79:9d:96", ip: ""} in network mk-old-k8s-version-276459: {Iface:virbr3 ExpiryTime:2024-07-31 19:07:52 +0000 UTC Type:0 Mac:52:54:00:79:9d:96 Iaid: IPaddr:192.168.39.26 Prefix:24 Hostname:old-k8s-version-276459 Clientid:01:52:54:00:79:9d:96}
	I0731 18:08:00.203721   74203 main.go:141] libmachine: (old-k8s-version-276459) DBG | domain old-k8s-version-276459 has defined IP address 192.168.39.26 and MAC address 52:54:00:79:9d:96 in network mk-old-k8s-version-276459
	I0731 18:08:00.203951   74203 main.go:141] libmachine: (old-k8s-version-276459) Calling .GetSSHHostname
	I0731 18:08:00.206061   74203 main.go:141] libmachine: (old-k8s-version-276459) DBG | domain old-k8s-version-276459 has defined MAC address 52:54:00:79:9d:96 in network mk-old-k8s-version-276459
	I0731 18:08:00.206398   74203 main.go:141] libmachine: (old-k8s-version-276459) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:79:9d:96", ip: ""} in network mk-old-k8s-version-276459: {Iface:virbr3 ExpiryTime:2024-07-31 19:07:52 +0000 UTC Type:0 Mac:52:54:00:79:9d:96 Iaid: IPaddr:192.168.39.26 Prefix:24 Hostname:old-k8s-version-276459 Clientid:01:52:54:00:79:9d:96}
	I0731 18:08:00.206432   74203 main.go:141] libmachine: (old-k8s-version-276459) DBG | domain old-k8s-version-276459 has defined IP address 192.168.39.26 and MAC address 52:54:00:79:9d:96 in network mk-old-k8s-version-276459
	I0731 18:08:00.206510   74203 provision.go:143] copyHostCerts
	I0731 18:08:00.206580   74203 exec_runner.go:144] found /home/jenkins/minikube-integration/19349-8084/.minikube/ca.pem, removing ...
	I0731 18:08:00.206591   74203 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19349-8084/.minikube/ca.pem
	I0731 18:08:00.206654   74203 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19349-8084/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19349-8084/.minikube/ca.pem (1082 bytes)
	I0731 18:08:00.206759   74203 exec_runner.go:144] found /home/jenkins/minikube-integration/19349-8084/.minikube/cert.pem, removing ...
	I0731 18:08:00.206769   74203 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19349-8084/.minikube/cert.pem
	I0731 18:08:00.206799   74203 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19349-8084/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19349-8084/.minikube/cert.pem (1123 bytes)
	I0731 18:08:00.206876   74203 exec_runner.go:144] found /home/jenkins/minikube-integration/19349-8084/.minikube/key.pem, removing ...
	I0731 18:08:00.206885   74203 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19349-8084/.minikube/key.pem
	I0731 18:08:00.206913   74203 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19349-8084/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19349-8084/.minikube/key.pem (1675 bytes)
	I0731 18:08:00.207047   74203 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19349-8084/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19349-8084/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19349-8084/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-276459 san=[127.0.0.1 192.168.39.26 localhost minikube old-k8s-version-276459]
	I0731 18:08:00.279363   74203 provision.go:177] copyRemoteCerts
	I0731 18:08:00.279423   74203 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0731 18:08:00.279456   74203 main.go:141] libmachine: (old-k8s-version-276459) Calling .GetSSHHostname
	I0731 18:08:00.282234   74203 main.go:141] libmachine: (old-k8s-version-276459) DBG | domain old-k8s-version-276459 has defined MAC address 52:54:00:79:9d:96 in network mk-old-k8s-version-276459
	I0731 18:08:00.282601   74203 main.go:141] libmachine: (old-k8s-version-276459) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:79:9d:96", ip: ""} in network mk-old-k8s-version-276459: {Iface:virbr3 ExpiryTime:2024-07-31 19:07:52 +0000 UTC Type:0 Mac:52:54:00:79:9d:96 Iaid: IPaddr:192.168.39.26 Prefix:24 Hostname:old-k8s-version-276459 Clientid:01:52:54:00:79:9d:96}
	I0731 18:08:00.282630   74203 main.go:141] libmachine: (old-k8s-version-276459) DBG | domain old-k8s-version-276459 has defined IP address 192.168.39.26 and MAC address 52:54:00:79:9d:96 in network mk-old-k8s-version-276459
	I0731 18:08:00.282751   74203 main.go:141] libmachine: (old-k8s-version-276459) Calling .GetSSHPort
	I0731 18:08:00.283004   74203 main.go:141] libmachine: (old-k8s-version-276459) Calling .GetSSHKeyPath
	I0731 18:08:00.283178   74203 main.go:141] libmachine: (old-k8s-version-276459) Calling .GetSSHUsername
	I0731 18:08:00.283361   74203 sshutil.go:53] new ssh client: &{IP:192.168.39.26 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19349-8084/.minikube/machines/old-k8s-version-276459/id_rsa Username:docker}
	I0731 18:08:00.935990   73479 start.go:364] duration metric: took 51.517312901s to acquireMachinesLock for "no-preload-673754"
	I0731 18:08:00.936054   73479 start.go:96] Skipping create...Using existing machine configuration
	I0731 18:08:00.936066   73479 fix.go:54] fixHost starting: 
	I0731 18:08:00.936534   73479 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 18:08:00.936589   73479 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 18:08:00.954868   73479 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39013
	I0731 18:08:00.955405   73479 main.go:141] libmachine: () Calling .GetVersion
	I0731 18:08:00.955980   73479 main.go:141] libmachine: Using API Version  1
	I0731 18:08:00.956012   73479 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 18:08:00.956386   73479 main.go:141] libmachine: () Calling .GetMachineName
	I0731 18:08:00.956589   73479 main.go:141] libmachine: (no-preload-673754) Calling .DriverName
	I0731 18:08:00.956752   73479 main.go:141] libmachine: (no-preload-673754) Calling .GetState
	I0731 18:08:00.958461   73479 fix.go:112] recreateIfNeeded on no-preload-673754: state=Stopped err=<nil>
	I0731 18:08:00.958485   73479 main.go:141] libmachine: (no-preload-673754) Calling .DriverName
	W0731 18:08:00.958655   73479 fix.go:138] unexpected machine state, will restart: <nil>
	I0731 18:08:00.960117   73479 out.go:177] * Restarting existing kvm2 VM for "no-preload-673754" ...
	I0731 18:07:57.779258   73696 pod_ready.go:102] pod "metrics-server-569cc877fc-64hp4" in "kube-system" namespace has status "Ready":"False"
	I0731 18:07:59.780834   73696 pod_ready.go:102] pod "metrics-server-569cc877fc-64hp4" in "kube-system" namespace has status "Ready":"False"
	I0731 18:08:00.961340   73479 main.go:141] libmachine: (no-preload-673754) Calling .Start
	I0731 18:08:00.961543   73479 main.go:141] libmachine: (no-preload-673754) Ensuring networks are active...
	I0731 18:08:00.962332   73479 main.go:141] libmachine: (no-preload-673754) Ensuring network default is active
	I0731 18:08:00.962661   73479 main.go:141] libmachine: (no-preload-673754) Ensuring network mk-no-preload-673754 is active
	I0731 18:08:00.963165   73479 main.go:141] libmachine: (no-preload-673754) Getting domain xml...
	I0731 18:08:00.963982   73479 main.go:141] libmachine: (no-preload-673754) Creating domain...
	I0731 18:08:00.365254   74203 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19349-8084/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0731 18:08:00.389729   74203 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19349-8084/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0731 18:08:00.413143   74203 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19349-8084/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0731 18:08:00.436040   74203 provision.go:87] duration metric: took 235.932619ms to configureAuth
	I0731 18:08:00.436080   74203 buildroot.go:189] setting minikube options for container-runtime
	I0731 18:08:00.436288   74203 config.go:182] Loaded profile config "old-k8s-version-276459": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0731 18:08:00.436403   74203 main.go:141] libmachine: (old-k8s-version-276459) Calling .GetSSHHostname
	I0731 18:08:00.439184   74203 main.go:141] libmachine: (old-k8s-version-276459) DBG | domain old-k8s-version-276459 has defined MAC address 52:54:00:79:9d:96 in network mk-old-k8s-version-276459
	I0731 18:08:00.439543   74203 main.go:141] libmachine: (old-k8s-version-276459) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:79:9d:96", ip: ""} in network mk-old-k8s-version-276459: {Iface:virbr3 ExpiryTime:2024-07-31 19:07:52 +0000 UTC Type:0 Mac:52:54:00:79:9d:96 Iaid: IPaddr:192.168.39.26 Prefix:24 Hostname:old-k8s-version-276459 Clientid:01:52:54:00:79:9d:96}
	I0731 18:08:00.439575   74203 main.go:141] libmachine: (old-k8s-version-276459) DBG | domain old-k8s-version-276459 has defined IP address 192.168.39.26 and MAC address 52:54:00:79:9d:96 in network mk-old-k8s-version-276459
	I0731 18:08:00.439734   74203 main.go:141] libmachine: (old-k8s-version-276459) Calling .GetSSHPort
	I0731 18:08:00.439898   74203 main.go:141] libmachine: (old-k8s-version-276459) Calling .GetSSHKeyPath
	I0731 18:08:00.440093   74203 main.go:141] libmachine: (old-k8s-version-276459) Calling .GetSSHKeyPath
	I0731 18:08:00.440271   74203 main.go:141] libmachine: (old-k8s-version-276459) Calling .GetSSHUsername
	I0731 18:08:00.440450   74203 main.go:141] libmachine: Using SSH client type: native
	I0731 18:08:00.440661   74203 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.26 22 <nil> <nil>}
	I0731 18:08:00.440679   74203 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0731 18:08:00.707438   74203 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0731 18:08:00.707467   74203 machine.go:97] duration metric: took 854.948491ms to provisionDockerMachine
	I0731 18:08:00.707482   74203 start.go:293] postStartSetup for "old-k8s-version-276459" (driver="kvm2")
	I0731 18:08:00.707494   74203 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0731 18:08:00.707510   74203 main.go:141] libmachine: (old-k8s-version-276459) Calling .DriverName
	I0731 18:08:00.707811   74203 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0731 18:08:00.707837   74203 main.go:141] libmachine: (old-k8s-version-276459) Calling .GetSSHHostname
	I0731 18:08:00.710726   74203 main.go:141] libmachine: (old-k8s-version-276459) DBG | domain old-k8s-version-276459 has defined MAC address 52:54:00:79:9d:96 in network mk-old-k8s-version-276459
	I0731 18:08:00.711285   74203 main.go:141] libmachine: (old-k8s-version-276459) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:79:9d:96", ip: ""} in network mk-old-k8s-version-276459: {Iface:virbr3 ExpiryTime:2024-07-31 19:07:52 +0000 UTC Type:0 Mac:52:54:00:79:9d:96 Iaid: IPaddr:192.168.39.26 Prefix:24 Hostname:old-k8s-version-276459 Clientid:01:52:54:00:79:9d:96}
	I0731 18:08:00.711315   74203 main.go:141] libmachine: (old-k8s-version-276459) DBG | domain old-k8s-version-276459 has defined IP address 192.168.39.26 and MAC address 52:54:00:79:9d:96 in network mk-old-k8s-version-276459
	I0731 18:08:00.711458   74203 main.go:141] libmachine: (old-k8s-version-276459) Calling .GetSSHPort
	I0731 18:08:00.711703   74203 main.go:141] libmachine: (old-k8s-version-276459) Calling .GetSSHKeyPath
	I0731 18:08:00.711895   74203 main.go:141] libmachine: (old-k8s-version-276459) Calling .GetSSHUsername
	I0731 18:08:00.712049   74203 sshutil.go:53] new ssh client: &{IP:192.168.39.26 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19349-8084/.minikube/machines/old-k8s-version-276459/id_rsa Username:docker}
	I0731 18:08:00.793719   74203 ssh_runner.go:195] Run: cat /etc/os-release
	I0731 18:08:00.797858   74203 info.go:137] Remote host: Buildroot 2023.02.9
	I0731 18:08:00.797888   74203 filesync.go:126] Scanning /home/jenkins/minikube-integration/19349-8084/.minikube/addons for local assets ...
	I0731 18:08:00.797960   74203 filesync.go:126] Scanning /home/jenkins/minikube-integration/19349-8084/.minikube/files for local assets ...
	I0731 18:08:00.798038   74203 filesync.go:149] local asset: /home/jenkins/minikube-integration/19349-8084/.minikube/files/etc/ssl/certs/152592.pem -> 152592.pem in /etc/ssl/certs
	I0731 18:08:00.798130   74203 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0731 18:08:00.807013   74203 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19349-8084/.minikube/files/etc/ssl/certs/152592.pem --> /etc/ssl/certs/152592.pem (1708 bytes)
	I0731 18:08:00.829440   74203 start.go:296] duration metric: took 121.944271ms for postStartSetup
	I0731 18:08:00.829487   74203 fix.go:56] duration metric: took 19.277312964s for fixHost
	I0731 18:08:00.829518   74203 main.go:141] libmachine: (old-k8s-version-276459) Calling .GetSSHHostname
	I0731 18:08:00.832718   74203 main.go:141] libmachine: (old-k8s-version-276459) DBG | domain old-k8s-version-276459 has defined MAC address 52:54:00:79:9d:96 in network mk-old-k8s-version-276459
	I0731 18:08:00.833048   74203 main.go:141] libmachine: (old-k8s-version-276459) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:79:9d:96", ip: ""} in network mk-old-k8s-version-276459: {Iface:virbr3 ExpiryTime:2024-07-31 19:07:52 +0000 UTC Type:0 Mac:52:54:00:79:9d:96 Iaid: IPaddr:192.168.39.26 Prefix:24 Hostname:old-k8s-version-276459 Clientid:01:52:54:00:79:9d:96}
	I0731 18:08:00.833082   74203 main.go:141] libmachine: (old-k8s-version-276459) DBG | domain old-k8s-version-276459 has defined IP address 192.168.39.26 and MAC address 52:54:00:79:9d:96 in network mk-old-k8s-version-276459
	I0731 18:08:00.833317   74203 main.go:141] libmachine: (old-k8s-version-276459) Calling .GetSSHPort
	I0731 18:08:00.833533   74203 main.go:141] libmachine: (old-k8s-version-276459) Calling .GetSSHKeyPath
	I0731 18:08:00.833752   74203 main.go:141] libmachine: (old-k8s-version-276459) Calling .GetSSHKeyPath
	I0731 18:08:00.833887   74203 main.go:141] libmachine: (old-k8s-version-276459) Calling .GetSSHUsername
	I0731 18:08:00.834189   74203 main.go:141] libmachine: Using SSH client type: native
	I0731 18:08:00.834364   74203 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.26 22 <nil> <nil>}
	I0731 18:08:00.834377   74203 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0731 18:08:00.935834   74203 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722449280.899364873
	
	I0731 18:08:00.935853   74203 fix.go:216] guest clock: 1722449280.899364873
	I0731 18:08:00.935860   74203 fix.go:229] Guest: 2024-07-31 18:08:00.899364873 +0000 UTC Remote: 2024-07-31 18:08:00.829491013 +0000 UTC m=+245.518063325 (delta=69.87386ms)
	I0731 18:08:00.935894   74203 fix.go:200] guest clock delta is within tolerance: 69.87386ms
	I0731 18:08:00.935899   74203 start.go:83] releasing machines lock for "old-k8s-version-276459", held for 19.38376262s
	I0731 18:08:00.935937   74203 main.go:141] libmachine: (old-k8s-version-276459) Calling .DriverName
	I0731 18:08:00.936220   74203 main.go:141] libmachine: (old-k8s-version-276459) Calling .GetIP
	I0731 18:08:00.939282   74203 main.go:141] libmachine: (old-k8s-version-276459) DBG | domain old-k8s-version-276459 has defined MAC address 52:54:00:79:9d:96 in network mk-old-k8s-version-276459
	I0731 18:08:00.939691   74203 main.go:141] libmachine: (old-k8s-version-276459) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:79:9d:96", ip: ""} in network mk-old-k8s-version-276459: {Iface:virbr3 ExpiryTime:2024-07-31 19:07:52 +0000 UTC Type:0 Mac:52:54:00:79:9d:96 Iaid: IPaddr:192.168.39.26 Prefix:24 Hostname:old-k8s-version-276459 Clientid:01:52:54:00:79:9d:96}
	I0731 18:08:00.939722   74203 main.go:141] libmachine: (old-k8s-version-276459) DBG | domain old-k8s-version-276459 has defined IP address 192.168.39.26 and MAC address 52:54:00:79:9d:96 in network mk-old-k8s-version-276459
	I0731 18:08:00.939911   74203 main.go:141] libmachine: (old-k8s-version-276459) Calling .DriverName
	I0731 18:08:00.940506   74203 main.go:141] libmachine: (old-k8s-version-276459) Calling .DriverName
	I0731 18:08:00.940704   74203 main.go:141] libmachine: (old-k8s-version-276459) Calling .DriverName
	I0731 18:08:00.940790   74203 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0731 18:08:00.940831   74203 main.go:141] libmachine: (old-k8s-version-276459) Calling .GetSSHHostname
	I0731 18:08:00.940960   74203 ssh_runner.go:195] Run: cat /version.json
	I0731 18:08:00.941043   74203 main.go:141] libmachine: (old-k8s-version-276459) Calling .GetSSHHostname
	I0731 18:08:00.943883   74203 main.go:141] libmachine: (old-k8s-version-276459) DBG | domain old-k8s-version-276459 has defined MAC address 52:54:00:79:9d:96 in network mk-old-k8s-version-276459
	I0731 18:08:00.943909   74203 main.go:141] libmachine: (old-k8s-version-276459) DBG | domain old-k8s-version-276459 has defined MAC address 52:54:00:79:9d:96 in network mk-old-k8s-version-276459
	I0731 18:08:00.944361   74203 main.go:141] libmachine: (old-k8s-version-276459) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:79:9d:96", ip: ""} in network mk-old-k8s-version-276459: {Iface:virbr3 ExpiryTime:2024-07-31 19:07:52 +0000 UTC Type:0 Mac:52:54:00:79:9d:96 Iaid: IPaddr:192.168.39.26 Prefix:24 Hostname:old-k8s-version-276459 Clientid:01:52:54:00:79:9d:96}
	I0731 18:08:00.944405   74203 main.go:141] libmachine: (old-k8s-version-276459) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:79:9d:96", ip: ""} in network mk-old-k8s-version-276459: {Iface:virbr3 ExpiryTime:2024-07-31 19:07:52 +0000 UTC Type:0 Mac:52:54:00:79:9d:96 Iaid: IPaddr:192.168.39.26 Prefix:24 Hostname:old-k8s-version-276459 Clientid:01:52:54:00:79:9d:96}
	I0731 18:08:00.944429   74203 main.go:141] libmachine: (old-k8s-version-276459) DBG | domain old-k8s-version-276459 has defined IP address 192.168.39.26 and MAC address 52:54:00:79:9d:96 in network mk-old-k8s-version-276459
	I0731 18:08:00.944442   74203 main.go:141] libmachine: (old-k8s-version-276459) DBG | domain old-k8s-version-276459 has defined IP address 192.168.39.26 and MAC address 52:54:00:79:9d:96 in network mk-old-k8s-version-276459
	I0731 18:08:00.944542   74203 main.go:141] libmachine: (old-k8s-version-276459) Calling .GetSSHPort
	I0731 18:08:00.944639   74203 main.go:141] libmachine: (old-k8s-version-276459) Calling .GetSSHPort
	I0731 18:08:00.944766   74203 main.go:141] libmachine: (old-k8s-version-276459) Calling .GetSSHKeyPath
	I0731 18:08:00.944817   74203 main.go:141] libmachine: (old-k8s-version-276459) Calling .GetSSHKeyPath
	I0731 18:08:00.944899   74203 main.go:141] libmachine: (old-k8s-version-276459) Calling .GetSSHUsername
	I0731 18:08:00.944979   74203 main.go:141] libmachine: (old-k8s-version-276459) Calling .GetSSHUsername
	I0731 18:08:00.945039   74203 sshutil.go:53] new ssh client: &{IP:192.168.39.26 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19349-8084/.minikube/machines/old-k8s-version-276459/id_rsa Username:docker}
	I0731 18:08:00.945110   74203 sshutil.go:53] new ssh client: &{IP:192.168.39.26 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19349-8084/.minikube/machines/old-k8s-version-276459/id_rsa Username:docker}
	I0731 18:08:01.023818   74203 ssh_runner.go:195] Run: systemctl --version
	I0731 18:08:01.063390   74203 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0731 18:08:01.205084   74203 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0731 18:08:01.210972   74203 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0731 18:08:01.211049   74203 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0731 18:08:01.226156   74203 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0731 18:08:01.226180   74203 start.go:495] detecting cgroup driver to use...
	I0731 18:08:01.226257   74203 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0731 18:08:01.241506   74203 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0731 18:08:01.256615   74203 docker.go:217] disabling cri-docker service (if available) ...
	I0731 18:08:01.256671   74203 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0731 18:08:01.271515   74203 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0731 18:08:01.287213   74203 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0731 18:08:01.415827   74203 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0731 18:08:01.578122   74203 docker.go:233] disabling docker service ...
	I0731 18:08:01.578208   74203 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0731 18:08:01.596564   74203 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0731 18:08:01.611984   74203 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0731 18:08:01.748972   74203 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0731 18:08:01.896911   74203 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0731 18:08:01.912921   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0731 18:08:01.931671   74203 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0731 18:08:01.931749   74203 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 18:08:01.943737   74203 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0731 18:08:01.943798   74203 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 18:08:01.954571   74203 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 18:08:01.964733   74203 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 18:08:01.976087   74203 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0731 18:08:01.987193   74203 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0731 18:08:01.996620   74203 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0731 18:08:01.996670   74203 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0731 18:08:02.011046   74203 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0731 18:08:02.022199   74203 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0731 18:08:02.147855   74203 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0731 18:08:02.309868   74203 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0731 18:08:02.309940   74203 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0731 18:08:02.314966   74203 start.go:563] Will wait 60s for crictl version
	I0731 18:08:02.315031   74203 ssh_runner.go:195] Run: which crictl
	I0731 18:08:02.318685   74203 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0731 18:08:02.359361   74203 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0731 18:08:02.359460   74203 ssh_runner.go:195] Run: crio --version
	I0731 18:08:02.387053   74203 ssh_runner.go:195] Run: crio --version
	I0731 18:08:02.417054   74203 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0731 18:08:01.265323   73800 pod_ready.go:92] pod "kube-scheduler-embed-certs-436067" in "kube-system" namespace has status "Ready":"True"
	I0731 18:08:01.265363   73800 pod_ready.go:81] duration metric: took 6.007715949s for pod "kube-scheduler-embed-certs-436067" in "kube-system" namespace to be "Ready" ...
	I0731 18:08:01.265376   73800 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-569cc877fc-fzxrw" in "kube-system" namespace to be "Ready" ...
	I0731 18:08:03.271693   73800 pod_ready.go:102] pod "metrics-server-569cc877fc-fzxrw" in "kube-system" namespace has status "Ready":"False"
	I0731 18:08:02.418272   74203 main.go:141] libmachine: (old-k8s-version-276459) Calling .GetIP
	I0731 18:08:02.421211   74203 main.go:141] libmachine: (old-k8s-version-276459) DBG | domain old-k8s-version-276459 has defined MAC address 52:54:00:79:9d:96 in network mk-old-k8s-version-276459
	I0731 18:08:02.421714   74203 main.go:141] libmachine: (old-k8s-version-276459) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:79:9d:96", ip: ""} in network mk-old-k8s-version-276459: {Iface:virbr3 ExpiryTime:2024-07-31 19:07:52 +0000 UTC Type:0 Mac:52:54:00:79:9d:96 Iaid: IPaddr:192.168.39.26 Prefix:24 Hostname:old-k8s-version-276459 Clientid:01:52:54:00:79:9d:96}
	I0731 18:08:02.421743   74203 main.go:141] libmachine: (old-k8s-version-276459) DBG | domain old-k8s-version-276459 has defined IP address 192.168.39.26 and MAC address 52:54:00:79:9d:96 in network mk-old-k8s-version-276459
	I0731 18:08:02.421949   74203 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0731 18:08:02.425878   74203 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0731 18:08:02.438082   74203 kubeadm.go:883] updating cluster {Name:old-k8s-version-276459 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-276459 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.26 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0731 18:08:02.438222   74203 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0731 18:08:02.438293   74203 ssh_runner.go:195] Run: sudo crictl images --output json
	I0731 18:08:02.484113   74203 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0731 18:08:02.484189   74203 ssh_runner.go:195] Run: which lz4
	I0731 18:08:02.488365   74203 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0731 18:08:02.492321   74203 ssh_runner.go:362] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0731 18:08:02.492352   74203 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19349-8084/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0731 18:08:03.946187   74203 crio.go:462] duration metric: took 1.457852426s to copy over tarball
	I0731 18:08:03.946261   74203 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
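When no preloaded images are found in the runtime, the cached tarball is copied over, unpacked into /var with extended attributes preserved, and then removed. A condensed sketch of that sequence, assuming the paths shown in the surrounding lines:

# Check for, extract, and clean up the CRI-O preload tarball (paths taken from the log).
preload=/preloaded.tar.lz4
stat -c "%s %y" "$preload"   # existence check; a non-zero exit means the tarball must be copied over first
# Unpack with xattrs kept so file capabilities inside the image layers survive.
sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf "$preload"
sudo rm -f "$preload"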
	I0731 18:08:01.781606   73696 pod_ready.go:102] pod "metrics-server-569cc877fc-64hp4" in "kube-system" namespace has status "Ready":"False"
	I0731 18:08:03.781786   73696 pod_ready.go:102] pod "metrics-server-569cc877fc-64hp4" in "kube-system" namespace has status "Ready":"False"
	I0731 18:08:02.287159   73479 main.go:141] libmachine: (no-preload-673754) Waiting to get IP...
	I0731 18:08:02.288338   73479 main.go:141] libmachine: (no-preload-673754) DBG | domain no-preload-673754 has defined MAC address 52:54:00:5a:ec:78 in network mk-no-preload-673754
	I0731 18:08:02.288812   73479 main.go:141] libmachine: (no-preload-673754) DBG | unable to find current IP address of domain no-preload-673754 in network mk-no-preload-673754
	I0731 18:08:02.288879   73479 main.go:141] libmachine: (no-preload-673754) DBG | I0731 18:08:02.288799   75356 retry.go:31] will retry after 229.074083ms: waiting for machine to come up
	I0731 18:08:02.519266   73479 main.go:141] libmachine: (no-preload-673754) DBG | domain no-preload-673754 has defined MAC address 52:54:00:5a:ec:78 in network mk-no-preload-673754
	I0731 18:08:02.519697   73479 main.go:141] libmachine: (no-preload-673754) DBG | unable to find current IP address of domain no-preload-673754 in network mk-no-preload-673754
	I0731 18:08:02.519720   73479 main.go:141] libmachine: (no-preload-673754) DBG | I0731 18:08:02.519663   75356 retry.go:31] will retry after 328.345922ms: waiting for machine to come up
	I0731 18:08:02.849290   73479 main.go:141] libmachine: (no-preload-673754) DBG | domain no-preload-673754 has defined MAC address 52:54:00:5a:ec:78 in network mk-no-preload-673754
	I0731 18:08:02.849839   73479 main.go:141] libmachine: (no-preload-673754) DBG | unable to find current IP address of domain no-preload-673754 in network mk-no-preload-673754
	I0731 18:08:02.849871   73479 main.go:141] libmachine: (no-preload-673754) DBG | I0731 18:08:02.849787   75356 retry.go:31] will retry after 339.030371ms: waiting for machine to come up
	I0731 18:08:03.190065   73479 main.go:141] libmachine: (no-preload-673754) DBG | domain no-preload-673754 has defined MAC address 52:54:00:5a:ec:78 in network mk-no-preload-673754
	I0731 18:08:03.190587   73479 main.go:141] libmachine: (no-preload-673754) DBG | unable to find current IP address of domain no-preload-673754 in network mk-no-preload-673754
	I0731 18:08:03.190620   73479 main.go:141] libmachine: (no-preload-673754) DBG | I0731 18:08:03.190539   75356 retry.go:31] will retry after 514.955663ms: waiting for machine to come up
	I0731 18:08:03.707808   73479 main.go:141] libmachine: (no-preload-673754) DBG | domain no-preload-673754 has defined MAC address 52:54:00:5a:ec:78 in network mk-no-preload-673754
	I0731 18:08:03.708382   73479 main.go:141] libmachine: (no-preload-673754) DBG | unable to find current IP address of domain no-preload-673754 in network mk-no-preload-673754
	I0731 18:08:03.708418   73479 main.go:141] libmachine: (no-preload-673754) DBG | I0731 18:08:03.708349   75356 retry.go:31] will retry after 543.558992ms: waiting for machine to come up
	I0731 18:08:04.253224   73479 main.go:141] libmachine: (no-preload-673754) DBG | domain no-preload-673754 has defined MAC address 52:54:00:5a:ec:78 in network mk-no-preload-673754
	I0731 18:08:04.253760   73479 main.go:141] libmachine: (no-preload-673754) DBG | unable to find current IP address of domain no-preload-673754 in network mk-no-preload-673754
	I0731 18:08:04.253781   73479 main.go:141] libmachine: (no-preload-673754) DBG | I0731 18:08:04.253708   75356 retry.go:31] will retry after 925.348689ms: waiting for machine to come up
	I0731 18:08:05.180439   73479 main.go:141] libmachine: (no-preload-673754) DBG | domain no-preload-673754 has defined MAC address 52:54:00:5a:ec:78 in network mk-no-preload-673754
	I0731 18:08:05.180833   73479 main.go:141] libmachine: (no-preload-673754) DBG | unable to find current IP address of domain no-preload-673754 in network mk-no-preload-673754
	I0731 18:08:05.180857   73479 main.go:141] libmachine: (no-preload-673754) DBG | I0731 18:08:05.180786   75356 retry.go:31] will retry after 1.014666798s: waiting for machine to come up
	I0731 18:08:06.196879   73479 main.go:141] libmachine: (no-preload-673754) DBG | domain no-preload-673754 has defined MAC address 52:54:00:5a:ec:78 in network mk-no-preload-673754
	I0731 18:08:06.197321   73479 main.go:141] libmachine: (no-preload-673754) DBG | unable to find current IP address of domain no-preload-673754 in network mk-no-preload-673754
	I0731 18:08:06.197355   73479 main.go:141] libmachine: (no-preload-673754) DBG | I0731 18:08:06.197258   75356 retry.go:31] will retry after 1.163649074s: waiting for machine to come up
	I0731 18:08:05.278001   73800 pod_ready.go:102] pod "metrics-server-569cc877fc-fzxrw" in "kube-system" namespace has status "Ready":"False"
	I0731 18:08:07.771870   73800 pod_ready.go:102] pod "metrics-server-569cc877fc-fzxrw" in "kube-system" namespace has status "Ready":"False"
	I0731 18:08:06.945760   74203 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.99946679s)
	I0731 18:08:06.945790   74203 crio.go:469] duration metric: took 2.999576832s to extract the tarball
	I0731 18:08:06.945800   74203 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0731 18:08:06.989081   74203 ssh_runner.go:195] Run: sudo crictl images --output json
	I0731 18:08:07.024521   74203 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0731 18:08:07.024545   74203 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0731 18:08:07.024615   74203 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0731 18:08:07.024645   74203 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0731 18:08:07.024695   74203 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0731 18:08:07.024729   74203 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0731 18:08:07.024718   74203 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0731 18:08:07.024780   74203 image.go:134] retrieving image: registry.k8s.io/pause:3.2
	I0731 18:08:07.024822   74203 image.go:134] retrieving image: registry.k8s.io/coredns:1.7.0
	I0731 18:08:07.024716   74203 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0731 18:08:07.026228   74203 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0731 18:08:07.026237   74203 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0731 18:08:07.026242   74203 image.go:177] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0731 18:08:07.026263   74203 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0731 18:08:07.026231   74203 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0731 18:08:07.026231   74203 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0731 18:08:07.026863   74203 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0731 18:08:07.027091   74203 image.go:177] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0731 18:08:07.282735   74203 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0731 18:08:07.284464   74203 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0731 18:08:07.287001   74203 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0731 18:08:07.305873   74203 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0731 18:08:07.307144   74203 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0731 18:08:07.311401   74203 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0731 18:08:07.318119   74203 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0731 18:08:07.366929   74203 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0731 18:08:07.366979   74203 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0731 18:08:07.367041   74203 ssh_runner.go:195] Run: which crictl
	I0731 18:08:07.393481   74203 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0731 18:08:07.393534   74203 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0731 18:08:07.393594   74203 ssh_runner.go:195] Run: which crictl
	I0731 18:08:07.441987   74203 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0731 18:08:07.442036   74203 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0731 18:08:07.442083   74203 ssh_runner.go:195] Run: which crictl
	I0731 18:08:07.449033   74203 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0731 18:08:07.449085   74203 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0731 18:08:07.449137   74203 ssh_runner.go:195] Run: which crictl
	I0731 18:08:07.465248   74203 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0731 18:08:07.465291   74203 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0731 18:08:07.465341   74203 ssh_runner.go:195] Run: which crictl
	I0731 18:08:07.476013   74203 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0731 18:08:07.476053   74203 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0731 18:08:07.476074   74203 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0731 18:08:07.476090   74203 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0731 18:08:07.476111   74203 ssh_runner.go:195] Run: which crictl
	I0731 18:08:07.476129   74203 ssh_runner.go:195] Run: which crictl
	I0731 18:08:07.476146   74203 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0731 18:08:07.476111   74203 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0731 18:08:07.476196   74203 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0731 18:08:07.476220   74203 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0731 18:08:07.476273   74203 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0731 18:08:07.592532   74203 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0731 18:08:07.592677   74203 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19349-8084/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0731 18:08:07.592709   74203 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19349-8084/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0731 18:08:07.592797   74203 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0731 18:08:07.637254   74203 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19349-8084/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0731 18:08:07.637276   74203 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19349-8084/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0731 18:08:07.637288   74203 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19349-8084/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0731 18:08:07.637292   74203 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19349-8084/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0731 18:08:07.640419   74203 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19349-8084/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0731 18:08:07.860814   74203 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0731 18:08:08.002115   74203 cache_images.go:92] duration metric: took 977.553376ms to LoadCachedImages
	W0731 18:08:08.002248   74203 out.go:239] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19349-8084/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2: no such file or directory
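LoadCachedImages probes each required image in the runtime's store with podman; when an image is missing it removes any stale tag with crictl and falls back to loading it from the on-disk cache, which, as the warning above shows, can itself be absent. A minimal sketch of the per-image check, using one image from the list as an example:

# Per-image presence check used during LoadCachedImages (image name chosen as an example).
img="registry.k8s.io/pause:3.2"
if ! sudo podman image inspect --format '{{.Id}}' "$img" >/dev/null 2>&1; then
  # Not in the runtime store: remove any stale reference, then reload from the local cache.
  sudo /usr/bin/crictl rmi "$img" || true
  echo "reload $img from the local image cache" >&2
fi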
	I0731 18:08:08.002267   74203 kubeadm.go:934] updating node { 192.168.39.26 8443 v1.20.0 crio true true} ...
	I0731 18:08:08.002404   74203 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-276459 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.39.26
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-276459 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0731 18:08:08.002500   74203 ssh_runner.go:195] Run: crio config
	I0731 18:08:08.059237   74203 cni.go:84] Creating CNI manager for ""
	I0731 18:08:08.059264   74203 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0731 18:08:08.059281   74203 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0731 18:08:08.059313   74203 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.26 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-276459 NodeName:old-k8s-version-276459 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.26"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.26 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt Stati
cPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0731 18:08:08.059503   74203 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.26
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-276459"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.26
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.26"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0731 18:08:08.059575   74203 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0731 18:08:08.070299   74203 binaries.go:44] Found k8s binaries, skipping transfer
	I0731 18:08:08.070388   74203 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0731 18:08:08.082083   74203 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (429 bytes)
	I0731 18:08:08.101728   74203 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0731 18:08:08.120721   74203 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2120 bytes)
	I0731 18:08:08.137997   74203 ssh_runner.go:195] Run: grep 192.168.39.26	control-plane.minikube.internal$ /etc/hosts
	I0731 18:08:08.141797   74203 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.26	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0731 18:08:08.156861   74203 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0731 18:08:08.287700   74203 ssh_runner.go:195] Run: sudo systemctl start kubelet
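The kubelet unit content shown earlier (the [Unit]/[Service] block with the overridden ExecStart) is written to the guest, after which systemd is reloaded and the service started. A sketch of that step; placing the override in the standard 10-kubeadm.conf drop-in is an assumption, as the log does not show which generated file carries which section:

# Install the kubelet override shown earlier in the log, then restart the service.
sudo mkdir -p /etc/systemd/system/kubelet.service.d
sudo tee /etc/systemd/system/kubelet.service.d/10-kubeadm.conf >/dev/null <<'EOF'
[Unit]
Wants=crio.service

[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-276459 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.39.26
EOF
sudo systemctl daemon-reload
sudo systemctl start kubelet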
	I0731 18:08:08.307598   74203 certs.go:68] Setting up /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/old-k8s-version-276459 for IP: 192.168.39.26
	I0731 18:08:08.307623   74203 certs.go:194] generating shared ca certs ...
	I0731 18:08:08.307644   74203 certs.go:226] acquiring lock for ca certs: {Name:mk49eed439085f9c30746706460ce213570d1997 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 18:08:08.307811   74203 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19349-8084/.minikube/ca.key
	I0731 18:08:08.307855   74203 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19349-8084/.minikube/proxy-client-ca.key
	I0731 18:08:08.307868   74203 certs.go:256] generating profile certs ...
	I0731 18:08:08.307987   74203 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/old-k8s-version-276459/client.key
	I0731 18:08:08.308062   74203 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/old-k8s-version-276459/apiserver.key.7c620cac
	I0731 18:08:08.308123   74203 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/old-k8s-version-276459/proxy-client.key
	I0731 18:08:08.308283   74203 certs.go:484] found cert: /home/jenkins/minikube-integration/19349-8084/.minikube/certs/15259.pem (1338 bytes)
	W0731 18:08:08.308315   74203 certs.go:480] ignoring /home/jenkins/minikube-integration/19349-8084/.minikube/certs/15259_empty.pem, impossibly tiny 0 bytes
	I0731 18:08:08.308324   74203 certs.go:484] found cert: /home/jenkins/minikube-integration/19349-8084/.minikube/certs/ca-key.pem (1675 bytes)
	I0731 18:08:08.308362   74203 certs.go:484] found cert: /home/jenkins/minikube-integration/19349-8084/.minikube/certs/ca.pem (1082 bytes)
	I0731 18:08:08.308382   74203 certs.go:484] found cert: /home/jenkins/minikube-integration/19349-8084/.minikube/certs/cert.pem (1123 bytes)
	I0731 18:08:08.308402   74203 certs.go:484] found cert: /home/jenkins/minikube-integration/19349-8084/.minikube/certs/key.pem (1675 bytes)
	I0731 18:08:08.308438   74203 certs.go:484] found cert: /home/jenkins/minikube-integration/19349-8084/.minikube/files/etc/ssl/certs/152592.pem (1708 bytes)
	I0731 18:08:08.309095   74203 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19349-8084/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0731 18:08:08.355508   74203 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19349-8084/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0731 18:08:08.391999   74203 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19349-8084/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0731 18:08:08.427937   74203 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19349-8084/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0731 18:08:08.456268   74203 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/old-k8s-version-276459/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0731 18:08:08.486991   74203 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/old-k8s-version-276459/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0731 18:08:08.519564   74203 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/old-k8s-version-276459/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0731 18:08:08.557029   74203 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/old-k8s-version-276459/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0731 18:08:08.583971   74203 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19349-8084/.minikube/files/etc/ssl/certs/152592.pem --> /usr/share/ca-certificates/152592.pem (1708 bytes)
	I0731 18:08:08.608505   74203 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19349-8084/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0731 18:08:08.630279   74203 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19349-8084/.minikube/certs/15259.pem --> /usr/share/ca-certificates/15259.pem (1338 bytes)
	I0731 18:08:08.655012   74203 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0731 18:08:08.671907   74203 ssh_runner.go:195] Run: openssl version
	I0731 18:08:08.677538   74203 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/152592.pem && ln -fs /usr/share/ca-certificates/152592.pem /etc/ssl/certs/152592.pem"
	I0731 18:08:08.687877   74203 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/152592.pem
	I0731 18:08:08.692201   74203 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 31 16:54 /usr/share/ca-certificates/152592.pem
	I0731 18:08:08.692258   74203 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/152592.pem
	I0731 18:08:08.698563   74203 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/152592.pem /etc/ssl/certs/3ec20f2e.0"
	I0731 18:08:08.708986   74203 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0731 18:08:08.719132   74203 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0731 18:08:08.723242   74203 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 31 16:42 /usr/share/ca-certificates/minikubeCA.pem
	I0731 18:08:08.723299   74203 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0731 18:08:08.729032   74203 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0731 18:08:08.739306   74203 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/15259.pem && ln -fs /usr/share/ca-certificates/15259.pem /etc/ssl/certs/15259.pem"
	I0731 18:08:08.749759   74203 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/15259.pem
	I0731 18:08:08.754167   74203 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 31 16:54 /usr/share/ca-certificates/15259.pem
	I0731 18:08:08.754228   74203 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/15259.pem
	I0731 18:08:08.759786   74203 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/15259.pem /etc/ssl/certs/51391683.0"
	I0731 18:08:08.770180   74203 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0731 18:08:08.775414   74203 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0731 18:08:08.781830   74203 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0731 18:08:08.787876   74203 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0731 18:08:08.793927   74203 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0731 18:08:08.800090   74203 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0731 18:08:08.806169   74203 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
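Before the existing certificates are reused, each one is hashed and linked into the system trust store and then checked for imminent expiry; -checkend 86400 exits non-zero if the certificate will expire within the next 24 hours. A standalone sketch of both checks, with file names taken from the log:

# Link a CA certificate under its subject hash so OpenSSL can resolve it.
hash=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${hash}.0"
# Verify a serving certificate is present and not about to expire.
cert=/var/lib/minikube/certs/apiserver-kubelet-client.crt
if openssl x509 -noout -in "$cert" -checkend 86400; then
  echo "certificate valid for at least another 24h"
else
  echo "certificate missing or expiring within 24h" >&2
fi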
	I0731 18:08:08.811895   74203 kubeadm.go:392] StartCluster: {Name:old-k8s-version-276459 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v
1.20.0 ClusterName:old-k8s-version-276459 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.26 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false
MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0731 18:08:08.811983   74203 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0731 18:08:08.812040   74203 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0731 18:08:08.853889   74203 cri.go:89] found id: ""
	I0731 18:08:08.853989   74203 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0731 18:08:08.863817   74203 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0731 18:08:08.863837   74203 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0731 18:08:08.863887   74203 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0731 18:08:08.873411   74203 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0731 18:08:08.874616   74203 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-276459" does not appear in /home/jenkins/minikube-integration/19349-8084/kubeconfig
	I0731 18:08:08.875356   74203 kubeconfig.go:62] /home/jenkins/minikube-integration/19349-8084/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-276459" cluster setting kubeconfig missing "old-k8s-version-276459" context setting]
	I0731 18:08:08.876650   74203 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19349-8084/kubeconfig: {Name:mkf2340bc1ada3c683f132aca37228d7588e1fdb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 18:08:08.918433   74203 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0731 18:08:08.931013   74203 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.39.26
	I0731 18:08:08.931067   74203 kubeadm.go:1160] stopping kube-system containers ...
	I0731 18:08:08.931083   74203 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0731 18:08:08.931163   74203 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0731 18:08:08.964683   74203 cri.go:89] found id: ""
	I0731 18:08:08.964759   74203 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0731 18:08:08.980459   74203 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0731 18:08:08.989969   74203 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0731 18:08:08.989997   74203 kubeadm.go:157] found existing configuration files:
	
	I0731 18:08:08.990049   74203 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0731 18:08:08.999015   74203 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0731 18:08:08.999074   74203 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0731 18:08:09.008055   74203 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0731 18:08:09.016532   74203 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0731 18:08:09.016599   74203 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0731 18:08:09.025791   74203 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0731 18:08:09.034160   74203 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0731 18:08:09.034227   74203 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0731 18:08:09.043381   74203 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0731 18:08:09.053419   74203 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0731 18:08:09.053832   74203 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
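The stale-config cleanup above follows one pattern per file: if a kubeconfig under /etc/kubernetes does not reference https://control-plane.minikube.internal:8443 (here, because none of the files exist yet), it is removed so kubeadm can regenerate it. The same loop, written out as a sketch:

# Remove any kubeconfig that does not point at the expected control-plane endpoint.
for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
  if ! sudo grep -q "https://control-plane.minikube.internal:8443" "/etc/kubernetes/$f" 2>/dev/null; then
    sudo rm -f "/etc/kubernetes/$f"
  fi
done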
	I0731 18:08:09.064966   74203 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0731 18:08:09.073962   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0731 18:08:09.198503   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0731 18:08:10.048258   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0731 18:08:10.283812   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0731 18:08:06.285091   73696 pod_ready.go:102] pod "metrics-server-569cc877fc-64hp4" in "kube-system" namespace has status "Ready":"False"
	I0731 18:08:08.779998   73696 pod_ready.go:102] pod "metrics-server-569cc877fc-64hp4" in "kube-system" namespace has status "Ready":"False"
	I0731 18:08:10.780198   73696 pod_ready.go:102] pod "metrics-server-569cc877fc-64hp4" in "kube-system" namespace has status "Ready":"False"
	I0731 18:08:07.362756   73479 main.go:141] libmachine: (no-preload-673754) DBG | domain no-preload-673754 has defined MAC address 52:54:00:5a:ec:78 in network mk-no-preload-673754
	I0731 18:08:07.363299   73479 main.go:141] libmachine: (no-preload-673754) DBG | unable to find current IP address of domain no-preload-673754 in network mk-no-preload-673754
	I0731 18:08:07.363328   73479 main.go:141] libmachine: (no-preload-673754) DBG | I0731 18:08:07.363231   75356 retry.go:31] will retry after 1.508296616s: waiting for machine to come up
	I0731 18:08:08.873528   73479 main.go:141] libmachine: (no-preload-673754) DBG | domain no-preload-673754 has defined MAC address 52:54:00:5a:ec:78 in network mk-no-preload-673754
	I0731 18:08:08.874013   73479 main.go:141] libmachine: (no-preload-673754) DBG | unable to find current IP address of domain no-preload-673754 in network mk-no-preload-673754
	I0731 18:08:08.874051   73479 main.go:141] libmachine: (no-preload-673754) DBG | I0731 18:08:08.873971   75356 retry.go:31] will retry after 2.281343566s: waiting for machine to come up
	I0731 18:08:11.157083   73479 main.go:141] libmachine: (no-preload-673754) DBG | domain no-preload-673754 has defined MAC address 52:54:00:5a:ec:78 in network mk-no-preload-673754
	I0731 18:08:11.157578   73479 main.go:141] libmachine: (no-preload-673754) DBG | unable to find current IP address of domain no-preload-673754 in network mk-no-preload-673754
	I0731 18:08:11.157609   73479 main.go:141] libmachine: (no-preload-673754) DBG | I0731 18:08:11.157537   75356 retry.go:31] will retry after 2.49049752s: waiting for machine to come up
	I0731 18:08:09.802010   73800 pod_ready.go:102] pod "metrics-server-569cc877fc-fzxrw" in "kube-system" namespace has status "Ready":"False"
	I0731 18:08:12.271900   73800 pod_ready.go:102] pod "metrics-server-569cc877fc-fzxrw" in "kube-system" namespace has status "Ready":"False"
	I0731 18:08:10.390012   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
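Rather than running a full kubeadm init, the restart replays individual init phases against the freshly written config; the pgrep loop that follows then simply waits for the kube-apiserver process to appear. The phase sequence, collected from the commands above:

# kubeadm init phases replayed during the control-plane restart (paths from the log).
cfg=/var/tmp/minikube/kubeadm.yaml
kpath="/var/lib/minikube/binaries/v1.20.0:$PATH"
sudo env PATH="$kpath" kubeadm init phase certs all --config "$cfg"
sudo env PATH="$kpath" kubeadm init phase kubeconfig all --config "$cfg"
sudo env PATH="$kpath" kubeadm init phase kubelet-start --config "$cfg"
sudo env PATH="$kpath" kubeadm init phase control-plane all --config "$cfg"
sudo env PATH="$kpath" kubeadm init phase etcd local --config "$cfg"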
	I0731 18:08:10.477969   74203 api_server.go:52] waiting for apiserver process to appear ...
	I0731 18:08:10.478093   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:08:10.978427   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:08:11.478715   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:08:11.978685   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:08:12.478211   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:08:12.978218   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:08:13.478493   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:08:13.978778   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:08:14.478489   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:08:14.978983   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:08:13.278943   73696 pod_ready.go:102] pod "metrics-server-569cc877fc-64hp4" in "kube-system" namespace has status "Ready":"False"
	I0731 18:08:15.778760   73696 pod_ready.go:102] pod "metrics-server-569cc877fc-64hp4" in "kube-system" namespace has status "Ready":"False"
	I0731 18:08:13.650131   73479 main.go:141] libmachine: (no-preload-673754) DBG | domain no-preload-673754 has defined MAC address 52:54:00:5a:ec:78 in network mk-no-preload-673754
	I0731 18:08:13.650459   73479 main.go:141] libmachine: (no-preload-673754) DBG | unable to find current IP address of domain no-preload-673754 in network mk-no-preload-673754
	I0731 18:08:13.650480   73479 main.go:141] libmachine: (no-preload-673754) DBG | I0731 18:08:13.650428   75356 retry.go:31] will retry after 3.437877467s: waiting for machine to come up
	I0731 18:08:14.771879   73800 pod_ready.go:102] pod "metrics-server-569cc877fc-fzxrw" in "kube-system" namespace has status "Ready":"False"
	I0731 18:08:17.272673   73800 pod_ready.go:102] pod "metrics-server-569cc877fc-fzxrw" in "kube-system" namespace has status "Ready":"False"
	I0731 18:08:15.478444   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:08:15.978399   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:08:16.478641   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:08:16.979036   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:08:17.479053   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:08:17.978819   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:08:18.478280   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:08:18.978448   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:08:19.479056   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:08:19.978969   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:08:18.279604   73696 pod_ready.go:102] pod "metrics-server-569cc877fc-64hp4" in "kube-system" namespace has status "Ready":"False"
	I0731 18:08:20.778532   73696 pod_ready.go:102] pod "metrics-server-569cc877fc-64hp4" in "kube-system" namespace has status "Ready":"False"
	I0731 18:08:17.089986   73479 main.go:141] libmachine: (no-preload-673754) DBG | domain no-preload-673754 has defined MAC address 52:54:00:5a:ec:78 in network mk-no-preload-673754
	I0731 18:08:17.090556   73479 main.go:141] libmachine: (no-preload-673754) DBG | unable to find current IP address of domain no-preload-673754 in network mk-no-preload-673754
	I0731 18:08:17.090590   73479 main.go:141] libmachine: (no-preload-673754) DBG | I0731 18:08:17.090509   75356 retry.go:31] will retry after 2.95036051s: waiting for machine to come up
	I0731 18:08:20.044455   73479 main.go:141] libmachine: (no-preload-673754) DBG | domain no-preload-673754 has defined MAC address 52:54:00:5a:ec:78 in network mk-no-preload-673754
	I0731 18:08:20.044914   73479 main.go:141] libmachine: (no-preload-673754) Found IP for machine: 192.168.61.126
	I0731 18:08:20.044935   73479 main.go:141] libmachine: (no-preload-673754) Reserving static IP address...
	I0731 18:08:20.044948   73479 main.go:141] libmachine: (no-preload-673754) DBG | domain no-preload-673754 has current primary IP address 192.168.61.126 and MAC address 52:54:00:5a:ec:78 in network mk-no-preload-673754
	I0731 18:08:20.045286   73479 main.go:141] libmachine: (no-preload-673754) Reserved static IP address: 192.168.61.126
	I0731 18:08:20.045308   73479 main.go:141] libmachine: (no-preload-673754) Waiting for SSH to be available...
	I0731 18:08:20.045331   73479 main.go:141] libmachine: (no-preload-673754) DBG | found host DHCP lease matching {name: "no-preload-673754", mac: "52:54:00:5a:ec:78", ip: "192.168.61.126"} in network mk-no-preload-673754: {Iface:virbr2 ExpiryTime:2024-07-31 19:08:12 +0000 UTC Type:0 Mac:52:54:00:5a:ec:78 Iaid: IPaddr:192.168.61.126 Prefix:24 Hostname:no-preload-673754 Clientid:01:52:54:00:5a:ec:78}
	I0731 18:08:20.045352   73479 main.go:141] libmachine: (no-preload-673754) DBG | skip adding static IP to network mk-no-preload-673754 - found existing host DHCP lease matching {name: "no-preload-673754", mac: "52:54:00:5a:ec:78", ip: "192.168.61.126"}
	I0731 18:08:20.045367   73479 main.go:141] libmachine: (no-preload-673754) DBG | Getting to WaitForSSH function...
	I0731 18:08:20.047574   73479 main.go:141] libmachine: (no-preload-673754) DBG | domain no-preload-673754 has defined MAC address 52:54:00:5a:ec:78 in network mk-no-preload-673754
	I0731 18:08:20.047913   73479 main.go:141] libmachine: (no-preload-673754) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:ec:78", ip: ""} in network mk-no-preload-673754: {Iface:virbr2 ExpiryTime:2024-07-31 19:08:12 +0000 UTC Type:0 Mac:52:54:00:5a:ec:78 Iaid: IPaddr:192.168.61.126 Prefix:24 Hostname:no-preload-673754 Clientid:01:52:54:00:5a:ec:78}
	I0731 18:08:20.047939   73479 main.go:141] libmachine: (no-preload-673754) DBG | domain no-preload-673754 has defined IP address 192.168.61.126 and MAC address 52:54:00:5a:ec:78 in network mk-no-preload-673754
	I0731 18:08:20.048069   73479 main.go:141] libmachine: (no-preload-673754) DBG | Using SSH client type: external
	I0731 18:08:20.048106   73479 main.go:141] libmachine: (no-preload-673754) DBG | Using SSH private key: /home/jenkins/minikube-integration/19349-8084/.minikube/machines/no-preload-673754/id_rsa (-rw-------)
	I0731 18:08:20.048150   73479 main.go:141] libmachine: (no-preload-673754) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.126 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19349-8084/.minikube/machines/no-preload-673754/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0731 18:08:20.048168   73479 main.go:141] libmachine: (no-preload-673754) DBG | About to run SSH command:
	I0731 18:08:20.048181   73479 main.go:141] libmachine: (no-preload-673754) DBG | exit 0
	I0731 18:08:20.175606   73479 main.go:141] libmachine: (no-preload-673754) DBG | SSH cmd err, output: <nil>: 
	I0731 18:08:20.175917   73479 main.go:141] libmachine: (no-preload-673754) Calling .GetConfigRaw
	I0731 18:08:20.176508   73479 main.go:141] libmachine: (no-preload-673754) Calling .GetIP
	I0731 18:08:20.179035   73479 main.go:141] libmachine: (no-preload-673754) DBG | domain no-preload-673754 has defined MAC address 52:54:00:5a:ec:78 in network mk-no-preload-673754
	I0731 18:08:20.179374   73479 main.go:141] libmachine: (no-preload-673754) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:ec:78", ip: ""} in network mk-no-preload-673754: {Iface:virbr2 ExpiryTime:2024-07-31 19:08:12 +0000 UTC Type:0 Mac:52:54:00:5a:ec:78 Iaid: IPaddr:192.168.61.126 Prefix:24 Hostname:no-preload-673754 Clientid:01:52:54:00:5a:ec:78}
	I0731 18:08:20.179404   73479 main.go:141] libmachine: (no-preload-673754) DBG | domain no-preload-673754 has defined IP address 192.168.61.126 and MAC address 52:54:00:5a:ec:78 in network mk-no-preload-673754
	I0731 18:08:20.179686   73479 profile.go:143] Saving config to /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/no-preload-673754/config.json ...
	I0731 18:08:20.179869   73479 machine.go:94] provisionDockerMachine start ...
	I0731 18:08:20.179885   73479 main.go:141] libmachine: (no-preload-673754) Calling .DriverName
	I0731 18:08:20.180088   73479 main.go:141] libmachine: (no-preload-673754) Calling .GetSSHHostname
	I0731 18:08:20.182345   73479 main.go:141] libmachine: (no-preload-673754) DBG | domain no-preload-673754 has defined MAC address 52:54:00:5a:ec:78 in network mk-no-preload-673754
	I0731 18:08:20.182702   73479 main.go:141] libmachine: (no-preload-673754) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:ec:78", ip: ""} in network mk-no-preload-673754: {Iface:virbr2 ExpiryTime:2024-07-31 19:08:12 +0000 UTC Type:0 Mac:52:54:00:5a:ec:78 Iaid: IPaddr:192.168.61.126 Prefix:24 Hostname:no-preload-673754 Clientid:01:52:54:00:5a:ec:78}
	I0731 18:08:20.182727   73479 main.go:141] libmachine: (no-preload-673754) DBG | domain no-preload-673754 has defined IP address 192.168.61.126 and MAC address 52:54:00:5a:ec:78 in network mk-no-preload-673754
	I0731 18:08:20.182848   73479 main.go:141] libmachine: (no-preload-673754) Calling .GetSSHPort
	I0731 18:08:20.183060   73479 main.go:141] libmachine: (no-preload-673754) Calling .GetSSHKeyPath
	I0731 18:08:20.183227   73479 main.go:141] libmachine: (no-preload-673754) Calling .GetSSHKeyPath
	I0731 18:08:20.183414   73479 main.go:141] libmachine: (no-preload-673754) Calling .GetSSHUsername
	I0731 18:08:20.183572   73479 main.go:141] libmachine: Using SSH client type: native
	I0731 18:08:20.183747   73479 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.126 22 <nil> <nil>}
	I0731 18:08:20.183757   73479 main.go:141] libmachine: About to run SSH command:
	hostname
	I0731 18:08:20.295090   73479 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0731 18:08:20.295149   73479 main.go:141] libmachine: (no-preload-673754) Calling .GetMachineName
	I0731 18:08:20.295424   73479 buildroot.go:166] provisioning hostname "no-preload-673754"
	I0731 18:08:20.295454   73479 main.go:141] libmachine: (no-preload-673754) Calling .GetMachineName
	I0731 18:08:20.295631   73479 main.go:141] libmachine: (no-preload-673754) Calling .GetSSHHostname
	I0731 18:08:20.298467   73479 main.go:141] libmachine: (no-preload-673754) DBG | domain no-preload-673754 has defined MAC address 52:54:00:5a:ec:78 in network mk-no-preload-673754
	I0731 18:08:20.298771   73479 main.go:141] libmachine: (no-preload-673754) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:ec:78", ip: ""} in network mk-no-preload-673754: {Iface:virbr2 ExpiryTime:2024-07-31 19:08:12 +0000 UTC Type:0 Mac:52:54:00:5a:ec:78 Iaid: IPaddr:192.168.61.126 Prefix:24 Hostname:no-preload-673754 Clientid:01:52:54:00:5a:ec:78}
	I0731 18:08:20.298815   73479 main.go:141] libmachine: (no-preload-673754) DBG | domain no-preload-673754 has defined IP address 192.168.61.126 and MAC address 52:54:00:5a:ec:78 in network mk-no-preload-673754
	I0731 18:08:20.298897   73479 main.go:141] libmachine: (no-preload-673754) Calling .GetSSHPort
	I0731 18:08:20.299094   73479 main.go:141] libmachine: (no-preload-673754) Calling .GetSSHKeyPath
	I0731 18:08:20.299276   73479 main.go:141] libmachine: (no-preload-673754) Calling .GetSSHKeyPath
	I0731 18:08:20.299462   73479 main.go:141] libmachine: (no-preload-673754) Calling .GetSSHUsername
	I0731 18:08:20.299652   73479 main.go:141] libmachine: Using SSH client type: native
	I0731 18:08:20.299806   73479 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.126 22 <nil> <nil>}
	I0731 18:08:20.299817   73479 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-673754 && echo "no-preload-673754" | sudo tee /etc/hostname
	I0731 18:08:20.424901   73479 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-673754
	
	I0731 18:08:20.424951   73479 main.go:141] libmachine: (no-preload-673754) Calling .GetSSHHostname
	I0731 18:08:20.427679   73479 main.go:141] libmachine: (no-preload-673754) DBG | domain no-preload-673754 has defined MAC address 52:54:00:5a:ec:78 in network mk-no-preload-673754
	I0731 18:08:20.428049   73479 main.go:141] libmachine: (no-preload-673754) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:ec:78", ip: ""} in network mk-no-preload-673754: {Iface:virbr2 ExpiryTime:2024-07-31 19:08:12 +0000 UTC Type:0 Mac:52:54:00:5a:ec:78 Iaid: IPaddr:192.168.61.126 Prefix:24 Hostname:no-preload-673754 Clientid:01:52:54:00:5a:ec:78}
	I0731 18:08:20.428083   73479 main.go:141] libmachine: (no-preload-673754) DBG | domain no-preload-673754 has defined IP address 192.168.61.126 and MAC address 52:54:00:5a:ec:78 in network mk-no-preload-673754
	I0731 18:08:20.428230   73479 main.go:141] libmachine: (no-preload-673754) Calling .GetSSHPort
	I0731 18:08:20.428419   73479 main.go:141] libmachine: (no-preload-673754) Calling .GetSSHKeyPath
	I0731 18:08:20.428601   73479 main.go:141] libmachine: (no-preload-673754) Calling .GetSSHKeyPath
	I0731 18:08:20.428767   73479 main.go:141] libmachine: (no-preload-673754) Calling .GetSSHUsername
	I0731 18:08:20.428965   73479 main.go:141] libmachine: Using SSH client type: native
	I0731 18:08:20.429127   73479 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.126 22 <nil> <nil>}
	I0731 18:08:20.429142   73479 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-673754' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-673754/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-673754' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0731 18:08:20.546853   73479 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0731 18:08:20.546884   73479 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19349-8084/.minikube CaCertPath:/home/jenkins/minikube-integration/19349-8084/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19349-8084/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19349-8084/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19349-8084/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19349-8084/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19349-8084/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19349-8084/.minikube}
	I0731 18:08:20.546938   73479 buildroot.go:174] setting up certificates
	I0731 18:08:20.546955   73479 provision.go:84] configureAuth start
	I0731 18:08:20.546971   73479 main.go:141] libmachine: (no-preload-673754) Calling .GetMachineName
	I0731 18:08:20.547275   73479 main.go:141] libmachine: (no-preload-673754) Calling .GetIP
	I0731 18:08:20.550019   73479 main.go:141] libmachine: (no-preload-673754) DBG | domain no-preload-673754 has defined MAC address 52:54:00:5a:ec:78 in network mk-no-preload-673754
	I0731 18:08:20.550372   73479 main.go:141] libmachine: (no-preload-673754) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:ec:78", ip: ""} in network mk-no-preload-673754: {Iface:virbr2 ExpiryTime:2024-07-31 19:08:12 +0000 UTC Type:0 Mac:52:54:00:5a:ec:78 Iaid: IPaddr:192.168.61.126 Prefix:24 Hostname:no-preload-673754 Clientid:01:52:54:00:5a:ec:78}
	I0731 18:08:20.550400   73479 main.go:141] libmachine: (no-preload-673754) DBG | domain no-preload-673754 has defined IP address 192.168.61.126 and MAC address 52:54:00:5a:ec:78 in network mk-no-preload-673754
	I0731 18:08:20.550525   73479 main.go:141] libmachine: (no-preload-673754) Calling .GetSSHHostname
	I0731 18:08:20.552914   73479 main.go:141] libmachine: (no-preload-673754) DBG | domain no-preload-673754 has defined MAC address 52:54:00:5a:ec:78 in network mk-no-preload-673754
	I0731 18:08:20.553261   73479 main.go:141] libmachine: (no-preload-673754) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:ec:78", ip: ""} in network mk-no-preload-673754: {Iface:virbr2 ExpiryTime:2024-07-31 19:08:12 +0000 UTC Type:0 Mac:52:54:00:5a:ec:78 Iaid: IPaddr:192.168.61.126 Prefix:24 Hostname:no-preload-673754 Clientid:01:52:54:00:5a:ec:78}
	I0731 18:08:20.553290   73479 main.go:141] libmachine: (no-preload-673754) DBG | domain no-preload-673754 has defined IP address 192.168.61.126 and MAC address 52:54:00:5a:ec:78 in network mk-no-preload-673754
	I0731 18:08:20.553416   73479 provision.go:143] copyHostCerts
	I0731 18:08:20.553479   73479 exec_runner.go:144] found /home/jenkins/minikube-integration/19349-8084/.minikube/ca.pem, removing ...
	I0731 18:08:20.553490   73479 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19349-8084/.minikube/ca.pem
	I0731 18:08:20.553547   73479 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19349-8084/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19349-8084/.minikube/ca.pem (1082 bytes)
	I0731 18:08:20.553675   73479 exec_runner.go:144] found /home/jenkins/minikube-integration/19349-8084/.minikube/cert.pem, removing ...
	I0731 18:08:20.553687   73479 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19349-8084/.minikube/cert.pem
	I0731 18:08:20.553718   73479 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19349-8084/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19349-8084/.minikube/cert.pem (1123 bytes)
	I0731 18:08:20.553796   73479 exec_runner.go:144] found /home/jenkins/minikube-integration/19349-8084/.minikube/key.pem, removing ...
	I0731 18:08:20.553806   73479 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19349-8084/.minikube/key.pem
	I0731 18:08:20.553826   73479 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19349-8084/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19349-8084/.minikube/key.pem (1675 bytes)
	I0731 18:08:20.553883   73479 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19349-8084/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19349-8084/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19349-8084/.minikube/certs/ca-key.pem org=jenkins.no-preload-673754 san=[127.0.0.1 192.168.61.126 localhost minikube no-preload-673754]
	I0731 18:08:20.878891   73479 provision.go:177] copyRemoteCerts
	I0731 18:08:20.878963   73479 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0731 18:08:20.878990   73479 main.go:141] libmachine: (no-preload-673754) Calling .GetSSHHostname
	I0731 18:08:20.881529   73479 main.go:141] libmachine: (no-preload-673754) DBG | domain no-preload-673754 has defined MAC address 52:54:00:5a:ec:78 in network mk-no-preload-673754
	I0731 18:08:20.881868   73479 main.go:141] libmachine: (no-preload-673754) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:ec:78", ip: ""} in network mk-no-preload-673754: {Iface:virbr2 ExpiryTime:2024-07-31 19:08:12 +0000 UTC Type:0 Mac:52:54:00:5a:ec:78 Iaid: IPaddr:192.168.61.126 Prefix:24 Hostname:no-preload-673754 Clientid:01:52:54:00:5a:ec:78}
	I0731 18:08:20.881900   73479 main.go:141] libmachine: (no-preload-673754) DBG | domain no-preload-673754 has defined IP address 192.168.61.126 and MAC address 52:54:00:5a:ec:78 in network mk-no-preload-673754
	I0731 18:08:20.882053   73479 main.go:141] libmachine: (no-preload-673754) Calling .GetSSHPort
	I0731 18:08:20.882245   73479 main.go:141] libmachine: (no-preload-673754) Calling .GetSSHKeyPath
	I0731 18:08:20.882450   73479 main.go:141] libmachine: (no-preload-673754) Calling .GetSSHUsername
	I0731 18:08:20.882617   73479 sshutil.go:53] new ssh client: &{IP:192.168.61.126 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19349-8084/.minikube/machines/no-preload-673754/id_rsa Username:docker}
	I0731 18:08:20.968757   73479 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19349-8084/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0731 18:08:20.992136   73479 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19349-8084/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0731 18:08:21.013768   73479 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19349-8084/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0731 18:08:21.035808   73479 provision.go:87] duration metric: took 488.837788ms to configureAuth
	I0731 18:08:21.035839   73479 buildroot.go:189] setting minikube options for container-runtime
	I0731 18:08:21.036018   73479 config.go:182] Loaded profile config "no-preload-673754": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0-beta.0
	I0731 18:08:21.036099   73479 main.go:141] libmachine: (no-preload-673754) Calling .GetSSHHostname
	I0731 18:08:21.038949   73479 main.go:141] libmachine: (no-preload-673754) DBG | domain no-preload-673754 has defined MAC address 52:54:00:5a:ec:78 in network mk-no-preload-673754
	I0731 18:08:21.039335   73479 main.go:141] libmachine: (no-preload-673754) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:ec:78", ip: ""} in network mk-no-preload-673754: {Iface:virbr2 ExpiryTime:2024-07-31 19:08:12 +0000 UTC Type:0 Mac:52:54:00:5a:ec:78 Iaid: IPaddr:192.168.61.126 Prefix:24 Hostname:no-preload-673754 Clientid:01:52:54:00:5a:ec:78}
	I0731 18:08:21.039363   73479 main.go:141] libmachine: (no-preload-673754) DBG | domain no-preload-673754 has defined IP address 192.168.61.126 and MAC address 52:54:00:5a:ec:78 in network mk-no-preload-673754
	I0731 18:08:21.039556   73479 main.go:141] libmachine: (no-preload-673754) Calling .GetSSHPort
	I0731 18:08:21.039756   73479 main.go:141] libmachine: (no-preload-673754) Calling .GetSSHKeyPath
	I0731 18:08:21.039960   73479 main.go:141] libmachine: (no-preload-673754) Calling .GetSSHKeyPath
	I0731 18:08:21.040071   73479 main.go:141] libmachine: (no-preload-673754) Calling .GetSSHUsername
	I0731 18:08:21.040219   73479 main.go:141] libmachine: Using SSH client type: native
	I0731 18:08:21.040380   73479 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.126 22 <nil> <nil>}
	I0731 18:08:21.040396   73479 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0731 18:08:21.319623   73479 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0731 18:08:21.319657   73479 machine.go:97] duration metric: took 1.139776085s to provisionDockerMachine
	I0731 18:08:21.319672   73479 start.go:293] postStartSetup for "no-preload-673754" (driver="kvm2")
	I0731 18:08:21.319689   73479 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0731 18:08:21.319710   73479 main.go:141] libmachine: (no-preload-673754) Calling .DriverName
	I0731 18:08:21.320049   73479 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0731 18:08:21.320076   73479 main.go:141] libmachine: (no-preload-673754) Calling .GetSSHHostname
	I0731 18:08:21.322963   73479 main.go:141] libmachine: (no-preload-673754) DBG | domain no-preload-673754 has defined MAC address 52:54:00:5a:ec:78 in network mk-no-preload-673754
	I0731 18:08:21.323436   73479 main.go:141] libmachine: (no-preload-673754) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:ec:78", ip: ""} in network mk-no-preload-673754: {Iface:virbr2 ExpiryTime:2024-07-31 19:08:12 +0000 UTC Type:0 Mac:52:54:00:5a:ec:78 Iaid: IPaddr:192.168.61.126 Prefix:24 Hostname:no-preload-673754 Clientid:01:52:54:00:5a:ec:78}
	I0731 18:08:21.323465   73479 main.go:141] libmachine: (no-preload-673754) DBG | domain no-preload-673754 has defined IP address 192.168.61.126 and MAC address 52:54:00:5a:ec:78 in network mk-no-preload-673754
	I0731 18:08:21.323634   73479 main.go:141] libmachine: (no-preload-673754) Calling .GetSSHPort
	I0731 18:08:21.323809   73479 main.go:141] libmachine: (no-preload-673754) Calling .GetSSHKeyPath
	I0731 18:08:21.324003   73479 main.go:141] libmachine: (no-preload-673754) Calling .GetSSHUsername
	I0731 18:08:21.324127   73479 sshutil.go:53] new ssh client: &{IP:192.168.61.126 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19349-8084/.minikube/machines/no-preload-673754/id_rsa Username:docker}
	I0731 18:08:21.409076   73479 ssh_runner.go:195] Run: cat /etc/os-release
	I0731 18:08:21.412884   73479 info.go:137] Remote host: Buildroot 2023.02.9
	I0731 18:08:21.412917   73479 filesync.go:126] Scanning /home/jenkins/minikube-integration/19349-8084/.minikube/addons for local assets ...
	I0731 18:08:21.413020   73479 filesync.go:126] Scanning /home/jenkins/minikube-integration/19349-8084/.minikube/files for local assets ...
	I0731 18:08:21.413108   73479 filesync.go:149] local asset: /home/jenkins/minikube-integration/19349-8084/.minikube/files/etc/ssl/certs/152592.pem -> 152592.pem in /etc/ssl/certs
	I0731 18:08:21.413233   73479 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0731 18:08:21.421812   73479 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19349-8084/.minikube/files/etc/ssl/certs/152592.pem --> /etc/ssl/certs/152592.pem (1708 bytes)
	I0731 18:08:21.447124   73479 start.go:296] duration metric: took 127.423498ms for postStartSetup
	I0731 18:08:21.447196   73479 fix.go:56] duration metric: took 20.511108968s for fixHost
	I0731 18:08:21.447226   73479 main.go:141] libmachine: (no-preload-673754) Calling .GetSSHHostname
	I0731 18:08:21.450022   73479 main.go:141] libmachine: (no-preload-673754) DBG | domain no-preload-673754 has defined MAC address 52:54:00:5a:ec:78 in network mk-no-preload-673754
	I0731 18:08:21.450408   73479 main.go:141] libmachine: (no-preload-673754) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:ec:78", ip: ""} in network mk-no-preload-673754: {Iface:virbr2 ExpiryTime:2024-07-31 19:08:12 +0000 UTC Type:0 Mac:52:54:00:5a:ec:78 Iaid: IPaddr:192.168.61.126 Prefix:24 Hostname:no-preload-673754 Clientid:01:52:54:00:5a:ec:78}
	I0731 18:08:21.450431   73479 main.go:141] libmachine: (no-preload-673754) DBG | domain no-preload-673754 has defined IP address 192.168.61.126 and MAC address 52:54:00:5a:ec:78 in network mk-no-preload-673754
	I0731 18:08:21.450628   73479 main.go:141] libmachine: (no-preload-673754) Calling .GetSSHPort
	I0731 18:08:21.450846   73479 main.go:141] libmachine: (no-preload-673754) Calling .GetSSHKeyPath
	I0731 18:08:21.451009   73479 main.go:141] libmachine: (no-preload-673754) Calling .GetSSHKeyPath
	I0731 18:08:21.451161   73479 main.go:141] libmachine: (no-preload-673754) Calling .GetSSHUsername
	I0731 18:08:21.451327   73479 main.go:141] libmachine: Using SSH client type: native
	I0731 18:08:21.451527   73479 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.126 22 <nil> <nil>}
	I0731 18:08:21.451541   73479 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0731 18:08:21.563653   73479 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722449301.536356236
	
	I0731 18:08:21.563672   73479 fix.go:216] guest clock: 1722449301.536356236
	I0731 18:08:21.563679   73479 fix.go:229] Guest: 2024-07-31 18:08:21.536356236 +0000 UTC Remote: 2024-07-31 18:08:21.447206545 +0000 UTC m=+354.621330953 (delta=89.149691ms)
	I0731 18:08:21.563702   73479 fix.go:200] guest clock delta is within tolerance: 89.149691ms
	I0731 18:08:21.563709   73479 start.go:83] releasing machines lock for "no-preload-673754", held for 20.627680156s
	I0731 18:08:21.563734   73479 main.go:141] libmachine: (no-preload-673754) Calling .DriverName
	I0731 18:08:21.563992   73479 main.go:141] libmachine: (no-preload-673754) Calling .GetIP
	I0731 18:08:21.566875   73479 main.go:141] libmachine: (no-preload-673754) DBG | domain no-preload-673754 has defined MAC address 52:54:00:5a:ec:78 in network mk-no-preload-673754
	I0731 18:08:21.567265   73479 main.go:141] libmachine: (no-preload-673754) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:ec:78", ip: ""} in network mk-no-preload-673754: {Iface:virbr2 ExpiryTime:2024-07-31 19:08:12 +0000 UTC Type:0 Mac:52:54:00:5a:ec:78 Iaid: IPaddr:192.168.61.126 Prefix:24 Hostname:no-preload-673754 Clientid:01:52:54:00:5a:ec:78}
	I0731 18:08:21.567290   73479 main.go:141] libmachine: (no-preload-673754) DBG | domain no-preload-673754 has defined IP address 192.168.61.126 and MAC address 52:54:00:5a:ec:78 in network mk-no-preload-673754
	I0731 18:08:21.567505   73479 main.go:141] libmachine: (no-preload-673754) Calling .DriverName
	I0731 18:08:21.568045   73479 main.go:141] libmachine: (no-preload-673754) Calling .DriverName
	I0731 18:08:21.568237   73479 main.go:141] libmachine: (no-preload-673754) Calling .DriverName
	I0731 18:08:21.568368   73479 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0731 18:08:21.568408   73479 main.go:141] libmachine: (no-preload-673754) Calling .GetSSHHostname
	I0731 18:08:21.568465   73479 ssh_runner.go:195] Run: cat /version.json
	I0731 18:08:21.568492   73479 main.go:141] libmachine: (no-preload-673754) Calling .GetSSHHostname
	I0731 18:08:21.571178   73479 main.go:141] libmachine: (no-preload-673754) DBG | domain no-preload-673754 has defined MAC address 52:54:00:5a:ec:78 in network mk-no-preload-673754
	I0731 18:08:21.571554   73479 main.go:141] libmachine: (no-preload-673754) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:ec:78", ip: ""} in network mk-no-preload-673754: {Iface:virbr2 ExpiryTime:2024-07-31 19:08:12 +0000 UTC Type:0 Mac:52:54:00:5a:ec:78 Iaid: IPaddr:192.168.61.126 Prefix:24 Hostname:no-preload-673754 Clientid:01:52:54:00:5a:ec:78}
	I0731 18:08:21.571603   73479 main.go:141] libmachine: (no-preload-673754) DBG | domain no-preload-673754 has defined IP address 192.168.61.126 and MAC address 52:54:00:5a:ec:78 in network mk-no-preload-673754
	I0731 18:08:21.571653   73479 main.go:141] libmachine: (no-preload-673754) DBG | domain no-preload-673754 has defined MAC address 52:54:00:5a:ec:78 in network mk-no-preload-673754
	I0731 18:08:21.571729   73479 main.go:141] libmachine: (no-preload-673754) Calling .GetSSHPort
	I0731 18:08:21.571902   73479 main.go:141] libmachine: (no-preload-673754) Calling .GetSSHKeyPath
	I0731 18:08:21.572071   73479 main.go:141] libmachine: (no-preload-673754) Calling .GetSSHUsername
	I0731 18:08:21.572213   73479 main.go:141] libmachine: (no-preload-673754) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:ec:78", ip: ""} in network mk-no-preload-673754: {Iface:virbr2 ExpiryTime:2024-07-31 19:08:12 +0000 UTC Type:0 Mac:52:54:00:5a:ec:78 Iaid: IPaddr:192.168.61.126 Prefix:24 Hostname:no-preload-673754 Clientid:01:52:54:00:5a:ec:78}
	I0731 18:08:21.572240   73479 main.go:141] libmachine: (no-preload-673754) DBG | domain no-preload-673754 has defined IP address 192.168.61.126 and MAC address 52:54:00:5a:ec:78 in network mk-no-preload-673754
	I0731 18:08:21.572256   73479 sshutil.go:53] new ssh client: &{IP:192.168.61.126 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19349-8084/.minikube/machines/no-preload-673754/id_rsa Username:docker}
	I0731 18:08:21.572373   73479 main.go:141] libmachine: (no-preload-673754) Calling .GetSSHPort
	I0731 18:08:21.572505   73479 main.go:141] libmachine: (no-preload-673754) Calling .GetSSHKeyPath
	I0731 18:08:21.572609   73479 main.go:141] libmachine: (no-preload-673754) Calling .GetSSHUsername
	I0731 18:08:21.572739   73479 sshutil.go:53] new ssh client: &{IP:192.168.61.126 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19349-8084/.minikube/machines/no-preload-673754/id_rsa Username:docker}
	I0731 18:08:21.682894   73479 ssh_runner.go:195] Run: systemctl --version
	I0731 18:08:21.689126   73479 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0731 18:08:21.829572   73479 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0731 18:08:21.836507   73479 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0731 18:08:21.836589   73479 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0731 18:08:21.855127   73479 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0731 18:08:21.855176   73479 start.go:495] detecting cgroup driver to use...
	I0731 18:08:21.855256   73479 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0731 18:08:21.870886   73479 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0731 18:08:21.884762   73479 docker.go:217] disabling cri-docker service (if available) ...
	I0731 18:08:21.884833   73479 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0731 18:08:21.899480   73479 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0731 18:08:21.912438   73479 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0731 18:08:22.024528   73479 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0731 18:08:22.177400   73479 docker.go:233] disabling docker service ...
	I0731 18:08:22.177500   73479 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0731 18:08:22.191225   73479 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0731 18:08:22.204004   73479 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0731 18:08:22.327408   73479 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0731 18:08:22.449116   73479 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0731 18:08:22.463031   73479 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0731 18:08:22.481864   73479 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0731 18:08:22.481935   73479 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 18:08:22.491687   73479 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0731 18:08:22.491768   73479 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 18:08:22.501686   73479 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 18:08:22.511207   73479 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 18:08:22.521390   73479 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0731 18:08:22.531355   73479 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 18:08:22.541544   73479 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 18:08:22.556829   73479 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 18:08:22.566012   73479 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0731 18:08:22.574865   73479 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0731 18:08:22.574938   73479 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0731 18:08:22.588125   73479 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0731 18:08:22.597257   73479 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0731 18:08:22.716379   73479 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0731 18:08:22.855465   73479 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0731 18:08:22.855526   73479 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0731 18:08:22.860016   73479 start.go:563] Will wait 60s for crictl version
	I0731 18:08:22.860088   73479 ssh_runner.go:195] Run: which crictl
	I0731 18:08:22.863395   73479 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0731 18:08:22.904523   73479 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0731 18:08:22.904611   73479 ssh_runner.go:195] Run: crio --version
	I0731 18:08:22.934571   73479 ssh_runner.go:195] Run: crio --version
	I0731 18:08:22.965884   73479 out.go:177] * Preparing Kubernetes v1.31.0-beta.0 on CRI-O 1.29.1 ...
	I0731 18:08:19.771740   73800 pod_ready.go:102] pod "metrics-server-569cc877fc-fzxrw" in "kube-system" namespace has status "Ready":"False"
	I0731 18:08:22.272491   73800 pod_ready.go:102] pod "metrics-server-569cc877fc-fzxrw" in "kube-system" namespace has status "Ready":"False"
	I0731 18:08:20.478866   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:08:20.978311   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:08:21.478333   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:08:21.978289   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:08:22.479138   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:08:22.978406   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:08:23.478138   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:08:23.979189   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:08:24.478688   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:08:24.978795   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:08:22.779215   73696 pod_ready.go:102] pod "metrics-server-569cc877fc-64hp4" in "kube-system" namespace has status "Ready":"False"
	I0731 18:08:24.782366   73696 pod_ready.go:102] pod "metrics-server-569cc877fc-64hp4" in "kube-system" namespace has status "Ready":"False"
	I0731 18:08:22.967087   73479 main.go:141] libmachine: (no-preload-673754) Calling .GetIP
	I0731 18:08:22.969442   73479 main.go:141] libmachine: (no-preload-673754) DBG | domain no-preload-673754 has defined MAC address 52:54:00:5a:ec:78 in network mk-no-preload-673754
	I0731 18:08:22.969722   73479 main.go:141] libmachine: (no-preload-673754) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:ec:78", ip: ""} in network mk-no-preload-673754: {Iface:virbr2 ExpiryTime:2024-07-31 19:08:12 +0000 UTC Type:0 Mac:52:54:00:5a:ec:78 Iaid: IPaddr:192.168.61.126 Prefix:24 Hostname:no-preload-673754 Clientid:01:52:54:00:5a:ec:78}
	I0731 18:08:22.969746   73479 main.go:141] libmachine: (no-preload-673754) DBG | domain no-preload-673754 has defined IP address 192.168.61.126 and MAC address 52:54:00:5a:ec:78 in network mk-no-preload-673754
	I0731 18:08:22.970005   73479 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0731 18:08:22.974229   73479 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0731 18:08:22.986153   73479 kubeadm.go:883] updating cluster {Name:no-preload-673754 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1
.31.0-beta.0 ClusterName:no-preload-673754 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.126 Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:2628
0h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0731 18:08:22.986292   73479 preload.go:131] Checking if preload exists for k8s version v1.31.0-beta.0 and runtime crio
	I0731 18:08:22.986321   73479 ssh_runner.go:195] Run: sudo crictl images --output json
	I0731 18:08:23.020129   73479 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.0-beta.0". assuming images are not preloaded.
	I0731 18:08:23.020153   73479 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.31.0-beta.0 registry.k8s.io/kube-controller-manager:v1.31.0-beta.0 registry.k8s.io/kube-scheduler:v1.31.0-beta.0 registry.k8s.io/kube-proxy:v1.31.0-beta.0 registry.k8s.io/pause:3.10 registry.k8s.io/etcd:3.5.14-0 registry.k8s.io/coredns/coredns:v1.11.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0731 18:08:23.020215   73479 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0731 18:08:23.020234   73479 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.31.0-beta.0
	I0731 18:08:23.020266   73479 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.31.0-beta.0
	I0731 18:08:23.020322   73479 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.31.0-beta.0
	I0731 18:08:23.020337   73479 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.14-0
	I0731 18:08:23.020390   73479 image.go:134] retrieving image: registry.k8s.io/pause:3.10
	I0731 18:08:23.020431   73479 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.11.1
	I0731 18:08:23.020457   73479 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.31.0-beta.0
	I0731 18:08:23.021820   73479 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0731 18:08:23.021821   73479 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.14-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.14-0
	I0731 18:08:23.021820   73479 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.31.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.31.0-beta.0
	I0731 18:08:23.021901   73479 image.go:177] daemon lookup for registry.k8s.io/pause:3.10: Error response from daemon: No such image: registry.k8s.io/pause:3.10
	I0731 18:08:23.021978   73479 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.31.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.31.0-beta.0
	I0731 18:08:23.021833   73479 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.31.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.31.0-beta.0
	I0731 18:08:23.021826   73479 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.31.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.31.0-beta.0
	I0731 18:08:23.021821   73479 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.1
	I0731 18:08:23.254700   73479 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.31.0-beta.0
	I0731 18:08:23.268999   73479 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.11.1
	I0731 18:08:23.271466   73479 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.10
	I0731 18:08:23.272011   73479 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.31.0-beta.0
	I0731 18:08:23.275695   73479 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.31.0-beta.0
	I0731 18:08:23.298363   73479 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.31.0-beta.0
	I0731 18:08:23.320031   73479 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.14-0
	I0731 18:08:23.340960   73479 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.31.0-beta.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.31.0-beta.0" does not exist at hash "f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938" in container runtime
	I0731 18:08:23.341004   73479 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.31.0-beta.0
	I0731 18:08:23.341050   73479 ssh_runner.go:195] Run: which crictl
	I0731 18:08:23.381391   73479 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.11.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.11.1" does not exist at hash "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4" in container runtime
	I0731 18:08:23.381441   73479 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.11.1
	I0731 18:08:23.381511   73479 ssh_runner.go:195] Run: which crictl
	I0731 18:08:23.508590   73479 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.31.0-beta.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.31.0-beta.0" does not exist at hash "63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5" in container runtime
	I0731 18:08:23.508650   73479 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.31.0-beta.0
	I0731 18:08:23.508676   73479 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.31.0-beta.0" needs transfer: "registry.k8s.io/kube-proxy:v1.31.0-beta.0" does not exist at hash "c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899" in container runtime
	I0731 18:08:23.508702   73479 ssh_runner.go:195] Run: which crictl
	I0731 18:08:23.508716   73479 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.31.0-beta.0
	I0731 18:08:23.508729   73479 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.31.0-beta.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.31.0-beta.0" does not exist at hash "d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b" in container runtime
	I0731 18:08:23.508751   73479 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.31.0-beta.0
	I0731 18:08:23.508772   73479 ssh_runner.go:195] Run: which crictl
	I0731 18:08:23.508781   73479 ssh_runner.go:195] Run: which crictl
	I0731 18:08:23.508800   73479 cache_images.go:116] "registry.k8s.io/etcd:3.5.14-0" needs transfer: "registry.k8s.io/etcd:3.5.14-0" does not exist at hash "cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa" in container runtime
	I0731 18:08:23.508830   73479 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.14-0
	I0731 18:08:23.508838   73479 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.0-beta.0
	I0731 18:08:23.508860   73479 ssh_runner.go:195] Run: which crictl
	I0731 18:08:23.508879   73479 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1
	I0731 18:08:23.519809   73479 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.0-beta.0
	I0731 18:08:23.519834   73479 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.0-beta.0
	I0731 18:08:23.519907   73479 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.0-beta.0
	I0731 18:08:23.595474   73479 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.14-0
	I0731 18:08:23.595484   73479 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19349-8084/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.0-beta.0
	I0731 18:08:23.595590   73479 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19349-8084/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1
	I0731 18:08:23.595628   73479 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-apiserver_v1.31.0-beta.0
	I0731 18:08:23.595683   73479 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/coredns_v1.11.1
	I0731 18:08:23.622893   73479 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19349-8084/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.0-beta.0
	I0731 18:08:23.623024   73479 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-proxy_v1.31.0-beta.0
	I0731 18:08:23.629140   73479 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19349-8084/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.0-beta.0
	I0731 18:08:23.629173   73479 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19349-8084/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.0-beta.0
	I0731 18:08:23.629242   73479 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-scheduler_v1.31.0-beta.0
	I0731 18:08:23.629246   73479 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-controller-manager_v1.31.0-beta.0
	I0731 18:08:23.659281   73479 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19349-8084/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.14-0
	I0731 18:08:23.659321   73479 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.31.0-beta.0 (exists)
	I0731 18:08:23.659336   73479 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.31.0-beta.0
	I0731 18:08:23.659379   73479 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.0-beta.0
	I0731 18:08:23.659385   73479 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.11.1 (exists)
	I0731 18:08:23.659425   73479 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.31.0-beta.0 (exists)
	I0731 18:08:23.659381   73479 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/etcd_3.5.14-0
	I0731 18:08:23.659465   73479 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.31.0-beta.0 (exists)
	I0731 18:08:23.659494   73479 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.31.0-beta.0 (exists)
	I0731 18:08:23.857129   73479 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0731 18:08:26.136212   73479 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.0-beta.0: (2.476802709s)
	I0731 18:08:26.136251   73479 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19349-8084/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.0-beta.0 from cache
	I0731 18:08:26.136264   73479 ssh_runner.go:235] Completed: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/etcd_3.5.14-0: (2.476807388s)
	I0731 18:08:26.136276   73479 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.11.1
	I0731 18:08:26.136293   73479 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.14-0 (exists)
	I0731 18:08:26.136329   73479 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1
	I0731 18:08:26.136366   73479 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5: (2.279204335s)
	I0731 18:08:26.136423   73479 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I0731 18:08:26.136474   73479 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0731 18:08:26.136521   73479 ssh_runner.go:195] Run: which crictl
	I0731 18:08:24.770974   73800 pod_ready.go:102] pod "metrics-server-569cc877fc-fzxrw" in "kube-system" namespace has status "Ready":"False"
	I0731 18:08:26.771954   73800 pod_ready.go:102] pod "metrics-server-569cc877fc-fzxrw" in "kube-system" namespace has status "Ready":"False"
	I0731 18:08:29.274931   73800 pod_ready.go:102] pod "metrics-server-569cc877fc-fzxrw" in "kube-system" namespace has status "Ready":"False"
	I0731 18:08:25.478432   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:08:25.978823   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:08:26.478416   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:08:26.979075   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:08:27.478228   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:08:27.978970   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:08:28.478319   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:08:28.979028   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:08:29.479060   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:08:29.978544   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:08:27.278482   73696 pod_ready.go:102] pod "metrics-server-569cc877fc-64hp4" in "kube-system" namespace has status "Ready":"False"
	I0731 18:08:29.279820   73696 pod_ready.go:102] pod "metrics-server-569cc877fc-64hp4" in "kube-system" namespace has status "Ready":"False"
	I0731 18:08:27.993828   73479 ssh_runner.go:235] Completed: which crictl: (1.857279777s)
	I0731 18:08:27.993908   73479 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0731 18:08:27.993918   73479 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1: (1.857561411s)
	I0731 18:08:27.993947   73479 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19349-8084/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1 from cache
	I0731 18:08:27.993981   73479 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.31.0-beta.0
	I0731 18:08:27.994029   73479 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.0-beta.0
	I0731 18:08:28.037163   73479 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19349-8084/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0731 18:08:28.037288   73479 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/storage-provisioner_v5
	I0731 18:08:29.880343   73479 ssh_runner.go:235] Completed: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/storage-provisioner_v5: (1.843037657s)
	I0731 18:08:29.880392   73479 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I0731 18:08:29.880339   73479 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.0-beta.0: (1.886261639s)
	I0731 18:08:29.880412   73479 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19349-8084/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.0-beta.0 from cache
	I0731 18:08:29.880442   73479 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.31.0-beta.0
	I0731 18:08:29.880509   73479 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.0-beta.0
	I0731 18:08:31.229448   73479 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.0-beta.0: (1.348909634s)
	I0731 18:08:31.229478   73479 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19349-8084/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.0-beta.0 from cache
	I0731 18:08:31.229512   73479 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.31.0-beta.0
	I0731 18:08:31.229575   73479 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.0-beta.0
	I0731 18:08:31.771695   73800 pod_ready.go:102] pod "metrics-server-569cc877fc-fzxrw" in "kube-system" namespace has status "Ready":"False"
	I0731 18:08:34.271817   73800 pod_ready.go:102] pod "metrics-server-569cc877fc-fzxrw" in "kube-system" namespace has status "Ready":"False"
	I0731 18:08:30.478387   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:08:30.978443   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:08:31.478484   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:08:31.979231   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:08:32.478928   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:08:32.978790   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:08:33.478426   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:08:33.978839   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:08:34.478411   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:08:34.978378   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:08:31.280261   73696 pod_ready.go:102] pod "metrics-server-569cc877fc-64hp4" in "kube-system" namespace has status "Ready":"False"
	I0731 18:08:33.780411   73696 pod_ready.go:102] pod "metrics-server-569cc877fc-64hp4" in "kube-system" namespace has status "Ready":"False"
	I0731 18:08:35.783181   73696 pod_ready.go:102] pod "metrics-server-569cc877fc-64hp4" in "kube-system" namespace has status "Ready":"False"
	I0731 18:08:33.084098   73479 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.0-beta.0: (1.854499641s)
	I0731 18:08:33.084136   73479 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19349-8084/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.0-beta.0 from cache
	I0731 18:08:33.084175   73479 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.5.14-0
	I0731 18:08:33.084255   73479 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.14-0
	I0731 18:08:36.378466   73479 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.14-0: (3.294181026s)
	I0731 18:08:36.378501   73479 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19349-8084/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.14-0 from cache
	I0731 18:08:36.378530   73479 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0731 18:08:36.378575   73479 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I0731 18:08:36.772963   73800 pod_ready.go:102] pod "metrics-server-569cc877fc-fzxrw" in "kube-system" namespace has status "Ready":"False"
	I0731 18:08:39.270915   73800 pod_ready.go:102] pod "metrics-server-569cc877fc-fzxrw" in "kube-system" namespace has status "Ready":"False"
	I0731 18:08:35.478287   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:08:35.978546   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:08:36.479138   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:08:36.979173   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:08:37.478319   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:08:37.978768   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:08:38.479161   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:08:38.979129   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:08:39.478128   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:08:39.979147   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:08:38.278970   73696 pod_ready.go:102] pod "metrics-server-569cc877fc-64hp4" in "kube-system" namespace has status "Ready":"False"
	I0731 18:08:40.279298   73696 pod_ready.go:102] pod "metrics-server-569cc877fc-64hp4" in "kube-system" namespace has status "Ready":"False"
	I0731 18:08:37.022757   73479 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19349-8084/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0731 18:08:37.022807   73479 cache_images.go:123] Successfully loaded all cached images
	I0731 18:08:37.022815   73479 cache_images.go:92] duration metric: took 14.002647196s to LoadCachedImages
	I0731 18:08:37.022829   73479 kubeadm.go:934] updating node { 192.168.61.126 8443 v1.31.0-beta.0 crio true true} ...
	I0731 18:08:37.022954   73479 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-673754 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.126
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0-beta.0 ClusterName:no-preload-673754 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0731 18:08:37.023035   73479 ssh_runner.go:195] Run: crio config
	I0731 18:08:37.064803   73479 cni.go:84] Creating CNI manager for ""
	I0731 18:08:37.064825   73479 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0731 18:08:37.064834   73479 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0731 18:08:37.064856   73479 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.126 APIServerPort:8443 KubernetesVersion:v1.31.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-673754 NodeName:no-preload-673754 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.126"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.126 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt Stati
cPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0731 18:08:37.065028   73479 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.126
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-673754"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.126
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.126"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0731 18:08:37.065108   73479 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0-beta.0
	I0731 18:08:37.077141   73479 binaries.go:44] Found k8s binaries, skipping transfer
	I0731 18:08:37.077215   73479 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0731 18:08:37.086553   73479 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (324 bytes)
	I0731 18:08:37.102646   73479 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I0731 18:08:37.118113   73479 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2168 bytes)
	I0731 18:08:37.134702   73479 ssh_runner.go:195] Run: grep 192.168.61.126	control-plane.minikube.internal$ /etc/hosts
	I0731 18:08:37.138593   73479 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.126	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0731 18:08:37.151319   73479 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0731 18:08:37.270019   73479 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0731 18:08:37.287378   73479 certs.go:68] Setting up /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/no-preload-673754 for IP: 192.168.61.126
	I0731 18:08:37.287400   73479 certs.go:194] generating shared ca certs ...
	I0731 18:08:37.287413   73479 certs.go:226] acquiring lock for ca certs: {Name:mk49eed439085f9c30746706460ce213570d1997 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 18:08:37.287540   73479 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19349-8084/.minikube/ca.key
	I0731 18:08:37.287577   73479 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19349-8084/.minikube/proxy-client-ca.key
	I0731 18:08:37.287584   73479 certs.go:256] generating profile certs ...
	I0731 18:08:37.287692   73479 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/no-preload-673754/client.key
	I0731 18:08:37.287761   73479 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/no-preload-673754/apiserver.key.3fff3ffc
	I0731 18:08:37.287803   73479 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/no-preload-673754/proxy-client.key
	I0731 18:08:37.287938   73479 certs.go:484] found cert: /home/jenkins/minikube-integration/19349-8084/.minikube/certs/15259.pem (1338 bytes)
	W0731 18:08:37.287973   73479 certs.go:480] ignoring /home/jenkins/minikube-integration/19349-8084/.minikube/certs/15259_empty.pem, impossibly tiny 0 bytes
	I0731 18:08:37.287985   73479 certs.go:484] found cert: /home/jenkins/minikube-integration/19349-8084/.minikube/certs/ca-key.pem (1675 bytes)
	I0731 18:08:37.288020   73479 certs.go:484] found cert: /home/jenkins/minikube-integration/19349-8084/.minikube/certs/ca.pem (1082 bytes)
	I0731 18:08:37.288049   73479 certs.go:484] found cert: /home/jenkins/minikube-integration/19349-8084/.minikube/certs/cert.pem (1123 bytes)
	I0731 18:08:37.288079   73479 certs.go:484] found cert: /home/jenkins/minikube-integration/19349-8084/.minikube/certs/key.pem (1675 bytes)
	I0731 18:08:37.288143   73479 certs.go:484] found cert: /home/jenkins/minikube-integration/19349-8084/.minikube/files/etc/ssl/certs/152592.pem (1708 bytes)
	I0731 18:08:37.288831   73479 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19349-8084/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0731 18:08:37.334317   73479 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19349-8084/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0731 18:08:37.370553   73479 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19349-8084/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0731 18:08:37.403436   73479 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19349-8084/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0731 18:08:37.449133   73479 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/no-preload-673754/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0731 18:08:37.486169   73479 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/no-preload-673754/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0731 18:08:37.517241   73479 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/no-preload-673754/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0731 18:08:37.541089   73479 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/no-preload-673754/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0731 18:08:37.563068   73479 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19349-8084/.minikube/certs/15259.pem --> /usr/share/ca-certificates/15259.pem (1338 bytes)
	I0731 18:08:37.585396   73479 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19349-8084/.minikube/files/etc/ssl/certs/152592.pem --> /usr/share/ca-certificates/152592.pem (1708 bytes)
	I0731 18:08:37.608142   73479 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19349-8084/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0731 18:08:37.630178   73479 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0731 18:08:37.645994   73479 ssh_runner.go:195] Run: openssl version
	I0731 18:08:37.651663   73479 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/15259.pem && ln -fs /usr/share/ca-certificates/15259.pem /etc/ssl/certs/15259.pem"
	I0731 18:08:37.661494   73479 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/15259.pem
	I0731 18:08:37.665519   73479 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 31 16:54 /usr/share/ca-certificates/15259.pem
	I0731 18:08:37.665575   73479 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/15259.pem
	I0731 18:08:37.671143   73479 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/15259.pem /etc/ssl/certs/51391683.0"
	I0731 18:08:37.681076   73479 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/152592.pem && ln -fs /usr/share/ca-certificates/152592.pem /etc/ssl/certs/152592.pem"
	I0731 18:08:37.692253   73479 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/152592.pem
	I0731 18:08:37.696802   73479 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 31 16:54 /usr/share/ca-certificates/152592.pem
	I0731 18:08:37.696850   73479 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/152592.pem
	I0731 18:08:37.702282   73479 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/152592.pem /etc/ssl/certs/3ec20f2e.0"
	I0731 18:08:37.713051   73479 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0731 18:08:37.723644   73479 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0731 18:08:37.728170   73479 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 31 16:42 /usr/share/ca-certificates/minikubeCA.pem
	I0731 18:08:37.728225   73479 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0731 18:08:37.733912   73479 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0731 18:08:37.744004   73479 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0731 18:08:37.748076   73479 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0731 18:08:37.753645   73479 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0731 18:08:37.759077   73479 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0731 18:08:37.764344   73479 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0731 18:08:37.769735   73479 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0731 18:08:37.775894   73479 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
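The `-checkend 86400` runs above ask openssl whether each certificate expires within the next 24 hours. A minimal Go sketch of the same check (path taken from the log; run on the minikube VM itself):

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the PEM certificate at path expires within d,
// i.e. what `openssl x509 -checkend` answers via its exit code.
func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM block in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		panic(err)
	}
	fmt.Println("expires within 24h:", soon)
}
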
	I0731 18:08:37.781699   73479 kubeadm.go:392] StartCluster: {Name:no-preload-673754 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31
.0-beta.0 ClusterName:no-preload-673754 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.126 Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0
m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0731 18:08:37.781771   73479 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0731 18:08:37.781833   73479 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0731 18:08:37.825614   73479 cri.go:89] found id: ""
	I0731 18:08:37.825685   73479 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0731 18:08:37.835584   73479 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0731 18:08:37.835604   73479 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0731 18:08:37.835659   73479 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0731 18:08:37.844529   73479 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0731 18:08:37.845534   73479 kubeconfig.go:125] found "no-preload-673754" server: "https://192.168.61.126:8443"
	I0731 18:08:37.847698   73479 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0731 18:08:37.856360   73479 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.61.126
	I0731 18:08:37.856386   73479 kubeadm.go:1160] stopping kube-system containers ...
	I0731 18:08:37.856396   73479 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0731 18:08:37.856440   73479 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0731 18:08:37.894614   73479 cri.go:89] found id: ""
	I0731 18:08:37.894689   73479 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0731 18:08:37.910921   73479 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0731 18:08:37.919796   73479 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0731 18:08:37.919814   73479 kubeadm.go:157] found existing configuration files:
	
	I0731 18:08:37.919859   73479 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0731 18:08:37.928562   73479 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0731 18:08:37.928617   73479 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0731 18:08:37.937099   73479 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0731 18:08:37.945298   73479 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0731 18:08:37.945378   73479 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0731 18:08:37.953976   73479 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0731 18:08:37.962069   73479 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0731 18:08:37.962119   73479 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0731 18:08:37.970719   73479 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0731 18:08:37.979265   73479 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0731 18:08:37.979318   73479 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0731 18:08:37.988286   73479 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0731 18:08:37.997742   73479 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0731 18:08:38.105503   73479 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0731 18:08:39.403672   73479 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.298131314s)
	I0731 18:08:39.403710   73479 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0731 18:08:39.609739   73479 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0731 18:08:39.677484   73479 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0731 18:08:39.773387   73479 api_server.go:52] waiting for apiserver process to appear ...
	I0731 18:08:39.773469   73479 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:08:40.274185   73479 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:08:40.774562   73479 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:08:40.792346   73479 api_server.go:72] duration metric: took 1.018961231s to wait for apiserver process to appear ...
	I0731 18:08:40.792368   73479 api_server.go:88] waiting for apiserver healthz status ...
	I0731 18:08:40.792384   73479 api_server.go:253] Checking apiserver healthz at https://192.168.61.126:8443/healthz ...
	I0731 18:08:41.271890   73800 pod_ready.go:102] pod "metrics-server-569cc877fc-fzxrw" in "kube-system" namespace has status "Ready":"False"
	I0731 18:08:43.771546   73800 pod_ready.go:102] pod "metrics-server-569cc877fc-fzxrw" in "kube-system" namespace has status "Ready":"False"
	I0731 18:08:43.476911   73479 api_server.go:279] https://192.168.61.126:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0731 18:08:43.476938   73479 api_server.go:103] status: https://192.168.61.126:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0731 18:08:43.476952   73479 api_server.go:253] Checking apiserver healthz at https://192.168.61.126:8443/healthz ...
	I0731 18:08:43.536762   73479 api_server.go:279] https://192.168.61.126:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0731 18:08:43.536794   73479 api_server.go:103] status: https://192.168.61.126:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0731 18:08:43.793157   73479 api_server.go:253] Checking apiserver healthz at https://192.168.61.126:8443/healthz ...
	I0731 18:08:43.798895   73479 api_server.go:279] https://192.168.61.126:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0731 18:08:43.798924   73479 api_server.go:103] status: https://192.168.61.126:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0731 18:08:44.292527   73479 api_server.go:253] Checking apiserver healthz at https://192.168.61.126:8443/healthz ...
	I0731 18:08:44.300596   73479 api_server.go:279] https://192.168.61.126:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0731 18:08:44.300632   73479 api_server.go:103] status: https://192.168.61.126:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0731 18:08:44.793206   73479 api_server.go:253] Checking apiserver healthz at https://192.168.61.126:8443/healthz ...
	I0731 18:08:44.797982   73479 api_server.go:279] https://192.168.61.126:8443/healthz returned 200:
	ok
	I0731 18:08:44.806150   73479 api_server.go:141] control plane version: v1.31.0-beta.0
	I0731 18:08:44.806172   73479 api_server.go:131] duration metric: took 4.013797537s to wait for apiserver health ...
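The health wait above polls https://192.168.61.126:8443/healthz roughly twice a second until it returns 200; the earlier 403 and 500 responses are expected while the RBAC bootstrap roles and priority classes are still being created. A minimal sketch of that polling loop, with InsecureSkipVerify standing in for minikube's own CA handling so it stays self-contained:

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func main() {
	// Endpoint taken from the log above.
	url := "https://192.168.61.126:8443/healthz"
	client := &http.Client{
		Timeout:   2 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	for {
		resp, err := client.Get(url)
		if err == nil {
			fmt.Println("healthz:", resp.StatusCode)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return // 200 "ok", as at 18:08:44.797 above
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
}
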
	I0731 18:08:44.806183   73479 cni.go:84] Creating CNI manager for ""
	I0731 18:08:44.806191   73479 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0731 18:08:44.807774   73479 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0731 18:08:40.478967   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:08:40.978610   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:08:41.479192   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:08:41.978737   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:08:42.479051   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:08:42.978274   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:08:43.478957   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:08:43.978973   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:08:44.478269   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:08:44.978737   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:08:42.778330   73696 pod_ready.go:102] pod "metrics-server-569cc877fc-64hp4" in "kube-system" namespace has status "Ready":"False"
	I0731 18:08:44.779163   73696 pod_ready.go:102] pod "metrics-server-569cc877fc-64hp4" in "kube-system" namespace has status "Ready":"False"
	I0731 18:08:44.809068   73479 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0731 18:08:44.823284   73479 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
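The 496-byte /etc/cni/net.d/1-k8s.conflist itself is not reproduced in the log. The sketch below is only a generic example of the kind of bridge-plus-portmap conflist the bridge CNI option installs, using the clusterCIDR from the kube-proxy config earlier, validated as JSON in Go; the exact contents minikube writes are an assumption here.

package main

import (
	"encoding/json"
	"fmt"
)

// Generic bridge CNI config list; the field names are standard CNI bridge and
// host-local options, but the values are illustrative only.
const conflist = `{
  "cniVersion": "0.3.1",
  "name": "bridge",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "bridge",
      "isDefaultGateway": true,
      "ipMasq": true,
      "hairpinMode": true,
      "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
    },
    { "type": "portmap", "capabilities": { "portMappings": true } }
  ]
}`

func main() {
	var cfg struct {
		Plugins []struct {
			Type string `json:"type"`
		} `json:"plugins"`
	}
	if err := json.Unmarshal([]byte(conflist), &cfg); err != nil {
		panic(err)
	}
	for _, p := range cfg.Plugins {
		fmt.Println("plugin:", p.Type)
	}
}
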
	I0731 18:08:44.878894   73479 system_pods.go:43] waiting for kube-system pods to appear ...
	I0731 18:08:44.892969   73479 system_pods.go:59] 8 kube-system pods found
	I0731 18:08:44.893020   73479 system_pods.go:61] "coredns-5cfdc65f69-k7clq" [e4be77b6-aa7c-45e2-90a1-6a8264fd5101] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0731 18:08:44.893031   73479 system_pods.go:61] "etcd-no-preload-673754" [d64e9194-dd33-4a28-b8c3-d25462f1dae8] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0731 18:08:44.893042   73479 system_pods.go:61] "kube-apiserver-no-preload-673754" [4497614d-51de-4a5f-93dc-1397446fe4c8] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0731 18:08:44.893055   73479 system_pods.go:61] "kube-controller-manager-no-preload-673754" [fe7f714e-d778-49e7-b4ef-44e6efb5bc33] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0731 18:08:44.893067   73479 system_pods.go:61] "kube-proxy-hqxh6" [4fb623c0-4be3-421c-85d6-1d76a90b874f] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0731 18:08:44.893078   73479 system_pods.go:61] "kube-scheduler-no-preload-673754" [558e9b78-a6a9-46b8-9db8-004126359c74] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0731 18:08:44.893088   73479 system_pods.go:61] "metrics-server-78fcd8795b-27pkr" [a63a156b-0446-4bd9-8619-de75edaeb481] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0731 18:08:44.893098   73479 system_pods.go:61] "storage-provisioner" [a9f0ab70-18f4-4fac-b858-a9177077fe29] Running
	I0731 18:08:44.893109   73479 system_pods.go:74] duration metric: took 14.191984ms to wait for pod list to return data ...
	I0731 18:08:44.893120   73479 node_conditions.go:102] verifying NodePressure condition ...
	I0731 18:08:44.908236   73479 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0731 18:08:44.908270   73479 node_conditions.go:123] node cpu capacity is 2
	I0731 18:08:44.908283   73479 node_conditions.go:105] duration metric: took 15.154491ms to run NodePressure ...
	I0731 18:08:44.908307   73479 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0731 18:08:45.248571   73479 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0731 18:08:45.252305   73479 kubeadm.go:739] kubelet initialised
	I0731 18:08:45.252332   73479 kubeadm.go:740] duration metric: took 3.734022ms waiting for restarted kubelet to initialise ...
	I0731 18:08:45.252342   73479 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0731 18:08:45.256748   73479 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5cfdc65f69-k7clq" in "kube-system" namespace to be "Ready" ...
	I0731 18:08:45.261130   73479 pod_ready.go:97] node "no-preload-673754" hosting pod "coredns-5cfdc65f69-k7clq" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-673754" has status "Ready":"False"
	I0731 18:08:45.261149   73479 pod_ready.go:81] duration metric: took 4.373068ms for pod "coredns-5cfdc65f69-k7clq" in "kube-system" namespace to be "Ready" ...
	E0731 18:08:45.261157   73479 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-673754" hosting pod "coredns-5cfdc65f69-k7clq" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-673754" has status "Ready":"False"
	I0731 18:08:45.261162   73479 pod_ready.go:78] waiting up to 4m0s for pod "etcd-no-preload-673754" in "kube-system" namespace to be "Ready" ...
	I0731 18:08:45.265115   73479 pod_ready.go:97] node "no-preload-673754" hosting pod "etcd-no-preload-673754" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-673754" has status "Ready":"False"
	I0731 18:08:45.265135   73479 pod_ready.go:81] duration metric: took 3.965586ms for pod "etcd-no-preload-673754" in "kube-system" namespace to be "Ready" ...
	E0731 18:08:45.265142   73479 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-673754" hosting pod "etcd-no-preload-673754" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-673754" has status "Ready":"False"
	I0731 18:08:45.265147   73479 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-no-preload-673754" in "kube-system" namespace to be "Ready" ...
	I0731 18:08:45.269566   73479 pod_ready.go:97] node "no-preload-673754" hosting pod "kube-apiserver-no-preload-673754" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-673754" has status "Ready":"False"
	I0731 18:08:45.269585   73479 pod_ready.go:81] duration metric: took 4.431367ms for pod "kube-apiserver-no-preload-673754" in "kube-system" namespace to be "Ready" ...
	E0731 18:08:45.269595   73479 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-673754" hosting pod "kube-apiserver-no-preload-673754" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-673754" has status "Ready":"False"
	I0731 18:08:45.269603   73479 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-no-preload-673754" in "kube-system" namespace to be "Ready" ...
	I0731 18:08:45.281026   73479 pod_ready.go:97] node "no-preload-673754" hosting pod "kube-controller-manager-no-preload-673754" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-673754" has status "Ready":"False"
	I0731 18:08:45.281048   73479 pod_ready.go:81] duration metric: took 11.435327ms for pod "kube-controller-manager-no-preload-673754" in "kube-system" namespace to be "Ready" ...
	E0731 18:08:45.281057   73479 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-673754" hosting pod "kube-controller-manager-no-preload-673754" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-673754" has status "Ready":"False"
	I0731 18:08:45.281065   73479 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-hqxh6" in "kube-system" namespace to be "Ready" ...
	I0731 18:08:45.684313   73479 pod_ready.go:97] node "no-preload-673754" hosting pod "kube-proxy-hqxh6" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-673754" has status "Ready":"False"
	I0731 18:08:45.684347   73479 pod_ready.go:81] duration metric: took 403.272559ms for pod "kube-proxy-hqxh6" in "kube-system" namespace to be "Ready" ...
	E0731 18:08:45.684356   73479 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-673754" hosting pod "kube-proxy-hqxh6" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-673754" has status "Ready":"False"
	I0731 18:08:45.684362   73479 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-no-preload-673754" in "kube-system" namespace to be "Ready" ...
	I0731 18:08:46.082388   73479 pod_ready.go:97] node "no-preload-673754" hosting pod "kube-scheduler-no-preload-673754" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-673754" has status "Ready":"False"
	I0731 18:08:46.082419   73479 pod_ready.go:81] duration metric: took 398.048808ms for pod "kube-scheduler-no-preload-673754" in "kube-system" namespace to be "Ready" ...
	E0731 18:08:46.082432   73479 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-673754" hosting pod "kube-scheduler-no-preload-673754" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-673754" has status "Ready":"False"
	I0731 18:08:46.082442   73479 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-78fcd8795b-27pkr" in "kube-system" namespace to be "Ready" ...
	I0731 18:08:46.482445   73479 pod_ready.go:97] node "no-preload-673754" hosting pod "metrics-server-78fcd8795b-27pkr" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-673754" has status "Ready":"False"
	I0731 18:08:46.482472   73479 pod_ready.go:81] duration metric: took 400.02111ms for pod "metrics-server-78fcd8795b-27pkr" in "kube-system" namespace to be "Ready" ...
	E0731 18:08:46.482486   73479 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-673754" hosting pod "metrics-server-78fcd8795b-27pkr" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-673754" has status "Ready":"False"
	I0731 18:08:46.482493   73479 pod_ready.go:38] duration metric: took 1.230141723s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
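The pod_ready waits above skip each system pod as long as the hosting node still reports Ready "False". A minimal client-go sketch of the underlying Ready-condition lookup, assuming the kubeconfig path used elsewhere in this log and the coredns pod name from this run:

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	pod, err := cs.CoreV1().Pods("kube-system").Get(context.TODO(), "coredns-5cfdc65f69-k7clq", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			// Reports "False" until the node itself becomes Ready, as in the log.
			fmt.Println("Ready condition:", c.Status)
		}
	}
}
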
	I0731 18:08:46.482509   73479 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0731 18:08:46.495481   73479 ops.go:34] apiserver oom_adj: -16
	I0731 18:08:46.495502   73479 kubeadm.go:597] duration metric: took 8.65989212s to restartPrimaryControlPlane
	I0731 18:08:46.495513   73479 kubeadm.go:394] duration metric: took 8.71382049s to StartCluster
	I0731 18:08:46.495533   73479 settings.go:142] acquiring lock: {Name:mk8ac8269296bf0b887a00ff74caaad180005f89 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 18:08:46.495615   73479 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19349-8084/kubeconfig
	I0731 18:08:46.497426   73479 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19349-8084/kubeconfig: {Name:mkf2340bc1ada3c683f132aca37228d7588e1fdb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 18:08:46.497742   73479 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.61.126 Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0731 18:08:46.497816   73479 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0731 18:08:46.497911   73479 addons.go:69] Setting storage-provisioner=true in profile "no-preload-673754"
	I0731 18:08:46.497929   73479 addons.go:69] Setting default-storageclass=true in profile "no-preload-673754"
	I0731 18:08:46.497956   73479 addons.go:69] Setting metrics-server=true in profile "no-preload-673754"
	I0731 18:08:46.497973   73479 config.go:182] Loaded profile config "no-preload-673754": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0-beta.0
	I0731 18:08:46.497979   73479 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-673754"
	I0731 18:08:46.497988   73479 addons.go:234] Setting addon metrics-server=true in "no-preload-673754"
	W0731 18:08:46.498008   73479 addons.go:243] addon metrics-server should already be in state true
	I0731 18:08:46.497946   73479 addons.go:234] Setting addon storage-provisioner=true in "no-preload-673754"
	I0731 18:08:46.498056   73479 host.go:66] Checking if "no-preload-673754" exists ...
	W0731 18:08:46.498064   73479 addons.go:243] addon storage-provisioner should already be in state true
	I0731 18:08:46.498109   73479 host.go:66] Checking if "no-preload-673754" exists ...
	I0731 18:08:46.498315   73479 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 18:08:46.498315   73479 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 18:08:46.498333   73479 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 18:08:46.498340   73479 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 18:08:46.498448   73479 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 18:08:46.498470   73479 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 18:08:46.501144   73479 out.go:177] * Verifying Kubernetes components...
	I0731 18:08:46.502755   73479 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0731 18:08:46.514922   73479 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46797
	I0731 18:08:46.514923   73479 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44241
	I0731 18:08:46.515418   73479 main.go:141] libmachine: () Calling .GetVersion
	I0731 18:08:46.515618   73479 main.go:141] libmachine: () Calling .GetVersion
	I0731 18:08:46.515928   73479 main.go:141] libmachine: Using API Version  1
	I0731 18:08:46.515950   73479 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 18:08:46.516066   73479 main.go:141] libmachine: Using API Version  1
	I0731 18:08:46.516089   73479 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 18:08:46.516370   73479 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40219
	I0731 18:08:46.516440   73479 main.go:141] libmachine: () Calling .GetMachineName
	I0731 18:08:46.516663   73479 main.go:141] libmachine: () Calling .GetMachineName
	I0731 18:08:46.516809   73479 main.go:141] libmachine: () Calling .GetVersion
	I0731 18:08:46.516811   73479 main.go:141] libmachine: (no-preload-673754) Calling .GetState
	I0731 18:08:46.517213   73479 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 18:08:46.517247   73479 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 18:08:46.517280   73479 main.go:141] libmachine: Using API Version  1
	I0731 18:08:46.517302   73479 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 18:08:46.517618   73479 main.go:141] libmachine: () Calling .GetMachineName
	I0731 18:08:46.518191   73479 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 18:08:46.518220   73479 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 18:08:46.520511   73479 addons.go:234] Setting addon default-storageclass=true in "no-preload-673754"
	W0731 18:08:46.520536   73479 addons.go:243] addon default-storageclass should already be in state true
	I0731 18:08:46.520566   73479 host.go:66] Checking if "no-preload-673754" exists ...
	I0731 18:08:46.520917   73479 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 18:08:46.520968   73479 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 18:08:46.533349   73479 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38669
	I0731 18:08:46.533802   73479 main.go:141] libmachine: () Calling .GetVersion
	I0731 18:08:46.534250   73479 main.go:141] libmachine: Using API Version  1
	I0731 18:08:46.534272   73479 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 18:08:46.534582   73479 main.go:141] libmachine: () Calling .GetMachineName
	I0731 18:08:46.534720   73479 main.go:141] libmachine: (no-preload-673754) Calling .GetState
	I0731 18:08:46.535556   73479 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46055
	I0731 18:08:46.535979   73479 main.go:141] libmachine: () Calling .GetVersion
	I0731 18:08:46.536648   73479 main.go:141] libmachine: Using API Version  1
	I0731 18:08:46.536667   73479 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 18:08:46.537080   73479 main.go:141] libmachine: () Calling .GetMachineName
	I0731 18:08:46.537331   73479 main.go:141] libmachine: (no-preload-673754) Calling .DriverName
	I0731 18:08:46.537398   73479 main.go:141] libmachine: (no-preload-673754) Calling .GetState
	I0731 18:08:46.538365   73479 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39435
	I0731 18:08:46.538929   73479 main.go:141] libmachine: () Calling .GetVersion
	I0731 18:08:46.539194   73479 main.go:141] libmachine: (no-preload-673754) Calling .DriverName
	I0731 18:08:46.539401   73479 main.go:141] libmachine: Using API Version  1
	I0731 18:08:46.539419   73479 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 18:08:46.539766   73479 main.go:141] libmachine: () Calling .GetMachineName
	I0731 18:08:46.540360   73479 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0731 18:08:46.540447   73479 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 18:08:46.540801   73479 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 18:08:46.541139   73479 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0731 18:08:46.541916   73479 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0731 18:08:46.541932   73479 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0731 18:08:46.541952   73479 main.go:141] libmachine: (no-preload-673754) Calling .GetSSHHostname
	I0731 18:08:46.542506   73479 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0731 18:08:46.542524   73479 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0731 18:08:46.542541   73479 main.go:141] libmachine: (no-preload-673754) Calling .GetSSHHostname
	I0731 18:08:46.545293   73479 main.go:141] libmachine: (no-preload-673754) DBG | domain no-preload-673754 has defined MAC address 52:54:00:5a:ec:78 in network mk-no-preload-673754
	I0731 18:08:46.545631   73479 main.go:141] libmachine: (no-preload-673754) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:ec:78", ip: ""} in network mk-no-preload-673754: {Iface:virbr2 ExpiryTime:2024-07-31 19:08:12 +0000 UTC Type:0 Mac:52:54:00:5a:ec:78 Iaid: IPaddr:192.168.61.126 Prefix:24 Hostname:no-preload-673754 Clientid:01:52:54:00:5a:ec:78}
	I0731 18:08:46.545759   73479 main.go:141] libmachine: (no-preload-673754) DBG | domain no-preload-673754 has defined IP address 192.168.61.126 and MAC address 52:54:00:5a:ec:78 in network mk-no-preload-673754
	I0731 18:08:46.545829   73479 main.go:141] libmachine: (no-preload-673754) Calling .GetSSHPort
	I0731 18:08:46.545985   73479 main.go:141] libmachine: (no-preload-673754) Calling .GetSSHKeyPath
	I0731 18:08:46.546116   73479 main.go:141] libmachine: (no-preload-673754) Calling .GetSSHUsername
	I0731 18:08:46.546256   73479 sshutil.go:53] new ssh client: &{IP:192.168.61.126 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19349-8084/.minikube/machines/no-preload-673754/id_rsa Username:docker}
	I0731 18:08:46.546384   73479 main.go:141] libmachine: (no-preload-673754) DBG | domain no-preload-673754 has defined MAC address 52:54:00:5a:ec:78 in network mk-no-preload-673754
	I0731 18:08:46.546888   73479 main.go:141] libmachine: (no-preload-673754) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:ec:78", ip: ""} in network mk-no-preload-673754: {Iface:virbr2 ExpiryTime:2024-07-31 19:08:12 +0000 UTC Type:0 Mac:52:54:00:5a:ec:78 Iaid: IPaddr:192.168.61.126 Prefix:24 Hostname:no-preload-673754 Clientid:01:52:54:00:5a:ec:78}
	I0731 18:08:46.546907   73479 main.go:141] libmachine: (no-preload-673754) DBG | domain no-preload-673754 has defined IP address 192.168.61.126 and MAC address 52:54:00:5a:ec:78 in network mk-no-preload-673754
	I0731 18:08:46.546924   73479 main.go:141] libmachine: (no-preload-673754) Calling .GetSSHPort
	I0731 18:08:46.547090   73479 main.go:141] libmachine: (no-preload-673754) Calling .GetSSHKeyPath
	I0731 18:08:46.547256   73479 main.go:141] libmachine: (no-preload-673754) Calling .GetSSHUsername
	I0731 18:08:46.547434   73479 sshutil.go:53] new ssh client: &{IP:192.168.61.126 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19349-8084/.minikube/machines/no-preload-673754/id_rsa Username:docker}
	I0731 18:08:46.570759   73479 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40485
	I0731 18:08:46.571222   73479 main.go:141] libmachine: () Calling .GetVersion
	I0731 18:08:46.571668   73479 main.go:141] libmachine: Using API Version  1
	I0731 18:08:46.571688   73479 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 18:08:46.572207   73479 main.go:141] libmachine: () Calling .GetMachineName
	I0731 18:08:46.572367   73479 main.go:141] libmachine: (no-preload-673754) Calling .GetState
	I0731 18:08:46.574368   73479 main.go:141] libmachine: (no-preload-673754) Calling .DriverName
	I0731 18:08:46.574582   73479 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0731 18:08:46.574607   73479 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0731 18:08:46.574627   73479 main.go:141] libmachine: (no-preload-673754) Calling .GetSSHHostname
	I0731 18:08:46.577768   73479 main.go:141] libmachine: (no-preload-673754) DBG | domain no-preload-673754 has defined MAC address 52:54:00:5a:ec:78 in network mk-no-preload-673754
	I0731 18:08:46.578542   73479 main.go:141] libmachine: (no-preload-673754) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:ec:78", ip: ""} in network mk-no-preload-673754: {Iface:virbr2 ExpiryTime:2024-07-31 19:08:12 +0000 UTC Type:0 Mac:52:54:00:5a:ec:78 Iaid: IPaddr:192.168.61.126 Prefix:24 Hostname:no-preload-673754 Clientid:01:52:54:00:5a:ec:78}
	I0731 18:08:46.578567   73479 main.go:141] libmachine: (no-preload-673754) DBG | domain no-preload-673754 has defined IP address 192.168.61.126 and MAC address 52:54:00:5a:ec:78 in network mk-no-preload-673754
	I0731 18:08:46.578741   73479 main.go:141] libmachine: (no-preload-673754) Calling .GetSSHPort
	I0731 18:08:46.578911   73479 main.go:141] libmachine: (no-preload-673754) Calling .GetSSHKeyPath
	I0731 18:08:46.579047   73479 main.go:141] libmachine: (no-preload-673754) Calling .GetSSHUsername
	I0731 18:08:46.579459   73479 sshutil.go:53] new ssh client: &{IP:192.168.61.126 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19349-8084/.minikube/machines/no-preload-673754/id_rsa Username:docker}
	I0731 18:08:46.700752   73479 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0731 18:08:46.720967   73479 node_ready.go:35] waiting up to 6m0s for node "no-preload-673754" to be "Ready" ...
	I0731 18:08:46.798188   73479 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0731 18:08:46.802534   73479 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0731 18:08:46.802564   73479 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0731 18:08:46.828038   73479 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0731 18:08:46.859309   73479 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0731 18:08:46.859337   73479 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0731 18:08:46.921507   73479 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0731 18:08:46.921536   73479 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0731 18:08:46.958759   73479 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0731 18:08:48.106542   73479 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.278462071s)
	I0731 18:08:48.106599   73479 main.go:141] libmachine: Making call to close driver server
	I0731 18:08:48.106608   73479 main.go:141] libmachine: (no-preload-673754) Calling .Close
	I0731 18:08:48.107151   73479 main.go:141] libmachine: Successfully made call to close driver server
	I0731 18:08:48.107177   73479 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 18:08:48.107187   73479 main.go:141] libmachine: Making call to close driver server
	I0731 18:08:48.107196   73479 main.go:141] libmachine: (no-preload-673754) Calling .Close
	I0731 18:08:48.107601   73479 main.go:141] libmachine: (no-preload-673754) DBG | Closing plugin on server side
	I0731 18:08:48.107604   73479 main.go:141] libmachine: Successfully made call to close driver server
	I0731 18:08:48.107631   73479 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 18:08:48.107831   73479 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.309610972s)
	I0731 18:08:48.107872   73479 main.go:141] libmachine: Making call to close driver server
	I0731 18:08:48.107882   73479 main.go:141] libmachine: (no-preload-673754) Calling .Close
	I0731 18:08:48.108105   73479 main.go:141] libmachine: Successfully made call to close driver server
	I0731 18:08:48.108119   73479 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 18:08:48.108138   73479 main.go:141] libmachine: Making call to close driver server
	I0731 18:08:48.108150   73479 main.go:141] libmachine: (no-preload-673754) Calling .Close
	I0731 18:08:48.108351   73479 main.go:141] libmachine: Successfully made call to close driver server
	I0731 18:08:48.108367   73479 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 18:08:48.118038   73479 main.go:141] libmachine: Making call to close driver server
	I0731 18:08:48.118055   73479 main.go:141] libmachine: (no-preload-673754) Calling .Close
	I0731 18:08:48.118329   73479 main.go:141] libmachine: Successfully made call to close driver server
	I0731 18:08:48.118349   73479 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 18:08:48.128563   73479 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.169765123s)
	I0731 18:08:48.128606   73479 main.go:141] libmachine: Making call to close driver server
	I0731 18:08:48.128619   73479 main.go:141] libmachine: (no-preload-673754) Calling .Close
	I0731 18:08:48.128901   73479 main.go:141] libmachine: Successfully made call to close driver server
	I0731 18:08:48.128915   73479 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 18:08:48.128924   73479 main.go:141] libmachine: Making call to close driver server
	I0731 18:08:48.128932   73479 main.go:141] libmachine: (no-preload-673754) Calling .Close
	I0731 18:08:48.129137   73479 main.go:141] libmachine: Successfully made call to close driver server
	I0731 18:08:48.129152   73479 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 18:08:48.129162   73479 addons.go:475] Verifying addon metrics-server=true in "no-preload-673754"
	I0731 18:08:48.129174   73479 main.go:141] libmachine: (no-preload-673754) DBG | Closing plugin on server side
	I0731 18:08:48.130887   73479 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0731 18:08:46.271648   73800 pod_ready.go:102] pod "metrics-server-569cc877fc-fzxrw" in "kube-system" namespace has status "Ready":"False"
	I0731 18:08:48.271754   73800 pod_ready.go:102] pod "metrics-server-569cc877fc-fzxrw" in "kube-system" namespace has status "Ready":"False"
	I0731 18:08:45.478411   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:08:45.978802   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:08:46.478407   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:08:46.978134   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:08:47.479125   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:08:47.978991   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:08:48.478597   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:08:48.978742   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:08:49.479320   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:08:49.978288   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:08:46.779263   73696 pod_ready.go:102] pod "metrics-server-569cc877fc-64hp4" in "kube-system" namespace has status "Ready":"False"
	I0731 18:08:48.779361   73696 pod_ready.go:102] pod "metrics-server-569cc877fc-64hp4" in "kube-system" namespace has status "Ready":"False"
	I0731 18:08:48.131964   73479 addons.go:510] duration metric: took 1.634151286s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I0731 18:08:48.725682   73479 node_ready.go:53] node "no-preload-673754" has status "Ready":"False"
	I0731 18:08:51.231081   73479 node_ready.go:53] node "no-preload-673754" has status "Ready":"False"
	I0731 18:08:50.771387   73800 pod_ready.go:102] pod "metrics-server-569cc877fc-fzxrw" in "kube-system" namespace has status "Ready":"False"
	I0731 18:08:52.771438   73800 pod_ready.go:102] pod "metrics-server-569cc877fc-fzxrw" in "kube-system" namespace has status "Ready":"False"
	I0731 18:08:50.478112   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:08:50.978272   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:08:51.478319   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:08:51.978880   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:08:52.479176   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:08:52.979001   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:08:53.478508   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:08:53.978517   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:08:54.478857   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:08:54.978290   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:08:51.278348   73696 pod_ready.go:102] pod "metrics-server-569cc877fc-64hp4" in "kube-system" namespace has status "Ready":"False"
	I0731 18:08:53.278456   73696 pod_ready.go:102] pod "metrics-server-569cc877fc-64hp4" in "kube-system" namespace has status "Ready":"False"
	I0731 18:08:55.278495   73696 pod_ready.go:102] pod "metrics-server-569cc877fc-64hp4" in "kube-system" namespace has status "Ready":"False"
	I0731 18:08:53.725153   73479 node_ready.go:53] node "no-preload-673754" has status "Ready":"False"
	I0731 18:08:54.224475   73479 node_ready.go:49] node "no-preload-673754" has status "Ready":"True"
	I0731 18:08:54.224505   73479 node_ready.go:38] duration metric: took 7.503503116s for node "no-preload-673754" to be "Ready" ...
	I0731 18:08:54.224517   73479 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0731 18:08:54.231434   73479 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5cfdc65f69-k7clq" in "kube-system" namespace to be "Ready" ...
	I0731 18:08:56.237804   73479 pod_ready.go:102] pod "coredns-5cfdc65f69-k7clq" in "kube-system" namespace has status "Ready":"False"
	I0731 18:08:54.772597   73800 pod_ready.go:102] pod "metrics-server-569cc877fc-fzxrw" in "kube-system" namespace has status "Ready":"False"
	I0731 18:08:57.271778   73800 pod_ready.go:102] pod "metrics-server-569cc877fc-fzxrw" in "kube-system" namespace has status "Ready":"False"
	I0731 18:08:55.478727   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:08:55.978552   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:08:56.478246   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:08:56.978732   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:08:57.478262   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:08:57.978216   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:08:58.478212   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:08:58.978406   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:08:59.478270   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:08:59.978221   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:08:57.781459   73696 pod_ready.go:102] pod "metrics-server-569cc877fc-64hp4" in "kube-system" namespace has status "Ready":"False"
	I0731 18:09:00.278913   73696 pod_ready.go:102] pod "metrics-server-569cc877fc-64hp4" in "kube-system" namespace has status "Ready":"False"
	I0731 18:08:58.740148   73479 pod_ready.go:102] pod "coredns-5cfdc65f69-k7clq" in "kube-system" namespace has status "Ready":"False"
	I0731 18:09:01.237849   73479 pod_ready.go:92] pod "coredns-5cfdc65f69-k7clq" in "kube-system" namespace has status "Ready":"True"
	I0731 18:09:01.237874   73479 pod_ready.go:81] duration metric: took 7.00641308s for pod "coredns-5cfdc65f69-k7clq" in "kube-system" namespace to be "Ready" ...
	I0731 18:09:01.237887   73479 pod_ready.go:78] waiting up to 6m0s for pod "etcd-no-preload-673754" in "kube-system" namespace to be "Ready" ...
	I0731 18:09:01.242105   73479 pod_ready.go:92] pod "etcd-no-preload-673754" in "kube-system" namespace has status "Ready":"True"
	I0731 18:09:01.242122   73479 pod_ready.go:81] duration metric: took 4.229266ms for pod "etcd-no-preload-673754" in "kube-system" namespace to be "Ready" ...
	I0731 18:09:01.242133   73479 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-no-preload-673754" in "kube-system" namespace to be "Ready" ...
	I0731 18:09:01.246652   73479 pod_ready.go:92] pod "kube-apiserver-no-preload-673754" in "kube-system" namespace has status "Ready":"True"
	I0731 18:09:01.246674   73479 pod_ready.go:81] duration metric: took 4.534937ms for pod "kube-apiserver-no-preload-673754" in "kube-system" namespace to be "Ready" ...
	I0731 18:09:01.246686   73479 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-no-preload-673754" in "kube-system" namespace to be "Ready" ...
	I0731 18:09:01.251284   73479 pod_ready.go:92] pod "kube-controller-manager-no-preload-673754" in "kube-system" namespace has status "Ready":"True"
	I0731 18:09:01.251302   73479 pod_ready.go:81] duration metric: took 4.608584ms for pod "kube-controller-manager-no-preload-673754" in "kube-system" namespace to be "Ready" ...
	I0731 18:09:01.251321   73479 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-hqxh6" in "kube-system" namespace to be "Ready" ...
	I0731 18:09:01.255030   73479 pod_ready.go:92] pod "kube-proxy-hqxh6" in "kube-system" namespace has status "Ready":"True"
	I0731 18:09:01.255045   73479 pod_ready.go:81] duration metric: took 3.718917ms for pod "kube-proxy-hqxh6" in "kube-system" namespace to be "Ready" ...
	I0731 18:09:01.255052   73479 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-no-preload-673754" in "kube-system" namespace to be "Ready" ...
	I0731 18:09:01.636799   73479 pod_ready.go:92] pod "kube-scheduler-no-preload-673754" in "kube-system" namespace has status "Ready":"True"
	I0731 18:09:01.636826   73479 pod_ready.go:81] duration metric: took 381.767881ms for pod "kube-scheduler-no-preload-673754" in "kube-system" namespace to be "Ready" ...
	I0731 18:09:01.636835   73479 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-78fcd8795b-27pkr" in "kube-system" namespace to be "Ready" ...
	I0731 18:08:59.771686   73800 pod_ready.go:102] pod "metrics-server-569cc877fc-fzxrw" in "kube-system" namespace has status "Ready":"False"
	I0731 18:09:02.271396   73800 pod_ready.go:102] pod "metrics-server-569cc877fc-fzxrw" in "kube-system" namespace has status "Ready":"False"
	I0731 18:09:00.478785   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:09:00.978350   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:09:01.478635   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:09:01.978192   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:09:02.478480   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:09:02.979021   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:09:03.478366   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:09:03.978984   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:09:04.479143   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:09:04.978913   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:09:02.279613   73696 pod_ready.go:102] pod "metrics-server-569cc877fc-64hp4" in "kube-system" namespace has status "Ready":"False"
	I0731 18:09:04.778482   73696 pod_ready.go:102] pod "metrics-server-569cc877fc-64hp4" in "kube-system" namespace has status "Ready":"False"
	I0731 18:09:03.642978   73479 pod_ready.go:102] pod "metrics-server-78fcd8795b-27pkr" in "kube-system" namespace has status "Ready":"False"
	I0731 18:09:05.644941   73479 pod_ready.go:102] pod "metrics-server-78fcd8795b-27pkr" in "kube-system" namespace has status "Ready":"False"
	I0731 18:09:04.771938   73800 pod_ready.go:102] pod "metrics-server-569cc877fc-fzxrw" in "kube-system" namespace has status "Ready":"False"
	I0731 18:09:07.271165   73800 pod_ready.go:102] pod "metrics-server-569cc877fc-fzxrw" in "kube-system" namespace has status "Ready":"False"
	I0731 18:09:05.478608   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:09:05.978345   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:09:06.478435   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:09:06.978551   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:09:07.478131   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:09:07.978354   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:09:08.478977   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:09:08.979122   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:09:09.478279   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:09:09.978350   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:09:06.780364   73696 pod_ready.go:102] pod "metrics-server-569cc877fc-64hp4" in "kube-system" namespace has status "Ready":"False"
	I0731 18:09:09.278573   73696 pod_ready.go:102] pod "metrics-server-569cc877fc-64hp4" in "kube-system" namespace has status "Ready":"False"
	I0731 18:09:08.142974   73479 pod_ready.go:102] pod "metrics-server-78fcd8795b-27pkr" in "kube-system" namespace has status "Ready":"False"
	I0731 18:09:10.643136   73479 pod_ready.go:102] pod "metrics-server-78fcd8795b-27pkr" in "kube-system" namespace has status "Ready":"False"
	I0731 18:09:09.771950   73800 pod_ready.go:102] pod "metrics-server-569cc877fc-fzxrw" in "kube-system" namespace has status "Ready":"False"
	I0731 18:09:11.772464   73800 pod_ready.go:102] pod "metrics-server-569cc877fc-fzxrw" in "kube-system" namespace has status "Ready":"False"
	I0731 18:09:13.773164   73800 pod_ready.go:102] pod "metrics-server-569cc877fc-fzxrw" in "kube-system" namespace has status "Ready":"False"
	I0731 18:09:10.479086   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 18:09:10.479175   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 18:09:10.516364   74203 cri.go:89] found id: ""
	I0731 18:09:10.516389   74203 logs.go:276] 0 containers: []
	W0731 18:09:10.516405   74203 logs.go:278] No container was found matching "kube-apiserver"
	I0731 18:09:10.516411   74203 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 18:09:10.516464   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 18:09:10.549398   74203 cri.go:89] found id: ""
	I0731 18:09:10.549422   74203 logs.go:276] 0 containers: []
	W0731 18:09:10.549433   74203 logs.go:278] No container was found matching "etcd"
	I0731 18:09:10.549440   74203 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 18:09:10.549503   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 18:09:10.584290   74203 cri.go:89] found id: ""
	I0731 18:09:10.584314   74203 logs.go:276] 0 containers: []
	W0731 18:09:10.584322   74203 logs.go:278] No container was found matching "coredns"
	I0731 18:09:10.584327   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 18:09:10.584381   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 18:09:10.615832   74203 cri.go:89] found id: ""
	I0731 18:09:10.615860   74203 logs.go:276] 0 containers: []
	W0731 18:09:10.615871   74203 logs.go:278] No container was found matching "kube-scheduler"
	I0731 18:09:10.615878   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 18:09:10.615941   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 18:09:10.647597   74203 cri.go:89] found id: ""
	I0731 18:09:10.647617   74203 logs.go:276] 0 containers: []
	W0731 18:09:10.647624   74203 logs.go:278] No container was found matching "kube-proxy"
	I0731 18:09:10.647629   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 18:09:10.647686   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 18:09:10.680981   74203 cri.go:89] found id: ""
	I0731 18:09:10.681016   74203 logs.go:276] 0 containers: []
	W0731 18:09:10.681027   74203 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 18:09:10.681033   74203 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 18:09:10.681093   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 18:09:10.713798   74203 cri.go:89] found id: ""
	I0731 18:09:10.713839   74203 logs.go:276] 0 containers: []
	W0731 18:09:10.713851   74203 logs.go:278] No container was found matching "kindnet"
	I0731 18:09:10.713865   74203 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 18:09:10.713937   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 18:09:10.746378   74203 cri.go:89] found id: ""
	I0731 18:09:10.746405   74203 logs.go:276] 0 containers: []
	W0731 18:09:10.746413   74203 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 18:09:10.746423   74203 logs.go:123] Gathering logs for kubelet ...
	I0731 18:09:10.746439   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 18:09:10.799156   74203 logs.go:123] Gathering logs for dmesg ...
	I0731 18:09:10.799187   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 18:09:10.812388   74203 logs.go:123] Gathering logs for describe nodes ...
	I0731 18:09:10.812413   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 18:09:10.932251   74203 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 18:09:10.932271   74203 logs.go:123] Gathering logs for CRI-O ...
	I0731 18:09:10.932285   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 18:09:10.996810   74203 logs.go:123] Gathering logs for container status ...
	I0731 18:09:10.996840   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 18:09:13.533936   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:09:13.549194   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 18:09:13.549250   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 18:09:13.599350   74203 cri.go:89] found id: ""
	I0731 18:09:13.599389   74203 logs.go:276] 0 containers: []
	W0731 18:09:13.599400   74203 logs.go:278] No container was found matching "kube-apiserver"
	I0731 18:09:13.599407   74203 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 18:09:13.599466   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 18:09:13.651736   74203 cri.go:89] found id: ""
	I0731 18:09:13.651771   74203 logs.go:276] 0 containers: []
	W0731 18:09:13.651791   74203 logs.go:278] No container was found matching "etcd"
	I0731 18:09:13.651798   74203 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 18:09:13.651855   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 18:09:13.699804   74203 cri.go:89] found id: ""
	I0731 18:09:13.699832   74203 logs.go:276] 0 containers: []
	W0731 18:09:13.699841   74203 logs.go:278] No container was found matching "coredns"
	I0731 18:09:13.699846   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 18:09:13.699906   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 18:09:13.732760   74203 cri.go:89] found id: ""
	I0731 18:09:13.732781   74203 logs.go:276] 0 containers: []
	W0731 18:09:13.732788   74203 logs.go:278] No container was found matching "kube-scheduler"
	I0731 18:09:13.732794   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 18:09:13.732849   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 18:09:13.766865   74203 cri.go:89] found id: ""
	I0731 18:09:13.766892   74203 logs.go:276] 0 containers: []
	W0731 18:09:13.766902   74203 logs.go:278] No container was found matching "kube-proxy"
	I0731 18:09:13.766910   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 18:09:13.766964   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 18:09:13.804706   74203 cri.go:89] found id: ""
	I0731 18:09:13.804733   74203 logs.go:276] 0 containers: []
	W0731 18:09:13.804743   74203 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 18:09:13.804757   74203 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 18:09:13.804821   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 18:09:13.838432   74203 cri.go:89] found id: ""
	I0731 18:09:13.838461   74203 logs.go:276] 0 containers: []
	W0731 18:09:13.838472   74203 logs.go:278] No container was found matching "kindnet"
	I0731 18:09:13.838479   74203 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 18:09:13.838534   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 18:09:13.870455   74203 cri.go:89] found id: ""
	I0731 18:09:13.870480   74203 logs.go:276] 0 containers: []
	W0731 18:09:13.870490   74203 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 18:09:13.870498   74203 logs.go:123] Gathering logs for kubelet ...
	I0731 18:09:13.870510   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 18:09:13.922911   74203 logs.go:123] Gathering logs for dmesg ...
	I0731 18:09:13.922947   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 18:09:13.936075   74203 logs.go:123] Gathering logs for describe nodes ...
	I0731 18:09:13.936098   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 18:09:14.006766   74203 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 18:09:14.006790   74203 logs.go:123] Gathering logs for CRI-O ...
	I0731 18:09:14.006810   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 18:09:14.071066   74203 logs.go:123] Gathering logs for container status ...
	I0731 18:09:14.071100   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 18:09:11.278892   73696 pod_ready.go:102] pod "metrics-server-569cc877fc-64hp4" in "kube-system" namespace has status "Ready":"False"
	I0731 18:09:13.279644   73696 pod_ready.go:102] pod "metrics-server-569cc877fc-64hp4" in "kube-system" namespace has status "Ready":"False"
	I0731 18:09:15.280298   73696 pod_ready.go:102] pod "metrics-server-569cc877fc-64hp4" in "kube-system" namespace has status "Ready":"False"
	I0731 18:09:12.643341   73479 pod_ready.go:102] pod "metrics-server-78fcd8795b-27pkr" in "kube-system" namespace has status "Ready":"False"
	I0731 18:09:14.643636   73479 pod_ready.go:102] pod "metrics-server-78fcd8795b-27pkr" in "kube-system" namespace has status "Ready":"False"
	I0731 18:09:16.280976   73800 pod_ready.go:102] pod "metrics-server-569cc877fc-fzxrw" in "kube-system" namespace has status "Ready":"False"
	I0731 18:09:18.772338   73800 pod_ready.go:102] pod "metrics-server-569cc877fc-fzxrw" in "kube-system" namespace has status "Ready":"False"
	I0731 18:09:16.615212   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:09:16.627439   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 18:09:16.627499   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 18:09:16.660764   74203 cri.go:89] found id: ""
	I0731 18:09:16.660785   74203 logs.go:276] 0 containers: []
	W0731 18:09:16.660792   74203 logs.go:278] No container was found matching "kube-apiserver"
	I0731 18:09:16.660798   74203 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 18:09:16.660842   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 18:09:16.697154   74203 cri.go:89] found id: ""
	I0731 18:09:16.697182   74203 logs.go:276] 0 containers: []
	W0731 18:09:16.697196   74203 logs.go:278] No container was found matching "etcd"
	I0731 18:09:16.697201   74203 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 18:09:16.697259   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 18:09:16.730263   74203 cri.go:89] found id: ""
	I0731 18:09:16.730284   74203 logs.go:276] 0 containers: []
	W0731 18:09:16.730291   74203 logs.go:278] No container was found matching "coredns"
	I0731 18:09:16.730318   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 18:09:16.730369   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 18:09:16.765226   74203 cri.go:89] found id: ""
	I0731 18:09:16.765249   74203 logs.go:276] 0 containers: []
	W0731 18:09:16.765257   74203 logs.go:278] No container was found matching "kube-scheduler"
	I0731 18:09:16.765262   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 18:09:16.765336   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 18:09:16.800502   74203 cri.go:89] found id: ""
	I0731 18:09:16.800528   74203 logs.go:276] 0 containers: []
	W0731 18:09:16.800535   74203 logs.go:278] No container was found matching "kube-proxy"
	I0731 18:09:16.800541   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 18:09:16.800599   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 18:09:16.837391   74203 cri.go:89] found id: ""
	I0731 18:09:16.837418   74203 logs.go:276] 0 containers: []
	W0731 18:09:16.837427   74203 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 18:09:16.837435   74203 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 18:09:16.837490   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 18:09:16.867606   74203 cri.go:89] found id: ""
	I0731 18:09:16.867628   74203 logs.go:276] 0 containers: []
	W0731 18:09:16.867637   74203 logs.go:278] No container was found matching "kindnet"
	I0731 18:09:16.867642   74203 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 18:09:16.867696   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 18:09:16.901639   74203 cri.go:89] found id: ""
	I0731 18:09:16.901669   74203 logs.go:276] 0 containers: []
	W0731 18:09:16.901681   74203 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 18:09:16.901693   74203 logs.go:123] Gathering logs for kubelet ...
	I0731 18:09:16.901707   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 18:09:16.951692   74203 logs.go:123] Gathering logs for dmesg ...
	I0731 18:09:16.951729   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 18:09:16.965069   74203 logs.go:123] Gathering logs for describe nodes ...
	I0731 18:09:16.965101   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 18:09:17.040337   74203 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 18:09:17.040358   74203 logs.go:123] Gathering logs for CRI-O ...
	I0731 18:09:17.040371   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 18:09:17.115058   74203 logs.go:123] Gathering logs for container status ...
	I0731 18:09:17.115093   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 18:09:19.651538   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:09:19.663682   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 18:09:19.663739   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 18:09:19.697851   74203 cri.go:89] found id: ""
	I0731 18:09:19.697879   74203 logs.go:276] 0 containers: []
	W0731 18:09:19.697894   74203 logs.go:278] No container was found matching "kube-apiserver"
	I0731 18:09:19.697900   74203 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 18:09:19.697996   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 18:09:19.732745   74203 cri.go:89] found id: ""
	I0731 18:09:19.732772   74203 logs.go:276] 0 containers: []
	W0731 18:09:19.732783   74203 logs.go:278] No container was found matching "etcd"
	I0731 18:09:19.732790   74203 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 18:09:19.732855   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 18:09:19.763843   74203 cri.go:89] found id: ""
	I0731 18:09:19.763865   74203 logs.go:276] 0 containers: []
	W0731 18:09:19.763873   74203 logs.go:278] No container was found matching "coredns"
	I0731 18:09:19.763878   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 18:09:19.763934   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 18:09:19.797398   74203 cri.go:89] found id: ""
	I0731 18:09:19.797422   74203 logs.go:276] 0 containers: []
	W0731 18:09:19.797429   74203 logs.go:278] No container was found matching "kube-scheduler"
	I0731 18:09:19.797434   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 18:09:19.797504   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 18:09:19.833239   74203 cri.go:89] found id: ""
	I0731 18:09:19.833268   74203 logs.go:276] 0 containers: []
	W0731 18:09:19.833278   74203 logs.go:278] No container was found matching "kube-proxy"
	I0731 18:09:19.833284   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 18:09:19.833340   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 18:09:19.866135   74203 cri.go:89] found id: ""
	I0731 18:09:19.866163   74203 logs.go:276] 0 containers: []
	W0731 18:09:19.866173   74203 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 18:09:19.866181   74203 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 18:09:19.866242   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 18:09:19.900581   74203 cri.go:89] found id: ""
	I0731 18:09:19.900606   74203 logs.go:276] 0 containers: []
	W0731 18:09:19.900615   74203 logs.go:278] No container was found matching "kindnet"
	I0731 18:09:19.900621   74203 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 18:09:19.900720   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 18:09:19.936451   74203 cri.go:89] found id: ""
	I0731 18:09:19.936475   74203 logs.go:276] 0 containers: []
	W0731 18:09:19.936487   74203 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 18:09:19.936496   74203 logs.go:123] Gathering logs for kubelet ...
	I0731 18:09:19.936508   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 18:09:19.990522   74203 logs.go:123] Gathering logs for dmesg ...
	I0731 18:09:19.990559   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 18:09:20.003460   74203 logs.go:123] Gathering logs for describe nodes ...
	I0731 18:09:20.003487   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 18:09:20.070869   74203 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 18:09:20.070893   74203 logs.go:123] Gathering logs for CRI-O ...
	I0731 18:09:20.070912   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 18:09:20.148316   74203 logs.go:123] Gathering logs for container status ...
	I0731 18:09:20.148354   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 18:09:17.779144   73696 pod_ready.go:102] pod "metrics-server-569cc877fc-64hp4" in "kube-system" namespace has status "Ready":"False"
	I0731 18:09:19.781539   73696 pod_ready.go:102] pod "metrics-server-569cc877fc-64hp4" in "kube-system" namespace has status "Ready":"False"
	I0731 18:09:17.143894   73479 pod_ready.go:102] pod "metrics-server-78fcd8795b-27pkr" in "kube-system" namespace has status "Ready":"False"
	I0731 18:09:19.642139   73479 pod_ready.go:102] pod "metrics-server-78fcd8795b-27pkr" in "kube-system" namespace has status "Ready":"False"
	I0731 18:09:21.642234   73479 pod_ready.go:102] pod "metrics-server-78fcd8795b-27pkr" in "kube-system" namespace has status "Ready":"False"
	I0731 18:09:21.271074   73800 pod_ready.go:102] pod "metrics-server-569cc877fc-fzxrw" in "kube-system" namespace has status "Ready":"False"
	I0731 18:09:23.771002   73800 pod_ready.go:102] pod "metrics-server-569cc877fc-fzxrw" in "kube-system" namespace has status "Ready":"False"
	I0731 18:09:22.685964   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:09:22.698740   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 18:09:22.698814   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 18:09:22.735321   74203 cri.go:89] found id: ""
	I0731 18:09:22.735350   74203 logs.go:276] 0 containers: []
	W0731 18:09:22.735360   74203 logs.go:278] No container was found matching "kube-apiserver"
	I0731 18:09:22.735367   74203 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 18:09:22.735428   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 18:09:22.767689   74203 cri.go:89] found id: ""
	I0731 18:09:22.767718   74203 logs.go:276] 0 containers: []
	W0731 18:09:22.767729   74203 logs.go:278] No container was found matching "etcd"
	I0731 18:09:22.767736   74203 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 18:09:22.767795   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 18:09:22.804010   74203 cri.go:89] found id: ""
	I0731 18:09:22.804036   74203 logs.go:276] 0 containers: []
	W0731 18:09:22.804045   74203 logs.go:278] No container was found matching "coredns"
	I0731 18:09:22.804050   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 18:09:22.804101   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 18:09:22.836820   74203 cri.go:89] found id: ""
	I0731 18:09:22.836847   74203 logs.go:276] 0 containers: []
	W0731 18:09:22.836858   74203 logs.go:278] No container was found matching "kube-scheduler"
	I0731 18:09:22.836874   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 18:09:22.836933   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 18:09:22.870163   74203 cri.go:89] found id: ""
	I0731 18:09:22.870187   74203 logs.go:276] 0 containers: []
	W0731 18:09:22.870194   74203 logs.go:278] No container was found matching "kube-proxy"
	I0731 18:09:22.870199   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 18:09:22.870270   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 18:09:22.905926   74203 cri.go:89] found id: ""
	I0731 18:09:22.905951   74203 logs.go:276] 0 containers: []
	W0731 18:09:22.905959   74203 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 18:09:22.905965   74203 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 18:09:22.906020   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 18:09:22.938926   74203 cri.go:89] found id: ""
	I0731 18:09:22.938949   74203 logs.go:276] 0 containers: []
	W0731 18:09:22.938957   74203 logs.go:278] No container was found matching "kindnet"
	I0731 18:09:22.938963   74203 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 18:09:22.939008   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 18:09:22.975150   74203 cri.go:89] found id: ""
	I0731 18:09:22.975185   74203 logs.go:276] 0 containers: []
	W0731 18:09:22.975194   74203 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 18:09:22.975204   74203 logs.go:123] Gathering logs for describe nodes ...
	I0731 18:09:22.975219   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 18:09:23.043265   74203 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 18:09:23.043290   74203 logs.go:123] Gathering logs for CRI-O ...
	I0731 18:09:23.043302   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 18:09:23.122681   74203 logs.go:123] Gathering logs for container status ...
	I0731 18:09:23.122717   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 18:09:23.161745   74203 logs.go:123] Gathering logs for kubelet ...
	I0731 18:09:23.161769   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 18:09:23.211274   74203 logs.go:123] Gathering logs for dmesg ...
	I0731 18:09:23.211305   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 18:09:22.278664   73696 pod_ready.go:102] pod "metrics-server-569cc877fc-64hp4" in "kube-system" namespace has status "Ready":"False"
	I0731 18:09:24.778771   73696 pod_ready.go:102] pod "metrics-server-569cc877fc-64hp4" in "kube-system" namespace has status "Ready":"False"
	I0731 18:09:23.643871   73479 pod_ready.go:102] pod "metrics-server-78fcd8795b-27pkr" in "kube-system" namespace has status "Ready":"False"
	I0731 18:09:26.143509   73479 pod_ready.go:102] pod "metrics-server-78fcd8795b-27pkr" in "kube-system" namespace has status "Ready":"False"
	I0731 18:09:25.771922   73800 pod_ready.go:102] pod "metrics-server-569cc877fc-fzxrw" in "kube-system" namespace has status "Ready":"False"
	I0731 18:09:27.772156   73800 pod_ready.go:102] pod "metrics-server-569cc877fc-fzxrw" in "kube-system" namespace has status "Ready":"False"
	I0731 18:09:25.724702   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:09:25.739335   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 18:09:25.739415   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 18:09:25.778238   74203 cri.go:89] found id: ""
	I0731 18:09:25.778264   74203 logs.go:276] 0 containers: []
	W0731 18:09:25.778274   74203 logs.go:278] No container was found matching "kube-apiserver"
	I0731 18:09:25.778282   74203 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 18:09:25.778349   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 18:09:25.816530   74203 cri.go:89] found id: ""
	I0731 18:09:25.816566   74203 logs.go:276] 0 containers: []
	W0731 18:09:25.816579   74203 logs.go:278] No container was found matching "etcd"
	I0731 18:09:25.816587   74203 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 18:09:25.816652   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 18:09:25.853524   74203 cri.go:89] found id: ""
	I0731 18:09:25.853562   74203 logs.go:276] 0 containers: []
	W0731 18:09:25.853575   74203 logs.go:278] No container was found matching "coredns"
	I0731 18:09:25.853583   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 18:09:25.853661   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 18:09:25.889690   74203 cri.go:89] found id: ""
	I0731 18:09:25.889719   74203 logs.go:276] 0 containers: []
	W0731 18:09:25.889728   74203 logs.go:278] No container was found matching "kube-scheduler"
	I0731 18:09:25.889734   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 18:09:25.889803   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 18:09:25.922409   74203 cri.go:89] found id: ""
	I0731 18:09:25.922441   74203 logs.go:276] 0 containers: []
	W0731 18:09:25.922452   74203 logs.go:278] No container was found matching "kube-proxy"
	I0731 18:09:25.922459   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 18:09:25.922512   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 18:09:25.956849   74203 cri.go:89] found id: ""
	I0731 18:09:25.956876   74203 logs.go:276] 0 containers: []
	W0731 18:09:25.956886   74203 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 18:09:25.956893   74203 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 18:09:25.956958   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 18:09:25.994190   74203 cri.go:89] found id: ""
	I0731 18:09:25.994212   74203 logs.go:276] 0 containers: []
	W0731 18:09:25.994220   74203 logs.go:278] No container was found matching "kindnet"
	I0731 18:09:25.994225   74203 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 18:09:25.994270   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 18:09:26.027980   74203 cri.go:89] found id: ""
	I0731 18:09:26.028005   74203 logs.go:276] 0 containers: []
	W0731 18:09:26.028014   74203 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 18:09:26.028025   74203 logs.go:123] Gathering logs for kubelet ...
	I0731 18:09:26.028044   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 18:09:26.076627   74203 logs.go:123] Gathering logs for dmesg ...
	I0731 18:09:26.076661   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 18:09:26.089439   74203 logs.go:123] Gathering logs for describe nodes ...
	I0731 18:09:26.089464   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 18:09:26.167298   74203 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 18:09:26.167319   74203 logs.go:123] Gathering logs for CRI-O ...
	I0731 18:09:26.167333   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 18:09:26.244611   74203 logs.go:123] Gathering logs for container status ...
	I0731 18:09:26.244644   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 18:09:28.787238   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:09:28.800136   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 18:09:28.800221   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 18:09:28.843038   74203 cri.go:89] found id: ""
	I0731 18:09:28.843062   74203 logs.go:276] 0 containers: []
	W0731 18:09:28.843070   74203 logs.go:278] No container was found matching "kube-apiserver"
	I0731 18:09:28.843076   74203 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 18:09:28.843154   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 18:09:28.876979   74203 cri.go:89] found id: ""
	I0731 18:09:28.877010   74203 logs.go:276] 0 containers: []
	W0731 18:09:28.877021   74203 logs.go:278] No container was found matching "etcd"
	I0731 18:09:28.877028   74203 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 18:09:28.877095   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 18:09:28.913105   74203 cri.go:89] found id: ""
	I0731 18:09:28.913137   74203 logs.go:276] 0 containers: []
	W0731 18:09:28.913147   74203 logs.go:278] No container was found matching "coredns"
	I0731 18:09:28.913155   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 18:09:28.913216   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 18:09:28.949113   74203 cri.go:89] found id: ""
	I0731 18:09:28.949144   74203 logs.go:276] 0 containers: []
	W0731 18:09:28.949153   74203 logs.go:278] No container was found matching "kube-scheduler"
	I0731 18:09:28.949160   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 18:09:28.949208   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 18:09:28.983159   74203 cri.go:89] found id: ""
	I0731 18:09:28.983187   74203 logs.go:276] 0 containers: []
	W0731 18:09:28.983195   74203 logs.go:278] No container was found matching "kube-proxy"
	I0731 18:09:28.983200   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 18:09:28.983276   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 18:09:29.016316   74203 cri.go:89] found id: ""
	I0731 18:09:29.016356   74203 logs.go:276] 0 containers: []
	W0731 18:09:29.016364   74203 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 18:09:29.016370   74203 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 18:09:29.016419   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 18:09:29.050015   74203 cri.go:89] found id: ""
	I0731 18:09:29.050047   74203 logs.go:276] 0 containers: []
	W0731 18:09:29.050058   74203 logs.go:278] No container was found matching "kindnet"
	I0731 18:09:29.050069   74203 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 18:09:29.050124   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 18:09:29.084711   74203 cri.go:89] found id: ""
	I0731 18:09:29.084739   74203 logs.go:276] 0 containers: []
	W0731 18:09:29.084749   74203 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 18:09:29.084760   74203 logs.go:123] Gathering logs for kubelet ...
	I0731 18:09:29.084777   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 18:09:29.135474   74203 logs.go:123] Gathering logs for dmesg ...
	I0731 18:09:29.135516   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 18:09:29.149989   74203 logs.go:123] Gathering logs for describe nodes ...
	I0731 18:09:29.150022   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 18:09:29.223652   74203 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 18:09:29.223676   74203 logs.go:123] Gathering logs for CRI-O ...
	I0731 18:09:29.223688   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 18:09:29.307949   74203 logs.go:123] Gathering logs for container status ...
	I0731 18:09:29.307983   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 18:09:26.779082   73696 pod_ready.go:102] pod "metrics-server-569cc877fc-64hp4" in "kube-system" namespace has status "Ready":"False"
	I0731 18:09:29.280030   73696 pod_ready.go:102] pod "metrics-server-569cc877fc-64hp4" in "kube-system" namespace has status "Ready":"False"
	I0731 18:09:28.143957   73479 pod_ready.go:102] pod "metrics-server-78fcd8795b-27pkr" in "kube-system" namespace has status "Ready":"False"
	I0731 18:09:30.643349   73479 pod_ready.go:102] pod "metrics-server-78fcd8795b-27pkr" in "kube-system" namespace has status "Ready":"False"
	I0731 18:09:30.271524   73800 pod_ready.go:102] pod "metrics-server-569cc877fc-fzxrw" in "kube-system" namespace has status "Ready":"False"
	I0731 18:09:32.271862   73800 pod_ready.go:102] pod "metrics-server-569cc877fc-fzxrw" in "kube-system" namespace has status "Ready":"False"
	I0731 18:09:31.848760   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:09:31.861409   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 18:09:31.861470   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 18:09:31.894485   74203 cri.go:89] found id: ""
	I0731 18:09:31.894505   74203 logs.go:276] 0 containers: []
	W0731 18:09:31.894513   74203 logs.go:278] No container was found matching "kube-apiserver"
	I0731 18:09:31.894518   74203 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 18:09:31.894563   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 18:09:31.926760   74203 cri.go:89] found id: ""
	I0731 18:09:31.926784   74203 logs.go:276] 0 containers: []
	W0731 18:09:31.926791   74203 logs.go:278] No container was found matching "etcd"
	I0731 18:09:31.926797   74203 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 18:09:31.926857   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 18:09:31.963010   74203 cri.go:89] found id: ""
	I0731 18:09:31.963042   74203 logs.go:276] 0 containers: []
	W0731 18:09:31.963055   74203 logs.go:278] No container was found matching "coredns"
	I0731 18:09:31.963062   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 18:09:31.963165   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 18:09:31.995221   74203 cri.go:89] found id: ""
	I0731 18:09:31.995249   74203 logs.go:276] 0 containers: []
	W0731 18:09:31.995260   74203 logs.go:278] No container was found matching "kube-scheduler"
	I0731 18:09:31.995268   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 18:09:31.995333   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 18:09:32.033912   74203 cri.go:89] found id: ""
	I0731 18:09:32.033942   74203 logs.go:276] 0 containers: []
	W0731 18:09:32.033955   74203 logs.go:278] No container was found matching "kube-proxy"
	I0731 18:09:32.033963   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 18:09:32.034038   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 18:09:32.066416   74203 cri.go:89] found id: ""
	I0731 18:09:32.066446   74203 logs.go:276] 0 containers: []
	W0731 18:09:32.066477   74203 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 18:09:32.066486   74203 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 18:09:32.066549   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 18:09:32.100097   74203 cri.go:89] found id: ""
	I0731 18:09:32.100121   74203 logs.go:276] 0 containers: []
	W0731 18:09:32.100129   74203 logs.go:278] No container was found matching "kindnet"
	I0731 18:09:32.100135   74203 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 18:09:32.100191   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 18:09:32.133061   74203 cri.go:89] found id: ""
	I0731 18:09:32.133088   74203 logs.go:276] 0 containers: []
	W0731 18:09:32.133096   74203 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 18:09:32.133106   74203 logs.go:123] Gathering logs for container status ...
	I0731 18:09:32.133120   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 18:09:32.169869   74203 logs.go:123] Gathering logs for kubelet ...
	I0731 18:09:32.169897   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 18:09:32.218668   74203 logs.go:123] Gathering logs for dmesg ...
	I0731 18:09:32.218707   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 18:09:32.231016   74203 logs.go:123] Gathering logs for describe nodes ...
	I0731 18:09:32.231039   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 18:09:32.304319   74203 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 18:09:32.304342   74203 logs.go:123] Gathering logs for CRI-O ...
	I0731 18:09:32.304353   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 18:09:34.880423   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:09:34.893775   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 18:09:34.893853   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 18:09:34.925073   74203 cri.go:89] found id: ""
	I0731 18:09:34.925101   74203 logs.go:276] 0 containers: []
	W0731 18:09:34.925109   74203 logs.go:278] No container was found matching "kube-apiserver"
	I0731 18:09:34.925115   74203 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 18:09:34.925178   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 18:09:34.960870   74203 cri.go:89] found id: ""
	I0731 18:09:34.960896   74203 logs.go:276] 0 containers: []
	W0731 18:09:34.960904   74203 logs.go:278] No container was found matching "etcd"
	I0731 18:09:34.960910   74203 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 18:09:34.960961   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 18:09:34.996290   74203 cri.go:89] found id: ""
	I0731 18:09:34.996332   74203 logs.go:276] 0 containers: []
	W0731 18:09:34.996341   74203 logs.go:278] No container was found matching "coredns"
	I0731 18:09:34.996347   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 18:09:34.996401   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 18:09:35.027900   74203 cri.go:89] found id: ""
	I0731 18:09:35.027932   74203 logs.go:276] 0 containers: []
	W0731 18:09:35.027940   74203 logs.go:278] No container was found matching "kube-scheduler"
	I0731 18:09:35.027945   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 18:09:35.028004   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 18:09:35.060533   74203 cri.go:89] found id: ""
	I0731 18:09:35.060562   74203 logs.go:276] 0 containers: []
	W0731 18:09:35.060579   74203 logs.go:278] No container was found matching "kube-proxy"
	I0731 18:09:35.060586   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 18:09:35.060653   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 18:09:35.095307   74203 cri.go:89] found id: ""
	I0731 18:09:35.095339   74203 logs.go:276] 0 containers: []
	W0731 18:09:35.095348   74203 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 18:09:35.095355   74203 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 18:09:35.095421   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 18:09:35.127060   74203 cri.go:89] found id: ""
	I0731 18:09:35.127082   74203 logs.go:276] 0 containers: []
	W0731 18:09:35.127090   74203 logs.go:278] No container was found matching "kindnet"
	I0731 18:09:35.127095   74203 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 18:09:35.127169   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 18:09:35.161300   74203 cri.go:89] found id: ""
	I0731 18:09:35.161328   74203 logs.go:276] 0 containers: []
	W0731 18:09:35.161339   74203 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 18:09:35.161350   74203 logs.go:123] Gathering logs for describe nodes ...
	I0731 18:09:35.161369   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 18:09:35.233033   74203 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 18:09:35.233060   74203 logs.go:123] Gathering logs for CRI-O ...
	I0731 18:09:35.233074   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 18:09:35.313279   74203 logs.go:123] Gathering logs for container status ...
	I0731 18:09:35.313311   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 18:09:31.779160   73696 pod_ready.go:102] pod "metrics-server-569cc877fc-64hp4" in "kube-system" namespace has status "Ready":"False"
	I0731 18:09:33.779209   73696 pod_ready.go:102] pod "metrics-server-569cc877fc-64hp4" in "kube-system" namespace has status "Ready":"False"
	I0731 18:09:32.644329   73479 pod_ready.go:102] pod "metrics-server-78fcd8795b-27pkr" in "kube-system" namespace has status "Ready":"False"
	I0731 18:09:35.143744   73479 pod_ready.go:102] pod "metrics-server-78fcd8795b-27pkr" in "kube-system" namespace has status "Ready":"False"
	I0731 18:09:34.774758   73800 pod_ready.go:102] pod "metrics-server-569cc877fc-fzxrw" in "kube-system" namespace has status "Ready":"False"
	I0731 18:09:37.271690   73800 pod_ready.go:102] pod "metrics-server-569cc877fc-fzxrw" in "kube-system" namespace has status "Ready":"False"
	I0731 18:09:35.356120   74203 logs.go:123] Gathering logs for kubelet ...
	I0731 18:09:35.356145   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 18:09:35.408231   74203 logs.go:123] Gathering logs for dmesg ...
	I0731 18:09:35.408263   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 18:09:37.921242   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:09:37.933986   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 18:09:37.934044   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 18:09:37.964524   74203 cri.go:89] found id: ""
	I0731 18:09:37.964558   74203 logs.go:276] 0 containers: []
	W0731 18:09:37.964567   74203 logs.go:278] No container was found matching "kube-apiserver"
	I0731 18:09:37.964574   74203 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 18:09:37.964632   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 18:09:37.998157   74203 cri.go:89] found id: ""
	I0731 18:09:37.998183   74203 logs.go:276] 0 containers: []
	W0731 18:09:37.998191   74203 logs.go:278] No container was found matching "etcd"
	I0731 18:09:37.998196   74203 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 18:09:37.998257   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 18:09:38.034611   74203 cri.go:89] found id: ""
	I0731 18:09:38.034637   74203 logs.go:276] 0 containers: []
	W0731 18:09:38.034645   74203 logs.go:278] No container was found matching "coredns"
	I0731 18:09:38.034650   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 18:09:38.034708   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 18:09:38.068005   74203 cri.go:89] found id: ""
	I0731 18:09:38.068029   74203 logs.go:276] 0 containers: []
	W0731 18:09:38.068039   74203 logs.go:278] No container was found matching "kube-scheduler"
	I0731 18:09:38.068047   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 18:09:38.068104   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 18:09:38.106110   74203 cri.go:89] found id: ""
	I0731 18:09:38.106133   74203 logs.go:276] 0 containers: []
	W0731 18:09:38.106141   74203 logs.go:278] No container was found matching "kube-proxy"
	I0731 18:09:38.106146   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 18:09:38.106192   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 18:09:38.138337   74203 cri.go:89] found id: ""
	I0731 18:09:38.138364   74203 logs.go:276] 0 containers: []
	W0731 18:09:38.138375   74203 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 18:09:38.138383   74203 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 18:09:38.138440   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 18:09:38.171517   74203 cri.go:89] found id: ""
	I0731 18:09:38.171546   74203 logs.go:276] 0 containers: []
	W0731 18:09:38.171557   74203 logs.go:278] No container was found matching "kindnet"
	I0731 18:09:38.171564   74203 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 18:09:38.171643   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 18:09:38.208708   74203 cri.go:89] found id: ""
	I0731 18:09:38.208733   74203 logs.go:276] 0 containers: []
	W0731 18:09:38.208741   74203 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 18:09:38.208750   74203 logs.go:123] Gathering logs for container status ...
	I0731 18:09:38.208760   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 18:09:38.243711   74203 logs.go:123] Gathering logs for kubelet ...
	I0731 18:09:38.243736   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 18:09:38.298673   74203 logs.go:123] Gathering logs for dmesg ...
	I0731 18:09:38.298705   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 18:09:38.311936   74203 logs.go:123] Gathering logs for describe nodes ...
	I0731 18:09:38.311962   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 18:09:38.384023   74203 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 18:09:38.384049   74203 logs.go:123] Gathering logs for CRI-O ...
	I0731 18:09:38.384067   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 18:09:36.278948   73696 pod_ready.go:102] pod "metrics-server-569cc877fc-64hp4" in "kube-system" namespace has status "Ready":"False"
	I0731 18:09:38.279423   73696 pod_ready.go:102] pod "metrics-server-569cc877fc-64hp4" in "kube-system" namespace has status "Ready":"False"
	I0731 18:09:40.281213   73696 pod_ready.go:102] pod "metrics-server-569cc877fc-64hp4" in "kube-system" namespace has status "Ready":"False"
	I0731 18:09:37.644041   73479 pod_ready.go:102] pod "metrics-server-78fcd8795b-27pkr" in "kube-system" namespace has status "Ready":"False"
	I0731 18:09:40.143131   73479 pod_ready.go:102] pod "metrics-server-78fcd8795b-27pkr" in "kube-system" namespace has status "Ready":"False"
	I0731 18:09:39.772098   73800 pod_ready.go:102] pod "metrics-server-569cc877fc-fzxrw" in "kube-system" namespace has status "Ready":"False"
	I0731 18:09:42.272096   73800 pod_ready.go:102] pod "metrics-server-569cc877fc-fzxrw" in "kube-system" namespace has status "Ready":"False"
	I0731 18:09:40.959426   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:09:40.972581   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 18:09:40.972645   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 18:09:41.008917   74203 cri.go:89] found id: ""
	I0731 18:09:41.008941   74203 logs.go:276] 0 containers: []
	W0731 18:09:41.008950   74203 logs.go:278] No container was found matching "kube-apiserver"
	I0731 18:09:41.008957   74203 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 18:09:41.009018   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 18:09:41.045342   74203 cri.go:89] found id: ""
	I0731 18:09:41.045375   74203 logs.go:276] 0 containers: []
	W0731 18:09:41.045384   74203 logs.go:278] No container was found matching "etcd"
	I0731 18:09:41.045390   74203 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 18:09:41.045454   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 18:09:41.081385   74203 cri.go:89] found id: ""
	I0731 18:09:41.081409   74203 logs.go:276] 0 containers: []
	W0731 18:09:41.081417   74203 logs.go:278] No container was found matching "coredns"
	I0731 18:09:41.081423   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 18:09:41.081469   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 18:09:41.118028   74203 cri.go:89] found id: ""
	I0731 18:09:41.118051   74203 logs.go:276] 0 containers: []
	W0731 18:09:41.118062   74203 logs.go:278] No container was found matching "kube-scheduler"
	I0731 18:09:41.118067   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 18:09:41.118114   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 18:09:41.154162   74203 cri.go:89] found id: ""
	I0731 18:09:41.154190   74203 logs.go:276] 0 containers: []
	W0731 18:09:41.154201   74203 logs.go:278] No container was found matching "kube-proxy"
	I0731 18:09:41.154209   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 18:09:41.154271   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 18:09:41.190789   74203 cri.go:89] found id: ""
	I0731 18:09:41.190814   74203 logs.go:276] 0 containers: []
	W0731 18:09:41.190822   74203 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 18:09:41.190827   74203 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 18:09:41.190887   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 18:09:41.226281   74203 cri.go:89] found id: ""
	I0731 18:09:41.226312   74203 logs.go:276] 0 containers: []
	W0731 18:09:41.226321   74203 logs.go:278] No container was found matching "kindnet"
	I0731 18:09:41.226327   74203 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 18:09:41.226382   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 18:09:41.258270   74203 cri.go:89] found id: ""
	I0731 18:09:41.258299   74203 logs.go:276] 0 containers: []
	W0731 18:09:41.258309   74203 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 18:09:41.258321   74203 logs.go:123] Gathering logs for CRI-O ...
	I0731 18:09:41.258335   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 18:09:41.342713   74203 logs.go:123] Gathering logs for container status ...
	I0731 18:09:41.342749   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 18:09:41.389772   74203 logs.go:123] Gathering logs for kubelet ...
	I0731 18:09:41.389795   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 18:09:41.442645   74203 logs.go:123] Gathering logs for dmesg ...
	I0731 18:09:41.442676   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 18:09:41.455850   74203 logs.go:123] Gathering logs for describe nodes ...
	I0731 18:09:41.455874   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 18:09:41.522017   74203 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 18:09:44.022439   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:09:44.035190   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 18:09:44.035258   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 18:09:44.070759   74203 cri.go:89] found id: ""
	I0731 18:09:44.070783   74203 logs.go:276] 0 containers: []
	W0731 18:09:44.070790   74203 logs.go:278] No container was found matching "kube-apiserver"
	I0731 18:09:44.070796   74203 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 18:09:44.070857   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 18:09:44.105313   74203 cri.go:89] found id: ""
	I0731 18:09:44.105350   74203 logs.go:276] 0 containers: []
	W0731 18:09:44.105358   74203 logs.go:278] No container was found matching "etcd"
	I0731 18:09:44.105364   74203 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 18:09:44.105416   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 18:09:44.140159   74203 cri.go:89] found id: ""
	I0731 18:09:44.140208   74203 logs.go:276] 0 containers: []
	W0731 18:09:44.140220   74203 logs.go:278] No container was found matching "coredns"
	I0731 18:09:44.140229   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 18:09:44.140301   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 18:09:44.176407   74203 cri.go:89] found id: ""
	I0731 18:09:44.176429   74203 logs.go:276] 0 containers: []
	W0731 18:09:44.176437   74203 logs.go:278] No container was found matching "kube-scheduler"
	I0731 18:09:44.176442   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 18:09:44.176490   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 18:09:44.210875   74203 cri.go:89] found id: ""
	I0731 18:09:44.210899   74203 logs.go:276] 0 containers: []
	W0731 18:09:44.210907   74203 logs.go:278] No container was found matching "kube-proxy"
	I0731 18:09:44.210916   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 18:09:44.210969   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 18:09:44.247021   74203 cri.go:89] found id: ""
	I0731 18:09:44.247045   74203 logs.go:276] 0 containers: []
	W0731 18:09:44.247055   74203 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 18:09:44.247061   74203 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 18:09:44.247141   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 18:09:44.282983   74203 cri.go:89] found id: ""
	I0731 18:09:44.283011   74203 logs.go:276] 0 containers: []
	W0731 18:09:44.283021   74203 logs.go:278] No container was found matching "kindnet"
	I0731 18:09:44.283029   74203 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 18:09:44.283092   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 18:09:44.319717   74203 cri.go:89] found id: ""
	I0731 18:09:44.319742   74203 logs.go:276] 0 containers: []
	W0731 18:09:44.319750   74203 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 18:09:44.319766   74203 logs.go:123] Gathering logs for CRI-O ...
	I0731 18:09:44.319781   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 18:09:44.398602   74203 logs.go:123] Gathering logs for container status ...
	I0731 18:09:44.398636   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 18:09:44.435350   74203 logs.go:123] Gathering logs for kubelet ...
	I0731 18:09:44.435384   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 18:09:44.488021   74203 logs.go:123] Gathering logs for dmesg ...
	I0731 18:09:44.488053   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 18:09:44.501790   74203 logs.go:123] Gathering logs for describe nodes ...
	I0731 18:09:44.501813   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 18:09:44.578374   74203 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 18:09:42.779304   73696 pod_ready.go:102] pod "metrics-server-569cc877fc-64hp4" in "kube-system" namespace has status "Ready":"False"
	I0731 18:09:45.279008   73696 pod_ready.go:102] pod "metrics-server-569cc877fc-64hp4" in "kube-system" namespace has status "Ready":"False"
	I0731 18:09:42.143287   73479 pod_ready.go:102] pod "metrics-server-78fcd8795b-27pkr" in "kube-system" namespace has status "Ready":"False"
	I0731 18:09:44.144123   73479 pod_ready.go:102] pod "metrics-server-78fcd8795b-27pkr" in "kube-system" namespace has status "Ready":"False"
	I0731 18:09:46.643499   73479 pod_ready.go:102] pod "metrics-server-78fcd8795b-27pkr" in "kube-system" namespace has status "Ready":"False"
	I0731 18:09:44.771059   73800 pod_ready.go:102] pod "metrics-server-569cc877fc-fzxrw" in "kube-system" namespace has status "Ready":"False"
	I0731 18:09:46.771846   73800 pod_ready.go:102] pod "metrics-server-569cc877fc-fzxrw" in "kube-system" namespace has status "Ready":"False"
	I0731 18:09:48.772300   73800 pod_ready.go:102] pod "metrics-server-569cc877fc-fzxrw" in "kube-system" namespace has status "Ready":"False"
	I0731 18:09:47.079192   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:09:47.093516   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 18:09:47.093597   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 18:09:47.132872   74203 cri.go:89] found id: ""
	I0731 18:09:47.132899   74203 logs.go:276] 0 containers: []
	W0731 18:09:47.132907   74203 logs.go:278] No container was found matching "kube-apiserver"
	I0731 18:09:47.132913   74203 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 18:09:47.132969   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 18:09:47.167428   74203 cri.go:89] found id: ""
	I0731 18:09:47.167460   74203 logs.go:276] 0 containers: []
	W0731 18:09:47.167472   74203 logs.go:278] No container was found matching "etcd"
	I0731 18:09:47.167480   74203 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 18:09:47.167551   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 18:09:47.202206   74203 cri.go:89] found id: ""
	I0731 18:09:47.202237   74203 logs.go:276] 0 containers: []
	W0731 18:09:47.202250   74203 logs.go:278] No container was found matching "coredns"
	I0731 18:09:47.202256   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 18:09:47.202308   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 18:09:47.238513   74203 cri.go:89] found id: ""
	I0731 18:09:47.238537   74203 logs.go:276] 0 containers: []
	W0731 18:09:47.238545   74203 logs.go:278] No container was found matching "kube-scheduler"
	I0731 18:09:47.238551   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 18:09:47.238604   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 18:09:47.271732   74203 cri.go:89] found id: ""
	I0731 18:09:47.271755   74203 logs.go:276] 0 containers: []
	W0731 18:09:47.271764   74203 logs.go:278] No container was found matching "kube-proxy"
	I0731 18:09:47.271770   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 18:09:47.271828   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 18:09:47.305906   74203 cri.go:89] found id: ""
	I0731 18:09:47.305932   74203 logs.go:276] 0 containers: []
	W0731 18:09:47.305943   74203 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 18:09:47.305948   74203 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 18:09:47.305996   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 18:09:47.338427   74203 cri.go:89] found id: ""
	I0731 18:09:47.338452   74203 logs.go:276] 0 containers: []
	W0731 18:09:47.338461   74203 logs.go:278] No container was found matching "kindnet"
	I0731 18:09:47.338468   74203 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 18:09:47.338526   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 18:09:47.374909   74203 cri.go:89] found id: ""
	I0731 18:09:47.374943   74203 logs.go:276] 0 containers: []
	W0731 18:09:47.374954   74203 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 18:09:47.374963   74203 logs.go:123] Gathering logs for dmesg ...
	I0731 18:09:47.374976   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 18:09:47.387739   74203 logs.go:123] Gathering logs for describe nodes ...
	I0731 18:09:47.387765   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 18:09:47.480479   74203 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 18:09:47.480505   74203 logs.go:123] Gathering logs for CRI-O ...
	I0731 18:09:47.480519   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 18:09:47.562857   74203 logs.go:123] Gathering logs for container status ...
	I0731 18:09:47.562890   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 18:09:47.608435   74203 logs.go:123] Gathering logs for kubelet ...
	I0731 18:09:47.608466   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 18:09:50.164351   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:09:50.177485   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 18:09:50.177546   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 18:09:50.211474   74203 cri.go:89] found id: ""
	I0731 18:09:50.211502   74203 logs.go:276] 0 containers: []
	W0731 18:09:50.211512   74203 logs.go:278] No container was found matching "kube-apiserver"
	I0731 18:09:50.211520   74203 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 18:09:50.211583   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 18:09:50.248167   74203 cri.go:89] found id: ""
	I0731 18:09:50.248190   74203 logs.go:276] 0 containers: []
	W0731 18:09:50.248197   74203 logs.go:278] No container was found matching "etcd"
	I0731 18:09:50.248203   74203 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 18:09:50.248250   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 18:09:50.286323   74203 cri.go:89] found id: ""
	I0731 18:09:50.286358   74203 logs.go:276] 0 containers: []
	W0731 18:09:50.286366   74203 logs.go:278] No container was found matching "coredns"
	I0731 18:09:50.286372   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 18:09:50.286420   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 18:09:50.316634   74203 cri.go:89] found id: ""
	I0731 18:09:50.316661   74203 logs.go:276] 0 containers: []
	W0731 18:09:50.316670   74203 logs.go:278] No container was found matching "kube-scheduler"
	I0731 18:09:50.316675   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 18:09:50.316726   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 18:09:47.279198   73696 pod_ready.go:102] pod "metrics-server-569cc877fc-64hp4" in "kube-system" namespace has status "Ready":"False"
	I0731 18:09:49.280511   73696 pod_ready.go:102] pod "metrics-server-569cc877fc-64hp4" in "kube-system" namespace has status "Ready":"False"
	I0731 18:09:49.144581   73479 pod_ready.go:102] pod "metrics-server-78fcd8795b-27pkr" in "kube-system" namespace has status "Ready":"False"
	I0731 18:09:51.642915   73479 pod_ready.go:102] pod "metrics-server-78fcd8795b-27pkr" in "kube-system" namespace has status "Ready":"False"
	I0731 18:09:51.272079   73800 pod_ready.go:102] pod "metrics-server-569cc877fc-fzxrw" in "kube-system" namespace has status "Ready":"False"
	I0731 18:09:53.272815   73800 pod_ready.go:102] pod "metrics-server-569cc877fc-fzxrw" in "kube-system" namespace has status "Ready":"False"
	I0731 18:09:50.349881   74203 cri.go:89] found id: ""
	I0731 18:09:50.349909   74203 logs.go:276] 0 containers: []
	W0731 18:09:50.349919   74203 logs.go:278] No container was found matching "kube-proxy"
	I0731 18:09:50.349926   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 18:09:50.349989   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 18:09:50.384147   74203 cri.go:89] found id: ""
	I0731 18:09:50.384181   74203 logs.go:276] 0 containers: []
	W0731 18:09:50.384194   74203 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 18:09:50.384203   74203 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 18:09:50.384272   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 18:09:50.418024   74203 cri.go:89] found id: ""
	I0731 18:09:50.418052   74203 logs.go:276] 0 containers: []
	W0731 18:09:50.418062   74203 logs.go:278] No container was found matching "kindnet"
	I0731 18:09:50.418069   74203 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 18:09:50.418130   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 18:09:50.454484   74203 cri.go:89] found id: ""
	I0731 18:09:50.454517   74203 logs.go:276] 0 containers: []
	W0731 18:09:50.454525   74203 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 18:09:50.454533   74203 logs.go:123] Gathering logs for kubelet ...
	I0731 18:09:50.454544   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 18:09:50.505508   74203 logs.go:123] Gathering logs for dmesg ...
	I0731 18:09:50.505545   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 18:09:50.518504   74203 logs.go:123] Gathering logs for describe nodes ...
	I0731 18:09:50.518529   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 18:09:50.587950   74203 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 18:09:50.587974   74203 logs.go:123] Gathering logs for CRI-O ...
	I0731 18:09:50.587989   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 18:09:50.669268   74203 logs.go:123] Gathering logs for container status ...
	I0731 18:09:50.669302   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 18:09:53.209229   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:09:53.222114   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 18:09:53.222175   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 18:09:53.255330   74203 cri.go:89] found id: ""
	I0731 18:09:53.255356   74203 logs.go:276] 0 containers: []
	W0731 18:09:53.255365   74203 logs.go:278] No container was found matching "kube-apiserver"
	I0731 18:09:53.255371   74203 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 18:09:53.255432   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 18:09:53.290354   74203 cri.go:89] found id: ""
	I0731 18:09:53.290375   74203 logs.go:276] 0 containers: []
	W0731 18:09:53.290382   74203 logs.go:278] No container was found matching "etcd"
	I0731 18:09:53.290387   74203 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 18:09:53.290438   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 18:09:53.323621   74203 cri.go:89] found id: ""
	I0731 18:09:53.323645   74203 logs.go:276] 0 containers: []
	W0731 18:09:53.323653   74203 logs.go:278] No container was found matching "coredns"
	I0731 18:09:53.323658   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 18:09:53.323718   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 18:09:53.355850   74203 cri.go:89] found id: ""
	I0731 18:09:53.355877   74203 logs.go:276] 0 containers: []
	W0731 18:09:53.355887   74203 logs.go:278] No container was found matching "kube-scheduler"
	I0731 18:09:53.355894   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 18:09:53.355957   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 18:09:53.388686   74203 cri.go:89] found id: ""
	I0731 18:09:53.388716   74203 logs.go:276] 0 containers: []
	W0731 18:09:53.388726   74203 logs.go:278] No container was found matching "kube-proxy"
	I0731 18:09:53.388733   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 18:09:53.388785   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 18:09:53.426924   74203 cri.go:89] found id: ""
	I0731 18:09:53.426952   74203 logs.go:276] 0 containers: []
	W0731 18:09:53.426961   74203 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 18:09:53.426967   74203 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 18:09:53.427019   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 18:09:53.462041   74203 cri.go:89] found id: ""
	I0731 18:09:53.462067   74203 logs.go:276] 0 containers: []
	W0731 18:09:53.462078   74203 logs.go:278] No container was found matching "kindnet"
	I0731 18:09:53.462084   74203 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 18:09:53.462145   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 18:09:53.493810   74203 cri.go:89] found id: ""
	I0731 18:09:53.493833   74203 logs.go:276] 0 containers: []
	W0731 18:09:53.493842   74203 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 18:09:53.493852   74203 logs.go:123] Gathering logs for container status ...
	I0731 18:09:53.493867   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 18:09:53.530019   74203 logs.go:123] Gathering logs for kubelet ...
	I0731 18:09:53.530053   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 18:09:53.580749   74203 logs.go:123] Gathering logs for dmesg ...
	I0731 18:09:53.580782   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 18:09:53.594457   74203 logs.go:123] Gathering logs for describe nodes ...
	I0731 18:09:53.594482   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 18:09:53.662096   74203 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 18:09:53.662116   74203 logs.go:123] Gathering logs for CRI-O ...
	I0731 18:09:53.662134   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 18:09:51.778292   73696 pod_ready.go:102] pod "metrics-server-569cc877fc-64hp4" in "kube-system" namespace has status "Ready":"False"
	I0731 18:09:53.779043   73696 pod_ready.go:102] pod "metrics-server-569cc877fc-64hp4" in "kube-system" namespace has status "Ready":"False"
	I0731 18:09:53.643914   73479 pod_ready.go:102] pod "metrics-server-78fcd8795b-27pkr" in "kube-system" namespace has status "Ready":"False"
	I0731 18:09:56.142699   73479 pod_ready.go:102] pod "metrics-server-78fcd8795b-27pkr" in "kube-system" namespace has status "Ready":"False"
	I0731 18:09:55.772106   73800 pod_ready.go:102] pod "metrics-server-569cc877fc-fzxrw" in "kube-system" namespace has status "Ready":"False"
	I0731 18:09:58.271063   73800 pod_ready.go:102] pod "metrics-server-569cc877fc-fzxrw" in "kube-system" namespace has status "Ready":"False"
	I0731 18:09:56.238479   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:09:56.251272   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 18:09:56.251350   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 18:09:56.287380   74203 cri.go:89] found id: ""
	I0731 18:09:56.287406   74203 logs.go:276] 0 containers: []
	W0731 18:09:56.287414   74203 logs.go:278] No container was found matching "kube-apiserver"
	I0731 18:09:56.287419   74203 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 18:09:56.287471   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 18:09:56.322490   74203 cri.go:89] found id: ""
	I0731 18:09:56.322512   74203 logs.go:276] 0 containers: []
	W0731 18:09:56.322520   74203 logs.go:278] No container was found matching "etcd"
	I0731 18:09:56.322526   74203 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 18:09:56.322572   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 18:09:56.355845   74203 cri.go:89] found id: ""
	I0731 18:09:56.355874   74203 logs.go:276] 0 containers: []
	W0731 18:09:56.355885   74203 logs.go:278] No container was found matching "coredns"
	I0731 18:09:56.355895   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 18:09:56.355958   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 18:09:56.388304   74203 cri.go:89] found id: ""
	I0731 18:09:56.388330   74203 logs.go:276] 0 containers: []
	W0731 18:09:56.388340   74203 logs.go:278] No container was found matching "kube-scheduler"
	I0731 18:09:56.388348   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 18:09:56.388411   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 18:09:56.420837   74203 cri.go:89] found id: ""
	I0731 18:09:56.420867   74203 logs.go:276] 0 containers: []
	W0731 18:09:56.420877   74203 logs.go:278] No container was found matching "kube-proxy"
	I0731 18:09:56.420884   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 18:09:56.420950   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 18:09:56.453095   74203 cri.go:89] found id: ""
	I0731 18:09:56.453135   74203 logs.go:276] 0 containers: []
	W0731 18:09:56.453146   74203 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 18:09:56.453155   74203 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 18:09:56.453214   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 18:09:56.484245   74203 cri.go:89] found id: ""
	I0731 18:09:56.484272   74203 logs.go:276] 0 containers: []
	W0731 18:09:56.484282   74203 logs.go:278] No container was found matching "kindnet"
	I0731 18:09:56.484296   74203 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 18:09:56.484366   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 18:09:56.519473   74203 cri.go:89] found id: ""
	I0731 18:09:56.519501   74203 logs.go:276] 0 containers: []
	W0731 18:09:56.519508   74203 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 18:09:56.519516   74203 logs.go:123] Gathering logs for dmesg ...
	I0731 18:09:56.519530   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 18:09:56.532178   74203 logs.go:123] Gathering logs for describe nodes ...
	I0731 18:09:56.532203   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 18:09:56.600092   74203 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 18:09:56.600122   74203 logs.go:123] Gathering logs for CRI-O ...
	I0731 18:09:56.600137   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 18:09:56.679176   74203 logs.go:123] Gathering logs for container status ...
	I0731 18:09:56.679208   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 18:09:56.715464   74203 logs.go:123] Gathering logs for kubelet ...
	I0731 18:09:56.715499   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 18:09:59.267214   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:09:59.280666   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 18:09:59.280740   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 18:09:59.312898   74203 cri.go:89] found id: ""
	I0731 18:09:59.312928   74203 logs.go:276] 0 containers: []
	W0731 18:09:59.312940   74203 logs.go:278] No container was found matching "kube-apiserver"
	I0731 18:09:59.312947   74203 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 18:09:59.313013   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 18:09:59.347881   74203 cri.go:89] found id: ""
	I0731 18:09:59.347907   74203 logs.go:276] 0 containers: []
	W0731 18:09:59.347915   74203 logs.go:278] No container was found matching "etcd"
	I0731 18:09:59.347919   74203 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 18:09:59.347978   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 18:09:59.382566   74203 cri.go:89] found id: ""
	I0731 18:09:59.382603   74203 logs.go:276] 0 containers: []
	W0731 18:09:59.382615   74203 logs.go:278] No container was found matching "coredns"
	I0731 18:09:59.382629   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 18:09:59.382691   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 18:09:59.417123   74203 cri.go:89] found id: ""
	I0731 18:09:59.417148   74203 logs.go:276] 0 containers: []
	W0731 18:09:59.417157   74203 logs.go:278] No container was found matching "kube-scheduler"
	I0731 18:09:59.417163   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 18:09:59.417220   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 18:09:59.452674   74203 cri.go:89] found id: ""
	I0731 18:09:59.452699   74203 logs.go:276] 0 containers: []
	W0731 18:09:59.452709   74203 logs.go:278] No container was found matching "kube-proxy"
	I0731 18:09:59.452715   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 18:09:59.452775   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 18:09:59.488879   74203 cri.go:89] found id: ""
	I0731 18:09:59.488905   74203 logs.go:276] 0 containers: []
	W0731 18:09:59.488913   74203 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 18:09:59.488921   74203 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 18:09:59.488981   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 18:09:59.521773   74203 cri.go:89] found id: ""
	I0731 18:09:59.521801   74203 logs.go:276] 0 containers: []
	W0731 18:09:59.521809   74203 logs.go:278] No container was found matching "kindnet"
	I0731 18:09:59.521816   74203 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 18:09:59.521876   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 18:09:59.566619   74203 cri.go:89] found id: ""
	I0731 18:09:59.566649   74203 logs.go:276] 0 containers: []
	W0731 18:09:59.566659   74203 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 18:09:59.566670   74203 logs.go:123] Gathering logs for describe nodes ...
	I0731 18:09:59.566687   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 18:09:59.638301   74203 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 18:09:59.638351   74203 logs.go:123] Gathering logs for CRI-O ...
	I0731 18:09:59.638367   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 18:09:59.721561   74203 logs.go:123] Gathering logs for container status ...
	I0731 18:09:59.721597   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 18:09:59.759371   74203 logs.go:123] Gathering logs for kubelet ...
	I0731 18:09:59.759402   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 18:09:59.811223   74203 logs.go:123] Gathering logs for dmesg ...
	I0731 18:09:59.811255   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 18:09:56.280351   73696 pod_ready.go:102] pod "metrics-server-569cc877fc-64hp4" in "kube-system" namespace has status "Ready":"False"
	I0731 18:09:58.777896   73696 pod_ready.go:102] pod "metrics-server-569cc877fc-64hp4" in "kube-system" namespace has status "Ready":"False"
	I0731 18:10:00.779028   73696 pod_ready.go:102] pod "metrics-server-569cc877fc-64hp4" in "kube-system" namespace has status "Ready":"False"
	I0731 18:09:58.144006   73479 pod_ready.go:102] pod "metrics-server-78fcd8795b-27pkr" in "kube-system" namespace has status "Ready":"False"
	I0731 18:10:00.643536   73479 pod_ready.go:102] pod "metrics-server-78fcd8795b-27pkr" in "kube-system" namespace has status "Ready":"False"
	I0731 18:10:00.772456   73800 pod_ready.go:102] pod "metrics-server-569cc877fc-fzxrw" in "kube-system" namespace has status "Ready":"False"
	I0731 18:10:03.270710   73800 pod_ready.go:102] pod "metrics-server-569cc877fc-fzxrw" in "kube-system" namespace has status "Ready":"False"
	I0731 18:10:02.325339   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:10:02.337908   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 18:10:02.337963   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 18:10:02.369343   74203 cri.go:89] found id: ""
	I0731 18:10:02.369369   74203 logs.go:276] 0 containers: []
	W0731 18:10:02.369378   74203 logs.go:278] No container was found matching "kube-apiserver"
	I0731 18:10:02.369384   74203 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 18:10:02.369442   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 18:10:02.406207   74203 cri.go:89] found id: ""
	I0731 18:10:02.406234   74203 logs.go:276] 0 containers: []
	W0731 18:10:02.406242   74203 logs.go:278] No container was found matching "etcd"
	I0731 18:10:02.406247   74203 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 18:10:02.406297   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 18:10:02.442001   74203 cri.go:89] found id: ""
	I0731 18:10:02.442031   74203 logs.go:276] 0 containers: []
	W0731 18:10:02.442041   74203 logs.go:278] No container was found matching "coredns"
	I0731 18:10:02.442049   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 18:10:02.442109   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 18:10:02.478407   74203 cri.go:89] found id: ""
	I0731 18:10:02.478431   74203 logs.go:276] 0 containers: []
	W0731 18:10:02.478439   74203 logs.go:278] No container was found matching "kube-scheduler"
	I0731 18:10:02.478444   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 18:10:02.478491   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 18:10:02.513832   74203 cri.go:89] found id: ""
	I0731 18:10:02.513875   74203 logs.go:276] 0 containers: []
	W0731 18:10:02.513888   74203 logs.go:278] No container was found matching "kube-proxy"
	I0731 18:10:02.513896   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 18:10:02.513962   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 18:10:02.550830   74203 cri.go:89] found id: ""
	I0731 18:10:02.550856   74203 logs.go:276] 0 containers: []
	W0731 18:10:02.550867   74203 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 18:10:02.550874   74203 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 18:10:02.550937   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 18:10:02.584649   74203 cri.go:89] found id: ""
	I0731 18:10:02.584676   74203 logs.go:276] 0 containers: []
	W0731 18:10:02.584684   74203 logs.go:278] No container was found matching "kindnet"
	I0731 18:10:02.584691   74203 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 18:10:02.584752   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 18:10:02.617436   74203 cri.go:89] found id: ""
	I0731 18:10:02.617464   74203 logs.go:276] 0 containers: []
	W0731 18:10:02.617475   74203 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 18:10:02.617485   74203 logs.go:123] Gathering logs for kubelet ...
	I0731 18:10:02.617500   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 18:10:02.671571   74203 logs.go:123] Gathering logs for dmesg ...
	I0731 18:10:02.671609   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 18:10:02.686657   74203 logs.go:123] Gathering logs for describe nodes ...
	I0731 18:10:02.686694   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 18:10:02.755974   74203 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 18:10:02.756008   74203 logs.go:123] Gathering logs for CRI-O ...
	I0731 18:10:02.756025   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 18:10:02.837976   74203 logs.go:123] Gathering logs for container status ...
	I0731 18:10:02.838012   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 18:10:02.779666   73696 pod_ready.go:102] pod "metrics-server-569cc877fc-64hp4" in "kube-system" namespace has status "Ready":"False"
	I0731 18:10:04.779994   73696 pod_ready.go:102] pod "metrics-server-569cc877fc-64hp4" in "kube-system" namespace has status "Ready":"False"
	I0731 18:10:02.644075   73479 pod_ready.go:102] pod "metrics-server-78fcd8795b-27pkr" in "kube-system" namespace has status "Ready":"False"
	I0731 18:10:05.142859   73479 pod_ready.go:102] pod "metrics-server-78fcd8795b-27pkr" in "kube-system" namespace has status "Ready":"False"
	I0731 18:10:05.272500   73800 pod_ready.go:102] pod "metrics-server-569cc877fc-fzxrw" in "kube-system" namespace has status "Ready":"False"
	I0731 18:10:07.771599   73800 pod_ready.go:102] pod "metrics-server-569cc877fc-fzxrw" in "kube-system" namespace has status "Ready":"False"
	I0731 18:10:05.375212   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:10:05.388635   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 18:10:05.388703   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 18:10:05.427583   74203 cri.go:89] found id: ""
	I0731 18:10:05.427610   74203 logs.go:276] 0 containers: []
	W0731 18:10:05.427617   74203 logs.go:278] No container was found matching "kube-apiserver"
	I0731 18:10:05.427622   74203 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 18:10:05.427673   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 18:10:05.462550   74203 cri.go:89] found id: ""
	I0731 18:10:05.462575   74203 logs.go:276] 0 containers: []
	W0731 18:10:05.462583   74203 logs.go:278] No container was found matching "etcd"
	I0731 18:10:05.462589   74203 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 18:10:05.462645   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 18:10:05.501768   74203 cri.go:89] found id: ""
	I0731 18:10:05.501790   74203 logs.go:276] 0 containers: []
	W0731 18:10:05.501797   74203 logs.go:278] No container was found matching "coredns"
	I0731 18:10:05.501802   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 18:10:05.501860   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 18:10:05.539692   74203 cri.go:89] found id: ""
	I0731 18:10:05.539719   74203 logs.go:276] 0 containers: []
	W0731 18:10:05.539731   74203 logs.go:278] No container was found matching "kube-scheduler"
	I0731 18:10:05.539737   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 18:10:05.539798   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 18:10:05.573844   74203 cri.go:89] found id: ""
	I0731 18:10:05.573872   74203 logs.go:276] 0 containers: []
	W0731 18:10:05.573884   74203 logs.go:278] No container was found matching "kube-proxy"
	I0731 18:10:05.573891   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 18:10:05.573953   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 18:10:05.607827   74203 cri.go:89] found id: ""
	I0731 18:10:05.607848   74203 logs.go:276] 0 containers: []
	W0731 18:10:05.607858   74203 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 18:10:05.607863   74203 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 18:10:05.607913   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 18:10:05.639644   74203 cri.go:89] found id: ""
	I0731 18:10:05.639673   74203 logs.go:276] 0 containers: []
	W0731 18:10:05.639684   74203 logs.go:278] No container was found matching "kindnet"
	I0731 18:10:05.639691   74203 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 18:10:05.639753   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 18:10:05.673164   74203 cri.go:89] found id: ""
	I0731 18:10:05.673188   74203 logs.go:276] 0 containers: []
	W0731 18:10:05.673195   74203 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 18:10:05.673203   74203 logs.go:123] Gathering logs for CRI-O ...
	I0731 18:10:05.673215   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 18:10:05.755189   74203 logs.go:123] Gathering logs for container status ...
	I0731 18:10:05.755221   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 18:10:05.793686   74203 logs.go:123] Gathering logs for kubelet ...
	I0731 18:10:05.793715   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 18:10:05.844930   74203 logs.go:123] Gathering logs for dmesg ...
	I0731 18:10:05.844965   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 18:10:05.859150   74203 logs.go:123] Gathering logs for describe nodes ...
	I0731 18:10:05.859176   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 18:10:05.929945   74203 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 18:10:08.430669   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:10:08.444918   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 18:10:08.444989   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 18:10:08.482598   74203 cri.go:89] found id: ""
	I0731 18:10:08.482625   74203 logs.go:276] 0 containers: []
	W0731 18:10:08.482635   74203 logs.go:278] No container was found matching "kube-apiserver"
	I0731 18:10:08.482642   74203 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 18:10:08.482708   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 18:10:08.519687   74203 cri.go:89] found id: ""
	I0731 18:10:08.519717   74203 logs.go:276] 0 containers: []
	W0731 18:10:08.519726   74203 logs.go:278] No container was found matching "etcd"
	I0731 18:10:08.519734   74203 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 18:10:08.519795   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 18:10:08.551600   74203 cri.go:89] found id: ""
	I0731 18:10:08.551638   74203 logs.go:276] 0 containers: []
	W0731 18:10:08.551649   74203 logs.go:278] No container was found matching "coredns"
	I0731 18:10:08.551657   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 18:10:08.551713   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 18:10:08.585233   74203 cri.go:89] found id: ""
	I0731 18:10:08.585263   74203 logs.go:276] 0 containers: []
	W0731 18:10:08.585274   74203 logs.go:278] No container was found matching "kube-scheduler"
	I0731 18:10:08.585282   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 18:10:08.585343   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 18:10:08.622464   74203 cri.go:89] found id: ""
	I0731 18:10:08.622492   74203 logs.go:276] 0 containers: []
	W0731 18:10:08.622502   74203 logs.go:278] No container was found matching "kube-proxy"
	I0731 18:10:08.622510   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 18:10:08.622569   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 18:10:08.658360   74203 cri.go:89] found id: ""
	I0731 18:10:08.658390   74203 logs.go:276] 0 containers: []
	W0731 18:10:08.658402   74203 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 18:10:08.658410   74203 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 18:10:08.658471   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 18:10:08.692076   74203 cri.go:89] found id: ""
	I0731 18:10:08.692100   74203 logs.go:276] 0 containers: []
	W0731 18:10:08.692109   74203 logs.go:278] No container was found matching "kindnet"
	I0731 18:10:08.692116   74203 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 18:10:08.692179   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 18:10:08.729584   74203 cri.go:89] found id: ""
	I0731 18:10:08.729612   74203 logs.go:276] 0 containers: []
	W0731 18:10:08.729622   74203 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 18:10:08.729633   74203 logs.go:123] Gathering logs for describe nodes ...
	I0731 18:10:08.729647   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 18:10:08.806395   74203 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 18:10:08.806457   74203 logs.go:123] Gathering logs for CRI-O ...
	I0731 18:10:08.806485   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 18:10:08.884008   74203 logs.go:123] Gathering logs for container status ...
	I0731 18:10:08.884046   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 18:10:08.924359   74203 logs.go:123] Gathering logs for kubelet ...
	I0731 18:10:08.924398   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 18:10:08.978161   74203 logs.go:123] Gathering logs for dmesg ...
	I0731 18:10:08.978195   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 18:10:07.279327   73696 pod_ready.go:102] pod "metrics-server-569cc877fc-64hp4" in "kube-system" namespace has status "Ready":"False"
	I0731 18:10:09.281214   73696 pod_ready.go:102] pod "metrics-server-569cc877fc-64hp4" in "kube-system" namespace has status "Ready":"False"
	I0731 18:10:07.143145   73479 pod_ready.go:102] pod "metrics-server-78fcd8795b-27pkr" in "kube-system" namespace has status "Ready":"False"
	I0731 18:10:09.143995   73479 pod_ready.go:102] pod "metrics-server-78fcd8795b-27pkr" in "kube-system" namespace has status "Ready":"False"
	I0731 18:10:11.643254   73479 pod_ready.go:102] pod "metrics-server-78fcd8795b-27pkr" in "kube-system" namespace has status "Ready":"False"
	I0731 18:10:09.773024   73800 pod_ready.go:102] pod "metrics-server-569cc877fc-fzxrw" in "kube-system" namespace has status "Ready":"False"
	I0731 18:10:12.272862   73800 pod_ready.go:102] pod "metrics-server-569cc877fc-fzxrw" in "kube-system" namespace has status "Ready":"False"
	I0731 18:10:14.273615   73800 pod_ready.go:102] pod "metrics-server-569cc877fc-fzxrw" in "kube-system" namespace has status "Ready":"False"
	I0731 18:10:11.491784   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:10:11.504711   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 18:10:11.504784   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 18:10:11.541314   74203 cri.go:89] found id: ""
	I0731 18:10:11.541353   74203 logs.go:276] 0 containers: []
	W0731 18:10:11.541361   74203 logs.go:278] No container was found matching "kube-apiserver"
	I0731 18:10:11.541366   74203 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 18:10:11.541424   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 18:10:11.576481   74203 cri.go:89] found id: ""
	I0731 18:10:11.576509   74203 logs.go:276] 0 containers: []
	W0731 18:10:11.576527   74203 logs.go:278] No container was found matching "etcd"
	I0731 18:10:11.576535   74203 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 18:10:11.576597   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 18:10:11.610370   74203 cri.go:89] found id: ""
	I0731 18:10:11.610395   74203 logs.go:276] 0 containers: []
	W0731 18:10:11.610404   74203 logs.go:278] No container was found matching "coredns"
	I0731 18:10:11.610412   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 18:10:11.610470   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 18:10:11.645559   74203 cri.go:89] found id: ""
	I0731 18:10:11.645586   74203 logs.go:276] 0 containers: []
	W0731 18:10:11.645593   74203 logs.go:278] No container was found matching "kube-scheduler"
	I0731 18:10:11.645598   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 18:10:11.645654   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 18:10:11.677576   74203 cri.go:89] found id: ""
	I0731 18:10:11.677613   74203 logs.go:276] 0 containers: []
	W0731 18:10:11.677624   74203 logs.go:278] No container was found matching "kube-proxy"
	I0731 18:10:11.677631   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 18:10:11.677681   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 18:10:11.710173   74203 cri.go:89] found id: ""
	I0731 18:10:11.710199   74203 logs.go:276] 0 containers: []
	W0731 18:10:11.710208   74203 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 18:10:11.710215   74203 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 18:10:11.710273   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 18:10:11.743722   74203 cri.go:89] found id: ""
	I0731 18:10:11.743752   74203 logs.go:276] 0 containers: []
	W0731 18:10:11.743763   74203 logs.go:278] No container was found matching "kindnet"
	I0731 18:10:11.743782   74203 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 18:10:11.743857   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 18:10:11.776730   74203 cri.go:89] found id: ""
	I0731 18:10:11.776752   74203 logs.go:276] 0 containers: []
	W0731 18:10:11.776759   74203 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 18:10:11.776766   74203 logs.go:123] Gathering logs for describe nodes ...
	I0731 18:10:11.776777   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 18:10:11.846385   74203 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 18:10:11.846404   74203 logs.go:123] Gathering logs for CRI-O ...
	I0731 18:10:11.846415   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 18:10:11.923748   74203 logs.go:123] Gathering logs for container status ...
	I0731 18:10:11.923779   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 18:10:11.959700   74203 logs.go:123] Gathering logs for kubelet ...
	I0731 18:10:11.959734   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 18:10:12.009971   74203 logs.go:123] Gathering logs for dmesg ...
	I0731 18:10:12.010002   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 18:10:14.524097   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:10:14.537349   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 18:10:14.537449   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 18:10:14.569907   74203 cri.go:89] found id: ""
	I0731 18:10:14.569934   74203 logs.go:276] 0 containers: []
	W0731 18:10:14.569941   74203 logs.go:278] No container was found matching "kube-apiserver"
	I0731 18:10:14.569947   74203 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 18:10:14.569999   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 18:10:14.605058   74203 cri.go:89] found id: ""
	I0731 18:10:14.605085   74203 logs.go:276] 0 containers: []
	W0731 18:10:14.605095   74203 logs.go:278] No container was found matching "etcd"
	I0731 18:10:14.605102   74203 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 18:10:14.605155   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 18:10:14.640941   74203 cri.go:89] found id: ""
	I0731 18:10:14.640964   74203 logs.go:276] 0 containers: []
	W0731 18:10:14.640975   74203 logs.go:278] No container was found matching "coredns"
	I0731 18:10:14.640982   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 18:10:14.641039   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 18:10:14.678774   74203 cri.go:89] found id: ""
	I0731 18:10:14.678803   74203 logs.go:276] 0 containers: []
	W0731 18:10:14.678814   74203 logs.go:278] No container was found matching "kube-scheduler"
	I0731 18:10:14.678822   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 18:10:14.678880   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 18:10:14.714123   74203 cri.go:89] found id: ""
	I0731 18:10:14.714152   74203 logs.go:276] 0 containers: []
	W0731 18:10:14.714163   74203 logs.go:278] No container was found matching "kube-proxy"
	I0731 18:10:14.714171   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 18:10:14.714230   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 18:10:14.750212   74203 cri.go:89] found id: ""
	I0731 18:10:14.750243   74203 logs.go:276] 0 containers: []
	W0731 18:10:14.750255   74203 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 18:10:14.750262   74203 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 18:10:14.750322   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 18:10:14.786820   74203 cri.go:89] found id: ""
	I0731 18:10:14.786842   74203 logs.go:276] 0 containers: []
	W0731 18:10:14.786850   74203 logs.go:278] No container was found matching "kindnet"
	I0731 18:10:14.786856   74203 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 18:10:14.786904   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 18:10:14.819667   74203 cri.go:89] found id: ""
	I0731 18:10:14.819689   74203 logs.go:276] 0 containers: []
	W0731 18:10:14.819697   74203 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 18:10:14.819705   74203 logs.go:123] Gathering logs for dmesg ...
	I0731 18:10:14.819725   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 18:10:14.832525   74203 logs.go:123] Gathering logs for describe nodes ...
	I0731 18:10:14.832550   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 18:10:14.901190   74203 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 18:10:14.901216   74203 logs.go:123] Gathering logs for CRI-O ...
	I0731 18:10:14.901229   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 18:10:14.977123   74203 logs.go:123] Gathering logs for container status ...
	I0731 18:10:14.977158   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 18:10:15.014882   74203 logs.go:123] Gathering logs for kubelet ...
	I0731 18:10:15.014912   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 18:10:11.779007   73696 pod_ready.go:102] pod "metrics-server-569cc877fc-64hp4" in "kube-system" namespace has status "Ready":"False"
	I0731 18:10:14.279638   73696 pod_ready.go:102] pod "metrics-server-569cc877fc-64hp4" in "kube-system" namespace has status "Ready":"False"
	I0731 18:10:14.142303   73479 pod_ready.go:102] pod "metrics-server-78fcd8795b-27pkr" in "kube-system" namespace has status "Ready":"False"
	I0731 18:10:16.143713   73479 pod_ready.go:102] pod "metrics-server-78fcd8795b-27pkr" in "kube-system" namespace has status "Ready":"False"
	I0731 18:10:16.770910   73800 pod_ready.go:102] pod "metrics-server-569cc877fc-fzxrw" in "kube-system" namespace has status "Ready":"False"
	I0731 18:10:18.771058   73800 pod_ready.go:102] pod "metrics-server-569cc877fc-fzxrw" in "kube-system" namespace has status "Ready":"False"
	I0731 18:10:17.564989   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:10:17.578676   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 18:10:17.578740   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 18:10:17.610077   74203 cri.go:89] found id: ""
	I0731 18:10:17.610103   74203 logs.go:276] 0 containers: []
	W0731 18:10:17.610112   74203 logs.go:278] No container was found matching "kube-apiserver"
	I0731 18:10:17.610117   74203 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 18:10:17.610169   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 18:10:17.643143   74203 cri.go:89] found id: ""
	I0731 18:10:17.643166   74203 logs.go:276] 0 containers: []
	W0731 18:10:17.643173   74203 logs.go:278] No container was found matching "etcd"
	I0731 18:10:17.643179   74203 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 18:10:17.643225   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 18:10:17.677979   74203 cri.go:89] found id: ""
	I0731 18:10:17.678002   74203 logs.go:276] 0 containers: []
	W0731 18:10:17.678010   74203 logs.go:278] No container was found matching "coredns"
	I0731 18:10:17.678016   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 18:10:17.678086   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 18:10:17.711905   74203 cri.go:89] found id: ""
	I0731 18:10:17.711941   74203 logs.go:276] 0 containers: []
	W0731 18:10:17.711953   74203 logs.go:278] No container was found matching "kube-scheduler"
	I0731 18:10:17.711960   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 18:10:17.712013   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 18:10:17.745842   74203 cri.go:89] found id: ""
	I0731 18:10:17.745870   74203 logs.go:276] 0 containers: []
	W0731 18:10:17.745881   74203 logs.go:278] No container was found matching "kube-proxy"
	I0731 18:10:17.745889   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 18:10:17.745949   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 18:10:17.778170   74203 cri.go:89] found id: ""
	I0731 18:10:17.778242   74203 logs.go:276] 0 containers: []
	W0731 18:10:17.778260   74203 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 18:10:17.778272   74203 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 18:10:17.778340   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 18:10:17.810717   74203 cri.go:89] found id: ""
	I0731 18:10:17.810744   74203 logs.go:276] 0 containers: []
	W0731 18:10:17.810755   74203 logs.go:278] No container was found matching "kindnet"
	I0731 18:10:17.810762   74203 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 18:10:17.810823   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 18:10:17.843237   74203 cri.go:89] found id: ""
	I0731 18:10:17.843268   74203 logs.go:276] 0 containers: []
	W0731 18:10:17.843278   74203 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 18:10:17.843288   74203 logs.go:123] Gathering logs for kubelet ...
	I0731 18:10:17.843303   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 18:10:17.894338   74203 logs.go:123] Gathering logs for dmesg ...
	I0731 18:10:17.894376   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 18:10:17.907898   74203 logs.go:123] Gathering logs for describe nodes ...
	I0731 18:10:17.907927   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 18:10:17.977115   74203 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 18:10:17.977133   74203 logs.go:123] Gathering logs for CRI-O ...
	I0731 18:10:17.977145   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 18:10:18.059924   74203 logs.go:123] Gathering logs for container status ...
	I0731 18:10:18.059968   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 18:10:16.279697   73696 pod_ready.go:102] pod "metrics-server-569cc877fc-64hp4" in "kube-system" namespace has status "Ready":"False"
	I0731 18:10:18.780698   73696 pod_ready.go:102] pod "metrics-server-569cc877fc-64hp4" in "kube-system" namespace has status "Ready":"False"
	I0731 18:10:18.144063   73479 pod_ready.go:102] pod "metrics-server-78fcd8795b-27pkr" in "kube-system" namespace has status "Ready":"False"
	I0731 18:10:20.643891   73479 pod_ready.go:102] pod "metrics-server-78fcd8795b-27pkr" in "kube-system" namespace has status "Ready":"False"
	I0731 18:10:20.772956   73800 pod_ready.go:102] pod "metrics-server-569cc877fc-fzxrw" in "kube-system" namespace has status "Ready":"False"
	I0731 18:10:23.270974   73800 pod_ready.go:102] pod "metrics-server-569cc877fc-fzxrw" in "kube-system" namespace has status "Ready":"False"
	I0731 18:10:20.600903   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:10:20.613609   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 18:10:20.613680   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 18:10:20.646352   74203 cri.go:89] found id: ""
	I0731 18:10:20.646379   74203 logs.go:276] 0 containers: []
	W0731 18:10:20.646388   74203 logs.go:278] No container was found matching "kube-apiserver"
	I0731 18:10:20.646395   74203 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 18:10:20.646453   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 18:10:20.680448   74203 cri.go:89] found id: ""
	I0731 18:10:20.680475   74203 logs.go:276] 0 containers: []
	W0731 18:10:20.680486   74203 logs.go:278] No container was found matching "etcd"
	I0731 18:10:20.680493   74203 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 18:10:20.680555   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 18:10:20.716330   74203 cri.go:89] found id: ""
	I0731 18:10:20.716365   74203 logs.go:276] 0 containers: []
	W0731 18:10:20.716378   74203 logs.go:278] No container was found matching "coredns"
	I0731 18:10:20.716387   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 18:10:20.716448   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 18:10:20.748630   74203 cri.go:89] found id: ""
	I0731 18:10:20.748657   74203 logs.go:276] 0 containers: []
	W0731 18:10:20.748665   74203 logs.go:278] No container was found matching "kube-scheduler"
	I0731 18:10:20.748670   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 18:10:20.748736   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 18:10:20.787769   74203 cri.go:89] found id: ""
	I0731 18:10:20.787793   74203 logs.go:276] 0 containers: []
	W0731 18:10:20.787802   74203 logs.go:278] No container was found matching "kube-proxy"
	I0731 18:10:20.787809   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 18:10:20.787869   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 18:10:20.819884   74203 cri.go:89] found id: ""
	I0731 18:10:20.819911   74203 logs.go:276] 0 containers: []
	W0731 18:10:20.819921   74203 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 18:10:20.819929   74203 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 18:10:20.819988   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 18:10:20.853414   74203 cri.go:89] found id: ""
	I0731 18:10:20.853437   74203 logs.go:276] 0 containers: []
	W0731 18:10:20.853445   74203 logs.go:278] No container was found matching "kindnet"
	I0731 18:10:20.853450   74203 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 18:10:20.853508   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 18:10:20.889198   74203 cri.go:89] found id: ""
	I0731 18:10:20.889224   74203 logs.go:276] 0 containers: []
	W0731 18:10:20.889231   74203 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 18:10:20.889239   74203 logs.go:123] Gathering logs for dmesg ...
	I0731 18:10:20.889251   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 18:10:20.903240   74203 logs.go:123] Gathering logs for describe nodes ...
	I0731 18:10:20.903268   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 18:10:20.971003   74203 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 18:10:20.971032   74203 logs.go:123] Gathering logs for CRI-O ...
	I0731 18:10:20.971051   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 18:10:21.045856   74203 logs.go:123] Gathering logs for container status ...
	I0731 18:10:21.045888   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 18:10:21.086089   74203 logs.go:123] Gathering logs for kubelet ...
	I0731 18:10:21.086121   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 18:10:23.639664   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:10:23.652573   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 18:10:23.652632   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 18:10:23.684719   74203 cri.go:89] found id: ""
	I0731 18:10:23.684746   74203 logs.go:276] 0 containers: []
	W0731 18:10:23.684757   74203 logs.go:278] No container was found matching "kube-apiserver"
	I0731 18:10:23.684765   74203 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 18:10:23.684820   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 18:10:23.717315   74203 cri.go:89] found id: ""
	I0731 18:10:23.717350   74203 logs.go:276] 0 containers: []
	W0731 18:10:23.717362   74203 logs.go:278] No container was found matching "etcd"
	I0731 18:10:23.717369   74203 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 18:10:23.717432   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 18:10:23.750251   74203 cri.go:89] found id: ""
	I0731 18:10:23.750275   74203 logs.go:276] 0 containers: []
	W0731 18:10:23.750286   74203 logs.go:278] No container was found matching "coredns"
	I0731 18:10:23.750293   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 18:10:23.750397   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 18:10:23.785700   74203 cri.go:89] found id: ""
	I0731 18:10:23.785726   74203 logs.go:276] 0 containers: []
	W0731 18:10:23.785737   74203 logs.go:278] No container was found matching "kube-scheduler"
	I0731 18:10:23.785745   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 18:10:23.785792   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 18:10:23.816856   74203 cri.go:89] found id: ""
	I0731 18:10:23.816885   74203 logs.go:276] 0 containers: []
	W0731 18:10:23.816895   74203 logs.go:278] No container was found matching "kube-proxy"
	I0731 18:10:23.816902   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 18:10:23.816965   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 18:10:23.849931   74203 cri.go:89] found id: ""
	I0731 18:10:23.849962   74203 logs.go:276] 0 containers: []
	W0731 18:10:23.849972   74203 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 18:10:23.849980   74203 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 18:10:23.850043   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 18:10:23.881413   74203 cri.go:89] found id: ""
	I0731 18:10:23.881444   74203 logs.go:276] 0 containers: []
	W0731 18:10:23.881452   74203 logs.go:278] No container was found matching "kindnet"
	I0731 18:10:23.881458   74203 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 18:10:23.881516   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 18:10:23.914272   74203 cri.go:89] found id: ""
	I0731 18:10:23.914303   74203 logs.go:276] 0 containers: []
	W0731 18:10:23.914313   74203 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 18:10:23.914325   74203 logs.go:123] Gathering logs for describe nodes ...
	I0731 18:10:23.914352   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 18:10:23.979988   74203 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 18:10:23.980015   74203 logs.go:123] Gathering logs for CRI-O ...
	I0731 18:10:23.980027   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 18:10:24.057159   74203 logs.go:123] Gathering logs for container status ...
	I0731 18:10:24.057198   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 18:10:24.097567   74203 logs.go:123] Gathering logs for kubelet ...
	I0731 18:10:24.097603   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 18:10:24.154740   74203 logs.go:123] Gathering logs for dmesg ...
	I0731 18:10:24.154781   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 18:10:21.279091   73696 pod_ready.go:102] pod "metrics-server-569cc877fc-64hp4" in "kube-system" namespace has status "Ready":"False"
	I0731 18:10:23.779103   73696 pod_ready.go:102] pod "metrics-server-569cc877fc-64hp4" in "kube-system" namespace has status "Ready":"False"
	I0731 18:10:25.779754   73696 pod_ready.go:102] pod "metrics-server-569cc877fc-64hp4" in "kube-system" namespace has status "Ready":"False"
	I0731 18:10:23.142423   73479 pod_ready.go:102] pod "metrics-server-78fcd8795b-27pkr" in "kube-system" namespace has status "Ready":"False"
	I0731 18:10:25.642901   73479 pod_ready.go:102] pod "metrics-server-78fcd8795b-27pkr" in "kube-system" namespace has status "Ready":"False"
	I0731 18:10:25.272277   73800 pod_ready.go:102] pod "metrics-server-569cc877fc-fzxrw" in "kube-system" namespace has status "Ready":"False"
	I0731 18:10:27.771221   73800 pod_ready.go:102] pod "metrics-server-569cc877fc-fzxrw" in "kube-system" namespace has status "Ready":"False"
	I0731 18:10:26.670324   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:10:26.683866   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 18:10:26.683951   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 18:10:26.717671   74203 cri.go:89] found id: ""
	I0731 18:10:26.717722   74203 logs.go:276] 0 containers: []
	W0731 18:10:26.717733   74203 logs.go:278] No container was found matching "kube-apiserver"
	I0731 18:10:26.717739   74203 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 18:10:26.717790   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 18:10:26.751201   74203 cri.go:89] found id: ""
	I0731 18:10:26.751228   74203 logs.go:276] 0 containers: []
	W0731 18:10:26.751236   74203 logs.go:278] No container was found matching "etcd"
	I0731 18:10:26.751246   74203 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 18:10:26.751315   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 18:10:26.784768   74203 cri.go:89] found id: ""
	I0731 18:10:26.784793   74203 logs.go:276] 0 containers: []
	W0731 18:10:26.784803   74203 logs.go:278] No container was found matching "coredns"
	I0731 18:10:26.784811   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 18:10:26.784868   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 18:10:26.822269   74203 cri.go:89] found id: ""
	I0731 18:10:26.822298   74203 logs.go:276] 0 containers: []
	W0731 18:10:26.822307   74203 logs.go:278] No container was found matching "kube-scheduler"
	I0731 18:10:26.822315   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 18:10:26.822378   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 18:10:26.854405   74203 cri.go:89] found id: ""
	I0731 18:10:26.854427   74203 logs.go:276] 0 containers: []
	W0731 18:10:26.854434   74203 logs.go:278] No container was found matching "kube-proxy"
	I0731 18:10:26.854441   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 18:10:26.854490   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 18:10:26.888975   74203 cri.go:89] found id: ""
	I0731 18:10:26.889000   74203 logs.go:276] 0 containers: []
	W0731 18:10:26.889007   74203 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 18:10:26.889013   74203 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 18:10:26.889085   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 18:10:26.922940   74203 cri.go:89] found id: ""
	I0731 18:10:26.922967   74203 logs.go:276] 0 containers: []
	W0731 18:10:26.922976   74203 logs.go:278] No container was found matching "kindnet"
	I0731 18:10:26.922981   74203 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 18:10:26.923040   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 18:10:26.955717   74203 cri.go:89] found id: ""
	I0731 18:10:26.955743   74203 logs.go:276] 0 containers: []
	W0731 18:10:26.955754   74203 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 18:10:26.955764   74203 logs.go:123] Gathering logs for kubelet ...
	I0731 18:10:26.955779   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 18:10:27.006453   74203 logs.go:123] Gathering logs for dmesg ...
	I0731 18:10:27.006481   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 18:10:27.019136   74203 logs.go:123] Gathering logs for describe nodes ...
	I0731 18:10:27.019159   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 18:10:27.086988   74203 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 18:10:27.087014   74203 logs.go:123] Gathering logs for CRI-O ...
	I0731 18:10:27.087031   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 18:10:27.161574   74203 logs.go:123] Gathering logs for container status ...
	I0731 18:10:27.161604   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 18:10:29.705620   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:10:29.718718   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 18:10:29.718775   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 18:10:29.751079   74203 cri.go:89] found id: ""
	I0731 18:10:29.751123   74203 logs.go:276] 0 containers: []
	W0731 18:10:29.751134   74203 logs.go:278] No container was found matching "kube-apiserver"
	I0731 18:10:29.751142   74203 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 18:10:29.751198   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 18:10:29.790944   74203 cri.go:89] found id: ""
	I0731 18:10:29.790971   74203 logs.go:276] 0 containers: []
	W0731 18:10:29.790982   74203 logs.go:278] No container was found matching "etcd"
	I0731 18:10:29.790988   74203 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 18:10:29.791041   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 18:10:29.827921   74203 cri.go:89] found id: ""
	I0731 18:10:29.827951   74203 logs.go:276] 0 containers: []
	W0731 18:10:29.827965   74203 logs.go:278] No container was found matching "coredns"
	I0731 18:10:29.827971   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 18:10:29.828031   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 18:10:29.861365   74203 cri.go:89] found id: ""
	I0731 18:10:29.861398   74203 logs.go:276] 0 containers: []
	W0731 18:10:29.861409   74203 logs.go:278] No container was found matching "kube-scheduler"
	I0731 18:10:29.861417   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 18:10:29.861472   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 18:10:29.894509   74203 cri.go:89] found id: ""
	I0731 18:10:29.894537   74203 logs.go:276] 0 containers: []
	W0731 18:10:29.894546   74203 logs.go:278] No container was found matching "kube-proxy"
	I0731 18:10:29.894552   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 18:10:29.894614   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 18:10:29.926793   74203 cri.go:89] found id: ""
	I0731 18:10:29.926821   74203 logs.go:276] 0 containers: []
	W0731 18:10:29.926832   74203 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 18:10:29.926839   74203 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 18:10:29.926904   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 18:10:29.963765   74203 cri.go:89] found id: ""
	I0731 18:10:29.963792   74203 logs.go:276] 0 containers: []
	W0731 18:10:29.963802   74203 logs.go:278] No container was found matching "kindnet"
	I0731 18:10:29.963809   74203 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 18:10:29.963870   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 18:10:29.998577   74203 cri.go:89] found id: ""
	I0731 18:10:29.998604   74203 logs.go:276] 0 containers: []
	W0731 18:10:29.998611   74203 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 18:10:29.998619   74203 logs.go:123] Gathering logs for kubelet ...
	I0731 18:10:29.998630   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 18:10:30.050035   74203 logs.go:123] Gathering logs for dmesg ...
	I0731 18:10:30.050072   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 18:10:30.064147   74203 logs.go:123] Gathering logs for describe nodes ...
	I0731 18:10:30.064178   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 18:10:30.136990   74203 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 18:10:30.137012   74203 logs.go:123] Gathering logs for CRI-O ...
	I0731 18:10:30.137030   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 18:10:30.214687   74203 logs.go:123] Gathering logs for container status ...
	I0731 18:10:30.214719   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 18:10:28.279257   73696 pod_ready.go:102] pod "metrics-server-569cc877fc-64hp4" in "kube-system" namespace has status "Ready":"False"
	I0731 18:10:30.778466   73696 pod_ready.go:102] pod "metrics-server-569cc877fc-64hp4" in "kube-system" namespace has status "Ready":"False"
	I0731 18:10:27.644082   73479 pod_ready.go:102] pod "metrics-server-78fcd8795b-27pkr" in "kube-system" namespace has status "Ready":"False"
	I0731 18:10:30.144191   73479 pod_ready.go:102] pod "metrics-server-78fcd8795b-27pkr" in "kube-system" namespace has status "Ready":"False"
	I0731 18:10:29.772316   73800 pod_ready.go:102] pod "metrics-server-569cc877fc-fzxrw" in "kube-system" namespace has status "Ready":"False"
	I0731 18:10:32.272651   73800 pod_ready.go:102] pod "metrics-server-569cc877fc-fzxrw" in "kube-system" namespace has status "Ready":"False"
	I0731 18:10:32.753503   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:10:32.766795   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 18:10:32.766873   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 18:10:32.812134   74203 cri.go:89] found id: ""
	I0731 18:10:32.812161   74203 logs.go:276] 0 containers: []
	W0731 18:10:32.812169   74203 logs.go:278] No container was found matching "kube-apiserver"
	I0731 18:10:32.812175   74203 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 18:10:32.812229   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 18:10:32.846997   74203 cri.go:89] found id: ""
	I0731 18:10:32.847029   74203 logs.go:276] 0 containers: []
	W0731 18:10:32.847039   74203 logs.go:278] No container was found matching "etcd"
	I0731 18:10:32.847044   74203 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 18:10:32.847092   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 18:10:32.884093   74203 cri.go:89] found id: ""
	I0731 18:10:32.884123   74203 logs.go:276] 0 containers: []
	W0731 18:10:32.884132   74203 logs.go:278] No container was found matching "coredns"
	I0731 18:10:32.884138   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 18:10:32.884188   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 18:10:32.920160   74203 cri.go:89] found id: ""
	I0731 18:10:32.920186   74203 logs.go:276] 0 containers: []
	W0731 18:10:32.920197   74203 logs.go:278] No container was found matching "kube-scheduler"
	I0731 18:10:32.920204   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 18:10:32.920263   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 18:10:32.952750   74203 cri.go:89] found id: ""
	I0731 18:10:32.952777   74203 logs.go:276] 0 containers: []
	W0731 18:10:32.952788   74203 logs.go:278] No container was found matching "kube-proxy"
	I0731 18:10:32.952795   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 18:10:32.952865   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 18:10:32.989086   74203 cri.go:89] found id: ""
	I0731 18:10:32.989115   74203 logs.go:276] 0 containers: []
	W0731 18:10:32.989125   74203 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 18:10:32.989135   74203 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 18:10:32.989189   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 18:10:33.021554   74203 cri.go:89] found id: ""
	I0731 18:10:33.021590   74203 logs.go:276] 0 containers: []
	W0731 18:10:33.021602   74203 logs.go:278] No container was found matching "kindnet"
	I0731 18:10:33.021609   74203 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 18:10:33.021662   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 18:10:33.061097   74203 cri.go:89] found id: ""
	I0731 18:10:33.061128   74203 logs.go:276] 0 containers: []
	W0731 18:10:33.061141   74203 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 18:10:33.061160   74203 logs.go:123] Gathering logs for kubelet ...
	I0731 18:10:33.061174   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 18:10:33.113497   74203 logs.go:123] Gathering logs for dmesg ...
	I0731 18:10:33.113534   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 18:10:33.126816   74203 logs.go:123] Gathering logs for describe nodes ...
	I0731 18:10:33.126842   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 18:10:33.196713   74203 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 18:10:33.196733   74203 logs.go:123] Gathering logs for CRI-O ...
	I0731 18:10:33.196744   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 18:10:33.277697   74203 logs.go:123] Gathering logs for container status ...
	I0731 18:10:33.277724   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
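The cycle that ends here is minikube's log collector probing every control-plane component by name with "crictl ps -a --quiet --name=<component>" and finding nothing (the repeated found id: "" entries). A minimal Go sketch of that probe pattern, assuming local execution with crictl and sudo on PATH rather than the ssh_runner the log uses:

```go
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// listCRIContainers mirrors the `sudo crictl ps -a --quiet --name=<name>`
// probe in the log: it returns matching container IDs, or an empty slice
// when nothing matches (the `found id: ""` case above).
func listCRIContainers(name string) ([]string, error) {
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
	if err != nil {
		return nil, err
	}
	return strings.Fields(string(out)), nil
}

func main() {
	components := []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler",
		"kube-proxy", "kube-controller-manager", "kindnet", "kubernetes-dashboard"}
	for _, c := range components {
		ids, err := listCRIContainers(c)
		if err != nil {
			fmt.Printf("probe %s: %v\n", c, err)
			continue
		}
		fmt.Printf("%s: %d container(s) %v\n", c, len(ids), ids)
	}
}
```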
	I0731 18:10:33.279738   73696 pod_ready.go:102] pod "metrics-server-569cc877fc-64hp4" in "kube-system" namespace has status "Ready":"False"
	I0731 18:10:35.780181   73696 pod_ready.go:102] pod "metrics-server-569cc877fc-64hp4" in "kube-system" namespace has status "Ready":"False"
	I0731 18:10:32.643177   73479 pod_ready.go:102] pod "metrics-server-78fcd8795b-27pkr" in "kube-system" namespace has status "Ready":"False"
	I0731 18:10:35.143606   73479 pod_ready.go:102] pod "metrics-server-78fcd8795b-27pkr" in "kube-system" namespace has status "Ready":"False"
	I0731 18:10:34.771678   73800 pod_ready.go:102] pod "metrics-server-569cc877fc-fzxrw" in "kube-system" namespace has status "Ready":"False"
	I0731 18:10:36.772167   73800 pod_ready.go:102] pod "metrics-server-569cc877fc-fzxrw" in "kube-system" namespace has status "Ready":"False"
	I0731 18:10:39.272752   73800 pod_ready.go:102] pod "metrics-server-569cc877fc-fzxrw" in "kube-system" namespace has status "Ready":"False"
	I0731 18:10:35.817143   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:10:35.829760   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 18:10:35.829820   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 18:10:35.862974   74203 cri.go:89] found id: ""
	I0731 18:10:35.863002   74203 logs.go:276] 0 containers: []
	W0731 18:10:35.863014   74203 logs.go:278] No container was found matching "kube-apiserver"
	I0731 18:10:35.863022   74203 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 18:10:35.863078   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 18:10:35.898547   74203 cri.go:89] found id: ""
	I0731 18:10:35.898576   74203 logs.go:276] 0 containers: []
	W0731 18:10:35.898584   74203 logs.go:278] No container was found matching "etcd"
	I0731 18:10:35.898590   74203 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 18:10:35.898651   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 18:10:35.930351   74203 cri.go:89] found id: ""
	I0731 18:10:35.930379   74203 logs.go:276] 0 containers: []
	W0731 18:10:35.930390   74203 logs.go:278] No container was found matching "coredns"
	I0731 18:10:35.930396   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 18:10:35.930463   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 18:10:35.962623   74203 cri.go:89] found id: ""
	I0731 18:10:35.962652   74203 logs.go:276] 0 containers: []
	W0731 18:10:35.962663   74203 logs.go:278] No container was found matching "kube-scheduler"
	I0731 18:10:35.962670   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 18:10:35.962727   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 18:10:35.998213   74203 cri.go:89] found id: ""
	I0731 18:10:35.998233   74203 logs.go:276] 0 containers: []
	W0731 18:10:35.998240   74203 logs.go:278] No container was found matching "kube-proxy"
	I0731 18:10:35.998245   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 18:10:35.998291   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 18:10:36.032670   74203 cri.go:89] found id: ""
	I0731 18:10:36.032695   74203 logs.go:276] 0 containers: []
	W0731 18:10:36.032703   74203 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 18:10:36.032709   74203 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 18:10:36.032757   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 18:10:36.066349   74203 cri.go:89] found id: ""
	I0731 18:10:36.066381   74203 logs.go:276] 0 containers: []
	W0731 18:10:36.066392   74203 logs.go:278] No container was found matching "kindnet"
	I0731 18:10:36.066399   74203 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 18:10:36.066461   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 18:10:36.104137   74203 cri.go:89] found id: ""
	I0731 18:10:36.104168   74203 logs.go:276] 0 containers: []
	W0731 18:10:36.104180   74203 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 18:10:36.104200   74203 logs.go:123] Gathering logs for kubelet ...
	I0731 18:10:36.104215   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 18:10:36.155814   74203 logs.go:123] Gathering logs for dmesg ...
	I0731 18:10:36.155844   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 18:10:36.168885   74203 logs.go:123] Gathering logs for describe nodes ...
	I0731 18:10:36.168912   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 18:10:36.235950   74203 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 18:10:36.235972   74203 logs.go:123] Gathering logs for CRI-O ...
	I0731 18:10:36.235987   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 18:10:36.318382   74203 logs.go:123] Gathering logs for container status ...
	I0731 18:10:36.318414   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
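With no containers to inspect, each cycle falls back to host-level sources: journalctl for the kubelet and crio units, dmesg, a failed describe nodes, and a container-status listing. A sketch of the journalctl part of that fallback, assuming local execution on a systemd host; the unitLogs helper name is illustrative, not minikube code:

```go
package main

import (
	"fmt"
	"os/exec"
)

// unitLogs returns the last n journal lines for a systemd unit, matching the
// `journalctl -u <unit> -n 400` commands the collector runs above.
func unitLogs(unit string, n int) (string, error) {
	out, err := exec.Command("sudo", "journalctl", "-u", unit, "-n", fmt.Sprint(n)).CombinedOutput()
	return string(out), err
}

func main() {
	for _, unit := range []string{"kubelet", "crio"} {
		logs, err := unitLogs(unit, 400)
		if err != nil {
			fmt.Printf("%s: %v\n", unit, err)
			continue
		}
		fmt.Printf("=== %s: %d bytes of logs ===\n", unit, len(logs))
	}
}
```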
	I0731 18:10:38.853972   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:10:38.867018   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 18:10:38.867089   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 18:10:38.902069   74203 cri.go:89] found id: ""
	I0731 18:10:38.902097   74203 logs.go:276] 0 containers: []
	W0731 18:10:38.902109   74203 logs.go:278] No container was found matching "kube-apiserver"
	I0731 18:10:38.902115   74203 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 18:10:38.902181   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 18:10:38.935272   74203 cri.go:89] found id: ""
	I0731 18:10:38.935296   74203 logs.go:276] 0 containers: []
	W0731 18:10:38.935316   74203 logs.go:278] No container was found matching "etcd"
	I0731 18:10:38.935329   74203 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 18:10:38.935387   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 18:10:38.968582   74203 cri.go:89] found id: ""
	I0731 18:10:38.968610   74203 logs.go:276] 0 containers: []
	W0731 18:10:38.968621   74203 logs.go:278] No container was found matching "coredns"
	I0731 18:10:38.968629   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 18:10:38.968688   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 18:10:38.999740   74203 cri.go:89] found id: ""
	I0731 18:10:38.999770   74203 logs.go:276] 0 containers: []
	W0731 18:10:38.999780   74203 logs.go:278] No container was found matching "kube-scheduler"
	I0731 18:10:38.999787   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 18:10:38.999845   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 18:10:39.032964   74203 cri.go:89] found id: ""
	I0731 18:10:39.032993   74203 logs.go:276] 0 containers: []
	W0731 18:10:39.033008   74203 logs.go:278] No container was found matching "kube-proxy"
	I0731 18:10:39.033015   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 18:10:39.033099   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 18:10:39.064121   74203 cri.go:89] found id: ""
	I0731 18:10:39.064149   74203 logs.go:276] 0 containers: []
	W0731 18:10:39.064158   74203 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 18:10:39.064164   74203 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 18:10:39.064222   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 18:10:39.098462   74203 cri.go:89] found id: ""
	I0731 18:10:39.098488   74203 logs.go:276] 0 containers: []
	W0731 18:10:39.098498   74203 logs.go:278] No container was found matching "kindnet"
	I0731 18:10:39.098505   74203 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 18:10:39.098564   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 18:10:39.130627   74203 cri.go:89] found id: ""
	I0731 18:10:39.130653   74203 logs.go:276] 0 containers: []
	W0731 18:10:39.130663   74203 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 18:10:39.130674   74203 logs.go:123] Gathering logs for CRI-O ...
	I0731 18:10:39.130687   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 18:10:39.223664   74203 logs.go:123] Gathering logs for container status ...
	I0731 18:10:39.223698   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 18:10:39.260502   74203 logs.go:123] Gathering logs for kubelet ...
	I0731 18:10:39.260533   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 18:10:39.315643   74203 logs.go:123] Gathering logs for dmesg ...
	I0731 18:10:39.315675   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 18:10:39.329731   74203 logs.go:123] Gathering logs for describe nodes ...
	I0731 18:10:39.329761   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 18:10:39.395078   74203 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
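Every describe nodes attempt above fails with "The connection to the server localhost:8443 was refused", i.e. nothing is listening on the apiserver port. As a hedged illustration (the port comes from the log; this probe is not something minikube itself runs), the same condition can be confirmed with a plain TCP dial:

```go
package main

import (
	"fmt"
	"net"
	"time"
)

// probeAPIServer dials the endpoint the failing kubectl calls use;
// a "connection refused" error here corresponds to the log lines above.
func probeAPIServer(addr string) error {
	conn, err := net.DialTimeout("tcp", addr, 2*time.Second)
	if err != nil {
		return err
	}
	return conn.Close()
}

func main() {
	if err := probeAPIServer("localhost:8443"); err != nil {
		fmt.Println("apiserver not reachable:", err)
		return
	}
	fmt.Println("apiserver port is open")
}
```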
	I0731 18:10:38.278911   73696 pod_ready.go:102] pod "metrics-server-569cc877fc-64hp4" in "kube-system" namespace has status "Ready":"False"
	I0731 18:10:40.779921   73696 pod_ready.go:102] pod "metrics-server-569cc877fc-64hp4" in "kube-system" namespace has status "Ready":"False"
	I0731 18:10:37.643246   73479 pod_ready.go:102] pod "metrics-server-78fcd8795b-27pkr" in "kube-system" namespace has status "Ready":"False"
	I0731 18:10:39.643862   73479 pod_ready.go:102] pod "metrics-server-78fcd8795b-27pkr" in "kube-system" namespace has status "Ready":"False"
	I0731 18:10:41.772051   73800 pod_ready.go:102] pod "metrics-server-569cc877fc-fzxrw" in "kube-system" namespace has status "Ready":"False"
	I0731 18:10:44.271544   73800 pod_ready.go:102] pod "metrics-server-569cc877fc-fzxrw" in "kube-system" namespace has status "Ready":"False"
	I0731 18:10:41.895698   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:10:41.910111   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 18:10:41.910191   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 18:10:41.943700   74203 cri.go:89] found id: ""
	I0731 18:10:41.943732   74203 logs.go:276] 0 containers: []
	W0731 18:10:41.943743   74203 logs.go:278] No container was found matching "kube-apiserver"
	I0731 18:10:41.943751   74203 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 18:10:41.943812   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 18:10:41.976848   74203 cri.go:89] found id: ""
	I0731 18:10:41.976879   74203 logs.go:276] 0 containers: []
	W0731 18:10:41.976888   74203 logs.go:278] No container was found matching "etcd"
	I0731 18:10:41.976894   74203 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 18:10:41.976967   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 18:10:42.009424   74203 cri.go:89] found id: ""
	I0731 18:10:42.009451   74203 logs.go:276] 0 containers: []
	W0731 18:10:42.009462   74203 logs.go:278] No container was found matching "coredns"
	I0731 18:10:42.009477   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 18:10:42.009546   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 18:10:42.047233   74203 cri.go:89] found id: ""
	I0731 18:10:42.047260   74203 logs.go:276] 0 containers: []
	W0731 18:10:42.047268   74203 logs.go:278] No container was found matching "kube-scheduler"
	I0731 18:10:42.047274   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 18:10:42.047342   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 18:10:42.079900   74203 cri.go:89] found id: ""
	I0731 18:10:42.079928   74203 logs.go:276] 0 containers: []
	W0731 18:10:42.079938   74203 logs.go:278] No container was found matching "kube-proxy"
	I0731 18:10:42.079945   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 18:10:42.080025   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 18:10:42.114122   74203 cri.go:89] found id: ""
	I0731 18:10:42.114152   74203 logs.go:276] 0 containers: []
	W0731 18:10:42.114164   74203 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 18:10:42.114172   74203 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 18:10:42.114224   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 18:10:42.148741   74203 cri.go:89] found id: ""
	I0731 18:10:42.148768   74203 logs.go:276] 0 containers: []
	W0731 18:10:42.148780   74203 logs.go:278] No container was found matching "kindnet"
	I0731 18:10:42.148789   74203 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 18:10:42.148853   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 18:10:42.184739   74203 cri.go:89] found id: ""
	I0731 18:10:42.184762   74203 logs.go:276] 0 containers: []
	W0731 18:10:42.184769   74203 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 18:10:42.184777   74203 logs.go:123] Gathering logs for describe nodes ...
	I0731 18:10:42.184791   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 18:10:42.254676   74203 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 18:10:42.254694   74203 logs.go:123] Gathering logs for CRI-O ...
	I0731 18:10:42.254706   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 18:10:42.334936   74203 logs.go:123] Gathering logs for container status ...
	I0731 18:10:42.334978   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 18:10:42.371511   74203 logs.go:123] Gathering logs for kubelet ...
	I0731 18:10:42.371540   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 18:10:42.421800   74203 logs.go:123] Gathering logs for dmesg ...
	I0731 18:10:42.421831   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 18:10:44.934983   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:10:44.947212   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 18:10:44.947293   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 18:10:44.979722   74203 cri.go:89] found id: ""
	I0731 18:10:44.979748   74203 logs.go:276] 0 containers: []
	W0731 18:10:44.979760   74203 logs.go:278] No container was found matching "kube-apiserver"
	I0731 18:10:44.979767   74203 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 18:10:44.979819   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 18:10:45.011594   74203 cri.go:89] found id: ""
	I0731 18:10:45.011620   74203 logs.go:276] 0 containers: []
	W0731 18:10:45.011630   74203 logs.go:278] No container was found matching "etcd"
	I0731 18:10:45.011637   74203 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 18:10:45.011803   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 18:10:45.043174   74203 cri.go:89] found id: ""
	I0731 18:10:45.043197   74203 logs.go:276] 0 containers: []
	W0731 18:10:45.043207   74203 logs.go:278] No container was found matching "coredns"
	I0731 18:10:45.043214   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 18:10:45.043278   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 18:10:45.074629   74203 cri.go:89] found id: ""
	I0731 18:10:45.074652   74203 logs.go:276] 0 containers: []
	W0731 18:10:45.074662   74203 logs.go:278] No container was found matching "kube-scheduler"
	I0731 18:10:45.074669   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 18:10:45.074727   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 18:10:45.108917   74203 cri.go:89] found id: ""
	I0731 18:10:45.108944   74203 logs.go:276] 0 containers: []
	W0731 18:10:45.108952   74203 logs.go:278] No container was found matching "kube-proxy"
	I0731 18:10:45.108959   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 18:10:45.109018   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 18:10:45.142200   74203 cri.go:89] found id: ""
	I0731 18:10:45.142227   74203 logs.go:276] 0 containers: []
	W0731 18:10:45.142237   74203 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 18:10:45.142244   74203 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 18:10:45.142306   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 18:10:45.177076   74203 cri.go:89] found id: ""
	I0731 18:10:45.177101   74203 logs.go:276] 0 containers: []
	W0731 18:10:45.177109   74203 logs.go:278] No container was found matching "kindnet"
	I0731 18:10:45.177114   74203 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 18:10:45.177168   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 18:10:45.209352   74203 cri.go:89] found id: ""
	I0731 18:10:45.209376   74203 logs.go:276] 0 containers: []
	W0731 18:10:45.209383   74203 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 18:10:45.209392   74203 logs.go:123] Gathering logs for kubelet ...
	I0731 18:10:45.209407   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 18:10:45.257966   74203 logs.go:123] Gathering logs for dmesg ...
	I0731 18:10:45.257998   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 18:10:45.272429   74203 logs.go:123] Gathering logs for describe nodes ...
	I0731 18:10:45.272462   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 18:10:43.279626   73696 pod_ready.go:102] pod "metrics-server-569cc877fc-64hp4" in "kube-system" namespace has status "Ready":"False"
	I0731 18:10:45.778975   73696 pod_ready.go:102] pod "metrics-server-569cc877fc-64hp4" in "kube-system" namespace has status "Ready":"False"
	I0731 18:10:42.145247   73479 pod_ready.go:102] pod "metrics-server-78fcd8795b-27pkr" in "kube-system" namespace has status "Ready":"False"
	I0731 18:10:44.642278   73479 pod_ready.go:102] pod "metrics-server-78fcd8795b-27pkr" in "kube-system" namespace has status "Ready":"False"
	I0731 18:10:46.644897   73479 pod_ready.go:102] pod "metrics-server-78fcd8795b-27pkr" in "kube-system" namespace has status "Ready":"False"
	I0731 18:10:46.771785   73800 pod_ready.go:102] pod "metrics-server-569cc877fc-fzxrw" in "kube-system" namespace has status "Ready":"False"
	I0731 18:10:48.772117   73800 pod_ready.go:102] pod "metrics-server-569cc877fc-fzxrw" in "kube-system" namespace has status "Ready":"False"
	W0731 18:10:45.347952   74203 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 18:10:45.347973   74203 logs.go:123] Gathering logs for CRI-O ...
	I0731 18:10:45.347988   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 18:10:45.428556   74203 logs.go:123] Gathering logs for container status ...
	I0731 18:10:45.428609   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
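Each collection cycle opens with "sudo pgrep -xnf kube-apiserver.*minikube.*" to check whether an apiserver process exists at all before gathering logs. A small sketch of that process check, assuming local execution with sudo available; apiserverPID is an illustrative helper name:

```go
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// apiserverPID runs the same pgrep probe the log shows and returns the newest
// matching PID, or "" when no kube-apiserver process is running.
func apiserverPID() (string, error) {
	out, err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Output()
	if err != nil {
		// pgrep exits with status 1 when there is no match.
		if ee, ok := err.(*exec.ExitError); ok && ee.ExitCode() == 1 {
			return "", nil
		}
		return "", err
	}
	return strings.TrimSpace(string(out)), nil
}

func main() {
	pid, err := apiserverPID()
	if err != nil {
		fmt.Println("pgrep failed:", err)
		return
	}
	if pid == "" {
		fmt.Println("no kube-apiserver process found")
		return
	}
	fmt.Println("kube-apiserver pid:", pid)
}
```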
	I0731 18:10:47.971089   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:10:47.986677   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 18:10:47.986749   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 18:10:48.020396   74203 cri.go:89] found id: ""
	I0731 18:10:48.020426   74203 logs.go:276] 0 containers: []
	W0731 18:10:48.020438   74203 logs.go:278] No container was found matching "kube-apiserver"
	I0731 18:10:48.020446   74203 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 18:10:48.020512   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 18:10:48.058129   74203 cri.go:89] found id: ""
	I0731 18:10:48.058161   74203 logs.go:276] 0 containers: []
	W0731 18:10:48.058172   74203 logs.go:278] No container was found matching "etcd"
	I0731 18:10:48.058180   74203 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 18:10:48.058249   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 18:10:48.091894   74203 cri.go:89] found id: ""
	I0731 18:10:48.091922   74203 logs.go:276] 0 containers: []
	W0731 18:10:48.091932   74203 logs.go:278] No container was found matching "coredns"
	I0731 18:10:48.091939   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 18:10:48.091998   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 18:10:48.124757   74203 cri.go:89] found id: ""
	I0731 18:10:48.124788   74203 logs.go:276] 0 containers: []
	W0731 18:10:48.124798   74203 logs.go:278] No container was found matching "kube-scheduler"
	I0731 18:10:48.124807   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 18:10:48.124871   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 18:10:48.159145   74203 cri.go:89] found id: ""
	I0731 18:10:48.159172   74203 logs.go:276] 0 containers: []
	W0731 18:10:48.159184   74203 logs.go:278] No container was found matching "kube-proxy"
	I0731 18:10:48.159191   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 18:10:48.159253   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 18:10:48.200024   74203 cri.go:89] found id: ""
	I0731 18:10:48.200051   74203 logs.go:276] 0 containers: []
	W0731 18:10:48.200061   74203 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 18:10:48.200069   74203 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 18:10:48.200128   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 18:10:48.233838   74203 cri.go:89] found id: ""
	I0731 18:10:48.233870   74203 logs.go:276] 0 containers: []
	W0731 18:10:48.233880   74203 logs.go:278] No container was found matching "kindnet"
	I0731 18:10:48.233886   74203 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 18:10:48.233941   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 18:10:48.265786   74203 cri.go:89] found id: ""
	I0731 18:10:48.265812   74203 logs.go:276] 0 containers: []
	W0731 18:10:48.265821   74203 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 18:10:48.265832   74203 logs.go:123] Gathering logs for dmesg ...
	I0731 18:10:48.265846   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 18:10:48.280422   74203 logs.go:123] Gathering logs for describe nodes ...
	I0731 18:10:48.280449   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 18:10:48.346774   74203 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 18:10:48.346796   74203 logs.go:123] Gathering logs for CRI-O ...
	I0731 18:10:48.346808   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 18:10:48.424017   74203 logs.go:123] Gathering logs for container status ...
	I0731 18:10:48.424052   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 18:10:48.464139   74203 logs.go:123] Gathering logs for kubelet ...
	I0731 18:10:48.464166   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 18:10:47.781556   73696 pod_ready.go:102] pod "metrics-server-569cc877fc-64hp4" in "kube-system" namespace has status "Ready":"False"
	I0731 18:10:50.278635   73696 pod_ready.go:102] pod "metrics-server-569cc877fc-64hp4" in "kube-system" namespace has status "Ready":"False"
	I0731 18:10:49.143684   73479 pod_ready.go:102] pod "metrics-server-78fcd8795b-27pkr" in "kube-system" namespace has status "Ready":"False"
	I0731 18:10:51.144631   73479 pod_ready.go:102] pod "metrics-server-78fcd8795b-27pkr" in "kube-system" namespace has status "Ready":"False"
	I0731 18:10:51.272847   73800 pod_ready.go:102] pod "metrics-server-569cc877fc-fzxrw" in "kube-system" namespace has status "Ready":"False"
	I0731 18:10:53.771397   73800 pod_ready.go:102] pod "metrics-server-569cc877fc-fzxrw" in "kube-system" namespace has status "Ready":"False"
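The interleaved pod_ready.go:102 lines are other test processes polling their metrics-server pods and repeatedly seeing "Ready":"False". That check reduces to reading the pod's PodReady condition; a minimal sketch against the Kubernetes API types, assuming the k8s.io/api module is available (isPodReady is an illustrative helper, not the pod_ready.go implementation):

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

// isPodReady reports whether a pod's PodReady condition is True, the status
// the `has status "Ready":"False"` log lines above are waiting on.
func isPodReady(pod *corev1.Pod) bool {
	for _, cond := range pod.Status.Conditions {
		if cond.Type == corev1.PodReady {
			return cond.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	pod := &corev1.Pod{}
	pod.Status.Conditions = []corev1.PodCondition{
		{Type: corev1.PodReady, Status: corev1.ConditionFalse},
	}
	fmt.Println("ready:", isPodReady(pod)) // prints "ready: false"
}
```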
	I0731 18:10:51.013681   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:10:51.028745   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 18:10:51.028814   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 18:10:51.062656   74203 cri.go:89] found id: ""
	I0731 18:10:51.062683   74203 logs.go:276] 0 containers: []
	W0731 18:10:51.062691   74203 logs.go:278] No container was found matching "kube-apiserver"
	I0731 18:10:51.062700   74203 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 18:10:51.062749   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 18:10:51.099203   74203 cri.go:89] found id: ""
	I0731 18:10:51.099228   74203 logs.go:276] 0 containers: []
	W0731 18:10:51.099237   74203 logs.go:278] No container was found matching "etcd"
	I0731 18:10:51.099243   74203 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 18:10:51.099310   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 18:10:51.133507   74203 cri.go:89] found id: ""
	I0731 18:10:51.133533   74203 logs.go:276] 0 containers: []
	W0731 18:10:51.133540   74203 logs.go:278] No container was found matching "coredns"
	I0731 18:10:51.133546   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 18:10:51.133596   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 18:10:51.169935   74203 cri.go:89] found id: ""
	I0731 18:10:51.169954   74203 logs.go:276] 0 containers: []
	W0731 18:10:51.169961   74203 logs.go:278] No container was found matching "kube-scheduler"
	I0731 18:10:51.169966   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 18:10:51.170012   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 18:10:51.202877   74203 cri.go:89] found id: ""
	I0731 18:10:51.202903   74203 logs.go:276] 0 containers: []
	W0731 18:10:51.202913   74203 logs.go:278] No container was found matching "kube-proxy"
	I0731 18:10:51.202919   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 18:10:51.202988   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 18:10:51.239913   74203 cri.go:89] found id: ""
	I0731 18:10:51.239939   74203 logs.go:276] 0 containers: []
	W0731 18:10:51.239949   74203 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 18:10:51.239957   74203 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 18:10:51.240018   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 18:10:51.272024   74203 cri.go:89] found id: ""
	I0731 18:10:51.272095   74203 logs.go:276] 0 containers: []
	W0731 18:10:51.272115   74203 logs.go:278] No container was found matching "kindnet"
	I0731 18:10:51.272123   74203 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 18:10:51.272185   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 18:10:51.307016   74203 cri.go:89] found id: ""
	I0731 18:10:51.307043   74203 logs.go:276] 0 containers: []
	W0731 18:10:51.307053   74203 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 18:10:51.307063   74203 logs.go:123] Gathering logs for kubelet ...
	I0731 18:10:51.307079   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 18:10:51.364018   74203 logs.go:123] Gathering logs for dmesg ...
	I0731 18:10:51.364066   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 18:10:51.384277   74203 logs.go:123] Gathering logs for describe nodes ...
	I0731 18:10:51.384303   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 18:10:51.472657   74203 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 18:10:51.472679   74203 logs.go:123] Gathering logs for CRI-O ...
	I0731 18:10:51.472696   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 18:10:51.548408   74203 logs.go:123] Gathering logs for container status ...
	I0731 18:10:51.548439   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 18:10:54.086526   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:10:54.099293   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 18:10:54.099368   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 18:10:54.129927   74203 cri.go:89] found id: ""
	I0731 18:10:54.129954   74203 logs.go:276] 0 containers: []
	W0731 18:10:54.129965   74203 logs.go:278] No container was found matching "kube-apiserver"
	I0731 18:10:54.129972   74203 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 18:10:54.130042   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 18:10:54.166428   74203 cri.go:89] found id: ""
	I0731 18:10:54.166457   74203 logs.go:276] 0 containers: []
	W0731 18:10:54.166468   74203 logs.go:278] No container was found matching "etcd"
	I0731 18:10:54.166476   74203 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 18:10:54.166538   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 18:10:54.204523   74203 cri.go:89] found id: ""
	I0731 18:10:54.204549   74203 logs.go:276] 0 containers: []
	W0731 18:10:54.204556   74203 logs.go:278] No container was found matching "coredns"
	I0731 18:10:54.204562   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 18:10:54.204619   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 18:10:54.241706   74203 cri.go:89] found id: ""
	I0731 18:10:54.241730   74203 logs.go:276] 0 containers: []
	W0731 18:10:54.241737   74203 logs.go:278] No container was found matching "kube-scheduler"
	I0731 18:10:54.241744   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 18:10:54.241802   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 18:10:54.277154   74203 cri.go:89] found id: ""
	I0731 18:10:54.277178   74203 logs.go:276] 0 containers: []
	W0731 18:10:54.277187   74203 logs.go:278] No container was found matching "kube-proxy"
	I0731 18:10:54.277193   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 18:10:54.277255   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 18:10:54.310198   74203 cri.go:89] found id: ""
	I0731 18:10:54.310223   74203 logs.go:276] 0 containers: []
	W0731 18:10:54.310231   74203 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 18:10:54.310237   74203 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 18:10:54.310283   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 18:10:54.344807   74203 cri.go:89] found id: ""
	I0731 18:10:54.344837   74203 logs.go:276] 0 containers: []
	W0731 18:10:54.344847   74203 logs.go:278] No container was found matching "kindnet"
	I0731 18:10:54.344854   74203 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 18:10:54.344915   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 18:10:54.383358   74203 cri.go:89] found id: ""
	I0731 18:10:54.383391   74203 logs.go:276] 0 containers: []
	W0731 18:10:54.383400   74203 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 18:10:54.383410   74203 logs.go:123] Gathering logs for kubelet ...
	I0731 18:10:54.383424   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 18:10:54.431876   74203 logs.go:123] Gathering logs for dmesg ...
	I0731 18:10:54.431908   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 18:10:54.444797   74203 logs.go:123] Gathering logs for describe nodes ...
	I0731 18:10:54.444824   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 18:10:54.518816   74203 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 18:10:54.518839   74203 logs.go:123] Gathering logs for CRI-O ...
	I0731 18:10:54.518855   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 18:10:54.600072   74203 logs.go:123] Gathering logs for container status ...
	I0731 18:10:54.600109   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 18:10:52.279006   73696 pod_ready.go:102] pod "metrics-server-569cc877fc-64hp4" in "kube-system" namespace has status "Ready":"False"
	I0731 18:10:54.279520   73696 pod_ready.go:102] pod "metrics-server-569cc877fc-64hp4" in "kube-system" namespace has status "Ready":"False"
	I0731 18:10:53.643093   73479 pod_ready.go:102] pod "metrics-server-78fcd8795b-27pkr" in "kube-system" namespace has status "Ready":"False"
	I0731 18:10:56.143250   73479 pod_ready.go:102] pod "metrics-server-78fcd8795b-27pkr" in "kube-system" namespace has status "Ready":"False"
	I0731 18:10:56.272955   73800 pod_ready.go:102] pod "metrics-server-569cc877fc-fzxrw" in "kube-system" namespace has status "Ready":"False"
	I0731 18:10:58.771584   73800 pod_ready.go:102] pod "metrics-server-569cc877fc-fzxrw" in "kube-system" namespace has status "Ready":"False"
	I0731 18:10:57.141070   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:10:57.155903   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 18:10:57.155975   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 18:10:57.189406   74203 cri.go:89] found id: ""
	I0731 18:10:57.189428   74203 logs.go:276] 0 containers: []
	W0731 18:10:57.189435   74203 logs.go:278] No container was found matching "kube-apiserver"
	I0731 18:10:57.189441   74203 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 18:10:57.189510   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 18:10:57.221507   74203 cri.go:89] found id: ""
	I0731 18:10:57.221531   74203 logs.go:276] 0 containers: []
	W0731 18:10:57.221540   74203 logs.go:278] No container was found matching "etcd"
	I0731 18:10:57.221547   74203 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 18:10:57.221614   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 18:10:57.257843   74203 cri.go:89] found id: ""
	I0731 18:10:57.257868   74203 logs.go:276] 0 containers: []
	W0731 18:10:57.257880   74203 logs.go:278] No container was found matching "coredns"
	I0731 18:10:57.257887   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 18:10:57.257939   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 18:10:57.292697   74203 cri.go:89] found id: ""
	I0731 18:10:57.292728   74203 logs.go:276] 0 containers: []
	W0731 18:10:57.292737   74203 logs.go:278] No container was found matching "kube-scheduler"
	I0731 18:10:57.292744   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 18:10:57.292802   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 18:10:57.325705   74203 cri.go:89] found id: ""
	I0731 18:10:57.325728   74203 logs.go:276] 0 containers: []
	W0731 18:10:57.325735   74203 logs.go:278] No container was found matching "kube-proxy"
	I0731 18:10:57.325740   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 18:10:57.325787   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 18:10:57.357436   74203 cri.go:89] found id: ""
	I0731 18:10:57.357463   74203 logs.go:276] 0 containers: []
	W0731 18:10:57.357471   74203 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 18:10:57.357477   74203 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 18:10:57.357534   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 18:10:57.388215   74203 cri.go:89] found id: ""
	I0731 18:10:57.388240   74203 logs.go:276] 0 containers: []
	W0731 18:10:57.388249   74203 logs.go:278] No container was found matching "kindnet"
	I0731 18:10:57.388256   74203 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 18:10:57.388315   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 18:10:57.419609   74203 cri.go:89] found id: ""
	I0731 18:10:57.419631   74203 logs.go:276] 0 containers: []
	W0731 18:10:57.419643   74203 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 18:10:57.419652   74203 logs.go:123] Gathering logs for CRI-O ...
	I0731 18:10:57.419663   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 18:10:57.497157   74203 logs.go:123] Gathering logs for container status ...
	I0731 18:10:57.497188   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 18:10:57.533512   74203 logs.go:123] Gathering logs for kubelet ...
	I0731 18:10:57.533552   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 18:10:57.587866   74203 logs.go:123] Gathering logs for dmesg ...
	I0731 18:10:57.587904   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 18:10:57.601191   74203 logs.go:123] Gathering logs for describe nodes ...
	I0731 18:10:57.601222   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 18:10:57.681899   74203 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 18:11:00.182160   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:11:00.195509   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 18:11:00.195598   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 18:11:00.230650   74203 cri.go:89] found id: ""
	I0731 18:11:00.230674   74203 logs.go:276] 0 containers: []
	W0731 18:11:00.230682   74203 logs.go:278] No container was found matching "kube-apiserver"
	I0731 18:11:00.230689   74203 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 18:11:00.230747   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 18:11:00.268629   74203 cri.go:89] found id: ""
	I0731 18:11:00.268656   74203 logs.go:276] 0 containers: []
	W0731 18:11:00.268666   74203 logs.go:278] No container was found matching "etcd"
	I0731 18:11:00.268672   74203 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 18:11:00.268740   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 18:11:00.301805   74203 cri.go:89] found id: ""
	I0731 18:11:00.301827   74203 logs.go:276] 0 containers: []
	W0731 18:11:00.301836   74203 logs.go:278] No container was found matching "coredns"
	I0731 18:11:00.301843   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 18:11:00.301901   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 18:11:00.333844   74203 cri.go:89] found id: ""
	I0731 18:11:00.333871   74203 logs.go:276] 0 containers: []
	W0731 18:11:00.333882   74203 logs.go:278] No container was found matching "kube-scheduler"
	I0731 18:11:00.333889   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 18:11:00.333949   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 18:10:56.779307   73696 pod_ready.go:102] pod "metrics-server-569cc877fc-64hp4" in "kube-system" namespace has status "Ready":"False"
	I0731 18:10:58.779655   73696 pod_ready.go:102] pod "metrics-server-569cc877fc-64hp4" in "kube-system" namespace has status "Ready":"False"
	I0731 18:10:58.643375   73479 pod_ready.go:102] pod "metrics-server-78fcd8795b-27pkr" in "kube-system" namespace has status "Ready":"False"
	I0731 18:11:00.643713   73479 pod_ready.go:102] pod "metrics-server-78fcd8795b-27pkr" in "kube-system" namespace has status "Ready":"False"
	I0731 18:11:01.272195   73800 pod_ready.go:102] pod "metrics-server-569cc877fc-fzxrw" in "kube-system" namespace has status "Ready":"False"
	I0731 18:11:03.272739   73800 pod_ready.go:102] pod "metrics-server-569cc877fc-fzxrw" in "kube-system" namespace has status "Ready":"False"
	I0731 18:11:00.366250   74203 cri.go:89] found id: ""
	I0731 18:11:00.366278   74203 logs.go:276] 0 containers: []
	W0731 18:11:00.366288   74203 logs.go:278] No container was found matching "kube-proxy"
	I0731 18:11:00.366295   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 18:11:00.366358   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 18:11:00.399301   74203 cri.go:89] found id: ""
	I0731 18:11:00.399325   74203 logs.go:276] 0 containers: []
	W0731 18:11:00.399335   74203 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 18:11:00.399342   74203 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 18:11:00.399405   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 18:11:00.432182   74203 cri.go:89] found id: ""
	I0731 18:11:00.432207   74203 logs.go:276] 0 containers: []
	W0731 18:11:00.432218   74203 logs.go:278] No container was found matching "kindnet"
	I0731 18:11:00.432224   74203 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 18:11:00.432284   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 18:11:00.465395   74203 cri.go:89] found id: ""
	I0731 18:11:00.465423   74203 logs.go:276] 0 containers: []
	W0731 18:11:00.465432   74203 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 18:11:00.465440   74203 logs.go:123] Gathering logs for kubelet ...
	I0731 18:11:00.465453   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 18:11:00.516042   74203 logs.go:123] Gathering logs for dmesg ...
	I0731 18:11:00.516077   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 18:11:00.528621   74203 logs.go:123] Gathering logs for describe nodes ...
	I0731 18:11:00.528647   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 18:11:00.600297   74203 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 18:11:00.600322   74203 logs.go:123] Gathering logs for CRI-O ...
	I0731 18:11:00.600339   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 18:11:00.680368   74203 logs.go:123] Gathering logs for container status ...
	I0731 18:11:00.680399   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 18:11:03.217684   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:11:03.230691   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 18:11:03.230752   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 18:11:03.264882   74203 cri.go:89] found id: ""
	I0731 18:11:03.264910   74203 logs.go:276] 0 containers: []
	W0731 18:11:03.264918   74203 logs.go:278] No container was found matching "kube-apiserver"
	I0731 18:11:03.264924   74203 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 18:11:03.264976   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 18:11:03.301608   74203 cri.go:89] found id: ""
	I0731 18:11:03.301733   74203 logs.go:276] 0 containers: []
	W0731 18:11:03.301754   74203 logs.go:278] No container was found matching "etcd"
	I0731 18:11:03.301765   74203 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 18:11:03.301838   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 18:11:03.335077   74203 cri.go:89] found id: ""
	I0731 18:11:03.335102   74203 logs.go:276] 0 containers: []
	W0731 18:11:03.335121   74203 logs.go:278] No container was found matching "coredns"
	I0731 18:11:03.335128   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 18:11:03.335196   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 18:11:03.370755   74203 cri.go:89] found id: ""
	I0731 18:11:03.370783   74203 logs.go:276] 0 containers: []
	W0731 18:11:03.370794   74203 logs.go:278] No container was found matching "kube-scheduler"
	I0731 18:11:03.370802   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 18:11:03.370862   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 18:11:03.403004   74203 cri.go:89] found id: ""
	I0731 18:11:03.403035   74203 logs.go:276] 0 containers: []
	W0731 18:11:03.403045   74203 logs.go:278] No container was found matching "kube-proxy"
	I0731 18:11:03.403052   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 18:11:03.403125   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 18:11:03.437169   74203 cri.go:89] found id: ""
	I0731 18:11:03.437209   74203 logs.go:276] 0 containers: []
	W0731 18:11:03.437219   74203 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 18:11:03.437235   74203 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 18:11:03.437296   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 18:11:03.469956   74203 cri.go:89] found id: ""
	I0731 18:11:03.469981   74203 logs.go:276] 0 containers: []
	W0731 18:11:03.469991   74203 logs.go:278] No container was found matching "kindnet"
	I0731 18:11:03.469998   74203 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 18:11:03.470056   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 18:11:03.503850   74203 cri.go:89] found id: ""
	I0731 18:11:03.503878   74203 logs.go:276] 0 containers: []
	W0731 18:11:03.503888   74203 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 18:11:03.503898   74203 logs.go:123] Gathering logs for kubelet ...
	I0731 18:11:03.503913   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 18:11:03.554993   74203 logs.go:123] Gathering logs for dmesg ...
	I0731 18:11:03.555036   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 18:11:03.567898   74203 logs.go:123] Gathering logs for describe nodes ...
	I0731 18:11:03.567925   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 18:11:03.630151   74203 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 18:11:03.630188   74203 logs.go:123] Gathering logs for CRI-O ...
	I0731 18:11:03.630207   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 18:11:03.708552   74203 logs.go:123] Gathering logs for container status ...
	I0731 18:11:03.708596   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
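	[editor's sketch, not part of the captured log] The cycle above is minikube's control-plane probe: it looks for a kube-apiserver process, asks CRI-O for each expected control-plane container, and, when every query returns no IDs, falls back to gathering kubelet, dmesg, describe-nodes, CRI-O and container-status output. The same checks can be re-run by hand on the node (e.g. after `minikube ssh`); the commands below are exactly the ones visible in this log:

	    # 1. Is a kube-apiserver process running at all?
	    sudo pgrep -xnf 'kube-apiserver.*minikube.*'

	    # 2. Does CRI-O report any control-plane containers (running or exited)?
	    for name in kube-apiserver etcd coredns kube-scheduler kube-proxy \
	                kube-controller-manager kindnet kubernetes-dashboard; do
	      echo "== ${name} =="
	      sudo crictl ps -a --quiet --name="${name}"
	    done

	    # 3. Collect the same diagnostics minikube gathers when the probes come back empty.
	    sudo journalctl -u kubelet -n 400
	    sudo journalctl -u crio -n 400
	    sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes \
	      --kubeconfig=/var/lib/minikube/kubeconfig

	An empty result from every `crictl ps` call is what produces the repeated "No container was found matching ..." warnings that follow.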
	I0731 18:11:01.278830   73696 pod_ready.go:102] pod "metrics-server-569cc877fc-64hp4" in "kube-system" namespace has status "Ready":"False"
	I0731 18:11:03.278880   73696 pod_ready.go:102] pod "metrics-server-569cc877fc-64hp4" in "kube-system" namespace has status "Ready":"False"
	I0731 18:11:05.778296   73696 pod_ready.go:102] pod "metrics-server-569cc877fc-64hp4" in "kube-system" namespace has status "Ready":"False"
	I0731 18:11:03.143289   73479 pod_ready.go:102] pod "metrics-server-78fcd8795b-27pkr" in "kube-system" namespace has status "Ready":"False"
	I0731 18:11:05.152015   73479 pod_ready.go:102] pod "metrics-server-78fcd8795b-27pkr" in "kube-system" namespace has status "Ready":"False"
	I0731 18:11:05.771810   73800 pod_ready.go:102] pod "metrics-server-569cc877fc-fzxrw" in "kube-system" namespace has status "Ready":"False"
	I0731 18:11:08.271205   73800 pod_ready.go:102] pod "metrics-server-569cc877fc-fzxrw" in "kube-system" namespace has status "Ready":"False"
	I0731 18:11:06.249728   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:11:06.261923   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 18:11:06.261998   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 18:11:06.296249   74203 cri.go:89] found id: ""
	I0731 18:11:06.296276   74203 logs.go:276] 0 containers: []
	W0731 18:11:06.296286   74203 logs.go:278] No container was found matching "kube-apiserver"
	I0731 18:11:06.296292   74203 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 18:11:06.296356   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 18:11:06.329355   74203 cri.go:89] found id: ""
	I0731 18:11:06.329381   74203 logs.go:276] 0 containers: []
	W0731 18:11:06.329389   74203 logs.go:278] No container was found matching "etcd"
	I0731 18:11:06.329395   74203 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 18:11:06.329443   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 18:11:06.362585   74203 cri.go:89] found id: ""
	I0731 18:11:06.362618   74203 logs.go:276] 0 containers: []
	W0731 18:11:06.362630   74203 logs.go:278] No container was found matching "coredns"
	I0731 18:11:06.362643   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 18:11:06.362704   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 18:11:06.396489   74203 cri.go:89] found id: ""
	I0731 18:11:06.396514   74203 logs.go:276] 0 containers: []
	W0731 18:11:06.396521   74203 logs.go:278] No container was found matching "kube-scheduler"
	I0731 18:11:06.396527   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 18:11:06.396590   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 18:11:06.428859   74203 cri.go:89] found id: ""
	I0731 18:11:06.428888   74203 logs.go:276] 0 containers: []
	W0731 18:11:06.428897   74203 logs.go:278] No container was found matching "kube-proxy"
	I0731 18:11:06.428903   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 18:11:06.428960   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 18:11:06.468817   74203 cri.go:89] found id: ""
	I0731 18:11:06.468846   74203 logs.go:276] 0 containers: []
	W0731 18:11:06.468856   74203 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 18:11:06.468864   74203 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 18:11:06.468924   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 18:11:06.499975   74203 cri.go:89] found id: ""
	I0731 18:11:06.500000   74203 logs.go:276] 0 containers: []
	W0731 18:11:06.500008   74203 logs.go:278] No container was found matching "kindnet"
	I0731 18:11:06.500013   74203 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 18:11:06.500067   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 18:11:06.537410   74203 cri.go:89] found id: ""
	I0731 18:11:06.537440   74203 logs.go:276] 0 containers: []
	W0731 18:11:06.537451   74203 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 18:11:06.537461   74203 logs.go:123] Gathering logs for kubelet ...
	I0731 18:11:06.537476   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 18:11:06.589664   74203 logs.go:123] Gathering logs for dmesg ...
	I0731 18:11:06.589709   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 18:11:06.603978   74203 logs.go:123] Gathering logs for describe nodes ...
	I0731 18:11:06.604005   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 18:11:06.673436   74203 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 18:11:06.673454   74203 logs.go:123] Gathering logs for CRI-O ...
	I0731 18:11:06.673465   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 18:11:06.757101   74203 logs.go:123] Gathering logs for container status ...
	I0731 18:11:06.757143   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 18:11:09.299562   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:11:09.311910   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 18:11:09.311971   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 18:11:09.346517   74203 cri.go:89] found id: ""
	I0731 18:11:09.346545   74203 logs.go:276] 0 containers: []
	W0731 18:11:09.346555   74203 logs.go:278] No container was found matching "kube-apiserver"
	I0731 18:11:09.346562   74203 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 18:11:09.346634   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 18:11:09.377688   74203 cri.go:89] found id: ""
	I0731 18:11:09.377713   74203 logs.go:276] 0 containers: []
	W0731 18:11:09.377720   74203 logs.go:278] No container was found matching "etcd"
	I0731 18:11:09.377726   74203 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 18:11:09.377787   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 18:11:09.412149   74203 cri.go:89] found id: ""
	I0731 18:11:09.412176   74203 logs.go:276] 0 containers: []
	W0731 18:11:09.412186   74203 logs.go:278] No container was found matching "coredns"
	I0731 18:11:09.412193   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 18:11:09.412259   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 18:11:09.444134   74203 cri.go:89] found id: ""
	I0731 18:11:09.444162   74203 logs.go:276] 0 containers: []
	W0731 18:11:09.444172   74203 logs.go:278] No container was found matching "kube-scheduler"
	I0731 18:11:09.444178   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 18:11:09.444233   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 18:11:09.481407   74203 cri.go:89] found id: ""
	I0731 18:11:09.481436   74203 logs.go:276] 0 containers: []
	W0731 18:11:09.481447   74203 logs.go:278] No container was found matching "kube-proxy"
	I0731 18:11:09.481453   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 18:11:09.481513   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 18:11:09.514926   74203 cri.go:89] found id: ""
	I0731 18:11:09.514950   74203 logs.go:276] 0 containers: []
	W0731 18:11:09.514967   74203 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 18:11:09.514974   74203 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 18:11:09.515036   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 18:11:09.547253   74203 cri.go:89] found id: ""
	I0731 18:11:09.547278   74203 logs.go:276] 0 containers: []
	W0731 18:11:09.547285   74203 logs.go:278] No container was found matching "kindnet"
	I0731 18:11:09.547291   74203 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 18:11:09.547376   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 18:11:09.587585   74203 cri.go:89] found id: ""
	I0731 18:11:09.587614   74203 logs.go:276] 0 containers: []
	W0731 18:11:09.587622   74203 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 18:11:09.587632   74203 logs.go:123] Gathering logs for kubelet ...
	I0731 18:11:09.587646   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 18:11:09.642024   74203 logs.go:123] Gathering logs for dmesg ...
	I0731 18:11:09.642057   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 18:11:09.655244   74203 logs.go:123] Gathering logs for describe nodes ...
	I0731 18:11:09.655270   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 18:11:09.721446   74203 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 18:11:09.721474   74203 logs.go:123] Gathering logs for CRI-O ...
	I0731 18:11:09.721489   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 18:11:09.803315   74203 logs.go:123] Gathering logs for container status ...
	I0731 18:11:09.803349   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 18:11:07.779195   73696 pod_ready.go:102] pod "metrics-server-569cc877fc-64hp4" in "kube-system" namespace has status "Ready":"False"
	I0731 18:11:10.278028   73696 pod_ready.go:102] pod "metrics-server-569cc877fc-64hp4" in "kube-system" namespace has status "Ready":"False"
	I0731 18:11:07.643242   73479 pod_ready.go:102] pod "metrics-server-78fcd8795b-27pkr" in "kube-system" namespace has status "Ready":"False"
	I0731 18:11:10.143895   73479 pod_ready.go:102] pod "metrics-server-78fcd8795b-27pkr" in "kube-system" namespace has status "Ready":"False"
	I0731 18:11:10.271515   73800 pod_ready.go:102] pod "metrics-server-569cc877fc-fzxrw" in "kube-system" namespace has status "Ready":"False"
	I0731 18:11:12.771322   73800 pod_ready.go:102] pod "metrics-server-569cc877fc-fzxrw" in "kube-system" namespace has status "Ready":"False"
	I0731 18:11:12.344355   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:11:12.357122   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 18:11:12.357194   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 18:11:12.392237   74203 cri.go:89] found id: ""
	I0731 18:11:12.392258   74203 logs.go:276] 0 containers: []
	W0731 18:11:12.392267   74203 logs.go:278] No container was found matching "kube-apiserver"
	I0731 18:11:12.392272   74203 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 18:11:12.392339   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 18:11:12.424490   74203 cri.go:89] found id: ""
	I0731 18:11:12.424514   74203 logs.go:276] 0 containers: []
	W0731 18:11:12.424523   74203 logs.go:278] No container was found matching "etcd"
	I0731 18:11:12.424529   74203 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 18:11:12.424587   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 18:11:12.458438   74203 cri.go:89] found id: ""
	I0731 18:11:12.458467   74203 logs.go:276] 0 containers: []
	W0731 18:11:12.458477   74203 logs.go:278] No container was found matching "coredns"
	I0731 18:11:12.458483   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 18:11:12.458545   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 18:11:12.495343   74203 cri.go:89] found id: ""
	I0731 18:11:12.495371   74203 logs.go:276] 0 containers: []
	W0731 18:11:12.495383   74203 logs.go:278] No container was found matching "kube-scheduler"
	I0731 18:11:12.495391   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 18:11:12.495455   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 18:11:12.527285   74203 cri.go:89] found id: ""
	I0731 18:11:12.527314   74203 logs.go:276] 0 containers: []
	W0731 18:11:12.527324   74203 logs.go:278] No container was found matching "kube-proxy"
	I0731 18:11:12.527334   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 18:11:12.527393   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 18:11:12.560341   74203 cri.go:89] found id: ""
	I0731 18:11:12.560369   74203 logs.go:276] 0 containers: []
	W0731 18:11:12.560379   74203 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 18:11:12.560387   74203 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 18:11:12.560444   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 18:11:12.595084   74203 cri.go:89] found id: ""
	I0731 18:11:12.595120   74203 logs.go:276] 0 containers: []
	W0731 18:11:12.595133   74203 logs.go:278] No container was found matching "kindnet"
	I0731 18:11:12.595141   74203 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 18:11:12.595215   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 18:11:12.630666   74203 cri.go:89] found id: ""
	I0731 18:11:12.630692   74203 logs.go:276] 0 containers: []
	W0731 18:11:12.630702   74203 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 18:11:12.630711   74203 logs.go:123] Gathering logs for kubelet ...
	I0731 18:11:12.630727   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 18:11:12.683588   74203 logs.go:123] Gathering logs for dmesg ...
	I0731 18:11:12.683620   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 18:11:12.696899   74203 logs.go:123] Gathering logs for describe nodes ...
	I0731 18:11:12.696925   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 18:11:12.757815   74203 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 18:11:12.757837   74203 logs.go:123] Gathering logs for CRI-O ...
	I0731 18:11:12.757870   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 18:11:12.834888   74203 logs.go:123] Gathering logs for container status ...
	I0731 18:11:12.834927   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 18:11:12.278464   73696 pod_ready.go:102] pod "metrics-server-569cc877fc-64hp4" in "kube-system" namespace has status "Ready":"False"
	I0731 18:11:14.279031   73696 pod_ready.go:102] pod "metrics-server-569cc877fc-64hp4" in "kube-system" namespace has status "Ready":"False"
	I0731 18:11:12.643960   73479 pod_ready.go:102] pod "metrics-server-78fcd8795b-27pkr" in "kube-system" namespace has status "Ready":"False"
	I0731 18:11:15.142811   73479 pod_ready.go:102] pod "metrics-server-78fcd8795b-27pkr" in "kube-system" namespace has status "Ready":"False"
	I0731 18:11:14.771367   73800 pod_ready.go:102] pod "metrics-server-569cc877fc-fzxrw" in "kube-system" namespace has status "Ready":"False"
	I0731 18:11:16.772010   73800 pod_ready.go:102] pod "metrics-server-569cc877fc-fzxrw" in "kube-system" namespace has status "Ready":"False"
	I0731 18:11:19.271857   73800 pod_ready.go:102] pod "metrics-server-569cc877fc-fzxrw" in "kube-system" namespace has status "Ready":"False"
	I0731 18:11:15.372797   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:11:15.386268   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 18:11:15.386356   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 18:11:15.420446   74203 cri.go:89] found id: ""
	I0731 18:11:15.420477   74203 logs.go:276] 0 containers: []
	W0731 18:11:15.420488   74203 logs.go:278] No container was found matching "kube-apiserver"
	I0731 18:11:15.420497   74203 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 18:11:15.420556   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 18:11:15.456092   74203 cri.go:89] found id: ""
	I0731 18:11:15.456118   74203 logs.go:276] 0 containers: []
	W0731 18:11:15.456129   74203 logs.go:278] No container was found matching "etcd"
	I0731 18:11:15.456136   74203 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 18:11:15.456194   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 18:11:15.488277   74203 cri.go:89] found id: ""
	I0731 18:11:15.488304   74203 logs.go:276] 0 containers: []
	W0731 18:11:15.488316   74203 logs.go:278] No container was found matching "coredns"
	I0731 18:11:15.488323   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 18:11:15.488384   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 18:11:15.520701   74203 cri.go:89] found id: ""
	I0731 18:11:15.520730   74203 logs.go:276] 0 containers: []
	W0731 18:11:15.520741   74203 logs.go:278] No container was found matching "kube-scheduler"
	I0731 18:11:15.520749   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 18:11:15.520818   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 18:11:15.552831   74203 cri.go:89] found id: ""
	I0731 18:11:15.552854   74203 logs.go:276] 0 containers: []
	W0731 18:11:15.552862   74203 logs.go:278] No container was found matching "kube-proxy"
	I0731 18:11:15.552867   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 18:11:15.552920   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 18:11:15.589161   74203 cri.go:89] found id: ""
	I0731 18:11:15.589191   74203 logs.go:276] 0 containers: []
	W0731 18:11:15.589203   74203 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 18:11:15.589210   74203 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 18:11:15.589274   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 18:11:15.622501   74203 cri.go:89] found id: ""
	I0731 18:11:15.622532   74203 logs.go:276] 0 containers: []
	W0731 18:11:15.622544   74203 logs.go:278] No container was found matching "kindnet"
	I0731 18:11:15.622552   74203 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 18:11:15.622611   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 18:11:15.654772   74203 cri.go:89] found id: ""
	I0731 18:11:15.654801   74203 logs.go:276] 0 containers: []
	W0731 18:11:15.654815   74203 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 18:11:15.654826   74203 logs.go:123] Gathering logs for kubelet ...
	I0731 18:11:15.654843   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 18:11:15.703103   74203 logs.go:123] Gathering logs for dmesg ...
	I0731 18:11:15.703148   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 18:11:15.716620   74203 logs.go:123] Gathering logs for describe nodes ...
	I0731 18:11:15.716645   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 18:11:15.783391   74203 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 18:11:15.783416   74203 logs.go:123] Gathering logs for CRI-O ...
	I0731 18:11:15.783431   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 18:11:15.857462   74203 logs.go:123] Gathering logs for container status ...
	I0731 18:11:15.857495   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 18:11:18.394223   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:11:18.407297   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 18:11:18.407374   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 18:11:18.439542   74203 cri.go:89] found id: ""
	I0731 18:11:18.439564   74203 logs.go:276] 0 containers: []
	W0731 18:11:18.439572   74203 logs.go:278] No container was found matching "kube-apiserver"
	I0731 18:11:18.439578   74203 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 18:11:18.439625   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 18:11:18.471838   74203 cri.go:89] found id: ""
	I0731 18:11:18.471863   74203 logs.go:276] 0 containers: []
	W0731 18:11:18.471873   74203 logs.go:278] No container was found matching "etcd"
	I0731 18:11:18.471883   74203 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 18:11:18.471943   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 18:11:18.505325   74203 cri.go:89] found id: ""
	I0731 18:11:18.505355   74203 logs.go:276] 0 containers: []
	W0731 18:11:18.505365   74203 logs.go:278] No container was found matching "coredns"
	I0731 18:11:18.505372   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 18:11:18.505432   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 18:11:18.536155   74203 cri.go:89] found id: ""
	I0731 18:11:18.536180   74203 logs.go:276] 0 containers: []
	W0731 18:11:18.536189   74203 logs.go:278] No container was found matching "kube-scheduler"
	I0731 18:11:18.536194   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 18:11:18.536241   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 18:11:18.569301   74203 cri.go:89] found id: ""
	I0731 18:11:18.569329   74203 logs.go:276] 0 containers: []
	W0731 18:11:18.569339   74203 logs.go:278] No container was found matching "kube-proxy"
	I0731 18:11:18.569344   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 18:11:18.569398   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 18:11:18.603053   74203 cri.go:89] found id: ""
	I0731 18:11:18.603079   74203 logs.go:276] 0 containers: []
	W0731 18:11:18.603087   74203 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 18:11:18.603092   74203 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 18:11:18.603170   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 18:11:18.636259   74203 cri.go:89] found id: ""
	I0731 18:11:18.636287   74203 logs.go:276] 0 containers: []
	W0731 18:11:18.636298   74203 logs.go:278] No container was found matching "kindnet"
	I0731 18:11:18.636305   74203 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 18:11:18.636361   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 18:11:18.667839   74203 cri.go:89] found id: ""
	I0731 18:11:18.667863   74203 logs.go:276] 0 containers: []
	W0731 18:11:18.667873   74203 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 18:11:18.667883   74203 logs.go:123] Gathering logs for dmesg ...
	I0731 18:11:18.667897   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 18:11:18.681005   74203 logs.go:123] Gathering logs for describe nodes ...
	I0731 18:11:18.681030   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 18:11:18.747793   74203 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 18:11:18.747875   74203 logs.go:123] Gathering logs for CRI-O ...
	I0731 18:11:18.747892   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 18:11:18.828970   74203 logs.go:123] Gathering logs for container status ...
	I0731 18:11:18.829005   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 18:11:18.866724   74203 logs.go:123] Gathering logs for kubelet ...
	I0731 18:11:18.866749   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 18:11:16.279368   73696 pod_ready.go:102] pod "metrics-server-569cc877fc-64hp4" in "kube-system" namespace has status "Ready":"False"
	I0731 18:11:18.778730   73696 pod_ready.go:102] pod "metrics-server-569cc877fc-64hp4" in "kube-system" namespace has status "Ready":"False"
	I0731 18:11:20.779465   73696 pod_ready.go:102] pod "metrics-server-569cc877fc-64hp4" in "kube-system" namespace has status "Ready":"False"
	I0731 18:11:17.144041   73479 pod_ready.go:102] pod "metrics-server-78fcd8795b-27pkr" in "kube-system" namespace has status "Ready":"False"
	I0731 18:11:19.645356   73479 pod_ready.go:102] pod "metrics-server-78fcd8795b-27pkr" in "kube-system" namespace has status "Ready":"False"
	I0731 18:11:21.272651   73800 pod_ready.go:102] pod "metrics-server-569cc877fc-fzxrw" in "kube-system" namespace has status "Ready":"False"
	I0731 18:11:23.771240   73800 pod_ready.go:102] pod "metrics-server-569cc877fc-fzxrw" in "kube-system" namespace has status "Ready":"False"
	I0731 18:11:21.416598   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:11:21.431968   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 18:11:21.432027   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 18:11:21.469670   74203 cri.go:89] found id: ""
	I0731 18:11:21.469696   74203 logs.go:276] 0 containers: []
	W0731 18:11:21.469703   74203 logs.go:278] No container was found matching "kube-apiserver"
	I0731 18:11:21.469709   74203 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 18:11:21.469762   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 18:11:21.508461   74203 cri.go:89] found id: ""
	I0731 18:11:21.508490   74203 logs.go:276] 0 containers: []
	W0731 18:11:21.508500   74203 logs.go:278] No container was found matching "etcd"
	I0731 18:11:21.508506   74203 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 18:11:21.508570   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 18:11:21.548101   74203 cri.go:89] found id: ""
	I0731 18:11:21.548127   74203 logs.go:276] 0 containers: []
	W0731 18:11:21.548136   74203 logs.go:278] No container was found matching "coredns"
	I0731 18:11:21.548142   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 18:11:21.548204   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 18:11:21.582617   74203 cri.go:89] found id: ""
	I0731 18:11:21.582646   74203 logs.go:276] 0 containers: []
	W0731 18:11:21.582653   74203 logs.go:278] No container was found matching "kube-scheduler"
	I0731 18:11:21.582659   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 18:11:21.582712   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 18:11:21.614185   74203 cri.go:89] found id: ""
	I0731 18:11:21.614210   74203 logs.go:276] 0 containers: []
	W0731 18:11:21.614218   74203 logs.go:278] No container was found matching "kube-proxy"
	I0731 18:11:21.614223   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 18:11:21.614278   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 18:11:21.647596   74203 cri.go:89] found id: ""
	I0731 18:11:21.647619   74203 logs.go:276] 0 containers: []
	W0731 18:11:21.647629   74203 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 18:11:21.647636   74203 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 18:11:21.647693   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 18:11:21.680106   74203 cri.go:89] found id: ""
	I0731 18:11:21.680132   74203 logs.go:276] 0 containers: []
	W0731 18:11:21.680142   74203 logs.go:278] No container was found matching "kindnet"
	I0731 18:11:21.680149   74203 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 18:11:21.680208   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 18:11:21.714708   74203 cri.go:89] found id: ""
	I0731 18:11:21.714733   74203 logs.go:276] 0 containers: []
	W0731 18:11:21.714742   74203 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 18:11:21.714754   74203 logs.go:123] Gathering logs for describe nodes ...
	I0731 18:11:21.714779   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 18:11:21.783425   74203 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 18:11:21.783448   74203 logs.go:123] Gathering logs for CRI-O ...
	I0731 18:11:21.783462   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 18:11:21.859943   74203 logs.go:123] Gathering logs for container status ...
	I0731 18:11:21.859980   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 18:11:21.898374   74203 logs.go:123] Gathering logs for kubelet ...
	I0731 18:11:21.898405   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 18:11:21.945753   74203 logs.go:123] Gathering logs for dmesg ...
	I0731 18:11:21.945784   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 18:11:24.459481   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:11:24.471376   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 18:11:24.471435   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 18:11:24.506474   74203 cri.go:89] found id: ""
	I0731 18:11:24.506502   74203 logs.go:276] 0 containers: []
	W0731 18:11:24.506511   74203 logs.go:278] No container was found matching "kube-apiserver"
	I0731 18:11:24.506516   74203 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 18:11:24.506572   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 18:11:24.547298   74203 cri.go:89] found id: ""
	I0731 18:11:24.547324   74203 logs.go:276] 0 containers: []
	W0731 18:11:24.547332   74203 logs.go:278] No container was found matching "etcd"
	I0731 18:11:24.547337   74203 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 18:11:24.547402   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 18:11:24.579912   74203 cri.go:89] found id: ""
	I0731 18:11:24.579944   74203 logs.go:276] 0 containers: []
	W0731 18:11:24.579955   74203 logs.go:278] No container was found matching "coredns"
	I0731 18:11:24.579963   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 18:11:24.580032   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 18:11:24.613754   74203 cri.go:89] found id: ""
	I0731 18:11:24.613782   74203 logs.go:276] 0 containers: []
	W0731 18:11:24.613791   74203 logs.go:278] No container was found matching "kube-scheduler"
	I0731 18:11:24.613799   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 18:11:24.613859   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 18:11:24.649782   74203 cri.go:89] found id: ""
	I0731 18:11:24.649811   74203 logs.go:276] 0 containers: []
	W0731 18:11:24.649822   74203 logs.go:278] No container was found matching "kube-proxy"
	I0731 18:11:24.649829   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 18:11:24.649888   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 18:11:24.689232   74203 cri.go:89] found id: ""
	I0731 18:11:24.689264   74203 logs.go:276] 0 containers: []
	W0731 18:11:24.689274   74203 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 18:11:24.689283   74203 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 18:11:24.689361   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 18:11:24.727861   74203 cri.go:89] found id: ""
	I0731 18:11:24.727894   74203 logs.go:276] 0 containers: []
	W0731 18:11:24.727902   74203 logs.go:278] No container was found matching "kindnet"
	I0731 18:11:24.727924   74203 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 18:11:24.727983   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 18:11:24.763839   74203 cri.go:89] found id: ""
	I0731 18:11:24.763866   74203 logs.go:276] 0 containers: []
	W0731 18:11:24.763876   74203 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 18:11:24.763886   74203 logs.go:123] Gathering logs for CRI-O ...
	I0731 18:11:24.763901   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 18:11:24.841090   74203 logs.go:123] Gathering logs for container status ...
	I0731 18:11:24.841131   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 18:11:24.877206   74203 logs.go:123] Gathering logs for kubelet ...
	I0731 18:11:24.877231   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 18:11:24.926149   74203 logs.go:123] Gathering logs for dmesg ...
	I0731 18:11:24.926180   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 18:11:24.938795   74203 logs.go:123] Gathering logs for describe nodes ...
	I0731 18:11:24.938822   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 18:11:25.008349   74203 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
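	[editor's sketch, not part of the captured log] Every `describe nodes` attempt in this section fails the same way: kubectl cannot reach the apiserver on localhost:8443 because no kube-apiserver container is running, so nothing is bound to that port. Two generic follow-up checks (assumptions for illustration, not taken from this report) that confirm the port is simply not listening:

	    # Show whether anything is bound to the apiserver port.
	    sudo ss -tlnp | grep 8443 || echo "nothing listening on :8443"

	    # Probe the apiserver health endpoint directly; with no listener on 8443
	    # this fails with "connection refused", matching the kubectl error above.
	    curl -k https://localhost:8443/healthz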
	I0731 18:11:23.279256   73696 pod_ready.go:102] pod "metrics-server-569cc877fc-64hp4" in "kube-system" namespace has status "Ready":"False"
	I0731 18:11:25.778644   73696 pod_ready.go:102] pod "metrics-server-569cc877fc-64hp4" in "kube-system" namespace has status "Ready":"False"
	I0731 18:11:22.143312   73479 pod_ready.go:102] pod "metrics-server-78fcd8795b-27pkr" in "kube-system" namespace has status "Ready":"False"
	I0731 18:11:24.144259   73479 pod_ready.go:102] pod "metrics-server-78fcd8795b-27pkr" in "kube-system" namespace has status "Ready":"False"
	I0731 18:11:26.144310   73479 pod_ready.go:102] pod "metrics-server-78fcd8795b-27pkr" in "kube-system" namespace has status "Ready":"False"
	I0731 18:11:25.771403   73800 pod_ready.go:102] pod "metrics-server-569cc877fc-fzxrw" in "kube-system" namespace has status "Ready":"False"
	I0731 18:11:28.270613   73800 pod_ready.go:102] pod "metrics-server-569cc877fc-fzxrw" in "kube-system" namespace has status "Ready":"False"
	I0731 18:11:27.509192   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:11:27.522506   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 18:11:27.522582   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 18:11:27.557915   74203 cri.go:89] found id: ""
	I0731 18:11:27.557943   74203 logs.go:276] 0 containers: []
	W0731 18:11:27.557954   74203 logs.go:278] No container was found matching "kube-apiserver"
	I0731 18:11:27.557962   74203 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 18:11:27.558019   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 18:11:27.594295   74203 cri.go:89] found id: ""
	I0731 18:11:27.594322   74203 logs.go:276] 0 containers: []
	W0731 18:11:27.594332   74203 logs.go:278] No container was found matching "etcd"
	I0731 18:11:27.594348   74203 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 18:11:27.594410   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 18:11:27.626830   74203 cri.go:89] found id: ""
	I0731 18:11:27.626857   74203 logs.go:276] 0 containers: []
	W0731 18:11:27.626868   74203 logs.go:278] No container was found matching "coredns"
	I0731 18:11:27.626875   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 18:11:27.626964   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 18:11:27.662062   74203 cri.go:89] found id: ""
	I0731 18:11:27.662084   74203 logs.go:276] 0 containers: []
	W0731 18:11:27.662092   74203 logs.go:278] No container was found matching "kube-scheduler"
	I0731 18:11:27.662099   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 18:11:27.662158   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 18:11:27.695686   74203 cri.go:89] found id: ""
	I0731 18:11:27.695715   74203 logs.go:276] 0 containers: []
	W0731 18:11:27.695727   74203 logs.go:278] No container was found matching "kube-proxy"
	I0731 18:11:27.695735   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 18:11:27.695785   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 18:11:27.729444   74203 cri.go:89] found id: ""
	I0731 18:11:27.729467   74203 logs.go:276] 0 containers: []
	W0731 18:11:27.729475   74203 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 18:11:27.729481   74203 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 18:11:27.729531   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 18:11:27.761889   74203 cri.go:89] found id: ""
	I0731 18:11:27.761916   74203 logs.go:276] 0 containers: []
	W0731 18:11:27.761926   74203 logs.go:278] No container was found matching "kindnet"
	I0731 18:11:27.761934   74203 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 18:11:27.761995   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 18:11:27.796178   74203 cri.go:89] found id: ""
	I0731 18:11:27.796199   74203 logs.go:276] 0 containers: []
	W0731 18:11:27.796206   74203 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 18:11:27.796214   74203 logs.go:123] Gathering logs for kubelet ...
	I0731 18:11:27.796227   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 18:11:27.849613   74203 logs.go:123] Gathering logs for dmesg ...
	I0731 18:11:27.849650   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 18:11:27.862892   74203 logs.go:123] Gathering logs for describe nodes ...
	I0731 18:11:27.862923   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 18:11:27.928691   74203 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 18:11:27.928717   74203 logs.go:123] Gathering logs for CRI-O ...
	I0731 18:11:27.928740   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 18:11:28.006310   74203 logs.go:123] Gathering logs for container status ...
	I0731 18:11:28.006340   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 18:11:27.779125   73696 pod_ready.go:102] pod "metrics-server-569cc877fc-64hp4" in "kube-system" namespace has status "Ready":"False"
	I0731 18:11:30.279252   73696 pod_ready.go:102] pod "metrics-server-569cc877fc-64hp4" in "kube-system" namespace has status "Ready":"False"
	I0731 18:11:28.643172   73479 pod_ready.go:102] pod "metrics-server-78fcd8795b-27pkr" in "kube-system" namespace has status "Ready":"False"
	I0731 18:11:30.645474   73479 pod_ready.go:102] pod "metrics-server-78fcd8795b-27pkr" in "kube-system" namespace has status "Ready":"False"
	I0731 18:11:30.271016   73800 pod_ready.go:102] pod "metrics-server-569cc877fc-fzxrw" in "kube-system" namespace has status "Ready":"False"
	I0731 18:11:32.771684   73800 pod_ready.go:102] pod "metrics-server-569cc877fc-fzxrw" in "kube-system" namespace has status "Ready":"False"
	I0731 18:11:30.543065   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:11:30.555951   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 18:11:30.556013   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 18:11:30.597411   74203 cri.go:89] found id: ""
	I0731 18:11:30.597440   74203 logs.go:276] 0 containers: []
	W0731 18:11:30.597451   74203 logs.go:278] No container was found matching "kube-apiserver"
	I0731 18:11:30.597458   74203 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 18:11:30.597518   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 18:11:30.629836   74203 cri.go:89] found id: ""
	I0731 18:11:30.629866   74203 logs.go:276] 0 containers: []
	W0731 18:11:30.629873   74203 logs.go:278] No container was found matching "etcd"
	I0731 18:11:30.629878   74203 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 18:11:30.629932   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 18:11:30.667402   74203 cri.go:89] found id: ""
	I0731 18:11:30.667432   74203 logs.go:276] 0 containers: []
	W0731 18:11:30.667443   74203 logs.go:278] No container was found matching "coredns"
	I0731 18:11:30.667450   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 18:11:30.667513   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 18:11:30.701677   74203 cri.go:89] found id: ""
	I0731 18:11:30.701708   74203 logs.go:276] 0 containers: []
	W0731 18:11:30.701716   74203 logs.go:278] No container was found matching "kube-scheduler"
	I0731 18:11:30.701722   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 18:11:30.701773   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 18:11:30.736685   74203 cri.go:89] found id: ""
	I0731 18:11:30.736714   74203 logs.go:276] 0 containers: []
	W0731 18:11:30.736721   74203 logs.go:278] No container was found matching "kube-proxy"
	I0731 18:11:30.736736   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 18:11:30.736786   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 18:11:30.771501   74203 cri.go:89] found id: ""
	I0731 18:11:30.771526   74203 logs.go:276] 0 containers: []
	W0731 18:11:30.771543   74203 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 18:11:30.771549   74203 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 18:11:30.771597   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 18:11:30.805878   74203 cri.go:89] found id: ""
	I0731 18:11:30.805902   74203 logs.go:276] 0 containers: []
	W0731 18:11:30.805911   74203 logs.go:278] No container was found matching "kindnet"
	I0731 18:11:30.805921   74203 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 18:11:30.805966   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 18:11:30.839001   74203 cri.go:89] found id: ""
	I0731 18:11:30.839027   74203 logs.go:276] 0 containers: []
	W0731 18:11:30.839038   74203 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 18:11:30.839048   74203 logs.go:123] Gathering logs for kubelet ...
	I0731 18:11:30.839062   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 18:11:30.893357   74203 logs.go:123] Gathering logs for dmesg ...
	I0731 18:11:30.893387   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 18:11:30.907222   74203 logs.go:123] Gathering logs for describe nodes ...
	I0731 18:11:30.907248   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 18:11:30.985626   74203 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 18:11:30.985648   74203 logs.go:123] Gathering logs for CRI-O ...
	I0731 18:11:30.985668   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 18:11:31.067900   74203 logs.go:123] Gathering logs for container status ...
	I0731 18:11:31.067948   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 18:11:33.607259   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:11:33.621596   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 18:11:33.621656   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 18:11:33.663616   74203 cri.go:89] found id: ""
	I0731 18:11:33.663642   74203 logs.go:276] 0 containers: []
	W0731 18:11:33.663649   74203 logs.go:278] No container was found matching "kube-apiserver"
	I0731 18:11:33.663655   74203 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 18:11:33.663704   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 18:11:33.702133   74203 cri.go:89] found id: ""
	I0731 18:11:33.702159   74203 logs.go:276] 0 containers: []
	W0731 18:11:33.702167   74203 logs.go:278] No container was found matching "etcd"
	I0731 18:11:33.702173   74203 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 18:11:33.702226   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 18:11:33.733730   74203 cri.go:89] found id: ""
	I0731 18:11:33.733752   74203 logs.go:276] 0 containers: []
	W0731 18:11:33.733760   74203 logs.go:278] No container was found matching "coredns"
	I0731 18:11:33.733765   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 18:11:33.733813   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 18:11:33.765036   74203 cri.go:89] found id: ""
	I0731 18:11:33.765064   74203 logs.go:276] 0 containers: []
	W0731 18:11:33.765074   74203 logs.go:278] No container was found matching "kube-scheduler"
	I0731 18:11:33.765080   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 18:11:33.765128   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 18:11:33.799604   74203 cri.go:89] found id: ""
	I0731 18:11:33.799630   74203 logs.go:276] 0 containers: []
	W0731 18:11:33.799640   74203 logs.go:278] No container was found matching "kube-proxy"
	I0731 18:11:33.799648   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 18:11:33.799716   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 18:11:33.831434   74203 cri.go:89] found id: ""
	I0731 18:11:33.831455   74203 logs.go:276] 0 containers: []
	W0731 18:11:33.831464   74203 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 18:11:33.831469   74203 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 18:11:33.831514   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 18:11:33.862975   74203 cri.go:89] found id: ""
	I0731 18:11:33.863004   74203 logs.go:276] 0 containers: []
	W0731 18:11:33.863014   74203 logs.go:278] No container was found matching "kindnet"
	I0731 18:11:33.863022   74203 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 18:11:33.863090   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 18:11:33.895674   74203 cri.go:89] found id: ""
	I0731 18:11:33.895704   74203 logs.go:276] 0 containers: []
	W0731 18:11:33.895714   74203 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 18:11:33.895723   74203 logs.go:123] Gathering logs for container status ...
	I0731 18:11:33.895737   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 18:11:33.931954   74203 logs.go:123] Gathering logs for kubelet ...
	I0731 18:11:33.931980   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 18:11:33.985353   74203 logs.go:123] Gathering logs for dmesg ...
	I0731 18:11:33.985385   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 18:11:33.997857   74203 logs.go:123] Gathering logs for describe nodes ...
	I0731 18:11:33.997882   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 18:11:34.060523   74203 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 18:11:34.060553   74203 logs.go:123] Gathering logs for CRI-O ...
	I0731 18:11:34.060575   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 18:11:32.778212   73696 pod_ready.go:102] pod "metrics-server-569cc877fc-64hp4" in "kube-system" namespace has status "Ready":"False"
	I0731 18:11:35.278655   73696 pod_ready.go:102] pod "metrics-server-569cc877fc-64hp4" in "kube-system" namespace has status "Ready":"False"
	I0731 18:11:33.151579   73479 pod_ready.go:102] pod "metrics-server-78fcd8795b-27pkr" in "kube-system" namespace has status "Ready":"False"
	I0731 18:11:35.643326   73479 pod_ready.go:102] pod "metrics-server-78fcd8795b-27pkr" in "kube-system" namespace has status "Ready":"False"
	I0731 18:11:34.771873   73800 pod_ready.go:102] pod "metrics-server-569cc877fc-fzxrw" in "kube-system" namespace has status "Ready":"False"
	I0731 18:11:36.772309   73800 pod_ready.go:102] pod "metrics-server-569cc877fc-fzxrw" in "kube-system" namespace has status "Ready":"False"
	I0731 18:11:39.271582   73800 pod_ready.go:102] pod "metrics-server-569cc877fc-fzxrw" in "kube-system" namespace has status "Ready":"False"
	I0731 18:11:36.643003   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:11:36.659306   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 18:11:36.659385   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 18:11:36.717097   74203 cri.go:89] found id: ""
	I0731 18:11:36.717129   74203 logs.go:276] 0 containers: []
	W0731 18:11:36.717141   74203 logs.go:278] No container was found matching "kube-apiserver"
	I0731 18:11:36.717149   74203 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 18:11:36.717212   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 18:11:36.750288   74203 cri.go:89] found id: ""
	I0731 18:11:36.750314   74203 logs.go:276] 0 containers: []
	W0731 18:11:36.750325   74203 logs.go:278] No container was found matching "etcd"
	I0731 18:11:36.750331   74203 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 18:11:36.750391   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 18:11:36.785272   74203 cri.go:89] found id: ""
	I0731 18:11:36.785296   74203 logs.go:276] 0 containers: []
	W0731 18:11:36.785304   74203 logs.go:278] No container was found matching "coredns"
	I0731 18:11:36.785310   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 18:11:36.785360   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 18:11:36.818927   74203 cri.go:89] found id: ""
	I0731 18:11:36.818953   74203 logs.go:276] 0 containers: []
	W0731 18:11:36.818965   74203 logs.go:278] No container was found matching "kube-scheduler"
	I0731 18:11:36.818972   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 18:11:36.819020   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 18:11:36.854562   74203 cri.go:89] found id: ""
	I0731 18:11:36.854593   74203 logs.go:276] 0 containers: []
	W0731 18:11:36.854602   74203 logs.go:278] No container was found matching "kube-proxy"
	I0731 18:11:36.854607   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 18:11:36.854670   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 18:11:36.887786   74203 cri.go:89] found id: ""
	I0731 18:11:36.887814   74203 logs.go:276] 0 containers: []
	W0731 18:11:36.887825   74203 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 18:11:36.887833   74203 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 18:11:36.887893   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 18:11:36.919418   74203 cri.go:89] found id: ""
	I0731 18:11:36.919446   74203 logs.go:276] 0 containers: []
	W0731 18:11:36.919457   74203 logs.go:278] No container was found matching "kindnet"
	I0731 18:11:36.919471   74203 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 18:11:36.919533   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 18:11:36.956934   74203 cri.go:89] found id: ""
	I0731 18:11:36.956957   74203 logs.go:276] 0 containers: []
	W0731 18:11:36.956964   74203 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 18:11:36.956971   74203 logs.go:123] Gathering logs for kubelet ...
	I0731 18:11:36.956989   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 18:11:37.003755   74203 logs.go:123] Gathering logs for dmesg ...
	I0731 18:11:37.003783   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 18:11:37.016977   74203 logs.go:123] Gathering logs for describe nodes ...
	I0731 18:11:37.017004   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 18:11:37.091617   74203 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 18:11:37.091646   74203 logs.go:123] Gathering logs for CRI-O ...
	I0731 18:11:37.091662   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 18:11:37.170870   74203 logs.go:123] Gathering logs for container status ...
	I0731 18:11:37.170903   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 18:11:39.714271   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:11:39.730306   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 18:11:39.730383   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 18:11:39.765368   74203 cri.go:89] found id: ""
	I0731 18:11:39.765399   74203 logs.go:276] 0 containers: []
	W0731 18:11:39.765407   74203 logs.go:278] No container was found matching "kube-apiserver"
	I0731 18:11:39.765412   74203 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 18:11:39.765471   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 18:11:39.800394   74203 cri.go:89] found id: ""
	I0731 18:11:39.800419   74203 logs.go:276] 0 containers: []
	W0731 18:11:39.800427   74203 logs.go:278] No container was found matching "etcd"
	I0731 18:11:39.800433   74203 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 18:11:39.800486   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 18:11:39.834861   74203 cri.go:89] found id: ""
	I0731 18:11:39.834889   74203 logs.go:276] 0 containers: []
	W0731 18:11:39.834898   74203 logs.go:278] No container was found matching "coredns"
	I0731 18:11:39.834903   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 18:11:39.834958   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 18:11:39.868108   74203 cri.go:89] found id: ""
	I0731 18:11:39.868132   74203 logs.go:276] 0 containers: []
	W0731 18:11:39.868141   74203 logs.go:278] No container was found matching "kube-scheduler"
	I0731 18:11:39.868146   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 18:11:39.868220   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 18:11:39.902097   74203 cri.go:89] found id: ""
	I0731 18:11:39.902120   74203 logs.go:276] 0 containers: []
	W0731 18:11:39.902128   74203 logs.go:278] No container was found matching "kube-proxy"
	I0731 18:11:39.902134   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 18:11:39.902184   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 18:11:39.933073   74203 cri.go:89] found id: ""
	I0731 18:11:39.933100   74203 logs.go:276] 0 containers: []
	W0731 18:11:39.933109   74203 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 18:11:39.933114   74203 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 18:11:39.933165   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 18:11:39.965748   74203 cri.go:89] found id: ""
	I0731 18:11:39.965775   74203 logs.go:276] 0 containers: []
	W0731 18:11:39.965785   74203 logs.go:278] No container was found matching "kindnet"
	I0731 18:11:39.965796   74203 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 18:11:39.965856   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 18:11:39.998164   74203 cri.go:89] found id: ""
	I0731 18:11:39.998189   74203 logs.go:276] 0 containers: []
	W0731 18:11:39.998197   74203 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 18:11:39.998205   74203 logs.go:123] Gathering logs for kubelet ...
	I0731 18:11:39.998222   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 18:11:40.049991   74203 logs.go:123] Gathering logs for dmesg ...
	I0731 18:11:40.050027   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 18:11:40.063676   74203 logs.go:123] Gathering logs for describe nodes ...
	I0731 18:11:40.063705   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 18:11:40.125855   74203 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 18:11:40.125880   74203 logs.go:123] Gathering logs for CRI-O ...
	I0731 18:11:40.125896   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 18:11:40.207937   74203 logs.go:123] Gathering logs for container status ...
	I0731 18:11:40.207970   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 18:11:37.778894   73696 pod_ready.go:102] pod "metrics-server-569cc877fc-64hp4" in "kube-system" namespace has status "Ready":"False"
	I0731 18:11:40.278489   73696 pod_ready.go:102] pod "metrics-server-569cc877fc-64hp4" in "kube-system" namespace has status "Ready":"False"
	I0731 18:11:37.643651   73479 pod_ready.go:102] pod "metrics-server-78fcd8795b-27pkr" in "kube-system" namespace has status "Ready":"False"
	I0731 18:11:40.144731   73479 pod_ready.go:102] pod "metrics-server-78fcd8795b-27pkr" in "kube-system" namespace has status "Ready":"False"
	I0731 18:11:41.271897   73800 pod_ready.go:102] pod "metrics-server-569cc877fc-fzxrw" in "kube-system" namespace has status "Ready":"False"
	I0731 18:11:43.771556   73800 pod_ready.go:102] pod "metrics-server-569cc877fc-fzxrw" in "kube-system" namespace has status "Ready":"False"
	I0731 18:11:42.746315   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:11:42.758998   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 18:11:42.759053   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 18:11:42.791921   74203 cri.go:89] found id: ""
	I0731 18:11:42.791946   74203 logs.go:276] 0 containers: []
	W0731 18:11:42.791954   74203 logs.go:278] No container was found matching "kube-apiserver"
	I0731 18:11:42.791958   74203 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 18:11:42.792004   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 18:11:42.822888   74203 cri.go:89] found id: ""
	I0731 18:11:42.822914   74203 logs.go:276] 0 containers: []
	W0731 18:11:42.822922   74203 logs.go:278] No container was found matching "etcd"
	I0731 18:11:42.822927   74203 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 18:11:42.822973   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 18:11:42.854516   74203 cri.go:89] found id: ""
	I0731 18:11:42.854545   74203 logs.go:276] 0 containers: []
	W0731 18:11:42.854564   74203 logs.go:278] No container was found matching "coredns"
	I0731 18:11:42.854574   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 18:11:42.854638   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 18:11:42.890933   74203 cri.go:89] found id: ""
	I0731 18:11:42.890955   74203 logs.go:276] 0 containers: []
	W0731 18:11:42.890963   74203 logs.go:278] No container was found matching "kube-scheduler"
	I0731 18:11:42.890968   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 18:11:42.891013   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 18:11:42.925170   74203 cri.go:89] found id: ""
	I0731 18:11:42.925196   74203 logs.go:276] 0 containers: []
	W0731 18:11:42.925206   74203 logs.go:278] No container was found matching "kube-proxy"
	I0731 18:11:42.925213   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 18:11:42.925273   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 18:11:42.959845   74203 cri.go:89] found id: ""
	I0731 18:11:42.959868   74203 logs.go:276] 0 containers: []
	W0731 18:11:42.959876   74203 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 18:11:42.959881   74203 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 18:11:42.959938   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 18:11:42.997305   74203 cri.go:89] found id: ""
	I0731 18:11:42.997346   74203 logs.go:276] 0 containers: []
	W0731 18:11:42.997358   74203 logs.go:278] No container was found matching "kindnet"
	I0731 18:11:42.997366   74203 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 18:11:42.997427   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 18:11:43.030663   74203 cri.go:89] found id: ""
	I0731 18:11:43.030690   74203 logs.go:276] 0 containers: []
	W0731 18:11:43.030700   74203 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 18:11:43.030711   74203 logs.go:123] Gathering logs for describe nodes ...
	I0731 18:11:43.030725   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 18:11:43.112280   74203 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 18:11:43.112303   74203 logs.go:123] Gathering logs for CRI-O ...
	I0731 18:11:43.112318   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 18:11:43.209002   74203 logs.go:123] Gathering logs for container status ...
	I0731 18:11:43.209035   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 18:11:43.249596   74203 logs.go:123] Gathering logs for kubelet ...
	I0731 18:11:43.249629   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 18:11:43.302419   74203 logs.go:123] Gathering logs for dmesg ...
	I0731 18:11:43.302449   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 18:11:42.278874   73696 pod_ready.go:102] pod "metrics-server-569cc877fc-64hp4" in "kube-system" namespace has status "Ready":"False"
	I0731 18:11:44.273355   73696 pod_ready.go:81] duration metric: took 4m0.000454583s for pod "metrics-server-569cc877fc-64hp4" in "kube-system" namespace to be "Ready" ...
	E0731 18:11:44.273380   73696 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-569cc877fc-64hp4" in "kube-system" namespace to be "Ready" (will not retry!)
	I0731 18:11:44.273399   73696 pod_ready.go:38] duration metric: took 4m8.019714552s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0731 18:11:44.273430   73696 kubeadm.go:597] duration metric: took 4m16.379038728s to restartPrimaryControlPlane
	W0731 18:11:44.273506   73696 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0731 18:11:44.273531   73696 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0731 18:11:42.643165   73479 pod_ready.go:102] pod "metrics-server-78fcd8795b-27pkr" in "kube-system" namespace has status "Ready":"False"
	I0731 18:11:44.644976   73479 pod_ready.go:102] pod "metrics-server-78fcd8795b-27pkr" in "kube-system" namespace has status "Ready":"False"
	I0731 18:11:46.271751   73800 pod_ready.go:102] pod "metrics-server-569cc877fc-fzxrw" in "kube-system" namespace has status "Ready":"False"
	I0731 18:11:48.771274   73800 pod_ready.go:102] pod "metrics-server-569cc877fc-fzxrw" in "kube-system" namespace has status "Ready":"False"
	I0731 18:11:45.816910   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:11:45.829909   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 18:11:45.829976   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 18:11:45.865534   74203 cri.go:89] found id: ""
	I0731 18:11:45.865561   74203 logs.go:276] 0 containers: []
	W0731 18:11:45.865572   74203 logs.go:278] No container was found matching "kube-apiserver"
	I0731 18:11:45.865584   74203 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 18:11:45.865646   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 18:11:45.901552   74203 cri.go:89] found id: ""
	I0731 18:11:45.901585   74203 logs.go:276] 0 containers: []
	W0731 18:11:45.901593   74203 logs.go:278] No container was found matching "etcd"
	I0731 18:11:45.901598   74203 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 18:11:45.901678   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 18:11:45.938790   74203 cri.go:89] found id: ""
	I0731 18:11:45.938820   74203 logs.go:276] 0 containers: []
	W0731 18:11:45.938842   74203 logs.go:278] No container was found matching "coredns"
	I0731 18:11:45.938859   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 18:11:45.938926   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 18:11:45.971502   74203 cri.go:89] found id: ""
	I0731 18:11:45.971534   74203 logs.go:276] 0 containers: []
	W0731 18:11:45.971546   74203 logs.go:278] No container was found matching "kube-scheduler"
	I0731 18:11:45.971553   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 18:11:45.971620   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 18:11:46.009281   74203 cri.go:89] found id: ""
	I0731 18:11:46.009316   74203 logs.go:276] 0 containers: []
	W0731 18:11:46.009327   74203 logs.go:278] No container was found matching "kube-proxy"
	I0731 18:11:46.009335   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 18:11:46.009399   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 18:11:46.042899   74203 cri.go:89] found id: ""
	I0731 18:11:46.042928   74203 logs.go:276] 0 containers: []
	W0731 18:11:46.042939   74203 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 18:11:46.042947   74203 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 18:11:46.043005   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 18:11:46.079982   74203 cri.go:89] found id: ""
	I0731 18:11:46.080013   74203 logs.go:276] 0 containers: []
	W0731 18:11:46.080024   74203 logs.go:278] No container was found matching "kindnet"
	I0731 18:11:46.080031   74203 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 18:11:46.080098   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 18:11:46.113136   74203 cri.go:89] found id: ""
	I0731 18:11:46.113168   74203 logs.go:276] 0 containers: []
	W0731 18:11:46.113179   74203 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 18:11:46.113191   74203 logs.go:123] Gathering logs for kubelet ...
	I0731 18:11:46.113206   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 18:11:46.165818   74203 logs.go:123] Gathering logs for dmesg ...
	I0731 18:11:46.165855   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 18:11:46.181058   74203 logs.go:123] Gathering logs for describe nodes ...
	I0731 18:11:46.181083   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 18:11:46.256805   74203 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 18:11:46.256826   74203 logs.go:123] Gathering logs for CRI-O ...
	I0731 18:11:46.256838   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 18:11:46.353045   74203 logs.go:123] Gathering logs for container status ...
	I0731 18:11:46.353093   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 18:11:48.894656   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:11:48.910648   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 18:11:48.910723   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 18:11:48.941080   74203 cri.go:89] found id: ""
	I0731 18:11:48.941103   74203 logs.go:276] 0 containers: []
	W0731 18:11:48.941111   74203 logs.go:278] No container was found matching "kube-apiserver"
	I0731 18:11:48.941117   74203 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 18:11:48.941164   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 18:11:48.972113   74203 cri.go:89] found id: ""
	I0731 18:11:48.972136   74203 logs.go:276] 0 containers: []
	W0731 18:11:48.972146   74203 logs.go:278] No container was found matching "etcd"
	I0731 18:11:48.972151   74203 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 18:11:48.972208   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 18:11:49.004521   74203 cri.go:89] found id: ""
	I0731 18:11:49.004547   74203 logs.go:276] 0 containers: []
	W0731 18:11:49.004557   74203 logs.go:278] No container was found matching "coredns"
	I0731 18:11:49.004571   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 18:11:49.004658   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 18:11:49.036600   74203 cri.go:89] found id: ""
	I0731 18:11:49.036622   74203 logs.go:276] 0 containers: []
	W0731 18:11:49.036629   74203 logs.go:278] No container was found matching "kube-scheduler"
	I0731 18:11:49.036635   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 18:11:49.036683   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 18:11:49.071397   74203 cri.go:89] found id: ""
	I0731 18:11:49.071426   74203 logs.go:276] 0 containers: []
	W0731 18:11:49.071436   74203 logs.go:278] No container was found matching "kube-proxy"
	I0731 18:11:49.071444   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 18:11:49.071501   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 18:11:49.108907   74203 cri.go:89] found id: ""
	I0731 18:11:49.108933   74203 logs.go:276] 0 containers: []
	W0731 18:11:49.108944   74203 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 18:11:49.108952   74203 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 18:11:49.109007   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 18:11:49.141808   74203 cri.go:89] found id: ""
	I0731 18:11:49.141834   74203 logs.go:276] 0 containers: []
	W0731 18:11:49.141844   74203 logs.go:278] No container was found matching "kindnet"
	I0731 18:11:49.141856   74203 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 18:11:49.141917   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 18:11:49.174063   74203 cri.go:89] found id: ""
	I0731 18:11:49.174087   74203 logs.go:276] 0 containers: []
	W0731 18:11:49.174095   74203 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 18:11:49.174104   74203 logs.go:123] Gathering logs for container status ...
	I0731 18:11:49.174116   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 18:11:49.212152   74203 logs.go:123] Gathering logs for kubelet ...
	I0731 18:11:49.212181   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 18:11:49.267297   74203 logs.go:123] Gathering logs for dmesg ...
	I0731 18:11:49.267324   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 18:11:49.281342   74203 logs.go:123] Gathering logs for describe nodes ...
	I0731 18:11:49.281365   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 18:11:49.349843   74203 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 18:11:49.349866   74203 logs.go:123] Gathering logs for CRI-O ...
	I0731 18:11:49.349882   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 18:11:47.144588   73479 pod_ready.go:102] pod "metrics-server-78fcd8795b-27pkr" in "kube-system" namespace has status "Ready":"False"
	I0731 18:11:49.644395   73479 pod_ready.go:102] pod "metrics-server-78fcd8795b-27pkr" in "kube-system" namespace has status "Ready":"False"
	I0731 18:11:51.271203   73800 pod_ready.go:102] pod "metrics-server-569cc877fc-fzxrw" in "kube-system" namespace has status "Ready":"False"
	I0731 18:11:53.770849   73800 pod_ready.go:102] pod "metrics-server-569cc877fc-fzxrw" in "kube-system" namespace has status "Ready":"False"
	I0731 18:11:51.927764   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:11:51.940480   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 18:11:51.940539   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 18:11:51.973731   74203 cri.go:89] found id: ""
	I0731 18:11:51.973759   74203 logs.go:276] 0 containers: []
	W0731 18:11:51.973768   74203 logs.go:278] No container was found matching "kube-apiserver"
	I0731 18:11:51.973780   74203 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 18:11:51.973837   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 18:11:52.003761   74203 cri.go:89] found id: ""
	I0731 18:11:52.003783   74203 logs.go:276] 0 containers: []
	W0731 18:11:52.003790   74203 logs.go:278] No container was found matching "etcd"
	I0731 18:11:52.003795   74203 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 18:11:52.003844   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 18:11:52.035009   74203 cri.go:89] found id: ""
	I0731 18:11:52.035028   74203 logs.go:276] 0 containers: []
	W0731 18:11:52.035035   74203 logs.go:278] No container was found matching "coredns"
	I0731 18:11:52.035041   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 18:11:52.035100   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 18:11:52.065475   74203 cri.go:89] found id: ""
	I0731 18:11:52.065501   74203 logs.go:276] 0 containers: []
	W0731 18:11:52.065509   74203 logs.go:278] No container was found matching "kube-scheduler"
	I0731 18:11:52.065515   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 18:11:52.065574   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 18:11:52.097529   74203 cri.go:89] found id: ""
	I0731 18:11:52.097558   74203 logs.go:276] 0 containers: []
	W0731 18:11:52.097567   74203 logs.go:278] No container was found matching "kube-proxy"
	I0731 18:11:52.097573   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 18:11:52.097622   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 18:11:52.128881   74203 cri.go:89] found id: ""
	I0731 18:11:52.128909   74203 logs.go:276] 0 containers: []
	W0731 18:11:52.128917   74203 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 18:11:52.128923   74203 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 18:11:52.128974   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 18:11:52.159894   74203 cri.go:89] found id: ""
	I0731 18:11:52.159921   74203 logs.go:276] 0 containers: []
	W0731 18:11:52.159931   74203 logs.go:278] No container was found matching "kindnet"
	I0731 18:11:52.159939   74203 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 18:11:52.159998   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 18:11:52.191955   74203 cri.go:89] found id: ""
	I0731 18:11:52.191981   74203 logs.go:276] 0 containers: []
	W0731 18:11:52.191990   74203 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 18:11:52.191999   74203 logs.go:123] Gathering logs for kubelet ...
	I0731 18:11:52.192009   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 18:11:52.246389   74203 logs.go:123] Gathering logs for dmesg ...
	I0731 18:11:52.246423   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 18:11:52.260226   74203 logs.go:123] Gathering logs for describe nodes ...
	I0731 18:11:52.260253   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 18:11:52.328423   74203 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 18:11:52.328447   74203 logs.go:123] Gathering logs for CRI-O ...
	I0731 18:11:52.328459   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 18:11:52.408456   74203 logs.go:123] Gathering logs for container status ...
	I0731 18:11:52.408495   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 18:11:54.947734   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:11:54.960359   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 18:11:54.960420   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 18:11:54.994231   74203 cri.go:89] found id: ""
	I0731 18:11:54.994256   74203 logs.go:276] 0 containers: []
	W0731 18:11:54.994264   74203 logs.go:278] No container was found matching "kube-apiserver"
	I0731 18:11:54.994270   74203 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 18:11:54.994332   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 18:11:55.027323   74203 cri.go:89] found id: ""
	I0731 18:11:55.027364   74203 logs.go:276] 0 containers: []
	W0731 18:11:55.027374   74203 logs.go:278] No container was found matching "etcd"
	I0731 18:11:55.027382   74203 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 18:11:55.027440   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 18:11:55.061741   74203 cri.go:89] found id: ""
	I0731 18:11:55.061763   74203 logs.go:276] 0 containers: []
	W0731 18:11:55.061771   74203 logs.go:278] No container was found matching "coredns"
	I0731 18:11:55.061776   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 18:11:55.061822   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 18:11:55.100685   74203 cri.go:89] found id: ""
	I0731 18:11:55.100712   74203 logs.go:276] 0 containers: []
	W0731 18:11:55.100720   74203 logs.go:278] No container was found matching "kube-scheduler"
	I0731 18:11:55.100726   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 18:11:55.100780   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 18:11:55.141917   74203 cri.go:89] found id: ""
	I0731 18:11:55.141958   74203 logs.go:276] 0 containers: []
	W0731 18:11:55.141971   74203 logs.go:278] No container was found matching "kube-proxy"
	I0731 18:11:55.141980   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 18:11:55.142054   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 18:11:55.176669   74203 cri.go:89] found id: ""
	I0731 18:11:55.176702   74203 logs.go:276] 0 containers: []
	W0731 18:11:55.176711   74203 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 18:11:55.176718   74203 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 18:11:55.176780   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 18:11:55.209795   74203 cri.go:89] found id: ""
	I0731 18:11:55.209829   74203 logs.go:276] 0 containers: []
	W0731 18:11:55.209842   74203 logs.go:278] No container was found matching "kindnet"
	I0731 18:11:55.209850   74203 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 18:11:55.209915   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 18:11:55.244503   74203 cri.go:89] found id: ""
	I0731 18:11:55.244527   74203 logs.go:276] 0 containers: []
	W0731 18:11:55.244537   74203 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 18:11:55.244556   74203 logs.go:123] Gathering logs for CRI-O ...
	I0731 18:11:55.244572   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 18:11:55.320033   74203 logs.go:123] Gathering logs for container status ...
	I0731 18:11:55.320071   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 18:11:52.143803   73479 pod_ready.go:102] pod "metrics-server-78fcd8795b-27pkr" in "kube-system" namespace has status "Ready":"False"
	I0731 18:11:54.644223   73479 pod_ready.go:102] pod "metrics-server-78fcd8795b-27pkr" in "kube-system" namespace has status "Ready":"False"
	I0731 18:11:56.273321   73800 pod_ready.go:102] pod "metrics-server-569cc877fc-fzxrw" in "kube-system" namespace has status "Ready":"False"
	I0731 18:11:58.772541   73800 pod_ready.go:102] pod "metrics-server-569cc877fc-fzxrw" in "kube-system" namespace has status "Ready":"False"
	I0731 18:11:55.357684   74203 logs.go:123] Gathering logs for kubelet ...
	I0731 18:11:55.357719   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 18:11:55.411465   74203 logs.go:123] Gathering logs for dmesg ...
	I0731 18:11:55.411501   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 18:11:55.423802   74203 logs.go:123] Gathering logs for describe nodes ...
	I0731 18:11:55.423833   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 18:11:55.487945   74203 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 18:11:57.988078   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:11:58.001639   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 18:11:58.001724   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 18:11:58.036075   74203 cri.go:89] found id: ""
	I0731 18:11:58.036099   74203 logs.go:276] 0 containers: []
	W0731 18:11:58.036107   74203 logs.go:278] No container was found matching "kube-apiserver"
	I0731 18:11:58.036112   74203 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 18:11:58.036163   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 18:11:58.067316   74203 cri.go:89] found id: ""
	I0731 18:11:58.067340   74203 logs.go:276] 0 containers: []
	W0731 18:11:58.067348   74203 logs.go:278] No container was found matching "etcd"
	I0731 18:11:58.067353   74203 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 18:11:58.067420   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 18:11:58.102446   74203 cri.go:89] found id: ""
	I0731 18:11:58.102470   74203 logs.go:276] 0 containers: []
	W0731 18:11:58.102479   74203 logs.go:278] No container was found matching "coredns"
	I0731 18:11:58.102485   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 18:11:58.102553   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 18:11:58.134924   74203 cri.go:89] found id: ""
	I0731 18:11:58.134949   74203 logs.go:276] 0 containers: []
	W0731 18:11:58.134957   74203 logs.go:278] No container was found matching "kube-scheduler"
	I0731 18:11:58.134963   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 18:11:58.135023   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 18:11:58.171589   74203 cri.go:89] found id: ""
	I0731 18:11:58.171611   74203 logs.go:276] 0 containers: []
	W0731 18:11:58.171620   74203 logs.go:278] No container was found matching "kube-proxy"
	I0731 18:11:58.171625   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 18:11:58.171673   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 18:11:58.203813   74203 cri.go:89] found id: ""
	I0731 18:11:58.203836   74203 logs.go:276] 0 containers: []
	W0731 18:11:58.203844   74203 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 18:11:58.203850   74203 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 18:11:58.203911   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 18:11:58.236251   74203 cri.go:89] found id: ""
	I0731 18:11:58.236277   74203 logs.go:276] 0 containers: []
	W0731 18:11:58.236288   74203 logs.go:278] No container was found matching "kindnet"
	I0731 18:11:58.236295   74203 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 18:11:58.236357   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 18:11:58.270595   74203 cri.go:89] found id: ""
	I0731 18:11:58.270624   74203 logs.go:276] 0 containers: []
	W0731 18:11:58.270636   74203 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 18:11:58.270647   74203 logs.go:123] Gathering logs for kubelet ...
	I0731 18:11:58.270662   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 18:11:58.321889   74203 logs.go:123] Gathering logs for dmesg ...
	I0731 18:11:58.321927   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 18:11:58.334529   74203 logs.go:123] Gathering logs for describe nodes ...
	I0731 18:11:58.334552   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 18:11:58.398489   74203 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 18:11:58.398515   74203 logs.go:123] Gathering logs for CRI-O ...
	I0731 18:11:58.398540   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 18:11:58.479657   74203 logs.go:123] Gathering logs for container status ...
	I0731 18:11:58.479695   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 18:11:57.143080   73479 pod_ready.go:102] pod "metrics-server-78fcd8795b-27pkr" in "kube-system" namespace has status "Ready":"False"
	I0731 18:11:59.144357   73479 pod_ready.go:102] pod "metrics-server-78fcd8795b-27pkr" in "kube-system" namespace has status "Ready":"False"
	I0731 18:12:01.643343   73479 pod_ready.go:102] pod "metrics-server-78fcd8795b-27pkr" in "kube-system" namespace has status "Ready":"False"
	I0731 18:12:01.266100   73800 pod_ready.go:81] duration metric: took 4m0.000711681s for pod "metrics-server-569cc877fc-fzxrw" in "kube-system" namespace to be "Ready" ...
	E0731 18:12:01.266123   73800 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-569cc877fc-fzxrw" in "kube-system" namespace to be "Ready" (will not retry!)
	I0731 18:12:01.266160   73800 pod_ready.go:38] duration metric: took 4m6.529342365s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0731 18:12:01.266205   73800 kubeadm.go:597] duration metric: took 4m13.643145888s to restartPrimaryControlPlane
	W0731 18:12:01.266270   73800 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0731 18:12:01.266297   73800 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0731 18:12:01.014684   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:12:01.027959   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 18:12:01.028026   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 18:12:01.065423   74203 cri.go:89] found id: ""
	I0731 18:12:01.065459   74203 logs.go:276] 0 containers: []
	W0731 18:12:01.065472   74203 logs.go:278] No container was found matching "kube-apiserver"
	I0731 18:12:01.065481   74203 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 18:12:01.065545   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 18:12:01.099519   74203 cri.go:89] found id: ""
	I0731 18:12:01.099549   74203 logs.go:276] 0 containers: []
	W0731 18:12:01.099561   74203 logs.go:278] No container was found matching "etcd"
	I0731 18:12:01.099568   74203 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 18:12:01.099630   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 18:12:01.131239   74203 cri.go:89] found id: ""
	I0731 18:12:01.131262   74203 logs.go:276] 0 containers: []
	W0731 18:12:01.131270   74203 logs.go:278] No container was found matching "coredns"
	I0731 18:12:01.131275   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 18:12:01.131321   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 18:12:01.163209   74203 cri.go:89] found id: ""
	I0731 18:12:01.163229   74203 logs.go:276] 0 containers: []
	W0731 18:12:01.163237   74203 logs.go:278] No container was found matching "kube-scheduler"
	I0731 18:12:01.163242   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 18:12:01.163295   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 18:12:01.201165   74203 cri.go:89] found id: ""
	I0731 18:12:01.201193   74203 logs.go:276] 0 containers: []
	W0731 18:12:01.201204   74203 logs.go:278] No container was found matching "kube-proxy"
	I0731 18:12:01.201217   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 18:12:01.201274   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 18:12:01.233310   74203 cri.go:89] found id: ""
	I0731 18:12:01.233334   74203 logs.go:276] 0 containers: []
	W0731 18:12:01.233342   74203 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 18:12:01.233348   74203 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 18:12:01.233415   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 18:12:01.263412   74203 cri.go:89] found id: ""
	I0731 18:12:01.263442   74203 logs.go:276] 0 containers: []
	W0731 18:12:01.263452   74203 logs.go:278] No container was found matching "kindnet"
	I0731 18:12:01.263459   74203 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 18:12:01.263521   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 18:12:01.296598   74203 cri.go:89] found id: ""
	I0731 18:12:01.296624   74203 logs.go:276] 0 containers: []
	W0731 18:12:01.296632   74203 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 18:12:01.296642   74203 logs.go:123] Gathering logs for describe nodes ...
	I0731 18:12:01.296656   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 18:12:01.372362   74203 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 18:12:01.372381   74203 logs.go:123] Gathering logs for CRI-O ...
	I0731 18:12:01.372395   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 18:12:01.461997   74203 logs.go:123] Gathering logs for container status ...
	I0731 18:12:01.462029   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 18:12:01.507610   74203 logs.go:123] Gathering logs for kubelet ...
	I0731 18:12:01.507636   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 18:12:01.558335   74203 logs.go:123] Gathering logs for dmesg ...
	I0731 18:12:01.558375   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 18:12:04.073333   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:12:04.091122   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 18:12:04.091205   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 18:12:04.130510   74203 cri.go:89] found id: ""
	I0731 18:12:04.130545   74203 logs.go:276] 0 containers: []
	W0731 18:12:04.130558   74203 logs.go:278] No container was found matching "kube-apiserver"
	I0731 18:12:04.130566   74203 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 18:12:04.130632   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 18:12:04.174749   74203 cri.go:89] found id: ""
	I0731 18:12:04.174775   74203 logs.go:276] 0 containers: []
	W0731 18:12:04.174785   74203 logs.go:278] No container was found matching "etcd"
	I0731 18:12:04.174792   74203 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 18:12:04.174846   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 18:12:04.212123   74203 cri.go:89] found id: ""
	I0731 18:12:04.212160   74203 logs.go:276] 0 containers: []
	W0731 18:12:04.212172   74203 logs.go:278] No container was found matching "coredns"
	I0731 18:12:04.212180   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 18:12:04.212254   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 18:12:04.251558   74203 cri.go:89] found id: ""
	I0731 18:12:04.251589   74203 logs.go:276] 0 containers: []
	W0731 18:12:04.251600   74203 logs.go:278] No container was found matching "kube-scheduler"
	I0731 18:12:04.251608   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 18:12:04.251671   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 18:12:04.284831   74203 cri.go:89] found id: ""
	I0731 18:12:04.284864   74203 logs.go:276] 0 containers: []
	W0731 18:12:04.284878   74203 logs.go:278] No container was found matching "kube-proxy"
	I0731 18:12:04.284886   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 18:12:04.284954   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 18:12:04.325076   74203 cri.go:89] found id: ""
	I0731 18:12:04.325115   74203 logs.go:276] 0 containers: []
	W0731 18:12:04.325126   74203 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 18:12:04.325135   74203 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 18:12:04.325195   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 18:12:04.370883   74203 cri.go:89] found id: ""
	I0731 18:12:04.370922   74203 logs.go:276] 0 containers: []
	W0731 18:12:04.370933   74203 logs.go:278] No container was found matching "kindnet"
	I0731 18:12:04.370940   74203 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 18:12:04.370999   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 18:12:04.410639   74203 cri.go:89] found id: ""
	I0731 18:12:04.410671   74203 logs.go:276] 0 containers: []
	W0731 18:12:04.410685   74203 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 18:12:04.410697   74203 logs.go:123] Gathering logs for kubelet ...
	I0731 18:12:04.410713   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 18:12:04.462988   74203 logs.go:123] Gathering logs for dmesg ...
	I0731 18:12:04.463023   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 18:12:04.479086   74203 logs.go:123] Gathering logs for describe nodes ...
	I0731 18:12:04.479123   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 18:12:04.544675   74203 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 18:12:04.544699   74203 logs.go:123] Gathering logs for CRI-O ...
	I0731 18:12:04.544712   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 18:12:04.633231   74203 logs.go:123] Gathering logs for container status ...
	I0731 18:12:04.633267   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
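	For context on the cycle above: with the apiserver unreachable, the tooling enumerates CRI containers by component name and then falls back to host-level logs. A minimal shell sketch of the same probes, with every command taken verbatim from the log lines above (empty crictl output is what produces the "No container was found" warnings):

	    for name in kube-apiserver etcd coredns kube-scheduler kube-proxy \
	                kube-controller-manager kindnet kubernetes-dashboard; do
	      # list containers in any state whose name matches; empty output means "not found"
	      sudo crictl ps -a --quiet --name="$name"
	    done
	    # host-level fallbacks gathered when no containers are found
	    sudo journalctl -u kubelet -n 400
	    sudo journalctl -u crio -n 400
	    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400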
	I0731 18:12:03.645118   73479 pod_ready.go:102] pod "metrics-server-78fcd8795b-27pkr" in "kube-system" namespace has status "Ready":"False"
	I0731 18:12:06.143865   73479 pod_ready.go:102] pod "metrics-server-78fcd8795b-27pkr" in "kube-system" namespace has status "Ready":"False"
	I0731 18:12:07.174252   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:12:07.187289   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 18:12:07.187393   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 18:12:07.220927   74203 cri.go:89] found id: ""
	I0731 18:12:07.220953   74203 logs.go:276] 0 containers: []
	W0731 18:12:07.220964   74203 logs.go:278] No container was found matching "kube-apiserver"
	I0731 18:12:07.220972   74203 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 18:12:07.221040   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 18:12:07.256817   74203 cri.go:89] found id: ""
	I0731 18:12:07.256849   74203 logs.go:276] 0 containers: []
	W0731 18:12:07.256861   74203 logs.go:278] No container was found matching "etcd"
	I0731 18:12:07.256870   74203 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 18:12:07.256935   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 18:12:07.290267   74203 cri.go:89] found id: ""
	I0731 18:12:07.290297   74203 logs.go:276] 0 containers: []
	W0731 18:12:07.290309   74203 logs.go:278] No container was found matching "coredns"
	I0731 18:12:07.290315   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 18:12:07.290373   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 18:12:07.330037   74203 cri.go:89] found id: ""
	I0731 18:12:07.330068   74203 logs.go:276] 0 containers: []
	W0731 18:12:07.330079   74203 logs.go:278] No container was found matching "kube-scheduler"
	I0731 18:12:07.330087   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 18:12:07.330143   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 18:12:07.366745   74203 cri.go:89] found id: ""
	I0731 18:12:07.366770   74203 logs.go:276] 0 containers: []
	W0731 18:12:07.366778   74203 logs.go:278] No container was found matching "kube-proxy"
	I0731 18:12:07.366783   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 18:12:07.366861   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 18:12:07.400608   74203 cri.go:89] found id: ""
	I0731 18:12:07.400637   74203 logs.go:276] 0 containers: []
	W0731 18:12:07.400648   74203 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 18:12:07.400661   74203 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 18:12:07.400727   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 18:12:07.434996   74203 cri.go:89] found id: ""
	I0731 18:12:07.435028   74203 logs.go:276] 0 containers: []
	W0731 18:12:07.435037   74203 logs.go:278] No container was found matching "kindnet"
	I0731 18:12:07.435044   74203 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 18:12:07.435130   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 18:12:07.474347   74203 cri.go:89] found id: ""
	I0731 18:12:07.474375   74203 logs.go:276] 0 containers: []
	W0731 18:12:07.474387   74203 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 18:12:07.474400   74203 logs.go:123] Gathering logs for CRI-O ...
	I0731 18:12:07.474415   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 18:12:07.549009   74203 logs.go:123] Gathering logs for container status ...
	I0731 18:12:07.549045   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 18:12:07.586710   74203 logs.go:123] Gathering logs for kubelet ...
	I0731 18:12:07.586736   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 18:12:07.640770   74203 logs.go:123] Gathering logs for dmesg ...
	I0731 18:12:07.640800   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 18:12:07.654380   74203 logs.go:123] Gathering logs for describe nodes ...
	I0731 18:12:07.654405   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 18:12:07.721479   74203 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 18:12:10.221837   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:12:10.235686   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 18:12:10.235746   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 18:12:10.268769   74203 cri.go:89] found id: ""
	I0731 18:12:10.268794   74203 logs.go:276] 0 containers: []
	W0731 18:12:10.268802   74203 logs.go:278] No container was found matching "kube-apiserver"
	I0731 18:12:10.268808   74203 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 18:12:10.268860   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 18:12:10.305229   74203 cri.go:89] found id: ""
	I0731 18:12:10.305264   74203 logs.go:276] 0 containers: []
	W0731 18:12:10.305277   74203 logs.go:278] No container was found matching "etcd"
	I0731 18:12:10.305290   74203 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 18:12:10.305353   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 18:12:10.337070   74203 cri.go:89] found id: ""
	I0731 18:12:10.337095   74203 logs.go:276] 0 containers: []
	W0731 18:12:10.337104   74203 logs.go:278] No container was found matching "coredns"
	I0731 18:12:10.337109   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 18:12:10.337155   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 18:12:08.643708   73479 pod_ready.go:102] pod "metrics-server-78fcd8795b-27pkr" in "kube-system" namespace has status "Ready":"False"
	I0731 18:12:10.645483   73479 pod_ready.go:102] pod "metrics-server-78fcd8795b-27pkr" in "kube-system" namespace has status "Ready":"False"
	I0731 18:12:10.372979   74203 cri.go:89] found id: ""
	I0731 18:12:10.373005   74203 logs.go:276] 0 containers: []
	W0731 18:12:10.373015   74203 logs.go:278] No container was found matching "kube-scheduler"
	I0731 18:12:10.373022   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 18:12:10.373079   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 18:12:10.407225   74203 cri.go:89] found id: ""
	I0731 18:12:10.407252   74203 logs.go:276] 0 containers: []
	W0731 18:12:10.407264   74203 logs.go:278] No container was found matching "kube-proxy"
	I0731 18:12:10.407270   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 18:12:10.407327   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 18:12:10.443338   74203 cri.go:89] found id: ""
	I0731 18:12:10.443366   74203 logs.go:276] 0 containers: []
	W0731 18:12:10.443377   74203 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 18:12:10.443385   74203 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 18:12:10.443474   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 18:12:10.477005   74203 cri.go:89] found id: ""
	I0731 18:12:10.477030   74203 logs.go:276] 0 containers: []
	W0731 18:12:10.477038   74203 logs.go:278] No container was found matching "kindnet"
	I0731 18:12:10.477043   74203 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 18:12:10.477092   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 18:12:10.509338   74203 cri.go:89] found id: ""
	I0731 18:12:10.509367   74203 logs.go:276] 0 containers: []
	W0731 18:12:10.509378   74203 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 18:12:10.509389   74203 logs.go:123] Gathering logs for kubelet ...
	I0731 18:12:10.509405   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 18:12:10.559604   74203 logs.go:123] Gathering logs for dmesg ...
	I0731 18:12:10.559639   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 18:12:10.572652   74203 logs.go:123] Gathering logs for describe nodes ...
	I0731 18:12:10.572682   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 18:12:10.642749   74203 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 18:12:10.642772   74203 logs.go:123] Gathering logs for CRI-O ...
	I0731 18:12:10.642789   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 18:12:10.728716   74203 logs.go:123] Gathering logs for container status ...
	I0731 18:12:10.728753   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 18:12:13.267783   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:12:13.282235   74203 kubeadm.go:597] duration metric: took 4m4.41837453s to restartPrimaryControlPlane
	W0731 18:12:13.282324   74203 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0731 18:12:13.282355   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0731 18:12:15.410363   73696 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (31.136815784s)
	I0731 18:12:15.410431   73696 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0731 18:12:15.426599   73696 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0731 18:12:15.435823   73696 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0731 18:12:15.444553   73696 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0731 18:12:15.444581   73696 kubeadm.go:157] found existing configuration files:
	
	I0731 18:12:15.444624   73696 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0731 18:12:15.453198   73696 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0731 18:12:15.453273   73696 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0731 18:12:15.461988   73696 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0731 18:12:15.470178   73696 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0731 18:12:15.470238   73696 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0731 18:12:15.478903   73696 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0731 18:12:15.487176   73696 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0731 18:12:15.487215   73696 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0731 18:12:15.496114   73696 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0731 18:12:15.504518   73696 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0731 18:12:15.504579   73696 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
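	The stale-config check recorded above follows one pattern per file: grep the kubeconfig-style file for the expected control-plane endpoint and remove it when the endpoint is missing (or the file is absent), so that kubeadm init can rewrite it. A shell sketch of that pattern, using the port-8444 endpoint from this run:

	    endpoint="https://control-plane.minikube.internal:8444"
	    for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
	      if ! sudo grep -q "$endpoint" "/etc/kubernetes/$f"; then
	        # endpoint missing or file does not exist: drop the stale config
	        sudo rm -f "/etc/kubernetes/$f"
	      fi
	    done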
	I0731 18:12:15.513915   73696 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0731 18:12:15.563318   73696 kubeadm.go:310] [init] Using Kubernetes version: v1.30.3
	I0731 18:12:15.563381   73696 kubeadm.go:310] [preflight] Running pre-flight checks
	I0731 18:12:15.697426   73696 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0731 18:12:15.697574   73696 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0731 18:12:15.697688   73696 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0731 18:12:15.902621   73696 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0731 18:12:15.904763   73696 out.go:204]   - Generating certificates and keys ...
	I0731 18:12:15.904869   73696 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0731 18:12:15.904948   73696 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0731 18:12:15.905049   73696 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0731 18:12:15.905149   73696 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0731 18:12:15.905247   73696 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0731 18:12:15.905328   73696 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0731 18:12:15.905426   73696 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0731 18:12:15.905516   73696 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0731 18:12:15.905620   73696 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0731 18:12:15.905729   73696 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0731 18:12:15.905812   73696 kubeadm.go:310] [certs] Using the existing "sa" key
	I0731 18:12:15.905890   73696 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0731 18:12:16.011366   73696 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0731 18:12:16.171776   73696 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0731 18:12:16.404302   73696 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0731 18:12:16.559451   73696 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0731 18:12:16.686612   73696 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0731 18:12:16.687311   73696 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0731 18:12:16.689956   73696 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0731 18:12:13.142855   73479 pod_ready.go:102] pod "metrics-server-78fcd8795b-27pkr" in "kube-system" namespace has status "Ready":"False"
	I0731 18:12:15.144107   73479 pod_ready.go:102] pod "metrics-server-78fcd8795b-27pkr" in "kube-system" namespace has status "Ready":"False"
	I0731 18:12:16.959318   74203 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (3.676937263s)
	I0731 18:12:16.959425   74203 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0731 18:12:16.973440   74203 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0731 18:12:16.983482   74203 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0731 18:12:16.993930   74203 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0731 18:12:16.993951   74203 kubeadm.go:157] found existing configuration files:
	
	I0731 18:12:16.993993   74203 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0731 18:12:17.002713   74203 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0731 18:12:17.002771   74203 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0731 18:12:17.012107   74203 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0731 18:12:17.022548   74203 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0731 18:12:17.022604   74203 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0731 18:12:17.033569   74203 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0731 18:12:17.043338   74203 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0731 18:12:17.043391   74203 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0731 18:12:17.052064   74203 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0731 18:12:17.060785   74203 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0731 18:12:17.060850   74203 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0731 18:12:17.069499   74203 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0731 18:12:17.136512   74203 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0731 18:12:17.136579   74203 kubeadm.go:310] [preflight] Running pre-flight checks
	I0731 18:12:17.286224   74203 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0731 18:12:17.286383   74203 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0731 18:12:17.286506   74203 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0731 18:12:17.467092   74203 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0731 18:12:17.468918   74203 out.go:204]   - Generating certificates and keys ...
	I0731 18:12:17.469024   74203 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0731 18:12:17.469135   74203 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0731 18:12:17.469229   74203 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0731 18:12:17.469307   74203 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0731 18:12:17.469439   74203 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0731 18:12:17.469525   74203 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0731 18:12:17.469609   74203 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0731 18:12:17.470025   74203 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0731 18:12:17.470501   74203 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0731 18:12:17.470852   74203 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0731 18:12:17.470899   74203 kubeadm.go:310] [certs] Using the existing "sa" key
	I0731 18:12:17.470949   74203 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0731 18:12:17.673308   74203 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0731 18:12:17.922789   74203 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0731 18:12:18.391239   74203 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0731 18:12:18.464854   74203 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0731 18:12:18.480495   74203 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0731 18:12:18.480675   74203 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0731 18:12:18.480746   74203 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0731 18:12:18.632564   74203 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0731 18:12:18.635416   74203 out.go:204]   - Booting up control plane ...
	I0731 18:12:18.635542   74203 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0731 18:12:18.643338   74203 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0731 18:12:18.645881   74203 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0731 18:12:18.646898   74203 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0731 18:12:18.650052   74203 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0731 18:12:16.691876   73696 out.go:204]   - Booting up control plane ...
	I0731 18:12:16.691967   73696 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0731 18:12:16.692064   73696 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0731 18:12:16.692643   73696 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0731 18:12:16.713038   73696 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0731 18:12:16.713123   73696 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0731 18:12:16.713159   73696 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0731 18:12:16.855506   73696 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0731 18:12:16.855638   73696 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0731 18:12:17.856697   73696 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.001297342s
	I0731 18:12:17.856823   73696 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0731 18:12:17.144295   73479 pod_ready.go:102] pod "metrics-server-78fcd8795b-27pkr" in "kube-system" namespace has status "Ready":"False"
	I0731 18:12:19.644100   73479 pod_ready.go:102] pod "metrics-server-78fcd8795b-27pkr" in "kube-system" namespace has status "Ready":"False"
	I0731 18:12:21.644654   73479 pod_ready.go:102] pod "metrics-server-78fcd8795b-27pkr" in "kube-system" namespace has status "Ready":"False"
	I0731 18:12:22.358287   73696 kubeadm.go:310] [api-check] The API server is healthy after 4.501118217s
	I0731 18:12:22.370066   73696 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0731 18:12:22.382929   73696 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0731 18:12:22.402765   73696 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0731 18:12:22.403044   73696 kubeadm.go:310] [mark-control-plane] Marking the node default-k8s-diff-port-094310 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0731 18:12:22.419724   73696 kubeadm.go:310] [bootstrap-token] Using token: hduea8.ix2m91ewiu6okgi9
	I0731 18:12:22.421231   73696 out.go:204]   - Configuring RBAC rules ...
	I0731 18:12:22.421382   73696 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0731 18:12:22.426230   73696 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0731 18:12:22.434423   73696 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0731 18:12:22.437839   73696 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0731 18:12:22.449264   73696 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0731 18:12:22.452420   73696 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0731 18:12:22.764876   73696 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0731 18:12:23.216229   73696 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0731 18:12:23.765173   73696 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0731 18:12:23.766223   73696 kubeadm.go:310] 
	I0731 18:12:23.766311   73696 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0731 18:12:23.766356   73696 kubeadm.go:310] 
	I0731 18:12:23.766466   73696 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0731 18:12:23.766487   73696 kubeadm.go:310] 
	I0731 18:12:23.766521   73696 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0731 18:12:23.766641   73696 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0731 18:12:23.766726   73696 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0731 18:12:23.766741   73696 kubeadm.go:310] 
	I0731 18:12:23.766827   73696 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0731 18:12:23.766844   73696 kubeadm.go:310] 
	I0731 18:12:23.766899   73696 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0731 18:12:23.766910   73696 kubeadm.go:310] 
	I0731 18:12:23.766986   73696 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0731 18:12:23.767089   73696 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0731 18:12:23.767225   73696 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0731 18:12:23.767237   73696 kubeadm.go:310] 
	I0731 18:12:23.767310   73696 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0731 18:12:23.767401   73696 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0731 18:12:23.767411   73696 kubeadm.go:310] 
	I0731 18:12:23.767531   73696 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8444 --token hduea8.ix2m91ewiu6okgi9 \
	I0731 18:12:23.767662   73696 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:460b24b51d10a3760f2391542a8a89e401359854f437f922cbee3f2d8ed6c856 \
	I0731 18:12:23.767695   73696 kubeadm.go:310] 	--control-plane 
	I0731 18:12:23.767702   73696 kubeadm.go:310] 
	I0731 18:12:23.767773   73696 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0731 18:12:23.767782   73696 kubeadm.go:310] 
	I0731 18:12:23.767847   73696 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8444 --token hduea8.ix2m91ewiu6okgi9 \
	I0731 18:12:23.767930   73696 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:460b24b51d10a3760f2391542a8a89e401359854f437f922cbee3f2d8ed6c856 
	I0731 18:12:23.768912   73696 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0731 18:12:23.769058   73696 cni.go:84] Creating CNI manager for ""
	I0731 18:12:23.769073   73696 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0731 18:12:23.771596   73696 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0731 18:12:23.773122   73696 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0731 18:12:23.782944   73696 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
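	The 496-byte /etc/cni/net.d/1-k8s.conflist copied here configures a bridge CNI chain. Its exact contents are not shown in the log; the following is a generic, illustrative example of a bridge conflist of that kind (all field values are assumptions, not the bytes minikube actually wrote):

	    sudo tee /etc/cni/net.d/1-k8s.conflist >/dev/null <<'EOF'
	    {
	      "cniVersion": "0.3.1",
	      "name": "bridge",
	      "plugins": [
	        {
	          "type": "bridge",
	          "bridge": "bridge",
	          "isDefaultGateway": true,
	          "ipMasq": true,
	          "hairpinMode": true,
	          "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
	        },
	        { "type": "portmap", "capabilities": { "portMappings": true } }
	      ]
	    }
	    EOF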
	I0731 18:12:23.800254   73696 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0731 18:12:23.800383   73696 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 18:12:23.800398   73696 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes default-k8s-diff-port-094310 minikube.k8s.io/updated_at=2024_07_31T18_12_23_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=1d737dad7efa60c56d30434fcd857dd3b14c91d9 minikube.k8s.io/name=default-k8s-diff-port-094310 minikube.k8s.io/primary=true
	I0731 18:12:23.827190   73696 ops.go:34] apiserver oom_adj: -16
	I0731 18:12:23.990425   73696 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 18:12:24.490585   73696 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 18:12:24.991490   73696 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 18:12:25.490948   73696 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 18:12:25.991461   73696 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 18:12:23.645259   73479 pod_ready.go:102] pod "metrics-server-78fcd8795b-27pkr" in "kube-system" namespace has status "Ready":"False"
	I0731 18:12:26.144352   73479 pod_ready.go:102] pod "metrics-server-78fcd8795b-27pkr" in "kube-system" namespace has status "Ready":"False"
	I0731 18:12:26.491041   73696 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 18:12:26.990516   73696 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 18:12:27.491386   73696 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 18:12:27.991150   73696 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 18:12:28.490838   73696 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 18:12:28.991267   73696 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 18:12:29.490459   73696 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 18:12:29.990672   73696 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 18:12:30.491302   73696 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 18:12:30.990644   73696 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 18:12:28.644749   73479 pod_ready.go:102] pod "metrics-server-78fcd8795b-27pkr" in "kube-system" namespace has status "Ready":"False"
	I0731 18:12:31.143617   73479 pod_ready.go:102] pod "metrics-server-78fcd8795b-27pkr" in "kube-system" namespace has status "Ready":"False"
	I0731 18:12:32.532203   73800 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (31.265875459s)
	I0731 18:12:32.532286   73800 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0731 18:12:32.548139   73800 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0731 18:12:32.558049   73800 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0731 18:12:32.567036   73800 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0731 18:12:32.567060   73800 kubeadm.go:157] found existing configuration files:
	
	I0731 18:12:32.567133   73800 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0731 18:12:32.576069   73800 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0731 18:12:32.576124   73800 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0731 18:12:32.584762   73800 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0731 18:12:32.592927   73800 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0731 18:12:32.592980   73800 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0731 18:12:32.601309   73800 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0731 18:12:32.609478   73800 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0731 18:12:32.609525   73800 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0731 18:12:32.617980   73800 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0731 18:12:32.625943   73800 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0731 18:12:32.625978   73800 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0731 18:12:32.634091   73800 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0731 18:12:32.821569   73800 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0731 18:12:31.491226   73696 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 18:12:31.991099   73696 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 18:12:32.490751   73696 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 18:12:32.991252   73696 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 18:12:33.490564   73696 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 18:12:33.990977   73696 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 18:12:34.491037   73696 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 18:12:34.990696   73696 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 18:12:35.491381   73696 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 18:12:35.990793   73696 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 18:12:36.490926   73696 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 18:12:36.581312   73696 kubeadm.go:1113] duration metric: took 12.780981821s to wait for elevateKubeSystemPrivileges
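	The repeated "kubectl get sa default" calls above are a readiness poll: the default service account only exists once kube-controller-manager's service-account controller is running, so the command is retried at roughly 500ms intervals (visible in the timestamps) until it succeeds. A shell sketch of the same wait, reusing the binary and kubeconfig paths from the log:

	    until sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default \
	        --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
	      sleep 0.5   # matches the ~500ms retry cadence seen in the log
	    done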
	I0731 18:12:36.581370   73696 kubeadm.go:394] duration metric: took 5m8.741923744s to StartCluster
	I0731 18:12:36.581393   73696 settings.go:142] acquiring lock: {Name:mk8ac8269296bf0b887a00ff74caaad180005f89 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 18:12:36.581485   73696 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19349-8084/kubeconfig
	I0731 18:12:36.583690   73696 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19349-8084/kubeconfig: {Name:mkf2340bc1ada3c683f132aca37228d7588e1fdb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 18:12:36.583986   73696 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.72.197 Port:8444 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0731 18:12:36.585079   73696 config.go:182] Loaded profile config "default-k8s-diff-port-094310": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0731 18:12:36.585328   73696 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0731 18:12:36.585677   73696 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-094310"
	I0731 18:12:36.585686   73696 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-094310"
	I0731 18:12:36.585688   73696 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-094310"
	I0731 18:12:36.585705   73696 addons.go:234] Setting addon storage-provisioner=true in "default-k8s-diff-port-094310"
	W0731 18:12:36.585717   73696 addons.go:243] addon storage-provisioner should already be in state true
	I0731 18:12:36.585720   73696 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-094310"
	I0731 18:12:36.585732   73696 addons.go:234] Setting addon metrics-server=true in "default-k8s-diff-port-094310"
	W0731 18:12:36.585740   73696 addons.go:243] addon metrics-server should already be in state true
	I0731 18:12:36.585752   73696 host.go:66] Checking if "default-k8s-diff-port-094310" exists ...
	I0731 18:12:36.585766   73696 host.go:66] Checking if "default-k8s-diff-port-094310" exists ...
	I0731 18:12:36.586152   73696 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 18:12:36.586166   73696 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 18:12:36.586166   73696 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 18:12:36.586174   73696 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 18:12:36.586180   73696 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 18:12:36.586188   73696 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 18:12:36.586456   73696 out.go:177] * Verifying Kubernetes components...
	I0731 18:12:36.588174   73696 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0731 18:12:36.605611   73696 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38873
	I0731 18:12:36.605856   73696 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44661
	I0731 18:12:36.606122   73696 main.go:141] libmachine: () Calling .GetVersion
	I0731 18:12:36.606710   73696 main.go:141] libmachine: Using API Version  1
	I0731 18:12:36.606731   73696 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 18:12:36.606809   73696 main.go:141] libmachine: () Calling .GetVersion
	I0731 18:12:36.607072   73696 main.go:141] libmachine: () Calling .GetMachineName
	I0731 18:12:36.607240   73696 main.go:141] libmachine: Using API Version  1
	I0731 18:12:36.607262   73696 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 18:12:36.607789   73696 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 18:12:36.607817   73696 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 18:12:36.608000   73696 main.go:141] libmachine: () Calling .GetMachineName
	I0731 18:12:36.608173   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) Calling .GetState
	I0731 18:12:36.609009   73696 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39155
	I0731 18:12:36.609469   73696 main.go:141] libmachine: () Calling .GetVersion
	I0731 18:12:36.609954   73696 main.go:141] libmachine: Using API Version  1
	I0731 18:12:36.609973   73696 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 18:12:36.610333   73696 main.go:141] libmachine: () Calling .GetMachineName
	I0731 18:12:36.610936   73696 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 18:12:36.610998   73696 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 18:12:36.612199   73696 addons.go:234] Setting addon default-storageclass=true in "default-k8s-diff-port-094310"
	W0731 18:12:36.612224   73696 addons.go:243] addon default-storageclass should already be in state true
	I0731 18:12:36.612254   73696 host.go:66] Checking if "default-k8s-diff-port-094310" exists ...
	I0731 18:12:36.612624   73696 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 18:12:36.612659   73696 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 18:12:36.626474   73696 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38009
	I0731 18:12:36.626981   73696 main.go:141] libmachine: () Calling .GetVersion
	I0731 18:12:36.627514   73696 main.go:141] libmachine: Using API Version  1
	I0731 18:12:36.627534   73696 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 18:12:36.627836   73696 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43349
	I0731 18:12:36.628007   73696 main.go:141] libmachine: () Calling .GetMachineName
	I0731 18:12:36.628336   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) Calling .GetState
	I0731 18:12:36.628415   73696 main.go:141] libmachine: () Calling .GetVersion
	I0731 18:12:36.628816   73696 main.go:141] libmachine: Using API Version  1
	I0731 18:12:36.628831   73696 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 18:12:36.629237   73696 main.go:141] libmachine: () Calling .GetMachineName
	I0731 18:12:36.629450   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) Calling .GetState
	I0731 18:12:36.630518   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) Calling .DriverName
	I0731 18:12:36.631198   73696 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43471
	I0731 18:12:36.631550   73696 main.go:141] libmachine: () Calling .GetVersion
	I0731 18:12:36.632064   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) Calling .DriverName
	I0731 18:12:36.632200   73696 main.go:141] libmachine: Using API Version  1
	I0731 18:12:36.632217   73696 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 18:12:36.632576   73696 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0731 18:12:36.632739   73696 main.go:141] libmachine: () Calling .GetMachineName
	I0731 18:12:36.633275   73696 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 18:12:36.633313   73696 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 18:12:36.633711   73696 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0731 18:12:33.642776   73479 pod_ready.go:102] pod "metrics-server-78fcd8795b-27pkr" in "kube-system" namespace has status "Ready":"False"
	I0731 18:12:35.643640   73479 pod_ready.go:102] pod "metrics-server-78fcd8795b-27pkr" in "kube-system" namespace has status "Ready":"False"
	I0731 18:12:36.633805   73696 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0731 18:12:36.633820   73696 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0731 18:12:36.633840   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) Calling .GetSSHHostname
	I0731 18:12:36.634990   73696 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0731 18:12:36.635005   73696 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0731 18:12:36.635022   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) Calling .GetSSHHostname
	I0731 18:12:36.637135   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) DBG | domain default-k8s-diff-port-094310 has defined MAC address 52:54:00:a9:b2:ae in network mk-default-k8s-diff-port-094310
	I0731 18:12:36.637767   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a9:b2:ae", ip: ""} in network mk-default-k8s-diff-port-094310: {Iface:virbr1 ExpiryTime:2024-07-31 19:07:14 +0000 UTC Type:0 Mac:52:54:00:a9:b2:ae Iaid: IPaddr:192.168.72.197 Prefix:24 Hostname:default-k8s-diff-port-094310 Clientid:01:52:54:00:a9:b2:ae}
	I0731 18:12:36.637792   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) DBG | domain default-k8s-diff-port-094310 has defined IP address 192.168.72.197 and MAC address 52:54:00:a9:b2:ae in network mk-default-k8s-diff-port-094310
	I0731 18:12:36.637891   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) Calling .GetSSHPort
	I0731 18:12:36.639047   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) DBG | domain default-k8s-diff-port-094310 has defined MAC address 52:54:00:a9:b2:ae in network mk-default-k8s-diff-port-094310
	I0731 18:12:36.639583   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a9:b2:ae", ip: ""} in network mk-default-k8s-diff-port-094310: {Iface:virbr1 ExpiryTime:2024-07-31 19:07:14 +0000 UTC Type:0 Mac:52:54:00:a9:b2:ae Iaid: IPaddr:192.168.72.197 Prefix:24 Hostname:default-k8s-diff-port-094310 Clientid:01:52:54:00:a9:b2:ae}
	I0731 18:12:36.639617   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) DBG | domain default-k8s-diff-port-094310 has defined IP address 192.168.72.197 and MAC address 52:54:00:a9:b2:ae in network mk-default-k8s-diff-port-094310
	I0731 18:12:36.639885   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) Calling .GetSSHPort
	I0731 18:12:36.640106   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) Calling .GetSSHKeyPath
	I0731 18:12:36.640235   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) Calling .GetSSHUsername
	I0731 18:12:36.640419   73696 sshutil.go:53] new ssh client: &{IP:192.168.72.197 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19349-8084/.minikube/machines/default-k8s-diff-port-094310/id_rsa Username:docker}
	I0731 18:12:36.641860   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) Calling .GetSSHKeyPath
	I0731 18:12:36.642037   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) Calling .GetSSHUsername
	I0731 18:12:36.642205   73696 sshutil.go:53] new ssh client: &{IP:192.168.72.197 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19349-8084/.minikube/machines/default-k8s-diff-port-094310/id_rsa Username:docker}
	I0731 18:12:36.659960   73696 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36389
	I0731 18:12:36.660280   73696 main.go:141] libmachine: () Calling .GetVersion
	I0731 18:12:36.660692   73696 main.go:141] libmachine: Using API Version  1
	I0731 18:12:36.660713   73696 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 18:12:36.660986   73696 main.go:141] libmachine: () Calling .GetMachineName
	I0731 18:12:36.661150   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) Calling .GetState
	I0731 18:12:36.663024   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) Calling .DriverName
	I0731 18:12:36.663232   73696 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0731 18:12:36.663245   73696 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0731 18:12:36.663264   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) Calling .GetSSHHostname
	I0731 18:12:36.666016   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) DBG | domain default-k8s-diff-port-094310 has defined MAC address 52:54:00:a9:b2:ae in network mk-default-k8s-diff-port-094310
	I0731 18:12:36.666393   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a9:b2:ae", ip: ""} in network mk-default-k8s-diff-port-094310: {Iface:virbr1 ExpiryTime:2024-07-31 19:07:14 +0000 UTC Type:0 Mac:52:54:00:a9:b2:ae Iaid: IPaddr:192.168.72.197 Prefix:24 Hostname:default-k8s-diff-port-094310 Clientid:01:52:54:00:a9:b2:ae}
	I0731 18:12:36.666472   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) DBG | domain default-k8s-diff-port-094310 has defined IP address 192.168.72.197 and MAC address 52:54:00:a9:b2:ae in network mk-default-k8s-diff-port-094310
	I0731 18:12:36.666562   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) Calling .GetSSHPort
	I0731 18:12:36.666730   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) Calling .GetSSHKeyPath
	I0731 18:12:36.666832   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) Calling .GetSSHUsername
	I0731 18:12:36.666935   73696 sshutil.go:53] new ssh client: &{IP:192.168.72.197 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19349-8084/.minikube/machines/default-k8s-diff-port-094310/id_rsa Username:docker}
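The three sshutil.go entries above each open an SSH client to the test VM using the logged IP, port, key path, and user. As a point of reference only, here is a minimal Go sketch of what such a client amounts to, using golang.org/x/crypto/ssh; the connection parameters are copied from the log lines above, while the command run at the end and the error handling are illustrative assumptions, not minikube's actual code.

package main

import (
	"fmt"
	"os"

	"golang.org/x/crypto/ssh"
)

func main() {
	// Key path, user, and address are the values logged by sshutil.go above.
	key, err := os.ReadFile("/home/jenkins/minikube-integration/19349-8084/.minikube/machines/default-k8s-diff-port-094310/id_rsa")
	if err != nil {
		panic(err)
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		panic(err)
	}
	cfg := &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable for a throwaway test VM
	}
	client, err := ssh.Dial("tcp", "192.168.72.197:22", cfg)
	if err != nil {
		panic(err)
	}
	defer client.Close()

	session, err := client.NewSession()
	if err != nil {
		panic(err)
	}
	defer session.Close()

	// Run an arbitrary command over the session, as ssh_runner does below.
	out, err := session.CombinedOutput("sudo systemctl is-active kubelet")
	fmt.Printf("%s err=%v\n", out, err)
}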
	I0731 18:12:36.813977   73696 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0731 18:12:36.832201   73696 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-094310" to be "Ready" ...
	I0731 18:12:36.849864   73696 node_ready.go:49] node "default-k8s-diff-port-094310" has status "Ready":"True"
	I0731 18:12:36.849891   73696 node_ready.go:38] duration metric: took 17.657098ms for node "default-k8s-diff-port-094310" to be "Ready" ...
	I0731 18:12:36.849903   73696 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0731 18:12:36.860981   73696 pod_ready.go:78] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-094310" in "kube-system" namespace to be "Ready" ...
	I0731 18:12:36.865178   73696 pod_ready.go:92] pod "etcd-default-k8s-diff-port-094310" in "kube-system" namespace has status "Ready":"True"
	I0731 18:12:36.865198   73696 pod_ready.go:81] duration metric: took 4.190559ms for pod "etcd-default-k8s-diff-port-094310" in "kube-system" namespace to be "Ready" ...
	I0731 18:12:36.865209   73696 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-094310" in "kube-system" namespace to be "Ready" ...
	I0731 18:12:36.869977   73696 pod_ready.go:92] pod "kube-apiserver-default-k8s-diff-port-094310" in "kube-system" namespace has status "Ready":"True"
	I0731 18:12:36.869998   73696 pod_ready.go:81] duration metric: took 4.780295ms for pod "kube-apiserver-default-k8s-diff-port-094310" in "kube-system" namespace to be "Ready" ...
	I0731 18:12:36.870008   73696 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-094310" in "kube-system" namespace to be "Ready" ...
	I0731 18:12:36.874051   73696 pod_ready.go:92] pod "kube-controller-manager-default-k8s-diff-port-094310" in "kube-system" namespace has status "Ready":"True"
	I0731 18:12:36.874069   73696 pod_ready.go:81] duration metric: took 4.053362ms for pod "kube-controller-manager-default-k8s-diff-port-094310" in "kube-system" namespace to be "Ready" ...
	I0731 18:12:36.874079   73696 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-094310" in "kube-system" namespace to be "Ready" ...
	I0731 18:12:36.878519   73696 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-094310" in "kube-system" namespace has status "Ready":"True"
	I0731 18:12:36.878536   73696 pod_ready.go:81] duration metric: took 4.448692ms for pod "kube-scheduler-default-k8s-diff-port-094310" in "kube-system" namespace to be "Ready" ...
	I0731 18:12:36.878544   73696 pod_ready.go:38] duration metric: took 28.628924ms for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
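The pod_ready.go checks above treat a pod as "Ready" when its PodReady condition is True. A compact client-go sketch of that check follows; the kubeconfig path, namespace, and pod name are taken from the log, and the overall structure is an illustration rather than minikube's actual pod_ready.go implementation.

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// podIsReady reports whether the pod's PodReady condition is True.
func podIsReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/19349-8084/kubeconfig")
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	pod, err := client.CoreV1().Pods("kube-system").Get(context.TODO(), "etcd-default-k8s-diff-port-094310", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Println("Ready:", podIsReady(pod))
}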
	I0731 18:12:36.878564   73696 api_server.go:52] waiting for apiserver process to appear ...
	I0731 18:12:36.878622   73696 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:12:36.892011   73696 api_server.go:72] duration metric: took 307.983877ms to wait for apiserver process to appear ...
	I0731 18:12:36.892031   73696 api_server.go:88] waiting for apiserver healthz status ...
	I0731 18:12:36.892049   73696 api_server.go:253] Checking apiserver healthz at https://192.168.72.197:8444/healthz ...
	I0731 18:12:36.895929   73696 api_server.go:279] https://192.168.72.197:8444/healthz returned 200:
	ok
	I0731 18:12:36.896760   73696 api_server.go:141] control plane version: v1.30.3
	I0731 18:12:36.896780   73696 api_server.go:131] duration metric: took 4.741896ms to wait for apiserver health ...
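The api_server.go lines above poll the apiserver's /healthz endpoint until it answers 200 "ok". A minimal sketch of such a probe is below; the URL is the one logged above, while the insecure TLS client, poll interval, and timeout are assumptions made only to keep the example self-contained.

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// waitForHealthz polls url until it returns HTTP 200 with body "ok" or the timeout expires.
func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		// The apiserver serves a self-signed cert in this setup, so skip verification for the sketch.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		Timeout:   2 * time.Second,
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK && string(body) == "ok" {
				return nil
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver at %s not healthy after %s", url, timeout)
}

func main() {
	if err := waitForHealthz("https://192.168.72.197:8444/healthz", time.Minute); err != nil {
		fmt.Println(err)
	}
}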
	I0731 18:12:36.896789   73696 system_pods.go:43] waiting for kube-system pods to appear ...
	I0731 18:12:36.974073   73696 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0731 18:12:36.974092   73696 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0731 18:12:37.010218   73696 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0731 18:12:37.018536   73696 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0731 18:12:37.039734   73696 system_pods.go:59] 5 kube-system pods found
	I0731 18:12:37.039767   73696 system_pods.go:61] "etcd-default-k8s-diff-port-094310" [f73c4fb4-b160-4e79-a0a4-8fba255cfb8c] Running
	I0731 18:12:37.039773   73696 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-094310" [820215b3-0eaa-46ca-bf0b-4fa73942160a] Running
	I0731 18:12:37.039778   73696 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-094310" [5ca0ed32-5439-49b5-9a85-1a4b1628371d] Running
	I0731 18:12:37.039787   73696 system_pods.go:61] "kube-proxy-4vvjq" [a8dd9db6-5f45-4c8d-a9d3-03ede1db07eb] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0731 18:12:37.039792   73696 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-094310" [11590718-4e74-46eb-be90-6405c3f1759f] Running
	I0731 18:12:37.039802   73696 system_pods.go:74] duration metric: took 143.007992ms to wait for pod list to return data ...
	I0731 18:12:37.039812   73696 default_sa.go:34] waiting for default service account to be created ...
	I0731 18:12:37.041650   73696 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0731 18:12:37.041672   73696 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0731 18:12:37.096891   73696 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0731 18:12:37.096920   73696 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0731 18:12:37.159438   73696 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0731 18:12:37.235560   73696 default_sa.go:45] found service account: "default"
	I0731 18:12:37.235599   73696 default_sa.go:55] duration metric: took 195.778976ms for default service account to be created ...
	I0731 18:12:37.235612   73696 system_pods.go:116] waiting for k8s-apps to be running ...
	I0731 18:12:37.439935   73696 system_pods.go:86] 7 kube-system pods found
	I0731 18:12:37.439966   73696 system_pods.go:89] "coredns-7db6d8ff4d-2r7zb" [e7acc926-db53-4c1c-a62f-45e303c69fc7] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0731 18:12:37.439975   73696 system_pods.go:89] "coredns-7db6d8ff4d-756jj" [c91e86b1-8348-4dc7-9aa1-6909055c2dde] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0731 18:12:37.439982   73696 system_pods.go:89] "etcd-default-k8s-diff-port-094310" [f73c4fb4-b160-4e79-a0a4-8fba255cfb8c] Running
	I0731 18:12:37.439988   73696 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-094310" [820215b3-0eaa-46ca-bf0b-4fa73942160a] Running
	I0731 18:12:37.439993   73696 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-094310" [5ca0ed32-5439-49b5-9a85-1a4b1628371d] Running
	I0731 18:12:37.439998   73696 system_pods.go:89] "kube-proxy-4vvjq" [a8dd9db6-5f45-4c8d-a9d3-03ede1db07eb] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0731 18:12:37.440003   73696 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-094310" [11590718-4e74-46eb-be90-6405c3f1759f] Running
	I0731 18:12:37.440020   73696 retry.go:31] will retry after 230.300903ms: missing components: kube-dns, kube-proxy
	I0731 18:12:37.676385   73696 system_pods.go:86] 7 kube-system pods found
	I0731 18:12:37.676411   73696 system_pods.go:89] "coredns-7db6d8ff4d-2r7zb" [e7acc926-db53-4c1c-a62f-45e303c69fc7] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0731 18:12:37.676421   73696 system_pods.go:89] "coredns-7db6d8ff4d-756jj" [c91e86b1-8348-4dc7-9aa1-6909055c2dde] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0731 18:12:37.676429   73696 system_pods.go:89] "etcd-default-k8s-diff-port-094310" [f73c4fb4-b160-4e79-a0a4-8fba255cfb8c] Running
	I0731 18:12:37.676436   73696 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-094310" [820215b3-0eaa-46ca-bf0b-4fa73942160a] Running
	I0731 18:12:37.676442   73696 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-094310" [5ca0ed32-5439-49b5-9a85-1a4b1628371d] Running
	I0731 18:12:37.676451   73696 system_pods.go:89] "kube-proxy-4vvjq" [a8dd9db6-5f45-4c8d-a9d3-03ede1db07eb] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0731 18:12:37.676456   73696 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-094310" [11590718-4e74-46eb-be90-6405c3f1759f] Running
	I0731 18:12:37.676475   73696 retry.go:31] will retry after 311.28179ms: missing components: kube-dns, kube-proxy
	I0731 18:12:37.813837   73696 main.go:141] libmachine: Making call to close driver server
	I0731 18:12:37.813870   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) Calling .Close
	I0731 18:12:37.814017   73696 main.go:141] libmachine: Making call to close driver server
	I0731 18:12:37.814039   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) Calling .Close
	I0731 18:12:37.814265   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) DBG | Closing plugin on server side
	I0731 18:12:37.814316   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) DBG | Closing plugin on server side
	I0731 18:12:37.814363   73696 main.go:141] libmachine: Successfully made call to close driver server
	I0731 18:12:37.814376   73696 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 18:12:37.814391   73696 main.go:141] libmachine: Making call to close driver server
	I0731 18:12:37.814402   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) Calling .Close
	I0731 18:12:37.814531   73696 main.go:141] libmachine: Successfully made call to close driver server
	I0731 18:12:37.814556   73696 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 18:12:37.814598   73696 main.go:141] libmachine: Making call to close driver server
	I0731 18:12:37.814608   73696 main.go:141] libmachine: Successfully made call to close driver server
	I0731 18:12:37.814631   73696 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 18:12:37.814622   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) Calling .Close
	I0731 18:12:37.816102   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) DBG | Closing plugin on server side
	I0731 18:12:37.816268   73696 main.go:141] libmachine: Successfully made call to close driver server
	I0731 18:12:37.816280   73696 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 18:12:37.830991   73696 main.go:141] libmachine: Making call to close driver server
	I0731 18:12:37.831018   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) Calling .Close
	I0731 18:12:37.831354   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) DBG | Closing plugin on server side
	I0731 18:12:37.831354   73696 main.go:141] libmachine: Successfully made call to close driver server
	I0731 18:12:37.831380   73696 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 18:12:37.995206   73696 system_pods.go:86] 8 kube-system pods found
	I0731 18:12:37.995248   73696 system_pods.go:89] "coredns-7db6d8ff4d-2r7zb" [e7acc926-db53-4c1c-a62f-45e303c69fc7] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0731 18:12:37.995262   73696 system_pods.go:89] "coredns-7db6d8ff4d-756jj" [c91e86b1-8348-4dc7-9aa1-6909055c2dde] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0731 18:12:37.995272   73696 system_pods.go:89] "etcd-default-k8s-diff-port-094310" [f73c4fb4-b160-4e79-a0a4-8fba255cfb8c] Running
	I0731 18:12:37.995295   73696 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-094310" [820215b3-0eaa-46ca-bf0b-4fa73942160a] Running
	I0731 18:12:37.995310   73696 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-094310" [5ca0ed32-5439-49b5-9a85-1a4b1628371d] Running
	I0731 18:12:37.995322   73696 system_pods.go:89] "kube-proxy-4vvjq" [a8dd9db6-5f45-4c8d-a9d3-03ede1db07eb] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0731 18:12:37.995332   73696 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-094310" [11590718-4e74-46eb-be90-6405c3f1759f] Running
	I0731 18:12:37.995345   73696 system_pods.go:89] "storage-provisioner" [2e4c5a96-4dfc-4af6-8d3e-bab865644328] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0731 18:12:37.995370   73696 retry.go:31] will retry after 381.430275ms: missing components: kube-dns, kube-proxy
	I0731 18:12:38.392678   73696 system_pods.go:86] 9 kube-system pods found
	I0731 18:12:38.392719   73696 system_pods.go:89] "coredns-7db6d8ff4d-2r7zb" [e7acc926-db53-4c1c-a62f-45e303c69fc7] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0731 18:12:38.392732   73696 system_pods.go:89] "coredns-7db6d8ff4d-756jj" [c91e86b1-8348-4dc7-9aa1-6909055c2dde] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0731 18:12:38.392742   73696 system_pods.go:89] "etcd-default-k8s-diff-port-094310" [f73c4fb4-b160-4e79-a0a4-8fba255cfb8c] Running
	I0731 18:12:38.392751   73696 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-094310" [820215b3-0eaa-46ca-bf0b-4fa73942160a] Running
	I0731 18:12:38.392760   73696 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-094310" [5ca0ed32-5439-49b5-9a85-1a4b1628371d] Running
	I0731 18:12:38.392770   73696 system_pods.go:89] "kube-proxy-4vvjq" [a8dd9db6-5f45-4c8d-a9d3-03ede1db07eb] Running
	I0731 18:12:38.392778   73696 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-094310" [11590718-4e74-46eb-be90-6405c3f1759f] Running
	I0731 18:12:38.392787   73696 system_pods.go:89] "metrics-server-569cc877fc-mskwc" [c57990b4-1d91-4764-9c33-2fd5f7d2f83b] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0731 18:12:38.392802   73696 system_pods.go:89] "storage-provisioner" [2e4c5a96-4dfc-4af6-8d3e-bab865644328] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0731 18:12:38.392823   73696 retry.go:31] will retry after 567.905994ms: missing components: kube-dns
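The retry.go lines above show the wait loop for kube-system components: run a check, and if pieces are still missing, sleep a growing interval and try again until an overall deadline. The sketch below captures that pattern; the function names, backoff growth, and timeout are illustrative and not minikube's actual retry API.

package main

import (
	"errors"
	"fmt"
	"time"
)

// retryUntil re-runs check with a growing backoff until it succeeds or timeout elapses.
func retryUntil(timeout time.Duration, check func() error) error {
	backoff := 200 * time.Millisecond
	deadline := time.Now().Add(timeout)
	for {
		err := check()
		if err == nil {
			return nil
		}
		if time.Now().Add(backoff).After(deadline) {
			return fmt.Errorf("timed out: %w", err)
		}
		fmt.Printf("will retry after %s: %v\n", backoff, err)
		time.Sleep(backoff)
		backoff += backoff / 2 // grow the interval between attempts
	}
}

func main() {
	attempts := 0
	_ = retryUntil(5*time.Second, func() error {
		attempts++
		if attempts < 4 {
			return errors.New("missing components: kube-dns, kube-proxy")
		}
		return nil
	})
}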
	I0731 18:12:38.501117   73696 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.341621275s)
	I0731 18:12:38.501181   73696 main.go:141] libmachine: Making call to close driver server
	I0731 18:12:38.501200   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) Calling .Close
	I0731 18:12:38.501595   73696 main.go:141] libmachine: Successfully made call to close driver server
	I0731 18:12:38.501615   73696 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 18:12:38.501625   73696 main.go:141] libmachine: Making call to close driver server
	I0731 18:12:38.501634   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) Calling .Close
	I0731 18:12:38.501593   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) DBG | Closing plugin on server side
	I0731 18:12:38.501907   73696 main.go:141] libmachine: Successfully made call to close driver server
	I0731 18:12:38.501953   73696 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 18:12:38.501966   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) DBG | Closing plugin on server side
	I0731 18:12:38.501975   73696 addons.go:475] Verifying addon metrics-server=true in "default-k8s-diff-port-094310"
	I0731 18:12:38.505204   73696 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0731 18:12:38.506517   73696 addons.go:510] duration metric: took 1.921658263s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I0731 18:12:38.967657   73696 system_pods.go:86] 9 kube-system pods found
	I0731 18:12:38.967691   73696 system_pods.go:89] "coredns-7db6d8ff4d-2r7zb" [e7acc926-db53-4c1c-a62f-45e303c69fc7] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0731 18:12:38.967700   73696 system_pods.go:89] "coredns-7db6d8ff4d-756jj" [c91e86b1-8348-4dc7-9aa1-6909055c2dde] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0731 18:12:38.967708   73696 system_pods.go:89] "etcd-default-k8s-diff-port-094310" [f73c4fb4-b160-4e79-a0a4-8fba255cfb8c] Running
	I0731 18:12:38.967716   73696 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-094310" [820215b3-0eaa-46ca-bf0b-4fa73942160a] Running
	I0731 18:12:38.967723   73696 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-094310" [5ca0ed32-5439-49b5-9a85-1a4b1628371d] Running
	I0731 18:12:38.967729   73696 system_pods.go:89] "kube-proxy-4vvjq" [a8dd9db6-5f45-4c8d-a9d3-03ede1db07eb] Running
	I0731 18:12:38.967736   73696 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-094310" [11590718-4e74-46eb-be90-6405c3f1759f] Running
	I0731 18:12:38.967746   73696 system_pods.go:89] "metrics-server-569cc877fc-mskwc" [c57990b4-1d91-4764-9c33-2fd5f7d2f83b] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0731 18:12:38.967759   73696 system_pods.go:89] "storage-provisioner" [2e4c5a96-4dfc-4af6-8d3e-bab865644328] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0731 18:12:38.967779   73696 retry.go:31] will retry after 488.293971ms: missing components: kube-dns
	I0731 18:12:39.464918   73696 system_pods.go:86] 9 kube-system pods found
	I0731 18:12:39.464956   73696 system_pods.go:89] "coredns-7db6d8ff4d-2r7zb" [e7acc926-db53-4c1c-a62f-45e303c69fc7] Running
	I0731 18:12:39.464965   73696 system_pods.go:89] "coredns-7db6d8ff4d-756jj" [c91e86b1-8348-4dc7-9aa1-6909055c2dde] Running
	I0731 18:12:39.464972   73696 system_pods.go:89] "etcd-default-k8s-diff-port-094310" [f73c4fb4-b160-4e79-a0a4-8fba255cfb8c] Running
	I0731 18:12:39.464978   73696 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-094310" [820215b3-0eaa-46ca-bf0b-4fa73942160a] Running
	I0731 18:12:39.464986   73696 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-094310" [5ca0ed32-5439-49b5-9a85-1a4b1628371d] Running
	I0731 18:12:39.464992   73696 system_pods.go:89] "kube-proxy-4vvjq" [a8dd9db6-5f45-4c8d-a9d3-03ede1db07eb] Running
	I0731 18:12:39.464999   73696 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-094310" [11590718-4e74-46eb-be90-6405c3f1759f] Running
	I0731 18:12:39.465017   73696 system_pods.go:89] "metrics-server-569cc877fc-mskwc" [c57990b4-1d91-4764-9c33-2fd5f7d2f83b] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0731 18:12:39.465028   73696 system_pods.go:89] "storage-provisioner" [2e4c5a96-4dfc-4af6-8d3e-bab865644328] Running
	I0731 18:12:39.465041   73696 system_pods.go:126] duration metric: took 2.229422302s to wait for k8s-apps to be running ...
	I0731 18:12:39.465053   73696 system_svc.go:44] waiting for kubelet service to be running ....
	I0731 18:12:39.465111   73696 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0731 18:12:39.482063   73696 system_svc.go:56] duration metric: took 16.998965ms WaitForService to wait for kubelet
	I0731 18:12:39.482092   73696 kubeadm.go:582] duration metric: took 2.898066741s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0731 18:12:39.482138   73696 node_conditions.go:102] verifying NodePressure condition ...
	I0731 18:12:39.486728   73696 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0731 18:12:39.486752   73696 node_conditions.go:123] node cpu capacity is 2
	I0731 18:12:39.486764   73696 node_conditions.go:105] duration metric: took 4.617934ms to run NodePressure ...
	I0731 18:12:39.486777   73696 start.go:241] waiting for startup goroutines ...
	I0731 18:12:39.486787   73696 start.go:246] waiting for cluster config update ...
	I0731 18:12:39.486798   73696 start.go:255] writing updated cluster config ...
	I0731 18:12:39.487565   73696 ssh_runner.go:195] Run: rm -f paused
	I0731 18:12:39.539591   73696 start.go:600] kubectl: 1.30.3, cluster: 1.30.3 (minor skew: 0)
	I0731 18:12:39.541533   73696 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-094310" cluster and "default" namespace by default
	I0731 18:12:37.644379   73479 pod_ready.go:102] pod "metrics-server-78fcd8795b-27pkr" in "kube-system" namespace has status "Ready":"False"
	I0731 18:12:39.645608   73479 pod_ready.go:102] pod "metrics-server-78fcd8795b-27pkr" in "kube-system" namespace has status "Ready":"False"
	I0731 18:12:41.969949   73800 kubeadm.go:310] [init] Using Kubernetes version: v1.30.3
	I0731 18:12:41.970018   73800 kubeadm.go:310] [preflight] Running pre-flight checks
	I0731 18:12:41.970137   73800 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0731 18:12:41.970234   73800 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0731 18:12:41.970386   73800 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'

	I0731 18:12:41.970495   73800 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0731 18:12:41.972177   73800 out.go:204]   - Generating certificates and keys ...
	I0731 18:12:41.972244   73800 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0731 18:12:41.972314   73800 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0731 18:12:41.972403   73800 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0731 18:12:41.972480   73800 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0731 18:12:41.972538   73800 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0731 18:12:41.972588   73800 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0731 18:12:41.972654   73800 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0731 18:12:41.972748   73800 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0731 18:12:41.972859   73800 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0731 18:12:41.972982   73800 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0731 18:12:41.973027   73800 kubeadm.go:310] [certs] Using the existing "sa" key
	I0731 18:12:41.973082   73800 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0731 18:12:41.973152   73800 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0731 18:12:41.973205   73800 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0731 18:12:41.973252   73800 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0731 18:12:41.973323   73800 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0731 18:12:41.973387   73800 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0731 18:12:41.973456   73800 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0731 18:12:41.973553   73800 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0731 18:12:41.974927   73800 out.go:204]   - Booting up control plane ...
	I0731 18:12:41.975019   73800 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0731 18:12:41.975128   73800 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0731 18:12:41.975215   73800 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0731 18:12:41.975342   73800 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0731 18:12:41.975425   73800 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0731 18:12:41.975474   73800 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0731 18:12:41.975635   73800 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0731 18:12:41.975710   73800 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0731 18:12:41.975766   73800 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.001397088s
	I0731 18:12:41.975824   73800 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0731 18:12:41.975909   73800 kubeadm.go:310] [api-check] The API server is healthy after 5.001258426s
	I0731 18:12:41.976064   73800 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0731 18:12:41.976241   73800 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0731 18:12:41.976355   73800 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0731 18:12:41.976528   73800 kubeadm.go:310] [mark-control-plane] Marking the node embed-certs-436067 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0731 18:12:41.976605   73800 kubeadm.go:310] [bootstrap-token] Using token: m9csv8.j58cj919sgzkgy1k
	I0731 18:12:41.978880   73800 out.go:204]   - Configuring RBAC rules ...
	I0731 18:12:41.978976   73800 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0731 18:12:41.979087   73800 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0731 18:12:41.979277   73800 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0731 18:12:41.979441   73800 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0731 18:12:41.979622   73800 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0731 18:12:41.979708   73800 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0731 18:12:41.979835   73800 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0731 18:12:41.979875   73800 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0731 18:12:41.979918   73800 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0731 18:12:41.979924   73800 kubeadm.go:310] 
	I0731 18:12:41.979971   73800 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0731 18:12:41.979979   73800 kubeadm.go:310] 
	I0731 18:12:41.980058   73800 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0731 18:12:41.980067   73800 kubeadm.go:310] 
	I0731 18:12:41.980098   73800 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0731 18:12:41.980160   73800 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0731 18:12:41.980229   73800 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0731 18:12:41.980236   73800 kubeadm.go:310] 
	I0731 18:12:41.980300   73800 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0731 18:12:41.980311   73800 kubeadm.go:310] 
	I0731 18:12:41.980384   73800 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0731 18:12:41.980393   73800 kubeadm.go:310] 
	I0731 18:12:41.980446   73800 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0731 18:12:41.980548   73800 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0731 18:12:41.980644   73800 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0731 18:12:41.980653   73800 kubeadm.go:310] 
	I0731 18:12:41.980759   73800 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0731 18:12:41.980824   73800 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0731 18:12:41.980830   73800 kubeadm.go:310] 
	I0731 18:12:41.980896   73800 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token m9csv8.j58cj919sgzkgy1k \
	I0731 18:12:41.980984   73800 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:460b24b51d10a3760f2391542a8a89e401359854f437f922cbee3f2d8ed6c856 \
	I0731 18:12:41.981011   73800 kubeadm.go:310] 	--control-plane 
	I0731 18:12:41.981023   73800 kubeadm.go:310] 
	I0731 18:12:41.981093   73800 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0731 18:12:41.981099   73800 kubeadm.go:310] 
	I0731 18:12:41.981183   73800 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token m9csv8.j58cj919sgzkgy1k \
	I0731 18:12:41.981306   73800 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:460b24b51d10a3760f2391542a8a89e401359854f437f922cbee3f2d8ed6c856 
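The --discovery-token-ca-cert-hash value printed above is, per kubeadm's token-based discovery scheme, the SHA-256 of the cluster CA certificate's DER-encoded Subject Public Key Info. A short sketch of recomputing it is below; the CA path is the conventional kubeadm location on the control-plane node and is an assumption of the example.

package main

import (
	"crypto/sha256"
	"crypto/x509"
	"encoding/hex"
	"encoding/pem"
	"fmt"
	"os"
)

func main() {
	data, err := os.ReadFile("/etc/kubernetes/pki/ca.crt")
	if err != nil {
		panic(err)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		panic("no PEM block found in ca.crt")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		panic(err)
	}
	// kubeadm hashes the DER-encoded Subject Public Key Info of the CA cert.
	sum := sha256.Sum256(cert.RawSubjectPublicKeyInfo)
	fmt.Println("sha256:" + hex.EncodeToString(sum[:]))
}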
	I0731 18:12:41.981317   73800 cni.go:84] Creating CNI manager for ""
	I0731 18:12:41.981324   73800 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0731 18:12:41.982701   73800 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0731 18:12:41.983929   73800 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0731 18:12:41.995272   73800 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0731 18:12:42.014929   73800 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0731 18:12:42.014984   73800 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 18:12:42.015033   73800 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes embed-certs-436067 minikube.k8s.io/updated_at=2024_07_31T18_12_42_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=1d737dad7efa60c56d30434fcd857dd3b14c91d9 minikube.k8s.io/name=embed-certs-436067 minikube.k8s.io/primary=true
	I0731 18:12:42.164811   73800 ops.go:34] apiserver oom_adj: -16
	I0731 18:12:42.164934   73800 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 18:12:42.665108   73800 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 18:12:43.165818   73800 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 18:12:43.665733   73800 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 18:12:44.165074   73800 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 18:12:42.144896   73479 pod_ready.go:102] pod "metrics-server-78fcd8795b-27pkr" in "kube-system" namespace has status "Ready":"False"
	I0731 18:12:44.644077   73479 pod_ready.go:102] pod "metrics-server-78fcd8795b-27pkr" in "kube-system" namespace has status "Ready":"False"
	I0731 18:12:44.665477   73800 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 18:12:45.165127   73800 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 18:12:45.665440   73800 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 18:12:46.165555   73800 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 18:12:46.665998   73800 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 18:12:47.165829   73800 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 18:12:47.665704   73800 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 18:12:48.164973   73800 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 18:12:48.665549   73800 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 18:12:49.165210   73800 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 18:12:47.142947   73479 pod_ready.go:102] pod "metrics-server-78fcd8795b-27pkr" in "kube-system" namespace has status "Ready":"False"
	I0731 18:12:49.144015   73479 pod_ready.go:102] pod "metrics-server-78fcd8795b-27pkr" in "kube-system" namespace has status "Ready":"False"
	I0731 18:12:51.644495   73479 pod_ready.go:102] pod "metrics-server-78fcd8795b-27pkr" in "kube-system" namespace has status "Ready":"False"
	I0731 18:12:49.665500   73800 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 18:12:50.165567   73800 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 18:12:50.665547   73800 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 18:12:51.166002   73800 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 18:12:51.664981   73800 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 18:12:52.165135   73800 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 18:12:52.665927   73800 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 18:12:53.165045   73800 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 18:12:53.664981   73800 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 18:12:54.165715   73800 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 18:12:54.252373   73800 kubeadm.go:1113] duration metric: took 12.237438799s to wait for elevateKubeSystemPrivileges
	I0731 18:12:54.252415   73800 kubeadm.go:394] duration metric: took 5m6.689979758s to StartCluster
	I0731 18:12:54.252435   73800 settings.go:142] acquiring lock: {Name:mk8ac8269296bf0b887a00ff74caaad180005f89 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 18:12:54.252509   73800 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19349-8084/kubeconfig
	I0731 18:12:54.254175   73800 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19349-8084/kubeconfig: {Name:mkf2340bc1ada3c683f132aca37228d7588e1fdb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 18:12:54.254495   73800 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.50.86 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0731 18:12:54.254600   73800 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0731 18:12:54.254687   73800 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-436067"
	I0731 18:12:54.254721   73800 addons.go:234] Setting addon storage-provisioner=true in "embed-certs-436067"
	I0731 18:12:54.254724   73800 addons.go:69] Setting default-storageclass=true in profile "embed-certs-436067"
	W0731 18:12:54.254734   73800 addons.go:243] addon storage-provisioner should already be in state true
	I0731 18:12:54.254737   73800 config.go:182] Loaded profile config "embed-certs-436067": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0731 18:12:54.254743   73800 addons.go:69] Setting metrics-server=true in profile "embed-certs-436067"
	I0731 18:12:54.254760   73800 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-436067"
	I0731 18:12:54.254769   73800 host.go:66] Checking if "embed-certs-436067" exists ...
	I0731 18:12:54.254785   73800 addons.go:234] Setting addon metrics-server=true in "embed-certs-436067"
	W0731 18:12:54.254795   73800 addons.go:243] addon metrics-server should already be in state true
	I0731 18:12:54.254826   73800 host.go:66] Checking if "embed-certs-436067" exists ...
	I0731 18:12:54.255205   73800 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 18:12:54.255208   73800 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 18:12:54.255233   73800 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 18:12:54.255238   73800 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 18:12:54.255302   73800 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 18:12:54.255323   73800 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 18:12:54.256412   73800 out.go:177] * Verifying Kubernetes components...
	I0731 18:12:54.257653   73800 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0731 18:12:54.274456   73800 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34433
	I0731 18:12:54.274959   73800 main.go:141] libmachine: () Calling .GetVersion
	I0731 18:12:54.275532   73800 main.go:141] libmachine: Using API Version  1
	I0731 18:12:54.275554   73800 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 18:12:54.275828   73800 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44423
	I0731 18:12:54.275851   73800 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41513
	I0731 18:12:54.276001   73800 main.go:141] libmachine: () Calling .GetMachineName
	I0731 18:12:54.276152   73800 main.go:141] libmachine: () Calling .GetVersion
	I0731 18:12:54.276225   73800 main.go:141] libmachine: () Calling .GetVersion
	I0731 18:12:54.276498   73800 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 18:12:54.276534   73800 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 18:12:54.276592   73800 main.go:141] libmachine: Using API Version  1
	I0731 18:12:54.276606   73800 main.go:141] libmachine: Using API Version  1
	I0731 18:12:54.276613   73800 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 18:12:54.276616   73800 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 18:12:54.276954   73800 main.go:141] libmachine: () Calling .GetMachineName
	I0731 18:12:54.277055   73800 main.go:141] libmachine: () Calling .GetMachineName
	I0731 18:12:54.277103   73800 main.go:141] libmachine: (embed-certs-436067) Calling .GetState
	I0731 18:12:54.277663   73800 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 18:12:54.277704   73800 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 18:12:54.280559   73800 addons.go:234] Setting addon default-storageclass=true in "embed-certs-436067"
	W0731 18:12:54.280583   73800 addons.go:243] addon default-storageclass should already be in state true
	I0731 18:12:54.280615   73800 host.go:66] Checking if "embed-certs-436067" exists ...
	I0731 18:12:54.280969   73800 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 18:12:54.281000   73800 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 18:12:54.293211   73800 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38393
	I0731 18:12:54.293657   73800 main.go:141] libmachine: () Calling .GetVersion
	I0731 18:12:54.294121   73800 main.go:141] libmachine: Using API Version  1
	I0731 18:12:54.294142   73800 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 18:12:54.294444   73800 main.go:141] libmachine: () Calling .GetMachineName
	I0731 18:12:54.294642   73800 main.go:141] libmachine: (embed-certs-436067) Calling .GetState
	I0731 18:12:54.294724   73800 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38771
	I0731 18:12:54.295077   73800 main.go:141] libmachine: () Calling .GetVersion
	I0731 18:12:54.295590   73800 main.go:141] libmachine: Using API Version  1
	I0731 18:12:54.295609   73800 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 18:12:54.296058   73800 main.go:141] libmachine: () Calling .GetMachineName
	I0731 18:12:54.296285   73800 main.go:141] libmachine: (embed-certs-436067) Calling .GetState
	I0731 18:12:54.296377   73800 main.go:141] libmachine: (embed-certs-436067) Calling .DriverName
	I0731 18:12:54.298013   73800 main.go:141] libmachine: (embed-certs-436067) Calling .DriverName
	I0731 18:12:54.298541   73800 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0731 18:12:54.299454   73800 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0731 18:12:54.299489   73800 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0731 18:12:54.299501   73800 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0731 18:12:54.299515   73800 main.go:141] libmachine: (embed-certs-436067) Calling .GetSSHHostname
	I0731 18:12:54.300664   73800 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0731 18:12:54.300682   73800 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0731 18:12:54.300699   73800 main.go:141] libmachine: (embed-certs-436067) Calling .GetSSHHostname
	I0731 18:12:54.301018   73800 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36647
	I0731 18:12:54.301671   73800 main.go:141] libmachine: () Calling .GetVersion
	I0731 18:12:54.302210   73800 main.go:141] libmachine: Using API Version  1
	I0731 18:12:54.302229   73800 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 18:12:54.302731   73800 main.go:141] libmachine: () Calling .GetMachineName
	I0731 18:12:54.302857   73800 main.go:141] libmachine: (embed-certs-436067) DBG | domain embed-certs-436067 has defined MAC address 52:54:00:87:1e:25 in network mk-embed-certs-436067
	I0731 18:12:54.303479   73800 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 18:12:54.303503   73800 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 18:12:54.303710   73800 main.go:141] libmachine: (embed-certs-436067) Calling .GetSSHPort
	I0731 18:12:54.303744   73800 main.go:141] libmachine: (embed-certs-436067) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:1e:25", ip: ""} in network mk-embed-certs-436067: {Iface:virbr4 ExpiryTime:2024-07-31 19:07:32 +0000 UTC Type:0 Mac:52:54:00:87:1e:25 Iaid: IPaddr:192.168.50.86 Prefix:24 Hostname:embed-certs-436067 Clientid:01:52:54:00:87:1e:25}
	I0731 18:12:54.303768   73800 main.go:141] libmachine: (embed-certs-436067) DBG | domain embed-certs-436067 has defined IP address 192.168.50.86 and MAC address 52:54:00:87:1e:25 in network mk-embed-certs-436067
	I0731 18:12:54.303893   73800 main.go:141] libmachine: (embed-certs-436067) Calling .GetSSHKeyPath
	I0731 18:12:54.304071   73800 main.go:141] libmachine: (embed-certs-436067) Calling .GetSSHUsername
	I0731 18:12:54.304232   73800 sshutil.go:53] new ssh client: &{IP:192.168.50.86 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19349-8084/.minikube/machines/embed-certs-436067/id_rsa Username:docker}
	I0731 18:12:54.304601   73800 main.go:141] libmachine: (embed-certs-436067) DBG | domain embed-certs-436067 has defined MAC address 52:54:00:87:1e:25 in network mk-embed-certs-436067
	I0731 18:12:54.305040   73800 main.go:141] libmachine: (embed-certs-436067) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:1e:25", ip: ""} in network mk-embed-certs-436067: {Iface:virbr4 ExpiryTime:2024-07-31 19:07:32 +0000 UTC Type:0 Mac:52:54:00:87:1e:25 Iaid: IPaddr:192.168.50.86 Prefix:24 Hostname:embed-certs-436067 Clientid:01:52:54:00:87:1e:25}
	I0731 18:12:54.305063   73800 main.go:141] libmachine: (embed-certs-436067) DBG | domain embed-certs-436067 has defined IP address 192.168.50.86 and MAC address 52:54:00:87:1e:25 in network mk-embed-certs-436067
	I0731 18:12:54.305311   73800 main.go:141] libmachine: (embed-certs-436067) Calling .GetSSHPort
	I0731 18:12:54.305480   73800 main.go:141] libmachine: (embed-certs-436067) Calling .GetSSHKeyPath
	I0731 18:12:54.305594   73800 main.go:141] libmachine: (embed-certs-436067) Calling .GetSSHUsername
	I0731 18:12:54.305712   73800 sshutil.go:53] new ssh client: &{IP:192.168.50.86 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19349-8084/.minikube/machines/embed-certs-436067/id_rsa Username:docker}
	I0731 18:12:54.318168   73800 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33711
	I0731 18:12:54.318558   73800 main.go:141] libmachine: () Calling .GetVersion
	I0731 18:12:54.319015   73800 main.go:141] libmachine: Using API Version  1
	I0731 18:12:54.319033   73800 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 18:12:54.319355   73800 main.go:141] libmachine: () Calling .GetMachineName
	I0731 18:12:54.319552   73800 main.go:141] libmachine: (embed-certs-436067) Calling .GetState
	I0731 18:12:54.321369   73800 main.go:141] libmachine: (embed-certs-436067) Calling .DriverName
	I0731 18:12:54.321540   73800 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0731 18:12:54.321553   73800 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0731 18:12:54.321565   73800 main.go:141] libmachine: (embed-certs-436067) Calling .GetSSHHostname
	I0731 18:12:54.324613   73800 main.go:141] libmachine: (embed-certs-436067) DBG | domain embed-certs-436067 has defined MAC address 52:54:00:87:1e:25 in network mk-embed-certs-436067
	I0731 18:12:54.324994   73800 main.go:141] libmachine: (embed-certs-436067) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:1e:25", ip: ""} in network mk-embed-certs-436067: {Iface:virbr4 ExpiryTime:2024-07-31 19:07:32 +0000 UTC Type:0 Mac:52:54:00:87:1e:25 Iaid: IPaddr:192.168.50.86 Prefix:24 Hostname:embed-certs-436067 Clientid:01:52:54:00:87:1e:25}
	I0731 18:12:54.325011   73800 main.go:141] libmachine: (embed-certs-436067) DBG | domain embed-certs-436067 has defined IP address 192.168.50.86 and MAC address 52:54:00:87:1e:25 in network mk-embed-certs-436067
	I0731 18:12:54.325310   73800 main.go:141] libmachine: (embed-certs-436067) Calling .GetSSHPort
	I0731 18:12:54.325437   73800 main.go:141] libmachine: (embed-certs-436067) Calling .GetSSHKeyPath
	I0731 18:12:54.325571   73800 main.go:141] libmachine: (embed-certs-436067) Calling .GetSSHUsername
	I0731 18:12:54.325683   73800 sshutil.go:53] new ssh client: &{IP:192.168.50.86 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19349-8084/.minikube/machines/embed-certs-436067/id_rsa Username:docker}
	I0731 18:12:54.435485   73800 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0731 18:12:54.462541   73800 node_ready.go:35] waiting up to 6m0s for node "embed-certs-436067" to be "Ready" ...
	I0731 18:12:54.473787   73800 node_ready.go:49] node "embed-certs-436067" has status "Ready":"True"
	I0731 18:12:54.473810   73800 node_ready.go:38] duration metric: took 11.237808ms for node "embed-certs-436067" to be "Ready" ...
	I0731 18:12:54.473819   73800 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0731 18:12:54.485589   73800 pod_ready.go:78] waiting up to 6m0s for pod "etcd-embed-certs-436067" in "kube-system" namespace to be "Ready" ...
	I0731 18:12:54.507887   73800 pod_ready.go:92] pod "etcd-embed-certs-436067" in "kube-system" namespace has status "Ready":"True"
	I0731 18:12:54.507910   73800 pod_ready.go:81] duration metric: took 22.296215ms for pod "etcd-embed-certs-436067" in "kube-system" namespace to be "Ready" ...
	I0731 18:12:54.507921   73800 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-embed-certs-436067" in "kube-system" namespace to be "Ready" ...
	I0731 18:12:54.524721   73800 pod_ready.go:92] pod "kube-apiserver-embed-certs-436067" in "kube-system" namespace has status "Ready":"True"
	I0731 18:12:54.524742   73800 pod_ready.go:81] duration metric: took 16.814491ms for pod "kube-apiserver-embed-certs-436067" in "kube-system" namespace to be "Ready" ...
	I0731 18:12:54.524751   73800 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-436067" in "kube-system" namespace to be "Ready" ...
	I0731 18:12:54.536810   73800 pod_ready.go:92] pod "kube-controller-manager-embed-certs-436067" in "kube-system" namespace has status "Ready":"True"
	I0731 18:12:54.536837   73800 pod_ready.go:81] duration metric: took 12.078703ms for pod "kube-controller-manager-embed-certs-436067" in "kube-system" namespace to be "Ready" ...
	I0731 18:12:54.536848   73800 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-85spm" in "kube-system" namespace to be "Ready" ...
	I0731 18:12:54.552538   73800 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0731 18:12:54.579223   73800 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0731 18:12:54.579244   73800 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0731 18:12:54.596087   73800 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0731 18:12:54.617180   73800 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0731 18:12:54.617209   73800 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0731 18:12:54.679879   73800 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0731 18:12:54.679908   73800 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0731 18:12:54.775272   73800 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0731 18:12:55.199299   73800 main.go:141] libmachine: Making call to close driver server
	I0731 18:12:55.199335   73800 main.go:141] libmachine: (embed-certs-436067) Calling .Close
	I0731 18:12:55.199342   73800 main.go:141] libmachine: Making call to close driver server
	I0731 18:12:55.199361   73800 main.go:141] libmachine: (embed-certs-436067) Calling .Close
	I0731 18:12:55.199618   73800 main.go:141] libmachine: Successfully made call to close driver server
	I0731 18:12:55.199666   73800 main.go:141] libmachine: Successfully made call to close driver server
	I0731 18:12:55.199678   73800 main.go:141] libmachine: (embed-certs-436067) DBG | Closing plugin on server side
	I0731 18:12:55.199634   73800 main.go:141] libmachine: (embed-certs-436067) DBG | Closing plugin on server side
	I0731 18:12:55.199685   73800 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 18:12:55.199710   73800 main.go:141] libmachine: Making call to close driver server
	I0731 18:12:55.199689   73800 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 18:12:55.199717   73800 main.go:141] libmachine: (embed-certs-436067) Calling .Close
	I0731 18:12:55.199726   73800 main.go:141] libmachine: Making call to close driver server
	I0731 18:12:55.199735   73800 main.go:141] libmachine: (embed-certs-436067) Calling .Close
	I0731 18:12:55.200002   73800 main.go:141] libmachine: Successfully made call to close driver server
	I0731 18:12:55.200016   73800 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 18:12:55.200079   73800 main.go:141] libmachine: (embed-certs-436067) DBG | Closing plugin on server side
	I0731 18:12:55.200107   73800 main.go:141] libmachine: Successfully made call to close driver server
	I0731 18:12:55.200120   73800 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 18:12:55.227472   73800 main.go:141] libmachine: Making call to close driver server
	I0731 18:12:55.227497   73800 main.go:141] libmachine: (embed-certs-436067) Calling .Close
	I0731 18:12:55.227792   73800 main.go:141] libmachine: Successfully made call to close driver server
	I0731 18:12:55.227811   73800 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 18:12:55.712134   73800 main.go:141] libmachine: Making call to close driver server
	I0731 18:12:55.712159   73800 main.go:141] libmachine: (embed-certs-436067) Calling .Close
	I0731 18:12:55.712516   73800 main.go:141] libmachine: Successfully made call to close driver server
	I0731 18:12:55.712568   73800 main.go:141] libmachine: (embed-certs-436067) DBG | Closing plugin on server side
	I0731 18:12:55.712574   73800 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 18:12:55.712596   73800 main.go:141] libmachine: Making call to close driver server
	I0731 18:12:55.712605   73800 main.go:141] libmachine: (embed-certs-436067) Calling .Close
	I0731 18:12:55.712851   73800 main.go:141] libmachine: Successfully made call to close driver server
	I0731 18:12:55.712868   73800 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 18:12:55.712867   73800 main.go:141] libmachine: (embed-certs-436067) DBG | Closing plugin on server side
	I0731 18:12:55.712877   73800 addons.go:475] Verifying addon metrics-server=true in "embed-certs-436067"
	I0731 18:12:55.714432   73800 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0731 18:12:54.143455   73479 pod_ready.go:102] pod "metrics-server-78fcd8795b-27pkr" in "kube-system" namespace has status "Ready":"False"
	I0731 18:12:56.144177   73479 pod_ready.go:102] pod "metrics-server-78fcd8795b-27pkr" in "kube-system" namespace has status "Ready":"False"
	I0731 18:12:55.715903   73800 addons.go:510] duration metric: took 1.461304856s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I0731 18:12:56.542100   73800 pod_ready.go:92] pod "kube-proxy-85spm" in "kube-system" namespace has status "Ready":"True"
	I0731 18:12:56.542122   73800 pod_ready.go:81] duration metric: took 2.005265959s for pod "kube-proxy-85spm" in "kube-system" namespace to be "Ready" ...
	I0731 18:12:56.542135   73800 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-embed-certs-436067" in "kube-system" namespace to be "Ready" ...
	I0731 18:12:56.553810   73800 pod_ready.go:92] pod "kube-scheduler-embed-certs-436067" in "kube-system" namespace has status "Ready":"True"
	I0731 18:12:56.553831   73800 pod_ready.go:81] duration metric: took 11.689814ms for pod "kube-scheduler-embed-certs-436067" in "kube-system" namespace to be "Ready" ...
	I0731 18:12:56.553840   73800 pod_ready.go:38] duration metric: took 2.080010607s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0731 18:12:56.553853   73800 api_server.go:52] waiting for apiserver process to appear ...
	I0731 18:12:56.553899   73800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:12:56.568301   73800 api_server.go:72] duration metric: took 2.313759916s to wait for apiserver process to appear ...
	I0731 18:12:56.568327   73800 api_server.go:88] waiting for apiserver healthz status ...
	I0731 18:12:56.568345   73800 api_server.go:253] Checking apiserver healthz at https://192.168.50.86:8443/healthz ...
	I0731 18:12:56.573861   73800 api_server.go:279] https://192.168.50.86:8443/healthz returned 200:
	ok
	I0731 18:12:56.575494   73800 api_server.go:141] control plane version: v1.30.3
	I0731 18:12:56.575513   73800 api_server.go:131] duration metric: took 7.1795ms to wait for apiserver health ...
	I0731 18:12:56.575520   73800 system_pods.go:43] waiting for kube-system pods to appear ...
	I0731 18:12:56.669169   73800 system_pods.go:59] 9 kube-system pods found
	I0731 18:12:56.669197   73800 system_pods.go:61] "coredns-7db6d8ff4d-fqkfd" [2e5a67a3-6f2b-43a5-8b94-cf48202c5958] Running
	I0731 18:12:56.669202   73800 system_pods.go:61] "coredns-7db6d8ff4d-qpb62" [e57f157b-01b5-42ec-9630-b9b5ae94fe5d] Running
	I0731 18:12:56.669206   73800 system_pods.go:61] "etcd-embed-certs-436067" [d0439817-b307-4673-8a64-34aa31266fbb] Running
	I0731 18:12:56.669210   73800 system_pods.go:61] "kube-apiserver-embed-certs-436067" [c2409e33-d42b-4132-8f63-197c4d445704] Running
	I0731 18:12:56.669214   73800 system_pods.go:61] "kube-controller-manager-embed-certs-436067" [dfea1f44-48a3-4a3c-8a27-be6f01f58add] Running
	I0731 18:12:56.669218   73800 system_pods.go:61] "kube-proxy-85spm" [eecb399a-0365-436d-8f14-a7853a5a2ea3] Running
	I0731 18:12:56.669221   73800 system_pods.go:61] "kube-scheduler-embed-certs-436067" [e406617d-adf7-4baa-9a8a-6fd92847a12f] Running
	I0731 18:12:56.669228   73800 system_pods.go:61] "metrics-server-569cc877fc-pgf6q" [b799fa04-0bf7-4914-9738-964b825577b5] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0731 18:12:56.669231   73800 system_pods.go:61] "storage-provisioner" [a7d95d32-1002-4fba-b0fd-555b872efa1f] Running
	I0731 18:12:56.669240   73800 system_pods.go:74] duration metric: took 93.714593ms to wait for pod list to return data ...
	I0731 18:12:56.669247   73800 default_sa.go:34] waiting for default service account to be created ...
	I0731 18:12:56.866494   73800 default_sa.go:45] found service account: "default"
	I0731 18:12:56.866521   73800 default_sa.go:55] duration metric: took 197.264891ms for default service account to be created ...
	I0731 18:12:56.866532   73800 system_pods.go:116] waiting for k8s-apps to be running ...
	I0731 18:12:57.068903   73800 system_pods.go:86] 9 kube-system pods found
	I0731 18:12:57.068930   73800 system_pods.go:89] "coredns-7db6d8ff4d-fqkfd" [2e5a67a3-6f2b-43a5-8b94-cf48202c5958] Running
	I0731 18:12:57.068936   73800 system_pods.go:89] "coredns-7db6d8ff4d-qpb62" [e57f157b-01b5-42ec-9630-b9b5ae94fe5d] Running
	I0731 18:12:57.068940   73800 system_pods.go:89] "etcd-embed-certs-436067" [d0439817-b307-4673-8a64-34aa31266fbb] Running
	I0731 18:12:57.068944   73800 system_pods.go:89] "kube-apiserver-embed-certs-436067" [c2409e33-d42b-4132-8f63-197c4d445704] Running
	I0731 18:12:57.068948   73800 system_pods.go:89] "kube-controller-manager-embed-certs-436067" [dfea1f44-48a3-4a3c-8a27-be6f01f58add] Running
	I0731 18:12:57.068951   73800 system_pods.go:89] "kube-proxy-85spm" [eecb399a-0365-436d-8f14-a7853a5a2ea3] Running
	I0731 18:12:57.068955   73800 system_pods.go:89] "kube-scheduler-embed-certs-436067" [e406617d-adf7-4baa-9a8a-6fd92847a12f] Running
	I0731 18:12:57.068961   73800 system_pods.go:89] "metrics-server-569cc877fc-pgf6q" [b799fa04-0bf7-4914-9738-964b825577b5] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0731 18:12:57.068965   73800 system_pods.go:89] "storage-provisioner" [a7d95d32-1002-4fba-b0fd-555b872efa1f] Running
	I0731 18:12:57.068972   73800 system_pods.go:126] duration metric: took 202.435205ms to wait for k8s-apps to be running ...
	I0731 18:12:57.068980   73800 system_svc.go:44] waiting for kubelet service to be running ....
	I0731 18:12:57.069018   73800 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0731 18:12:57.083728   73800 system_svc.go:56] duration metric: took 14.739831ms WaitForService to wait for kubelet
	I0731 18:12:57.083756   73800 kubeadm.go:582] duration metric: took 2.829227102s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0731 18:12:57.083782   73800 node_conditions.go:102] verifying NodePressure condition ...
	I0731 18:12:57.266463   73800 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0731 18:12:57.266486   73800 node_conditions.go:123] node cpu capacity is 2
	I0731 18:12:57.266495   73800 node_conditions.go:105] duration metric: took 182.707869ms to run NodePressure ...
	I0731 18:12:57.266505   73800 start.go:241] waiting for startup goroutines ...
	I0731 18:12:57.266512   73800 start.go:246] waiting for cluster config update ...
	I0731 18:12:57.266521   73800 start.go:255] writing updated cluster config ...
	I0731 18:12:57.266767   73800 ssh_runner.go:195] Run: rm -f paused
	I0731 18:12:57.313723   73800 start.go:600] kubectl: 1.30.3, cluster: 1.30.3 (minor skew: 0)
	I0731 18:12:57.315966   73800 out.go:177] * Done! kubectl is now configured to use "embed-certs-436067" cluster and "default" namespace by default
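The api_server.go lines above show the pattern used at this point: poll https://<node-ip>:8443/healthz until it returns 200 ("ok"), then read the control-plane version. The following Go snippet is a minimal illustrative sketch of that polling loop only; the URL, timeout, and TLS handling are assumptions, and this is not minikube's actual implementation.

// Sketch: poll an apiserver /healthz endpoint until it returns 200 or a deadline passes.
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout: 5 * time.Second,
		// Assumption: the apiserver serves a self-signed certificate here, so skip verification.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil // corresponds to the "returned 200: ok" log lines
			}
		}
		time.Sleep(2 * time.Second)
	}
	return fmt.Errorf("apiserver healthz did not return 200 within %s", timeout)
}

func main() {
	// Hypothetical endpoint matching the address seen in the log above.
	if err := waitForHealthz("https://192.168.50.86:8443/healthz", 2*time.Minute); err != nil {
		fmt.Println(err)
	}
}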
	I0731 18:12:58.652853   74203 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0731 18:12:58.653480   74203 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0731 18:12:58.653735   74203 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0731 18:12:58.643237   73479 pod_ready.go:102] pod "metrics-server-78fcd8795b-27pkr" in "kube-system" namespace has status "Ready":"False"
	I0731 18:13:01.143274   73479 pod_ready.go:102] pod "metrics-server-78fcd8795b-27pkr" in "kube-system" namespace has status "Ready":"False"
	I0731 18:13:01.643357   73479 pod_ready.go:81] duration metric: took 4m0.006506347s for pod "metrics-server-78fcd8795b-27pkr" in "kube-system" namespace to be "Ready" ...
	E0731 18:13:01.643382   73479 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0731 18:13:01.643388   73479 pod_ready.go:38] duration metric: took 4m7.418860701s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0731 18:13:01.643402   73479 api_server.go:52] waiting for apiserver process to appear ...
	I0731 18:13:01.643428   73479 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 18:13:01.643481   73479 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 18:13:01.692071   73479 cri.go:89] found id: "895465d024797e36349af3a0f7cd0689ec7e813d811a3205a301baae081986e0"
	I0731 18:13:01.692092   73479 cri.go:89] found id: ""
	I0731 18:13:01.692101   73479 logs.go:276] 1 containers: [895465d024797e36349af3a0f7cd0689ec7e813d811a3205a301baae081986e0]
	I0731 18:13:01.692159   73479 ssh_runner.go:195] Run: which crictl
	I0731 18:13:01.697266   73479 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 18:13:01.697356   73479 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 18:13:01.736299   73479 cri.go:89] found id: "65ef90d7b082adf059c9973e7f1dfde4770ed9718d440c8e0d525d427b16f645"
	I0731 18:13:01.736350   73479 cri.go:89] found id: ""
	I0731 18:13:01.736360   73479 logs.go:276] 1 containers: [65ef90d7b082adf059c9973e7f1dfde4770ed9718d440c8e0d525d427b16f645]
	I0731 18:13:01.736417   73479 ssh_runner.go:195] Run: which crictl
	I0731 18:13:01.740672   73479 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 18:13:01.740733   73479 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 18:13:01.774782   73479 cri.go:89] found id: "f043eb2392c22ee26a4729969283f278a4dc7ff4784addf7139958beb2cba855"
	I0731 18:13:01.774816   73479 cri.go:89] found id: ""
	I0731 18:13:01.774826   73479 logs.go:276] 1 containers: [f043eb2392c22ee26a4729969283f278a4dc7ff4784addf7139958beb2cba855]
	I0731 18:13:01.774893   73479 ssh_runner.go:195] Run: which crictl
	I0731 18:13:01.778542   73479 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 18:13:01.778618   73479 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 18:13:01.818749   73479 cri.go:89] found id: "ed1c40e21d8aa2b6321516625802154255f5eda1642411e377150b8df1c88bd2"
	I0731 18:13:01.818769   73479 cri.go:89] found id: ""
	I0731 18:13:01.818776   73479 logs.go:276] 1 containers: [ed1c40e21d8aa2b6321516625802154255f5eda1642411e377150b8df1c88bd2]
	I0731 18:13:01.818828   73479 ssh_runner.go:195] Run: which crictl
	I0731 18:13:01.827176   73479 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 18:13:01.827248   73479 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 18:13:01.860700   73479 cri.go:89] found id: "57bdb8e09be40a0925d7a86eb613ef8a0455415666562743e36e6800b8636847"
	I0731 18:13:01.860730   73479 cri.go:89] found id: ""
	I0731 18:13:01.860739   73479 logs.go:276] 1 containers: [57bdb8e09be40a0925d7a86eb613ef8a0455415666562743e36e6800b8636847]
	I0731 18:13:01.860825   73479 ssh_runner.go:195] Run: which crictl
	I0731 18:13:03.654494   74203 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0731 18:13:03.654747   74203 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0731 18:13:01.864629   73479 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 18:13:01.864702   73479 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 18:13:01.899293   73479 cri.go:89] found id: "ee75a53c57652705a335378f60ff96b2fce4543143edc35249ac027c7c6e4a2c"
	I0731 18:13:01.899338   73479 cri.go:89] found id: ""
	I0731 18:13:01.899347   73479 logs.go:276] 1 containers: [ee75a53c57652705a335378f60ff96b2fce4543143edc35249ac027c7c6e4a2c]
	I0731 18:13:01.899406   73479 ssh_runner.go:195] Run: which crictl
	I0731 18:13:01.903202   73479 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 18:13:01.903272   73479 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 18:13:01.934472   73479 cri.go:89] found id: ""
	I0731 18:13:01.934505   73479 logs.go:276] 0 containers: []
	W0731 18:13:01.934516   73479 logs.go:278] No container was found matching "kindnet"
	I0731 18:13:01.934523   73479 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0731 18:13:01.934588   73479 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0731 18:13:01.967244   73479 cri.go:89] found id: "6f311536202ba854461dd5abcc13c5b8d006399426b52e0c8d10bc41f06826df"
	I0731 18:13:01.967271   73479 cri.go:89] found id: "9ea2bc105f57ae4d05e4535f2851a35b9014c7574a63fef533fa0c487002941c"
	I0731 18:13:01.967276   73479 cri.go:89] found id: ""
	I0731 18:13:01.967285   73479 logs.go:276] 2 containers: [6f311536202ba854461dd5abcc13c5b8d006399426b52e0c8d10bc41f06826df 9ea2bc105f57ae4d05e4535f2851a35b9014c7574a63fef533fa0c487002941c]
	I0731 18:13:01.967349   73479 ssh_runner.go:195] Run: which crictl
	I0731 18:13:01.971167   73479 ssh_runner.go:195] Run: which crictl
	I0731 18:13:01.975648   73479 logs.go:123] Gathering logs for kubelet ...
	I0731 18:13:01.975670   73479 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 18:13:02.031430   73479 logs.go:123] Gathering logs for describe nodes ...
	I0731 18:13:02.031472   73479 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 18:13:02.158774   73479 logs.go:123] Gathering logs for kube-apiserver [895465d024797e36349af3a0f7cd0689ec7e813d811a3205a301baae081986e0] ...
	I0731 18:13:02.158803   73479 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 895465d024797e36349af3a0f7cd0689ec7e813d811a3205a301baae081986e0"
	I0731 18:13:02.199495   73479 logs.go:123] Gathering logs for kube-scheduler [ed1c40e21d8aa2b6321516625802154255f5eda1642411e377150b8df1c88bd2] ...
	I0731 18:13:02.199521   73479 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ed1c40e21d8aa2b6321516625802154255f5eda1642411e377150b8df1c88bd2"
	I0731 18:13:02.232285   73479 logs.go:123] Gathering logs for kube-proxy [57bdb8e09be40a0925d7a86eb613ef8a0455415666562743e36e6800b8636847] ...
	I0731 18:13:02.232327   73479 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 57bdb8e09be40a0925d7a86eb613ef8a0455415666562743e36e6800b8636847"
	I0731 18:13:02.272360   73479 logs.go:123] Gathering logs for storage-provisioner [9ea2bc105f57ae4d05e4535f2851a35b9014c7574a63fef533fa0c487002941c] ...
	I0731 18:13:02.272389   73479 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9ea2bc105f57ae4d05e4535f2851a35b9014c7574a63fef533fa0c487002941c"
	I0731 18:13:02.305902   73479 logs.go:123] Gathering logs for dmesg ...
	I0731 18:13:02.305931   73479 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 18:13:02.319954   73479 logs.go:123] Gathering logs for etcd [65ef90d7b082adf059c9973e7f1dfde4770ed9718d440c8e0d525d427b16f645] ...
	I0731 18:13:02.319984   73479 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 65ef90d7b082adf059c9973e7f1dfde4770ed9718d440c8e0d525d427b16f645"
	I0731 18:13:02.361657   73479 logs.go:123] Gathering logs for coredns [f043eb2392c22ee26a4729969283f278a4dc7ff4784addf7139958beb2cba855] ...
	I0731 18:13:02.361685   73479 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f043eb2392c22ee26a4729969283f278a4dc7ff4784addf7139958beb2cba855"
	I0731 18:13:02.395696   73479 logs.go:123] Gathering logs for kube-controller-manager [ee75a53c57652705a335378f60ff96b2fce4543143edc35249ac027c7c6e4a2c] ...
	I0731 18:13:02.395724   73479 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ee75a53c57652705a335378f60ff96b2fce4543143edc35249ac027c7c6e4a2c"
	I0731 18:13:02.444671   73479 logs.go:123] Gathering logs for storage-provisioner [6f311536202ba854461dd5abcc13c5b8d006399426b52e0c8d10bc41f06826df] ...
	I0731 18:13:02.444704   73479 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6f311536202ba854461dd5abcc13c5b8d006399426b52e0c8d10bc41f06826df"
	I0731 18:13:02.480666   73479 logs.go:123] Gathering logs for CRI-O ...
	I0731 18:13:02.480693   73479 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 18:13:02.967693   73479 logs.go:123] Gathering logs for container status ...
	I0731 18:13:02.967741   73479 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 18:13:05.512381   73479 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:13:05.528582   73479 api_server.go:72] duration metric: took 4m19.030809429s to wait for apiserver process to appear ...
	I0731 18:13:05.528612   73479 api_server.go:88] waiting for apiserver healthz status ...
	I0731 18:13:05.528652   73479 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 18:13:05.528730   73479 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 18:13:05.567984   73479 cri.go:89] found id: "895465d024797e36349af3a0f7cd0689ec7e813d811a3205a301baae081986e0"
	I0731 18:13:05.568004   73479 cri.go:89] found id: ""
	I0731 18:13:05.568013   73479 logs.go:276] 1 containers: [895465d024797e36349af3a0f7cd0689ec7e813d811a3205a301baae081986e0]
	I0731 18:13:05.568073   73479 ssh_runner.go:195] Run: which crictl
	I0731 18:13:05.571946   73479 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 18:13:05.572003   73479 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 18:13:05.620468   73479 cri.go:89] found id: "65ef90d7b082adf059c9973e7f1dfde4770ed9718d440c8e0d525d427b16f645"
	I0731 18:13:05.620495   73479 cri.go:89] found id: ""
	I0731 18:13:05.620504   73479 logs.go:276] 1 containers: [65ef90d7b082adf059c9973e7f1dfde4770ed9718d440c8e0d525d427b16f645]
	I0731 18:13:05.620571   73479 ssh_runner.go:195] Run: which crictl
	I0731 18:13:05.624599   73479 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 18:13:05.624653   73479 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 18:13:05.663717   73479 cri.go:89] found id: "f043eb2392c22ee26a4729969283f278a4dc7ff4784addf7139958beb2cba855"
	I0731 18:13:05.663740   73479 cri.go:89] found id: ""
	I0731 18:13:05.663748   73479 logs.go:276] 1 containers: [f043eb2392c22ee26a4729969283f278a4dc7ff4784addf7139958beb2cba855]
	I0731 18:13:05.663803   73479 ssh_runner.go:195] Run: which crictl
	I0731 18:13:05.667601   73479 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 18:13:05.667672   73479 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 18:13:05.699764   73479 cri.go:89] found id: "ed1c40e21d8aa2b6321516625802154255f5eda1642411e377150b8df1c88bd2"
	I0731 18:13:05.699791   73479 cri.go:89] found id: ""
	I0731 18:13:05.699801   73479 logs.go:276] 1 containers: [ed1c40e21d8aa2b6321516625802154255f5eda1642411e377150b8df1c88bd2]
	I0731 18:13:05.699858   73479 ssh_runner.go:195] Run: which crictl
	I0731 18:13:05.703965   73479 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 18:13:05.704036   73479 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 18:13:05.739460   73479 cri.go:89] found id: "57bdb8e09be40a0925d7a86eb613ef8a0455415666562743e36e6800b8636847"
	I0731 18:13:05.739487   73479 cri.go:89] found id: ""
	I0731 18:13:05.739496   73479 logs.go:276] 1 containers: [57bdb8e09be40a0925d7a86eb613ef8a0455415666562743e36e6800b8636847]
	I0731 18:13:05.739558   73479 ssh_runner.go:195] Run: which crictl
	I0731 18:13:05.743180   73479 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 18:13:05.743232   73479 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 18:13:05.777369   73479 cri.go:89] found id: "ee75a53c57652705a335378f60ff96b2fce4543143edc35249ac027c7c6e4a2c"
	I0731 18:13:05.777390   73479 cri.go:89] found id: ""
	I0731 18:13:05.777397   73479 logs.go:276] 1 containers: [ee75a53c57652705a335378f60ff96b2fce4543143edc35249ac027c7c6e4a2c]
	I0731 18:13:05.777449   73479 ssh_runner.go:195] Run: which crictl
	I0731 18:13:05.781388   73479 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 18:13:05.781435   73479 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 18:13:05.825567   73479 cri.go:89] found id: ""
	I0731 18:13:05.825599   73479 logs.go:276] 0 containers: []
	W0731 18:13:05.825610   73479 logs.go:278] No container was found matching "kindnet"
	I0731 18:13:05.825617   73479 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0731 18:13:05.825689   73479 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0731 18:13:05.859538   73479 cri.go:89] found id: "6f311536202ba854461dd5abcc13c5b8d006399426b52e0c8d10bc41f06826df"
	I0731 18:13:05.859570   73479 cri.go:89] found id: "9ea2bc105f57ae4d05e4535f2851a35b9014c7574a63fef533fa0c487002941c"
	I0731 18:13:05.859577   73479 cri.go:89] found id: ""
	I0731 18:13:05.859586   73479 logs.go:276] 2 containers: [6f311536202ba854461dd5abcc13c5b8d006399426b52e0c8d10bc41f06826df 9ea2bc105f57ae4d05e4535f2851a35b9014c7574a63fef533fa0c487002941c]
	I0731 18:13:05.859657   73479 ssh_runner.go:195] Run: which crictl
	I0731 18:13:05.863513   73479 ssh_runner.go:195] Run: which crictl
	I0731 18:13:05.866989   73479 logs.go:123] Gathering logs for CRI-O ...
	I0731 18:13:05.867011   73479 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 18:13:06.314116   73479 logs.go:123] Gathering logs for container status ...
	I0731 18:13:06.314166   73479 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 18:13:06.357738   73479 logs.go:123] Gathering logs for kubelet ...
	I0731 18:13:06.357764   73479 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 18:13:06.407330   73479 logs.go:123] Gathering logs for describe nodes ...
	I0731 18:13:06.407365   73479 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 18:13:06.508580   73479 logs.go:123] Gathering logs for kube-apiserver [895465d024797e36349af3a0f7cd0689ec7e813d811a3205a301baae081986e0] ...
	I0731 18:13:06.508616   73479 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 895465d024797e36349af3a0f7cd0689ec7e813d811a3205a301baae081986e0"
	I0731 18:13:06.550032   73479 logs.go:123] Gathering logs for coredns [f043eb2392c22ee26a4729969283f278a4dc7ff4784addf7139958beb2cba855] ...
	I0731 18:13:06.550071   73479 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f043eb2392c22ee26a4729969283f278a4dc7ff4784addf7139958beb2cba855"
	I0731 18:13:06.588519   73479 logs.go:123] Gathering logs for kube-proxy [57bdb8e09be40a0925d7a86eb613ef8a0455415666562743e36e6800b8636847] ...
	I0731 18:13:06.588548   73479 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 57bdb8e09be40a0925d7a86eb613ef8a0455415666562743e36e6800b8636847"
	I0731 18:13:06.622872   73479 logs.go:123] Gathering logs for storage-provisioner [6f311536202ba854461dd5abcc13c5b8d006399426b52e0c8d10bc41f06826df] ...
	I0731 18:13:06.622901   73479 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6f311536202ba854461dd5abcc13c5b8d006399426b52e0c8d10bc41f06826df"
	I0731 18:13:06.666694   73479 logs.go:123] Gathering logs for dmesg ...
	I0731 18:13:06.666721   73479 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 18:13:06.680326   73479 logs.go:123] Gathering logs for etcd [65ef90d7b082adf059c9973e7f1dfde4770ed9718d440c8e0d525d427b16f645] ...
	I0731 18:13:06.680355   73479 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 65ef90d7b082adf059c9973e7f1dfde4770ed9718d440c8e0d525d427b16f645"
	I0731 18:13:06.723966   73479 logs.go:123] Gathering logs for kube-scheduler [ed1c40e21d8aa2b6321516625802154255f5eda1642411e377150b8df1c88bd2] ...
	I0731 18:13:06.723997   73479 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ed1c40e21d8aa2b6321516625802154255f5eda1642411e377150b8df1c88bd2"
	I0731 18:13:06.760873   73479 logs.go:123] Gathering logs for kube-controller-manager [ee75a53c57652705a335378f60ff96b2fce4543143edc35249ac027c7c6e4a2c] ...
	I0731 18:13:06.760901   73479 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ee75a53c57652705a335378f60ff96b2fce4543143edc35249ac027c7c6e4a2c"
	I0731 18:13:06.809348   73479 logs.go:123] Gathering logs for storage-provisioner [9ea2bc105f57ae4d05e4535f2851a35b9014c7574a63fef533fa0c487002941c] ...
	I0731 18:13:06.809387   73479 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9ea2bc105f57ae4d05e4535f2851a35b9014c7574a63fef533fa0c487002941c"
	I0731 18:13:09.341394   73479 api_server.go:253] Checking apiserver healthz at https://192.168.61.126:8443/healthz ...
	I0731 18:13:09.346642   73479 api_server.go:279] https://192.168.61.126:8443/healthz returned 200:
	ok
	I0731 18:13:09.347803   73479 api_server.go:141] control plane version: v1.31.0-beta.0
	I0731 18:13:09.347821   73479 api_server.go:131] duration metric: took 3.819202346s to wait for apiserver health ...
	I0731 18:13:09.347828   73479 system_pods.go:43] waiting for kube-system pods to appear ...
	I0731 18:13:09.347850   73479 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 18:13:09.347903   73479 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 18:13:09.391857   73479 cri.go:89] found id: "895465d024797e36349af3a0f7cd0689ec7e813d811a3205a301baae081986e0"
	I0731 18:13:09.391885   73479 cri.go:89] found id: ""
	I0731 18:13:09.391895   73479 logs.go:276] 1 containers: [895465d024797e36349af3a0f7cd0689ec7e813d811a3205a301baae081986e0]
	I0731 18:13:09.391956   73479 ssh_runner.go:195] Run: which crictl
	I0731 18:13:09.395723   73479 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 18:13:09.395789   73479 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 18:13:09.430108   73479 cri.go:89] found id: "65ef90d7b082adf059c9973e7f1dfde4770ed9718d440c8e0d525d427b16f645"
	I0731 18:13:09.430128   73479 cri.go:89] found id: ""
	I0731 18:13:09.430135   73479 logs.go:276] 1 containers: [65ef90d7b082adf059c9973e7f1dfde4770ed9718d440c8e0d525d427b16f645]
	I0731 18:13:09.430180   73479 ssh_runner.go:195] Run: which crictl
	I0731 18:13:09.433933   73479 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 18:13:09.434037   73479 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 18:13:09.471630   73479 cri.go:89] found id: "f043eb2392c22ee26a4729969283f278a4dc7ff4784addf7139958beb2cba855"
	I0731 18:13:09.471655   73479 cri.go:89] found id: ""
	I0731 18:13:09.471663   73479 logs.go:276] 1 containers: [f043eb2392c22ee26a4729969283f278a4dc7ff4784addf7139958beb2cba855]
	I0731 18:13:09.471709   73479 ssh_runner.go:195] Run: which crictl
	I0731 18:13:09.476432   73479 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 18:13:09.476496   73479 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 18:13:09.519568   73479 cri.go:89] found id: "ed1c40e21d8aa2b6321516625802154255f5eda1642411e377150b8df1c88bd2"
	I0731 18:13:09.519590   73479 cri.go:89] found id: ""
	I0731 18:13:09.519598   73479 logs.go:276] 1 containers: [ed1c40e21d8aa2b6321516625802154255f5eda1642411e377150b8df1c88bd2]
	I0731 18:13:09.519641   73479 ssh_runner.go:195] Run: which crictl
	I0731 18:13:09.523587   73479 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 18:13:09.523656   73479 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 18:13:09.559405   73479 cri.go:89] found id: "57bdb8e09be40a0925d7a86eb613ef8a0455415666562743e36e6800b8636847"
	I0731 18:13:09.559429   73479 cri.go:89] found id: ""
	I0731 18:13:09.559438   73479 logs.go:276] 1 containers: [57bdb8e09be40a0925d7a86eb613ef8a0455415666562743e36e6800b8636847]
	I0731 18:13:09.559485   73479 ssh_runner.go:195] Run: which crictl
	I0731 18:13:09.564137   73479 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 18:13:09.564199   73479 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 18:13:09.605298   73479 cri.go:89] found id: "ee75a53c57652705a335378f60ff96b2fce4543143edc35249ac027c7c6e4a2c"
	I0731 18:13:09.605324   73479 cri.go:89] found id: ""
	I0731 18:13:09.605332   73479 logs.go:276] 1 containers: [ee75a53c57652705a335378f60ff96b2fce4543143edc35249ac027c7c6e4a2c]
	I0731 18:13:09.605403   73479 ssh_runner.go:195] Run: which crictl
	I0731 18:13:09.612233   73479 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 18:13:09.612296   73479 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 18:13:09.648804   73479 cri.go:89] found id: ""
	I0731 18:13:09.648836   73479 logs.go:276] 0 containers: []
	W0731 18:13:09.648848   73479 logs.go:278] No container was found matching "kindnet"
	I0731 18:13:09.648855   73479 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0731 18:13:09.648916   73479 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0731 18:13:09.694708   73479 cri.go:89] found id: "6f311536202ba854461dd5abcc13c5b8d006399426b52e0c8d10bc41f06826df"
	I0731 18:13:09.694733   73479 cri.go:89] found id: "9ea2bc105f57ae4d05e4535f2851a35b9014c7574a63fef533fa0c487002941c"
	I0731 18:13:09.694737   73479 cri.go:89] found id: ""
	I0731 18:13:09.694743   73479 logs.go:276] 2 containers: [6f311536202ba854461dd5abcc13c5b8d006399426b52e0c8d10bc41f06826df 9ea2bc105f57ae4d05e4535f2851a35b9014c7574a63fef533fa0c487002941c]
	I0731 18:13:09.694794   73479 ssh_runner.go:195] Run: which crictl
	I0731 18:13:09.698687   73479 ssh_runner.go:195] Run: which crictl
	I0731 18:13:09.702244   73479 logs.go:123] Gathering logs for storage-provisioner [6f311536202ba854461dd5abcc13c5b8d006399426b52e0c8d10bc41f06826df] ...
	I0731 18:13:09.702261   73479 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6f311536202ba854461dd5abcc13c5b8d006399426b52e0c8d10bc41f06826df"
	I0731 18:13:09.737777   73479 logs.go:123] Gathering logs for storage-provisioner [9ea2bc105f57ae4d05e4535f2851a35b9014c7574a63fef533fa0c487002941c] ...
	I0731 18:13:09.737808   73479 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9ea2bc105f57ae4d05e4535f2851a35b9014c7574a63fef533fa0c487002941c"
	I0731 18:13:09.771128   73479 logs.go:123] Gathering logs for container status ...
	I0731 18:13:09.771161   73479 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 18:13:09.817498   73479 logs.go:123] Gathering logs for dmesg ...
	I0731 18:13:09.817525   73479 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 18:13:09.833574   73479 logs.go:123] Gathering logs for etcd [65ef90d7b082adf059c9973e7f1dfde4770ed9718d440c8e0d525d427b16f645] ...
	I0731 18:13:09.833607   73479 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 65ef90d7b082adf059c9973e7f1dfde4770ed9718d440c8e0d525d427b16f645"
	I0731 18:13:09.872664   73479 logs.go:123] Gathering logs for kube-scheduler [ed1c40e21d8aa2b6321516625802154255f5eda1642411e377150b8df1c88bd2] ...
	I0731 18:13:09.872691   73479 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ed1c40e21d8aa2b6321516625802154255f5eda1642411e377150b8df1c88bd2"
	I0731 18:13:09.913741   73479 logs.go:123] Gathering logs for coredns [f043eb2392c22ee26a4729969283f278a4dc7ff4784addf7139958beb2cba855] ...
	I0731 18:13:09.913771   73479 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f043eb2392c22ee26a4729969283f278a4dc7ff4784addf7139958beb2cba855"
	I0731 18:13:09.949469   73479 logs.go:123] Gathering logs for kube-proxy [57bdb8e09be40a0925d7a86eb613ef8a0455415666562743e36e6800b8636847] ...
	I0731 18:13:09.949512   73479 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 57bdb8e09be40a0925d7a86eb613ef8a0455415666562743e36e6800b8636847"
	I0731 18:13:09.985409   73479 logs.go:123] Gathering logs for kube-controller-manager [ee75a53c57652705a335378f60ff96b2fce4543143edc35249ac027c7c6e4a2c] ...
	I0731 18:13:09.985447   73479 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ee75a53c57652705a335378f60ff96b2fce4543143edc35249ac027c7c6e4a2c"
	I0731 18:13:10.039018   73479 logs.go:123] Gathering logs for CRI-O ...
	I0731 18:13:10.039048   73479 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 18:13:10.406380   73479 logs.go:123] Gathering logs for kubelet ...
	I0731 18:13:10.406416   73479 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 18:13:10.459944   73479 logs.go:123] Gathering logs for describe nodes ...
	I0731 18:13:10.459997   73479 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 18:13:10.564092   73479 logs.go:123] Gathering logs for kube-apiserver [895465d024797e36349af3a0f7cd0689ec7e813d811a3205a301baae081986e0] ...
	I0731 18:13:10.564134   73479 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 895465d024797e36349af3a0f7cd0689ec7e813d811a3205a301baae081986e0"
	I0731 18:13:13.124074   73479 system_pods.go:59] 8 kube-system pods found
	I0731 18:13:13.124102   73479 system_pods.go:61] "coredns-5cfdc65f69-k7clq" [e4be77b6-aa7c-45e2-90a1-6a8264fd5101] Running
	I0731 18:13:13.124107   73479 system_pods.go:61] "etcd-no-preload-673754" [d64e9194-dd33-4a28-b8c3-d25462f1dae8] Running
	I0731 18:13:13.124110   73479 system_pods.go:61] "kube-apiserver-no-preload-673754" [4497614d-51de-4a5f-93dc-1397446fe4c8] Running
	I0731 18:13:13.124114   73479 system_pods.go:61] "kube-controller-manager-no-preload-673754" [fe7f714e-d778-49e7-b4ef-44e6efb5bc33] Running
	I0731 18:13:13.124117   73479 system_pods.go:61] "kube-proxy-hqxh6" [4fb623c0-4be3-421c-85d6-1d76a90b874f] Running
	I0731 18:13:13.124119   73479 system_pods.go:61] "kube-scheduler-no-preload-673754" [558e9b78-a6a9-46b8-9db8-004126359c74] Running
	I0731 18:13:13.124125   73479 system_pods.go:61] "metrics-server-78fcd8795b-27pkr" [a63a156b-0446-4bd9-8619-de75edaeb481] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0731 18:13:13.124129   73479 system_pods.go:61] "storage-provisioner" [a9f0ab70-18f4-4fac-b858-a9177077fe29] Running
	I0731 18:13:13.124135   73479 system_pods.go:74] duration metric: took 3.776302431s to wait for pod list to return data ...
	I0731 18:13:13.124141   73479 default_sa.go:34] waiting for default service account to be created ...
	I0731 18:13:13.127100   73479 default_sa.go:45] found service account: "default"
	I0731 18:13:13.127137   73479 default_sa.go:55] duration metric: took 2.989455ms for default service account to be created ...
	I0731 18:13:13.127148   73479 system_pods.go:116] waiting for k8s-apps to be running ...
	I0731 18:13:13.132359   73479 system_pods.go:86] 8 kube-system pods found
	I0731 18:13:13.132379   73479 system_pods.go:89] "coredns-5cfdc65f69-k7clq" [e4be77b6-aa7c-45e2-90a1-6a8264fd5101] Running
	I0731 18:13:13.132387   73479 system_pods.go:89] "etcd-no-preload-673754" [d64e9194-dd33-4a28-b8c3-d25462f1dae8] Running
	I0731 18:13:13.132393   73479 system_pods.go:89] "kube-apiserver-no-preload-673754" [4497614d-51de-4a5f-93dc-1397446fe4c8] Running
	I0731 18:13:13.132399   73479 system_pods.go:89] "kube-controller-manager-no-preload-673754" [fe7f714e-d778-49e7-b4ef-44e6efb5bc33] Running
	I0731 18:13:13.132405   73479 system_pods.go:89] "kube-proxy-hqxh6" [4fb623c0-4be3-421c-85d6-1d76a90b874f] Running
	I0731 18:13:13.132410   73479 system_pods.go:89] "kube-scheduler-no-preload-673754" [558e9b78-a6a9-46b8-9db8-004126359c74] Running
	I0731 18:13:13.132420   73479 system_pods.go:89] "metrics-server-78fcd8795b-27pkr" [a63a156b-0446-4bd9-8619-de75edaeb481] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0731 18:13:13.132427   73479 system_pods.go:89] "storage-provisioner" [a9f0ab70-18f4-4fac-b858-a9177077fe29] Running
	I0731 18:13:13.132435   73479 system_pods.go:126] duration metric: took 5.281138ms to wait for k8s-apps to be running ...
	I0731 18:13:13.132443   73479 system_svc.go:44] waiting for kubelet service to be running ....
	I0731 18:13:13.132488   73479 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0731 18:13:13.148254   73479 system_svc.go:56] duration metric: took 15.802724ms WaitForService to wait for kubelet
	I0731 18:13:13.148281   73479 kubeadm.go:582] duration metric: took 4m26.650509962s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0731 18:13:13.148315   73479 node_conditions.go:102] verifying NodePressure condition ...
	I0731 18:13:13.151986   73479 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0731 18:13:13.152006   73479 node_conditions.go:123] node cpu capacity is 2
	I0731 18:13:13.152018   73479 node_conditions.go:105] duration metric: took 3.693857ms to run NodePressure ...
	I0731 18:13:13.152031   73479 start.go:241] waiting for startup goroutines ...
	I0731 18:13:13.152043   73479 start.go:246] waiting for cluster config update ...
	I0731 18:13:13.152058   73479 start.go:255] writing updated cluster config ...
	I0731 18:13:13.152347   73479 ssh_runner.go:195] Run: rm -f paused
	I0731 18:13:13.202434   73479 start.go:600] kubectl: 1.30.3, cluster: 1.31.0-beta.0 (minor skew: 1)
	I0731 18:13:13.205205   73479 out.go:177] * Done! kubectl is now configured to use "no-preload-673754" cluster and "default" namespace by default
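The pod_ready.go waits above (including the 4m0s wait on metrics-server-78fcd8795b-27pkr that ended in "WaitExtra: waitPodCondition: context deadline exceeded") all follow the same pattern: repeatedly list the pod and check its Ready condition until a deadline expires. Below is a rough client-go sketch of that loop; the kubeconfig path, namespace, label selector, and timeout are assumptions, and this is not the test's actual code.

// Sketch: wait for a labeled pod in kube-system to report the Ready condition.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func isPodReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	deadline := time.Now().Add(4 * time.Minute)
	for time.Now().Before(deadline) {
		// Assumption: metrics-server pods carry the k8s-app=metrics-server label.
		pods, err := client.CoreV1().Pods("kube-system").List(context.TODO(),
			metav1.ListOptions{LabelSelector: "k8s-app=metrics-server"})
		if err == nil {
			for _, p := range pods.Items {
				if isPodReady(&p) {
					fmt.Printf("pod %q is Ready\n", p.Name)
					return
				}
			}
		}
		time.Sleep(2 * time.Second)
	}
	fmt.Println("timed out waiting for pod to be Ready") // analogous to the context deadline exceeded above
}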
	I0731 18:13:13.655618   74203 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0731 18:13:13.655843   74203 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0731 18:13:33.657356   74203 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0731 18:13:33.657560   74203 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0731 18:14:13.660934   74203 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0731 18:14:13.661161   74203 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0731 18:14:13.661183   74203 kubeadm.go:310] 
	I0731 18:14:13.661216   74203 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0731 18:14:13.661251   74203 kubeadm.go:310] 		timed out waiting for the condition
	I0731 18:14:13.661279   74203 kubeadm.go:310] 
	I0731 18:14:13.661338   74203 kubeadm.go:310] 	This error is likely caused by:
	I0731 18:14:13.661378   74203 kubeadm.go:310] 		- The kubelet is not running
	I0731 18:14:13.661477   74203 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0731 18:14:13.661483   74203 kubeadm.go:310] 
	I0731 18:14:13.661577   74203 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0731 18:14:13.661617   74203 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0731 18:14:13.661646   74203 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0731 18:14:13.661651   74203 kubeadm.go:310] 
	I0731 18:14:13.661781   74203 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0731 18:14:13.661897   74203 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0731 18:14:13.661909   74203 kubeadm.go:310] 
	I0731 18:14:13.662044   74203 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0731 18:14:13.662164   74203 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0731 18:14:13.662265   74203 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0731 18:14:13.662444   74203 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0731 18:14:13.662477   74203 kubeadm.go:310] 
	I0731 18:14:13.663123   74203 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0731 18:14:13.663235   74203 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0731 18:14:13.663331   74203 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	W0731 18:14:13.663497   74203 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0731 18:14:13.663559   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0731 18:14:18.956376   74203 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (5.292787213s)
	I0731 18:14:18.956479   74203 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0731 18:14:18.970820   74203 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0731 18:14:18.980747   74203 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0731 18:14:18.980771   74203 kubeadm.go:157] found existing configuration files:
	
	I0731 18:14:18.980816   74203 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0731 18:14:18.989985   74203 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0731 18:14:18.990063   74203 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0731 18:14:18.999143   74203 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0731 18:14:19.008740   74203 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0731 18:14:19.008798   74203 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0731 18:14:19.018729   74203 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0731 18:14:19.028953   74203 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0731 18:14:19.029015   74203 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0731 18:14:19.039399   74203 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0731 18:14:19.049072   74203 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0731 18:14:19.049124   74203 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0731 18:14:19.059592   74203 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0731 18:14:19.121542   74203 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0731 18:14:19.121613   74203 kubeadm.go:310] [preflight] Running pre-flight checks
	I0731 18:14:19.271989   74203 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0731 18:14:19.272100   74203 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0731 18:14:19.272223   74203 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0731 18:14:19.440224   74203 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0731 18:14:19.441929   74203 out.go:204]   - Generating certificates and keys ...
	I0731 18:14:19.442025   74203 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0731 18:14:19.442104   74203 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0731 18:14:19.442196   74203 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0731 18:14:19.442245   74203 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0731 18:14:19.442326   74203 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0731 18:14:19.442395   74203 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0731 18:14:19.442498   74203 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0731 18:14:19.442610   74203 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0731 18:14:19.442687   74203 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0731 18:14:19.442770   74203 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0731 18:14:19.442813   74203 kubeadm.go:310] [certs] Using the existing "sa" key
	I0731 18:14:19.442887   74203 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0731 18:14:19.481696   74203 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0731 18:14:19.804252   74203 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0731 18:14:20.038734   74203 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0731 18:14:20.211133   74203 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0731 18:14:20.225726   74203 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0731 18:14:20.227920   74203 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0731 18:14:20.227977   74203 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0731 18:14:20.364068   74203 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0731 18:14:20.365991   74203 out.go:204]   - Booting up control plane ...
	I0731 18:14:20.366094   74203 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0731 18:14:20.366195   74203 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0731 18:14:20.366270   74203 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0731 18:14:20.366379   74203 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0731 18:14:20.367688   74203 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0731 18:15:00.365616   74203 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0731 18:15:00.366184   74203 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0731 18:15:00.366412   74203 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0731 18:15:05.366332   74203 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0731 18:15:05.366529   74203 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0731 18:15:15.366241   74203 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0731 18:15:15.366499   74203 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0731 18:15:35.366114   74203 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0731 18:15:35.366344   74203 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0731 18:16:15.365995   74203 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0731 18:16:15.366181   74203 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0731 18:16:15.366191   74203 kubeadm.go:310] 
	I0731 18:16:15.366224   74203 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0731 18:16:15.366448   74203 kubeadm.go:310] 		timed out waiting for the condition
	I0731 18:16:15.366472   74203 kubeadm.go:310] 
	I0731 18:16:15.366517   74203 kubeadm.go:310] 	This error is likely caused by:
	I0731 18:16:15.366568   74203 kubeadm.go:310] 		- The kubelet is not running
	I0731 18:16:15.366723   74203 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0731 18:16:15.366740   74203 kubeadm.go:310] 
	I0731 18:16:15.366863   74203 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0731 18:16:15.366924   74203 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0731 18:16:15.366986   74203 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0731 18:16:15.366999   74203 kubeadm.go:310] 
	I0731 18:16:15.367153   74203 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0731 18:16:15.367271   74203 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0731 18:16:15.367283   74203 kubeadm.go:310] 
	I0731 18:16:15.367418   74203 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0731 18:16:15.367504   74203 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0731 18:16:15.367609   74203 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0731 18:16:15.367725   74203 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0731 18:16:15.367734   74203 kubeadm.go:310] 
	I0731 18:16:15.369210   74203 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0731 18:16:15.369361   74203 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0731 18:16:15.369434   74203 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0731 18:16:15.369496   74203 kubeadm.go:394] duration metric: took 8m6.557607575s to StartCluster
	I0731 18:16:15.369537   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 18:16:15.369590   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 18:16:15.432899   74203 cri.go:89] found id: ""
	I0731 18:16:15.432929   74203 logs.go:276] 0 containers: []
	W0731 18:16:15.432941   74203 logs.go:278] No container was found matching "kube-apiserver"
	I0731 18:16:15.432947   74203 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 18:16:15.433005   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 18:16:15.470506   74203 cri.go:89] found id: ""
	I0731 18:16:15.470534   74203 logs.go:276] 0 containers: []
	W0731 18:16:15.470542   74203 logs.go:278] No container was found matching "etcd"
	I0731 18:16:15.470549   74203 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 18:16:15.470609   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 18:16:15.502032   74203 cri.go:89] found id: ""
	I0731 18:16:15.502055   74203 logs.go:276] 0 containers: []
	W0731 18:16:15.502062   74203 logs.go:278] No container was found matching "coredns"
	I0731 18:16:15.502067   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 18:16:15.502115   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 18:16:15.533897   74203 cri.go:89] found id: ""
	I0731 18:16:15.533918   74203 logs.go:276] 0 containers: []
	W0731 18:16:15.533925   74203 logs.go:278] No container was found matching "kube-scheduler"
	I0731 18:16:15.533930   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 18:16:15.533980   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 18:16:15.565275   74203 cri.go:89] found id: ""
	I0731 18:16:15.565311   74203 logs.go:276] 0 containers: []
	W0731 18:16:15.565326   74203 logs.go:278] No container was found matching "kube-proxy"
	I0731 18:16:15.565333   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 18:16:15.565395   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 18:16:15.601402   74203 cri.go:89] found id: ""
	I0731 18:16:15.601427   74203 logs.go:276] 0 containers: []
	W0731 18:16:15.601435   74203 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 18:16:15.601440   74203 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 18:16:15.601489   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 18:16:15.638778   74203 cri.go:89] found id: ""
	I0731 18:16:15.638801   74203 logs.go:276] 0 containers: []
	W0731 18:16:15.638808   74203 logs.go:278] No container was found matching "kindnet"
	I0731 18:16:15.638813   74203 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 18:16:15.638861   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 18:16:15.675697   74203 cri.go:89] found id: ""
	I0731 18:16:15.675720   74203 logs.go:276] 0 containers: []
	W0731 18:16:15.675728   74203 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 18:16:15.675736   74203 logs.go:123] Gathering logs for describe nodes ...
	I0731 18:16:15.675748   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 18:16:15.745287   74203 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 18:16:15.745325   74203 logs.go:123] Gathering logs for CRI-O ...
	I0731 18:16:15.745341   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 18:16:15.848503   74203 logs.go:123] Gathering logs for container status ...
	I0731 18:16:15.848536   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 18:16:15.887234   74203 logs.go:123] Gathering logs for kubelet ...
	I0731 18:16:15.887258   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 18:16:15.934871   74203 logs.go:123] Gathering logs for dmesg ...
	I0731 18:16:15.934901   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	W0731 18:16:15.947727   74203 out.go:364] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0731 18:16:15.947769   74203 out.go:239] * 
	W0731 18:16:15.947817   74203 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0731 18:16:15.947836   74203 out.go:239] * 
	W0731 18:16:15.948669   74203 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0731 18:16:15.952286   74203 out.go:177] 
	W0731 18:16:15.953375   74203 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0731 18:16:15.953424   74203 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0731 18:16:15.953442   74203 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0731 18:16:15.954734   74203 out.go:177] 
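	(The suggestion above corresponds roughly to restarting the profile with the kubelet cgroup driver pinned to systemd; a minimal sketch, where <profile> is a placeholder and the driver/runtime/version flags are assumptions that should be matched to the failing run.)
	
		minikube delete -p <profile>
		minikube start -p <profile> --driver=kvm2 --container-runtime=crio \
		  --kubernetes-version=v1.20.0 \
		  --extra-config=kubelet.cgroup-driver=systemd
		# if it still fails, re-check the kubelet journal as suggested:
		minikube ssh -p <profile> "sudo journalctl -xeu kubelet | tail -n 100"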
	
	
	==> CRI-O <==
	Jul 31 18:22:15 no-preload-673754 crio[725]: time="2024-07-31 18:22:15.172855620Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722450135172834407,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100741,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=ae4c3d4b-8977-4d7b-bdd1-ce1a7e9267c1 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 31 18:22:15 no-preload-673754 crio[725]: time="2024-07-31 18:22:15.173478767Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=2c2a4f11-ebff-4470-80a7-e376f44350b9 name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 18:22:15 no-preload-673754 crio[725]: time="2024-07-31 18:22:15.173551416Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=2c2a4f11-ebff-4470-80a7-e376f44350b9 name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 18:22:15 no-preload-673754 crio[725]: time="2024-07-31 18:22:15.173771749Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:6f311536202ba854461dd5abcc13c5b8d006399426b52e0c8d10bc41f06826df,PodSandboxId:9d05560d6ac01e4f56f1a23817c448011c5bc8005de8e390585dadba4a1cb1cd,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722449354986340172,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a9f0ab70-18f4-4fac-b858-a9177077fe29,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f043eb2392c22ee26a4729969283f278a4dc7ff4784addf7139958beb2cba855,PodSandboxId:ca80c1ffca60f9300fb9fad844ba123990f8649a5d487fa47df8efae1ff0aaa9,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722449339791332921,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5cfdc65f69-k7clq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e4be77b6-aa7c-45e2-90a1-6a8264fd5101,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP
\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e3003a51718276ef12910fdbe77c2a44c14eca900a048fd6d388627fd4aeea1d,PodSandboxId:59479d972775d88e17a39762b70c70e8889bc4913d534d97e654cdbf903fd9c8,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1722449334714011386,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernet
es.pod.uid: da5fdf31-6093-4e84-baf1-ff5285f9798f,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:57bdb8e09be40a0925d7a86eb613ef8a0455415666562743e36e6800b8636847,PodSandboxId:415770964bcb598d4caf8eb7371cc11708e1adac5ff3e5ba8ed3b532b1f9b6f9,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899,State:CONTAINER_RUNNING,CreatedAt:1722449324168779807,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-hqxh6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4fb623c0-4be3-421c-85
d6-1d76a90b874f,},Annotations:map[string]string{io.kubernetes.container.hash: 65225ff2,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9ea2bc105f57ae4d05e4535f2851a35b9014c7574a63fef533fa0c487002941c,PodSandboxId:9d05560d6ac01e4f56f1a23817c448011c5bc8005de8e390585dadba4a1cb1cd,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1722449324149849313,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a9f0ab70-18f4-4fac-b858-a9177077fe
29,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ed1c40e21d8aa2b6321516625802154255f5eda1642411e377150b8df1c88bd2,PodSandboxId:ddc8c73facbad87561a8e880f7857f8f33d3918866604fd2cc4ef72036c0afdd,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b,State:CONTAINER_RUNNING,CreatedAt:1722449320465111782,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-673754,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: abd0f70c07cfe668120b96906d35f295,},Annotati
ons:map[string]string{io.kubernetes.container.hash: 9efbbee0,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ee75a53c57652705a335378f60ff96b2fce4543143edc35249ac027c7c6e4a2c,PodSandboxId:b298410c202be1742f59b59dd17a426655a9da051fb1b2502322a12a326678b2,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5,State:CONTAINER_RUNNING,CreatedAt:1722449320445352200,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-673754,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f3de91f645103a5672ba4430b5689
209,},Annotations:map[string]string{io.kubernetes.container.hash: ec666c98,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:65ef90d7b082adf059c9973e7f1dfde4770ed9718d440c8e0d525d427b16f645,PodSandboxId:008b6aaea7d653d06f0a754e6bb559d9a4add01beade6e6c6da02f6a1af8d7f2,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa,State:CONTAINER_RUNNING,CreatedAt:1722449320443397326,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-673754,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 949490960aadd80ed5a2380aa9b8c41d,},Annotations:map[string]string{io.kube
rnetes.container.hash: e06de91,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:895465d024797e36349af3a0f7cd0689ec7e813d811a3205a301baae081986e0,PodSandboxId:263f72a5df79e40b0bffef7cc10cdb04b0609fa0044305a7727953a383895951,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,State:CONTAINER_RUNNING,CreatedAt:1722449320394169910,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-673754,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f53bf54d3ee37ea0b48ca5572890ef53,},Annotations:map[string]string{io.kubernetes.contain
er.hash: ecb4da08,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=2c2a4f11-ebff-4470-80a7-e376f44350b9 name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 18:22:15 no-preload-673754 crio[725]: time="2024-07-31 18:22:15.213320029Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=bdf94575-6039-4a27-ab4e-d395cae2bca6 name=/runtime.v1.RuntimeService/Version
	Jul 31 18:22:15 no-preload-673754 crio[725]: time="2024-07-31 18:22:15.213428717Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=bdf94575-6039-4a27-ab4e-d395cae2bca6 name=/runtime.v1.RuntimeService/Version
	Jul 31 18:22:15 no-preload-673754 crio[725]: time="2024-07-31 18:22:15.214636257Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=7dd5b690-b56c-4c7a-aff7-a945f574fce8 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 31 18:22:15 no-preload-673754 crio[725]: time="2024-07-31 18:22:15.215062789Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722450135215038237,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100741,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=7dd5b690-b56c-4c7a-aff7-a945f574fce8 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 31 18:22:15 no-preload-673754 crio[725]: time="2024-07-31 18:22:15.215688177Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=90ebf9f4-4f67-45d3-94e6-94ceb7d09188 name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 18:22:15 no-preload-673754 crio[725]: time="2024-07-31 18:22:15.215739571Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=90ebf9f4-4f67-45d3-94e6-94ceb7d09188 name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 18:22:15 no-preload-673754 crio[725]: time="2024-07-31 18:22:15.215972823Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:6f311536202ba854461dd5abcc13c5b8d006399426b52e0c8d10bc41f06826df,PodSandboxId:9d05560d6ac01e4f56f1a23817c448011c5bc8005de8e390585dadba4a1cb1cd,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722449354986340172,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a9f0ab70-18f4-4fac-b858-a9177077fe29,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f043eb2392c22ee26a4729969283f278a4dc7ff4784addf7139958beb2cba855,PodSandboxId:ca80c1ffca60f9300fb9fad844ba123990f8649a5d487fa47df8efae1ff0aaa9,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722449339791332921,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5cfdc65f69-k7clq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e4be77b6-aa7c-45e2-90a1-6a8264fd5101,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP
\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e3003a51718276ef12910fdbe77c2a44c14eca900a048fd6d388627fd4aeea1d,PodSandboxId:59479d972775d88e17a39762b70c70e8889bc4913d534d97e654cdbf903fd9c8,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1722449334714011386,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernet
es.pod.uid: da5fdf31-6093-4e84-baf1-ff5285f9798f,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:57bdb8e09be40a0925d7a86eb613ef8a0455415666562743e36e6800b8636847,PodSandboxId:415770964bcb598d4caf8eb7371cc11708e1adac5ff3e5ba8ed3b532b1f9b6f9,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899,State:CONTAINER_RUNNING,CreatedAt:1722449324168779807,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-hqxh6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4fb623c0-4be3-421c-85
d6-1d76a90b874f,},Annotations:map[string]string{io.kubernetes.container.hash: 65225ff2,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9ea2bc105f57ae4d05e4535f2851a35b9014c7574a63fef533fa0c487002941c,PodSandboxId:9d05560d6ac01e4f56f1a23817c448011c5bc8005de8e390585dadba4a1cb1cd,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1722449324149849313,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a9f0ab70-18f4-4fac-b858-a9177077fe
29,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ed1c40e21d8aa2b6321516625802154255f5eda1642411e377150b8df1c88bd2,PodSandboxId:ddc8c73facbad87561a8e880f7857f8f33d3918866604fd2cc4ef72036c0afdd,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b,State:CONTAINER_RUNNING,CreatedAt:1722449320465111782,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-673754,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: abd0f70c07cfe668120b96906d35f295,},Annotati
ons:map[string]string{io.kubernetes.container.hash: 9efbbee0,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ee75a53c57652705a335378f60ff96b2fce4543143edc35249ac027c7c6e4a2c,PodSandboxId:b298410c202be1742f59b59dd17a426655a9da051fb1b2502322a12a326678b2,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5,State:CONTAINER_RUNNING,CreatedAt:1722449320445352200,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-673754,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f3de91f645103a5672ba4430b5689
209,},Annotations:map[string]string{io.kubernetes.container.hash: ec666c98,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:65ef90d7b082adf059c9973e7f1dfde4770ed9718d440c8e0d525d427b16f645,PodSandboxId:008b6aaea7d653d06f0a754e6bb559d9a4add01beade6e6c6da02f6a1af8d7f2,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa,State:CONTAINER_RUNNING,CreatedAt:1722449320443397326,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-673754,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 949490960aadd80ed5a2380aa9b8c41d,},Annotations:map[string]string{io.kube
rnetes.container.hash: e06de91,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:895465d024797e36349af3a0f7cd0689ec7e813d811a3205a301baae081986e0,PodSandboxId:263f72a5df79e40b0bffef7cc10cdb04b0609fa0044305a7727953a383895951,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,State:CONTAINER_RUNNING,CreatedAt:1722449320394169910,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-673754,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f53bf54d3ee37ea0b48ca5572890ef53,},Annotations:map[string]string{io.kubernetes.contain
er.hash: ecb4da08,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=90ebf9f4-4f67-45d3-94e6-94ceb7d09188 name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 18:22:15 no-preload-673754 crio[725]: time="2024-07-31 18:22:15.259649602Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=678c4e1c-521e-4246-b593-e38246f59280 name=/runtime.v1.RuntimeService/Version
	Jul 31 18:22:15 no-preload-673754 crio[725]: time="2024-07-31 18:22:15.259724974Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=678c4e1c-521e-4246-b593-e38246f59280 name=/runtime.v1.RuntimeService/Version
	Jul 31 18:22:15 no-preload-673754 crio[725]: time="2024-07-31 18:22:15.260970507Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=1482a3c9-9283-41db-9d00-9f1beb3a0c15 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 31 18:22:15 no-preload-673754 crio[725]: time="2024-07-31 18:22:15.261505478Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722450135261481461,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100741,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=1482a3c9-9283-41db-9d00-9f1beb3a0c15 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 31 18:22:15 no-preload-673754 crio[725]: time="2024-07-31 18:22:15.261985935Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=f9f19024-4c62-4741-b077-c74eb59a2ab5 name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 18:22:15 no-preload-673754 crio[725]: time="2024-07-31 18:22:15.262066330Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=f9f19024-4c62-4741-b077-c74eb59a2ab5 name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 18:22:15 no-preload-673754 crio[725]: time="2024-07-31 18:22:15.262345463Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:6f311536202ba854461dd5abcc13c5b8d006399426b52e0c8d10bc41f06826df,PodSandboxId:9d05560d6ac01e4f56f1a23817c448011c5bc8005de8e390585dadba4a1cb1cd,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722449354986340172,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a9f0ab70-18f4-4fac-b858-a9177077fe29,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f043eb2392c22ee26a4729969283f278a4dc7ff4784addf7139958beb2cba855,PodSandboxId:ca80c1ffca60f9300fb9fad844ba123990f8649a5d487fa47df8efae1ff0aaa9,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722449339791332921,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5cfdc65f69-k7clq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e4be77b6-aa7c-45e2-90a1-6a8264fd5101,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP
\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e3003a51718276ef12910fdbe77c2a44c14eca900a048fd6d388627fd4aeea1d,PodSandboxId:59479d972775d88e17a39762b70c70e8889bc4913d534d97e654cdbf903fd9c8,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1722449334714011386,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernet
es.pod.uid: da5fdf31-6093-4e84-baf1-ff5285f9798f,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:57bdb8e09be40a0925d7a86eb613ef8a0455415666562743e36e6800b8636847,PodSandboxId:415770964bcb598d4caf8eb7371cc11708e1adac5ff3e5ba8ed3b532b1f9b6f9,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899,State:CONTAINER_RUNNING,CreatedAt:1722449324168779807,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-hqxh6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4fb623c0-4be3-421c-85
d6-1d76a90b874f,},Annotations:map[string]string{io.kubernetes.container.hash: 65225ff2,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9ea2bc105f57ae4d05e4535f2851a35b9014c7574a63fef533fa0c487002941c,PodSandboxId:9d05560d6ac01e4f56f1a23817c448011c5bc8005de8e390585dadba4a1cb1cd,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1722449324149849313,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a9f0ab70-18f4-4fac-b858-a9177077fe
29,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ed1c40e21d8aa2b6321516625802154255f5eda1642411e377150b8df1c88bd2,PodSandboxId:ddc8c73facbad87561a8e880f7857f8f33d3918866604fd2cc4ef72036c0afdd,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b,State:CONTAINER_RUNNING,CreatedAt:1722449320465111782,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-673754,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: abd0f70c07cfe668120b96906d35f295,},Annotati
ons:map[string]string{io.kubernetes.container.hash: 9efbbee0,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ee75a53c57652705a335378f60ff96b2fce4543143edc35249ac027c7c6e4a2c,PodSandboxId:b298410c202be1742f59b59dd17a426655a9da051fb1b2502322a12a326678b2,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5,State:CONTAINER_RUNNING,CreatedAt:1722449320445352200,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-673754,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f3de91f645103a5672ba4430b5689
209,},Annotations:map[string]string{io.kubernetes.container.hash: ec666c98,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:65ef90d7b082adf059c9973e7f1dfde4770ed9718d440c8e0d525d427b16f645,PodSandboxId:008b6aaea7d653d06f0a754e6bb559d9a4add01beade6e6c6da02f6a1af8d7f2,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa,State:CONTAINER_RUNNING,CreatedAt:1722449320443397326,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-673754,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 949490960aadd80ed5a2380aa9b8c41d,},Annotations:map[string]string{io.kube
rnetes.container.hash: e06de91,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:895465d024797e36349af3a0f7cd0689ec7e813d811a3205a301baae081986e0,PodSandboxId:263f72a5df79e40b0bffef7cc10cdb04b0609fa0044305a7727953a383895951,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,State:CONTAINER_RUNNING,CreatedAt:1722449320394169910,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-673754,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f53bf54d3ee37ea0b48ca5572890ef53,},Annotations:map[string]string{io.kubernetes.contain
er.hash: ecb4da08,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=f9f19024-4c62-4741-b077-c74eb59a2ab5 name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 18:22:15 no-preload-673754 crio[725]: time="2024-07-31 18:22:15.292207854Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=47d1113d-182e-4b62-8b17-f49ceccb2889 name=/runtime.v1.RuntimeService/Version
	Jul 31 18:22:15 no-preload-673754 crio[725]: time="2024-07-31 18:22:15.292287055Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=47d1113d-182e-4b62-8b17-f49ceccb2889 name=/runtime.v1.RuntimeService/Version
	Jul 31 18:22:15 no-preload-673754 crio[725]: time="2024-07-31 18:22:15.293073459Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=25767056-75b0-4fe0-9a6b-4afde90af6ef name=/runtime.v1.ImageService/ImageFsInfo
	Jul 31 18:22:15 no-preload-673754 crio[725]: time="2024-07-31 18:22:15.293451225Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722450135293427283,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100741,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=25767056-75b0-4fe0-9a6b-4afde90af6ef name=/runtime.v1.ImageService/ImageFsInfo
	Jul 31 18:22:15 no-preload-673754 crio[725]: time="2024-07-31 18:22:15.293840330Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=5bd91200-3eeb-49d6-ab13-ac49381505b5 name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 18:22:15 no-preload-673754 crio[725]: time="2024-07-31 18:22:15.293914174Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=5bd91200-3eeb-49d6-ab13-ac49381505b5 name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 18:22:15 no-preload-673754 crio[725]: time="2024-07-31 18:22:15.294226801Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:6f311536202ba854461dd5abcc13c5b8d006399426b52e0c8d10bc41f06826df,PodSandboxId:9d05560d6ac01e4f56f1a23817c448011c5bc8005de8e390585dadba4a1cb1cd,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722449354986340172,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a9f0ab70-18f4-4fac-b858-a9177077fe29,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f043eb2392c22ee26a4729969283f278a4dc7ff4784addf7139958beb2cba855,PodSandboxId:ca80c1ffca60f9300fb9fad844ba123990f8649a5d487fa47df8efae1ff0aaa9,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722449339791332921,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5cfdc65f69-k7clq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e4be77b6-aa7c-45e2-90a1-6a8264fd5101,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP
\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e3003a51718276ef12910fdbe77c2a44c14eca900a048fd6d388627fd4aeea1d,PodSandboxId:59479d972775d88e17a39762b70c70e8889bc4913d534d97e654cdbf903fd9c8,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1722449334714011386,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernet
es.pod.uid: da5fdf31-6093-4e84-baf1-ff5285f9798f,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:57bdb8e09be40a0925d7a86eb613ef8a0455415666562743e36e6800b8636847,PodSandboxId:415770964bcb598d4caf8eb7371cc11708e1adac5ff3e5ba8ed3b532b1f9b6f9,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899,State:CONTAINER_RUNNING,CreatedAt:1722449324168779807,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-hqxh6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4fb623c0-4be3-421c-85
d6-1d76a90b874f,},Annotations:map[string]string{io.kubernetes.container.hash: 65225ff2,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9ea2bc105f57ae4d05e4535f2851a35b9014c7574a63fef533fa0c487002941c,PodSandboxId:9d05560d6ac01e4f56f1a23817c448011c5bc8005de8e390585dadba4a1cb1cd,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1722449324149849313,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a9f0ab70-18f4-4fac-b858-a9177077fe
29,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ed1c40e21d8aa2b6321516625802154255f5eda1642411e377150b8df1c88bd2,PodSandboxId:ddc8c73facbad87561a8e880f7857f8f33d3918866604fd2cc4ef72036c0afdd,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b,State:CONTAINER_RUNNING,CreatedAt:1722449320465111782,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-673754,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: abd0f70c07cfe668120b96906d35f295,},Annotati
ons:map[string]string{io.kubernetes.container.hash: 9efbbee0,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ee75a53c57652705a335378f60ff96b2fce4543143edc35249ac027c7c6e4a2c,PodSandboxId:b298410c202be1742f59b59dd17a426655a9da051fb1b2502322a12a326678b2,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5,State:CONTAINER_RUNNING,CreatedAt:1722449320445352200,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-673754,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f3de91f645103a5672ba4430b5689
209,},Annotations:map[string]string{io.kubernetes.container.hash: ec666c98,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:65ef90d7b082adf059c9973e7f1dfde4770ed9718d440c8e0d525d427b16f645,PodSandboxId:008b6aaea7d653d06f0a754e6bb559d9a4add01beade6e6c6da02f6a1af8d7f2,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa,State:CONTAINER_RUNNING,CreatedAt:1722449320443397326,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-673754,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 949490960aadd80ed5a2380aa9b8c41d,},Annotations:map[string]string{io.kube
rnetes.container.hash: e06de91,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:895465d024797e36349af3a0f7cd0689ec7e813d811a3205a301baae081986e0,PodSandboxId:263f72a5df79e40b0bffef7cc10cdb04b0609fa0044305a7727953a383895951,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,State:CONTAINER_RUNNING,CreatedAt:1722449320394169910,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-673754,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f53bf54d3ee37ea0b48ca5572890ef53,},Annotations:map[string]string{io.kubernetes.contain
er.hash: ecb4da08,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=5bd91200-3eeb-49d6-ab13-ac49381505b5 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	6f311536202ba       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      13 minutes ago      Running             storage-provisioner       3                   9d05560d6ac01       storage-provisioner
	f043eb2392c22       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      13 minutes ago      Running             coredns                   1                   ca80c1ffca60f       coredns-5cfdc65f69-k7clq
	e3003a5171827       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e   13 minutes ago      Running             busybox                   1                   59479d972775d       busybox
	57bdb8e09be40       c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899                                      13 minutes ago      Running             kube-proxy                1                   415770964bcb5       kube-proxy-hqxh6
	9ea2bc105f57a       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      13 minutes ago      Exited              storage-provisioner       2                   9d05560d6ac01       storage-provisioner
	ed1c40e21d8aa       d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b                                      13 minutes ago      Running             kube-scheduler            1                   ddc8c73facbad       kube-scheduler-no-preload-673754
	ee75a53c57652       63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5                                      13 minutes ago      Running             kube-controller-manager   1                   b298410c202be       kube-controller-manager-no-preload-673754
	65ef90d7b082a       cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa                                      13 minutes ago      Running             etcd                      1                   008b6aaea7d65       etcd-no-preload-673754
	895465d024797       f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938                                      13 minutes ago      Running             kube-apiserver            1                   263f72a5df79e       kube-apiserver-no-preload-673754
	
	
	==> coredns [f043eb2392c22ee26a4729969283f278a4dc7ff4784addf7139958beb2cba855] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 7996ca7cabdb2fd3e37b0463c78d5a492f8d30690ee66a90ae7ff24c50d9d936a24d239b3a5946771521ff70c09a796ffaf6ef8abe5753fd1ad5af38b6cdbb7f
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:53099 - 39463 "HINFO IN 4454942561105742238.88549225925472576. udp 55 false 512" NXDOMAIN qr,rd,ra 55 0.011770935s
	
	
	==> describe nodes <==
	Name:               no-preload-673754
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=no-preload-673754
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=1d737dad7efa60c56d30434fcd857dd3b14c91d9
	                    minikube.k8s.io/name=no-preload-673754
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_07_31T17_59_07_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 31 Jul 2024 17:59:02 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-673754
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 31 Jul 2024 18:22:09 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 31 Jul 2024 18:19:27 +0000   Wed, 31 Jul 2024 17:58:59 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 31 Jul 2024 18:19:27 +0000   Wed, 31 Jul 2024 17:58:59 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 31 Jul 2024 18:19:27 +0000   Wed, 31 Jul 2024 17:58:59 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 31 Jul 2024 18:19:27 +0000   Wed, 31 Jul 2024 18:08:53 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.61.126
	  Hostname:    no-preload-673754
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 492b651246f74aa8a677f18840110d78
	  System UUID:                492b6512-46f7-4aa8-a677-f18840110d78
	  Boot ID:                    123a56b1-98f1-4fc5-b8eb-293998eff487
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.0-beta.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         22m
	  kube-system                 coredns-5cfdc65f69-k7clq                     100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     23m
	  kube-system                 etcd-no-preload-673754                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         23m
	  kube-system                 kube-apiserver-no-preload-673754             250m (12%)    0 (0%)      0 (0%)           0 (0%)         23m
	  kube-system                 kube-controller-manager-no-preload-673754    200m (10%)    0 (0%)      0 (0%)           0 (0%)         23m
	  kube-system                 kube-proxy-hqxh6                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         23m
	  kube-system                 kube-scheduler-no-preload-673754             100m (5%)     0 (0%)      0 (0%)           0 (0%)         23m
	  kube-system                 metrics-server-78fcd8795b-27pkr              100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         22m
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         23m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             370Mi (17%)  170Mi (8%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 23m                kube-proxy       
	  Normal  Starting                 13m                kube-proxy       
	  Normal  NodeHasSufficientPID     23m (x7 over 23m)  kubelet          Node no-preload-673754 status is now: NodeHasSufficientPID
	  Normal  NodeHasNoDiskPressure    23m (x8 over 23m)  kubelet          Node no-preload-673754 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  23m (x8 over 23m)  kubelet          Node no-preload-673754 status is now: NodeHasSufficientMemory
	  Normal  NodeAllocatableEnforced  23m                kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 23m                kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  23m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  23m                kubelet          Node no-preload-673754 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    23m                kubelet          Node no-preload-673754 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     23m                kubelet          Node no-preload-673754 status is now: NodeHasSufficientPID
	  Normal  NodeReady                23m                kubelet          Node no-preload-673754 status is now: NodeReady
	  Normal  RegisteredNode           23m                node-controller  Node no-preload-673754 event: Registered Node no-preload-673754 in Controller
	  Normal  Starting                 13m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  13m (x8 over 13m)  kubelet          Node no-preload-673754 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    13m (x8 over 13m)  kubelet          Node no-preload-673754 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     13m (x7 over 13m)  kubelet          Node no-preload-673754 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  13m                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           13m                node-controller  Node no-preload-673754 event: Registered Node no-preload-673754 in Controller
	
	
	==> dmesg <==
	[Jul31 18:08] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.056591] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.042887] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +5.015025] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +1.876733] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.525125] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000013] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +8.683069] systemd-fstab-generator[639]: Ignoring "noauto" option for root device
	[  +0.058853] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.050846] systemd-fstab-generator[651]: Ignoring "noauto" option for root device
	[  +0.191576] systemd-fstab-generator[665]: Ignoring "noauto" option for root device
	[  +0.123429] systemd-fstab-generator[677]: Ignoring "noauto" option for root device
	[  +0.269806] systemd-fstab-generator[710]: Ignoring "noauto" option for root device
	[ +14.548804] systemd-fstab-generator[1179]: Ignoring "noauto" option for root device
	[  +0.057739] kauditd_printk_skb: 130 callbacks suppressed
	[  +2.263358] systemd-fstab-generator[1301]: Ignoring "noauto" option for root device
	[  +3.074634] kauditd_printk_skb: 97 callbacks suppressed
	[  +3.985444] systemd-fstab-generator[1931]: Ignoring "noauto" option for root device
	[  +1.452362] kauditd_printk_skb: 59 callbacks suppressed
	[  +6.728416] kauditd_printk_skb: 20 callbacks suppressed
	[  +5.069810] kauditd_printk_skb: 9 callbacks suppressed
	
	
	==> etcd [65ef90d7b082adf059c9973e7f1dfde4770ed9718d440c8e0d525d427b16f645] <==
	{"level":"info","ts":"2024-07-31T18:08:40.919476Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-07-31T18:08:40.925569Z","caller":"embed/etcd.go:727","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-07-31T18:08:40.925629Z","caller":"embed/etcd.go:598","msg":"serving peer traffic","address":"192.168.61.126:2380"}
	{"level":"info","ts":"2024-07-31T18:08:40.925643Z","caller":"embed/etcd.go:570","msg":"cmux::serve","address":"192.168.61.126:2380"}
	{"level":"info","ts":"2024-07-31T18:08:40.926046Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"2456aadc51424cb5","initial-advertise-peer-urls":["https://192.168.61.126:2380"],"listen-peer-urls":["https://192.168.61.126:2380"],"advertise-client-urls":["https://192.168.61.126:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.61.126:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-07-31T18:08:40.926088Z","caller":"embed/etcd.go:858","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-07-31T18:08:42.248853Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"2456aadc51424cb5 is starting a new election at term 2"}
	{"level":"info","ts":"2024-07-31T18:08:42.248916Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"2456aadc51424cb5 became pre-candidate at term 2"}
	{"level":"info","ts":"2024-07-31T18:08:42.248958Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"2456aadc51424cb5 received MsgPreVoteResp from 2456aadc51424cb5 at term 2"}
	{"level":"info","ts":"2024-07-31T18:08:42.248973Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"2456aadc51424cb5 became candidate at term 3"}
	{"level":"info","ts":"2024-07-31T18:08:42.248979Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"2456aadc51424cb5 received MsgVoteResp from 2456aadc51424cb5 at term 3"}
	{"level":"info","ts":"2024-07-31T18:08:42.248987Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"2456aadc51424cb5 became leader at term 3"}
	{"level":"info","ts":"2024-07-31T18:08:42.248993Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 2456aadc51424cb5 elected leader 2456aadc51424cb5 at term 3"}
	{"level":"info","ts":"2024-07-31T18:08:42.253539Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"2456aadc51424cb5","local-member-attributes":"{Name:no-preload-673754 ClientURLs:[https://192.168.61.126:2379]}","request-path":"/0/members/2456aadc51424cb5/attributes","cluster-id":"c6330389cea17d04","publish-timeout":"7s"}
	{"level":"info","ts":"2024-07-31T18:08:42.253559Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-07-31T18:08:42.253683Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-07-31T18:08:42.253936Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-07-31T18:08:42.253949Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-07-31T18:08:42.254849Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-07-31T18:08:42.254872Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-07-31T18:08:42.255841Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.61.126:2379"}
	{"level":"info","ts":"2024-07-31T18:08:42.256192Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-07-31T18:18:42.283805Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":877}
	{"level":"info","ts":"2024-07-31T18:18:42.294193Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":877,"took":"10.010743ms","hash":326909617,"current-db-size-bytes":2834432,"current-db-size":"2.8 MB","current-db-size-in-use-bytes":2834432,"current-db-size-in-use":"2.8 MB"}
	{"level":"info","ts":"2024-07-31T18:18:42.29425Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":326909617,"revision":877,"compact-revision":-1}
	
	
	==> kernel <==
	 18:22:15 up 14 min,  0 users,  load average: 0.02, 0.11, 0.09
	Linux no-preload-673754 5.10.207 #1 SMP Mon Jul 29 15:19:02 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [895465d024797e36349af3a0f7cd0689ec7e813d811a3205a301baae081986e0] <==
	W0731 18:18:44.563747       1 handler_proxy.go:99] no RequestInfo found in the context
	E0731 18:18:44.563817       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I0731 18:18:44.564996       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0731 18:18:44.565096       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0731 18:19:44.565961       1 handler_proxy.go:99] no RequestInfo found in the context
	W0731 18:19:44.566009       1 handler_proxy.go:99] no RequestInfo found in the context
	E0731 18:19:44.566270       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	E0731 18:19:44.566304       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I0731 18:19:44.568252       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0731 18:19:44.568330       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0731 18:21:44.568457       1 handler_proxy.go:99] no RequestInfo found in the context
	E0731 18:21:44.568579       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	W0731 18:21:44.568457       1 handler_proxy.go:99] no RequestInfo found in the context
	E0731 18:21:44.568661       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I0731 18:21:44.570545       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0731 18:21:44.570628       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	
	==> kube-controller-manager [ee75a53c57652705a335378f60ff96b2fce4543143edc35249ac027c7c6e4a2c] <==
	E0731 18:16:48.217284       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0731 18:16:48.330835       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0731 18:17:18.227048       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0731 18:17:18.339785       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0731 18:17:48.233451       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0731 18:17:48.347871       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0731 18:18:18.240679       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0731 18:18:18.355557       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0731 18:18:48.246493       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0731 18:18:48.363586       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0731 18:19:18.252764       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0731 18:19:18.372550       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0731 18:19:27.624189       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="no-preload-673754"
	E0731 18:19:48.259750       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0731 18:19:48.382733       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0731 18:19:48.815704       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-78fcd8795b" duration="280.579µs"
	I0731 18:20:03.815548       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-78fcd8795b" duration="175.206µs"
	E0731 18:20:18.266708       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0731 18:20:18.390687       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0731 18:20:48.273480       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0731 18:20:48.398918       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0731 18:21:18.279700       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0731 18:21:18.407298       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0731 18:21:48.286690       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0731 18:21:48.415988       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	
	==> kube-proxy [57bdb8e09be40a0925d7a86eb613ef8a0455415666562743e36e6800b8636847] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0731 18:08:44.346604       1 proxier.go:705] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0731 18:08:44.360572       1 server.go:682] "Successfully retrieved node IP(s)" IPs=["192.168.61.126"]
	E0731 18:08:44.360653       1 server.go:235] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0731 18:08:44.391327       1 server_linux.go:147] "No iptables support for family" ipFamily="IPv6"
	I0731 18:08:44.391401       1 server.go:246] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0731 18:08:44.391445       1 server_linux.go:170] "Using iptables Proxier"
	I0731 18:08:44.393636       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0731 18:08:44.393963       1 server.go:488] "Version info" version="v1.31.0-beta.0"
	I0731 18:08:44.394104       1 server.go:490] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0731 18:08:44.395646       1 config.go:197] "Starting service config controller"
	I0731 18:08:44.395811       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0731 18:08:44.395872       1 config.go:104] "Starting endpoint slice config controller"
	I0731 18:08:44.395900       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0731 18:08:44.396596       1 config.go:326] "Starting node config controller"
	I0731 18:08:44.396646       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0731 18:08:44.497174       1 shared_informer.go:320] Caches are synced for node config
	I0731 18:08:44.497258       1 shared_informer.go:320] Caches are synced for service config
	I0731 18:08:44.497282       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [ed1c40e21d8aa2b6321516625802154255f5eda1642411e377150b8df1c88bd2] <==
	I0731 18:08:41.422236       1 serving.go:386] Generated self-signed cert in-memory
	W0731 18:08:43.475175       1 requestheader_controller.go:196] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0731 18:08:43.475293       1 authentication.go:370] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0731 18:08:43.475325       1 authentication.go:371] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0731 18:08:43.475388       1 authentication.go:372] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0731 18:08:43.553655       1 server.go:164] "Starting Kubernetes Scheduler" version="v1.31.0-beta.0"
	I0731 18:08:43.553713       1 server.go:166] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0731 18:08:43.556108       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0731 18:08:43.558238       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0731 18:08:43.558272       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0731 18:08:43.558325       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0731 18:08:43.659235       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Jul 31 18:19:39 no-preload-673754 kubelet[1308]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 31 18:19:39 no-preload-673754 kubelet[1308]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 31 18:19:39 no-preload-673754 kubelet[1308]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 31 18:19:48 no-preload-673754 kubelet[1308]: E0731 18:19:48.797065    1308 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-78fcd8795b-27pkr" podUID="a63a156b-0446-4bd9-8619-de75edaeb481"
	Jul 31 18:20:03 no-preload-673754 kubelet[1308]: E0731 18:20:03.798940    1308 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-78fcd8795b-27pkr" podUID="a63a156b-0446-4bd9-8619-de75edaeb481"
	Jul 31 18:20:18 no-preload-673754 kubelet[1308]: E0731 18:20:18.795754    1308 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-78fcd8795b-27pkr" podUID="a63a156b-0446-4bd9-8619-de75edaeb481"
	Jul 31 18:20:30 no-preload-673754 kubelet[1308]: E0731 18:20:30.795855    1308 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-78fcd8795b-27pkr" podUID="a63a156b-0446-4bd9-8619-de75edaeb481"
	Jul 31 18:20:39 no-preload-673754 kubelet[1308]: E0731 18:20:39.820969    1308 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 31 18:20:39 no-preload-673754 kubelet[1308]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 31 18:20:39 no-preload-673754 kubelet[1308]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 31 18:20:39 no-preload-673754 kubelet[1308]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 31 18:20:39 no-preload-673754 kubelet[1308]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 31 18:20:41 no-preload-673754 kubelet[1308]: E0731 18:20:41.802767    1308 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-78fcd8795b-27pkr" podUID="a63a156b-0446-4bd9-8619-de75edaeb481"
	Jul 31 18:20:55 no-preload-673754 kubelet[1308]: E0731 18:20:55.797694    1308 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-78fcd8795b-27pkr" podUID="a63a156b-0446-4bd9-8619-de75edaeb481"
	Jul 31 18:21:06 no-preload-673754 kubelet[1308]: E0731 18:21:06.797304    1308 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-78fcd8795b-27pkr" podUID="a63a156b-0446-4bd9-8619-de75edaeb481"
	Jul 31 18:21:18 no-preload-673754 kubelet[1308]: E0731 18:21:18.796518    1308 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-78fcd8795b-27pkr" podUID="a63a156b-0446-4bd9-8619-de75edaeb481"
	Jul 31 18:21:31 no-preload-673754 kubelet[1308]: E0731 18:21:31.798007    1308 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-78fcd8795b-27pkr" podUID="a63a156b-0446-4bd9-8619-de75edaeb481"
	Jul 31 18:21:39 no-preload-673754 kubelet[1308]: E0731 18:21:39.823031    1308 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 31 18:21:39 no-preload-673754 kubelet[1308]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 31 18:21:39 no-preload-673754 kubelet[1308]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 31 18:21:39 no-preload-673754 kubelet[1308]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 31 18:21:39 no-preload-673754 kubelet[1308]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 31 18:21:44 no-preload-673754 kubelet[1308]: E0731 18:21:44.801887    1308 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-78fcd8795b-27pkr" podUID="a63a156b-0446-4bd9-8619-de75edaeb481"
	Jul 31 18:21:56 no-preload-673754 kubelet[1308]: E0731 18:21:56.797526    1308 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-78fcd8795b-27pkr" podUID="a63a156b-0446-4bd9-8619-de75edaeb481"
	Jul 31 18:22:07 no-preload-673754 kubelet[1308]: E0731 18:22:07.797218    1308 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-78fcd8795b-27pkr" podUID="a63a156b-0446-4bd9-8619-de75edaeb481"
	
	
	==> storage-provisioner [6f311536202ba854461dd5abcc13c5b8d006399426b52e0c8d10bc41f06826df] <==
	I0731 18:09:15.070617       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0731 18:09:15.082933       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0731 18:09:15.082991       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0731 18:09:32.482653       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0731 18:09:32.482927       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-673754_47dbf9a2-3899-425f-9beb-6ecf4e744290!
	I0731 18:09:32.483112       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"1d1d23c0-0b26-4c8b-b8c6-376b082cbdb2", APIVersion:"v1", ResourceVersion:"657", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-673754_47dbf9a2-3899-425f-9beb-6ecf4e744290 became leader
	I0731 18:09:32.584873       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-673754_47dbf9a2-3899-425f-9beb-6ecf4e744290!
	
	
	==> storage-provisioner [9ea2bc105f57ae4d05e4535f2851a35b9014c7574a63fef533fa0c487002941c] <==
	I0731 18:08:44.267233       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F0731 18:09:14.272114       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-673754 -n no-preload-673754
helpers_test.go:261: (dbg) Run:  kubectl --context no-preload-673754 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-78fcd8795b-27pkr
helpers_test.go:274: ======> post-mortem[TestStartStop/group/no-preload/serial/UserAppExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context no-preload-673754 describe pod metrics-server-78fcd8795b-27pkr
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context no-preload-673754 describe pod metrics-server-78fcd8795b-27pkr: exit status 1 (61.768987ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-78fcd8795b-27pkr" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context no-preload-673754 describe pod metrics-server-78fcd8795b-27pkr: exit status 1
--- FAIL: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (544.09s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (543.38s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.26:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.26:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.26:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.26:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.26:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.26:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.26:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.26:8443: connect: connection refused
E0731 18:16:23.406731   15259 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/kindnet-985288/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.26:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.26:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.26:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.26:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.26:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.26:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.26:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.26:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.26:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.26:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.26:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.26:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.26:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.26:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.26:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.26:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.26:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.26:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.26:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.26:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.26:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.26:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.26:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.26:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.26:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.26:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.26:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.26:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.26:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.26:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.26:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.26:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.26:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.26:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.26:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.26:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.26:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.26:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.26:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.26:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.26:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.26:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.26:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.26:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.26:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.26:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.26:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.26:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.26:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.26:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.26:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.26:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.26:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.26:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.26:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.26:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.26:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.26:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.26:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.26:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.26:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.26:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.26:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.26:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.26:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.26:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.26:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.26:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.26:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.26:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.26:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.26:8443: connect: connection refused
E0731 18:16:59.037563   15259 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/calico-985288/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.26:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.26:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.26:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.26:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.26:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.26:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.26:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.26:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.26:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.26:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.26:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.26:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.26:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.26:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.26:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.26:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.26:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.26:8443: connect: connection refused
E0731 18:17:08.393777   15259 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/addons-190022/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.26:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.26:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.26:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.26:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.26:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.26:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.26:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.26:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.26:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.26:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.26:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.26:8443: connect: connection refused
E0731 18:17:13.609304   15259 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/bridge-985288/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.26:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.26:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.26:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.26:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.26:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.26:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.26:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.26:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.26:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.26:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.26:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.26:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.26:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.26:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.26:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.26:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.26:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.26:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.26:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.26:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.26:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.26:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.26:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.26:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.26:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.26:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.26:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.26:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.26:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.26:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.26:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.26:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.26:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.26:8443: connect: connection refused
E0731 18:17:30.954543   15259 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/custom-flannel-985288/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.26:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.26:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.26:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.26:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.26:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.26:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.26:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.26:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.26:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.26:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.26:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.26:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.26:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.26:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.26:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.26:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.26:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.26:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.26:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.26:8443: connect: connection refused
E0731 18:17:40.752615   15259 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/flannel-985288/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.26:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.26:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.26:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.26:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.26:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.26:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.26:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.26:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.26:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.26:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.26:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.26:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.26:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.26:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.26:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.26:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.26:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.26:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.26:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.26:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.26:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.26:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.26:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.26:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.26:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.26:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.26:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.26:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.26:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.26:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.26:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.26:8443: connect: connection refused
E0731 18:17:57.005157   15259 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/functional-496242/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.26:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.26:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.26:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.26:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.26:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.26:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.26:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.26:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.26:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.26:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.26:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.26:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.26:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.26:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.26:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.26:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.26:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.26:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.26:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.26:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.26:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.26:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.26:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.26:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.26:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.26:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.26:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.26:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.26:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.26:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.26:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.26:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.26:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.26:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.26:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.26:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.26:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.26:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.26:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.26:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.26:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.26:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.26:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.26:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.26:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.26:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.26:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.26:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.26:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.26:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.26:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.26:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.26:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.26:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.26:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.26:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.26:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.26:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.26:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.26:8443: connect: connection refused
E0731 18:18:26.934318   15259 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/enable-default-cni-985288/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.26:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.26:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.26:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.26:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.26:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.26:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.26:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.26:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.26:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.26:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.26:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.26:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.26:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.26:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.26:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.26:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.26:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.26:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.26:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.26:8443: connect: connection refused
E0731 18:18:36.651684   15259 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/bridge-985288/client.crt: no such file or directory
(the same warning was logged another 27 times)
E0731 18:19:03.794453   15259 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/flannel-985288/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.26:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.26:8443: connect: connection refused
E0731 18:19:05.345867   15259 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/addons-190022/client.crt: no such file or directory
(the same warning was logged another 17 times)
E0731 18:19:21.705919   15259 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/auto-985288/client.crt: no such file or directory
(the same warning was logged another 28 times)
E0731 18:19:49.977496   15259 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/enable-default-cni-985288/client.crt: no such file or directory
(the same warning was logged another 10 times)
E0731 18:20:00.362422   15259 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/kindnet-985288/client.crt: no such file or directory
(the same warning was logged another 36 times)
E0731 18:20:35.992295   15259 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/calico-985288/client.crt: no such file or directory
(the same warning was logged another 32 times)
E0731 18:21:07.908672   15259 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/custom-flannel-985288/client.crt: no such file or directory
[the kubernetes-dashboard pod-list warning above repeated 66 more times; 192.168.39.26:8443 continued to refuse connections]
E0731 18:22:13.608546   15259 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/bridge-985288/client.crt: no such file or directory
[the same pod-list warning repeated another 27 times]
E0731 18:22:40.752838   15259 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/flannel-985288/client.crt: no such file or directory
[the same pod-list warning repeated another 16 times]
E0731 18:22:57.005288   15259 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/functional-496242/client.crt: no such file or directory
[the same pod-list warning repeated another 30 times]
E0731 18:23:26.934257   15259 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/enable-default-cni-985288/client.crt: no such file or directory
[the same pod-list warning repeated another 38 times]
E0731 18:24:05.346169   15259 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/addons-190022/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.26:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.26:8443: connect: connection refused
E0731 18:24:21.705741   15259 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/auto-985288/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.26:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.26:8443: connect: connection refused
E0731 18:25:00.361686   15259 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/kindnet-985288/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.26:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.26:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: rate: Wait(n=1) would exceed context deadline
start_stop_delete_test.go:274: ***** TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:274: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-276459 -n old-k8s-version-276459
start_stop_delete_test.go:274: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-276459 -n old-k8s-version-276459: exit status 2 (221.582498ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:274: status error: exit status 2 (may be ok)
start_stop_delete_test.go:274: "old-k8s-version-276459" apiserver is not running, skipping kubectl commands (state="Stopped")
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-276459 -n old-k8s-version-276459
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-276459 -n old-k8s-version-276459: exit status 2 (223.43861ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-276459 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p old-k8s-version-276459 logs -n 25: (1.549286686s)
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p enable-default-cni-985288                           | enable-default-cni-985288    | jenkins | v1.33.1 | 31 Jul 24 17:58 UTC | 31 Jul 24 17:58 UTC |
	|         | sudo cat                                               |                              |         |         |                     |                     |
	|         | /etc/containerd/config.toml                            |                              |         |         |                     |                     |
	| ssh     | -p enable-default-cni-985288                           | enable-default-cni-985288    | jenkins | v1.33.1 | 31 Jul 24 17:58 UTC | 31 Jul 24 17:58 UTC |
	|         | sudo containerd config dump                            |                              |         |         |                     |                     |
	| ssh     | -p enable-default-cni-985288                           | enable-default-cni-985288    | jenkins | v1.33.1 | 31 Jul 24 17:58 UTC | 31 Jul 24 17:58 UTC |
	|         | sudo systemctl status crio                             |                              |         |         |                     |                     |
	|         | --all --full --no-pager                                |                              |         |         |                     |                     |
	| ssh     | -p enable-default-cni-985288                           | enable-default-cni-985288    | jenkins | v1.33.1 | 31 Jul 24 17:58 UTC | 31 Jul 24 17:58 UTC |
	|         | sudo systemctl cat crio                                |                              |         |         |                     |                     |
	|         | --no-pager                                             |                              |         |         |                     |                     |
	| ssh     | -p enable-default-cni-985288                           | enable-default-cni-985288    | jenkins | v1.33.1 | 31 Jul 24 17:58 UTC | 31 Jul 24 17:58 UTC |
	|         | sudo find /etc/crio -type f                            |                              |         |         |                     |                     |
	|         | -exec sh -c 'echo {}; cat {}'                          |                              |         |         |                     |                     |
	|         | \;                                                     |                              |         |         |                     |                     |
	| ssh     | -p enable-default-cni-985288                           | enable-default-cni-985288    | jenkins | v1.33.1 | 31 Jul 24 17:58 UTC | 31 Jul 24 17:58 UTC |
	|         | sudo crio config                                       |                              |         |         |                     |                     |
	| delete  | -p enable-default-cni-985288                           | enable-default-cni-985288    | jenkins | v1.33.1 | 31 Jul 24 17:58 UTC | 31 Jul 24 17:58 UTC |
	| delete  | -p                                                     | disable-driver-mounts-280161 | jenkins | v1.33.1 | 31 Jul 24 17:58 UTC | 31 Jul 24 17:58 UTC |
	|         | disable-driver-mounts-280161                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-094310 | jenkins | v1.33.1 | 31 Jul 24 17:58 UTC | 31 Jul 24 18:00 UTC |
	|         | default-k8s-diff-port-094310                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-673754             | no-preload-673754            | jenkins | v1.33.1 | 31 Jul 24 17:59 UTC | 31 Jul 24 17:59 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-673754                                   | no-preload-673754            | jenkins | v1.33.1 | 31 Jul 24 17:59 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-094310  | default-k8s-diff-port-094310 | jenkins | v1.33.1 | 31 Jul 24 18:00 UTC | 31 Jul 24 18:00 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-094310 | jenkins | v1.33.1 | 31 Jul 24 18:00 UTC |                     |
	|         | default-k8s-diff-port-094310                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-436067            | embed-certs-436067           | jenkins | v1.33.1 | 31 Jul 24 18:00 UTC | 31 Jul 24 18:00 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-436067                                  | embed-certs-436067           | jenkins | v1.33.1 | 31 Jul 24 18:00 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-276459        | old-k8s-version-276459       | jenkins | v1.33.1 | 31 Jul 24 18:02 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-673754                  | no-preload-673754            | jenkins | v1.33.1 | 31 Jul 24 18:02 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-673754 --memory=2200                     | no-preload-673754            | jenkins | v1.33.1 | 31 Jul 24 18:02 UTC | 31 Jul 24 18:13 UTC |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0-beta.0                    |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-094310       | default-k8s-diff-port-094310 | jenkins | v1.33.1 | 31 Jul 24 18:02 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-436067                 | embed-certs-436067           | jenkins | v1.33.1 | 31 Jul 24 18:02 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-094310 | jenkins | v1.33.1 | 31 Jul 24 18:02 UTC | 31 Jul 24 18:12 UTC |
	|         | default-k8s-diff-port-094310                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3                           |                              |         |         |                     |                     |
	| start   | -p embed-certs-436067                                  | embed-certs-436067           | jenkins | v1.33.1 | 31 Jul 24 18:02 UTC | 31 Jul 24 18:12 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3                           |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-276459                              | old-k8s-version-276459       | jenkins | v1.33.1 | 31 Jul 24 18:03 UTC | 31 Jul 24 18:03 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-276459             | old-k8s-version-276459       | jenkins | v1.33.1 | 31 Jul 24 18:03 UTC | 31 Jul 24 18:03 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-276459                              | old-k8s-version-276459       | jenkins | v1.33.1 | 31 Jul 24 18:03 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/31 18:03:55
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0731 18:03:55.344211   74203 out.go:291] Setting OutFile to fd 1 ...
	I0731 18:03:55.344313   74203 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 18:03:55.344321   74203 out.go:304] Setting ErrFile to fd 2...
	I0731 18:03:55.344324   74203 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 18:03:55.344541   74203 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19349-8084/.minikube/bin
	I0731 18:03:55.345055   74203 out.go:298] Setting JSON to false
	I0731 18:03:55.345905   74203 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":6379,"bootTime":1722442656,"procs":196,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1065-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0731 18:03:55.345962   74203 start.go:139] virtualization: kvm guest
	I0731 18:03:55.347848   74203 out.go:177] * [old-k8s-version-276459] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0731 18:03:55.349045   74203 notify.go:220] Checking for updates...
	I0731 18:03:55.349052   74203 out.go:177]   - MINIKUBE_LOCATION=19349
	I0731 18:03:55.350359   74203 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0731 18:03:55.351583   74203 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19349-8084/kubeconfig
	I0731 18:03:55.352789   74203 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19349-8084/.minikube
	I0731 18:03:55.354046   74203 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0731 18:03:55.355244   74203 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0731 18:03:55.356819   74203 config.go:182] Loaded profile config "old-k8s-version-276459": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0731 18:03:55.357218   74203 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 18:03:55.357268   74203 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 18:03:55.372081   74203 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38815
	I0731 18:03:55.372424   74203 main.go:141] libmachine: () Calling .GetVersion
	I0731 18:03:55.372950   74203 main.go:141] libmachine: Using API Version  1
	I0731 18:03:55.372972   74203 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 18:03:55.373263   74203 main.go:141] libmachine: () Calling .GetMachineName
	I0731 18:03:55.373466   74203 main.go:141] libmachine: (old-k8s-version-276459) Calling .DriverName
	I0731 18:03:55.375198   74203 out.go:177] * Kubernetes 1.30.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.30.3
	I0731 18:03:55.376370   74203 driver.go:392] Setting default libvirt URI to qemu:///system
	I0731 18:03:55.376714   74203 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 18:03:55.376748   74203 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 18:03:55.390924   74203 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46311
	I0731 18:03:55.391380   74203 main.go:141] libmachine: () Calling .GetVersion
	I0731 18:03:55.391853   74203 main.go:141] libmachine: Using API Version  1
	I0731 18:03:55.391875   74203 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 18:03:55.392165   74203 main.go:141] libmachine: () Calling .GetMachineName
	I0731 18:03:55.392389   74203 main.go:141] libmachine: (old-k8s-version-276459) Calling .DriverName
	I0731 18:03:55.425283   74203 out.go:177] * Using the kvm2 driver based on existing profile
	I0731 18:03:55.426485   74203 start.go:297] selected driver: kvm2
	I0731 18:03:55.426517   74203 start.go:901] validating driver "kvm2" against &{Name:old-k8s-version-276459 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{K
ubernetesVersion:v1.20.0 ClusterName:old-k8s-version-276459 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.26 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280
h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0731 18:03:55.426632   74203 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0731 18:03:55.427322   74203 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0731 18:03:55.427419   74203 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19349-8084/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0731 18:03:55.441518   74203 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0731 18:03:55.441891   74203 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0731 18:03:55.441921   74203 cni.go:84] Creating CNI manager for ""
	I0731 18:03:55.441928   74203 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0731 18:03:55.441970   74203 start.go:340] cluster config:
	{Name:old-k8s-version-276459 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-276459 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.26 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0731 18:03:55.442088   74203 iso.go:125] acquiring lock: {Name:mk4494bb5a63902bf12a909c3109db85229bc516 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0731 18:03:55.443745   74203 out.go:177] * Starting "old-k8s-version-276459" primary control-plane node in "old-k8s-version-276459" cluster
	I0731 18:03:55.299338   73479 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.126:22: connect: no route to host
	I0731 18:03:55.445026   74203 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0731 18:03:55.445062   74203 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19349-8084/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0731 18:03:55.445085   74203 cache.go:56] Caching tarball of preloaded images
	I0731 18:03:55.445157   74203 preload.go:172] Found /home/jenkins/minikube-integration/19349-8084/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0731 18:03:55.445167   74203 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0731 18:03:55.445250   74203 profile.go:143] Saving config to /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/old-k8s-version-276459/config.json ...
	I0731 18:03:55.445412   74203 start.go:360] acquireMachinesLock for old-k8s-version-276459: {Name:mkd78eacfab7d6c3058c6674434b3d889ec957e0 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0731 18:03:58.371340   73479 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.126:22: connect: no route to host
	I0731 18:04:04.451379   73479 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.126:22: connect: no route to host
	I0731 18:04:07.523408   73479 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.126:22: connect: no route to host
	I0731 18:04:13.603407   73479 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.126:22: connect: no route to host
	I0731 18:04:16.675437   73479 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.126:22: connect: no route to host
	I0731 18:04:22.755418   73479 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.126:22: connect: no route to host
	I0731 18:04:25.827434   73479 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.126:22: connect: no route to host
	I0731 18:04:31.907379   73479 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.126:22: connect: no route to host
	I0731 18:04:34.979426   73479 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.126:22: connect: no route to host
	I0731 18:04:41.059417   73479 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.126:22: connect: no route to host
	I0731 18:04:44.131434   73479 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.126:22: connect: no route to host
	I0731 18:04:50.211391   73479 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.126:22: connect: no route to host
	I0731 18:04:53.283445   73479 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.126:22: connect: no route to host
	I0731 18:04:59.363428   73479 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.126:22: connect: no route to host
	I0731 18:05:02.435450   73479 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.126:22: connect: no route to host
	I0731 18:05:08.515394   73479 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.126:22: connect: no route to host
	I0731 18:05:11.587394   73479 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.126:22: connect: no route to host
	I0731 18:05:17.667388   73479 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.126:22: connect: no route to host
	I0731 18:05:20.739413   73479 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.126:22: connect: no route to host
	I0731 18:05:26.819368   73479 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.126:22: connect: no route to host
	I0731 18:05:29.891394   73479 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.126:22: connect: no route to host
	I0731 18:05:35.971391   73479 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.126:22: connect: no route to host
	I0731 18:05:39.043445   73479 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.126:22: connect: no route to host
	I0731 18:05:45.123378   73479 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.126:22: connect: no route to host
	I0731 18:05:48.195378   73479 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.126:22: connect: no route to host
	I0731 18:05:54.275417   73479 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.126:22: connect: no route to host
	I0731 18:05:57.347374   73479 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.126:22: connect: no route to host
	I0731 18:06:03.427390   73479 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.126:22: connect: no route to host
	I0731 18:06:06.499378   73479 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.126:22: connect: no route to host
	I0731 18:06:12.579395   73479 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.126:22: connect: no route to host
	I0731 18:06:15.651447   73479 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.126:22: connect: no route to host
	I0731 18:06:21.731394   73479 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.126:22: connect: no route to host
	I0731 18:06:24.803405   73479 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.126:22: connect: no route to host
	I0731 18:06:30.883468   73479 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.126:22: connect: no route to host
	I0731 18:06:33.955397   73479 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.126:22: connect: no route to host
	I0731 18:06:40.035387   73479 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.126:22: connect: no route to host
	I0731 18:06:43.107448   73479 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.126:22: connect: no route to host
	I0731 18:06:49.187413   73479 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.126:22: connect: no route to host
	I0731 18:06:52.259420   73479 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.126:22: connect: no route to host
	I0731 18:06:58.339413   73479 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.126:22: connect: no route to host
	I0731 18:07:01.411396   73479 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.126:22: connect: no route to host
	I0731 18:07:04.416121   73696 start.go:364] duration metric: took 4m18.256589549s to acquireMachinesLock for "default-k8s-diff-port-094310"
	I0731 18:07:04.416183   73696 start.go:96] Skipping create...Using existing machine configuration
	I0731 18:07:04.416192   73696 fix.go:54] fixHost starting: 
	I0731 18:07:04.416522   73696 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 18:07:04.416570   73696 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 18:07:04.432249   73696 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32817
	I0731 18:07:04.432715   73696 main.go:141] libmachine: () Calling .GetVersion
	I0731 18:07:04.433206   73696 main.go:141] libmachine: Using API Version  1
	I0731 18:07:04.433234   73696 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 18:07:04.433616   73696 main.go:141] libmachine: () Calling .GetMachineName
	I0731 18:07:04.433833   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) Calling .DriverName
	I0731 18:07:04.434001   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) Calling .GetState
	I0731 18:07:04.436061   73696 fix.go:112] recreateIfNeeded on default-k8s-diff-port-094310: state=Stopped err=<nil>
	I0731 18:07:04.436082   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) Calling .DriverName
	W0731 18:07:04.436241   73696 fix.go:138] unexpected machine state, will restart: <nil>
	I0731 18:07:04.438139   73696 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-094310" ...
	I0731 18:07:04.439463   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) Calling .Start
	I0731 18:07:04.439678   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) Ensuring networks are active...
	I0731 18:07:04.440645   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) Ensuring network default is active
	I0731 18:07:04.441067   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) Ensuring network mk-default-k8s-diff-port-094310 is active
	I0731 18:07:04.441473   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) Getting domain xml...
	I0731 18:07:04.442331   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) Creating domain...
	I0731 18:07:05.660745   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) Waiting to get IP...
	I0731 18:07:05.661963   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) DBG | domain default-k8s-diff-port-094310 has defined MAC address 52:54:00:a9:b2:ae in network mk-default-k8s-diff-port-094310
	I0731 18:07:05.662532   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) DBG | unable to find current IP address of domain default-k8s-diff-port-094310 in network mk-default-k8s-diff-port-094310
	I0731 18:07:05.662620   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) DBG | I0731 18:07:05.662524   74854 retry.go:31] will retry after 294.438382ms: waiting for machine to come up
	I0731 18:07:05.959200   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) DBG | domain default-k8s-diff-port-094310 has defined MAC address 52:54:00:a9:b2:ae in network mk-default-k8s-diff-port-094310
	I0731 18:07:05.959668   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) DBG | unable to find current IP address of domain default-k8s-diff-port-094310 in network mk-default-k8s-diff-port-094310
	I0731 18:07:05.959699   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) DBG | I0731 18:07:05.959619   74854 retry.go:31] will retry after 331.316387ms: waiting for machine to come up
	I0731 18:07:04.413166   73479 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0731 18:07:04.413216   73479 main.go:141] libmachine: (no-preload-673754) Calling .GetMachineName
	I0731 18:07:04.413580   73479 buildroot.go:166] provisioning hostname "no-preload-673754"
	I0731 18:07:04.413609   73479 main.go:141] libmachine: (no-preload-673754) Calling .GetMachineName
	I0731 18:07:04.413827   73479 main.go:141] libmachine: (no-preload-673754) Calling .GetSSHHostname
	I0731 18:07:04.415964   73479 machine.go:97] duration metric: took 4m37.431900974s to provisionDockerMachine
	I0731 18:07:04.416013   73479 fix.go:56] duration metric: took 4m37.452176305s for fixHost
	I0731 18:07:04.416023   73479 start.go:83] releasing machines lock for "no-preload-673754", held for 4m37.452227129s
	W0731 18:07:04.416048   73479 start.go:714] error starting host: provision: host is not running
	W0731 18:07:04.416143   73479 out.go:239] ! StartHost failed, but will try again: provision: host is not running
	I0731 18:07:04.416157   73479 start.go:729] Will try again in 5 seconds ...
	I0731 18:07:06.292146   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) DBG | domain default-k8s-diff-port-094310 has defined MAC address 52:54:00:a9:b2:ae in network mk-default-k8s-diff-port-094310
	I0731 18:07:06.292533   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) DBG | unable to find current IP address of domain default-k8s-diff-port-094310 in network mk-default-k8s-diff-port-094310
	I0731 18:07:06.292555   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) DBG | I0731 18:07:06.292487   74854 retry.go:31] will retry after 324.512889ms: waiting for machine to come up
	I0731 18:07:06.619045   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) DBG | domain default-k8s-diff-port-094310 has defined MAC address 52:54:00:a9:b2:ae in network mk-default-k8s-diff-port-094310
	I0731 18:07:06.619440   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) DBG | unable to find current IP address of domain default-k8s-diff-port-094310 in network mk-default-k8s-diff-port-094310
	I0731 18:07:06.619470   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) DBG | I0731 18:07:06.619404   74854 retry.go:31] will retry after 556.332506ms: waiting for machine to come up
	I0731 18:07:07.177224   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) DBG | domain default-k8s-diff-port-094310 has defined MAC address 52:54:00:a9:b2:ae in network mk-default-k8s-diff-port-094310
	I0731 18:07:07.177689   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) DBG | unable to find current IP address of domain default-k8s-diff-port-094310 in network mk-default-k8s-diff-port-094310
	I0731 18:07:07.177722   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) DBG | I0731 18:07:07.177631   74854 retry.go:31] will retry after 599.567638ms: waiting for machine to come up
	I0731 18:07:07.778444   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) DBG | domain default-k8s-diff-port-094310 has defined MAC address 52:54:00:a9:b2:ae in network mk-default-k8s-diff-port-094310
	I0731 18:07:07.778848   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) DBG | unable to find current IP address of domain default-k8s-diff-port-094310 in network mk-default-k8s-diff-port-094310
	I0731 18:07:07.778885   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) DBG | I0731 18:07:07.778820   74854 retry.go:31] will retry after 944.17246ms: waiting for machine to come up
	I0731 18:07:08.724983   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) DBG | domain default-k8s-diff-port-094310 has defined MAC address 52:54:00:a9:b2:ae in network mk-default-k8s-diff-port-094310
	I0731 18:07:08.725484   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) DBG | unable to find current IP address of domain default-k8s-diff-port-094310 in network mk-default-k8s-diff-port-094310
	I0731 18:07:08.725512   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) DBG | I0731 18:07:08.725433   74854 retry.go:31] will retry after 1.077726279s: waiting for machine to come up
	I0731 18:07:09.805196   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) DBG | domain default-k8s-diff-port-094310 has defined MAC address 52:54:00:a9:b2:ae in network mk-default-k8s-diff-port-094310
	I0731 18:07:09.805629   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) DBG | unable to find current IP address of domain default-k8s-diff-port-094310 in network mk-default-k8s-diff-port-094310
	I0731 18:07:09.805667   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) DBG | I0731 18:07:09.805575   74854 retry.go:31] will retry after 1.140059854s: waiting for machine to come up
	I0731 18:07:10.951633   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) DBG | domain default-k8s-diff-port-094310 has defined MAC address 52:54:00:a9:b2:ae in network mk-default-k8s-diff-port-094310
	I0731 18:07:10.952066   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) DBG | unable to find current IP address of domain default-k8s-diff-port-094310 in network mk-default-k8s-diff-port-094310
	I0731 18:07:10.952091   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) DBG | I0731 18:07:10.952028   74854 retry.go:31] will retry after 1.691707383s: waiting for machine to come up
	I0731 18:07:09.418606   73479 start.go:360] acquireMachinesLock for no-preload-673754: {Name:mkd78eacfab7d6c3058c6674434b3d889ec957e0 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0731 18:07:12.645970   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) DBG | domain default-k8s-diff-port-094310 has defined MAC address 52:54:00:a9:b2:ae in network mk-default-k8s-diff-port-094310
	I0731 18:07:12.646588   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) DBG | unable to find current IP address of domain default-k8s-diff-port-094310 in network mk-default-k8s-diff-port-094310
	I0731 18:07:12.646623   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) DBG | I0731 18:07:12.646525   74854 retry.go:31] will retry after 2.257630784s: waiting for machine to come up
	I0731 18:07:14.905494   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) DBG | domain default-k8s-diff-port-094310 has defined MAC address 52:54:00:a9:b2:ae in network mk-default-k8s-diff-port-094310
	I0731 18:07:14.905891   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) DBG | unable to find current IP address of domain default-k8s-diff-port-094310 in network mk-default-k8s-diff-port-094310
	I0731 18:07:14.905922   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) DBG | I0731 18:07:14.905833   74854 retry.go:31] will retry after 2.877713561s: waiting for machine to come up
	I0731 18:07:17.786797   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) DBG | domain default-k8s-diff-port-094310 has defined MAC address 52:54:00:a9:b2:ae in network mk-default-k8s-diff-port-094310
	I0731 18:07:17.787173   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) DBG | unable to find current IP address of domain default-k8s-diff-port-094310 in network mk-default-k8s-diff-port-094310
	I0731 18:07:17.787194   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) DBG | I0731 18:07:17.787140   74854 retry.go:31] will retry after 3.028611559s: waiting for machine to come up
	I0731 18:07:20.817593   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) DBG | domain default-k8s-diff-port-094310 has defined MAC address 52:54:00:a9:b2:ae in network mk-default-k8s-diff-port-094310
	I0731 18:07:20.817898   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) Found IP for machine: 192.168.72.197
	I0731 18:07:20.817921   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) Reserving static IP address...
	I0731 18:07:20.817934   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) DBG | domain default-k8s-diff-port-094310 has current primary IP address 192.168.72.197 and MAC address 52:54:00:a9:b2:ae in network mk-default-k8s-diff-port-094310
	I0731 18:07:20.818352   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-094310", mac: "52:54:00:a9:b2:ae", ip: "192.168.72.197"} in network mk-default-k8s-diff-port-094310: {Iface:virbr1 ExpiryTime:2024-07-31 19:07:14 +0000 UTC Type:0 Mac:52:54:00:a9:b2:ae Iaid: IPaddr:192.168.72.197 Prefix:24 Hostname:default-k8s-diff-port-094310 Clientid:01:52:54:00:a9:b2:ae}
	I0731 18:07:20.818379   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) Reserved static IP address: 192.168.72.197
	I0731 18:07:20.818400   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) DBG | skip adding static IP to network mk-default-k8s-diff-port-094310 - found existing host DHCP lease matching {name: "default-k8s-diff-port-094310", mac: "52:54:00:a9:b2:ae", ip: "192.168.72.197"}
	I0731 18:07:20.818414   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) Waiting for SSH to be available...
	I0731 18:07:20.818431   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) DBG | Getting to WaitForSSH function...
	I0731 18:07:20.820417   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) DBG | domain default-k8s-diff-port-094310 has defined MAC address 52:54:00:a9:b2:ae in network mk-default-k8s-diff-port-094310
	I0731 18:07:20.820731   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a9:b2:ae", ip: ""} in network mk-default-k8s-diff-port-094310: {Iface:virbr1 ExpiryTime:2024-07-31 19:07:14 +0000 UTC Type:0 Mac:52:54:00:a9:b2:ae Iaid: IPaddr:192.168.72.197 Prefix:24 Hostname:default-k8s-diff-port-094310 Clientid:01:52:54:00:a9:b2:ae}
	I0731 18:07:20.820758   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) DBG | domain default-k8s-diff-port-094310 has defined IP address 192.168.72.197 and MAC address 52:54:00:a9:b2:ae in network mk-default-k8s-diff-port-094310
	I0731 18:07:20.820893   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) DBG | Using SSH client type: external
	I0731 18:07:20.820916   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) DBG | Using SSH private key: /home/jenkins/minikube-integration/19349-8084/.minikube/machines/default-k8s-diff-port-094310/id_rsa (-rw-------)
	I0731 18:07:20.820940   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.197 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19349-8084/.minikube/machines/default-k8s-diff-port-094310/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0731 18:07:20.820950   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) DBG | About to run SSH command:
	I0731 18:07:20.820959   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) DBG | exit 0
	I0731 18:07:20.943348   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) DBG | SSH cmd err, output: <nil>: 
	I0731 18:07:20.943708   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) Calling .GetConfigRaw
	I0731 18:07:20.944373   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) Calling .GetIP
	I0731 18:07:20.947080   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) DBG | domain default-k8s-diff-port-094310 has defined MAC address 52:54:00:a9:b2:ae in network mk-default-k8s-diff-port-094310
	I0731 18:07:20.947465   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a9:b2:ae", ip: ""} in network mk-default-k8s-diff-port-094310: {Iface:virbr1 ExpiryTime:2024-07-31 19:07:14 +0000 UTC Type:0 Mac:52:54:00:a9:b2:ae Iaid: IPaddr:192.168.72.197 Prefix:24 Hostname:default-k8s-diff-port-094310 Clientid:01:52:54:00:a9:b2:ae}
	I0731 18:07:20.947499   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) DBG | domain default-k8s-diff-port-094310 has defined IP address 192.168.72.197 and MAC address 52:54:00:a9:b2:ae in network mk-default-k8s-diff-port-094310
	I0731 18:07:20.947731   73696 profile.go:143] Saving config to /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/default-k8s-diff-port-094310/config.json ...
	I0731 18:07:20.947909   73696 machine.go:94] provisionDockerMachine start ...
	I0731 18:07:20.947926   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) Calling .DriverName
	I0731 18:07:20.948124   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) Calling .GetSSHHostname
	I0731 18:07:20.950698   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) DBG | domain default-k8s-diff-port-094310 has defined MAC address 52:54:00:a9:b2:ae in network mk-default-k8s-diff-port-094310
	I0731 18:07:20.951056   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a9:b2:ae", ip: ""} in network mk-default-k8s-diff-port-094310: {Iface:virbr1 ExpiryTime:2024-07-31 19:07:14 +0000 UTC Type:0 Mac:52:54:00:a9:b2:ae Iaid: IPaddr:192.168.72.197 Prefix:24 Hostname:default-k8s-diff-port-094310 Clientid:01:52:54:00:a9:b2:ae}
	I0731 18:07:20.951083   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) DBG | domain default-k8s-diff-port-094310 has defined IP address 192.168.72.197 and MAC address 52:54:00:a9:b2:ae in network mk-default-k8s-diff-port-094310
	I0731 18:07:20.951228   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) Calling .GetSSHPort
	I0731 18:07:20.951443   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) Calling .GetSSHKeyPath
	I0731 18:07:20.951608   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) Calling .GetSSHKeyPath
	I0731 18:07:20.951780   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) Calling .GetSSHUsername
	I0731 18:07:20.952016   73696 main.go:141] libmachine: Using SSH client type: native
	I0731 18:07:20.952208   73696 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.72.197 22 <nil> <nil>}
	I0731 18:07:20.952220   73696 main.go:141] libmachine: About to run SSH command:
	hostname
	I0731 18:07:21.051082   73696 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0731 18:07:21.051137   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) Calling .GetMachineName
	I0731 18:07:21.051424   73696 buildroot.go:166] provisioning hostname "default-k8s-diff-port-094310"
	I0731 18:07:21.051454   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) Calling .GetMachineName
	I0731 18:07:21.051650   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) Calling .GetSSHHostname
	I0731 18:07:21.054527   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) DBG | domain default-k8s-diff-port-094310 has defined MAC address 52:54:00:a9:b2:ae in network mk-default-k8s-diff-port-094310
	I0731 18:07:21.054913   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a9:b2:ae", ip: ""} in network mk-default-k8s-diff-port-094310: {Iface:virbr1 ExpiryTime:2024-07-31 19:07:14 +0000 UTC Type:0 Mac:52:54:00:a9:b2:ae Iaid: IPaddr:192.168.72.197 Prefix:24 Hostname:default-k8s-diff-port-094310 Clientid:01:52:54:00:a9:b2:ae}
	I0731 18:07:21.054940   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) DBG | domain default-k8s-diff-port-094310 has defined IP address 192.168.72.197 and MAC address 52:54:00:a9:b2:ae in network mk-default-k8s-diff-port-094310
	I0731 18:07:21.055151   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) Calling .GetSSHPort
	I0731 18:07:21.055377   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) Calling .GetSSHKeyPath
	I0731 18:07:21.055516   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) Calling .GetSSHKeyPath
	I0731 18:07:21.055670   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) Calling .GetSSHUsername
	I0731 18:07:21.055838   73696 main.go:141] libmachine: Using SSH client type: native
	I0731 18:07:21.056037   73696 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.72.197 22 <nil> <nil>}
	I0731 18:07:21.056051   73696 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-094310 && echo "default-k8s-diff-port-094310" | sudo tee /etc/hostname
	I0731 18:07:22.127802   73800 start.go:364] duration metric: took 4m27.5245732s to acquireMachinesLock for "embed-certs-436067"
	I0731 18:07:22.127861   73800 start.go:96] Skipping create...Using existing machine configuration
	I0731 18:07:22.127871   73800 fix.go:54] fixHost starting: 
	I0731 18:07:22.128296   73800 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 18:07:22.128386   73800 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 18:07:22.144783   73800 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34313
	I0731 18:07:22.145111   73800 main.go:141] libmachine: () Calling .GetVersion
	I0731 18:07:22.145531   73800 main.go:141] libmachine: Using API Version  1
	I0731 18:07:22.145549   73800 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 18:07:22.145894   73800 main.go:141] libmachine: () Calling .GetMachineName
	I0731 18:07:22.146086   73800 main.go:141] libmachine: (embed-certs-436067) Calling .DriverName
	I0731 18:07:22.146226   73800 main.go:141] libmachine: (embed-certs-436067) Calling .GetState
	I0731 18:07:22.147718   73800 fix.go:112] recreateIfNeeded on embed-certs-436067: state=Stopped err=<nil>
	I0731 18:07:22.147737   73800 main.go:141] libmachine: (embed-certs-436067) Calling .DriverName
	W0731 18:07:22.147878   73800 fix.go:138] unexpected machine state, will restart: <nil>
	I0731 18:07:22.149896   73800 out.go:177] * Restarting existing kvm2 VM for "embed-certs-436067" ...
	I0731 18:07:21.168797   73696 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-094310
	
	I0731 18:07:21.168828   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) Calling .GetSSHHostname
	I0731 18:07:21.171672   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) DBG | domain default-k8s-diff-port-094310 has defined MAC address 52:54:00:a9:b2:ae in network mk-default-k8s-diff-port-094310
	I0731 18:07:21.172012   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a9:b2:ae", ip: ""} in network mk-default-k8s-diff-port-094310: {Iface:virbr1 ExpiryTime:2024-07-31 19:07:14 +0000 UTC Type:0 Mac:52:54:00:a9:b2:ae Iaid: IPaddr:192.168.72.197 Prefix:24 Hostname:default-k8s-diff-port-094310 Clientid:01:52:54:00:a9:b2:ae}
	I0731 18:07:21.172043   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) DBG | domain default-k8s-diff-port-094310 has defined IP address 192.168.72.197 and MAC address 52:54:00:a9:b2:ae in network mk-default-k8s-diff-port-094310
	I0731 18:07:21.172183   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) Calling .GetSSHPort
	I0731 18:07:21.172351   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) Calling .GetSSHKeyPath
	I0731 18:07:21.172510   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) Calling .GetSSHKeyPath
	I0731 18:07:21.172633   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) Calling .GetSSHUsername
	I0731 18:07:21.172800   73696 main.go:141] libmachine: Using SSH client type: native
	I0731 18:07:21.172976   73696 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.72.197 22 <nil> <nil>}
	I0731 18:07:21.173010   73696 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-094310' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-094310/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-094310' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0731 18:07:21.284583   73696 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0731 18:07:21.284610   73696 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19349-8084/.minikube CaCertPath:/home/jenkins/minikube-integration/19349-8084/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19349-8084/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19349-8084/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19349-8084/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19349-8084/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19349-8084/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19349-8084/.minikube}
	I0731 18:07:21.284633   73696 buildroot.go:174] setting up certificates
	I0731 18:07:21.284645   73696 provision.go:84] configureAuth start
	I0731 18:07:21.284656   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) Calling .GetMachineName
	I0731 18:07:21.284931   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) Calling .GetIP
	I0731 18:07:21.287526   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) DBG | domain default-k8s-diff-port-094310 has defined MAC address 52:54:00:a9:b2:ae in network mk-default-k8s-diff-port-094310
	I0731 18:07:21.287945   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a9:b2:ae", ip: ""} in network mk-default-k8s-diff-port-094310: {Iface:virbr1 ExpiryTime:2024-07-31 19:07:14 +0000 UTC Type:0 Mac:52:54:00:a9:b2:ae Iaid: IPaddr:192.168.72.197 Prefix:24 Hostname:default-k8s-diff-port-094310 Clientid:01:52:54:00:a9:b2:ae}
	I0731 18:07:21.287973   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) DBG | domain default-k8s-diff-port-094310 has defined IP address 192.168.72.197 and MAC address 52:54:00:a9:b2:ae in network mk-default-k8s-diff-port-094310
	I0731 18:07:21.288161   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) Calling .GetSSHHostname
	I0731 18:07:21.290169   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) DBG | domain default-k8s-diff-port-094310 has defined MAC address 52:54:00:a9:b2:ae in network mk-default-k8s-diff-port-094310
	I0731 18:07:21.290469   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a9:b2:ae", ip: ""} in network mk-default-k8s-diff-port-094310: {Iface:virbr1 ExpiryTime:2024-07-31 19:07:14 +0000 UTC Type:0 Mac:52:54:00:a9:b2:ae Iaid: IPaddr:192.168.72.197 Prefix:24 Hostname:default-k8s-diff-port-094310 Clientid:01:52:54:00:a9:b2:ae}
	I0731 18:07:21.290495   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) DBG | domain default-k8s-diff-port-094310 has defined IP address 192.168.72.197 and MAC address 52:54:00:a9:b2:ae in network mk-default-k8s-diff-port-094310
	I0731 18:07:21.290602   73696 provision.go:143] copyHostCerts
	I0731 18:07:21.290661   73696 exec_runner.go:144] found /home/jenkins/minikube-integration/19349-8084/.minikube/ca.pem, removing ...
	I0731 18:07:21.290673   73696 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19349-8084/.minikube/ca.pem
	I0731 18:07:21.290757   73696 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19349-8084/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19349-8084/.minikube/ca.pem (1082 bytes)
	I0731 18:07:21.290844   73696 exec_runner.go:144] found /home/jenkins/minikube-integration/19349-8084/.minikube/cert.pem, removing ...
	I0731 18:07:21.290856   73696 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19349-8084/.minikube/cert.pem
	I0731 18:07:21.290881   73696 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19349-8084/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19349-8084/.minikube/cert.pem (1123 bytes)
	I0731 18:07:21.290933   73696 exec_runner.go:144] found /home/jenkins/minikube-integration/19349-8084/.minikube/key.pem, removing ...
	I0731 18:07:21.290939   73696 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19349-8084/.minikube/key.pem
	I0731 18:07:21.290959   73696 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19349-8084/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19349-8084/.minikube/key.pem (1675 bytes)
	I0731 18:07:21.291005   73696 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19349-8084/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19349-8084/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19349-8084/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-094310 san=[127.0.0.1 192.168.72.197 default-k8s-diff-port-094310 localhost minikube]
	I0731 18:07:21.483241   73696 provision.go:177] copyRemoteCerts
	I0731 18:07:21.483314   73696 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0731 18:07:21.483343   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) Calling .GetSSHHostname
	I0731 18:07:21.486231   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) DBG | domain default-k8s-diff-port-094310 has defined MAC address 52:54:00:a9:b2:ae in network mk-default-k8s-diff-port-094310
	I0731 18:07:21.486619   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a9:b2:ae", ip: ""} in network mk-default-k8s-diff-port-094310: {Iface:virbr1 ExpiryTime:2024-07-31 19:07:14 +0000 UTC Type:0 Mac:52:54:00:a9:b2:ae Iaid: IPaddr:192.168.72.197 Prefix:24 Hostname:default-k8s-diff-port-094310 Clientid:01:52:54:00:a9:b2:ae}
	I0731 18:07:21.486659   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) DBG | domain default-k8s-diff-port-094310 has defined IP address 192.168.72.197 and MAC address 52:54:00:a9:b2:ae in network mk-default-k8s-diff-port-094310
	I0731 18:07:21.486850   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) Calling .GetSSHPort
	I0731 18:07:21.487084   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) Calling .GetSSHKeyPath
	I0731 18:07:21.487285   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) Calling .GetSSHUsername
	I0731 18:07:21.487443   73696 sshutil.go:53] new ssh client: &{IP:192.168.72.197 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19349-8084/.minikube/machines/default-k8s-diff-port-094310/id_rsa Username:docker}
	I0731 18:07:21.568564   73696 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19349-8084/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0731 18:07:21.598766   73696 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19349-8084/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I0731 18:07:21.621602   73696 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19349-8084/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0731 18:07:21.643361   73696 provision.go:87] duration metric: took 358.702982ms to configureAuth
	I0731 18:07:21.643393   73696 buildroot.go:189] setting minikube options for container-runtime
	I0731 18:07:21.643598   73696 config.go:182] Loaded profile config "default-k8s-diff-port-094310": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0731 18:07:21.643699   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) Calling .GetSSHHostname
	I0731 18:07:21.646487   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) DBG | domain default-k8s-diff-port-094310 has defined MAC address 52:54:00:a9:b2:ae in network mk-default-k8s-diff-port-094310
	I0731 18:07:21.646921   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a9:b2:ae", ip: ""} in network mk-default-k8s-diff-port-094310: {Iface:virbr1 ExpiryTime:2024-07-31 19:07:14 +0000 UTC Type:0 Mac:52:54:00:a9:b2:ae Iaid: IPaddr:192.168.72.197 Prefix:24 Hostname:default-k8s-diff-port-094310 Clientid:01:52:54:00:a9:b2:ae}
	I0731 18:07:21.646967   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) DBG | domain default-k8s-diff-port-094310 has defined IP address 192.168.72.197 and MAC address 52:54:00:a9:b2:ae in network mk-default-k8s-diff-port-094310
	I0731 18:07:21.647126   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) Calling .GetSSHPort
	I0731 18:07:21.647331   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) Calling .GetSSHKeyPath
	I0731 18:07:21.647527   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) Calling .GetSSHKeyPath
	I0731 18:07:21.647675   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) Calling .GetSSHUsername
	I0731 18:07:21.647879   73696 main.go:141] libmachine: Using SSH client type: native
	I0731 18:07:21.648051   73696 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.72.197 22 <nil> <nil>}
	I0731 18:07:21.648066   73696 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0731 18:07:21.896109   73696 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0731 18:07:21.896138   73696 machine.go:97] duration metric: took 948.216479ms to provisionDockerMachine
	I0731 18:07:21.896152   73696 start.go:293] postStartSetup for "default-k8s-diff-port-094310" (driver="kvm2")
	I0731 18:07:21.896166   73696 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0731 18:07:21.896185   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) Calling .DriverName
	I0731 18:07:21.896500   73696 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0731 18:07:21.896533   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) Calling .GetSSHHostname
	I0731 18:07:21.899447   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) DBG | domain default-k8s-diff-port-094310 has defined MAC address 52:54:00:a9:b2:ae in network mk-default-k8s-diff-port-094310
	I0731 18:07:21.899784   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a9:b2:ae", ip: ""} in network mk-default-k8s-diff-port-094310: {Iface:virbr1 ExpiryTime:2024-07-31 19:07:14 +0000 UTC Type:0 Mac:52:54:00:a9:b2:ae Iaid: IPaddr:192.168.72.197 Prefix:24 Hostname:default-k8s-diff-port-094310 Clientid:01:52:54:00:a9:b2:ae}
	I0731 18:07:21.899817   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) DBG | domain default-k8s-diff-port-094310 has defined IP address 192.168.72.197 and MAC address 52:54:00:a9:b2:ae in network mk-default-k8s-diff-port-094310
	I0731 18:07:21.899936   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) Calling .GetSSHPort
	I0731 18:07:21.900136   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) Calling .GetSSHKeyPath
	I0731 18:07:21.900268   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) Calling .GetSSHUsername
	I0731 18:07:21.900415   73696 sshutil.go:53] new ssh client: &{IP:192.168.72.197 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19349-8084/.minikube/machines/default-k8s-diff-port-094310/id_rsa Username:docker}
	I0731 18:07:21.981347   73696 ssh_runner.go:195] Run: cat /etc/os-release
	I0731 18:07:21.985297   73696 info.go:137] Remote host: Buildroot 2023.02.9
	I0731 18:07:21.985324   73696 filesync.go:126] Scanning /home/jenkins/minikube-integration/19349-8084/.minikube/addons for local assets ...
	I0731 18:07:21.985397   73696 filesync.go:126] Scanning /home/jenkins/minikube-integration/19349-8084/.minikube/files for local assets ...
	I0731 18:07:21.985513   73696 filesync.go:149] local asset: /home/jenkins/minikube-integration/19349-8084/.minikube/files/etc/ssl/certs/152592.pem -> 152592.pem in /etc/ssl/certs
	I0731 18:07:21.985646   73696 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0731 18:07:21.994700   73696 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19349-8084/.minikube/files/etc/ssl/certs/152592.pem --> /etc/ssl/certs/152592.pem (1708 bytes)
	I0731 18:07:22.022005   73696 start.go:296] duration metric: took 125.838186ms for postStartSetup
	I0731 18:07:22.022052   73696 fix.go:56] duration metric: took 17.605858897s for fixHost
	I0731 18:07:22.022075   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) Calling .GetSSHHostname
	I0731 18:07:22.025151   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) DBG | domain default-k8s-diff-port-094310 has defined MAC address 52:54:00:a9:b2:ae in network mk-default-k8s-diff-port-094310
	I0731 18:07:22.025445   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a9:b2:ae", ip: ""} in network mk-default-k8s-diff-port-094310: {Iface:virbr1 ExpiryTime:2024-07-31 19:07:14 +0000 UTC Type:0 Mac:52:54:00:a9:b2:ae Iaid: IPaddr:192.168.72.197 Prefix:24 Hostname:default-k8s-diff-port-094310 Clientid:01:52:54:00:a9:b2:ae}
	I0731 18:07:22.025478   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) DBG | domain default-k8s-diff-port-094310 has defined IP address 192.168.72.197 and MAC address 52:54:00:a9:b2:ae in network mk-default-k8s-diff-port-094310
	I0731 18:07:22.025622   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) Calling .GetSSHPort
	I0731 18:07:22.025829   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) Calling .GetSSHKeyPath
	I0731 18:07:22.026023   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) Calling .GetSSHKeyPath
	I0731 18:07:22.026199   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) Calling .GetSSHUsername
	I0731 18:07:22.026390   73696 main.go:141] libmachine: Using SSH client type: native
	I0731 18:07:22.026632   73696 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.72.197 22 <nil> <nil>}
	I0731 18:07:22.026653   73696 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0731 18:07:22.127643   73696 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722449242.103036947
	
	I0731 18:07:22.127668   73696 fix.go:216] guest clock: 1722449242.103036947
	I0731 18:07:22.127675   73696 fix.go:229] Guest: 2024-07-31 18:07:22.103036947 +0000 UTC Remote: 2024-07-31 18:07:22.022056299 +0000 UTC m=+275.995802468 (delta=80.980648ms)
	I0731 18:07:22.127698   73696 fix.go:200] guest clock delta is within tolerance: 80.980648ms
	I0731 18:07:22.127704   73696 start.go:83] releasing machines lock for "default-k8s-diff-port-094310", held for 17.711543911s
	I0731 18:07:22.127735   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) Calling .DriverName
	I0731 18:07:22.128006   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) Calling .GetIP
	I0731 18:07:22.130905   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) DBG | domain default-k8s-diff-port-094310 has defined MAC address 52:54:00:a9:b2:ae in network mk-default-k8s-diff-port-094310
	I0731 18:07:22.131291   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a9:b2:ae", ip: ""} in network mk-default-k8s-diff-port-094310: {Iface:virbr1 ExpiryTime:2024-07-31 19:07:14 +0000 UTC Type:0 Mac:52:54:00:a9:b2:ae Iaid: IPaddr:192.168.72.197 Prefix:24 Hostname:default-k8s-diff-port-094310 Clientid:01:52:54:00:a9:b2:ae}
	I0731 18:07:22.131322   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) DBG | domain default-k8s-diff-port-094310 has defined IP address 192.168.72.197 and MAC address 52:54:00:a9:b2:ae in network mk-default-k8s-diff-port-094310
	I0731 18:07:22.131568   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) Calling .DriverName
	I0731 18:07:22.132072   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) Calling .DriverName
	I0731 18:07:22.132244   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) Calling .DriverName
	I0731 18:07:22.132334   73696 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0731 18:07:22.132373   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) Calling .GetSSHHostname
	I0731 18:07:22.132488   73696 ssh_runner.go:195] Run: cat /version.json
	I0731 18:07:22.132511   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) Calling .GetSSHHostname
	I0731 18:07:22.134976   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) DBG | domain default-k8s-diff-port-094310 has defined MAC address 52:54:00:a9:b2:ae in network mk-default-k8s-diff-port-094310
	I0731 18:07:22.135269   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) DBG | domain default-k8s-diff-port-094310 has defined MAC address 52:54:00:a9:b2:ae in network mk-default-k8s-diff-port-094310
	I0731 18:07:22.135350   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a9:b2:ae", ip: ""} in network mk-default-k8s-diff-port-094310: {Iface:virbr1 ExpiryTime:2024-07-31 19:07:14 +0000 UTC Type:0 Mac:52:54:00:a9:b2:ae Iaid: IPaddr:192.168.72.197 Prefix:24 Hostname:default-k8s-diff-port-094310 Clientid:01:52:54:00:a9:b2:ae}
	I0731 18:07:22.135386   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) DBG | domain default-k8s-diff-port-094310 has defined IP address 192.168.72.197 and MAC address 52:54:00:a9:b2:ae in network mk-default-k8s-diff-port-094310
	I0731 18:07:22.135583   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) Calling .GetSSHPort
	I0731 18:07:22.135702   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a9:b2:ae", ip: ""} in network mk-default-k8s-diff-port-094310: {Iface:virbr1 ExpiryTime:2024-07-31 19:07:14 +0000 UTC Type:0 Mac:52:54:00:a9:b2:ae Iaid: IPaddr:192.168.72.197 Prefix:24 Hostname:default-k8s-diff-port-094310 Clientid:01:52:54:00:a9:b2:ae}
	I0731 18:07:22.135735   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) DBG | domain default-k8s-diff-port-094310 has defined IP address 192.168.72.197 and MAC address 52:54:00:a9:b2:ae in network mk-default-k8s-diff-port-094310
	I0731 18:07:22.135751   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) Calling .GetSSHKeyPath
	I0731 18:07:22.135837   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) Calling .GetSSHPort
	I0731 18:07:22.135945   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) Calling .GetSSHUsername
	I0731 18:07:22.135966   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) Calling .GetSSHKeyPath
	I0731 18:07:22.136068   73696 sshutil.go:53] new ssh client: &{IP:192.168.72.197 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19349-8084/.minikube/machines/default-k8s-diff-port-094310/id_rsa Username:docker}
	I0731 18:07:22.136101   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) Calling .GetSSHUsername
	I0731 18:07:22.136246   73696 sshutil.go:53] new ssh client: &{IP:192.168.72.197 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19349-8084/.minikube/machines/default-k8s-diff-port-094310/id_rsa Username:docker}
	I0731 18:07:22.245752   73696 ssh_runner.go:195] Run: systemctl --version
	I0731 18:07:22.251574   73696 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0731 18:07:22.391398   73696 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0731 18:07:22.396765   73696 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0731 18:07:22.396842   73696 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0731 18:07:22.412102   73696 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0731 18:07:22.412119   73696 start.go:495] detecting cgroup driver to use...
	I0731 18:07:22.412170   73696 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0731 18:07:22.427198   73696 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0731 18:07:22.441511   73696 docker.go:217] disabling cri-docker service (if available) ...
	I0731 18:07:22.441589   73696 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0731 18:07:22.455498   73696 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0731 18:07:22.469702   73696 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0731 18:07:22.584218   73696 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0731 18:07:22.719105   73696 docker.go:233] disabling docker service ...
	I0731 18:07:22.719195   73696 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0731 18:07:22.733625   73696 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0731 18:07:22.746500   73696 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0731 18:07:22.893624   73696 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0731 18:07:23.012965   73696 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0731 18:07:23.027132   73696 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0731 18:07:23.044766   73696 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0731 18:07:23.044832   73696 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 18:07:23.054276   73696 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0731 18:07:23.054363   73696 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 18:07:23.063873   73696 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 18:07:23.073392   73696 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 18:07:23.082908   73696 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0731 18:07:23.093468   73696 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 18:07:23.103419   73696 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 18:07:23.119920   73696 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 18:07:23.130427   73696 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0731 18:07:23.139397   73696 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0731 18:07:23.139465   73696 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0731 18:07:23.152275   73696 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0731 18:07:23.162439   73696 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0731 18:07:23.280030   73696 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0731 18:07:23.412019   73696 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0731 18:07:23.412083   73696 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0731 18:07:23.416884   73696 start.go:563] Will wait 60s for crictl version
	I0731 18:07:23.416930   73696 ssh_runner.go:195] Run: which crictl
	I0731 18:07:23.420518   73696 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0731 18:07:23.458895   73696 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0731 18:07:23.458976   73696 ssh_runner.go:195] Run: crio --version
	I0731 18:07:23.486961   73696 ssh_runner.go:195] Run: crio --version
	I0731 18:07:23.519648   73696 out.go:177] * Preparing Kubernetes v1.30.3 on CRI-O 1.29.1 ...
	I0731 18:07:22.151159   73800 main.go:141] libmachine: (embed-certs-436067) Calling .Start
	I0731 18:07:22.151319   73800 main.go:141] libmachine: (embed-certs-436067) Ensuring networks are active...
	I0731 18:07:22.151951   73800 main.go:141] libmachine: (embed-certs-436067) Ensuring network default is active
	I0731 18:07:22.152245   73800 main.go:141] libmachine: (embed-certs-436067) Ensuring network mk-embed-certs-436067 is active
	I0731 18:07:22.152747   73800 main.go:141] libmachine: (embed-certs-436067) Getting domain xml...
	I0731 18:07:22.153446   73800 main.go:141] libmachine: (embed-certs-436067) Creating domain...
	I0731 18:07:23.410530   73800 main.go:141] libmachine: (embed-certs-436067) Waiting to get IP...
	I0731 18:07:23.411687   73800 main.go:141] libmachine: (embed-certs-436067) DBG | domain embed-certs-436067 has defined MAC address 52:54:00:87:1e:25 in network mk-embed-certs-436067
	I0731 18:07:23.412152   73800 main.go:141] libmachine: (embed-certs-436067) DBG | unable to find current IP address of domain embed-certs-436067 in network mk-embed-certs-436067
	I0731 18:07:23.412231   73800 main.go:141] libmachine: (embed-certs-436067) DBG | I0731 18:07:23.412133   74994 retry.go:31] will retry after 233.281104ms: waiting for machine to come up
	I0731 18:07:23.646659   73800 main.go:141] libmachine: (embed-certs-436067) DBG | domain embed-certs-436067 has defined MAC address 52:54:00:87:1e:25 in network mk-embed-certs-436067
	I0731 18:07:23.647147   73800 main.go:141] libmachine: (embed-certs-436067) DBG | unable to find current IP address of domain embed-certs-436067 in network mk-embed-certs-436067
	I0731 18:07:23.647174   73800 main.go:141] libmachine: (embed-certs-436067) DBG | I0731 18:07:23.647069   74994 retry.go:31] will retry after 307.068766ms: waiting for machine to come up
	I0731 18:07:23.955614   73800 main.go:141] libmachine: (embed-certs-436067) DBG | domain embed-certs-436067 has defined MAC address 52:54:00:87:1e:25 in network mk-embed-certs-436067
	I0731 18:07:23.956140   73800 main.go:141] libmachine: (embed-certs-436067) DBG | unable to find current IP address of domain embed-certs-436067 in network mk-embed-certs-436067
	I0731 18:07:23.956166   73800 main.go:141] libmachine: (embed-certs-436067) DBG | I0731 18:07:23.956094   74994 retry.go:31] will retry after 410.095032ms: waiting for machine to come up
	I0731 18:07:24.367793   73800 main.go:141] libmachine: (embed-certs-436067) DBG | domain embed-certs-436067 has defined MAC address 52:54:00:87:1e:25 in network mk-embed-certs-436067
	I0731 18:07:24.368231   73800 main.go:141] libmachine: (embed-certs-436067) DBG | unable to find current IP address of domain embed-certs-436067 in network mk-embed-certs-436067
	I0731 18:07:24.368264   73800 main.go:141] libmachine: (embed-certs-436067) DBG | I0731 18:07:24.368188   74994 retry.go:31] will retry after 366.242055ms: waiting for machine to come up
	I0731 18:07:23.520927   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) Calling .GetIP
	I0731 18:07:23.524167   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) DBG | domain default-k8s-diff-port-094310 has defined MAC address 52:54:00:a9:b2:ae in network mk-default-k8s-diff-port-094310
	I0731 18:07:23.524615   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a9:b2:ae", ip: ""} in network mk-default-k8s-diff-port-094310: {Iface:virbr1 ExpiryTime:2024-07-31 19:07:14 +0000 UTC Type:0 Mac:52:54:00:a9:b2:ae Iaid: IPaddr:192.168.72.197 Prefix:24 Hostname:default-k8s-diff-port-094310 Clientid:01:52:54:00:a9:b2:ae}
	I0731 18:07:23.524663   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) DBG | domain default-k8s-diff-port-094310 has defined IP address 192.168.72.197 and MAC address 52:54:00:a9:b2:ae in network mk-default-k8s-diff-port-094310
	I0731 18:07:23.524913   73696 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0731 18:07:23.528924   73696 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
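The one-liner above updates /etc/hosts by dropping any stale host.minikube.internal line and appending a fresh mapping. A rough Go equivalent of that upsert, operating on a scratch copy of the file (the file name, IP and hostname are placeholders taken from the log):

// hosts_upsert.go — sketch of the "grep -v old entry, append new entry" update above.
package main

import (
	"fmt"
	"os"
	"strings"
)

// upsert removes any line ending in "\t<name>" and appends "ip\tname".
func upsert(hosts, ip, name string) string {
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(hosts, "\n"), "\n") {
		if strings.HasSuffix(line, "\t"+name) {
			continue // drop the stale entry, mirroring the grep -v in the logged command
		}
		kept = append(kept, line)
	}
	kept = append(kept, ip+"\t"+name)
	return strings.Join(kept, "\n") + "\n"
}

func main() {
	data, err := os.ReadFile("hosts") // a scratch copy of /etc/hosts
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	out := upsert(string(data), "192.168.72.1", "host.minikube.internal")
	if err := os.WriteFile("hosts", []byte(out), 0o644); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}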
	I0731 18:07:23.540496   73696 kubeadm.go:883] updating cluster {Name:default-k8s-diff-port-094310 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kubernete
sVersion:v1.30.3 ClusterName:default-k8s-diff-port-094310 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.197 Port:8444 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpirat
ion:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0731 18:07:23.540633   73696 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0731 18:07:23.540681   73696 ssh_runner.go:195] Run: sudo crictl images --output json
	I0731 18:07:23.579224   73696 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.3". assuming images are not preloaded.
	I0731 18:07:23.579295   73696 ssh_runner.go:195] Run: which lz4
	I0731 18:07:23.583060   73696 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0731 18:07:23.586888   73696 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0731 18:07:23.586922   73696 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19349-8084/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (406200976 bytes)
	I0731 18:07:24.864241   73696 crio.go:462] duration metric: took 1.281254602s to copy over tarball
	I0731 18:07:24.864321   73696 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0731 18:07:24.735741   73800 main.go:141] libmachine: (embed-certs-436067) DBG | domain embed-certs-436067 has defined MAC address 52:54:00:87:1e:25 in network mk-embed-certs-436067
	I0731 18:07:24.736325   73800 main.go:141] libmachine: (embed-certs-436067) DBG | unable to find current IP address of domain embed-certs-436067 in network mk-embed-certs-436067
	I0731 18:07:24.736356   73800 main.go:141] libmachine: (embed-certs-436067) DBG | I0731 18:07:24.736275   74994 retry.go:31] will retry after 593.179812ms: waiting for machine to come up
	I0731 18:07:25.331004   73800 main.go:141] libmachine: (embed-certs-436067) DBG | domain embed-certs-436067 has defined MAC address 52:54:00:87:1e:25 in network mk-embed-certs-436067
	I0731 18:07:25.331406   73800 main.go:141] libmachine: (embed-certs-436067) DBG | unable to find current IP address of domain embed-certs-436067 in network mk-embed-certs-436067
	I0731 18:07:25.331470   73800 main.go:141] libmachine: (embed-certs-436067) DBG | I0731 18:07:25.331381   74994 retry.go:31] will retry after 778.352855ms: waiting for machine to come up
	I0731 18:07:26.111327   73800 main.go:141] libmachine: (embed-certs-436067) DBG | domain embed-certs-436067 has defined MAC address 52:54:00:87:1e:25 in network mk-embed-certs-436067
	I0731 18:07:26.111828   73800 main.go:141] libmachine: (embed-certs-436067) DBG | unable to find current IP address of domain embed-certs-436067 in network mk-embed-certs-436067
	I0731 18:07:26.111855   73800 main.go:141] libmachine: (embed-certs-436067) DBG | I0731 18:07:26.111757   74994 retry.go:31] will retry after 993.157171ms: waiting for machine to come up
	I0731 18:07:27.106111   73800 main.go:141] libmachine: (embed-certs-436067) DBG | domain embed-certs-436067 has defined MAC address 52:54:00:87:1e:25 in network mk-embed-certs-436067
	I0731 18:07:27.106543   73800 main.go:141] libmachine: (embed-certs-436067) DBG | unable to find current IP address of domain embed-certs-436067 in network mk-embed-certs-436067
	I0731 18:07:27.106574   73800 main.go:141] libmachine: (embed-certs-436067) DBG | I0731 18:07:27.106507   74994 retry.go:31] will retry after 963.581879ms: waiting for machine to come up
	I0731 18:07:28.072100   73800 main.go:141] libmachine: (embed-certs-436067) DBG | domain embed-certs-436067 has defined MAC address 52:54:00:87:1e:25 in network mk-embed-certs-436067
	I0731 18:07:28.072628   73800 main.go:141] libmachine: (embed-certs-436067) DBG | unable to find current IP address of domain embed-certs-436067 in network mk-embed-certs-436067
	I0731 18:07:28.072657   73800 main.go:141] libmachine: (embed-certs-436067) DBG | I0731 18:07:28.072560   74994 retry.go:31] will retry after 1.608497907s: waiting for machine to come up
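The libmachine lines above poll for the new VM's IP address, retrying with a growing delay until the machine comes up. A simplified sketch of that retry loop follows; getIP is a hypothetical stand-in for the libvirt DHCP-lease lookup the kvm2 driver actually performs, and the backoff growth is only an approximation of the delays seen in the log.

// machine_ip_retry.go — sketch of a retry-with-backoff wait for a VM's IP.
package main

import (
	"errors"
	"fmt"
	"math/rand"
	"os"
	"time"
)

func getIP() (string, error) {
	// Placeholder: the real driver looks up the domain's MAC address in the
	// libvirt network's DHCP leases.
	return "", errors.New("unable to find current IP address of domain")
}

func main() {
	deadline := time.Now().Add(5 * time.Minute)
	delay := 200 * time.Millisecond
	for time.Now().Before(deadline) {
		if ip, err := getIP(); err == nil {
			fmt.Println("machine is up at", ip)
			return
		}
		// Randomise the wait a little and grow the base delay each round.
		wait := delay + time.Duration(rand.Int63n(int64(delay)))
		fmt.Printf("will retry after %v: waiting for machine to come up\n", wait)
		time.Sleep(wait)
		delay = delay * 3 / 2
	}
	fmt.Fprintln(os.Stderr, "timed out waiting for machine IP")
	os.Exit(1)
}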
	I0731 18:07:27.052512   73696 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.188157854s)
	I0731 18:07:27.052542   73696 crio.go:469] duration metric: took 2.188269884s to extract the tarball
	I0731 18:07:27.052557   73696 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0731 18:07:27.089250   73696 ssh_runner.go:195] Run: sudo crictl images --output json
	I0731 18:07:27.130507   73696 crio.go:514] all images are preloaded for cri-o runtime.
	I0731 18:07:27.130536   73696 cache_images.go:84] Images are preloaded, skipping loading
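Above, the driver notices the preload tarball is absent on the guest, copies preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 over, and unpacks it into /var so the container images are already present. A small sketch of that existence check plus extraction, shelling out with the same tar flags as the logged command (paths are placeholders; the real run copies the tarball over SSH first):

// preload_extract.go — sketch of the preload existence check and extraction step.
package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	tarball := "/preloaded.tar.lz4" // assumed path from the log
	if _, err := os.Stat(tarball); err != nil {
		fmt.Fprintf(os.Stderr, "preload tarball missing, would copy it over first: %v\n", err)
		os.Exit(1)
	}
	// Same flags as the logged command: keep security xattrs and decompress with lz4.
	cmd := exec.Command("sudo", "tar", "--xattrs", "--xattrs-include", "security.capability",
		"-I", "lz4", "-C", "/var", "-xf", tarball)
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	if err := cmd.Run(); err != nil {
		fmt.Fprintln(os.Stderr, "extract failed:", err)
		os.Exit(1)
	}
}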
	I0731 18:07:27.130546   73696 kubeadm.go:934] updating node { 192.168.72.197 8444 v1.30.3 crio true true} ...
	I0731 18:07:27.130666   73696 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=default-k8s-diff-port-094310 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.197
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:default-k8s-diff-port-094310 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0731 18:07:27.130751   73696 ssh_runner.go:195] Run: crio config
	I0731 18:07:27.176571   73696 cni.go:84] Creating CNI manager for ""
	I0731 18:07:27.176598   73696 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0731 18:07:27.176614   73696 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0731 18:07:27.176640   73696 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.197 APIServerPort:8444 KubernetesVersion:v1.30.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-094310 NodeName:default-k8s-diff-port-094310 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.197"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.197 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/cer
ts/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0731 18:07:27.176821   73696 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.197
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-094310"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.197
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.197"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0731 18:07:27.176904   73696 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0731 18:07:27.186582   73696 binaries.go:44] Found k8s binaries, skipping transfer
	I0731 18:07:27.186647   73696 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0731 18:07:27.195571   73696 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I0731 18:07:27.211103   73696 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0731 18:07:27.226226   73696 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2172 bytes)
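The kubeadm.yaml written above is a multi-document file carrying InitConfiguration, ClusterConfiguration, KubeletConfiguration and KubeProxyConfiguration in one stream. A quick sketch that decodes each document and prints its apiVersion and kind as a sanity check; it assumes gopkg.in/yaml.v3 and a local copy of the file, and is not how minikube itself validates the config.

// kubeadm_config_check.go — list the documents inside a multi-document kubeadm.yaml.
package main

import (
	"bytes"
	"errors"
	"fmt"
	"io"
	"os"

	"gopkg.in/yaml.v3"
)

func main() {
	data, err := os.ReadFile("kubeadm.yaml") // e.g. a copy of /var/tmp/minikube/kubeadm.yaml
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	dec := yaml.NewDecoder(bytes.NewReader(data))
	for {
		var doc map[string]interface{}
		if err := dec.Decode(&doc); err != nil {
			if errors.Is(err, io.EOF) {
				return // all documents read
			}
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		fmt.Printf("%v / %v\n", doc["apiVersion"], doc["kind"])
	}
}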
	I0731 18:07:27.241763   73696 ssh_runner.go:195] Run: grep 192.168.72.197	control-plane.minikube.internal$ /etc/hosts
	I0731 18:07:27.245286   73696 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.197	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0731 18:07:27.256317   73696 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0731 18:07:27.377904   73696 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0731 18:07:27.394151   73696 certs.go:68] Setting up /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/default-k8s-diff-port-094310 for IP: 192.168.72.197
	I0731 18:07:27.394181   73696 certs.go:194] generating shared ca certs ...
	I0731 18:07:27.394201   73696 certs.go:226] acquiring lock for ca certs: {Name:mk49eed439085f9c30746706460ce213570d1997 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 18:07:27.394382   73696 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19349-8084/.minikube/ca.key
	I0731 18:07:27.394451   73696 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19349-8084/.minikube/proxy-client-ca.key
	I0731 18:07:27.394465   73696 certs.go:256] generating profile certs ...
	I0731 18:07:27.394577   73696 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/default-k8s-diff-port-094310/client.key
	I0731 18:07:27.394656   73696 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/default-k8s-diff-port-094310/apiserver.key.5264b27d
	I0731 18:07:27.394703   73696 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/default-k8s-diff-port-094310/proxy-client.key
	I0731 18:07:27.394851   73696 certs.go:484] found cert: /home/jenkins/minikube-integration/19349-8084/.minikube/certs/15259.pem (1338 bytes)
	W0731 18:07:27.394896   73696 certs.go:480] ignoring /home/jenkins/minikube-integration/19349-8084/.minikube/certs/15259_empty.pem, impossibly tiny 0 bytes
	I0731 18:07:27.394908   73696 certs.go:484] found cert: /home/jenkins/minikube-integration/19349-8084/.minikube/certs/ca-key.pem (1675 bytes)
	I0731 18:07:27.394935   73696 certs.go:484] found cert: /home/jenkins/minikube-integration/19349-8084/.minikube/certs/ca.pem (1082 bytes)
	I0731 18:07:27.394969   73696 certs.go:484] found cert: /home/jenkins/minikube-integration/19349-8084/.minikube/certs/cert.pem (1123 bytes)
	I0731 18:07:27.394990   73696 certs.go:484] found cert: /home/jenkins/minikube-integration/19349-8084/.minikube/certs/key.pem (1675 bytes)
	I0731 18:07:27.395028   73696 certs.go:484] found cert: /home/jenkins/minikube-integration/19349-8084/.minikube/files/etc/ssl/certs/152592.pem (1708 bytes)
	I0731 18:07:27.395749   73696 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19349-8084/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0731 18:07:27.425292   73696 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19349-8084/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0731 18:07:27.452753   73696 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19349-8084/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0731 18:07:27.481508   73696 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19349-8084/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0731 18:07:27.506990   73696 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/default-k8s-diff-port-094310/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0731 18:07:27.544385   73696 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/default-k8s-diff-port-094310/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0731 18:07:27.572947   73696 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/default-k8s-diff-port-094310/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0731 18:07:27.597895   73696 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/default-k8s-diff-port-094310/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0731 18:07:27.619324   73696 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19349-8084/.minikube/files/etc/ssl/certs/152592.pem --> /usr/share/ca-certificates/152592.pem (1708 bytes)
	I0731 18:07:27.641000   73696 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19349-8084/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0731 18:07:27.662483   73696 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19349-8084/.minikube/certs/15259.pem --> /usr/share/ca-certificates/15259.pem (1338 bytes)
	I0731 18:07:27.684400   73696 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0731 18:07:27.700058   73696 ssh_runner.go:195] Run: openssl version
	I0731 18:07:27.705637   73696 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/152592.pem && ln -fs /usr/share/ca-certificates/152592.pem /etc/ssl/certs/152592.pem"
	I0731 18:07:27.715558   73696 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/152592.pem
	I0731 18:07:27.719545   73696 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 31 16:54 /usr/share/ca-certificates/152592.pem
	I0731 18:07:27.719611   73696 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/152592.pem
	I0731 18:07:27.725076   73696 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/152592.pem /etc/ssl/certs/3ec20f2e.0"
	I0731 18:07:27.736589   73696 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0731 18:07:27.747908   73696 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0731 18:07:27.752392   73696 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 31 16:42 /usr/share/ca-certificates/minikubeCA.pem
	I0731 18:07:27.752448   73696 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0731 18:07:27.757939   73696 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0731 18:07:27.769571   73696 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/15259.pem && ln -fs /usr/share/ca-certificates/15259.pem /etc/ssl/certs/15259.pem"
	I0731 18:07:27.780730   73696 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/15259.pem
	I0731 18:07:27.785059   73696 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 31 16:54 /usr/share/ca-certificates/15259.pem
	I0731 18:07:27.785112   73696 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/15259.pem
	I0731 18:07:27.790477   73696 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/15259.pem /etc/ssl/certs/51391683.0"
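The test -L / ln -fs commands above install each CA certificate under its OpenSSL subject-hash name (for example 3ec20f2e.0 and b5213941.0) so that verifiers can find it in /etc/ssl/certs. A sketch of that step, computing the hash with openssl and creating the <hash>.0 symlink; paths are placeholders, so run it in a scratch directory rather than on a live node.

// cert_hash_link.go — create the subject-hash symlink OpenSSL expects for a CA cert.
package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

func main() {
	cert := "minikubeCA.pem" // placeholder; the log uses /usr/share/ca-certificates/minikubeCA.pem
	// Same probe as the log: openssl x509 -hash -noout -in <cert>
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", cert).Output()
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	hash := strings.TrimSpace(string(out))
	link := filepath.Join(".", hash+".0") // /etc/ssl/certs/<hash>.0 on a real node
	_ = os.Remove(link)                   // replace any stale link, like ln -fs does
	if err := os.Symlink(cert, link); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("linked", link, "->", cert)
}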
	I0731 18:07:27.801519   73696 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0731 18:07:27.805654   73696 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0731 18:07:27.811381   73696 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0731 18:07:27.816786   73696 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0731 18:07:27.822643   73696 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0731 18:07:27.828371   73696 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0731 18:07:27.833908   73696 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
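Each "openssl x509 ... -checkend 86400" run above asks whether the certificate will still be valid in 24 hours. The same check can be done natively with crypto/x509; a minimal sketch, assuming a local copy of the certificate file:

// checkend.go — native equivalent of `openssl x509 -checkend 86400`.
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

func main() {
	data, err := os.ReadFile("apiserver.crt") // placeholder path
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		fmt.Fprintln(os.Stderr, "no PEM block found")
		os.Exit(1)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	if time.Until(cert.NotAfter) < 24*time.Hour {
		fmt.Println("certificate will expire within 86400 seconds")
		os.Exit(1)
	}
	fmt.Println("certificate valid for at least another 24h")
}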
	I0731 18:07:27.839455   73696 kubeadm.go:392] StartCluster: {Name:default-k8s-diff-port-094310 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVe
rsion:v1.30.3 ClusterName:default-k8s-diff-port-094310 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.197 Port:8444 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration
:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0731 18:07:27.839537   73696 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0731 18:07:27.839605   73696 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0731 18:07:27.882993   73696 cri.go:89] found id: ""
	I0731 18:07:27.883055   73696 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0731 18:07:27.894363   73696 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0731 18:07:27.894386   73696 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0731 18:07:27.894431   73696 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0731 18:07:27.905192   73696 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0731 18:07:27.906138   73696 kubeconfig.go:125] found "default-k8s-diff-port-094310" server: "https://192.168.72.197:8444"
	I0731 18:07:27.908339   73696 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0731 18:07:27.918565   73696 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.72.197
	I0731 18:07:27.918603   73696 kubeadm.go:1160] stopping kube-system containers ...
	I0731 18:07:27.918613   73696 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0731 18:07:27.918663   73696 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0731 18:07:27.955675   73696 cri.go:89] found id: ""
	I0731 18:07:27.955744   73696 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0731 18:07:27.972234   73696 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0731 18:07:27.981273   73696 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0731 18:07:27.981289   73696 kubeadm.go:157] found existing configuration files:
	
	I0731 18:07:27.981323   73696 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0731 18:07:27.989775   73696 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0731 18:07:27.989837   73696 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0731 18:07:27.998816   73696 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0731 18:07:28.007142   73696 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0731 18:07:28.007197   73696 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0731 18:07:28.016124   73696 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0731 18:07:28.024471   73696 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0731 18:07:28.024519   73696 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0731 18:07:28.033105   73696 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0731 18:07:28.041306   73696 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0731 18:07:28.041355   73696 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0731 18:07:28.049958   73696 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0731 18:07:28.058718   73696 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0731 18:07:28.167720   73696 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0731 18:07:29.013539   73696 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0731 18:07:29.225696   73696 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0731 18:07:29.300822   73696 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0731 18:07:29.403471   73696 api_server.go:52] waiting for apiserver process to appear ...
	I0731 18:07:29.403567   73696 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:07:29.903755   73696 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:07:30.403896   73696 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:07:30.904160   73696 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:07:29.683622   73800 main.go:141] libmachine: (embed-certs-436067) DBG | domain embed-certs-436067 has defined MAC address 52:54:00:87:1e:25 in network mk-embed-certs-436067
	I0731 18:07:29.684148   73800 main.go:141] libmachine: (embed-certs-436067) DBG | unable to find current IP address of domain embed-certs-436067 in network mk-embed-certs-436067
	I0731 18:07:29.684180   73800 main.go:141] libmachine: (embed-certs-436067) DBG | I0731 18:07:29.684088   74994 retry.go:31] will retry after 1.813922887s: waiting for machine to come up
	I0731 18:07:31.500225   73800 main.go:141] libmachine: (embed-certs-436067) DBG | domain embed-certs-436067 has defined MAC address 52:54:00:87:1e:25 in network mk-embed-certs-436067
	I0731 18:07:31.500738   73800 main.go:141] libmachine: (embed-certs-436067) DBG | unable to find current IP address of domain embed-certs-436067 in network mk-embed-certs-436067
	I0731 18:07:31.500769   73800 main.go:141] libmachine: (embed-certs-436067) DBG | I0731 18:07:31.500694   74994 retry.go:31] will retry after 2.381670698s: waiting for machine to come up
	I0731 18:07:33.884129   73800 main.go:141] libmachine: (embed-certs-436067) DBG | domain embed-certs-436067 has defined MAC address 52:54:00:87:1e:25 in network mk-embed-certs-436067
	I0731 18:07:33.884564   73800 main.go:141] libmachine: (embed-certs-436067) DBG | unable to find current IP address of domain embed-certs-436067 in network mk-embed-certs-436067
	I0731 18:07:33.884587   73800 main.go:141] libmachine: (embed-certs-436067) DBG | I0731 18:07:33.884539   74994 retry.go:31] will retry after 3.269400744s: waiting for machine to come up
	I0731 18:07:31.404093   73696 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:07:31.417483   73696 api_server.go:72] duration metric: took 2.014013675s to wait for apiserver process to appear ...
	I0731 18:07:31.417511   73696 api_server.go:88] waiting for apiserver healthz status ...
	I0731 18:07:31.417533   73696 api_server.go:253] Checking apiserver healthz at https://192.168.72.197:8444/healthz ...
	I0731 18:07:34.340211   73696 api_server.go:279] https://192.168.72.197:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0731 18:07:34.340240   73696 api_server.go:103] status: https://192.168.72.197:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0731 18:07:34.340274   73696 api_server.go:253] Checking apiserver healthz at https://192.168.72.197:8444/healthz ...
	I0731 18:07:34.426446   73696 api_server.go:279] https://192.168.72.197:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0731 18:07:34.426504   73696 api_server.go:103] status: https://192.168.72.197:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0731 18:07:34.426522   73696 api_server.go:253] Checking apiserver healthz at https://192.168.72.197:8444/healthz ...
	I0731 18:07:34.436383   73696 api_server.go:279] https://192.168.72.197:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0731 18:07:34.436416   73696 api_server.go:103] status: https://192.168.72.197:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0731 18:07:34.918371   73696 api_server.go:253] Checking apiserver healthz at https://192.168.72.197:8444/healthz ...
	I0731 18:07:34.922668   73696 api_server.go:279] https://192.168.72.197:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0731 18:07:34.922699   73696 api_server.go:103] status: https://192.168.72.197:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0731 18:07:35.418265   73696 api_server.go:253] Checking apiserver healthz at https://192.168.72.197:8444/healthz ...
	I0731 18:07:35.435931   73696 api_server.go:279] https://192.168.72.197:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0731 18:07:35.435966   73696 api_server.go:103] status: https://192.168.72.197:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0731 18:07:35.918570   73696 api_server.go:253] Checking apiserver healthz at https://192.168.72.197:8444/healthz ...
	I0731 18:07:35.923674   73696 api_server.go:279] https://192.168.72.197:8444/healthz returned 200:
	ok
	I0731 18:07:35.929781   73696 api_server.go:141] control plane version: v1.30.3
	I0731 18:07:35.929809   73696 api_server.go:131] duration metric: took 4.512290009s to wait for apiserver health ...
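The healthz probes above show the usual restart sequence: a 403 while anonymous access to /healthz is still forbidden, then 500s while post-start hooks (RBAC bootstrap roles, apiservice registration, and so on) finish, and finally 200. Below is a stripped-down version of that polling loop; the endpoint and overall timeout are taken from the log, and TLS verification is skipped only because this is a sketch against a throwaway test cluster.

// healthz_wait.go — poll the apiserver's /healthz until it answers 200 or a deadline passes.
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"os"
	"time"
)

func main() {
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(4 * time.Minute)
	for time.Now().Before(deadline) {
		resp, err := client.Get("https://192.168.72.197:8444/healthz")
		if err == nil {
			code := resp.StatusCode
			resp.Body.Close()
			if code == http.StatusOK {
				fmt.Println("apiserver healthy")
				return
			}
			fmt.Println("healthz returned", code, "- retrying") // 403/500 while bootstrap completes
		}
		time.Sleep(500 * time.Millisecond)
	}
	fmt.Fprintln(os.Stderr, "apiserver never became healthy")
	os.Exit(1)
}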
	I0731 18:07:35.929820   73696 cni.go:84] Creating CNI manager for ""
	I0731 18:07:35.929827   73696 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0731 18:07:35.931827   73696 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0731 18:07:35.933104   73696 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0731 18:07:35.943548   73696 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0731 18:07:35.961932   73696 system_pods.go:43] waiting for kube-system pods to appear ...
	I0731 18:07:35.977855   73696 system_pods.go:59] 8 kube-system pods found
	I0731 18:07:35.977894   73696 system_pods.go:61] "coredns-7db6d8ff4d-kvxmb" [df8cf19b-5e62-4c38-9124-3257fea48fbb] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0731 18:07:35.977905   73696 system_pods.go:61] "etcd-default-k8s-diff-port-094310" [fe526f06-bd6c-4708-a0f3-e49b731e3a61] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0731 18:07:35.977915   73696 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-094310" [f0191941-87ad-4934-a02a-75b07649d5dc] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0731 18:07:35.977924   73696 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-094310" [28b4bdc4-4eea-41c0-9182-b07034d7363e] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0731 18:07:35.977936   73696 system_pods.go:61] "kube-proxy-8bgl7" [577052d5-fe7d-4547-bfbf-d3c938884767] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0731 18:07:35.977946   73696 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-094310" [df25971f-b25a-4344-a91e-c4b0c9ee5282] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0731 18:07:35.977964   73696 system_pods.go:61] "metrics-server-569cc877fc-64hp4" [847243bf-6568-41ff-a1e4-70b0a89c63dd] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0731 18:07:35.977978   73696 system_pods.go:61] "storage-provisioner" [6493bfa6-e40b-405c-93b6-ee5053efbdf6] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0731 18:07:35.977991   73696 system_pods.go:74] duration metric: took 16.038231ms to wait for pod list to return data ...
	I0731 18:07:35.978003   73696 node_conditions.go:102] verifying NodePressure condition ...
	I0731 18:07:35.983206   73696 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0731 18:07:35.983234   73696 node_conditions.go:123] node cpu capacity is 2
	I0731 18:07:35.983251   73696 node_conditions.go:105] duration metric: took 5.239492ms to run NodePressure ...
	I0731 18:07:35.983270   73696 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0731 18:07:37.155307   73800 main.go:141] libmachine: (embed-certs-436067) DBG | domain embed-certs-436067 has defined MAC address 52:54:00:87:1e:25 in network mk-embed-certs-436067
	I0731 18:07:37.155787   73800 main.go:141] libmachine: (embed-certs-436067) DBG | unable to find current IP address of domain embed-certs-436067 in network mk-embed-certs-436067
	I0731 18:07:37.155822   73800 main.go:141] libmachine: (embed-certs-436067) DBG | I0731 18:07:37.155717   74994 retry.go:31] will retry after 3.095991533s: waiting for machine to come up
	I0731 18:07:36.249072   73696 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0731 18:07:36.253639   73696 kubeadm.go:739] kubelet initialised
	I0731 18:07:36.253661   73696 kubeadm.go:740] duration metric: took 4.559461ms waiting for restarted kubelet to initialise ...
	I0731 18:07:36.253669   73696 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0731 18:07:36.258632   73696 pod_ready.go:78] waiting up to 4m0s for pod "coredns-7db6d8ff4d-kvxmb" in "kube-system" namespace to be "Ready" ...
	I0731 18:07:36.262785   73696 pod_ready.go:97] node "default-k8s-diff-port-094310" hosting pod "coredns-7db6d8ff4d-kvxmb" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-094310" has status "Ready":"False"
	I0731 18:07:36.262811   73696 pod_ready.go:81] duration metric: took 4.157359ms for pod "coredns-7db6d8ff4d-kvxmb" in "kube-system" namespace to be "Ready" ...
	E0731 18:07:36.262823   73696 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-094310" hosting pod "coredns-7db6d8ff4d-kvxmb" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-094310" has status "Ready":"False"
	I0731 18:07:36.262831   73696 pod_ready.go:78] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-094310" in "kube-system" namespace to be "Ready" ...
	I0731 18:07:36.269224   73696 pod_ready.go:97] node "default-k8s-diff-port-094310" hosting pod "etcd-default-k8s-diff-port-094310" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-094310" has status "Ready":"False"
	I0731 18:07:36.269250   73696 pod_ready.go:81] duration metric: took 6.406018ms for pod "etcd-default-k8s-diff-port-094310" in "kube-system" namespace to be "Ready" ...
	E0731 18:07:36.269263   73696 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-094310" hosting pod "etcd-default-k8s-diff-port-094310" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-094310" has status "Ready":"False"
	I0731 18:07:36.269270   73696 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-094310" in "kube-system" namespace to be "Ready" ...
	I0731 18:07:36.273379   73696 pod_ready.go:97] node "default-k8s-diff-port-094310" hosting pod "kube-apiserver-default-k8s-diff-port-094310" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-094310" has status "Ready":"False"
	I0731 18:07:36.273400   73696 pod_ready.go:81] duration metric: took 4.119945ms for pod "kube-apiserver-default-k8s-diff-port-094310" in "kube-system" namespace to be "Ready" ...
	E0731 18:07:36.273408   73696 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-094310" hosting pod "kube-apiserver-default-k8s-diff-port-094310" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-094310" has status "Ready":"False"
	I0731 18:07:36.273414   73696 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-094310" in "kube-system" namespace to be "Ready" ...
	I0731 18:07:36.365153   73696 pod_ready.go:97] node "default-k8s-diff-port-094310" hosting pod "kube-controller-manager-default-k8s-diff-port-094310" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-094310" has status "Ready":"False"
	I0731 18:07:36.365183   73696 pod_ready.go:81] duration metric: took 91.758203ms for pod "kube-controller-manager-default-k8s-diff-port-094310" in "kube-system" namespace to be "Ready" ...
	E0731 18:07:36.365195   73696 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-094310" hosting pod "kube-controller-manager-default-k8s-diff-port-094310" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-094310" has status "Ready":"False"
	I0731 18:07:36.365201   73696 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-8bgl7" in "kube-system" namespace to be "Ready" ...
	I0731 18:07:36.765371   73696 pod_ready.go:92] pod "kube-proxy-8bgl7" in "kube-system" namespace has status "Ready":"True"
	I0731 18:07:36.765393   73696 pod_ready.go:81] duration metric: took 400.181854ms for pod "kube-proxy-8bgl7" in "kube-system" namespace to be "Ready" ...
	I0731 18:07:36.765405   73696 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-094310" in "kube-system" namespace to be "Ready" ...
	I0731 18:07:38.770757   73696 pod_ready.go:102] pod "kube-scheduler-default-k8s-diff-port-094310" in "kube-system" namespace has status "Ready":"False"
	I0731 18:07:40.772702   73696 pod_ready.go:102] pod "kube-scheduler-default-k8s-diff-port-094310" in "kube-system" namespace has status "Ready":"False"
	I0731 18:07:41.552094   74203 start.go:364] duration metric: took 3m46.106649241s to acquireMachinesLock for "old-k8s-version-276459"
	I0731 18:07:41.552166   74203 start.go:96] Skipping create...Using existing machine configuration
	I0731 18:07:41.552174   74203 fix.go:54] fixHost starting: 
	I0731 18:07:41.552553   74203 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 18:07:41.552595   74203 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 18:07:41.569965   74203 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43329
	I0731 18:07:41.570361   74203 main.go:141] libmachine: () Calling .GetVersion
	I0731 18:07:41.570884   74203 main.go:141] libmachine: Using API Version  1
	I0731 18:07:41.570905   74203 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 18:07:41.571247   74203 main.go:141] libmachine: () Calling .GetMachineName
	I0731 18:07:41.571454   74203 main.go:141] libmachine: (old-k8s-version-276459) Calling .DriverName
	I0731 18:07:41.571605   74203 main.go:141] libmachine: (old-k8s-version-276459) Calling .GetState
	I0731 18:07:41.573081   74203 fix.go:112] recreateIfNeeded on old-k8s-version-276459: state=Stopped err=<nil>
	I0731 18:07:41.573114   74203 main.go:141] libmachine: (old-k8s-version-276459) Calling .DriverName
	W0731 18:07:41.573276   74203 fix.go:138] unexpected machine state, will restart: <nil>
	I0731 18:07:41.575254   74203 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-276459" ...
	I0731 18:07:40.254868   73800 main.go:141] libmachine: (embed-certs-436067) DBG | domain embed-certs-436067 has defined MAC address 52:54:00:87:1e:25 in network mk-embed-certs-436067
	I0731 18:07:40.255367   73800 main.go:141] libmachine: (embed-certs-436067) Found IP for machine: 192.168.50.86
	I0731 18:07:40.255385   73800 main.go:141] libmachine: (embed-certs-436067) Reserving static IP address...
	I0731 18:07:40.255405   73800 main.go:141] libmachine: (embed-certs-436067) DBG | domain embed-certs-436067 has current primary IP address 192.168.50.86 and MAC address 52:54:00:87:1e:25 in network mk-embed-certs-436067
	I0731 18:07:40.255798   73800 main.go:141] libmachine: (embed-certs-436067) DBG | found host DHCP lease matching {name: "embed-certs-436067", mac: "52:54:00:87:1e:25", ip: "192.168.50.86"} in network mk-embed-certs-436067: {Iface:virbr4 ExpiryTime:2024-07-31 19:07:32 +0000 UTC Type:0 Mac:52:54:00:87:1e:25 Iaid: IPaddr:192.168.50.86 Prefix:24 Hostname:embed-certs-436067 Clientid:01:52:54:00:87:1e:25}
	I0731 18:07:40.255822   73800 main.go:141] libmachine: (embed-certs-436067) Reserved static IP address: 192.168.50.86
	I0731 18:07:40.255839   73800 main.go:141] libmachine: (embed-certs-436067) DBG | skip adding static IP to network mk-embed-certs-436067 - found existing host DHCP lease matching {name: "embed-certs-436067", mac: "52:54:00:87:1e:25", ip: "192.168.50.86"}
	I0731 18:07:40.255853   73800 main.go:141] libmachine: (embed-certs-436067) DBG | Getting to WaitForSSH function...
	I0731 18:07:40.255865   73800 main.go:141] libmachine: (embed-certs-436067) Waiting for SSH to be available...
	I0731 18:07:40.257994   73800 main.go:141] libmachine: (embed-certs-436067) DBG | domain embed-certs-436067 has defined MAC address 52:54:00:87:1e:25 in network mk-embed-certs-436067
	I0731 18:07:40.258304   73800 main.go:141] libmachine: (embed-certs-436067) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:1e:25", ip: ""} in network mk-embed-certs-436067: {Iface:virbr4 ExpiryTime:2024-07-31 19:07:32 +0000 UTC Type:0 Mac:52:54:00:87:1e:25 Iaid: IPaddr:192.168.50.86 Prefix:24 Hostname:embed-certs-436067 Clientid:01:52:54:00:87:1e:25}
	I0731 18:07:40.258331   73800 main.go:141] libmachine: (embed-certs-436067) DBG | domain embed-certs-436067 has defined IP address 192.168.50.86 and MAC address 52:54:00:87:1e:25 in network mk-embed-certs-436067
	I0731 18:07:40.258475   73800 main.go:141] libmachine: (embed-certs-436067) DBG | Using SSH client type: external
	I0731 18:07:40.258492   73800 main.go:141] libmachine: (embed-certs-436067) DBG | Using SSH private key: /home/jenkins/minikube-integration/19349-8084/.minikube/machines/embed-certs-436067/id_rsa (-rw-------)
	I0731 18:07:40.258594   73800 main.go:141] libmachine: (embed-certs-436067) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.86 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19349-8084/.minikube/machines/embed-certs-436067/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0731 18:07:40.258625   73800 main.go:141] libmachine: (embed-certs-436067) DBG | About to run SSH command:
	I0731 18:07:40.258644   73800 main.go:141] libmachine: (embed-certs-436067) DBG | exit 0
	I0731 18:07:40.387051   73800 main.go:141] libmachine: (embed-certs-436067) DBG | SSH cmd err, output: <nil>: 
	I0731 18:07:40.387459   73800 main.go:141] libmachine: (embed-certs-436067) Calling .GetConfigRaw
	I0731 18:07:40.388093   73800 main.go:141] libmachine: (embed-certs-436067) Calling .GetIP
	I0731 18:07:40.390805   73800 main.go:141] libmachine: (embed-certs-436067) DBG | domain embed-certs-436067 has defined MAC address 52:54:00:87:1e:25 in network mk-embed-certs-436067
	I0731 18:07:40.391260   73800 main.go:141] libmachine: (embed-certs-436067) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:1e:25", ip: ""} in network mk-embed-certs-436067: {Iface:virbr4 ExpiryTime:2024-07-31 19:07:32 +0000 UTC Type:0 Mac:52:54:00:87:1e:25 Iaid: IPaddr:192.168.50.86 Prefix:24 Hostname:embed-certs-436067 Clientid:01:52:54:00:87:1e:25}
	I0731 18:07:40.391306   73800 main.go:141] libmachine: (embed-certs-436067) DBG | domain embed-certs-436067 has defined IP address 192.168.50.86 and MAC address 52:54:00:87:1e:25 in network mk-embed-certs-436067
	I0731 18:07:40.391534   73800 profile.go:143] Saving config to /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/embed-certs-436067/config.json ...
	I0731 18:07:40.391769   73800 machine.go:94] provisionDockerMachine start ...
	I0731 18:07:40.391793   73800 main.go:141] libmachine: (embed-certs-436067) Calling .DriverName
	I0731 18:07:40.392012   73800 main.go:141] libmachine: (embed-certs-436067) Calling .GetSSHHostname
	I0731 18:07:40.394412   73800 main.go:141] libmachine: (embed-certs-436067) DBG | domain embed-certs-436067 has defined MAC address 52:54:00:87:1e:25 in network mk-embed-certs-436067
	I0731 18:07:40.394809   73800 main.go:141] libmachine: (embed-certs-436067) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:1e:25", ip: ""} in network mk-embed-certs-436067: {Iface:virbr4 ExpiryTime:2024-07-31 19:07:32 +0000 UTC Type:0 Mac:52:54:00:87:1e:25 Iaid: IPaddr:192.168.50.86 Prefix:24 Hostname:embed-certs-436067 Clientid:01:52:54:00:87:1e:25}
	I0731 18:07:40.394839   73800 main.go:141] libmachine: (embed-certs-436067) DBG | domain embed-certs-436067 has defined IP address 192.168.50.86 and MAC address 52:54:00:87:1e:25 in network mk-embed-certs-436067
	I0731 18:07:40.395029   73800 main.go:141] libmachine: (embed-certs-436067) Calling .GetSSHPort
	I0731 18:07:40.395209   73800 main.go:141] libmachine: (embed-certs-436067) Calling .GetSSHKeyPath
	I0731 18:07:40.395372   73800 main.go:141] libmachine: (embed-certs-436067) Calling .GetSSHKeyPath
	I0731 18:07:40.395480   73800 main.go:141] libmachine: (embed-certs-436067) Calling .GetSSHUsername
	I0731 18:07:40.395624   73800 main.go:141] libmachine: Using SSH client type: native
	I0731 18:07:40.395808   73800 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.86 22 <nil> <nil>}
	I0731 18:07:40.395817   73800 main.go:141] libmachine: About to run SSH command:
	hostname
	I0731 18:07:40.503041   73800 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0731 18:07:40.503073   73800 main.go:141] libmachine: (embed-certs-436067) Calling .GetMachineName
	I0731 18:07:40.503326   73800 buildroot.go:166] provisioning hostname "embed-certs-436067"
	I0731 18:07:40.503352   73800 main.go:141] libmachine: (embed-certs-436067) Calling .GetMachineName
	I0731 18:07:40.503539   73800 main.go:141] libmachine: (embed-certs-436067) Calling .GetSSHHostname
	I0731 18:07:40.506604   73800 main.go:141] libmachine: (embed-certs-436067) DBG | domain embed-certs-436067 has defined MAC address 52:54:00:87:1e:25 in network mk-embed-certs-436067
	I0731 18:07:40.506940   73800 main.go:141] libmachine: (embed-certs-436067) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:1e:25", ip: ""} in network mk-embed-certs-436067: {Iface:virbr4 ExpiryTime:2024-07-31 19:07:32 +0000 UTC Type:0 Mac:52:54:00:87:1e:25 Iaid: IPaddr:192.168.50.86 Prefix:24 Hostname:embed-certs-436067 Clientid:01:52:54:00:87:1e:25}
	I0731 18:07:40.506967   73800 main.go:141] libmachine: (embed-certs-436067) DBG | domain embed-certs-436067 has defined IP address 192.168.50.86 and MAC address 52:54:00:87:1e:25 in network mk-embed-certs-436067
	I0731 18:07:40.507124   73800 main.go:141] libmachine: (embed-certs-436067) Calling .GetSSHPort
	I0731 18:07:40.507296   73800 main.go:141] libmachine: (embed-certs-436067) Calling .GetSSHKeyPath
	I0731 18:07:40.507438   73800 main.go:141] libmachine: (embed-certs-436067) Calling .GetSSHKeyPath
	I0731 18:07:40.507577   73800 main.go:141] libmachine: (embed-certs-436067) Calling .GetSSHUsername
	I0731 18:07:40.507752   73800 main.go:141] libmachine: Using SSH client type: native
	I0731 18:07:40.507912   73800 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.86 22 <nil> <nil>}
	I0731 18:07:40.507927   73800 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-436067 && echo "embed-certs-436067" | sudo tee /etc/hostname
	I0731 18:07:40.632627   73800 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-436067
	
	I0731 18:07:40.632678   73800 main.go:141] libmachine: (embed-certs-436067) Calling .GetSSHHostname
	I0731 18:07:40.635632   73800 main.go:141] libmachine: (embed-certs-436067) DBG | domain embed-certs-436067 has defined MAC address 52:54:00:87:1e:25 in network mk-embed-certs-436067
	I0731 18:07:40.635989   73800 main.go:141] libmachine: (embed-certs-436067) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:1e:25", ip: ""} in network mk-embed-certs-436067: {Iface:virbr4 ExpiryTime:2024-07-31 19:07:32 +0000 UTC Type:0 Mac:52:54:00:87:1e:25 Iaid: IPaddr:192.168.50.86 Prefix:24 Hostname:embed-certs-436067 Clientid:01:52:54:00:87:1e:25}
	I0731 18:07:40.636017   73800 main.go:141] libmachine: (embed-certs-436067) DBG | domain embed-certs-436067 has defined IP address 192.168.50.86 and MAC address 52:54:00:87:1e:25 in network mk-embed-certs-436067
	I0731 18:07:40.636168   73800 main.go:141] libmachine: (embed-certs-436067) Calling .GetSSHPort
	I0731 18:07:40.636386   73800 main.go:141] libmachine: (embed-certs-436067) Calling .GetSSHKeyPath
	I0731 18:07:40.636554   73800 main.go:141] libmachine: (embed-certs-436067) Calling .GetSSHKeyPath
	I0731 18:07:40.636751   73800 main.go:141] libmachine: (embed-certs-436067) Calling .GetSSHUsername
	I0731 18:07:40.636963   73800 main.go:141] libmachine: Using SSH client type: native
	I0731 18:07:40.637192   73800 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.86 22 <nil> <nil>}
	I0731 18:07:40.637213   73800 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-436067' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-436067/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-436067' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0731 18:07:40.755249   73800 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0731 18:07:40.755273   73800 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19349-8084/.minikube CaCertPath:/home/jenkins/minikube-integration/19349-8084/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19349-8084/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19349-8084/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19349-8084/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19349-8084/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19349-8084/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19349-8084/.minikube}
	I0731 18:07:40.755291   73800 buildroot.go:174] setting up certificates
	I0731 18:07:40.755301   73800 provision.go:84] configureAuth start
	I0731 18:07:40.755310   73800 main.go:141] libmachine: (embed-certs-436067) Calling .GetMachineName
	I0731 18:07:40.755602   73800 main.go:141] libmachine: (embed-certs-436067) Calling .GetIP
	I0731 18:07:40.758306   73800 main.go:141] libmachine: (embed-certs-436067) DBG | domain embed-certs-436067 has defined MAC address 52:54:00:87:1e:25 in network mk-embed-certs-436067
	I0731 18:07:40.758705   73800 main.go:141] libmachine: (embed-certs-436067) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:1e:25", ip: ""} in network mk-embed-certs-436067: {Iface:virbr4 ExpiryTime:2024-07-31 19:07:32 +0000 UTC Type:0 Mac:52:54:00:87:1e:25 Iaid: IPaddr:192.168.50.86 Prefix:24 Hostname:embed-certs-436067 Clientid:01:52:54:00:87:1e:25}
	I0731 18:07:40.758731   73800 main.go:141] libmachine: (embed-certs-436067) DBG | domain embed-certs-436067 has defined IP address 192.168.50.86 and MAC address 52:54:00:87:1e:25 in network mk-embed-certs-436067
	I0731 18:07:40.758865   73800 main.go:141] libmachine: (embed-certs-436067) Calling .GetSSHHostname
	I0731 18:07:40.760790   73800 main.go:141] libmachine: (embed-certs-436067) DBG | domain embed-certs-436067 has defined MAC address 52:54:00:87:1e:25 in network mk-embed-certs-436067
	I0731 18:07:40.761061   73800 main.go:141] libmachine: (embed-certs-436067) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:1e:25", ip: ""} in network mk-embed-certs-436067: {Iface:virbr4 ExpiryTime:2024-07-31 19:07:32 +0000 UTC Type:0 Mac:52:54:00:87:1e:25 Iaid: IPaddr:192.168.50.86 Prefix:24 Hostname:embed-certs-436067 Clientid:01:52:54:00:87:1e:25}
	I0731 18:07:40.761090   73800 main.go:141] libmachine: (embed-certs-436067) DBG | domain embed-certs-436067 has defined IP address 192.168.50.86 and MAC address 52:54:00:87:1e:25 in network mk-embed-certs-436067
	I0731 18:07:40.761244   73800 provision.go:143] copyHostCerts
	I0731 18:07:40.761299   73800 exec_runner.go:144] found /home/jenkins/minikube-integration/19349-8084/.minikube/ca.pem, removing ...
	I0731 18:07:40.761323   73800 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19349-8084/.minikube/ca.pem
	I0731 18:07:40.761376   73800 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19349-8084/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19349-8084/.minikube/ca.pem (1082 bytes)
	I0731 18:07:40.761479   73800 exec_runner.go:144] found /home/jenkins/minikube-integration/19349-8084/.minikube/cert.pem, removing ...
	I0731 18:07:40.761488   73800 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19349-8084/.minikube/cert.pem
	I0731 18:07:40.761509   73800 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19349-8084/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19349-8084/.minikube/cert.pem (1123 bytes)
	I0731 18:07:40.761562   73800 exec_runner.go:144] found /home/jenkins/minikube-integration/19349-8084/.minikube/key.pem, removing ...
	I0731 18:07:40.761569   73800 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19349-8084/.minikube/key.pem
	I0731 18:07:40.761586   73800 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19349-8084/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19349-8084/.minikube/key.pem (1675 bytes)
	I0731 18:07:40.761635   73800 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19349-8084/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19349-8084/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19349-8084/.minikube/certs/ca-key.pem org=jenkins.embed-certs-436067 san=[127.0.0.1 192.168.50.86 embed-certs-436067 localhost minikube]
	I0731 18:07:40.874612   73800 provision.go:177] copyRemoteCerts
	I0731 18:07:40.874666   73800 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0731 18:07:40.874691   73800 main.go:141] libmachine: (embed-certs-436067) Calling .GetSSHHostname
	I0731 18:07:40.877623   73800 main.go:141] libmachine: (embed-certs-436067) DBG | domain embed-certs-436067 has defined MAC address 52:54:00:87:1e:25 in network mk-embed-certs-436067
	I0731 18:07:40.878044   73800 main.go:141] libmachine: (embed-certs-436067) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:1e:25", ip: ""} in network mk-embed-certs-436067: {Iface:virbr4 ExpiryTime:2024-07-31 19:07:32 +0000 UTC Type:0 Mac:52:54:00:87:1e:25 Iaid: IPaddr:192.168.50.86 Prefix:24 Hostname:embed-certs-436067 Clientid:01:52:54:00:87:1e:25}
	I0731 18:07:40.878075   73800 main.go:141] libmachine: (embed-certs-436067) DBG | domain embed-certs-436067 has defined IP address 192.168.50.86 and MAC address 52:54:00:87:1e:25 in network mk-embed-certs-436067
	I0731 18:07:40.878206   73800 main.go:141] libmachine: (embed-certs-436067) Calling .GetSSHPort
	I0731 18:07:40.878403   73800 main.go:141] libmachine: (embed-certs-436067) Calling .GetSSHKeyPath
	I0731 18:07:40.878556   73800 main.go:141] libmachine: (embed-certs-436067) Calling .GetSSHUsername
	I0731 18:07:40.878706   73800 sshutil.go:53] new ssh client: &{IP:192.168.50.86 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19349-8084/.minikube/machines/embed-certs-436067/id_rsa Username:docker}
	I0731 18:07:40.965720   73800 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19349-8084/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0731 18:07:40.987836   73800 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19349-8084/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0731 18:07:41.012423   73800 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19349-8084/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0731 18:07:41.036366   73800 provision.go:87] duration metric: took 281.054266ms to configureAuth
	I0731 18:07:41.036392   73800 buildroot.go:189] setting minikube options for container-runtime
	I0731 18:07:41.036561   73800 config.go:182] Loaded profile config "embed-certs-436067": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0731 18:07:41.036626   73800 main.go:141] libmachine: (embed-certs-436067) Calling .GetSSHHostname
	I0731 18:07:41.039204   73800 main.go:141] libmachine: (embed-certs-436067) DBG | domain embed-certs-436067 has defined MAC address 52:54:00:87:1e:25 in network mk-embed-certs-436067
	I0731 18:07:41.039587   73800 main.go:141] libmachine: (embed-certs-436067) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:1e:25", ip: ""} in network mk-embed-certs-436067: {Iface:virbr4 ExpiryTime:2024-07-31 19:07:32 +0000 UTC Type:0 Mac:52:54:00:87:1e:25 Iaid: IPaddr:192.168.50.86 Prefix:24 Hostname:embed-certs-436067 Clientid:01:52:54:00:87:1e:25}
	I0731 18:07:41.039615   73800 main.go:141] libmachine: (embed-certs-436067) DBG | domain embed-certs-436067 has defined IP address 192.168.50.86 and MAC address 52:54:00:87:1e:25 in network mk-embed-certs-436067
	I0731 18:07:41.039814   73800 main.go:141] libmachine: (embed-certs-436067) Calling .GetSSHPort
	I0731 18:07:41.040021   73800 main.go:141] libmachine: (embed-certs-436067) Calling .GetSSHKeyPath
	I0731 18:07:41.040162   73800 main.go:141] libmachine: (embed-certs-436067) Calling .GetSSHKeyPath
	I0731 18:07:41.040293   73800 main.go:141] libmachine: (embed-certs-436067) Calling .GetSSHUsername
	I0731 18:07:41.040462   73800 main.go:141] libmachine: Using SSH client type: native
	I0731 18:07:41.040642   73800 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.86 22 <nil> <nil>}
	I0731 18:07:41.040663   73800 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0731 18:07:41.307915   73800 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0731 18:07:41.307945   73800 machine.go:97] duration metric: took 916.161297ms to provisionDockerMachine
	I0731 18:07:41.307958   73800 start.go:293] postStartSetup for "embed-certs-436067" (driver="kvm2")
	I0731 18:07:41.307971   73800 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0731 18:07:41.307990   73800 main.go:141] libmachine: (embed-certs-436067) Calling .DriverName
	I0731 18:07:41.308383   73800 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0731 18:07:41.308409   73800 main.go:141] libmachine: (embed-certs-436067) Calling .GetSSHHostname
	I0731 18:07:41.311172   73800 main.go:141] libmachine: (embed-certs-436067) DBG | domain embed-certs-436067 has defined MAC address 52:54:00:87:1e:25 in network mk-embed-certs-436067
	I0731 18:07:41.311532   73800 main.go:141] libmachine: (embed-certs-436067) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:1e:25", ip: ""} in network mk-embed-certs-436067: {Iface:virbr4 ExpiryTime:2024-07-31 19:07:32 +0000 UTC Type:0 Mac:52:54:00:87:1e:25 Iaid: IPaddr:192.168.50.86 Prefix:24 Hostname:embed-certs-436067 Clientid:01:52:54:00:87:1e:25}
	I0731 18:07:41.311559   73800 main.go:141] libmachine: (embed-certs-436067) DBG | domain embed-certs-436067 has defined IP address 192.168.50.86 and MAC address 52:54:00:87:1e:25 in network mk-embed-certs-436067
	I0731 18:07:41.311712   73800 main.go:141] libmachine: (embed-certs-436067) Calling .GetSSHPort
	I0731 18:07:41.311940   73800 main.go:141] libmachine: (embed-certs-436067) Calling .GetSSHKeyPath
	I0731 18:07:41.312132   73800 main.go:141] libmachine: (embed-certs-436067) Calling .GetSSHUsername
	I0731 18:07:41.312251   73800 sshutil.go:53] new ssh client: &{IP:192.168.50.86 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19349-8084/.minikube/machines/embed-certs-436067/id_rsa Username:docker}
	I0731 18:07:41.397229   73800 ssh_runner.go:195] Run: cat /etc/os-release
	I0731 18:07:41.401356   73800 info.go:137] Remote host: Buildroot 2023.02.9
	I0731 18:07:41.401380   73800 filesync.go:126] Scanning /home/jenkins/minikube-integration/19349-8084/.minikube/addons for local assets ...
	I0731 18:07:41.401458   73800 filesync.go:126] Scanning /home/jenkins/minikube-integration/19349-8084/.minikube/files for local assets ...
	I0731 18:07:41.401571   73800 filesync.go:149] local asset: /home/jenkins/minikube-integration/19349-8084/.minikube/files/etc/ssl/certs/152592.pem -> 152592.pem in /etc/ssl/certs
	I0731 18:07:41.401696   73800 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0731 18:07:41.410540   73800 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19349-8084/.minikube/files/etc/ssl/certs/152592.pem --> /etc/ssl/certs/152592.pem (1708 bytes)
	I0731 18:07:41.434298   73800 start.go:296] duration metric: took 126.324424ms for postStartSetup
	I0731 18:07:41.434342   73800 fix.go:56] duration metric: took 19.306472215s for fixHost
	I0731 18:07:41.434363   73800 main.go:141] libmachine: (embed-certs-436067) Calling .GetSSHHostname
	I0731 18:07:41.437502   73800 main.go:141] libmachine: (embed-certs-436067) DBG | domain embed-certs-436067 has defined MAC address 52:54:00:87:1e:25 in network mk-embed-certs-436067
	I0731 18:07:41.438007   73800 main.go:141] libmachine: (embed-certs-436067) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:1e:25", ip: ""} in network mk-embed-certs-436067: {Iface:virbr4 ExpiryTime:2024-07-31 19:07:32 +0000 UTC Type:0 Mac:52:54:00:87:1e:25 Iaid: IPaddr:192.168.50.86 Prefix:24 Hostname:embed-certs-436067 Clientid:01:52:54:00:87:1e:25}
	I0731 18:07:41.438038   73800 main.go:141] libmachine: (embed-certs-436067) DBG | domain embed-certs-436067 has defined IP address 192.168.50.86 and MAC address 52:54:00:87:1e:25 in network mk-embed-certs-436067
	I0731 18:07:41.438221   73800 main.go:141] libmachine: (embed-certs-436067) Calling .GetSSHPort
	I0731 18:07:41.438435   73800 main.go:141] libmachine: (embed-certs-436067) Calling .GetSSHKeyPath
	I0731 18:07:41.438613   73800 main.go:141] libmachine: (embed-certs-436067) Calling .GetSSHKeyPath
	I0731 18:07:41.438752   73800 main.go:141] libmachine: (embed-certs-436067) Calling .GetSSHUsername
	I0731 18:07:41.438932   73800 main.go:141] libmachine: Using SSH client type: native
	I0731 18:07:41.439086   73800 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.86 22 <nil> <nil>}
	I0731 18:07:41.439095   73800 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0731 18:07:41.551915   73800 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722449261.529568895
	
	I0731 18:07:41.551937   73800 fix.go:216] guest clock: 1722449261.529568895
	I0731 18:07:41.551944   73800 fix.go:229] Guest: 2024-07-31 18:07:41.529568895 +0000 UTC Remote: 2024-07-31 18:07:41.434346377 +0000 UTC m=+286.960766339 (delta=95.222518ms)
	I0731 18:07:41.551999   73800 fix.go:200] guest clock delta is within tolerance: 95.222518ms
	I0731 18:07:41.552010   73800 start.go:83] releasing machines lock for "embed-certs-436067", held for 19.42417291s
	I0731 18:07:41.552036   73800 main.go:141] libmachine: (embed-certs-436067) Calling .DriverName
	I0731 18:07:41.552377   73800 main.go:141] libmachine: (embed-certs-436067) Calling .GetIP
	I0731 18:07:41.554945   73800 main.go:141] libmachine: (embed-certs-436067) DBG | domain embed-certs-436067 has defined MAC address 52:54:00:87:1e:25 in network mk-embed-certs-436067
	I0731 18:07:41.555385   73800 main.go:141] libmachine: (embed-certs-436067) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:1e:25", ip: ""} in network mk-embed-certs-436067: {Iface:virbr4 ExpiryTime:2024-07-31 19:07:32 +0000 UTC Type:0 Mac:52:54:00:87:1e:25 Iaid: IPaddr:192.168.50.86 Prefix:24 Hostname:embed-certs-436067 Clientid:01:52:54:00:87:1e:25}
	I0731 18:07:41.555415   73800 main.go:141] libmachine: (embed-certs-436067) DBG | domain embed-certs-436067 has defined IP address 192.168.50.86 and MAC address 52:54:00:87:1e:25 in network mk-embed-certs-436067
	I0731 18:07:41.555583   73800 main.go:141] libmachine: (embed-certs-436067) Calling .DriverName
	I0731 18:07:41.556139   73800 main.go:141] libmachine: (embed-certs-436067) Calling .DriverName
	I0731 18:07:41.556362   73800 main.go:141] libmachine: (embed-certs-436067) Calling .DriverName
	I0731 18:07:41.556448   73800 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0731 18:07:41.556507   73800 main.go:141] libmachine: (embed-certs-436067) Calling .GetSSHHostname
	I0731 18:07:41.556619   73800 ssh_runner.go:195] Run: cat /version.json
	I0731 18:07:41.556634   73800 main.go:141] libmachine: (embed-certs-436067) Calling .GetSSHHostname
	I0731 18:07:41.559700   73800 main.go:141] libmachine: (embed-certs-436067) DBG | domain embed-certs-436067 has defined MAC address 52:54:00:87:1e:25 in network mk-embed-certs-436067
	I0731 18:07:41.559847   73800 main.go:141] libmachine: (embed-certs-436067) DBG | domain embed-certs-436067 has defined MAC address 52:54:00:87:1e:25 in network mk-embed-certs-436067
	I0731 18:07:41.560160   73800 main.go:141] libmachine: (embed-certs-436067) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:1e:25", ip: ""} in network mk-embed-certs-436067: {Iface:virbr4 ExpiryTime:2024-07-31 19:07:32 +0000 UTC Type:0 Mac:52:54:00:87:1e:25 Iaid: IPaddr:192.168.50.86 Prefix:24 Hostname:embed-certs-436067 Clientid:01:52:54:00:87:1e:25}
	I0731 18:07:41.560227   73800 main.go:141] libmachine: (embed-certs-436067) DBG | domain embed-certs-436067 has defined IP address 192.168.50.86 and MAC address 52:54:00:87:1e:25 in network mk-embed-certs-436067
	I0731 18:07:41.560277   73800 main.go:141] libmachine: (embed-certs-436067) Calling .GetSSHPort
	I0731 18:07:41.560374   73800 main.go:141] libmachine: (embed-certs-436067) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:1e:25", ip: ""} in network mk-embed-certs-436067: {Iface:virbr4 ExpiryTime:2024-07-31 19:07:32 +0000 UTC Type:0 Mac:52:54:00:87:1e:25 Iaid: IPaddr:192.168.50.86 Prefix:24 Hostname:embed-certs-436067 Clientid:01:52:54:00:87:1e:25}
	I0731 18:07:41.560440   73800 main.go:141] libmachine: (embed-certs-436067) DBG | domain embed-certs-436067 has defined IP address 192.168.50.86 and MAC address 52:54:00:87:1e:25 in network mk-embed-certs-436067
	I0731 18:07:41.560582   73800 main.go:141] libmachine: (embed-certs-436067) Calling .GetSSHKeyPath
	I0731 18:07:41.560652   73800 main.go:141] libmachine: (embed-certs-436067) Calling .GetSSHPort
	I0731 18:07:41.560697   73800 main.go:141] libmachine: (embed-certs-436067) Calling .GetSSHUsername
	I0731 18:07:41.560745   73800 main.go:141] libmachine: (embed-certs-436067) Calling .GetSSHKeyPath
	I0731 18:07:41.560833   73800 sshutil.go:53] new ssh client: &{IP:192.168.50.86 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19349-8084/.minikube/machines/embed-certs-436067/id_rsa Username:docker}
	I0731 18:07:41.560909   73800 main.go:141] libmachine: (embed-certs-436067) Calling .GetSSHUsername
	I0731 18:07:41.561060   73800 sshutil.go:53] new ssh client: &{IP:192.168.50.86 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19349-8084/.minikube/machines/embed-certs-436067/id_rsa Username:docker}
	I0731 18:07:41.640796   73800 ssh_runner.go:195] Run: systemctl --version
	I0731 18:07:41.671461   73800 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0731 18:07:41.820881   73800 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0731 18:07:41.826610   73800 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0731 18:07:41.826673   73800 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0731 18:07:41.841766   73800 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0731 18:07:41.841789   73800 start.go:495] detecting cgroup driver to use...
	I0731 18:07:41.841872   73800 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0731 18:07:41.858636   73800 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0731 18:07:41.873090   73800 docker.go:217] disabling cri-docker service (if available) ...
	I0731 18:07:41.873152   73800 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0731 18:07:41.890967   73800 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0731 18:07:41.907886   73800 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0731 18:07:42.022724   73800 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0731 18:07:42.173885   73800 docker.go:233] disabling docker service ...
	I0731 18:07:42.173969   73800 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0731 18:07:42.190959   73800 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0731 18:07:42.205274   73800 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0731 18:07:42.358130   73800 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0731 18:07:42.497981   73800 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0731 18:07:42.513774   73800 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0731 18:07:42.532713   73800 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0731 18:07:42.532808   73800 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 18:07:42.544367   73800 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0731 18:07:42.544427   73800 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 18:07:42.556427   73800 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 18:07:42.566399   73800 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 18:07:42.576633   73800 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0731 18:07:42.588508   73800 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 18:07:42.600011   73800 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 18:07:42.618858   73800 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 18:07:42.630437   73800 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0731 18:07:42.641459   73800 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0731 18:07:42.641528   73800 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0731 18:07:42.655000   73800 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0731 18:07:42.664912   73800 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0731 18:07:42.791781   73800 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0731 18:07:42.936709   73800 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0731 18:07:42.936778   73800 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0731 18:07:42.941132   73800 start.go:563] Will wait 60s for crictl version
	I0731 18:07:42.941189   73800 ssh_runner.go:195] Run: which crictl
	I0731 18:07:42.944870   73800 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0731 18:07:42.983069   73800 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0731 18:07:42.983181   73800 ssh_runner.go:195] Run: crio --version
	I0731 18:07:43.011636   73800 ssh_runner.go:195] Run: crio --version
	I0731 18:07:43.043295   73800 out.go:177] * Preparing Kubernetes v1.30.3 on CRI-O 1.29.1 ...
	I0731 18:07:43.044545   73800 main.go:141] libmachine: (embed-certs-436067) Calling .GetIP
	I0731 18:07:43.047635   73800 main.go:141] libmachine: (embed-certs-436067) DBG | domain embed-certs-436067 has defined MAC address 52:54:00:87:1e:25 in network mk-embed-certs-436067
	I0731 18:07:43.048049   73800 main.go:141] libmachine: (embed-certs-436067) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:1e:25", ip: ""} in network mk-embed-certs-436067: {Iface:virbr4 ExpiryTime:2024-07-31 19:07:32 +0000 UTC Type:0 Mac:52:54:00:87:1e:25 Iaid: IPaddr:192.168.50.86 Prefix:24 Hostname:embed-certs-436067 Clientid:01:52:54:00:87:1e:25}
	I0731 18:07:43.048080   73800 main.go:141] libmachine: (embed-certs-436067) DBG | domain embed-certs-436067 has defined IP address 192.168.50.86 and MAC address 52:54:00:87:1e:25 in network mk-embed-certs-436067
	I0731 18:07:43.048330   73800 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0731 18:07:43.052269   73800 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0731 18:07:43.064116   73800 kubeadm.go:883] updating cluster {Name:embed-certs-436067 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:embed-certs-436067 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.86 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0731 18:07:43.064283   73800 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0731 18:07:43.064361   73800 ssh_runner.go:195] Run: sudo crictl images --output json
	I0731 18:07:43.100437   73800 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.3". assuming images are not preloaded.
	I0731 18:07:43.100516   73800 ssh_runner.go:195] Run: which lz4
	I0731 18:07:43.104627   73800 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0731 18:07:43.108552   73800 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0731 18:07:43.108586   73800 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19349-8084/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (406200976 bytes)
	I0731 18:07:44.368238   73800 crio.go:462] duration metric: took 1.263636259s to copy over tarball
	I0731 18:07:44.368322   73800 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0731 18:07:41.576648   74203 main.go:141] libmachine: (old-k8s-version-276459) Calling .Start
	I0731 18:07:41.576823   74203 main.go:141] libmachine: (old-k8s-version-276459) Ensuring networks are active...
	I0731 18:07:41.577511   74203 main.go:141] libmachine: (old-k8s-version-276459) Ensuring network default is active
	I0731 18:07:41.578015   74203 main.go:141] libmachine: (old-k8s-version-276459) Ensuring network mk-old-k8s-version-276459 is active
	I0731 18:07:41.578469   74203 main.go:141] libmachine: (old-k8s-version-276459) Getting domain xml...
	I0731 18:07:41.579474   74203 main.go:141] libmachine: (old-k8s-version-276459) Creating domain...
	I0731 18:07:42.876409   74203 main.go:141] libmachine: (old-k8s-version-276459) Waiting to get IP...
	I0731 18:07:42.877345   74203 main.go:141] libmachine: (old-k8s-version-276459) DBG | domain old-k8s-version-276459 has defined MAC address 52:54:00:79:9d:96 in network mk-old-k8s-version-276459
	I0731 18:07:42.877788   74203 main.go:141] libmachine: (old-k8s-version-276459) DBG | unable to find current IP address of domain old-k8s-version-276459 in network mk-old-k8s-version-276459
	I0731 18:07:42.877841   74203 main.go:141] libmachine: (old-k8s-version-276459) DBG | I0731 18:07:42.877763   75164 retry.go:31] will retry after 218.764988ms: waiting for machine to come up
	I0731 18:07:43.098230   74203 main.go:141] libmachine: (old-k8s-version-276459) DBG | domain old-k8s-version-276459 has defined MAC address 52:54:00:79:9d:96 in network mk-old-k8s-version-276459
	I0731 18:07:43.098697   74203 main.go:141] libmachine: (old-k8s-version-276459) DBG | unable to find current IP address of domain old-k8s-version-276459 in network mk-old-k8s-version-276459
	I0731 18:07:43.098722   74203 main.go:141] libmachine: (old-k8s-version-276459) DBG | I0731 18:07:43.098650   75164 retry.go:31] will retry after 285.579707ms: waiting for machine to come up
	I0731 18:07:43.386356   74203 main.go:141] libmachine: (old-k8s-version-276459) DBG | domain old-k8s-version-276459 has defined MAC address 52:54:00:79:9d:96 in network mk-old-k8s-version-276459
	I0731 18:07:43.386897   74203 main.go:141] libmachine: (old-k8s-version-276459) DBG | unable to find current IP address of domain old-k8s-version-276459 in network mk-old-k8s-version-276459
	I0731 18:07:43.386928   74203 main.go:141] libmachine: (old-k8s-version-276459) DBG | I0731 18:07:43.386852   75164 retry.go:31] will retry after 389.197253ms: waiting for machine to come up
	I0731 18:07:43.778183   74203 main.go:141] libmachine: (old-k8s-version-276459) DBG | domain old-k8s-version-276459 has defined MAC address 52:54:00:79:9d:96 in network mk-old-k8s-version-276459
	I0731 18:07:43.778672   74203 main.go:141] libmachine: (old-k8s-version-276459) DBG | unable to find current IP address of domain old-k8s-version-276459 in network mk-old-k8s-version-276459
	I0731 18:07:43.778698   74203 main.go:141] libmachine: (old-k8s-version-276459) DBG | I0731 18:07:43.778622   75164 retry.go:31] will retry after 484.5108ms: waiting for machine to come up
	I0731 18:07:44.264412   74203 main.go:141] libmachine: (old-k8s-version-276459) DBG | domain old-k8s-version-276459 has defined MAC address 52:54:00:79:9d:96 in network mk-old-k8s-version-276459
	I0731 18:07:44.265042   74203 main.go:141] libmachine: (old-k8s-version-276459) DBG | unable to find current IP address of domain old-k8s-version-276459 in network mk-old-k8s-version-276459
	I0731 18:07:44.265073   74203 main.go:141] libmachine: (old-k8s-version-276459) DBG | I0731 18:07:44.264955   75164 retry.go:31] will retry after 621.551625ms: waiting for machine to come up
	I0731 18:07:44.887986   74203 main.go:141] libmachine: (old-k8s-version-276459) DBG | domain old-k8s-version-276459 has defined MAC address 52:54:00:79:9d:96 in network mk-old-k8s-version-276459
	I0731 18:07:44.888534   74203 main.go:141] libmachine: (old-k8s-version-276459) DBG | unable to find current IP address of domain old-k8s-version-276459 in network mk-old-k8s-version-276459
	I0731 18:07:44.888563   74203 main.go:141] libmachine: (old-k8s-version-276459) DBG | I0731 18:07:44.888489   75164 retry.go:31] will retry after 610.567971ms: waiting for machine to come up
	I0731 18:07:42.773583   73696 pod_ready.go:102] pod "kube-scheduler-default-k8s-diff-port-094310" in "kube-system" namespace has status "Ready":"False"
	I0731 18:07:44.272853   73696 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-094310" in "kube-system" namespace has status "Ready":"True"
	I0731 18:07:44.272874   73696 pod_ready.go:81] duration metric: took 7.507462023s for pod "kube-scheduler-default-k8s-diff-port-094310" in "kube-system" namespace to be "Ready" ...
	I0731 18:07:44.272886   73696 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-569cc877fc-64hp4" in "kube-system" namespace to be "Ready" ...
	I0731 18:07:46.689701   73800 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.321340678s)
	I0731 18:07:46.689730   73800 crio.go:469] duration metric: took 2.321463484s to extract the tarball
	I0731 18:07:46.689738   73800 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0731 18:07:46.749205   73800 ssh_runner.go:195] Run: sudo crictl images --output json
	I0731 18:07:46.805950   73800 crio.go:514] all images are preloaded for cri-o runtime.
	I0731 18:07:46.805979   73800 cache_images.go:84] Images are preloaded, skipping loading
	I0731 18:07:46.805990   73800 kubeadm.go:934] updating node { 192.168.50.86 8443 v1.30.3 crio true true} ...
	I0731 18:07:46.806135   73800 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-436067 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.86
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:embed-certs-436067 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0731 18:07:46.806233   73800 ssh_runner.go:195] Run: crio config
	I0731 18:07:46.865815   73800 cni.go:84] Creating CNI manager for ""
	I0731 18:07:46.865838   73800 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0731 18:07:46.865852   73800 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0731 18:07:46.865873   73800 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.86 APIServerPort:8443 KubernetesVersion:v1.30.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-436067 NodeName:embed-certs-436067 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.86"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.86 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0731 18:07:46.866048   73800 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.86
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-436067"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.86
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.86"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0731 18:07:46.866121   73800 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0731 18:07:46.875722   73800 binaries.go:44] Found k8s binaries, skipping transfer
	I0731 18:07:46.875786   73800 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0731 18:07:46.885107   73800 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I0731 18:07:46.903868   73800 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0731 18:07:46.919585   73800 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2159 bytes)
	I0731 18:07:46.939034   73800 ssh_runner.go:195] Run: grep 192.168.50.86	control-plane.minikube.internal$ /etc/hosts
	I0731 18:07:46.943460   73800 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.86	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0731 18:07:46.957699   73800 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0731 18:07:47.065714   73800 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0731 18:07:47.080655   73800 certs.go:68] Setting up /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/embed-certs-436067 for IP: 192.168.50.86
	I0731 18:07:47.080681   73800 certs.go:194] generating shared ca certs ...
	I0731 18:07:47.080717   73800 certs.go:226] acquiring lock for ca certs: {Name:mk49eed439085f9c30746706460ce213570d1997 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 18:07:47.080879   73800 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19349-8084/.minikube/ca.key
	I0731 18:07:47.080938   73800 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19349-8084/.minikube/proxy-client-ca.key
	I0731 18:07:47.080950   73800 certs.go:256] generating profile certs ...
	I0731 18:07:47.081046   73800 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/embed-certs-436067/client.key
	I0731 18:07:47.081113   73800 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/embed-certs-436067/apiserver.key.7b8160da
	I0731 18:07:47.081168   73800 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/embed-certs-436067/proxy-client.key
	I0731 18:07:47.081312   73800 certs.go:484] found cert: /home/jenkins/minikube-integration/19349-8084/.minikube/certs/15259.pem (1338 bytes)
	W0731 18:07:47.081367   73800 certs.go:480] ignoring /home/jenkins/minikube-integration/19349-8084/.minikube/certs/15259_empty.pem, impossibly tiny 0 bytes
	I0731 18:07:47.081380   73800 certs.go:484] found cert: /home/jenkins/minikube-integration/19349-8084/.minikube/certs/ca-key.pem (1675 bytes)
	I0731 18:07:47.081413   73800 certs.go:484] found cert: /home/jenkins/minikube-integration/19349-8084/.minikube/certs/ca.pem (1082 bytes)
	I0731 18:07:47.081438   73800 certs.go:484] found cert: /home/jenkins/minikube-integration/19349-8084/.minikube/certs/cert.pem (1123 bytes)
	I0731 18:07:47.081468   73800 certs.go:484] found cert: /home/jenkins/minikube-integration/19349-8084/.minikube/certs/key.pem (1675 bytes)
	I0731 18:07:47.081508   73800 certs.go:484] found cert: /home/jenkins/minikube-integration/19349-8084/.minikube/files/etc/ssl/certs/152592.pem (1708 bytes)
	I0731 18:07:47.082355   73800 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19349-8084/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0731 18:07:47.130037   73800 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19349-8084/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0731 18:07:47.171218   73800 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19349-8084/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0731 18:07:47.215745   73800 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19349-8084/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0731 18:07:47.244883   73800 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/embed-certs-436067/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I0731 18:07:47.270032   73800 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/embed-certs-436067/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0731 18:07:47.294900   73800 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/embed-certs-436067/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0731 18:07:47.317285   73800 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/embed-certs-436067/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0731 18:07:47.343000   73800 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19349-8084/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0731 18:07:47.369906   73800 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19349-8084/.minikube/certs/15259.pem --> /usr/share/ca-certificates/15259.pem (1338 bytes)
	I0731 18:07:47.392022   73800 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19349-8084/.minikube/files/etc/ssl/certs/152592.pem --> /usr/share/ca-certificates/152592.pem (1708 bytes)
	I0731 18:07:47.414219   73800 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0731 18:07:47.431931   73800 ssh_runner.go:195] Run: openssl version
	I0731 18:07:47.437602   73800 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0731 18:07:47.447585   73800 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0731 18:07:47.451779   73800 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 31 16:42 /usr/share/ca-certificates/minikubeCA.pem
	I0731 18:07:47.451833   73800 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0731 18:07:47.457309   73800 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0731 18:07:47.466917   73800 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/15259.pem && ln -fs /usr/share/ca-certificates/15259.pem /etc/ssl/certs/15259.pem"
	I0731 18:07:47.476211   73800 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/15259.pem
	I0731 18:07:47.480149   73800 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 31 16:54 /usr/share/ca-certificates/15259.pem
	I0731 18:07:47.480215   73800 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/15259.pem
	I0731 18:07:47.485412   73800 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/15259.pem /etc/ssl/certs/51391683.0"
	I0731 18:07:47.494852   73800 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/152592.pem && ln -fs /usr/share/ca-certificates/152592.pem /etc/ssl/certs/152592.pem"
	I0731 18:07:47.504407   73800 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/152592.pem
	I0731 18:07:47.509594   73800 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 31 16:54 /usr/share/ca-certificates/152592.pem
	I0731 18:07:47.509658   73800 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/152592.pem
	I0731 18:07:47.515728   73800 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/152592.pem /etc/ssl/certs/3ec20f2e.0"
	I0731 18:07:47.525660   73800 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0731 18:07:47.529953   73800 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0731 18:07:47.535576   73800 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0731 18:07:47.541158   73800 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0731 18:07:47.546633   73800 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0731 18:07:47.551827   73800 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0731 18:07:47.557100   73800 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
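
The repeated `openssl x509 ... -checkend 86400` runs above test whether each control-plane certificate stays valid for at least another 24 hours before the restart proceeds. A rough Go equivalent of one such check is sketched below; the certificate path is taken from the log, everything else is illustrative.

// certcheck.go - sketch of the "-checkend 86400" test: report whether a
// PEM-encoded certificate expires within the next 24 hours.
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

func expiresWithin(path string, d time.Duration) (bool, error) {
	raw, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(raw)
	if block == nil {
		return false, fmt.Errorf("no PEM data in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	// Expired (or expiring) if "now + d" is past the certificate's NotAfter.
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		panic(err)
	}
	if soon {
		fmt.Println("certificate expires within 24h; regeneration needed")
	} else {
		fmt.Println("certificate valid for at least another 24h")
	}
}
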
	I0731 18:07:47.562447   73800 kubeadm.go:392] StartCluster: {Name:embed-certs-436067 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:embed-certs-436067 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.86 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0731 18:07:47.562551   73800 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0731 18:07:47.562616   73800 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0731 18:07:47.610318   73800 cri.go:89] found id: ""
	I0731 18:07:47.610382   73800 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0731 18:07:47.623036   73800 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0731 18:07:47.623053   73800 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0731 18:07:47.623101   73800 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0731 18:07:47.631709   73800 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0731 18:07:47.632699   73800 kubeconfig.go:125] found "embed-certs-436067" server: "https://192.168.50.86:8443"
	I0731 18:07:47.634724   73800 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0731 18:07:47.643183   73800 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.50.86
	I0731 18:07:47.643207   73800 kubeadm.go:1160] stopping kube-system containers ...
	I0731 18:07:47.643218   73800 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0731 18:07:47.643264   73800 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0731 18:07:47.677438   73800 cri.go:89] found id: ""
	I0731 18:07:47.677527   73800 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0731 18:07:47.693427   73800 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0731 18:07:47.702889   73800 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0731 18:07:47.702907   73800 kubeadm.go:157] found existing configuration files:
	
	I0731 18:07:47.702956   73800 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0731 18:07:47.713958   73800 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0731 18:07:47.714017   73800 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0731 18:07:47.723931   73800 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0731 18:07:47.732615   73800 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0731 18:07:47.732673   73800 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0731 18:07:47.741168   73800 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0731 18:07:47.749164   73800 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0731 18:07:47.749217   73800 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0731 18:07:47.757691   73800 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0731 18:07:47.765479   73800 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0731 18:07:47.765530   73800 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0731 18:07:47.774002   73800 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0731 18:07:47.783757   73800 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0731 18:07:47.890835   73800 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0731 18:07:48.951421   73800 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.060547503s)
	I0731 18:07:48.951466   73800 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0731 18:07:49.152745   73800 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0731 18:07:49.224334   73800 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
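
On this restart path the control plane is rebuilt by replaying individual `kubeadm init phase` subcommands against the staged config rather than running a full `kubeadm init`. The sketch below replays the same sequence with os/exec; the binary and config paths are copied from the log, and this is only an illustration, not minikube's implementation.

// phases.go - sketch: run the kubeadm init phases seen above in order,
// stopping at the first failure. Paths mirror the log.
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	phases := [][]string{
		{"init", "phase", "certs", "all"},
		{"init", "phase", "kubeconfig", "all"},
		{"init", "phase", "kubelet-start"},
		{"init", "phase", "control-plane", "all"},
		{"init", "phase", "etcd", "local"},
	}
	for _, p := range phases {
		args := append(p, "--config", "/var/tmp/minikube/kubeadm.yaml")
		cmd := exec.Command("/var/lib/minikube/binaries/v1.30.3/kubeadm", args...)
		out, err := cmd.CombinedOutput()
		if err != nil {
			fmt.Printf("phase %v failed: %v\n%s", p, err, out)
			return
		}
	}
	fmt.Println("all init phases completed")
}
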
	I0731 18:07:49.341066   73800 api_server.go:52] waiting for apiserver process to appear ...
	I0731 18:07:49.341147   73800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:07:45.500400   74203 main.go:141] libmachine: (old-k8s-version-276459) DBG | domain old-k8s-version-276459 has defined MAC address 52:54:00:79:9d:96 in network mk-old-k8s-version-276459
	I0731 18:07:45.500938   74203 main.go:141] libmachine: (old-k8s-version-276459) DBG | unable to find current IP address of domain old-k8s-version-276459 in network mk-old-k8s-version-276459
	I0731 18:07:45.500966   74203 main.go:141] libmachine: (old-k8s-version-276459) DBG | I0731 18:07:45.500890   75164 retry.go:31] will retry after 1.069889786s: waiting for machine to come up
	I0731 18:07:46.572634   74203 main.go:141] libmachine: (old-k8s-version-276459) DBG | domain old-k8s-version-276459 has defined MAC address 52:54:00:79:9d:96 in network mk-old-k8s-version-276459
	I0731 18:07:46.573085   74203 main.go:141] libmachine: (old-k8s-version-276459) DBG | unable to find current IP address of domain old-k8s-version-276459 in network mk-old-k8s-version-276459
	I0731 18:07:46.573128   74203 main.go:141] libmachine: (old-k8s-version-276459) DBG | I0731 18:07:46.572979   75164 retry.go:31] will retry after 1.047722466s: waiting for machine to come up
	I0731 18:07:47.622035   74203 main.go:141] libmachine: (old-k8s-version-276459) DBG | domain old-k8s-version-276459 has defined MAC address 52:54:00:79:9d:96 in network mk-old-k8s-version-276459
	I0731 18:07:47.622479   74203 main.go:141] libmachine: (old-k8s-version-276459) DBG | unable to find current IP address of domain old-k8s-version-276459 in network mk-old-k8s-version-276459
	I0731 18:07:47.622507   74203 main.go:141] libmachine: (old-k8s-version-276459) DBG | I0731 18:07:47.622435   75164 retry.go:31] will retry after 1.292658555s: waiting for machine to come up
	I0731 18:07:48.916255   74203 main.go:141] libmachine: (old-k8s-version-276459) DBG | domain old-k8s-version-276459 has defined MAC address 52:54:00:79:9d:96 in network mk-old-k8s-version-276459
	I0731 18:07:48.916755   74203 main.go:141] libmachine: (old-k8s-version-276459) DBG | unable to find current IP address of domain old-k8s-version-276459 in network mk-old-k8s-version-276459
	I0731 18:07:48.916778   74203 main.go:141] libmachine: (old-k8s-version-276459) DBG | I0731 18:07:48.916701   75164 retry.go:31] will retry after 2.006539925s: waiting for machine to come up
	I0731 18:07:46.281654   73696 pod_ready.go:102] pod "metrics-server-569cc877fc-64hp4" in "kube-system" namespace has status "Ready":"False"
	I0731 18:07:49.189881   73696 pod_ready.go:102] pod "metrics-server-569cc877fc-64hp4" in "kube-system" namespace has status "Ready":"False"
	I0731 18:07:49.841397   73800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:07:50.341264   73800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:07:50.409398   73800 api_server.go:72] duration metric: took 1.068329172s to wait for apiserver process to appear ...
	I0731 18:07:50.409432   73800 api_server.go:88] waiting for apiserver healthz status ...
	I0731 18:07:50.409457   73800 api_server.go:253] Checking apiserver healthz at https://192.168.50.86:8443/healthz ...
	I0731 18:07:50.410135   73800 api_server.go:269] stopped: https://192.168.50.86:8443/healthz: Get "https://192.168.50.86:8443/healthz": dial tcp 192.168.50.86:8443: connect: connection refused
	I0731 18:07:50.909802   73800 api_server.go:253] Checking apiserver healthz at https://192.168.50.86:8443/healthz ...
	I0731 18:07:52.636930   73800 api_server.go:279] https://192.168.50.86:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0731 18:07:52.636972   73800 api_server.go:103] status: https://192.168.50.86:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0731 18:07:52.636989   73800 api_server.go:253] Checking apiserver healthz at https://192.168.50.86:8443/healthz ...
	I0731 18:07:52.666947   73800 api_server.go:279] https://192.168.50.86:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0731 18:07:52.666980   73800 api_server.go:103] status: https://192.168.50.86:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0731 18:07:52.910391   73800 api_server.go:253] Checking apiserver healthz at https://192.168.50.86:8443/healthz ...
	I0731 18:07:52.916305   73800 api_server.go:279] https://192.168.50.86:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0731 18:07:52.916342   73800 api_server.go:103] status: https://192.168.50.86:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0731 18:07:53.409623   73800 api_server.go:253] Checking apiserver healthz at https://192.168.50.86:8443/healthz ...
	I0731 18:07:53.419159   73800 api_server.go:279] https://192.168.50.86:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0731 18:07:53.419205   73800 api_server.go:103] status: https://192.168.50.86:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0731 18:07:53.909654   73800 api_server.go:253] Checking apiserver healthz at https://192.168.50.86:8443/healthz ...
	I0731 18:07:53.913518   73800 api_server.go:279] https://192.168.50.86:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0731 18:07:53.913541   73800 api_server.go:103] status: https://192.168.50.86:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0731 18:07:54.409879   73800 api_server.go:253] Checking apiserver healthz at https://192.168.50.86:8443/healthz ...
	I0731 18:07:54.413948   73800 api_server.go:279] https://192.168.50.86:8443/healthz returned 200:
	ok
	I0731 18:07:54.422414   73800 api_server.go:141] control plane version: v1.30.3
	I0731 18:07:54.422444   73800 api_server.go:131] duration metric: took 4.013003689s to wait for apiserver health ...
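
The 403 (anonymous user) and 500 (post-start hooks such as rbac/bootstrap-roles still pending) responses above are expected while the restarted apiserver finishes initialising; the wait loop simply retries /healthz until it returns 200. Below is a minimal polling sketch that skips TLS verification since only the status code matters for this kind of probe; the endpoint is copied from the log and the timeout is illustrative.

// healthwait.go - sketch: poll the apiserver /healthz until it returns 200
// or the deadline passes.
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(4 * time.Minute)
	for time.Now().Before(deadline) {
		resp, err := client.Get("https://192.168.50.86:8443/healthz")
		if err == nil {
			code := resp.StatusCode
			resp.Body.Close()
			if code == http.StatusOK {
				fmt.Println("apiserver healthy")
				return
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
	fmt.Println("timed out waiting for apiserver healthz")
}
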
	I0731 18:07:54.422458   73800 cni.go:84] Creating CNI manager for ""
	I0731 18:07:54.422467   73800 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0731 18:07:54.424680   73800 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0731 18:07:54.425887   73800 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0731 18:07:54.436394   73800 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0731 18:07:54.454533   73800 system_pods.go:43] waiting for kube-system pods to appear ...
	I0731 18:07:54.464268   73800 system_pods.go:59] 8 kube-system pods found
	I0731 18:07:54.464304   73800 system_pods.go:61] "coredns-7db6d8ff4d-h6ckp" [84faf557-0c8d-4026-b620-37265e017ed0] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0731 18:07:54.464315   73800 system_pods.go:61] "etcd-embed-certs-436067" [787466df-6e3f-4209-a996-037875d63dc8] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0731 18:07:54.464326   73800 system_pods.go:61] "kube-apiserver-embed-certs-436067" [6366e38e-21f3-41a4-af7a-433953b70eaa] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0731 18:07:54.464335   73800 system_pods.go:61] "kube-controller-manager-embed-certs-436067" [a97f6a49-40cf-433a-8196-c433e3cda8e8] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0731 18:07:54.464341   73800 system_pods.go:61] "kube-proxy-tl9pj" [0124eb62-5c00-4f75-a73f-c3e92ddc4a42] Running
	I0731 18:07:54.464354   73800 system_pods.go:61] "kube-scheduler-embed-certs-436067" [afbb9117-f229-44ea-8939-d28c4a402c6b] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0731 18:07:54.464366   73800 system_pods.go:61] "metrics-server-569cc877fc-fzxrw" [2ecdab2a-8ce8-4771-bd94-4e24dee34386] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0731 18:07:54.464374   73800 system_pods.go:61] "storage-provisioner" [29b17f6d-f9e4-4272-b6da-368431264701] Running
	I0731 18:07:54.464382   73800 system_pods.go:74] duration metric: took 9.82125ms to wait for pod list to return data ...
	I0731 18:07:54.464395   73800 node_conditions.go:102] verifying NodePressure condition ...
	I0731 18:07:54.467718   73800 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0731 18:07:54.467748   73800 node_conditions.go:123] node cpu capacity is 2
	I0731 18:07:54.467761   73800 node_conditions.go:105] duration metric: took 3.3602ms to run NodePressure ...
	I0731 18:07:54.467779   73800 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0731 18:07:50.925369   74203 main.go:141] libmachine: (old-k8s-version-276459) DBG | domain old-k8s-version-276459 has defined MAC address 52:54:00:79:9d:96 in network mk-old-k8s-version-276459
	I0731 18:07:50.925835   74203 main.go:141] libmachine: (old-k8s-version-276459) DBG | unable to find current IP address of domain old-k8s-version-276459 in network mk-old-k8s-version-276459
	I0731 18:07:50.925856   74203 main.go:141] libmachine: (old-k8s-version-276459) DBG | I0731 18:07:50.925790   75164 retry.go:31] will retry after 2.875577792s: waiting for machine to come up
	I0731 18:07:53.802729   74203 main.go:141] libmachine: (old-k8s-version-276459) DBG | domain old-k8s-version-276459 has defined MAC address 52:54:00:79:9d:96 in network mk-old-k8s-version-276459
	I0731 18:07:53.803164   74203 main.go:141] libmachine: (old-k8s-version-276459) DBG | unable to find current IP address of domain old-k8s-version-276459 in network mk-old-k8s-version-276459
	I0731 18:07:53.803192   74203 main.go:141] libmachine: (old-k8s-version-276459) DBG | I0731 18:07:53.803122   75164 retry.go:31] will retry after 2.352020729s: waiting for machine to come up
	I0731 18:07:51.279883   73696 pod_ready.go:102] pod "metrics-server-569cc877fc-64hp4" in "kube-system" namespace has status "Ready":"False"
	I0731 18:07:53.279992   73696 pod_ready.go:102] pod "metrics-server-569cc877fc-64hp4" in "kube-system" namespace has status "Ready":"False"
	I0731 18:07:55.778812   73696 pod_ready.go:102] pod "metrics-server-569cc877fc-64hp4" in "kube-system" namespace has status "Ready":"False"
	I0731 18:07:54.732921   73800 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0731 18:07:54.736779   73800 kubeadm.go:739] kubelet initialised
	I0731 18:07:54.736798   73800 kubeadm.go:740] duration metric: took 3.850446ms waiting for restarted kubelet to initialise ...
	I0731 18:07:54.736809   73800 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0731 18:07:54.741733   73800 pod_ready.go:78] waiting up to 4m0s for pod "coredns-7db6d8ff4d-h6ckp" in "kube-system" namespace to be "Ready" ...
	I0731 18:07:54.745722   73800 pod_ready.go:97] node "embed-certs-436067" hosting pod "coredns-7db6d8ff4d-h6ckp" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-436067" has status "Ready":"False"
	I0731 18:07:54.745742   73800 pod_ready.go:81] duration metric: took 3.986968ms for pod "coredns-7db6d8ff4d-h6ckp" in "kube-system" namespace to be "Ready" ...
	E0731 18:07:54.745751   73800 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-436067" hosting pod "coredns-7db6d8ff4d-h6ckp" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-436067" has status "Ready":"False"
	I0731 18:07:54.745757   73800 pod_ready.go:78] waiting up to 4m0s for pod "etcd-embed-certs-436067" in "kube-system" namespace to be "Ready" ...
	I0731 18:07:54.749650   73800 pod_ready.go:97] node "embed-certs-436067" hosting pod "etcd-embed-certs-436067" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-436067" has status "Ready":"False"
	I0731 18:07:54.749666   73800 pod_ready.go:81] duration metric: took 3.895483ms for pod "etcd-embed-certs-436067" in "kube-system" namespace to be "Ready" ...
	E0731 18:07:54.749673   73800 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-436067" hosting pod "etcd-embed-certs-436067" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-436067" has status "Ready":"False"
	I0731 18:07:54.749679   73800 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-embed-certs-436067" in "kube-system" namespace to be "Ready" ...
	I0731 18:07:54.753326   73800 pod_ready.go:97] node "embed-certs-436067" hosting pod "kube-apiserver-embed-certs-436067" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-436067" has status "Ready":"False"
	I0731 18:07:54.753351   73800 pod_ready.go:81] duration metric: took 3.66496ms for pod "kube-apiserver-embed-certs-436067" in "kube-system" namespace to be "Ready" ...
	E0731 18:07:54.753362   73800 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-436067" hosting pod "kube-apiserver-embed-certs-436067" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-436067" has status "Ready":"False"
	I0731 18:07:54.753370   73800 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-436067" in "kube-system" namespace to be "Ready" ...
	I0731 18:07:54.857956   73800 pod_ready.go:97] node "embed-certs-436067" hosting pod "kube-controller-manager-embed-certs-436067" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-436067" has status "Ready":"False"
	I0731 18:07:54.857978   73800 pod_ready.go:81] duration metric: took 104.599259ms for pod "kube-controller-manager-embed-certs-436067" in "kube-system" namespace to be "Ready" ...
	E0731 18:07:54.857988   73800 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-436067" hosting pod "kube-controller-manager-embed-certs-436067" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-436067" has status "Ready":"False"
	I0731 18:07:54.857995   73800 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-tl9pj" in "kube-system" namespace to be "Ready" ...
	I0731 18:07:55.257589   73800 pod_ready.go:92] pod "kube-proxy-tl9pj" in "kube-system" namespace has status "Ready":"True"
	I0731 18:07:55.257621   73800 pod_ready.go:81] duration metric: took 399.617003ms for pod "kube-proxy-tl9pj" in "kube-system" namespace to be "Ready" ...
	I0731 18:07:55.257630   73800 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-embed-certs-436067" in "kube-system" namespace to be "Ready" ...
	I0731 18:07:57.262770   73800 pod_ready.go:102] pod "kube-scheduler-embed-certs-436067" in "kube-system" namespace has status "Ready":"False"
	I0731 18:07:59.271094   73800 pod_ready.go:102] pod "kube-scheduler-embed-certs-436067" in "kube-system" namespace has status "Ready":"False"
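
The pod_ready wait above checks each system-critical pod's Ready condition and, as the "(skipping!)" messages show, tolerates pods whose node is not yet Ready. A reduced client-go sketch of the per-pod check follows; the kubeconfig path, namespace and pod name are illustrative, and error handling is trimmed.

// podready.go - sketch: report whether a named pod has condition Ready=True.
package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func podReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	// Kubeconfig path is an assumption for this sketch.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	pod, err := cs.CoreV1().Pods("kube-system").Get(context.TODO(), "kube-scheduler-embed-certs-436067", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Printf("pod %s Ready=%v\n", pod.Name, podReady(pod))
}
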
	I0731 18:07:56.157721   74203 main.go:141] libmachine: (old-k8s-version-276459) DBG | domain old-k8s-version-276459 has defined MAC address 52:54:00:79:9d:96 in network mk-old-k8s-version-276459
	I0731 18:07:56.158176   74203 main.go:141] libmachine: (old-k8s-version-276459) DBG | unable to find current IP address of domain old-k8s-version-276459 in network mk-old-k8s-version-276459
	I0731 18:07:56.158216   74203 main.go:141] libmachine: (old-k8s-version-276459) DBG | I0731 18:07:56.158110   75164 retry.go:31] will retry after 3.552824334s: waiting for machine to come up
	I0731 18:07:59.712249   74203 main.go:141] libmachine: (old-k8s-version-276459) DBG | domain old-k8s-version-276459 has defined MAC address 52:54:00:79:9d:96 in network mk-old-k8s-version-276459
	I0731 18:07:59.712759   74203 main.go:141] libmachine: (old-k8s-version-276459) Found IP for machine: 192.168.39.26
	I0731 18:07:59.712783   74203 main.go:141] libmachine: (old-k8s-version-276459) DBG | domain old-k8s-version-276459 has current primary IP address 192.168.39.26 and MAC address 52:54:00:79:9d:96 in network mk-old-k8s-version-276459
	I0731 18:07:59.712793   74203 main.go:141] libmachine: (old-k8s-version-276459) Reserving static IP address...
	I0731 18:07:59.713268   74203 main.go:141] libmachine: (old-k8s-version-276459) DBG | found host DHCP lease matching {name: "old-k8s-version-276459", mac: "52:54:00:79:9d:96", ip: "192.168.39.26"} in network mk-old-k8s-version-276459: {Iface:virbr3 ExpiryTime:2024-07-31 19:07:52 +0000 UTC Type:0 Mac:52:54:00:79:9d:96 Iaid: IPaddr:192.168.39.26 Prefix:24 Hostname:old-k8s-version-276459 Clientid:01:52:54:00:79:9d:96}
	I0731 18:07:59.713297   74203 main.go:141] libmachine: (old-k8s-version-276459) DBG | skip adding static IP to network mk-old-k8s-version-276459 - found existing host DHCP lease matching {name: "old-k8s-version-276459", mac: "52:54:00:79:9d:96", ip: "192.168.39.26"}
	I0731 18:07:59.713320   74203 main.go:141] libmachine: (old-k8s-version-276459) Reserved static IP address: 192.168.39.26
	I0731 18:07:59.713343   74203 main.go:141] libmachine: (old-k8s-version-276459) Waiting for SSH to be available...
	I0731 18:07:59.713355   74203 main.go:141] libmachine: (old-k8s-version-276459) DBG | Getting to WaitForSSH function...
	I0731 18:07:59.716068   74203 main.go:141] libmachine: (old-k8s-version-276459) DBG | domain old-k8s-version-276459 has defined MAC address 52:54:00:79:9d:96 in network mk-old-k8s-version-276459
	I0731 18:07:59.716456   74203 main.go:141] libmachine: (old-k8s-version-276459) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:79:9d:96", ip: ""} in network mk-old-k8s-version-276459: {Iface:virbr3 ExpiryTime:2024-07-31 19:07:52 +0000 UTC Type:0 Mac:52:54:00:79:9d:96 Iaid: IPaddr:192.168.39.26 Prefix:24 Hostname:old-k8s-version-276459 Clientid:01:52:54:00:79:9d:96}
	I0731 18:07:59.716490   74203 main.go:141] libmachine: (old-k8s-version-276459) DBG | domain old-k8s-version-276459 has defined IP address 192.168.39.26 and MAC address 52:54:00:79:9d:96 in network mk-old-k8s-version-276459
	I0731 18:07:59.716701   74203 main.go:141] libmachine: (old-k8s-version-276459) DBG | Using SSH client type: external
	I0731 18:07:59.716725   74203 main.go:141] libmachine: (old-k8s-version-276459) DBG | Using SSH private key: /home/jenkins/minikube-integration/19349-8084/.minikube/machines/old-k8s-version-276459/id_rsa (-rw-------)
	I0731 18:07:59.716762   74203 main.go:141] libmachine: (old-k8s-version-276459) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.26 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19349-8084/.minikube/machines/old-k8s-version-276459/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0731 18:07:59.716776   74203 main.go:141] libmachine: (old-k8s-version-276459) DBG | About to run SSH command:
	I0731 18:07:59.716792   74203 main.go:141] libmachine: (old-k8s-version-276459) DBG | exit 0
	I0731 18:07:59.847720   74203 main.go:141] libmachine: (old-k8s-version-276459) DBG | SSH cmd err, output: <nil>: 
	I0731 18:07:59.848089   74203 main.go:141] libmachine: (old-k8s-version-276459) Calling .GetConfigRaw
	I0731 18:07:59.848847   74203 main.go:141] libmachine: (old-k8s-version-276459) Calling .GetIP
	I0731 18:07:59.851632   74203 main.go:141] libmachine: (old-k8s-version-276459) DBG | domain old-k8s-version-276459 has defined MAC address 52:54:00:79:9d:96 in network mk-old-k8s-version-276459
	I0731 18:07:59.852004   74203 main.go:141] libmachine: (old-k8s-version-276459) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:79:9d:96", ip: ""} in network mk-old-k8s-version-276459: {Iface:virbr3 ExpiryTime:2024-07-31 19:07:52 +0000 UTC Type:0 Mac:52:54:00:79:9d:96 Iaid: IPaddr:192.168.39.26 Prefix:24 Hostname:old-k8s-version-276459 Clientid:01:52:54:00:79:9d:96}
	I0731 18:07:59.852030   74203 main.go:141] libmachine: (old-k8s-version-276459) DBG | domain old-k8s-version-276459 has defined IP address 192.168.39.26 and MAC address 52:54:00:79:9d:96 in network mk-old-k8s-version-276459
	I0731 18:07:59.852321   74203 profile.go:143] Saving config to /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/old-k8s-version-276459/config.json ...
	I0731 18:07:59.852505   74203 machine.go:94] provisionDockerMachine start ...
	I0731 18:07:59.852524   74203 main.go:141] libmachine: (old-k8s-version-276459) Calling .DriverName
	I0731 18:07:59.852752   74203 main.go:141] libmachine: (old-k8s-version-276459) Calling .GetSSHHostname
	I0731 18:07:59.855198   74203 main.go:141] libmachine: (old-k8s-version-276459) DBG | domain old-k8s-version-276459 has defined MAC address 52:54:00:79:9d:96 in network mk-old-k8s-version-276459
	I0731 18:07:59.855596   74203 main.go:141] libmachine: (old-k8s-version-276459) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:79:9d:96", ip: ""} in network mk-old-k8s-version-276459: {Iface:virbr3 ExpiryTime:2024-07-31 19:07:52 +0000 UTC Type:0 Mac:52:54:00:79:9d:96 Iaid: IPaddr:192.168.39.26 Prefix:24 Hostname:old-k8s-version-276459 Clientid:01:52:54:00:79:9d:96}
	I0731 18:07:59.855626   74203 main.go:141] libmachine: (old-k8s-version-276459) DBG | domain old-k8s-version-276459 has defined IP address 192.168.39.26 and MAC address 52:54:00:79:9d:96 in network mk-old-k8s-version-276459
	I0731 18:07:59.855756   74203 main.go:141] libmachine: (old-k8s-version-276459) Calling .GetSSHPort
	I0731 18:07:59.855920   74203 main.go:141] libmachine: (old-k8s-version-276459) Calling .GetSSHKeyPath
	I0731 18:07:59.856071   74203 main.go:141] libmachine: (old-k8s-version-276459) Calling .GetSSHKeyPath
	I0731 18:07:59.856208   74203 main.go:141] libmachine: (old-k8s-version-276459) Calling .GetSSHUsername
	I0731 18:07:59.856372   74203 main.go:141] libmachine: Using SSH client type: native
	I0731 18:07:59.856601   74203 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.26 22 <nil> <nil>}
	I0731 18:07:59.856614   74203 main.go:141] libmachine: About to run SSH command:
	hostname
	I0731 18:07:59.963492   74203 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0731 18:07:59.963524   74203 main.go:141] libmachine: (old-k8s-version-276459) Calling .GetMachineName
	I0731 18:07:59.963762   74203 buildroot.go:166] provisioning hostname "old-k8s-version-276459"
	I0731 18:07:59.963794   74203 main.go:141] libmachine: (old-k8s-version-276459) Calling .GetMachineName
	I0731 18:07:59.963992   74203 main.go:141] libmachine: (old-k8s-version-276459) Calling .GetSSHHostname
	I0731 18:07:59.967261   74203 main.go:141] libmachine: (old-k8s-version-276459) DBG | domain old-k8s-version-276459 has defined MAC address 52:54:00:79:9d:96 in network mk-old-k8s-version-276459
	I0731 18:07:59.967725   74203 main.go:141] libmachine: (old-k8s-version-276459) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:79:9d:96", ip: ""} in network mk-old-k8s-version-276459: {Iface:virbr3 ExpiryTime:2024-07-31 19:07:52 +0000 UTC Type:0 Mac:52:54:00:79:9d:96 Iaid: IPaddr:192.168.39.26 Prefix:24 Hostname:old-k8s-version-276459 Clientid:01:52:54:00:79:9d:96}
	I0731 18:07:59.967762   74203 main.go:141] libmachine: (old-k8s-version-276459) DBG | domain old-k8s-version-276459 has defined IP address 192.168.39.26 and MAC address 52:54:00:79:9d:96 in network mk-old-k8s-version-276459
	I0731 18:07:59.967938   74203 main.go:141] libmachine: (old-k8s-version-276459) Calling .GetSSHPort
	I0731 18:07:59.968131   74203 main.go:141] libmachine: (old-k8s-version-276459) Calling .GetSSHKeyPath
	I0731 18:07:59.968316   74203 main.go:141] libmachine: (old-k8s-version-276459) Calling .GetSSHKeyPath
	I0731 18:07:59.968487   74203 main.go:141] libmachine: (old-k8s-version-276459) Calling .GetSSHUsername
	I0731 18:07:59.968687   74203 main.go:141] libmachine: Using SSH client type: native
	I0731 18:07:59.968872   74203 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.26 22 <nil> <nil>}
	I0731 18:07:59.968890   74203 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-276459 && echo "old-k8s-version-276459" | sudo tee /etc/hostname
	I0731 18:08:00.084360   74203 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-276459
	
	I0731 18:08:00.084390   74203 main.go:141] libmachine: (old-k8s-version-276459) Calling .GetSSHHostname
	I0731 18:08:00.087433   74203 main.go:141] libmachine: (old-k8s-version-276459) DBG | domain old-k8s-version-276459 has defined MAC address 52:54:00:79:9d:96 in network mk-old-k8s-version-276459
	I0731 18:08:00.087833   74203 main.go:141] libmachine: (old-k8s-version-276459) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:79:9d:96", ip: ""} in network mk-old-k8s-version-276459: {Iface:virbr3 ExpiryTime:2024-07-31 19:07:52 +0000 UTC Type:0 Mac:52:54:00:79:9d:96 Iaid: IPaddr:192.168.39.26 Prefix:24 Hostname:old-k8s-version-276459 Clientid:01:52:54:00:79:9d:96}
	I0731 18:08:00.087862   74203 main.go:141] libmachine: (old-k8s-version-276459) DBG | domain old-k8s-version-276459 has defined IP address 192.168.39.26 and MAC address 52:54:00:79:9d:96 in network mk-old-k8s-version-276459
	I0731 18:08:00.088016   74203 main.go:141] libmachine: (old-k8s-version-276459) Calling .GetSSHPort
	I0731 18:08:00.088187   74203 main.go:141] libmachine: (old-k8s-version-276459) Calling .GetSSHKeyPath
	I0731 18:08:00.088371   74203 main.go:141] libmachine: (old-k8s-version-276459) Calling .GetSSHKeyPath
	I0731 18:08:00.088521   74203 main.go:141] libmachine: (old-k8s-version-276459) Calling .GetSSHUsername
	I0731 18:08:00.088719   74203 main.go:141] libmachine: Using SSH client type: native
	I0731 18:08:00.088893   74203 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.26 22 <nil> <nil>}
	I0731 18:08:00.088915   74203 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-276459' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-276459/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-276459' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0731 18:08:00.200012   74203 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0731 18:08:00.200038   74203 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19349-8084/.minikube CaCertPath:/home/jenkins/minikube-integration/19349-8084/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19349-8084/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19349-8084/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19349-8084/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19349-8084/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19349-8084/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19349-8084/.minikube}
	I0731 18:08:00.200069   74203 buildroot.go:174] setting up certificates
	I0731 18:08:00.200081   74203 provision.go:84] configureAuth start
	I0731 18:08:00.200093   74203 main.go:141] libmachine: (old-k8s-version-276459) Calling .GetMachineName
	I0731 18:08:00.200360   74203 main.go:141] libmachine: (old-k8s-version-276459) Calling .GetIP
	I0731 18:08:00.203352   74203 main.go:141] libmachine: (old-k8s-version-276459) DBG | domain old-k8s-version-276459 has defined MAC address 52:54:00:79:9d:96 in network mk-old-k8s-version-276459
	I0731 18:08:00.203694   74203 main.go:141] libmachine: (old-k8s-version-276459) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:79:9d:96", ip: ""} in network mk-old-k8s-version-276459: {Iface:virbr3 ExpiryTime:2024-07-31 19:07:52 +0000 UTC Type:0 Mac:52:54:00:79:9d:96 Iaid: IPaddr:192.168.39.26 Prefix:24 Hostname:old-k8s-version-276459 Clientid:01:52:54:00:79:9d:96}
	I0731 18:08:00.203721   74203 main.go:141] libmachine: (old-k8s-version-276459) DBG | domain old-k8s-version-276459 has defined IP address 192.168.39.26 and MAC address 52:54:00:79:9d:96 in network mk-old-k8s-version-276459
	I0731 18:08:00.203951   74203 main.go:141] libmachine: (old-k8s-version-276459) Calling .GetSSHHostname
	I0731 18:08:00.206061   74203 main.go:141] libmachine: (old-k8s-version-276459) DBG | domain old-k8s-version-276459 has defined MAC address 52:54:00:79:9d:96 in network mk-old-k8s-version-276459
	I0731 18:08:00.206398   74203 main.go:141] libmachine: (old-k8s-version-276459) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:79:9d:96", ip: ""} in network mk-old-k8s-version-276459: {Iface:virbr3 ExpiryTime:2024-07-31 19:07:52 +0000 UTC Type:0 Mac:52:54:00:79:9d:96 Iaid: IPaddr:192.168.39.26 Prefix:24 Hostname:old-k8s-version-276459 Clientid:01:52:54:00:79:9d:96}
	I0731 18:08:00.206432   74203 main.go:141] libmachine: (old-k8s-version-276459) DBG | domain old-k8s-version-276459 has defined IP address 192.168.39.26 and MAC address 52:54:00:79:9d:96 in network mk-old-k8s-version-276459
	I0731 18:08:00.206510   74203 provision.go:143] copyHostCerts
	I0731 18:08:00.206580   74203 exec_runner.go:144] found /home/jenkins/minikube-integration/19349-8084/.minikube/ca.pem, removing ...
	I0731 18:08:00.206591   74203 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19349-8084/.minikube/ca.pem
	I0731 18:08:00.206654   74203 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19349-8084/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19349-8084/.minikube/ca.pem (1082 bytes)
	I0731 18:08:00.206759   74203 exec_runner.go:144] found /home/jenkins/minikube-integration/19349-8084/.minikube/cert.pem, removing ...
	I0731 18:08:00.206769   74203 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19349-8084/.minikube/cert.pem
	I0731 18:08:00.206799   74203 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19349-8084/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19349-8084/.minikube/cert.pem (1123 bytes)
	I0731 18:08:00.206876   74203 exec_runner.go:144] found /home/jenkins/minikube-integration/19349-8084/.minikube/key.pem, removing ...
	I0731 18:08:00.206885   74203 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19349-8084/.minikube/key.pem
	I0731 18:08:00.206913   74203 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19349-8084/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19349-8084/.minikube/key.pem (1675 bytes)
	I0731 18:08:00.207047   74203 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19349-8084/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19349-8084/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19349-8084/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-276459 san=[127.0.0.1 192.168.39.26 localhost minikube old-k8s-version-276459]
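
The "generating server cert" step above happens inside minikube's Go code, but for orientation it is roughly equivalent to the openssl sketch below. Everything here is illustrative: the CA paths, organization and SANs are copied from the log line, while the key size, validity period and output filenames are assumptions, not values taken from minikube.

    # Hypothetical openssl equivalent of the logged server-cert generation (illustrative only).
    CA=/home/jenkins/minikube-integration/19349-8084/.minikube/certs/ca.pem
    CA_KEY=/home/jenkins/minikube-integration/19349-8084/.minikube/certs/ca-key.pem

    # Key + CSR with the organization shown in the log.
    openssl req -new -newkey rsa:2048 -nodes \
      -keyout server-key.pem \
      -subj "/O=jenkins.old-k8s-version-276459" \
      -out server.csr

    # Sign with the minikube CA and attach the SANs listed in the log (validity here is arbitrary).
    openssl x509 -req -in server.csr -CA "$CA" -CAkey "$CA_KEY" -CAcreateserial \
      -days 365 -out server.pem \
      -extfile <(printf 'subjectAltName=IP:127.0.0.1,IP:192.168.39.26,DNS:localhost,DNS:minikube,DNS:old-k8s-version-276459')
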
	I0731 18:08:00.279363   74203 provision.go:177] copyRemoteCerts
	I0731 18:08:00.279423   74203 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0731 18:08:00.279456   74203 main.go:141] libmachine: (old-k8s-version-276459) Calling .GetSSHHostname
	I0731 18:08:00.282234   74203 main.go:141] libmachine: (old-k8s-version-276459) DBG | domain old-k8s-version-276459 has defined MAC address 52:54:00:79:9d:96 in network mk-old-k8s-version-276459
	I0731 18:08:00.282601   74203 main.go:141] libmachine: (old-k8s-version-276459) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:79:9d:96", ip: ""} in network mk-old-k8s-version-276459: {Iface:virbr3 ExpiryTime:2024-07-31 19:07:52 +0000 UTC Type:0 Mac:52:54:00:79:9d:96 Iaid: IPaddr:192.168.39.26 Prefix:24 Hostname:old-k8s-version-276459 Clientid:01:52:54:00:79:9d:96}
	I0731 18:08:00.282630   74203 main.go:141] libmachine: (old-k8s-version-276459) DBG | domain old-k8s-version-276459 has defined IP address 192.168.39.26 and MAC address 52:54:00:79:9d:96 in network mk-old-k8s-version-276459
	I0731 18:08:00.282751   74203 main.go:141] libmachine: (old-k8s-version-276459) Calling .GetSSHPort
	I0731 18:08:00.283004   74203 main.go:141] libmachine: (old-k8s-version-276459) Calling .GetSSHKeyPath
	I0731 18:08:00.283178   74203 main.go:141] libmachine: (old-k8s-version-276459) Calling .GetSSHUsername
	I0731 18:08:00.283361   74203 sshutil.go:53] new ssh client: &{IP:192.168.39.26 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19349-8084/.minikube/machines/old-k8s-version-276459/id_rsa Username:docker}
	I0731 18:08:00.935990   73479 start.go:364] duration metric: took 51.517312901s to acquireMachinesLock for "no-preload-673754"
	I0731 18:08:00.936054   73479 start.go:96] Skipping create...Using existing machine configuration
	I0731 18:08:00.936066   73479 fix.go:54] fixHost starting: 
	I0731 18:08:00.936534   73479 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 18:08:00.936589   73479 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 18:08:00.954868   73479 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39013
	I0731 18:08:00.955405   73479 main.go:141] libmachine: () Calling .GetVersion
	I0731 18:08:00.955980   73479 main.go:141] libmachine: Using API Version  1
	I0731 18:08:00.956012   73479 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 18:08:00.956386   73479 main.go:141] libmachine: () Calling .GetMachineName
	I0731 18:08:00.956589   73479 main.go:141] libmachine: (no-preload-673754) Calling .DriverName
	I0731 18:08:00.956752   73479 main.go:141] libmachine: (no-preload-673754) Calling .GetState
	I0731 18:08:00.958461   73479 fix.go:112] recreateIfNeeded on no-preload-673754: state=Stopped err=<nil>
	I0731 18:08:00.958485   73479 main.go:141] libmachine: (no-preload-673754) Calling .DriverName
	W0731 18:08:00.958655   73479 fix.go:138] unexpected machine state, will restart: <nil>
	I0731 18:08:00.960117   73479 out.go:177] * Restarting existing kvm2 VM for "no-preload-673754" ...
	I0731 18:07:57.779258   73696 pod_ready.go:102] pod "metrics-server-569cc877fc-64hp4" in "kube-system" namespace has status "Ready":"False"
	I0731 18:07:59.780834   73696 pod_ready.go:102] pod "metrics-server-569cc877fc-64hp4" in "kube-system" namespace has status "Ready":"False"
	I0731 18:08:00.961340   73479 main.go:141] libmachine: (no-preload-673754) Calling .Start
	I0731 18:08:00.961543   73479 main.go:141] libmachine: (no-preload-673754) Ensuring networks are active...
	I0731 18:08:00.962332   73479 main.go:141] libmachine: (no-preload-673754) Ensuring network default is active
	I0731 18:08:00.962661   73479 main.go:141] libmachine: (no-preload-673754) Ensuring network mk-no-preload-673754 is active
	I0731 18:08:00.963165   73479 main.go:141] libmachine: (no-preload-673754) Getting domain xml...
	I0731 18:08:00.963982   73479 main.go:141] libmachine: (no-preload-673754) Creating domain...
	I0731 18:08:00.365254   74203 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19349-8084/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0731 18:08:00.389729   74203 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19349-8084/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0731 18:08:00.413143   74203 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19349-8084/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0731 18:08:00.436040   74203 provision.go:87] duration metric: took 235.932619ms to configureAuth
	I0731 18:08:00.436080   74203 buildroot.go:189] setting minikube options for container-runtime
	I0731 18:08:00.436288   74203 config.go:182] Loaded profile config "old-k8s-version-276459": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0731 18:08:00.436403   74203 main.go:141] libmachine: (old-k8s-version-276459) Calling .GetSSHHostname
	I0731 18:08:00.439184   74203 main.go:141] libmachine: (old-k8s-version-276459) DBG | domain old-k8s-version-276459 has defined MAC address 52:54:00:79:9d:96 in network mk-old-k8s-version-276459
	I0731 18:08:00.439543   74203 main.go:141] libmachine: (old-k8s-version-276459) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:79:9d:96", ip: ""} in network mk-old-k8s-version-276459: {Iface:virbr3 ExpiryTime:2024-07-31 19:07:52 +0000 UTC Type:0 Mac:52:54:00:79:9d:96 Iaid: IPaddr:192.168.39.26 Prefix:24 Hostname:old-k8s-version-276459 Clientid:01:52:54:00:79:9d:96}
	I0731 18:08:00.439575   74203 main.go:141] libmachine: (old-k8s-version-276459) DBG | domain old-k8s-version-276459 has defined IP address 192.168.39.26 and MAC address 52:54:00:79:9d:96 in network mk-old-k8s-version-276459
	I0731 18:08:00.439734   74203 main.go:141] libmachine: (old-k8s-version-276459) Calling .GetSSHPort
	I0731 18:08:00.439898   74203 main.go:141] libmachine: (old-k8s-version-276459) Calling .GetSSHKeyPath
	I0731 18:08:00.440093   74203 main.go:141] libmachine: (old-k8s-version-276459) Calling .GetSSHKeyPath
	I0731 18:08:00.440271   74203 main.go:141] libmachine: (old-k8s-version-276459) Calling .GetSSHUsername
	I0731 18:08:00.440450   74203 main.go:141] libmachine: Using SSH client type: native
	I0731 18:08:00.440661   74203 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.26 22 <nil> <nil>}
	I0731 18:08:00.440679   74203 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0731 18:08:00.707438   74203 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0731 18:08:00.707467   74203 machine.go:97] duration metric: took 854.948491ms to provisionDockerMachine
	I0731 18:08:00.707482   74203 start.go:293] postStartSetup for "old-k8s-version-276459" (driver="kvm2")
	I0731 18:08:00.707494   74203 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0731 18:08:00.707510   74203 main.go:141] libmachine: (old-k8s-version-276459) Calling .DriverName
	I0731 18:08:00.707811   74203 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0731 18:08:00.707837   74203 main.go:141] libmachine: (old-k8s-version-276459) Calling .GetSSHHostname
	I0731 18:08:00.710726   74203 main.go:141] libmachine: (old-k8s-version-276459) DBG | domain old-k8s-version-276459 has defined MAC address 52:54:00:79:9d:96 in network mk-old-k8s-version-276459
	I0731 18:08:00.711285   74203 main.go:141] libmachine: (old-k8s-version-276459) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:79:9d:96", ip: ""} in network mk-old-k8s-version-276459: {Iface:virbr3 ExpiryTime:2024-07-31 19:07:52 +0000 UTC Type:0 Mac:52:54:00:79:9d:96 Iaid: IPaddr:192.168.39.26 Prefix:24 Hostname:old-k8s-version-276459 Clientid:01:52:54:00:79:9d:96}
	I0731 18:08:00.711315   74203 main.go:141] libmachine: (old-k8s-version-276459) DBG | domain old-k8s-version-276459 has defined IP address 192.168.39.26 and MAC address 52:54:00:79:9d:96 in network mk-old-k8s-version-276459
	I0731 18:08:00.711458   74203 main.go:141] libmachine: (old-k8s-version-276459) Calling .GetSSHPort
	I0731 18:08:00.711703   74203 main.go:141] libmachine: (old-k8s-version-276459) Calling .GetSSHKeyPath
	I0731 18:08:00.711895   74203 main.go:141] libmachine: (old-k8s-version-276459) Calling .GetSSHUsername
	I0731 18:08:00.712049   74203 sshutil.go:53] new ssh client: &{IP:192.168.39.26 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19349-8084/.minikube/machines/old-k8s-version-276459/id_rsa Username:docker}
	I0731 18:08:00.793719   74203 ssh_runner.go:195] Run: cat /etc/os-release
	I0731 18:08:00.797858   74203 info.go:137] Remote host: Buildroot 2023.02.9
	I0731 18:08:00.797888   74203 filesync.go:126] Scanning /home/jenkins/minikube-integration/19349-8084/.minikube/addons for local assets ...
	I0731 18:08:00.797960   74203 filesync.go:126] Scanning /home/jenkins/minikube-integration/19349-8084/.minikube/files for local assets ...
	I0731 18:08:00.798038   74203 filesync.go:149] local asset: /home/jenkins/minikube-integration/19349-8084/.minikube/files/etc/ssl/certs/152592.pem -> 152592.pem in /etc/ssl/certs
	I0731 18:08:00.798130   74203 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0731 18:08:00.807013   74203 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19349-8084/.minikube/files/etc/ssl/certs/152592.pem --> /etc/ssl/certs/152592.pem (1708 bytes)
	I0731 18:08:00.829440   74203 start.go:296] duration metric: took 121.944271ms for postStartSetup
	I0731 18:08:00.829487   74203 fix.go:56] duration metric: took 19.277312964s for fixHost
	I0731 18:08:00.829518   74203 main.go:141] libmachine: (old-k8s-version-276459) Calling .GetSSHHostname
	I0731 18:08:00.832718   74203 main.go:141] libmachine: (old-k8s-version-276459) DBG | domain old-k8s-version-276459 has defined MAC address 52:54:00:79:9d:96 in network mk-old-k8s-version-276459
	I0731 18:08:00.833048   74203 main.go:141] libmachine: (old-k8s-version-276459) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:79:9d:96", ip: ""} in network mk-old-k8s-version-276459: {Iface:virbr3 ExpiryTime:2024-07-31 19:07:52 +0000 UTC Type:0 Mac:52:54:00:79:9d:96 Iaid: IPaddr:192.168.39.26 Prefix:24 Hostname:old-k8s-version-276459 Clientid:01:52:54:00:79:9d:96}
	I0731 18:08:00.833082   74203 main.go:141] libmachine: (old-k8s-version-276459) DBG | domain old-k8s-version-276459 has defined IP address 192.168.39.26 and MAC address 52:54:00:79:9d:96 in network mk-old-k8s-version-276459
	I0731 18:08:00.833317   74203 main.go:141] libmachine: (old-k8s-version-276459) Calling .GetSSHPort
	I0731 18:08:00.833533   74203 main.go:141] libmachine: (old-k8s-version-276459) Calling .GetSSHKeyPath
	I0731 18:08:00.833752   74203 main.go:141] libmachine: (old-k8s-version-276459) Calling .GetSSHKeyPath
	I0731 18:08:00.833887   74203 main.go:141] libmachine: (old-k8s-version-276459) Calling .GetSSHUsername
	I0731 18:08:00.834189   74203 main.go:141] libmachine: Using SSH client type: native
	I0731 18:08:00.834364   74203 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.26 22 <nil> <nil>}
	I0731 18:08:00.834377   74203 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0731 18:08:00.935834   74203 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722449280.899364873
	
	I0731 18:08:00.935853   74203 fix.go:216] guest clock: 1722449280.899364873
	I0731 18:08:00.935860   74203 fix.go:229] Guest: 2024-07-31 18:08:00.899364873 +0000 UTC Remote: 2024-07-31 18:08:00.829491013 +0000 UTC m=+245.518063325 (delta=69.87386ms)
	I0731 18:08:00.935894   74203 fix.go:200] guest clock delta is within tolerance: 69.87386ms
	I0731 18:08:00.935899   74203 start.go:83] releasing machines lock for "old-k8s-version-276459", held for 19.38376262s
	I0731 18:08:00.935937   74203 main.go:141] libmachine: (old-k8s-version-276459) Calling .DriverName
	I0731 18:08:00.936220   74203 main.go:141] libmachine: (old-k8s-version-276459) Calling .GetIP
	I0731 18:08:00.939282   74203 main.go:141] libmachine: (old-k8s-version-276459) DBG | domain old-k8s-version-276459 has defined MAC address 52:54:00:79:9d:96 in network mk-old-k8s-version-276459
	I0731 18:08:00.939691   74203 main.go:141] libmachine: (old-k8s-version-276459) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:79:9d:96", ip: ""} in network mk-old-k8s-version-276459: {Iface:virbr3 ExpiryTime:2024-07-31 19:07:52 +0000 UTC Type:0 Mac:52:54:00:79:9d:96 Iaid: IPaddr:192.168.39.26 Prefix:24 Hostname:old-k8s-version-276459 Clientid:01:52:54:00:79:9d:96}
	I0731 18:08:00.939722   74203 main.go:141] libmachine: (old-k8s-version-276459) DBG | domain old-k8s-version-276459 has defined IP address 192.168.39.26 and MAC address 52:54:00:79:9d:96 in network mk-old-k8s-version-276459
	I0731 18:08:00.939911   74203 main.go:141] libmachine: (old-k8s-version-276459) Calling .DriverName
	I0731 18:08:00.940506   74203 main.go:141] libmachine: (old-k8s-version-276459) Calling .DriverName
	I0731 18:08:00.940704   74203 main.go:141] libmachine: (old-k8s-version-276459) Calling .DriverName
	I0731 18:08:00.940790   74203 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0731 18:08:00.940831   74203 main.go:141] libmachine: (old-k8s-version-276459) Calling .GetSSHHostname
	I0731 18:08:00.940960   74203 ssh_runner.go:195] Run: cat /version.json
	I0731 18:08:00.941043   74203 main.go:141] libmachine: (old-k8s-version-276459) Calling .GetSSHHostname
	I0731 18:08:00.943883   74203 main.go:141] libmachine: (old-k8s-version-276459) DBG | domain old-k8s-version-276459 has defined MAC address 52:54:00:79:9d:96 in network mk-old-k8s-version-276459
	I0731 18:08:00.943909   74203 main.go:141] libmachine: (old-k8s-version-276459) DBG | domain old-k8s-version-276459 has defined MAC address 52:54:00:79:9d:96 in network mk-old-k8s-version-276459
	I0731 18:08:00.944361   74203 main.go:141] libmachine: (old-k8s-version-276459) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:79:9d:96", ip: ""} in network mk-old-k8s-version-276459: {Iface:virbr3 ExpiryTime:2024-07-31 19:07:52 +0000 UTC Type:0 Mac:52:54:00:79:9d:96 Iaid: IPaddr:192.168.39.26 Prefix:24 Hostname:old-k8s-version-276459 Clientid:01:52:54:00:79:9d:96}
	I0731 18:08:00.944405   74203 main.go:141] libmachine: (old-k8s-version-276459) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:79:9d:96", ip: ""} in network mk-old-k8s-version-276459: {Iface:virbr3 ExpiryTime:2024-07-31 19:07:52 +0000 UTC Type:0 Mac:52:54:00:79:9d:96 Iaid: IPaddr:192.168.39.26 Prefix:24 Hostname:old-k8s-version-276459 Clientid:01:52:54:00:79:9d:96}
	I0731 18:08:00.944429   74203 main.go:141] libmachine: (old-k8s-version-276459) DBG | domain old-k8s-version-276459 has defined IP address 192.168.39.26 and MAC address 52:54:00:79:9d:96 in network mk-old-k8s-version-276459
	I0731 18:08:00.944442   74203 main.go:141] libmachine: (old-k8s-version-276459) DBG | domain old-k8s-version-276459 has defined IP address 192.168.39.26 and MAC address 52:54:00:79:9d:96 in network mk-old-k8s-version-276459
	I0731 18:08:00.944542   74203 main.go:141] libmachine: (old-k8s-version-276459) Calling .GetSSHPort
	I0731 18:08:00.944639   74203 main.go:141] libmachine: (old-k8s-version-276459) Calling .GetSSHPort
	I0731 18:08:00.944766   74203 main.go:141] libmachine: (old-k8s-version-276459) Calling .GetSSHKeyPath
	I0731 18:08:00.944817   74203 main.go:141] libmachine: (old-k8s-version-276459) Calling .GetSSHKeyPath
	I0731 18:08:00.944899   74203 main.go:141] libmachine: (old-k8s-version-276459) Calling .GetSSHUsername
	I0731 18:08:00.944979   74203 main.go:141] libmachine: (old-k8s-version-276459) Calling .GetSSHUsername
	I0731 18:08:00.945039   74203 sshutil.go:53] new ssh client: &{IP:192.168.39.26 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19349-8084/.minikube/machines/old-k8s-version-276459/id_rsa Username:docker}
	I0731 18:08:00.945110   74203 sshutil.go:53] new ssh client: &{IP:192.168.39.26 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19349-8084/.minikube/machines/old-k8s-version-276459/id_rsa Username:docker}
	I0731 18:08:01.023818   74203 ssh_runner.go:195] Run: systemctl --version
	I0731 18:08:01.063390   74203 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0731 18:08:01.205084   74203 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0731 18:08:01.210972   74203 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0731 18:08:01.211049   74203 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0731 18:08:01.226156   74203 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0731 18:08:01.226180   74203 start.go:495] detecting cgroup driver to use...
	I0731 18:08:01.226257   74203 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0731 18:08:01.241506   74203 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0731 18:08:01.256615   74203 docker.go:217] disabling cri-docker service (if available) ...
	I0731 18:08:01.256671   74203 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0731 18:08:01.271515   74203 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0731 18:08:01.287213   74203 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0731 18:08:01.415827   74203 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0731 18:08:01.578122   74203 docker.go:233] disabling docker service ...
	I0731 18:08:01.578208   74203 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0731 18:08:01.596564   74203 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0731 18:08:01.611984   74203 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0731 18:08:01.748972   74203 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0731 18:08:01.896911   74203 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0731 18:08:01.912921   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0731 18:08:01.931671   74203 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0731 18:08:01.931749   74203 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 18:08:01.943737   74203 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0731 18:08:01.943798   74203 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 18:08:01.954571   74203 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 18:08:01.964733   74203 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 18:08:01.976087   74203 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0731 18:08:01.987193   74203 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0731 18:08:01.996620   74203 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0731 18:08:01.996670   74203 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0731 18:08:02.011046   74203 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0731 18:08:02.022199   74203 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0731 18:08:02.147855   74203 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0731 18:08:02.309868   74203 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0731 18:08:02.309940   74203 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0731 18:08:02.314966   74203 start.go:563] Will wait 60s for crictl version
	I0731 18:08:02.315031   74203 ssh_runner.go:195] Run: which crictl
	I0731 18:08:02.318685   74203 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0731 18:08:02.359361   74203 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0731 18:08:02.359460   74203 ssh_runner.go:195] Run: crio --version
	I0731 18:08:02.387053   74203 ssh_runner.go:195] Run: crio --version
	I0731 18:08:02.417054   74203 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
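
Taken together, the CRI-O runtime preparation logged above (crictl endpoint, pause image, cgroup driver, netfilter prerequisites, restart) condenses to the shell sequence below. This is a summary sketch, not minikube's exact code path; every value is copied from the log lines above.

    # Point crictl at the CRI-O socket.
    printf 'runtime-endpoint: unix:///var/run/crio/crio.sock\n' | sudo tee /etc/crictl.yaml

    # Pin the pause image and cgroup settings in the drop-in config.
    sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf
    sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf
    sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf
    sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf

    # Kernel prerequisites, then restart the runtime.
    sudo modprobe br_netfilter
    sudo sh -c 'echo 1 > /proc/sys/net/ipv4/ip_forward'
    sudo systemctl daemon-reload && sudo systemctl restart crio
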
	I0731 18:08:01.265323   73800 pod_ready.go:92] pod "kube-scheduler-embed-certs-436067" in "kube-system" namespace has status "Ready":"True"
	I0731 18:08:01.265363   73800 pod_ready.go:81] duration metric: took 6.007715949s for pod "kube-scheduler-embed-certs-436067" in "kube-system" namespace to be "Ready" ...
	I0731 18:08:01.265376   73800 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-569cc877fc-fzxrw" in "kube-system" namespace to be "Ready" ...
	I0731 18:08:03.271693   73800 pod_ready.go:102] pod "metrics-server-569cc877fc-fzxrw" in "kube-system" namespace has status "Ready":"False"
	I0731 18:08:02.418272   74203 main.go:141] libmachine: (old-k8s-version-276459) Calling .GetIP
	I0731 18:08:02.421211   74203 main.go:141] libmachine: (old-k8s-version-276459) DBG | domain old-k8s-version-276459 has defined MAC address 52:54:00:79:9d:96 in network mk-old-k8s-version-276459
	I0731 18:08:02.421714   74203 main.go:141] libmachine: (old-k8s-version-276459) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:79:9d:96", ip: ""} in network mk-old-k8s-version-276459: {Iface:virbr3 ExpiryTime:2024-07-31 19:07:52 +0000 UTC Type:0 Mac:52:54:00:79:9d:96 Iaid: IPaddr:192.168.39.26 Prefix:24 Hostname:old-k8s-version-276459 Clientid:01:52:54:00:79:9d:96}
	I0731 18:08:02.421743   74203 main.go:141] libmachine: (old-k8s-version-276459) DBG | domain old-k8s-version-276459 has defined IP address 192.168.39.26 and MAC address 52:54:00:79:9d:96 in network mk-old-k8s-version-276459
	I0731 18:08:02.421949   74203 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0731 18:08:02.425878   74203 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0731 18:08:02.438082   74203 kubeadm.go:883] updating cluster {Name:old-k8s-version-276459 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersio
n:v1.20.0 ClusterName:old-k8s-version-276459 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.26 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:fal
se MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0731 18:08:02.438222   74203 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0731 18:08:02.438293   74203 ssh_runner.go:195] Run: sudo crictl images --output json
	I0731 18:08:02.484113   74203 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0731 18:08:02.484189   74203 ssh_runner.go:195] Run: which lz4
	I0731 18:08:02.488365   74203 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0731 18:08:02.492321   74203 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0731 18:08:02.492352   74203 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19349-8084/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0731 18:08:03.946187   74203 crio.go:462] duration metric: took 1.457852426s to copy over tarball
	I0731 18:08:03.946261   74203 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
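
The preload handling above (no preloaded images found, so the cached tarball is pushed to the guest and unpacked into /var) boils down to the guest-side sketch below. The tarball path and tar flags are taken from the log; the copy itself is done by minikube over its own SSH session rather than a literal scp.

    # Tarball copied from the host cache:
    #   /home/jenkins/minikube-integration/19349-8084/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
    # On the guest, unpack the container images into /var, then clean up.
    sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
    sudo rm /preloaded.tar.lz4
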
	I0731 18:08:01.781606   73696 pod_ready.go:102] pod "metrics-server-569cc877fc-64hp4" in "kube-system" namespace has status "Ready":"False"
	I0731 18:08:03.781786   73696 pod_ready.go:102] pod "metrics-server-569cc877fc-64hp4" in "kube-system" namespace has status "Ready":"False"
	I0731 18:08:02.287159   73479 main.go:141] libmachine: (no-preload-673754) Waiting to get IP...
	I0731 18:08:02.288338   73479 main.go:141] libmachine: (no-preload-673754) DBG | domain no-preload-673754 has defined MAC address 52:54:00:5a:ec:78 in network mk-no-preload-673754
	I0731 18:08:02.288812   73479 main.go:141] libmachine: (no-preload-673754) DBG | unable to find current IP address of domain no-preload-673754 in network mk-no-preload-673754
	I0731 18:08:02.288879   73479 main.go:141] libmachine: (no-preload-673754) DBG | I0731 18:08:02.288799   75356 retry.go:31] will retry after 229.074083ms: waiting for machine to come up
	I0731 18:08:02.519266   73479 main.go:141] libmachine: (no-preload-673754) DBG | domain no-preload-673754 has defined MAC address 52:54:00:5a:ec:78 in network mk-no-preload-673754
	I0731 18:08:02.519697   73479 main.go:141] libmachine: (no-preload-673754) DBG | unable to find current IP address of domain no-preload-673754 in network mk-no-preload-673754
	I0731 18:08:02.519720   73479 main.go:141] libmachine: (no-preload-673754) DBG | I0731 18:08:02.519663   75356 retry.go:31] will retry after 328.345922ms: waiting for machine to come up
	I0731 18:08:02.849290   73479 main.go:141] libmachine: (no-preload-673754) DBG | domain no-preload-673754 has defined MAC address 52:54:00:5a:ec:78 in network mk-no-preload-673754
	I0731 18:08:02.849839   73479 main.go:141] libmachine: (no-preload-673754) DBG | unable to find current IP address of domain no-preload-673754 in network mk-no-preload-673754
	I0731 18:08:02.849871   73479 main.go:141] libmachine: (no-preload-673754) DBG | I0731 18:08:02.849787   75356 retry.go:31] will retry after 339.030371ms: waiting for machine to come up
	I0731 18:08:03.190065   73479 main.go:141] libmachine: (no-preload-673754) DBG | domain no-preload-673754 has defined MAC address 52:54:00:5a:ec:78 in network mk-no-preload-673754
	I0731 18:08:03.190587   73479 main.go:141] libmachine: (no-preload-673754) DBG | unable to find current IP address of domain no-preload-673754 in network mk-no-preload-673754
	I0731 18:08:03.190620   73479 main.go:141] libmachine: (no-preload-673754) DBG | I0731 18:08:03.190539   75356 retry.go:31] will retry after 514.955663ms: waiting for machine to come up
	I0731 18:08:03.707808   73479 main.go:141] libmachine: (no-preload-673754) DBG | domain no-preload-673754 has defined MAC address 52:54:00:5a:ec:78 in network mk-no-preload-673754
	I0731 18:08:03.708382   73479 main.go:141] libmachine: (no-preload-673754) DBG | unable to find current IP address of domain no-preload-673754 in network mk-no-preload-673754
	I0731 18:08:03.708418   73479 main.go:141] libmachine: (no-preload-673754) DBG | I0731 18:08:03.708349   75356 retry.go:31] will retry after 543.558992ms: waiting for machine to come up
	I0731 18:08:04.253224   73479 main.go:141] libmachine: (no-preload-673754) DBG | domain no-preload-673754 has defined MAC address 52:54:00:5a:ec:78 in network mk-no-preload-673754
	I0731 18:08:04.253760   73479 main.go:141] libmachine: (no-preload-673754) DBG | unable to find current IP address of domain no-preload-673754 in network mk-no-preload-673754
	I0731 18:08:04.253781   73479 main.go:141] libmachine: (no-preload-673754) DBG | I0731 18:08:04.253708   75356 retry.go:31] will retry after 925.348689ms: waiting for machine to come up
	I0731 18:08:05.180439   73479 main.go:141] libmachine: (no-preload-673754) DBG | domain no-preload-673754 has defined MAC address 52:54:00:5a:ec:78 in network mk-no-preload-673754
	I0731 18:08:05.180833   73479 main.go:141] libmachine: (no-preload-673754) DBG | unable to find current IP address of domain no-preload-673754 in network mk-no-preload-673754
	I0731 18:08:05.180857   73479 main.go:141] libmachine: (no-preload-673754) DBG | I0731 18:08:05.180786   75356 retry.go:31] will retry after 1.014666798s: waiting for machine to come up
	I0731 18:08:06.196879   73479 main.go:141] libmachine: (no-preload-673754) DBG | domain no-preload-673754 has defined MAC address 52:54:00:5a:ec:78 in network mk-no-preload-673754
	I0731 18:08:06.197321   73479 main.go:141] libmachine: (no-preload-673754) DBG | unable to find current IP address of domain no-preload-673754 in network mk-no-preload-673754
	I0731 18:08:06.197355   73479 main.go:141] libmachine: (no-preload-673754) DBG | I0731 18:08:06.197258   75356 retry.go:31] will retry after 1.163649074s: waiting for machine to come up
	I0731 18:08:05.278001   73800 pod_ready.go:102] pod "metrics-server-569cc877fc-fzxrw" in "kube-system" namespace has status "Ready":"False"
	I0731 18:08:07.771870   73800 pod_ready.go:102] pod "metrics-server-569cc877fc-fzxrw" in "kube-system" namespace has status "Ready":"False"
	I0731 18:08:06.945760   74203 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.99946679s)
	I0731 18:08:06.945790   74203 crio.go:469] duration metric: took 2.999576832s to extract the tarball
	I0731 18:08:06.945800   74203 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0731 18:08:06.989081   74203 ssh_runner.go:195] Run: sudo crictl images --output json
	I0731 18:08:07.024521   74203 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0731 18:08:07.024545   74203 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0731 18:08:07.024615   74203 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0731 18:08:07.024645   74203 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0731 18:08:07.024695   74203 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0731 18:08:07.024729   74203 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0731 18:08:07.024718   74203 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0731 18:08:07.024780   74203 image.go:134] retrieving image: registry.k8s.io/pause:3.2
	I0731 18:08:07.024822   74203 image.go:134] retrieving image: registry.k8s.io/coredns:1.7.0
	I0731 18:08:07.024716   74203 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0731 18:08:07.026228   74203 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0731 18:08:07.026237   74203 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0731 18:08:07.026242   74203 image.go:177] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0731 18:08:07.026263   74203 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0731 18:08:07.026231   74203 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0731 18:08:07.026231   74203 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0731 18:08:07.026863   74203 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0731 18:08:07.027091   74203 image.go:177] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0731 18:08:07.282735   74203 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0731 18:08:07.284464   74203 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0731 18:08:07.287001   74203 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0731 18:08:07.305873   74203 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0731 18:08:07.307144   74203 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0731 18:08:07.311401   74203 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0731 18:08:07.318119   74203 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0731 18:08:07.366929   74203 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0731 18:08:07.366979   74203 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0731 18:08:07.367041   74203 ssh_runner.go:195] Run: which crictl
	I0731 18:08:07.393481   74203 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0731 18:08:07.393534   74203 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0731 18:08:07.393594   74203 ssh_runner.go:195] Run: which crictl
	I0731 18:08:07.441987   74203 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0731 18:08:07.442036   74203 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0731 18:08:07.442083   74203 ssh_runner.go:195] Run: which crictl
	I0731 18:08:07.449033   74203 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0731 18:08:07.449085   74203 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0731 18:08:07.449137   74203 ssh_runner.go:195] Run: which crictl
	I0731 18:08:07.465248   74203 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0731 18:08:07.465291   74203 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0731 18:08:07.465341   74203 ssh_runner.go:195] Run: which crictl
	I0731 18:08:07.476013   74203 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0731 18:08:07.476053   74203 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0731 18:08:07.476074   74203 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0731 18:08:07.476090   74203 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0731 18:08:07.476111   74203 ssh_runner.go:195] Run: which crictl
	I0731 18:08:07.476129   74203 ssh_runner.go:195] Run: which crictl
	I0731 18:08:07.476146   74203 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0731 18:08:07.476111   74203 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0731 18:08:07.476196   74203 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0731 18:08:07.476220   74203 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0731 18:08:07.476273   74203 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0731 18:08:07.592532   74203 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0731 18:08:07.592677   74203 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19349-8084/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0731 18:08:07.592709   74203 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19349-8084/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0731 18:08:07.592797   74203 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0731 18:08:07.637254   74203 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19349-8084/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0731 18:08:07.637276   74203 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19349-8084/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0731 18:08:07.637288   74203 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19349-8084/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0731 18:08:07.637292   74203 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19349-8084/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0731 18:08:07.640419   74203 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19349-8084/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0731 18:08:07.860814   74203 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0731 18:08:08.002115   74203 cache_images.go:92] duration metric: took 977.553376ms to LoadCachedImages
	W0731 18:08:08.002248   74203 out.go:239] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19349-8084/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2: no such file or directory
	I0731 18:08:08.002267   74203 kubeadm.go:934] updating node { 192.168.39.26 8443 v1.20.0 crio true true} ...
	I0731 18:08:08.002404   74203 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-276459 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.39.26
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-276459 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0731 18:08:08.002500   74203 ssh_runner.go:195] Run: crio config
	I0731 18:08:08.059237   74203 cni.go:84] Creating CNI manager for ""
	I0731 18:08:08.059264   74203 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0731 18:08:08.059281   74203 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0731 18:08:08.059313   74203 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.26 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-276459 NodeName:old-k8s-version-276459 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.26"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.26 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt Stati
cPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0731 18:08:08.059503   74203 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.26
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-276459"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.26
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.26"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0731 18:08:08.059575   74203 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0731 18:08:08.070299   74203 binaries.go:44] Found k8s binaries, skipping transfer
	I0731 18:08:08.070388   74203 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0731 18:08:08.082083   74203 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (429 bytes)
	I0731 18:08:08.101728   74203 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0731 18:08:08.120721   74203 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2120 bytes)
	I0731 18:08:08.137997   74203 ssh_runner.go:195] Run: grep 192.168.39.26	control-plane.minikube.internal$ /etc/hosts
	I0731 18:08:08.141797   74203 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.26	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
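
Both hosts-file edits above (host.minikube.internal earlier, control-plane.minikube.internal here) use the same idempotent pattern: strip any existing entry, append the desired one, and copy the result back. Standalone, with the values from the log:

    # Re-pin control-plane.minikube.internal in /etc/hosts without duplicating entries.
    { grep -v $'\tcontrol-plane.minikube.internal$' /etc/hosts
      printf '192.168.39.26\tcontrol-plane.minikube.internal\n'
    } > /tmp/h.$$
    sudo cp /tmp/h.$$ /etc/hosts && rm /tmp/h.$$
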
	I0731 18:08:08.156861   74203 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0731 18:08:08.287700   74203 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0731 18:08:08.307598   74203 certs.go:68] Setting up /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/old-k8s-version-276459 for IP: 192.168.39.26
	I0731 18:08:08.307623   74203 certs.go:194] generating shared ca certs ...
	I0731 18:08:08.307644   74203 certs.go:226] acquiring lock for ca certs: {Name:mk49eed439085f9c30746706460ce213570d1997 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 18:08:08.307811   74203 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19349-8084/.minikube/ca.key
	I0731 18:08:08.307855   74203 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19349-8084/.minikube/proxy-client-ca.key
	I0731 18:08:08.307868   74203 certs.go:256] generating profile certs ...
	I0731 18:08:08.307987   74203 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/old-k8s-version-276459/client.key
	I0731 18:08:08.308062   74203 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/old-k8s-version-276459/apiserver.key.7c620cac
	I0731 18:08:08.308123   74203 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/old-k8s-version-276459/proxy-client.key
	I0731 18:08:08.308283   74203 certs.go:484] found cert: /home/jenkins/minikube-integration/19349-8084/.minikube/certs/15259.pem (1338 bytes)
	W0731 18:08:08.308315   74203 certs.go:480] ignoring /home/jenkins/minikube-integration/19349-8084/.minikube/certs/15259_empty.pem, impossibly tiny 0 bytes
	I0731 18:08:08.308324   74203 certs.go:484] found cert: /home/jenkins/minikube-integration/19349-8084/.minikube/certs/ca-key.pem (1675 bytes)
	I0731 18:08:08.308362   74203 certs.go:484] found cert: /home/jenkins/minikube-integration/19349-8084/.minikube/certs/ca.pem (1082 bytes)
	I0731 18:08:08.308382   74203 certs.go:484] found cert: /home/jenkins/minikube-integration/19349-8084/.minikube/certs/cert.pem (1123 bytes)
	I0731 18:08:08.308402   74203 certs.go:484] found cert: /home/jenkins/minikube-integration/19349-8084/.minikube/certs/key.pem (1675 bytes)
	I0731 18:08:08.308438   74203 certs.go:484] found cert: /home/jenkins/minikube-integration/19349-8084/.minikube/files/etc/ssl/certs/152592.pem (1708 bytes)
	I0731 18:08:08.309095   74203 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19349-8084/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0731 18:08:08.355508   74203 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19349-8084/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0731 18:08:08.391999   74203 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19349-8084/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0731 18:08:08.427937   74203 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19349-8084/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0731 18:08:08.456268   74203 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/old-k8s-version-276459/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0731 18:08:08.486991   74203 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/old-k8s-version-276459/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0731 18:08:08.519564   74203 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/old-k8s-version-276459/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0731 18:08:08.557029   74203 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/old-k8s-version-276459/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0731 18:08:08.583971   74203 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19349-8084/.minikube/files/etc/ssl/certs/152592.pem --> /usr/share/ca-certificates/152592.pem (1708 bytes)
	I0731 18:08:08.608505   74203 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19349-8084/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0731 18:08:08.630279   74203 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19349-8084/.minikube/certs/15259.pem --> /usr/share/ca-certificates/15259.pem (1338 bytes)
	I0731 18:08:08.655012   74203 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0731 18:08:08.671907   74203 ssh_runner.go:195] Run: openssl version
	I0731 18:08:08.677538   74203 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/152592.pem && ln -fs /usr/share/ca-certificates/152592.pem /etc/ssl/certs/152592.pem"
	I0731 18:08:08.687877   74203 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/152592.pem
	I0731 18:08:08.692201   74203 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 31 16:54 /usr/share/ca-certificates/152592.pem
	I0731 18:08:08.692258   74203 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/152592.pem
	I0731 18:08:08.698563   74203 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/152592.pem /etc/ssl/certs/3ec20f2e.0"
	I0731 18:08:08.708986   74203 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0731 18:08:08.719132   74203 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0731 18:08:08.723242   74203 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 31 16:42 /usr/share/ca-certificates/minikubeCA.pem
	I0731 18:08:08.723299   74203 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0731 18:08:08.729032   74203 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0731 18:08:08.739306   74203 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/15259.pem && ln -fs /usr/share/ca-certificates/15259.pem /etc/ssl/certs/15259.pem"
	I0731 18:08:08.749759   74203 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/15259.pem
	I0731 18:08:08.754167   74203 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 31 16:54 /usr/share/ca-certificates/15259.pem
	I0731 18:08:08.754228   74203 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/15259.pem
	I0731 18:08:08.759786   74203 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/15259.pem /etc/ssl/certs/51391683.0"
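
The symlinks created above follow OpenSSL's hashed-directory convention: each certificate under /etc/ssl/certs gets a link named <subject-hash>.0 so verification can locate it by hash (that is where names like 3ec20f2e.0, b5213941.0 and 51391683.0 in the log come from). The generic form of that step:

    # Link a CA certificate under its OpenSSL subject hash.
    cert=/usr/share/ca-certificates/15259.pem
    hash=$(openssl x509 -hash -noout -in "$cert")
    sudo ln -fs "$cert" "/etc/ssl/certs/${hash}.0"
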
	I0731 18:08:08.770180   74203 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0731 18:08:08.775414   74203 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0731 18:08:08.781830   74203 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0731 18:08:08.787876   74203 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0731 18:08:08.793927   74203 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0731 18:08:08.800090   74203 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0731 18:08:08.806169   74203 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0731 18:08:08.811895   74203 kubeadm.go:392] StartCluster: {Name:old-k8s-version-276459 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v
1.20.0 ClusterName:old-k8s-version-276459 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.26 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false
MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0731 18:08:08.811983   74203 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0731 18:08:08.812040   74203 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0731 18:08:08.853889   74203 cri.go:89] found id: ""
	I0731 18:08:08.853989   74203 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0731 18:08:08.863817   74203 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0731 18:08:08.863837   74203 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0731 18:08:08.863887   74203 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0731 18:08:08.873411   74203 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0731 18:08:08.874616   74203 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-276459" does not appear in /home/jenkins/minikube-integration/19349-8084/kubeconfig
	I0731 18:08:08.875356   74203 kubeconfig.go:62] /home/jenkins/minikube-integration/19349-8084/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-276459" cluster setting kubeconfig missing "old-k8s-version-276459" context setting]
	I0731 18:08:08.876650   74203 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19349-8084/kubeconfig: {Name:mkf2340bc1ada3c683f132aca37228d7588e1fdb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 18:08:08.918433   74203 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0731 18:08:08.931013   74203 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.39.26
	I0731 18:08:08.931067   74203 kubeadm.go:1160] stopping kube-system containers ...
	I0731 18:08:08.931083   74203 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0731 18:08:08.931163   74203 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0731 18:08:08.964683   74203 cri.go:89] found id: ""
	I0731 18:08:08.964759   74203 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0731 18:08:08.980459   74203 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0731 18:08:08.989969   74203 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0731 18:08:08.989997   74203 kubeadm.go:157] found existing configuration files:
	
	I0731 18:08:08.990049   74203 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0731 18:08:08.999015   74203 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0731 18:08:08.999074   74203 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0731 18:08:09.008055   74203 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0731 18:08:09.016532   74203 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0731 18:08:09.016599   74203 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0731 18:08:09.025791   74203 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0731 18:08:09.034160   74203 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0731 18:08:09.034227   74203 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0731 18:08:09.043381   74203 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0731 18:08:09.053419   74203 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0731 18:08:09.053832   74203 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0731 18:08:09.064966   74203 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
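Each grep-then-rm pair above drops a kubeconfig from /etc/kubernetes that does not reference the expected control-plane endpoint; here all four files are missing, so they are simply removed and the freshly generated kubeadm.yaml is put in place. A compact sketch of that cleanup, with the endpoint taken from the log:

    endpoint="https://control-plane.minikube.internal:8443"
    for f in admin kubelet controller-manager scheduler; do
      conf="/etc/kubernetes/${f}.conf"
      sudo grep -q "$endpoint" "$conf" 2>/dev/null || sudo rm -f "$conf"
    done
    sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml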
	I0731 18:08:09.073962   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0731 18:08:09.198503   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0731 18:08:10.048258   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0731 18:08:10.283812   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0731 18:08:06.285091   73696 pod_ready.go:102] pod "metrics-server-569cc877fc-64hp4" in "kube-system" namespace has status "Ready":"False"
	I0731 18:08:08.779998   73696 pod_ready.go:102] pod "metrics-server-569cc877fc-64hp4" in "kube-system" namespace has status "Ready":"False"
	I0731 18:08:10.780198   73696 pod_ready.go:102] pod "metrics-server-569cc877fc-64hp4" in "kube-system" namespace has status "Ready":"False"
	I0731 18:08:07.362756   73479 main.go:141] libmachine: (no-preload-673754) DBG | domain no-preload-673754 has defined MAC address 52:54:00:5a:ec:78 in network mk-no-preload-673754
	I0731 18:08:07.363299   73479 main.go:141] libmachine: (no-preload-673754) DBG | unable to find current IP address of domain no-preload-673754 in network mk-no-preload-673754
	I0731 18:08:07.363328   73479 main.go:141] libmachine: (no-preload-673754) DBG | I0731 18:08:07.363231   75356 retry.go:31] will retry after 1.508296616s: waiting for machine to come up
	I0731 18:08:08.873528   73479 main.go:141] libmachine: (no-preload-673754) DBG | domain no-preload-673754 has defined MAC address 52:54:00:5a:ec:78 in network mk-no-preload-673754
	I0731 18:08:08.874013   73479 main.go:141] libmachine: (no-preload-673754) DBG | unable to find current IP address of domain no-preload-673754 in network mk-no-preload-673754
	I0731 18:08:08.874051   73479 main.go:141] libmachine: (no-preload-673754) DBG | I0731 18:08:08.873971   75356 retry.go:31] will retry after 2.281343566s: waiting for machine to come up
	I0731 18:08:11.157083   73479 main.go:141] libmachine: (no-preload-673754) DBG | domain no-preload-673754 has defined MAC address 52:54:00:5a:ec:78 in network mk-no-preload-673754
	I0731 18:08:11.157578   73479 main.go:141] libmachine: (no-preload-673754) DBG | unable to find current IP address of domain no-preload-673754 in network mk-no-preload-673754
	I0731 18:08:11.157609   73479 main.go:141] libmachine: (no-preload-673754) DBG | I0731 18:08:11.157537   75356 retry.go:31] will retry after 2.49049752s: waiting for machine to come up
	I0731 18:08:09.802010   73800 pod_ready.go:102] pod "metrics-server-569cc877fc-fzxrw" in "kube-system" namespace has status "Ready":"False"
	I0731 18:08:12.271900   73800 pod_ready.go:102] pod "metrics-server-569cc877fc-fzxrw" in "kube-system" namespace has status "Ready":"False"
	I0731 18:08:10.390012   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
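Rather than running a full kubeadm init, the restart path replays individual init phases against the existing data directory: certs, kubeconfigs, kubelet start, the control-plane static pods, and local etcd. Condensed, the commands issued in the log (v1.20.0 binaries under /var/lib/minikube/binaries) amount to:

    KADM="sudo env PATH=/var/lib/minikube/binaries/v1.20.0:$PATH kubeadm"
    CFG=/var/tmp/minikube/kubeadm.yaml
    $KADM init phase certs all          --config $CFG
    $KADM init phase kubeconfig all     --config $CFG
    $KADM init phase kubelet-start      --config $CFG
    $KADM init phase control-plane all  --config $CFG
    $KADM init phase etcd local         --config $CFG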
	I0731 18:08:10.477969   74203 api_server.go:52] waiting for apiserver process to appear ...
	I0731 18:08:10.478093   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:08:10.978427   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:08:11.478715   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:08:11.978685   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:08:12.478211   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:08:12.978218   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:08:13.478493   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:08:13.978778   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:08:14.478489   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:08:14.978983   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
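The repeated pgrep calls above are the apiserver wait loop: the same "sudo pgrep -xnf kube-apiserver.*minikube.*" probe is retried roughly every 500ms until the process shows up. A minimal sketch of that loop with an illustrative deadline (the real timeout lives in minikube's api_server wait logic):

    deadline=$((SECONDS + 240))   # illustrative 4-minute budget
    until sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null; do
      [ "$SECONDS" -ge "$deadline" ] && { echo "kube-apiserver never appeared"; break; }
      sleep 0.5
    done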
	I0731 18:08:13.278943   73696 pod_ready.go:102] pod "metrics-server-569cc877fc-64hp4" in "kube-system" namespace has status "Ready":"False"
	I0731 18:08:15.778760   73696 pod_ready.go:102] pod "metrics-server-569cc877fc-64hp4" in "kube-system" namespace has status "Ready":"False"
	I0731 18:08:13.650131   73479 main.go:141] libmachine: (no-preload-673754) DBG | domain no-preload-673754 has defined MAC address 52:54:00:5a:ec:78 in network mk-no-preload-673754
	I0731 18:08:13.650459   73479 main.go:141] libmachine: (no-preload-673754) DBG | unable to find current IP address of domain no-preload-673754 in network mk-no-preload-673754
	I0731 18:08:13.650480   73479 main.go:141] libmachine: (no-preload-673754) DBG | I0731 18:08:13.650428   75356 retry.go:31] will retry after 3.437877467s: waiting for machine to come up
	I0731 18:08:14.771879   73800 pod_ready.go:102] pod "metrics-server-569cc877fc-fzxrw" in "kube-system" namespace has status "Ready":"False"
	I0731 18:08:17.272673   73800 pod_ready.go:102] pod "metrics-server-569cc877fc-fzxrw" in "kube-system" namespace has status "Ready":"False"
	I0731 18:08:15.478444   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:08:15.978399   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:08:16.478641   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:08:16.979036   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:08:17.479053   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:08:17.978819   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:08:18.478280   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:08:18.978448   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:08:19.479056   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:08:19.978969   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:08:18.279604   73696 pod_ready.go:102] pod "metrics-server-569cc877fc-64hp4" in "kube-system" namespace has status "Ready":"False"
	I0731 18:08:20.778532   73696 pod_ready.go:102] pod "metrics-server-569cc877fc-64hp4" in "kube-system" namespace has status "Ready":"False"
	I0731 18:08:17.089986   73479 main.go:141] libmachine: (no-preload-673754) DBG | domain no-preload-673754 has defined MAC address 52:54:00:5a:ec:78 in network mk-no-preload-673754
	I0731 18:08:17.090556   73479 main.go:141] libmachine: (no-preload-673754) DBG | unable to find current IP address of domain no-preload-673754 in network mk-no-preload-673754
	I0731 18:08:17.090590   73479 main.go:141] libmachine: (no-preload-673754) DBG | I0731 18:08:17.090509   75356 retry.go:31] will retry after 2.95036051s: waiting for machine to come up
	I0731 18:08:20.044455   73479 main.go:141] libmachine: (no-preload-673754) DBG | domain no-preload-673754 has defined MAC address 52:54:00:5a:ec:78 in network mk-no-preload-673754
	I0731 18:08:20.044914   73479 main.go:141] libmachine: (no-preload-673754) Found IP for machine: 192.168.61.126
	I0731 18:08:20.044935   73479 main.go:141] libmachine: (no-preload-673754) Reserving static IP address...
	I0731 18:08:20.044948   73479 main.go:141] libmachine: (no-preload-673754) DBG | domain no-preload-673754 has current primary IP address 192.168.61.126 and MAC address 52:54:00:5a:ec:78 in network mk-no-preload-673754
	I0731 18:08:20.045286   73479 main.go:141] libmachine: (no-preload-673754) Reserved static IP address: 192.168.61.126
	I0731 18:08:20.045308   73479 main.go:141] libmachine: (no-preload-673754) Waiting for SSH to be available...
	I0731 18:08:20.045331   73479 main.go:141] libmachine: (no-preload-673754) DBG | found host DHCP lease matching {name: "no-preload-673754", mac: "52:54:00:5a:ec:78", ip: "192.168.61.126"} in network mk-no-preload-673754: {Iface:virbr2 ExpiryTime:2024-07-31 19:08:12 +0000 UTC Type:0 Mac:52:54:00:5a:ec:78 Iaid: IPaddr:192.168.61.126 Prefix:24 Hostname:no-preload-673754 Clientid:01:52:54:00:5a:ec:78}
	I0731 18:08:20.045352   73479 main.go:141] libmachine: (no-preload-673754) DBG | skip adding static IP to network mk-no-preload-673754 - found existing host DHCP lease matching {name: "no-preload-673754", mac: "52:54:00:5a:ec:78", ip: "192.168.61.126"}
	I0731 18:08:20.045367   73479 main.go:141] libmachine: (no-preload-673754) DBG | Getting to WaitForSSH function...
	I0731 18:08:20.047574   73479 main.go:141] libmachine: (no-preload-673754) DBG | domain no-preload-673754 has defined MAC address 52:54:00:5a:ec:78 in network mk-no-preload-673754
	I0731 18:08:20.047913   73479 main.go:141] libmachine: (no-preload-673754) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:ec:78", ip: ""} in network mk-no-preload-673754: {Iface:virbr2 ExpiryTime:2024-07-31 19:08:12 +0000 UTC Type:0 Mac:52:54:00:5a:ec:78 Iaid: IPaddr:192.168.61.126 Prefix:24 Hostname:no-preload-673754 Clientid:01:52:54:00:5a:ec:78}
	I0731 18:08:20.047939   73479 main.go:141] libmachine: (no-preload-673754) DBG | domain no-preload-673754 has defined IP address 192.168.61.126 and MAC address 52:54:00:5a:ec:78 in network mk-no-preload-673754
	I0731 18:08:20.048069   73479 main.go:141] libmachine: (no-preload-673754) DBG | Using SSH client type: external
	I0731 18:08:20.048106   73479 main.go:141] libmachine: (no-preload-673754) DBG | Using SSH private key: /home/jenkins/minikube-integration/19349-8084/.minikube/machines/no-preload-673754/id_rsa (-rw-------)
	I0731 18:08:20.048150   73479 main.go:141] libmachine: (no-preload-673754) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.126 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19349-8084/.minikube/machines/no-preload-673754/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0731 18:08:20.048168   73479 main.go:141] libmachine: (no-preload-673754) DBG | About to run SSH command:
	I0731 18:08:20.048181   73479 main.go:141] libmachine: (no-preload-673754) DBG | exit 0
	I0731 18:08:20.175606   73479 main.go:141] libmachine: (no-preload-673754) DBG | SSH cmd err, output: <nil>: 
	I0731 18:08:20.175917   73479 main.go:141] libmachine: (no-preload-673754) Calling .GetConfigRaw
	I0731 18:08:20.176508   73479 main.go:141] libmachine: (no-preload-673754) Calling .GetIP
	I0731 18:08:20.179035   73479 main.go:141] libmachine: (no-preload-673754) DBG | domain no-preload-673754 has defined MAC address 52:54:00:5a:ec:78 in network mk-no-preload-673754
	I0731 18:08:20.179374   73479 main.go:141] libmachine: (no-preload-673754) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:ec:78", ip: ""} in network mk-no-preload-673754: {Iface:virbr2 ExpiryTime:2024-07-31 19:08:12 +0000 UTC Type:0 Mac:52:54:00:5a:ec:78 Iaid: IPaddr:192.168.61.126 Prefix:24 Hostname:no-preload-673754 Clientid:01:52:54:00:5a:ec:78}
	I0731 18:08:20.179404   73479 main.go:141] libmachine: (no-preload-673754) DBG | domain no-preload-673754 has defined IP address 192.168.61.126 and MAC address 52:54:00:5a:ec:78 in network mk-no-preload-673754
	I0731 18:08:20.179686   73479 profile.go:143] Saving config to /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/no-preload-673754/config.json ...
	I0731 18:08:20.179869   73479 machine.go:94] provisionDockerMachine start ...
	I0731 18:08:20.179885   73479 main.go:141] libmachine: (no-preload-673754) Calling .DriverName
	I0731 18:08:20.180088   73479 main.go:141] libmachine: (no-preload-673754) Calling .GetSSHHostname
	I0731 18:08:20.182345   73479 main.go:141] libmachine: (no-preload-673754) DBG | domain no-preload-673754 has defined MAC address 52:54:00:5a:ec:78 in network mk-no-preload-673754
	I0731 18:08:20.182702   73479 main.go:141] libmachine: (no-preload-673754) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:ec:78", ip: ""} in network mk-no-preload-673754: {Iface:virbr2 ExpiryTime:2024-07-31 19:08:12 +0000 UTC Type:0 Mac:52:54:00:5a:ec:78 Iaid: IPaddr:192.168.61.126 Prefix:24 Hostname:no-preload-673754 Clientid:01:52:54:00:5a:ec:78}
	I0731 18:08:20.182727   73479 main.go:141] libmachine: (no-preload-673754) DBG | domain no-preload-673754 has defined IP address 192.168.61.126 and MAC address 52:54:00:5a:ec:78 in network mk-no-preload-673754
	I0731 18:08:20.182848   73479 main.go:141] libmachine: (no-preload-673754) Calling .GetSSHPort
	I0731 18:08:20.183060   73479 main.go:141] libmachine: (no-preload-673754) Calling .GetSSHKeyPath
	I0731 18:08:20.183227   73479 main.go:141] libmachine: (no-preload-673754) Calling .GetSSHKeyPath
	I0731 18:08:20.183414   73479 main.go:141] libmachine: (no-preload-673754) Calling .GetSSHUsername
	I0731 18:08:20.183572   73479 main.go:141] libmachine: Using SSH client type: native
	I0731 18:08:20.183747   73479 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.126 22 <nil> <nil>}
	I0731 18:08:20.183757   73479 main.go:141] libmachine: About to run SSH command:
	hostname
	I0731 18:08:20.295090   73479 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0731 18:08:20.295149   73479 main.go:141] libmachine: (no-preload-673754) Calling .GetMachineName
	I0731 18:08:20.295424   73479 buildroot.go:166] provisioning hostname "no-preload-673754"
	I0731 18:08:20.295454   73479 main.go:141] libmachine: (no-preload-673754) Calling .GetMachineName
	I0731 18:08:20.295631   73479 main.go:141] libmachine: (no-preload-673754) Calling .GetSSHHostname
	I0731 18:08:20.298467   73479 main.go:141] libmachine: (no-preload-673754) DBG | domain no-preload-673754 has defined MAC address 52:54:00:5a:ec:78 in network mk-no-preload-673754
	I0731 18:08:20.298771   73479 main.go:141] libmachine: (no-preload-673754) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:ec:78", ip: ""} in network mk-no-preload-673754: {Iface:virbr2 ExpiryTime:2024-07-31 19:08:12 +0000 UTC Type:0 Mac:52:54:00:5a:ec:78 Iaid: IPaddr:192.168.61.126 Prefix:24 Hostname:no-preload-673754 Clientid:01:52:54:00:5a:ec:78}
	I0731 18:08:20.298815   73479 main.go:141] libmachine: (no-preload-673754) DBG | domain no-preload-673754 has defined IP address 192.168.61.126 and MAC address 52:54:00:5a:ec:78 in network mk-no-preload-673754
	I0731 18:08:20.298897   73479 main.go:141] libmachine: (no-preload-673754) Calling .GetSSHPort
	I0731 18:08:20.299094   73479 main.go:141] libmachine: (no-preload-673754) Calling .GetSSHKeyPath
	I0731 18:08:20.299276   73479 main.go:141] libmachine: (no-preload-673754) Calling .GetSSHKeyPath
	I0731 18:08:20.299462   73479 main.go:141] libmachine: (no-preload-673754) Calling .GetSSHUsername
	I0731 18:08:20.299652   73479 main.go:141] libmachine: Using SSH client type: native
	I0731 18:08:20.299806   73479 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.126 22 <nil> <nil>}
	I0731 18:08:20.299817   73479 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-673754 && echo "no-preload-673754" | sudo tee /etc/hostname
	I0731 18:08:20.424901   73479 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-673754
	
	I0731 18:08:20.424951   73479 main.go:141] libmachine: (no-preload-673754) Calling .GetSSHHostname
	I0731 18:08:20.427679   73479 main.go:141] libmachine: (no-preload-673754) DBG | domain no-preload-673754 has defined MAC address 52:54:00:5a:ec:78 in network mk-no-preload-673754
	I0731 18:08:20.428049   73479 main.go:141] libmachine: (no-preload-673754) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:ec:78", ip: ""} in network mk-no-preload-673754: {Iface:virbr2 ExpiryTime:2024-07-31 19:08:12 +0000 UTC Type:0 Mac:52:54:00:5a:ec:78 Iaid: IPaddr:192.168.61.126 Prefix:24 Hostname:no-preload-673754 Clientid:01:52:54:00:5a:ec:78}
	I0731 18:08:20.428083   73479 main.go:141] libmachine: (no-preload-673754) DBG | domain no-preload-673754 has defined IP address 192.168.61.126 and MAC address 52:54:00:5a:ec:78 in network mk-no-preload-673754
	I0731 18:08:20.428230   73479 main.go:141] libmachine: (no-preload-673754) Calling .GetSSHPort
	I0731 18:08:20.428419   73479 main.go:141] libmachine: (no-preload-673754) Calling .GetSSHKeyPath
	I0731 18:08:20.428601   73479 main.go:141] libmachine: (no-preload-673754) Calling .GetSSHKeyPath
	I0731 18:08:20.428767   73479 main.go:141] libmachine: (no-preload-673754) Calling .GetSSHUsername
	I0731 18:08:20.428965   73479 main.go:141] libmachine: Using SSH client type: native
	I0731 18:08:20.429127   73479 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.126 22 <nil> <nil>}
	I0731 18:08:20.429142   73479 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-673754' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-673754/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-673754' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0731 18:08:20.546853   73479 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0731 18:08:20.546884   73479 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19349-8084/.minikube CaCertPath:/home/jenkins/minikube-integration/19349-8084/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19349-8084/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19349-8084/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19349-8084/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19349-8084/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19349-8084/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19349-8084/.minikube}
	I0731 18:08:20.546938   73479 buildroot.go:174] setting up certificates
	I0731 18:08:20.546955   73479 provision.go:84] configureAuth start
	I0731 18:08:20.546971   73479 main.go:141] libmachine: (no-preload-673754) Calling .GetMachineName
	I0731 18:08:20.547275   73479 main.go:141] libmachine: (no-preload-673754) Calling .GetIP
	I0731 18:08:20.550019   73479 main.go:141] libmachine: (no-preload-673754) DBG | domain no-preload-673754 has defined MAC address 52:54:00:5a:ec:78 in network mk-no-preload-673754
	I0731 18:08:20.550372   73479 main.go:141] libmachine: (no-preload-673754) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:ec:78", ip: ""} in network mk-no-preload-673754: {Iface:virbr2 ExpiryTime:2024-07-31 19:08:12 +0000 UTC Type:0 Mac:52:54:00:5a:ec:78 Iaid: IPaddr:192.168.61.126 Prefix:24 Hostname:no-preload-673754 Clientid:01:52:54:00:5a:ec:78}
	I0731 18:08:20.550400   73479 main.go:141] libmachine: (no-preload-673754) DBG | domain no-preload-673754 has defined IP address 192.168.61.126 and MAC address 52:54:00:5a:ec:78 in network mk-no-preload-673754
	I0731 18:08:20.550525   73479 main.go:141] libmachine: (no-preload-673754) Calling .GetSSHHostname
	I0731 18:08:20.552914   73479 main.go:141] libmachine: (no-preload-673754) DBG | domain no-preload-673754 has defined MAC address 52:54:00:5a:ec:78 in network mk-no-preload-673754
	I0731 18:08:20.553261   73479 main.go:141] libmachine: (no-preload-673754) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:ec:78", ip: ""} in network mk-no-preload-673754: {Iface:virbr2 ExpiryTime:2024-07-31 19:08:12 +0000 UTC Type:0 Mac:52:54:00:5a:ec:78 Iaid: IPaddr:192.168.61.126 Prefix:24 Hostname:no-preload-673754 Clientid:01:52:54:00:5a:ec:78}
	I0731 18:08:20.553290   73479 main.go:141] libmachine: (no-preload-673754) DBG | domain no-preload-673754 has defined IP address 192.168.61.126 and MAC address 52:54:00:5a:ec:78 in network mk-no-preload-673754
	I0731 18:08:20.553416   73479 provision.go:143] copyHostCerts
	I0731 18:08:20.553479   73479 exec_runner.go:144] found /home/jenkins/minikube-integration/19349-8084/.minikube/ca.pem, removing ...
	I0731 18:08:20.553490   73479 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19349-8084/.minikube/ca.pem
	I0731 18:08:20.553547   73479 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19349-8084/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19349-8084/.minikube/ca.pem (1082 bytes)
	I0731 18:08:20.553675   73479 exec_runner.go:144] found /home/jenkins/minikube-integration/19349-8084/.minikube/cert.pem, removing ...
	I0731 18:08:20.553687   73479 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19349-8084/.minikube/cert.pem
	I0731 18:08:20.553718   73479 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19349-8084/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19349-8084/.minikube/cert.pem (1123 bytes)
	I0731 18:08:20.553796   73479 exec_runner.go:144] found /home/jenkins/minikube-integration/19349-8084/.minikube/key.pem, removing ...
	I0731 18:08:20.553806   73479 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19349-8084/.minikube/key.pem
	I0731 18:08:20.553826   73479 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19349-8084/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19349-8084/.minikube/key.pem (1675 bytes)
	I0731 18:08:20.553883   73479 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19349-8084/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19349-8084/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19349-8084/.minikube/certs/ca-key.pem org=jenkins.no-preload-673754 san=[127.0.0.1 192.168.61.126 localhost minikube no-preload-673754]
	I0731 18:08:20.878891   73479 provision.go:177] copyRemoteCerts
	I0731 18:08:20.878963   73479 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0731 18:08:20.878990   73479 main.go:141] libmachine: (no-preload-673754) Calling .GetSSHHostname
	I0731 18:08:20.881529   73479 main.go:141] libmachine: (no-preload-673754) DBG | domain no-preload-673754 has defined MAC address 52:54:00:5a:ec:78 in network mk-no-preload-673754
	I0731 18:08:20.881868   73479 main.go:141] libmachine: (no-preload-673754) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:ec:78", ip: ""} in network mk-no-preload-673754: {Iface:virbr2 ExpiryTime:2024-07-31 19:08:12 +0000 UTC Type:0 Mac:52:54:00:5a:ec:78 Iaid: IPaddr:192.168.61.126 Prefix:24 Hostname:no-preload-673754 Clientid:01:52:54:00:5a:ec:78}
	I0731 18:08:20.881900   73479 main.go:141] libmachine: (no-preload-673754) DBG | domain no-preload-673754 has defined IP address 192.168.61.126 and MAC address 52:54:00:5a:ec:78 in network mk-no-preload-673754
	I0731 18:08:20.882053   73479 main.go:141] libmachine: (no-preload-673754) Calling .GetSSHPort
	I0731 18:08:20.882245   73479 main.go:141] libmachine: (no-preload-673754) Calling .GetSSHKeyPath
	I0731 18:08:20.882450   73479 main.go:141] libmachine: (no-preload-673754) Calling .GetSSHUsername
	I0731 18:08:20.882617   73479 sshutil.go:53] new ssh client: &{IP:192.168.61.126 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19349-8084/.minikube/machines/no-preload-673754/id_rsa Username:docker}
	I0731 18:08:20.968757   73479 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19349-8084/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0731 18:08:20.992136   73479 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19349-8084/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0731 18:08:21.013768   73479 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19349-8084/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0731 18:08:21.035808   73479 provision.go:87] duration metric: took 488.837788ms to configureAuth
	I0731 18:08:21.035839   73479 buildroot.go:189] setting minikube options for container-runtime
	I0731 18:08:21.036018   73479 config.go:182] Loaded profile config "no-preload-673754": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0-beta.0
	I0731 18:08:21.036099   73479 main.go:141] libmachine: (no-preload-673754) Calling .GetSSHHostname
	I0731 18:08:21.038949   73479 main.go:141] libmachine: (no-preload-673754) DBG | domain no-preload-673754 has defined MAC address 52:54:00:5a:ec:78 in network mk-no-preload-673754
	I0731 18:08:21.039335   73479 main.go:141] libmachine: (no-preload-673754) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:ec:78", ip: ""} in network mk-no-preload-673754: {Iface:virbr2 ExpiryTime:2024-07-31 19:08:12 +0000 UTC Type:0 Mac:52:54:00:5a:ec:78 Iaid: IPaddr:192.168.61.126 Prefix:24 Hostname:no-preload-673754 Clientid:01:52:54:00:5a:ec:78}
	I0731 18:08:21.039363   73479 main.go:141] libmachine: (no-preload-673754) DBG | domain no-preload-673754 has defined IP address 192.168.61.126 and MAC address 52:54:00:5a:ec:78 in network mk-no-preload-673754
	I0731 18:08:21.039556   73479 main.go:141] libmachine: (no-preload-673754) Calling .GetSSHPort
	I0731 18:08:21.039756   73479 main.go:141] libmachine: (no-preload-673754) Calling .GetSSHKeyPath
	I0731 18:08:21.039960   73479 main.go:141] libmachine: (no-preload-673754) Calling .GetSSHKeyPath
	I0731 18:08:21.040071   73479 main.go:141] libmachine: (no-preload-673754) Calling .GetSSHUsername
	I0731 18:08:21.040219   73479 main.go:141] libmachine: Using SSH client type: native
	I0731 18:08:21.040380   73479 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.126 22 <nil> <nil>}
	I0731 18:08:21.040396   73479 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0731 18:08:21.319623   73479 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0731 18:08:21.319657   73479 machine.go:97] duration metric: took 1.139776085s to provisionDockerMachine
	I0731 18:08:21.319672   73479 start.go:293] postStartSetup for "no-preload-673754" (driver="kvm2")
	I0731 18:08:21.319689   73479 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0731 18:08:21.319710   73479 main.go:141] libmachine: (no-preload-673754) Calling .DriverName
	I0731 18:08:21.320049   73479 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0731 18:08:21.320076   73479 main.go:141] libmachine: (no-preload-673754) Calling .GetSSHHostname
	I0731 18:08:21.322963   73479 main.go:141] libmachine: (no-preload-673754) DBG | domain no-preload-673754 has defined MAC address 52:54:00:5a:ec:78 in network mk-no-preload-673754
	I0731 18:08:21.323436   73479 main.go:141] libmachine: (no-preload-673754) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:ec:78", ip: ""} in network mk-no-preload-673754: {Iface:virbr2 ExpiryTime:2024-07-31 19:08:12 +0000 UTC Type:0 Mac:52:54:00:5a:ec:78 Iaid: IPaddr:192.168.61.126 Prefix:24 Hostname:no-preload-673754 Clientid:01:52:54:00:5a:ec:78}
	I0731 18:08:21.323465   73479 main.go:141] libmachine: (no-preload-673754) DBG | domain no-preload-673754 has defined IP address 192.168.61.126 and MAC address 52:54:00:5a:ec:78 in network mk-no-preload-673754
	I0731 18:08:21.323634   73479 main.go:141] libmachine: (no-preload-673754) Calling .GetSSHPort
	I0731 18:08:21.323809   73479 main.go:141] libmachine: (no-preload-673754) Calling .GetSSHKeyPath
	I0731 18:08:21.324003   73479 main.go:141] libmachine: (no-preload-673754) Calling .GetSSHUsername
	I0731 18:08:21.324127   73479 sshutil.go:53] new ssh client: &{IP:192.168.61.126 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19349-8084/.minikube/machines/no-preload-673754/id_rsa Username:docker}
	I0731 18:08:21.409076   73479 ssh_runner.go:195] Run: cat /etc/os-release
	I0731 18:08:21.412884   73479 info.go:137] Remote host: Buildroot 2023.02.9
	I0731 18:08:21.412917   73479 filesync.go:126] Scanning /home/jenkins/minikube-integration/19349-8084/.minikube/addons for local assets ...
	I0731 18:08:21.413020   73479 filesync.go:126] Scanning /home/jenkins/minikube-integration/19349-8084/.minikube/files for local assets ...
	I0731 18:08:21.413108   73479 filesync.go:149] local asset: /home/jenkins/minikube-integration/19349-8084/.minikube/files/etc/ssl/certs/152592.pem -> 152592.pem in /etc/ssl/certs
	I0731 18:08:21.413233   73479 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0731 18:08:21.421812   73479 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19349-8084/.minikube/files/etc/ssl/certs/152592.pem --> /etc/ssl/certs/152592.pem (1708 bytes)
	I0731 18:08:21.447124   73479 start.go:296] duration metric: took 127.423498ms for postStartSetup
	I0731 18:08:21.447196   73479 fix.go:56] duration metric: took 20.511108968s for fixHost
	I0731 18:08:21.447226   73479 main.go:141] libmachine: (no-preload-673754) Calling .GetSSHHostname
	I0731 18:08:21.450022   73479 main.go:141] libmachine: (no-preload-673754) DBG | domain no-preload-673754 has defined MAC address 52:54:00:5a:ec:78 in network mk-no-preload-673754
	I0731 18:08:21.450408   73479 main.go:141] libmachine: (no-preload-673754) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:ec:78", ip: ""} in network mk-no-preload-673754: {Iface:virbr2 ExpiryTime:2024-07-31 19:08:12 +0000 UTC Type:0 Mac:52:54:00:5a:ec:78 Iaid: IPaddr:192.168.61.126 Prefix:24 Hostname:no-preload-673754 Clientid:01:52:54:00:5a:ec:78}
	I0731 18:08:21.450431   73479 main.go:141] libmachine: (no-preload-673754) DBG | domain no-preload-673754 has defined IP address 192.168.61.126 and MAC address 52:54:00:5a:ec:78 in network mk-no-preload-673754
	I0731 18:08:21.450628   73479 main.go:141] libmachine: (no-preload-673754) Calling .GetSSHPort
	I0731 18:08:21.450846   73479 main.go:141] libmachine: (no-preload-673754) Calling .GetSSHKeyPath
	I0731 18:08:21.451009   73479 main.go:141] libmachine: (no-preload-673754) Calling .GetSSHKeyPath
	I0731 18:08:21.451161   73479 main.go:141] libmachine: (no-preload-673754) Calling .GetSSHUsername
	I0731 18:08:21.451327   73479 main.go:141] libmachine: Using SSH client type: native
	I0731 18:08:21.451527   73479 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.126 22 <nil> <nil>}
	I0731 18:08:21.451541   73479 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0731 18:08:21.563653   73479 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722449301.536356236
	
	I0731 18:08:21.563672   73479 fix.go:216] guest clock: 1722449301.536356236
	I0731 18:08:21.563679   73479 fix.go:229] Guest: 2024-07-31 18:08:21.536356236 +0000 UTC Remote: 2024-07-31 18:08:21.447206545 +0000 UTC m=+354.621330953 (delta=89.149691ms)
	I0731 18:08:21.563702   73479 fix.go:200] guest clock delta is within tolerance: 89.149691ms
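The date +%s.%N probe a few lines up is how the guest clock is sampled; fix.go then compares it with the host's wall clock and proceeds only when the delta (89ms here) is within tolerance. A rough illustration of the same comparison, assuming direct ssh access to the node (minikube does this through its own SSH runner):

    guest=$(ssh docker@192.168.61.126 'date +%s.%N')   # IP taken from the log
    host=$(date +%s.%N)
    echo "guest clock delta: $(echo "$host - $guest" | bc -l)s"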
	I0731 18:08:21.563709   73479 start.go:83] releasing machines lock for "no-preload-673754", held for 20.627680156s
	I0731 18:08:21.563734   73479 main.go:141] libmachine: (no-preload-673754) Calling .DriverName
	I0731 18:08:21.563992   73479 main.go:141] libmachine: (no-preload-673754) Calling .GetIP
	I0731 18:08:21.566875   73479 main.go:141] libmachine: (no-preload-673754) DBG | domain no-preload-673754 has defined MAC address 52:54:00:5a:ec:78 in network mk-no-preload-673754
	I0731 18:08:21.567265   73479 main.go:141] libmachine: (no-preload-673754) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:ec:78", ip: ""} in network mk-no-preload-673754: {Iface:virbr2 ExpiryTime:2024-07-31 19:08:12 +0000 UTC Type:0 Mac:52:54:00:5a:ec:78 Iaid: IPaddr:192.168.61.126 Prefix:24 Hostname:no-preload-673754 Clientid:01:52:54:00:5a:ec:78}
	I0731 18:08:21.567290   73479 main.go:141] libmachine: (no-preload-673754) DBG | domain no-preload-673754 has defined IP address 192.168.61.126 and MAC address 52:54:00:5a:ec:78 in network mk-no-preload-673754
	I0731 18:08:21.567505   73479 main.go:141] libmachine: (no-preload-673754) Calling .DriverName
	I0731 18:08:21.568045   73479 main.go:141] libmachine: (no-preload-673754) Calling .DriverName
	I0731 18:08:21.568237   73479 main.go:141] libmachine: (no-preload-673754) Calling .DriverName
	I0731 18:08:21.568368   73479 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0731 18:08:21.568408   73479 main.go:141] libmachine: (no-preload-673754) Calling .GetSSHHostname
	I0731 18:08:21.568465   73479 ssh_runner.go:195] Run: cat /version.json
	I0731 18:08:21.568492   73479 main.go:141] libmachine: (no-preload-673754) Calling .GetSSHHostname
	I0731 18:08:21.571178   73479 main.go:141] libmachine: (no-preload-673754) DBG | domain no-preload-673754 has defined MAC address 52:54:00:5a:ec:78 in network mk-no-preload-673754
	I0731 18:08:21.571554   73479 main.go:141] libmachine: (no-preload-673754) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:ec:78", ip: ""} in network mk-no-preload-673754: {Iface:virbr2 ExpiryTime:2024-07-31 19:08:12 +0000 UTC Type:0 Mac:52:54:00:5a:ec:78 Iaid: IPaddr:192.168.61.126 Prefix:24 Hostname:no-preload-673754 Clientid:01:52:54:00:5a:ec:78}
	I0731 18:08:21.571603   73479 main.go:141] libmachine: (no-preload-673754) DBG | domain no-preload-673754 has defined IP address 192.168.61.126 and MAC address 52:54:00:5a:ec:78 in network mk-no-preload-673754
	I0731 18:08:21.571653   73479 main.go:141] libmachine: (no-preload-673754) DBG | domain no-preload-673754 has defined MAC address 52:54:00:5a:ec:78 in network mk-no-preload-673754
	I0731 18:08:21.571729   73479 main.go:141] libmachine: (no-preload-673754) Calling .GetSSHPort
	I0731 18:08:21.571902   73479 main.go:141] libmachine: (no-preload-673754) Calling .GetSSHKeyPath
	I0731 18:08:21.572071   73479 main.go:141] libmachine: (no-preload-673754) Calling .GetSSHUsername
	I0731 18:08:21.572213   73479 main.go:141] libmachine: (no-preload-673754) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:ec:78", ip: ""} in network mk-no-preload-673754: {Iface:virbr2 ExpiryTime:2024-07-31 19:08:12 +0000 UTC Type:0 Mac:52:54:00:5a:ec:78 Iaid: IPaddr:192.168.61.126 Prefix:24 Hostname:no-preload-673754 Clientid:01:52:54:00:5a:ec:78}
	I0731 18:08:21.572240   73479 main.go:141] libmachine: (no-preload-673754) DBG | domain no-preload-673754 has defined IP address 192.168.61.126 and MAC address 52:54:00:5a:ec:78 in network mk-no-preload-673754
	I0731 18:08:21.572256   73479 sshutil.go:53] new ssh client: &{IP:192.168.61.126 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19349-8084/.minikube/machines/no-preload-673754/id_rsa Username:docker}
	I0731 18:08:21.572373   73479 main.go:141] libmachine: (no-preload-673754) Calling .GetSSHPort
	I0731 18:08:21.572505   73479 main.go:141] libmachine: (no-preload-673754) Calling .GetSSHKeyPath
	I0731 18:08:21.572609   73479 main.go:141] libmachine: (no-preload-673754) Calling .GetSSHUsername
	I0731 18:08:21.572739   73479 sshutil.go:53] new ssh client: &{IP:192.168.61.126 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19349-8084/.minikube/machines/no-preload-673754/id_rsa Username:docker}
	I0731 18:08:21.682894   73479 ssh_runner.go:195] Run: systemctl --version
	I0731 18:08:21.689126   73479 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0731 18:08:21.829572   73479 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0731 18:08:21.836507   73479 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0731 18:08:21.836589   73479 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0731 18:08:21.855127   73479 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0731 18:08:21.855176   73479 start.go:495] detecting cgroup driver to use...
	I0731 18:08:21.855256   73479 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0731 18:08:21.870886   73479 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0731 18:08:21.884762   73479 docker.go:217] disabling cri-docker service (if available) ...
	I0731 18:08:21.884833   73479 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0731 18:08:21.899480   73479 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0731 18:08:21.912438   73479 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0731 18:08:22.024528   73479 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0731 18:08:22.177400   73479 docker.go:233] disabling docker service ...
	I0731 18:08:22.177500   73479 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0731 18:08:22.191225   73479 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0731 18:08:22.204004   73479 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0731 18:08:22.327408   73479 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0731 18:08:22.449116   73479 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0731 18:08:22.463031   73479 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0731 18:08:22.481864   73479 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0731 18:08:22.481935   73479 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 18:08:22.491687   73479 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0731 18:08:22.491768   73479 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 18:08:22.501686   73479 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 18:08:22.511207   73479 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 18:08:22.521390   73479 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0731 18:08:22.531355   73479 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 18:08:22.541544   73479 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 18:08:22.556829   73479 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 18:08:22.566012   73479 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0731 18:08:22.574865   73479 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0731 18:08:22.574938   73479 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0731 18:08:22.588125   73479 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0731 18:08:22.597257   73479 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0731 18:08:22.716379   73479 ssh_runner.go:195] Run: sudo systemctl restart crio
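The sed commands above reconfigure CRI-O in its drop-in file before the restart: the pause image is pinned, the cgroup manager is switched to cgroupfs with conmon in the "pod" cgroup, and unprivileged ports are opened via default_sysctls. Condensed, the core edits from the log are:

    CONF=/etc/crio/crio.conf.d/02-crio.conf
    sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' $CONF
    sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' $CONF
    sudo sed -i '/conmon_cgroup = .*/d' $CONF
    sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' $CONF
    sudo systemctl daemon-reload && sudo systemctl restart crio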
	I0731 18:08:22.855465   73479 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0731 18:08:22.855526   73479 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0731 18:08:22.860016   73479 start.go:563] Will wait 60s for crictl version
	I0731 18:08:22.860088   73479 ssh_runner.go:195] Run: which crictl
	I0731 18:08:22.863395   73479 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0731 18:08:22.904523   73479 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0731 18:08:22.904611   73479 ssh_runner.go:195] Run: crio --version
	I0731 18:08:22.934571   73479 ssh_runner.go:195] Run: crio --version
	I0731 18:08:22.965884   73479 out.go:177] * Preparing Kubernetes v1.31.0-beta.0 on CRI-O 1.29.1 ...
	I0731 18:08:19.771740   73800 pod_ready.go:102] pod "metrics-server-569cc877fc-fzxrw" in "kube-system" namespace has status "Ready":"False"
	I0731 18:08:22.272491   73800 pod_ready.go:102] pod "metrics-server-569cc877fc-fzxrw" in "kube-system" namespace has status "Ready":"False"
	I0731 18:08:20.478866   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:08:20.978311   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:08:21.478333   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:08:21.978289   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:08:22.479138   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:08:22.978406   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:08:23.478138   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:08:23.979189   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:08:24.478688   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:08:24.978795   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:08:22.779215   73696 pod_ready.go:102] pod "metrics-server-569cc877fc-64hp4" in "kube-system" namespace has status "Ready":"False"
	I0731 18:08:24.782366   73696 pod_ready.go:102] pod "metrics-server-569cc877fc-64hp4" in "kube-system" namespace has status "Ready":"False"
	I0731 18:08:22.967087   73479 main.go:141] libmachine: (no-preload-673754) Calling .GetIP
	I0731 18:08:22.969442   73479 main.go:141] libmachine: (no-preload-673754) DBG | domain no-preload-673754 has defined MAC address 52:54:00:5a:ec:78 in network mk-no-preload-673754
	I0731 18:08:22.969722   73479 main.go:141] libmachine: (no-preload-673754) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:ec:78", ip: ""} in network mk-no-preload-673754: {Iface:virbr2 ExpiryTime:2024-07-31 19:08:12 +0000 UTC Type:0 Mac:52:54:00:5a:ec:78 Iaid: IPaddr:192.168.61.126 Prefix:24 Hostname:no-preload-673754 Clientid:01:52:54:00:5a:ec:78}
	I0731 18:08:22.969746   73479 main.go:141] libmachine: (no-preload-673754) DBG | domain no-preload-673754 has defined IP address 192.168.61.126 and MAC address 52:54:00:5a:ec:78 in network mk-no-preload-673754
	I0731 18:08:22.970005   73479 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0731 18:08:22.974229   73479 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0731 18:08:22.986153   73479 kubeadm.go:883] updating cluster {Name:no-preload-673754 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1
.31.0-beta.0 ClusterName:no-preload-673754 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.126 Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:2628
0h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0731 18:08:22.986292   73479 preload.go:131] Checking if preload exists for k8s version v1.31.0-beta.0 and runtime crio
	I0731 18:08:22.986321   73479 ssh_runner.go:195] Run: sudo crictl images --output json
	I0731 18:08:23.020129   73479 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.0-beta.0". assuming images are not preloaded.
	I0731 18:08:23.020153   73479 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.31.0-beta.0 registry.k8s.io/kube-controller-manager:v1.31.0-beta.0 registry.k8s.io/kube-scheduler:v1.31.0-beta.0 registry.k8s.io/kube-proxy:v1.31.0-beta.0 registry.k8s.io/pause:3.10 registry.k8s.io/etcd:3.5.14-0 registry.k8s.io/coredns/coredns:v1.11.1 gcr.io/k8s-minikube/storage-provisioner:v5]
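Because no preload tarball exists for v1.31.0-beta.0, minikube falls back to comparing what the runtime already reports (crictl images --output json) against the required image list and transferring whatever is missing. A sketch of that comparison, assuming jq is available on the node and using images from the list printed above:

    have=$(sudo crictl images --output json | jq -r '.images[].repoTags[]?')
    for img in registry.k8s.io/kube-apiserver:v1.31.0-beta.0 \
               registry.k8s.io/coredns/coredns:v1.11.1 \
               registry.k8s.io/pause:3.10; do
      echo "$have" | grep -qx "$img" || echo "needs transfer: $img"
    done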
	I0731 18:08:23.020215   73479 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0731 18:08:23.020234   73479 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.31.0-beta.0
	I0731 18:08:23.020266   73479 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.31.0-beta.0
	I0731 18:08:23.020322   73479 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.31.0-beta.0
	I0731 18:08:23.020337   73479 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.14-0
	I0731 18:08:23.020390   73479 image.go:134] retrieving image: registry.k8s.io/pause:3.10
	I0731 18:08:23.020431   73479 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.11.1
	I0731 18:08:23.020457   73479 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.31.0-beta.0
	I0731 18:08:23.021820   73479 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0731 18:08:23.021821   73479 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.14-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.14-0
	I0731 18:08:23.021820   73479 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.31.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.31.0-beta.0
	I0731 18:08:23.021901   73479 image.go:177] daemon lookup for registry.k8s.io/pause:3.10: Error response from daemon: No such image: registry.k8s.io/pause:3.10
	I0731 18:08:23.021978   73479 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.31.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.31.0-beta.0
	I0731 18:08:23.021833   73479 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.31.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.31.0-beta.0
	I0731 18:08:23.021826   73479 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.31.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.31.0-beta.0
	I0731 18:08:23.021821   73479 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.1
	I0731 18:08:23.254700   73479 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.31.0-beta.0
	I0731 18:08:23.268999   73479 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.11.1
	I0731 18:08:23.271466   73479 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.10
	I0731 18:08:23.272011   73479 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.31.0-beta.0
	I0731 18:08:23.275695   73479 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.31.0-beta.0
	I0731 18:08:23.298363   73479 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.31.0-beta.0
	I0731 18:08:23.320031   73479 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.14-0
	I0731 18:08:23.340960   73479 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.31.0-beta.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.31.0-beta.0" does not exist at hash "f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938" in container runtime
	I0731 18:08:23.341004   73479 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.31.0-beta.0
	I0731 18:08:23.341050   73479 ssh_runner.go:195] Run: which crictl
	I0731 18:08:23.381391   73479 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.11.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.11.1" does not exist at hash "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4" in container runtime
	I0731 18:08:23.381441   73479 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.11.1
	I0731 18:08:23.381511   73479 ssh_runner.go:195] Run: which crictl
	I0731 18:08:23.508590   73479 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.31.0-beta.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.31.0-beta.0" does not exist at hash "63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5" in container runtime
	I0731 18:08:23.508650   73479 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.31.0-beta.0
	I0731 18:08:23.508676   73479 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.31.0-beta.0" needs transfer: "registry.k8s.io/kube-proxy:v1.31.0-beta.0" does not exist at hash "c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899" in container runtime
	I0731 18:08:23.508702   73479 ssh_runner.go:195] Run: which crictl
	I0731 18:08:23.508716   73479 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.31.0-beta.0
	I0731 18:08:23.508729   73479 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.31.0-beta.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.31.0-beta.0" does not exist at hash "d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b" in container runtime
	I0731 18:08:23.508751   73479 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.31.0-beta.0
	I0731 18:08:23.508772   73479 ssh_runner.go:195] Run: which crictl
	I0731 18:08:23.508781   73479 ssh_runner.go:195] Run: which crictl
	I0731 18:08:23.508800   73479 cache_images.go:116] "registry.k8s.io/etcd:3.5.14-0" needs transfer: "registry.k8s.io/etcd:3.5.14-0" does not exist at hash "cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa" in container runtime
	I0731 18:08:23.508830   73479 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.14-0
	I0731 18:08:23.508838   73479 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.0-beta.0
	I0731 18:08:23.508860   73479 ssh_runner.go:195] Run: which crictl
	I0731 18:08:23.508879   73479 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1
	I0731 18:08:23.519809   73479 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.0-beta.0
	I0731 18:08:23.519834   73479 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.0-beta.0
	I0731 18:08:23.519907   73479 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.0-beta.0
	I0731 18:08:23.595474   73479 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.14-0
	I0731 18:08:23.595484   73479 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19349-8084/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.0-beta.0
	I0731 18:08:23.595590   73479 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19349-8084/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1
	I0731 18:08:23.595628   73479 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-apiserver_v1.31.0-beta.0
	I0731 18:08:23.595683   73479 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/coredns_v1.11.1
	I0731 18:08:23.622893   73479 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19349-8084/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.0-beta.0
	I0731 18:08:23.623024   73479 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-proxy_v1.31.0-beta.0
	I0731 18:08:23.629140   73479 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19349-8084/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.0-beta.0
	I0731 18:08:23.629173   73479 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19349-8084/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.0-beta.0
	I0731 18:08:23.629242   73479 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-scheduler_v1.31.0-beta.0
	I0731 18:08:23.629246   73479 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-controller-manager_v1.31.0-beta.0
	I0731 18:08:23.659281   73479 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19349-8084/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.14-0
	I0731 18:08:23.659321   73479 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.31.0-beta.0 (exists)
	I0731 18:08:23.659336   73479 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.31.0-beta.0
	I0731 18:08:23.659379   73479 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.0-beta.0
	I0731 18:08:23.659385   73479 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.11.1 (exists)
	I0731 18:08:23.659425   73479 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.31.0-beta.0 (exists)
	I0731 18:08:23.659381   73479 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/etcd_3.5.14-0
	I0731 18:08:23.659465   73479 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.31.0-beta.0 (exists)
	I0731 18:08:23.659494   73479 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.31.0-beta.0 (exists)
	I0731 18:08:23.857129   73479 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0731 18:08:26.136212   73479 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.0-beta.0: (2.476802709s)
	I0731 18:08:26.136251   73479 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19349-8084/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.0-beta.0 from cache
	I0731 18:08:26.136264   73479 ssh_runner.go:235] Completed: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/etcd_3.5.14-0: (2.476807388s)
	I0731 18:08:26.136276   73479 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.11.1
	I0731 18:08:26.136293   73479 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.14-0 (exists)
	I0731 18:08:26.136329   73479 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1
	I0731 18:08:26.136366   73479 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5: (2.279204335s)
	I0731 18:08:26.136423   73479 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I0731 18:08:26.136474   73479 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0731 18:08:26.136521   73479 ssh_runner.go:195] Run: which crictl
	I0731 18:08:24.770974   73800 pod_ready.go:102] pod "metrics-server-569cc877fc-fzxrw" in "kube-system" namespace has status "Ready":"False"
	I0731 18:08:26.771954   73800 pod_ready.go:102] pod "metrics-server-569cc877fc-fzxrw" in "kube-system" namespace has status "Ready":"False"
	I0731 18:08:29.274931   73800 pod_ready.go:102] pod "metrics-server-569cc877fc-fzxrw" in "kube-system" namespace has status "Ready":"False"
	I0731 18:08:25.478432   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:08:25.978823   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:08:26.478416   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:08:26.979075   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:08:27.478228   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:08:27.978970   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:08:28.478319   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:08:28.979028   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:08:29.479060   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:08:29.978544   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:08:27.278482   73696 pod_ready.go:102] pod "metrics-server-569cc877fc-64hp4" in "kube-system" namespace has status "Ready":"False"
	I0731 18:08:29.279820   73696 pod_ready.go:102] pod "metrics-server-569cc877fc-64hp4" in "kube-system" namespace has status "Ready":"False"
	I0731 18:08:27.993828   73479 ssh_runner.go:235] Completed: which crictl: (1.857279777s)
	I0731 18:08:27.993908   73479 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0731 18:08:27.993918   73479 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1: (1.857561411s)
	I0731 18:08:27.993947   73479 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19349-8084/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1 from cache
	I0731 18:08:27.993981   73479 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.31.0-beta.0
	I0731 18:08:27.994029   73479 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.0-beta.0
	I0731 18:08:28.037163   73479 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19349-8084/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0731 18:08:28.037288   73479 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/storage-provisioner_v5
	I0731 18:08:29.880343   73479 ssh_runner.go:235] Completed: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/storage-provisioner_v5: (1.843037657s)
	I0731 18:08:29.880392   73479 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I0731 18:08:29.880339   73479 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.0-beta.0: (1.886261639s)
	I0731 18:08:29.880412   73479 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19349-8084/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.0-beta.0 from cache
	I0731 18:08:29.880442   73479 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.31.0-beta.0
	I0731 18:08:29.880509   73479 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.0-beta.0
	I0731 18:08:31.229448   73479 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.0-beta.0: (1.348909634s)
	I0731 18:08:31.229478   73479 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19349-8084/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.0-beta.0 from cache
	I0731 18:08:31.229512   73479 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.31.0-beta.0
	I0731 18:08:31.229575   73479 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.0-beta.0
	I0731 18:08:31.771695   73800 pod_ready.go:102] pod "metrics-server-569cc877fc-fzxrw" in "kube-system" namespace has status "Ready":"False"
	I0731 18:08:34.271817   73800 pod_ready.go:102] pod "metrics-server-569cc877fc-fzxrw" in "kube-system" namespace has status "Ready":"False"
	I0731 18:08:30.478387   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:08:30.978443   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:08:31.478484   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:08:31.979231   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:08:32.478928   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:08:32.978790   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:08:33.478426   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:08:33.978839   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:08:34.478411   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:08:34.978378   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:08:31.280261   73696 pod_ready.go:102] pod "metrics-server-569cc877fc-64hp4" in "kube-system" namespace has status "Ready":"False"
	I0731 18:08:33.780411   73696 pod_ready.go:102] pod "metrics-server-569cc877fc-64hp4" in "kube-system" namespace has status "Ready":"False"
	I0731 18:08:35.783181   73696 pod_ready.go:102] pod "metrics-server-569cc877fc-64hp4" in "kube-system" namespace has status "Ready":"False"
	I0731 18:08:33.084098   73479 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.0-beta.0: (1.854499641s)
	I0731 18:08:33.084136   73479 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19349-8084/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.0-beta.0 from cache
	I0731 18:08:33.084175   73479 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.5.14-0
	I0731 18:08:33.084255   73479 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.14-0
	I0731 18:08:36.378466   73479 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.14-0: (3.294181026s)
	I0731 18:08:36.378501   73479 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19349-8084/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.14-0 from cache
	I0731 18:08:36.378530   73479 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0731 18:08:36.378575   73479 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I0731 18:08:36.772963   73800 pod_ready.go:102] pod "metrics-server-569cc877fc-fzxrw" in "kube-system" namespace has status "Ready":"False"
	I0731 18:08:39.270915   73800 pod_ready.go:102] pod "metrics-server-569cc877fc-fzxrw" in "kube-system" namespace has status "Ready":"False"
	I0731 18:08:35.478287   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:08:35.978546   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:08:36.479138   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:08:36.979173   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:08:37.478319   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:08:37.978768   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:08:38.479161   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:08:38.979129   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:08:39.478128   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:08:39.979147   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:08:38.278970   73696 pod_ready.go:102] pod "metrics-server-569cc877fc-64hp4" in "kube-system" namespace has status "Ready":"False"
	I0731 18:08:40.279298   73696 pod_ready.go:102] pod "metrics-server-569cc877fc-64hp4" in "kube-system" namespace has status "Ready":"False"
	I0731 18:08:37.022757   73479 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19349-8084/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0731 18:08:37.022807   73479 cache_images.go:123] Successfully loaded all cached images
	I0731 18:08:37.022815   73479 cache_images.go:92] duration metric: took 14.002647196s to LoadCachedImages
	I0731 18:08:37.022829   73479 kubeadm.go:934] updating node { 192.168.61.126 8443 v1.31.0-beta.0 crio true true} ...
	I0731 18:08:37.022954   73479 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-673754 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.126
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0-beta.0 ClusterName:no-preload-673754 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0731 18:08:37.023035   73479 ssh_runner.go:195] Run: crio config
	I0731 18:08:37.064803   73479 cni.go:84] Creating CNI manager for ""
	I0731 18:08:37.064825   73479 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0731 18:08:37.064834   73479 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0731 18:08:37.064856   73479 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.126 APIServerPort:8443 KubernetesVersion:v1.31.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-673754 NodeName:no-preload-673754 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.126"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.126 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0731 18:08:37.065028   73479 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.126
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-673754"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.126
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.126"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0731 18:08:37.065108   73479 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0-beta.0
	I0731 18:08:37.077141   73479 binaries.go:44] Found k8s binaries, skipping transfer
	I0731 18:08:37.077215   73479 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0731 18:08:37.086553   73479 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (324 bytes)
	I0731 18:08:37.102646   73479 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I0731 18:08:37.118113   73479 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2168 bytes)
	I0731 18:08:37.134702   73479 ssh_runner.go:195] Run: grep 192.168.61.126	control-plane.minikube.internal$ /etc/hosts
	I0731 18:08:37.138593   73479 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.126	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0731 18:08:37.151319   73479 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0731 18:08:37.270019   73479 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0731 18:08:37.287378   73479 certs.go:68] Setting up /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/no-preload-673754 for IP: 192.168.61.126
	I0731 18:08:37.287400   73479 certs.go:194] generating shared ca certs ...
	I0731 18:08:37.287413   73479 certs.go:226] acquiring lock for ca certs: {Name:mk49eed439085f9c30746706460ce213570d1997 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 18:08:37.287540   73479 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19349-8084/.minikube/ca.key
	I0731 18:08:37.287577   73479 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19349-8084/.minikube/proxy-client-ca.key
	I0731 18:08:37.287584   73479 certs.go:256] generating profile certs ...
	I0731 18:08:37.287692   73479 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/no-preload-673754/client.key
	I0731 18:08:37.287761   73479 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/no-preload-673754/apiserver.key.3fff3ffc
	I0731 18:08:37.287803   73479 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/no-preload-673754/proxy-client.key
	I0731 18:08:37.287938   73479 certs.go:484] found cert: /home/jenkins/minikube-integration/19349-8084/.minikube/certs/15259.pem (1338 bytes)
	W0731 18:08:37.287973   73479 certs.go:480] ignoring /home/jenkins/minikube-integration/19349-8084/.minikube/certs/15259_empty.pem, impossibly tiny 0 bytes
	I0731 18:08:37.287985   73479 certs.go:484] found cert: /home/jenkins/minikube-integration/19349-8084/.minikube/certs/ca-key.pem (1675 bytes)
	I0731 18:08:37.288020   73479 certs.go:484] found cert: /home/jenkins/minikube-integration/19349-8084/.minikube/certs/ca.pem (1082 bytes)
	I0731 18:08:37.288049   73479 certs.go:484] found cert: /home/jenkins/minikube-integration/19349-8084/.minikube/certs/cert.pem (1123 bytes)
	I0731 18:08:37.288079   73479 certs.go:484] found cert: /home/jenkins/minikube-integration/19349-8084/.minikube/certs/key.pem (1675 bytes)
	I0731 18:08:37.288143   73479 certs.go:484] found cert: /home/jenkins/minikube-integration/19349-8084/.minikube/files/etc/ssl/certs/152592.pem (1708 bytes)
	I0731 18:08:37.288831   73479 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19349-8084/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0731 18:08:37.334317   73479 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19349-8084/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0731 18:08:37.370553   73479 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19349-8084/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0731 18:08:37.403436   73479 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19349-8084/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0731 18:08:37.449133   73479 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/no-preload-673754/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0731 18:08:37.486169   73479 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/no-preload-673754/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0731 18:08:37.517241   73479 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/no-preload-673754/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0731 18:08:37.541089   73479 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/no-preload-673754/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0731 18:08:37.563068   73479 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19349-8084/.minikube/certs/15259.pem --> /usr/share/ca-certificates/15259.pem (1338 bytes)
	I0731 18:08:37.585396   73479 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19349-8084/.minikube/files/etc/ssl/certs/152592.pem --> /usr/share/ca-certificates/152592.pem (1708 bytes)
	I0731 18:08:37.608142   73479 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19349-8084/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0731 18:08:37.630178   73479 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0731 18:08:37.645994   73479 ssh_runner.go:195] Run: openssl version
	I0731 18:08:37.651663   73479 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/15259.pem && ln -fs /usr/share/ca-certificates/15259.pem /etc/ssl/certs/15259.pem"
	I0731 18:08:37.661494   73479 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/15259.pem
	I0731 18:08:37.665519   73479 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 31 16:54 /usr/share/ca-certificates/15259.pem
	I0731 18:08:37.665575   73479 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/15259.pem
	I0731 18:08:37.671143   73479 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/15259.pem /etc/ssl/certs/51391683.0"
	I0731 18:08:37.681076   73479 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/152592.pem && ln -fs /usr/share/ca-certificates/152592.pem /etc/ssl/certs/152592.pem"
	I0731 18:08:37.692253   73479 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/152592.pem
	I0731 18:08:37.696802   73479 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 31 16:54 /usr/share/ca-certificates/152592.pem
	I0731 18:08:37.696850   73479 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/152592.pem
	I0731 18:08:37.702282   73479 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/152592.pem /etc/ssl/certs/3ec20f2e.0"
	I0731 18:08:37.713051   73479 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0731 18:08:37.723644   73479 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0731 18:08:37.728170   73479 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 31 16:42 /usr/share/ca-certificates/minikubeCA.pem
	I0731 18:08:37.728225   73479 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0731 18:08:37.733912   73479 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0731 18:08:37.744004   73479 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0731 18:08:37.748076   73479 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0731 18:08:37.753645   73479 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0731 18:08:37.759077   73479 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0731 18:08:37.764344   73479 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0731 18:08:37.769735   73479 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0731 18:08:37.775894   73479 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0731 18:08:37.781699   73479 kubeadm.go:392] StartCluster: {Name:no-preload-673754 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0-beta.0 ClusterName:no-preload-673754 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.126 Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0731 18:08:37.781771   73479 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0731 18:08:37.781833   73479 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0731 18:08:37.825614   73479 cri.go:89] found id: ""
	I0731 18:08:37.825685   73479 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0731 18:08:37.835584   73479 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0731 18:08:37.835604   73479 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0731 18:08:37.835659   73479 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0731 18:08:37.844529   73479 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0731 18:08:37.845534   73479 kubeconfig.go:125] found "no-preload-673754" server: "https://192.168.61.126:8443"
	I0731 18:08:37.847698   73479 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0731 18:08:37.856360   73479 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.61.126
	I0731 18:08:37.856386   73479 kubeadm.go:1160] stopping kube-system containers ...
	I0731 18:08:37.856396   73479 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0731 18:08:37.856440   73479 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0731 18:08:37.894614   73479 cri.go:89] found id: ""
	I0731 18:08:37.894689   73479 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0731 18:08:37.910921   73479 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0731 18:08:37.919796   73479 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0731 18:08:37.919814   73479 kubeadm.go:157] found existing configuration files:
	
	I0731 18:08:37.919859   73479 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0731 18:08:37.928562   73479 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0731 18:08:37.928617   73479 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0731 18:08:37.937099   73479 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0731 18:08:37.945298   73479 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0731 18:08:37.945378   73479 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0731 18:08:37.953976   73479 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0731 18:08:37.962069   73479 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0731 18:08:37.962119   73479 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0731 18:08:37.970719   73479 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0731 18:08:37.979265   73479 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0731 18:08:37.979318   73479 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0731 18:08:37.988286   73479 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0731 18:08:37.997742   73479 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0731 18:08:38.105503   73479 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0731 18:08:39.403672   73479 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.298131314s)
	I0731 18:08:39.403710   73479 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0731 18:08:39.609739   73479 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0731 18:08:39.677484   73479 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0731 18:08:39.773387   73479 api_server.go:52] waiting for apiserver process to appear ...
	I0731 18:08:39.773469   73479 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:08:40.274185   73479 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:08:40.774562   73479 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:08:40.792346   73479 api_server.go:72] duration metric: took 1.018961231s to wait for apiserver process to appear ...
	I0731 18:08:40.792368   73479 api_server.go:88] waiting for apiserver healthz status ...
	I0731 18:08:40.792384   73479 api_server.go:253] Checking apiserver healthz at https://192.168.61.126:8443/healthz ...
	I0731 18:08:41.271890   73800 pod_ready.go:102] pod "metrics-server-569cc877fc-fzxrw" in "kube-system" namespace has status "Ready":"False"
	I0731 18:08:43.771546   73800 pod_ready.go:102] pod "metrics-server-569cc877fc-fzxrw" in "kube-system" namespace has status "Ready":"False"
	I0731 18:08:43.476911   73479 api_server.go:279] https://192.168.61.126:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0731 18:08:43.476938   73479 api_server.go:103] status: https://192.168.61.126:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0731 18:08:43.476952   73479 api_server.go:253] Checking apiserver healthz at https://192.168.61.126:8443/healthz ...
	I0731 18:08:43.536762   73479 api_server.go:279] https://192.168.61.126:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0731 18:08:43.536794   73479 api_server.go:103] status: https://192.168.61.126:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0731 18:08:43.793157   73479 api_server.go:253] Checking apiserver healthz at https://192.168.61.126:8443/healthz ...
	I0731 18:08:43.798895   73479 api_server.go:279] https://192.168.61.126:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0731 18:08:43.798924   73479 api_server.go:103] status: https://192.168.61.126:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0731 18:08:44.292527   73479 api_server.go:253] Checking apiserver healthz at https://192.168.61.126:8443/healthz ...
	I0731 18:08:44.300596   73479 api_server.go:279] https://192.168.61.126:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0731 18:08:44.300632   73479 api_server.go:103] status: https://192.168.61.126:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0731 18:08:44.793206   73479 api_server.go:253] Checking apiserver healthz at https://192.168.61.126:8443/healthz ...
	I0731 18:08:44.797982   73479 api_server.go:279] https://192.168.61.126:8443/healthz returned 200:
	ok
	I0731 18:08:44.806150   73479 api_server.go:141] control plane version: v1.31.0-beta.0
	I0731 18:08:44.806172   73479 api_server.go:131] duration metric: took 4.013797537s to wait for apiserver health ...
	I0731 18:08:44.806183   73479 cni.go:84] Creating CNI manager for ""
	I0731 18:08:44.806191   73479 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0731 18:08:44.807774   73479 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0731 18:08:40.478967   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:08:40.978610   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:08:41.479192   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:08:41.978737   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:08:42.479051   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:08:42.978274   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:08:43.478957   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:08:43.978973   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:08:44.478269   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:08:44.978737   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:08:42.778330   73696 pod_ready.go:102] pod "metrics-server-569cc877fc-64hp4" in "kube-system" namespace has status "Ready":"False"
	I0731 18:08:44.779163   73696 pod_ready.go:102] pod "metrics-server-569cc877fc-64hp4" in "kube-system" namespace has status "Ready":"False"
	I0731 18:08:44.809068   73479 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0731 18:08:44.823284   73479 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0731 18:08:44.878894   73479 system_pods.go:43] waiting for kube-system pods to appear ...
	I0731 18:08:44.892969   73479 system_pods.go:59] 8 kube-system pods found
	I0731 18:08:44.893020   73479 system_pods.go:61] "coredns-5cfdc65f69-k7clq" [e4be77b6-aa7c-45e2-90a1-6a8264fd5101] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0731 18:08:44.893031   73479 system_pods.go:61] "etcd-no-preload-673754" [d64e9194-dd33-4a28-b8c3-d25462f1dae8] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0731 18:08:44.893042   73479 system_pods.go:61] "kube-apiserver-no-preload-673754" [4497614d-51de-4a5f-93dc-1397446fe4c8] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0731 18:08:44.893055   73479 system_pods.go:61] "kube-controller-manager-no-preload-673754" [fe7f714e-d778-49e7-b4ef-44e6efb5bc33] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0731 18:08:44.893067   73479 system_pods.go:61] "kube-proxy-hqxh6" [4fb623c0-4be3-421c-85d6-1d76a90b874f] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0731 18:08:44.893078   73479 system_pods.go:61] "kube-scheduler-no-preload-673754" [558e9b78-a6a9-46b8-9db8-004126359c74] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0731 18:08:44.893088   73479 system_pods.go:61] "metrics-server-78fcd8795b-27pkr" [a63a156b-0446-4bd9-8619-de75edaeb481] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0731 18:08:44.893098   73479 system_pods.go:61] "storage-provisioner" [a9f0ab70-18f4-4fac-b858-a9177077fe29] Running
	I0731 18:08:44.893109   73479 system_pods.go:74] duration metric: took 14.191984ms to wait for pod list to return data ...
	I0731 18:08:44.893120   73479 node_conditions.go:102] verifying NodePressure condition ...
	I0731 18:08:44.908236   73479 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0731 18:08:44.908270   73479 node_conditions.go:123] node cpu capacity is 2
	I0731 18:08:44.908283   73479 node_conditions.go:105] duration metric: took 15.154491ms to run NodePressure ...
	I0731 18:08:44.908307   73479 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0731 18:08:45.248571   73479 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0731 18:08:45.252305   73479 kubeadm.go:739] kubelet initialised
	I0731 18:08:45.252332   73479 kubeadm.go:740] duration metric: took 3.734022ms waiting for restarted kubelet to initialise ...
	I0731 18:08:45.252342   73479 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0731 18:08:45.256748   73479 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5cfdc65f69-k7clq" in "kube-system" namespace to be "Ready" ...
	I0731 18:08:45.261130   73479 pod_ready.go:97] node "no-preload-673754" hosting pod "coredns-5cfdc65f69-k7clq" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-673754" has status "Ready":"False"
	I0731 18:08:45.261149   73479 pod_ready.go:81] duration metric: took 4.373068ms for pod "coredns-5cfdc65f69-k7clq" in "kube-system" namespace to be "Ready" ...
	E0731 18:08:45.261157   73479 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-673754" hosting pod "coredns-5cfdc65f69-k7clq" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-673754" has status "Ready":"False"
	I0731 18:08:45.261162   73479 pod_ready.go:78] waiting up to 4m0s for pod "etcd-no-preload-673754" in "kube-system" namespace to be "Ready" ...
	I0731 18:08:45.265115   73479 pod_ready.go:97] node "no-preload-673754" hosting pod "etcd-no-preload-673754" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-673754" has status "Ready":"False"
	I0731 18:08:45.265135   73479 pod_ready.go:81] duration metric: took 3.965586ms for pod "etcd-no-preload-673754" in "kube-system" namespace to be "Ready" ...
	E0731 18:08:45.265142   73479 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-673754" hosting pod "etcd-no-preload-673754" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-673754" has status "Ready":"False"
	I0731 18:08:45.265147   73479 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-no-preload-673754" in "kube-system" namespace to be "Ready" ...
	I0731 18:08:45.269566   73479 pod_ready.go:97] node "no-preload-673754" hosting pod "kube-apiserver-no-preload-673754" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-673754" has status "Ready":"False"
	I0731 18:08:45.269585   73479 pod_ready.go:81] duration metric: took 4.431367ms for pod "kube-apiserver-no-preload-673754" in "kube-system" namespace to be "Ready" ...
	E0731 18:08:45.269595   73479 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-673754" hosting pod "kube-apiserver-no-preload-673754" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-673754" has status "Ready":"False"
	I0731 18:08:45.269603   73479 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-no-preload-673754" in "kube-system" namespace to be "Ready" ...
	I0731 18:08:45.281026   73479 pod_ready.go:97] node "no-preload-673754" hosting pod "kube-controller-manager-no-preload-673754" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-673754" has status "Ready":"False"
	I0731 18:08:45.281048   73479 pod_ready.go:81] duration metric: took 11.435327ms for pod "kube-controller-manager-no-preload-673754" in "kube-system" namespace to be "Ready" ...
	E0731 18:08:45.281057   73479 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-673754" hosting pod "kube-controller-manager-no-preload-673754" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-673754" has status "Ready":"False"
	I0731 18:08:45.281065   73479 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-hqxh6" in "kube-system" namespace to be "Ready" ...
	I0731 18:08:45.684313   73479 pod_ready.go:97] node "no-preload-673754" hosting pod "kube-proxy-hqxh6" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-673754" has status "Ready":"False"
	I0731 18:08:45.684347   73479 pod_ready.go:81] duration metric: took 403.272559ms for pod "kube-proxy-hqxh6" in "kube-system" namespace to be "Ready" ...
	E0731 18:08:45.684356   73479 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-673754" hosting pod "kube-proxy-hqxh6" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-673754" has status "Ready":"False"
	I0731 18:08:45.684362   73479 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-no-preload-673754" in "kube-system" namespace to be "Ready" ...
	I0731 18:08:46.082388   73479 pod_ready.go:97] node "no-preload-673754" hosting pod "kube-scheduler-no-preload-673754" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-673754" has status "Ready":"False"
	I0731 18:08:46.082419   73479 pod_ready.go:81] duration metric: took 398.048808ms for pod "kube-scheduler-no-preload-673754" in "kube-system" namespace to be "Ready" ...
	E0731 18:08:46.082432   73479 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-673754" hosting pod "kube-scheduler-no-preload-673754" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-673754" has status "Ready":"False"
	I0731 18:08:46.082442   73479 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-78fcd8795b-27pkr" in "kube-system" namespace to be "Ready" ...
	I0731 18:08:46.482445   73479 pod_ready.go:97] node "no-preload-673754" hosting pod "metrics-server-78fcd8795b-27pkr" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-673754" has status "Ready":"False"
	I0731 18:08:46.482472   73479 pod_ready.go:81] duration metric: took 400.02111ms for pod "metrics-server-78fcd8795b-27pkr" in "kube-system" namespace to be "Ready" ...
	E0731 18:08:46.482486   73479 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-673754" hosting pod "metrics-server-78fcd8795b-27pkr" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-673754" has status "Ready":"False"
	I0731 18:08:46.482493   73479 pod_ready.go:38] duration metric: took 1.230141723s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0731 18:08:46.482509   73479 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0731 18:08:46.495481   73479 ops.go:34] apiserver oom_adj: -16
	I0731 18:08:46.495502   73479 kubeadm.go:597] duration metric: took 8.65989212s to restartPrimaryControlPlane
	I0731 18:08:46.495513   73479 kubeadm.go:394] duration metric: took 8.71382049s to StartCluster
	I0731 18:08:46.495533   73479 settings.go:142] acquiring lock: {Name:mk8ac8269296bf0b887a00ff74caaad180005f89 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 18:08:46.495615   73479 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19349-8084/kubeconfig
	I0731 18:08:46.497426   73479 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19349-8084/kubeconfig: {Name:mkf2340bc1ada3c683f132aca37228d7588e1fdb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 18:08:46.497742   73479 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.61.126 Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0731 18:08:46.497816   73479 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0731 18:08:46.497911   73479 addons.go:69] Setting storage-provisioner=true in profile "no-preload-673754"
	I0731 18:08:46.497929   73479 addons.go:69] Setting default-storageclass=true in profile "no-preload-673754"
	I0731 18:08:46.497956   73479 addons.go:69] Setting metrics-server=true in profile "no-preload-673754"
	I0731 18:08:46.497973   73479 config.go:182] Loaded profile config "no-preload-673754": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0-beta.0
	I0731 18:08:46.497979   73479 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-673754"
	I0731 18:08:46.497988   73479 addons.go:234] Setting addon metrics-server=true in "no-preload-673754"
	W0731 18:08:46.498008   73479 addons.go:243] addon metrics-server should already be in state true
	I0731 18:08:46.497946   73479 addons.go:234] Setting addon storage-provisioner=true in "no-preload-673754"
	I0731 18:08:46.498056   73479 host.go:66] Checking if "no-preload-673754" exists ...
	W0731 18:08:46.498064   73479 addons.go:243] addon storage-provisioner should already be in state true
	I0731 18:08:46.498109   73479 host.go:66] Checking if "no-preload-673754" exists ...
	I0731 18:08:46.498315   73479 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 18:08:46.498315   73479 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 18:08:46.498333   73479 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 18:08:46.498340   73479 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 18:08:46.498448   73479 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 18:08:46.498470   73479 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 18:08:46.501144   73479 out.go:177] * Verifying Kubernetes components...
	I0731 18:08:46.502755   73479 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0731 18:08:46.514922   73479 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46797
	I0731 18:08:46.514923   73479 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44241
	I0731 18:08:46.515418   73479 main.go:141] libmachine: () Calling .GetVersion
	I0731 18:08:46.515618   73479 main.go:141] libmachine: () Calling .GetVersion
	I0731 18:08:46.515928   73479 main.go:141] libmachine: Using API Version  1
	I0731 18:08:46.515950   73479 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 18:08:46.516066   73479 main.go:141] libmachine: Using API Version  1
	I0731 18:08:46.516089   73479 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 18:08:46.516370   73479 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40219
	I0731 18:08:46.516440   73479 main.go:141] libmachine: () Calling .GetMachineName
	I0731 18:08:46.516663   73479 main.go:141] libmachine: () Calling .GetMachineName
	I0731 18:08:46.516809   73479 main.go:141] libmachine: () Calling .GetVersion
	I0731 18:08:46.516811   73479 main.go:141] libmachine: (no-preload-673754) Calling .GetState
	I0731 18:08:46.517213   73479 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 18:08:46.517247   73479 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 18:08:46.517280   73479 main.go:141] libmachine: Using API Version  1
	I0731 18:08:46.517302   73479 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 18:08:46.517618   73479 main.go:141] libmachine: () Calling .GetMachineName
	I0731 18:08:46.518191   73479 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 18:08:46.518220   73479 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 18:08:46.520511   73479 addons.go:234] Setting addon default-storageclass=true in "no-preload-673754"
	W0731 18:08:46.520536   73479 addons.go:243] addon default-storageclass should already be in state true
	I0731 18:08:46.520566   73479 host.go:66] Checking if "no-preload-673754" exists ...
	I0731 18:08:46.520917   73479 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 18:08:46.520968   73479 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 18:08:46.533349   73479 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38669
	I0731 18:08:46.533802   73479 main.go:141] libmachine: () Calling .GetVersion
	I0731 18:08:46.534250   73479 main.go:141] libmachine: Using API Version  1
	I0731 18:08:46.534272   73479 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 18:08:46.534582   73479 main.go:141] libmachine: () Calling .GetMachineName
	I0731 18:08:46.534720   73479 main.go:141] libmachine: (no-preload-673754) Calling .GetState
	I0731 18:08:46.535556   73479 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46055
	I0731 18:08:46.535979   73479 main.go:141] libmachine: () Calling .GetVersion
	I0731 18:08:46.536648   73479 main.go:141] libmachine: Using API Version  1
	I0731 18:08:46.536667   73479 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 18:08:46.537080   73479 main.go:141] libmachine: () Calling .GetMachineName
	I0731 18:08:46.537331   73479 main.go:141] libmachine: (no-preload-673754) Calling .DriverName
	I0731 18:08:46.537398   73479 main.go:141] libmachine: (no-preload-673754) Calling .GetState
	I0731 18:08:46.538365   73479 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39435
	I0731 18:08:46.538929   73479 main.go:141] libmachine: () Calling .GetVersion
	I0731 18:08:46.539194   73479 main.go:141] libmachine: (no-preload-673754) Calling .DriverName
	I0731 18:08:46.539401   73479 main.go:141] libmachine: Using API Version  1
	I0731 18:08:46.539419   73479 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 18:08:46.539766   73479 main.go:141] libmachine: () Calling .GetMachineName
	I0731 18:08:46.540360   73479 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0731 18:08:46.540447   73479 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 18:08:46.540801   73479 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 18:08:46.541139   73479 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0731 18:08:46.541916   73479 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0731 18:08:46.541932   73479 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0731 18:08:46.541952   73479 main.go:141] libmachine: (no-preload-673754) Calling .GetSSHHostname
	I0731 18:08:46.542506   73479 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0731 18:08:46.542524   73479 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0731 18:08:46.542541   73479 main.go:141] libmachine: (no-preload-673754) Calling .GetSSHHostname
	I0731 18:08:46.545293   73479 main.go:141] libmachine: (no-preload-673754) DBG | domain no-preload-673754 has defined MAC address 52:54:00:5a:ec:78 in network mk-no-preload-673754
	I0731 18:08:46.545631   73479 main.go:141] libmachine: (no-preload-673754) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:ec:78", ip: ""} in network mk-no-preload-673754: {Iface:virbr2 ExpiryTime:2024-07-31 19:08:12 +0000 UTC Type:0 Mac:52:54:00:5a:ec:78 Iaid: IPaddr:192.168.61.126 Prefix:24 Hostname:no-preload-673754 Clientid:01:52:54:00:5a:ec:78}
	I0731 18:08:46.545759   73479 main.go:141] libmachine: (no-preload-673754) DBG | domain no-preload-673754 has defined IP address 192.168.61.126 and MAC address 52:54:00:5a:ec:78 in network mk-no-preload-673754
	I0731 18:08:46.545829   73479 main.go:141] libmachine: (no-preload-673754) Calling .GetSSHPort
	I0731 18:08:46.545985   73479 main.go:141] libmachine: (no-preload-673754) Calling .GetSSHKeyPath
	I0731 18:08:46.546116   73479 main.go:141] libmachine: (no-preload-673754) Calling .GetSSHUsername
	I0731 18:08:46.546256   73479 sshutil.go:53] new ssh client: &{IP:192.168.61.126 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19349-8084/.minikube/machines/no-preload-673754/id_rsa Username:docker}
	I0731 18:08:46.546384   73479 main.go:141] libmachine: (no-preload-673754) DBG | domain no-preload-673754 has defined MAC address 52:54:00:5a:ec:78 in network mk-no-preload-673754
	I0731 18:08:46.546888   73479 main.go:141] libmachine: (no-preload-673754) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:ec:78", ip: ""} in network mk-no-preload-673754: {Iface:virbr2 ExpiryTime:2024-07-31 19:08:12 +0000 UTC Type:0 Mac:52:54:00:5a:ec:78 Iaid: IPaddr:192.168.61.126 Prefix:24 Hostname:no-preload-673754 Clientid:01:52:54:00:5a:ec:78}
	I0731 18:08:46.546907   73479 main.go:141] libmachine: (no-preload-673754) DBG | domain no-preload-673754 has defined IP address 192.168.61.126 and MAC address 52:54:00:5a:ec:78 in network mk-no-preload-673754
	I0731 18:08:46.546924   73479 main.go:141] libmachine: (no-preload-673754) Calling .GetSSHPort
	I0731 18:08:46.547090   73479 main.go:141] libmachine: (no-preload-673754) Calling .GetSSHKeyPath
	I0731 18:08:46.547256   73479 main.go:141] libmachine: (no-preload-673754) Calling .GetSSHUsername
	I0731 18:08:46.547434   73479 sshutil.go:53] new ssh client: &{IP:192.168.61.126 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19349-8084/.minikube/machines/no-preload-673754/id_rsa Username:docker}
	I0731 18:08:46.570759   73479 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40485
	I0731 18:08:46.571222   73479 main.go:141] libmachine: () Calling .GetVersion
	I0731 18:08:46.571668   73479 main.go:141] libmachine: Using API Version  1
	I0731 18:08:46.571688   73479 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 18:08:46.572207   73479 main.go:141] libmachine: () Calling .GetMachineName
	I0731 18:08:46.572367   73479 main.go:141] libmachine: (no-preload-673754) Calling .GetState
	I0731 18:08:46.574368   73479 main.go:141] libmachine: (no-preload-673754) Calling .DriverName
	I0731 18:08:46.574582   73479 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0731 18:08:46.574607   73479 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0731 18:08:46.574627   73479 main.go:141] libmachine: (no-preload-673754) Calling .GetSSHHostname
	I0731 18:08:46.577768   73479 main.go:141] libmachine: (no-preload-673754) DBG | domain no-preload-673754 has defined MAC address 52:54:00:5a:ec:78 in network mk-no-preload-673754
	I0731 18:08:46.578542   73479 main.go:141] libmachine: (no-preload-673754) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:ec:78", ip: ""} in network mk-no-preload-673754: {Iface:virbr2 ExpiryTime:2024-07-31 19:08:12 +0000 UTC Type:0 Mac:52:54:00:5a:ec:78 Iaid: IPaddr:192.168.61.126 Prefix:24 Hostname:no-preload-673754 Clientid:01:52:54:00:5a:ec:78}
	I0731 18:08:46.578567   73479 main.go:141] libmachine: (no-preload-673754) DBG | domain no-preload-673754 has defined IP address 192.168.61.126 and MAC address 52:54:00:5a:ec:78 in network mk-no-preload-673754
	I0731 18:08:46.578741   73479 main.go:141] libmachine: (no-preload-673754) Calling .GetSSHPort
	I0731 18:08:46.578911   73479 main.go:141] libmachine: (no-preload-673754) Calling .GetSSHKeyPath
	I0731 18:08:46.579047   73479 main.go:141] libmachine: (no-preload-673754) Calling .GetSSHUsername
	I0731 18:08:46.579459   73479 sshutil.go:53] new ssh client: &{IP:192.168.61.126 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19349-8084/.minikube/machines/no-preload-673754/id_rsa Username:docker}
	I0731 18:08:46.700752   73479 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0731 18:08:46.720967   73479 node_ready.go:35] waiting up to 6m0s for node "no-preload-673754" to be "Ready" ...
	I0731 18:08:46.798188   73479 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0731 18:08:46.802534   73479 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0731 18:08:46.802564   73479 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0731 18:08:46.828038   73479 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0731 18:08:46.859309   73479 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0731 18:08:46.859337   73479 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0731 18:08:46.921507   73479 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0731 18:08:46.921536   73479 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0731 18:08:46.958759   73479 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0731 18:08:48.106542   73479 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.278462071s)
	I0731 18:08:48.106599   73479 main.go:141] libmachine: Making call to close driver server
	I0731 18:08:48.106608   73479 main.go:141] libmachine: (no-preload-673754) Calling .Close
	I0731 18:08:48.107151   73479 main.go:141] libmachine: Successfully made call to close driver server
	I0731 18:08:48.107177   73479 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 18:08:48.107187   73479 main.go:141] libmachine: Making call to close driver server
	I0731 18:08:48.107196   73479 main.go:141] libmachine: (no-preload-673754) Calling .Close
	I0731 18:08:48.107601   73479 main.go:141] libmachine: (no-preload-673754) DBG | Closing plugin on server side
	I0731 18:08:48.107604   73479 main.go:141] libmachine: Successfully made call to close driver server
	I0731 18:08:48.107631   73479 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 18:08:48.107831   73479 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.309610972s)
	I0731 18:08:48.107872   73479 main.go:141] libmachine: Making call to close driver server
	I0731 18:08:48.107882   73479 main.go:141] libmachine: (no-preload-673754) Calling .Close
	I0731 18:08:48.108105   73479 main.go:141] libmachine: Successfully made call to close driver server
	I0731 18:08:48.108119   73479 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 18:08:48.108138   73479 main.go:141] libmachine: Making call to close driver server
	I0731 18:08:48.108150   73479 main.go:141] libmachine: (no-preload-673754) Calling .Close
	I0731 18:08:48.108351   73479 main.go:141] libmachine: Successfully made call to close driver server
	I0731 18:08:48.108367   73479 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 18:08:48.118038   73479 main.go:141] libmachine: Making call to close driver server
	I0731 18:08:48.118055   73479 main.go:141] libmachine: (no-preload-673754) Calling .Close
	I0731 18:08:48.118329   73479 main.go:141] libmachine: Successfully made call to close driver server
	I0731 18:08:48.118349   73479 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 18:08:48.128563   73479 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.169765123s)
	I0731 18:08:48.128606   73479 main.go:141] libmachine: Making call to close driver server
	I0731 18:08:48.128619   73479 main.go:141] libmachine: (no-preload-673754) Calling .Close
	I0731 18:08:48.128901   73479 main.go:141] libmachine: Successfully made call to close driver server
	I0731 18:08:48.128915   73479 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 18:08:48.128924   73479 main.go:141] libmachine: Making call to close driver server
	I0731 18:08:48.128932   73479 main.go:141] libmachine: (no-preload-673754) Calling .Close
	I0731 18:08:48.129137   73479 main.go:141] libmachine: Successfully made call to close driver server
	I0731 18:08:48.129152   73479 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 18:08:48.129162   73479 addons.go:475] Verifying addon metrics-server=true in "no-preload-673754"
	I0731 18:08:48.129174   73479 main.go:141] libmachine: (no-preload-673754) DBG | Closing plugin on server side
	I0731 18:08:48.130887   73479 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0731 18:08:46.271648   73800 pod_ready.go:102] pod "metrics-server-569cc877fc-fzxrw" in "kube-system" namespace has status "Ready":"False"
	I0731 18:08:48.271754   73800 pod_ready.go:102] pod "metrics-server-569cc877fc-fzxrw" in "kube-system" namespace has status "Ready":"False"
	I0731 18:08:45.478411   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:08:45.978802   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:08:46.478407   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:08:46.978134   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:08:47.479125   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:08:47.978991   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:08:48.478597   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:08:48.978742   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:08:49.479320   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:08:49.978288   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:08:46.779263   73696 pod_ready.go:102] pod "metrics-server-569cc877fc-64hp4" in "kube-system" namespace has status "Ready":"False"
	I0731 18:08:48.779361   73696 pod_ready.go:102] pod "metrics-server-569cc877fc-64hp4" in "kube-system" namespace has status "Ready":"False"
	I0731 18:08:48.131964   73479 addons.go:510] duration metric: took 1.634151286s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I0731 18:08:48.725682   73479 node_ready.go:53] node "no-preload-673754" has status "Ready":"False"
	I0731 18:08:51.231081   73479 node_ready.go:53] node "no-preload-673754" has status "Ready":"False"
	I0731 18:08:50.771387   73800 pod_ready.go:102] pod "metrics-server-569cc877fc-fzxrw" in "kube-system" namespace has status "Ready":"False"
	I0731 18:08:52.771438   73800 pod_ready.go:102] pod "metrics-server-569cc877fc-fzxrw" in "kube-system" namespace has status "Ready":"False"
	I0731 18:08:50.478112   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:08:50.978272   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:08:51.478319   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:08:51.978880   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:08:52.479176   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:08:52.979001   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:08:53.478508   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:08:53.978517   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:08:54.478857   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:08:54.978290   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:08:51.278348   73696 pod_ready.go:102] pod "metrics-server-569cc877fc-64hp4" in "kube-system" namespace has status "Ready":"False"
	I0731 18:08:53.278456   73696 pod_ready.go:102] pod "metrics-server-569cc877fc-64hp4" in "kube-system" namespace has status "Ready":"False"
	I0731 18:08:55.278495   73696 pod_ready.go:102] pod "metrics-server-569cc877fc-64hp4" in "kube-system" namespace has status "Ready":"False"
	I0731 18:08:53.725153   73479 node_ready.go:53] node "no-preload-673754" has status "Ready":"False"
	I0731 18:08:54.224475   73479 node_ready.go:49] node "no-preload-673754" has status "Ready":"True"
	I0731 18:08:54.224505   73479 node_ready.go:38] duration metric: took 7.503503116s for node "no-preload-673754" to be "Ready" ...
	I0731 18:08:54.224517   73479 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0731 18:08:54.231434   73479 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5cfdc65f69-k7clq" in "kube-system" namespace to be "Ready" ...
	I0731 18:08:56.237804   73479 pod_ready.go:102] pod "coredns-5cfdc65f69-k7clq" in "kube-system" namespace has status "Ready":"False"
	I0731 18:08:54.772597   73800 pod_ready.go:102] pod "metrics-server-569cc877fc-fzxrw" in "kube-system" namespace has status "Ready":"False"
	I0731 18:08:57.271778   73800 pod_ready.go:102] pod "metrics-server-569cc877fc-fzxrw" in "kube-system" namespace has status "Ready":"False"
	I0731 18:08:55.478727   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:08:55.978552   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:08:56.478246   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:08:56.978732   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:08:57.478262   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:08:57.978216   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:08:58.478212   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:08:58.978406   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:08:59.478270   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:08:59.978221   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:08:57.781459   73696 pod_ready.go:102] pod "metrics-server-569cc877fc-64hp4" in "kube-system" namespace has status "Ready":"False"
	I0731 18:09:00.278913   73696 pod_ready.go:102] pod "metrics-server-569cc877fc-64hp4" in "kube-system" namespace has status "Ready":"False"
	I0731 18:08:58.740148   73479 pod_ready.go:102] pod "coredns-5cfdc65f69-k7clq" in "kube-system" namespace has status "Ready":"False"
	I0731 18:09:01.237849   73479 pod_ready.go:92] pod "coredns-5cfdc65f69-k7clq" in "kube-system" namespace has status "Ready":"True"
	I0731 18:09:01.237874   73479 pod_ready.go:81] duration metric: took 7.00641308s for pod "coredns-5cfdc65f69-k7clq" in "kube-system" namespace to be "Ready" ...
	I0731 18:09:01.237887   73479 pod_ready.go:78] waiting up to 6m0s for pod "etcd-no-preload-673754" in "kube-system" namespace to be "Ready" ...
	I0731 18:09:01.242105   73479 pod_ready.go:92] pod "etcd-no-preload-673754" in "kube-system" namespace has status "Ready":"True"
	I0731 18:09:01.242122   73479 pod_ready.go:81] duration metric: took 4.229266ms for pod "etcd-no-preload-673754" in "kube-system" namespace to be "Ready" ...
	I0731 18:09:01.242133   73479 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-no-preload-673754" in "kube-system" namespace to be "Ready" ...
	I0731 18:09:01.246652   73479 pod_ready.go:92] pod "kube-apiserver-no-preload-673754" in "kube-system" namespace has status "Ready":"True"
	I0731 18:09:01.246674   73479 pod_ready.go:81] duration metric: took 4.534937ms for pod "kube-apiserver-no-preload-673754" in "kube-system" namespace to be "Ready" ...
	I0731 18:09:01.246686   73479 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-no-preload-673754" in "kube-system" namespace to be "Ready" ...
	I0731 18:09:01.251284   73479 pod_ready.go:92] pod "kube-controller-manager-no-preload-673754" in "kube-system" namespace has status "Ready":"True"
	I0731 18:09:01.251302   73479 pod_ready.go:81] duration metric: took 4.608584ms for pod "kube-controller-manager-no-preload-673754" in "kube-system" namespace to be "Ready" ...
	I0731 18:09:01.251321   73479 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-hqxh6" in "kube-system" namespace to be "Ready" ...
	I0731 18:09:01.255030   73479 pod_ready.go:92] pod "kube-proxy-hqxh6" in "kube-system" namespace has status "Ready":"True"
	I0731 18:09:01.255045   73479 pod_ready.go:81] duration metric: took 3.718917ms for pod "kube-proxy-hqxh6" in "kube-system" namespace to be "Ready" ...
	I0731 18:09:01.255052   73479 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-no-preload-673754" in "kube-system" namespace to be "Ready" ...
	I0731 18:09:01.636799   73479 pod_ready.go:92] pod "kube-scheduler-no-preload-673754" in "kube-system" namespace has status "Ready":"True"
	I0731 18:09:01.636826   73479 pod_ready.go:81] duration metric: took 381.767881ms for pod "kube-scheduler-no-preload-673754" in "kube-system" namespace to be "Ready" ...
	I0731 18:09:01.636835   73479 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-78fcd8795b-27pkr" in "kube-system" namespace to be "Ready" ...
	I0731 18:08:59.771686   73800 pod_ready.go:102] pod "metrics-server-569cc877fc-fzxrw" in "kube-system" namespace has status "Ready":"False"
	I0731 18:09:02.271396   73800 pod_ready.go:102] pod "metrics-server-569cc877fc-fzxrw" in "kube-system" namespace has status "Ready":"False"
	I0731 18:09:00.478785   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:09:00.978350   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:09:01.478635   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:09:01.978192   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:09:02.478480   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:09:02.979021   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:09:03.478366   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:09:03.978984   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:09:04.479143   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:09:04.978913   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:09:02.279613   73696 pod_ready.go:102] pod "metrics-server-569cc877fc-64hp4" in "kube-system" namespace has status "Ready":"False"
	I0731 18:09:04.778482   73696 pod_ready.go:102] pod "metrics-server-569cc877fc-64hp4" in "kube-system" namespace has status "Ready":"False"
	I0731 18:09:03.642978   73479 pod_ready.go:102] pod "metrics-server-78fcd8795b-27pkr" in "kube-system" namespace has status "Ready":"False"
	I0731 18:09:05.644941   73479 pod_ready.go:102] pod "metrics-server-78fcd8795b-27pkr" in "kube-system" namespace has status "Ready":"False"
	I0731 18:09:04.771938   73800 pod_ready.go:102] pod "metrics-server-569cc877fc-fzxrw" in "kube-system" namespace has status "Ready":"False"
	I0731 18:09:07.271165   73800 pod_ready.go:102] pod "metrics-server-569cc877fc-fzxrw" in "kube-system" namespace has status "Ready":"False"
	I0731 18:09:05.478608   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:09:05.978345   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:09:06.478435   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:09:06.978551   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:09:07.478131   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:09:07.978354   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:09:08.478977   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:09:08.979122   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:09:09.478279   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:09:09.978350   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:09:06.780364   73696 pod_ready.go:102] pod "metrics-server-569cc877fc-64hp4" in "kube-system" namespace has status "Ready":"False"
	I0731 18:09:09.278573   73696 pod_ready.go:102] pod "metrics-server-569cc877fc-64hp4" in "kube-system" namespace has status "Ready":"False"
	I0731 18:09:08.142974   73479 pod_ready.go:102] pod "metrics-server-78fcd8795b-27pkr" in "kube-system" namespace has status "Ready":"False"
	I0731 18:09:10.643136   73479 pod_ready.go:102] pod "metrics-server-78fcd8795b-27pkr" in "kube-system" namespace has status "Ready":"False"
	I0731 18:09:09.771950   73800 pod_ready.go:102] pod "metrics-server-569cc877fc-fzxrw" in "kube-system" namespace has status "Ready":"False"
	I0731 18:09:11.772464   73800 pod_ready.go:102] pod "metrics-server-569cc877fc-fzxrw" in "kube-system" namespace has status "Ready":"False"
	I0731 18:09:13.773164   73800 pod_ready.go:102] pod "metrics-server-569cc877fc-fzxrw" in "kube-system" namespace has status "Ready":"False"
	I0731 18:09:10.479086   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 18:09:10.479175   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 18:09:10.516364   74203 cri.go:89] found id: ""
	I0731 18:09:10.516389   74203 logs.go:276] 0 containers: []
	W0731 18:09:10.516405   74203 logs.go:278] No container was found matching "kube-apiserver"
	I0731 18:09:10.516411   74203 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 18:09:10.516464   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 18:09:10.549398   74203 cri.go:89] found id: ""
	I0731 18:09:10.549422   74203 logs.go:276] 0 containers: []
	W0731 18:09:10.549433   74203 logs.go:278] No container was found matching "etcd"
	I0731 18:09:10.549440   74203 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 18:09:10.549503   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 18:09:10.584290   74203 cri.go:89] found id: ""
	I0731 18:09:10.584314   74203 logs.go:276] 0 containers: []
	W0731 18:09:10.584322   74203 logs.go:278] No container was found matching "coredns"
	I0731 18:09:10.584327   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 18:09:10.584381   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 18:09:10.615832   74203 cri.go:89] found id: ""
	I0731 18:09:10.615860   74203 logs.go:276] 0 containers: []
	W0731 18:09:10.615871   74203 logs.go:278] No container was found matching "kube-scheduler"
	I0731 18:09:10.615878   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 18:09:10.615941   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 18:09:10.647597   74203 cri.go:89] found id: ""
	I0731 18:09:10.647617   74203 logs.go:276] 0 containers: []
	W0731 18:09:10.647624   74203 logs.go:278] No container was found matching "kube-proxy"
	I0731 18:09:10.647629   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 18:09:10.647686   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 18:09:10.680981   74203 cri.go:89] found id: ""
	I0731 18:09:10.681016   74203 logs.go:276] 0 containers: []
	W0731 18:09:10.681027   74203 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 18:09:10.681033   74203 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 18:09:10.681093   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 18:09:10.713798   74203 cri.go:89] found id: ""
	I0731 18:09:10.713839   74203 logs.go:276] 0 containers: []
	W0731 18:09:10.713851   74203 logs.go:278] No container was found matching "kindnet"
	I0731 18:09:10.713865   74203 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 18:09:10.713937   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 18:09:10.746378   74203 cri.go:89] found id: ""
	I0731 18:09:10.746405   74203 logs.go:276] 0 containers: []
	W0731 18:09:10.746413   74203 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 18:09:10.746423   74203 logs.go:123] Gathering logs for kubelet ...
	I0731 18:09:10.746439   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 18:09:10.799156   74203 logs.go:123] Gathering logs for dmesg ...
	I0731 18:09:10.799187   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 18:09:10.812388   74203 logs.go:123] Gathering logs for describe nodes ...
	I0731 18:09:10.812413   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 18:09:10.932251   74203 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 18:09:10.932271   74203 logs.go:123] Gathering logs for CRI-O ...
	I0731 18:09:10.932285   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 18:09:10.996810   74203 logs.go:123] Gathering logs for container status ...
	I0731 18:09:10.996840   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 18:09:13.533936   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:09:13.549194   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 18:09:13.549250   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 18:09:13.599350   74203 cri.go:89] found id: ""
	I0731 18:09:13.599389   74203 logs.go:276] 0 containers: []
	W0731 18:09:13.599400   74203 logs.go:278] No container was found matching "kube-apiserver"
	I0731 18:09:13.599407   74203 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 18:09:13.599466   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 18:09:13.651736   74203 cri.go:89] found id: ""
	I0731 18:09:13.651771   74203 logs.go:276] 0 containers: []
	W0731 18:09:13.651791   74203 logs.go:278] No container was found matching "etcd"
	I0731 18:09:13.651798   74203 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 18:09:13.651855   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 18:09:13.699804   74203 cri.go:89] found id: ""
	I0731 18:09:13.699832   74203 logs.go:276] 0 containers: []
	W0731 18:09:13.699841   74203 logs.go:278] No container was found matching "coredns"
	I0731 18:09:13.699846   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 18:09:13.699906   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 18:09:13.732760   74203 cri.go:89] found id: ""
	I0731 18:09:13.732781   74203 logs.go:276] 0 containers: []
	W0731 18:09:13.732788   74203 logs.go:278] No container was found matching "kube-scheduler"
	I0731 18:09:13.732794   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 18:09:13.732849   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 18:09:13.766865   74203 cri.go:89] found id: ""
	I0731 18:09:13.766892   74203 logs.go:276] 0 containers: []
	W0731 18:09:13.766902   74203 logs.go:278] No container was found matching "kube-proxy"
	I0731 18:09:13.766910   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 18:09:13.766964   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 18:09:13.804706   74203 cri.go:89] found id: ""
	I0731 18:09:13.804733   74203 logs.go:276] 0 containers: []
	W0731 18:09:13.804743   74203 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 18:09:13.804757   74203 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 18:09:13.804821   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 18:09:13.838432   74203 cri.go:89] found id: ""
	I0731 18:09:13.838461   74203 logs.go:276] 0 containers: []
	W0731 18:09:13.838472   74203 logs.go:278] No container was found matching "kindnet"
	I0731 18:09:13.838479   74203 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 18:09:13.838534   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 18:09:13.870455   74203 cri.go:89] found id: ""
	I0731 18:09:13.870480   74203 logs.go:276] 0 containers: []
	W0731 18:09:13.870490   74203 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 18:09:13.870498   74203 logs.go:123] Gathering logs for kubelet ...
	I0731 18:09:13.870510   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 18:09:13.922911   74203 logs.go:123] Gathering logs for dmesg ...
	I0731 18:09:13.922947   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 18:09:13.936075   74203 logs.go:123] Gathering logs for describe nodes ...
	I0731 18:09:13.936098   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 18:09:14.006766   74203 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 18:09:14.006790   74203 logs.go:123] Gathering logs for CRI-O ...
	I0731 18:09:14.006810   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 18:09:14.071066   74203 logs.go:123] Gathering logs for container status ...
	I0731 18:09:14.071100   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 18:09:11.278892   73696 pod_ready.go:102] pod "metrics-server-569cc877fc-64hp4" in "kube-system" namespace has status "Ready":"False"
	I0731 18:09:13.279644   73696 pod_ready.go:102] pod "metrics-server-569cc877fc-64hp4" in "kube-system" namespace has status "Ready":"False"
	I0731 18:09:15.280298   73696 pod_ready.go:102] pod "metrics-server-569cc877fc-64hp4" in "kube-system" namespace has status "Ready":"False"
	I0731 18:09:12.643341   73479 pod_ready.go:102] pod "metrics-server-78fcd8795b-27pkr" in "kube-system" namespace has status "Ready":"False"
	I0731 18:09:14.643636   73479 pod_ready.go:102] pod "metrics-server-78fcd8795b-27pkr" in "kube-system" namespace has status "Ready":"False"
	I0731 18:09:16.280976   73800 pod_ready.go:102] pod "metrics-server-569cc877fc-fzxrw" in "kube-system" namespace has status "Ready":"False"
	I0731 18:09:18.772338   73800 pod_ready.go:102] pod "metrics-server-569cc877fc-fzxrw" in "kube-system" namespace has status "Ready":"False"
	I0731 18:09:16.615212   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:09:16.627439   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 18:09:16.627499   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 18:09:16.660764   74203 cri.go:89] found id: ""
	I0731 18:09:16.660785   74203 logs.go:276] 0 containers: []
	W0731 18:09:16.660792   74203 logs.go:278] No container was found matching "kube-apiserver"
	I0731 18:09:16.660798   74203 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 18:09:16.660842   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 18:09:16.697154   74203 cri.go:89] found id: ""
	I0731 18:09:16.697182   74203 logs.go:276] 0 containers: []
	W0731 18:09:16.697196   74203 logs.go:278] No container was found matching "etcd"
	I0731 18:09:16.697201   74203 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 18:09:16.697259   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 18:09:16.730263   74203 cri.go:89] found id: ""
	I0731 18:09:16.730284   74203 logs.go:276] 0 containers: []
	W0731 18:09:16.730291   74203 logs.go:278] No container was found matching "coredns"
	I0731 18:09:16.730318   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 18:09:16.730369   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 18:09:16.765226   74203 cri.go:89] found id: ""
	I0731 18:09:16.765249   74203 logs.go:276] 0 containers: []
	W0731 18:09:16.765257   74203 logs.go:278] No container was found matching "kube-scheduler"
	I0731 18:09:16.765262   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 18:09:16.765336   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 18:09:16.800502   74203 cri.go:89] found id: ""
	I0731 18:09:16.800528   74203 logs.go:276] 0 containers: []
	W0731 18:09:16.800535   74203 logs.go:278] No container was found matching "kube-proxy"
	I0731 18:09:16.800541   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 18:09:16.800599   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 18:09:16.837391   74203 cri.go:89] found id: ""
	I0731 18:09:16.837418   74203 logs.go:276] 0 containers: []
	W0731 18:09:16.837427   74203 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 18:09:16.837435   74203 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 18:09:16.837490   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 18:09:16.867606   74203 cri.go:89] found id: ""
	I0731 18:09:16.867628   74203 logs.go:276] 0 containers: []
	W0731 18:09:16.867637   74203 logs.go:278] No container was found matching "kindnet"
	I0731 18:09:16.867642   74203 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 18:09:16.867696   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 18:09:16.901639   74203 cri.go:89] found id: ""
	I0731 18:09:16.901669   74203 logs.go:276] 0 containers: []
	W0731 18:09:16.901681   74203 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 18:09:16.901693   74203 logs.go:123] Gathering logs for kubelet ...
	I0731 18:09:16.901707   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 18:09:16.951692   74203 logs.go:123] Gathering logs for dmesg ...
	I0731 18:09:16.951729   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 18:09:16.965069   74203 logs.go:123] Gathering logs for describe nodes ...
	I0731 18:09:16.965101   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 18:09:17.040337   74203 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 18:09:17.040358   74203 logs.go:123] Gathering logs for CRI-O ...
	I0731 18:09:17.040371   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 18:09:17.115058   74203 logs.go:123] Gathering logs for container status ...
	I0731 18:09:17.115093   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 18:09:19.651538   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:09:19.663682   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 18:09:19.663739   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 18:09:19.697851   74203 cri.go:89] found id: ""
	I0731 18:09:19.697879   74203 logs.go:276] 0 containers: []
	W0731 18:09:19.697894   74203 logs.go:278] No container was found matching "kube-apiserver"
	I0731 18:09:19.697900   74203 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 18:09:19.697996   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 18:09:19.732745   74203 cri.go:89] found id: ""
	I0731 18:09:19.732772   74203 logs.go:276] 0 containers: []
	W0731 18:09:19.732783   74203 logs.go:278] No container was found matching "etcd"
	I0731 18:09:19.732790   74203 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 18:09:19.732855   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 18:09:19.763843   74203 cri.go:89] found id: ""
	I0731 18:09:19.763865   74203 logs.go:276] 0 containers: []
	W0731 18:09:19.763873   74203 logs.go:278] No container was found matching "coredns"
	I0731 18:09:19.763878   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 18:09:19.763934   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 18:09:19.797398   74203 cri.go:89] found id: ""
	I0731 18:09:19.797422   74203 logs.go:276] 0 containers: []
	W0731 18:09:19.797429   74203 logs.go:278] No container was found matching "kube-scheduler"
	I0731 18:09:19.797434   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 18:09:19.797504   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 18:09:19.833239   74203 cri.go:89] found id: ""
	I0731 18:09:19.833268   74203 logs.go:276] 0 containers: []
	W0731 18:09:19.833278   74203 logs.go:278] No container was found matching "kube-proxy"
	I0731 18:09:19.833284   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 18:09:19.833340   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 18:09:19.866135   74203 cri.go:89] found id: ""
	I0731 18:09:19.866163   74203 logs.go:276] 0 containers: []
	W0731 18:09:19.866173   74203 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 18:09:19.866181   74203 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 18:09:19.866242   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 18:09:19.900581   74203 cri.go:89] found id: ""
	I0731 18:09:19.900606   74203 logs.go:276] 0 containers: []
	W0731 18:09:19.900615   74203 logs.go:278] No container was found matching "kindnet"
	I0731 18:09:19.900621   74203 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 18:09:19.900720   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 18:09:19.936451   74203 cri.go:89] found id: ""
	I0731 18:09:19.936475   74203 logs.go:276] 0 containers: []
	W0731 18:09:19.936487   74203 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 18:09:19.936496   74203 logs.go:123] Gathering logs for kubelet ...
	I0731 18:09:19.936508   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 18:09:19.990522   74203 logs.go:123] Gathering logs for dmesg ...
	I0731 18:09:19.990559   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 18:09:20.003460   74203 logs.go:123] Gathering logs for describe nodes ...
	I0731 18:09:20.003487   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 18:09:20.070869   74203 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 18:09:20.070893   74203 logs.go:123] Gathering logs for CRI-O ...
	I0731 18:09:20.070912   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 18:09:20.148316   74203 logs.go:123] Gathering logs for container status ...
	I0731 18:09:20.148354   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 18:09:17.779144   73696 pod_ready.go:102] pod "metrics-server-569cc877fc-64hp4" in "kube-system" namespace has status "Ready":"False"
	I0731 18:09:19.781539   73696 pod_ready.go:102] pod "metrics-server-569cc877fc-64hp4" in "kube-system" namespace has status "Ready":"False"
	I0731 18:09:17.143894   73479 pod_ready.go:102] pod "metrics-server-78fcd8795b-27pkr" in "kube-system" namespace has status "Ready":"False"
	I0731 18:09:19.642139   73479 pod_ready.go:102] pod "metrics-server-78fcd8795b-27pkr" in "kube-system" namespace has status "Ready":"False"
	I0731 18:09:21.642234   73479 pod_ready.go:102] pod "metrics-server-78fcd8795b-27pkr" in "kube-system" namespace has status "Ready":"False"
	I0731 18:09:21.271074   73800 pod_ready.go:102] pod "metrics-server-569cc877fc-fzxrw" in "kube-system" namespace has status "Ready":"False"
	I0731 18:09:23.771002   73800 pod_ready.go:102] pod "metrics-server-569cc877fc-fzxrw" in "kube-system" namespace has status "Ready":"False"
	I0731 18:09:22.685964   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:09:22.698740   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 18:09:22.698814   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 18:09:22.735321   74203 cri.go:89] found id: ""
	I0731 18:09:22.735350   74203 logs.go:276] 0 containers: []
	W0731 18:09:22.735360   74203 logs.go:278] No container was found matching "kube-apiserver"
	I0731 18:09:22.735367   74203 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 18:09:22.735428   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 18:09:22.767689   74203 cri.go:89] found id: ""
	I0731 18:09:22.767718   74203 logs.go:276] 0 containers: []
	W0731 18:09:22.767729   74203 logs.go:278] No container was found matching "etcd"
	I0731 18:09:22.767736   74203 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 18:09:22.767795   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 18:09:22.804010   74203 cri.go:89] found id: ""
	I0731 18:09:22.804036   74203 logs.go:276] 0 containers: []
	W0731 18:09:22.804045   74203 logs.go:278] No container was found matching "coredns"
	I0731 18:09:22.804050   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 18:09:22.804101   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 18:09:22.836820   74203 cri.go:89] found id: ""
	I0731 18:09:22.836847   74203 logs.go:276] 0 containers: []
	W0731 18:09:22.836858   74203 logs.go:278] No container was found matching "kube-scheduler"
	I0731 18:09:22.836874   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 18:09:22.836933   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 18:09:22.870163   74203 cri.go:89] found id: ""
	I0731 18:09:22.870187   74203 logs.go:276] 0 containers: []
	W0731 18:09:22.870194   74203 logs.go:278] No container was found matching "kube-proxy"
	I0731 18:09:22.870199   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 18:09:22.870270   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 18:09:22.905926   74203 cri.go:89] found id: ""
	I0731 18:09:22.905951   74203 logs.go:276] 0 containers: []
	W0731 18:09:22.905959   74203 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 18:09:22.905965   74203 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 18:09:22.906020   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 18:09:22.938926   74203 cri.go:89] found id: ""
	I0731 18:09:22.938949   74203 logs.go:276] 0 containers: []
	W0731 18:09:22.938957   74203 logs.go:278] No container was found matching "kindnet"
	I0731 18:09:22.938963   74203 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 18:09:22.939008   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 18:09:22.975150   74203 cri.go:89] found id: ""
	I0731 18:09:22.975185   74203 logs.go:276] 0 containers: []
	W0731 18:09:22.975194   74203 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 18:09:22.975204   74203 logs.go:123] Gathering logs for describe nodes ...
	I0731 18:09:22.975219   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 18:09:23.043265   74203 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 18:09:23.043290   74203 logs.go:123] Gathering logs for CRI-O ...
	I0731 18:09:23.043302   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 18:09:23.122681   74203 logs.go:123] Gathering logs for container status ...
	I0731 18:09:23.122717   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 18:09:23.161745   74203 logs.go:123] Gathering logs for kubelet ...
	I0731 18:09:23.161769   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 18:09:23.211274   74203 logs.go:123] Gathering logs for dmesg ...
	I0731 18:09:23.211305   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 18:09:22.278664   73696 pod_ready.go:102] pod "metrics-server-569cc877fc-64hp4" in "kube-system" namespace has status "Ready":"False"
	I0731 18:09:24.778771   73696 pod_ready.go:102] pod "metrics-server-569cc877fc-64hp4" in "kube-system" namespace has status "Ready":"False"
	I0731 18:09:23.643871   73479 pod_ready.go:102] pod "metrics-server-78fcd8795b-27pkr" in "kube-system" namespace has status "Ready":"False"
	I0731 18:09:26.143509   73479 pod_ready.go:102] pod "metrics-server-78fcd8795b-27pkr" in "kube-system" namespace has status "Ready":"False"
	I0731 18:09:25.771922   73800 pod_ready.go:102] pod "metrics-server-569cc877fc-fzxrw" in "kube-system" namespace has status "Ready":"False"
	I0731 18:09:27.772156   73800 pod_ready.go:102] pod "metrics-server-569cc877fc-fzxrw" in "kube-system" namespace has status "Ready":"False"
	I0731 18:09:25.724702   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:09:25.739335   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 18:09:25.739415   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 18:09:25.778238   74203 cri.go:89] found id: ""
	I0731 18:09:25.778264   74203 logs.go:276] 0 containers: []
	W0731 18:09:25.778274   74203 logs.go:278] No container was found matching "kube-apiserver"
	I0731 18:09:25.778282   74203 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 18:09:25.778349   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 18:09:25.816530   74203 cri.go:89] found id: ""
	I0731 18:09:25.816566   74203 logs.go:276] 0 containers: []
	W0731 18:09:25.816579   74203 logs.go:278] No container was found matching "etcd"
	I0731 18:09:25.816587   74203 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 18:09:25.816652   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 18:09:25.853524   74203 cri.go:89] found id: ""
	I0731 18:09:25.853562   74203 logs.go:276] 0 containers: []
	W0731 18:09:25.853575   74203 logs.go:278] No container was found matching "coredns"
	I0731 18:09:25.853583   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 18:09:25.853661   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 18:09:25.889690   74203 cri.go:89] found id: ""
	I0731 18:09:25.889719   74203 logs.go:276] 0 containers: []
	W0731 18:09:25.889728   74203 logs.go:278] No container was found matching "kube-scheduler"
	I0731 18:09:25.889734   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 18:09:25.889803   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 18:09:25.922409   74203 cri.go:89] found id: ""
	I0731 18:09:25.922441   74203 logs.go:276] 0 containers: []
	W0731 18:09:25.922452   74203 logs.go:278] No container was found matching "kube-proxy"
	I0731 18:09:25.922459   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 18:09:25.922512   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 18:09:25.956849   74203 cri.go:89] found id: ""
	I0731 18:09:25.956876   74203 logs.go:276] 0 containers: []
	W0731 18:09:25.956886   74203 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 18:09:25.956893   74203 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 18:09:25.956958   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 18:09:25.994190   74203 cri.go:89] found id: ""
	I0731 18:09:25.994212   74203 logs.go:276] 0 containers: []
	W0731 18:09:25.994220   74203 logs.go:278] No container was found matching "kindnet"
	I0731 18:09:25.994225   74203 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 18:09:25.994270   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 18:09:26.027980   74203 cri.go:89] found id: ""
	I0731 18:09:26.028005   74203 logs.go:276] 0 containers: []
	W0731 18:09:26.028014   74203 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 18:09:26.028025   74203 logs.go:123] Gathering logs for kubelet ...
	I0731 18:09:26.028044   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 18:09:26.076627   74203 logs.go:123] Gathering logs for dmesg ...
	I0731 18:09:26.076661   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 18:09:26.089439   74203 logs.go:123] Gathering logs for describe nodes ...
	I0731 18:09:26.089464   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 18:09:26.167298   74203 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 18:09:26.167319   74203 logs.go:123] Gathering logs for CRI-O ...
	I0731 18:09:26.167333   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 18:09:26.244611   74203 logs.go:123] Gathering logs for container status ...
	I0731 18:09:26.244644   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 18:09:28.787238   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:09:28.800136   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 18:09:28.800221   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 18:09:28.843038   74203 cri.go:89] found id: ""
	I0731 18:09:28.843062   74203 logs.go:276] 0 containers: []
	W0731 18:09:28.843070   74203 logs.go:278] No container was found matching "kube-apiserver"
	I0731 18:09:28.843076   74203 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 18:09:28.843154   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 18:09:28.876979   74203 cri.go:89] found id: ""
	I0731 18:09:28.877010   74203 logs.go:276] 0 containers: []
	W0731 18:09:28.877021   74203 logs.go:278] No container was found matching "etcd"
	I0731 18:09:28.877028   74203 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 18:09:28.877095   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 18:09:28.913105   74203 cri.go:89] found id: ""
	I0731 18:09:28.913137   74203 logs.go:276] 0 containers: []
	W0731 18:09:28.913147   74203 logs.go:278] No container was found matching "coredns"
	I0731 18:09:28.913155   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 18:09:28.913216   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 18:09:28.949113   74203 cri.go:89] found id: ""
	I0731 18:09:28.949144   74203 logs.go:276] 0 containers: []
	W0731 18:09:28.949153   74203 logs.go:278] No container was found matching "kube-scheduler"
	I0731 18:09:28.949160   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 18:09:28.949208   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 18:09:28.983159   74203 cri.go:89] found id: ""
	I0731 18:09:28.983187   74203 logs.go:276] 0 containers: []
	W0731 18:09:28.983195   74203 logs.go:278] No container was found matching "kube-proxy"
	I0731 18:09:28.983200   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 18:09:28.983276   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 18:09:29.016316   74203 cri.go:89] found id: ""
	I0731 18:09:29.016356   74203 logs.go:276] 0 containers: []
	W0731 18:09:29.016364   74203 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 18:09:29.016370   74203 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 18:09:29.016419   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 18:09:29.050015   74203 cri.go:89] found id: ""
	I0731 18:09:29.050047   74203 logs.go:276] 0 containers: []
	W0731 18:09:29.050058   74203 logs.go:278] No container was found matching "kindnet"
	I0731 18:09:29.050069   74203 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 18:09:29.050124   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 18:09:29.084711   74203 cri.go:89] found id: ""
	I0731 18:09:29.084739   74203 logs.go:276] 0 containers: []
	W0731 18:09:29.084749   74203 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 18:09:29.084760   74203 logs.go:123] Gathering logs for kubelet ...
	I0731 18:09:29.084777   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 18:09:29.135474   74203 logs.go:123] Gathering logs for dmesg ...
	I0731 18:09:29.135516   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 18:09:29.149989   74203 logs.go:123] Gathering logs for describe nodes ...
	I0731 18:09:29.150022   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 18:09:29.223652   74203 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 18:09:29.223676   74203 logs.go:123] Gathering logs for CRI-O ...
	I0731 18:09:29.223688   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 18:09:29.307949   74203 logs.go:123] Gathering logs for container status ...
	I0731 18:09:29.307983   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 18:09:26.779082   73696 pod_ready.go:102] pod "metrics-server-569cc877fc-64hp4" in "kube-system" namespace has status "Ready":"False"
	I0731 18:09:29.280030   73696 pod_ready.go:102] pod "metrics-server-569cc877fc-64hp4" in "kube-system" namespace has status "Ready":"False"
	I0731 18:09:28.143957   73479 pod_ready.go:102] pod "metrics-server-78fcd8795b-27pkr" in "kube-system" namespace has status "Ready":"False"
	I0731 18:09:30.643349   73479 pod_ready.go:102] pod "metrics-server-78fcd8795b-27pkr" in "kube-system" namespace has status "Ready":"False"
	I0731 18:09:30.271524   73800 pod_ready.go:102] pod "metrics-server-569cc877fc-fzxrw" in "kube-system" namespace has status "Ready":"False"
	I0731 18:09:32.271862   73800 pod_ready.go:102] pod "metrics-server-569cc877fc-fzxrw" in "kube-system" namespace has status "Ready":"False"
	I0731 18:09:31.848760   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:09:31.861409   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 18:09:31.861470   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 18:09:31.894485   74203 cri.go:89] found id: ""
	I0731 18:09:31.894505   74203 logs.go:276] 0 containers: []
	W0731 18:09:31.894513   74203 logs.go:278] No container was found matching "kube-apiserver"
	I0731 18:09:31.894518   74203 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 18:09:31.894563   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 18:09:31.926760   74203 cri.go:89] found id: ""
	I0731 18:09:31.926784   74203 logs.go:276] 0 containers: []
	W0731 18:09:31.926791   74203 logs.go:278] No container was found matching "etcd"
	I0731 18:09:31.926797   74203 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 18:09:31.926857   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 18:09:31.963010   74203 cri.go:89] found id: ""
	I0731 18:09:31.963042   74203 logs.go:276] 0 containers: []
	W0731 18:09:31.963055   74203 logs.go:278] No container was found matching "coredns"
	I0731 18:09:31.963062   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 18:09:31.963165   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 18:09:31.995221   74203 cri.go:89] found id: ""
	I0731 18:09:31.995249   74203 logs.go:276] 0 containers: []
	W0731 18:09:31.995260   74203 logs.go:278] No container was found matching "kube-scheduler"
	I0731 18:09:31.995268   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 18:09:31.995333   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 18:09:32.033912   74203 cri.go:89] found id: ""
	I0731 18:09:32.033942   74203 logs.go:276] 0 containers: []
	W0731 18:09:32.033955   74203 logs.go:278] No container was found matching "kube-proxy"
	I0731 18:09:32.033963   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 18:09:32.034038   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 18:09:32.066416   74203 cri.go:89] found id: ""
	I0731 18:09:32.066446   74203 logs.go:276] 0 containers: []
	W0731 18:09:32.066477   74203 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 18:09:32.066486   74203 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 18:09:32.066549   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 18:09:32.100097   74203 cri.go:89] found id: ""
	I0731 18:09:32.100121   74203 logs.go:276] 0 containers: []
	W0731 18:09:32.100129   74203 logs.go:278] No container was found matching "kindnet"
	I0731 18:09:32.100135   74203 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 18:09:32.100191   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 18:09:32.133061   74203 cri.go:89] found id: ""
	I0731 18:09:32.133088   74203 logs.go:276] 0 containers: []
	W0731 18:09:32.133096   74203 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 18:09:32.133106   74203 logs.go:123] Gathering logs for container status ...
	I0731 18:09:32.133120   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 18:09:32.169869   74203 logs.go:123] Gathering logs for kubelet ...
	I0731 18:09:32.169897   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 18:09:32.218668   74203 logs.go:123] Gathering logs for dmesg ...
	I0731 18:09:32.218707   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 18:09:32.231016   74203 logs.go:123] Gathering logs for describe nodes ...
	I0731 18:09:32.231039   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 18:09:32.304319   74203 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 18:09:32.304342   74203 logs.go:123] Gathering logs for CRI-O ...
	I0731 18:09:32.304353   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 18:09:34.880423   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:09:34.893775   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 18:09:34.893853   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 18:09:34.925073   74203 cri.go:89] found id: ""
	I0731 18:09:34.925101   74203 logs.go:276] 0 containers: []
	W0731 18:09:34.925109   74203 logs.go:278] No container was found matching "kube-apiserver"
	I0731 18:09:34.925115   74203 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 18:09:34.925178   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 18:09:34.960870   74203 cri.go:89] found id: ""
	I0731 18:09:34.960896   74203 logs.go:276] 0 containers: []
	W0731 18:09:34.960904   74203 logs.go:278] No container was found matching "etcd"
	I0731 18:09:34.960910   74203 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 18:09:34.960961   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 18:09:34.996290   74203 cri.go:89] found id: ""
	I0731 18:09:34.996332   74203 logs.go:276] 0 containers: []
	W0731 18:09:34.996341   74203 logs.go:278] No container was found matching "coredns"
	I0731 18:09:34.996347   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 18:09:34.996401   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 18:09:35.027900   74203 cri.go:89] found id: ""
	I0731 18:09:35.027932   74203 logs.go:276] 0 containers: []
	W0731 18:09:35.027940   74203 logs.go:278] No container was found matching "kube-scheduler"
	I0731 18:09:35.027945   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 18:09:35.028004   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 18:09:35.060533   74203 cri.go:89] found id: ""
	I0731 18:09:35.060562   74203 logs.go:276] 0 containers: []
	W0731 18:09:35.060579   74203 logs.go:278] No container was found matching "kube-proxy"
	I0731 18:09:35.060586   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 18:09:35.060653   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 18:09:35.095307   74203 cri.go:89] found id: ""
	I0731 18:09:35.095339   74203 logs.go:276] 0 containers: []
	W0731 18:09:35.095348   74203 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 18:09:35.095355   74203 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 18:09:35.095421   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 18:09:35.127060   74203 cri.go:89] found id: ""
	I0731 18:09:35.127082   74203 logs.go:276] 0 containers: []
	W0731 18:09:35.127090   74203 logs.go:278] No container was found matching "kindnet"
	I0731 18:09:35.127095   74203 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 18:09:35.127169   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 18:09:35.161300   74203 cri.go:89] found id: ""
	I0731 18:09:35.161328   74203 logs.go:276] 0 containers: []
	W0731 18:09:35.161339   74203 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 18:09:35.161350   74203 logs.go:123] Gathering logs for describe nodes ...
	I0731 18:09:35.161369   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 18:09:35.233033   74203 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 18:09:35.233060   74203 logs.go:123] Gathering logs for CRI-O ...
	I0731 18:09:35.233074   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 18:09:35.313279   74203 logs.go:123] Gathering logs for container status ...
	I0731 18:09:35.313311   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 18:09:31.779160   73696 pod_ready.go:102] pod "metrics-server-569cc877fc-64hp4" in "kube-system" namespace has status "Ready":"False"
	I0731 18:09:33.779209   73696 pod_ready.go:102] pod "metrics-server-569cc877fc-64hp4" in "kube-system" namespace has status "Ready":"False"
	I0731 18:09:32.644329   73479 pod_ready.go:102] pod "metrics-server-78fcd8795b-27pkr" in "kube-system" namespace has status "Ready":"False"
	I0731 18:09:35.143744   73479 pod_ready.go:102] pod "metrics-server-78fcd8795b-27pkr" in "kube-system" namespace has status "Ready":"False"
	I0731 18:09:34.774758   73800 pod_ready.go:102] pod "metrics-server-569cc877fc-fzxrw" in "kube-system" namespace has status "Ready":"False"
	I0731 18:09:37.271690   73800 pod_ready.go:102] pod "metrics-server-569cc877fc-fzxrw" in "kube-system" namespace has status "Ready":"False"
	I0731 18:09:35.356120   74203 logs.go:123] Gathering logs for kubelet ...
	I0731 18:09:35.356145   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 18:09:35.408231   74203 logs.go:123] Gathering logs for dmesg ...
	I0731 18:09:35.408263   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 18:09:37.921242   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:09:37.933986   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 18:09:37.934044   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 18:09:37.964524   74203 cri.go:89] found id: ""
	I0731 18:09:37.964558   74203 logs.go:276] 0 containers: []
	W0731 18:09:37.964567   74203 logs.go:278] No container was found matching "kube-apiserver"
	I0731 18:09:37.964574   74203 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 18:09:37.964632   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 18:09:37.998157   74203 cri.go:89] found id: ""
	I0731 18:09:37.998183   74203 logs.go:276] 0 containers: []
	W0731 18:09:37.998191   74203 logs.go:278] No container was found matching "etcd"
	I0731 18:09:37.998196   74203 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 18:09:37.998257   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 18:09:38.034611   74203 cri.go:89] found id: ""
	I0731 18:09:38.034637   74203 logs.go:276] 0 containers: []
	W0731 18:09:38.034645   74203 logs.go:278] No container was found matching "coredns"
	I0731 18:09:38.034650   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 18:09:38.034708   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 18:09:38.068005   74203 cri.go:89] found id: ""
	I0731 18:09:38.068029   74203 logs.go:276] 0 containers: []
	W0731 18:09:38.068039   74203 logs.go:278] No container was found matching "kube-scheduler"
	I0731 18:09:38.068047   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 18:09:38.068104   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 18:09:38.106110   74203 cri.go:89] found id: ""
	I0731 18:09:38.106133   74203 logs.go:276] 0 containers: []
	W0731 18:09:38.106141   74203 logs.go:278] No container was found matching "kube-proxy"
	I0731 18:09:38.106146   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 18:09:38.106192   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 18:09:38.138337   74203 cri.go:89] found id: ""
	I0731 18:09:38.138364   74203 logs.go:276] 0 containers: []
	W0731 18:09:38.138375   74203 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 18:09:38.138383   74203 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 18:09:38.138440   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 18:09:38.171517   74203 cri.go:89] found id: ""
	I0731 18:09:38.171546   74203 logs.go:276] 0 containers: []
	W0731 18:09:38.171557   74203 logs.go:278] No container was found matching "kindnet"
	I0731 18:09:38.171564   74203 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 18:09:38.171643   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 18:09:38.208708   74203 cri.go:89] found id: ""
	I0731 18:09:38.208733   74203 logs.go:276] 0 containers: []
	W0731 18:09:38.208741   74203 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 18:09:38.208750   74203 logs.go:123] Gathering logs for container status ...
	I0731 18:09:38.208760   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 18:09:38.243711   74203 logs.go:123] Gathering logs for kubelet ...
	I0731 18:09:38.243736   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 18:09:38.298673   74203 logs.go:123] Gathering logs for dmesg ...
	I0731 18:09:38.298705   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 18:09:38.311936   74203 logs.go:123] Gathering logs for describe nodes ...
	I0731 18:09:38.311962   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 18:09:38.384023   74203 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 18:09:38.384049   74203 logs.go:123] Gathering logs for CRI-O ...
	I0731 18:09:38.384067   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 18:09:36.278948   73696 pod_ready.go:102] pod "metrics-server-569cc877fc-64hp4" in "kube-system" namespace has status "Ready":"False"
	I0731 18:09:38.279423   73696 pod_ready.go:102] pod "metrics-server-569cc877fc-64hp4" in "kube-system" namespace has status "Ready":"False"
	I0731 18:09:40.281213   73696 pod_ready.go:102] pod "metrics-server-569cc877fc-64hp4" in "kube-system" namespace has status "Ready":"False"
	I0731 18:09:37.644041   73479 pod_ready.go:102] pod "metrics-server-78fcd8795b-27pkr" in "kube-system" namespace has status "Ready":"False"
	I0731 18:09:40.143131   73479 pod_ready.go:102] pod "metrics-server-78fcd8795b-27pkr" in "kube-system" namespace has status "Ready":"False"
	I0731 18:09:39.772098   73800 pod_ready.go:102] pod "metrics-server-569cc877fc-fzxrw" in "kube-system" namespace has status "Ready":"False"
	I0731 18:09:42.272096   73800 pod_ready.go:102] pod "metrics-server-569cc877fc-fzxrw" in "kube-system" namespace has status "Ready":"False"
	I0731 18:09:40.959426   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:09:40.972581   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 18:09:40.972645   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 18:09:41.008917   74203 cri.go:89] found id: ""
	I0731 18:09:41.008941   74203 logs.go:276] 0 containers: []
	W0731 18:09:41.008950   74203 logs.go:278] No container was found matching "kube-apiserver"
	I0731 18:09:41.008957   74203 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 18:09:41.009018   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 18:09:41.045342   74203 cri.go:89] found id: ""
	I0731 18:09:41.045375   74203 logs.go:276] 0 containers: []
	W0731 18:09:41.045384   74203 logs.go:278] No container was found matching "etcd"
	I0731 18:09:41.045390   74203 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 18:09:41.045454   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 18:09:41.081385   74203 cri.go:89] found id: ""
	I0731 18:09:41.081409   74203 logs.go:276] 0 containers: []
	W0731 18:09:41.081417   74203 logs.go:278] No container was found matching "coredns"
	I0731 18:09:41.081423   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 18:09:41.081469   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 18:09:41.118028   74203 cri.go:89] found id: ""
	I0731 18:09:41.118051   74203 logs.go:276] 0 containers: []
	W0731 18:09:41.118062   74203 logs.go:278] No container was found matching "kube-scheduler"
	I0731 18:09:41.118067   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 18:09:41.118114   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 18:09:41.154162   74203 cri.go:89] found id: ""
	I0731 18:09:41.154190   74203 logs.go:276] 0 containers: []
	W0731 18:09:41.154201   74203 logs.go:278] No container was found matching "kube-proxy"
	I0731 18:09:41.154209   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 18:09:41.154271   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 18:09:41.190789   74203 cri.go:89] found id: ""
	I0731 18:09:41.190814   74203 logs.go:276] 0 containers: []
	W0731 18:09:41.190822   74203 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 18:09:41.190827   74203 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 18:09:41.190887   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 18:09:41.226281   74203 cri.go:89] found id: ""
	I0731 18:09:41.226312   74203 logs.go:276] 0 containers: []
	W0731 18:09:41.226321   74203 logs.go:278] No container was found matching "kindnet"
	I0731 18:09:41.226327   74203 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 18:09:41.226382   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 18:09:41.258270   74203 cri.go:89] found id: ""
	I0731 18:09:41.258299   74203 logs.go:276] 0 containers: []
	W0731 18:09:41.258309   74203 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 18:09:41.258321   74203 logs.go:123] Gathering logs for CRI-O ...
	I0731 18:09:41.258335   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 18:09:41.342713   74203 logs.go:123] Gathering logs for container status ...
	I0731 18:09:41.342749   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 18:09:41.389772   74203 logs.go:123] Gathering logs for kubelet ...
	I0731 18:09:41.389795   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 18:09:41.442645   74203 logs.go:123] Gathering logs for dmesg ...
	I0731 18:09:41.442676   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 18:09:41.455850   74203 logs.go:123] Gathering logs for describe nodes ...
	I0731 18:09:41.455874   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 18:09:41.522017   74203 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 18:09:44.022439   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:09:44.035190   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 18:09:44.035258   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 18:09:44.070759   74203 cri.go:89] found id: ""
	I0731 18:09:44.070783   74203 logs.go:276] 0 containers: []
	W0731 18:09:44.070790   74203 logs.go:278] No container was found matching "kube-apiserver"
	I0731 18:09:44.070796   74203 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 18:09:44.070857   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 18:09:44.105313   74203 cri.go:89] found id: ""
	I0731 18:09:44.105350   74203 logs.go:276] 0 containers: []
	W0731 18:09:44.105358   74203 logs.go:278] No container was found matching "etcd"
	I0731 18:09:44.105364   74203 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 18:09:44.105416   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 18:09:44.140159   74203 cri.go:89] found id: ""
	I0731 18:09:44.140208   74203 logs.go:276] 0 containers: []
	W0731 18:09:44.140220   74203 logs.go:278] No container was found matching "coredns"
	I0731 18:09:44.140229   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 18:09:44.140301   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 18:09:44.176407   74203 cri.go:89] found id: ""
	I0731 18:09:44.176429   74203 logs.go:276] 0 containers: []
	W0731 18:09:44.176437   74203 logs.go:278] No container was found matching "kube-scheduler"
	I0731 18:09:44.176442   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 18:09:44.176490   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 18:09:44.210875   74203 cri.go:89] found id: ""
	I0731 18:09:44.210899   74203 logs.go:276] 0 containers: []
	W0731 18:09:44.210907   74203 logs.go:278] No container was found matching "kube-proxy"
	I0731 18:09:44.210916   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 18:09:44.210969   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 18:09:44.247021   74203 cri.go:89] found id: ""
	I0731 18:09:44.247045   74203 logs.go:276] 0 containers: []
	W0731 18:09:44.247055   74203 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 18:09:44.247061   74203 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 18:09:44.247141   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 18:09:44.282983   74203 cri.go:89] found id: ""
	I0731 18:09:44.283011   74203 logs.go:276] 0 containers: []
	W0731 18:09:44.283021   74203 logs.go:278] No container was found matching "kindnet"
	I0731 18:09:44.283029   74203 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 18:09:44.283092   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 18:09:44.319717   74203 cri.go:89] found id: ""
	I0731 18:09:44.319742   74203 logs.go:276] 0 containers: []
	W0731 18:09:44.319750   74203 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 18:09:44.319766   74203 logs.go:123] Gathering logs for CRI-O ...
	I0731 18:09:44.319781   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 18:09:44.398602   74203 logs.go:123] Gathering logs for container status ...
	I0731 18:09:44.398636   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 18:09:44.435350   74203 logs.go:123] Gathering logs for kubelet ...
	I0731 18:09:44.435384   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 18:09:44.488021   74203 logs.go:123] Gathering logs for dmesg ...
	I0731 18:09:44.488053   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 18:09:44.501790   74203 logs.go:123] Gathering logs for describe nodes ...
	I0731 18:09:44.501813   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 18:09:44.578374   74203 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 18:09:42.779304   73696 pod_ready.go:102] pod "metrics-server-569cc877fc-64hp4" in "kube-system" namespace has status "Ready":"False"
	I0731 18:09:45.279008   73696 pod_ready.go:102] pod "metrics-server-569cc877fc-64hp4" in "kube-system" namespace has status "Ready":"False"
	I0731 18:09:42.143287   73479 pod_ready.go:102] pod "metrics-server-78fcd8795b-27pkr" in "kube-system" namespace has status "Ready":"False"
	I0731 18:09:44.144123   73479 pod_ready.go:102] pod "metrics-server-78fcd8795b-27pkr" in "kube-system" namespace has status "Ready":"False"
	I0731 18:09:46.643499   73479 pod_ready.go:102] pod "metrics-server-78fcd8795b-27pkr" in "kube-system" namespace has status "Ready":"False"
	I0731 18:09:44.771059   73800 pod_ready.go:102] pod "metrics-server-569cc877fc-fzxrw" in "kube-system" namespace has status "Ready":"False"
	I0731 18:09:46.771846   73800 pod_ready.go:102] pod "metrics-server-569cc877fc-fzxrw" in "kube-system" namespace has status "Ready":"False"
	I0731 18:09:48.772300   73800 pod_ready.go:102] pod "metrics-server-569cc877fc-fzxrw" in "kube-system" namespace has status "Ready":"False"
	I0731 18:09:47.079192   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:09:47.093516   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 18:09:47.093597   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 18:09:47.132872   74203 cri.go:89] found id: ""
	I0731 18:09:47.132899   74203 logs.go:276] 0 containers: []
	W0731 18:09:47.132907   74203 logs.go:278] No container was found matching "kube-apiserver"
	I0731 18:09:47.132913   74203 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 18:09:47.132969   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 18:09:47.167428   74203 cri.go:89] found id: ""
	I0731 18:09:47.167460   74203 logs.go:276] 0 containers: []
	W0731 18:09:47.167472   74203 logs.go:278] No container was found matching "etcd"
	I0731 18:09:47.167480   74203 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 18:09:47.167551   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 18:09:47.202206   74203 cri.go:89] found id: ""
	I0731 18:09:47.202237   74203 logs.go:276] 0 containers: []
	W0731 18:09:47.202250   74203 logs.go:278] No container was found matching "coredns"
	I0731 18:09:47.202256   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 18:09:47.202308   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 18:09:47.238513   74203 cri.go:89] found id: ""
	I0731 18:09:47.238537   74203 logs.go:276] 0 containers: []
	W0731 18:09:47.238545   74203 logs.go:278] No container was found matching "kube-scheduler"
	I0731 18:09:47.238551   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 18:09:47.238604   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 18:09:47.271732   74203 cri.go:89] found id: ""
	I0731 18:09:47.271755   74203 logs.go:276] 0 containers: []
	W0731 18:09:47.271764   74203 logs.go:278] No container was found matching "kube-proxy"
	I0731 18:09:47.271770   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 18:09:47.271828   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 18:09:47.305906   74203 cri.go:89] found id: ""
	I0731 18:09:47.305932   74203 logs.go:276] 0 containers: []
	W0731 18:09:47.305943   74203 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 18:09:47.305948   74203 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 18:09:47.305996   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 18:09:47.338427   74203 cri.go:89] found id: ""
	I0731 18:09:47.338452   74203 logs.go:276] 0 containers: []
	W0731 18:09:47.338461   74203 logs.go:278] No container was found matching "kindnet"
	I0731 18:09:47.338468   74203 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 18:09:47.338526   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 18:09:47.374909   74203 cri.go:89] found id: ""
	I0731 18:09:47.374943   74203 logs.go:276] 0 containers: []
	W0731 18:09:47.374954   74203 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 18:09:47.374963   74203 logs.go:123] Gathering logs for dmesg ...
	I0731 18:09:47.374976   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 18:09:47.387739   74203 logs.go:123] Gathering logs for describe nodes ...
	I0731 18:09:47.387765   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 18:09:47.480479   74203 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 18:09:47.480505   74203 logs.go:123] Gathering logs for CRI-O ...
	I0731 18:09:47.480519   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 18:09:47.562857   74203 logs.go:123] Gathering logs for container status ...
	I0731 18:09:47.562890   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 18:09:47.608435   74203 logs.go:123] Gathering logs for kubelet ...
	I0731 18:09:47.608466   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 18:09:50.164351   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:09:50.177485   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 18:09:50.177546   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 18:09:50.211474   74203 cri.go:89] found id: ""
	I0731 18:09:50.211502   74203 logs.go:276] 0 containers: []
	W0731 18:09:50.211512   74203 logs.go:278] No container was found matching "kube-apiserver"
	I0731 18:09:50.211520   74203 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 18:09:50.211583   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 18:09:50.248167   74203 cri.go:89] found id: ""
	I0731 18:09:50.248190   74203 logs.go:276] 0 containers: []
	W0731 18:09:50.248197   74203 logs.go:278] No container was found matching "etcd"
	I0731 18:09:50.248203   74203 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 18:09:50.248250   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 18:09:50.286323   74203 cri.go:89] found id: ""
	I0731 18:09:50.286358   74203 logs.go:276] 0 containers: []
	W0731 18:09:50.286366   74203 logs.go:278] No container was found matching "coredns"
	I0731 18:09:50.286372   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 18:09:50.286420   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 18:09:50.316634   74203 cri.go:89] found id: ""
	I0731 18:09:50.316661   74203 logs.go:276] 0 containers: []
	W0731 18:09:50.316670   74203 logs.go:278] No container was found matching "kube-scheduler"
	I0731 18:09:50.316675   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 18:09:50.316726   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 18:09:47.279198   73696 pod_ready.go:102] pod "metrics-server-569cc877fc-64hp4" in "kube-system" namespace has status "Ready":"False"
	I0731 18:09:49.280511   73696 pod_ready.go:102] pod "metrics-server-569cc877fc-64hp4" in "kube-system" namespace has status "Ready":"False"
	I0731 18:09:49.144581   73479 pod_ready.go:102] pod "metrics-server-78fcd8795b-27pkr" in "kube-system" namespace has status "Ready":"False"
	I0731 18:09:51.642915   73479 pod_ready.go:102] pod "metrics-server-78fcd8795b-27pkr" in "kube-system" namespace has status "Ready":"False"
	I0731 18:09:51.272079   73800 pod_ready.go:102] pod "metrics-server-569cc877fc-fzxrw" in "kube-system" namespace has status "Ready":"False"
	I0731 18:09:53.272815   73800 pod_ready.go:102] pod "metrics-server-569cc877fc-fzxrw" in "kube-system" namespace has status "Ready":"False"
	I0731 18:09:50.349881   74203 cri.go:89] found id: ""
	I0731 18:09:50.349909   74203 logs.go:276] 0 containers: []
	W0731 18:09:50.349919   74203 logs.go:278] No container was found matching "kube-proxy"
	I0731 18:09:50.349926   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 18:09:50.349989   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 18:09:50.384147   74203 cri.go:89] found id: ""
	I0731 18:09:50.384181   74203 logs.go:276] 0 containers: []
	W0731 18:09:50.384194   74203 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 18:09:50.384203   74203 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 18:09:50.384272   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 18:09:50.418024   74203 cri.go:89] found id: ""
	I0731 18:09:50.418052   74203 logs.go:276] 0 containers: []
	W0731 18:09:50.418062   74203 logs.go:278] No container was found matching "kindnet"
	I0731 18:09:50.418069   74203 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 18:09:50.418130   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 18:09:50.454484   74203 cri.go:89] found id: ""
	I0731 18:09:50.454517   74203 logs.go:276] 0 containers: []
	W0731 18:09:50.454525   74203 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 18:09:50.454533   74203 logs.go:123] Gathering logs for kubelet ...
	I0731 18:09:50.454544   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 18:09:50.505508   74203 logs.go:123] Gathering logs for dmesg ...
	I0731 18:09:50.505545   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 18:09:50.518504   74203 logs.go:123] Gathering logs for describe nodes ...
	I0731 18:09:50.518529   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 18:09:50.587950   74203 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 18:09:50.587974   74203 logs.go:123] Gathering logs for CRI-O ...
	I0731 18:09:50.587989   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 18:09:50.669268   74203 logs.go:123] Gathering logs for container status ...
	I0731 18:09:50.669302   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 18:09:53.209229   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:09:53.222114   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 18:09:53.222175   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 18:09:53.255330   74203 cri.go:89] found id: ""
	I0731 18:09:53.255356   74203 logs.go:276] 0 containers: []
	W0731 18:09:53.255365   74203 logs.go:278] No container was found matching "kube-apiserver"
	I0731 18:09:53.255371   74203 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 18:09:53.255432   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 18:09:53.290354   74203 cri.go:89] found id: ""
	I0731 18:09:53.290375   74203 logs.go:276] 0 containers: []
	W0731 18:09:53.290382   74203 logs.go:278] No container was found matching "etcd"
	I0731 18:09:53.290387   74203 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 18:09:53.290438   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 18:09:53.323621   74203 cri.go:89] found id: ""
	I0731 18:09:53.323645   74203 logs.go:276] 0 containers: []
	W0731 18:09:53.323653   74203 logs.go:278] No container was found matching "coredns"
	I0731 18:09:53.323658   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 18:09:53.323718   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 18:09:53.355850   74203 cri.go:89] found id: ""
	I0731 18:09:53.355877   74203 logs.go:276] 0 containers: []
	W0731 18:09:53.355887   74203 logs.go:278] No container was found matching "kube-scheduler"
	I0731 18:09:53.355894   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 18:09:53.355957   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 18:09:53.388686   74203 cri.go:89] found id: ""
	I0731 18:09:53.388716   74203 logs.go:276] 0 containers: []
	W0731 18:09:53.388726   74203 logs.go:278] No container was found matching "kube-proxy"
	I0731 18:09:53.388733   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 18:09:53.388785   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 18:09:53.426924   74203 cri.go:89] found id: ""
	I0731 18:09:53.426952   74203 logs.go:276] 0 containers: []
	W0731 18:09:53.426961   74203 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 18:09:53.426967   74203 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 18:09:53.427019   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 18:09:53.462041   74203 cri.go:89] found id: ""
	I0731 18:09:53.462067   74203 logs.go:276] 0 containers: []
	W0731 18:09:53.462078   74203 logs.go:278] No container was found matching "kindnet"
	I0731 18:09:53.462084   74203 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 18:09:53.462145   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 18:09:53.493810   74203 cri.go:89] found id: ""
	I0731 18:09:53.493833   74203 logs.go:276] 0 containers: []
	W0731 18:09:53.493842   74203 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 18:09:53.493852   74203 logs.go:123] Gathering logs for container status ...
	I0731 18:09:53.493867   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 18:09:53.530019   74203 logs.go:123] Gathering logs for kubelet ...
	I0731 18:09:53.530053   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 18:09:53.580749   74203 logs.go:123] Gathering logs for dmesg ...
	I0731 18:09:53.580782   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 18:09:53.594457   74203 logs.go:123] Gathering logs for describe nodes ...
	I0731 18:09:53.594482   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 18:09:53.662096   74203 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 18:09:53.662116   74203 logs.go:123] Gathering logs for CRI-O ...
	I0731 18:09:53.662134   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 18:09:51.778292   73696 pod_ready.go:102] pod "metrics-server-569cc877fc-64hp4" in "kube-system" namespace has status "Ready":"False"
	I0731 18:09:53.779043   73696 pod_ready.go:102] pod "metrics-server-569cc877fc-64hp4" in "kube-system" namespace has status "Ready":"False"
	I0731 18:09:53.643914   73479 pod_ready.go:102] pod "metrics-server-78fcd8795b-27pkr" in "kube-system" namespace has status "Ready":"False"
	I0731 18:09:56.142699   73479 pod_ready.go:102] pod "metrics-server-78fcd8795b-27pkr" in "kube-system" namespace has status "Ready":"False"
	I0731 18:09:55.772106   73800 pod_ready.go:102] pod "metrics-server-569cc877fc-fzxrw" in "kube-system" namespace has status "Ready":"False"
	I0731 18:09:58.271063   73800 pod_ready.go:102] pod "metrics-server-569cc877fc-fzxrw" in "kube-system" namespace has status "Ready":"False"
	I0731 18:09:56.238479   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:09:56.251272   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 18:09:56.251350   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 18:09:56.287380   74203 cri.go:89] found id: ""
	I0731 18:09:56.287406   74203 logs.go:276] 0 containers: []
	W0731 18:09:56.287414   74203 logs.go:278] No container was found matching "kube-apiserver"
	I0731 18:09:56.287419   74203 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 18:09:56.287471   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 18:09:56.322490   74203 cri.go:89] found id: ""
	I0731 18:09:56.322512   74203 logs.go:276] 0 containers: []
	W0731 18:09:56.322520   74203 logs.go:278] No container was found matching "etcd"
	I0731 18:09:56.322526   74203 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 18:09:56.322572   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 18:09:56.355845   74203 cri.go:89] found id: ""
	I0731 18:09:56.355874   74203 logs.go:276] 0 containers: []
	W0731 18:09:56.355885   74203 logs.go:278] No container was found matching "coredns"
	I0731 18:09:56.355895   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 18:09:56.355958   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 18:09:56.388304   74203 cri.go:89] found id: ""
	I0731 18:09:56.388330   74203 logs.go:276] 0 containers: []
	W0731 18:09:56.388340   74203 logs.go:278] No container was found matching "kube-scheduler"
	I0731 18:09:56.388348   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 18:09:56.388411   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 18:09:56.420837   74203 cri.go:89] found id: ""
	I0731 18:09:56.420867   74203 logs.go:276] 0 containers: []
	W0731 18:09:56.420877   74203 logs.go:278] No container was found matching "kube-proxy"
	I0731 18:09:56.420884   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 18:09:56.420950   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 18:09:56.453095   74203 cri.go:89] found id: ""
	I0731 18:09:56.453135   74203 logs.go:276] 0 containers: []
	W0731 18:09:56.453146   74203 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 18:09:56.453155   74203 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 18:09:56.453214   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 18:09:56.484245   74203 cri.go:89] found id: ""
	I0731 18:09:56.484272   74203 logs.go:276] 0 containers: []
	W0731 18:09:56.484282   74203 logs.go:278] No container was found matching "kindnet"
	I0731 18:09:56.484296   74203 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 18:09:56.484366   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 18:09:56.519473   74203 cri.go:89] found id: ""
	I0731 18:09:56.519501   74203 logs.go:276] 0 containers: []
	W0731 18:09:56.519508   74203 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 18:09:56.519516   74203 logs.go:123] Gathering logs for dmesg ...
	I0731 18:09:56.519530   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 18:09:56.532178   74203 logs.go:123] Gathering logs for describe nodes ...
	I0731 18:09:56.532203   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 18:09:56.600092   74203 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 18:09:56.600122   74203 logs.go:123] Gathering logs for CRI-O ...
	I0731 18:09:56.600137   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 18:09:56.679176   74203 logs.go:123] Gathering logs for container status ...
	I0731 18:09:56.679208   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 18:09:56.715464   74203 logs.go:123] Gathering logs for kubelet ...
	I0731 18:09:56.715499   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 18:09:59.267214   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:09:59.280666   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 18:09:59.280740   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 18:09:59.312898   74203 cri.go:89] found id: ""
	I0731 18:09:59.312928   74203 logs.go:276] 0 containers: []
	W0731 18:09:59.312940   74203 logs.go:278] No container was found matching "kube-apiserver"
	I0731 18:09:59.312947   74203 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 18:09:59.313013   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 18:09:59.347881   74203 cri.go:89] found id: ""
	I0731 18:09:59.347907   74203 logs.go:276] 0 containers: []
	W0731 18:09:59.347915   74203 logs.go:278] No container was found matching "etcd"
	I0731 18:09:59.347919   74203 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 18:09:59.347978   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 18:09:59.382566   74203 cri.go:89] found id: ""
	I0731 18:09:59.382603   74203 logs.go:276] 0 containers: []
	W0731 18:09:59.382615   74203 logs.go:278] No container was found matching "coredns"
	I0731 18:09:59.382629   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 18:09:59.382691   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 18:09:59.417123   74203 cri.go:89] found id: ""
	I0731 18:09:59.417148   74203 logs.go:276] 0 containers: []
	W0731 18:09:59.417157   74203 logs.go:278] No container was found matching "kube-scheduler"
	I0731 18:09:59.417163   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 18:09:59.417220   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 18:09:59.452674   74203 cri.go:89] found id: ""
	I0731 18:09:59.452699   74203 logs.go:276] 0 containers: []
	W0731 18:09:59.452709   74203 logs.go:278] No container was found matching "kube-proxy"
	I0731 18:09:59.452715   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 18:09:59.452775   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 18:09:59.488879   74203 cri.go:89] found id: ""
	I0731 18:09:59.488905   74203 logs.go:276] 0 containers: []
	W0731 18:09:59.488913   74203 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 18:09:59.488921   74203 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 18:09:59.488981   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 18:09:59.521773   74203 cri.go:89] found id: ""
	I0731 18:09:59.521801   74203 logs.go:276] 0 containers: []
	W0731 18:09:59.521809   74203 logs.go:278] No container was found matching "kindnet"
	I0731 18:09:59.521816   74203 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 18:09:59.521876   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 18:09:59.566619   74203 cri.go:89] found id: ""
	I0731 18:09:59.566649   74203 logs.go:276] 0 containers: []
	W0731 18:09:59.566659   74203 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 18:09:59.566670   74203 logs.go:123] Gathering logs for describe nodes ...
	I0731 18:09:59.566687   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 18:09:59.638301   74203 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 18:09:59.638351   74203 logs.go:123] Gathering logs for CRI-O ...
	I0731 18:09:59.638367   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 18:09:59.721561   74203 logs.go:123] Gathering logs for container status ...
	I0731 18:09:59.721597   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 18:09:59.759371   74203 logs.go:123] Gathering logs for kubelet ...
	I0731 18:09:59.759402   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 18:09:59.811223   74203 logs.go:123] Gathering logs for dmesg ...
	I0731 18:09:59.811255   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 18:09:56.280351   73696 pod_ready.go:102] pod "metrics-server-569cc877fc-64hp4" in "kube-system" namespace has status "Ready":"False"
	I0731 18:09:58.777896   73696 pod_ready.go:102] pod "metrics-server-569cc877fc-64hp4" in "kube-system" namespace has status "Ready":"False"
	I0731 18:10:00.779028   73696 pod_ready.go:102] pod "metrics-server-569cc877fc-64hp4" in "kube-system" namespace has status "Ready":"False"
	I0731 18:09:58.144006   73479 pod_ready.go:102] pod "metrics-server-78fcd8795b-27pkr" in "kube-system" namespace has status "Ready":"False"
	I0731 18:10:00.643536   73479 pod_ready.go:102] pod "metrics-server-78fcd8795b-27pkr" in "kube-system" namespace has status "Ready":"False"
	I0731 18:10:00.772456   73800 pod_ready.go:102] pod "metrics-server-569cc877fc-fzxrw" in "kube-system" namespace has status "Ready":"False"
	I0731 18:10:03.270710   73800 pod_ready.go:102] pod "metrics-server-569cc877fc-fzxrw" in "kube-system" namespace has status "Ready":"False"
	I0731 18:10:02.325339   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:10:02.337908   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 18:10:02.337963   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 18:10:02.369343   74203 cri.go:89] found id: ""
	I0731 18:10:02.369369   74203 logs.go:276] 0 containers: []
	W0731 18:10:02.369378   74203 logs.go:278] No container was found matching "kube-apiserver"
	I0731 18:10:02.369384   74203 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 18:10:02.369442   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 18:10:02.406207   74203 cri.go:89] found id: ""
	I0731 18:10:02.406234   74203 logs.go:276] 0 containers: []
	W0731 18:10:02.406242   74203 logs.go:278] No container was found matching "etcd"
	I0731 18:10:02.406247   74203 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 18:10:02.406297   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 18:10:02.442001   74203 cri.go:89] found id: ""
	I0731 18:10:02.442031   74203 logs.go:276] 0 containers: []
	W0731 18:10:02.442041   74203 logs.go:278] No container was found matching "coredns"
	I0731 18:10:02.442049   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 18:10:02.442109   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 18:10:02.478407   74203 cri.go:89] found id: ""
	I0731 18:10:02.478431   74203 logs.go:276] 0 containers: []
	W0731 18:10:02.478439   74203 logs.go:278] No container was found matching "kube-scheduler"
	I0731 18:10:02.478444   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 18:10:02.478491   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 18:10:02.513832   74203 cri.go:89] found id: ""
	I0731 18:10:02.513875   74203 logs.go:276] 0 containers: []
	W0731 18:10:02.513888   74203 logs.go:278] No container was found matching "kube-proxy"
	I0731 18:10:02.513896   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 18:10:02.513962   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 18:10:02.550830   74203 cri.go:89] found id: ""
	I0731 18:10:02.550856   74203 logs.go:276] 0 containers: []
	W0731 18:10:02.550867   74203 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 18:10:02.550874   74203 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 18:10:02.550937   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 18:10:02.584649   74203 cri.go:89] found id: ""
	I0731 18:10:02.584676   74203 logs.go:276] 0 containers: []
	W0731 18:10:02.584684   74203 logs.go:278] No container was found matching "kindnet"
	I0731 18:10:02.584691   74203 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 18:10:02.584752   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 18:10:02.617436   74203 cri.go:89] found id: ""
	I0731 18:10:02.617464   74203 logs.go:276] 0 containers: []
	W0731 18:10:02.617475   74203 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 18:10:02.617485   74203 logs.go:123] Gathering logs for kubelet ...
	I0731 18:10:02.617500   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 18:10:02.671571   74203 logs.go:123] Gathering logs for dmesg ...
	I0731 18:10:02.671609   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 18:10:02.686657   74203 logs.go:123] Gathering logs for describe nodes ...
	I0731 18:10:02.686694   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 18:10:02.755974   74203 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 18:10:02.756008   74203 logs.go:123] Gathering logs for CRI-O ...
	I0731 18:10:02.756025   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 18:10:02.837976   74203 logs.go:123] Gathering logs for container status ...
	I0731 18:10:02.838012   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 18:10:02.779666   73696 pod_ready.go:102] pod "metrics-server-569cc877fc-64hp4" in "kube-system" namespace has status "Ready":"False"
	I0731 18:10:04.779994   73696 pod_ready.go:102] pod "metrics-server-569cc877fc-64hp4" in "kube-system" namespace has status "Ready":"False"
	I0731 18:10:02.644075   73479 pod_ready.go:102] pod "metrics-server-78fcd8795b-27pkr" in "kube-system" namespace has status "Ready":"False"
	I0731 18:10:05.142859   73479 pod_ready.go:102] pod "metrics-server-78fcd8795b-27pkr" in "kube-system" namespace has status "Ready":"False"
	I0731 18:10:05.272500   73800 pod_ready.go:102] pod "metrics-server-569cc877fc-fzxrw" in "kube-system" namespace has status "Ready":"False"
	I0731 18:10:07.771599   73800 pod_ready.go:102] pod "metrics-server-569cc877fc-fzxrw" in "kube-system" namespace has status "Ready":"False"
	I0731 18:10:05.375212   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:10:05.388635   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 18:10:05.388703   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 18:10:05.427583   74203 cri.go:89] found id: ""
	I0731 18:10:05.427610   74203 logs.go:276] 0 containers: []
	W0731 18:10:05.427617   74203 logs.go:278] No container was found matching "kube-apiserver"
	I0731 18:10:05.427622   74203 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 18:10:05.427673   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 18:10:05.462550   74203 cri.go:89] found id: ""
	I0731 18:10:05.462575   74203 logs.go:276] 0 containers: []
	W0731 18:10:05.462583   74203 logs.go:278] No container was found matching "etcd"
	I0731 18:10:05.462589   74203 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 18:10:05.462645   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 18:10:05.501768   74203 cri.go:89] found id: ""
	I0731 18:10:05.501790   74203 logs.go:276] 0 containers: []
	W0731 18:10:05.501797   74203 logs.go:278] No container was found matching "coredns"
	I0731 18:10:05.501802   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 18:10:05.501860   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 18:10:05.539692   74203 cri.go:89] found id: ""
	I0731 18:10:05.539719   74203 logs.go:276] 0 containers: []
	W0731 18:10:05.539731   74203 logs.go:278] No container was found matching "kube-scheduler"
	I0731 18:10:05.539737   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 18:10:05.539798   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 18:10:05.573844   74203 cri.go:89] found id: ""
	I0731 18:10:05.573872   74203 logs.go:276] 0 containers: []
	W0731 18:10:05.573884   74203 logs.go:278] No container was found matching "kube-proxy"
	I0731 18:10:05.573891   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 18:10:05.573953   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 18:10:05.607827   74203 cri.go:89] found id: ""
	I0731 18:10:05.607848   74203 logs.go:276] 0 containers: []
	W0731 18:10:05.607858   74203 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 18:10:05.607863   74203 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 18:10:05.607913   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 18:10:05.639644   74203 cri.go:89] found id: ""
	I0731 18:10:05.639673   74203 logs.go:276] 0 containers: []
	W0731 18:10:05.639684   74203 logs.go:278] No container was found matching "kindnet"
	I0731 18:10:05.639691   74203 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 18:10:05.639753   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 18:10:05.673164   74203 cri.go:89] found id: ""
	I0731 18:10:05.673188   74203 logs.go:276] 0 containers: []
	W0731 18:10:05.673195   74203 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 18:10:05.673203   74203 logs.go:123] Gathering logs for CRI-O ...
	I0731 18:10:05.673215   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 18:10:05.755189   74203 logs.go:123] Gathering logs for container status ...
	I0731 18:10:05.755221   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 18:10:05.793686   74203 logs.go:123] Gathering logs for kubelet ...
	I0731 18:10:05.793715   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 18:10:05.844930   74203 logs.go:123] Gathering logs for dmesg ...
	I0731 18:10:05.844965   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 18:10:05.859150   74203 logs.go:123] Gathering logs for describe nodes ...
	I0731 18:10:05.859176   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 18:10:05.929945   74203 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 18:10:08.430669   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:10:08.444918   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 18:10:08.444989   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 18:10:08.482598   74203 cri.go:89] found id: ""
	I0731 18:10:08.482625   74203 logs.go:276] 0 containers: []
	W0731 18:10:08.482635   74203 logs.go:278] No container was found matching "kube-apiserver"
	I0731 18:10:08.482642   74203 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 18:10:08.482708   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 18:10:08.519687   74203 cri.go:89] found id: ""
	I0731 18:10:08.519717   74203 logs.go:276] 0 containers: []
	W0731 18:10:08.519726   74203 logs.go:278] No container was found matching "etcd"
	I0731 18:10:08.519734   74203 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 18:10:08.519795   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 18:10:08.551600   74203 cri.go:89] found id: ""
	I0731 18:10:08.551638   74203 logs.go:276] 0 containers: []
	W0731 18:10:08.551649   74203 logs.go:278] No container was found matching "coredns"
	I0731 18:10:08.551657   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 18:10:08.551713   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 18:10:08.585233   74203 cri.go:89] found id: ""
	I0731 18:10:08.585263   74203 logs.go:276] 0 containers: []
	W0731 18:10:08.585274   74203 logs.go:278] No container was found matching "kube-scheduler"
	I0731 18:10:08.585282   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 18:10:08.585343   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 18:10:08.622464   74203 cri.go:89] found id: ""
	I0731 18:10:08.622492   74203 logs.go:276] 0 containers: []
	W0731 18:10:08.622502   74203 logs.go:278] No container was found matching "kube-proxy"
	I0731 18:10:08.622510   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 18:10:08.622569   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 18:10:08.658360   74203 cri.go:89] found id: ""
	I0731 18:10:08.658390   74203 logs.go:276] 0 containers: []
	W0731 18:10:08.658402   74203 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 18:10:08.658410   74203 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 18:10:08.658471   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 18:10:08.692076   74203 cri.go:89] found id: ""
	I0731 18:10:08.692100   74203 logs.go:276] 0 containers: []
	W0731 18:10:08.692109   74203 logs.go:278] No container was found matching "kindnet"
	I0731 18:10:08.692116   74203 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 18:10:08.692179   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 18:10:08.729584   74203 cri.go:89] found id: ""
	I0731 18:10:08.729612   74203 logs.go:276] 0 containers: []
	W0731 18:10:08.729622   74203 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 18:10:08.729633   74203 logs.go:123] Gathering logs for describe nodes ...
	I0731 18:10:08.729647   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 18:10:08.806395   74203 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 18:10:08.806457   74203 logs.go:123] Gathering logs for CRI-O ...
	I0731 18:10:08.806485   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 18:10:08.884008   74203 logs.go:123] Gathering logs for container status ...
	I0731 18:10:08.884046   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 18:10:08.924359   74203 logs.go:123] Gathering logs for kubelet ...
	I0731 18:10:08.924398   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 18:10:08.978161   74203 logs.go:123] Gathering logs for dmesg ...
	I0731 18:10:08.978195   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 18:10:07.279327   73696 pod_ready.go:102] pod "metrics-server-569cc877fc-64hp4" in "kube-system" namespace has status "Ready":"False"
	I0731 18:10:09.281214   73696 pod_ready.go:102] pod "metrics-server-569cc877fc-64hp4" in "kube-system" namespace has status "Ready":"False"
	I0731 18:10:07.143145   73479 pod_ready.go:102] pod "metrics-server-78fcd8795b-27pkr" in "kube-system" namespace has status "Ready":"False"
	I0731 18:10:09.143995   73479 pod_ready.go:102] pod "metrics-server-78fcd8795b-27pkr" in "kube-system" namespace has status "Ready":"False"
	I0731 18:10:11.643254   73479 pod_ready.go:102] pod "metrics-server-78fcd8795b-27pkr" in "kube-system" namespace has status "Ready":"False"
	I0731 18:10:09.773024   73800 pod_ready.go:102] pod "metrics-server-569cc877fc-fzxrw" in "kube-system" namespace has status "Ready":"False"
	I0731 18:10:12.272862   73800 pod_ready.go:102] pod "metrics-server-569cc877fc-fzxrw" in "kube-system" namespace has status "Ready":"False"
	I0731 18:10:14.273615   73800 pod_ready.go:102] pod "metrics-server-569cc877fc-fzxrw" in "kube-system" namespace has status "Ready":"False"
	I0731 18:10:11.491784   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:10:11.504711   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 18:10:11.504784   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 18:10:11.541314   74203 cri.go:89] found id: ""
	I0731 18:10:11.541353   74203 logs.go:276] 0 containers: []
	W0731 18:10:11.541361   74203 logs.go:278] No container was found matching "kube-apiserver"
	I0731 18:10:11.541366   74203 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 18:10:11.541424   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 18:10:11.576481   74203 cri.go:89] found id: ""
	I0731 18:10:11.576509   74203 logs.go:276] 0 containers: []
	W0731 18:10:11.576527   74203 logs.go:278] No container was found matching "etcd"
	I0731 18:10:11.576535   74203 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 18:10:11.576597   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 18:10:11.610370   74203 cri.go:89] found id: ""
	I0731 18:10:11.610395   74203 logs.go:276] 0 containers: []
	W0731 18:10:11.610404   74203 logs.go:278] No container was found matching "coredns"
	I0731 18:10:11.610412   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 18:10:11.610470   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 18:10:11.645559   74203 cri.go:89] found id: ""
	I0731 18:10:11.645586   74203 logs.go:276] 0 containers: []
	W0731 18:10:11.645593   74203 logs.go:278] No container was found matching "kube-scheduler"
	I0731 18:10:11.645598   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 18:10:11.645654   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 18:10:11.677576   74203 cri.go:89] found id: ""
	I0731 18:10:11.677613   74203 logs.go:276] 0 containers: []
	W0731 18:10:11.677624   74203 logs.go:278] No container was found matching "kube-proxy"
	I0731 18:10:11.677631   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 18:10:11.677681   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 18:10:11.710173   74203 cri.go:89] found id: ""
	I0731 18:10:11.710199   74203 logs.go:276] 0 containers: []
	W0731 18:10:11.710208   74203 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 18:10:11.710215   74203 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 18:10:11.710273   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 18:10:11.743722   74203 cri.go:89] found id: ""
	I0731 18:10:11.743752   74203 logs.go:276] 0 containers: []
	W0731 18:10:11.743763   74203 logs.go:278] No container was found matching "kindnet"
	I0731 18:10:11.743782   74203 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 18:10:11.743857   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 18:10:11.776730   74203 cri.go:89] found id: ""
	I0731 18:10:11.776752   74203 logs.go:276] 0 containers: []
	W0731 18:10:11.776759   74203 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 18:10:11.776766   74203 logs.go:123] Gathering logs for describe nodes ...
	I0731 18:10:11.776777   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 18:10:11.846385   74203 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 18:10:11.846404   74203 logs.go:123] Gathering logs for CRI-O ...
	I0731 18:10:11.846415   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 18:10:11.923748   74203 logs.go:123] Gathering logs for container status ...
	I0731 18:10:11.923779   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 18:10:11.959700   74203 logs.go:123] Gathering logs for kubelet ...
	I0731 18:10:11.959734   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 18:10:12.009971   74203 logs.go:123] Gathering logs for dmesg ...
	I0731 18:10:12.010002   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 18:10:14.524097   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:10:14.537349   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 18:10:14.537449   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 18:10:14.569907   74203 cri.go:89] found id: ""
	I0731 18:10:14.569934   74203 logs.go:276] 0 containers: []
	W0731 18:10:14.569941   74203 logs.go:278] No container was found matching "kube-apiserver"
	I0731 18:10:14.569947   74203 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 18:10:14.569999   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 18:10:14.605058   74203 cri.go:89] found id: ""
	I0731 18:10:14.605085   74203 logs.go:276] 0 containers: []
	W0731 18:10:14.605095   74203 logs.go:278] No container was found matching "etcd"
	I0731 18:10:14.605102   74203 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 18:10:14.605155   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 18:10:14.640941   74203 cri.go:89] found id: ""
	I0731 18:10:14.640964   74203 logs.go:276] 0 containers: []
	W0731 18:10:14.640975   74203 logs.go:278] No container was found matching "coredns"
	I0731 18:10:14.640982   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 18:10:14.641039   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 18:10:14.678774   74203 cri.go:89] found id: ""
	I0731 18:10:14.678803   74203 logs.go:276] 0 containers: []
	W0731 18:10:14.678814   74203 logs.go:278] No container was found matching "kube-scheduler"
	I0731 18:10:14.678822   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 18:10:14.678880   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 18:10:14.714123   74203 cri.go:89] found id: ""
	I0731 18:10:14.714152   74203 logs.go:276] 0 containers: []
	W0731 18:10:14.714163   74203 logs.go:278] No container was found matching "kube-proxy"
	I0731 18:10:14.714171   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 18:10:14.714230   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 18:10:14.750212   74203 cri.go:89] found id: ""
	I0731 18:10:14.750243   74203 logs.go:276] 0 containers: []
	W0731 18:10:14.750255   74203 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 18:10:14.750262   74203 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 18:10:14.750322   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 18:10:14.786820   74203 cri.go:89] found id: ""
	I0731 18:10:14.786842   74203 logs.go:276] 0 containers: []
	W0731 18:10:14.786850   74203 logs.go:278] No container was found matching "kindnet"
	I0731 18:10:14.786856   74203 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 18:10:14.786904   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 18:10:14.819667   74203 cri.go:89] found id: ""
	I0731 18:10:14.819689   74203 logs.go:276] 0 containers: []
	W0731 18:10:14.819697   74203 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 18:10:14.819705   74203 logs.go:123] Gathering logs for dmesg ...
	I0731 18:10:14.819725   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 18:10:14.832525   74203 logs.go:123] Gathering logs for describe nodes ...
	I0731 18:10:14.832550   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 18:10:14.901190   74203 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 18:10:14.901216   74203 logs.go:123] Gathering logs for CRI-O ...
	I0731 18:10:14.901229   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 18:10:14.977123   74203 logs.go:123] Gathering logs for container status ...
	I0731 18:10:14.977158   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 18:10:15.014882   74203 logs.go:123] Gathering logs for kubelet ...
	I0731 18:10:15.014912   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 18:10:11.779007   73696 pod_ready.go:102] pod "metrics-server-569cc877fc-64hp4" in "kube-system" namespace has status "Ready":"False"
	I0731 18:10:14.279638   73696 pod_ready.go:102] pod "metrics-server-569cc877fc-64hp4" in "kube-system" namespace has status "Ready":"False"
	I0731 18:10:14.142303   73479 pod_ready.go:102] pod "metrics-server-78fcd8795b-27pkr" in "kube-system" namespace has status "Ready":"False"
	I0731 18:10:16.143713   73479 pod_ready.go:102] pod "metrics-server-78fcd8795b-27pkr" in "kube-system" namespace has status "Ready":"False"
	I0731 18:10:16.770910   73800 pod_ready.go:102] pod "metrics-server-569cc877fc-fzxrw" in "kube-system" namespace has status "Ready":"False"
	I0731 18:10:18.771058   73800 pod_ready.go:102] pod "metrics-server-569cc877fc-fzxrw" in "kube-system" namespace has status "Ready":"False"
	I0731 18:10:17.564989   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:10:17.578676   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 18:10:17.578740   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 18:10:17.610077   74203 cri.go:89] found id: ""
	I0731 18:10:17.610103   74203 logs.go:276] 0 containers: []
	W0731 18:10:17.610112   74203 logs.go:278] No container was found matching "kube-apiserver"
	I0731 18:10:17.610117   74203 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 18:10:17.610169   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 18:10:17.643143   74203 cri.go:89] found id: ""
	I0731 18:10:17.643166   74203 logs.go:276] 0 containers: []
	W0731 18:10:17.643173   74203 logs.go:278] No container was found matching "etcd"
	I0731 18:10:17.643179   74203 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 18:10:17.643225   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 18:10:17.677979   74203 cri.go:89] found id: ""
	I0731 18:10:17.678002   74203 logs.go:276] 0 containers: []
	W0731 18:10:17.678010   74203 logs.go:278] No container was found matching "coredns"
	I0731 18:10:17.678016   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 18:10:17.678086   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 18:10:17.711905   74203 cri.go:89] found id: ""
	I0731 18:10:17.711941   74203 logs.go:276] 0 containers: []
	W0731 18:10:17.711953   74203 logs.go:278] No container was found matching "kube-scheduler"
	I0731 18:10:17.711960   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 18:10:17.712013   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 18:10:17.745842   74203 cri.go:89] found id: ""
	I0731 18:10:17.745870   74203 logs.go:276] 0 containers: []
	W0731 18:10:17.745881   74203 logs.go:278] No container was found matching "kube-proxy"
	I0731 18:10:17.745889   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 18:10:17.745949   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 18:10:17.778170   74203 cri.go:89] found id: ""
	I0731 18:10:17.778242   74203 logs.go:276] 0 containers: []
	W0731 18:10:17.778260   74203 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 18:10:17.778272   74203 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 18:10:17.778340   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 18:10:17.810717   74203 cri.go:89] found id: ""
	I0731 18:10:17.810744   74203 logs.go:276] 0 containers: []
	W0731 18:10:17.810755   74203 logs.go:278] No container was found matching "kindnet"
	I0731 18:10:17.810762   74203 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 18:10:17.810823   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 18:10:17.843237   74203 cri.go:89] found id: ""
	I0731 18:10:17.843268   74203 logs.go:276] 0 containers: []
	W0731 18:10:17.843278   74203 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 18:10:17.843288   74203 logs.go:123] Gathering logs for kubelet ...
	I0731 18:10:17.843303   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 18:10:17.894338   74203 logs.go:123] Gathering logs for dmesg ...
	I0731 18:10:17.894376   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 18:10:17.907898   74203 logs.go:123] Gathering logs for describe nodes ...
	I0731 18:10:17.907927   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 18:10:17.977115   74203 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 18:10:17.977133   74203 logs.go:123] Gathering logs for CRI-O ...
	I0731 18:10:17.977145   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 18:10:18.059924   74203 logs.go:123] Gathering logs for container status ...
	I0731 18:10:18.059968   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 18:10:16.279697   73696 pod_ready.go:102] pod "metrics-server-569cc877fc-64hp4" in "kube-system" namespace has status "Ready":"False"
	I0731 18:10:18.780698   73696 pod_ready.go:102] pod "metrics-server-569cc877fc-64hp4" in "kube-system" namespace has status "Ready":"False"
	I0731 18:10:18.144063   73479 pod_ready.go:102] pod "metrics-server-78fcd8795b-27pkr" in "kube-system" namespace has status "Ready":"False"
	I0731 18:10:20.643891   73479 pod_ready.go:102] pod "metrics-server-78fcd8795b-27pkr" in "kube-system" namespace has status "Ready":"False"
	I0731 18:10:20.772956   73800 pod_ready.go:102] pod "metrics-server-569cc877fc-fzxrw" in "kube-system" namespace has status "Ready":"False"
	I0731 18:10:23.270974   73800 pod_ready.go:102] pod "metrics-server-569cc877fc-fzxrw" in "kube-system" namespace has status "Ready":"False"
	I0731 18:10:20.600903   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:10:20.613609   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 18:10:20.613680   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 18:10:20.646352   74203 cri.go:89] found id: ""
	I0731 18:10:20.646379   74203 logs.go:276] 0 containers: []
	W0731 18:10:20.646388   74203 logs.go:278] No container was found matching "kube-apiserver"
	I0731 18:10:20.646395   74203 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 18:10:20.646453   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 18:10:20.680448   74203 cri.go:89] found id: ""
	I0731 18:10:20.680475   74203 logs.go:276] 0 containers: []
	W0731 18:10:20.680486   74203 logs.go:278] No container was found matching "etcd"
	I0731 18:10:20.680493   74203 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 18:10:20.680555   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 18:10:20.716330   74203 cri.go:89] found id: ""
	I0731 18:10:20.716365   74203 logs.go:276] 0 containers: []
	W0731 18:10:20.716378   74203 logs.go:278] No container was found matching "coredns"
	I0731 18:10:20.716387   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 18:10:20.716448   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 18:10:20.748630   74203 cri.go:89] found id: ""
	I0731 18:10:20.748657   74203 logs.go:276] 0 containers: []
	W0731 18:10:20.748665   74203 logs.go:278] No container was found matching "kube-scheduler"
	I0731 18:10:20.748670   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 18:10:20.748736   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 18:10:20.787769   74203 cri.go:89] found id: ""
	I0731 18:10:20.787793   74203 logs.go:276] 0 containers: []
	W0731 18:10:20.787802   74203 logs.go:278] No container was found matching "kube-proxy"
	I0731 18:10:20.787809   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 18:10:20.787869   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 18:10:20.819884   74203 cri.go:89] found id: ""
	I0731 18:10:20.819911   74203 logs.go:276] 0 containers: []
	W0731 18:10:20.819921   74203 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 18:10:20.819929   74203 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 18:10:20.819988   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 18:10:20.853414   74203 cri.go:89] found id: ""
	I0731 18:10:20.853437   74203 logs.go:276] 0 containers: []
	W0731 18:10:20.853445   74203 logs.go:278] No container was found matching "kindnet"
	I0731 18:10:20.853450   74203 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 18:10:20.853508   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 18:10:20.889198   74203 cri.go:89] found id: ""
	I0731 18:10:20.889224   74203 logs.go:276] 0 containers: []
	W0731 18:10:20.889231   74203 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 18:10:20.889239   74203 logs.go:123] Gathering logs for dmesg ...
	I0731 18:10:20.889251   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 18:10:20.903240   74203 logs.go:123] Gathering logs for describe nodes ...
	I0731 18:10:20.903268   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 18:10:20.971003   74203 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 18:10:20.971032   74203 logs.go:123] Gathering logs for CRI-O ...
	I0731 18:10:20.971051   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 18:10:21.045856   74203 logs.go:123] Gathering logs for container status ...
	I0731 18:10:21.045888   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 18:10:21.086089   74203 logs.go:123] Gathering logs for kubelet ...
	I0731 18:10:21.086121   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 18:10:23.639664   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:10:23.652573   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 18:10:23.652632   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 18:10:23.684719   74203 cri.go:89] found id: ""
	I0731 18:10:23.684746   74203 logs.go:276] 0 containers: []
	W0731 18:10:23.684757   74203 logs.go:278] No container was found matching "kube-apiserver"
	I0731 18:10:23.684765   74203 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 18:10:23.684820   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 18:10:23.717315   74203 cri.go:89] found id: ""
	I0731 18:10:23.717350   74203 logs.go:276] 0 containers: []
	W0731 18:10:23.717362   74203 logs.go:278] No container was found matching "etcd"
	I0731 18:10:23.717369   74203 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 18:10:23.717432   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 18:10:23.750251   74203 cri.go:89] found id: ""
	I0731 18:10:23.750275   74203 logs.go:276] 0 containers: []
	W0731 18:10:23.750286   74203 logs.go:278] No container was found matching "coredns"
	I0731 18:10:23.750293   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 18:10:23.750397   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 18:10:23.785700   74203 cri.go:89] found id: ""
	I0731 18:10:23.785726   74203 logs.go:276] 0 containers: []
	W0731 18:10:23.785737   74203 logs.go:278] No container was found matching "kube-scheduler"
	I0731 18:10:23.785745   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 18:10:23.785792   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 18:10:23.816856   74203 cri.go:89] found id: ""
	I0731 18:10:23.816885   74203 logs.go:276] 0 containers: []
	W0731 18:10:23.816895   74203 logs.go:278] No container was found matching "kube-proxy"
	I0731 18:10:23.816902   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 18:10:23.816965   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 18:10:23.849931   74203 cri.go:89] found id: ""
	I0731 18:10:23.849962   74203 logs.go:276] 0 containers: []
	W0731 18:10:23.849972   74203 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 18:10:23.849980   74203 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 18:10:23.850043   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 18:10:23.881413   74203 cri.go:89] found id: ""
	I0731 18:10:23.881444   74203 logs.go:276] 0 containers: []
	W0731 18:10:23.881452   74203 logs.go:278] No container was found matching "kindnet"
	I0731 18:10:23.881458   74203 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 18:10:23.881516   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 18:10:23.914272   74203 cri.go:89] found id: ""
	I0731 18:10:23.914303   74203 logs.go:276] 0 containers: []
	W0731 18:10:23.914313   74203 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 18:10:23.914325   74203 logs.go:123] Gathering logs for describe nodes ...
	I0731 18:10:23.914352   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 18:10:23.979988   74203 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 18:10:23.980015   74203 logs.go:123] Gathering logs for CRI-O ...
	I0731 18:10:23.980027   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 18:10:24.057159   74203 logs.go:123] Gathering logs for container status ...
	I0731 18:10:24.057198   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 18:10:24.097567   74203 logs.go:123] Gathering logs for kubelet ...
	I0731 18:10:24.097603   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 18:10:24.154740   74203 logs.go:123] Gathering logs for dmesg ...
	I0731 18:10:24.154781   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 18:10:21.279091   73696 pod_ready.go:102] pod "metrics-server-569cc877fc-64hp4" in "kube-system" namespace has status "Ready":"False"
	I0731 18:10:23.779103   73696 pod_ready.go:102] pod "metrics-server-569cc877fc-64hp4" in "kube-system" namespace has status "Ready":"False"
	I0731 18:10:25.779754   73696 pod_ready.go:102] pod "metrics-server-569cc877fc-64hp4" in "kube-system" namespace has status "Ready":"False"
	I0731 18:10:23.142423   73479 pod_ready.go:102] pod "metrics-server-78fcd8795b-27pkr" in "kube-system" namespace has status "Ready":"False"
	I0731 18:10:25.642901   73479 pod_ready.go:102] pod "metrics-server-78fcd8795b-27pkr" in "kube-system" namespace has status "Ready":"False"
	I0731 18:10:25.272277   73800 pod_ready.go:102] pod "metrics-server-569cc877fc-fzxrw" in "kube-system" namespace has status "Ready":"False"
	I0731 18:10:27.771221   73800 pod_ready.go:102] pod "metrics-server-569cc877fc-fzxrw" in "kube-system" namespace has status "Ready":"False"
	I0731 18:10:26.670324   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:10:26.683866   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 18:10:26.683951   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 18:10:26.717671   74203 cri.go:89] found id: ""
	I0731 18:10:26.717722   74203 logs.go:276] 0 containers: []
	W0731 18:10:26.717733   74203 logs.go:278] No container was found matching "kube-apiserver"
	I0731 18:10:26.717739   74203 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 18:10:26.717790   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 18:10:26.751201   74203 cri.go:89] found id: ""
	I0731 18:10:26.751228   74203 logs.go:276] 0 containers: []
	W0731 18:10:26.751236   74203 logs.go:278] No container was found matching "etcd"
	I0731 18:10:26.751246   74203 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 18:10:26.751315   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 18:10:26.784768   74203 cri.go:89] found id: ""
	I0731 18:10:26.784793   74203 logs.go:276] 0 containers: []
	W0731 18:10:26.784803   74203 logs.go:278] No container was found matching "coredns"
	I0731 18:10:26.784811   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 18:10:26.784868   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 18:10:26.822269   74203 cri.go:89] found id: ""
	I0731 18:10:26.822298   74203 logs.go:276] 0 containers: []
	W0731 18:10:26.822307   74203 logs.go:278] No container was found matching "kube-scheduler"
	I0731 18:10:26.822315   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 18:10:26.822378   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 18:10:26.854405   74203 cri.go:89] found id: ""
	I0731 18:10:26.854427   74203 logs.go:276] 0 containers: []
	W0731 18:10:26.854434   74203 logs.go:278] No container was found matching "kube-proxy"
	I0731 18:10:26.854441   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 18:10:26.854490   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 18:10:26.888975   74203 cri.go:89] found id: ""
	I0731 18:10:26.889000   74203 logs.go:276] 0 containers: []
	W0731 18:10:26.889007   74203 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 18:10:26.889013   74203 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 18:10:26.889085   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 18:10:26.922940   74203 cri.go:89] found id: ""
	I0731 18:10:26.922967   74203 logs.go:276] 0 containers: []
	W0731 18:10:26.922976   74203 logs.go:278] No container was found matching "kindnet"
	I0731 18:10:26.922981   74203 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 18:10:26.923040   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 18:10:26.955717   74203 cri.go:89] found id: ""
	I0731 18:10:26.955743   74203 logs.go:276] 0 containers: []
	W0731 18:10:26.955754   74203 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 18:10:26.955764   74203 logs.go:123] Gathering logs for kubelet ...
	I0731 18:10:26.955779   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 18:10:27.006453   74203 logs.go:123] Gathering logs for dmesg ...
	I0731 18:10:27.006481   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 18:10:27.019136   74203 logs.go:123] Gathering logs for describe nodes ...
	I0731 18:10:27.019159   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 18:10:27.086988   74203 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 18:10:27.087014   74203 logs.go:123] Gathering logs for CRI-O ...
	I0731 18:10:27.087031   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 18:10:27.161574   74203 logs.go:123] Gathering logs for container status ...
	I0731 18:10:27.161604   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 18:10:29.705620   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:10:29.718718   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 18:10:29.718775   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 18:10:29.751079   74203 cri.go:89] found id: ""
	I0731 18:10:29.751123   74203 logs.go:276] 0 containers: []
	W0731 18:10:29.751134   74203 logs.go:278] No container was found matching "kube-apiserver"
	I0731 18:10:29.751142   74203 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 18:10:29.751198   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 18:10:29.790944   74203 cri.go:89] found id: ""
	I0731 18:10:29.790971   74203 logs.go:276] 0 containers: []
	W0731 18:10:29.790982   74203 logs.go:278] No container was found matching "etcd"
	I0731 18:10:29.790988   74203 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 18:10:29.791041   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 18:10:29.827921   74203 cri.go:89] found id: ""
	I0731 18:10:29.827951   74203 logs.go:276] 0 containers: []
	W0731 18:10:29.827965   74203 logs.go:278] No container was found matching "coredns"
	I0731 18:10:29.827971   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 18:10:29.828031   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 18:10:29.861365   74203 cri.go:89] found id: ""
	I0731 18:10:29.861398   74203 logs.go:276] 0 containers: []
	W0731 18:10:29.861409   74203 logs.go:278] No container was found matching "kube-scheduler"
	I0731 18:10:29.861417   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 18:10:29.861472   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 18:10:29.894509   74203 cri.go:89] found id: ""
	I0731 18:10:29.894537   74203 logs.go:276] 0 containers: []
	W0731 18:10:29.894546   74203 logs.go:278] No container was found matching "kube-proxy"
	I0731 18:10:29.894552   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 18:10:29.894614   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 18:10:29.926793   74203 cri.go:89] found id: ""
	I0731 18:10:29.926821   74203 logs.go:276] 0 containers: []
	W0731 18:10:29.926832   74203 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 18:10:29.926839   74203 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 18:10:29.926904   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 18:10:29.963765   74203 cri.go:89] found id: ""
	I0731 18:10:29.963792   74203 logs.go:276] 0 containers: []
	W0731 18:10:29.963802   74203 logs.go:278] No container was found matching "kindnet"
	I0731 18:10:29.963809   74203 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 18:10:29.963870   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 18:10:29.998577   74203 cri.go:89] found id: ""
	I0731 18:10:29.998604   74203 logs.go:276] 0 containers: []
	W0731 18:10:29.998611   74203 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 18:10:29.998619   74203 logs.go:123] Gathering logs for kubelet ...
	I0731 18:10:29.998630   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 18:10:30.050035   74203 logs.go:123] Gathering logs for dmesg ...
	I0731 18:10:30.050072   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 18:10:30.064147   74203 logs.go:123] Gathering logs for describe nodes ...
	I0731 18:10:30.064178   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 18:10:30.136990   74203 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 18:10:30.137012   74203 logs.go:123] Gathering logs for CRI-O ...
	I0731 18:10:30.137030   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 18:10:30.214687   74203 logs.go:123] Gathering logs for container status ...
	I0731 18:10:30.214719   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 18:10:28.279257   73696 pod_ready.go:102] pod "metrics-server-569cc877fc-64hp4" in "kube-system" namespace has status "Ready":"False"
	I0731 18:10:30.778466   73696 pod_ready.go:102] pod "metrics-server-569cc877fc-64hp4" in "kube-system" namespace has status "Ready":"False"
	I0731 18:10:27.644082   73479 pod_ready.go:102] pod "metrics-server-78fcd8795b-27pkr" in "kube-system" namespace has status "Ready":"False"
	I0731 18:10:30.144191   73479 pod_ready.go:102] pod "metrics-server-78fcd8795b-27pkr" in "kube-system" namespace has status "Ready":"False"
	I0731 18:10:29.772316   73800 pod_ready.go:102] pod "metrics-server-569cc877fc-fzxrw" in "kube-system" namespace has status "Ready":"False"
	I0731 18:10:32.272651   73800 pod_ready.go:102] pod "metrics-server-569cc877fc-fzxrw" in "kube-system" namespace has status "Ready":"False"
	I0731 18:10:32.753503   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:10:32.766795   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 18:10:32.766873   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 18:10:32.812134   74203 cri.go:89] found id: ""
	I0731 18:10:32.812161   74203 logs.go:276] 0 containers: []
	W0731 18:10:32.812169   74203 logs.go:278] No container was found matching "kube-apiserver"
	I0731 18:10:32.812175   74203 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 18:10:32.812229   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 18:10:32.846997   74203 cri.go:89] found id: ""
	I0731 18:10:32.847029   74203 logs.go:276] 0 containers: []
	W0731 18:10:32.847039   74203 logs.go:278] No container was found matching "etcd"
	I0731 18:10:32.847044   74203 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 18:10:32.847092   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 18:10:32.884093   74203 cri.go:89] found id: ""
	I0731 18:10:32.884123   74203 logs.go:276] 0 containers: []
	W0731 18:10:32.884132   74203 logs.go:278] No container was found matching "coredns"
	I0731 18:10:32.884138   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 18:10:32.884188   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 18:10:32.920160   74203 cri.go:89] found id: ""
	I0731 18:10:32.920186   74203 logs.go:276] 0 containers: []
	W0731 18:10:32.920197   74203 logs.go:278] No container was found matching "kube-scheduler"
	I0731 18:10:32.920204   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 18:10:32.920263   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 18:10:32.952750   74203 cri.go:89] found id: ""
	I0731 18:10:32.952777   74203 logs.go:276] 0 containers: []
	W0731 18:10:32.952788   74203 logs.go:278] No container was found matching "kube-proxy"
	I0731 18:10:32.952795   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 18:10:32.952865   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 18:10:32.989086   74203 cri.go:89] found id: ""
	I0731 18:10:32.989115   74203 logs.go:276] 0 containers: []
	W0731 18:10:32.989125   74203 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 18:10:32.989135   74203 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 18:10:32.989189   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 18:10:33.021554   74203 cri.go:89] found id: ""
	I0731 18:10:33.021590   74203 logs.go:276] 0 containers: []
	W0731 18:10:33.021602   74203 logs.go:278] No container was found matching "kindnet"
	I0731 18:10:33.021609   74203 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 18:10:33.021662   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 18:10:33.061097   74203 cri.go:89] found id: ""
	I0731 18:10:33.061128   74203 logs.go:276] 0 containers: []
	W0731 18:10:33.061141   74203 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 18:10:33.061160   74203 logs.go:123] Gathering logs for kubelet ...
	I0731 18:10:33.061174   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 18:10:33.113497   74203 logs.go:123] Gathering logs for dmesg ...
	I0731 18:10:33.113534   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 18:10:33.126816   74203 logs.go:123] Gathering logs for describe nodes ...
	I0731 18:10:33.126842   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 18:10:33.196713   74203 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 18:10:33.196733   74203 logs.go:123] Gathering logs for CRI-O ...
	I0731 18:10:33.196744   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 18:10:33.277697   74203 logs.go:123] Gathering logs for container status ...
	I0731 18:10:33.277724   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 18:10:33.279738   73696 pod_ready.go:102] pod "metrics-server-569cc877fc-64hp4" in "kube-system" namespace has status "Ready":"False"
	I0731 18:10:35.780181   73696 pod_ready.go:102] pod "metrics-server-569cc877fc-64hp4" in "kube-system" namespace has status "Ready":"False"
	I0731 18:10:32.643177   73479 pod_ready.go:102] pod "metrics-server-78fcd8795b-27pkr" in "kube-system" namespace has status "Ready":"False"
	I0731 18:10:35.143606   73479 pod_ready.go:102] pod "metrics-server-78fcd8795b-27pkr" in "kube-system" namespace has status "Ready":"False"
	I0731 18:10:34.771678   73800 pod_ready.go:102] pod "metrics-server-569cc877fc-fzxrw" in "kube-system" namespace has status "Ready":"False"
	I0731 18:10:36.772167   73800 pod_ready.go:102] pod "metrics-server-569cc877fc-fzxrw" in "kube-system" namespace has status "Ready":"False"
	I0731 18:10:39.272752   73800 pod_ready.go:102] pod "metrics-server-569cc877fc-fzxrw" in "kube-system" namespace has status "Ready":"False"
	I0731 18:10:35.817143   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:10:35.829760   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 18:10:35.829820   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 18:10:35.862974   74203 cri.go:89] found id: ""
	I0731 18:10:35.863002   74203 logs.go:276] 0 containers: []
	W0731 18:10:35.863014   74203 logs.go:278] No container was found matching "kube-apiserver"
	I0731 18:10:35.863022   74203 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 18:10:35.863078   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 18:10:35.898547   74203 cri.go:89] found id: ""
	I0731 18:10:35.898576   74203 logs.go:276] 0 containers: []
	W0731 18:10:35.898584   74203 logs.go:278] No container was found matching "etcd"
	I0731 18:10:35.898590   74203 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 18:10:35.898651   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 18:10:35.930351   74203 cri.go:89] found id: ""
	I0731 18:10:35.930379   74203 logs.go:276] 0 containers: []
	W0731 18:10:35.930390   74203 logs.go:278] No container was found matching "coredns"
	I0731 18:10:35.930396   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 18:10:35.930463   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 18:10:35.962623   74203 cri.go:89] found id: ""
	I0731 18:10:35.962652   74203 logs.go:276] 0 containers: []
	W0731 18:10:35.962663   74203 logs.go:278] No container was found matching "kube-scheduler"
	I0731 18:10:35.962670   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 18:10:35.962727   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 18:10:35.998213   74203 cri.go:89] found id: ""
	I0731 18:10:35.998233   74203 logs.go:276] 0 containers: []
	W0731 18:10:35.998240   74203 logs.go:278] No container was found matching "kube-proxy"
	I0731 18:10:35.998245   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 18:10:35.998291   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 18:10:36.032670   74203 cri.go:89] found id: ""
	I0731 18:10:36.032695   74203 logs.go:276] 0 containers: []
	W0731 18:10:36.032703   74203 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 18:10:36.032709   74203 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 18:10:36.032757   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 18:10:36.066349   74203 cri.go:89] found id: ""
	I0731 18:10:36.066381   74203 logs.go:276] 0 containers: []
	W0731 18:10:36.066392   74203 logs.go:278] No container was found matching "kindnet"
	I0731 18:10:36.066399   74203 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 18:10:36.066461   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 18:10:36.104137   74203 cri.go:89] found id: ""
	I0731 18:10:36.104168   74203 logs.go:276] 0 containers: []
	W0731 18:10:36.104180   74203 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 18:10:36.104200   74203 logs.go:123] Gathering logs for kubelet ...
	I0731 18:10:36.104215   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 18:10:36.155814   74203 logs.go:123] Gathering logs for dmesg ...
	I0731 18:10:36.155844   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 18:10:36.168885   74203 logs.go:123] Gathering logs for describe nodes ...
	I0731 18:10:36.168912   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 18:10:36.235950   74203 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 18:10:36.235972   74203 logs.go:123] Gathering logs for CRI-O ...
	I0731 18:10:36.235987   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 18:10:36.318382   74203 logs.go:123] Gathering logs for container status ...
	I0731 18:10:36.318414   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 18:10:38.853972   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:10:38.867018   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 18:10:38.867089   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 18:10:38.902069   74203 cri.go:89] found id: ""
	I0731 18:10:38.902097   74203 logs.go:276] 0 containers: []
	W0731 18:10:38.902109   74203 logs.go:278] No container was found matching "kube-apiserver"
	I0731 18:10:38.902115   74203 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 18:10:38.902181   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 18:10:38.935272   74203 cri.go:89] found id: ""
	I0731 18:10:38.935296   74203 logs.go:276] 0 containers: []
	W0731 18:10:38.935316   74203 logs.go:278] No container was found matching "etcd"
	I0731 18:10:38.935329   74203 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 18:10:38.935387   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 18:10:38.968582   74203 cri.go:89] found id: ""
	I0731 18:10:38.968610   74203 logs.go:276] 0 containers: []
	W0731 18:10:38.968621   74203 logs.go:278] No container was found matching "coredns"
	I0731 18:10:38.968629   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 18:10:38.968688   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 18:10:38.999740   74203 cri.go:89] found id: ""
	I0731 18:10:38.999770   74203 logs.go:276] 0 containers: []
	W0731 18:10:38.999780   74203 logs.go:278] No container was found matching "kube-scheduler"
	I0731 18:10:38.999787   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 18:10:38.999845   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 18:10:39.032964   74203 cri.go:89] found id: ""
	I0731 18:10:39.032993   74203 logs.go:276] 0 containers: []
	W0731 18:10:39.033008   74203 logs.go:278] No container was found matching "kube-proxy"
	I0731 18:10:39.033015   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 18:10:39.033099   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 18:10:39.064121   74203 cri.go:89] found id: ""
	I0731 18:10:39.064149   74203 logs.go:276] 0 containers: []
	W0731 18:10:39.064158   74203 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 18:10:39.064164   74203 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 18:10:39.064222   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 18:10:39.098462   74203 cri.go:89] found id: ""
	I0731 18:10:39.098488   74203 logs.go:276] 0 containers: []
	W0731 18:10:39.098498   74203 logs.go:278] No container was found matching "kindnet"
	I0731 18:10:39.098505   74203 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 18:10:39.098564   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 18:10:39.130627   74203 cri.go:89] found id: ""
	I0731 18:10:39.130653   74203 logs.go:276] 0 containers: []
	W0731 18:10:39.130663   74203 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 18:10:39.130674   74203 logs.go:123] Gathering logs for CRI-O ...
	I0731 18:10:39.130687   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 18:10:39.223664   74203 logs.go:123] Gathering logs for container status ...
	I0731 18:10:39.223698   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 18:10:39.260502   74203 logs.go:123] Gathering logs for kubelet ...
	I0731 18:10:39.260533   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 18:10:39.315643   74203 logs.go:123] Gathering logs for dmesg ...
	I0731 18:10:39.315675   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 18:10:39.329731   74203 logs.go:123] Gathering logs for describe nodes ...
	I0731 18:10:39.329761   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 18:10:39.395078   74203 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 18:10:38.278911   73696 pod_ready.go:102] pod "metrics-server-569cc877fc-64hp4" in "kube-system" namespace has status "Ready":"False"
	I0731 18:10:40.779921   73696 pod_ready.go:102] pod "metrics-server-569cc877fc-64hp4" in "kube-system" namespace has status "Ready":"False"
	I0731 18:10:37.643246   73479 pod_ready.go:102] pod "metrics-server-78fcd8795b-27pkr" in "kube-system" namespace has status "Ready":"False"
	I0731 18:10:39.643862   73479 pod_ready.go:102] pod "metrics-server-78fcd8795b-27pkr" in "kube-system" namespace has status "Ready":"False"
	I0731 18:10:41.772051   73800 pod_ready.go:102] pod "metrics-server-569cc877fc-fzxrw" in "kube-system" namespace has status "Ready":"False"
	I0731 18:10:44.271544   73800 pod_ready.go:102] pod "metrics-server-569cc877fc-fzxrw" in "kube-system" namespace has status "Ready":"False"
	I0731 18:10:41.895698   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:10:41.910111   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 18:10:41.910191   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 18:10:41.943700   74203 cri.go:89] found id: ""
	I0731 18:10:41.943732   74203 logs.go:276] 0 containers: []
	W0731 18:10:41.943743   74203 logs.go:278] No container was found matching "kube-apiserver"
	I0731 18:10:41.943751   74203 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 18:10:41.943812   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 18:10:41.976848   74203 cri.go:89] found id: ""
	I0731 18:10:41.976879   74203 logs.go:276] 0 containers: []
	W0731 18:10:41.976888   74203 logs.go:278] No container was found matching "etcd"
	I0731 18:10:41.976894   74203 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 18:10:41.976967   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 18:10:42.009424   74203 cri.go:89] found id: ""
	I0731 18:10:42.009451   74203 logs.go:276] 0 containers: []
	W0731 18:10:42.009462   74203 logs.go:278] No container was found matching "coredns"
	I0731 18:10:42.009477   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 18:10:42.009546   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 18:10:42.047233   74203 cri.go:89] found id: ""
	I0731 18:10:42.047260   74203 logs.go:276] 0 containers: []
	W0731 18:10:42.047268   74203 logs.go:278] No container was found matching "kube-scheduler"
	I0731 18:10:42.047274   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 18:10:42.047342   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 18:10:42.079900   74203 cri.go:89] found id: ""
	I0731 18:10:42.079928   74203 logs.go:276] 0 containers: []
	W0731 18:10:42.079938   74203 logs.go:278] No container was found matching "kube-proxy"
	I0731 18:10:42.079945   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 18:10:42.080025   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 18:10:42.114122   74203 cri.go:89] found id: ""
	I0731 18:10:42.114152   74203 logs.go:276] 0 containers: []
	W0731 18:10:42.114164   74203 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 18:10:42.114172   74203 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 18:10:42.114224   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 18:10:42.148741   74203 cri.go:89] found id: ""
	I0731 18:10:42.148768   74203 logs.go:276] 0 containers: []
	W0731 18:10:42.148780   74203 logs.go:278] No container was found matching "kindnet"
	I0731 18:10:42.148789   74203 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 18:10:42.148853   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 18:10:42.184739   74203 cri.go:89] found id: ""
	I0731 18:10:42.184762   74203 logs.go:276] 0 containers: []
	W0731 18:10:42.184769   74203 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 18:10:42.184777   74203 logs.go:123] Gathering logs for describe nodes ...
	I0731 18:10:42.184791   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 18:10:42.254676   74203 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 18:10:42.254694   74203 logs.go:123] Gathering logs for CRI-O ...
	I0731 18:10:42.254706   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 18:10:42.334936   74203 logs.go:123] Gathering logs for container status ...
	I0731 18:10:42.334978   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 18:10:42.371511   74203 logs.go:123] Gathering logs for kubelet ...
	I0731 18:10:42.371540   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 18:10:42.421800   74203 logs.go:123] Gathering logs for dmesg ...
	I0731 18:10:42.421831   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 18:10:44.934983   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:10:44.947212   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 18:10:44.947293   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 18:10:44.979722   74203 cri.go:89] found id: ""
	I0731 18:10:44.979748   74203 logs.go:276] 0 containers: []
	W0731 18:10:44.979760   74203 logs.go:278] No container was found matching "kube-apiserver"
	I0731 18:10:44.979767   74203 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 18:10:44.979819   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 18:10:45.011594   74203 cri.go:89] found id: ""
	I0731 18:10:45.011620   74203 logs.go:276] 0 containers: []
	W0731 18:10:45.011630   74203 logs.go:278] No container was found matching "etcd"
	I0731 18:10:45.011637   74203 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 18:10:45.011803   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 18:10:45.043174   74203 cri.go:89] found id: ""
	I0731 18:10:45.043197   74203 logs.go:276] 0 containers: []
	W0731 18:10:45.043207   74203 logs.go:278] No container was found matching "coredns"
	I0731 18:10:45.043214   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 18:10:45.043278   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 18:10:45.074629   74203 cri.go:89] found id: ""
	I0731 18:10:45.074652   74203 logs.go:276] 0 containers: []
	W0731 18:10:45.074662   74203 logs.go:278] No container was found matching "kube-scheduler"
	I0731 18:10:45.074669   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 18:10:45.074727   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 18:10:45.108917   74203 cri.go:89] found id: ""
	I0731 18:10:45.108944   74203 logs.go:276] 0 containers: []
	W0731 18:10:45.108952   74203 logs.go:278] No container was found matching "kube-proxy"
	I0731 18:10:45.108959   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 18:10:45.109018   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 18:10:45.142200   74203 cri.go:89] found id: ""
	I0731 18:10:45.142227   74203 logs.go:276] 0 containers: []
	W0731 18:10:45.142237   74203 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 18:10:45.142244   74203 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 18:10:45.142306   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 18:10:45.177076   74203 cri.go:89] found id: ""
	I0731 18:10:45.177101   74203 logs.go:276] 0 containers: []
	W0731 18:10:45.177109   74203 logs.go:278] No container was found matching "kindnet"
	I0731 18:10:45.177114   74203 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 18:10:45.177168   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 18:10:45.209352   74203 cri.go:89] found id: ""
	I0731 18:10:45.209376   74203 logs.go:276] 0 containers: []
	W0731 18:10:45.209383   74203 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 18:10:45.209392   74203 logs.go:123] Gathering logs for kubelet ...
	I0731 18:10:45.209407   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 18:10:45.257966   74203 logs.go:123] Gathering logs for dmesg ...
	I0731 18:10:45.257998   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 18:10:45.272429   74203 logs.go:123] Gathering logs for describe nodes ...
	I0731 18:10:45.272462   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 18:10:43.279626   73696 pod_ready.go:102] pod "metrics-server-569cc877fc-64hp4" in "kube-system" namespace has status "Ready":"False"
	I0731 18:10:45.778975   73696 pod_ready.go:102] pod "metrics-server-569cc877fc-64hp4" in "kube-system" namespace has status "Ready":"False"
	I0731 18:10:42.145247   73479 pod_ready.go:102] pod "metrics-server-78fcd8795b-27pkr" in "kube-system" namespace has status "Ready":"False"
	I0731 18:10:44.642278   73479 pod_ready.go:102] pod "metrics-server-78fcd8795b-27pkr" in "kube-system" namespace has status "Ready":"False"
	I0731 18:10:46.644897   73479 pod_ready.go:102] pod "metrics-server-78fcd8795b-27pkr" in "kube-system" namespace has status "Ready":"False"
	I0731 18:10:46.771785   73800 pod_ready.go:102] pod "metrics-server-569cc877fc-fzxrw" in "kube-system" namespace has status "Ready":"False"
	I0731 18:10:48.772117   73800 pod_ready.go:102] pod "metrics-server-569cc877fc-fzxrw" in "kube-system" namespace has status "Ready":"False"
	W0731 18:10:45.347952   74203 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 18:10:45.347973   74203 logs.go:123] Gathering logs for CRI-O ...
	I0731 18:10:45.347988   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 18:10:45.428556   74203 logs.go:123] Gathering logs for container status ...
	I0731 18:10:45.428609   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 18:10:47.971089   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:10:47.986677   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 18:10:47.986749   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 18:10:48.020396   74203 cri.go:89] found id: ""
	I0731 18:10:48.020426   74203 logs.go:276] 0 containers: []
	W0731 18:10:48.020438   74203 logs.go:278] No container was found matching "kube-apiserver"
	I0731 18:10:48.020446   74203 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 18:10:48.020512   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 18:10:48.058129   74203 cri.go:89] found id: ""
	I0731 18:10:48.058161   74203 logs.go:276] 0 containers: []
	W0731 18:10:48.058172   74203 logs.go:278] No container was found matching "etcd"
	I0731 18:10:48.058180   74203 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 18:10:48.058249   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 18:10:48.091894   74203 cri.go:89] found id: ""
	I0731 18:10:48.091922   74203 logs.go:276] 0 containers: []
	W0731 18:10:48.091932   74203 logs.go:278] No container was found matching "coredns"
	I0731 18:10:48.091939   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 18:10:48.091998   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 18:10:48.124757   74203 cri.go:89] found id: ""
	I0731 18:10:48.124788   74203 logs.go:276] 0 containers: []
	W0731 18:10:48.124798   74203 logs.go:278] No container was found matching "kube-scheduler"
	I0731 18:10:48.124807   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 18:10:48.124871   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 18:10:48.159145   74203 cri.go:89] found id: ""
	I0731 18:10:48.159172   74203 logs.go:276] 0 containers: []
	W0731 18:10:48.159184   74203 logs.go:278] No container was found matching "kube-proxy"
	I0731 18:10:48.159191   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 18:10:48.159253   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 18:10:48.200024   74203 cri.go:89] found id: ""
	I0731 18:10:48.200051   74203 logs.go:276] 0 containers: []
	W0731 18:10:48.200061   74203 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 18:10:48.200069   74203 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 18:10:48.200128   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 18:10:48.233838   74203 cri.go:89] found id: ""
	I0731 18:10:48.233870   74203 logs.go:276] 0 containers: []
	W0731 18:10:48.233880   74203 logs.go:278] No container was found matching "kindnet"
	I0731 18:10:48.233886   74203 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 18:10:48.233941   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 18:10:48.265786   74203 cri.go:89] found id: ""
	I0731 18:10:48.265812   74203 logs.go:276] 0 containers: []
	W0731 18:10:48.265821   74203 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 18:10:48.265832   74203 logs.go:123] Gathering logs for dmesg ...
	I0731 18:10:48.265846   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 18:10:48.280422   74203 logs.go:123] Gathering logs for describe nodes ...
	I0731 18:10:48.280449   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 18:10:48.346774   74203 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 18:10:48.346796   74203 logs.go:123] Gathering logs for CRI-O ...
	I0731 18:10:48.346808   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 18:10:48.424017   74203 logs.go:123] Gathering logs for container status ...
	I0731 18:10:48.424052   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 18:10:48.464139   74203 logs.go:123] Gathering logs for kubelet ...
	I0731 18:10:48.464166   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 18:10:47.781556   73696 pod_ready.go:102] pod "metrics-server-569cc877fc-64hp4" in "kube-system" namespace has status "Ready":"False"
	I0731 18:10:50.278635   73696 pod_ready.go:102] pod "metrics-server-569cc877fc-64hp4" in "kube-system" namespace has status "Ready":"False"
	I0731 18:10:49.143684   73479 pod_ready.go:102] pod "metrics-server-78fcd8795b-27pkr" in "kube-system" namespace has status "Ready":"False"
	I0731 18:10:51.144631   73479 pod_ready.go:102] pod "metrics-server-78fcd8795b-27pkr" in "kube-system" namespace has status "Ready":"False"
	I0731 18:10:51.272847   73800 pod_ready.go:102] pod "metrics-server-569cc877fc-fzxrw" in "kube-system" namespace has status "Ready":"False"
	I0731 18:10:53.771397   73800 pod_ready.go:102] pod "metrics-server-569cc877fc-fzxrw" in "kube-system" namespace has status "Ready":"False"
	I0731 18:10:51.013681   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:10:51.028745   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 18:10:51.028814   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 18:10:51.062656   74203 cri.go:89] found id: ""
	I0731 18:10:51.062683   74203 logs.go:276] 0 containers: []
	W0731 18:10:51.062691   74203 logs.go:278] No container was found matching "kube-apiserver"
	I0731 18:10:51.062700   74203 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 18:10:51.062749   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 18:10:51.099203   74203 cri.go:89] found id: ""
	I0731 18:10:51.099228   74203 logs.go:276] 0 containers: []
	W0731 18:10:51.099237   74203 logs.go:278] No container was found matching "etcd"
	I0731 18:10:51.099243   74203 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 18:10:51.099310   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 18:10:51.133507   74203 cri.go:89] found id: ""
	I0731 18:10:51.133533   74203 logs.go:276] 0 containers: []
	W0731 18:10:51.133540   74203 logs.go:278] No container was found matching "coredns"
	I0731 18:10:51.133546   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 18:10:51.133596   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 18:10:51.169935   74203 cri.go:89] found id: ""
	I0731 18:10:51.169954   74203 logs.go:276] 0 containers: []
	W0731 18:10:51.169961   74203 logs.go:278] No container was found matching "kube-scheduler"
	I0731 18:10:51.169966   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 18:10:51.170012   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 18:10:51.202877   74203 cri.go:89] found id: ""
	I0731 18:10:51.202903   74203 logs.go:276] 0 containers: []
	W0731 18:10:51.202913   74203 logs.go:278] No container was found matching "kube-proxy"
	I0731 18:10:51.202919   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 18:10:51.202988   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 18:10:51.239913   74203 cri.go:89] found id: ""
	I0731 18:10:51.239939   74203 logs.go:276] 0 containers: []
	W0731 18:10:51.239949   74203 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 18:10:51.239957   74203 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 18:10:51.240018   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 18:10:51.272024   74203 cri.go:89] found id: ""
	I0731 18:10:51.272095   74203 logs.go:276] 0 containers: []
	W0731 18:10:51.272115   74203 logs.go:278] No container was found matching "kindnet"
	I0731 18:10:51.272123   74203 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 18:10:51.272185   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 18:10:51.307016   74203 cri.go:89] found id: ""
	I0731 18:10:51.307043   74203 logs.go:276] 0 containers: []
	W0731 18:10:51.307053   74203 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 18:10:51.307063   74203 logs.go:123] Gathering logs for kubelet ...
	I0731 18:10:51.307079   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 18:10:51.364018   74203 logs.go:123] Gathering logs for dmesg ...
	I0731 18:10:51.364066   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 18:10:51.384277   74203 logs.go:123] Gathering logs for describe nodes ...
	I0731 18:10:51.384303   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 18:10:51.472657   74203 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 18:10:51.472679   74203 logs.go:123] Gathering logs for CRI-O ...
	I0731 18:10:51.472696   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 18:10:51.548408   74203 logs.go:123] Gathering logs for container status ...
	I0731 18:10:51.548439   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 18:10:54.086526   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:10:54.099293   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 18:10:54.099368   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 18:10:54.129927   74203 cri.go:89] found id: ""
	I0731 18:10:54.129954   74203 logs.go:276] 0 containers: []
	W0731 18:10:54.129965   74203 logs.go:278] No container was found matching "kube-apiserver"
	I0731 18:10:54.129972   74203 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 18:10:54.130042   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 18:10:54.166428   74203 cri.go:89] found id: ""
	I0731 18:10:54.166457   74203 logs.go:276] 0 containers: []
	W0731 18:10:54.166468   74203 logs.go:278] No container was found matching "etcd"
	I0731 18:10:54.166476   74203 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 18:10:54.166538   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 18:10:54.204523   74203 cri.go:89] found id: ""
	I0731 18:10:54.204549   74203 logs.go:276] 0 containers: []
	W0731 18:10:54.204556   74203 logs.go:278] No container was found matching "coredns"
	I0731 18:10:54.204562   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 18:10:54.204619   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 18:10:54.241706   74203 cri.go:89] found id: ""
	I0731 18:10:54.241730   74203 logs.go:276] 0 containers: []
	W0731 18:10:54.241737   74203 logs.go:278] No container was found matching "kube-scheduler"
	I0731 18:10:54.241744   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 18:10:54.241802   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 18:10:54.277154   74203 cri.go:89] found id: ""
	I0731 18:10:54.277178   74203 logs.go:276] 0 containers: []
	W0731 18:10:54.277187   74203 logs.go:278] No container was found matching "kube-proxy"
	I0731 18:10:54.277193   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 18:10:54.277255   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 18:10:54.310198   74203 cri.go:89] found id: ""
	I0731 18:10:54.310223   74203 logs.go:276] 0 containers: []
	W0731 18:10:54.310231   74203 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 18:10:54.310237   74203 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 18:10:54.310283   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 18:10:54.344807   74203 cri.go:89] found id: ""
	I0731 18:10:54.344837   74203 logs.go:276] 0 containers: []
	W0731 18:10:54.344847   74203 logs.go:278] No container was found matching "kindnet"
	I0731 18:10:54.344854   74203 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 18:10:54.344915   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 18:10:54.383358   74203 cri.go:89] found id: ""
	I0731 18:10:54.383391   74203 logs.go:276] 0 containers: []
	W0731 18:10:54.383400   74203 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 18:10:54.383410   74203 logs.go:123] Gathering logs for kubelet ...
	I0731 18:10:54.383424   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 18:10:54.431876   74203 logs.go:123] Gathering logs for dmesg ...
	I0731 18:10:54.431908   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 18:10:54.444797   74203 logs.go:123] Gathering logs for describe nodes ...
	I0731 18:10:54.444824   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 18:10:54.518816   74203 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 18:10:54.518839   74203 logs.go:123] Gathering logs for CRI-O ...
	I0731 18:10:54.518855   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 18:10:54.600072   74203 logs.go:123] Gathering logs for container status ...
	I0731 18:10:54.600109   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 18:10:52.279006   73696 pod_ready.go:102] pod "metrics-server-569cc877fc-64hp4" in "kube-system" namespace has status "Ready":"False"
	I0731 18:10:54.279520   73696 pod_ready.go:102] pod "metrics-server-569cc877fc-64hp4" in "kube-system" namespace has status "Ready":"False"
	I0731 18:10:53.643093   73479 pod_ready.go:102] pod "metrics-server-78fcd8795b-27pkr" in "kube-system" namespace has status "Ready":"False"
	I0731 18:10:56.143250   73479 pod_ready.go:102] pod "metrics-server-78fcd8795b-27pkr" in "kube-system" namespace has status "Ready":"False"
	I0731 18:10:56.272955   73800 pod_ready.go:102] pod "metrics-server-569cc877fc-fzxrw" in "kube-system" namespace has status "Ready":"False"
	I0731 18:10:58.771584   73800 pod_ready.go:102] pod "metrics-server-569cc877fc-fzxrw" in "kube-system" namespace has status "Ready":"False"
	I0731 18:10:57.141070   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:10:57.155903   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 18:10:57.155975   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 18:10:57.189406   74203 cri.go:89] found id: ""
	I0731 18:10:57.189428   74203 logs.go:276] 0 containers: []
	W0731 18:10:57.189435   74203 logs.go:278] No container was found matching "kube-apiserver"
	I0731 18:10:57.189441   74203 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 18:10:57.189510   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 18:10:57.221507   74203 cri.go:89] found id: ""
	I0731 18:10:57.221531   74203 logs.go:276] 0 containers: []
	W0731 18:10:57.221540   74203 logs.go:278] No container was found matching "etcd"
	I0731 18:10:57.221547   74203 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 18:10:57.221614   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 18:10:57.257843   74203 cri.go:89] found id: ""
	I0731 18:10:57.257868   74203 logs.go:276] 0 containers: []
	W0731 18:10:57.257880   74203 logs.go:278] No container was found matching "coredns"
	I0731 18:10:57.257887   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 18:10:57.257939   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 18:10:57.292697   74203 cri.go:89] found id: ""
	I0731 18:10:57.292728   74203 logs.go:276] 0 containers: []
	W0731 18:10:57.292737   74203 logs.go:278] No container was found matching "kube-scheduler"
	I0731 18:10:57.292744   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 18:10:57.292802   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 18:10:57.325705   74203 cri.go:89] found id: ""
	I0731 18:10:57.325728   74203 logs.go:276] 0 containers: []
	W0731 18:10:57.325735   74203 logs.go:278] No container was found matching "kube-proxy"
	I0731 18:10:57.325740   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 18:10:57.325787   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 18:10:57.357436   74203 cri.go:89] found id: ""
	I0731 18:10:57.357463   74203 logs.go:276] 0 containers: []
	W0731 18:10:57.357471   74203 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 18:10:57.357477   74203 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 18:10:57.357534   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 18:10:57.388215   74203 cri.go:89] found id: ""
	I0731 18:10:57.388240   74203 logs.go:276] 0 containers: []
	W0731 18:10:57.388249   74203 logs.go:278] No container was found matching "kindnet"
	I0731 18:10:57.388256   74203 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 18:10:57.388315   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 18:10:57.419609   74203 cri.go:89] found id: ""
	I0731 18:10:57.419631   74203 logs.go:276] 0 containers: []
	W0731 18:10:57.419643   74203 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 18:10:57.419652   74203 logs.go:123] Gathering logs for CRI-O ...
	I0731 18:10:57.419663   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 18:10:57.497157   74203 logs.go:123] Gathering logs for container status ...
	I0731 18:10:57.497188   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 18:10:57.533512   74203 logs.go:123] Gathering logs for kubelet ...
	I0731 18:10:57.533552   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 18:10:57.587866   74203 logs.go:123] Gathering logs for dmesg ...
	I0731 18:10:57.587904   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 18:10:57.601191   74203 logs.go:123] Gathering logs for describe nodes ...
	I0731 18:10:57.601222   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 18:10:57.681899   74203 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 18:11:00.182160   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:11:00.195509   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 18:11:00.195598   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 18:11:00.230650   74203 cri.go:89] found id: ""
	I0731 18:11:00.230674   74203 logs.go:276] 0 containers: []
	W0731 18:11:00.230682   74203 logs.go:278] No container was found matching "kube-apiserver"
	I0731 18:11:00.230689   74203 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 18:11:00.230747   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 18:11:00.268629   74203 cri.go:89] found id: ""
	I0731 18:11:00.268656   74203 logs.go:276] 0 containers: []
	W0731 18:11:00.268666   74203 logs.go:278] No container was found matching "etcd"
	I0731 18:11:00.268672   74203 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 18:11:00.268740   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 18:11:00.301805   74203 cri.go:89] found id: ""
	I0731 18:11:00.301827   74203 logs.go:276] 0 containers: []
	W0731 18:11:00.301836   74203 logs.go:278] No container was found matching "coredns"
	I0731 18:11:00.301843   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 18:11:00.301901   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 18:11:00.333844   74203 cri.go:89] found id: ""
	I0731 18:11:00.333871   74203 logs.go:276] 0 containers: []
	W0731 18:11:00.333882   74203 logs.go:278] No container was found matching "kube-scheduler"
	I0731 18:11:00.333889   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 18:11:00.333949   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 18:10:56.779307   73696 pod_ready.go:102] pod "metrics-server-569cc877fc-64hp4" in "kube-system" namespace has status "Ready":"False"
	I0731 18:10:58.779655   73696 pod_ready.go:102] pod "metrics-server-569cc877fc-64hp4" in "kube-system" namespace has status "Ready":"False"
	I0731 18:10:58.643375   73479 pod_ready.go:102] pod "metrics-server-78fcd8795b-27pkr" in "kube-system" namespace has status "Ready":"False"
	I0731 18:11:00.643713   73479 pod_ready.go:102] pod "metrics-server-78fcd8795b-27pkr" in "kube-system" namespace has status "Ready":"False"
	I0731 18:11:01.272195   73800 pod_ready.go:102] pod "metrics-server-569cc877fc-fzxrw" in "kube-system" namespace has status "Ready":"False"
	I0731 18:11:03.272739   73800 pod_ready.go:102] pod "metrics-server-569cc877fc-fzxrw" in "kube-system" namespace has status "Ready":"False"
	I0731 18:11:00.366250   74203 cri.go:89] found id: ""
	I0731 18:11:00.366278   74203 logs.go:276] 0 containers: []
	W0731 18:11:00.366288   74203 logs.go:278] No container was found matching "kube-proxy"
	I0731 18:11:00.366295   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 18:11:00.366358   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 18:11:00.399301   74203 cri.go:89] found id: ""
	I0731 18:11:00.399325   74203 logs.go:276] 0 containers: []
	W0731 18:11:00.399335   74203 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 18:11:00.399342   74203 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 18:11:00.399405   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 18:11:00.432182   74203 cri.go:89] found id: ""
	I0731 18:11:00.432207   74203 logs.go:276] 0 containers: []
	W0731 18:11:00.432218   74203 logs.go:278] No container was found matching "kindnet"
	I0731 18:11:00.432224   74203 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 18:11:00.432284   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 18:11:00.465395   74203 cri.go:89] found id: ""
	I0731 18:11:00.465423   74203 logs.go:276] 0 containers: []
	W0731 18:11:00.465432   74203 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 18:11:00.465440   74203 logs.go:123] Gathering logs for kubelet ...
	I0731 18:11:00.465453   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 18:11:00.516042   74203 logs.go:123] Gathering logs for dmesg ...
	I0731 18:11:00.516077   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 18:11:00.528621   74203 logs.go:123] Gathering logs for describe nodes ...
	I0731 18:11:00.528647   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 18:11:00.600297   74203 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 18:11:00.600322   74203 logs.go:123] Gathering logs for CRI-O ...
	I0731 18:11:00.600339   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 18:11:00.680368   74203 logs.go:123] Gathering logs for container status ...
	I0731 18:11:00.680399   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 18:11:03.217684   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:11:03.230691   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 18:11:03.230752   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 18:11:03.264882   74203 cri.go:89] found id: ""
	I0731 18:11:03.264910   74203 logs.go:276] 0 containers: []
	W0731 18:11:03.264918   74203 logs.go:278] No container was found matching "kube-apiserver"
	I0731 18:11:03.264924   74203 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 18:11:03.264976   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 18:11:03.301608   74203 cri.go:89] found id: ""
	I0731 18:11:03.301733   74203 logs.go:276] 0 containers: []
	W0731 18:11:03.301754   74203 logs.go:278] No container was found matching "etcd"
	I0731 18:11:03.301765   74203 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 18:11:03.301838   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 18:11:03.335077   74203 cri.go:89] found id: ""
	I0731 18:11:03.335102   74203 logs.go:276] 0 containers: []
	W0731 18:11:03.335121   74203 logs.go:278] No container was found matching "coredns"
	I0731 18:11:03.335128   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 18:11:03.335196   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 18:11:03.370755   74203 cri.go:89] found id: ""
	I0731 18:11:03.370783   74203 logs.go:276] 0 containers: []
	W0731 18:11:03.370794   74203 logs.go:278] No container was found matching "kube-scheduler"
	I0731 18:11:03.370802   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 18:11:03.370862   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 18:11:03.403004   74203 cri.go:89] found id: ""
	I0731 18:11:03.403035   74203 logs.go:276] 0 containers: []
	W0731 18:11:03.403045   74203 logs.go:278] No container was found matching "kube-proxy"
	I0731 18:11:03.403052   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 18:11:03.403125   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 18:11:03.437169   74203 cri.go:89] found id: ""
	I0731 18:11:03.437209   74203 logs.go:276] 0 containers: []
	W0731 18:11:03.437219   74203 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 18:11:03.437235   74203 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 18:11:03.437296   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 18:11:03.469956   74203 cri.go:89] found id: ""
	I0731 18:11:03.469981   74203 logs.go:276] 0 containers: []
	W0731 18:11:03.469991   74203 logs.go:278] No container was found matching "kindnet"
	I0731 18:11:03.469998   74203 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 18:11:03.470056   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 18:11:03.503850   74203 cri.go:89] found id: ""
	I0731 18:11:03.503878   74203 logs.go:276] 0 containers: []
	W0731 18:11:03.503888   74203 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 18:11:03.503898   74203 logs.go:123] Gathering logs for kubelet ...
	I0731 18:11:03.503913   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 18:11:03.554993   74203 logs.go:123] Gathering logs for dmesg ...
	I0731 18:11:03.555036   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 18:11:03.567898   74203 logs.go:123] Gathering logs for describe nodes ...
	I0731 18:11:03.567925   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 18:11:03.630151   74203 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 18:11:03.630188   74203 logs.go:123] Gathering logs for CRI-O ...
	I0731 18:11:03.630207   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 18:11:03.708552   74203 logs.go:123] Gathering logs for container status ...
	I0731 18:11:03.708596   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 18:11:01.278830   73696 pod_ready.go:102] pod "metrics-server-569cc877fc-64hp4" in "kube-system" namespace has status "Ready":"False"
	I0731 18:11:03.278880   73696 pod_ready.go:102] pod "metrics-server-569cc877fc-64hp4" in "kube-system" namespace has status "Ready":"False"
	I0731 18:11:05.778296   73696 pod_ready.go:102] pod "metrics-server-569cc877fc-64hp4" in "kube-system" namespace has status "Ready":"False"
	I0731 18:11:03.143289   73479 pod_ready.go:102] pod "metrics-server-78fcd8795b-27pkr" in "kube-system" namespace has status "Ready":"False"
	I0731 18:11:05.152015   73479 pod_ready.go:102] pod "metrics-server-78fcd8795b-27pkr" in "kube-system" namespace has status "Ready":"False"
	I0731 18:11:05.771810   73800 pod_ready.go:102] pod "metrics-server-569cc877fc-fzxrw" in "kube-system" namespace has status "Ready":"False"
	I0731 18:11:08.271205   73800 pod_ready.go:102] pod "metrics-server-569cc877fc-fzxrw" in "kube-system" namespace has status "Ready":"False"
	I0731 18:11:06.249728   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:11:06.261923   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 18:11:06.261998   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 18:11:06.296249   74203 cri.go:89] found id: ""
	I0731 18:11:06.296276   74203 logs.go:276] 0 containers: []
	W0731 18:11:06.296286   74203 logs.go:278] No container was found matching "kube-apiserver"
	I0731 18:11:06.296292   74203 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 18:11:06.296356   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 18:11:06.329355   74203 cri.go:89] found id: ""
	I0731 18:11:06.329381   74203 logs.go:276] 0 containers: []
	W0731 18:11:06.329389   74203 logs.go:278] No container was found matching "etcd"
	I0731 18:11:06.329395   74203 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 18:11:06.329443   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 18:11:06.362585   74203 cri.go:89] found id: ""
	I0731 18:11:06.362618   74203 logs.go:276] 0 containers: []
	W0731 18:11:06.362630   74203 logs.go:278] No container was found matching "coredns"
	I0731 18:11:06.362643   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 18:11:06.362704   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 18:11:06.396489   74203 cri.go:89] found id: ""
	I0731 18:11:06.396514   74203 logs.go:276] 0 containers: []
	W0731 18:11:06.396521   74203 logs.go:278] No container was found matching "kube-scheduler"
	I0731 18:11:06.396527   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 18:11:06.396590   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 18:11:06.428859   74203 cri.go:89] found id: ""
	I0731 18:11:06.428888   74203 logs.go:276] 0 containers: []
	W0731 18:11:06.428897   74203 logs.go:278] No container was found matching "kube-proxy"
	I0731 18:11:06.428903   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 18:11:06.428960   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 18:11:06.468817   74203 cri.go:89] found id: ""
	I0731 18:11:06.468846   74203 logs.go:276] 0 containers: []
	W0731 18:11:06.468856   74203 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 18:11:06.468864   74203 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 18:11:06.468924   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 18:11:06.499975   74203 cri.go:89] found id: ""
	I0731 18:11:06.500000   74203 logs.go:276] 0 containers: []
	W0731 18:11:06.500008   74203 logs.go:278] No container was found matching "kindnet"
	I0731 18:11:06.500013   74203 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 18:11:06.500067   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 18:11:06.537410   74203 cri.go:89] found id: ""
	I0731 18:11:06.537440   74203 logs.go:276] 0 containers: []
	W0731 18:11:06.537451   74203 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 18:11:06.537461   74203 logs.go:123] Gathering logs for kubelet ...
	I0731 18:11:06.537476   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 18:11:06.589664   74203 logs.go:123] Gathering logs for dmesg ...
	I0731 18:11:06.589709   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 18:11:06.603978   74203 logs.go:123] Gathering logs for describe nodes ...
	I0731 18:11:06.604005   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 18:11:06.673436   74203 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 18:11:06.673454   74203 logs.go:123] Gathering logs for CRI-O ...
	I0731 18:11:06.673465   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 18:11:06.757101   74203 logs.go:123] Gathering logs for container status ...
	I0731 18:11:06.757143   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 18:11:09.299562   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:11:09.311910   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 18:11:09.311971   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 18:11:09.346517   74203 cri.go:89] found id: ""
	I0731 18:11:09.346545   74203 logs.go:276] 0 containers: []
	W0731 18:11:09.346555   74203 logs.go:278] No container was found matching "kube-apiserver"
	I0731 18:11:09.346562   74203 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 18:11:09.346634   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 18:11:09.377688   74203 cri.go:89] found id: ""
	I0731 18:11:09.377713   74203 logs.go:276] 0 containers: []
	W0731 18:11:09.377720   74203 logs.go:278] No container was found matching "etcd"
	I0731 18:11:09.377726   74203 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 18:11:09.377787   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 18:11:09.412149   74203 cri.go:89] found id: ""
	I0731 18:11:09.412176   74203 logs.go:276] 0 containers: []
	W0731 18:11:09.412186   74203 logs.go:278] No container was found matching "coredns"
	I0731 18:11:09.412193   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 18:11:09.412259   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 18:11:09.444134   74203 cri.go:89] found id: ""
	I0731 18:11:09.444162   74203 logs.go:276] 0 containers: []
	W0731 18:11:09.444172   74203 logs.go:278] No container was found matching "kube-scheduler"
	I0731 18:11:09.444178   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 18:11:09.444233   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 18:11:09.481407   74203 cri.go:89] found id: ""
	I0731 18:11:09.481436   74203 logs.go:276] 0 containers: []
	W0731 18:11:09.481447   74203 logs.go:278] No container was found matching "kube-proxy"
	I0731 18:11:09.481453   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 18:11:09.481513   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 18:11:09.514926   74203 cri.go:89] found id: ""
	I0731 18:11:09.514950   74203 logs.go:276] 0 containers: []
	W0731 18:11:09.514967   74203 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 18:11:09.514974   74203 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 18:11:09.515036   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 18:11:09.547253   74203 cri.go:89] found id: ""
	I0731 18:11:09.547278   74203 logs.go:276] 0 containers: []
	W0731 18:11:09.547285   74203 logs.go:278] No container was found matching "kindnet"
	I0731 18:11:09.547291   74203 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 18:11:09.547376   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 18:11:09.587585   74203 cri.go:89] found id: ""
	I0731 18:11:09.587614   74203 logs.go:276] 0 containers: []
	W0731 18:11:09.587622   74203 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 18:11:09.587632   74203 logs.go:123] Gathering logs for kubelet ...
	I0731 18:11:09.587646   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 18:11:09.642024   74203 logs.go:123] Gathering logs for dmesg ...
	I0731 18:11:09.642057   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 18:11:09.655244   74203 logs.go:123] Gathering logs for describe nodes ...
	I0731 18:11:09.655270   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 18:11:09.721446   74203 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 18:11:09.721474   74203 logs.go:123] Gathering logs for CRI-O ...
	I0731 18:11:09.721489   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 18:11:09.803315   74203 logs.go:123] Gathering logs for container status ...
	I0731 18:11:09.803349   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 18:11:07.779195   73696 pod_ready.go:102] pod "metrics-server-569cc877fc-64hp4" in "kube-system" namespace has status "Ready":"False"
	I0731 18:11:10.278028   73696 pod_ready.go:102] pod "metrics-server-569cc877fc-64hp4" in "kube-system" namespace has status "Ready":"False"
	I0731 18:11:07.643242   73479 pod_ready.go:102] pod "metrics-server-78fcd8795b-27pkr" in "kube-system" namespace has status "Ready":"False"
	I0731 18:11:10.143895   73479 pod_ready.go:102] pod "metrics-server-78fcd8795b-27pkr" in "kube-system" namespace has status "Ready":"False"
	I0731 18:11:10.271515   73800 pod_ready.go:102] pod "metrics-server-569cc877fc-fzxrw" in "kube-system" namespace has status "Ready":"False"
	I0731 18:11:12.771322   73800 pod_ready.go:102] pod "metrics-server-569cc877fc-fzxrw" in "kube-system" namespace has status "Ready":"False"
	I0731 18:11:12.344355   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:11:12.357122   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 18:11:12.357194   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 18:11:12.392237   74203 cri.go:89] found id: ""
	I0731 18:11:12.392258   74203 logs.go:276] 0 containers: []
	W0731 18:11:12.392267   74203 logs.go:278] No container was found matching "kube-apiserver"
	I0731 18:11:12.392272   74203 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 18:11:12.392339   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 18:11:12.424490   74203 cri.go:89] found id: ""
	I0731 18:11:12.424514   74203 logs.go:276] 0 containers: []
	W0731 18:11:12.424523   74203 logs.go:278] No container was found matching "etcd"
	I0731 18:11:12.424529   74203 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 18:11:12.424587   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 18:11:12.458438   74203 cri.go:89] found id: ""
	I0731 18:11:12.458467   74203 logs.go:276] 0 containers: []
	W0731 18:11:12.458477   74203 logs.go:278] No container was found matching "coredns"
	I0731 18:11:12.458483   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 18:11:12.458545   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 18:11:12.495343   74203 cri.go:89] found id: ""
	I0731 18:11:12.495371   74203 logs.go:276] 0 containers: []
	W0731 18:11:12.495383   74203 logs.go:278] No container was found matching "kube-scheduler"
	I0731 18:11:12.495391   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 18:11:12.495455   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 18:11:12.527285   74203 cri.go:89] found id: ""
	I0731 18:11:12.527314   74203 logs.go:276] 0 containers: []
	W0731 18:11:12.527324   74203 logs.go:278] No container was found matching "kube-proxy"
	I0731 18:11:12.527334   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 18:11:12.527393   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 18:11:12.560341   74203 cri.go:89] found id: ""
	I0731 18:11:12.560369   74203 logs.go:276] 0 containers: []
	W0731 18:11:12.560379   74203 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 18:11:12.560387   74203 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 18:11:12.560444   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 18:11:12.595084   74203 cri.go:89] found id: ""
	I0731 18:11:12.595120   74203 logs.go:276] 0 containers: []
	W0731 18:11:12.595133   74203 logs.go:278] No container was found matching "kindnet"
	I0731 18:11:12.595141   74203 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 18:11:12.595215   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 18:11:12.630666   74203 cri.go:89] found id: ""
	I0731 18:11:12.630692   74203 logs.go:276] 0 containers: []
	W0731 18:11:12.630702   74203 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 18:11:12.630711   74203 logs.go:123] Gathering logs for kubelet ...
	I0731 18:11:12.630727   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 18:11:12.683588   74203 logs.go:123] Gathering logs for dmesg ...
	I0731 18:11:12.683620   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 18:11:12.696899   74203 logs.go:123] Gathering logs for describe nodes ...
	I0731 18:11:12.696925   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 18:11:12.757815   74203 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 18:11:12.757837   74203 logs.go:123] Gathering logs for CRI-O ...
	I0731 18:11:12.757870   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 18:11:12.834888   74203 logs.go:123] Gathering logs for container status ...
	I0731 18:11:12.834927   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 18:11:12.278464   73696 pod_ready.go:102] pod "metrics-server-569cc877fc-64hp4" in "kube-system" namespace has status "Ready":"False"
	I0731 18:11:14.279031   73696 pod_ready.go:102] pod "metrics-server-569cc877fc-64hp4" in "kube-system" namespace has status "Ready":"False"
	I0731 18:11:12.643960   73479 pod_ready.go:102] pod "metrics-server-78fcd8795b-27pkr" in "kube-system" namespace has status "Ready":"False"
	I0731 18:11:15.142811   73479 pod_ready.go:102] pod "metrics-server-78fcd8795b-27pkr" in "kube-system" namespace has status "Ready":"False"
	I0731 18:11:14.771367   73800 pod_ready.go:102] pod "metrics-server-569cc877fc-fzxrw" in "kube-system" namespace has status "Ready":"False"
	I0731 18:11:16.772010   73800 pod_ready.go:102] pod "metrics-server-569cc877fc-fzxrw" in "kube-system" namespace has status "Ready":"False"
	I0731 18:11:19.271857   73800 pod_ready.go:102] pod "metrics-server-569cc877fc-fzxrw" in "kube-system" namespace has status "Ready":"False"
	I0731 18:11:15.372797   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:11:15.386268   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 18:11:15.386356   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 18:11:15.420446   74203 cri.go:89] found id: ""
	I0731 18:11:15.420477   74203 logs.go:276] 0 containers: []
	W0731 18:11:15.420488   74203 logs.go:278] No container was found matching "kube-apiserver"
	I0731 18:11:15.420497   74203 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 18:11:15.420556   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 18:11:15.456092   74203 cri.go:89] found id: ""
	I0731 18:11:15.456118   74203 logs.go:276] 0 containers: []
	W0731 18:11:15.456129   74203 logs.go:278] No container was found matching "etcd"
	I0731 18:11:15.456136   74203 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 18:11:15.456194   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 18:11:15.488277   74203 cri.go:89] found id: ""
	I0731 18:11:15.488304   74203 logs.go:276] 0 containers: []
	W0731 18:11:15.488316   74203 logs.go:278] No container was found matching "coredns"
	I0731 18:11:15.488323   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 18:11:15.488384   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 18:11:15.520701   74203 cri.go:89] found id: ""
	I0731 18:11:15.520730   74203 logs.go:276] 0 containers: []
	W0731 18:11:15.520741   74203 logs.go:278] No container was found matching "kube-scheduler"
	I0731 18:11:15.520749   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 18:11:15.520818   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 18:11:15.552831   74203 cri.go:89] found id: ""
	I0731 18:11:15.552854   74203 logs.go:276] 0 containers: []
	W0731 18:11:15.552862   74203 logs.go:278] No container was found matching "kube-proxy"
	I0731 18:11:15.552867   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 18:11:15.552920   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 18:11:15.589161   74203 cri.go:89] found id: ""
	I0731 18:11:15.589191   74203 logs.go:276] 0 containers: []
	W0731 18:11:15.589203   74203 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 18:11:15.589210   74203 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 18:11:15.589274   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 18:11:15.622501   74203 cri.go:89] found id: ""
	I0731 18:11:15.622532   74203 logs.go:276] 0 containers: []
	W0731 18:11:15.622544   74203 logs.go:278] No container was found matching "kindnet"
	I0731 18:11:15.622552   74203 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 18:11:15.622611   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 18:11:15.654772   74203 cri.go:89] found id: ""
	I0731 18:11:15.654801   74203 logs.go:276] 0 containers: []
	W0731 18:11:15.654815   74203 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 18:11:15.654826   74203 logs.go:123] Gathering logs for kubelet ...
	I0731 18:11:15.654843   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 18:11:15.703103   74203 logs.go:123] Gathering logs for dmesg ...
	I0731 18:11:15.703148   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 18:11:15.716620   74203 logs.go:123] Gathering logs for describe nodes ...
	I0731 18:11:15.716645   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 18:11:15.783391   74203 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 18:11:15.783416   74203 logs.go:123] Gathering logs for CRI-O ...
	I0731 18:11:15.783431   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 18:11:15.857462   74203 logs.go:123] Gathering logs for container status ...
	I0731 18:11:15.857495   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 18:11:18.394223   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:11:18.407297   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 18:11:18.407374   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 18:11:18.439542   74203 cri.go:89] found id: ""
	I0731 18:11:18.439564   74203 logs.go:276] 0 containers: []
	W0731 18:11:18.439572   74203 logs.go:278] No container was found matching "kube-apiserver"
	I0731 18:11:18.439578   74203 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 18:11:18.439625   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 18:11:18.471838   74203 cri.go:89] found id: ""
	I0731 18:11:18.471863   74203 logs.go:276] 0 containers: []
	W0731 18:11:18.471873   74203 logs.go:278] No container was found matching "etcd"
	I0731 18:11:18.471883   74203 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 18:11:18.471943   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 18:11:18.505325   74203 cri.go:89] found id: ""
	I0731 18:11:18.505355   74203 logs.go:276] 0 containers: []
	W0731 18:11:18.505365   74203 logs.go:278] No container was found matching "coredns"
	I0731 18:11:18.505372   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 18:11:18.505432   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 18:11:18.536155   74203 cri.go:89] found id: ""
	I0731 18:11:18.536180   74203 logs.go:276] 0 containers: []
	W0731 18:11:18.536189   74203 logs.go:278] No container was found matching "kube-scheduler"
	I0731 18:11:18.536194   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 18:11:18.536241   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 18:11:18.569301   74203 cri.go:89] found id: ""
	I0731 18:11:18.569329   74203 logs.go:276] 0 containers: []
	W0731 18:11:18.569339   74203 logs.go:278] No container was found matching "kube-proxy"
	I0731 18:11:18.569344   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 18:11:18.569398   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 18:11:18.603053   74203 cri.go:89] found id: ""
	I0731 18:11:18.603079   74203 logs.go:276] 0 containers: []
	W0731 18:11:18.603087   74203 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 18:11:18.603092   74203 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 18:11:18.603170   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 18:11:18.636259   74203 cri.go:89] found id: ""
	I0731 18:11:18.636287   74203 logs.go:276] 0 containers: []
	W0731 18:11:18.636298   74203 logs.go:278] No container was found matching "kindnet"
	I0731 18:11:18.636305   74203 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 18:11:18.636361   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 18:11:18.667839   74203 cri.go:89] found id: ""
	I0731 18:11:18.667863   74203 logs.go:276] 0 containers: []
	W0731 18:11:18.667873   74203 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 18:11:18.667883   74203 logs.go:123] Gathering logs for dmesg ...
	I0731 18:11:18.667897   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 18:11:18.681005   74203 logs.go:123] Gathering logs for describe nodes ...
	I0731 18:11:18.681030   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 18:11:18.747793   74203 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 18:11:18.747875   74203 logs.go:123] Gathering logs for CRI-O ...
	I0731 18:11:18.747892   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 18:11:18.828970   74203 logs.go:123] Gathering logs for container status ...
	I0731 18:11:18.829005   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 18:11:18.866724   74203 logs.go:123] Gathering logs for kubelet ...
	I0731 18:11:18.866749   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 18:11:16.279368   73696 pod_ready.go:102] pod "metrics-server-569cc877fc-64hp4" in "kube-system" namespace has status "Ready":"False"
	I0731 18:11:18.778730   73696 pod_ready.go:102] pod "metrics-server-569cc877fc-64hp4" in "kube-system" namespace has status "Ready":"False"
	I0731 18:11:20.779465   73696 pod_ready.go:102] pod "metrics-server-569cc877fc-64hp4" in "kube-system" namespace has status "Ready":"False"
	I0731 18:11:17.144041   73479 pod_ready.go:102] pod "metrics-server-78fcd8795b-27pkr" in "kube-system" namespace has status "Ready":"False"
	I0731 18:11:19.645356   73479 pod_ready.go:102] pod "metrics-server-78fcd8795b-27pkr" in "kube-system" namespace has status "Ready":"False"
	I0731 18:11:21.272651   73800 pod_ready.go:102] pod "metrics-server-569cc877fc-fzxrw" in "kube-system" namespace has status "Ready":"False"
	I0731 18:11:23.771240   73800 pod_ready.go:102] pod "metrics-server-569cc877fc-fzxrw" in "kube-system" namespace has status "Ready":"False"
	I0731 18:11:21.416598   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:11:21.431968   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 18:11:21.432027   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 18:11:21.469670   74203 cri.go:89] found id: ""
	I0731 18:11:21.469696   74203 logs.go:276] 0 containers: []
	W0731 18:11:21.469703   74203 logs.go:278] No container was found matching "kube-apiserver"
	I0731 18:11:21.469709   74203 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 18:11:21.469762   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 18:11:21.508461   74203 cri.go:89] found id: ""
	I0731 18:11:21.508490   74203 logs.go:276] 0 containers: []
	W0731 18:11:21.508500   74203 logs.go:278] No container was found matching "etcd"
	I0731 18:11:21.508506   74203 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 18:11:21.508570   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 18:11:21.548101   74203 cri.go:89] found id: ""
	I0731 18:11:21.548127   74203 logs.go:276] 0 containers: []
	W0731 18:11:21.548136   74203 logs.go:278] No container was found matching "coredns"
	I0731 18:11:21.548142   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 18:11:21.548204   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 18:11:21.582617   74203 cri.go:89] found id: ""
	I0731 18:11:21.582646   74203 logs.go:276] 0 containers: []
	W0731 18:11:21.582653   74203 logs.go:278] No container was found matching "kube-scheduler"
	I0731 18:11:21.582659   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 18:11:21.582712   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 18:11:21.614185   74203 cri.go:89] found id: ""
	I0731 18:11:21.614210   74203 logs.go:276] 0 containers: []
	W0731 18:11:21.614218   74203 logs.go:278] No container was found matching "kube-proxy"
	I0731 18:11:21.614223   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 18:11:21.614278   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 18:11:21.647596   74203 cri.go:89] found id: ""
	I0731 18:11:21.647619   74203 logs.go:276] 0 containers: []
	W0731 18:11:21.647629   74203 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 18:11:21.647636   74203 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 18:11:21.647693   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 18:11:21.680106   74203 cri.go:89] found id: ""
	I0731 18:11:21.680132   74203 logs.go:276] 0 containers: []
	W0731 18:11:21.680142   74203 logs.go:278] No container was found matching "kindnet"
	I0731 18:11:21.680149   74203 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 18:11:21.680208   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 18:11:21.714708   74203 cri.go:89] found id: ""
	I0731 18:11:21.714733   74203 logs.go:276] 0 containers: []
	W0731 18:11:21.714742   74203 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 18:11:21.714754   74203 logs.go:123] Gathering logs for describe nodes ...
	I0731 18:11:21.714779   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 18:11:21.783425   74203 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 18:11:21.783448   74203 logs.go:123] Gathering logs for CRI-O ...
	I0731 18:11:21.783462   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 18:11:21.859943   74203 logs.go:123] Gathering logs for container status ...
	I0731 18:11:21.859980   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 18:11:21.898374   74203 logs.go:123] Gathering logs for kubelet ...
	I0731 18:11:21.898405   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 18:11:21.945753   74203 logs.go:123] Gathering logs for dmesg ...
	I0731 18:11:21.945784   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 18:11:24.459481   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:11:24.471376   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 18:11:24.471435   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 18:11:24.506474   74203 cri.go:89] found id: ""
	I0731 18:11:24.506502   74203 logs.go:276] 0 containers: []
	W0731 18:11:24.506511   74203 logs.go:278] No container was found matching "kube-apiserver"
	I0731 18:11:24.506516   74203 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 18:11:24.506572   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 18:11:24.547298   74203 cri.go:89] found id: ""
	I0731 18:11:24.547324   74203 logs.go:276] 0 containers: []
	W0731 18:11:24.547332   74203 logs.go:278] No container was found matching "etcd"
	I0731 18:11:24.547337   74203 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 18:11:24.547402   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 18:11:24.579912   74203 cri.go:89] found id: ""
	I0731 18:11:24.579944   74203 logs.go:276] 0 containers: []
	W0731 18:11:24.579955   74203 logs.go:278] No container was found matching "coredns"
	I0731 18:11:24.579963   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 18:11:24.580032   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 18:11:24.613754   74203 cri.go:89] found id: ""
	I0731 18:11:24.613782   74203 logs.go:276] 0 containers: []
	W0731 18:11:24.613791   74203 logs.go:278] No container was found matching "kube-scheduler"
	I0731 18:11:24.613799   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 18:11:24.613859   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 18:11:24.649782   74203 cri.go:89] found id: ""
	I0731 18:11:24.649811   74203 logs.go:276] 0 containers: []
	W0731 18:11:24.649822   74203 logs.go:278] No container was found matching "kube-proxy"
	I0731 18:11:24.649829   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 18:11:24.649888   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 18:11:24.689232   74203 cri.go:89] found id: ""
	I0731 18:11:24.689264   74203 logs.go:276] 0 containers: []
	W0731 18:11:24.689274   74203 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 18:11:24.689283   74203 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 18:11:24.689361   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 18:11:24.727861   74203 cri.go:89] found id: ""
	I0731 18:11:24.727894   74203 logs.go:276] 0 containers: []
	W0731 18:11:24.727902   74203 logs.go:278] No container was found matching "kindnet"
	I0731 18:11:24.727924   74203 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 18:11:24.727983   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 18:11:24.763839   74203 cri.go:89] found id: ""
	I0731 18:11:24.763866   74203 logs.go:276] 0 containers: []
	W0731 18:11:24.763876   74203 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 18:11:24.763886   74203 logs.go:123] Gathering logs for CRI-O ...
	I0731 18:11:24.763901   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 18:11:24.841090   74203 logs.go:123] Gathering logs for container status ...
	I0731 18:11:24.841131   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 18:11:24.877206   74203 logs.go:123] Gathering logs for kubelet ...
	I0731 18:11:24.877231   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 18:11:24.926149   74203 logs.go:123] Gathering logs for dmesg ...
	I0731 18:11:24.926180   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 18:11:24.938795   74203 logs.go:123] Gathering logs for describe nodes ...
	I0731 18:11:24.938822   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 18:11:25.008349   74203 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 18:11:23.279256   73696 pod_ready.go:102] pod "metrics-server-569cc877fc-64hp4" in "kube-system" namespace has status "Ready":"False"
	I0731 18:11:25.778644   73696 pod_ready.go:102] pod "metrics-server-569cc877fc-64hp4" in "kube-system" namespace has status "Ready":"False"
	I0731 18:11:22.143312   73479 pod_ready.go:102] pod "metrics-server-78fcd8795b-27pkr" in "kube-system" namespace has status "Ready":"False"
	I0731 18:11:24.144259   73479 pod_ready.go:102] pod "metrics-server-78fcd8795b-27pkr" in "kube-system" namespace has status "Ready":"False"
	I0731 18:11:26.144310   73479 pod_ready.go:102] pod "metrics-server-78fcd8795b-27pkr" in "kube-system" namespace has status "Ready":"False"
	I0731 18:11:25.771403   73800 pod_ready.go:102] pod "metrics-server-569cc877fc-fzxrw" in "kube-system" namespace has status "Ready":"False"
	I0731 18:11:28.270613   73800 pod_ready.go:102] pod "metrics-server-569cc877fc-fzxrw" in "kube-system" namespace has status "Ready":"False"
	I0731 18:11:27.509192   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:11:27.522506   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 18:11:27.522582   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 18:11:27.557915   74203 cri.go:89] found id: ""
	I0731 18:11:27.557943   74203 logs.go:276] 0 containers: []
	W0731 18:11:27.557954   74203 logs.go:278] No container was found matching "kube-apiserver"
	I0731 18:11:27.557962   74203 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 18:11:27.558019   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 18:11:27.594295   74203 cri.go:89] found id: ""
	I0731 18:11:27.594322   74203 logs.go:276] 0 containers: []
	W0731 18:11:27.594332   74203 logs.go:278] No container was found matching "etcd"
	I0731 18:11:27.594348   74203 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 18:11:27.594410   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 18:11:27.626830   74203 cri.go:89] found id: ""
	I0731 18:11:27.626857   74203 logs.go:276] 0 containers: []
	W0731 18:11:27.626868   74203 logs.go:278] No container was found matching "coredns"
	I0731 18:11:27.626875   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 18:11:27.626964   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 18:11:27.662062   74203 cri.go:89] found id: ""
	I0731 18:11:27.662084   74203 logs.go:276] 0 containers: []
	W0731 18:11:27.662092   74203 logs.go:278] No container was found matching "kube-scheduler"
	I0731 18:11:27.662099   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 18:11:27.662158   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 18:11:27.695686   74203 cri.go:89] found id: ""
	I0731 18:11:27.695715   74203 logs.go:276] 0 containers: []
	W0731 18:11:27.695727   74203 logs.go:278] No container was found matching "kube-proxy"
	I0731 18:11:27.695735   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 18:11:27.695785   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 18:11:27.729444   74203 cri.go:89] found id: ""
	I0731 18:11:27.729467   74203 logs.go:276] 0 containers: []
	W0731 18:11:27.729475   74203 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 18:11:27.729481   74203 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 18:11:27.729531   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 18:11:27.761889   74203 cri.go:89] found id: ""
	I0731 18:11:27.761916   74203 logs.go:276] 0 containers: []
	W0731 18:11:27.761926   74203 logs.go:278] No container was found matching "kindnet"
	I0731 18:11:27.761934   74203 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 18:11:27.761995   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 18:11:27.796178   74203 cri.go:89] found id: ""
	I0731 18:11:27.796199   74203 logs.go:276] 0 containers: []
	W0731 18:11:27.796206   74203 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 18:11:27.796214   74203 logs.go:123] Gathering logs for kubelet ...
	I0731 18:11:27.796227   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 18:11:27.849613   74203 logs.go:123] Gathering logs for dmesg ...
	I0731 18:11:27.849650   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 18:11:27.862892   74203 logs.go:123] Gathering logs for describe nodes ...
	I0731 18:11:27.862923   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 18:11:27.928691   74203 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 18:11:27.928717   74203 logs.go:123] Gathering logs for CRI-O ...
	I0731 18:11:27.928740   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 18:11:28.006310   74203 logs.go:123] Gathering logs for container status ...
	I0731 18:11:28.006340   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 18:11:27.779125   73696 pod_ready.go:102] pod "metrics-server-569cc877fc-64hp4" in "kube-system" namespace has status "Ready":"False"
	I0731 18:11:30.279252   73696 pod_ready.go:102] pod "metrics-server-569cc877fc-64hp4" in "kube-system" namespace has status "Ready":"False"
	I0731 18:11:28.643172   73479 pod_ready.go:102] pod "metrics-server-78fcd8795b-27pkr" in "kube-system" namespace has status "Ready":"False"
	I0731 18:11:30.645474   73479 pod_ready.go:102] pod "metrics-server-78fcd8795b-27pkr" in "kube-system" namespace has status "Ready":"False"
	I0731 18:11:30.271016   73800 pod_ready.go:102] pod "metrics-server-569cc877fc-fzxrw" in "kube-system" namespace has status "Ready":"False"
	I0731 18:11:32.771684   73800 pod_ready.go:102] pod "metrics-server-569cc877fc-fzxrw" in "kube-system" namespace has status "Ready":"False"
	I0731 18:11:30.543065   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:11:30.555951   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 18:11:30.556013   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 18:11:30.597411   74203 cri.go:89] found id: ""
	I0731 18:11:30.597440   74203 logs.go:276] 0 containers: []
	W0731 18:11:30.597451   74203 logs.go:278] No container was found matching "kube-apiserver"
	I0731 18:11:30.597458   74203 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 18:11:30.597518   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 18:11:30.629836   74203 cri.go:89] found id: ""
	I0731 18:11:30.629866   74203 logs.go:276] 0 containers: []
	W0731 18:11:30.629873   74203 logs.go:278] No container was found matching "etcd"
	I0731 18:11:30.629878   74203 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 18:11:30.629932   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 18:11:30.667402   74203 cri.go:89] found id: ""
	I0731 18:11:30.667432   74203 logs.go:276] 0 containers: []
	W0731 18:11:30.667443   74203 logs.go:278] No container was found matching "coredns"
	I0731 18:11:30.667450   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 18:11:30.667513   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 18:11:30.701677   74203 cri.go:89] found id: ""
	I0731 18:11:30.701708   74203 logs.go:276] 0 containers: []
	W0731 18:11:30.701716   74203 logs.go:278] No container was found matching "kube-scheduler"
	I0731 18:11:30.701722   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 18:11:30.701773   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 18:11:30.736685   74203 cri.go:89] found id: ""
	I0731 18:11:30.736714   74203 logs.go:276] 0 containers: []
	W0731 18:11:30.736721   74203 logs.go:278] No container was found matching "kube-proxy"
	I0731 18:11:30.736736   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 18:11:30.736786   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 18:11:30.771501   74203 cri.go:89] found id: ""
	I0731 18:11:30.771526   74203 logs.go:276] 0 containers: []
	W0731 18:11:30.771543   74203 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 18:11:30.771549   74203 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 18:11:30.771597   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 18:11:30.805878   74203 cri.go:89] found id: ""
	I0731 18:11:30.805902   74203 logs.go:276] 0 containers: []
	W0731 18:11:30.805911   74203 logs.go:278] No container was found matching "kindnet"
	I0731 18:11:30.805921   74203 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 18:11:30.805966   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 18:11:30.839001   74203 cri.go:89] found id: ""
	I0731 18:11:30.839027   74203 logs.go:276] 0 containers: []
	W0731 18:11:30.839038   74203 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 18:11:30.839048   74203 logs.go:123] Gathering logs for kubelet ...
	I0731 18:11:30.839062   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 18:11:30.893357   74203 logs.go:123] Gathering logs for dmesg ...
	I0731 18:11:30.893387   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 18:11:30.907222   74203 logs.go:123] Gathering logs for describe nodes ...
	I0731 18:11:30.907248   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 18:11:30.985626   74203 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 18:11:30.985648   74203 logs.go:123] Gathering logs for CRI-O ...
	I0731 18:11:30.985668   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 18:11:31.067900   74203 logs.go:123] Gathering logs for container status ...
	I0731 18:11:31.067948   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 18:11:33.607259   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:11:33.621596   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 18:11:33.621656   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 18:11:33.663616   74203 cri.go:89] found id: ""
	I0731 18:11:33.663642   74203 logs.go:276] 0 containers: []
	W0731 18:11:33.663649   74203 logs.go:278] No container was found matching "kube-apiserver"
	I0731 18:11:33.663655   74203 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 18:11:33.663704   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 18:11:33.702133   74203 cri.go:89] found id: ""
	I0731 18:11:33.702159   74203 logs.go:276] 0 containers: []
	W0731 18:11:33.702167   74203 logs.go:278] No container was found matching "etcd"
	I0731 18:11:33.702173   74203 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 18:11:33.702226   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 18:11:33.733730   74203 cri.go:89] found id: ""
	I0731 18:11:33.733752   74203 logs.go:276] 0 containers: []
	W0731 18:11:33.733760   74203 logs.go:278] No container was found matching "coredns"
	I0731 18:11:33.733765   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 18:11:33.733813   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 18:11:33.765036   74203 cri.go:89] found id: ""
	I0731 18:11:33.765064   74203 logs.go:276] 0 containers: []
	W0731 18:11:33.765074   74203 logs.go:278] No container was found matching "kube-scheduler"
	I0731 18:11:33.765080   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 18:11:33.765128   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 18:11:33.799604   74203 cri.go:89] found id: ""
	I0731 18:11:33.799630   74203 logs.go:276] 0 containers: []
	W0731 18:11:33.799640   74203 logs.go:278] No container was found matching "kube-proxy"
	I0731 18:11:33.799648   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 18:11:33.799716   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 18:11:33.831434   74203 cri.go:89] found id: ""
	I0731 18:11:33.831455   74203 logs.go:276] 0 containers: []
	W0731 18:11:33.831464   74203 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 18:11:33.831469   74203 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 18:11:33.831514   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 18:11:33.862975   74203 cri.go:89] found id: ""
	I0731 18:11:33.863004   74203 logs.go:276] 0 containers: []
	W0731 18:11:33.863014   74203 logs.go:278] No container was found matching "kindnet"
	I0731 18:11:33.863022   74203 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 18:11:33.863090   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 18:11:33.895674   74203 cri.go:89] found id: ""
	I0731 18:11:33.895704   74203 logs.go:276] 0 containers: []
	W0731 18:11:33.895714   74203 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 18:11:33.895723   74203 logs.go:123] Gathering logs for container status ...
	I0731 18:11:33.895737   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 18:11:33.931954   74203 logs.go:123] Gathering logs for kubelet ...
	I0731 18:11:33.931980   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 18:11:33.985353   74203 logs.go:123] Gathering logs for dmesg ...
	I0731 18:11:33.985385   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 18:11:33.997857   74203 logs.go:123] Gathering logs for describe nodes ...
	I0731 18:11:33.997882   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 18:11:34.060523   74203 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 18:11:34.060553   74203 logs.go:123] Gathering logs for CRI-O ...
	I0731 18:11:34.060575   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 18:11:32.778212   73696 pod_ready.go:102] pod "metrics-server-569cc877fc-64hp4" in "kube-system" namespace has status "Ready":"False"
	I0731 18:11:35.278655   73696 pod_ready.go:102] pod "metrics-server-569cc877fc-64hp4" in "kube-system" namespace has status "Ready":"False"
	I0731 18:11:33.151579   73479 pod_ready.go:102] pod "metrics-server-78fcd8795b-27pkr" in "kube-system" namespace has status "Ready":"False"
	I0731 18:11:35.643326   73479 pod_ready.go:102] pod "metrics-server-78fcd8795b-27pkr" in "kube-system" namespace has status "Ready":"False"
	I0731 18:11:34.771873   73800 pod_ready.go:102] pod "metrics-server-569cc877fc-fzxrw" in "kube-system" namespace has status "Ready":"False"
	I0731 18:11:36.772309   73800 pod_ready.go:102] pod "metrics-server-569cc877fc-fzxrw" in "kube-system" namespace has status "Ready":"False"
	I0731 18:11:39.271582   73800 pod_ready.go:102] pod "metrics-server-569cc877fc-fzxrw" in "kube-system" namespace has status "Ready":"False"
	I0731 18:11:36.643003   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:11:36.659306   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 18:11:36.659385   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 18:11:36.717097   74203 cri.go:89] found id: ""
	I0731 18:11:36.717129   74203 logs.go:276] 0 containers: []
	W0731 18:11:36.717141   74203 logs.go:278] No container was found matching "kube-apiserver"
	I0731 18:11:36.717149   74203 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 18:11:36.717212   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 18:11:36.750288   74203 cri.go:89] found id: ""
	I0731 18:11:36.750314   74203 logs.go:276] 0 containers: []
	W0731 18:11:36.750325   74203 logs.go:278] No container was found matching "etcd"
	I0731 18:11:36.750331   74203 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 18:11:36.750391   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 18:11:36.785272   74203 cri.go:89] found id: ""
	I0731 18:11:36.785296   74203 logs.go:276] 0 containers: []
	W0731 18:11:36.785304   74203 logs.go:278] No container was found matching "coredns"
	I0731 18:11:36.785310   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 18:11:36.785360   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 18:11:36.818927   74203 cri.go:89] found id: ""
	I0731 18:11:36.818953   74203 logs.go:276] 0 containers: []
	W0731 18:11:36.818965   74203 logs.go:278] No container was found matching "kube-scheduler"
	I0731 18:11:36.818972   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 18:11:36.819020   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 18:11:36.854562   74203 cri.go:89] found id: ""
	I0731 18:11:36.854593   74203 logs.go:276] 0 containers: []
	W0731 18:11:36.854602   74203 logs.go:278] No container was found matching "kube-proxy"
	I0731 18:11:36.854607   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 18:11:36.854670   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 18:11:36.887786   74203 cri.go:89] found id: ""
	I0731 18:11:36.887814   74203 logs.go:276] 0 containers: []
	W0731 18:11:36.887825   74203 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 18:11:36.887833   74203 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 18:11:36.887893   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 18:11:36.919418   74203 cri.go:89] found id: ""
	I0731 18:11:36.919446   74203 logs.go:276] 0 containers: []
	W0731 18:11:36.919457   74203 logs.go:278] No container was found matching "kindnet"
	I0731 18:11:36.919471   74203 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 18:11:36.919533   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 18:11:36.956934   74203 cri.go:89] found id: ""
	I0731 18:11:36.956957   74203 logs.go:276] 0 containers: []
	W0731 18:11:36.956964   74203 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 18:11:36.956971   74203 logs.go:123] Gathering logs for kubelet ...
	I0731 18:11:36.956989   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 18:11:37.003755   74203 logs.go:123] Gathering logs for dmesg ...
	I0731 18:11:37.003783   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 18:11:37.016977   74203 logs.go:123] Gathering logs for describe nodes ...
	I0731 18:11:37.017004   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 18:11:37.091617   74203 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 18:11:37.091646   74203 logs.go:123] Gathering logs for CRI-O ...
	I0731 18:11:37.091662   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 18:11:37.170870   74203 logs.go:123] Gathering logs for container status ...
	I0731 18:11:37.170903   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 18:11:39.714271   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:11:39.730306   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 18:11:39.730383   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 18:11:39.765368   74203 cri.go:89] found id: ""
	I0731 18:11:39.765399   74203 logs.go:276] 0 containers: []
	W0731 18:11:39.765407   74203 logs.go:278] No container was found matching "kube-apiserver"
	I0731 18:11:39.765412   74203 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 18:11:39.765471   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 18:11:39.800394   74203 cri.go:89] found id: ""
	I0731 18:11:39.800419   74203 logs.go:276] 0 containers: []
	W0731 18:11:39.800427   74203 logs.go:278] No container was found matching "etcd"
	I0731 18:11:39.800433   74203 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 18:11:39.800486   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 18:11:39.834861   74203 cri.go:89] found id: ""
	I0731 18:11:39.834889   74203 logs.go:276] 0 containers: []
	W0731 18:11:39.834898   74203 logs.go:278] No container was found matching "coredns"
	I0731 18:11:39.834903   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 18:11:39.834958   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 18:11:39.868108   74203 cri.go:89] found id: ""
	I0731 18:11:39.868132   74203 logs.go:276] 0 containers: []
	W0731 18:11:39.868141   74203 logs.go:278] No container was found matching "kube-scheduler"
	I0731 18:11:39.868146   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 18:11:39.868220   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 18:11:39.902097   74203 cri.go:89] found id: ""
	I0731 18:11:39.902120   74203 logs.go:276] 0 containers: []
	W0731 18:11:39.902128   74203 logs.go:278] No container was found matching "kube-proxy"
	I0731 18:11:39.902134   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 18:11:39.902184   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 18:11:39.933073   74203 cri.go:89] found id: ""
	I0731 18:11:39.933100   74203 logs.go:276] 0 containers: []
	W0731 18:11:39.933109   74203 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 18:11:39.933114   74203 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 18:11:39.933165   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 18:11:39.965748   74203 cri.go:89] found id: ""
	I0731 18:11:39.965775   74203 logs.go:276] 0 containers: []
	W0731 18:11:39.965785   74203 logs.go:278] No container was found matching "kindnet"
	I0731 18:11:39.965796   74203 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 18:11:39.965856   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 18:11:39.998164   74203 cri.go:89] found id: ""
	I0731 18:11:39.998189   74203 logs.go:276] 0 containers: []
	W0731 18:11:39.998197   74203 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 18:11:39.998205   74203 logs.go:123] Gathering logs for kubelet ...
	I0731 18:11:39.998222   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 18:11:40.049991   74203 logs.go:123] Gathering logs for dmesg ...
	I0731 18:11:40.050027   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 18:11:40.063676   74203 logs.go:123] Gathering logs for describe nodes ...
	I0731 18:11:40.063705   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 18:11:40.125855   74203 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 18:11:40.125880   74203 logs.go:123] Gathering logs for CRI-O ...
	I0731 18:11:40.125896   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 18:11:40.207937   74203 logs.go:123] Gathering logs for container status ...
	I0731 18:11:40.207970   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 18:11:37.778894   73696 pod_ready.go:102] pod "metrics-server-569cc877fc-64hp4" in "kube-system" namespace has status "Ready":"False"
	I0731 18:11:40.278489   73696 pod_ready.go:102] pod "metrics-server-569cc877fc-64hp4" in "kube-system" namespace has status "Ready":"False"
	I0731 18:11:37.643651   73479 pod_ready.go:102] pod "metrics-server-78fcd8795b-27pkr" in "kube-system" namespace has status "Ready":"False"
	I0731 18:11:40.144731   73479 pod_ready.go:102] pod "metrics-server-78fcd8795b-27pkr" in "kube-system" namespace has status "Ready":"False"
	I0731 18:11:41.271897   73800 pod_ready.go:102] pod "metrics-server-569cc877fc-fzxrw" in "kube-system" namespace has status "Ready":"False"
	I0731 18:11:43.771556   73800 pod_ready.go:102] pod "metrics-server-569cc877fc-fzxrw" in "kube-system" namespace has status "Ready":"False"
	I0731 18:11:42.746315   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:11:42.758998   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 18:11:42.759053   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 18:11:42.791921   74203 cri.go:89] found id: ""
	I0731 18:11:42.791946   74203 logs.go:276] 0 containers: []
	W0731 18:11:42.791954   74203 logs.go:278] No container was found matching "kube-apiserver"
	I0731 18:11:42.791958   74203 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 18:11:42.792004   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 18:11:42.822888   74203 cri.go:89] found id: ""
	I0731 18:11:42.822914   74203 logs.go:276] 0 containers: []
	W0731 18:11:42.822922   74203 logs.go:278] No container was found matching "etcd"
	I0731 18:11:42.822927   74203 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 18:11:42.822973   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 18:11:42.854516   74203 cri.go:89] found id: ""
	I0731 18:11:42.854545   74203 logs.go:276] 0 containers: []
	W0731 18:11:42.854564   74203 logs.go:278] No container was found matching "coredns"
	I0731 18:11:42.854574   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 18:11:42.854638   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 18:11:42.890933   74203 cri.go:89] found id: ""
	I0731 18:11:42.890955   74203 logs.go:276] 0 containers: []
	W0731 18:11:42.890963   74203 logs.go:278] No container was found matching "kube-scheduler"
	I0731 18:11:42.890968   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 18:11:42.891013   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 18:11:42.925170   74203 cri.go:89] found id: ""
	I0731 18:11:42.925196   74203 logs.go:276] 0 containers: []
	W0731 18:11:42.925206   74203 logs.go:278] No container was found matching "kube-proxy"
	I0731 18:11:42.925213   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 18:11:42.925273   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 18:11:42.959845   74203 cri.go:89] found id: ""
	I0731 18:11:42.959868   74203 logs.go:276] 0 containers: []
	W0731 18:11:42.959876   74203 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 18:11:42.959881   74203 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 18:11:42.959938   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 18:11:42.997305   74203 cri.go:89] found id: ""
	I0731 18:11:42.997346   74203 logs.go:276] 0 containers: []
	W0731 18:11:42.997358   74203 logs.go:278] No container was found matching "kindnet"
	I0731 18:11:42.997366   74203 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 18:11:42.997427   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 18:11:43.030663   74203 cri.go:89] found id: ""
	I0731 18:11:43.030690   74203 logs.go:276] 0 containers: []
	W0731 18:11:43.030700   74203 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 18:11:43.030711   74203 logs.go:123] Gathering logs for describe nodes ...
	I0731 18:11:43.030725   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 18:11:43.112280   74203 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 18:11:43.112303   74203 logs.go:123] Gathering logs for CRI-O ...
	I0731 18:11:43.112318   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 18:11:43.209002   74203 logs.go:123] Gathering logs for container status ...
	I0731 18:11:43.209035   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 18:11:43.249596   74203 logs.go:123] Gathering logs for kubelet ...
	I0731 18:11:43.249629   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 18:11:43.302419   74203 logs.go:123] Gathering logs for dmesg ...
	I0731 18:11:43.302449   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 18:11:42.278874   73696 pod_ready.go:102] pod "metrics-server-569cc877fc-64hp4" in "kube-system" namespace has status "Ready":"False"
	I0731 18:11:44.273355   73696 pod_ready.go:81] duration metric: took 4m0.000454583s for pod "metrics-server-569cc877fc-64hp4" in "kube-system" namespace to be "Ready" ...
	E0731 18:11:44.273380   73696 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-569cc877fc-64hp4" in "kube-system" namespace to be "Ready" (will not retry!)
	I0731 18:11:44.273399   73696 pod_ready.go:38] duration metric: took 4m8.019714552s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0731 18:11:44.273430   73696 kubeadm.go:597] duration metric: took 4m16.379038728s to restartPrimaryControlPlane
	W0731 18:11:44.273506   73696 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0731 18:11:44.273531   73696 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0731 18:11:42.643165   73479 pod_ready.go:102] pod "metrics-server-78fcd8795b-27pkr" in "kube-system" namespace has status "Ready":"False"
	I0731 18:11:44.644976   73479 pod_ready.go:102] pod "metrics-server-78fcd8795b-27pkr" in "kube-system" namespace has status "Ready":"False"
	I0731 18:11:46.271751   73800 pod_ready.go:102] pod "metrics-server-569cc877fc-fzxrw" in "kube-system" namespace has status "Ready":"False"
	I0731 18:11:48.771274   73800 pod_ready.go:102] pod "metrics-server-569cc877fc-fzxrw" in "kube-system" namespace has status "Ready":"False"
	I0731 18:11:45.816910   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:11:45.829909   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 18:11:45.829976   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 18:11:45.865534   74203 cri.go:89] found id: ""
	I0731 18:11:45.865561   74203 logs.go:276] 0 containers: []
	W0731 18:11:45.865572   74203 logs.go:278] No container was found matching "kube-apiserver"
	I0731 18:11:45.865584   74203 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 18:11:45.865646   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 18:11:45.901552   74203 cri.go:89] found id: ""
	I0731 18:11:45.901585   74203 logs.go:276] 0 containers: []
	W0731 18:11:45.901593   74203 logs.go:278] No container was found matching "etcd"
	I0731 18:11:45.901598   74203 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 18:11:45.901678   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 18:11:45.938790   74203 cri.go:89] found id: ""
	I0731 18:11:45.938820   74203 logs.go:276] 0 containers: []
	W0731 18:11:45.938842   74203 logs.go:278] No container was found matching "coredns"
	I0731 18:11:45.938859   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 18:11:45.938926   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 18:11:45.971502   74203 cri.go:89] found id: ""
	I0731 18:11:45.971534   74203 logs.go:276] 0 containers: []
	W0731 18:11:45.971546   74203 logs.go:278] No container was found matching "kube-scheduler"
	I0731 18:11:45.971553   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 18:11:45.971620   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 18:11:46.009281   74203 cri.go:89] found id: ""
	I0731 18:11:46.009316   74203 logs.go:276] 0 containers: []
	W0731 18:11:46.009327   74203 logs.go:278] No container was found matching "kube-proxy"
	I0731 18:11:46.009335   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 18:11:46.009399   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 18:11:46.042899   74203 cri.go:89] found id: ""
	I0731 18:11:46.042928   74203 logs.go:276] 0 containers: []
	W0731 18:11:46.042939   74203 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 18:11:46.042947   74203 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 18:11:46.043005   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 18:11:46.079982   74203 cri.go:89] found id: ""
	I0731 18:11:46.080013   74203 logs.go:276] 0 containers: []
	W0731 18:11:46.080024   74203 logs.go:278] No container was found matching "kindnet"
	I0731 18:11:46.080031   74203 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 18:11:46.080098   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 18:11:46.113136   74203 cri.go:89] found id: ""
	I0731 18:11:46.113168   74203 logs.go:276] 0 containers: []
	W0731 18:11:46.113179   74203 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 18:11:46.113191   74203 logs.go:123] Gathering logs for kubelet ...
	I0731 18:11:46.113206   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 18:11:46.165818   74203 logs.go:123] Gathering logs for dmesg ...
	I0731 18:11:46.165855   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 18:11:46.181058   74203 logs.go:123] Gathering logs for describe nodes ...
	I0731 18:11:46.181083   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 18:11:46.256805   74203 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 18:11:46.256826   74203 logs.go:123] Gathering logs for CRI-O ...
	I0731 18:11:46.256838   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 18:11:46.353045   74203 logs.go:123] Gathering logs for container status ...
	I0731 18:11:46.353093   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 18:11:48.894656   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:11:48.910648   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 18:11:48.910723   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 18:11:48.941080   74203 cri.go:89] found id: ""
	I0731 18:11:48.941103   74203 logs.go:276] 0 containers: []
	W0731 18:11:48.941111   74203 logs.go:278] No container was found matching "kube-apiserver"
	I0731 18:11:48.941117   74203 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 18:11:48.941164   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 18:11:48.972113   74203 cri.go:89] found id: ""
	I0731 18:11:48.972136   74203 logs.go:276] 0 containers: []
	W0731 18:11:48.972146   74203 logs.go:278] No container was found matching "etcd"
	I0731 18:11:48.972151   74203 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 18:11:48.972208   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 18:11:49.004521   74203 cri.go:89] found id: ""
	I0731 18:11:49.004547   74203 logs.go:276] 0 containers: []
	W0731 18:11:49.004557   74203 logs.go:278] No container was found matching "coredns"
	I0731 18:11:49.004571   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 18:11:49.004658   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 18:11:49.036600   74203 cri.go:89] found id: ""
	I0731 18:11:49.036622   74203 logs.go:276] 0 containers: []
	W0731 18:11:49.036629   74203 logs.go:278] No container was found matching "kube-scheduler"
	I0731 18:11:49.036635   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 18:11:49.036683   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 18:11:49.071397   74203 cri.go:89] found id: ""
	I0731 18:11:49.071426   74203 logs.go:276] 0 containers: []
	W0731 18:11:49.071436   74203 logs.go:278] No container was found matching "kube-proxy"
	I0731 18:11:49.071444   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 18:11:49.071501   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 18:11:49.108907   74203 cri.go:89] found id: ""
	I0731 18:11:49.108933   74203 logs.go:276] 0 containers: []
	W0731 18:11:49.108944   74203 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 18:11:49.108952   74203 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 18:11:49.109007   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 18:11:49.141808   74203 cri.go:89] found id: ""
	I0731 18:11:49.141834   74203 logs.go:276] 0 containers: []
	W0731 18:11:49.141844   74203 logs.go:278] No container was found matching "kindnet"
	I0731 18:11:49.141856   74203 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 18:11:49.141917   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 18:11:49.174063   74203 cri.go:89] found id: ""
	I0731 18:11:49.174087   74203 logs.go:276] 0 containers: []
	W0731 18:11:49.174095   74203 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 18:11:49.174104   74203 logs.go:123] Gathering logs for container status ...
	I0731 18:11:49.174116   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 18:11:49.212152   74203 logs.go:123] Gathering logs for kubelet ...
	I0731 18:11:49.212181   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 18:11:49.267297   74203 logs.go:123] Gathering logs for dmesg ...
	I0731 18:11:49.267324   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 18:11:49.281342   74203 logs.go:123] Gathering logs for describe nodes ...
	I0731 18:11:49.281365   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 18:11:49.349843   74203 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 18:11:49.349866   74203 logs.go:123] Gathering logs for CRI-O ...
	I0731 18:11:49.349882   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 18:11:47.144588   73479 pod_ready.go:102] pod "metrics-server-78fcd8795b-27pkr" in "kube-system" namespace has status "Ready":"False"
	I0731 18:11:49.644395   73479 pod_ready.go:102] pod "metrics-server-78fcd8795b-27pkr" in "kube-system" namespace has status "Ready":"False"
	I0731 18:11:51.271203   73800 pod_ready.go:102] pod "metrics-server-569cc877fc-fzxrw" in "kube-system" namespace has status "Ready":"False"
	I0731 18:11:53.770849   73800 pod_ready.go:102] pod "metrics-server-569cc877fc-fzxrw" in "kube-system" namespace has status "Ready":"False"
	I0731 18:11:51.927764   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:11:51.940480   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 18:11:51.940539   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 18:11:51.973731   74203 cri.go:89] found id: ""
	I0731 18:11:51.973759   74203 logs.go:276] 0 containers: []
	W0731 18:11:51.973768   74203 logs.go:278] No container was found matching "kube-apiserver"
	I0731 18:11:51.973780   74203 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 18:11:51.973837   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 18:11:52.003761   74203 cri.go:89] found id: ""
	I0731 18:11:52.003783   74203 logs.go:276] 0 containers: []
	W0731 18:11:52.003790   74203 logs.go:278] No container was found matching "etcd"
	I0731 18:11:52.003795   74203 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 18:11:52.003844   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 18:11:52.035009   74203 cri.go:89] found id: ""
	I0731 18:11:52.035028   74203 logs.go:276] 0 containers: []
	W0731 18:11:52.035035   74203 logs.go:278] No container was found matching "coredns"
	I0731 18:11:52.035041   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 18:11:52.035100   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 18:11:52.065475   74203 cri.go:89] found id: ""
	I0731 18:11:52.065501   74203 logs.go:276] 0 containers: []
	W0731 18:11:52.065509   74203 logs.go:278] No container was found matching "kube-scheduler"
	I0731 18:11:52.065515   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 18:11:52.065574   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 18:11:52.097529   74203 cri.go:89] found id: ""
	I0731 18:11:52.097558   74203 logs.go:276] 0 containers: []
	W0731 18:11:52.097567   74203 logs.go:278] No container was found matching "kube-proxy"
	I0731 18:11:52.097573   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 18:11:52.097622   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 18:11:52.128881   74203 cri.go:89] found id: ""
	I0731 18:11:52.128909   74203 logs.go:276] 0 containers: []
	W0731 18:11:52.128917   74203 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 18:11:52.128923   74203 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 18:11:52.128974   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 18:11:52.159894   74203 cri.go:89] found id: ""
	I0731 18:11:52.159921   74203 logs.go:276] 0 containers: []
	W0731 18:11:52.159931   74203 logs.go:278] No container was found matching "kindnet"
	I0731 18:11:52.159939   74203 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 18:11:52.159998   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 18:11:52.191955   74203 cri.go:89] found id: ""
	I0731 18:11:52.191981   74203 logs.go:276] 0 containers: []
	W0731 18:11:52.191990   74203 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 18:11:52.191999   74203 logs.go:123] Gathering logs for kubelet ...
	I0731 18:11:52.192009   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 18:11:52.246389   74203 logs.go:123] Gathering logs for dmesg ...
	I0731 18:11:52.246423   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 18:11:52.260226   74203 logs.go:123] Gathering logs for describe nodes ...
	I0731 18:11:52.260253   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 18:11:52.328423   74203 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 18:11:52.328447   74203 logs.go:123] Gathering logs for CRI-O ...
	I0731 18:11:52.328459   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 18:11:52.408456   74203 logs.go:123] Gathering logs for container status ...
	I0731 18:11:52.408495   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 18:11:54.947734   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:11:54.960359   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 18:11:54.960420   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 18:11:54.994231   74203 cri.go:89] found id: ""
	I0731 18:11:54.994256   74203 logs.go:276] 0 containers: []
	W0731 18:11:54.994264   74203 logs.go:278] No container was found matching "kube-apiserver"
	I0731 18:11:54.994270   74203 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 18:11:54.994332   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 18:11:55.027323   74203 cri.go:89] found id: ""
	I0731 18:11:55.027364   74203 logs.go:276] 0 containers: []
	W0731 18:11:55.027374   74203 logs.go:278] No container was found matching "etcd"
	I0731 18:11:55.027382   74203 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 18:11:55.027440   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 18:11:55.061741   74203 cri.go:89] found id: ""
	I0731 18:11:55.061763   74203 logs.go:276] 0 containers: []
	W0731 18:11:55.061771   74203 logs.go:278] No container was found matching "coredns"
	I0731 18:11:55.061776   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 18:11:55.061822   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 18:11:55.100685   74203 cri.go:89] found id: ""
	I0731 18:11:55.100712   74203 logs.go:276] 0 containers: []
	W0731 18:11:55.100720   74203 logs.go:278] No container was found matching "kube-scheduler"
	I0731 18:11:55.100726   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 18:11:55.100780   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 18:11:55.141917   74203 cri.go:89] found id: ""
	I0731 18:11:55.141958   74203 logs.go:276] 0 containers: []
	W0731 18:11:55.141971   74203 logs.go:278] No container was found matching "kube-proxy"
	I0731 18:11:55.141980   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 18:11:55.142054   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 18:11:55.176669   74203 cri.go:89] found id: ""
	I0731 18:11:55.176702   74203 logs.go:276] 0 containers: []
	W0731 18:11:55.176711   74203 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 18:11:55.176718   74203 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 18:11:55.176780   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 18:11:55.209795   74203 cri.go:89] found id: ""
	I0731 18:11:55.209829   74203 logs.go:276] 0 containers: []
	W0731 18:11:55.209842   74203 logs.go:278] No container was found matching "kindnet"
	I0731 18:11:55.209850   74203 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 18:11:55.209915   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 18:11:55.244503   74203 cri.go:89] found id: ""
	I0731 18:11:55.244527   74203 logs.go:276] 0 containers: []
	W0731 18:11:55.244537   74203 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 18:11:55.244556   74203 logs.go:123] Gathering logs for CRI-O ...
	I0731 18:11:55.244572   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 18:11:55.320033   74203 logs.go:123] Gathering logs for container status ...
	I0731 18:11:55.320071   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 18:11:52.143803   73479 pod_ready.go:102] pod "metrics-server-78fcd8795b-27pkr" in "kube-system" namespace has status "Ready":"False"
	I0731 18:11:54.644223   73479 pod_ready.go:102] pod "metrics-server-78fcd8795b-27pkr" in "kube-system" namespace has status "Ready":"False"
	I0731 18:11:56.273321   73800 pod_ready.go:102] pod "metrics-server-569cc877fc-fzxrw" in "kube-system" namespace has status "Ready":"False"
	I0731 18:11:58.772541   73800 pod_ready.go:102] pod "metrics-server-569cc877fc-fzxrw" in "kube-system" namespace has status "Ready":"False"
	I0731 18:11:55.357684   74203 logs.go:123] Gathering logs for kubelet ...
	I0731 18:11:55.357719   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 18:11:55.411465   74203 logs.go:123] Gathering logs for dmesg ...
	I0731 18:11:55.411501   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 18:11:55.423802   74203 logs.go:123] Gathering logs for describe nodes ...
	I0731 18:11:55.423833   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 18:11:55.487945   74203 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 18:11:57.988078   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:11:58.001639   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 18:11:58.001724   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 18:11:58.036075   74203 cri.go:89] found id: ""
	I0731 18:11:58.036099   74203 logs.go:276] 0 containers: []
	W0731 18:11:58.036107   74203 logs.go:278] No container was found matching "kube-apiserver"
	I0731 18:11:58.036112   74203 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 18:11:58.036163   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 18:11:58.067316   74203 cri.go:89] found id: ""
	I0731 18:11:58.067340   74203 logs.go:276] 0 containers: []
	W0731 18:11:58.067348   74203 logs.go:278] No container was found matching "etcd"
	I0731 18:11:58.067353   74203 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 18:11:58.067420   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 18:11:58.102446   74203 cri.go:89] found id: ""
	I0731 18:11:58.102470   74203 logs.go:276] 0 containers: []
	W0731 18:11:58.102479   74203 logs.go:278] No container was found matching "coredns"
	I0731 18:11:58.102485   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 18:11:58.102553   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 18:11:58.134924   74203 cri.go:89] found id: ""
	I0731 18:11:58.134949   74203 logs.go:276] 0 containers: []
	W0731 18:11:58.134957   74203 logs.go:278] No container was found matching "kube-scheduler"
	I0731 18:11:58.134963   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 18:11:58.135023   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 18:11:58.171589   74203 cri.go:89] found id: ""
	I0731 18:11:58.171611   74203 logs.go:276] 0 containers: []
	W0731 18:11:58.171620   74203 logs.go:278] No container was found matching "kube-proxy"
	I0731 18:11:58.171625   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 18:11:58.171673   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 18:11:58.203813   74203 cri.go:89] found id: ""
	I0731 18:11:58.203836   74203 logs.go:276] 0 containers: []
	W0731 18:11:58.203844   74203 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 18:11:58.203850   74203 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 18:11:58.203911   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 18:11:58.236251   74203 cri.go:89] found id: ""
	I0731 18:11:58.236277   74203 logs.go:276] 0 containers: []
	W0731 18:11:58.236288   74203 logs.go:278] No container was found matching "kindnet"
	I0731 18:11:58.236295   74203 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 18:11:58.236357   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 18:11:58.270595   74203 cri.go:89] found id: ""
	I0731 18:11:58.270624   74203 logs.go:276] 0 containers: []
	W0731 18:11:58.270636   74203 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 18:11:58.270647   74203 logs.go:123] Gathering logs for kubelet ...
	I0731 18:11:58.270662   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 18:11:58.321889   74203 logs.go:123] Gathering logs for dmesg ...
	I0731 18:11:58.321927   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 18:11:58.334529   74203 logs.go:123] Gathering logs for describe nodes ...
	I0731 18:11:58.334552   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 18:11:58.398489   74203 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 18:11:58.398515   74203 logs.go:123] Gathering logs for CRI-O ...
	I0731 18:11:58.398540   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 18:11:58.479657   74203 logs.go:123] Gathering logs for container status ...
	I0731 18:11:58.479695   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 18:11:57.143080   73479 pod_ready.go:102] pod "metrics-server-78fcd8795b-27pkr" in "kube-system" namespace has status "Ready":"False"
	I0731 18:11:59.144357   73479 pod_ready.go:102] pod "metrics-server-78fcd8795b-27pkr" in "kube-system" namespace has status "Ready":"False"
	I0731 18:12:01.643343   73479 pod_ready.go:102] pod "metrics-server-78fcd8795b-27pkr" in "kube-system" namespace has status "Ready":"False"
	I0731 18:12:01.266100   73800 pod_ready.go:81] duration metric: took 4m0.000711681s for pod "metrics-server-569cc877fc-fzxrw" in "kube-system" namespace to be "Ready" ...
	E0731 18:12:01.266123   73800 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-569cc877fc-fzxrw" in "kube-system" namespace to be "Ready" (will not retry!)
	I0731 18:12:01.266160   73800 pod_ready.go:38] duration metric: took 4m6.529342365s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0731 18:12:01.266205   73800 kubeadm.go:597] duration metric: took 4m13.643145888s to restartPrimaryControlPlane
	W0731 18:12:01.266270   73800 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0731 18:12:01.266297   73800 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0731 18:12:01.014684   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:12:01.027959   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 18:12:01.028026   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 18:12:01.065423   74203 cri.go:89] found id: ""
	I0731 18:12:01.065459   74203 logs.go:276] 0 containers: []
	W0731 18:12:01.065472   74203 logs.go:278] No container was found matching "kube-apiserver"
	I0731 18:12:01.065481   74203 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 18:12:01.065545   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 18:12:01.099519   74203 cri.go:89] found id: ""
	I0731 18:12:01.099549   74203 logs.go:276] 0 containers: []
	W0731 18:12:01.099561   74203 logs.go:278] No container was found matching "etcd"
	I0731 18:12:01.099568   74203 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 18:12:01.099630   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 18:12:01.131239   74203 cri.go:89] found id: ""
	I0731 18:12:01.131262   74203 logs.go:276] 0 containers: []
	W0731 18:12:01.131270   74203 logs.go:278] No container was found matching "coredns"
	I0731 18:12:01.131275   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 18:12:01.131321   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 18:12:01.163209   74203 cri.go:89] found id: ""
	I0731 18:12:01.163229   74203 logs.go:276] 0 containers: []
	W0731 18:12:01.163237   74203 logs.go:278] No container was found matching "kube-scheduler"
	I0731 18:12:01.163242   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 18:12:01.163295   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 18:12:01.201165   74203 cri.go:89] found id: ""
	I0731 18:12:01.201193   74203 logs.go:276] 0 containers: []
	W0731 18:12:01.201204   74203 logs.go:278] No container was found matching "kube-proxy"
	I0731 18:12:01.201217   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 18:12:01.201274   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 18:12:01.233310   74203 cri.go:89] found id: ""
	I0731 18:12:01.233334   74203 logs.go:276] 0 containers: []
	W0731 18:12:01.233342   74203 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 18:12:01.233348   74203 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 18:12:01.233415   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 18:12:01.263412   74203 cri.go:89] found id: ""
	I0731 18:12:01.263442   74203 logs.go:276] 0 containers: []
	W0731 18:12:01.263452   74203 logs.go:278] No container was found matching "kindnet"
	I0731 18:12:01.263459   74203 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 18:12:01.263521   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 18:12:01.296598   74203 cri.go:89] found id: ""
	I0731 18:12:01.296624   74203 logs.go:276] 0 containers: []
	W0731 18:12:01.296632   74203 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 18:12:01.296642   74203 logs.go:123] Gathering logs for describe nodes ...
	I0731 18:12:01.296656   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 18:12:01.372362   74203 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 18:12:01.372381   74203 logs.go:123] Gathering logs for CRI-O ...
	I0731 18:12:01.372395   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 18:12:01.461997   74203 logs.go:123] Gathering logs for container status ...
	I0731 18:12:01.462029   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 18:12:01.507610   74203 logs.go:123] Gathering logs for kubelet ...
	I0731 18:12:01.507636   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 18:12:01.558335   74203 logs.go:123] Gathering logs for dmesg ...
	I0731 18:12:01.558375   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 18:12:04.073333   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:12:04.091122   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 18:12:04.091205   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 18:12:04.130510   74203 cri.go:89] found id: ""
	I0731 18:12:04.130545   74203 logs.go:276] 0 containers: []
	W0731 18:12:04.130558   74203 logs.go:278] No container was found matching "kube-apiserver"
	I0731 18:12:04.130566   74203 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 18:12:04.130632   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 18:12:04.174749   74203 cri.go:89] found id: ""
	I0731 18:12:04.174775   74203 logs.go:276] 0 containers: []
	W0731 18:12:04.174785   74203 logs.go:278] No container was found matching "etcd"
	I0731 18:12:04.174792   74203 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 18:12:04.174846   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 18:12:04.212123   74203 cri.go:89] found id: ""
	I0731 18:12:04.212160   74203 logs.go:276] 0 containers: []
	W0731 18:12:04.212172   74203 logs.go:278] No container was found matching "coredns"
	I0731 18:12:04.212180   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 18:12:04.212254   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 18:12:04.251558   74203 cri.go:89] found id: ""
	I0731 18:12:04.251589   74203 logs.go:276] 0 containers: []
	W0731 18:12:04.251600   74203 logs.go:278] No container was found matching "kube-scheduler"
	I0731 18:12:04.251608   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 18:12:04.251671   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 18:12:04.284831   74203 cri.go:89] found id: ""
	I0731 18:12:04.284864   74203 logs.go:276] 0 containers: []
	W0731 18:12:04.284878   74203 logs.go:278] No container was found matching "kube-proxy"
	I0731 18:12:04.284886   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 18:12:04.284954   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 18:12:04.325076   74203 cri.go:89] found id: ""
	I0731 18:12:04.325115   74203 logs.go:276] 0 containers: []
	W0731 18:12:04.325126   74203 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 18:12:04.325135   74203 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 18:12:04.325195   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 18:12:04.370883   74203 cri.go:89] found id: ""
	I0731 18:12:04.370922   74203 logs.go:276] 0 containers: []
	W0731 18:12:04.370933   74203 logs.go:278] No container was found matching "kindnet"
	I0731 18:12:04.370940   74203 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 18:12:04.370999   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 18:12:04.410639   74203 cri.go:89] found id: ""
	I0731 18:12:04.410671   74203 logs.go:276] 0 containers: []
	W0731 18:12:04.410685   74203 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 18:12:04.410697   74203 logs.go:123] Gathering logs for kubelet ...
	I0731 18:12:04.410713   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 18:12:04.462988   74203 logs.go:123] Gathering logs for dmesg ...
	I0731 18:12:04.463023   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 18:12:04.479086   74203 logs.go:123] Gathering logs for describe nodes ...
	I0731 18:12:04.479123   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 18:12:04.544675   74203 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 18:12:04.544699   74203 logs.go:123] Gathering logs for CRI-O ...
	I0731 18:12:04.544712   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 18:12:04.633231   74203 logs.go:123] Gathering logs for container status ...
	I0731 18:12:04.633267   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 18:12:03.645118   73479 pod_ready.go:102] pod "metrics-server-78fcd8795b-27pkr" in "kube-system" namespace has status "Ready":"False"
	I0731 18:12:06.143865   73479 pod_ready.go:102] pod "metrics-server-78fcd8795b-27pkr" in "kube-system" namespace has status "Ready":"False"
	I0731 18:12:07.174252   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:12:07.187289   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 18:12:07.187393   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 18:12:07.220927   74203 cri.go:89] found id: ""
	I0731 18:12:07.220953   74203 logs.go:276] 0 containers: []
	W0731 18:12:07.220964   74203 logs.go:278] No container was found matching "kube-apiserver"
	I0731 18:12:07.220972   74203 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 18:12:07.221040   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 18:12:07.256817   74203 cri.go:89] found id: ""
	I0731 18:12:07.256849   74203 logs.go:276] 0 containers: []
	W0731 18:12:07.256861   74203 logs.go:278] No container was found matching "etcd"
	I0731 18:12:07.256870   74203 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 18:12:07.256935   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 18:12:07.290267   74203 cri.go:89] found id: ""
	I0731 18:12:07.290297   74203 logs.go:276] 0 containers: []
	W0731 18:12:07.290309   74203 logs.go:278] No container was found matching "coredns"
	I0731 18:12:07.290315   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 18:12:07.290373   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 18:12:07.330037   74203 cri.go:89] found id: ""
	I0731 18:12:07.330068   74203 logs.go:276] 0 containers: []
	W0731 18:12:07.330079   74203 logs.go:278] No container was found matching "kube-scheduler"
	I0731 18:12:07.330087   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 18:12:07.330143   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 18:12:07.366745   74203 cri.go:89] found id: ""
	I0731 18:12:07.366770   74203 logs.go:276] 0 containers: []
	W0731 18:12:07.366778   74203 logs.go:278] No container was found matching "kube-proxy"
	I0731 18:12:07.366783   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 18:12:07.366861   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 18:12:07.400608   74203 cri.go:89] found id: ""
	I0731 18:12:07.400637   74203 logs.go:276] 0 containers: []
	W0731 18:12:07.400648   74203 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 18:12:07.400661   74203 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 18:12:07.400727   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 18:12:07.434996   74203 cri.go:89] found id: ""
	I0731 18:12:07.435028   74203 logs.go:276] 0 containers: []
	W0731 18:12:07.435037   74203 logs.go:278] No container was found matching "kindnet"
	I0731 18:12:07.435044   74203 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 18:12:07.435130   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 18:12:07.474347   74203 cri.go:89] found id: ""
	I0731 18:12:07.474375   74203 logs.go:276] 0 containers: []
	W0731 18:12:07.474387   74203 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 18:12:07.474400   74203 logs.go:123] Gathering logs for CRI-O ...
	I0731 18:12:07.474415   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 18:12:07.549009   74203 logs.go:123] Gathering logs for container status ...
	I0731 18:12:07.549045   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 18:12:07.586710   74203 logs.go:123] Gathering logs for kubelet ...
	I0731 18:12:07.586736   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 18:12:07.640770   74203 logs.go:123] Gathering logs for dmesg ...
	I0731 18:12:07.640800   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 18:12:07.654380   74203 logs.go:123] Gathering logs for describe nodes ...
	I0731 18:12:07.654405   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 18:12:07.721479   74203 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 18:12:10.221837   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:12:10.235686   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 18:12:10.235746   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 18:12:10.268769   74203 cri.go:89] found id: ""
	I0731 18:12:10.268794   74203 logs.go:276] 0 containers: []
	W0731 18:12:10.268802   74203 logs.go:278] No container was found matching "kube-apiserver"
	I0731 18:12:10.268808   74203 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 18:12:10.268860   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 18:12:10.305229   74203 cri.go:89] found id: ""
	I0731 18:12:10.305264   74203 logs.go:276] 0 containers: []
	W0731 18:12:10.305277   74203 logs.go:278] No container was found matching "etcd"
	I0731 18:12:10.305290   74203 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 18:12:10.305353   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 18:12:10.337070   74203 cri.go:89] found id: ""
	I0731 18:12:10.337095   74203 logs.go:276] 0 containers: []
	W0731 18:12:10.337104   74203 logs.go:278] No container was found matching "coredns"
	I0731 18:12:10.337109   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 18:12:10.337155   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 18:12:08.643708   73479 pod_ready.go:102] pod "metrics-server-78fcd8795b-27pkr" in "kube-system" namespace has status "Ready":"False"
	I0731 18:12:10.645483   73479 pod_ready.go:102] pod "metrics-server-78fcd8795b-27pkr" in "kube-system" namespace has status "Ready":"False"
	I0731 18:12:10.372979   74203 cri.go:89] found id: ""
	I0731 18:12:10.373005   74203 logs.go:276] 0 containers: []
	W0731 18:12:10.373015   74203 logs.go:278] No container was found matching "kube-scheduler"
	I0731 18:12:10.373022   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 18:12:10.373079   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 18:12:10.407225   74203 cri.go:89] found id: ""
	I0731 18:12:10.407252   74203 logs.go:276] 0 containers: []
	W0731 18:12:10.407264   74203 logs.go:278] No container was found matching "kube-proxy"
	I0731 18:12:10.407270   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 18:12:10.407327   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 18:12:10.443338   74203 cri.go:89] found id: ""
	I0731 18:12:10.443366   74203 logs.go:276] 0 containers: []
	W0731 18:12:10.443377   74203 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 18:12:10.443385   74203 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 18:12:10.443474   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 18:12:10.477005   74203 cri.go:89] found id: ""
	I0731 18:12:10.477030   74203 logs.go:276] 0 containers: []
	W0731 18:12:10.477038   74203 logs.go:278] No container was found matching "kindnet"
	I0731 18:12:10.477043   74203 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 18:12:10.477092   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 18:12:10.509338   74203 cri.go:89] found id: ""
	I0731 18:12:10.509367   74203 logs.go:276] 0 containers: []
	W0731 18:12:10.509378   74203 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 18:12:10.509389   74203 logs.go:123] Gathering logs for kubelet ...
	I0731 18:12:10.509405   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 18:12:10.559604   74203 logs.go:123] Gathering logs for dmesg ...
	I0731 18:12:10.559639   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 18:12:10.572652   74203 logs.go:123] Gathering logs for describe nodes ...
	I0731 18:12:10.572682   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 18:12:10.642749   74203 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 18:12:10.642772   74203 logs.go:123] Gathering logs for CRI-O ...
	I0731 18:12:10.642789   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 18:12:10.728716   74203 logs.go:123] Gathering logs for container status ...
	I0731 18:12:10.728753   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 18:12:13.267783   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:12:13.282235   74203 kubeadm.go:597] duration metric: took 4m4.41837453s to restartPrimaryControlPlane
	W0731 18:12:13.282324   74203 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0731 18:12:13.282355   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0731 18:12:15.410363   73696 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (31.136815784s)
	I0731 18:12:15.410431   73696 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0731 18:12:15.426599   73696 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0731 18:12:15.435823   73696 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0731 18:12:15.444553   73696 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0731 18:12:15.444581   73696 kubeadm.go:157] found existing configuration files:
	
	I0731 18:12:15.444624   73696 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0731 18:12:15.453198   73696 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0731 18:12:15.453273   73696 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0731 18:12:15.461988   73696 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0731 18:12:15.470178   73696 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0731 18:12:15.470238   73696 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0731 18:12:15.478903   73696 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0731 18:12:15.487176   73696 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0731 18:12:15.487215   73696 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0731 18:12:15.496114   73696 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0731 18:12:15.504518   73696 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0731 18:12:15.504579   73696 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0731 18:12:15.513915   73696 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0731 18:12:15.563318   73696 kubeadm.go:310] [init] Using Kubernetes version: v1.30.3
	I0731 18:12:15.563381   73696 kubeadm.go:310] [preflight] Running pre-flight checks
	I0731 18:12:15.697426   73696 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0731 18:12:15.697574   73696 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0731 18:12:15.697688   73696 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0731 18:12:15.902621   73696 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0731 18:12:15.904763   73696 out.go:204]   - Generating certificates and keys ...
	I0731 18:12:15.904869   73696 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0731 18:12:15.904948   73696 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0731 18:12:15.905049   73696 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0731 18:12:15.905149   73696 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0731 18:12:15.905247   73696 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0731 18:12:15.905328   73696 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0731 18:12:15.905426   73696 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0731 18:12:15.905516   73696 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0731 18:12:15.905620   73696 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0731 18:12:15.905729   73696 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0731 18:12:15.905812   73696 kubeadm.go:310] [certs] Using the existing "sa" key
	I0731 18:12:15.905890   73696 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0731 18:12:16.011366   73696 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0731 18:12:16.171776   73696 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0731 18:12:16.404302   73696 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0731 18:12:16.559451   73696 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0731 18:12:16.686612   73696 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0731 18:12:16.687311   73696 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0731 18:12:16.689956   73696 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0731 18:12:13.142855   73479 pod_ready.go:102] pod "metrics-server-78fcd8795b-27pkr" in "kube-system" namespace has status "Ready":"False"
	I0731 18:12:15.144107   73479 pod_ready.go:102] pod "metrics-server-78fcd8795b-27pkr" in "kube-system" namespace has status "Ready":"False"
	I0731 18:12:16.959318   74203 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (3.676937263s)
	I0731 18:12:16.959425   74203 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0731 18:12:16.973440   74203 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0731 18:12:16.983482   74203 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0731 18:12:16.993930   74203 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0731 18:12:16.993951   74203 kubeadm.go:157] found existing configuration files:
	
	I0731 18:12:16.993993   74203 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0731 18:12:17.002713   74203 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0731 18:12:17.002771   74203 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0731 18:12:17.012107   74203 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0731 18:12:17.022548   74203 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0731 18:12:17.022604   74203 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0731 18:12:17.033569   74203 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0731 18:12:17.043338   74203 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0731 18:12:17.043391   74203 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0731 18:12:17.052064   74203 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0731 18:12:17.060785   74203 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0731 18:12:17.060850   74203 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0731 18:12:17.069499   74203 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0731 18:12:17.136512   74203 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0731 18:12:17.136579   74203 kubeadm.go:310] [preflight] Running pre-flight checks
	I0731 18:12:17.286224   74203 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0731 18:12:17.286383   74203 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0731 18:12:17.286506   74203 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0731 18:12:17.467092   74203 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0731 18:12:17.468918   74203 out.go:204]   - Generating certificates and keys ...
	I0731 18:12:17.469024   74203 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0731 18:12:17.469135   74203 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0731 18:12:17.469229   74203 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0731 18:12:17.469307   74203 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0731 18:12:17.469439   74203 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0731 18:12:17.469525   74203 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0731 18:12:17.469609   74203 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0731 18:12:17.470025   74203 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0731 18:12:17.470501   74203 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0731 18:12:17.470852   74203 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0731 18:12:17.470899   74203 kubeadm.go:310] [certs] Using the existing "sa" key
	I0731 18:12:17.470949   74203 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0731 18:12:17.673308   74203 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0731 18:12:17.922789   74203 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0731 18:12:18.391239   74203 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0731 18:12:18.464854   74203 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0731 18:12:18.480495   74203 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0731 18:12:18.480675   74203 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0731 18:12:18.480746   74203 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0731 18:12:18.632564   74203 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0731 18:12:18.635416   74203 out.go:204]   - Booting up control plane ...
	I0731 18:12:18.635542   74203 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0731 18:12:18.643338   74203 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0731 18:12:18.645881   74203 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0731 18:12:18.646898   74203 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0731 18:12:18.650052   74203 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0731 18:12:16.691876   73696 out.go:204]   - Booting up control plane ...
	I0731 18:12:16.691967   73696 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0731 18:12:16.692064   73696 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0731 18:12:16.692643   73696 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0731 18:12:16.713038   73696 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0731 18:12:16.713123   73696 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0731 18:12:16.713159   73696 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0731 18:12:16.855506   73696 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0731 18:12:16.855638   73696 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0731 18:12:17.856697   73696 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.001297342s
	I0731 18:12:17.856823   73696 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0731 18:12:17.144295   73479 pod_ready.go:102] pod "metrics-server-78fcd8795b-27pkr" in "kube-system" namespace has status "Ready":"False"
	I0731 18:12:19.644100   73479 pod_ready.go:102] pod "metrics-server-78fcd8795b-27pkr" in "kube-system" namespace has status "Ready":"False"
	I0731 18:12:21.644654   73479 pod_ready.go:102] pod "metrics-server-78fcd8795b-27pkr" in "kube-system" namespace has status "Ready":"False"
	I0731 18:12:22.358287   73696 kubeadm.go:310] [api-check] The API server is healthy after 4.501118217s
	I0731 18:12:22.370066   73696 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0731 18:12:22.382929   73696 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0731 18:12:22.402765   73696 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0731 18:12:22.403044   73696 kubeadm.go:310] [mark-control-plane] Marking the node default-k8s-diff-port-094310 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0731 18:12:22.419724   73696 kubeadm.go:310] [bootstrap-token] Using token: hduea8.ix2m91ewiu6okgi9
	I0731 18:12:22.421231   73696 out.go:204]   - Configuring RBAC rules ...
	I0731 18:12:22.421382   73696 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0731 18:12:22.426230   73696 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0731 18:12:22.434423   73696 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0731 18:12:22.437839   73696 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0731 18:12:22.449264   73696 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0731 18:12:22.452420   73696 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0731 18:12:22.764876   73696 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0731 18:12:23.216229   73696 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0731 18:12:23.765173   73696 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0731 18:12:23.766223   73696 kubeadm.go:310] 
	I0731 18:12:23.766311   73696 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0731 18:12:23.766356   73696 kubeadm.go:310] 
	I0731 18:12:23.766466   73696 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0731 18:12:23.766487   73696 kubeadm.go:310] 
	I0731 18:12:23.766521   73696 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0731 18:12:23.766641   73696 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0731 18:12:23.766726   73696 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0731 18:12:23.766741   73696 kubeadm.go:310] 
	I0731 18:12:23.766827   73696 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0731 18:12:23.766844   73696 kubeadm.go:310] 
	I0731 18:12:23.766899   73696 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0731 18:12:23.766910   73696 kubeadm.go:310] 
	I0731 18:12:23.766986   73696 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0731 18:12:23.767089   73696 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0731 18:12:23.767225   73696 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0731 18:12:23.767237   73696 kubeadm.go:310] 
	I0731 18:12:23.767310   73696 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0731 18:12:23.767401   73696 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0731 18:12:23.767411   73696 kubeadm.go:310] 
	I0731 18:12:23.767531   73696 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8444 --token hduea8.ix2m91ewiu6okgi9 \
	I0731 18:12:23.767662   73696 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:460b24b51d10a3760f2391542a8a89e401359854f437f922cbee3f2d8ed6c856 \
	I0731 18:12:23.767695   73696 kubeadm.go:310] 	--control-plane 
	I0731 18:12:23.767702   73696 kubeadm.go:310] 
	I0731 18:12:23.767773   73696 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0731 18:12:23.767782   73696 kubeadm.go:310] 
	I0731 18:12:23.767847   73696 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8444 --token hduea8.ix2m91ewiu6okgi9 \
	I0731 18:12:23.767930   73696 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:460b24b51d10a3760f2391542a8a89e401359854f437f922cbee3f2d8ed6c856 
	I0731 18:12:23.768912   73696 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
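Note on the join commands printed above: the --discovery-token-ca-cert-hash value is a SHA-256 digest of the DER-encoded Subject Public Key Info of the cluster CA certificate. As an illustrative sketch only (not part of this run), the same value can be recomputed in Go from the CA file under the certificateDir minikube uses on the node, /var/lib/minikube/certs:

	package main

	import (
		"crypto/sha256"
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"os"
	)

	func main() {
		// Read the cluster CA and print the value expected by --discovery-token-ca-cert-hash:
		// sha256 over the DER-encoded Subject Public Key Info of the CA certificate.
		pemBytes, err := os.ReadFile("/var/lib/minikube/certs/ca.crt")
		if err != nil {
			panic(err)
		}
		block, _ := pem.Decode(pemBytes)
		if block == nil {
			panic("no PEM block found in ca.crt")
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			panic(err)
		}
		spki, err := x509.MarshalPKIXPublicKey(cert.PublicKey)
		if err != nil {
			panic(err)
		}
		fmt.Printf("sha256:%x\n", sha256.Sum256(spki))
	}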
	I0731 18:12:23.769058   73696 cni.go:84] Creating CNI manager for ""
	I0731 18:12:23.769073   73696 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0731 18:12:23.771596   73696 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0731 18:12:23.773122   73696 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0731 18:12:23.782944   73696 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
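The 496-byte conflist copied to /etc/cni/net.d/1-k8s.conflist is not reproduced in the log. For orientation only, a generic bridge-plus-portmap conflist of the kind the CNI bridge plugin accepts looks roughly like the sketch below; the field values and subnet are illustrative assumptions, not minikube's exact file:

	package main

	import "os"

	// Writes an illustrative bridge CNI conflist to the path used above.
	const conflist = `{
	  "cniVersion": "0.3.1",
	  "name": "bridge",
	  "plugins": [
	    {
	      "type": "bridge",
	      "bridge": "bridge",
	      "isDefaultGateway": true,
	      "ipMasq": true,
	      "hairpinMode": true,
	      "ipam": {
	        "type": "host-local",
	        "subnet": "10.244.0.0/16"
	      }
	    },
	    {
	      "type": "portmap",
	      "capabilities": {"portMappings": true}
	    }
	  ]
	}`

	func main() {
		if err := os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(conflist), 0o644); err != nil {
			panic(err)
		}
	}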
	I0731 18:12:23.800254   73696 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0731 18:12:23.800383   73696 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 18:12:23.800398   73696 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes default-k8s-diff-port-094310 minikube.k8s.io/updated_at=2024_07_31T18_12_23_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=1d737dad7efa60c56d30434fcd857dd3b14c91d9 minikube.k8s.io/name=default-k8s-diff-port-094310 minikube.k8s.io/primary=true
	I0731 18:12:23.827190   73696 ops.go:34] apiserver oom_adj: -16
	I0731 18:12:23.990425   73696 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 18:12:24.490585   73696 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 18:12:24.991490   73696 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 18:12:25.490948   73696 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 18:12:25.991461   73696 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 18:12:23.645259   73479 pod_ready.go:102] pod "metrics-server-78fcd8795b-27pkr" in "kube-system" namespace has status "Ready":"False"
	I0731 18:12:26.144352   73479 pod_ready.go:102] pod "metrics-server-78fcd8795b-27pkr" in "kube-system" namespace has status "Ready":"False"
	I0731 18:12:26.491041   73696 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 18:12:26.990516   73696 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 18:12:27.491386   73696 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 18:12:27.991150   73696 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 18:12:28.490838   73696 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 18:12:28.991267   73696 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 18:12:29.490459   73696 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 18:12:29.990672   73696 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 18:12:30.491302   73696 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 18:12:30.990644   73696 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 18:12:28.644749   73479 pod_ready.go:102] pod "metrics-server-78fcd8795b-27pkr" in "kube-system" namespace has status "Ready":"False"
	I0731 18:12:31.143617   73479 pod_ready.go:102] pod "metrics-server-78fcd8795b-27pkr" in "kube-system" namespace has status "Ready":"False"
	I0731 18:12:32.532203   73800 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (31.265875459s)
	I0731 18:12:32.532286   73800 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0731 18:12:32.548139   73800 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0731 18:12:32.558049   73800 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0731 18:12:32.567036   73800 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0731 18:12:32.567060   73800 kubeadm.go:157] found existing configuration files:
	
	I0731 18:12:32.567133   73800 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0731 18:12:32.576069   73800 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0731 18:12:32.576124   73800 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0731 18:12:32.584762   73800 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0731 18:12:32.592927   73800 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0731 18:12:32.592980   73800 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0731 18:12:32.601309   73800 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0731 18:12:32.609478   73800 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0731 18:12:32.609525   73800 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0731 18:12:32.617980   73800 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0731 18:12:32.625943   73800 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0731 18:12:32.625978   73800 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0731 18:12:32.634091   73800 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0731 18:12:32.821569   73800 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0731 18:12:31.491226   73696 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 18:12:31.991099   73696 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 18:12:32.490751   73696 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 18:12:32.991252   73696 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 18:12:33.490564   73696 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 18:12:33.990977   73696 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 18:12:34.491037   73696 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 18:12:34.990696   73696 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 18:12:35.491381   73696 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 18:12:35.990793   73696 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 18:12:36.490926   73696 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 18:12:36.581312   73696 kubeadm.go:1113] duration metric: took 12.780981821s to wait for elevateKubeSystemPrivileges
	I0731 18:12:36.581370   73696 kubeadm.go:394] duration metric: took 5m8.741923744s to StartCluster
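The elevateKubeSystemPrivileges wait above is the loop of "kubectl get sa default" runs at roughly 500ms intervals: it simply waits until the default service account exists before applying the cluster-admin binding. A minimal in-process equivalent using client-go could look like this sketch; the kubeconfig path, interval, and timeout are assumptions, and minikube itself shells out to kubectl over SSH as the log shows:

	package main

	import (
		"context"
		"fmt"
		"time"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/apimachinery/pkg/util/wait"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		// Hypothetical kubeconfig path for the sketch.
		config, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(config)
		if err != nil {
			panic(err)
		}
		// Poll every 500ms, give up after 2 minutes.
		err = wait.PollUntilContextTimeout(context.Background(), 500*time.Millisecond, 2*time.Minute, true,
			func(ctx context.Context) (bool, error) {
				_, err := cs.CoreV1().ServiceAccounts("default").Get(ctx, "default", metav1.GetOptions{})
				if err != nil {
					return false, nil // not created yet; keep polling
				}
				return true, nil
			})
		if err != nil {
			panic(err)
		}
		fmt.Println("default service account exists")
	}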
	I0731 18:12:36.581393   73696 settings.go:142] acquiring lock: {Name:mk8ac8269296bf0b887a00ff74caaad180005f89 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 18:12:36.581485   73696 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19349-8084/kubeconfig
	I0731 18:12:36.583690   73696 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19349-8084/kubeconfig: {Name:mkf2340bc1ada3c683f132aca37228d7588e1fdb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 18:12:36.583986   73696 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.72.197 Port:8444 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0731 18:12:36.585079   73696 config.go:182] Loaded profile config "default-k8s-diff-port-094310": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0731 18:12:36.585328   73696 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0731 18:12:36.585677   73696 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-094310"
	I0731 18:12:36.585686   73696 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-094310"
	I0731 18:12:36.585688   73696 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-094310"
	I0731 18:12:36.585705   73696 addons.go:234] Setting addon storage-provisioner=true in "default-k8s-diff-port-094310"
	W0731 18:12:36.585717   73696 addons.go:243] addon storage-provisioner should already be in state true
	I0731 18:12:36.585720   73696 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-094310"
	I0731 18:12:36.585732   73696 addons.go:234] Setting addon metrics-server=true in "default-k8s-diff-port-094310"
	W0731 18:12:36.585740   73696 addons.go:243] addon metrics-server should already be in state true
	I0731 18:12:36.585752   73696 host.go:66] Checking if "default-k8s-diff-port-094310" exists ...
	I0731 18:12:36.585766   73696 host.go:66] Checking if "default-k8s-diff-port-094310" exists ...
	I0731 18:12:36.586152   73696 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 18:12:36.586166   73696 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 18:12:36.586166   73696 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 18:12:36.586174   73696 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 18:12:36.586180   73696 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 18:12:36.586188   73696 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 18:12:36.586456   73696 out.go:177] * Verifying Kubernetes components...
	I0731 18:12:36.588174   73696 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0731 18:12:36.605611   73696 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38873
	I0731 18:12:36.605856   73696 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44661
	I0731 18:12:36.606122   73696 main.go:141] libmachine: () Calling .GetVersion
	I0731 18:12:36.606710   73696 main.go:141] libmachine: Using API Version  1
	I0731 18:12:36.606731   73696 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 18:12:36.606809   73696 main.go:141] libmachine: () Calling .GetVersion
	I0731 18:12:36.607072   73696 main.go:141] libmachine: () Calling .GetMachineName
	I0731 18:12:36.607240   73696 main.go:141] libmachine: Using API Version  1
	I0731 18:12:36.607262   73696 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 18:12:36.607789   73696 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 18:12:36.607817   73696 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 18:12:36.608000   73696 main.go:141] libmachine: () Calling .GetMachineName
	I0731 18:12:36.608173   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) Calling .GetState
	I0731 18:12:36.609009   73696 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39155
	I0731 18:12:36.609469   73696 main.go:141] libmachine: () Calling .GetVersion
	I0731 18:12:36.609954   73696 main.go:141] libmachine: Using API Version  1
	I0731 18:12:36.609973   73696 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 18:12:36.610333   73696 main.go:141] libmachine: () Calling .GetMachineName
	I0731 18:12:36.610936   73696 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 18:12:36.610998   73696 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 18:12:36.612199   73696 addons.go:234] Setting addon default-storageclass=true in "default-k8s-diff-port-094310"
	W0731 18:12:36.612224   73696 addons.go:243] addon default-storageclass should already be in state true
	I0731 18:12:36.612254   73696 host.go:66] Checking if "default-k8s-diff-port-094310" exists ...
	I0731 18:12:36.612624   73696 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 18:12:36.612659   73696 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 18:12:36.626474   73696 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38009
	I0731 18:12:36.626981   73696 main.go:141] libmachine: () Calling .GetVersion
	I0731 18:12:36.627514   73696 main.go:141] libmachine: Using API Version  1
	I0731 18:12:36.627534   73696 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 18:12:36.627836   73696 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43349
	I0731 18:12:36.628007   73696 main.go:141] libmachine: () Calling .GetMachineName
	I0731 18:12:36.628336   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) Calling .GetState
	I0731 18:12:36.628415   73696 main.go:141] libmachine: () Calling .GetVersion
	I0731 18:12:36.628816   73696 main.go:141] libmachine: Using API Version  1
	I0731 18:12:36.628831   73696 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 18:12:36.629237   73696 main.go:141] libmachine: () Calling .GetMachineName
	I0731 18:12:36.629450   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) Calling .GetState
	I0731 18:12:36.630518   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) Calling .DriverName
	I0731 18:12:36.631198   73696 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43471
	I0731 18:12:36.631550   73696 main.go:141] libmachine: () Calling .GetVersion
	I0731 18:12:36.632064   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) Calling .DriverName
	I0731 18:12:36.632200   73696 main.go:141] libmachine: Using API Version  1
	I0731 18:12:36.632217   73696 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 18:12:36.632576   73696 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0731 18:12:36.632739   73696 main.go:141] libmachine: () Calling .GetMachineName
	I0731 18:12:36.633275   73696 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 18:12:36.633313   73696 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 18:12:36.633711   73696 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0731 18:12:33.642776   73479 pod_ready.go:102] pod "metrics-server-78fcd8795b-27pkr" in "kube-system" namespace has status "Ready":"False"
	I0731 18:12:35.643640   73479 pod_ready.go:102] pod "metrics-server-78fcd8795b-27pkr" in "kube-system" namespace has status "Ready":"False"
	I0731 18:12:36.633805   73696 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0731 18:12:36.633820   73696 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0731 18:12:36.633840   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) Calling .GetSSHHostname
	I0731 18:12:36.634990   73696 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0731 18:12:36.635005   73696 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0731 18:12:36.635022   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) Calling .GetSSHHostname
	I0731 18:12:36.637135   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) DBG | domain default-k8s-diff-port-094310 has defined MAC address 52:54:00:a9:b2:ae in network mk-default-k8s-diff-port-094310
	I0731 18:12:36.637767   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a9:b2:ae", ip: ""} in network mk-default-k8s-diff-port-094310: {Iface:virbr1 ExpiryTime:2024-07-31 19:07:14 +0000 UTC Type:0 Mac:52:54:00:a9:b2:ae Iaid: IPaddr:192.168.72.197 Prefix:24 Hostname:default-k8s-diff-port-094310 Clientid:01:52:54:00:a9:b2:ae}
	I0731 18:12:36.637792   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) DBG | domain default-k8s-diff-port-094310 has defined IP address 192.168.72.197 and MAC address 52:54:00:a9:b2:ae in network mk-default-k8s-diff-port-094310
	I0731 18:12:36.637891   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) Calling .GetSSHPort
	I0731 18:12:36.639047   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) DBG | domain default-k8s-diff-port-094310 has defined MAC address 52:54:00:a9:b2:ae in network mk-default-k8s-diff-port-094310
	I0731 18:12:36.639583   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a9:b2:ae", ip: ""} in network mk-default-k8s-diff-port-094310: {Iface:virbr1 ExpiryTime:2024-07-31 19:07:14 +0000 UTC Type:0 Mac:52:54:00:a9:b2:ae Iaid: IPaddr:192.168.72.197 Prefix:24 Hostname:default-k8s-diff-port-094310 Clientid:01:52:54:00:a9:b2:ae}
	I0731 18:12:36.639617   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) DBG | domain default-k8s-diff-port-094310 has defined IP address 192.168.72.197 and MAC address 52:54:00:a9:b2:ae in network mk-default-k8s-diff-port-094310
	I0731 18:12:36.639885   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) Calling .GetSSHPort
	I0731 18:12:36.640106   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) Calling .GetSSHKeyPath
	I0731 18:12:36.640235   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) Calling .GetSSHUsername
	I0731 18:12:36.640419   73696 sshutil.go:53] new ssh client: &{IP:192.168.72.197 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19349-8084/.minikube/machines/default-k8s-diff-port-094310/id_rsa Username:docker}
	I0731 18:12:36.641860   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) Calling .GetSSHKeyPath
	I0731 18:12:36.642037   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) Calling .GetSSHUsername
	I0731 18:12:36.642205   73696 sshutil.go:53] new ssh client: &{IP:192.168.72.197 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19349-8084/.minikube/machines/default-k8s-diff-port-094310/id_rsa Username:docker}
	I0731 18:12:36.659960   73696 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36389
	I0731 18:12:36.660280   73696 main.go:141] libmachine: () Calling .GetVersion
	I0731 18:12:36.660692   73696 main.go:141] libmachine: Using API Version  1
	I0731 18:12:36.660713   73696 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 18:12:36.660986   73696 main.go:141] libmachine: () Calling .GetMachineName
	I0731 18:12:36.661150   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) Calling .GetState
	I0731 18:12:36.663024   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) Calling .DriverName
	I0731 18:12:36.663232   73696 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0731 18:12:36.663245   73696 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0731 18:12:36.663264   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) Calling .GetSSHHostname
	I0731 18:12:36.666016   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) DBG | domain default-k8s-diff-port-094310 has defined MAC address 52:54:00:a9:b2:ae in network mk-default-k8s-diff-port-094310
	I0731 18:12:36.666393   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a9:b2:ae", ip: ""} in network mk-default-k8s-diff-port-094310: {Iface:virbr1 ExpiryTime:2024-07-31 19:07:14 +0000 UTC Type:0 Mac:52:54:00:a9:b2:ae Iaid: IPaddr:192.168.72.197 Prefix:24 Hostname:default-k8s-diff-port-094310 Clientid:01:52:54:00:a9:b2:ae}
	I0731 18:12:36.666472   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) DBG | domain default-k8s-diff-port-094310 has defined IP address 192.168.72.197 and MAC address 52:54:00:a9:b2:ae in network mk-default-k8s-diff-port-094310
	I0731 18:12:36.666562   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) Calling .GetSSHPort
	I0731 18:12:36.666730   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) Calling .GetSSHKeyPath
	I0731 18:12:36.666832   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) Calling .GetSSHUsername
	I0731 18:12:36.666935   73696 sshutil.go:53] new ssh client: &{IP:192.168.72.197 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19349-8084/.minikube/machines/default-k8s-diff-port-094310/id_rsa Username:docker}
	I0731 18:12:36.813977   73696 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0731 18:12:36.832201   73696 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-094310" to be "Ready" ...
	I0731 18:12:36.849864   73696 node_ready.go:49] node "default-k8s-diff-port-094310" has status "Ready":"True"
	I0731 18:12:36.849891   73696 node_ready.go:38] duration metric: took 17.657098ms for node "default-k8s-diff-port-094310" to be "Ready" ...
	I0731 18:12:36.849903   73696 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0731 18:12:36.860981   73696 pod_ready.go:78] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-094310" in "kube-system" namespace to be "Ready" ...
	I0731 18:12:36.865178   73696 pod_ready.go:92] pod "etcd-default-k8s-diff-port-094310" in "kube-system" namespace has status "Ready":"True"
	I0731 18:12:36.865198   73696 pod_ready.go:81] duration metric: took 4.190559ms for pod "etcd-default-k8s-diff-port-094310" in "kube-system" namespace to be "Ready" ...
	I0731 18:12:36.865209   73696 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-094310" in "kube-system" namespace to be "Ready" ...
	I0731 18:12:36.869977   73696 pod_ready.go:92] pod "kube-apiserver-default-k8s-diff-port-094310" in "kube-system" namespace has status "Ready":"True"
	I0731 18:12:36.869998   73696 pod_ready.go:81] duration metric: took 4.780295ms for pod "kube-apiserver-default-k8s-diff-port-094310" in "kube-system" namespace to be "Ready" ...
	I0731 18:12:36.870008   73696 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-094310" in "kube-system" namespace to be "Ready" ...
	I0731 18:12:36.874051   73696 pod_ready.go:92] pod "kube-controller-manager-default-k8s-diff-port-094310" in "kube-system" namespace has status "Ready":"True"
	I0731 18:12:36.874069   73696 pod_ready.go:81] duration metric: took 4.053362ms for pod "kube-controller-manager-default-k8s-diff-port-094310" in "kube-system" namespace to be "Ready" ...
	I0731 18:12:36.874079   73696 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-094310" in "kube-system" namespace to be "Ready" ...
	I0731 18:12:36.878519   73696 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-094310" in "kube-system" namespace has status "Ready":"True"
	I0731 18:12:36.878536   73696 pod_ready.go:81] duration metric: took 4.448692ms for pod "kube-scheduler-default-k8s-diff-port-094310" in "kube-system" namespace to be "Ready" ...
	I0731 18:12:36.878544   73696 pod_ready.go:38] duration metric: took 28.628924ms for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
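The pod_ready checks throughout this log wait for each pod's Ready condition to become True; the repeated pod_ready.go:102 lines from process 73479 show a metrics-server pod that never reaches that state. A small client-go helper expressing the same condition check, as an illustrative sketch rather than minikube's own code:

	import (
		"context"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
	)

	// podReady reports whether the named pod's Ready condition is True.
	func podReady(ctx context.Context, cs kubernetes.Interface, namespace, name string) (bool, error) {
		pod, err := cs.CoreV1().Pods(namespace).Get(ctx, name, metav1.GetOptions{})
		if err != nil {
			return false, err
		}
		for _, cond := range pod.Status.Conditions {
			if cond.Type == corev1.PodReady {
				return cond.Status == corev1.ConditionTrue, nil
			}
		}
		return false, nil
	}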
	I0731 18:12:36.878564   73696 api_server.go:52] waiting for apiserver process to appear ...
	I0731 18:12:36.878622   73696 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:12:36.892011   73696 api_server.go:72] duration metric: took 307.983877ms to wait for apiserver process to appear ...
	I0731 18:12:36.892031   73696 api_server.go:88] waiting for apiserver healthz status ...
	I0731 18:12:36.892049   73696 api_server.go:253] Checking apiserver healthz at https://192.168.72.197:8444/healthz ...
	I0731 18:12:36.895929   73696 api_server.go:279] https://192.168.72.197:8444/healthz returned 200:
	ok
	I0731 18:12:36.896760   73696 api_server.go:141] control plane version: v1.30.3
	I0731 18:12:36.896780   73696 api_server.go:131] duration metric: took 4.741896ms to wait for apiserver health ...
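The apiserver health check above is a plain HTTPS GET against /healthz on https://192.168.72.197:8444, which returns status 200 with the body "ok". A standalone probe in Go might look like the sketch below; certificate verification is skipped only to keep the sketch short, whereas a real check would trust the cluster CA instead:

	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	func main() {
		// Skip TLS verification for brevity in this sketch only.
		client := &http.Client{
			Timeout:   5 * time.Second,
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		resp, err := client.Get("https://192.168.72.197:8444/healthz")
		if err != nil {
			panic(err)
		}
		defer resp.Body.Close()
		body, _ := io.ReadAll(resp.Body)
		fmt.Println(resp.StatusCode, string(body)) // expect: 200 ok
	}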
	I0731 18:12:36.896789   73696 system_pods.go:43] waiting for kube-system pods to appear ...
	I0731 18:12:36.974073   73696 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0731 18:12:36.974092   73696 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0731 18:12:37.010218   73696 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0731 18:12:37.018536   73696 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0731 18:12:37.039734   73696 system_pods.go:59] 5 kube-system pods found
	I0731 18:12:37.039767   73696 system_pods.go:61] "etcd-default-k8s-diff-port-094310" [f73c4fb4-b160-4e79-a0a4-8fba255cfb8c] Running
	I0731 18:12:37.039773   73696 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-094310" [820215b3-0eaa-46ca-bf0b-4fa73942160a] Running
	I0731 18:12:37.039778   73696 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-094310" [5ca0ed32-5439-49b5-9a85-1a4b1628371d] Running
	I0731 18:12:37.039787   73696 system_pods.go:61] "kube-proxy-4vvjq" [a8dd9db6-5f45-4c8d-a9d3-03ede1db07eb] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0731 18:12:37.039792   73696 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-094310" [11590718-4e74-46eb-be90-6405c3f1759f] Running
	I0731 18:12:37.039802   73696 system_pods.go:74] duration metric: took 143.007992ms to wait for pod list to return data ...
	I0731 18:12:37.039812   73696 default_sa.go:34] waiting for default service account to be created ...
	I0731 18:12:37.041650   73696 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0731 18:12:37.041672   73696 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0731 18:12:37.096891   73696 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0731 18:12:37.096920   73696 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0731 18:12:37.159438   73696 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0731 18:12:37.235560   73696 default_sa.go:45] found service account: "default"
	I0731 18:12:37.235599   73696 default_sa.go:55] duration metric: took 195.778976ms for default service account to be created ...
	I0731 18:12:37.235612   73696 system_pods.go:116] waiting for k8s-apps to be running ...
	I0731 18:12:37.439935   73696 system_pods.go:86] 7 kube-system pods found
	I0731 18:12:37.439966   73696 system_pods.go:89] "coredns-7db6d8ff4d-2r7zb" [e7acc926-db53-4c1c-a62f-45e303c69fc7] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0731 18:12:37.439975   73696 system_pods.go:89] "coredns-7db6d8ff4d-756jj" [c91e86b1-8348-4dc7-9aa1-6909055c2dde] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0731 18:12:37.439982   73696 system_pods.go:89] "etcd-default-k8s-diff-port-094310" [f73c4fb4-b160-4e79-a0a4-8fba255cfb8c] Running
	I0731 18:12:37.439988   73696 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-094310" [820215b3-0eaa-46ca-bf0b-4fa73942160a] Running
	I0731 18:12:37.439993   73696 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-094310" [5ca0ed32-5439-49b5-9a85-1a4b1628371d] Running
	I0731 18:12:37.439998   73696 system_pods.go:89] "kube-proxy-4vvjq" [a8dd9db6-5f45-4c8d-a9d3-03ede1db07eb] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0731 18:12:37.440003   73696 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-094310" [11590718-4e74-46eb-be90-6405c3f1759f] Running
	I0731 18:12:37.440020   73696 retry.go:31] will retry after 230.300903ms: missing components: kube-dns, kube-proxy
	I0731 18:12:37.676385   73696 system_pods.go:86] 7 kube-system pods found
	I0731 18:12:37.676411   73696 system_pods.go:89] "coredns-7db6d8ff4d-2r7zb" [e7acc926-db53-4c1c-a62f-45e303c69fc7] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0731 18:12:37.676421   73696 system_pods.go:89] "coredns-7db6d8ff4d-756jj" [c91e86b1-8348-4dc7-9aa1-6909055c2dde] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0731 18:12:37.676429   73696 system_pods.go:89] "etcd-default-k8s-diff-port-094310" [f73c4fb4-b160-4e79-a0a4-8fba255cfb8c] Running
	I0731 18:12:37.676436   73696 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-094310" [820215b3-0eaa-46ca-bf0b-4fa73942160a] Running
	I0731 18:12:37.676442   73696 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-094310" [5ca0ed32-5439-49b5-9a85-1a4b1628371d] Running
	I0731 18:12:37.676451   73696 system_pods.go:89] "kube-proxy-4vvjq" [a8dd9db6-5f45-4c8d-a9d3-03ede1db07eb] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0731 18:12:37.676456   73696 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-094310" [11590718-4e74-46eb-be90-6405c3f1759f] Running
	I0731 18:12:37.676475   73696 retry.go:31] will retry after 311.28179ms: missing components: kube-dns, kube-proxy
	I0731 18:12:37.813837   73696 main.go:141] libmachine: Making call to close driver server
	I0731 18:12:37.813870   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) Calling .Close
	I0731 18:12:37.814017   73696 main.go:141] libmachine: Making call to close driver server
	I0731 18:12:37.814039   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) Calling .Close
	I0731 18:12:37.814265   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) DBG | Closing plugin on server side
	I0731 18:12:37.814316   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) DBG | Closing plugin on server side
	I0731 18:12:37.814363   73696 main.go:141] libmachine: Successfully made call to close driver server
	I0731 18:12:37.814376   73696 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 18:12:37.814391   73696 main.go:141] libmachine: Making call to close driver server
	I0731 18:12:37.814402   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) Calling .Close
	I0731 18:12:37.814531   73696 main.go:141] libmachine: Successfully made call to close driver server
	I0731 18:12:37.814556   73696 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 18:12:37.814598   73696 main.go:141] libmachine: Making call to close driver server
	I0731 18:12:37.814608   73696 main.go:141] libmachine: Successfully made call to close driver server
	I0731 18:12:37.814631   73696 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 18:12:37.814622   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) Calling .Close
	I0731 18:12:37.816102   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) DBG | Closing plugin on server side
	I0731 18:12:37.816268   73696 main.go:141] libmachine: Successfully made call to close driver server
	I0731 18:12:37.816280   73696 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 18:12:37.830991   73696 main.go:141] libmachine: Making call to close driver server
	I0731 18:12:37.831018   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) Calling .Close
	I0731 18:12:37.831354   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) DBG | Closing plugin on server side
	I0731 18:12:37.831354   73696 main.go:141] libmachine: Successfully made call to close driver server
	I0731 18:12:37.831380   73696 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 18:12:37.995206   73696 system_pods.go:86] 8 kube-system pods found
	I0731 18:12:37.995248   73696 system_pods.go:89] "coredns-7db6d8ff4d-2r7zb" [e7acc926-db53-4c1c-a62f-45e303c69fc7] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0731 18:12:37.995262   73696 system_pods.go:89] "coredns-7db6d8ff4d-756jj" [c91e86b1-8348-4dc7-9aa1-6909055c2dde] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0731 18:12:37.995272   73696 system_pods.go:89] "etcd-default-k8s-diff-port-094310" [f73c4fb4-b160-4e79-a0a4-8fba255cfb8c] Running
	I0731 18:12:37.995295   73696 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-094310" [820215b3-0eaa-46ca-bf0b-4fa73942160a] Running
	I0731 18:12:37.995310   73696 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-094310" [5ca0ed32-5439-49b5-9a85-1a4b1628371d] Running
	I0731 18:12:37.995322   73696 system_pods.go:89] "kube-proxy-4vvjq" [a8dd9db6-5f45-4c8d-a9d3-03ede1db07eb] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0731 18:12:37.995332   73696 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-094310" [11590718-4e74-46eb-be90-6405c3f1759f] Running
	I0731 18:12:37.995345   73696 system_pods.go:89] "storage-provisioner" [2e4c5a96-4dfc-4af6-8d3e-bab865644328] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0731 18:12:37.995370   73696 retry.go:31] will retry after 381.430275ms: missing components: kube-dns, kube-proxy
	I0731 18:12:38.392678   73696 system_pods.go:86] 9 kube-system pods found
	I0731 18:12:38.392719   73696 system_pods.go:89] "coredns-7db6d8ff4d-2r7zb" [e7acc926-db53-4c1c-a62f-45e303c69fc7] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0731 18:12:38.392732   73696 system_pods.go:89] "coredns-7db6d8ff4d-756jj" [c91e86b1-8348-4dc7-9aa1-6909055c2dde] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0731 18:12:38.392742   73696 system_pods.go:89] "etcd-default-k8s-diff-port-094310" [f73c4fb4-b160-4e79-a0a4-8fba255cfb8c] Running
	I0731 18:12:38.392751   73696 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-094310" [820215b3-0eaa-46ca-bf0b-4fa73942160a] Running
	I0731 18:12:38.392760   73696 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-094310" [5ca0ed32-5439-49b5-9a85-1a4b1628371d] Running
	I0731 18:12:38.392770   73696 system_pods.go:89] "kube-proxy-4vvjq" [a8dd9db6-5f45-4c8d-a9d3-03ede1db07eb] Running
	I0731 18:12:38.392778   73696 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-094310" [11590718-4e74-46eb-be90-6405c3f1759f] Running
	I0731 18:12:38.392787   73696 system_pods.go:89] "metrics-server-569cc877fc-mskwc" [c57990b4-1d91-4764-9c33-2fd5f7d2f83b] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0731 18:12:38.392802   73696 system_pods.go:89] "storage-provisioner" [2e4c5a96-4dfc-4af6-8d3e-bab865644328] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0731 18:12:38.392823   73696 retry.go:31] will retry after 567.905994ms: missing components: kube-dns
	I0731 18:12:38.501117   73696 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.341621275s)
	I0731 18:12:38.501181   73696 main.go:141] libmachine: Making call to close driver server
	I0731 18:12:38.501200   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) Calling .Close
	I0731 18:12:38.501595   73696 main.go:141] libmachine: Successfully made call to close driver server
	I0731 18:12:38.501615   73696 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 18:12:38.501625   73696 main.go:141] libmachine: Making call to close driver server
	I0731 18:12:38.501634   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) Calling .Close
	I0731 18:12:38.501593   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) DBG | Closing plugin on server side
	I0731 18:12:38.501907   73696 main.go:141] libmachine: Successfully made call to close driver server
	I0731 18:12:38.501953   73696 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 18:12:38.501966   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) DBG | Closing plugin on server side
	I0731 18:12:38.501975   73696 addons.go:475] Verifying addon metrics-server=true in "default-k8s-diff-port-094310"
	I0731 18:12:38.505204   73696 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0731 18:12:38.506517   73696 addons.go:510] duration metric: took 1.921658263s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I0731 18:12:38.967657   73696 system_pods.go:86] 9 kube-system pods found
	I0731 18:12:38.967691   73696 system_pods.go:89] "coredns-7db6d8ff4d-2r7zb" [e7acc926-db53-4c1c-a62f-45e303c69fc7] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0731 18:12:38.967700   73696 system_pods.go:89] "coredns-7db6d8ff4d-756jj" [c91e86b1-8348-4dc7-9aa1-6909055c2dde] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0731 18:12:38.967708   73696 system_pods.go:89] "etcd-default-k8s-diff-port-094310" [f73c4fb4-b160-4e79-a0a4-8fba255cfb8c] Running
	I0731 18:12:38.967716   73696 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-094310" [820215b3-0eaa-46ca-bf0b-4fa73942160a] Running
	I0731 18:12:38.967723   73696 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-094310" [5ca0ed32-5439-49b5-9a85-1a4b1628371d] Running
	I0731 18:12:38.967729   73696 system_pods.go:89] "kube-proxy-4vvjq" [a8dd9db6-5f45-4c8d-a9d3-03ede1db07eb] Running
	I0731 18:12:38.967736   73696 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-094310" [11590718-4e74-46eb-be90-6405c3f1759f] Running
	I0731 18:12:38.967746   73696 system_pods.go:89] "metrics-server-569cc877fc-mskwc" [c57990b4-1d91-4764-9c33-2fd5f7d2f83b] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0731 18:12:38.967759   73696 system_pods.go:89] "storage-provisioner" [2e4c5a96-4dfc-4af6-8d3e-bab865644328] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0731 18:12:38.967779   73696 retry.go:31] will retry after 488.293971ms: missing components: kube-dns
	I0731 18:12:39.464918   73696 system_pods.go:86] 9 kube-system pods found
	I0731 18:12:39.464956   73696 system_pods.go:89] "coredns-7db6d8ff4d-2r7zb" [e7acc926-db53-4c1c-a62f-45e303c69fc7] Running
	I0731 18:12:39.464965   73696 system_pods.go:89] "coredns-7db6d8ff4d-756jj" [c91e86b1-8348-4dc7-9aa1-6909055c2dde] Running
	I0731 18:12:39.464972   73696 system_pods.go:89] "etcd-default-k8s-diff-port-094310" [f73c4fb4-b160-4e79-a0a4-8fba255cfb8c] Running
	I0731 18:12:39.464978   73696 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-094310" [820215b3-0eaa-46ca-bf0b-4fa73942160a] Running
	I0731 18:12:39.464986   73696 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-094310" [5ca0ed32-5439-49b5-9a85-1a4b1628371d] Running
	I0731 18:12:39.464992   73696 system_pods.go:89] "kube-proxy-4vvjq" [a8dd9db6-5f45-4c8d-a9d3-03ede1db07eb] Running
	I0731 18:12:39.464999   73696 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-094310" [11590718-4e74-46eb-be90-6405c3f1759f] Running
	I0731 18:12:39.465017   73696 system_pods.go:89] "metrics-server-569cc877fc-mskwc" [c57990b4-1d91-4764-9c33-2fd5f7d2f83b] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0731 18:12:39.465028   73696 system_pods.go:89] "storage-provisioner" [2e4c5a96-4dfc-4af6-8d3e-bab865644328] Running
	I0731 18:12:39.465041   73696 system_pods.go:126] duration metric: took 2.229422302s to wait for k8s-apps to be running ...
	I0731 18:12:39.465053   73696 system_svc.go:44] waiting for kubelet service to be running ....
	I0731 18:12:39.465111   73696 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0731 18:12:39.482063   73696 system_svc.go:56] duration metric: took 16.998965ms WaitForService to wait for kubelet
	I0731 18:12:39.482092   73696 kubeadm.go:582] duration metric: took 2.898066741s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0731 18:12:39.482138   73696 node_conditions.go:102] verifying NodePressure condition ...
	I0731 18:12:39.486728   73696 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0731 18:12:39.486752   73696 node_conditions.go:123] node cpu capacity is 2
	I0731 18:12:39.486764   73696 node_conditions.go:105] duration metric: took 4.617934ms to run NodePressure ...
	I0731 18:12:39.486777   73696 start.go:241] waiting for startup goroutines ...
	I0731 18:12:39.486787   73696 start.go:246] waiting for cluster config update ...
	I0731 18:12:39.486798   73696 start.go:255] writing updated cluster config ...
	I0731 18:12:39.487565   73696 ssh_runner.go:195] Run: rm -f paused
	I0731 18:12:39.539591   73696 start.go:600] kubectl: 1.30.3, cluster: 1.30.3 (minor skew: 0)
	I0731 18:12:39.541533   73696 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-094310" cluster and "default" namespace by default
	I0731 18:12:37.644379   73479 pod_ready.go:102] pod "metrics-server-78fcd8795b-27pkr" in "kube-system" namespace has status "Ready":"False"
	I0731 18:12:39.645608   73479 pod_ready.go:102] pod "metrics-server-78fcd8795b-27pkr" in "kube-system" namespace has status "Ready":"False"
	I0731 18:12:41.969949   73800 kubeadm.go:310] [init] Using Kubernetes version: v1.30.3
	I0731 18:12:41.970018   73800 kubeadm.go:310] [preflight] Running pre-flight checks
	I0731 18:12:41.970137   73800 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0731 18:12:41.970234   73800 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0731 18:12:41.970386   73800 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0731 18:12:41.970495   73800 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0731 18:12:41.972177   73800 out.go:204]   - Generating certificates and keys ...
	I0731 18:12:41.972244   73800 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0731 18:12:41.972314   73800 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0731 18:12:41.972403   73800 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0731 18:12:41.972480   73800 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0731 18:12:41.972538   73800 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0731 18:12:41.972588   73800 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0731 18:12:41.972654   73800 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0731 18:12:41.972748   73800 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0731 18:12:41.972859   73800 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0731 18:12:41.972982   73800 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0731 18:12:41.973027   73800 kubeadm.go:310] [certs] Using the existing "sa" key
	I0731 18:12:41.973082   73800 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0731 18:12:41.973152   73800 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0731 18:12:41.973205   73800 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0731 18:12:41.973252   73800 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0731 18:12:41.973323   73800 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0731 18:12:41.973387   73800 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0731 18:12:41.973456   73800 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0731 18:12:41.973553   73800 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0731 18:12:41.974927   73800 out.go:204]   - Booting up control plane ...
	I0731 18:12:41.975019   73800 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0731 18:12:41.975128   73800 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0731 18:12:41.975215   73800 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0731 18:12:41.975342   73800 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0731 18:12:41.975425   73800 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0731 18:12:41.975474   73800 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0731 18:12:41.975635   73800 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0731 18:12:41.975710   73800 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0731 18:12:41.975766   73800 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.001397088s
	I0731 18:12:41.975824   73800 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0731 18:12:41.975909   73800 kubeadm.go:310] [api-check] The API server is healthy after 5.001258426s
	I0731 18:12:41.976064   73800 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0731 18:12:41.976241   73800 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0731 18:12:41.976355   73800 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0731 18:12:41.976528   73800 kubeadm.go:310] [mark-control-plane] Marking the node embed-certs-436067 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0731 18:12:41.976605   73800 kubeadm.go:310] [bootstrap-token] Using token: m9csv8.j58cj919sgzkgy1k
	I0731 18:12:41.978880   73800 out.go:204]   - Configuring RBAC rules ...
	I0731 18:12:41.978976   73800 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0731 18:12:41.979087   73800 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0731 18:12:41.979277   73800 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0731 18:12:41.979441   73800 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0731 18:12:41.979622   73800 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0731 18:12:41.979708   73800 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0731 18:12:41.979835   73800 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0731 18:12:41.979875   73800 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0731 18:12:41.979918   73800 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0731 18:12:41.979924   73800 kubeadm.go:310] 
	I0731 18:12:41.979971   73800 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0731 18:12:41.979979   73800 kubeadm.go:310] 
	I0731 18:12:41.980058   73800 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0731 18:12:41.980067   73800 kubeadm.go:310] 
	I0731 18:12:41.980098   73800 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0731 18:12:41.980160   73800 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0731 18:12:41.980229   73800 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0731 18:12:41.980236   73800 kubeadm.go:310] 
	I0731 18:12:41.980300   73800 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0731 18:12:41.980311   73800 kubeadm.go:310] 
	I0731 18:12:41.980384   73800 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0731 18:12:41.980393   73800 kubeadm.go:310] 
	I0731 18:12:41.980446   73800 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0731 18:12:41.980548   73800 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0731 18:12:41.980644   73800 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0731 18:12:41.980653   73800 kubeadm.go:310] 
	I0731 18:12:41.980759   73800 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0731 18:12:41.980824   73800 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0731 18:12:41.980830   73800 kubeadm.go:310] 
	I0731 18:12:41.980896   73800 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token m9csv8.j58cj919sgzkgy1k \
	I0731 18:12:41.980984   73800 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:460b24b51d10a3760f2391542a8a89e401359854f437f922cbee3f2d8ed6c856 \
	I0731 18:12:41.981011   73800 kubeadm.go:310] 	--control-plane 
	I0731 18:12:41.981023   73800 kubeadm.go:310] 
	I0731 18:12:41.981093   73800 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0731 18:12:41.981099   73800 kubeadm.go:310] 
	I0731 18:12:41.981183   73800 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token m9csv8.j58cj919sgzkgy1k \
	I0731 18:12:41.981306   73800 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:460b24b51d10a3760f2391542a8a89e401359854f437f922cbee3f2d8ed6c856 
	I0731 18:12:41.981317   73800 cni.go:84] Creating CNI manager for ""
	I0731 18:12:41.981324   73800 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0731 18:12:41.982701   73800 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0731 18:12:41.983929   73800 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0731 18:12:41.995272   73800 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0731 18:12:42.014929   73800 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0731 18:12:42.014984   73800 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 18:12:42.015033   73800 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes embed-certs-436067 minikube.k8s.io/updated_at=2024_07_31T18_12_42_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=1d737dad7efa60c56d30434fcd857dd3b14c91d9 minikube.k8s.io/name=embed-certs-436067 minikube.k8s.io/primary=true
	I0731 18:12:42.164811   73800 ops.go:34] apiserver oom_adj: -16
	I0731 18:12:42.164934   73800 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 18:12:42.665108   73800 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 18:12:43.165818   73800 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 18:12:43.665733   73800 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 18:12:44.165074   73800 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 18:12:42.144896   73479 pod_ready.go:102] pod "metrics-server-78fcd8795b-27pkr" in "kube-system" namespace has status "Ready":"False"
	I0731 18:12:44.644077   73479 pod_ready.go:102] pod "metrics-server-78fcd8795b-27pkr" in "kube-system" namespace has status "Ready":"False"
	I0731 18:12:44.665477   73800 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 18:12:45.165127   73800 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 18:12:45.665440   73800 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 18:12:46.165555   73800 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 18:12:46.665998   73800 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 18:12:47.165829   73800 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 18:12:47.665704   73800 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 18:12:48.164973   73800 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 18:12:48.665549   73800 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 18:12:49.165210   73800 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 18:12:47.142947   73479 pod_ready.go:102] pod "metrics-server-78fcd8795b-27pkr" in "kube-system" namespace has status "Ready":"False"
	I0731 18:12:49.144015   73479 pod_ready.go:102] pod "metrics-server-78fcd8795b-27pkr" in "kube-system" namespace has status "Ready":"False"
	I0731 18:12:51.644495   73479 pod_ready.go:102] pod "metrics-server-78fcd8795b-27pkr" in "kube-system" namespace has status "Ready":"False"
	I0731 18:12:49.665500   73800 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 18:12:50.165567   73800 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 18:12:50.665547   73800 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 18:12:51.166002   73800 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 18:12:51.664981   73800 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 18:12:52.165135   73800 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 18:12:52.665927   73800 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 18:12:53.165045   73800 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 18:12:53.664981   73800 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 18:12:54.165715   73800 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 18:12:54.252373   73800 kubeadm.go:1113] duration metric: took 12.237438799s to wait for elevateKubeSystemPrivileges
	I0731 18:12:54.252415   73800 kubeadm.go:394] duration metric: took 5m6.689979758s to StartCluster
	I0731 18:12:54.252435   73800 settings.go:142] acquiring lock: {Name:mk8ac8269296bf0b887a00ff74caaad180005f89 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 18:12:54.252509   73800 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19349-8084/kubeconfig
	I0731 18:12:54.254175   73800 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19349-8084/kubeconfig: {Name:mkf2340bc1ada3c683f132aca37228d7588e1fdb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 18:12:54.254495   73800 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.50.86 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0731 18:12:54.254600   73800 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0731 18:12:54.254687   73800 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-436067"
	I0731 18:12:54.254721   73800 addons.go:234] Setting addon storage-provisioner=true in "embed-certs-436067"
	I0731 18:12:54.254724   73800 addons.go:69] Setting default-storageclass=true in profile "embed-certs-436067"
	W0731 18:12:54.254734   73800 addons.go:243] addon storage-provisioner should already be in state true
	I0731 18:12:54.254737   73800 config.go:182] Loaded profile config "embed-certs-436067": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0731 18:12:54.254743   73800 addons.go:69] Setting metrics-server=true in profile "embed-certs-436067"
	I0731 18:12:54.254760   73800 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-436067"
	I0731 18:12:54.254769   73800 host.go:66] Checking if "embed-certs-436067" exists ...
	I0731 18:12:54.254785   73800 addons.go:234] Setting addon metrics-server=true in "embed-certs-436067"
	W0731 18:12:54.254795   73800 addons.go:243] addon metrics-server should already be in state true
	I0731 18:12:54.254826   73800 host.go:66] Checking if "embed-certs-436067" exists ...
	I0731 18:12:54.255205   73800 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 18:12:54.255208   73800 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 18:12:54.255233   73800 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 18:12:54.255238   73800 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 18:12:54.255302   73800 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 18:12:54.255323   73800 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 18:12:54.256412   73800 out.go:177] * Verifying Kubernetes components...
	I0731 18:12:54.257653   73800 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0731 18:12:54.274456   73800 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34433
	I0731 18:12:54.274959   73800 main.go:141] libmachine: () Calling .GetVersion
	I0731 18:12:54.275532   73800 main.go:141] libmachine: Using API Version  1
	I0731 18:12:54.275554   73800 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 18:12:54.275828   73800 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44423
	I0731 18:12:54.275851   73800 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41513
	I0731 18:12:54.276001   73800 main.go:141] libmachine: () Calling .GetMachineName
	I0731 18:12:54.276152   73800 main.go:141] libmachine: () Calling .GetVersion
	I0731 18:12:54.276225   73800 main.go:141] libmachine: () Calling .GetVersion
	I0731 18:12:54.276498   73800 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 18:12:54.276534   73800 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 18:12:54.276592   73800 main.go:141] libmachine: Using API Version  1
	I0731 18:12:54.276606   73800 main.go:141] libmachine: Using API Version  1
	I0731 18:12:54.276613   73800 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 18:12:54.276616   73800 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 18:12:54.276954   73800 main.go:141] libmachine: () Calling .GetMachineName
	I0731 18:12:54.277055   73800 main.go:141] libmachine: () Calling .GetMachineName
	I0731 18:12:54.277103   73800 main.go:141] libmachine: (embed-certs-436067) Calling .GetState
	I0731 18:12:54.277663   73800 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 18:12:54.277704   73800 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 18:12:54.280559   73800 addons.go:234] Setting addon default-storageclass=true in "embed-certs-436067"
	W0731 18:12:54.280583   73800 addons.go:243] addon default-storageclass should already be in state true
	I0731 18:12:54.280615   73800 host.go:66] Checking if "embed-certs-436067" exists ...
	I0731 18:12:54.280969   73800 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 18:12:54.281000   73800 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 18:12:54.293211   73800 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38393
	I0731 18:12:54.293657   73800 main.go:141] libmachine: () Calling .GetVersion
	I0731 18:12:54.294121   73800 main.go:141] libmachine: Using API Version  1
	I0731 18:12:54.294142   73800 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 18:12:54.294444   73800 main.go:141] libmachine: () Calling .GetMachineName
	I0731 18:12:54.294642   73800 main.go:141] libmachine: (embed-certs-436067) Calling .GetState
	I0731 18:12:54.294724   73800 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38771
	I0731 18:12:54.295077   73800 main.go:141] libmachine: () Calling .GetVersion
	I0731 18:12:54.295590   73800 main.go:141] libmachine: Using API Version  1
	I0731 18:12:54.295609   73800 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 18:12:54.296058   73800 main.go:141] libmachine: () Calling .GetMachineName
	I0731 18:12:54.296285   73800 main.go:141] libmachine: (embed-certs-436067) Calling .GetState
	I0731 18:12:54.296377   73800 main.go:141] libmachine: (embed-certs-436067) Calling .DriverName
	I0731 18:12:54.298013   73800 main.go:141] libmachine: (embed-certs-436067) Calling .DriverName
	I0731 18:12:54.298541   73800 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0731 18:12:54.299454   73800 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0731 18:12:54.299489   73800 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0731 18:12:54.299501   73800 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0731 18:12:54.299515   73800 main.go:141] libmachine: (embed-certs-436067) Calling .GetSSHHostname
	I0731 18:12:54.300664   73800 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0731 18:12:54.300682   73800 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0731 18:12:54.300699   73800 main.go:141] libmachine: (embed-certs-436067) Calling .GetSSHHostname
	I0731 18:12:54.301018   73800 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36647
	I0731 18:12:54.301671   73800 main.go:141] libmachine: () Calling .GetVersion
	I0731 18:12:54.302210   73800 main.go:141] libmachine: Using API Version  1
	I0731 18:12:54.302229   73800 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 18:12:54.302731   73800 main.go:141] libmachine: () Calling .GetMachineName
	I0731 18:12:54.302857   73800 main.go:141] libmachine: (embed-certs-436067) DBG | domain embed-certs-436067 has defined MAC address 52:54:00:87:1e:25 in network mk-embed-certs-436067
	I0731 18:12:54.303479   73800 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 18:12:54.303503   73800 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 18:12:54.303710   73800 main.go:141] libmachine: (embed-certs-436067) Calling .GetSSHPort
	I0731 18:12:54.303744   73800 main.go:141] libmachine: (embed-certs-436067) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:1e:25", ip: ""} in network mk-embed-certs-436067: {Iface:virbr4 ExpiryTime:2024-07-31 19:07:32 +0000 UTC Type:0 Mac:52:54:00:87:1e:25 Iaid: IPaddr:192.168.50.86 Prefix:24 Hostname:embed-certs-436067 Clientid:01:52:54:00:87:1e:25}
	I0731 18:12:54.303768   73800 main.go:141] libmachine: (embed-certs-436067) DBG | domain embed-certs-436067 has defined IP address 192.168.50.86 and MAC address 52:54:00:87:1e:25 in network mk-embed-certs-436067
	I0731 18:12:54.303893   73800 main.go:141] libmachine: (embed-certs-436067) Calling .GetSSHKeyPath
	I0731 18:12:54.304071   73800 main.go:141] libmachine: (embed-certs-436067) Calling .GetSSHUsername
	I0731 18:12:54.304232   73800 sshutil.go:53] new ssh client: &{IP:192.168.50.86 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19349-8084/.minikube/machines/embed-certs-436067/id_rsa Username:docker}
	I0731 18:12:54.304601   73800 main.go:141] libmachine: (embed-certs-436067) DBG | domain embed-certs-436067 has defined MAC address 52:54:00:87:1e:25 in network mk-embed-certs-436067
	I0731 18:12:54.305040   73800 main.go:141] libmachine: (embed-certs-436067) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:1e:25", ip: ""} in network mk-embed-certs-436067: {Iface:virbr4 ExpiryTime:2024-07-31 19:07:32 +0000 UTC Type:0 Mac:52:54:00:87:1e:25 Iaid: IPaddr:192.168.50.86 Prefix:24 Hostname:embed-certs-436067 Clientid:01:52:54:00:87:1e:25}
	I0731 18:12:54.305063   73800 main.go:141] libmachine: (embed-certs-436067) DBG | domain embed-certs-436067 has defined IP address 192.168.50.86 and MAC address 52:54:00:87:1e:25 in network mk-embed-certs-436067
	I0731 18:12:54.305311   73800 main.go:141] libmachine: (embed-certs-436067) Calling .GetSSHPort
	I0731 18:12:54.305480   73800 main.go:141] libmachine: (embed-certs-436067) Calling .GetSSHKeyPath
	I0731 18:12:54.305594   73800 main.go:141] libmachine: (embed-certs-436067) Calling .GetSSHUsername
	I0731 18:12:54.305712   73800 sshutil.go:53] new ssh client: &{IP:192.168.50.86 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19349-8084/.minikube/machines/embed-certs-436067/id_rsa Username:docker}
	I0731 18:12:54.318168   73800 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33711
	I0731 18:12:54.318558   73800 main.go:141] libmachine: () Calling .GetVersion
	I0731 18:12:54.319015   73800 main.go:141] libmachine: Using API Version  1
	I0731 18:12:54.319033   73800 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 18:12:54.319355   73800 main.go:141] libmachine: () Calling .GetMachineName
	I0731 18:12:54.319552   73800 main.go:141] libmachine: (embed-certs-436067) Calling .GetState
	I0731 18:12:54.321369   73800 main.go:141] libmachine: (embed-certs-436067) Calling .DriverName
	I0731 18:12:54.321540   73800 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0731 18:12:54.321553   73800 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0731 18:12:54.321565   73800 main.go:141] libmachine: (embed-certs-436067) Calling .GetSSHHostname
	I0731 18:12:54.324613   73800 main.go:141] libmachine: (embed-certs-436067) DBG | domain embed-certs-436067 has defined MAC address 52:54:00:87:1e:25 in network mk-embed-certs-436067
	I0731 18:12:54.324994   73800 main.go:141] libmachine: (embed-certs-436067) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:1e:25", ip: ""} in network mk-embed-certs-436067: {Iface:virbr4 ExpiryTime:2024-07-31 19:07:32 +0000 UTC Type:0 Mac:52:54:00:87:1e:25 Iaid: IPaddr:192.168.50.86 Prefix:24 Hostname:embed-certs-436067 Clientid:01:52:54:00:87:1e:25}
	I0731 18:12:54.325011   73800 main.go:141] libmachine: (embed-certs-436067) DBG | domain embed-certs-436067 has defined IP address 192.168.50.86 and MAC address 52:54:00:87:1e:25 in network mk-embed-certs-436067
	I0731 18:12:54.325310   73800 main.go:141] libmachine: (embed-certs-436067) Calling .GetSSHPort
	I0731 18:12:54.325437   73800 main.go:141] libmachine: (embed-certs-436067) Calling .GetSSHKeyPath
	I0731 18:12:54.325571   73800 main.go:141] libmachine: (embed-certs-436067) Calling .GetSSHUsername
	I0731 18:12:54.325683   73800 sshutil.go:53] new ssh client: &{IP:192.168.50.86 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19349-8084/.minikube/machines/embed-certs-436067/id_rsa Username:docker}
	I0731 18:12:54.435485   73800 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0731 18:12:54.462541   73800 node_ready.go:35] waiting up to 6m0s for node "embed-certs-436067" to be "Ready" ...
	I0731 18:12:54.473787   73800 node_ready.go:49] node "embed-certs-436067" has status "Ready":"True"
	I0731 18:12:54.473810   73800 node_ready.go:38] duration metric: took 11.237808ms for node "embed-certs-436067" to be "Ready" ...
	I0731 18:12:54.473819   73800 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0731 18:12:54.485589   73800 pod_ready.go:78] waiting up to 6m0s for pod "etcd-embed-certs-436067" in "kube-system" namespace to be "Ready" ...
	I0731 18:12:54.507887   73800 pod_ready.go:92] pod "etcd-embed-certs-436067" in "kube-system" namespace has status "Ready":"True"
	I0731 18:12:54.507910   73800 pod_ready.go:81] duration metric: took 22.296215ms for pod "etcd-embed-certs-436067" in "kube-system" namespace to be "Ready" ...
	I0731 18:12:54.507921   73800 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-embed-certs-436067" in "kube-system" namespace to be "Ready" ...
	I0731 18:12:54.524721   73800 pod_ready.go:92] pod "kube-apiserver-embed-certs-436067" in "kube-system" namespace has status "Ready":"True"
	I0731 18:12:54.524742   73800 pod_ready.go:81] duration metric: took 16.814491ms for pod "kube-apiserver-embed-certs-436067" in "kube-system" namespace to be "Ready" ...
	I0731 18:12:54.524751   73800 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-436067" in "kube-system" namespace to be "Ready" ...
	I0731 18:12:54.536810   73800 pod_ready.go:92] pod "kube-controller-manager-embed-certs-436067" in "kube-system" namespace has status "Ready":"True"
	I0731 18:12:54.536837   73800 pod_ready.go:81] duration metric: took 12.078703ms for pod "kube-controller-manager-embed-certs-436067" in "kube-system" namespace to be "Ready" ...
	I0731 18:12:54.536848   73800 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-85spm" in "kube-system" namespace to be "Ready" ...
	I0731 18:12:54.552538   73800 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0731 18:12:54.579223   73800 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0731 18:12:54.579244   73800 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0731 18:12:54.596087   73800 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0731 18:12:54.617180   73800 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0731 18:12:54.617209   73800 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0731 18:12:54.679879   73800 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0731 18:12:54.679908   73800 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0731 18:12:54.775272   73800 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0731 18:12:55.199299   73800 main.go:141] libmachine: Making call to close driver server
	I0731 18:12:55.199335   73800 main.go:141] libmachine: (embed-certs-436067) Calling .Close
	I0731 18:12:55.199342   73800 main.go:141] libmachine: Making call to close driver server
	I0731 18:12:55.199361   73800 main.go:141] libmachine: (embed-certs-436067) Calling .Close
	I0731 18:12:55.199618   73800 main.go:141] libmachine: Successfully made call to close driver server
	I0731 18:12:55.199666   73800 main.go:141] libmachine: Successfully made call to close driver server
	I0731 18:12:55.199678   73800 main.go:141] libmachine: (embed-certs-436067) DBG | Closing plugin on server side
	I0731 18:12:55.199634   73800 main.go:141] libmachine: (embed-certs-436067) DBG | Closing plugin on server side
	I0731 18:12:55.199685   73800 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 18:12:55.199710   73800 main.go:141] libmachine: Making call to close driver server
	I0731 18:12:55.199689   73800 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 18:12:55.199717   73800 main.go:141] libmachine: (embed-certs-436067) Calling .Close
	I0731 18:12:55.199726   73800 main.go:141] libmachine: Making call to close driver server
	I0731 18:12:55.199735   73800 main.go:141] libmachine: (embed-certs-436067) Calling .Close
	I0731 18:12:55.200002   73800 main.go:141] libmachine: Successfully made call to close driver server
	I0731 18:12:55.200016   73800 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 18:12:55.200079   73800 main.go:141] libmachine: (embed-certs-436067) DBG | Closing plugin on server side
	I0731 18:12:55.200107   73800 main.go:141] libmachine: Successfully made call to close driver server
	I0731 18:12:55.200120   73800 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 18:12:55.227472   73800 main.go:141] libmachine: Making call to close driver server
	I0731 18:12:55.227497   73800 main.go:141] libmachine: (embed-certs-436067) Calling .Close
	I0731 18:12:55.227792   73800 main.go:141] libmachine: Successfully made call to close driver server
	I0731 18:12:55.227811   73800 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 18:12:55.712134   73800 main.go:141] libmachine: Making call to close driver server
	I0731 18:12:55.712159   73800 main.go:141] libmachine: (embed-certs-436067) Calling .Close
	I0731 18:12:55.712516   73800 main.go:141] libmachine: Successfully made call to close driver server
	I0731 18:12:55.712568   73800 main.go:141] libmachine: (embed-certs-436067) DBG | Closing plugin on server side
	I0731 18:12:55.712574   73800 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 18:12:55.712596   73800 main.go:141] libmachine: Making call to close driver server
	I0731 18:12:55.712605   73800 main.go:141] libmachine: (embed-certs-436067) Calling .Close
	I0731 18:12:55.712851   73800 main.go:141] libmachine: Successfully made call to close driver server
	I0731 18:12:55.712868   73800 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 18:12:55.712867   73800 main.go:141] libmachine: (embed-certs-436067) DBG | Closing plugin on server side
	I0731 18:12:55.712877   73800 addons.go:475] Verifying addon metrics-server=true in "embed-certs-436067"
	I0731 18:12:55.714432   73800 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0731 18:12:54.143455   73479 pod_ready.go:102] pod "metrics-server-78fcd8795b-27pkr" in "kube-system" namespace has status "Ready":"False"
	I0731 18:12:56.144177   73479 pod_ready.go:102] pod "metrics-server-78fcd8795b-27pkr" in "kube-system" namespace has status "Ready":"False"
	I0731 18:12:55.715903   73800 addons.go:510] duration metric: took 1.461304856s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I0731 18:12:56.542100   73800 pod_ready.go:92] pod "kube-proxy-85spm" in "kube-system" namespace has status "Ready":"True"
	I0731 18:12:56.542122   73800 pod_ready.go:81] duration metric: took 2.005265959s for pod "kube-proxy-85spm" in "kube-system" namespace to be "Ready" ...
	I0731 18:12:56.542135   73800 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-embed-certs-436067" in "kube-system" namespace to be "Ready" ...
	I0731 18:12:56.553810   73800 pod_ready.go:92] pod "kube-scheduler-embed-certs-436067" in "kube-system" namespace has status "Ready":"True"
	I0731 18:12:56.553831   73800 pod_ready.go:81] duration metric: took 11.689814ms for pod "kube-scheduler-embed-certs-436067" in "kube-system" namespace to be "Ready" ...
	I0731 18:12:56.553840   73800 pod_ready.go:38] duration metric: took 2.080010607s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0731 18:12:56.553853   73800 api_server.go:52] waiting for apiserver process to appear ...
	I0731 18:12:56.553899   73800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:12:56.568301   73800 api_server.go:72] duration metric: took 2.313759916s to wait for apiserver process to appear ...
	I0731 18:12:56.568327   73800 api_server.go:88] waiting for apiserver healthz status ...
	I0731 18:12:56.568345   73800 api_server.go:253] Checking apiserver healthz at https://192.168.50.86:8443/healthz ...
	I0731 18:12:56.573861   73800 api_server.go:279] https://192.168.50.86:8443/healthz returned 200:
	ok
	I0731 18:12:56.575494   73800 api_server.go:141] control plane version: v1.30.3
	I0731 18:12:56.575513   73800 api_server.go:131] duration metric: took 7.1795ms to wait for apiserver health ...
	I0731 18:12:56.575520   73800 system_pods.go:43] waiting for kube-system pods to appear ...
	I0731 18:12:56.669169   73800 system_pods.go:59] 9 kube-system pods found
	I0731 18:12:56.669197   73800 system_pods.go:61] "coredns-7db6d8ff4d-fqkfd" [2e5a67a3-6f2b-43a5-8b94-cf48202c5958] Running
	I0731 18:12:56.669202   73800 system_pods.go:61] "coredns-7db6d8ff4d-qpb62" [e57f157b-01b5-42ec-9630-b9b5ae94fe5d] Running
	I0731 18:12:56.669206   73800 system_pods.go:61] "etcd-embed-certs-436067" [d0439817-b307-4673-8a64-34aa31266fbb] Running
	I0731 18:12:56.669210   73800 system_pods.go:61] "kube-apiserver-embed-certs-436067" [c2409e33-d42b-4132-8f63-197c4d445704] Running
	I0731 18:12:56.669214   73800 system_pods.go:61] "kube-controller-manager-embed-certs-436067" [dfea1f44-48a3-4a3c-8a27-be6f01f58add] Running
	I0731 18:12:56.669218   73800 system_pods.go:61] "kube-proxy-85spm" [eecb399a-0365-436d-8f14-a7853a5a2ea3] Running
	I0731 18:12:56.669221   73800 system_pods.go:61] "kube-scheduler-embed-certs-436067" [e406617d-adf7-4baa-9a8a-6fd92847a12f] Running
	I0731 18:12:56.669228   73800 system_pods.go:61] "metrics-server-569cc877fc-pgf6q" [b799fa04-0bf7-4914-9738-964b825577b5] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0731 18:12:56.669231   73800 system_pods.go:61] "storage-provisioner" [a7d95d32-1002-4fba-b0fd-555b872efa1f] Running
	I0731 18:12:56.669240   73800 system_pods.go:74] duration metric: took 93.714593ms to wait for pod list to return data ...
	I0731 18:12:56.669247   73800 default_sa.go:34] waiting for default service account to be created ...
	I0731 18:12:56.866494   73800 default_sa.go:45] found service account: "default"
	I0731 18:12:56.866521   73800 default_sa.go:55] duration metric: took 197.264891ms for default service account to be created ...
	I0731 18:12:56.866532   73800 system_pods.go:116] waiting for k8s-apps to be running ...
	I0731 18:12:57.068903   73800 system_pods.go:86] 9 kube-system pods found
	I0731 18:12:57.068930   73800 system_pods.go:89] "coredns-7db6d8ff4d-fqkfd" [2e5a67a3-6f2b-43a5-8b94-cf48202c5958] Running
	I0731 18:12:57.068936   73800 system_pods.go:89] "coredns-7db6d8ff4d-qpb62" [e57f157b-01b5-42ec-9630-b9b5ae94fe5d] Running
	I0731 18:12:57.068940   73800 system_pods.go:89] "etcd-embed-certs-436067" [d0439817-b307-4673-8a64-34aa31266fbb] Running
	I0731 18:12:57.068944   73800 system_pods.go:89] "kube-apiserver-embed-certs-436067" [c2409e33-d42b-4132-8f63-197c4d445704] Running
	I0731 18:12:57.068948   73800 system_pods.go:89] "kube-controller-manager-embed-certs-436067" [dfea1f44-48a3-4a3c-8a27-be6f01f58add] Running
	I0731 18:12:57.068951   73800 system_pods.go:89] "kube-proxy-85spm" [eecb399a-0365-436d-8f14-a7853a5a2ea3] Running
	I0731 18:12:57.068955   73800 system_pods.go:89] "kube-scheduler-embed-certs-436067" [e406617d-adf7-4baa-9a8a-6fd92847a12f] Running
	I0731 18:12:57.068961   73800 system_pods.go:89] "metrics-server-569cc877fc-pgf6q" [b799fa04-0bf7-4914-9738-964b825577b5] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0731 18:12:57.068965   73800 system_pods.go:89] "storage-provisioner" [a7d95d32-1002-4fba-b0fd-555b872efa1f] Running
	I0731 18:12:57.068972   73800 system_pods.go:126] duration metric: took 202.435205ms to wait for k8s-apps to be running ...
	I0731 18:12:57.068980   73800 system_svc.go:44] waiting for kubelet service to be running ....
	I0731 18:12:57.069018   73800 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0731 18:12:57.083728   73800 system_svc.go:56] duration metric: took 14.739831ms WaitForService to wait for kubelet
	I0731 18:12:57.083756   73800 kubeadm.go:582] duration metric: took 2.829227102s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0731 18:12:57.083782   73800 node_conditions.go:102] verifying NodePressure condition ...
	I0731 18:12:57.266463   73800 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0731 18:12:57.266486   73800 node_conditions.go:123] node cpu capacity is 2
	I0731 18:12:57.266495   73800 node_conditions.go:105] duration metric: took 182.707869ms to run NodePressure ...
	I0731 18:12:57.266505   73800 start.go:241] waiting for startup goroutines ...
	I0731 18:12:57.266512   73800 start.go:246] waiting for cluster config update ...
	I0731 18:12:57.266521   73800 start.go:255] writing updated cluster config ...
	I0731 18:12:57.266767   73800 ssh_runner.go:195] Run: rm -f paused
	I0731 18:12:57.313723   73800 start.go:600] kubectl: 1.30.3, cluster: 1.30.3 (minor skew: 0)
	I0731 18:12:57.315966   73800 out.go:177] * Done! kubectl is now configured to use "embed-certs-436067" cluster and "default" namespace by default
	I0731 18:12:58.652853   74203 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0731 18:12:58.653480   74203 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0731 18:12:58.653735   74203 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0731 18:12:58.643237   73479 pod_ready.go:102] pod "metrics-server-78fcd8795b-27pkr" in "kube-system" namespace has status "Ready":"False"
	I0731 18:13:01.143274   73479 pod_ready.go:102] pod "metrics-server-78fcd8795b-27pkr" in "kube-system" namespace has status "Ready":"False"
	I0731 18:13:01.643357   73479 pod_ready.go:81] duration metric: took 4m0.006506347s for pod "metrics-server-78fcd8795b-27pkr" in "kube-system" namespace to be "Ready" ...
	E0731 18:13:01.643382   73479 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0731 18:13:01.643388   73479 pod_ready.go:38] duration metric: took 4m7.418860701s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0731 18:13:01.643402   73479 api_server.go:52] waiting for apiserver process to appear ...
	I0731 18:13:01.643428   73479 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 18:13:01.643481   73479 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 18:13:01.692071   73479 cri.go:89] found id: "895465d024797e36349af3a0f7cd0689ec7e813d811a3205a301baae081986e0"
	I0731 18:13:01.692092   73479 cri.go:89] found id: ""
	I0731 18:13:01.692101   73479 logs.go:276] 1 containers: [895465d024797e36349af3a0f7cd0689ec7e813d811a3205a301baae081986e0]
	I0731 18:13:01.692159   73479 ssh_runner.go:195] Run: which crictl
	I0731 18:13:01.697266   73479 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 18:13:01.697356   73479 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 18:13:01.736299   73479 cri.go:89] found id: "65ef90d7b082adf059c9973e7f1dfde4770ed9718d440c8e0d525d427b16f645"
	I0731 18:13:01.736350   73479 cri.go:89] found id: ""
	I0731 18:13:01.736360   73479 logs.go:276] 1 containers: [65ef90d7b082adf059c9973e7f1dfde4770ed9718d440c8e0d525d427b16f645]
	I0731 18:13:01.736417   73479 ssh_runner.go:195] Run: which crictl
	I0731 18:13:01.740672   73479 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 18:13:01.740733   73479 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 18:13:01.774782   73479 cri.go:89] found id: "f043eb2392c22ee26a4729969283f278a4dc7ff4784addf7139958beb2cba855"
	I0731 18:13:01.774816   73479 cri.go:89] found id: ""
	I0731 18:13:01.774826   73479 logs.go:276] 1 containers: [f043eb2392c22ee26a4729969283f278a4dc7ff4784addf7139958beb2cba855]
	I0731 18:13:01.774893   73479 ssh_runner.go:195] Run: which crictl
	I0731 18:13:01.778542   73479 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 18:13:01.778618   73479 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 18:13:01.818749   73479 cri.go:89] found id: "ed1c40e21d8aa2b6321516625802154255f5eda1642411e377150b8df1c88bd2"
	I0731 18:13:01.818769   73479 cri.go:89] found id: ""
	I0731 18:13:01.818776   73479 logs.go:276] 1 containers: [ed1c40e21d8aa2b6321516625802154255f5eda1642411e377150b8df1c88bd2]
	I0731 18:13:01.818828   73479 ssh_runner.go:195] Run: which crictl
	I0731 18:13:01.827176   73479 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 18:13:01.827248   73479 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 18:13:01.860700   73479 cri.go:89] found id: "57bdb8e09be40a0925d7a86eb613ef8a0455415666562743e36e6800b8636847"
	I0731 18:13:01.860730   73479 cri.go:89] found id: ""
	I0731 18:13:01.860739   73479 logs.go:276] 1 containers: [57bdb8e09be40a0925d7a86eb613ef8a0455415666562743e36e6800b8636847]
	I0731 18:13:01.860825   73479 ssh_runner.go:195] Run: which crictl
	I0731 18:13:03.654494   74203 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0731 18:13:03.654747   74203 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0731 18:13:01.864629   73479 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 18:13:01.864702   73479 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 18:13:01.899293   73479 cri.go:89] found id: "ee75a53c57652705a335378f60ff96b2fce4543143edc35249ac027c7c6e4a2c"
	I0731 18:13:01.899338   73479 cri.go:89] found id: ""
	I0731 18:13:01.899347   73479 logs.go:276] 1 containers: [ee75a53c57652705a335378f60ff96b2fce4543143edc35249ac027c7c6e4a2c]
	I0731 18:13:01.899406   73479 ssh_runner.go:195] Run: which crictl
	I0731 18:13:01.903202   73479 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 18:13:01.903272   73479 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 18:13:01.934472   73479 cri.go:89] found id: ""
	I0731 18:13:01.934505   73479 logs.go:276] 0 containers: []
	W0731 18:13:01.934516   73479 logs.go:278] No container was found matching "kindnet"
	I0731 18:13:01.934523   73479 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0731 18:13:01.934588   73479 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0731 18:13:01.967244   73479 cri.go:89] found id: "6f311536202ba854461dd5abcc13c5b8d006399426b52e0c8d10bc41f06826df"
	I0731 18:13:01.967271   73479 cri.go:89] found id: "9ea2bc105f57ae4d05e4535f2851a35b9014c7574a63fef533fa0c487002941c"
	I0731 18:13:01.967276   73479 cri.go:89] found id: ""
	I0731 18:13:01.967285   73479 logs.go:276] 2 containers: [6f311536202ba854461dd5abcc13c5b8d006399426b52e0c8d10bc41f06826df 9ea2bc105f57ae4d05e4535f2851a35b9014c7574a63fef533fa0c487002941c]
	I0731 18:13:01.967349   73479 ssh_runner.go:195] Run: which crictl
	I0731 18:13:01.971167   73479 ssh_runner.go:195] Run: which crictl
	I0731 18:13:01.975648   73479 logs.go:123] Gathering logs for kubelet ...
	I0731 18:13:01.975670   73479 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 18:13:02.031430   73479 logs.go:123] Gathering logs for describe nodes ...
	I0731 18:13:02.031472   73479 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 18:13:02.158774   73479 logs.go:123] Gathering logs for kube-apiserver [895465d024797e36349af3a0f7cd0689ec7e813d811a3205a301baae081986e0] ...
	I0731 18:13:02.158803   73479 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 895465d024797e36349af3a0f7cd0689ec7e813d811a3205a301baae081986e0"
	I0731 18:13:02.199495   73479 logs.go:123] Gathering logs for kube-scheduler [ed1c40e21d8aa2b6321516625802154255f5eda1642411e377150b8df1c88bd2] ...
	I0731 18:13:02.199521   73479 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ed1c40e21d8aa2b6321516625802154255f5eda1642411e377150b8df1c88bd2"
	I0731 18:13:02.232285   73479 logs.go:123] Gathering logs for kube-proxy [57bdb8e09be40a0925d7a86eb613ef8a0455415666562743e36e6800b8636847] ...
	I0731 18:13:02.232327   73479 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 57bdb8e09be40a0925d7a86eb613ef8a0455415666562743e36e6800b8636847"
	I0731 18:13:02.272360   73479 logs.go:123] Gathering logs for storage-provisioner [9ea2bc105f57ae4d05e4535f2851a35b9014c7574a63fef533fa0c487002941c] ...
	I0731 18:13:02.272389   73479 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9ea2bc105f57ae4d05e4535f2851a35b9014c7574a63fef533fa0c487002941c"
	I0731 18:13:02.305902   73479 logs.go:123] Gathering logs for dmesg ...
	I0731 18:13:02.305931   73479 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 18:13:02.319954   73479 logs.go:123] Gathering logs for etcd [65ef90d7b082adf059c9973e7f1dfde4770ed9718d440c8e0d525d427b16f645] ...
	I0731 18:13:02.319984   73479 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 65ef90d7b082adf059c9973e7f1dfde4770ed9718d440c8e0d525d427b16f645"
	I0731 18:13:02.361657   73479 logs.go:123] Gathering logs for coredns [f043eb2392c22ee26a4729969283f278a4dc7ff4784addf7139958beb2cba855] ...
	I0731 18:13:02.361685   73479 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f043eb2392c22ee26a4729969283f278a4dc7ff4784addf7139958beb2cba855"
	I0731 18:13:02.395696   73479 logs.go:123] Gathering logs for kube-controller-manager [ee75a53c57652705a335378f60ff96b2fce4543143edc35249ac027c7c6e4a2c] ...
	I0731 18:13:02.395724   73479 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ee75a53c57652705a335378f60ff96b2fce4543143edc35249ac027c7c6e4a2c"
	I0731 18:13:02.444671   73479 logs.go:123] Gathering logs for storage-provisioner [6f311536202ba854461dd5abcc13c5b8d006399426b52e0c8d10bc41f06826df] ...
	I0731 18:13:02.444704   73479 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6f311536202ba854461dd5abcc13c5b8d006399426b52e0c8d10bc41f06826df"
	I0731 18:13:02.480666   73479 logs.go:123] Gathering logs for CRI-O ...
	I0731 18:13:02.480693   73479 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 18:13:02.967693   73479 logs.go:123] Gathering logs for container status ...
	I0731 18:13:02.967741   73479 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 18:13:05.512381   73479 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:13:05.528582   73479 api_server.go:72] duration metric: took 4m19.030809429s to wait for apiserver process to appear ...
	I0731 18:13:05.528612   73479 api_server.go:88] waiting for apiserver healthz status ...
	I0731 18:13:05.528652   73479 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 18:13:05.528730   73479 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 18:13:05.567984   73479 cri.go:89] found id: "895465d024797e36349af3a0f7cd0689ec7e813d811a3205a301baae081986e0"
	I0731 18:13:05.568004   73479 cri.go:89] found id: ""
	I0731 18:13:05.568013   73479 logs.go:276] 1 containers: [895465d024797e36349af3a0f7cd0689ec7e813d811a3205a301baae081986e0]
	I0731 18:13:05.568073   73479 ssh_runner.go:195] Run: which crictl
	I0731 18:13:05.571946   73479 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 18:13:05.572003   73479 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 18:13:05.620468   73479 cri.go:89] found id: "65ef90d7b082adf059c9973e7f1dfde4770ed9718d440c8e0d525d427b16f645"
	I0731 18:13:05.620495   73479 cri.go:89] found id: ""
	I0731 18:13:05.620504   73479 logs.go:276] 1 containers: [65ef90d7b082adf059c9973e7f1dfde4770ed9718d440c8e0d525d427b16f645]
	I0731 18:13:05.620571   73479 ssh_runner.go:195] Run: which crictl
	I0731 18:13:05.624599   73479 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 18:13:05.624653   73479 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 18:13:05.663717   73479 cri.go:89] found id: "f043eb2392c22ee26a4729969283f278a4dc7ff4784addf7139958beb2cba855"
	I0731 18:13:05.663740   73479 cri.go:89] found id: ""
	I0731 18:13:05.663748   73479 logs.go:276] 1 containers: [f043eb2392c22ee26a4729969283f278a4dc7ff4784addf7139958beb2cba855]
	I0731 18:13:05.663803   73479 ssh_runner.go:195] Run: which crictl
	I0731 18:13:05.667601   73479 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 18:13:05.667672   73479 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 18:13:05.699764   73479 cri.go:89] found id: "ed1c40e21d8aa2b6321516625802154255f5eda1642411e377150b8df1c88bd2"
	I0731 18:13:05.699791   73479 cri.go:89] found id: ""
	I0731 18:13:05.699801   73479 logs.go:276] 1 containers: [ed1c40e21d8aa2b6321516625802154255f5eda1642411e377150b8df1c88bd2]
	I0731 18:13:05.699858   73479 ssh_runner.go:195] Run: which crictl
	I0731 18:13:05.703965   73479 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 18:13:05.704036   73479 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 18:13:05.739460   73479 cri.go:89] found id: "57bdb8e09be40a0925d7a86eb613ef8a0455415666562743e36e6800b8636847"
	I0731 18:13:05.739487   73479 cri.go:89] found id: ""
	I0731 18:13:05.739496   73479 logs.go:276] 1 containers: [57bdb8e09be40a0925d7a86eb613ef8a0455415666562743e36e6800b8636847]
	I0731 18:13:05.739558   73479 ssh_runner.go:195] Run: which crictl
	I0731 18:13:05.743180   73479 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 18:13:05.743232   73479 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 18:13:05.777369   73479 cri.go:89] found id: "ee75a53c57652705a335378f60ff96b2fce4543143edc35249ac027c7c6e4a2c"
	I0731 18:13:05.777390   73479 cri.go:89] found id: ""
	I0731 18:13:05.777397   73479 logs.go:276] 1 containers: [ee75a53c57652705a335378f60ff96b2fce4543143edc35249ac027c7c6e4a2c]
	I0731 18:13:05.777449   73479 ssh_runner.go:195] Run: which crictl
	I0731 18:13:05.781388   73479 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 18:13:05.781435   73479 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 18:13:05.825567   73479 cri.go:89] found id: ""
	I0731 18:13:05.825599   73479 logs.go:276] 0 containers: []
	W0731 18:13:05.825610   73479 logs.go:278] No container was found matching "kindnet"
	I0731 18:13:05.825617   73479 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0731 18:13:05.825689   73479 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0731 18:13:05.859538   73479 cri.go:89] found id: "6f311536202ba854461dd5abcc13c5b8d006399426b52e0c8d10bc41f06826df"
	I0731 18:13:05.859570   73479 cri.go:89] found id: "9ea2bc105f57ae4d05e4535f2851a35b9014c7574a63fef533fa0c487002941c"
	I0731 18:13:05.859577   73479 cri.go:89] found id: ""
	I0731 18:13:05.859586   73479 logs.go:276] 2 containers: [6f311536202ba854461dd5abcc13c5b8d006399426b52e0c8d10bc41f06826df 9ea2bc105f57ae4d05e4535f2851a35b9014c7574a63fef533fa0c487002941c]
	I0731 18:13:05.859657   73479 ssh_runner.go:195] Run: which crictl
	I0731 18:13:05.863513   73479 ssh_runner.go:195] Run: which crictl
	I0731 18:13:05.866989   73479 logs.go:123] Gathering logs for CRI-O ...
	I0731 18:13:05.867011   73479 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 18:13:06.314116   73479 logs.go:123] Gathering logs for container status ...
	I0731 18:13:06.314166   73479 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 18:13:06.357738   73479 logs.go:123] Gathering logs for kubelet ...
	I0731 18:13:06.357764   73479 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 18:13:06.407330   73479 logs.go:123] Gathering logs for describe nodes ...
	I0731 18:13:06.407365   73479 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 18:13:06.508580   73479 logs.go:123] Gathering logs for kube-apiserver [895465d024797e36349af3a0f7cd0689ec7e813d811a3205a301baae081986e0] ...
	I0731 18:13:06.508616   73479 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 895465d024797e36349af3a0f7cd0689ec7e813d811a3205a301baae081986e0"
	I0731 18:13:06.550032   73479 logs.go:123] Gathering logs for coredns [f043eb2392c22ee26a4729969283f278a4dc7ff4784addf7139958beb2cba855] ...
	I0731 18:13:06.550071   73479 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f043eb2392c22ee26a4729969283f278a4dc7ff4784addf7139958beb2cba855"
	I0731 18:13:06.588519   73479 logs.go:123] Gathering logs for kube-proxy [57bdb8e09be40a0925d7a86eb613ef8a0455415666562743e36e6800b8636847] ...
	I0731 18:13:06.588548   73479 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 57bdb8e09be40a0925d7a86eb613ef8a0455415666562743e36e6800b8636847"
	I0731 18:13:06.622872   73479 logs.go:123] Gathering logs for storage-provisioner [6f311536202ba854461dd5abcc13c5b8d006399426b52e0c8d10bc41f06826df] ...
	I0731 18:13:06.622901   73479 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6f311536202ba854461dd5abcc13c5b8d006399426b52e0c8d10bc41f06826df"
	I0731 18:13:06.666694   73479 logs.go:123] Gathering logs for dmesg ...
	I0731 18:13:06.666721   73479 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 18:13:06.680326   73479 logs.go:123] Gathering logs for etcd [65ef90d7b082adf059c9973e7f1dfde4770ed9718d440c8e0d525d427b16f645] ...
	I0731 18:13:06.680355   73479 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 65ef90d7b082adf059c9973e7f1dfde4770ed9718d440c8e0d525d427b16f645"
	I0731 18:13:06.723966   73479 logs.go:123] Gathering logs for kube-scheduler [ed1c40e21d8aa2b6321516625802154255f5eda1642411e377150b8df1c88bd2] ...
	I0731 18:13:06.723997   73479 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ed1c40e21d8aa2b6321516625802154255f5eda1642411e377150b8df1c88bd2"
	I0731 18:13:06.760873   73479 logs.go:123] Gathering logs for kube-controller-manager [ee75a53c57652705a335378f60ff96b2fce4543143edc35249ac027c7c6e4a2c] ...
	I0731 18:13:06.760901   73479 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ee75a53c57652705a335378f60ff96b2fce4543143edc35249ac027c7c6e4a2c"
	I0731 18:13:06.809348   73479 logs.go:123] Gathering logs for storage-provisioner [9ea2bc105f57ae4d05e4535f2851a35b9014c7574a63fef533fa0c487002941c] ...
	I0731 18:13:06.809387   73479 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9ea2bc105f57ae4d05e4535f2851a35b9014c7574a63fef533fa0c487002941c"
	I0731 18:13:09.341394   73479 api_server.go:253] Checking apiserver healthz at https://192.168.61.126:8443/healthz ...
	I0731 18:13:09.346642   73479 api_server.go:279] https://192.168.61.126:8443/healthz returned 200:
	ok
	I0731 18:13:09.347803   73479 api_server.go:141] control plane version: v1.31.0-beta.0
	I0731 18:13:09.347821   73479 api_server.go:131] duration metric: took 3.819202346s to wait for apiserver health ...
	I0731 18:13:09.347828   73479 system_pods.go:43] waiting for kube-system pods to appear ...
	I0731 18:13:09.347850   73479 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 18:13:09.347903   73479 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 18:13:09.391857   73479 cri.go:89] found id: "895465d024797e36349af3a0f7cd0689ec7e813d811a3205a301baae081986e0"
	I0731 18:13:09.391885   73479 cri.go:89] found id: ""
	I0731 18:13:09.391895   73479 logs.go:276] 1 containers: [895465d024797e36349af3a0f7cd0689ec7e813d811a3205a301baae081986e0]
	I0731 18:13:09.391956   73479 ssh_runner.go:195] Run: which crictl
	I0731 18:13:09.395723   73479 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 18:13:09.395789   73479 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 18:13:09.430108   73479 cri.go:89] found id: "65ef90d7b082adf059c9973e7f1dfde4770ed9718d440c8e0d525d427b16f645"
	I0731 18:13:09.430128   73479 cri.go:89] found id: ""
	I0731 18:13:09.430135   73479 logs.go:276] 1 containers: [65ef90d7b082adf059c9973e7f1dfde4770ed9718d440c8e0d525d427b16f645]
	I0731 18:13:09.430180   73479 ssh_runner.go:195] Run: which crictl
	I0731 18:13:09.433933   73479 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 18:13:09.434037   73479 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 18:13:09.471630   73479 cri.go:89] found id: "f043eb2392c22ee26a4729969283f278a4dc7ff4784addf7139958beb2cba855"
	I0731 18:13:09.471655   73479 cri.go:89] found id: ""
	I0731 18:13:09.471663   73479 logs.go:276] 1 containers: [f043eb2392c22ee26a4729969283f278a4dc7ff4784addf7139958beb2cba855]
	I0731 18:13:09.471709   73479 ssh_runner.go:195] Run: which crictl
	I0731 18:13:09.476432   73479 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 18:13:09.476496   73479 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 18:13:09.519568   73479 cri.go:89] found id: "ed1c40e21d8aa2b6321516625802154255f5eda1642411e377150b8df1c88bd2"
	I0731 18:13:09.519590   73479 cri.go:89] found id: ""
	I0731 18:13:09.519598   73479 logs.go:276] 1 containers: [ed1c40e21d8aa2b6321516625802154255f5eda1642411e377150b8df1c88bd2]
	I0731 18:13:09.519641   73479 ssh_runner.go:195] Run: which crictl
	I0731 18:13:09.523587   73479 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 18:13:09.523656   73479 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 18:13:09.559405   73479 cri.go:89] found id: "57bdb8e09be40a0925d7a86eb613ef8a0455415666562743e36e6800b8636847"
	I0731 18:13:09.559429   73479 cri.go:89] found id: ""
	I0731 18:13:09.559438   73479 logs.go:276] 1 containers: [57bdb8e09be40a0925d7a86eb613ef8a0455415666562743e36e6800b8636847]
	I0731 18:13:09.559485   73479 ssh_runner.go:195] Run: which crictl
	I0731 18:13:09.564137   73479 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 18:13:09.564199   73479 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 18:13:09.605298   73479 cri.go:89] found id: "ee75a53c57652705a335378f60ff96b2fce4543143edc35249ac027c7c6e4a2c"
	I0731 18:13:09.605324   73479 cri.go:89] found id: ""
	I0731 18:13:09.605332   73479 logs.go:276] 1 containers: [ee75a53c57652705a335378f60ff96b2fce4543143edc35249ac027c7c6e4a2c]
	I0731 18:13:09.605403   73479 ssh_runner.go:195] Run: which crictl
	I0731 18:13:09.612233   73479 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 18:13:09.612296   73479 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 18:13:09.648804   73479 cri.go:89] found id: ""
	I0731 18:13:09.648836   73479 logs.go:276] 0 containers: []
	W0731 18:13:09.648848   73479 logs.go:278] No container was found matching "kindnet"
	I0731 18:13:09.648855   73479 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0731 18:13:09.648916   73479 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0731 18:13:09.694708   73479 cri.go:89] found id: "6f311536202ba854461dd5abcc13c5b8d006399426b52e0c8d10bc41f06826df"
	I0731 18:13:09.694733   73479 cri.go:89] found id: "9ea2bc105f57ae4d05e4535f2851a35b9014c7574a63fef533fa0c487002941c"
	I0731 18:13:09.694737   73479 cri.go:89] found id: ""
	I0731 18:13:09.694743   73479 logs.go:276] 2 containers: [6f311536202ba854461dd5abcc13c5b8d006399426b52e0c8d10bc41f06826df 9ea2bc105f57ae4d05e4535f2851a35b9014c7574a63fef533fa0c487002941c]
	I0731 18:13:09.694794   73479 ssh_runner.go:195] Run: which crictl
	I0731 18:13:09.698687   73479 ssh_runner.go:195] Run: which crictl
	I0731 18:13:09.702244   73479 logs.go:123] Gathering logs for storage-provisioner [6f311536202ba854461dd5abcc13c5b8d006399426b52e0c8d10bc41f06826df] ...
	I0731 18:13:09.702261   73479 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6f311536202ba854461dd5abcc13c5b8d006399426b52e0c8d10bc41f06826df"
	I0731 18:13:09.737777   73479 logs.go:123] Gathering logs for storage-provisioner [9ea2bc105f57ae4d05e4535f2851a35b9014c7574a63fef533fa0c487002941c] ...
	I0731 18:13:09.737808   73479 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9ea2bc105f57ae4d05e4535f2851a35b9014c7574a63fef533fa0c487002941c"
	I0731 18:13:09.771128   73479 logs.go:123] Gathering logs for container status ...
	I0731 18:13:09.771161   73479 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 18:13:09.817498   73479 logs.go:123] Gathering logs for dmesg ...
	I0731 18:13:09.817525   73479 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 18:13:09.833574   73479 logs.go:123] Gathering logs for etcd [65ef90d7b082adf059c9973e7f1dfde4770ed9718d440c8e0d525d427b16f645] ...
	I0731 18:13:09.833607   73479 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 65ef90d7b082adf059c9973e7f1dfde4770ed9718d440c8e0d525d427b16f645"
	I0731 18:13:09.872664   73479 logs.go:123] Gathering logs for kube-scheduler [ed1c40e21d8aa2b6321516625802154255f5eda1642411e377150b8df1c88bd2] ...
	I0731 18:13:09.872691   73479 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ed1c40e21d8aa2b6321516625802154255f5eda1642411e377150b8df1c88bd2"
	I0731 18:13:09.913741   73479 logs.go:123] Gathering logs for coredns [f043eb2392c22ee26a4729969283f278a4dc7ff4784addf7139958beb2cba855] ...
	I0731 18:13:09.913771   73479 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f043eb2392c22ee26a4729969283f278a4dc7ff4784addf7139958beb2cba855"
	I0731 18:13:09.949469   73479 logs.go:123] Gathering logs for kube-proxy [57bdb8e09be40a0925d7a86eb613ef8a0455415666562743e36e6800b8636847] ...
	I0731 18:13:09.949512   73479 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 57bdb8e09be40a0925d7a86eb613ef8a0455415666562743e36e6800b8636847"
	I0731 18:13:09.985409   73479 logs.go:123] Gathering logs for kube-controller-manager [ee75a53c57652705a335378f60ff96b2fce4543143edc35249ac027c7c6e4a2c] ...
	I0731 18:13:09.985447   73479 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ee75a53c57652705a335378f60ff96b2fce4543143edc35249ac027c7c6e4a2c"
	I0731 18:13:10.039018   73479 logs.go:123] Gathering logs for CRI-O ...
	I0731 18:13:10.039048   73479 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 18:13:10.406380   73479 logs.go:123] Gathering logs for kubelet ...
	I0731 18:13:10.406416   73479 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 18:13:10.459944   73479 logs.go:123] Gathering logs for describe nodes ...
	I0731 18:13:10.459997   73479 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 18:13:10.564092   73479 logs.go:123] Gathering logs for kube-apiserver [895465d024797e36349af3a0f7cd0689ec7e813d811a3205a301baae081986e0] ...
	I0731 18:13:10.564134   73479 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 895465d024797e36349af3a0f7cd0689ec7e813d811a3205a301baae081986e0"
	I0731 18:13:13.124074   73479 system_pods.go:59] 8 kube-system pods found
	I0731 18:13:13.124102   73479 system_pods.go:61] "coredns-5cfdc65f69-k7clq" [e4be77b6-aa7c-45e2-90a1-6a8264fd5101] Running
	I0731 18:13:13.124107   73479 system_pods.go:61] "etcd-no-preload-673754" [d64e9194-dd33-4a28-b8c3-d25462f1dae8] Running
	I0731 18:13:13.124110   73479 system_pods.go:61] "kube-apiserver-no-preload-673754" [4497614d-51de-4a5f-93dc-1397446fe4c8] Running
	I0731 18:13:13.124114   73479 system_pods.go:61] "kube-controller-manager-no-preload-673754" [fe7f714e-d778-49e7-b4ef-44e6efb5bc33] Running
	I0731 18:13:13.124117   73479 system_pods.go:61] "kube-proxy-hqxh6" [4fb623c0-4be3-421c-85d6-1d76a90b874f] Running
	I0731 18:13:13.124119   73479 system_pods.go:61] "kube-scheduler-no-preload-673754" [558e9b78-a6a9-46b8-9db8-004126359c74] Running
	I0731 18:13:13.124125   73479 system_pods.go:61] "metrics-server-78fcd8795b-27pkr" [a63a156b-0446-4bd9-8619-de75edaeb481] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0731 18:13:13.124129   73479 system_pods.go:61] "storage-provisioner" [a9f0ab70-18f4-4fac-b858-a9177077fe29] Running
	I0731 18:13:13.124135   73479 system_pods.go:74] duration metric: took 3.776302431s to wait for pod list to return data ...
	I0731 18:13:13.124141   73479 default_sa.go:34] waiting for default service account to be created ...
	I0731 18:13:13.127100   73479 default_sa.go:45] found service account: "default"
	I0731 18:13:13.127137   73479 default_sa.go:55] duration metric: took 2.989455ms for default service account to be created ...
	I0731 18:13:13.127148   73479 system_pods.go:116] waiting for k8s-apps to be running ...
	I0731 18:13:13.132359   73479 system_pods.go:86] 8 kube-system pods found
	I0731 18:13:13.132379   73479 system_pods.go:89] "coredns-5cfdc65f69-k7clq" [e4be77b6-aa7c-45e2-90a1-6a8264fd5101] Running
	I0731 18:13:13.132387   73479 system_pods.go:89] "etcd-no-preload-673754" [d64e9194-dd33-4a28-b8c3-d25462f1dae8] Running
	I0731 18:13:13.132393   73479 system_pods.go:89] "kube-apiserver-no-preload-673754" [4497614d-51de-4a5f-93dc-1397446fe4c8] Running
	I0731 18:13:13.132399   73479 system_pods.go:89] "kube-controller-manager-no-preload-673754" [fe7f714e-d778-49e7-b4ef-44e6efb5bc33] Running
	I0731 18:13:13.132405   73479 system_pods.go:89] "kube-proxy-hqxh6" [4fb623c0-4be3-421c-85d6-1d76a90b874f] Running
	I0731 18:13:13.132410   73479 system_pods.go:89] "kube-scheduler-no-preload-673754" [558e9b78-a6a9-46b8-9db8-004126359c74] Running
	I0731 18:13:13.132420   73479 system_pods.go:89] "metrics-server-78fcd8795b-27pkr" [a63a156b-0446-4bd9-8619-de75edaeb481] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0731 18:13:13.132427   73479 system_pods.go:89] "storage-provisioner" [a9f0ab70-18f4-4fac-b858-a9177077fe29] Running
	I0731 18:13:13.132435   73479 system_pods.go:126] duration metric: took 5.281138ms to wait for k8s-apps to be running ...
	I0731 18:13:13.132443   73479 system_svc.go:44] waiting for kubelet service to be running ....
	I0731 18:13:13.132488   73479 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0731 18:13:13.148254   73479 system_svc.go:56] duration metric: took 15.802724ms WaitForService to wait for kubelet
	I0731 18:13:13.148281   73479 kubeadm.go:582] duration metric: took 4m26.650509962s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0731 18:13:13.148315   73479 node_conditions.go:102] verifying NodePressure condition ...
	I0731 18:13:13.151986   73479 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0731 18:13:13.152006   73479 node_conditions.go:123] node cpu capacity is 2
	I0731 18:13:13.152018   73479 node_conditions.go:105] duration metric: took 3.693857ms to run NodePressure ...
	I0731 18:13:13.152031   73479 start.go:241] waiting for startup goroutines ...
	I0731 18:13:13.152043   73479 start.go:246] waiting for cluster config update ...
	I0731 18:13:13.152058   73479 start.go:255] writing updated cluster config ...
	I0731 18:13:13.152347   73479 ssh_runner.go:195] Run: rm -f paused
	I0731 18:13:13.202434   73479 start.go:600] kubectl: 1.30.3, cluster: 1.31.0-beta.0 (minor skew: 1)
	I0731 18:13:13.205205   73479 out.go:177] * Done! kubectl is now configured to use "no-preload-673754" cluster and "default" namespace by default
	I0731 18:13:13.655618   74203 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0731 18:13:13.655843   74203 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0731 18:13:33.657356   74203 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0731 18:13:33.657560   74203 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0731 18:14:13.660934   74203 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0731 18:14:13.661161   74203 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0731 18:14:13.661183   74203 kubeadm.go:310] 
	I0731 18:14:13.661216   74203 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0731 18:14:13.661251   74203 kubeadm.go:310] 		timed out waiting for the condition
	I0731 18:14:13.661279   74203 kubeadm.go:310] 
	I0731 18:14:13.661338   74203 kubeadm.go:310] 	This error is likely caused by:
	I0731 18:14:13.661378   74203 kubeadm.go:310] 		- The kubelet is not running
	I0731 18:14:13.661477   74203 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0731 18:14:13.661483   74203 kubeadm.go:310] 
	I0731 18:14:13.661577   74203 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0731 18:14:13.661617   74203 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0731 18:14:13.661646   74203 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0731 18:14:13.661651   74203 kubeadm.go:310] 
	I0731 18:14:13.661781   74203 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0731 18:14:13.661897   74203 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0731 18:14:13.661909   74203 kubeadm.go:310] 
	I0731 18:14:13.662044   74203 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0731 18:14:13.662164   74203 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0731 18:14:13.662265   74203 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0731 18:14:13.662444   74203 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0731 18:14:13.662477   74203 kubeadm.go:310] 
	I0731 18:14:13.663123   74203 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0731 18:14:13.663235   74203 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0731 18:14:13.663331   74203 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	W0731 18:14:13.663497   74203 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0731 18:14:13.663559   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0731 18:14:18.956376   74203 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (5.292787213s)
	I0731 18:14:18.956479   74203 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0731 18:14:18.970820   74203 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0731 18:14:18.980747   74203 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0731 18:14:18.980771   74203 kubeadm.go:157] found existing configuration files:
	
	I0731 18:14:18.980816   74203 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0731 18:14:18.989985   74203 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0731 18:14:18.990063   74203 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0731 18:14:18.999143   74203 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0731 18:14:19.008740   74203 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0731 18:14:19.008798   74203 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0731 18:14:19.018729   74203 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0731 18:14:19.028953   74203 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0731 18:14:19.029015   74203 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0731 18:14:19.039399   74203 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0731 18:14:19.049072   74203 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0731 18:14:19.049124   74203 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0731 18:14:19.059592   74203 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0731 18:14:19.121542   74203 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0731 18:14:19.121613   74203 kubeadm.go:310] [preflight] Running pre-flight checks
	I0731 18:14:19.271989   74203 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0731 18:14:19.272100   74203 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0731 18:14:19.272223   74203 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0731 18:14:19.440224   74203 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0731 18:14:19.441929   74203 out.go:204]   - Generating certificates and keys ...
	I0731 18:14:19.442025   74203 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0731 18:14:19.442104   74203 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0731 18:14:19.442196   74203 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0731 18:14:19.442245   74203 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0731 18:14:19.442326   74203 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0731 18:14:19.442395   74203 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0731 18:14:19.442498   74203 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0731 18:14:19.442610   74203 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0731 18:14:19.442687   74203 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0731 18:14:19.442770   74203 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0731 18:14:19.442813   74203 kubeadm.go:310] [certs] Using the existing "sa" key
	I0731 18:14:19.442887   74203 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0731 18:14:19.481696   74203 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0731 18:14:19.804252   74203 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0731 18:14:20.038734   74203 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0731 18:14:20.211133   74203 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0731 18:14:20.225726   74203 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0731 18:14:20.227920   74203 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0731 18:14:20.227977   74203 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0731 18:14:20.364068   74203 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0731 18:14:20.365991   74203 out.go:204]   - Booting up control plane ...
	I0731 18:14:20.366094   74203 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0731 18:14:20.366195   74203 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0731 18:14:20.366270   74203 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0731 18:14:20.366379   74203 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0731 18:14:20.367688   74203 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0731 18:15:00.365616   74203 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0731 18:15:00.366184   74203 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0731 18:15:00.366412   74203 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0731 18:15:05.366332   74203 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0731 18:15:05.366529   74203 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0731 18:15:15.366241   74203 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0731 18:15:15.366499   74203 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0731 18:15:35.366114   74203 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0731 18:15:35.366344   74203 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0731 18:16:15.365995   74203 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0731 18:16:15.366181   74203 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0731 18:16:15.366191   74203 kubeadm.go:310] 
	I0731 18:16:15.366224   74203 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0731 18:16:15.366448   74203 kubeadm.go:310] 		timed out waiting for the condition
	I0731 18:16:15.366472   74203 kubeadm.go:310] 
	I0731 18:16:15.366517   74203 kubeadm.go:310] 	This error is likely caused by:
	I0731 18:16:15.366568   74203 kubeadm.go:310] 		- The kubelet is not running
	I0731 18:16:15.366723   74203 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0731 18:16:15.366740   74203 kubeadm.go:310] 
	I0731 18:16:15.366863   74203 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0731 18:16:15.366924   74203 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0731 18:16:15.366986   74203 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0731 18:16:15.366999   74203 kubeadm.go:310] 
	I0731 18:16:15.367153   74203 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0731 18:16:15.367271   74203 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0731 18:16:15.367283   74203 kubeadm.go:310] 
	I0731 18:16:15.367418   74203 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0731 18:16:15.367504   74203 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0731 18:16:15.367609   74203 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0731 18:16:15.367725   74203 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0731 18:16:15.367734   74203 kubeadm.go:310] 
	I0731 18:16:15.369210   74203 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0731 18:16:15.369361   74203 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0731 18:16:15.369434   74203 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0731 18:16:15.369496   74203 kubeadm.go:394] duration metric: took 8m6.557607575s to StartCluster
	I0731 18:16:15.369537   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 18:16:15.369590   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 18:16:15.432899   74203 cri.go:89] found id: ""
	I0731 18:16:15.432929   74203 logs.go:276] 0 containers: []
	W0731 18:16:15.432941   74203 logs.go:278] No container was found matching "kube-apiserver"
	I0731 18:16:15.432947   74203 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 18:16:15.433005   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 18:16:15.470506   74203 cri.go:89] found id: ""
	I0731 18:16:15.470534   74203 logs.go:276] 0 containers: []
	W0731 18:16:15.470542   74203 logs.go:278] No container was found matching "etcd"
	I0731 18:16:15.470549   74203 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 18:16:15.470609   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 18:16:15.502032   74203 cri.go:89] found id: ""
	I0731 18:16:15.502055   74203 logs.go:276] 0 containers: []
	W0731 18:16:15.502062   74203 logs.go:278] No container was found matching "coredns"
	I0731 18:16:15.502067   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 18:16:15.502115   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 18:16:15.533897   74203 cri.go:89] found id: ""
	I0731 18:16:15.533918   74203 logs.go:276] 0 containers: []
	W0731 18:16:15.533925   74203 logs.go:278] No container was found matching "kube-scheduler"
	I0731 18:16:15.533930   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 18:16:15.533980   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 18:16:15.565275   74203 cri.go:89] found id: ""
	I0731 18:16:15.565311   74203 logs.go:276] 0 containers: []
	W0731 18:16:15.565326   74203 logs.go:278] No container was found matching "kube-proxy"
	I0731 18:16:15.565333   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 18:16:15.565395   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 18:16:15.601402   74203 cri.go:89] found id: ""
	I0731 18:16:15.601427   74203 logs.go:276] 0 containers: []
	W0731 18:16:15.601435   74203 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 18:16:15.601440   74203 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 18:16:15.601489   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 18:16:15.638778   74203 cri.go:89] found id: ""
	I0731 18:16:15.638801   74203 logs.go:276] 0 containers: []
	W0731 18:16:15.638808   74203 logs.go:278] No container was found matching "kindnet"
	I0731 18:16:15.638813   74203 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 18:16:15.638861   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 18:16:15.675697   74203 cri.go:89] found id: ""
	I0731 18:16:15.675720   74203 logs.go:276] 0 containers: []
	W0731 18:16:15.675728   74203 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 18:16:15.675736   74203 logs.go:123] Gathering logs for describe nodes ...
	I0731 18:16:15.675748   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 18:16:15.745287   74203 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 18:16:15.745325   74203 logs.go:123] Gathering logs for CRI-O ...
	I0731 18:16:15.745341   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 18:16:15.848503   74203 logs.go:123] Gathering logs for container status ...
	I0731 18:16:15.848536   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 18:16:15.887234   74203 logs.go:123] Gathering logs for kubelet ...
	I0731 18:16:15.887258   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 18:16:15.934871   74203 logs.go:123] Gathering logs for dmesg ...
	I0731 18:16:15.934901   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	W0731 18:16:15.947727   74203 out.go:364] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0731 18:16:15.947769   74203 out.go:239] * 
	W0731 18:16:15.947817   74203 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0731 18:16:15.947836   74203 out.go:239] * 
	W0731 18:16:15.948669   74203 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0731 18:16:15.952286   74203 out.go:177] 
	W0731 18:16:15.953375   74203 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0731 18:16:15.953424   74203 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0731 18:16:15.953442   74203 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0731 18:16:15.954734   74203 out.go:177] 
	
	
	==> CRI-O <==
	Jul 31 18:25:21 old-k8s-version-276459 crio[645]: time="2024-07-31 18:25:21.158348524Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722450321158316338,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=ecfb4bb2-0e29-40fa-80bb-a0cf2af19db6 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 31 18:25:21 old-k8s-version-276459 crio[645]: time="2024-07-31 18:25:21.158865192Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=58170b1e-1584-4e67-9023-a2aca13d3279 name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 18:25:21 old-k8s-version-276459 crio[645]: time="2024-07-31 18:25:21.158928040Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=58170b1e-1584-4e67-9023-a2aca13d3279 name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 18:25:21 old-k8s-version-276459 crio[645]: time="2024-07-31 18:25:21.158964952Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=58170b1e-1584-4e67-9023-a2aca13d3279 name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 18:25:21 old-k8s-version-276459 crio[645]: time="2024-07-31 18:25:21.192896780Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=e23db1fa-bd7f-4b9d-aa60-508d673d8130 name=/runtime.v1.RuntimeService/Version
	Jul 31 18:25:21 old-k8s-version-276459 crio[645]: time="2024-07-31 18:25:21.192987265Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=e23db1fa-bd7f-4b9d-aa60-508d673d8130 name=/runtime.v1.RuntimeService/Version
	Jul 31 18:25:21 old-k8s-version-276459 crio[645]: time="2024-07-31 18:25:21.194278808Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=fc506998-acb4-41bd-afa3-f68198a73de3 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 31 18:25:21 old-k8s-version-276459 crio[645]: time="2024-07-31 18:25:21.194739001Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722450321194706605,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=fc506998-acb4-41bd-afa3-f68198a73de3 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 31 18:25:21 old-k8s-version-276459 crio[645]: time="2024-07-31 18:25:21.195196776Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=0a008752-93ff-4d3f-afd8-2b6bcd96b349 name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 18:25:21 old-k8s-version-276459 crio[645]: time="2024-07-31 18:25:21.195265575Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=0a008752-93ff-4d3f-afd8-2b6bcd96b349 name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 18:25:21 old-k8s-version-276459 crio[645]: time="2024-07-31 18:25:21.195300464Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=0a008752-93ff-4d3f-afd8-2b6bcd96b349 name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 18:25:21 old-k8s-version-276459 crio[645]: time="2024-07-31 18:25:21.224146551Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=016cf14a-6adf-4d48-851e-be129cef2a4f name=/runtime.v1.RuntimeService/Version
	Jul 31 18:25:21 old-k8s-version-276459 crio[645]: time="2024-07-31 18:25:21.224280205Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=016cf14a-6adf-4d48-851e-be129cef2a4f name=/runtime.v1.RuntimeService/Version
	Jul 31 18:25:21 old-k8s-version-276459 crio[645]: time="2024-07-31 18:25:21.225308392Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=33a08077-4da5-4c97-b172-cf4224132daf name=/runtime.v1.ImageService/ImageFsInfo
	Jul 31 18:25:21 old-k8s-version-276459 crio[645]: time="2024-07-31 18:25:21.225860593Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722450321225833959,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=33a08077-4da5-4c97-b172-cf4224132daf name=/runtime.v1.ImageService/ImageFsInfo
	Jul 31 18:25:21 old-k8s-version-276459 crio[645]: time="2024-07-31 18:25:21.226461220Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=635605dd-f1d3-4d74-8724-72578ef85b08 name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 18:25:21 old-k8s-version-276459 crio[645]: time="2024-07-31 18:25:21.226529163Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=635605dd-f1d3-4d74-8724-72578ef85b08 name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 18:25:21 old-k8s-version-276459 crio[645]: time="2024-07-31 18:25:21.226571322Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=635605dd-f1d3-4d74-8724-72578ef85b08 name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 18:25:21 old-k8s-version-276459 crio[645]: time="2024-07-31 18:25:21.256867397Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=7b25160b-b9aa-4435-ac71-52d4cf47bf87 name=/runtime.v1.RuntimeService/Version
	Jul 31 18:25:21 old-k8s-version-276459 crio[645]: time="2024-07-31 18:25:21.256953547Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=7b25160b-b9aa-4435-ac71-52d4cf47bf87 name=/runtime.v1.RuntimeService/Version
	Jul 31 18:25:21 old-k8s-version-276459 crio[645]: time="2024-07-31 18:25:21.258144939Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=6b302198-c3d6-437a-b0d7-377560d42da0 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 31 18:25:21 old-k8s-version-276459 crio[645]: time="2024-07-31 18:25:21.258625672Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722450321258603644,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=6b302198-c3d6-437a-b0d7-377560d42da0 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 31 18:25:21 old-k8s-version-276459 crio[645]: time="2024-07-31 18:25:21.259176414Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=2c941d0c-01e1-4c79-a390-c048c82caebc name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 18:25:21 old-k8s-version-276459 crio[645]: time="2024-07-31 18:25:21.259240680Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=2c941d0c-01e1-4c79-a390-c048c82caebc name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 18:25:21 old-k8s-version-276459 crio[645]: time="2024-07-31 18:25:21.259272043Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=2c941d0c-01e1-4c79-a390-c048c82caebc name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Jul31 18:07] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.051412] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.042688] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.944073] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +1.816954] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.537194] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000002] NFSD: Unable to initialize client recovery tracking! (-2)
	[Jul31 18:08] systemd-fstab-generator[565]: Ignoring "noauto" option for root device
	[  +0.060507] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.075418] systemd-fstab-generator[577]: Ignoring "noauto" option for root device
	[  +0.176279] systemd-fstab-generator[591]: Ignoring "noauto" option for root device
	[  +0.160769] systemd-fstab-generator[603]: Ignoring "noauto" option for root device
	[  +0.263587] systemd-fstab-generator[630]: Ignoring "noauto" option for root device
	[  +6.137429] systemd-fstab-generator[832]: Ignoring "noauto" option for root device
	[  +0.060102] kauditd_printk_skb: 130 callbacks suppressed
	[  +1.916258] systemd-fstab-generator[956]: Ignoring "noauto" option for root device
	[ +12.736454] kauditd_printk_skb: 46 callbacks suppressed
	[Jul31 18:12] systemd-fstab-generator[5055]: Ignoring "noauto" option for root device
	[Jul31 18:14] systemd-fstab-generator[5341]: Ignoring "noauto" option for root device
	[  +0.064729] kauditd_printk_skb: 12 callbacks suppressed
	
	
	==> kernel <==
	 18:25:21 up 17 min,  0 users,  load average: 0.02, 0.04, 0.03
	Linux old-k8s-version-276459 5.10.207 #1 SMP Mon Jul 29 15:19:02 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kubelet <==
	Jul 31 18:25:15 old-k8s-version-276459 kubelet[6520]: k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache.(*controller).Run.func1(0xc0000a00c0, 0xc000d0f950)
	Jul 31 18:25:15 old-k8s-version-276459 kubelet[6520]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache/controller.go:130 +0x34
	Jul 31 18:25:15 old-k8s-version-276459 kubelet[6520]: created by k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache.(*controller).Run
	Jul 31 18:25:15 old-k8s-version-276459 kubelet[6520]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache/controller.go:129 +0xa5
	Jul 31 18:25:15 old-k8s-version-276459 kubelet[6520]: goroutine 158 [select]:
	Jul 31 18:25:15 old-k8s-version-276459 kubelet[6520]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc000173ef0, 0x4f0ac20, 0xc00047a410, 0x1, 0xc0000a00c0)
	Jul 31 18:25:15 old-k8s-version-276459 kubelet[6520]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:167 +0x149
	Jul 31 18:25:15 old-k8s-version-276459 kubelet[6520]: k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache.(*Reflector).Run(0xc000150460, 0xc0000a00c0)
	Jul 31 18:25:15 old-k8s-version-276459 kubelet[6520]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache/reflector.go:220 +0x1c5
	Jul 31 18:25:15 old-k8s-version-276459 kubelet[6520]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.(*Group).StartWithChannel.func1()
	Jul 31 18:25:15 old-k8s-version-276459 kubelet[6520]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:56 +0x2e
	Jul 31 18:25:15 old-k8s-version-276459 kubelet[6520]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.(*Group).Start.func1(0xc0002d2760, 0xc000b420c0)
	Jul 31 18:25:15 old-k8s-version-276459 kubelet[6520]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:73 +0x51
	Jul 31 18:25:15 old-k8s-version-276459 kubelet[6520]: created by k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.(*Group).Start
	Jul 31 18:25:15 old-k8s-version-276459 kubelet[6520]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:71 +0x65
	Jul 31 18:25:15 old-k8s-version-276459 systemd[1]: kubelet.service: Main process exited, code=exited, status=255/EXCEPTION
	Jul 31 18:25:15 old-k8s-version-276459 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Jul 31 18:25:16 old-k8s-version-276459 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 114.
	Jul 31 18:25:16 old-k8s-version-276459 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	Jul 31 18:25:16 old-k8s-version-276459 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	Jul 31 18:25:16 old-k8s-version-276459 kubelet[6530]: I0731 18:25:16.631559    6530 server.go:416] Version: v1.20.0
	Jul 31 18:25:16 old-k8s-version-276459 kubelet[6530]: I0731 18:25:16.631867    6530 server.go:837] Client rotation is on, will bootstrap in background
	Jul 31 18:25:16 old-k8s-version-276459 kubelet[6530]: I0731 18:25:16.633671    6530 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
	Jul 31 18:25:16 old-k8s-version-276459 kubelet[6530]: W0731 18:25:16.634534    6530 manager.go:159] Cannot detect current cgroup on cgroup v2
	Jul 31 18:25:16 old-k8s-version-276459 kubelet[6530]: I0731 18:25:16.634826    6530 dynamic_cafile_content.go:167] Starting client-ca-bundle::/var/lib/minikube/certs/ca.crt
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-276459 -n old-k8s-version-276459
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-276459 -n old-k8s-version-276459: exit status 2 (212.602177ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "old-k8s-version-276459" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (543.38s)
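The failure above comes down to the kubelet on old-k8s-version-276459 never becoming healthy (the kubelet log shows the restart counter at 114 and the API server reported as Stopped). A minimal triage sequence, using only the commands the kubeadm and minikube output above already suggest (the profile name, CRI-O socket path, and cgroup-driver hint are all taken from that output; whether the hint actually fixes this run is not verified here):

	# Check why the kubelet keeps crash-looping inside the VM
	out/minikube-linux-amd64 -p old-k8s-version-276459 ssh -- sudo systemctl status kubelet
	out/minikube-linux-amd64 -p old-k8s-version-276459 ssh -- sudo journalctl -xeu kubelet
	# List any control-plane containers CRI-O managed to start
	out/minikube-linux-amd64 -p old-k8s-version-276459 ssh -- sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a
	# Retry the start with the cgroup-driver suggestion from the error message
	out/minikube-linux-amd64 start -p old-k8s-version-276459 --extra-config=kubelet.cgroup-driver=systemd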

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (430.42s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
start_stop_delete_test.go:287: ***** TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:287: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-094310 -n default-k8s-diff-port-094310
start_stop_delete_test.go:287: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: showing logs for failed pods as of 2024-07-31 18:28:52.594693619 +0000 UTC m=+6536.301431801
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-094310 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-094310 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: context deadline exceeded (1.567µs)
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context default-k8s-diff-port-094310 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": context deadline exceeded
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
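The assertion above expects the dashboard-metrics-scraper deployment to reference the overridden image registry.k8s.io/echoserver:1.4, but the describe call never ran because the test context had already hit its 9m deadline. A quick manual check, assuming the default-k8s-diff-port-094310 API server is reachable (the jsonpath expression is just one way to surface the container images):

	kubectl --context default-k8s-diff-port-094310 -n kubernetes-dashboard get pods -l k8s-app=kubernetes-dashboard
	kubectl --context default-k8s-diff-port-094310 -n kubernetes-dashboard get deploy dashboard-metrics-scraper -o jsonpath='{.spec.template.spec.containers[*].image}'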
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-094310 -n default-k8s-diff-port-094310
helpers_test.go:244: <<< TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-094310 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p default-k8s-diff-port-094310 logs -n 25: (1.126179201s)
helpers_test.go:252: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| start   | -p                                                     | default-k8s-diff-port-094310 | jenkins | v1.33.1 | 31 Jul 24 17:58 UTC | 31 Jul 24 18:00 UTC |
	|         | default-k8s-diff-port-094310                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-673754             | no-preload-673754            | jenkins | v1.33.1 | 31 Jul 24 17:59 UTC | 31 Jul 24 17:59 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-673754                                   | no-preload-673754            | jenkins | v1.33.1 | 31 Jul 24 17:59 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-094310  | default-k8s-diff-port-094310 | jenkins | v1.33.1 | 31 Jul 24 18:00 UTC | 31 Jul 24 18:00 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-094310 | jenkins | v1.33.1 | 31 Jul 24 18:00 UTC |                     |
	|         | default-k8s-diff-port-094310                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-436067            | embed-certs-436067           | jenkins | v1.33.1 | 31 Jul 24 18:00 UTC | 31 Jul 24 18:00 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-436067                                  | embed-certs-436067           | jenkins | v1.33.1 | 31 Jul 24 18:00 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-276459        | old-k8s-version-276459       | jenkins | v1.33.1 | 31 Jul 24 18:02 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-673754                  | no-preload-673754            | jenkins | v1.33.1 | 31 Jul 24 18:02 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-673754 --memory=2200                     | no-preload-673754            | jenkins | v1.33.1 | 31 Jul 24 18:02 UTC | 31 Jul 24 18:13 UTC |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0-beta.0                    |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-094310       | default-k8s-diff-port-094310 | jenkins | v1.33.1 | 31 Jul 24 18:02 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-436067                 | embed-certs-436067           | jenkins | v1.33.1 | 31 Jul 24 18:02 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-094310 | jenkins | v1.33.1 | 31 Jul 24 18:02 UTC | 31 Jul 24 18:12 UTC |
	|         | default-k8s-diff-port-094310                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3                           |                              |         |         |                     |                     |
	| start   | -p embed-certs-436067                                  | embed-certs-436067           | jenkins | v1.33.1 | 31 Jul 24 18:02 UTC | 31 Jul 24 18:12 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3                           |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-276459                              | old-k8s-version-276459       | jenkins | v1.33.1 | 31 Jul 24 18:03 UTC | 31 Jul 24 18:03 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-276459             | old-k8s-version-276459       | jenkins | v1.33.1 | 31 Jul 24 18:03 UTC | 31 Jul 24 18:03 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-276459                              | old-k8s-version-276459       | jenkins | v1.33.1 | 31 Jul 24 18:03 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	| delete  | -p old-k8s-version-276459                              | old-k8s-version-276459       | jenkins | v1.33.1 | 31 Jul 24 18:27 UTC | 31 Jul 24 18:27 UTC |
	| start   | -p newest-cni-094683 --memory=2200 --alsologtostderr   | newest-cni-094683            | jenkins | v1.33.1 | 31 Jul 24 18:27 UTC | 31 Jul 24 18:28 UTC |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0-beta.0                    |                              |         |         |                     |                     |
	| delete  | -p no-preload-673754                                   | no-preload-673754            | jenkins | v1.33.1 | 31 Jul 24 18:27 UTC | 31 Jul 24 18:27 UTC |
	| delete  | -p embed-certs-436067                                  | embed-certs-436067           | jenkins | v1.33.1 | 31 Jul 24 18:28 UTC | 31 Jul 24 18:28 UTC |
	| addons  | enable metrics-server -p newest-cni-094683             | newest-cni-094683            | jenkins | v1.33.1 | 31 Jul 24 18:28 UTC | 31 Jul 24 18:28 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p newest-cni-094683                                   | newest-cni-094683            | jenkins | v1.33.1 | 31 Jul 24 18:28 UTC | 31 Jul 24 18:28 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p newest-cni-094683                  | newest-cni-094683            | jenkins | v1.33.1 | 31 Jul 24 18:28 UTC | 31 Jul 24 18:28 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p newest-cni-094683 --memory=2200 --alsologtostderr   | newest-cni-094683            | jenkins | v1.33.1 | 31 Jul 24 18:28 UTC |                     |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0-beta.0                    |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/31 18:28:44
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0731 18:28:44.899623   81862 out.go:291] Setting OutFile to fd 1 ...
	I0731 18:28:44.899872   81862 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 18:28:44.899880   81862 out.go:304] Setting ErrFile to fd 2...
	I0731 18:28:44.899884   81862 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 18:28:44.900053   81862 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19349-8084/.minikube/bin
	I0731 18:28:44.900583   81862 out.go:298] Setting JSON to false
	I0731 18:28:44.901426   81862 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":7869,"bootTime":1722442656,"procs":189,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1065-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0731 18:28:44.901480   81862 start.go:139] virtualization: kvm guest
	I0731 18:28:44.903781   81862 out.go:177] * [newest-cni-094683] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0731 18:28:44.905223   81862 out.go:177]   - MINIKUBE_LOCATION=19349
	I0731 18:28:44.905227   81862 notify.go:220] Checking for updates...
	I0731 18:28:44.906766   81862 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0731 18:28:44.908230   81862 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19349-8084/kubeconfig
	I0731 18:28:44.909506   81862 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19349-8084/.minikube
	I0731 18:28:44.910666   81862 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0731 18:28:44.911885   81862 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0731 18:28:44.913537   81862 config.go:182] Loaded profile config "newest-cni-094683": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0-beta.0
	I0731 18:28:44.913930   81862 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 18:28:44.913980   81862 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 18:28:44.928486   81862 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40873
	I0731 18:28:44.929016   81862 main.go:141] libmachine: () Calling .GetVersion
	I0731 18:28:44.929539   81862 main.go:141] libmachine: Using API Version  1
	I0731 18:28:44.929571   81862 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 18:28:44.929920   81862 main.go:141] libmachine: () Calling .GetMachineName
	I0731 18:28:44.930115   81862 main.go:141] libmachine: (newest-cni-094683) Calling .DriverName
	I0731 18:28:44.930378   81862 driver.go:392] Setting default libvirt URI to qemu:///system
	I0731 18:28:44.930710   81862 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 18:28:44.930749   81862 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 18:28:44.945946   81862 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43823
	I0731 18:28:44.946330   81862 main.go:141] libmachine: () Calling .GetVersion
	I0731 18:28:44.946780   81862 main.go:141] libmachine: Using API Version  1
	I0731 18:28:44.946802   81862 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 18:28:44.947104   81862 main.go:141] libmachine: () Calling .GetMachineName
	I0731 18:28:44.947380   81862 main.go:141] libmachine: (newest-cni-094683) Calling .DriverName
	I0731 18:28:44.982034   81862 out.go:177] * Using the kvm2 driver based on existing profile
	I0731 18:28:44.983214   81862 start.go:297] selected driver: kvm2
	I0731 18:28:44.983226   81862 start.go:901] validating driver "kvm2" against &{Name:newest-cni-094683 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.31.0-beta.0 ClusterName:newest-cni-094683 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.71 Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_
pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0731 18:28:44.983381   81862 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0731 18:28:44.984099   81862 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0731 18:28:44.984176   81862 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19349-8084/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0731 18:28:44.998681   81862 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0731 18:28:44.999014   81862 start_flags.go:966] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0731 18:28:44.999076   81862 cni.go:84] Creating CNI manager for ""
	I0731 18:28:44.999094   81862 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0731 18:28:44.999164   81862 start.go:340] cluster config:
	{Name:newest-cni-094683 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0-beta.0 ClusterName:newest-cni-094683 Namespace:default AP
IServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.71 Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAdd
ress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0731 18:28:44.999282   81862 iso.go:125] acquiring lock: {Name:mk4494bb5a63902bf12a909c3109db85229bc516 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0731 18:28:45.001696   81862 out.go:177] * Starting "newest-cni-094683" primary control-plane node in "newest-cni-094683" cluster
	I0731 18:28:45.003104   81862 preload.go:131] Checking if preload exists for k8s version v1.31.0-beta.0 and runtime crio
	I0731 18:28:45.003162   81862 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19349-8084/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-beta.0-cri-o-overlay-amd64.tar.lz4
	I0731 18:28:45.003171   81862 cache.go:56] Caching tarball of preloaded images
	I0731 18:28:45.003251   81862 preload.go:172] Found /home/jenkins/minikube-integration/19349-8084/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-beta.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0731 18:28:45.003261   81862 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0-beta.0 on crio
	I0731 18:28:45.003350   81862 profile.go:143] Saving config to /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/newest-cni-094683/config.json ...
	I0731 18:28:45.003519   81862 start.go:360] acquireMachinesLock for newest-cni-094683: {Name:mkd78eacfab7d6c3058c6674434b3d889ec957e0 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0731 18:28:45.003556   81862 start.go:364] duration metric: took 20.987µs to acquireMachinesLock for "newest-cni-094683"
	I0731 18:28:45.003570   81862 start.go:96] Skipping create...Using existing machine configuration
	I0731 18:28:45.003576   81862 fix.go:54] fixHost starting: 
	I0731 18:28:45.003811   81862 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 18:28:45.003847   81862 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 18:28:45.017761   81862 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44053
	I0731 18:28:45.018166   81862 main.go:141] libmachine: () Calling .GetVersion
	I0731 18:28:45.018565   81862 main.go:141] libmachine: Using API Version  1
	I0731 18:28:45.018587   81862 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 18:28:45.018890   81862 main.go:141] libmachine: () Calling .GetMachineName
	I0731 18:28:45.019079   81862 main.go:141] libmachine: (newest-cni-094683) Calling .DriverName
	I0731 18:28:45.019238   81862 main.go:141] libmachine: (newest-cni-094683) Calling .GetState
	I0731 18:28:45.020761   81862 fix.go:112] recreateIfNeeded on newest-cni-094683: state=Stopped err=<nil>
	I0731 18:28:45.020789   81862 main.go:141] libmachine: (newest-cni-094683) Calling .DriverName
	W0731 18:28:45.020947   81862 fix.go:138] unexpected machine state, will restart: <nil>
	I0731 18:28:45.022971   81862 out.go:177] * Restarting existing kvm2 VM for "newest-cni-094683" ...
	I0731 18:28:45.024230   81862 main.go:141] libmachine: (newest-cni-094683) Calling .Start
	I0731 18:28:45.024418   81862 main.go:141] libmachine: (newest-cni-094683) Ensuring networks are active...
	I0731 18:28:45.025337   81862 main.go:141] libmachine: (newest-cni-094683) Ensuring network default is active
	I0731 18:28:45.025667   81862 main.go:141] libmachine: (newest-cni-094683) Ensuring network mk-newest-cni-094683 is active
	I0731 18:28:45.026095   81862 main.go:141] libmachine: (newest-cni-094683) Getting domain xml...
	I0731 18:28:45.026889   81862 main.go:141] libmachine: (newest-cni-094683) Creating domain...
	I0731 18:28:46.244660   81862 main.go:141] libmachine: (newest-cni-094683) Waiting to get IP...
	I0731 18:28:46.245573   81862 main.go:141] libmachine: (newest-cni-094683) DBG | domain newest-cni-094683 has defined MAC address 52:54:00:ba:97:e4 in network mk-newest-cni-094683
	I0731 18:28:46.245977   81862 main.go:141] libmachine: (newest-cni-094683) DBG | unable to find current IP address of domain newest-cni-094683 in network mk-newest-cni-094683
	I0731 18:28:46.246040   81862 main.go:141] libmachine: (newest-cni-094683) DBG | I0731 18:28:46.245951   81897 retry.go:31] will retry after 200.375982ms: waiting for machine to come up
	I0731 18:28:46.448568   81862 main.go:141] libmachine: (newest-cni-094683) DBG | domain newest-cni-094683 has defined MAC address 52:54:00:ba:97:e4 in network mk-newest-cni-094683
	I0731 18:28:46.449015   81862 main.go:141] libmachine: (newest-cni-094683) DBG | unable to find current IP address of domain newest-cni-094683 in network mk-newest-cni-094683
	I0731 18:28:46.449054   81862 main.go:141] libmachine: (newest-cni-094683) DBG | I0731 18:28:46.448991   81897 retry.go:31] will retry after 297.525785ms: waiting for machine to come up
	I0731 18:28:46.748474   81862 main.go:141] libmachine: (newest-cni-094683) DBG | domain newest-cni-094683 has defined MAC address 52:54:00:ba:97:e4 in network mk-newest-cni-094683
	I0731 18:28:46.748978   81862 main.go:141] libmachine: (newest-cni-094683) DBG | unable to find current IP address of domain newest-cni-094683 in network mk-newest-cni-094683
	I0731 18:28:46.749006   81862 main.go:141] libmachine: (newest-cni-094683) DBG | I0731 18:28:46.748926   81897 retry.go:31] will retry after 442.51276ms: waiting for machine to come up
	I0731 18:28:47.193236   81862 main.go:141] libmachine: (newest-cni-094683) DBG | domain newest-cni-094683 has defined MAC address 52:54:00:ba:97:e4 in network mk-newest-cni-094683
	I0731 18:28:47.193706   81862 main.go:141] libmachine: (newest-cni-094683) DBG | unable to find current IP address of domain newest-cni-094683 in network mk-newest-cni-094683
	I0731 18:28:47.193735   81862 main.go:141] libmachine: (newest-cni-094683) DBG | I0731 18:28:47.193658   81897 retry.go:31] will retry after 447.172169ms: waiting for machine to come up
	I0731 18:28:47.642383   81862 main.go:141] libmachine: (newest-cni-094683) DBG | domain newest-cni-094683 has defined MAC address 52:54:00:ba:97:e4 in network mk-newest-cni-094683
	I0731 18:28:47.642825   81862 main.go:141] libmachine: (newest-cni-094683) DBG | unable to find current IP address of domain newest-cni-094683 in network mk-newest-cni-094683
	I0731 18:28:47.642853   81862 main.go:141] libmachine: (newest-cni-094683) DBG | I0731 18:28:47.642794   81897 retry.go:31] will retry after 685.847537ms: waiting for machine to come up
	I0731 18:28:48.330514   81862 main.go:141] libmachine: (newest-cni-094683) DBG | domain newest-cni-094683 has defined MAC address 52:54:00:ba:97:e4 in network mk-newest-cni-094683
	I0731 18:28:48.330881   81862 main.go:141] libmachine: (newest-cni-094683) DBG | unable to find current IP address of domain newest-cni-094683 in network mk-newest-cni-094683
	I0731 18:28:48.330909   81862 main.go:141] libmachine: (newest-cni-094683) DBG | I0731 18:28:48.330823   81897 retry.go:31] will retry after 804.543257ms: waiting for machine to come up
	I0731 18:28:49.136781   81862 main.go:141] libmachine: (newest-cni-094683) DBG | domain newest-cni-094683 has defined MAC address 52:54:00:ba:97:e4 in network mk-newest-cni-094683
	I0731 18:28:49.137233   81862 main.go:141] libmachine: (newest-cni-094683) DBG | unable to find current IP address of domain newest-cni-094683 in network mk-newest-cni-094683
	I0731 18:28:49.137264   81862 main.go:141] libmachine: (newest-cni-094683) DBG | I0731 18:28:49.137186   81897 retry.go:31] will retry after 1.001850399s: waiting for machine to come up
	
	
	==> CRI-O <==
	Jul 31 18:28:53 default-k8s-diff-port-094310 crio[732]: time="2024-07-31 18:28:53.120254318Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722450533120233164,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133285,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=113420a5-7575-4c5d-bfaf-79e6c8b33ea9 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 31 18:28:53 default-k8s-diff-port-094310 crio[732]: time="2024-07-31 18:28:53.120702989Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=d4badedb-d6d7-4a98-9f8c-0332cc8b3e0c name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 18:28:53 default-k8s-diff-port-094310 crio[732]: time="2024-07-31 18:28:53.120764635Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=d4badedb-d6d7-4a98-9f8c-0332cc8b3e0c name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 18:28:53 default-k8s-diff-port-094310 crio[732]: time="2024-07-31 18:28:53.120969345Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:8867449b4946a6b46cb5df1ec153fc2d52afab53d8d19dc9fc5d2d8685f5c162,PodSandboxId:74fa804456abf16fac2e4fa969eee65afe4b7e15d6f538d13d57f28771a4d365,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722449558463916333,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2e4c5a96-4dfc-4af6-8d3e-bab865644328,},Annotations:map[string]string{io.kubernetes.container.hash: 1864afb1,io.kubernetes.container.restartCount: 0,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e823d63ea68928a212649386ed1e1c88825a99b2e51a8fd260abd803ead2f898,PodSandboxId:48fd734a4f028ef2ae7597ea7dacd7fd78736f6d56e69f8e4d542e8d8e3bb1b4,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722449558198634989,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-2r7zb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e7acc926-db53-4c1c-a62f-45e303c69fc7,},Annotations:map[string]string{io.kubernetes.container.hash: 9fb1dd8f,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"proto
col\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ebc062f33e6f9112555983f1590d8754e32c9f6b68fb23b23ead8100dc9fecca,PodSandboxId:a0ea8bf7817a49db99a3ee9ba0faf78e0ccb811c9e36c0a1fb4c9cb41b64efe4,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722449558231808748,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-756jj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.
pod.uid: c91e86b1-8348-4dc7-9aa1-6909055c2dde,},Annotations:map[string]string{io.kubernetes.container.hash: 2b84c784,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2f8486d598e4f1a0155d2efcf42b5d5b6189672809432dddfbf300f911645053,PodSandboxId:75a61419abdf32959044b20bd7157430ccd33922eac014b248e8ebaa12533a4a,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING
,CreatedAt:1722449557536317934,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-4vvjq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a8dd9db6-5f45-4c8d-a9d3-03ede1db07eb,},Annotations:map[string]string{io.kubernetes.container.hash: 53ff8087,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f2cf9a7321c1d465b4ff676a304ab76c71686a8fa65ce79b916fb5388df6c085,PodSandboxId:cec8900941b1a088416eb020a103598cfbb3be666a95976ea8c4d2b45b764cfe,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:172244953
8066367091,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-094310,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2f9d9a2710369dbc0687b88cb4bd4404,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:157f17723a1b942bc7ad5fa1c4a9bb8b7ea85b1e1b216e3275a8bb5dc2cb7825,PodSandboxId:d3a6809c64dd5600a0019669b05eb9fff5535d97af24300d2f118746de5120dd,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:172
2449538059414984,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-094310,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0108a789bab39d2e181154113e900f6c,},Annotations:map[string]string{io.kubernetes.container.hash: 3de99cbf,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a91f54ad2f9d99bd43cf0a559b3727f82d3316da1a4f6b0bcae0d421af842b1c,PodSandboxId:e76d819d9c1a5fc216d7b8d8dbead55ebe011c4aaff616f53493d4c59d3e6267,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722449537998604800,Label
s:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-094310,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2b04f1a21453eb8a8c3d16cebd0427a1,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5575a74c69e5ebdff7ec9a9a2b7dcbc1703d1f590f0dd35611530a040896cbd6,PodSandboxId:91d9f73237ba174e1e107f148a125f315e4e482ea1fc9bd3aacd781e4957fa4b,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722449537950680856,Labels:
map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-094310,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 13952284e64c880c73a35e185bde83ea,},Annotations:map[string]string{io.kubernetes.container.hash: 4e096051,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=d4badedb-d6d7-4a98-9f8c-0332cc8b3e0c name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 18:28:53 default-k8s-diff-port-094310 crio[732]: time="2024-07-31 18:28:53.156246287Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=a0e3253a-2a2b-4824-b3fb-8d2be746674c name=/runtime.v1.RuntimeService/Version
	Jul 31 18:28:53 default-k8s-diff-port-094310 crio[732]: time="2024-07-31 18:28:53.156338304Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=a0e3253a-2a2b-4824-b3fb-8d2be746674c name=/runtime.v1.RuntimeService/Version
	Jul 31 18:28:53 default-k8s-diff-port-094310 crio[732]: time="2024-07-31 18:28:53.157421071Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=e74dc4f4-45dd-45a9-aa67-485ef48cb1f0 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 31 18:28:53 default-k8s-diff-port-094310 crio[732]: time="2024-07-31 18:28:53.157813373Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722450533157793319,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133285,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=e74dc4f4-45dd-45a9-aa67-485ef48cb1f0 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 31 18:28:53 default-k8s-diff-port-094310 crio[732]: time="2024-07-31 18:28:53.158319888Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=9d573e78-722f-4b7f-8fe0-735f65e408a0 name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 18:28:53 default-k8s-diff-port-094310 crio[732]: time="2024-07-31 18:28:53.158394858Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=9d573e78-722f-4b7f-8fe0-735f65e408a0 name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 18:28:53 default-k8s-diff-port-094310 crio[732]: time="2024-07-31 18:28:53.158593642Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:8867449b4946a6b46cb5df1ec153fc2d52afab53d8d19dc9fc5d2d8685f5c162,PodSandboxId:74fa804456abf16fac2e4fa969eee65afe4b7e15d6f538d13d57f28771a4d365,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722449558463916333,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2e4c5a96-4dfc-4af6-8d3e-bab865644328,},Annotations:map[string]string{io.kubernetes.container.hash: 1864afb1,io.kubernetes.container.restartCount: 0,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e823d63ea68928a212649386ed1e1c88825a99b2e51a8fd260abd803ead2f898,PodSandboxId:48fd734a4f028ef2ae7597ea7dacd7fd78736f6d56e69f8e4d542e8d8e3bb1b4,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722449558198634989,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-2r7zb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e7acc926-db53-4c1c-a62f-45e303c69fc7,},Annotations:map[string]string{io.kubernetes.container.hash: 9fb1dd8f,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"proto
col\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ebc062f33e6f9112555983f1590d8754e32c9f6b68fb23b23ead8100dc9fecca,PodSandboxId:a0ea8bf7817a49db99a3ee9ba0faf78e0ccb811c9e36c0a1fb4c9cb41b64efe4,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722449558231808748,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-756jj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.
pod.uid: c91e86b1-8348-4dc7-9aa1-6909055c2dde,},Annotations:map[string]string{io.kubernetes.container.hash: 2b84c784,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2f8486d598e4f1a0155d2efcf42b5d5b6189672809432dddfbf300f911645053,PodSandboxId:75a61419abdf32959044b20bd7157430ccd33922eac014b248e8ebaa12533a4a,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING
,CreatedAt:1722449557536317934,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-4vvjq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a8dd9db6-5f45-4c8d-a9d3-03ede1db07eb,},Annotations:map[string]string{io.kubernetes.container.hash: 53ff8087,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f2cf9a7321c1d465b4ff676a304ab76c71686a8fa65ce79b916fb5388df6c085,PodSandboxId:cec8900941b1a088416eb020a103598cfbb3be666a95976ea8c4d2b45b764cfe,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:172244953
8066367091,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-094310,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2f9d9a2710369dbc0687b88cb4bd4404,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:157f17723a1b942bc7ad5fa1c4a9bb8b7ea85b1e1b216e3275a8bb5dc2cb7825,PodSandboxId:d3a6809c64dd5600a0019669b05eb9fff5535d97af24300d2f118746de5120dd,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:172
2449538059414984,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-094310,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0108a789bab39d2e181154113e900f6c,},Annotations:map[string]string{io.kubernetes.container.hash: 3de99cbf,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a91f54ad2f9d99bd43cf0a559b3727f82d3316da1a4f6b0bcae0d421af842b1c,PodSandboxId:e76d819d9c1a5fc216d7b8d8dbead55ebe011c4aaff616f53493d4c59d3e6267,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722449537998604800,Label
s:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-094310,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2b04f1a21453eb8a8c3d16cebd0427a1,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5575a74c69e5ebdff7ec9a9a2b7dcbc1703d1f590f0dd35611530a040896cbd6,PodSandboxId:91d9f73237ba174e1e107f148a125f315e4e482ea1fc9bd3aacd781e4957fa4b,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722449537950680856,Labels:
map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-094310,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 13952284e64c880c73a35e185bde83ea,},Annotations:map[string]string{io.kubernetes.container.hash: 4e096051,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=9d573e78-722f-4b7f-8fe0-735f65e408a0 name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 18:28:53 default-k8s-diff-port-094310 crio[732]: time="2024-07-31 18:28:53.205088873Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=e03781b3-2d33-4737-ac73-1cfd08b1b587 name=/runtime.v1.RuntimeService/Version
	Jul 31 18:28:53 default-k8s-diff-port-094310 crio[732]: time="2024-07-31 18:28:53.205214494Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=e03781b3-2d33-4737-ac73-1cfd08b1b587 name=/runtime.v1.RuntimeService/Version
	Jul 31 18:28:53 default-k8s-diff-port-094310 crio[732]: time="2024-07-31 18:28:53.206466381Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=3d8668fb-34e1-4f43-ae8f-b60f982fc0a5 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 31 18:28:53 default-k8s-diff-port-094310 crio[732]: time="2024-07-31 18:28:53.206863655Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722450533206842408,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133285,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=3d8668fb-34e1-4f43-ae8f-b60f982fc0a5 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 31 18:28:53 default-k8s-diff-port-094310 crio[732]: time="2024-07-31 18:28:53.207441753Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=8e4d6c28-028e-4c08-bb5e-1184eefbfc07 name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 18:28:53 default-k8s-diff-port-094310 crio[732]: time="2024-07-31 18:28:53.207510012Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=8e4d6c28-028e-4c08-bb5e-1184eefbfc07 name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 18:28:53 default-k8s-diff-port-094310 crio[732]: time="2024-07-31 18:28:53.208107032Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:8867449b4946a6b46cb5df1ec153fc2d52afab53d8d19dc9fc5d2d8685f5c162,PodSandboxId:74fa804456abf16fac2e4fa969eee65afe4b7e15d6f538d13d57f28771a4d365,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722449558463916333,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2e4c5a96-4dfc-4af6-8d3e-bab865644328,},Annotations:map[string]string{io.kubernetes.container.hash: 1864afb1,io.kubernetes.container.restartCount: 0,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e823d63ea68928a212649386ed1e1c88825a99b2e51a8fd260abd803ead2f898,PodSandboxId:48fd734a4f028ef2ae7597ea7dacd7fd78736f6d56e69f8e4d542e8d8e3bb1b4,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722449558198634989,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-2r7zb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e7acc926-db53-4c1c-a62f-45e303c69fc7,},Annotations:map[string]string{io.kubernetes.container.hash: 9fb1dd8f,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"proto
col\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ebc062f33e6f9112555983f1590d8754e32c9f6b68fb23b23ead8100dc9fecca,PodSandboxId:a0ea8bf7817a49db99a3ee9ba0faf78e0ccb811c9e36c0a1fb4c9cb41b64efe4,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722449558231808748,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-756jj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.
pod.uid: c91e86b1-8348-4dc7-9aa1-6909055c2dde,},Annotations:map[string]string{io.kubernetes.container.hash: 2b84c784,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2f8486d598e4f1a0155d2efcf42b5d5b6189672809432dddfbf300f911645053,PodSandboxId:75a61419abdf32959044b20bd7157430ccd33922eac014b248e8ebaa12533a4a,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING
,CreatedAt:1722449557536317934,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-4vvjq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a8dd9db6-5f45-4c8d-a9d3-03ede1db07eb,},Annotations:map[string]string{io.kubernetes.container.hash: 53ff8087,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f2cf9a7321c1d465b4ff676a304ab76c71686a8fa65ce79b916fb5388df6c085,PodSandboxId:cec8900941b1a088416eb020a103598cfbb3be666a95976ea8c4d2b45b764cfe,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:172244953
8066367091,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-094310,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2f9d9a2710369dbc0687b88cb4bd4404,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:157f17723a1b942bc7ad5fa1c4a9bb8b7ea85b1e1b216e3275a8bb5dc2cb7825,PodSandboxId:d3a6809c64dd5600a0019669b05eb9fff5535d97af24300d2f118746de5120dd,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:172
2449538059414984,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-094310,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0108a789bab39d2e181154113e900f6c,},Annotations:map[string]string{io.kubernetes.container.hash: 3de99cbf,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a91f54ad2f9d99bd43cf0a559b3727f82d3316da1a4f6b0bcae0d421af842b1c,PodSandboxId:e76d819d9c1a5fc216d7b8d8dbead55ebe011c4aaff616f53493d4c59d3e6267,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722449537998604800,Label
s:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-094310,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2b04f1a21453eb8a8c3d16cebd0427a1,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5575a74c69e5ebdff7ec9a9a2b7dcbc1703d1f590f0dd35611530a040896cbd6,PodSandboxId:91d9f73237ba174e1e107f148a125f315e4e482ea1fc9bd3aacd781e4957fa4b,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722449537950680856,Labels:
map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-094310,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 13952284e64c880c73a35e185bde83ea,},Annotations:map[string]string{io.kubernetes.container.hash: 4e096051,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=8e4d6c28-028e-4c08-bb5e-1184eefbfc07 name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 18:28:53 default-k8s-diff-port-094310 crio[732]: time="2024-07-31 18:28:53.243786433Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=c62c6261-90c0-4790-b009-33ec685b2f76 name=/runtime.v1.RuntimeService/Version
	Jul 31 18:28:53 default-k8s-diff-port-094310 crio[732]: time="2024-07-31 18:28:53.243861051Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=c62c6261-90c0-4790-b009-33ec685b2f76 name=/runtime.v1.RuntimeService/Version
	Jul 31 18:28:53 default-k8s-diff-port-094310 crio[732]: time="2024-07-31 18:28:53.245236900Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=7a9433a5-2777-4cd9-8562-5c7f9351d37b name=/runtime.v1.ImageService/ImageFsInfo
	Jul 31 18:28:53 default-k8s-diff-port-094310 crio[732]: time="2024-07-31 18:28:53.246506823Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722450533246479236,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133285,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=7a9433a5-2777-4cd9-8562-5c7f9351d37b name=/runtime.v1.ImageService/ImageFsInfo
	Jul 31 18:28:53 default-k8s-diff-port-094310 crio[732]: time="2024-07-31 18:28:53.247403003Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=2702a8d2-3af2-4e57-b6d1-402d16a18eca name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 18:28:53 default-k8s-diff-port-094310 crio[732]: time="2024-07-31 18:28:53.247474093Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=2702a8d2-3af2-4e57-b6d1-402d16a18eca name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 18:28:53 default-k8s-diff-port-094310 crio[732]: time="2024-07-31 18:28:53.247765324Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:8867449b4946a6b46cb5df1ec153fc2d52afab53d8d19dc9fc5d2d8685f5c162,PodSandboxId:74fa804456abf16fac2e4fa969eee65afe4b7e15d6f538d13d57f28771a4d365,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722449558463916333,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2e4c5a96-4dfc-4af6-8d3e-bab865644328,},Annotations:map[string]string{io.kubernetes.container.hash: 1864afb1,io.kubernetes.container.restartCount: 0,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e823d63ea68928a212649386ed1e1c88825a99b2e51a8fd260abd803ead2f898,PodSandboxId:48fd734a4f028ef2ae7597ea7dacd7fd78736f6d56e69f8e4d542e8d8e3bb1b4,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722449558198634989,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-2r7zb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e7acc926-db53-4c1c-a62f-45e303c69fc7,},Annotations:map[string]string{io.kubernetes.container.hash: 9fb1dd8f,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"proto
col\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ebc062f33e6f9112555983f1590d8754e32c9f6b68fb23b23ead8100dc9fecca,PodSandboxId:a0ea8bf7817a49db99a3ee9ba0faf78e0ccb811c9e36c0a1fb4c9cb41b64efe4,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722449558231808748,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-756jj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.
pod.uid: c91e86b1-8348-4dc7-9aa1-6909055c2dde,},Annotations:map[string]string{io.kubernetes.container.hash: 2b84c784,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2f8486d598e4f1a0155d2efcf42b5d5b6189672809432dddfbf300f911645053,PodSandboxId:75a61419abdf32959044b20bd7157430ccd33922eac014b248e8ebaa12533a4a,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING
,CreatedAt:1722449557536317934,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-4vvjq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a8dd9db6-5f45-4c8d-a9d3-03ede1db07eb,},Annotations:map[string]string{io.kubernetes.container.hash: 53ff8087,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f2cf9a7321c1d465b4ff676a304ab76c71686a8fa65ce79b916fb5388df6c085,PodSandboxId:cec8900941b1a088416eb020a103598cfbb3be666a95976ea8c4d2b45b764cfe,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:172244953
8066367091,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-094310,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2f9d9a2710369dbc0687b88cb4bd4404,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:157f17723a1b942bc7ad5fa1c4a9bb8b7ea85b1e1b216e3275a8bb5dc2cb7825,PodSandboxId:d3a6809c64dd5600a0019669b05eb9fff5535d97af24300d2f118746de5120dd,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:172
2449538059414984,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-094310,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0108a789bab39d2e181154113e900f6c,},Annotations:map[string]string{io.kubernetes.container.hash: 3de99cbf,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a91f54ad2f9d99bd43cf0a559b3727f82d3316da1a4f6b0bcae0d421af842b1c,PodSandboxId:e76d819d9c1a5fc216d7b8d8dbead55ebe011c4aaff616f53493d4c59d3e6267,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722449537998604800,Label
s:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-094310,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2b04f1a21453eb8a8c3d16cebd0427a1,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5575a74c69e5ebdff7ec9a9a2b7dcbc1703d1f590f0dd35611530a040896cbd6,PodSandboxId:91d9f73237ba174e1e107f148a125f315e4e482ea1fc9bd3aacd781e4957fa4b,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722449537950680856,Labels:
map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-094310,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 13952284e64c880c73a35e185bde83ea,},Annotations:map[string]string{io.kubernetes.container.hash: 4e096051,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=2702a8d2-3af2-4e57-b6d1-402d16a18eca name=/runtime.v1.RuntimeService/ListContainers
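	
	For context on the RuntimeService traffic captured above: these debug entries are CRI gRPC calls (Version, ImageFsInfo, ListContainers) served by CRI-O on its local unix socket. Below is a minimal Go sketch of replaying the same Version and unfiltered ListContainers requests with the standard k8s.io/cri-api v1 client; the socket path matches the cri-socket annotation reported for this node, and the snippet is illustrative only, not part of the test harness.
	
	// Sketch: mirror the "/runtime.v1.RuntimeService/Version" and
	// "/runtime.v1.RuntimeService/ListContainers" calls seen in the CRI-O debug log.
	package main
	
	import (
		"context"
		"fmt"
		"time"
	
		"google.golang.org/grpc"
		"google.golang.org/grpc/credentials/insecure"
		runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
	)
	
	func main() {
		ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
		defer cancel()
	
		// CRI-O serves the CRI gRPC API on a local unix socket.
		conn, err := grpc.DialContext(ctx, "unix:///var/run/crio/crio.sock",
			grpc.WithTransportCredentials(insecure.NewCredentials()))
		if err != nil {
			panic(err)
		}
		defer conn.Close()
	
		rt := runtimeapi.NewRuntimeServiceClient(conn)
	
		// Equivalent of the Version request/response pairs in the log.
		ver, err := rt.Version(ctx, &runtimeapi.VersionRequest{})
		if err != nil {
			panic(err)
		}
		fmt.Println(ver.RuntimeName, ver.RuntimeVersion)
	
		// Equivalent of ListContainers with an empty filter
		// ("No filters were applied, returning full container list").
		resp, err := rt.ListContainers(ctx, &runtimeapi.ListContainersRequest{})
		if err != nil {
			panic(err)
		}
		for _, c := range resp.Containers {
			fmt.Println(c.Metadata.Name, c.State)
		}
	}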
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	8867449b4946a       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   16 minutes ago      Running             storage-provisioner       0                   74fa804456abf       storage-provisioner
	ebc062f33e6f9       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   16 minutes ago      Running             coredns                   0                   a0ea8bf7817a4       coredns-7db6d8ff4d-756jj
	e823d63ea6892       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   16 minutes ago      Running             coredns                   0                   48fd734a4f028       coredns-7db6d8ff4d-2r7zb
	2f8486d598e4f       55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1   16 minutes ago      Running             kube-proxy                0                   75a61419abdf3       kube-proxy-4vvjq
	f2cf9a7321c1d       76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e   16 minutes ago      Running             kube-controller-manager   2                   cec8900941b1a       kube-controller-manager-default-k8s-diff-port-094310
	157f17723a1b9       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899   16 minutes ago      Running             etcd                      2                   d3a6809c64dd5       etcd-default-k8s-diff-port-094310
	a91f54ad2f9d9       3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2   16 minutes ago      Running             kube-scheduler            2                   e76d819d9c1a5       kube-scheduler-default-k8s-diff-port-094310
	5575a74c69e5e       1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d   16 minutes ago      Running             kube-apiserver            2                   91d9f73237ba1       kube-apiserver-default-k8s-diff-port-094310
	
	
	==> coredns [e823d63ea68928a212649386ed1e1c88825a99b2e51a8fd260abd803ead2f898] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	
	
	==> coredns [ebc062f33e6f9112555983f1590d8754e32c9f6b68fb23b23ead8100dc9fecca] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-094310
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=default-k8s-diff-port-094310
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=1d737dad7efa60c56d30434fcd857dd3b14c91d9
	                    minikube.k8s.io/name=default-k8s-diff-port-094310
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_07_31T18_12_23_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 31 Jul 2024 18:12:20 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-094310
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 31 Jul 2024 18:28:52 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 31 Jul 2024 18:28:02 +0000   Wed, 31 Jul 2024 18:12:18 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 31 Jul 2024 18:28:02 +0000   Wed, 31 Jul 2024 18:12:18 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 31 Jul 2024 18:28:02 +0000   Wed, 31 Jul 2024 18:12:18 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 31 Jul 2024 18:28:02 +0000   Wed, 31 Jul 2024 18:12:20 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.72.197
	  Hostname:    default-k8s-diff-port-094310
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 b931426af7404ddd8ff19612654a9015
	  System UUID:                b931426a-f740-4ddd-8ff1-9612654a9015
	  Boot ID:                    1cbee3ab-6252-4713-a476-4e77af6b70c8
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-7db6d8ff4d-2r7zb                                 100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     16m
	  kube-system                 coredns-7db6d8ff4d-756jj                                 100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     16m
	  kube-system                 etcd-default-k8s-diff-port-094310                        100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         16m
	  kube-system                 kube-apiserver-default-k8s-diff-port-094310              250m (12%)    0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 kube-controller-manager-default-k8s-diff-port-094310    200m (10%)    0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 kube-proxy-4vvjq                                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 kube-scheduler-default-k8s-diff-port-094310              100m (5%)     0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 metrics-server-569cc877fc-mskwc                          100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         16m
	  kube-system                 storage-provisioner                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         16m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   0 (0%)
	  memory             440Mi (20%)  340Mi (16%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 16m   kube-proxy       
	  Normal  Starting                 16m   kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  16m   kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  16m   kubelet          Node default-k8s-diff-port-094310 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    16m   kubelet          Node default-k8s-diff-port-094310 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     16m   kubelet          Node default-k8s-diff-port-094310 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           16m   node-controller  Node default-k8s-diff-port-094310 event: Registered Node default-k8s-diff-port-094310 in Controller
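	
	The Conditions and Events above are read from the cluster's Node object. A minimal client-go sketch for fetching the same conditions is shown below; it assumes a kubeconfig that can reach this cluster and is illustrative only, not part of the test suite.
	
	// Sketch: list the node conditions shown in the "describe nodes" output via client-go.
	package main
	
	import (
		"context"
		"fmt"
	
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)
	
	func main() {
		// Assumes the default kubeconfig location; adjust as needed.
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
	
		node, err := cs.CoreV1().Nodes().Get(context.Background(),
			"default-k8s-diff-port-094310", metav1.GetOptions{})
		if err != nil {
			panic(err)
		}
	
		// Prints Ready/MemoryPressure/DiskPressure/PIDPressure with their reasons,
		// matching the Conditions table above.
		for _, c := range node.Status.Conditions {
			fmt.Printf("%s=%s (%s)\n", c.Type, c.Status, c.Reason)
		}
	}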
	
	
	==> dmesg <==
	[  +0.037041] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.674092] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +1.767925] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.524839] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000005] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +6.779733] systemd-fstab-generator[649]: Ignoring "noauto" option for root device
	[  +0.063172] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.056380] systemd-fstab-generator[661]: Ignoring "noauto" option for root device
	[  +0.170434] systemd-fstab-generator[675]: Ignoring "noauto" option for root device
	[  +0.145133] systemd-fstab-generator[687]: Ignoring "noauto" option for root device
	[  +0.256289] systemd-fstab-generator[716]: Ignoring "noauto" option for root device
	[  +4.107426] systemd-fstab-generator[813]: Ignoring "noauto" option for root device
	[  +1.831623] systemd-fstab-generator[936]: Ignoring "noauto" option for root device
	[  +0.063601] kauditd_printk_skb: 158 callbacks suppressed
	[  +6.493060] kauditd_printk_skb: 69 callbacks suppressed
	[  +6.533131] kauditd_printk_skb: 79 callbacks suppressed
	[  +7.873186] kauditd_printk_skb: 2 callbacks suppressed
	[Jul31 18:12] kauditd_printk_skb: 5 callbacks suppressed
	[  +1.807598] systemd-fstab-generator[3562]: Ignoring "noauto" option for root device
	[  +4.688996] kauditd_printk_skb: 57 callbacks suppressed
	[  +1.368027] systemd-fstab-generator[3889]: Ignoring "noauto" option for root device
	[ +13.860194] systemd-fstab-generator[4081]: Ignoring "noauto" option for root device
	[  +0.130983] kauditd_printk_skb: 14 callbacks suppressed
	[Jul31 18:13] kauditd_printk_skb: 84 callbacks suppressed
	
	
	==> etcd [157f17723a1b942bc7ad5fa1c4a9bb8b7ea85b1e1b216e3275a8bb5dc2cb7825] <==
	{"level":"info","ts":"2024-07-31T18:12:18.727206Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"caefc81a6a0a2f54 is starting a new election at term 1"}
	{"level":"info","ts":"2024-07-31T18:12:18.727319Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"caefc81a6a0a2f54 became pre-candidate at term 1"}
	{"level":"info","ts":"2024-07-31T18:12:18.727376Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"caefc81a6a0a2f54 received MsgPreVoteResp from caefc81a6a0a2f54 at term 1"}
	{"level":"info","ts":"2024-07-31T18:12:18.72741Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"caefc81a6a0a2f54 became candidate at term 2"}
	{"level":"info","ts":"2024-07-31T18:12:18.727434Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"caefc81a6a0a2f54 received MsgVoteResp from caefc81a6a0a2f54 at term 2"}
	{"level":"info","ts":"2024-07-31T18:12:18.727461Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"caefc81a6a0a2f54 became leader at term 2"}
	{"level":"info","ts":"2024-07-31T18:12:18.727486Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: caefc81a6a0a2f54 elected leader caefc81a6a0a2f54 at term 2"}
	{"level":"info","ts":"2024-07-31T18:12:18.731301Z","caller":"etcdserver/server.go:2578","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-31T18:12:18.73355Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"caefc81a6a0a2f54","local-member-attributes":"{Name:default-k8s-diff-port-094310 ClientURLs:[https://192.168.72.197:2379]}","request-path":"/0/members/caefc81a6a0a2f54/attributes","cluster-id":"260389a3b5060778","publish-timeout":"7s"}
	{"level":"info","ts":"2024-07-31T18:12:18.73372Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-07-31T18:12:18.740398Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.72.197:2379"}
	{"level":"info","ts":"2024-07-31T18:12:18.745473Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"260389a3b5060778","local-member-id":"caefc81a6a0a2f54","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-31T18:12:18.75029Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-31T18:12:18.750364Z","caller":"etcdserver/server.go:2602","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-31T18:12:18.745496Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-07-31T18:12:18.751258Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-07-31T18:12:18.754186Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-07-31T18:12:18.754963Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-07-31T18:22:19.107598Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":681}
	{"level":"info","ts":"2024-07-31T18:22:19.117672Z","caller":"mvcc/kvstore_compaction.go:68","msg":"finished scheduled compaction","compact-revision":681,"took":"8.925337ms","hash":2585920560,"current-db-size-bytes":2179072,"current-db-size":"2.2 MB","current-db-size-in-use-bytes":2179072,"current-db-size-in-use":"2.2 MB"}
	{"level":"info","ts":"2024-07-31T18:22:19.117728Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":2585920560,"revision":681,"compact-revision":-1}
	{"level":"info","ts":"2024-07-31T18:27:19.116796Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":923}
	{"level":"info","ts":"2024-07-31T18:27:19.120749Z","caller":"mvcc/kvstore_compaction.go:68","msg":"finished scheduled compaction","compact-revision":923,"took":"3.151083ms","hash":4121534766,"current-db-size-bytes":2179072,"current-db-size":"2.2 MB","current-db-size-in-use-bytes":1540096,"current-db-size-in-use":"1.5 MB"}
	{"level":"info","ts":"2024-07-31T18:27:19.120825Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":4121534766,"revision":923,"compact-revision":681}
	{"level":"info","ts":"2024-07-31T18:28:17.499606Z","caller":"traceutil/trace.go:171","msg":"trace[468111275] transaction","detail":"{read_only:false; response_revision:1216; number_of_response:1; }","duration":"117.707562ms","start":"2024-07-31T18:28:17.381847Z","end":"2024-07-31T18:28:17.499555Z","steps":["trace[468111275] 'process raft request'  (duration: 117.559158ms)"],"step_count":1}
	
	
	==> kernel <==
	 18:28:53 up 21 min,  0 users,  load average: 0.20, 0.13, 0.11
	Linux default-k8s-diff-port-094310 5.10.207 #1 SMP Mon Jul 29 15:19:02 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [5575a74c69e5ebdff7ec9a9a2b7dcbc1703d1f590f0dd35611530a040896cbd6] <==
	I0731 18:23:21.684841       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0731 18:25:21.684702       1 handler_proxy.go:93] no RequestInfo found in the context
	E0731 18:25:21.684794       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0731 18:25:21.684804       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0731 18:25:21.686118       1 handler_proxy.go:93] no RequestInfo found in the context
	E0731 18:25:21.686312       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0731 18:25:21.686327       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0731 18:27:20.687746       1 handler_proxy.go:93] no RequestInfo found in the context
	E0731 18:27:20.688060       1 controller.go:146] Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	W0731 18:27:21.689192       1 handler_proxy.go:93] no RequestInfo found in the context
	E0731 18:27:21.689239       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0731 18:27:21.689247       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0731 18:27:21.689369       1 handler_proxy.go:93] no RequestInfo found in the context
	E0731 18:27:21.689472       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0731 18:27:21.690781       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0731 18:28:21.690363       1 handler_proxy.go:93] no RequestInfo found in the context
	E0731 18:28:21.690435       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0731 18:28:21.690447       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0731 18:28:21.691700       1 handler_proxy.go:93] no RequestInfo found in the context
	E0731 18:28:21.691814       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0731 18:28:21.691825       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	
	==> kube-controller-manager [f2cf9a7321c1d465b4ff676a304ab76c71686a8fa65ce79b916fb5388df6c085] <==
	I0731 18:23:06.718679       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0731 18:23:36.242416       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0731 18:23:36.727478       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0731 18:23:55.086119       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-569cc877fc" duration="282.286µs"
	E0731 18:24:06.249333       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0731 18:24:06.735414       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0731 18:24:07.079927       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-569cc877fc" duration="87.537µs"
	E0731 18:24:36.254666       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0731 18:24:36.743236       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0731 18:25:06.260441       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0731 18:25:06.750640       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0731 18:25:36.266267       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0731 18:25:36.759068       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0731 18:26:06.271878       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0731 18:26:06.766725       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0731 18:26:36.277139       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0731 18:26:36.775388       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0731 18:27:06.282885       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0731 18:27:06.782887       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0731 18:27:36.287969       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0731 18:27:36.790936       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0731 18:28:06.293896       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0731 18:28:06.799777       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0731 18:28:36.299475       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0731 18:28:36.807037       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	
	==> kube-proxy [2f8486d598e4f1a0155d2efcf42b5d5b6189672809432dddfbf300f911645053] <==
	I0731 18:12:37.898727       1 server_linux.go:69] "Using iptables proxy"
	I0731 18:12:37.923719       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.72.197"]
	I0731 18:12:38.498328       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0731 18:12:38.498701       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0731 18:12:38.501250       1 server_linux.go:165] "Using iptables Proxier"
	I0731 18:12:38.523587       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0731 18:12:38.523882       1 server.go:872] "Version info" version="v1.30.3"
	I0731 18:12:38.524119       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0731 18:12:38.525420       1 config.go:192] "Starting service config controller"
	I0731 18:12:38.525766       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0731 18:12:38.525899       1 config.go:101] "Starting endpoint slice config controller"
	I0731 18:12:38.527439       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0731 18:12:38.528261       1 config.go:319] "Starting node config controller"
	I0731 18:12:38.528382       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0731 18:12:38.627200       1 shared_informer.go:320] Caches are synced for service config
	I0731 18:12:38.629898       1 shared_informer.go:320] Caches are synced for node config
	I0731 18:12:38.630700       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [a91f54ad2f9d99bd43cf0a559b3727f82d3316da1a4f6b0bcae0d421af842b1c] <==
	W0731 18:12:20.705737       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0731 18:12:20.705782       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0731 18:12:20.705811       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0731 18:12:20.705842       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0731 18:12:20.706011       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0731 18:12:20.706061       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0731 18:12:20.706253       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0731 18:12:20.706303       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0731 18:12:21.524350       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0731 18:12:21.524479       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0731 18:12:21.605945       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0731 18:12:21.605999       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0731 18:12:21.657332       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0731 18:12:21.657427       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0731 18:12:21.673778       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0731 18:12:21.673831       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0731 18:12:21.752045       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0731 18:12:21.752213       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0731 18:12:21.770817       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0731 18:12:21.770911       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0731 18:12:21.796357       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0731 18:12:21.796473       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0731 18:12:21.927894       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0731 18:12:21.927941       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	I0731 18:12:23.990915       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Jul 31 18:26:25 default-k8s-diff-port-094310 kubelet[3896]: E0731 18:26:25.068833    3896 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-mskwc" podUID="c57990b4-1d91-4764-9c33-2fd5f7d2f83b"
	Jul 31 18:26:36 default-k8s-diff-port-094310 kubelet[3896]: E0731 18:26:36.063204    3896 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-mskwc" podUID="c57990b4-1d91-4764-9c33-2fd5f7d2f83b"
	Jul 31 18:26:49 default-k8s-diff-port-094310 kubelet[3896]: E0731 18:26:49.066992    3896 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-mskwc" podUID="c57990b4-1d91-4764-9c33-2fd5f7d2f83b"
	Jul 31 18:27:01 default-k8s-diff-port-094310 kubelet[3896]: E0731 18:27:01.066270    3896 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-mskwc" podUID="c57990b4-1d91-4764-9c33-2fd5f7d2f83b"
	Jul 31 18:27:14 default-k8s-diff-port-094310 kubelet[3896]: E0731 18:27:14.064065    3896 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-mskwc" podUID="c57990b4-1d91-4764-9c33-2fd5f7d2f83b"
	Jul 31 18:27:23 default-k8s-diff-port-094310 kubelet[3896]: E0731 18:27:23.090465    3896 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 31 18:27:23 default-k8s-diff-port-094310 kubelet[3896]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 31 18:27:23 default-k8s-diff-port-094310 kubelet[3896]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 31 18:27:23 default-k8s-diff-port-094310 kubelet[3896]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 31 18:27:23 default-k8s-diff-port-094310 kubelet[3896]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 31 18:27:27 default-k8s-diff-port-094310 kubelet[3896]: E0731 18:27:27.068595    3896 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-mskwc" podUID="c57990b4-1d91-4764-9c33-2fd5f7d2f83b"
	Jul 31 18:27:38 default-k8s-diff-port-094310 kubelet[3896]: E0731 18:27:38.063975    3896 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-mskwc" podUID="c57990b4-1d91-4764-9c33-2fd5f7d2f83b"
	Jul 31 18:27:51 default-k8s-diff-port-094310 kubelet[3896]: E0731 18:27:51.065065    3896 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-mskwc" podUID="c57990b4-1d91-4764-9c33-2fd5f7d2f83b"
	Jul 31 18:28:05 default-k8s-diff-port-094310 kubelet[3896]: E0731 18:28:05.065977    3896 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-mskwc" podUID="c57990b4-1d91-4764-9c33-2fd5f7d2f83b"
	Jul 31 18:28:20 default-k8s-diff-port-094310 kubelet[3896]: E0731 18:28:20.064218    3896 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-mskwc" podUID="c57990b4-1d91-4764-9c33-2fd5f7d2f83b"
	Jul 31 18:28:23 default-k8s-diff-port-094310 kubelet[3896]: E0731 18:28:23.088502    3896 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 31 18:28:23 default-k8s-diff-port-094310 kubelet[3896]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 31 18:28:23 default-k8s-diff-port-094310 kubelet[3896]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 31 18:28:23 default-k8s-diff-port-094310 kubelet[3896]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 31 18:28:23 default-k8s-diff-port-094310 kubelet[3896]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 31 18:28:32 default-k8s-diff-port-094310 kubelet[3896]: E0731 18:28:32.063830    3896 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-mskwc" podUID="c57990b4-1d91-4764-9c33-2fd5f7d2f83b"
	Jul 31 18:28:45 default-k8s-diff-port-094310 kubelet[3896]: E0731 18:28:45.084577    3896 remote_image.go:180] "PullImage from image service failed" err="rpc error: code = Unknown desc = pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host" image="fake.domain/registry.k8s.io/echoserver:1.4"
	Jul 31 18:28:45 default-k8s-diff-port-094310 kubelet[3896]: E0731 18:28:45.085020    3896 kuberuntime_image.go:55] "Failed to pull image" err="pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host" image="fake.domain/registry.k8s.io/echoserver:1.4"
	Jul 31 18:28:45 default-k8s-diff-port-094310 kubelet[3896]: E0731 18:28:45.085529    3896 kuberuntime_manager.go:1256] container &Container{Name:metrics-server,Image:fake.domain/registry.k8s.io/echoserver:1.4,Command:[],Args:[--cert-dir=/tmp --secure-port=4443 --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --kubelet-use-node-status-port --metric-resolution=60s --kubelet-insecure-tls],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:4443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{100 -3} {<nil>} 100m DecimalSI},memory: {{209715200 0} {<nil>}  BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tmp-dir,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-tbg62,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/livez,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod metrics-server-569cc877fc-mskwc_kube-system(c57990b4-1d91-4764-9c33-2fd5f7d2f83b): ErrImagePull: pinging container registry fake.domain: Get "https://fake.domain/v2/": dial tcp: lookup fake.domain: no such host
	Jul 31 18:28:45 default-k8s-diff-port-094310 kubelet[3896]: E0731 18:28:45.085726    3896 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ErrImagePull: \"pinging container registry fake.domain: Get \\\"https://fake.domain/v2/\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-569cc877fc-mskwc" podUID="c57990b4-1d91-4764-9c33-2fd5f7d2f83b"
	
	
	==> storage-provisioner [8867449b4946a6b46cb5df1ec153fc2d52afab53d8d19dc9fc5d2d8685f5c162] <==
	I0731 18:12:38.647075       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0731 18:12:38.667448       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0731 18:12:38.667515       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0731 18:12:38.688429       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0731 18:12:38.688714       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-094310_3fb7820d-59af-45e0-9f2c-6a4c796e2267!
	I0731 18:12:38.696716       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"44438c96-eb0c-4bdb-b67c-216ba6e640fa", APIVersion:"v1", ResourceVersion:"404", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-diff-port-094310_3fb7820d-59af-45e0-9f2c-6a4c796e2267 became leader
	I0731 18:12:38.790297       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-094310_3fb7820d-59af-45e0-9f2c-6a4c796e2267!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-094310 -n default-k8s-diff-port-094310
helpers_test.go:261: (dbg) Run:  kubectl --context default-k8s-diff-port-094310 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-569cc877fc-mskwc
helpers_test.go:274: ======> post-mortem[TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context default-k8s-diff-port-094310 describe pod metrics-server-569cc877fc-mskwc
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-094310 describe pod metrics-server-569cc877fc-mskwc: exit status 1 (74.265774ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-569cc877fc-mskwc" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context default-k8s-diff-port-094310 describe pod metrics-server-569cc877fc-mskwc: exit status 1
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (430.42s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (378.22s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
start_stop_delete_test.go:287: ***** TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:287: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-436067 -n embed-certs-436067
start_stop_delete_test.go:287: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: showing logs for failed pods as of 2024-07-31 18:28:16.942541963 +0000 UTC m=+6500.649280145
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-436067 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context embed-certs-436067 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: context deadline exceeded (2.049µs)
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context embed-certs-436067 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": context deadline exceeded
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-436067 -n embed-certs-436067
helpers_test.go:244: <<< TestStartStop/group/embed-certs/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/embed-certs/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-436067 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p embed-certs-436067 logs -n 25: (2.241617403s)
helpers_test.go:252: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p enable-default-cni-985288                           | enable-default-cni-985288    | jenkins | v1.33.1 | 31 Jul 24 17:58 UTC | 31 Jul 24 17:58 UTC |
	|         | sudo systemctl cat crio                                |                              |         |         |                     |                     |
	|         | --no-pager                                             |                              |         |         |                     |                     |
	| ssh     | -p enable-default-cni-985288                           | enable-default-cni-985288    | jenkins | v1.33.1 | 31 Jul 24 17:58 UTC | 31 Jul 24 17:58 UTC |
	|         | sudo find /etc/crio -type f                            |                              |         |         |                     |                     |
	|         | -exec sh -c 'echo {}; cat {}'                          |                              |         |         |                     |                     |
	|         | \;                                                     |                              |         |         |                     |                     |
	| ssh     | -p enable-default-cni-985288                           | enable-default-cni-985288    | jenkins | v1.33.1 | 31 Jul 24 17:58 UTC | 31 Jul 24 17:58 UTC |
	|         | sudo crio config                                       |                              |         |         |                     |                     |
	| delete  | -p enable-default-cni-985288                           | enable-default-cni-985288    | jenkins | v1.33.1 | 31 Jul 24 17:58 UTC | 31 Jul 24 17:58 UTC |
	| delete  | -p                                                     | disable-driver-mounts-280161 | jenkins | v1.33.1 | 31 Jul 24 17:58 UTC | 31 Jul 24 17:58 UTC |
	|         | disable-driver-mounts-280161                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-094310 | jenkins | v1.33.1 | 31 Jul 24 17:58 UTC | 31 Jul 24 18:00 UTC |
	|         | default-k8s-diff-port-094310                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-673754             | no-preload-673754            | jenkins | v1.33.1 | 31 Jul 24 17:59 UTC | 31 Jul 24 17:59 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-673754                                   | no-preload-673754            | jenkins | v1.33.1 | 31 Jul 24 17:59 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-094310  | default-k8s-diff-port-094310 | jenkins | v1.33.1 | 31 Jul 24 18:00 UTC | 31 Jul 24 18:00 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-094310 | jenkins | v1.33.1 | 31 Jul 24 18:00 UTC |                     |
	|         | default-k8s-diff-port-094310                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-436067            | embed-certs-436067           | jenkins | v1.33.1 | 31 Jul 24 18:00 UTC | 31 Jul 24 18:00 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-436067                                  | embed-certs-436067           | jenkins | v1.33.1 | 31 Jul 24 18:00 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-276459        | old-k8s-version-276459       | jenkins | v1.33.1 | 31 Jul 24 18:02 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-673754                  | no-preload-673754            | jenkins | v1.33.1 | 31 Jul 24 18:02 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-673754 --memory=2200                     | no-preload-673754            | jenkins | v1.33.1 | 31 Jul 24 18:02 UTC | 31 Jul 24 18:13 UTC |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0-beta.0                    |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-094310       | default-k8s-diff-port-094310 | jenkins | v1.33.1 | 31 Jul 24 18:02 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-436067                 | embed-certs-436067           | jenkins | v1.33.1 | 31 Jul 24 18:02 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-094310 | jenkins | v1.33.1 | 31 Jul 24 18:02 UTC | 31 Jul 24 18:12 UTC |
	|         | default-k8s-diff-port-094310                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3                           |                              |         |         |                     |                     |
	| start   | -p embed-certs-436067                                  | embed-certs-436067           | jenkins | v1.33.1 | 31 Jul 24 18:02 UTC | 31 Jul 24 18:12 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3                           |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-276459                              | old-k8s-version-276459       | jenkins | v1.33.1 | 31 Jul 24 18:03 UTC | 31 Jul 24 18:03 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-276459             | old-k8s-version-276459       | jenkins | v1.33.1 | 31 Jul 24 18:03 UTC | 31 Jul 24 18:03 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-276459                              | old-k8s-version-276459       | jenkins | v1.33.1 | 31 Jul 24 18:03 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	| delete  | -p old-k8s-version-276459                              | old-k8s-version-276459       | jenkins | v1.33.1 | 31 Jul 24 18:27 UTC | 31 Jul 24 18:27 UTC |
	| start   | -p newest-cni-094683 --memory=2200 --alsologtostderr   | newest-cni-094683            | jenkins | v1.33.1 | 31 Jul 24 18:27 UTC |                     |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0-beta.0                    |                              |         |         |                     |                     |
	| delete  | -p no-preload-673754                                   | no-preload-673754            | jenkins | v1.33.1 | 31 Jul 24 18:27 UTC | 31 Jul 24 18:27 UTC |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/31 18:27:43
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0731 18:27:43.485752   80933 out.go:291] Setting OutFile to fd 1 ...
	I0731 18:27:43.485989   80933 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 18:27:43.486000   80933 out.go:304] Setting ErrFile to fd 2...
	I0731 18:27:43.486004   80933 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 18:27:43.486171   80933 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19349-8084/.minikube/bin
	I0731 18:27:43.486746   80933 out.go:298] Setting JSON to false
	I0731 18:27:43.487705   80933 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":7807,"bootTime":1722442656,"procs":206,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1065-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0731 18:27:43.487759   80933 start.go:139] virtualization: kvm guest
	I0731 18:27:43.490097   80933 out.go:177] * [newest-cni-094683] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0731 18:27:43.491485   80933 notify.go:220] Checking for updates...
	I0731 18:27:43.491526   80933 out.go:177]   - MINIKUBE_LOCATION=19349
	I0731 18:27:43.493132   80933 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0731 18:27:43.494544   80933 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19349-8084/kubeconfig
	I0731 18:27:43.495802   80933 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19349-8084/.minikube
	I0731 18:27:43.497173   80933 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0731 18:27:43.498529   80933 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0731 18:27:43.500299   80933 config.go:182] Loaded profile config "default-k8s-diff-port-094310": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0731 18:27:43.500407   80933 config.go:182] Loaded profile config "embed-certs-436067": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0731 18:27:43.500515   80933 config.go:182] Loaded profile config "no-preload-673754": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0-beta.0
	I0731 18:27:43.500588   80933 driver.go:392] Setting default libvirt URI to qemu:///system
	I0731 18:27:43.536849   80933 out.go:177] * Using the kvm2 driver based on user configuration
	I0731 18:27:43.538259   80933 start.go:297] selected driver: kvm2
	I0731 18:27:43.538269   80933 start.go:901] validating driver "kvm2" against <nil>
	I0731 18:27:43.538279   80933 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0731 18:27:43.538928   80933 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0731 18:27:43.538988   80933 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19349-8084/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0731 18:27:43.554753   80933 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0731 18:27:43.554813   80933 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	W0731 18:27:43.554837   80933 out.go:239] ! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	I0731 18:27:43.555096   80933 start_flags.go:966] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0731 18:27:43.555198   80933 cni.go:84] Creating CNI manager for ""
	I0731 18:27:43.555217   80933 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0731 18:27:43.555231   80933 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0731 18:27:43.555335   80933 start.go:340] cluster config:
	{Name:newest-cni-094683 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0-beta.0 ClusterName:newest-cni-094683 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0731 18:27:43.555443   80933 iso.go:125] acquiring lock: {Name:mk4494bb5a63902bf12a909c3109db85229bc516 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0731 18:27:43.557064   80933 out.go:177] * Starting "newest-cni-094683" primary control-plane node in "newest-cni-094683" cluster
	I0731 18:27:43.558183   80933 preload.go:131] Checking if preload exists for k8s version v1.31.0-beta.0 and runtime crio
	I0731 18:27:43.558218   80933 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19349-8084/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-beta.0-cri-o-overlay-amd64.tar.lz4
	I0731 18:27:43.558227   80933 cache.go:56] Caching tarball of preloaded images
	I0731 18:27:43.558320   80933 preload.go:172] Found /home/jenkins/minikube-integration/19349-8084/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-beta.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0731 18:27:43.558334   80933 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0-beta.0 on crio
	I0731 18:27:43.558435   80933 profile.go:143] Saving config to /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/newest-cni-094683/config.json ...
	I0731 18:27:43.558464   80933 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/newest-cni-094683/config.json: {Name:mkee846a2f2bebb9197f1ea334d63f1371a10147 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 18:27:43.558612   80933 start.go:360] acquireMachinesLock for newest-cni-094683: {Name:mkd78eacfab7d6c3058c6674434b3d889ec957e0 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0731 18:27:43.558645   80933 start.go:364] duration metric: took 17.977µs to acquireMachinesLock for "newest-cni-094683"
	I0731 18:27:43.558667   80933 start.go:93] Provisioning new machine with config: &{Name:newest-cni-094683 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0-beta.0 ClusterName:newest-cni-094683 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0731 18:27:43.558753   80933 start.go:125] createHost starting for "" (driver="kvm2")
	I0731 18:27:43.560466   80933 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0731 18:27:43.560645   80933 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 18:27:43.560694   80933 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 18:27:43.576281   80933 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46655
	I0731 18:27:43.576698   80933 main.go:141] libmachine: () Calling .GetVersion
	I0731 18:27:43.577257   80933 main.go:141] libmachine: Using API Version  1
	I0731 18:27:43.577278   80933 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 18:27:43.577709   80933 main.go:141] libmachine: () Calling .GetMachineName
	I0731 18:27:43.578031   80933 main.go:141] libmachine: (newest-cni-094683) Calling .GetMachineName
	I0731 18:27:43.578225   80933 main.go:141] libmachine: (newest-cni-094683) Calling .DriverName
	I0731 18:27:43.578390   80933 start.go:159] libmachine.API.Create for "newest-cni-094683" (driver="kvm2")
	I0731 18:27:43.578418   80933 client.go:168] LocalClient.Create starting
	I0731 18:27:43.578456   80933 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19349-8084/.minikube/certs/ca.pem
	I0731 18:27:43.578493   80933 main.go:141] libmachine: Decoding PEM data...
	I0731 18:27:43.578515   80933 main.go:141] libmachine: Parsing certificate...
	I0731 18:27:43.578598   80933 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19349-8084/.minikube/certs/cert.pem
	I0731 18:27:43.578626   80933 main.go:141] libmachine: Decoding PEM data...
	I0731 18:27:43.578646   80933 main.go:141] libmachine: Parsing certificate...
	I0731 18:27:43.578670   80933 main.go:141] libmachine: Running pre-create checks...
	I0731 18:27:43.578693   80933 main.go:141] libmachine: (newest-cni-094683) Calling .PreCreateCheck
	I0731 18:27:43.579091   80933 main.go:141] libmachine: (newest-cni-094683) Calling .GetConfigRaw
	I0731 18:27:43.579469   80933 main.go:141] libmachine: Creating machine...
	I0731 18:27:43.579482   80933 main.go:141] libmachine: (newest-cni-094683) Calling .Create
	I0731 18:27:43.579635   80933 main.go:141] libmachine: (newest-cni-094683) Creating KVM machine...
	I0731 18:27:43.580931   80933 main.go:141] libmachine: (newest-cni-094683) DBG | found existing default KVM network
	I0731 18:27:43.582343   80933 main.go:141] libmachine: (newest-cni-094683) DBG | I0731 18:27:43.582169   80956 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00012dfa0}
	I0731 18:27:43.582362   80933 main.go:141] libmachine: (newest-cni-094683) DBG | created network xml: 
	I0731 18:27:43.582371   80933 main.go:141] libmachine: (newest-cni-094683) DBG | <network>
	I0731 18:27:43.582377   80933 main.go:141] libmachine: (newest-cni-094683) DBG |   <name>mk-newest-cni-094683</name>
	I0731 18:27:43.582383   80933 main.go:141] libmachine: (newest-cni-094683) DBG |   <dns enable='no'/>
	I0731 18:27:43.582392   80933 main.go:141] libmachine: (newest-cni-094683) DBG |   
	I0731 18:27:43.582399   80933 main.go:141] libmachine: (newest-cni-094683) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0731 18:27:43.582408   80933 main.go:141] libmachine: (newest-cni-094683) DBG |     <dhcp>
	I0731 18:27:43.582417   80933 main.go:141] libmachine: (newest-cni-094683) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0731 18:27:43.582425   80933 main.go:141] libmachine: (newest-cni-094683) DBG |     </dhcp>
	I0731 18:27:43.582457   80933 main.go:141] libmachine: (newest-cni-094683) DBG |   </ip>
	I0731 18:27:43.582480   80933 main.go:141] libmachine: (newest-cni-094683) DBG |   
	I0731 18:27:43.582493   80933 main.go:141] libmachine: (newest-cni-094683) DBG | </network>
	I0731 18:27:43.582508   80933 main.go:141] libmachine: (newest-cni-094683) DBG | 
	I0731 18:27:43.587980   80933 main.go:141] libmachine: (newest-cni-094683) DBG | trying to create private KVM network mk-newest-cni-094683 192.168.39.0/24...
	I0731 18:27:43.660320   80933 main.go:141] libmachine: (newest-cni-094683) DBG | private KVM network mk-newest-cni-094683 192.168.39.0/24 created
	I0731 18:27:43.660401   80933 main.go:141] libmachine: (newest-cni-094683) DBG | I0731 18:27:43.660292   80956 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19349-8084/.minikube
	I0731 18:27:43.660423   80933 main.go:141] libmachine: (newest-cni-094683) Setting up store path in /home/jenkins/minikube-integration/19349-8084/.minikube/machines/newest-cni-094683 ...
	I0731 18:27:43.660442   80933 main.go:141] libmachine: (newest-cni-094683) Building disk image from file:///home/jenkins/minikube-integration/19349-8084/.minikube/cache/iso/amd64/minikube-v1.33.1-1722248113-19339-amd64.iso
	I0731 18:27:43.660538   80933 main.go:141] libmachine: (newest-cni-094683) Downloading /home/jenkins/minikube-integration/19349-8084/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19349-8084/.minikube/cache/iso/amd64/minikube-v1.33.1-1722248113-19339-amd64.iso...
	I0731 18:27:43.910473   80933 main.go:141] libmachine: (newest-cni-094683) DBG | I0731 18:27:43.910315   80956 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19349-8084/.minikube/machines/newest-cni-094683/id_rsa...
	I0731 18:27:44.025657   80933 main.go:141] libmachine: (newest-cni-094683) DBG | I0731 18:27:44.025505   80956 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19349-8084/.minikube/machines/newest-cni-094683/newest-cni-094683.rawdisk...
	I0731 18:27:44.025707   80933 main.go:141] libmachine: (newest-cni-094683) DBG | Writing magic tar header
	I0731 18:27:44.025726   80933 main.go:141] libmachine: (newest-cni-094683) DBG | Writing SSH key tar header
	I0731 18:27:44.025739   80933 main.go:141] libmachine: (newest-cni-094683) DBG | I0731 18:27:44.025660   80956 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19349-8084/.minikube/machines/newest-cni-094683 ...
	I0731 18:27:44.025847   80933 main.go:141] libmachine: (newest-cni-094683) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19349-8084/.minikube/machines/newest-cni-094683
	I0731 18:27:44.025885   80933 main.go:141] libmachine: (newest-cni-094683) Setting executable bit set on /home/jenkins/minikube-integration/19349-8084/.minikube/machines/newest-cni-094683 (perms=drwx------)
	I0731 18:27:44.025903   80933 main.go:141] libmachine: (newest-cni-094683) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19349-8084/.minikube/machines
	I0731 18:27:44.025925   80933 main.go:141] libmachine: (newest-cni-094683) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19349-8084/.minikube
	I0731 18:27:44.025938   80933 main.go:141] libmachine: (newest-cni-094683) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19349-8084
	I0731 18:27:44.025953   80933 main.go:141] libmachine: (newest-cni-094683) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0731 18:27:44.025972   80933 main.go:141] libmachine: (newest-cni-094683) DBG | Checking permissions on dir: /home/jenkins
	I0731 18:27:44.025988   80933 main.go:141] libmachine: (newest-cni-094683) Setting executable bit set on /home/jenkins/minikube-integration/19349-8084/.minikube/machines (perms=drwxr-xr-x)
	I0731 18:27:44.026002   80933 main.go:141] libmachine: (newest-cni-094683) Setting executable bit set on /home/jenkins/minikube-integration/19349-8084/.minikube (perms=drwxr-xr-x)
	I0731 18:27:44.026016   80933 main.go:141] libmachine: (newest-cni-094683) Setting executable bit set on /home/jenkins/minikube-integration/19349-8084 (perms=drwxrwxr-x)
	I0731 18:27:44.026031   80933 main.go:141] libmachine: (newest-cni-094683) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0731 18:27:44.026045   80933 main.go:141] libmachine: (newest-cni-094683) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0731 18:27:44.026057   80933 main.go:141] libmachine: (newest-cni-094683) DBG | Checking permissions on dir: /home
	I0731 18:27:44.026074   80933 main.go:141] libmachine: (newest-cni-094683) DBG | Skipping /home - not owner
	I0731 18:27:44.026091   80933 main.go:141] libmachine: (newest-cni-094683) Creating domain...
	I0731 18:27:44.027250   80933 main.go:141] libmachine: (newest-cni-094683) define libvirt domain using xml: 
	I0731 18:27:44.027276   80933 main.go:141] libmachine: (newest-cni-094683) <domain type='kvm'>
	I0731 18:27:44.027287   80933 main.go:141] libmachine: (newest-cni-094683)   <name>newest-cni-094683</name>
	I0731 18:27:44.027295   80933 main.go:141] libmachine: (newest-cni-094683)   <memory unit='MiB'>2200</memory>
	I0731 18:27:44.027309   80933 main.go:141] libmachine: (newest-cni-094683)   <vcpu>2</vcpu>
	I0731 18:27:44.027327   80933 main.go:141] libmachine: (newest-cni-094683)   <features>
	I0731 18:27:44.027336   80933 main.go:141] libmachine: (newest-cni-094683)     <acpi/>
	I0731 18:27:44.027344   80933 main.go:141] libmachine: (newest-cni-094683)     <apic/>
	I0731 18:27:44.027353   80933 main.go:141] libmachine: (newest-cni-094683)     <pae/>
	I0731 18:27:44.027362   80933 main.go:141] libmachine: (newest-cni-094683)     
	I0731 18:27:44.027368   80933 main.go:141] libmachine: (newest-cni-094683)   </features>
	I0731 18:27:44.027380   80933 main.go:141] libmachine: (newest-cni-094683)   <cpu mode='host-passthrough'>
	I0731 18:27:44.027388   80933 main.go:141] libmachine: (newest-cni-094683)   
	I0731 18:27:44.027392   80933 main.go:141] libmachine: (newest-cni-094683)   </cpu>
	I0731 18:27:44.027400   80933 main.go:141] libmachine: (newest-cni-094683)   <os>
	I0731 18:27:44.027405   80933 main.go:141] libmachine: (newest-cni-094683)     <type>hvm</type>
	I0731 18:27:44.027410   80933 main.go:141] libmachine: (newest-cni-094683)     <boot dev='cdrom'/>
	I0731 18:27:44.027415   80933 main.go:141] libmachine: (newest-cni-094683)     <boot dev='hd'/>
	I0731 18:27:44.027421   80933 main.go:141] libmachine: (newest-cni-094683)     <bootmenu enable='no'/>
	I0731 18:27:44.027425   80933 main.go:141] libmachine: (newest-cni-094683)   </os>
	I0731 18:27:44.027430   80933 main.go:141] libmachine: (newest-cni-094683)   <devices>
	I0731 18:27:44.027438   80933 main.go:141] libmachine: (newest-cni-094683)     <disk type='file' device='cdrom'>
	I0731 18:27:44.027456   80933 main.go:141] libmachine: (newest-cni-094683)       <source file='/home/jenkins/minikube-integration/19349-8084/.minikube/machines/newest-cni-094683/boot2docker.iso'/>
	I0731 18:27:44.027463   80933 main.go:141] libmachine: (newest-cni-094683)       <target dev='hdc' bus='scsi'/>
	I0731 18:27:44.027502   80933 main.go:141] libmachine: (newest-cni-094683)       <readonly/>
	I0731 18:27:44.027524   80933 main.go:141] libmachine: (newest-cni-094683)     </disk>
	I0731 18:27:44.027536   80933 main.go:141] libmachine: (newest-cni-094683)     <disk type='file' device='disk'>
	I0731 18:27:44.027549   80933 main.go:141] libmachine: (newest-cni-094683)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0731 18:27:44.027564   80933 main.go:141] libmachine: (newest-cni-094683)       <source file='/home/jenkins/minikube-integration/19349-8084/.minikube/machines/newest-cni-094683/newest-cni-094683.rawdisk'/>
	I0731 18:27:44.027575   80933 main.go:141] libmachine: (newest-cni-094683)       <target dev='hda' bus='virtio'/>
	I0731 18:27:44.027586   80933 main.go:141] libmachine: (newest-cni-094683)     </disk>
	I0731 18:27:44.027606   80933 main.go:141] libmachine: (newest-cni-094683)     <interface type='network'>
	I0731 18:27:44.027619   80933 main.go:141] libmachine: (newest-cni-094683)       <source network='mk-newest-cni-094683'/>
	I0731 18:27:44.027628   80933 main.go:141] libmachine: (newest-cni-094683)       <model type='virtio'/>
	I0731 18:27:44.027636   80933 main.go:141] libmachine: (newest-cni-094683)     </interface>
	I0731 18:27:44.027647   80933 main.go:141] libmachine: (newest-cni-094683)     <interface type='network'>
	I0731 18:27:44.027659   80933 main.go:141] libmachine: (newest-cni-094683)       <source network='default'/>
	I0731 18:27:44.027669   80933 main.go:141] libmachine: (newest-cni-094683)       <model type='virtio'/>
	I0731 18:27:44.027696   80933 main.go:141] libmachine: (newest-cni-094683)     </interface>
	I0731 18:27:44.027724   80933 main.go:141] libmachine: (newest-cni-094683)     <serial type='pty'>
	I0731 18:27:44.027739   80933 main.go:141] libmachine: (newest-cni-094683)       <target port='0'/>
	I0731 18:27:44.027750   80933 main.go:141] libmachine: (newest-cni-094683)     </serial>
	I0731 18:27:44.027762   80933 main.go:141] libmachine: (newest-cni-094683)     <console type='pty'>
	I0731 18:27:44.027776   80933 main.go:141] libmachine: (newest-cni-094683)       <target type='serial' port='0'/>
	I0731 18:27:44.027791   80933 main.go:141] libmachine: (newest-cni-094683)     </console>
	I0731 18:27:44.027802   80933 main.go:141] libmachine: (newest-cni-094683)     <rng model='virtio'>
	I0731 18:27:44.027818   80933 main.go:141] libmachine: (newest-cni-094683)       <backend model='random'>/dev/random</backend>
	I0731 18:27:44.027830   80933 main.go:141] libmachine: (newest-cni-094683)     </rng>
	I0731 18:27:44.027843   80933 main.go:141] libmachine: (newest-cni-094683)     
	I0731 18:27:44.027865   80933 main.go:141] libmachine: (newest-cni-094683)     
	I0731 18:27:44.027878   80933 main.go:141] libmachine: (newest-cni-094683)   </devices>
	I0731 18:27:44.027889   80933 main.go:141] libmachine: (newest-cni-094683) </domain>
	I0731 18:27:44.027902   80933 main.go:141] libmachine: (newest-cni-094683) 
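The domain XML above is what the kvm2 driver hands to libvirt: a 2-vCPU, 2200 MiB guest that boots the minikube ISO from a SCSI cdrom, uses the raw disk image as its root device, and attaches one virtio NIC to the private mk-newest-cni-094683 network and one to libvirt's default network. The same definition could be registered and booted by hand with virsh; the sketch below does that via os/exec under assumed paths, purely as an illustration, not the driver's actual code path (which talks to libvirt directly).

package main

import (
	"fmt"
	"log"
	"os/exec"
)

// defineAndStart registers a domain XML with libvirt and boots it, the manual
// equivalent of the "Creating domain..." step above. xmlPath is an assumed
// location for the XML printed in the log.
func defineAndStart(xmlPath, name string) error {
	if out, err := exec.Command("virsh", "define", xmlPath).CombinedOutput(); err != nil {
		return fmt.Errorf("virsh define: %v: %s", err, out)
	}
	if out, err := exec.Command("virsh", "start", name).CombinedOutput(); err != nil {
		return fmt.Errorf("virsh start: %v: %s", err, out)
	}
	return nil
}

func main() {
	// Both arguments are placeholders for illustration.
	if err := defineAndStart("/tmp/newest-cni-094683.xml", "newest-cni-094683"); err != nil {
		log.Fatal(err)
	}
}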
	I0731 18:27:44.032607   80933 main.go:141] libmachine: (newest-cni-094683) DBG | domain newest-cni-094683 has defined MAC address 52:54:00:ee:00:0d in network default
	I0731 18:27:44.033223   80933 main.go:141] libmachine: (newest-cni-094683) Ensuring networks are active...
	I0731 18:27:44.033248   80933 main.go:141] libmachine: (newest-cni-094683) DBG | domain newest-cni-094683 has defined MAC address 52:54:00:ba:97:e4 in network mk-newest-cni-094683
	I0731 18:27:44.034139   80933 main.go:141] libmachine: (newest-cni-094683) Ensuring network default is active
	I0731 18:27:44.034468   80933 main.go:141] libmachine: (newest-cni-094683) Ensuring network mk-newest-cni-094683 is active
	I0731 18:27:44.034924   80933 main.go:141] libmachine: (newest-cni-094683) Getting domain xml...
	I0731 18:27:44.035697   80933 main.go:141] libmachine: (newest-cni-094683) Creating domain...
	I0731 18:27:45.317910   80933 main.go:141] libmachine: (newest-cni-094683) Waiting to get IP...
	I0731 18:27:45.318942   80933 main.go:141] libmachine: (newest-cni-094683) DBG | domain newest-cni-094683 has defined MAC address 52:54:00:ba:97:e4 in network mk-newest-cni-094683
	I0731 18:27:45.319502   80933 main.go:141] libmachine: (newest-cni-094683) DBG | unable to find current IP address of domain newest-cni-094683 in network mk-newest-cni-094683
	I0731 18:27:45.319532   80933 main.go:141] libmachine: (newest-cni-094683) DBG | I0731 18:27:45.319476   80956 retry.go:31] will retry after 255.191833ms: waiting for machine to come up
	I0731 18:27:45.575958   80933 main.go:141] libmachine: (newest-cni-094683) DBG | domain newest-cni-094683 has defined MAC address 52:54:00:ba:97:e4 in network mk-newest-cni-094683
	I0731 18:27:45.576663   80933 main.go:141] libmachine: (newest-cni-094683) DBG | unable to find current IP address of domain newest-cni-094683 in network mk-newest-cni-094683
	I0731 18:27:45.576689   80933 main.go:141] libmachine: (newest-cni-094683) DBG | I0731 18:27:45.576592   80956 retry.go:31] will retry after 334.257459ms: waiting for machine to come up
	I0731 18:27:45.912189   80933 main.go:141] libmachine: (newest-cni-094683) DBG | domain newest-cni-094683 has defined MAC address 52:54:00:ba:97:e4 in network mk-newest-cni-094683
	I0731 18:27:45.912690   80933 main.go:141] libmachine: (newest-cni-094683) DBG | unable to find current IP address of domain newest-cni-094683 in network mk-newest-cni-094683
	I0731 18:27:45.912721   80933 main.go:141] libmachine: (newest-cni-094683) DBG | I0731 18:27:45.912664   80956 retry.go:31] will retry after 477.652726ms: waiting for machine to come up
	I0731 18:27:46.392307   80933 main.go:141] libmachine: (newest-cni-094683) DBG | domain newest-cni-094683 has defined MAC address 52:54:00:ba:97:e4 in network mk-newest-cni-094683
	I0731 18:27:46.392829   80933 main.go:141] libmachine: (newest-cni-094683) DBG | unable to find current IP address of domain newest-cni-094683 in network mk-newest-cni-094683
	I0731 18:27:46.392880   80933 main.go:141] libmachine: (newest-cni-094683) DBG | I0731 18:27:46.392772   80956 retry.go:31] will retry after 399.453401ms: waiting for machine to come up
	I0731 18:27:46.793289   80933 main.go:141] libmachine: (newest-cni-094683) DBG | domain newest-cni-094683 has defined MAC address 52:54:00:ba:97:e4 in network mk-newest-cni-094683
	I0731 18:27:46.793751   80933 main.go:141] libmachine: (newest-cni-094683) DBG | unable to find current IP address of domain newest-cni-094683 in network mk-newest-cni-094683
	I0731 18:27:46.793788   80933 main.go:141] libmachine: (newest-cni-094683) DBG | I0731 18:27:46.793695   80956 retry.go:31] will retry after 465.668265ms: waiting for machine to come up
	I0731 18:27:47.262724   80933 main.go:141] libmachine: (newest-cni-094683) DBG | domain newest-cni-094683 has defined MAC address 52:54:00:ba:97:e4 in network mk-newest-cni-094683
	I0731 18:27:47.263130   80933 main.go:141] libmachine: (newest-cni-094683) DBG | unable to find current IP address of domain newest-cni-094683 in network mk-newest-cni-094683
	I0731 18:27:47.263358   80933 main.go:141] libmachine: (newest-cni-094683) DBG | I0731 18:27:47.263075   80956 retry.go:31] will retry after 910.112794ms: waiting for machine to come up
	I0731 18:27:48.174979   80933 main.go:141] libmachine: (newest-cni-094683) DBG | domain newest-cni-094683 has defined MAC address 52:54:00:ba:97:e4 in network mk-newest-cni-094683
	I0731 18:27:48.175441   80933 main.go:141] libmachine: (newest-cni-094683) DBG | unable to find current IP address of domain newest-cni-094683 in network mk-newest-cni-094683
	I0731 18:27:48.175488   80933 main.go:141] libmachine: (newest-cni-094683) DBG | I0731 18:27:48.175404   80956 retry.go:31] will retry after 1.007502944s: waiting for machine to come up
	I0731 18:27:49.184116   80933 main.go:141] libmachine: (newest-cni-094683) DBG | domain newest-cni-094683 has defined MAC address 52:54:00:ba:97:e4 in network mk-newest-cni-094683
	I0731 18:27:49.184550   80933 main.go:141] libmachine: (newest-cni-094683) DBG | unable to find current IP address of domain newest-cni-094683 in network mk-newest-cni-094683
	I0731 18:27:49.184580   80933 main.go:141] libmachine: (newest-cni-094683) DBG | I0731 18:27:49.184507   80956 retry.go:31] will retry after 1.299184929s: waiting for machine to come up
	I0731 18:27:50.485846   80933 main.go:141] libmachine: (newest-cni-094683) DBG | domain newest-cni-094683 has defined MAC address 52:54:00:ba:97:e4 in network mk-newest-cni-094683
	I0731 18:27:50.486249   80933 main.go:141] libmachine: (newest-cni-094683) DBG | unable to find current IP address of domain newest-cni-094683 in network mk-newest-cni-094683
	I0731 18:27:50.486274   80933 main.go:141] libmachine: (newest-cni-094683) DBG | I0731 18:27:50.486206   80956 retry.go:31] will retry after 1.302052298s: waiting for machine to come up
	I0731 18:27:51.789638   80933 main.go:141] libmachine: (newest-cni-094683) DBG | domain newest-cni-094683 has defined MAC address 52:54:00:ba:97:e4 in network mk-newest-cni-094683
	I0731 18:27:51.790238   80933 main.go:141] libmachine: (newest-cni-094683) DBG | unable to find current IP address of domain newest-cni-094683 in network mk-newest-cni-094683
	I0731 18:27:51.790265   80933 main.go:141] libmachine: (newest-cni-094683) DBG | I0731 18:27:51.790188   80956 retry.go:31] will retry after 1.801258526s: waiting for machine to come up
	I0731 18:27:53.595492   80933 main.go:141] libmachine: (newest-cni-094683) DBG | domain newest-cni-094683 has defined MAC address 52:54:00:ba:97:e4 in network mk-newest-cni-094683
	I0731 18:27:53.596027   80933 main.go:141] libmachine: (newest-cni-094683) DBG | unable to find current IP address of domain newest-cni-094683 in network mk-newest-cni-094683
	I0731 18:27:53.596053   80933 main.go:141] libmachine: (newest-cni-094683) DBG | I0731 18:27:53.595971   80956 retry.go:31] will retry after 2.661403176s: waiting for machine to come up
	I0731 18:27:56.260107   80933 main.go:141] libmachine: (newest-cni-094683) DBG | domain newest-cni-094683 has defined MAC address 52:54:00:ba:97:e4 in network mk-newest-cni-094683
	I0731 18:27:56.260531   80933 main.go:141] libmachine: (newest-cni-094683) DBG | unable to find current IP address of domain newest-cni-094683 in network mk-newest-cni-094683
	I0731 18:27:56.260558   80933 main.go:141] libmachine: (newest-cni-094683) DBG | I0731 18:27:56.260478   80956 retry.go:31] will retry after 2.877208126s: waiting for machine to come up
	I0731 18:27:59.139494   80933 main.go:141] libmachine: (newest-cni-094683) DBG | domain newest-cni-094683 has defined MAC address 52:54:00:ba:97:e4 in network mk-newest-cni-094683
	I0731 18:27:59.139845   80933 main.go:141] libmachine: (newest-cni-094683) DBG | unable to find current IP address of domain newest-cni-094683 in network mk-newest-cni-094683
	I0731 18:27:59.139868   80933 main.go:141] libmachine: (newest-cni-094683) DBG | I0731 18:27:59.139821   80956 retry.go:31] will retry after 3.057088151s: waiting for machine to come up
	I0731 18:28:02.200135   80933 main.go:141] libmachine: (newest-cni-094683) DBG | domain newest-cni-094683 has defined MAC address 52:54:00:ba:97:e4 in network mk-newest-cni-094683
	I0731 18:28:02.200580   80933 main.go:141] libmachine: (newest-cni-094683) DBG | unable to find current IP address of domain newest-cni-094683 in network mk-newest-cni-094683
	I0731 18:28:02.200606   80933 main.go:141] libmachine: (newest-cni-094683) DBG | I0731 18:28:02.200539   80956 retry.go:31] will retry after 5.261972519s: waiting for machine to come up
	I0731 18:28:07.466157   80933 main.go:141] libmachine: (newest-cni-094683) DBG | domain newest-cni-094683 has defined MAC address 52:54:00:ba:97:e4 in network mk-newest-cni-094683
	I0731 18:28:07.466768   80933 main.go:141] libmachine: (newest-cni-094683) Found IP for machine: 192.168.39.71
	I0731 18:28:07.466794   80933 main.go:141] libmachine: (newest-cni-094683) DBG | domain newest-cni-094683 has current primary IP address 192.168.39.71 and MAC address 52:54:00:ba:97:e4 in network mk-newest-cni-094683
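The run of "will retry after ..." lines shows how the driver waits for the guest to obtain a DHCP lease: it repeatedly asks libvirt for an address matching the domain's MAC and sleeps for a growing, jittered interval between attempts until the lease appears. Below is a minimal sketch of that poll-with-backoff loop; the lookup function, delays, and timeout are stand-ins, not minikube's retry package.

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

var errNoLease = errors.New("machine has no DHCP lease yet")

// waitForIP polls lookup until it returns an address or the deadline passes,
// doubling a jittered delay between attempts, much like the retries above.
func waitForIP(lookup func() (string, error), timeout time.Duration) (string, error) {
	deadline := time.Now().Add(timeout)
	delay := 250 * time.Millisecond
	for time.Now().Before(deadline) {
		if ip, err := lookup(); err == nil {
			return ip, nil
		}
		sleep := delay + time.Duration(rand.Int63n(int64(delay)))
		fmt.Printf("will retry after %v: waiting for machine to come up\n", sleep)
		time.Sleep(sleep)
		delay *= 2
	}
	return "", errors.New("timed out waiting for an IP")
}

func main() {
	attempts := 0
	// Stand-in lookup that "finds" an address on the third try.
	lookup := func() (string, error) {
		attempts++
		if attempts < 3 {
			return "", errNoLease
		}
		return "192.168.39.71", nil
	}
	ip, err := waitForIP(lookup, 30*time.Second)
	fmt.Println(ip, err)
}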
	I0731 18:28:07.466800   80933 main.go:141] libmachine: (newest-cni-094683) Reserving static IP address...
	I0731 18:28:07.467140   80933 main.go:141] libmachine: (newest-cni-094683) DBG | unable to find host DHCP lease matching {name: "newest-cni-094683", mac: "52:54:00:ba:97:e4", ip: "192.168.39.71"} in network mk-newest-cni-094683
	I0731 18:28:07.545005   80933 main.go:141] libmachine: (newest-cni-094683) DBG | Getting to WaitForSSH function...
	I0731 18:28:07.545037   80933 main.go:141] libmachine: (newest-cni-094683) Reserved static IP address: 192.168.39.71
	I0731 18:28:07.545064   80933 main.go:141] libmachine: (newest-cni-094683) Waiting for SSH to be available...
	I0731 18:28:07.547819   80933 main.go:141] libmachine: (newest-cni-094683) DBG | domain newest-cni-094683 has defined MAC address 52:54:00:ba:97:e4 in network mk-newest-cni-094683
	I0731 18:28:07.548305   80933 main.go:141] libmachine: (newest-cni-094683) DBG | unable to find host DHCP lease matching {name: "", mac: "52:54:00:ba:97:e4", ip: ""} in network mk-newest-cni-094683
	I0731 18:28:07.548331   80933 main.go:141] libmachine: (newest-cni-094683) DBG | unable to find defined IP address of network mk-newest-cni-094683 interface with MAC address 52:54:00:ba:97:e4
	I0731 18:28:07.548629   80933 main.go:141] libmachine: (newest-cni-094683) DBG | Using SSH client type: external
	I0731 18:28:07.548663   80933 main.go:141] libmachine: (newest-cni-094683) DBG | Using SSH private key: /home/jenkins/minikube-integration/19349-8084/.minikube/machines/newest-cni-094683/id_rsa (-rw-------)
	I0731 18:28:07.548694   80933 main.go:141] libmachine: (newest-cni-094683) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@ -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19349-8084/.minikube/machines/newest-cni-094683/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0731 18:28:07.548711   80933 main.go:141] libmachine: (newest-cni-094683) DBG | About to run SSH command:
	I0731 18:28:07.548749   80933 main.go:141] libmachine: (newest-cni-094683) DBG | exit 0
	I0731 18:28:07.552990   80933 main.go:141] libmachine: (newest-cni-094683) DBG | SSH cmd err, output: exit status 255: 
	I0731 18:28:07.553009   80933 main.go:141] libmachine: (newest-cni-094683) DBG | Error getting ssh command 'exit 0' : ssh command error:
	I0731 18:28:07.553039   80933 main.go:141] libmachine: (newest-cni-094683) DBG | command : exit 0
	I0731 18:28:07.553065   80933 main.go:141] libmachine: (newest-cni-094683) DBG | err     : exit status 255
	I0731 18:28:07.553098   80933 main.go:141] libmachine: (newest-cni-094683) DBG | output  : 
	I0731 18:28:10.554457   80933 main.go:141] libmachine: (newest-cni-094683) DBG | Getting to WaitForSSH function...
	I0731 18:28:10.557089   80933 main.go:141] libmachine: (newest-cni-094683) DBG | domain newest-cni-094683 has defined MAC address 52:54:00:ba:97:e4 in network mk-newest-cni-094683
	I0731 18:28:10.557540   80933 main.go:141] libmachine: (newest-cni-094683) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ba:97:e4", ip: ""} in network mk-newest-cni-094683: {Iface:virbr3 ExpiryTime:2024-07-31 19:27:57 +0000 UTC Type:0 Mac:52:54:00:ba:97:e4 Iaid: IPaddr:192.168.39.71 Prefix:24 Hostname:newest-cni-094683 Clientid:01:52:54:00:ba:97:e4}
	I0731 18:28:10.557570   80933 main.go:141] libmachine: (newest-cni-094683) DBG | domain newest-cni-094683 has defined IP address 192.168.39.71 and MAC address 52:54:00:ba:97:e4 in network mk-newest-cni-094683
	I0731 18:28:10.557811   80933 main.go:141] libmachine: (newest-cni-094683) DBG | Using SSH client type: external
	I0731 18:28:10.557831   80933 main.go:141] libmachine: (newest-cni-094683) DBG | Using SSH private key: /home/jenkins/minikube-integration/19349-8084/.minikube/machines/newest-cni-094683/id_rsa (-rw-------)
	I0731 18:28:10.557860   80933 main.go:141] libmachine: (newest-cni-094683) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.71 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19349-8084/.minikube/machines/newest-cni-094683/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0731 18:28:10.557870   80933 main.go:141] libmachine: (newest-cni-094683) DBG | About to run SSH command:
	I0731 18:28:10.557879   80933 main.go:141] libmachine: (newest-cni-094683) DBG | exit 0
	I0731 18:28:10.679021   80933 main.go:141] libmachine: (newest-cni-094683) DBG | SSH cmd err, output: <nil>: 
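The "exit status 255" above simply means the guest's sshd was not listening on the first probe; the driver keeps running a bare "exit 0" over SSH with the generated key until it succeeds, as it does here. A rough equivalent of that probe, with the key path as a placeholder and options mirroring the logged ssh flags, could look like this:

package main

import (
	"fmt"
	"os/exec"
)

// sshAlive runs "exit 0" on the guest; a nil error from Run means sshd
// accepted the key and the remote command returned status 0.
func sshAlive(host, keyPath string) bool {
	cmd := exec.Command("ssh",
		"-o", "StrictHostKeyChecking=no",
		"-o", "UserKnownHostsFile=/dev/null",
		"-o", "ConnectTimeout=10",
		"-o", "IdentitiesOnly=yes",
		"-i", keyPath,
		"docker@"+host,
		"exit 0")
	return cmd.Run() == nil
}

func main() {
	// Placeholder key path; the address is the one the VM obtained above.
	fmt.Println(sshAlive("192.168.39.71", "/path/to/machines/newest-cni-094683/id_rsa"))
}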
	I0731 18:28:10.679380   80933 main.go:141] libmachine: (newest-cni-094683) KVM machine creation complete!
	I0731 18:28:10.679691   80933 main.go:141] libmachine: (newest-cni-094683) Calling .GetConfigRaw
	I0731 18:28:10.680247   80933 main.go:141] libmachine: (newest-cni-094683) Calling .DriverName
	I0731 18:28:10.680443   80933 main.go:141] libmachine: (newest-cni-094683) Calling .DriverName
	I0731 18:28:10.680604   80933 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0731 18:28:10.680619   80933 main.go:141] libmachine: (newest-cni-094683) Calling .GetState
	I0731 18:28:10.682012   80933 main.go:141] libmachine: Detecting operating system of created instance...
	I0731 18:28:10.682024   80933 main.go:141] libmachine: Waiting for SSH to be available...
	I0731 18:28:10.682030   80933 main.go:141] libmachine: Getting to WaitForSSH function...
	I0731 18:28:10.682036   80933 main.go:141] libmachine: (newest-cni-094683) Calling .GetSSHHostname
	I0731 18:28:10.684555   80933 main.go:141] libmachine: (newest-cni-094683) DBG | domain newest-cni-094683 has defined MAC address 52:54:00:ba:97:e4 in network mk-newest-cni-094683
	I0731 18:28:10.684929   80933 main.go:141] libmachine: (newest-cni-094683) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ba:97:e4", ip: ""} in network mk-newest-cni-094683: {Iface:virbr3 ExpiryTime:2024-07-31 19:27:57 +0000 UTC Type:0 Mac:52:54:00:ba:97:e4 Iaid: IPaddr:192.168.39.71 Prefix:24 Hostname:newest-cni-094683 Clientid:01:52:54:00:ba:97:e4}
	I0731 18:28:10.684954   80933 main.go:141] libmachine: (newest-cni-094683) DBG | domain newest-cni-094683 has defined IP address 192.168.39.71 and MAC address 52:54:00:ba:97:e4 in network mk-newest-cni-094683
	I0731 18:28:10.685098   80933 main.go:141] libmachine: (newest-cni-094683) Calling .GetSSHPort
	I0731 18:28:10.685263   80933 main.go:141] libmachine: (newest-cni-094683) Calling .GetSSHKeyPath
	I0731 18:28:10.685419   80933 main.go:141] libmachine: (newest-cni-094683) Calling .GetSSHKeyPath
	I0731 18:28:10.685551   80933 main.go:141] libmachine: (newest-cni-094683) Calling .GetSSHUsername
	I0731 18:28:10.685696   80933 main.go:141] libmachine: Using SSH client type: native
	I0731 18:28:10.685874   80933 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.71 22 <nil> <nil>}
	I0731 18:28:10.685884   80933 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0731 18:28:10.782124   80933 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0731 18:28:10.782144   80933 main.go:141] libmachine: Detecting the provisioner...
	I0731 18:28:10.782153   80933 main.go:141] libmachine: (newest-cni-094683) Calling .GetSSHHostname
	I0731 18:28:10.784912   80933 main.go:141] libmachine: (newest-cni-094683) DBG | domain newest-cni-094683 has defined MAC address 52:54:00:ba:97:e4 in network mk-newest-cni-094683
	I0731 18:28:10.785296   80933 main.go:141] libmachine: (newest-cni-094683) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ba:97:e4", ip: ""} in network mk-newest-cni-094683: {Iface:virbr3 ExpiryTime:2024-07-31 19:27:57 +0000 UTC Type:0 Mac:52:54:00:ba:97:e4 Iaid: IPaddr:192.168.39.71 Prefix:24 Hostname:newest-cni-094683 Clientid:01:52:54:00:ba:97:e4}
	I0731 18:28:10.785327   80933 main.go:141] libmachine: (newest-cni-094683) DBG | domain newest-cni-094683 has defined IP address 192.168.39.71 and MAC address 52:54:00:ba:97:e4 in network mk-newest-cni-094683
	I0731 18:28:10.785536   80933 main.go:141] libmachine: (newest-cni-094683) Calling .GetSSHPort
	I0731 18:28:10.785706   80933 main.go:141] libmachine: (newest-cni-094683) Calling .GetSSHKeyPath
	I0731 18:28:10.785862   80933 main.go:141] libmachine: (newest-cni-094683) Calling .GetSSHKeyPath
	I0731 18:28:10.785991   80933 main.go:141] libmachine: (newest-cni-094683) Calling .GetSSHUsername
	I0731 18:28:10.786117   80933 main.go:141] libmachine: Using SSH client type: native
	I0731 18:28:10.786282   80933 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.71 22 <nil> <nil>}
	I0731 18:28:10.786293   80933 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0731 18:28:10.883598   80933 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0731 18:28:10.883688   80933 main.go:141] libmachine: found compatible host: buildroot
	I0731 18:28:10.883703   80933 main.go:141] libmachine: Provisioning with buildroot...
	I0731 18:28:10.883717   80933 main.go:141] libmachine: (newest-cni-094683) Calling .GetMachineName
	I0731 18:28:10.883962   80933 buildroot.go:166] provisioning hostname "newest-cni-094683"
	I0731 18:28:10.883984   80933 main.go:141] libmachine: (newest-cni-094683) Calling .GetMachineName
	I0731 18:28:10.884202   80933 main.go:141] libmachine: (newest-cni-094683) Calling .GetSSHHostname
	I0731 18:28:10.886886   80933 main.go:141] libmachine: (newest-cni-094683) DBG | domain newest-cni-094683 has defined MAC address 52:54:00:ba:97:e4 in network mk-newest-cni-094683
	I0731 18:28:10.887309   80933 main.go:141] libmachine: (newest-cni-094683) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ba:97:e4", ip: ""} in network mk-newest-cni-094683: {Iface:virbr3 ExpiryTime:2024-07-31 19:27:57 +0000 UTC Type:0 Mac:52:54:00:ba:97:e4 Iaid: IPaddr:192.168.39.71 Prefix:24 Hostname:newest-cni-094683 Clientid:01:52:54:00:ba:97:e4}
	I0731 18:28:10.887338   80933 main.go:141] libmachine: (newest-cni-094683) DBG | domain newest-cni-094683 has defined IP address 192.168.39.71 and MAC address 52:54:00:ba:97:e4 in network mk-newest-cni-094683
	I0731 18:28:10.887533   80933 main.go:141] libmachine: (newest-cni-094683) Calling .GetSSHPort
	I0731 18:28:10.887735   80933 main.go:141] libmachine: (newest-cni-094683) Calling .GetSSHKeyPath
	I0731 18:28:10.887992   80933 main.go:141] libmachine: (newest-cni-094683) Calling .GetSSHKeyPath
	I0731 18:28:10.888161   80933 main.go:141] libmachine: (newest-cni-094683) Calling .GetSSHUsername
	I0731 18:28:10.888428   80933 main.go:141] libmachine: Using SSH client type: native
	I0731 18:28:10.888677   80933 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.71 22 <nil> <nil>}
	I0731 18:28:10.888702   80933 main.go:141] libmachine: About to run SSH command:
	sudo hostname newest-cni-094683 && echo "newest-cni-094683" | sudo tee /etc/hostname
	I0731 18:28:11.004237   80933 main.go:141] libmachine: SSH cmd err, output: <nil>: newest-cni-094683
	
	I0731 18:28:11.004271   80933 main.go:141] libmachine: (newest-cni-094683) Calling .GetSSHHostname
	I0731 18:28:11.006939   80933 main.go:141] libmachine: (newest-cni-094683) DBG | domain newest-cni-094683 has defined MAC address 52:54:00:ba:97:e4 in network mk-newest-cni-094683
	I0731 18:28:11.007333   80933 main.go:141] libmachine: (newest-cni-094683) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ba:97:e4", ip: ""} in network mk-newest-cni-094683: {Iface:virbr3 ExpiryTime:2024-07-31 19:27:57 +0000 UTC Type:0 Mac:52:54:00:ba:97:e4 Iaid: IPaddr:192.168.39.71 Prefix:24 Hostname:newest-cni-094683 Clientid:01:52:54:00:ba:97:e4}
	I0731 18:28:11.007360   80933 main.go:141] libmachine: (newest-cni-094683) DBG | domain newest-cni-094683 has defined IP address 192.168.39.71 and MAC address 52:54:00:ba:97:e4 in network mk-newest-cni-094683
	I0731 18:28:11.007581   80933 main.go:141] libmachine: (newest-cni-094683) Calling .GetSSHPort
	I0731 18:28:11.007787   80933 main.go:141] libmachine: (newest-cni-094683) Calling .GetSSHKeyPath
	I0731 18:28:11.007949   80933 main.go:141] libmachine: (newest-cni-094683) Calling .GetSSHKeyPath
	I0731 18:28:11.008069   80933 main.go:141] libmachine: (newest-cni-094683) Calling .GetSSHUsername
	I0731 18:28:11.008266   80933 main.go:141] libmachine: Using SSH client type: native
	I0731 18:28:11.008488   80933 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.71 22 <nil> <nil>}
	I0731 18:28:11.008511   80933 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-094683' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-094683/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-094683' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0731 18:28:11.115000   80933 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0731 18:28:11.115033   80933 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19349-8084/.minikube CaCertPath:/home/jenkins/minikube-integration/19349-8084/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19349-8084/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19349-8084/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19349-8084/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19349-8084/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19349-8084/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19349-8084/.minikube}
	I0731 18:28:11.115093   80933 buildroot.go:174] setting up certificates
	I0731 18:28:11.115117   80933 provision.go:84] configureAuth start
	I0731 18:28:11.115134   80933 main.go:141] libmachine: (newest-cni-094683) Calling .GetMachineName
	I0731 18:28:11.115431   80933 main.go:141] libmachine: (newest-cni-094683) Calling .GetIP
	I0731 18:28:11.118009   80933 main.go:141] libmachine: (newest-cni-094683) DBG | domain newest-cni-094683 has defined MAC address 52:54:00:ba:97:e4 in network mk-newest-cni-094683
	I0731 18:28:11.118347   80933 main.go:141] libmachine: (newest-cni-094683) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ba:97:e4", ip: ""} in network mk-newest-cni-094683: {Iface:virbr3 ExpiryTime:2024-07-31 19:27:57 +0000 UTC Type:0 Mac:52:54:00:ba:97:e4 Iaid: IPaddr:192.168.39.71 Prefix:24 Hostname:newest-cni-094683 Clientid:01:52:54:00:ba:97:e4}
	I0731 18:28:11.118373   80933 main.go:141] libmachine: (newest-cni-094683) DBG | domain newest-cni-094683 has defined IP address 192.168.39.71 and MAC address 52:54:00:ba:97:e4 in network mk-newest-cni-094683
	I0731 18:28:11.118483   80933 main.go:141] libmachine: (newest-cni-094683) Calling .GetSSHHostname
	I0731 18:28:11.120482   80933 main.go:141] libmachine: (newest-cni-094683) DBG | domain newest-cni-094683 has defined MAC address 52:54:00:ba:97:e4 in network mk-newest-cni-094683
	I0731 18:28:11.120747   80933 main.go:141] libmachine: (newest-cni-094683) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ba:97:e4", ip: ""} in network mk-newest-cni-094683: {Iface:virbr3 ExpiryTime:2024-07-31 19:27:57 +0000 UTC Type:0 Mac:52:54:00:ba:97:e4 Iaid: IPaddr:192.168.39.71 Prefix:24 Hostname:newest-cni-094683 Clientid:01:52:54:00:ba:97:e4}
	I0731 18:28:11.120786   80933 main.go:141] libmachine: (newest-cni-094683) DBG | domain newest-cni-094683 has defined IP address 192.168.39.71 and MAC address 52:54:00:ba:97:e4 in network mk-newest-cni-094683
	I0731 18:28:11.120884   80933 provision.go:143] copyHostCerts
	I0731 18:28:11.120935   80933 exec_runner.go:144] found /home/jenkins/minikube-integration/19349-8084/.minikube/ca.pem, removing ...
	I0731 18:28:11.120944   80933 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19349-8084/.minikube/ca.pem
	I0731 18:28:11.121002   80933 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19349-8084/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19349-8084/.minikube/ca.pem (1082 bytes)
	I0731 18:28:11.121081   80933 exec_runner.go:144] found /home/jenkins/minikube-integration/19349-8084/.minikube/cert.pem, removing ...
	I0731 18:28:11.121088   80933 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19349-8084/.minikube/cert.pem
	I0731 18:28:11.121111   80933 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19349-8084/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19349-8084/.minikube/cert.pem (1123 bytes)
	I0731 18:28:11.121159   80933 exec_runner.go:144] found /home/jenkins/minikube-integration/19349-8084/.minikube/key.pem, removing ...
	I0731 18:28:11.121166   80933 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19349-8084/.minikube/key.pem
	I0731 18:28:11.121189   80933 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19349-8084/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19349-8084/.minikube/key.pem (1675 bytes)
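copyHostCerts refreshes the ca.pem, cert.pem, and key.pem copies under the profile directory: each existing copy is removed first ("found ..., removing ...") and the file is rewritten from the certs directory. A small sketch of that remove-then-rewrite step follows; the paths and the 0600 mode are illustrative assumptions, not the exact minikube layout.

package main

import (
	"log"
	"os"
)

// copyCert drops any stale destination file, then rewrites it from the
// source with restrictive permissions, mirroring the rm/cp pairs above.
func copyCert(src, dst string) error {
	data, err := os.ReadFile(src)
	if err != nil {
		return err
	}
	if err := os.Remove(dst); err != nil && !os.IsNotExist(err) {
		return err
	}
	return os.WriteFile(dst, data, 0o600)
}

func main() {
	// Illustrative paths only.
	if err := copyCert("certs/ca.pem", "ca.pem"); err != nil {
		log.Fatal(err)
	}
}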
	I0731 18:28:11.121230   80933 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19349-8084/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19349-8084/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19349-8084/.minikube/certs/ca-key.pem org=jenkins.newest-cni-094683 san=[127.0.0.1 192.168.39.71 localhost minikube newest-cni-094683]
	I0731 18:28:11.161813   80933 provision.go:177] copyRemoteCerts
	I0731 18:28:11.161868   80933 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0731 18:28:11.161888   80933 main.go:141] libmachine: (newest-cni-094683) Calling .GetSSHHostname
	I0731 18:28:11.164629   80933 main.go:141] libmachine: (newest-cni-094683) DBG | domain newest-cni-094683 has defined MAC address 52:54:00:ba:97:e4 in network mk-newest-cni-094683
	I0731 18:28:11.164943   80933 main.go:141] libmachine: (newest-cni-094683) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ba:97:e4", ip: ""} in network mk-newest-cni-094683: {Iface:virbr3 ExpiryTime:2024-07-31 19:27:57 +0000 UTC Type:0 Mac:52:54:00:ba:97:e4 Iaid: IPaddr:192.168.39.71 Prefix:24 Hostname:newest-cni-094683 Clientid:01:52:54:00:ba:97:e4}
	I0731 18:28:11.164974   80933 main.go:141] libmachine: (newest-cni-094683) DBG | domain newest-cni-094683 has defined IP address 192.168.39.71 and MAC address 52:54:00:ba:97:e4 in network mk-newest-cni-094683
	I0731 18:28:11.165105   80933 main.go:141] libmachine: (newest-cni-094683) Calling .GetSSHPort
	I0731 18:28:11.165312   80933 main.go:141] libmachine: (newest-cni-094683) Calling .GetSSHKeyPath
	I0731 18:28:11.165468   80933 main.go:141] libmachine: (newest-cni-094683) Calling .GetSSHUsername
	I0731 18:28:11.165603   80933 sshutil.go:53] new ssh client: &{IP:192.168.39.71 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19349-8084/.minikube/machines/newest-cni-094683/id_rsa Username:docker}
	I0731 18:28:11.245075   80933 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19349-8084/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0731 18:28:11.267319   80933 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19349-8084/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0731 18:28:11.288652   80933 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19349-8084/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0731 18:28:11.312010   80933 provision.go:87] duration metric: took 196.877806ms to configureAuth
	I0731 18:28:11.312037   80933 buildroot.go:189] setting minikube options for container-runtime
	I0731 18:28:11.312238   80933 config.go:182] Loaded profile config "newest-cni-094683": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0-beta.0
	I0731 18:28:11.312307   80933 main.go:141] libmachine: (newest-cni-094683) Calling .GetSSHHostname
	I0731 18:28:11.315042   80933 main.go:141] libmachine: (newest-cni-094683) DBG | domain newest-cni-094683 has defined MAC address 52:54:00:ba:97:e4 in network mk-newest-cni-094683
	I0731 18:28:11.315414   80933 main.go:141] libmachine: (newest-cni-094683) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ba:97:e4", ip: ""} in network mk-newest-cni-094683: {Iface:virbr3 ExpiryTime:2024-07-31 19:27:57 +0000 UTC Type:0 Mac:52:54:00:ba:97:e4 Iaid: IPaddr:192.168.39.71 Prefix:24 Hostname:newest-cni-094683 Clientid:01:52:54:00:ba:97:e4}
	I0731 18:28:11.315455   80933 main.go:141] libmachine: (newest-cni-094683) DBG | domain newest-cni-094683 has defined IP address 192.168.39.71 and MAC address 52:54:00:ba:97:e4 in network mk-newest-cni-094683
	I0731 18:28:11.315670   80933 main.go:141] libmachine: (newest-cni-094683) Calling .GetSSHPort
	I0731 18:28:11.315874   80933 main.go:141] libmachine: (newest-cni-094683) Calling .GetSSHKeyPath
	I0731 18:28:11.316048   80933 main.go:141] libmachine: (newest-cni-094683) Calling .GetSSHKeyPath
	I0731 18:28:11.316178   80933 main.go:141] libmachine: (newest-cni-094683) Calling .GetSSHUsername
	I0731 18:28:11.316363   80933 main.go:141] libmachine: Using SSH client type: native
	I0731 18:28:11.316583   80933 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.71 22 <nil> <nil>}
	I0731 18:28:11.316615   80933 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0731 18:28:11.591455   80933 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
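The %!s(MISSING) in the logged command (and the %!N(MISSING) and %!p(MISSING) seen further down) is not part of the command that actually ran: it is Go's fmt notation for a % verb with no matching argument, most likely produced because the command string contains a literal printf %s and is echoed through a Printf-style logger. The executed command writes the CRIO_MINIKUBE_OPTIONS line shown in the response above. A two-line demonstration of the notation:

package main

import "fmt"

func main() {
	// The remote command carries a literal %s for the shell's printf; passing
	// the same string through a Go Printf-style call without arguments makes
	// fmt render the verb as %!s(MISSING), as in the log.
	cmd := `sudo mkdir -p /etc/sysconfig && printf %s "CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio`
	fmt.Printf(cmd + "\n")
}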
	I0731 18:28:11.591488   80933 main.go:141] libmachine: Checking connection to Docker...
	I0731 18:28:11.591519   80933 main.go:141] libmachine: (newest-cni-094683) Calling .GetURL
	I0731 18:28:11.592781   80933 main.go:141] libmachine: (newest-cni-094683) DBG | Using libvirt version 6000000
	I0731 18:28:11.595039   80933 main.go:141] libmachine: (newest-cni-094683) DBG | domain newest-cni-094683 has defined MAC address 52:54:00:ba:97:e4 in network mk-newest-cni-094683
	I0731 18:28:11.595471   80933 main.go:141] libmachine: (newest-cni-094683) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ba:97:e4", ip: ""} in network mk-newest-cni-094683: {Iface:virbr3 ExpiryTime:2024-07-31 19:27:57 +0000 UTC Type:0 Mac:52:54:00:ba:97:e4 Iaid: IPaddr:192.168.39.71 Prefix:24 Hostname:newest-cni-094683 Clientid:01:52:54:00:ba:97:e4}
	I0731 18:28:11.595502   80933 main.go:141] libmachine: (newest-cni-094683) DBG | domain newest-cni-094683 has defined IP address 192.168.39.71 and MAC address 52:54:00:ba:97:e4 in network mk-newest-cni-094683
	I0731 18:28:11.595670   80933 main.go:141] libmachine: Docker is up and running!
	I0731 18:28:11.595688   80933 main.go:141] libmachine: Reticulating splines...
	I0731 18:28:11.595696   80933 client.go:171] duration metric: took 28.017267198s to LocalClient.Create
	I0731 18:28:11.595722   80933 start.go:167] duration metric: took 28.017332939s to libmachine.API.Create "newest-cni-094683"
	I0731 18:28:11.595731   80933 start.go:293] postStartSetup for "newest-cni-094683" (driver="kvm2")
	I0731 18:28:11.595743   80933 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0731 18:28:11.595759   80933 main.go:141] libmachine: (newest-cni-094683) Calling .DriverName
	I0731 18:28:11.595985   80933 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0731 18:28:11.596010   80933 main.go:141] libmachine: (newest-cni-094683) Calling .GetSSHHostname
	I0731 18:28:11.598527   80933 main.go:141] libmachine: (newest-cni-094683) DBG | domain newest-cni-094683 has defined MAC address 52:54:00:ba:97:e4 in network mk-newest-cni-094683
	I0731 18:28:11.598935   80933 main.go:141] libmachine: (newest-cni-094683) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ba:97:e4", ip: ""} in network mk-newest-cni-094683: {Iface:virbr3 ExpiryTime:2024-07-31 19:27:57 +0000 UTC Type:0 Mac:52:54:00:ba:97:e4 Iaid: IPaddr:192.168.39.71 Prefix:24 Hostname:newest-cni-094683 Clientid:01:52:54:00:ba:97:e4}
	I0731 18:28:11.598957   80933 main.go:141] libmachine: (newest-cni-094683) DBG | domain newest-cni-094683 has defined IP address 192.168.39.71 and MAC address 52:54:00:ba:97:e4 in network mk-newest-cni-094683
	I0731 18:28:11.599148   80933 main.go:141] libmachine: (newest-cni-094683) Calling .GetSSHPort
	I0731 18:28:11.599327   80933 main.go:141] libmachine: (newest-cni-094683) Calling .GetSSHKeyPath
	I0731 18:28:11.599468   80933 main.go:141] libmachine: (newest-cni-094683) Calling .GetSSHUsername
	I0731 18:28:11.599636   80933 sshutil.go:53] new ssh client: &{IP:192.168.39.71 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19349-8084/.minikube/machines/newest-cni-094683/id_rsa Username:docker}
	I0731 18:28:11.678162   80933 ssh_runner.go:195] Run: cat /etc/os-release
	I0731 18:28:11.681927   80933 info.go:137] Remote host: Buildroot 2023.02.9
	I0731 18:28:11.681963   80933 filesync.go:126] Scanning /home/jenkins/minikube-integration/19349-8084/.minikube/addons for local assets ...
	I0731 18:28:11.682023   80933 filesync.go:126] Scanning /home/jenkins/minikube-integration/19349-8084/.minikube/files for local assets ...
	I0731 18:28:11.682134   80933 filesync.go:149] local asset: /home/jenkins/minikube-integration/19349-8084/.minikube/files/etc/ssl/certs/152592.pem -> 152592.pem in /etc/ssl/certs
	I0731 18:28:11.682242   80933 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0731 18:28:11.690796   80933 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19349-8084/.minikube/files/etc/ssl/certs/152592.pem --> /etc/ssl/certs/152592.pem (1708 bytes)
	I0731 18:28:11.714965   80933 start.go:296] duration metric: took 119.222202ms for postStartSetup
	I0731 18:28:11.715006   80933 main.go:141] libmachine: (newest-cni-094683) Calling .GetConfigRaw
	I0731 18:28:11.715632   80933 main.go:141] libmachine: (newest-cni-094683) Calling .GetIP
	I0731 18:28:11.718312   80933 main.go:141] libmachine: (newest-cni-094683) DBG | domain newest-cni-094683 has defined MAC address 52:54:00:ba:97:e4 in network mk-newest-cni-094683
	I0731 18:28:11.718689   80933 main.go:141] libmachine: (newest-cni-094683) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ba:97:e4", ip: ""} in network mk-newest-cni-094683: {Iface:virbr3 ExpiryTime:2024-07-31 19:27:57 +0000 UTC Type:0 Mac:52:54:00:ba:97:e4 Iaid: IPaddr:192.168.39.71 Prefix:24 Hostname:newest-cni-094683 Clientid:01:52:54:00:ba:97:e4}
	I0731 18:28:11.718710   80933 main.go:141] libmachine: (newest-cni-094683) DBG | domain newest-cni-094683 has defined IP address 192.168.39.71 and MAC address 52:54:00:ba:97:e4 in network mk-newest-cni-094683
	I0731 18:28:11.718941   80933 profile.go:143] Saving config to /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/newest-cni-094683/config.json ...
	I0731 18:28:11.719137   80933 start.go:128] duration metric: took 28.160368373s to createHost
	I0731 18:28:11.719174   80933 main.go:141] libmachine: (newest-cni-094683) Calling .GetSSHHostname
	I0731 18:28:11.721472   80933 main.go:141] libmachine: (newest-cni-094683) DBG | domain newest-cni-094683 has defined MAC address 52:54:00:ba:97:e4 in network mk-newest-cni-094683
	I0731 18:28:11.721809   80933 main.go:141] libmachine: (newest-cni-094683) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ba:97:e4", ip: ""} in network mk-newest-cni-094683: {Iface:virbr3 ExpiryTime:2024-07-31 19:27:57 +0000 UTC Type:0 Mac:52:54:00:ba:97:e4 Iaid: IPaddr:192.168.39.71 Prefix:24 Hostname:newest-cni-094683 Clientid:01:52:54:00:ba:97:e4}
	I0731 18:28:11.721830   80933 main.go:141] libmachine: (newest-cni-094683) DBG | domain newest-cni-094683 has defined IP address 192.168.39.71 and MAC address 52:54:00:ba:97:e4 in network mk-newest-cni-094683
	I0731 18:28:11.721973   80933 main.go:141] libmachine: (newest-cni-094683) Calling .GetSSHPort
	I0731 18:28:11.722116   80933 main.go:141] libmachine: (newest-cni-094683) Calling .GetSSHKeyPath
	I0731 18:28:11.722291   80933 main.go:141] libmachine: (newest-cni-094683) Calling .GetSSHKeyPath
	I0731 18:28:11.722422   80933 main.go:141] libmachine: (newest-cni-094683) Calling .GetSSHUsername
	I0731 18:28:11.722572   80933 main.go:141] libmachine: Using SSH client type: native
	I0731 18:28:11.722752   80933 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.71 22 <nil> <nil>}
	I0731 18:28:11.722765   80933 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0731 18:28:11.819753   80933 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722450491.801704744
	
	I0731 18:28:11.819779   80933 fix.go:216] guest clock: 1722450491.801704744
	I0731 18:28:11.819803   80933 fix.go:229] Guest: 2024-07-31 18:28:11.801704744 +0000 UTC Remote: 2024-07-31 18:28:11.719150143 +0000 UTC m=+28.268719978 (delta=82.554601ms)
	I0731 18:28:11.819822   80933 fix.go:200] guest clock delta is within tolerance: 82.554601ms
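The clock check reads the guest's time over SSH (date +%s.%N, mangled above by the same missing-argument formatting), compares it with the host's view of "now", and only forces a resync when the difference exceeds a tolerance; here the roughly 82.5 ms delta is accepted. A minimal sketch of that comparison is below; the 2-second tolerance is an assumption for the sketch, not minikube's configured threshold.

package main

import (
	"fmt"
	"math"
	"time"
)

// withinTolerance reports whether the guest clock, given as seconds since the
// Unix epoch, is within tol of the host clock.
func withinTolerance(guestEpochSec float64, host time.Time, tol time.Duration) bool {
	guest := time.Unix(0, int64(guestEpochSec*float64(time.Second)))
	return math.Abs(float64(guest.Sub(host))) <= float64(tol)
}

func main() {
	// Values echo the log above: guest 1722450491.801704744 vs. a host clock
	// of 2024-07-31 18:28:11.719150143 UTC, a delta of about 82.5 ms.
	host := time.Date(2024, 7, 31, 18, 28, 11, 719150143, time.UTC)
	fmt.Println(withinTolerance(1722450491.801704744, host, 2*time.Second))
}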
	I0731 18:28:11.819829   80933 start.go:83] releasing machines lock for "newest-cni-094683", held for 28.261173086s
	I0731 18:28:11.819850   80933 main.go:141] libmachine: (newest-cni-094683) Calling .DriverName
	I0731 18:28:11.820197   80933 main.go:141] libmachine: (newest-cni-094683) Calling .GetIP
	I0731 18:28:11.822980   80933 main.go:141] libmachine: (newest-cni-094683) DBG | domain newest-cni-094683 has defined MAC address 52:54:00:ba:97:e4 in network mk-newest-cni-094683
	I0731 18:28:11.823379   80933 main.go:141] libmachine: (newest-cni-094683) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ba:97:e4", ip: ""} in network mk-newest-cni-094683: {Iface:virbr3 ExpiryTime:2024-07-31 19:27:57 +0000 UTC Type:0 Mac:52:54:00:ba:97:e4 Iaid: IPaddr:192.168.39.71 Prefix:24 Hostname:newest-cni-094683 Clientid:01:52:54:00:ba:97:e4}
	I0731 18:28:11.823401   80933 main.go:141] libmachine: (newest-cni-094683) DBG | domain newest-cni-094683 has defined IP address 192.168.39.71 and MAC address 52:54:00:ba:97:e4 in network mk-newest-cni-094683
	I0731 18:28:11.823547   80933 main.go:141] libmachine: (newest-cni-094683) Calling .DriverName
	I0731 18:28:11.824115   80933 main.go:141] libmachine: (newest-cni-094683) Calling .DriverName
	I0731 18:28:11.824284   80933 main.go:141] libmachine: (newest-cni-094683) Calling .DriverName
	I0731 18:28:11.824376   80933 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0731 18:28:11.824408   80933 main.go:141] libmachine: (newest-cni-094683) Calling .GetSSHHostname
	I0731 18:28:11.824511   80933 ssh_runner.go:195] Run: cat /version.json
	I0731 18:28:11.824530   80933 main.go:141] libmachine: (newest-cni-094683) Calling .GetSSHHostname
	I0731 18:28:11.827043   80933 main.go:141] libmachine: (newest-cni-094683) DBG | domain newest-cni-094683 has defined MAC address 52:54:00:ba:97:e4 in network mk-newest-cni-094683
	I0731 18:28:11.827340   80933 main.go:141] libmachine: (newest-cni-094683) DBG | domain newest-cni-094683 has defined MAC address 52:54:00:ba:97:e4 in network mk-newest-cni-094683
	I0731 18:28:11.827474   80933 main.go:141] libmachine: (newest-cni-094683) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ba:97:e4", ip: ""} in network mk-newest-cni-094683: {Iface:virbr3 ExpiryTime:2024-07-31 19:27:57 +0000 UTC Type:0 Mac:52:54:00:ba:97:e4 Iaid: IPaddr:192.168.39.71 Prefix:24 Hostname:newest-cni-094683 Clientid:01:52:54:00:ba:97:e4}
	I0731 18:28:11.827498   80933 main.go:141] libmachine: (newest-cni-094683) DBG | domain newest-cni-094683 has defined IP address 192.168.39.71 and MAC address 52:54:00:ba:97:e4 in network mk-newest-cni-094683
	I0731 18:28:11.827637   80933 main.go:141] libmachine: (newest-cni-094683) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ba:97:e4", ip: ""} in network mk-newest-cni-094683: {Iface:virbr3 ExpiryTime:2024-07-31 19:27:57 +0000 UTC Type:0 Mac:52:54:00:ba:97:e4 Iaid: IPaddr:192.168.39.71 Prefix:24 Hostname:newest-cni-094683 Clientid:01:52:54:00:ba:97:e4}
	I0731 18:28:11.827648   80933 main.go:141] libmachine: (newest-cni-094683) Calling .GetSSHPort
	I0731 18:28:11.827663   80933 main.go:141] libmachine: (newest-cni-094683) DBG | domain newest-cni-094683 has defined IP address 192.168.39.71 and MAC address 52:54:00:ba:97:e4 in network mk-newest-cni-094683
	I0731 18:28:11.827846   80933 main.go:141] libmachine: (newest-cni-094683) Calling .GetSSHPort
	I0731 18:28:11.827863   80933 main.go:141] libmachine: (newest-cni-094683) Calling .GetSSHKeyPath
	I0731 18:28:11.828045   80933 main.go:141] libmachine: (newest-cni-094683) Calling .GetSSHUsername
	I0731 18:28:11.828057   80933 main.go:141] libmachine: (newest-cni-094683) Calling .GetSSHKeyPath
	I0731 18:28:11.828238   80933 main.go:141] libmachine: (newest-cni-094683) Calling .GetSSHUsername
	I0731 18:28:11.828237   80933 sshutil.go:53] new ssh client: &{IP:192.168.39.71 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19349-8084/.minikube/machines/newest-cni-094683/id_rsa Username:docker}
	I0731 18:28:11.828396   80933 sshutil.go:53] new ssh client: &{IP:192.168.39.71 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19349-8084/.minikube/machines/newest-cni-094683/id_rsa Username:docker}
	I0731 18:28:11.900338   80933 ssh_runner.go:195] Run: systemctl --version
	I0731 18:28:11.939227   80933 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0731 18:28:12.091933   80933 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0731 18:28:12.098192   80933 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0731 18:28:12.098313   80933 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0731 18:28:12.112950   80933 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
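The two cni.go lines show how minikube sidelines conflicting CNI configs: any bridge or podman config under /etc/cni/net.d is renamed with a .mk_disabled suffix so the runtime ignores it. A rough local equivalent in Go, walking the directory directly instead of shelling out the `find ... -exec mv` over SSH as the log does; the filename filter mirrors the find expression above:

package main

import (
	"fmt"
	"os"
	"path/filepath"
	"strings"
)

// disableBridgeConfigs renames bridge/podman CNI configs in dir by adding a
// ".mk_disabled" suffix, mirroring the `find ... -exec mv` seen in the log.
func disableBridgeConfigs(dir string) ([]string, error) {
	entries, err := os.ReadDir(dir)
	if err != nil {
		return nil, err
	}
	var disabled []string
	for _, e := range entries {
		name := e.Name()
		if e.IsDir() || strings.HasSuffix(name, ".mk_disabled") {
			continue
		}
		if !strings.Contains(name, "bridge") && !strings.Contains(name, "podman") {
			continue
		}
		src := filepath.Join(dir, name)
		if err := os.Rename(src, src+".mk_disabled"); err != nil {
			return disabled, err
		}
		disabled = append(disabled, src)
	}
	return disabled, nil
}

func main() {
	disabled, err := disableBridgeConfigs("/etc/cni/net.d")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Printf("disabled %v bridge cni config(s)\n", disabled)
}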
	I0731 18:28:12.112972   80933 start.go:495] detecting cgroup driver to use...
	I0731 18:28:12.113029   80933 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0731 18:28:12.128067   80933 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0731 18:28:12.141431   80933 docker.go:217] disabling cri-docker service (if available) ...
	I0731 18:28:12.141481   80933 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0731 18:28:12.154135   80933 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0731 18:28:12.166970   80933 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0731 18:28:12.292287   80933 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0731 18:28:12.435790   80933 docker.go:233] disabling docker service ...
	I0731 18:28:12.435861   80933 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0731 18:28:12.449841   80933 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0731 18:28:12.462782   80933 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0731 18:28:12.601882   80933 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0731 18:28:12.714237   80933 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
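The docker.go:217/233 blocks disable the competing runtimes before CRI-O is configured: stop the cri-docker and docker sockets and services, disable the sockets, mask the services so nothing re-activates them, then verify docker is no longer active. A compact sketch of that sequence, shelling out the same systemctl subcommands; it assumes systemd and sudo are available, and tolerates units that are absent on a given image:

package main

import (
	"fmt"
	"os/exec"
)

// run executes a systemctl subcommand and reports failures without aborting,
// since some units may legitimately be absent on a given guest image.
func run(args ...string) {
	cmd := exec.Command("sudo", append([]string{"systemctl"}, args...)...)
	if out, err := cmd.CombinedOutput(); err != nil {
		fmt.Printf("systemctl %v: %v (%s)\n", args, err, out)
	}
}

func main() {
	for _, unit := range []string{"cri-docker.socket", "cri-docker.service", "docker.socket", "docker.service"} {
		run("stop", "-f", unit)
	}
	run("disable", "cri-docker.socket")
	run("mask", "cri-docker.service")
	run("disable", "docker.socket")
	run("mask", "docker.service")

	// Mirror of the final is-active check in the log.
	if err := exec.Command("systemctl", "is-active", "--quiet", "docker").Run(); err != nil {
		fmt.Println("docker is no longer active")
	}
}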
	I0731 18:28:12.728199   80933 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0731 18:28:12.745776   80933 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0731 18:28:12.745835   80933 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 18:28:12.756780   80933 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0731 18:28:12.756844   80933 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 18:28:12.767679   80933 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 18:28:12.778521   80933 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 18:28:12.788266   80933 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0731 18:28:12.798535   80933 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 18:28:12.807951   80933 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 18:28:12.824359   80933 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
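Taken together, the crio.go steps above rewrite /etc/crio/crio.conf.d/02-crio.conf so that CRI-O uses the registry.k8s.io/pause:3.10 pause image, the cgroupfs cgroup manager, a "pod" conmon cgroup, and a default_sysctls list that opens unprivileged ports. The following sketch re-implements those sed edits locally with Go regexps over an in-memory config; the sample input is made up for illustration and is not a capture of the actual file on the guest:

package main

import (
	"fmt"
	"regexp"
)

// Illustrative stand-in for /etc/crio/crio.conf.d/02-crio.conf before the edits.
const sample = `[crio.image]
pause_image = "registry.k8s.io/pause:3.9"

[crio.runtime]
cgroup_manager = "systemd"
conmon_cgroup = "system.slice"
`

func main() {
	conf := sample

	// pause_image = "registry.k8s.io/pause:3.10"
	conf = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.10"`)

	// cgroup_manager = "cgroupfs"
	conf = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAllString(conf, `cgroup_manager = "cgroupfs"`)

	// Drop any existing conmon_cgroup line, then re-add it after cgroup_manager.
	conf = regexp.MustCompile(`(?m)^conmon_cgroup = .*\n`).ReplaceAllString(conf, "")
	conf = regexp.MustCompile(`(?m)^cgroup_manager = .*$`).
		ReplaceAllString(conf, "$0\nconmon_cgroup = \"pod\"")

	// Ensure a default_sysctls list exists, then open unprivileged ports in it.
	if !regexp.MustCompile(`(?m)^ *default_sysctls`).MatchString(conf) {
		conf = regexp.MustCompile(`(?m)^conmon_cgroup = .*$`).
			ReplaceAllString(conf, "$0\ndefault_sysctls = [\n]")
	}
	conf = regexp.MustCompile(`(?m)^default_sysctls *= *\[`).
		ReplaceAllString(conf, "$0\n  \"net.ipv4.ip_unprivileged_port_start=0\",")

	fmt.Print(conf)
}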
	I0731 18:28:12.833946   80933 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0731 18:28:12.842772   80933 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0731 18:28:12.842817   80933 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0731 18:28:12.856053   80933 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
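The crio.go:166 message is the expected fallback path rather than a failure: when the bridge-nf-call-iptables sysctl file is missing, the br_netfilter module has not been loaded yet, so minikube loads it and then enables IPv4 forwarding before restarting CRI-O. A minimal sketch of that check-then-load sequence, assuming it runs as root inside the guest:

package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	const sysctlPath = "/proc/sys/net/bridge/bridge-nf-call-iptables"

	// If the sysctl file is missing, br_netfilter is not loaded yet.
	if _, err := os.Stat(sysctlPath); err != nil {
		fmt.Println("bridge-nf-call-iptables not present, loading br_netfilter")
		if out, err := exec.Command("modprobe", "br_netfilter").CombinedOutput(); err != nil {
			fmt.Fprintf(os.Stderr, "modprobe br_netfilter failed: %v: %s\n", err, out)
			os.Exit(1)
		}
	}

	// Equivalent of `echo 1 > /proc/sys/net/ipv4/ip_forward`.
	if err := os.WriteFile("/proc/sys/net/ipv4/ip_forward", []byte("1"), 0o644); err != nil {
		fmt.Fprintf(os.Stderr, "enabling ip_forward failed: %v\n", err)
		os.Exit(1)
	}
	fmt.Println("br_netfilter loaded and ip_forward enabled")
}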
	I0731 18:28:12.865239   80933 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0731 18:28:12.972325   80933 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0731 18:28:13.112920   80933 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0731 18:28:13.113016   80933 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0731 18:28:13.117672   80933 start.go:563] Will wait 60s for crictl version
	I0731 18:28:13.117718   80933 ssh_runner.go:195] Run: which crictl
	I0731 18:28:13.121042   80933 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0731 18:28:13.163283   80933 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0731 18:28:13.163364   80933 ssh_runner.go:195] Run: crio --version
	I0731 18:28:13.190621   80933 ssh_runner.go:195] Run: crio --version
	I0731 18:28:13.219791   80933 out.go:177] * Preparing Kubernetes v1.31.0-beta.0 on CRI-O 1.29.1 ...
	I0731 18:28:13.221005   80933 main.go:141] libmachine: (newest-cni-094683) Calling .GetIP
	I0731 18:28:13.223657   80933 main.go:141] libmachine: (newest-cni-094683) DBG | domain newest-cni-094683 has defined MAC address 52:54:00:ba:97:e4 in network mk-newest-cni-094683
	I0731 18:28:13.224025   80933 main.go:141] libmachine: (newest-cni-094683) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ba:97:e4", ip: ""} in network mk-newest-cni-094683: {Iface:virbr3 ExpiryTime:2024-07-31 19:27:57 +0000 UTC Type:0 Mac:52:54:00:ba:97:e4 Iaid: IPaddr:192.168.39.71 Prefix:24 Hostname:newest-cni-094683 Clientid:01:52:54:00:ba:97:e4}
	I0731 18:28:13.224050   80933 main.go:141] libmachine: (newest-cni-094683) DBG | domain newest-cni-094683 has defined IP address 192.168.39.71 and MAC address 52:54:00:ba:97:e4 in network mk-newest-cni-094683
	I0731 18:28:13.224263   80933 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0731 18:28:13.228105   80933 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
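The two commands above implement an idempotent /etc/hosts update: check whether host.minikube.internal already maps to the gateway IP, and if not, rewrite the file with any stale entry for that hostname stripped and a fresh one appended. A standalone sketch of the same logic, operating on a file path passed on the command line instead of /etc/hosts itself:

package main

import (
	"fmt"
	"os"
	"strings"
)

// ensureHostEntry rewrites the hosts file so that exactly one line maps
// hostname to ip, mirroring the grep/echo pipeline in the log.
func ensureHostEntry(path, ip, hostname string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	want := ip + "\t" + hostname
	var kept []string
	for _, line := range strings.Split(string(data), "\n") {
		if line == want {
			return nil // already present, nothing to do
		}
		if strings.HasSuffix(line, "\t"+hostname) {
			continue // drop stale entry for the same hostname
		}
		kept = append(kept, line)
	}
	kept = append(kept, want, "")
	return os.WriteFile(path, []byte(strings.Join(kept, "\n")), 0o644)
}

func main() {
	if len(os.Args) != 2 {
		fmt.Fprintln(os.Stderr, "usage: ensure-hosts <hosts-file>")
		os.Exit(1)
	}
	if err := ensureHostEntry(os.Args[1], "192.168.39.1", "host.minikube.internal"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}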
	I0731 18:28:13.241437   80933 out.go:177]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I0731 18:28:13.242664   80933 kubeadm.go:883] updating cluster {Name:newest-cni-094683 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1
.31.0-beta.0 ClusterName:newest-cni-094683 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.71 Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-ho
st Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0731 18:28:13.242784   80933 preload.go:131] Checking if preload exists for k8s version v1.31.0-beta.0 and runtime crio
	I0731 18:28:13.242849   80933 ssh_runner.go:195] Run: sudo crictl images --output json
	I0731 18:28:13.273396   80933 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.0-beta.0". assuming images are not preloaded.
	I0731 18:28:13.273468   80933 ssh_runner.go:195] Run: which lz4
	I0731 18:28:13.277126   80933 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0731 18:28:13.280826   80933 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0731 18:28:13.280855   80933 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19349-8084/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-beta.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (387176433 bytes)
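The stat failure at ssh_runner.go:352 is the normal cache-miss branch: the preload tarball is only transferred into the guest when it is not already present at /preloaded.tar.lz4. A condensed sketch of that check-then-copy decision, using local file operations in place of minikube's scp over SSH:

package main

import (
	"fmt"
	"io"
	"os"
)

// ensurePreload copies src to dst only when dst does not already exist,
// mirroring the stat-then-scp sequence in the log.
func ensurePreload(src, dst string) error {
	if _, err := os.Stat(dst); err == nil {
		fmt.Printf("%s already present, skipping copy\n", dst)
		return nil
	}
	in, err := os.Open(src)
	if err != nil {
		return err
	}
	defer in.Close()
	out, err := os.Create(dst)
	if err != nil {
		return err
	}
	defer out.Close()
	n, err := io.Copy(out, in)
	if err != nil {
		return err
	}
	fmt.Printf("copied %s --> %s (%d bytes)\n", src, dst, n)
	return nil
}

func main() {
	src := "preloaded-images-k8s-v18-v1.31.0-beta.0-cri-o-overlay-amd64.tar.lz4"
	if err := ensurePreload(src, "/preloaded.tar.lz4"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}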
	
	
	==> CRI-O <==
	Jul 31 18:28:18 embed-certs-436067 crio[725]: time="2024-07-31 18:28:18.168782789Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=0872c422-45ac-4c80-b1c0-bcca29f4de6c name=/runtime.v1.RuntimeService/Version
	Jul 31 18:28:18 embed-certs-436067 crio[725]: time="2024-07-31 18:28:18.169949037Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=d2d2342f-ef4c-492f-a50d-cc868afd6f00 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 31 18:28:18 embed-certs-436067 crio[725]: time="2024-07-31 18:28:18.170532099Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722450498170499512,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133285,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=d2d2342f-ef4c-492f-a50d-cc868afd6f00 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 31 18:28:18 embed-certs-436067 crio[725]: time="2024-07-31 18:28:18.171134181Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=a08a647a-673f-4229-afe0-5fb080b70f29 name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 18:28:18 embed-certs-436067 crio[725]: time="2024-07-31 18:28:18.171216725Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=a08a647a-673f-4229-afe0-5fb080b70f29 name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 18:28:18 embed-certs-436067 crio[725]: time="2024-07-31 18:28:18.171392685Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:e81135eed50d1b7a66d7fe0cf97169684a888d597ad1e09ac13cc822beef8463,PodSandboxId:13ceb832a21bd1dad9e9382b188da32b85c8f81bc49d2ed970691d7d8295b0f2,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722449575925975759,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a7d95d32-1002-4fba-b0fd-555b872efa1f,},Annotations:map[string]string{io.kubernetes.container.hash: 305b4e0b,io.kubernetes.container.restartCount: 0,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cb37176a1402a636ef18f65e18efff58af3508014944aa89a30ef43f65a7087c,PodSandboxId:16503bb7d21bca8ce3dc91e3843567a6fac2d71496cea7ff11459c58ee0f7483,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722449575876581676,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-85spm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: eecb399a-0365-436d-8f14-a7853a5a2ea3,},Annotations:map[string]string{io.kubernetes.container.hash: e95b56e6,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /d
ev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8fa65ac5c2b20c87a2c33c21a876b003c2aee98795df76fce1619374d44a2eca,PodSandboxId:dbda29967246ca2b68e1316e89d393f54987a0186e7659bec1fb3394c3474267,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722449575762374998,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-fqkfd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2e5a67a3-6f2b-43a5-8b94-cf48202c5958,},Annotations:map[string]string{io.kubernetes.container.hash: af4356a7,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"cont
ainerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:99a38e72d22388429038d3580675f16e680416f8fda306bdf4c391f75d1d6f4e,PodSandboxId:4ce18c5492bcc33609c111e5dfc539411ed97b6c88cca97ffe6bc55035c69ab2,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722449575672794330,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-qpb62,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e57f157b-01b5-42ec-9630-b9b5ae94fe
5d,},Annotations:map[string]string{io.kubernetes.container.hash: 64ee34e,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cc1d1518390d9bb43923997af4c62caa816339308e4dc225f452a54a0a1b86e7,PodSandboxId:403ef356717193ee2ac40466ec7781b568ca6f3bf8e61fc6d16f2d8c80f54431,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722449555729790036,
Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-436067,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ec4df7a89391220824463c879809b59a,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c093f881541f076f977e16312dc643448d3f077778055fb2462c6952710814e7,PodSandboxId:29ce686dfcdef81bf5020ea4f2e895dd4f86ab430fa2f43be5579362af403cec,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722449555702714874,Labe
ls:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-436067,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 808aaf12096cf081a2351698f977532c,},Annotations:map[string]string{io.kubernetes.container.hash: f2dabe25,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6db79734980205959c736946c757d63f009ecfd4877f566a300321787965474b,PodSandboxId:410e5e30052883f36056869049399453d5345a52810890a6564042533f131031,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722449555709471226,Labels:map[string]string{io.kubernet
es.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-436067,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a55599c46901235db3664d7ea64eb319,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d9a9772dcf78ae6e2ec7aaaa6496f3c491d1011757c4b667f76dc2cad3bf5b72,PodSandboxId:9cba8f262e7f1a23846ed03a0eab0905455b21ffc753005b5a1a83b4caf2c1ab,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722449555631382221,Labels:map[string]string{io.kubernetes.container
.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-436067,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7a2b92359fe7ffe52886ca4ecfa670b1,},Annotations:map[string]string{io.kubernetes.container.hash: ee5b1ef,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=a08a647a-673f-4229-afe0-5fb080b70f29 name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 18:28:18 embed-certs-436067 crio[725]: time="2024-07-31 18:28:18.207438558Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=54a13550-1a41-4c18-bf9d-6289853596f5 name=/runtime.v1.RuntimeService/Version
	Jul 31 18:28:18 embed-certs-436067 crio[725]: time="2024-07-31 18:28:18.207518228Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=54a13550-1a41-4c18-bf9d-6289853596f5 name=/runtime.v1.RuntimeService/Version
	Jul 31 18:28:18 embed-certs-436067 crio[725]: time="2024-07-31 18:28:18.208506105Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=8a5cd3ab-3043-474e-8b3c-fd40c6b48f5f name=/runtime.v1.ImageService/ImageFsInfo
	Jul 31 18:28:18 embed-certs-436067 crio[725]: time="2024-07-31 18:28:18.208900400Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722450498208880799,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133285,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=8a5cd3ab-3043-474e-8b3c-fd40c6b48f5f name=/runtime.v1.ImageService/ImageFsInfo
	Jul 31 18:28:18 embed-certs-436067 crio[725]: time="2024-07-31 18:28:18.209445236Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=92e051d6-8c09-4164-badd-25542496d32e name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 18:28:18 embed-certs-436067 crio[725]: time="2024-07-31 18:28:18.209495586Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=92e051d6-8c09-4164-badd-25542496d32e name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 18:28:18 embed-certs-436067 crio[725]: time="2024-07-31 18:28:18.209678561Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:e81135eed50d1b7a66d7fe0cf97169684a888d597ad1e09ac13cc822beef8463,PodSandboxId:13ceb832a21bd1dad9e9382b188da32b85c8f81bc49d2ed970691d7d8295b0f2,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722449575925975759,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a7d95d32-1002-4fba-b0fd-555b872efa1f,},Annotations:map[string]string{io.kubernetes.container.hash: 305b4e0b,io.kubernetes.container.restartCount: 0,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cb37176a1402a636ef18f65e18efff58af3508014944aa89a30ef43f65a7087c,PodSandboxId:16503bb7d21bca8ce3dc91e3843567a6fac2d71496cea7ff11459c58ee0f7483,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722449575876581676,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-85spm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: eecb399a-0365-436d-8f14-a7853a5a2ea3,},Annotations:map[string]string{io.kubernetes.container.hash: e95b56e6,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /d
ev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8fa65ac5c2b20c87a2c33c21a876b003c2aee98795df76fce1619374d44a2eca,PodSandboxId:dbda29967246ca2b68e1316e89d393f54987a0186e7659bec1fb3394c3474267,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722449575762374998,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-fqkfd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2e5a67a3-6f2b-43a5-8b94-cf48202c5958,},Annotations:map[string]string{io.kubernetes.container.hash: af4356a7,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"cont
ainerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:99a38e72d22388429038d3580675f16e680416f8fda306bdf4c391f75d1d6f4e,PodSandboxId:4ce18c5492bcc33609c111e5dfc539411ed97b6c88cca97ffe6bc55035c69ab2,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722449575672794330,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-qpb62,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e57f157b-01b5-42ec-9630-b9b5ae94fe
5d,},Annotations:map[string]string{io.kubernetes.container.hash: 64ee34e,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cc1d1518390d9bb43923997af4c62caa816339308e4dc225f452a54a0a1b86e7,PodSandboxId:403ef356717193ee2ac40466ec7781b568ca6f3bf8e61fc6d16f2d8c80f54431,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722449555729790036,
Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-436067,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ec4df7a89391220824463c879809b59a,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c093f881541f076f977e16312dc643448d3f077778055fb2462c6952710814e7,PodSandboxId:29ce686dfcdef81bf5020ea4f2e895dd4f86ab430fa2f43be5579362af403cec,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722449555702714874,Labe
ls:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-436067,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 808aaf12096cf081a2351698f977532c,},Annotations:map[string]string{io.kubernetes.container.hash: f2dabe25,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6db79734980205959c736946c757d63f009ecfd4877f566a300321787965474b,PodSandboxId:410e5e30052883f36056869049399453d5345a52810890a6564042533f131031,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722449555709471226,Labels:map[string]string{io.kubernet
es.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-436067,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a55599c46901235db3664d7ea64eb319,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d9a9772dcf78ae6e2ec7aaaa6496f3c491d1011757c4b667f76dc2cad3bf5b72,PodSandboxId:9cba8f262e7f1a23846ed03a0eab0905455b21ffc753005b5a1a83b4caf2c1ab,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722449555631382221,Labels:map[string]string{io.kubernetes.container
.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-436067,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7a2b92359fe7ffe52886ca4ecfa670b1,},Annotations:map[string]string{io.kubernetes.container.hash: ee5b1ef,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=92e051d6-8c09-4164-badd-25542496d32e name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 18:28:18 embed-certs-436067 crio[725]: time="2024-07-31 18:28:18.243922958Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=b4845c5a-3211-4237-897b-4913b52f1d1c name=/runtime.v1.RuntimeService/Version
	Jul 31 18:28:18 embed-certs-436067 crio[725]: time="2024-07-31 18:28:18.243993434Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=b4845c5a-3211-4237-897b-4913b52f1d1c name=/runtime.v1.RuntimeService/Version
	Jul 31 18:28:18 embed-certs-436067 crio[725]: time="2024-07-31 18:28:18.245030855Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=27a74b9d-d661-4c70-b1c0-a5a300e96a8d name=/runtime.v1.ImageService/ImageFsInfo
	Jul 31 18:28:18 embed-certs-436067 crio[725]: time="2024-07-31 18:28:18.245509813Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722450498245485743,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133285,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=27a74b9d-d661-4c70-b1c0-a5a300e96a8d name=/runtime.v1.ImageService/ImageFsInfo
	Jul 31 18:28:18 embed-certs-436067 crio[725]: time="2024-07-31 18:28:18.246091103Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=7598cbba-26e1-4ec8-88af-4349d0db1431 name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 18:28:18 embed-certs-436067 crio[725]: time="2024-07-31 18:28:18.246182588Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=7598cbba-26e1-4ec8-88af-4349d0db1431 name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 18:28:18 embed-certs-436067 crio[725]: time="2024-07-31 18:28:18.246408965Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:e81135eed50d1b7a66d7fe0cf97169684a888d597ad1e09ac13cc822beef8463,PodSandboxId:13ceb832a21bd1dad9e9382b188da32b85c8f81bc49d2ed970691d7d8295b0f2,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722449575925975759,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a7d95d32-1002-4fba-b0fd-555b872efa1f,},Annotations:map[string]string{io.kubernetes.container.hash: 305b4e0b,io.kubernetes.container.restartCount: 0,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cb37176a1402a636ef18f65e18efff58af3508014944aa89a30ef43f65a7087c,PodSandboxId:16503bb7d21bca8ce3dc91e3843567a6fac2d71496cea7ff11459c58ee0f7483,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722449575876581676,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-85spm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: eecb399a-0365-436d-8f14-a7853a5a2ea3,},Annotations:map[string]string{io.kubernetes.container.hash: e95b56e6,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /d
ev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8fa65ac5c2b20c87a2c33c21a876b003c2aee98795df76fce1619374d44a2eca,PodSandboxId:dbda29967246ca2b68e1316e89d393f54987a0186e7659bec1fb3394c3474267,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722449575762374998,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-fqkfd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2e5a67a3-6f2b-43a5-8b94-cf48202c5958,},Annotations:map[string]string{io.kubernetes.container.hash: af4356a7,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"cont
ainerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:99a38e72d22388429038d3580675f16e680416f8fda306bdf4c391f75d1d6f4e,PodSandboxId:4ce18c5492bcc33609c111e5dfc539411ed97b6c88cca97ffe6bc55035c69ab2,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722449575672794330,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-qpb62,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e57f157b-01b5-42ec-9630-b9b5ae94fe
5d,},Annotations:map[string]string{io.kubernetes.container.hash: 64ee34e,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cc1d1518390d9bb43923997af4c62caa816339308e4dc225f452a54a0a1b86e7,PodSandboxId:403ef356717193ee2ac40466ec7781b568ca6f3bf8e61fc6d16f2d8c80f54431,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722449555729790036,
Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-436067,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ec4df7a89391220824463c879809b59a,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c093f881541f076f977e16312dc643448d3f077778055fb2462c6952710814e7,PodSandboxId:29ce686dfcdef81bf5020ea4f2e895dd4f86ab430fa2f43be5579362af403cec,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722449555702714874,Labe
ls:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-436067,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 808aaf12096cf081a2351698f977532c,},Annotations:map[string]string{io.kubernetes.container.hash: f2dabe25,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6db79734980205959c736946c757d63f009ecfd4877f566a300321787965474b,PodSandboxId:410e5e30052883f36056869049399453d5345a52810890a6564042533f131031,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722449555709471226,Labels:map[string]string{io.kubernet
es.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-436067,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a55599c46901235db3664d7ea64eb319,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d9a9772dcf78ae6e2ec7aaaa6496f3c491d1011757c4b667f76dc2cad3bf5b72,PodSandboxId:9cba8f262e7f1a23846ed03a0eab0905455b21ffc753005b5a1a83b4caf2c1ab,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722449555631382221,Labels:map[string]string{io.kubernetes.container
.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-436067,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7a2b92359fe7ffe52886ca4ecfa670b1,},Annotations:map[string]string{io.kubernetes.container.hash: ee5b1ef,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=7598cbba-26e1-4ec8-88af-4349d0db1431 name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 18:28:18 embed-certs-436067 crio[725]: time="2024-07-31 18:28:18.448706874Z" level=debug msg="Request: &ListPodSandboxRequest{Filter:nil,}" file="otel-collector/interceptors.go:62" id=6be437a0-7ad6-4ee2-8ed2-c4d74ea608c3 name=/runtime.v1.RuntimeService/ListPodSandbox
	Jul 31 18:28:18 embed-certs-436067 crio[725]: time="2024-07-31 18:28:18.449035131Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:2906485312d89da63ebf86359ffe7d84e303a188aa5b0c3c51bd165118ca20f4,Metadata:&PodSandboxMetadata{Name:metrics-server-569cc877fc-pgf6q,Uid:b799fa04-0bf7-4914-9738-964b825577b5,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1722449575754342942,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: metrics-server-569cc877fc-pgf6q,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b799fa04-0bf7-4914-9738-964b825577b5,k8s-app: metrics-server,pod-template-hash: 569cc877fc,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-07-31T18:12:55.435929524Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:13ceb832a21bd1dad9e9382b188da32b85c8f81bc49d2ed970691d7d8295b0f2,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:a7d95d32-1002-4fba-b0fd-555b872efa1f,N
amespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1722449575502134106,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a7d95d32-1002-4fba-b0fd-555b872efa1f,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"vol
umes\":[{\"hostPath\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io/config.seen: 2024-07-31T18:12:55.179713976Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:16503bb7d21bca8ce3dc91e3843567a6fac2d71496cea7ff11459c58ee0f7483,Metadata:&PodSandboxMetadata{Name:kube-proxy-85spm,Uid:eecb399a-0365-436d-8f14-a7853a5a2ea3,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1722449575360948394,Labels:map[string]string{controller-revision-hash: 5bbc78d4f8,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-85spm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: eecb399a-0365-436d-8f14-a7853a5a2ea3,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-07-31T18:12:54.450274000Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:dbda29967246ca2b68e1316e89d393f54987a0186e7659bec1fb3394c3474267,Metadata:&PodSandboxMetadata{Name:coredns-7db6d8ff4d-fqkfd,Ui
d:2e5a67a3-6f2b-43a5-8b94-cf48202c5958,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1722449575207173336,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-7db6d8ff4d-fqkfd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2e5a67a3-6f2b-43a5-8b94-cf48202c5958,k8s-app: kube-dns,pod-template-hash: 7db6d8ff4d,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-07-31T18:12:54.871310343Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:4ce18c5492bcc33609c111e5dfc539411ed97b6c88cca97ffe6bc55035c69ab2,Metadata:&PodSandboxMetadata{Name:coredns-7db6d8ff4d-qpb62,Uid:e57f157b-01b5-42ec-9630-b9b5ae94fe5d,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1722449575166575935,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-7db6d8ff4d-qpb62,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e57f157b-01b5-42ec-9630-b9b5ae94fe5d,k8s-app: kube-dns,pod-templa
te-hash: 7db6d8ff4d,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-07-31T18:12:54.852146748Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:29ce686dfcdef81bf5020ea4f2e895dd4f86ab430fa2f43be5579362af403cec,Metadata:&PodSandboxMetadata{Name:etcd-embed-certs-436067,Uid:808aaf12096cf081a2351698f977532c,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1722449555479094437,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-embed-certs-436067,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 808aaf12096cf081a2351698f977532c,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.50.86:2379,kubernetes.io/config.hash: 808aaf12096cf081a2351698f977532c,kubernetes.io/config.seen: 2024-07-31T18:12:35.047237276Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:410e5e30052883f36056869049399453d5345a52810890a6564042533f131031,M
etadata:&PodSandboxMetadata{Name:kube-scheduler-embed-certs-436067,Uid:a55599c46901235db3664d7ea64eb319,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1722449555478537949,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-embed-certs-436067,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a55599c46901235db3664d7ea64eb319,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: a55599c46901235db3664d7ea64eb319,kubernetes.io/config.seen: 2024-07-31T18:12:35.047247253Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:403ef356717193ee2ac40466ec7781b568ca6f3bf8e61fc6d16f2d8c80f54431,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-embed-certs-436067,Uid:ec4df7a89391220824463c879809b59a,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1722449555473868559,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,
io.kubernetes.pod.name: kube-controller-manager-embed-certs-436067,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ec4df7a89391220824463c879809b59a,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: ec4df7a89391220824463c879809b59a,kubernetes.io/config.seen: 2024-07-31T18:12:35.047245770Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:9cba8f262e7f1a23846ed03a0eab0905455b21ffc753005b5a1a83b4caf2c1ab,Metadata:&PodSandboxMetadata{Name:kube-apiserver-embed-certs-436067,Uid:7a2b92359fe7ffe52886ca4ecfa670b1,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1722449555470333561,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-embed-certs-436067,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7a2b92359fe7ffe52886ca4ecfa670b1,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.50
.86:8443,kubernetes.io/config.hash: 7a2b92359fe7ffe52886ca4ecfa670b1,kubernetes.io/config.seen: 2024-07-31T18:12:35.047244050Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="otel-collector/interceptors.go:74" id=6be437a0-7ad6-4ee2-8ed2-c4d74ea608c3 name=/runtime.v1.RuntimeService/ListPodSandbox
	Jul 31 18:28:18 embed-certs-436067 crio[725]: time="2024-07-31 18:28:18.449691047Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=2d5d01ba-7c04-436f-b5fb-1643e9dc4fdb name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 18:28:18 embed-certs-436067 crio[725]: time="2024-07-31 18:28:18.449746994Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=2d5d01ba-7c04-436f-b5fb-1643e9dc4fdb name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 18:28:18 embed-certs-436067 crio[725]: time="2024-07-31 18:28:18.449929753Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:e81135eed50d1b7a66d7fe0cf97169684a888d597ad1e09ac13cc822beef8463,PodSandboxId:13ceb832a21bd1dad9e9382b188da32b85c8f81bc49d2ed970691d7d8295b0f2,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722449575925975759,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a7d95d32-1002-4fba-b0fd-555b872efa1f,},Annotations:map[string]string{io.kubernetes.container.hash: 305b4e0b,io.kubernetes.container.restartCount: 0,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cb37176a1402a636ef18f65e18efff58af3508014944aa89a30ef43f65a7087c,PodSandboxId:16503bb7d21bca8ce3dc91e3843567a6fac2d71496cea7ff11459c58ee0f7483,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722449575876581676,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-85spm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: eecb399a-0365-436d-8f14-a7853a5a2ea3,},Annotations:map[string]string{io.kubernetes.container.hash: e95b56e6,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /d
ev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8fa65ac5c2b20c87a2c33c21a876b003c2aee98795df76fce1619374d44a2eca,PodSandboxId:dbda29967246ca2b68e1316e89d393f54987a0186e7659bec1fb3394c3474267,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722449575762374998,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-fqkfd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2e5a67a3-6f2b-43a5-8b94-cf48202c5958,},Annotations:map[string]string{io.kubernetes.container.hash: af4356a7,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"cont
ainerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:99a38e72d22388429038d3580675f16e680416f8fda306bdf4c391f75d1d6f4e,PodSandboxId:4ce18c5492bcc33609c111e5dfc539411ed97b6c88cca97ffe6bc55035c69ab2,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722449575672794330,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-qpb62,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e57f157b-01b5-42ec-9630-b9b5ae94fe
5d,},Annotations:map[string]string{io.kubernetes.container.hash: 64ee34e,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cc1d1518390d9bb43923997af4c62caa816339308e4dc225f452a54a0a1b86e7,PodSandboxId:403ef356717193ee2ac40466ec7781b568ca6f3bf8e61fc6d16f2d8c80f54431,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722449555729790036,
Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-436067,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ec4df7a89391220824463c879809b59a,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c093f881541f076f977e16312dc643448d3f077778055fb2462c6952710814e7,PodSandboxId:29ce686dfcdef81bf5020ea4f2e895dd4f86ab430fa2f43be5579362af403cec,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722449555702714874,Labe
ls:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-436067,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 808aaf12096cf081a2351698f977532c,},Annotations:map[string]string{io.kubernetes.container.hash: f2dabe25,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6db79734980205959c736946c757d63f009ecfd4877f566a300321787965474b,PodSandboxId:410e5e30052883f36056869049399453d5345a52810890a6564042533f131031,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722449555709471226,Labels:map[string]string{io.kubernet
es.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-436067,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a55599c46901235db3664d7ea64eb319,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d9a9772dcf78ae6e2ec7aaaa6496f3c491d1011757c4b667f76dc2cad3bf5b72,PodSandboxId:9cba8f262e7f1a23846ed03a0eab0905455b21ffc753005b5a1a83b4caf2c1ab,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722449555631382221,Labels:map[string]string{io.kubernetes.container
.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-436067,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7a2b92359fe7ffe52886ca4ecfa670b1,},Annotations:map[string]string{io.kubernetes.container.hash: ee5b1ef,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=2d5d01ba-7c04-436f-b5fb-1643e9dc4fdb name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	e81135eed50d1       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   15 minutes ago      Running             storage-provisioner       0                   13ceb832a21bd       storage-provisioner
	cb37176a1402a       55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1   15 minutes ago      Running             kube-proxy                0                   16503bb7d21bc       kube-proxy-85spm
	8fa65ac5c2b20       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   15 minutes ago      Running             coredns                   0                   dbda29967246c       coredns-7db6d8ff4d-fqkfd
	99a38e72d2238       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   15 minutes ago      Running             coredns                   0                   4ce18c5492bcc       coredns-7db6d8ff4d-qpb62
	cc1d1518390d9       76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e   15 minutes ago      Running             kube-controller-manager   2                   403ef35671719       kube-controller-manager-embed-certs-436067
	6db7973498020       3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2   15 minutes ago      Running             kube-scheduler            2                   410e5e3005288       kube-scheduler-embed-certs-436067
	c093f881541f0       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899   15 minutes ago      Running             etcd                      2                   29ce686dfcdef       etcd-embed-certs-436067
	d9a9772dcf78a       1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d   15 minutes ago      Running             kube-apiserver            2                   9cba8f262e7f1       kube-apiserver-embed-certs-436067
	
	
	==> coredns [8fa65ac5c2b20c87a2c33c21a876b003c2aee98795df76fce1619374d44a2eca] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	
	
	==> coredns [99a38e72d22388429038d3580675f16e680416f8fda306bdf4c391f75d1d6f4e] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	
	
	==> describe nodes <==
	Name:               embed-certs-436067
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=embed-certs-436067
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=1d737dad7efa60c56d30434fcd857dd3b14c91d9
	                    minikube.k8s.io/name=embed-certs-436067
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_07_31T18_12_42_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 31 Jul 2024 18:12:38 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-436067
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 31 Jul 2024 18:28:10 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 31 Jul 2024 18:23:14 +0000   Wed, 31 Jul 2024 18:12:36 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 31 Jul 2024 18:23:14 +0000   Wed, 31 Jul 2024 18:12:36 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 31 Jul 2024 18:23:14 +0000   Wed, 31 Jul 2024 18:12:36 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 31 Jul 2024 18:23:14 +0000   Wed, 31 Jul 2024 18:12:38 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.50.86
	  Hostname:    embed-certs-436067
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 f6fa4d76d50f402ca6798d9445da3dc8
	  System UUID:                f6fa4d76-d50f-402c-a679-8d9445da3dc8
	  Boot ID:                    fde69711-7c4c-4fc8-a71c-0af26845f36a
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-7db6d8ff4d-fqkfd                      100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     15m
	  kube-system                 coredns-7db6d8ff4d-qpb62                      100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     15m
	  kube-system                 etcd-embed-certs-436067                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         15m
	  kube-system                 kube-apiserver-embed-certs-436067             250m (12%)    0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-controller-manager-embed-certs-436067    200m (10%)    0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-proxy-85spm                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-scheduler-embed-certs-436067             100m (5%)     0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 metrics-server-569cc877fc-pgf6q               100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         15m
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   0 (0%)
	  memory             440Mi (20%)  340Mi (16%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 15m   kube-proxy       
	  Normal  NodeHasSufficientMemory  15m   kubelet          Node embed-certs-436067 status is now: NodeHasSufficientMemory
	  Normal  Starting                 15m   kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  15m   kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  15m   kubelet          Node embed-certs-436067 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    15m   kubelet          Node embed-certs-436067 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     15m   kubelet          Node embed-certs-436067 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           15m   node-controller  Node embed-certs-436067 event: Registered Node embed-certs-436067 in Controller
	
	
	==> dmesg <==
	[  +0.052278] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.039889] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.811769] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +1.851620] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.515852] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +7.972241] systemd-fstab-generator[642]: Ignoring "noauto" option for root device
	[  +0.055723] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.070736] systemd-fstab-generator[654]: Ignoring "noauto" option for root device
	[  +0.185374] systemd-fstab-generator[668]: Ignoring "noauto" option for root device
	[  +0.154499] systemd-fstab-generator[680]: Ignoring "noauto" option for root device
	[  +0.297404] systemd-fstab-generator[710]: Ignoring "noauto" option for root device
	[  +4.278734] systemd-fstab-generator[806]: Ignoring "noauto" option for root device
	[  +0.059716] kauditd_printk_skb: 130 callbacks suppressed
	[  +2.006855] systemd-fstab-generator[928]: Ignoring "noauto" option for root device
	[  +4.588837] kauditd_printk_skb: 97 callbacks suppressed
	[  +6.290731] kauditd_printk_skb: 79 callbacks suppressed
	[Jul31 18:12] kauditd_printk_skb: 4 callbacks suppressed
	[  +2.271517] systemd-fstab-generator[3583]: Ignoring "noauto" option for root device
	[  +4.642096] kauditd_printk_skb: 57 callbacks suppressed
	[  +1.913264] systemd-fstab-generator[3909]: Ignoring "noauto" option for root device
	[ +13.311734] systemd-fstab-generator[4097]: Ignoring "noauto" option for root device
	[  +0.083136] kauditd_printk_skb: 14 callbacks suppressed
	[Jul31 18:13] kauditd_printk_skb: 84 callbacks suppressed
	
	
	==> etcd [c093f881541f076f977e16312dc643448d3f077778055fb2462c6952710814e7] <==
	{"level":"info","ts":"2024-07-31T18:12:37.06932Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"1d005a24580f63ce is starting a new election at term 1"}
	{"level":"info","ts":"2024-07-31T18:12:37.069457Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"1d005a24580f63ce became pre-candidate at term 1"}
	{"level":"info","ts":"2024-07-31T18:12:37.069524Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"1d005a24580f63ce received MsgPreVoteResp from 1d005a24580f63ce at term 1"}
	{"level":"info","ts":"2024-07-31T18:12:37.069559Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"1d005a24580f63ce became candidate at term 2"}
	{"level":"info","ts":"2024-07-31T18:12:37.069591Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"1d005a24580f63ce received MsgVoteResp from 1d005a24580f63ce at term 2"}
	{"level":"info","ts":"2024-07-31T18:12:37.069617Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"1d005a24580f63ce became leader at term 2"}
	{"level":"info","ts":"2024-07-31T18:12:37.069641Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 1d005a24580f63ce elected leader 1d005a24580f63ce at term 2"}
	{"level":"info","ts":"2024-07-31T18:12:37.070893Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"1d005a24580f63ce","local-member-attributes":"{Name:embed-certs-436067 ClientURLs:[https://192.168.50.86:2379]}","request-path":"/0/members/1d005a24580f63ce/attributes","cluster-id":"c5418a0fb3fcfa37","publish-timeout":"7s"}
	{"level":"info","ts":"2024-07-31T18:12:37.071123Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-07-31T18:12:37.071704Z","caller":"etcdserver/server.go:2578","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-31T18:12:37.07191Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-07-31T18:12:37.072302Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-07-31T18:12:37.07239Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-07-31T18:12:37.075035Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-07-31T18:12:37.075331Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"c5418a0fb3fcfa37","local-member-id":"1d005a24580f63ce","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-31T18:12:37.075466Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-31T18:12:37.075513Z","caller":"etcdserver/server.go:2602","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-31T18:12:37.081387Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.50.86:2379"}
	2024/07/31 18:12:41 WARNING: [core] [Server #7] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	{"level":"info","ts":"2024-07-31T18:22:37.116665Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":676}
	{"level":"info","ts":"2024-07-31T18:22:37.126232Z","caller":"mvcc/kvstore_compaction.go:68","msg":"finished scheduled compaction","compact-revision":676,"took":"9.088343ms","hash":525412888,"current-db-size-bytes":2256896,"current-db-size":"2.3 MB","current-db-size-in-use-bytes":2256896,"current-db-size-in-use":"2.3 MB"}
	{"level":"info","ts":"2024-07-31T18:22:37.126303Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":525412888,"revision":676,"compact-revision":-1}
	{"level":"info","ts":"2024-07-31T18:27:37.124401Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":918}
	{"level":"info","ts":"2024-07-31T18:27:37.128606Z","caller":"mvcc/kvstore_compaction.go:68","msg":"finished scheduled compaction","compact-revision":918,"took":"3.544388ms","hash":145333688,"current-db-size-bytes":2256896,"current-db-size":"2.3 MB","current-db-size-in-use-bytes":1601536,"current-db-size-in-use":"1.6 MB"}
	{"level":"info","ts":"2024-07-31T18:27:37.128681Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":145333688,"revision":918,"compact-revision":676}
	
	
	==> kernel <==
	 18:28:19 up 20 min,  0 users,  load average: 0.37, 0.22, 0.18
	Linux embed-certs-436067 5.10.207 #1 SMP Mon Jul 29 15:19:02 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [d9a9772dcf78ae6e2ec7aaaa6496f3c491d1011757c4b667f76dc2cad3bf5b72] <==
	I0731 18:22:39.507721       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0731 18:23:39.507619       1 handler_proxy.go:93] no RequestInfo found in the context
	E0731 18:23:39.507691       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0731 18:23:39.507701       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0731 18:23:39.508877       1 handler_proxy.go:93] no RequestInfo found in the context
	E0731 18:23:39.508981       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0731 18:23:39.509001       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0731 18:25:39.508626       1 handler_proxy.go:93] no RequestInfo found in the context
	E0731 18:25:39.508730       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0731 18:25:39.508742       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0731 18:25:39.509993       1 handler_proxy.go:93] no RequestInfo found in the context
	E0731 18:25:39.510083       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0731 18:25:39.510095       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0731 18:27:38.512883       1 handler_proxy.go:93] no RequestInfo found in the context
	E0731 18:27:38.513255       1 controller.go:146] Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	W0731 18:27:39.513590       1 handler_proxy.go:93] no RequestInfo found in the context
	E0731 18:27:39.513645       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0731 18:27:39.513654       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0731 18:27:39.513693       1 handler_proxy.go:93] no RequestInfo found in the context
	E0731 18:27:39.513740       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0731 18:27:39.514861       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	
	==> kube-controller-manager [cc1d1518390d9bb43923997af4c62caa816339308e4dc225f452a54a0a1b86e7] <==
	I0731 18:22:24.544138       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0731 18:22:54.042159       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0731 18:22:54.551610       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0731 18:23:24.047692       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0731 18:23:24.559667       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0731 18:23:46.262304       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-569cc877fc" duration="273.511µs"
	E0731 18:23:54.053122       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0731 18:23:54.568135       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0731 18:23:59.265343       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-569cc877fc" duration="184.572µs"
	E0731 18:24:24.058416       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0731 18:24:24.577524       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0731 18:24:54.063407       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0731 18:24:54.585696       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0731 18:25:24.068402       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0731 18:25:24.594065       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0731 18:25:54.074295       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0731 18:25:54.603995       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0731 18:26:24.079356       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0731 18:26:24.611915       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0731 18:26:54.085435       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0731 18:26:54.619690       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0731 18:27:24.091062       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0731 18:27:24.627830       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0731 18:27:54.097443       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0731 18:27:54.636358       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	
	==> kube-proxy [cb37176a1402a636ef18f65e18efff58af3508014944aa89a30ef43f65a7087c] <==
	I0731 18:12:56.246292       1 server_linux.go:69] "Using iptables proxy"
	I0731 18:12:56.255684       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.50.86"]
	I0731 18:12:56.289788       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0731 18:12:56.289841       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0731 18:12:56.289858       1 server_linux.go:165] "Using iptables Proxier"
	I0731 18:12:56.291948       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0731 18:12:56.292137       1 server.go:872] "Version info" version="v1.30.3"
	I0731 18:12:56.292159       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0731 18:12:56.293688       1 config.go:192] "Starting service config controller"
	I0731 18:12:56.293724       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0731 18:12:56.293752       1 config.go:101] "Starting endpoint slice config controller"
	I0731 18:12:56.293768       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0731 18:12:56.295488       1 config.go:319] "Starting node config controller"
	I0731 18:12:56.295506       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0731 18:12:56.394474       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0731 18:12:56.394531       1 shared_informer.go:320] Caches are synced for service config
	I0731 18:12:56.396132       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [6db79734980205959c736946c757d63f009ecfd4877f566a300321787965474b] <==
	W0731 18:12:38.541521       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0731 18:12:38.541545       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0731 18:12:38.541585       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0731 18:12:38.541618       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0731 18:12:38.541695       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0731 18:12:38.541719       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0731 18:12:39.382562       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0731 18:12:39.382590       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0731 18:12:39.385779       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0731 18:12:39.385811       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0731 18:12:39.435796       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0731 18:12:39.436013       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0731 18:12:39.466528       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0731 18:12:39.466672       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0731 18:12:39.481566       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0731 18:12:39.481896       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0731 18:12:39.525949       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0731 18:12:39.525992       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0731 18:12:39.657006       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0731 18:12:39.657333       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0731 18:12:39.800696       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0731 18:12:39.800830       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0731 18:12:39.804334       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0731 18:12:39.804530       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	I0731 18:12:42.423048       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Jul 31 18:25:41 embed-certs-436067 kubelet[3916]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 31 18:25:41 embed-certs-436067 kubelet[3916]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 31 18:25:41 embed-certs-436067 kubelet[3916]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 31 18:25:54 embed-certs-436067 kubelet[3916]: E0731 18:25:54.246292    3916 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-pgf6q" podUID="b799fa04-0bf7-4914-9738-964b825577b5"
	Jul 31 18:26:07 embed-certs-436067 kubelet[3916]: E0731 18:26:07.246604    3916 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-pgf6q" podUID="b799fa04-0bf7-4914-9738-964b825577b5"
	Jul 31 18:26:18 embed-certs-436067 kubelet[3916]: E0731 18:26:18.247549    3916 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-pgf6q" podUID="b799fa04-0bf7-4914-9738-964b825577b5"
	Jul 31 18:26:33 embed-certs-436067 kubelet[3916]: E0731 18:26:33.247057    3916 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-pgf6q" podUID="b799fa04-0bf7-4914-9738-964b825577b5"
	Jul 31 18:26:41 embed-certs-436067 kubelet[3916]: E0731 18:26:41.263941    3916 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 31 18:26:41 embed-certs-436067 kubelet[3916]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 31 18:26:41 embed-certs-436067 kubelet[3916]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 31 18:26:41 embed-certs-436067 kubelet[3916]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 31 18:26:41 embed-certs-436067 kubelet[3916]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 31 18:26:47 embed-certs-436067 kubelet[3916]: E0731 18:26:47.247719    3916 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-pgf6q" podUID="b799fa04-0bf7-4914-9738-964b825577b5"
	Jul 31 18:27:01 embed-certs-436067 kubelet[3916]: E0731 18:27:01.247343    3916 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-pgf6q" podUID="b799fa04-0bf7-4914-9738-964b825577b5"
	Jul 31 18:27:12 embed-certs-436067 kubelet[3916]: E0731 18:27:12.246502    3916 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-pgf6q" podUID="b799fa04-0bf7-4914-9738-964b825577b5"
	Jul 31 18:27:23 embed-certs-436067 kubelet[3916]: E0731 18:27:23.252423    3916 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-pgf6q" podUID="b799fa04-0bf7-4914-9738-964b825577b5"
	Jul 31 18:27:36 embed-certs-436067 kubelet[3916]: E0731 18:27:36.247075    3916 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-pgf6q" podUID="b799fa04-0bf7-4914-9738-964b825577b5"
	Jul 31 18:27:41 embed-certs-436067 kubelet[3916]: E0731 18:27:41.269288    3916 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 31 18:27:41 embed-certs-436067 kubelet[3916]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 31 18:27:41 embed-certs-436067 kubelet[3916]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 31 18:27:41 embed-certs-436067 kubelet[3916]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 31 18:27:41 embed-certs-436067 kubelet[3916]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 31 18:27:47 embed-certs-436067 kubelet[3916]: E0731 18:27:47.247318    3916 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-pgf6q" podUID="b799fa04-0bf7-4914-9738-964b825577b5"
	Jul 31 18:28:00 embed-certs-436067 kubelet[3916]: E0731 18:28:00.247523    3916 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-pgf6q" podUID="b799fa04-0bf7-4914-9738-964b825577b5"
	Jul 31 18:28:14 embed-certs-436067 kubelet[3916]: E0731 18:28:14.247546    3916 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-pgf6q" podUID="b799fa04-0bf7-4914-9738-964b825577b5"
	
	
	==> storage-provisioner [e81135eed50d1b7a66d7fe0cf97169684a888d597ad1e09ac13cc822beef8463] <==
	I0731 18:12:56.208897       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0731 18:12:56.223886       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0731 18:12:56.224019       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0731 18:12:56.235758       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0731 18:12:56.235942       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-436067_416a8f7b-0ebd-4ef5-9f2d-e4f138bf005b!
	I0731 18:12:56.238247       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"f7e2a240-06d4-4b1c-9dc4-46302f362726", APIVersion:"v1", ResourceVersion:"402", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-436067_416a8f7b-0ebd-4ef5-9f2d-e4f138bf005b became leader
	I0731 18:12:56.337789       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-436067_416a8f7b-0ebd-4ef5-9f2d-e4f138bf005b!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-436067 -n embed-certs-436067
helpers_test.go:261: (dbg) Run:  kubectl --context embed-certs-436067 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-569cc877fc-pgf6q
helpers_test.go:274: ======> post-mortem[TestStartStop/group/embed-certs/serial/AddonExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context embed-certs-436067 describe pod metrics-server-569cc877fc-pgf6q
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context embed-certs-436067 describe pod metrics-server-569cc877fc-pgf6q: exit status 1 (73.661226ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-569cc877fc-pgf6q" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context embed-certs-436067 describe pod metrics-server-569cc877fc-pgf6q: exit status 1
--- FAIL: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (378.22s)
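For reference, the check that timed out here can be repeated by hand against the same profile. A minimal sketch, assuming local kubeconfig access to the embed-certs-436067 context and the addon's default metrics-server deployment name in kube-system (names taken from the trace above, not re-verified):

    # list pods that are not Running, mirroring the post-mortem step above
    kubectl --context embed-certs-436067 get po -A --field-selector=status.phase!=Running
    # inspect the metrics-server rollout; the ImagePullBackOff seen in the kubelet log comes from the
    # intentionally unreachable fake.domain registry override set by 'addons enable metrics-server'
    kubectl --context embed-certs-436067 -n kube-system describe deploy metrics-server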

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/AddonExistsAfterStop (335.19s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
start_stop_delete_test.go:287: ***** TestStartStop/group/no-preload/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:287: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-673754 -n no-preload-673754
start_stop_delete_test.go:287: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: showing logs for failed pods as of 2024-07-31 18:27:50.930481889 +0000 UTC m=+6474.637220081
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-673754 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context no-preload-673754 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: context deadline exceeded (1.415µs)
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context no-preload-673754 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": context deadline exceeded
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
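The image assertion above can also be checked manually once the dashboard addon has rolled out. A hedged sketch, assuming kubeconfig access to the no-preload-673754 context and the deployment/label names shown in this trace (here it fails because no k8s-app=kubernetes-dashboard pod ever started):

    # the pods the test waits up to 9m0s for
    kubectl --context no-preload-673754 -n kubernetes-dashboard get pods -l k8s-app=kubernetes-dashboard
    # the image the test asserts on; it is expected to contain registry.k8s.io/echoserver:1.4
    kubectl --context no-preload-673754 -n kubernetes-dashboard get deploy dashboard-metrics-scraper \
      -o jsonpath='{.spec.template.spec.containers[*].image}'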
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-673754 -n no-preload-673754
helpers_test.go:244: <<< TestStartStop/group/no-preload/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/no-preload/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-673754 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p no-preload-673754 logs -n 25: (1.159240737s)
helpers_test.go:252: TestStartStop/group/no-preload/serial/AddonExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p enable-default-cni-985288                           | enable-default-cni-985288    | jenkins | v1.33.1 | 31 Jul 24 17:58 UTC | 31 Jul 24 17:58 UTC |
	|         | sudo systemctl status crio                             |                              |         |         |                     |                     |
	|         | --all --full --no-pager                                |                              |         |         |                     |                     |
	| ssh     | -p enable-default-cni-985288                           | enable-default-cni-985288    | jenkins | v1.33.1 | 31 Jul 24 17:58 UTC | 31 Jul 24 17:58 UTC |
	|         | sudo systemctl cat crio                                |                              |         |         |                     |                     |
	|         | --no-pager                                             |                              |         |         |                     |                     |
	| ssh     | -p enable-default-cni-985288                           | enable-default-cni-985288    | jenkins | v1.33.1 | 31 Jul 24 17:58 UTC | 31 Jul 24 17:58 UTC |
	|         | sudo find /etc/crio -type f                            |                              |         |         |                     |                     |
	|         | -exec sh -c 'echo {}; cat {}'                          |                              |         |         |                     |                     |
	|         | \;                                                     |                              |         |         |                     |                     |
	| ssh     | -p enable-default-cni-985288                           | enable-default-cni-985288    | jenkins | v1.33.1 | 31 Jul 24 17:58 UTC | 31 Jul 24 17:58 UTC |
	|         | sudo crio config                                       |                              |         |         |                     |                     |
	| delete  | -p enable-default-cni-985288                           | enable-default-cni-985288    | jenkins | v1.33.1 | 31 Jul 24 17:58 UTC | 31 Jul 24 17:58 UTC |
	| delete  | -p                                                     | disable-driver-mounts-280161 | jenkins | v1.33.1 | 31 Jul 24 17:58 UTC | 31 Jul 24 17:58 UTC |
	|         | disable-driver-mounts-280161                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-094310 | jenkins | v1.33.1 | 31 Jul 24 17:58 UTC | 31 Jul 24 18:00 UTC |
	|         | default-k8s-diff-port-094310                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-673754             | no-preload-673754            | jenkins | v1.33.1 | 31 Jul 24 17:59 UTC | 31 Jul 24 17:59 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-673754                                   | no-preload-673754            | jenkins | v1.33.1 | 31 Jul 24 17:59 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-094310  | default-k8s-diff-port-094310 | jenkins | v1.33.1 | 31 Jul 24 18:00 UTC | 31 Jul 24 18:00 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-094310 | jenkins | v1.33.1 | 31 Jul 24 18:00 UTC |                     |
	|         | default-k8s-diff-port-094310                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-436067            | embed-certs-436067           | jenkins | v1.33.1 | 31 Jul 24 18:00 UTC | 31 Jul 24 18:00 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-436067                                  | embed-certs-436067           | jenkins | v1.33.1 | 31 Jul 24 18:00 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-276459        | old-k8s-version-276459       | jenkins | v1.33.1 | 31 Jul 24 18:02 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-673754                  | no-preload-673754            | jenkins | v1.33.1 | 31 Jul 24 18:02 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-673754 --memory=2200                     | no-preload-673754            | jenkins | v1.33.1 | 31 Jul 24 18:02 UTC | 31 Jul 24 18:13 UTC |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0-beta.0                    |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-094310       | default-k8s-diff-port-094310 | jenkins | v1.33.1 | 31 Jul 24 18:02 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-436067                 | embed-certs-436067           | jenkins | v1.33.1 | 31 Jul 24 18:02 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-094310 | jenkins | v1.33.1 | 31 Jul 24 18:02 UTC | 31 Jul 24 18:12 UTC |
	|         | default-k8s-diff-port-094310                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3                           |                              |         |         |                     |                     |
	| start   | -p embed-certs-436067                                  | embed-certs-436067           | jenkins | v1.33.1 | 31 Jul 24 18:02 UTC | 31 Jul 24 18:12 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3                           |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-276459                              | old-k8s-version-276459       | jenkins | v1.33.1 | 31 Jul 24 18:03 UTC | 31 Jul 24 18:03 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-276459             | old-k8s-version-276459       | jenkins | v1.33.1 | 31 Jul 24 18:03 UTC | 31 Jul 24 18:03 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-276459                              | old-k8s-version-276459       | jenkins | v1.33.1 | 31 Jul 24 18:03 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	| delete  | -p old-k8s-version-276459                              | old-k8s-version-276459       | jenkins | v1.33.1 | 31 Jul 24 18:27 UTC | 31 Jul 24 18:27 UTC |
	| start   | -p newest-cni-094683 --memory=2200 --alsologtostderr   | newest-cni-094683            | jenkins | v1.33.1 | 31 Jul 24 18:27 UTC |                     |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0-beta.0                    |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
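For reference, the final start invocation in the table above can be reproduced as a single shell command. This is a sketch assembled from the logged flags: the profile name and flag values are copied verbatim from the table, and the `minikube` binary is assumed to be on PATH.

	$ minikube start -p newest-cni-094683 \
	    --memory=2200 --alsologtostderr \
	    --wait=apiserver,system_pods,default_sa \
	    --feature-gates ServerSideApply=true \
	    --network-plugin=cni \
	    --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 \
	    --driver=kvm2 --container-runtime=crio \
	    --kubernetes-version=v1.31.0-beta.0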
	
	
	==> Last Start <==
	Log file created at: 2024/07/31 18:27:43
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0731 18:27:43.485752   80933 out.go:291] Setting OutFile to fd 1 ...
	I0731 18:27:43.485989   80933 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 18:27:43.486000   80933 out.go:304] Setting ErrFile to fd 2...
	I0731 18:27:43.486004   80933 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 18:27:43.486171   80933 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19349-8084/.minikube/bin
	I0731 18:27:43.486746   80933 out.go:298] Setting JSON to false
	I0731 18:27:43.487705   80933 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":7807,"bootTime":1722442656,"procs":206,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1065-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0731 18:27:43.487759   80933 start.go:139] virtualization: kvm guest
	I0731 18:27:43.490097   80933 out.go:177] * [newest-cni-094683] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0731 18:27:43.491485   80933 notify.go:220] Checking for updates...
	I0731 18:27:43.491526   80933 out.go:177]   - MINIKUBE_LOCATION=19349
	I0731 18:27:43.493132   80933 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0731 18:27:43.494544   80933 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19349-8084/kubeconfig
	I0731 18:27:43.495802   80933 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19349-8084/.minikube
	I0731 18:27:43.497173   80933 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0731 18:27:43.498529   80933 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0731 18:27:43.500299   80933 config.go:182] Loaded profile config "default-k8s-diff-port-094310": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0731 18:27:43.500407   80933 config.go:182] Loaded profile config "embed-certs-436067": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0731 18:27:43.500515   80933 config.go:182] Loaded profile config "no-preload-673754": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0-beta.0
	I0731 18:27:43.500588   80933 driver.go:392] Setting default libvirt URI to qemu:///system
	I0731 18:27:43.536849   80933 out.go:177] * Using the kvm2 driver based on user configuration
	I0731 18:27:43.538259   80933 start.go:297] selected driver: kvm2
	I0731 18:27:43.538269   80933 start.go:901] validating driver "kvm2" against <nil>
	I0731 18:27:43.538279   80933 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0731 18:27:43.538928   80933 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0731 18:27:43.538988   80933 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19349-8084/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0731 18:27:43.554753   80933 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0731 18:27:43.554813   80933 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	W0731 18:27:43.554837   80933 out.go:239] ! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	I0731 18:27:43.555096   80933 start_flags.go:966] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0731 18:27:43.555198   80933 cni.go:84] Creating CNI manager for ""
	I0731 18:27:43.555217   80933 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0731 18:27:43.555231   80933 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
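The warning above notes that --network-plugin=cni leaves CNI installation to the user and points at minikube's --cni flag as the friendlier alternative. A hedged example follows; --cni is a real minikube flag, while the profile name here is purely illustrative.

	$ minikube start -p example-cni --driver=kvm2 --container-runtime=crio --cni=bridge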
	I0731 18:27:43.555335   80933 start.go:340] cluster config:
	{Name:newest-cni-094683 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0-beta.0 ClusterName:newest-cni-094683 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Con
tainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableM
etrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0731 18:27:43.555443   80933 iso.go:125] acquiring lock: {Name:mk4494bb5a63902bf12a909c3109db85229bc516 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0731 18:27:43.557064   80933 out.go:177] * Starting "newest-cni-094683" primary control-plane node in "newest-cni-094683" cluster
	I0731 18:27:43.558183   80933 preload.go:131] Checking if preload exists for k8s version v1.31.0-beta.0 and runtime crio
	I0731 18:27:43.558218   80933 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19349-8084/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-beta.0-cri-o-overlay-amd64.tar.lz4
	I0731 18:27:43.558227   80933 cache.go:56] Caching tarball of preloaded images
	I0731 18:27:43.558320   80933 preload.go:172] Found /home/jenkins/minikube-integration/19349-8084/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-beta.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0731 18:27:43.558334   80933 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0-beta.0 on crio
	I0731 18:27:43.558435   80933 profile.go:143] Saving config to /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/newest-cni-094683/config.json ...
	I0731 18:27:43.558464   80933 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/newest-cni-094683/config.json: {Name:mkee846a2f2bebb9197f1ea334d63f1371a10147 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 18:27:43.558612   80933 start.go:360] acquireMachinesLock for newest-cni-094683: {Name:mkd78eacfab7d6c3058c6674434b3d889ec957e0 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0731 18:27:43.558645   80933 start.go:364] duration metric: took 17.977µs to acquireMachinesLock for "newest-cni-094683"
	I0731 18:27:43.558667   80933 start.go:93] Provisioning new machine with config: &{Name:newest-cni-094683 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{K
ubernetesVersion:v1.31.0-beta.0 ClusterName:newest-cni-094683 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minik
ube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0731 18:27:43.558753   80933 start.go:125] createHost starting for "" (driver="kvm2")
	I0731 18:27:43.560466   80933 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0731 18:27:43.560645   80933 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 18:27:43.560694   80933 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 18:27:43.576281   80933 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46655
	I0731 18:27:43.576698   80933 main.go:141] libmachine: () Calling .GetVersion
	I0731 18:27:43.577257   80933 main.go:141] libmachine: Using API Version  1
	I0731 18:27:43.577278   80933 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 18:27:43.577709   80933 main.go:141] libmachine: () Calling .GetMachineName
	I0731 18:27:43.578031   80933 main.go:141] libmachine: (newest-cni-094683) Calling .GetMachineName
	I0731 18:27:43.578225   80933 main.go:141] libmachine: (newest-cni-094683) Calling .DriverName
	I0731 18:27:43.578390   80933 start.go:159] libmachine.API.Create for "newest-cni-094683" (driver="kvm2")
	I0731 18:27:43.578418   80933 client.go:168] LocalClient.Create starting
	I0731 18:27:43.578456   80933 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19349-8084/.minikube/certs/ca.pem
	I0731 18:27:43.578493   80933 main.go:141] libmachine: Decoding PEM data...
	I0731 18:27:43.578515   80933 main.go:141] libmachine: Parsing certificate...
	I0731 18:27:43.578598   80933 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19349-8084/.minikube/certs/cert.pem
	I0731 18:27:43.578626   80933 main.go:141] libmachine: Decoding PEM data...
	I0731 18:27:43.578646   80933 main.go:141] libmachine: Parsing certificate...
	I0731 18:27:43.578670   80933 main.go:141] libmachine: Running pre-create checks...
	I0731 18:27:43.578693   80933 main.go:141] libmachine: (newest-cni-094683) Calling .PreCreateCheck
	I0731 18:27:43.579091   80933 main.go:141] libmachine: (newest-cni-094683) Calling .GetConfigRaw
	I0731 18:27:43.579469   80933 main.go:141] libmachine: Creating machine...
	I0731 18:27:43.579482   80933 main.go:141] libmachine: (newest-cni-094683) Calling .Create
	I0731 18:27:43.579635   80933 main.go:141] libmachine: (newest-cni-094683) Creating KVM machine...
	I0731 18:27:43.580931   80933 main.go:141] libmachine: (newest-cni-094683) DBG | found existing default KVM network
	I0731 18:27:43.582343   80933 main.go:141] libmachine: (newest-cni-094683) DBG | I0731 18:27:43.582169   80956 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00012dfa0}
	I0731 18:27:43.582362   80933 main.go:141] libmachine: (newest-cni-094683) DBG | created network xml: 
	I0731 18:27:43.582371   80933 main.go:141] libmachine: (newest-cni-094683) DBG | <network>
	I0731 18:27:43.582377   80933 main.go:141] libmachine: (newest-cni-094683) DBG |   <name>mk-newest-cni-094683</name>
	I0731 18:27:43.582383   80933 main.go:141] libmachine: (newest-cni-094683) DBG |   <dns enable='no'/>
	I0731 18:27:43.582392   80933 main.go:141] libmachine: (newest-cni-094683) DBG |   
	I0731 18:27:43.582399   80933 main.go:141] libmachine: (newest-cni-094683) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0731 18:27:43.582408   80933 main.go:141] libmachine: (newest-cni-094683) DBG |     <dhcp>
	I0731 18:27:43.582417   80933 main.go:141] libmachine: (newest-cni-094683) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0731 18:27:43.582425   80933 main.go:141] libmachine: (newest-cni-094683) DBG |     </dhcp>
	I0731 18:27:43.582457   80933 main.go:141] libmachine: (newest-cni-094683) DBG |   </ip>
	I0731 18:27:43.582480   80933 main.go:141] libmachine: (newest-cni-094683) DBG |   
	I0731 18:27:43.582493   80933 main.go:141] libmachine: (newest-cni-094683) DBG | </network>
	I0731 18:27:43.582508   80933 main.go:141] libmachine: (newest-cni-094683) DBG | 
	I0731 18:27:43.587980   80933 main.go:141] libmachine: (newest-cni-094683) DBG | trying to create private KVM network mk-newest-cni-094683 192.168.39.0/24...
	I0731 18:27:43.660320   80933 main.go:141] libmachine: (newest-cni-094683) DBG | private KVM network mk-newest-cni-094683 192.168.39.0/24 created
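Once the private network is reported as created, its definition can be inspected on the host with standard virsh commands. This is a sketch: the network name comes from the log above, and qemu:///system matches the libvirt URI the driver is using.

	$ virsh -c qemu:///system net-list --all
	$ virsh -c qemu:///system net-dumpxml mk-newest-cni-094683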
	I0731 18:27:43.660401   80933 main.go:141] libmachine: (newest-cni-094683) DBG | I0731 18:27:43.660292   80956 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19349-8084/.minikube
	I0731 18:27:43.660423   80933 main.go:141] libmachine: (newest-cni-094683) Setting up store path in /home/jenkins/minikube-integration/19349-8084/.minikube/machines/newest-cni-094683 ...
	I0731 18:27:43.660442   80933 main.go:141] libmachine: (newest-cni-094683) Building disk image from file:///home/jenkins/minikube-integration/19349-8084/.minikube/cache/iso/amd64/minikube-v1.33.1-1722248113-19339-amd64.iso
	I0731 18:27:43.660538   80933 main.go:141] libmachine: (newest-cni-094683) Downloading /home/jenkins/minikube-integration/19349-8084/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19349-8084/.minikube/cache/iso/amd64/minikube-v1.33.1-1722248113-19339-amd64.iso...
	I0731 18:27:43.910473   80933 main.go:141] libmachine: (newest-cni-094683) DBG | I0731 18:27:43.910315   80956 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19349-8084/.minikube/machines/newest-cni-094683/id_rsa...
	I0731 18:27:44.025657   80933 main.go:141] libmachine: (newest-cni-094683) DBG | I0731 18:27:44.025505   80956 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19349-8084/.minikube/machines/newest-cni-094683/newest-cni-094683.rawdisk...
	I0731 18:27:44.025707   80933 main.go:141] libmachine: (newest-cni-094683) DBG | Writing magic tar header
	I0731 18:27:44.025726   80933 main.go:141] libmachine: (newest-cni-094683) DBG | Writing SSH key tar header
	I0731 18:27:44.025739   80933 main.go:141] libmachine: (newest-cni-094683) DBG | I0731 18:27:44.025660   80956 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19349-8084/.minikube/machines/newest-cni-094683 ...
	I0731 18:27:44.025847   80933 main.go:141] libmachine: (newest-cni-094683) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19349-8084/.minikube/machines/newest-cni-094683
	I0731 18:27:44.025885   80933 main.go:141] libmachine: (newest-cni-094683) Setting executable bit set on /home/jenkins/minikube-integration/19349-8084/.minikube/machines/newest-cni-094683 (perms=drwx------)
	I0731 18:27:44.025903   80933 main.go:141] libmachine: (newest-cni-094683) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19349-8084/.minikube/machines
	I0731 18:27:44.025925   80933 main.go:141] libmachine: (newest-cni-094683) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19349-8084/.minikube
	I0731 18:27:44.025938   80933 main.go:141] libmachine: (newest-cni-094683) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19349-8084
	I0731 18:27:44.025953   80933 main.go:141] libmachine: (newest-cni-094683) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0731 18:27:44.025972   80933 main.go:141] libmachine: (newest-cni-094683) DBG | Checking permissions on dir: /home/jenkins
	I0731 18:27:44.025988   80933 main.go:141] libmachine: (newest-cni-094683) Setting executable bit set on /home/jenkins/minikube-integration/19349-8084/.minikube/machines (perms=drwxr-xr-x)
	I0731 18:27:44.026002   80933 main.go:141] libmachine: (newest-cni-094683) Setting executable bit set on /home/jenkins/minikube-integration/19349-8084/.minikube (perms=drwxr-xr-x)
	I0731 18:27:44.026016   80933 main.go:141] libmachine: (newest-cni-094683) Setting executable bit set on /home/jenkins/minikube-integration/19349-8084 (perms=drwxrwxr-x)
	I0731 18:27:44.026031   80933 main.go:141] libmachine: (newest-cni-094683) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0731 18:27:44.026045   80933 main.go:141] libmachine: (newest-cni-094683) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0731 18:27:44.026057   80933 main.go:141] libmachine: (newest-cni-094683) DBG | Checking permissions on dir: /home
	I0731 18:27:44.026074   80933 main.go:141] libmachine: (newest-cni-094683) DBG | Skipping /home - not owner
	I0731 18:27:44.026091   80933 main.go:141] libmachine: (newest-cni-094683) Creating domain...
	I0731 18:27:44.027250   80933 main.go:141] libmachine: (newest-cni-094683) define libvirt domain using xml: 
	I0731 18:27:44.027276   80933 main.go:141] libmachine: (newest-cni-094683) <domain type='kvm'>
	I0731 18:27:44.027287   80933 main.go:141] libmachine: (newest-cni-094683)   <name>newest-cni-094683</name>
	I0731 18:27:44.027295   80933 main.go:141] libmachine: (newest-cni-094683)   <memory unit='MiB'>2200</memory>
	I0731 18:27:44.027309   80933 main.go:141] libmachine: (newest-cni-094683)   <vcpu>2</vcpu>
	I0731 18:27:44.027327   80933 main.go:141] libmachine: (newest-cni-094683)   <features>
	I0731 18:27:44.027336   80933 main.go:141] libmachine: (newest-cni-094683)     <acpi/>
	I0731 18:27:44.027344   80933 main.go:141] libmachine: (newest-cni-094683)     <apic/>
	I0731 18:27:44.027353   80933 main.go:141] libmachine: (newest-cni-094683)     <pae/>
	I0731 18:27:44.027362   80933 main.go:141] libmachine: (newest-cni-094683)     
	I0731 18:27:44.027368   80933 main.go:141] libmachine: (newest-cni-094683)   </features>
	I0731 18:27:44.027380   80933 main.go:141] libmachine: (newest-cni-094683)   <cpu mode='host-passthrough'>
	I0731 18:27:44.027388   80933 main.go:141] libmachine: (newest-cni-094683)   
	I0731 18:27:44.027392   80933 main.go:141] libmachine: (newest-cni-094683)   </cpu>
	I0731 18:27:44.027400   80933 main.go:141] libmachine: (newest-cni-094683)   <os>
	I0731 18:27:44.027405   80933 main.go:141] libmachine: (newest-cni-094683)     <type>hvm</type>
	I0731 18:27:44.027410   80933 main.go:141] libmachine: (newest-cni-094683)     <boot dev='cdrom'/>
	I0731 18:27:44.027415   80933 main.go:141] libmachine: (newest-cni-094683)     <boot dev='hd'/>
	I0731 18:27:44.027421   80933 main.go:141] libmachine: (newest-cni-094683)     <bootmenu enable='no'/>
	I0731 18:27:44.027425   80933 main.go:141] libmachine: (newest-cni-094683)   </os>
	I0731 18:27:44.027430   80933 main.go:141] libmachine: (newest-cni-094683)   <devices>
	I0731 18:27:44.027438   80933 main.go:141] libmachine: (newest-cni-094683)     <disk type='file' device='cdrom'>
	I0731 18:27:44.027456   80933 main.go:141] libmachine: (newest-cni-094683)       <source file='/home/jenkins/minikube-integration/19349-8084/.minikube/machines/newest-cni-094683/boot2docker.iso'/>
	I0731 18:27:44.027463   80933 main.go:141] libmachine: (newest-cni-094683)       <target dev='hdc' bus='scsi'/>
	I0731 18:27:44.027502   80933 main.go:141] libmachine: (newest-cni-094683)       <readonly/>
	I0731 18:27:44.027524   80933 main.go:141] libmachine: (newest-cni-094683)     </disk>
	I0731 18:27:44.027536   80933 main.go:141] libmachine: (newest-cni-094683)     <disk type='file' device='disk'>
	I0731 18:27:44.027549   80933 main.go:141] libmachine: (newest-cni-094683)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0731 18:27:44.027564   80933 main.go:141] libmachine: (newest-cni-094683)       <source file='/home/jenkins/minikube-integration/19349-8084/.minikube/machines/newest-cni-094683/newest-cni-094683.rawdisk'/>
	I0731 18:27:44.027575   80933 main.go:141] libmachine: (newest-cni-094683)       <target dev='hda' bus='virtio'/>
	I0731 18:27:44.027586   80933 main.go:141] libmachine: (newest-cni-094683)     </disk>
	I0731 18:27:44.027606   80933 main.go:141] libmachine: (newest-cni-094683)     <interface type='network'>
	I0731 18:27:44.027619   80933 main.go:141] libmachine: (newest-cni-094683)       <source network='mk-newest-cni-094683'/>
	I0731 18:27:44.027628   80933 main.go:141] libmachine: (newest-cni-094683)       <model type='virtio'/>
	I0731 18:27:44.027636   80933 main.go:141] libmachine: (newest-cni-094683)     </interface>
	I0731 18:27:44.027647   80933 main.go:141] libmachine: (newest-cni-094683)     <interface type='network'>
	I0731 18:27:44.027659   80933 main.go:141] libmachine: (newest-cni-094683)       <source network='default'/>
	I0731 18:27:44.027669   80933 main.go:141] libmachine: (newest-cni-094683)       <model type='virtio'/>
	I0731 18:27:44.027696   80933 main.go:141] libmachine: (newest-cni-094683)     </interface>
	I0731 18:27:44.027724   80933 main.go:141] libmachine: (newest-cni-094683)     <serial type='pty'>
	I0731 18:27:44.027739   80933 main.go:141] libmachine: (newest-cni-094683)       <target port='0'/>
	I0731 18:27:44.027750   80933 main.go:141] libmachine: (newest-cni-094683)     </serial>
	I0731 18:27:44.027762   80933 main.go:141] libmachine: (newest-cni-094683)     <console type='pty'>
	I0731 18:27:44.027776   80933 main.go:141] libmachine: (newest-cni-094683)       <target type='serial' port='0'/>
	I0731 18:27:44.027791   80933 main.go:141] libmachine: (newest-cni-094683)     </console>
	I0731 18:27:44.027802   80933 main.go:141] libmachine: (newest-cni-094683)     <rng model='virtio'>
	I0731 18:27:44.027818   80933 main.go:141] libmachine: (newest-cni-094683)       <backend model='random'>/dev/random</backend>
	I0731 18:27:44.027830   80933 main.go:141] libmachine: (newest-cni-094683)     </rng>
	I0731 18:27:44.027843   80933 main.go:141] libmachine: (newest-cni-094683)     
	I0731 18:27:44.027865   80933 main.go:141] libmachine: (newest-cni-094683)     
	I0731 18:27:44.027878   80933 main.go:141] libmachine: (newest-cni-094683)   </devices>
	I0731 18:27:44.027889   80933 main.go:141] libmachine: (newest-cni-094683) </domain>
	I0731 18:27:44.027902   80933 main.go:141] libmachine: (newest-cni-094683) 
	I0731 18:27:44.032607   80933 main.go:141] libmachine: (newest-cni-094683) DBG | domain newest-cni-094683 has defined MAC address 52:54:00:ee:00:0d in network default
	I0731 18:27:44.033223   80933 main.go:141] libmachine: (newest-cni-094683) Ensuring networks are active...
	I0731 18:27:44.033248   80933 main.go:141] libmachine: (newest-cni-094683) DBG | domain newest-cni-094683 has defined MAC address 52:54:00:ba:97:e4 in network mk-newest-cni-094683
	I0731 18:27:44.034139   80933 main.go:141] libmachine: (newest-cni-094683) Ensuring network default is active
	I0731 18:27:44.034468   80933 main.go:141] libmachine: (newest-cni-094683) Ensuring network mk-newest-cni-094683 is active
	I0731 18:27:44.034924   80933 main.go:141] libmachine: (newest-cni-094683) Getting domain xml...
	I0731 18:27:44.035697   80933 main.go:141] libmachine: (newest-cni-094683) Creating domain...
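After the domain is defined, the generated XML and its two attached interfaces (one on the default network, one on mk-newest-cni-094683) can be checked with virsh. The commands are illustrative; the domain name is taken from the log.

	$ virsh -c qemu:///system dumpxml newest-cni-094683
	$ virsh -c qemu:///system domiflist newest-cni-094683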
	I0731 18:27:45.317910   80933 main.go:141] libmachine: (newest-cni-094683) Waiting to get IP...
	I0731 18:27:45.318942   80933 main.go:141] libmachine: (newest-cni-094683) DBG | domain newest-cni-094683 has defined MAC address 52:54:00:ba:97:e4 in network mk-newest-cni-094683
	I0731 18:27:45.319502   80933 main.go:141] libmachine: (newest-cni-094683) DBG | unable to find current IP address of domain newest-cni-094683 in network mk-newest-cni-094683
	I0731 18:27:45.319532   80933 main.go:141] libmachine: (newest-cni-094683) DBG | I0731 18:27:45.319476   80956 retry.go:31] will retry after 255.191833ms: waiting for machine to come up
	I0731 18:27:45.575958   80933 main.go:141] libmachine: (newest-cni-094683) DBG | domain newest-cni-094683 has defined MAC address 52:54:00:ba:97:e4 in network mk-newest-cni-094683
	I0731 18:27:45.576663   80933 main.go:141] libmachine: (newest-cni-094683) DBG | unable to find current IP address of domain newest-cni-094683 in network mk-newest-cni-094683
	I0731 18:27:45.576689   80933 main.go:141] libmachine: (newest-cni-094683) DBG | I0731 18:27:45.576592   80956 retry.go:31] will retry after 334.257459ms: waiting for machine to come up
	I0731 18:27:45.912189   80933 main.go:141] libmachine: (newest-cni-094683) DBG | domain newest-cni-094683 has defined MAC address 52:54:00:ba:97:e4 in network mk-newest-cni-094683
	I0731 18:27:45.912690   80933 main.go:141] libmachine: (newest-cni-094683) DBG | unable to find current IP address of domain newest-cni-094683 in network mk-newest-cni-094683
	I0731 18:27:45.912721   80933 main.go:141] libmachine: (newest-cni-094683) DBG | I0731 18:27:45.912664   80956 retry.go:31] will retry after 477.652726ms: waiting for machine to come up
	I0731 18:27:46.392307   80933 main.go:141] libmachine: (newest-cni-094683) DBG | domain newest-cni-094683 has defined MAC address 52:54:00:ba:97:e4 in network mk-newest-cni-094683
	I0731 18:27:46.392829   80933 main.go:141] libmachine: (newest-cni-094683) DBG | unable to find current IP address of domain newest-cni-094683 in network mk-newest-cni-094683
	I0731 18:27:46.392880   80933 main.go:141] libmachine: (newest-cni-094683) DBG | I0731 18:27:46.392772   80956 retry.go:31] will retry after 399.453401ms: waiting for machine to come up
	I0731 18:27:46.793289   80933 main.go:141] libmachine: (newest-cni-094683) DBG | domain newest-cni-094683 has defined MAC address 52:54:00:ba:97:e4 in network mk-newest-cni-094683
	I0731 18:27:46.793751   80933 main.go:141] libmachine: (newest-cni-094683) DBG | unable to find current IP address of domain newest-cni-094683 in network mk-newest-cni-094683
	I0731 18:27:46.793788   80933 main.go:141] libmachine: (newest-cni-094683) DBG | I0731 18:27:46.793695   80956 retry.go:31] will retry after 465.668265ms: waiting for machine to come up
	I0731 18:27:47.262724   80933 main.go:141] libmachine: (newest-cni-094683) DBG | domain newest-cni-094683 has defined MAC address 52:54:00:ba:97:e4 in network mk-newest-cni-094683
	I0731 18:27:47.263130   80933 main.go:141] libmachine: (newest-cni-094683) DBG | unable to find current IP address of domain newest-cni-094683 in network mk-newest-cni-094683
	I0731 18:27:47.263358   80933 main.go:141] libmachine: (newest-cni-094683) DBG | I0731 18:27:47.263075   80956 retry.go:31] will retry after 910.112794ms: waiting for machine to come up
	I0731 18:27:48.174979   80933 main.go:141] libmachine: (newest-cni-094683) DBG | domain newest-cni-094683 has defined MAC address 52:54:00:ba:97:e4 in network mk-newest-cni-094683
	I0731 18:27:48.175441   80933 main.go:141] libmachine: (newest-cni-094683) DBG | unable to find current IP address of domain newest-cni-094683 in network mk-newest-cni-094683
	I0731 18:27:48.175488   80933 main.go:141] libmachine: (newest-cni-094683) DBG | I0731 18:27:48.175404   80956 retry.go:31] will retry after 1.007502944s: waiting for machine to come up
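The retry loop above is polling for the VM's DHCP lease on the private network; the same information can be queried directly on the host, assuming the libvirt DHCP server is handing out the lease.

	$ virsh -c qemu:///system net-dhcp-leases mk-newest-cni-094683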
	
	
	==> CRI-O <==
	Jul 31 18:27:51 no-preload-673754 crio[725]: time="2024-07-31 18:27:51.490754743Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722450471490733904,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100741,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=f4e851e0-78b7-4016-8f7b-3cbf457bd891 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 31 18:27:51 no-preload-673754 crio[725]: time="2024-07-31 18:27:51.491279562Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=b46f54ae-86e8-4d17-9708-43be68bd6705 name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 18:27:51 no-preload-673754 crio[725]: time="2024-07-31 18:27:51.491353504Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=b46f54ae-86e8-4d17-9708-43be68bd6705 name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 18:27:51 no-preload-673754 crio[725]: time="2024-07-31 18:27:51.491564387Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:6f311536202ba854461dd5abcc13c5b8d006399426b52e0c8d10bc41f06826df,PodSandboxId:9d05560d6ac01e4f56f1a23817c448011c5bc8005de8e390585dadba4a1cb1cd,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722449354986340172,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a9f0ab70-18f4-4fac-b858-a9177077fe29,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f043eb2392c22ee26a4729969283f278a4dc7ff4784addf7139958beb2cba855,PodSandboxId:ca80c1ffca60f9300fb9fad844ba123990f8649a5d487fa47df8efae1ff0aaa9,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722449339791332921,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5cfdc65f69-k7clq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e4be77b6-aa7c-45e2-90a1-6a8264fd5101,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP
\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e3003a51718276ef12910fdbe77c2a44c14eca900a048fd6d388627fd4aeea1d,PodSandboxId:59479d972775d88e17a39762b70c70e8889bc4913d534d97e654cdbf903fd9c8,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1722449334714011386,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernet
es.pod.uid: da5fdf31-6093-4e84-baf1-ff5285f9798f,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:57bdb8e09be40a0925d7a86eb613ef8a0455415666562743e36e6800b8636847,PodSandboxId:415770964bcb598d4caf8eb7371cc11708e1adac5ff3e5ba8ed3b532b1f9b6f9,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899,State:CONTAINER_RUNNING,CreatedAt:1722449324168779807,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-hqxh6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4fb623c0-4be3-421c-85
d6-1d76a90b874f,},Annotations:map[string]string{io.kubernetes.container.hash: 65225ff2,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9ea2bc105f57ae4d05e4535f2851a35b9014c7574a63fef533fa0c487002941c,PodSandboxId:9d05560d6ac01e4f56f1a23817c448011c5bc8005de8e390585dadba4a1cb1cd,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1722449324149849313,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a9f0ab70-18f4-4fac-b858-a9177077fe
29,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ed1c40e21d8aa2b6321516625802154255f5eda1642411e377150b8df1c88bd2,PodSandboxId:ddc8c73facbad87561a8e880f7857f8f33d3918866604fd2cc4ef72036c0afdd,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b,State:CONTAINER_RUNNING,CreatedAt:1722449320465111782,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-673754,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: abd0f70c07cfe668120b96906d35f295,},Annotati
ons:map[string]string{io.kubernetes.container.hash: 9efbbee0,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ee75a53c57652705a335378f60ff96b2fce4543143edc35249ac027c7c6e4a2c,PodSandboxId:b298410c202be1742f59b59dd17a426655a9da051fb1b2502322a12a326678b2,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5,State:CONTAINER_RUNNING,CreatedAt:1722449320445352200,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-673754,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f3de91f645103a5672ba4430b5689
209,},Annotations:map[string]string{io.kubernetes.container.hash: ec666c98,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:65ef90d7b082adf059c9973e7f1dfde4770ed9718d440c8e0d525d427b16f645,PodSandboxId:008b6aaea7d653d06f0a754e6bb559d9a4add01beade6e6c6da02f6a1af8d7f2,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa,State:CONTAINER_RUNNING,CreatedAt:1722449320443397326,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-673754,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 949490960aadd80ed5a2380aa9b8c41d,},Annotations:map[string]string{io.kube
rnetes.container.hash: e06de91,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:895465d024797e36349af3a0f7cd0689ec7e813d811a3205a301baae081986e0,PodSandboxId:263f72a5df79e40b0bffef7cc10cdb04b0609fa0044305a7727953a383895951,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,State:CONTAINER_RUNNING,CreatedAt:1722449320394169910,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-673754,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f53bf54d3ee37ea0b48ca5572890ef53,},Annotations:map[string]string{io.kubernetes.contain
er.hash: ecb4da08,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=b46f54ae-86e8-4d17-9708-43be68bd6705 name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 18:27:51 no-preload-673754 crio[725]: time="2024-07-31 18:27:51.529476214Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=cd41cd62-abf3-47a2-b9bd-e0d00cf0b0ce name=/runtime.v1.RuntimeService/Version
	Jul 31 18:27:51 no-preload-673754 crio[725]: time="2024-07-31 18:27:51.529571024Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=cd41cd62-abf3-47a2-b9bd-e0d00cf0b0ce name=/runtime.v1.RuntimeService/Version
	Jul 31 18:27:51 no-preload-673754 crio[725]: time="2024-07-31 18:27:51.530698597Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=6295cf10-353b-4d3e-b9f5-9acb313ed811 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 31 18:27:51 no-preload-673754 crio[725]: time="2024-07-31 18:27:51.531037620Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722450471531016219,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100741,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=6295cf10-353b-4d3e-b9f5-9acb313ed811 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 31 18:27:51 no-preload-673754 crio[725]: time="2024-07-31 18:27:51.531563095Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=be65fc01-aaa4-4c0f-ba5a-b9976734c37d name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 18:27:51 no-preload-673754 crio[725]: time="2024-07-31 18:27:51.531617224Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=be65fc01-aaa4-4c0f-ba5a-b9976734c37d name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 18:27:51 no-preload-673754 crio[725]: time="2024-07-31 18:27:51.531865325Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:6f311536202ba854461dd5abcc13c5b8d006399426b52e0c8d10bc41f06826df,PodSandboxId:9d05560d6ac01e4f56f1a23817c448011c5bc8005de8e390585dadba4a1cb1cd,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722449354986340172,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a9f0ab70-18f4-4fac-b858-a9177077fe29,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f043eb2392c22ee26a4729969283f278a4dc7ff4784addf7139958beb2cba855,PodSandboxId:ca80c1ffca60f9300fb9fad844ba123990f8649a5d487fa47df8efae1ff0aaa9,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722449339791332921,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5cfdc65f69-k7clq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e4be77b6-aa7c-45e2-90a1-6a8264fd5101,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP
\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e3003a51718276ef12910fdbe77c2a44c14eca900a048fd6d388627fd4aeea1d,PodSandboxId:59479d972775d88e17a39762b70c70e8889bc4913d534d97e654cdbf903fd9c8,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1722449334714011386,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernet
es.pod.uid: da5fdf31-6093-4e84-baf1-ff5285f9798f,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:57bdb8e09be40a0925d7a86eb613ef8a0455415666562743e36e6800b8636847,PodSandboxId:415770964bcb598d4caf8eb7371cc11708e1adac5ff3e5ba8ed3b532b1f9b6f9,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899,State:CONTAINER_RUNNING,CreatedAt:1722449324168779807,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-hqxh6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4fb623c0-4be3-421c-85
d6-1d76a90b874f,},Annotations:map[string]string{io.kubernetes.container.hash: 65225ff2,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9ea2bc105f57ae4d05e4535f2851a35b9014c7574a63fef533fa0c487002941c,PodSandboxId:9d05560d6ac01e4f56f1a23817c448011c5bc8005de8e390585dadba4a1cb1cd,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1722449324149849313,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a9f0ab70-18f4-4fac-b858-a9177077fe
29,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ed1c40e21d8aa2b6321516625802154255f5eda1642411e377150b8df1c88bd2,PodSandboxId:ddc8c73facbad87561a8e880f7857f8f33d3918866604fd2cc4ef72036c0afdd,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b,State:CONTAINER_RUNNING,CreatedAt:1722449320465111782,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-673754,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: abd0f70c07cfe668120b96906d35f295,},Annotati
ons:map[string]string{io.kubernetes.container.hash: 9efbbee0,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ee75a53c57652705a335378f60ff96b2fce4543143edc35249ac027c7c6e4a2c,PodSandboxId:b298410c202be1742f59b59dd17a426655a9da051fb1b2502322a12a326678b2,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5,State:CONTAINER_RUNNING,CreatedAt:1722449320445352200,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-673754,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f3de91f645103a5672ba4430b5689
209,},Annotations:map[string]string{io.kubernetes.container.hash: ec666c98,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:65ef90d7b082adf059c9973e7f1dfde4770ed9718d440c8e0d525d427b16f645,PodSandboxId:008b6aaea7d653d06f0a754e6bb559d9a4add01beade6e6c6da02f6a1af8d7f2,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa,State:CONTAINER_RUNNING,CreatedAt:1722449320443397326,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-673754,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 949490960aadd80ed5a2380aa9b8c41d,},Annotations:map[string]string{io.kube
rnetes.container.hash: e06de91,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:895465d024797e36349af3a0f7cd0689ec7e813d811a3205a301baae081986e0,PodSandboxId:263f72a5df79e40b0bffef7cc10cdb04b0609fa0044305a7727953a383895951,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,State:CONTAINER_RUNNING,CreatedAt:1722449320394169910,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-673754,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f53bf54d3ee37ea0b48ca5572890ef53,},Annotations:map[string]string{io.kubernetes.contain
er.hash: ecb4da08,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=be65fc01-aaa4-4c0f-ba5a-b9976734c37d name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 18:27:51 no-preload-673754 crio[725]: time="2024-07-31 18:27:51.575046523Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=2d4e30c5-c8ba-4583-b8dc-9d065b5e4f2c name=/runtime.v1.RuntimeService/Version
	Jul 31 18:27:51 no-preload-673754 crio[725]: time="2024-07-31 18:27:51.575203073Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=2d4e30c5-c8ba-4583-b8dc-9d065b5e4f2c name=/runtime.v1.RuntimeService/Version
	Jul 31 18:27:51 no-preload-673754 crio[725]: time="2024-07-31 18:27:51.576579442Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=e7372ee6-326f-4df5-aa19-67368792e2a0 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 31 18:27:51 no-preload-673754 crio[725]: time="2024-07-31 18:27:51.576953029Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722450471576930110,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100741,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=e7372ee6-326f-4df5-aa19-67368792e2a0 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 31 18:27:51 no-preload-673754 crio[725]: time="2024-07-31 18:27:51.577473031Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=daee09a6-9897-4878-848b-482002964764 name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 18:27:51 no-preload-673754 crio[725]: time="2024-07-31 18:27:51.577585182Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=daee09a6-9897-4878-848b-482002964764 name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 18:27:51 no-preload-673754 crio[725]: time="2024-07-31 18:27:51.577770670Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:6f311536202ba854461dd5abcc13c5b8d006399426b52e0c8d10bc41f06826df,PodSandboxId:9d05560d6ac01e4f56f1a23817c448011c5bc8005de8e390585dadba4a1cb1cd,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722449354986340172,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a9f0ab70-18f4-4fac-b858-a9177077fe29,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f043eb2392c22ee26a4729969283f278a4dc7ff4784addf7139958beb2cba855,PodSandboxId:ca80c1ffca60f9300fb9fad844ba123990f8649a5d487fa47df8efae1ff0aaa9,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722449339791332921,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5cfdc65f69-k7clq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e4be77b6-aa7c-45e2-90a1-6a8264fd5101,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP
\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e3003a51718276ef12910fdbe77c2a44c14eca900a048fd6d388627fd4aeea1d,PodSandboxId:59479d972775d88e17a39762b70c70e8889bc4913d534d97e654cdbf903fd9c8,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1722449334714011386,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernet
es.pod.uid: da5fdf31-6093-4e84-baf1-ff5285f9798f,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:57bdb8e09be40a0925d7a86eb613ef8a0455415666562743e36e6800b8636847,PodSandboxId:415770964bcb598d4caf8eb7371cc11708e1adac5ff3e5ba8ed3b532b1f9b6f9,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899,State:CONTAINER_RUNNING,CreatedAt:1722449324168779807,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-hqxh6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4fb623c0-4be3-421c-85
d6-1d76a90b874f,},Annotations:map[string]string{io.kubernetes.container.hash: 65225ff2,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9ea2bc105f57ae4d05e4535f2851a35b9014c7574a63fef533fa0c487002941c,PodSandboxId:9d05560d6ac01e4f56f1a23817c448011c5bc8005de8e390585dadba4a1cb1cd,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1722449324149849313,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a9f0ab70-18f4-4fac-b858-a9177077fe
29,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ed1c40e21d8aa2b6321516625802154255f5eda1642411e377150b8df1c88bd2,PodSandboxId:ddc8c73facbad87561a8e880f7857f8f33d3918866604fd2cc4ef72036c0afdd,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b,State:CONTAINER_RUNNING,CreatedAt:1722449320465111782,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-673754,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: abd0f70c07cfe668120b96906d35f295,},Annotati
ons:map[string]string{io.kubernetes.container.hash: 9efbbee0,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ee75a53c57652705a335378f60ff96b2fce4543143edc35249ac027c7c6e4a2c,PodSandboxId:b298410c202be1742f59b59dd17a426655a9da051fb1b2502322a12a326678b2,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5,State:CONTAINER_RUNNING,CreatedAt:1722449320445352200,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-673754,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f3de91f645103a5672ba4430b5689
209,},Annotations:map[string]string{io.kubernetes.container.hash: ec666c98,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:65ef90d7b082adf059c9973e7f1dfde4770ed9718d440c8e0d525d427b16f645,PodSandboxId:008b6aaea7d653d06f0a754e6bb559d9a4add01beade6e6c6da02f6a1af8d7f2,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa,State:CONTAINER_RUNNING,CreatedAt:1722449320443397326,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-673754,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 949490960aadd80ed5a2380aa9b8c41d,},Annotations:map[string]string{io.kube
rnetes.container.hash: e06de91,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:895465d024797e36349af3a0f7cd0689ec7e813d811a3205a301baae081986e0,PodSandboxId:263f72a5df79e40b0bffef7cc10cdb04b0609fa0044305a7727953a383895951,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,State:CONTAINER_RUNNING,CreatedAt:1722449320394169910,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-673754,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f53bf54d3ee37ea0b48ca5572890ef53,},Annotations:map[string]string{io.kubernetes.contain
er.hash: ecb4da08,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=daee09a6-9897-4878-848b-482002964764 name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 18:27:51 no-preload-673754 crio[725]: time="2024-07-31 18:27:51.615959544Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=67b3be30-4c69-49b8-9bef-fa3ee171bf25 name=/runtime.v1.RuntimeService/Version
	Jul 31 18:27:51 no-preload-673754 crio[725]: time="2024-07-31 18:27:51.616057554Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=67b3be30-4c69-49b8-9bef-fa3ee171bf25 name=/runtime.v1.RuntimeService/Version
	Jul 31 18:27:51 no-preload-673754 crio[725]: time="2024-07-31 18:27:51.617457211Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=f10a8f27-5bbc-4074-9baa-8b48a005c981 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 31 18:27:51 no-preload-673754 crio[725]: time="2024-07-31 18:27:51.617817206Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722450471617797739,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100741,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=f10a8f27-5bbc-4074-9baa-8b48a005c981 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 31 18:27:51 no-preload-673754 crio[725]: time="2024-07-31 18:27:51.618611156Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=b190884f-aaf4-4424-89c8-8405c67c03c9 name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 18:27:51 no-preload-673754 crio[725]: time="2024-07-31 18:27:51.618661921Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=b190884f-aaf4-4424-89c8-8405c67c03c9 name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 18:27:51 no-preload-673754 crio[725]: time="2024-07-31 18:27:51.618868008Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:6f311536202ba854461dd5abcc13c5b8d006399426b52e0c8d10bc41f06826df,PodSandboxId:9d05560d6ac01e4f56f1a23817c448011c5bc8005de8e390585dadba4a1cb1cd,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722449354986340172,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a9f0ab70-18f4-4fac-b858-a9177077fe29,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f043eb2392c22ee26a4729969283f278a4dc7ff4784addf7139958beb2cba855,PodSandboxId:ca80c1ffca60f9300fb9fad844ba123990f8649a5d487fa47df8efae1ff0aaa9,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722449339791332921,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5cfdc65f69-k7clq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e4be77b6-aa7c-45e2-90a1-6a8264fd5101,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP
\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e3003a51718276ef12910fdbe77c2a44c14eca900a048fd6d388627fd4aeea1d,PodSandboxId:59479d972775d88e17a39762b70c70e8889bc4913d534d97e654cdbf903fd9c8,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1722449334714011386,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernet
es.pod.uid: da5fdf31-6093-4e84-baf1-ff5285f9798f,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:57bdb8e09be40a0925d7a86eb613ef8a0455415666562743e36e6800b8636847,PodSandboxId:415770964bcb598d4caf8eb7371cc11708e1adac5ff3e5ba8ed3b532b1f9b6f9,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899,State:CONTAINER_RUNNING,CreatedAt:1722449324168779807,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-hqxh6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4fb623c0-4be3-421c-85
d6-1d76a90b874f,},Annotations:map[string]string{io.kubernetes.container.hash: 65225ff2,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9ea2bc105f57ae4d05e4535f2851a35b9014c7574a63fef533fa0c487002941c,PodSandboxId:9d05560d6ac01e4f56f1a23817c448011c5bc8005de8e390585dadba4a1cb1cd,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1722449324149849313,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a9f0ab70-18f4-4fac-b858-a9177077fe
29,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ed1c40e21d8aa2b6321516625802154255f5eda1642411e377150b8df1c88bd2,PodSandboxId:ddc8c73facbad87561a8e880f7857f8f33d3918866604fd2cc4ef72036c0afdd,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b,State:CONTAINER_RUNNING,CreatedAt:1722449320465111782,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-673754,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: abd0f70c07cfe668120b96906d35f295,},Annotati
ons:map[string]string{io.kubernetes.container.hash: 9efbbee0,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ee75a53c57652705a335378f60ff96b2fce4543143edc35249ac027c7c6e4a2c,PodSandboxId:b298410c202be1742f59b59dd17a426655a9da051fb1b2502322a12a326678b2,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5,State:CONTAINER_RUNNING,CreatedAt:1722449320445352200,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-673754,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f3de91f645103a5672ba4430b5689
209,},Annotations:map[string]string{io.kubernetes.container.hash: ec666c98,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:65ef90d7b082adf059c9973e7f1dfde4770ed9718d440c8e0d525d427b16f645,PodSandboxId:008b6aaea7d653d06f0a754e6bb559d9a4add01beade6e6c6da02f6a1af8d7f2,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa,State:CONTAINER_RUNNING,CreatedAt:1722449320443397326,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-673754,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 949490960aadd80ed5a2380aa9b8c41d,},Annotations:map[string]string{io.kube
rnetes.container.hash: e06de91,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:895465d024797e36349af3a0f7cd0689ec7e813d811a3205a301baae081986e0,PodSandboxId:263f72a5df79e40b0bffef7cc10cdb04b0609fa0044305a7727953a383895951,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,State:CONTAINER_RUNNING,CreatedAt:1722449320394169910,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-673754,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f53bf54d3ee37ea0b48ca5572890ef53,},Annotations:map[string]string{io.kubernetes.contain
er.hash: ecb4da08,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=b190884f-aaf4-4424-89c8-8405c67c03c9 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	6f311536202ba       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      18 minutes ago      Running             storage-provisioner       3                   9d05560d6ac01       storage-provisioner
	f043eb2392c22       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      18 minutes ago      Running             coredns                   1                   ca80c1ffca60f       coredns-5cfdc65f69-k7clq
	e3003a5171827       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e   18 minutes ago      Running             busybox                   1                   59479d972775d       busybox
	57bdb8e09be40       c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899                                      19 minutes ago      Running             kube-proxy                1                   415770964bcb5       kube-proxy-hqxh6
	9ea2bc105f57a       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      19 minutes ago      Exited              storage-provisioner       2                   9d05560d6ac01       storage-provisioner
	ed1c40e21d8aa       d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b                                      19 minutes ago      Running             kube-scheduler            1                   ddc8c73facbad       kube-scheduler-no-preload-673754
	ee75a53c57652       63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5                                      19 minutes ago      Running             kube-controller-manager   1                   b298410c202be       kube-controller-manager-no-preload-673754
	65ef90d7b082a       cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa                                      19 minutes ago      Running             etcd                      1                   008b6aaea7d65       etcd-no-preload-673754
	895465d024797       f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938                                      19 minutes ago      Running             kube-apiserver            1                   263f72a5df79e       kube-apiserver-no-preload-673754
	
	
	==> coredns [f043eb2392c22ee26a4729969283f278a4dc7ff4784addf7139958beb2cba855] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 7996ca7cabdb2fd3e37b0463c78d5a492f8d30690ee66a90ae7ff24c50d9d936a24d239b3a5946771521ff70c09a796ffaf6ef8abe5753fd1ad5af38b6cdbb7f
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:53099 - 39463 "HINFO IN 4454942561105742238.88549225925472576. udp 55 false 512" NXDOMAIN qr,rd,ra 55 0.011770935s
	
	
	==> describe nodes <==
	Name:               no-preload-673754
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=no-preload-673754
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=1d737dad7efa60c56d30434fcd857dd3b14c91d9
	                    minikube.k8s.io/name=no-preload-673754
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_07_31T17_59_07_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 31 Jul 2024 17:59:02 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-673754
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 31 Jul 2024 18:27:46 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 31 Jul 2024 18:24:32 +0000   Wed, 31 Jul 2024 17:58:59 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 31 Jul 2024 18:24:32 +0000   Wed, 31 Jul 2024 17:58:59 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 31 Jul 2024 18:24:32 +0000   Wed, 31 Jul 2024 17:58:59 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 31 Jul 2024 18:24:32 +0000   Wed, 31 Jul 2024 18:08:53 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.61.126
	  Hostname:    no-preload-673754
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 492b651246f74aa8a677f18840110d78
	  System UUID:                492b6512-46f7-4aa8-a677-f18840110d78
	  Boot ID:                    123a56b1-98f1-4fc5-b8eb-293998eff487
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.0-beta.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         28m
	  kube-system                 coredns-5cfdc65f69-k7clq                     100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     28m
	  kube-system                 etcd-no-preload-673754                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         28m
	  kube-system                 kube-apiserver-no-preload-673754             250m (12%)    0 (0%)      0 (0%)           0 (0%)         28m
	  kube-system                 kube-controller-manager-no-preload-673754    200m (10%)    0 (0%)      0 (0%)           0 (0%)         28m
	  kube-system                 kube-proxy-hqxh6                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         28m
	  kube-system                 kube-scheduler-no-preload-673754             100m (5%)     0 (0%)      0 (0%)           0 (0%)         28m
	  kube-system                 metrics-server-78fcd8795b-27pkr              100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         27m
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         28m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             370Mi (17%)  170Mi (8%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 28m                kube-proxy       
	  Normal  Starting                 19m                kube-proxy       
	  Normal  NodeHasSufficientPID     28m (x7 over 28m)  kubelet          Node no-preload-673754 status is now: NodeHasSufficientPID
	  Normal  NodeHasNoDiskPressure    28m (x8 over 28m)  kubelet          Node no-preload-673754 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  28m (x8 over 28m)  kubelet          Node no-preload-673754 status is now: NodeHasSufficientMemory
	  Normal  NodeAllocatableEnforced  28m                kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 28m                kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  28m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  28m                kubelet          Node no-preload-673754 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    28m                kubelet          Node no-preload-673754 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     28m                kubelet          Node no-preload-673754 status is now: NodeHasSufficientPID
	  Normal  NodeReady                28m                kubelet          Node no-preload-673754 status is now: NodeReady
	  Normal  RegisteredNode           28m                node-controller  Node no-preload-673754 event: Registered Node no-preload-673754 in Controller
	  Normal  Starting                 19m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  19m (x8 over 19m)  kubelet          Node no-preload-673754 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    19m (x8 over 19m)  kubelet          Node no-preload-673754 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     19m (x7 over 19m)  kubelet          Node no-preload-673754 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  19m                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           19m                node-controller  Node no-preload-673754 event: Registered Node no-preload-673754 in Controller
	
	
	==> dmesg <==
	[Jul31 18:08] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.056591] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.042887] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +5.015025] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +1.876733] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.525125] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000013] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +8.683069] systemd-fstab-generator[639]: Ignoring "noauto" option for root device
	[  +0.058853] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.050846] systemd-fstab-generator[651]: Ignoring "noauto" option for root device
	[  +0.191576] systemd-fstab-generator[665]: Ignoring "noauto" option for root device
	[  +0.123429] systemd-fstab-generator[677]: Ignoring "noauto" option for root device
	[  +0.269806] systemd-fstab-generator[710]: Ignoring "noauto" option for root device
	[ +14.548804] systemd-fstab-generator[1179]: Ignoring "noauto" option for root device
	[  +0.057739] kauditd_printk_skb: 130 callbacks suppressed
	[  +2.263358] systemd-fstab-generator[1301]: Ignoring "noauto" option for root device
	[  +3.074634] kauditd_printk_skb: 97 callbacks suppressed
	[  +3.985444] systemd-fstab-generator[1931]: Ignoring "noauto" option for root device
	[  +1.452362] kauditd_printk_skb: 59 callbacks suppressed
	[  +6.728416] kauditd_printk_skb: 20 callbacks suppressed
	[  +5.069810] kauditd_printk_skb: 9 callbacks suppressed
	
	
	==> etcd [65ef90d7b082adf059c9973e7f1dfde4770ed9718d440c8e0d525d427b16f645] <==
	{"level":"info","ts":"2024-07-31T18:08:40.925643Z","caller":"embed/etcd.go:570","msg":"cmux::serve","address":"192.168.61.126:2380"}
	{"level":"info","ts":"2024-07-31T18:08:40.926046Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"2456aadc51424cb5","initial-advertise-peer-urls":["https://192.168.61.126:2380"],"listen-peer-urls":["https://192.168.61.126:2380"],"advertise-client-urls":["https://192.168.61.126:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.61.126:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-07-31T18:08:40.926088Z","caller":"embed/etcd.go:858","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-07-31T18:08:42.248853Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"2456aadc51424cb5 is starting a new election at term 2"}
	{"level":"info","ts":"2024-07-31T18:08:42.248916Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"2456aadc51424cb5 became pre-candidate at term 2"}
	{"level":"info","ts":"2024-07-31T18:08:42.248958Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"2456aadc51424cb5 received MsgPreVoteResp from 2456aadc51424cb5 at term 2"}
	{"level":"info","ts":"2024-07-31T18:08:42.248973Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"2456aadc51424cb5 became candidate at term 3"}
	{"level":"info","ts":"2024-07-31T18:08:42.248979Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"2456aadc51424cb5 received MsgVoteResp from 2456aadc51424cb5 at term 3"}
	{"level":"info","ts":"2024-07-31T18:08:42.248987Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"2456aadc51424cb5 became leader at term 3"}
	{"level":"info","ts":"2024-07-31T18:08:42.248993Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 2456aadc51424cb5 elected leader 2456aadc51424cb5 at term 3"}
	{"level":"info","ts":"2024-07-31T18:08:42.253539Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"2456aadc51424cb5","local-member-attributes":"{Name:no-preload-673754 ClientURLs:[https://192.168.61.126:2379]}","request-path":"/0/members/2456aadc51424cb5/attributes","cluster-id":"c6330389cea17d04","publish-timeout":"7s"}
	{"level":"info","ts":"2024-07-31T18:08:42.253559Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-07-31T18:08:42.253683Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-07-31T18:08:42.253936Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-07-31T18:08:42.253949Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-07-31T18:08:42.254849Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-07-31T18:08:42.254872Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-07-31T18:08:42.255841Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.61.126:2379"}
	{"level":"info","ts":"2024-07-31T18:08:42.256192Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-07-31T18:18:42.283805Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":877}
	{"level":"info","ts":"2024-07-31T18:18:42.294193Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":877,"took":"10.010743ms","hash":326909617,"current-db-size-bytes":2834432,"current-db-size":"2.8 MB","current-db-size-in-use-bytes":2834432,"current-db-size-in-use":"2.8 MB"}
	{"level":"info","ts":"2024-07-31T18:18:42.29425Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":326909617,"revision":877,"compact-revision":-1}
	{"level":"info","ts":"2024-07-31T18:23:42.290616Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1119}
	{"level":"info","ts":"2024-07-31T18:23:42.294424Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":1119,"took":"3.490809ms","hash":4187699919,"current-db-size-bytes":2834432,"current-db-size":"2.8 MB","current-db-size-in-use-bytes":1671168,"current-db-size-in-use":"1.7 MB"}
	{"level":"info","ts":"2024-07-31T18:23:42.29447Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":4187699919,"revision":1119,"compact-revision":877}
	
	
	==> kernel <==
	 18:27:51 up 19 min,  0 users,  load average: 0.29, 0.15, 0.10
	Linux no-preload-673754 5.10.207 #1 SMP Mon Jul 29 15:19:02 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [895465d024797e36349af3a0f7cd0689ec7e813d811a3205a301baae081986e0] <==
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	W0731 18:23:44.570706       1 handler_proxy.go:99] no RequestInfo found in the context
	E0731 18:23:44.570756       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I0731 18:23:44.571759       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0731 18:23:44.571847       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0731 18:24:44.572753       1 handler_proxy.go:99] no RequestInfo found in the context
	E0731 18:24:44.572848       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	W0731 18:24:44.572941       1 handler_proxy.go:99] no RequestInfo found in the context
	E0731 18:24:44.572979       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I0731 18:24:44.574014       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0731 18:24:44.574097       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0731 18:26:44.575060       1 handler_proxy.go:99] no RequestInfo found in the context
	W0731 18:26:44.575276       1 handler_proxy.go:99] no RequestInfo found in the context
	E0731 18:26:44.575393       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	E0731 18:26:44.575474       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I0731 18:26:44.576647       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0731 18:26:44.576709       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	
	==> kube-controller-manager [ee75a53c57652705a335378f60ff96b2fce4543143edc35249ac027c7c6e4a2c] <==
	E0731 18:22:48.298886       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0731 18:22:48.433427       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0731 18:23:18.305497       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0731 18:23:18.442979       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0731 18:23:48.312291       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0731 18:23:48.450715       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0731 18:24:18.318562       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0731 18:24:18.459621       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0731 18:24:32.902267       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="no-preload-673754"
	E0731 18:24:48.325746       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0731 18:24:48.467851       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0731 18:25:02.810485       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-78fcd8795b" duration="264.698µs"
	I0731 18:25:14.810486       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-78fcd8795b" duration="59.849µs"
	E0731 18:25:18.331465       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0731 18:25:18.475257       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0731 18:25:48.338453       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0731 18:25:48.483021       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0731 18:26:18.344910       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0731 18:26:18.490694       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0731 18:26:48.351820       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0731 18:26:48.500647       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0731 18:27:18.358641       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0731 18:27:18.510509       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0731 18:27:48.366800       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0731 18:27:48.518599       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	
	==> kube-proxy [57bdb8e09be40a0925d7a86eb613ef8a0455415666562743e36e6800b8636847] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0731 18:08:44.346604       1 proxier.go:705] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0731 18:08:44.360572       1 server.go:682] "Successfully retrieved node IP(s)" IPs=["192.168.61.126"]
	E0731 18:08:44.360653       1 server.go:235] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0731 18:08:44.391327       1 server_linux.go:147] "No iptables support for family" ipFamily="IPv6"
	I0731 18:08:44.391401       1 server.go:246] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0731 18:08:44.391445       1 server_linux.go:170] "Using iptables Proxier"
	I0731 18:08:44.393636       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0731 18:08:44.393963       1 server.go:488] "Version info" version="v1.31.0-beta.0"
	I0731 18:08:44.394104       1 server.go:490] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0731 18:08:44.395646       1 config.go:197] "Starting service config controller"
	I0731 18:08:44.395811       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0731 18:08:44.395872       1 config.go:104] "Starting endpoint slice config controller"
	I0731 18:08:44.395900       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0731 18:08:44.396596       1 config.go:326] "Starting node config controller"
	I0731 18:08:44.396646       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0731 18:08:44.497174       1 shared_informer.go:320] Caches are synced for node config
	I0731 18:08:44.497258       1 shared_informer.go:320] Caches are synced for service config
	I0731 18:08:44.497282       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [ed1c40e21d8aa2b6321516625802154255f5eda1642411e377150b8df1c88bd2] <==
	I0731 18:08:41.422236       1 serving.go:386] Generated self-signed cert in-memory
	W0731 18:08:43.475175       1 requestheader_controller.go:196] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0731 18:08:43.475293       1 authentication.go:370] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0731 18:08:43.475325       1 authentication.go:371] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0731 18:08:43.475388       1 authentication.go:372] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0731 18:08:43.553655       1 server.go:164] "Starting Kubernetes Scheduler" version="v1.31.0-beta.0"
	I0731 18:08:43.553713       1 server.go:166] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0731 18:08:43.556108       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0731 18:08:43.558238       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0731 18:08:43.558272       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0731 18:08:43.558325       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0731 18:08:43.659235       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Jul 31 18:25:39 no-preload-673754 kubelet[1308]: E0731 18:25:39.819791    1308 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 31 18:25:39 no-preload-673754 kubelet[1308]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 31 18:25:39 no-preload-673754 kubelet[1308]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 31 18:25:39 no-preload-673754 kubelet[1308]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 31 18:25:39 no-preload-673754 kubelet[1308]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 31 18:25:41 no-preload-673754 kubelet[1308]: E0731 18:25:41.798984    1308 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-78fcd8795b-27pkr" podUID="a63a156b-0446-4bd9-8619-de75edaeb481"
	Jul 31 18:25:55 no-preload-673754 kubelet[1308]: E0731 18:25:55.796621    1308 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-78fcd8795b-27pkr" podUID="a63a156b-0446-4bd9-8619-de75edaeb481"
	Jul 31 18:26:09 no-preload-673754 kubelet[1308]: E0731 18:26:09.799280    1308 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-78fcd8795b-27pkr" podUID="a63a156b-0446-4bd9-8619-de75edaeb481"
	Jul 31 18:26:24 no-preload-673754 kubelet[1308]: E0731 18:26:24.796943    1308 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-78fcd8795b-27pkr" podUID="a63a156b-0446-4bd9-8619-de75edaeb481"
	Jul 31 18:26:38 no-preload-673754 kubelet[1308]: E0731 18:26:38.796238    1308 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-78fcd8795b-27pkr" podUID="a63a156b-0446-4bd9-8619-de75edaeb481"
	Jul 31 18:26:39 no-preload-673754 kubelet[1308]: E0731 18:26:39.821674    1308 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 31 18:26:39 no-preload-673754 kubelet[1308]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 31 18:26:39 no-preload-673754 kubelet[1308]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 31 18:26:39 no-preload-673754 kubelet[1308]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 31 18:26:39 no-preload-673754 kubelet[1308]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 31 18:26:52 no-preload-673754 kubelet[1308]: E0731 18:26:52.796955    1308 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-78fcd8795b-27pkr" podUID="a63a156b-0446-4bd9-8619-de75edaeb481"
	Jul 31 18:27:06 no-preload-673754 kubelet[1308]: E0731 18:27:06.796980    1308 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-78fcd8795b-27pkr" podUID="a63a156b-0446-4bd9-8619-de75edaeb481"
	Jul 31 18:27:18 no-preload-673754 kubelet[1308]: E0731 18:27:18.796677    1308 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-78fcd8795b-27pkr" podUID="a63a156b-0446-4bd9-8619-de75edaeb481"
	Jul 31 18:27:32 no-preload-673754 kubelet[1308]: E0731 18:27:32.797349    1308 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-78fcd8795b-27pkr" podUID="a63a156b-0446-4bd9-8619-de75edaeb481"
	Jul 31 18:27:39 no-preload-673754 kubelet[1308]: E0731 18:27:39.822986    1308 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 31 18:27:39 no-preload-673754 kubelet[1308]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 31 18:27:39 no-preload-673754 kubelet[1308]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 31 18:27:39 no-preload-673754 kubelet[1308]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 31 18:27:39 no-preload-673754 kubelet[1308]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 31 18:27:45 no-preload-673754 kubelet[1308]: E0731 18:27:45.797310    1308 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-78fcd8795b-27pkr" podUID="a63a156b-0446-4bd9-8619-de75edaeb481"
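Two distinct errors repeat in the kubelet log above: the iptables canary fails because the legacy ip6tables binary cannot find an IPv6 nat table, and metrics-server stays in ImagePullBackOff because its image references the unreachable fake.domain registry. A sketch of two checks, assuming SSH access via the profile; loading ip6table_nat is the usual remedy for the missing table but is an assumption here:

	# see whether the IPv6 nat table becomes available once the module is loaded
	out/minikube-linux-amd64 -p no-preload-673754 ssh "sudo modprobe ip6table_nat && sudo ip6tables -t nat -L -n"

	# confirm which image the metrics-server deployment is actually trying to pull
	kubectl --context no-preload-673754 -n kube-system get deploy metrics-server \
	  -o jsonpath='{.spec.template.spec.containers[0].image}'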
	
	
	==> storage-provisioner [6f311536202ba854461dd5abcc13c5b8d006399426b52e0c8d10bc41f06826df] <==
	I0731 18:09:15.070617       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0731 18:09:15.082933       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0731 18:09:15.082991       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0731 18:09:32.482653       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0731 18:09:32.482927       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-673754_47dbf9a2-3899-425f-9beb-6ecf4e744290!
	I0731 18:09:32.483112       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"1d1d23c0-0b26-4c8b-b8c6-376b082cbdb2", APIVersion:"v1", ResourceVersion:"657", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-673754_47dbf9a2-3899-425f-9beb-6ecf4e744290 became leader
	I0731 18:09:32.584873       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-673754_47dbf9a2-3899-425f-9beb-6ecf4e744290!
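This provisioner took its leader lease through the kube-system/k8s.io-minikube-hostpath Endpoints object named in the event above. A small sketch for inspecting the current lease holder, which is recorded as an annotation on that object:

	kubectl --context no-preload-673754 -n kube-system get endpoints k8s.io-minikube-hostpath -o yaml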
	
	
	==> storage-provisioner [9ea2bc105f57ae4d05e4535f2851a35b9014c7574a63fef533fa0c487002941c] <==
	I0731 18:08:44.267233       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F0731 18:09:14.272114       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
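10.96.0.1:443 is the ClusterIP of the default kubernetes Service, so this earlier provisioner instance exited fatally apparently because the API server was not yet reachable through that VIP; the other storage-provisioner container shown above started a second later and acquired the lease. A purely illustrative sketch for confirming the service address:

	kubectl --context no-preload-673754 -n default get svc kubernetes \
	  -o jsonpath='{.spec.clusterIP}:{.spec.ports[0].port}{"\n"}'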
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-673754 -n no-preload-673754
helpers_test.go:261: (dbg) Run:  kubectl --context no-preload-673754 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-78fcd8795b-27pkr
helpers_test.go:274: ======> post-mortem[TestStartStop/group/no-preload/serial/AddonExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context no-preload-673754 describe pod metrics-server-78fcd8795b-27pkr
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context no-preload-673754 describe pod metrics-server-78fcd8795b-27pkr: exit status 1 (70.274329ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-78fcd8795b-27pkr" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context no-preload-673754 describe pod metrics-server-78fcd8795b-27pkr: exit status 1
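The describe above returns NotFound because the metrics-server pod listed as non-running a moment earlier appears to have been deleted in the meantime. A sketch that tolerates that race on a re-check; the namespace is inferred from the kubelet log earlier in this section:

	kubectl --context no-preload-673754 -n kube-system get pod metrics-server-78fcd8795b-27pkr \
	  --ignore-not-found -o wide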
--- FAIL: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (335.19s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (139.62s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.26:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.26:8443: connect: connection refused
[identical warning repeated 13 more times]
E0731 18:25:35.991402   15259 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/calico-985288/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.26:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.26:8443: connect: connection refused
[identical warning repeated 23 more times]
E0731 18:26:00.055590   15259 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/functional-496242/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.26:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.26:8443: connect: connection refused
[identical warning repeated 7 more times]
E0731 18:26:07.908780   15259 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/custom-flannel-985288/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.26:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.26:8443: connect: connection refused
[identical warning repeated 64 more times]
E0731 18:27:13.608631   15259 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/bridge-985288/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.26:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.26:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.26:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.26:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.26:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.26:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.26:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.26:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.26:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.26:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.26:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.26:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.26:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.26:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.26:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.26:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.26:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.26:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.26:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.26:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.26:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.26:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.26:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.26:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.26:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.26:8443: connect: connection refused
[previous warning repeated 13 more times while polling; the API server at 192.168.39.26:8443 kept refusing connections]
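Every one of the warnings above wraps the same low-level failure: nothing is listening on the API server endpoint 192.168.39.26:8443 while the test polls for the dashboard pod. As an illustration only (not part of the test suite), a minimal Go sketch of an equivalent reachability probe, with the address taken from the log and everything else assumed, would be:

	package main

	import (
		"fmt"
		"net"
		"time"
	)

	func main() {
		// API server endpoint reported in the warnings above (assumption: still
		// the address of old-k8s-version-276459 when this probe is run).
		const addr = "192.168.39.26:8443"

		// A bare TCP dial; "connect: connection refused" here corresponds to the
		// error wrapped inside each pod-list warning.
		conn, err := net.DialTimeout("tcp", addr, 5*time.Second)
		if err != nil {
			fmt.Printf("apiserver unreachable: %v\n", err)
			return
		}
		conn.Close()
		fmt.Println("apiserver port is accepting connections")
	}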
start_stop_delete_test.go:287: ***** TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:287: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-276459 -n old-k8s-version-276459
start_stop_delete_test.go:287: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-276459 -n old-k8s-version-276459: exit status 2 (218.780119ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:287: status error: exit status 2 (may be ok)
start_stop_delete_test.go:287: "old-k8s-version-276459" apiserver is not running, skipping kubectl commands (state="Stopped")
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-276459 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context old-k8s-version-276459 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: context deadline exceeded (1.388µs)
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context old-k8s-version-276459 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": context deadline exceeded
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
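The final assertion above appears to be a substring match against the output of "kubectl --context old-k8s-version-276459 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard"; because that command hit its deadline, the captured deployment info was empty and the check could only fail. A rough Go sketch of that kind of check, assuming the describe output were available as a string, might be:

	package main

	import (
		"fmt"
		"strings"
	)

	func main() {
		// In the failing run this string came back empty because the kubectl
		// describe call above timed out; a real value is assumed for illustration.
		deploymentInfo := ""

		const wantImage = "registry.k8s.io/echoserver:1.4"
		if !strings.Contains(deploymentInfo, wantImage) {
			fmt.Printf("addon did not load correct image, expected to contain %q\n", wantImage)
		}
	}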
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-276459 -n old-k8s-version-276459
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-276459 -n old-k8s-version-276459: exit status 2 (225.163032ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-276459 logs -n 25
E0731 18:27:40.753244   15259 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/flannel-985288/client.crt: no such file or directory
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p old-k8s-version-276459 logs -n 25: (1.576672599s)
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p enable-default-cni-985288                           | enable-default-cni-985288    | jenkins | v1.33.1 | 31 Jul 24 17:58 UTC | 31 Jul 24 17:58 UTC |
	|         | sudo cat                                               |                              |         |         |                     |                     |
	|         | /etc/containerd/config.toml                            |                              |         |         |                     |                     |
	| ssh     | -p enable-default-cni-985288                           | enable-default-cni-985288    | jenkins | v1.33.1 | 31 Jul 24 17:58 UTC | 31 Jul 24 17:58 UTC |
	|         | sudo containerd config dump                            |                              |         |         |                     |                     |
	| ssh     | -p enable-default-cni-985288                           | enable-default-cni-985288    | jenkins | v1.33.1 | 31 Jul 24 17:58 UTC | 31 Jul 24 17:58 UTC |
	|         | sudo systemctl status crio                             |                              |         |         |                     |                     |
	|         | --all --full --no-pager                                |                              |         |         |                     |                     |
	| ssh     | -p enable-default-cni-985288                           | enable-default-cni-985288    | jenkins | v1.33.1 | 31 Jul 24 17:58 UTC | 31 Jul 24 17:58 UTC |
	|         | sudo systemctl cat crio                                |                              |         |         |                     |                     |
	|         | --no-pager                                             |                              |         |         |                     |                     |
	| ssh     | -p enable-default-cni-985288                           | enable-default-cni-985288    | jenkins | v1.33.1 | 31 Jul 24 17:58 UTC | 31 Jul 24 17:58 UTC |
	|         | sudo find /etc/crio -type f                            |                              |         |         |                     |                     |
	|         | -exec sh -c 'echo {}; cat {}'                          |                              |         |         |                     |                     |
	|         | \;                                                     |                              |         |         |                     |                     |
	| ssh     | -p enable-default-cni-985288                           | enable-default-cni-985288    | jenkins | v1.33.1 | 31 Jul 24 17:58 UTC | 31 Jul 24 17:58 UTC |
	|         | sudo crio config                                       |                              |         |         |                     |                     |
	| delete  | -p enable-default-cni-985288                           | enable-default-cni-985288    | jenkins | v1.33.1 | 31 Jul 24 17:58 UTC | 31 Jul 24 17:58 UTC |
	| delete  | -p                                                     | disable-driver-mounts-280161 | jenkins | v1.33.1 | 31 Jul 24 17:58 UTC | 31 Jul 24 17:58 UTC |
	|         | disable-driver-mounts-280161                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-094310 | jenkins | v1.33.1 | 31 Jul 24 17:58 UTC | 31 Jul 24 18:00 UTC |
	|         | default-k8s-diff-port-094310                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-673754             | no-preload-673754            | jenkins | v1.33.1 | 31 Jul 24 17:59 UTC | 31 Jul 24 17:59 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-673754                                   | no-preload-673754            | jenkins | v1.33.1 | 31 Jul 24 17:59 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-094310  | default-k8s-diff-port-094310 | jenkins | v1.33.1 | 31 Jul 24 18:00 UTC | 31 Jul 24 18:00 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-094310 | jenkins | v1.33.1 | 31 Jul 24 18:00 UTC |                     |
	|         | default-k8s-diff-port-094310                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-436067            | embed-certs-436067           | jenkins | v1.33.1 | 31 Jul 24 18:00 UTC | 31 Jul 24 18:00 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-436067                                  | embed-certs-436067           | jenkins | v1.33.1 | 31 Jul 24 18:00 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-276459        | old-k8s-version-276459       | jenkins | v1.33.1 | 31 Jul 24 18:02 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-673754                  | no-preload-673754            | jenkins | v1.33.1 | 31 Jul 24 18:02 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-673754 --memory=2200                     | no-preload-673754            | jenkins | v1.33.1 | 31 Jul 24 18:02 UTC | 31 Jul 24 18:13 UTC |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0-beta.0                    |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-094310       | default-k8s-diff-port-094310 | jenkins | v1.33.1 | 31 Jul 24 18:02 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-436067                 | embed-certs-436067           | jenkins | v1.33.1 | 31 Jul 24 18:02 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-094310 | jenkins | v1.33.1 | 31 Jul 24 18:02 UTC | 31 Jul 24 18:12 UTC |
	|         | default-k8s-diff-port-094310                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3                           |                              |         |         |                     |                     |
	| start   | -p embed-certs-436067                                  | embed-certs-436067           | jenkins | v1.33.1 | 31 Jul 24 18:02 UTC | 31 Jul 24 18:12 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3                           |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-276459                              | old-k8s-version-276459       | jenkins | v1.33.1 | 31 Jul 24 18:03 UTC | 31 Jul 24 18:03 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-276459             | old-k8s-version-276459       | jenkins | v1.33.1 | 31 Jul 24 18:03 UTC | 31 Jul 24 18:03 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-276459                              | old-k8s-version-276459       | jenkins | v1.33.1 | 31 Jul 24 18:03 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/31 18:03:55
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0731 18:03:55.344211   74203 out.go:291] Setting OutFile to fd 1 ...
	I0731 18:03:55.344313   74203 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 18:03:55.344321   74203 out.go:304] Setting ErrFile to fd 2...
	I0731 18:03:55.344324   74203 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 18:03:55.344541   74203 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19349-8084/.minikube/bin
	I0731 18:03:55.345055   74203 out.go:298] Setting JSON to false
	I0731 18:03:55.345905   74203 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":6379,"bootTime":1722442656,"procs":196,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1065-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0731 18:03:55.345962   74203 start.go:139] virtualization: kvm guest
	I0731 18:03:55.347848   74203 out.go:177] * [old-k8s-version-276459] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0731 18:03:55.349045   74203 notify.go:220] Checking for updates...
	I0731 18:03:55.349052   74203 out.go:177]   - MINIKUBE_LOCATION=19349
	I0731 18:03:55.350359   74203 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0731 18:03:55.351583   74203 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19349-8084/kubeconfig
	I0731 18:03:55.352789   74203 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19349-8084/.minikube
	I0731 18:03:55.354046   74203 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0731 18:03:55.355244   74203 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0731 18:03:55.356819   74203 config.go:182] Loaded profile config "old-k8s-version-276459": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0731 18:03:55.357218   74203 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 18:03:55.357268   74203 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 18:03:55.372081   74203 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38815
	I0731 18:03:55.372424   74203 main.go:141] libmachine: () Calling .GetVersion
	I0731 18:03:55.372950   74203 main.go:141] libmachine: Using API Version  1
	I0731 18:03:55.372972   74203 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 18:03:55.373263   74203 main.go:141] libmachine: () Calling .GetMachineName
	I0731 18:03:55.373466   74203 main.go:141] libmachine: (old-k8s-version-276459) Calling .DriverName
	I0731 18:03:55.375198   74203 out.go:177] * Kubernetes 1.30.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.30.3
	I0731 18:03:55.376370   74203 driver.go:392] Setting default libvirt URI to qemu:///system
	I0731 18:03:55.376714   74203 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 18:03:55.376748   74203 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 18:03:55.390924   74203 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46311
	I0731 18:03:55.391380   74203 main.go:141] libmachine: () Calling .GetVersion
	I0731 18:03:55.391853   74203 main.go:141] libmachine: Using API Version  1
	I0731 18:03:55.391875   74203 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 18:03:55.392165   74203 main.go:141] libmachine: () Calling .GetMachineName
	I0731 18:03:55.392389   74203 main.go:141] libmachine: (old-k8s-version-276459) Calling .DriverName
	I0731 18:03:55.425283   74203 out.go:177] * Using the kvm2 driver based on existing profile
	I0731 18:03:55.426485   74203 start.go:297] selected driver: kvm2
	I0731 18:03:55.426517   74203 start.go:901] validating driver "kvm2" against &{Name:old-k8s-version-276459 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{K
ubernetesVersion:v1.20.0 ClusterName:old-k8s-version-276459 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.26 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280
h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0731 18:03:55.426632   74203 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0731 18:03:55.427322   74203 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0731 18:03:55.427419   74203 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19349-8084/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0731 18:03:55.441518   74203 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0731 18:03:55.441891   74203 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0731 18:03:55.441921   74203 cni.go:84] Creating CNI manager for ""
	I0731 18:03:55.441928   74203 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0731 18:03:55.441970   74203 start.go:340] cluster config:
	{Name:old-k8s-version-276459 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-276459 Namespace:default
APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.26 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p20
00.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0731 18:03:55.442088   74203 iso.go:125] acquiring lock: {Name:mk4494bb5a63902bf12a909c3109db85229bc516 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0731 18:03:55.443745   74203 out.go:177] * Starting "old-k8s-version-276459" primary control-plane node in "old-k8s-version-276459" cluster
	I0731 18:03:55.299338   73479 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.126:22: connect: no route to host
	I0731 18:03:55.445026   74203 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0731 18:03:55.445062   74203 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19349-8084/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0731 18:03:55.445085   74203 cache.go:56] Caching tarball of preloaded images
	I0731 18:03:55.445157   74203 preload.go:172] Found /home/jenkins/minikube-integration/19349-8084/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0731 18:03:55.445167   74203 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0731 18:03:55.445250   74203 profile.go:143] Saving config to /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/old-k8s-version-276459/config.json ...
	I0731 18:03:55.445412   74203 start.go:360] acquireMachinesLock for old-k8s-version-276459: {Name:mkd78eacfab7d6c3058c6674434b3d889ec957e0 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0731 18:03:58.371340   73479 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.126:22: connect: no route to host
	I0731 18:04:04.451379   73479 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.126:22: connect: no route to host
	I0731 18:04:07.523408   73479 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.126:22: connect: no route to host
	I0731 18:04:13.603407   73479 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.126:22: connect: no route to host
	I0731 18:04:16.675437   73479 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.126:22: connect: no route to host
	I0731 18:04:22.755418   73479 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.126:22: connect: no route to host
	I0731 18:04:25.827434   73479 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.126:22: connect: no route to host
	I0731 18:04:31.907379   73479 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.126:22: connect: no route to host
	I0731 18:04:34.979426   73479 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.126:22: connect: no route to host
	I0731 18:04:41.059417   73479 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.126:22: connect: no route to host
	I0731 18:04:44.131434   73479 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.126:22: connect: no route to host
	I0731 18:04:50.211391   73479 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.126:22: connect: no route to host
	I0731 18:04:53.283445   73479 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.126:22: connect: no route to host
	I0731 18:04:59.363428   73479 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.126:22: connect: no route to host
	I0731 18:05:02.435450   73479 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.126:22: connect: no route to host
	I0731 18:05:08.515394   73479 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.126:22: connect: no route to host
	I0731 18:05:11.587394   73479 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.126:22: connect: no route to host
	I0731 18:05:17.667388   73479 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.126:22: connect: no route to host
	I0731 18:05:20.739413   73479 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.126:22: connect: no route to host
	I0731 18:05:26.819368   73479 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.126:22: connect: no route to host
	I0731 18:05:29.891394   73479 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.126:22: connect: no route to host
	I0731 18:05:35.971391   73479 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.126:22: connect: no route to host
	I0731 18:05:39.043445   73479 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.126:22: connect: no route to host
	I0731 18:05:45.123378   73479 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.126:22: connect: no route to host
	I0731 18:05:48.195378   73479 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.126:22: connect: no route to host
	I0731 18:05:54.275417   73479 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.126:22: connect: no route to host
	I0731 18:05:57.347374   73479 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.126:22: connect: no route to host
	I0731 18:06:03.427390   73479 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.126:22: connect: no route to host
	I0731 18:06:06.499378   73479 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.126:22: connect: no route to host
	I0731 18:06:12.579395   73479 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.126:22: connect: no route to host
	I0731 18:06:15.651447   73479 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.126:22: connect: no route to host
	I0731 18:06:21.731394   73479 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.126:22: connect: no route to host
	I0731 18:06:24.803405   73479 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.126:22: connect: no route to host
	I0731 18:06:30.883468   73479 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.126:22: connect: no route to host
	I0731 18:06:33.955397   73479 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.126:22: connect: no route to host
	I0731 18:06:40.035387   73479 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.126:22: connect: no route to host
	I0731 18:06:43.107448   73479 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.126:22: connect: no route to host
	I0731 18:06:49.187413   73479 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.126:22: connect: no route to host
	I0731 18:06:52.259420   73479 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.126:22: connect: no route to host
	I0731 18:06:58.339413   73479 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.126:22: connect: no route to host
	I0731 18:07:01.411396   73479 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.126:22: connect: no route to host
	I0731 18:07:04.416121   73696 start.go:364] duration metric: took 4m18.256589549s to acquireMachinesLock for "default-k8s-diff-port-094310"
	I0731 18:07:04.416183   73696 start.go:96] Skipping create...Using existing machine configuration
	I0731 18:07:04.416192   73696 fix.go:54] fixHost starting: 
	I0731 18:07:04.416522   73696 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 18:07:04.416570   73696 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 18:07:04.432249   73696 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32817
	I0731 18:07:04.432715   73696 main.go:141] libmachine: () Calling .GetVersion
	I0731 18:07:04.433206   73696 main.go:141] libmachine: Using API Version  1
	I0731 18:07:04.433234   73696 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 18:07:04.433616   73696 main.go:141] libmachine: () Calling .GetMachineName
	I0731 18:07:04.433833   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) Calling .DriverName
	I0731 18:07:04.434001   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) Calling .GetState
	I0731 18:07:04.436061   73696 fix.go:112] recreateIfNeeded on default-k8s-diff-port-094310: state=Stopped err=<nil>
	I0731 18:07:04.436082   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) Calling .DriverName
	W0731 18:07:04.436241   73696 fix.go:138] unexpected machine state, will restart: <nil>
	I0731 18:07:04.438139   73696 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-094310" ...
	I0731 18:07:04.439463   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) Calling .Start
	I0731 18:07:04.439678   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) Ensuring networks are active...
	I0731 18:07:04.440645   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) Ensuring network default is active
	I0731 18:07:04.441067   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) Ensuring network mk-default-k8s-diff-port-094310 is active
	I0731 18:07:04.441473   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) Getting domain xml...
	I0731 18:07:04.442331   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) Creating domain...
	I0731 18:07:05.660745   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) Waiting to get IP...
	I0731 18:07:05.661963   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) DBG | domain default-k8s-diff-port-094310 has defined MAC address 52:54:00:a9:b2:ae in network mk-default-k8s-diff-port-094310
	I0731 18:07:05.662532   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) DBG | unable to find current IP address of domain default-k8s-diff-port-094310 in network mk-default-k8s-diff-port-094310
	I0731 18:07:05.662620   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) DBG | I0731 18:07:05.662524   74854 retry.go:31] will retry after 294.438382ms: waiting for machine to come up
	I0731 18:07:05.959200   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) DBG | domain default-k8s-diff-port-094310 has defined MAC address 52:54:00:a9:b2:ae in network mk-default-k8s-diff-port-094310
	I0731 18:07:05.959668   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) DBG | unable to find current IP address of domain default-k8s-diff-port-094310 in network mk-default-k8s-diff-port-094310
	I0731 18:07:05.959699   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) DBG | I0731 18:07:05.959619   74854 retry.go:31] will retry after 331.316387ms: waiting for machine to come up
	I0731 18:07:04.413166   73479 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0731 18:07:04.413216   73479 main.go:141] libmachine: (no-preload-673754) Calling .GetMachineName
	I0731 18:07:04.413580   73479 buildroot.go:166] provisioning hostname "no-preload-673754"
	I0731 18:07:04.413609   73479 main.go:141] libmachine: (no-preload-673754) Calling .GetMachineName
	I0731 18:07:04.413827   73479 main.go:141] libmachine: (no-preload-673754) Calling .GetSSHHostname
	I0731 18:07:04.415964   73479 machine.go:97] duration metric: took 4m37.431900974s to provisionDockerMachine
	I0731 18:07:04.416013   73479 fix.go:56] duration metric: took 4m37.452176305s for fixHost
	I0731 18:07:04.416023   73479 start.go:83] releasing machines lock for "no-preload-673754", held for 4m37.452227129s
	W0731 18:07:04.416048   73479 start.go:714] error starting host: provision: host is not running
	W0731 18:07:04.416143   73479 out.go:239] ! StartHost failed, but will try again: provision: host is not running
	I0731 18:07:04.416157   73479 start.go:729] Will try again in 5 seconds ...
	I0731 18:07:06.292146   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) DBG | domain default-k8s-diff-port-094310 has defined MAC address 52:54:00:a9:b2:ae in network mk-default-k8s-diff-port-094310
	I0731 18:07:06.292533   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) DBG | unable to find current IP address of domain default-k8s-diff-port-094310 in network mk-default-k8s-diff-port-094310
	I0731 18:07:06.292555   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) DBG | I0731 18:07:06.292487   74854 retry.go:31] will retry after 324.512889ms: waiting for machine to come up
	I0731 18:07:06.619045   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) DBG | domain default-k8s-diff-port-094310 has defined MAC address 52:54:00:a9:b2:ae in network mk-default-k8s-diff-port-094310
	I0731 18:07:06.619440   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) DBG | unable to find current IP address of domain default-k8s-diff-port-094310 in network mk-default-k8s-diff-port-094310
	I0731 18:07:06.619470   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) DBG | I0731 18:07:06.619404   74854 retry.go:31] will retry after 556.332506ms: waiting for machine to come up
	I0731 18:07:07.177224   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) DBG | domain default-k8s-diff-port-094310 has defined MAC address 52:54:00:a9:b2:ae in network mk-default-k8s-diff-port-094310
	I0731 18:07:07.177689   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) DBG | unable to find current IP address of domain default-k8s-diff-port-094310 in network mk-default-k8s-diff-port-094310
	I0731 18:07:07.177722   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) DBG | I0731 18:07:07.177631   74854 retry.go:31] will retry after 599.567638ms: waiting for machine to come up
	I0731 18:07:07.778444   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) DBG | domain default-k8s-diff-port-094310 has defined MAC address 52:54:00:a9:b2:ae in network mk-default-k8s-diff-port-094310
	I0731 18:07:07.778848   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) DBG | unable to find current IP address of domain default-k8s-diff-port-094310 in network mk-default-k8s-diff-port-094310
	I0731 18:07:07.778885   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) DBG | I0731 18:07:07.778820   74854 retry.go:31] will retry after 944.17246ms: waiting for machine to come up
	I0731 18:07:08.724983   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) DBG | domain default-k8s-diff-port-094310 has defined MAC address 52:54:00:a9:b2:ae in network mk-default-k8s-diff-port-094310
	I0731 18:07:08.725484   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) DBG | unable to find current IP address of domain default-k8s-diff-port-094310 in network mk-default-k8s-diff-port-094310
	I0731 18:07:08.725512   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) DBG | I0731 18:07:08.725433   74854 retry.go:31] will retry after 1.077726279s: waiting for machine to come up
	I0731 18:07:09.805196   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) DBG | domain default-k8s-diff-port-094310 has defined MAC address 52:54:00:a9:b2:ae in network mk-default-k8s-diff-port-094310
	I0731 18:07:09.805629   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) DBG | unable to find current IP address of domain default-k8s-diff-port-094310 in network mk-default-k8s-diff-port-094310
	I0731 18:07:09.805667   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) DBG | I0731 18:07:09.805575   74854 retry.go:31] will retry after 1.140059854s: waiting for machine to come up
	I0731 18:07:10.951633   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) DBG | domain default-k8s-diff-port-094310 has defined MAC address 52:54:00:a9:b2:ae in network mk-default-k8s-diff-port-094310
	I0731 18:07:10.952066   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) DBG | unable to find current IP address of domain default-k8s-diff-port-094310 in network mk-default-k8s-diff-port-094310
	I0731 18:07:10.952091   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) DBG | I0731 18:07:10.952028   74854 retry.go:31] will retry after 1.691707383s: waiting for machine to come up
	I0731 18:07:09.418606   73479 start.go:360] acquireMachinesLock for no-preload-673754: {Name:mkd78eacfab7d6c3058c6674434b3d889ec957e0 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0731 18:07:12.645970   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) DBG | domain default-k8s-diff-port-094310 has defined MAC address 52:54:00:a9:b2:ae in network mk-default-k8s-diff-port-094310
	I0731 18:07:12.646588   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) DBG | unable to find current IP address of domain default-k8s-diff-port-094310 in network mk-default-k8s-diff-port-094310
	I0731 18:07:12.646623   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) DBG | I0731 18:07:12.646525   74854 retry.go:31] will retry after 2.257630784s: waiting for machine to come up
	I0731 18:07:14.905494   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) DBG | domain default-k8s-diff-port-094310 has defined MAC address 52:54:00:a9:b2:ae in network mk-default-k8s-diff-port-094310
	I0731 18:07:14.905891   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) DBG | unable to find current IP address of domain default-k8s-diff-port-094310 in network mk-default-k8s-diff-port-094310
	I0731 18:07:14.905922   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) DBG | I0731 18:07:14.905833   74854 retry.go:31] will retry after 2.877713561s: waiting for machine to come up
	I0731 18:07:17.786797   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) DBG | domain default-k8s-diff-port-094310 has defined MAC address 52:54:00:a9:b2:ae in network mk-default-k8s-diff-port-094310
	I0731 18:07:17.787173   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) DBG | unable to find current IP address of domain default-k8s-diff-port-094310 in network mk-default-k8s-diff-port-094310
	I0731 18:07:17.787194   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) DBG | I0731 18:07:17.787140   74854 retry.go:31] will retry after 3.028611559s: waiting for machine to come up
	I0731 18:07:20.817593   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) DBG | domain default-k8s-diff-port-094310 has defined MAC address 52:54:00:a9:b2:ae in network mk-default-k8s-diff-port-094310
	I0731 18:07:20.817898   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) Found IP for machine: 192.168.72.197
	I0731 18:07:20.817921   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) Reserving static IP address...
	I0731 18:07:20.817934   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) DBG | domain default-k8s-diff-port-094310 has current primary IP address 192.168.72.197 and MAC address 52:54:00:a9:b2:ae in network mk-default-k8s-diff-port-094310
	I0731 18:07:20.818352   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-094310", mac: "52:54:00:a9:b2:ae", ip: "192.168.72.197"} in network mk-default-k8s-diff-port-094310: {Iface:virbr1 ExpiryTime:2024-07-31 19:07:14 +0000 UTC Type:0 Mac:52:54:00:a9:b2:ae Iaid: IPaddr:192.168.72.197 Prefix:24 Hostname:default-k8s-diff-port-094310 Clientid:01:52:54:00:a9:b2:ae}
	I0731 18:07:20.818379   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) Reserved static IP address: 192.168.72.197
	I0731 18:07:20.818400   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) DBG | skip adding static IP to network mk-default-k8s-diff-port-094310 - found existing host DHCP lease matching {name: "default-k8s-diff-port-094310", mac: "52:54:00:a9:b2:ae", ip: "192.168.72.197"}
	I0731 18:07:20.818414   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) Waiting for SSH to be available...
	I0731 18:07:20.818431   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) DBG | Getting to WaitForSSH function...
	I0731 18:07:20.820417   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) DBG | domain default-k8s-diff-port-094310 has defined MAC address 52:54:00:a9:b2:ae in network mk-default-k8s-diff-port-094310
	I0731 18:07:20.820731   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a9:b2:ae", ip: ""} in network mk-default-k8s-diff-port-094310: {Iface:virbr1 ExpiryTime:2024-07-31 19:07:14 +0000 UTC Type:0 Mac:52:54:00:a9:b2:ae Iaid: IPaddr:192.168.72.197 Prefix:24 Hostname:default-k8s-diff-port-094310 Clientid:01:52:54:00:a9:b2:ae}
	I0731 18:07:20.820758   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) DBG | domain default-k8s-diff-port-094310 has defined IP address 192.168.72.197 and MAC address 52:54:00:a9:b2:ae in network mk-default-k8s-diff-port-094310
	I0731 18:07:20.820893   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) DBG | Using SSH client type: external
	I0731 18:07:20.820916   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) DBG | Using SSH private key: /home/jenkins/minikube-integration/19349-8084/.minikube/machines/default-k8s-diff-port-094310/id_rsa (-rw-------)
	I0731 18:07:20.820940   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.197 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19349-8084/.minikube/machines/default-k8s-diff-port-094310/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0731 18:07:20.820950   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) DBG | About to run SSH command:
	I0731 18:07:20.820959   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) DBG | exit 0
	I0731 18:07:20.943348   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) DBG | SSH cmd err, output: <nil>: 
	I0731 18:07:20.943708   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) Calling .GetConfigRaw
	I0731 18:07:20.944373   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) Calling .GetIP
	I0731 18:07:20.947080   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) DBG | domain default-k8s-diff-port-094310 has defined MAC address 52:54:00:a9:b2:ae in network mk-default-k8s-diff-port-094310
	I0731 18:07:20.947465   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a9:b2:ae", ip: ""} in network mk-default-k8s-diff-port-094310: {Iface:virbr1 ExpiryTime:2024-07-31 19:07:14 +0000 UTC Type:0 Mac:52:54:00:a9:b2:ae Iaid: IPaddr:192.168.72.197 Prefix:24 Hostname:default-k8s-diff-port-094310 Clientid:01:52:54:00:a9:b2:ae}
	I0731 18:07:20.947499   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) DBG | domain default-k8s-diff-port-094310 has defined IP address 192.168.72.197 and MAC address 52:54:00:a9:b2:ae in network mk-default-k8s-diff-port-094310
	I0731 18:07:20.947731   73696 profile.go:143] Saving config to /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/default-k8s-diff-port-094310/config.json ...
	I0731 18:07:20.947909   73696 machine.go:94] provisionDockerMachine start ...
	I0731 18:07:20.947926   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) Calling .DriverName
	I0731 18:07:20.948124   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) Calling .GetSSHHostname
	I0731 18:07:20.950698   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) DBG | domain default-k8s-diff-port-094310 has defined MAC address 52:54:00:a9:b2:ae in network mk-default-k8s-diff-port-094310
	I0731 18:07:20.951056   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a9:b2:ae", ip: ""} in network mk-default-k8s-diff-port-094310: {Iface:virbr1 ExpiryTime:2024-07-31 19:07:14 +0000 UTC Type:0 Mac:52:54:00:a9:b2:ae Iaid: IPaddr:192.168.72.197 Prefix:24 Hostname:default-k8s-diff-port-094310 Clientid:01:52:54:00:a9:b2:ae}
	I0731 18:07:20.951083   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) DBG | domain default-k8s-diff-port-094310 has defined IP address 192.168.72.197 and MAC address 52:54:00:a9:b2:ae in network mk-default-k8s-diff-port-094310
	I0731 18:07:20.951228   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) Calling .GetSSHPort
	I0731 18:07:20.951443   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) Calling .GetSSHKeyPath
	I0731 18:07:20.951608   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) Calling .GetSSHKeyPath
	I0731 18:07:20.951780   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) Calling .GetSSHUsername
	I0731 18:07:20.952016   73696 main.go:141] libmachine: Using SSH client type: native
	I0731 18:07:20.952208   73696 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.72.197 22 <nil> <nil>}
	I0731 18:07:20.952220   73696 main.go:141] libmachine: About to run SSH command:
	hostname
	I0731 18:07:21.051082   73696 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0731 18:07:21.051137   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) Calling .GetMachineName
	I0731 18:07:21.051424   73696 buildroot.go:166] provisioning hostname "default-k8s-diff-port-094310"
	I0731 18:07:21.051454   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) Calling .GetMachineName
	I0731 18:07:21.051650   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) Calling .GetSSHHostname
	I0731 18:07:21.054527   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) DBG | domain default-k8s-diff-port-094310 has defined MAC address 52:54:00:a9:b2:ae in network mk-default-k8s-diff-port-094310
	I0731 18:07:21.054913   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a9:b2:ae", ip: ""} in network mk-default-k8s-diff-port-094310: {Iface:virbr1 ExpiryTime:2024-07-31 19:07:14 +0000 UTC Type:0 Mac:52:54:00:a9:b2:ae Iaid: IPaddr:192.168.72.197 Prefix:24 Hostname:default-k8s-diff-port-094310 Clientid:01:52:54:00:a9:b2:ae}
	I0731 18:07:21.054940   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) DBG | domain default-k8s-diff-port-094310 has defined IP address 192.168.72.197 and MAC address 52:54:00:a9:b2:ae in network mk-default-k8s-diff-port-094310
	I0731 18:07:21.055151   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) Calling .GetSSHPort
	I0731 18:07:21.055377   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) Calling .GetSSHKeyPath
	I0731 18:07:21.055516   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) Calling .GetSSHKeyPath
	I0731 18:07:21.055670   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) Calling .GetSSHUsername
	I0731 18:07:21.055838   73696 main.go:141] libmachine: Using SSH client type: native
	I0731 18:07:21.056037   73696 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.72.197 22 <nil> <nil>}
	I0731 18:07:21.056051   73696 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-094310 && echo "default-k8s-diff-port-094310" | sudo tee /etc/hostname
	I0731 18:07:22.127802   73800 start.go:364] duration metric: took 4m27.5245732s to acquireMachinesLock for "embed-certs-436067"
	I0731 18:07:22.127861   73800 start.go:96] Skipping create...Using existing machine configuration
	I0731 18:07:22.127871   73800 fix.go:54] fixHost starting: 
	I0731 18:07:22.128296   73800 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 18:07:22.128386   73800 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 18:07:22.144783   73800 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34313
	I0731 18:07:22.145111   73800 main.go:141] libmachine: () Calling .GetVersion
	I0731 18:07:22.145531   73800 main.go:141] libmachine: Using API Version  1
	I0731 18:07:22.145549   73800 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 18:07:22.145894   73800 main.go:141] libmachine: () Calling .GetMachineName
	I0731 18:07:22.146086   73800 main.go:141] libmachine: (embed-certs-436067) Calling .DriverName
	I0731 18:07:22.146226   73800 main.go:141] libmachine: (embed-certs-436067) Calling .GetState
	I0731 18:07:22.147718   73800 fix.go:112] recreateIfNeeded on embed-certs-436067: state=Stopped err=<nil>
	I0731 18:07:22.147737   73800 main.go:141] libmachine: (embed-certs-436067) Calling .DriverName
	W0731 18:07:22.147878   73800 fix.go:138] unexpected machine state, will restart: <nil>
	I0731 18:07:22.149896   73800 out.go:177] * Restarting existing kvm2 VM for "embed-certs-436067" ...
	I0731 18:07:21.168797   73696 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-094310
	
	I0731 18:07:21.168828   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) Calling .GetSSHHostname
	I0731 18:07:21.171672   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) DBG | domain default-k8s-diff-port-094310 has defined MAC address 52:54:00:a9:b2:ae in network mk-default-k8s-diff-port-094310
	I0731 18:07:21.172012   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a9:b2:ae", ip: ""} in network mk-default-k8s-diff-port-094310: {Iface:virbr1 ExpiryTime:2024-07-31 19:07:14 +0000 UTC Type:0 Mac:52:54:00:a9:b2:ae Iaid: IPaddr:192.168.72.197 Prefix:24 Hostname:default-k8s-diff-port-094310 Clientid:01:52:54:00:a9:b2:ae}
	I0731 18:07:21.172043   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) DBG | domain default-k8s-diff-port-094310 has defined IP address 192.168.72.197 and MAC address 52:54:00:a9:b2:ae in network mk-default-k8s-diff-port-094310
	I0731 18:07:21.172183   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) Calling .GetSSHPort
	I0731 18:07:21.172351   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) Calling .GetSSHKeyPath
	I0731 18:07:21.172510   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) Calling .GetSSHKeyPath
	I0731 18:07:21.172633   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) Calling .GetSSHUsername
	I0731 18:07:21.172800   73696 main.go:141] libmachine: Using SSH client type: native
	I0731 18:07:21.172976   73696 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.72.197 22 <nil> <nil>}
	I0731 18:07:21.173010   73696 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-094310' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-094310/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-094310' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0731 18:07:21.284583   73696 main.go:141] libmachine: SSH cmd err, output: <nil>: 
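Note: the SSH command above ensures the guest's /etc/hosts maps 127.0.1.1 to the new machine name, replacing an existing 127.0.1.1 entry or appending one. A minimal local sketch of the same idea in Go, with a hypothetical file path; this is illustrative, not minikube's implementation:

	package main

	import (
		"fmt"
		"os"
		"strings"
	)

	// ensureHostsEntry rewrites an /etc/hosts-style file so the 127.0.1.1
	// line points at hostname, appending one if none exists. Mirrors the
	// shell logic in the log above; illustration only.
	func ensureHostsEntry(path, hostname string) error {
		data, err := os.ReadFile(path)
		if err != nil {
			return err
		}
		lines := strings.Split(string(data), "\n")
		replaced := false
		for i, l := range lines {
			if strings.HasPrefix(l, "127.0.1.1") {
				lines[i] = "127.0.1.1 " + hostname
				replaced = true
			}
		}
		if !replaced {
			lines = append(lines, "127.0.1.1 "+hostname)
		}
		return os.WriteFile(path, []byte(strings.Join(lines, "\n")), 0644)
	}

	func main() {
		// Hypothetical path; the hostname matches the profile in the log.
		if err := ensureHostsEntry("/tmp/hosts.example", "default-k8s-diff-port-094310"); err != nil {
			fmt.Println("error:", err)
		}
	}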
	I0731 18:07:21.284610   73696 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19349-8084/.minikube CaCertPath:/home/jenkins/minikube-integration/19349-8084/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19349-8084/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19349-8084/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19349-8084/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19349-8084/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19349-8084/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19349-8084/.minikube}
	I0731 18:07:21.284633   73696 buildroot.go:174] setting up certificates
	I0731 18:07:21.284645   73696 provision.go:84] configureAuth start
	I0731 18:07:21.284656   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) Calling .GetMachineName
	I0731 18:07:21.284931   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) Calling .GetIP
	I0731 18:07:21.287526   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) DBG | domain default-k8s-diff-port-094310 has defined MAC address 52:54:00:a9:b2:ae in network mk-default-k8s-diff-port-094310
	I0731 18:07:21.287945   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a9:b2:ae", ip: ""} in network mk-default-k8s-diff-port-094310: {Iface:virbr1 ExpiryTime:2024-07-31 19:07:14 +0000 UTC Type:0 Mac:52:54:00:a9:b2:ae Iaid: IPaddr:192.168.72.197 Prefix:24 Hostname:default-k8s-diff-port-094310 Clientid:01:52:54:00:a9:b2:ae}
	I0731 18:07:21.287973   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) DBG | domain default-k8s-diff-port-094310 has defined IP address 192.168.72.197 and MAC address 52:54:00:a9:b2:ae in network mk-default-k8s-diff-port-094310
	I0731 18:07:21.288161   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) Calling .GetSSHHostname
	I0731 18:07:21.290169   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) DBG | domain default-k8s-diff-port-094310 has defined MAC address 52:54:00:a9:b2:ae in network mk-default-k8s-diff-port-094310
	I0731 18:07:21.290469   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a9:b2:ae", ip: ""} in network mk-default-k8s-diff-port-094310: {Iface:virbr1 ExpiryTime:2024-07-31 19:07:14 +0000 UTC Type:0 Mac:52:54:00:a9:b2:ae Iaid: IPaddr:192.168.72.197 Prefix:24 Hostname:default-k8s-diff-port-094310 Clientid:01:52:54:00:a9:b2:ae}
	I0731 18:07:21.290495   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) DBG | domain default-k8s-diff-port-094310 has defined IP address 192.168.72.197 and MAC address 52:54:00:a9:b2:ae in network mk-default-k8s-diff-port-094310
	I0731 18:07:21.290602   73696 provision.go:143] copyHostCerts
	I0731 18:07:21.290661   73696 exec_runner.go:144] found /home/jenkins/minikube-integration/19349-8084/.minikube/ca.pem, removing ...
	I0731 18:07:21.290673   73696 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19349-8084/.minikube/ca.pem
	I0731 18:07:21.290757   73696 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19349-8084/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19349-8084/.minikube/ca.pem (1082 bytes)
	I0731 18:07:21.290844   73696 exec_runner.go:144] found /home/jenkins/minikube-integration/19349-8084/.minikube/cert.pem, removing ...
	I0731 18:07:21.290856   73696 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19349-8084/.minikube/cert.pem
	I0731 18:07:21.290881   73696 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19349-8084/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19349-8084/.minikube/cert.pem (1123 bytes)
	I0731 18:07:21.290933   73696 exec_runner.go:144] found /home/jenkins/minikube-integration/19349-8084/.minikube/key.pem, removing ...
	I0731 18:07:21.290939   73696 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19349-8084/.minikube/key.pem
	I0731 18:07:21.290959   73696 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19349-8084/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19349-8084/.minikube/key.pem (1675 bytes)
	I0731 18:07:21.291005   73696 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19349-8084/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19349-8084/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19349-8084/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-094310 san=[127.0.0.1 192.168.72.197 default-k8s-diff-port-094310 localhost minikube]
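Note: "generating server cert ... san=[...]" above issues a server certificate signed by the minikube CA whose SANs cover the VM's IP and hostnames. A self-contained sketch of that kind of SAN-bearing issuance with the Go standard library, using a throwaway CA; illustrative only, not minikube's code:

	package main

	import (
		"crypto/ecdsa"
		"crypto/elliptic"
		"crypto/rand"
		"crypto/x509"
		"crypto/x509/pkix"
		"fmt"
		"math/big"
		"net"
		"time"
	)

	func main() {
		// Throwaway CA standing in for minikubeCA.
		caKey, _ := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
		caTmpl := &x509.Certificate{
			SerialNumber:          big.NewInt(1),
			Subject:               pkix.Name{CommonName: "minikubeCA"},
			NotBefore:             time.Now(),
			NotAfter:              time.Now().Add(3 * 365 * 24 * time.Hour),
			IsCA:                  true,
			KeyUsage:              x509.KeyUsageCertSign,
			BasicConstraintsValid: true,
		}
		caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
		caCert, _ := x509.ParseCertificate(caDER)

		// Server certificate whose SANs match the san=[...] list in the log.
		srvKey, _ := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
		srvTmpl := &x509.Certificate{
			SerialNumber: big.NewInt(2),
			Subject:      pkix.Name{Organization: []string{"jenkins.default-k8s-diff-port-094310"}},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
			KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
			IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.72.197")},
			DNSNames:     []string{"default-k8s-diff-port-094310", "localhost", "minikube"},
		}
		srvDER, err := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
		if err != nil {
			panic(err)
		}
		fmt.Printf("server certificate: %d DER bytes\n", len(srvDER))
	}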
	I0731 18:07:21.483241   73696 provision.go:177] copyRemoteCerts
	I0731 18:07:21.483314   73696 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0731 18:07:21.483343   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) Calling .GetSSHHostname
	I0731 18:07:21.486231   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) DBG | domain default-k8s-diff-port-094310 has defined MAC address 52:54:00:a9:b2:ae in network mk-default-k8s-diff-port-094310
	I0731 18:07:21.486619   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a9:b2:ae", ip: ""} in network mk-default-k8s-diff-port-094310: {Iface:virbr1 ExpiryTime:2024-07-31 19:07:14 +0000 UTC Type:0 Mac:52:54:00:a9:b2:ae Iaid: IPaddr:192.168.72.197 Prefix:24 Hostname:default-k8s-diff-port-094310 Clientid:01:52:54:00:a9:b2:ae}
	I0731 18:07:21.486659   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) DBG | domain default-k8s-diff-port-094310 has defined IP address 192.168.72.197 and MAC address 52:54:00:a9:b2:ae in network mk-default-k8s-diff-port-094310
	I0731 18:07:21.486850   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) Calling .GetSSHPort
	I0731 18:07:21.487084   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) Calling .GetSSHKeyPath
	I0731 18:07:21.487285   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) Calling .GetSSHUsername
	I0731 18:07:21.487443   73696 sshutil.go:53] new ssh client: &{IP:192.168.72.197 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19349-8084/.minikube/machines/default-k8s-diff-port-094310/id_rsa Username:docker}
	I0731 18:07:21.568564   73696 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19349-8084/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0731 18:07:21.598766   73696 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19349-8084/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I0731 18:07:21.621602   73696 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19349-8084/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0731 18:07:21.643361   73696 provision.go:87] duration metric: took 358.702982ms to configureAuth
	I0731 18:07:21.643393   73696 buildroot.go:189] setting minikube options for container-runtime
	I0731 18:07:21.643598   73696 config.go:182] Loaded profile config "default-k8s-diff-port-094310": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0731 18:07:21.643699   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) Calling .GetSSHHostname
	I0731 18:07:21.646487   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) DBG | domain default-k8s-diff-port-094310 has defined MAC address 52:54:00:a9:b2:ae in network mk-default-k8s-diff-port-094310
	I0731 18:07:21.646921   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a9:b2:ae", ip: ""} in network mk-default-k8s-diff-port-094310: {Iface:virbr1 ExpiryTime:2024-07-31 19:07:14 +0000 UTC Type:0 Mac:52:54:00:a9:b2:ae Iaid: IPaddr:192.168.72.197 Prefix:24 Hostname:default-k8s-diff-port-094310 Clientid:01:52:54:00:a9:b2:ae}
	I0731 18:07:21.646967   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) DBG | domain default-k8s-diff-port-094310 has defined IP address 192.168.72.197 and MAC address 52:54:00:a9:b2:ae in network mk-default-k8s-diff-port-094310
	I0731 18:07:21.647126   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) Calling .GetSSHPort
	I0731 18:07:21.647331   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) Calling .GetSSHKeyPath
	I0731 18:07:21.647527   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) Calling .GetSSHKeyPath
	I0731 18:07:21.647675   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) Calling .GetSSHUsername
	I0731 18:07:21.647879   73696 main.go:141] libmachine: Using SSH client type: native
	I0731 18:07:21.648051   73696 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.72.197 22 <nil> <nil>}
	I0731 18:07:21.648066   73696 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0731 18:07:21.896109   73696 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0731 18:07:21.896138   73696 machine.go:97] duration metric: took 948.216479ms to provisionDockerMachine
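Note: the step above writes CRIO_MINIKUBE_OPTIONS into /etc/sysconfig/crio.minikube and restarts the service over SSH. A rough local sketch of that write-and-restart, assuming root and a systemd host; the paths and option string come from the log, the helper itself is illustrative:

	package main

	import (
		"fmt"
		"os"
		"os/exec"
	)

	func main() {
		// Sysconfig snippet as echoed in the log output above.
		content := "CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '\n"
		if err := os.WriteFile("/etc/sysconfig/crio.minikube", []byte(content), 0644); err != nil {
			fmt.Println("write:", err)
			return
		}
		// Restart CRI-O so the new options take effect.
		if out, err := exec.Command("systemctl", "restart", "crio").CombinedOutput(); err != nil {
			fmt.Printf("restart crio: %v\n%s", err, out)
		}
	}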
	I0731 18:07:21.896152   73696 start.go:293] postStartSetup for "default-k8s-diff-port-094310" (driver="kvm2")
	I0731 18:07:21.896166   73696 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0731 18:07:21.896185   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) Calling .DriverName
	I0731 18:07:21.896500   73696 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0731 18:07:21.896533   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) Calling .GetSSHHostname
	I0731 18:07:21.899447   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) DBG | domain default-k8s-diff-port-094310 has defined MAC address 52:54:00:a9:b2:ae in network mk-default-k8s-diff-port-094310
	I0731 18:07:21.899784   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a9:b2:ae", ip: ""} in network mk-default-k8s-diff-port-094310: {Iface:virbr1 ExpiryTime:2024-07-31 19:07:14 +0000 UTC Type:0 Mac:52:54:00:a9:b2:ae Iaid: IPaddr:192.168.72.197 Prefix:24 Hostname:default-k8s-diff-port-094310 Clientid:01:52:54:00:a9:b2:ae}
	I0731 18:07:21.899817   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) DBG | domain default-k8s-diff-port-094310 has defined IP address 192.168.72.197 and MAC address 52:54:00:a9:b2:ae in network mk-default-k8s-diff-port-094310
	I0731 18:07:21.899936   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) Calling .GetSSHPort
	I0731 18:07:21.900136   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) Calling .GetSSHKeyPath
	I0731 18:07:21.900268   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) Calling .GetSSHUsername
	I0731 18:07:21.900415   73696 sshutil.go:53] new ssh client: &{IP:192.168.72.197 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19349-8084/.minikube/machines/default-k8s-diff-port-094310/id_rsa Username:docker}
	I0731 18:07:21.981347   73696 ssh_runner.go:195] Run: cat /etc/os-release
	I0731 18:07:21.985297   73696 info.go:137] Remote host: Buildroot 2023.02.9
	I0731 18:07:21.985324   73696 filesync.go:126] Scanning /home/jenkins/minikube-integration/19349-8084/.minikube/addons for local assets ...
	I0731 18:07:21.985397   73696 filesync.go:126] Scanning /home/jenkins/minikube-integration/19349-8084/.minikube/files for local assets ...
	I0731 18:07:21.985513   73696 filesync.go:149] local asset: /home/jenkins/minikube-integration/19349-8084/.minikube/files/etc/ssl/certs/152592.pem -> 152592.pem in /etc/ssl/certs
	I0731 18:07:21.985646   73696 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0731 18:07:21.994700   73696 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19349-8084/.minikube/files/etc/ssl/certs/152592.pem --> /etc/ssl/certs/152592.pem (1708 bytes)
	I0731 18:07:22.022005   73696 start.go:296] duration metric: took 125.838186ms for postStartSetup
	I0731 18:07:22.022052   73696 fix.go:56] duration metric: took 17.605858897s for fixHost
	I0731 18:07:22.022075   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) Calling .GetSSHHostname
	I0731 18:07:22.025151   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) DBG | domain default-k8s-diff-port-094310 has defined MAC address 52:54:00:a9:b2:ae in network mk-default-k8s-diff-port-094310
	I0731 18:07:22.025445   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a9:b2:ae", ip: ""} in network mk-default-k8s-diff-port-094310: {Iface:virbr1 ExpiryTime:2024-07-31 19:07:14 +0000 UTC Type:0 Mac:52:54:00:a9:b2:ae Iaid: IPaddr:192.168.72.197 Prefix:24 Hostname:default-k8s-diff-port-094310 Clientid:01:52:54:00:a9:b2:ae}
	I0731 18:07:22.025478   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) DBG | domain default-k8s-diff-port-094310 has defined IP address 192.168.72.197 and MAC address 52:54:00:a9:b2:ae in network mk-default-k8s-diff-port-094310
	I0731 18:07:22.025622   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) Calling .GetSSHPort
	I0731 18:07:22.025829   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) Calling .GetSSHKeyPath
	I0731 18:07:22.026023   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) Calling .GetSSHKeyPath
	I0731 18:07:22.026199   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) Calling .GetSSHUsername
	I0731 18:07:22.026390   73696 main.go:141] libmachine: Using SSH client type: native
	I0731 18:07:22.026632   73696 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.72.197 22 <nil> <nil>}
	I0731 18:07:22.026653   73696 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0731 18:07:22.127643   73696 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722449242.103036947
	
	I0731 18:07:22.127668   73696 fix.go:216] guest clock: 1722449242.103036947
	I0731 18:07:22.127675   73696 fix.go:229] Guest: 2024-07-31 18:07:22.103036947 +0000 UTC Remote: 2024-07-31 18:07:22.022056299 +0000 UTC m=+275.995802468 (delta=80.980648ms)
	I0731 18:07:22.127698   73696 fix.go:200] guest clock delta is within tolerance: 80.980648ms
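Note: fix.go above compares the guest clock against the host clock and accepts the skew when it is within tolerance (80.980648ms here). A small sketch of that comparison; the 2s tolerance used below is an assumption for illustration, not a value taken from minikube:

	package main

	import (
		"fmt"
		"time"
	)

	// clockDeltaOK reports whether guest and host clocks agree within tolerance.
	func clockDeltaOK(guest, host time.Time, tolerance time.Duration) (time.Duration, bool) {
		delta := guest.Sub(host)
		if delta < 0 {
			delta = -delta
		}
		return delta, delta <= tolerance
	}

	func main() {
		// Values echo the log: guest 1722449242.103036947 vs remote ~1722449242.022056299.
		guest := time.Unix(1722449242, 103036947)
		host := time.Unix(1722449242, 22056299)
		delta, ok := clockDeltaOK(guest, host, 2*time.Second)
		fmt.Printf("delta=%v within tolerance=%v\n", delta, ok)
	}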
	I0731 18:07:22.127704   73696 start.go:83] releasing machines lock for "default-k8s-diff-port-094310", held for 17.711543911s
	I0731 18:07:22.127735   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) Calling .DriverName
	I0731 18:07:22.128006   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) Calling .GetIP
	I0731 18:07:22.130905   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) DBG | domain default-k8s-diff-port-094310 has defined MAC address 52:54:00:a9:b2:ae in network mk-default-k8s-diff-port-094310
	I0731 18:07:22.131291   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a9:b2:ae", ip: ""} in network mk-default-k8s-diff-port-094310: {Iface:virbr1 ExpiryTime:2024-07-31 19:07:14 +0000 UTC Type:0 Mac:52:54:00:a9:b2:ae Iaid: IPaddr:192.168.72.197 Prefix:24 Hostname:default-k8s-diff-port-094310 Clientid:01:52:54:00:a9:b2:ae}
	I0731 18:07:22.131322   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) DBG | domain default-k8s-diff-port-094310 has defined IP address 192.168.72.197 and MAC address 52:54:00:a9:b2:ae in network mk-default-k8s-diff-port-094310
	I0731 18:07:22.131568   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) Calling .DriverName
	I0731 18:07:22.132072   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) Calling .DriverName
	I0731 18:07:22.132244   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) Calling .DriverName
	I0731 18:07:22.132334   73696 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0731 18:07:22.132373   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) Calling .GetSSHHostname
	I0731 18:07:22.132488   73696 ssh_runner.go:195] Run: cat /version.json
	I0731 18:07:22.132511   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) Calling .GetSSHHostname
	I0731 18:07:22.134976   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) DBG | domain default-k8s-diff-port-094310 has defined MAC address 52:54:00:a9:b2:ae in network mk-default-k8s-diff-port-094310
	I0731 18:07:22.135269   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) DBG | domain default-k8s-diff-port-094310 has defined MAC address 52:54:00:a9:b2:ae in network mk-default-k8s-diff-port-094310
	I0731 18:07:22.135350   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a9:b2:ae", ip: ""} in network mk-default-k8s-diff-port-094310: {Iface:virbr1 ExpiryTime:2024-07-31 19:07:14 +0000 UTC Type:0 Mac:52:54:00:a9:b2:ae Iaid: IPaddr:192.168.72.197 Prefix:24 Hostname:default-k8s-diff-port-094310 Clientid:01:52:54:00:a9:b2:ae}
	I0731 18:07:22.135386   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) DBG | domain default-k8s-diff-port-094310 has defined IP address 192.168.72.197 and MAC address 52:54:00:a9:b2:ae in network mk-default-k8s-diff-port-094310
	I0731 18:07:22.135583   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) Calling .GetSSHPort
	I0731 18:07:22.135702   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a9:b2:ae", ip: ""} in network mk-default-k8s-diff-port-094310: {Iface:virbr1 ExpiryTime:2024-07-31 19:07:14 +0000 UTC Type:0 Mac:52:54:00:a9:b2:ae Iaid: IPaddr:192.168.72.197 Prefix:24 Hostname:default-k8s-diff-port-094310 Clientid:01:52:54:00:a9:b2:ae}
	I0731 18:07:22.135735   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) DBG | domain default-k8s-diff-port-094310 has defined IP address 192.168.72.197 and MAC address 52:54:00:a9:b2:ae in network mk-default-k8s-diff-port-094310
	I0731 18:07:22.135751   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) Calling .GetSSHKeyPath
	I0731 18:07:22.135837   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) Calling .GetSSHPort
	I0731 18:07:22.135945   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) Calling .GetSSHUsername
	I0731 18:07:22.135966   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) Calling .GetSSHKeyPath
	I0731 18:07:22.136068   73696 sshutil.go:53] new ssh client: &{IP:192.168.72.197 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19349-8084/.minikube/machines/default-k8s-diff-port-094310/id_rsa Username:docker}
	I0731 18:07:22.136101   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) Calling .GetSSHUsername
	I0731 18:07:22.136246   73696 sshutil.go:53] new ssh client: &{IP:192.168.72.197 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19349-8084/.minikube/machines/default-k8s-diff-port-094310/id_rsa Username:docker}
	I0731 18:07:22.245752   73696 ssh_runner.go:195] Run: systemctl --version
	I0731 18:07:22.251574   73696 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0731 18:07:22.391398   73696 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0731 18:07:22.396765   73696 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0731 18:07:22.396842   73696 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0731 18:07:22.412102   73696 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0731 18:07:22.412119   73696 start.go:495] detecting cgroup driver to use...
	I0731 18:07:22.412170   73696 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0731 18:07:22.427198   73696 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0731 18:07:22.441511   73696 docker.go:217] disabling cri-docker service (if available) ...
	I0731 18:07:22.441589   73696 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0731 18:07:22.455498   73696 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0731 18:07:22.469702   73696 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0731 18:07:22.584218   73696 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0731 18:07:22.719105   73696 docker.go:233] disabling docker service ...
	I0731 18:07:22.719195   73696 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0731 18:07:22.733625   73696 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0731 18:07:22.746500   73696 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0731 18:07:22.893624   73696 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0731 18:07:23.012965   73696 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0731 18:07:23.027132   73696 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0731 18:07:23.044766   73696 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0731 18:07:23.044832   73696 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 18:07:23.054276   73696 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0731 18:07:23.054363   73696 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 18:07:23.063873   73696 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 18:07:23.073392   73696 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 18:07:23.082908   73696 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0731 18:07:23.093468   73696 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 18:07:23.103419   73696 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 18:07:23.119920   73696 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 18:07:23.130427   73696 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0731 18:07:23.139397   73696 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0731 18:07:23.139465   73696 ssh_runner.go:195] Run: sudo modprobe br_netfilter
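Note: the netfilter probe above falls back to loading br_netfilter when the bridge sysctl key is not present yet. A sketch of that check-then-modprobe fallback, run locally via os/exec rather than over SSH; illustrative only:

	package main

	import (
		"fmt"
		"os/exec"
	)

	// ensureBridgeNetfilter checks the bridge sysctl key and, if it is
	// missing, loads br_netfilter and re-checks.
	func ensureBridgeNetfilter() error {
		if err := exec.Command("sysctl", "net.bridge.bridge-nf-call-iptables").Run(); err == nil {
			return nil // key already present, module loaded
		}
		if out, err := exec.Command("modprobe", "br_netfilter").CombinedOutput(); err != nil {
			return fmt.Errorf("modprobe br_netfilter: %v: %s", err, out)
		}
		// Re-check after loading the module.
		return exec.Command("sysctl", "net.bridge.bridge-nf-call-iptables").Run()
	}

	func main() {
		if err := ensureBridgeNetfilter(); err != nil {
			fmt.Println("netfilter setup:", err)
		}
	}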
	I0731 18:07:23.152275   73696 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0731 18:07:23.162439   73696 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0731 18:07:23.280030   73696 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0731 18:07:23.412019   73696 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0731 18:07:23.412083   73696 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0731 18:07:23.416884   73696 start.go:563] Will wait 60s for crictl version
	I0731 18:07:23.416930   73696 ssh_runner.go:195] Run: which crictl
	I0731 18:07:23.420518   73696 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0731 18:07:23.458895   73696 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0731 18:07:23.458976   73696 ssh_runner.go:195] Run: crio --version
	I0731 18:07:23.486961   73696 ssh_runner.go:195] Run: crio --version
	I0731 18:07:23.519648   73696 out.go:177] * Preparing Kubernetes v1.30.3 on CRI-O 1.29.1 ...
	I0731 18:07:22.151159   73800 main.go:141] libmachine: (embed-certs-436067) Calling .Start
	I0731 18:07:22.151319   73800 main.go:141] libmachine: (embed-certs-436067) Ensuring networks are active...
	I0731 18:07:22.151951   73800 main.go:141] libmachine: (embed-certs-436067) Ensuring network default is active
	I0731 18:07:22.152245   73800 main.go:141] libmachine: (embed-certs-436067) Ensuring network mk-embed-certs-436067 is active
	I0731 18:07:22.152747   73800 main.go:141] libmachine: (embed-certs-436067) Getting domain xml...
	I0731 18:07:22.153446   73800 main.go:141] libmachine: (embed-certs-436067) Creating domain...
	I0731 18:07:23.410530   73800 main.go:141] libmachine: (embed-certs-436067) Waiting to get IP...
	I0731 18:07:23.411687   73800 main.go:141] libmachine: (embed-certs-436067) DBG | domain embed-certs-436067 has defined MAC address 52:54:00:87:1e:25 in network mk-embed-certs-436067
	I0731 18:07:23.412152   73800 main.go:141] libmachine: (embed-certs-436067) DBG | unable to find current IP address of domain embed-certs-436067 in network mk-embed-certs-436067
	I0731 18:07:23.412231   73800 main.go:141] libmachine: (embed-certs-436067) DBG | I0731 18:07:23.412133   74994 retry.go:31] will retry after 233.281104ms: waiting for machine to come up
	I0731 18:07:23.646659   73800 main.go:141] libmachine: (embed-certs-436067) DBG | domain embed-certs-436067 has defined MAC address 52:54:00:87:1e:25 in network mk-embed-certs-436067
	I0731 18:07:23.647147   73800 main.go:141] libmachine: (embed-certs-436067) DBG | unable to find current IP address of domain embed-certs-436067 in network mk-embed-certs-436067
	I0731 18:07:23.647174   73800 main.go:141] libmachine: (embed-certs-436067) DBG | I0731 18:07:23.647069   74994 retry.go:31] will retry after 307.068766ms: waiting for machine to come up
	I0731 18:07:23.955614   73800 main.go:141] libmachine: (embed-certs-436067) DBG | domain embed-certs-436067 has defined MAC address 52:54:00:87:1e:25 in network mk-embed-certs-436067
	I0731 18:07:23.956140   73800 main.go:141] libmachine: (embed-certs-436067) DBG | unable to find current IP address of domain embed-certs-436067 in network mk-embed-certs-436067
	I0731 18:07:23.956166   73800 main.go:141] libmachine: (embed-certs-436067) DBG | I0731 18:07:23.956094   74994 retry.go:31] will retry after 410.095032ms: waiting for machine to come up
	I0731 18:07:24.367793   73800 main.go:141] libmachine: (embed-certs-436067) DBG | domain embed-certs-436067 has defined MAC address 52:54:00:87:1e:25 in network mk-embed-certs-436067
	I0731 18:07:24.368231   73800 main.go:141] libmachine: (embed-certs-436067) DBG | unable to find current IP address of domain embed-certs-436067 in network mk-embed-certs-436067
	I0731 18:07:24.368264   73800 main.go:141] libmachine: (embed-certs-436067) DBG | I0731 18:07:24.368188   74994 retry.go:31] will retry after 366.242055ms: waiting for machine to come up
	I0731 18:07:23.520927   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) Calling .GetIP
	I0731 18:07:23.524167   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) DBG | domain default-k8s-diff-port-094310 has defined MAC address 52:54:00:a9:b2:ae in network mk-default-k8s-diff-port-094310
	I0731 18:07:23.524615   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a9:b2:ae", ip: ""} in network mk-default-k8s-diff-port-094310: {Iface:virbr1 ExpiryTime:2024-07-31 19:07:14 +0000 UTC Type:0 Mac:52:54:00:a9:b2:ae Iaid: IPaddr:192.168.72.197 Prefix:24 Hostname:default-k8s-diff-port-094310 Clientid:01:52:54:00:a9:b2:ae}
	I0731 18:07:23.524663   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) DBG | domain default-k8s-diff-port-094310 has defined IP address 192.168.72.197 and MAC address 52:54:00:a9:b2:ae in network mk-default-k8s-diff-port-094310
	I0731 18:07:23.524913   73696 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0731 18:07:23.528924   73696 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0731 18:07:23.540496   73696 kubeadm.go:883] updating cluster {Name:default-k8s-diff-port-094310 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kubernete
sVersion:v1.30.3 ClusterName:default-k8s-diff-port-094310 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.197 Port:8444 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpirat
ion:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0731 18:07:23.540633   73696 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0731 18:07:23.540681   73696 ssh_runner.go:195] Run: sudo crictl images --output json
	I0731 18:07:23.579224   73696 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.3". assuming images are not preloaded.
	I0731 18:07:23.579295   73696 ssh_runner.go:195] Run: which lz4
	I0731 18:07:23.583060   73696 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0731 18:07:23.586888   73696 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0731 18:07:23.586922   73696 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19349-8084/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (406200976 bytes)
	I0731 18:07:24.864241   73696 crio.go:462] duration metric: took 1.281254602s to copy over tarball
	I0731 18:07:24.864321   73696 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0731 18:07:24.735741   73800 main.go:141] libmachine: (embed-certs-436067) DBG | domain embed-certs-436067 has defined MAC address 52:54:00:87:1e:25 in network mk-embed-certs-436067
	I0731 18:07:24.736325   73800 main.go:141] libmachine: (embed-certs-436067) DBG | unable to find current IP address of domain embed-certs-436067 in network mk-embed-certs-436067
	I0731 18:07:24.736356   73800 main.go:141] libmachine: (embed-certs-436067) DBG | I0731 18:07:24.736275   74994 retry.go:31] will retry after 593.179812ms: waiting for machine to come up
	I0731 18:07:25.331004   73800 main.go:141] libmachine: (embed-certs-436067) DBG | domain embed-certs-436067 has defined MAC address 52:54:00:87:1e:25 in network mk-embed-certs-436067
	I0731 18:07:25.331406   73800 main.go:141] libmachine: (embed-certs-436067) DBG | unable to find current IP address of domain embed-certs-436067 in network mk-embed-certs-436067
	I0731 18:07:25.331470   73800 main.go:141] libmachine: (embed-certs-436067) DBG | I0731 18:07:25.331381   74994 retry.go:31] will retry after 778.352855ms: waiting for machine to come up
	I0731 18:07:26.111327   73800 main.go:141] libmachine: (embed-certs-436067) DBG | domain embed-certs-436067 has defined MAC address 52:54:00:87:1e:25 in network mk-embed-certs-436067
	I0731 18:07:26.111828   73800 main.go:141] libmachine: (embed-certs-436067) DBG | unable to find current IP address of domain embed-certs-436067 in network mk-embed-certs-436067
	I0731 18:07:26.111855   73800 main.go:141] libmachine: (embed-certs-436067) DBG | I0731 18:07:26.111757   74994 retry.go:31] will retry after 993.157171ms: waiting for machine to come up
	I0731 18:07:27.106111   73800 main.go:141] libmachine: (embed-certs-436067) DBG | domain embed-certs-436067 has defined MAC address 52:54:00:87:1e:25 in network mk-embed-certs-436067
	I0731 18:07:27.106543   73800 main.go:141] libmachine: (embed-certs-436067) DBG | unable to find current IP address of domain embed-certs-436067 in network mk-embed-certs-436067
	I0731 18:07:27.106574   73800 main.go:141] libmachine: (embed-certs-436067) DBG | I0731 18:07:27.106507   74994 retry.go:31] will retry after 963.581879ms: waiting for machine to come up
	I0731 18:07:28.072100   73800 main.go:141] libmachine: (embed-certs-436067) DBG | domain embed-certs-436067 has defined MAC address 52:54:00:87:1e:25 in network mk-embed-certs-436067
	I0731 18:07:28.072628   73800 main.go:141] libmachine: (embed-certs-436067) DBG | unable to find current IP address of domain embed-certs-436067 in network mk-embed-certs-436067
	I0731 18:07:28.072657   73800 main.go:141] libmachine: (embed-certs-436067) DBG | I0731 18:07:28.072560   74994 retry.go:31] will retry after 1.608497907s: waiting for machine to come up
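Note: the embed-certs-436067 lines interleaved here are a retry loop: poll for the VM's DHCP lease and back off a little longer on each attempt. A generic sketch of that retry-with-backoff pattern; lookupIP and the sample address are stand-ins, not libmachine code:

	package main

	import (
		"errors"
		"fmt"
		"math/rand"
		"time"
	)

	// waitForIP polls lookupIP with an increasing, jittered backoff.
	func waitForIP(lookupIP func() (string, error), attempts int) (string, error) {
		backoff := 200 * time.Millisecond
		for i := 0; i < attempts; i++ {
			if ip, err := lookupIP(); err == nil {
				return ip, nil
			}
			wait := backoff + time.Duration(rand.Int63n(int64(backoff))) // add jitter
			fmt.Printf("will retry after %v: waiting for machine to come up\n", wait)
			time.Sleep(wait)
			backoff *= 2
		}
		return "", errors.New("machine never reported an IP")
	}

	func main() {
		calls := 0
		ip, err := waitForIP(func() (string, error) {
			calls++
			if calls < 4 {
				return "", errors.New("no lease yet")
			}
			return "192.168.39.200", nil // hypothetical address
		}, 10)
		fmt.Println(ip, err)
	}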
	I0731 18:07:27.052512   73696 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.188157854s)
	I0731 18:07:27.052542   73696 crio.go:469] duration metric: took 2.188269884s to extract the tarball
	I0731 18:07:27.052557   73696 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0731 18:07:27.089250   73696 ssh_runner.go:195] Run: sudo crictl images --output json
	I0731 18:07:27.130507   73696 crio.go:514] all images are preloaded for cri-o runtime.
	I0731 18:07:27.130536   73696 cache_images.go:84] Images are preloaded, skipping loading
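Note: the preload check above lists images via `sudo crictl images --output json` and skips loading when everything required is already present. A sketch of such a check; the JSON field names ("images", "repoTags") are assumptions about crictl's output format, not verified against it:

	package main

	import (
		"encoding/json"
		"fmt"
	)

	type imageList struct {
		Images []struct {
			RepoTags []string `json:"repoTags"`
		} `json:"images"`
	}

	// missingImages returns the required image tags not found in the
	// crictl JSON listing.
	func missingImages(crictlJSON []byte, required []string) ([]string, error) {
		var list imageList
		if err := json.Unmarshal(crictlJSON, &list); err != nil {
			return nil, err
		}
		have := map[string]bool{}
		for _, img := range list.Images {
			for _, tag := range img.RepoTags {
				have[tag] = true
			}
		}
		var missing []string
		for _, want := range required {
			if !have[want] {
				missing = append(missing, want)
			}
		}
		return missing, nil
	}

	func main() {
		sample := []byte(`{"images":[{"repoTags":["registry.k8s.io/kube-apiserver:v1.30.3"]}]}`)
		missing, _ := missingImages(sample, []string{"registry.k8s.io/kube-apiserver:v1.30.3", "registry.k8s.io/pause:3.9"})
		fmt.Println("missing:", missing)
	}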
	I0731 18:07:27.130546   73696 kubeadm.go:934] updating node { 192.168.72.197 8444 v1.30.3 crio true true} ...
	I0731 18:07:27.130666   73696 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=default-k8s-diff-port-094310 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.197
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:default-k8s-diff-port-094310 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0731 18:07:27.130751   73696 ssh_runner.go:195] Run: crio config
	I0731 18:07:27.176571   73696 cni.go:84] Creating CNI manager for ""
	I0731 18:07:27.176598   73696 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0731 18:07:27.176614   73696 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0731 18:07:27.176640   73696 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.197 APIServerPort:8444 KubernetesVersion:v1.30.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-094310 NodeName:default-k8s-diff-port-094310 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.197"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.197 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/cer
ts/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0731 18:07:27.176821   73696 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.197
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-094310"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.197
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.197"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
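Note: the kubeadm/kubelet/kube-proxy configuration above is rendered from the node's parameters (advertise address, API server port, runtime socket, node name). A stripped-down sketch of templating just the InitConfiguration section with text/template; the struct and template are illustrative, not minikube's actual templates:

	package main

	import (
		"os"
		"text/template"
	)

	type nodeParams struct {
		AdvertiseAddress string
		BindPort         int
		NodeName         string
		CRISocket        string
	}

	const initConfigTmpl = `apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: {{.AdvertiseAddress}}
	  bindPort: {{.BindPort}}
	nodeRegistration:
	  criSocket: {{.CRISocket}}
	  name: "{{.NodeName}}"
	  kubeletExtraArgs:
	    node-ip: {{.AdvertiseAddress}}
	  taints: []
	`

	func main() {
		t := template.Must(template.New("init").Parse(initConfigTmpl))
		// Parameters echo the profile in the log above.
		_ = t.Execute(os.Stdout, nodeParams{
			AdvertiseAddress: "192.168.72.197",
			BindPort:         8444,
			NodeName:         "default-k8s-diff-port-094310",
			CRISocket:        "unix:///var/run/crio/crio.sock",
		})
	}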
	
	I0731 18:07:27.176904   73696 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0731 18:07:27.186582   73696 binaries.go:44] Found k8s binaries, skipping transfer
	I0731 18:07:27.186647   73696 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0731 18:07:27.195571   73696 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I0731 18:07:27.211103   73696 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0731 18:07:27.226226   73696 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2172 bytes)
	I0731 18:07:27.241763   73696 ssh_runner.go:195] Run: grep 192.168.72.197	control-plane.minikube.internal$ /etc/hosts
	I0731 18:07:27.245286   73696 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.197	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0731 18:07:27.256317   73696 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0731 18:07:27.377904   73696 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0731 18:07:27.394151   73696 certs.go:68] Setting up /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/default-k8s-diff-port-094310 for IP: 192.168.72.197
	I0731 18:07:27.394181   73696 certs.go:194] generating shared ca certs ...
	I0731 18:07:27.394201   73696 certs.go:226] acquiring lock for ca certs: {Name:mk49eed439085f9c30746706460ce213570d1997 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 18:07:27.394382   73696 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19349-8084/.minikube/ca.key
	I0731 18:07:27.394451   73696 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19349-8084/.minikube/proxy-client-ca.key
	I0731 18:07:27.394465   73696 certs.go:256] generating profile certs ...
	I0731 18:07:27.394577   73696 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/default-k8s-diff-port-094310/client.key
	I0731 18:07:27.394656   73696 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/default-k8s-diff-port-094310/apiserver.key.5264b27d
	I0731 18:07:27.394703   73696 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/default-k8s-diff-port-094310/proxy-client.key
	I0731 18:07:27.394851   73696 certs.go:484] found cert: /home/jenkins/minikube-integration/19349-8084/.minikube/certs/15259.pem (1338 bytes)
	W0731 18:07:27.394896   73696 certs.go:480] ignoring /home/jenkins/minikube-integration/19349-8084/.minikube/certs/15259_empty.pem, impossibly tiny 0 bytes
	I0731 18:07:27.394908   73696 certs.go:484] found cert: /home/jenkins/minikube-integration/19349-8084/.minikube/certs/ca-key.pem (1675 bytes)
	I0731 18:07:27.394935   73696 certs.go:484] found cert: /home/jenkins/minikube-integration/19349-8084/.minikube/certs/ca.pem (1082 bytes)
	I0731 18:07:27.394969   73696 certs.go:484] found cert: /home/jenkins/minikube-integration/19349-8084/.minikube/certs/cert.pem (1123 bytes)
	I0731 18:07:27.394990   73696 certs.go:484] found cert: /home/jenkins/minikube-integration/19349-8084/.minikube/certs/key.pem (1675 bytes)
	I0731 18:07:27.395028   73696 certs.go:484] found cert: /home/jenkins/minikube-integration/19349-8084/.minikube/files/etc/ssl/certs/152592.pem (1708 bytes)
	I0731 18:07:27.395749   73696 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19349-8084/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0731 18:07:27.425292   73696 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19349-8084/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0731 18:07:27.452753   73696 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19349-8084/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0731 18:07:27.481508   73696 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19349-8084/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0731 18:07:27.506990   73696 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/default-k8s-diff-port-094310/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0731 18:07:27.544385   73696 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/default-k8s-diff-port-094310/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0731 18:07:27.572947   73696 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/default-k8s-diff-port-094310/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0731 18:07:27.597895   73696 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/default-k8s-diff-port-094310/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0731 18:07:27.619324   73696 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19349-8084/.minikube/files/etc/ssl/certs/152592.pem --> /usr/share/ca-certificates/152592.pem (1708 bytes)
	I0731 18:07:27.641000   73696 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19349-8084/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0731 18:07:27.662483   73696 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19349-8084/.minikube/certs/15259.pem --> /usr/share/ca-certificates/15259.pem (1338 bytes)
	I0731 18:07:27.684400   73696 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0731 18:07:27.700058   73696 ssh_runner.go:195] Run: openssl version
	I0731 18:07:27.705637   73696 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/152592.pem && ln -fs /usr/share/ca-certificates/152592.pem /etc/ssl/certs/152592.pem"
	I0731 18:07:27.715558   73696 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/152592.pem
	I0731 18:07:27.719545   73696 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 31 16:54 /usr/share/ca-certificates/152592.pem
	I0731 18:07:27.719611   73696 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/152592.pem
	I0731 18:07:27.725076   73696 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/152592.pem /etc/ssl/certs/3ec20f2e.0"
	I0731 18:07:27.736589   73696 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0731 18:07:27.747908   73696 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0731 18:07:27.752392   73696 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 31 16:42 /usr/share/ca-certificates/minikubeCA.pem
	I0731 18:07:27.752448   73696 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0731 18:07:27.757939   73696 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0731 18:07:27.769571   73696 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/15259.pem && ln -fs /usr/share/ca-certificates/15259.pem /etc/ssl/certs/15259.pem"
	I0731 18:07:27.780730   73696 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/15259.pem
	I0731 18:07:27.785059   73696 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 31 16:54 /usr/share/ca-certificates/15259.pem
	I0731 18:07:27.785112   73696 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/15259.pem
	I0731 18:07:27.790477   73696 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/15259.pem /etc/ssl/certs/51391683.0"
	I0731 18:07:27.801519   73696 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0731 18:07:27.805654   73696 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0731 18:07:27.811381   73696 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0731 18:07:27.816786   73696 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0731 18:07:27.822643   73696 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0731 18:07:27.828371   73696 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0731 18:07:27.833908   73696 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
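Note: the series of `openssl x509 ... -checkend 86400` runs above verifies that each control-plane certificate remains valid for at least another 24 hours. An equivalent check with the Go standard library; the path in main is a placeholder:

	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"errors"
		"fmt"
		"os"
		"time"
	)

	// certValidFor reports whether the PEM certificate at path is still
	// valid `window` from now (the -checkend 86400 equivalent).
	func certValidFor(path string, window time.Duration) (bool, error) {
		data, err := os.ReadFile(path)
		if err != nil {
			return false, err
		}
		block, _ := pem.Decode(data)
		if block == nil {
			return false, errors.New("no PEM block found")
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			return false, err
		}
		return time.Now().Add(window).Before(cert.NotAfter), nil
	}

	func main() {
		ok, err := certValidFor("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
		fmt.Println(ok, err)
	}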
	I0731 18:07:27.839455   73696 kubeadm.go:392] StartCluster: {Name:default-k8s-diff-port-094310 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVe
rsion:v1.30.3 ClusterName:default-k8s-diff-port-094310 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.197 Port:8444 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration
:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0731 18:07:27.839537   73696 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0731 18:07:27.839605   73696 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0731 18:07:27.882993   73696 cri.go:89] found id: ""
	I0731 18:07:27.883055   73696 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0731 18:07:27.894363   73696 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0731 18:07:27.894386   73696 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0731 18:07:27.894431   73696 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0731 18:07:27.905192   73696 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0731 18:07:27.906138   73696 kubeconfig.go:125] found "default-k8s-diff-port-094310" server: "https://192.168.72.197:8444"
	I0731 18:07:27.908339   73696 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0731 18:07:27.918565   73696 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.72.197
	I0731 18:07:27.918603   73696 kubeadm.go:1160] stopping kube-system containers ...
	I0731 18:07:27.918613   73696 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0731 18:07:27.918663   73696 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0731 18:07:27.955675   73696 cri.go:89] found id: ""
	I0731 18:07:27.955744   73696 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0731 18:07:27.972234   73696 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0731 18:07:27.981273   73696 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0731 18:07:27.981289   73696 kubeadm.go:157] found existing configuration files:
	
	I0731 18:07:27.981323   73696 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0731 18:07:27.989775   73696 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0731 18:07:27.989837   73696 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0731 18:07:27.998816   73696 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0731 18:07:28.007142   73696 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0731 18:07:28.007197   73696 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0731 18:07:28.016124   73696 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0731 18:07:28.024471   73696 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0731 18:07:28.024519   73696 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0731 18:07:28.033105   73696 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0731 18:07:28.041306   73696 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0731 18:07:28.041355   73696 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0731 18:07:28.049958   73696 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0731 18:07:28.058718   73696 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0731 18:07:28.167720   73696 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0731 18:07:29.013539   73696 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0731 18:07:29.225696   73696 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0731 18:07:29.300822   73696 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0731 18:07:29.403471   73696 api_server.go:52] waiting for apiserver process to appear ...
	I0731 18:07:29.403567   73696 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:07:29.903755   73696 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:07:30.403896   73696 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:07:30.904160   73696 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:07:29.683622   73800 main.go:141] libmachine: (embed-certs-436067) DBG | domain embed-certs-436067 has defined MAC address 52:54:00:87:1e:25 in network mk-embed-certs-436067
	I0731 18:07:29.684148   73800 main.go:141] libmachine: (embed-certs-436067) DBG | unable to find current IP address of domain embed-certs-436067 in network mk-embed-certs-436067
	I0731 18:07:29.684180   73800 main.go:141] libmachine: (embed-certs-436067) DBG | I0731 18:07:29.684088   74994 retry.go:31] will retry after 1.813922887s: waiting for machine to come up
	I0731 18:07:31.500225   73800 main.go:141] libmachine: (embed-certs-436067) DBG | domain embed-certs-436067 has defined MAC address 52:54:00:87:1e:25 in network mk-embed-certs-436067
	I0731 18:07:31.500738   73800 main.go:141] libmachine: (embed-certs-436067) DBG | unable to find current IP address of domain embed-certs-436067 in network mk-embed-certs-436067
	I0731 18:07:31.500769   73800 main.go:141] libmachine: (embed-certs-436067) DBG | I0731 18:07:31.500694   74994 retry.go:31] will retry after 2.381670698s: waiting for machine to come up
	I0731 18:07:33.884129   73800 main.go:141] libmachine: (embed-certs-436067) DBG | domain embed-certs-436067 has defined MAC address 52:54:00:87:1e:25 in network mk-embed-certs-436067
	I0731 18:07:33.884564   73800 main.go:141] libmachine: (embed-certs-436067) DBG | unable to find current IP address of domain embed-certs-436067 in network mk-embed-certs-436067
	I0731 18:07:33.884587   73800 main.go:141] libmachine: (embed-certs-436067) DBG | I0731 18:07:33.884539   74994 retry.go:31] will retry after 3.269400744s: waiting for machine to come up
	I0731 18:07:31.404093   73696 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:07:31.417483   73696 api_server.go:72] duration metric: took 2.014013675s to wait for apiserver process to appear ...
	I0731 18:07:31.417511   73696 api_server.go:88] waiting for apiserver healthz status ...
	I0731 18:07:31.417533   73696 api_server.go:253] Checking apiserver healthz at https://192.168.72.197:8444/healthz ...
	I0731 18:07:34.340211   73696 api_server.go:279] https://192.168.72.197:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0731 18:07:34.340240   73696 api_server.go:103] status: https://192.168.72.197:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0731 18:07:34.340274   73696 api_server.go:253] Checking apiserver healthz at https://192.168.72.197:8444/healthz ...
	I0731 18:07:34.426446   73696 api_server.go:279] https://192.168.72.197:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0731 18:07:34.426504   73696 api_server.go:103] status: https://192.168.72.197:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0731 18:07:34.426522   73696 api_server.go:253] Checking apiserver healthz at https://192.168.72.197:8444/healthz ...
	I0731 18:07:34.436383   73696 api_server.go:279] https://192.168.72.197:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0731 18:07:34.436416   73696 api_server.go:103] status: https://192.168.72.197:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0731 18:07:34.918371   73696 api_server.go:253] Checking apiserver healthz at https://192.168.72.197:8444/healthz ...
	I0731 18:07:34.922668   73696 api_server.go:279] https://192.168.72.197:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0731 18:07:34.922699   73696 api_server.go:103] status: https://192.168.72.197:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0731 18:07:35.418265   73696 api_server.go:253] Checking apiserver healthz at https://192.168.72.197:8444/healthz ...
	I0731 18:07:35.435931   73696 api_server.go:279] https://192.168.72.197:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0731 18:07:35.435966   73696 api_server.go:103] status: https://192.168.72.197:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0731 18:07:35.918570   73696 api_server.go:253] Checking apiserver healthz at https://192.168.72.197:8444/healthz ...
	I0731 18:07:35.923674   73696 api_server.go:279] https://192.168.72.197:8444/healthz returned 200:
	ok
	I0731 18:07:35.929781   73696 api_server.go:141] control plane version: v1.30.3
	I0731 18:07:35.929809   73696 api_server.go:131] duration metric: took 4.512290009s to wait for apiserver health ...
	I0731 18:07:35.929820   73696 cni.go:84] Creating CNI manager for ""
	I0731 18:07:35.929827   73696 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0731 18:07:35.931827   73696 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0731 18:07:35.933104   73696 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0731 18:07:35.943548   73696 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0731 18:07:35.961932   73696 system_pods.go:43] waiting for kube-system pods to appear ...
	I0731 18:07:35.977855   73696 system_pods.go:59] 8 kube-system pods found
	I0731 18:07:35.977894   73696 system_pods.go:61] "coredns-7db6d8ff4d-kvxmb" [df8cf19b-5e62-4c38-9124-3257fea48fbb] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0731 18:07:35.977905   73696 system_pods.go:61] "etcd-default-k8s-diff-port-094310" [fe526f06-bd6c-4708-a0f3-e49b731e3a61] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0731 18:07:35.977915   73696 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-094310" [f0191941-87ad-4934-a02a-75b07649d5dc] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0731 18:07:35.977924   73696 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-094310" [28b4bdc4-4eea-41c0-9182-b07034d7363e] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0731 18:07:35.977936   73696 system_pods.go:61] "kube-proxy-8bgl7" [577052d5-fe7d-4547-bfbf-d3c938884767] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0731 18:07:35.977946   73696 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-094310" [df25971f-b25a-4344-a91e-c4b0c9ee5282] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0731 18:07:35.977964   73696 system_pods.go:61] "metrics-server-569cc877fc-64hp4" [847243bf-6568-41ff-a1e4-70b0a89c63dd] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0731 18:07:35.977978   73696 system_pods.go:61] "storage-provisioner" [6493bfa6-e40b-405c-93b6-ee5053efbdf6] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0731 18:07:35.977991   73696 system_pods.go:74] duration metric: took 16.038231ms to wait for pod list to return data ...
	I0731 18:07:35.978003   73696 node_conditions.go:102] verifying NodePressure condition ...
	I0731 18:07:35.983206   73696 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0731 18:07:35.983234   73696 node_conditions.go:123] node cpu capacity is 2
	I0731 18:07:35.983251   73696 node_conditions.go:105] duration metric: took 5.239492ms to run NodePressure ...
	I0731 18:07:35.983270   73696 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0731 18:07:37.155307   73800 main.go:141] libmachine: (embed-certs-436067) DBG | domain embed-certs-436067 has defined MAC address 52:54:00:87:1e:25 in network mk-embed-certs-436067
	I0731 18:07:37.155787   73800 main.go:141] libmachine: (embed-certs-436067) DBG | unable to find current IP address of domain embed-certs-436067 in network mk-embed-certs-436067
	I0731 18:07:37.155822   73800 main.go:141] libmachine: (embed-certs-436067) DBG | I0731 18:07:37.155717   74994 retry.go:31] will retry after 3.095991533s: waiting for machine to come up
	I0731 18:07:36.249072   73696 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0731 18:07:36.253639   73696 kubeadm.go:739] kubelet initialised
	I0731 18:07:36.253661   73696 kubeadm.go:740] duration metric: took 4.559461ms waiting for restarted kubelet to initialise ...
	I0731 18:07:36.253669   73696 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0731 18:07:36.258632   73696 pod_ready.go:78] waiting up to 4m0s for pod "coredns-7db6d8ff4d-kvxmb" in "kube-system" namespace to be "Ready" ...
	I0731 18:07:36.262785   73696 pod_ready.go:97] node "default-k8s-diff-port-094310" hosting pod "coredns-7db6d8ff4d-kvxmb" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-094310" has status "Ready":"False"
	I0731 18:07:36.262811   73696 pod_ready.go:81] duration metric: took 4.157359ms for pod "coredns-7db6d8ff4d-kvxmb" in "kube-system" namespace to be "Ready" ...
	E0731 18:07:36.262823   73696 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-094310" hosting pod "coredns-7db6d8ff4d-kvxmb" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-094310" has status "Ready":"False"
	I0731 18:07:36.262831   73696 pod_ready.go:78] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-094310" in "kube-system" namespace to be "Ready" ...
	I0731 18:07:36.269224   73696 pod_ready.go:97] node "default-k8s-diff-port-094310" hosting pod "etcd-default-k8s-diff-port-094310" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-094310" has status "Ready":"False"
	I0731 18:07:36.269250   73696 pod_ready.go:81] duration metric: took 6.406018ms for pod "etcd-default-k8s-diff-port-094310" in "kube-system" namespace to be "Ready" ...
	E0731 18:07:36.269263   73696 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-094310" hosting pod "etcd-default-k8s-diff-port-094310" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-094310" has status "Ready":"False"
	I0731 18:07:36.269270   73696 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-094310" in "kube-system" namespace to be "Ready" ...
	I0731 18:07:36.273379   73696 pod_ready.go:97] node "default-k8s-diff-port-094310" hosting pod "kube-apiserver-default-k8s-diff-port-094310" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-094310" has status "Ready":"False"
	I0731 18:07:36.273400   73696 pod_ready.go:81] duration metric: took 4.119945ms for pod "kube-apiserver-default-k8s-diff-port-094310" in "kube-system" namespace to be "Ready" ...
	E0731 18:07:36.273408   73696 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-094310" hosting pod "kube-apiserver-default-k8s-diff-port-094310" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-094310" has status "Ready":"False"
	I0731 18:07:36.273414   73696 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-094310" in "kube-system" namespace to be "Ready" ...
	I0731 18:07:36.365153   73696 pod_ready.go:97] node "default-k8s-diff-port-094310" hosting pod "kube-controller-manager-default-k8s-diff-port-094310" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-094310" has status "Ready":"False"
	I0731 18:07:36.365183   73696 pod_ready.go:81] duration metric: took 91.758203ms for pod "kube-controller-manager-default-k8s-diff-port-094310" in "kube-system" namespace to be "Ready" ...
	E0731 18:07:36.365195   73696 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-094310" hosting pod "kube-controller-manager-default-k8s-diff-port-094310" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-094310" has status "Ready":"False"
	I0731 18:07:36.365201   73696 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-8bgl7" in "kube-system" namespace to be "Ready" ...
	I0731 18:07:36.765371   73696 pod_ready.go:92] pod "kube-proxy-8bgl7" in "kube-system" namespace has status "Ready":"True"
	I0731 18:07:36.765393   73696 pod_ready.go:81] duration metric: took 400.181854ms for pod "kube-proxy-8bgl7" in "kube-system" namespace to be "Ready" ...
	I0731 18:07:36.765405   73696 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-094310" in "kube-system" namespace to be "Ready" ...
	I0731 18:07:38.770757   73696 pod_ready.go:102] pod "kube-scheduler-default-k8s-diff-port-094310" in "kube-system" namespace has status "Ready":"False"
	I0731 18:07:40.772702   73696 pod_ready.go:102] pod "kube-scheduler-default-k8s-diff-port-094310" in "kube-system" namespace has status "Ready":"False"
	I0731 18:07:41.552094   74203 start.go:364] duration metric: took 3m46.106649241s to acquireMachinesLock for "old-k8s-version-276459"
	I0731 18:07:41.552166   74203 start.go:96] Skipping create...Using existing machine configuration
	I0731 18:07:41.552174   74203 fix.go:54] fixHost starting: 
	I0731 18:07:41.552553   74203 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 18:07:41.552595   74203 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 18:07:41.569965   74203 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43329
	I0731 18:07:41.570361   74203 main.go:141] libmachine: () Calling .GetVersion
	I0731 18:07:41.570884   74203 main.go:141] libmachine: Using API Version  1
	I0731 18:07:41.570905   74203 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 18:07:41.571247   74203 main.go:141] libmachine: () Calling .GetMachineName
	I0731 18:07:41.571454   74203 main.go:141] libmachine: (old-k8s-version-276459) Calling .DriverName
	I0731 18:07:41.571605   74203 main.go:141] libmachine: (old-k8s-version-276459) Calling .GetState
	I0731 18:07:41.573081   74203 fix.go:112] recreateIfNeeded on old-k8s-version-276459: state=Stopped err=<nil>
	I0731 18:07:41.573114   74203 main.go:141] libmachine: (old-k8s-version-276459) Calling .DriverName
	W0731 18:07:41.573276   74203 fix.go:138] unexpected machine state, will restart: <nil>
	I0731 18:07:41.575254   74203 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-276459" ...
	I0731 18:07:40.254868   73800 main.go:141] libmachine: (embed-certs-436067) DBG | domain embed-certs-436067 has defined MAC address 52:54:00:87:1e:25 in network mk-embed-certs-436067
	I0731 18:07:40.255367   73800 main.go:141] libmachine: (embed-certs-436067) Found IP for machine: 192.168.50.86
	I0731 18:07:40.255385   73800 main.go:141] libmachine: (embed-certs-436067) Reserving static IP address...
	I0731 18:07:40.255405   73800 main.go:141] libmachine: (embed-certs-436067) DBG | domain embed-certs-436067 has current primary IP address 192.168.50.86 and MAC address 52:54:00:87:1e:25 in network mk-embed-certs-436067
	I0731 18:07:40.255798   73800 main.go:141] libmachine: (embed-certs-436067) DBG | found host DHCP lease matching {name: "embed-certs-436067", mac: "52:54:00:87:1e:25", ip: "192.168.50.86"} in network mk-embed-certs-436067: {Iface:virbr4 ExpiryTime:2024-07-31 19:07:32 +0000 UTC Type:0 Mac:52:54:00:87:1e:25 Iaid: IPaddr:192.168.50.86 Prefix:24 Hostname:embed-certs-436067 Clientid:01:52:54:00:87:1e:25}
	I0731 18:07:40.255822   73800 main.go:141] libmachine: (embed-certs-436067) Reserved static IP address: 192.168.50.86
	I0731 18:07:40.255839   73800 main.go:141] libmachine: (embed-certs-436067) DBG | skip adding static IP to network mk-embed-certs-436067 - found existing host DHCP lease matching {name: "embed-certs-436067", mac: "52:54:00:87:1e:25", ip: "192.168.50.86"}
	I0731 18:07:40.255853   73800 main.go:141] libmachine: (embed-certs-436067) DBG | Getting to WaitForSSH function...
	I0731 18:07:40.255865   73800 main.go:141] libmachine: (embed-certs-436067) Waiting for SSH to be available...
	I0731 18:07:40.257994   73800 main.go:141] libmachine: (embed-certs-436067) DBG | domain embed-certs-436067 has defined MAC address 52:54:00:87:1e:25 in network mk-embed-certs-436067
	I0731 18:07:40.258304   73800 main.go:141] libmachine: (embed-certs-436067) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:1e:25", ip: ""} in network mk-embed-certs-436067: {Iface:virbr4 ExpiryTime:2024-07-31 19:07:32 +0000 UTC Type:0 Mac:52:54:00:87:1e:25 Iaid: IPaddr:192.168.50.86 Prefix:24 Hostname:embed-certs-436067 Clientid:01:52:54:00:87:1e:25}
	I0731 18:07:40.258331   73800 main.go:141] libmachine: (embed-certs-436067) DBG | domain embed-certs-436067 has defined IP address 192.168.50.86 and MAC address 52:54:00:87:1e:25 in network mk-embed-certs-436067
	I0731 18:07:40.258475   73800 main.go:141] libmachine: (embed-certs-436067) DBG | Using SSH client type: external
	I0731 18:07:40.258492   73800 main.go:141] libmachine: (embed-certs-436067) DBG | Using SSH private key: /home/jenkins/minikube-integration/19349-8084/.minikube/machines/embed-certs-436067/id_rsa (-rw-------)
	I0731 18:07:40.258594   73800 main.go:141] libmachine: (embed-certs-436067) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.86 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19349-8084/.minikube/machines/embed-certs-436067/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0731 18:07:40.258625   73800 main.go:141] libmachine: (embed-certs-436067) DBG | About to run SSH command:
	I0731 18:07:40.258644   73800 main.go:141] libmachine: (embed-certs-436067) DBG | exit 0
	I0731 18:07:40.387051   73800 main.go:141] libmachine: (embed-certs-436067) DBG | SSH cmd err, output: <nil>: 
	I0731 18:07:40.387459   73800 main.go:141] libmachine: (embed-certs-436067) Calling .GetConfigRaw
	I0731 18:07:40.388093   73800 main.go:141] libmachine: (embed-certs-436067) Calling .GetIP
	I0731 18:07:40.390805   73800 main.go:141] libmachine: (embed-certs-436067) DBG | domain embed-certs-436067 has defined MAC address 52:54:00:87:1e:25 in network mk-embed-certs-436067
	I0731 18:07:40.391260   73800 main.go:141] libmachine: (embed-certs-436067) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:1e:25", ip: ""} in network mk-embed-certs-436067: {Iface:virbr4 ExpiryTime:2024-07-31 19:07:32 +0000 UTC Type:0 Mac:52:54:00:87:1e:25 Iaid: IPaddr:192.168.50.86 Prefix:24 Hostname:embed-certs-436067 Clientid:01:52:54:00:87:1e:25}
	I0731 18:07:40.391306   73800 main.go:141] libmachine: (embed-certs-436067) DBG | domain embed-certs-436067 has defined IP address 192.168.50.86 and MAC address 52:54:00:87:1e:25 in network mk-embed-certs-436067
	I0731 18:07:40.391534   73800 profile.go:143] Saving config to /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/embed-certs-436067/config.json ...
	I0731 18:07:40.391769   73800 machine.go:94] provisionDockerMachine start ...
	I0731 18:07:40.391793   73800 main.go:141] libmachine: (embed-certs-436067) Calling .DriverName
	I0731 18:07:40.392012   73800 main.go:141] libmachine: (embed-certs-436067) Calling .GetSSHHostname
	I0731 18:07:40.394412   73800 main.go:141] libmachine: (embed-certs-436067) DBG | domain embed-certs-436067 has defined MAC address 52:54:00:87:1e:25 in network mk-embed-certs-436067
	I0731 18:07:40.394809   73800 main.go:141] libmachine: (embed-certs-436067) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:1e:25", ip: ""} in network mk-embed-certs-436067: {Iface:virbr4 ExpiryTime:2024-07-31 19:07:32 +0000 UTC Type:0 Mac:52:54:00:87:1e:25 Iaid: IPaddr:192.168.50.86 Prefix:24 Hostname:embed-certs-436067 Clientid:01:52:54:00:87:1e:25}
	I0731 18:07:40.394839   73800 main.go:141] libmachine: (embed-certs-436067) DBG | domain embed-certs-436067 has defined IP address 192.168.50.86 and MAC address 52:54:00:87:1e:25 in network mk-embed-certs-436067
	I0731 18:07:40.395029   73800 main.go:141] libmachine: (embed-certs-436067) Calling .GetSSHPort
	I0731 18:07:40.395209   73800 main.go:141] libmachine: (embed-certs-436067) Calling .GetSSHKeyPath
	I0731 18:07:40.395372   73800 main.go:141] libmachine: (embed-certs-436067) Calling .GetSSHKeyPath
	I0731 18:07:40.395480   73800 main.go:141] libmachine: (embed-certs-436067) Calling .GetSSHUsername
	I0731 18:07:40.395624   73800 main.go:141] libmachine: Using SSH client type: native
	I0731 18:07:40.395808   73800 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.86 22 <nil> <nil>}
	I0731 18:07:40.395817   73800 main.go:141] libmachine: About to run SSH command:
	hostname
	I0731 18:07:40.503041   73800 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0731 18:07:40.503073   73800 main.go:141] libmachine: (embed-certs-436067) Calling .GetMachineName
	I0731 18:07:40.503326   73800 buildroot.go:166] provisioning hostname "embed-certs-436067"
	I0731 18:07:40.503352   73800 main.go:141] libmachine: (embed-certs-436067) Calling .GetMachineName
	I0731 18:07:40.503539   73800 main.go:141] libmachine: (embed-certs-436067) Calling .GetSSHHostname
	I0731 18:07:40.506604   73800 main.go:141] libmachine: (embed-certs-436067) DBG | domain embed-certs-436067 has defined MAC address 52:54:00:87:1e:25 in network mk-embed-certs-436067
	I0731 18:07:40.506940   73800 main.go:141] libmachine: (embed-certs-436067) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:1e:25", ip: ""} in network mk-embed-certs-436067: {Iface:virbr4 ExpiryTime:2024-07-31 19:07:32 +0000 UTC Type:0 Mac:52:54:00:87:1e:25 Iaid: IPaddr:192.168.50.86 Prefix:24 Hostname:embed-certs-436067 Clientid:01:52:54:00:87:1e:25}
	I0731 18:07:40.506967   73800 main.go:141] libmachine: (embed-certs-436067) DBG | domain embed-certs-436067 has defined IP address 192.168.50.86 and MAC address 52:54:00:87:1e:25 in network mk-embed-certs-436067
	I0731 18:07:40.507124   73800 main.go:141] libmachine: (embed-certs-436067) Calling .GetSSHPort
	I0731 18:07:40.507296   73800 main.go:141] libmachine: (embed-certs-436067) Calling .GetSSHKeyPath
	I0731 18:07:40.507438   73800 main.go:141] libmachine: (embed-certs-436067) Calling .GetSSHKeyPath
	I0731 18:07:40.507577   73800 main.go:141] libmachine: (embed-certs-436067) Calling .GetSSHUsername
	I0731 18:07:40.507752   73800 main.go:141] libmachine: Using SSH client type: native
	I0731 18:07:40.507912   73800 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.86 22 <nil> <nil>}
	I0731 18:07:40.507927   73800 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-436067 && echo "embed-certs-436067" | sudo tee /etc/hostname
	I0731 18:07:40.632627   73800 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-436067
	
	I0731 18:07:40.632678   73800 main.go:141] libmachine: (embed-certs-436067) Calling .GetSSHHostname
	I0731 18:07:40.635632   73800 main.go:141] libmachine: (embed-certs-436067) DBG | domain embed-certs-436067 has defined MAC address 52:54:00:87:1e:25 in network mk-embed-certs-436067
	I0731 18:07:40.635989   73800 main.go:141] libmachine: (embed-certs-436067) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:1e:25", ip: ""} in network mk-embed-certs-436067: {Iface:virbr4 ExpiryTime:2024-07-31 19:07:32 +0000 UTC Type:0 Mac:52:54:00:87:1e:25 Iaid: IPaddr:192.168.50.86 Prefix:24 Hostname:embed-certs-436067 Clientid:01:52:54:00:87:1e:25}
	I0731 18:07:40.636017   73800 main.go:141] libmachine: (embed-certs-436067) DBG | domain embed-certs-436067 has defined IP address 192.168.50.86 and MAC address 52:54:00:87:1e:25 in network mk-embed-certs-436067
	I0731 18:07:40.636168   73800 main.go:141] libmachine: (embed-certs-436067) Calling .GetSSHPort
	I0731 18:07:40.636386   73800 main.go:141] libmachine: (embed-certs-436067) Calling .GetSSHKeyPath
	I0731 18:07:40.636554   73800 main.go:141] libmachine: (embed-certs-436067) Calling .GetSSHKeyPath
	I0731 18:07:40.636751   73800 main.go:141] libmachine: (embed-certs-436067) Calling .GetSSHUsername
	I0731 18:07:40.636963   73800 main.go:141] libmachine: Using SSH client type: native
	I0731 18:07:40.637192   73800 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.86 22 <nil> <nil>}
	I0731 18:07:40.637213   73800 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-436067' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-436067/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-436067' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0731 18:07:40.755249   73800 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0731 18:07:40.755273   73800 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19349-8084/.minikube CaCertPath:/home/jenkins/minikube-integration/19349-8084/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19349-8084/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19349-8084/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19349-8084/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19349-8084/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19349-8084/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19349-8084/.minikube}
	I0731 18:07:40.755291   73800 buildroot.go:174] setting up certificates
	I0731 18:07:40.755301   73800 provision.go:84] configureAuth start
	I0731 18:07:40.755310   73800 main.go:141] libmachine: (embed-certs-436067) Calling .GetMachineName
	I0731 18:07:40.755602   73800 main.go:141] libmachine: (embed-certs-436067) Calling .GetIP
	I0731 18:07:40.758306   73800 main.go:141] libmachine: (embed-certs-436067) DBG | domain embed-certs-436067 has defined MAC address 52:54:00:87:1e:25 in network mk-embed-certs-436067
	I0731 18:07:40.758705   73800 main.go:141] libmachine: (embed-certs-436067) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:1e:25", ip: ""} in network mk-embed-certs-436067: {Iface:virbr4 ExpiryTime:2024-07-31 19:07:32 +0000 UTC Type:0 Mac:52:54:00:87:1e:25 Iaid: IPaddr:192.168.50.86 Prefix:24 Hostname:embed-certs-436067 Clientid:01:52:54:00:87:1e:25}
	I0731 18:07:40.758731   73800 main.go:141] libmachine: (embed-certs-436067) DBG | domain embed-certs-436067 has defined IP address 192.168.50.86 and MAC address 52:54:00:87:1e:25 in network mk-embed-certs-436067
	I0731 18:07:40.758865   73800 main.go:141] libmachine: (embed-certs-436067) Calling .GetSSHHostname
	I0731 18:07:40.760790   73800 main.go:141] libmachine: (embed-certs-436067) DBG | domain embed-certs-436067 has defined MAC address 52:54:00:87:1e:25 in network mk-embed-certs-436067
	I0731 18:07:40.761061   73800 main.go:141] libmachine: (embed-certs-436067) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:1e:25", ip: ""} in network mk-embed-certs-436067: {Iface:virbr4 ExpiryTime:2024-07-31 19:07:32 +0000 UTC Type:0 Mac:52:54:00:87:1e:25 Iaid: IPaddr:192.168.50.86 Prefix:24 Hostname:embed-certs-436067 Clientid:01:52:54:00:87:1e:25}
	I0731 18:07:40.761090   73800 main.go:141] libmachine: (embed-certs-436067) DBG | domain embed-certs-436067 has defined IP address 192.168.50.86 and MAC address 52:54:00:87:1e:25 in network mk-embed-certs-436067
	I0731 18:07:40.761244   73800 provision.go:143] copyHostCerts
	I0731 18:07:40.761299   73800 exec_runner.go:144] found /home/jenkins/minikube-integration/19349-8084/.minikube/ca.pem, removing ...
	I0731 18:07:40.761323   73800 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19349-8084/.minikube/ca.pem
	I0731 18:07:40.761376   73800 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19349-8084/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19349-8084/.minikube/ca.pem (1082 bytes)
	I0731 18:07:40.761479   73800 exec_runner.go:144] found /home/jenkins/minikube-integration/19349-8084/.minikube/cert.pem, removing ...
	I0731 18:07:40.761488   73800 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19349-8084/.minikube/cert.pem
	I0731 18:07:40.761509   73800 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19349-8084/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19349-8084/.minikube/cert.pem (1123 bytes)
	I0731 18:07:40.761562   73800 exec_runner.go:144] found /home/jenkins/minikube-integration/19349-8084/.minikube/key.pem, removing ...
	I0731 18:07:40.761569   73800 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19349-8084/.minikube/key.pem
	I0731 18:07:40.761586   73800 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19349-8084/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19349-8084/.minikube/key.pem (1675 bytes)
	I0731 18:07:40.761635   73800 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19349-8084/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19349-8084/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19349-8084/.minikube/certs/ca-key.pem org=jenkins.embed-certs-436067 san=[127.0.0.1 192.168.50.86 embed-certs-436067 localhost minikube]
	I0731 18:07:40.874612   73800 provision.go:177] copyRemoteCerts
	I0731 18:07:40.874666   73800 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0731 18:07:40.874691   73800 main.go:141] libmachine: (embed-certs-436067) Calling .GetSSHHostname
	I0731 18:07:40.877623   73800 main.go:141] libmachine: (embed-certs-436067) DBG | domain embed-certs-436067 has defined MAC address 52:54:00:87:1e:25 in network mk-embed-certs-436067
	I0731 18:07:40.878044   73800 main.go:141] libmachine: (embed-certs-436067) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:1e:25", ip: ""} in network mk-embed-certs-436067: {Iface:virbr4 ExpiryTime:2024-07-31 19:07:32 +0000 UTC Type:0 Mac:52:54:00:87:1e:25 Iaid: IPaddr:192.168.50.86 Prefix:24 Hostname:embed-certs-436067 Clientid:01:52:54:00:87:1e:25}
	I0731 18:07:40.878075   73800 main.go:141] libmachine: (embed-certs-436067) DBG | domain embed-certs-436067 has defined IP address 192.168.50.86 and MAC address 52:54:00:87:1e:25 in network mk-embed-certs-436067
	I0731 18:07:40.878206   73800 main.go:141] libmachine: (embed-certs-436067) Calling .GetSSHPort
	I0731 18:07:40.878403   73800 main.go:141] libmachine: (embed-certs-436067) Calling .GetSSHKeyPath
	I0731 18:07:40.878556   73800 main.go:141] libmachine: (embed-certs-436067) Calling .GetSSHUsername
	I0731 18:07:40.878706   73800 sshutil.go:53] new ssh client: &{IP:192.168.50.86 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19349-8084/.minikube/machines/embed-certs-436067/id_rsa Username:docker}
	I0731 18:07:40.965720   73800 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19349-8084/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0731 18:07:40.987836   73800 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19349-8084/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0731 18:07:41.012423   73800 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19349-8084/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0731 18:07:41.036366   73800 provision.go:87] duration metric: took 281.054266ms to configureAuth
	I0731 18:07:41.036392   73800 buildroot.go:189] setting minikube options for container-runtime
	I0731 18:07:41.036561   73800 config.go:182] Loaded profile config "embed-certs-436067": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0731 18:07:41.036626   73800 main.go:141] libmachine: (embed-certs-436067) Calling .GetSSHHostname
	I0731 18:07:41.039204   73800 main.go:141] libmachine: (embed-certs-436067) DBG | domain embed-certs-436067 has defined MAC address 52:54:00:87:1e:25 in network mk-embed-certs-436067
	I0731 18:07:41.039587   73800 main.go:141] libmachine: (embed-certs-436067) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:1e:25", ip: ""} in network mk-embed-certs-436067: {Iface:virbr4 ExpiryTime:2024-07-31 19:07:32 +0000 UTC Type:0 Mac:52:54:00:87:1e:25 Iaid: IPaddr:192.168.50.86 Prefix:24 Hostname:embed-certs-436067 Clientid:01:52:54:00:87:1e:25}
	I0731 18:07:41.039615   73800 main.go:141] libmachine: (embed-certs-436067) DBG | domain embed-certs-436067 has defined IP address 192.168.50.86 and MAC address 52:54:00:87:1e:25 in network mk-embed-certs-436067
	I0731 18:07:41.039814   73800 main.go:141] libmachine: (embed-certs-436067) Calling .GetSSHPort
	I0731 18:07:41.040021   73800 main.go:141] libmachine: (embed-certs-436067) Calling .GetSSHKeyPath
	I0731 18:07:41.040162   73800 main.go:141] libmachine: (embed-certs-436067) Calling .GetSSHKeyPath
	I0731 18:07:41.040293   73800 main.go:141] libmachine: (embed-certs-436067) Calling .GetSSHUsername
	I0731 18:07:41.040462   73800 main.go:141] libmachine: Using SSH client type: native
	I0731 18:07:41.040642   73800 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.86 22 <nil> <nil>}
	I0731 18:07:41.040663   73800 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0731 18:07:41.307915   73800 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0731 18:07:41.307945   73800 machine.go:97] duration metric: took 916.161297ms to provisionDockerMachine
	I0731 18:07:41.307958   73800 start.go:293] postStartSetup for "embed-certs-436067" (driver="kvm2")
	I0731 18:07:41.307971   73800 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0731 18:07:41.307990   73800 main.go:141] libmachine: (embed-certs-436067) Calling .DriverName
	I0731 18:07:41.308383   73800 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0731 18:07:41.308409   73800 main.go:141] libmachine: (embed-certs-436067) Calling .GetSSHHostname
	I0731 18:07:41.311172   73800 main.go:141] libmachine: (embed-certs-436067) DBG | domain embed-certs-436067 has defined MAC address 52:54:00:87:1e:25 in network mk-embed-certs-436067
	I0731 18:07:41.311532   73800 main.go:141] libmachine: (embed-certs-436067) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:1e:25", ip: ""} in network mk-embed-certs-436067: {Iface:virbr4 ExpiryTime:2024-07-31 19:07:32 +0000 UTC Type:0 Mac:52:54:00:87:1e:25 Iaid: IPaddr:192.168.50.86 Prefix:24 Hostname:embed-certs-436067 Clientid:01:52:54:00:87:1e:25}
	I0731 18:07:41.311559   73800 main.go:141] libmachine: (embed-certs-436067) DBG | domain embed-certs-436067 has defined IP address 192.168.50.86 and MAC address 52:54:00:87:1e:25 in network mk-embed-certs-436067
	I0731 18:07:41.311712   73800 main.go:141] libmachine: (embed-certs-436067) Calling .GetSSHPort
	I0731 18:07:41.311940   73800 main.go:141] libmachine: (embed-certs-436067) Calling .GetSSHKeyPath
	I0731 18:07:41.312132   73800 main.go:141] libmachine: (embed-certs-436067) Calling .GetSSHUsername
	I0731 18:07:41.312251   73800 sshutil.go:53] new ssh client: &{IP:192.168.50.86 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19349-8084/.minikube/machines/embed-certs-436067/id_rsa Username:docker}
	I0731 18:07:41.397229   73800 ssh_runner.go:195] Run: cat /etc/os-release
	I0731 18:07:41.401356   73800 info.go:137] Remote host: Buildroot 2023.02.9
	I0731 18:07:41.401380   73800 filesync.go:126] Scanning /home/jenkins/minikube-integration/19349-8084/.minikube/addons for local assets ...
	I0731 18:07:41.401458   73800 filesync.go:126] Scanning /home/jenkins/minikube-integration/19349-8084/.minikube/files for local assets ...
	I0731 18:07:41.401571   73800 filesync.go:149] local asset: /home/jenkins/minikube-integration/19349-8084/.minikube/files/etc/ssl/certs/152592.pem -> 152592.pem in /etc/ssl/certs
	I0731 18:07:41.401696   73800 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0731 18:07:41.410540   73800 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19349-8084/.minikube/files/etc/ssl/certs/152592.pem --> /etc/ssl/certs/152592.pem (1708 bytes)
	I0731 18:07:41.434298   73800 start.go:296] duration metric: took 126.324424ms for postStartSetup
	I0731 18:07:41.434342   73800 fix.go:56] duration metric: took 19.306472215s for fixHost
	I0731 18:07:41.434363   73800 main.go:141] libmachine: (embed-certs-436067) Calling .GetSSHHostname
	I0731 18:07:41.437502   73800 main.go:141] libmachine: (embed-certs-436067) DBG | domain embed-certs-436067 has defined MAC address 52:54:00:87:1e:25 in network mk-embed-certs-436067
	I0731 18:07:41.438007   73800 main.go:141] libmachine: (embed-certs-436067) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:1e:25", ip: ""} in network mk-embed-certs-436067: {Iface:virbr4 ExpiryTime:2024-07-31 19:07:32 +0000 UTC Type:0 Mac:52:54:00:87:1e:25 Iaid: IPaddr:192.168.50.86 Prefix:24 Hostname:embed-certs-436067 Clientid:01:52:54:00:87:1e:25}
	I0731 18:07:41.438038   73800 main.go:141] libmachine: (embed-certs-436067) DBG | domain embed-certs-436067 has defined IP address 192.168.50.86 and MAC address 52:54:00:87:1e:25 in network mk-embed-certs-436067
	I0731 18:07:41.438221   73800 main.go:141] libmachine: (embed-certs-436067) Calling .GetSSHPort
	I0731 18:07:41.438435   73800 main.go:141] libmachine: (embed-certs-436067) Calling .GetSSHKeyPath
	I0731 18:07:41.438613   73800 main.go:141] libmachine: (embed-certs-436067) Calling .GetSSHKeyPath
	I0731 18:07:41.438752   73800 main.go:141] libmachine: (embed-certs-436067) Calling .GetSSHUsername
	I0731 18:07:41.438932   73800 main.go:141] libmachine: Using SSH client type: native
	I0731 18:07:41.439086   73800 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.86 22 <nil> <nil>}
	I0731 18:07:41.439095   73800 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0731 18:07:41.551915   73800 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722449261.529568895
	
	I0731 18:07:41.551937   73800 fix.go:216] guest clock: 1722449261.529568895
	I0731 18:07:41.551944   73800 fix.go:229] Guest: 2024-07-31 18:07:41.529568895 +0000 UTC Remote: 2024-07-31 18:07:41.434346377 +0000 UTC m=+286.960766339 (delta=95.222518ms)
	I0731 18:07:41.551999   73800 fix.go:200] guest clock delta is within tolerance: 95.222518ms
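
The two entries above record the guest-clock check: the host runs `date +%s.%N` on the guest over SSH and compares the reported timestamp with its own clock against a tolerance. A minimal Go sketch of that comparison follows; it is not the minikube implementation, and the 2-second tolerance and the hard-coded sample timestamp are assumptions for illustration only.

	// clockdelta.go - illustrative sketch of comparing a guest `date +%s.%N`
	// reading against the host clock; names and tolerance are hypothetical.
	package main

	import (
		"fmt"
		"math"
		"strconv"
		"strings"
		"time"
	)

	// parseGuestClock converts `date +%s.%N` output (e.g. "1722449261.529568895")
	// into a time.Time.
	func parseGuestClock(out string) (time.Time, error) {
		parts := strings.SplitN(strings.TrimSpace(out), ".", 2)
		sec, err := strconv.ParseInt(parts[0], 10, 64)
		if err != nil {
			return time.Time{}, err
		}
		var nsec int64
		if len(parts) == 2 {
			// Pad or truncate the fractional part to nanosecond precision.
			frac := (parts[1] + "000000000")[:9]
			if nsec, err = strconv.ParseInt(frac, 10, 64); err != nil {
				return time.Time{}, err
			}
		}
		return time.Unix(sec, nsec), nil
	}

	func main() {
		guest, err := parseGuestClock("1722449261.529568895")
		if err != nil {
			panic(err)
		}
		delta := time.Since(guest)
		tolerance := 2 * time.Second // hypothetical tolerance, for illustration only
		if math.Abs(float64(delta)) <= float64(tolerance) {
			fmt.Printf("guest clock delta %v is within tolerance\n", delta)
		} else {
			fmt.Printf("guest clock delta %v exceeds tolerance, would resync\n", delta)
		}
	}
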
	I0731 18:07:41.552010   73800 start.go:83] releasing machines lock for "embed-certs-436067", held for 19.42417291s
	I0731 18:07:41.552036   73800 main.go:141] libmachine: (embed-certs-436067) Calling .DriverName
	I0731 18:07:41.552377   73800 main.go:141] libmachine: (embed-certs-436067) Calling .GetIP
	I0731 18:07:41.554945   73800 main.go:141] libmachine: (embed-certs-436067) DBG | domain embed-certs-436067 has defined MAC address 52:54:00:87:1e:25 in network mk-embed-certs-436067
	I0731 18:07:41.555385   73800 main.go:141] libmachine: (embed-certs-436067) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:1e:25", ip: ""} in network mk-embed-certs-436067: {Iface:virbr4 ExpiryTime:2024-07-31 19:07:32 +0000 UTC Type:0 Mac:52:54:00:87:1e:25 Iaid: IPaddr:192.168.50.86 Prefix:24 Hostname:embed-certs-436067 Clientid:01:52:54:00:87:1e:25}
	I0731 18:07:41.555415   73800 main.go:141] libmachine: (embed-certs-436067) DBG | domain embed-certs-436067 has defined IP address 192.168.50.86 and MAC address 52:54:00:87:1e:25 in network mk-embed-certs-436067
	I0731 18:07:41.555583   73800 main.go:141] libmachine: (embed-certs-436067) Calling .DriverName
	I0731 18:07:41.556139   73800 main.go:141] libmachine: (embed-certs-436067) Calling .DriverName
	I0731 18:07:41.556362   73800 main.go:141] libmachine: (embed-certs-436067) Calling .DriverName
	I0731 18:07:41.556448   73800 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0731 18:07:41.556507   73800 main.go:141] libmachine: (embed-certs-436067) Calling .GetSSHHostname
	I0731 18:07:41.556619   73800 ssh_runner.go:195] Run: cat /version.json
	I0731 18:07:41.556634   73800 main.go:141] libmachine: (embed-certs-436067) Calling .GetSSHHostname
	I0731 18:07:41.559700   73800 main.go:141] libmachine: (embed-certs-436067) DBG | domain embed-certs-436067 has defined MAC address 52:54:00:87:1e:25 in network mk-embed-certs-436067
	I0731 18:07:41.559847   73800 main.go:141] libmachine: (embed-certs-436067) DBG | domain embed-certs-436067 has defined MAC address 52:54:00:87:1e:25 in network mk-embed-certs-436067
	I0731 18:07:41.560160   73800 main.go:141] libmachine: (embed-certs-436067) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:1e:25", ip: ""} in network mk-embed-certs-436067: {Iface:virbr4 ExpiryTime:2024-07-31 19:07:32 +0000 UTC Type:0 Mac:52:54:00:87:1e:25 Iaid: IPaddr:192.168.50.86 Prefix:24 Hostname:embed-certs-436067 Clientid:01:52:54:00:87:1e:25}
	I0731 18:07:41.560227   73800 main.go:141] libmachine: (embed-certs-436067) DBG | domain embed-certs-436067 has defined IP address 192.168.50.86 and MAC address 52:54:00:87:1e:25 in network mk-embed-certs-436067
	I0731 18:07:41.560277   73800 main.go:141] libmachine: (embed-certs-436067) Calling .GetSSHPort
	I0731 18:07:41.560374   73800 main.go:141] libmachine: (embed-certs-436067) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:1e:25", ip: ""} in network mk-embed-certs-436067: {Iface:virbr4 ExpiryTime:2024-07-31 19:07:32 +0000 UTC Type:0 Mac:52:54:00:87:1e:25 Iaid: IPaddr:192.168.50.86 Prefix:24 Hostname:embed-certs-436067 Clientid:01:52:54:00:87:1e:25}
	I0731 18:07:41.560440   73800 main.go:141] libmachine: (embed-certs-436067) DBG | domain embed-certs-436067 has defined IP address 192.168.50.86 and MAC address 52:54:00:87:1e:25 in network mk-embed-certs-436067
	I0731 18:07:41.560582   73800 main.go:141] libmachine: (embed-certs-436067) Calling .GetSSHKeyPath
	I0731 18:07:41.560652   73800 main.go:141] libmachine: (embed-certs-436067) Calling .GetSSHPort
	I0731 18:07:41.560697   73800 main.go:141] libmachine: (embed-certs-436067) Calling .GetSSHUsername
	I0731 18:07:41.560745   73800 main.go:141] libmachine: (embed-certs-436067) Calling .GetSSHKeyPath
	I0731 18:07:41.560833   73800 sshutil.go:53] new ssh client: &{IP:192.168.50.86 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19349-8084/.minikube/machines/embed-certs-436067/id_rsa Username:docker}
	I0731 18:07:41.560909   73800 main.go:141] libmachine: (embed-certs-436067) Calling .GetSSHUsername
	I0731 18:07:41.561060   73800 sshutil.go:53] new ssh client: &{IP:192.168.50.86 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19349-8084/.minikube/machines/embed-certs-436067/id_rsa Username:docker}
	I0731 18:07:41.640796   73800 ssh_runner.go:195] Run: systemctl --version
	I0731 18:07:41.671461   73800 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0731 18:07:41.820881   73800 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0731 18:07:41.826610   73800 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0731 18:07:41.826673   73800 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0731 18:07:41.841766   73800 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0731 18:07:41.841789   73800 start.go:495] detecting cgroup driver to use...
	I0731 18:07:41.841872   73800 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0731 18:07:41.858636   73800 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0731 18:07:41.873090   73800 docker.go:217] disabling cri-docker service (if available) ...
	I0731 18:07:41.873152   73800 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0731 18:07:41.890967   73800 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0731 18:07:41.907886   73800 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0731 18:07:42.022724   73800 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0731 18:07:42.173885   73800 docker.go:233] disabling docker service ...
	I0731 18:07:42.173969   73800 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0731 18:07:42.190959   73800 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0731 18:07:42.205274   73800 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0731 18:07:42.358130   73800 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0731 18:07:42.497981   73800 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0731 18:07:42.513774   73800 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0731 18:07:42.532713   73800 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0731 18:07:42.532808   73800 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 18:07:42.544367   73800 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0731 18:07:42.544427   73800 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 18:07:42.556427   73800 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 18:07:42.566399   73800 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 18:07:42.576633   73800 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0731 18:07:42.588508   73800 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 18:07:42.600011   73800 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 18:07:42.618858   73800 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 18:07:42.630437   73800 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0731 18:07:42.641459   73800 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0731 18:07:42.641528   73800 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0731 18:07:42.655000   73800 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0731 18:07:42.664912   73800 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0731 18:07:42.791781   73800 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0731 18:07:42.936709   73800 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0731 18:07:42.936778   73800 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0731 18:07:42.941132   73800 start.go:563] Will wait 60s for crictl version
	I0731 18:07:42.941189   73800 ssh_runner.go:195] Run: which crictl
	I0731 18:07:42.944870   73800 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0731 18:07:42.983069   73800 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0731 18:07:42.983181   73800 ssh_runner.go:195] Run: crio --version
	I0731 18:07:43.011636   73800 ssh_runner.go:195] Run: crio --version
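
The restart sequence above waits up to 60s for /var/run/crio/crio.sock to appear before probing the runtime with crictl. Below is a minimal sketch of such a wait loop using only the Go standard library; the function names and the 500ms polling interval are assumptions, not minikube's code path.

	// waitsocket.go - poll for the CRI socket path until it exists or a
	// deadline passes, mirroring the "Will wait 60s for socket path" step.
	package main

	import (
		"fmt"
		"os"
		"time"
	)

	// waitForSocket polls for path until it exists or timeout elapses.
	func waitForSocket(path string, timeout, interval time.Duration) error {
		deadline := time.Now().Add(timeout)
		for {
			if _, err := os.Stat(path); err == nil {
				return nil // socket file is present
			}
			if time.Now().After(deadline) {
				return fmt.Errorf("timed out after %v waiting for %s", timeout, path)
			}
			time.Sleep(interval)
		}
	}

	func main() {
		if err := waitForSocket("/var/run/crio/crio.sock", 60*time.Second, 500*time.Millisecond); err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		fmt.Println("crio socket is ready")
	}
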
	I0731 18:07:43.043295   73800 out.go:177] * Preparing Kubernetes v1.30.3 on CRI-O 1.29.1 ...
	I0731 18:07:43.044545   73800 main.go:141] libmachine: (embed-certs-436067) Calling .GetIP
	I0731 18:07:43.047635   73800 main.go:141] libmachine: (embed-certs-436067) DBG | domain embed-certs-436067 has defined MAC address 52:54:00:87:1e:25 in network mk-embed-certs-436067
	I0731 18:07:43.048049   73800 main.go:141] libmachine: (embed-certs-436067) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:1e:25", ip: ""} in network mk-embed-certs-436067: {Iface:virbr4 ExpiryTime:2024-07-31 19:07:32 +0000 UTC Type:0 Mac:52:54:00:87:1e:25 Iaid: IPaddr:192.168.50.86 Prefix:24 Hostname:embed-certs-436067 Clientid:01:52:54:00:87:1e:25}
	I0731 18:07:43.048080   73800 main.go:141] libmachine: (embed-certs-436067) DBG | domain embed-certs-436067 has defined IP address 192.168.50.86 and MAC address 52:54:00:87:1e:25 in network mk-embed-certs-436067
	I0731 18:07:43.048330   73800 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0731 18:07:43.052269   73800 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0731 18:07:43.064116   73800 kubeadm.go:883] updating cluster {Name:embed-certs-436067 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1
.30.3 ClusterName:embed-certs-436067 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.86 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:f
alse MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0731 18:07:43.064283   73800 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0731 18:07:43.064361   73800 ssh_runner.go:195] Run: sudo crictl images --output json
	I0731 18:07:43.100437   73800 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.3". assuming images are not preloaded.
	I0731 18:07:43.100516   73800 ssh_runner.go:195] Run: which lz4
	I0731 18:07:43.104627   73800 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0731 18:07:43.108552   73800 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0731 18:07:43.108586   73800 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19349-8084/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (406200976 bytes)
	I0731 18:07:44.368238   73800 crio.go:462] duration metric: took 1.263636259s to copy over tarball
	I0731 18:07:44.368322   73800 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0731 18:07:41.576648   74203 main.go:141] libmachine: (old-k8s-version-276459) Calling .Start
	I0731 18:07:41.576823   74203 main.go:141] libmachine: (old-k8s-version-276459) Ensuring networks are active...
	I0731 18:07:41.577511   74203 main.go:141] libmachine: (old-k8s-version-276459) Ensuring network default is active
	I0731 18:07:41.578015   74203 main.go:141] libmachine: (old-k8s-version-276459) Ensuring network mk-old-k8s-version-276459 is active
	I0731 18:07:41.578469   74203 main.go:141] libmachine: (old-k8s-version-276459) Getting domain xml...
	I0731 18:07:41.579474   74203 main.go:141] libmachine: (old-k8s-version-276459) Creating domain...
	I0731 18:07:42.876409   74203 main.go:141] libmachine: (old-k8s-version-276459) Waiting to get IP...
	I0731 18:07:42.877345   74203 main.go:141] libmachine: (old-k8s-version-276459) DBG | domain old-k8s-version-276459 has defined MAC address 52:54:00:79:9d:96 in network mk-old-k8s-version-276459
	I0731 18:07:42.877788   74203 main.go:141] libmachine: (old-k8s-version-276459) DBG | unable to find current IP address of domain old-k8s-version-276459 in network mk-old-k8s-version-276459
	I0731 18:07:42.877841   74203 main.go:141] libmachine: (old-k8s-version-276459) DBG | I0731 18:07:42.877763   75164 retry.go:31] will retry after 218.764988ms: waiting for machine to come up
	I0731 18:07:43.098230   74203 main.go:141] libmachine: (old-k8s-version-276459) DBG | domain old-k8s-version-276459 has defined MAC address 52:54:00:79:9d:96 in network mk-old-k8s-version-276459
	I0731 18:07:43.098697   74203 main.go:141] libmachine: (old-k8s-version-276459) DBG | unable to find current IP address of domain old-k8s-version-276459 in network mk-old-k8s-version-276459
	I0731 18:07:43.098722   74203 main.go:141] libmachine: (old-k8s-version-276459) DBG | I0731 18:07:43.098650   75164 retry.go:31] will retry after 285.579707ms: waiting for machine to come up
	I0731 18:07:43.386356   74203 main.go:141] libmachine: (old-k8s-version-276459) DBG | domain old-k8s-version-276459 has defined MAC address 52:54:00:79:9d:96 in network mk-old-k8s-version-276459
	I0731 18:07:43.386897   74203 main.go:141] libmachine: (old-k8s-version-276459) DBG | unable to find current IP address of domain old-k8s-version-276459 in network mk-old-k8s-version-276459
	I0731 18:07:43.386928   74203 main.go:141] libmachine: (old-k8s-version-276459) DBG | I0731 18:07:43.386852   75164 retry.go:31] will retry after 389.197253ms: waiting for machine to come up
	I0731 18:07:43.778183   74203 main.go:141] libmachine: (old-k8s-version-276459) DBG | domain old-k8s-version-276459 has defined MAC address 52:54:00:79:9d:96 in network mk-old-k8s-version-276459
	I0731 18:07:43.778672   74203 main.go:141] libmachine: (old-k8s-version-276459) DBG | unable to find current IP address of domain old-k8s-version-276459 in network mk-old-k8s-version-276459
	I0731 18:07:43.778698   74203 main.go:141] libmachine: (old-k8s-version-276459) DBG | I0731 18:07:43.778622   75164 retry.go:31] will retry after 484.5108ms: waiting for machine to come up
	I0731 18:07:44.264412   74203 main.go:141] libmachine: (old-k8s-version-276459) DBG | domain old-k8s-version-276459 has defined MAC address 52:54:00:79:9d:96 in network mk-old-k8s-version-276459
	I0731 18:07:44.265042   74203 main.go:141] libmachine: (old-k8s-version-276459) DBG | unable to find current IP address of domain old-k8s-version-276459 in network mk-old-k8s-version-276459
	I0731 18:07:44.265073   74203 main.go:141] libmachine: (old-k8s-version-276459) DBG | I0731 18:07:44.264955   75164 retry.go:31] will retry after 621.551625ms: waiting for machine to come up
	I0731 18:07:44.887986   74203 main.go:141] libmachine: (old-k8s-version-276459) DBG | domain old-k8s-version-276459 has defined MAC address 52:54:00:79:9d:96 in network mk-old-k8s-version-276459
	I0731 18:07:44.888534   74203 main.go:141] libmachine: (old-k8s-version-276459) DBG | unable to find current IP address of domain old-k8s-version-276459 in network mk-old-k8s-version-276459
	I0731 18:07:44.888563   74203 main.go:141] libmachine: (old-k8s-version-276459) DBG | I0731 18:07:44.888489   75164 retry.go:31] will retry after 610.567971ms: waiting for machine to come up
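
The DBG entries above ("will retry after …: waiting for machine to come up") poll the libvirt DHCP leases with a growing delay until the domain reports an IP address. The sketch below shows the general retry-with-backoff shape of such a loop; the names, delays, and the stand-in probe are hypothetical and for illustration only.

	// retry.go - illustrative retry loop with a growing, jittered delay,
	// echoing the increasing waits in the "waiting for machine to come up" log.
	package main

	import (
		"errors"
		"fmt"
		"math/rand"
		"time"
	)

	// retryUntil keeps calling probe until it succeeds or maxAttempts is reached.
	func retryUntil(probe func() error, maxAttempts int, base time.Duration) error {
		delay := base
		for attempt := 1; attempt <= maxAttempts; attempt++ {
			if err := probe(); err == nil {
				return nil
			} else {
				// Grow the delay and add jitter before the next attempt.
				wait := delay + time.Duration(rand.Int63n(int64(delay)))
				fmt.Printf("attempt %d failed (%v); will retry after %v\n", attempt, err, wait)
				time.Sleep(wait)
				delay = delay * 3 / 2
			}
		}
		return errors.New("machine did not come up in time")
	}

	func main() {
		attempts := 0
		// Hypothetical probe standing in for "look up the domain's DHCP lease".
		probe := func() error {
			attempts++
			if attempts < 4 {
				return errors.New("unable to find current IP address")
			}
			return nil
		}
		if err := retryUntil(probe, 10, 200*time.Millisecond); err != nil {
			fmt.Println(err)
			return
		}
		fmt.Println("machine is up")
	}
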
	I0731 18:07:42.773583   73696 pod_ready.go:102] pod "kube-scheduler-default-k8s-diff-port-094310" in "kube-system" namespace has status "Ready":"False"
	I0731 18:07:44.272853   73696 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-094310" in "kube-system" namespace has status "Ready":"True"
	I0731 18:07:44.272874   73696 pod_ready.go:81] duration metric: took 7.507462023s for pod "kube-scheduler-default-k8s-diff-port-094310" in "kube-system" namespace to be "Ready" ...
	I0731 18:07:44.272886   73696 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-569cc877fc-64hp4" in "kube-system" namespace to be "Ready" ...
	I0731 18:07:46.689701   73800 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.321340678s)
	I0731 18:07:46.689730   73800 crio.go:469] duration metric: took 2.321463484s to extract the tarball
	I0731 18:07:46.689738   73800 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0731 18:07:46.749205   73800 ssh_runner.go:195] Run: sudo crictl images --output json
	I0731 18:07:46.805950   73800 crio.go:514] all images are preloaded for cri-o runtime.
	I0731 18:07:46.805979   73800 cache_images.go:84] Images are preloaded, skipping loading
	I0731 18:07:46.805990   73800 kubeadm.go:934] updating node { 192.168.50.86 8443 v1.30.3 crio true true} ...
	I0731 18:07:46.806135   73800 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-436067 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.86
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:embed-certs-436067 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0731 18:07:46.806233   73800 ssh_runner.go:195] Run: crio config
	I0731 18:07:46.865815   73800 cni.go:84] Creating CNI manager for ""
	I0731 18:07:46.865838   73800 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0731 18:07:46.865852   73800 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0731 18:07:46.865873   73800 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.86 APIServerPort:8443 KubernetesVersion:v1.30.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-436067 NodeName:embed-certs-436067 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.86"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.86 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath
:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0731 18:07:46.866048   73800 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.86
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-436067"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.86
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.86"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0731 18:07:46.866121   73800 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0731 18:07:46.875722   73800 binaries.go:44] Found k8s binaries, skipping transfer
	I0731 18:07:46.875786   73800 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0731 18:07:46.885107   73800 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I0731 18:07:46.903868   73800 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0731 18:07:46.919585   73800 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2159 bytes)
	I0731 18:07:46.939034   73800 ssh_runner.go:195] Run: grep 192.168.50.86	control-plane.minikube.internal$ /etc/hosts
	I0731 18:07:46.943460   73800 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.86	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
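
The command above rewrites /etc/hosts idempotently: it filters out any existing line ending in the host name, appends a fresh "IP<TAB>name" mapping, and copies the result back into place. A small Go sketch of the same idea follows; the file name and helper function are hypothetical, and the real flow writes the file on the guest via sudo.

	// hosts.go - sketch of an idempotent hosts-file update, assuming a
	// plain-text hosts file with tab-separated "IP<TAB>name" entries.
	package main

	import (
		"fmt"
		"os"
		"strings"
	)

	// upsertHostsEntry returns contents with exactly one entry mapping name to ip.
	func upsertHostsEntry(contents, ip, name string) string {
		var kept []string
		for _, line := range strings.Split(contents, "\n") {
			if strings.HasSuffix(strings.TrimRight(line, " "), "\t"+name) {
				continue // drop the stale mapping, mirroring the grep -v step
			}
			kept = append(kept, line)
		}
		kept = append(kept, fmt.Sprintf("%s\t%s", ip, name))
		return strings.Join(kept, "\n") + "\n"
	}

	func main() {
		// Operates on a local sample file for illustration only.
		data, err := os.ReadFile("hosts.sample")
		if err != nil {
			data = []byte("127.0.0.1\tlocalhost")
		}
		updated := upsertHostsEntry(string(data), "192.168.50.86", "control-plane.minikube.internal")
		fmt.Print(updated)
	}
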
	I0731 18:07:46.957699   73800 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0731 18:07:47.065714   73800 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0731 18:07:47.080655   73800 certs.go:68] Setting up /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/embed-certs-436067 for IP: 192.168.50.86
	I0731 18:07:47.080681   73800 certs.go:194] generating shared ca certs ...
	I0731 18:07:47.080717   73800 certs.go:226] acquiring lock for ca certs: {Name:mk49eed439085f9c30746706460ce213570d1997 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 18:07:47.080879   73800 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19349-8084/.minikube/ca.key
	I0731 18:07:47.080938   73800 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19349-8084/.minikube/proxy-client-ca.key
	I0731 18:07:47.080950   73800 certs.go:256] generating profile certs ...
	I0731 18:07:47.081046   73800 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/embed-certs-436067/client.key
	I0731 18:07:47.081113   73800 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/embed-certs-436067/apiserver.key.7b8160da
	I0731 18:07:47.081168   73800 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/embed-certs-436067/proxy-client.key
	I0731 18:07:47.081312   73800 certs.go:484] found cert: /home/jenkins/minikube-integration/19349-8084/.minikube/certs/15259.pem (1338 bytes)
	W0731 18:07:47.081367   73800 certs.go:480] ignoring /home/jenkins/minikube-integration/19349-8084/.minikube/certs/15259_empty.pem, impossibly tiny 0 bytes
	I0731 18:07:47.081380   73800 certs.go:484] found cert: /home/jenkins/minikube-integration/19349-8084/.minikube/certs/ca-key.pem (1675 bytes)
	I0731 18:07:47.081413   73800 certs.go:484] found cert: /home/jenkins/minikube-integration/19349-8084/.minikube/certs/ca.pem (1082 bytes)
	I0731 18:07:47.081438   73800 certs.go:484] found cert: /home/jenkins/minikube-integration/19349-8084/.minikube/certs/cert.pem (1123 bytes)
	I0731 18:07:47.081468   73800 certs.go:484] found cert: /home/jenkins/minikube-integration/19349-8084/.minikube/certs/key.pem (1675 bytes)
	I0731 18:07:47.081508   73800 certs.go:484] found cert: /home/jenkins/minikube-integration/19349-8084/.minikube/files/etc/ssl/certs/152592.pem (1708 bytes)
	I0731 18:07:47.082355   73800 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19349-8084/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0731 18:07:47.130037   73800 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19349-8084/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0731 18:07:47.171218   73800 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19349-8084/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0731 18:07:47.215745   73800 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19349-8084/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0731 18:07:47.244883   73800 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/embed-certs-436067/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I0731 18:07:47.270032   73800 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/embed-certs-436067/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0731 18:07:47.294900   73800 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/embed-certs-436067/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0731 18:07:47.317285   73800 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/embed-certs-436067/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0731 18:07:47.343000   73800 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19349-8084/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0731 18:07:47.369906   73800 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19349-8084/.minikube/certs/15259.pem --> /usr/share/ca-certificates/15259.pem (1338 bytes)
	I0731 18:07:47.392022   73800 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19349-8084/.minikube/files/etc/ssl/certs/152592.pem --> /usr/share/ca-certificates/152592.pem (1708 bytes)
	I0731 18:07:47.414219   73800 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0731 18:07:47.431931   73800 ssh_runner.go:195] Run: openssl version
	I0731 18:07:47.437602   73800 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0731 18:07:47.447585   73800 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0731 18:07:47.451779   73800 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 31 16:42 /usr/share/ca-certificates/minikubeCA.pem
	I0731 18:07:47.451833   73800 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0731 18:07:47.457309   73800 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0731 18:07:47.466917   73800 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/15259.pem && ln -fs /usr/share/ca-certificates/15259.pem /etc/ssl/certs/15259.pem"
	I0731 18:07:47.476211   73800 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/15259.pem
	I0731 18:07:47.480149   73800 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 31 16:54 /usr/share/ca-certificates/15259.pem
	I0731 18:07:47.480215   73800 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/15259.pem
	I0731 18:07:47.485412   73800 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/15259.pem /etc/ssl/certs/51391683.0"
	I0731 18:07:47.494852   73800 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/152592.pem && ln -fs /usr/share/ca-certificates/152592.pem /etc/ssl/certs/152592.pem"
	I0731 18:07:47.504407   73800 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/152592.pem
	I0731 18:07:47.509594   73800 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 31 16:54 /usr/share/ca-certificates/152592.pem
	I0731 18:07:47.509658   73800 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/152592.pem
	I0731 18:07:47.515728   73800 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/152592.pem /etc/ssl/certs/3ec20f2e.0"
	I0731 18:07:47.525660   73800 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0731 18:07:47.529953   73800 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0731 18:07:47.535576   73800 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0731 18:07:47.541158   73800 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0731 18:07:47.546633   73800 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0731 18:07:47.551827   73800 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0731 18:07:47.557100   73800 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0731 18:07:47.562447   73800 kubeadm.go:392] StartCluster: {Name:embed-certs-436067 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30
.3 ClusterName:embed-certs-436067 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.86 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:fals
e MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0731 18:07:47.562551   73800 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0731 18:07:47.562616   73800 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0731 18:07:47.610318   73800 cri.go:89] found id: ""
	I0731 18:07:47.610382   73800 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0731 18:07:47.623036   73800 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0731 18:07:47.623053   73800 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0731 18:07:47.623101   73800 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0731 18:07:47.631709   73800 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0731 18:07:47.632699   73800 kubeconfig.go:125] found "embed-certs-436067" server: "https://192.168.50.86:8443"
	I0731 18:07:47.634724   73800 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0731 18:07:47.643183   73800 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.50.86
	I0731 18:07:47.643207   73800 kubeadm.go:1160] stopping kube-system containers ...
	I0731 18:07:47.643218   73800 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0731 18:07:47.643264   73800 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0731 18:07:47.677438   73800 cri.go:89] found id: ""
	I0731 18:07:47.677527   73800 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0731 18:07:47.693427   73800 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0731 18:07:47.702889   73800 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0731 18:07:47.702907   73800 kubeadm.go:157] found existing configuration files:
	
	I0731 18:07:47.702956   73800 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0731 18:07:47.713958   73800 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0731 18:07:47.714017   73800 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0731 18:07:47.723931   73800 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0731 18:07:47.732615   73800 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0731 18:07:47.732673   73800 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0731 18:07:47.741168   73800 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0731 18:07:47.749164   73800 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0731 18:07:47.749217   73800 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0731 18:07:47.757691   73800 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0731 18:07:47.765479   73800 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0731 18:07:47.765530   73800 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0731 18:07:47.774002   73800 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0731 18:07:47.783757   73800 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0731 18:07:47.890835   73800 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0731 18:07:48.951421   73800 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.060547503s)
	I0731 18:07:48.951466   73800 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0731 18:07:49.152745   73800 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0731 18:07:49.224334   73800 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0731 18:07:49.341066   73800 api_server.go:52] waiting for apiserver process to appear ...
	I0731 18:07:49.341147   73800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:07:45.500400   74203 main.go:141] libmachine: (old-k8s-version-276459) DBG | domain old-k8s-version-276459 has defined MAC address 52:54:00:79:9d:96 in network mk-old-k8s-version-276459
	I0731 18:07:45.500938   74203 main.go:141] libmachine: (old-k8s-version-276459) DBG | unable to find current IP address of domain old-k8s-version-276459 in network mk-old-k8s-version-276459
	I0731 18:07:45.500966   74203 main.go:141] libmachine: (old-k8s-version-276459) DBG | I0731 18:07:45.500890   75164 retry.go:31] will retry after 1.069889786s: waiting for machine to come up
	I0731 18:07:46.572634   74203 main.go:141] libmachine: (old-k8s-version-276459) DBG | domain old-k8s-version-276459 has defined MAC address 52:54:00:79:9d:96 in network mk-old-k8s-version-276459
	I0731 18:07:46.573085   74203 main.go:141] libmachine: (old-k8s-version-276459) DBG | unable to find current IP address of domain old-k8s-version-276459 in network mk-old-k8s-version-276459
	I0731 18:07:46.573128   74203 main.go:141] libmachine: (old-k8s-version-276459) DBG | I0731 18:07:46.572979   75164 retry.go:31] will retry after 1.047722466s: waiting for machine to come up
	I0731 18:07:47.622035   74203 main.go:141] libmachine: (old-k8s-version-276459) DBG | domain old-k8s-version-276459 has defined MAC address 52:54:00:79:9d:96 in network mk-old-k8s-version-276459
	I0731 18:07:47.622479   74203 main.go:141] libmachine: (old-k8s-version-276459) DBG | unable to find current IP address of domain old-k8s-version-276459 in network mk-old-k8s-version-276459
	I0731 18:07:47.622507   74203 main.go:141] libmachine: (old-k8s-version-276459) DBG | I0731 18:07:47.622435   75164 retry.go:31] will retry after 1.292658555s: waiting for machine to come up
	I0731 18:07:48.916255   74203 main.go:141] libmachine: (old-k8s-version-276459) DBG | domain old-k8s-version-276459 has defined MAC address 52:54:00:79:9d:96 in network mk-old-k8s-version-276459
	I0731 18:07:48.916755   74203 main.go:141] libmachine: (old-k8s-version-276459) DBG | unable to find current IP address of domain old-k8s-version-276459 in network mk-old-k8s-version-276459
	I0731 18:07:48.916778   74203 main.go:141] libmachine: (old-k8s-version-276459) DBG | I0731 18:07:48.916701   75164 retry.go:31] will retry after 2.006539925s: waiting for machine to come up
	I0731 18:07:46.281654   73696 pod_ready.go:102] pod "metrics-server-569cc877fc-64hp4" in "kube-system" namespace has status "Ready":"False"
	I0731 18:07:49.189881   73696 pod_ready.go:102] pod "metrics-server-569cc877fc-64hp4" in "kube-system" namespace has status "Ready":"False"
	I0731 18:07:49.841397   73800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:07:50.341264   73800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:07:50.409398   73800 api_server.go:72] duration metric: took 1.068329172s to wait for apiserver process to appear ...
	I0731 18:07:50.409432   73800 api_server.go:88] waiting for apiserver healthz status ...
	I0731 18:07:50.409457   73800 api_server.go:253] Checking apiserver healthz at https://192.168.50.86:8443/healthz ...
	I0731 18:07:50.410135   73800 api_server.go:269] stopped: https://192.168.50.86:8443/healthz: Get "https://192.168.50.86:8443/healthz": dial tcp 192.168.50.86:8443: connect: connection refused
	I0731 18:07:50.909802   73800 api_server.go:253] Checking apiserver healthz at https://192.168.50.86:8443/healthz ...
	I0731 18:07:52.636930   73800 api_server.go:279] https://192.168.50.86:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0731 18:07:52.636972   73800 api_server.go:103] status: https://192.168.50.86:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0731 18:07:52.636989   73800 api_server.go:253] Checking apiserver healthz at https://192.168.50.86:8443/healthz ...
	I0731 18:07:52.666947   73800 api_server.go:279] https://192.168.50.86:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0731 18:07:52.666980   73800 api_server.go:103] status: https://192.168.50.86:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0731 18:07:52.910391   73800 api_server.go:253] Checking apiserver healthz at https://192.168.50.86:8443/healthz ...
	I0731 18:07:52.916305   73800 api_server.go:279] https://192.168.50.86:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0731 18:07:52.916342   73800 api_server.go:103] status: https://192.168.50.86:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
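
The healthz checks above poll https://192.168.50.86:8443/healthz roughly every 500ms and treat the 403 and 500 responses as "not ready yet". A minimal sketch of such a polling loop follows; skipping TLS verification and the hard-coded deadline are assumptions made for illustration, not how minikube authenticates to the apiserver.

	// healthz.go - poll an apiserver /healthz endpoint until it returns 200 OK
	// or a deadline passes; TLS verification is skipped only in this sketch.
	package main

	import (
		"crypto/tls"
		"fmt"
		"net/http"
		"time"
	)

	func waitForHealthz(url string, timeout time.Duration) error {
		client := &http.Client{
			Timeout: 2 * time.Second,
			Transport: &http.Transport{
				// The apiserver presents a self-signed CA here; do not do this outside a sketch.
				TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
			},
		}
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			resp, err := client.Get(url)
			if err == nil {
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					return nil // healthz check passed
				}
				fmt.Printf("healthz returned %d, retrying\n", resp.StatusCode)
			} else {
				fmt.Printf("healthz not reachable yet: %v\n", err)
			}
			time.Sleep(500 * time.Millisecond)
		}
		return fmt.Errorf("apiserver did not become healthy within %v", timeout)
	}

	func main() {
		if err := waitForHealthz("https://192.168.50.86:8443/healthz", 4*time.Minute); err != nil {
			fmt.Println(err)
		}
	}
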
	I0731 18:07:53.409623   73800 api_server.go:253] Checking apiserver healthz at https://192.168.50.86:8443/healthz ...
	I0731 18:07:53.419159   73800 api_server.go:279] https://192.168.50.86:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0731 18:07:53.419205   73800 api_server.go:103] status: https://192.168.50.86:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0731 18:07:53.909654   73800 api_server.go:253] Checking apiserver healthz at https://192.168.50.86:8443/healthz ...
	I0731 18:07:53.913518   73800 api_server.go:279] https://192.168.50.86:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0731 18:07:53.913541   73800 api_server.go:103] status: https://192.168.50.86:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0731 18:07:54.409879   73800 api_server.go:253] Checking apiserver healthz at https://192.168.50.86:8443/healthz ...
	I0731 18:07:54.413948   73800 api_server.go:279] https://192.168.50.86:8443/healthz returned 200:
	ok
	I0731 18:07:54.422414   73800 api_server.go:141] control plane version: v1.30.3
	I0731 18:07:54.422444   73800 api_server.go:131] duration metric: took 4.013003689s to wait for apiserver health ...
	I0731 18:07:54.422458   73800 cni.go:84] Creating CNI manager for ""
	I0731 18:07:54.422467   73800 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0731 18:07:54.424680   73800 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0731 18:07:54.425887   73800 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0731 18:07:54.436394   73800 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0731 18:07:54.454533   73800 system_pods.go:43] waiting for kube-system pods to appear ...
	I0731 18:07:54.464268   73800 system_pods.go:59] 8 kube-system pods found
	I0731 18:07:54.464304   73800 system_pods.go:61] "coredns-7db6d8ff4d-h6ckp" [84faf557-0c8d-4026-b620-37265e017ed0] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0731 18:07:54.464315   73800 system_pods.go:61] "etcd-embed-certs-436067" [787466df-6e3f-4209-a996-037875d63dc8] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0731 18:07:54.464326   73800 system_pods.go:61] "kube-apiserver-embed-certs-436067" [6366e38e-21f3-41a4-af7a-433953b70eaa] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0731 18:07:54.464335   73800 system_pods.go:61] "kube-controller-manager-embed-certs-436067" [a97f6a49-40cf-433a-8196-c433e3cda8e8] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0731 18:07:54.464341   73800 system_pods.go:61] "kube-proxy-tl9pj" [0124eb62-5c00-4f75-a73f-c3e92ddc4a42] Running
	I0731 18:07:54.464354   73800 system_pods.go:61] "kube-scheduler-embed-certs-436067" [afbb9117-f229-44ea-8939-d28c4a402c6b] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0731 18:07:54.464366   73800 system_pods.go:61] "metrics-server-569cc877fc-fzxrw" [2ecdab2a-8ce8-4771-bd94-4e24dee34386] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0731 18:07:54.464374   73800 system_pods.go:61] "storage-provisioner" [29b17f6d-f9e4-4272-b6da-368431264701] Running
	I0731 18:07:54.464382   73800 system_pods.go:74] duration metric: took 9.82125ms to wait for pod list to return data ...
	I0731 18:07:54.464395   73800 node_conditions.go:102] verifying NodePressure condition ...
	I0731 18:07:54.467718   73800 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0731 18:07:54.467748   73800 node_conditions.go:123] node cpu capacity is 2
	I0731 18:07:54.467761   73800 node_conditions.go:105] duration metric: took 3.3602ms to run NodePressure ...
	I0731 18:07:54.467779   73800 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0731 18:07:50.925369   74203 main.go:141] libmachine: (old-k8s-version-276459) DBG | domain old-k8s-version-276459 has defined MAC address 52:54:00:79:9d:96 in network mk-old-k8s-version-276459
	I0731 18:07:50.925835   74203 main.go:141] libmachine: (old-k8s-version-276459) DBG | unable to find current IP address of domain old-k8s-version-276459 in network mk-old-k8s-version-276459
	I0731 18:07:50.925856   74203 main.go:141] libmachine: (old-k8s-version-276459) DBG | I0731 18:07:50.925790   75164 retry.go:31] will retry after 2.875577792s: waiting for machine to come up
	I0731 18:07:53.802729   74203 main.go:141] libmachine: (old-k8s-version-276459) DBG | domain old-k8s-version-276459 has defined MAC address 52:54:00:79:9d:96 in network mk-old-k8s-version-276459
	I0731 18:07:53.803164   74203 main.go:141] libmachine: (old-k8s-version-276459) DBG | unable to find current IP address of domain old-k8s-version-276459 in network mk-old-k8s-version-276459
	I0731 18:07:53.803192   74203 main.go:141] libmachine: (old-k8s-version-276459) DBG | I0731 18:07:53.803122   75164 retry.go:31] will retry after 2.352020729s: waiting for machine to come up
	I0731 18:07:51.279883   73696 pod_ready.go:102] pod "metrics-server-569cc877fc-64hp4" in "kube-system" namespace has status "Ready":"False"
	I0731 18:07:53.279992   73696 pod_ready.go:102] pod "metrics-server-569cc877fc-64hp4" in "kube-system" namespace has status "Ready":"False"
	I0731 18:07:55.778812   73696 pod_ready.go:102] pod "metrics-server-569cc877fc-64hp4" in "kube-system" namespace has status "Ready":"False"
	I0731 18:07:54.732921   73800 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0731 18:07:54.736779   73800 kubeadm.go:739] kubelet initialised
	I0731 18:07:54.736798   73800 kubeadm.go:740] duration metric: took 3.850446ms waiting for restarted kubelet to initialise ...
	I0731 18:07:54.736809   73800 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0731 18:07:54.741733   73800 pod_ready.go:78] waiting up to 4m0s for pod "coredns-7db6d8ff4d-h6ckp" in "kube-system" namespace to be "Ready" ...
	I0731 18:07:54.745722   73800 pod_ready.go:97] node "embed-certs-436067" hosting pod "coredns-7db6d8ff4d-h6ckp" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-436067" has status "Ready":"False"
	I0731 18:07:54.745742   73800 pod_ready.go:81] duration metric: took 3.986968ms for pod "coredns-7db6d8ff4d-h6ckp" in "kube-system" namespace to be "Ready" ...
	E0731 18:07:54.745751   73800 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-436067" hosting pod "coredns-7db6d8ff4d-h6ckp" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-436067" has status "Ready":"False"
	I0731 18:07:54.745757   73800 pod_ready.go:78] waiting up to 4m0s for pod "etcd-embed-certs-436067" in "kube-system" namespace to be "Ready" ...
	I0731 18:07:54.749650   73800 pod_ready.go:97] node "embed-certs-436067" hosting pod "etcd-embed-certs-436067" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-436067" has status "Ready":"False"
	I0731 18:07:54.749666   73800 pod_ready.go:81] duration metric: took 3.895483ms for pod "etcd-embed-certs-436067" in "kube-system" namespace to be "Ready" ...
	E0731 18:07:54.749673   73800 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-436067" hosting pod "etcd-embed-certs-436067" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-436067" has status "Ready":"False"
	I0731 18:07:54.749679   73800 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-embed-certs-436067" in "kube-system" namespace to be "Ready" ...
	I0731 18:07:54.753326   73800 pod_ready.go:97] node "embed-certs-436067" hosting pod "kube-apiserver-embed-certs-436067" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-436067" has status "Ready":"False"
	I0731 18:07:54.753351   73800 pod_ready.go:81] duration metric: took 3.66496ms for pod "kube-apiserver-embed-certs-436067" in "kube-system" namespace to be "Ready" ...
	E0731 18:07:54.753362   73800 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-436067" hosting pod "kube-apiserver-embed-certs-436067" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-436067" has status "Ready":"False"
	I0731 18:07:54.753370   73800 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-436067" in "kube-system" namespace to be "Ready" ...
	I0731 18:07:54.857956   73800 pod_ready.go:97] node "embed-certs-436067" hosting pod "kube-controller-manager-embed-certs-436067" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-436067" has status "Ready":"False"
	I0731 18:07:54.857978   73800 pod_ready.go:81] duration metric: took 104.599259ms for pod "kube-controller-manager-embed-certs-436067" in "kube-system" namespace to be "Ready" ...
	E0731 18:07:54.857988   73800 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-436067" hosting pod "kube-controller-manager-embed-certs-436067" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-436067" has status "Ready":"False"
	I0731 18:07:54.857995   73800 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-tl9pj" in "kube-system" namespace to be "Ready" ...
	I0731 18:07:55.257589   73800 pod_ready.go:92] pod "kube-proxy-tl9pj" in "kube-system" namespace has status "Ready":"True"
	I0731 18:07:55.257621   73800 pod_ready.go:81] duration metric: took 399.617003ms for pod "kube-proxy-tl9pj" in "kube-system" namespace to be "Ready" ...
	I0731 18:07:55.257630   73800 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-embed-certs-436067" in "kube-system" namespace to be "Ready" ...
	I0731 18:07:57.262770   73800 pod_ready.go:102] pod "kube-scheduler-embed-certs-436067" in "kube-system" namespace has status "Ready":"False"
	I0731 18:07:59.271094   73800 pod_ready.go:102] pod "kube-scheduler-embed-certs-436067" in "kube-system" namespace has status "Ready":"False"
	I0731 18:07:56.157721   74203 main.go:141] libmachine: (old-k8s-version-276459) DBG | domain old-k8s-version-276459 has defined MAC address 52:54:00:79:9d:96 in network mk-old-k8s-version-276459
	I0731 18:07:56.158176   74203 main.go:141] libmachine: (old-k8s-version-276459) DBG | unable to find current IP address of domain old-k8s-version-276459 in network mk-old-k8s-version-276459
	I0731 18:07:56.158216   74203 main.go:141] libmachine: (old-k8s-version-276459) DBG | I0731 18:07:56.158110   75164 retry.go:31] will retry after 3.552824334s: waiting for machine to come up
	I0731 18:07:59.712249   74203 main.go:141] libmachine: (old-k8s-version-276459) DBG | domain old-k8s-version-276459 has defined MAC address 52:54:00:79:9d:96 in network mk-old-k8s-version-276459
	I0731 18:07:59.712759   74203 main.go:141] libmachine: (old-k8s-version-276459) Found IP for machine: 192.168.39.26
	I0731 18:07:59.712783   74203 main.go:141] libmachine: (old-k8s-version-276459) DBG | domain old-k8s-version-276459 has current primary IP address 192.168.39.26 and MAC address 52:54:00:79:9d:96 in network mk-old-k8s-version-276459
	I0731 18:07:59.712793   74203 main.go:141] libmachine: (old-k8s-version-276459) Reserving static IP address...
	I0731 18:07:59.713268   74203 main.go:141] libmachine: (old-k8s-version-276459) DBG | found host DHCP lease matching {name: "old-k8s-version-276459", mac: "52:54:00:79:9d:96", ip: "192.168.39.26"} in network mk-old-k8s-version-276459: {Iface:virbr3 ExpiryTime:2024-07-31 19:07:52 +0000 UTC Type:0 Mac:52:54:00:79:9d:96 Iaid: IPaddr:192.168.39.26 Prefix:24 Hostname:old-k8s-version-276459 Clientid:01:52:54:00:79:9d:96}
	I0731 18:07:59.713297   74203 main.go:141] libmachine: (old-k8s-version-276459) DBG | skip adding static IP to network mk-old-k8s-version-276459 - found existing host DHCP lease matching {name: "old-k8s-version-276459", mac: "52:54:00:79:9d:96", ip: "192.168.39.26"}
	I0731 18:07:59.713320   74203 main.go:141] libmachine: (old-k8s-version-276459) Reserved static IP address: 192.168.39.26
	I0731 18:07:59.713343   74203 main.go:141] libmachine: (old-k8s-version-276459) Waiting for SSH to be available...
	I0731 18:07:59.713355   74203 main.go:141] libmachine: (old-k8s-version-276459) DBG | Getting to WaitForSSH function...
	I0731 18:07:59.716068   74203 main.go:141] libmachine: (old-k8s-version-276459) DBG | domain old-k8s-version-276459 has defined MAC address 52:54:00:79:9d:96 in network mk-old-k8s-version-276459
	I0731 18:07:59.716456   74203 main.go:141] libmachine: (old-k8s-version-276459) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:79:9d:96", ip: ""} in network mk-old-k8s-version-276459: {Iface:virbr3 ExpiryTime:2024-07-31 19:07:52 +0000 UTC Type:0 Mac:52:54:00:79:9d:96 Iaid: IPaddr:192.168.39.26 Prefix:24 Hostname:old-k8s-version-276459 Clientid:01:52:54:00:79:9d:96}
	I0731 18:07:59.716490   74203 main.go:141] libmachine: (old-k8s-version-276459) DBG | domain old-k8s-version-276459 has defined IP address 192.168.39.26 and MAC address 52:54:00:79:9d:96 in network mk-old-k8s-version-276459
	I0731 18:07:59.716701   74203 main.go:141] libmachine: (old-k8s-version-276459) DBG | Using SSH client type: external
	I0731 18:07:59.716725   74203 main.go:141] libmachine: (old-k8s-version-276459) DBG | Using SSH private key: /home/jenkins/minikube-integration/19349-8084/.minikube/machines/old-k8s-version-276459/id_rsa (-rw-------)
	I0731 18:07:59.716762   74203 main.go:141] libmachine: (old-k8s-version-276459) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.26 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19349-8084/.minikube/machines/old-k8s-version-276459/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0731 18:07:59.716776   74203 main.go:141] libmachine: (old-k8s-version-276459) DBG | About to run SSH command:
	I0731 18:07:59.716792   74203 main.go:141] libmachine: (old-k8s-version-276459) DBG | exit 0
	I0731 18:07:59.847720   74203 main.go:141] libmachine: (old-k8s-version-276459) DBG | SSH cmd err, output: <nil>: 
	I0731 18:07:59.848089   74203 main.go:141] libmachine: (old-k8s-version-276459) Calling .GetConfigRaw
	I0731 18:07:59.848847   74203 main.go:141] libmachine: (old-k8s-version-276459) Calling .GetIP
	I0731 18:07:59.851632   74203 main.go:141] libmachine: (old-k8s-version-276459) DBG | domain old-k8s-version-276459 has defined MAC address 52:54:00:79:9d:96 in network mk-old-k8s-version-276459
	I0731 18:07:59.852004   74203 main.go:141] libmachine: (old-k8s-version-276459) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:79:9d:96", ip: ""} in network mk-old-k8s-version-276459: {Iface:virbr3 ExpiryTime:2024-07-31 19:07:52 +0000 UTC Type:0 Mac:52:54:00:79:9d:96 Iaid: IPaddr:192.168.39.26 Prefix:24 Hostname:old-k8s-version-276459 Clientid:01:52:54:00:79:9d:96}
	I0731 18:07:59.852030   74203 main.go:141] libmachine: (old-k8s-version-276459) DBG | domain old-k8s-version-276459 has defined IP address 192.168.39.26 and MAC address 52:54:00:79:9d:96 in network mk-old-k8s-version-276459
	I0731 18:07:59.852321   74203 profile.go:143] Saving config to /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/old-k8s-version-276459/config.json ...
	I0731 18:07:59.852505   74203 machine.go:94] provisionDockerMachine start ...
	I0731 18:07:59.852524   74203 main.go:141] libmachine: (old-k8s-version-276459) Calling .DriverName
	I0731 18:07:59.852752   74203 main.go:141] libmachine: (old-k8s-version-276459) Calling .GetSSHHostname
	I0731 18:07:59.855198   74203 main.go:141] libmachine: (old-k8s-version-276459) DBG | domain old-k8s-version-276459 has defined MAC address 52:54:00:79:9d:96 in network mk-old-k8s-version-276459
	I0731 18:07:59.855596   74203 main.go:141] libmachine: (old-k8s-version-276459) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:79:9d:96", ip: ""} in network mk-old-k8s-version-276459: {Iface:virbr3 ExpiryTime:2024-07-31 19:07:52 +0000 UTC Type:0 Mac:52:54:00:79:9d:96 Iaid: IPaddr:192.168.39.26 Prefix:24 Hostname:old-k8s-version-276459 Clientid:01:52:54:00:79:9d:96}
	I0731 18:07:59.855626   74203 main.go:141] libmachine: (old-k8s-version-276459) DBG | domain old-k8s-version-276459 has defined IP address 192.168.39.26 and MAC address 52:54:00:79:9d:96 in network mk-old-k8s-version-276459
	I0731 18:07:59.855756   74203 main.go:141] libmachine: (old-k8s-version-276459) Calling .GetSSHPort
	I0731 18:07:59.855920   74203 main.go:141] libmachine: (old-k8s-version-276459) Calling .GetSSHKeyPath
	I0731 18:07:59.856071   74203 main.go:141] libmachine: (old-k8s-version-276459) Calling .GetSSHKeyPath
	I0731 18:07:59.856208   74203 main.go:141] libmachine: (old-k8s-version-276459) Calling .GetSSHUsername
	I0731 18:07:59.856372   74203 main.go:141] libmachine: Using SSH client type: native
	I0731 18:07:59.856601   74203 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.26 22 <nil> <nil>}
	I0731 18:07:59.856614   74203 main.go:141] libmachine: About to run SSH command:
	hostname
	I0731 18:07:59.963492   74203 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0731 18:07:59.963524   74203 main.go:141] libmachine: (old-k8s-version-276459) Calling .GetMachineName
	I0731 18:07:59.963762   74203 buildroot.go:166] provisioning hostname "old-k8s-version-276459"
	I0731 18:07:59.963794   74203 main.go:141] libmachine: (old-k8s-version-276459) Calling .GetMachineName
	I0731 18:07:59.963992   74203 main.go:141] libmachine: (old-k8s-version-276459) Calling .GetSSHHostname
	I0731 18:07:59.967261   74203 main.go:141] libmachine: (old-k8s-version-276459) DBG | domain old-k8s-version-276459 has defined MAC address 52:54:00:79:9d:96 in network mk-old-k8s-version-276459
	I0731 18:07:59.967725   74203 main.go:141] libmachine: (old-k8s-version-276459) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:79:9d:96", ip: ""} in network mk-old-k8s-version-276459: {Iface:virbr3 ExpiryTime:2024-07-31 19:07:52 +0000 UTC Type:0 Mac:52:54:00:79:9d:96 Iaid: IPaddr:192.168.39.26 Prefix:24 Hostname:old-k8s-version-276459 Clientid:01:52:54:00:79:9d:96}
	I0731 18:07:59.967762   74203 main.go:141] libmachine: (old-k8s-version-276459) DBG | domain old-k8s-version-276459 has defined IP address 192.168.39.26 and MAC address 52:54:00:79:9d:96 in network mk-old-k8s-version-276459
	I0731 18:07:59.967938   74203 main.go:141] libmachine: (old-k8s-version-276459) Calling .GetSSHPort
	I0731 18:07:59.968131   74203 main.go:141] libmachine: (old-k8s-version-276459) Calling .GetSSHKeyPath
	I0731 18:07:59.968316   74203 main.go:141] libmachine: (old-k8s-version-276459) Calling .GetSSHKeyPath
	I0731 18:07:59.968487   74203 main.go:141] libmachine: (old-k8s-version-276459) Calling .GetSSHUsername
	I0731 18:07:59.968687   74203 main.go:141] libmachine: Using SSH client type: native
	I0731 18:07:59.968872   74203 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.26 22 <nil> <nil>}
	I0731 18:07:59.968890   74203 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-276459 && echo "old-k8s-version-276459" | sudo tee /etc/hostname
	I0731 18:08:00.084360   74203 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-276459
	
	I0731 18:08:00.084390   74203 main.go:141] libmachine: (old-k8s-version-276459) Calling .GetSSHHostname
	I0731 18:08:00.087433   74203 main.go:141] libmachine: (old-k8s-version-276459) DBG | domain old-k8s-version-276459 has defined MAC address 52:54:00:79:9d:96 in network mk-old-k8s-version-276459
	I0731 18:08:00.087833   74203 main.go:141] libmachine: (old-k8s-version-276459) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:79:9d:96", ip: ""} in network mk-old-k8s-version-276459: {Iface:virbr3 ExpiryTime:2024-07-31 19:07:52 +0000 UTC Type:0 Mac:52:54:00:79:9d:96 Iaid: IPaddr:192.168.39.26 Prefix:24 Hostname:old-k8s-version-276459 Clientid:01:52:54:00:79:9d:96}
	I0731 18:08:00.087862   74203 main.go:141] libmachine: (old-k8s-version-276459) DBG | domain old-k8s-version-276459 has defined IP address 192.168.39.26 and MAC address 52:54:00:79:9d:96 in network mk-old-k8s-version-276459
	I0731 18:08:00.088016   74203 main.go:141] libmachine: (old-k8s-version-276459) Calling .GetSSHPort
	I0731 18:08:00.088187   74203 main.go:141] libmachine: (old-k8s-version-276459) Calling .GetSSHKeyPath
	I0731 18:08:00.088371   74203 main.go:141] libmachine: (old-k8s-version-276459) Calling .GetSSHKeyPath
	I0731 18:08:00.088521   74203 main.go:141] libmachine: (old-k8s-version-276459) Calling .GetSSHUsername
	I0731 18:08:00.088719   74203 main.go:141] libmachine: Using SSH client type: native
	I0731 18:08:00.088893   74203 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.26 22 <nil> <nil>}
	I0731 18:08:00.088915   74203 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-276459' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-276459/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-276459' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0731 18:08:00.200012   74203 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0731 18:08:00.200038   74203 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19349-8084/.minikube CaCertPath:/home/jenkins/minikube-integration/19349-8084/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19349-8084/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19349-8084/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19349-8084/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19349-8084/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19349-8084/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19349-8084/.minikube}
	I0731 18:08:00.200069   74203 buildroot.go:174] setting up certificates
	I0731 18:08:00.200081   74203 provision.go:84] configureAuth start
	I0731 18:08:00.200093   74203 main.go:141] libmachine: (old-k8s-version-276459) Calling .GetMachineName
	I0731 18:08:00.200360   74203 main.go:141] libmachine: (old-k8s-version-276459) Calling .GetIP
	I0731 18:08:00.203352   74203 main.go:141] libmachine: (old-k8s-version-276459) DBG | domain old-k8s-version-276459 has defined MAC address 52:54:00:79:9d:96 in network mk-old-k8s-version-276459
	I0731 18:08:00.203694   74203 main.go:141] libmachine: (old-k8s-version-276459) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:79:9d:96", ip: ""} in network mk-old-k8s-version-276459: {Iface:virbr3 ExpiryTime:2024-07-31 19:07:52 +0000 UTC Type:0 Mac:52:54:00:79:9d:96 Iaid: IPaddr:192.168.39.26 Prefix:24 Hostname:old-k8s-version-276459 Clientid:01:52:54:00:79:9d:96}
	I0731 18:08:00.203721   74203 main.go:141] libmachine: (old-k8s-version-276459) DBG | domain old-k8s-version-276459 has defined IP address 192.168.39.26 and MAC address 52:54:00:79:9d:96 in network mk-old-k8s-version-276459
	I0731 18:08:00.203951   74203 main.go:141] libmachine: (old-k8s-version-276459) Calling .GetSSHHostname
	I0731 18:08:00.206061   74203 main.go:141] libmachine: (old-k8s-version-276459) DBG | domain old-k8s-version-276459 has defined MAC address 52:54:00:79:9d:96 in network mk-old-k8s-version-276459
	I0731 18:08:00.206398   74203 main.go:141] libmachine: (old-k8s-version-276459) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:79:9d:96", ip: ""} in network mk-old-k8s-version-276459: {Iface:virbr3 ExpiryTime:2024-07-31 19:07:52 +0000 UTC Type:0 Mac:52:54:00:79:9d:96 Iaid: IPaddr:192.168.39.26 Prefix:24 Hostname:old-k8s-version-276459 Clientid:01:52:54:00:79:9d:96}
	I0731 18:08:00.206432   74203 main.go:141] libmachine: (old-k8s-version-276459) DBG | domain old-k8s-version-276459 has defined IP address 192.168.39.26 and MAC address 52:54:00:79:9d:96 in network mk-old-k8s-version-276459
	I0731 18:08:00.206510   74203 provision.go:143] copyHostCerts
	I0731 18:08:00.206580   74203 exec_runner.go:144] found /home/jenkins/minikube-integration/19349-8084/.minikube/ca.pem, removing ...
	I0731 18:08:00.206591   74203 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19349-8084/.minikube/ca.pem
	I0731 18:08:00.206654   74203 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19349-8084/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19349-8084/.minikube/ca.pem (1082 bytes)
	I0731 18:08:00.206759   74203 exec_runner.go:144] found /home/jenkins/minikube-integration/19349-8084/.minikube/cert.pem, removing ...
	I0731 18:08:00.206769   74203 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19349-8084/.minikube/cert.pem
	I0731 18:08:00.206799   74203 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19349-8084/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19349-8084/.minikube/cert.pem (1123 bytes)
	I0731 18:08:00.206876   74203 exec_runner.go:144] found /home/jenkins/minikube-integration/19349-8084/.minikube/key.pem, removing ...
	I0731 18:08:00.206885   74203 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19349-8084/.minikube/key.pem
	I0731 18:08:00.206913   74203 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19349-8084/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19349-8084/.minikube/key.pem (1675 bytes)
	I0731 18:08:00.207047   74203 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19349-8084/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19349-8084/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19349-8084/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-276459 san=[127.0.0.1 192.168.39.26 localhost minikube old-k8s-version-276459]
	I0731 18:08:00.279363   74203 provision.go:177] copyRemoteCerts
	I0731 18:08:00.279423   74203 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0731 18:08:00.279456   74203 main.go:141] libmachine: (old-k8s-version-276459) Calling .GetSSHHostname
	I0731 18:08:00.282234   74203 main.go:141] libmachine: (old-k8s-version-276459) DBG | domain old-k8s-version-276459 has defined MAC address 52:54:00:79:9d:96 in network mk-old-k8s-version-276459
	I0731 18:08:00.282601   74203 main.go:141] libmachine: (old-k8s-version-276459) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:79:9d:96", ip: ""} in network mk-old-k8s-version-276459: {Iface:virbr3 ExpiryTime:2024-07-31 19:07:52 +0000 UTC Type:0 Mac:52:54:00:79:9d:96 Iaid: IPaddr:192.168.39.26 Prefix:24 Hostname:old-k8s-version-276459 Clientid:01:52:54:00:79:9d:96}
	I0731 18:08:00.282630   74203 main.go:141] libmachine: (old-k8s-version-276459) DBG | domain old-k8s-version-276459 has defined IP address 192.168.39.26 and MAC address 52:54:00:79:9d:96 in network mk-old-k8s-version-276459
	I0731 18:08:00.282751   74203 main.go:141] libmachine: (old-k8s-version-276459) Calling .GetSSHPort
	I0731 18:08:00.283004   74203 main.go:141] libmachine: (old-k8s-version-276459) Calling .GetSSHKeyPath
	I0731 18:08:00.283178   74203 main.go:141] libmachine: (old-k8s-version-276459) Calling .GetSSHUsername
	I0731 18:08:00.283361   74203 sshutil.go:53] new ssh client: &{IP:192.168.39.26 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19349-8084/.minikube/machines/old-k8s-version-276459/id_rsa Username:docker}
	I0731 18:08:00.935990   73479 start.go:364] duration metric: took 51.517312901s to acquireMachinesLock for "no-preload-673754"
	I0731 18:08:00.936054   73479 start.go:96] Skipping create...Using existing machine configuration
	I0731 18:08:00.936066   73479 fix.go:54] fixHost starting: 
	I0731 18:08:00.936534   73479 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 18:08:00.936589   73479 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 18:08:00.954868   73479 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39013
	I0731 18:08:00.955405   73479 main.go:141] libmachine: () Calling .GetVersion
	I0731 18:08:00.955980   73479 main.go:141] libmachine: Using API Version  1
	I0731 18:08:00.956012   73479 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 18:08:00.956386   73479 main.go:141] libmachine: () Calling .GetMachineName
	I0731 18:08:00.956589   73479 main.go:141] libmachine: (no-preload-673754) Calling .DriverName
	I0731 18:08:00.956752   73479 main.go:141] libmachine: (no-preload-673754) Calling .GetState
	I0731 18:08:00.958461   73479 fix.go:112] recreateIfNeeded on no-preload-673754: state=Stopped err=<nil>
	I0731 18:08:00.958485   73479 main.go:141] libmachine: (no-preload-673754) Calling .DriverName
	W0731 18:08:00.958655   73479 fix.go:138] unexpected machine state, will restart: <nil>
	I0731 18:08:00.960117   73479 out.go:177] * Restarting existing kvm2 VM for "no-preload-673754" ...
	I0731 18:07:57.779258   73696 pod_ready.go:102] pod "metrics-server-569cc877fc-64hp4" in "kube-system" namespace has status "Ready":"False"
	I0731 18:07:59.780834   73696 pod_ready.go:102] pod "metrics-server-569cc877fc-64hp4" in "kube-system" namespace has status "Ready":"False"
	I0731 18:08:00.961340   73479 main.go:141] libmachine: (no-preload-673754) Calling .Start
	I0731 18:08:00.961543   73479 main.go:141] libmachine: (no-preload-673754) Ensuring networks are active...
	I0731 18:08:00.962332   73479 main.go:141] libmachine: (no-preload-673754) Ensuring network default is active
	I0731 18:08:00.962661   73479 main.go:141] libmachine: (no-preload-673754) Ensuring network mk-no-preload-673754 is active
	I0731 18:08:00.963165   73479 main.go:141] libmachine: (no-preload-673754) Getting domain xml...
	I0731 18:08:00.963982   73479 main.go:141] libmachine: (no-preload-673754) Creating domain...
	I0731 18:08:00.365254   74203 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19349-8084/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0731 18:08:00.389729   74203 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19349-8084/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0731 18:08:00.413143   74203 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19349-8084/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0731 18:08:00.436040   74203 provision.go:87] duration metric: took 235.932619ms to configureAuth
	I0731 18:08:00.436080   74203 buildroot.go:189] setting minikube options for container-runtime
	I0731 18:08:00.436288   74203 config.go:182] Loaded profile config "old-k8s-version-276459": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0731 18:08:00.436403   74203 main.go:141] libmachine: (old-k8s-version-276459) Calling .GetSSHHostname
	I0731 18:08:00.439184   74203 main.go:141] libmachine: (old-k8s-version-276459) DBG | domain old-k8s-version-276459 has defined MAC address 52:54:00:79:9d:96 in network mk-old-k8s-version-276459
	I0731 18:08:00.439543   74203 main.go:141] libmachine: (old-k8s-version-276459) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:79:9d:96", ip: ""} in network mk-old-k8s-version-276459: {Iface:virbr3 ExpiryTime:2024-07-31 19:07:52 +0000 UTC Type:0 Mac:52:54:00:79:9d:96 Iaid: IPaddr:192.168.39.26 Prefix:24 Hostname:old-k8s-version-276459 Clientid:01:52:54:00:79:9d:96}
	I0731 18:08:00.439575   74203 main.go:141] libmachine: (old-k8s-version-276459) DBG | domain old-k8s-version-276459 has defined IP address 192.168.39.26 and MAC address 52:54:00:79:9d:96 in network mk-old-k8s-version-276459
	I0731 18:08:00.439734   74203 main.go:141] libmachine: (old-k8s-version-276459) Calling .GetSSHPort
	I0731 18:08:00.439898   74203 main.go:141] libmachine: (old-k8s-version-276459) Calling .GetSSHKeyPath
	I0731 18:08:00.440093   74203 main.go:141] libmachine: (old-k8s-version-276459) Calling .GetSSHKeyPath
	I0731 18:08:00.440271   74203 main.go:141] libmachine: (old-k8s-version-276459) Calling .GetSSHUsername
	I0731 18:08:00.440450   74203 main.go:141] libmachine: Using SSH client type: native
	I0731 18:08:00.440661   74203 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.26 22 <nil> <nil>}
	I0731 18:08:00.440679   74203 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0731 18:08:00.707438   74203 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0731 18:08:00.707467   74203 machine.go:97] duration metric: took 854.948491ms to provisionDockerMachine
	I0731 18:08:00.707482   74203 start.go:293] postStartSetup for "old-k8s-version-276459" (driver="kvm2")
	I0731 18:08:00.707494   74203 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0731 18:08:00.707510   74203 main.go:141] libmachine: (old-k8s-version-276459) Calling .DriverName
	I0731 18:08:00.707811   74203 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0731 18:08:00.707837   74203 main.go:141] libmachine: (old-k8s-version-276459) Calling .GetSSHHostname
	I0731 18:08:00.710726   74203 main.go:141] libmachine: (old-k8s-version-276459) DBG | domain old-k8s-version-276459 has defined MAC address 52:54:00:79:9d:96 in network mk-old-k8s-version-276459
	I0731 18:08:00.711285   74203 main.go:141] libmachine: (old-k8s-version-276459) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:79:9d:96", ip: ""} in network mk-old-k8s-version-276459: {Iface:virbr3 ExpiryTime:2024-07-31 19:07:52 +0000 UTC Type:0 Mac:52:54:00:79:9d:96 Iaid: IPaddr:192.168.39.26 Prefix:24 Hostname:old-k8s-version-276459 Clientid:01:52:54:00:79:9d:96}
	I0731 18:08:00.711315   74203 main.go:141] libmachine: (old-k8s-version-276459) DBG | domain old-k8s-version-276459 has defined IP address 192.168.39.26 and MAC address 52:54:00:79:9d:96 in network mk-old-k8s-version-276459
	I0731 18:08:00.711458   74203 main.go:141] libmachine: (old-k8s-version-276459) Calling .GetSSHPort
	I0731 18:08:00.711703   74203 main.go:141] libmachine: (old-k8s-version-276459) Calling .GetSSHKeyPath
	I0731 18:08:00.711895   74203 main.go:141] libmachine: (old-k8s-version-276459) Calling .GetSSHUsername
	I0731 18:08:00.712049   74203 sshutil.go:53] new ssh client: &{IP:192.168.39.26 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19349-8084/.minikube/machines/old-k8s-version-276459/id_rsa Username:docker}
	I0731 18:08:00.793719   74203 ssh_runner.go:195] Run: cat /etc/os-release
	I0731 18:08:00.797858   74203 info.go:137] Remote host: Buildroot 2023.02.9
	I0731 18:08:00.797888   74203 filesync.go:126] Scanning /home/jenkins/minikube-integration/19349-8084/.minikube/addons for local assets ...
	I0731 18:08:00.797960   74203 filesync.go:126] Scanning /home/jenkins/minikube-integration/19349-8084/.minikube/files for local assets ...
	I0731 18:08:00.798038   74203 filesync.go:149] local asset: /home/jenkins/minikube-integration/19349-8084/.minikube/files/etc/ssl/certs/152592.pem -> 152592.pem in /etc/ssl/certs
	I0731 18:08:00.798130   74203 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0731 18:08:00.807013   74203 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19349-8084/.minikube/files/etc/ssl/certs/152592.pem --> /etc/ssl/certs/152592.pem (1708 bytes)
	I0731 18:08:00.829440   74203 start.go:296] duration metric: took 121.944271ms for postStartSetup
	I0731 18:08:00.829487   74203 fix.go:56] duration metric: took 19.277312964s for fixHost
	I0731 18:08:00.829518   74203 main.go:141] libmachine: (old-k8s-version-276459) Calling .GetSSHHostname
	I0731 18:08:00.832718   74203 main.go:141] libmachine: (old-k8s-version-276459) DBG | domain old-k8s-version-276459 has defined MAC address 52:54:00:79:9d:96 in network mk-old-k8s-version-276459
	I0731 18:08:00.833048   74203 main.go:141] libmachine: (old-k8s-version-276459) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:79:9d:96", ip: ""} in network mk-old-k8s-version-276459: {Iface:virbr3 ExpiryTime:2024-07-31 19:07:52 +0000 UTC Type:0 Mac:52:54:00:79:9d:96 Iaid: IPaddr:192.168.39.26 Prefix:24 Hostname:old-k8s-version-276459 Clientid:01:52:54:00:79:9d:96}
	I0731 18:08:00.833082   74203 main.go:141] libmachine: (old-k8s-version-276459) DBG | domain old-k8s-version-276459 has defined IP address 192.168.39.26 and MAC address 52:54:00:79:9d:96 in network mk-old-k8s-version-276459
	I0731 18:08:00.833317   74203 main.go:141] libmachine: (old-k8s-version-276459) Calling .GetSSHPort
	I0731 18:08:00.833533   74203 main.go:141] libmachine: (old-k8s-version-276459) Calling .GetSSHKeyPath
	I0731 18:08:00.833752   74203 main.go:141] libmachine: (old-k8s-version-276459) Calling .GetSSHKeyPath
	I0731 18:08:00.833887   74203 main.go:141] libmachine: (old-k8s-version-276459) Calling .GetSSHUsername
	I0731 18:08:00.834189   74203 main.go:141] libmachine: Using SSH client type: native
	I0731 18:08:00.834364   74203 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.26 22 <nil> <nil>}
	I0731 18:08:00.834377   74203 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0731 18:08:00.935834   74203 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722449280.899364873
	
	I0731 18:08:00.935853   74203 fix.go:216] guest clock: 1722449280.899364873
	I0731 18:08:00.935860   74203 fix.go:229] Guest: 2024-07-31 18:08:00.899364873 +0000 UTC Remote: 2024-07-31 18:08:00.829491013 +0000 UTC m=+245.518063325 (delta=69.87386ms)
	I0731 18:08:00.935894   74203 fix.go:200] guest clock delta is within tolerance: 69.87386ms
	I0731 18:08:00.935899   74203 start.go:83] releasing machines lock for "old-k8s-version-276459", held for 19.38376262s
	I0731 18:08:00.935937   74203 main.go:141] libmachine: (old-k8s-version-276459) Calling .DriverName
	I0731 18:08:00.936220   74203 main.go:141] libmachine: (old-k8s-version-276459) Calling .GetIP
	I0731 18:08:00.939282   74203 main.go:141] libmachine: (old-k8s-version-276459) DBG | domain old-k8s-version-276459 has defined MAC address 52:54:00:79:9d:96 in network mk-old-k8s-version-276459
	I0731 18:08:00.939691   74203 main.go:141] libmachine: (old-k8s-version-276459) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:79:9d:96", ip: ""} in network mk-old-k8s-version-276459: {Iface:virbr3 ExpiryTime:2024-07-31 19:07:52 +0000 UTC Type:0 Mac:52:54:00:79:9d:96 Iaid: IPaddr:192.168.39.26 Prefix:24 Hostname:old-k8s-version-276459 Clientid:01:52:54:00:79:9d:96}
	I0731 18:08:00.939722   74203 main.go:141] libmachine: (old-k8s-version-276459) DBG | domain old-k8s-version-276459 has defined IP address 192.168.39.26 and MAC address 52:54:00:79:9d:96 in network mk-old-k8s-version-276459
	I0731 18:08:00.939911   74203 main.go:141] libmachine: (old-k8s-version-276459) Calling .DriverName
	I0731 18:08:00.940506   74203 main.go:141] libmachine: (old-k8s-version-276459) Calling .DriverName
	I0731 18:08:00.940704   74203 main.go:141] libmachine: (old-k8s-version-276459) Calling .DriverName
	I0731 18:08:00.940790   74203 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0731 18:08:00.940831   74203 main.go:141] libmachine: (old-k8s-version-276459) Calling .GetSSHHostname
	I0731 18:08:00.940960   74203 ssh_runner.go:195] Run: cat /version.json
	I0731 18:08:00.941043   74203 main.go:141] libmachine: (old-k8s-version-276459) Calling .GetSSHHostname
	I0731 18:08:00.943883   74203 main.go:141] libmachine: (old-k8s-version-276459) DBG | domain old-k8s-version-276459 has defined MAC address 52:54:00:79:9d:96 in network mk-old-k8s-version-276459
	I0731 18:08:00.943909   74203 main.go:141] libmachine: (old-k8s-version-276459) DBG | domain old-k8s-version-276459 has defined MAC address 52:54:00:79:9d:96 in network mk-old-k8s-version-276459
	I0731 18:08:00.944361   74203 main.go:141] libmachine: (old-k8s-version-276459) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:79:9d:96", ip: ""} in network mk-old-k8s-version-276459: {Iface:virbr3 ExpiryTime:2024-07-31 19:07:52 +0000 UTC Type:0 Mac:52:54:00:79:9d:96 Iaid: IPaddr:192.168.39.26 Prefix:24 Hostname:old-k8s-version-276459 Clientid:01:52:54:00:79:9d:96}
	I0731 18:08:00.944405   74203 main.go:141] libmachine: (old-k8s-version-276459) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:79:9d:96", ip: ""} in network mk-old-k8s-version-276459: {Iface:virbr3 ExpiryTime:2024-07-31 19:07:52 +0000 UTC Type:0 Mac:52:54:00:79:9d:96 Iaid: IPaddr:192.168.39.26 Prefix:24 Hostname:old-k8s-version-276459 Clientid:01:52:54:00:79:9d:96}
	I0731 18:08:00.944429   74203 main.go:141] libmachine: (old-k8s-version-276459) DBG | domain old-k8s-version-276459 has defined IP address 192.168.39.26 and MAC address 52:54:00:79:9d:96 in network mk-old-k8s-version-276459
	I0731 18:08:00.944442   74203 main.go:141] libmachine: (old-k8s-version-276459) DBG | domain old-k8s-version-276459 has defined IP address 192.168.39.26 and MAC address 52:54:00:79:9d:96 in network mk-old-k8s-version-276459
	I0731 18:08:00.944542   74203 main.go:141] libmachine: (old-k8s-version-276459) Calling .GetSSHPort
	I0731 18:08:00.944639   74203 main.go:141] libmachine: (old-k8s-version-276459) Calling .GetSSHPort
	I0731 18:08:00.944766   74203 main.go:141] libmachine: (old-k8s-version-276459) Calling .GetSSHKeyPath
	I0731 18:08:00.944817   74203 main.go:141] libmachine: (old-k8s-version-276459) Calling .GetSSHKeyPath
	I0731 18:08:00.944899   74203 main.go:141] libmachine: (old-k8s-version-276459) Calling .GetSSHUsername
	I0731 18:08:00.944979   74203 main.go:141] libmachine: (old-k8s-version-276459) Calling .GetSSHUsername
	I0731 18:08:00.945039   74203 sshutil.go:53] new ssh client: &{IP:192.168.39.26 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19349-8084/.minikube/machines/old-k8s-version-276459/id_rsa Username:docker}
	I0731 18:08:00.945110   74203 sshutil.go:53] new ssh client: &{IP:192.168.39.26 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19349-8084/.minikube/machines/old-k8s-version-276459/id_rsa Username:docker}
	I0731 18:08:01.023818   74203 ssh_runner.go:195] Run: systemctl --version
	I0731 18:08:01.063390   74203 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0731 18:08:01.205084   74203 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0731 18:08:01.210972   74203 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0731 18:08:01.211049   74203 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0731 18:08:01.226156   74203 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0731 18:08:01.226180   74203 start.go:495] detecting cgroup driver to use...
	I0731 18:08:01.226257   74203 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0731 18:08:01.241506   74203 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0731 18:08:01.256615   74203 docker.go:217] disabling cri-docker service (if available) ...
	I0731 18:08:01.256671   74203 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0731 18:08:01.271515   74203 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0731 18:08:01.287213   74203 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0731 18:08:01.415827   74203 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0731 18:08:01.578122   74203 docker.go:233] disabling docker service ...
	I0731 18:08:01.578208   74203 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0731 18:08:01.596564   74203 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0731 18:08:01.611984   74203 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0731 18:08:01.748972   74203 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0731 18:08:01.896911   74203 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0731 18:08:01.912921   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0731 18:08:01.931671   74203 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0731 18:08:01.931749   74203 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 18:08:01.943737   74203 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0731 18:08:01.943798   74203 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 18:08:01.954571   74203 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 18:08:01.964733   74203 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 18:08:01.976087   74203 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0731 18:08:01.987193   74203 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0731 18:08:01.996620   74203 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0731 18:08:01.996670   74203 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0731 18:08:02.011046   74203 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0731 18:08:02.022199   74203 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0731 18:08:02.147855   74203 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0731 18:08:02.309868   74203 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0731 18:08:02.309940   74203 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0731 18:08:02.314966   74203 start.go:563] Will wait 60s for crictl version
	I0731 18:08:02.315031   74203 ssh_runner.go:195] Run: which crictl
	I0731 18:08:02.318685   74203 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0731 18:08:02.359361   74203 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0731 18:08:02.359460   74203 ssh_runner.go:195] Run: crio --version
	I0731 18:08:02.387053   74203 ssh_runner.go:195] Run: crio --version
	I0731 18:08:02.417054   74203 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0731 18:08:01.265323   73800 pod_ready.go:92] pod "kube-scheduler-embed-certs-436067" in "kube-system" namespace has status "Ready":"True"
	I0731 18:08:01.265363   73800 pod_ready.go:81] duration metric: took 6.007715949s for pod "kube-scheduler-embed-certs-436067" in "kube-system" namespace to be "Ready" ...
	I0731 18:08:01.265376   73800 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-569cc877fc-fzxrw" in "kube-system" namespace to be "Ready" ...
	I0731 18:08:03.271693   73800 pod_ready.go:102] pod "metrics-server-569cc877fc-fzxrw" in "kube-system" namespace has status "Ready":"False"
	I0731 18:08:02.418272   74203 main.go:141] libmachine: (old-k8s-version-276459) Calling .GetIP
	I0731 18:08:02.421211   74203 main.go:141] libmachine: (old-k8s-version-276459) DBG | domain old-k8s-version-276459 has defined MAC address 52:54:00:79:9d:96 in network mk-old-k8s-version-276459
	I0731 18:08:02.421714   74203 main.go:141] libmachine: (old-k8s-version-276459) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:79:9d:96", ip: ""} in network mk-old-k8s-version-276459: {Iface:virbr3 ExpiryTime:2024-07-31 19:07:52 +0000 UTC Type:0 Mac:52:54:00:79:9d:96 Iaid: IPaddr:192.168.39.26 Prefix:24 Hostname:old-k8s-version-276459 Clientid:01:52:54:00:79:9d:96}
	I0731 18:08:02.421743   74203 main.go:141] libmachine: (old-k8s-version-276459) DBG | domain old-k8s-version-276459 has defined IP address 192.168.39.26 and MAC address 52:54:00:79:9d:96 in network mk-old-k8s-version-276459
	I0731 18:08:02.421949   74203 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0731 18:08:02.425878   74203 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0731 18:08:02.438082   74203 kubeadm.go:883] updating cluster {Name:old-k8s-version-276459 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-276459 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.26 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0731 18:08:02.438222   74203 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0731 18:08:02.438293   74203 ssh_runner.go:195] Run: sudo crictl images --output json
	I0731 18:08:02.484113   74203 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0731 18:08:02.484189   74203 ssh_runner.go:195] Run: which lz4
	I0731 18:08:02.488365   74203 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0731 18:08:02.492321   74203 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0731 18:08:02.492352   74203 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19349-8084/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0731 18:08:03.946187   74203 crio.go:462] duration metric: took 1.457852426s to copy over tarball
	I0731 18:08:03.946261   74203 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0731 18:08:01.781606   73696 pod_ready.go:102] pod "metrics-server-569cc877fc-64hp4" in "kube-system" namespace has status "Ready":"False"
	I0731 18:08:03.781786   73696 pod_ready.go:102] pod "metrics-server-569cc877fc-64hp4" in "kube-system" namespace has status "Ready":"False"
	I0731 18:08:02.287159   73479 main.go:141] libmachine: (no-preload-673754) Waiting to get IP...
	I0731 18:08:02.288338   73479 main.go:141] libmachine: (no-preload-673754) DBG | domain no-preload-673754 has defined MAC address 52:54:00:5a:ec:78 in network mk-no-preload-673754
	I0731 18:08:02.288812   73479 main.go:141] libmachine: (no-preload-673754) DBG | unable to find current IP address of domain no-preload-673754 in network mk-no-preload-673754
	I0731 18:08:02.288879   73479 main.go:141] libmachine: (no-preload-673754) DBG | I0731 18:08:02.288799   75356 retry.go:31] will retry after 229.074083ms: waiting for machine to come up
	I0731 18:08:02.519266   73479 main.go:141] libmachine: (no-preload-673754) DBG | domain no-preload-673754 has defined MAC address 52:54:00:5a:ec:78 in network mk-no-preload-673754
	I0731 18:08:02.519697   73479 main.go:141] libmachine: (no-preload-673754) DBG | unable to find current IP address of domain no-preload-673754 in network mk-no-preload-673754
	I0731 18:08:02.519720   73479 main.go:141] libmachine: (no-preload-673754) DBG | I0731 18:08:02.519663   75356 retry.go:31] will retry after 328.345922ms: waiting for machine to come up
	I0731 18:08:02.849290   73479 main.go:141] libmachine: (no-preload-673754) DBG | domain no-preload-673754 has defined MAC address 52:54:00:5a:ec:78 in network mk-no-preload-673754
	I0731 18:08:02.849839   73479 main.go:141] libmachine: (no-preload-673754) DBG | unable to find current IP address of domain no-preload-673754 in network mk-no-preload-673754
	I0731 18:08:02.849871   73479 main.go:141] libmachine: (no-preload-673754) DBG | I0731 18:08:02.849787   75356 retry.go:31] will retry after 339.030371ms: waiting for machine to come up
	I0731 18:08:03.190065   73479 main.go:141] libmachine: (no-preload-673754) DBG | domain no-preload-673754 has defined MAC address 52:54:00:5a:ec:78 in network mk-no-preload-673754
	I0731 18:08:03.190587   73479 main.go:141] libmachine: (no-preload-673754) DBG | unable to find current IP address of domain no-preload-673754 in network mk-no-preload-673754
	I0731 18:08:03.190620   73479 main.go:141] libmachine: (no-preload-673754) DBG | I0731 18:08:03.190539   75356 retry.go:31] will retry after 514.955663ms: waiting for machine to come up
	I0731 18:08:03.707808   73479 main.go:141] libmachine: (no-preload-673754) DBG | domain no-preload-673754 has defined MAC address 52:54:00:5a:ec:78 in network mk-no-preload-673754
	I0731 18:08:03.708382   73479 main.go:141] libmachine: (no-preload-673754) DBG | unable to find current IP address of domain no-preload-673754 in network mk-no-preload-673754
	I0731 18:08:03.708418   73479 main.go:141] libmachine: (no-preload-673754) DBG | I0731 18:08:03.708349   75356 retry.go:31] will retry after 543.558992ms: waiting for machine to come up
	I0731 18:08:04.253224   73479 main.go:141] libmachine: (no-preload-673754) DBG | domain no-preload-673754 has defined MAC address 52:54:00:5a:ec:78 in network mk-no-preload-673754
	I0731 18:08:04.253760   73479 main.go:141] libmachine: (no-preload-673754) DBG | unable to find current IP address of domain no-preload-673754 in network mk-no-preload-673754
	I0731 18:08:04.253781   73479 main.go:141] libmachine: (no-preload-673754) DBG | I0731 18:08:04.253708   75356 retry.go:31] will retry after 925.348689ms: waiting for machine to come up
	I0731 18:08:05.180439   73479 main.go:141] libmachine: (no-preload-673754) DBG | domain no-preload-673754 has defined MAC address 52:54:00:5a:ec:78 in network mk-no-preload-673754
	I0731 18:08:05.180833   73479 main.go:141] libmachine: (no-preload-673754) DBG | unable to find current IP address of domain no-preload-673754 in network mk-no-preload-673754
	I0731 18:08:05.180857   73479 main.go:141] libmachine: (no-preload-673754) DBG | I0731 18:08:05.180786   75356 retry.go:31] will retry after 1.014666798s: waiting for machine to come up
	I0731 18:08:06.196879   73479 main.go:141] libmachine: (no-preload-673754) DBG | domain no-preload-673754 has defined MAC address 52:54:00:5a:ec:78 in network mk-no-preload-673754
	I0731 18:08:06.197321   73479 main.go:141] libmachine: (no-preload-673754) DBG | unable to find current IP address of domain no-preload-673754 in network mk-no-preload-673754
	I0731 18:08:06.197355   73479 main.go:141] libmachine: (no-preload-673754) DBG | I0731 18:08:06.197258   75356 retry.go:31] will retry after 1.163649074s: waiting for machine to come up
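	The "will retry after ..." lines above come from a poll-with-backoff loop that waits for libvirt to hand the VM a DHCP lease. Below is a minimal Go sketch of that pattern; it is illustrative only, and the names (waitFor, lookup) are hypothetical rather than minikube's actual retry helpers.

	package main

	import (
		"errors"
		"fmt"
		"time"
	)

	// waitFor polls lookup with a growing delay until it succeeds or the
	// timeout elapses, mirroring the "will retry after ..." messages above.
	func waitFor(lookup func() (string, error), timeout time.Duration) (string, error) {
		deadline := time.Now().Add(timeout)
		delay := 200 * time.Millisecond
		for time.Now().Before(deadline) {
			if v, err := lookup(); err == nil {
				return v, nil
			}
			fmt.Printf("will retry after %v: waiting for machine to come up\n", delay)
			time.Sleep(delay)
			delay += delay / 2 // stretch the wait a little on each attempt
		}
		return "", errors.New("timed out waiting for machine to come up")
	}

	func main() {
		attempts := 0
		ip, err := waitFor(func() (string, error) {
			attempts++
			if attempts < 3 {
				return "", errors.New("no DHCP lease yet") // simulated failures
			}
			return "192.168.61.126", nil
		}, 30*time.Second)
		fmt.Println(ip, err)
	}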
	I0731 18:08:05.278001   73800 pod_ready.go:102] pod "metrics-server-569cc877fc-fzxrw" in "kube-system" namespace has status "Ready":"False"
	I0731 18:08:07.771870   73800 pod_ready.go:102] pod "metrics-server-569cc877fc-fzxrw" in "kube-system" namespace has status "Ready":"False"
	I0731 18:08:06.945760   74203 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.99946679s)
	I0731 18:08:06.945790   74203 crio.go:469] duration metric: took 2.999576832s to extract the tarball
	I0731 18:08:06.945800   74203 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0731 18:08:06.989081   74203 ssh_runner.go:195] Run: sudo crictl images --output json
	I0731 18:08:07.024521   74203 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0731 18:08:07.024545   74203 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0731 18:08:07.024615   74203 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0731 18:08:07.024645   74203 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0731 18:08:07.024695   74203 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0731 18:08:07.024729   74203 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0731 18:08:07.024718   74203 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0731 18:08:07.024780   74203 image.go:134] retrieving image: registry.k8s.io/pause:3.2
	I0731 18:08:07.024822   74203 image.go:134] retrieving image: registry.k8s.io/coredns:1.7.0
	I0731 18:08:07.024716   74203 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0731 18:08:07.026228   74203 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0731 18:08:07.026237   74203 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0731 18:08:07.026242   74203 image.go:177] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0731 18:08:07.026263   74203 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0731 18:08:07.026231   74203 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0731 18:08:07.026231   74203 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0731 18:08:07.026863   74203 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0731 18:08:07.027091   74203 image.go:177] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0731 18:08:07.282735   74203 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0731 18:08:07.284464   74203 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0731 18:08:07.287001   74203 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0731 18:08:07.305873   74203 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0731 18:08:07.307144   74203 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0731 18:08:07.311401   74203 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0731 18:08:07.318119   74203 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0731 18:08:07.366929   74203 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0731 18:08:07.366979   74203 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0731 18:08:07.367041   74203 ssh_runner.go:195] Run: which crictl
	I0731 18:08:07.393481   74203 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0731 18:08:07.393534   74203 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0731 18:08:07.393594   74203 ssh_runner.go:195] Run: which crictl
	I0731 18:08:07.441987   74203 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0731 18:08:07.442036   74203 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0731 18:08:07.442083   74203 ssh_runner.go:195] Run: which crictl
	I0731 18:08:07.449033   74203 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0731 18:08:07.449085   74203 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0731 18:08:07.449137   74203 ssh_runner.go:195] Run: which crictl
	I0731 18:08:07.465248   74203 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0731 18:08:07.465291   74203 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0731 18:08:07.465341   74203 ssh_runner.go:195] Run: which crictl
	I0731 18:08:07.476013   74203 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0731 18:08:07.476053   74203 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0731 18:08:07.476074   74203 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0731 18:08:07.476090   74203 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0731 18:08:07.476111   74203 ssh_runner.go:195] Run: which crictl
	I0731 18:08:07.476129   74203 ssh_runner.go:195] Run: which crictl
	I0731 18:08:07.476146   74203 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0731 18:08:07.476111   74203 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0731 18:08:07.476196   74203 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0731 18:08:07.476220   74203 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0731 18:08:07.476273   74203 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0731 18:08:07.592532   74203 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0731 18:08:07.592677   74203 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19349-8084/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0731 18:08:07.592709   74203 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19349-8084/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0731 18:08:07.592797   74203 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0731 18:08:07.637254   74203 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19349-8084/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0731 18:08:07.637276   74203 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19349-8084/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0731 18:08:07.637288   74203 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19349-8084/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0731 18:08:07.637292   74203 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19349-8084/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0731 18:08:07.640419   74203 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19349-8084/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0731 18:08:07.860814   74203 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0731 18:08:08.002115   74203 cache_images.go:92] duration metric: took 977.553376ms to LoadCachedImages
	W0731 18:08:08.002248   74203 out.go:239] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19349-8084/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2: no such file or directory
	I0731 18:08:08.002267   74203 kubeadm.go:934] updating node { 192.168.39.26 8443 v1.20.0 crio true true} ...
	I0731 18:08:08.002404   74203 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-276459 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.39.26
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-276459 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0731 18:08:08.002500   74203 ssh_runner.go:195] Run: crio config
	I0731 18:08:08.059237   74203 cni.go:84] Creating CNI manager for ""
	I0731 18:08:08.059264   74203 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0731 18:08:08.059281   74203 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0731 18:08:08.059313   74203 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.26 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-276459 NodeName:old-k8s-version-276459 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.26"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.26 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0731 18:08:08.059503   74203 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.26
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-276459"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.26
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.26"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0731 18:08:08.059575   74203 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0731 18:08:08.070299   74203 binaries.go:44] Found k8s binaries, skipping transfer
	I0731 18:08:08.070388   74203 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0731 18:08:08.082083   74203 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (429 bytes)
	I0731 18:08:08.101728   74203 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0731 18:08:08.120721   74203 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2120 bytes)
	I0731 18:08:08.137997   74203 ssh_runner.go:195] Run: grep 192.168.39.26	control-plane.minikube.internal$ /etc/hosts
	I0731 18:08:08.141797   74203 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.26	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0731 18:08:08.156861   74203 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0731 18:08:08.287700   74203 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0731 18:08:08.307598   74203 certs.go:68] Setting up /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/old-k8s-version-276459 for IP: 192.168.39.26
	I0731 18:08:08.307623   74203 certs.go:194] generating shared ca certs ...
	I0731 18:08:08.307644   74203 certs.go:226] acquiring lock for ca certs: {Name:mk49eed439085f9c30746706460ce213570d1997 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 18:08:08.307811   74203 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19349-8084/.minikube/ca.key
	I0731 18:08:08.307855   74203 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19349-8084/.minikube/proxy-client-ca.key
	I0731 18:08:08.307868   74203 certs.go:256] generating profile certs ...
	I0731 18:08:08.307987   74203 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/old-k8s-version-276459/client.key
	I0731 18:08:08.308062   74203 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/old-k8s-version-276459/apiserver.key.7c620cac
	I0731 18:08:08.308123   74203 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/old-k8s-version-276459/proxy-client.key
	I0731 18:08:08.308283   74203 certs.go:484] found cert: /home/jenkins/minikube-integration/19349-8084/.minikube/certs/15259.pem (1338 bytes)
	W0731 18:08:08.308315   74203 certs.go:480] ignoring /home/jenkins/minikube-integration/19349-8084/.minikube/certs/15259_empty.pem, impossibly tiny 0 bytes
	I0731 18:08:08.308324   74203 certs.go:484] found cert: /home/jenkins/minikube-integration/19349-8084/.minikube/certs/ca-key.pem (1675 bytes)
	I0731 18:08:08.308362   74203 certs.go:484] found cert: /home/jenkins/minikube-integration/19349-8084/.minikube/certs/ca.pem (1082 bytes)
	I0731 18:08:08.308382   74203 certs.go:484] found cert: /home/jenkins/minikube-integration/19349-8084/.minikube/certs/cert.pem (1123 bytes)
	I0731 18:08:08.308402   74203 certs.go:484] found cert: /home/jenkins/minikube-integration/19349-8084/.minikube/certs/key.pem (1675 bytes)
	I0731 18:08:08.308438   74203 certs.go:484] found cert: /home/jenkins/minikube-integration/19349-8084/.minikube/files/etc/ssl/certs/152592.pem (1708 bytes)
	I0731 18:08:08.309095   74203 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19349-8084/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0731 18:08:08.355508   74203 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19349-8084/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0731 18:08:08.391999   74203 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19349-8084/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0731 18:08:08.427937   74203 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19349-8084/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0731 18:08:08.456268   74203 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/old-k8s-version-276459/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0731 18:08:08.486991   74203 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/old-k8s-version-276459/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0731 18:08:08.519564   74203 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/old-k8s-version-276459/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0731 18:08:08.557029   74203 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/old-k8s-version-276459/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0731 18:08:08.583971   74203 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19349-8084/.minikube/files/etc/ssl/certs/152592.pem --> /usr/share/ca-certificates/152592.pem (1708 bytes)
	I0731 18:08:08.608505   74203 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19349-8084/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0731 18:08:08.630279   74203 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19349-8084/.minikube/certs/15259.pem --> /usr/share/ca-certificates/15259.pem (1338 bytes)
	I0731 18:08:08.655012   74203 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0731 18:08:08.671907   74203 ssh_runner.go:195] Run: openssl version
	I0731 18:08:08.677538   74203 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/152592.pem && ln -fs /usr/share/ca-certificates/152592.pem /etc/ssl/certs/152592.pem"
	I0731 18:08:08.687877   74203 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/152592.pem
	I0731 18:08:08.692201   74203 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 31 16:54 /usr/share/ca-certificates/152592.pem
	I0731 18:08:08.692258   74203 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/152592.pem
	I0731 18:08:08.698563   74203 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/152592.pem /etc/ssl/certs/3ec20f2e.0"
	I0731 18:08:08.708986   74203 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0731 18:08:08.719132   74203 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0731 18:08:08.723242   74203 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 31 16:42 /usr/share/ca-certificates/minikubeCA.pem
	I0731 18:08:08.723299   74203 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0731 18:08:08.729032   74203 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0731 18:08:08.739306   74203 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/15259.pem && ln -fs /usr/share/ca-certificates/15259.pem /etc/ssl/certs/15259.pem"
	I0731 18:08:08.749759   74203 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/15259.pem
	I0731 18:08:08.754167   74203 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 31 16:54 /usr/share/ca-certificates/15259.pem
	I0731 18:08:08.754228   74203 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/15259.pem
	I0731 18:08:08.759786   74203 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/15259.pem /etc/ssl/certs/51391683.0"
	I0731 18:08:08.770180   74203 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0731 18:08:08.775414   74203 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0731 18:08:08.781830   74203 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0731 18:08:08.787876   74203 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0731 18:08:08.793927   74203 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0731 18:08:08.800090   74203 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0731 18:08:08.806169   74203 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
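	The openssl x509 -checkend 86400 calls above verify that each certificate remains valid for at least another 24 hours before the cluster restart continues. Below is a short Go sketch of the same check; it is illustrative only, and the checkEnd helper and the example path are assumptions, not minikube's code.

	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"os"
		"time"
	)

	// checkEnd reports an error if the PEM certificate at path expires within
	// the given window, the same test `openssl x509 -checkend 86400` performs.
	func checkEnd(path string, window time.Duration) error {
		data, err := os.ReadFile(path)
		if err != nil {
			return err
		}
		block, _ := pem.Decode(data)
		if block == nil {
			return fmt.Errorf("%s: no PEM data found", path)
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			return err
		}
		if time.Now().Add(window).After(cert.NotAfter) {
			return fmt.Errorf("%s expires within %v (NotAfter %v)", path, window, cert.NotAfter)
		}
		return nil
	}

	func main() {
		// Example path only; the log above checks several certs under
		// /var/lib/minikube/certs on the guest VM.
		if err := checkEnd("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour); err != nil {
			fmt.Println("certificate check failed:", err)
			return
		}
		fmt.Println("certificate valid for at least another 24h")
	}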
	I0731 18:08:08.811895   74203 kubeadm.go:392] StartCluster: {Name:old-k8s-version-276459 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-276459 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.26 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0731 18:08:08.811983   74203 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0731 18:08:08.812040   74203 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0731 18:08:08.853889   74203 cri.go:89] found id: ""
	I0731 18:08:08.853989   74203 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0731 18:08:08.863817   74203 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0731 18:08:08.863837   74203 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0731 18:08:08.863887   74203 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0731 18:08:08.873411   74203 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0731 18:08:08.874616   74203 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-276459" does not appear in /home/jenkins/minikube-integration/19349-8084/kubeconfig
	I0731 18:08:08.875356   74203 kubeconfig.go:62] /home/jenkins/minikube-integration/19349-8084/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-276459" cluster setting kubeconfig missing "old-k8s-version-276459" context setting]
	I0731 18:08:08.876650   74203 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19349-8084/kubeconfig: {Name:mkf2340bc1ada3c683f132aca37228d7588e1fdb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 18:08:08.918433   74203 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0731 18:08:08.931013   74203 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.39.26
	I0731 18:08:08.931067   74203 kubeadm.go:1160] stopping kube-system containers ...
	I0731 18:08:08.931083   74203 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0731 18:08:08.931163   74203 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0731 18:08:08.964683   74203 cri.go:89] found id: ""
	I0731 18:08:08.964759   74203 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0731 18:08:08.980459   74203 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0731 18:08:08.989969   74203 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0731 18:08:08.989997   74203 kubeadm.go:157] found existing configuration files:
	
	I0731 18:08:08.990049   74203 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0731 18:08:08.999015   74203 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0731 18:08:08.999074   74203 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0731 18:08:09.008055   74203 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0731 18:08:09.016532   74203 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0731 18:08:09.016599   74203 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0731 18:08:09.025791   74203 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0731 18:08:09.034160   74203 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0731 18:08:09.034227   74203 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0731 18:08:09.043381   74203 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0731 18:08:09.053419   74203 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0731 18:08:09.053832   74203 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0731 18:08:09.064966   74203 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0731 18:08:09.073962   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0731 18:08:09.198503   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0731 18:08:10.048258   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0731 18:08:10.283812   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0731 18:08:06.285091   73696 pod_ready.go:102] pod "metrics-server-569cc877fc-64hp4" in "kube-system" namespace has status "Ready":"False"
	I0731 18:08:08.779998   73696 pod_ready.go:102] pod "metrics-server-569cc877fc-64hp4" in "kube-system" namespace has status "Ready":"False"
	I0731 18:08:10.780198   73696 pod_ready.go:102] pod "metrics-server-569cc877fc-64hp4" in "kube-system" namespace has status "Ready":"False"
	I0731 18:08:07.362756   73479 main.go:141] libmachine: (no-preload-673754) DBG | domain no-preload-673754 has defined MAC address 52:54:00:5a:ec:78 in network mk-no-preload-673754
	I0731 18:08:07.363299   73479 main.go:141] libmachine: (no-preload-673754) DBG | unable to find current IP address of domain no-preload-673754 in network mk-no-preload-673754
	I0731 18:08:07.363328   73479 main.go:141] libmachine: (no-preload-673754) DBG | I0731 18:08:07.363231   75356 retry.go:31] will retry after 1.508296616s: waiting for machine to come up
	I0731 18:08:08.873528   73479 main.go:141] libmachine: (no-preload-673754) DBG | domain no-preload-673754 has defined MAC address 52:54:00:5a:ec:78 in network mk-no-preload-673754
	I0731 18:08:08.874013   73479 main.go:141] libmachine: (no-preload-673754) DBG | unable to find current IP address of domain no-preload-673754 in network mk-no-preload-673754
	I0731 18:08:08.874051   73479 main.go:141] libmachine: (no-preload-673754) DBG | I0731 18:08:08.873971   75356 retry.go:31] will retry after 2.281343566s: waiting for machine to come up
	I0731 18:08:11.157083   73479 main.go:141] libmachine: (no-preload-673754) DBG | domain no-preload-673754 has defined MAC address 52:54:00:5a:ec:78 in network mk-no-preload-673754
	I0731 18:08:11.157578   73479 main.go:141] libmachine: (no-preload-673754) DBG | unable to find current IP address of domain no-preload-673754 in network mk-no-preload-673754
	I0731 18:08:11.157609   73479 main.go:141] libmachine: (no-preload-673754) DBG | I0731 18:08:11.157537   75356 retry.go:31] will retry after 2.49049752s: waiting for machine to come up
	I0731 18:08:09.802010   73800 pod_ready.go:102] pod "metrics-server-569cc877fc-fzxrw" in "kube-system" namespace has status "Ready":"False"
	I0731 18:08:12.271900   73800 pod_ready.go:102] pod "metrics-server-569cc877fc-fzxrw" in "kube-system" namespace has status "Ready":"False"
	I0731 18:08:10.390012   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0731 18:08:10.477969   74203 api_server.go:52] waiting for apiserver process to appear ...
	I0731 18:08:10.478093   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:08:10.978427   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:08:11.478715   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:08:11.978685   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:08:12.478211   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:08:12.978218   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:08:13.478493   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:08:13.978778   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:08:14.478489   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:08:14.978983   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:08:13.278943   73696 pod_ready.go:102] pod "metrics-server-569cc877fc-64hp4" in "kube-system" namespace has status "Ready":"False"
	I0731 18:08:15.778760   73696 pod_ready.go:102] pod "metrics-server-569cc877fc-64hp4" in "kube-system" namespace has status "Ready":"False"
	I0731 18:08:13.650131   73479 main.go:141] libmachine: (no-preload-673754) DBG | domain no-preload-673754 has defined MAC address 52:54:00:5a:ec:78 in network mk-no-preload-673754
	I0731 18:08:13.650459   73479 main.go:141] libmachine: (no-preload-673754) DBG | unable to find current IP address of domain no-preload-673754 in network mk-no-preload-673754
	I0731 18:08:13.650480   73479 main.go:141] libmachine: (no-preload-673754) DBG | I0731 18:08:13.650428   75356 retry.go:31] will retry after 3.437877467s: waiting for machine to come up
	I0731 18:08:14.771879   73800 pod_ready.go:102] pod "metrics-server-569cc877fc-fzxrw" in "kube-system" namespace has status "Ready":"False"
	I0731 18:08:17.272673   73800 pod_ready.go:102] pod "metrics-server-569cc877fc-fzxrw" in "kube-system" namespace has status "Ready":"False"
	I0731 18:08:15.478444   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:08:15.978399   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:08:16.478641   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:08:16.979036   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:08:17.479053   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:08:17.978819   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:08:18.478280   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:08:18.978448   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:08:19.479056   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:08:19.978969   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:08:18.279604   73696 pod_ready.go:102] pod "metrics-server-569cc877fc-64hp4" in "kube-system" namespace has status "Ready":"False"
	I0731 18:08:20.778532   73696 pod_ready.go:102] pod "metrics-server-569cc877fc-64hp4" in "kube-system" namespace has status "Ready":"False"
	I0731 18:08:17.089986   73479 main.go:141] libmachine: (no-preload-673754) DBG | domain no-preload-673754 has defined MAC address 52:54:00:5a:ec:78 in network mk-no-preload-673754
	I0731 18:08:17.090556   73479 main.go:141] libmachine: (no-preload-673754) DBG | unable to find current IP address of domain no-preload-673754 in network mk-no-preload-673754
	I0731 18:08:17.090590   73479 main.go:141] libmachine: (no-preload-673754) DBG | I0731 18:08:17.090509   75356 retry.go:31] will retry after 2.95036051s: waiting for machine to come up
	I0731 18:08:20.044455   73479 main.go:141] libmachine: (no-preload-673754) DBG | domain no-preload-673754 has defined MAC address 52:54:00:5a:ec:78 in network mk-no-preload-673754
	I0731 18:08:20.044914   73479 main.go:141] libmachine: (no-preload-673754) Found IP for machine: 192.168.61.126
	I0731 18:08:20.044935   73479 main.go:141] libmachine: (no-preload-673754) Reserving static IP address...
	I0731 18:08:20.044948   73479 main.go:141] libmachine: (no-preload-673754) DBG | domain no-preload-673754 has current primary IP address 192.168.61.126 and MAC address 52:54:00:5a:ec:78 in network mk-no-preload-673754
	I0731 18:08:20.045286   73479 main.go:141] libmachine: (no-preload-673754) Reserved static IP address: 192.168.61.126
	I0731 18:08:20.045308   73479 main.go:141] libmachine: (no-preload-673754) Waiting for SSH to be available...
	I0731 18:08:20.045331   73479 main.go:141] libmachine: (no-preload-673754) DBG | found host DHCP lease matching {name: "no-preload-673754", mac: "52:54:00:5a:ec:78", ip: "192.168.61.126"} in network mk-no-preload-673754: {Iface:virbr2 ExpiryTime:2024-07-31 19:08:12 +0000 UTC Type:0 Mac:52:54:00:5a:ec:78 Iaid: IPaddr:192.168.61.126 Prefix:24 Hostname:no-preload-673754 Clientid:01:52:54:00:5a:ec:78}
	I0731 18:08:20.045352   73479 main.go:141] libmachine: (no-preload-673754) DBG | skip adding static IP to network mk-no-preload-673754 - found existing host DHCP lease matching {name: "no-preload-673754", mac: "52:54:00:5a:ec:78", ip: "192.168.61.126"}
	I0731 18:08:20.045367   73479 main.go:141] libmachine: (no-preload-673754) DBG | Getting to WaitForSSH function...
	I0731 18:08:20.047574   73479 main.go:141] libmachine: (no-preload-673754) DBG | domain no-preload-673754 has defined MAC address 52:54:00:5a:ec:78 in network mk-no-preload-673754
	I0731 18:08:20.047913   73479 main.go:141] libmachine: (no-preload-673754) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:ec:78", ip: ""} in network mk-no-preload-673754: {Iface:virbr2 ExpiryTime:2024-07-31 19:08:12 +0000 UTC Type:0 Mac:52:54:00:5a:ec:78 Iaid: IPaddr:192.168.61.126 Prefix:24 Hostname:no-preload-673754 Clientid:01:52:54:00:5a:ec:78}
	I0731 18:08:20.047939   73479 main.go:141] libmachine: (no-preload-673754) DBG | domain no-preload-673754 has defined IP address 192.168.61.126 and MAC address 52:54:00:5a:ec:78 in network mk-no-preload-673754
	I0731 18:08:20.048069   73479 main.go:141] libmachine: (no-preload-673754) DBG | Using SSH client type: external
	I0731 18:08:20.048106   73479 main.go:141] libmachine: (no-preload-673754) DBG | Using SSH private key: /home/jenkins/minikube-integration/19349-8084/.minikube/machines/no-preload-673754/id_rsa (-rw-------)
	I0731 18:08:20.048150   73479 main.go:141] libmachine: (no-preload-673754) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.126 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19349-8084/.minikube/machines/no-preload-673754/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0731 18:08:20.048168   73479 main.go:141] libmachine: (no-preload-673754) DBG | About to run SSH command:
	I0731 18:08:20.048181   73479 main.go:141] libmachine: (no-preload-673754) DBG | exit 0
	I0731 18:08:20.175606   73479 main.go:141] libmachine: (no-preload-673754) DBG | SSH cmd err, output: <nil>: 
	I0731 18:08:20.175917   73479 main.go:141] libmachine: (no-preload-673754) Calling .GetConfigRaw
	I0731 18:08:20.176508   73479 main.go:141] libmachine: (no-preload-673754) Calling .GetIP
	I0731 18:08:20.179035   73479 main.go:141] libmachine: (no-preload-673754) DBG | domain no-preload-673754 has defined MAC address 52:54:00:5a:ec:78 in network mk-no-preload-673754
	I0731 18:08:20.179374   73479 main.go:141] libmachine: (no-preload-673754) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:ec:78", ip: ""} in network mk-no-preload-673754: {Iface:virbr2 ExpiryTime:2024-07-31 19:08:12 +0000 UTC Type:0 Mac:52:54:00:5a:ec:78 Iaid: IPaddr:192.168.61.126 Prefix:24 Hostname:no-preload-673754 Clientid:01:52:54:00:5a:ec:78}
	I0731 18:08:20.179404   73479 main.go:141] libmachine: (no-preload-673754) DBG | domain no-preload-673754 has defined IP address 192.168.61.126 and MAC address 52:54:00:5a:ec:78 in network mk-no-preload-673754
	I0731 18:08:20.179686   73479 profile.go:143] Saving config to /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/no-preload-673754/config.json ...
	I0731 18:08:20.179869   73479 machine.go:94] provisionDockerMachine start ...
	I0731 18:08:20.179885   73479 main.go:141] libmachine: (no-preload-673754) Calling .DriverName
	I0731 18:08:20.180088   73479 main.go:141] libmachine: (no-preload-673754) Calling .GetSSHHostname
	I0731 18:08:20.182345   73479 main.go:141] libmachine: (no-preload-673754) DBG | domain no-preload-673754 has defined MAC address 52:54:00:5a:ec:78 in network mk-no-preload-673754
	I0731 18:08:20.182702   73479 main.go:141] libmachine: (no-preload-673754) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:ec:78", ip: ""} in network mk-no-preload-673754: {Iface:virbr2 ExpiryTime:2024-07-31 19:08:12 +0000 UTC Type:0 Mac:52:54:00:5a:ec:78 Iaid: IPaddr:192.168.61.126 Prefix:24 Hostname:no-preload-673754 Clientid:01:52:54:00:5a:ec:78}
	I0731 18:08:20.182727   73479 main.go:141] libmachine: (no-preload-673754) DBG | domain no-preload-673754 has defined IP address 192.168.61.126 and MAC address 52:54:00:5a:ec:78 in network mk-no-preload-673754
	I0731 18:08:20.182848   73479 main.go:141] libmachine: (no-preload-673754) Calling .GetSSHPort
	I0731 18:08:20.183060   73479 main.go:141] libmachine: (no-preload-673754) Calling .GetSSHKeyPath
	I0731 18:08:20.183227   73479 main.go:141] libmachine: (no-preload-673754) Calling .GetSSHKeyPath
	I0731 18:08:20.183414   73479 main.go:141] libmachine: (no-preload-673754) Calling .GetSSHUsername
	I0731 18:08:20.183572   73479 main.go:141] libmachine: Using SSH client type: native
	I0731 18:08:20.183747   73479 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.126 22 <nil> <nil>}
	I0731 18:08:20.183757   73479 main.go:141] libmachine: About to run SSH command:
	hostname
	I0731 18:08:20.295090   73479 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0731 18:08:20.295149   73479 main.go:141] libmachine: (no-preload-673754) Calling .GetMachineName
	I0731 18:08:20.295424   73479 buildroot.go:166] provisioning hostname "no-preload-673754"
	I0731 18:08:20.295454   73479 main.go:141] libmachine: (no-preload-673754) Calling .GetMachineName
	I0731 18:08:20.295631   73479 main.go:141] libmachine: (no-preload-673754) Calling .GetSSHHostname
	I0731 18:08:20.298467   73479 main.go:141] libmachine: (no-preload-673754) DBG | domain no-preload-673754 has defined MAC address 52:54:00:5a:ec:78 in network mk-no-preload-673754
	I0731 18:08:20.298771   73479 main.go:141] libmachine: (no-preload-673754) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:ec:78", ip: ""} in network mk-no-preload-673754: {Iface:virbr2 ExpiryTime:2024-07-31 19:08:12 +0000 UTC Type:0 Mac:52:54:00:5a:ec:78 Iaid: IPaddr:192.168.61.126 Prefix:24 Hostname:no-preload-673754 Clientid:01:52:54:00:5a:ec:78}
	I0731 18:08:20.298815   73479 main.go:141] libmachine: (no-preload-673754) DBG | domain no-preload-673754 has defined IP address 192.168.61.126 and MAC address 52:54:00:5a:ec:78 in network mk-no-preload-673754
	I0731 18:08:20.298897   73479 main.go:141] libmachine: (no-preload-673754) Calling .GetSSHPort
	I0731 18:08:20.299094   73479 main.go:141] libmachine: (no-preload-673754) Calling .GetSSHKeyPath
	I0731 18:08:20.299276   73479 main.go:141] libmachine: (no-preload-673754) Calling .GetSSHKeyPath
	I0731 18:08:20.299462   73479 main.go:141] libmachine: (no-preload-673754) Calling .GetSSHUsername
	I0731 18:08:20.299652   73479 main.go:141] libmachine: Using SSH client type: native
	I0731 18:08:20.299806   73479 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.126 22 <nil> <nil>}
	I0731 18:08:20.299817   73479 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-673754 && echo "no-preload-673754" | sudo tee /etc/hostname
	I0731 18:08:20.424901   73479 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-673754
	
	I0731 18:08:20.424951   73479 main.go:141] libmachine: (no-preload-673754) Calling .GetSSHHostname
	I0731 18:08:20.427679   73479 main.go:141] libmachine: (no-preload-673754) DBG | domain no-preload-673754 has defined MAC address 52:54:00:5a:ec:78 in network mk-no-preload-673754
	I0731 18:08:20.428049   73479 main.go:141] libmachine: (no-preload-673754) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:ec:78", ip: ""} in network mk-no-preload-673754: {Iface:virbr2 ExpiryTime:2024-07-31 19:08:12 +0000 UTC Type:0 Mac:52:54:00:5a:ec:78 Iaid: IPaddr:192.168.61.126 Prefix:24 Hostname:no-preload-673754 Clientid:01:52:54:00:5a:ec:78}
	I0731 18:08:20.428083   73479 main.go:141] libmachine: (no-preload-673754) DBG | domain no-preload-673754 has defined IP address 192.168.61.126 and MAC address 52:54:00:5a:ec:78 in network mk-no-preload-673754
	I0731 18:08:20.428230   73479 main.go:141] libmachine: (no-preload-673754) Calling .GetSSHPort
	I0731 18:08:20.428419   73479 main.go:141] libmachine: (no-preload-673754) Calling .GetSSHKeyPath
	I0731 18:08:20.428601   73479 main.go:141] libmachine: (no-preload-673754) Calling .GetSSHKeyPath
	I0731 18:08:20.428767   73479 main.go:141] libmachine: (no-preload-673754) Calling .GetSSHUsername
	I0731 18:08:20.428965   73479 main.go:141] libmachine: Using SSH client type: native
	I0731 18:08:20.429127   73479 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.126 22 <nil> <nil>}
	I0731 18:08:20.429142   73479 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-673754' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-673754/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-673754' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0731 18:08:20.546853   73479 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0731 18:08:20.546884   73479 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19349-8084/.minikube CaCertPath:/home/jenkins/minikube-integration/19349-8084/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19349-8084/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19349-8084/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19349-8084/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19349-8084/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19349-8084/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19349-8084/.minikube}
	I0731 18:08:20.546938   73479 buildroot.go:174] setting up certificates
	I0731 18:08:20.546955   73479 provision.go:84] configureAuth start
	I0731 18:08:20.546971   73479 main.go:141] libmachine: (no-preload-673754) Calling .GetMachineName
	I0731 18:08:20.547275   73479 main.go:141] libmachine: (no-preload-673754) Calling .GetIP
	I0731 18:08:20.550019   73479 main.go:141] libmachine: (no-preload-673754) DBG | domain no-preload-673754 has defined MAC address 52:54:00:5a:ec:78 in network mk-no-preload-673754
	I0731 18:08:20.550372   73479 main.go:141] libmachine: (no-preload-673754) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:ec:78", ip: ""} in network mk-no-preload-673754: {Iface:virbr2 ExpiryTime:2024-07-31 19:08:12 +0000 UTC Type:0 Mac:52:54:00:5a:ec:78 Iaid: IPaddr:192.168.61.126 Prefix:24 Hostname:no-preload-673754 Clientid:01:52:54:00:5a:ec:78}
	I0731 18:08:20.550400   73479 main.go:141] libmachine: (no-preload-673754) DBG | domain no-preload-673754 has defined IP address 192.168.61.126 and MAC address 52:54:00:5a:ec:78 in network mk-no-preload-673754
	I0731 18:08:20.550525   73479 main.go:141] libmachine: (no-preload-673754) Calling .GetSSHHostname
	I0731 18:08:20.552914   73479 main.go:141] libmachine: (no-preload-673754) DBG | domain no-preload-673754 has defined MAC address 52:54:00:5a:ec:78 in network mk-no-preload-673754
	I0731 18:08:20.553261   73479 main.go:141] libmachine: (no-preload-673754) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:ec:78", ip: ""} in network mk-no-preload-673754: {Iface:virbr2 ExpiryTime:2024-07-31 19:08:12 +0000 UTC Type:0 Mac:52:54:00:5a:ec:78 Iaid: IPaddr:192.168.61.126 Prefix:24 Hostname:no-preload-673754 Clientid:01:52:54:00:5a:ec:78}
	I0731 18:08:20.553290   73479 main.go:141] libmachine: (no-preload-673754) DBG | domain no-preload-673754 has defined IP address 192.168.61.126 and MAC address 52:54:00:5a:ec:78 in network mk-no-preload-673754
	I0731 18:08:20.553416   73479 provision.go:143] copyHostCerts
	I0731 18:08:20.553479   73479 exec_runner.go:144] found /home/jenkins/minikube-integration/19349-8084/.minikube/ca.pem, removing ...
	I0731 18:08:20.553490   73479 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19349-8084/.minikube/ca.pem
	I0731 18:08:20.553547   73479 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19349-8084/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19349-8084/.minikube/ca.pem (1082 bytes)
	I0731 18:08:20.553675   73479 exec_runner.go:144] found /home/jenkins/minikube-integration/19349-8084/.minikube/cert.pem, removing ...
	I0731 18:08:20.553687   73479 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19349-8084/.minikube/cert.pem
	I0731 18:08:20.553718   73479 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19349-8084/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19349-8084/.minikube/cert.pem (1123 bytes)
	I0731 18:08:20.553796   73479 exec_runner.go:144] found /home/jenkins/minikube-integration/19349-8084/.minikube/key.pem, removing ...
	I0731 18:08:20.553806   73479 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19349-8084/.minikube/key.pem
	I0731 18:08:20.553826   73479 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19349-8084/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19349-8084/.minikube/key.pem (1675 bytes)
	I0731 18:08:20.553883   73479 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19349-8084/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19349-8084/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19349-8084/.minikube/certs/ca-key.pem org=jenkins.no-preload-673754 san=[127.0.0.1 192.168.61.126 localhost minikube no-preload-673754]
	I0731 18:08:20.878891   73479 provision.go:177] copyRemoteCerts
	I0731 18:08:20.878963   73479 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0731 18:08:20.878990   73479 main.go:141] libmachine: (no-preload-673754) Calling .GetSSHHostname
	I0731 18:08:20.881529   73479 main.go:141] libmachine: (no-preload-673754) DBG | domain no-preload-673754 has defined MAC address 52:54:00:5a:ec:78 in network mk-no-preload-673754
	I0731 18:08:20.881868   73479 main.go:141] libmachine: (no-preload-673754) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:ec:78", ip: ""} in network mk-no-preload-673754: {Iface:virbr2 ExpiryTime:2024-07-31 19:08:12 +0000 UTC Type:0 Mac:52:54:00:5a:ec:78 Iaid: IPaddr:192.168.61.126 Prefix:24 Hostname:no-preload-673754 Clientid:01:52:54:00:5a:ec:78}
	I0731 18:08:20.881900   73479 main.go:141] libmachine: (no-preload-673754) DBG | domain no-preload-673754 has defined IP address 192.168.61.126 and MAC address 52:54:00:5a:ec:78 in network mk-no-preload-673754
	I0731 18:08:20.882053   73479 main.go:141] libmachine: (no-preload-673754) Calling .GetSSHPort
	I0731 18:08:20.882245   73479 main.go:141] libmachine: (no-preload-673754) Calling .GetSSHKeyPath
	I0731 18:08:20.882450   73479 main.go:141] libmachine: (no-preload-673754) Calling .GetSSHUsername
	I0731 18:08:20.882617   73479 sshutil.go:53] new ssh client: &{IP:192.168.61.126 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19349-8084/.minikube/machines/no-preload-673754/id_rsa Username:docker}
	I0731 18:08:20.968757   73479 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19349-8084/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0731 18:08:20.992136   73479 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19349-8084/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0731 18:08:21.013768   73479 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19349-8084/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0731 18:08:21.035808   73479 provision.go:87] duration metric: took 488.837788ms to configureAuth
	I0731 18:08:21.035839   73479 buildroot.go:189] setting minikube options for container-runtime
	I0731 18:08:21.036018   73479 config.go:182] Loaded profile config "no-preload-673754": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0-beta.0
	I0731 18:08:21.036099   73479 main.go:141] libmachine: (no-preload-673754) Calling .GetSSHHostname
	I0731 18:08:21.038949   73479 main.go:141] libmachine: (no-preload-673754) DBG | domain no-preload-673754 has defined MAC address 52:54:00:5a:ec:78 in network mk-no-preload-673754
	I0731 18:08:21.039335   73479 main.go:141] libmachine: (no-preload-673754) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:ec:78", ip: ""} in network mk-no-preload-673754: {Iface:virbr2 ExpiryTime:2024-07-31 19:08:12 +0000 UTC Type:0 Mac:52:54:00:5a:ec:78 Iaid: IPaddr:192.168.61.126 Prefix:24 Hostname:no-preload-673754 Clientid:01:52:54:00:5a:ec:78}
	I0731 18:08:21.039363   73479 main.go:141] libmachine: (no-preload-673754) DBG | domain no-preload-673754 has defined IP address 192.168.61.126 and MAC address 52:54:00:5a:ec:78 in network mk-no-preload-673754
	I0731 18:08:21.039556   73479 main.go:141] libmachine: (no-preload-673754) Calling .GetSSHPort
	I0731 18:08:21.039756   73479 main.go:141] libmachine: (no-preload-673754) Calling .GetSSHKeyPath
	I0731 18:08:21.039960   73479 main.go:141] libmachine: (no-preload-673754) Calling .GetSSHKeyPath
	I0731 18:08:21.040071   73479 main.go:141] libmachine: (no-preload-673754) Calling .GetSSHUsername
	I0731 18:08:21.040219   73479 main.go:141] libmachine: Using SSH client type: native
	I0731 18:08:21.040380   73479 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.126 22 <nil> <nil>}
	I0731 18:08:21.040396   73479 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0731 18:08:21.319623   73479 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0731 18:08:21.319657   73479 machine.go:97] duration metric: took 1.139776085s to provisionDockerMachine
	I0731 18:08:21.319672   73479 start.go:293] postStartSetup for "no-preload-673754" (driver="kvm2")
	I0731 18:08:21.319689   73479 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0731 18:08:21.319710   73479 main.go:141] libmachine: (no-preload-673754) Calling .DriverName
	I0731 18:08:21.320049   73479 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0731 18:08:21.320076   73479 main.go:141] libmachine: (no-preload-673754) Calling .GetSSHHostname
	I0731 18:08:21.322963   73479 main.go:141] libmachine: (no-preload-673754) DBG | domain no-preload-673754 has defined MAC address 52:54:00:5a:ec:78 in network mk-no-preload-673754
	I0731 18:08:21.323436   73479 main.go:141] libmachine: (no-preload-673754) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:ec:78", ip: ""} in network mk-no-preload-673754: {Iface:virbr2 ExpiryTime:2024-07-31 19:08:12 +0000 UTC Type:0 Mac:52:54:00:5a:ec:78 Iaid: IPaddr:192.168.61.126 Prefix:24 Hostname:no-preload-673754 Clientid:01:52:54:00:5a:ec:78}
	I0731 18:08:21.323465   73479 main.go:141] libmachine: (no-preload-673754) DBG | domain no-preload-673754 has defined IP address 192.168.61.126 and MAC address 52:54:00:5a:ec:78 in network mk-no-preload-673754
	I0731 18:08:21.323634   73479 main.go:141] libmachine: (no-preload-673754) Calling .GetSSHPort
	I0731 18:08:21.323809   73479 main.go:141] libmachine: (no-preload-673754) Calling .GetSSHKeyPath
	I0731 18:08:21.324003   73479 main.go:141] libmachine: (no-preload-673754) Calling .GetSSHUsername
	I0731 18:08:21.324127   73479 sshutil.go:53] new ssh client: &{IP:192.168.61.126 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19349-8084/.minikube/machines/no-preload-673754/id_rsa Username:docker}
	I0731 18:08:21.409076   73479 ssh_runner.go:195] Run: cat /etc/os-release
	I0731 18:08:21.412884   73479 info.go:137] Remote host: Buildroot 2023.02.9
	I0731 18:08:21.412917   73479 filesync.go:126] Scanning /home/jenkins/minikube-integration/19349-8084/.minikube/addons for local assets ...
	I0731 18:08:21.413020   73479 filesync.go:126] Scanning /home/jenkins/minikube-integration/19349-8084/.minikube/files for local assets ...
	I0731 18:08:21.413108   73479 filesync.go:149] local asset: /home/jenkins/minikube-integration/19349-8084/.minikube/files/etc/ssl/certs/152592.pem -> 152592.pem in /etc/ssl/certs
	I0731 18:08:21.413233   73479 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0731 18:08:21.421812   73479 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19349-8084/.minikube/files/etc/ssl/certs/152592.pem --> /etc/ssl/certs/152592.pem (1708 bytes)
	I0731 18:08:21.447124   73479 start.go:296] duration metric: took 127.423498ms for postStartSetup
	I0731 18:08:21.447196   73479 fix.go:56] duration metric: took 20.511108968s for fixHost
	I0731 18:08:21.447226   73479 main.go:141] libmachine: (no-preload-673754) Calling .GetSSHHostname
	I0731 18:08:21.450022   73479 main.go:141] libmachine: (no-preload-673754) DBG | domain no-preload-673754 has defined MAC address 52:54:00:5a:ec:78 in network mk-no-preload-673754
	I0731 18:08:21.450408   73479 main.go:141] libmachine: (no-preload-673754) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:ec:78", ip: ""} in network mk-no-preload-673754: {Iface:virbr2 ExpiryTime:2024-07-31 19:08:12 +0000 UTC Type:0 Mac:52:54:00:5a:ec:78 Iaid: IPaddr:192.168.61.126 Prefix:24 Hostname:no-preload-673754 Clientid:01:52:54:00:5a:ec:78}
	I0731 18:08:21.450431   73479 main.go:141] libmachine: (no-preload-673754) DBG | domain no-preload-673754 has defined IP address 192.168.61.126 and MAC address 52:54:00:5a:ec:78 in network mk-no-preload-673754
	I0731 18:08:21.450628   73479 main.go:141] libmachine: (no-preload-673754) Calling .GetSSHPort
	I0731 18:08:21.450846   73479 main.go:141] libmachine: (no-preload-673754) Calling .GetSSHKeyPath
	I0731 18:08:21.451009   73479 main.go:141] libmachine: (no-preload-673754) Calling .GetSSHKeyPath
	I0731 18:08:21.451161   73479 main.go:141] libmachine: (no-preload-673754) Calling .GetSSHUsername
	I0731 18:08:21.451327   73479 main.go:141] libmachine: Using SSH client type: native
	I0731 18:08:21.451527   73479 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.126 22 <nil> <nil>}
	I0731 18:08:21.451541   73479 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0731 18:08:21.563653   73479 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722449301.536356236
	
	I0731 18:08:21.563672   73479 fix.go:216] guest clock: 1722449301.536356236
	I0731 18:08:21.563679   73479 fix.go:229] Guest: 2024-07-31 18:08:21.536356236 +0000 UTC Remote: 2024-07-31 18:08:21.447206545 +0000 UTC m=+354.621330953 (delta=89.149691ms)
	I0731 18:08:21.563702   73479 fix.go:200] guest clock delta is within tolerance: 89.149691ms
	I0731 18:08:21.563709   73479 start.go:83] releasing machines lock for "no-preload-673754", held for 20.627680156s
	I0731 18:08:21.563734   73479 main.go:141] libmachine: (no-preload-673754) Calling .DriverName
	I0731 18:08:21.563992   73479 main.go:141] libmachine: (no-preload-673754) Calling .GetIP
	I0731 18:08:21.566875   73479 main.go:141] libmachine: (no-preload-673754) DBG | domain no-preload-673754 has defined MAC address 52:54:00:5a:ec:78 in network mk-no-preload-673754
	I0731 18:08:21.567265   73479 main.go:141] libmachine: (no-preload-673754) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:ec:78", ip: ""} in network mk-no-preload-673754: {Iface:virbr2 ExpiryTime:2024-07-31 19:08:12 +0000 UTC Type:0 Mac:52:54:00:5a:ec:78 Iaid: IPaddr:192.168.61.126 Prefix:24 Hostname:no-preload-673754 Clientid:01:52:54:00:5a:ec:78}
	I0731 18:08:21.567290   73479 main.go:141] libmachine: (no-preload-673754) DBG | domain no-preload-673754 has defined IP address 192.168.61.126 and MAC address 52:54:00:5a:ec:78 in network mk-no-preload-673754
	I0731 18:08:21.567505   73479 main.go:141] libmachine: (no-preload-673754) Calling .DriverName
	I0731 18:08:21.568045   73479 main.go:141] libmachine: (no-preload-673754) Calling .DriverName
	I0731 18:08:21.568237   73479 main.go:141] libmachine: (no-preload-673754) Calling .DriverName
	I0731 18:08:21.568368   73479 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0731 18:08:21.568408   73479 main.go:141] libmachine: (no-preload-673754) Calling .GetSSHHostname
	I0731 18:08:21.568465   73479 ssh_runner.go:195] Run: cat /version.json
	I0731 18:08:21.568492   73479 main.go:141] libmachine: (no-preload-673754) Calling .GetSSHHostname
	I0731 18:08:21.571178   73479 main.go:141] libmachine: (no-preload-673754) DBG | domain no-preload-673754 has defined MAC address 52:54:00:5a:ec:78 in network mk-no-preload-673754
	I0731 18:08:21.571554   73479 main.go:141] libmachine: (no-preload-673754) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:ec:78", ip: ""} in network mk-no-preload-673754: {Iface:virbr2 ExpiryTime:2024-07-31 19:08:12 +0000 UTC Type:0 Mac:52:54:00:5a:ec:78 Iaid: IPaddr:192.168.61.126 Prefix:24 Hostname:no-preload-673754 Clientid:01:52:54:00:5a:ec:78}
	I0731 18:08:21.571603   73479 main.go:141] libmachine: (no-preload-673754) DBG | domain no-preload-673754 has defined IP address 192.168.61.126 and MAC address 52:54:00:5a:ec:78 in network mk-no-preload-673754
	I0731 18:08:21.571653   73479 main.go:141] libmachine: (no-preload-673754) DBG | domain no-preload-673754 has defined MAC address 52:54:00:5a:ec:78 in network mk-no-preload-673754
	I0731 18:08:21.571729   73479 main.go:141] libmachine: (no-preload-673754) Calling .GetSSHPort
	I0731 18:08:21.571902   73479 main.go:141] libmachine: (no-preload-673754) Calling .GetSSHKeyPath
	I0731 18:08:21.572071   73479 main.go:141] libmachine: (no-preload-673754) Calling .GetSSHUsername
	I0731 18:08:21.572213   73479 main.go:141] libmachine: (no-preload-673754) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:ec:78", ip: ""} in network mk-no-preload-673754: {Iface:virbr2 ExpiryTime:2024-07-31 19:08:12 +0000 UTC Type:0 Mac:52:54:00:5a:ec:78 Iaid: IPaddr:192.168.61.126 Prefix:24 Hostname:no-preload-673754 Clientid:01:52:54:00:5a:ec:78}
	I0731 18:08:21.572240   73479 main.go:141] libmachine: (no-preload-673754) DBG | domain no-preload-673754 has defined IP address 192.168.61.126 and MAC address 52:54:00:5a:ec:78 in network mk-no-preload-673754
	I0731 18:08:21.572256   73479 sshutil.go:53] new ssh client: &{IP:192.168.61.126 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19349-8084/.minikube/machines/no-preload-673754/id_rsa Username:docker}
	I0731 18:08:21.572373   73479 main.go:141] libmachine: (no-preload-673754) Calling .GetSSHPort
	I0731 18:08:21.572505   73479 main.go:141] libmachine: (no-preload-673754) Calling .GetSSHKeyPath
	I0731 18:08:21.572609   73479 main.go:141] libmachine: (no-preload-673754) Calling .GetSSHUsername
	I0731 18:08:21.572739   73479 sshutil.go:53] new ssh client: &{IP:192.168.61.126 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19349-8084/.minikube/machines/no-preload-673754/id_rsa Username:docker}
	I0731 18:08:21.682894   73479 ssh_runner.go:195] Run: systemctl --version
	I0731 18:08:21.689126   73479 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0731 18:08:21.829572   73479 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0731 18:08:21.836507   73479 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0731 18:08:21.836589   73479 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0731 18:08:21.855127   73479 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0731 18:08:21.855176   73479 start.go:495] detecting cgroup driver to use...
	I0731 18:08:21.855256   73479 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0731 18:08:21.870886   73479 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0731 18:08:21.884762   73479 docker.go:217] disabling cri-docker service (if available) ...
	I0731 18:08:21.884833   73479 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0731 18:08:21.899480   73479 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0731 18:08:21.912438   73479 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0731 18:08:22.024528   73479 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0731 18:08:22.177400   73479 docker.go:233] disabling docker service ...
	I0731 18:08:22.177500   73479 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0731 18:08:22.191225   73479 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0731 18:08:22.204004   73479 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0731 18:08:22.327408   73479 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0731 18:08:22.449116   73479 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0731 18:08:22.463031   73479 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0731 18:08:22.481864   73479 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0731 18:08:22.481935   73479 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 18:08:22.491687   73479 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0731 18:08:22.491768   73479 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 18:08:22.501686   73479 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 18:08:22.511207   73479 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 18:08:22.521390   73479 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0731 18:08:22.531355   73479 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 18:08:22.541544   73479 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 18:08:22.556829   73479 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 18:08:22.566012   73479 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0731 18:08:22.574865   73479 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0731 18:08:22.574938   73479 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0731 18:08:22.588125   73479 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0731 18:08:22.597257   73479 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0731 18:08:22.716379   73479 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0731 18:08:22.855465   73479 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0731 18:08:22.855526   73479 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0731 18:08:22.860016   73479 start.go:563] Will wait 60s for crictl version
	I0731 18:08:22.860088   73479 ssh_runner.go:195] Run: which crictl
	I0731 18:08:22.863395   73479 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0731 18:08:22.904523   73479 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0731 18:08:22.904611   73479 ssh_runner.go:195] Run: crio --version
	I0731 18:08:22.934571   73479 ssh_runner.go:195] Run: crio --version
	I0731 18:08:22.965884   73479 out.go:177] * Preparing Kubernetes v1.31.0-beta.0 on CRI-O 1.29.1 ...
	I0731 18:08:19.771740   73800 pod_ready.go:102] pod "metrics-server-569cc877fc-fzxrw" in "kube-system" namespace has status "Ready":"False"
	I0731 18:08:22.272491   73800 pod_ready.go:102] pod "metrics-server-569cc877fc-fzxrw" in "kube-system" namespace has status "Ready":"False"
	I0731 18:08:20.478866   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:08:20.978311   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:08:21.478333   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:08:21.978289   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:08:22.479138   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:08:22.978406   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:08:23.478138   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:08:23.979189   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:08:24.478688   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:08:24.978795   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:08:22.779215   73696 pod_ready.go:102] pod "metrics-server-569cc877fc-64hp4" in "kube-system" namespace has status "Ready":"False"
	I0731 18:08:24.782366   73696 pod_ready.go:102] pod "metrics-server-569cc877fc-64hp4" in "kube-system" namespace has status "Ready":"False"
	I0731 18:08:22.967087   73479 main.go:141] libmachine: (no-preload-673754) Calling .GetIP
	I0731 18:08:22.969442   73479 main.go:141] libmachine: (no-preload-673754) DBG | domain no-preload-673754 has defined MAC address 52:54:00:5a:ec:78 in network mk-no-preload-673754
	I0731 18:08:22.969722   73479 main.go:141] libmachine: (no-preload-673754) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:ec:78", ip: ""} in network mk-no-preload-673754: {Iface:virbr2 ExpiryTime:2024-07-31 19:08:12 +0000 UTC Type:0 Mac:52:54:00:5a:ec:78 Iaid: IPaddr:192.168.61.126 Prefix:24 Hostname:no-preload-673754 Clientid:01:52:54:00:5a:ec:78}
	I0731 18:08:22.969746   73479 main.go:141] libmachine: (no-preload-673754) DBG | domain no-preload-673754 has defined IP address 192.168.61.126 and MAC address 52:54:00:5a:ec:78 in network mk-no-preload-673754
	I0731 18:08:22.970005   73479 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0731 18:08:22.974229   73479 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0731 18:08:22.986153   73479 kubeadm.go:883] updating cluster {Name:no-preload-673754 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1
.31.0-beta.0 ClusterName:no-preload-673754 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.126 Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:2628
0h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0731 18:08:22.986292   73479 preload.go:131] Checking if preload exists for k8s version v1.31.0-beta.0 and runtime crio
	I0731 18:08:22.986321   73479 ssh_runner.go:195] Run: sudo crictl images --output json
	I0731 18:08:23.020129   73479 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.0-beta.0". assuming images are not preloaded.
	I0731 18:08:23.020153   73479 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.31.0-beta.0 registry.k8s.io/kube-controller-manager:v1.31.0-beta.0 registry.k8s.io/kube-scheduler:v1.31.0-beta.0 registry.k8s.io/kube-proxy:v1.31.0-beta.0 registry.k8s.io/pause:3.10 registry.k8s.io/etcd:3.5.14-0 registry.k8s.io/coredns/coredns:v1.11.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0731 18:08:23.020215   73479 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0731 18:08:23.020234   73479 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.31.0-beta.0
	I0731 18:08:23.020266   73479 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.31.0-beta.0
	I0731 18:08:23.020322   73479 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.31.0-beta.0
	I0731 18:08:23.020337   73479 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.14-0
	I0731 18:08:23.020390   73479 image.go:134] retrieving image: registry.k8s.io/pause:3.10
	I0731 18:08:23.020431   73479 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.11.1
	I0731 18:08:23.020457   73479 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.31.0-beta.0
	I0731 18:08:23.021820   73479 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0731 18:08:23.021821   73479 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.14-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.14-0
	I0731 18:08:23.021820   73479 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.31.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.31.0-beta.0
	I0731 18:08:23.021901   73479 image.go:177] daemon lookup for registry.k8s.io/pause:3.10: Error response from daemon: No such image: registry.k8s.io/pause:3.10
	I0731 18:08:23.021978   73479 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.31.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.31.0-beta.0
	I0731 18:08:23.021833   73479 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.31.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.31.0-beta.0
	I0731 18:08:23.021826   73479 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.31.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.31.0-beta.0
	I0731 18:08:23.021821   73479 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.1
	I0731 18:08:23.254700   73479 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.31.0-beta.0
	I0731 18:08:23.268999   73479 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.11.1
	I0731 18:08:23.271466   73479 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.10
	I0731 18:08:23.272011   73479 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.31.0-beta.0
	I0731 18:08:23.275695   73479 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.31.0-beta.0
	I0731 18:08:23.298363   73479 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.31.0-beta.0
	I0731 18:08:23.320031   73479 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.14-0
	I0731 18:08:23.340960   73479 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.31.0-beta.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.31.0-beta.0" does not exist at hash "f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938" in container runtime
	I0731 18:08:23.341004   73479 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.31.0-beta.0
	I0731 18:08:23.341050   73479 ssh_runner.go:195] Run: which crictl
	I0731 18:08:23.381391   73479 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.11.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.11.1" does not exist at hash "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4" in container runtime
	I0731 18:08:23.381441   73479 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.11.1
	I0731 18:08:23.381511   73479 ssh_runner.go:195] Run: which crictl
	I0731 18:08:23.508590   73479 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.31.0-beta.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.31.0-beta.0" does not exist at hash "63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5" in container runtime
	I0731 18:08:23.508650   73479 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.31.0-beta.0
	I0731 18:08:23.508676   73479 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.31.0-beta.0" needs transfer: "registry.k8s.io/kube-proxy:v1.31.0-beta.0" does not exist at hash "c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899" in container runtime
	I0731 18:08:23.508702   73479 ssh_runner.go:195] Run: which crictl
	I0731 18:08:23.508716   73479 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.31.0-beta.0
	I0731 18:08:23.508729   73479 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.31.0-beta.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.31.0-beta.0" does not exist at hash "d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b" in container runtime
	I0731 18:08:23.508751   73479 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.31.0-beta.0
	I0731 18:08:23.508772   73479 ssh_runner.go:195] Run: which crictl
	I0731 18:08:23.508781   73479 ssh_runner.go:195] Run: which crictl
	I0731 18:08:23.508800   73479 cache_images.go:116] "registry.k8s.io/etcd:3.5.14-0" needs transfer: "registry.k8s.io/etcd:3.5.14-0" does not exist at hash "cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa" in container runtime
	I0731 18:08:23.508830   73479 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.14-0
	I0731 18:08:23.508838   73479 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.0-beta.0
	I0731 18:08:23.508860   73479 ssh_runner.go:195] Run: which crictl
	I0731 18:08:23.508879   73479 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1
	I0731 18:08:23.519809   73479 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.0-beta.0
	I0731 18:08:23.519834   73479 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.0-beta.0
	I0731 18:08:23.519907   73479 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.0-beta.0
	I0731 18:08:23.595474   73479 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.14-0
	I0731 18:08:23.595484   73479 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19349-8084/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.0-beta.0
	I0731 18:08:23.595590   73479 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19349-8084/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1
	I0731 18:08:23.595628   73479 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.31.0-beta.0
	I0731 18:08:23.595683   73479 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.11.1
	I0731 18:08:23.622893   73479 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19349-8084/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.0-beta.0
	I0731 18:08:23.623024   73479 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.31.0-beta.0
	I0731 18:08:23.629140   73479 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19349-8084/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.0-beta.0
	I0731 18:08:23.629173   73479 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19349-8084/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.0-beta.0
	I0731 18:08:23.629242   73479 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.31.0-beta.0
	I0731 18:08:23.629246   73479 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.31.0-beta.0
	I0731 18:08:23.659281   73479 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19349-8084/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.14-0
	I0731 18:08:23.659321   73479 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.31.0-beta.0 (exists)
	I0731 18:08:23.659336   73479 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.31.0-beta.0
	I0731 18:08:23.659379   73479 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.0-beta.0
	I0731 18:08:23.659385   73479 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.11.1 (exists)
	I0731 18:08:23.659425   73479 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.31.0-beta.0 (exists)
	I0731 18:08:23.659381   73479 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.14-0
	I0731 18:08:23.659465   73479 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.31.0-beta.0 (exists)
	I0731 18:08:23.659494   73479 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.31.0-beta.0 (exists)
	I0731 18:08:23.857129   73479 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0731 18:08:26.136212   73479 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.0-beta.0: (2.476802709s)
	I0731 18:08:26.136251   73479 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19349-8084/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.0-beta.0 from cache
	I0731 18:08:26.136264   73479 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.14-0: (2.476807388s)
	I0731 18:08:26.136276   73479 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.11.1
	I0731 18:08:26.136293   73479 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.14-0 (exists)
	I0731 18:08:26.136329   73479 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1
	I0731 18:08:26.136366   73479 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5: (2.279204335s)
	I0731 18:08:26.136423   73479 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I0731 18:08:26.136474   73479 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0731 18:08:26.136521   73479 ssh_runner.go:195] Run: which crictl
	I0731 18:08:24.770974   73800 pod_ready.go:102] pod "metrics-server-569cc877fc-fzxrw" in "kube-system" namespace has status "Ready":"False"
	I0731 18:08:26.771954   73800 pod_ready.go:102] pod "metrics-server-569cc877fc-fzxrw" in "kube-system" namespace has status "Ready":"False"
	I0731 18:08:29.274931   73800 pod_ready.go:102] pod "metrics-server-569cc877fc-fzxrw" in "kube-system" namespace has status "Ready":"False"
	I0731 18:08:25.478432   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:08:25.978823   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:08:26.478416   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:08:26.979075   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:08:27.478228   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:08:27.978970   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:08:28.478319   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:08:28.979028   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:08:29.479060   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:08:29.978544   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:08:27.278482   73696 pod_ready.go:102] pod "metrics-server-569cc877fc-64hp4" in "kube-system" namespace has status "Ready":"False"
	I0731 18:08:29.279820   73696 pod_ready.go:102] pod "metrics-server-569cc877fc-64hp4" in "kube-system" namespace has status "Ready":"False"
	I0731 18:08:27.993828   73479 ssh_runner.go:235] Completed: which crictl: (1.857279777s)
	I0731 18:08:27.993908   73479 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0731 18:08:27.993918   73479 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1: (1.857561411s)
	I0731 18:08:27.993947   73479 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19349-8084/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1 from cache
	I0731 18:08:27.993981   73479 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.31.0-beta.0
	I0731 18:08:27.994029   73479 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.0-beta.0
	I0731 18:08:28.037163   73479 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19349-8084/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0731 18:08:28.037288   73479 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0731 18:08:29.880343   73479 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: (1.843037657s)
	I0731 18:08:29.880392   73479 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I0731 18:08:29.880339   73479 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.0-beta.0: (1.886261639s)
	I0731 18:08:29.880412   73479 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19349-8084/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.0-beta.0 from cache
	I0731 18:08:29.880442   73479 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.31.0-beta.0
	I0731 18:08:29.880509   73479 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.0-beta.0
	I0731 18:08:31.229448   73479 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.0-beta.0: (1.348909634s)
	I0731 18:08:31.229478   73479 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19349-8084/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.0-beta.0 from cache
	I0731 18:08:31.229512   73479 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.31.0-beta.0
	I0731 18:08:31.229575   73479 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.0-beta.0
	I0731 18:08:31.771695   73800 pod_ready.go:102] pod "metrics-server-569cc877fc-fzxrw" in "kube-system" namespace has status "Ready":"False"
	I0731 18:08:34.271817   73800 pod_ready.go:102] pod "metrics-server-569cc877fc-fzxrw" in "kube-system" namespace has status "Ready":"False"
	I0731 18:08:30.478387   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:08:30.978443   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:08:31.478484   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:08:31.979231   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:08:32.478928   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:08:32.978790   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:08:33.478426   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:08:33.978839   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:08:34.478411   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:08:34.978378   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:08:31.280261   73696 pod_ready.go:102] pod "metrics-server-569cc877fc-64hp4" in "kube-system" namespace has status "Ready":"False"
	I0731 18:08:33.780411   73696 pod_ready.go:102] pod "metrics-server-569cc877fc-64hp4" in "kube-system" namespace has status "Ready":"False"
	I0731 18:08:35.783181   73696 pod_ready.go:102] pod "metrics-server-569cc877fc-64hp4" in "kube-system" namespace has status "Ready":"False"
	I0731 18:08:33.084098   73479 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.0-beta.0: (1.854499641s)
	I0731 18:08:33.084136   73479 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19349-8084/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.0-beta.0 from cache
	I0731 18:08:33.084175   73479 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.5.14-0
	I0731 18:08:33.084255   73479 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.14-0
	I0731 18:08:36.378466   73479 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.14-0: (3.294181026s)
	I0731 18:08:36.378501   73479 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19349-8084/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.14-0 from cache
	I0731 18:08:36.378530   73479 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0731 18:08:36.378575   73479 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I0731 18:08:36.772963   73800 pod_ready.go:102] pod "metrics-server-569cc877fc-fzxrw" in "kube-system" namespace has status "Ready":"False"
	I0731 18:08:39.270915   73800 pod_ready.go:102] pod "metrics-server-569cc877fc-fzxrw" in "kube-system" namespace has status "Ready":"False"
	I0731 18:08:35.478287   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:08:35.978546   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:08:36.479138   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:08:36.979173   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:08:37.478319   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:08:37.978768   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:08:38.479161   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:08:38.979129   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:08:39.478128   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:08:39.979147   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:08:38.278970   73696 pod_ready.go:102] pod "metrics-server-569cc877fc-64hp4" in "kube-system" namespace has status "Ready":"False"
	I0731 18:08:40.279298   73696 pod_ready.go:102] pod "metrics-server-569cc877fc-64hp4" in "kube-system" namespace has status "Ready":"False"
	I0731 18:08:37.022757   73479 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19349-8084/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0731 18:08:37.022807   73479 cache_images.go:123] Successfully loaded all cached images
	I0731 18:08:37.022815   73479 cache_images.go:92] duration metric: took 14.002647196s to LoadCachedImages
	I0731 18:08:37.022829   73479 kubeadm.go:934] updating node { 192.168.61.126 8443 v1.31.0-beta.0 crio true true} ...
	I0731 18:08:37.022954   73479 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-673754 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.126
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0-beta.0 ClusterName:no-preload-673754 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0731 18:08:37.023035   73479 ssh_runner.go:195] Run: crio config
	I0731 18:08:37.064803   73479 cni.go:84] Creating CNI manager for ""
	I0731 18:08:37.064825   73479 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0731 18:08:37.064834   73479 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0731 18:08:37.064856   73479 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.126 APIServerPort:8443 KubernetesVersion:v1.31.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-673754 NodeName:no-preload-673754 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.126"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.126 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt Stati
cPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0731 18:08:37.065028   73479 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.126
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-673754"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.126
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.126"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0731 18:08:37.065108   73479 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0-beta.0
	I0731 18:08:37.077141   73479 binaries.go:44] Found k8s binaries, skipping transfer
	I0731 18:08:37.077215   73479 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0731 18:08:37.086553   73479 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (324 bytes)
	I0731 18:08:37.102646   73479 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I0731 18:08:37.118113   73479 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2168 bytes)
	I0731 18:08:37.134702   73479 ssh_runner.go:195] Run: grep 192.168.61.126	control-plane.minikube.internal$ /etc/hosts
	I0731 18:08:37.138593   73479 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.126	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0731 18:08:37.151319   73479 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0731 18:08:37.270019   73479 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0731 18:08:37.287378   73479 certs.go:68] Setting up /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/no-preload-673754 for IP: 192.168.61.126
	I0731 18:08:37.287400   73479 certs.go:194] generating shared ca certs ...
	I0731 18:08:37.287413   73479 certs.go:226] acquiring lock for ca certs: {Name:mk49eed439085f9c30746706460ce213570d1997 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 18:08:37.287540   73479 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19349-8084/.minikube/ca.key
	I0731 18:08:37.287577   73479 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19349-8084/.minikube/proxy-client-ca.key
	I0731 18:08:37.287584   73479 certs.go:256] generating profile certs ...
	I0731 18:08:37.287692   73479 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/no-preload-673754/client.key
	I0731 18:08:37.287761   73479 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/no-preload-673754/apiserver.key.3fff3ffc
	I0731 18:08:37.287803   73479 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/no-preload-673754/proxy-client.key
	I0731 18:08:37.287938   73479 certs.go:484] found cert: /home/jenkins/minikube-integration/19349-8084/.minikube/certs/15259.pem (1338 bytes)
	W0731 18:08:37.287973   73479 certs.go:480] ignoring /home/jenkins/minikube-integration/19349-8084/.minikube/certs/15259_empty.pem, impossibly tiny 0 bytes
	I0731 18:08:37.287985   73479 certs.go:484] found cert: /home/jenkins/minikube-integration/19349-8084/.minikube/certs/ca-key.pem (1675 bytes)
	I0731 18:08:37.288020   73479 certs.go:484] found cert: /home/jenkins/minikube-integration/19349-8084/.minikube/certs/ca.pem (1082 bytes)
	I0731 18:08:37.288049   73479 certs.go:484] found cert: /home/jenkins/minikube-integration/19349-8084/.minikube/certs/cert.pem (1123 bytes)
	I0731 18:08:37.288079   73479 certs.go:484] found cert: /home/jenkins/minikube-integration/19349-8084/.minikube/certs/key.pem (1675 bytes)
	I0731 18:08:37.288143   73479 certs.go:484] found cert: /home/jenkins/minikube-integration/19349-8084/.minikube/files/etc/ssl/certs/152592.pem (1708 bytes)
	I0731 18:08:37.288831   73479 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19349-8084/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0731 18:08:37.334317   73479 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19349-8084/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0731 18:08:37.370553   73479 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19349-8084/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0731 18:08:37.403436   73479 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19349-8084/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0731 18:08:37.449133   73479 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/no-preload-673754/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0731 18:08:37.486169   73479 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/no-preload-673754/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0731 18:08:37.517241   73479 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/no-preload-673754/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0731 18:08:37.541089   73479 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/no-preload-673754/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0731 18:08:37.563068   73479 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19349-8084/.minikube/certs/15259.pem --> /usr/share/ca-certificates/15259.pem (1338 bytes)
	I0731 18:08:37.585396   73479 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19349-8084/.minikube/files/etc/ssl/certs/152592.pem --> /usr/share/ca-certificates/152592.pem (1708 bytes)
	I0731 18:08:37.608142   73479 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19349-8084/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0731 18:08:37.630178   73479 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0731 18:08:37.645994   73479 ssh_runner.go:195] Run: openssl version
	I0731 18:08:37.651663   73479 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/15259.pem && ln -fs /usr/share/ca-certificates/15259.pem /etc/ssl/certs/15259.pem"
	I0731 18:08:37.661494   73479 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/15259.pem
	I0731 18:08:37.665519   73479 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 31 16:54 /usr/share/ca-certificates/15259.pem
	I0731 18:08:37.665575   73479 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/15259.pem
	I0731 18:08:37.671143   73479 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/15259.pem /etc/ssl/certs/51391683.0"
	I0731 18:08:37.681076   73479 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/152592.pem && ln -fs /usr/share/ca-certificates/152592.pem /etc/ssl/certs/152592.pem"
	I0731 18:08:37.692253   73479 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/152592.pem
	I0731 18:08:37.696802   73479 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 31 16:54 /usr/share/ca-certificates/152592.pem
	I0731 18:08:37.696850   73479 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/152592.pem
	I0731 18:08:37.702282   73479 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/152592.pem /etc/ssl/certs/3ec20f2e.0"
	I0731 18:08:37.713051   73479 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0731 18:08:37.723644   73479 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0731 18:08:37.728170   73479 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 31 16:42 /usr/share/ca-certificates/minikubeCA.pem
	I0731 18:08:37.728225   73479 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0731 18:08:37.733912   73479 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0731 18:08:37.744004   73479 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0731 18:08:37.748076   73479 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0731 18:08:37.753645   73479 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0731 18:08:37.759077   73479 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0731 18:08:37.764344   73479 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0731 18:08:37.769735   73479 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0731 18:08:37.775894   73479 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
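
	The `-checkend 86400` runs above ask openssl whether each control-plane certificate will still be valid 24 hours from now. Below is a minimal Go sketch of the same check; the certificate path and the 24-hour window are taken from the log, but the code itself is illustrative, not minikube's actual implementation.

	// cert_check.go - illustrative sketch of what `openssl x509 -noout -checkend 86400` verifies:
	// the certificate must still be valid 24 hours from now.
	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"os"
		"time"
	)

	func main() {
		data, err := os.ReadFile("/var/lib/minikube/certs/apiserver-kubelet-client.crt")
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		block, _ := pem.Decode(data)
		if block == nil {
			fmt.Fprintln(os.Stderr, "no PEM block found")
			os.Exit(1)
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		// -checkend 86400: fail if the cert would already be expired 86400 seconds from now.
		if time.Now().Add(24 * time.Hour).After(cert.NotAfter) {
			fmt.Println("certificate will expire within 24h")
			os.Exit(1)
		}
		fmt.Println("certificate valid for at least 24h")
	}
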
	I0731 18:08:37.781699   73479 kubeadm.go:392] StartCluster: {Name:no-preload-673754 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0-beta.0 ClusterName:no-preload-673754 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.126 Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0731 18:08:37.781771   73479 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0731 18:08:37.781833   73479 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0731 18:08:37.825614   73479 cri.go:89] found id: ""
	I0731 18:08:37.825685   73479 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0731 18:08:37.835584   73479 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0731 18:08:37.835604   73479 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0731 18:08:37.835659   73479 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0731 18:08:37.844529   73479 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0731 18:08:37.845534   73479 kubeconfig.go:125] found "no-preload-673754" server: "https://192.168.61.126:8443"
	I0731 18:08:37.847698   73479 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0731 18:08:37.856360   73479 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.61.126
	I0731 18:08:37.856386   73479 kubeadm.go:1160] stopping kube-system containers ...
	I0731 18:08:37.856396   73479 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0731 18:08:37.856440   73479 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0731 18:08:37.894614   73479 cri.go:89] found id: ""
	I0731 18:08:37.894689   73479 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0731 18:08:37.910921   73479 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0731 18:08:37.919796   73479 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0731 18:08:37.919814   73479 kubeadm.go:157] found existing configuration files:
	
	I0731 18:08:37.919859   73479 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0731 18:08:37.928562   73479 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0731 18:08:37.928617   73479 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0731 18:08:37.937099   73479 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0731 18:08:37.945298   73479 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0731 18:08:37.945378   73479 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0731 18:08:37.953976   73479 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0731 18:08:37.962069   73479 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0731 18:08:37.962119   73479 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0731 18:08:37.970719   73479 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0731 18:08:37.979265   73479 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0731 18:08:37.979318   73479 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0731 18:08:37.988286   73479 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0731 18:08:37.997742   73479 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0731 18:08:38.105503   73479 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0731 18:08:39.403672   73479 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.298131314s)
	I0731 18:08:39.403710   73479 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0731 18:08:39.609739   73479 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0731 18:08:39.677484   73479 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0731 18:08:39.773387   73479 api_server.go:52] waiting for apiserver process to appear ...
	I0731 18:08:39.773469   73479 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:08:40.274185   73479 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:08:40.774562   73479 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:08:40.792346   73479 api_server.go:72] duration metric: took 1.018961231s to wait for apiserver process to appear ...
	I0731 18:08:40.792368   73479 api_server.go:88] waiting for apiserver healthz status ...
	I0731 18:08:40.792384   73479 api_server.go:253] Checking apiserver healthz at https://192.168.61.126:8443/healthz ...
	I0731 18:08:41.271890   73800 pod_ready.go:102] pod "metrics-server-569cc877fc-fzxrw" in "kube-system" namespace has status "Ready":"False"
	I0731 18:08:43.771546   73800 pod_ready.go:102] pod "metrics-server-569cc877fc-fzxrw" in "kube-system" namespace has status "Ready":"False"
	I0731 18:08:43.476911   73479 api_server.go:279] https://192.168.61.126:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0731 18:08:43.476938   73479 api_server.go:103] status: https://192.168.61.126:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0731 18:08:43.476952   73479 api_server.go:253] Checking apiserver healthz at https://192.168.61.126:8443/healthz ...
	I0731 18:08:43.536762   73479 api_server.go:279] https://192.168.61.126:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0731 18:08:43.536794   73479 api_server.go:103] status: https://192.168.61.126:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0731 18:08:43.793157   73479 api_server.go:253] Checking apiserver healthz at https://192.168.61.126:8443/healthz ...
	I0731 18:08:43.798895   73479 api_server.go:279] https://192.168.61.126:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0731 18:08:43.798924   73479 api_server.go:103] status: https://192.168.61.126:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0731 18:08:44.292527   73479 api_server.go:253] Checking apiserver healthz at https://192.168.61.126:8443/healthz ...
	I0731 18:08:44.300596   73479 api_server.go:279] https://192.168.61.126:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0731 18:08:44.300632   73479 api_server.go:103] status: https://192.168.61.126:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0731 18:08:44.793206   73479 api_server.go:253] Checking apiserver healthz at https://192.168.61.126:8443/healthz ...
	I0731 18:08:44.797982   73479 api_server.go:279] https://192.168.61.126:8443/healthz returned 200:
	ok
	I0731 18:08:44.806150   73479 api_server.go:141] control plane version: v1.31.0-beta.0
	I0731 18:08:44.806172   73479 api_server.go:131] duration metric: took 4.013797537s to wait for apiserver health ...
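
	The api_server.go wait loop above polls https://192.168.61.126:8443/healthz until it answers 200, tolerating the interim 403 and 500 responses while the post-start hooks finish. A rough, self-contained sketch of such a poll follows; the URL, the 4-minute budget, and InsecureSkipVerify are assumptions made for the example, not minikube's code.

	// healthz_poll.go - illustrative sketch: poll the apiserver /healthz endpoint until it returns 200.
	package main

	import (
		"crypto/tls"
		"fmt"
		"net/http"
		"time"
	)

	func main() {
		client := &http.Client{
			Timeout:   5 * time.Second,
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		url := "https://192.168.61.126:8443/healthz"
		deadline := time.Now().Add(4 * time.Minute)
		for time.Now().Before(deadline) {
			resp, err := client.Get(url)
			if err == nil {
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					fmt.Println("apiserver healthy")
					return
				}
				// 403 (anonymous user) and 500 (post-start hooks pending) are expected early on.
				fmt.Printf("healthz returned %d, retrying\n", resp.StatusCode)
			}
			time.Sleep(500 * time.Millisecond)
		}
		fmt.Println("timed out waiting for apiserver health")
	}
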
	I0731 18:08:44.806183   73479 cni.go:84] Creating CNI manager for ""
	I0731 18:08:44.806191   73479 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0731 18:08:44.807774   73479 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0731 18:08:40.478967   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:08:40.978610   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:08:41.479192   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:08:41.978737   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:08:42.479051   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:08:42.978274   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:08:43.478957   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:08:43.978973   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:08:44.478269   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:08:44.978737   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:08:42.778330   73696 pod_ready.go:102] pod "metrics-server-569cc877fc-64hp4" in "kube-system" namespace has status "Ready":"False"
	I0731 18:08:44.779163   73696 pod_ready.go:102] pod "metrics-server-569cc877fc-64hp4" in "kube-system" namespace has status "Ready":"False"
	I0731 18:08:44.809068   73479 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0731 18:08:44.823284   73479 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0731 18:08:44.878894   73479 system_pods.go:43] waiting for kube-system pods to appear ...
	I0731 18:08:44.892969   73479 system_pods.go:59] 8 kube-system pods found
	I0731 18:08:44.893020   73479 system_pods.go:61] "coredns-5cfdc65f69-k7clq" [e4be77b6-aa7c-45e2-90a1-6a8264fd5101] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0731 18:08:44.893031   73479 system_pods.go:61] "etcd-no-preload-673754" [d64e9194-dd33-4a28-b8c3-d25462f1dae8] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0731 18:08:44.893042   73479 system_pods.go:61] "kube-apiserver-no-preload-673754" [4497614d-51de-4a5f-93dc-1397446fe4c8] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0731 18:08:44.893055   73479 system_pods.go:61] "kube-controller-manager-no-preload-673754" [fe7f714e-d778-49e7-b4ef-44e6efb5bc33] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0731 18:08:44.893067   73479 system_pods.go:61] "kube-proxy-hqxh6" [4fb623c0-4be3-421c-85d6-1d76a90b874f] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0731 18:08:44.893078   73479 system_pods.go:61] "kube-scheduler-no-preload-673754" [558e9b78-a6a9-46b8-9db8-004126359c74] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0731 18:08:44.893088   73479 system_pods.go:61] "metrics-server-78fcd8795b-27pkr" [a63a156b-0446-4bd9-8619-de75edaeb481] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0731 18:08:44.893098   73479 system_pods.go:61] "storage-provisioner" [a9f0ab70-18f4-4fac-b858-a9177077fe29] Running
	I0731 18:08:44.893109   73479 system_pods.go:74] duration metric: took 14.191984ms to wait for pod list to return data ...
	I0731 18:08:44.893120   73479 node_conditions.go:102] verifying NodePressure condition ...
	I0731 18:08:44.908236   73479 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0731 18:08:44.908270   73479 node_conditions.go:123] node cpu capacity is 2
	I0731 18:08:44.908283   73479 node_conditions.go:105] duration metric: took 15.154491ms to run NodePressure ...
	I0731 18:08:44.908307   73479 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0731 18:08:45.248571   73479 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0731 18:08:45.252305   73479 kubeadm.go:739] kubelet initialised
	I0731 18:08:45.252332   73479 kubeadm.go:740] duration metric: took 3.734022ms waiting for restarted kubelet to initialise ...
	I0731 18:08:45.252342   73479 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0731 18:08:45.256748   73479 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5cfdc65f69-k7clq" in "kube-system" namespace to be "Ready" ...
	I0731 18:08:45.261130   73479 pod_ready.go:97] node "no-preload-673754" hosting pod "coredns-5cfdc65f69-k7clq" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-673754" has status "Ready":"False"
	I0731 18:08:45.261149   73479 pod_ready.go:81] duration metric: took 4.373068ms for pod "coredns-5cfdc65f69-k7clq" in "kube-system" namespace to be "Ready" ...
	E0731 18:08:45.261157   73479 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-673754" hosting pod "coredns-5cfdc65f69-k7clq" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-673754" has status "Ready":"False"
	I0731 18:08:45.261162   73479 pod_ready.go:78] waiting up to 4m0s for pod "etcd-no-preload-673754" in "kube-system" namespace to be "Ready" ...
	I0731 18:08:45.265115   73479 pod_ready.go:97] node "no-preload-673754" hosting pod "etcd-no-preload-673754" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-673754" has status "Ready":"False"
	I0731 18:08:45.265135   73479 pod_ready.go:81] duration metric: took 3.965586ms for pod "etcd-no-preload-673754" in "kube-system" namespace to be "Ready" ...
	E0731 18:08:45.265142   73479 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-673754" hosting pod "etcd-no-preload-673754" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-673754" has status "Ready":"False"
	I0731 18:08:45.265147   73479 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-no-preload-673754" in "kube-system" namespace to be "Ready" ...
	I0731 18:08:45.269566   73479 pod_ready.go:97] node "no-preload-673754" hosting pod "kube-apiserver-no-preload-673754" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-673754" has status "Ready":"False"
	I0731 18:08:45.269585   73479 pod_ready.go:81] duration metric: took 4.431367ms for pod "kube-apiserver-no-preload-673754" in "kube-system" namespace to be "Ready" ...
	E0731 18:08:45.269595   73479 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-673754" hosting pod "kube-apiserver-no-preload-673754" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-673754" has status "Ready":"False"
	I0731 18:08:45.269603   73479 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-no-preload-673754" in "kube-system" namespace to be "Ready" ...
	I0731 18:08:45.281026   73479 pod_ready.go:97] node "no-preload-673754" hosting pod "kube-controller-manager-no-preload-673754" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-673754" has status "Ready":"False"
	I0731 18:08:45.281048   73479 pod_ready.go:81] duration metric: took 11.435327ms for pod "kube-controller-manager-no-preload-673754" in "kube-system" namespace to be "Ready" ...
	E0731 18:08:45.281057   73479 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-673754" hosting pod "kube-controller-manager-no-preload-673754" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-673754" has status "Ready":"False"
	I0731 18:08:45.281065   73479 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-hqxh6" in "kube-system" namespace to be "Ready" ...
	I0731 18:08:45.684313   73479 pod_ready.go:97] node "no-preload-673754" hosting pod "kube-proxy-hqxh6" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-673754" has status "Ready":"False"
	I0731 18:08:45.684347   73479 pod_ready.go:81] duration metric: took 403.272559ms for pod "kube-proxy-hqxh6" in "kube-system" namespace to be "Ready" ...
	E0731 18:08:45.684356   73479 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-673754" hosting pod "kube-proxy-hqxh6" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-673754" has status "Ready":"False"
	I0731 18:08:45.684362   73479 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-no-preload-673754" in "kube-system" namespace to be "Ready" ...
	I0731 18:08:46.082388   73479 pod_ready.go:97] node "no-preload-673754" hosting pod "kube-scheduler-no-preload-673754" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-673754" has status "Ready":"False"
	I0731 18:08:46.082419   73479 pod_ready.go:81] duration metric: took 398.048808ms for pod "kube-scheduler-no-preload-673754" in "kube-system" namespace to be "Ready" ...
	E0731 18:08:46.082432   73479 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-673754" hosting pod "kube-scheduler-no-preload-673754" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-673754" has status "Ready":"False"
	I0731 18:08:46.082442   73479 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-78fcd8795b-27pkr" in "kube-system" namespace to be "Ready" ...
	I0731 18:08:46.482445   73479 pod_ready.go:97] node "no-preload-673754" hosting pod "metrics-server-78fcd8795b-27pkr" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-673754" has status "Ready":"False"
	I0731 18:08:46.482472   73479 pod_ready.go:81] duration metric: took 400.02111ms for pod "metrics-server-78fcd8795b-27pkr" in "kube-system" namespace to be "Ready" ...
	E0731 18:08:46.482486   73479 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-673754" hosting pod "metrics-server-78fcd8795b-27pkr" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-673754" has status "Ready":"False"
	I0731 18:08:46.482493   73479 pod_ready.go:38] duration metric: took 1.230141723s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
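
	The pod_ready.go waits above repeatedly fetch each system-critical pod and inspect its Ready condition. A hypothetical client-go sketch of that pattern is shown below, reusing the kubeconfig path and pod name that appear in the log; the timeout and polling interval are invented for the example and this is not the test harness's actual code.

	// pod_ready_wait.go - illustrative sketch: poll a kube-system pod until its Ready condition is True.
	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/19349-8084/kubeconfig")
		if err != nil {
			panic(err)
		}
		client := kubernetes.NewForConfigOrDie(cfg)

		deadline := time.Now().Add(4 * time.Minute)
		for time.Now().Before(deadline) {
			pod, err := client.CoreV1().Pods("kube-system").Get(context.TODO(), "coredns-5cfdc65f69-k7clq", metav1.GetOptions{})
			if err == nil {
				for _, c := range pod.Status.Conditions {
					if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
						fmt.Println("pod is Ready")
						return
					}
				}
			}
			time.Sleep(2 * time.Second)
		}
		fmt.Println("timed out waiting for pod to become Ready")
	}
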
	I0731 18:08:46.482509   73479 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0731 18:08:46.495481   73479 ops.go:34] apiserver oom_adj: -16
	I0731 18:08:46.495502   73479 kubeadm.go:597] duration metric: took 8.65989212s to restartPrimaryControlPlane
	I0731 18:08:46.495513   73479 kubeadm.go:394] duration metric: took 8.71382049s to StartCluster
	I0731 18:08:46.495533   73479 settings.go:142] acquiring lock: {Name:mk8ac8269296bf0b887a00ff74caaad180005f89 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 18:08:46.495615   73479 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19349-8084/kubeconfig
	I0731 18:08:46.497426   73479 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19349-8084/kubeconfig: {Name:mkf2340bc1ada3c683f132aca37228d7588e1fdb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 18:08:46.497742   73479 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.61.126 Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0731 18:08:46.497816   73479 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0731 18:08:46.497911   73479 addons.go:69] Setting storage-provisioner=true in profile "no-preload-673754"
	I0731 18:08:46.497929   73479 addons.go:69] Setting default-storageclass=true in profile "no-preload-673754"
	I0731 18:08:46.497956   73479 addons.go:69] Setting metrics-server=true in profile "no-preload-673754"
	I0731 18:08:46.497973   73479 config.go:182] Loaded profile config "no-preload-673754": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0-beta.0
	I0731 18:08:46.497979   73479 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-673754"
	I0731 18:08:46.497988   73479 addons.go:234] Setting addon metrics-server=true in "no-preload-673754"
	W0731 18:08:46.498008   73479 addons.go:243] addon metrics-server should already be in state true
	I0731 18:08:46.497946   73479 addons.go:234] Setting addon storage-provisioner=true in "no-preload-673754"
	I0731 18:08:46.498056   73479 host.go:66] Checking if "no-preload-673754" exists ...
	W0731 18:08:46.498064   73479 addons.go:243] addon storage-provisioner should already be in state true
	I0731 18:08:46.498109   73479 host.go:66] Checking if "no-preload-673754" exists ...
	I0731 18:08:46.498315   73479 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 18:08:46.498315   73479 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 18:08:46.498333   73479 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 18:08:46.498340   73479 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 18:08:46.498448   73479 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 18:08:46.498470   73479 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 18:08:46.501144   73479 out.go:177] * Verifying Kubernetes components...
	I0731 18:08:46.502755   73479 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0731 18:08:46.514922   73479 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46797
	I0731 18:08:46.514923   73479 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44241
	I0731 18:08:46.515418   73479 main.go:141] libmachine: () Calling .GetVersion
	I0731 18:08:46.515618   73479 main.go:141] libmachine: () Calling .GetVersion
	I0731 18:08:46.515928   73479 main.go:141] libmachine: Using API Version  1
	I0731 18:08:46.515950   73479 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 18:08:46.516066   73479 main.go:141] libmachine: Using API Version  1
	I0731 18:08:46.516089   73479 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 18:08:46.516370   73479 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40219
	I0731 18:08:46.516440   73479 main.go:141] libmachine: () Calling .GetMachineName
	I0731 18:08:46.516663   73479 main.go:141] libmachine: () Calling .GetMachineName
	I0731 18:08:46.516809   73479 main.go:141] libmachine: () Calling .GetVersion
	I0731 18:08:46.516811   73479 main.go:141] libmachine: (no-preload-673754) Calling .GetState
	I0731 18:08:46.517213   73479 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 18:08:46.517247   73479 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 18:08:46.517280   73479 main.go:141] libmachine: Using API Version  1
	I0731 18:08:46.517302   73479 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 18:08:46.517618   73479 main.go:141] libmachine: () Calling .GetMachineName
	I0731 18:08:46.518191   73479 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 18:08:46.518220   73479 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 18:08:46.520511   73479 addons.go:234] Setting addon default-storageclass=true in "no-preload-673754"
	W0731 18:08:46.520536   73479 addons.go:243] addon default-storageclass should already be in state true
	I0731 18:08:46.520566   73479 host.go:66] Checking if "no-preload-673754" exists ...
	I0731 18:08:46.520917   73479 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 18:08:46.520968   73479 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 18:08:46.533349   73479 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38669
	I0731 18:08:46.533802   73479 main.go:141] libmachine: () Calling .GetVersion
	I0731 18:08:46.534250   73479 main.go:141] libmachine: Using API Version  1
	I0731 18:08:46.534272   73479 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 18:08:46.534582   73479 main.go:141] libmachine: () Calling .GetMachineName
	I0731 18:08:46.534720   73479 main.go:141] libmachine: (no-preload-673754) Calling .GetState
	I0731 18:08:46.535556   73479 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46055
	I0731 18:08:46.535979   73479 main.go:141] libmachine: () Calling .GetVersion
	I0731 18:08:46.536648   73479 main.go:141] libmachine: Using API Version  1
	I0731 18:08:46.536667   73479 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 18:08:46.537080   73479 main.go:141] libmachine: () Calling .GetMachineName
	I0731 18:08:46.537331   73479 main.go:141] libmachine: (no-preload-673754) Calling .DriverName
	I0731 18:08:46.537398   73479 main.go:141] libmachine: (no-preload-673754) Calling .GetState
	I0731 18:08:46.538365   73479 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39435
	I0731 18:08:46.538929   73479 main.go:141] libmachine: () Calling .GetVersion
	I0731 18:08:46.539194   73479 main.go:141] libmachine: (no-preload-673754) Calling .DriverName
	I0731 18:08:46.539401   73479 main.go:141] libmachine: Using API Version  1
	I0731 18:08:46.539419   73479 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 18:08:46.539766   73479 main.go:141] libmachine: () Calling .GetMachineName
	I0731 18:08:46.540360   73479 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0731 18:08:46.540447   73479 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 18:08:46.540801   73479 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 18:08:46.541139   73479 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0731 18:08:46.541916   73479 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0731 18:08:46.541932   73479 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0731 18:08:46.541952   73479 main.go:141] libmachine: (no-preload-673754) Calling .GetSSHHostname
	I0731 18:08:46.542506   73479 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0731 18:08:46.542524   73479 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0731 18:08:46.542541   73479 main.go:141] libmachine: (no-preload-673754) Calling .GetSSHHostname
	I0731 18:08:46.545293   73479 main.go:141] libmachine: (no-preload-673754) DBG | domain no-preload-673754 has defined MAC address 52:54:00:5a:ec:78 in network mk-no-preload-673754
	I0731 18:08:46.545631   73479 main.go:141] libmachine: (no-preload-673754) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:ec:78", ip: ""} in network mk-no-preload-673754: {Iface:virbr2 ExpiryTime:2024-07-31 19:08:12 +0000 UTC Type:0 Mac:52:54:00:5a:ec:78 Iaid: IPaddr:192.168.61.126 Prefix:24 Hostname:no-preload-673754 Clientid:01:52:54:00:5a:ec:78}
	I0731 18:08:46.545759   73479 main.go:141] libmachine: (no-preload-673754) DBG | domain no-preload-673754 has defined IP address 192.168.61.126 and MAC address 52:54:00:5a:ec:78 in network mk-no-preload-673754
	I0731 18:08:46.545829   73479 main.go:141] libmachine: (no-preload-673754) Calling .GetSSHPort
	I0731 18:08:46.545985   73479 main.go:141] libmachine: (no-preload-673754) Calling .GetSSHKeyPath
	I0731 18:08:46.546116   73479 main.go:141] libmachine: (no-preload-673754) Calling .GetSSHUsername
	I0731 18:08:46.546256   73479 sshutil.go:53] new ssh client: &{IP:192.168.61.126 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19349-8084/.minikube/machines/no-preload-673754/id_rsa Username:docker}
	I0731 18:08:46.546384   73479 main.go:141] libmachine: (no-preload-673754) DBG | domain no-preload-673754 has defined MAC address 52:54:00:5a:ec:78 in network mk-no-preload-673754
	I0731 18:08:46.546888   73479 main.go:141] libmachine: (no-preload-673754) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:ec:78", ip: ""} in network mk-no-preload-673754: {Iface:virbr2 ExpiryTime:2024-07-31 19:08:12 +0000 UTC Type:0 Mac:52:54:00:5a:ec:78 Iaid: IPaddr:192.168.61.126 Prefix:24 Hostname:no-preload-673754 Clientid:01:52:54:00:5a:ec:78}
	I0731 18:08:46.546907   73479 main.go:141] libmachine: (no-preload-673754) DBG | domain no-preload-673754 has defined IP address 192.168.61.126 and MAC address 52:54:00:5a:ec:78 in network mk-no-preload-673754
	I0731 18:08:46.546924   73479 main.go:141] libmachine: (no-preload-673754) Calling .GetSSHPort
	I0731 18:08:46.547090   73479 main.go:141] libmachine: (no-preload-673754) Calling .GetSSHKeyPath
	I0731 18:08:46.547256   73479 main.go:141] libmachine: (no-preload-673754) Calling .GetSSHUsername
	I0731 18:08:46.547434   73479 sshutil.go:53] new ssh client: &{IP:192.168.61.126 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19349-8084/.minikube/machines/no-preload-673754/id_rsa Username:docker}
	I0731 18:08:46.570759   73479 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40485
	I0731 18:08:46.571222   73479 main.go:141] libmachine: () Calling .GetVersion
	I0731 18:08:46.571668   73479 main.go:141] libmachine: Using API Version  1
	I0731 18:08:46.571688   73479 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 18:08:46.572207   73479 main.go:141] libmachine: () Calling .GetMachineName
	I0731 18:08:46.572367   73479 main.go:141] libmachine: (no-preload-673754) Calling .GetState
	I0731 18:08:46.574368   73479 main.go:141] libmachine: (no-preload-673754) Calling .DriverName
	I0731 18:08:46.574582   73479 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0731 18:08:46.574607   73479 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0731 18:08:46.574627   73479 main.go:141] libmachine: (no-preload-673754) Calling .GetSSHHostname
	I0731 18:08:46.577768   73479 main.go:141] libmachine: (no-preload-673754) DBG | domain no-preload-673754 has defined MAC address 52:54:00:5a:ec:78 in network mk-no-preload-673754
	I0731 18:08:46.578542   73479 main.go:141] libmachine: (no-preload-673754) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:ec:78", ip: ""} in network mk-no-preload-673754: {Iface:virbr2 ExpiryTime:2024-07-31 19:08:12 +0000 UTC Type:0 Mac:52:54:00:5a:ec:78 Iaid: IPaddr:192.168.61.126 Prefix:24 Hostname:no-preload-673754 Clientid:01:52:54:00:5a:ec:78}
	I0731 18:08:46.578567   73479 main.go:141] libmachine: (no-preload-673754) DBG | domain no-preload-673754 has defined IP address 192.168.61.126 and MAC address 52:54:00:5a:ec:78 in network mk-no-preload-673754
	I0731 18:08:46.578741   73479 main.go:141] libmachine: (no-preload-673754) Calling .GetSSHPort
	I0731 18:08:46.578911   73479 main.go:141] libmachine: (no-preload-673754) Calling .GetSSHKeyPath
	I0731 18:08:46.579047   73479 main.go:141] libmachine: (no-preload-673754) Calling .GetSSHUsername
	I0731 18:08:46.579459   73479 sshutil.go:53] new ssh client: &{IP:192.168.61.126 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19349-8084/.minikube/machines/no-preload-673754/id_rsa Username:docker}
	I0731 18:08:46.700752   73479 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0731 18:08:46.720967   73479 node_ready.go:35] waiting up to 6m0s for node "no-preload-673754" to be "Ready" ...
	I0731 18:08:46.798188   73479 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0731 18:08:46.802534   73479 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0731 18:08:46.802564   73479 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0731 18:08:46.828038   73479 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0731 18:08:46.859309   73479 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0731 18:08:46.859337   73479 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0731 18:08:46.921507   73479 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0731 18:08:46.921536   73479 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0731 18:08:46.958759   73479 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0731 18:08:48.106542   73479 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.278462071s)
	I0731 18:08:48.106599   73479 main.go:141] libmachine: Making call to close driver server
	I0731 18:08:48.106608   73479 main.go:141] libmachine: (no-preload-673754) Calling .Close
	I0731 18:08:48.107151   73479 main.go:141] libmachine: Successfully made call to close driver server
	I0731 18:08:48.107177   73479 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 18:08:48.107187   73479 main.go:141] libmachine: Making call to close driver server
	I0731 18:08:48.107196   73479 main.go:141] libmachine: (no-preload-673754) Calling .Close
	I0731 18:08:48.107601   73479 main.go:141] libmachine: (no-preload-673754) DBG | Closing plugin on server side
	I0731 18:08:48.107604   73479 main.go:141] libmachine: Successfully made call to close driver server
	I0731 18:08:48.107631   73479 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 18:08:48.107831   73479 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.309610972s)
	I0731 18:08:48.107872   73479 main.go:141] libmachine: Making call to close driver server
	I0731 18:08:48.107882   73479 main.go:141] libmachine: (no-preload-673754) Calling .Close
	I0731 18:08:48.108105   73479 main.go:141] libmachine: Successfully made call to close driver server
	I0731 18:08:48.108119   73479 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 18:08:48.108138   73479 main.go:141] libmachine: Making call to close driver server
	I0731 18:08:48.108150   73479 main.go:141] libmachine: (no-preload-673754) Calling .Close
	I0731 18:08:48.108351   73479 main.go:141] libmachine: Successfully made call to close driver server
	I0731 18:08:48.108367   73479 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 18:08:48.118038   73479 main.go:141] libmachine: Making call to close driver server
	I0731 18:08:48.118055   73479 main.go:141] libmachine: (no-preload-673754) Calling .Close
	I0731 18:08:48.118329   73479 main.go:141] libmachine: Successfully made call to close driver server
	I0731 18:08:48.118349   73479 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 18:08:48.128563   73479 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.169765123s)
	I0731 18:08:48.128606   73479 main.go:141] libmachine: Making call to close driver server
	I0731 18:08:48.128619   73479 main.go:141] libmachine: (no-preload-673754) Calling .Close
	I0731 18:08:48.128901   73479 main.go:141] libmachine: Successfully made call to close driver server
	I0731 18:08:48.128915   73479 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 18:08:48.128924   73479 main.go:141] libmachine: Making call to close driver server
	I0731 18:08:48.128932   73479 main.go:141] libmachine: (no-preload-673754) Calling .Close
	I0731 18:08:48.129137   73479 main.go:141] libmachine: Successfully made call to close driver server
	I0731 18:08:48.129152   73479 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 18:08:48.129162   73479 addons.go:475] Verifying addon metrics-server=true in "no-preload-673754"
	I0731 18:08:48.129174   73479 main.go:141] libmachine: (no-preload-673754) DBG | Closing plugin on server side
	I0731 18:08:48.130887   73479 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0731 18:08:46.271648   73800 pod_ready.go:102] pod "metrics-server-569cc877fc-fzxrw" in "kube-system" namespace has status "Ready":"False"
	I0731 18:08:48.271754   73800 pod_ready.go:102] pod "metrics-server-569cc877fc-fzxrw" in "kube-system" namespace has status "Ready":"False"
	I0731 18:08:45.478411   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:08:45.978802   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:08:46.478407   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:08:46.978134   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:08:47.479125   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:08:47.978991   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:08:48.478597   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:08:48.978742   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:08:49.479320   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:08:49.978288   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:08:46.779263   73696 pod_ready.go:102] pod "metrics-server-569cc877fc-64hp4" in "kube-system" namespace has status "Ready":"False"
	I0731 18:08:48.779361   73696 pod_ready.go:102] pod "metrics-server-569cc877fc-64hp4" in "kube-system" namespace has status "Ready":"False"
	I0731 18:08:48.131964   73479 addons.go:510] duration metric: took 1.634151286s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I0731 18:08:48.725682   73479 node_ready.go:53] node "no-preload-673754" has status "Ready":"False"
	I0731 18:08:51.231081   73479 node_ready.go:53] node "no-preload-673754" has status "Ready":"False"
	I0731 18:08:50.771387   73800 pod_ready.go:102] pod "metrics-server-569cc877fc-fzxrw" in "kube-system" namespace has status "Ready":"False"
	I0731 18:08:52.771438   73800 pod_ready.go:102] pod "metrics-server-569cc877fc-fzxrw" in "kube-system" namespace has status "Ready":"False"
	I0731 18:08:50.478112   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:08:50.978272   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:08:51.478319   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:08:51.978880   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:08:52.479176   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:08:52.979001   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:08:53.478508   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:08:53.978517   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:08:54.478857   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:08:54.978290   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:08:51.278348   73696 pod_ready.go:102] pod "metrics-server-569cc877fc-64hp4" in "kube-system" namespace has status "Ready":"False"
	I0731 18:08:53.278456   73696 pod_ready.go:102] pod "metrics-server-569cc877fc-64hp4" in "kube-system" namespace has status "Ready":"False"
	I0731 18:08:55.278495   73696 pod_ready.go:102] pod "metrics-server-569cc877fc-64hp4" in "kube-system" namespace has status "Ready":"False"
	I0731 18:08:53.725153   73479 node_ready.go:53] node "no-preload-673754" has status "Ready":"False"
	I0731 18:08:54.224475   73479 node_ready.go:49] node "no-preload-673754" has status "Ready":"True"
	I0731 18:08:54.224505   73479 node_ready.go:38] duration metric: took 7.503503116s for node "no-preload-673754" to be "Ready" ...
	I0731 18:08:54.224517   73479 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0731 18:08:54.231434   73479 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5cfdc65f69-k7clq" in "kube-system" namespace to be "Ready" ...
	I0731 18:08:56.237804   73479 pod_ready.go:102] pod "coredns-5cfdc65f69-k7clq" in "kube-system" namespace has status "Ready":"False"
	I0731 18:08:54.772597   73800 pod_ready.go:102] pod "metrics-server-569cc877fc-fzxrw" in "kube-system" namespace has status "Ready":"False"
	I0731 18:08:57.271778   73800 pod_ready.go:102] pod "metrics-server-569cc877fc-fzxrw" in "kube-system" namespace has status "Ready":"False"
	I0731 18:08:55.478727   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:08:55.978552   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:08:56.478246   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:08:56.978732   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:08:57.478262   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:08:57.978216   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:08:58.478212   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:08:58.978406   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:08:59.478270   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:08:59.978221   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:08:57.781459   73696 pod_ready.go:102] pod "metrics-server-569cc877fc-64hp4" in "kube-system" namespace has status "Ready":"False"
	I0731 18:09:00.278913   73696 pod_ready.go:102] pod "metrics-server-569cc877fc-64hp4" in "kube-system" namespace has status "Ready":"False"
	I0731 18:08:58.740148   73479 pod_ready.go:102] pod "coredns-5cfdc65f69-k7clq" in "kube-system" namespace has status "Ready":"False"
	I0731 18:09:01.237849   73479 pod_ready.go:92] pod "coredns-5cfdc65f69-k7clq" in "kube-system" namespace has status "Ready":"True"
	I0731 18:09:01.237874   73479 pod_ready.go:81] duration metric: took 7.00641308s for pod "coredns-5cfdc65f69-k7clq" in "kube-system" namespace to be "Ready" ...
	I0731 18:09:01.237887   73479 pod_ready.go:78] waiting up to 6m0s for pod "etcd-no-preload-673754" in "kube-system" namespace to be "Ready" ...
	I0731 18:09:01.242105   73479 pod_ready.go:92] pod "etcd-no-preload-673754" in "kube-system" namespace has status "Ready":"True"
	I0731 18:09:01.242122   73479 pod_ready.go:81] duration metric: took 4.229266ms for pod "etcd-no-preload-673754" in "kube-system" namespace to be "Ready" ...
	I0731 18:09:01.242133   73479 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-no-preload-673754" in "kube-system" namespace to be "Ready" ...
	I0731 18:09:01.246652   73479 pod_ready.go:92] pod "kube-apiserver-no-preload-673754" in "kube-system" namespace has status "Ready":"True"
	I0731 18:09:01.246674   73479 pod_ready.go:81] duration metric: took 4.534937ms for pod "kube-apiserver-no-preload-673754" in "kube-system" namespace to be "Ready" ...
	I0731 18:09:01.246686   73479 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-no-preload-673754" in "kube-system" namespace to be "Ready" ...
	I0731 18:09:01.251284   73479 pod_ready.go:92] pod "kube-controller-manager-no-preload-673754" in "kube-system" namespace has status "Ready":"True"
	I0731 18:09:01.251302   73479 pod_ready.go:81] duration metric: took 4.608584ms for pod "kube-controller-manager-no-preload-673754" in "kube-system" namespace to be "Ready" ...
	I0731 18:09:01.251321   73479 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-hqxh6" in "kube-system" namespace to be "Ready" ...
	I0731 18:09:01.255030   73479 pod_ready.go:92] pod "kube-proxy-hqxh6" in "kube-system" namespace has status "Ready":"True"
	I0731 18:09:01.255045   73479 pod_ready.go:81] duration metric: took 3.718917ms for pod "kube-proxy-hqxh6" in "kube-system" namespace to be "Ready" ...
	I0731 18:09:01.255052   73479 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-no-preload-673754" in "kube-system" namespace to be "Ready" ...
	I0731 18:09:01.636799   73479 pod_ready.go:92] pod "kube-scheduler-no-preload-673754" in "kube-system" namespace has status "Ready":"True"
	I0731 18:09:01.636826   73479 pod_ready.go:81] duration metric: took 381.767881ms for pod "kube-scheduler-no-preload-673754" in "kube-system" namespace to be "Ready" ...
	I0731 18:09:01.636835   73479 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-78fcd8795b-27pkr" in "kube-system" namespace to be "Ready" ...
	I0731 18:08:59.771686   73800 pod_ready.go:102] pod "metrics-server-569cc877fc-fzxrw" in "kube-system" namespace has status "Ready":"False"
	I0731 18:09:02.271396   73800 pod_ready.go:102] pod "metrics-server-569cc877fc-fzxrw" in "kube-system" namespace has status "Ready":"False"
	I0731 18:09:00.478785   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:09:00.978350   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:09:01.478635   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:09:01.978192   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:09:02.478480   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:09:02.979021   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:09:03.478366   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:09:03.978984   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:09:04.479143   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:09:04.978913   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:09:02.279613   73696 pod_ready.go:102] pod "metrics-server-569cc877fc-64hp4" in "kube-system" namespace has status "Ready":"False"
	I0731 18:09:04.778482   73696 pod_ready.go:102] pod "metrics-server-569cc877fc-64hp4" in "kube-system" namespace has status "Ready":"False"
	I0731 18:09:03.642978   73479 pod_ready.go:102] pod "metrics-server-78fcd8795b-27pkr" in "kube-system" namespace has status "Ready":"False"
	I0731 18:09:05.644941   73479 pod_ready.go:102] pod "metrics-server-78fcd8795b-27pkr" in "kube-system" namespace has status "Ready":"False"
	I0731 18:09:04.771938   73800 pod_ready.go:102] pod "metrics-server-569cc877fc-fzxrw" in "kube-system" namespace has status "Ready":"False"
	I0731 18:09:07.271165   73800 pod_ready.go:102] pod "metrics-server-569cc877fc-fzxrw" in "kube-system" namespace has status "Ready":"False"
	I0731 18:09:05.478608   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:09:05.978345   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:09:06.478435   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:09:06.978551   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:09:07.478131   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:09:07.978354   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:09:08.478977   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:09:08.979122   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:09:09.478279   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:09:09.978350   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:09:06.780364   73696 pod_ready.go:102] pod "metrics-server-569cc877fc-64hp4" in "kube-system" namespace has status "Ready":"False"
	I0731 18:09:09.278573   73696 pod_ready.go:102] pod "metrics-server-569cc877fc-64hp4" in "kube-system" namespace has status "Ready":"False"
	I0731 18:09:08.142974   73479 pod_ready.go:102] pod "metrics-server-78fcd8795b-27pkr" in "kube-system" namespace has status "Ready":"False"
	I0731 18:09:10.643136   73479 pod_ready.go:102] pod "metrics-server-78fcd8795b-27pkr" in "kube-system" namespace has status "Ready":"False"
	I0731 18:09:09.771950   73800 pod_ready.go:102] pod "metrics-server-569cc877fc-fzxrw" in "kube-system" namespace has status "Ready":"False"
	I0731 18:09:11.772464   73800 pod_ready.go:102] pod "metrics-server-569cc877fc-fzxrw" in "kube-system" namespace has status "Ready":"False"
	I0731 18:09:13.773164   73800 pod_ready.go:102] pod "metrics-server-569cc877fc-fzxrw" in "kube-system" namespace has status "Ready":"False"
	I0731 18:09:10.479086   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 18:09:10.479175   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 18:09:10.516364   74203 cri.go:89] found id: ""
	I0731 18:09:10.516389   74203 logs.go:276] 0 containers: []
	W0731 18:09:10.516405   74203 logs.go:278] No container was found matching "kube-apiserver"
	I0731 18:09:10.516411   74203 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 18:09:10.516464   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 18:09:10.549398   74203 cri.go:89] found id: ""
	I0731 18:09:10.549422   74203 logs.go:276] 0 containers: []
	W0731 18:09:10.549433   74203 logs.go:278] No container was found matching "etcd"
	I0731 18:09:10.549440   74203 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 18:09:10.549503   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 18:09:10.584290   74203 cri.go:89] found id: ""
	I0731 18:09:10.584314   74203 logs.go:276] 0 containers: []
	W0731 18:09:10.584322   74203 logs.go:278] No container was found matching "coredns"
	I0731 18:09:10.584327   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 18:09:10.584381   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 18:09:10.615832   74203 cri.go:89] found id: ""
	I0731 18:09:10.615860   74203 logs.go:276] 0 containers: []
	W0731 18:09:10.615871   74203 logs.go:278] No container was found matching "kube-scheduler"
	I0731 18:09:10.615878   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 18:09:10.615941   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 18:09:10.647597   74203 cri.go:89] found id: ""
	I0731 18:09:10.647617   74203 logs.go:276] 0 containers: []
	W0731 18:09:10.647624   74203 logs.go:278] No container was found matching "kube-proxy"
	I0731 18:09:10.647629   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 18:09:10.647686   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 18:09:10.680981   74203 cri.go:89] found id: ""
	I0731 18:09:10.681016   74203 logs.go:276] 0 containers: []
	W0731 18:09:10.681027   74203 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 18:09:10.681033   74203 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 18:09:10.681093   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 18:09:10.713798   74203 cri.go:89] found id: ""
	I0731 18:09:10.713839   74203 logs.go:276] 0 containers: []
	W0731 18:09:10.713851   74203 logs.go:278] No container was found matching "kindnet"
	I0731 18:09:10.713865   74203 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 18:09:10.713937   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 18:09:10.746378   74203 cri.go:89] found id: ""
	I0731 18:09:10.746405   74203 logs.go:276] 0 containers: []
	W0731 18:09:10.746413   74203 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 18:09:10.746423   74203 logs.go:123] Gathering logs for kubelet ...
	I0731 18:09:10.746439   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 18:09:10.799156   74203 logs.go:123] Gathering logs for dmesg ...
	I0731 18:09:10.799187   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 18:09:10.812388   74203 logs.go:123] Gathering logs for describe nodes ...
	I0731 18:09:10.812413   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 18:09:10.932251   74203 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 18:09:10.932271   74203 logs.go:123] Gathering logs for CRI-O ...
	I0731 18:09:10.932285   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 18:09:10.996810   74203 logs.go:123] Gathering logs for container status ...
	I0731 18:09:10.996840   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 18:09:13.533936   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:09:13.549194   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 18:09:13.549250   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 18:09:13.599350   74203 cri.go:89] found id: ""
	I0731 18:09:13.599389   74203 logs.go:276] 0 containers: []
	W0731 18:09:13.599400   74203 logs.go:278] No container was found matching "kube-apiserver"
	I0731 18:09:13.599407   74203 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 18:09:13.599466   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 18:09:13.651736   74203 cri.go:89] found id: ""
	I0731 18:09:13.651771   74203 logs.go:276] 0 containers: []
	W0731 18:09:13.651791   74203 logs.go:278] No container was found matching "etcd"
	I0731 18:09:13.651798   74203 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 18:09:13.651855   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 18:09:13.699804   74203 cri.go:89] found id: ""
	I0731 18:09:13.699832   74203 logs.go:276] 0 containers: []
	W0731 18:09:13.699841   74203 logs.go:278] No container was found matching "coredns"
	I0731 18:09:13.699846   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 18:09:13.699906   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 18:09:13.732760   74203 cri.go:89] found id: ""
	I0731 18:09:13.732781   74203 logs.go:276] 0 containers: []
	W0731 18:09:13.732788   74203 logs.go:278] No container was found matching "kube-scheduler"
	I0731 18:09:13.732794   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 18:09:13.732849   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 18:09:13.766865   74203 cri.go:89] found id: ""
	I0731 18:09:13.766892   74203 logs.go:276] 0 containers: []
	W0731 18:09:13.766902   74203 logs.go:278] No container was found matching "kube-proxy"
	I0731 18:09:13.766910   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 18:09:13.766964   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 18:09:13.804706   74203 cri.go:89] found id: ""
	I0731 18:09:13.804733   74203 logs.go:276] 0 containers: []
	W0731 18:09:13.804743   74203 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 18:09:13.804757   74203 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 18:09:13.804821   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 18:09:13.838432   74203 cri.go:89] found id: ""
	I0731 18:09:13.838461   74203 logs.go:276] 0 containers: []
	W0731 18:09:13.838472   74203 logs.go:278] No container was found matching "kindnet"
	I0731 18:09:13.838479   74203 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 18:09:13.838534   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 18:09:13.870455   74203 cri.go:89] found id: ""
	I0731 18:09:13.870480   74203 logs.go:276] 0 containers: []
	W0731 18:09:13.870490   74203 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 18:09:13.870498   74203 logs.go:123] Gathering logs for kubelet ...
	I0731 18:09:13.870510   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 18:09:13.922911   74203 logs.go:123] Gathering logs for dmesg ...
	I0731 18:09:13.922947   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 18:09:13.936075   74203 logs.go:123] Gathering logs for describe nodes ...
	I0731 18:09:13.936098   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 18:09:14.006766   74203 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 18:09:14.006790   74203 logs.go:123] Gathering logs for CRI-O ...
	I0731 18:09:14.006810   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 18:09:14.071066   74203 logs.go:123] Gathering logs for container status ...
	I0731 18:09:14.071100   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 18:09:11.278892   73696 pod_ready.go:102] pod "metrics-server-569cc877fc-64hp4" in "kube-system" namespace has status "Ready":"False"
	I0731 18:09:13.279644   73696 pod_ready.go:102] pod "metrics-server-569cc877fc-64hp4" in "kube-system" namespace has status "Ready":"False"
	I0731 18:09:15.280298   73696 pod_ready.go:102] pod "metrics-server-569cc877fc-64hp4" in "kube-system" namespace has status "Ready":"False"
	I0731 18:09:12.643341   73479 pod_ready.go:102] pod "metrics-server-78fcd8795b-27pkr" in "kube-system" namespace has status "Ready":"False"
	I0731 18:09:14.643636   73479 pod_ready.go:102] pod "metrics-server-78fcd8795b-27pkr" in "kube-system" namespace has status "Ready":"False"
	I0731 18:09:16.280976   73800 pod_ready.go:102] pod "metrics-server-569cc877fc-fzxrw" in "kube-system" namespace has status "Ready":"False"
	I0731 18:09:18.772338   73800 pod_ready.go:102] pod "metrics-server-569cc877fc-fzxrw" in "kube-system" namespace has status "Ready":"False"
	I0731 18:09:16.615212   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:09:16.627439   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 18:09:16.627499   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 18:09:16.660764   74203 cri.go:89] found id: ""
	I0731 18:09:16.660785   74203 logs.go:276] 0 containers: []
	W0731 18:09:16.660792   74203 logs.go:278] No container was found matching "kube-apiserver"
	I0731 18:09:16.660798   74203 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 18:09:16.660842   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 18:09:16.697154   74203 cri.go:89] found id: ""
	I0731 18:09:16.697182   74203 logs.go:276] 0 containers: []
	W0731 18:09:16.697196   74203 logs.go:278] No container was found matching "etcd"
	I0731 18:09:16.697201   74203 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 18:09:16.697259   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 18:09:16.730263   74203 cri.go:89] found id: ""
	I0731 18:09:16.730284   74203 logs.go:276] 0 containers: []
	W0731 18:09:16.730291   74203 logs.go:278] No container was found matching "coredns"
	I0731 18:09:16.730318   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 18:09:16.730369   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 18:09:16.765226   74203 cri.go:89] found id: ""
	I0731 18:09:16.765249   74203 logs.go:276] 0 containers: []
	W0731 18:09:16.765257   74203 logs.go:278] No container was found matching "kube-scheduler"
	I0731 18:09:16.765262   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 18:09:16.765336   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 18:09:16.800502   74203 cri.go:89] found id: ""
	I0731 18:09:16.800528   74203 logs.go:276] 0 containers: []
	W0731 18:09:16.800535   74203 logs.go:278] No container was found matching "kube-proxy"
	I0731 18:09:16.800541   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 18:09:16.800599   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 18:09:16.837391   74203 cri.go:89] found id: ""
	I0731 18:09:16.837418   74203 logs.go:276] 0 containers: []
	W0731 18:09:16.837427   74203 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 18:09:16.837435   74203 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 18:09:16.837490   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 18:09:16.867606   74203 cri.go:89] found id: ""
	I0731 18:09:16.867628   74203 logs.go:276] 0 containers: []
	W0731 18:09:16.867637   74203 logs.go:278] No container was found matching "kindnet"
	I0731 18:09:16.867642   74203 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 18:09:16.867696   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 18:09:16.901639   74203 cri.go:89] found id: ""
	I0731 18:09:16.901669   74203 logs.go:276] 0 containers: []
	W0731 18:09:16.901681   74203 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 18:09:16.901693   74203 logs.go:123] Gathering logs for kubelet ...
	I0731 18:09:16.901707   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 18:09:16.951692   74203 logs.go:123] Gathering logs for dmesg ...
	I0731 18:09:16.951729   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 18:09:16.965069   74203 logs.go:123] Gathering logs for describe nodes ...
	I0731 18:09:16.965101   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 18:09:17.040337   74203 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 18:09:17.040358   74203 logs.go:123] Gathering logs for CRI-O ...
	I0731 18:09:17.040371   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 18:09:17.115058   74203 logs.go:123] Gathering logs for container status ...
	I0731 18:09:17.115093   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 18:09:19.651538   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:09:19.663682   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 18:09:19.663739   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 18:09:19.697851   74203 cri.go:89] found id: ""
	I0731 18:09:19.697879   74203 logs.go:276] 0 containers: []
	W0731 18:09:19.697894   74203 logs.go:278] No container was found matching "kube-apiserver"
	I0731 18:09:19.697900   74203 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 18:09:19.697996   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 18:09:19.732745   74203 cri.go:89] found id: ""
	I0731 18:09:19.732772   74203 logs.go:276] 0 containers: []
	W0731 18:09:19.732783   74203 logs.go:278] No container was found matching "etcd"
	I0731 18:09:19.732790   74203 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 18:09:19.732855   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 18:09:19.763843   74203 cri.go:89] found id: ""
	I0731 18:09:19.763865   74203 logs.go:276] 0 containers: []
	W0731 18:09:19.763873   74203 logs.go:278] No container was found matching "coredns"
	I0731 18:09:19.763878   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 18:09:19.763934   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 18:09:19.797398   74203 cri.go:89] found id: ""
	I0731 18:09:19.797422   74203 logs.go:276] 0 containers: []
	W0731 18:09:19.797429   74203 logs.go:278] No container was found matching "kube-scheduler"
	I0731 18:09:19.797434   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 18:09:19.797504   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 18:09:19.833239   74203 cri.go:89] found id: ""
	I0731 18:09:19.833268   74203 logs.go:276] 0 containers: []
	W0731 18:09:19.833278   74203 logs.go:278] No container was found matching "kube-proxy"
	I0731 18:09:19.833284   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 18:09:19.833340   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 18:09:19.866135   74203 cri.go:89] found id: ""
	I0731 18:09:19.866163   74203 logs.go:276] 0 containers: []
	W0731 18:09:19.866173   74203 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 18:09:19.866181   74203 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 18:09:19.866242   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 18:09:19.900581   74203 cri.go:89] found id: ""
	I0731 18:09:19.900606   74203 logs.go:276] 0 containers: []
	W0731 18:09:19.900615   74203 logs.go:278] No container was found matching "kindnet"
	I0731 18:09:19.900621   74203 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 18:09:19.900720   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 18:09:19.936451   74203 cri.go:89] found id: ""
	I0731 18:09:19.936475   74203 logs.go:276] 0 containers: []
	W0731 18:09:19.936487   74203 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 18:09:19.936496   74203 logs.go:123] Gathering logs for kubelet ...
	I0731 18:09:19.936508   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 18:09:19.990522   74203 logs.go:123] Gathering logs for dmesg ...
	I0731 18:09:19.990559   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 18:09:20.003460   74203 logs.go:123] Gathering logs for describe nodes ...
	I0731 18:09:20.003487   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 18:09:20.070869   74203 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 18:09:20.070893   74203 logs.go:123] Gathering logs for CRI-O ...
	I0731 18:09:20.070912   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 18:09:20.148316   74203 logs.go:123] Gathering logs for container status ...
	I0731 18:09:20.148354   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 18:09:17.779144   73696 pod_ready.go:102] pod "metrics-server-569cc877fc-64hp4" in "kube-system" namespace has status "Ready":"False"
	I0731 18:09:19.781539   73696 pod_ready.go:102] pod "metrics-server-569cc877fc-64hp4" in "kube-system" namespace has status "Ready":"False"
	I0731 18:09:17.143894   73479 pod_ready.go:102] pod "metrics-server-78fcd8795b-27pkr" in "kube-system" namespace has status "Ready":"False"
	I0731 18:09:19.642139   73479 pod_ready.go:102] pod "metrics-server-78fcd8795b-27pkr" in "kube-system" namespace has status "Ready":"False"
	I0731 18:09:21.642234   73479 pod_ready.go:102] pod "metrics-server-78fcd8795b-27pkr" in "kube-system" namespace has status "Ready":"False"
	I0731 18:09:21.271074   73800 pod_ready.go:102] pod "metrics-server-569cc877fc-fzxrw" in "kube-system" namespace has status "Ready":"False"
	I0731 18:09:23.771002   73800 pod_ready.go:102] pod "metrics-server-569cc877fc-fzxrw" in "kube-system" namespace has status "Ready":"False"
	I0731 18:09:22.685964   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:09:22.698740   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 18:09:22.698814   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 18:09:22.735321   74203 cri.go:89] found id: ""
	I0731 18:09:22.735350   74203 logs.go:276] 0 containers: []
	W0731 18:09:22.735360   74203 logs.go:278] No container was found matching "kube-apiserver"
	I0731 18:09:22.735367   74203 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 18:09:22.735428   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 18:09:22.767689   74203 cri.go:89] found id: ""
	I0731 18:09:22.767718   74203 logs.go:276] 0 containers: []
	W0731 18:09:22.767729   74203 logs.go:278] No container was found matching "etcd"
	I0731 18:09:22.767736   74203 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 18:09:22.767795   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 18:09:22.804010   74203 cri.go:89] found id: ""
	I0731 18:09:22.804036   74203 logs.go:276] 0 containers: []
	W0731 18:09:22.804045   74203 logs.go:278] No container was found matching "coredns"
	I0731 18:09:22.804050   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 18:09:22.804101   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 18:09:22.836820   74203 cri.go:89] found id: ""
	I0731 18:09:22.836847   74203 logs.go:276] 0 containers: []
	W0731 18:09:22.836858   74203 logs.go:278] No container was found matching "kube-scheduler"
	I0731 18:09:22.836874   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 18:09:22.836933   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 18:09:22.870163   74203 cri.go:89] found id: ""
	I0731 18:09:22.870187   74203 logs.go:276] 0 containers: []
	W0731 18:09:22.870194   74203 logs.go:278] No container was found matching "kube-proxy"
	I0731 18:09:22.870199   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 18:09:22.870270   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 18:09:22.905926   74203 cri.go:89] found id: ""
	I0731 18:09:22.905951   74203 logs.go:276] 0 containers: []
	W0731 18:09:22.905959   74203 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 18:09:22.905965   74203 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 18:09:22.906020   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 18:09:22.938926   74203 cri.go:89] found id: ""
	I0731 18:09:22.938949   74203 logs.go:276] 0 containers: []
	W0731 18:09:22.938957   74203 logs.go:278] No container was found matching "kindnet"
	I0731 18:09:22.938963   74203 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 18:09:22.939008   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 18:09:22.975150   74203 cri.go:89] found id: ""
	I0731 18:09:22.975185   74203 logs.go:276] 0 containers: []
	W0731 18:09:22.975194   74203 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 18:09:22.975204   74203 logs.go:123] Gathering logs for describe nodes ...
	I0731 18:09:22.975219   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 18:09:23.043265   74203 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 18:09:23.043290   74203 logs.go:123] Gathering logs for CRI-O ...
	I0731 18:09:23.043302   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 18:09:23.122681   74203 logs.go:123] Gathering logs for container status ...
	I0731 18:09:23.122717   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 18:09:23.161745   74203 logs.go:123] Gathering logs for kubelet ...
	I0731 18:09:23.161769   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 18:09:23.211274   74203 logs.go:123] Gathering logs for dmesg ...
	I0731 18:09:23.211305   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 18:09:22.278664   73696 pod_ready.go:102] pod "metrics-server-569cc877fc-64hp4" in "kube-system" namespace has status "Ready":"False"
	I0731 18:09:24.778771   73696 pod_ready.go:102] pod "metrics-server-569cc877fc-64hp4" in "kube-system" namespace has status "Ready":"False"
	I0731 18:09:23.643871   73479 pod_ready.go:102] pod "metrics-server-78fcd8795b-27pkr" in "kube-system" namespace has status "Ready":"False"
	I0731 18:09:26.143509   73479 pod_ready.go:102] pod "metrics-server-78fcd8795b-27pkr" in "kube-system" namespace has status "Ready":"False"
	I0731 18:09:25.771922   73800 pod_ready.go:102] pod "metrics-server-569cc877fc-fzxrw" in "kube-system" namespace has status "Ready":"False"
	I0731 18:09:27.772156   73800 pod_ready.go:102] pod "metrics-server-569cc877fc-fzxrw" in "kube-system" namespace has status "Ready":"False"
	I0731 18:09:25.724702   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:09:25.739335   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 18:09:25.739415   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 18:09:25.778238   74203 cri.go:89] found id: ""
	I0731 18:09:25.778264   74203 logs.go:276] 0 containers: []
	W0731 18:09:25.778274   74203 logs.go:278] No container was found matching "kube-apiserver"
	I0731 18:09:25.778282   74203 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 18:09:25.778349   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 18:09:25.816530   74203 cri.go:89] found id: ""
	I0731 18:09:25.816566   74203 logs.go:276] 0 containers: []
	W0731 18:09:25.816579   74203 logs.go:278] No container was found matching "etcd"
	I0731 18:09:25.816587   74203 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 18:09:25.816652   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 18:09:25.853524   74203 cri.go:89] found id: ""
	I0731 18:09:25.853562   74203 logs.go:276] 0 containers: []
	W0731 18:09:25.853575   74203 logs.go:278] No container was found matching "coredns"
	I0731 18:09:25.853583   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 18:09:25.853661   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 18:09:25.889690   74203 cri.go:89] found id: ""
	I0731 18:09:25.889719   74203 logs.go:276] 0 containers: []
	W0731 18:09:25.889728   74203 logs.go:278] No container was found matching "kube-scheduler"
	I0731 18:09:25.889734   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 18:09:25.889803   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 18:09:25.922409   74203 cri.go:89] found id: ""
	I0731 18:09:25.922441   74203 logs.go:276] 0 containers: []
	W0731 18:09:25.922452   74203 logs.go:278] No container was found matching "kube-proxy"
	I0731 18:09:25.922459   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 18:09:25.922512   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 18:09:25.956849   74203 cri.go:89] found id: ""
	I0731 18:09:25.956876   74203 logs.go:276] 0 containers: []
	W0731 18:09:25.956886   74203 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 18:09:25.956893   74203 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 18:09:25.956958   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 18:09:25.994190   74203 cri.go:89] found id: ""
	I0731 18:09:25.994212   74203 logs.go:276] 0 containers: []
	W0731 18:09:25.994220   74203 logs.go:278] No container was found matching "kindnet"
	I0731 18:09:25.994225   74203 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 18:09:25.994270   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 18:09:26.027980   74203 cri.go:89] found id: ""
	I0731 18:09:26.028005   74203 logs.go:276] 0 containers: []
	W0731 18:09:26.028014   74203 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 18:09:26.028025   74203 logs.go:123] Gathering logs for kubelet ...
	I0731 18:09:26.028044   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 18:09:26.076627   74203 logs.go:123] Gathering logs for dmesg ...
	I0731 18:09:26.076661   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 18:09:26.089439   74203 logs.go:123] Gathering logs for describe nodes ...
	I0731 18:09:26.089464   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 18:09:26.167298   74203 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 18:09:26.167319   74203 logs.go:123] Gathering logs for CRI-O ...
	I0731 18:09:26.167333   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 18:09:26.244611   74203 logs.go:123] Gathering logs for container status ...
	I0731 18:09:26.244644   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 18:09:28.787238   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:09:28.800136   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 18:09:28.800221   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 18:09:28.843038   74203 cri.go:89] found id: ""
	I0731 18:09:28.843062   74203 logs.go:276] 0 containers: []
	W0731 18:09:28.843070   74203 logs.go:278] No container was found matching "kube-apiserver"
	I0731 18:09:28.843076   74203 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 18:09:28.843154   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 18:09:28.876979   74203 cri.go:89] found id: ""
	I0731 18:09:28.877010   74203 logs.go:276] 0 containers: []
	W0731 18:09:28.877021   74203 logs.go:278] No container was found matching "etcd"
	I0731 18:09:28.877028   74203 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 18:09:28.877095   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 18:09:28.913105   74203 cri.go:89] found id: ""
	I0731 18:09:28.913137   74203 logs.go:276] 0 containers: []
	W0731 18:09:28.913147   74203 logs.go:278] No container was found matching "coredns"
	I0731 18:09:28.913155   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 18:09:28.913216   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 18:09:28.949113   74203 cri.go:89] found id: ""
	I0731 18:09:28.949144   74203 logs.go:276] 0 containers: []
	W0731 18:09:28.949153   74203 logs.go:278] No container was found matching "kube-scheduler"
	I0731 18:09:28.949160   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 18:09:28.949208   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 18:09:28.983159   74203 cri.go:89] found id: ""
	I0731 18:09:28.983187   74203 logs.go:276] 0 containers: []
	W0731 18:09:28.983195   74203 logs.go:278] No container was found matching "kube-proxy"
	I0731 18:09:28.983200   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 18:09:28.983276   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 18:09:29.016316   74203 cri.go:89] found id: ""
	I0731 18:09:29.016356   74203 logs.go:276] 0 containers: []
	W0731 18:09:29.016364   74203 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 18:09:29.016370   74203 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 18:09:29.016419   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 18:09:29.050015   74203 cri.go:89] found id: ""
	I0731 18:09:29.050047   74203 logs.go:276] 0 containers: []
	W0731 18:09:29.050058   74203 logs.go:278] No container was found matching "kindnet"
	I0731 18:09:29.050069   74203 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 18:09:29.050124   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 18:09:29.084711   74203 cri.go:89] found id: ""
	I0731 18:09:29.084739   74203 logs.go:276] 0 containers: []
	W0731 18:09:29.084749   74203 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 18:09:29.084760   74203 logs.go:123] Gathering logs for kubelet ...
	I0731 18:09:29.084777   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 18:09:29.135474   74203 logs.go:123] Gathering logs for dmesg ...
	I0731 18:09:29.135516   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 18:09:29.149989   74203 logs.go:123] Gathering logs for describe nodes ...
	I0731 18:09:29.150022   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 18:09:29.223652   74203 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 18:09:29.223676   74203 logs.go:123] Gathering logs for CRI-O ...
	I0731 18:09:29.223688   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 18:09:29.307949   74203 logs.go:123] Gathering logs for container status ...
	I0731 18:09:29.307983   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 18:09:26.779082   73696 pod_ready.go:102] pod "metrics-server-569cc877fc-64hp4" in "kube-system" namespace has status "Ready":"False"
	I0731 18:09:29.280030   73696 pod_ready.go:102] pod "metrics-server-569cc877fc-64hp4" in "kube-system" namespace has status "Ready":"False"
	I0731 18:09:28.143957   73479 pod_ready.go:102] pod "metrics-server-78fcd8795b-27pkr" in "kube-system" namespace has status "Ready":"False"
	I0731 18:09:30.643349   73479 pod_ready.go:102] pod "metrics-server-78fcd8795b-27pkr" in "kube-system" namespace has status "Ready":"False"
	I0731 18:09:30.271524   73800 pod_ready.go:102] pod "metrics-server-569cc877fc-fzxrw" in "kube-system" namespace has status "Ready":"False"
	I0731 18:09:32.271862   73800 pod_ready.go:102] pod "metrics-server-569cc877fc-fzxrw" in "kube-system" namespace has status "Ready":"False"
	I0731 18:09:31.848760   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:09:31.861409   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 18:09:31.861470   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 18:09:31.894485   74203 cri.go:89] found id: ""
	I0731 18:09:31.894505   74203 logs.go:276] 0 containers: []
	W0731 18:09:31.894513   74203 logs.go:278] No container was found matching "kube-apiserver"
	I0731 18:09:31.894518   74203 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 18:09:31.894563   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 18:09:31.926760   74203 cri.go:89] found id: ""
	I0731 18:09:31.926784   74203 logs.go:276] 0 containers: []
	W0731 18:09:31.926791   74203 logs.go:278] No container was found matching "etcd"
	I0731 18:09:31.926797   74203 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 18:09:31.926857   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 18:09:31.963010   74203 cri.go:89] found id: ""
	I0731 18:09:31.963042   74203 logs.go:276] 0 containers: []
	W0731 18:09:31.963055   74203 logs.go:278] No container was found matching "coredns"
	I0731 18:09:31.963062   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 18:09:31.963165   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 18:09:31.995221   74203 cri.go:89] found id: ""
	I0731 18:09:31.995249   74203 logs.go:276] 0 containers: []
	W0731 18:09:31.995260   74203 logs.go:278] No container was found matching "kube-scheduler"
	I0731 18:09:31.995268   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 18:09:31.995333   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 18:09:32.033912   74203 cri.go:89] found id: ""
	I0731 18:09:32.033942   74203 logs.go:276] 0 containers: []
	W0731 18:09:32.033955   74203 logs.go:278] No container was found matching "kube-proxy"
	I0731 18:09:32.033963   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 18:09:32.034038   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 18:09:32.066416   74203 cri.go:89] found id: ""
	I0731 18:09:32.066446   74203 logs.go:276] 0 containers: []
	W0731 18:09:32.066477   74203 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 18:09:32.066486   74203 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 18:09:32.066549   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 18:09:32.100097   74203 cri.go:89] found id: ""
	I0731 18:09:32.100121   74203 logs.go:276] 0 containers: []
	W0731 18:09:32.100129   74203 logs.go:278] No container was found matching "kindnet"
	I0731 18:09:32.100135   74203 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 18:09:32.100191   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 18:09:32.133061   74203 cri.go:89] found id: ""
	I0731 18:09:32.133088   74203 logs.go:276] 0 containers: []
	W0731 18:09:32.133096   74203 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 18:09:32.133106   74203 logs.go:123] Gathering logs for container status ...
	I0731 18:09:32.133120   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 18:09:32.169869   74203 logs.go:123] Gathering logs for kubelet ...
	I0731 18:09:32.169897   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 18:09:32.218668   74203 logs.go:123] Gathering logs for dmesg ...
	I0731 18:09:32.218707   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 18:09:32.231016   74203 logs.go:123] Gathering logs for describe nodes ...
	I0731 18:09:32.231039   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 18:09:32.304319   74203 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 18:09:32.304342   74203 logs.go:123] Gathering logs for CRI-O ...
	I0731 18:09:32.304353   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 18:09:34.880423   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:09:34.893775   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 18:09:34.893853   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 18:09:34.925073   74203 cri.go:89] found id: ""
	I0731 18:09:34.925101   74203 logs.go:276] 0 containers: []
	W0731 18:09:34.925109   74203 logs.go:278] No container was found matching "kube-apiserver"
	I0731 18:09:34.925115   74203 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 18:09:34.925178   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 18:09:34.960870   74203 cri.go:89] found id: ""
	I0731 18:09:34.960896   74203 logs.go:276] 0 containers: []
	W0731 18:09:34.960904   74203 logs.go:278] No container was found matching "etcd"
	I0731 18:09:34.960910   74203 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 18:09:34.960961   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 18:09:34.996290   74203 cri.go:89] found id: ""
	I0731 18:09:34.996332   74203 logs.go:276] 0 containers: []
	W0731 18:09:34.996341   74203 logs.go:278] No container was found matching "coredns"
	I0731 18:09:34.996347   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 18:09:34.996401   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 18:09:35.027900   74203 cri.go:89] found id: ""
	I0731 18:09:35.027932   74203 logs.go:276] 0 containers: []
	W0731 18:09:35.027940   74203 logs.go:278] No container was found matching "kube-scheduler"
	I0731 18:09:35.027945   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 18:09:35.028004   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 18:09:35.060533   74203 cri.go:89] found id: ""
	I0731 18:09:35.060562   74203 logs.go:276] 0 containers: []
	W0731 18:09:35.060579   74203 logs.go:278] No container was found matching "kube-proxy"
	I0731 18:09:35.060586   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 18:09:35.060653   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 18:09:35.095307   74203 cri.go:89] found id: ""
	I0731 18:09:35.095339   74203 logs.go:276] 0 containers: []
	W0731 18:09:35.095348   74203 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 18:09:35.095355   74203 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 18:09:35.095421   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 18:09:35.127060   74203 cri.go:89] found id: ""
	I0731 18:09:35.127082   74203 logs.go:276] 0 containers: []
	W0731 18:09:35.127090   74203 logs.go:278] No container was found matching "kindnet"
	I0731 18:09:35.127095   74203 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 18:09:35.127169   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 18:09:35.161300   74203 cri.go:89] found id: ""
	I0731 18:09:35.161328   74203 logs.go:276] 0 containers: []
	W0731 18:09:35.161339   74203 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 18:09:35.161350   74203 logs.go:123] Gathering logs for describe nodes ...
	I0731 18:09:35.161369   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 18:09:35.233033   74203 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 18:09:35.233060   74203 logs.go:123] Gathering logs for CRI-O ...
	I0731 18:09:35.233074   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 18:09:35.313279   74203 logs.go:123] Gathering logs for container status ...
	I0731 18:09:35.313311   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 18:09:31.779160   73696 pod_ready.go:102] pod "metrics-server-569cc877fc-64hp4" in "kube-system" namespace has status "Ready":"False"
	I0731 18:09:33.779209   73696 pod_ready.go:102] pod "metrics-server-569cc877fc-64hp4" in "kube-system" namespace has status "Ready":"False"
	I0731 18:09:32.644329   73479 pod_ready.go:102] pod "metrics-server-78fcd8795b-27pkr" in "kube-system" namespace has status "Ready":"False"
	I0731 18:09:35.143744   73479 pod_ready.go:102] pod "metrics-server-78fcd8795b-27pkr" in "kube-system" namespace has status "Ready":"False"
	I0731 18:09:34.774758   73800 pod_ready.go:102] pod "metrics-server-569cc877fc-fzxrw" in "kube-system" namespace has status "Ready":"False"
	I0731 18:09:37.271690   73800 pod_ready.go:102] pod "metrics-server-569cc877fc-fzxrw" in "kube-system" namespace has status "Ready":"False"
	I0731 18:09:35.356120   74203 logs.go:123] Gathering logs for kubelet ...
	I0731 18:09:35.356145   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 18:09:35.408231   74203 logs.go:123] Gathering logs for dmesg ...
	I0731 18:09:35.408263   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 18:09:37.921242   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:09:37.933986   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 18:09:37.934044   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 18:09:37.964524   74203 cri.go:89] found id: ""
	I0731 18:09:37.964558   74203 logs.go:276] 0 containers: []
	W0731 18:09:37.964567   74203 logs.go:278] No container was found matching "kube-apiserver"
	I0731 18:09:37.964574   74203 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 18:09:37.964632   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 18:09:37.998157   74203 cri.go:89] found id: ""
	I0731 18:09:37.998183   74203 logs.go:276] 0 containers: []
	W0731 18:09:37.998191   74203 logs.go:278] No container was found matching "etcd"
	I0731 18:09:37.998196   74203 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 18:09:37.998257   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 18:09:38.034611   74203 cri.go:89] found id: ""
	I0731 18:09:38.034637   74203 logs.go:276] 0 containers: []
	W0731 18:09:38.034645   74203 logs.go:278] No container was found matching "coredns"
	I0731 18:09:38.034650   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 18:09:38.034708   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 18:09:38.068005   74203 cri.go:89] found id: ""
	I0731 18:09:38.068029   74203 logs.go:276] 0 containers: []
	W0731 18:09:38.068039   74203 logs.go:278] No container was found matching "kube-scheduler"
	I0731 18:09:38.068047   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 18:09:38.068104   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 18:09:38.106110   74203 cri.go:89] found id: ""
	I0731 18:09:38.106133   74203 logs.go:276] 0 containers: []
	W0731 18:09:38.106141   74203 logs.go:278] No container was found matching "kube-proxy"
	I0731 18:09:38.106146   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 18:09:38.106192   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 18:09:38.138337   74203 cri.go:89] found id: ""
	I0731 18:09:38.138364   74203 logs.go:276] 0 containers: []
	W0731 18:09:38.138375   74203 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 18:09:38.138383   74203 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 18:09:38.138440   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 18:09:38.171517   74203 cri.go:89] found id: ""
	I0731 18:09:38.171546   74203 logs.go:276] 0 containers: []
	W0731 18:09:38.171557   74203 logs.go:278] No container was found matching "kindnet"
	I0731 18:09:38.171564   74203 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 18:09:38.171643   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 18:09:38.208708   74203 cri.go:89] found id: ""
	I0731 18:09:38.208733   74203 logs.go:276] 0 containers: []
	W0731 18:09:38.208741   74203 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 18:09:38.208750   74203 logs.go:123] Gathering logs for container status ...
	I0731 18:09:38.208760   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 18:09:38.243711   74203 logs.go:123] Gathering logs for kubelet ...
	I0731 18:09:38.243736   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 18:09:38.298673   74203 logs.go:123] Gathering logs for dmesg ...
	I0731 18:09:38.298705   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 18:09:38.311936   74203 logs.go:123] Gathering logs for describe nodes ...
	I0731 18:09:38.311962   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 18:09:38.384023   74203 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 18:09:38.384049   74203 logs.go:123] Gathering logs for CRI-O ...
	I0731 18:09:38.384067   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 18:09:36.278948   73696 pod_ready.go:102] pod "metrics-server-569cc877fc-64hp4" in "kube-system" namespace has status "Ready":"False"
	I0731 18:09:38.279423   73696 pod_ready.go:102] pod "metrics-server-569cc877fc-64hp4" in "kube-system" namespace has status "Ready":"False"
	I0731 18:09:40.281213   73696 pod_ready.go:102] pod "metrics-server-569cc877fc-64hp4" in "kube-system" namespace has status "Ready":"False"
	I0731 18:09:37.644041   73479 pod_ready.go:102] pod "metrics-server-78fcd8795b-27pkr" in "kube-system" namespace has status "Ready":"False"
	I0731 18:09:40.143131   73479 pod_ready.go:102] pod "metrics-server-78fcd8795b-27pkr" in "kube-system" namespace has status "Ready":"False"
	I0731 18:09:39.772098   73800 pod_ready.go:102] pod "metrics-server-569cc877fc-fzxrw" in "kube-system" namespace has status "Ready":"False"
	I0731 18:09:42.272096   73800 pod_ready.go:102] pod "metrics-server-569cc877fc-fzxrw" in "kube-system" namespace has status "Ready":"False"
	I0731 18:09:40.959426   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:09:40.972581   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 18:09:40.972645   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 18:09:41.008917   74203 cri.go:89] found id: ""
	I0731 18:09:41.008941   74203 logs.go:276] 0 containers: []
	W0731 18:09:41.008950   74203 logs.go:278] No container was found matching "kube-apiserver"
	I0731 18:09:41.008957   74203 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 18:09:41.009018   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 18:09:41.045342   74203 cri.go:89] found id: ""
	I0731 18:09:41.045375   74203 logs.go:276] 0 containers: []
	W0731 18:09:41.045384   74203 logs.go:278] No container was found matching "etcd"
	I0731 18:09:41.045390   74203 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 18:09:41.045454   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 18:09:41.081385   74203 cri.go:89] found id: ""
	I0731 18:09:41.081409   74203 logs.go:276] 0 containers: []
	W0731 18:09:41.081417   74203 logs.go:278] No container was found matching "coredns"
	I0731 18:09:41.081423   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 18:09:41.081469   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 18:09:41.118028   74203 cri.go:89] found id: ""
	I0731 18:09:41.118051   74203 logs.go:276] 0 containers: []
	W0731 18:09:41.118062   74203 logs.go:278] No container was found matching "kube-scheduler"
	I0731 18:09:41.118067   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 18:09:41.118114   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 18:09:41.154162   74203 cri.go:89] found id: ""
	I0731 18:09:41.154190   74203 logs.go:276] 0 containers: []
	W0731 18:09:41.154201   74203 logs.go:278] No container was found matching "kube-proxy"
	I0731 18:09:41.154209   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 18:09:41.154271   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 18:09:41.190789   74203 cri.go:89] found id: ""
	I0731 18:09:41.190814   74203 logs.go:276] 0 containers: []
	W0731 18:09:41.190822   74203 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 18:09:41.190827   74203 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 18:09:41.190887   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 18:09:41.226281   74203 cri.go:89] found id: ""
	I0731 18:09:41.226312   74203 logs.go:276] 0 containers: []
	W0731 18:09:41.226321   74203 logs.go:278] No container was found matching "kindnet"
	I0731 18:09:41.226327   74203 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 18:09:41.226382   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 18:09:41.258270   74203 cri.go:89] found id: ""
	I0731 18:09:41.258299   74203 logs.go:276] 0 containers: []
	W0731 18:09:41.258309   74203 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 18:09:41.258321   74203 logs.go:123] Gathering logs for CRI-O ...
	I0731 18:09:41.258335   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 18:09:41.342713   74203 logs.go:123] Gathering logs for container status ...
	I0731 18:09:41.342749   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 18:09:41.389772   74203 logs.go:123] Gathering logs for kubelet ...
	I0731 18:09:41.389795   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 18:09:41.442645   74203 logs.go:123] Gathering logs for dmesg ...
	I0731 18:09:41.442676   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 18:09:41.455850   74203 logs.go:123] Gathering logs for describe nodes ...
	I0731 18:09:41.455874   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 18:09:41.522017   74203 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 18:09:44.022439   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:09:44.035190   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 18:09:44.035258   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 18:09:44.070759   74203 cri.go:89] found id: ""
	I0731 18:09:44.070783   74203 logs.go:276] 0 containers: []
	W0731 18:09:44.070790   74203 logs.go:278] No container was found matching "kube-apiserver"
	I0731 18:09:44.070796   74203 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 18:09:44.070857   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 18:09:44.105313   74203 cri.go:89] found id: ""
	I0731 18:09:44.105350   74203 logs.go:276] 0 containers: []
	W0731 18:09:44.105358   74203 logs.go:278] No container was found matching "etcd"
	I0731 18:09:44.105364   74203 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 18:09:44.105416   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 18:09:44.140159   74203 cri.go:89] found id: ""
	I0731 18:09:44.140208   74203 logs.go:276] 0 containers: []
	W0731 18:09:44.140220   74203 logs.go:278] No container was found matching "coredns"
	I0731 18:09:44.140229   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 18:09:44.140301   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 18:09:44.176407   74203 cri.go:89] found id: ""
	I0731 18:09:44.176429   74203 logs.go:276] 0 containers: []
	W0731 18:09:44.176437   74203 logs.go:278] No container was found matching "kube-scheduler"
	I0731 18:09:44.176442   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 18:09:44.176490   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 18:09:44.210875   74203 cri.go:89] found id: ""
	I0731 18:09:44.210899   74203 logs.go:276] 0 containers: []
	W0731 18:09:44.210907   74203 logs.go:278] No container was found matching "kube-proxy"
	I0731 18:09:44.210916   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 18:09:44.210969   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 18:09:44.247021   74203 cri.go:89] found id: ""
	I0731 18:09:44.247045   74203 logs.go:276] 0 containers: []
	W0731 18:09:44.247055   74203 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 18:09:44.247061   74203 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 18:09:44.247141   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 18:09:44.282983   74203 cri.go:89] found id: ""
	I0731 18:09:44.283011   74203 logs.go:276] 0 containers: []
	W0731 18:09:44.283021   74203 logs.go:278] No container was found matching "kindnet"
	I0731 18:09:44.283029   74203 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 18:09:44.283092   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 18:09:44.319717   74203 cri.go:89] found id: ""
	I0731 18:09:44.319742   74203 logs.go:276] 0 containers: []
	W0731 18:09:44.319750   74203 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 18:09:44.319766   74203 logs.go:123] Gathering logs for CRI-O ...
	I0731 18:09:44.319781   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 18:09:44.398602   74203 logs.go:123] Gathering logs for container status ...
	I0731 18:09:44.398636   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 18:09:44.435350   74203 logs.go:123] Gathering logs for kubelet ...
	I0731 18:09:44.435384   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 18:09:44.488021   74203 logs.go:123] Gathering logs for dmesg ...
	I0731 18:09:44.488053   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 18:09:44.501790   74203 logs.go:123] Gathering logs for describe nodes ...
	I0731 18:09:44.501813   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 18:09:44.578374   74203 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 18:09:42.779304   73696 pod_ready.go:102] pod "metrics-server-569cc877fc-64hp4" in "kube-system" namespace has status "Ready":"False"
	I0731 18:09:45.279008   73696 pod_ready.go:102] pod "metrics-server-569cc877fc-64hp4" in "kube-system" namespace has status "Ready":"False"
	I0731 18:09:42.143287   73479 pod_ready.go:102] pod "metrics-server-78fcd8795b-27pkr" in "kube-system" namespace has status "Ready":"False"
	I0731 18:09:44.144123   73479 pod_ready.go:102] pod "metrics-server-78fcd8795b-27pkr" in "kube-system" namespace has status "Ready":"False"
	I0731 18:09:46.643499   73479 pod_ready.go:102] pod "metrics-server-78fcd8795b-27pkr" in "kube-system" namespace has status "Ready":"False"
	I0731 18:09:44.771059   73800 pod_ready.go:102] pod "metrics-server-569cc877fc-fzxrw" in "kube-system" namespace has status "Ready":"False"
	I0731 18:09:46.771846   73800 pod_ready.go:102] pod "metrics-server-569cc877fc-fzxrw" in "kube-system" namespace has status "Ready":"False"
	I0731 18:09:48.772300   73800 pod_ready.go:102] pod "metrics-server-569cc877fc-fzxrw" in "kube-system" namespace has status "Ready":"False"
	I0731 18:09:47.079192   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:09:47.093516   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 18:09:47.093597   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 18:09:47.132872   74203 cri.go:89] found id: ""
	I0731 18:09:47.132899   74203 logs.go:276] 0 containers: []
	W0731 18:09:47.132907   74203 logs.go:278] No container was found matching "kube-apiserver"
	I0731 18:09:47.132913   74203 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 18:09:47.132969   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 18:09:47.167428   74203 cri.go:89] found id: ""
	I0731 18:09:47.167460   74203 logs.go:276] 0 containers: []
	W0731 18:09:47.167472   74203 logs.go:278] No container was found matching "etcd"
	I0731 18:09:47.167480   74203 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 18:09:47.167551   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 18:09:47.202206   74203 cri.go:89] found id: ""
	I0731 18:09:47.202237   74203 logs.go:276] 0 containers: []
	W0731 18:09:47.202250   74203 logs.go:278] No container was found matching "coredns"
	I0731 18:09:47.202256   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 18:09:47.202308   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 18:09:47.238513   74203 cri.go:89] found id: ""
	I0731 18:09:47.238537   74203 logs.go:276] 0 containers: []
	W0731 18:09:47.238545   74203 logs.go:278] No container was found matching "kube-scheduler"
	I0731 18:09:47.238551   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 18:09:47.238604   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 18:09:47.271732   74203 cri.go:89] found id: ""
	I0731 18:09:47.271755   74203 logs.go:276] 0 containers: []
	W0731 18:09:47.271764   74203 logs.go:278] No container was found matching "kube-proxy"
	I0731 18:09:47.271770   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 18:09:47.271828   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 18:09:47.305906   74203 cri.go:89] found id: ""
	I0731 18:09:47.305932   74203 logs.go:276] 0 containers: []
	W0731 18:09:47.305943   74203 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 18:09:47.305948   74203 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 18:09:47.305996   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 18:09:47.338427   74203 cri.go:89] found id: ""
	I0731 18:09:47.338452   74203 logs.go:276] 0 containers: []
	W0731 18:09:47.338461   74203 logs.go:278] No container was found matching "kindnet"
	I0731 18:09:47.338468   74203 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 18:09:47.338526   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 18:09:47.374909   74203 cri.go:89] found id: ""
	I0731 18:09:47.374943   74203 logs.go:276] 0 containers: []
	W0731 18:09:47.374954   74203 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 18:09:47.374963   74203 logs.go:123] Gathering logs for dmesg ...
	I0731 18:09:47.374976   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 18:09:47.387739   74203 logs.go:123] Gathering logs for describe nodes ...
	I0731 18:09:47.387765   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 18:09:47.480479   74203 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 18:09:47.480505   74203 logs.go:123] Gathering logs for CRI-O ...
	I0731 18:09:47.480519   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 18:09:47.562857   74203 logs.go:123] Gathering logs for container status ...
	I0731 18:09:47.562890   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 18:09:47.608435   74203 logs.go:123] Gathering logs for kubelet ...
	I0731 18:09:47.608466   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 18:09:50.164351   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:09:50.177485   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 18:09:50.177546   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 18:09:50.211474   74203 cri.go:89] found id: ""
	I0731 18:09:50.211502   74203 logs.go:276] 0 containers: []
	W0731 18:09:50.211512   74203 logs.go:278] No container was found matching "kube-apiserver"
	I0731 18:09:50.211520   74203 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 18:09:50.211583   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 18:09:50.248167   74203 cri.go:89] found id: ""
	I0731 18:09:50.248190   74203 logs.go:276] 0 containers: []
	W0731 18:09:50.248197   74203 logs.go:278] No container was found matching "etcd"
	I0731 18:09:50.248203   74203 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 18:09:50.248250   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 18:09:50.286323   74203 cri.go:89] found id: ""
	I0731 18:09:50.286358   74203 logs.go:276] 0 containers: []
	W0731 18:09:50.286366   74203 logs.go:278] No container was found matching "coredns"
	I0731 18:09:50.286372   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 18:09:50.286420   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 18:09:50.316634   74203 cri.go:89] found id: ""
	I0731 18:09:50.316661   74203 logs.go:276] 0 containers: []
	W0731 18:09:50.316670   74203 logs.go:278] No container was found matching "kube-scheduler"
	I0731 18:09:50.316675   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 18:09:50.316726   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 18:09:47.279198   73696 pod_ready.go:102] pod "metrics-server-569cc877fc-64hp4" in "kube-system" namespace has status "Ready":"False"
	I0731 18:09:49.280511   73696 pod_ready.go:102] pod "metrics-server-569cc877fc-64hp4" in "kube-system" namespace has status "Ready":"False"
	I0731 18:09:49.144581   73479 pod_ready.go:102] pod "metrics-server-78fcd8795b-27pkr" in "kube-system" namespace has status "Ready":"False"
	I0731 18:09:51.642915   73479 pod_ready.go:102] pod "metrics-server-78fcd8795b-27pkr" in "kube-system" namespace has status "Ready":"False"
	I0731 18:09:51.272079   73800 pod_ready.go:102] pod "metrics-server-569cc877fc-fzxrw" in "kube-system" namespace has status "Ready":"False"
	I0731 18:09:53.272815   73800 pod_ready.go:102] pod "metrics-server-569cc877fc-fzxrw" in "kube-system" namespace has status "Ready":"False"
	I0731 18:09:50.349881   74203 cri.go:89] found id: ""
	I0731 18:09:50.349909   74203 logs.go:276] 0 containers: []
	W0731 18:09:50.349919   74203 logs.go:278] No container was found matching "kube-proxy"
	I0731 18:09:50.349926   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 18:09:50.349989   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 18:09:50.384147   74203 cri.go:89] found id: ""
	I0731 18:09:50.384181   74203 logs.go:276] 0 containers: []
	W0731 18:09:50.384194   74203 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 18:09:50.384203   74203 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 18:09:50.384272   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 18:09:50.418024   74203 cri.go:89] found id: ""
	I0731 18:09:50.418052   74203 logs.go:276] 0 containers: []
	W0731 18:09:50.418062   74203 logs.go:278] No container was found matching "kindnet"
	I0731 18:09:50.418069   74203 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 18:09:50.418130   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 18:09:50.454484   74203 cri.go:89] found id: ""
	I0731 18:09:50.454517   74203 logs.go:276] 0 containers: []
	W0731 18:09:50.454525   74203 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 18:09:50.454533   74203 logs.go:123] Gathering logs for kubelet ...
	I0731 18:09:50.454544   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 18:09:50.505508   74203 logs.go:123] Gathering logs for dmesg ...
	I0731 18:09:50.505545   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 18:09:50.518504   74203 logs.go:123] Gathering logs for describe nodes ...
	I0731 18:09:50.518529   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 18:09:50.587950   74203 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 18:09:50.587974   74203 logs.go:123] Gathering logs for CRI-O ...
	I0731 18:09:50.587989   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 18:09:50.669268   74203 logs.go:123] Gathering logs for container status ...
	I0731 18:09:50.669302   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 18:09:53.209229   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:09:53.222114   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 18:09:53.222175   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 18:09:53.255330   74203 cri.go:89] found id: ""
	I0731 18:09:53.255356   74203 logs.go:276] 0 containers: []
	W0731 18:09:53.255365   74203 logs.go:278] No container was found matching "kube-apiserver"
	I0731 18:09:53.255371   74203 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 18:09:53.255432   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 18:09:53.290354   74203 cri.go:89] found id: ""
	I0731 18:09:53.290375   74203 logs.go:276] 0 containers: []
	W0731 18:09:53.290382   74203 logs.go:278] No container was found matching "etcd"
	I0731 18:09:53.290387   74203 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 18:09:53.290438   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 18:09:53.323621   74203 cri.go:89] found id: ""
	I0731 18:09:53.323645   74203 logs.go:276] 0 containers: []
	W0731 18:09:53.323653   74203 logs.go:278] No container was found matching "coredns"
	I0731 18:09:53.323658   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 18:09:53.323718   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 18:09:53.355850   74203 cri.go:89] found id: ""
	I0731 18:09:53.355877   74203 logs.go:276] 0 containers: []
	W0731 18:09:53.355887   74203 logs.go:278] No container was found matching "kube-scheduler"
	I0731 18:09:53.355894   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 18:09:53.355957   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 18:09:53.388686   74203 cri.go:89] found id: ""
	I0731 18:09:53.388716   74203 logs.go:276] 0 containers: []
	W0731 18:09:53.388726   74203 logs.go:278] No container was found matching "kube-proxy"
	I0731 18:09:53.388733   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 18:09:53.388785   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 18:09:53.426924   74203 cri.go:89] found id: ""
	I0731 18:09:53.426952   74203 logs.go:276] 0 containers: []
	W0731 18:09:53.426961   74203 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 18:09:53.426967   74203 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 18:09:53.427019   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 18:09:53.462041   74203 cri.go:89] found id: ""
	I0731 18:09:53.462067   74203 logs.go:276] 0 containers: []
	W0731 18:09:53.462078   74203 logs.go:278] No container was found matching "kindnet"
	I0731 18:09:53.462084   74203 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 18:09:53.462145   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 18:09:53.493810   74203 cri.go:89] found id: ""
	I0731 18:09:53.493833   74203 logs.go:276] 0 containers: []
	W0731 18:09:53.493842   74203 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 18:09:53.493852   74203 logs.go:123] Gathering logs for container status ...
	I0731 18:09:53.493867   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 18:09:53.530019   74203 logs.go:123] Gathering logs for kubelet ...
	I0731 18:09:53.530053   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 18:09:53.580749   74203 logs.go:123] Gathering logs for dmesg ...
	I0731 18:09:53.580782   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 18:09:53.594457   74203 logs.go:123] Gathering logs for describe nodes ...
	I0731 18:09:53.594482   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 18:09:53.662096   74203 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 18:09:53.662116   74203 logs.go:123] Gathering logs for CRI-O ...
	I0731 18:09:53.662134   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 18:09:51.778292   73696 pod_ready.go:102] pod "metrics-server-569cc877fc-64hp4" in "kube-system" namespace has status "Ready":"False"
	I0731 18:09:53.779043   73696 pod_ready.go:102] pod "metrics-server-569cc877fc-64hp4" in "kube-system" namespace has status "Ready":"False"
	I0731 18:09:53.643914   73479 pod_ready.go:102] pod "metrics-server-78fcd8795b-27pkr" in "kube-system" namespace has status "Ready":"False"
	I0731 18:09:56.142699   73479 pod_ready.go:102] pod "metrics-server-78fcd8795b-27pkr" in "kube-system" namespace has status "Ready":"False"
	I0731 18:09:55.772106   73800 pod_ready.go:102] pod "metrics-server-569cc877fc-fzxrw" in "kube-system" namespace has status "Ready":"False"
	I0731 18:09:58.271063   73800 pod_ready.go:102] pod "metrics-server-569cc877fc-fzxrw" in "kube-system" namespace has status "Ready":"False"
	I0731 18:09:56.238479   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:09:56.251272   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 18:09:56.251350   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 18:09:56.287380   74203 cri.go:89] found id: ""
	I0731 18:09:56.287406   74203 logs.go:276] 0 containers: []
	W0731 18:09:56.287414   74203 logs.go:278] No container was found matching "kube-apiserver"
	I0731 18:09:56.287419   74203 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 18:09:56.287471   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 18:09:56.322490   74203 cri.go:89] found id: ""
	I0731 18:09:56.322512   74203 logs.go:276] 0 containers: []
	W0731 18:09:56.322520   74203 logs.go:278] No container was found matching "etcd"
	I0731 18:09:56.322526   74203 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 18:09:56.322572   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 18:09:56.355845   74203 cri.go:89] found id: ""
	I0731 18:09:56.355874   74203 logs.go:276] 0 containers: []
	W0731 18:09:56.355885   74203 logs.go:278] No container was found matching "coredns"
	I0731 18:09:56.355895   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 18:09:56.355958   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 18:09:56.388304   74203 cri.go:89] found id: ""
	I0731 18:09:56.388330   74203 logs.go:276] 0 containers: []
	W0731 18:09:56.388340   74203 logs.go:278] No container was found matching "kube-scheduler"
	I0731 18:09:56.388348   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 18:09:56.388411   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 18:09:56.420837   74203 cri.go:89] found id: ""
	I0731 18:09:56.420867   74203 logs.go:276] 0 containers: []
	W0731 18:09:56.420877   74203 logs.go:278] No container was found matching "kube-proxy"
	I0731 18:09:56.420884   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 18:09:56.420950   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 18:09:56.453095   74203 cri.go:89] found id: ""
	I0731 18:09:56.453135   74203 logs.go:276] 0 containers: []
	W0731 18:09:56.453146   74203 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 18:09:56.453155   74203 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 18:09:56.453214   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 18:09:56.484245   74203 cri.go:89] found id: ""
	I0731 18:09:56.484272   74203 logs.go:276] 0 containers: []
	W0731 18:09:56.484282   74203 logs.go:278] No container was found matching "kindnet"
	I0731 18:09:56.484296   74203 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 18:09:56.484366   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 18:09:56.519473   74203 cri.go:89] found id: ""
	I0731 18:09:56.519501   74203 logs.go:276] 0 containers: []
	W0731 18:09:56.519508   74203 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 18:09:56.519516   74203 logs.go:123] Gathering logs for dmesg ...
	I0731 18:09:56.519530   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 18:09:56.532178   74203 logs.go:123] Gathering logs for describe nodes ...
	I0731 18:09:56.532203   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 18:09:56.600092   74203 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 18:09:56.600122   74203 logs.go:123] Gathering logs for CRI-O ...
	I0731 18:09:56.600137   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 18:09:56.679176   74203 logs.go:123] Gathering logs for container status ...
	I0731 18:09:56.679208   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 18:09:56.715464   74203 logs.go:123] Gathering logs for kubelet ...
	I0731 18:09:56.715499   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 18:09:59.267214   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:09:59.280666   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 18:09:59.280740   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 18:09:59.312898   74203 cri.go:89] found id: ""
	I0731 18:09:59.312928   74203 logs.go:276] 0 containers: []
	W0731 18:09:59.312940   74203 logs.go:278] No container was found matching "kube-apiserver"
	I0731 18:09:59.312947   74203 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 18:09:59.313013   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 18:09:59.347881   74203 cri.go:89] found id: ""
	I0731 18:09:59.347907   74203 logs.go:276] 0 containers: []
	W0731 18:09:59.347915   74203 logs.go:278] No container was found matching "etcd"
	I0731 18:09:59.347919   74203 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 18:09:59.347978   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 18:09:59.382566   74203 cri.go:89] found id: ""
	I0731 18:09:59.382603   74203 logs.go:276] 0 containers: []
	W0731 18:09:59.382615   74203 logs.go:278] No container was found matching "coredns"
	I0731 18:09:59.382629   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 18:09:59.382691   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 18:09:59.417123   74203 cri.go:89] found id: ""
	I0731 18:09:59.417148   74203 logs.go:276] 0 containers: []
	W0731 18:09:59.417157   74203 logs.go:278] No container was found matching "kube-scheduler"
	I0731 18:09:59.417163   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 18:09:59.417220   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 18:09:59.452674   74203 cri.go:89] found id: ""
	I0731 18:09:59.452699   74203 logs.go:276] 0 containers: []
	W0731 18:09:59.452709   74203 logs.go:278] No container was found matching "kube-proxy"
	I0731 18:09:59.452715   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 18:09:59.452775   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 18:09:59.488879   74203 cri.go:89] found id: ""
	I0731 18:09:59.488905   74203 logs.go:276] 0 containers: []
	W0731 18:09:59.488913   74203 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 18:09:59.488921   74203 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 18:09:59.488981   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 18:09:59.521773   74203 cri.go:89] found id: ""
	I0731 18:09:59.521801   74203 logs.go:276] 0 containers: []
	W0731 18:09:59.521809   74203 logs.go:278] No container was found matching "kindnet"
	I0731 18:09:59.521816   74203 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 18:09:59.521876   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 18:09:59.566619   74203 cri.go:89] found id: ""
	I0731 18:09:59.566649   74203 logs.go:276] 0 containers: []
	W0731 18:09:59.566659   74203 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 18:09:59.566670   74203 logs.go:123] Gathering logs for describe nodes ...
	I0731 18:09:59.566687   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 18:09:59.638301   74203 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 18:09:59.638351   74203 logs.go:123] Gathering logs for CRI-O ...
	I0731 18:09:59.638367   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 18:09:59.721561   74203 logs.go:123] Gathering logs for container status ...
	I0731 18:09:59.721597   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 18:09:59.759371   74203 logs.go:123] Gathering logs for kubelet ...
	I0731 18:09:59.759402   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 18:09:59.811223   74203 logs.go:123] Gathering logs for dmesg ...
	I0731 18:09:59.811255   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 18:09:56.280351   73696 pod_ready.go:102] pod "metrics-server-569cc877fc-64hp4" in "kube-system" namespace has status "Ready":"False"
	I0731 18:09:58.777896   73696 pod_ready.go:102] pod "metrics-server-569cc877fc-64hp4" in "kube-system" namespace has status "Ready":"False"
	I0731 18:10:00.779028   73696 pod_ready.go:102] pod "metrics-server-569cc877fc-64hp4" in "kube-system" namespace has status "Ready":"False"
	I0731 18:09:58.144006   73479 pod_ready.go:102] pod "metrics-server-78fcd8795b-27pkr" in "kube-system" namespace has status "Ready":"False"
	I0731 18:10:00.643536   73479 pod_ready.go:102] pod "metrics-server-78fcd8795b-27pkr" in "kube-system" namespace has status "Ready":"False"
	I0731 18:10:00.772456   73800 pod_ready.go:102] pod "metrics-server-569cc877fc-fzxrw" in "kube-system" namespace has status "Ready":"False"
	I0731 18:10:03.270710   73800 pod_ready.go:102] pod "metrics-server-569cc877fc-fzxrw" in "kube-system" namespace has status "Ready":"False"
	I0731 18:10:02.325339   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:10:02.337908   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 18:10:02.337963   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 18:10:02.369343   74203 cri.go:89] found id: ""
	I0731 18:10:02.369369   74203 logs.go:276] 0 containers: []
	W0731 18:10:02.369378   74203 logs.go:278] No container was found matching "kube-apiserver"
	I0731 18:10:02.369384   74203 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 18:10:02.369442   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 18:10:02.406207   74203 cri.go:89] found id: ""
	I0731 18:10:02.406234   74203 logs.go:276] 0 containers: []
	W0731 18:10:02.406242   74203 logs.go:278] No container was found matching "etcd"
	I0731 18:10:02.406247   74203 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 18:10:02.406297   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 18:10:02.442001   74203 cri.go:89] found id: ""
	I0731 18:10:02.442031   74203 logs.go:276] 0 containers: []
	W0731 18:10:02.442041   74203 logs.go:278] No container was found matching "coredns"
	I0731 18:10:02.442049   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 18:10:02.442109   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 18:10:02.478407   74203 cri.go:89] found id: ""
	I0731 18:10:02.478431   74203 logs.go:276] 0 containers: []
	W0731 18:10:02.478439   74203 logs.go:278] No container was found matching "kube-scheduler"
	I0731 18:10:02.478444   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 18:10:02.478491   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 18:10:02.513832   74203 cri.go:89] found id: ""
	I0731 18:10:02.513875   74203 logs.go:276] 0 containers: []
	W0731 18:10:02.513888   74203 logs.go:278] No container was found matching "kube-proxy"
	I0731 18:10:02.513896   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 18:10:02.513962   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 18:10:02.550830   74203 cri.go:89] found id: ""
	I0731 18:10:02.550856   74203 logs.go:276] 0 containers: []
	W0731 18:10:02.550867   74203 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 18:10:02.550874   74203 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 18:10:02.550937   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 18:10:02.584649   74203 cri.go:89] found id: ""
	I0731 18:10:02.584676   74203 logs.go:276] 0 containers: []
	W0731 18:10:02.584684   74203 logs.go:278] No container was found matching "kindnet"
	I0731 18:10:02.584691   74203 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 18:10:02.584752   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 18:10:02.617436   74203 cri.go:89] found id: ""
	I0731 18:10:02.617464   74203 logs.go:276] 0 containers: []
	W0731 18:10:02.617475   74203 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 18:10:02.617485   74203 logs.go:123] Gathering logs for kubelet ...
	I0731 18:10:02.617500   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 18:10:02.671571   74203 logs.go:123] Gathering logs for dmesg ...
	I0731 18:10:02.671609   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 18:10:02.686657   74203 logs.go:123] Gathering logs for describe nodes ...
	I0731 18:10:02.686694   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 18:10:02.755974   74203 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 18:10:02.756008   74203 logs.go:123] Gathering logs for CRI-O ...
	I0731 18:10:02.756025   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 18:10:02.837976   74203 logs.go:123] Gathering logs for container status ...
	I0731 18:10:02.838012   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 18:10:02.779666   73696 pod_ready.go:102] pod "metrics-server-569cc877fc-64hp4" in "kube-system" namespace has status "Ready":"False"
	I0731 18:10:04.779994   73696 pod_ready.go:102] pod "metrics-server-569cc877fc-64hp4" in "kube-system" namespace has status "Ready":"False"
	I0731 18:10:02.644075   73479 pod_ready.go:102] pod "metrics-server-78fcd8795b-27pkr" in "kube-system" namespace has status "Ready":"False"
	I0731 18:10:05.142859   73479 pod_ready.go:102] pod "metrics-server-78fcd8795b-27pkr" in "kube-system" namespace has status "Ready":"False"
	I0731 18:10:05.272500   73800 pod_ready.go:102] pod "metrics-server-569cc877fc-fzxrw" in "kube-system" namespace has status "Ready":"False"
	I0731 18:10:07.771599   73800 pod_ready.go:102] pod "metrics-server-569cc877fc-fzxrw" in "kube-system" namespace has status "Ready":"False"
	I0731 18:10:05.375212   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:10:05.388635   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 18:10:05.388703   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 18:10:05.427583   74203 cri.go:89] found id: ""
	I0731 18:10:05.427610   74203 logs.go:276] 0 containers: []
	W0731 18:10:05.427617   74203 logs.go:278] No container was found matching "kube-apiserver"
	I0731 18:10:05.427622   74203 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 18:10:05.427673   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 18:10:05.462550   74203 cri.go:89] found id: ""
	I0731 18:10:05.462575   74203 logs.go:276] 0 containers: []
	W0731 18:10:05.462583   74203 logs.go:278] No container was found matching "etcd"
	I0731 18:10:05.462589   74203 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 18:10:05.462645   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 18:10:05.501768   74203 cri.go:89] found id: ""
	I0731 18:10:05.501790   74203 logs.go:276] 0 containers: []
	W0731 18:10:05.501797   74203 logs.go:278] No container was found matching "coredns"
	I0731 18:10:05.501802   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 18:10:05.501860   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 18:10:05.539692   74203 cri.go:89] found id: ""
	I0731 18:10:05.539719   74203 logs.go:276] 0 containers: []
	W0731 18:10:05.539731   74203 logs.go:278] No container was found matching "kube-scheduler"
	I0731 18:10:05.539737   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 18:10:05.539798   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 18:10:05.573844   74203 cri.go:89] found id: ""
	I0731 18:10:05.573872   74203 logs.go:276] 0 containers: []
	W0731 18:10:05.573884   74203 logs.go:278] No container was found matching "kube-proxy"
	I0731 18:10:05.573891   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 18:10:05.573953   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 18:10:05.607827   74203 cri.go:89] found id: ""
	I0731 18:10:05.607848   74203 logs.go:276] 0 containers: []
	W0731 18:10:05.607858   74203 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 18:10:05.607863   74203 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 18:10:05.607913   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 18:10:05.639644   74203 cri.go:89] found id: ""
	I0731 18:10:05.639673   74203 logs.go:276] 0 containers: []
	W0731 18:10:05.639684   74203 logs.go:278] No container was found matching "kindnet"
	I0731 18:10:05.639691   74203 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 18:10:05.639753   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 18:10:05.673164   74203 cri.go:89] found id: ""
	I0731 18:10:05.673188   74203 logs.go:276] 0 containers: []
	W0731 18:10:05.673195   74203 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 18:10:05.673203   74203 logs.go:123] Gathering logs for CRI-O ...
	I0731 18:10:05.673215   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 18:10:05.755189   74203 logs.go:123] Gathering logs for container status ...
	I0731 18:10:05.755221   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 18:10:05.793686   74203 logs.go:123] Gathering logs for kubelet ...
	I0731 18:10:05.793715   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 18:10:05.844930   74203 logs.go:123] Gathering logs for dmesg ...
	I0731 18:10:05.844965   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 18:10:05.859150   74203 logs.go:123] Gathering logs for describe nodes ...
	I0731 18:10:05.859176   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 18:10:05.929945   74203 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
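(Every "describe nodes" attempt fails the same way: connection refused on localhost:8443, i.e. nothing is serving the apiserver port on the node. A hedged sketch for confirming that from the node; ss is a generic diagnostic and not part of the logged test run, while the kubectl command is copied from the failure message above:)

	# check whether anything is listening on the port the error message names
	sudo ss -tlnp | grep 8443 || echo "nothing listening on 8443"
	# the exact command the log keeps retrying, copied from the failure above
	sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig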
	I0731 18:10:08.430669   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:10:08.444918   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 18:10:08.444989   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 18:10:08.482598   74203 cri.go:89] found id: ""
	I0731 18:10:08.482625   74203 logs.go:276] 0 containers: []
	W0731 18:10:08.482635   74203 logs.go:278] No container was found matching "kube-apiserver"
	I0731 18:10:08.482642   74203 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 18:10:08.482708   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 18:10:08.519687   74203 cri.go:89] found id: ""
	I0731 18:10:08.519717   74203 logs.go:276] 0 containers: []
	W0731 18:10:08.519726   74203 logs.go:278] No container was found matching "etcd"
	I0731 18:10:08.519734   74203 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 18:10:08.519795   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 18:10:08.551600   74203 cri.go:89] found id: ""
	I0731 18:10:08.551638   74203 logs.go:276] 0 containers: []
	W0731 18:10:08.551649   74203 logs.go:278] No container was found matching "coredns"
	I0731 18:10:08.551657   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 18:10:08.551713   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 18:10:08.585233   74203 cri.go:89] found id: ""
	I0731 18:10:08.585263   74203 logs.go:276] 0 containers: []
	W0731 18:10:08.585274   74203 logs.go:278] No container was found matching "kube-scheduler"
	I0731 18:10:08.585282   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 18:10:08.585343   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 18:10:08.622464   74203 cri.go:89] found id: ""
	I0731 18:10:08.622492   74203 logs.go:276] 0 containers: []
	W0731 18:10:08.622502   74203 logs.go:278] No container was found matching "kube-proxy"
	I0731 18:10:08.622510   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 18:10:08.622569   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 18:10:08.658360   74203 cri.go:89] found id: ""
	I0731 18:10:08.658390   74203 logs.go:276] 0 containers: []
	W0731 18:10:08.658402   74203 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 18:10:08.658410   74203 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 18:10:08.658471   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 18:10:08.692076   74203 cri.go:89] found id: ""
	I0731 18:10:08.692100   74203 logs.go:276] 0 containers: []
	W0731 18:10:08.692109   74203 logs.go:278] No container was found matching "kindnet"
	I0731 18:10:08.692116   74203 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 18:10:08.692179   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 18:10:08.729584   74203 cri.go:89] found id: ""
	I0731 18:10:08.729612   74203 logs.go:276] 0 containers: []
	W0731 18:10:08.729622   74203 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 18:10:08.729633   74203 logs.go:123] Gathering logs for describe nodes ...
	I0731 18:10:08.729647   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 18:10:08.806395   74203 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 18:10:08.806457   74203 logs.go:123] Gathering logs for CRI-O ...
	I0731 18:10:08.806485   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 18:10:08.884008   74203 logs.go:123] Gathering logs for container status ...
	I0731 18:10:08.884046   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 18:10:08.924359   74203 logs.go:123] Gathering logs for kubelet ...
	I0731 18:10:08.924398   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 18:10:08.978161   74203 logs.go:123] Gathering logs for dmesg ...
	I0731 18:10:08.978195   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 18:10:07.279327   73696 pod_ready.go:102] pod "metrics-server-569cc877fc-64hp4" in "kube-system" namespace has status "Ready":"False"
	I0731 18:10:09.281214   73696 pod_ready.go:102] pod "metrics-server-569cc877fc-64hp4" in "kube-system" namespace has status "Ready":"False"
	I0731 18:10:07.143145   73479 pod_ready.go:102] pod "metrics-server-78fcd8795b-27pkr" in "kube-system" namespace has status "Ready":"False"
	I0731 18:10:09.143995   73479 pod_ready.go:102] pod "metrics-server-78fcd8795b-27pkr" in "kube-system" namespace has status "Ready":"False"
	I0731 18:10:11.643254   73479 pod_ready.go:102] pod "metrics-server-78fcd8795b-27pkr" in "kube-system" namespace has status "Ready":"False"
	I0731 18:10:09.773024   73800 pod_ready.go:102] pod "metrics-server-569cc877fc-fzxrw" in "kube-system" namespace has status "Ready":"False"
	I0731 18:10:12.272862   73800 pod_ready.go:102] pod "metrics-server-569cc877fc-fzxrw" in "kube-system" namespace has status "Ready":"False"
	I0731 18:10:14.273615   73800 pod_ready.go:102] pod "metrics-server-569cc877fc-fzxrw" in "kube-system" namespace has status "Ready":"False"
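(The pod_ready lines from PIDs 73696, 73479 and 73800 are periodic readiness polls against metrics-server pods in three other clusters. A one-off equivalent with kubectl, using a pod name taken from the log; the kubeconfig context for each cluster is not shown in this section, so it is left to the reader:)

	# prints "True" once the pod reports Ready; the log keeps observing "False"
	kubectl -n kube-system get pod metrics-server-569cc877fc-fzxrw \
	  -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'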
	I0731 18:10:11.491784   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:10:11.504711   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 18:10:11.504784   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 18:10:11.541314   74203 cri.go:89] found id: ""
	I0731 18:10:11.541353   74203 logs.go:276] 0 containers: []
	W0731 18:10:11.541361   74203 logs.go:278] No container was found matching "kube-apiserver"
	I0731 18:10:11.541366   74203 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 18:10:11.541424   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 18:10:11.576481   74203 cri.go:89] found id: ""
	I0731 18:10:11.576509   74203 logs.go:276] 0 containers: []
	W0731 18:10:11.576527   74203 logs.go:278] No container was found matching "etcd"
	I0731 18:10:11.576535   74203 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 18:10:11.576597   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 18:10:11.610370   74203 cri.go:89] found id: ""
	I0731 18:10:11.610395   74203 logs.go:276] 0 containers: []
	W0731 18:10:11.610404   74203 logs.go:278] No container was found matching "coredns"
	I0731 18:10:11.610412   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 18:10:11.610470   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 18:10:11.645559   74203 cri.go:89] found id: ""
	I0731 18:10:11.645586   74203 logs.go:276] 0 containers: []
	W0731 18:10:11.645593   74203 logs.go:278] No container was found matching "kube-scheduler"
	I0731 18:10:11.645598   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 18:10:11.645654   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 18:10:11.677576   74203 cri.go:89] found id: ""
	I0731 18:10:11.677613   74203 logs.go:276] 0 containers: []
	W0731 18:10:11.677624   74203 logs.go:278] No container was found matching "kube-proxy"
	I0731 18:10:11.677631   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 18:10:11.677681   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 18:10:11.710173   74203 cri.go:89] found id: ""
	I0731 18:10:11.710199   74203 logs.go:276] 0 containers: []
	W0731 18:10:11.710208   74203 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 18:10:11.710215   74203 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 18:10:11.710273   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 18:10:11.743722   74203 cri.go:89] found id: ""
	I0731 18:10:11.743752   74203 logs.go:276] 0 containers: []
	W0731 18:10:11.743763   74203 logs.go:278] No container was found matching "kindnet"
	I0731 18:10:11.743782   74203 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 18:10:11.743857   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 18:10:11.776730   74203 cri.go:89] found id: ""
	I0731 18:10:11.776752   74203 logs.go:276] 0 containers: []
	W0731 18:10:11.776759   74203 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 18:10:11.776766   74203 logs.go:123] Gathering logs for describe nodes ...
	I0731 18:10:11.776777   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 18:10:11.846385   74203 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 18:10:11.846404   74203 logs.go:123] Gathering logs for CRI-O ...
	I0731 18:10:11.846415   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 18:10:11.923748   74203 logs.go:123] Gathering logs for container status ...
	I0731 18:10:11.923779   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 18:10:11.959700   74203 logs.go:123] Gathering logs for kubelet ...
	I0731 18:10:11.959734   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 18:10:12.009971   74203 logs.go:123] Gathering logs for dmesg ...
	I0731 18:10:12.010002   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 18:10:14.524097   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:10:14.537349   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 18:10:14.537449   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 18:10:14.569907   74203 cri.go:89] found id: ""
	I0731 18:10:14.569934   74203 logs.go:276] 0 containers: []
	W0731 18:10:14.569941   74203 logs.go:278] No container was found matching "kube-apiserver"
	I0731 18:10:14.569947   74203 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 18:10:14.569999   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 18:10:14.605058   74203 cri.go:89] found id: ""
	I0731 18:10:14.605085   74203 logs.go:276] 0 containers: []
	W0731 18:10:14.605095   74203 logs.go:278] No container was found matching "etcd"
	I0731 18:10:14.605102   74203 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 18:10:14.605155   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 18:10:14.640941   74203 cri.go:89] found id: ""
	I0731 18:10:14.640964   74203 logs.go:276] 0 containers: []
	W0731 18:10:14.640975   74203 logs.go:278] No container was found matching "coredns"
	I0731 18:10:14.640982   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 18:10:14.641039   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 18:10:14.678774   74203 cri.go:89] found id: ""
	I0731 18:10:14.678803   74203 logs.go:276] 0 containers: []
	W0731 18:10:14.678814   74203 logs.go:278] No container was found matching "kube-scheduler"
	I0731 18:10:14.678822   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 18:10:14.678880   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 18:10:14.714123   74203 cri.go:89] found id: ""
	I0731 18:10:14.714152   74203 logs.go:276] 0 containers: []
	W0731 18:10:14.714163   74203 logs.go:278] No container was found matching "kube-proxy"
	I0731 18:10:14.714171   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 18:10:14.714230   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 18:10:14.750212   74203 cri.go:89] found id: ""
	I0731 18:10:14.750243   74203 logs.go:276] 0 containers: []
	W0731 18:10:14.750255   74203 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 18:10:14.750262   74203 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 18:10:14.750322   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 18:10:14.786820   74203 cri.go:89] found id: ""
	I0731 18:10:14.786842   74203 logs.go:276] 0 containers: []
	W0731 18:10:14.786850   74203 logs.go:278] No container was found matching "kindnet"
	I0731 18:10:14.786856   74203 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 18:10:14.786904   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 18:10:14.819667   74203 cri.go:89] found id: ""
	I0731 18:10:14.819689   74203 logs.go:276] 0 containers: []
	W0731 18:10:14.819697   74203 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 18:10:14.819705   74203 logs.go:123] Gathering logs for dmesg ...
	I0731 18:10:14.819725   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 18:10:14.832525   74203 logs.go:123] Gathering logs for describe nodes ...
	I0731 18:10:14.832550   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 18:10:14.901190   74203 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 18:10:14.901216   74203 logs.go:123] Gathering logs for CRI-O ...
	I0731 18:10:14.901229   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 18:10:14.977123   74203 logs.go:123] Gathering logs for container status ...
	I0731 18:10:14.977158   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 18:10:15.014882   74203 logs.go:123] Gathering logs for kubelet ...
	I0731 18:10:15.014912   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
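(Each probe cycle ends with the same four log collectors: kubelet, dmesg, CRI-O, and container status. They can be run directly on the node to look for the reason the control plane never starts; the commands below are copied verbatim from the Run: lines in the log:)

	# the four collectors the test run executes on the node
	sudo journalctl -u kubelet -n 400
	sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
	sudo journalctl -u crio -n 400
	sudo `which crictl || echo crictl` ps -a || sudo docker ps -a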
	I0731 18:10:11.779007   73696 pod_ready.go:102] pod "metrics-server-569cc877fc-64hp4" in "kube-system" namespace has status "Ready":"False"
	I0731 18:10:14.279638   73696 pod_ready.go:102] pod "metrics-server-569cc877fc-64hp4" in "kube-system" namespace has status "Ready":"False"
	I0731 18:10:14.142303   73479 pod_ready.go:102] pod "metrics-server-78fcd8795b-27pkr" in "kube-system" namespace has status "Ready":"False"
	I0731 18:10:16.143713   73479 pod_ready.go:102] pod "metrics-server-78fcd8795b-27pkr" in "kube-system" namespace has status "Ready":"False"
	I0731 18:10:16.770910   73800 pod_ready.go:102] pod "metrics-server-569cc877fc-fzxrw" in "kube-system" namespace has status "Ready":"False"
	I0731 18:10:18.771058   73800 pod_ready.go:102] pod "metrics-server-569cc877fc-fzxrw" in "kube-system" namespace has status "Ready":"False"
	I0731 18:10:17.564989   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:10:17.578676   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 18:10:17.578740   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 18:10:17.610077   74203 cri.go:89] found id: ""
	I0731 18:10:17.610103   74203 logs.go:276] 0 containers: []
	W0731 18:10:17.610112   74203 logs.go:278] No container was found matching "kube-apiserver"
	I0731 18:10:17.610117   74203 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 18:10:17.610169   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 18:10:17.643143   74203 cri.go:89] found id: ""
	I0731 18:10:17.643166   74203 logs.go:276] 0 containers: []
	W0731 18:10:17.643173   74203 logs.go:278] No container was found matching "etcd"
	I0731 18:10:17.643179   74203 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 18:10:17.643225   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 18:10:17.677979   74203 cri.go:89] found id: ""
	I0731 18:10:17.678002   74203 logs.go:276] 0 containers: []
	W0731 18:10:17.678010   74203 logs.go:278] No container was found matching "coredns"
	I0731 18:10:17.678016   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 18:10:17.678086   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 18:10:17.711905   74203 cri.go:89] found id: ""
	I0731 18:10:17.711941   74203 logs.go:276] 0 containers: []
	W0731 18:10:17.711953   74203 logs.go:278] No container was found matching "kube-scheduler"
	I0731 18:10:17.711960   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 18:10:17.712013   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 18:10:17.745842   74203 cri.go:89] found id: ""
	I0731 18:10:17.745870   74203 logs.go:276] 0 containers: []
	W0731 18:10:17.745881   74203 logs.go:278] No container was found matching "kube-proxy"
	I0731 18:10:17.745889   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 18:10:17.745949   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 18:10:17.778170   74203 cri.go:89] found id: ""
	I0731 18:10:17.778242   74203 logs.go:276] 0 containers: []
	W0731 18:10:17.778260   74203 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 18:10:17.778272   74203 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 18:10:17.778340   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 18:10:17.810717   74203 cri.go:89] found id: ""
	I0731 18:10:17.810744   74203 logs.go:276] 0 containers: []
	W0731 18:10:17.810755   74203 logs.go:278] No container was found matching "kindnet"
	I0731 18:10:17.810762   74203 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 18:10:17.810823   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 18:10:17.843237   74203 cri.go:89] found id: ""
	I0731 18:10:17.843268   74203 logs.go:276] 0 containers: []
	W0731 18:10:17.843278   74203 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 18:10:17.843288   74203 logs.go:123] Gathering logs for kubelet ...
	I0731 18:10:17.843303   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 18:10:17.894338   74203 logs.go:123] Gathering logs for dmesg ...
	I0731 18:10:17.894376   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 18:10:17.907898   74203 logs.go:123] Gathering logs for describe nodes ...
	I0731 18:10:17.907927   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 18:10:17.977115   74203 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 18:10:17.977133   74203 logs.go:123] Gathering logs for CRI-O ...
	I0731 18:10:17.977145   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 18:10:18.059924   74203 logs.go:123] Gathering logs for container status ...
	I0731 18:10:18.059968   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 18:10:16.279697   73696 pod_ready.go:102] pod "metrics-server-569cc877fc-64hp4" in "kube-system" namespace has status "Ready":"False"
	I0731 18:10:18.780698   73696 pod_ready.go:102] pod "metrics-server-569cc877fc-64hp4" in "kube-system" namespace has status "Ready":"False"
	I0731 18:10:18.144063   73479 pod_ready.go:102] pod "metrics-server-78fcd8795b-27pkr" in "kube-system" namespace has status "Ready":"False"
	I0731 18:10:20.643891   73479 pod_ready.go:102] pod "metrics-server-78fcd8795b-27pkr" in "kube-system" namespace has status "Ready":"False"
	I0731 18:10:20.772956   73800 pod_ready.go:102] pod "metrics-server-569cc877fc-fzxrw" in "kube-system" namespace has status "Ready":"False"
	I0731 18:10:23.270974   73800 pod_ready.go:102] pod "metrics-server-569cc877fc-fzxrw" in "kube-system" namespace has status "Ready":"False"
	I0731 18:10:20.600903   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:10:20.613609   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 18:10:20.613680   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 18:10:20.646352   74203 cri.go:89] found id: ""
	I0731 18:10:20.646379   74203 logs.go:276] 0 containers: []
	W0731 18:10:20.646388   74203 logs.go:278] No container was found matching "kube-apiserver"
	I0731 18:10:20.646395   74203 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 18:10:20.646453   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 18:10:20.680448   74203 cri.go:89] found id: ""
	I0731 18:10:20.680475   74203 logs.go:276] 0 containers: []
	W0731 18:10:20.680486   74203 logs.go:278] No container was found matching "etcd"
	I0731 18:10:20.680493   74203 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 18:10:20.680555   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 18:10:20.716330   74203 cri.go:89] found id: ""
	I0731 18:10:20.716365   74203 logs.go:276] 0 containers: []
	W0731 18:10:20.716378   74203 logs.go:278] No container was found matching "coredns"
	I0731 18:10:20.716387   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 18:10:20.716448   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 18:10:20.748630   74203 cri.go:89] found id: ""
	I0731 18:10:20.748657   74203 logs.go:276] 0 containers: []
	W0731 18:10:20.748665   74203 logs.go:278] No container was found matching "kube-scheduler"
	I0731 18:10:20.748670   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 18:10:20.748736   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 18:10:20.787769   74203 cri.go:89] found id: ""
	I0731 18:10:20.787793   74203 logs.go:276] 0 containers: []
	W0731 18:10:20.787802   74203 logs.go:278] No container was found matching "kube-proxy"
	I0731 18:10:20.787809   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 18:10:20.787869   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 18:10:20.819884   74203 cri.go:89] found id: ""
	I0731 18:10:20.819911   74203 logs.go:276] 0 containers: []
	W0731 18:10:20.819921   74203 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 18:10:20.819929   74203 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 18:10:20.819988   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 18:10:20.853414   74203 cri.go:89] found id: ""
	I0731 18:10:20.853437   74203 logs.go:276] 0 containers: []
	W0731 18:10:20.853445   74203 logs.go:278] No container was found matching "kindnet"
	I0731 18:10:20.853450   74203 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 18:10:20.853508   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 18:10:20.889198   74203 cri.go:89] found id: ""
	I0731 18:10:20.889224   74203 logs.go:276] 0 containers: []
	W0731 18:10:20.889231   74203 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 18:10:20.889239   74203 logs.go:123] Gathering logs for dmesg ...
	I0731 18:10:20.889251   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 18:10:20.903240   74203 logs.go:123] Gathering logs for describe nodes ...
	I0731 18:10:20.903268   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 18:10:20.971003   74203 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 18:10:20.971032   74203 logs.go:123] Gathering logs for CRI-O ...
	I0731 18:10:20.971051   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 18:10:21.045856   74203 logs.go:123] Gathering logs for container status ...
	I0731 18:10:21.045888   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 18:10:21.086089   74203 logs.go:123] Gathering logs for kubelet ...
	I0731 18:10:21.086121   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 18:10:23.639664   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:10:23.652573   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 18:10:23.652632   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 18:10:23.684719   74203 cri.go:89] found id: ""
	I0731 18:10:23.684746   74203 logs.go:276] 0 containers: []
	W0731 18:10:23.684757   74203 logs.go:278] No container was found matching "kube-apiserver"
	I0731 18:10:23.684765   74203 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 18:10:23.684820   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 18:10:23.717315   74203 cri.go:89] found id: ""
	I0731 18:10:23.717350   74203 logs.go:276] 0 containers: []
	W0731 18:10:23.717362   74203 logs.go:278] No container was found matching "etcd"
	I0731 18:10:23.717369   74203 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 18:10:23.717432   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 18:10:23.750251   74203 cri.go:89] found id: ""
	I0731 18:10:23.750275   74203 logs.go:276] 0 containers: []
	W0731 18:10:23.750286   74203 logs.go:278] No container was found matching "coredns"
	I0731 18:10:23.750293   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 18:10:23.750397   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 18:10:23.785700   74203 cri.go:89] found id: ""
	I0731 18:10:23.785726   74203 logs.go:276] 0 containers: []
	W0731 18:10:23.785737   74203 logs.go:278] No container was found matching "kube-scheduler"
	I0731 18:10:23.785745   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 18:10:23.785792   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 18:10:23.816856   74203 cri.go:89] found id: ""
	I0731 18:10:23.816885   74203 logs.go:276] 0 containers: []
	W0731 18:10:23.816895   74203 logs.go:278] No container was found matching "kube-proxy"
	I0731 18:10:23.816902   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 18:10:23.816965   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 18:10:23.849931   74203 cri.go:89] found id: ""
	I0731 18:10:23.849962   74203 logs.go:276] 0 containers: []
	W0731 18:10:23.849972   74203 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 18:10:23.849980   74203 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 18:10:23.850043   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 18:10:23.881413   74203 cri.go:89] found id: ""
	I0731 18:10:23.881444   74203 logs.go:276] 0 containers: []
	W0731 18:10:23.881452   74203 logs.go:278] No container was found matching "kindnet"
	I0731 18:10:23.881458   74203 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 18:10:23.881516   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 18:10:23.914272   74203 cri.go:89] found id: ""
	I0731 18:10:23.914303   74203 logs.go:276] 0 containers: []
	W0731 18:10:23.914313   74203 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 18:10:23.914325   74203 logs.go:123] Gathering logs for describe nodes ...
	I0731 18:10:23.914352   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 18:10:23.979988   74203 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 18:10:23.980015   74203 logs.go:123] Gathering logs for CRI-O ...
	I0731 18:10:23.980027   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 18:10:24.057159   74203 logs.go:123] Gathering logs for container status ...
	I0731 18:10:24.057198   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 18:10:24.097567   74203 logs.go:123] Gathering logs for kubelet ...
	I0731 18:10:24.097603   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 18:10:24.154740   74203 logs.go:123] Gathering logs for dmesg ...
	I0731 18:10:24.154781   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
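(The timestamps show the whole probe cycle repeating roughly every three seconds. A rough bash approximation of that wait loop; the interval and the five-minute cap are read off the log timestamps, not taken from the minikube source:)

	# re-probe for a kube-apiserver container every 3s, give up after 5 minutes
	end=$((SECONDS + 300))
	until [ -n "$(sudo crictl ps -a --quiet --name=kube-apiserver)" ]; do
	  if [ "$SECONDS" -ge "$end" ]; then
	    echo "kube-apiserver container never appeared" >&2
	    break
	  fi
	  sleep 3
	done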
	I0731 18:10:21.279091   73696 pod_ready.go:102] pod "metrics-server-569cc877fc-64hp4" in "kube-system" namespace has status "Ready":"False"
	I0731 18:10:23.779103   73696 pod_ready.go:102] pod "metrics-server-569cc877fc-64hp4" in "kube-system" namespace has status "Ready":"False"
	I0731 18:10:25.779754   73696 pod_ready.go:102] pod "metrics-server-569cc877fc-64hp4" in "kube-system" namespace has status "Ready":"False"
	I0731 18:10:23.142423   73479 pod_ready.go:102] pod "metrics-server-78fcd8795b-27pkr" in "kube-system" namespace has status "Ready":"False"
	I0731 18:10:25.642901   73479 pod_ready.go:102] pod "metrics-server-78fcd8795b-27pkr" in "kube-system" namespace has status "Ready":"False"
	I0731 18:10:25.272277   73800 pod_ready.go:102] pod "metrics-server-569cc877fc-fzxrw" in "kube-system" namespace has status "Ready":"False"
	I0731 18:10:27.771221   73800 pod_ready.go:102] pod "metrics-server-569cc877fc-fzxrw" in "kube-system" namespace has status "Ready":"False"
	I0731 18:10:26.670324   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:10:26.683866   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 18:10:26.683951   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 18:10:26.717671   74203 cri.go:89] found id: ""
	I0731 18:10:26.717722   74203 logs.go:276] 0 containers: []
	W0731 18:10:26.717733   74203 logs.go:278] No container was found matching "kube-apiserver"
	I0731 18:10:26.717739   74203 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 18:10:26.717790   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 18:10:26.751201   74203 cri.go:89] found id: ""
	I0731 18:10:26.751228   74203 logs.go:276] 0 containers: []
	W0731 18:10:26.751236   74203 logs.go:278] No container was found matching "etcd"
	I0731 18:10:26.751246   74203 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 18:10:26.751315   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 18:10:26.784768   74203 cri.go:89] found id: ""
	I0731 18:10:26.784793   74203 logs.go:276] 0 containers: []
	W0731 18:10:26.784803   74203 logs.go:278] No container was found matching "coredns"
	I0731 18:10:26.784811   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 18:10:26.784868   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 18:10:26.822269   74203 cri.go:89] found id: ""
	I0731 18:10:26.822298   74203 logs.go:276] 0 containers: []
	W0731 18:10:26.822307   74203 logs.go:278] No container was found matching "kube-scheduler"
	I0731 18:10:26.822315   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 18:10:26.822378   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 18:10:26.854405   74203 cri.go:89] found id: ""
	I0731 18:10:26.854427   74203 logs.go:276] 0 containers: []
	W0731 18:10:26.854434   74203 logs.go:278] No container was found matching "kube-proxy"
	I0731 18:10:26.854441   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 18:10:26.854490   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 18:10:26.888975   74203 cri.go:89] found id: ""
	I0731 18:10:26.889000   74203 logs.go:276] 0 containers: []
	W0731 18:10:26.889007   74203 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 18:10:26.889013   74203 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 18:10:26.889085   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 18:10:26.922940   74203 cri.go:89] found id: ""
	I0731 18:10:26.922967   74203 logs.go:276] 0 containers: []
	W0731 18:10:26.922976   74203 logs.go:278] No container was found matching "kindnet"
	I0731 18:10:26.922981   74203 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 18:10:26.923040   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 18:10:26.955717   74203 cri.go:89] found id: ""
	I0731 18:10:26.955743   74203 logs.go:276] 0 containers: []
	W0731 18:10:26.955754   74203 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 18:10:26.955764   74203 logs.go:123] Gathering logs for kubelet ...
	I0731 18:10:26.955779   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 18:10:27.006453   74203 logs.go:123] Gathering logs for dmesg ...
	I0731 18:10:27.006481   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 18:10:27.019136   74203 logs.go:123] Gathering logs for describe nodes ...
	I0731 18:10:27.019159   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 18:10:27.086988   74203 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 18:10:27.087014   74203 logs.go:123] Gathering logs for CRI-O ...
	I0731 18:10:27.087031   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 18:10:27.161574   74203 logs.go:123] Gathering logs for container status ...
	I0731 18:10:27.161604   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 18:10:29.705620   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:10:29.718718   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 18:10:29.718775   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 18:10:29.751079   74203 cri.go:89] found id: ""
	I0731 18:10:29.751123   74203 logs.go:276] 0 containers: []
	W0731 18:10:29.751134   74203 logs.go:278] No container was found matching "kube-apiserver"
	I0731 18:10:29.751142   74203 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 18:10:29.751198   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 18:10:29.790944   74203 cri.go:89] found id: ""
	I0731 18:10:29.790971   74203 logs.go:276] 0 containers: []
	W0731 18:10:29.790982   74203 logs.go:278] No container was found matching "etcd"
	I0731 18:10:29.790988   74203 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 18:10:29.791041   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 18:10:29.827921   74203 cri.go:89] found id: ""
	I0731 18:10:29.827951   74203 logs.go:276] 0 containers: []
	W0731 18:10:29.827965   74203 logs.go:278] No container was found matching "coredns"
	I0731 18:10:29.827971   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 18:10:29.828031   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 18:10:29.861365   74203 cri.go:89] found id: ""
	I0731 18:10:29.861398   74203 logs.go:276] 0 containers: []
	W0731 18:10:29.861409   74203 logs.go:278] No container was found matching "kube-scheduler"
	I0731 18:10:29.861417   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 18:10:29.861472   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 18:10:29.894509   74203 cri.go:89] found id: ""
	I0731 18:10:29.894537   74203 logs.go:276] 0 containers: []
	W0731 18:10:29.894546   74203 logs.go:278] No container was found matching "kube-proxy"
	I0731 18:10:29.894552   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 18:10:29.894614   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 18:10:29.926793   74203 cri.go:89] found id: ""
	I0731 18:10:29.926821   74203 logs.go:276] 0 containers: []
	W0731 18:10:29.926832   74203 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 18:10:29.926839   74203 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 18:10:29.926904   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 18:10:29.963765   74203 cri.go:89] found id: ""
	I0731 18:10:29.963792   74203 logs.go:276] 0 containers: []
	W0731 18:10:29.963802   74203 logs.go:278] No container was found matching "kindnet"
	I0731 18:10:29.963809   74203 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 18:10:29.963870   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 18:10:29.998577   74203 cri.go:89] found id: ""
	I0731 18:10:29.998604   74203 logs.go:276] 0 containers: []
	W0731 18:10:29.998611   74203 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 18:10:29.998619   74203 logs.go:123] Gathering logs for kubelet ...
	I0731 18:10:29.998630   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 18:10:30.050035   74203 logs.go:123] Gathering logs for dmesg ...
	I0731 18:10:30.050072   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 18:10:30.064147   74203 logs.go:123] Gathering logs for describe nodes ...
	I0731 18:10:30.064178   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 18:10:30.136990   74203 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 18:10:30.137012   74203 logs.go:123] Gathering logs for CRI-O ...
	I0731 18:10:30.137030   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 18:10:30.214687   74203 logs.go:123] Gathering logs for container status ...
	I0731 18:10:30.214719   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 18:10:28.279257   73696 pod_ready.go:102] pod "metrics-server-569cc877fc-64hp4" in "kube-system" namespace has status "Ready":"False"
	I0731 18:10:30.778466   73696 pod_ready.go:102] pod "metrics-server-569cc877fc-64hp4" in "kube-system" namespace has status "Ready":"False"
	I0731 18:10:27.644082   73479 pod_ready.go:102] pod "metrics-server-78fcd8795b-27pkr" in "kube-system" namespace has status "Ready":"False"
	I0731 18:10:30.144191   73479 pod_ready.go:102] pod "metrics-server-78fcd8795b-27pkr" in "kube-system" namespace has status "Ready":"False"
	I0731 18:10:29.772316   73800 pod_ready.go:102] pod "metrics-server-569cc877fc-fzxrw" in "kube-system" namespace has status "Ready":"False"
	I0731 18:10:32.272651   73800 pod_ready.go:102] pod "metrics-server-569cc877fc-fzxrw" in "kube-system" namespace has status "Ready":"False"
	I0731 18:10:32.753503   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:10:32.766795   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 18:10:32.766873   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 18:10:32.812134   74203 cri.go:89] found id: ""
	I0731 18:10:32.812161   74203 logs.go:276] 0 containers: []
	W0731 18:10:32.812169   74203 logs.go:278] No container was found matching "kube-apiserver"
	I0731 18:10:32.812175   74203 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 18:10:32.812229   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 18:10:32.846997   74203 cri.go:89] found id: ""
	I0731 18:10:32.847029   74203 logs.go:276] 0 containers: []
	W0731 18:10:32.847039   74203 logs.go:278] No container was found matching "etcd"
	I0731 18:10:32.847044   74203 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 18:10:32.847092   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 18:10:32.884093   74203 cri.go:89] found id: ""
	I0731 18:10:32.884123   74203 logs.go:276] 0 containers: []
	W0731 18:10:32.884132   74203 logs.go:278] No container was found matching "coredns"
	I0731 18:10:32.884138   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 18:10:32.884188   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 18:10:32.920160   74203 cri.go:89] found id: ""
	I0731 18:10:32.920186   74203 logs.go:276] 0 containers: []
	W0731 18:10:32.920197   74203 logs.go:278] No container was found matching "kube-scheduler"
	I0731 18:10:32.920204   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 18:10:32.920263   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 18:10:32.952750   74203 cri.go:89] found id: ""
	I0731 18:10:32.952777   74203 logs.go:276] 0 containers: []
	W0731 18:10:32.952788   74203 logs.go:278] No container was found matching "kube-proxy"
	I0731 18:10:32.952795   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 18:10:32.952865   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 18:10:32.989086   74203 cri.go:89] found id: ""
	I0731 18:10:32.989115   74203 logs.go:276] 0 containers: []
	W0731 18:10:32.989125   74203 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 18:10:32.989135   74203 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 18:10:32.989189   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 18:10:33.021554   74203 cri.go:89] found id: ""
	I0731 18:10:33.021590   74203 logs.go:276] 0 containers: []
	W0731 18:10:33.021602   74203 logs.go:278] No container was found matching "kindnet"
	I0731 18:10:33.021609   74203 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 18:10:33.021662   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 18:10:33.061097   74203 cri.go:89] found id: ""
	I0731 18:10:33.061128   74203 logs.go:276] 0 containers: []
	W0731 18:10:33.061141   74203 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 18:10:33.061160   74203 logs.go:123] Gathering logs for kubelet ...
	I0731 18:10:33.061174   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 18:10:33.113497   74203 logs.go:123] Gathering logs for dmesg ...
	I0731 18:10:33.113534   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 18:10:33.126816   74203 logs.go:123] Gathering logs for describe nodes ...
	I0731 18:10:33.126842   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 18:10:33.196713   74203 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 18:10:33.196733   74203 logs.go:123] Gathering logs for CRI-O ...
	I0731 18:10:33.196744   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 18:10:33.277697   74203 logs.go:123] Gathering logs for container status ...
	I0731 18:10:33.277724   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 18:10:33.279738   73696 pod_ready.go:102] pod "metrics-server-569cc877fc-64hp4" in "kube-system" namespace has status "Ready":"False"
	I0731 18:10:35.780181   73696 pod_ready.go:102] pod "metrics-server-569cc877fc-64hp4" in "kube-system" namespace has status "Ready":"False"
	I0731 18:10:32.643177   73479 pod_ready.go:102] pod "metrics-server-78fcd8795b-27pkr" in "kube-system" namespace has status "Ready":"False"
	I0731 18:10:35.143606   73479 pod_ready.go:102] pod "metrics-server-78fcd8795b-27pkr" in "kube-system" namespace has status "Ready":"False"
	I0731 18:10:34.771678   73800 pod_ready.go:102] pod "metrics-server-569cc877fc-fzxrw" in "kube-system" namespace has status "Ready":"False"
	I0731 18:10:36.772167   73800 pod_ready.go:102] pod "metrics-server-569cc877fc-fzxrw" in "kube-system" namespace has status "Ready":"False"
	I0731 18:10:39.272752   73800 pod_ready.go:102] pod "metrics-server-569cc877fc-fzxrw" in "kube-system" namespace has status "Ready":"False"
	I0731 18:10:35.817143   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:10:35.829760   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 18:10:35.829820   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 18:10:35.862974   74203 cri.go:89] found id: ""
	I0731 18:10:35.863002   74203 logs.go:276] 0 containers: []
	W0731 18:10:35.863014   74203 logs.go:278] No container was found matching "kube-apiserver"
	I0731 18:10:35.863022   74203 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 18:10:35.863078   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 18:10:35.898547   74203 cri.go:89] found id: ""
	I0731 18:10:35.898576   74203 logs.go:276] 0 containers: []
	W0731 18:10:35.898584   74203 logs.go:278] No container was found matching "etcd"
	I0731 18:10:35.898590   74203 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 18:10:35.898651   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 18:10:35.930351   74203 cri.go:89] found id: ""
	I0731 18:10:35.930379   74203 logs.go:276] 0 containers: []
	W0731 18:10:35.930390   74203 logs.go:278] No container was found matching "coredns"
	I0731 18:10:35.930396   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 18:10:35.930463   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 18:10:35.962623   74203 cri.go:89] found id: ""
	I0731 18:10:35.962652   74203 logs.go:276] 0 containers: []
	W0731 18:10:35.962663   74203 logs.go:278] No container was found matching "kube-scheduler"
	I0731 18:10:35.962670   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 18:10:35.962727   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 18:10:35.998213   74203 cri.go:89] found id: ""
	I0731 18:10:35.998233   74203 logs.go:276] 0 containers: []
	W0731 18:10:35.998240   74203 logs.go:278] No container was found matching "kube-proxy"
	I0731 18:10:35.998245   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 18:10:35.998291   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 18:10:36.032670   74203 cri.go:89] found id: ""
	I0731 18:10:36.032695   74203 logs.go:276] 0 containers: []
	W0731 18:10:36.032703   74203 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 18:10:36.032709   74203 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 18:10:36.032757   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 18:10:36.066349   74203 cri.go:89] found id: ""
	I0731 18:10:36.066381   74203 logs.go:276] 0 containers: []
	W0731 18:10:36.066392   74203 logs.go:278] No container was found matching "kindnet"
	I0731 18:10:36.066399   74203 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 18:10:36.066461   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 18:10:36.104137   74203 cri.go:89] found id: ""
	I0731 18:10:36.104168   74203 logs.go:276] 0 containers: []
	W0731 18:10:36.104180   74203 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 18:10:36.104200   74203 logs.go:123] Gathering logs for kubelet ...
	I0731 18:10:36.104215   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 18:10:36.155814   74203 logs.go:123] Gathering logs for dmesg ...
	I0731 18:10:36.155844   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 18:10:36.168885   74203 logs.go:123] Gathering logs for describe nodes ...
	I0731 18:10:36.168912   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 18:10:36.235950   74203 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 18:10:36.235972   74203 logs.go:123] Gathering logs for CRI-O ...
	I0731 18:10:36.235987   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 18:10:36.318382   74203 logs.go:123] Gathering logs for container status ...
	I0731 18:10:36.318414   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
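Each retry above runs the same per-component probe: crictl is asked for any container (running or exited) whose name matches the component, and an empty result is logged as "No container was found matching ...". A minimal sketch of that probe loop, assuming crictl is installed on the node (hypothetical script, not part of the test run):

    # reproduce the probe pattern shown in this log
    for c in kube-apiserver etcd coredns kube-scheduler kube-proxy kube-controller-manager kindnet kubernetes-dashboard; do
      ids=$(sudo crictl ps -a --quiet --name="$c")
      if [ -z "$ids" ]; then
        echo "no container found matching $c"
      else
        echo "$c: $ids"
      fi
    done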
	I0731 18:10:38.853972   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:10:38.867018   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 18:10:38.867089   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 18:10:38.902069   74203 cri.go:89] found id: ""
	I0731 18:10:38.902097   74203 logs.go:276] 0 containers: []
	W0731 18:10:38.902109   74203 logs.go:278] No container was found matching "kube-apiserver"
	I0731 18:10:38.902115   74203 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 18:10:38.902181   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 18:10:38.935272   74203 cri.go:89] found id: ""
	I0731 18:10:38.935296   74203 logs.go:276] 0 containers: []
	W0731 18:10:38.935316   74203 logs.go:278] No container was found matching "etcd"
	I0731 18:10:38.935329   74203 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 18:10:38.935387   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 18:10:38.968582   74203 cri.go:89] found id: ""
	I0731 18:10:38.968610   74203 logs.go:276] 0 containers: []
	W0731 18:10:38.968621   74203 logs.go:278] No container was found matching "coredns"
	I0731 18:10:38.968629   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 18:10:38.968688   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 18:10:38.999740   74203 cri.go:89] found id: ""
	I0731 18:10:38.999770   74203 logs.go:276] 0 containers: []
	W0731 18:10:38.999780   74203 logs.go:278] No container was found matching "kube-scheduler"
	I0731 18:10:38.999787   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 18:10:38.999845   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 18:10:39.032964   74203 cri.go:89] found id: ""
	I0731 18:10:39.032993   74203 logs.go:276] 0 containers: []
	W0731 18:10:39.033008   74203 logs.go:278] No container was found matching "kube-proxy"
	I0731 18:10:39.033015   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 18:10:39.033099   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 18:10:39.064121   74203 cri.go:89] found id: ""
	I0731 18:10:39.064149   74203 logs.go:276] 0 containers: []
	W0731 18:10:39.064158   74203 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 18:10:39.064164   74203 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 18:10:39.064222   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 18:10:39.098462   74203 cri.go:89] found id: ""
	I0731 18:10:39.098488   74203 logs.go:276] 0 containers: []
	W0731 18:10:39.098498   74203 logs.go:278] No container was found matching "kindnet"
	I0731 18:10:39.098505   74203 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 18:10:39.098564   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 18:10:39.130627   74203 cri.go:89] found id: ""
	I0731 18:10:39.130653   74203 logs.go:276] 0 containers: []
	W0731 18:10:39.130663   74203 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 18:10:39.130674   74203 logs.go:123] Gathering logs for CRI-O ...
	I0731 18:10:39.130687   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 18:10:39.223664   74203 logs.go:123] Gathering logs for container status ...
	I0731 18:10:39.223698   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 18:10:39.260502   74203 logs.go:123] Gathering logs for kubelet ...
	I0731 18:10:39.260533   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 18:10:39.315643   74203 logs.go:123] Gathering logs for dmesg ...
	I0731 18:10:39.315675   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 18:10:39.329731   74203 logs.go:123] Gathering logs for describe nodes ...
	I0731 18:10:39.329761   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 18:10:39.395078   74203 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
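The repeated "connection refused" on localhost:8443 above means the describe-nodes probe cannot reach an apiserver on the node, which is consistent with the empty crictl listings: no kube-apiserver container has been created yet. A minimal sketch of equivalent manual checks, assuming SSH access to the node (for example via minikube ssh with the affected profile; the profile name is not taken from this log):

    # is any apiserver container present, running or exited?
    sudo crictl ps -a --name=kube-apiserver
    # is anything listening on the apiserver port?
    sudo ss -ltnp | grep 8443
    # recent kubelet activity, mirroring the log-gathering step above
    sudo journalctl -u kubelet -n 50 --no-pager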
	I0731 18:10:38.278911   73696 pod_ready.go:102] pod "metrics-server-569cc877fc-64hp4" in "kube-system" namespace has status "Ready":"False"
	I0731 18:10:40.779921   73696 pod_ready.go:102] pod "metrics-server-569cc877fc-64hp4" in "kube-system" namespace has status "Ready":"False"
	I0731 18:10:37.643246   73479 pod_ready.go:102] pod "metrics-server-78fcd8795b-27pkr" in "kube-system" namespace has status "Ready":"False"
	I0731 18:10:39.643862   73479 pod_ready.go:102] pod "metrics-server-78fcd8795b-27pkr" in "kube-system" namespace has status "Ready":"False"
	I0731 18:10:41.772051   73800 pod_ready.go:102] pod "metrics-server-569cc877fc-fzxrw" in "kube-system" namespace has status "Ready":"False"
	I0731 18:10:44.271544   73800 pod_ready.go:102] pod "metrics-server-569cc877fc-fzxrw" in "kube-system" namespace has status "Ready":"False"
	I0731 18:10:41.895698   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:10:41.910111   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 18:10:41.910191   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 18:10:41.943700   74203 cri.go:89] found id: ""
	I0731 18:10:41.943732   74203 logs.go:276] 0 containers: []
	W0731 18:10:41.943743   74203 logs.go:278] No container was found matching "kube-apiserver"
	I0731 18:10:41.943751   74203 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 18:10:41.943812   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 18:10:41.976848   74203 cri.go:89] found id: ""
	I0731 18:10:41.976879   74203 logs.go:276] 0 containers: []
	W0731 18:10:41.976888   74203 logs.go:278] No container was found matching "etcd"
	I0731 18:10:41.976894   74203 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 18:10:41.976967   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 18:10:42.009424   74203 cri.go:89] found id: ""
	I0731 18:10:42.009451   74203 logs.go:276] 0 containers: []
	W0731 18:10:42.009462   74203 logs.go:278] No container was found matching "coredns"
	I0731 18:10:42.009477   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 18:10:42.009546   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 18:10:42.047233   74203 cri.go:89] found id: ""
	I0731 18:10:42.047260   74203 logs.go:276] 0 containers: []
	W0731 18:10:42.047268   74203 logs.go:278] No container was found matching "kube-scheduler"
	I0731 18:10:42.047274   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 18:10:42.047342   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 18:10:42.079900   74203 cri.go:89] found id: ""
	I0731 18:10:42.079928   74203 logs.go:276] 0 containers: []
	W0731 18:10:42.079938   74203 logs.go:278] No container was found matching "kube-proxy"
	I0731 18:10:42.079945   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 18:10:42.080025   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 18:10:42.114122   74203 cri.go:89] found id: ""
	I0731 18:10:42.114152   74203 logs.go:276] 0 containers: []
	W0731 18:10:42.114164   74203 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 18:10:42.114172   74203 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 18:10:42.114224   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 18:10:42.148741   74203 cri.go:89] found id: ""
	I0731 18:10:42.148768   74203 logs.go:276] 0 containers: []
	W0731 18:10:42.148780   74203 logs.go:278] No container was found matching "kindnet"
	I0731 18:10:42.148789   74203 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 18:10:42.148853   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 18:10:42.184739   74203 cri.go:89] found id: ""
	I0731 18:10:42.184762   74203 logs.go:276] 0 containers: []
	W0731 18:10:42.184769   74203 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 18:10:42.184777   74203 logs.go:123] Gathering logs for describe nodes ...
	I0731 18:10:42.184791   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 18:10:42.254676   74203 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 18:10:42.254694   74203 logs.go:123] Gathering logs for CRI-O ...
	I0731 18:10:42.254706   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 18:10:42.334936   74203 logs.go:123] Gathering logs for container status ...
	I0731 18:10:42.334978   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 18:10:42.371511   74203 logs.go:123] Gathering logs for kubelet ...
	I0731 18:10:42.371540   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 18:10:42.421800   74203 logs.go:123] Gathering logs for dmesg ...
	I0731 18:10:42.421831   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 18:10:44.934983   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:10:44.947212   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 18:10:44.947293   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 18:10:44.979722   74203 cri.go:89] found id: ""
	I0731 18:10:44.979748   74203 logs.go:276] 0 containers: []
	W0731 18:10:44.979760   74203 logs.go:278] No container was found matching "kube-apiserver"
	I0731 18:10:44.979767   74203 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 18:10:44.979819   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 18:10:45.011594   74203 cri.go:89] found id: ""
	I0731 18:10:45.011620   74203 logs.go:276] 0 containers: []
	W0731 18:10:45.011630   74203 logs.go:278] No container was found matching "etcd"
	I0731 18:10:45.011637   74203 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 18:10:45.011803   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 18:10:45.043174   74203 cri.go:89] found id: ""
	I0731 18:10:45.043197   74203 logs.go:276] 0 containers: []
	W0731 18:10:45.043207   74203 logs.go:278] No container was found matching "coredns"
	I0731 18:10:45.043214   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 18:10:45.043278   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 18:10:45.074629   74203 cri.go:89] found id: ""
	I0731 18:10:45.074652   74203 logs.go:276] 0 containers: []
	W0731 18:10:45.074662   74203 logs.go:278] No container was found matching "kube-scheduler"
	I0731 18:10:45.074669   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 18:10:45.074727   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 18:10:45.108917   74203 cri.go:89] found id: ""
	I0731 18:10:45.108944   74203 logs.go:276] 0 containers: []
	W0731 18:10:45.108952   74203 logs.go:278] No container was found matching "kube-proxy"
	I0731 18:10:45.108959   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 18:10:45.109018   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 18:10:45.142200   74203 cri.go:89] found id: ""
	I0731 18:10:45.142227   74203 logs.go:276] 0 containers: []
	W0731 18:10:45.142237   74203 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 18:10:45.142244   74203 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 18:10:45.142306   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 18:10:45.177076   74203 cri.go:89] found id: ""
	I0731 18:10:45.177101   74203 logs.go:276] 0 containers: []
	W0731 18:10:45.177109   74203 logs.go:278] No container was found matching "kindnet"
	I0731 18:10:45.177114   74203 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 18:10:45.177168   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 18:10:45.209352   74203 cri.go:89] found id: ""
	I0731 18:10:45.209376   74203 logs.go:276] 0 containers: []
	W0731 18:10:45.209383   74203 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 18:10:45.209392   74203 logs.go:123] Gathering logs for kubelet ...
	I0731 18:10:45.209407   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 18:10:45.257966   74203 logs.go:123] Gathering logs for dmesg ...
	I0731 18:10:45.257998   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 18:10:45.272429   74203 logs.go:123] Gathering logs for describe nodes ...
	I0731 18:10:45.272462   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 18:10:43.279626   73696 pod_ready.go:102] pod "metrics-server-569cc877fc-64hp4" in "kube-system" namespace has status "Ready":"False"
	I0731 18:10:45.778975   73696 pod_ready.go:102] pod "metrics-server-569cc877fc-64hp4" in "kube-system" namespace has status "Ready":"False"
	I0731 18:10:42.145247   73479 pod_ready.go:102] pod "metrics-server-78fcd8795b-27pkr" in "kube-system" namespace has status "Ready":"False"
	I0731 18:10:44.642278   73479 pod_ready.go:102] pod "metrics-server-78fcd8795b-27pkr" in "kube-system" namespace has status "Ready":"False"
	I0731 18:10:46.644897   73479 pod_ready.go:102] pod "metrics-server-78fcd8795b-27pkr" in "kube-system" namespace has status "Ready":"False"
	I0731 18:10:46.771785   73800 pod_ready.go:102] pod "metrics-server-569cc877fc-fzxrw" in "kube-system" namespace has status "Ready":"False"
	I0731 18:10:48.772117   73800 pod_ready.go:102] pod "metrics-server-569cc877fc-fzxrw" in "kube-system" namespace has status "Ready":"False"
	W0731 18:10:45.347952   74203 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 18:10:45.347973   74203 logs.go:123] Gathering logs for CRI-O ...
	I0731 18:10:45.347988   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 18:10:45.428556   74203 logs.go:123] Gathering logs for container status ...
	I0731 18:10:45.428609   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 18:10:47.971089   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:10:47.986677   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 18:10:47.986749   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 18:10:48.020396   74203 cri.go:89] found id: ""
	I0731 18:10:48.020426   74203 logs.go:276] 0 containers: []
	W0731 18:10:48.020438   74203 logs.go:278] No container was found matching "kube-apiserver"
	I0731 18:10:48.020446   74203 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 18:10:48.020512   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 18:10:48.058129   74203 cri.go:89] found id: ""
	I0731 18:10:48.058161   74203 logs.go:276] 0 containers: []
	W0731 18:10:48.058172   74203 logs.go:278] No container was found matching "etcd"
	I0731 18:10:48.058180   74203 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 18:10:48.058249   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 18:10:48.091894   74203 cri.go:89] found id: ""
	I0731 18:10:48.091922   74203 logs.go:276] 0 containers: []
	W0731 18:10:48.091932   74203 logs.go:278] No container was found matching "coredns"
	I0731 18:10:48.091939   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 18:10:48.091998   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 18:10:48.124757   74203 cri.go:89] found id: ""
	I0731 18:10:48.124788   74203 logs.go:276] 0 containers: []
	W0731 18:10:48.124798   74203 logs.go:278] No container was found matching "kube-scheduler"
	I0731 18:10:48.124807   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 18:10:48.124871   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 18:10:48.159145   74203 cri.go:89] found id: ""
	I0731 18:10:48.159172   74203 logs.go:276] 0 containers: []
	W0731 18:10:48.159184   74203 logs.go:278] No container was found matching "kube-proxy"
	I0731 18:10:48.159191   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 18:10:48.159253   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 18:10:48.200024   74203 cri.go:89] found id: ""
	I0731 18:10:48.200051   74203 logs.go:276] 0 containers: []
	W0731 18:10:48.200061   74203 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 18:10:48.200069   74203 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 18:10:48.200128   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 18:10:48.233838   74203 cri.go:89] found id: ""
	I0731 18:10:48.233870   74203 logs.go:276] 0 containers: []
	W0731 18:10:48.233880   74203 logs.go:278] No container was found matching "kindnet"
	I0731 18:10:48.233886   74203 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 18:10:48.233941   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 18:10:48.265786   74203 cri.go:89] found id: ""
	I0731 18:10:48.265812   74203 logs.go:276] 0 containers: []
	W0731 18:10:48.265821   74203 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 18:10:48.265832   74203 logs.go:123] Gathering logs for dmesg ...
	I0731 18:10:48.265846   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 18:10:48.280422   74203 logs.go:123] Gathering logs for describe nodes ...
	I0731 18:10:48.280449   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 18:10:48.346774   74203 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 18:10:48.346796   74203 logs.go:123] Gathering logs for CRI-O ...
	I0731 18:10:48.346808   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 18:10:48.424017   74203 logs.go:123] Gathering logs for container status ...
	I0731 18:10:48.424052   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 18:10:48.464139   74203 logs.go:123] Gathering logs for kubelet ...
	I0731 18:10:48.464166   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 18:10:47.781556   73696 pod_ready.go:102] pod "metrics-server-569cc877fc-64hp4" in "kube-system" namespace has status "Ready":"False"
	I0731 18:10:50.278635   73696 pod_ready.go:102] pod "metrics-server-569cc877fc-64hp4" in "kube-system" namespace has status "Ready":"False"
	I0731 18:10:49.143684   73479 pod_ready.go:102] pod "metrics-server-78fcd8795b-27pkr" in "kube-system" namespace has status "Ready":"False"
	I0731 18:10:51.144631   73479 pod_ready.go:102] pod "metrics-server-78fcd8795b-27pkr" in "kube-system" namespace has status "Ready":"False"
	I0731 18:10:51.272847   73800 pod_ready.go:102] pod "metrics-server-569cc877fc-fzxrw" in "kube-system" namespace has status "Ready":"False"
	I0731 18:10:53.771397   73800 pod_ready.go:102] pod "metrics-server-569cc877fc-fzxrw" in "kube-system" namespace has status "Ready":"False"
	I0731 18:10:51.013681   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:10:51.028745   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 18:10:51.028814   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 18:10:51.062656   74203 cri.go:89] found id: ""
	I0731 18:10:51.062683   74203 logs.go:276] 0 containers: []
	W0731 18:10:51.062691   74203 logs.go:278] No container was found matching "kube-apiserver"
	I0731 18:10:51.062700   74203 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 18:10:51.062749   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 18:10:51.099203   74203 cri.go:89] found id: ""
	I0731 18:10:51.099228   74203 logs.go:276] 0 containers: []
	W0731 18:10:51.099237   74203 logs.go:278] No container was found matching "etcd"
	I0731 18:10:51.099243   74203 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 18:10:51.099310   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 18:10:51.133507   74203 cri.go:89] found id: ""
	I0731 18:10:51.133533   74203 logs.go:276] 0 containers: []
	W0731 18:10:51.133540   74203 logs.go:278] No container was found matching "coredns"
	I0731 18:10:51.133546   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 18:10:51.133596   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 18:10:51.169935   74203 cri.go:89] found id: ""
	I0731 18:10:51.169954   74203 logs.go:276] 0 containers: []
	W0731 18:10:51.169961   74203 logs.go:278] No container was found matching "kube-scheduler"
	I0731 18:10:51.169966   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 18:10:51.170012   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 18:10:51.202877   74203 cri.go:89] found id: ""
	I0731 18:10:51.202903   74203 logs.go:276] 0 containers: []
	W0731 18:10:51.202913   74203 logs.go:278] No container was found matching "kube-proxy"
	I0731 18:10:51.202919   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 18:10:51.202988   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 18:10:51.239913   74203 cri.go:89] found id: ""
	I0731 18:10:51.239939   74203 logs.go:276] 0 containers: []
	W0731 18:10:51.239949   74203 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 18:10:51.239957   74203 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 18:10:51.240018   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 18:10:51.272024   74203 cri.go:89] found id: ""
	I0731 18:10:51.272095   74203 logs.go:276] 0 containers: []
	W0731 18:10:51.272115   74203 logs.go:278] No container was found matching "kindnet"
	I0731 18:10:51.272123   74203 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 18:10:51.272185   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 18:10:51.307016   74203 cri.go:89] found id: ""
	I0731 18:10:51.307043   74203 logs.go:276] 0 containers: []
	W0731 18:10:51.307053   74203 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 18:10:51.307063   74203 logs.go:123] Gathering logs for kubelet ...
	I0731 18:10:51.307079   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 18:10:51.364018   74203 logs.go:123] Gathering logs for dmesg ...
	I0731 18:10:51.364066   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 18:10:51.384277   74203 logs.go:123] Gathering logs for describe nodes ...
	I0731 18:10:51.384303   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 18:10:51.472657   74203 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 18:10:51.472679   74203 logs.go:123] Gathering logs for CRI-O ...
	I0731 18:10:51.472696   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 18:10:51.548408   74203 logs.go:123] Gathering logs for container status ...
	I0731 18:10:51.548439   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 18:10:54.086526   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:10:54.099293   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 18:10:54.099368   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 18:10:54.129927   74203 cri.go:89] found id: ""
	I0731 18:10:54.129954   74203 logs.go:276] 0 containers: []
	W0731 18:10:54.129965   74203 logs.go:278] No container was found matching "kube-apiserver"
	I0731 18:10:54.129972   74203 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 18:10:54.130042   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 18:10:54.166428   74203 cri.go:89] found id: ""
	I0731 18:10:54.166457   74203 logs.go:276] 0 containers: []
	W0731 18:10:54.166468   74203 logs.go:278] No container was found matching "etcd"
	I0731 18:10:54.166476   74203 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 18:10:54.166538   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 18:10:54.204523   74203 cri.go:89] found id: ""
	I0731 18:10:54.204549   74203 logs.go:276] 0 containers: []
	W0731 18:10:54.204556   74203 logs.go:278] No container was found matching "coredns"
	I0731 18:10:54.204562   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 18:10:54.204619   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 18:10:54.241706   74203 cri.go:89] found id: ""
	I0731 18:10:54.241730   74203 logs.go:276] 0 containers: []
	W0731 18:10:54.241737   74203 logs.go:278] No container was found matching "kube-scheduler"
	I0731 18:10:54.241744   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 18:10:54.241802   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 18:10:54.277154   74203 cri.go:89] found id: ""
	I0731 18:10:54.277178   74203 logs.go:276] 0 containers: []
	W0731 18:10:54.277187   74203 logs.go:278] No container was found matching "kube-proxy"
	I0731 18:10:54.277193   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 18:10:54.277255   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 18:10:54.310198   74203 cri.go:89] found id: ""
	I0731 18:10:54.310223   74203 logs.go:276] 0 containers: []
	W0731 18:10:54.310231   74203 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 18:10:54.310237   74203 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 18:10:54.310283   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 18:10:54.344807   74203 cri.go:89] found id: ""
	I0731 18:10:54.344837   74203 logs.go:276] 0 containers: []
	W0731 18:10:54.344847   74203 logs.go:278] No container was found matching "kindnet"
	I0731 18:10:54.344854   74203 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 18:10:54.344915   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 18:10:54.383358   74203 cri.go:89] found id: ""
	I0731 18:10:54.383391   74203 logs.go:276] 0 containers: []
	W0731 18:10:54.383400   74203 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 18:10:54.383410   74203 logs.go:123] Gathering logs for kubelet ...
	I0731 18:10:54.383424   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 18:10:54.431876   74203 logs.go:123] Gathering logs for dmesg ...
	I0731 18:10:54.431908   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 18:10:54.444797   74203 logs.go:123] Gathering logs for describe nodes ...
	I0731 18:10:54.444824   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 18:10:54.518816   74203 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 18:10:54.518839   74203 logs.go:123] Gathering logs for CRI-O ...
	I0731 18:10:54.518855   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 18:10:54.600072   74203 logs.go:123] Gathering logs for container status ...
	I0731 18:10:54.600109   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 18:10:52.279006   73696 pod_ready.go:102] pod "metrics-server-569cc877fc-64hp4" in "kube-system" namespace has status "Ready":"False"
	I0731 18:10:54.279520   73696 pod_ready.go:102] pod "metrics-server-569cc877fc-64hp4" in "kube-system" namespace has status "Ready":"False"
	I0731 18:10:53.643093   73479 pod_ready.go:102] pod "metrics-server-78fcd8795b-27pkr" in "kube-system" namespace has status "Ready":"False"
	I0731 18:10:56.143250   73479 pod_ready.go:102] pod "metrics-server-78fcd8795b-27pkr" in "kube-system" namespace has status "Ready":"False"
	I0731 18:10:56.272955   73800 pod_ready.go:102] pod "metrics-server-569cc877fc-fzxrw" in "kube-system" namespace has status "Ready":"False"
	I0731 18:10:58.771584   73800 pod_ready.go:102] pod "metrics-server-569cc877fc-fzxrw" in "kube-system" namespace has status "Ready":"False"
	I0731 18:10:57.141070   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:10:57.155903   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 18:10:57.155975   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 18:10:57.189406   74203 cri.go:89] found id: ""
	I0731 18:10:57.189428   74203 logs.go:276] 0 containers: []
	W0731 18:10:57.189435   74203 logs.go:278] No container was found matching "kube-apiserver"
	I0731 18:10:57.189441   74203 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 18:10:57.189510   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 18:10:57.221507   74203 cri.go:89] found id: ""
	I0731 18:10:57.221531   74203 logs.go:276] 0 containers: []
	W0731 18:10:57.221540   74203 logs.go:278] No container was found matching "etcd"
	I0731 18:10:57.221547   74203 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 18:10:57.221614   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 18:10:57.257843   74203 cri.go:89] found id: ""
	I0731 18:10:57.257868   74203 logs.go:276] 0 containers: []
	W0731 18:10:57.257880   74203 logs.go:278] No container was found matching "coredns"
	I0731 18:10:57.257887   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 18:10:57.257939   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 18:10:57.292697   74203 cri.go:89] found id: ""
	I0731 18:10:57.292728   74203 logs.go:276] 0 containers: []
	W0731 18:10:57.292737   74203 logs.go:278] No container was found matching "kube-scheduler"
	I0731 18:10:57.292744   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 18:10:57.292802   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 18:10:57.325705   74203 cri.go:89] found id: ""
	I0731 18:10:57.325728   74203 logs.go:276] 0 containers: []
	W0731 18:10:57.325735   74203 logs.go:278] No container was found matching "kube-proxy"
	I0731 18:10:57.325740   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 18:10:57.325787   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 18:10:57.357436   74203 cri.go:89] found id: ""
	I0731 18:10:57.357463   74203 logs.go:276] 0 containers: []
	W0731 18:10:57.357471   74203 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 18:10:57.357477   74203 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 18:10:57.357534   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 18:10:57.388215   74203 cri.go:89] found id: ""
	I0731 18:10:57.388240   74203 logs.go:276] 0 containers: []
	W0731 18:10:57.388249   74203 logs.go:278] No container was found matching "kindnet"
	I0731 18:10:57.388256   74203 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 18:10:57.388315   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 18:10:57.419609   74203 cri.go:89] found id: ""
	I0731 18:10:57.419631   74203 logs.go:276] 0 containers: []
	W0731 18:10:57.419643   74203 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 18:10:57.419652   74203 logs.go:123] Gathering logs for CRI-O ...
	I0731 18:10:57.419663   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 18:10:57.497157   74203 logs.go:123] Gathering logs for container status ...
	I0731 18:10:57.497188   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 18:10:57.533512   74203 logs.go:123] Gathering logs for kubelet ...
	I0731 18:10:57.533552   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 18:10:57.587866   74203 logs.go:123] Gathering logs for dmesg ...
	I0731 18:10:57.587904   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 18:10:57.601191   74203 logs.go:123] Gathering logs for describe nodes ...
	I0731 18:10:57.601222   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 18:10:57.681899   74203 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 18:11:00.182160   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:11:00.195509   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 18:11:00.195598   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 18:11:00.230650   74203 cri.go:89] found id: ""
	I0731 18:11:00.230674   74203 logs.go:276] 0 containers: []
	W0731 18:11:00.230682   74203 logs.go:278] No container was found matching "kube-apiserver"
	I0731 18:11:00.230689   74203 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 18:11:00.230747   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 18:11:00.268629   74203 cri.go:89] found id: ""
	I0731 18:11:00.268656   74203 logs.go:276] 0 containers: []
	W0731 18:11:00.268666   74203 logs.go:278] No container was found matching "etcd"
	I0731 18:11:00.268672   74203 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 18:11:00.268740   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 18:11:00.301805   74203 cri.go:89] found id: ""
	I0731 18:11:00.301827   74203 logs.go:276] 0 containers: []
	W0731 18:11:00.301836   74203 logs.go:278] No container was found matching "coredns"
	I0731 18:11:00.301843   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 18:11:00.301901   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 18:11:00.333844   74203 cri.go:89] found id: ""
	I0731 18:11:00.333871   74203 logs.go:276] 0 containers: []
	W0731 18:11:00.333882   74203 logs.go:278] No container was found matching "kube-scheduler"
	I0731 18:11:00.333889   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 18:11:00.333949   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 18:10:56.779307   73696 pod_ready.go:102] pod "metrics-server-569cc877fc-64hp4" in "kube-system" namespace has status "Ready":"False"
	I0731 18:10:58.779655   73696 pod_ready.go:102] pod "metrics-server-569cc877fc-64hp4" in "kube-system" namespace has status "Ready":"False"
	I0731 18:10:58.643375   73479 pod_ready.go:102] pod "metrics-server-78fcd8795b-27pkr" in "kube-system" namespace has status "Ready":"False"
	I0731 18:11:00.643713   73479 pod_ready.go:102] pod "metrics-server-78fcd8795b-27pkr" in "kube-system" namespace has status "Ready":"False"
	I0731 18:11:01.272195   73800 pod_ready.go:102] pod "metrics-server-569cc877fc-fzxrw" in "kube-system" namespace has status "Ready":"False"
	I0731 18:11:03.272739   73800 pod_ready.go:102] pod "metrics-server-569cc877fc-fzxrw" in "kube-system" namespace has status "Ready":"False"
	I0731 18:11:00.366250   74203 cri.go:89] found id: ""
	I0731 18:11:00.366278   74203 logs.go:276] 0 containers: []
	W0731 18:11:00.366288   74203 logs.go:278] No container was found matching "kube-proxy"
	I0731 18:11:00.366295   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 18:11:00.366358   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 18:11:00.399301   74203 cri.go:89] found id: ""
	I0731 18:11:00.399325   74203 logs.go:276] 0 containers: []
	W0731 18:11:00.399335   74203 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 18:11:00.399342   74203 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 18:11:00.399405   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 18:11:00.432182   74203 cri.go:89] found id: ""
	I0731 18:11:00.432207   74203 logs.go:276] 0 containers: []
	W0731 18:11:00.432218   74203 logs.go:278] No container was found matching "kindnet"
	I0731 18:11:00.432224   74203 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 18:11:00.432284   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 18:11:00.465395   74203 cri.go:89] found id: ""
	I0731 18:11:00.465423   74203 logs.go:276] 0 containers: []
	W0731 18:11:00.465432   74203 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 18:11:00.465440   74203 logs.go:123] Gathering logs for kubelet ...
	I0731 18:11:00.465453   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 18:11:00.516042   74203 logs.go:123] Gathering logs for dmesg ...
	I0731 18:11:00.516077   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 18:11:00.528621   74203 logs.go:123] Gathering logs for describe nodes ...
	I0731 18:11:00.528647   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 18:11:00.600297   74203 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 18:11:00.600322   74203 logs.go:123] Gathering logs for CRI-O ...
	I0731 18:11:00.600339   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 18:11:00.680368   74203 logs.go:123] Gathering logs for container status ...
	I0731 18:11:00.680399   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 18:11:03.217684   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:11:03.230691   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 18:11:03.230752   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 18:11:03.264882   74203 cri.go:89] found id: ""
	I0731 18:11:03.264910   74203 logs.go:276] 0 containers: []
	W0731 18:11:03.264918   74203 logs.go:278] No container was found matching "kube-apiserver"
	I0731 18:11:03.264924   74203 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 18:11:03.264976   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 18:11:03.301608   74203 cri.go:89] found id: ""
	I0731 18:11:03.301733   74203 logs.go:276] 0 containers: []
	W0731 18:11:03.301754   74203 logs.go:278] No container was found matching "etcd"
	I0731 18:11:03.301765   74203 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 18:11:03.301838   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 18:11:03.335077   74203 cri.go:89] found id: ""
	I0731 18:11:03.335102   74203 logs.go:276] 0 containers: []
	W0731 18:11:03.335121   74203 logs.go:278] No container was found matching "coredns"
	I0731 18:11:03.335128   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 18:11:03.335196   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 18:11:03.370755   74203 cri.go:89] found id: ""
	I0731 18:11:03.370783   74203 logs.go:276] 0 containers: []
	W0731 18:11:03.370794   74203 logs.go:278] No container was found matching "kube-scheduler"
	I0731 18:11:03.370802   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 18:11:03.370862   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 18:11:03.403004   74203 cri.go:89] found id: ""
	I0731 18:11:03.403035   74203 logs.go:276] 0 containers: []
	W0731 18:11:03.403045   74203 logs.go:278] No container was found matching "kube-proxy"
	I0731 18:11:03.403052   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 18:11:03.403125   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 18:11:03.437169   74203 cri.go:89] found id: ""
	I0731 18:11:03.437209   74203 logs.go:276] 0 containers: []
	W0731 18:11:03.437219   74203 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 18:11:03.437235   74203 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 18:11:03.437296   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 18:11:03.469956   74203 cri.go:89] found id: ""
	I0731 18:11:03.469981   74203 logs.go:276] 0 containers: []
	W0731 18:11:03.469991   74203 logs.go:278] No container was found matching "kindnet"
	I0731 18:11:03.469998   74203 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 18:11:03.470056   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 18:11:03.503850   74203 cri.go:89] found id: ""
	I0731 18:11:03.503878   74203 logs.go:276] 0 containers: []
	W0731 18:11:03.503888   74203 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 18:11:03.503898   74203 logs.go:123] Gathering logs for kubelet ...
	I0731 18:11:03.503913   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 18:11:03.554993   74203 logs.go:123] Gathering logs for dmesg ...
	I0731 18:11:03.555036   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 18:11:03.567898   74203 logs.go:123] Gathering logs for describe nodes ...
	I0731 18:11:03.567925   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 18:11:03.630151   74203 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 18:11:03.630188   74203 logs.go:123] Gathering logs for CRI-O ...
	I0731 18:11:03.630207   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 18:11:03.708552   74203 logs.go:123] Gathering logs for container status ...
	I0731 18:11:03.708596   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 18:11:01.278830   73696 pod_ready.go:102] pod "metrics-server-569cc877fc-64hp4" in "kube-system" namespace has status "Ready":"False"
	I0731 18:11:03.278880   73696 pod_ready.go:102] pod "metrics-server-569cc877fc-64hp4" in "kube-system" namespace has status "Ready":"False"
	I0731 18:11:05.778296   73696 pod_ready.go:102] pod "metrics-server-569cc877fc-64hp4" in "kube-system" namespace has status "Ready":"False"
	I0731 18:11:03.143289   73479 pod_ready.go:102] pod "metrics-server-78fcd8795b-27pkr" in "kube-system" namespace has status "Ready":"False"
	I0731 18:11:05.152015   73479 pod_ready.go:102] pod "metrics-server-78fcd8795b-27pkr" in "kube-system" namespace has status "Ready":"False"
	I0731 18:11:05.771810   73800 pod_ready.go:102] pod "metrics-server-569cc877fc-fzxrw" in "kube-system" namespace has status "Ready":"False"
	I0731 18:11:08.271205   73800 pod_ready.go:102] pod "metrics-server-569cc877fc-fzxrw" in "kube-system" namespace has status "Ready":"False"
	I0731 18:11:06.249728   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:11:06.261923   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 18:11:06.261998   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 18:11:06.296249   74203 cri.go:89] found id: ""
	I0731 18:11:06.296276   74203 logs.go:276] 0 containers: []
	W0731 18:11:06.296286   74203 logs.go:278] No container was found matching "kube-apiserver"
	I0731 18:11:06.296292   74203 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 18:11:06.296356   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 18:11:06.329355   74203 cri.go:89] found id: ""
	I0731 18:11:06.329381   74203 logs.go:276] 0 containers: []
	W0731 18:11:06.329389   74203 logs.go:278] No container was found matching "etcd"
	I0731 18:11:06.329395   74203 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 18:11:06.329443   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 18:11:06.362585   74203 cri.go:89] found id: ""
	I0731 18:11:06.362618   74203 logs.go:276] 0 containers: []
	W0731 18:11:06.362630   74203 logs.go:278] No container was found matching "coredns"
	I0731 18:11:06.362643   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 18:11:06.362704   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 18:11:06.396489   74203 cri.go:89] found id: ""
	I0731 18:11:06.396514   74203 logs.go:276] 0 containers: []
	W0731 18:11:06.396521   74203 logs.go:278] No container was found matching "kube-scheduler"
	I0731 18:11:06.396527   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 18:11:06.396590   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 18:11:06.428859   74203 cri.go:89] found id: ""
	I0731 18:11:06.428888   74203 logs.go:276] 0 containers: []
	W0731 18:11:06.428897   74203 logs.go:278] No container was found matching "kube-proxy"
	I0731 18:11:06.428903   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 18:11:06.428960   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 18:11:06.468817   74203 cri.go:89] found id: ""
	I0731 18:11:06.468846   74203 logs.go:276] 0 containers: []
	W0731 18:11:06.468856   74203 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 18:11:06.468864   74203 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 18:11:06.468924   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 18:11:06.499975   74203 cri.go:89] found id: ""
	I0731 18:11:06.500000   74203 logs.go:276] 0 containers: []
	W0731 18:11:06.500008   74203 logs.go:278] No container was found matching "kindnet"
	I0731 18:11:06.500013   74203 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 18:11:06.500067   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 18:11:06.537410   74203 cri.go:89] found id: ""
	I0731 18:11:06.537440   74203 logs.go:276] 0 containers: []
	W0731 18:11:06.537451   74203 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 18:11:06.537461   74203 logs.go:123] Gathering logs for kubelet ...
	I0731 18:11:06.537476   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 18:11:06.589664   74203 logs.go:123] Gathering logs for dmesg ...
	I0731 18:11:06.589709   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 18:11:06.603978   74203 logs.go:123] Gathering logs for describe nodes ...
	I0731 18:11:06.604005   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 18:11:06.673436   74203 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 18:11:06.673454   74203 logs.go:123] Gathering logs for CRI-O ...
	I0731 18:11:06.673465   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 18:11:06.757101   74203 logs.go:123] Gathering logs for container status ...
	I0731 18:11:06.757143   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 18:11:09.299562   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:11:09.311910   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 18:11:09.311971   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 18:11:09.346517   74203 cri.go:89] found id: ""
	I0731 18:11:09.346545   74203 logs.go:276] 0 containers: []
	W0731 18:11:09.346555   74203 logs.go:278] No container was found matching "kube-apiserver"
	I0731 18:11:09.346562   74203 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 18:11:09.346634   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 18:11:09.377688   74203 cri.go:89] found id: ""
	I0731 18:11:09.377713   74203 logs.go:276] 0 containers: []
	W0731 18:11:09.377720   74203 logs.go:278] No container was found matching "etcd"
	I0731 18:11:09.377726   74203 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 18:11:09.377787   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 18:11:09.412149   74203 cri.go:89] found id: ""
	I0731 18:11:09.412176   74203 logs.go:276] 0 containers: []
	W0731 18:11:09.412186   74203 logs.go:278] No container was found matching "coredns"
	I0731 18:11:09.412193   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 18:11:09.412259   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 18:11:09.444134   74203 cri.go:89] found id: ""
	I0731 18:11:09.444162   74203 logs.go:276] 0 containers: []
	W0731 18:11:09.444172   74203 logs.go:278] No container was found matching "kube-scheduler"
	I0731 18:11:09.444178   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 18:11:09.444233   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 18:11:09.481407   74203 cri.go:89] found id: ""
	I0731 18:11:09.481436   74203 logs.go:276] 0 containers: []
	W0731 18:11:09.481447   74203 logs.go:278] No container was found matching "kube-proxy"
	I0731 18:11:09.481453   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 18:11:09.481513   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 18:11:09.514926   74203 cri.go:89] found id: ""
	I0731 18:11:09.514950   74203 logs.go:276] 0 containers: []
	W0731 18:11:09.514967   74203 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 18:11:09.514974   74203 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 18:11:09.515036   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 18:11:09.547253   74203 cri.go:89] found id: ""
	I0731 18:11:09.547278   74203 logs.go:276] 0 containers: []
	W0731 18:11:09.547285   74203 logs.go:278] No container was found matching "kindnet"
	I0731 18:11:09.547291   74203 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 18:11:09.547376   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 18:11:09.587585   74203 cri.go:89] found id: ""
	I0731 18:11:09.587614   74203 logs.go:276] 0 containers: []
	W0731 18:11:09.587622   74203 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 18:11:09.587632   74203 logs.go:123] Gathering logs for kubelet ...
	I0731 18:11:09.587646   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 18:11:09.642024   74203 logs.go:123] Gathering logs for dmesg ...
	I0731 18:11:09.642057   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 18:11:09.655244   74203 logs.go:123] Gathering logs for describe nodes ...
	I0731 18:11:09.655270   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 18:11:09.721446   74203 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 18:11:09.721474   74203 logs.go:123] Gathering logs for CRI-O ...
	I0731 18:11:09.721489   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 18:11:09.803315   74203 logs.go:123] Gathering logs for container status ...
	I0731 18:11:09.803349   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 18:11:07.779195   73696 pod_ready.go:102] pod "metrics-server-569cc877fc-64hp4" in "kube-system" namespace has status "Ready":"False"
	I0731 18:11:10.278028   73696 pod_ready.go:102] pod "metrics-server-569cc877fc-64hp4" in "kube-system" namespace has status "Ready":"False"
	I0731 18:11:07.643242   73479 pod_ready.go:102] pod "metrics-server-78fcd8795b-27pkr" in "kube-system" namespace has status "Ready":"False"
	I0731 18:11:10.143895   73479 pod_ready.go:102] pod "metrics-server-78fcd8795b-27pkr" in "kube-system" namespace has status "Ready":"False"
	I0731 18:11:10.271515   73800 pod_ready.go:102] pod "metrics-server-569cc877fc-fzxrw" in "kube-system" namespace has status "Ready":"False"
	I0731 18:11:12.771322   73800 pod_ready.go:102] pod "metrics-server-569cc877fc-fzxrw" in "kube-system" namespace has status "Ready":"False"
	I0731 18:11:12.344355   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:11:12.357122   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 18:11:12.357194   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 18:11:12.392237   74203 cri.go:89] found id: ""
	I0731 18:11:12.392258   74203 logs.go:276] 0 containers: []
	W0731 18:11:12.392267   74203 logs.go:278] No container was found matching "kube-apiserver"
	I0731 18:11:12.392272   74203 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 18:11:12.392339   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 18:11:12.424490   74203 cri.go:89] found id: ""
	I0731 18:11:12.424514   74203 logs.go:276] 0 containers: []
	W0731 18:11:12.424523   74203 logs.go:278] No container was found matching "etcd"
	I0731 18:11:12.424529   74203 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 18:11:12.424587   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 18:11:12.458438   74203 cri.go:89] found id: ""
	I0731 18:11:12.458467   74203 logs.go:276] 0 containers: []
	W0731 18:11:12.458477   74203 logs.go:278] No container was found matching "coredns"
	I0731 18:11:12.458483   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 18:11:12.458545   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 18:11:12.495343   74203 cri.go:89] found id: ""
	I0731 18:11:12.495371   74203 logs.go:276] 0 containers: []
	W0731 18:11:12.495383   74203 logs.go:278] No container was found matching "kube-scheduler"
	I0731 18:11:12.495391   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 18:11:12.495455   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 18:11:12.527285   74203 cri.go:89] found id: ""
	I0731 18:11:12.527314   74203 logs.go:276] 0 containers: []
	W0731 18:11:12.527324   74203 logs.go:278] No container was found matching "kube-proxy"
	I0731 18:11:12.527334   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 18:11:12.527393   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 18:11:12.560341   74203 cri.go:89] found id: ""
	I0731 18:11:12.560369   74203 logs.go:276] 0 containers: []
	W0731 18:11:12.560379   74203 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 18:11:12.560387   74203 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 18:11:12.560444   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 18:11:12.595084   74203 cri.go:89] found id: ""
	I0731 18:11:12.595120   74203 logs.go:276] 0 containers: []
	W0731 18:11:12.595133   74203 logs.go:278] No container was found matching "kindnet"
	I0731 18:11:12.595141   74203 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 18:11:12.595215   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 18:11:12.630666   74203 cri.go:89] found id: ""
	I0731 18:11:12.630692   74203 logs.go:276] 0 containers: []
	W0731 18:11:12.630702   74203 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 18:11:12.630711   74203 logs.go:123] Gathering logs for kubelet ...
	I0731 18:11:12.630727   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 18:11:12.683588   74203 logs.go:123] Gathering logs for dmesg ...
	I0731 18:11:12.683620   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 18:11:12.696899   74203 logs.go:123] Gathering logs for describe nodes ...
	I0731 18:11:12.696925   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 18:11:12.757815   74203 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 18:11:12.757837   74203 logs.go:123] Gathering logs for CRI-O ...
	I0731 18:11:12.757870   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 18:11:12.834888   74203 logs.go:123] Gathering logs for container status ...
	I0731 18:11:12.834927   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 18:11:12.278464   73696 pod_ready.go:102] pod "metrics-server-569cc877fc-64hp4" in "kube-system" namespace has status "Ready":"False"
	I0731 18:11:14.279031   73696 pod_ready.go:102] pod "metrics-server-569cc877fc-64hp4" in "kube-system" namespace has status "Ready":"False"
	I0731 18:11:12.643960   73479 pod_ready.go:102] pod "metrics-server-78fcd8795b-27pkr" in "kube-system" namespace has status "Ready":"False"
	I0731 18:11:15.142811   73479 pod_ready.go:102] pod "metrics-server-78fcd8795b-27pkr" in "kube-system" namespace has status "Ready":"False"
	I0731 18:11:14.771367   73800 pod_ready.go:102] pod "metrics-server-569cc877fc-fzxrw" in "kube-system" namespace has status "Ready":"False"
	I0731 18:11:16.772010   73800 pod_ready.go:102] pod "metrics-server-569cc877fc-fzxrw" in "kube-system" namespace has status "Ready":"False"
	I0731 18:11:19.271857   73800 pod_ready.go:102] pod "metrics-server-569cc877fc-fzxrw" in "kube-system" namespace has status "Ready":"False"
	I0731 18:11:15.372797   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:11:15.386268   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 18:11:15.386356   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 18:11:15.420446   74203 cri.go:89] found id: ""
	I0731 18:11:15.420477   74203 logs.go:276] 0 containers: []
	W0731 18:11:15.420488   74203 logs.go:278] No container was found matching "kube-apiserver"
	I0731 18:11:15.420497   74203 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 18:11:15.420556   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 18:11:15.456092   74203 cri.go:89] found id: ""
	I0731 18:11:15.456118   74203 logs.go:276] 0 containers: []
	W0731 18:11:15.456129   74203 logs.go:278] No container was found matching "etcd"
	I0731 18:11:15.456136   74203 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 18:11:15.456194   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 18:11:15.488277   74203 cri.go:89] found id: ""
	I0731 18:11:15.488304   74203 logs.go:276] 0 containers: []
	W0731 18:11:15.488316   74203 logs.go:278] No container was found matching "coredns"
	I0731 18:11:15.488323   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 18:11:15.488384   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 18:11:15.520701   74203 cri.go:89] found id: ""
	I0731 18:11:15.520730   74203 logs.go:276] 0 containers: []
	W0731 18:11:15.520741   74203 logs.go:278] No container was found matching "kube-scheduler"
	I0731 18:11:15.520749   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 18:11:15.520818   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 18:11:15.552831   74203 cri.go:89] found id: ""
	I0731 18:11:15.552854   74203 logs.go:276] 0 containers: []
	W0731 18:11:15.552862   74203 logs.go:278] No container was found matching "kube-proxy"
	I0731 18:11:15.552867   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 18:11:15.552920   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 18:11:15.589161   74203 cri.go:89] found id: ""
	I0731 18:11:15.589191   74203 logs.go:276] 0 containers: []
	W0731 18:11:15.589203   74203 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 18:11:15.589210   74203 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 18:11:15.589274   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 18:11:15.622501   74203 cri.go:89] found id: ""
	I0731 18:11:15.622532   74203 logs.go:276] 0 containers: []
	W0731 18:11:15.622544   74203 logs.go:278] No container was found matching "kindnet"
	I0731 18:11:15.622552   74203 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 18:11:15.622611   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 18:11:15.654772   74203 cri.go:89] found id: ""
	I0731 18:11:15.654801   74203 logs.go:276] 0 containers: []
	W0731 18:11:15.654815   74203 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 18:11:15.654826   74203 logs.go:123] Gathering logs for kubelet ...
	I0731 18:11:15.654843   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 18:11:15.703103   74203 logs.go:123] Gathering logs for dmesg ...
	I0731 18:11:15.703148   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 18:11:15.716620   74203 logs.go:123] Gathering logs for describe nodes ...
	I0731 18:11:15.716645   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 18:11:15.783391   74203 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 18:11:15.783416   74203 logs.go:123] Gathering logs for CRI-O ...
	I0731 18:11:15.783431   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 18:11:15.857462   74203 logs.go:123] Gathering logs for container status ...
	I0731 18:11:15.857495   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 18:11:18.394223   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:11:18.407297   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 18:11:18.407374   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 18:11:18.439542   74203 cri.go:89] found id: ""
	I0731 18:11:18.439564   74203 logs.go:276] 0 containers: []
	W0731 18:11:18.439572   74203 logs.go:278] No container was found matching "kube-apiserver"
	I0731 18:11:18.439578   74203 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 18:11:18.439625   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 18:11:18.471838   74203 cri.go:89] found id: ""
	I0731 18:11:18.471863   74203 logs.go:276] 0 containers: []
	W0731 18:11:18.471873   74203 logs.go:278] No container was found matching "etcd"
	I0731 18:11:18.471883   74203 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 18:11:18.471943   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 18:11:18.505325   74203 cri.go:89] found id: ""
	I0731 18:11:18.505355   74203 logs.go:276] 0 containers: []
	W0731 18:11:18.505365   74203 logs.go:278] No container was found matching "coredns"
	I0731 18:11:18.505372   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 18:11:18.505432   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 18:11:18.536155   74203 cri.go:89] found id: ""
	I0731 18:11:18.536180   74203 logs.go:276] 0 containers: []
	W0731 18:11:18.536189   74203 logs.go:278] No container was found matching "kube-scheduler"
	I0731 18:11:18.536194   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 18:11:18.536241   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 18:11:18.569301   74203 cri.go:89] found id: ""
	I0731 18:11:18.569329   74203 logs.go:276] 0 containers: []
	W0731 18:11:18.569339   74203 logs.go:278] No container was found matching "kube-proxy"
	I0731 18:11:18.569344   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 18:11:18.569398   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 18:11:18.603053   74203 cri.go:89] found id: ""
	I0731 18:11:18.603079   74203 logs.go:276] 0 containers: []
	W0731 18:11:18.603087   74203 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 18:11:18.603092   74203 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 18:11:18.603170   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 18:11:18.636259   74203 cri.go:89] found id: ""
	I0731 18:11:18.636287   74203 logs.go:276] 0 containers: []
	W0731 18:11:18.636298   74203 logs.go:278] No container was found matching "kindnet"
	I0731 18:11:18.636305   74203 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 18:11:18.636361   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 18:11:18.667839   74203 cri.go:89] found id: ""
	I0731 18:11:18.667863   74203 logs.go:276] 0 containers: []
	W0731 18:11:18.667873   74203 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 18:11:18.667883   74203 logs.go:123] Gathering logs for dmesg ...
	I0731 18:11:18.667897   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 18:11:18.681005   74203 logs.go:123] Gathering logs for describe nodes ...
	I0731 18:11:18.681030   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 18:11:18.747793   74203 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 18:11:18.747875   74203 logs.go:123] Gathering logs for CRI-O ...
	I0731 18:11:18.747892   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 18:11:18.828970   74203 logs.go:123] Gathering logs for container status ...
	I0731 18:11:18.829005   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 18:11:18.866724   74203 logs.go:123] Gathering logs for kubelet ...
	I0731 18:11:18.866749   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 18:11:16.279368   73696 pod_ready.go:102] pod "metrics-server-569cc877fc-64hp4" in "kube-system" namespace has status "Ready":"False"
	I0731 18:11:18.778730   73696 pod_ready.go:102] pod "metrics-server-569cc877fc-64hp4" in "kube-system" namespace has status "Ready":"False"
	I0731 18:11:20.779465   73696 pod_ready.go:102] pod "metrics-server-569cc877fc-64hp4" in "kube-system" namespace has status "Ready":"False"
	I0731 18:11:17.144041   73479 pod_ready.go:102] pod "metrics-server-78fcd8795b-27pkr" in "kube-system" namespace has status "Ready":"False"
	I0731 18:11:19.645356   73479 pod_ready.go:102] pod "metrics-server-78fcd8795b-27pkr" in "kube-system" namespace has status "Ready":"False"
	I0731 18:11:21.272651   73800 pod_ready.go:102] pod "metrics-server-569cc877fc-fzxrw" in "kube-system" namespace has status "Ready":"False"
	I0731 18:11:23.771240   73800 pod_ready.go:102] pod "metrics-server-569cc877fc-fzxrw" in "kube-system" namespace has status "Ready":"False"
	I0731 18:11:21.416598   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:11:21.431968   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 18:11:21.432027   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 18:11:21.469670   74203 cri.go:89] found id: ""
	I0731 18:11:21.469696   74203 logs.go:276] 0 containers: []
	W0731 18:11:21.469703   74203 logs.go:278] No container was found matching "kube-apiserver"
	I0731 18:11:21.469709   74203 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 18:11:21.469762   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 18:11:21.508461   74203 cri.go:89] found id: ""
	I0731 18:11:21.508490   74203 logs.go:276] 0 containers: []
	W0731 18:11:21.508500   74203 logs.go:278] No container was found matching "etcd"
	I0731 18:11:21.508506   74203 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 18:11:21.508570   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 18:11:21.548101   74203 cri.go:89] found id: ""
	I0731 18:11:21.548127   74203 logs.go:276] 0 containers: []
	W0731 18:11:21.548136   74203 logs.go:278] No container was found matching "coredns"
	I0731 18:11:21.548142   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 18:11:21.548204   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 18:11:21.582617   74203 cri.go:89] found id: ""
	I0731 18:11:21.582646   74203 logs.go:276] 0 containers: []
	W0731 18:11:21.582653   74203 logs.go:278] No container was found matching "kube-scheduler"
	I0731 18:11:21.582659   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 18:11:21.582712   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 18:11:21.614185   74203 cri.go:89] found id: ""
	I0731 18:11:21.614210   74203 logs.go:276] 0 containers: []
	W0731 18:11:21.614218   74203 logs.go:278] No container was found matching "kube-proxy"
	I0731 18:11:21.614223   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 18:11:21.614278   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 18:11:21.647596   74203 cri.go:89] found id: ""
	I0731 18:11:21.647619   74203 logs.go:276] 0 containers: []
	W0731 18:11:21.647629   74203 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 18:11:21.647636   74203 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 18:11:21.647693   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 18:11:21.680106   74203 cri.go:89] found id: ""
	I0731 18:11:21.680132   74203 logs.go:276] 0 containers: []
	W0731 18:11:21.680142   74203 logs.go:278] No container was found matching "kindnet"
	I0731 18:11:21.680149   74203 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 18:11:21.680208   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 18:11:21.714708   74203 cri.go:89] found id: ""
	I0731 18:11:21.714733   74203 logs.go:276] 0 containers: []
	W0731 18:11:21.714742   74203 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 18:11:21.714754   74203 logs.go:123] Gathering logs for describe nodes ...
	I0731 18:11:21.714779   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 18:11:21.783425   74203 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 18:11:21.783448   74203 logs.go:123] Gathering logs for CRI-O ...
	I0731 18:11:21.783462   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 18:11:21.859943   74203 logs.go:123] Gathering logs for container status ...
	I0731 18:11:21.859980   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 18:11:21.898374   74203 logs.go:123] Gathering logs for kubelet ...
	I0731 18:11:21.898405   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 18:11:21.945753   74203 logs.go:123] Gathering logs for dmesg ...
	I0731 18:11:21.945784   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 18:11:24.459481   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:11:24.471376   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 18:11:24.471435   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 18:11:24.506474   74203 cri.go:89] found id: ""
	I0731 18:11:24.506502   74203 logs.go:276] 0 containers: []
	W0731 18:11:24.506511   74203 logs.go:278] No container was found matching "kube-apiserver"
	I0731 18:11:24.506516   74203 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 18:11:24.506572   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 18:11:24.547298   74203 cri.go:89] found id: ""
	I0731 18:11:24.547324   74203 logs.go:276] 0 containers: []
	W0731 18:11:24.547332   74203 logs.go:278] No container was found matching "etcd"
	I0731 18:11:24.547337   74203 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 18:11:24.547402   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 18:11:24.579912   74203 cri.go:89] found id: ""
	I0731 18:11:24.579944   74203 logs.go:276] 0 containers: []
	W0731 18:11:24.579955   74203 logs.go:278] No container was found matching "coredns"
	I0731 18:11:24.579963   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 18:11:24.580032   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 18:11:24.613754   74203 cri.go:89] found id: ""
	I0731 18:11:24.613782   74203 logs.go:276] 0 containers: []
	W0731 18:11:24.613791   74203 logs.go:278] No container was found matching "kube-scheduler"
	I0731 18:11:24.613799   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 18:11:24.613859   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 18:11:24.649782   74203 cri.go:89] found id: ""
	I0731 18:11:24.649811   74203 logs.go:276] 0 containers: []
	W0731 18:11:24.649822   74203 logs.go:278] No container was found matching "kube-proxy"
	I0731 18:11:24.649829   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 18:11:24.649888   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 18:11:24.689232   74203 cri.go:89] found id: ""
	I0731 18:11:24.689264   74203 logs.go:276] 0 containers: []
	W0731 18:11:24.689274   74203 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 18:11:24.689283   74203 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 18:11:24.689361   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 18:11:24.727861   74203 cri.go:89] found id: ""
	I0731 18:11:24.727894   74203 logs.go:276] 0 containers: []
	W0731 18:11:24.727902   74203 logs.go:278] No container was found matching "kindnet"
	I0731 18:11:24.727924   74203 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 18:11:24.727983   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 18:11:24.763839   74203 cri.go:89] found id: ""
	I0731 18:11:24.763866   74203 logs.go:276] 0 containers: []
	W0731 18:11:24.763876   74203 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 18:11:24.763886   74203 logs.go:123] Gathering logs for CRI-O ...
	I0731 18:11:24.763901   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 18:11:24.841090   74203 logs.go:123] Gathering logs for container status ...
	I0731 18:11:24.841131   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 18:11:24.877206   74203 logs.go:123] Gathering logs for kubelet ...
	I0731 18:11:24.877231   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 18:11:24.926149   74203 logs.go:123] Gathering logs for dmesg ...
	I0731 18:11:24.926180   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 18:11:24.938795   74203 logs.go:123] Gathering logs for describe nodes ...
	I0731 18:11:24.938822   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 18:11:25.008349   74203 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 18:11:23.279256   73696 pod_ready.go:102] pod "metrics-server-569cc877fc-64hp4" in "kube-system" namespace has status "Ready":"False"
	I0731 18:11:25.778644   73696 pod_ready.go:102] pod "metrics-server-569cc877fc-64hp4" in "kube-system" namespace has status "Ready":"False"
	I0731 18:11:22.143312   73479 pod_ready.go:102] pod "metrics-server-78fcd8795b-27pkr" in "kube-system" namespace has status "Ready":"False"
	I0731 18:11:24.144259   73479 pod_ready.go:102] pod "metrics-server-78fcd8795b-27pkr" in "kube-system" namespace has status "Ready":"False"
	I0731 18:11:26.144310   73479 pod_ready.go:102] pod "metrics-server-78fcd8795b-27pkr" in "kube-system" namespace has status "Ready":"False"
	I0731 18:11:25.771403   73800 pod_ready.go:102] pod "metrics-server-569cc877fc-fzxrw" in "kube-system" namespace has status "Ready":"False"
	I0731 18:11:28.270613   73800 pod_ready.go:102] pod "metrics-server-569cc877fc-fzxrw" in "kube-system" namespace has status "Ready":"False"
	I0731 18:11:27.509192   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:11:27.522506   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 18:11:27.522582   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 18:11:27.557915   74203 cri.go:89] found id: ""
	I0731 18:11:27.557943   74203 logs.go:276] 0 containers: []
	W0731 18:11:27.557954   74203 logs.go:278] No container was found matching "kube-apiserver"
	I0731 18:11:27.557962   74203 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 18:11:27.558019   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 18:11:27.594295   74203 cri.go:89] found id: ""
	I0731 18:11:27.594322   74203 logs.go:276] 0 containers: []
	W0731 18:11:27.594332   74203 logs.go:278] No container was found matching "etcd"
	I0731 18:11:27.594348   74203 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 18:11:27.594410   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 18:11:27.626830   74203 cri.go:89] found id: ""
	I0731 18:11:27.626857   74203 logs.go:276] 0 containers: []
	W0731 18:11:27.626868   74203 logs.go:278] No container was found matching "coredns"
	I0731 18:11:27.626875   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 18:11:27.626964   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 18:11:27.662062   74203 cri.go:89] found id: ""
	I0731 18:11:27.662084   74203 logs.go:276] 0 containers: []
	W0731 18:11:27.662092   74203 logs.go:278] No container was found matching "kube-scheduler"
	I0731 18:11:27.662099   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 18:11:27.662158   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 18:11:27.695686   74203 cri.go:89] found id: ""
	I0731 18:11:27.695715   74203 logs.go:276] 0 containers: []
	W0731 18:11:27.695727   74203 logs.go:278] No container was found matching "kube-proxy"
	I0731 18:11:27.695735   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 18:11:27.695785   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 18:11:27.729444   74203 cri.go:89] found id: ""
	I0731 18:11:27.729467   74203 logs.go:276] 0 containers: []
	W0731 18:11:27.729475   74203 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 18:11:27.729481   74203 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 18:11:27.729531   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 18:11:27.761889   74203 cri.go:89] found id: ""
	I0731 18:11:27.761916   74203 logs.go:276] 0 containers: []
	W0731 18:11:27.761926   74203 logs.go:278] No container was found matching "kindnet"
	I0731 18:11:27.761934   74203 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 18:11:27.761995   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 18:11:27.796178   74203 cri.go:89] found id: ""
	I0731 18:11:27.796199   74203 logs.go:276] 0 containers: []
	W0731 18:11:27.796206   74203 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 18:11:27.796214   74203 logs.go:123] Gathering logs for kubelet ...
	I0731 18:11:27.796227   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 18:11:27.849613   74203 logs.go:123] Gathering logs for dmesg ...
	I0731 18:11:27.849650   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 18:11:27.862892   74203 logs.go:123] Gathering logs for describe nodes ...
	I0731 18:11:27.862923   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 18:11:27.928691   74203 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 18:11:27.928717   74203 logs.go:123] Gathering logs for CRI-O ...
	I0731 18:11:27.928740   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 18:11:28.006310   74203 logs.go:123] Gathering logs for container status ...
	I0731 18:11:28.006340   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 18:11:27.779125   73696 pod_ready.go:102] pod "metrics-server-569cc877fc-64hp4" in "kube-system" namespace has status "Ready":"False"
	I0731 18:11:30.279252   73696 pod_ready.go:102] pod "metrics-server-569cc877fc-64hp4" in "kube-system" namespace has status "Ready":"False"
	I0731 18:11:28.643172   73479 pod_ready.go:102] pod "metrics-server-78fcd8795b-27pkr" in "kube-system" namespace has status "Ready":"False"
	I0731 18:11:30.645474   73479 pod_ready.go:102] pod "metrics-server-78fcd8795b-27pkr" in "kube-system" namespace has status "Ready":"False"
	I0731 18:11:30.271016   73800 pod_ready.go:102] pod "metrics-server-569cc877fc-fzxrw" in "kube-system" namespace has status "Ready":"False"
	I0731 18:11:32.771684   73800 pod_ready.go:102] pod "metrics-server-569cc877fc-fzxrw" in "kube-system" namespace has status "Ready":"False"
	I0731 18:11:30.543065   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:11:30.555951   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 18:11:30.556013   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 18:11:30.597411   74203 cri.go:89] found id: ""
	I0731 18:11:30.597440   74203 logs.go:276] 0 containers: []
	W0731 18:11:30.597451   74203 logs.go:278] No container was found matching "kube-apiserver"
	I0731 18:11:30.597458   74203 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 18:11:30.597518   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 18:11:30.629836   74203 cri.go:89] found id: ""
	I0731 18:11:30.629866   74203 logs.go:276] 0 containers: []
	W0731 18:11:30.629873   74203 logs.go:278] No container was found matching "etcd"
	I0731 18:11:30.629878   74203 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 18:11:30.629932   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 18:11:30.667402   74203 cri.go:89] found id: ""
	I0731 18:11:30.667432   74203 logs.go:276] 0 containers: []
	W0731 18:11:30.667443   74203 logs.go:278] No container was found matching "coredns"
	I0731 18:11:30.667450   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 18:11:30.667513   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 18:11:30.701677   74203 cri.go:89] found id: ""
	I0731 18:11:30.701708   74203 logs.go:276] 0 containers: []
	W0731 18:11:30.701716   74203 logs.go:278] No container was found matching "kube-scheduler"
	I0731 18:11:30.701722   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 18:11:30.701773   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 18:11:30.736685   74203 cri.go:89] found id: ""
	I0731 18:11:30.736714   74203 logs.go:276] 0 containers: []
	W0731 18:11:30.736721   74203 logs.go:278] No container was found matching "kube-proxy"
	I0731 18:11:30.736736   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 18:11:30.736786   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 18:11:30.771501   74203 cri.go:89] found id: ""
	I0731 18:11:30.771526   74203 logs.go:276] 0 containers: []
	W0731 18:11:30.771543   74203 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 18:11:30.771549   74203 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 18:11:30.771597   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 18:11:30.805878   74203 cri.go:89] found id: ""
	I0731 18:11:30.805902   74203 logs.go:276] 0 containers: []
	W0731 18:11:30.805911   74203 logs.go:278] No container was found matching "kindnet"
	I0731 18:11:30.805921   74203 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 18:11:30.805966   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 18:11:30.839001   74203 cri.go:89] found id: ""
	I0731 18:11:30.839027   74203 logs.go:276] 0 containers: []
	W0731 18:11:30.839038   74203 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 18:11:30.839048   74203 logs.go:123] Gathering logs for kubelet ...
	I0731 18:11:30.839062   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 18:11:30.893357   74203 logs.go:123] Gathering logs for dmesg ...
	I0731 18:11:30.893387   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 18:11:30.907222   74203 logs.go:123] Gathering logs for describe nodes ...
	I0731 18:11:30.907248   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 18:11:30.985626   74203 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 18:11:30.985648   74203 logs.go:123] Gathering logs for CRI-O ...
	I0731 18:11:30.985668   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 18:11:31.067900   74203 logs.go:123] Gathering logs for container status ...
	I0731 18:11:31.067948   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 18:11:33.607259   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:11:33.621596   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 18:11:33.621656   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 18:11:33.663616   74203 cri.go:89] found id: ""
	I0731 18:11:33.663642   74203 logs.go:276] 0 containers: []
	W0731 18:11:33.663649   74203 logs.go:278] No container was found matching "kube-apiserver"
	I0731 18:11:33.663655   74203 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 18:11:33.663704   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 18:11:33.702133   74203 cri.go:89] found id: ""
	I0731 18:11:33.702159   74203 logs.go:276] 0 containers: []
	W0731 18:11:33.702167   74203 logs.go:278] No container was found matching "etcd"
	I0731 18:11:33.702173   74203 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 18:11:33.702226   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 18:11:33.733730   74203 cri.go:89] found id: ""
	I0731 18:11:33.733752   74203 logs.go:276] 0 containers: []
	W0731 18:11:33.733760   74203 logs.go:278] No container was found matching "coredns"
	I0731 18:11:33.733765   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 18:11:33.733813   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 18:11:33.765036   74203 cri.go:89] found id: ""
	I0731 18:11:33.765064   74203 logs.go:276] 0 containers: []
	W0731 18:11:33.765074   74203 logs.go:278] No container was found matching "kube-scheduler"
	I0731 18:11:33.765080   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 18:11:33.765128   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 18:11:33.799604   74203 cri.go:89] found id: ""
	I0731 18:11:33.799630   74203 logs.go:276] 0 containers: []
	W0731 18:11:33.799640   74203 logs.go:278] No container was found matching "kube-proxy"
	I0731 18:11:33.799648   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 18:11:33.799716   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 18:11:33.831434   74203 cri.go:89] found id: ""
	I0731 18:11:33.831455   74203 logs.go:276] 0 containers: []
	W0731 18:11:33.831464   74203 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 18:11:33.831469   74203 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 18:11:33.831514   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 18:11:33.862975   74203 cri.go:89] found id: ""
	I0731 18:11:33.863004   74203 logs.go:276] 0 containers: []
	W0731 18:11:33.863014   74203 logs.go:278] No container was found matching "kindnet"
	I0731 18:11:33.863022   74203 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 18:11:33.863090   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 18:11:33.895674   74203 cri.go:89] found id: ""
	I0731 18:11:33.895704   74203 logs.go:276] 0 containers: []
	W0731 18:11:33.895714   74203 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 18:11:33.895723   74203 logs.go:123] Gathering logs for container status ...
	I0731 18:11:33.895737   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 18:11:33.931954   74203 logs.go:123] Gathering logs for kubelet ...
	I0731 18:11:33.931980   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 18:11:33.985353   74203 logs.go:123] Gathering logs for dmesg ...
	I0731 18:11:33.985385   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 18:11:33.997857   74203 logs.go:123] Gathering logs for describe nodes ...
	I0731 18:11:33.997882   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 18:11:34.060523   74203 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 18:11:34.060553   74203 logs.go:123] Gathering logs for CRI-O ...
	I0731 18:11:34.060575   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 18:11:32.778212   73696 pod_ready.go:102] pod "metrics-server-569cc877fc-64hp4" in "kube-system" namespace has status "Ready":"False"
	I0731 18:11:35.278655   73696 pod_ready.go:102] pod "metrics-server-569cc877fc-64hp4" in "kube-system" namespace has status "Ready":"False"
	I0731 18:11:33.151579   73479 pod_ready.go:102] pod "metrics-server-78fcd8795b-27pkr" in "kube-system" namespace has status "Ready":"False"
	I0731 18:11:35.643326   73479 pod_ready.go:102] pod "metrics-server-78fcd8795b-27pkr" in "kube-system" namespace has status "Ready":"False"
	I0731 18:11:34.771873   73800 pod_ready.go:102] pod "metrics-server-569cc877fc-fzxrw" in "kube-system" namespace has status "Ready":"False"
	I0731 18:11:36.772309   73800 pod_ready.go:102] pod "metrics-server-569cc877fc-fzxrw" in "kube-system" namespace has status "Ready":"False"
	I0731 18:11:39.271582   73800 pod_ready.go:102] pod "metrics-server-569cc877fc-fzxrw" in "kube-system" namespace has status "Ready":"False"
	I0731 18:11:36.643003   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:11:36.659306   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 18:11:36.659385   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 18:11:36.717097   74203 cri.go:89] found id: ""
	I0731 18:11:36.717129   74203 logs.go:276] 0 containers: []
	W0731 18:11:36.717141   74203 logs.go:278] No container was found matching "kube-apiserver"
	I0731 18:11:36.717149   74203 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 18:11:36.717212   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 18:11:36.750288   74203 cri.go:89] found id: ""
	I0731 18:11:36.750314   74203 logs.go:276] 0 containers: []
	W0731 18:11:36.750325   74203 logs.go:278] No container was found matching "etcd"
	I0731 18:11:36.750331   74203 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 18:11:36.750391   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 18:11:36.785272   74203 cri.go:89] found id: ""
	I0731 18:11:36.785296   74203 logs.go:276] 0 containers: []
	W0731 18:11:36.785304   74203 logs.go:278] No container was found matching "coredns"
	I0731 18:11:36.785310   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 18:11:36.785360   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 18:11:36.818927   74203 cri.go:89] found id: ""
	I0731 18:11:36.818953   74203 logs.go:276] 0 containers: []
	W0731 18:11:36.818965   74203 logs.go:278] No container was found matching "kube-scheduler"
	I0731 18:11:36.818972   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 18:11:36.819020   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 18:11:36.854562   74203 cri.go:89] found id: ""
	I0731 18:11:36.854593   74203 logs.go:276] 0 containers: []
	W0731 18:11:36.854602   74203 logs.go:278] No container was found matching "kube-proxy"
	I0731 18:11:36.854607   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 18:11:36.854670   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 18:11:36.887786   74203 cri.go:89] found id: ""
	I0731 18:11:36.887814   74203 logs.go:276] 0 containers: []
	W0731 18:11:36.887825   74203 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 18:11:36.887833   74203 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 18:11:36.887893   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 18:11:36.919418   74203 cri.go:89] found id: ""
	I0731 18:11:36.919446   74203 logs.go:276] 0 containers: []
	W0731 18:11:36.919457   74203 logs.go:278] No container was found matching "kindnet"
	I0731 18:11:36.919471   74203 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 18:11:36.919533   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 18:11:36.956934   74203 cri.go:89] found id: ""
	I0731 18:11:36.956957   74203 logs.go:276] 0 containers: []
	W0731 18:11:36.956964   74203 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 18:11:36.956971   74203 logs.go:123] Gathering logs for kubelet ...
	I0731 18:11:36.956989   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 18:11:37.003755   74203 logs.go:123] Gathering logs for dmesg ...
	I0731 18:11:37.003783   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 18:11:37.016977   74203 logs.go:123] Gathering logs for describe nodes ...
	I0731 18:11:37.017004   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 18:11:37.091617   74203 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 18:11:37.091646   74203 logs.go:123] Gathering logs for CRI-O ...
	I0731 18:11:37.091662   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 18:11:37.170870   74203 logs.go:123] Gathering logs for container status ...
	I0731 18:11:37.170903   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 18:11:39.714271   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:11:39.730306   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 18:11:39.730383   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 18:11:39.765368   74203 cri.go:89] found id: ""
	I0731 18:11:39.765399   74203 logs.go:276] 0 containers: []
	W0731 18:11:39.765407   74203 logs.go:278] No container was found matching "kube-apiserver"
	I0731 18:11:39.765412   74203 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 18:11:39.765471   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 18:11:39.800394   74203 cri.go:89] found id: ""
	I0731 18:11:39.800419   74203 logs.go:276] 0 containers: []
	W0731 18:11:39.800427   74203 logs.go:278] No container was found matching "etcd"
	I0731 18:11:39.800433   74203 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 18:11:39.800486   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 18:11:39.834861   74203 cri.go:89] found id: ""
	I0731 18:11:39.834889   74203 logs.go:276] 0 containers: []
	W0731 18:11:39.834898   74203 logs.go:278] No container was found matching "coredns"
	I0731 18:11:39.834903   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 18:11:39.834958   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 18:11:39.868108   74203 cri.go:89] found id: ""
	I0731 18:11:39.868132   74203 logs.go:276] 0 containers: []
	W0731 18:11:39.868141   74203 logs.go:278] No container was found matching "kube-scheduler"
	I0731 18:11:39.868146   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 18:11:39.868220   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 18:11:39.902097   74203 cri.go:89] found id: ""
	I0731 18:11:39.902120   74203 logs.go:276] 0 containers: []
	W0731 18:11:39.902128   74203 logs.go:278] No container was found matching "kube-proxy"
	I0731 18:11:39.902134   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 18:11:39.902184   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 18:11:39.933073   74203 cri.go:89] found id: ""
	I0731 18:11:39.933100   74203 logs.go:276] 0 containers: []
	W0731 18:11:39.933109   74203 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 18:11:39.933114   74203 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 18:11:39.933165   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 18:11:39.965748   74203 cri.go:89] found id: ""
	I0731 18:11:39.965775   74203 logs.go:276] 0 containers: []
	W0731 18:11:39.965785   74203 logs.go:278] No container was found matching "kindnet"
	I0731 18:11:39.965796   74203 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 18:11:39.965856   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 18:11:39.998164   74203 cri.go:89] found id: ""
	I0731 18:11:39.998189   74203 logs.go:276] 0 containers: []
	W0731 18:11:39.998197   74203 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 18:11:39.998205   74203 logs.go:123] Gathering logs for kubelet ...
	I0731 18:11:39.998222   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 18:11:40.049991   74203 logs.go:123] Gathering logs for dmesg ...
	I0731 18:11:40.050027   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 18:11:40.063676   74203 logs.go:123] Gathering logs for describe nodes ...
	I0731 18:11:40.063705   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 18:11:40.125855   74203 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 18:11:40.125880   74203 logs.go:123] Gathering logs for CRI-O ...
	I0731 18:11:40.125896   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 18:11:40.207937   74203 logs.go:123] Gathering logs for container status ...
	I0731 18:11:40.207970   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 18:11:37.778894   73696 pod_ready.go:102] pod "metrics-server-569cc877fc-64hp4" in "kube-system" namespace has status "Ready":"False"
	I0731 18:11:40.278489   73696 pod_ready.go:102] pod "metrics-server-569cc877fc-64hp4" in "kube-system" namespace has status "Ready":"False"
	I0731 18:11:37.643651   73479 pod_ready.go:102] pod "metrics-server-78fcd8795b-27pkr" in "kube-system" namespace has status "Ready":"False"
	I0731 18:11:40.144731   73479 pod_ready.go:102] pod "metrics-server-78fcd8795b-27pkr" in "kube-system" namespace has status "Ready":"False"
	I0731 18:11:41.271897   73800 pod_ready.go:102] pod "metrics-server-569cc877fc-fzxrw" in "kube-system" namespace has status "Ready":"False"
	I0731 18:11:43.771556   73800 pod_ready.go:102] pod "metrics-server-569cc877fc-fzxrw" in "kube-system" namespace has status "Ready":"False"
	I0731 18:11:42.746315   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:11:42.758998   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 18:11:42.759053   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 18:11:42.791921   74203 cri.go:89] found id: ""
	I0731 18:11:42.791946   74203 logs.go:276] 0 containers: []
	W0731 18:11:42.791954   74203 logs.go:278] No container was found matching "kube-apiserver"
	I0731 18:11:42.791958   74203 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 18:11:42.792004   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 18:11:42.822888   74203 cri.go:89] found id: ""
	I0731 18:11:42.822914   74203 logs.go:276] 0 containers: []
	W0731 18:11:42.822922   74203 logs.go:278] No container was found matching "etcd"
	I0731 18:11:42.822927   74203 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 18:11:42.822973   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 18:11:42.854516   74203 cri.go:89] found id: ""
	I0731 18:11:42.854545   74203 logs.go:276] 0 containers: []
	W0731 18:11:42.854564   74203 logs.go:278] No container was found matching "coredns"
	I0731 18:11:42.854574   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 18:11:42.854638   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 18:11:42.890933   74203 cri.go:89] found id: ""
	I0731 18:11:42.890955   74203 logs.go:276] 0 containers: []
	W0731 18:11:42.890963   74203 logs.go:278] No container was found matching "kube-scheduler"
	I0731 18:11:42.890968   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 18:11:42.891013   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 18:11:42.925170   74203 cri.go:89] found id: ""
	I0731 18:11:42.925196   74203 logs.go:276] 0 containers: []
	W0731 18:11:42.925206   74203 logs.go:278] No container was found matching "kube-proxy"
	I0731 18:11:42.925213   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 18:11:42.925273   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 18:11:42.959845   74203 cri.go:89] found id: ""
	I0731 18:11:42.959868   74203 logs.go:276] 0 containers: []
	W0731 18:11:42.959876   74203 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 18:11:42.959881   74203 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 18:11:42.959938   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 18:11:42.997305   74203 cri.go:89] found id: ""
	I0731 18:11:42.997346   74203 logs.go:276] 0 containers: []
	W0731 18:11:42.997358   74203 logs.go:278] No container was found matching "kindnet"
	I0731 18:11:42.997366   74203 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 18:11:42.997427   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 18:11:43.030663   74203 cri.go:89] found id: ""
	I0731 18:11:43.030690   74203 logs.go:276] 0 containers: []
	W0731 18:11:43.030700   74203 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 18:11:43.030711   74203 logs.go:123] Gathering logs for describe nodes ...
	I0731 18:11:43.030725   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 18:11:43.112280   74203 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 18:11:43.112303   74203 logs.go:123] Gathering logs for CRI-O ...
	I0731 18:11:43.112318   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 18:11:43.209002   74203 logs.go:123] Gathering logs for container status ...
	I0731 18:11:43.209035   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 18:11:43.249596   74203 logs.go:123] Gathering logs for kubelet ...
	I0731 18:11:43.249629   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 18:11:43.302419   74203 logs.go:123] Gathering logs for dmesg ...
	I0731 18:11:43.302449   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 18:11:42.278874   73696 pod_ready.go:102] pod "metrics-server-569cc877fc-64hp4" in "kube-system" namespace has status "Ready":"False"
	I0731 18:11:44.273355   73696 pod_ready.go:81] duration metric: took 4m0.000454583s for pod "metrics-server-569cc877fc-64hp4" in "kube-system" namespace to be "Ready" ...
	E0731 18:11:44.273380   73696 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-569cc877fc-64hp4" in "kube-system" namespace to be "Ready" (will not retry!)
	I0731 18:11:44.273399   73696 pod_ready.go:38] duration metric: took 4m8.019714552s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0731 18:11:44.273430   73696 kubeadm.go:597] duration metric: took 4m16.379038728s to restartPrimaryControlPlane
	W0731 18:11:44.273506   73696 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0731 18:11:44.273531   73696 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0731 18:11:42.643165   73479 pod_ready.go:102] pod "metrics-server-78fcd8795b-27pkr" in "kube-system" namespace has status "Ready":"False"
	I0731 18:11:44.644976   73479 pod_ready.go:102] pod "metrics-server-78fcd8795b-27pkr" in "kube-system" namespace has status "Ready":"False"
	I0731 18:11:46.271751   73800 pod_ready.go:102] pod "metrics-server-569cc877fc-fzxrw" in "kube-system" namespace has status "Ready":"False"
	I0731 18:11:48.771274   73800 pod_ready.go:102] pod "metrics-server-569cc877fc-fzxrw" in "kube-system" namespace has status "Ready":"False"
	I0731 18:11:45.816910   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:11:45.829909   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 18:11:45.829976   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 18:11:45.865534   74203 cri.go:89] found id: ""
	I0731 18:11:45.865561   74203 logs.go:276] 0 containers: []
	W0731 18:11:45.865572   74203 logs.go:278] No container was found matching "kube-apiserver"
	I0731 18:11:45.865584   74203 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 18:11:45.865646   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 18:11:45.901552   74203 cri.go:89] found id: ""
	I0731 18:11:45.901585   74203 logs.go:276] 0 containers: []
	W0731 18:11:45.901593   74203 logs.go:278] No container was found matching "etcd"
	I0731 18:11:45.901598   74203 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 18:11:45.901678   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 18:11:45.938790   74203 cri.go:89] found id: ""
	I0731 18:11:45.938820   74203 logs.go:276] 0 containers: []
	W0731 18:11:45.938842   74203 logs.go:278] No container was found matching "coredns"
	I0731 18:11:45.938859   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 18:11:45.938926   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 18:11:45.971502   74203 cri.go:89] found id: ""
	I0731 18:11:45.971534   74203 logs.go:276] 0 containers: []
	W0731 18:11:45.971546   74203 logs.go:278] No container was found matching "kube-scheduler"
	I0731 18:11:45.971553   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 18:11:45.971620   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 18:11:46.009281   74203 cri.go:89] found id: ""
	I0731 18:11:46.009316   74203 logs.go:276] 0 containers: []
	W0731 18:11:46.009327   74203 logs.go:278] No container was found matching "kube-proxy"
	I0731 18:11:46.009335   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 18:11:46.009399   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 18:11:46.042899   74203 cri.go:89] found id: ""
	I0731 18:11:46.042928   74203 logs.go:276] 0 containers: []
	W0731 18:11:46.042939   74203 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 18:11:46.042947   74203 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 18:11:46.043005   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 18:11:46.079982   74203 cri.go:89] found id: ""
	I0731 18:11:46.080013   74203 logs.go:276] 0 containers: []
	W0731 18:11:46.080024   74203 logs.go:278] No container was found matching "kindnet"
	I0731 18:11:46.080031   74203 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 18:11:46.080098   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 18:11:46.113136   74203 cri.go:89] found id: ""
	I0731 18:11:46.113168   74203 logs.go:276] 0 containers: []
	W0731 18:11:46.113179   74203 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 18:11:46.113191   74203 logs.go:123] Gathering logs for kubelet ...
	I0731 18:11:46.113206   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 18:11:46.165818   74203 logs.go:123] Gathering logs for dmesg ...
	I0731 18:11:46.165855   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 18:11:46.181058   74203 logs.go:123] Gathering logs for describe nodes ...
	I0731 18:11:46.181083   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 18:11:46.256805   74203 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 18:11:46.256826   74203 logs.go:123] Gathering logs for CRI-O ...
	I0731 18:11:46.256838   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 18:11:46.353045   74203 logs.go:123] Gathering logs for container status ...
	I0731 18:11:46.353093   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 18:11:48.894656   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:11:48.910648   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 18:11:48.910723   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 18:11:48.941080   74203 cri.go:89] found id: ""
	I0731 18:11:48.941103   74203 logs.go:276] 0 containers: []
	W0731 18:11:48.941111   74203 logs.go:278] No container was found matching "kube-apiserver"
	I0731 18:11:48.941117   74203 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 18:11:48.941164   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 18:11:48.972113   74203 cri.go:89] found id: ""
	I0731 18:11:48.972136   74203 logs.go:276] 0 containers: []
	W0731 18:11:48.972146   74203 logs.go:278] No container was found matching "etcd"
	I0731 18:11:48.972151   74203 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 18:11:48.972208   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 18:11:49.004521   74203 cri.go:89] found id: ""
	I0731 18:11:49.004547   74203 logs.go:276] 0 containers: []
	W0731 18:11:49.004557   74203 logs.go:278] No container was found matching "coredns"
	I0731 18:11:49.004571   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 18:11:49.004658   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 18:11:49.036600   74203 cri.go:89] found id: ""
	I0731 18:11:49.036622   74203 logs.go:276] 0 containers: []
	W0731 18:11:49.036629   74203 logs.go:278] No container was found matching "kube-scheduler"
	I0731 18:11:49.036635   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 18:11:49.036683   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 18:11:49.071397   74203 cri.go:89] found id: ""
	I0731 18:11:49.071426   74203 logs.go:276] 0 containers: []
	W0731 18:11:49.071436   74203 logs.go:278] No container was found matching "kube-proxy"
	I0731 18:11:49.071444   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 18:11:49.071501   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 18:11:49.108907   74203 cri.go:89] found id: ""
	I0731 18:11:49.108933   74203 logs.go:276] 0 containers: []
	W0731 18:11:49.108944   74203 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 18:11:49.108952   74203 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 18:11:49.109007   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 18:11:49.141808   74203 cri.go:89] found id: ""
	I0731 18:11:49.141834   74203 logs.go:276] 0 containers: []
	W0731 18:11:49.141844   74203 logs.go:278] No container was found matching "kindnet"
	I0731 18:11:49.141856   74203 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 18:11:49.141917   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 18:11:49.174063   74203 cri.go:89] found id: ""
	I0731 18:11:49.174087   74203 logs.go:276] 0 containers: []
	W0731 18:11:49.174095   74203 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 18:11:49.174104   74203 logs.go:123] Gathering logs for container status ...
	I0731 18:11:49.174116   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 18:11:49.212152   74203 logs.go:123] Gathering logs for kubelet ...
	I0731 18:11:49.212181   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 18:11:49.267297   74203 logs.go:123] Gathering logs for dmesg ...
	I0731 18:11:49.267324   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 18:11:49.281342   74203 logs.go:123] Gathering logs for describe nodes ...
	I0731 18:11:49.281365   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 18:11:49.349843   74203 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 18:11:49.349866   74203 logs.go:123] Gathering logs for CRI-O ...
	I0731 18:11:49.349882   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 18:11:47.144588   73479 pod_ready.go:102] pod "metrics-server-78fcd8795b-27pkr" in "kube-system" namespace has status "Ready":"False"
	I0731 18:11:49.644395   73479 pod_ready.go:102] pod "metrics-server-78fcd8795b-27pkr" in "kube-system" namespace has status "Ready":"False"
	I0731 18:11:51.271203   73800 pod_ready.go:102] pod "metrics-server-569cc877fc-fzxrw" in "kube-system" namespace has status "Ready":"False"
	I0731 18:11:53.770849   73800 pod_ready.go:102] pod "metrics-server-569cc877fc-fzxrw" in "kube-system" namespace has status "Ready":"False"
	I0731 18:11:51.927764   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:11:51.940480   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 18:11:51.940539   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 18:11:51.973731   74203 cri.go:89] found id: ""
	I0731 18:11:51.973759   74203 logs.go:276] 0 containers: []
	W0731 18:11:51.973768   74203 logs.go:278] No container was found matching "kube-apiserver"
	I0731 18:11:51.973780   74203 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 18:11:51.973837   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 18:11:52.003761   74203 cri.go:89] found id: ""
	I0731 18:11:52.003783   74203 logs.go:276] 0 containers: []
	W0731 18:11:52.003790   74203 logs.go:278] No container was found matching "etcd"
	I0731 18:11:52.003795   74203 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 18:11:52.003844   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 18:11:52.035009   74203 cri.go:89] found id: ""
	I0731 18:11:52.035028   74203 logs.go:276] 0 containers: []
	W0731 18:11:52.035035   74203 logs.go:278] No container was found matching "coredns"
	I0731 18:11:52.035041   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 18:11:52.035100   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 18:11:52.065475   74203 cri.go:89] found id: ""
	I0731 18:11:52.065501   74203 logs.go:276] 0 containers: []
	W0731 18:11:52.065509   74203 logs.go:278] No container was found matching "kube-scheduler"
	I0731 18:11:52.065515   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 18:11:52.065574   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 18:11:52.097529   74203 cri.go:89] found id: ""
	I0731 18:11:52.097558   74203 logs.go:276] 0 containers: []
	W0731 18:11:52.097567   74203 logs.go:278] No container was found matching "kube-proxy"
	I0731 18:11:52.097573   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 18:11:52.097622   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 18:11:52.128881   74203 cri.go:89] found id: ""
	I0731 18:11:52.128909   74203 logs.go:276] 0 containers: []
	W0731 18:11:52.128917   74203 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 18:11:52.128923   74203 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 18:11:52.128974   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 18:11:52.159894   74203 cri.go:89] found id: ""
	I0731 18:11:52.159921   74203 logs.go:276] 0 containers: []
	W0731 18:11:52.159931   74203 logs.go:278] No container was found matching "kindnet"
	I0731 18:11:52.159939   74203 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 18:11:52.159998   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 18:11:52.191955   74203 cri.go:89] found id: ""
	I0731 18:11:52.191981   74203 logs.go:276] 0 containers: []
	W0731 18:11:52.191990   74203 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 18:11:52.191999   74203 logs.go:123] Gathering logs for kubelet ...
	I0731 18:11:52.192009   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 18:11:52.246389   74203 logs.go:123] Gathering logs for dmesg ...
	I0731 18:11:52.246423   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 18:11:52.260226   74203 logs.go:123] Gathering logs for describe nodes ...
	I0731 18:11:52.260253   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 18:11:52.328423   74203 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 18:11:52.328447   74203 logs.go:123] Gathering logs for CRI-O ...
	I0731 18:11:52.328459   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 18:11:52.408456   74203 logs.go:123] Gathering logs for container status ...
	I0731 18:11:52.408495   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 18:11:54.947734   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:11:54.960359   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 18:11:54.960420   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 18:11:54.994231   74203 cri.go:89] found id: ""
	I0731 18:11:54.994256   74203 logs.go:276] 0 containers: []
	W0731 18:11:54.994264   74203 logs.go:278] No container was found matching "kube-apiserver"
	I0731 18:11:54.994270   74203 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 18:11:54.994332   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 18:11:55.027323   74203 cri.go:89] found id: ""
	I0731 18:11:55.027364   74203 logs.go:276] 0 containers: []
	W0731 18:11:55.027374   74203 logs.go:278] No container was found matching "etcd"
	I0731 18:11:55.027382   74203 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 18:11:55.027440   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 18:11:55.061741   74203 cri.go:89] found id: ""
	I0731 18:11:55.061763   74203 logs.go:276] 0 containers: []
	W0731 18:11:55.061771   74203 logs.go:278] No container was found matching "coredns"
	I0731 18:11:55.061776   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 18:11:55.061822   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 18:11:55.100685   74203 cri.go:89] found id: ""
	I0731 18:11:55.100712   74203 logs.go:276] 0 containers: []
	W0731 18:11:55.100720   74203 logs.go:278] No container was found matching "kube-scheduler"
	I0731 18:11:55.100726   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 18:11:55.100780   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 18:11:55.141917   74203 cri.go:89] found id: ""
	I0731 18:11:55.141958   74203 logs.go:276] 0 containers: []
	W0731 18:11:55.141971   74203 logs.go:278] No container was found matching "kube-proxy"
	I0731 18:11:55.141980   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 18:11:55.142054   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 18:11:55.176669   74203 cri.go:89] found id: ""
	I0731 18:11:55.176702   74203 logs.go:276] 0 containers: []
	W0731 18:11:55.176711   74203 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 18:11:55.176718   74203 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 18:11:55.176780   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 18:11:55.209795   74203 cri.go:89] found id: ""
	I0731 18:11:55.209829   74203 logs.go:276] 0 containers: []
	W0731 18:11:55.209842   74203 logs.go:278] No container was found matching "kindnet"
	I0731 18:11:55.209850   74203 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 18:11:55.209915   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 18:11:55.244503   74203 cri.go:89] found id: ""
	I0731 18:11:55.244527   74203 logs.go:276] 0 containers: []
	W0731 18:11:55.244537   74203 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 18:11:55.244556   74203 logs.go:123] Gathering logs for CRI-O ...
	I0731 18:11:55.244572   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 18:11:55.320033   74203 logs.go:123] Gathering logs for container status ...
	I0731 18:11:55.320071   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 18:11:52.143803   73479 pod_ready.go:102] pod "metrics-server-78fcd8795b-27pkr" in "kube-system" namespace has status "Ready":"False"
	I0731 18:11:54.644223   73479 pod_ready.go:102] pod "metrics-server-78fcd8795b-27pkr" in "kube-system" namespace has status "Ready":"False"
	I0731 18:11:56.273321   73800 pod_ready.go:102] pod "metrics-server-569cc877fc-fzxrw" in "kube-system" namespace has status "Ready":"False"
	I0731 18:11:58.772541   73800 pod_ready.go:102] pod "metrics-server-569cc877fc-fzxrw" in "kube-system" namespace has status "Ready":"False"
	I0731 18:11:55.357684   74203 logs.go:123] Gathering logs for kubelet ...
	I0731 18:11:55.357719   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 18:11:55.411465   74203 logs.go:123] Gathering logs for dmesg ...
	I0731 18:11:55.411501   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 18:11:55.423802   74203 logs.go:123] Gathering logs for describe nodes ...
	I0731 18:11:55.423833   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 18:11:55.487945   74203 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 18:11:57.988078   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:11:58.001639   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 18:11:58.001724   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 18:11:58.036075   74203 cri.go:89] found id: ""
	I0731 18:11:58.036099   74203 logs.go:276] 0 containers: []
	W0731 18:11:58.036107   74203 logs.go:278] No container was found matching "kube-apiserver"
	I0731 18:11:58.036112   74203 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 18:11:58.036163   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 18:11:58.067316   74203 cri.go:89] found id: ""
	I0731 18:11:58.067340   74203 logs.go:276] 0 containers: []
	W0731 18:11:58.067348   74203 logs.go:278] No container was found matching "etcd"
	I0731 18:11:58.067353   74203 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 18:11:58.067420   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 18:11:58.102446   74203 cri.go:89] found id: ""
	I0731 18:11:58.102470   74203 logs.go:276] 0 containers: []
	W0731 18:11:58.102479   74203 logs.go:278] No container was found matching "coredns"
	I0731 18:11:58.102485   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 18:11:58.102553   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 18:11:58.134924   74203 cri.go:89] found id: ""
	I0731 18:11:58.134949   74203 logs.go:276] 0 containers: []
	W0731 18:11:58.134957   74203 logs.go:278] No container was found matching "kube-scheduler"
	I0731 18:11:58.134963   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 18:11:58.135023   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 18:11:58.171589   74203 cri.go:89] found id: ""
	I0731 18:11:58.171611   74203 logs.go:276] 0 containers: []
	W0731 18:11:58.171620   74203 logs.go:278] No container was found matching "kube-proxy"
	I0731 18:11:58.171625   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 18:11:58.171673   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 18:11:58.203813   74203 cri.go:89] found id: ""
	I0731 18:11:58.203836   74203 logs.go:276] 0 containers: []
	W0731 18:11:58.203844   74203 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 18:11:58.203850   74203 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 18:11:58.203911   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 18:11:58.236251   74203 cri.go:89] found id: ""
	I0731 18:11:58.236277   74203 logs.go:276] 0 containers: []
	W0731 18:11:58.236288   74203 logs.go:278] No container was found matching "kindnet"
	I0731 18:11:58.236295   74203 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 18:11:58.236357   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 18:11:58.270595   74203 cri.go:89] found id: ""
	I0731 18:11:58.270624   74203 logs.go:276] 0 containers: []
	W0731 18:11:58.270636   74203 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 18:11:58.270647   74203 logs.go:123] Gathering logs for kubelet ...
	I0731 18:11:58.270662   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 18:11:58.321889   74203 logs.go:123] Gathering logs for dmesg ...
	I0731 18:11:58.321927   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 18:11:58.334529   74203 logs.go:123] Gathering logs for describe nodes ...
	I0731 18:11:58.334552   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 18:11:58.398489   74203 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 18:11:58.398515   74203 logs.go:123] Gathering logs for CRI-O ...
	I0731 18:11:58.398540   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 18:11:58.479657   74203 logs.go:123] Gathering logs for container status ...
	I0731 18:11:58.479695   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 18:11:57.143080   73479 pod_ready.go:102] pod "metrics-server-78fcd8795b-27pkr" in "kube-system" namespace has status "Ready":"False"
	I0731 18:11:59.144357   73479 pod_ready.go:102] pod "metrics-server-78fcd8795b-27pkr" in "kube-system" namespace has status "Ready":"False"
	I0731 18:12:01.643343   73479 pod_ready.go:102] pod "metrics-server-78fcd8795b-27pkr" in "kube-system" namespace has status "Ready":"False"
	I0731 18:12:01.266100   73800 pod_ready.go:81] duration metric: took 4m0.000711681s for pod "metrics-server-569cc877fc-fzxrw" in "kube-system" namespace to be "Ready" ...
	E0731 18:12:01.266123   73800 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-569cc877fc-fzxrw" in "kube-system" namespace to be "Ready" (will not retry!)
	I0731 18:12:01.266160   73800 pod_ready.go:38] duration metric: took 4m6.529342365s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0731 18:12:01.266205   73800 kubeadm.go:597] duration metric: took 4m13.643145888s to restartPrimaryControlPlane
	W0731 18:12:01.266270   73800 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0731 18:12:01.266297   73800 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0731 18:12:01.014684   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:12:01.027959   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 18:12:01.028026   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 18:12:01.065423   74203 cri.go:89] found id: ""
	I0731 18:12:01.065459   74203 logs.go:276] 0 containers: []
	W0731 18:12:01.065472   74203 logs.go:278] No container was found matching "kube-apiserver"
	I0731 18:12:01.065481   74203 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 18:12:01.065545   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 18:12:01.099519   74203 cri.go:89] found id: ""
	I0731 18:12:01.099549   74203 logs.go:276] 0 containers: []
	W0731 18:12:01.099561   74203 logs.go:278] No container was found matching "etcd"
	I0731 18:12:01.099568   74203 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 18:12:01.099630   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 18:12:01.131239   74203 cri.go:89] found id: ""
	I0731 18:12:01.131262   74203 logs.go:276] 0 containers: []
	W0731 18:12:01.131270   74203 logs.go:278] No container was found matching "coredns"
	I0731 18:12:01.131275   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 18:12:01.131321   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 18:12:01.163209   74203 cri.go:89] found id: ""
	I0731 18:12:01.163229   74203 logs.go:276] 0 containers: []
	W0731 18:12:01.163237   74203 logs.go:278] No container was found matching "kube-scheduler"
	I0731 18:12:01.163242   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 18:12:01.163295   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 18:12:01.201165   74203 cri.go:89] found id: ""
	I0731 18:12:01.201193   74203 logs.go:276] 0 containers: []
	W0731 18:12:01.201204   74203 logs.go:278] No container was found matching "kube-proxy"
	I0731 18:12:01.201217   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 18:12:01.201274   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 18:12:01.233310   74203 cri.go:89] found id: ""
	I0731 18:12:01.233334   74203 logs.go:276] 0 containers: []
	W0731 18:12:01.233342   74203 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 18:12:01.233348   74203 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 18:12:01.233415   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 18:12:01.263412   74203 cri.go:89] found id: ""
	I0731 18:12:01.263442   74203 logs.go:276] 0 containers: []
	W0731 18:12:01.263452   74203 logs.go:278] No container was found matching "kindnet"
	I0731 18:12:01.263459   74203 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 18:12:01.263521   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 18:12:01.296598   74203 cri.go:89] found id: ""
	I0731 18:12:01.296624   74203 logs.go:276] 0 containers: []
	W0731 18:12:01.296632   74203 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 18:12:01.296642   74203 logs.go:123] Gathering logs for describe nodes ...
	I0731 18:12:01.296656   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 18:12:01.372362   74203 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 18:12:01.372381   74203 logs.go:123] Gathering logs for CRI-O ...
	I0731 18:12:01.372395   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 18:12:01.461997   74203 logs.go:123] Gathering logs for container status ...
	I0731 18:12:01.462029   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 18:12:01.507610   74203 logs.go:123] Gathering logs for kubelet ...
	I0731 18:12:01.507636   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 18:12:01.558335   74203 logs.go:123] Gathering logs for dmesg ...
	I0731 18:12:01.558375   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 18:12:04.073333   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:12:04.091122   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 18:12:04.091205   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 18:12:04.130510   74203 cri.go:89] found id: ""
	I0731 18:12:04.130545   74203 logs.go:276] 0 containers: []
	W0731 18:12:04.130558   74203 logs.go:278] No container was found matching "kube-apiserver"
	I0731 18:12:04.130566   74203 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 18:12:04.130632   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 18:12:04.174749   74203 cri.go:89] found id: ""
	I0731 18:12:04.174775   74203 logs.go:276] 0 containers: []
	W0731 18:12:04.174785   74203 logs.go:278] No container was found matching "etcd"
	I0731 18:12:04.174792   74203 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 18:12:04.174846   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 18:12:04.212123   74203 cri.go:89] found id: ""
	I0731 18:12:04.212160   74203 logs.go:276] 0 containers: []
	W0731 18:12:04.212172   74203 logs.go:278] No container was found matching "coredns"
	I0731 18:12:04.212180   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 18:12:04.212254   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 18:12:04.251558   74203 cri.go:89] found id: ""
	I0731 18:12:04.251589   74203 logs.go:276] 0 containers: []
	W0731 18:12:04.251600   74203 logs.go:278] No container was found matching "kube-scheduler"
	I0731 18:12:04.251608   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 18:12:04.251671   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 18:12:04.284831   74203 cri.go:89] found id: ""
	I0731 18:12:04.284864   74203 logs.go:276] 0 containers: []
	W0731 18:12:04.284878   74203 logs.go:278] No container was found matching "kube-proxy"
	I0731 18:12:04.284886   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 18:12:04.284954   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 18:12:04.325076   74203 cri.go:89] found id: ""
	I0731 18:12:04.325115   74203 logs.go:276] 0 containers: []
	W0731 18:12:04.325126   74203 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 18:12:04.325135   74203 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 18:12:04.325195   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 18:12:04.370883   74203 cri.go:89] found id: ""
	I0731 18:12:04.370922   74203 logs.go:276] 0 containers: []
	W0731 18:12:04.370933   74203 logs.go:278] No container was found matching "kindnet"
	I0731 18:12:04.370940   74203 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 18:12:04.370999   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 18:12:04.410639   74203 cri.go:89] found id: ""
	I0731 18:12:04.410671   74203 logs.go:276] 0 containers: []
	W0731 18:12:04.410685   74203 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 18:12:04.410697   74203 logs.go:123] Gathering logs for kubelet ...
	I0731 18:12:04.410713   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 18:12:04.462988   74203 logs.go:123] Gathering logs for dmesg ...
	I0731 18:12:04.463023   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 18:12:04.479086   74203 logs.go:123] Gathering logs for describe nodes ...
	I0731 18:12:04.479123   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 18:12:04.544675   74203 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 18:12:04.544699   74203 logs.go:123] Gathering logs for CRI-O ...
	I0731 18:12:04.544712   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 18:12:04.633231   74203 logs.go:123] Gathering logs for container status ...
	I0731 18:12:04.633267   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
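The block above is one pass of the diagnostics loop the v1.20.0 profile (pid 74203) runs while waiting for its control plane to come back: it lists CRI containers for each expected component, finds none, and then gathers kubelet, dmesg, describe-nodes, CRI-O, and container-status output. The "connection to the server localhost:8443 was refused" error from kubectl describe nodes is consistent with the kube-apiserver container not existing yet. The same commands, copied from the Run lines above, can be replayed by hand (for example over minikube ssh) when triaging a stuck restart:

    sudo crictl ps -a --quiet --name=kube-apiserver
    sudo journalctl -u kubelet -n 400
    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
    sudo journalctl -u crio -n 400
    sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig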
	I0731 18:12:03.645118   73479 pod_ready.go:102] pod "metrics-server-78fcd8795b-27pkr" in "kube-system" namespace has status "Ready":"False"
	I0731 18:12:06.143865   73479 pod_ready.go:102] pod "metrics-server-78fcd8795b-27pkr" in "kube-system" namespace has status "Ready":"False"
	I0731 18:12:07.174252   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:12:07.187289   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 18:12:07.187393   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 18:12:07.220927   74203 cri.go:89] found id: ""
	I0731 18:12:07.220953   74203 logs.go:276] 0 containers: []
	W0731 18:12:07.220964   74203 logs.go:278] No container was found matching "kube-apiserver"
	I0731 18:12:07.220972   74203 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 18:12:07.221040   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 18:12:07.256817   74203 cri.go:89] found id: ""
	I0731 18:12:07.256849   74203 logs.go:276] 0 containers: []
	W0731 18:12:07.256861   74203 logs.go:278] No container was found matching "etcd"
	I0731 18:12:07.256870   74203 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 18:12:07.256935   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 18:12:07.290267   74203 cri.go:89] found id: ""
	I0731 18:12:07.290297   74203 logs.go:276] 0 containers: []
	W0731 18:12:07.290309   74203 logs.go:278] No container was found matching "coredns"
	I0731 18:12:07.290315   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 18:12:07.290373   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 18:12:07.330037   74203 cri.go:89] found id: ""
	I0731 18:12:07.330068   74203 logs.go:276] 0 containers: []
	W0731 18:12:07.330079   74203 logs.go:278] No container was found matching "kube-scheduler"
	I0731 18:12:07.330087   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 18:12:07.330143   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 18:12:07.366745   74203 cri.go:89] found id: ""
	I0731 18:12:07.366770   74203 logs.go:276] 0 containers: []
	W0731 18:12:07.366778   74203 logs.go:278] No container was found matching "kube-proxy"
	I0731 18:12:07.366783   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 18:12:07.366861   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 18:12:07.400608   74203 cri.go:89] found id: ""
	I0731 18:12:07.400637   74203 logs.go:276] 0 containers: []
	W0731 18:12:07.400648   74203 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 18:12:07.400661   74203 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 18:12:07.400727   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 18:12:07.434996   74203 cri.go:89] found id: ""
	I0731 18:12:07.435028   74203 logs.go:276] 0 containers: []
	W0731 18:12:07.435037   74203 logs.go:278] No container was found matching "kindnet"
	I0731 18:12:07.435044   74203 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 18:12:07.435130   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 18:12:07.474347   74203 cri.go:89] found id: ""
	I0731 18:12:07.474375   74203 logs.go:276] 0 containers: []
	W0731 18:12:07.474387   74203 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 18:12:07.474400   74203 logs.go:123] Gathering logs for CRI-O ...
	I0731 18:12:07.474415   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 18:12:07.549009   74203 logs.go:123] Gathering logs for container status ...
	I0731 18:12:07.549045   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 18:12:07.586710   74203 logs.go:123] Gathering logs for kubelet ...
	I0731 18:12:07.586736   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 18:12:07.640770   74203 logs.go:123] Gathering logs for dmesg ...
	I0731 18:12:07.640800   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 18:12:07.654380   74203 logs.go:123] Gathering logs for describe nodes ...
	I0731 18:12:07.654405   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 18:12:07.721479   74203 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 18:12:10.221837   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:12:10.235686   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 18:12:10.235746   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 18:12:10.268769   74203 cri.go:89] found id: ""
	I0731 18:12:10.268794   74203 logs.go:276] 0 containers: []
	W0731 18:12:10.268802   74203 logs.go:278] No container was found matching "kube-apiserver"
	I0731 18:12:10.268808   74203 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 18:12:10.268860   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 18:12:10.305229   74203 cri.go:89] found id: ""
	I0731 18:12:10.305264   74203 logs.go:276] 0 containers: []
	W0731 18:12:10.305277   74203 logs.go:278] No container was found matching "etcd"
	I0731 18:12:10.305290   74203 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 18:12:10.305353   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 18:12:10.337070   74203 cri.go:89] found id: ""
	I0731 18:12:10.337095   74203 logs.go:276] 0 containers: []
	W0731 18:12:10.337104   74203 logs.go:278] No container was found matching "coredns"
	I0731 18:12:10.337109   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 18:12:10.337155   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 18:12:08.643708   73479 pod_ready.go:102] pod "metrics-server-78fcd8795b-27pkr" in "kube-system" namespace has status "Ready":"False"
	I0731 18:12:10.645483   73479 pod_ready.go:102] pod "metrics-server-78fcd8795b-27pkr" in "kube-system" namespace has status "Ready":"False"
	I0731 18:12:10.372979   74203 cri.go:89] found id: ""
	I0731 18:12:10.373005   74203 logs.go:276] 0 containers: []
	W0731 18:12:10.373015   74203 logs.go:278] No container was found matching "kube-scheduler"
	I0731 18:12:10.373022   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 18:12:10.373079   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 18:12:10.407225   74203 cri.go:89] found id: ""
	I0731 18:12:10.407252   74203 logs.go:276] 0 containers: []
	W0731 18:12:10.407264   74203 logs.go:278] No container was found matching "kube-proxy"
	I0731 18:12:10.407270   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 18:12:10.407327   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 18:12:10.443338   74203 cri.go:89] found id: ""
	I0731 18:12:10.443366   74203 logs.go:276] 0 containers: []
	W0731 18:12:10.443377   74203 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 18:12:10.443385   74203 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 18:12:10.443474   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 18:12:10.477005   74203 cri.go:89] found id: ""
	I0731 18:12:10.477030   74203 logs.go:276] 0 containers: []
	W0731 18:12:10.477038   74203 logs.go:278] No container was found matching "kindnet"
	I0731 18:12:10.477043   74203 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 18:12:10.477092   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 18:12:10.509338   74203 cri.go:89] found id: ""
	I0731 18:12:10.509367   74203 logs.go:276] 0 containers: []
	W0731 18:12:10.509378   74203 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 18:12:10.509389   74203 logs.go:123] Gathering logs for kubelet ...
	I0731 18:12:10.509405   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 18:12:10.559604   74203 logs.go:123] Gathering logs for dmesg ...
	I0731 18:12:10.559639   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 18:12:10.572652   74203 logs.go:123] Gathering logs for describe nodes ...
	I0731 18:12:10.572682   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 18:12:10.642749   74203 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 18:12:10.642772   74203 logs.go:123] Gathering logs for CRI-O ...
	I0731 18:12:10.642789   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 18:12:10.728716   74203 logs.go:123] Gathering logs for container status ...
	I0731 18:12:10.728753   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 18:12:13.267783   74203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:12:13.282235   74203 kubeadm.go:597] duration metric: took 4m4.41837453s to restartPrimaryControlPlane
	W0731 18:12:13.282324   74203 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0731 18:12:13.282355   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
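At this point restartPrimaryControlPlane has been retrying for just over four minutes (4m4.4s per the duration metric above), so minikube gives up on an in-place restart and falls back to wiping the node before re-running kubeadm init. The reset invocation, verbatim from the Run line above:

    sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force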
	I0731 18:12:15.410363   73696 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (31.136815784s)
	I0731 18:12:15.410431   73696 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0731 18:12:15.426599   73696 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0731 18:12:15.435823   73696 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0731 18:12:15.444553   73696 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0731 18:12:15.444581   73696 kubeadm.go:157] found existing configuration files:
	
	I0731 18:12:15.444624   73696 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0731 18:12:15.453198   73696 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0731 18:12:15.453273   73696 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0731 18:12:15.461988   73696 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0731 18:12:15.470178   73696 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0731 18:12:15.470238   73696 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0731 18:12:15.478903   73696 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0731 18:12:15.487176   73696 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0731 18:12:15.487215   73696 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0731 18:12:15.496114   73696 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0731 18:12:15.504518   73696 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0731 18:12:15.504579   73696 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
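The "config check failed, skipping stale config cleanup" sequence above (pid 73696) is minikube's pre-init hygiene pass: the initial ls shows the kubeconfigs under /etc/kubernetes were already removed by the earlier kubeadm reset, and each grep/rm pair then confirms no file pointing at the wrong control-plane endpoint is left behind. A minimal bash sketch of that pattern, assuming the endpoint shown in the log for this profile (port 8444):

    endpoint="https://control-plane.minikube.internal:8444"
    for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
        # drop any kubeconfig that does not reference the expected endpoint
        sudo grep -q "$endpoint" "/etc/kubernetes/$f" || sudo rm -f "/etc/kubernetes/$f"
    done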
	I0731 18:12:15.513915   73696 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0731 18:12:15.563318   73696 kubeadm.go:310] [init] Using Kubernetes version: v1.30.3
	I0731 18:12:15.563381   73696 kubeadm.go:310] [preflight] Running pre-flight checks
	I0731 18:12:15.697426   73696 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0731 18:12:15.697574   73696 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0731 18:12:15.697688   73696 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0731 18:12:15.902621   73696 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0731 18:12:15.904763   73696 out.go:204]   - Generating certificates and keys ...
	I0731 18:12:15.904869   73696 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0731 18:12:15.904948   73696 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0731 18:12:15.905049   73696 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0731 18:12:15.905149   73696 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0731 18:12:15.905247   73696 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0731 18:12:15.905328   73696 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0731 18:12:15.905426   73696 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0731 18:12:15.905516   73696 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0731 18:12:15.905620   73696 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0731 18:12:15.905729   73696 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0731 18:12:15.905812   73696 kubeadm.go:310] [certs] Using the existing "sa" key
	I0731 18:12:15.905890   73696 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0731 18:12:16.011366   73696 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0731 18:12:16.171776   73696 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0731 18:12:16.404302   73696 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0731 18:12:16.559451   73696 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0731 18:12:16.686612   73696 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0731 18:12:16.687311   73696 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0731 18:12:16.689956   73696 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0731 18:12:13.142855   73479 pod_ready.go:102] pod "metrics-server-78fcd8795b-27pkr" in "kube-system" namespace has status "Ready":"False"
	I0731 18:12:15.144107   73479 pod_ready.go:102] pod "metrics-server-78fcd8795b-27pkr" in "kube-system" namespace has status "Ready":"False"
	I0731 18:12:16.959318   74203 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (3.676937263s)
	I0731 18:12:16.959425   74203 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0731 18:12:16.973440   74203 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0731 18:12:16.983482   74203 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0731 18:12:16.993930   74203 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0731 18:12:16.993951   74203 kubeadm.go:157] found existing configuration files:
	
	I0731 18:12:16.993993   74203 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0731 18:12:17.002713   74203 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0731 18:12:17.002771   74203 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0731 18:12:17.012107   74203 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0731 18:12:17.022548   74203 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0731 18:12:17.022604   74203 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0731 18:12:17.033569   74203 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0731 18:12:17.043338   74203 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0731 18:12:17.043391   74203 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0731 18:12:17.052064   74203 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0731 18:12:17.060785   74203 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0731 18:12:17.060850   74203 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0731 18:12:17.069499   74203 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0731 18:12:17.136512   74203 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0731 18:12:17.136579   74203 kubeadm.go:310] [preflight] Running pre-flight checks
	I0731 18:12:17.286224   74203 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0731 18:12:17.286383   74203 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0731 18:12:17.286506   74203 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0731 18:12:17.467092   74203 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0731 18:12:17.468918   74203 out.go:204]   - Generating certificates and keys ...
	I0731 18:12:17.469024   74203 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0731 18:12:17.469135   74203 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0731 18:12:17.469229   74203 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0731 18:12:17.469307   74203 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0731 18:12:17.469439   74203 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0731 18:12:17.469525   74203 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0731 18:12:17.469609   74203 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0731 18:12:17.470025   74203 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0731 18:12:17.470501   74203 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0731 18:12:17.470852   74203 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0731 18:12:17.470899   74203 kubeadm.go:310] [certs] Using the existing "sa" key
	I0731 18:12:17.470949   74203 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0731 18:12:17.673308   74203 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0731 18:12:17.922789   74203 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0731 18:12:18.391239   74203 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0731 18:12:18.464854   74203 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0731 18:12:18.480495   74203 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0731 18:12:18.480675   74203 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0731 18:12:18.480746   74203 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0731 18:12:18.632564   74203 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0731 18:12:18.635416   74203 out.go:204]   - Booting up control plane ...
	I0731 18:12:18.635542   74203 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0731 18:12:18.643338   74203 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0731 18:12:18.645881   74203 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0731 18:12:18.646898   74203 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0731 18:12:18.650052   74203 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0731 18:12:16.691876   73696 out.go:204]   - Booting up control plane ...
	I0731 18:12:16.691967   73696 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0731 18:12:16.692064   73696 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0731 18:12:16.692643   73696 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0731 18:12:16.713038   73696 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0731 18:12:16.713123   73696 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0731 18:12:16.713159   73696 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0731 18:12:16.855506   73696 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0731 18:12:16.855638   73696 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0731 18:12:17.856697   73696 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.001297342s
	I0731 18:12:17.856823   73696 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0731 18:12:17.144295   73479 pod_ready.go:102] pod "metrics-server-78fcd8795b-27pkr" in "kube-system" namespace has status "Ready":"False"
	I0731 18:12:19.644100   73479 pod_ready.go:102] pod "metrics-server-78fcd8795b-27pkr" in "kube-system" namespace has status "Ready":"False"
	I0731 18:12:21.644654   73479 pod_ready.go:102] pod "metrics-server-78fcd8795b-27pkr" in "kube-system" namespace has status "Ready":"False"
	I0731 18:12:22.358287   73696 kubeadm.go:310] [api-check] The API server is healthy after 4.501118217s
	I0731 18:12:22.370066   73696 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0731 18:12:22.382929   73696 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0731 18:12:22.402765   73696 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0731 18:12:22.403044   73696 kubeadm.go:310] [mark-control-plane] Marking the node default-k8s-diff-port-094310 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0731 18:12:22.419724   73696 kubeadm.go:310] [bootstrap-token] Using token: hduea8.ix2m91ewiu6okgi9
	I0731 18:12:22.421231   73696 out.go:204]   - Configuring RBAC rules ...
	I0731 18:12:22.421382   73696 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0731 18:12:22.426230   73696 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0731 18:12:22.434423   73696 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0731 18:12:22.437839   73696 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0731 18:12:22.449264   73696 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0731 18:12:22.452420   73696 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0731 18:12:22.764876   73696 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0731 18:12:23.216229   73696 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0731 18:12:23.765173   73696 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0731 18:12:23.766223   73696 kubeadm.go:310] 
	I0731 18:12:23.766311   73696 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0731 18:12:23.766356   73696 kubeadm.go:310] 
	I0731 18:12:23.766466   73696 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0731 18:12:23.766487   73696 kubeadm.go:310] 
	I0731 18:12:23.766521   73696 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0731 18:12:23.766641   73696 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0731 18:12:23.766726   73696 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0731 18:12:23.766741   73696 kubeadm.go:310] 
	I0731 18:12:23.766827   73696 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0731 18:12:23.766844   73696 kubeadm.go:310] 
	I0731 18:12:23.766899   73696 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0731 18:12:23.766910   73696 kubeadm.go:310] 
	I0731 18:12:23.766986   73696 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0731 18:12:23.767089   73696 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0731 18:12:23.767225   73696 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0731 18:12:23.767237   73696 kubeadm.go:310] 
	I0731 18:12:23.767310   73696 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0731 18:12:23.767401   73696 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0731 18:12:23.767411   73696 kubeadm.go:310] 
	I0731 18:12:23.767531   73696 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8444 --token hduea8.ix2m91ewiu6okgi9 \
	I0731 18:12:23.767662   73696 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:460b24b51d10a3760f2391542a8a89e401359854f437f922cbee3f2d8ed6c856 \
	I0731 18:12:23.767695   73696 kubeadm.go:310] 	--control-plane 
	I0731 18:12:23.767702   73696 kubeadm.go:310] 
	I0731 18:12:23.767773   73696 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0731 18:12:23.767782   73696 kubeadm.go:310] 
	I0731 18:12:23.767847   73696 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8444 --token hduea8.ix2m91ewiu6okgi9 \
	I0731 18:12:23.767930   73696 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:460b24b51d10a3760f2391542a8a89e401359854f437f922cbee3f2d8ed6c856 
	I0731 18:12:23.768912   73696 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0731 18:12:23.769058   73696 cni.go:84] Creating CNI manager for ""
	I0731 18:12:23.769073   73696 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0731 18:12:23.771596   73696 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0731 18:12:23.773122   73696 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0731 18:12:23.782944   73696 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
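The 496-byte /etc/cni/net.d/1-k8s.conflist written above is the bridge CNI configuration minikube recommends for the kvm2 driver with the crio runtime; its contents are not echoed in the log. If needed, the generated file and the available CNI plugins can be inspected on the node (assuming the standard /opt/cni/bin plugin directory):

    minikube -p default-k8s-diff-port-094310 ssh -- sudo cat /etc/cni/net.d/1-k8s.conflist
    minikube -p default-k8s-diff-port-094310 ssh -- ls /opt/cni/bin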
	I0731 18:12:23.800254   73696 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0731 18:12:23.800383   73696 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 18:12:23.800398   73696 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes default-k8s-diff-port-094310 minikube.k8s.io/updated_at=2024_07_31T18_12_23_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=1d737dad7efa60c56d30434fcd857dd3b14c91d9 minikube.k8s.io/name=default-k8s-diff-port-094310 minikube.k8s.io/primary=true
	I0731 18:12:23.827190   73696 ops.go:34] apiserver oom_adj: -16
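With the apiserver answering, minikube records its oom_adj (-16 above) and runs two one-shot kubectl commands: a minikube-rbac clusterrolebinding granting cluster-admin to the kube-system default service account, and a label pass stamping the node with the profile's metadata. Abbreviated from the Run lines above (the full label set also includes updated_at, version, commit, and name):

    sudo /var/lib/minikube/binaries/v1.30.3/kubectl create clusterrolebinding minikube-rbac \
        --clusterrole=cluster-admin --serviceaccount=kube-system:default \
        --kubeconfig=/var/lib/minikube/kubeconfig
    sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig \
        label --overwrite nodes default-k8s-diff-port-094310 minikube.k8s.io/primary=true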
	I0731 18:12:23.990425   73696 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 18:12:24.490585   73696 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 18:12:24.991490   73696 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 18:12:25.490948   73696 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 18:12:25.991461   73696 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 18:12:23.645259   73479 pod_ready.go:102] pod "metrics-server-78fcd8795b-27pkr" in "kube-system" namespace has status "Ready":"False"
	I0731 18:12:26.144352   73479 pod_ready.go:102] pod "metrics-server-78fcd8795b-27pkr" in "kube-system" namespace has status "Ready":"False"
	I0731 18:12:26.491041   73696 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 18:12:26.990516   73696 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 18:12:27.491386   73696 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 18:12:27.991150   73696 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 18:12:28.490838   73696 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 18:12:28.991267   73696 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 18:12:29.490459   73696 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 18:12:29.990672   73696 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 18:12:30.491302   73696 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 18:12:30.990644   73696 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 18:12:28.644749   73479 pod_ready.go:102] pod "metrics-server-78fcd8795b-27pkr" in "kube-system" namespace has status "Ready":"False"
	I0731 18:12:31.143617   73479 pod_ready.go:102] pod "metrics-server-78fcd8795b-27pkr" in "kube-system" namespace has status "Ready":"False"
	I0731 18:12:32.532203   73800 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (31.265875459s)
	I0731 18:12:32.532286   73800 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0731 18:12:32.548139   73800 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0731 18:12:32.558049   73800 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0731 18:12:32.567036   73800 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0731 18:12:32.567060   73800 kubeadm.go:157] found existing configuration files:
	
	I0731 18:12:32.567133   73800 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0731 18:12:32.576069   73800 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0731 18:12:32.576124   73800 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0731 18:12:32.584762   73800 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0731 18:12:32.592927   73800 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0731 18:12:32.592980   73800 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0731 18:12:32.601309   73800 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0731 18:12:32.609478   73800 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0731 18:12:32.609525   73800 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0731 18:12:32.617980   73800 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0731 18:12:32.625943   73800 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0731 18:12:32.625978   73800 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0731 18:12:32.634091   73800 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0731 18:12:32.821569   73800 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0731 18:12:31.491226   73696 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 18:12:31.991099   73696 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 18:12:32.490751   73696 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 18:12:32.991252   73696 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 18:12:33.490564   73696 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 18:12:33.990977   73696 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 18:12:34.491037   73696 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 18:12:34.990696   73696 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 18:12:35.491381   73696 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 18:12:35.990793   73696 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 18:12:36.490926   73696 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 18:12:36.581312   73696 kubeadm.go:1113] duration metric: took 12.780981821s to wait for elevateKubeSystemPrivileges
	I0731 18:12:36.581370   73696 kubeadm.go:394] duration metric: took 5m8.741923744s to StartCluster
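The half-second cadence of "kubectl get sa default" calls above is how minikube waits for the default ServiceAccount to appear before treating the elevateKubeSystemPrivileges step as done; here that took 12.78s, and the whole StartCluster restart 5m8.7s. A minimal sketch of the same wait, using the paths from the log:

    until sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default \
          --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
        sleep 0.5   # roughly the retry interval visible in the timestamps above
    done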
	I0731 18:12:36.581393   73696 settings.go:142] acquiring lock: {Name:mk8ac8269296bf0b887a00ff74caaad180005f89 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 18:12:36.581485   73696 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19349-8084/kubeconfig
	I0731 18:12:36.583690   73696 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19349-8084/kubeconfig: {Name:mkf2340bc1ada3c683f132aca37228d7588e1fdb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 18:12:36.583986   73696 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.72.197 Port:8444 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0731 18:12:36.585079   73696 config.go:182] Loaded profile config "default-k8s-diff-port-094310": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0731 18:12:36.585328   73696 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0731 18:12:36.585677   73696 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-094310"
	I0731 18:12:36.585686   73696 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-094310"
	I0731 18:12:36.585688   73696 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-094310"
	I0731 18:12:36.585705   73696 addons.go:234] Setting addon storage-provisioner=true in "default-k8s-diff-port-094310"
	W0731 18:12:36.585717   73696 addons.go:243] addon storage-provisioner should already be in state true
	I0731 18:12:36.585720   73696 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-094310"
	I0731 18:12:36.585732   73696 addons.go:234] Setting addon metrics-server=true in "default-k8s-diff-port-094310"
	W0731 18:12:36.585740   73696 addons.go:243] addon metrics-server should already be in state true
	I0731 18:12:36.585752   73696 host.go:66] Checking if "default-k8s-diff-port-094310" exists ...
	I0731 18:12:36.585766   73696 host.go:66] Checking if "default-k8s-diff-port-094310" exists ...
	I0731 18:12:36.586152   73696 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 18:12:36.586166   73696 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 18:12:36.586166   73696 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 18:12:36.586174   73696 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 18:12:36.586180   73696 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 18:12:36.586188   73696 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 18:12:36.586456   73696 out.go:177] * Verifying Kubernetes components...
	I0731 18:12:36.588174   73696 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0731 18:12:36.605611   73696 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38873
	I0731 18:12:36.605856   73696 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44661
	I0731 18:12:36.606122   73696 main.go:141] libmachine: () Calling .GetVersion
	I0731 18:12:36.606710   73696 main.go:141] libmachine: Using API Version  1
	I0731 18:12:36.606731   73696 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 18:12:36.606809   73696 main.go:141] libmachine: () Calling .GetVersion
	I0731 18:12:36.607072   73696 main.go:141] libmachine: () Calling .GetMachineName
	I0731 18:12:36.607240   73696 main.go:141] libmachine: Using API Version  1
	I0731 18:12:36.607262   73696 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 18:12:36.607789   73696 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 18:12:36.607817   73696 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 18:12:36.608000   73696 main.go:141] libmachine: () Calling .GetMachineName
	I0731 18:12:36.608173   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) Calling .GetState
	I0731 18:12:36.609009   73696 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39155
	I0731 18:12:36.609469   73696 main.go:141] libmachine: () Calling .GetVersion
	I0731 18:12:36.609954   73696 main.go:141] libmachine: Using API Version  1
	I0731 18:12:36.609973   73696 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 18:12:36.610333   73696 main.go:141] libmachine: () Calling .GetMachineName
	I0731 18:12:36.610936   73696 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 18:12:36.610998   73696 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 18:12:36.612199   73696 addons.go:234] Setting addon default-storageclass=true in "default-k8s-diff-port-094310"
	W0731 18:12:36.612224   73696 addons.go:243] addon default-storageclass should already be in state true
	I0731 18:12:36.612254   73696 host.go:66] Checking if "default-k8s-diff-port-094310" exists ...
	I0731 18:12:36.612624   73696 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 18:12:36.612659   73696 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 18:12:36.626474   73696 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38009
	I0731 18:12:36.626981   73696 main.go:141] libmachine: () Calling .GetVersion
	I0731 18:12:36.627514   73696 main.go:141] libmachine: Using API Version  1
	I0731 18:12:36.627534   73696 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 18:12:36.627836   73696 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43349
	I0731 18:12:36.628007   73696 main.go:141] libmachine: () Calling .GetMachineName
	I0731 18:12:36.628336   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) Calling .GetState
	I0731 18:12:36.628415   73696 main.go:141] libmachine: () Calling .GetVersion
	I0731 18:12:36.628816   73696 main.go:141] libmachine: Using API Version  1
	I0731 18:12:36.628831   73696 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 18:12:36.629237   73696 main.go:141] libmachine: () Calling .GetMachineName
	I0731 18:12:36.629450   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) Calling .GetState
	I0731 18:12:36.630518   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) Calling .DriverName
	I0731 18:12:36.631198   73696 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43471
	I0731 18:12:36.631550   73696 main.go:141] libmachine: () Calling .GetVersion
	I0731 18:12:36.632064   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) Calling .DriverName
	I0731 18:12:36.632200   73696 main.go:141] libmachine: Using API Version  1
	I0731 18:12:36.632217   73696 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 18:12:36.632576   73696 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0731 18:12:36.632739   73696 main.go:141] libmachine: () Calling .GetMachineName
	I0731 18:12:36.633275   73696 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 18:12:36.633313   73696 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 18:12:36.633711   73696 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0731 18:12:33.642776   73479 pod_ready.go:102] pod "metrics-server-78fcd8795b-27pkr" in "kube-system" namespace has status "Ready":"False"
	I0731 18:12:35.643640   73479 pod_ready.go:102] pod "metrics-server-78fcd8795b-27pkr" in "kube-system" namespace has status "Ready":"False"
	I0731 18:12:36.633805   73696 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0731 18:12:36.633820   73696 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0731 18:12:36.633840   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) Calling .GetSSHHostname
	I0731 18:12:36.634990   73696 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0731 18:12:36.635005   73696 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0731 18:12:36.635022   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) Calling .GetSSHHostname
	I0731 18:12:36.637135   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) DBG | domain default-k8s-diff-port-094310 has defined MAC address 52:54:00:a9:b2:ae in network mk-default-k8s-diff-port-094310
	I0731 18:12:36.637767   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a9:b2:ae", ip: ""} in network mk-default-k8s-diff-port-094310: {Iface:virbr1 ExpiryTime:2024-07-31 19:07:14 +0000 UTC Type:0 Mac:52:54:00:a9:b2:ae Iaid: IPaddr:192.168.72.197 Prefix:24 Hostname:default-k8s-diff-port-094310 Clientid:01:52:54:00:a9:b2:ae}
	I0731 18:12:36.637792   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) DBG | domain default-k8s-diff-port-094310 has defined IP address 192.168.72.197 and MAC address 52:54:00:a9:b2:ae in network mk-default-k8s-diff-port-094310
	I0731 18:12:36.637891   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) Calling .GetSSHPort
	I0731 18:12:36.639047   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) DBG | domain default-k8s-diff-port-094310 has defined MAC address 52:54:00:a9:b2:ae in network mk-default-k8s-diff-port-094310
	I0731 18:12:36.639583   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a9:b2:ae", ip: ""} in network mk-default-k8s-diff-port-094310: {Iface:virbr1 ExpiryTime:2024-07-31 19:07:14 +0000 UTC Type:0 Mac:52:54:00:a9:b2:ae Iaid: IPaddr:192.168.72.197 Prefix:24 Hostname:default-k8s-diff-port-094310 Clientid:01:52:54:00:a9:b2:ae}
	I0731 18:12:36.639617   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) DBG | domain default-k8s-diff-port-094310 has defined IP address 192.168.72.197 and MAC address 52:54:00:a9:b2:ae in network mk-default-k8s-diff-port-094310
	I0731 18:12:36.639885   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) Calling .GetSSHPort
	I0731 18:12:36.640106   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) Calling .GetSSHKeyPath
	I0731 18:12:36.640235   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) Calling .GetSSHUsername
	I0731 18:12:36.640419   73696 sshutil.go:53] new ssh client: &{IP:192.168.72.197 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19349-8084/.minikube/machines/default-k8s-diff-port-094310/id_rsa Username:docker}
	I0731 18:12:36.641860   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) Calling .GetSSHKeyPath
	I0731 18:12:36.642037   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) Calling .GetSSHUsername
	I0731 18:12:36.642205   73696 sshutil.go:53] new ssh client: &{IP:192.168.72.197 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19349-8084/.minikube/machines/default-k8s-diff-port-094310/id_rsa Username:docker}
	I0731 18:12:36.659960   73696 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36389
	I0731 18:12:36.660280   73696 main.go:141] libmachine: () Calling .GetVersion
	I0731 18:12:36.660692   73696 main.go:141] libmachine: Using API Version  1
	I0731 18:12:36.660713   73696 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 18:12:36.660986   73696 main.go:141] libmachine: () Calling .GetMachineName
	I0731 18:12:36.661150   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) Calling .GetState
	I0731 18:12:36.663024   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) Calling .DriverName
	I0731 18:12:36.663232   73696 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0731 18:12:36.663245   73696 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0731 18:12:36.663264   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) Calling .GetSSHHostname
	I0731 18:12:36.666016   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) DBG | domain default-k8s-diff-port-094310 has defined MAC address 52:54:00:a9:b2:ae in network mk-default-k8s-diff-port-094310
	I0731 18:12:36.666393   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a9:b2:ae", ip: ""} in network mk-default-k8s-diff-port-094310: {Iface:virbr1 ExpiryTime:2024-07-31 19:07:14 +0000 UTC Type:0 Mac:52:54:00:a9:b2:ae Iaid: IPaddr:192.168.72.197 Prefix:24 Hostname:default-k8s-diff-port-094310 Clientid:01:52:54:00:a9:b2:ae}
	I0731 18:12:36.666472   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) DBG | domain default-k8s-diff-port-094310 has defined IP address 192.168.72.197 and MAC address 52:54:00:a9:b2:ae in network mk-default-k8s-diff-port-094310
	I0731 18:12:36.666562   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) Calling .GetSSHPort
	I0731 18:12:36.666730   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) Calling .GetSSHKeyPath
	I0731 18:12:36.666832   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) Calling .GetSSHUsername
	I0731 18:12:36.666935   73696 sshutil.go:53] new ssh client: &{IP:192.168.72.197 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19349-8084/.minikube/machines/default-k8s-diff-port-094310/id_rsa Username:docker}
	I0731 18:12:36.813977   73696 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0731 18:12:36.832201   73696 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-094310" to be "Ready" ...
	I0731 18:12:36.849864   73696 node_ready.go:49] node "default-k8s-diff-port-094310" has status "Ready":"True"
	I0731 18:12:36.849891   73696 node_ready.go:38] duration metric: took 17.657098ms for node "default-k8s-diff-port-094310" to be "Ready" ...
	I0731 18:12:36.849903   73696 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0731 18:12:36.860981   73696 pod_ready.go:78] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-094310" in "kube-system" namespace to be "Ready" ...
	I0731 18:12:36.865178   73696 pod_ready.go:92] pod "etcd-default-k8s-diff-port-094310" in "kube-system" namespace has status "Ready":"True"
	I0731 18:12:36.865198   73696 pod_ready.go:81] duration metric: took 4.190559ms for pod "etcd-default-k8s-diff-port-094310" in "kube-system" namespace to be "Ready" ...
	I0731 18:12:36.865209   73696 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-094310" in "kube-system" namespace to be "Ready" ...
	I0731 18:12:36.869977   73696 pod_ready.go:92] pod "kube-apiserver-default-k8s-diff-port-094310" in "kube-system" namespace has status "Ready":"True"
	I0731 18:12:36.869998   73696 pod_ready.go:81] duration metric: took 4.780295ms for pod "kube-apiserver-default-k8s-diff-port-094310" in "kube-system" namespace to be "Ready" ...
	I0731 18:12:36.870008   73696 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-094310" in "kube-system" namespace to be "Ready" ...
	I0731 18:12:36.874051   73696 pod_ready.go:92] pod "kube-controller-manager-default-k8s-diff-port-094310" in "kube-system" namespace has status "Ready":"True"
	I0731 18:12:36.874069   73696 pod_ready.go:81] duration metric: took 4.053362ms for pod "kube-controller-manager-default-k8s-diff-port-094310" in "kube-system" namespace to be "Ready" ...
	I0731 18:12:36.874079   73696 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-094310" in "kube-system" namespace to be "Ready" ...
	I0731 18:12:36.878519   73696 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-094310" in "kube-system" namespace has status "Ready":"True"
	I0731 18:12:36.878536   73696 pod_ready.go:81] duration metric: took 4.448692ms for pod "kube-scheduler-default-k8s-diff-port-094310" in "kube-system" namespace to be "Ready" ...
	I0731 18:12:36.878544   73696 pod_ready.go:38] duration metric: took 28.628924ms for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0731 18:12:36.878564   73696 api_server.go:52] waiting for apiserver process to appear ...
	I0731 18:12:36.878622   73696 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:12:36.892011   73696 api_server.go:72] duration metric: took 307.983877ms to wait for apiserver process to appear ...
	I0731 18:12:36.892031   73696 api_server.go:88] waiting for apiserver healthz status ...
	I0731 18:12:36.892049   73696 api_server.go:253] Checking apiserver healthz at https://192.168.72.197:8444/healthz ...
	I0731 18:12:36.895929   73696 api_server.go:279] https://192.168.72.197:8444/healthz returned 200:
	ok
	I0731 18:12:36.896760   73696 api_server.go:141] control plane version: v1.30.3
	I0731 18:12:36.896780   73696 api_server.go:131] duration metric: took 4.741896ms to wait for apiserver health ...
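The healthz probe above goes to port 8444 rather than the usual 8443, which is exactly what the default-k8s-diff-port profile exercises. A rough manual equivalent of this check, assuming the IP and port recorded in the log and that the default RBAC bindings still allow unauthenticated access to /healthz, would be:

    # Hit the apiserver health endpoint directly; the cert is self-signed, hence -k.
    curl -k https://192.168.72.197:8444/healthz
    # A healthy control plane answers: ok

    # Or go through the kubeconfig/context that minikube writes for this profile:
    kubectl --context default-k8s-diff-port-094310 get --raw /healthz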
	I0731 18:12:36.896789   73696 system_pods.go:43] waiting for kube-system pods to appear ...
	I0731 18:12:36.974073   73696 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0731 18:12:36.974092   73696 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0731 18:12:37.010218   73696 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0731 18:12:37.018536   73696 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0731 18:12:37.039734   73696 system_pods.go:59] 5 kube-system pods found
	I0731 18:12:37.039767   73696 system_pods.go:61] "etcd-default-k8s-diff-port-094310" [f73c4fb4-b160-4e79-a0a4-8fba255cfb8c] Running
	I0731 18:12:37.039773   73696 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-094310" [820215b3-0eaa-46ca-bf0b-4fa73942160a] Running
	I0731 18:12:37.039778   73696 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-094310" [5ca0ed32-5439-49b5-9a85-1a4b1628371d] Running
	I0731 18:12:37.039787   73696 system_pods.go:61] "kube-proxy-4vvjq" [a8dd9db6-5f45-4c8d-a9d3-03ede1db07eb] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0731 18:12:37.039792   73696 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-094310" [11590718-4e74-46eb-be90-6405c3f1759f] Running
	I0731 18:12:37.039802   73696 system_pods.go:74] duration metric: took 143.007992ms to wait for pod list to return data ...
	I0731 18:12:37.039812   73696 default_sa.go:34] waiting for default service account to be created ...
	I0731 18:12:37.041650   73696 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0731 18:12:37.041672   73696 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0731 18:12:37.096891   73696 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0731 18:12:37.096920   73696 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0731 18:12:37.159438   73696 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0731 18:12:37.235560   73696 default_sa.go:45] found service account: "default"
	I0731 18:12:37.235599   73696 default_sa.go:55] duration metric: took 195.778976ms for default service account to be created ...
	I0731 18:12:37.235612   73696 system_pods.go:116] waiting for k8s-apps to be running ...
	I0731 18:12:37.439935   73696 system_pods.go:86] 7 kube-system pods found
	I0731 18:12:37.439966   73696 system_pods.go:89] "coredns-7db6d8ff4d-2r7zb" [e7acc926-db53-4c1c-a62f-45e303c69fc7] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0731 18:12:37.439975   73696 system_pods.go:89] "coredns-7db6d8ff4d-756jj" [c91e86b1-8348-4dc7-9aa1-6909055c2dde] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0731 18:12:37.439982   73696 system_pods.go:89] "etcd-default-k8s-diff-port-094310" [f73c4fb4-b160-4e79-a0a4-8fba255cfb8c] Running
	I0731 18:12:37.439988   73696 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-094310" [820215b3-0eaa-46ca-bf0b-4fa73942160a] Running
	I0731 18:12:37.439993   73696 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-094310" [5ca0ed32-5439-49b5-9a85-1a4b1628371d] Running
	I0731 18:12:37.439998   73696 system_pods.go:89] "kube-proxy-4vvjq" [a8dd9db6-5f45-4c8d-a9d3-03ede1db07eb] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0731 18:12:37.440003   73696 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-094310" [11590718-4e74-46eb-be90-6405c3f1759f] Running
	I0731 18:12:37.440020   73696 retry.go:31] will retry after 230.300903ms: missing components: kube-dns, kube-proxy
	I0731 18:12:37.676385   73696 system_pods.go:86] 7 kube-system pods found
	I0731 18:12:37.676411   73696 system_pods.go:89] "coredns-7db6d8ff4d-2r7zb" [e7acc926-db53-4c1c-a62f-45e303c69fc7] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0731 18:12:37.676421   73696 system_pods.go:89] "coredns-7db6d8ff4d-756jj" [c91e86b1-8348-4dc7-9aa1-6909055c2dde] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0731 18:12:37.676429   73696 system_pods.go:89] "etcd-default-k8s-diff-port-094310" [f73c4fb4-b160-4e79-a0a4-8fba255cfb8c] Running
	I0731 18:12:37.676436   73696 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-094310" [820215b3-0eaa-46ca-bf0b-4fa73942160a] Running
	I0731 18:12:37.676442   73696 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-094310" [5ca0ed32-5439-49b5-9a85-1a4b1628371d] Running
	I0731 18:12:37.676451   73696 system_pods.go:89] "kube-proxy-4vvjq" [a8dd9db6-5f45-4c8d-a9d3-03ede1db07eb] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0731 18:12:37.676456   73696 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-094310" [11590718-4e74-46eb-be90-6405c3f1759f] Running
	I0731 18:12:37.676475   73696 retry.go:31] will retry after 311.28179ms: missing components: kube-dns, kube-proxy
	I0731 18:12:37.813837   73696 main.go:141] libmachine: Making call to close driver server
	I0731 18:12:37.813870   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) Calling .Close
	I0731 18:12:37.814017   73696 main.go:141] libmachine: Making call to close driver server
	I0731 18:12:37.814039   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) Calling .Close
	I0731 18:12:37.814265   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) DBG | Closing plugin on server side
	I0731 18:12:37.814316   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) DBG | Closing plugin on server side
	I0731 18:12:37.814363   73696 main.go:141] libmachine: Successfully made call to close driver server
	I0731 18:12:37.814376   73696 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 18:12:37.814391   73696 main.go:141] libmachine: Making call to close driver server
	I0731 18:12:37.814402   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) Calling .Close
	I0731 18:12:37.814531   73696 main.go:141] libmachine: Successfully made call to close driver server
	I0731 18:12:37.814556   73696 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 18:12:37.814598   73696 main.go:141] libmachine: Making call to close driver server
	I0731 18:12:37.814608   73696 main.go:141] libmachine: Successfully made call to close driver server
	I0731 18:12:37.814631   73696 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 18:12:37.814622   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) Calling .Close
	I0731 18:12:37.816102   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) DBG | Closing plugin on server side
	I0731 18:12:37.816268   73696 main.go:141] libmachine: Successfully made call to close driver server
	I0731 18:12:37.816280   73696 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 18:12:37.830991   73696 main.go:141] libmachine: Making call to close driver server
	I0731 18:12:37.831018   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) Calling .Close
	I0731 18:12:37.831354   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) DBG | Closing plugin on server side
	I0731 18:12:37.831354   73696 main.go:141] libmachine: Successfully made call to close driver server
	I0731 18:12:37.831380   73696 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 18:12:37.995206   73696 system_pods.go:86] 8 kube-system pods found
	I0731 18:12:37.995248   73696 system_pods.go:89] "coredns-7db6d8ff4d-2r7zb" [e7acc926-db53-4c1c-a62f-45e303c69fc7] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0731 18:12:37.995262   73696 system_pods.go:89] "coredns-7db6d8ff4d-756jj" [c91e86b1-8348-4dc7-9aa1-6909055c2dde] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0731 18:12:37.995272   73696 system_pods.go:89] "etcd-default-k8s-diff-port-094310" [f73c4fb4-b160-4e79-a0a4-8fba255cfb8c] Running
	I0731 18:12:37.995295   73696 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-094310" [820215b3-0eaa-46ca-bf0b-4fa73942160a] Running
	I0731 18:12:37.995310   73696 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-094310" [5ca0ed32-5439-49b5-9a85-1a4b1628371d] Running
	I0731 18:12:37.995322   73696 system_pods.go:89] "kube-proxy-4vvjq" [a8dd9db6-5f45-4c8d-a9d3-03ede1db07eb] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0731 18:12:37.995332   73696 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-094310" [11590718-4e74-46eb-be90-6405c3f1759f] Running
	I0731 18:12:37.995345   73696 system_pods.go:89] "storage-provisioner" [2e4c5a96-4dfc-4af6-8d3e-bab865644328] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0731 18:12:37.995370   73696 retry.go:31] will retry after 381.430275ms: missing components: kube-dns, kube-proxy
	I0731 18:12:38.392678   73696 system_pods.go:86] 9 kube-system pods found
	I0731 18:12:38.392719   73696 system_pods.go:89] "coredns-7db6d8ff4d-2r7zb" [e7acc926-db53-4c1c-a62f-45e303c69fc7] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0731 18:12:38.392732   73696 system_pods.go:89] "coredns-7db6d8ff4d-756jj" [c91e86b1-8348-4dc7-9aa1-6909055c2dde] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0731 18:12:38.392742   73696 system_pods.go:89] "etcd-default-k8s-diff-port-094310" [f73c4fb4-b160-4e79-a0a4-8fba255cfb8c] Running
	I0731 18:12:38.392751   73696 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-094310" [820215b3-0eaa-46ca-bf0b-4fa73942160a] Running
	I0731 18:12:38.392760   73696 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-094310" [5ca0ed32-5439-49b5-9a85-1a4b1628371d] Running
	I0731 18:12:38.392770   73696 system_pods.go:89] "kube-proxy-4vvjq" [a8dd9db6-5f45-4c8d-a9d3-03ede1db07eb] Running
	I0731 18:12:38.392778   73696 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-094310" [11590718-4e74-46eb-be90-6405c3f1759f] Running
	I0731 18:12:38.392787   73696 system_pods.go:89] "metrics-server-569cc877fc-mskwc" [c57990b4-1d91-4764-9c33-2fd5f7d2f83b] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0731 18:12:38.392802   73696 system_pods.go:89] "storage-provisioner" [2e4c5a96-4dfc-4af6-8d3e-bab865644328] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0731 18:12:38.392823   73696 retry.go:31] will retry after 567.905994ms: missing components: kube-dns
	I0731 18:12:38.501117   73696 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.341621275s)
	I0731 18:12:38.501181   73696 main.go:141] libmachine: Making call to close driver server
	I0731 18:12:38.501200   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) Calling .Close
	I0731 18:12:38.501595   73696 main.go:141] libmachine: Successfully made call to close driver server
	I0731 18:12:38.501615   73696 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 18:12:38.501625   73696 main.go:141] libmachine: Making call to close driver server
	I0731 18:12:38.501634   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) Calling .Close
	I0731 18:12:38.501593   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) DBG | Closing plugin on server side
	I0731 18:12:38.501907   73696 main.go:141] libmachine: Successfully made call to close driver server
	I0731 18:12:38.501953   73696 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 18:12:38.501966   73696 main.go:141] libmachine: (default-k8s-diff-port-094310) DBG | Closing plugin on server side
	I0731 18:12:38.501975   73696 addons.go:475] Verifying addon metrics-server=true in "default-k8s-diff-port-094310"
	I0731 18:12:38.505204   73696 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0731 18:12:38.506517   73696 addons.go:510] duration metric: took 1.921658263s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
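With the addons applied, the harness keeps polling kube-system until metrics-server reports Ready (the interleaved lines from process 73479 show the same kind of poll against another profile). To repeat that check by hand, assuming the addon uses the upstream k8s-app=metrics-server label, something like the following would do; the exact selector is an assumption, not taken from this log:

    # List the metrics-server pods the readiness poll is waiting on.
    kubectl --context default-k8s-diff-port-094310 -n kube-system get pods -l k8s-app=metrics-server
    # Show why a pod is stuck in ContainersNotReady (image pulls, probes, etc.).
    kubectl --context default-k8s-diff-port-094310 -n kube-system describe pods -l k8s-app=metrics-server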
	I0731 18:12:38.967657   73696 system_pods.go:86] 9 kube-system pods found
	I0731 18:12:38.967691   73696 system_pods.go:89] "coredns-7db6d8ff4d-2r7zb" [e7acc926-db53-4c1c-a62f-45e303c69fc7] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0731 18:12:38.967700   73696 system_pods.go:89] "coredns-7db6d8ff4d-756jj" [c91e86b1-8348-4dc7-9aa1-6909055c2dde] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0731 18:12:38.967708   73696 system_pods.go:89] "etcd-default-k8s-diff-port-094310" [f73c4fb4-b160-4e79-a0a4-8fba255cfb8c] Running
	I0731 18:12:38.967716   73696 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-094310" [820215b3-0eaa-46ca-bf0b-4fa73942160a] Running
	I0731 18:12:38.967723   73696 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-094310" [5ca0ed32-5439-49b5-9a85-1a4b1628371d] Running
	I0731 18:12:38.967729   73696 system_pods.go:89] "kube-proxy-4vvjq" [a8dd9db6-5f45-4c8d-a9d3-03ede1db07eb] Running
	I0731 18:12:38.967736   73696 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-094310" [11590718-4e74-46eb-be90-6405c3f1759f] Running
	I0731 18:12:38.967746   73696 system_pods.go:89] "metrics-server-569cc877fc-mskwc" [c57990b4-1d91-4764-9c33-2fd5f7d2f83b] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0731 18:12:38.967759   73696 system_pods.go:89] "storage-provisioner" [2e4c5a96-4dfc-4af6-8d3e-bab865644328] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0731 18:12:38.967779   73696 retry.go:31] will retry after 488.293971ms: missing components: kube-dns
	I0731 18:12:39.464918   73696 system_pods.go:86] 9 kube-system pods found
	I0731 18:12:39.464956   73696 system_pods.go:89] "coredns-7db6d8ff4d-2r7zb" [e7acc926-db53-4c1c-a62f-45e303c69fc7] Running
	I0731 18:12:39.464965   73696 system_pods.go:89] "coredns-7db6d8ff4d-756jj" [c91e86b1-8348-4dc7-9aa1-6909055c2dde] Running
	I0731 18:12:39.464972   73696 system_pods.go:89] "etcd-default-k8s-diff-port-094310" [f73c4fb4-b160-4e79-a0a4-8fba255cfb8c] Running
	I0731 18:12:39.464978   73696 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-094310" [820215b3-0eaa-46ca-bf0b-4fa73942160a] Running
	I0731 18:12:39.464986   73696 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-094310" [5ca0ed32-5439-49b5-9a85-1a4b1628371d] Running
	I0731 18:12:39.464992   73696 system_pods.go:89] "kube-proxy-4vvjq" [a8dd9db6-5f45-4c8d-a9d3-03ede1db07eb] Running
	I0731 18:12:39.464999   73696 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-094310" [11590718-4e74-46eb-be90-6405c3f1759f] Running
	I0731 18:12:39.465017   73696 system_pods.go:89] "metrics-server-569cc877fc-mskwc" [c57990b4-1d91-4764-9c33-2fd5f7d2f83b] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0731 18:12:39.465028   73696 system_pods.go:89] "storage-provisioner" [2e4c5a96-4dfc-4af6-8d3e-bab865644328] Running
	I0731 18:12:39.465041   73696 system_pods.go:126] duration metric: took 2.229422302s to wait for k8s-apps to be running ...
	I0731 18:12:39.465053   73696 system_svc.go:44] waiting for kubelet service to be running ....
	I0731 18:12:39.465111   73696 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0731 18:12:39.482063   73696 system_svc.go:56] duration metric: took 16.998965ms WaitForService to wait for kubelet
	I0731 18:12:39.482092   73696 kubeadm.go:582] duration metric: took 2.898066741s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0731 18:12:39.482138   73696 node_conditions.go:102] verifying NodePressure condition ...
	I0731 18:12:39.486728   73696 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0731 18:12:39.486752   73696 node_conditions.go:123] node cpu capacity is 2
	I0731 18:12:39.486764   73696 node_conditions.go:105] duration metric: took 4.617934ms to run NodePressure ...
	I0731 18:12:39.486777   73696 start.go:241] waiting for startup goroutines ...
	I0731 18:12:39.486787   73696 start.go:246] waiting for cluster config update ...
	I0731 18:12:39.486798   73696 start.go:255] writing updated cluster config ...
	I0731 18:12:39.487565   73696 ssh_runner.go:195] Run: rm -f paused
	I0731 18:12:39.539591   73696 start.go:600] kubectl: 1.30.3, cluster: 1.30.3 (minor skew: 0)
	I0731 18:12:39.541533   73696 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-094310" cluster and "default" namespace by default
	I0731 18:12:37.644379   73479 pod_ready.go:102] pod "metrics-server-78fcd8795b-27pkr" in "kube-system" namespace has status "Ready":"False"
	I0731 18:12:39.645608   73479 pod_ready.go:102] pod "metrics-server-78fcd8795b-27pkr" in "kube-system" namespace has status "Ready":"False"
	I0731 18:12:41.969949   73800 kubeadm.go:310] [init] Using Kubernetes version: v1.30.3
	I0731 18:12:41.970018   73800 kubeadm.go:310] [preflight] Running pre-flight checks
	I0731 18:12:41.970137   73800 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0731 18:12:41.970234   73800 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0731 18:12:41.970386   73800 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0731 18:12:41.970495   73800 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0731 18:12:41.972177   73800 out.go:204]   - Generating certificates and keys ...
	I0731 18:12:41.972244   73800 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0731 18:12:41.972314   73800 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0731 18:12:41.972403   73800 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0731 18:12:41.972480   73800 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0731 18:12:41.972538   73800 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0731 18:12:41.972588   73800 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0731 18:12:41.972654   73800 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0731 18:12:41.972748   73800 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0731 18:12:41.972859   73800 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0731 18:12:41.972982   73800 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0731 18:12:41.973027   73800 kubeadm.go:310] [certs] Using the existing "sa" key
	I0731 18:12:41.973082   73800 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0731 18:12:41.973152   73800 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0731 18:12:41.973205   73800 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0731 18:12:41.973252   73800 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0731 18:12:41.973323   73800 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0731 18:12:41.973387   73800 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0731 18:12:41.973456   73800 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0731 18:12:41.973553   73800 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0731 18:12:41.974927   73800 out.go:204]   - Booting up control plane ...
	I0731 18:12:41.975019   73800 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0731 18:12:41.975128   73800 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0731 18:12:41.975215   73800 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0731 18:12:41.975342   73800 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0731 18:12:41.975425   73800 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0731 18:12:41.975474   73800 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0731 18:12:41.975635   73800 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0731 18:12:41.975710   73800 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0731 18:12:41.975766   73800 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.001397088s
	I0731 18:12:41.975824   73800 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0731 18:12:41.975909   73800 kubeadm.go:310] [api-check] The API server is healthy after 5.001258426s
	I0731 18:12:41.976064   73800 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0731 18:12:41.976241   73800 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0731 18:12:41.976355   73800 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0731 18:12:41.976528   73800 kubeadm.go:310] [mark-control-plane] Marking the node embed-certs-436067 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0731 18:12:41.976605   73800 kubeadm.go:310] [bootstrap-token] Using token: m9csv8.j58cj919sgzkgy1k
	I0731 18:12:41.978880   73800 out.go:204]   - Configuring RBAC rules ...
	I0731 18:12:41.978976   73800 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0731 18:12:41.979087   73800 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0731 18:12:41.979277   73800 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0731 18:12:41.979441   73800 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0731 18:12:41.979622   73800 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0731 18:12:41.979708   73800 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0731 18:12:41.979835   73800 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0731 18:12:41.979875   73800 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0731 18:12:41.979918   73800 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0731 18:12:41.979924   73800 kubeadm.go:310] 
	I0731 18:12:41.979971   73800 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0731 18:12:41.979979   73800 kubeadm.go:310] 
	I0731 18:12:41.980058   73800 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0731 18:12:41.980067   73800 kubeadm.go:310] 
	I0731 18:12:41.980098   73800 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0731 18:12:41.980160   73800 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0731 18:12:41.980229   73800 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0731 18:12:41.980236   73800 kubeadm.go:310] 
	I0731 18:12:41.980300   73800 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0731 18:12:41.980311   73800 kubeadm.go:310] 
	I0731 18:12:41.980384   73800 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0731 18:12:41.980393   73800 kubeadm.go:310] 
	I0731 18:12:41.980446   73800 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0731 18:12:41.980548   73800 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0731 18:12:41.980644   73800 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0731 18:12:41.980653   73800 kubeadm.go:310] 
	I0731 18:12:41.980759   73800 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0731 18:12:41.980824   73800 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0731 18:12:41.980830   73800 kubeadm.go:310] 
	I0731 18:12:41.980896   73800 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token m9csv8.j58cj919sgzkgy1k \
	I0731 18:12:41.980984   73800 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:460b24b51d10a3760f2391542a8a89e401359854f437f922cbee3f2d8ed6c856 \
	I0731 18:12:41.981011   73800 kubeadm.go:310] 	--control-plane 
	I0731 18:12:41.981023   73800 kubeadm.go:310] 
	I0731 18:12:41.981093   73800 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0731 18:12:41.981099   73800 kubeadm.go:310] 
	I0731 18:12:41.981183   73800 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token m9csv8.j58cj919sgzkgy1k \
	I0731 18:12:41.981306   73800 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:460b24b51d10a3760f2391542a8a89e401359854f437f922cbee3f2d8ed6c856 
	I0731 18:12:41.981317   73800 cni.go:84] Creating CNI manager for ""
	I0731 18:12:41.981324   73800 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0731 18:12:41.982701   73800 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0731 18:12:41.983929   73800 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0731 18:12:41.995272   73800 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
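The 496-byte file copied here is the bridge CNI config that minikube generates when it detects the kvm2 driver together with the crio runtime (see the "Configuring bridge CNI" line above). To inspect what actually landed on the node, assuming the embed-certs-436067 profile name from this log, one could run:

    # Dump the bridge CNI config minikube wrote on the node.
    minikube -p embed-certs-436067 ssh "sudo cat /etc/cni/net.d/1-k8s.conflist"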
	I0731 18:12:42.014929   73800 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0731 18:12:42.014984   73800 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 18:12:42.015033   73800 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes embed-certs-436067 minikube.k8s.io/updated_at=2024_07_31T18_12_42_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=1d737dad7efa60c56d30434fcd857dd3b14c91d9 minikube.k8s.io/name=embed-certs-436067 minikube.k8s.io/primary=true
	I0731 18:12:42.164811   73800 ops.go:34] apiserver oom_adj: -16
	I0731 18:12:42.164934   73800 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 18:12:42.665108   73800 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 18:12:43.165818   73800 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 18:12:43.665733   73800 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 18:12:44.165074   73800 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 18:12:42.144896   73479 pod_ready.go:102] pod "metrics-server-78fcd8795b-27pkr" in "kube-system" namespace has status "Ready":"False"
	I0731 18:12:44.644077   73479 pod_ready.go:102] pod "metrics-server-78fcd8795b-27pkr" in "kube-system" namespace has status "Ready":"False"
	I0731 18:12:44.665477   73800 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 18:12:45.165127   73800 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 18:12:45.665440   73800 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 18:12:46.165555   73800 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 18:12:46.665998   73800 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 18:12:47.165829   73800 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 18:12:47.665704   73800 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 18:12:48.164973   73800 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 18:12:48.665549   73800 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 18:12:49.165210   73800 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 18:12:47.142947   73479 pod_ready.go:102] pod "metrics-server-78fcd8795b-27pkr" in "kube-system" namespace has status "Ready":"False"
	I0731 18:12:49.144015   73479 pod_ready.go:102] pod "metrics-server-78fcd8795b-27pkr" in "kube-system" namespace has status "Ready":"False"
	I0731 18:12:51.644495   73479 pod_ready.go:102] pod "metrics-server-78fcd8795b-27pkr" in "kube-system" namespace has status "Ready":"False"
	I0731 18:12:49.665500   73800 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 18:12:50.165567   73800 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 18:12:50.665547   73800 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 18:12:51.166002   73800 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 18:12:51.664981   73800 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 18:12:52.165135   73800 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 18:12:52.665927   73800 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 18:12:53.165045   73800 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 18:12:53.664981   73800 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 18:12:54.165715   73800 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 18:12:54.252373   73800 kubeadm.go:1113] duration metric: took 12.237438799s to wait for elevateKubeSystemPrivileges
	I0731 18:12:54.252415   73800 kubeadm.go:394] duration metric: took 5m6.689979758s to StartCluster
	I0731 18:12:54.252435   73800 settings.go:142] acquiring lock: {Name:mk8ac8269296bf0b887a00ff74caaad180005f89 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 18:12:54.252509   73800 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19349-8084/kubeconfig
	I0731 18:12:54.254175   73800 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19349-8084/kubeconfig: {Name:mkf2340bc1ada3c683f132aca37228d7588e1fdb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 18:12:54.254495   73800 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.50.86 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0731 18:12:54.254600   73800 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0731 18:12:54.254687   73800 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-436067"
	I0731 18:12:54.254721   73800 addons.go:234] Setting addon storage-provisioner=true in "embed-certs-436067"
	I0731 18:12:54.254724   73800 addons.go:69] Setting default-storageclass=true in profile "embed-certs-436067"
	W0731 18:12:54.254734   73800 addons.go:243] addon storage-provisioner should already be in state true
	I0731 18:12:54.254737   73800 config.go:182] Loaded profile config "embed-certs-436067": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0731 18:12:54.254743   73800 addons.go:69] Setting metrics-server=true in profile "embed-certs-436067"
	I0731 18:12:54.254760   73800 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-436067"
	I0731 18:12:54.254769   73800 host.go:66] Checking if "embed-certs-436067" exists ...
	I0731 18:12:54.254785   73800 addons.go:234] Setting addon metrics-server=true in "embed-certs-436067"
	W0731 18:12:54.254795   73800 addons.go:243] addon metrics-server should already be in state true
	I0731 18:12:54.254826   73800 host.go:66] Checking if "embed-certs-436067" exists ...
	I0731 18:12:54.255205   73800 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 18:12:54.255208   73800 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 18:12:54.255233   73800 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 18:12:54.255238   73800 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 18:12:54.255302   73800 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 18:12:54.255323   73800 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 18:12:54.256412   73800 out.go:177] * Verifying Kubernetes components...
	I0731 18:12:54.257653   73800 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0731 18:12:54.274456   73800 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34433
	I0731 18:12:54.274959   73800 main.go:141] libmachine: () Calling .GetVersion
	I0731 18:12:54.275532   73800 main.go:141] libmachine: Using API Version  1
	I0731 18:12:54.275554   73800 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 18:12:54.275828   73800 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44423
	I0731 18:12:54.275851   73800 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41513
	I0731 18:12:54.276001   73800 main.go:141] libmachine: () Calling .GetMachineName
	I0731 18:12:54.276152   73800 main.go:141] libmachine: () Calling .GetVersion
	I0731 18:12:54.276225   73800 main.go:141] libmachine: () Calling .GetVersion
	I0731 18:12:54.276498   73800 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 18:12:54.276534   73800 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 18:12:54.276592   73800 main.go:141] libmachine: Using API Version  1
	I0731 18:12:54.276606   73800 main.go:141] libmachine: Using API Version  1
	I0731 18:12:54.276613   73800 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 18:12:54.276616   73800 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 18:12:54.276954   73800 main.go:141] libmachine: () Calling .GetMachineName
	I0731 18:12:54.277055   73800 main.go:141] libmachine: () Calling .GetMachineName
	I0731 18:12:54.277103   73800 main.go:141] libmachine: (embed-certs-436067) Calling .GetState
	I0731 18:12:54.277663   73800 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 18:12:54.277704   73800 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 18:12:54.280559   73800 addons.go:234] Setting addon default-storageclass=true in "embed-certs-436067"
	W0731 18:12:54.280583   73800 addons.go:243] addon default-storageclass should already be in state true
	I0731 18:12:54.280615   73800 host.go:66] Checking if "embed-certs-436067" exists ...
	I0731 18:12:54.280969   73800 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 18:12:54.281000   73800 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 18:12:54.293211   73800 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38393
	I0731 18:12:54.293657   73800 main.go:141] libmachine: () Calling .GetVersion
	I0731 18:12:54.294121   73800 main.go:141] libmachine: Using API Version  1
	I0731 18:12:54.294142   73800 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 18:12:54.294444   73800 main.go:141] libmachine: () Calling .GetMachineName
	I0731 18:12:54.294642   73800 main.go:141] libmachine: (embed-certs-436067) Calling .GetState
	I0731 18:12:54.294724   73800 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38771
	I0731 18:12:54.295077   73800 main.go:141] libmachine: () Calling .GetVersion
	I0731 18:12:54.295590   73800 main.go:141] libmachine: Using API Version  1
	I0731 18:12:54.295609   73800 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 18:12:54.296058   73800 main.go:141] libmachine: () Calling .GetMachineName
	I0731 18:12:54.296285   73800 main.go:141] libmachine: (embed-certs-436067) Calling .GetState
	I0731 18:12:54.296377   73800 main.go:141] libmachine: (embed-certs-436067) Calling .DriverName
	I0731 18:12:54.298013   73800 main.go:141] libmachine: (embed-certs-436067) Calling .DriverName
	I0731 18:12:54.298541   73800 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0731 18:12:54.299454   73800 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0731 18:12:54.299489   73800 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0731 18:12:54.299501   73800 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0731 18:12:54.299515   73800 main.go:141] libmachine: (embed-certs-436067) Calling .GetSSHHostname
	I0731 18:12:54.300664   73800 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0731 18:12:54.300682   73800 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0731 18:12:54.300699   73800 main.go:141] libmachine: (embed-certs-436067) Calling .GetSSHHostname
	I0731 18:12:54.301018   73800 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36647
	I0731 18:12:54.301671   73800 main.go:141] libmachine: () Calling .GetVersion
	I0731 18:12:54.302210   73800 main.go:141] libmachine: Using API Version  1
	I0731 18:12:54.302229   73800 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 18:12:54.302731   73800 main.go:141] libmachine: () Calling .GetMachineName
	I0731 18:12:54.302857   73800 main.go:141] libmachine: (embed-certs-436067) DBG | domain embed-certs-436067 has defined MAC address 52:54:00:87:1e:25 in network mk-embed-certs-436067
	I0731 18:12:54.303479   73800 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 18:12:54.303503   73800 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 18:12:54.303710   73800 main.go:141] libmachine: (embed-certs-436067) Calling .GetSSHPort
	I0731 18:12:54.303744   73800 main.go:141] libmachine: (embed-certs-436067) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:1e:25", ip: ""} in network mk-embed-certs-436067: {Iface:virbr4 ExpiryTime:2024-07-31 19:07:32 +0000 UTC Type:0 Mac:52:54:00:87:1e:25 Iaid: IPaddr:192.168.50.86 Prefix:24 Hostname:embed-certs-436067 Clientid:01:52:54:00:87:1e:25}
	I0731 18:12:54.303768   73800 main.go:141] libmachine: (embed-certs-436067) DBG | domain embed-certs-436067 has defined IP address 192.168.50.86 and MAC address 52:54:00:87:1e:25 in network mk-embed-certs-436067
	I0731 18:12:54.303893   73800 main.go:141] libmachine: (embed-certs-436067) Calling .GetSSHKeyPath
	I0731 18:12:54.304071   73800 main.go:141] libmachine: (embed-certs-436067) Calling .GetSSHUsername
	I0731 18:12:54.304232   73800 sshutil.go:53] new ssh client: &{IP:192.168.50.86 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19349-8084/.minikube/machines/embed-certs-436067/id_rsa Username:docker}
	I0731 18:12:54.304601   73800 main.go:141] libmachine: (embed-certs-436067) DBG | domain embed-certs-436067 has defined MAC address 52:54:00:87:1e:25 in network mk-embed-certs-436067
	I0731 18:12:54.305040   73800 main.go:141] libmachine: (embed-certs-436067) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:1e:25", ip: ""} in network mk-embed-certs-436067: {Iface:virbr4 ExpiryTime:2024-07-31 19:07:32 +0000 UTC Type:0 Mac:52:54:00:87:1e:25 Iaid: IPaddr:192.168.50.86 Prefix:24 Hostname:embed-certs-436067 Clientid:01:52:54:00:87:1e:25}
	I0731 18:12:54.305063   73800 main.go:141] libmachine: (embed-certs-436067) DBG | domain embed-certs-436067 has defined IP address 192.168.50.86 and MAC address 52:54:00:87:1e:25 in network mk-embed-certs-436067
	I0731 18:12:54.305311   73800 main.go:141] libmachine: (embed-certs-436067) Calling .GetSSHPort
	I0731 18:12:54.305480   73800 main.go:141] libmachine: (embed-certs-436067) Calling .GetSSHKeyPath
	I0731 18:12:54.305594   73800 main.go:141] libmachine: (embed-certs-436067) Calling .GetSSHUsername
	I0731 18:12:54.305712   73800 sshutil.go:53] new ssh client: &{IP:192.168.50.86 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19349-8084/.minikube/machines/embed-certs-436067/id_rsa Username:docker}
	I0731 18:12:54.318168   73800 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33711
	I0731 18:12:54.318558   73800 main.go:141] libmachine: () Calling .GetVersion
	I0731 18:12:54.319015   73800 main.go:141] libmachine: Using API Version  1
	I0731 18:12:54.319033   73800 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 18:12:54.319355   73800 main.go:141] libmachine: () Calling .GetMachineName
	I0731 18:12:54.319552   73800 main.go:141] libmachine: (embed-certs-436067) Calling .GetState
	I0731 18:12:54.321369   73800 main.go:141] libmachine: (embed-certs-436067) Calling .DriverName
	I0731 18:12:54.321540   73800 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0731 18:12:54.321553   73800 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0731 18:12:54.321565   73800 main.go:141] libmachine: (embed-certs-436067) Calling .GetSSHHostname
	I0731 18:12:54.324613   73800 main.go:141] libmachine: (embed-certs-436067) DBG | domain embed-certs-436067 has defined MAC address 52:54:00:87:1e:25 in network mk-embed-certs-436067
	I0731 18:12:54.324994   73800 main.go:141] libmachine: (embed-certs-436067) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:1e:25", ip: ""} in network mk-embed-certs-436067: {Iface:virbr4 ExpiryTime:2024-07-31 19:07:32 +0000 UTC Type:0 Mac:52:54:00:87:1e:25 Iaid: IPaddr:192.168.50.86 Prefix:24 Hostname:embed-certs-436067 Clientid:01:52:54:00:87:1e:25}
	I0731 18:12:54.325011   73800 main.go:141] libmachine: (embed-certs-436067) DBG | domain embed-certs-436067 has defined IP address 192.168.50.86 and MAC address 52:54:00:87:1e:25 in network mk-embed-certs-436067
	I0731 18:12:54.325310   73800 main.go:141] libmachine: (embed-certs-436067) Calling .GetSSHPort
	I0731 18:12:54.325437   73800 main.go:141] libmachine: (embed-certs-436067) Calling .GetSSHKeyPath
	I0731 18:12:54.325571   73800 main.go:141] libmachine: (embed-certs-436067) Calling .GetSSHUsername
	I0731 18:12:54.325683   73800 sshutil.go:53] new ssh client: &{IP:192.168.50.86 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19349-8084/.minikube/machines/embed-certs-436067/id_rsa Username:docker}
	I0731 18:12:54.435485   73800 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0731 18:12:54.462541   73800 node_ready.go:35] waiting up to 6m0s for node "embed-certs-436067" to be "Ready" ...
	I0731 18:12:54.473787   73800 node_ready.go:49] node "embed-certs-436067" has status "Ready":"True"
	I0731 18:12:54.473810   73800 node_ready.go:38] duration metric: took 11.237808ms for node "embed-certs-436067" to be "Ready" ...
	I0731 18:12:54.473819   73800 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0731 18:12:54.485589   73800 pod_ready.go:78] waiting up to 6m0s for pod "etcd-embed-certs-436067" in "kube-system" namespace to be "Ready" ...
	I0731 18:12:54.507887   73800 pod_ready.go:92] pod "etcd-embed-certs-436067" in "kube-system" namespace has status "Ready":"True"
	I0731 18:12:54.507910   73800 pod_ready.go:81] duration metric: took 22.296215ms for pod "etcd-embed-certs-436067" in "kube-system" namespace to be "Ready" ...
	I0731 18:12:54.507921   73800 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-embed-certs-436067" in "kube-system" namespace to be "Ready" ...
	I0731 18:12:54.524721   73800 pod_ready.go:92] pod "kube-apiserver-embed-certs-436067" in "kube-system" namespace has status "Ready":"True"
	I0731 18:12:54.524742   73800 pod_ready.go:81] duration metric: took 16.814491ms for pod "kube-apiserver-embed-certs-436067" in "kube-system" namespace to be "Ready" ...
	I0731 18:12:54.524751   73800 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-436067" in "kube-system" namespace to be "Ready" ...
	I0731 18:12:54.536810   73800 pod_ready.go:92] pod "kube-controller-manager-embed-certs-436067" in "kube-system" namespace has status "Ready":"True"
	I0731 18:12:54.536837   73800 pod_ready.go:81] duration metric: took 12.078703ms for pod "kube-controller-manager-embed-certs-436067" in "kube-system" namespace to be "Ready" ...
	I0731 18:12:54.536848   73800 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-85spm" in "kube-system" namespace to be "Ready" ...
	I0731 18:12:54.552538   73800 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0731 18:12:54.579223   73800 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0731 18:12:54.579244   73800 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0731 18:12:54.596087   73800 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0731 18:12:54.617180   73800 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0731 18:12:54.617209   73800 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0731 18:12:54.679879   73800 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0731 18:12:54.679908   73800 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0731 18:12:54.775272   73800 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0731 18:12:55.199299   73800 main.go:141] libmachine: Making call to close driver server
	I0731 18:12:55.199335   73800 main.go:141] libmachine: (embed-certs-436067) Calling .Close
	I0731 18:12:55.199342   73800 main.go:141] libmachine: Making call to close driver server
	I0731 18:12:55.199361   73800 main.go:141] libmachine: (embed-certs-436067) Calling .Close
	I0731 18:12:55.199618   73800 main.go:141] libmachine: Successfully made call to close driver server
	I0731 18:12:55.199666   73800 main.go:141] libmachine: Successfully made call to close driver server
	I0731 18:12:55.199678   73800 main.go:141] libmachine: (embed-certs-436067) DBG | Closing plugin on server side
	I0731 18:12:55.199634   73800 main.go:141] libmachine: (embed-certs-436067) DBG | Closing plugin on server side
	I0731 18:12:55.199685   73800 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 18:12:55.199710   73800 main.go:141] libmachine: Making call to close driver server
	I0731 18:12:55.199689   73800 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 18:12:55.199717   73800 main.go:141] libmachine: (embed-certs-436067) Calling .Close
	I0731 18:12:55.199726   73800 main.go:141] libmachine: Making call to close driver server
	I0731 18:12:55.199735   73800 main.go:141] libmachine: (embed-certs-436067) Calling .Close
	I0731 18:12:55.200002   73800 main.go:141] libmachine: Successfully made call to close driver server
	I0731 18:12:55.200016   73800 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 18:12:55.200079   73800 main.go:141] libmachine: (embed-certs-436067) DBG | Closing plugin on server side
	I0731 18:12:55.200107   73800 main.go:141] libmachine: Successfully made call to close driver server
	I0731 18:12:55.200120   73800 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 18:12:55.227472   73800 main.go:141] libmachine: Making call to close driver server
	I0731 18:12:55.227497   73800 main.go:141] libmachine: (embed-certs-436067) Calling .Close
	I0731 18:12:55.227792   73800 main.go:141] libmachine: Successfully made call to close driver server
	I0731 18:12:55.227811   73800 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 18:12:55.712134   73800 main.go:141] libmachine: Making call to close driver server
	I0731 18:12:55.712159   73800 main.go:141] libmachine: (embed-certs-436067) Calling .Close
	I0731 18:12:55.712516   73800 main.go:141] libmachine: Successfully made call to close driver server
	I0731 18:12:55.712568   73800 main.go:141] libmachine: (embed-certs-436067) DBG | Closing plugin on server side
	I0731 18:12:55.712574   73800 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 18:12:55.712596   73800 main.go:141] libmachine: Making call to close driver server
	I0731 18:12:55.712605   73800 main.go:141] libmachine: (embed-certs-436067) Calling .Close
	I0731 18:12:55.712851   73800 main.go:141] libmachine: Successfully made call to close driver server
	I0731 18:12:55.712868   73800 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 18:12:55.712867   73800 main.go:141] libmachine: (embed-certs-436067) DBG | Closing plugin on server side
	I0731 18:12:55.712877   73800 addons.go:475] Verifying addon metrics-server=true in "embed-certs-436067"
	I0731 18:12:55.714432   73800 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0731 18:12:54.143455   73479 pod_ready.go:102] pod "metrics-server-78fcd8795b-27pkr" in "kube-system" namespace has status "Ready":"False"
	I0731 18:12:56.144177   73479 pod_ready.go:102] pod "metrics-server-78fcd8795b-27pkr" in "kube-system" namespace has status "Ready":"False"
	I0731 18:12:55.715903   73800 addons.go:510] duration metric: took 1.461304856s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I0731 18:12:56.542100   73800 pod_ready.go:92] pod "kube-proxy-85spm" in "kube-system" namespace has status "Ready":"True"
	I0731 18:12:56.542122   73800 pod_ready.go:81] duration metric: took 2.005265959s for pod "kube-proxy-85spm" in "kube-system" namespace to be "Ready" ...
	I0731 18:12:56.542135   73800 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-embed-certs-436067" in "kube-system" namespace to be "Ready" ...
	I0731 18:12:56.553810   73800 pod_ready.go:92] pod "kube-scheduler-embed-certs-436067" in "kube-system" namespace has status "Ready":"True"
	I0731 18:12:56.553831   73800 pod_ready.go:81] duration metric: took 11.689814ms for pod "kube-scheduler-embed-certs-436067" in "kube-system" namespace to be "Ready" ...
	I0731 18:12:56.553840   73800 pod_ready.go:38] duration metric: took 2.080010607s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
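
The lines above show the "extra waiting" phase: each system-critical pod is polled until its Ready condition is True, then the per-pod and overall durations are logged. The sketch below illustrates that kind of Ready poll with client-go; it is not minikube's actual pod_ready.go code, and the kubeconfig path and pod name are placeholders taken from this log for illustration.

```go
// Illustrative sketch (not minikube's pod_ready.go): poll a pod's Ready
// condition until it is True or the timeout expires.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func waitPodReady(cs *kubernetes.Clientset, ns, name string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		pod, err := cs.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
		if err == nil {
			for _, c := range pod.Status.Conditions {
				if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
					return nil
				}
			}
		}
		time.Sleep(2 * time.Second)
	}
	return fmt.Errorf("pod %s/%s not Ready within %v", ns, name, timeout)
}

func main() {
	// Placeholder kubeconfig path and pod name, copied from the log above.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	if err := waitPodReady(cs, "kube-system", "etcd-embed-certs-436067", 6*time.Minute); err != nil {
		panic(err)
	}
	fmt.Println("pod is Ready")
}
```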
	I0731 18:12:56.553853   73800 api_server.go:52] waiting for apiserver process to appear ...
	I0731 18:12:56.553899   73800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:12:56.568301   73800 api_server.go:72] duration metric: took 2.313759916s to wait for apiserver process to appear ...
	I0731 18:12:56.568327   73800 api_server.go:88] waiting for apiserver healthz status ...
	I0731 18:12:56.568345   73800 api_server.go:253] Checking apiserver healthz at https://192.168.50.86:8443/healthz ...
	I0731 18:12:56.573861   73800 api_server.go:279] https://192.168.50.86:8443/healthz returned 200:
	ok
	I0731 18:12:56.575494   73800 api_server.go:141] control plane version: v1.30.3
	I0731 18:12:56.575513   73800 api_server.go:131] duration metric: took 7.1795ms to wait for apiserver health ...
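
After the pods, the apiserver's /healthz endpoint is polled over HTTPS until it answers 200 "ok", and only then is the control-plane version read. A minimal Go sketch of such a poller follows; the URL is the one from the log, and InsecureSkipVerify is a simplification to keep the example self-contained (a real client would trust the cluster CA and present client certificates).

```go
// Minimal /healthz poller sketch. The endpoint comes from the log above;
// skipping TLS verification is only to keep the example self-contained.
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func waitHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				fmt.Printf("%s returned 200: %s\n", url, body)
				return nil
			}
		}
		time.Sleep(time.Second)
	}
	return fmt.Errorf("apiserver healthz did not return 200 within %v", timeout)
}

func main() {
	if err := waitHealthz("https://192.168.50.86:8443/healthz", 2*time.Minute); err != nil {
		panic(err)
	}
}
```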
	I0731 18:12:56.575520   73800 system_pods.go:43] waiting for kube-system pods to appear ...
	I0731 18:12:56.669169   73800 system_pods.go:59] 9 kube-system pods found
	I0731 18:12:56.669197   73800 system_pods.go:61] "coredns-7db6d8ff4d-fqkfd" [2e5a67a3-6f2b-43a5-8b94-cf48202c5958] Running
	I0731 18:12:56.669202   73800 system_pods.go:61] "coredns-7db6d8ff4d-qpb62" [e57f157b-01b5-42ec-9630-b9b5ae94fe5d] Running
	I0731 18:12:56.669206   73800 system_pods.go:61] "etcd-embed-certs-436067" [d0439817-b307-4673-8a64-34aa31266fbb] Running
	I0731 18:12:56.669210   73800 system_pods.go:61] "kube-apiserver-embed-certs-436067" [c2409e33-d42b-4132-8f63-197c4d445704] Running
	I0731 18:12:56.669214   73800 system_pods.go:61] "kube-controller-manager-embed-certs-436067" [dfea1f44-48a3-4a3c-8a27-be6f01f58add] Running
	I0731 18:12:56.669218   73800 system_pods.go:61] "kube-proxy-85spm" [eecb399a-0365-436d-8f14-a7853a5a2ea3] Running
	I0731 18:12:56.669221   73800 system_pods.go:61] "kube-scheduler-embed-certs-436067" [e406617d-adf7-4baa-9a8a-6fd92847a12f] Running
	I0731 18:12:56.669228   73800 system_pods.go:61] "metrics-server-569cc877fc-pgf6q" [b799fa04-0bf7-4914-9738-964b825577b5] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0731 18:12:56.669231   73800 system_pods.go:61] "storage-provisioner" [a7d95d32-1002-4fba-b0fd-555b872efa1f] Running
	I0731 18:12:56.669240   73800 system_pods.go:74] duration metric: took 93.714593ms to wait for pod list to return data ...
	I0731 18:12:56.669247   73800 default_sa.go:34] waiting for default service account to be created ...
	I0731 18:12:56.866494   73800 default_sa.go:45] found service account: "default"
	I0731 18:12:56.866521   73800 default_sa.go:55] duration metric: took 197.264891ms for default service account to be created ...
	I0731 18:12:56.866532   73800 system_pods.go:116] waiting for k8s-apps to be running ...
	I0731 18:12:57.068903   73800 system_pods.go:86] 9 kube-system pods found
	I0731 18:12:57.068930   73800 system_pods.go:89] "coredns-7db6d8ff4d-fqkfd" [2e5a67a3-6f2b-43a5-8b94-cf48202c5958] Running
	I0731 18:12:57.068936   73800 system_pods.go:89] "coredns-7db6d8ff4d-qpb62" [e57f157b-01b5-42ec-9630-b9b5ae94fe5d] Running
	I0731 18:12:57.068940   73800 system_pods.go:89] "etcd-embed-certs-436067" [d0439817-b307-4673-8a64-34aa31266fbb] Running
	I0731 18:12:57.068944   73800 system_pods.go:89] "kube-apiserver-embed-certs-436067" [c2409e33-d42b-4132-8f63-197c4d445704] Running
	I0731 18:12:57.068948   73800 system_pods.go:89] "kube-controller-manager-embed-certs-436067" [dfea1f44-48a3-4a3c-8a27-be6f01f58add] Running
	I0731 18:12:57.068951   73800 system_pods.go:89] "kube-proxy-85spm" [eecb399a-0365-436d-8f14-a7853a5a2ea3] Running
	I0731 18:12:57.068955   73800 system_pods.go:89] "kube-scheduler-embed-certs-436067" [e406617d-adf7-4baa-9a8a-6fd92847a12f] Running
	I0731 18:12:57.068961   73800 system_pods.go:89] "metrics-server-569cc877fc-pgf6q" [b799fa04-0bf7-4914-9738-964b825577b5] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0731 18:12:57.068965   73800 system_pods.go:89] "storage-provisioner" [a7d95d32-1002-4fba-b0fd-555b872efa1f] Running
	I0731 18:12:57.068972   73800 system_pods.go:126] duration metric: took 202.435205ms to wait for k8s-apps to be running ...
	I0731 18:12:57.068980   73800 system_svc.go:44] waiting for kubelet service to be running ....
	I0731 18:12:57.069018   73800 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0731 18:12:57.083728   73800 system_svc.go:56] duration metric: took 14.739831ms WaitForService to wait for kubelet
	I0731 18:12:57.083756   73800 kubeadm.go:582] duration metric: took 2.829227102s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0731 18:12:57.083782   73800 node_conditions.go:102] verifying NodePressure condition ...
	I0731 18:12:57.266463   73800 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0731 18:12:57.266486   73800 node_conditions.go:123] node cpu capacity is 2
	I0731 18:12:57.266495   73800 node_conditions.go:105] duration metric: took 182.707869ms to run NodePressure ...
	I0731 18:12:57.266505   73800 start.go:241] waiting for startup goroutines ...
	I0731 18:12:57.266512   73800 start.go:246] waiting for cluster config update ...
	I0731 18:12:57.266521   73800 start.go:255] writing updated cluster config ...
	I0731 18:12:57.266767   73800 ssh_runner.go:195] Run: rm -f paused
	I0731 18:12:57.313723   73800 start.go:600] kubectl: 1.30.3, cluster: 1.30.3 (minor skew: 0)
	I0731 18:12:57.315966   73800 out.go:177] * Done! kubectl is now configured to use "embed-certs-436067" cluster and "default" namespace by default
	I0731 18:12:58.652853   74203 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0731 18:12:58.653480   74203 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0731 18:12:58.653735   74203 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0731 18:12:58.643237   73479 pod_ready.go:102] pod "metrics-server-78fcd8795b-27pkr" in "kube-system" namespace has status "Ready":"False"
	I0731 18:13:01.143274   73479 pod_ready.go:102] pod "metrics-server-78fcd8795b-27pkr" in "kube-system" namespace has status "Ready":"False"
	I0731 18:13:01.643357   73479 pod_ready.go:81] duration metric: took 4m0.006506347s for pod "metrics-server-78fcd8795b-27pkr" in "kube-system" namespace to be "Ready" ...
	E0731 18:13:01.643382   73479 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0731 18:13:01.643388   73479 pod_ready.go:38] duration metric: took 4m7.418860701s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0731 18:13:01.643402   73479 api_server.go:52] waiting for apiserver process to appear ...
	I0731 18:13:01.643428   73479 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 18:13:01.643481   73479 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 18:13:01.692071   73479 cri.go:89] found id: "895465d024797e36349af3a0f7cd0689ec7e813d811a3205a301baae081986e0"
	I0731 18:13:01.692092   73479 cri.go:89] found id: ""
	I0731 18:13:01.692101   73479 logs.go:276] 1 containers: [895465d024797e36349af3a0f7cd0689ec7e813d811a3205a301baae081986e0]
	I0731 18:13:01.692159   73479 ssh_runner.go:195] Run: which crictl
	I0731 18:13:01.697266   73479 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 18:13:01.697356   73479 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 18:13:01.736299   73479 cri.go:89] found id: "65ef90d7b082adf059c9973e7f1dfde4770ed9718d440c8e0d525d427b16f645"
	I0731 18:13:01.736350   73479 cri.go:89] found id: ""
	I0731 18:13:01.736360   73479 logs.go:276] 1 containers: [65ef90d7b082adf059c9973e7f1dfde4770ed9718d440c8e0d525d427b16f645]
	I0731 18:13:01.736417   73479 ssh_runner.go:195] Run: which crictl
	I0731 18:13:01.740672   73479 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 18:13:01.740733   73479 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 18:13:01.774782   73479 cri.go:89] found id: "f043eb2392c22ee26a4729969283f278a4dc7ff4784addf7139958beb2cba855"
	I0731 18:13:01.774816   73479 cri.go:89] found id: ""
	I0731 18:13:01.774826   73479 logs.go:276] 1 containers: [f043eb2392c22ee26a4729969283f278a4dc7ff4784addf7139958beb2cba855]
	I0731 18:13:01.774893   73479 ssh_runner.go:195] Run: which crictl
	I0731 18:13:01.778542   73479 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 18:13:01.778618   73479 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 18:13:01.818749   73479 cri.go:89] found id: "ed1c40e21d8aa2b6321516625802154255f5eda1642411e377150b8df1c88bd2"
	I0731 18:13:01.818769   73479 cri.go:89] found id: ""
	I0731 18:13:01.818776   73479 logs.go:276] 1 containers: [ed1c40e21d8aa2b6321516625802154255f5eda1642411e377150b8df1c88bd2]
	I0731 18:13:01.818828   73479 ssh_runner.go:195] Run: which crictl
	I0731 18:13:01.827176   73479 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 18:13:01.827248   73479 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 18:13:01.860700   73479 cri.go:89] found id: "57bdb8e09be40a0925d7a86eb613ef8a0455415666562743e36e6800b8636847"
	I0731 18:13:01.860730   73479 cri.go:89] found id: ""
	I0731 18:13:01.860739   73479 logs.go:276] 1 containers: [57bdb8e09be40a0925d7a86eb613ef8a0455415666562743e36e6800b8636847]
	I0731 18:13:01.860825   73479 ssh_runner.go:195] Run: which crictl
	I0731 18:13:03.654494   74203 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0731 18:13:03.654747   74203 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0731 18:13:01.864629   73479 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 18:13:01.864702   73479 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 18:13:01.899293   73479 cri.go:89] found id: "ee75a53c57652705a335378f60ff96b2fce4543143edc35249ac027c7c6e4a2c"
	I0731 18:13:01.899338   73479 cri.go:89] found id: ""
	I0731 18:13:01.899347   73479 logs.go:276] 1 containers: [ee75a53c57652705a335378f60ff96b2fce4543143edc35249ac027c7c6e4a2c]
	I0731 18:13:01.899406   73479 ssh_runner.go:195] Run: which crictl
	I0731 18:13:01.903202   73479 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 18:13:01.903272   73479 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 18:13:01.934472   73479 cri.go:89] found id: ""
	I0731 18:13:01.934505   73479 logs.go:276] 0 containers: []
	W0731 18:13:01.934516   73479 logs.go:278] No container was found matching "kindnet"
	I0731 18:13:01.934523   73479 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0731 18:13:01.934588   73479 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0731 18:13:01.967244   73479 cri.go:89] found id: "6f311536202ba854461dd5abcc13c5b8d006399426b52e0c8d10bc41f06826df"
	I0731 18:13:01.967271   73479 cri.go:89] found id: "9ea2bc105f57ae4d05e4535f2851a35b9014c7574a63fef533fa0c487002941c"
	I0731 18:13:01.967276   73479 cri.go:89] found id: ""
	I0731 18:13:01.967285   73479 logs.go:276] 2 containers: [6f311536202ba854461dd5abcc13c5b8d006399426b52e0c8d10bc41f06826df 9ea2bc105f57ae4d05e4535f2851a35b9014c7574a63fef533fa0c487002941c]
	I0731 18:13:01.967349   73479 ssh_runner.go:195] Run: which crictl
	I0731 18:13:01.971167   73479 ssh_runner.go:195] Run: which crictl
	I0731 18:13:01.975648   73479 logs.go:123] Gathering logs for kubelet ...
	I0731 18:13:01.975670   73479 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 18:13:02.031430   73479 logs.go:123] Gathering logs for describe nodes ...
	I0731 18:13:02.031472   73479 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 18:13:02.158774   73479 logs.go:123] Gathering logs for kube-apiserver [895465d024797e36349af3a0f7cd0689ec7e813d811a3205a301baae081986e0] ...
	I0731 18:13:02.158803   73479 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 895465d024797e36349af3a0f7cd0689ec7e813d811a3205a301baae081986e0"
	I0731 18:13:02.199495   73479 logs.go:123] Gathering logs for kube-scheduler [ed1c40e21d8aa2b6321516625802154255f5eda1642411e377150b8df1c88bd2] ...
	I0731 18:13:02.199521   73479 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ed1c40e21d8aa2b6321516625802154255f5eda1642411e377150b8df1c88bd2"
	I0731 18:13:02.232285   73479 logs.go:123] Gathering logs for kube-proxy [57bdb8e09be40a0925d7a86eb613ef8a0455415666562743e36e6800b8636847] ...
	I0731 18:13:02.232327   73479 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 57bdb8e09be40a0925d7a86eb613ef8a0455415666562743e36e6800b8636847"
	I0731 18:13:02.272360   73479 logs.go:123] Gathering logs for storage-provisioner [9ea2bc105f57ae4d05e4535f2851a35b9014c7574a63fef533fa0c487002941c] ...
	I0731 18:13:02.272389   73479 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9ea2bc105f57ae4d05e4535f2851a35b9014c7574a63fef533fa0c487002941c"
	I0731 18:13:02.305902   73479 logs.go:123] Gathering logs for dmesg ...
	I0731 18:13:02.305931   73479 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 18:13:02.319954   73479 logs.go:123] Gathering logs for etcd [65ef90d7b082adf059c9973e7f1dfde4770ed9718d440c8e0d525d427b16f645] ...
	I0731 18:13:02.319984   73479 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 65ef90d7b082adf059c9973e7f1dfde4770ed9718d440c8e0d525d427b16f645"
	I0731 18:13:02.361657   73479 logs.go:123] Gathering logs for coredns [f043eb2392c22ee26a4729969283f278a4dc7ff4784addf7139958beb2cba855] ...
	I0731 18:13:02.361685   73479 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f043eb2392c22ee26a4729969283f278a4dc7ff4784addf7139958beb2cba855"
	I0731 18:13:02.395696   73479 logs.go:123] Gathering logs for kube-controller-manager [ee75a53c57652705a335378f60ff96b2fce4543143edc35249ac027c7c6e4a2c] ...
	I0731 18:13:02.395724   73479 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ee75a53c57652705a335378f60ff96b2fce4543143edc35249ac027c7c6e4a2c"
	I0731 18:13:02.444671   73479 logs.go:123] Gathering logs for storage-provisioner [6f311536202ba854461dd5abcc13c5b8d006399426b52e0c8d10bc41f06826df] ...
	I0731 18:13:02.444704   73479 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6f311536202ba854461dd5abcc13c5b8d006399426b52e0c8d10bc41f06826df"
	I0731 18:13:02.480666   73479 logs.go:123] Gathering logs for CRI-O ...
	I0731 18:13:02.480693   73479 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 18:13:02.967693   73479 logs.go:123] Gathering logs for container status ...
	I0731 18:13:02.967741   73479 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
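
Each "Gathering logs for ..." step above shells out to journalctl, dmesg, or crictl with a fixed tail limit and captures the output for the report. The sketch below runs the same style of command locally with os/exec as an illustration of the pattern; minikube's ssh_runner executes these commands over SSH inside the guest VM, which is omitted here.

```go
// Illustrative local equivalent of the log-gathering commands above:
// run a shell command with a tail limit and capture combined output.
// minikube itself runs these over SSH on the node, not locally.
package main

import (
	"fmt"
	"os/exec"
)

func gather(name, cmd string) (string, error) {
	out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
	if err != nil {
		return "", fmt.Errorf("gathering %s logs: %v", name, err)
	}
	return string(out), nil
}

func main() {
	cmds := map[string]string{
		"kubelet": "sudo journalctl -u kubelet -n 400",
		"CRI-O":   "sudo journalctl -u crio -n 400",
		"dmesg":   "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400",
	}
	for name, cmd := range cmds {
		out, err := gather(name, cmd)
		if err != nil {
			fmt.Println(err)
			continue
		}
		fmt.Printf("=== %s (%d bytes captured) ===\n", name, len(out))
	}
}
```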
	I0731 18:13:05.512381   73479 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:13:05.528582   73479 api_server.go:72] duration metric: took 4m19.030809429s to wait for apiserver process to appear ...
	I0731 18:13:05.528612   73479 api_server.go:88] waiting for apiserver healthz status ...
	I0731 18:13:05.528652   73479 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 18:13:05.528730   73479 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 18:13:05.567984   73479 cri.go:89] found id: "895465d024797e36349af3a0f7cd0689ec7e813d811a3205a301baae081986e0"
	I0731 18:13:05.568004   73479 cri.go:89] found id: ""
	I0731 18:13:05.568013   73479 logs.go:276] 1 containers: [895465d024797e36349af3a0f7cd0689ec7e813d811a3205a301baae081986e0]
	I0731 18:13:05.568073   73479 ssh_runner.go:195] Run: which crictl
	I0731 18:13:05.571946   73479 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 18:13:05.572003   73479 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 18:13:05.620468   73479 cri.go:89] found id: "65ef90d7b082adf059c9973e7f1dfde4770ed9718d440c8e0d525d427b16f645"
	I0731 18:13:05.620495   73479 cri.go:89] found id: ""
	I0731 18:13:05.620504   73479 logs.go:276] 1 containers: [65ef90d7b082adf059c9973e7f1dfde4770ed9718d440c8e0d525d427b16f645]
	I0731 18:13:05.620571   73479 ssh_runner.go:195] Run: which crictl
	I0731 18:13:05.624599   73479 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 18:13:05.624653   73479 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 18:13:05.663717   73479 cri.go:89] found id: "f043eb2392c22ee26a4729969283f278a4dc7ff4784addf7139958beb2cba855"
	I0731 18:13:05.663740   73479 cri.go:89] found id: ""
	I0731 18:13:05.663748   73479 logs.go:276] 1 containers: [f043eb2392c22ee26a4729969283f278a4dc7ff4784addf7139958beb2cba855]
	I0731 18:13:05.663803   73479 ssh_runner.go:195] Run: which crictl
	I0731 18:13:05.667601   73479 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 18:13:05.667672   73479 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 18:13:05.699764   73479 cri.go:89] found id: "ed1c40e21d8aa2b6321516625802154255f5eda1642411e377150b8df1c88bd2"
	I0731 18:13:05.699791   73479 cri.go:89] found id: ""
	I0731 18:13:05.699801   73479 logs.go:276] 1 containers: [ed1c40e21d8aa2b6321516625802154255f5eda1642411e377150b8df1c88bd2]
	I0731 18:13:05.699858   73479 ssh_runner.go:195] Run: which crictl
	I0731 18:13:05.703965   73479 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 18:13:05.704036   73479 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 18:13:05.739460   73479 cri.go:89] found id: "57bdb8e09be40a0925d7a86eb613ef8a0455415666562743e36e6800b8636847"
	I0731 18:13:05.739487   73479 cri.go:89] found id: ""
	I0731 18:13:05.739496   73479 logs.go:276] 1 containers: [57bdb8e09be40a0925d7a86eb613ef8a0455415666562743e36e6800b8636847]
	I0731 18:13:05.739558   73479 ssh_runner.go:195] Run: which crictl
	I0731 18:13:05.743180   73479 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 18:13:05.743232   73479 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 18:13:05.777369   73479 cri.go:89] found id: "ee75a53c57652705a335378f60ff96b2fce4543143edc35249ac027c7c6e4a2c"
	I0731 18:13:05.777390   73479 cri.go:89] found id: ""
	I0731 18:13:05.777397   73479 logs.go:276] 1 containers: [ee75a53c57652705a335378f60ff96b2fce4543143edc35249ac027c7c6e4a2c]
	I0731 18:13:05.777449   73479 ssh_runner.go:195] Run: which crictl
	I0731 18:13:05.781388   73479 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 18:13:05.781435   73479 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 18:13:05.825567   73479 cri.go:89] found id: ""
	I0731 18:13:05.825599   73479 logs.go:276] 0 containers: []
	W0731 18:13:05.825610   73479 logs.go:278] No container was found matching "kindnet"
	I0731 18:13:05.825617   73479 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0731 18:13:05.825689   73479 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0731 18:13:05.859538   73479 cri.go:89] found id: "6f311536202ba854461dd5abcc13c5b8d006399426b52e0c8d10bc41f06826df"
	I0731 18:13:05.859570   73479 cri.go:89] found id: "9ea2bc105f57ae4d05e4535f2851a35b9014c7574a63fef533fa0c487002941c"
	I0731 18:13:05.859577   73479 cri.go:89] found id: ""
	I0731 18:13:05.859586   73479 logs.go:276] 2 containers: [6f311536202ba854461dd5abcc13c5b8d006399426b52e0c8d10bc41f06826df 9ea2bc105f57ae4d05e4535f2851a35b9014c7574a63fef533fa0c487002941c]
	I0731 18:13:05.859657   73479 ssh_runner.go:195] Run: which crictl
	I0731 18:13:05.863513   73479 ssh_runner.go:195] Run: which crictl
	I0731 18:13:05.866989   73479 logs.go:123] Gathering logs for CRI-O ...
	I0731 18:13:05.867011   73479 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 18:13:06.314116   73479 logs.go:123] Gathering logs for container status ...
	I0731 18:13:06.314166   73479 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 18:13:06.357738   73479 logs.go:123] Gathering logs for kubelet ...
	I0731 18:13:06.357764   73479 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 18:13:06.407330   73479 logs.go:123] Gathering logs for describe nodes ...
	I0731 18:13:06.407365   73479 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 18:13:06.508580   73479 logs.go:123] Gathering logs for kube-apiserver [895465d024797e36349af3a0f7cd0689ec7e813d811a3205a301baae081986e0] ...
	I0731 18:13:06.508616   73479 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 895465d024797e36349af3a0f7cd0689ec7e813d811a3205a301baae081986e0"
	I0731 18:13:06.550032   73479 logs.go:123] Gathering logs for coredns [f043eb2392c22ee26a4729969283f278a4dc7ff4784addf7139958beb2cba855] ...
	I0731 18:13:06.550071   73479 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f043eb2392c22ee26a4729969283f278a4dc7ff4784addf7139958beb2cba855"
	I0731 18:13:06.588519   73479 logs.go:123] Gathering logs for kube-proxy [57bdb8e09be40a0925d7a86eb613ef8a0455415666562743e36e6800b8636847] ...
	I0731 18:13:06.588548   73479 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 57bdb8e09be40a0925d7a86eb613ef8a0455415666562743e36e6800b8636847"
	I0731 18:13:06.622872   73479 logs.go:123] Gathering logs for storage-provisioner [6f311536202ba854461dd5abcc13c5b8d006399426b52e0c8d10bc41f06826df] ...
	I0731 18:13:06.622901   73479 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6f311536202ba854461dd5abcc13c5b8d006399426b52e0c8d10bc41f06826df"
	I0731 18:13:06.666694   73479 logs.go:123] Gathering logs for dmesg ...
	I0731 18:13:06.666721   73479 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 18:13:06.680326   73479 logs.go:123] Gathering logs for etcd [65ef90d7b082adf059c9973e7f1dfde4770ed9718d440c8e0d525d427b16f645] ...
	I0731 18:13:06.680355   73479 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 65ef90d7b082adf059c9973e7f1dfde4770ed9718d440c8e0d525d427b16f645"
	I0731 18:13:06.723966   73479 logs.go:123] Gathering logs for kube-scheduler [ed1c40e21d8aa2b6321516625802154255f5eda1642411e377150b8df1c88bd2] ...
	I0731 18:13:06.723997   73479 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ed1c40e21d8aa2b6321516625802154255f5eda1642411e377150b8df1c88bd2"
	I0731 18:13:06.760873   73479 logs.go:123] Gathering logs for kube-controller-manager [ee75a53c57652705a335378f60ff96b2fce4543143edc35249ac027c7c6e4a2c] ...
	I0731 18:13:06.760901   73479 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ee75a53c57652705a335378f60ff96b2fce4543143edc35249ac027c7c6e4a2c"
	I0731 18:13:06.809348   73479 logs.go:123] Gathering logs for storage-provisioner [9ea2bc105f57ae4d05e4535f2851a35b9014c7574a63fef533fa0c487002941c] ...
	I0731 18:13:06.809387   73479 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9ea2bc105f57ae4d05e4535f2851a35b9014c7574a63fef533fa0c487002941c"
	I0731 18:13:09.341394   73479 api_server.go:253] Checking apiserver healthz at https://192.168.61.126:8443/healthz ...
	I0731 18:13:09.346642   73479 api_server.go:279] https://192.168.61.126:8443/healthz returned 200:
	ok
	I0731 18:13:09.347803   73479 api_server.go:141] control plane version: v1.31.0-beta.0
	I0731 18:13:09.347821   73479 api_server.go:131] duration metric: took 3.819202346s to wait for apiserver health ...
	I0731 18:13:09.347828   73479 system_pods.go:43] waiting for kube-system pods to appear ...
	I0731 18:13:09.347850   73479 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 18:13:09.347903   73479 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 18:13:09.391857   73479 cri.go:89] found id: "895465d024797e36349af3a0f7cd0689ec7e813d811a3205a301baae081986e0"
	I0731 18:13:09.391885   73479 cri.go:89] found id: ""
	I0731 18:13:09.391895   73479 logs.go:276] 1 containers: [895465d024797e36349af3a0f7cd0689ec7e813d811a3205a301baae081986e0]
	I0731 18:13:09.391956   73479 ssh_runner.go:195] Run: which crictl
	I0731 18:13:09.395723   73479 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 18:13:09.395789   73479 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 18:13:09.430108   73479 cri.go:89] found id: "65ef90d7b082adf059c9973e7f1dfde4770ed9718d440c8e0d525d427b16f645"
	I0731 18:13:09.430128   73479 cri.go:89] found id: ""
	I0731 18:13:09.430135   73479 logs.go:276] 1 containers: [65ef90d7b082adf059c9973e7f1dfde4770ed9718d440c8e0d525d427b16f645]
	I0731 18:13:09.430180   73479 ssh_runner.go:195] Run: which crictl
	I0731 18:13:09.433933   73479 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 18:13:09.434037   73479 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 18:13:09.471630   73479 cri.go:89] found id: "f043eb2392c22ee26a4729969283f278a4dc7ff4784addf7139958beb2cba855"
	I0731 18:13:09.471655   73479 cri.go:89] found id: ""
	I0731 18:13:09.471663   73479 logs.go:276] 1 containers: [f043eb2392c22ee26a4729969283f278a4dc7ff4784addf7139958beb2cba855]
	I0731 18:13:09.471709   73479 ssh_runner.go:195] Run: which crictl
	I0731 18:13:09.476432   73479 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 18:13:09.476496   73479 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 18:13:09.519568   73479 cri.go:89] found id: "ed1c40e21d8aa2b6321516625802154255f5eda1642411e377150b8df1c88bd2"
	I0731 18:13:09.519590   73479 cri.go:89] found id: ""
	I0731 18:13:09.519598   73479 logs.go:276] 1 containers: [ed1c40e21d8aa2b6321516625802154255f5eda1642411e377150b8df1c88bd2]
	I0731 18:13:09.519641   73479 ssh_runner.go:195] Run: which crictl
	I0731 18:13:09.523587   73479 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 18:13:09.523656   73479 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 18:13:09.559405   73479 cri.go:89] found id: "57bdb8e09be40a0925d7a86eb613ef8a0455415666562743e36e6800b8636847"
	I0731 18:13:09.559429   73479 cri.go:89] found id: ""
	I0731 18:13:09.559438   73479 logs.go:276] 1 containers: [57bdb8e09be40a0925d7a86eb613ef8a0455415666562743e36e6800b8636847]
	I0731 18:13:09.559485   73479 ssh_runner.go:195] Run: which crictl
	I0731 18:13:09.564137   73479 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 18:13:09.564199   73479 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 18:13:09.605298   73479 cri.go:89] found id: "ee75a53c57652705a335378f60ff96b2fce4543143edc35249ac027c7c6e4a2c"
	I0731 18:13:09.605324   73479 cri.go:89] found id: ""
	I0731 18:13:09.605332   73479 logs.go:276] 1 containers: [ee75a53c57652705a335378f60ff96b2fce4543143edc35249ac027c7c6e4a2c]
	I0731 18:13:09.605403   73479 ssh_runner.go:195] Run: which crictl
	I0731 18:13:09.612233   73479 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 18:13:09.612296   73479 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 18:13:09.648804   73479 cri.go:89] found id: ""
	I0731 18:13:09.648836   73479 logs.go:276] 0 containers: []
	W0731 18:13:09.648848   73479 logs.go:278] No container was found matching "kindnet"
	I0731 18:13:09.648855   73479 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0731 18:13:09.648916   73479 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0731 18:13:09.694708   73479 cri.go:89] found id: "6f311536202ba854461dd5abcc13c5b8d006399426b52e0c8d10bc41f06826df"
	I0731 18:13:09.694733   73479 cri.go:89] found id: "9ea2bc105f57ae4d05e4535f2851a35b9014c7574a63fef533fa0c487002941c"
	I0731 18:13:09.694737   73479 cri.go:89] found id: ""
	I0731 18:13:09.694743   73479 logs.go:276] 2 containers: [6f311536202ba854461dd5abcc13c5b8d006399426b52e0c8d10bc41f06826df 9ea2bc105f57ae4d05e4535f2851a35b9014c7574a63fef533fa0c487002941c]
	I0731 18:13:09.694794   73479 ssh_runner.go:195] Run: which crictl
	I0731 18:13:09.698687   73479 ssh_runner.go:195] Run: which crictl
	I0731 18:13:09.702244   73479 logs.go:123] Gathering logs for storage-provisioner [6f311536202ba854461dd5abcc13c5b8d006399426b52e0c8d10bc41f06826df] ...
	I0731 18:13:09.702261   73479 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6f311536202ba854461dd5abcc13c5b8d006399426b52e0c8d10bc41f06826df"
	I0731 18:13:09.737777   73479 logs.go:123] Gathering logs for storage-provisioner [9ea2bc105f57ae4d05e4535f2851a35b9014c7574a63fef533fa0c487002941c] ...
	I0731 18:13:09.737808   73479 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9ea2bc105f57ae4d05e4535f2851a35b9014c7574a63fef533fa0c487002941c"
	I0731 18:13:09.771128   73479 logs.go:123] Gathering logs for container status ...
	I0731 18:13:09.771161   73479 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 18:13:09.817498   73479 logs.go:123] Gathering logs for dmesg ...
	I0731 18:13:09.817525   73479 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 18:13:09.833574   73479 logs.go:123] Gathering logs for etcd [65ef90d7b082adf059c9973e7f1dfde4770ed9718d440c8e0d525d427b16f645] ...
	I0731 18:13:09.833607   73479 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 65ef90d7b082adf059c9973e7f1dfde4770ed9718d440c8e0d525d427b16f645"
	I0731 18:13:09.872664   73479 logs.go:123] Gathering logs for kube-scheduler [ed1c40e21d8aa2b6321516625802154255f5eda1642411e377150b8df1c88bd2] ...
	I0731 18:13:09.872691   73479 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ed1c40e21d8aa2b6321516625802154255f5eda1642411e377150b8df1c88bd2"
	I0731 18:13:09.913741   73479 logs.go:123] Gathering logs for coredns [f043eb2392c22ee26a4729969283f278a4dc7ff4784addf7139958beb2cba855] ...
	I0731 18:13:09.913771   73479 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f043eb2392c22ee26a4729969283f278a4dc7ff4784addf7139958beb2cba855"
	I0731 18:13:09.949469   73479 logs.go:123] Gathering logs for kube-proxy [57bdb8e09be40a0925d7a86eb613ef8a0455415666562743e36e6800b8636847] ...
	I0731 18:13:09.949512   73479 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 57bdb8e09be40a0925d7a86eb613ef8a0455415666562743e36e6800b8636847"
	I0731 18:13:09.985409   73479 logs.go:123] Gathering logs for kube-controller-manager [ee75a53c57652705a335378f60ff96b2fce4543143edc35249ac027c7c6e4a2c] ...
	I0731 18:13:09.985447   73479 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ee75a53c57652705a335378f60ff96b2fce4543143edc35249ac027c7c6e4a2c"
	I0731 18:13:10.039018   73479 logs.go:123] Gathering logs for CRI-O ...
	I0731 18:13:10.039048   73479 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 18:13:10.406380   73479 logs.go:123] Gathering logs for kubelet ...
	I0731 18:13:10.406416   73479 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 18:13:10.459944   73479 logs.go:123] Gathering logs for describe nodes ...
	I0731 18:13:10.459997   73479 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 18:13:10.564092   73479 logs.go:123] Gathering logs for kube-apiserver [895465d024797e36349af3a0f7cd0689ec7e813d811a3205a301baae081986e0] ...
	I0731 18:13:10.564134   73479 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 895465d024797e36349af3a0f7cd0689ec7e813d811a3205a301baae081986e0"
	I0731 18:13:13.124074   73479 system_pods.go:59] 8 kube-system pods found
	I0731 18:13:13.124102   73479 system_pods.go:61] "coredns-5cfdc65f69-k7clq" [e4be77b6-aa7c-45e2-90a1-6a8264fd5101] Running
	I0731 18:13:13.124107   73479 system_pods.go:61] "etcd-no-preload-673754" [d64e9194-dd33-4a28-b8c3-d25462f1dae8] Running
	I0731 18:13:13.124110   73479 system_pods.go:61] "kube-apiserver-no-preload-673754" [4497614d-51de-4a5f-93dc-1397446fe4c8] Running
	I0731 18:13:13.124114   73479 system_pods.go:61] "kube-controller-manager-no-preload-673754" [fe7f714e-d778-49e7-b4ef-44e6efb5bc33] Running
	I0731 18:13:13.124117   73479 system_pods.go:61] "kube-proxy-hqxh6" [4fb623c0-4be3-421c-85d6-1d76a90b874f] Running
	I0731 18:13:13.124119   73479 system_pods.go:61] "kube-scheduler-no-preload-673754" [558e9b78-a6a9-46b8-9db8-004126359c74] Running
	I0731 18:13:13.124125   73479 system_pods.go:61] "metrics-server-78fcd8795b-27pkr" [a63a156b-0446-4bd9-8619-de75edaeb481] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0731 18:13:13.124129   73479 system_pods.go:61] "storage-provisioner" [a9f0ab70-18f4-4fac-b858-a9177077fe29] Running
	I0731 18:13:13.124135   73479 system_pods.go:74] duration metric: took 3.776302431s to wait for pod list to return data ...
	I0731 18:13:13.124141   73479 default_sa.go:34] waiting for default service account to be created ...
	I0731 18:13:13.127100   73479 default_sa.go:45] found service account: "default"
	I0731 18:13:13.127137   73479 default_sa.go:55] duration metric: took 2.989455ms for default service account to be created ...
	I0731 18:13:13.127148   73479 system_pods.go:116] waiting for k8s-apps to be running ...
	I0731 18:13:13.132359   73479 system_pods.go:86] 8 kube-system pods found
	I0731 18:13:13.132379   73479 system_pods.go:89] "coredns-5cfdc65f69-k7clq" [e4be77b6-aa7c-45e2-90a1-6a8264fd5101] Running
	I0731 18:13:13.132387   73479 system_pods.go:89] "etcd-no-preload-673754" [d64e9194-dd33-4a28-b8c3-d25462f1dae8] Running
	I0731 18:13:13.132393   73479 system_pods.go:89] "kube-apiserver-no-preload-673754" [4497614d-51de-4a5f-93dc-1397446fe4c8] Running
	I0731 18:13:13.132399   73479 system_pods.go:89] "kube-controller-manager-no-preload-673754" [fe7f714e-d778-49e7-b4ef-44e6efb5bc33] Running
	I0731 18:13:13.132405   73479 system_pods.go:89] "kube-proxy-hqxh6" [4fb623c0-4be3-421c-85d6-1d76a90b874f] Running
	I0731 18:13:13.132410   73479 system_pods.go:89] "kube-scheduler-no-preload-673754" [558e9b78-a6a9-46b8-9db8-004126359c74] Running
	I0731 18:13:13.132420   73479 system_pods.go:89] "metrics-server-78fcd8795b-27pkr" [a63a156b-0446-4bd9-8619-de75edaeb481] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0731 18:13:13.132427   73479 system_pods.go:89] "storage-provisioner" [a9f0ab70-18f4-4fac-b858-a9177077fe29] Running
	I0731 18:13:13.132435   73479 system_pods.go:126] duration metric: took 5.281138ms to wait for k8s-apps to be running ...
	I0731 18:13:13.132443   73479 system_svc.go:44] waiting for kubelet service to be running ....
	I0731 18:13:13.132488   73479 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0731 18:13:13.148254   73479 system_svc.go:56] duration metric: took 15.802724ms WaitForService to wait for kubelet
	I0731 18:13:13.148281   73479 kubeadm.go:582] duration metric: took 4m26.650509962s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0731 18:13:13.148315   73479 node_conditions.go:102] verifying NodePressure condition ...
	I0731 18:13:13.151986   73479 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0731 18:13:13.152006   73479 node_conditions.go:123] node cpu capacity is 2
	I0731 18:13:13.152018   73479 node_conditions.go:105] duration metric: took 3.693857ms to run NodePressure ...
	I0731 18:13:13.152031   73479 start.go:241] waiting for startup goroutines ...
	I0731 18:13:13.152043   73479 start.go:246] waiting for cluster config update ...
	I0731 18:13:13.152058   73479 start.go:255] writing updated cluster config ...
	I0731 18:13:13.152347   73479 ssh_runner.go:195] Run: rm -f paused
	I0731 18:13:13.202434   73479 start.go:600] kubectl: 1.30.3, cluster: 1.31.0-beta.0 (minor skew: 1)
	I0731 18:13:13.205205   73479 out.go:177] * Done! kubectl is now configured to use "no-preload-673754" cluster and "default" namespace by default
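
The "(minor skew: 1)" note above compares only the minor component of the kubectl client version against the cluster version (1.30.3 vs 1.31.0-beta.0 here). A hypothetical sketch of that comparison, not minikube's actual implementation and with deliberately simplified version parsing:

```go
// Hypothetical "minor skew" computation between a kubectl client version
// and a cluster version, e.g. "1.30.3" vs "1.31.0-beta.0".
package main

import (
	"fmt"
	"strconv"
	"strings"
)

func minor(v string) (int, error) {
	parts := strings.SplitN(strings.TrimPrefix(v, "v"), ".", 3)
	if len(parts) < 2 {
		return 0, fmt.Errorf("cannot parse version %q", v)
	}
	return strconv.Atoi(parts[1])
}

func minorSkew(client, cluster string) (int, error) {
	c, err := minor(client)
	if err != nil {
		return 0, err
	}
	s, err := minor(cluster)
	if err != nil {
		return 0, err
	}
	if c > s {
		return c - s, nil
	}
	return s - c, nil
}

func main() {
	skew, err := minorSkew("1.30.3", "1.31.0-beta.0")
	if err != nil {
		panic(err)
	}
	fmt.Println("minor skew:", skew) // prints 1, matching the log line above
}
```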
	I0731 18:13:13.655618   74203 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0731 18:13:13.655843   74203 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0731 18:13:33.657356   74203 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0731 18:13:33.657560   74203 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0731 18:14:13.660934   74203 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0731 18:14:13.661161   74203 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0731 18:14:13.661183   74203 kubeadm.go:310] 
	I0731 18:14:13.661216   74203 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0731 18:14:13.661251   74203 kubeadm.go:310] 		timed out waiting for the condition
	I0731 18:14:13.661279   74203 kubeadm.go:310] 
	I0731 18:14:13.661338   74203 kubeadm.go:310] 	This error is likely caused by:
	I0731 18:14:13.661378   74203 kubeadm.go:310] 		- The kubelet is not running
	I0731 18:14:13.661477   74203 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0731 18:14:13.661483   74203 kubeadm.go:310] 
	I0731 18:14:13.661577   74203 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0731 18:14:13.661617   74203 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0731 18:14:13.661646   74203 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0731 18:14:13.661651   74203 kubeadm.go:310] 
	I0731 18:14:13.661781   74203 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0731 18:14:13.661897   74203 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0731 18:14:13.661909   74203 kubeadm.go:310] 
	I0731 18:14:13.662044   74203 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0731 18:14:13.662164   74203 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0731 18:14:13.662265   74203 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0731 18:14:13.662444   74203 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0731 18:14:13.662477   74203 kubeadm.go:310] 
	I0731 18:14:13.663123   74203 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0731 18:14:13.663235   74203 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0731 18:14:13.663331   74203 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	W0731 18:14:13.663497   74203 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
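
The failed init above ends with kubeadm's standard troubleshooting advice: list the kube containers with crictl against the CRI-O socket and inspect the logs of whichever one is failing. The sketch below automates that manual step as an illustration; the flags match the commands quoted in the error text, while the tail limit of 50 lines is an arbitrary choice for the example.

```go
// Sketch of the troubleshooting step suggested above: list all containers
// with crictl against the CRI-O socket and dump a short tail of each one's
// logs. Assumes crictl is installed on the node; tail length is arbitrary.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func run(args ...string) (string, error) {
	out, err := exec.Command("sudo", args...).CombinedOutput()
	return string(out), err
}

func main() {
	sock := "/var/run/crio/crio.sock"
	// IDs of all containers, one per line.
	ids, err := run("crictl", "--runtime-endpoint", sock, "ps", "-a", "--quiet")
	if err != nil {
		fmt.Println("crictl ps failed:", err)
		return
	}
	for _, id := range strings.Fields(ids) {
		logs, err := run("crictl", "--runtime-endpoint", sock, "logs", "--tail", "50", id)
		if err != nil {
			fmt.Printf("could not read logs for %s: %v\n", id, err)
			continue
		}
		fmt.Printf("=== %s ===\n%s\n", id, logs)
	}
}
```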
	
	I0731 18:14:13.663559   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0731 18:14:18.956376   74203 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (5.292787213s)
	I0731 18:14:18.956479   74203 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0731 18:14:18.970820   74203 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0731 18:14:18.980747   74203 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0731 18:14:18.980771   74203 kubeadm.go:157] found existing configuration files:
	
	I0731 18:14:18.980816   74203 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0731 18:14:18.989985   74203 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0731 18:14:18.990063   74203 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0731 18:14:18.999143   74203 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0731 18:14:19.008740   74203 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0731 18:14:19.008798   74203 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0731 18:14:19.018729   74203 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0731 18:14:19.028953   74203 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0731 18:14:19.029015   74203 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0731 18:14:19.039399   74203 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0731 18:14:19.049072   74203 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0731 18:14:19.049124   74203 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0731 18:14:19.059592   74203 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0731 18:14:19.121542   74203 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0731 18:14:19.121613   74203 kubeadm.go:310] [preflight] Running pre-flight checks
	I0731 18:14:19.271989   74203 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0731 18:14:19.272100   74203 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0731 18:14:19.272223   74203 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0731 18:14:19.440224   74203 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0731 18:14:19.441929   74203 out.go:204]   - Generating certificates and keys ...
	I0731 18:14:19.442025   74203 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0731 18:14:19.442104   74203 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0731 18:14:19.442196   74203 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0731 18:14:19.442245   74203 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0731 18:14:19.442326   74203 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0731 18:14:19.442395   74203 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0731 18:14:19.442498   74203 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0731 18:14:19.442610   74203 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0731 18:14:19.442687   74203 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0731 18:14:19.442770   74203 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0731 18:14:19.442813   74203 kubeadm.go:310] [certs] Using the existing "sa" key
	I0731 18:14:19.442887   74203 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0731 18:14:19.481696   74203 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0731 18:14:19.804252   74203 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0731 18:14:20.038734   74203 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0731 18:14:20.211133   74203 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0731 18:14:20.225726   74203 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0731 18:14:20.227920   74203 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0731 18:14:20.227977   74203 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0731 18:14:20.364068   74203 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0731 18:14:20.365991   74203 out.go:204]   - Booting up control plane ...
	I0731 18:14:20.366094   74203 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0731 18:14:20.366195   74203 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0731 18:14:20.366270   74203 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0731 18:14:20.366379   74203 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0731 18:14:20.367688   74203 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0731 18:15:00.365616   74203 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0731 18:15:00.366184   74203 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0731 18:15:00.366412   74203 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0731 18:15:05.366332   74203 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0731 18:15:05.366529   74203 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0731 18:15:15.366241   74203 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0731 18:15:15.366499   74203 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0731 18:15:35.366114   74203 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0731 18:15:35.366344   74203 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0731 18:16:15.365995   74203 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0731 18:16:15.366181   74203 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0731 18:16:15.366191   74203 kubeadm.go:310] 
	I0731 18:16:15.366224   74203 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0731 18:16:15.366448   74203 kubeadm.go:310] 		timed out waiting for the condition
	I0731 18:16:15.366472   74203 kubeadm.go:310] 
	I0731 18:16:15.366517   74203 kubeadm.go:310] 	This error is likely caused by:
	I0731 18:16:15.366568   74203 kubeadm.go:310] 		- The kubelet is not running
	I0731 18:16:15.366723   74203 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0731 18:16:15.366740   74203 kubeadm.go:310] 
	I0731 18:16:15.366863   74203 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0731 18:16:15.366924   74203 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0731 18:16:15.366986   74203 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0731 18:16:15.366999   74203 kubeadm.go:310] 
	I0731 18:16:15.367153   74203 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0731 18:16:15.367271   74203 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0731 18:16:15.367283   74203 kubeadm.go:310] 
	I0731 18:16:15.367418   74203 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0731 18:16:15.367504   74203 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0731 18:16:15.367609   74203 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0731 18:16:15.367725   74203 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0731 18:16:15.367734   74203 kubeadm.go:310] 
	I0731 18:16:15.369210   74203 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0731 18:16:15.369361   74203 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0731 18:16:15.369434   74203 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0731 18:16:15.369496   74203 kubeadm.go:394] duration metric: took 8m6.557607575s to StartCluster
	I0731 18:16:15.369537   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 18:16:15.369590   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 18:16:15.432899   74203 cri.go:89] found id: ""
	I0731 18:16:15.432929   74203 logs.go:276] 0 containers: []
	W0731 18:16:15.432941   74203 logs.go:278] No container was found matching "kube-apiserver"
	I0731 18:16:15.432947   74203 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 18:16:15.433005   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 18:16:15.470506   74203 cri.go:89] found id: ""
	I0731 18:16:15.470534   74203 logs.go:276] 0 containers: []
	W0731 18:16:15.470542   74203 logs.go:278] No container was found matching "etcd"
	I0731 18:16:15.470549   74203 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 18:16:15.470609   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 18:16:15.502032   74203 cri.go:89] found id: ""
	I0731 18:16:15.502055   74203 logs.go:276] 0 containers: []
	W0731 18:16:15.502062   74203 logs.go:278] No container was found matching "coredns"
	I0731 18:16:15.502067   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 18:16:15.502115   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 18:16:15.533897   74203 cri.go:89] found id: ""
	I0731 18:16:15.533918   74203 logs.go:276] 0 containers: []
	W0731 18:16:15.533925   74203 logs.go:278] No container was found matching "kube-scheduler"
	I0731 18:16:15.533930   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 18:16:15.533980   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 18:16:15.565275   74203 cri.go:89] found id: ""
	I0731 18:16:15.565311   74203 logs.go:276] 0 containers: []
	W0731 18:16:15.565326   74203 logs.go:278] No container was found matching "kube-proxy"
	I0731 18:16:15.565333   74203 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 18:16:15.565395   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 18:16:15.601402   74203 cri.go:89] found id: ""
	I0731 18:16:15.601427   74203 logs.go:276] 0 containers: []
	W0731 18:16:15.601435   74203 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 18:16:15.601440   74203 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 18:16:15.601489   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 18:16:15.638778   74203 cri.go:89] found id: ""
	I0731 18:16:15.638801   74203 logs.go:276] 0 containers: []
	W0731 18:16:15.638808   74203 logs.go:278] No container was found matching "kindnet"
	I0731 18:16:15.638813   74203 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 18:16:15.638861   74203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 18:16:15.675697   74203 cri.go:89] found id: ""
	I0731 18:16:15.675720   74203 logs.go:276] 0 containers: []
	W0731 18:16:15.675728   74203 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 18:16:15.675736   74203 logs.go:123] Gathering logs for describe nodes ...
	I0731 18:16:15.675748   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 18:16:15.745287   74203 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 18:16:15.745325   74203 logs.go:123] Gathering logs for CRI-O ...
	I0731 18:16:15.745341   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 18:16:15.848503   74203 logs.go:123] Gathering logs for container status ...
	I0731 18:16:15.848536   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 18:16:15.887234   74203 logs.go:123] Gathering logs for kubelet ...
	I0731 18:16:15.887258   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 18:16:15.934871   74203 logs.go:123] Gathering logs for dmesg ...
	I0731 18:16:15.934901   74203 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	W0731 18:16:15.947727   74203 out.go:364] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0731 18:16:15.947769   74203 out.go:239] * 
	W0731 18:16:15.947817   74203 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0731 18:16:15.947836   74203 out.go:239] * 
	W0731 18:16:15.948669   74203 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0731 18:16:15.952286   74203 out.go:177] 
	W0731 18:16:15.953375   74203 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0731 18:16:15.953424   74203 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0731 18:16:15.953442   74203 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0731 18:16:15.954734   74203 out.go:177] 
	
	
	==> CRI-O <==
	Jul 31 18:27:40 old-k8s-version-276459 crio[645]: time="2024-07-31 18:27:40.762328516Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722450460762308114,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=6127320f-ab56-42e3-ae77-76f98638136c name=/runtime.v1.ImageService/ImageFsInfo
	Jul 31 18:27:40 old-k8s-version-276459 crio[645]: time="2024-07-31 18:27:40.762801798Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=c3aa0c9a-c26b-4ce5-bba6-7e9b6214514a name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 18:27:40 old-k8s-version-276459 crio[645]: time="2024-07-31 18:27:40.762877868Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=c3aa0c9a-c26b-4ce5-bba6-7e9b6214514a name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 18:27:40 old-k8s-version-276459 crio[645]: time="2024-07-31 18:27:40.762934749Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=c3aa0c9a-c26b-4ce5-bba6-7e9b6214514a name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 18:27:40 old-k8s-version-276459 crio[645]: time="2024-07-31 18:27:40.798626117Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=e3ec8e78-9ed5-48ed-884a-7ff9b6471060 name=/runtime.v1.RuntimeService/Version
	Jul 31 18:27:40 old-k8s-version-276459 crio[645]: time="2024-07-31 18:27:40.798713137Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=e3ec8e78-9ed5-48ed-884a-7ff9b6471060 name=/runtime.v1.RuntimeService/Version
	Jul 31 18:27:40 old-k8s-version-276459 crio[645]: time="2024-07-31 18:27:40.799631737Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=ae60ffa1-bfd1-42e7-9362-728ceb2ea0ce name=/runtime.v1.ImageService/ImageFsInfo
	Jul 31 18:27:40 old-k8s-version-276459 crio[645]: time="2024-07-31 18:27:40.799998183Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722450460799976986,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=ae60ffa1-bfd1-42e7-9362-728ceb2ea0ce name=/runtime.v1.ImageService/ImageFsInfo
	Jul 31 18:27:40 old-k8s-version-276459 crio[645]: time="2024-07-31 18:27:40.800421286Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=1cc89fe3-9f64-4906-9b12-08a5298cd41d name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 18:27:40 old-k8s-version-276459 crio[645]: time="2024-07-31 18:27:40.800492613Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=1cc89fe3-9f64-4906-9b12-08a5298cd41d name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 18:27:40 old-k8s-version-276459 crio[645]: time="2024-07-31 18:27:40.800524575Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=1cc89fe3-9f64-4906-9b12-08a5298cd41d name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 18:27:40 old-k8s-version-276459 crio[645]: time="2024-07-31 18:27:40.829565816Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=fec7e56f-6885-4844-b46f-cd9791ba57e5 name=/runtime.v1.RuntimeService/Version
	Jul 31 18:27:40 old-k8s-version-276459 crio[645]: time="2024-07-31 18:27:40.829651527Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=fec7e56f-6885-4844-b46f-cd9791ba57e5 name=/runtime.v1.RuntimeService/Version
	Jul 31 18:27:40 old-k8s-version-276459 crio[645]: time="2024-07-31 18:27:40.830591765Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=4c4e9f1a-4cc2-423e-a855-859a43169852 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 31 18:27:40 old-k8s-version-276459 crio[645]: time="2024-07-31 18:27:40.830985202Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722450460830966351,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=4c4e9f1a-4cc2-423e-a855-859a43169852 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 31 18:27:40 old-k8s-version-276459 crio[645]: time="2024-07-31 18:27:40.831513311Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=17d9e106-d9d7-4bcf-a1fb-b0ec4cc8562a name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 18:27:40 old-k8s-version-276459 crio[645]: time="2024-07-31 18:27:40.831583170Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=17d9e106-d9d7-4bcf-a1fb-b0ec4cc8562a name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 18:27:40 old-k8s-version-276459 crio[645]: time="2024-07-31 18:27:40.831627201Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=17d9e106-d9d7-4bcf-a1fb-b0ec4cc8562a name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 18:27:40 old-k8s-version-276459 crio[645]: time="2024-07-31 18:27:40.862696579Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=db4387e9-19b0-479c-8657-228792745c7b name=/runtime.v1.RuntimeService/Version
	Jul 31 18:27:40 old-k8s-version-276459 crio[645]: time="2024-07-31 18:27:40.862792081Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=db4387e9-19b0-479c-8657-228792745c7b name=/runtime.v1.RuntimeService/Version
	Jul 31 18:27:40 old-k8s-version-276459 crio[645]: time="2024-07-31 18:27:40.864156198Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=19046d09-e7a0-4212-a6ec-9197f33a9246 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 31 18:27:40 old-k8s-version-276459 crio[645]: time="2024-07-31 18:27:40.864586566Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722450460864560216,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=19046d09-e7a0-4212-a6ec-9197f33a9246 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 31 18:27:40 old-k8s-version-276459 crio[645]: time="2024-07-31 18:27:40.865093783Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=cdbb8352-a044-427f-b14f-bb74c6867299 name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 18:27:40 old-k8s-version-276459 crio[645]: time="2024-07-31 18:27:40.865156095Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=cdbb8352-a044-427f-b14f-bb74c6867299 name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 18:27:40 old-k8s-version-276459 crio[645]: time="2024-07-31 18:27:40.865201635Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=cdbb8352-a044-427f-b14f-bb74c6867299 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Jul31 18:07] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.051412] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.042688] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.944073] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +1.816954] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.537194] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000002] NFSD: Unable to initialize client recovery tracking! (-2)
	[Jul31 18:08] systemd-fstab-generator[565]: Ignoring "noauto" option for root device
	[  +0.060507] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.075418] systemd-fstab-generator[577]: Ignoring "noauto" option for root device
	[  +0.176279] systemd-fstab-generator[591]: Ignoring "noauto" option for root device
	[  +0.160769] systemd-fstab-generator[603]: Ignoring "noauto" option for root device
	[  +0.263587] systemd-fstab-generator[630]: Ignoring "noauto" option for root device
	[  +6.137429] systemd-fstab-generator[832]: Ignoring "noauto" option for root device
	[  +0.060102] kauditd_printk_skb: 130 callbacks suppressed
	[  +1.916258] systemd-fstab-generator[956]: Ignoring "noauto" option for root device
	[ +12.736454] kauditd_printk_skb: 46 callbacks suppressed
	[Jul31 18:12] systemd-fstab-generator[5055]: Ignoring "noauto" option for root device
	[Jul31 18:14] systemd-fstab-generator[5341]: Ignoring "noauto" option for root device
	[  +0.064729] kauditd_printk_skb: 12 callbacks suppressed
	
	
	==> kernel <==
	 18:27:41 up 19 min,  0 users,  load average: 0.00, 0.02, 0.01
	Linux old-k8s-version-276459 5.10.207 #1 SMP Mon Jul 29 15:19:02 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kubelet <==
	Jul 31 18:27:39 old-k8s-version-276459 kubelet[6839]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache/reflector.go:138 +0x185
	Jul 31 18:27:39 old-k8s-version-276459 kubelet[6839]: k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache.(*Reflector).Run.func1()
	Jul 31 18:27:39 old-k8s-version-276459 kubelet[6839]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache/reflector.go:222 +0x70
	Jul 31 18:27:39 old-k8s-version-276459 kubelet[6839]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0xc000610ef0)
	Jul 31 18:27:39 old-k8s-version-276459 kubelet[6839]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:155 +0x5f
	Jul 31 18:27:39 old-k8s-version-276459 kubelet[6839]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc000a39ef0, 0x4f0ac20, 0xc0009a44b0, 0x1, 0xc0001000c0)
	Jul 31 18:27:39 old-k8s-version-276459 kubelet[6839]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:156 +0xad
	Jul 31 18:27:39 old-k8s-version-276459 kubelet[6839]: k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache.(*Reflector).Run(0xc00024c7e0, 0xc0001000c0)
	Jul 31 18:27:39 old-k8s-version-276459 kubelet[6839]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache/reflector.go:220 +0x1c5
	Jul 31 18:27:39 old-k8s-version-276459 kubelet[6839]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.(*Group).StartWithChannel.func1()
	Jul 31 18:27:39 old-k8s-version-276459 kubelet[6839]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:56 +0x2e
	Jul 31 18:27:39 old-k8s-version-276459 kubelet[6839]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.(*Group).Start.func1(0xc000930c00, 0xc0009a27c0)
	Jul 31 18:27:39 old-k8s-version-276459 kubelet[6839]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:73 +0x51
	Jul 31 18:27:39 old-k8s-version-276459 kubelet[6839]: created by k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.(*Group).Start
	Jul 31 18:27:39 old-k8s-version-276459 kubelet[6839]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:71 +0x65
	Jul 31 18:27:39 old-k8s-version-276459 systemd[1]: kubelet.service: Main process exited, code=exited, status=255/EXCEPTION
	Jul 31 18:27:39 old-k8s-version-276459 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Jul 31 18:27:40 old-k8s-version-276459 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 139.
	Jul 31 18:27:40 old-k8s-version-276459 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	Jul 31 18:27:40 old-k8s-version-276459 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	Jul 31 18:27:40 old-k8s-version-276459 kubelet[6866]: I0731 18:27:40.405344    6866 server.go:416] Version: v1.20.0
	Jul 31 18:27:40 old-k8s-version-276459 kubelet[6866]: I0731 18:27:40.405826    6866 server.go:837] Client rotation is on, will bootstrap in background
	Jul 31 18:27:40 old-k8s-version-276459 kubelet[6866]: I0731 18:27:40.408374    6866 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
	Jul 31 18:27:40 old-k8s-version-276459 kubelet[6866]: W0731 18:27:40.409376    6866 manager.go:159] Cannot detect current cgroup on cgroup v2
	Jul 31 18:27:40 old-k8s-version-276459 kubelet[6866]: I0731 18:27:40.409642    6866 dynamic_cafile_content.go:167] Starting client-ca-bundle::/var/lib/minikube/certs/ca.crt
	

                                                
                                                
-- /stdout --
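The kubeadm output captured above repeats the same triage advice each time the control plane fails to come up. As a hedged sketch assembled only from the commands quoted in that log (CONTAINERID stays a placeholder, and the crio socket path is the one the log itself uses), the manual checks would be:

	# check whether the kubelet is running and why it keeps restarting
	systemctl status kubelet
	journalctl -xeu kubelet
	# list any Kubernetes containers cri-o started (the dump above found none)
	crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause
	# inspect the logs of a failing container once one is identified
	crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID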
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-276459 -n old-k8s-version-276459
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-276459 -n old-k8s-version-276459: exit status 2 (215.432858ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "old-k8s-version-276459" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (139.62s)
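For reference, minikube's own advice in the log above points at the kubelet cgroup driver and at the disabled kubelet service. A non-authoritative sketch of that remediation, using only the flags, profile name, and commands that appear in the log (whether they resolve this particular v1.20.0 failure is an assumption):

	# retry the start with the cgroup driver minikube suggests
	out/minikube-linux-amd64 start -p old-k8s-version-276459 --extra-config=kubelet.cgroup-driver=systemd
	# address the preflight warning that the kubelet service is not enabled
	out/minikube-linux-amd64 ssh -p old-k8s-version-276459 -- sudo systemctl enable kubelet.service
	# capture logs for a GitHub issue, as the advice box requests
	out/minikube-linux-amd64 logs -p old-k8s-version-276459 --file=logs.txt
	# re-check the apiserver state the helper queried above
	out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-276459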

                                                
                                    

Test pass (250/320)

Order passed test Duration
3 TestDownloadOnly/v1.20.0/json-events 31.88
4 TestDownloadOnly/v1.20.0/preload-exists 0
8 TestDownloadOnly/v1.20.0/LogsDuration 0.06
9 TestDownloadOnly/v1.20.0/DeleteAll 0.13
10 TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds 0.12
12 TestDownloadOnly/v1.30.3/json-events 20.15
13 TestDownloadOnly/v1.30.3/preload-exists 0
17 TestDownloadOnly/v1.30.3/LogsDuration 0.06
18 TestDownloadOnly/v1.30.3/DeleteAll 0.14
19 TestDownloadOnly/v1.30.3/DeleteAlwaysSucceeds 0.13
21 TestDownloadOnly/v1.31.0-beta.0/json-events 48.19
22 TestDownloadOnly/v1.31.0-beta.0/preload-exists 0
26 TestDownloadOnly/v1.31.0-beta.0/LogsDuration 0.05
27 TestDownloadOnly/v1.31.0-beta.0/DeleteAll 0.13
28 TestDownloadOnly/v1.31.0-beta.0/DeleteAlwaysSucceeds 0.12
30 TestBinaryMirror 0.55
31 TestOffline 98.82
34 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.05
35 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.05
36 TestAddons/Setup 146.56
40 TestAddons/serial/GCPAuth/Namespaces 0.14
42 TestAddons/parallel/Registry 16.83
44 TestAddons/parallel/InspektorGadget 11.91
46 TestAddons/parallel/HelmTiller 10.56
48 TestAddons/parallel/CSI 54.08
49 TestAddons/parallel/Headlamp 17.7
50 TestAddons/parallel/CloudSpanner 5.65
51 TestAddons/parallel/LocalPath 17.08
52 TestAddons/parallel/NvidiaDevicePlugin 6.78
53 TestAddons/parallel/Yakd 11.95
55 TestCertOptions 77
56 TestCertExpiration 265.95
58 TestForceSystemdFlag 83.97
59 TestForceSystemdEnv 48.4
61 TestKVMDriverInstallOrUpdate 4.26
65 TestErrorSpam/setup 41.43
66 TestErrorSpam/start 0.32
67 TestErrorSpam/status 0.69
68 TestErrorSpam/pause 1.47
69 TestErrorSpam/unpause 1.54
70 TestErrorSpam/stop 4.92
73 TestFunctional/serial/CopySyncFile 0
74 TestFunctional/serial/StartWithProxy 93.88
75 TestFunctional/serial/AuditLog 0
76 TestFunctional/serial/SoftStart 41.01
77 TestFunctional/serial/KubeContext 0.04
78 TestFunctional/serial/KubectlGetPods 0.06
81 TestFunctional/serial/CacheCmd/cache/add_remote 3.61
82 TestFunctional/serial/CacheCmd/cache/add_local 2.05
83 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.04
84 TestFunctional/serial/CacheCmd/cache/list 0.04
85 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.21
86 TestFunctional/serial/CacheCmd/cache/cache_reload 1.62
87 TestFunctional/serial/CacheCmd/cache/delete 0.09
88 TestFunctional/serial/MinikubeKubectlCmd 0.1
89 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.1
90 TestFunctional/serial/ExtraConfig 83.7
91 TestFunctional/serial/ComponentHealth 0.06
92 TestFunctional/serial/LogsCmd 1.32
93 TestFunctional/serial/LogsFileCmd 1.42
94 TestFunctional/serial/InvalidService 3.95
96 TestFunctional/parallel/ConfigCmd 0.29
97 TestFunctional/parallel/DashboardCmd 16.87
98 TestFunctional/parallel/DryRun 0.25
99 TestFunctional/parallel/InternationalLanguage 0.13
100 TestFunctional/parallel/StatusCmd 0.74
104 TestFunctional/parallel/ServiceCmdConnect 9.49
105 TestFunctional/parallel/AddonsCmd 0.12
106 TestFunctional/parallel/PersistentVolumeClaim 45.79
108 TestFunctional/parallel/SSHCmd 0.41
109 TestFunctional/parallel/CpCmd 1.19
110 TestFunctional/parallel/MySQL 64.02
111 TestFunctional/parallel/FileSync 0.2
112 TestFunctional/parallel/CertSync 1.19
116 TestFunctional/parallel/NodeLabels 0.05
118 TestFunctional/parallel/NonActiveRuntimeDisabled 0.42
120 TestFunctional/parallel/License 0.55
121 TestFunctional/parallel/ImageCommands/ImageListShort 2
122 TestFunctional/parallel/ImageCommands/ImageListTable 0.22
123 TestFunctional/parallel/ImageCommands/ImageListJson 0.2
124 TestFunctional/parallel/ImageCommands/ImageListYaml 0.24
125 TestFunctional/parallel/ImageCommands/ImageBuild 3.21
126 TestFunctional/parallel/ImageCommands/Setup 1.8
127 TestFunctional/parallel/Version/short 0.04
128 TestFunctional/parallel/Version/components 0.67
129 TestFunctional/parallel/UpdateContextCmd/no_changes 0.09
130 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.09
131 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.1
132 TestFunctional/parallel/ServiceCmd/DeployApp 22.21
133 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 1.23
143 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 0.84
144 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 2.02
145 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.8
146 TestFunctional/parallel/ImageCommands/ImageRemove 2.59
147 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 5.24
148 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.62
149 TestFunctional/parallel/ServiceCmd/List 0.48
150 TestFunctional/parallel/ServiceCmd/JSONOutput 0.46
151 TestFunctional/parallel/ServiceCmd/HTTPS 0.28
152 TestFunctional/parallel/ServiceCmd/Format 0.27
153 TestFunctional/parallel/ServiceCmd/URL 0.29
154 TestFunctional/parallel/ProfileCmd/profile_not_create 0.28
155 TestFunctional/parallel/ProfileCmd/profile_list 0.28
156 TestFunctional/parallel/MountCmd/any-port 8.7
157 TestFunctional/parallel/ProfileCmd/profile_json_output 0.25
158 TestFunctional/parallel/MountCmd/specific-port 1.72
159 TestFunctional/parallel/MountCmd/VerifyCleanup 1.4
160 TestFunctional/delete_echo-server_images 0.03
161 TestFunctional/delete_my-image_image 0.02
162 TestFunctional/delete_minikube_cached_images 0.01
166 TestMultiControlPlane/serial/StartCluster 204.43
167 TestMultiControlPlane/serial/DeployApp 6.65
168 TestMultiControlPlane/serial/PingHostFromPods 1.11
169 TestMultiControlPlane/serial/AddWorkerNode 53.7
170 TestMultiControlPlane/serial/NodeLabels 0.07
171 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.52
172 TestMultiControlPlane/serial/CopyFile 12.33
174 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 3.48
176 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 0.39
178 TestMultiControlPlane/serial/DeleteSecondaryNode 16.96
179 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.35
181 TestMultiControlPlane/serial/RestartCluster 326.03
182 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.38
183 TestMultiControlPlane/serial/AddSecondaryNode 76.49
184 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 0.52
188 TestJSONOutput/start/Command 94.99
189 TestJSONOutput/start/Audit 0
191 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
192 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
194 TestJSONOutput/pause/Command 0.69
195 TestJSONOutput/pause/Audit 0
197 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
198 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
200 TestJSONOutput/unpause/Command 0.59
201 TestJSONOutput/unpause/Audit 0
203 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
204 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
206 TestJSONOutput/stop/Command 6.6
207 TestJSONOutput/stop/Audit 0
209 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
210 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
211 TestErrorJSONOutput 0.18
216 TestMainNoArgs 0.04
217 TestMinikubeProfile 83.55
220 TestMountStart/serial/StartWithMountFirst 26.8
221 TestMountStart/serial/VerifyMountFirst 0.36
222 TestMountStart/serial/StartWithMountSecond 25.17
223 TestMountStart/serial/VerifyMountSecond 0.36
224 TestMountStart/serial/DeleteFirst 0.85
225 TestMountStart/serial/VerifyMountPostDelete 0.36
226 TestMountStart/serial/Stop 1.26
227 TestMountStart/serial/RestartStopped 22.42
228 TestMountStart/serial/VerifyMountPostStop 0.36
231 TestMultiNode/serial/FreshStart2Nodes 115.71
232 TestMultiNode/serial/DeployApp2Nodes 5.15
233 TestMultiNode/serial/PingHostFrom2Pods 0.73
234 TestMultiNode/serial/AddNode 46.34
235 TestMultiNode/serial/MultiNodeLabels 0.06
236 TestMultiNode/serial/ProfileList 0.2
237 TestMultiNode/serial/CopyFile 6.9
238 TestMultiNode/serial/StopNode 2.13
239 TestMultiNode/serial/StartAfterStop 38.95
241 TestMultiNode/serial/DeleteNode 2.39
243 TestMultiNode/serial/RestartMultiNode 181.98
244 TestMultiNode/serial/ValidateNameConflict 42.36
251 TestScheduledStopUnix 109.75
255 TestRunningBinaryUpgrade 254.75
260 TestNoKubernetes/serial/StartNoK8sWithVersion 0.07
264 TestNoKubernetes/serial/StartWithK8s 70.2
269 TestNetworkPlugins/group/false 2.86
280 TestNoKubernetes/serial/StartWithStopK8s 40.92
281 TestNoKubernetes/serial/Start 72.18
282 TestNoKubernetes/serial/VerifyK8sNotRunning 0.21
283 TestNoKubernetes/serial/ProfileList 1.57
284 TestNoKubernetes/serial/Stop 1.28
285 TestNoKubernetes/serial/StartNoArgs 38.18
286 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.18
288 TestPause/serial/Start 121.32
289 TestStoppedBinaryUpgrade/Setup 2.27
290 TestStoppedBinaryUpgrade/Upgrade 123.06
292 TestNetworkPlugins/group/auto/Start 60.78
293 TestStoppedBinaryUpgrade/MinikubeLogs 0.91
294 TestNetworkPlugins/group/kindnet/Start 80.8
295 TestNetworkPlugins/group/calico/Start 113.98
296 TestNetworkPlugins/group/auto/KubeletFlags 0.22
297 TestNetworkPlugins/group/auto/NetCatPod 10.3
298 TestNetworkPlugins/group/auto/DNS 0.16
299 TestNetworkPlugins/group/auto/Localhost 0.14
300 TestNetworkPlugins/group/auto/HairPin 0.13
301 TestNetworkPlugins/group/custom-flannel/Start 78.02
302 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
303 TestNetworkPlugins/group/kindnet/KubeletFlags 0.2
304 TestNetworkPlugins/group/kindnet/NetCatPod 11.22
305 TestNetworkPlugins/group/kindnet/DNS 0.18
306 TestNetworkPlugins/group/kindnet/Localhost 0.16
307 TestNetworkPlugins/group/kindnet/HairPin 0.14
308 TestNetworkPlugins/group/bridge/Start 98.78
309 TestNetworkPlugins/group/calico/ControllerPod 6.01
310 TestNetworkPlugins/group/calico/KubeletFlags 0.22
311 TestNetworkPlugins/group/calico/NetCatPod 10.27
312 TestNetworkPlugins/group/calico/DNS 0.17
313 TestNetworkPlugins/group/calico/Localhost 0.13
314 TestNetworkPlugins/group/calico/HairPin 0.12
315 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.49
316 TestNetworkPlugins/group/custom-flannel/NetCatPod 12.85
317 TestNetworkPlugins/group/flannel/Start 89.74
318 TestNetworkPlugins/group/custom-flannel/DNS 0.18
319 TestNetworkPlugins/group/custom-flannel/Localhost 0.14
320 TestNetworkPlugins/group/custom-flannel/HairPin 0.17
321 TestNetworkPlugins/group/enable-default-cni/Start 108.68
322 TestNetworkPlugins/group/bridge/KubeletFlags 0.22
323 TestNetworkPlugins/group/bridge/NetCatPod 10.29
324 TestNetworkPlugins/group/bridge/DNS 0.2
325 TestNetworkPlugins/group/bridge/Localhost 0.19
326 TestNetworkPlugins/group/bridge/HairPin 0.16
329 TestNetworkPlugins/group/flannel/ControllerPod 6.01
330 TestNetworkPlugins/group/flannel/KubeletFlags 0.24
331 TestNetworkPlugins/group/flannel/NetCatPod 11.28
333 TestStartStop/group/no-preload/serial/FirstStart 114.62
334 TestNetworkPlugins/group/flannel/DNS 0.16
335 TestNetworkPlugins/group/flannel/Localhost 0.12
336 TestNetworkPlugins/group/flannel/HairPin 0.12
338 TestStartStop/group/embed-certs/serial/FirstStart 116.11
339 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.18
340 TestNetworkPlugins/group/enable-default-cni/NetCatPod 9.21
341 TestNetworkPlugins/group/enable-default-cni/DNS 0.24
342 TestNetworkPlugins/group/enable-default-cni/Localhost 0.17
343 TestNetworkPlugins/group/enable-default-cni/HairPin 0.16
345 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 69.82
346 TestStartStop/group/no-preload/serial/DeployApp 9.29
347 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 0.93
349 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 11.29
350 TestStartStop/group/embed-certs/serial/DeployApp 9.3
351 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 0.98
353 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 0.91
358 TestStartStop/group/no-preload/serial/SecondStart 646.65
361 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 593.79
362 TestStartStop/group/embed-certs/serial/SecondStart 603.11
363 TestStartStop/group/old-k8s-version/serial/Stop 2.36
364 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.18
375 TestStartStop/group/newest-cni/serial/FirstStart 48.71
376 TestStartStop/group/newest-cni/serial/DeployApp 0
377 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 1.2
378 TestStartStop/group/newest-cni/serial/Stop 11.32
379 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.18
380 TestStartStop/group/newest-cni/serial/SecondStart 35.16
381 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
382 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
383 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.23
384 TestStartStop/group/newest-cni/serial/Pause 2.26
TestDownloadOnly/v1.20.0/json-events (31.88s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-079702 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-079702 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio: (31.878856325s)
--- PASS: TestDownloadOnly/v1.20.0/json-events (31.88s)
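
The same download-only invocation can be reproduced outside the CI harness; a minimal sketch, assuming a local minikube binary on PATH, the kvm2 driver installed, and an arbitrary profile name:

    # re-run the v1.20.0 download-only start with the flags used above
    minikube start -o=json --download-only -p download-only-demo --force \
      --alsologtostderr --kubernetes-version=v1.20.0 \
      --container-runtime=crio --driver=kvm2
    # with the default MINIKUBE_HOME, the preload tarball then sits in the cache:
    ls ~/.minikube/cache/preloaded-tarball/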

                                                
                                    
TestDownloadOnly/v1.20.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/preload-exists
--- PASS: TestDownloadOnly/v1.20.0/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.20.0/LogsDuration (0.06s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-079702
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-079702: exit status 85 (55.608033ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-079702 | jenkins | v1.33.1 | 31 Jul 24 16:39 UTC |          |
	|         | -p download-only-079702        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|         | --driver=kvm2                  |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/31 16:39:56
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0731 16:39:56.362118   15271 out.go:291] Setting OutFile to fd 1 ...
	I0731 16:39:56.362233   15271 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 16:39:56.362242   15271 out.go:304] Setting ErrFile to fd 2...
	I0731 16:39:56.362247   15271 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 16:39:56.362447   15271 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19349-8084/.minikube/bin
	W0731 16:39:56.362577   15271 root.go:314] Error reading config file at /home/jenkins/minikube-integration/19349-8084/.minikube/config/config.json: open /home/jenkins/minikube-integration/19349-8084/.minikube/config/config.json: no such file or directory
	I0731 16:39:56.363201   15271 out.go:298] Setting JSON to true
	I0731 16:39:56.364157   15271 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":1340,"bootTime":1722442656,"procs":174,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1065-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0731 16:39:56.364214   15271 start.go:139] virtualization: kvm guest
	I0731 16:39:56.366673   15271 out.go:97] [download-only-079702] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	W0731 16:39:56.366772   15271 preload.go:293] Failed to list preload files: open /home/jenkins/minikube-integration/19349-8084/.minikube/cache/preloaded-tarball: no such file or directory
	I0731 16:39:56.366812   15271 notify.go:220] Checking for updates...
	I0731 16:39:56.368350   15271 out.go:169] MINIKUBE_LOCATION=19349
	I0731 16:39:56.370311   15271 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0731 16:39:56.371702   15271 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/19349-8084/kubeconfig
	I0731 16:39:56.373061   15271 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/19349-8084/.minikube
	I0731 16:39:56.374310   15271 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W0731 16:39:56.376942   15271 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0731 16:39:56.377152   15271 driver.go:392] Setting default libvirt URI to qemu:///system
	I0731 16:39:56.478003   15271 out.go:97] Using the kvm2 driver based on user configuration
	I0731 16:39:56.478026   15271 start.go:297] selected driver: kvm2
	I0731 16:39:56.478033   15271 start.go:901] validating driver "kvm2" against <nil>
	I0731 16:39:56.478361   15271 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0731 16:39:56.478489   15271 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19349-8084/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0731 16:39:56.492993   15271 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0731 16:39:56.493040   15271 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0731 16:39:56.493536   15271 start_flags.go:393] Using suggested 6000MB memory alloc based on sys=32089MB, container=0MB
	I0731 16:39:56.493720   15271 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0731 16:39:56.493745   15271 cni.go:84] Creating CNI manager for ""
	I0731 16:39:56.493753   15271 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0731 16:39:56.493761   15271 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0731 16:39:56.493817   15271 start.go:340] cluster config:
	{Name:download-only-079702 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:6000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-079702 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0731 16:39:56.494017   15271 iso.go:125] acquiring lock: {Name:mk4494bb5a63902bf12a909c3109db85229bc516 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0731 16:39:56.496387   15271 out.go:97] Downloading VM boot image ...
	I0731 16:39:56.496413   15271 download.go:107] Downloading: https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso.sha256 -> /home/jenkins/minikube-integration/19349-8084/.minikube/cache/iso/amd64/minikube-v1.33.1-1722248113-19339-amd64.iso
	I0731 16:40:13.872961   15271 out.go:97] Starting "download-only-079702" primary control-plane node in "download-only-079702" cluster
	I0731 16:40:13.872989   15271 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0731 16:40:13.967920   15271 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0731 16:40:13.967944   15271 cache.go:56] Caching tarball of preloaded images
	I0731 16:40:13.968105   15271 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0731 16:40:13.970185   15271 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I0731 16:40:13.970203   15271 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 ...
	I0731 16:40:14.070027   15271 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4?checksum=md5:f93b07cde9c3289306cbaeb7a1803c19 -> /home/jenkins/minikube-integration/19349-8084/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0731 16:40:26.299885   15271 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 ...
	I0731 16:40:26.299973   15271 preload.go:254] verifying checksum of /home/jenkins/minikube-integration/19349-8084/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 ...
	
	
	* The control-plane node download-only-079702 host does not exist
	  To start a cluster, run: "minikube start -p download-only-079702"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.20.0/LogsDuration (0.06s)
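
The non-zero exit above is the expected behavior: "minikube logs" returns status 85 because a download-only profile never creates the control-plane host. A quick sketch of the same check, using the profile name from this run:

    minikube logs -p download-only-079702
    echo "exit: $?"    # expected 85: "The control-plane node ... host does not exist"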

                                                
                                    
TestDownloadOnly/v1.20.0/DeleteAll (0.13s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.20.0/DeleteAll (0.13s)

                                                
                                    
TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.12s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-079702
--- PASS: TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.12s)

                                                
                                    
TestDownloadOnly/v1.30.3/json-events (20.15s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.3/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-893685 --force --alsologtostderr --kubernetes-version=v1.30.3 --container-runtime=crio --driver=kvm2  --container-runtime=crio
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-893685 --force --alsologtostderr --kubernetes-version=v1.30.3 --container-runtime=crio --driver=kvm2  --container-runtime=crio: (20.149379469s)
--- PASS: TestDownloadOnly/v1.30.3/json-events (20.15s)

                                                
                                    
TestDownloadOnly/v1.30.3/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.3/preload-exists
--- PASS: TestDownloadOnly/v1.30.3/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.30.3/LogsDuration (0.06s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.3/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-893685
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-893685: exit status 85 (58.056301ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-079702 | jenkins | v1.33.1 | 31 Jul 24 16:39 UTC |                     |
	|         | -p download-only-079702        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|         | --driver=kvm2                  |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	| delete  | --all                          | minikube             | jenkins | v1.33.1 | 31 Jul 24 16:40 UTC | 31 Jul 24 16:40 UTC |
	| delete  | -p download-only-079702        | download-only-079702 | jenkins | v1.33.1 | 31 Jul 24 16:40 UTC | 31 Jul 24 16:40 UTC |
	| start   | -o=json --download-only        | download-only-893685 | jenkins | v1.33.1 | 31 Jul 24 16:40 UTC |                     |
	|         | -p download-only-893685        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3   |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|         | --driver=kvm2                  |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/31 16:40:28
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0731 16:40:28.553103   15544 out.go:291] Setting OutFile to fd 1 ...
	I0731 16:40:28.553451   15544 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 16:40:28.553467   15544 out.go:304] Setting ErrFile to fd 2...
	I0731 16:40:28.553474   15544 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 16:40:28.553856   15544 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19349-8084/.minikube/bin
	I0731 16:40:28.554414   15544 out.go:298] Setting JSON to true
	I0731 16:40:28.555306   15544 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":1373,"bootTime":1722442656,"procs":172,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1065-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0731 16:40:28.555370   15544 start.go:139] virtualization: kvm guest
	I0731 16:40:28.557363   15544 out.go:97] [download-only-893685] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0731 16:40:28.557463   15544 notify.go:220] Checking for updates...
	I0731 16:40:28.558730   15544 out.go:169] MINIKUBE_LOCATION=19349
	I0731 16:40:28.560169   15544 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0731 16:40:28.561590   15544 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/19349-8084/kubeconfig
	I0731 16:40:28.562807   15544 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/19349-8084/.minikube
	I0731 16:40:28.564159   15544 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W0731 16:40:28.566233   15544 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0731 16:40:28.566457   15544 driver.go:392] Setting default libvirt URI to qemu:///system
	I0731 16:40:28.597905   15544 out.go:97] Using the kvm2 driver based on user configuration
	I0731 16:40:28.597928   15544 start.go:297] selected driver: kvm2
	I0731 16:40:28.597934   15544 start.go:901] validating driver "kvm2" against <nil>
	I0731 16:40:28.598241   15544 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0731 16:40:28.598339   15544 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19349-8084/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0731 16:40:28.612968   15544 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0731 16:40:28.613012   15544 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0731 16:40:28.613484   15544 start_flags.go:393] Using suggested 6000MB memory alloc based on sys=32089MB, container=0MB
	I0731 16:40:28.613648   15544 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0731 16:40:28.613711   15544 cni.go:84] Creating CNI manager for ""
	I0731 16:40:28.613727   15544 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0731 16:40:28.613739   15544 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0731 16:40:28.613808   15544 start.go:340] cluster config:
	{Name:download-only-893685 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:6000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:download-only-893685 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0731 16:40:28.613920   15544 iso.go:125] acquiring lock: {Name:mk4494bb5a63902bf12a909c3109db85229bc516 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0731 16:40:28.615717   15544 out.go:97] Starting "download-only-893685" primary control-plane node in "download-only-893685" cluster
	I0731 16:40:28.615739   15544 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0731 16:40:28.755588   15544 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.30.3/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4
	I0731 16:40:28.755618   15544 cache.go:56] Caching tarball of preloaded images
	I0731 16:40:28.755769   15544 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0731 16:40:28.757789   15544 out.go:97] Downloading Kubernetes v1.30.3 preload ...
	I0731 16:40:28.757805   15544 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 ...
	I0731 16:40:28.856467   15544 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.30.3/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4?checksum=md5:15191286f02471d9b3ea0b587fcafc39 -> /home/jenkins/minikube-integration/19349-8084/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4
	I0731 16:40:47.070039   15544 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 ...
	I0731 16:40:47.070132   15544 preload.go:254] verifying checksum of /home/jenkins/minikube-integration/19349-8084/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 ...
	I0731 16:40:47.840743   15544 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on crio
	I0731 16:40:47.841070   15544 profile.go:143] Saving config to /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/download-only-893685/config.json ...
	I0731 16:40:47.841098   15544 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/download-only-893685/config.json: {Name:mk041d81b20780e90158415493c9dd48c6ab9cca Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 16:40:47.841270   15544 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0731 16:40:47.841441   15544 download.go:107] Downloading: https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubectl.sha256 -> /home/jenkins/minikube-integration/19349-8084/.minikube/cache/linux/amd64/v1.30.3/kubectl
	
	
	* The control-plane node download-only-893685 host does not exist
	  To start a cluster, run: "minikube start -p download-only-893685"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.30.3/LogsDuration (0.06s)
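
The preload download URL in the log above carries an md5 (checksum=md5:15191286f02471d9b3ea0b587fcafc39). A minimal sketch of re-checking the cached tarball against that value, assuming the default cache location under ~/.minikube:

    cd ~/.minikube/cache/preloaded-tarball
    echo "15191286f02471d9b3ea0b587fcafc39  preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4" | md5sum -c -
    # prints "... OK" when the cached preload matches the published checksum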

                                                
                                    
TestDownloadOnly/v1.30.3/DeleteAll (0.14s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.3/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.30.3/DeleteAll (0.14s)

                                                
                                    
TestDownloadOnly/v1.30.3/DeleteAlwaysSucceeds (0.13s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.3/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-893685
--- PASS: TestDownloadOnly/v1.30.3/DeleteAlwaysSucceeds (0.13s)

                                                
                                    
TestDownloadOnly/v1.31.0-beta.0/json-events (48.19s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.0-beta.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-133798 --force --alsologtostderr --kubernetes-version=v1.31.0-beta.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-133798 --force --alsologtostderr --kubernetes-version=v1.31.0-beta.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio: (48.18865463s)
--- PASS: TestDownloadOnly/v1.31.0-beta.0/json-events (48.19s)

                                                
                                    
TestDownloadOnly/v1.31.0-beta.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.0-beta.0/preload-exists
--- PASS: TestDownloadOnly/v1.31.0-beta.0/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.31.0-beta.0/LogsDuration (0.05s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.0-beta.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-133798
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-133798: exit status 85 (54.031079ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|-------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                Args                 |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|-------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only             | download-only-079702 | jenkins | v1.33.1 | 31 Jul 24 16:39 UTC |                     |
	|         | -p download-only-079702             |                      |         |         |                     |                     |
	|         | --force --alsologtostderr           |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0        |                      |         |         |                     |                     |
	|         | --container-runtime=crio            |                      |         |         |                     |                     |
	|         | --driver=kvm2                       |                      |         |         |                     |                     |
	|         | --container-runtime=crio            |                      |         |         |                     |                     |
	| delete  | --all                               | minikube             | jenkins | v1.33.1 | 31 Jul 24 16:40 UTC | 31 Jul 24 16:40 UTC |
	| delete  | -p download-only-079702             | download-only-079702 | jenkins | v1.33.1 | 31 Jul 24 16:40 UTC | 31 Jul 24 16:40 UTC |
	| start   | -o=json --download-only             | download-only-893685 | jenkins | v1.33.1 | 31 Jul 24 16:40 UTC |                     |
	|         | -p download-only-893685             |                      |         |         |                     |                     |
	|         | --force --alsologtostderr           |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3        |                      |         |         |                     |                     |
	|         | --container-runtime=crio            |                      |         |         |                     |                     |
	|         | --driver=kvm2                       |                      |         |         |                     |                     |
	|         | --container-runtime=crio            |                      |         |         |                     |                     |
	| delete  | --all                               | minikube             | jenkins | v1.33.1 | 31 Jul 24 16:40 UTC | 31 Jul 24 16:40 UTC |
	| delete  | -p download-only-893685             | download-only-893685 | jenkins | v1.33.1 | 31 Jul 24 16:40 UTC | 31 Jul 24 16:40 UTC |
	| start   | -o=json --download-only             | download-only-133798 | jenkins | v1.33.1 | 31 Jul 24 16:40 UTC |                     |
	|         | -p download-only-133798             |                      |         |         |                     |                     |
	|         | --force --alsologtostderr           |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0-beta.0 |                      |         |         |                     |                     |
	|         | --container-runtime=crio            |                      |         |         |                     |                     |
	|         | --driver=kvm2                       |                      |         |         |                     |                     |
	|         | --container-runtime=crio            |                      |         |         |                     |                     |
	|---------|-------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/31 16:40:49
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0731 16:40:49.026293   15781 out.go:291] Setting OutFile to fd 1 ...
	I0731 16:40:49.026550   15781 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 16:40:49.026560   15781 out.go:304] Setting ErrFile to fd 2...
	I0731 16:40:49.026566   15781 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 16:40:49.026758   15781 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19349-8084/.minikube/bin
	I0731 16:40:49.027351   15781 out.go:298] Setting JSON to true
	I0731 16:40:49.028175   15781 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":1393,"bootTime":1722442656,"procs":172,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1065-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0731 16:40:49.028232   15781 start.go:139] virtualization: kvm guest
	I0731 16:40:49.030378   15781 out.go:97] [download-only-133798] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0731 16:40:49.030525   15781 notify.go:220] Checking for updates...
	I0731 16:40:49.031990   15781 out.go:169] MINIKUBE_LOCATION=19349
	I0731 16:40:49.033616   15781 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0731 16:40:49.035307   15781 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/19349-8084/kubeconfig
	I0731 16:40:49.036841   15781 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/19349-8084/.minikube
	I0731 16:40:49.038271   15781 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W0731 16:40:49.040878   15781 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0731 16:40:49.041094   15781 driver.go:392] Setting default libvirt URI to qemu:///system
	I0731 16:40:49.072616   15781 out.go:97] Using the kvm2 driver based on user configuration
	I0731 16:40:49.072641   15781 start.go:297] selected driver: kvm2
	I0731 16:40:49.072648   15781 start.go:901] validating driver "kvm2" against <nil>
	I0731 16:40:49.072988   15781 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0731 16:40:49.073079   15781 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19349-8084/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0731 16:40:49.087449   15781 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0731 16:40:49.087501   15781 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0731 16:40:49.088072   15781 start_flags.go:393] Using suggested 6000MB memory alloc based on sys=32089MB, container=0MB
	I0731 16:40:49.088233   15781 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0731 16:40:49.088260   15781 cni.go:84] Creating CNI manager for ""
	I0731 16:40:49.088271   15781 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0731 16:40:49.088284   15781 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0731 16:40:49.088350   15781 start.go:340] cluster config:
	{Name:download-only-133798 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:6000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0-beta.0 ClusterName:download-only-133798 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0731 16:40:49.088462   15781 iso.go:125] acquiring lock: {Name:mk4494bb5a63902bf12a909c3109db85229bc516 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0731 16:40:49.090266   15781 out.go:97] Starting "download-only-133798" primary control-plane node in "download-only-133798" cluster
	I0731 16:40:49.090280   15781 preload.go:131] Checking if preload exists for k8s version v1.31.0-beta.0 and runtime crio
	I0731 16:40:49.593682   15781 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.0-beta.0/preloaded-images-k8s-v18-v1.31.0-beta.0-cri-o-overlay-amd64.tar.lz4
	I0731 16:40:49.593720   15781 cache.go:56] Caching tarball of preloaded images
	I0731 16:40:49.593900   15781 preload.go:131] Checking if preload exists for k8s version v1.31.0-beta.0 and runtime crio
	I0731 16:40:49.595874   15781 out.go:97] Downloading Kubernetes v1.31.0-beta.0 preload ...
	I0731 16:40:49.595909   15781 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.31.0-beta.0-cri-o-overlay-amd64.tar.lz4 ...
	I0731 16:40:49.700483   15781 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.0-beta.0/preloaded-images-k8s-v18-v1.31.0-beta.0-cri-o-overlay-amd64.tar.lz4?checksum=md5:3743f5ddb63994a661f14e5a8d3af98c -> /home/jenkins/minikube-integration/19349-8084/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-beta.0-cri-o-overlay-amd64.tar.lz4
	I0731 16:41:01.689189   15781 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.31.0-beta.0-cri-o-overlay-amd64.tar.lz4 ...
	I0731 16:41:01.689290   15781 preload.go:254] verifying checksum of /home/jenkins/minikube-integration/19349-8084/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-beta.0-cri-o-overlay-amd64.tar.lz4 ...
	I0731 16:41:02.429263   15781 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0-beta.0 on crio
	I0731 16:41:02.429577   15781 profile.go:143] Saving config to /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/download-only-133798/config.json ...
	I0731 16:41:02.429603   15781 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/download-only-133798/config.json: {Name:mk2ab9f5632608e40d1500b1562abbaad73fc573 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 16:41:02.429755   15781 preload.go:131] Checking if preload exists for k8s version v1.31.0-beta.0 and runtime crio
	I0731 16:41:02.429880   15781 download.go:107] Downloading: https://dl.k8s.io/release/v1.31.0-beta.0/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.0-beta.0/bin/linux/amd64/kubectl.sha256 -> /home/jenkins/minikube-integration/19349-8084/.minikube/cache/linux/amd64/v1.31.0-beta.0/kubectl
	
	
	* The control-plane node download-only-133798 host does not exist
	  To start a cluster, run: "minikube start -p download-only-133798"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.31.0-beta.0/LogsDuration (0.05s)

                                                
                                    
TestDownloadOnly/v1.31.0-beta.0/DeleteAll (0.13s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.0-beta.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.31.0-beta.0/DeleteAll (0.13s)

                                                
                                    
TestDownloadOnly/v1.31.0-beta.0/DeleteAlwaysSucceeds (0.12s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.0-beta.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-133798
--- PASS: TestDownloadOnly/v1.31.0-beta.0/DeleteAlwaysSucceeds (0.12s)

                                                
                                    
TestBinaryMirror (0.55s)

                                                
                                                
=== RUN   TestBinaryMirror
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p binary-mirror-720978 --alsologtostderr --binary-mirror http://127.0.0.1:43559 --driver=kvm2  --container-runtime=crio
helpers_test.go:175: Cleaning up "binary-mirror-720978" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p binary-mirror-720978
--- PASS: TestBinaryMirror (0.55s)
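
The --binary-mirror flag exercised here points the kubectl/kubelet/kubeadm downloads at a local HTTP endpoint. A rough sketch of standing one up by hand, assuming a directory already laid out with the binaries minikube expects to fetch (the mirror path and profile name are placeholders):

    # serve the pre-populated mirror directory on the port used in the test
    cd /path/to/mirror && python3 -m http.server 43559 &
    minikube start --download-only -p binary-mirror-demo \
      --binary-mirror http://127.0.0.1:43559 --driver=kvm2 --container-runtime=crio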

                                                
                                    
TestOffline (98.82s)

                                                
                                                
=== RUN   TestOffline
=== PAUSE TestOffline

                                                
                                                

                                                
                                                
=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-amd64 start -p offline-crio-195412 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=kvm2  --container-runtime=crio
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-amd64 start -p offline-crio-195412 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=kvm2  --container-runtime=crio: (1m37.9828943s)
helpers_test.go:175: Cleaning up "offline-crio-195412" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p offline-crio-195412
--- PASS: TestOffline (98.82s)

                                                
                                    
TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.05s)

                                                
                                                
=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:1037: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-190022
addons_test.go:1037: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-190022: exit status 85 (52.558211ms)

                                                
                                                
-- stdout --
	* Profile "addons-190022" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-190022"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.05s)
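
Exit status 85 is the expected result while the profile has not been created yet; the same check can be reproduced directly with any not-yet-started profile name:

    minikube addons enable dashboard -p addons-190022
    echo "exit: $?"    # expected 85 until "minikube start -p addons-190022" has run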

                                                
                                    
TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.05s)

                                                
                                                
=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:1048: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-190022
addons_test.go:1048: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-190022: exit status 85 (51.673111ms)

                                                
                                                
-- stdout --
	* Profile "addons-190022" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-190022"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.05s)

                                                
                                    
TestAddons/Setup (146.56s)

                                                
                                                
=== RUN   TestAddons/Setup
addons_test.go:110: (dbg) Run:  out/minikube-linux-amd64 start -p addons-190022 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=kvm2  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=helm-tiller
addons_test.go:110: (dbg) Done: out/minikube-linux-amd64 start -p addons-190022 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=kvm2  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=helm-tiller: (2m26.557350595s)
--- PASS: TestAddons/Setup (146.56s)
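
After a setup run like this, a quick way to confirm which addons registered is to list them per profile and look at the workloads they created; a small sketch using the profile from this run:

    minikube addons list -p addons-190022
    kubectl --context addons-190022 get pods -A    # addon workloads appear in their own namespaces (ingress-nginx, gadget, ...)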

                                                
                                    
TestAddons/serial/GCPAuth/Namespaces (0.14s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:656: (dbg) Run:  kubectl --context addons-190022 create ns new-namespace
addons_test.go:670: (dbg) Run:  kubectl --context addons-190022 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.14s)
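
The check above asserts that the gcp-auth secret is copied into a freshly created namespace; a one-liner sketch of the same assertion:

    kubectl --context addons-190022 -n new-namespace get secret gcp-auth -o jsonpath='{.metadata.name}'    # prints "gcp-auth" if propagation worked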

                                                
                                    
TestAddons/parallel/Registry (16.83s)

                                                
                                                
=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Registry
addons_test.go:332: registry stabilized in 2.880629ms
addons_test.go:334: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-698f998955-xbtsh" [0beecbd0-f912-410d-b71c-b5c7bb05b1a6] Running
addons_test.go:334: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 6.004470676s
addons_test.go:337: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-f7tqb" [896d8e3c-67c0-4b9c-bab5-43c46ee24394] Running
addons_test.go:337: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.004314354s
addons_test.go:342: (dbg) Run:  kubectl --context addons-190022 delete po -l run=registry-test --now
addons_test.go:347: (dbg) Run:  kubectl --context addons-190022 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:347: (dbg) Done: kubectl --context addons-190022 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (4.975528659s)
addons_test.go:361: (dbg) Run:  out/minikube-linux-amd64 -p addons-190022 ip
2024/07/31 16:44:46 [DEBUG] GET http://192.168.39.140:5000
addons_test.go:390: (dbg) Run:  out/minikube-linux-amd64 -p addons-190022 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (16.83s)
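
The DEBUG GET above probes the registry through the node IP on port 5000. A minimal sketch of the same probe from the host, assuming the addon exposes the standard Docker registry HTTP API there:

    REGISTRY_IP=$(minikube -p addons-190022 ip)
    curl -s "http://${REGISTRY_IP}:5000/v2/_catalog"    # lists repositories held by the addon registry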

                                                
                                    
TestAddons/parallel/InspektorGadget (11.91s)

                                                
                                                
=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:848: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-f5ddz" [fbab294a-fdad-4367-a2da-0c96a44b7701] Running / Ready:ContainersNotReady (containers with unready status: [gadget]) / ContainersReady:ContainersNotReady (containers with unready status: [gadget])
addons_test.go:848: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 6.003788031s
addons_test.go:851: (dbg) Run:  out/minikube-linux-amd64 addons disable inspektor-gadget -p addons-190022
addons_test.go:851: (dbg) Done: out/minikube-linux-amd64 addons disable inspektor-gadget -p addons-190022: (5.906446702s)
--- PASS: TestAddons/parallel/InspektorGadget (11.91s)
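
The readiness wait on the k8s-app=gadget pods can also be expressed with kubectl directly; a sketch using the namespace and label from the test:

    kubectl --context addons-190022 -n gadget wait pod \
      -l k8s-app=gadget --for=condition=Ready --timeout=8m0s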

                                                
                                    
TestAddons/parallel/HelmTiller (10.56s)

                                                
                                                
=== RUN   TestAddons/parallel/HelmTiller
=== PAUSE TestAddons/parallel/HelmTiller

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:458: tiller-deploy stabilized in 3.510014ms
addons_test.go:460: (dbg) TestAddons/parallel/HelmTiller: waiting 6m0s for pods matching "app=helm" in namespace "kube-system" ...
helpers_test.go:344: "tiller-deploy-6677d64bcd-jbrvp" [acce776b-f280-4d5c-85be-c197f74e1f0d] Running
addons_test.go:460: (dbg) TestAddons/parallel/HelmTiller: app=helm healthy within 5.0042079s
addons_test.go:475: (dbg) Run:  kubectl --context addons-190022 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version
addons_test.go:475: (dbg) Done: kubectl --context addons-190022 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version: (4.971118454s)
addons_test.go:492: (dbg) Run:  out/minikube-linux-amd64 -p addons-190022 addons disable helm-tiller --alsologtostderr -v=1
--- PASS: TestAddons/parallel/HelmTiller (10.56s)

                                                
                                    
TestAddons/parallel/CSI (54.08s)

=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

=== CONT  TestAddons/parallel/CSI
addons_test.go:567: csi-hostpath-driver pods stabilized in 13.409497ms
addons_test.go:570: (dbg) Run:  kubectl --context addons-190022 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:575: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-190022 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-190022 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-190022 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-190022 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-190022 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-190022 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-190022 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-190022 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-190022 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-190022 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-190022 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-190022 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-190022 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-190022 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-190022 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:580: (dbg) Run:  kubectl --context addons-190022 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:585: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [b469bb15-4798-4589-8d43-3c7f6be1cee3] Pending
helpers_test.go:344: "task-pv-pod" [b469bb15-4798-4589-8d43-3c7f6be1cee3] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [b469bb15-4798-4589-8d43-3c7f6be1cee3] Running
addons_test.go:585: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 16.004246836s
addons_test.go:590: (dbg) Run:  kubectl --context addons-190022 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:595: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-190022 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:427: TestAddons/parallel/CSI: WARNING: volume snapshot get for "default" "new-snapshot-demo" returned: 
helpers_test.go:419: (dbg) Run:  kubectl --context addons-190022 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:600: (dbg) Run:  kubectl --context addons-190022 delete pod task-pv-pod
addons_test.go:606: (dbg) Run:  kubectl --context addons-190022 delete pvc hpvc
addons_test.go:612: (dbg) Run:  kubectl --context addons-190022 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:617: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-190022 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-190022 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-190022 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-190022 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-190022 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-190022 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:622: (dbg) Run:  kubectl --context addons-190022 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:627: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [b71bdb51-1e7b-4e0d-953d-85dd2af82edf] Pending
helpers_test.go:344: "task-pv-pod-restore" [b71bdb51-1e7b-4e0d-953d-85dd2af82edf] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [b71bdb51-1e7b-4e0d-953d-85dd2af82edf] Running
addons_test.go:627: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 8.003831202s
addons_test.go:632: (dbg) Run:  kubectl --context addons-190022 delete pod task-pv-pod-restore
addons_test.go:636: (dbg) Run:  kubectl --context addons-190022 delete pvc hpvc-restore
addons_test.go:640: (dbg) Run:  kubectl --context addons-190022 delete volumesnapshot new-snapshot-demo
addons_test.go:644: (dbg) Run:  out/minikube-linux-amd64 -p addons-190022 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:644: (dbg) Done: out/minikube-linux-amd64 -p addons-190022 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.669608623s)
addons_test.go:648: (dbg) Run:  out/minikube-linux-amd64 -p addons-190022 addons disable volumesnapshots --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CSI (54.08s)
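
The repeated "get pvc hpvc" lines above are the helper polling the claim's phase until the hostpath provisioner binds it. A minimal standalone sketch of that wait, assuming the same context and claim name:

# poll the PVC phase until it reports Bound (what helpers_test.go:394 does in a loop)
while [ "$(kubectl --context addons-190022 get pvc hpvc -n default -o jsonpath='{.status.phase}')" != "Bound" ]; do
  sleep 2
done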

                                                
                                    
TestAddons/parallel/Headlamp (17.7s)

=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

=== CONT  TestAddons/parallel/Headlamp
addons_test.go:830: (dbg) Run:  out/minikube-linux-amd64 addons enable headlamp -p addons-190022 --alsologtostderr -v=1
addons_test.go:835: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-9d868696f-kxkw6" [e4229d94-83a5-432c-9211-0075ceeaed2e] Pending
helpers_test.go:344: "headlamp-9d868696f-kxkw6" [e4229d94-83a5-432c-9211-0075ceeaed2e] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-9d868696f-kxkw6" [e4229d94-83a5-432c-9211-0075ceeaed2e] Running
addons_test.go:835: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 11.003957898s
addons_test.go:839: (dbg) Run:  out/minikube-linux-amd64 -p addons-190022 addons disable headlamp --alsologtostderr -v=1
addons_test.go:839: (dbg) Done: out/minikube-linux-amd64 -p addons-190022 addons disable headlamp --alsologtostderr -v=1: (5.757775782s)
--- PASS: TestAddons/parallel/Headlamp (17.70s)

                                                
                                    
TestAddons/parallel/CloudSpanner (5.65s)

=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:867: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-5455fb9b69-ghz5q" [67539104-c1b5-4dd9-96c0-8f2a9c629a9f] Running
addons_test.go:867: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.004733325s
addons_test.go:870: (dbg) Run:  out/minikube-linux-amd64 addons disable cloud-spanner -p addons-190022
--- PASS: TestAddons/parallel/CloudSpanner (5.65s)

                                                
                                    
TestAddons/parallel/LocalPath (17.08s)

=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

=== CONT  TestAddons/parallel/LocalPath
addons_test.go:982: (dbg) Run:  kubectl --context addons-190022 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:988: (dbg) Run:  kubectl --context addons-190022 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:992: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-190022 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-190022 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-190022 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-190022 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-190022 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-190022 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-190022 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-190022 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-190022 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:995: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [f67074a8-6a41-4225-ab53-58de8a7f2b55] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [f67074a8-6a41-4225-ab53-58de8a7f2b55] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [f67074a8-6a41-4225-ab53-58de8a7f2b55] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:995: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 8.003440961s
addons_test.go:1000: (dbg) Run:  kubectl --context addons-190022 get pvc test-pvc -o=json
addons_test.go:1009: (dbg) Run:  out/minikube-linux-amd64 -p addons-190022 ssh "cat /opt/local-path-provisioner/pvc-f9db1751-d5be-4a5c-a915-8af812dc20b1_default_test-pvc/file1"
addons_test.go:1021: (dbg) Run:  kubectl --context addons-190022 delete pod test-local-path
addons_test.go:1025: (dbg) Run:  kubectl --context addons-190022 delete pvc test-pvc
addons_test.go:1029: (dbg) Run:  out/minikube-linux-amd64 -p addons-190022 addons disable storage-provisioner-rancher --alsologtostderr -v=1
--- PASS: TestAddons/parallel/LocalPath (17.08s)
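
The ssh/cat step above reads the file the busybox pod wrote into the local-path volume. A sketch of the same check without the hard-coded volume ID, assuming the provisioner's data directory follows the <pv-name>_<namespace>_<pvc-name> layout visible in the path above:

# resolve the bound PV name from the claim, then read the file it contains
PV=$(kubectl --context addons-190022 get pvc test-pvc -n default -o jsonpath='{.spec.volumeName}')
out/minikube-linux-amd64 -p addons-190022 ssh "cat /opt/local-path-provisioner/${PV}_default_test-pvc/file1"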

                                                
                                    
TestAddons/parallel/NvidiaDevicePlugin (6.78s)

=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:1061: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-zcd67" [f8e78301-23c4-432b-bd96-644d7c9b034e] Running
addons_test.go:1061: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 6.004607455s
addons_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 addons disable nvidia-device-plugin -p addons-190022
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (6.78s)

                                                
                                    
TestAddons/parallel/Yakd (11.95s)

=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

=== CONT  TestAddons/parallel/Yakd
addons_test.go:1072: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:344: "yakd-dashboard-799879c74f-c5np7" [2f7d7c3f-d44f-4fe2-b3df-059fbd31e7f6] Running
addons_test.go:1072: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 6.007371698s
addons_test.go:1076: (dbg) Run:  out/minikube-linux-amd64 -p addons-190022 addons disable yakd --alsologtostderr -v=1
addons_test.go:1076: (dbg) Done: out/minikube-linux-amd64 -p addons-190022 addons disable yakd --alsologtostderr -v=1: (5.94112689s)
--- PASS: TestAddons/parallel/Yakd (11.95s)

                                                
                                    
TestCertOptions (77s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions

=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-amd64 start -p cert-options-241744 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=crio
E0731 17:49:05.345933   15259 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/addons-190022/client.crt: no such file or directory
cert_options_test.go:49: (dbg) Done: out/minikube-linux-amd64 start -p cert-options-241744 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=crio: (1m15.764324521s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-amd64 -p cert-options-241744 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-241744 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-amd64 ssh -p cert-options-241744 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-241744" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-options-241744
--- PASS: TestCertOptions (77.00s)
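
The extra SANs and the custom API-server port asserted by this test can be inspected directly while the profile exists. A sketch assuming the same profile name; the grep and the jsonpath expression are illustrative additions, not part of the test:

# show the Subject Alternative Names baked into the apiserver certificate
out/minikube-linux-amd64 -p cert-options-241744 ssh \
  "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt" \
  | grep -A1 'Subject Alternative Name'

# confirm the kubeconfig points at the non-default apiserver port (8555)
kubectl --context cert-options-241744 config view --minify -o jsonpath='{.clusters[0].cluster.server}'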

                                                
                                    
TestCertExpiration (265.95s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-761578 --memory=2048 --cert-expiration=3m --driver=kvm2  --container-runtime=crio
cert_options_test.go:123: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-761578 --memory=2048 --cert-expiration=3m --driver=kvm2  --container-runtime=crio: (52.271060569s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-761578 --memory=2048 --cert-expiration=8760h --driver=kvm2  --container-runtime=crio
E0731 17:52:57.004720   15259 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/functional-496242/client.crt: no such file or directory
cert_options_test.go:131: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-761578 --memory=2048 --cert-expiration=8760h --driver=kvm2  --container-runtime=crio: (32.622555695s)
helpers_test.go:175: Cleaning up "cert-expiration-761578" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-expiration-761578
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-expiration-761578: (1.051864997s)
--- PASS: TestCertExpiration (265.95s)

                                                
                                    
TestForceSystemdFlag (83.97s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-flag-428032 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
E0731 17:47:57.005649   15259 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/functional-496242/client.crt: no such file or directory
docker_test.go:91: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-flag-428032 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (1m22.907380383s)
docker_test.go:132: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-flag-428032 ssh "cat /etc/crio/crio.conf.d/02-crio.conf"
helpers_test.go:175: Cleaning up "force-systemd-flag-428032" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-flag-428032
--- PASS: TestForceSystemdFlag (83.97s)

                                                
                                    
TestForceSystemdEnv (48.4s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-env-082965 --memory=2048 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
docker_test.go:155: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-env-082965 --memory=2048 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (47.573389207s)
helpers_test.go:175: Cleaning up "force-systemd-env-082965" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-env-082965
--- PASS: TestForceSystemdEnv (48.40s)

                                                
                                    
TestKVMDriverInstallOrUpdate (4.26s)

=== RUN   TestKVMDriverInstallOrUpdate
=== PAUSE TestKVMDriverInstallOrUpdate

=== CONT  TestKVMDriverInstallOrUpdate
--- PASS: TestKVMDriverInstallOrUpdate (4.26s)

                                                
                                    
TestErrorSpam/setup (41.43s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-688034 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-688034 --driver=kvm2  --container-runtime=crio
error_spam_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -p nospam-688034 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-688034 --driver=kvm2  --container-runtime=crio: (41.432365673s)
--- PASS: TestErrorSpam/setup (41.43s)

                                                
                                    
TestErrorSpam/start (0.32s)

=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-688034 --log_dir /tmp/nospam-688034 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-688034 --log_dir /tmp/nospam-688034 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-688034 --log_dir /tmp/nospam-688034 start --dry-run
--- PASS: TestErrorSpam/start (0.32s)

                                                
                                    
TestErrorSpam/status (0.69s)

=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-688034 --log_dir /tmp/nospam-688034 status
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-688034 --log_dir /tmp/nospam-688034 status
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-688034 --log_dir /tmp/nospam-688034 status
--- PASS: TestErrorSpam/status (0.69s)

                                                
                                    
TestErrorSpam/pause (1.47s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-688034 --log_dir /tmp/nospam-688034 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-688034 --log_dir /tmp/nospam-688034 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-688034 --log_dir /tmp/nospam-688034 pause
--- PASS: TestErrorSpam/pause (1.47s)

                                                
                                    
TestErrorSpam/unpause (1.54s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-688034 --log_dir /tmp/nospam-688034 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-688034 --log_dir /tmp/nospam-688034 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-688034 --log_dir /tmp/nospam-688034 unpause
--- PASS: TestErrorSpam/unpause (1.54s)

                                                
                                    
TestErrorSpam/stop (4.92s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-688034 --log_dir /tmp/nospam-688034 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-amd64 -p nospam-688034 --log_dir /tmp/nospam-688034 stop: (1.564373201s)
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-688034 --log_dir /tmp/nospam-688034 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-amd64 -p nospam-688034 --log_dir /tmp/nospam-688034 stop: (1.773677886s)
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-688034 --log_dir /tmp/nospam-688034 stop
error_spam_test.go:182: (dbg) Done: out/minikube-linux-amd64 -p nospam-688034 --log_dir /tmp/nospam-688034 stop: (1.578049982s)
--- PASS: TestErrorSpam/stop (4.92s)

                                                
                                    
TestFunctional/serial/CopySyncFile (0s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1851: local sync path: /home/jenkins/minikube-integration/19349-8084/.minikube/files/etc/test/nested/copy/15259/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

                                                
                                    
TestFunctional/serial/StartWithProxy (93.88s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2230: (dbg) Run:  out/minikube-linux-amd64 start -p functional-496242 --memory=4000 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=crio
E0731 16:54:05.346873   15259 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/addons-190022/client.crt: no such file or directory
E0731 16:54:05.352543   15259 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/addons-190022/client.crt: no such file or directory
E0731 16:54:05.362773   15259 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/addons-190022/client.crt: no such file or directory
E0731 16:54:05.383044   15259 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/addons-190022/client.crt: no such file or directory
E0731 16:54:05.423321   15259 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/addons-190022/client.crt: no such file or directory
E0731 16:54:05.503627   15259 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/addons-190022/client.crt: no such file or directory
E0731 16:54:05.663989   15259 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/addons-190022/client.crt: no such file or directory
E0731 16:54:05.984698   15259 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/addons-190022/client.crt: no such file or directory
E0731 16:54:06.625698   15259 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/addons-190022/client.crt: no such file or directory
E0731 16:54:07.906192   15259 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/addons-190022/client.crt: no such file or directory
E0731 16:54:10.468032   15259 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/addons-190022/client.crt: no such file or directory
E0731 16:54:15.588398   15259 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/addons-190022/client.crt: no such file or directory
E0731 16:54:25.828802   15259 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/addons-190022/client.crt: no such file or directory
E0731 16:54:46.309869   15259 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/addons-190022/client.crt: no such file or directory
E0731 16:55:27.270833   15259 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/addons-190022/client.crt: no such file or directory
functional_test.go:2230: (dbg) Done: out/minikube-linux-amd64 start -p functional-496242 --memory=4000 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=crio: (1m33.882514269s)
--- PASS: TestFunctional/serial/StartWithProxy (93.88s)

                                                
                                    
TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

                                                
                                    
TestFunctional/serial/SoftStart (41.01s)

=== RUN   TestFunctional/serial/SoftStart
functional_test.go:655: (dbg) Run:  out/minikube-linux-amd64 start -p functional-496242 --alsologtostderr -v=8
functional_test.go:655: (dbg) Done: out/minikube-linux-amd64 start -p functional-496242 --alsologtostderr -v=8: (41.005623696s)
functional_test.go:659: soft start took 41.006303631s for "functional-496242" cluster.
--- PASS: TestFunctional/serial/SoftStart (41.01s)

                                                
                                    
TestFunctional/serial/KubeContext (0.04s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:677: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.04s)

                                                
                                    
TestFunctional/serial/KubectlGetPods (0.06s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:692: (dbg) Run:  kubectl --context functional-496242 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.06s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_remote (3.61s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1045: (dbg) Run:  out/minikube-linux-amd64 -p functional-496242 cache add registry.k8s.io/pause:3.1
functional_test.go:1045: (dbg) Done: out/minikube-linux-amd64 -p functional-496242 cache add registry.k8s.io/pause:3.1: (1.275503788s)
functional_test.go:1045: (dbg) Run:  out/minikube-linux-amd64 -p functional-496242 cache add registry.k8s.io/pause:3.3
functional_test.go:1045: (dbg) Done: out/minikube-linux-amd64 -p functional-496242 cache add registry.k8s.io/pause:3.3: (1.22393072s)
functional_test.go:1045: (dbg) Run:  out/minikube-linux-amd64 -p functional-496242 cache add registry.k8s.io/pause:latest
functional_test.go:1045: (dbg) Done: out/minikube-linux-amd64 -p functional-496242 cache add registry.k8s.io/pause:latest: (1.110989261s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (3.61s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_local (2.05s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1073: (dbg) Run:  docker build -t minikube-local-cache-test:functional-496242 /tmp/TestFunctionalserialCacheCmdcacheadd_local780975971/001
functional_test.go:1085: (dbg) Run:  out/minikube-linux-amd64 -p functional-496242 cache add minikube-local-cache-test:functional-496242
functional_test.go:1085: (dbg) Done: out/minikube-linux-amd64 -p functional-496242 cache add minikube-local-cache-test:functional-496242: (1.713313377s)
functional_test.go:1090: (dbg) Run:  out/minikube-linux-amd64 -p functional-496242 cache delete minikube-local-cache-test:functional-496242
functional_test.go:1079: (dbg) Run:  docker rmi minikube-local-cache-test:functional-496242
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (2.05s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/CacheDelete (0.04s)

=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1098: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.04s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/list (0.04s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1106: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.04s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.21s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1120: (dbg) Run:  out/minikube-linux-amd64 -p functional-496242 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.21s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/cache_reload (1.62s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1143: (dbg) Run:  out/minikube-linux-amd64 -p functional-496242 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Run:  out/minikube-linux-amd64 -p functional-496242 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-496242 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (197.469114ms)

                                                
                                                
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:1154: (dbg) Run:  out/minikube-linux-amd64 -p functional-496242 cache reload
functional_test.go:1159: (dbg) Run:  out/minikube-linux-amd64 -p functional-496242 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.62s)
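
The reload cycle above (remove the image on the node, confirm it is gone, repopulate it from the cache) can be run standalone. A sketch using the same commands as the log, assuming the functional-496242 profile:

out/minikube-linux-amd64 -p functional-496242 ssh sudo crictl rmi registry.k8s.io/pause:latest
out/minikube-linux-amd64 -p functional-496242 ssh sudo crictl inspecti registry.k8s.io/pause:latest || echo "image absent, as expected"
out/minikube-linux-amd64 -p functional-496242 cache reload
out/minikube-linux-amd64 -p functional-496242 ssh sudo crictl inspecti registry.k8s.io/pause:latest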

                                                
                                    
TestFunctional/serial/CacheCmd/cache/delete (0.09s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1168: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1168: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.09s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmd (0.1s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:712: (dbg) Run:  out/minikube-linux-amd64 -p functional-496242 kubectl -- --context functional-496242 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.10s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmdDirectly (0.1s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:737: (dbg) Run:  out/kubectl --context functional-496242 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.10s)

                                                
                                    
TestFunctional/serial/ExtraConfig (83.7s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:753: (dbg) Run:  out/minikube-linux-amd64 start -p functional-496242 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E0731 16:56:49.191164   15259 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/addons-190022/client.crt: no such file or directory
functional_test.go:753: (dbg) Done: out/minikube-linux-amd64 start -p functional-496242 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (1m23.699013124s)
functional_test.go:757: restart took 1m23.699133665s for "functional-496242" cluster.
--- PASS: TestFunctional/serial/ExtraConfig (83.70s)
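
The test only verifies that the restart with --extra-config converges; to confirm the admission-plugin flag actually reached the apiserver, one can grep the static-pod manifest inside the VM. A sketch assuming the standard kubeadm manifest path, which is an assumption and not something this test checks:

# look for the NamespaceAutoProvision plugin in the apiserver manifest (path assumed)
out/minikube-linux-amd64 -p functional-496242 ssh \
  "sudo grep enable-admission-plugins /etc/kubernetes/manifests/kube-apiserver.yaml"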

                                                
                                    
TestFunctional/serial/ComponentHealth (0.06s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:806: (dbg) Run:  kubectl --context functional-496242 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:821: etcd phase: Running
functional_test.go:831: etcd status: Ready
functional_test.go:821: kube-apiserver phase: Running
functional_test.go:831: kube-apiserver status: Ready
functional_test.go:821: kube-controller-manager phase: Running
functional_test.go:831: kube-controller-manager status: Ready
functional_test.go:821: kube-scheduler phase: Running
functional_test.go:831: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.06s)
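
The phase/status pairs logged above come from the tier=control-plane pods. A sketch printing the same summary directly, assuming the kubeadm "component" label is present on those static pods:

kubectl --context functional-496242 -n kube-system get po -l tier=control-plane \
  -o jsonpath='{range .items[*]}{.metadata.labels.component}{": "}{.status.phase}{"\n"}{end}'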

                                                
                                    
TestFunctional/serial/LogsCmd (1.32s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1232: (dbg) Run:  out/minikube-linux-amd64 -p functional-496242 logs
functional_test.go:1232: (dbg) Done: out/minikube-linux-amd64 -p functional-496242 logs: (1.314831712s)
--- PASS: TestFunctional/serial/LogsCmd (1.32s)

                                                
                                    
TestFunctional/serial/LogsFileCmd (1.42s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1246: (dbg) Run:  out/minikube-linux-amd64 -p functional-496242 logs --file /tmp/TestFunctionalserialLogsFileCmd608419914/001/logs.txt
functional_test.go:1246: (dbg) Done: out/minikube-linux-amd64 -p functional-496242 logs --file /tmp/TestFunctionalserialLogsFileCmd608419914/001/logs.txt: (1.419309536s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.42s)

                                                
                                    
TestFunctional/serial/InvalidService (3.95s)

=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2317: (dbg) Run:  kubectl --context functional-496242 apply -f testdata/invalidsvc.yaml
functional_test.go:2331: (dbg) Run:  out/minikube-linux-amd64 service invalid-svc -p functional-496242
functional_test.go:2331: (dbg) Non-zero exit: out/minikube-linux-amd64 service invalid-svc -p functional-496242: exit status 115 (256.632004ms)

                                                
                                                
-- stdout --
	|-----------|-------------|-------------|-----------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |             URL             |
	|-----------|-------------|-------------|-----------------------------|
	| default   | invalid-svc |          80 | http://192.168.39.165:31476 |
	|-----------|-------------|-------------|-----------------------------|
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:2323: (dbg) Run:  kubectl --context functional-496242 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (3.95s)
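
The negative case above can be reproduced by hand; exit status 115 is the SVC_UNREACHABLE code reported in the stderr block. A sketch assuming the same profile and the testdata/invalidsvc.yaml manifest from the test tree:

kubectl --context functional-496242 apply -f testdata/invalidsvc.yaml
out/minikube-linux-amd64 service invalid-svc -p functional-496242
echo "exit code: $?"   # 115 (SVC_UNREACHABLE) in the run above
kubectl --context functional-496242 delete -f testdata/invalidsvc.yaml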

                                                
                                    
TestFunctional/parallel/ConfigCmd (0.29s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-496242 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-496242 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-496242 config get cpus: exit status 14 (41.865935ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-496242 config set cpus 2
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-496242 config get cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-496242 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-496242 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-496242 config get cpus: exit status 14 (43.473538ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.29s)

                                                
                                    
TestFunctional/parallel/DashboardCmd (16.87s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:901: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-496242 --alsologtostderr -v=1]
functional_test.go:906: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-496242 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 25334: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (16.87s)

                                                
                                    
TestFunctional/parallel/DryRun (0.25s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

=== CONT  TestFunctional/parallel/DryRun
functional_test.go:970: (dbg) Run:  out/minikube-linux-amd64 start -p functional-496242 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio
functional_test.go:970: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-496242 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio: exit status 23 (122.788857ms)

                                                
                                                
-- stdout --
	* [functional-496242] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19349
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19349-8084/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19349-8084/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0731 16:58:21.864331   25112 out.go:291] Setting OutFile to fd 1 ...
	I0731 16:58:21.864439   25112 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 16:58:21.864448   25112 out.go:304] Setting ErrFile to fd 2...
	I0731 16:58:21.864452   25112 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 16:58:21.864638   25112 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19349-8084/.minikube/bin
	I0731 16:58:21.865106   25112 out.go:298] Setting JSON to false
	I0731 16:58:21.866029   25112 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":2446,"bootTime":1722442656,"procs":216,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1065-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0731 16:58:21.866087   25112 start.go:139] virtualization: kvm guest
	I0731 16:58:21.868720   25112 out.go:177] * [functional-496242] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0731 16:58:21.869838   25112 notify.go:220] Checking for updates...
	I0731 16:58:21.869840   25112 out.go:177]   - MINIKUBE_LOCATION=19349
	I0731 16:58:21.870963   25112 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0731 16:58:21.871940   25112 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19349-8084/kubeconfig
	I0731 16:58:21.872972   25112 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19349-8084/.minikube
	I0731 16:58:21.874133   25112 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0731 16:58:21.875297   25112 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0731 16:58:21.876710   25112 config.go:182] Loaded profile config "functional-496242": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0731 16:58:21.877064   25112 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 16:58:21.877107   25112 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 16:58:21.891475   25112 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40965
	I0731 16:58:21.891896   25112 main.go:141] libmachine: () Calling .GetVersion
	I0731 16:58:21.892408   25112 main.go:141] libmachine: Using API Version  1
	I0731 16:58:21.892428   25112 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 16:58:21.892719   25112 main.go:141] libmachine: () Calling .GetMachineName
	I0731 16:58:21.892876   25112 main.go:141] libmachine: (functional-496242) Calling .DriverName
	I0731 16:58:21.893104   25112 driver.go:392] Setting default libvirt URI to qemu:///system
	I0731 16:58:21.893359   25112 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 16:58:21.893390   25112 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 16:58:21.907835   25112 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38637
	I0731 16:58:21.908196   25112 main.go:141] libmachine: () Calling .GetVersion
	I0731 16:58:21.908642   25112 main.go:141] libmachine: Using API Version  1
	I0731 16:58:21.908663   25112 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 16:58:21.908993   25112 main.go:141] libmachine: () Calling .GetMachineName
	I0731 16:58:21.909151   25112 main.go:141] libmachine: (functional-496242) Calling .DriverName
	I0731 16:58:21.941888   25112 out.go:177] * Using the kvm2 driver based on existing profile
	I0731 16:58:21.943101   25112 start.go:297] selected driver: kvm2
	I0731 16:58:21.943131   25112 start.go:901] validating driver "kvm2" against &{Name:functional-496242 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.30.3 ClusterName:functional-496242 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.165 Port:8441 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mo
unt:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0731 16:58:21.943237   25112 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0731 16:58:21.945259   25112 out.go:177] 
	W0731 16:58:21.946322   25112 out.go:239] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0731 16:58:21.947524   25112 out.go:177] 

                                                
                                                
** /stderr **
functional_test.go:987: (dbg) Run:  out/minikube-linux-amd64 start -p functional-496242 --dry-run --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
--- PASS: TestFunctional/parallel/DryRun (0.25s)
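
The stderr above shows the memory floor check (250MB requested vs. the 1800MB usable minimum, RSRC_INSUFFICIENT_REQ_MEMORY). Because --dry-run validates without touching the VM, the same check can be exercised cheaply; a sketch against the same profile, where exit code 23 matches the run above:

out/minikube-linux-amd64 start -p functional-496242 --dry-run --memory 250MB \
  --alsologtostderr --driver=kvm2 --container-runtime=crio
echo "exit code: $?"   # 23 in the run above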

                                                
                                    
TestFunctional/parallel/InternationalLanguage (0.13s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1016: (dbg) Run:  out/minikube-linux-amd64 start -p functional-496242 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio
functional_test.go:1016: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-496242 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio: exit status 23 (130.593101ms)

                                                
                                                
-- stdout --
	* [functional-496242] minikube v1.33.1 sur Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19349
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19349-8084/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19349-8084/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote kvm2 basé sur le profil existant
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0731 16:58:22.113831   25169 out.go:291] Setting OutFile to fd 1 ...
	I0731 16:58:22.113935   25169 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 16:58:22.113946   25169 out.go:304] Setting ErrFile to fd 2...
	I0731 16:58:22.113952   25169 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 16:58:22.114216   25169 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19349-8084/.minikube/bin
	I0731 16:58:22.114754   25169 out.go:298] Setting JSON to false
	I0731 16:58:22.115727   25169 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":2446,"bootTime":1722442656,"procs":220,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1065-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0731 16:58:22.115781   25169 start.go:139] virtualization: kvm guest
	I0731 16:58:22.117872   25169 out.go:177] * [functional-496242] minikube v1.33.1 sur Ubuntu 20.04 (kvm/amd64)
	I0731 16:58:22.119170   25169 out.go:177]   - MINIKUBE_LOCATION=19349
	I0731 16:58:22.119189   25169 notify.go:220] Checking for updates...
	I0731 16:58:22.121500   25169 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0731 16:58:22.122828   25169 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19349-8084/kubeconfig
	I0731 16:58:22.124004   25169 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19349-8084/.minikube
	I0731 16:58:22.125257   25169 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0731 16:58:22.126467   25169 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0731 16:58:22.128156   25169 config.go:182] Loaded profile config "functional-496242": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0731 16:58:22.128749   25169 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 16:58:22.128824   25169 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 16:58:22.143554   25169 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37621
	I0731 16:58:22.143907   25169 main.go:141] libmachine: () Calling .GetVersion
	I0731 16:58:22.144380   25169 main.go:141] libmachine: Using API Version  1
	I0731 16:58:22.144399   25169 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 16:58:22.144694   25169 main.go:141] libmachine: () Calling .GetMachineName
	I0731 16:58:22.144862   25169 main.go:141] libmachine: (functional-496242) Calling .DriverName
	I0731 16:58:22.145067   25169 driver.go:392] Setting default libvirt URI to qemu:///system
	I0731 16:58:22.145335   25169 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 16:58:22.145392   25169 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 16:58:22.159278   25169 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44995
	I0731 16:58:22.159702   25169 main.go:141] libmachine: () Calling .GetVersion
	I0731 16:58:22.160199   25169 main.go:141] libmachine: Using API Version  1
	I0731 16:58:22.160220   25169 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 16:58:22.160483   25169 main.go:141] libmachine: () Calling .GetMachineName
	I0731 16:58:22.160662   25169 main.go:141] libmachine: (functional-496242) Calling .DriverName
	I0731 16:58:22.195263   25169 out.go:177] * Utilisation du pilote kvm2 basé sur le profil existant
	I0731 16:58:22.196640   25169 start.go:297] selected driver: kvm2
	I0731 16:58:22.196659   25169 start.go:901] validating driver "kvm2" against &{Name:functional-496242 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.30.3 ClusterName:functional-496242 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.165 Port:8441 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mo
unt:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0731 16:58:22.196792   25169 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0731 16:58:22.198819   25169 out.go:177] 
	W0731 16:58:22.200003   25169 out.go:239] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0731 16:58:22.201131   25169 out.go:177] 

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.13s)
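
The French stderr above is the same RSRC_INSUFFICIENT_REQ_MEMORY message as in the DryRun section (a 250MiB request below the 1800MB usable minimum), localized by the calling environment's locale rather than by a dedicated flag. A minimal sketch of triggering it by hand; LC_ALL=fr is an assumption about the mechanism, since the recorded log only shows the localized output itself:

	# Run the same dry-run under a French locale; the expectation is the
	# RSRC_INSUFFICIENT_REQ_MEMORY message in French, as captured above.
	LC_ALL=fr out/minikube-linux-amd64 start -p functional-496242 --dry-run --memory 250MB \
	  --alsologtostderr --driver=kvm2 --container-runtime=crio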

                                                
                                    
x
+
TestFunctional/parallel/StatusCmd (0.74s)

                                                
                                                
=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:850: (dbg) Run:  out/minikube-linux-amd64 -p functional-496242 status
functional_test.go:856: (dbg) Run:  out/minikube-linux-amd64 -p functional-496242 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:868: (dbg) Run:  out/minikube-linux-amd64 -p functional-496242 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (0.74s)
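
The -f flag exercised above takes a Go template over minikube's status fields, so a single field can be extracted on its own. A minimal sketch; the expected "Running" value is an assumption, not recorded output:

	# Print only the host state; field names match the host/kubelet/apiserver/kubeconfig
	# template used in the run above.
	out/minikube-linux-amd64 -p functional-496242 status -f '{{.Host}}'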

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmdConnect (9.49s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1625: (dbg) Run:  kubectl --context functional-496242 create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8
functional_test.go:1631: (dbg) Run:  kubectl --context functional-496242 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1636: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-57b4589c47-s85lq" [80d38eaf-8e8f-4f09-a8b7-c8025c80d9d5] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-connect-57b4589c47-s85lq" [80d38eaf-8e8f-4f09-a8b7-c8025c80d9d5] Running
functional_test.go:1636: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 9.003909549s
functional_test.go:1645: (dbg) Run:  out/minikube-linux-amd64 -p functional-496242 service hello-node-connect --url
functional_test.go:1651: found endpoint for hello-node-connect: http://192.168.39.165:31109
functional_test.go:1671: http://192.168.39.165:31109: success! body:

                                                
                                                

                                                
                                                
Hostname: hello-node-connect-57b4589c47-s85lq

                                                
                                                
Pod Information:
	-no pod information available-

                                                
                                                
Server values:
	server_version=nginx: 1.13.3 - lua: 10008

                                                
                                                
Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.168.39.165:8080/

                                                
                                                
Request Headers:
	accept-encoding=gzip
	host=192.168.39.165:31109
	user-agent=Go-http-client/1.1

                                                
                                                
Request Body:
	-no body in request-

                                                
                                                
--- PASS: TestFunctional/parallel/ServiceCmdConnect (9.49s)
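
The flow above (create a deployment, expose it as a NodePort service, resolve the URL, fetch it) can be reproduced by hand. A minimal sketch using the same profile and image; the final curl is illustrative, the recorded test fetches the URL from Go instead:

	# Deploy echoserver and expose it on a NodePort.
	kubectl --context functional-496242 create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8
	kubectl --context functional-496242 expose deployment hello-node-connect --type=NodePort --port=8080
	# Resolve node IP + NodePort (http://192.168.39.165:31109 in the run above), then fetch it.
	URL=$(out/minikube-linux-amd64 -p functional-496242 service hello-node-connect --url)
	curl -s "$URL"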

                                                
                                    
x
+
TestFunctional/parallel/AddonsCmd (0.12s)

                                                
                                                
=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1686: (dbg) Run:  out/minikube-linux-amd64 -p functional-496242 addons list
functional_test.go:1698: (dbg) Run:  out/minikube-linux-amd64 -p functional-496242 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.12s)

                                                
                                    
x
+
TestFunctional/parallel/PersistentVolumeClaim (45.79s)

                                                
                                                
=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [650be353-18d7-4fa4-a606-71d0c80c1ef3] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.003591129s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-496242 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-496242 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-496242 get pvc myclaim -o=json
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-496242 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-496242 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [1c11648b-dc39-4ea9-aed1-53dabe47266a] Pending
helpers_test.go:344: "sp-pod" [1c11648b-dc39-4ea9-aed1-53dabe47266a] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [1c11648b-dc39-4ea9-aed1-53dabe47266a] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 22.003766996s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-496242 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-496242 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-496242 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [380ea862-4d4e-4e7a-8b59-2a3d72425013] Pending
helpers_test.go:344: "sp-pod" [380ea862-4d4e-4e7a-8b59-2a3d72425013] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [380ea862-4d4e-4e7a-8b59-2a3d72425013] Running
2024/07/31 16:58:38 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 16.004520385s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-496242 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (45.79s)
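
The sequence above checks persistence across pod recreation: write a file through the claim, delete the pod, recreate it, and confirm the file is still there. A minimal sketch of that round-trip with the same testdata manifests; it omits the readiness waits the test performs between steps:

	# Create the claim and a pod that mounts it at /tmp/mount.
	kubectl --context functional-496242 apply -f testdata/storage-provisioner/pvc.yaml
	kubectl --context functional-496242 apply -f testdata/storage-provisioner/pod.yaml
	# Write through the volume, recycle the pod, then confirm the file survived.
	kubectl --context functional-496242 exec sp-pod -- touch /tmp/mount/foo
	kubectl --context functional-496242 delete -f testdata/storage-provisioner/pod.yaml
	kubectl --context functional-496242 apply -f testdata/storage-provisioner/pod.yaml
	kubectl --context functional-496242 exec sp-pod -- ls /tmp/mount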

                                                
                                    
x
+
TestFunctional/parallel/SSHCmd (0.41s)

                                                
                                                
=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1721: (dbg) Run:  out/minikube-linux-amd64 -p functional-496242 ssh "echo hello"
functional_test.go:1738: (dbg) Run:  out/minikube-linux-amd64 -p functional-496242 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.41s)

                                                
                                    
x
+
TestFunctional/parallel/CpCmd (1.19s)

                                                
                                                
=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-496242 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-496242 ssh -n functional-496242 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-496242 cp functional-496242:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd2438164770/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-496242 ssh -n functional-496242 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-496242 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-496242 ssh -n functional-496242 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.19s)

                                                
                                    
x
+
TestFunctional/parallel/MySQL (64.02s)

                                                
                                                
=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1789: (dbg) Run:  kubectl --context functional-496242 replace --force -f testdata/mysql.yaml
functional_test.go:1795: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:344: "mysql-64454c8b5c-k2d6k" [19f69734-8c03-4ffe-a398-ccd75468d08a] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:344: "mysql-64454c8b5c-k2d6k" [19f69734-8c03-4ffe-a398-ccd75468d08a] Running
functional_test.go:1795: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 59.003617741s
functional_test.go:1803: (dbg) Run:  kubectl --context functional-496242 exec mysql-64454c8b5c-k2d6k -- mysql -ppassword -e "show databases;"
functional_test.go:1803: (dbg) Non-zero exit: kubectl --context functional-496242 exec mysql-64454c8b5c-k2d6k -- mysql -ppassword -e "show databases;": exit status 1 (144.164427ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

                                                
                                                
** /stderr **
functional_test.go:1803: (dbg) Run:  kubectl --context functional-496242 exec mysql-64454c8b5c-k2d6k -- mysql -ppassword -e "show databases;"
functional_test.go:1803: (dbg) Non-zero exit: kubectl --context functional-496242 exec mysql-64454c8b5c-k2d6k -- mysql -ppassword -e "show databases;": exit status 1 (136.018804ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

                                                
                                                
** /stderr **
functional_test.go:1803: (dbg) Run:  kubectl --context functional-496242 exec mysql-64454c8b5c-k2d6k -- mysql -ppassword -e "show databases;"
functional_test.go:1803: (dbg) Non-zero exit: kubectl --context functional-496242 exec mysql-64454c8b5c-k2d6k -- mysql -ppassword -e "show databases;": exit status 1 (136.450874ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

                                                
                                                
** /stderr **
functional_test.go:1803: (dbg) Run:  kubectl --context functional-496242 exec mysql-64454c8b5c-k2d6k -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (64.02s)
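
The failed exec attempts above are expected while mysqld finishes initializing inside an already-Running pod; the test retries the query until it succeeds. A minimal sketch of that retry; the 5s interval is illustrative, not the test's actual backoff:

	# Retry "show databases;" until mysqld accepts the connection.
	until kubectl --context functional-496242 exec mysql-64454c8b5c-k2d6k -- mysql -ppassword -e "show databases;"; do
	  sleep 5  # illustrative interval
	done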

                                                
                                    
x
+
TestFunctional/parallel/FileSync (0.2s)

                                                
                                                
=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1925: Checking for existence of /etc/test/nested/copy/15259/hosts within VM
functional_test.go:1927: (dbg) Run:  out/minikube-linux-amd64 -p functional-496242 ssh "sudo cat /etc/test/nested/copy/15259/hosts"
functional_test.go:1932: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.20s)

                                                
                                    
x
+
TestFunctional/parallel/CertSync (1.19s)

                                                
                                                
=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1968: Checking for existence of /etc/ssl/certs/15259.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-amd64 -p functional-496242 ssh "sudo cat /etc/ssl/certs/15259.pem"
functional_test.go:1968: Checking for existence of /usr/share/ca-certificates/15259.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-amd64 -p functional-496242 ssh "sudo cat /usr/share/ca-certificates/15259.pem"
functional_test.go:1968: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-amd64 -p functional-496242 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1995: Checking for existence of /etc/ssl/certs/152592.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-amd64 -p functional-496242 ssh "sudo cat /etc/ssl/certs/152592.pem"
functional_test.go:1995: Checking for existence of /usr/share/ca-certificates/152592.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-amd64 -p functional-496242 ssh "sudo cat /usr/share/ca-certificates/152592.pem"
functional_test.go:1995: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-amd64 -p functional-496242 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (1.19s)
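
Beyond cat-ing the synced files, their contents can be checked as certificates. A minimal sketch, assuming openssl is available inside the guest; this is not part of the recorded test:

	# Parse the synced PEM as an x509 certificate and print its subject.
	out/minikube-linux-amd64 -p functional-496242 ssh "sudo openssl x509 -in /etc/ssl/certs/15259.pem -noout -subject"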

                                                
                                    
x
+
TestFunctional/parallel/NodeLabels (0.05s)

                                                
                                                
=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:218: (dbg) Run:  kubectl --context functional-496242 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.05s)

                                                
                                    
x
+
TestFunctional/parallel/NonActiveRuntimeDisabled (0.42s)

                                                
                                                
=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2023: (dbg) Run:  out/minikube-linux-amd64 -p functional-496242 ssh "sudo systemctl is-active docker"
functional_test.go:2023: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-496242 ssh "sudo systemctl is-active docker": exit status 1 (203.983566ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
functional_test.go:2023: (dbg) Run:  out/minikube-linux-amd64 -p functional-496242 ssh "sudo systemctl is-active containerd"
functional_test.go:2023: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-496242 ssh "sudo systemctl is-active containerd": exit status 1 (212.429542ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.42s)
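
Note that "inactive" appears on stdout while the failure surfaces as ssh "Process exited with status 3": systemctl is-active exits non-zero for any unit that is not active. With crio selected as the runtime, the complementary check looks like this (illustrative, not part of the recorded test):

	# The active runtime should report "active" and exit 0.
	out/minikube-linux-amd64 -p functional-496242 ssh "sudo systemctl is-active crio"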

                                                
                                    
x
+
TestFunctional/parallel/License (0.55s)

                                                
                                                
=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/License
functional_test.go:2284: (dbg) Run:  out/minikube-linux-amd64 license
--- PASS: TestFunctional/parallel/License (0.55s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListShort (2s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-496242 image ls --format short --alsologtostderr
functional_test.go:260: (dbg) Done: out/minikube-linux-amd64 -p functional-496242 image ls --format short --alsologtostderr: (1.995145567s)
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-496242 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.9
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.30.3
registry.k8s.io/kube-proxy:v1.30.3
registry.k8s.io/kube-controller-manager:v1.30.3
registry.k8s.io/kube-apiserver:v1.30.3
registry.k8s.io/etcd:3.5.12-0
registry.k8s.io/echoserver:1.8
registry.k8s.io/coredns/coredns:v1.11.1
localhost/minikube-local-cache-test:functional-496242
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/library/nginx:latest
docker.io/library/mysql:5.7
docker.io/kindest/kindnetd:v20240715-585640e9
docker.io/kicbase/echo-server:functional-496242
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-496242 image ls --format short --alsologtostderr:
I0731 16:58:34.118723   25956 out.go:291] Setting OutFile to fd 1 ...
I0731 16:58:34.119153   25956 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0731 16:58:34.119201   25956 out.go:304] Setting ErrFile to fd 2...
I0731 16:58:34.119218   25956 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0731 16:58:34.119637   25956 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19349-8084/.minikube/bin
I0731 16:58:34.120490   25956 config.go:182] Loaded profile config "functional-496242": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
I0731 16:58:34.120592   25956 config.go:182] Loaded profile config "functional-496242": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
I0731 16:58:34.120918   25956 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0731 16:58:34.120959   25956 main.go:141] libmachine: Launching plugin server for driver kvm2
I0731 16:58:34.140637   25956 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37893
I0731 16:58:34.141145   25956 main.go:141] libmachine: () Calling .GetVersion
I0731 16:58:34.141743   25956 main.go:141] libmachine: Using API Version  1
I0731 16:58:34.141771   25956 main.go:141] libmachine: () Calling .SetConfigRaw
I0731 16:58:34.142121   25956 main.go:141] libmachine: () Calling .GetMachineName
I0731 16:58:34.142274   25956 main.go:141] libmachine: (functional-496242) Calling .GetState
I0731 16:58:34.144166   25956 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0731 16:58:34.144219   25956 main.go:141] libmachine: Launching plugin server for driver kvm2
I0731 16:58:34.158668   25956 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39533
I0731 16:58:34.159188   25956 main.go:141] libmachine: () Calling .GetVersion
I0731 16:58:34.159766   25956 main.go:141] libmachine: Using API Version  1
I0731 16:58:34.159798   25956 main.go:141] libmachine: () Calling .SetConfigRaw
I0731 16:58:34.160194   25956 main.go:141] libmachine: () Calling .GetMachineName
I0731 16:58:34.160375   25956 main.go:141] libmachine: (functional-496242) Calling .DriverName
I0731 16:58:34.160598   25956 ssh_runner.go:195] Run: systemctl --version
I0731 16:58:34.160628   25956 main.go:141] libmachine: (functional-496242) Calling .GetSSHHostname
I0731 16:58:34.163267   25956 main.go:141] libmachine: (functional-496242) DBG | domain functional-496242 has defined MAC address 52:54:00:74:eb:70 in network mk-functional-496242
I0731 16:58:34.163666   25956 main.go:141] libmachine: (functional-496242) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:74:eb:70", ip: ""} in network mk-functional-496242: {Iface:virbr1 ExpiryTime:2024-07-31 17:54:16 +0000 UTC Type:0 Mac:52:54:00:74:eb:70 Iaid: IPaddr:192.168.39.165 Prefix:24 Hostname:functional-496242 Clientid:01:52:54:00:74:eb:70}
I0731 16:58:34.163700   25956 main.go:141] libmachine: (functional-496242) DBG | domain functional-496242 has defined IP address 192.168.39.165 and MAC address 52:54:00:74:eb:70 in network mk-functional-496242
I0731 16:58:34.163810   25956 main.go:141] libmachine: (functional-496242) Calling .GetSSHPort
I0731 16:58:34.163981   25956 main.go:141] libmachine: (functional-496242) Calling .GetSSHKeyPath
I0731 16:58:34.164123   25956 main.go:141] libmachine: (functional-496242) Calling .GetSSHUsername
I0731 16:58:34.164251   25956 sshutil.go:53] new ssh client: &{IP:192.168.39.165 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19349-8084/.minikube/machines/functional-496242/id_rsa Username:docker}
I0731 16:58:34.294765   25956 ssh_runner.go:195] Run: sudo crictl images --output json
I0731 16:58:36.068927   25956 ssh_runner.go:235] Completed: sudo crictl images --output json: (1.774131122s)
I0731 16:58:36.069212   25956 main.go:141] libmachine: Making call to close driver server
I0731 16:58:36.069224   25956 main.go:141] libmachine: (functional-496242) Calling .Close
I0731 16:58:36.069529   25956 main.go:141] libmachine: Successfully made call to close driver server
I0731 16:58:36.069547   25956 main.go:141] libmachine: Making call to close connection to plugin binary
I0731 16:58:36.069574   25956 main.go:141] libmachine: (functional-496242) DBG | Closing plugin on server side
I0731 16:58:36.069659   25956 main.go:141] libmachine: Making call to close driver server
I0731 16:58:36.069674   25956 main.go:141] libmachine: (functional-496242) Calling .Close
I0731 16:58:36.069880   25956 main.go:141] libmachine: Successfully made call to close driver server
I0731 16:58:36.069893   25956 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (2.00s)
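
As the stderr above shows, the image listing is gathered over SSH with crictl, so the raw data behind "image ls" can be pulled the same way. A minimal sketch:

	# Same source data as "minikube image ls", straight from the container runtime.
	out/minikube-linux-amd64 -p functional-496242 ssh "sudo crictl images --output json"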

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListTable (0.22s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-496242 image ls --format table --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-496242 image ls --format table --alsologtostderr:
|-----------------------------------------|--------------------|---------------|--------|
|                  Image                  |        Tag         |   Image ID    |  Size  |
|-----------------------------------------|--------------------|---------------|--------|
| docker.io/library/mysql                 | 5.7                | 5107333e08a87 | 520MB  |
| gcr.io/k8s-minikube/busybox             | latest             | beae173ccac6a | 1.46MB |
| localhost/my-image                      | functional-496242  | 6448106222d07 | 1.47MB |
| registry.k8s.io/echoserver              | 1.8                | 82e4c8a736a4f | 97.8MB |
| registry.k8s.io/kube-controller-manager | v1.30.3            | 76932a3b37d7e | 112MB  |
| registry.k8s.io/kube-scheduler          | v1.30.3            | 3edc18e7b7672 | 63.1MB |
| registry.k8s.io/pause                   | 3.9                | e6f1816883972 | 750kB  |
| gcr.io/k8s-minikube/busybox             | 1.28.4-glibc       | 56cc512116c8f | 4.63MB |
| gcr.io/k8s-minikube/storage-provisioner | v5                 | 6e38f40d628db | 31.5MB |
| localhost/minikube-local-cache-test     | functional-496242  | aac8eea52ee57 | 3.33kB |
| registry.k8s.io/kube-apiserver          | v1.30.3            | 1f6d574d502f3 | 118MB  |
| registry.k8s.io/kube-proxy              | v1.30.3            | 55bb025d2cfa5 | 86MB   |
| registry.k8s.io/pause                   | 3.1                | da86e6ba6ca19 | 747kB  |
| docker.io/kicbase/echo-server           | functional-496242  | 9056ab77afb8e | 4.94MB |
| docker.io/kindest/kindnetd              | v20240715-585640e9 | 5cc3abe5717db | 87.2MB |
| docker.io/library/nginx                 | latest             | a72860cb95fd5 | 192MB  |
| registry.k8s.io/coredns/coredns         | v1.11.1            | cbb01a7bd410d | 61.2MB |
| registry.k8s.io/etcd                    | 3.5.12-0           | 3861cfcd7c04c | 151MB  |
| registry.k8s.io/pause                   | 3.3                | 0184c1613d929 | 686kB  |
| registry.k8s.io/pause                   | latest             | 350b164e7ae1d | 247kB  |
|-----------------------------------------|--------------------|---------------|--------|
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-496242 image ls --format table --alsologtostderr:
I0731 16:58:39.325378   26100 out.go:291] Setting OutFile to fd 1 ...
I0731 16:58:39.325954   26100 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0731 16:58:39.325973   26100 out.go:304] Setting ErrFile to fd 2...
I0731 16:58:39.325981   26100 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0731 16:58:39.326409   26100 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19349-8084/.minikube/bin
I0731 16:58:39.327452   26100 config.go:182] Loaded profile config "functional-496242": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
I0731 16:58:39.327614   26100 config.go:182] Loaded profile config "functional-496242": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
I0731 16:58:39.328005   26100 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0731 16:58:39.328060   26100 main.go:141] libmachine: Launching plugin server for driver kvm2
I0731 16:58:39.343257   26100 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45965
I0731 16:58:39.343721   26100 main.go:141] libmachine: () Calling .GetVersion
I0731 16:58:39.344294   26100 main.go:141] libmachine: Using API Version  1
I0731 16:58:39.344324   26100 main.go:141] libmachine: () Calling .SetConfigRaw
I0731 16:58:39.344645   26100 main.go:141] libmachine: () Calling .GetMachineName
I0731 16:58:39.344832   26100 main.go:141] libmachine: (functional-496242) Calling .GetState
I0731 16:58:39.346679   26100 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0731 16:58:39.346714   26100 main.go:141] libmachine: Launching plugin server for driver kvm2
I0731 16:58:39.361498   26100 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45909
I0731 16:58:39.361920   26100 main.go:141] libmachine: () Calling .GetVersion
I0731 16:58:39.362371   26100 main.go:141] libmachine: Using API Version  1
I0731 16:58:39.362391   26100 main.go:141] libmachine: () Calling .SetConfigRaw
I0731 16:58:39.362724   26100 main.go:141] libmachine: () Calling .GetMachineName
I0731 16:58:39.362915   26100 main.go:141] libmachine: (functional-496242) Calling .DriverName
I0731 16:58:39.363125   26100 ssh_runner.go:195] Run: systemctl --version
I0731 16:58:39.363151   26100 main.go:141] libmachine: (functional-496242) Calling .GetSSHHostname
I0731 16:58:39.365758   26100 main.go:141] libmachine: (functional-496242) DBG | domain functional-496242 has defined MAC address 52:54:00:74:eb:70 in network mk-functional-496242
I0731 16:58:39.366165   26100 main.go:141] libmachine: (functional-496242) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:74:eb:70", ip: ""} in network mk-functional-496242: {Iface:virbr1 ExpiryTime:2024-07-31 17:54:16 +0000 UTC Type:0 Mac:52:54:00:74:eb:70 Iaid: IPaddr:192.168.39.165 Prefix:24 Hostname:functional-496242 Clientid:01:52:54:00:74:eb:70}
I0731 16:58:39.366193   26100 main.go:141] libmachine: (functional-496242) DBG | domain functional-496242 has defined IP address 192.168.39.165 and MAC address 52:54:00:74:eb:70 in network mk-functional-496242
I0731 16:58:39.366410   26100 main.go:141] libmachine: (functional-496242) Calling .GetSSHPort
I0731 16:58:39.366562   26100 main.go:141] libmachine: (functional-496242) Calling .GetSSHKeyPath
I0731 16:58:39.366691   26100 main.go:141] libmachine: (functional-496242) Calling .GetSSHUsername
I0731 16:58:39.366846   26100 sshutil.go:53] new ssh client: &{IP:192.168.39.165 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19349-8084/.minikube/machines/functional-496242/id_rsa Username:docker}
I0731 16:58:39.445929   26100 ssh_runner.go:195] Run: sudo crictl images --output json
I0731 16:58:39.496197   26100 main.go:141] libmachine: Making call to close driver server
I0731 16:58:39.496222   26100 main.go:141] libmachine: (functional-496242) Calling .Close
I0731 16:58:39.496504   26100 main.go:141] libmachine: Successfully made call to close driver server
I0731 16:58:39.496525   26100 main.go:141] libmachine: Making call to close connection to plugin binary
I0731 16:58:39.496535   26100 main.go:141] libmachine: Making call to close driver server
I0731 16:58:39.496581   26100 main.go:141] libmachine: (functional-496242) Calling .Close
I0731 16:58:39.496830   26100 main.go:141] libmachine: (functional-496242) DBG | Closing plugin on server side
I0731 16:58:39.496836   26100 main.go:141] libmachine: Successfully made call to close driver server
I0731 16:58:39.496860   26100 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.22s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListJson (0.2s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-496242 image ls --format json --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-496242 image ls --format json --alsologtostderr:
[{"id":"9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30","repoDigests":["docker.io/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf"],"repoTags":["docker.io/kicbase/echo-server:functional-496242"],"size":"4943877"},{"id":"07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558","repoDigests":["docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93","docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029"],"repoTags":[],"size":"249229937"},{"id":"5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933","repoDigests":["docker.io/library/mysql@sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb","docker.io/library/mysql@sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da"],"repoTags":["docker.io/library/mysql:5.7"],"size":"519571821"},{"id":"56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb
1faaac7c020289c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e","gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"4631262"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":["registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04"],"repoTags":["registry.k8s.io/pause:3.3"],"size":"686139"},{"id":"e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c","repoDigests":["registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097","registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10"],"repoTags":["registry.k8s.io/pause:3.9"],"size":"750414"},{"id":"115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7","repoDigests":["docker.io/kubernetesui/metrics-scraper@sha
256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a","docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"],"repoTags":[],"size":"43824855"},{"id":"29297bbd752ef6b725a77eb59a609feea54a969b906113ffd13d74c564f39aa3","repoDigests":["docker.io/library/a280dba13e37c764485fc1300bfce5a41ca76f5fecdff88efc654df028f71425-tmp@sha256:bb9cca2ab1a235f872f7e6a99518af970e8f0f2bf5726101801d40220ae499bc"],"repoTags":[],"size":"1466018"},{"id":"beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:62ffc2ed7554e4c6d360bce40bbcf196573dd27c4ce080641a2c59867e732dee","gcr.io/k8s-minikube/busybox@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b"],"repoTags":["gcr.io/k8s-minikube/busybox:latest"],"size":"1462480"},{"id":"82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410","repoDigests":["registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac28
7463b1bea633bf859475969"],"repoTags":["registry.k8s.io/echoserver:1.8"],"size":"97846543"},{"id":"3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899","repoDigests":["registry.k8s.io/etcd@sha256:2e6b9c67730f1f1dce4c6e16d60135e00608728567f537e8ff70c244756cbb62","registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b"],"repoTags":["registry.k8s.io/etcd:3.5.12-0"],"size":"150779692"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":["registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e"],"repoTags":["registry.k8s.io/pause:3.1"],"size":"746911"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":["registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9"],"repoTags":["registry.k8s.io/pause:latest"],"size":"247077"},{"id":"aac8eea52ee570f1e523f79cd456ba7a80c3604ce10c4947ae6e5e8fbc498602","repoDigests":["localhost/m
inikube-local-cache-test@sha256:4a6b36ac43590b852573a80be93967793269ce6b97d9998986d5ffa0ff093c60"],"repoTags":["localhost/minikube-local-cache-test:functional-496242"],"size":"3330"},{"id":"cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4","repoDigests":["registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1","registry.k8s.io/coredns/coredns@sha256:2169b3b96af988cf69d7dd69efbcc59433eb027320eb185c6110e0850b997870"],"repoTags":["registry.k8s.io/coredns/coredns:v1.11.1"],"size":"61245718"},{"id":"1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d","repoDigests":["registry.k8s.io/kube-apiserver@sha256:a36d558835e48950f6d13b1edbe20605b8dfbc81e088f58221796631e107966c","registry.k8s.io/kube-apiserver@sha256:a3a6c80030a6e720734ae3291448388f70b6f1d463f103e4f06f358f8a170315"],"repoTags":["registry.k8s.io/kube-apiserver:v1.30.3"],"size":"117609954"},{"id":"55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1","repoDigests":["regi
stry.k8s.io/kube-proxy@sha256:8c178447597867a03bbcdf0d1ce43fc8f6807ead2321bd1ec0e845a2f12dad80","registry.k8s.io/kube-proxy@sha256:b26e535e8ee1cbd7dc5642fb61bd36e9d23f32e9242ae0010b2905656e664f65"],"repoTags":["registry.k8s.io/kube-proxy:v1.30.3"],"size":"85953945"},{"id":"3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2","repoDigests":["registry.k8s.io/kube-scheduler@sha256:1738178fb116d10e7cde2cfc3671f5dfdad518d773677af740483f2dfe674266","registry.k8s.io/kube-scheduler@sha256:2147ab5d2c73dd84e28332fcbee6826d1648eed30a531a52a96501b37d7ee4e4"],"repoTags":["registry.k8s.io/kube-scheduler:v1.30.3"],"size":"63051080"},{"id":"5cc3abe5717dbf8e0db6fc2e890c3f82fe95e2ce5c5d7320f98a5b71c767d42f","repoDigests":["docker.io/kindest/kindnetd@sha256:3b93f681916ee780a9941d48cb20622486c08af54f8d87d801412bcca0832115","docker.io/kindest/kindnetd@sha256:88ed2adbc140254762f98fad7f4b16d279117356ebaf95aebf191713c828a493"],"repoTags":["docker.io/kindest/kindnetd:v20240715-585640e9"],"size":"87165492"},{"id":"a72860c
b95fd59e9c696c66441c64f18e66915fa26b249911e83c3854477ed9a","repoDigests":["docker.io/library/nginx@sha256:6af79ae5de407283dcea8b00d5c37ace95441fd58a8b1d2aa1ed93f5511bb18c","docker.io/library/nginx@sha256:baa881b012a49e3c2cd6ab9d80f9fcd2962a98af8ede947d0ef930a427b28afc"],"repoTags":["docker.io/library/nginx:latest"],"size":"191750286"},{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944","gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31470524"},{"id":"6448106222d07e88ed89c86f3effab12da3b2eb9c483e570fc5b1758fd008e11","repoDigests":["localhost/my-image@sha256:98b622981ffcc8b7b1c9fd8ed2822f6e58b1b5f936a2999e16509ec07bec7f78"],"repoTags":["localhost/my-image:functional-496242"],"size":"1468600"},{"id":"76932a3b37d7eb138c8f47c9a2b4218f04
66dd273badf856f2ce2f0277e15b5e","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:eff43da55a29a5e66ec9480f28233d733a6a8433b7a46f6e8c07086fa4ef69b7","registry.k8s.io/kube-controller-manager@sha256:fa179d147c6bacddd1586f6d12ff79a844e951c7b159fdcb92cdf56f3033d91e"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.30.3"],"size":"112198984"}]
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-496242 image ls --format json --alsologtostderr:
I0731 16:58:39.118245   26076 out.go:291] Setting OutFile to fd 1 ...
I0731 16:58:39.118350   26076 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0731 16:58:39.118355   26076 out.go:304] Setting ErrFile to fd 2...
I0731 16:58:39.118359   26076 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0731 16:58:39.118535   26076 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19349-8084/.minikube/bin
I0731 16:58:39.119039   26076 config.go:182] Loaded profile config "functional-496242": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
I0731 16:58:39.119161   26076 config.go:182] Loaded profile config "functional-496242": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
I0731 16:58:39.119570   26076 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0731 16:58:39.119609   26076 main.go:141] libmachine: Launching plugin server for driver kvm2
I0731 16:58:39.137011   26076 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43163
I0731 16:58:39.137568   26076 main.go:141] libmachine: () Calling .GetVersion
I0731 16:58:39.138220   26076 main.go:141] libmachine: Using API Version  1
I0731 16:58:39.138247   26076 main.go:141] libmachine: () Calling .SetConfigRaw
I0731 16:58:39.138593   26076 main.go:141] libmachine: () Calling .GetMachineName
I0731 16:58:39.138814   26076 main.go:141] libmachine: (functional-496242) Calling .GetState
I0731 16:58:39.140981   26076 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0731 16:58:39.141036   26076 main.go:141] libmachine: Launching plugin server for driver kvm2
I0731 16:58:39.155742   26076 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45647
I0731 16:58:39.156197   26076 main.go:141] libmachine: () Calling .GetVersion
I0731 16:58:39.156726   26076 main.go:141] libmachine: Using API Version  1
I0731 16:58:39.156749   26076 main.go:141] libmachine: () Calling .SetConfigRaw
I0731 16:58:39.157113   26076 main.go:141] libmachine: () Calling .GetMachineName
I0731 16:58:39.157309   26076 main.go:141] libmachine: (functional-496242) Calling .DriverName
I0731 16:58:39.157501   26076 ssh_runner.go:195] Run: systemctl --version
I0731 16:58:39.157519   26076 main.go:141] libmachine: (functional-496242) Calling .GetSSHHostname
I0731 16:58:39.159732   26076 main.go:141] libmachine: (functional-496242) DBG | domain functional-496242 has defined MAC address 52:54:00:74:eb:70 in network mk-functional-496242
I0731 16:58:39.160140   26076 main.go:141] libmachine: (functional-496242) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:74:eb:70", ip: ""} in network mk-functional-496242: {Iface:virbr1 ExpiryTime:2024-07-31 17:54:16 +0000 UTC Type:0 Mac:52:54:00:74:eb:70 Iaid: IPaddr:192.168.39.165 Prefix:24 Hostname:functional-496242 Clientid:01:52:54:00:74:eb:70}
I0731 16:58:39.160178   26076 main.go:141] libmachine: (functional-496242) DBG | domain functional-496242 has defined IP address 192.168.39.165 and MAC address 52:54:00:74:eb:70 in network mk-functional-496242
I0731 16:58:39.160287   26076 main.go:141] libmachine: (functional-496242) Calling .GetSSHPort
I0731 16:58:39.160469   26076 main.go:141] libmachine: (functional-496242) Calling .GetSSHKeyPath
I0731 16:58:39.160641   26076 main.go:141] libmachine: (functional-496242) Calling .GetSSHUsername
I0731 16:58:39.160802   26076 sshutil.go:53] new ssh client: &{IP:192.168.39.165 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19349-8084/.minikube/machines/functional-496242/id_rsa Username:docker}
I0731 16:58:39.237127   26076 ssh_runner.go:195] Run: sudo crictl images --output json
I0731 16:58:39.274839   26076 main.go:141] libmachine: Making call to close driver server
I0731 16:58:39.274851   26076 main.go:141] libmachine: (functional-496242) Calling .Close
I0731 16:58:39.275134   26076 main.go:141] libmachine: Successfully made call to close driver server
I0731 16:58:39.275147   26076 main.go:141] libmachine: (functional-496242) DBG | Closing plugin on server side
I0731 16:58:39.275151   26076 main.go:141] libmachine: Making call to close connection to plugin binary
I0731 16:58:39.275193   26076 main.go:141] libmachine: Making call to close driver server
I0731 16:58:39.275203   26076 main.go:141] libmachine: (functional-496242) Calling .Close
I0731 16:58:39.275418   26076 main.go:141] libmachine: (functional-496242) DBG | Closing plugin on server side
I0731 16:58:39.275429   26076 main.go:141] libmachine: Successfully made call to close driver server
I0731 16:58:39.275448   26076 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.20s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListYaml (0.24s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-496242 image ls --format yaml --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-496242 image ls --format yaml --alsologtostderr:
- id: 115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7
repoDigests:
- docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a
- docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c
repoTags: []
size: "43824855"
- id: 5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933
repoDigests:
- docker.io/library/mysql@sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb
- docker.io/library/mysql@sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da
repoTags:
- docker.io/library/mysql:5.7
size: "519571821"
- id: aac8eea52ee570f1e523f79cd456ba7a80c3604ce10c4947ae6e5e8fbc498602
repoDigests:
- localhost/minikube-local-cache-test@sha256:4a6b36ac43590b852573a80be93967793269ce6b97d9998986d5ffa0ff093c60
repoTags:
- localhost/minikube-local-cache-test:functional-496242
size: "3330"
- id: 3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899
repoDigests:
- registry.k8s.io/etcd@sha256:2e6b9c67730f1f1dce4c6e16d60135e00608728567f537e8ff70c244756cbb62
- registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b
repoTags:
- registry.k8s.io/etcd:3.5.12-0
size: "150779692"
- id: 1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:a36d558835e48950f6d13b1edbe20605b8dfbc81e088f58221796631e107966c
- registry.k8s.io/kube-apiserver@sha256:a3a6c80030a6e720734ae3291448388f70b6f1d463f103e4f06f358f8a170315
repoTags:
- registry.k8s.io/kube-apiserver:v1.30.3
size: "117609954"
- id: 76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:eff43da55a29a5e66ec9480f28233d733a6a8433b7a46f6e8c07086fa4ef69b7
- registry.k8s.io/kube-controller-manager@sha256:fa179d147c6bacddd1586f6d12ff79a844e951c7b159fdcb92cdf56f3033d91e
repoTags:
- registry.k8s.io/kube-controller-manager:v1.30.3
size: "112198984"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests:
- registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9
repoTags:
- registry.k8s.io/pause:latest
size: "247077"
- id: 9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30
repoDigests:
- docker.io/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf
repoTags:
- docker.io/kicbase/echo-server:functional-496242
size: "4943877"
- id: 07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558
repoDigests:
- docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93
- docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029
repoTags: []
size: "249229937"
- id: e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c
repoDigests:
- registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097
- registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10
repoTags:
- registry.k8s.io/pause:3.9
size: "750414"
- id: a72860cb95fd59e9c696c66441c64f18e66915fa26b249911e83c3854477ed9a
repoDigests:
- docker.io/library/nginx@sha256:6af79ae5de407283dcea8b00d5c37ace95441fd58a8b1d2aa1ed93f5511bb18c
- docker.io/library/nginx@sha256:baa881b012a49e3c2cd6ab9d80f9fcd2962a98af8ede947d0ef930a427b28afc
repoTags:
- docker.io/library/nginx:latest
size: "191750286"
- id: 3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:1738178fb116d10e7cde2cfc3671f5dfdad518d773677af740483f2dfe674266
- registry.k8s.io/kube-scheduler@sha256:2147ab5d2c73dd84e28332fcbee6826d1648eed30a531a52a96501b37d7ee4e4
repoTags:
- registry.k8s.io/kube-scheduler:v1.30.3
size: "63051080"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
- gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31470524"
- id: cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1
- registry.k8s.io/coredns/coredns@sha256:2169b3b96af988cf69d7dd69efbcc59433eb027320eb185c6110e0850b997870
repoTags:
- registry.k8s.io/coredns/coredns:v1.11.1
size: "61245718"
- id: 82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410
repoDigests:
- registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969
repoTags:
- registry.k8s.io/echoserver:1.8
size: "97846543"
- id: 55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1
repoDigests:
- registry.k8s.io/kube-proxy@sha256:8c178447597867a03bbcdf0d1ce43fc8f6807ead2321bd1ec0e845a2f12dad80
- registry.k8s.io/kube-proxy@sha256:b26e535e8ee1cbd7dc5642fb61bd36e9d23f32e9242ae0010b2905656e664f65
repoTags:
- registry.k8s.io/kube-proxy:v1.30.3
size: "85953945"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests:
- registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e
repoTags:
- registry.k8s.io/pause:3.1
size: "746911"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests:
- registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04
repoTags:
- registry.k8s.io/pause:3.3
size: "686139"
- id: 5cc3abe5717dbf8e0db6fc2e890c3f82fe95e2ce5c5d7320f98a5b71c767d42f
repoDigests:
- docker.io/kindest/kindnetd@sha256:3b93f681916ee780a9941d48cb20622486c08af54f8d87d801412bcca0832115
- docker.io/kindest/kindnetd@sha256:88ed2adbc140254762f98fad7f4b16d279117356ebaf95aebf191713c828a493
repoTags:
- docker.io/kindest/kindnetd:v20240715-585640e9
size: "87165492"
- id: 56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
- gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "4631262"

                                                
                                                
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-496242 image ls --format yaml --alsologtostderr:
I0731 16:58:36.114788   25997 out.go:291] Setting OutFile to fd 1 ...
I0731 16:58:36.114884   25997 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0731 16:58:36.114888   25997 out.go:304] Setting ErrFile to fd 2...
I0731 16:58:36.114892   25997 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0731 16:58:36.115201   25997 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19349-8084/.minikube/bin
I0731 16:58:36.115826   25997 config.go:182] Loaded profile config "functional-496242": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
I0731 16:58:36.115940   25997 config.go:182] Loaded profile config "functional-496242": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
I0731 16:58:36.116362   25997 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0731 16:58:36.116405   25997 main.go:141] libmachine: Launching plugin server for driver kvm2
I0731 16:58:36.131223   25997 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45093
I0731 16:58:36.131717   25997 main.go:141] libmachine: () Calling .GetVersion
I0731 16:58:36.132343   25997 main.go:141] libmachine: Using API Version  1
I0731 16:58:36.132372   25997 main.go:141] libmachine: () Calling .SetConfigRaw
I0731 16:58:36.132795   25997 main.go:141] libmachine: () Calling .GetMachineName
I0731 16:58:36.133017   25997 main.go:141] libmachine: (functional-496242) Calling .GetState
I0731 16:58:36.134955   25997 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0731 16:58:36.134992   25997 main.go:141] libmachine: Launching plugin server for driver kvm2
I0731 16:58:36.149269   25997 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36275
I0731 16:58:36.149710   25997 main.go:141] libmachine: () Calling .GetVersion
I0731 16:58:36.150144   25997 main.go:141] libmachine: Using API Version  1
I0731 16:58:36.150166   25997 main.go:141] libmachine: () Calling .SetConfigRaw
I0731 16:58:36.150471   25997 main.go:141] libmachine: () Calling .GetMachineName
I0731 16:58:36.150679   25997 main.go:141] libmachine: (functional-496242) Calling .DriverName
I0731 16:58:36.150922   25997 ssh_runner.go:195] Run: systemctl --version
I0731 16:58:36.150948   25997 main.go:141] libmachine: (functional-496242) Calling .GetSSHHostname
I0731 16:58:36.153663   25997 main.go:141] libmachine: (functional-496242) DBG | domain functional-496242 has defined MAC address 52:54:00:74:eb:70 in network mk-functional-496242
I0731 16:58:36.154094   25997 main.go:141] libmachine: (functional-496242) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:74:eb:70", ip: ""} in network mk-functional-496242: {Iface:virbr1 ExpiryTime:2024-07-31 17:54:16 +0000 UTC Type:0 Mac:52:54:00:74:eb:70 Iaid: IPaddr:192.168.39.165 Prefix:24 Hostname:functional-496242 Clientid:01:52:54:00:74:eb:70}
I0731 16:58:36.154128   25997 main.go:141] libmachine: (functional-496242) DBG | domain functional-496242 has defined IP address 192.168.39.165 and MAC address 52:54:00:74:eb:70 in network mk-functional-496242
I0731 16:58:36.154277   25997 main.go:141] libmachine: (functional-496242) Calling .GetSSHPort
I0731 16:58:36.154438   25997 main.go:141] libmachine: (functional-496242) Calling .GetSSHKeyPath
I0731 16:58:36.154586   25997 main.go:141] libmachine: (functional-496242) Calling .GetSSHUsername
I0731 16:58:36.154723   25997 sshutil.go:53] new ssh client: &{IP:192.168.39.165 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19349-8084/.minikube/machines/functional-496242/id_rsa Username:docker}
I0731 16:58:36.259724   25997 ssh_runner.go:195] Run: sudo crictl images --output json
I0731 16:58:36.312030   25997 main.go:141] libmachine: Making call to close driver server
I0731 16:58:36.312043   25997 main.go:141] libmachine: (functional-496242) Calling .Close
I0731 16:58:36.312357   25997 main.go:141] libmachine: Successfully made call to close driver server
I0731 16:58:36.312377   25997 main.go:141] libmachine: Making call to close connection to plugin binary
I0731 16:58:36.312391   25997 main.go:141] libmachine: (functional-496242) DBG | Closing plugin on server side
I0731 16:58:36.312394   25997 main.go:141] libmachine: Making call to close driver server
I0731 16:58:36.312414   25997 main.go:141] libmachine: (functional-496242) Calling .Close
I0731 16:58:36.312672   25997 main.go:141] libmachine: (functional-496242) DBG | Closing plugin on server side
I0731 16:58:36.312759   25997 main.go:141] libmachine: Successfully made call to close driver server
I0731 16:58:36.312785   25997 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.24s)
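The image ls --format yaml listing above is a plain YAML sequence of entries with id, repoDigests, repoTags, and size fields. The following is a minimal Go sketch of consuming that output, assuming the field set shown above and gopkg.in/yaml.v3 as the decoder (an added dependency); it is only an illustration against the same command, not minikube's or the test suite's own code.

package main

import (
	"fmt"
	"os/exec"

	"gopkg.in/yaml.v3"
)

// imageEntry mirrors the fields visible in the listing above; the field set is
// inferred from that output, not taken from minikube's source.
type imageEntry struct {
	ID          string   `yaml:"id"`
	RepoDigests []string `yaml:"repoDigests"`
	RepoTags    []string `yaml:"repoTags"`
	Size        string   `yaml:"size"`
}

func main() {
	// Same command the test runs, with the profile name used in this report.
	out, err := exec.Command("minikube", "-p", "functional-496242", "image", "ls", "--format", "yaml").Output()
	if err != nil {
		panic(err)
	}

	var images []imageEntry
	if err := yaml.Unmarshal(out, &images); err != nil {
		panic(err)
	}
	for _, img := range images {
		fmt.Printf("%s tags=%v size=%s\n", img.ID, img.RepoTags, img.Size)
	}
}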

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageBuild (3.21s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:307: (dbg) Run:  out/minikube-linux-amd64 -p functional-496242 ssh pgrep buildkitd
functional_test.go:307: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-496242 ssh pgrep buildkitd: exit status 1 (186.627962ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:314: (dbg) Run:  out/minikube-linux-amd64 -p functional-496242 image build -t localhost/my-image:functional-496242 testdata/build --alsologtostderr
functional_test.go:314: (dbg) Done: out/minikube-linux-amd64 -p functional-496242 image build -t localhost/my-image:functional-496242 testdata/build --alsologtostderr: (2.810979664s)
functional_test.go:319: (dbg) Stdout: out/minikube-linux-amd64 -p functional-496242 image build -t localhost/my-image:functional-496242 testdata/build --alsologtostderr:
STEP 1/3: FROM gcr.io/k8s-minikube/busybox
STEP 2/3: RUN true
--> 29297bbd752
STEP 3/3: ADD content.txt /
COMMIT localhost/my-image:functional-496242
--> 6448106222d
Successfully tagged localhost/my-image:functional-496242
6448106222d07e88ed89c86f3effab12da3b2eb9c483e570fc5b1758fd008e11
functional_test.go:322: (dbg) Stderr: out/minikube-linux-amd64 -p functional-496242 image build -t localhost/my-image:functional-496242 testdata/build --alsologtostderr:
I0731 16:58:36.549041   26051 out.go:291] Setting OutFile to fd 1 ...
I0731 16:58:36.549187   26051 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0731 16:58:36.549198   26051 out.go:304] Setting ErrFile to fd 2...
I0731 16:58:36.549204   26051 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0731 16:58:36.549400   26051 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19349-8084/.minikube/bin
I0731 16:58:36.549968   26051 config.go:182] Loaded profile config "functional-496242": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
I0731 16:58:36.550478   26051 config.go:182] Loaded profile config "functional-496242": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
I0731 16:58:36.550862   26051 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0731 16:58:36.550937   26051 main.go:141] libmachine: Launching plugin server for driver kvm2
I0731 16:58:36.565913   26051 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41795
I0731 16:58:36.566382   26051 main.go:141] libmachine: () Calling .GetVersion
I0731 16:58:36.566905   26051 main.go:141] libmachine: Using API Version  1
I0731 16:58:36.566928   26051 main.go:141] libmachine: () Calling .SetConfigRaw
I0731 16:58:36.567281   26051 main.go:141] libmachine: () Calling .GetMachineName
I0731 16:58:36.567475   26051 main.go:141] libmachine: (functional-496242) Calling .GetState
I0731 16:58:36.569365   26051 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0731 16:58:36.569401   26051 main.go:141] libmachine: Launching plugin server for driver kvm2
I0731 16:58:36.584616   26051 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45463
I0731 16:58:36.585055   26051 main.go:141] libmachine: () Calling .GetVersion
I0731 16:58:36.585582   26051 main.go:141] libmachine: Using API Version  1
I0731 16:58:36.585609   26051 main.go:141] libmachine: () Calling .SetConfigRaw
I0731 16:58:36.585947   26051 main.go:141] libmachine: () Calling .GetMachineName
I0731 16:58:36.586152   26051 main.go:141] libmachine: (functional-496242) Calling .DriverName
I0731 16:58:36.586379   26051 ssh_runner.go:195] Run: systemctl --version
I0731 16:58:36.586401   26051 main.go:141] libmachine: (functional-496242) Calling .GetSSHHostname
I0731 16:58:36.589204   26051 main.go:141] libmachine: (functional-496242) DBG | domain functional-496242 has defined MAC address 52:54:00:74:eb:70 in network mk-functional-496242
I0731 16:58:36.589576   26051 main.go:141] libmachine: (functional-496242) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:74:eb:70", ip: ""} in network mk-functional-496242: {Iface:virbr1 ExpiryTime:2024-07-31 17:54:16 +0000 UTC Type:0 Mac:52:54:00:74:eb:70 Iaid: IPaddr:192.168.39.165 Prefix:24 Hostname:functional-496242 Clientid:01:52:54:00:74:eb:70}
I0731 16:58:36.589603   26051 main.go:141] libmachine: (functional-496242) DBG | domain functional-496242 has defined IP address 192.168.39.165 and MAC address 52:54:00:74:eb:70 in network mk-functional-496242
I0731 16:58:36.589740   26051 main.go:141] libmachine: (functional-496242) Calling .GetSSHPort
I0731 16:58:36.589886   26051 main.go:141] libmachine: (functional-496242) Calling .GetSSHKeyPath
I0731 16:58:36.590033   26051 main.go:141] libmachine: (functional-496242) Calling .GetSSHUsername
I0731 16:58:36.590186   26051 sshutil.go:53] new ssh client: &{IP:192.168.39.165 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19349-8084/.minikube/machines/functional-496242/id_rsa Username:docker}
I0731 16:58:36.669306   26051 build_images.go:161] Building image from path: /tmp/build.4035865034.tar
I0731 16:58:36.669369   26051 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0731 16:58:36.679994   26051 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.4035865034.tar
I0731 16:58:36.684493   26051 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.4035865034.tar: stat -c "%s %y" /var/lib/minikube/build/build.4035865034.tar: Process exited with status 1
stdout:

                                                
                                                
stderr:
stat: cannot statx '/var/lib/minikube/build/build.4035865034.tar': No such file or directory
I0731 16:58:36.684530   26051 ssh_runner.go:362] scp /tmp/build.4035865034.tar --> /var/lib/minikube/build/build.4035865034.tar (3072 bytes)
I0731 16:58:36.707875   26051 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.4035865034
I0731 16:58:36.716496   26051 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.4035865034 -xf /var/lib/minikube/build/build.4035865034.tar
I0731 16:58:36.725171   26051 crio.go:315] Building image: /var/lib/minikube/build/build.4035865034
I0731 16:58:36.725237   26051 ssh_runner.go:195] Run: sudo podman build -t localhost/my-image:functional-496242 /var/lib/minikube/build/build.4035865034 --cgroup-manager=cgroupfs
Trying to pull gcr.io/k8s-minikube/busybox:latest...
Getting image source signatures
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying config sha256:beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a
Writing manifest to image destination
Storing signatures
I0731 16:58:39.286613   26051 ssh_runner.go:235] Completed: sudo podman build -t localhost/my-image:functional-496242 /var/lib/minikube/build/build.4035865034 --cgroup-manager=cgroupfs: (2.561350815s)
I0731 16:58:39.286677   26051 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.4035865034
I0731 16:58:39.299797   26051 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.4035865034.tar
I0731 16:58:39.309865   26051 build_images.go:217] Built localhost/my-image:functional-496242 from /tmp/build.4035865034.tar
I0731 16:58:39.309897   26051 build_images.go:133] succeeded building to: functional-496242
I0731 16:58:39.309904   26051 build_images.go:134] failed building to: 
I0731 16:58:39.309929   26051 main.go:141] libmachine: Making call to close driver server
I0731 16:58:39.309940   26051 main.go:141] libmachine: (functional-496242) Calling .Close
I0731 16:58:39.310305   26051 main.go:141] libmachine: Successfully made call to close driver server
I0731 16:58:39.310324   26051 main.go:141] libmachine: Making call to close connection to plugin binary
I0731 16:58:39.310334   26051 main.go:141] libmachine: Making call to close driver server
I0731 16:58:39.310343   26051 main.go:141] libmachine: (functional-496242) Calling .Close
I0731 16:58:39.310568   26051 main.go:141] libmachine: Successfully made call to close driver server
I0731 16:58:39.310583   26051 main.go:141] libmachine: Making call to close connection to plugin binary
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-496242 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (3.21s)
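The ImageBuild test above first checks whether buildkitd is running on the node (pgrep exits with status 1 in this run) and then builds testdata/build through minikube, with the node-side work done by podman as shown in the build stderr. Below is a rough Go sketch that drives the same commands with os/exec; the profile name and image tag are taken from this run, and the sketch is an illustration of the flow, not the test's own helper code.

package main

import (
	"fmt"
	"os/exec"
)

func run(name string, args ...string) (string, error) {
	out, err := exec.Command(name, args...).CombinedOutput()
	return string(out), err
}

func main() {
	profile := "functional-496242" // profile name from this report
	tag := "localhost/my-image:functional-496242"

	// Mirror the test's precondition check; in this run pgrep exited with status 1.
	if _, err := run("minikube", "-p", profile, "ssh", "pgrep buildkitd"); err != nil {
		fmt.Println("buildkitd is not running on the node")
	}

	// Build testdata/build (the log shows FROM gcr.io/k8s-minikube/busybox; RUN true; ADD content.txt /).
	if out, err := run("minikube", "-p", profile, "image", "build", "-t", tag, "testdata/build"); err != nil {
		fmt.Println(out)
		panic(err)
	}

	// Confirm the tag shows up in the image list, as the test does afterwards.
	out, _ := run("minikube", "-p", profile, "image", "ls")
	fmt.Println(out)
}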

                                                
                                    
TestFunctional/parallel/ImageCommands/Setup (1.8s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:341: (dbg) Run:  docker pull docker.io/kicbase/echo-server:1.0
functional_test.go:341: (dbg) Done: docker pull docker.io/kicbase/echo-server:1.0: (1.774449943s)
functional_test.go:346: (dbg) Run:  docker tag docker.io/kicbase/echo-server:1.0 docker.io/kicbase/echo-server:functional-496242
--- PASS: TestFunctional/parallel/ImageCommands/Setup (1.80s)

                                                
                                    
TestFunctional/parallel/Version/short (0.04s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2252: (dbg) Run:  out/minikube-linux-amd64 -p functional-496242 version --short
--- PASS: TestFunctional/parallel/Version/short (0.04s)

                                                
                                    
TestFunctional/parallel/Version/components (0.67s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2266: (dbg) Run:  out/minikube-linux-amd64 -p functional-496242 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.67s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_changes (0.09s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2115: (dbg) Run:  out/minikube-linux-amd64 -p functional-496242 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.09s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.09s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2115: (dbg) Run:  out/minikube-linux-amd64 -p functional-496242 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.09s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_clusters (0.1s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2115: (dbg) Run:  out/minikube-linux-amd64 -p functional-496242 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.10s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/DeployApp (22.21s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1435: (dbg) Run:  kubectl --context functional-496242 create deployment hello-node --image=registry.k8s.io/echoserver:1.8
functional_test.go:1441: (dbg) Run:  kubectl --context functional-496242 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1446: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-6d85cfcfd8-9qr2n" [1d576a69-393d-4ab5-a67b-d5adc2bdcc63] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-6d85cfcfd8-9qr2n" [1d576a69-393d-4ab5-a67b-d5adc2bdcc63] Running
functional_test.go:1446: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 22.004450317s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (22.21s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.23s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:354: (dbg) Run:  out/minikube-linux-amd64 -p functional-496242 image load --daemon docker.io/kicbase/echo-server:functional-496242 --alsologtostderr
functional_test.go:354: (dbg) Done: out/minikube-linux-amd64 -p functional-496242 image load --daemon docker.io/kicbase/echo-server:functional-496242 --alsologtostderr: (1.030148581s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-496242 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.23s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.84s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:364: (dbg) Run:  out/minikube-linux-amd64 -p functional-496242 image load --daemon docker.io/kicbase/echo-server:functional-496242 --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-496242 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.84s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (2.02s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:234: (dbg) Run:  docker pull docker.io/kicbase/echo-server:latest
functional_test.go:239: (dbg) Run:  docker tag docker.io/kicbase/echo-server:latest docker.io/kicbase/echo-server:functional-496242
functional_test.go:244: (dbg) Run:  out/minikube-linux-amd64 -p functional-496242 image load --daemon docker.io/kicbase/echo-server:functional-496242 --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-496242 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (2.02s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.8s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:379: (dbg) Run:  out/minikube-linux-amd64 -p functional-496242 image save docker.io/kicbase/echo-server:functional-496242 /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.80s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageRemove (2.59s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:391: (dbg) Run:  out/minikube-linux-amd64 -p functional-496242 image rm docker.io/kicbase/echo-server:functional-496242 --alsologtostderr
functional_test.go:391: (dbg) Done: out/minikube-linux-amd64 -p functional-496242 image rm docker.io/kicbase/echo-server:functional-496242 --alsologtostderr: (2.320580947s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-496242 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (2.59s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadFromFile (5.24s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:408: (dbg) Run:  out/minikube-linux-amd64 -p functional-496242 image load /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar --alsologtostderr
functional_test.go:408: (dbg) Done: out/minikube-linux-amd64 -p functional-496242 image load /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar --alsologtostderr: (5.025167836s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-496242 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (5.24s)
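Taken together, ImageSaveToFile, ImageRemove, and ImageLoadFromFile above exercise a save / remove / load round trip for the echo-server image. The following compact Go sketch replays that round trip with the same minikube image subcommands; the tarball path is a stand-in for the workspace path used in the log, and the sketch is illustrative rather than the test's own code.

package main

import (
	"os"
	"os/exec"
	"path/filepath"
)

func run(args ...string) error {
	cmd := exec.Command("minikube", append([]string{"-p", "functional-496242"}, args...)...)
	cmd.Stdout = os.Stdout
	cmd.Stderr = os.Stderr
	return cmd.Run()
}

func main() {
	image := "docker.io/kicbase/echo-server:functional-496242"
	tar := filepath.Join(os.TempDir(), "echo-server-save.tar") // the test writes to its Jenkins workspace instead

	// Save the image from the cluster to a tarball, drop it, then load it back,
	// mirroring the ImageSaveToFile / ImageRemove / ImageLoadFromFile steps above.
	for _, step := range [][]string{
		{"image", "save", image, tar},
		{"image", "rm", image},
		{"image", "load", tar},
		{"image", "ls"},
	} {
		if err := run(step...); err != nil {
			panic(err)
		}
	}
}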

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.62s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:418: (dbg) Run:  docker rmi docker.io/kicbase/echo-server:functional-496242
functional_test.go:423: (dbg) Run:  out/minikube-linux-amd64 -p functional-496242 image save --daemon docker.io/kicbase/echo-server:functional-496242 --alsologtostderr
functional_test.go:428: (dbg) Run:  docker image inspect docker.io/kicbase/echo-server:functional-496242
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.62s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/List (0.48s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1455: (dbg) Run:  out/minikube-linux-amd64 -p functional-496242 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.48s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/JSONOutput (0.46s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1485: (dbg) Run:  out/minikube-linux-amd64 -p functional-496242 service list -o json
functional_test.go:1490: Took "462.844037ms" to run "out/minikube-linux-amd64 -p functional-496242 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.46s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/HTTPS (0.28s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1505: (dbg) Run:  out/minikube-linux-amd64 -p functional-496242 service --namespace=default --https --url hello-node
functional_test.go:1518: found endpoint: https://192.168.39.165:30428
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.28s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/Format (0.27s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1536: (dbg) Run:  out/minikube-linux-amd64 -p functional-496242 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.27s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/URL (0.29s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1555: (dbg) Run:  out/minikube-linux-amd64 -p functional-496242 service hello-node --url
functional_test.go:1561: found endpoint for hello-node: http://192.168.39.165:30428
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.29s)
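The ServiceCmd tests above deploy hello-node from registry.k8s.io/echoserver:1.8, expose it as a NodePort service on port 8080, and then resolve its URL (http://192.168.39.165:30428 in this run). Here is a small Go sketch that asks minikube for that URL and probes it over HTTP; the profile and service names come from the log, and error handling is deliberately minimal.

package main

import (
	"fmt"
	"io"
	"net/http"
	"os/exec"
	"strings"
)

func main() {
	// Ask minikube for the NodePort URL of the hello-node service, as the test does.
	out, err := exec.Command("minikube", "-p", "functional-496242", "service", "hello-node", "--url").Output()
	if err != nil {
		panic(err)
	}
	url := strings.TrimSpace(string(out)) // http://192.168.39.165:30428 in this run

	resp, err := http.Get(url)
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()

	body, _ := io.ReadAll(resp.Body)
	fmt.Printf("GET %s -> %s (%d bytes)\n", url, resp.Status, len(body))
}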

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_not_create (0.28s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1266: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1271: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.28s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_list (0.28s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1306: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1311: Took "236.962541ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1320: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1325: Took "44.143864ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.28s)

                                                
                                    
TestFunctional/parallel/MountCmd/any-port (8.7s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-496242 /tmp/TestFunctionalparallelMountCmdany-port3521956276/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1722445101265327969" to /tmp/TestFunctionalparallelMountCmdany-port3521956276/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1722445101265327969" to /tmp/TestFunctionalparallelMountCmdany-port3521956276/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1722445101265327969" to /tmp/TestFunctionalparallelMountCmdany-port3521956276/001/test-1722445101265327969
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-496242 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-496242 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (205.423701ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-496242 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-amd64 -p functional-496242 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Jul 31 16:58 created-by-test
-rw-r--r-- 1 docker docker 24 Jul 31 16:58 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Jul 31 16:58 test-1722445101265327969
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-amd64 -p functional-496242 ssh cat /mount-9p/test-1722445101265327969
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-496242 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [2929f3ce-58e1-41a0-a503-b1760fe7a2a2] Pending
helpers_test.go:344: "busybox-mount" [2929f3ce-58e1-41a0-a503-b1760fe7a2a2] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [2929f3ce-58e1-41a0-a503-b1760fe7a2a2] Running
helpers_test.go:344: "busybox-mount" [2929f3ce-58e1-41a0-a503-b1760fe7a2a2] Running / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [2929f3ce-58e1-41a0-a503-b1760fe7a2a2] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 6.00443526s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-496242 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-496242 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-496242 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-amd64 -p functional-496242 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-496242 /tmp/TestFunctionalparallelMountCmdany-port3521956276/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (8.70s)
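The any-port mount test starts minikube mount for a host temp directory at /mount-9p in the background, retries findmnt inside the guest (the first attempt above exits with status 1) until the 9p mount shows up, and then checks that files written on the host are visible in the guest. A simplified Go sketch of that flow follows; the temp directory, file name, and fixed retry loop are stand-ins for what the test actually does.

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"time"
)

func main() {
	profile := "functional-496242"
	hostDir, _ := os.MkdirTemp("", "mount-demo") // stand-in for the test's temp directory
	_ = os.WriteFile(filepath.Join(hostDir, "created-by-test"), []byte("hello"), 0o644)

	// Start the mount in the background, as the test's daemon helper does.
	mount := exec.Command("minikube", "mount", "-p", profile, hostDir+":/mount-9p")
	if err := mount.Start(); err != nil {
		panic(err)
	}
	defer mount.Process.Kill() // the test stops the mount process when it is done

	// The first findmnt in the log fails; poll until the 9p mount is visible.
	for i := 0; i < 10; i++ {
		if err := exec.Command("minikube", "-p", profile, "ssh", "findmnt -T /mount-9p | grep 9p").Run(); err == nil {
			break
		}
		time.Sleep(time.Second)
	}

	// The host file should now be readable inside the guest.
	out, err := exec.Command("minikube", "-p", profile, "ssh", "cat /mount-9p/created-by-test").Output()
	if err != nil {
		panic(err)
	}
	fmt.Printf("guest sees: %q\n", out)
}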

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_json_output (0.25s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1357: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1362: Took "204.404468ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1370: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1375: Took "40.248926ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.25s)
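profile_json_output only times minikube profile list -o json (and its --light variant), so the shape of the JSON is not shown in this report. The short Go sketch below runs the same command and decodes the result generically, without assuming a particular schema for the output.

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

func main() {
	// Same command the test times above.
	out, err := exec.Command("minikube", "profile", "list", "-o", "json").Output()
	if err != nil {
		panic(err)
	}

	// Decode generically; this sketch does not assume a fixed schema.
	var v any
	if err := json.Unmarshal(out, &v); err != nil {
		panic(err)
	}
	pretty, _ := json.MarshalIndent(v, "", "  ")
	fmt.Println(string(pretty))
}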

                                                
                                    
TestFunctional/parallel/MountCmd/specific-port (1.72s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-496242 /tmp/TestFunctionalparallelMountCmdspecific-port2636628352/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-496242 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-496242 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (187.20029ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-496242 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-496242 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-496242 /tmp/TestFunctionalparallelMountCmdspecific-port2636628352/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-amd64 -p functional-496242 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-496242 ssh "sudo umount -f /mount-9p": exit status 1 (178.637487ms)

                                                
                                                
-- stdout --
	umount: /mount-9p: not mounted.

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

                                                
                                                
** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-amd64 -p functional-496242 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-496242 /tmp/TestFunctionalparallelMountCmdspecific-port2636628352/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (1.72s)

                                                
                                    
TestFunctional/parallel/MountCmd/VerifyCleanup (1.4s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-496242 /tmp/TestFunctionalparallelMountCmdVerifyCleanup917903237/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-496242 /tmp/TestFunctionalparallelMountCmdVerifyCleanup917903237/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-496242 /tmp/TestFunctionalparallelMountCmdVerifyCleanup917903237/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-496242 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-496242 ssh "findmnt -T" /mount1: exit status 1 (268.716193ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-496242 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-496242 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-496242 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-amd64 mount -p functional-496242 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-496242 /tmp/TestFunctionalparallelMountCmdVerifyCleanup917903237/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-496242 /tmp/TestFunctionalparallelMountCmdVerifyCleanup917903237/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-496242 /tmp/TestFunctionalparallelMountCmdVerifyCleanup917903237/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.40s)

                                                
                                    
TestFunctional/delete_echo-server_images (0.03s)

                                                
                                                
=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:189: (dbg) Run:  docker rmi -f docker.io/kicbase/echo-server:1.0
functional_test.go:189: (dbg) Run:  docker rmi -f docker.io/kicbase/echo-server:functional-496242
--- PASS: TestFunctional/delete_echo-server_images (0.03s)

                                                
                                    
TestFunctional/delete_my-image_image (0.02s)

                                                
                                                
=== RUN   TestFunctional/delete_my-image_image
functional_test.go:197: (dbg) Run:  docker rmi -f localhost/my-image:functional-496242
--- PASS: TestFunctional/delete_my-image_image (0.02s)

                                                
                                    
TestFunctional/delete_minikube_cached_images (0.01s)

                                                
                                                
=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:205: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-496242
--- PASS: TestFunctional/delete_minikube_cached_images (0.01s)

                                                
                                    
TestMultiControlPlane/serial/StartCluster (204.43s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-amd64 start -p ha-234651 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=kvm2  --container-runtime=crio
E0731 16:59:05.345954   15259 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/addons-190022/client.crt: no such file or directory
E0731 16:59:33.031657   15259 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/addons-190022/client.crt: no such file or directory
ha_test.go:101: (dbg) Done: out/minikube-linux-amd64 start -p ha-234651 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=kvm2  --container-runtime=crio: (3m23.791151909s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-amd64 -p ha-234651 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/StartCluster (204.43s)

                                                
                                    
TestMultiControlPlane/serial/DeployApp (6.65s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-234651 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-234651 -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-linux-amd64 kubectl -p ha-234651 -- rollout status deployment/busybox: (4.669687126s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-234651 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-234651 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-234651 -- exec busybox-fc5497c4f-2w6fp -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-234651 -- exec busybox-fc5497c4f-fdmbt -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-234651 -- exec busybox-fc5497c4f-qw457 -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-234651 -- exec busybox-fc5497c4f-2w6fp -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-234651 -- exec busybox-fc5497c4f-fdmbt -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-234651 -- exec busybox-fc5497c4f-qw457 -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-234651 -- exec busybox-fc5497c4f-2w6fp -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-234651 -- exec busybox-fc5497c4f-fdmbt -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-234651 -- exec busybox-fc5497c4f-qw457 -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (6.65s)

                                                
                                    
TestMultiControlPlane/serial/PingHostFromPods (1.11s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-234651 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-234651 -- exec busybox-fc5497c4f-2w6fp -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-234651 -- exec busybox-fc5497c4f-2w6fp -- sh -c "ping -c 1 192.168.39.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-234651 -- exec busybox-fc5497c4f-fdmbt -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-234651 -- exec busybox-fc5497c4f-fdmbt -- sh -c "ping -c 1 192.168.39.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-234651 -- exec busybox-fc5497c4f-qw457 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-234651 -- exec busybox-fc5497c4f-qw457 -- sh -c "ping -c 1 192.168.39.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.11s)
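PingHostFromPods resolves host.minikube.internal inside each busybox pod (the awk 'NR==5' | cut -d' ' -f3 pipeline picks the resolved address out of nslookup's output) and then pings that address, 192.168.39.1, from the pod. Below is a short Go sketch that runs the same pipeline for one of the pods via kubectl exec; the pod name and kubectl context are taken from this run, and the sketch is illustrative only.

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	pod := "busybox-fc5497c4f-2w6fp" // one of the pods from the log
	script := "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"

	out, err := exec.Command("kubectl", "--context", "ha-234651", "exec", pod, "--",
		"sh", "-c", script).Output()
	if err != nil {
		panic(err)
	}
	hostIP := strings.TrimSpace(string(out)) // 192.168.39.1 in this run

	// Same reachability check the test performs from inside the pod.
	ping := exec.Command("kubectl", "--context", "ha-234651", "exec", pod, "--",
		"sh", "-c", "ping -c 1 "+hostIP)
	if err := ping.Run(); err != nil {
		panic(err)
	}
	fmt.Printf("pod %s can reach host %s\n", pod, hostIP)
}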

                                                
                                    
TestMultiControlPlane/serial/AddWorkerNode (53.7s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-amd64 node add -p ha-234651 -v=7 --alsologtostderr
E0731 17:02:57.005196   15259 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/functional-496242/client.crt: no such file or directory
E0731 17:02:57.010474   15259 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/functional-496242/client.crt: no such file or directory
E0731 17:02:57.020775   15259 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/functional-496242/client.crt: no such file or directory
E0731 17:02:57.041080   15259 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/functional-496242/client.crt: no such file or directory
E0731 17:02:57.081393   15259 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/functional-496242/client.crt: no such file or directory
E0731 17:02:57.162431   15259 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/functional-496242/client.crt: no such file or directory
E0731 17:02:57.323308   15259 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/functional-496242/client.crt: no such file or directory
E0731 17:02:57.643713   15259 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/functional-496242/client.crt: no such file or directory
E0731 17:02:58.284086   15259 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/functional-496242/client.crt: no such file or directory
E0731 17:02:59.564461   15259 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/functional-496242/client.crt: no such file or directory
E0731 17:03:02.125643   15259 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/functional-496242/client.crt: no such file or directory
E0731 17:03:07.246813   15259 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/functional-496242/client.crt: no such file or directory
E0731 17:03:17.487736   15259 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/functional-496242/client.crt: no such file or directory
ha_test.go:228: (dbg) Done: out/minikube-linux-amd64 node add -p ha-234651 -v=7 --alsologtostderr: (52.917042047s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-amd64 -p ha-234651 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (53.70s)

TestMultiControlPlane/serial/NodeLabels (0.07s)
=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-234651 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.07s)

TestMultiControlPlane/serial/HAppyAfterClusterStart (0.52s)
=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.52s)

TestMultiControlPlane/serial/CopyFile (12.33s)
=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:326: (dbg) Run:  out/minikube-linux-amd64 -p ha-234651 status --output json -v=7 --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-234651 cp testdata/cp-test.txt ha-234651:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-234651 ssh -n ha-234651 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-234651 cp ha-234651:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2373933749/001/cp-test_ha-234651.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-234651 ssh -n ha-234651 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-234651 cp ha-234651:/home/docker/cp-test.txt ha-234651-m02:/home/docker/cp-test_ha-234651_ha-234651-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-234651 ssh -n ha-234651 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-234651 ssh -n ha-234651-m02 "sudo cat /home/docker/cp-test_ha-234651_ha-234651-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-234651 cp ha-234651:/home/docker/cp-test.txt ha-234651-m03:/home/docker/cp-test_ha-234651_ha-234651-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-234651 ssh -n ha-234651 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-234651 ssh -n ha-234651-m03 "sudo cat /home/docker/cp-test_ha-234651_ha-234651-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-234651 cp ha-234651:/home/docker/cp-test.txt ha-234651-m04:/home/docker/cp-test_ha-234651_ha-234651-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-234651 ssh -n ha-234651 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-234651 ssh -n ha-234651-m04 "sudo cat /home/docker/cp-test_ha-234651_ha-234651-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-234651 cp testdata/cp-test.txt ha-234651-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-234651 ssh -n ha-234651-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-234651 cp ha-234651-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2373933749/001/cp-test_ha-234651-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-234651 ssh -n ha-234651-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-234651 cp ha-234651-m02:/home/docker/cp-test.txt ha-234651:/home/docker/cp-test_ha-234651-m02_ha-234651.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-234651 ssh -n ha-234651-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-234651 ssh -n ha-234651 "sudo cat /home/docker/cp-test_ha-234651-m02_ha-234651.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-234651 cp ha-234651-m02:/home/docker/cp-test.txt ha-234651-m03:/home/docker/cp-test_ha-234651-m02_ha-234651-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-234651 ssh -n ha-234651-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-234651 ssh -n ha-234651-m03 "sudo cat /home/docker/cp-test_ha-234651-m02_ha-234651-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-234651 cp ha-234651-m02:/home/docker/cp-test.txt ha-234651-m04:/home/docker/cp-test_ha-234651-m02_ha-234651-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-234651 ssh -n ha-234651-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-234651 ssh -n ha-234651-m04 "sudo cat /home/docker/cp-test_ha-234651-m02_ha-234651-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-234651 cp testdata/cp-test.txt ha-234651-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-234651 ssh -n ha-234651-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-234651 cp ha-234651-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2373933749/001/cp-test_ha-234651-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-234651 ssh -n ha-234651-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-234651 cp ha-234651-m03:/home/docker/cp-test.txt ha-234651:/home/docker/cp-test_ha-234651-m03_ha-234651.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-234651 ssh -n ha-234651-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-234651 ssh -n ha-234651 "sudo cat /home/docker/cp-test_ha-234651-m03_ha-234651.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-234651 cp ha-234651-m03:/home/docker/cp-test.txt ha-234651-m02:/home/docker/cp-test_ha-234651-m03_ha-234651-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-234651 ssh -n ha-234651-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-234651 ssh -n ha-234651-m02 "sudo cat /home/docker/cp-test_ha-234651-m03_ha-234651-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-234651 cp ha-234651-m03:/home/docker/cp-test.txt ha-234651-m04:/home/docker/cp-test_ha-234651-m03_ha-234651-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-234651 ssh -n ha-234651-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-234651 ssh -n ha-234651-m04 "sudo cat /home/docker/cp-test_ha-234651-m03_ha-234651-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-234651 cp testdata/cp-test.txt ha-234651-m04:/home/docker/cp-test.txt
E0731 17:03:37.968661   15259 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/functional-496242/client.crt: no such file or directory
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-234651 ssh -n ha-234651-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-234651 cp ha-234651-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2373933749/001/cp-test_ha-234651-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-234651 ssh -n ha-234651-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-234651 cp ha-234651-m04:/home/docker/cp-test.txt ha-234651:/home/docker/cp-test_ha-234651-m04_ha-234651.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-234651 ssh -n ha-234651-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-234651 ssh -n ha-234651 "sudo cat /home/docker/cp-test_ha-234651-m04_ha-234651.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-234651 cp ha-234651-m04:/home/docker/cp-test.txt ha-234651-m02:/home/docker/cp-test_ha-234651-m04_ha-234651-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-234651 ssh -n ha-234651-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-234651 ssh -n ha-234651-m02 "sudo cat /home/docker/cp-test_ha-234651-m04_ha-234651-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-234651 cp ha-234651-m04:/home/docker/cp-test.txt ha-234651-m03:/home/docker/cp-test_ha-234651-m04_ha-234651-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-234651 ssh -n ha-234651-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-234651 ssh -n ha-234651-m03 "sudo cat /home/docker/cp-test_ha-234651-m04_ha-234651-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (12.33s)

TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (3.48s)
=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:390: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
ha_test.go:390: (dbg) Done: out/minikube-linux-amd64 profile list --output json: (3.476433086s)
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (3.48s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.39s)
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.39s)

TestMultiControlPlane/serial/DeleteSecondaryNode (16.96s)
=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:487: (dbg) Run:  out/minikube-linux-amd64 -p ha-234651 node delete m03 -v=7 --alsologtostderr
ha_test.go:487: (dbg) Done: out/minikube-linux-amd64 -p ha-234651 node delete m03 -v=7 --alsologtostderr: (16.241980386s)
ha_test.go:493: (dbg) Run:  out/minikube-linux-amd64 -p ha-234651 status -v=7 --alsologtostderr
ha_test.go:511: (dbg) Run:  kubectl get nodes
ha_test.go:519: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (16.96s)

TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.35s)
=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:390: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.35s)

TestMultiControlPlane/serial/RestartCluster (326.03s)
=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:560: (dbg) Run:  out/minikube-linux-amd64 start -p ha-234651 --wait=true -v=7 --alsologtostderr --driver=kvm2  --container-runtime=crio
E0731 17:17:57.005421   15259 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/functional-496242/client.crt: no such file or directory
E0731 17:19:05.346309   15259 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/addons-190022/client.crt: no such file or directory
E0731 17:19:20.051520   15259 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/functional-496242/client.crt: no such file or directory
ha_test.go:560: (dbg) Done: out/minikube-linux-amd64 start -p ha-234651 --wait=true -v=7 --alsologtostderr --driver=kvm2  --container-runtime=crio: (5m25.276791319s)
ha_test.go:566: (dbg) Run:  out/minikube-linux-amd64 -p ha-234651 status -v=7 --alsologtostderr
ha_test.go:584: (dbg) Run:  kubectl get nodes
ha_test.go:592: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (326.03s)

TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.38s)
=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:390: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.38s)

TestMultiControlPlane/serial/AddSecondaryNode (76.49s)
=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:605: (dbg) Run:  out/minikube-linux-amd64 node add -p ha-234651 --control-plane -v=7 --alsologtostderr
ha_test.go:605: (dbg) Done: out/minikube-linux-amd64 node add -p ha-234651 --control-plane -v=7 --alsologtostderr: (1m15.707220203s)
ha_test.go:611: (dbg) Run:  out/minikube-linux-amd64 -p ha-234651 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (76.49s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.52s)
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.52s)

TestJSONOutput/start/Command (94.99s)
=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-878906 --output=json --user=testUser --memory=2200 --wait=true --driver=kvm2  --container-runtime=crio
E0731 17:22:57.004878   15259 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/functional-496242/client.crt: no such file or directory
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p json-output-878906 --output=json --user=testUser --memory=2200 --wait=true --driver=kvm2  --container-runtime=crio: (1m34.987807608s)
--- PASS: TestJSONOutput/start/Command (94.99s)

TestJSONOutput/start/Audit (0s)
=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Command (0.69s)
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 pause -p json-output-878906 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.69s)

TestJSONOutput/pause/Audit (0s)
=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Command (0.59s)
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 unpause -p json-output-878906 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.59s)

TestJSONOutput/unpause/Audit (0s)
=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (6.6s)
=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p json-output-878906 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p json-output-878906 --output=json --user=testUser: (6.597249151s)
--- PASS: TestJSONOutput/stop/Command (6.60s)

TestJSONOutput/stop/Audit (0s)
=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.18s)
=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-error-253431 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p json-output-error-253431 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (56.052157ms)

-- stdout --
	{"specversion":"1.0","id":"ce7a02fb-96a4-4229-ae91-f5fbac6e9306","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-253431] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"349d316e-3667-49e9-918a-ae1c54f7a7b6","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19349"}}
	{"specversion":"1.0","id":"eb788a5a-d63f-4a86-9466-be07dc3d2988","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"b6c6052f-1f5d-45ef-93bc-ea0bc50bcaab","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/19349-8084/kubeconfig"}}
	{"specversion":"1.0","id":"eed8683a-4c66-4109-94b6-dbfbf89c780b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/19349-8084/.minikube"}}
	{"specversion":"1.0","id":"83a8488b-4ede-4b6b-9bcb-2d5b898de651","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"77a279ba-edfc-4692-a255-0b4d6ed0aa4c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"134c8f42-199a-4ae0-aa32-e346eaba770d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-253431" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p json-output-error-253431
--- PASS: TestErrorJSONOutput (0.18s)
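Note: the -- stdout -- block above shows minikube's --output=json events, one CloudEvents-style JSON object per line. As a minimal sketch (assuming only Go's standard encoding/json and the field names visible in those events; the cloudEvent type below is illustrative, not a type taken from minikube's source), one such line can be decoded like this:

	package main

	import (
		"encoding/json"
		"fmt"
	)

	// cloudEvent mirrors only the fields visible in the JSON lines above.
	type cloudEvent struct {
		SpecVersion     string            `json:"specversion"`
		ID              string            `json:"id"`
		Source          string            `json:"source"`
		Type            string            `json:"type"`
		DataContentType string            `json:"datacontenttype"`
		Data            map[string]string `json:"data"`
	}

	func main() {
		// One line copied verbatim from the -- stdout -- block above.
		line := `{"specversion":"1.0","id":"134c8f42-199a-4ae0-aa32-e346eaba770d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}`

		var ev cloudEvent
		if err := json.Unmarshal([]byte(line), &ev); err != nil {
			panic(err)
		}
		// For error events, the data map carries the name, exitcode and message shown above.
		fmt.Println(ev.Type, ev.Data["name"], ev.Data["exitcode"], ev.Data["message"])
	}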
TestMainNoArgs (0.04s)
=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.04s)

TestMinikubeProfile (83.55s)
=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p first-415673 --driver=kvm2  --container-runtime=crio
E0731 17:24:05.346761   15259 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/addons-190022/client.crt: no such file or directory
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p first-415673 --driver=kvm2  --container-runtime=crio: (38.092128572s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p second-418424 --driver=kvm2  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p second-418424 --driver=kvm2  --container-runtime=crio: (42.908739701s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile first-415673
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile second-418424
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "second-418424" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p second-418424
helpers_test.go:175: Cleaning up "first-415673" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p first-415673
--- PASS: TestMinikubeProfile (83.55s)

TestMountStart/serial/StartWithMountFirst (26.8s)
=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-1-413820 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-1-413820 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio: (25.797213712s)
--- PASS: TestMountStart/serial/StartWithMountFirst (26.80s)

TestMountStart/serial/VerifyMountFirst (0.36s)
=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-413820 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-413820 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountFirst (0.36s)

TestMountStart/serial/StartWithMountSecond (25.17s)
=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-431391 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-431391 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio: (24.166310493s)
--- PASS: TestMountStart/serial/StartWithMountSecond (25.17s)

TestMountStart/serial/VerifyMountSecond (0.36s)
=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-431391 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-431391 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountSecond (0.36s)

TestMountStart/serial/DeleteFirst (0.85s)
=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p mount-start-1-413820 --alsologtostderr -v=5
--- PASS: TestMountStart/serial/DeleteFirst (0.85s)

TestMountStart/serial/VerifyMountPostDelete (0.36s)
=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-431391 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-431391 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.36s)

TestMountStart/serial/Stop (1.26s)
=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-linux-amd64 stop -p mount-start-2-431391
mount_start_test.go:155: (dbg) Done: out/minikube-linux-amd64 stop -p mount-start-2-431391: (1.264572413s)
--- PASS: TestMountStart/serial/Stop (1.26s)

TestMountStart/serial/RestartStopped (22.42s)
=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-431391
mount_start_test.go:166: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-431391: (21.415828912s)
--- PASS: TestMountStart/serial/RestartStopped (22.42s)

TestMountStart/serial/VerifyMountPostStop (0.36s)
=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-431391 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-431391 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.36s)

TestMultiNode/serial/FreshStart2Nodes (115.71s)
=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-498089 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio
E0731 17:27:08.392781   15259 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/addons-190022/client.crt: no such file or directory
E0731 17:27:57.005078   15259 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/functional-496242/client.crt: no such file or directory
multinode_test.go:96: (dbg) Done: out/minikube-linux-amd64 start -p multinode-498089 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio: (1m55.302291234s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-amd64 -p multinode-498089 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (115.71s)

TestMultiNode/serial/DeployApp2Nodes (5.15s)
=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-498089 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-498089 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-amd64 kubectl -p multinode-498089 -- rollout status deployment/busybox: (3.744417143s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-498089 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-498089 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-498089 -- exec busybox-fc5497c4f-5dpsw -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-498089 -- exec busybox-fc5497c4f-tm4jn -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-498089 -- exec busybox-fc5497c4f-5dpsw -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-498089 -- exec busybox-fc5497c4f-tm4jn -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-498089 -- exec busybox-fc5497c4f-5dpsw -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-498089 -- exec busybox-fc5497c4f-tm4jn -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (5.15s)

TestMultiNode/serial/PingHostFrom2Pods (0.73s)
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-498089 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-498089 -- exec busybox-fc5497c4f-5dpsw -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-498089 -- exec busybox-fc5497c4f-5dpsw -- sh -c "ping -c 1 192.168.39.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-498089 -- exec busybox-fc5497c4f-tm4jn -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-498089 -- exec busybox-fc5497c4f-tm4jn -- sh -c "ping -c 1 192.168.39.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.73s)

TestMultiNode/serial/AddNode (46.34s)
=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-498089 -v 3 --alsologtostderr
E0731 17:29:05.346440   15259 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/addons-190022/client.crt: no such file or directory
multinode_test.go:121: (dbg) Done: out/minikube-linux-amd64 node add -p multinode-498089 -v 3 --alsologtostderr: (45.80265056s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p multinode-498089 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (46.34s)

TestMultiNode/serial/MultiNodeLabels (0.06s)
=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-498089 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.06s)

TestMultiNode/serial/ProfileList (0.2s)
=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.20s)

TestMultiNode/serial/CopyFile (6.9s)
=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-amd64 -p multinode-498089 status --output json --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-498089 cp testdata/cp-test.txt multinode-498089:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-498089 ssh -n multinode-498089 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-498089 cp multinode-498089:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile179218134/001/cp-test_multinode-498089.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-498089 ssh -n multinode-498089 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-498089 cp multinode-498089:/home/docker/cp-test.txt multinode-498089-m02:/home/docker/cp-test_multinode-498089_multinode-498089-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-498089 ssh -n multinode-498089 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-498089 ssh -n multinode-498089-m02 "sudo cat /home/docker/cp-test_multinode-498089_multinode-498089-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-498089 cp multinode-498089:/home/docker/cp-test.txt multinode-498089-m03:/home/docker/cp-test_multinode-498089_multinode-498089-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-498089 ssh -n multinode-498089 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-498089 ssh -n multinode-498089-m03 "sudo cat /home/docker/cp-test_multinode-498089_multinode-498089-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-498089 cp testdata/cp-test.txt multinode-498089-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-498089 ssh -n multinode-498089-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-498089 cp multinode-498089-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile179218134/001/cp-test_multinode-498089-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-498089 ssh -n multinode-498089-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-498089 cp multinode-498089-m02:/home/docker/cp-test.txt multinode-498089:/home/docker/cp-test_multinode-498089-m02_multinode-498089.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-498089 ssh -n multinode-498089-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-498089 ssh -n multinode-498089 "sudo cat /home/docker/cp-test_multinode-498089-m02_multinode-498089.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-498089 cp multinode-498089-m02:/home/docker/cp-test.txt multinode-498089-m03:/home/docker/cp-test_multinode-498089-m02_multinode-498089-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-498089 ssh -n multinode-498089-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-498089 ssh -n multinode-498089-m03 "sudo cat /home/docker/cp-test_multinode-498089-m02_multinode-498089-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-498089 cp testdata/cp-test.txt multinode-498089-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-498089 ssh -n multinode-498089-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-498089 cp multinode-498089-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile179218134/001/cp-test_multinode-498089-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-498089 ssh -n multinode-498089-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-498089 cp multinode-498089-m03:/home/docker/cp-test.txt multinode-498089:/home/docker/cp-test_multinode-498089-m03_multinode-498089.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-498089 ssh -n multinode-498089-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-498089 ssh -n multinode-498089 "sudo cat /home/docker/cp-test_multinode-498089-m03_multinode-498089.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-498089 cp multinode-498089-m03:/home/docker/cp-test.txt multinode-498089-m02:/home/docker/cp-test_multinode-498089-m03_multinode-498089-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-498089 ssh -n multinode-498089-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-498089 ssh -n multinode-498089-m02 "sudo cat /home/docker/cp-test_multinode-498089-m03_multinode-498089-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (6.90s)

TestMultiNode/serial/StopNode (2.13s)
=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-amd64 -p multinode-498089 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-amd64 -p multinode-498089 node stop m03: (1.324082219s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-amd64 -p multinode-498089 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-498089 status: exit status 7 (401.250367ms)

-- stdout --
	multinode-498089
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-498089-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-498089-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p multinode-498089 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-498089 status --alsologtostderr: exit status 7 (404.12097ms)

-- stdout --
	multinode-498089
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-498089-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-498089-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I0731 17:29:23.705794   43881 out.go:291] Setting OutFile to fd 1 ...
	I0731 17:29:23.706016   43881 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 17:29:23.706023   43881 out.go:304] Setting ErrFile to fd 2...
	I0731 17:29:23.706027   43881 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 17:29:23.706223   43881 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19349-8084/.minikube/bin
	I0731 17:29:23.706367   43881 out.go:298] Setting JSON to false
	I0731 17:29:23.706390   43881 mustload.go:65] Loading cluster: multinode-498089
	I0731 17:29:23.706483   43881 notify.go:220] Checking for updates...
	I0731 17:29:23.706736   43881 config.go:182] Loaded profile config "multinode-498089": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0731 17:29:23.706749   43881 status.go:255] checking status of multinode-498089 ...
	I0731 17:29:23.707088   43881 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 17:29:23.707172   43881 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 17:29:23.727239   43881 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46405
	I0731 17:29:23.727729   43881 main.go:141] libmachine: () Calling .GetVersion
	I0731 17:29:23.728257   43881 main.go:141] libmachine: Using API Version  1
	I0731 17:29:23.728278   43881 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 17:29:23.728649   43881 main.go:141] libmachine: () Calling .GetMachineName
	I0731 17:29:23.728857   43881 main.go:141] libmachine: (multinode-498089) Calling .GetState
	I0731 17:29:23.730292   43881 status.go:330] multinode-498089 host status = "Running" (err=<nil>)
	I0731 17:29:23.730311   43881 host.go:66] Checking if "multinode-498089" exists ...
	I0731 17:29:23.730589   43881 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 17:29:23.730654   43881 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 17:29:23.745993   43881 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46137
	I0731 17:29:23.746396   43881 main.go:141] libmachine: () Calling .GetVersion
	I0731 17:29:23.746893   43881 main.go:141] libmachine: Using API Version  1
	I0731 17:29:23.746913   43881 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 17:29:23.747186   43881 main.go:141] libmachine: () Calling .GetMachineName
	I0731 17:29:23.747340   43881 main.go:141] libmachine: (multinode-498089) Calling .GetIP
	I0731 17:29:23.750007   43881 main.go:141] libmachine: (multinode-498089) DBG | domain multinode-498089 has defined MAC address 52:54:00:b0:35:3d in network mk-multinode-498089
	I0731 17:29:23.750400   43881 main.go:141] libmachine: (multinode-498089) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:35:3d", ip: ""} in network mk-multinode-498089: {Iface:virbr1 ExpiryTime:2024-07-31 18:26:40 +0000 UTC Type:0 Mac:52:54:00:b0:35:3d Iaid: IPaddr:192.168.39.100 Prefix:24 Hostname:multinode-498089 Clientid:01:52:54:00:b0:35:3d}
	I0731 17:29:23.750435   43881 main.go:141] libmachine: (multinode-498089) DBG | domain multinode-498089 has defined IP address 192.168.39.100 and MAC address 52:54:00:b0:35:3d in network mk-multinode-498089
	I0731 17:29:23.750620   43881 host.go:66] Checking if "multinode-498089" exists ...
	I0731 17:29:23.750883   43881 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 17:29:23.750913   43881 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 17:29:23.765230   43881 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39621
	I0731 17:29:23.765717   43881 main.go:141] libmachine: () Calling .GetVersion
	I0731 17:29:23.766091   43881 main.go:141] libmachine: Using API Version  1
	I0731 17:29:23.766114   43881 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 17:29:23.766475   43881 main.go:141] libmachine: () Calling .GetMachineName
	I0731 17:29:23.766683   43881 main.go:141] libmachine: (multinode-498089) Calling .DriverName
	I0731 17:29:23.766892   43881 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0731 17:29:23.766916   43881 main.go:141] libmachine: (multinode-498089) Calling .GetSSHHostname
	I0731 17:29:23.769777   43881 main.go:141] libmachine: (multinode-498089) DBG | domain multinode-498089 has defined MAC address 52:54:00:b0:35:3d in network mk-multinode-498089
	I0731 17:29:23.770118   43881 main.go:141] libmachine: (multinode-498089) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:35:3d", ip: ""} in network mk-multinode-498089: {Iface:virbr1 ExpiryTime:2024-07-31 18:26:40 +0000 UTC Type:0 Mac:52:54:00:b0:35:3d Iaid: IPaddr:192.168.39.100 Prefix:24 Hostname:multinode-498089 Clientid:01:52:54:00:b0:35:3d}
	I0731 17:29:23.770144   43881 main.go:141] libmachine: (multinode-498089) DBG | domain multinode-498089 has defined IP address 192.168.39.100 and MAC address 52:54:00:b0:35:3d in network mk-multinode-498089
	I0731 17:29:23.770279   43881 main.go:141] libmachine: (multinode-498089) Calling .GetSSHPort
	I0731 17:29:23.770491   43881 main.go:141] libmachine: (multinode-498089) Calling .GetSSHKeyPath
	I0731 17:29:23.770642   43881 main.go:141] libmachine: (multinode-498089) Calling .GetSSHUsername
	I0731 17:29:23.770794   43881 sshutil.go:53] new ssh client: &{IP:192.168.39.100 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19349-8084/.minikube/machines/multinode-498089/id_rsa Username:docker}
	I0731 17:29:23.851561   43881 ssh_runner.go:195] Run: systemctl --version
	I0731 17:29:23.856943   43881 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0731 17:29:23.870234   43881 kubeconfig.go:125] found "multinode-498089" server: "https://192.168.39.100:8443"
	I0731 17:29:23.870258   43881 api_server.go:166] Checking apiserver status ...
	I0731 17:29:23.870286   43881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 17:29:23.882607   43881 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1122/cgroup
	W0731 17:29:23.891338   43881 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1122/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0731 17:29:23.891405   43881 ssh_runner.go:195] Run: ls
	I0731 17:29:23.895179   43881 api_server.go:253] Checking apiserver healthz at https://192.168.39.100:8443/healthz ...
	I0731 17:29:23.899069   43881 api_server.go:279] https://192.168.39.100:8443/healthz returned 200:
	ok
	I0731 17:29:23.899088   43881 status.go:422] multinode-498089 apiserver status = Running (err=<nil>)
	I0731 17:29:23.899096   43881 status.go:257] multinode-498089 status: &{Name:multinode-498089 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0731 17:29:23.899138   43881 status.go:255] checking status of multinode-498089-m02 ...
	I0731 17:29:23.899415   43881 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 17:29:23.899451   43881 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 17:29:23.914522   43881 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43329
	I0731 17:29:23.914943   43881 main.go:141] libmachine: () Calling .GetVersion
	I0731 17:29:23.915430   43881 main.go:141] libmachine: Using API Version  1
	I0731 17:29:23.915451   43881 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 17:29:23.915738   43881 main.go:141] libmachine: () Calling .GetMachineName
	I0731 17:29:23.915935   43881 main.go:141] libmachine: (multinode-498089-m02) Calling .GetState
	I0731 17:29:23.917469   43881 status.go:330] multinode-498089-m02 host status = "Running" (err=<nil>)
	I0731 17:29:23.917484   43881 host.go:66] Checking if "multinode-498089-m02" exists ...
	I0731 17:29:23.917754   43881 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 17:29:23.917789   43881 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 17:29:23.932175   43881 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41595
	I0731 17:29:23.932592   43881 main.go:141] libmachine: () Calling .GetVersion
	I0731 17:29:23.933014   43881 main.go:141] libmachine: Using API Version  1
	I0731 17:29:23.933035   43881 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 17:29:23.933309   43881 main.go:141] libmachine: () Calling .GetMachineName
	I0731 17:29:23.933485   43881 main.go:141] libmachine: (multinode-498089-m02) Calling .GetIP
	I0731 17:29:23.935817   43881 main.go:141] libmachine: (multinode-498089-m02) DBG | domain multinode-498089-m02 has defined MAC address 52:54:00:56:d5:17 in network mk-multinode-498089
	I0731 17:29:23.936273   43881 main.go:141] libmachine: (multinode-498089-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:56:d5:17", ip: ""} in network mk-multinode-498089: {Iface:virbr1 ExpiryTime:2024-07-31 18:27:49 +0000 UTC Type:0 Mac:52:54:00:56:d5:17 Iaid: IPaddr:192.168.39.228 Prefix:24 Hostname:multinode-498089-m02 Clientid:01:52:54:00:56:d5:17}
	I0731 17:29:23.936307   43881 main.go:141] libmachine: (multinode-498089-m02) DBG | domain multinode-498089-m02 has defined IP address 192.168.39.228 and MAC address 52:54:00:56:d5:17 in network mk-multinode-498089
	I0731 17:29:23.936479   43881 host.go:66] Checking if "multinode-498089-m02" exists ...
	I0731 17:29:23.936974   43881 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 17:29:23.937016   43881 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 17:29:23.951588   43881 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40519
	I0731 17:29:23.951921   43881 main.go:141] libmachine: () Calling .GetVersion
	I0731 17:29:23.952392   43881 main.go:141] libmachine: Using API Version  1
	I0731 17:29:23.952415   43881 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 17:29:23.952690   43881 main.go:141] libmachine: () Calling .GetMachineName
	I0731 17:29:23.952874   43881 main.go:141] libmachine: (multinode-498089-m02) Calling .DriverName
	I0731 17:29:23.953055   43881 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0731 17:29:23.953075   43881 main.go:141] libmachine: (multinode-498089-m02) Calling .GetSSHHostname
	I0731 17:29:23.955739   43881 main.go:141] libmachine: (multinode-498089-m02) DBG | domain multinode-498089-m02 has defined MAC address 52:54:00:56:d5:17 in network mk-multinode-498089
	I0731 17:29:23.956130   43881 main.go:141] libmachine: (multinode-498089-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:56:d5:17", ip: ""} in network mk-multinode-498089: {Iface:virbr1 ExpiryTime:2024-07-31 18:27:49 +0000 UTC Type:0 Mac:52:54:00:56:d5:17 Iaid: IPaddr:192.168.39.228 Prefix:24 Hostname:multinode-498089-m02 Clientid:01:52:54:00:56:d5:17}
	I0731 17:29:23.956150   43881 main.go:141] libmachine: (multinode-498089-m02) DBG | domain multinode-498089-m02 has defined IP address 192.168.39.228 and MAC address 52:54:00:56:d5:17 in network mk-multinode-498089
	I0731 17:29:23.956318   43881 main.go:141] libmachine: (multinode-498089-m02) Calling .GetSSHPort
	I0731 17:29:23.956486   43881 main.go:141] libmachine: (multinode-498089-m02) Calling .GetSSHKeyPath
	I0731 17:29:23.956612   43881 main.go:141] libmachine: (multinode-498089-m02) Calling .GetSSHUsername
	I0731 17:29:23.956730   43881 sshutil.go:53] new ssh client: &{IP:192.168.39.228 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19349-8084/.minikube/machines/multinode-498089-m02/id_rsa Username:docker}
	I0731 17:29:24.038186   43881 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0731 17:29:24.051332   43881 status.go:257] multinode-498089-m02 status: &{Name:multinode-498089-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0731 17:29:24.051387   43881 status.go:255] checking status of multinode-498089-m03 ...
	I0731 17:29:24.051763   43881 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 17:29:24.051810   43881 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 17:29:24.067103   43881 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37261
	I0731 17:29:24.067622   43881 main.go:141] libmachine: () Calling .GetVersion
	I0731 17:29:24.068095   43881 main.go:141] libmachine: Using API Version  1
	I0731 17:29:24.068116   43881 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 17:29:24.068425   43881 main.go:141] libmachine: () Calling .GetMachineName
	I0731 17:29:24.068618   43881 main.go:141] libmachine: (multinode-498089-m03) Calling .GetState
	I0731 17:29:24.070179   43881 status.go:330] multinode-498089-m03 host status = "Stopped" (err=<nil>)
	I0731 17:29:24.070193   43881 status.go:343] host is not running, skipping remaining checks
	I0731 17:29:24.070199   43881 status.go:257] multinode-498089-m03 status: &{Name:multinode-498089-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.13s)
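The status log above is the per-node checklist that produced this PASS: SSH into the host, confirm the kubelet unit is active, locate the kube-apiserver process and its cgroup, then probe /healthz on the advertised address. A rough manual equivalent, assuming the profile name and IP from this run (anonymous /healthz access is usually permitted, but can vary with RBAC):

	# Hedged sketch of the same checks, run by hand against this profile.
	minikube ssh -p multinode-498089 "sudo systemctl is-active --quiet service kubelet" && echo "kubelet: Running"
	minikube ssh -p multinode-498089 "sudo pgrep -xnf kube-apiserver.*minikube.*"   # apiserver PID (1122 in this run)
	curl -ks https://192.168.39.100:8443/healthz && echo                            # expect: ok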

                                                
                                    
TestMultiNode/serial/StartAfterStop (38.95s)

                                                
                                                
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-amd64 -p multinode-498089 node start m03 -v=7 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-amd64 -p multinode-498089 node start m03 -v=7 --alsologtostderr: (38.318553021s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-amd64 -p multinode-498089 status -v=7 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (38.95s)
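StartAfterStop brings the stopped worker back with `node start` and then re-verifies cluster state; the same two-step check can be replayed by hand (a sketch, reusing this run's profile and node name):

	# Restart the previously stopped worker, then confirm every node reports back.
	minikube -p multinode-498089 node start m03 --alsologtostderr -v=7
	minikube -p multinode-498089 status
	kubectl get nodes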

                                                
                                    
TestMultiNode/serial/DeleteNode (2.39s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-amd64 -p multinode-498089 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-amd64 -p multinode-498089 node delete m03: (1.887319481s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p multinode-498089 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (2.39s)
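The last assertion above uses a go-template that prints one Ready-condition status per node; with the shell escaping stripped, the same check looks like this (a sketch of the command, not part of the test source):

	# Prints True/False/Unknown on its own line for each node's Ready condition.
	kubectl get nodes -o go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}}{{.status}}{{"\n"}}{{end}}{{end}}{{end}}'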

                                                
                                    
TestMultiNode/serial/RestartMultiNode (181.98s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-498089 --wait=true -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio
E0731 17:39:05.347155   15259 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/addons-190022/client.crt: no such file or directory
multinode_test.go:376: (dbg) Done: out/minikube-linux-amd64 start -p multinode-498089 --wait=true -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio: (3m1.487895692s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-amd64 -p multinode-498089 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (181.98s)

                                                
                                    
TestMultiNode/serial/ValidateNameConflict (42.36s)

                                                
                                                
=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-498089
multinode_test.go:464: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-498089-m02 --driver=kvm2  --container-runtime=crio
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-498089-m02 --driver=kvm2  --container-runtime=crio: exit status 14 (57.250159ms)

                                                
                                                
-- stdout --
	* [multinode-498089-m02] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19349
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19349-8084/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19349-8084/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! Profile name 'multinode-498089-m02' is duplicated with machine name 'multinode-498089-m02' in profile 'multinode-498089'
	X Exiting due to MK_USAGE: Profile name should be unique

                                                
                                                
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-498089-m03 --driver=kvm2  --container-runtime=crio
multinode_test.go:472: (dbg) Done: out/minikube-linux-amd64 start -p multinode-498089-m03 --driver=kvm2  --container-runtime=crio: (41.312937707s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-498089
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-amd64 node add -p multinode-498089: exit status 80 (206.228108ms)

                                                
                                                
-- stdout --
	* Adding node m03 to cluster multinode-498089 as [worker]
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-498089-m03 already exists in multinode-498089-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-amd64 delete -p multinode-498089-m03
--- PASS: TestMultiNode/serial/ValidateNameConflict (42.36s)
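Both expected failures here are name collisions: `multinode-498089-m02` is already a machine inside the `multinode-498089` profile, and the newly created `multinode-498089-m03` profile shadows the node name that `node add` would assign next. A hedged way to avoid this when creating side profiles is to check what already exists first (`some-unique-name` below is a hypothetical placeholder):

	# Inspect existing profiles and the nodes inside the multinode profile before choosing a name.
	minikube profile list
	minikube node list -p multinode-498089
	minikube start -p some-unique-name --driver=kvm2 --container-runtime=crio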

                                                
                                    
TestScheduledStopUnix (109.75s)

                                                
                                                
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-amd64 start -p scheduled-stop-930814 --memory=2048 --driver=kvm2  --container-runtime=crio
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-amd64 start -p scheduled-stop-930814 --memory=2048 --driver=kvm2  --container-runtime=crio: (38.223048086s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-930814 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-930814 -n scheduled-stop-930814
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-930814 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-930814 --cancel-scheduled
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-930814 -n scheduled-stop-930814
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-930814
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-930814 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-930814
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p scheduled-stop-930814: exit status 7 (63.71829ms)

                                                
                                                
-- stdout --
	scheduled-stop-930814
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-930814 -n scheduled-stop-930814
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-930814 -n scheduled-stop-930814: exit status 7 (61.089993ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-930814" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p scheduled-stop-930814
--- PASS: TestScheduledStopUnix (109.75s)
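The sequence above exercises the scheduled-stop flow: schedule a stop, cancel it, reschedule with a short delay, and finally observe the host reach Stopped (exit status 7 from `minikube status` is the expected code for a stopped host, as the test notes). A condensed sketch of the same flow:

	# Schedule a stop five minutes out, inspect the pending timer, then cancel it.
	minikube stop -p scheduled-stop-930814 --schedule 5m
	minikube status -p scheduled-stop-930814 --format={{.TimeToStop}}
	minikube stop -p scheduled-stop-930814 --cancel-scheduled
	# Reschedule with a 15s delay and wait for the host to shut down.
	minikube stop -p scheduled-stop-930814 --schedule 15s
	sleep 20; minikube status -p scheduled-stop-930814   # exits 7 once the host is Stopped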

                                                
                                    
TestRunningBinaryUpgrade (254.75s)

                                                
                                                
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.26.0.3388776840 start -p running-upgrade-262154 --memory=2200 --vm-driver=kvm2  --container-runtime=crio
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.26.0.3388776840 start -p running-upgrade-262154 --memory=2200 --vm-driver=kvm2  --container-runtime=crio: (2m20.735851034s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-amd64 start -p running-upgrade-262154 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-amd64 start -p running-upgrade-262154 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (1m49.723582498s)
helpers_test.go:175: Cleaning up "running-upgrade-262154" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p running-upgrade-262154
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p running-upgrade-262154: (1.89959118s)
--- PASS: TestRunningBinaryUpgrade (254.75s)

                                                
                                    
TestNoKubernetes/serial/StartNoK8sWithVersion (0.07s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-231031 --no-kubernetes --kubernetes-version=1.20 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p NoKubernetes-231031 --no-kubernetes --kubernetes-version=1.20 --driver=kvm2  --container-runtime=crio: exit status 14 (66.345422ms)

                                                
                                                
-- stdout --
	* [NoKubernetes-231031] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19349
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19349-8084/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19349-8084/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.07s)
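This exit-14 failure is the intended guard: `--no-kubernetes` and `--kubernetes-version` are mutually exclusive. Following the hint printed in the error output, a hedged fix is to drop (or unset) the version before starting without Kubernetes:

	# Clear any globally pinned version, then start the profile with Kubernetes disabled.
	minikube config unset kubernetes-version
	minikube start -p NoKubernetes-231031 --no-kubernetes --driver=kvm2 --container-runtime=crio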

                                                
                                    
TestNoKubernetes/serial/StartWithK8s (70.2s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-231031 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:95: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-231031 --driver=kvm2  --container-runtime=crio: (1m9.973657941s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-231031 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (70.20s)

                                                
                                    
TestNetworkPlugins/group/false (2.86s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-amd64 start -p false-985288 --memory=2048 --alsologtostderr --cni=false --driver=kvm2  --container-runtime=crio
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p false-985288 --memory=2048 --alsologtostderr --cni=false --driver=kvm2  --container-runtime=crio: exit status 14 (94.675923ms)

                                                
                                                
-- stdout --
	* [false-985288] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19349
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19349-8084/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19349-8084/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on user configuration
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0731 17:47:20.324211   51569 out.go:291] Setting OutFile to fd 1 ...
	I0731 17:47:20.324318   51569 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 17:47:20.324327   51569 out.go:304] Setting ErrFile to fd 2...
	I0731 17:47:20.324336   51569 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 17:47:20.324501   51569 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19349-8084/.minikube/bin
	I0731 17:47:20.325014   51569 out.go:298] Setting JSON to false
	I0731 17:47:20.325960   51569 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":5384,"bootTime":1722442656,"procs":188,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1065-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0731 17:47:20.326019   51569 start.go:139] virtualization: kvm guest
	I0731 17:47:20.328157   51569 out.go:177] * [false-985288] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0731 17:47:20.329645   51569 notify.go:220] Checking for updates...
	I0731 17:47:20.329651   51569 out.go:177]   - MINIKUBE_LOCATION=19349
	I0731 17:47:20.331093   51569 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0731 17:47:20.332323   51569 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19349-8084/kubeconfig
	I0731 17:47:20.333467   51569 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19349-8084/.minikube
	I0731 17:47:20.334647   51569 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0731 17:47:20.335697   51569 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0731 17:47:20.337169   51569 config.go:182] Loaded profile config "NoKubernetes-231031": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0731 17:47:20.337285   51569 config.go:182] Loaded profile config "offline-crio-195412": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0731 17:47:20.337397   51569 driver.go:392] Setting default libvirt URI to qemu:///system
	I0731 17:47:20.371327   51569 out.go:177] * Using the kvm2 driver based on user configuration
	I0731 17:47:20.372661   51569 start.go:297] selected driver: kvm2
	I0731 17:47:20.372677   51569 start.go:901] validating driver "kvm2" against <nil>
	I0731 17:47:20.372690   51569 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0731 17:47:20.374559   51569 out.go:177] 
	W0731 17:47:20.375801   51569 out.go:239] X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	I0731 17:47:20.377012   51569 out.go:177] 

                                                
                                                
** /stderr **
net_test.go:88: 
----------------------- debugLogs start: false-985288 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-985288

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-985288

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-985288

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-985288

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-985288

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-985288

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-985288

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-985288

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-985288

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-985288

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "false-985288" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-985288"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "false-985288" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-985288"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "false-985288" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-985288"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-985288

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "false-985288" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-985288"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "false-985288" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-985288"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "false-985288" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "false-985288" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "false-985288" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "false-985288" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "false-985288" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "false-985288" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "false-985288" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "false-985288" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "false-985288" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-985288"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "false-985288" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-985288"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "false-985288" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-985288"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "false-985288" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-985288"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "false-985288" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-985288"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "false-985288" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "false-985288" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "false-985288" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "false-985288" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-985288"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "false-985288" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-985288"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "false-985288" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-985288"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-985288" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-985288"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-985288" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-985288"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: false-985288

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "false-985288" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-985288"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "false-985288" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-985288"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "false-985288" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-985288"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "false-985288" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-985288"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "false-985288" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-985288"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "false-985288" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-985288"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-985288" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-985288"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-985288" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-985288"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "false-985288" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-985288"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "false-985288" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-985288"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "false-985288" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-985288"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "false-985288" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-985288"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "false-985288" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-985288"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "false-985288" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-985288"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "false-985288" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-985288"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "false-985288" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-985288"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "false-985288" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-985288"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "false-985288" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-985288"

                                                
                                                
----------------------- debugLogs end: false-985288 [took: 2.621430266s] --------------------------------
helpers_test.go:175: Cleaning up "false-985288" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p false-985288
--- PASS: TestNetworkPlugins/group/false (2.86s)
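The guard exercised here is that the crio runtime refuses `--cni=false`, because CRI-O depends on a CNI plugin for pod networking. Any of the CNI selections exercised later in this report would be accepted instead; a sketch:

	# Rejected: crio with CNI disabled (exit 14, MK_USAGE).
	minikube start -p false-985288 --memory=2048 --cni=false --driver=kvm2 --container-runtime=crio
	# Accepted alternatives from this run: the default auto CNI, kindnet, calico, or a custom manifest.
	minikube start -p kindnet-985288 --cni=kindnet --driver=kvm2 --container-runtime=crio
	minikube start -p custom-flannel-985288 --cni=testdata/kube-flannel.yaml --driver=kvm2 --container-runtime=crio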

                                                
                                    
TestNoKubernetes/serial/StartWithStopK8s (40.92s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-231031 --no-kubernetes --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-231031 --no-kubernetes --driver=kvm2  --container-runtime=crio: (39.685003508s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-231031 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-linux-amd64 -p NoKubernetes-231031 status -o json: exit status 2 (226.773612ms)

                                                
                                                
-- stdout --
	{"Name":"NoKubernetes-231031","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

                                                
                                                
-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-linux-amd64 delete -p NoKubernetes-231031
no_kubernetes_test.go:124: (dbg) Done: out/minikube-linux-amd64 delete -p NoKubernetes-231031: (1.00398042s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (40.92s)

                                                
                                    
TestNoKubernetes/serial/Start (72.18s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-231031 --no-kubernetes --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:136: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-231031 --no-kubernetes --driver=kvm2  --container-runtime=crio: (1m12.180519056s)
--- PASS: TestNoKubernetes/serial/Start (72.18s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunning (0.21s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-231031 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-231031 "sudo systemctl is-active --quiet service kubelet": exit status 1 (212.299364ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.21s)
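The verification above asks systemd on the guest directly: `systemctl is-active` exits 0 only when the unit is active, so the observed exit status 3 confirms the kubelet is not running. A sketch of the same probe:

	# Exit 0 = kubelet active; non-zero (3 here) = inactive. --quiet suppresses the textual state.
	minikube ssh -p NoKubernetes-231031 "sudo systemctl is-active --quiet service kubelet"
	echo $?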

                                                
                                    
TestNoKubernetes/serial/ProfileList (1.57s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-linux-amd64 profile list
no_kubernetes_test.go:169: (dbg) Done: out/minikube-linux-amd64 profile list: (1.005402926s)
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-linux-amd64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (1.57s)

                                                
                                    
TestNoKubernetes/serial/Stop (1.28s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-linux-amd64 stop -p NoKubernetes-231031
no_kubernetes_test.go:158: (dbg) Done: out/minikube-linux-amd64 stop -p NoKubernetes-231031: (1.284244892s)
--- PASS: TestNoKubernetes/serial/Stop (1.28s)

                                                
                                    
TestNoKubernetes/serial/StartNoArgs (38.18s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-231031 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:191: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-231031 --driver=kvm2  --container-runtime=crio: (38.178898543s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (38.18s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.18s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-231031 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-231031 "sudo systemctl is-active --quiet service kubelet": exit status 1 (180.539121ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.18s)

                                                
                                    
TestPause/serial/Start (121.32s)

                                                
                                                
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p pause-957141 --memory=2048 --install-addons=false --wait=all --driver=kvm2  --container-runtime=crio
pause_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p pause-957141 --memory=2048 --install-addons=false --wait=all --driver=kvm2  --container-runtime=crio: (2m1.32417175s)
--- PASS: TestPause/serial/Start (121.32s)

                                                
                                    
TestStoppedBinaryUpgrade/Setup (2.27s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (2.27s)

                                                
                                    
TestStoppedBinaryUpgrade/Upgrade (123.06s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.26.0.4031118765 start -p stopped-upgrade-246118 --memory=2200 --vm-driver=kvm2  --container-runtime=crio
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.26.0.4031118765 start -p stopped-upgrade-246118 --memory=2200 --vm-driver=kvm2  --container-runtime=crio: (1m0.307378926s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.26.0.4031118765 -p stopped-upgrade-246118 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.26.0.4031118765 -p stopped-upgrade-246118 stop: (1.414007359s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-amd64 start -p stopped-upgrade-246118 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
E0731 17:52:40.054506   15259 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/functional-496242/client.crt: no such file or directory
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-amd64 start -p stopped-upgrade-246118 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (1m1.334018007s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (123.06s)
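The upgrade path being validated is: provision the profile with an older release binary, stop it, then start the same profile with the binary under test so it adopts the existing machine and config. A hedged sketch with placeholder binary names (the real test downloads v1.26.0 to a temporary path):

	# ./minikube-old and ./minikube-new are hypothetical names for the two binaries.
	./minikube-old start -p stopped-upgrade-246118 --memory=2200 --vm-driver=kvm2 --container-runtime=crio
	./minikube-old -p stopped-upgrade-246118 stop
	./minikube-new start -p stopped-upgrade-246118 --memory=2200 --alsologtostderr -v=1 --driver=kvm2 --container-runtime=crio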

                                                
                                    
TestNetworkPlugins/group/auto/Start (60.78s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p auto-985288 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p auto-985288 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2  --container-runtime=crio: (1m0.782515165s)
--- PASS: TestNetworkPlugins/group/auto/Start (60.78s)

                                                
                                    
TestStoppedBinaryUpgrade/MinikubeLogs (0.91s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-amd64 logs -p stopped-upgrade-246118
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (0.91s)

                                                
                                    
TestNetworkPlugins/group/kindnet/Start (80.8s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p kindnet-985288 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p kindnet-985288 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2  --container-runtime=crio: (1m20.799763639s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (80.80s)

                                                
                                    
TestNetworkPlugins/group/calico/Start (113.98s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p calico-985288 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2  --container-runtime=crio
E0731 17:54:05.346636   15259 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/addons-190022/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p calico-985288 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2  --container-runtime=crio: (1m53.976074105s)
--- PASS: TestNetworkPlugins/group/calico/Start (113.98s)

                                                
                                    
TestNetworkPlugins/group/auto/KubeletFlags (0.22s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p auto-985288 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.22s)

                                                
                                    
TestNetworkPlugins/group/auto/NetCatPod (10.3s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-985288 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6bc787d567-d75f7" [bab9339d-db53-4965-ab48-d49056de7f7a] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6bc787d567-d75f7" [bab9339d-db53-4965-ab48-d49056de7f7a] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 10.004196765s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (10.30s)

                                                
                                    
TestNetworkPlugins/group/auto/DNS (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-985288 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.16s)

                                                
                                    
TestNetworkPlugins/group/auto/Localhost (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-985288 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.14s)

                                                
                                    
TestNetworkPlugins/group/auto/HairPin (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-985288 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.13s)
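The auto group's last three steps above exercise DNS resolution, localhost reachability, and hairpin traffic through the netcat deployment; the later plugin groups repeat the same probes. A minimal Go sketch of those three checks, assuming kubectl is on PATH and the auto-985288 context and netcat deployment from this run still exist (this is not the suite's own helper):

package main

import (
	"fmt"
	"os/exec"
)

// Re-runs the three connectivity probes used by the network-plugin tests,
// via kubectl exec against the netcat deployment.
func main() {
	checks := map[string][]string{
		"dns":       {"nslookup", "kubernetes.default"},
		"localhost": {"/bin/sh", "-c", "nc -w 5 -i 5 -z localhost 8080"},
		"hairpin":   {"/bin/sh", "-c", "nc -w 5 -i 5 -z netcat 8080"},
	}
	for name, probe := range checks {
		args := append([]string{"--context", "auto-985288", "exec", "deployment/netcat", "--"}, probe...)
		out, err := exec.Command("kubectl", args...).CombinedOutput()
		fmt.Printf("%s: err=%v\n%s\n", name, err, out)
	}
}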

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/Start (78.02s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-flannel-985288 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-flannel-985288 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2  --container-runtime=crio: (1m18.020402207s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (78.02s)
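Unlike the named --cni modes used elsewhere in this group, the custom-flannel start above passes a path to a CNI manifest (testdata/kube-flannel.yaml). A sketch of driving the same start from Go with an external 15-minute ceiling mirroring --wait-timeout; the binary and manifest paths are the ones from this run and may differ elsewhere:

package main

import (
	"context"
	"log"
	"os"
	"os/exec"
	"time"
)

// Starts a profile with a custom CNI manifest, cancelling the command if it
// outlives the same 15m budget the test gives --wait-timeout.
func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 15*time.Minute)
	defer cancel()
	cmd := exec.CommandContext(ctx, "out/minikube-linux-amd64", "start",
		"-p", "custom-flannel-985288", "--memory=3072", "--wait=true",
		"--cni=testdata/kube-flannel.yaml", "--driver=kvm2", "--container-runtime=crio")
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	if err := cmd.Run(); err != nil {
		log.Fatalf("start failed: %v", err)
	}
}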

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:344: "kindnet-2kbcz" [6eb94161-d1e1-447e-8d3d-bc0164f05674] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.005829295s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)
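The ControllerPod step polls for a pod matching the plugin's label selector to become healthy within 10m. A roughly equivalent gate expressed as a single kubectl wait call rather than the suite's polling helper, assuming kubectl is on PATH and the kindnet-985288 context from this run exists:

package main

import (
	"fmt"
	"os/exec"
)

// Waits for the kindnet controller pod to report Ready, with the same 10m budget.
func main() {
	out, err := exec.Command("kubectl", "--context", "kindnet-985288",
		"wait", "--for=condition=ready", "pod",
		"-l", "app=kindnet", "-n", "kube-system", "--timeout=10m").CombinedOutput()
	fmt.Printf("%s\nerr=%v\n", out, err)
}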

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/KubeletFlags (0.2s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p kindnet-985288 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.20s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/NetCatPod (11.22s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-985288 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6bc787d567-7l5lx" [b7308c48-3c81-4e18-a1d1-6439f24da0b6] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6bc787d567-7l5lx" [b7308c48-3c81-4e18-a1d1-6439f24da0b6] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 11.004071598s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (11.22s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/DNS (0.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-985288 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.18s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/Localhost (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-985288 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.16s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/HairPin (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-985288 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.14s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/Start (98.78s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p bridge-985288 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p bridge-985288 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2  --container-runtime=crio: (1m38.784656313s)
--- PASS: TestNetworkPlugins/group/bridge/Start (98.78s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:344: "calico-node-d7wkk" [f9c3e8e1-c74b-4933-9e2c-bd87c4e19143] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.006639389s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/KubeletFlags (0.22s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p calico-985288 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.22s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/NetCatPod (10.27s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-985288 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6bc787d567-pmvqn" [193bfe18-82f0-48f9-ba4d-909697d5fbe6] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6bc787d567-pmvqn" [193bfe18-82f0-48f9-ba4d-909697d5fbe6] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 10.004935805s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (10.27s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/DNS (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-985288 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.17s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/Localhost (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-985288 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.13s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/HairPin (0.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-985288 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.12s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.49s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p custom-flannel-985288 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.49s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/NetCatPod (12.85s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-985288 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6bc787d567-c9czd" [fbbd1aae-6d78-4725-8454-05fba9f04b71] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6bc787d567-c9czd" [fbbd1aae-6d78-4725-8454-05fba9f04b71] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 12.004020305s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (12.85s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/Start (89.74s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p flannel-985288 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p flannel-985288 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2  --container-runtime=crio: (1m29.739981779s)
--- PASS: TestNetworkPlugins/group/flannel/Start (89.74s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/DNS (0.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-985288 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.18s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/Localhost (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-985288 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.14s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/HairPin (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-985288 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.17s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/Start (108.68s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p enable-default-cni-985288 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p enable-default-cni-985288 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2  --container-runtime=crio: (1m48.678828999s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (108.68s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/KubeletFlags (0.22s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p bridge-985288 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.22s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/NetCatPod (10.29s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-985288 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6bc787d567-gwk6s" [22377baa-59b5-4929-b7a4-122822839cd1] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6bc787d567-gwk6s" [22377baa-59b5-4929-b7a4-122822839cd1] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 10.003955333s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (10.29s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/DNS (0.2s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-985288 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.20s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/Localhost (0.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-985288 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.19s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/HairPin (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-985288 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.16s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:344: "kube-flannel-ds-zxgmj" [6141224e-e374-4342-8d1f-8230107d11b0] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.004876485s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/KubeletFlags (0.24s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p flannel-985288 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.24s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/NetCatPod (11.28s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-985288 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6bc787d567-9ttrl" [794c720e-c0a7-4a8b-a852-961e6434e077] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6bc787d567-9ttrl" [794c720e-c0a7-4a8b-a852-961e6434e077] Running
E0731 17:57:57.004999   15259 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/functional-496242/client.crt: no such file or directory
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 11.003459697s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (11.28s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/FirstStart (114.62s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-673754 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.0-beta.0
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-673754 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.0-beta.0: (1m54.620447076s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (114.62s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/DNS (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-985288 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.16s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/Localhost (0.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-985288 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.12s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/HairPin (0.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-985288 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.12s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/FirstStart (116.11s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-436067 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.30.3
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-436067 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.30.3: (1m56.108584601s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (116.11s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p enable-default-cni-985288 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.18s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/NetCatPod (9.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-985288 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6bc787d567-fh7f2" [da844e22-d121-4d70-a4c7-bb008f697e73] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6bc787d567-fh7f2" [da844e22-d121-4d70-a4c7-bb008f697e73] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 9.006379178s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (9.21s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/DNS (0.24s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-985288 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.24s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/Localhost (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-985288 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.17s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/HairPin (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-985288 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.16s)
E0731 18:27:57.005458   15259 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/functional-496242/client.crt: no such file or directory

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/FirstStart (69.82s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-094310 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.30.3
E0731 17:59:05.345977   15259 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/addons-190022/client.crt: no such file or directory
E0731 17:59:21.706309   15259 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/auto-985288/client.crt: no such file or directory
E0731 17:59:21.711546   15259 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/auto-985288/client.crt: no such file or directory
E0731 17:59:21.721800   15259 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/auto-985288/client.crt: no such file or directory
E0731 17:59:21.742065   15259 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/auto-985288/client.crt: no such file or directory
E0731 17:59:21.782392   15259 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/auto-985288/client.crt: no such file or directory
E0731 17:59:21.862796   15259 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/auto-985288/client.crt: no such file or directory
E0731 17:59:22.023345   15259 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/auto-985288/client.crt: no such file or directory
E0731 17:59:22.344217   15259 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/auto-985288/client.crt: no such file or directory
E0731 17:59:22.985210   15259 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/auto-985288/client.crt: no such file or directory
E0731 17:59:24.266014   15259 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/auto-985288/client.crt: no such file or directory
E0731 17:59:26.826872   15259 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/auto-985288/client.crt: no such file or directory
E0731 17:59:31.947972   15259 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/auto-985288/client.crt: no such file or directory
E0731 17:59:42.188851   15259 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/auto-985288/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-094310 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.30.3: (1m9.823289266s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (69.82s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/DeployApp (9.29s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-673754 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [da5fdf31-6093-4e84-baf1-ff5285f9798f] Pending
helpers_test.go:344: "busybox" [da5fdf31-6093-4e84-baf1-ff5285f9798f] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [da5fdf31-6093-4e84-baf1-ff5285f9798f] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 9.004621848s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-673754 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (9.29s)
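DeployApp applies testdata/busybox.yaml, waits for the pod to run, and finishes by reading the open-file limit inside it with `ulimit -n`. A sketch repeating that last assertion and parsing the result as an integer, assuming kubectl is on PATH and the no-preload-673754 context and busybox pod from this run still exist:

package main

import (
	"fmt"
	"os/exec"
	"strconv"
	"strings"
)

// Reads the open-file limit inside the busybox pod and parses it.
func main() {
	out, err := exec.Command("kubectl", "--context", "no-preload-673754",
		"exec", "busybox", "--", "/bin/sh", "-c", "ulimit -n").Output()
	if err != nil {
		fmt.Println("exec failed:", err)
		return
	}
	n, convErr := strconv.Atoi(strings.TrimSpace(string(out)))
	fmt.Printf("open-file limit: %d (parse err: %v)\n", n, convErr)
}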

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.93s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p no-preload-673754 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-673754 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.93s)
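The `addons enable metrics-server` call above overrides the addon's image and registry (--images / --registries), then the test describes the deployment to confirm the override. A small sketch reading the container image straight from the Deployment spec instead, assuming kubectl is on PATH and the no-preload-673754 context from this run still exists:

package main

import (
	"fmt"
	"os/exec"
)

// Prints the image the metrics-server Deployment was rendered with, so the
// fake.domain/echoserver override can be checked directly.
func main() {
	out, err := exec.Command("kubectl", "--context", "no-preload-673754",
		"-n", "kube-system", "get", "deploy", "metrics-server",
		"-o", "jsonpath={.spec.template.spec.containers[0].image}").Output()
	fmt.Printf("metrics-server image: %s (err=%v)\n", out, err)
}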

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/DeployApp (11.29s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-094310 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [fe7c4955-d32a-4b1f-80ba-a844d22ebf84] Pending
E0731 18:00:02.669634   15259 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/auto-985288/client.crt: no such file or directory
E0731 18:00:02.921824   15259 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/kindnet-985288/client.crt: no such file or directory
helpers_test.go:344: "busybox" [fe7c4955-d32a-4b1f-80ba-a844d22ebf84] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
E0731 18:00:05.482937   15259 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/kindnet-985288/client.crt: no such file or directory
helpers_test.go:344: "busybox" [fe7c4955-d32a-4b1f-80ba-a844d22ebf84] Running
E0731 18:00:10.603899   15259 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/kindnet-985288/client.crt: no such file or directory
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 11.004163661s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-094310 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (11.29s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/DeployApp (9.3s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-436067 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [038d8a35-32bb-4e9b-827e-926dfd1b3d35] Pending
helpers_test.go:344: "busybox" [038d8a35-32bb-4e9b-827e-926dfd1b3d35] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [038d8a35-32bb-4e9b-827e-926dfd1b3d35] Running
E0731 18:00:20.844533   15259 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/kindnet-985288/client.crt: no such file or directory
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 9.003698098s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-436067 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (9.30s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.98s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-094310 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-094310 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.98s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.91s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-436067 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-436067 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.91s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/SecondStart (646.65s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-673754 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.0-beta.0
E0731 18:02:29.831657   15259 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/custom-flannel-985288/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-673754 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.0-beta.0: (10m46.399048875s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-673754 -n no-preload-673754
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (646.65s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/SecondStart (593.79s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-094310 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.30.3
E0731 18:02:50.992392   15259 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/flannel-985288/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-094310 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.30.3: (9m53.534909815s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-094310 -n default-k8s-diff-port-094310
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (593.79s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/SecondStart (603.11s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-436067 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.30.3
E0731 18:02:54.569224   15259 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/bridge-985288/client.crt: no such file or directory
E0731 18:02:57.005307   15259 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/functional-496242/client.crt: no such file or directory
E0731 18:03:01.232656   15259 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/flannel-985288/client.crt: no such file or directory
E0731 18:03:19.836810   15259 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/calico-985288/client.crt: no such file or directory
E0731 18:03:21.713449   15259 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/flannel-985288/client.crt: no such file or directory
E0731 18:03:26.933278   15259 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/enable-default-cni-985288/client.crt: no such file or directory
E0731 18:03:26.938514   15259 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/enable-default-cni-985288/client.crt: no such file or directory
E0731 18:03:26.948761   15259 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/enable-default-cni-985288/client.crt: no such file or directory
E0731 18:03:26.969014   15259 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/enable-default-cni-985288/client.crt: no such file or directory
E0731 18:03:27.009436   15259 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/enable-default-cni-985288/client.crt: no such file or directory
E0731 18:03:27.089838   15259 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/enable-default-cni-985288/client.crt: no such file or directory
E0731 18:03:27.250257   15259 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/enable-default-cni-985288/client.crt: no such file or directory
E0731 18:03:27.570864   15259 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/enable-default-cni-985288/client.crt: no such file or directory
E0731 18:03:28.211514   15259 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/enable-default-cni-985288/client.crt: no such file or directory
E0731 18:03:29.492043   15259 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/enable-default-cni-985288/client.crt: no such file or directory
E0731 18:03:32.053093   15259 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/enable-default-cni-985288/client.crt: no such file or directory
E0731 18:03:35.529408   15259 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/bridge-985288/client.crt: no such file or directory
E0731 18:03:37.174019   15259 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/enable-default-cni-985288/client.crt: no such file or directory
E0731 18:03:47.414445   15259 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/enable-default-cni-985288/client.crt: no such file or directory
E0731 18:03:51.752774   15259 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/custom-flannel-985288/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-436067 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.30.3: (10m2.861191521s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-436067 -n embed-certs-436067
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (603.11s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/Stop (2.36s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p old-k8s-version-276459 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p old-k8s-version-276459 --alsologtostderr -v=3: (2.359682519s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (2.36s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.18s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-276459 -n old-k8s-version-276459
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-276459 -n old-k8s-version-276459: exit status 7 (64.381781ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-276459 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.18s)
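The status check above exits non-zero once the profile is stopped (exit status 7 with "Stopped" on stdout in this run), and the test treats that as acceptable before enabling the dashboard addon. A sketch showing how that exit code can be read from the process error in Go, assuming the minikube binary and the old-k8s-version-276459 profile from this run are present:

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

// Runs `minikube status --format={{.Host}}` and reports the exit code;
// in this run a stopped host surfaced as exit status 7.
func main() {
	cmd := exec.Command("out/minikube-linux-amd64", "status",
		"--format={{.Host}}", "-p", "old-k8s-version-276459")
	out, err := cmd.Output()
	var exitErr *exec.ExitError
	if errors.As(err, &exitErr) {
		fmt.Printf("status %q, exit code %d (tolerated: host may simply be stopped)\n",
			out, exitErr.ExitCode())
		return
	}
	fmt.Printf("status %q, err=%v\n", out, err)
}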

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/FirstStart (48.71s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-094683 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.0-beta.0
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-094683 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.0-beta.0: (48.713794507s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (48.71s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/DeployApp (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.2s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-094683 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-094683 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.200531125s)
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.20s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/Stop (11.32s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p newest-cni-094683 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p newest-cni-094683 --alsologtostderr -v=3: (11.324125472s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (11.32s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.18s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-094683 -n newest-cni-094683
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-094683 -n newest-cni-094683: exit status 7 (64.501061ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p newest-cni-094683 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.18s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/SecondStart (35.16s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-094683 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.0-beta.0
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-094683 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.0-beta.0: (34.901492444s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-094683 -n newest-cni-094683
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (35.16s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.23s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-094683 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240715-f6ad1f6e
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.23s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/Pause (2.26s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p newest-cni-094683 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-094683 -n newest-cni-094683
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-094683 -n newest-cni-094683: exit status 2 (226.516412ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-094683 -n newest-cni-094683
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-094683 -n newest-cni-094683: exit status 2 (223.452763ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p newest-cni-094683 --alsologtostderr -v=1
E0731 18:29:21.706214   15259 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19349-8084/.minikube/profiles/auto-985288/client.crt: no such file or directory
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-094683 -n newest-cni-094683
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-094683 -n newest-cni-094683
--- PASS: TestStartStop/group/newest-cni/serial/Pause (2.26s)

                                                
                                    

Test skip (40/320)

Order skipped test Duration
5 TestDownloadOnly/v1.20.0/cached-images 0
6 TestDownloadOnly/v1.20.0/binaries 0
7 TestDownloadOnly/v1.20.0/kubectl 0
14 TestDownloadOnly/v1.30.3/cached-images 0
15 TestDownloadOnly/v1.30.3/binaries 0
16 TestDownloadOnly/v1.30.3/kubectl 0
23 TestDownloadOnly/v1.31.0-beta.0/cached-images 0
24 TestDownloadOnly/v1.31.0-beta.0/binaries 0
25 TestDownloadOnly/v1.31.0-beta.0/kubectl 0
29 TestDownloadOnlyKic 0
38 TestAddons/serial/Volcano 0
47 TestAddons/parallel/Olm 0
57 TestDockerFlags 0
60 TestDockerEnvContainerd 0
62 TestHyperKitDriverInstallOrUpdate 0
63 TestHyperkitDriverSkipUpgrade 0
114 TestFunctional/parallel/DockerEnv 0
115 TestFunctional/parallel/PodmanEnv 0
135 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.01
136 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0.01
137 TestFunctional/parallel/TunnelCmd/serial/WaitService 0.01
138 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0.01
139 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig 0.01
140 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 0.01
141 TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS 0.01
142 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.01
163 TestGvisorAddon 0
185 TestImageBuild 0
212 TestKicCustomNetwork 0
213 TestKicExistingNetwork 0
214 TestKicCustomSubnet 0
215 TestKicStaticIP 0
247 TestChangeNoneUser 0
250 TestScheduledStopWindows 0
252 TestSkaffold 0
254 TestInsufficientStorage 0
258 TestMissingContainerUpgrade 0
263 TestNetworkPlugins/group/kubenet 2.83
272 TestNetworkPlugins/group/cilium 3.1
278 TestStartStop/group/disable-driver-mounts 0.17
x
+
TestDownloadOnly/v1.20.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.20.0/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.20.0/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.20.0/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.30.3/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.3/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.30.3/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.30.3/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.3/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.30.3/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.30.3/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.3/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.30.3/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.0-beta.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.0-beta.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.31.0-beta.0/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.0-beta.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.0-beta.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.31.0-beta.0/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.0-beta.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.0-beta.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.31.0-beta.0/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnlyKic (0s)

                                                
                                                
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:220: skipping, only for docker or podman driver
--- SKIP: TestDownloadOnlyKic (0.00s)

                                                
                                    
x
+
TestAddons/serial/Volcano (0s)

                                                
                                                
=== RUN   TestAddons/serial/Volcano
addons_test.go:879: skipping: crio not supported
--- SKIP: TestAddons/serial/Volcano (0.00s)

                                                
                                    
x
+
TestAddons/parallel/Olm (0s)

                                                
                                                
=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Olm
addons_test.go:500: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

                                                
                                    
x
+
TestDockerFlags (0s)

                                                
                                                
=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing crio
--- SKIP: TestDockerFlags (0.00s)

                                                
                                    
x
+
TestDockerEnvContainerd (0s)

                                                
                                                
=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with crio false linux amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

                                                
                                    
x
+
TestHyperKitDriverInstallOrUpdate (0s)

                                                
                                                
=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

                                                
                                    
x
+
TestHyperkitDriverSkipUpgrade (0s)

                                                
                                                
=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/DockerEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:459: only validate docker env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/PodmanEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:546: only validate podman env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

                                                
                                    
x
+
TestGvisorAddon (0s)

                                                
                                                
=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

                                                
                                    
x
+
TestImageBuild (0s)

                                                
                                                
=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

                                                
                                    
x
+
TestKicCustomNetwork (0s)

                                                
                                                
=== RUN   TestKicCustomNetwork
kic_custom_network_test.go:34: only runs with docker driver
--- SKIP: TestKicCustomNetwork (0.00s)

                                                
                                    
x
+
TestKicExistingNetwork (0s)

                                                
                                                
=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:73: only runs with docker driver
--- SKIP: TestKicExistingNetwork (0.00s)

                                                
                                    
x
+
TestKicCustomSubnet (0s)

                                                
                                                
=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:102: only runs with docker/podman driver
--- SKIP: TestKicCustomSubnet (0.00s)

                                                
                                    
x
+
TestKicStaticIP (0s)

                                                
                                                
=== RUN   TestKicStaticIP
kic_custom_network_test.go:123: only run with docker/podman driver
--- SKIP: TestKicStaticIP (0.00s)

                                                
                                    
x
+
TestChangeNoneUser (0s)

                                                
                                                
=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

                                                
                                    
x
+
TestScheduledStopWindows (0s)

                                                
                                                
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

                                                
                                    
x
+
TestSkaffold (0s)

                                                
                                                
=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing crio container runtime
--- SKIP: TestSkaffold (0.00s)

                                                
                                    
x
+
TestInsufficientStorage (0s)

                                                
                                                
=== RUN   TestInsufficientStorage
status_test.go:38: only runs with docker driver
--- SKIP: TestInsufficientStorage (0.00s)

                                                
                                    
x
+
TestMissingContainerUpgrade (0s)

                                                
                                                
=== RUN   TestMissingContainerUpgrade
version_upgrade_test.go:284: This test is only for Docker
--- SKIP: TestMissingContainerUpgrade (0.00s)

                                                
                                    
x
+
TestNetworkPlugins/group/kubenet (2.83s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as crio container runtimes requires CNI
panic.go:626: 
----------------------- debugLogs start: kubenet-985288 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-985288

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-985288

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-985288

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-985288

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-985288

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-985288

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-985288

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-985288

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-985288

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-985288

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "kubenet-985288" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-985288"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "kubenet-985288" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-985288"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "kubenet-985288" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-985288"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-985288

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "kubenet-985288" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-985288"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "kubenet-985288" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-985288"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "kubenet-985288" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "kubenet-985288" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "kubenet-985288" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "kubenet-985288" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "kubenet-985288" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "kubenet-985288" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "kubenet-985288" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "kubenet-985288" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "kubenet-985288" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-985288"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "kubenet-985288" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-985288"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "kubenet-985288" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-985288"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "kubenet-985288" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-985288"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "kubenet-985288" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-985288"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-985288" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-985288" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "kubenet-985288" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "kubenet-985288" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-985288"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "kubenet-985288" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-985288"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "kubenet-985288" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-985288"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-985288" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-985288"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-985288" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-985288"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-985288

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "kubenet-985288" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-985288"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "kubenet-985288" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-985288"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "kubenet-985288" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-985288"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "kubenet-985288" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-985288"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "kubenet-985288" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-985288"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "kubenet-985288" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-985288"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-985288" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-985288"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-985288" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-985288"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "kubenet-985288" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-985288"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "kubenet-985288" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-985288"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "kubenet-985288" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-985288"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-985288" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-985288"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "kubenet-985288" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-985288"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "kubenet-985288" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-985288"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "kubenet-985288" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-985288"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "kubenet-985288" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-985288"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "kubenet-985288" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-985288"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "kubenet-985288" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-985288"

                                                
                                                
----------------------- debugLogs end: kubenet-985288 [took: 2.700062386s] --------------------------------
helpers_test.go:175: Cleaning up "kubenet-985288" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubenet-985288
--- SKIP: TestNetworkPlugins/group/kubenet (2.83s)

                                                
                                    
x
+
TestNetworkPlugins/group/cilium (3.1s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:626: 
----------------------- debugLogs start: cilium-985288 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-985288

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-985288

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-985288

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-985288

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-985288

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-985288

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-985288

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-985288

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-985288

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-985288

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "cilium-985288" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-985288"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "cilium-985288" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-985288"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "cilium-985288" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-985288"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-985288

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "cilium-985288" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-985288"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "cilium-985288" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-985288"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "cilium-985288" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "cilium-985288" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "cilium-985288" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "cilium-985288" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "cilium-985288" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "cilium-985288" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "cilium-985288" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "cilium-985288" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "cilium-985288" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-985288"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "cilium-985288" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-985288"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "cilium-985288" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-985288"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "cilium-985288" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-985288"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "cilium-985288" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-985288"

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-985288

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-985288

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-985288" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-985288" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-985288

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-985288

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-985288" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-985288" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "cilium-985288" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "cilium-985288" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "cilium-985288" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "cilium-985288" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-985288"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "cilium-985288" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-985288"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "cilium-985288" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-985288"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-985288" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-985288"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-985288" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-985288"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-985288

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "cilium-985288" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-985288"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "cilium-985288" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-985288"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "cilium-985288" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-985288"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "cilium-985288" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-985288"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "cilium-985288" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-985288"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "cilium-985288" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-985288"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-985288" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-985288"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-985288" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-985288"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "cilium-985288" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-985288"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "cilium-985288" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-985288"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "cilium-985288" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-985288"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-985288" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-985288"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "cilium-985288" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-985288"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "cilium-985288" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-985288"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "cilium-985288" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-985288"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "cilium-985288" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-985288"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "cilium-985288" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-985288"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "cilium-985288" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-985288"

                                                
                                                
----------------------- debugLogs end: cilium-985288 [took: 2.960464654s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-985288" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cilium-985288
--- SKIP: TestNetworkPlugins/group/cilium (3.10s)

                                                
                                    
x
+
TestStartStop/group/disable-driver-mounts (0.17s)

                                                
                                                
=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

                                                
                                                

                                                
                                                
=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-280161" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p disable-driver-mounts-280161
--- SKIP: TestStartStop/group/disable-driver-mounts (0.17s)

                                                
                                    